
Bayesian Deep Learning and a Probabilistic Perspective of Model Construction
ICML 2020 Tutorial

Bayesian inference is especially compelling for deep neural networks. The key distinguishing property of a Bayesian approach is marginalization instead of optimization. Neural networks are typically underspecified by the data and can represent many different but high-performing models corresponding to different settings of the parameters, which is exactly when marginalization makes the biggest difference for accuracy and calibration.
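
As a quick anchor for the "marginalization instead of optimization" point, the Bayesian model average in the associated paper forms predictions by integrating over the posterior rather than plugging in a single weight estimate (x is an input, y an output, w the network weights, D the training data; the Monte Carlo sum on the right is how practical methods approximate the integral):

p(y \mid x, \mathcal{D}) = \int p(y \mid x, w)\, p(w \mid \mathcal{D})\, dw \;\approx\; \frac{1}{M} \sum_{j=1}^{M} p(y \mid x, w_j), \qquad w_j \sim p(w \mid \mathcal{D})

Classical training instead predicts with p(y \mid x, \hat{w}) for a single optimized \hat{w}, which is the contrast drawn above.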

The tutorial has four parts:

Part 1: Introduction to Bayesian modelling and overview (Foundations, overview, Bayesian model averaging in deep learning, epistemic uncertainty, examples)

Part 2: The function-space view (Gaussian processes, infinite neural networks, training a neural network is kernel learning, Bayesian non-parametric deep learning)

Part 3: Practical methods for Bayesian deep learning (Loss landscapes, functional diversity in mode connectivity, SWAG, epistemic uncertainty, calibration, subspace inference, K-FAC Laplace, MC Dropout, stochastic MCMC, Bayes by Backprop, deep ensembles)

Part 4: Bayesian model construction and generalization (Deep ensembles, MultiSWAG, tempering, prior specification, posterior contraction, rethinking generalization, double descent, width-depth trade-offs, and more; a minimal ensemble sketch follows this list)
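
Deep ensembles appear in Parts 3 and 4, and MultiSWAG builds on the same recipe, so a minimal sketch of the ensemble-as-approximate-Bayesian-model-average idea may help fix intuition. The toy dataset, ensemble size M, and architecture below are illustrative assumptions, not details taken from the tutorial.

# A minimal sketch of a deep ensemble as a crude Monte Carlo approximation
# to the Bayesian model average, assuming scikit-learn is available.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=500, noise=0.25, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

M = 5  # ensemble size: each member plays the role of one posterior sample w_j
members = []
for seed in range(M):
    # Different random initializations land in different low-loss modes,
    # giving the functional diversity that makes averaging worthwhile.
    net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000,
                        random_state=seed)
    members.append(net.fit(X_train, y_train))

# Average predictive distributions, p(y|x,D) ~ (1/M) * sum_j p(y|x,w_j),
# not weights or hard labels.
probs = np.mean([m.predict_proba(X_test) for m in members], axis=0)
ensemble_acc = np.mean(probs.argmax(axis=1) == y_test)
single_acc = np.mean(members[0].predict(X_test) == y_test)
print(f"single model accuracy:   {single_acc:.3f}")
print(f"ensemble (BMA) accuracy: {ensemble_acc:.3f}")

Averaging the predictive distributions (predict_proba) rather than the predicted labels is the detail that connects this recipe to the calibration gains described above.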

Slides: https://cims.nyu.edu/~andrewgw/bayesdlicml2020.pdf
Associated Paper: "Bayesian Deep Learning and a Probabilistic Perspective of Generalization" (NeurIPS 2020)
https://arxiv.org/pdf/2002.08791.pdf


Thanks to Kevin Xia (Columbia) for help in preparing the video.
