#60 Geometric Deep Learning Blueprint (Special Edition)
Patreon: https://www.patreon.com/mlst
Discord: https://discord.gg/ESrGqhf5CB
"Symmetry, as wide or narrow as you may define its meaning, is one idea by which man through the ages has tried to comprehend and create order, beauty, and perfection." and that was a quote from Hermann Weyl, a German mathematician who was born in the late 19th century.
The last decade has witnessed an experimental revolution in data science and machine learning, epitomised by deep learning methods. Many high-dimensional learning tasks previously thought to be beyond reach -- such as computer vision, playing Go, or protein folding -- are in fact tractable given enough computational horsepower. Remarkably, the essence of deep learning is built from two simple algorithmic principles: first, the notion of representation or feature learning and second, learning by local gradient-descent type methods, typically implemented as backpropagation.
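As a toy illustration of those two principles (our sketch, not something discussed in the episode), the snippet below trains a tiny two-layer network in NumPy: the hidden layer learns a feature representation, and the weights are updated by local gradient descent with gradients computed via backpropagation. All names and values are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))                        # toy inputs
y = (X[:, 0] * X[:, 1] > 0).astype(float)[:, None]   # XOR-like target

W1 = rng.normal(size=(2, 16)) * 0.5
W2 = rng.normal(size=(16, 1)) * 0.5
lr = 0.1

for step in range(2000):
    # forward pass: learned features, then a linear readout with a sigmoid
    H = np.tanh(X @ W1)                       # representation / feature learning
    p = 1.0 / (1.0 + np.exp(-(H @ W2)))       # predicted probability
    # backward pass: backpropagation of the cross-entropy gradient
    dlogits = (p - y) / len(X)
    dW2 = H.T @ dlogits
    dH = dlogits @ W2.T
    dW1 = X.T @ (dH * (1.0 - H**2))           # tanh' = 1 - tanh^2
    # local gradient-descent update
    W1 -= lr * dW1
    W2 -= lr * dW2
```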
While learning generic functions in high dimensions is a cursed estimation problem, many tasks are not uniform and have strong repeating patterns as a result of the low-dimensionality and structure of the physical world.
Geometric Deep Learning unifies a broad class of ML problems from the perspectives of symmetry and invariance. These principles not only underlie the breakthrough performance of convolutional neural networks and the recent success of graph neural networks but also provide a principled way to construct new types of problem-specific inductive biases.
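To make the blueprint concrete, here is a minimal sketch (our illustration, not code from the proto-book) of a permutation-equivariant message-passing layer, the kind of symmetry-respecting building block discussed for graphs: because neighbour features are aggregated with a sum, relabelling the nodes only relabels the output in the same way. The function name simple_mp_layer and the shapes are hypothetical.

```python
import numpy as np

def simple_mp_layer(X, A, W_self, W_neigh):
    """One permutation-equivariant message-passing layer (illustrative sketch).

    X       : (n, d) node feature matrix
    A       : (n, n) adjacency matrix (0/1, zero diagonal)
    W_self  : (d, k) weights for each node's own features
    W_neigh : (d, k) weights for the summed neighbour features
    """
    messages = A @ X                                        # sum over neighbours
    return np.maximum(0, X @ W_self + messages @ W_neigh)   # ReLU nonlinearity

# Quick check of equivariance: f(PX, P A P^T) == P f(X, A) for a permutation P
rng = np.random.default_rng(0)
n, d, k = 5, 3, 4
X = rng.normal(size=(n, d))
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T                # symmetric, zero diagonal
W1, W2 = rng.normal(size=(d, k)), rng.normal(size=(d, k))
P = np.eye(n)[rng.permutation(n)]             # random permutation matrix
assert np.allclose(simple_mp_layer(P @ X, P @ A @ P.T, W1, W2),
                   P @ simple_mp_layer(X, A, W1, W2))
```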
This week we spoke with Professor Michael Bronstein (Head of Graph ML at Twitter), Dr. Petar Veličković (Senior Research Scientist at DeepMind), Dr. Taco Cohen, and Prof. Joan Bruna about their new proto-book Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges.
Enjoy the show!
Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges
https://arxiv.org/abs/2104.13478
[00:00:00] Tim Intro
[00:01:55] Fabian Fuchs article
[00:04:05] High dimensional learning and curse
[00:05:33] Inductive priors
[00:07:55] The proto book
[00:09:37] The domains of geometric deep learning
[00:10:03] Symmetries
[00:12:03] The blueprint
[00:13:30] NNs don't deal with network structure (TEDx)
[00:14:26] Penrose - standing edition
[00:15:29] Past decade revolution (ICLR)
[00:16:34] Talking about the blueprint
[00:17:11] Interpolated nature of DL / intelligence
[00:21:29] Going back to Euclid
[00:22:42] Erlangen program
[00:24:56] How is geometric deep learning going to have an impact?
[00:26:36] Introduce Michael and Petar
[00:28:35] Petar Intro
[00:32:52] Algorithmic reasoning
[00:36:16] Thinking fast and slow (Petar)
[00:38:12] Taco Intro
[00:46:52] Deep learning is the craze now (Petar)
[00:48:38] On convolutions (Taco)
[00:53:17] Joan Bruna's voyage into geometric deep learning
[00:56:51] What is your most passionately held belief about machine learning? (Bronstein)
[00:57:57] Is the universal approximation theorem still useful? (Bruna)
[01:11:52] Could an NN learn a sorting algorithm efficiently? (Bruna)
[01:17:08] Curse of dimensionality / manifold hypothesis (Bronstein)
[01:25:17] Will we ever understand approximation of deep neural networks (Bruna)
[01:29:01] Can NNs extrapolate outside of the training data? (Bruna)
[01:31:21] What areas of math are needed for geometric deep learning? (Bruna)
[01:32:18] Graphs are really useful for representing most natural data (Petar)
[01:35:09] What was your biggest early aha moment? (Bronstein)
[01:39:04] What gets you most excited? (Bronstein)
[01:39:46] Main show kick off + Conservation laws
[01:49:10] Graphs are king
[01:52:44] Vector spaces vs discrete
[02:00:08] Does language have a geometry? Which domains can geometry not be applied to? + Category theory
[02:04:21] Abstract categories in language from graph learning
[02:07:10] Reasoning and extrapolation in knowledge graphs
[02:15:36] Transformers are graph neural networks?
[02:21:31] Tim never liked positional embeddings
[02:24:13] Is the case for invariance overblown? Could it actually be harmful?
[02:31:24] Why is geometry a good prior?
[02:34:28] Augmentations vs architecture and on learning approximate invariance
[02:37:04] Data augmentation vs symmetries (Taco)
[02:40:37] Could symmetries be harmful? (Taco)
[02:47:43] Discovering group structure (from Yannic)
[02:49:36] Are fractals a good analogy for physical reality?
[02:52:50] Is physical reality high dimensional or not?
[02:54:30] Heuristics which deal with permutation blowups in GNNs
[02:59:46] Practical blueprint of building a geometric network architecture
[03:01:50] Symmetry discovering procedures
[03:04:05] How could real world data scientists benefit from geometric DL?
[03:07:17] Most important problem to solve in message passing in GNNs
[03:09:09] Better RL sample efficiency as a result of geometric DL (XLVIN paper)
[03:14:02] Geometric DL helping latent graph learning
[03:17:07] On intelligence
[03:23:52] Convolutions on irregular objects (Taco)