CS224W: Machine Learning with Graphs | 2021 | Lecture 7.3 - Stacking layers of a GNN


For more information about Stanford’s Artificial Intelligence professional and graduate programs, visit: https://stanford.io/3BcmeEA

Jure Leskovec
Computer Science, PhD

Having defined a GNN layer, the next design step is how to stack GNN layers together. To motivate different ways of stacking GNN layers, we first introduce the issue of over-smoothing, which prevents GNNs from learning meaningful node embeddings. We learn two lessons from the problem of over-smoothing: (1) we should be cautious when adding GNN layers; (2) we can add skip connections in GNNs to alleviate the over-smoothing problem. When the number of GNN layers is small, we can enhance the expressiveness of a GNN by making the message / aggregation computation within each layer a multi-layer network, or by adding pre-processing / post-processing layers to the GNN.
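As a rough illustration of these ideas (not code from the lecture), the sketch below stacks a few simple mean-aggregation GNN layers with additive skip connections and wraps them in pre- and post-processing MLPs. It uses plain PyTorch with a dense adjacency matrix, and the names MeanAggGNNLayer and SkipGNN are illustrative, not standard APIs.

```python
import torch
import torch.nn as nn

class MeanAggGNNLayer(nn.Module):
    """One simple GNN layer: mean-aggregate neighbor embeddings, then transform."""
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, x, adj):
        # adj: (N, N) adjacency matrix with self-loops; row-normalize for mean aggregation
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        agg = (adj @ x) / deg
        return torch.relu(self.linear(agg))

class SkipGNN(nn.Module):
    """A stack of GNN layers with skip connections, plus pre-/post-processing MLPs."""
    def __init__(self, in_dim, hidden_dim, out_dim, num_layers=3):
        super().__init__()
        # Pre-processing layer applied before any message passing
        self.pre = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.layers = nn.ModuleList(
            [MeanAggGNNLayer(hidden_dim) for _ in range(num_layers)]
        )
        # Post-processing layer applied after message passing
        self.post = nn.Linear(hidden_dim, out_dim)

    def forward(self, x, adj):
        h = self.pre(x)
        for layer in self.layers:
            # Skip connection: add the layer's input to its output,
            # which helps alleviate over-smoothing in deeper stacks
            h = h + layer(h, adj)
        return self.post(h)

# Toy usage: 5 nodes, 8 input features, symmetric adjacency with self-loops
x = torch.randn(5, 8)
adj = (torch.rand(5, 5) > 0.5).float()
adj = (((adj + adj.t()) > 0).float() + torch.eye(5)).clamp(max=1)
model = SkipGNN(in_dim=8, hidden_dim=16, out_dim=4)
out = model(x, adj)  # (5, 4) node embeddings
```

With only a few GNN layers, most of the model's capacity can sit in the pre- and post-processing MLPs, so node embeddings stay expressive without deepening the message-passing stack itself.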

To follow along with the course schedule and syllabus, visit:
http://web.stanford.edu/class/cs224w/
