CS224W: Machine Learning with Graphs | 2021 | Lecture 7.3 - Stacking layers of a GNN
For more information about Stanford’s Artificial Intelligence professional and graduate programs, visit: https://stanford.io/3BcmeEA
Jure Leskovec
Computer Science, PhD
Having defined a single GNN layer, the next design step is how to stack GNN layers together. To motivate different ways of stacking layers, we first introduce the issue of over-smoothing: when too many layers are stacked, the receptive fields of different nodes overlap almost entirely, so their embeddings converge to similar values and stop being meaningful. Over-smoothing teaches two lessons: (1) we should be cautious when adding GNN layers; (2) we can add skip connections in GNNs to alleviate the over-smoothing problem. When the number of GNN layers is kept small, we can instead enhance the expressive power of the GNN by making the message and aggregation computations within each layer deep neural networks, or by adding pre-processing and post-processing layers around the GNN layers (see the sketch below).
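As a concrete illustration of these ideas, here is a minimal sketch using PyTorch Geometric's GCNConv: a shallow GNN that combines skip connections with linear pre- and post-processing layers. The class name SkipGNN and all dimensions are illustrative choices, not taken from the lecture.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class SkipGNN(torch.nn.Module):
    """Sketch: a few GNN layers with skip connections, wrapped in
    pre-/post-processing linear layers (names are hypothetical)."""
    def __init__(self, in_dim, hidden_dim, out_dim, num_layers=3):
        super().__init__()
        self.pre = torch.nn.Linear(in_dim, hidden_dim)    # pre-processing layer
        self.convs = torch.nn.ModuleList(
            [GCNConv(hidden_dim, hidden_dim) for _ in range(num_layers)]
        )
        self.post = torch.nn.Linear(hidden_dim, out_dim)  # post-processing layer

    def forward(self, x, edge_index):
        h = F.relu(self.pre(x))
        for conv in self.convs:
            # Skip connection: add each layer's input back to its output,
            # helping to mitigate over-smoothing as depth grows.
            h = F.relu(conv(h, edge_index)) + h
        return self.post(h)

# Toy usage: 10 nodes with 16 features, one undirected edge, 7 output classes.
model = SkipGNN(in_dim=16, hidden_dim=64, out_dim=7)
x = torch.randn(10, 16)
edge_index = torch.tensor([[0, 1], [1, 0]])
out = model(x, edge_index)  # shape: [10, 7]
```

Keeping the message-passing depth small while adding capacity through the pre- and post-processing layers and the residual additions reflects the lesson above: extra expressiveness without the over-smoothing that deeper stacks of GNN layers would cause.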
To follow along with the course schedule and syllabus, visit:
http://web.stanford.edu/class/cs224w/