Latent Space Visualisation: PCA, t-SNE, UMAP | Deep Learning Animated
In this video you will learn about three widely used methods for dimensionality reduction: PCA, t-SNE, and UMAP. They are especially useful when you want to visualise the latent space of an autoencoder.
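As a quick reference, here is a minimal sketch (not the video's Manim code) of how the three methods are typically applied to a batch of latent vectors, assuming scikit-learn and umap-learn are installed; the `latents` array is a hypothetical stand-in for encoder outputs.

```python
# Minimal sketch: reduce latent vectors to 2-D with PCA, t-SNE, and UMAP.
# `latents` is a hypothetical (n_samples, latent_dim) array standing in
# for the outputs of an autoencoder's encoder.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
import umap  # from the umap-learn package

rng = np.random.default_rng(0)
latents = rng.normal(size=(1000, 32))  # stand-in for encoder outputs

pca_2d = PCA(n_components=2).fit_transform(latents)
tsne_2d = TSNE(n_components=2, perplexity=30).fit_transform(latents)
umap_2d = umap.UMAP(n_components=2, n_neighbors=15).fit_transform(latents)
```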
If you want to learn more about these techniques, here are some key papers:
- UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction https://arxiv.org/abs/1802.03426
- Stochastic Neighbor Embedding https://papers.nips.cc/paper_f....iles/paper/2002/hash
- Visualizing Data using t-SNE https://www.jmlr.org/papers/vo....lume9/vandermaaten08
And if you want to learn about more recent techniques such as TriMap and PaCMAP, here are the papers:
- TriMap: Large-scale Dimensionality Reduction Using Triplets https://arxiv.org/abs/1910.00204
- PaCMAP: Understanding How Dimension Reduction Tools Work: An Empirical Approach to Deciphering t-SNE, UMAP, TriMAP, and PaCMAP for Data Visualization https://arxiv.org/abs/2012.04456
Chapters:
00:36 PCA
05:15 t-SNE
13:30 UMAP
18:02 Conclusion
This video features animations created with Manim, inspired by Grant Sanderson's work at @3blue1brown. Here is the code that I used to make this video: https://github.com/ytdeepia/La....tent-Space-Visualisa
If you enjoyed the content, please like, comment, and subscribe to support the channel!
#DeepLearning #PCA #ArtificialIntelligence #tsne #DataScience #LatentSpace #Manim #Tutorial #machinelearning #education #somepi