WGAN implementation from scratch (with gradient penalty)
I was inspired to make these videos by this specialization: https://bit.ly/3SqLuA6
In this video we implement WGAN and WGAN-GP in PyTorch. Both methods modify the standard GAN loss function and focus specifically on improving the stability of training.
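The core WGAN-GP idea is to replace WGAN's weight clipping with a penalty on the critic's gradient norm at points interpolated between real and fake samples. A minimal sketch of that penalty term in PyTorch (the function name `gradient_penalty` and the flat-feature shapes are assumptions for illustration, not the exact code from the video):

```python
import torch

def gradient_penalty(critic, real, fake, device="cpu"):
    # Assumes real and fake have shape (batch_size, features);
    # for image tensors the eps broadcasting would need extra dims.
    batch_size = real.size(0)
    eps = torch.rand(batch_size, 1, device=device).expand_as(real)

    # Random interpolation between real and fake samples
    interpolated = eps * real + (1 - eps) * fake
    interpolated.requires_grad_(True)

    scores = critic(interpolated)

    # Gradient of critic scores w.r.t. the interpolated inputs
    grad = torch.autograd.grad(
        outputs=scores,
        inputs=interpolated,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,
    )[0]

    # Penalize deviation of the gradient L2 norm from 1
    grad_norm = grad.view(batch_size, -1).norm(2, dim=1)
    return ((grad_norm - 1) ** 2).mean()
```

In training, this term is scaled by a coefficient (lambda = 10 in the WGAN-GP paper) and added to the critic loss; plain WGAN instead clamps the critic's weights to a small range after each update.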
Resources and papers:
https://www.alexirpan.com/2017..../02/22/wasserstein-g
https://arxiv.org/abs/1701.07875
https://arxiv.org/abs/1704.00028
GitHub Repository:
https://github.com/aladdinpers....son/Machine-Learning
GAN Playlist:
https://youtube.com/playlist?l....ist=PLhhyoLH6IjfwIp8
PyTorch Playlist:
https://www.youtube.com/playli....st?list=PLhhyoLH6Ijf
OUTLINE:
0:00 - Introduction
0:27 - Understanding WGAN
6:53 - WGAN Implementation details
9:15 - Coding WGAN
15:50 - Understanding WGAN-GP
18:48 - Coding WGAN-GP
25:29 - Ending