In this video we go through backward propagation calculations for a feedforward neural network!
In this first video we go through the necessary notation for the mathematical calculations of both forward and backward propagation.
As I said in the video, if you have never heard of neural networks before but still want to learn, and wonder where to start to gain some understanding and intuition, here is a great place:
https://www.youtube.com/watch?v=aircAruvnKk
3blue1brown has more videos on neural networks that I also recommend you watch!
- Dino
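To make the notation concrete, here is a minimal sketch of forward and backward propagation for a single sigmoid neuron with a squared-error loss (the function names and the specific numbers are illustrative, not from the video):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(w, b, x):
    z = w * x + b          # pre-activation
    a = sigmoid(z)         # activation
    return z, a

def loss(a, y):
    return 0.5 * (a - y) ** 2

def backward(w, b, x, y):
    """Chain rule: dL/dw = (dL/da) * (da/dz) * (dz/dw)."""
    z, a = forward(w, b, x)
    dL_da = a - y               # derivative of the squared-error loss
    da_dz = a * (1 - a)         # derivative of the sigmoid
    delta = dL_da * da_dz       # dL/dz
    return delta * x, delta     # dL/dw, dL/db
```

A quick sanity check is to compare the analytic gradient against a finite-difference estimate, which is also a useful habit when debugging backprop by hand.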
Explanation and implementation of the interval scheduling problem using a greedy algorithm.
CODE REPOSITORY: https://github.com/AladdinPerz....on/Algorithms-Collec
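The greedy idea is short enough to sketch here: sort by finish time and repeatedly take the earliest-finishing interval that doesn't overlap what you've already picked (this is a generic sketch, not the repository code):

```python
def interval_schedule(intervals):
    """Greedy interval scheduling: sort by finish time, then keep every
    interval that starts after the last selected one finishes."""
    selected = []
    last_finish = float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:
            selected.append((start, finish))
            last_finish = finish
    return selected
```

Sorting dominates the cost, so the whole thing runs in O(n log n).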
In this video I explain how Word2Vec works and its two model variants, Continuous Bag of Words (CBOW) and Skip-Gram. I also give an intuitive understanding of what embeddings are and why they are important, as learning them is fundamentally what this algorithm is trying to do.
Timestamps:
0:00 - Introduction to Word2vec
0:54 - Understanding Embeddings
5:20 - CBOW model of Word2Vec
8:46 - Skip-Gram model of Word2Vec
9:34 - Outro
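As a rough sketch of the CBOW variant covered above: average the context words' input embeddings, then score every vocabulary word against that average (the matrices and sizes below are made up for illustration):

```python
import math

def cbow_forward(context_ids, W_in, W_out):
    """CBOW forward pass: average the context word embeddings, score each
    vocabulary word by dot product with its output embedding, softmax."""
    dim = len(W_in[0])
    h = [sum(W_in[w][d] for w in context_ids) / len(context_ids)
         for d in range(dim)]                       # averaged context vector
    scores = [sum(h[d] * row[d] for d in range(dim)) for row in W_out]
    m = max(scores)                                 # stabilized softmax
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]                # P(center word | context)
```

Skip-Gram flips this around: it takes the center word's embedding and predicts each context word from it.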
This is my solution to the costFunction.m function in Programming Assignment 2 from the famous Machine Learning course by Andrew Ng.
Github: https://github.com/AladdinPerz....on/Courses/tree/mast
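For reference, the quantity costFunction.m computes is the logistic-regression cost and its gradient; here is the same computation sketched in plain Python rather than Octave (vectorization is omitted for clarity):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def cost_function(theta, X, y):
    """Logistic-regression cost J(theta) and gradient.
    X is a list of feature rows (including the intercept term),
    y is a list of 0/1 labels."""
    m = len(y)
    J = 0.0
    grad = [0.0] * len(theta)
    for xi, yi in zip(X, y):
        h = sigmoid(sum(t * x for t, x in zip(theta, xi)))
        J += -yi * math.log(h) - (1 - yi) * math.log(1 - h)
        for j, xj in enumerate(xi):
            grad[j] += (h - yi) * xj
    return J / m, [g / m for g in grad]
```

With theta initialized to zeros every prediction is 0.5, so the cost comes out to log 2 — a handy sanity check the assignment itself uses.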
In this video we do the math for forward propagation in generalized notation!
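The generalized notation boils down to repeating z = W a + b followed by an activation, layer by layer; a minimal sketch (list-based, with sigmoid activations chosen for illustration):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward_pass(x, weights, biases):
    """For each layer l: z^(l) = W^(l) a^(l-1) + b^(l), a^(l) = sigmoid(z^(l)),
    starting from a^(0) = x."""
    a = x
    for W, b in zip(weights, biases):
        z = [sum(w * ai for w, ai in zip(row, a)) + bi
             for row, bi in zip(W, b)]
        a = [sigmoid(zi) for zi in z]
    return a
```

The intermediate z and a values are exactly what the backward pass later reuses.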
Comparison of several examples using the same prompt for generation.
Timestamps:
0:00 - Intro
0:49 - Prompt 1
2:30 - Prompt 2
3:47 - Prompt 3
4:39 - Prompt 4
5:32 - Prompt 5
6:24 - Prompt 6
7:34 - Prompt 7
9:32 - Other comparisons
12:23 - Winner
Implementation of the weighted interval scheduling problem in Python using dynamic programming. I try to keep the code as clean as possible and hopefully it's crystal clear for you guys!
Code repository:
https://github.com/AladdinPerz....on/Algorithms-Collec
Explanation of algorithm: https://youtu.be/iIX1YvbLbvc
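The core recurrence is OPT(j) = max(OPT(j-1), w_j + OPT(p(j))), where p(j) is the last interval finishing no later than interval j starts; a compact sketch (not the repository code):

```python
import bisect

def weighted_interval_scheduling(intervals):
    """intervals: list of (start, finish, weight) tuples.
    Returns the maximum total weight of mutually compatible intervals."""
    intervals = sorted(intervals, key=lambda iv: iv[1])  # sort by finish
    finishes = [f for _, f, _ in intervals]
    opt = [0] * (len(intervals) + 1)
    for j, (s, f, w) in enumerate(intervals, start=1):
        # p = number of earlier intervals that finish by time s
        p = bisect.bisect_right(finishes, s, 0, j - 1)
        opt[j] = max(opt[j - 1],      # skip interval j
                     w + opt[p])      # take interval j
    return opt[-1]
```

With binary search for p(j) the total running time is O(n log n).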
With all the juicy drama this week I hope you brought popcorn 🍿
In this video I walk through a general text generator based on a character-level RNN, implemented with an LSTM in PyTorch, applied to generating new baby names.
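The overall shape of such a model is small enough to sketch; the class below is illustrative (the names, sizes, and hyperparameters are mine, not the ones used in the video):

```python
import torch
import torch.nn as nn

class CharRNN(nn.Module):
    """A minimal character-level LSTM: embed each character, run the
    LSTM, and predict logits over the next character at every step."""
    def __init__(self, vocab_size, embed_size=32, hidden_size=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.lstm = nn.LSTM(embed_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, vocab_size)

    def forward(self, x, state=None):
        out, state = self.lstm(self.embed(x), state)
        return self.fc(out), state   # logits over the next character

model = CharRNN(vocab_size=28)            # e.g. 26 letters + start/end tokens
x = torch.randint(0, 28, (4, 10))         # batch of 4 sequences, length 10
logits, _ = model(x)                      # shape (4, 10, 28)
```

Generation then amounts to sampling from the softmax of the last step's logits and feeding the sampled character back in.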
People often ask what courses are great for getting into ML/DL, and the two I started with are the ML course and the DL Specialization, both by Andrew Ng. Below you'll find both affiliate and non-affiliate links if you want to check them out. The pricing for you is the same, but a small commission goes back to the channel if you buy through the affiliate link.
ML Course (affiliate): https://bit.ly/3qq20Sx
DL Specialization (affiliate): https://bit.ly/30npNrw
ML Course (no affiliate): https://bit.ly/3t8JqA9
DL Specialization (no affiliate): https://bit.ly/3t8JqA9
GitHub Repository:
https://github.com/aladdinpers....son/Machine-Learning
✅ Equipment I use and recommend:
https://www.amazon.com/shop/aladdinpersson
❤️ Become a Channel Member:
https://www.youtube.com/channe....l/UCkzW5JSFwvKRjXABI
✅ One-Time Donations:
Paypal: https://bit.ly/3buoRYH
Ethereum: 0xc84008f43d2E0bC01d925CC35915CdE92c2e99dc
▶️ You Can Connect with me on:
Twitter - https://twitter.com/aladdinpersson
LinkedIn - https://www.linkedin.com/in/al....addin-persson-a95384
GitHub - https://github.com/aladdinpersson
PyTorch Playlist:
https://www.youtube.com/playli....st?list=PLhhyoLH6Ijf
In this video we go through how to implement a dynamic programming algorithm for solving the sequence alignment or edit distance problem. This is also referred to as the Needleman-Wunsch algorithm; it seems this algorithm goes by quite a few names :)
Code repository:
https://github.com/AladdinPerz....on/Algorithms-Collec
I recommend watching the explanation video before watching this implementation:
https://youtu.be/bQ7kRW6zo9Y
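The DP table the video builds can be sketched compactly: dp[i][j] is the minimum number of edits to turn the first i characters of one string into the first j of the other (this is a generic sketch, not the repository code):

```python
def edit_distance(a, b):
    """Classic edit-distance DP with unit costs for insert, delete,
    and substitute (a match costs nothing)."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                          # delete all of a[:i]
    for j in range(n + 1):
        dp[0][j] = j                          # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # delete a[i-1]
                           dp[i][j - 1] + 1,        # insert b[j-1]
                           dp[i - 1][j - 1] + sub)  # match / substitute
    return dp[m][n]
```

Needleman-Wunsch is the same table with alignment scores instead of edit costs.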
In this video we look at the datasets that are available to us through TensorFlow Datasets (tfds), how to load them, and then how to do preprocessing, shuffling, batching, prefetching, etc. As examples we load an image dataset (MNIST) and a text dataset (IMDB) and create simple models to do image classification and sentiment analysis.
New packages/imports we used in the video:
https://anaconda.org/conda-forge/matplotlib (pip install matplotlib)
https://anaconda.org/anaconda/tensorflow-datasets (pip install tensorflow_datasets)
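The shuffle/batch/prefetch pipeline discussed above looks like this; sketched here on synthetic tensors instead of a tfds download so it runs standalone (shapes and buffer sizes are illustrative):

```python
import tensorflow as tf

# Stand-in data in place of tfds.load("mnist"): 100 fake 28x28 images.
images = tf.random.uniform((100, 28, 28, 1))
labels = tf.random.uniform((100,), maxval=10, dtype=tf.int32)

ds = tf.data.Dataset.from_tensor_slices((images, labels))
ds = ds.shuffle(buffer_size=100)      # shuffle within a buffer
ds = ds.batch(32)                     # group examples into batches
ds = ds.prefetch(tf.data.AUTOTUNE)    # overlap input prep with training

first_images, first_labels = next(iter(ds))   # one batch of 32
```

With a real dataset you would replace the first two lines with `tfds.load(...)` and map your preprocessing before batching.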
I learned a lot and was inspired to make these TensorFlow videos by the TensorFlow Specialization on Coursera. Below you'll find both affiliate and non-affiliate links; the pricing for you is the same, but a small commission goes back to the channel if you buy through the affiliate link.
affiliate: https://bit.ly/3t3tgI5
non-affiliate: https://bit.ly/3kZgN5B
GitHub Repository:
https://github.com/aladdinpers....son/Machine-Learning
TensorFlow Playlist:
https://www.youtube.com/playli....st?list=PLhhyoLH6Ijf
OUTLINE:
0:00 - Introduction
0:16 - Keras vs TFDS vs tf.data
1:54 - Imports
2:38 - Image Loading with TFDS
12:59 - Text loading with TFDS
28:15 - Results and Ending
This is my solution to the predict.m function in Programming Assignment 3 from the famous Machine Learning course by Andrew Ng.
Github: https://github.com/AladdinPerz....on/Courses/tree/mast
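What predict.m computes is a feedforward pass through the trained two-layer network followed by an argmax over the outputs; a plain-Python sketch of the same idea (Octave's 1-based class labels are replaced by 0-based indices here):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(Theta1, Theta2, x):
    """Two-layer feedforward prediction. Each weight row includes a
    bias term in its first position, as in the assignment."""
    a1 = [1.0] + x                                          # input + bias
    a2 = [1.0] + [sigmoid(sum(t * a for t, a in zip(row, a1)))
                  for row in Theta1]                        # hidden + bias
    a3 = [sigmoid(sum(t * a for t, a in zip(row, a2)))
          for row in Theta2]                                # output layer
    return max(range(len(a3)), key=a3.__getitem__)          # argmax class
```

In the actual assignment this is vectorized over all examples at once, but the per-example logic is the same.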
Step-by-step explanation of how the sequence alignment algorithm works. Another common name for this algorithm is the Needleman-Wunsch algorithm. In the next video we will code this algorithm from scratch using Python. On LeetCode this algorithm can be found under "Edit Distance"; it seems this algorithm has many different names!
Code Repository:
https://github.com/AladdinPerz....on/Algorithms-Collec
In this tutorial we go through how an image captioning system works and implement one from scratch. Specifically we're looking at the caption dataset Flickr8k. There are multiple ways to improve the model: train a larger model (the one used is relatively small), train for longer, and add attention similar to this paper: https://arxiv.org/abs/1502.03044.
Video of dataset (link in that video description to download the dataset yourself):
https://youtu.be/9sHcLvVXsns
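The encoder-decoder structure used for captioning can be sketched as follows: a CNN compresses the image to a feature vector, which the LSTM consumes as the first "word" before the caption tokens. This is an illustrative miniature (the video uses a pretrained CNN and different sizes):

```python
import torch
import torch.nn as nn

class CaptionModel(nn.Module):
    """Minimal image captioner: tiny CNN encoder -> LSTM decoder that
    predicts next-word logits at every time step."""
    def __init__(self, vocab_size, embed_size=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, embed_size),
        )
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.lstm = nn.LSTM(embed_size, 128, batch_first=True)
        self.fc = nn.Linear(128, vocab_size)

    def forward(self, images, captions):
        feats = self.encoder(images).unsqueeze(1)     # (B, 1, E) image "word"
        words = self.embed(captions)                  # (B, T, E)
        inputs = torch.cat([feats, words], dim=1)     # image feature first
        out, _ = self.lstm(inputs)
        return self.fc(out)                           # (B, T+1, vocab)

model = CaptionModel(vocab_size=100)
images = torch.randn(2, 3, 64, 64)
captions = torch.randint(0, 100, (2, 5))
logits = model(images, captions)
```

Training minimizes cross-entropy between these logits and the shifted caption; at inference you feed the model's own sampled words back in, one step at a time.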
✅ Support My Channel Through Patreon:
https://www.patreon.com/aladdinpersson
PyTorch Playlist:
https://www.youtube.com/playli....st?list=PLhhyoLH6Ijf
Github Repository:
https://github.com/aladdinpers....son/Machine-Learning
I stole the thumbnail image from Yunjey's GitHub on image captioning, which I also used as a resource. The implementation in the video differs a bit, but it's definitely worth checking out:
https://github.com/yunjey/pytorch-tutorial
OUTLINE:
0:00 - Introduction
0:12 - Explanation of Image Captioning
05:15 - Overview of the code
06:07 - Implementation of CNN and RNN
20:03 - Setting up the training
30:36 - Fixing errors
32:18 - Small evaluation and ending