Latest videos

Generative AI
0 Views · 3 months ago

GitHub repo: http://www.github.com/luisguis....errano/singular_valu

Grokking Machine Learning Book:
https://www.manning.com/books/....grokking-machine-lea
40% discount promo code: serranoyt

In this video, we learn a very useful matrix trick called singular value decomposition (SVD), in which we express a matrix as a product of two rotation matrices and one scaling matrix.
We also show a very interesting application to image compression.
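The decomposition described above can be sketched in a few lines of NumPy (a minimal illustration with a made-up 2×2 matrix, not code from the video): the full SVD reconstructs the matrix exactly, and keeping only the largest singular values gives the low-rank approximation behind image compression.

```python
import numpy as np

# Any matrix A factors as A = U @ diag(S) @ Vt:
# a rotation, a scaling, and another rotation.
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
U, S, Vt = np.linalg.svd(A)

# Reconstructing from all singular values recovers A exactly.
A_full = U @ np.diag(S) @ Vt

# Keeping only the largest singular value gives the best rank-1
# approximation -- the idea behind SVD image compression.
A_rank1 = S[0] * np.outer(U[:, 0], Vt[0, :])
```

For an image, the same trick applies to the pixel matrix: keeping the top k singular values stores far fewer numbers than the original while preserving most of the picture.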

Similar videos:
Principal component analysis (PCA): https://www.youtube.com/watch?v=g-Hb26agBFg
Matrix factorization and Netflix recommendations: https://www.youtube.com/watch?v=ZspR5PZemcs

Introduction: (0:00)
Transformations: (0:50)
A puzzle: (1:27)
A harder puzzle: (2:21)
Linear transformations: (3:50)
Dimensionality reduction: (10:50)
Image compression: (23:57)

Generative AI
1 Views · 3 months ago

This video shows how to detect whether a sequence is periodic and, if it is, find its period, using the Discrete Fourier Transform.
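The idea can be sketched with NumPy's FFT (a toy sequence of my own choosing, not one from the video): a sequence of length N with period p has a DFT that is zero everywhere except at multiples of N/p, so the spacing of the peaks reveals the period.

```python
import numpy as np

# A sequence with period 4, tiled to length N = 24.
x = np.tile([1.0, 5.0, 2.0, 3.0], 6)
N = len(x)
X = np.fft.fft(x)
mags = np.abs(X)

# Frequencies with significant magnitude (ignoring numerical noise).
# For a period-p sequence these fall at multiples of N / p.
peaks = np.nonzero(mags > 1e-9)[0]
period = N // peaks[peaks > 0].min()   # spacing of peaks gives the period
```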

This is the second in a series of videos about Fourier transforms.

- First video: The Discrete Fourier Transform https://www.youtube.com/watch?v=T8XZxR5H04E
- Second video: Detecting Periodicity with the DFT (this one)

Grokking Machine Learning Book:
https://www.manning.com/books/....grokking-machine-lea
40% discount promo code: serranoyt

00:00 Introduction
02:37 Periodic sequences
12:54 The inverse DFT

Generative AI
1 Views · 3 months ago

Correction (10:26). The probabilities are wrong. The correct ones are here:
For Die 1: 0.4^4 * 0.2^2 * 0.1^1 * 0.1^1 * 0.2^2
For Die 2: 0.4^4 * 0.1^2 * 0.2^1 * 0.2^1 * 0.1^2
For Die 3: 0.1^4 * 0.2^2 * 0.4^1 * 0.2^1 * 0.1^2

Kullback-Leibler (KL) divergence is a way to measure how far apart two distributions are.
In this video, we learn KL-divergence in a simple way, using a probability game with dice.
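The formula behind the dice game can be sketched directly (the two loaded dice below are made-up distributions for illustration, not the ones from the video):

```python
import numpy as np

# KL(P || Q) = sum_i P(i) * log(P(i) / Q(i))
def kl_divergence(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

# Two hypothetical loaded dice (distributions over four faces).
die_p = [0.4, 0.2, 0.2, 0.2]
die_q = [0.1, 0.2, 0.3, 0.4]

d = kl_divergence(die_p, die_q)   # positive; 0 only if the dice match
```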

Shannon entropy and information gain: https://www.youtube.com/watch?v=9r7FIXEAGvs&t=1066s&pp=ygUic2hhbm5vbiBlbnRyb3B5IGluZm9ybWF0aW9uIHRoZW9yeQ%3D%3D

Grokking Machine Learning book: www.manning.com/books/grokking-machine-learning
40% discount code: serranoyt

Generative AI
7 Views · 3 months ago

Reinforcement Learning with Human Feedback (RLHF) is a method used for training Large Language Models (LLMs). At the heart of RLHF lies a very powerful reinforcement learning method called Proximal Policy Optimization. Learn about it in this simple video!
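PPO's central trick, the clipped surrogate objective, can be sketched in a few lines (a minimal illustration with made-up numbers, not a full trainer): the probability ratio between the new and old policy is clipped so that no single update moves the policy too far.

```python
import numpy as np

# PPO's clipped surrogate objective: cap the probability ratio
# r = pi_new(a|s) / pi_old(a|s) to keep updates conservative.
def ppo_clip_objective(ratio, advantage, eps=0.2):
    ratio = np.asarray(ratio, float)
    advantage = np.asarray(advantage, float)
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1 - eps, 1 + eps) * advantage
    # Take the pessimistic (smaller) of the two estimates.
    return float(np.mean(np.minimum(unclipped, clipped)))

# If the new policy drifts too far (ratio 1.5), the objective is
# capped at (1 + eps) * advantage = 1.2 * 2.0 = 2.4.
obj = ppo_clip_objective(ratio=[1.5], advantage=[2.0])
```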

This is the second in a series of 3 videos dedicated to the reinforcement learning methods used for training LLMs.

Full Playlist: https://www.youtube.com/playli....st?list=PLs8w1Cdi-zv

Video 0 (Optional): Introduction to deep reinforcement learning https://www.youtube.com/watch?v=SgC6AZss478
Video 1: Proximal Policy Optimization https://www.youtube.com/watch?v=TjHH_--7l8g
Video 2 (This one): Reinforcement Learning with Human Feedback
Video 3 (Coming soon!): Deterministic Policy Optimization

00:00 Introduction
00:48 Intro to Reinforcement Learning (RL)
02:47 Intro to Proximal Policy Optimization (PPO)
04:17 Intro to Large Language Models (LLMs)
06:50 Reinforcement Learning with Human Feedback (RLHF)
13:08 Interpretation of the Neural Networks
14:36 Conclusion

Get the Grokking Machine Learning book!
https://manning.com/books/grok....king-machine-learnin
Discount code (40%): serranoyt
(Use the discount code at checkout)

Generative AI
6 Views · 3 months ago

A video about reinforcement learning, Q-networks, and policy gradients, explained in a friendly tone with examples and figures.

Introduction to neural networks: https://www.youtube.com/watch?v=BR9h47Jtqyw

Introduction: (0:00)
Markov decision processes (MDP): (1:09)
Rewards: (5:39)
Discount factor: (8:51)
Bellman equation: (10:48)
Solving the Bellman equation: (12:43)
Deterministic vs stochastic processes: (16:29)
Neural networks: (19:15)
Value neural networks: (21:44)
Policy neural networks: (25:44)
Training the policy neural network: (30:46)
Conclusion: (34:53)
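The Bellman-equation chapters above can be sketched as value iteration on a toy grid world (a minimal illustration with a made-up reward layout, not code from the video): each state's value is repeatedly updated to the best action's reward plus the discounted value of the next state.

```python
import numpy as np

# Deterministic 1-D grid world: states 0..3, moving into state 3
# pays reward 1 and ends the episode. Actions: left or right.
n_states, gamma = 4, 0.9
rewards = np.array([0.0, 0.0, 0.0, 1.0])
V = np.zeros(n_states)

for _ in range(100):
    V_new = V.copy()
    for s in range(n_states - 1):
        left, right = max(s - 1, 0), s + 1
        # Bellman update: best action's reward plus discounted value.
        V_new[s] = max(rewards[left] + gamma * V[left],
                       rewards[right] + gamma * V[right])
    V_new[3] = 0.0                     # terminal state has no future value
    V = V_new
# Values decay by gamma for each step away from the reward.
```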

Announcement: Book by Luis Serrano! Grokking Machine Learning. bit.ly/grokkingML
40% discount code: serranoyt

Generative AI
0 Views · 3 months ago

In this video you'll learn about ways to tell probability distributions apart.
This is part of the Math for ML Specialization with @Deeplearningai. Check it out here!
https://bit.ly/4imAtNz

Other samples of the M4ML Specialization:
Linear Algebra: Discrete Dynamical Systems: https://www.youtube.com/watch?v=7SfocUa8gis
Calculus: Newton's method https://www.youtube.com/watch?v=TEsJpHTGURo
Probability/Statistics: (this one)

00:24 Average
04:31 Variance
10:24 Skewness
16:32 Kurtosis
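The four statistics in the chapters above can be computed directly from their definitions as population moments (a sketch on a small made-up sample, not code from the video):

```python
import numpy as np

def moments(x):
    x = np.asarray(x, float)
    mean = x.mean()
    var = ((x - mean) ** 2).mean()       # spread around the mean
    std = np.sqrt(var)
    skew = (((x - mean) / std) ** 3).mean()   # asymmetry
    kurt = (((x - mean) / std) ** 4).mean()   # tail weight
    return mean, var, skew, kurt

# A symmetric sample has zero skewness.
mean, var, skew, kurt = moments([1, 2, 3, 4, 5])
```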

Generative AI
1 Views · 3 months ago

Announcement: New Book by Luis Serrano! Grokking Machine Learning. bit.ly/grokkingML
40% Discount code: serranoyt

Welcome! I believe that math concepts can be learned through simple explanations, analogies and easy-to-understand visualizations. I am passionate about teaching math concepts in relatable, friendly and simple ways. My videos are designed so that beginners can clearly learn new concepts while experts can see them under a new light. I hope you enjoy the channel and please drop me a line if you have any comments or suggestions. Twitter: @luis_likes_math.

Generative AI
2 Views · 3 months ago

CORRECTION: at 13:41, the probability is 6.1e-5 and not 4.8e-4 (however, the entropy is 1.75, which is correct). Thank you @dlyChimi!

Learn Shannon entropy and information gain by playing a game that consists of picking colored balls from buckets.
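The bucket game boils down to one formula, which can be sketched as follows (bucket contents here are made-up examples): entropy measures the average number of bits of surprise in a pick.

```python
import numpy as np

# Shannon entropy of a bucket of colored balls, in bits.
def entropy(probs):
    p = np.asarray(probs, float)
    p = p[p > 0]                       # 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))

# A bucket of all-red balls: no uncertainty, entropy 0.
h_certain = entropy([1.0])
# A bucket with 2 red and 2 blue balls: a fair coin flip, 1 bit.
h_coin = entropy([0.5, 0.5])
```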

Announcement: New Book by Luis Serrano! Grokking Machine Learning. bit.ly/grokkingML
40% discount code: serranoyt

Accompanying blog post: https://medium.com/p/5810d35d54b4/

0:00 Shannon Entropy and Information Gain
2:22 What ball will we pick?
4:33 Quiz
5:06 Question
5:14 Game
7:17 Probability of Winning
7:45 Products
11:00 What if there are more classes?
12:34 Sequence 2
13:44 Sequence 3
14:57 Naive Approach
15:34 Sequence 1
19:44 General Formula

Generative AI
1 Views · 3 months ago

A video about autoencoders, a very powerful generative model. The video includes:
Intro: (0:25)
Dimensionality reduction (3:35)
Denoising autoencoders (10:50)
Variational autoencoders (18:15)
Training autoencoders (23:36)

GitHub repo: www.github.com/luisguiserrano/autoencoders

Recommended videos:
Generative adversarial networks: https://www.youtube.com/watch?v=8L11aMN5KY8
Restricted Boltzmann machines: https://www.youtube.com/watch?v=Fkw0_aAtwIw
Matrix factorization: https://www.youtube.com/watch?v=ZspR5PZemcs
Singular value decomposition: https://www.youtube.com/watch?v=DG7YTlGnCEo
Neural networks: https://www.youtube.com/watch?v=BR9h47Jtqyw
Convolutional neural networks: https://www.youtube.com/watch?v=2-Ol7ZB0MmU
Recurrent neural networks: https://www.youtube.com/watch?v=2-Ol7ZB0MmU
Logistic regression: https://www.youtube.com/watch?v=jbluHIgBmBo
Shannon entropy: https://www.youtube.com/watch?v=9r7FIXEAGvs

Book by Luis Serrano! Grokking Machine Learning. bit.ly/grokkingML
40% discount code: serranoyt

0:00 Introduction
0:13 Generative models
3:03 Variational autoencoders
3:45 Dataset of images
10:16 Denoising autoencoders
10:27 Linear methods
10:53 A friendly introduction to deep learning and neural networks
11:58 Mapping the real numbers to the interval (0,1)
12:23 Sigmoid function
12:41 Perceptron
15:02 Correct noise
18:20 Autoencoders as generators
20:16 Latent space
23:41 Training a neural network - loss function
25:18 Training an autoencoder
25:32 Training autoencoders
25:46 Reconstruction loss (Mean squared error)
26:31 Reconstruction loss (log-loss)
27:11 Training a variational autoencoder

Correction: At 30:05, the number in the middle of the red graph should be 0.4, not 0.3.

Generative AI
1 Views · 3 months ago

Highlight: At 33:30 Josh Starmer, Statsquatch, and the Normalsaurus from @statquest have an awesome cameo!

Generative AI
3 Views · 3 months ago

State Space Models (SSMs) are a new architecture that is revolutionizing Large Language Models. Learn about them in this friendly video!
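The core recurrence of a (discretized) linear SSM can be sketched in a few lines (toy 2-d state with made-up matrices, not the parameterization from the video): a state update h_t = A h_{t-1} + B x_t followed by a readout y_t = C h_t.

```python
import numpy as np

# Toy discretized linear state space model: 2-d state,
# scalar input and output. Matrices are illustrative only.
A = np.array([[0.9, 0.0],
              [0.1, 0.8]])   # state transition
B = np.array([1.0, 0.0])     # input projection
C = np.array([0.0, 1.0])     # output readout

def ssm_scan(xs):
    h = np.zeros(2)
    ys = []
    for x in xs:
        h = A @ h + B * x    # state update
        ys.append(float(C @ h))
    return ys

# An impulse input: the output shows the state slowly leaking
# information from the first component into the second.
ys = ssm_scan([1.0, 0.0, 0.0])
```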

00:00 Introduction
00:33 Example of state space models
10:34 SSMs for language generation
17:40 Mamba
18:32 Convolutions

Grokking Machine Learning, by Luis Serrano
www.manning.com/books/grokking-machine-learning
40% discount code: serranoyt

Generative AI
7 Views · 3 months ago

This video is a friendly introduction to quantum computing and machine learning. No prior knowledge required.

This video is part of a blog post with Zapata Computing:
https://www.zapatacomputing.co....m/why-generative-mod

QCBM Paper: https://www.nature.com/articles/s41534-019-0157-8

Video 2: Non-gradient optimizers, CMA-ES and PSO https://www.youtube.com/watch?v=oi5GQvJzy5I
Video 3: The mathematics behind qubits (coming soon!)

Introduction: (0:00)
Quantum and classical machine learning: (1:46)
Probability: (5:12)
The qubit: (8:23)
Quantum measurement: (12:08)
Qubits as generative models: (13:42)
Measuring with different bases: (14:15)
Quantum gates: (22:27)
Quantum entanglement: (25:23)
Entanglement gates: (35:31)
Quantum machine learning (36:04)
Training models: (39:50)
Loss functions and KL divergence: (47:55)
Labs, code, etc: (49:59)

Generative AI
7 Views · 3 months ago

Grokking Machine Learning Book: https://www.manning.com/books/....grokking-machine-lea
40% discount promo code: serranoyt

A friendly introduction to the main algorithms of Machine Learning with examples.
No previous knowledge required.

What is Machine Learning: (0:05)
Linear Regression: (2:25)
Gradient Descent: (4:10)
Naive Bayes: (6:20)
Decision Trees: (10:35)
Logistic Regression: (13:20)
Neural networks: (17:00)
Support Vector Machines: (18:50)
Kernel trick: (20:05)
K-Means clustering: (26:00)
Hierarchical Clustering: (28:30)
Summary: (29:40)

(Thanks to Nick Kartha for breaking down the topics!)

If you like this, there's an extended version in this playlist:
https://www.youtube.com/playli....st?list=PLAwxTw4SYaP

Generative AI
0 Views · 3 months ago

Announcement: New Book by Luis Serrano! Grokking Machine Learning. bit.ly/grokkingML
40% discount code: serranoyt

A friendly explanation of how computers predict and generate sequences, based on Recurrent Neural Networks.
For a brush up on Neural Networks, check out this video: https://www.youtube.com/watch?v=BR9h47Jtqyw
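The "carry information forward" idea of an RNN can be sketched in one step (a minimal cell with made-up weights, not the network from the video): the new hidden state mixes the previous hidden state with the current input.

```python
import numpy as np

# A minimal recurrent cell. Weights are illustrative only.
W_h = np.array([[0.5, 0.0],
                [0.0, 0.5]])   # hidden -> hidden
W_x = np.array([[1.0],
                [1.0]])        # input -> hidden

def rnn_step(h_prev, x):
    # New hidden state combines memory (h_prev) with the new input.
    return np.tanh(W_h @ h_prev + W_x @ x)

# Feed a short sequence; the final hidden state summarizes it.
h = np.zeros(2)
for x in [np.array([1.0]), np.array([0.0])]:
    h = rnn_step(h, x)
```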

0:00 A friendly introduction to Recurrent Neural Networks
1:38 A friendly introduction to Deep Learning and Neural Networks
2:11 Vectors
5:22 Perfect Roommate
7:13 Simple Neural Network
7:54 Simple (Recurrent) Neural Network
10:03 Cooking Schedule
11:47 More Complicated RNN
12:06 Food
13:31 Weather
14:38 Add
16:02 Merge
20:53 Start with random weights
21:05 Use Gradient Descent
21:41 New Error Function

Generative AI
4 Views · 3 months ago

The KL divergence of distributions P and Q is a measure of how different P and Q are.
However, the KL divergence of P and Q is not the same as the KL divergence of Q and P.
Why?
Learn the intuition behind this in this friendly video.
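The asymmetry is easy to see numerically (the two distributions below are made-up examples): KL(P||Q) averages the log-ratios under P, while KL(Q||P) averages them under Q, so the two values generally differ.

```python
import numpy as np

def kl(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

# Two distributions over three outcomes (illustrative numbers).
P = [0.8, 0.1, 0.1]
Q = [0.4, 0.3, 0.3]

forward = kl(P, Q)    # log-ratios weighted by P
backward = kl(Q, P)   # log-ratios weighted by Q -- a different average
```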

More about the KL Divergence formula:
https://www.youtube.com/watch?v=sjgZxuCm_8Q



