Latest videos

Generative AI
1 Views · 20 days ago

💥 We’re launching the Modern Route-Full Stack Generative & Agentic AI Batch 💥
📅 Starts: 15th March 2026
⏰ Time: 8 PM – 11 PM IST
🗓️ Schedule: Every Saturday & Sunday
This is not old-school AI ❌
This is how AI is actually built, deployed, and scaled in 2026 —
from GenAI to Agentic AI, end-to-end & hands-on 🚀
🔗 Enroll here:
👉 https://www.krishnaik.in/liveclass2/genai?id=9
If you were waiting for the right time, right roadmap, and right guidance —
this is your signal 💯
📞 Have questions or need guidance?
Reach out to Krish Naik's counselling team:
+91 91115 33440
+91 84848 37781

Generative AI
38 Views · 3 months ago

How To Train AI Models using Unsloth
Unlock the secrets to training powerful AI models that can outperform giants like ChatGPT and Claude, right from your personal computer and for less than $5! This video reveals how specialized, fine-tuned models can achieve superior accuracy on specific tasks compared to their larger counterparts. Discover the critical, yet often overlooked, aspects of AI training that big companies don't talk about, including the most effective ways to structure your data for success with tools like Unsloth. Whether you're a business owner, developer, or AI enthusiast, learn the practical steps to build your own custom AI solutions efficiently.

🔗 What We Cover:
- The Underdog Advantage: Understand why smaller, fine-tuned AI models can surpass large language models like GPT-4 in specialized tasks, backed by surprising accuracy stats.
- Cost-Effective AI Training: Learn how to train high-performing AI models in just a couple of hours on your personal computer for minimal cost (even free using Google Colab with a T4 GPU!).
- The Data Structuring Secret: Master the simple yet crucial two-column (instruction/input and output) data format required for effective model training with Unsloth, avoiding common pitfalls.
- Practical Fine-Tuning Examples: See real-world data structuring for an AI gym trainer and a customer service response bot.
- Step-by-Step Unsloth Tutorial: Follow along as we build an AI workout generator using Unsloth in a Google Colab notebook, from data preparation to model training and testing.
- Beyond the Hype: Uncover the techniques that companies like Google, DeepMind, and OpenAI use, adapted for your own projects.
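
The two-column layout described above can be sketched in Python. The field names follow the common Alpaca-style instruction/input/output convention, and the prompt template and example rows here are illustrative assumptions, not the exact ones from the video:

```python
# Sketch of the instruction/input/output data layout commonly used for
# fine-tuning with Unsloth (Alpaca-style). Rows and template are illustrative.
examples = [
    {
        "instruction": "Create a beginner workout plan.",
        "input": "Goal: general fitness, 3 days per week, no equipment.",
        "output": "Day 1: squats, push-ups, plank...",
    },
    {
        "instruction": "Write a customer service reply.",
        "input": "Customer asks about a late delivery.",
        "output": "We're sorry for the delay...",
    },
]

PROMPT_TEMPLATE = (
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n{output}"
)

def format_example(ex):
    """Render one training row into a single prompt string."""
    return PROMPT_TEMPLATE.format(**ex)

formatted = [format_example(ex) for ex in examples]
print(formatted[0].splitlines()[0])  # ### Instruction:
```

The point of the structure is consistency: every row maps the same named fields into the same template, so the model learns one stable pattern instead of many ad-hoc ones.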

💡 Embrace the Future of AI:
Step into the world of custom AI model training! This guide empowers you to bypass the need for massive datasets and expensive infrastructure. Learn the secrets to creating specialized AI that truly understands your unique context and delivers exceptional results.

Join me as I demystify AI model training, showing you exactly how to achieve remarkable performance without breaking the bank. Don't forget to like, share, and subscribe for more insights into practical AI implementation!

👨‍💻 Follow Me:
Website: https://www.rolandburke.com
Be there first to know when the community drops: https://www.rolandburke.com/join-now
Twitter: https://twitter.com/rjburkejr
LinkedIn: https://www.linkedin.com/in/rolandburke

Videos I Think You'll ❤️:
How To Install Text Generation Web UI
https://youtu.be/Lm2xpJ5TQBo

How To Use ChatGPT Inside Of Airtable
https://youtu.be/extjQChhE-M
========================

🎥 Video Breakdown
0:00 - Introduction: Training AI Better Than ChatGPT for Cheap
0:06 - The High Cost of Traditional AI Training
0:25 - The Big Secret: Smaller Models, Better Results?
0:47 - Why Fine-Tuned Models Outperform Giants (Study Results)
2:09 - The Power of Small Datasets (200-500 Examples)
2:20 - Why Specialization Beats Generalization in AI
2:34 - Critical Mistake: The Importance of Training Data Structure
3:14 - Simplifying Data Structure with Unsloth: The Two-Column Method
3:54 - Example 1: Structuring Data for an AI Gym Trainer
4:44 - Example 2: Structuring Data for Customer Service AI
5:43 - Step-by-Step: Training Your AI Model with Unsloth in Google Colab
6:01 - Demo: Building an AI Workout Generator - Data Prep
6:25 - Colab Setup: Choosing a Model (Meta Llama 3.1 8B) & LoRA Adapters
7:39 - Training the Model: Settings & Process (Max Steps, Epochs, Learning Rate)
8:25 - Analyzing Training Results & Loss Rate
8:31 - Testing Your Fine-Tuned AI Model: Workout Generator in Action
9:01 - Conclusion: Train Your Own AI for Free/Cheap!
9:14 - Beyond Fine-Tuning: Access a Suite of AI Tools
9:29 - Join the AI Community & Waitlist
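
The max-steps/epochs settings covered at 7:39 boil down to a small calculation. This sketch uses illustrative numbers, not the ones from the video:

```python
import math

def training_steps(num_examples, batch_size, grad_accum, epochs):
    """Total optimizer steps: the dataset is consumed in effective
    batches of batch_size * grad_accum, once per epoch."""
    effective_batch = batch_size * grad_accum
    steps_per_epoch = math.ceil(num_examples / effective_batch)
    return steps_per_epoch * epochs

# e.g. 400 examples, batch size 2 with 4-step gradient accumulation, 3 epochs
print(training_steps(400, 2, 4, 3))  # 150
```

Capping max steps below this number simply ends training early, which is a common way to keep a cheap Colab run short.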

Generative AI
10 Views · 3 months ago

Ready to become a certified watsonx AI Assistant Engineer? Register now and use code IBMTechYT20 for 20% off of your exam → https://ibm.biz/Bdnd3d

Learn more about Large Language Models (LLMs) here → https://ibm.biz/Bdnd3x

What if you could run large language models locally with just one command? 🚀 Cedric Clyburn shows how Ollama, an open-source tool, simplifies deploying LLMs on your machine. 💡 Protect your data, save costs, and explore powerful AI solutions—all from your own system. ✨
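
Once `ollama serve` is running, Ollama also exposes a local REST API (default port 11434). A minimal Python sketch of building a request for its `/api/generate` endpoint; the model name is an example and must already be pulled:

```python
import json

# Build a request for Ollama's documented /api/generate endpoint.
# "llama3.2" is an example model name; pull it first with `ollama pull`.
def build_generate_request(model, prompt):
    return {
        "url": "http://localhost:11434/api/generate",
        "payload": {"model": model, "prompt": prompt, "stream": False},
    }

req = build_generate_request("llama3.2", "Why is the sky blue?")
print(json.dumps(req["payload"]))

# To actually send it (requires a running Ollama server):
#   import urllib.request
#   data = json.dumps(req["payload"]).encode()
#   with urllib.request.urlopen(req["url"], data=data) as resp:
#       print(json.loads(resp.read())["response"])
```

Because everything stays on localhost, no prompt or response ever leaves your machine, which is the data-protection point made in the video.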

AI news moves fast. Sign up for a monthly newsletter for AI updates from IBM → https://ibm.biz/Bdnd3D

#ollama #opensourceai #llm

Generative AI
10 Views · 3 months ago

For more information about Stanford's Artificial Intelligence programs visit: https://stanford.io/ai

This lecture provides a concise overview of building a ChatGPT-like model, covering both pretraining (language modeling) and post-training (SFT/RLHF). For each component, it explores common practices in data collection, algorithms, and evaluation methods. This guest lecture was delivered by Yann Dubois in Stanford’s CS229: Machine Learning course, in Summer 2024.

Yann Dubois
PhD Student at Stanford
https://yanndubs.github.io/

About the speaker: Yann Dubois is a fourth-year CS PhD student advised by Percy Liang and Tatsu Hashimoto. His research focuses on improving the effectiveness of AI when resources are scarce. Most recently, he has been part of the Alpaca team, working on training and evaluating language models more efficiently using other LLMs.

To view all online courses and programs offered by Stanford, visit: http://online.stanford.edu

Chapters:
00:00 - Introduction
00:10 - Recap on LLMs
00:16 - Definition of LLMs
00:19 - Examples of LLMs
01:16 - Importance of Data
01:20 - Evaluation Metrics
01:33 - Systems Component
01:41 - Importance of Systems
01:47 - LLMs Based on Transformers
01:57 - Focus on Key Topics
02:00 - Transition to Pretraining
03:02 - Overview of Language Modeling
04:17 - Generative Models Explained
05:15 - Autoregressive Models Definition
06:36 - Autoregressive Task Explanation
07:49 - Training Overview
08:48 - Tokenization Importance
10:50 - Tokenization Process
13:30 - Example of Tokenization
16:00 - Evaluation with Perplexity
20:50 - Current Evaluation Methods
24:30 - Academic Benchmark: MMLU
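
The perplexity metric covered at 16:00 can be computed directly from the probabilities a model assigns to the observed tokens; a minimal sketch:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-likelihood
    of the probabilities the model assigned to the observed tokens."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# A model that assigns uniform probability 1/50 to every token has
# perplexity 50: it is "as confused as" a 50-way choice at each step.
print(round(perplexity([1 / 50] * 10), 6))  # 50.0
```

Lower is better; a perfect model that assigns probability 1 to every observed token has perplexity 1.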

Generative AI
15 Views · 3 months ago

Unlock the secrets of AI model fine-tuning in this easy-to-follow guide! Learn how to:

• Customize AI responses without complex coding
• Create your own dataset for personalized results
• Fine-tune Mistral using MLX on Apple Silicon
• Implement your fine-tuned model with Ollama

Discover why fine-tuning isn't as daunting as it seems, and how you can tweak AI models to match your unique style. Perfect for beginners and for anyone intimidated by traditional Python-notebook tutorials.
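
The dataset step amounts to writing JSONL training files. mlx-lm's LoRA tooling reads files like `train.jsonl` and `valid.jsonl`; a single `"text"` field per line is one accepted format, though the exact schemas supported should be checked against the mlx-lm docs. The example rows here are made up:

```python
import json, os, tempfile

# Each line of train.jsonl is one JSON object. A single "text" field is
# one format mlx-lm's LoRA trainer accepts; the content is illustrative.
rows = [
    {"text": "Q: How do I greet users? A: In my usual friendly style!"},
    {"text": "Q: What's my sign-off? A: Cheers, as always."},
]

out_dir = tempfile.mkdtemp()
path = os.path.join(out_dir, "train.jsonl")
with open(path, "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")

# Read it back to confirm the file is valid JSONL
with open(path) as f:
    loaded = [json.loads(line) for line in f]
print(len(loaded))  # 2
```

Getting this file right is most of the work; the actual fine-tune is then a single mlx-lm command pointed at the directory.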

Don't miss this opportunity to level up your AI skills and create models that truly understand you!

#AITutorial #MachineLearning #FineTuning #MistralAI #AppleSilicon

My Links 🔗
👉🏻 Subscribe (free): https://www.youtube.com/technovangelist
👉🏻 Join and Support: https://www.youtube.com/channe....l/UCHaF9kM2wn8C3CLRw
👉🏻 Newsletter: https://technovangelist.substack.com/subscribe
👉🏻 Twitter: https://www.twitter.com/technovangelist
👉🏻 Discord: https://discord.gg/uS4gJMCRH2
👉🏻 Patreon: https://patreon.com/technovangelist
👉🏻 Instagram: https://www.instagram.com/technovangelist/
👉🏻 Threads: https://www.threads.net/@techn....ovangelist?xmt=AQGzo
👉🏻 LinkedIn: https://www.linkedin.com/in/technovangelist/
👉🏻 All Source Code: https://github.com/technovangelist/videoprojects

Want to sponsor this channel? Let me know what your plans are here: https://www.technovangelist.com/sponsor

00:00 - AI is Amazing
00:26 - Two approaches to tweaking models
00:36 - What is fine tuning
00:57 - Why is it hard to get started
01:12 - The biggest problem
01:39 - Just 3 steps
01:52 - The hardest part
02:09 - Start with step 1
02:27 - How to figure out what to do
03:51 - My first fine tune
04:44 - What to put where
04:58 - Move on to the next step
05:31 - Huggingface login
05:48 - The mlx command
06:43 - The results
06:56 - Define the new model
07:32 - A couple of gotchas

Generative AI
14 Views · 3 months ago

Sometimes, a large language model is too much horse, so to speak. For tasks that are smaller in scope and more fine-tuned for a specific task, a small language model, or SLM, might be a better fit. SLMs are cheaper than LLMs, so they're great for smaller businesses or just automation tasks that don't need all the bells and whistles an LLM offers.

🔎 Read more:
SLM defined ➡️ https://www.techtarget.com/wha....tis/definition/small

------------------------------------------------------------------------------

🔔Subscribe to Eye on Tech: https://www.youtube.com/@Eyeon....Tech?sub_confirmatio

------------------------------------------------------------------------------

Follow Eye on Tech:
Twitter/X: https://twitter.com/EyeonTech_TT
LinkedIn: https://www.linkedin.com/showcase/eyeontech/
TikTok: https://www.tiktok.com/@eyeontech
Instagram: https://www.instagram.com/eyeontech_tt/

#generativeai #llm #smallbusiness #eyeontech

Generative AI
13 Views · 3 months ago

Dive into the world of language models as I guide you through the process of training a small language model using GPT-2! In this tutorial, we'll explore how to leverage the powerful distilgpt2 transformer to understand diseases and symptoms better.

📋 Tutorial Highlights:

Dataset Loading: Learn how to load a relevant dataset on diseases and symptoms from Hugging Face datasets.
Tokenization and Model Setup: Understand the crucial steps of tokenization using GPT-2's tokenizer and initializing the language model.
Training Loop: Walk through the training loop, exploring each epoch, monitoring training and validation losses, and ensuring your model is learning effectively.
Hyperparameter Tuning: Fine-tune your model by adjusting batch sizes, learning rates, and more.
Text Generation: Witness the power of your trained model by generating meaningful text based on input strings.

🤖 Why Train a Domain-Specific Language Model like MedLLM?
Training a language model allows you to teach your model about the relationships between diseases and symptoms, enabling it to generate informative and context-aware responses.
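
As a toy illustration of that idea (not the distilgpt2 pipeline from the video), even a simple count-based model can pick up disease-symptom associations from structured examples; a neural language model does the same thing at vastly larger scale. The disease/symptom pairs below are made up:

```python
from collections import Counter, defaultdict

# Toy count-based model of disease -> symptom associations. The pairs
# are invented for illustration, not medical data.
pairs = [
    ("flu", "fever"), ("flu", "cough"), ("flu", "fever"),
    ("migraine", "headache"), ("migraine", "nausea"),
]

model = defaultdict(Counter)
for disease, symptom in pairs:
    model[disease][symptom] += 1

def most_likely_symptom(disease):
    """Return the symptom seen most often with this disease."""
    return model[disease].most_common(1)[0][0]

print(most_likely_symptom("flu"))  # fever
```

The fine-tuned transformer replaces these raw counts with learned contextual probabilities, which is what lets it generate full context-aware responses rather than single lookups.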

🔔 Don't forget to like, comment, and subscribe for more exciting tutorials on Gen AI and machine learning! Your support keeps the channel thriving.

Join this channel to get access to perks:
https://www.youtube.com/channe....l/UC-zVytOQB62OwMhKR

📁 Download Code: https://github.com/AIAnytime/T....raining-Small-Langua
📚 Resources:
Hugging Face Model: https://huggingface.co/distilgpt2
Dataset Source: https://huggingface.co/dataset....s/QuyenAnhDE/Disease

#generativeai #llm #ai

Generative AI
9 Views · 3 months ago

Explore the capabilities and considerations for Small Language Models (SLMs). Whether you’re using out-of-the-box SLMs or customizing/fine-tuning them with your own data, we’ll cover practical considerations and best practices. Enhance your language processing capabilities with SLMs!

Chapters:
00:00 - Introduction
01:12 - SLM
02:47 - Advantages of SLMs
03:24 - Local Models
03:54 - Architecture
04:37 - Demo
07:12 - Getting started

✔️Resources:
https://aka.ms/azuresql-slm

📌 Let's connect:
Twitter - Anna Hoffman, https://twitter.com/AnalyticAnna
Twitter - AzureSQL, https://aka.ms/azuresqltw

🔴 Watch even more Data Exposed episodes: https://aka.ms/dataexposedyt

🔔 Subscribe to our channels for even more SQL tips:
Microsoft Azure SQL: https://aka.ms/msazuresqlyt
Microsoft SQL Server: https://aka.ms/mssqlserveryt
Microsoft Developer: https://aka.ms/microsoftdeveloperyt

#AzureSQL #SQL #LearnSQL

Generative AI
9 Views · 3 months ago

The Qwen3 family of thinking large language models has just been released and the smallest model in the family is just 523MB! But what can such a small model do? Is it just an academic exercise or is it actually useful? Let's find out.
---

Twitter: https://twitter.com/garyexplains
Instagram: https://www.instagram.com/garyexplains/

#garyexplains

Generative AI
5 Views · 3 months ago

Dr. Raj Dandekar (MIT PhD) conducted a 7-hour small language model workshop. This is part 2 of that workshop.

In part 2, we cover the following aspects:

1. 00:00 Recap of part 1: data pre-processing, tokenization, input-output pairs
2. 10:55 Building blocks of the SLM architecture
3. 01:08:34 Attention Mechanism Explained

If you want to get access to the detailed lecture notes, register here: https://vizuara.ai/courses/bui....ld-slm-from-scratch-

Generative AI
5 Views · 3 months ago

Never get stuck without AI again. Run three Small Language Models (SLMs)—also called local LLMs—TinyLlama, Gemma-3, and Phi-4-mini—completely offline; all fit in 4 GB or less and run on any laptop, even older hardware.

────────────────────
🔧 Hardware & Software used
• Laptop: Ryzen 5 4500U, 8 GB RAM, Ollama (no GPU needed!)
• Phone: iPhone 13 Pro with the PocketPal AI mobile app (local GGUF)

────────────────────
🔗 Model resources
• ChatGPT global outage (news)
https://timesofindia.indiatime....s.com/etimes/trendin
• Phi-4-mini reasoning paper
https://www.microsoft.com/en-u....s/research/wp-conten
• TinyLlama 1.1 https://huggingface.co/TinyLlama/TinyLlama_v1.1
└ GGUF Q4_0 637 MB https://huggingface.co/TheBlok....e/TinyLlama-1.1B-Cha
• Gemma-3 https://huggingface.co/blog/gemma3
└ GGUF Q4_K_M 0.8 GB https://huggingface.co/Maziyar....Panahi/gemma-3-1b-it
• Phi-4-mini https://huggingface.co/microso....ft/Phi-4-mini-reason
└ GGUF Q4_K_M 2.5 GB https://huggingface.co/lmstudi....o-community/Phi-4-mi
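
The GGUF file sizes above line up with a back-of-the-envelope estimate: 4-bit quantization stores roughly half a byte per parameter, plus some overhead. A sketch, where the ~15% overhead factor is a loose assumption (covering quantization scales, higher-precision embeddings, and metadata), not a GGUF constant:

```python
def q4_size_gb(params_billion, overhead=1.15):
    """Rough size of a 4-bit quantized model file:
    0.5 bytes per parameter, scaled by an assumed overhead factor."""
    bytes_total = params_billion * 1e9 * 0.5 * overhead
    return bytes_total / 1e9

# TinyLlama 1.1B at Q4 -> roughly 0.63 GB, close to the ~637 MB file above
print(round(q4_size_gb(1.1), 2))
```

The same arithmetic explains why a ~1B-parameter Gemma-3 lands near 0.8 GB and a ~4B-parameter Phi-4-mini near 2.5 GB.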

────────────────────
🎬 More on local AI
• End of VRAM? https://youtu.be/M9ZphDPRP_w
• Is local AI image generation dying? https://youtu.be/ad7jBaNgIW8

🛠 Support the channel
Patreon https://www.patreon.com/NextTechAndAi

────────────────────
▼ Comment Poll
What would YOU use offline AI for?

#SmallLanguageModels #LocalLLM #OfflineLLM #LocalAI

Generative AI
2 Views · 3 months ago

This video covers Small Language Models with a case study.

⏱ Chapter Timestamps
====================
00:00 - Intro
00:28 - What is a Small Language Model?
02:31 - Need for SLMs?
03:00 - Case Study: Offline Applications
05:06 - SLMs in the market

Join this channel by contributing to the community:
https://www.youtube.com/channe....l/UCB12jjYsYv-eipCvB

📌 Related Links
=============
🔗CoPilot+PCs - https://www.youtube.com/watch?v=5JmkWJNng2I

📌 Related Playlist
================
🔗 AI Primer Playlist - https://www.youtube.com/playli....st?list=PLTyWtrsGknY
🔗Spring Boot Primer - https://www.youtube.com/playli....st?list=PLTyWtrsGknY
🔗Spring Cloud Primer - https://www.youtube.com/playli....st?list=PLTyWtrsGknY
🔗Spring Microservices Primer - https://www.youtube.com/playli....st?list=PLTyWtrsGknY
🔗Spring JPA Primer - https://www.youtube.com/playli....st?list=PLTyWtrsGknY
🔗Java 8 Streams - https://www.youtube.com/playli....st?list=PLTyWtrsGknY
🔗Spring Security Primer - https://www.youtube.com/playli....st?list=PLTyWtrsGknY

💪 Join TechPrimers Slack Community: https://bit.ly/JoinTechPrimers
📟 Telegram: https://t.me/TechPrimers
🧮 TechPrimer HindSight (Blog): https://medium.com/TechPrimers
☁️ Website: http://techprimers.com
💪 Slack Community: https://techprimers.slack.com
🐦 Twitter: https://twitter.com/TechPrimers
📱 Facebook: http://fb.me/TechPrimers
💻 GitHub: https://github.com/TechPrimers or https://techprimers.github.io/

🎬 Video Editing: FCP

---------------------------------------------------------------
🔥 Disclaimer/Policy:
The content/views/opinions posted here are solely mine and the code samples created by me are open sourced.
You are free to use the code samples in Github after forking and you can modify it for your own use.
All the videos posted here are copyrighted. You cannot re-distribute videos on this channel in other channels or platforms.
#SmallLanguageModel #LLMs #RAG

Generative AI
11 Views · 3 months ago

Earn a Generative AI certificate today → https://ibm.biz/BdKUNX
Learn more about watsonx: https://ibm.biz/BdvDnr

AI promises to touch every aspect of work and life, but how do AI models actually get made?

In this video, Martin Keen walks through a five-step framework for building and deploying AI models.

AI news moves fast. Sign up for a monthly newsletter for AI updates from IBM → https://ibm.biz/BdKUNr

Generative AI
0 Views · 3 months ago

Discover how startups can benefit from adopting or creating Small Language Models, and the role SLMs play in enabling agentic AI. Learn from world-class experts Julien Simon, Chief Evangelist at Arcee, and Nicolas David, Sr. Startup Architect at AWS.

Generative AI
4 Views · 3 months ago

Build your first app today with Mocha: https://www.getmocha.com?utm_source=matthew_berman

Download Humanity's Last Prompt Engineering Guide (free) 👇🏼
https://bit.ly/4kFhajz

Download The Matthew Berman Vibe Coding Playbook (free) 👇🏼
https://bit.ly/3I2J0YQ

Join My Newsletter for Regular AI Updates 👇🏼
https://forwardfuture.ai

Discover The Best AI Tools👇🏼
https://tools.forwardfuture.ai

My Links 🔗
👉🏻 X: https://x.com/matthewberman
👉🏻 Forward Future X: https://x.com/forward_future_
👉🏻 Instagram: https://www.instagram.com/matthewberman_ai
👉🏻 Discord: https://discord.gg/xxysSXBxFW
👉🏻 TikTok: https://www.tiktok.com/@matthewberman_ai

Media/Sponsorship Inquiries ✅
https://bit.ly/44TC45V

Links:
https://arxiv.org/html/2510.04871v1

Showing 14 out of 580