StatQuest with Josh Starmer
  • 278 videos
  • 66,155,167 views
Human Stories in AI: Amy Finnegan
In this episode we have special guest Dr. Amy Finnegan, the Deputy Director of Data Science at IntraHealth International. Amy is a demographer and data scientist with over 10 years of experience working in global health, development, and data science in emerging economies on four continents. Amy is also an adjunct faculty member at Duke University's Global Health Institute, and before these jobs, she was a research scholar at Duke.
If you'd like to support StatQuest, please consider...
Patreon: www.patreon.com/statquest
...or...
RUclips Membership: ruclips.net/channel/UCtYLUTtgS3k1Fg4y5tAhLbwjoin
...buying my book, a study guide, a t-shirt or hoodie, or a song from the StatQuest store...
statquest.org...
3,362 views

Videos

Human Stories in AI: Xavier Moyá
3.7K views • 21 days ago
In this episode we have special guest Xavier Moyá, the director of customer experience and automation at HBX Group in Mallorca, Spain. Xavier has had a career driven by curiosity and a desire to learn more while simultaneously making sure that customer satisfaction is always the focus of his efforts. If you'd like to support StatQuest, please consider... Patreon: www.patreon.com/statquest ....
Human Stories in AI: Tommy Tang
4.1K views • 1 month ago
In this episode we have special guest Tommy Tang, the Director of Computational Biology at Immunitas Therapeutics. Tommy is a computational biologist with over ten years of computational experience and six years' wet lab experience committed to reproducible research and open science. At Immunitas Therapeutics, Tommy employs a single-cell sequencing platform to dissect the biology of immune cell...
Human Stories in AI: Simon Stochholm
4K views • 1 month ago
In this episode we have special guest Simon Stochholm, a lecturer at UCL in Denmark. Simon applies machine learning, especially deep learning, to images, video, and time series in a wide variety of settings. And by "wide variety", I really mean it. Simon is fearless when it comes to seizing opportunities that come up and somehow turns them all into success stories. If you'd like to support StatQues...
Log_e Song - Official Lyric Video
5K views • 2 months ago
Check out the track on Spotify: open.spotify.com/track/4OcFh2yFOTUqzmjjwJF5QY When I first started making StatQuest videos it never dawned on me that people would try to re-do my math on their own. I was also new to explaining things and just assumed that everyone already knew that, in statistics and machine learning, when you use the log, you use base 'e'. Big rookie mistake! Ever since then, ...
Human Stories in AI: Brian Risk@devra.ai
4.5K views • 2 months ago
In this episode we have special guest Brian Risk, a multi-talented data scientist and the President and Founder of devra.ai, a company specializing in automated coding. Brian is also a great personal friend of mine and an amazing musician. If you'd like to support StatQuest, please consider... Patreon: www.patreon.com/statquest ...or... RUclips Membership: ruclips.net/channel/UCtYLUTtgS3k1Fg4y5...
The matrix math behind transformer neural networks, one step at a time!!!
47K views • 2 months ago
Transformers, the neural network architecture behind ChatGPT, do a lot of math. However, this math can be done quickly using matrix math because GPUs are optimized for it. Matrix math is also used when we code neural networks, so learning how ChatGPT does it will help you code your own. Thus, in this video, we go through the math one step at a time and explain what each step does so that you ca...
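The matrix math the description refers to can be made concrete with a tiny plain-Python sketch of scaled dot-product attention, softmax(Q·Kᵀ/√d)·V. The matrices below are made-up 2×2 examples for illustration, not the actual numbers or code from the video:

```python
import math

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def softmax(row):
    """Numerically stable softmax over one row of attention scores."""
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = len(Q[0])
    K_T = [list(col) for col in zip(*K)]        # transpose K
    scores = matmul(Q, K_T)                     # similarity of each query to each key
    scaled = [[s / math.sqrt(d) for s in row] for row in scores]
    weights = [softmax(row) for row in scaled]  # attention percentages per token
    return matmul(weights, V)                   # weighted sums of the values

# Two tokens, two dimensions (hypothetical numbers)
Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
print(attention(Q, K, V))
```

Because every step is a matrix multiplication (plus a per-row softmax), a GPU can do all tokens at once, which is the point the video makes.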
Human Stories in AI: Fabio Urbina
4.3K views • 2 months ago
In this episode we have special guest Fabio Urbina, an Associate Director at Collaborations Pharmaceuticals. Fabio combines computational tools and machine learning with classical small-molecule, molecular, and cell biology techniques to address previously difficult-to-probe scientific problems. Specifically, Fabio finds solutions to drug discovery with machine learning. If you'd like to suppor...
Human Stories in AI: Khushi Jain
8K views • 3 months ago
In this episode we have special guest Khushi Jain, who works in Data Analytics Development at John Deere while pursuing a Master's in Computer Science - Data Science at the University of Illinois. Having recently graduated with her bachelor's, Khushi participated in the data science club and also completed several internships at John Deere. If you'd like to support StatQuest, please consider... Patreon...
Human Stories in AI: Achal Dixit
8K views • 3 months ago
In this episode we have special guest Achal Dixit, a Data Scientist at Delhivery, the largest fully integrated logistics services provider in India. Achal solves problems using data, statistics, and machine learning with a focus on business and people. Before Delhivery, Achal was a Business Technology Analyst at ZS. And before that, Achal was a research assistant at Imperial College London. If you'd lik...
Human Stories in AI: Rick Marks
8K views • 4 months ago
In this episode we have special guest Rick Marks, a professor at the University of North Carolina Chapel Hill School of Data Science and Society. Before UNC, Rick was a director at Google's Advanced Technology and Projects group, exploring new interaction approaches for ambient computing environments. And before that, Rick founded the PlayStation Magic Lab at PlayStation R&D. If you'd like to s...
Essential Matrix Algebra for Neural Networks, Clearly Explained!!!
45K views • 6 months ago
Although you don't need to know matrix algebra to understand the ideas behind neural networks, if you want to code them or read the latest manuscripts in the field, then you'll need to understand matrix algebra. This video teaches the essential topics in matrix algebra, shows how a neural network can be written as a matrix equation, and then shows how to understand PyTorch documentation, err...
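The "neural network as a matrix equation" idea in the description can be sketched in a few lines of plain Python: a two-layer network is just y = W₂·ReLU(W₁·x + b₁) + b₂. The weights and input below are made up for illustration:

```python
def matvec(W, x):
    """Matrix-vector product: each output is a row of W dotted with x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def relu(v):
    """ReLU activation, applied element-wise."""
    return [max(0.0, a) for a in v]

def layer(W, b, x):
    """One dense layer: W x + b."""
    return [s + bi for s, bi in zip(matvec(W, x), b)]

W1 = [[2.0, -1.0], [0.5, 1.0]]; b1 = [0.0, -1.0]   # hidden layer (made-up values)
W2 = [[1.0, 1.0]];              b2 = [0.5]          # output layer

x = [1.0, 2.0]
h = relu(layer(W1, b1, x))   # hidden activations
y = layer(W2, b2, h)         # network output
print(y)  # [2.0]
```

Writing the network this way is exactly what lets libraries like PyTorch hand the whole computation to optimized matrix routines.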
Word Embedding in PyTorch + Lightning
31K views • 7 months ago
Word embedding is the first step in lots of neural networks, including Transformers (like ChatGPT) and other state-of-the-art models. Here we learn how to code a stand-alone word embedding network from scratch and with nn.Linear. We then learn how to load and use pre-trained word embedding values with nn.Embedding. NOTE: This StatQuest assumes that you are already familiar with Word Embedding, ...
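The key idea behind using both nn.Linear and nn.Embedding in the description — that looking up a row of the weight matrix gives the same result as pushing a one-hot vector through a linear layer — can be sketched in plain Python. The vocabulary and weights below are made up for illustration, not the video's actual values:

```python
# One row of embedding values per vocabulary word (made-up numbers)
vocab = ["troll", "2", "is", "great"]
weights = [[1.0, 0.5], [-0.3, 0.8], [0.2, -0.1], [0.9, 0.4]]

def one_hot(index, size):
    """A vector of zeros with a single 1.0 at the word's index."""
    return [1.0 if i == index else 0.0 for i in range(size)]

def linear_embed(word):
    """nn.Linear style: multiply a one-hot vector by the weight matrix."""
    oh = one_hot(vocab.index(word), len(vocab))
    return [sum(o * w for o, w in zip(oh, col)) for col in zip(*weights)]

def lookup_embed(word):
    """nn.Embedding style: just pick out the row for this word's index."""
    return weights[vocab.index(word)]

assert linear_embed("great") == lookup_embed("great")
print(lookup_embed("great"))  # [0.9, 0.4]
```

The lookup version skips all the multiplications by zero, which is why nn.Embedding is the efficient way to use pre-trained values.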
The Golden Play Button, Clearly Explained!!!
24K views • 8 months ago
The Golden Play Button is usually super confusing. In this video, we break it down and walk you through it one-step-at-a-time. By the end of this StatQuest, you'll completely understand The Golden Play Button.
Another 3 lessons from my Pop!!!
11K views • 9 months ago
Decoder-Only Transformers, ChatGPT's specific Transformer, Clearly Explained!!!
103K views • 9 months ago
Transformer Neural Networks, ChatGPT's foundation, Clearly Explained!!!
603K views • 11 months ago
Attention for Neural Networks, Clearly Explained!!!
233K views • 1 year ago
Sequence-to-Sequence (seq2seq) Encoder-Decoder Neural Networks, Clearly Explained!!!
166K views • 1 year ago
The Ukulele: Clearly Explained!!!
16K views • 1 year ago
Word Embedding and Word2Vec, Clearly Explained!!!
268K views • 1 year ago
The AI Buzz, Episode #5: A new wave of AI-based products and the resurgence of personal applications
11K views • 1 year ago
CatBoost Part 2: Building and Using Trees
17K views • 1 year ago
CatBoost Part 1: Ordered Target Encoding
30K views • 1 year ago
The AI Buzz, Episode #4: ChatGPT + Bing and How to start an AI company in 3 easy steps.
7K views • 1 year ago
One-Hot, Label, Target and K-Fold Target Encoding, Clearly Explained!!!
45K views • 1 year ago
The AI Buzz, Episode #3: Constitutional AI, Emergent Abilities and Foundation Models
5K views • 1 year ago
Mutual Information, Clearly Explained!!!
81K views • 1 year ago
Cosine Similarity, Clearly Explained!!!
79K views • 1 year ago
Long Short-Term Memory with PyTorch + Lightning
58K views • 1 year ago

Comments

  • @fotter9567
    @fotter9567 21 hours ago

    You are an absolute genius when it comes to explaining stuff. Every single time I come across a new concept and want to get a good solid basic understanding, I turn to your channel first. Thank you so very much for doing this fantastic work.

  • @YourHeartFeelings
    @YourHeartFeelings 1 day ago

    Thank you very much

  • @mostafamarwanmostafa9975
    @mostafamarwanmostafa9975 1 day ago

    Thank you, sir, for this amazing video. It helped me last year in my NLP exam and now I'm refreshing my knowledge of transformers, hoping to land an interview soon!

  • @joganice2197
    @joganice2197 1 day ago

    This was the best explanation I've ever seen in my life (I'm not even a native English speaker, I'm Brazilian lol)

  • @aapje180
    @aapje180 1 day ago

    Thanks!

  • @aszx-tv4pq
    @aszx-tv4pq 1 day ago

    Yo! StatQuest, please make a video about the pipeline concept in machine learning (clear and accurate, like this one).

    • @statquest
      @statquest 1 day ago

      I'll keep that in mind.

  • @MandeepKaur-ks6lk
    @MandeepKaur-ks6lk 1 day ago

    We understood the calculation of weights and biases, but how do I reason about the nodes? How do I understand the logic for connecting all the inputs to the activation functions and to the output? And how many hidden layers do we need? There's also no example with more than one hidden layer. Could you please help me here?

    • @statquest
      @statquest 1 day ago

      Designing neural networks is more of an art than a science. There are general guidelines, but generally speaking you find something that works on a related dataset and then train it with your own data. In other words, you rarely build your own neural network. However, if you are determined to build your own, the trade-off is this: the more hidden layers and nodes within the hidden layers, the better your model will be able to fit any kind of data, no matter how complicated; but at the same time, you will increase the computation and training will be slower.
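One rough way to quantify the computation side of that trade-off is to count the parameters that have to be trained. This plain-Python sketch uses hypothetical layer sizes to show how fast the count grows with more and bigger hidden layers:

```python
def parameter_count(layer_sizes):
    """Number of weights and biases in a fully connected network,
    given the sizes of each layer (inputs, hidden..., outputs)."""
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out  # weights plus biases for this layer
    return total

# Hypothetical architectures: same inputs and outputs, more hidden capacity
print(parameter_count([2, 4, 1]))         # one small hidden layer: 17 parameters
print(parameter_count([2, 100, 100, 1]))  # two big hidden layers: 10,501 parameters
```

Every extra parameter is another value backpropagation has to compute a derivative for on every training step, which is where the slowdown comes from.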

  • @ronakbhatt4880
    @ronakbhatt4880 1 day ago

    Can't we use the correlation coefficient instead of mutual information for continuous variables?

    • @statquest
      @statquest 1 day ago

      If you have continuous data, use R-squared.
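For anyone who wants to try that reply out, here is a minimal plain-Python sketch of R-squared, 1 - SS(residuals)/SS(total), on made-up numbers:

```python
def r_squared(observed, predicted):
    """R^2 = 1 - SS(residuals) / SS(variation around the mean)."""
    mean = sum(observed) / len(observed)
    ss_tot = sum((o - mean) ** 2 for o in observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    return 1 - ss_res / ss_tot

obs = [1.0, 2.0, 3.0, 4.0]    # made-up observed values
pred = [1.1, 1.9, 3.2, 3.8]   # made-up predictions
print(r_squared(obs, pred))   # close to 1 = the predictions explain most of the variation
```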

  • @alisavictory2969
    @alisavictory2969 1 day ago

    Thank you so much for a very easy-and-nice-to-walk-through video! I really enjoyed the explanations and well-prepared slides! Also you made it very nicely paced. Thank you :))

  • @user-wm8gv1ng9i
    @user-wm8gv1ng9i 1 day ago

    BAM!

  • @ghatshilagogol
    @ghatshilagogol 1 day ago

    Awesome: PCA for dimensionality reduction with vertical + horizontal + depth, all in one 3-D rotation

  • @sameepshah3835
    @sameepshah3835 1 day ago

    The amount of effort in some of these animations, especially in these videos on Attention and Transformers, is insane. Thank you!

    • @statquest
      @statquest 1 day ago

      Glad you like them!

  • @mmouz2
    @mmouz2 1 day ago

    @statquest - Hi Josh, does it make sense to perform PCA on categorical variables?

  • @SleepThatBurns
    @SleepThatBurns 1 day ago

    BAM!

  • @thienan7206
    @thienan7206 2 days ago

    Hi all, with this seq2seq, can we use it to embed a sentence, then use the output vector for semantic similarity?

    • @statquest
      @statquest 1 day ago

      Maybe - I think it is more common to use an Encoder-Only Transformer like BERT. Encoder-Only Transformers are just like Decoder-Only Transformers ( ruclips.net/video/bQ5BoolX9Ag/видео.html ), except they don't use Masked Attention.
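For anyone wondering what the Masked Attention in that reply masks out, here is a minimal plain-Python sketch of a causal mask (an illustration of the general idea, not code from any of the videos): token i may only attend to tokens 0 through i, so later tokens can't leak into earlier predictions.

```python
def causal_mask(n_tokens):
    """Masked (causal) attention pattern: True = this key may be attended to."""
    return [[j <= i for j in range(n_tokens)] for i in range(n_tokens)]

for row in causal_mask(4):
    print(["keep" if ok else "mask" for ok in row])
```

Encoder-only models like BERT skip this mask, so every token can attend to every other token in both directions.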

  • @user-np8mg5yg9n
    @user-np8mg5yg9n 2 days ago

    I am on vacation in Hawaii but I am watching your neural network video. This video is so entertaining to watch :) Tai

    • @statquest
      @statquest 1 day ago

      BAM! Have a great vacation! :)

  • @AlejandroMonroyAzpeitia
    @AlejandroMonroyAzpeitia 2 days ago

    36:49

  • @marchanselthomas
    @marchanselthomas 2 days ago

    The explanation is so clean. I was clapping for him from my room. How can someone be so good at their job!

  • @KhapitarBalakAagya
    @KhapitarBalakAagya 2 days ago

    How are these hidden state weights determined? Is it via backpropagation?

    • @statquest
      @statquest 2 days ago

      Yes. All neural networks are trained the same way, with backpropagation.
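The idea in that reply can be sketched in a few lines of plain Python: one weight and one bias, updated with the chain rule and gradient descent (made-up numbers, not from any of the videos):

```python
# Fit predicted = w * x + b to a single target with squared-error loss:
# loss(w, b) = (w * x + b - target)^2
x, target = 2.0, 7.0
w, b = 1.0, 0.0
learning_rate = 0.1

for step in range(100):
    predicted = w * x + b
    # Chain rule: d(loss)/d(predicted) = 2 * (predicted - target),
    # then d(predicted)/dw = x and d(predicted)/db = 1
    d_predicted = 2 * (predicted - target)
    w -= learning_rate * d_predicted * x
    b -= learning_rate * d_predicted * 1

print(w * x + b)  # should be close to the target, 7.0
```

Real networks have millions of weights, but each one is updated by exactly this recipe: chain-rule derivative, then a small step downhill.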

  • @samsimmons8370
    @samsimmons8370 2 days ago

    I feel like 15:15 deserved a triple bam, but maybe that's just me

    • @statquest
      @statquest 2 days ago

      I think you might be right on that one.

  • @obi8061
    @obi8061 2 days ago

    man was womp womping before womp womping was "cool"

  • @Smrigankiitk
    @Smrigankiitk 2 days ago

    Amazing, thank you for the hard work!

  • @YourHeartFeelings
    @YourHeartFeelings 2 days ago

    Thank you very much

  • @Blackoutfor10days
    @Blackoutfor10days 2 days ago

    Where can I apply the concept of principal component analysis? 🙃

    • @statquest
      @statquest 2 days ago

      Whenever you have lots of things you are measuring.

  • @christiangallo241
    @christiangallo241 2 days ago

    goated intro

  • @pittysmile
    @pittysmile 2 days ago

    Way, way better than what's taught in uni. Easily understood.

  • @AarushiSharma-w8o
    @AarushiSharma-w8o 2 days ago

    BAM!!❤

  • @harsharangapatil2423
    @harsharangapatil2423 3 days ago

    Too much biology in the beginning itself!

    • @statquest
      @statquest 3 days ago

      Sorry about that. I made this video for my colleagues at work - I used to work in a genetics laboratory - so I wanted them to understand the concepts in the context of the work that they did every day. I never expected anyone else to watch this video.

  • @elenatishina7639
    @elenatishina7639 3 days ago

    I did not understand the part at 8:30. Please explain: why 0.9 * 100 * 0.15, and not 100 * 0.15 or log2(1/(0.9^100))?

    • @statquest
      @statquest 3 days ago

      100 = 100 flips, 0.9 = the probability of getting heads, and 0.15 = the amount of surprise for getting heads. Thus, the total surprise after 100 coin flips = 100 * 0.9 * 0.15.
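That arithmetic can be checked in a couple of lines of Python; the 0.15 in the reply is the rounded value of log2(1/0.9):

```python
import math

p_heads = 0.9
surprise_per_heads = math.log2(1 / p_heads)  # about 0.152 bits of surprise per heads
flips = 100
expected_heads = flips * p_heads             # about 90 of the 100 flips land heads
total_surprise = expected_heads * surprise_per_heads
print(round(surprise_per_heads, 2), round(total_surprise, 1))  # ~0.15 and ~13.7
```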

  • @edwinokwaro9944
    @edwinokwaro9944 3 days ago

    You did not show how the rest of the weights are updated. I need to understand how the derivative of the activation function affects the weight update.

    • @statquest
      @statquest 3 days ago

      See this video for details and just replace the derivative of the softmax with 0 or 1 depending on the value for x: ruclips.net/video/GKZoOHXGcLo/видео.html

  • @aayushithakre1632
    @aayushithakre1632 3 days ago

    BAM BAM BAMMMM LOVED THE EXPLANATION!!

  • @anjansamanta8593
    @anjansamanta8593 3 days ago

    I have a doubt: if the ROC plot is given, how do I determine the threshold value used for classification for that confusion matrix?

    • @statquest
      @statquest 3 days ago

      I talk about that in this video: ruclips.net/video/qcvAqAH60Yw/видео.html

  • @KhapitarBalakAagya
    @KhapitarBalakAagya 3 days ago

    I saw so many videos before, and IDK what they were teaching. This explanation is awesome.

  • @R_ooo000ooo_R
    @R_ooo000ooo_R 3 days ago

    8:24 The main part of the video! Everyone must watch!!!

  • @krishj8011
    @krishj8011 3 days ago

    Amazing Video ✨

  • @SoheilLotfi
    @SoheilLotfi 3 days ago

    statquest is the goat ngl

  • @HiasHiasHias
    @HiasHiasHias 3 days ago

    StatQuest never disappoints

  • @fatemeh2222
    @fatemeh2222 3 days ago

    Thank you man. Appreciate such a thorough but concise explanation.

    • @statquest
      @statquest 3 days ago

      Glad it was helpful!

  • @spenmop
    @spenmop 3 days ago

    Your videos are awesome! Makes things so much clearer! But I have a couple of questions.

    How do you handle the situation where a point has many identical points (i.e. high-dim distance = 0)? How do you calculate sigma_i? For example, if k = 10, but 7-8 of the neighbours are duplicates with Dij = 0, then sigma_i is undefined. Do I de-duplicate the data first and then add it back in at the end?

    And symmetrizing: Wij' = Wji' = Wij + Wji - Wij x Wji, yes? But aren't Wij and Wji only calculated for neighbours of i and j? What happens if Wij exists, but Wji does not? Do I add i as another neighbour of j's? (But then j would have more than k neighbours.) I'm so confused.

    • @statquest
      @statquest 3 days ago

      To be honest, I would just try UMAP out and see what it does. It could treat duplicate points as a single point or do something else.

  • @Naturalbanarasi
    @Naturalbanarasi 3 days ago

    You are a very interactive person 😉

  • @alanbouwman5627
    @alanbouwman5627 4 days ago

    Are the conditions at each leaf random? Or is there a criterion for selecting them?

    • @statquest
      @statquest 4 days ago

      Gradient Boost usually uses regression trees. These are very similar to classification trees, but have a slightly different way to decide how to add branches and leaves. For more details, see: ruclips.net/video/g9c66TUylZ4/видео.html
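One common criterion regression trees use to pick a branch — trying thresholds and keeping the one that minimizes the sum of squared residuals in the two resulting leaves — can be sketched in plain Python (made-up data; the linked video covers the full details):

```python
def ssr(values):
    """Sum of squared residuals around the mean of a leaf."""
    if not values:
        return 0.0
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values)

def best_split(xs, ys):
    """Try a threshold between each pair of sorted x values and keep
    the one with the smallest total SSR across the two leaves."""
    pairs = sorted(zip(xs, ys))
    best = (None, float("inf"))
    for i in range(1, len(pairs)):
        threshold = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [y for x, y in pairs if x < threshold]
        right = [y for x, y in pairs if x >= threshold]
        total = ssr(left) + ssr(right)
        if total < best[1]:
            best = (threshold, total)
    return best

# Made-up data with an obvious jump between x = 2 and x = 3
xs = [1.0, 2.0, 3.0, 4.0]
ys = [1.0, 1.2, 5.0, 5.1]
print(best_split(xs, ys))  # threshold lands at 2.5, right where the jump is
```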

  • @alishaterian6125
    @alishaterian6125 4 days ago

    So, in coding, how can we calculate the derivative of the loss function without doing the math?

    • @statquest
      @statquest 4 days ago

      It depends. For neural networks, tensor libraries can calculate the derivatives for you. In other settings you have to do it by hand.

  • @sudhat-d6b
    @sudhat-d6b 4 days ago

    You are great... learning the concepts with music is awesome.

  • @aszx-tv4pq
    @aszx-tv4pq 4 days ago

    Heaven Of statistics!

  • @SG-jg7ly
    @SG-jg7ly 4 days ago

    Why don't we just set d(sum of squared residuals)/d(slope) equal to 0? That point will give either the minimum or the maximum; if it gives the minimum, then that will be the value of the intercept we need to take. Why can't we do this instead of using gradient descent?

    • @statquest
      @statquest 4 days ago

      A lot of people ask why we are using Gradient Descent to estimate the parameters in this video when we could just use least squares. We use least squares to produce a "gold standard estimate". This is the best possible estimate. We then attempt to derive the same estimate using Gradient Descent. This shows 1) how gradient descent works and 2) that the estimate is pretty good compared to the "gold standard".
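That comparison can be sketched in a few lines of plain Python — the closed-form least squares fit next to a gradient descent fit on made-up data (an illustration of the idea, not the code from the video):

```python
# Fit y = intercept + slope * x two ways (made-up data)
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]
n = len(xs)

# 1) "Gold standard": closed-form least squares
mx = sum(xs) / n
my = sum(ys) / n
slope_ls = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
intercept_ls = my - slope_ls * mx

# 2) Gradient descent on the sum of squared residuals
slope, intercept = 0.0, 0.0
lr = 0.01
for _ in range(20000):
    d_slope = sum(-2 * x * (y - (intercept + slope * x)) for x, y in zip(xs, ys))
    d_intercept = sum(-2 * (y - (intercept + slope * x)) for x, y in zip(xs, ys))
    slope -= lr * d_slope
    intercept -= lr * d_intercept

print(round(slope_ls, 3), round(slope, 3))  # the two slope estimates should match
```

For plain linear regression the closed form is indeed simpler; gradient descent earns its keep when, as in neural networks, no closed-form solution exists.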

    • @SG-jg7ly
      @SG-jg7ly 4 days ago

      @statquest Understood, thanks a lot!