Stability AI Founder: Halting AI Training is Hyper-Critical For Our Security w/ Emad Mostaque | EP #36
Video Statistics and Information
Channel: Peter H. Diamandis
Views: 49,720
Keywords: peter diamandis, longevity, xprize, abundance, emad mostaque, stability ai, machine learning, stable diffusion, artificial intelligence, deep learning, ai art, generative ai, ai podcast, stability ai tutorial, stability ai stable diffusion, emad mostaque interview, emad mostaque ai, emad mostaque peter diamandis, emad mostaque stability ai, emad mostaque stable diffusion, AI race, ai race
Id: SKoYhcC3HrM
Length: 69min 2sec (4142 seconds)
Published: Thu Apr 06 2023
Transcribed quote from the video:
"you get to a thousand two thousand chips you've stopped being able to scale because the information couldn't get back and forth fast enough across these. That has been fixed now as of this new generation that's about to hit, the Nvidia h100, the TPU v5s, it basically scales almost linearly to thousands and thousands of chips. To give you an example the fastest supercomputer in the UK: 640 chips. NASA has about 600 as well and now you're building the equivalent of 30 to 100,000 chip supercomputers using this new generation technology because you can stack and scale. It's insane. Now we've got about six months before that hits..."
Let's not forget that the H100 is 9x faster at training AI and 30x faster at inference (these are Nvidia's numbers, so take them with a grain of salt).
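Taking those marketing numbers at face value for a moment, and assuming the 9x figure is a per-chip speedup over the previous-generation A100, that scaling is perfectly linear (it never is), and that the UK machine's 640 chips are prior-generation parts, the quote's cluster sizes work out to roughly:

```python
# Back-of-the-envelope only. Assumes Nvidia's claimed 9x training
# speedup is per-chip vs. the prior generation (A100) and that scaling
# is perfectly linear -- both generous assumptions.
per_chip_speedup = 9        # Nvidia's training claim
cluster_chips = 30_000      # low end of the quote's "30 to 100,000"
uk_fastest_chips = 640      # per the quote

prior_gen_equiv = cluster_chips * per_chip_speedup
print(prior_gen_equiv)                       # 270,000 prior-gen chip-equivalents
print(prior_gen_equiv // uk_fastest_chips)   # ~421x the UK's fastest machine
```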
Problem is, nobody is going to stop; Huawei certainly won't, so the point is moot.
People are only going to invest in developing better models working in tandem with the new hardware until we get AGI.
Some people are getting reactionary/Luddite about AI because it's getting good, and nothing more. This was predictable. We all saw this phase coming.
I don't get what these numbers represent. I already see supercomputers that consist of ~10 million cores. How is this different?
But can it run Crysis?
Hate the thumbnail.
Pausing AI development isn't even worth discussing. So stupid.
But not everyone can afford them, and you have to prove you are not a bad actor. Meanwhile, the petition for an AI CERN-like org is at about 16%, so this is mostly good news for Big Tech, but not necessarily for ordinary consumers.
Oh dear... it's boarding time.
Reminder that the six months are almost over, and: (1) Meta and Microsoft have tens of thousands of H100s, and (2) Google is creating a 2 million (?) TPU v5 cluster.