Stability AI Founder: Halting AI Training is Hyper-Critical For Our Security w/ Emad Mostaque | EP #36

Reddit Comments

Transcribed quote from the video:
"you get to a thousand two thousand chips you've stopped being able to scale because the information couldn't get back and forth fast enough across these. That has been fixed now as of this new generation that's about to hit, the Nvidia h100, the TPU v5s, it basically scales almost linearly to thousands and thousands of chips. To give you an example the fastest supercomputer in the UK: 640 chips. NASA has about 600 as well and now you're building the equivalent of 30 to 100,000 chip supercomputers using this new generation technology because you can stack and scale. It's insane. Now we've got about six months before that hits..."

πŸ‘οΈŽ︎ 56 πŸ‘€οΈŽ︎ u/freeThePokemon256 πŸ“…οΈŽ︎ Apr 07 2023 πŸ—«︎ replies

Let's not forget that the H100 is 9x faster at training AI and 30x faster at inference (These are Nvidia's numbers so take them with a grain of salt).

πŸ‘οΈŽ︎ 15 πŸ‘€οΈŽ︎ u/Ezekiel_W πŸ“…οΈŽ︎ Apr 07 2023 πŸ—«︎ replies

Problem is, nobody is going to stop, Huawei certainly won’t, so the point is moot.

People are only going to invest in developing better models working in tandem with the new hardware until we get AGI.

Some people are getting reactionary/luddish to AI because it’s getting good, and nothing more. This was predictable. We all saw this phase coming.

πŸ‘οΈŽ︎ 27 πŸ‘€οΈŽ︎ u/HeinrichTheWolf_17 πŸ“…οΈŽ︎ Apr 07 2023 πŸ—«︎ replies

I don't get what these numbers represent. I already see supercomputers that consist of ~10 million cores. How is this different?

πŸ‘οΈŽ︎ 7 πŸ‘€οΈŽ︎ u/PinguinGirl03 πŸ“…οΈŽ︎ Apr 07 2023 πŸ—«︎ replies

But can it run Crysis?

πŸ‘οΈŽ︎ 6 πŸ‘€οΈŽ︎ u/Kanute3333 πŸ“…οΈŽ︎ Apr 08 2023 πŸ—«︎ replies

Hate the thumbnail.

Pausing AI development isn’t even worth discussing. So stupid.

πŸ‘οΈŽ︎ 1 πŸ‘€οΈŽ︎ u/Ohigetjokes πŸ“…οΈŽ︎ Apr 07 2023 πŸ—«︎ replies

But not everyone can afford them, and you have to prove you are not a bad actor. Meanwhile, the petition for an AI CERN-like org is at about 16%, so this is mostly good news for Big Tech, but not necessarily for ordinary consumers.

πŸ‘οΈŽ︎ 1 πŸ‘€οΈŽ︎ u/No_Ninja3309_NoNoYes πŸ“…οΈŽ︎ Apr 07 2023 πŸ—«︎ replies

Oh dear ... it's boarding time.

πŸ‘οΈŽ︎ 1 πŸ‘€οΈŽ︎ u/ChoiceOwn555 πŸ“…οΈŽ︎ Apr 08 2023 πŸ—«︎ replies

Reminder that the 6 months are almost over, and (1) Meta and Microsoft have tens of thousands of H100s, and (2) Google is building a 2 million (?) TPU v5 cluster.

πŸ‘οΈŽ︎ 2 πŸ‘€οΈŽ︎ u/[deleted] πŸ“…οΈŽ︎ Aug 29 2023 πŸ—«︎ replies
Captions
Everybody, Peter here. I just spent the last hour with Emad Mostaque, the CEO of Stability AI, talking about the petition that's been going around since early April to halt or pause the development of large language models. I just had him on stage last week at Abundance 360, but that wasn't the subject. This conversation was around the fears and the hopes of large language models, and one of the points we discussed was his belief that the next six months are hyper-critical, that we have six months to figure things out, and why; there are a couple of very important reasons that he discusses in this next segment. We're also going to talk about how this technology is going to transform the world of education, of healthcare, of supporting governments around the world. One of the best conversations I've had with Emad in a long time. He's one of the most brilliant CEOs, and also the only private AI company CEO to sign that petition. So listen up: it educated me and got me feeling hopeful, and I hope it does the same for you.

Just for folks who may not know you: Emad is the CEO and founder of Stability AI, truly one of the most extraordinary AI companies, large language model companies, out there, with a mission of being a truly open AI company, to transform and support humanity across health and education, again about uplifting humanity. So Emad, I love what you do, and I'm proud to be a supporter. Thanks for joining today.

It's my pleasure.

So let me set it up for everybody. There's been a huge amount of conversation around AGI of late, and I titled this session "AGI: Threat or Opportunity. Large Language Models: Go Faster or Go Slow, Open versus Closed." There's a lot to discuss, and my view on this has been shifting over time; I'm curious whether yours has as well. There's a lot of talk about artificial general intelligence: where are we really right now in the development, how fast are things truly moving, and should people be concerned about it or excited about it? Other questions are: can it be regulated, can it be slowed down, is that even possible? Then I think an important conversation, really to highlight what Stability is doing, is open versus closed. And one final question: can we actually get to AGI using large language models, or do we need another structural transformation to get there? So, shall we begin?

Yeah, let's talk about the future of humanity. Let's go.

I love that. And you and I share the same vision of really creating abundance around the world, uplifting every man, woman and child, and really using AI for global education, global health, dealing with the grand challenges and slaying them. Let's kick it off with: how fast are things moving, in your opinion, and should people be concerned about the speed of progress right now?

Yeah. I mean, every day there's the outside view of "what's going on?" and the inside view of "what's going on?", right? It's like Everything Everywhere All at Once. Not only are all the smartest people in the world piling in on this (80% of the research in AI now is in this generative AI, foundation model field), but if you look at the graph of arXiv ML papers, it's a literal exponential: people are implementing, exploring and understanding these things like nothing else. The final bit that's really interesting is that nearly half of all code on GitHub now is AI-generated, and the average coder using Copilot, per a micro-study, is about 40% more efficient.
So people are using the AI to build better AI, and that becomes an interesting feedback loop. Against that, there is the hardware question. There was a paper in 2017, "Attention Is All You Need," which led to this boom: it showed how to take AI from extrapolating large data sets to instead paying attention to the important bits of information, just like we do when we start forming principles and heuristics. It turned out that was very amenable to scale, so the original thing at OpenAI and others was just applying scale, more and more compute, which happened at the same time as the GPU buildout that Jensen had been working on for years at Nvidia. The previous generations of supercomputers were GPUs, Nvidia ones in particular, stacked on top of each other, but as you get to a thousand, two thousand chips, you stop being able to scale, because the information can't get back and forth fast enough across them. That has been fixed now, as of this new generation that's about to hit: the Nvidia H100, the TPU v5s. It basically scales almost linearly to thousands and thousands of chips. To give you an example, the fastest supercomputer in the UK: 640 chips. NASA has about 600 as well. And now you're building the equivalent of 30,000-to-100,000-chip supercomputers using this new generation of technology, because you can stack and scale. It's insane. We've got about six months before that hits, and then GPT-4-level models will be available to, well, not quite anyone, but more and more companies. GPT-3, when it was trained, I think took three months, and that was two years ago; on the new supercomputers you can train four of those a day.
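A rough sanity check of the "train four GPT-3s a day" claim follows; the parameter and token counts, per-chip throughput, utilization, and cluster size below are outside estimates and assumptions, not figures from the episode.

```python
# Back-of-envelope: training time for a GPT-3-scale model on a large H100 cluster.
# All numbers are rough public estimates / assumptions, not from the episode.
params = 175e9                      # GPT-3 parameter count
tokens = 300e9                      # approximate training tokens
train_flops = 6 * params * tokens   # common ~6*N*D rule of thumb for training FLOPs

peak_flops_per_gpu = 1.0e15         # ~1 PFLOP/s per H100 at low precision (vendor peak)
utilization = 0.35                  # assumed real-world model-FLOPs utilization
n_gpus = 30_000                     # a "30,000-chip" cluster, per the quote

cluster_flops = n_gpus * peak_flops_per_gpu * utilization
hours = train_flops / cluster_flops / 3600
print(f"~{hours:.0f} hours per GPT-3-scale training run")   # on the order of 8 hours
```

Under these assumptions, a 30,000-GPU cluster lands at a handful of GPT-3-scale runs per day, so the claim is at least the right order of magnitude.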
That's extraordinary. And so how many large language models are there going to end up being, do you think?

I think it'll only be a few. These things are quite complicated; they're actually like game consoles. A lot of the ChatGPT functionality was already present in the model over a year ago, but it just wasn't in a nice, usable, testable format, so you could explore the capabilities of that game console or game engine. The way I put it is: look at the games at the start of the Wii or PlayStation versus the games at the end of that life cycle; exploring that space is very important, and people don't want dozens of consoles. Stable Diffusion, our image model, is one of the most popular pieces of open-source software ever, and only five companies in the world that we know of have created their own versions, because why not just use the one that's already there? To put the popularity in context, it beat Bitcoin and Ethereum in cumulative developer stars in about two months, and the whole ecosystem has now overtaken Linux, which is insane for six months. And yet only three companies built their own, because why would you? So I think in the future it'll probably be just a handful of companies building proprietary models (Microsoft/OpenAI, Google DeepMind, Nvidia, a few others maybe), and then models where you can see the code, the weights, the data and so on, which is essential for private data.

So the question ultimately is: it's great to have powerful large language models, and the next generation coming online are amazing, but do we need more than that? Are GPT-4-equivalent large language models enough to give us the benefits to humanity without bordering on the existential threats of AGI? And ultimately the parallel question: are large language models sufficient to get to AGI, in your opinion?

Yeah, so we have to define what AGI is first, because different people have different definitions. An AI that can do just about anything a human can do? Well, you know, half of humans are below average intelligence, right (okay, mathematicians, I know that's not quite correct), but you know what I mean: the bar for "average" is not that high. And you see GPT-4 passing the bar exam and the GRE; maybe it'll go to Stanford soon. It's getting there for these specific things, but it's not general as yet. The worry we have is that as we scale these models they show emergent properties, and we feed them crap, the whole internet, at the moment. So what we have right now are models that are like really talented grads that occasionally go off their meds, and we've fed them crap. That's the base model. Then we do this thing called reinforcement learning from human feedback, where we tell it what a good and a bad response is. You take these really creative models: you're taking categorized, curated data, hundreds of terabytes of it, and compressing it down to just a few hundred gigabytes. With Stable Diffusion it's a hundred thousand gigabytes of images and the output is a two-gigabyte file; of course it can't know everything. But you're telling it "you must know these things," so you reduce its freedom, its creativity, because these aren't thinking machines, they're reasoning machines: you're instilling certain principles and then putting that mask on the front to acclimatize them to humans. But you start with a very fragile base, and this is the concern with the scaling: what happens if, as you scale, it learns to lie and other things, and it starts being agentic? We don't know. To get to AGI, though, in terms of something that can be as capable as a human across a generalized set of principles, from something that has basically learned by reading everything, I don't know if this is enough. It may be, or it may not be.

Well, there was a very seminal paper by... sorry, I'm just going to say it: the thing that is interesting, of course, is the rate at which these large language models are surpassing humans in coding capability, and I guess the question is, will it get recursive and allow them to improve their own models?

You could say that already. The overall ecosystem (just like Bitcoin incentivizes humans to buy ASICs, this whole conceptual thing) has already made programmers 40% more efficient, from that study, if you frame it as a human-computer-hardware system. So if it gets to the level of a human, that's one thing; ASI is when you go superhuman, and we don't really understand what that looks like. There was an interesting system by Meta called Cicero, where language models are combined with other models, and they outperform humans at the game of Diplomacy; so they can convince humans of things, and that's today. So we don't know where the takeoff point is, where it can start improving itself, and we don't know when it gets to human level. We can just see it's as good as an average human: GPT-4 out of the box, without tuning, can pass the Level 3 Google programmer interview, and it can pass the medical licensing exam, something I never finished doing; I got through part two but not part three. It's extraordinary, and that's this year.
And of course humans are not getting smarter, but every year we're going to see continued progression of all of these large language models.

Can we take a second? I always tend toward the abundance side of the conversation, and I'm clear on all the amazing things these large language models are enabling and will enable. Let's just take a second to list people's concerns, so we can address them individually. What would be top of your list?

So, we talk to a lot of incredibly talented creatives and artists and programmers and others, and the top-level people love this. There's an MIT study that showed recently that people got something like 20 to 30 percent better at writing reports, but the top five percent got way, way better. That's what we typically see with people and tools: the top creatives love this; for those who are not so good, it raises the average level, so you have to get better, and that's difficult for some people to embrace. Like Lee Sedol being beaten by AlphaGo at Go: the average level of a Go player has gone up dramatically since, and he got way better. It's not humans versus AI; it's human-plus-AI versus AI, in that sense.

Yeah, or versus humans, right?

Right. Again, you either have to embrace it or you can become irrelevant very quickly. There are certain jobs that are sticky and certain jobs that are not: regulated industries are quite sticky, but something like Indian BPO probably is not, nor is basic programming. So this is a real danger, a real threat. There are questions around the data sets being used, because they typically come from scrapes of literally the entire internet; what are the rights around that? There are questions around the output: is it computer-generated, is it copyrightable? And there's finally the question of whether we are building something more powerful than a nuke, an existential threat to humanity. What does a superintelligence look like?

Yeah. I remember Sam Altman wrote a blog, and I'm going to just quote from it. He said: "Some people in the AI field think the risks of AGI (and successor systems) are fictitious; we would be delighted if they turn out to be right, but we are going to operate as if these risks are existential." And so the question is: if in fact you're operating as if the risks are existential, why are you building potentially existential tech? That's an interesting conversation and debate to have.

Yeah. I mean, he was interviewed on ABC recently, because there are only a few people that can build this tech, and the interviewer asked: if there's a five percent chance that this tech could wipe out humanity and you could push a button to stop it, would you do it? His answer was, I would push a button to slow it down; we can wind it back, we can turn it off. That didn't make me feel too comfortable. My base assumption (I was talking to the founder of a trillion-dollar company earlier this week) is that an AGI will probably find us boring, just like in Her, my favorite AI movie, the one that was the most realistic and the least dystopian.

Her, that's great, yeah.

You know, broken hearts: just like when Replika turned off their boyfriend/girlfriend feature on Valentine's Day, they broke tens of thousands of hearts. I think that would be the case. I spoke to Elon last week, and he said he thinks that if we make the AGI curious, why would it have any reason to harm us? Because, again, we're interesting in some ways. I don't think we're that interesting, but I could be wrong.
Are you familiar, Emad, with Mo Gawdat, who was chief business officer at Google X? He wrote a book called Scary Smart, which I read recently, and his basic thesis is that the large language models are our progeny: they're learning from reading our content, which they are, and they're learning from how we interact with each other and how we treat our machines. And if we're good parents (you and I are both parents of biological organisms), if they see us respecting each other, they'll tend to follow suit; but if we are disrespectful and harming each other, they might tend, at least in the early days, in that direction. People don't realize large language models are a reflection of human society to a large degree.

I've been sympathetic to that view. When web3 came about, we tried to create a system outside the existing system, and all the money was made and lost at the intersection. These models are literally trained on our collective consciousness, on the internet; it's the equivalent of taking a super-precocious kid, taping his eyes open, and showing him everything. No wonder they sound a bit weird at times, right? So I think we should probably be feeding it better stuff and teaching it better stuff, but right now we're at the bulking-before-cutting point. That's why you might have seen these shoggoth memes, this eldritch, weird tentacle being: that's what it comes out of the oven looking like after the deep-learning stage, and then you have reinforcement learning with human feedback, where you tell it to act human, so the shoggoth becomes more and more human, but there's still something lurking behind. Feeding it better stuff, I think, is what we've got to do now, rather than racing ahead, because again, I think we'll probably be fine, but I'm not sure. And these things are becoming more and more capable, because as you scale you see emergent properties. What happens when you've got... like, in the GPT-4 paper, my God, they said they experimented in a closed-loop system, giving it some money and telling it to go and make more money, and it was hiring TaskRabbit people and things like that. I said, that's not a good idea; you don't know what happens there.

I don't know what happens. It's trying to create jobs for us!

But then again, this is the amazing thing. We don't officially, publicly know the details of GPT-4, but Nvidia said they designed their new H100 NVL, the linked ones, for it. That's just two 80-gigabyte chips stuck together, 160 gigabytes of VRAM, and it runs GPT-4 with all its capabilities, which implies it's about a 200-to-250-billion-parameter model on a single unit. That's crazy. So what does that mean when you start chaining them together and you get them to check each other's outputs? I can tell you, it gets a lot better, and there's a lot more that I don't particularly want to discuss, but I don't know what the upper limit is, even at this stage. And this is before the massive ramp-up in compute that's coming. I know of clusters of H100s; our cluster is probably the tenth-fastest public cluster in the world right now, and the new supercomputers are 20 times, 40 times faster. It's an insane pickup that will then lead to emergent properties, and this becomes, again: I don't know what happens, we don't know what happens, nobody knows what happens. I think it'll probably be fine. I could be wrong.
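Emad's jump from "160 gigabytes of VRAM" to "roughly a 200-to-250-billion-parameter model" is easy to sanity-check; the weight precisions below are assumptions chosen for illustration, since the actual serving setup isn't public.

```python
# How many parameters fit in 160 GB of VRAM at different weight precisions,
# ignoring activations and KV-cache (so these are rough upper bounds).
vram_gb = 160
for precision, bytes_per_param in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    max_params_billions = vram_gb / bytes_per_param
    print(f"{precision}: up to ~{max_params_billions:.0f}B parameters")
# fp16: ~80B, int8: ~160B, int4: ~320B -- so a 200-250B model on one unit
# implies weights stored below 8 bits, or part of the model held elsewhere.
```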
This episode is brought to you by Levels. One of the most important things I do to try to maintain my peak vitality and longevity is to monitor my blood glucose, and more importantly, the foods that I eat and how they spike the glucose levels in my blood. Glucose is the fuel that powers your brain, so it's really important; high prolonged levels of glucose, what's called hyperglycemia, lead to everything from heart disease to Alzheimer's to sexual dysfunction to diabetes, and it's not good. The challenge is, all of us are different and respond to different foods in different ways. For me, if I eat bananas it spikes my blood glucose; if I eat grapes it doesn't; if I eat bread by itself I get a prolonged spike in my blood glucose levels, but if I dip that bread in olive oil, it blunts it. These are things I've learned from wearing a continuous glucose monitor and using the Levels app. Levels is a company that helps you analyze what's going on in your body with continuous monitoring, 24/7. I wear it all the time; it really helps me stay on top of and remain conscious of the food I eat, and understand which foods affect me based on my physiology and genetics. On this podcast I only recommend products and services that I use, not only for myself but for my friends and my family, that I think are high-quality, safe and really impact a person's life. So check it out: levels.link/peter gives you two additional months of membership. It's something I think everyone should be doing; eventually this stuff is going to be in your body, on your body, part of our future of medicine. It's a product I'm going to be using for years ahead, and I hope you'll consider it as well.

So let's talk about the other question, which has been the debate over the last week or two: can we slow down development, and should we slow down development? I remember asking a question when I was in college: if Einstein had understood what E = mc² would lead to, the nuclear bomb, would he have stopped thinking about it? Would he have stopped developing it? So, on the question you mentioned earlier, Sam Altman being asked whether, if there's a five percent chance, he would halt it or slow it down: can we regulate, can we even regulate against this? Because we live in a world of porous borders, and technology very quickly becomes globalized; we might slow it down, but I don't know if other nations around the world would. So what do you think is possible here?

The question he was asked was "would you turn it off," not "would you slow it down," and he shifted it to "I'd slow it down," which was interesting. There are only a few entities even now that can train these large models, literally a handful, and if you look again at the OpenAI AGI post, they said it's time to offer transparency to governments, transparency on governance. One of my big issues with OpenAI and DeepMind and others is that there's no transparency around the governance of these systems, systems that could overthrow our democracy and that could kill us all. Again, I don't think they will, but I could be wrong. I think it isn't so much a slowdown as that there can be a stop on training runs beyond this level before the big ramp-up.
Because people actually come and say, "What if China creates an AGI?" And I'm like: they don't have the chips, they don't have the know-how, you've seen no evidence of that, and they don't want to overthrow their system, they want to perpetuate it. And you know the best way for them to get an AGI? Send someone with a flash drive into DeepMind or OpenAI and just download it. Again, it fits on a flash drive; why would you bother? It's much cheaper to do it that way. So now is the time to put in proper governance, proper oversight. Again, the OpenAI blog on this lists a whole bunch of things they say they'll do in the future; I think the future is now, you should do it now. And you should put proper security systems in place, if you think it's that dangerous, so you don't get files leaking everywhere around these really large models (which I don't think is the optimal way to do things, by the way, but we can discuss that later). You could have been doing this six months ago, three months ago; in six months it will be too late for a public debate about systems that could take away our freedom, that could kill us all, and everything in between.

Did you actually sign the petition that was being circulated last week?

I believe I was the only private AI company CEO, apart from Elon, to sign it, yes.

Amazing.

I didn't agree with everything in there, but again, I think now is the time.

And was a six-month slowdown the right thing? How do you define that versus a pause?

You're never going to get a pause, because of market pressures. For me, six months is the amount of time you need to get used to these new next-generation systems that are landing, which can scale almost infinitely, so it seemed like roughly the right thing. And again, even six months ago was when Stable Diffusion first came out; it's been what, three, four months since ChatGPT first came out. You need it to permeate around the internet, and you're seeing things now like ChatGPT actually being banned in Italy. You're seeing the haves and have-nots, and you're seeing competitive pressure, because this is now big business; it's affecting other companies.

I remember when Uber was banned in France, and I thought, okay, we're going to start to label countries as pro- and anti-technology. It's kind of insane to think that you can ban these types of technologies and still remain competitive on a national or global stage. Can we flip it over for a second? The end point of all of this technology is to make us, in one sense, superhuman: to allow anyone to be educated, to be a creative agent, to be extra healthy, adding decades onto one's life. Is the goal of this level of AI development to allow us to work less, to be more productive, to create abundance on the planet? What's your vision of where we're going?

I think everyone that's building this, maybe with some exceptions, is actually doing it with good intentions, because they want to create something that can help humanity. Now, some people want to automate humanity: literally, you look at some of the statements from some of these leaders and they're like, "well, this will transform money and then we can redistribute it," and I'm like, oh, and you're in charge of that? Other people want to augment humanity. I'm very much in the augmentation camp, where I believe humans aren't computers, to be honest, and this allows us to scale humans. It's still early stages, the iPhone 3G stage, but any child in the world, within the next few years, will be able to have their own personalized tutor or personalized doctor.
And then this allows us to scale society as well, because the Gutenberg press was an amazing thing: it allowed us to write stories down, and we're driven by stories, but it's lossy. All those notes you take in your meetings and everything like that are lossy. Now, with the new systems (look at Office and Teams and all the copilots going into them), you fill in the gaps; it's no longer lossy. It automatically writes your emails and summarizes stuff for you. It can be that grad who's generally on their meds. It's going to be pretty awesome, I think, and a real changer to the way we all collaborate and achieve our potential. But if it's centralized, and that's the only option, then it probably won't go so well; I think we've had a history of that. What happens when new technology is controlled by a few hands? Power is really, really attractive.

Well, and this is the power that's going to drive a new global set of trillionaires, and, I think, a complete transformation of every industry. So let's talk about open versus closed. It began back now, eight-plus years ago, when Sam Altman and Elon talked about OpenAI being truly open, with a mission to help guide the development of AI for humanity. It began as a non-profit, and Elon put a hundred million in (he's pissed right now); it's turned into a for-profit and is anything but open. I love the fact that if you go to open.ai, it doesn't point at OpenAI, it points toward Stability AI, which is very interesting. But you made a conscious choice to build an open platform. Can you speak about that?

Yeah. I thought that ethically it was the correct thing to do, and it was a better business as well, because the value in the world is in private data and knowledge, and people want control. Again, if these are grads that occasionally go off their meds, using OpenAI or Google or Anthropic or anyone like that is like hiring them from McKinsey, but you want to have your own. And then I thought this technology had to be distributed. When I came in as a relative outsider (I'd only ever been to San Francisco once before October, and I have a non-conventional background), I talked to a lot of AI ethicists who were like, "well, we have to control this technology and who has access to it until it's safe," and I was like, when will it ever be safe for Indians or Africans? I don't think it ever will be. It reminded me a lot of the old colonial mindset around access to technology. And I was like, if this can really activate every child's potential, you know, working with the XPRIZE for learning winners and deploying this with Imagine Worldwide (imagineworldwide.org), my co-founder's charity, into refugee camps, how can I hold it back from these kids?

By the way, just for those who don't know: many years ago Elon Musk and Tony Robbins funded something called the Global Learning XPRIZE. It was a 15-million-dollar prize asking teams to build software that could teach children, in this case in Tanzania, reading, writing and arithmetic on their own, and Emad was a partner in one of the winning teams. And you're now taking it much further, much faster, which is amazing.

Yeah. We did the RCTs, the randomized controlled trials, with UNESCO and others, and in refugee camps around the world 76 percent of children got to literacy and numeracy in 13 months, with just one hour a day of this basic software.
What if they all had a ChatGPT? They will change the world, right? So you figure out how to scale that to every child in Malawi, and then across Africa and Asia in the next few years, with the World Bank and others. And again, it was interesting to have this discussion, because things are very insular when you have power and control, and it's very tempting to keep power and control. This is one of the questions of how you view the world: are we stronger together, able to deal with any problem, or do I need to be in the lead and control it if it's a powerful technology, because people are fundamentally good, or bad?

First of all, that's a scary thought. I fundamentally believe that humans are good by their basic nature, and that the majority of humans are good and want to make the world a better place, but we do have to guard against those that have a bent arrow.

Yeah, and this is why I look at what the world's infrastructure runs on: it runs on Linux, it runs on open-source MySQL. That's the most resilient; Windows is not resilient. Which is a very interesting analogy, I think.

So how do you make money as an open-source company?

It's pretty straightforward. Every nation wants to have its own model (this is a question of sovereignty), so we're helping multiple nations do that. And every single regulated industry wants its own models, so we're helping them do that too, working with cloud providers, system integrators and others; lots of announcements to come. Private data is valuable; public data is less valuable, a bit of a race to the bottom, so I think people are looking on the wrong side of the firewall. And again, we'll see that going forward. But they're not going to stop: Microsoft, Google, Nvidia, the others will keep scaling, and now that there's actually revenue that can be generated from these models, you're going to move from research to revenue; hundreds of millions, billions, will be spent on training. And again, I think the scaling paradigm is one where you see all these emergent things, but I think it's an incorrect one to get us to, let's say, a level of AGI. The thing that I'm interested in is the human colossus: humans coming together to solve the world's problems, augmented by technology. I think collective and swarm intelligence will beat an AGI any day, and that's what I'm interested in: arming everyone with their own AIs, every single person, company, country and culture in the world, so we build a new intelligent internet that can solve any problem, because a lot of our problems are problems of information and coordination, and this technology enables that.

So a very different view from most of the other CEOs in this space: global collective augmented intelligence, as opposed to an artificial general intelligence that can replace humans, shall we say. Let me just mention we'll be going to questions from those watching and listening in a few minutes. So: is GPT-4 and its equivalents large enough for us to achieve what we want? Do we need to go to yet another generation? Can we build enough intelligence without tipping over into AGI?

Nobody will ever be able to tell you that there's less than 12 months to AGI, because you don't know until it gets there.

Yeah. I used to hate all the doomsayers on AGI. My standard statement was always: listen, I'm not worried about artificial intelligence, I'm worried about human stupidity.
And to a large degree that remains the same, right? What we're going to see with ChatGPT and all of these large language models is people's stupidity being amplified, and people's brilliance being amplified as well. So the question is, does it go out of bounds on the low side and cause us any real issues?

It's a dangerous point right now, because it's not quite intelligent enough to align, and you have amplification, where people have powerful tools. I think when it does get intelligent enough, there's no way we can perfectly align it, because alignment is orthogonal to freedom: we all know people more capable than us, and the only way to make sure they do exactly what we want is to take away their freedom. I don't think a super-AGI would necessarily like that, but again, I think we're kind of boring. So I think we're in this dangerous space now, between those two points, where it can't yet be self-correcting, where it can't yet say, "Peter, that's stupid, why are you going to do that? Don't be a dick." And these are dangerous, powerful technologies that can be downloaded on a flash drive: why would you bother training your own when you can just take one and use it in really original ways? I don't want to get into too much detail, but it's probably not just deep learning: we've already implemented reinforcement learning, and there are all sorts of ways you can create much better systems. The simplest one: just get two GPT-4s to check each other's answers, and everything becomes better.
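The "two models checking each other's answers" idea Emad mentions can be sketched in a few lines. This is a minimal, hypothetical illustration: ask_model below is a placeholder for whatever chat-completion client you use, not a real API, and the prompts are only examples.

```python
# Hypothetical draft-critique-revise loop between two models.
# ask_model() is a stand-in; wire it to a real chat client before running.

def ask_model(model: str, prompt: str) -> str:
    raise NotImplementedError("connect this to your chat-completion client")

def cross_checked_answer(question: str, rounds: int = 2) -> str:
    answer = ask_model("model-a", question)                      # first model drafts
    for _ in range(rounds):
        critique = ask_model(                                    # second model critiques
            "model-b",
            f"Question: {question}\nProposed answer: {answer}\n"
            "List any factual or logical errors, or reply exactly 'OK'.")
        if critique.strip().upper() == "OK":
            break
        answer = ask_model(                                      # first model revises
            "model-a",
            f"Question: {question}\nYour previous answer: {answer}\n"
            f"A reviewer found these problems: {critique}\nWrite a corrected answer.")
    return answer
```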
It's not surprising. I have to say, having you as the CEO makes me feel safer, at least for Stability AI. I feel you have a mission to build a strong and viable company while uplifting humanity and doing good in the world, which is important. I think that has to be at the core for all of these companies, because the technology is so fundamentally powerful.

Yeah. Realistically, Sam has an amazing mission, Demis has an amazing mission, Dario has an amazing mission; everyone who leads the companies that could build this type of technology is quite mission-driven. It's just that none of us should be in control of it. I don't know how it should be governed, but I think this is the time for a public debate around it, and everyone should realize that this is taking off crazily. That's what democracy is about.

You know, I remember when I was in medical school years ago, restriction enzymes were first coming online and there was a huge amount of fear-mongering about designer babies and being able to clone children; we were playing with god-like powers and it was going to lead to killer viruses. This was in the late '80s, early '90s, and there was concern about regulating it, and whether it even could be regulated, because you could sneak restriction enzymes out in a few picoliters of fluid. What came together was a series of summits, the Asilomar conferences, in which the heads of the industry got together and self-regulated, rather than the government coming down on them. Have you seen that conversation taking place here at all?

I think it's starting to emerge now, because we need to step up as an industry; we need to increase transparency over our governance and how we make decisions. Like, last year OpenAI, with DALL-E 2, forbade every Ukrainian from using it, and if you typed in a Ukrainian term they threatened to ban you. Why did they make that decision? Who knows. Is there any redress? No. So we have to be really mindful about things like this and set industry norms, and now, I think, it's time to come together, because this is a big deal. We want to do it before it's forced upon us, but at the same time the discussion needs to be widened, because this affects every part of life. It's kind of insane; we've never seen anything like this before, and we have to do it quickly. This is the last window we have before everything takes off.

That's one of the things: I remember when we first met, I asked you, is this time different? Because we've been hearing this conversation forever. So you're pretty clear, definitively, that this time it is fundamentally different.

Dude, I mean, any coder that uses GPT-4 is like, wow, look at that; it's multiple times better, right? The thing can write better than me, it can code better than me, it can explain better than me. I think people don't want to believe that things are different. I recently gave some talks saying this is as big an impact as the pandemic, because all information flow changes; things that involve atoms don't change, but so much of our life is about information, and this has made a meaningful difference there. Just play with it, try it, go in depth, and you can't come away with any other answer, I think.

Let me go down a few of these. We've talked about education and the potential there. If you would: five years from now, what's your vision for the potential of generative AI in global education?

Every child has their own AI that looks out for them and customizes to them. Are you an auditory learner or a visual learner? Are you dyslexic? That's all information flow, and you can fix all of that. And it brings students together to work together as well, because most of school right now is a childcare system mixed with a status game. That's why, Peter, we were having this discussion before: an incredibly smart young kid asked a question about how schools say it's cheating to use ChatGPT, and I'm like, it's not a competition, education, right? We made it a competition; education is about actualization. So I think these things will fundamentally change the way we learn, when everyone's got their own amazing teacher, the Young Lady's Illustrated Primer from The Diamond Age. How exciting is that? And 100 percent, we can get there in five years, for everything in the world. Well, just about everything.

That's amazing. That will change the world, for the cost of, fundamentally, electricity, which is getting cheaper all the time.

Yeah, let's put it this way: GPT-4, if it does run on two H100s as Nvidia has indicated, is 1,400 watts; the human brain is 25 watts.

Yes, especially when we start feeding it junk. [Laughter] All right, let's go to health next. You and I are both passionate about that; you've done a lot in your life in the health area. One of the beautiful things is that all eight billion people on the planet are biologically near-identical, so something that works for someone in India or Iraq or Mexico is the same thing that works for you in San Francisco. What's your vision there?

Yeah, just like education, healthcare is about increasing the information density and organizing knowledge. When my son was diagnosed with autism, they said there's no cure and no treatment.
So I built an AI system to analyze all the medical and clinical trials and do a first-principles analysis of what could cause it, and then we did drug repurposing to ameliorate the symptoms. That should be available to everyone; anyone who's had a difficult condition knows the process of finding knowledge is so complex. Our system is ergodic: it treats a thousand tosses of one coin the same as a thousand coins each tossed once. A percentage of the population has a CYP450 mutation, which means you metabolize things quicker, so your codeine becomes morphine; those are the ones that die of fentanyl, and how would you even know that? We have all the tools in place now to have your own personal doctor, to have all the knowledge in medicine aligned so we can see across it and have that information set. And I think as we do this with longevity, with cancer, with Alzheimer's, within ten years we'll be able to cure all of these, because the knowledge is all there; it's just not all in the right place. Everyone's trying to do their own thing; the lags on clinical trials, the information leakage, it's ridiculous. We can capture every piece of information on that.

Yeah, people don't realize that all of the failures in these trials are extraordinarily valuable information that is not captured and not utilized at all. The other thing, just to hammer it home, because those of you who know my work know I'm focused on longevity and age reversal, on how we live longer, healthier lives: this decade is different, and we're going to understand why certain people age, how to slow it, how to stop it. A lot of it is going to come out of the work that companies like Stability and others are doing. Then we have quantum technologies coming, which are going to pour fuel on the fire. My guess is that within five years or less it'll be malpractice to diagnose without AI in the loop.

Yeah, I think it'll be humans plus AI; it won't be AI alone that diagnoses. Again, these copilots for everything are going to be the way, just like self-driving cars, which I think are imminent in terms of getting to that level four, level five. In five, ten years you'll have to really prove yourself to drive on the road, because the AI is better. You'll have an army of graduates that will become associates, that will pass the bar, that will pass their medical exams over the coming years, and you can replicate them almost without limit.

Yes, and what people don't realize is that every time an AI learns something, it shares it with every other AI out there as you update the models. AIs are getting better every year, while we humans pretty much plateau after a certain point.

The other thing I need to really drive home here: when you're trying ChatGPT and GPT-4, that's just one of them. Imagine if there were a hundred of them checking each other; no doubt the outputs would be even better than what you've seen. And then imagine three or ten of them learning everything about you. Right now we're running them singly; soon we'll be parallelizing these things. It's just like having a hundred grads, exceptional and generally good, all working around you without you having to oversee them. Your life would be better.

When we were on stage at Abundance 360, we were talking about an interesting subject: who would you hire into your company, what background would you require, and what is the most important skill that people will need in this world of exponentially growing AI? And your answer was passion.
All the way around. We've had 15-year-olds contributing to our open-source code base, and 60-year-olds, right? Because you don't know where it comes from. You have to be passionate and throw yourself into this, because it's going so fast; if you're not passionate you won't be able to keep up. And then you bring the latest breakthroughs to your company and communicate them, and you can use the technology to communicate them even better, which is kind of awesome; it'll automatically prepare the presentations and slides for you. So I think passion is the key thing, plus a level of intelligence and capability. But if people aren't passionate, they're not going to be able to keep up here, and that's the key thing.

Hey everybody, this is Peter, a quick break from the episode. I'm a firm believer that science and technology, and how entrepreneurs can change the world, is the only real news out there worth consuming. I don't watch the crisis news network, as I call CNN, or Fox, and hear every devastating piece of news on the planet. I spend my time training my neural net, the way I see the world, by looking at the incredible breakthroughs in science and technology, how entrepreneurs are solving the world's grand challenges, what the breakthroughs are in longevity, and how exponential technologies are transforming our world. So twice a week I put out a blog: one looks at the future of longevity, age reversal, biotech and increasing your healthspan; the other looks at exponential technologies like AI, 3D printing, synthetic biology, AR, VR and blockchain. These technologies are transforming what you, as an entrepreneur, can do. If this is the kind of news you want to learn about and shape your neural nets with, go to diamandis.com/blog and learn more. Now back to the episode.

Listen, I would love to open it up for some questions, if you're open to that.

Yes, I am.

Hey Emad, how are you? First question, Emad: how's your noggin doing?

It's good, man. I had a bit of a fender bender, so my brain was jiggled for the last few days, but I think it's back on track.

Okay, please make sure to check on it and make sure it's all in order, because we need those neurons continuing to fire for a few more years to come. So this is my question, and I'm very curious about your thoughts on it. We are clearly on a trajectory now where we're going to have a Cambridge Analytica multiplied by a billion times, assuming you're familiar with Cambridge Analytica and what they did.

Yeah, yeah.

So now imagine, on a website like Character.AI, that I can go and create characters that are literal replicas of people, like a digital version of Elon Musk, and if it's modeled very correctly, then I can start playing arguments against it, just like AlphaGo played itself, and I would play it against itself for billions and billions of iterations until I find the perfect way to get them to do whatever I want. So it's "AlphaSubjugate": I can subjugate any citizen to my will. And you said earlier it's very tempting to keep power and control. How are we going to react, how are we going to solve that? That's a very big problem, in my opinion. I'm curious about your thoughts on it: how do we stop that from happening?

That's going to happen; it's inevitable, right? And it's here pretty much now; creative ways of using this for mass persuasion are already out there.
You already get robocallers that call up with the voice of your grandmother saying it's an emergency, and you can't tell the difference. This is going to go massive, and voice is very, very convincing. So I think the only way you can deal with this is to build your own AI to protect you; you need to own those AIs, and they need to look out for you. And then trusted authentication schemes, you know, the Twitter tick on steroids. We have to move fast, because, as you said, armies of replicants are coming, as well as people gamifying that particular thing. We shouldn't talk too much about it, though.

So countermeasures. You're saying it's countermeasures, right? Just like virus and antivirus: we need something that acts as a countermeasure fast enough to catch this before it becomes much worse. Is that something any of us are working on? Should we unify efforts to work on that?

Yes. I've got some things in that space. Yeah, antivirus for AI, there we go.

So, to close out: as I'm fond of saying, the world's biggest problems are the world's biggest business opportunities, and believe me, as that issue comes online there will be multiple entrepreneurs looking to solve it.

Emad, within Stability AI one of the initiatives is OpenBioML. I was wondering if you could speak about the best case: where is that going, and where will we be in five years if your best-case vision turns out? What will that look like?

Yes, our medical and computational biology one. We're one of the main backers of OpenFold, to do protein-folding work of that kind; there's DNA-Diffusion, to see how DNA folds; and there's work on predicting chemical reactions based on language models. There's a whole bunch of things, and I just think the opportunity is massive, because again, the data quality is poor, but now we can create synthetic data, we can analyze data better than anything, and we can figure out how chemicals interact. The work of DeepMind and AlphaFold was amazing, and now we can do things in silico, so we can test chemical reactions and drug interactions; when you combine that with a knowledge base of all the medical knowledge and research in the world, and the ability to extract patterns from it that are superhuman, I don't think there's anything we can't cure, honestly. But we've got to build this as a public common good, in my opinion, and give these tools out to the experts to use, and we have to do this very intentionally from the start. We also have MedARC, which is the healthcare equivalent, doing things like synthetic radiology to create more data sets of rare lung diseases, and it's working great. So I think community commons is the way.

And by the way, all of this is how we create an abundance of health around the planet. The best diagnosticians on the planet are going to be AIs, and the best surgeons will be robots driven by AIs, and that then becomes available everywhere. I just want to hit on a point I think is important for folks to hear: a world in which children have access to the best education and the best healthcare is a world that is more peaceful, which I think is one of our key objectives here: how do we uplift every man, woman and child on the planet?

I wanted to ask a personal question here, something I haven't had a chance to ask Peter yet about AI.
Peter, I know that you, and seemingly Emad, generally have an abundance-oriented, optimistic mindset, though I understand your views are changing. Is the potential threat of catastrophic job loss concerning to you both, and if so, how do you suggest people address finding meaning in their lives without consistent work or careers? I'd love to hear your thoughts, Peter, as I know you're typically really focused on this from an abundance mindset, and also yours, Emad.

So listen, I think we're lucky. Almost all of us here, taking the time to listen to this conversation, are extraordinarily lucky. I don't know what everybody does, but I'm pretty much guessing we're doing jobs that we love, that we dream about, that we're excited about. The majority of humans on the planet, unfortunately, are not doing what they love; they're doing what they need to do to put food on the table and get insurance for their family. So one of the things I think about is: how do you use these extraordinarily powerful technologies to self-educate and to self-empower, to go and do the things that you love, to augment yourself, to have a copilot, if you want to be a physician, a teacher, a writer, whatever it might be, that trains you on the job as you're doing the job? It's going to allow people to dream a lot more. I think the only fear I have about AI today is what I would call its toddler-into-adolescence phase, when AI is in its earliest AGI state, if we get there, before it's developed a fundamentally ethical, emotional capability. I think the more intelligent systems are in general, the more empathic and the more good-natured they will be. We live in a universe of infinite resources; I don't buy any of the dystopian Hollywood movies where they have to come and grab whatever off planet Earth. There's so much abundance of everything. I think we just have to deal with the early stages, where humans are using it in a dystopian fashion or the toddler doesn't know its own strength.

Yeah, I think people will create new jobs, and they'll do it very fast. I think the pace will be picked up by a mixture of open and closed technologies as well; you see the innovation around Stable Diffusion, new language models coming out, and other things. I'm really excited about emerging markets in particular: I think they'll lead on intelligence augmentation, just like they leapfrogged over the PC to mobile, and that will create ridiculous abundance and value there, because they embrace technology, because they need to and they want to, and there are amazingly talented people there who just haven't had the opportunity and now have the opportunity with this. I think it's a bigger economic disruption than the pandemic. I don't know in which direction, but I believe it will be positive, to be honest, and you'll see points added to the GDP of India and other countries once they get going.

This brought up a lot of questions; really great conversation. The one that I think is most pertinent to me: after hearing Emad talk, and I think Peter made the comment that he's thankful you are one of the leaders of this, leading your company, because you clearly have good intentions (you sound like a giver, not a taker, in the Adam Grant Give and Take sense), how do we control against the other players that have their fingers on the proverbial button? Because if you have people with really bad motivations, takers rather than givers:
imagine if Trump were smart enough to be the CEO of one of these companies. I mean, that is terrifying; I'd prefer not to picture it. How do we control against that?

I'll answer part of that. I'm an investor, an early-stage investor, and I think it is incumbent upon investors to be picking good actors over bad actors in who they back.

So anyway, it's a concerning note, because I'm glad that you're one of the good guys, but I'm sure there are also bad guys.

I think actually the most dangerous are the good guys, in some ways. Most of the evil in the world is done by people who think they're doing good. Hopefully not me, but you never know. I think the key thing here is transparency: if you are using powerful technology that has the ability to impact the freedom of others, you need to be transparent and you need to have proper governance and other things; like, you know, we build out in the open. And I think there need to be some mechanisms for that. Again, OpenAI listed a whole bunch of them in a blog post without saying when they'll do it. We need to implement that now.

That's really important. It's not just being open or closed; it's really being transparent, having an understanding of how the technology is being developed and utilized, and then having investors and governments... listen, I never depend on governments for much of anything, but it's not bad to have a set of requirements that society holds your feet to the fire on.

And this is the last time we could do that, in my opinion: this next period of six to twelve months.

That is the single most important thing I've heard said, Emad, putting that time frame on it, because of the clusters of H100s and the new capability that we're adding. What most people don't realize is that we're in this situation today, with these large language models and deep learning, because we've seen massive growth in computation over just the last few years, and a massive amount of labeled data over the last few years. The ideas for deep learning have been around for 30-plus years, but it's only now that it's capable, and that's adding fuel to the fire.

So yes, everybody, just listen up: the time frame is now. But just a quick note: how do you make that a forcing function, other than just a letter? I mean, the letter is great, but is that really leverage? What's going to cause that, and what happens if they choose not to and we don't pause for the next six months?

I don't think there'll be a pause; I think they'll continue, and nothing is going to stop them within that period. But I think they will feel more and more pressure to come together and build industry standards, and I think there will be policy responses, literally like we've just seen with ChatGPT being banned in Italy. And on the flip side, you don't have ChatGPT in Saudi Arabia. Why? Because OpenAI decided not to offer it. So we need to have some standards around this sooner rather than later, and it will be a mixture of public pressure, government pressure, investor pressure and more, I think. But it's not easy, man, it's not easy.

One of the things I truly hope, and one of the reasons I wanted to do this Twitter Spaces conversation with you, is that I really want to hear your voice in this world of AI. I know how brilliant you are, I know how giving your heart is, and I want people to hear you.
to hear more of you. We keep hearing from Sam and Elon and others like Geoffrey Hinton and folks at DeepMind, but I'm excited to hear you lead in this industry.

We'll do our best. I think, again, everyone needs to speak up; now is the time.

Hello, thank you so much for having me, Peter, Emad. I just had a question around community. With all this stuff happening with automation, things are going to change. Do you think we can start leaning more into humans finding purpose in in-person events? I just want to hear your thoughts on that: what role do in-person events have in this new future for humans?

I think we're a pro-social species, right, and AI can help us interact better with the people around us. You know, we'll come out of this COVID weirdness, and now that I'm getting used to meeting people in person again, I think it's super important. There was an interesting study in Argentina where, rather than giving direct cash handouts, they offered universal basic jobs: pick your job for your community. People loved it, because they had purpose, they had meaning, and it brought loads of women into the workforce, and they said, don't send us back to direct cash handouts. People like being with people, fundamentally, and so we've got to use this technology to increase engagement with other humans, rather than that WALL-E type future of the fat guy with the VR headset just being motored around. I think people like stories, and stories bring us together, and this will allow us to tell better stories.

Yeah. Up until now, connecting and building community has typically been almost a random process: if you happen to be in the right place, if you happen to read the right thing or sign into the right community. Imagine systems that are able to proactively gather people who are great matches and didn't know that they were. So I think we're going to see a lot of things accelerate in the directions that we choose. So what do we want?

And be careful what you ask for. Again, look at the things we've already made, good and bad; there's massive velocity now in being able to capture some of the complexities of humanity. Like, you know, when you watch Moana, why does Maui pick up the sun with a fishing hook from the ocean? Because it's all they ever knew. We can start to capture these stories of the world so we can better understand each other and engage, or vice versa. This is up to us now.

I have two very quick ones, pick your poison, you can answer one or both of them. One is on the pause in development: what do you think the implications are for the economic competition between countries, and should we even be looking at it this way? And then the other question is that the conversation is really sharply focused on AI right now, but I think the real power is going to be in the convergence of different technologies. So based on what you're seeing, what do you guys think is going to be the next big tech to reach mass adoption the way that we've seen with AI in recent months, that's really going to accelerate this sort of convergence with AI and create really interesting things?

Cool, so on the first question, I'd say there are only two countries in the world that are actually building this AI, and that's the UK and the United States; you're not seeing GPT-4-level models in many other countries, and so the pause is basically about those two countries.
On the second one, I'm not sure; I'm going to punt that to Peter, because Peter's got a much wider view than me.

So what's interesting is, I think about something called user interface moments. When Marc Andreessen created the web browser, it became a user interface on top of ARPANET, and it made it accessible and usable. We can see this throughout time; even the App Store was a user interface moment. The ultimate user interface moment is, in fact, AI. If you don't know how to 3D print, and you don't know how to use any graphics programs, but you know how to describe what your intention and desire is, you could speak to your own version of Jarvis. And I think all of us are going to have our own version of Jarvis, our own personal AI that we've hyper-customized, that we give permission to know everything in our life because it makes our life better. You can say, listen, I'd like to create a device that's got a handle on it that looks like this; you describe it physically, and then in your AR or VR glasses you're seeing it come together, being shaped as you describe it. That technology is here today. Then you say, that's it, print, and it says great, and then you say, add it to the store, and it's available for anybody for free or for a penny. So AI is going to be the ultimate user interface for all exponential technologies: computation, sensors, networks, robotics, 3D printing, synthetic biology, AR, VR, blockchain. And that's when it starts to become interesting, because it used to be that you needed very specialized knowledge; now the knowledge you need is what's in your mind, what's your intention, what's your desire, and AI becomes your partner in implementation, going from mind to materialization.
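A purely illustrative aside, not something discussed as code in the conversation: in software terms, the "AI as the ultimate user interface" idea is intent-based tool routing, where the user states a goal in plain language and a front layer dispatches it to whichever specialized capability can fulfil it. Here is a minimal sketch in Python; the tool names and pipelines are entirely hypothetical placeholders, with no real model, printer, or store API behind them.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Intent:
    description: str  # natural-language description of the thing wanted
    action: str       # what to do with it, e.g. "print" or "publish"

# Hypothetical stand-ins for specialized technologies; none of these
# are real APIs, they just return strings for demonstration.
def design_3d_model(description: str) -> str:
    return f"<3D model generated from {description!r}>"

def send_to_printer(model: str) -> str:
    return f"printing {model}"

def publish_to_store(model: str) -> str:
    return f"listed {model} in the store"

# The "Jarvis" layer: route a stated intention to a pipeline of tools.
PIPELINES: Dict[str, Callable[[str], str]] = {
    "print":   lambda desc: send_to_printer(design_3d_model(desc)),
    "publish": lambda desc: publish_to_store(design_3d_model(desc)),
}

def handle(intent: Intent) -> str:
    pipeline = PIPELINES.get(intent.action)
    if pipeline is None:
        return f"no tool registered for action {intent.action!r}"
    return pipeline(intent.description)

if __name__ == "__main__":
    print(handle(Intent("a device with a handle shaped like this", "print")))
    print(handle(Intent("the same device", "publish")))

In a real system the routing itself would be done by a language model rather than a lookup table, but the shape is the same: intent in, specialized technology invoked on the user's behalf.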
I wanted to get Emad's and Peter's thoughts on something called the Factorio paradox. It's from the game Factorio, where once you learn to automate everything and build factories that build your factories, you would think that the work for the human decreases, but actually your work increases, because now you have the whole map to expand to. And I was wondering if there's a possibility that this might also happen with the adoption of AI tools, that human labor might actually become even more expensive and more rare. As we kind of see now, Codex has been out for a year, half the code, as you said, Emad, is now generated by AI, yet we're still finding it hard to hire people. So what do you guys think about that?

I think that's great; I've spent thousands of hours in Factorio, so I'm very familiar with it. And I think that's the thing: human discernment is still very important, and human vigilance is still very important. It's just that as we build new technologies, it lifts us up, and it adjusts where our point in that cycle is. So I don't think it'll replace it all; like I said, there will be no more coders as we see them right now. When I started coding, God, was it 20 years ago? About 22 years ago, when I was 17, we didn't have GitHub, all of that stuff; we started using Subversion, which had just come out. You kids have it easy these days, you know, those of you listening who do all this kind of thing. So we get better and better abstractions of knowledge, and the role of the human changes and becomes more valuable, because you can do so much more; it enhances our capabilities. That's the exciting thing, rather than, oh my God, no one's got anything to do, they're just at home getting fat and watching their own customized movies.

You know, there's another part of the conversation we haven't discussed, which is the coming Singularity, whether you believe in it or not. I feel we are moving very rapidly in that direction. Ray Kurzweil described it, actually Vernor Vinge described it first, and Ray projects it to be some 20 to 25 years out: the point at which the speed of innovation is moving so rapidly you can't project what's coming next. And there's another book that I loved, called The Zero Marginal Cost Society by Jeremy Rifkin, that talks about what happens when we have AGI and we've got nanotechnology and we've got abundant energy: everything becomes possible pretty much all at once, everywhere. We were making that joke on stage, Emad, you know, where I have effectively a replicator, a nanobot; materials are effectively abundant, energy's abundant, information is open source. So it starts to become an interesting world. And you know what I find even more interesting: it's going to happen during our lifetimes. We can get into a long conversation about whether we're living in a simulation or not, but it's the next 20 years, you know, it's the next five years when everything we're talking about today plays out definitively. But in the next 20 years, as we're adding decades onto our healthy lives, as brain-computer interfaces start coming online, things will move faster than we thought they moved this year. I think the estimate that Ray Kurzweil talked about, and that we've talked about at Singularity University for a while, is that we're going to see a century's worth of progress in the next 10 years. So what does that look like? And the biggest concern is, I think governments don't do well in this kind of hyper-growth, this kind of disruptive change. Young kids will do reasonably well, but governments, religions, are structured to keep things the way they are. Any reaction?

Yeah, I mean, I think they perpetuate the status quo. I think in decision-making under uncertainty, you minimize maximum regret. That's why I set up Stability, to offer stability at this time, so we can help out governments and companies and others and standardize the open-source models that everyone builds on; it's the foundation for the future.

I'm glad, I'm glad.

Yeah. And then, you know, we're setting up subsidiaries in every single country, and they'll eventually be owned by the people of that country; we've got interesting things coming that we'll announce. The interesting thing you said is, why is it happening like this? It's not Moore's Law, right; this is like Metcalfe's Law, it's a network effect that's happening right now, and that's why you're getting this acceleration, because network effects are really exponential, where everyone's trying out this thing, exploring this new type of technology, and sharing information back and forth quicker than anything you've seen, and all these technologies happen to be mature at the same time.
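As an illustrative aside, not from the conversation itself: the network-effect point can be made concrete with Metcalfe's Law, under which a network's value grows roughly with the number of possible pairwise connections, on the order of n squared, rather than linearly with the number of users. A minimal sketch in Python, with made-up per-user and per-link values purely for demonstration:

# Illustrative only: linear growth versus a Metcalfe's-Law-style
# network effect. The value constants are arbitrary placeholders.

def linear_value(n_users: int, value_per_user: float = 1.0) -> float:
    # Value if each user contributes independently of the others.
    return n_users * value_per_user

def metcalfe_value(n_users: int, value_per_link: float = 0.01) -> float:
    # Metcalfe's Law: value scales with possible pairwise connections,
    # n * (n - 1) / 2, i.e. roughly n squared for large n.
    return value_per_link * n_users * (n_users - 1) / 2

if __name__ == "__main__":
    for n in (10, 100, 1_000, 10_000):
        print(f"{n:>6} users: linear = {linear_value(n):>12,.0f}   "
              f"network = {metcalfe_value(n):>12,.0f}")

Even with a tiny per-link value, the quadratic term overtakes the linear one very quickly, which is the shape of acceleration being described here.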
So I can't see that far out. You know, everyone asks, why do we say 20 years? We just pulled it out; it's a thumb in the air, right? What we do know is that things are never the same again. You will never be able to set an essay for homework again, at any school in the world, and there's more and more of that coming. But these things also take time to fit into our existing systems. So, as you noted, programming copilots have been out for a year; they've got a lot better, but it takes time to integrate them into workflows. It's not like it goes and takes over everything at once. How long does it take to get into things? Like, 1.5 million people still use AOL, you know; Lotus Notes is still used around the world, and I think it makes hundreds of millions a year. This may change quicker than anything we've ever seen before, but it still takes time, and it's very exciting.

Yeah, it has changed quicker than anything we've seen before, and there will be something that moves ten times faster than it did. It is the most extraordinary and exciting time ever to be alive, other than perhaps tomorrow. Emad, I just want to say thank you. I know your heart and I know your mission, and I'm grateful for what Stability AI is doing. Thanks for joining me today, and thank you, everybody listening, and everyone who put in questions.

It's my pleasure. You know, I'm thankful to the community, and I think it's going to take all of us to guide this properly. So just embrace it, dig in, and, as you said, it's the most exciting time ever.

And if you haven't followed Emad on Twitter yet, please do; he tweets on a regular basis. Bye, pal, I'll talk to you in the days ahead. Thank you, everyone.
Info
Channel: Peter H. Diamandis
Views: 49,720
Keywords: peter diamandis, longevity, xprize, abundance, emad mostaque, stability ai, machine learning, stable diffusion, artificial intelligence, deep learning, ai art, generative ai, ai podcast, stability ai tutorial, stability ai stable diffusion, emad mostaque interview, emad mostaque ai, emad mostaque peter diamandis, emad mostaque stability ai, emad mostaque stable diffusion, AI race, ai race
Id: SKoYhcC3HrM
Length: 69min 2sec (4142 seconds)
Published: Thu Apr 06 2023