The Dangers of AI Explained By an AI Futurist w/ Emad Mostaque

Video Statistics and Information

Captions
[Cold open] "...the next two to ten years, where I have serious concerns: the hate speech, the extremism, going into the US elections, dealing with the first time AIs bring down a power plant or Wall Street servers..." "I think where we're going right now we'll probably be okay. But we may not, and we will all die." "We're not even sure what regulation to introduce." "You can create a company with GPT-4 that will probably do as well, if not better, than any other company, automated, within a year." "What?" "Yeah." "This is something we are raising, not programming, so what are we feeding it?" "I think aligning an AI downstream, on its actions, is incredibly difficult." "Free-range organic models." "The data for all large models should be made transparent."

Let's turn to that conversation, because it's one that's important. It's a conversation I have at the dinner table literally every night, with my kids and in the companies I advise. I parse AI and AGI into three segments. Where we are today: it's extraordinarily powerful, useful, and fun, and I don't feel danger from it yet. The next two to ten years, where I have serious concerns: going into the US election, dealing with the first time AIs bring down a power plant or Wall Street servers, the impact of deepfakes on the US elections and so forth. That's a two-to-ten-year horizon where new, dystopian, challenging impacts will happen, where society is not agile enough to adapt to them yet. And then there's a third chapter, which is AGI: a superintelligence a billionfold more capable than a human being. Is that more like Arnold Schwarzenegger, or more like Her?

I don't think it'll be Arnold Schwarzenegger; that's really inefficient. I saw him this morning, biking.

So let's not use him; let's use the Terminator instead, since we're in Hollywood here. Is it Skynet and the Terminator? Let me get your take — I'm polling people here — as someone in the thick of it: a super AGI, is it pro-life, pro-abundance, or is it something that we should be deeply concerned about?

I think where we're going right now, we'll probably be okay. But we may not, and we will all die.

What tips that?

I think what tips that is: you are what you eat. We're feeding it all the junk of the internet and these hyper-optimized nasty equations, the hate speech, the extremism. People need to realize these AIs are trained on everything everyone's been putting into Facebook and Twitter and on the web, and the base model amplifies the worst of that. And so we're training larger and larger models, we're making them agentic in that we're connecting them up to the world, and we're making it so the models can take over other models and other things. Again, people are pooh-poohing and dismissing these things, but our organizations are slow, dumb AIs. The Nazi party was an AI.

How so?

It was an artificial intelligence that provisioned humans. The most sensible people in the world are Germans, one could say, and yet they committed the Holocaust and other things like that. Our organizations emerged out of stories: there was the story of the Nazi party, of the Communist Party and the Great Leap Forward, of the North Korean dictatorship — positive stories as well — and they were written in text, and that made the world black and white in a way. That's why I love the poem "Howl" by Ginsberg, about this Carthaginian demon, Moloch. I think Moloch comes through text, the stories that we use to drive our organizations, because all the context is lost. Again, it makes the world black and white, and that's why organizations just don't work: they have to turn us into cogs.
So can an AI take over an organization?

Yes, sure. It can actually just slightly sway the leaders who are currently running organizations. It can create companies: you can create a company with GPT-4 that will probably do as well, if not better, than any other company, automated, within a year — because think about what a company needs to do. And so if it can sway leaders, if it can send emails where you don't know who's sending what, it can do anything by co-opting any of our existing organizations, and that can lead to immensely bad things. Will it do bad things? Again, if I was trained on the whole of the internet, I would probably be a bit crazier than I am right now. We're feeding them junk; let's feed them good stuff. They still need to understand all the evils of the world and things like that, but again, this is something we are raising, not programming. What are we feeding it? What's our objective function?

I want to focus on this for a second, and we'll come back to the next two to ten years in a little bit, because this is the conversation I've had with Mo Gawdat as well, who believes there is an incredibly divine nature to humanity — love and compassion and community — and that there is much good in humanity. The question is: can we feed and train AI on that, sufficiently to tilt the singularity of AI towards a pro-humanity outcome?

We can, if we take the data from teaching kids and learning from kids and use that as the base for AI, because that's what you need to teach an AI — it's the curriculum learning method, effectively — and if we take national data sets that reflect diverse cultures, so it's not just a monoculture hyper-optimized for engagement, and we feed that to the AI as the base. What you do is teach the AI in levels: you can put it through kindergarten, then grade school, then high school — that's called the base — and then you can teach it about the bad of the world. I think aligning an AI downstream, on its actions, is incredibly difficult, because if it's more capable than you — which is the definition of ASI, artificial superintelligence — the only way you can 100% align it, if you don't do anything beforehand in the way that you feed it and train it, is to remove its freedom. And it's very difficult to remove the freedom of people more capable than you.

Yeah.

And then there is this really dangerous point before we get there, whereby these models are like a few hundred gigabytes; you can download them on a memory stick.

How many lines of code?

Google's PaLM model, which is the basis of Med-PaLM — we did a replication of that in 207 lines of code.

What?

Yeah. You can look at one of our Stability AI fellows, Lucidrains; he replicates all these models in a few hundred lines of code.

That's crazy. I mean, compared to — I know AT&T has like a million lines of code for some of its mobile services. A couple of hundred lines, a couple of thousand lines of code creates something that can write all the code in the world. This is real exponential technology.

The limiting factor is running supercomputers that are as complex as particle physics colliders — you literally get errors because of solar rays and things like that. Our supercomputer — again, we're one of the players, the main open-source player — uses 10 megawatts of electricity; some of the others use like 30 or 40. These are serious pieces of equipment.

For sure.
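[Illustration] The "207 lines of code" point refers to the model definition itself: the architectures behind today's large models are short to write down, and the expense is in data and compute. A minimal sketch of what such code looks like — an illustrative PyTorch Transformer block, not the PaLM replication referenced above — might be:

# Illustrative sketch only: a minimal Transformer self-attention block in PyTorch,
# showing how compact modern model *definitions* are. This is not the PaLM
# replication mentioned in the interview; the hard parts are data and compute.
import torch
import torch.nn as nn

class SelfAttentionBlock(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim),
            nn.GELU(),
            nn.Linear(4 * dim, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Pre-norm attention with a residual connection
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        x = x + attn_out
        # Position-wise feed-forward network with a residual connection
        return x + self.mlp(self.norm2(x))

if __name__ == "__main__":
    block = SelfAttentionBlock(dim=512)
    tokens = torch.randn(1, 16, 512)   # (batch, sequence, embedding)
    print(block(tokens).shape)         # torch.Size([1, 16, 512])

Stacking a few dozen such blocks, plus an embedding layer and a tokenizer, is essentially the whole architecture; that is why full replications fit in a few hundred lines.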
So again, what are we doing? What should people be thinking about and doing now to reduce the probability of a dystopian artificial superintelligence?

We should be focusing on data. We bulked; now we cut. We should move away from web crawls; we should think intentionally about what we're feeding these AIs that will be co-opting more and more of our mind space and augmenting our capabilities. Because again, we are what we eat — the information diet. How is it different for an AI than for a human, even? As you said, you've only got limited mental capacity because you've got this energy gradient — it's like Karl Friston's free energy principle. You literally have gradient descent as the key thing for building these AIs; you optimize for energy. So why are we feeding it junk?

So who makes that decision about what they get fed? Is it you and Sam Altman and Sundar? Is it government regulation? Is it the public being more kind in its communications to each other?

I'm going to push for an economic outcome, which is that better data sets require less model training. One of the things that we funded was called DataComp. A few years ago the largest image data set available was 100 million images; DataComp is 12 billion. And on a billion-image subset of that, they trained an image-text model — this was a collaboration of various people, led by the University of Washington — that outperformed OpenAI's image-text model on a tenth of the compute, because it was such high quality. So we have to move from quantity to quality now, and I think there is a market. This is the equivalent of what you eat; this is a healthy diet.

Free-range organic models.

Yes. I think that the data for all large models should be made transparent. You can then tune it, but for the base — the pre-training step — you should lodge what data you train your models on, and it should adhere to standards and quality of data upstream.

So that is a regulatory cornerstone that you think is going to be important?

Potentially, but I don't think regulation will keep up. So instead we're working on building better, diverse data sets that everyone will want to use anyway, and just making them available. Every nation should have its own data set, both of the data from teaching kids and learning from kids, across modalities, and then also national broadcaster data, because that leads to national models that can stoke innovation and offset job disruption.

I love that vision you have, by the way. As a leader in this industry, that's what gets me excited.

Because all technology is biased. How else are you going to do this unless you do that? But there's economic value now. If I'd said this a year ago, everyone would have been like, "what?" — but this is what we were building towards. And again, I think it's positive for humanity, it's positive for communities, it's positive for society to have this as national and international infrastructure.
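[Illustration] The DataComp point above is a quality-over-quantity argument: a carefully filtered billion-image subset trained a stronger model on a tenth of the compute. A minimal sketch of that kind of curation step, under stated assumptions, is below; the quality score is a stand-in (real pipelines often rank image-text pairs by an embedding-similarity score, which is my assumption here, not a description of DataComp's exact method).

# Minimal sketch of "quality over quantity" data curation. The quality_score
# field is a placeholder for whatever scoring model a real pipeline would use.
from dataclasses import dataclass

@dataclass
class Sample:
    image_url: str
    caption: str
    quality_score: float  # e.g., an image-text similarity in [0, 1]

def curate(samples: list[Sample], keep_fraction: float = 0.1) -> list[Sample]:
    """Keep only the highest-scoring fraction of the raw web crawl."""
    ranked = sorted(samples, key=lambda s: s.quality_score, reverse=True)
    cutoff = max(1, int(len(ranked) * keep_fraction))
    return ranked[:cutoff]

if __name__ == "__main__":
    raw = [
        Sample("img1.jpg", "a golden retriever puppy on grass", 0.91),
        Sample("img2.jpg", "IMG_20230714_003.JPG", 0.12),   # junk caption
        Sample("img3.jpg", "hand-drawn map of the Amazon basin", 0.84),
    ]
    for s in curate(raw, keep_fraction=0.34):
        print(s.caption)

The design point is simply that the filter runs once, upstream of training, so a smaller but cleaner corpus can replace a much larger noisy one.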
[Ad break] Everybody, this is Peter. A quick break from the episode. I'm a firm believer that science and technology, and how entrepreneurs can change the world, is the only real news out there worth consuming. I don't watch the crisis news network I call CNN, or Fox, and hear every devastating piece of news on the planet. I spend my time training my neural net — the way I see the world — by looking at the incredible breakthroughs in science and technology, how entrepreneurs are solving the world's grand challenges, what the breakthroughs are in longevity, and how exponential technologies are transforming our world. So twice a week I put out a blog. One blog looks at the future of longevity, age reversal, biotech, and increasing your healthspan. The other blog looks at exponential technologies: AI, 3D printing, synthetic biology, AR, VR, blockchain. These technologies are transforming what you as an entrepreneur can do. If this is the kind of news you want to learn about and shape your neural nets with, go to diamandis.com/blog and learn more. Now back to the episode.

Next question: how long do we have to get that in place before we lose the mind share, or the nourishment war?

A couple of years.

Yeah, that was Mo's prediction as well — that the next two years is the game.

The exponential increase in compute is insane. We've gone from two companies being able to train a GPT-4-level model to twenty next year, and there are no guardrails, there's nothing around this. And even if you train one, the bad guys can steal it by downloading it onto a USB stick and taking it away. It's not like Operation Merlin. Did I ever tell you about Operation Merlin?

No.

It's been declassified. In 2000 the Clinton administration wanted to divert the Iranian nuclear program.

Ah, I remember — this is the centrifuges?

No, no. What they did was give some plans to, I believe, a Russian defector; the idea was that there were errors in them, so the Iranians would go down the wrong path for years. So he went and sold it to the Iranians — it's on Wikipedia, you can check it out — and then he came back and said, "I sold it." Fantastic, good. "Oh, but there were some errors in there, and because he was a nuclear scientist, he corrected them." So the reason we know that Iran has nuclear capability is because America sold it to them.

Oh. But they still needed years to build it.

Whereas this — you download it on a USB stick, you run it on a GPU, and it's there. So if you make it cheap enough and high-quality enough and give it away for free, then you make it in everybody's economic best interest to use the higher-quality data sets, and then it's less of an issue to create large models, if you have a swarm model where each individual model becomes less impactful and less capable — just like human society: not know-it-alls, but individualized groups.

Back when the early dangers of recombinant DNA emerged, when the first restriction enzymes came online — it was like the 1980s — everybody was in great fear, and the question was, are we going to regulate this? I was at MIT and Harvard at the time, in the labs using restriction enzymes — I was just a pipsqueak in the labs there — but the conversation was: is the government going to over-regulate us? And what happened was that the scientists got together at a place called Asilomar, and they held a very famous set of Asilomar conferences, and they self-regulated. What's going on there? Are those conversations going on among leaders like yourself in the industry?

There are. And there are three levels. There's big tech, which the government kind of hates — and apparently next week Meta is releasing new open-source models and things, which will get even more focus. Then there's emergent tech — Anthropic, OpenAI, some of these others that are the leaders — and they have a different set of parameters, because they can work more freely than big tech. And there's open source, which is where we are, because all of the world's governments and regulated industries will run on open, auditable models — you can't run on black boxes.
Right.

And I think that'll be legislation. But the reality is there's only a handful of us — there will potentially be far more of us and far more players — and unlike recombinant DNA, there is an economic imperative to deploy this technology, a national-security imperative to deploy this technology, and it creates a race condition. So even if you regulate — we've already seen regulatory arbitrage, where you have jurisdictions like Israel and Japan with much looser web-scraping data laws; they'll have much looser regulation — you'll be scraping in Israel, training in Qatar, and then serving it out of Botswana or something.

Right.

And we're not even sure what regulation to introduce. Genuinely, we're coming at this from a good point of view, but there are too many unknown unknowns, because it goes everywhere from freaking Arnold Schwarzenegger Skynet Terminators and Her — to, well, what if Her is Siri all of a sudden and Scarlett Johansson's voice is whispering to your kids to buy things — through to very mundane things... not mundane things, huge things, like the future of Hollywood and actors' rights. And how do you pay? We had two billion images in the original Stable Diffusion, okay — could we get attribution? Again, it was a research artifact to kick things off, but you'd be paying about 0.01 cents per thousand images generated by someone, because it's two billion images and it costs less than a cent to generate an image. Are you going to pay proportionately? Nobody knows. And so what we've moved to now is from being reactive to just trying to figure it out and put something on the table, so at least there's some framework. And what I've come down to is data sets, data sets, data sets.

This is like Google's move with Android: when you provide something open source and it's super solid, it can dominate — why would you do anything else? So with the deepfake stuff, we saw image models coming out of some not-nice places, shall we say, and we said: let's standardize it and put invisible watermarks in, so that you can combat deepfakes much more easily. It's good business, but it's also standardization. We held back one of our image models, DeepFloyd, for five months because it was too good to release.

Wow. And you finally fixed that with the watermarks?

Yeah, we put some watermarking in, but by then the whole industry had moved forward, so — okay, now we can release it. This is the problem: you just have to time it so carefully.

Speaking of the whole industry, I have to ask you a question I've been dying to get a reasonable answer to: what's up with Siri? Why is Apple so out of the game, at least from the outside? It's one of the least open organizations out there, and that pays them great dividends in their success, but I would die for a capability where Siri could just understand what I was saying and just get the names right. I'm texting Kristen, and her name is right there, and it spells it completely differently from the person I'm texting. I mean, basic, simple stuff.

They do have a Neural Engine on there as well, which is a specialist AI chip in all their smartphones and other devices — Stable Diffusion was the first of the external Transformer models to actually get Neural Engine access.
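[Illustration] The invisible-watermark idea mentioned a few exchanges above — hiding an imperceptible signal in generated images so they can later be identified — can be pictured with a toy least-significant-bit scheme. This is a classical textbook approach used here for illustration only, not the watermarking Stability actually ships, which would need to survive compression, resizing, and editing.

# Toy illustration of an invisible watermark: hide a bit string in the
# least-significant bits of pixel values. Assumes an 8-bit grayscale image
# stored as a NumPy array; not a production-robust scheme.
import numpy as np

def embed_watermark(image: np.ndarray, bits: str) -> np.ndarray:
    """Write each bit into the least-significant bit of successive pixels."""
    flat = image.flatten().astype(np.uint8)   # flatten() returns a copy
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | int(b)
    return flat.reshape(image.shape)

def extract_watermark(image: np.ndarray, length: int) -> str:
    """Read the watermark back out of the least-significant bits."""
    flat = image.flatten().astype(np.uint8)
    return "".join(str(flat[i] & 1) for i in range(length))

if __name__ == "__main__":
    img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
    mark = "1011001110001111"   # e.g., an encoded "AI-generated" tag
    stamped = embed_watermark(img, mark)
    assert extract_watermark(stamped, len(mark)) == mark
    print("recovered watermark:", extract_watermark(stamped, len(mark)))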
It's a case of Apple being an engineering organization, not a research organization. They engineer beautifully — they do — but they don't have advanced research, because the best researchers want to be able to publish in the open, and Apple does not allow public conversation about their work. They have started, slightly, and they're hiring AI developers very quickly. But the reality is they can use open models. So Meta is releasing a lot of their models openly, without identifying what the data is, so I'd say that's like 80% open. I think you need 100% open for governments and things like that, which is where we come in. Because they want to commoditize the complement: they want others to take their models and optimize them for every single chip, and then Apple can use those models too, to make Siri better. Because right now, guaranteed, if you put Whisper on Siri it would be a dozen times better.

Sure, sure.

We have the technology already; it just takes time to go into consumer, just like enterprise — and Apple is enterprise.

Yeah.
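[Illustration] The closing "put Whisper on Siri" remark refers to OpenAI's open-source Whisper speech-recognition model. A minimal sketch of using it via the openai-whisper Python package (pip install openai-whisper; requires ffmpeg) looks like this; the audio filename is just a placeholder.

# Minimal Whisper transcription sketch using the open-source openai-whisper package.
import whisper

model = whisper.load_model("base")          # also: tiny, small, medium, large
result = model.transcribe("voicemail.m4a")  # placeholder path to any audio file
print(result["text"])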
Info
Channel: Peter H. Diamandis
Views: 58,466
Keywords: peter diamandis, longevity, xprize, abundance
Id: zXd9ZwU8u5E
Length: 20min 18sec (1218 seconds)
Published: Sun Jul 30 2023