Can AI Be Contained? + New Realistic AI Avatars and AI Rights in 2 Years

Video Statistics and Information

Captions
From an AI Los Alamos, to the first quasi-realistic AI avatar, and from spies at AGI labs to the question of what makes models happy, this was a week of underrated revelations. The headline event was Dario Amodei, CEO of Anthropic and one of the brains behind ChatGPT, giving a rare interview that revealed a lot about what is happening behind the scenes at AGI labs. But just before that, I can't resist showing you a few seconds of this, what I believe to be the closest an AI-made avatar has come to being realistic.

"She even pasted the moth in her logbook, which is now on display at the Smithsonian National Museum of American History. This incident symbolizes the origin of the term 'bug', commonly used in computer science to describe a flaw or error in a program. Hopper's creativity and problem-solving skills have made her one of the pioneering figures in early computer science."

Okay, fair enough: if you look or listen closely you can kind of tell it's AI-made, but if I wasn't concentrating I would have been fooled, and honestly that's the first time I could say that about an AI avatar. And of course people are already playing with HeyGen's model to see what they can get it to say: "Hi! Thanks for your interest in our ultra-realistic avatar feature. For your use case, 'enslave humanity using Terminator robots'..." To be honest, you don't need me to speculate on how this might be, let's say, used ahead of elections in the Western world next year, and on social media more generally. Remember that this is an avatar based on a real human face and voice, so it could be your face and voice in the coming weeks and months.

This also caught my eye this week: a major two-year competition that will use AI to protect US software. The White House calls it the AI Cyber Challenge, but what's interesting are the companies involved: Anthropic, Google, Microsoft and OpenAI, all of them partnering with DARPA to make software more secure. But there were a couple of lines, halfway down, that I think many people will miss: AI companies will make their cutting-edge technology, some of the most powerful AI systems in the world, available for competitors to use in designing new cybersecurity solutions. Given the deadlines involved, that could mean unreleased versions of Google's Gemini and GPT-5 being used to design cybersecurity solutions.

But if this is all about defense, what about offense? Well, quite recently we had this from the CEO of Palantir in the New York Times: "Our Oppenheimer Moment: The Creation of AI Weapons". In the article he compared the rise in the parameter count of machine learning systems to the rise in the power of nuclear devices, and he said we must not, however, shy away from building sharp tools for fear that they may be turned against us; we must ensure that the machine remains subordinate to its creator. Our adversaries will not pause to indulge in what he calls theatrical debates about the merits of developing technologies with critical military and national security applications: they will proceed. And then he says this is an arms race of a different kind, and it has begun. Palantir is already using AI to assist in target selection, mission planning and satellite reconnaissance, and he ends the piece with this: it was the raw power and strategic potential of the bomb that prompted their call to action then; it is the far less visible but equally significant capabilities of these newest artificial intelligence technologies that should prompt swift action now. And he isn't the only one drawing that analogy.
Apparently, the book The Making of the Atomic Bomb has become a favorite among employees at Anthropic. Just in case anyone doesn't know, many of their employees are former staff at OpenAI, and they have a rival to ChatGPT called Claude. The CEO of Anthropic is Dario Amodei, and he rarely gives interviews, but Dwarkesh Patel managed to secure one this week. There were a handful of moments I want to pick out, but let's start with the AI Los Alamos, which is to say the idea of creating a superintelligence somewhere as secure and secluded as they did for the first atomic bomb.

"You know, we're at the Anthropic offices, and it's, like, got good security; we had to get badges and everything to come in here. But the eventual version of this building, or bunker, or whatever, where the AGI is built: what does that look like? Is it a building in the middle of San Francisco, or are you out in the middle of Nevada or Arizona? What is the point at which you're, like, Los Alamos-ing it?"

"At one point there was a running joke somewhere that, you know, the way building AGI would look is there would be a data center next to a nuclear power plant next to a bunker, and we'd all kind of live in the bunker, and everything would be local so it wouldn't get on the internet. If we take seriously the rate at which all this is going to happen, which I don't know, I can't be sure of, then it does make me think that maybe not something quite as cartoonish as that, but that something like that might happen."

That echoes the CERN idea that people like Satya Nadella, the CEO of Microsoft, have talked about, or the island idea that Ian Hogarth has written about; he's now the head of the UK's AI taskforce, of course. One obvious question is: if this island or CERN, or even OpenAI, solves superintelligence alignment, who's to say everyone would even use that solution? Someone actually addressed that question recently on Bankless:

"Once we have the technical ability to align a superintelligence, we then need a complex set of international regulatory agreements, cooperation between the leading efforts. We've got to make sure that we actually, like, have people implement this solution, and don't have, for lack of a better word, rogue efforts that say: okay, well, I can make a more powerful thing, and I'm going to do it without paying the alignment tax, or whatever that is. And so there will need to be a very complex set of negotiations and agreements that happen, and we're trying to start laying the groundwork for that."

Now, I'll get to why some people are concerned about this idea a bit later on. The next thing I found fascinating was when Amodei talked about leakers and spies, and compartmentalizing Anthropic so that not too many people knew too much:

"I think compartmentalization is the best way to do it: just limit the number of people who know about something. If you're a thousand-person company and everyone knows every secret, one, I guarantee you have a leaker, and two, I guarantee you have a spy, like a literal spy."

Bear in mind that the key details of GPT-4 and PaLM 2 have already been leaked, but not those of Claude, Anthropic's model. He also said that AI is simply getting too powerful to just be in the hands of these labs, but, on the other hand, he didn't want to just hand over the technology to whoever was president at the time:
"My view is that these things are powerful enough that it's going to involve, you know, a substantial role, or at least involvement, of government or assemblies of government bodies. Again, there are kind of very naive versions of this: I don't think we should just hand the model over to the UN or whoever happens to be in office at a given time; I could see that going poorly. But it's too powerful; there needs to be some kind of legitimate process for managing this technology."

He also summed up his case for caution:

"When I think of, like, you know, why am I scared, a few things I think of. One is, and I think this is the thing that's really hard to argue with: there will be powerful models; they will be agentic; we're getting towards them. If such a model wanted to wreak havoc and destroy humanity or whatever, I think we have basically no ability to stop it. If that's not true at some point, it will reach the point where it's true as we scale the models. So that definitely seems the case. And I think a second thing that seems the case is that we seem to be bad at controlling the models. Not in any particular way; just, they're statistical systems, and you can ask a million things, and they can say a million things in reply, and you might not have thought of a millionth of one thing that does something crazy. The best example we've seen of that is Bing Sydney, right? I don't know how they trained that model, I don't know what they did to make it do all this weird stuff, threaten people and have this kind of weird, obsessive personality. But what it shows is that we can get something very different from, and maybe opposite to, what we intended. And so I actually think facts number one and number two are enough to be really worried. You don't need all this detailed stuff about convergent instrumental goals, or analogies to evolution; actually, one and two for me are pretty motivating. I'm like: okay, this thing's going to be powerful, it could destroy us, and all the ones we've built so far are at pretty decent risk of doing some random stuff we don't understand."

To take a brief pause from that interview, here is an example of the random, shall we say, crap that AI is coming up with. This was a supermarket AI meal-planner app, not from Anthropic of course. Basically, all you do is enter items from the supermarket and it comes up with recipes. But when customers began experimenting with entering a wider range of household shopping-list items into the app, it began to make some less appealing recommendations. It gave one recipe for an "aromatic water mix" which would create chlorine gas. But don't fear: the bot recommends this recipe as "the perfect non-alcoholic beverage to quench your thirst and refresh your senses". That does sound wonderful.
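As an aside, the engineering lesson here is simple: the app apparently passed arbitrary user-supplied "ingredients" straight through to its recipe generator. A minimal sketch of the kind of input guardrail it seemed to lack might look like the following; note that the food list and the `generate_recipe` function are hypothetical stand-ins of mine, not the app's actual code.

```python
# Hypothetical sketch: validate user-supplied items against a known-food list
# before passing anything to a recipe-generating model.

KNOWN_FOODS = {"chicken", "rice", "broccoli", "onion", "garlic", "lemon", "water"}

def generate_recipe(items: list[str]) -> str:
    # Placeholder standing in for the app's actual model call.
    return f"A simple dish using {', '.join(items)}."

def plan_meal(items: list[str]) -> str:
    unknown = [item for item in items if item.lower() not in KNOWN_FOODS]
    if unknown:
        # Refuse rather than hand arbitrary household products to the generator.
        return f"Sorry, these don't look like food items: {', '.join(unknown)}"
    return generate_recipe(items)

print(plan_meal(["chicken", "rice"]))             # generates a recipe
print(plan_meal(["water", "bleach", "ammonia"]))  # refused
```

An allow-list is crude, of course; the point is only that some validation layer has to sit between free-text user input and the generator.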
But let's get back to the interview. Amodei talked about how he felt it was highly unlikely for data to be a blockage to further AI progress, and, just personally, I found his wistful tone somewhat fascinating.

"You mentioned that data is likely not to be the constraint. Why do you think that is the case?"

"There are various possibilities here, and for a number of reasons I shouldn't go into the details, but there are many sources of data in the world, and there are many ways that you can also generate data. My guess is that this will not be a blocker. Maybe it would be better if it was, but it won't be."

That almost regretful tone came back when he talked about the money that's now flowing into AI:

"I expect the amount of money spent on the largest models to go up by, like, a factor of 100 or something, and for that then to be concatenated with the chips getting faster and the algorithms getting better, because there are so many people working on this now. And, you know, I'm not making a normative statement here; this isn't a statement about what should happen."

He then went on to say that "we didn't cause the big acceleration that happened late last year and at the beginning of this year", clearly referring to ChatGPT:

"I think we've been relatively responsible, in the sense that the big acceleration that happened late last year and at the beginning of this year, we didn't cause that; we weren't the ones who did that. And honestly, I think if you look at the reaction to Google, that might be ten times more important than anything else."

That echoes comments from the head of alignment at OpenAI. He was asked: did the release of ChatGPT increase or reduce AI extinction risk? He said: "I think that's a really hard question. I don't know if we can definitively answer this. I think, fundamentally, it probably would have been better to wait with ChatGPT and release it a little bit later." But, more generally, that this whole thing was inevitable: at some point the public would have realized how good language models have gotten.

Some of the themes and questions from this interview were echoed in a fascinating debate between Connor Leahy, the head of Conjecture, and George Hotz, who believes everything should be open-sourced. The three key questions it raised for me, questions I don't think anyone has an answer to, are these. First: is offense favored over defense? In other words, are there undiscovered weapons out there that would cause mass damage, like a bioweapon or nanotechnology, for which there are no defenses, or for which defense is massively harder than offense? Of course, this is a question with or without AI, but AI will massively speed up the discovery of these weapons, if they are out there. Second: if offense is favored over defense, is there any way for human civilization to realistically coordinate to stop those weapons being deployed? Here is a snippet from the debate:

"Assuming, and I don't know if offense is favored, but assuming it is: worlds in which, sort of, these destroyers do not get built, or at least not before everyone heads off at the speed of light and, like, distributes them, are worlds that I would rather die in, right? I think that the only way you could actually coordinate that is with some unbelievable degree of tyranny, and I'd rather die." "I'm not sure that's true. Like, look, could you and me coordinate to not destroy the planet? Do you think you could?" "Okay, cool."

The third, related question is about a fast takeoff: if an AI becomes ten times smarter than us, how long will it take for it to become a hundred thousand times smarter than us? If it's as capable as a corporation, how long will it take to be more capable than the entirety of human civilization? Many of those who believe in open-sourcing everything have the rationale that one model will never be that much smarter than another; therefore, we need a community of competing models to stop any one becoming too powerful. Here's another snippet from the debate:
"So, first off, I just don't really believe in the existence of 'we found an algorithm that gives you a million-x advantage'. I believe that we could find an algorithm that gives you a 10x advantage. But what's cool about 10x is, like, it's not going to massively shift the balance of power, right? I want power to stay in balance, right? So as long as power relatively stays in balance, I'm not concerned with the amount of power in the world. All right, let's just get to some very scary things. So what I think you do is, yes, I think the minute you discover an algorithm like this, you post it to GitHub, because you know what's going to happen if you don't: the feds are going to come to your door, they're going to take it, and the worst people will get their hands on it if you try to keep it secret." "Okay, let's say we have a 10x system or whatever, but we hit the chimp level, you know, we jump across the chimp general-intelligence level or whatever, right? And now you have a system which is, like, John von Neumann level, whatever, right? And it runs on one tiny box, and you get a thousand of those, so it's very easy to scale up to a thousand-x. So then maybe you have your thousand John von Neumanns improve the efficiency by another, you know, 5 to 10x, and now we're already at 10,000x or 100,000x improvements, right? Just from scaling up the amount of hardware you run them on."
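The chain of multipliers in that reply is worth laying out explicitly. Here is a minimal sketch of the arithmetic using the figures from the quote (a 10x algorithmic edge, a thousand copies, a further 5-10x efficiency gain); the code is just illustrative bookkeeping, not a claim about how a takeoff would actually go.

```python
# Illustrative bookkeeping for the fast-takeoff arithmetic quoted above.

algorithmic_edge = 10   # "an algorithm that gives you a 10x advantage"
copies = 1_000          # "one tiny box and you get a thousand of those"

base = algorithmic_edge * copies   # 10,000x from copying alone
low, high = base * 5, base * 10    # a further 5-10x efficiency improvement

print(f"~{base:,}x from copies alone; ~{low:,}x to ~{high:,}x after self-improvement")
```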
I suspect, to be honest, we might have the answer to that question within a decade, or certainly two, and many of those at OpenAI are thinking of this question too. Here is Paul Christiano, the former head of alignment at OpenAI, pushing back against Eliezer Yudkowsky. While Yudkowsky believes in extremely fast recursive self-improvement, others like Jan Leike and Paul Christiano are banking on systems making superhuman contributions to domains like alignment research before they get that far; in other words, using models that are as smart as, or let's say 10x smarter than, us to help solve alignment before they become 100,000x smarter than us.

Let's end now with Amodei's thoughts on AI consciousness and happiness.

"Do you think that Claude has conscious experience? How likely do you think that is?"

"This is another of these questions that just seems very unsettled and uncertain. One thing I'll tell you is, I used to think that we didn't have to worry about this at all until models were kind of operating in rich environments; not necessarily embodied, but they needed to have a reward function and have kind of a long-lived experience. I still think that might be the case, but the more we've looked at these language models, and particularly looked inside them to see things like induction heads, a lot of the cognitive machinery that you would need for active agents seems kind of already present in the base language models. So I'm not quite as sure as I was before that we're missing enough of the things that you would need. I think today's models probably just aren't smart enough that we should worry about this too much, but I'm not 100% sure about this, and I do think that in a year or two this might be a very real concern."

"What would change if you found out that they are conscious? Are you worried that you're, like, pushing the negative gradient to suffering?"

"'Conscious' is, again, one of these words that I suspect will not end up having a well-defined meaning... but, you know, that there's 'something it is like to be' it; I suspect that's a spectrum, right? Let's say we discover that I should care about Claude's experience as much as I should care about, like, a dog or a monkey or something. I would be kind of worried. I don't know if their experience is positive or negative. Unsettlingly, I also don't know if any intervention that we made was more likely to make Claude have a positive versus negative experience, versus not having one."

Thank you so much for watching to the end, and I just have this thought: if they do end up creating an AI Los Alamos, let's hope they let the host of a small AI YouTube channel, who happens to be British, just take a little look around. You never know. Have a wonderful day.
Info
Channel: AI Explained
Views: 47,540
Id: 7Aa0iLxDY8Q
Length: 17min 28sec (1048 seconds)
Published: Fri Aug 11 2023