AGI in 3 to 8 years

Video Statistics and Information

Captions
You will see subcultures that want to remain legacy humans, such as we have the Amish and others on the planet now. You will see people that want to remain legacy humans but without death and disease and mental illness and all that, just live your best life as a human. Certainly you will see cyborgs who want to pack extra cores into their brain. I think you will also see people mind-upload: just upload your brain into a digital substrate, live in a virtual world, but then let that be the seed and let yourself grow, and then within some number of cycles you may become something utterly different than how you started.

When will we have artificial general intelligence? Hello and welcome to TechFirst. My name is John Koetsier. TechFirst is about smart matter, sure, but part of that is AI: distributed AI, centralized AI, intelligence that permeates the things we surround ourselves with. When will that intelligence approach and surpass human intelligence? Some think it already has, many think it hasn't, some think it's decades away. To chat, we have the CEO of SingularityNET, Dr. Ben Goertzel. He's the chairman of the Artificial General Intelligence Society, he's the chief scientist at Mozi Health, he was the chief scientist at Hanson Robotics, has a PhD in math from Temple University, and was a professor of computer science at the University of New Mexico, among many other things. Welcome, Ben.

Hey, good to be here.

Let's start with a big question: when will we get AGI?

Well, if we want to define AGI as the creation of machines with the general intelligence of a really smart human on their best day, I would say we're three to eight years from that, if I want to put a range on it. So I think we're pretty close. On the other hand, we're not there yet, and as we've seen, in one year we can have a lot of really material advance in the AI field at the current time.

You're talking about LLMs, I'm guessing, when you say in one year we have a significant push forward. Where do you
think LLMs fit on the path to AGI? Are they a part of it? Are they an interesting avenue? Are they a dead end? There's lots of different opinions on that.

Yeah, as you've seen, some people believe LLMs basically just need a bit of improvement to become human-level AI. Then on the other hand you have, say, Yann LeCun, who runs Facebook's AI division, who said on the road to AGI, LLMs are an off-ramp, right? And my view is somewhat in between those two. I don't think you need LLMs to get to human-level AGI; they're not a critical technology for it. And I don't think that just adding a few more bells and whistles to LLMs, or making them bigger or something, is going to get you a human-level AGI. On the other hand, I think they can be a powerful accelerant toward the creation of AGI, both serving as information feeders and information oracles to help teach early AGI systems, and serving as components of AGI systems in various ways. Say, one of the issues that an AGI system faces is what to pay attention to, and applying an LLM, just training a transformer net on the histories of what goes on in an AI's mind, can help that AI mind decide what to pay attention to, right? So I think there's a lot of obvious and non-obvious ways transformer neural nets, which is the main technology behind LLMs, can help build AGIs. But still, if I had to put a number on it, transformers may end up being 20% of your ultimate AGI architecture, like not 90% but also not 1%, would be my current best guess. On the other hand, my opinion is subject to revision based on what we learn. I think the perplexing thing about LLMs is that, in a way, they're narrow systems that look to us like very general systems. They're narrow because they can't go that far beyond what they've been trained and programmed to do. On the other hand, what they've been trained and programmed to do is huge compared to what can fit in any human being's brain, right?
So it's like they're narrowly constrained to a little halo around a huge field of stuff, which is like most human knowledge created so far. So it's a weird kind of system, and in a way it's an unnatural kind of system; it is very artificial, right? But it's super cool and powerful. The other thing is, you might be able to obsolete 80, 90, 95% of human jobs without getting to AGI, because most of what we do to earn a living is repetitive of stuff that has been done before. Like, maybe the LLM couldn't figure out how to do it the first time, but not that much of human productive labor is doing something for the first time; it's being shown how to do what other people did, right? So that's been something I didn't quite foresee. I sort of thought you'd have to get to AGI to have such a broad economic impact, but now you can see that by making this different sort of system, you can have a huge economic impact even without having a system that can imagine and pivot and learn wild new things the way that people can.

It's really interesting, because you gave what seems like a fairly aggressive estimate for the onset of AGI, three to eight years I believe you said, and we talked about LLMs. LLMs have been the most noisy, hyped, busy area of development in AI in the last year. What are we not paying attention to? If LLMs are not necessarily the thing that gets us to AGI, what are we not hearing about that is still happening, innovation that's still happening in AGI and other areas?

So I don't think there are any big secrets here, actually. The field of AI has been around since the middle of the last century with that name, and since a bit before that in terms of preliminary work. And deep neural networks, as we now call them, have been around at least since the 1960s, arguably the 50s, but there have been other AI paradigms around almost as long. So you've had logic-based AI systems doing
sort of formal logical reasoning, uncertain reasoning, common-sense reasoning, around since the 60s. You've had evolutionary learning systems that try to create stuff by mutating and combining what they've seen before, and these have been around since the 1970s. And I think these sorts of systems are going to come into their own in the next few years, for similar reasons to what has driven the growth of deep neural networks: more data, more processing power, more people banging on the problem, right? So what I think we're going to see is sort of hybrid systems, with a deep neural-net aspect, a logical reasoning aspect, an evolutionary learning aspect, combined together in integrated systems. And you can see what role that would play very clearly by looking at the shortcomings of current LLMs, right? LLMs and other deep neural nets right now are not very good at reliable, complex multi-step reasoning, like you need to do to write a high-quality original scientific paper or something; well, logic systems are good at complex multi-step reasoning. If you look at creativity, LLMs are good at recombining stuff, but they're quite derivative in what they create; well, evolutionary learning, this is an approach that already has a bunch of patents to its name in various forms, with people using evolutionary learning to create music and imagery back in the 90s, which in a way was more creative than the stuff we're seeing out of deep neural nets now, right? So you have other algorithms with a long track record behind them, classes of algorithms that are good at exactly the things that LLMs suck at, right? And so combining them together with LLMs in a hybrid architecture is an extremely natural thing to do. There are practical obstacles, because the way our whole software and hardware stack is created now has been very well refined for deep neural networks, and less so for these other sorts of
algorithms. On the other hand, there are companies and teams of researchers working on addressing precisely this problem, and people working on these problems are finding it way easier to raise money and hire people to help them than was the case before ChatGPT, right? So while, yes, the bulk of AI resources are going into deep neural-net stuff, the enthusiasm for AI is now so big and so broad that I think other species of AI are also having a much easier time than they used to, even if not as easy as deep neural nets.

It's fascinating to hear you talk about that, that AGI is probably going to be created as we combine all these different methods together. And that makes a sort of intuitive sense, because if you think about how we process, there's different types of things: there's a background level of knowledge that's stored, there's immediate attention that is paid to what's going on, there's spatial intelligence, there's other types of intelligence, there's intuition, there's reasoning, there's logic, there's logical gaps, fallacies that we fall prey to. But there's different kinds of engines that combine to form whatever is going on inside our brains.

Yeah, absolutely. A human brain contains many different subnetworks with different architectures, different mixes of neuron types, different mixes of neurotransmitters, and each of them evolved over a period of time to serve certain functions. And the deep neural nets that we're looking at now in computer science are a model of just a few regions of the human brain, really. A lot of it came from modeling primary visual cortex, mostly just feed-forward activity from the sense organs into the cognitive centers, mostly feed-forward activity of visual cortex and a bit of auditory cortex. So there's a lot going on in the brain that is not touched by the neural network models currently being used. And I have no doubt you could make a human
level AGI with just formal neural networks, but they won't be transformer neural nets per se; it would take a variety of different neural components with different architectures connected together. And I think it's also an interesting approach to make a system where some of the components are formal neural models and some are just different sorts of computer programs. I think that's another point about AI, though: you wouldn't expect there to be just one approach to making a human-level AGI. The well-worn but still apt metaphor is to flying machines, right? You've got airplanes, you've got helicopters, you've got blimps; Freeman Dyson schemed up a starship that explodes nuclear bombs behind itself, boom boom boom boom, right? You've got backpack helicopters; you've got a lot of different ways to fly. The thing is, if you have a theory of aerodynamics, then with your theory of aerodynamics you can of course try to understand the strengths and weaknesses of all these flying machines. We don't have that sort of fully fleshed-out theory of general intelligence yet. On the other hand, even in aerodynamics, in the end you're doing wind-tunnel experiments to see if your thing is going to fly, right? Even with a solid theory there's a lot of experimentation. So I think there's going to be a variety of approaches to AGI. On the other hand, the dynamics of AGI and its development are a little different than with flying machines, because the first really successful flying machine didn't build even better flying machines, which then built even better flying machines, right? Whereas with AGI, there's a sense in which whatever gets first to being a full-on human-level AGI, that AGI itself can then build the next level of AGI faster than the competing human teams are going to be able to do it, right?

But what's really interesting about how you're saying AGI could develop, which is a combination of
multiple different methods and multiple different types of reasoning, thinking, memory, other things like that, compute, joining together in sort of a simulacrum of actual or biological intelligence. The implication of that is very different than the popular conception of what AGI would be, which is sort of this omniscient, never-wrong or almost-never-wrong, cold, logically processing machine. Because if it has these multiple components, these multiple competing engines in some sense, similar to what's going on in a biological brain, it almost starts to have some sense of: is there some subconscious in an artificial general intelligence? There's competing initiatives, and what wins out, and you almost wonder, is there a super-ego? Is there this sort of...

I think once you have what I would call a superintelligence, meaning an AGI that is significantly more intelligent than the totality of the human species at its best, in the same way that in some ways you or I are more intelligent than the totality of all pygmy shrews on the planet or something, right? So once you have a superintelligence, I think in many senses a superintelligence will always be right relative to human understanding, anyway. It will be able to resolve matters of fact that are confusing to us very simply. On the other hand, that doesn't mean it will be coldly rational in the sense that we are thinking; it's not necessarily going to be Mr. Spock from the original Star Trek or something. It may have all manner of complex dynamics that we cannot understand any better than a mouse, a worm, or even a chimpanzee can understand the politics inside Microsoft or something, right? So I wouldn't say that having a superior ability to make hypotheses and evaluate hypotheses relative to data necessarily implies not having an unconscious,
doesn't necessarily imply not being intuitive or not being emotional or any of that. I think there's a tendency to anthropomorphize and think, well, what would I do if I were in the AI's position? But that's not really meaningful. You see the same thing with people's worries that, you know, once an AI takes power it won't need us, it will turn us all into batteries, it will mulch us to make synthesis gas to feed its engine, right? But the whole idea that power corrupts and absolute power corrupts absolutely, that's a statement about human nature and human psychology, right? It doesn't have to be the case for every type of intelligent system. We evolved as we did for specific and well-known reasons, which are not actually relevant to the life of an engineered AGI system. Which is not to say we have a guarantee that superintelligence will be nice, friendly, and shiny; it's just to say that we shouldn't assume the opposite just because we think, like, what if Donald Trump had an IQ of 10,000 and superhuman powers, right? Because we're engineering different kinds of systems, and unlike having a human baby, we are engineering the system. There is an emergent and spontaneous, unpredictable aspect, yes; on the other hand, we are architecting and building its mind, right? So there's a certain level of design and control we have that just isn't the case with human beings.

But that "we" is a very complex "we," because that's kind of a general "we," like a humanity "we," and there's very different people. Just the other day Gab, the ultra-right-wing social network, released about a hundred different LLMs, AIs, GPTs, whatever you want to call them, several of which say the Holocaust never happened, others similar things like that. And I'm not saying that's going to be the general reality out there, but there's AI teams in Russia, in China, in Africa, in
North America, South America; they're going to come from very different ideological perspectives, want different things, demand different things. For instance, in China it's not going to answer questions about Tiananmen Square or things like that. So we are very involved in creating this intelligence, and that's got to have some impact on what it...

I mean, I think so. It's interesting you mention all these places, because in my own organization, SingularityNET, we have people in all these places contributing to the same system. We've had an AI lab in Addis Ababa, Ethiopia since 2014, and I've worked with a whole bunch of AI developers in St. Petersburg in Russia. I lived in Hong Kong for 10 years; my wife is an AI PhD from Xiamen University, so we have a bunch of connections to the Chinese AI community. Then we have an office in Belo Horizonte, Brazil; I've worked with people there since, when would it have been, since '98, I think. So we have people from all around the world, and I haven't given the full list because it gets boring, but we have people from all around the world contributing to a common open-source, decentralized proto-AGI platform right now. And yes, of course there are differences. I know we're working with a couple of groups in mainland China who are very interested in getting the I Ching and Lao Tzu and sort of classical Chinese thought into a knowledge graph, to condition the dialogue and learning of an LLM. It's like, how do you make a logical reasoning system to help interpret the I Ching hexagrams in the context of natural-language dialogue, right? So there are pretty cool cultural differences that come up. On the other hand, if you look at the baseline, everyone we're working with in every country wants to make machines that are compassionate to human beings, that advance science, that advance medicine, that will take care of old people,
that will help educate children. There's quite a lot of commonality there, right?

So of course I agree. Do you remember Isaac Asimov and the three laws of robotics, and how certain robots were given a different definition of human in order to be able to harm certain, well, what we would call humans? So yeah, I agree with what you're saying; it makes perfect sense.

There is a point, though, in terms of geopolitics: you can see that government is following the evolution of open science and open software, and not vice versa, right? Science has been open; new AI algorithms are on arXiv.org, new AI code is on GitHub and GitLab and other code repositories, from Russia and mainland China, even from Iran, right? So you have open science published, you have open-source software, which is what has driven the AI revolution. Now, not all trained models are open, but a lot are. Google just released Gemma, which is a new model; Facebook has released a lot of open stuff; Alibaba has released a lot of open stuff also, in China, right? So on the whole, so far, we'd have to say governments are following what happens in the open AI community, much as has happened with the internet, right? In the internet context, governments have followed what happened in open networks, and companies are being forced to play along with the open networks or else be left behind. So it's quite different than the development of weapons technology or something, where the development of space lasers is primarily by governments wanting to defend their borders. The development of AI could have been that way. If you remember the old movie Colossus: The Forbin Project, which might have been the late 60s or early 70s, a Cold War-era movie, the US and Russia both had these huge AI supercomputers, and that was the main nexus of AI on the planet; they set them to destroy each other, but in the end the AI supercomputers made friends and
took over and shut down the wars. But that's how people thought AI would develop then, like it would be military supermind versus military supermind. It could have been; AI was founded in a way by the US military and DARPA and all that, but that's not what's happening, right? It's unfolding more like the internet or Linux, with sort of an ambiance of open networks that companies and governments have to play along with, which I think is very positive from my own sort of crypto-libertarian, anarcho-socialist perspective.

Right. That is a good segue to talk about AI and who owns it and who controls it, which becomes complex when you bring AGI into it, because owning and controlling an intelligent or superintelligent system is either awful, or horrible, or just insanely laughable because it's impossible. But let's start with where we are today in terms of owning and controlling AGI. As it stands, most of the AI systems the average person interacts with are owned by a company or organization, and are built and maintained and crafted to serve the interests of that company or organization. Much of the science fiction that we read revolves around people having engagements and interactions with AI systems that either they control or that are allied with them in some way, shape, or form. We're starting to sound very science-fictiony here, but who owns AI as it gets developed, and how can I ensure that an AI system that I want to use, for, let's say, finance, or maybe just managing my life, is actually operating for my best interests, and not the best interests of whoever wrote it, created it, maintains it, runs it?

Yeah, I think you could ask who owns the internet, or who owns the Linux operating system, and in both of those cases the answer is basically nobody, or else a lot of parties of different sizes and orientations in a very complex way, right? Like, obviously some big
companies own more of the internet than the average person, but no one owns enough of the internet that they could take it over or shut it down unilaterally, and the Linux operating system is pretty much the same way. There's a kernel team, but if that whole kernel team was kidnapped by aliens, there's a lot of us out there who could form a new kernel team if we had to, right? So I think these are models for how the ownership and control of AI could develop, and how I think it should develop, by my best understanding. And I think there are some pointers in that direction, like the openness of the research and code underlying the vast majority of AI systems. But of course it doesn't have to be that way; it could go a different direction. It could be that once there's a real breakthrough toward human-level AGI, governments try really hard to crack down on it: AI researchers are not allowed to get out of their country, they're not allowed to cross the border, it becomes illegal to upload AI code to a repository. You could see a fascist crackdown once AI gets really serious. I don't think we're going to see that, but it's not entirely unthinkable that this would happen. I also think there are a lot of subtleties to openness and decentralized control in this context, because it's not just about code, and it's not just about the algorithms; it's what happens when the code and data and processing power come together. And right now, while big tech companies are opening their algorithms and code, they're not opening the data, and often they legally can't, because the data is stuff they've collected with confidentiality agreements in the course of their business model, right?

Or they've just grabbed it from wherever.

Yeah, well, but if you look at Google, Google has a lot of data like that, that they got from
our chats or something, or Google Voice; they trained their voice models on Google Voice conversations. Even if, and I tend to believe them, they're not using our private communications in any untoward way, they're using the sound of the voice, right? But still, that's how they got their voice models, from what we said through Google Voice and Hangouts and all that, right? OpenAI seems to have just used Common Crawl, which is a huge spider of the web, right? So anyway, there is so much data that it's hard for an average person to download. I could download the code, and the algorithms are pretty small files; for the data I would need a huge bunch of hard drives. And then, I know how to train, not GPT-4 exactly, but I know how to train Mixtral, which is an LLM almost as good as GPT-4, right? The code is open, it's all pretty clear, but you need at least tens of millions of dollars of hardware, right? So this is one issue for openness: actually doing AI yourself at the modern scale requires more than the code and the algorithm, which of course is part of why big tech companies are so willing to open up the code and algorithms, because they know you need a shitload of money to download all this data and buy or rent all these servers. Now, on the other hand, that still is freeing compared to not having the algorithms and the code, because there are a lot of companies and countries that could set up a big server farm, and there's a possibility to glom together a whole bunch of decentralized resources owned by a large number of parties and pool them into a data and compute network, which is what we're doing in SingularityNET. So one thing we're looking at with the SingularityNET project, and other projects like HyperCycle in our ecosystem: we're getting crypto mining farms to repurpose some of their machines to running AI processing
rather than mining crypto. They're already set up, they've got electricity, they've got cooling, right, and they just need to upgrade the GPUs and CPUs a bit. Then the network of crypto mining farms becomes a huge network of computers that can be used to run AI, right? So it's not trivial to glom together the data and compute power; on the other hand, it's also not impossible. It doesn't require rare-earth materials or plutonium or something, right? It's just commodity hardware: plug it into the wall, network it together, download code, write a spider to download some data. So it does take money, but on the other hand it's kind of something anyone can do, and this is part of what I think will make it hard to have an autocratic crackdown on AI even if governments want one. Because if one country tries to seal itself off but nobody else does, that country will then fall behind. Even the US, which invented AI, would still fall behind if it cut itself off from overseas researchers, and in that case American researchers would either sneak out of the country in disgust or just stop doing AI in disgust. You can't put a gun to a researcher's head and say, be more creative than the enemy, right? It doesn't work like that. The Manhattan Project worked, but that's because the scientists actually believed what they were doing was right and made sense. But AI researchers don't believe AI should be siloed off like that. So I just don't really see the dynamics coming together to make that happen, which means I think it's most likely AI will develop in some form of this complex global open network, like Linux and the internet. Which is not to say that some companies or governments won't have a lot of power, but I don't think any of them is going to have, like, autocratic power, or even that there will be an oligopoly of, like, three
companies ruling everything. I think it will be more heterogeneous than that, which has pluses and minuses from the big picture.

It does. We have to start bringing this to a close, so maybe let's end here with the role of humans and humanity in an ever-evolving AGI and AI scenario. I met and chatted with Ray Kurzweil a few years ago, and we were talking about AI and what humans would become. He thought that eventually we'd kind of add cores to our brains, sort of like adding servers to a server farm or an Amazon cluster or something like that, so that we would boost our compute power, our available recall, other things like that, and be able to kind of compete in a machine-dominated era. You see others like Elon Musk with Neuralink asking, hey, how can I connect myself to a machine, how can I add power to a brain? Of course, the first attempts there are more around medical and assistance-related objectives. How do you view humans in a world with AGI, and in a world of artificial intelligence?

I think the answer will be quite diverse and heterogeneous, rather than there just being one answer. You may have many, many genres of human and post-human mind, much as we have endless genres of music on the internet now, right? So I think you will see subcultures that want to remain legacy humans, such as we have the Amish and others on the planet now. You will see people that want to remain legacy humans but without death and disease and mental illness and all that, just live your best life as a human. Certainly you will see cyborgs, people who want to pack extra cores into their brain. I think you will also see people mind-upload: just upload your brain into a digital substrate, live in a virtual world, but then let that be the seed and let yourself grow, and then within some number of cycles you may become something utterly different than how you started. And you might even put the memories of your original human life
into cold storage somewhere, because you don't need to consult them any more than we now need to consult, like, what we did every day in preschool or something, right? So I think you will see all of these options, and it might be that any one of us has a chance to partake in numerous of these options. You could have one of you living its best human life, another one mind-uploaded that romps around in virtual reality and multiplies its intelligence by a hundred times, right? Because once you're copied into a digital substrate, there doesn't have to be just a single instance of a person, either. So I think the possibilities are wide open and will be quite incredible, and I'm a big optimist about how amazing post-Singularity life will be. I'm more worried about the path there, which is one of the big topics at the Beneficial AGI Summit, our own conference that I've convened, which we're holding a few days after this conversation we're now having. Because I think once we get to an AGI, it probably will be beneficially oriented toward humans, as we're creating it and we pretty much all want that; I think that's pretty much what we're likely to build. But it's going to come step by step. If it takes eight or ten years to happen, that's a blip on the historical time frame, but it's meaningful in our lifespan. I have a six-year-old and a three-year-old kid; it's more than their whole lives, right? So during that five, eight, ten years, while AGI is getting smarter and smarter, it's obsoleting jobs here and there, it's disrupting supply chains and taking over industries here and there. We already have a rather screwed-up global social and economic system, we have major wars in different parts of the world, and the developed world is ridiculously unwilling to help literally starving children in the developing world. Like, how does all this mess get disrupted on the
path to a singularity? That's much less clear in my mind than how amazing things can be once you have a beneficial human-level AGI. And the inability of our national and industry leaders to deal with much simpler things that we face now on the planet doesn't give you a lot of faith that they can deal with the advent of AGI in a well-planned, rational, and compassionate way, which leads one to think it's just going to unfold and self-organize. Look how badly we dealt with the pandemic, right? The only good thing we did was develop vaccines, and obviously I'm not an anti-vaxxer; that was a triumph of science, done by national labs, universities, and industry researchers collaborating in the scientific way, right? But other than that, our global political systems dealt with it ridiculously badly, and it's a much smaller thing than the advent of AGI. So I think there is a lot to worry about, particularly as regards the impact on the developing world during the transitional period. On the other hand, I think there's a lot to be excited about also, because if we have a system that's well disposed toward us and that's ten times smarter than us, it's going to be able to rapidly solve a lot of problems that seem intractable to us at the present time.

Which is a very logical conclusion, but emotionally very exciting to consider. Ben, there's one thing we could say about the future, and that is: it will be interesting. Thank you so much for this time. I look forward to seeing you and being part of the conference next week. Talk to you soon.

Thanks a lot.

[Music]
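The evolutionary learning paradigm Goertzel references, systems that "create stuff by mutating and combining" what they have seen before, can be sketched as a minimal genetic algorithm. This is an illustrative toy only; the target string, population size, mutation rate, and fitness function are arbitrary choices for this sketch and have nothing to do with SingularityNET's actual systems:

```python
import random

random.seed(0)  # reproducible run for this sketch

TARGET = "general intelligence"          # arbitrary goal string
ALPHABET = "abcdefghijklmnopqrstuvwxyz "
POP_SIZE = 200
MUTATION_RATE = 0.05                     # per-character mutation probability

def random_individual():
    # A candidate solution: a random string the same length as the target.
    return "".join(random.choice(ALPHABET) for _ in TARGET)

def fitness(ind):
    # Count characters that already match the target.
    return sum(a == b for a, b in zip(ind, TARGET))

def crossover(a, b):
    # Recombine: pick each character from one parent or the other.
    return "".join(random.choice(pair) for pair in zip(a, b))

def mutate(ind):
    # Mutate: occasionally replace a character with a random one.
    return "".join(
        random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
        for c in ind
    )

def evolve(max_generations=1000):
    population = [random_individual() for _ in range(POP_SIZE)]
    for generation in range(max_generations):
        population.sort(key=fitness, reverse=True)
        if population[0] == TARGET:
            break
        # Selection: keep the fittest quarter as parents.
        parents = population[: POP_SIZE // 4]
        # Next generation: recombine two random parents, then mutate.
        population = [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(POP_SIZE)
        ]
    return generation, population[0]

gen, best = evolve()
print(f"generation {gen}: {best!r}")
```

The loop does nothing but mutate and recombine earlier candidates under selection pressure, which is the sense in which such systems can produce outputs that are not simple recombinations of training data: novelty comes from random variation filtered by the fitness function rather than from interpolating a learned distribution.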
Info
Channel: John Koetsier
Views: 32,203
Keywords: John Koetsier
Id: UjKmqD-Zv68
Length: 40min 11sec (2411 seconds)
Published: Tue Mar 05 2024