We Need to Do This Before AI Gets Too Powerful | CEO of Microsoft AI Mustafa Suleyman

I feel like I've been saying it to military hawks forever: this new arms race with China. What people are really worried about is this nation-state-level, massive effort to build this superintelligence. What happens, though, when we create something that's 10,000 times smarter than us? There's going to be a question of how you actually constrain something that powerful. A system that powerful is unlikely to follow your instruction. The AI might be able to make itself smarter, but it knows, okay, I need six more of these data centers to do that; it has to trick a nation-state into creating that. To have that available to absolutely anybody is going to create a lot of instability.

I have to say, your book really puts AI, and I hate saying things like this, "puts it into perspective," but it really does put AI into a unique perspective, because it's clearly not a book where someone who wants to be relevant says, "Let me throw some buzzwords, AI is really trending right now, let's do something on that." You've really been immersed in this field, and in the repercussions of what this is going to bring to us on a global level, in a super deep way. So I'm excited to have this conversation. I think it's going to be fun.

Thank you, I really appreciate that. It's been almost 15 years in the field now, thinking about the consequences of AI and trying to build it so that it delivers on the upsides, and it's just a surreal time to be alive. It's been such a privilege to be creating and making and building at a time like this. For 10 years in AI development, the curve was pretty flat. We were doing some cool things, there were some research demos, we played a bunch of games really well. But now we've really crossed this moment, this threshold, where computers can increasingly talk our language, and that is just mind-blowing. I think people are still not fully absorbing how completely nuts that is, and what it means when every single one of your devices, your tablets, your screens, your cars, your fridges, becomes a conversational endpoint. They are going to talk to you about everything you are trying to get done, the things you believe, the things you like, what you're afraid of. They're going to come alive, in a sense. I don't like saying "alive" loosely, but they're really going to start to feel much more animated in your life, and I just think that's going to change what it means to be human. It's going to change society in a very fundamental way. It will change work, and so on. It's just a crazy time to be alive.

Thinking about what you've just said, there's going to be this huge swath of people... actually, I never know how to pronounce that word. Now that I think about it, I've only read it. Do you know?

I have to say, I don't know. I was just about to say I do not know. "Swathe"? I say "swathe."

"Swathe" sounds right, but I think the American version is "swath." "Swath" also sounds right. See, we're not going to solve this right now; I'm going to look it up later. I'm just going to say "swath." There's going to be this huge swath of people who never realize how amazing this whole thing is, and also how it works. That Venn diagram is never really going to become a circle, because my kids are going to be like, "Of course you can talk to the refrigerator about a problem you're having at school," and I'm going to go, "Wow, I can't believe I'm talking to my refrigerator." It tells me the broccoli is about to expire, I ask if I'm wasting my time doing this, and it gives me a reasoned opinion. That's going to be amazing, but I'm not necessarily going to understand exactly how it's happening, whereas my son might be like, "Obviously, it's just got an LLM built into it, and it connects with Amazon's cloud servers," and you're just like, okay, fine. But to him it's not that amazing, because he'll have grown up with this being like electricity. I wasn't amazed when I saw my parents turn a light on in the bedroom; it was just a thing that was ubiquitous by that time.

Totally. All of this is becoming second nature so quickly, it's sort of mind-blowing. I even periodically find myself picking up a regular magazine and wanting to pinch and zoom on some of the text, or just swipe over. I've done that a few times, and I think it's odd. We sort of don't fully appreciate how quickly things are happening, but also how quickly we're changing. On the one hand, it feels super scary and amorphous and hard to define, and we don't know what the consequences are; the next thing, you look around, and everybody has a phone with a camera and a listening device that enables you to video with somebody on the other side of the world and stream content left, right, and center. If I had described to you, 30 years ago, a world in which pretty much every one of us in the developed world is going to have a laptop, a desktop, probably a tablet, certainly a phone; that our TV would have a camera on the end of it; and that all of those are going to be listening devices and video devices, you would think I was a crazy dystopian, a scary sci-fi addict. But actually, that has happened seamlessly, naturally, and actually without huge consequence. Yes, there are downsides, and for sure we have to be conscious of those, learn the lessons of those, and really talk openly about them and not belittle them. At the same time, the world is clearly smarter, more engaged, more connected, way more productive. We all have much more access to information, and that's hugely democratizing and hugely liberating. So it happens more seamlessly, and in a kind of more profound way, than we're ever able to imagine ahead of time.

Yeah, you're right, you're on to something here. Of course, you've thought quite a bit about this yourself in preparation for the book. The mobile phone, and man, the laptop thing you mentioned, that happened so fast. I remember, because I was in law school. The year I graduated from college, nobody had a laptop in class; maybe one person in a class of two or three hundred people would have a laptop, and you're like, that guy's kind of weird, but I guess he's really organized, or he's really into computers or something. He'd be typing, and his battery would run out 45 minutes into class, because that's how long batteries lasted on laptops at that time. Then I went to teach English abroad in the former Yugoslavia for a year, and I came back, and in law school the first year, bear in mind this is like a year and change later, 80 percent of people had laptops. There were some older people who were like, "I'm not going to type things out, that's ridiculous," and then the next year all of those people had given in: "It's just easier, man, I can search for things, I don't have to look through this notebook." There was maybe one person handwriting notes, and that person was like, "I can't focus when I have the laptop, and I'm not paying attention in class," and I was like, oh, that's actually really smart, I should probably do that too. Didn't take that advice; should have done it, would have learned more about the law. But it changed almost overnight in terms of the number of years. And with the iPhone as well: all my BlackBerry friends at the law firm were like, "Oh, I'm never getting one of those, I already have this BlackBerry, it has Brick Breaker, it has a keyboard, I'm not going to use a touchscreen, I need a keyboard." One or two years later, they were like, "Have you seen this? This has apps on it, it's unbelievable," and I'm like, yeah, I told you. And nobody went back. Even my dad is addicted to his damn phone.
Well, and people also say, of course it's going to make you dumber, it's going to make you lazy, you'll forget how to write. And I mean, I probably have forgotten how to handwrite, to be quite honest.

Pretty awful, yeah. Although doctors write all day, and their handwriting is the worst, so I don't know what that's about.

They said that about calculators, they said that about phones: it makes us more lazy, it makes us less connected. I think that's partly true, sort of, but it's also a connection in a new kind of way. I'm a huge fan of TikTok; I actually love it. Do I get addicted to it periodically? Absolutely. Do I need to take a break from it sometimes? Yes, it's a kind of strange relationship. But at the same time, it gives me access to an unbelievable amount of content that is so obscure and strange and detailed and subtle, and it's just mind-blowing to see people who never would have thought of themselves as, quote-unquote, creators. They didn't go to drama school, they're not art directors, they haven't been studying film all their lives; they've just suddenly been given this tool. Whether it's harmonizing with the air-conditioning unit, or filming a beautiful frog, or doing a silly dance, whatever it is they do, there's just this massive range of creativity and output. I think it's important not to downgrade or diminish how beautiful it is to see billions of people have access to knowledge and tools to be creative and productive, because it is incredible. So far it hasn't made us dumber, it hasn't made us slower, it hasn't made us more disconnected. We should be alert to those risks, no question, but I think we're trending in a pretty good direction.

Yeah. Was it Aristotle or Socrates, I always get these guys confused, who said books are going to be bad because nobody's going to memorize information anymore? That was the basis for being a learned person back then. I think it was Socrates, and it was like, don't write anything down, that's the end of civilization as we know it, because you're supposed to have this stuff in your brain, where it mixes with other ideas. He kind of has a point there, but that doesn't mean you can't have it in a book too. So the technophobic attitude is always going to be there. But look, Kevin Kelly has said this: AI is going to change the world more than electricity did. Do you think that's accurate?

Without question. Absolutely, without question. It's even hard to describe AI as a technology. We are a technological species; from the beginning of time, we have been trying to create shelter, use stone tools, do needlework to create fabrics. We have been manipulating the environment to reduce our suffering, and that is the purpose of a tool. But a tool has always been inanimate: it can only ever do precisely what you instruct it to do. You may instruct it with your hands or with language, but it's been an engineering output of our activity. The profound shift that we're going through now is that we're giving rise to this new phenomenon that I hesitate to call a tool, because it has these amazing properties: the ability to create and produce and invent way beyond, and disconnected from, what we've actually directed it to do. When you say "write me a poem" and it produces a poem, are you really the tool user in that setting? You've maybe framed the poem, with a zebra, in a French classical style, about its relationship to a check shirt. Okay, you've asked it to connect three random concepts, but really the power is in the production of this output. And in time, these are going to get more autonomous, they're going to have more and more agency, we're going to give them more freedom to operate, and maybe people will even design them to have their own goals and their own drives. So the fundamental qualia of this new phenomenon, this design material, feels to me quite different from the engineering of steam or electricity or the printing press.

Okay, that makes a lot of sense, but one thing kind of tripped me up: you said you could program it to have its own goals. That's where it gets a little bit scary, right? Because a goal in a human, or in an intelligent being, I should say, evolves. I'm sure a goal for a dog evolves too, as it satiates hunger or whatever; we're getting philosophical here. But that could be kind of bad news, right? That's the plot of every dystopian AI sci-fi movie, from Terminator on down: the AI is supposed to protect peace on Earth, and it's like, oh, the problem is humans, they're the ones causing all the wars, let me just get rid of those folks. I get that it's great that goals can evolve, but controlling this tech (and we'll get to your ideas for how to control it or contain it in a bit) just seems impossible, because we're essentially designing it not to be limited in that way.

Yeah, and I think the crucial word there is "we are designing it." Who is that "we"? "We" kind of implies that you could point at a specific lab or a government department or a specific company, and obviously all of those actors are involved in making AI and experimenting with this new tool and technology. But the truth is that there's this massive morass of millions of developers, who all have their own motivations and incentives, who are all experimenting in different ways.
Most of this is open-source software; it's all happening in many, many different locations, and so there isn't really a coordinated, centralized "we." I think that's the first big thing we have to wrap our heads around if we're going to think about how we can contain it: this is a very distributed set of incentives driving creation forward. The thing I am most concerned about, touching on what you've said, is that it is going to be possible to give these things goals, it is going to be possible to give these things more autonomy, and it is going to be possible to design them so that they self-improve. Those three capabilities will be pretty dangerous, for sure. They are going to increase the level of risk, because a system can wander off and come up with its own plans instead of following your plans. So what we have to start to think about is how we coordinate as a species over the next 20, 30, 40 years, because these capabilities will arise; there's no putting them back in the box. We have to decide what we don't think is acceptable, where the risk is too much, and what has to be off-limits, just as we do with many, many other technologies. You can't just get a plane and fly it around downtown Seattle, you can't fly a drone around, you can't drive a car in a way that violates the highway code, you can't drive a tank down the street even though you can buy one privately. There are rules everywhere about everything. So we've done this before, and we can do it again. It's just that each time we create these new rules, it's significantly different in important ways, different from what's come before, and that's what feels scary and unprecedented about it. And of course, this is very different: this has these semi-lifelike, or digital-person-like, characteristics, and that does feel pretty sci-fi. It's going to be a very strange time.

We've done some episodes on AI in the past, and people are worried about a surveillance state that might come as a result of it, or backlash in some other way, misinformation running rampant, deepfakes, and things like that, which I'm sure we'll touch on later in the episode. But one that comes up all the time, that I think is more relatable or likely, is this mass unemployment idea. That seems more likely than total annihilation by Skynet, right? Marc Andreessen said this on the show: in the past, technology gets rid of jobs, but then it creates other jobs, and there might be a lag. But it's happening so fast with AI that I'm not really sure if jobs will be created at the same rate, or a similar rate, as AI makes them obsolete, because AI is developing so fast. So I'm curious what you think. It just seems like AI is developing on a curve that is so fast, especially as AI learns to develop itself, that a lawyer isn't just going to say, "Oh, well, now that I don't have to do legal research anymore, I'm just going to do this totally different thing," and then that gets taken up a year or six months later; the guy's just going to retire.

Yeah, I completely agree. The thing people don't pay enough attention to is that just because it's happened in the past doesn't mean it's going to happen in the future; that's such a simple line of reasoning. People often say, like I guess Marc Andreessen, that we've always created new jobs. Well, in order to believe that, you have to make the argument today that the very thing that is disrupting existing jobs is not going to do the new work that is supposedly created as well. If you're a knowledge worker, or a lawyer, or you work as a project manager, or you just do a regular job using a computer for most of your day, and you Zoom and you send emails, these AIs are going to be able to do those tasks very, very cheaply, quite accurately, and 24/7. Then you have to ask yourself, okay, so what incentive do companies have to keep people in work versus use this cheaper thing to replace them? It's pretty obvious that the shareholder incentive is going to say, well, we might be able to make a lot more money if we could cut out this labor. So then you have to say, okay, what is this new type of work that is going to come, which AI won't be able to do, and how do we fund it? And that's not a stupid point; it's pretty reasonable. Maybe we could start to properly fund healthcare workers, maybe we could properly fund and pay for education, maybe we could properly fund elderly care and home help or community work: physical things in the real world that aren't naturally what AI can do in the next few decades, because AI is mostly going to target white-collar work. And that's, again, I think surprising to people, because the narrative from sci-fi and from the last few years has been, well, the robots are coming for the manufacturing jobs. Absolutely not; robots are a long way behind. What's actually going to happen is that knowledge workers who work in a big bureaucracy, who spend most of their time doing payroll or administration or supply-chain management or accounting or paralegal work, these kinds of things, and I think we're already seeing it in the last 12 months or so, are going to be the first to be displaced. And that just leaves a question for society: what do we do with that? That's great value; the question is who captures that value and how it is redistributed.

Yeah, it's fascinating. One of the biggest plot twists of my life, in terms of tech, is seeing now that robots are coming much later than robotic brains, or artificial brains. I think we were all kind of raised to think, oh man, eventually a robot's going to do this, a robot's going to do that. No: we still need the guy who unloads the truck. We just don't need the CEO of the company anymore, or whatever; the legal department is now useless, the accounting department is now useless, and pretty much everybody in that skyscraper the company bought is mostly redundant, because now we have a box somewhere in the cloud, in an Amazon data center, that does all of that. We still need the entire network of people driving and bringing the package to your door; those people are fine. It's just a really big upside-down apple cart.

Yeah, it's totally the opposite of what sci-fi predicted, which is a good reason not to take anything for granted, and not to just assume that we're going to create new jobs, or that the narratives of the past are actually what's going to happen in the future. It's unprecedented, and so you have to evaluate the technology trends in their own right, for their own reasons. And when you actually look at the substance of it, AIs use the same tools we use to do our work. They use browsers; they'll be able to navigate using a mouse and a keyboard, and effectively, in the back end, using APIs; and they can process images, so they can just read the screen of what's on your desktop or inside your web page. They can now write and send emails, negotiate contracts, design blueprints, and produce entire spreadsheets and slide decks. Those skills combined are what most of us do day to day in our regular jobs, in white-collar work, and so that's what we're going to have to confront over the next decade or two.

It's quite fascinating how quickly this is all happening, and unfortunately the head-in-the-sand approach seems to be kind of the policy among people. In the book, you say something along the lines of: humans are reacting like, ah, waves are everywhere in human life, this is just the latest wave. We had the wave of this, we had the wave of that; the internet came, everyone said Y2K, nothing happened, we're still computerized, the internet's great. Why is this wave, the AI wave, different? Why isn't this the exact same sort of worry, fearmongering, fear of the unknown, that everything else has been in the past?

That's a great question, and I think the first thing to say is that the results are self-evident. In this case, you can actually now talk to a computer; there's no programming required. You can actually get it to produce novel images. This is the kind of funny thing: people said, well, okay, AIs are never going to be creative; AIs will only be able to do rule-based math. Do you remember that? That was only a couple of years ago. And now you look at a piece of art that an AI produced, or you look at one of these image generators, and it's stunningly creative, and now obviously producing real-time video as well. So it's pretty clear that AIs are, quote-unquote, creative. And then people always used to say, well, AIs will never be able to do empathy and compassion and kindness and humanlike conversation; that's always going to be the preserve of human-to-human touch. Well, actually, it's self-evident; the results speak for themselves. If you look at our AI, Pi, for example, which we make at Inflection, it is unbelievably fluent and smooth and friendly and conversational. It's like chatting to a human, and many people find it better than speaking to a human: it doesn't judge you, it's always available, it's kind and supportive. So I think that's the first reason: you can actually see the power of these models in practice. The second thing is that the rate of improvement is kind of incredible, and what's driving this rate of improvement is training these large-scale models. What we've seen over the last 10 orders of magnitude of computation, so 10 times 10 times 10, ten times in a row, of adding more computers to train these large models, is that with each order of magnitude you get better results: the image quality is better, the speech recognition is better, the language translation is better, the transcription is better, the language generation is better. You can clearly see that this curve has been very predictable, and over the next five to ten years, many labs are going to add orders of magnitude, 10x, 10x, 10x per year. So I think it's quite reasonable to predict that there's going to be a new set of capabilities beyond just understanding images and video and text: AIs are going to be able to take actions, they're going to be able to use APIs, they're going to be able to predict and plan over extended time sequences. And so I think that's why we're all predicting that this time is different.

It really is amazing. You're right, the creativity thing blew me away, looking at some of these image generators. Somebody posted something, this is literally maybe a year or two ago at most, "Look at this AI-created image," and I thought, well, okay, but how does it create the image? Surely it just had an image, changed some of the things in it, and redrew it. And it's like, no, someone asked it to draw, I don't know, Jordan Harbinger in front of a communist flag, standing on a mountain, and there it is, in a few seconds. That was really mind-blowing, because if we can do that with still images, and you mention now with real-time video, that just eliminates a ton of work. But also, eventually you're not going to have to ask it to do anything; it's just going to start creating things. I'm sure you could already craft an AI that would just start making things according to your own preferences and continue to do that. Marc Andreessen also gave the example of unlimited content: say, instead of watching something on Netflix and hoping they get it right, you just tell Netflix what you like, or it already knows, because you've watched 10,000 things on Netflix over the past 30 years by that point, and it just says, "We've made a show for you. It's kind of like Game of Thrones, except it's got that futuristic dystopian stuff, all the dragons are robots, and it takes place in space, because you like Star Wars." And you're just like, "I'll watch that." Then after the first episode, it's like, "Hmm, your eyes were more engaged when the dragons were fighting, so the next episode's going to have way more of that kind of conflict. Oh, you don't like the space stuff and zero gravity? All right, fine, we're going to bring it back down to Earth in the next episode." It's just going to be able to do that kind of thing. And people will of course say, well, how is it going to know what you really like? To your point, when people have said, "They're not humans, they can't read emotions," I think computers are now better at reading emotions than humans are, in tests. A robotic doctor could actually have a better bedside manner than a human doctor who's really good at their job.

Yeah, you're totally right, and that kind of personalized content generation is definitely coming. It's actually what we're trying to do with text and image articles with Pi. Pi actually generates you a news briefing every morning now that's personalized to you: five stories, in spoken text, with a nice image to go with it, summarizing what's happened in the news, and then you can actually talk about the news with Pi. Based on how you react to the different stories, you may say, "Oh, I'm really not interested in that kind of sport," or "I'm sick of hearing about this war that's going on," or "I'm really into bicycles," and the next day Pi is going to produce something that's closer to what you like and what you're interested in. And that is, in a way, where we're already at, so let's not get too carried away here; that's what a podcaster does, that's what a content creator does on TikTok. They're constantly trying to produce things that are more interesting and surprising and educational to people, and so we're now just automating and speeding up that process. But you're right, the thing we have to think about as a society is where the boundaries and limits are. How do you contain this? What is off-limits? There have to be some limits, right? What subject matter, what style of persuasion? Is it okay if I just get to consume whatever information I want, as an individual? Should it be entirely free and decentralized? Clearly we don't want it to be top-down and run by a tiny number of companies. We also don't want it to be run by a tiny number of governments that can say, censor this, that, and the other; we can see what's happening in China as an example of a way we don't want to live. No one has the answer, so if anyone comes to you lecturing, "Well, it should be this, this is the problem, that's the criminal," the truth is none of us fully knows exactly what the right next step is. But the more we talk about the risks, the more we proactively lean into those conversations and don't, like you said earlier, put our heads in the sand. In the book, I tried to frame it around this pessimism-aversion trap. I think it's particularly an issue in the US, where there's such a desire to believe that the future is going to be better, and I think that bias toward optimism leads people to be afraid of talking about potentially dark outcomes. We have to talk about the potential ways in which things can go wrong, so that we can proactively manage them, so that we can actually start putting in place checks and balances and limits, and not just have a bias toward optimism that leads to us missing the boat when it comes to the consequences, which affect everybody.

Yeah, this is wise, because, look, when I was a kid, I had an Apple IIc. It was the kind of computer we had at school, and I had one at home because my mom was a teacher, so we got one. It had 64 kilobytes of RAM. Now I think I've got 64 gigabytes of RAM in my gaming laptop over here, which, for people who don't know, is a hell of a lot more. And within memory, I drove to a computer store and bought a 420-megabyte hard drive, and I remember getting home and going, "I'm never going to fill this thing up." Now, if I go and download a game, the update that has bonus graphics on it or something is way more than 420 megabytes; it's probably 42 gigabytes or something like that. That hard drive wouldn't even scratch it, and it would also take me three hours or more to write that much to that hard drive. So this is in part due to something called Moore's law, when it comes to processors. Can you first tell us what Moore's law is? And then, naturally, my follow-up is: is there a Moore's law for AI?

Yeah, great question. Moore's law was predicted by the computer engineer Gordon Moore, the founder of Intel, which manufactured computer chips, back in the late '50s. He predicted that transistors, or computer chips, were going to get radically cheaper, halving in cost every year, for the next 60 or 70 years, and the crazy thing is that that is exactly what's happened. We've been able to cram more transistors onto the same square inch for the same price, and so we've seen this reduction in cost, and this increase in the density of transistors, year after year after year, which is basically what you're describing: your hard disk is still the same size, and in many cases it actually got smaller; you now have a thumb drive in place of your 420-megabyte drive from back in the day. That has been the main thing powering this massive revolution, because for the same price we can store more, process more, and so on, which means we can have photorealistic graphics, and which means we can have these AI models now that have access to all the information on the web and super amounts of knowledge. In the context of AI, there is a more extreme trend than that: as I mentioned earlier, this 10x increase per year in the amount of compute used to train the cutting-edge AI models. So instead of doubling per year, which is the Moore's law trend, we're increasing the amount of compute by 10 times per year, because in this case we don't need the compute to be smaller; we can just daisy-chain more computers together. Our server farm at Inflection, for example, is the size of four football pitches. It's absolutely astronomical; it uses something like 50 megawatts of power. You look at it, and it's absolutely mind-blowing; it roars like an engine. And all of that is really just graphics cards, just like the GPU graphics card you might have in a desktop gaming machine; we just daisy-chain tens of thousands of these together so that they can do parallel processing on trillions of words from the open web. Every time Pi produces one word when you're in conversation with it, it does a lookup of 700 million other words.
right that's Banas I mean it kind of lights up or activates or kind of pays subtle attention to 700 million words every time it look it produces a new word obviously so when it's producing you know paragraphs and paragraphs of text that's a huge amount computation um and so that that is the trend that is accelerating much much faster than Moors law and is going to continue um for many years to come I would assume at some point the AI itself will figure out how to make that process more efficient because it's learning everything that there is to know from at least that there is on the internet which is pretty close to everything it just seems yeah I mean we we have that today like so we have really th th those that um server farm that I described to you we train one giant AI out of that server farm and we actually use it to teach and talk to smaller AIS which are cheaper for us to run in production when you get to chat to it because it's more efficient for us to have rather than paying tens of thousands of humans to talk to our small AIS to teach them which we do do as well we have 25,000 AI teachers from all walks of life and all backgrounds of and all kinds of expertise and they talk to the AI all the time and they're paid to give it instruction say this isn't this is factually incorrect you know this isn't very kind this is what funny looks like you know etc etc now we're actually getting the AI so good that it can do the job of the AI teacher better than the human Ai and teach these smaller models to you know behave well so you know what you described in a way is kind of already happening that reminds me of the way podcasts work in Bri that people think oh I download this from your server but not really right so I upload this to a server which is probably on one of Amazon's data centers but if somebody in Japan downloads this episode of the podcast there's a copy of that file cached somewhere on servers that are probably I don't know outside of Tokyo somewhere 
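The edge-caching setup being described here — serve later listeners in a region from a nearby cached copy rather than from the distant origin, with a region-appropriate ad swapped in — can be sketched in a few lines of Python. Everything in this sketch (the file name, the ad markers) is hypothetical, and real CDNs handle cache validation, expiry, and ad insertion far more carefully; this only illustrates the hit/miss idea.

```python
# Toy sketch of an edge cache (illustrative only; not how a real CDN works).
ORIGIN = {"episode_42.mp3": "audio-bytes + english-ad"}  # hypothetical origin server

class EdgeCache:
    def __init__(self, origin, local_ad):
        self.origin = origin      # the distant "main" server
        self.local_ad = local_ad  # ad appropriate for this region
        self.store = {}           # files already cached at this edge

    def fetch(self, name):
        if name not in self.store:
            # Cache miss: pull from the origin once...
            body = self.origin[name]
            # ...and swap in the regionally appropriate ad before caching.
            self.store[name] = body.replace("english-ad", self.local_ad)
        # Every later listener in this region is served from the local copy.
        return self.store[name]

tokyo = EdgeCache(ORIGIN, "japanese-ad")
first = tokyo.fetch("episode_42.mp3")   # miss: goes to the origin
second = tokyo.fetch("episode_42.mp3")  # hit: served from the Tokyo cache
```

The same shape applies to the specialist-AI routing discussed below: common queries are answered by a small, nearby model instead of pinging the giant data center.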
And then if somebody else in Japan downloads it, they don't connect to my server in the United States — they connect to that server in Japan. The Japan server says, "Hey, is this file the same one you're still putting out over there in America?" and our network says, "Yeah, but we want to put this ad in that's in Japanese, because the one that ran was in English — just swap that out." The server essentially says, "Okay, cool," and gives them the exact same file. It sounds like that's a little bit of how Pi works, right? It's almost like, "Oh, other people have looked up recipes in the United States; we don't have to ping the main guy over there in that giant multi-football-pitch data center. We already understand how to tell them how to cook this soup. It's been done — here it is." I know I'm oversimplifying it, but —

No, that's actually a great metaphor. Another way of putting it is that you don't need to ask a professor of computational neuroscience for the recipe for spaghetti bolognese; you can go to an expert in that area who doesn't require 20 years of training in neuroscience. That's exactly the concept: just as we deliver content to different parts of the web, we have different specialist AIs that are really small and efficient at answering different types of questions.

Thanks for watching on YouTube. Remember, you can also enjoy The Jordan Harbinger Show on Apple Podcasts, Spotify, or wherever you listen to podcasts. Our podcast feed is a treasure trove of insights from intellectuals, authors, spies, artists, athletes, pioneers, engineers, former Mafia bosses, and business leaders, all sharing their secrets to success. For more information, click the link in the description. Now back to the show.

Tell me about the video-game-playing AI machine, for lack of a better word, that you designed back in the day, because it sounds like one of your first experiences seeing AI do something — I'm putting this in air quotes — "truly amazing." It's funny, because it's something you do when you're nine years old playing the same video game, but at that point it was totally mind-blowing.

Yeah, that was more than ten years ago now, 2013. We trained an AI to play the old-school Atari games — things like Pong, where you have two paddles and bat the ball back and forth; Breakout, where you bounce a ball up and down with a paddle at the bottom that you control left and right to knock down the bricks; or Space Invaders, where you shoot the enemy ships. The crazy thing is that instead of writing a rule that said, "If you're in this position and the ball is coming in at this angle, then move the paddle left one degree," and so on, you basically allow the AI to just watch the screen and randomly move the paddle back and forth, left and right, until it accidentally stumbles across an increase in score. Then it's like, "Oh, that's pretty cool — I managed to increase the score. How did that happen? I'll try to do that next time I'm in that position." Through random self-play — millions of games against itself, because it sees all the screens, 24 frames a second, frame by frame, all the pixels — it's able to learn a pretty good strategy for the game. And then one day we saw that it had actually learned a strategy called tunneling, where it would ping the ball up one side as often as possible, aiming at the same place, which would force the ball to bounce behind the bricks, back and forth, up and down, getting maximum score with minimum effort. That was not a strategy most human players knew about — some discovered it, but I certainly didn't — and that was pretty mind-blowing. These things can not only learn to play well; they can learn new knowledge, new strategies, discover techniques and tricks that could actually be useful to us. And that was, after all, why we started building AI. That's what we want from AI: to help us solve our big problems in the world — tackle climate change, improve drugs and healthcare, give us self-driving cars, help us feed eight billion people and growing. To me that's always been my main motivation, and when I first saw that, it was the first sign that we were onto something, back ten years ago.

The reason that's so amazing — and I think it's easy to gloss over this and go, "So what? It learned from the best players and copied the strategy" — is that that's not what happened. It didn't watch somebody who's really good at Breakout and go, "Okay, what he does is break open the side, get the ball stuck up there, and then it does the rest of the work on its own and you can't really lose." It figured that out through trial and error, which is really incredible, because you might have to play Breakout for weeks, months, or even years before you come across that strategy by accident and go, "Oh, I need to replicate that." This can figure it out in seconds, potentially. And then we also want AI to figure out the equivalent of tunneling for, I don't know, cancer research, or something in quantum physics that we would never figure out because humans haven't been there yet. The AI goes, "Huh, if I want this particle to last longer than a few milliseconds in a controlled environment, I need to do all these other things," and bada boom, bada bing, now I can make elements that don't exist that can be used to, for example, generate power.

Yeah, I think that's totally right. That's the ambition, and I think it's a very noble one, because in the world today we've got a lot of challenges — whether it's food or climate or healthcare, the prize is big, and we need assistance in trying to invent our way out of these challenges. All we've got so far is our human intelligence. Everything of value in the world today is a product of us being smart at predicting things, and that's basically what these AIs do: they absorb tons of data and information and they make great predictions. So the thesis is that we could scale up this prediction engine over the next couple of decades and really have some massive impact.

Are you able to explain, briefly, how AI works? You mentioned before that it looks up 700 million words, and in the Marc Andreessen episode he said it's like a really fancy, smart autocomplete. I understand what autocomplete is, because I use Google and my phone tries to guess what I'm going to say — and it's often right — but that doesn't really scratch the itch for me, because then it's just reliant on looking at what humans have done, which is not really what we're saying AI does, right?

I think that's right. Look, it is very difficult to describe, because it's hard for us to really intuit and deeply understand very large numbers and very large information spaces. The first thing to wrap your head around is that one of these large language models reads — many, many times over — everything that has been digitized on the open web. That's trillions and trillions of words: books, blog posts, podcasts, YouTube transcripts, everything where there's text — it's consumed it. What it's learning to do is this: it covers up the future words and, given the past words, predicts which word is likely to come next. It's almost like it memorizes the whole thing, and then you test it: given the phrase "the cat sat on the," what is the probability that the next word is "head," "chair," "car," "plane," "road," "banana," "continent"? There's going to be some probability assigned to every single one of those words, even the really, really weird words that have never appeared after that sentence — and of course the most likely one is "mat." But that's a very simplistic description, because it's not only good at predicting, or autocompleting, which word is going to come next; it's able to do that with reference to a stylistic direction. Just as you say to an image generator, "Produce me a banana in the shape of an owl, in the style of Cézanne" — you might be able to imagine that weird combination in your head. What the AI is able to do is take those three concepts — not just the plain word "banana" but its entire experience of banana, every single setting in which "banana" has arisen, all the different combinations and shapes and styles, this very multi-dimensional, hazy representation of banana — and then interpolate: predict the distance between "banana" and "owl." That's a very powerful thing, because it's a position on a curve: it could be very, very owl-like, or very, very banana-like. Now imagine you add in all the other words — imagine the owl is flying, imagine it's big and red, imagine it's a banana that's going off, imagine it's a banana that's been thrown off the edge of a building. Now we're honing in; we're reducing the size of the search space. It's almost like adding filters to narrow down the space of all possible things. And that's just a very difficult thing to grasp when it's massively multi-dimensional. I've only described it in the context of two or three concepts, but imagine hundreds or thousands of concepts of stylistic control. As the models have gotten larger and gained access to more compute, you get more fine-grained control, and that's why they're more accurate and more useful: they're able to attend to multiple stylistic directions simultaneously.

So — and that's incredible, by the way; that was a really good explanation — as this stuff gets more complex, are we going to have trouble, or are we perhaps already there, getting under the hood of an LLM, of an AI as we know it today, and seeing why decisions are made? If I look at a human brain and go, "Hey, brain, why did you buy that jacket when you already have lots of jackets and you live in California?" my brain goes, "Well, there's going to be an occasion where I really need a brown suede jacket, and this one has fine details, and it's going to come in useful" — and I don't really care; I just really wanted the jacket. I'm quite self-aware that I bought a jacket I didn't freaking need, and now I'm really trying to rationalize that purchase because it was expensive. That's the best my brain can do, and I'm a reasonably qualified human leading a mostly successful life. What happens when we're looking at an AI brain that is so much more sophisticated than our own, but it's being terrible in some way? Are we going to be able to get in there and diagnose that, or is it going to be too complicated a black box?

Well, that's a cool question, and I think you kind of nailed the answer in your question: humans hallucinate all the time. Our main mode of communication is to retrospectively invent some narrative that seems to fit the bill.
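Incidentally, the "cat sat on the ___" example from a few moments ago can be caricatured in a couple of lines. The probabilities here are invented for illustration, and only the final step is shown — picking the likeliest continuation; a real language model computes a distribution like this over tens of thousands of tokens with a trained neural network, conditioned on all the preceding text and those stylistic directions.

```python
# Toy next-word "predictor" with made-up probabilities for the prompt
# "the cat sat on the ___". Real LLMs learn these numbers from data.
probs = {
    "mat": 0.62, "chair": 0.18, "car": 0.08,
    "road": 0.06, "plane": 0.03, "banana": 0.01,
}

def next_word(distribution):
    # Greedy decoding: return the highest-probability continuation.
    return max(distribution, key=distribution.get)

print(next_word(probs))  # mat
```

Sampling from the distribution instead of always taking the maximum is what gives these models their variety from one run to the next.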
We're constantly being creative and making stuff up. In fact, when you remember something, you don't really remember. What did you have for breakfast this morning? You have a very vague, loose memory — maybe you can get it. What did you do two weekends ago? You're going to be creating all kinds of stuff that is plausible and vaguely within range. We make things up all the time; that's what creativity is, and that's what a hallucination actually is. And we don't have very good ways of inspecting inside a human brain — you can whack somebody in an fMRI scanner, but it's pretty crude and it's not reliable. The way we trust one another is that we observe what you say and what you do, and if those are consistent with what you said you were going to say and do, then over time we build up trust, because we have that continuity between intent and outcome. That's the behaviorist model of psychology: we observe the output and focus less on introspection and inner analysis. Practically speaking, I think that's going to be the standard we hold a lot of these AIs to for the foreseeable future. Now, you could ask — which I think is what you're getting at — isn't this thing, in the long term, going to be really good at deceiving us, because it's just going to get smarter and smarter? The good news is that we can actually interrogate these models better than we can interrogate humans. It's not perfect, but we're developing methods for identifying when an AI has been deceptive, when it has misrepresented something, where in the model different types of ideas or concepts sit, and what the causal relationship was that led up to a particular output. The challenge is that this is very early research. But the good news is that it's software, and we have a better time investigating and interrogating software than we do the biology of the human mind.

That does make a lot of sense, because if I go back and ask myself why I got that jacket, even if I'm really trying to be honest with myself, I'm still going to sugarcoat the answer so I don't feel like a dumbass. But if you ask the AI why it said something sexist- or racist-sounding, it might actually just go, "Because this training dataset over here says that this kind of person often does this kind of thing," and you go, "Okay, we've got to take that out of the soup. That's not the kind of data we want floating around in here; it's not accurate; we don't want it affecting your decisions in the future." And the AI goes, "Okay," and it's as good as gone — it can ignore that. I can't do that in my brain. I can't stop.

Yeah, exactly. And that's actually one of the weaknesses of being human, in a way: we have emotional drives. At the moment, AIs don't have emotional drives, and it's unclear whether they need them. Going back to my list of capabilities that should be off limits because they potentially cause more risk: I listed autonomy; I listed recursive self-improvement, where an AI can get better over time on its own; and I listed the ability to set its own goals. I would add to that having emotional drives. It's not clear that we want AIs that have intrinsic motivation — ego, impulse, a desire to do things or go places. Really, these should be treated as tools that work for us. They can still be very, very capable, but I'm not clear that I see the justification that adding emotional drives would be a massive benefit to society. These are the kinds of tricky conversations we have to have: what is the benefit there? It's not clear. I think we can have an amazing scientist, an amazing teacher, amazing knowledge workers that can be useful to businesses and be creative and so on, without emotional drives.

Yeah, the idea of a computer — well, an AI — having emotional drives is something straight out of Star Trek. Thinking about a computer that has an ego — and I say "computer," oversimplifying, because a lot of laymen are listening to this, like myself — the idea that the computer would go, "But I have to be right about this one thing, and you're making me feel bad, so I'm going to destroy your whole civilization" — yeah, we don't want that. It doesn't take a genius to figure out that that would be a bad outcome and that we should say that's off limits, especially as we create this amazing tool, or set of tools, if we can even call it that, that's so much smarter than us. You mentioned in the book something called the gorilla problem — and by the way, folks, if you buy the book, please use our links in the show notes to help support the show. The gorillas we see are, of course, bigger and stronger than us, but it's they who live in zoos. We're smarter, so we can sort of trick them into getting into a cage, and then we put them in the zoo and they can't get out. We're currently masters of the ocean, the land, the air — increasingly even of space. So what happens when we create something that's 10,000 times smarter than us in pretty much every measurable area? The idea is that maybe it'll put us in a zoo, and I just hope the zoo looks a lot like where we are right now — although now it sounds like I'm talking about simulation theory; maybe we're already in the zoo.

Look, I think the good news is that 10,000 times smarter than us is a long way off, so we've got time to figure out that problem. I believe so, anyway. Some people think it's closer — 20 years. I think it's more like 40 or more, and beyond that time horizon it gets very hazy; it's hard to judge. But it doesn't feel like we're on the cusp of that anytime soon. I know that's not the most scientific analysis, but instinctively that's where I think I and most of the field are at the moment. The point to make is this: we wouldn't be able to prove that a system that powerful could be contained and would be safe, and therefore, until we can prove unequivocally that it is, we shouldn't be inventing it. That, I think, is pretty straightforward common sense. We can still get tons and tons of benefit from building these narrow, practical, applied AI systems. They'll still talk to us; we'll have personal assistants; they will automate a bunch of work we don't want to do; they will create vast amounts of value. We'll have to figure out how we redistribute that value so that everybody ultimately ends up with an income. But that does not mean we have to create a superintelligence — it just means we will have created a huge amount of value in the world, and the structure of society and the politics and governance around that is going to look very different from today.

I can get behind that, and I think a lot of people can. The only place people might take a little bit of issue is: okay, we should probably not build that — and then China goes, "Okay, fine, don't build it; we're probably going to try to build it, though." And I think you've said something along the lines of: if one side is not in an arms race, but the other side thinks they're in an arms race, then there's an arms race.

Yeah, I did say that, and I think it is true — which is a trap we have to unpick ourselves from, because the other side doesn't want to self-destruct either. They're not crazies. I mean,
thankfully, even Putin doesn't want to commit suicide. Everybody has a survival instinct, and that is what has led us to create relative global peace in the postwar era with nuclear weapons. This idea of mutually assured destruction has actually been an incredible doctrine: even though there has obviously been a huge amount of suffering and war over the last 70 years, we haven't been at world war, and that's great news, because it shows that everybody will ultimately act like a rational actor if their future is truly threatened. So the argument that I, and others in the field, have often made is that a system that powerful is as unlikely to follow your instruction — to obey you — as its maker as it is as its enemy. At that point it's not going to care whether you're China or India or Russia or the UK, whether you're a government or just a random academic. There's going to be a question of how you actually constrain something that powerful, regardless of where you're from. I think that's an initial starting point for thinking about how we all add some serious caution here, if and when we get to that moment in decades to come. Just to be clear, we're nowhere near that right now, but it's a question we have to start thinking about.

Yeah, it's scary to see the prediction that AI could then self-improve, because it seems like as soon as it gets to that point, that curve could go so fast that we just wake up one day and it surprises everybody. Or is that a sci-fi concern that I don't really need to have?

I think it's a sci-fi concern. It's called an intelligence explosion, and there's just no evidence that we've seen that kind of thing before. However, the more we deliberately design these AIs to be recursively self-improving — to close the loop, update their own code, interact with the world, and then update their own code again — and the more compute you give a system like that... well, here's the good news: they run on physical things. They feel like information-space bits, but they're actually grounded in atoms. Those atoms live in servers; those servers live on land, which is regulated by governments. So there is a choke point around which governments, the democratic process, and people in general can hold these things accountable and can rate-limit progress, and that's obviously good news. I don't see this happening in a garage lab anytime soon.

That does make sense. It's kind of like, if a car were sentient, it would still need gasoline, and that gasoline still has to come from refined petroleum, which you still have to dig for with an oil well. So the AI might be able to make itself smarter, but it knows, "Okay, I need six more of these football-field-sized data centers to do that." It can't just sneakily get those overnight; it has to somehow trick a nation-state into creating them and then do something nefarious with them — which, in theory, gives us a lot of opportunity to go, "Do we want to do this? Is this a good idea? Maybe we shouldn't. Maybe we need a safeguard — a physical off switch, where somebody can go rip the plug out of the wall if this thing starts going cuckoo on us."

And just to be clear, that is what's already happening. This company NVIDIA that makes the AI chips — the GPUs I described earlier — in the last year its share price has gone up three times or something crazy; it's one of the big trillion-dollar companies now. And those chips — the very cutting-edge ones — were regulated by the US government last year so that they couldn't be exported to China. So I think there's already a pretty good understanding of the potential for this to be used for military purposes as well, and government has moved fast on this, proactively intervening to protect national security, and now, playing catch-up, starting to think about how it actually affects us domestically. So I agree with you: there's naturally going to be friction in the system, which gives us time to see — look, are we just fooling ourselves, being doomers and exaggerating? Is this kind of talk actually just nuts, or is it real? Is it actually happening, and do we need to take other interventional measures to make sure it turns out the right way?

If we do end up in an AI arms race and we get to AGI first — and maybe we should explain what AGI is, as separate from AI — but let's say we get there first, the pinnacle of AI development. We breathe a sigh of relief, but then what? Do we use our AGI to prevent other nations from developing AGI? Because it seems like whoever gets there first wants supremacy in this area, and that requires somehow preventing anyone else from also getting it. I wonder what that looks like.

Yeah, thank you for making that point, because it seems so obvious, and I feel like I've been saying it to military hawks forever about this new arms race with China: what is the endpoint of this arms race? Let's say that we're winning. Let's say we win it, whatever that means — we cross the finish line first. What on earth do we do? Do we just go and whack them with it and prevent them from getting it? This is just such basic thinking. Look, the way to think about an AGI is as this thing 10,000 times smarter and more capable than a regular human, and that's what I think we have to be cautious of. Regular AI, which is just an assistant in your phone that is going to help you be more productive and efficient and so on — that's really not within the scope of this arms-race thing. What people are really worried about is this nation-state-level, massive effort to build a superintelligence, and it is very unclear to me what we would do with that. I'm not even sure we want to finish first in something like that, because, as you said earlier, it's just very unclear that we could make it provably safe.

Yeah, there's going to end up being some sort of "US AGI" — that's my term; Pentagon, feel free to use it. They'd have to develop some sort of strategy to develop and implement this, because containing your own AGI is really hard — how do we contain someone else's? We can't get there. Even if we blow up their data center, obviously they've thought of that; the internet was invented to withstand strikes like that in the first place. So, man, that's a really thorny problem. I've heard people say the only defense against an AGI is to have your own AGI — we've heard that one before, right?

Look, it's not that we shouldn't try to build these things. There's going to be a huge amount of benefit over the next few decades. It's just that there are some big unknowns, and we have to start talking about them.

AI is obviously going to be one of the greatest accelerants of wealth and prosperity in human history — it's going to be the Industrial Revolution, plus electricity, plus whatever, plus the internet. But do you think it's going to spread the wealth more equally than, say, the Industrial Revolution, where a few folks owned the railroads and the factories? Or is this going to end up being pure chaos?

Without question — I mean, we're already seeing it. The rate of proliferation of this technology is unlike any other technology in history, and
that is mostly because all of the infrastructure for making this technology available has already been built everyone's already got a smartphone we've already got a laptop we've already got a browser we've already got Cloud we've already got the internet and so this is just a tiny add-on to that apparatus for accessing information and talking to your computer and that's why it spread so quick I mean there's already billions of people talking to AIS like pi and chat gbt every day um so and I expect that to continue I mean you know the the models are getting smaller they're getting cheaper they'll be available in the developing World on SMS on WhatsApp you know very very quickly giving you access to you know a super expert in every possible field so and and likewise the ability to actually make AIS that are you know applicable to your own small business or relevant to your cultural context or using your personal data or you know understanding of your local business Etc like that's going to get much much easier as well because the barrier to entry has been lower than it's ever been in terms of manufacturing the really large ones you know like we said earlier it still requires gasoline you know you need these chips you need these data centers there's only a handful of groups in the world that can do that and so that's going to end up being quite concentrated um that's for sure yeah these these supply chain choke points are kind of what we're relying on right now I think you'd mentioned that we can't or Nvidia can't send certain chips to China and I know they're sort of figuring out oh we can make a dum down version of the chip that still is around sanctions but there's these lithography machines and I've talked about this on the episode we did about semiconductors that are as big as the or probably much bigger than the room I'm in now that cost I don't know $180 million and you can't buy one even if you have that much money because the Netherlands the company that 
makes them is not just going to sell them to anyone, and building one is probably like constructing an aircraft carrier in terms of complexity, right? It's not something you can just have shipped. So we've got these supply-chain choke points, and that's kind of heartening, because you know that a drug cartel is not just buying a bunch of AI stuff to figure out how to evade the War on Drugs, which is not going so well anyway, so maybe they don't even need it. But the idea that a disinformation farm in North Korea could buy their own semiconductor factory and start making these is sort of not possible right now. That's kind of one of the only things we have, right? Because otherwise we have treaties and things like that, which might be international in nature, but man, regulation when it comes to an arms race is tough. We kind of did it with nukes, but we just had more visibility into the problem, and it was slower.

Yeah. In a way, I think the nukes are going to help keep the peace this time around as well, because nobody wants nukes to get into the hands of small non-state actors. And I don't think this technology is rightly compared against nukes, but it certainly would not be desirable to have very, very powerful AIs that can take massive actions in our world, that can campaign and persuade and do stuff just like a really smart group of humans. To have that available to absolutely anybody is going to create a lot of instability. So I think there's going to be a collective desire among all nation-states to try to keep the peace, just like we have today. With all of the new technologies that arrive in our world, we have sensible regulations, and they broadly work, right? We generally don't have fake drugs that kill a bunch of people; generally speaking, aircraft are pretty safe; nuclear power is pretty safe. We
haven't always got it right. Probably tobacco should have been banned a little bit earlier, and we certainly haven't always got it right with social media; it's been pretty chaotic. So we just have to learn the lessons and keep moving forward.

I've asked other experts about this, and I asked Marc Andreessen about this as well. He seemed to think, okay, AI can't be good, it can't be evil per se, and I suppose that makes sense. He's like, hey, it's a machine, man. But if AI is absorbing human bias from training data, can't it absorb other undesirable human traits, like malice or recklessness? Or am I just not understanding the categories of what it can do?

Oh yeah, definitely. It generally will reproduce the data distribution that it's been trained on, so if it's never seen a black face, it's not going to produce a black face when it's asked to generate an image. It doesn't know what it doesn't know, and by implication it therefore knows mostly what it has seen, so the training data does matter. Having said that, it also has stylistic control, which is very accurate. You can frame its policy of behavior in one direction or another; you can say, adhere to this set of values or a different set of values, and it generally does that quite well. It still makes mistakes, but I think those mistakes are going down and down over the years, and I expect it to get pretty much perfect, in the next few years, at imitating the style that you want it to present with.

I know predicting the future is really tough, and the further out we go, the less accurate everything gets. You mentioned earlier, hey, when you go so far out, everything's hazy. You've coined this term ACI as separate from AGI. Tell me a little bit more about that, and what the road map, if that's the right term, looks like over the next five years or so.

Well, back in the '50s, Alan Turing, the computer
scientist, came up with a test that tried to evaluate whether an AI or a computer system was intelligent. He basically said that if it could speak from behind a screen and deceive a human into thinking that it was actually a human and not a machine, then it was intelligent. Today it's pretty clear that these models, these chatbots like Pi and ChatGPT and others, are pretty good at conversation, and sometimes it's hard to tell whether it's an AI or a human. So I think we've probably passed that test, the Turing test, or some lighter version of it perhaps.

I think a better measure of progress is to focus on what the AI can do. Capabilities are really what matters, not so much this abstract idea of what intelligence is. Can it write emails? Can it book flights? Can it come up with a new product design? Can it negotiate a contract? Can it market and sell and persuade? Can it do all of those things in concert, in sync with one another, in order to make a bunch of money? That would be a good test, because money captures a lot of complexity. Reducing everything to profit is quite simplistic, but it definitely is a pretty good test. So I propose that the measure of an ACI, an artificial capable intelligence, would be one that could go off and make a million dollars with a $100,000 investment: creating a new product, promoting it online, creating a website for it, marketing it, getting it manufactured, getting it drop-shipped, et cetera. That, to me, seems very doable in the next three to five years, and I think it would be a much more profound test than the Turing test, if you like, one that would actually tell us something material about what it means for labor and the economy.

This stuff is so interesting, man. This really flew by. I want to thank you for doing the show. I've got so many more notes; we'll have to do another
round at some point. I just want to say I appreciate your time and expertise. Really interesting.

Thank you very much. It was a huge amount of fun.

Yeah, see you next time.

Thank you for checking out this entire episode on YouTube. If you want to follow up on this topic, check out our podcast feed or visit us on our website at jordanharbinger.com, where you can learn more about our guest and dive even deeper into what we discussed today. And remember, YouTube is not the only place you can check out The Jordan Harbinger Show; any podcast app should have us. Check out the links in the description, where you will find access to our shows that don't appear on YouTube, like Skeptical Sunday, where we debunk topics like crystal healing, GMOs, conspiracy theories, homeopathy, tipping, even lawns, to find out if they're backed by science and logic or if they're just complete nonsense (spoiler: many of them are complete nonsense). There are also our Feedback Friday shows, where we help people escape from cults, get raises at work, and take all manner of questions from you, the audience, all the way down to the bottom of the barrel. Every episode of The Jordan Harbinger Show has something useful you can take away and apply in your own life, to help you navigate what I know can often seem like the overwhelming and paralyzing challenges of modern life. Life can be hard, yes, but we are here to help. And if you appreciate how we help, remember to like, comment, and subscribe.
Info
Channel: The Jordan Harbinger Show
Views: 21,467
Keywords: podcast, interview, best podcast, top rated podcast, lifelong learning, the jordan harbinger show, jordan harbinger, soft skills, social science, social influence, social psychology, personal development, self development, podcast full episode, podcast clip
Id: Wn244ffkc8I
Length: 72min 54sec (4374 seconds)
Published: Wed Apr 10 2024