The race to build AI that benefits humanity | Sam Altman | TED Tech

Captions
Hi, this is TED Tech. I'm Simone Ross. Today let's talk about one of the hottest topics in the tech space: AI, artificial intelligence. Depending on who you talk to, AI is either the solution to or the cause of a whole host of problems in our future. For today's podcast we thought we'd try something a little different. Earlier this year, Head of TED Chris Anderson spoke with someone who has a totally optimistic take on AI. I'll let Chris take it from here, but if you like what you're hearing, make sure to go check out other episodes of The TED Interview.

The place I want to start is with AI, artificial intelligence. This, of course, is the next innovative technology that is going to change everything as we know it, for better or for worse. Today we'll see it painted not with the usual dystopian brush but by someone who truly believes in its potential. Sam Altman is the former president of Y Combinator, the legendary startup accelerator, and in 2015 he and a team launched a company called OpenAI, dedicated to one noble purpose: to develop AI so that it benefits humanity as a whole. You may have heard, by the way, a lot of recent buzz around an AI technology called GPT-3, which was developed by OpenAI, proof of the quality of the amazing team of researchers and developers they have working there. We'll be hearing a lot about GPT-3 in the conversation ahead. But sticking to this lofty mission of developing AI for humanity, and finding the resources to realize it, hasn't been simple. OpenAI is certainly not without its critics, but their goal couldn't be more important, and honestly, I found it really quite exciting to hear Sam's vision for where all this could lead. Okay, let's do this.

So, Sam Altman, welcome.

Thank you for having me.

So, Sam, here we are in 2021. A lot of people are fearful of the future at this moment in world history. How would you describe your attitude to the future?

I think that the combination of scientific and technological progress and better societal decision-making, better
societal governance, is going to solve, in the next couple of decades, all of our current most pressing problems. There will be new ones, but I think we're going to get very safe, very inexpensive, carbon-free nuclear energy to work, and I think we're going to talk about the time when the climate disaster looked so bad and how lucky we are that we got saved by science and technology. We've already now seen this with the rapidity with which we were able to get vaccines deployed. We are going to find that we are able to cure, or at least treat, a significant percentage of human disease, including, I think, actually making progress in helping people have much longer, decades-longer health spans, and I think in the next couple of decades that will look pretty clear. I think we will build systems, with AI and otherwise, that make access to an incredibly high-quality education more possible than ever before. When we look forward a hundred years, 50 years even, the quality of life available to anyone then will be much better than the quality of life available, in the very best case, to any single person today. So yeah, I'm super optimistic. It's always easy to doomscroll and think about how bad the bad things are, but the good things are really good and getting much better.

Is it your sincere belief that artificial intelligence can actually make that future better?

Certainly. Look, with any technology, I don't think it will all be better. There are always positive and negative use cases of anything new, and it's our job to maximize the positive ones and minimize the negative ones. But I truly, genuinely believe that the positive impacts will be orders of magnitude bigger than the negative ones. I think we're seeing a glimpse of that now, now that we have the first general-purpose AIs built out in the world and available via things like our API. I think we are seeing evidence of just the breadth of services that we will be able to offer.
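At their core, the general-purpose text models mentioned here do one thing: predict the next word given the words so far (a mechanism Sam unpacks later in the conversation). Here is a deliberately tiny sketch of that idea, using only the Python standard library; the toy corpus and bigram table are a stand-in for a transformer trained on internet-scale text, not how GPT-3 is actually implemented:

```python
from collections import Counter, defaultdict

# Toy corpus; a real model trains on a vast sample of internet text.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat sat on the rug . the dog chased the cat ."
).split()

# For each word, count which words follow it (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

def generate(start, n=5):
    """Greedily extend a prompt by repeatedly predicting the next word."""
    out = [start]
    for _ in range(n):
        out.append(predict_next(out[-1]))
    return " ".join(out)

print(predict_next("cat"))   # -> "sat"  ("cat sat" is the commonest continuation)
print(generate("the", 4))    # -> "the cat sat on the"
```

Everything a model like this "knows" is statistics of its training text; scaling the same objective up by many orders of magnitude is what produces the breadth of behavior described above.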
As this sort of technological revolution really takes hold, we will have people interact with services that are smart, really smart, and it will feel as strange as the world before mobile phones feels to us now.

Yeah, you mentioned your API. I guess that stands for, what, application programming interface? It's the technology that allows complex technology to be accessible to others. So, Sam, give me a sense of a couple of things that have got you most excited that are already out there, and then how that gives you visibility to a pathway forward that is even more exciting.

I think the things that we're seeing now are very much glimpses of the future. We released GPT-3, which is a general-purpose natural-language text model, in the summer of 2020. There are hundreds of applications that are now using it in production, and that's ramping up all the time. There are things where people use GPT-3 to really understand the intent behind a search query, and not only the intent but all of the data, and deliver the thing you want. So you can describe a fuzzy thing and it'll understand documents (short documents, not full books yet) and bring you back the context of what you want. There's been a lot of excitement about using the generative capabilities to create games or interactive stories, or letting people develop characters or chat with a sort of virtual friend. There are applications that, for example, help a job seeker polish a tailored application for each individual company. There's the beginning of AI tutors that can teach people about different concepts and take on different personas. We could go on for a long time, but I think anything you can imagine doing today via computer, where you would like it to really understand and get to know you, and not only that but understand all of the data and knowledge in the world, and help you have the best
experience that is possible, all of that will happen.

So what gets opened up? What new adjacent-possible state is there as a result of these powers? Frame this question from the point of view of someone who's starting out on a career, for example; they're trying to figure out what would be a really interesting thing to do in the future that has only recently become possible.

In a world where you can talk to a computer and get the output that would normally require you hiring the world's experts, immediately and for almost no money, I would say: think about what's possible there. So that could be, as you said, what can normally only the best programmer in the world, or a really great programmer, do for me, and can I now instead just ask in English and have that program written? All these people who want to develop an app, who have an idea but don't know how to program, now they can have it. What does a service look like when anyone on Earth who wants really great medical advice can get better medical advice than any single doctor could ever give, because it has the total medical knowledge and reasoning ability that the sum of humanity has ever produced? When you want to learn something, you have a sort of AI tutor that understands your exact style, how you best learn, everything you know, and custom-teaches you whatever concept you want to learn. Someday you can imagine that you have an AI that reads your email and your task list and your calendar and the documents you've been sent, and for any meeting maximally, perfectly prepares you, with all of the information you need and all the context of your entire career right there for you. We could go on for a long time. I think these will just be powerful systems.

So it's really fun playing around with GPT-3. One compelling example, for someone who's more text-based, is to try googling the Guardian essay that was written entirely by different GPT-3 queries and
stitched together. It's an essay on why artificial intelligence isn't a threat to humanity, and it's impressive, it's very compelling. I actually tried inputting, into one of the online UIs, the question: what is interesting about Sam Altman?

Oh no.

Here's what it came back with. It was rather philosophical, actually. It came back with: "I don't understand what you mean by interesting. It seems to me that this word has no unambiguous meaning in the context of human society beyond its literal definition as somewhat pleasing or entertaining. I do not believe there to be any such thing as interestingness except in the mind of a human or other sentient being, but to my knowledge this is an entirely subjective state that varies from person to person. However, I will grant that this is an interesting question to ask. This does not mean it has been answered. There is no answer to be found." So would you grade that as somewhere between profound and, um, gibberish?

That's almost exactly where the state of play is. I mean, that's where we are today. I think somewhere between profound and gibberish is the right way to think about the current capabilities of, say, GPT-3. We definitely had a bubble of hype about GPT-3 last summer, but the thing about bubbles, the reason smart people fall for them, is that there's a kernel of something really real and really interesting that people get over-excited about. I think people definitely got, and still are, over-excited about GPT-3 today, but still probably underestimate the potential of where these models will go in the future. So maybe there's this short-term overhype and long-term underhype, for the entire field, for text models, for whatever you'd like, that's going on. And as you said, there's clearly some gibberish in there, but on the other hand, those were well-formed sentences, and there were a couple of ideas in there where I thought, oh, actually, maybe that's right. And I think if artificial intelligence,
even in its current, very larval state, can make us confront new things and inspire new ideas, that's already pretty impressive.

Give us a sense of what's actually happening in the background there. I think it's hard to understand, because you read these words and it seems like someone is trying to mean something. Obviously, I don't think you believe that there's, in whatever you've built there, a sort of thinking, sentient thing going, "Oh, I must answer this question." So how would you describe what's going on? You've got something that has read the entire internet, essentially, all of Wikipedia, et cetera.

What we've trained has read a small fraction, a random sampling, of the internet. We will eventually train something that has read as much of the internet, or more of it, than we've done right now, but we have a very long way to go. We're still, I think, relative to what we will have, operating at quite small scale, with quite small AIs. But what is happening is there is a model that is ingesting lots of text, and it is trying to predict the next word. We use transformers, which are a particular architecture of AI model. They take in a context of a lot of words, let's say a thousand or something like that, and they try to predict the word that comes next in the sequence. There are a lot of other things that happen, but fundamentally that's it. And I think this is interesting, because in the process of playing that little game of trying to predict the next word, these models have to develop a representation and understanding of what is likely to come next. I think it is maybe not perfectly accurate, but certainly worth considering, to say that intelligence is very near the ability to make accurate predictions.

What's confusing about this is that there are so many words on the internet which are foolish, as well as the words that are wise. How do you build a model that can distinguish between those two? And this was
prompted, actually, by another example that I typed in. I asked, "What is a powerful idea?" I'm very interested in ideas; that was my question. And it came back with several things, some of which seemed moderately profound, some of which seemed moderately gibberish. But then there was one that it came back with: "The idea that the human race has 'evolved' is false. Evolution, or adaptation within a species, was abandoned by biology and genetics long ago." So I'm going, whoa, wait a second, that's news to me. What have you been reading? I presume this has been pulled out of some recess of the internet. But how is it possible, even in theory, to imagine how a model can gravitate towards truth and wisdom, as opposed to just majority views? How do you avoid something taking us further into the maze of errors and bad thinking that has already been a worrying feature of the last few years?

It's a fantastic question. I think it is the most interesting area of research that we need to pursue now. At this point, the questions of whether we can build really powerful general-purpose AI systems, I won't say they're in the rear-view mirror; we still have a lot of hard engineering work to do, but I'm pretty confident we're going to be able to. And now the questions are: what should we build, and how, and why, and what data should we train on, and how do we build systems not just that can do these phenomenally impressive things, but that we can ensure do the things that we want, and that understand the concepts of truth and falsehood, and alignment and misalignment with human values. One of the pieces of research that we put out last year that I was most proud of and most excited about is what we call reinforcement learning from human feedback. We showed that we can take these giant models that are trained on a bunch of stuff, some of it good, some
of it bad, and then, with a really quite small amount of feedback from human judges about "hey, this is good, this is bad, this is wrong, this is the behavior I want, I don't want this behavior," we can feed that information back into the model, and we can teach the model to behave more like this and less like that. It works better than I ever imagined it would, and that gives me a lot of hope that we can build an aligned system. We'll do other things too. I think curating datasets, so there's just less bad data to train on, will go a very long way. And as these models get smarter, I think they inherently develop the ability to sort out bad data from good data, and as they get really smart, they'll even start to do something we call active learning, which is where they ask us for exactly the data they need when they're missing something, when they're unsure, when they don't understand. But I think that as a result of simply scaling these models up, building a better (I hate to use the word "cognition" because it sounds so anthropomorphic) let's say a better ability to reason into the models, to think, to challenge, to try to understand, and combining that with this idea of aligning to human values via this technique we developed, that's going to go a very long way. Now, there's another question, which you sort of just kicked down the field, which is: how do we as a society decide which set of human values we align these powerful systems to?

Yeah, indeed. So if I understand rightly what you're saying there, you're saying that it's possible to look at the output at any one time of GPT-3, and if we don't like what it's coming up with, some wise human can say, "No, that was off. Don't do that. Whatever algorithm or process led you to that, undo it."

Yeah.

And the system is then incredibly powerful at avoiding that same kind of mistake in future, because it sort of back-propagates the instructions.

Correct, yeah. And eventually, and not too much longer, I
believe that we'll be able to not only say "that was good, that was bad," but "that was bad for this reason," and also "tell me how you got to that answer, so I can make sure I understand."

But at the end of the day, someone needs to decide who is the wise human.

For sure, humans are looking at the results.

So a big difference is, someone who grew up with an intelligent-design worldview could look at an output and go, "That's a brilliant outcome, gold star, well done," and someone else would say something's gone awfully wrong here. How do you avoid that? This is a version of the problem that a lot of the, I guess, Silicon Valley companies are facing right now, in terms of the pushback they're getting on the output of social media and so forth. How do you assemble that pool of experts who stand for the human values that we actually want?

We talk about this all the time. I don't think this is solely, or even close to majorly, up to OpenAI to decide. I think we need to begin a societal conversation now about how we're going to make those decisions, how we're going to make sure we have representational input in that, and how we make these very difficult global governance systems. My personal belief is that we should have pretty broad rules about what these systems will never do and will always do, but then the individual user should get a system that kind of behaves like they want. People do have very different value systems, and some of them are just fundamentally incompatible. No one gets to use AI to exploit other people, for example; that, I hope, we can all agree on. But do you want the AI to support you in your belief in intelligent design? Do I think OpenAI should say it can't, even though I personally disagree with that as a scientific conclusion? No, I wouldn't take that stance. The thing to remember about all of this is that GPT-3 is still quite extraordinarily weak. It still
has such big problems, and is still so unreliable, that for most use cases it's still unsuitable. But when we think about a system that is, say, a thousand times more powerful and a million times more reliable, a system that just doesn't say gibberish very often, doesn't totally lose the plot and get distracted, a system like that is going to be one that a lot of the economic activity in the world comes to rely on. And I think it's very important that we don't have a small group of people saying, "You can never use it for this thing that most of the world wants to use it for, because it doesn't match our personal beliefs."

Talk a bit more about some of the other uses of it, because one of the things that's most surprising is that it's not just about text responses; it can take generalized human instructions and build things. So, for example, you can say to it, "Write a Python program that is designed to put a flashing cursor in one corner of the screen and the Google logo in the other corner," and it can go in and do something like that shockingly well, effectively.

Yeah, it can.

That's amazing. I mean, that seems amazing to me. That opens the door to an entirely new way to think about programmers of the future: you could have people who can program just in human natural language, potentially, gain rapid efficiency, and let the AI do the engineering.

We're not that far away from that world. We're not that far away from the world where you will write a spec in English and, for a simple enough program, the AI will just write the code for you. As you said, you can see glimpses of that even in this very weak GPT-3, which was not trained to code. I think this is important to remember: we trained it on the language on the internet, and language on the internet occasionally includes some code snippets, and that was enough. So if we really try to train a model on code itself, and that's where we decide
to put the horsepower of the model, just imagine what would be possible. It'll be quite impressive.

I think what you're pointing to there is that models like GPT-3, to some degree or other (and it's very hard to know exactly how much), understand the underlying concepts of what's going on. They're not just regurgitating things they found on a website; they can really apply them, and say, "Oh yeah, I kind of know about this word and this idea in code, and this is probably what you're trying to do, and I won't always get it right, but sometimes I will just generate this brand-new program, something no one has ever asked for before, and it will work." That's pretty cool. And data is data, so it can do that from English to code, and it can do that from English to French. Again, we never told it to learn about translation. We never told it about the concepts of English and French, but it learned them, even though we never said, "This is what English is, this is what French is, and this is what it means to translate." It can still do it.

Wow. I mean, for creative people, is there a world coming where the palette of possibility that they can be exposed to just explodes? If you're a musician, is there a near future where you could say to your AI, "Okay, I'm going to bed now, but in the morning I'd love you to present me with a thousand two-bar jingles, with words attached, that you think have a sort of meme factor to them," and you come down in the morning and the computer shows you this stuff, and one of them you go, "Wow, that is it, that is a top-10 hit," and you build a song from it? Or is that where we'll really see the value-add?

We released something last year called Jukebox, which is very near what you described, where you can say, "I want music generated for me in this style," or this kind of stuff, and it can come up with the words as well. It's pretty cool, and I really enjoy listening to music that it creates.
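The "data is data" point above, that translation falls out of pure next-word prediction without the model ever being told what translation is, can be shown in miniature. The corpus, word pairs, and order-2 predictor below are invented for illustration (a real transformer conditions on a much longer context and is never given such a clean pattern), but the principle is the same: the pattern in the training text alone carries the task.

```python
from collections import Counter, defaultdict

# Toy "internet text" containing English/French pairs in a consistent pattern.
# No translation rule is ever stated; only the pattern carries it.
corpus = (
    "english hello french bonjour . "
    "english thanks french merci . "
    "english goodbye french aurevoir ."
).split()

# Order-2 model: predict a word from the two words before it.
table = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    table[(a, b)][c] += 1

def complete(prompt):
    """Predict the word that follows the last two words of the prompt."""
    a, b = prompt.split()[-2:]
    return table[(a, b)].most_common(1)[0][0]

# "Translate" purely by asking the predictor to continue the pattern.
print(complete("english hello french"))    # -> "bonjour"
print(complete("english goodbye french"))  # -> "aurevoir"
```

The model was only ever asked "what word comes next?", yet prompting it with the right pattern elicits a translation; this is a tiny analogue of the capability described above.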
You can do full songs, two bars of a jingle, whatever you'd like. And one of my very favorite artists reached out cold to OpenAI after we released this and said that he wanted to talk, and I was like, whoa, total fanboy here, I'd love to join that call. And I was so nervous that he was going to say, "This is terrible. This is a really sad thing for human creativity. Why are you doing this?" And he was so excited. He was like, "This has been so inspiring. I want to do a new album with this. It's giving me all these new ideas. It's making me much better at my job. I'm going to make better music because of this tool." And that was awesome, and I hope that's how it all continues to go, and I think it is going to. We see a similar thing now with DALL·E, where graphic designers sometimes tell us that they see this new set of possibilities, because there's new creative inspiration, and their cycle time (the amount of time it takes to come up with an idea, look at it, and then decide whether to go down that path or head in a different direction) goes down so much. So I think it's going to be this incredible creative explosion for humans.

And how far away are we, Sam, from an AI coming up with a genuinely powerful new idea, an idea that solves a problem humans have been wrestling with? It doesn't have to be quite on the scale of "Okay, we've got a virus coming, please describe to us what a rational national response should look like," but some kind of genuinely innovative idea or solution. One internal question we've asked ourselves is: when will the first genuinely interesting, purely AI-written TED Talk show up?

I think that's a great milestone. I will say, it's always hard to guess timelines, and I'm sure I'll be wrong on this, but I would guess the first genuinely interesting TED Talk thought of, written, and delivered by an AI is within the seven-ish-year
time frame, maybe a little bit less.

And it feels like, just reading that Guardian essay, which was a composite of several different GPT-3 responses to questions about the threats of robotics or whatever: if you throw a human editor into the mix, you could probably imagine something much sooner, indeed, like, tomorrow.

Yeah, the hybrid version, where it's basically a tool-assisted TED Talk, but one that is better than any TED Talk a human could generate in a hundred hours or whatever, if you can combine human discretion with AI horsepower, I suspect that's a next-year-or-two-years kind of thing, where it's just really quite good.

That's really interesting. How do you view the impact of AI on jobs? The familiar story, obviously, is that every white-collar job is now up for destruction. What's your view there?

I think it's always hard to make these predictions. That is definitely the familiar story now. Five years ago it was "every blue-collar job is up for destruction"; maybe last year it was "every creative job is up for destruction," because of things like Jukebox. I think there will be an enormous impact on the job market, and I really hate it, I think it's kind of gross, when people working on AI pretend there's not going to be, or say, "Oh, don't worry about it, it'll all obviously be better." It doesn't always obviously get better. What is true is that every technological revolution produces a change in jobs. We always find new ones, at least so far. It's difficult to predict, from where we're sitting now, what the new ones will be. And this technological revolution is likely to be, again, it's always tempting to say this time it's different, and maybe I'll be totally wrong, but from what I see now, this technological revolution is likely to be more dramatic, more of a staccato note, than most. And I think we as a society need to figure out how we're going to cushion
everybody through that. I've got my own ideas about how to do that; I wouldn't say I have any reason to believe they're the right ones, but doing nothing, and not really engaging with the magnitude of what's about to happen, is not an acceptable answer. So there's going to be huge impact. It's difficult to predict where it shows up the most, and previous predictions have mostly been wrong, but I'd like to see us all, as a society and certainly as a field, engage with what shifts we want to make to the social contract, to get through this in a way that is maximally beneficial to everybody.

I mean, in every past revolution there's always been a space for humans to move to, kind of moving up the food chain: we've retreated to the things that humans could uniquely do, think better, be more creative, and so forth. I guess the worry about AI, in principle, and I think you probably believe this, is that there is no human cognitive feat that won't ultimately be doable, probably better, by artificial general intelligence, simply because of the extra firepower it can ultimately have, the vast knowledge it brings to the table, and so forth. Is that basically right, that there is ultimately no safe space where we can say, "Oh, but they'll never be able to do that"?

On a very long time horizon, I agree with you, but that's such a long time horizon that maybe we've merged by that point, maybe we're all plugged in, and then it's this sort of symbiotic thing. I think an interesting example is what we were talking about a few minutes ago: right now we have these systems that have enormous horsepower but no steering wheel, incredible capabilities but no judgment. And there are these obvious ways in which, today, even a human plus GPT-3 is far better than either on their own.

Many people speak about a world where sort
of AI is this external threat. You speak about, at some point, us actually merging with AIs in some way. What do you mean by that?

There are a lot of different versions of what I think is possible there. In some sense, I'd argue that the merge has already begun: the human-technology merge. We have this thing in our hands that dictates a lot of what we think, but it gives us real superpowers, and that can go much, much further. Maybe it goes all the way to the Elon Musk vision of Neuralink, and having our brains plugged into computers, literally a computer on the back of our head. Or it goes the other direction, and we get uploaded into one. Or maybe it's just that we all have a chatbot that constantly steers us and helps us make better decisions than we could alone. But in any case, I think the fundamental thing is that it's not the humans versus the AIs, competing to be the smartest sentient thing on Earth or beyond; it's this idea of being on the same team.

I certainly get very excited by the medium-term potential for creative people of all sorts, if they're willing to expand their palette of possibilities with the use of AI. But they have to be willing to. The one thing that the history of technology has shown again and again is that something this powerful, and with this much benefit, is unstoppable, and you will get rewarded for embracing it the most and the earliest.

So talk about what can go wrong with AI. Let's move away from just the economic displacement factor. You were a co-founder of OpenAI because you saw existential risks to humanity from AI. Today, what would you put as the most worrying of those risks, and how is OpenAI working to minimize them?

I still think all of the really horrifying risks exist. I am more confident, much more confident than I was five years ago when we started, that there are technical things we can do, about how
we build these systems and the research and the alignment work, that make us much more likely to end up in the really wonderful camp. But maybe OpenAI falls behind, and maybe somebody else builds AGI who thinks about it in a very different way, or doesn't care as much as we'd like about safety and the risks, or has a different trade-off of how fast we should go with this, and just says, "Let's push on for the economic benefits." I think all of the risks traditionally in the realm of sci-fi are real, and we should not ignore them, and I still lose sleep over them.

And just to update people: AGI is artificial general intelligence. Right now we have incredible examples of powerful AI operating in specific areas; AGI is the ability of a computer mind to connect the dots and make decisions with the same level of breadth that humans have had. What's your elevator pitch on AGI, about how to identify it and how to think of it?

The way that I would say it is that for a while we were in this world of very narrow AI, you know, that could classify images of cats or whatever; more advanced stuff than that, but that kind of thing. We are now in the era of general-purpose AI, where you have these systems that are still very much imperfect tools, but that can generalize. A single thing like GPT-3 can write essays and translate between languages and write computer code and do very complicated search. It's a single model that understands enough of what's really going on to do a broad array of tasks and learn new things quickly, sort of like people can. And then eventually we'll get to this other realm, which some people call AGI and some people call lots of other things, but I think it implies that the systems are, to some degree, self-directed, and have some intentionality of their own.

Is a simple summary to say that the fundamental risk is that there's the potential, with
general artificial intelligence, of a sort of runaway effect of self-improvement that can happen far faster than humans can even keep up with? So that the day after you get to AGI, suddenly computers are thousands of times more advanced than us, and we have no way of controlling what they do with that power?

Yeah, and that is certainly in the risk space, which is that we build this thing and at some point, somewhat suddenly, it's much more powerful than we are. We haven't really done the full merge yet. There's an event horizon there, and it's sort of hard to see to the other side of it. Again, lots of reasons to think it will go okay, lots of reasons to think we won't even get to that scenario, but that is something that I don't think people should brush under the rug as much as they do. It's in the possibility space for sure, and in the possibility subspace of that is one where we didn't actually do as good of a job on the alignment work as we thought, and this sort of child of humanity kind of acts in a very different way than we think. A framework that I find useful is to think about a two-by-two matrix: short timelines to AGI and long timelines to AGI on one axis, and a slow takeoff and a fast takeoff on the other axis. And in the short-timelines, fast-takeoff quadrant, which is not where I think we're going to be, but if we get there, I think there are a lot of scenarios in the direction that you are describing that are worrisome, and that we would want to spend a lot of effort planning for.

I mean, the fact that a computer can start editing its own code and improving itself while we're asleep, and you wake up in the morning and it's got smarter, that is the start of something super powerful and potentially scary.

I have tremendous misgivings about letting an AI system, not one we have today, but one that we might have in not too many more years, start editing its own code while we're not paying attention. I think that's the kind of thing that is worth a
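The slow-versus-fast takeoff distinction in that two-by-two framework can be made concrete with a toy compounding-growth model. This is purely illustrative: the growth rates and the threshold are invented numbers, not anything from the conversation.

```python
# Toy model of recursive self-improvement: each cycle, a system's
# capability grows in proportion to its current capability, so the
# improvement feeds back on itself. Rates and threshold are made up.

def cycles_to_threshold(rate: float, threshold: float = 1000.0) -> int:
    """Count improvement cycles until capability passes `threshold`,
    starting from a baseline capability of 1.0."""
    capability = 1.0
    cycles = 0
    while capability < threshold:
        capability *= (1.0 + rate)  # gain is proportional to current level
        cycles += 1
    return cycles

# A "slow takeoff" (1% gain per cycle) vs. a "fast takeoff" (doubling
# each cycle): the same compounding dynamic, wildly different horizons.
slow = cycles_to_threshold(0.01)  # hundreds of cycles to pass the threshold
fast = cycles_to_threshold(1.00)  # 10 cycles, since 2**10 = 1024
print(slow, fast)
```

The point of the sketch is only that with compounding self-improvement, the difference between a quadrant you can plan inside and one you cannot is just the per-cycle rate.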
great deal of societal discussion about. You know, just because we can do that, should we?

Yes, because one of the things that's been most shocking about the last few years has been just the power of unintended consequences. You don't have to believe that there's some sort of waking up of an alien intelligence that suddenly decides it wants to wreak havoc on humans; that may never happen. What you can have is just incredible power that runs amok. So a lot of people would argue that what's happened in technology in the last few years is actually an example of that. You know, social media companies created these intelligences that were programmed to maximally harvest attention, for example.

For sure.

And the consequences of that turned out to be, in some ways, horrifying and extraordinarily damaging. Is that a meaningful canary in the coal mine, saying, look out, humanity, this could be really dangerous? And how on earth do you protect against those kinds of unintended consequences?

I think you raise a great point in general, which is that these systems don't have to wish ill to humanity to cause ill, just when you have very powerful systems. Unintended consequences for sure, but another version of that, and I think this applies at the technical level, at the company level, and at the societal level: incentives are superpowers. Charlie Munger had this thing, which is that incentives are so powerful that if you can spend any time whatsoever working on the incentive system, that's what you should do before you work on anything else. And I really believe that. I think it applies to the individual models we build and what their reward functions look like; I think it applies to society in a big way; and I think it applies to our corporate structure at OpenAI. You know, we sort of observed that if you have very well-meaning people, but they have this incentive to maximize attention harvesting and profit
forever, through no one's ill intentions, that leads to a quite undesirable outcome. And so we set up OpenAI with this thing called a capped-profit model, specifically so that we don't have the incentive to just generate maximum value forever with an AGI; that seems obviously quite broken. But even though we knew that was bad, and even though we all like to think of ourselves as good people, it took us a long time to figure out the right structure, to figure out a charter that's going to govern us, and a set of incentives that we believe will let us do our work. We have these three elements that we talk about a lot: research; engineering, development, and deployment; and policy and safety. We put those all together under a system where you don't have to rely on anything but the natural incentives to push in a direction that we hope will minimize the negative unintended consequences.

So help me understand this, because I think this is confusing to some people. You started OpenAI, and initially I think Elon Musk was a co-founder, and there was a group of you, and the argument was: this technology is too powerful to be developed in secret, and to be developed purely by corporations, who have whatever incentives they may have. We need a non-profit that will develop and share knowledge openly. First of all, even at that early stage, some people were confused about this. They were saying, if this thing is so dangerous, why on earth would you want to make its secrets even more available, maybe giving the tools to the sort of AI terrorist in his bedroom somewhere?

I think we got misunderstood in the way we were talking about that. We certainly don't think that the right thing to do is to build a super weapon and hand it to a terrorist; that's obviously awful. One of the reasons that we like our API model is that it lets us make the most powerful AI technology anyone in the world has, as far as we know, available to whoever
would like to use it, but to put some controls on its usage, and also, if we make a mistake, to be able to pull it back or change it or tweak it or improve it. We do want to put very powerful technology in the hands of people, with appropriate restrictions and guardrails, and this will continue to be true. I think that is fair, I think that will lead to the best results for society as a whole, and I think it will maximize benefit. But that's very different than shipping the whole model and saying, here, do whatever you want with it. We're able to enforce rules on it. We also think, and this is part of the mission, that something the field was doing a lot of that we didn't feel good about was saying, oh, we're going to keep the pace of progress and capabilities secret. That doesn't feel right, because I think we do need a societal conversation about what's going on here and what the impacts are going to be. And so, although we don't say, here's the super weapon, we do try to say: this is really serious, this is a big deal, this is going to affect all of us, and we need to have a big conversation about what to do with it.

Help me understand the structure a bit better, Sam, because you definitely surprised a bunch of people when you announced that Microsoft was putting a billion dollars into the organization, and in return, I guess, they get certain exclusive licensing rights; for example, they are the exclusive licensee of GPT-3. So talk about that structure. I mean, Microsoft presumably has invested not purely for altruistic purposes; they think that they will make money on that billion dollars.

I sure hope they do; I love capitalism. But a thing that I really loved even more about Microsoft as a partner, and I'll talk about the structure and the exclusive license in a minute, is that we went around to the people that might fund us, and we said, one of the things here is
that we're going to try to make you some money, but AGI going well is more important, and we need you to sign this document that says, if things don't go the way we think and we can't make you money, you just cheerfully walk away from it, and we do the right thing for humanity. And they said, yes, we are enthusiastic about that; we get that the mission comes first here. So, again, I hope it's a phenomenal investment for them, but they really pleasantly surprised us on the upside of how aligned they were with us about how strange the world may get here, and the need for us to have flexibility and put our mission first, even if that means they lose all their money, which I hope they don't, and don't think they will.

So the way it's set up is that if, at some point in the coming year or two, Microsoft decided that there's some incredible commercial opportunity that they could realize out of the AI that you've built, and you feel, actually, no, that's damaging, you can block it, you can veto it?

Correct. So the full, most powerful version of GPT-3 and its successors are available via the API, and we intend for that to continue. What Microsoft has is the ability to put that model directly into their own technology if they want to. We don't plan to do that with other people, because then we can't have all these controls that we talked about earlier, but they're a close, trusted partner, and they really care about safety too. But our goal is that anybody who wants to use the API can have the most powerful versions of what we've trained, and the structure of the API lets us continue to increase the safety and fix problems when we find them.

But the structure: so we started out as a non-profit, as you said. We realized pretty quickly that, although we went into this thinking that the way to get to AGI would be about smarter and smarter algorithms, we just needed bigger and bigger computers as well, and that was going to require a scale of
capital that no one, well, at least certainly not me, could figure out how to raise as a non-profit. We also needed to be able to compensate the very highly compensated, talented individuals that do this work. But a full for-profit company had this runaway-incentives problem, among other things; there was also something about fairness in society and wealth concentration that didn't feel right to us either. And so we came up with this hybrid, where we have a non-profit that governs what we do, and it has a subsidiary LLC that we structure in a way to make a fixed amount of profit, so that all of our investors and employees, hopefully, if things go how we'd like, and if not, no one gets any money, get to make a one-time great return on their investment, or on the time that they spent at OpenAI and their equity here. And then, beyond that, all the value flows back to the non-profit, and we figure out how to share it as fairly as we can with the world. And I think that this structure, this non-profit with this very strong charter in place, and everybody who joins signing up for the mission coming first and the fact that the world may get strange, was at least the best idea we could come up with. So far it feels like the incentive system is working, just as I watch the way that we and our partners make decisions.

But if I read it right, the cap on the gain that investors can make is 100x, and it's a massive...

Well, that was for our very first-round investors; it's way, way lower as we now take in incremental big amounts of capital.

So your deal with Microsoft isn't, you can only make the first hundred billion dollars...

No, no, it's way lower.

Then after that, you're giving it to the world?

No, it's way lower than that.

Have you disclosed what it is?

I don't know if we have, so I won't accidentally do it now.

All right. Okay, so explain a bit more about the charter, and how it is that you hope to avoid, or I guess help
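The mechanics of that capped-profit structure can be sketched in a few lines. The 100x figure is the first-round cap mentioned in the conversation; the invested amount and the outcome value below are hypothetical round numbers chosen purely for illustration.

```python
# Sketch of the capped-profit idea: investor returns are capped at a
# fixed multiple of the amount invested, and everything above the cap
# flows back to the nonprofit. All dollar figures here are hypothetical.

def split_proceeds(invested: float, cap_multiple: float, outcome: float):
    """Split a total outcome between investors and the nonprofit,
    given a return cap of `cap_multiple` times the amount invested."""
    investor_cap = invested * cap_multiple
    to_investors = min(outcome, investor_cap)   # returns stop at the cap
    to_nonprofit = outcome - to_investors       # the rest goes to the mission
    return to_investors, to_nonprofit

# Hypothetical: $10M invested at a 100x cap, with a $5B total outcome.
investors, nonprofit = split_proceeds(10e6, 100, 5e9)
print(investors, nonprofit)  # $1B to investors, $4B to the nonprofit
```

The design choice being described is exactly this `min()`: however large the outcome, the investor side of the ledger is bounded, which removes the incentive to "generate maximum value forever."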
contribute to, an AI that is safe for humanity. What do you see as the keys to us avoiding the worst mistakes and really holding on to something that's beneficial for humanity?

My answer there is actually more about technical and societal issues than the charter, so if it's okay for me to answer it from that perspective...

Sure.

Okay, and I'm happy to talk about the charter too. I think this question of alignment that we talked about a little earlier is paramount. And to understand that, it's useful to differentiate between accidental misuse of a system and intentional misuse of a system. Intentional would be a bad actor saying, I've got this powerful system, I'm going to use it to hack into all the computers in the world and wreak havoc on the power grids. Accidental would be kind of the Nick Bostrom scenario: make a lot of paperclips and view humans as collateral damage. In both cases, but to varying degrees, if we can really, truly, technically solve the alignment problem, and the societal problem of deciding which set of human values we align to, then the systems understand right and wrong, and they understand, probably better than we ever can, the unintended consequences of complex actions in very complex systems. And, you know, if we can train a system to not harm humanity, and the system can really understand what we mean when we say that; again, "who is we" and "what does that mean" have some asterisks on them...

I'm sorry, go ahead.

Well, I was going to say, if they could understand what it means to not harm humanity, there's a lot wrapped up in that sentence, because what's been so striking to me about efforts so far is that they seem to have been based on a very naive view of human nature. Go back to the Facebook and Twitter examples: the engineers building some of their systems would say, we've just designed them around what humans want to do. If someone wants to click on something, we will give them more of that
thing, and what could possibly be wrong with that? We're just supporting human choice. That ignores the fact that humans are complicated, weird animals, for sure, who are constantly making choices that a more reflective version of themselves would agree are not in their long-term interests. So that's one part of it, and then you've got, layered on top of that, all the complications of systemic complexity, where multiple choices by thousands of people end up creating a reality that no one could possibly have designed. So how do you cut through that? An AI has to make a decision in a moment, based on a specific data set. As those decisions get more powerful, how can we be confident that they don't lead to this sort of system crashing, basically, in some way?

A thing that I've heard a lot of behavioral psychologists and other people who have studied this say, in different ways, is, and I hate to keep picking on Facebook, but we can do it one more time since we're on the topic: maybe you can't, in any given moment at night, where you're tired and you've had a stressful day, stop yourself from the dopamine hit of scrolling on Instagram, even though you know that's bad for you and it's not leading to your best life. But if you were asked, in a reflective moment where you were fully alert and thoughtful, do you want to spend as much time as you do scrolling through Instagram, does it make you happier or not, you would actually be able to give the right long-term answer. It's a the-spirit-is-willing-but-the-flesh-is-weak kind of moment. And one thing that I am hopeful about is that, on the whole, humans do know what we want, and if presented with research, or an objective view about what makes us happy and what doesn't, we're, let's not say great, but pretty good about it. But in any particular moment, we are subject to our animal instincts, and it is easy for the lower brain to take over. The AI, I think, will be an even higher brain, and as we teach it, you know,
here is what we really do value, here is what we really do want, it will help us make better decisions than we are capable of, even in our best moments.

So is that being proposed and talked about as an actual rule? Because it strikes me that there is something potentially super profound here: to introduce some kind of rule for the development of AIs, that they have to tap into not what humans want, which is an ill-defined question, but what humans in reflective mode want.

Yeah, we talk about this a lot.

I mean, do you see a real chance that something like that could be incorporated as a sort of absolute golden rule and, if you like, spread around the community, so that it seeps into corporations and elsewhere? Because I've seen no evidence of that yet, and it would potentially be a game changer. Corporations have this weird incentive problem, right?

What I was trying to speak about was something that I think should be technologically possible, and something that we as a society should demand. I think it is technically possible for this to be a sort of layer above the neocortex that makes even better decisions for us, for our welfare and our long-term happiness and fulfillment, than we could make on our own. And I think it is possible for us as a society to demand that, and if we can do a pincer move between what the technology is capable of and what we as a society demand, maybe we can make everybody in the middle act that way.

I mean, there are instances where, even though companies have their incentives to make money and so forth, in the knowledge age they also can't make money if they have pissed off too many of their employees and customers and investors. By analogy with the climate space right now, you can see more and more companies, even those that are emitting huge amounts of carbon dioxide, saying, wait a sec, we're struggling to recruit talented people, because they don't want to work for someone who's evil, and their customers are saying, we don't
want to buy something that is evil. And so, ultimately, you can picture processes where they do better. And I believe that most engineers, for example, working in Silicon Valley companies are actually good people who want to design great products for humanity. I think that the people who run most of these companies want to make a net contribution to humanity. We've just rushed really quickly and designed stuff without thinking it through properly, and it's led to a mess. So it's like, okay, don't move fast and break things; slow down and build beautiful things that are built on a real version of human nature, and on a real version of system complexity and the risks associated with systemic complexity. Is that the agenda that, fundamentally, you think you can push somehow?

Yes, but I think the way we can push it is by getting the incentive system right. I think most people are fundamentally extremely good; very few people wake up in the morning thinking about how they can make the world a worse place. But the incentive systems that we're in are so powerful, and even those engineers who join with the absolute best of intentions get sucked into this world where they're trying to go up from an L4 to an L5, or whatever Facebook calls those things, and it's pretty exciting. You get caught up playing the game, you're rewarded for doing things that move the company's key metrics, it's fun to get promoted, it feels good to make more money, and the incentive systems of the company, and thus what it rewards in individual performance, are maybe not what we all want. And here I don't want to pick on Facebook at all, because I think there are versions of this at play at every big tech company, including, in some ways, I'm sure, at OpenAI. But to the degree that we can better align the incentives of companies with the welfare of society, and then align the incentives of the individuals at those companies with the now-realigned incentives of those companies, the
more likely we are to be able to have things like AGI that follow an incentive system of what we want in our most reflective, best moments, and that are even better than what we can think of ourselves.

Is it still the vision for OpenAI that you will get to artificial general intelligence ahead of other corporations, so that you can somehow put a stake in the ground and build it the right way? Is that really a realistic thing to dream for? And if not, how do you live up to the mission and help ensure that this thing doesn't go off the rails?

I think it is. Look, I certainly don't think we will be the only group to build an AGI, but I think we could be the first, and if you are the first, you have a lot of norm-setting power. And I think you've already seen that. We have released some of the most powerful systems to date, and the way that we have done that, in controlled releases, where we've released a bigger model, then a bigger one, then a bigger one, and we try to talk about the potential misuse cases, and we try to talk about the importance of releasing this behind an API so that you can make changes, other groups have followed suit in some of those directions, and I think that's good. So, yes, I don't think we can be the only one, but I do think we can be ahead, and if we are ahead, I think we can use that leverage to hopefully push people in a better direction. Or maybe we're wrong, and somebody else has a better direction, and we're doing something bad.

Do you have a structural advantage, in that your mission is to do this for everyone, as opposed to for some corporate objective? Why is it that GPT-3 came out of OpenAI and not someone else? It's surprising, in some ways, when you're up against so much money and so much talent in these other companies, that you came up with this platform ahead of them.

In some sense it's surprising, and in some sense the startup wins most of the time. I'm a huge
believer in startups as the best force for innovation we have in the world today. I talked a little bit about how we combine these three different clans, research, engineering, and safety and policy, that don't normally combine well, and I think we have an unusual strength there. We're clearly well funded, we have super talented people, but what we really have is intense focus, and self-belief that what we're doing is possible and good. And I appreciate the implied compliment, but, you know, we work really hard, and if we stopped doing that, I'm sure someone would run past us fast.

Tell us a bit more about your prior life, Sam, because for several years you were running Y Combinator, which has had this incredible impact on so many companies, and there are so many startup stories that began at Y Combinator. What were the key drivers in your own life that took you on the path you're on, and how did that path end up at Y Combinator?

No exaggeration, I think I have, back to back, had the two jobs that are at least the most interesting to me in all of Silicon Valley. I went to college to study computer science; I was a major computer nerd growing up. I knew a little bit about startups, but not very much. I started working on this project, and the same year, this thing called Y Combinator started and funded me and my co-founders, and we dropped out of school and did this company, which I ran for about seven years until it got acquired. I had stayed close to Y Combinator the whole time. I thought it was just this incredible group of people and spirit and set of incentives, badly misunderstood by most of the world, but obvious to everyone within it that it was going to create huge amounts of value and do a lot of new things. My company got acquired, and PG, who is the founder of YC, and truly one of the most incredible humans and business people...

That's Paul Graham.

Yeah. He asked me if I wanted to run it, and
kind of the central learning of my career, across YC, AI, and individual startups, has been that if you really scale them up, remarkable things can happen. And I did it, and one of the things that made it exciting for me personally, and motivating, was that I could push it in the direction of doing these hard-tech companies, one of which became OpenAI.

So describe what Y Combinator actually is, how many people come through it, and give us a couple of stories of its impact.

Yeah, so you basically apply as a handful of people with an idea, maybe a prototype, and say, I would like to start a company, will you please fund me. And we review those applications; I shouldn't say "we" anymore, I guess. They fund 400 companies a year. You get about $150,000, YC takes about 7% ownership, and then gives you lots of advice and a network, and it's this fast-track program for starting a startup. I haven't looked at this in a while, but at one point a significant fraction of all the billion-dollar-plus companies in the US that got started came through the YC program. Some recently-in-the-news ones have been Airbnb, DoorDash, Coinbase, Instacart, and Stripe. And I think it has become an incredible way to help people who understand technology get a three-month course in business, but instead of hurting you with an MBA, we actually teach you the things that matter, and people go on to do incredible work.

What is it about entrepreneurs? Why do they matter? Some people just find them kind of annoying, but I think you would argue...

I think I would argue that they have done as much as anyone to shape the future.

Why? What is it about them?

I think it is the ability to take an idea and, by force of will, make it happen in the world, in an incentive system that rewards you for making the most impact on the most people. In our system, that's how we get most of the things that we use. That's how we got the
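The standard-deal numbers mentioned above, roughly $150,000 for roughly 7% ownership, imply a post-money valuation for every company accepted. YC's actual terms have varied over the years, so treat these as round illustrative figures rather than the current deal:

```python
# Implied post-money valuation of the (approximate) YC standard deal:
# buying a fraction of a company for a fixed sum values the whole
# company at investment / fraction. Figures are the rough ones from
# the conversation, not YC's current terms.

def implied_post_money(investment: float, ownership_fraction: float) -> float:
    """Post-money valuation implied by buying `ownership_fraction`
    of a company for `investment` dollars."""
    return investment / ownership_fraction

valuation = implied_post_money(150_000, 0.07)
print(round(valuation))  # roughly $2.14 million
```

In other words, the program effectively values every admitted startup at a little over $2 million at entry, with the advice and network as the differentiator rather than the check size.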
computer that I'm using, and the software I'm using to talk to you on it, all of this. You know, everything in life has a balance sheet. There are plenty of very annoying things about entrepreneurs, and plenty of very annoying things about the system that idolizes them, but we do get something really important in return: a force for making things happen that make all of our lives better, and that's very cool. Otherwise, if you have a great idea but you don't actually do anything useful with it for people, that's still cool, it's still intellectually interesting, but there's got to be something about the reward function in society that asks, did you actually do something useful, did you create net value? And I think entrepreneurship and startups are a wonderful way to do that. We get all these great software companies, but I also think it's how we're going to get AGI, how we're going to get nuclear fusion, how we're going to get life extension, and on any of those topics, or a long list of other things I could point to, there are a number of startups that I think are doing incredible work, some of which will actually deliver.

I mean, it is a truly amazing thing, when you pull the camera back: a human being can be lying awake at night, and something pops inside their mind, a re-patterning of the neurons in their brain, that is effectively them saying, aha, I can see a way the future could be better. And they can actually picture it, and then they wake up, and they talk to other people, and they persuade them, and they persuade investors and so forth. The fact that this system exists, and that you can then actually change history, is in some sense mind-boggling, and it happens again and again. So, you've seen so many of these stories happen. What would you say is the key thing that differentiates good
entrepreneurs from others? If you could double down on one trait, what would it be?

If I could pick only one, I would pick determination. I think that is the biggest predictor of success, or at least the biggest differentiated predictor. And if you would allow a second, I would pick communication skills, or evangelism, or something in that direction. There are all of the obvious ones that matter, like intelligence, but there are a lot of smart people in the world, and when I look back at the thousands of entrepreneurs I've worked with, many of whom were quite capable, I would say those are one and two of the surprisingly differentiated characteristics.

Well, when I look at the different things that you've built and been working on, it could not be more foundational for the future: entrepreneurship, AI. I agree that this is really what has driven the future. Some people look at Silicon Valley and this story, and they worry about the culture, right, that this is a bro culture. Do you see prospects of that changing anytime soon, and would you welcome it? Can we get better companies by really working to expand the group of people who can be entrepreneurs, who can contribute to AI, for example?

For sure. And in fact, since these are the two things I've thought the most about, I'm excited for the day when someone combines them and uses AI to better, and maybe even more fairly, select whom to fund and how to advise them, and really make entrepreneurship super widely available. That will lead to better outcomes and more societal wealth for all of us. So, yeah, I think broadening the set of people who are able to start companies and get the resources they need is an unequivocally good thing, and it's something that I think Silicon Valley is making some progress on, but I hope we see a lot more,
and I do really, truly think that the technology industry, and entrepreneurship, is one of the greatest forces for self-betterment, if we can just figure out how to be a little bit more inclusive in how we do things.

My last question, Sam. TED is all about ideas worth spreading. If you could inject one idea into the mind of everyone listening, what would that idea be?

We've touched on it a bunch, but the one idea would be that AGI really is going to happen. You have to engage with it seriously, and you shouldn't just listen to this and then brush it aside and go about life as if it's not going to happen, because it is going to affect everything. And we all, I think, have an obligation, but also an opportunity, to figure out what that means and how we want the world, and this sort of one-time shift, to go.

Sam Altman, I'm kind of floored by the breadth of things you're engaged with. Thank you so much for spending so much time sharing your vision.

Thanks so much for having me.

[Music]

Okay, that's it for today. You can read more about OpenAI's vision and progress at openai.com. If you want to try playing with GPT-3 yourself, it's a little tricky: you have to find a website that has licensed the API. The one I went to was philosopherai.com, where you pay a few dollars to get access to a very strange mind. It's actually quite a lot of fun. The TED Interview is part of the TED Audio Collective, a collection of podcasts dedicated to sparking curiosity and sharing ideas that matter. This show is produced by Kim Nederveen Pieterse and edited by Grace Rubinstein and Sheila Orwano. Sam Bear is our mixer. Fact-checking is by Paul Durbin, and special thanks to Michelle Quint, Colin Helms, and Anna Phelan. If you like the show, please rate and review it; it helps other people find us. We read every review, so thanks so much for listening. See you next time.

[Music]
Info
Channel: TED Audio Collective
Views: 18,505
Id: Q3E5fagbcsA
Length: 68min 22sec (4102 seconds)
Published: Sat Apr 23 2022