Beyond AI with Sam Altman and Greg Brockman

Video Statistics and Information

Reddit Comments

Ah, yes, renowned Socialist and Venture Capitalist, Sam Altman.

u/moreVCAs · Oct 07 2019 · 1 upvote
Captions
[Music]

Morning, guys. Morning. All right, we're going to talk about artificial general intelligence. Now, I don't know that everybody knows exactly what that is, so maybe, Greg, since you're the tech guy, why don't you explain it to us?

All right. Today's artificial intelligence you can almost think of as starting to match the function of individual brain regions — a module for vision, a module for speech. By contrast, artificial general intelligence is going to be something much more powerful: a system that can actually understand an entire domain of study, can understand multiple domains, integrate information between them, and actually discover new knowledge — work with humans to solve global-scale challenges that we don't have solutions for. And that's very much the mission of OpenAI: to get to that point, and not just to get there, but to make sure it really benefits not just a small set of people but all of humanity.

Got it. Now, to do that, you first set up a nonprofit organization, and then earlier this year you switched to a for-profit, but with capped returns for the first-round investors. Why did you do that?

Yeah. The way we started OpenAI was that we had this mission — we wanted AGI, artificial general intelligence, to benefit everyone — and so we started in the most obvious way, as a nonprofit. But one thing we found as we started to make progress — we've really tried to be at the cutting edge and make AI continually do impossible things — is that it's actually a very expensive endeavor. Just like in physics you've got to build these massive particle accelerators, in AI we have to build these massive supercomputers. So we needed to be able to raise investment capital but still stay true to the mission, and we ended up designing a structure that is neither exactly a for-profit nor a nonprofit: we only owe a fixed return to those investors, and to employees who have equity, and everything else is owned by the nonprofit, to benefit everyone.

Right, and that return is capped at 100x.

That's true for the first round of investment. You can almost think of it as: as we make progress, future investors are going to come in for a lower multiple.

Sure. 100x — that's a lot.

Not for a start-up investor, and sort of for the first round. We tried to think about what a good start-up investment, in sort of our seed round, would be. Then with Microsoft we did it at a lower multiple, to model that progress. But our goal is to have returns to investors like a good startup investment — just not unprecedentedly good.

Has moving to this model changed how you do your research, how you do your business?

Not at all in spirit. We still view ourselves as working for the world, but I think you have to play the field as it lies, and we just need so much capital to do our work — I think we'll need more capital than any nonprofit has ever raised, probably. So this was a reflection of reality. But we put a year's worth of work into designing a new structure that lets us keep our mission and still succeed — get the capital we need to succeed. Even as a for-profit, you can still work for the world. We're an LLC; we're not a standard thing. We don't have traditional shareholders, and we have documents that say, in very clear words at the top, in a purple box, that this is not a standard investment — we can choose to make it worth nothing at any point if we need to. Super speculative. But I think the technology we're working on has so much promise that if it works — if we're successful in our mission — it's not going to be a particularly difficult thing for us to provide a great return to our investors and also unprecedented value to the world.

There's a big "if" in there, though — if it works, right? Where do you put your chances of actually making AGI, of you being the first one to make AGI work — or of it working at all?

Well, if you ask a typical startup what their chances of success are — you don't start on these endeavors because they're certain; you embark on them because they're important. But you've got a technical challenge that still needs to be solved. It's not just building a social network that maybe gets used or not.

Absolutely.

So AGI is something where there's been sixty years' worth of debate on when this thing is going to come. There were people in the '80s, and people in the '90s, who thought it was just around the corner — and it might still be true that it's just around the corner; I think we can't rule it out. But one thing that's also really important is that the path to AGI isn't quite a singular breakthrough where either you've got it or you don't. Along the way we're producing these super valuable AI technologies, and they're becoming more and more powerful. So the upside, if we truly succeed in the mission — we build the kind of system we're talking about and make it benefit everyone — is going to be something on the order of the Industrial Revolution. And if we fail at that, then we'll just have created really amazing technologies that can still be used to benefit the world. Maybe we'll succeed, maybe we won't; we certainly have an enormously difficult challenge in front of us. But I was reflecting just now: in January of 2016, OpenAI started in Greg's apartment. I was mostly working at YC at the time, but I came by a few days a week, and it was, I don't know, eight people sitting around a kitchen table. I remember talking about what we thought we'd be capable of over the next, let's say, three years, and that so badly undershot what, three years later, we're actually capable of, that I suspect — in the way that humans are bad at estimating exponential curves — we still
are underestimating what we'll accomplish in the next three years. I think we have technology that learns. That's this amazing secret in the world — it's what makes humans special — and we figured out how to do it; that's like figuring out how to go to the moon. We're able to keep driving that forward: every year the field is doing things that experts say are difficult or impossible, and if that keeps going, we'll get very far.

Is that why you decided to go to OpenAI after Y Combinator?

Yeah. My general strategy is: if you don't know the most important thing to work on, you should work on many things and take a portfolio strategy. And I loved YC — it was a great way to work on many, many things — but once you identify the one most important thing, power laws being what they are, you should just go try to work as hard as you can on that. And I think, in expected value, this is by far the most important thing I can imagine working on.

What's the ideal outcome for OpenAI?

The ideal outcome is that we succeed at the mission. What we're really trying to do right now is continually be at the cutting edge of making AI do impossible things, all the way up to building a system that can really be an artificial general intelligence. And it's not just about getting there — it's about actually applying this to global-scale challenges that right now seem like they might just be out of our reach. Think about climate change: there are so many aspects of that problem that are super hard — technological approaches, societal and incentive problems. If we can actually have tools that can work with humans and really help us figure out how to tackle that, that's what we really want to be able to accomplish.

But you're not doing any commercial products yet — you're just doing research at this point. Is that very deliberate?

Yeah, and that's very important. The way to think about it — you know the quote that one of the best ways of doing a start-up is you just find an exponential and ride it? It turns out that right now we are on this insane exponential of AI progress. You can look at it as driven by the amount of computational power that underlies these models, which has been increasing about five times faster than Moore's law for the past six or seven years, which is just a really unprecedented rate.

People don't have intuition for that. It breaks down.

That's right, it just totally breaks down. One way to think about it: it's almost as if your phone battery lasted for one day, and you wake up five years later and it lasts for 800 years, and then you wait five more years and it lasts for, you know, 100 million years. That's the kind of progress we're seeing in computational power. So for us, the most important thing is to always be at that cutting edge.

So you think it's basically a waste of time to lock yourself into the state of the technology right now?

I don't think it's a waste, because there are so many good applications. Actually, one upside of this Microsoft deal is that it's not just an investment, it's also a partnership: we have an arrangement where, if we choose to license technology, they're super excited to commercialize it and put it into Microsoft products. That's one way we can both continue to fund the research and also get some of these breakthroughs to actually make it out into the world.

Sure. Let's talk about that Microsoft deal for a second, because you took a billion dollars from Microsoft, and I think there was some confusion about what that billion dollars actually meant. Can you explain what kind of deal this is — is it just Azure credits?

No, it's cash. There's a ton of Azure too, but it's in cash.

Right. So what is Microsoft getting for that?

First, again, it's an investment, so they now share in some of the upside. But secondly there's this partnership: we'll be running all of our things on Azure —

Exclusively, right?

That's right. And we're working together to build these massive supercomputers and push forward AI technology, and I think people in the world who use Azure are going to see that platform just get better and better for AI development.

Yeah — if Microsoft can collaborate with us and we can build an amazing supercomputer to train these models on, that clearly benefits us — we really need that, and Microsoft is a great partner to build it — but it should eventually benefit all their customers, with what they learn from it.

Why lock yourself into Azure, though?

Well, it's logistically much easier for us to be on one platform, and again, we're interested in particular supercomputers that work for our workloads. We talked to all of the players, and I think this will be — it already is — off to a really good start. And just as some historical context, we've actually used every major cloud, plus our own physical hardware, over the course of OpenAI, so we've kind of taken a look at everything that's out there.

Okay, so you said you talked to all the players. Would you have taken the same deal from Google?

What's particularly good about Microsoft is the degree to which their mission aligns with ours. It's hard to find a partner that can really scale with us on the capital side over the coming years, have the capability to build the computing infrastructure we need, and have a super-aligned vision on the mission. So we're super happy we found that in Microsoft. And I think it's
hard to overstate how important that mission alignment is for investors. As we said at the beginning, we have a very atypical structure and a very atypical mission, and the idea that we might actually need to serve the world over serving investors is not something most investors will sign up for. But to Microsoft's credit, I think it's actually super aligned with their corporate mission, because the overall mission of Microsoft is to empower every organization and individual to achieve more, and the actual individuals at the top who run the company are, I think, super aligned with this mission of trying to make sure that artificial intelligence — and general intelligence — happens in a way that benefits everyone.

I think Satya said much the same yesterday when he was here — that he genuinely believes AI will be the most important trend for Microsoft to get right in the coming decade, and he really thinks, like we do, about distributing the benefits.

Sure. Now, you've talked about how much money it takes to run and build these models, and you've got the money now. Let's talk a little about what you're building right now. You showed this multi-agent model — maybe we can put up on screen the video of the work you showed, I think last week?

Yeah, a week or two ago. What we did is we trained some AIs to play hide-and-seek, and what they learned was a sequence of strategies and counter-strategies.

Can we loop that video a couple of times?

Yeah — I think it just went away. Yep. So at first they just learned to hide in a room and pull some boxes in there, and then they learned that there are these ramps that the seekers could use to jump over the walls, and eventually they got better and better, to the point where they actually figured out how to break the physics. What we're seeing here is one of the seekers figuring out that it can actually break the physics of the simulation — to have the ultimate cat-and-mouse — and actually get to where the little blue guys are hiding in the corner. There's a second one of these unconventional strategies that we didn't know about — there should be a second video, which hopefully we can put up — where the hiders learned they could box themselves in, and the seekers learned, hey, I can actually use the physics of this world to get on top of a box and surf my way into the shelter.

I think the thing that's really exciting about this is that we have this open world with a very evolutionary process. We know that evolution is the thing that made us smart, and there's a question people have asked for a very long time: can you use it to make AI smart? This kind of shows that yes, you can — you can even have them discover strategies, and ways of exploiting your physics, that humans couldn't have imagined. And I think this really showcases why we want to build these systems at all: we want to keep pushing the intelligence, make the world more open, have more diversity in the things these agents can do, and then actually have them discover things that we didn't know we didn't know were possible.

Yeah — in a very simple environment you can have incredibly complex behavior emerge just because of agents having to outsmart each other. If you think about humans: evolution did not give us such giant, energy-inefficient brains because they help us outrun a lion or run down an antelope — we have them to deal with each other. And similarly, most of the complexity in that environment comes from the agents having to deal with each other, interact with each other, cooperate with each other. I think that actually gives me a lot of optimism about a path to intelligence: if you imagine what we just showed on this exponential ramp of roughly 8x improvement per year, as the models get smarter and smarter, I think you can imagine enormously complex behavior — new knowledge discovery that we ourselves didn't have — coming from this multi-agent strategy.

They're exploiting basically a glitch in the setup — is that what's happening? They figured out a way to do that. Is that a good thing?

It's an interesting thing. Ultimately what we want are AIs that can work with us to help us solve problems we can't solve alone — why do we want tools at all? To extend what humans can do. So I think there's real promise there, but it also highlights the fact that we're going to have to really think, as these systems come into society: you don't want your self-driving car to go and do some crazy strategy that maybe has some bad externalities. So part of what we work on is not just the capabilities; it's also safety — figuring out how to make these systems actually do what humans want, be aligned with our values. Getting the systems to do what we want, not what we try to say, I think will be increasingly important.

Interesting.

And there are a lot of people out there who think this problem is intractable — how are you ever supposed to write down the reward function for humanity, how are you supposed to write down what you want? You can learn it. Exactly. And this, I think, is the really key insight: we already have technology that can learn, super well, all of these problems and solutions that
humans can't specify, and we're trying to do the exact same thing with what humans want.

Some people would be nervous about that. Tell me more about why — just in terms of: you may have good intentions training these models, but somebody else's reward model may be very different.

So this, I think, is actually — all the things we've been talking about so far have been these technical problems, the questions of capital, how you build these computers. I think this is actually the most important question: let's say you really succeed and build these super powerful systems — what values are in there? Whose values are they? We as humanity are not very good at agreeing on a global value system, and I think this is a super important challenge. We actually have a policy team at OpenAI whose job it is just to think about this.

And do you have any results from that? How do you approach it?

I think there are some things that most of humanity agrees on as values, and then there are a few big spectrums in moral philosophy — individualism versus collectivism is one big one. As for how we start a process — a global discussion — about what the deployment of AGI is going to look like: we've really turned our attention to that, given the acceleration of our results. Just this year we put a team on it for the first time, saying, you know what, this sounds a little premature, but better to be a little early than even a little late. I think what it's going to come down to, as step one, is just a global discussion, which has not really even been started. As people start to take this seriously — as the non-tech world starts to take this seriously — the first stage is just a discussion about what kind of world we want on the other side. And one really key part of that is that it's not just a Silicon Valley problem. So far, Silicon Valley has kind of started to really isolate itself, and that's causing a lot of issues; for the most powerful technology ever, we've just got to be able to get global engagement.

Sure. In part I'm asking because you had this other result recently, GPT-2 — a model that can generate text based on a prompt, basically like an undergraduate would get in an English class — and you decided to withhold some of those models for a while and roll them out in a staged fashion, because you thought the model was too good.

Yeah. So this is a result that was, you know, five years ahead of where we thought we were going to get it. We built this model that was trained just by looking at internet text, and you could ask it to write an essay on, say, why recycling is bad for the world, and it came up with this great argument: why are we generating so much waste in the first place — we're just masking the symptoms with recycling. We'd never seen anything like it, and everyone who looked at it within OpenAI thought the implications were totally obvious, but we couldn't agree on exactly what those implications were. Some people thought it was obviously totally benign — people would use it to generate books and things like that; other people thought it could potentially be used for malicious purposes, for example generating fake news at massively unprecedented scale. And when we started this company, trying to make AGI benefit everyone, we always knew the day would come when not publishing everything would be part of that, and we knew it was super important to be a year too early rather than a year too late. So we might have been a year too early, but we decided to do a staged release — rolling out bigger and bigger versions rather than everything at once.

About this staged release: some people accused you — this is just a publicity stunt; they want to show off that their stuff is already so good it's dangerous and they can't even show it.

But we didn't say that it's dangerous. We said it might be — we weren't sure — and we said this is a good time for a practice run at a minimum, or maybe someone will actually misuse it. It certainly got reported as us saying "this is too dangerous," but we definitely never said that.

Maybe an AI wrote those headlines.

Exactly. What we said is that publication norms deserve a conversation, and I think we'll stand by that very strongly. And I think it worked: we're now at a stage where there's really an industry working group thinking about these problems, and we've seen various people replicate the model but also hold it back, because to some extent it's not about GPT-2 — it's about things like scamming people out of money.

Exactly — GPT-20 will be dangerous, or will be capable of misuse.

That's right, and so it's so important to have the norms in place. I'm actually kind of glad that we took a lot of anger and ridicule and, you know, whatever emotions people wanted to pour our way. The funny thing is, there was also a lot of support. I tweeted something about that recycling essay, and someone replied — and got a whole lot of likes — saying, "I can't believe these people aren't taking this seriously; why do they think this is funny?" And I looked at that and thought: that's exactly why we're taking this seriously.

Got it. When are you going to roll out the full model — when are you going to show that to the world?

My expectation, if everything goes well and nothing crazy happens, is within the next couple of months. If you look at the history of our ramp, we basically release the next version, get some data, see how people are using it, and then go from there.

And internally you already have GPT-10
that you're not showing us?

We do not comment on unreleased models.

Like GPT-3, GPT-4, something like that. All right — all of this work you've been doing, the multi-agent stuff, GPT, is all still within the not-yet-AGI realm. How does this work get you there?

So, the way we think about progress: in science you normally think about progress in terms of ideas — I came up with this theory that explains these phenomena and simplifies everything, now I can make the next theory, and eventually you get to whatever applications you want. The way we think about AI — we take much more of almost a startup mindset to it — is in terms of capability: can we solve a super hard challenge? In doing so, we think we'll generate these general-purpose methods that can be reused elsewhere. For example, we built a system that beat the world champions at the complex video game Dota 2, and we took that exact same training system, pointed it at robotics, and solved a robotic-hand problem that no one had been able to solve. So for sure, everything we build still has limitations, but somehow these systems are becoming more powerful — they're able to do things years ahead of when we thought they'd be possible — and as long as we're on that ramp, I think we're being successful.

Do you think that's enough — just making the current methods better — or do you need a completely new technology at some point?

I think there's only one way to find out. Honestly, our hope with Dota was that our current methods would hit a wall: we'd scale everything up, push it as far as we could, and it just wouldn't do it. That, I think, is the current unsolved challenge in AI — finding a real task that we just can't solve by pushing all our existing methods. We have to fail at something to know what to do next, and we're not there yet, I would say. If you look at capabilities — stuff like Dota, the multi-agent work, the robot — given a goal and a simulator, we can solve that. And if you look at stuff like GPT-2 — given massive amounts of information, we can understand it. Those are two pretty powerful capabilities towards intelligence.

That's right. Now, there's something funny about GPT-2: it's terrible at math. You ask it to add some numbers and it just starts making stuff up.

Because the data you fed it just has nothing to do with math?

Exactly right. And maybe if you just feed it some data that has to do with math, it'll work — but maybe we're missing reasoning; maybe that's something that's not there. Right now we don't really know the answer, and that's one thing we're really trying to discover.

Sure. It can write a book at this point, though.

That's right. So I think maybe a good thing to wrap up with: I brought on stage this book. It's not written by a human — this was written by GPT-2 and illustrated by a human. You can kind of look through it — if I can get it in the camera properly — and see that it's got lots of nice game mechanics for this RPG game that it wrote, lots of nice lists of different elves and items and things like that. And I think this shows you why these technologies are amazing today: you can actually work with them to build artifacts, to build creative things, that humans wouldn't be able to do on their own.

So the AI is a bit of a nerd, it turns out. What's next — what's the next big step you're expecting to see?

It continues the ramp. I think reasoning is a good example of a technology that no one's ever seen. And I think GPT-2 is on the cusp of giving us machines that we can actually talk to and that can understand us — but we're not quite there, and there's work that we, and everyone else in the field, are trying to do to get there.

Got it. You're happy about your move?

Yes.

Good. I'm happy about the interview — thanks, guys. Thank you. Thank you so much.

[Applause]
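The growth figures quoted in the interview can be sanity-checked with a little arithmetic. The sketch below is illustrative only: it takes the battery analogy at face value (one day of charge today, roughly 800 years of charge five years from now — the speakers' analogy, not a measured dataset) and derives the compound annual growth rate and doubling time it implies.

```python
import math

# Battery analogy from the interview: a charge that lasts 1 day today
# would last ~800 years after 5 years of this growth (illustrative
# numbers taken from the conversation, not measurements).
DAYS_PER_YEAR = 365.25
growth_5yr = 800 * DAYS_PER_YEAR            # total growth factor over 5 years (~292,000x)
annual_factor = growth_5yr ** (1 / 5)       # compound annual growth (~12x per year)
doubling_months = 12 * math.log(2) / math.log(annual_factor)

print(f"annual growth factor: ~{annual_factor:.1f}x")
print(f"doubling time: ~{doubling_months:.1f} months")

# Five more years at the same rate multiplies battery life by ~292,000
# again: 800 years -> ~2.3e8 years, i.e. the "100 million years" order
# of magnitude quoted on stage.
battery_after_10yr = 800 * growth_5yr  # in years
print(f"after 5 more years: ~{battery_after_10yr:.2e} years")
```

The ~3.3-month doubling time that falls out of the analogy is close to the 3.4-month doubling OpenAI reported in its "AI and Compute" analysis, and noticeably faster than Moore's law's roughly two-year doubling — consistent with the "five times faster" figure mentioned in the conversation.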
Info
Channel: TechCrunch
Views: 16,721
Rating: 4.8634148 out of 5
Keywords: tech, technology, newest technology, hottest technology, brand new tech, gadgets, technology gadgets, hottest gadgets 2019, 2019 tech picks, techcrunch, techcrunch disrupt, tc disrupt 2019, will smith, marc benioff, ashton kutcher, aaron levie, dennis crowley, joseph-gordon levitt, hitrecord, tcdisrupt
Id: 14Qfi6n-U4U
Length: 25min 58sec (1558 seconds)
Published: Thu Oct 03 2019