Sam Altman talks GPT-4o and Predicts the Future of AI

Video Statistics and Information

Captions
We've had the idea of voice-controlled computers for a long time. They've never, to me, felt natural to use. And this one, the fluidity, the pliability, whatever you want to call it, I just can't believe how much I love using it.

Welcome to The Logan Bartlett Show. On this episode, what you're going to hear is a conversation I have with co-founder and CEO of OpenAI, Sam Altman. If this is your first time listening to The Logan Bartlett Show, this is a podcast where I discuss with leaders in technology, as well as investors, some of the lessons they've learned in operating or investing in businesses, mostly in the technology field. This discussion with Sam is a little bit different, in that I pushed on a number of things related to artificial intelligence as well as where OpenAI is headed, given how topical it is in the news and Sam's perspective on such a leading frontier as artificial intelligence. You'll hear that discussion with Sam here now.

Thanks for doing this.

Yeah, of course.

All right, I want to start off easy. What's the weirdest thing that's changed in your life in the last four or five years running OpenAI? What's the most unusual shift that's happened?

Quite a lot of things, but the inability to just be mostly anonymous in public is very, very strange. I think if I had thought about that ahead of time, I would have said, okay, this would be a weirder thing than it sounds like, but I didn't really think about it. It's a much weirder thing than it sounds; it's a strangely isolating way to live.

You believed in AI and the power of the business, so did you just not think through the derivative implication of running something like that?

I didn't think through all these other things. Like, oh, this is going to be a really important company; I didn't think that meant I wouldn't be able to go out to dinner in my own city.

That's weird.

That's weird.

You made an announcement earlier today.

We did.

Multimodal, GPT-4o. It's the omega sign, right, the "o"?

Just the "o," like omni.

Omni, okay, sorry. It works across text, voice, vision. Can you speak to why this is important?

Because I think it's an incredible way to use a computer. We've had the idea of voice-controlled computers for a long time; we had Siri, and we had things before that. They've never, to me, felt natural to use. And this one, for many different reasons, what it can do, the speed, adding in other modalities, the inflection, the naturalness, the fact that you can do things like say, hey, talk faster, or talk in this other voice, the fluidity, the pliability, whatever you want to call it, I just can't believe how much I love using it.

Spike Jonze would be proud. Are there use cases that you've gravitated to?

Well, I've only had it for about a week, but one surprising one is putting my phone on the table while I'm really in the zone of working and, without having to change windows or change what I'm doing, using it as another channel. So I'm working on something where I would normally stop what I'm doing, switch to another tab, Google something, click around, whatever, but now, while I'm still doing it, I just ask and get an instant response without changing from what I was looking at on my computer. That's been a surprisingly cool thing.

What actually made this possible? Was it an architectural shift or more compute?

It was all of the things that we've learned over the last several years. We've been working on audio models, we've been working on visual models, we've been working on tying them together, we've been working on more efficient ways to train our models. It's not like, okay, we unlocked this one crazy new thing all at once; it was putting a lot of pieces together.

Do you think you need to develop an on-device model to decrease latency to the point of usability?

For video, maybe; it would be hard to deal with network latency at some point. A thing that I've always thought would be super amazing is to someday put on a pair of AR goggles or whatever and just speak to the world in real time and watch things change, and that might get harder over network latency. But for this, two or three hundred milliseconds of latency feels faster than a human responding to me in many, many cases.

Is video in this case images?

Oh, sorry, I meant video if you wanted generated video, not input video.

Got it. So currently it's working with actual video as is?

Well, frame by frame.

Frame by frame, okay, got it.

You alluded recently to the next big launch maybe not being GPT-5. It feels like there's been an iterative approach to model development that you guys have taken. Is it fair to say that's how we should think about it going forward, that it's not going to be some big launch, here's GPT-5, but instead...

We honestly don't know yet. One thing I've definitely learned is that AI and surprise do not go well together, and given the traditional way a tech company launches products, we should probably do something different. Now, we could still call it GPT-5 and launch it in a different way, or we could call it something different, but I don't think we've figured out how to do the naming and branding for these things yet. It made sense to me from GPT-1 through the launch of GPT-4, and now, obviously, GPT-4 has continued to get much better. We also have this idea that maybe there's one underlying kind of virtual brain and it can think harder in some cases than others, or maybe it's different models, but maybe the user doesn't care whether they're different or not. So I don't think we know the answer to how we're going to productize and market all of this yet.

Does that mean maybe that the compute needed to make incremental progress on models might be less than what it's been historically?

I sort of think we'll always use as much compute as we can get. Now, we are finding incredible efficiency gains, and that's really important. The cool thing we launched today is obviously the voice mode, but maybe the most important thing is that we were able to make this so efficient that we can serve it to free users: the best model in the world by a good amount, if you go look at that, served to anybody who wants to download ChatGPT for free. It was a remarkable efficiency gain over GPT-4 and GPT-4 Turbo, and we have a lot more to gain there.

I've heard you say that ChatGPT didn't actually change the world in and of itself, but maybe just changed people's expectations for the world.

Yeah, I don't think you can find much evidence in the economic measurement of your choice that ChatGPT really inflected productivity or whatever. Maybe customer support, maybe some areas, but if you look at global GDP, can you detect when ChatGPT launched? Probably not.
Is there a point at which you think we'll be able to determine a GDP inflection?

I don't know if you'll ever be able to say this was the one model that did it, but I think if we look at the graph a couple of decades in the future, we'll be like, huh, something changed.

Are there applications or areas you think are most promising in the next 12 months?

I'm sure I'm biased just because of what we do here, but coding, I think, is a really big one.

Kind of related to the bitter lesson, you spent some time recently talking about the difference between deeply specialized models, trained on specific data for specific purposes, versus generalized models that are capable of true reasoning.

I would bet that it's the generalized model that's going to matter.

And what is the most important thing there, as you think about someone who's focused singularly on a dataset and all the integrations associated with something very narrow?

If the model can do generalized reasoning, if it can figure out new things, then if it needs to figure out how to work with a new kind of data, you can feed it in and it can do it. But it doesn't go the other way around: I don't think a bunch of specialized models put together can figure out the generalized reasoning.

So the implication of that for coding-specific models would probably be...

I think a better way of saying this is: I think the most important thing to figure out is the true reasoning capability, and then we can use it for all sorts of things.

What do you think the principal means of communication between humans and AI is in two years?

Natural language seems pretty good. I'm interested in this general idea that we should design a future that humans and AIs can use together, use in the same way. So I'm more excited about humanoid robots than I am about other forms of robots, because I think the world is very much designed for humans now, and I don't want that to get reconfigured for some more efficient kind of thing. I like the idea that we talk to AI in language that is very human-optimized, and that they even talk to each other that way, maybe, I don't know. But I think this is generally an interesting direction to push.

You said recently something to the effect of: the models might ultimately get commoditized over time, but the most important thing would likely be the personalization of the models to each individual. Do I have that right?

I'm not certain on this, but it's a thing that would seem reasonable to me, yeah.

Then beyond personalization, do you think it's just normal business, UI and ease of use, that ultimately wins for end users?

Those will for sure be important; they always are. I can imagine other things where there's a sort of marketplace or network effect that matters, where we want our agents to communicate across different companies, or an app store. But I sort of think the rules of business generally apply, and whenever you have a new technology, you're tempted to say they don't, but that's usually fake news. All of the traditional ways that you create enduring value will still matter here.

When you see open-source models catch up to benchmarks and all of that, what's your reaction to it?

I think it's great. Like many other kinds of technology, there will be a place for open source and there will be a place for hosted models, and that's fine. It's good.

I'm not going to ask about any specifics related to this, but there have been press reports, the Wall Street Journal I think was a credible one, about looking to raise major amounts of money to galvanize investment in fabs. In the semiconductor industry, TSMC and Nvidia have been ramping pretty aggressively to meet expectations of the need for AI infrastructure. You recently said that you think the world needs more AI infrastructure, and then you said a lot more AI infrastructure. Is there something you're seeing on the demand side that would require way more AI infrastructure than what we're currently getting out of TSMC and Nvidia?

First of all, I'm confident that we will figure out how to bring the cost to deliver current systems way, way down. I'm also confident that as we do that, demand will increase by a huge amount. And third, I'm confident that by building bigger and better systems, there will be even more demand. We should all hope for a world where intelligence is too cheap to meter, where it's just wildly abundant, people use it for all sorts of things, and you don't even have to think about whether, oh, do I want this reading all my emails and responding to them for me, or do I want this curing cancer. Of course you pick curing cancer, but the answer is you'd love for it to do both, and I just want to make sure we have enough for everyone to have that.

I don't need you to comment on your own personal efforts here, although, again, if you want to, please let me know, but Humane and Limitless and some of these different physical device assistants: what do you think those have gotten wrong, or where do you think adoption maybe hasn't met user desires just yet?

I think it's just early. I have been an early adopter of many types of computing. I had, and very much loved, the Compaq TC1000 when I was a freshman in college. I thought it was just so cool, and that was a long way from the iPad, a long, long way from the iPad, but it was directionally right. Then I got a Treo. I was the very-not-cool college kid with an old Palm Treo when that was not a thing that kids had, and that was a long way from the iPhone, but we got there eventually. These things feel like a very promising direction that's going to take some iteration.

You mentioned recently that a number of businesses that are building on top of GPT-4 will be steamrolled, I think was your term, by future GPTs. Can you elaborate on that point, and second, what are the characteristics of AI-first businesses that you think will survive GPT's advancement?

The only framework I have found that works for this is that you can either build a business that bets against the next model being really good, or one that bets on that happening and benefits from it. So if you're doing a lot of work to make one use case really work that was just beyond the capability of GPT-4 or GPT-4o, and you get it to work, but then GPT-5 comes out and does that and everything else really well, you're kind of sad about the effort you put into that one thing to get it to barely work. But if you had something that just kind of worked okay across the board, and people were finding things to use it for, but you didn't put in tons of work to make this one thing barely possible, then when GPT-5 or whatever we call it comes along and is just way better at everything, you get the rising-tide-lifts-all-boats effect.
What I would suggest is: in most cases, you're not building an AI business; you're building a business, and AI is a technology that you use. In the early days of the App Store, there were a lot of things that filled in some very obvious crack, and then eventually Apple fixed that; you didn't keep needing a flashlight app from the App Store once it was just part of the OS, and that was always going to happen. And then there were things like Uber that were enabled by having smartphones but really built a very defensible, long-term business. I think you just want to go for that latter category.

I can come up with a lot of incumbent businesses that leverage you all and fit that framework in some ways. Are there any novel types of concepts that you think are in that Uber category? It doesn't need to be a real company, if you think of one, or even a toy, just something interesting that you think is enabled in that way.

I would actually bet on the new companies for many of these cases. A very common example people use is trying to build the AI doctor, the AI diagnostician, and people say, oh, I don't want to do a startup here because Mayo Clinic, or take your pick, is going to do it. I'd actually bet it's a new company that does something like that.

Do you have any advice for CEOs beyond that who want to be proactive about preparing for these types of disruptions?

I would say: bet that intelligence as a service gets better and cheaper every year, and that it is necessary but not sufficient for you to win. So the big companies that take years to implement this, you can beat them, but every other startup that's paying attention is going to do this too, so you still have to figure out what the long-term defensibility of your business is. Now, the playing field is way more open than it's been in a long time, and there are incredible new things to do, but you don't get a pass on the hard work of building enduring value, even though you can now do it in more ways.

Is there a job title or a type of job responsibility that you could envision existing or being mainstream in five years because of AI that is maybe niche or non-existent today?

That's a great question, and I don't think I've ever gotten it before. People always ask what job is going to go away; the new one is a more interesting question. Let me think for a second. There are a lot of things I could talk about that I think are less interesting or less huge; what I'm trying to do is come up with the areas of what a hundred million people, or fifty million people, will do. The broad category of new kinds of art, entertainment, more human-to-human connection: I don't know what that job title is going to be, and I don't know if we get there in five years, but I think there's going to be a premium on fantastic human, in-person experiences. I don't know what we'll call that, but I can see it being a very huge category of something new that we do.

The most recent public tender of OpenAI was 90 billion, or something in that range. Are there one or two things that you look at as milestones that will get OpenAI to be a trillion-dollar company, short of AGI?

I think if we can just keep improving our technology at the rate we've been improving it, and figure out how to continue to make good products with it, and revenue keeps growing like it's growing, then, I don't know about specific numbers, but I think we'll be fine.

Is the business monetization model today the one that you think creates the trillion dollars of equity value?

The ChatGPT subscription model really works well for us, surprisingly. I wouldn't have bet on that; I wouldn't have been confident it would do as well as it has, but it's been good.

Do you think post-AGI, whatever that term actually means, you'll be able to, I don't know, ask the AGI what the monetization model should be?

That might be different, yeah. We should be able to.

I think we maybe saw in November that the existing OpenAI structure left some things to be desired, which I don't think we need to rehash in total; you've talked about it enough, I think. But you've spoken to making changes along the way. What do you think the appropriate structure is going forward?

I think we're close to being ready to talk about that. We've been hard at work on all sorts of conversations and brainstorming there, and hopefully this year, I think, we'll be ready to talk about it.

This calendar year? You'll tell me first?

We'll see.

When Larry Summers and Bret Taylor got battlefield-promoted to board directors, I was waiting; my call never came through. But one of the interesting things about preconceptions around AI, to your point on the monetization model and all that: I've heard you speak about the expectation that manual work would obviously go first, followed by white collar, followed by creative, and obviously it's proven to be kind of the opposite in some ways. Are there other things that are counterintuitive, where you would have presupposed it to be one way but it's actually proven to be the exact opposite?

That's definitely the mega surprise to me, the one that you mentioned. There are others; I don't think I would have expected it to be so good at legal work so early, just because I think of that as a very precise, complex thing. But no, definitely the big one is that observation about physical labor, cognitive labor, creative labor.

For those that haven't heard you make the point about AGI and why you dislike the term, can you elaborate on that point?

I no longer think of it as a moment in time. I obviously had so many naive conceptions when we started, as you do with any company, and particularly in a field that's moving around as much as this one is, but my naive conception was that we would get to a moment where we didn't have AGI, and then we did, and it would be a real discontinuity. I still think there's some chance of a real discontinuity, but on the whole I think it's going to look much more like a continuous exponential curve, where what matters is the pace of progress year over year. You and I will probably not agree on the month or even the year where we say, okay, now that's AGI. We can come up with other tests that we would agree on, but even that is harder than it sounds, and GPT-4 is definitely not over a threshold that almost anyone would call an AGI, and I don't expect our next big model to be either. But I can imagine that we're only maybe one or two or some small number of ideas away, and a little bit more scale, from something where we say, this is now kind of different. And I think it's important to stay vigilant about that.
Is there a more modern Turing test, we can call it the Bartlett test, where you think, hey, when it crosses this threshold...

I think when it's capable of doing better research than all of OpenAI put together, or even one OpenAI researcher, that is somehow a very important thing. That feels like it could, or maybe even should, be a discontinuity.

Does that feel close?

Probably not, but I wouldn't rule it out.

What are the biggest obstacles you see to reaching AGI? It sounds like you think the scaling laws currently have runway and will hold for the next couple of years.

Yeah. I think the biggest obstacles are new research, and one of the things I've had to learn shifting from internet software to AI is that research does not work on the same schedule as engineering. Usually that means it takes much longer, or it doesn't work, but sometimes it means it works tremendously faster than anyone could have predicted.

Can you elaborate on that point, that progress isn't as linear?

I think the best way to elaborate is with historical examples. I'm going to get the numbers wrong here, but...

I'm sure no one will try to correct you.

Someone will, yeah. I think the neutron was first theorized in the early 1900s and maybe first detected in the teens or twenties, the work on what became the atomic bomb started in the thirties, and it happened in the forties. From not really having any idea that there was even a thing like a neutron, to being able to make an atomic bomb and just break all of our intuitions about physics, that's wildly quick. There are other examples that are less pure science, like the famous quote about the Wright brothers. Again, I'm going to get the numbers wrong, but let's say it was 1906 when they said they thought flight was 50 years away, and in 1908 they did it, something like that. And there are many, many other examples throughout the history of science and engineering. There are also plenty of things that we theorize that never happen, or take decades or centuries longer than we thought, but sometimes it does go really fast.

Interpretability: where are we on this path, and how important is that long term for AI?

There are different kinds of interpretability. There's the question of whether I understand what's happening at every mechanical layer of the network, and then there's whether I can look at the output and say there's a logical flaw here, or whatever. I am excited about the work going on at OpenAI and elsewhere in this direction, and I think interpretability as a broader field seems promising and exciting.

I won't pin you down; I assume you'll have a nice announcement when you're ready to say something. But do you think that's going to be a requisite to mainstream AI adoption, maybe within enterprises or something? GPT-4 is already quite widely adopted at this point.

Yeah, that's fair.

There are maybe a few things that you get asked questions about, or accused of, maybe accused is too strong a term, but that people are suspicious about. One of which is this: I think there's a needle-threading that exists between being excited about AGI but also feeling like you have a personal apprehension about you, Sam, or OpenAI generally, being the ones to harness it and unilaterally make decisions, which has led to calls for some body, some governmental structure, where elected leaders make these decisions instead of you.

Yeah. I think it would be a mistake to heavily regulate current-capability models, but when the models pose significant catastrophic risk to the world, which I believe they will, I think having some sort of oversight is probably a good thing. Now, there is some needle-threading about where you set those thresholds and how you test for them, and it would be a real shame to stop the tremendous upsides of this technology, or to stop letting people who want to train models in their basement be able to do that; that would be really, really bad. But, you know, we have international rules for nuclear weapons, and I think that's a good thing.

The regulatory capture crowd, and I'm sure we can both think of which VCs fall into that accusatory bucket around this regulation, what do you think they don't see about the potential risks inherent in AI?

Well, I don't think they have, on the whole, seriously wrestled with AGI. Some of the loudest voices about AI regulatory capture were totally decrying it as a possibility not that long ago, not all of them. But I do have empathy for where they're coming from, which is that regulation has, on net, been really bad for technology; look what happened to the European technology industry. I get it, I really do. And yet I think there is a threshold we are heading towards above which we may all feel a little bit different.

Do you think open-source models themselves present inherent danger in some ways?

No current one does, but I could imagine one that could.

I've heard you say that safety is kind of a false framing in some ways, because it's really a discussion about what we explicitly accept, like airlines.

Yeah, safety is not a binary thing. You are willing to get on airplanes because you think they're pretty safe, even though you know they crash once in a while, and what it takes to call an airline safe is a matter of some discussion that people have different opinions on.

And it's a topical point right now.

A topical point right now. They have gotten just unbelievably safe overall, triumphantly safe, but safe does not mean no one will ever die in an airplane. Similarly with medicine: we weigh the side effects, and some people have adverse consequences from it.

And then there's the implicit side of safety as well, like social media, or things that have negative associations. Is there something you could imagine seeing on the safety side that would cause you to act differently than pushing forward?

Yeah, we have this thing called our preparedness framework that is sort of exactly that: it says that in these categories, at these levels, we act differently.

I've had Eliezer on the podcast.

How was that?

It was wonderful. We sat for the longest podcast I've ever done; I think it was four hours.

He has more free time than me, so I apologize, I can't go that long.

We can do multiple sessions; we don't need to do them all now. His points, I think, stay fairly consistent, and he's a very interesting guy to sit down with for four hours and talk to. We went a bunch of different directions, but I'd be remiss as a friend of the pod not to ask a fast takeoff question. I'm curious: there are so many different fast takeoff scenarios, and one of the constraints we point to today is just a lack of AI infrastructure. If some researcher developed a modification to the current transformer architecture where suddenly the amount of data and hardware scale needed was drastically reduced, more like the human brain or something like that, is it possible we could see a fast takeoff scenario?
Possible? Of course, and it may not even need a modification. It is still not what I believe is the most probable path, but I don't discount it, and I think it's important that we consider it in the space of what could happen. I think things will turn out to be more continuous, even if they're accelerating. I don't think we're likely to go to sleep one day with pretty good AI and wake up the next day with genuine superintelligence, but even if the takeoff happens over a year or a few years, that's still fast in some sense. There's another question about how much, even if you got to this really powerful AGI, it changes society the next day versus the next year versus the next decade, and my guess is that in most ways it's not a next-day or next-year thing, but over the course of a decade the world will look quite different. I think the inertia of society is a good, helpful thing here.

One of the things people also have suspicion around... I imagine the questions you don't love getting are Elon, equity, and the November board structure. Those are probably the three.

I've answered them a lot of times.

Which one of those do you like the least?

I don't hate any of them; I just don't have anything new to say on any of them.

Well, I guess I'm not going to ask the equity one specifically, because I think you've answered that in more than enough ways, although people still don't seem to like the answer that enough money is a thing.

Yeah. If I made a trillion dollars and then gave it away, it would fit with the expectation, or the sort of way it's usually done.

No, I just mean most people who make a ton of money...

Yeah.

What do you feel like your motivations are in this pursuit of AGI, outside of the equity? I think most people take solace in the fact that, oh well, even if I have some higher mission, I still get paid for it in some way. What are your motivations now, coming into work every day? Where do you derive the most fulfillment?

Look, I tell people this all the time: I'm willing to make a lot of other life trade-offs and sacrifices right now, because I think this is the most exciting, most important, best thing I will ever touch. It's an insane time, and I'm happy. It won't be forever; someday I'll get to go retire on the farm and I'll remember this fondly, like, oh man, those were long, stressful days, but it's also just incredibly cool. I can't believe this is happening to me. It's amazing.

Was there a single moment, I guess we can go back to the fame example of not being able to go out in your city, but has there been a single moment that was most surreal? You've done a podcast with Bill Gates; I'm sure your speed dial, if I took your phone right now, would have a lot of very interesting people on it. Was there a single moment over the course of the last couple of years where you thought, this is a uniquely surreal moment?

Kind of every day there's something where, if I had a little bit more mental space to step back, I'd think, this would be crazy.

Kind of a fish in water.

Yeah, it is kind of like that effect. After all of that November stuff happened, that day or the next day, I got, I don't know, 10 or 20 texts, something like that, from major world figures, presidents and prime ministers of countries, whatever. And that was not the weird part. The weird part was that it happened, and I was responding, saying thanks or whatever, and it felt very normal. Then we had these insane, super-jammed four and a half days in this crazy state, and it was just weird: not sleeping much, not really eating, energy levels very high, very clear, very focused, but your body in some weird adrenaline-charged state for a long time. All this happened the week before Thanksgiving, which was kind of crazy, and it got resolved on Tuesday night.

You canceled our podcast.

Canceled our podcast, sorry, I don't usually cancel things. Anyway, then on that Wednesday, now the Wednesday before Thanksgiving, Ollie and I drove up to Napa and stopped at this diner, Gott's, which is very good. On the drive up I realized I hadn't eaten in days, and then all of a sudden things were kind of normal again: okay, this is what we'd normally be doing on a weekend, heading up, going to Gott's, ordering like four entrees, heavy, fried stuff, two milkshakes just for me. I sat there and ate, and it was very satisfying. And as I was doing that, the president of one country texted again and just said something like, glad this got sorted out, great, whatever. And then it hit me: all of these people had texted me and it wasn't weird. The weird part was realizing, in the middle of it, that it should have been this very weird experience and it wasn't. So that's one that sticks out.

Yeah, that is interesting.

My takeaway is that human adaptability to almost anything is much more remarkably strong than we realize. You can get used to anything as the new normal, good or bad, pretty fast, and over the last couple of years I have learned that lesson many times. But I think it says something remarkable about humanity, and it's good for us as we stare down this big transition.

I remember post-9/11, I'm sure you remember exactly where you were, but I was in Summit, New Jersey, and dozens of people in our town passed away, and how close the town came together after a terrorist attack, and it seemed so normal, just the normalcy of it. Or I have friends in Israel right now, and you talk to them about it and they say, no, it's normal, and I say, well, there's a war going on, it's got to be surreal, and they say, well, what are you going to do? You go about your day, you go get your food, all of that. It's amazing: with these psychologically impactful things, at the end of the day we need to go get food and we need to talk to our friends and all this stuff, so it is amazing how much of that can happen.

It really, genuinely, has been my big surprising takeaway, to feel it so viscerally.
As you think about models becoming smarter and smarter, and you touched on this a little earlier with the creative element, what do you think remains uniquely human as models take on more and more of the capabilities we used to consider ours?

I think many, many years from now, humans are still going to care about other humans. I was reading the internet a little bit, and everyone's saying, oh, everyone's going to fall in love with ChatGPT now, it's going to be the ChatGPT girlfriend, whatever. I bet not. I think we're so wired to care long term about other humans, in all sorts of big and small ways, that our obsession with other people is going to remain.

Sounds like you hear a lot of conspiracy theories about me. You probably don't hear a lot of conspiracy theories about AI, and you might not care if you did hear one.

I think we're probably not going to watch robots play each other in soccer as our main hobby.

As you run OpenAI, the company itself: you built a lot of rules or frameworks at YC on how to run businesses, and then you've broken some of them. Are there different types of people you hire for this business, within the executive ranks, than you would have had you started a consumer internet company or a B2B software company?

Researchers are very different from product engineers, for the most part.

And is it Brad or Mira or some of the executives... researchers are unique, but does OpenAI bring in a different type of executive, or do you hire for a different trait?

So I mostly have not. Sometimes you hire externally for executives, but I'm a big believer that you generally promote from within. It's probably a mistake to only promote people to be executives, because that can reinforce a monoculture, and I think you want to bring in some new, very senior people, but we mostly like homegrown talent here, and I think that's a positive given how different what we do is from what you would do somewhere else.

Is there a decision you've made over the course of OpenAI that felt the most important at the time of making it, and how did you go about making it?

It would be hard to point to just a single one, but the decision that we were going to do what we call iterative deployment, that we were not going to build AGI in secret and then put it out into the world all at once, which was the prevailing wisdom and the plan elsewhere, I think was a quite important decision, and it felt like a really important one at the time. Betting on language models was an important decision and also felt like an important one at the time.

I actually don't know the story of betting on language models. How did that come to be originally?

Well, we had these other projects; we were doing the robot thing and video games, and there was a very small effort, started with one person, looking at language modeling. Ilya really believed in it, really believed in the general direction that became language models, let's say. We did GPT-1, we did GPT-2, we started to study scaling laws, scaled GPT-3, and then we made a big bet that this was what we were going to do. All of these things look so obvious in retrospect; they really don't feel that way at the time.

One other thing you brought up recently was that there are two approaches to AI: the replication of yourself, and then the smartest employee.

Oh, it's not AI itself but how you want to use it, like when you imagine using your personal AI.

So there's a subtle distinction in how you said it, but can you expound on it? It seemed like a fairly profound distinction in how at least Sam thinks about the future of AI use cases. Can you explain that point again, because clearly I misunderstood it?

If you're going to text me five years in the future, I think you want to be clear on whether you're texting me or my AI assistant. If it's my AI assistant that's going to bundle messages together so you'll get a reply later, or if it can easily do something you might ask my human assistant to do, then fine, you'll know that. I think there will be value in keeping those things separate, rather than saying, all right, the AI is truly just an extension of Sam, I don't know if I'm talking to Sam or Sam's AI ghost, but that's okay because it's the same merged entity. I think there will be Sam and Sam's AI assistant. And I want that for myself too: I don't want to feel like this thing is some weird extension of me, but rather a separate entity that I can communicate with across a barrier.

You see it in music or creative work, where it becomes pretty easy to replicate a Drake or a Taylor Swift audio; we probably need some form of validation, some centralized way of validating, hey, this is actually the creative work of this person. You're probably going to want some version of that at a personal level too.

Yeah. But the way I think about OpenAI is that there are different people, and I'm asking them to do things and they go off, or they ask me to do things and I go off, but it's not a single Borg, and I think that's a way we're all comfortable.

So what is that? Can you tie that back to the decentralization of letting individuals do their...

No, not... well, also that, but I meant more just: what is the abstraction of what my personal AI is going to be?

Got it.

Do I think of it as, this is just me, it's going to take over my computer and do what's best, and because it's me that's totally fine, and it's answering messages on my behalf, and I'm slowly going to take my hands off the controls and it's slowly going to become me? Or do I think of it as a really great person I work with, whom I can say, hey, can you do this thing and get back to me when you're done, but that I think of as not me?

As you think about the educational system, the college class of 2030 or 2035, some group in the future, are there changes specifically that you think should be made within the college educational system to prepare people for the future we'll have?

The biggest one is that I think people should not only be allowed but required to use the tools. There will be some cases where we want people to do something the old-fashioned way, because it helps with understanding; I remember sometimes in math class there would be a test where you couldn't use a calculator. But on the whole, in real life you get to use the calculator, so you need to understand the material, but you've also got to be proficient with the calculator. If you did math class and never got to use the calculator, you would be less good at the work you need to do later. If all of the OpenAI researchers had never gotten to use a calculator, or computers at least, OpenAI probably wouldn't have happened. We don't try to teach people not to use calculators or computers, and I think we shouldn't train people not to use AI either. It's just going to be an important part of doing valuable work in the future.

Last one. In "Planning for AGI and Beyond" you wrote that the first AGI will be just a point along the continuum of intelligence, which we spoke about earlier, and that you think it's likely progress will continue from there, possibly sustaining the rate of progress we've seen over the past decade for a long period of time. Do you ever personally stop and process or visualize what the future will look like, or is it just too abstract to contemplate?

All the time. I mean, I don't visualize it as flying cars in a Star Wars future city, not like that, but definitely what it means when one person can do the work of hundreds or thousands of well-coordinated people, and what it means when, I don't want to say we can discover all of science, but kind of what it would feel like to us if we could discover all of science.

That would be pretty cool.

Yeah.

Sam, thanks for doing this.

Thank you.

Thank you for listening to this episode of The Logan Bartlett Show with CEO and co-founder of OpenAI, Sam Altman. If you enjoyed this conversation, I'd really appreciate it if you like and subscribe, and share it with anyone else you think might find it interesting, and come back next week for another exciting episode with a different founder and CEO of an important technology company. Thanks, everyone, for listening, and have a good week.
Info
Channel: The Logan Bartlett Show
Views: 747,617
Keywords: Sam Altman, ChatGPT, ChatGPT-4o, OpenAI, AI, artificial intelligence, tech, entrepreneurship, startups, venture capital, Silicon Valley, Logan Bartlett
Id: fMtbrKhXMWc
Length: 46min 14sec (2774 seconds)
Published: Tue May 14 2024