Sam Altman on Q* | Lex Fridman Podcast

Video Statistics and Information

Captions
Lex Fridman: There are just some questions I would love to ask your intuition about. What is GPT able to do and not do? It's allocating approximately the same amount of compute for each token it generates. Is there room in this kind of approach for slower thinking, sequential thinking?

Sam Altman: I think there will be a new paradigm for that kind of thinking.

Lex Fridman: Will it be similar architecturally to what we're seeing now with LLMs? Is it a layer on top of the LLMs?

Sam Altman: I can imagine many ways to implement that. I think that's less important than the question you were getting at, which is, do we need a way to do a slower kind of thinking, where the answer doesn't have to get... I guess, spiritually, you could say that you want an AI to be able to think harder about a harder problem and answer more quickly about an easier problem. And I think that will be important.

Lex Fridman: Is that like a human thought that we're just having, that you should be able to think hard? Is that a wrong intuition?

Sam Altman: I suspect that's a reasonable intuition.

Lex Fridman: Interesting. So it's not possible that once we get to something like GPT-7, it would just instantaneously be able to see, "Here's the proof of Fermat's Theorem"?

Sam Altman: It seems to me like you want to be able to allocate more compute to harder problems. If you ask a system, "Prove Fermat's Last Theorem," versus, "What's today's date?", unless it already knew and had memorized the answer to the proof, and assuming it's got to go figure that out, it seems like that would take more compute.

Lex Fridman: But can it look like, basically, an LLM talking to itself, that kind of thing?

Sam Altman: Maybe. There are a lot of things that you could imagine working. What the right or the best way to do that will be, we don't know.

Lex Fridman: This does make me think of the mysterious lore behind Q*. What's this mysterious Q* project? Is it also in the same nuclear facility?

Sam Altman: There is no nuclear facility.

Lex Fridman: That's what a person with a nuclear facility always says.

Sam Altman: I would love to have a secret nuclear facility. There isn't one.

Lex Fridman: All right.

Sam Altman: Maybe someday.

Lex Fridman: Someday? All right. One can dream.

Sam Altman: OpenAI is not a good company at keeping secrets. It would be nice. You know, we've been plagued by a lot of leaks, and it would be nice if we were able to have something like that.

Lex Fridman: Can you speak to what Q* is?

Sam Altman: We are not ready to talk about that.

Lex Fridman: See, but an answer like that means there's something to talk about. It's very mysterious, Sam.

Sam Altman: I mean, we work on all kinds of research. We have said for a while that we think better reasoning in these systems is an important direction that we'd like to pursue. We haven't cracked the code yet. We're very interested in it.

Lex Fridman: Is there going to be a moment, Q* or otherwise, where there's a leap similar to ChatGPT, where you're like...

Sam Altman: That's a good question. What do I think about that? It's interesting to me. It all feels pretty continuous.

Lex Fridman: Right. This is kind of a theme that you're saying: it's gradual, you're basically gradually going up an exponential slope. But from an outsider's perspective, just watching it, it does feel like there are leaps. But to you, there aren't?

Sam Altman: I do wonder if we should have... So, part of the reason that we deploy the way we do, we call it iterative deployment, is that rather than go build in secret until we got all the way to GPT-5, we decided to talk about GPT-1, 2, 3, and 4. And part of the reason there is, I think AI and surprise don't go together. And also, the world, people, institutions, whatever you want to call it, need time to adapt and think about these things. And I think one of the best things that OpenAI has done is this strategy: we get the world to pay attention to the progress, to take AGI seriously, to think about what systems and structures and governance we want in place before we're under the gun and have to make a rushed decision. I think that's really good. But the fact that people like you and others say you still feel like there are these leaps makes me think that maybe we should be doing our releasing even more iteratively. I don't know what that would mean; I don't have an answer ready to go. But our goal is not to have shock updates to the world. The opposite.

Lex Fridman: Yeah, for sure. More iterative would be amazing. I think that's just beautiful for everybody.

Sam Altman: But that's what we're trying to do. That's our stated strategy, and I think we're somehow missing the mark. So maybe we should think about releasing GPT-5 in a different way, or something like that.

Lex Fridman: Yeah, 4.71, 4.72. But people tend to like to celebrate. People celebrate birthdays. I don't know if you know humans, but they kind of have these milestones.

Sam Altman: I do know some humans. People do like milestones. I totally get that. I think we like milestones too. It's fun to declare victory on this one and go start the next thing. But yeah, I feel like we're somehow getting this a little bit wrong.
Info
Channel: Lex Clips
Views: 64,439
Keywords: ai, ai clips, ai podcast, ai podcast clips, artificial intelligence, artificial intelligence podcast, computer science, consciousness, deep learning, einstein, elon musk, engineering, friedman, joe rogan, lex ai, lex clips, lex fridman, lex fridman podcast, lex friedman, lex mit, lex podcast, machine learning, math, math podcast, mathematics, mit ai, philosophy, physics, physics podcast, sam altman 2, science, tech, tech podcast, technology, turing
Id: vdv8TF8n52Y
Length: 6min 5sec (365 seconds)
Published: Fri Mar 22 2024