OpenAI CEO Sam Altman and CTO Mira Murati on the Future of AI and ChatGPT | WSJ Tech Live 2023

Video Statistics and Information

Captions
So here's my first question for you. Very, very simple question: what makes you human?

Me?

Both of you. You both have to answer what makes you human - and you get one word.

Humor.

Emotion.

OK. To confirm you're both human, I'm gonna need you to confirm which of these boxes have a traffic light.

I think I can do that too now.

OK. All right. Well, Sam, you were actually here nine years ago at our first Tech Live, and I wanna roll the clip of what you said: "Certainly the fear with AI, or machine intelligence in general, is that it replaces drivers or doctors or whatever. But the optimistic view on this, and certainly what backs up what we're seeing, is that computers and humans are very good at very different things. So a computer doctor will out-crunch the numbers and do a better job than a human at looking at a massive amount of data and saying this. But on cases that require judgment or creativity or empathy, we are nowhere near any computer system that is any good at this." OK. How does 2023 grade that?

Partially right and partially wrong.

OK. It could have been worse.

Could have been worse.

What's your outlook now?

I think the prevailing wisdom back then was that AI was gonna do the kind of robotic jobs really well first - so it would have been a great robotic surgeon, something like that. And then maybe eventually it was gonna do the higher-judgment tasks, and then it would do empathy, and then maybe never was it gonna be a really great creative thinker. And creativity - at this point the definition of the word creativity is up for debate - but creativity, in some sense, has been easier for AI than people thought. You can see DALL·E 3 generate these amazing images, or write these creative stories with GPT-4, or whatever. So that part of the answer maybe was not perfect. And I certainly would not have predicted GPT-4 nine years ago, quite how it turned out. But a lot of the other parts - people still really want a human doctor - that's definitely very true.

And I wanna quickly shift to AGI. What is AGI, Mira? If you could just define it for everybody in the audience.

I will say it's a system that can generalize across many domains at a level that would be equivalent to human work, and that produces a lot of productivity and economic value. We're talking about one system that can generalize across a lot of digital domains of human work.

And Sam, why is AGI the goal?

The two things that I think will matter most over the next decade or few decades to improving the human condition the most - giving us just more of what we want - are abundant and inexpensive intelligence (the more powerful, the more general, the smarter, the better; I think that is AGI) and then abundant and cheap energy. If we can get these two things done in the world, it's almost difficult to imagine how much else we could do. We're big believers that you give people better tools and they do things that astonish you, and I think AGI will be the best tool humanity has yet created. With it, we will be able to solve all sorts of problems. We'll be able to express ourselves in new creative ways. We'll make just incredible things - for each other, for ourselves, for the world, for this unfolding human story.
And, you know, it's new, and anything new comes with change, and change is not always easy. But I think this will be just absolutely tremendous upside. And in nine more years, if you're nice enough to invite me back, you'll roll this question and people will say: how could we have thought we didn't want this?

I guess there are two parts to my next question: when will it be here, and how will we know it's here? Either one of you - you can both predict.

I think we'll call you in 10 years and tell you you're wrong.

Yeah, probably in the next decade. But I would say it's a bit tricky, because - when will it be here? I just gave you a definition, but often we talk about intelligence, and how intelligent is it, or whether it's conscious or sentient, and all of these terms. And they're not quite right, because they define our own intelligence, and we're building something slightly different. You can see how the definition of intelligence evolves: from machines that were really great at chess, to AlphaGo, and now the GPT series, and then whatever is next. It continues to evolve, and it pushes how we define intelligence.

We kind of define AGI as the thing we don't have quite yet. So we've moved - I mean, there were a lot of people 10 years ago who would have said that if you could make something like GPT-4, maybe GPT-5, that would have been an AGI. And now people are like, well, you know, it's a nice little chatbot or whatever. And I think that's wonderful. I think it's great that the goalposts keep getting moved; it makes us work harder. But I think we're getting close enough to whatever that AGI threshold is gonna be that we no longer get to hand-wave at it, and the definition is gonna matter.

So less than a decade, for some definition. OK. All right, the goalpost is moving. Sam, you've previously used the term "median human" when describing AGI. Can you explain what that is?

I think there are experts in areas who are gonna be better than AI systems for a long period of time. So you could come to some area where I'm really an expert at some task, and I'll be like, all right, GPT-4 is doing a horrible job there - GPT-5, -6, whatever, doing a horrible job there. But you can come to other tasks where I'm OK but certainly not an expert - where I'm maybe an average of what different people in the world could do - and for that, I might look at it and say, oh, this is actually doing pretty well. So what we mean by that is that experts in any given area can do extraordinary things, and that may take us a while to do with these systems. But for the more average-case performance - me doing something that I'm not very good at anyway - maybe our future versions can help me with that a lot. So am I a median human at some tasks? I'm sure. And at this, clearly, you're a very expert human, and no GPT is taking your job anytime soon.

OK, that makes me feel a little better. Mira, how's GPT-5 going?

We're not there yet, and it's kind of need-to-know basis. I'll let you know.

That's such a diplomatic answer.
I'm gonna direct all of this to Mira.

I would have - no, I would have just said, oh yeah, here's what's happening.

That's great. No, no, we're not sending him back here. Who paired these two? Whose idea was this? You're working on it? You're training it?

We're always working on the next thing.

Just do a staring contest - that's what makes us human. All of these steps, though, with GPT - GPT-3, 3.5, 4 - are steps towards AGI. With each of them, are you looking for a benchmark? Are you looking for "this is what we want to get to"?

Yeah. So before we had the product, we were looking at academic benchmarks and how well these models were doing on academic benchmarks. And OpenAI is known for betting on scaling - throwing a ton of compute and data at these neural networks and seeing how they get better and better at predicting the next token. But it's not that we really care about the prediction of the next token; we care about the tasks in the real world to which this correlates. And that's actually what we started seeing once we put the research out into the real world and built products, through the API and eventually through ChatGPT as well. So now we actually have real-world examples: we can see how our customers do in specific domains, how it moves the needle for specific businesses. And of course, with GPT-4 we saw that it did really well on exams like the SAT and the LSAT and so on. So it goes to our earlier point that we're continually evolving our definition of what it means for these models to be more capable. But as we increase the capability vector, what we really look for is reliability and safety. These are very intertwined, and it's very important to make systems that are increasingly capable, of course, but that you can truly rely on - that are robust, and whose output you can trust. So we're pushing on both of these vectors at the same time. And as we build the next model, the next set of technologies, we're continuing to bet on scaling, but we're also looking at this other element of multimodality, because we want these models to perceive the world in a similar way to how we do. We perceive the world not just in text but in images and sounds and so on, so we want robust representations of the world in these models.

Will GPT-5 solve the hallucination problem?

Well - maybe. Let's see. We've made a ton of progress on the hallucination issue with GPT-4, but we're not where we need to be. We're sort of on the right track, and it's unknown - it's research. It could be that continuing on this path of reinforcement learning from human feedback, we can get all the way to really reliable outputs. And we're also adding other elements, like retrieval and search, so you have the ability to get more factual outputs from the model. So there's a combination of technologies that we're putting together to reduce the hallucination issue.
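To make Mira's "predicting the next token" objective concrete, here is a deliberately tiny sketch: a bigram word model built from raw counts. The corpus and function names are invented for illustration, and this is of course not anything OpenAI uses - real models learn the same kind of conditional distribution with a neural network over vastly more data - but it is the simplest possible instance of next-token prediction.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": for each word, count which word follows it.
# This is the simplest possible instance of next-token prediction -- real
# models learn a far richer conditional distribution with a neural network,
# but the training objective is the same idea.
corpus = (
    "we train models to predict the next token and the next token "
    "prediction objective correlates with useful tasks"
).split()

following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most likely next word under the bigram counts."""
    candidates = following.get(word)
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

print(predict_next("next"))  # -> 'token' (it followed 'next' twice)
print(predict_next("the"))   # -> 'next'
```

Scaling, in the sense Mira describes, is the observation that making the network, the data, and the compute bigger keeps improving exactly this prediction task - and that the improvement correlates with useful real-world capabilities.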
Sam, I'll ask you about the training data. Obviously there may be some people in this audience who are not thrilled about some of the data that you guys have used to train some of your models. Not too far from here, in Hollywood, people have not been thrilled. Publishers, too. As you're working towards these next models, what are the conversations you're having around the data?

So, a few thoughts in different directions here. One: we obviously only wanna use data that people are excited about us using. We want the model of this new world to work for everyone, and we wanna find ways to make people say, you know what, I see why this is great. It may be a new way that we think about some of these issues around data ownership and how economic flows work, but we want to get to something that everybody feels really excited about. One of the challenges has been that different kinds of data owners have very different pictures, so we're just experimenting with a lot of things. We're doing partnerships of different shapes, and we think that, as with any new field, we'll find something that just becomes a new standard.

Also, I think that as these models get smarter and more capable, we will need less training data. There's this view right now that models are gonna have to train on every word humanity has ever produced, or whatever. Technically speaking, I don't think that's gonna be the long-term path here - we have an existence proof with humans that that's not the only way to become intelligent. So I think the conversation gets a little bit led astray by this, because what will really matter in the future is particularly valuable data. People trust the Wall Street Journal and want to see content from it, and the Wall Street Journal wants that too, and we'll find new models to make that work. But the conversation about data, and the shape of all of this, is about to shift because of the technological progress we're making.

Well, publishers like mine, who might be out there somewhere, want money for that data. Is the future of this entire race about who can pay the most for the best data?

No - that was sort of the point I was trying to make, I guess inelegantly. You will need some. But the thing that people really like about a GPT model is not fundamentally that it knows particular knowledge - there are better ways to find that - it's that it has this larval reasoning capacity, and that's gonna get better and better. That's really what this is gonna be about. And then there will be ways to set up all sorts of economic arrangements - as a user, or as a company making the model, or whatever - to say: all right, I understand that you would like me to go get this data from the Wall Street Journal; I can do that, but here's the deal that's in place. So there will be things like that. But the fundamental thing about these models is not that they memorize a lot of data.
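That separation - reasoning in the model, facts fetched on demand under whatever deal is in place - is the retrieval pattern Sam is gesturing at. Below is a hedged, toy sketch of the retrieval step only; the two-document store, the word-overlap scoring, and the `build_prompt` helper are all invented for illustration (production systems use learned embeddings over licensed corpora, not keyword overlap).

```python
# Toy sketch of the retrieval step: answer from a fetched source rather than
# from memorized training data. The document store, overlap scoring, and
# prompt format are all hypothetical.
documents = {
    "wsj-markets": "Stocks closed higher on Friday as tech shares rallied.",
    "wsj-tech": "OpenAI discussed data partnerships at WSJ Tech Live 2023.",
}

def retrieve(question: str) -> str:
    """Pick the stored document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(
        documents.values(),
        key=lambda doc: len(q_words & set(doc.lower().split())),
    )

def build_prompt(question: str) -> str:
    """Ground the model's answer in retrieved text instead of its memory."""
    context = retrieve(question)
    return f"Answer using only this source:\n{context}\n\nQuestion: {question}"

print(build_prompt("What did OpenAI discuss at WSJ Tech Live?"))
```

The design point is Sam's: the model's value is the reasoning over the retrieved text, and the source of that text can be an explicit, compensated arrangement with the publisher.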
So, sort of like the model where - you've got Bing integrated right now - it goes out, looks for some of that data, and can bring some of it back. And, you know, back in the early days of the internet there were a lot of conversations about what the different models could be, and we all kind of decided on, here's the core framework, and there are different pieces in there, of course. And we're all gonna have to figure that out for AI.

Well, speaking of Bing: you and Satya Nadella - your $10 billion friends? Or frenemies?

Friends. Yeah, I won't pretend that it's, like, a perfect relationship, but nowhere near the frenemy category. It's really good. Like, we have our squabbles.

It just seems like, increasingly, as you guys are releasing more and more products, they seem to compete in some places.

I mean, I think there's something core about this language interface that is a big deal, and so there's gonna be a lot of people doing things for that. And then there's other places - like, we offer a version of the API, they offer a version of the API - but that's a very friendly thing, and we work it out so that we all benefit and we're all happy. We jointly want as much usage of our models as possible, so we're super aligned there.

Friends?

Friends. Friends that text. We do a lot.

OK. All right. Mira, over the last number of months you've started to roll out a lot more personalization to the model, right? You can put in custom instructions, you get the voice now - and we're gonna bring down the screen to play one of the conversations I had with ChatGPT just yesterday. It's a slow screen - ChatGPT responds quicker than this. That's really embarrassing.

We're gonna get that better, though. We'll make it faster.

No, I'm saying ChatGPT is faster than this.

And I'm saying it's not often that people say ChatGPT is faster than X.

"I've got a big interview with OpenAI executives and I'm nervous. Any advice?" "Absolutely. Given your background as a technology columnist, you're well equipped for this. Just be yourself, do your homework on OpenAI's latest developments, and trust in your expertise. Remember, they'll likely appreciate thoughtful and well-researched questions. And hey, everyone gets nervous. Take a deep breath and rock it."

Am I rocking it?

You're rocking it.

All right, are we rocking it up here? OK. What's so striking about that - first of all, that's the voice of ChatGPT.

One of five.

Yeah. I mean, that is the voice. It sounds so human, so natural. It knows about me, because I've already put in custom instructions that I'm a tech journalist. It also knows I'm allergic to avocado - it's always putting that in there: don't eat avocado. I'm like, I'm not asking about avocado.

We got some work to do.

Is there a future - and is this what you're maybe trying to build here - where we have deep relationships with this type of bot?

It's going to be a significant relationship, right? Because we're building systems that are going to be everywhere: at your home, in your educational environment, in your work environment, and maybe when you're having fun. And so that's why it's actually so important to get it right.
And we have to be so careful about how we design this interaction, so that ultimately it's elevating, it's fun, it makes productivity better, and it enhances creativity. That's ultimately where we're trying to go. And as we increase the capabilities of the technology, we also want to make sure that, on the product side, we feel in control of these systems - in the sense that we can steer them to do the things we want them to do, and the output is reliable. That's very important. And of course we want it to be personalized. As it gets more information about your preferences - the things you like, the things you do - and as the capabilities of the models increase, along with other features like memory and so on, it will of course become more personalized. That's a goal. It will become more useful, more fun, more creative. And it's not just one system, right? You can have many such systems, personalized for specific domains and tasks.

That's a big responsibility, though, and you guys will be in control of people's friends - maybe even, it gets to being, people's lovers. How do you guys think about that control?

First of all, we're not gonna be the only player here; there are gonna be many people. So we get to put our nudge on the trajectory of this technological development, and we've got some opinions. But (a) we really think the decisions belong to humanity, society as a whole, whatever you wanna call it, and (b) we will be one of many actors building sophisticated systems here. So it's gonna be a society-wide discussion, and there are gonna be all of the normal forces: there will be competing products that offer different things, there will be different kinds of societal embraces and pushbacks, there will be regulatory stuff. It's gonna be the same complicated mess that any new technological birthing process goes through, and then pretty soon we'll all turn around and feel like we've had smart AI in our lives forever. That's the way of progress, and I think that's awesome.

I personally have deep misgivings about this vision of the future where everyone is, like, super close to AI friends - more so than human friends, or whatever. I personally don't want that. I accept that other people are gonna want that, and some people are gonna build that, and if that's what the world wants and what we decide makes sense, we're gonna get that. I personally think that personalization is great, personality is great, but it's important that it's not person-ness - that you at least know when you're talking to an AI and when you're not. We named it ChatGPT - there's a long story behind that - but we named it ChatGPT and not a person's name very intentionally, and we do a bunch of subtle things in the way you use it to make it clear that you're not talking to a person. And I think what's gonna happen is that, in the same way people have a lot of relationships with people, they're gonna keep doing that, and then there will also be these AIs in the world - but you kind of know they're just a different thing.

This is another question for you: what is the ideal device that we'll interact with these on?
And I'm wondering - I hear you and Jony Ive have been talking - did you bring something to show us?

I think there is something great to do, but I don't know what it is yet.

You must have some idea.

A lot of ideas. I mean, I'm interested in this topic. I think it is possible. I think most of the current thinking out there in the world about what we can do with this new technology, in terms of a new computing platform, is quite bad. And I do think every sufficiently big new technology enables some new computing platform. But lots of ideas, in the very nascent stage.

I guess the question for me is: is there something about a smartphone or earbuds or a laptop or a speaker that doesn't quite work right now?

No - smartphones are great. I have no interest in trying to go compete with a smartphone; it's a phenomenal thing at what it does. But I think what AI enables is so fundamentally new that it is possible - and maybe it won't happen; maybe for a bunch of reasons it just doesn't - but I think it's well worth the effort of talking and thinking about: what can we make, now that we have computers that can think - or computers that can understand, whatever you wanna call it - that was not possible before? And if the answer is nothing, I'd be a little bit disappointed.

Well, it sounds like it doesn't look like a humanoid robot, which is good.

Definitely not. I don't think that quite works.

OK, speaking of hardware: are you making your own chips?

You want an answer now? Are we making our own chips? We are trying to figure out what it is going to take to deliver at the scale that we think the world will demand, and at the model scale that we think the research can support. That might not require any custom hardware, and we have wonderful partnerships right now with people who are doing amazing work. So the default path would certainly be not to - but I would never rule it out.

Are there any good alternatives to NVIDIA out there?

NVIDIA certainly has something amazing. But, you know, the magic of capitalism is doing its thing, and a lot of other people are trying, and we'll see where it all shakes out.

We had Rene Haas here from Arm. I hear you guys have been talking. Friends?

Oh, we've said hello.

Not as close as Satya?

Not as close as Satya. OK, got it.

This is where we're getting to the hard-hitting stuff. My colleagues recently reported that you guys are looking at a valuation of $80 to $90 billion, and that you're expected to reach a billion dollars in revenue. Are you raising money?

No. Well, I mean, always - but not, like, this minute. Not right now.

There are people here with money. All right, let's talk.

We will need huge amounts of capital to complete our mission, and we have been extremely upfront about that. There has got to be something more interesting to talk about in our limited time here together than our future capital-raising plans - but we will need a lot more money. We don't know exactly how much, we don't know exactly how it's gonna be structured or what we're gonna do. But it shouldn't come as a surprise, because we have said this all the way through.
It's just a tremendously expensive endeavor.

Which part of the business, though, is growing the most right now? Mira, you can also jump in.

Definitely the product side. With the research team, it's very important to have density of talent - small teams that innovate quickly. On the product side, we're doing a lot of things: we're trying to push great uses of AI out there, both on the platform side and first-party, and work with customers.

And the revenue is coming mostly from the API? The revenue for the company?

Oh, I'd say both sides.

So my subscription to ChatGPT Plus - is that...? How many people here are actually subscribers to ChatGPT Plus? Thank you all very much. You guys should make a family plan. It's serious - I'm paying for two, and we'll talk about it.

It's not, like, super cheap to run. If we had a way to say, hey, we can give you way more for the 20 bucks or whatever, we would like to do that, and as we make the models more efficient we'll be able to offer more. But it's not for lack of us wanting more people to use it that we don't do things like a family plan for, like, $35 for two people.

That's the kind of haggling, you know - well, I gave you the sweatshirt.

So there's something we can do there.

OK, this is what we're really here for tonight - moving out a little bit into policy, and some of the fears. How do we go from the chat that we just heard, which told me to rock it, to one that, I don't know, can rock the world and end the world?

Well, I don't think we're gonna have a chatbot that ends the world.

But how do we go from this idea of - we've got simple chatbots - they're not simple, they're advanced, what you guys are doing - but how do we go from that idea to this fear that is now pervading everywhere?

If we are right about the trajectory things are going to stay on, and if we are right not only about the scaling of the GPTs but about new techniques we're interested in that could help generate new knowledge, then someone with access to a system like this can say: help me hack into this computer system, or help me design a new biological pathogen that's much worse than COVID, or any number of other things. It seems to us like it doesn't take much imagination to think about scenarios that deserve great caution. And again, we all come and do this because we're so excited about the tremendous upside and the incredibly positive impact, and I think it would be a moral failing not to go pursue that for humanity. But we've got to address the downsides that come along with this - and this happens with many other technologies. It doesn't mean you don't do it. It doesn't mean you say, this AI thing - we're gonna go full Dune and, you know, not have computers at all. But it means that you are thoughtful about the risks, you try to measure what the capabilities are, and you try to build your own technology in a way that mitigates those risks. And then, when you say, hey, here's a new safety technique, you make that available to others.
And as you guys are thinking about building in this direction, what are some of those specific safety risks you're looking at?

I mean, like Sam said, you've got the capabilities, and whenever you have such immense and great capability, there's always a downside. So we've got a fierce task ahead of us to figure out what these downsides are - to discover and understand them, and to build the tools to mitigate them. And it's not a single fix: you usually have to intervene everywhere, from the data, to the model, to the tools in the product, and of course policy - and then think about the entire regulatory and societal infrastructure that can keep up with the technologies we're building. Because ultimately what we want is to roll out these capabilities slowly, in a way that makes sense and allows society to adapt. The progress is incredibly rapid, and we want to allow for adaptation, and for the whole infrastructure that's needed for these technologies to be absorbed productively to exist and be there. So when you think about the concrete safety measures along the way, I would say number one is actually rolling out the technology and slowly making contact with reality: understanding how it affects certain use cases and industries, and actually dealing with the implications - whether that's regulatory, copyright, whatever the impact is - absorbing that, dealing with it, and moving on to more and more capabilities. I don't think that building the technology in a lab, in a vacuum, without contact with the real world and the friction you see with reality, is a good way to deploy it safely.

And this might be where you're going, but it seems like right now you're also policing yourselves, right? You're setting the bar. And Sam, that's what I was gonna ask you - I mean, you seem to spend more time in Washington than Joe Biden's dogs right now.

I've only been twice this year.

Really? That's - I think his dog's done, like, three days or so. Anyway. What is it specifically that you would rather the government and our regulators do, versus what you have to do?

First, the point Mira was making, I think, is really important: it's very difficult to make a technology safe in the lab. Society uses things in different ways and adapts in different ways, and I think the more we deploy AI - the more AI is used in the world - the safer AI gets, and the more we collectively decide, hey, here's a thing that is not an acceptable risk tolerance, and this other thing that people are worried about - that's totally OK. And we see this with many other technologies. Airplanes have gotten unbelievably safe, even though they didn't start that way, and it was careful, thoughtful engineering - understanding why, when something went wrong, it went wrong, and how to address it, and sharing best practices. I think we're gonna see, in all sorts of ways, that the things we worry about with AI in theory don't quite play out in practice. There's a ton of talk right now about deepfakes and the impact that's gonna have on society in all these different ways; I think that's an example of where we're thinking too much about the last generation of the problem - "AI will disrupt society in all of these ways."
But, you know, we all learn quickly - we're like, oh, that's a deepfake, or, that picture or video or audio might be a deepfake. Maybe the real problem - and this is speculation; it's hard to know in advance - is not the deepfake ability but the customized, one-on-one persuasion. That's where the influence happens. It's not the fake image; it's that these things have a subtle ability to influence people. And then we learn that that's the problem, and we adapt.

So, in terms of what we'd like to see from governments: I think we've been very mischaracterized here. We do think that international regulation is gonna be important for the most powerful models - nothing that exists today, nothing that will exist next year - but as we get towards a real superintelligence, as we get towards a system that is more capable than any human, I think it's very reasonable to say we need to treat that with caution and a coordinated approach. But we think what's happening with open source is great. We think startups need to be able to train their own models and deploy them into the world, and a regulatory response on that would be a disastrous mistake for this country, or others. So the message we're trying to get across is: you gotta embrace what's happening here; you gotta make sure we get the economic benefits and the societal benefits of it; but let's look forward at where we believe this might go, and let's not be caught flat-footed if that happens.

You mentioned deepfakes, and I wanna talk about AI-generated content, which is all over the internet now. Who do you guys think should be responsible for policing - or not policing, but detecting - some of this? Is this on the social media companies? Is this on OpenAI and all the other AI companies?

We're definitely responsible for the technologies that we develop and put out there, and misinformation is clearly a big issue as we create more and more capable models. We've been developing technologies to deal with the provenance of an image or a text and to detect output. But it's a bit complicated, because you want to give the user flexibility, and you also don't want them to feel monitored. So you have to consider the user, and you also have to consider people who are impacted by the system but are not users. These are quite nuanced issues that require a lot of interaction and input, not just from the users of the product but from society more broadly - and figuring out, with partners that bring on this technology and integrate it, what the best ways are to deal with these issues.

Because right now there's no way - or at least no tool from OpenAI - where I can put in an image or some text and ask: is this AI-generated?

For images, we actually have technology that's really good - almost 99% reliable - but we're still testing it. It's early, and we want to be sure that it's going to work. And even then, it's not just a technology problem: misinformation is such a nuanced and broad problem that you still have to be careful about how you roll it out and where you integrate it. But we're certainly working on it on the research side, and for images, at least, we have a very reliable tool in the early stages.
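OpenAI has not published how its provenance classifier works, so nothing below describes that tool. As a hedged illustration of one public approach to the text side of the problem - the "green list" statistical watermark of Kirchenbauer et al. (2023) - here is a toy detector: a watermarking generator would bias each token choice toward a pseudorandom half of the vocabulary seeded by the previous token, and detection simply recounts how often that bias shows up.

```python
import hashlib

# Toy statistical text watermark in the spirit of published "green list"
# schemes (Kirchenbauer et al., 2023) -- NOT OpenAI's unreleased tool.
# A watermarking sampler would favor tokens whose hash, seeded by the
# previous token, lands in the "green" half of the vocabulary; the
# detector simply recounts how often that happened.

def is_green(prev_token: str, token: str) -> bool:
    """Deterministically assign about half of all (prev, next) pairs to green."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    """Fraction of green tokens: ~0.5 for ordinary text, higher if the
    generator was biased toward the green list."""
    tokens = text.split()
    if len(tokens) < 2:
        return 0.5
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)

def looks_watermarked(text: str, threshold: float = 0.7) -> bool:
    """Flag text whose green fraction is improbably high for human writing."""
    return green_fraction(text) >= threshold

print(looks_watermarked("the quick brown fox jumps over the lazy dog"))
```

Ordinary text lands near a 0.5 green fraction by chance, so only a long passage scoring well above that is statistical evidence of the watermark - which is one reason, as Sam notes next, short texts are much harder to flag than images or long text.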
When might you release this? Is this something you plan to release?

Oh, yes - for both images and text. For text, we're trying to figure out what actually makes sense; for images, it's a bit more straightforward a problem. But in either case, we'll definitely test it out, because we don't have all the answers. We're building these technologies first, and we don't have all the answers, so often we'll experiment: we'll put something out, we'll get feedback - but we want to do it in a controlled way - and sometimes we'll take it back, make it better, and roll it out again.

I'll also add that this idea of watermarking content is not something everybody has the same opinion about - what's good and what's bad. There are a lot of people who really don't want their generated content watermarked, and that's understandable in many cases. Also, it's not gonna be super robust to everything: maybe you can do it for images, maybe for longer text, maybe not for short text. And over time there will be systems that don't put the watermarks in. And there will be people who say, you know, this is a tool, and it's up to the human user how you use the tool. This is why we want to engage in the conversation - we are willing to follow the collective wishes of society on this point. I don't think it's a black-and-white issue; people are still evolving their thoughts about what they're gonna want here, as they understand all the different ways we're gonna use these tools.

Also, to Sam's earlier point: it's not just about truthfulness - what's real and what's not real. Actually, I think that in the world we're marching towards, the bigger risk is really this individualized persuasion, and how to deal with that. That's going to be a very tricky problem.

I realize I have five minutes left, and we were gonna do some audience questions, so we can get to one or two. I can actually not see a thing out there, so I will ask one last question, and then we'll hopefully have time for one or two. You were here nine years ago - we touched on this as we were starting. What is your biggest fear about the future, and what is your biggest hope, with this technology?

I think the future is gonna be amazingly great. We wouldn't come work so hard on this if we didn't. I think this is one of the most significant inventions humanity has yet done, so I'm super excited to see it all play out. I think things can get so much better for people than they are right now, and I feel very hopeful about that. We covered a lot of the fears - again, we're clearly dealing with something very powerful that's gonna impact all of us in ways we can't perfectly foresee. But what a time to be alive, and to get to witness this.

You're not so fearful that - I was gonna ask this later, but I'll ask it now: do you have a bunker?

This is the question?

I'm gonna let that clock run. I'm not gonna pay attention to that.
But as we're thinking about fears, I'm just wondering if you have a bunker.

I would say that I have, like, structures - but I wouldn't say a bunker. None of this is gonna help if AGI goes wrong. It's a ridiculous question, to be honest.

OK. Good. Mira, what's your hope and fear?

I mean, the hope is definitely to push our civilization ahead by augmenting our collective intelligence. And the fears - we talked a lot about the fears - but, you know, we've got this opportunity right now. We've had summers and winters in AI and so on, and when we look back 10 years from now, I hope we got this right. I think there are many ways to mess it up - we've seen that with many technologies - so I hope we get it right.

All right, we've got time. Right here.

Hi - Pam Dillon, Preferabli, sensory consumer products AI. My question has to do with the inflection point. We are where we are with respect to AI and AGI. What is the inflection point? How do you define that moment where we go from where we are now to however you would choose to define AGI?

I think it's gonna be much more continuous than that. We're just on this beautiful exponential curve, and whenever you're on a curve like that, you look forward and it looks vertical; you look back and it looks horizontal. That's true at any point on the curve. So a year from now we'll be in a dramatically more impressive place, and a year ago we were in a dramatically less impressive place, but it'll be hard to point to a single moment. People will try to say it was AlphaGo that did it, it was GPT-3 that did it, it was GPT-4 that did it - but it's just brick by brick, one foot in front of the other, climbing this exponential curve.

Right here in the front.

Thank you. My name is Mariana Michael. I'm the chief information officer at the Port of Long Beach, but I'm also a computer scientist by training - a few decades ago; I'm older than you. I remember working with some of the early AI people, and I have a general question. I agree with you: this is one of the most significant innovations to happen. One of the things I've struggled with over the last 20 years in thinking about this is that we're about to change the nature of work. This is that significant, and I feel that people are not talking about it. There will be a transition period where a significant population, in the world and in this country, will not have had the types of discussions - or the sense of what's coming - that we have. Like you mentioned, society needs to be a part of it, and there's a large portion of society that's not even in this discussion. So the nature of work will change. There will be a time when people who have defined themselves by work for thousands of years will not have that, and we're hurtling towards it. What can we do to make sure we take that into account? Because when we talk about society, it's not as if everyone is together, ready to discuss this - some of the effects of the technologies we've brought into the world have actually made people separate from each other. How do we come up with some of those frameworks - not regulations - and voluntarily bring things about that will actually result in a better world, one that doesn't leave everybody else behind? Thank you.

OK. I'll give you my perspective.
I completely agree with you: it's the ultimate technology, one that could really increase inequality and make things so much worse for us as human beings and as a civilization - or it could be really amazing and bring along a lot of creativity and productivity and enhance us. And, you know, maybe a lot of people don't want to work eight hours a day or a hundred hours a week; maybe they want to work four hours a day and do a bunch of other things. I think it's certainly going to lead to a lot of disruption in the workforce, and we don't know exactly the scale of that, or the trajectory along the way - but that much is for sure. And one of the things that, in retrospect - it's not that we specifically planned it, but in retrospect I'm happy about it - is that with the release of ChatGPT we brought AI into the collective consciousness, and people are paying attention not just because they're reading about it in the press or being told about it: they can play with it, interact with it, and get a sense of the capabilities. So I think it's actually really important to bring these technologies into the world and make them as widely accessible as possible. Sam mentioned earlier that we're working really hard to make these models cheaper and faster, so they're accessible very broadly, and I think that's key: for people to actually interact with the technology and experience it, to visualize how it might change their way of life and their way of being, and to participate by providing product feedback. But institutions also need to actually prepare for these changes in the workforce and the economy.

I'll give you the last word.

Yes - I think it's a super important question. Every technological revolution affects the job market. Over human history, every - you hear different numbers for this - 100 years, 150 years, half the kinds of jobs go away or totally change, whatever. I'm not afraid of that at all. In fact, I think that's good; I think that's the way of progress, and we'll find new and better jobs. The thing that I think we do need to confront as a society is the speed at which this is going to happen. It seems like over two, maximum three - probably two - generations, society can adapt to almost any amount of job-market change. But a lot of people like their jobs, or they dislike change, and going to someone and saying, hey, the future will be better, I promise you, and society is gonna win, but you're gonna lose here - that doesn't work. That's not an easy message to get across. And although I tremendously believe that we're not gonna run out of things to do - people who want to work less, fine, they'll be able to work less - you know, probably many people here don't need to keep working, and we all do, because there's great satisfaction in expressing yourselves, in being useful, in contributing back to society. That's not going away; it's such an innate human desire, and evolution doesn't work that fast. Also, the ability to creatively express yourself, and to leave something - to add something back to the trajectory of the species - that's a wonderful part of the human experience.
So we're gonna keep finding things to do, and people in the future will probably think some of the things those people do are very silly and not real work - in the same way that a hunter-gatherer probably wouldn't think this is real work either. You know, we're just trying to entertain ourselves with some silly status game. That's fine with me; that's how it goes. But we are gonna have to really do something about this transition. It is not enough just to give people a universal basic income. People need to have agency, the ability to influence this; we need to jointly be architects of the future. And one of the reasons we feel so strongly about deploying this technology as we do - as you said, not everybody is in these discussions, but more and more are every year - is that by putting this out in people's hands, making it super widely available, and getting billions of people to use ChatGPT, not only do people have the opportunity to think about what's coming and participate in that conversation, but people use the tool to push the future forward. And that's really important to us.
Info
Channel: The Wall Street Journal
Views: 561,466
Keywords: openai, ai, ai news, sam altman, sam altman interview, wsj, wsj interview, openai ceo, mira murati, openai cto interview, openai interview, future of ai, chat gpt 4, chatgpt news, future gpt models, generative ai, large language model, microsoft openai, tech things wsj, sam altman congress, tech advancements, ai tools, joanna stern, sam altman leaves openai, altman out, open ai, sam altman forced out, altman leaves openai, mira murati ceo, interim ceo, openai news, techy
Id: byYlC2cagLw
Length: 49min 36sec (2976 seconds)
Published: Sat Oct 21 2023