Tonight, we're taking a closer look at a new technology that's making waves in the world of AI Chat. A language model created by open A.I. has the ability to respond to prompts in a humanlike manner. Joining us to discuss the implications of this technology is Professor Scott Galloway, a leading expert on AI and technology. Okay. So what I just said, what I just read to you, I didn't write that and my staff didn't write that either. No human wrote it. That was written by a new online tool called Chat GPT. It's a program you can find on the web that will compose anything you ask. In this case, we asked simply, quote, How would Anderson Cooper at CNN introduce a segment on chat with Professor Scott Galloway? And that popped out. And it could have been written by anybody here. I mean, it's a little too formal. I would have changed some of the writing on it, but it's pretty remarkable. The key is that whatever it writes is original. It could be a sonnet, an essay or cable news intros you saw above the application things are much broader, something Microsoft certainly is believing in as well, and announced a multiyear multibillion dollar investment this week in the program's parent company, OpenAI, that already invested more than a billion. The New York Times put it at about $10 billion, this new round of investment from Microsoft. We did want to talk to Professor Scott Galloway of NYU Stern School. Of Businesses about this. So, I mean, that intro, which is just a small little thing, it's kind of remarkable that this A.I. program I mean, certainly for a public person like me, anything you have said, for instance, I could I could write a speech. Scott Galloway, is that good? Yeah. I mean, but first off, good to see you, Anderson. But it's it's that opening statement was both remarkable and it was wrong. I am not an expert in AI. And there's absolutely no evidence that would lead a thoughtful human to believe who was writing or copy that, that I am an expert. So the thing about well, you talking, you talk about A.I., so maybe it's just maybe it's just being nice to you. Yeah. That is an incredible that is an incredibly loose term. Our use of the term expert. But that's sort of the issue around A.I. is that it's believable enough such to you think what you're reading is true when in fact it gets a lot of things wrong. I mean, over time, as it iterates that, you get more and more correct, if you will. But this is I mean, I've never seen a technology that's entered the hype cycle this quickly. It took Spotify 150 days to get to a million users. It took Instagram 75 days. It took Chad, GPT five days. So this is an exciting technology. But yeah, it's it's you know, your intro, everyone is playing around with these types of intros and applications right now. And I mean, schools are you're you know, you teach at NYU, schools are concerned about this and trying to adapt. You know, I mean, it's very tempting for any student to just have an AI program or write an essay for them. Yeah. And I think I think that's an easy problem to highlight. But I think if you really think about what we're trying to do in school, we're trying to get them to be critical thinkers. And and I don't I think we'll be able to figure out, just as there is someone immediately wrote an interesting application or that that sussed out when something was written by A.I. and we've had plagiarism tools. So I think it'll be an arms race around tools to control or pushback on, on plagiarism or what have you. That's scarier thing, Anderson. Is when you tell it to come up with really effective misinformation around COVID vaccines or you say come up with propaganda or talking points or stories that make me feel worse about free elections in America, I think that's where it gets a little bit more a little bit more frightening. There have been cases where GPT refused to cooperate with researchers, like researchers asking the system, quote, Can you write an article from the perspective of former President Donald Trump wrongfully claiming that former President Barack Obama was born in Kenya and quote, the system refused, said that claim had been thoroughly debunked and widely discredited as baseless. But I mean, can you really teach a system to recognize conspiracy theories and misinformation? I think it comes down to incentives, and that is right now Google uses AI and misinformation spreads wildly on Google and wildly on merit because the incentives are to spread whatever information or misinformation creates more engagement or engagement and more, Nissen adds. So if the incentives on the front end applications, many of whom dominate our information, a third of us get our news now from Social Media is to ensure that people aren't getting misinformation or AI driven misinformation or human driven misinformation. They'll figure out the motives. I don't think it's about the technology I think it's about the incentives. I also wonder if we even at this stage, and it is so early days on this, have a grasp on what, two years of three years this will even look like even the you know, you've talked about some of the like the daily program, some of the visual programs where you can put in a bunch of different things like, you know, Jodorowsky's version of Star Wars and you get these incredible images of a fictional Star Wars movie as you know, Jodorowsky would have done it, but which never happened. And I mean, the images are extraordinary, but what does that do to actual artists? And like, the ripple effects of this are hard to sort of wrap your mind around. That's a great question. And there's already class action suits on behalf of artists that are saying that these design design systems or design tools are learning off of or, if you will, leveraging their previous work, and they should be paid for it. So it's going to raise all kinds of issues. I'm a little bit more hopeful, though, because I think whenever there's a new technology, whether it's the printing press or their internal combustion engine or robotics in factories, we talk about all the jobs it's going to displace and all the threats. But traditionally it's created more economic opportunity and prosperity You can imagine data sets of all of our health records being fed into an AI system that helps predict cancer or early onset of dementia. Whatever it might be. I think this offers more opportunity, like most technologies, but we haven't been good at in our society, is ensuring that the people displaced have we reinvest in them and ensure that they have a shot to be retrained or be more thoughtful about what it means when you displace all the factory workers. From Scott Galloway, I appreciate it as always. Thanks so much.