After ChatGPT CEO says he’s ‘a little scared’ of AI, should you be? | ABCNL

Video Statistics and Information

Captions
[Music]

Anchor: Welcome back. The CEO of the company behind ChatGPT says artificial intelligence will reshape society as we know it. The chatbot's creators say they believe it can help solve problems like climate change and transform everything from health care to education, but there are downsides. Chief business correspondent Rebecca Jarvis got into all of that with OpenAI CEO Sam Altman in an ABC News exclusive interview.

Rebecca Jarvis: An exclusive look inside OpenAI, the company behind the groundbreaking artificial intelligence chatbot everyone is talking about, ChatGPT. I spoke to OpenAI CEO Sam Altman inside their San Francisco headquarters on the day they released their latest version, GPT-4, which can write essays and speeches, offer logical reasoning, analyze pictures, and even take tests, outperforming most humans: scoring in the 90th percentile on the Uniform Bar Exam and 700 on the SAT math portion. What changes because of artificial intelligence?

Sam Altman: Part of the exciting thing here is we get continually surprised by the creative power of all of society.

Jarvis: That word "surprise," though, is both exhilarating as well as terrifying for people.

Altman: I think people should be happy that we're a little bit scared of this.

Jarvis: You're a little bit scared?

Altman: A little bit.

Jarvis: You personally?

Altman: If I said I were not, you should either not trust me or be very unhappy I'm in this job.

Jarvis: Chief technology officer Mira Murati showed us how it can help with your taxes. We input information for a typical family of three. So you're copying tax law, just multiple pages of tax law? Within seconds, the standard deduction: GPT-4 says it's twenty-four thousand dollars, which was correct. It even identified a picture of a dog and was able to recognize the dog was probably hungry. Watch as we take a photo of what's inside this refrigerator; the technology analyzes what's there.

GPT-4 (read aloud): I see that you have some bread, some mozzarella cheese, tomatoes, and mayonnaise. You can make a simple grilled cheese sandwich with these ingredients. You can make a strawberry toast by spreading the raspberry fruit spread on the bread and topping it with sliced strawberries.

Jarvis: Perfect, providing a recipe. But is this also a recipe for something else? On the one hand there's all of this potential for good; on the other hand there's a huge number of unknowns that could turn out very badly for society. How confident are you that what you've built won't lead to those outcomes?

Altman: Well, we'll adapt it.

Jarvis: You'll adapt it as negative things occur?

Altman: For sure, for sure. The current systems are still weak relative to what we expect to create, so putting these systems out now, while the stakes are fairly low, and learning as much as we can, I think, is how we avoid the more dangerous scenarios.

Jarvis: Is there a kill switch?

Altman: Yeah, any engineer can just say, we're going to disable this for now.

Jarvis: The model itself, can it take the place of that human? Could it become more powerful than that human?

Altman: It waits until someone gives it an input. This is a tool that is very much in human control.

Anchor: All right, thanks to Rebecca Jarvis for that, a very thought-provoking interview and subject. Let's bring in Dr. Sarah Kreps. She is a professor at Cornell and also a scholar at West Point's Modern War Institute. Sarah, what a pleasure to have you. Let's start with the big headline from Rebecca's interview: Sam Altman, the CEO of OpenAI, saying that he's a little bit scared about the potential implications of what he created. What do you think? Should we be scared too?

Dr. Sarah Kreps: Well, you know, I think fundamentally he must be an AI optimist or he wouldn't be in the business that he is, but at the same time I think it's pretty typical for people who are experts in their field to be attuned to the risks. So I actually think he's playing this exactly right, and I think he's right to be excited about the opportunity AI has to really transform society.

Anchor: Well, that transformative power is what has people concerned, and even him scared. I'm reminded of what Albert Einstein said when he heard that the atomic bomb was used; he had pressed Franklin Roosevelt to do it. He said, "Now everything has changed except human nature." Except human nature. And I just wonder if we're in another situation where vast power has been turned over to a system that maybe we can't control.

Kreps: I think the nuclear parallel is really apt here, in that the people who were involved in the Manhattan Project had a similar ambivalence about what they were doing: they thought it was important, but they also anticipated that there would be risks. These scientists arrived at those perils and risks at different times, but they were all pretty aware of them, and I think there's a lot of that in the AI community right now. And actually, I'm going to borrow a phrase that President Putin used, which is that whoever rules AI is going to rule the world. Right now there are a lot of short-term and longer-term economic incentives to get this done, so it is really hard to put this back in the bottle, not least because this is just code, and code is easy to proliferate and is growing exponentially fast. That, again, is where it's similar to earlier scientific pursuits: it is happening, and it almost feels inexorable. How do you put it back? You can't, and now the best thing you can do is figure out what the appropriate guardrails are.

Anchor: This technology is so new. How can policymakers even know how to make rules and regulations concerning AI right now, and what specific factors do you think they should be looking at as we go forward, probably needing those?

Kreps: Right, right. And I do think there's actually a risk in leaning too far into preempting and regulating what is a very new technology. We've had natural language models for some time, but because there are so many beneficial uses of AI, I think a lot of what will happen will be domain specific. The journal Nature, one of the biggest science journals, has released its own protocols for the use of GPT in publishing in that particular journal. At our university here, we have a committee trying to figure out appropriate use of GPT in the classroom, because we're aware of that ambivalence: the pros and the cons, the uses and misuses. We want to leverage the way in which GPT and these language models can really be a handmaid to creativity, but we also want to be aware of how we can put in place, again, these guardrails that can protect against the potential misuses.

Anchor: Well, that said, the tech industry isn't usually receptive to regulation anyway, right? So is this truly the Wild West right now when it comes to this type of technology, and, as you say, we just kind of have to see where this goes from here?

Kreps: Right. And I think one of the challenges, again, is that this is code, and we know that all the big players are doing this. OpenAI has really clearly had a lead, but all of the big players are doing this, so it really is a difficult nut to crack when it comes to trying to regulate it. One of the misuses that, from a government perspective, we need to wrap our heads around is the way in which, let's say, foreign governments, who we know have been intervening in elections in the past and weren't so good at it because their English wasn't great, now have these tools. That means thinking about how these actors might use and misuse the technology, and what to look for. So I think a new form of digital literacy can be really helpful here: identifying the hallmarks of AI-generated text, which are longer, more complex sentences and more complicated language that a lot of people wouldn't naturally use. That's the evolution we need to catch up to: understanding that these technologies are here and developing a new vernacular and digital literacy around them.

Anchor: Absolutely. And the question with language is, can it be generated, obviously it can, just by hoovering up all the language that's already online and then, in some ways, pretending that you're speaking? Or is there something in the software of the human person, him and herself, that makes it different? I guess we're going to find out. Sarah Kreps, we'll be back to you on it, I know. Thanks very much.

Kreps: Thank you.

George Stephanopoulos: Hi everyone, George Stephanopoulos here. Thanks for checking out the ABC News YouTube channel. If you'd like to get more videos, show highlights, and watch live event coverage, click on the right over here to subscribe to our channel, and don't forget to download the ABC News app for breaking news alerts. Thanks for watching.
Info
Channel: ABC News
Views: 17,246
Keywords: After, Chat, Cornell, GPT, abc, abcnl, artificial, intelligence, news, p_cmsid=2494279, p_vid=news-97949320
Id: SGctA6my1Fc
Length: 9min 36sec (576 seconds)
Published: Fri Mar 17 2023