'Should We Be Concerned?': Josh Hawley Asks OpenAI Head About AI's Effect On Elections

Video Statistics and Information

Captions
"...target the harms and avoid unintended consequences to the good."

Chairman: Thanks. Senator Hawley.

Sen. Hawley: Thank you again, Mr. Chairman, and thanks to the witnesses for being here. Mr. Altman, I think you grew up in St. Louis, if I'm not mistaken.

Mr. Altman: I did.

Sen. Hawley: It's great to see a Missourian here. It's a great place, Missouri. I want that noted, especially underlined in the record: Missouri is a great place. That is the takeaway from today's hearing. Maybe we should stop there, Mr. Chairman.

Let me ask you, Mr. Altman, I think I'll start with you, and I'll just preface by saying my questions here are an attempt to get my head around, and to ask all of you to help us get our heads around, what this generative AI, particularly the large language models, can do. I'm trying to understand its capacities and then its significance. I'm looking at a paper here entitled "Language Models Trained on Media Diets Can Predict Public Opinion," just posted about a month ago; the authors are Chu, Andreas, and Roy. The conclusion of this work, done at MIT and also at Google, is that large language models can indeed predict public opinion. They go through and model why this is the case, and they conclude ultimately that an AI system can predict human survey responses by adapting a pre-trained language model to subpopulation-specific media diets. In other words, you can feed the model a particular set of media inputs and it can, with remarkable accuracy, and the paper goes into this, predict what people's opinions will be.

I want to think about this in the context of elections. If these large language models can, even now, based on the information we put into them, quite accurately predict public opinion ahead of time, before you even ask the public these questions, what will happen when entities, whether corporate entities, governmental entities, campaigns, or foreign actors, take this survey information, these predictions about public opinion, and then fine-tune strategies to elicit certain responses, certain behavioral responses? We already know; this committee heard testimony, I think three years ago now, about the effect of something as prosaic, it now seems, as Google search: the effect this has on voters in an election, particularly undecided voters in the final days of an election who may try to get information from Google search, and what an enormous effect the ranking of the Google search results, the articles it returns, has on an undecided voter. This, of course, is orders of magnitude more powerful, far more significant, far more directive, if you like. So, Mr. Altman, maybe you can help me understand what some of the significance of this is. Should we be concerned about large language models that can predict survey opinion and then can help organizations and entities fine-tune strategies to elicit behaviors from voters? Should we be worried about this for our elections?

Mr. Altman: Yeah. Thank you, Senator Hawley, for the question. It's one of my areas of greatest concern: the more general ability of these models to manipulate, to persuade, to provide sort of one-on-one interactive disinformation. I think that's a broader version of what you're talking about. But given that we're going to face an election next year, and these models are getting better, I think this is a significant area of concern. There are a lot of policies that companies can voluntarily adopt, and I'm happy to talk about what we do there. I do think some regulation would be quite wise on this topic. Someone mentioned earlier, and it's something we really agree with, that people need to know if they're talking to an AI, if content that they're looking at might be generated or might not; I think making that clear is a great thing to do. I think we will also need rules and guidelines about what's expected in terms of disclosure from a company providing a model that could have these sorts of abilities that you talk about. So I'm nervous about it. I think people are able to adapt quite quickly. When Photoshop came onto the scene a long time ago, for a while people were really quite fooled by photoshopped images, and then pretty quickly developed an understanding that images might be photoshopped. This will be like that, but on steroids. And the interactivity, the ability to really model and predict humans well, as you talked about, I think is going to require a combination of companies doing the right thing, regulation, and public education.

Sen. Hawley: Professor Marcus, do you want to address this?

Prof. Marcus: Yes, I'd like to add two things. One is that in the appendix to my remarks I have two papers to make you even more concerned. One is in the Wall Street Journal just a couple of days ago, called "Help! My Political Beliefs Were Altered by a Chatbot." I think the scenario you raised was that we might basically observe people and use surveys to figure out what they're saying, but as Sam just acknowledged, the risk is actually worse: that the systems will directly, maybe not even intentionally, manipulate people. That was the thrust of the Wall Street Journal article, and it links to an article that I've also linked to, not yet published, not yet peer-reviewed, called "Interacting with Opinionated Language Models Changes Users' Views."

This comes back, ultimately, to data. One of the things that I'm most concerned about with GPT-4 is that we don't know what it's trained on. I guess Sam knows, but the rest of us do not, and what it is trained on has consequences for, essentially, the biases of the system. We could talk about that in technical terms, but how these systems might lead people along depends very heavily on what data is trained into them, so we need transparency about that, and we probably need scientists in there doing analysis in order to understand what the political influences of, for example, these systems might be. And it's not just about politics. It can be about health; it could be about anything. These systems absorb a lot of data, and then what they say reflects that data, and they're going to do it differently depending on what's in that data. So it makes a difference if they're trained on the Wall Street Journal as opposed to the New York Times, or Reddit. Actually, they're largely trained on all of this stuff, but we don't really understand the composition of that. So we have this issue of potential manipulation, and it's even more complex than that, because it's subtle manipulation. People may not be aware of what's going on; that was the point of both the Wall Street Journal article and the other article that I called to your attention.

Sen. Hawley: Let me ask you about AI systems trained on personal data, the kind of data that, for instance, the social media companies, the major platforms, Google, Meta, etc., collect on all of us routinely. We've had many a chat about this in this committee over many a year now: the massive amounts of personal data that the companies have on each one of us. Consider an AI system trained on that individual data, one that knows each of us better than ourselves and also knows the billions of data points about human behavior and human language interaction generally. Can't we foresee an AI system that is extraordinarily good at determining what will grab human attention, and what will keep an individual's attention? The war for attention, the war for clicks, that is currently going on on all of these platforms is how they make their money, and I'm just imagining these AI models supercharging that war for attention, such that we now have technology that will allow individual targeting of a kind we have never even imagined before, where the AI will know exactly what Sam Altman finds attention-grabbing, will know exactly what Josh Hawley finds attention-grabbing, and will be able to grab our attention and then elicit responses from us in a way that we have heretofore not even been able to imagine. Should we be concerned about that, for its corporate applications, for the monetary applications, for the manipulation that could come from that, Mr. Altman?

Mr. Altman: Yes, we should be concerned about that. To be clear, OpenAI does not have an ad-based business model, so we're not trying to build up these profiles of our users, and we're not trying to get them to use it more. Actually, we'd love it if they used it less, because we don't have enough GPUs. But I think other companies are already, and certainly will in the future, use AI models to create very good ad predictions of what a user will like. I think it's already happening in many ways.

Sen. Hawley: Mr. Marcus, anything you want to add?

Prof. Marcus: Yes, and perhaps Ms. Montgomery will want to as well. Hyper-targeting of advertising is definitely going to come. I agree that that's not been OpenAI's business model. Of course, now they're working for Microsoft, and I don't know what's in Microsoft's thoughts. But we will definitely see it. Maybe it will be with open-source language models, I don't know, but the technology is, let's say, partway there to being able to do that, and we'll certainly get there.

Ms. Montgomery: We're an enterprise technology company, not consumer-focused, so the space isn't one that we necessarily operate in, but these issues are hugely important. It's why we've been out ahead in developing the technology that will help ensure that you can do things like produce a fact sheet that has the ingredients of what your data is trained on, data sheets, model cards, all those types of things, and calling for, as I've mentioned today, transparency, so you know what the algorithm was trained on, and then you also know and can manage and monitor, continuously over the life cycle of an AI model, the behavior and the performance of that model.

Chairman: Senator Durbin.

Sen. Durbin: Thank you. I think with
Info
Channel: Forbes Breaking News
Views: 140,046
Id: hu3Iu2WzpBE
Length: 10min 2sec (602 seconds)
Published: Tue May 16 2023