AI Expert: We Urgently Need Ethical Guidelines & Safeguards to Limit Risk of Artificial Intelligence

Video Statistics and Information

Captions
This is Democracy Now!, democracynow.org, The War and Peace Report. I'm Amy Goodman, with Nermeen Shaikh. As more of the public becomes aware of artificial intelligence, or AI, the Senate held a hearing Tuesday on how to regulate it. Senate Judiciary subcommittee chair Richard Blumenthal opened the hearing with an AI-generated recording of his own voice, what some call a deepfake.

[Applause] ...for some introductory remarks. [Music]

"Too often we have seen what happens when technology outpaces regulation: the unbridled exploitation of personal data, the proliferation of disinformation, and the deepening of societal inequalities. We have seen how algorithmic biases can perpetuate discrimination and prejudice, and how the lack of transparency can undermine public trust. This is not the future we want."

If you were listening from home, you might have thought that voice was mine and the words from me. But in fact, that voice was not mine, the words were not mine, and the audio was AI voice-cloning software trained on my floor speeches. The remarks were written by ChatGPT when it was asked how I would open this hearing.

Google, Microsoft and OpenAI, the startup behind ChatGPT, are some of the companies creating increasingly powerful artificial intelligence technology. OpenAI CEO Sam Altman testified at Tuesday's hearing and warned about its dangers.

"I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening. But we try to be very clear-eyed about what the downside case is and the work that we have to do to mitigate that. One of my areas of greatest concern is the more general ability of these models to manipulate and to persuade, and to provide sort of one-on-one interactive disinformation. We are quite concerned about the impact this can have on elections. I think this is an area where hopefully the entire industry and the government can work together quickly."

This all comes as the United States has lagged on regulating AI compared to the European Union and China. For more, we're joined by Marc Rotenberg, executive director of the Center for AI and Digital Policy. Marc, welcome to Democracy Now! It's great to have you with us. Now, it's very significant that you have this AI CEO saying, "Please regulate us." But in fact, isn't he doing it because he wants corporations to be involved with the regulation? Talk about that, and also just what AI is, for people who don't understand what this is all about.

Well, Amy, first of all, thank you for having me on the program. And secondly, just to take a step back: Sam Altman received a lot of attention this week when he testified in Congress, but I think it's very important to make clear at the beginning of our discussion that civil society organizations, experts in AI and technology developers have been saying for many, many years that there's a problem here. And I think it's vitally important at this point in the policy discussion that we recognize that these views have been expressed by people like Timnit Gebru and Stuart Russell and Margaret Mitchell and the president of my own organization, Merve Hickok, who testified in early March before the House Oversight Committee that we simply don't have the safeguards in place. We don't have the legal rules. We don't have the expertise in government for the rapid technological change that's now taking place. So while we welcome Mr. Altman's support for what we hope will be strong legislation, we do not think he should be the center of attention in this political discussion.

Now, to your point, what is AI about, and why is there so much focus? Part of this is about a very rapid change taking place in the technology and in the tech industry that many people simply didn't see happening as it did. We've known about problems with AI for many, many years. We have automated decisions today, widely deployed across our country, that make decisions about people's opportunities for education, for credit, for employment, for housing, for probation, even for entering the country. All of this is being done by automated systems that increasingly rely on statistical techniques, and these statistical techniques make decisions about people that are oftentimes opaque and can't be proven. So you actually have a situation where big federal agencies and big companies make determinations, and if you went back and asked, "Well, why was I denied that loan?" or "Why is my visa application taking so many years?" the organizations themselves don't have good answers.

That was reflected in part in Altman's testimony this week. He is on the front lines of a new AI technique that's referred to generally as generative AI. It produces synthetic information. And if I could make a clarification to your opening about Senator Blumenthal's remarks: those actually were not a recording, which is a very familiar term for us; it's what we think of when we hear someone's voice being played back. That was actually synthetically generated from Senator Blumenthal's prior statements, and that's where we see the connection to such concepts as deepfakes. This doesn't exist in reality but for the fact that an AI system created it. We have an enormous challenge at this moment to try to regulate this new type of AI, as well as the pre-existing systems that are making decisions about people, oftentimes embedding bias, replicating a lot of the social discrimination in our physical world, now being carried forward in these data sets to our digital world. And we need the legislation that will establish the necessary guardrails.

Marc Rotenberg, can you elaborate on the fact that so many artificial intelligence researchers themselves are worried about what artificial intelligence can lead to? A recent survey showed that half, 50 percent, of AI researchers give AI at least a 10 percent chance of causing human extinction. Could you talk about that?

Absolutely. And actually, I was one of the people who signed that letter that was circulated earlier this year. It was a controversial letter, by the way, because it tended to focus on the long-term existential risks, and it included such concerns as losing control over these systems that are now being developed. There's another group in the AI community that, I think, very rightly said about the existential concerns that we also need to focus on the immediate concerns. I spoke, for example, a moment ago about embedded bias and replicating discrimination. That's happening right now, and that's a problem that needs to be addressed right now. Now, my own view, which is not necessarily the view of everyone else, is that both of these groups are sending powerful warnings about where we are. I do believe that the groups saying we have a risk of a loss of control, which include many eminent computer scientists who have won the Turing Award, which is like the Nobel Prize for computer science, are right. I think there's a real risk of loss of control. But I also agree with the people at the AI Now Institute and the Distributed AI Research Institute that we have to solve the problems with the systems that are already deployed.

And this is also the reason that I was, frankly, very happy about the Senate hearing this week. It was a very good hearing. There were very good discussions. I felt that the members of the committee came well prepared. They asked good questions. There was a lot of discussion about concrete proposals: transparency obligations, privacy safeguards, limits on compute and AI capability. And I very much supported what Senator Blumenthal said at the outset. He said we need to build in rules for transparency, for accountability, and we need to establish some limits on use. I thought that was an excellent place to start a discussion in the United States about how to establish safeguards for the use of artificial intelligence.

And Marc Rotenberg, what are the benefits that people talk about with respect to artificial intelligence? And given the rate, as you said, at which it's spreading, these rapid technological advances, is there any way to arrest it at this point?

Well, there's no question that AI, broadly speaking, and it is of course a broad term, and even the experts don't agree precisely on what we're referring to, but let's say AI, broadly speaking, is contributing to innovation. In the medical field, for example, there have been big breakthroughs with protein folding. It's contributing to efficiency in the administration of organizations, to better ways to identify safety flaws in products and transportation. I think there's no dispute about that. It's a little bit like talking about fire or electricity: it's one of these foundational resources in the digital age that is widely deployed. But as with fire or electricity, we understand that to obtain the benefits, you also need to put in some safeguards and some limits. And you see, we're actually in a moment right now where AI techniques are being broadly deployed with hardly any safeguards or limits, and that's why so many people in the AI community are worried. It's not that they don't see the benefits; it's that they see the risks if we continue down this path.
Info
Channel: Democracy Now!
Views: 112,855
Keywords: Democracy Now, Amy Goodman, News, Politics, democracynow, Independent Media, Breaking News, World News
Id: YHwP0yYciF8
Length: 11min 29sec (689 seconds)
Published: Thu May 18 2023