How AI Is Enabling Racism, Sexism: Algorithmic Justice League’s Joy Buolamwini on Meeting with Biden

Video Statistics and Information

Captions
AMY GOODMAN: This is Democracy Now!, democracynow.org, The War and Peace Report. I'm Amy Goodman, with Nermeen Shaikh.

Amid the boom in artificial intelligence and growing awareness of its potential risks, President Biden met Tuesday with critics of the technology. He spoke before the closed-door meeting in San Francisco.

PRESIDENT JOE BIDEN: Today I want to hear directly from the experts, and these are some of the world's leading experts on this issue and the intersection of technology and society, who can provide a range of perspectives for us on AI's enormous promise and its risks.

AMY GOODMAN: For years, groups like the Algorithmic Justice League have raised the alarm about how AI and algorithms can spread racist and sexist biases. The group's founder, Dr. Joy Buolamwini, was among those who met with Biden Tuesday. She's going to join us in a minute. The group recently honored Robert Williams, who is African American and who, in 2020, became the first known case of police wrongfully arresting someone in the United States based on a false facial recognition hit, when Detroit police wrongfully arrested him at his home as his wife and two young daughters watched. He was held overnight in jail and interrogated the next day. Police told him, quote, "The computer must have gotten it wrong," and finally released him. This is part of the acceptance speech Robert Williams gave when he received the Gender Shades Justice Award.

ROBERT WILLIAMS: I just want to say to anybody who's listening at this point, I guess I just have the opportunity to let my story be a 411 to the rest of the world: if it happened to me, it could happen to you, right? I was just a regular guy. I was at work and was trying to get home, and I got arrested for something that had nothing to do with me, and I wasn't even in the vicinity of the crime when it happened. So, I guess, the way the technology is set up, everybody with a driver's license or a state ID is essentially in a photo lineup.

AMY GOODMAN: For more, we're joined in Boston by Dr. Joy Buolamwini, founder of the Algorithmic Justice League, just back from that meeting with President Biden on artificial intelligence in San Francisco. She's also featured in the documentary Coded Bias. Dr. Joy Buolamwini, welcome back to Democracy Now! You posted on Twitter before meeting with President Biden that you were looking forward to the meeting to talk about the dangers of AI and what we can do to prevent harms already impacting everyday people: people seeking mortgages and housing, in need of medical treatment, encountering workplace surveillance, and more. I assume, in that "and more," you're talking about issues like this kind of false facial recognition based on AI. Can you talk about the Williams case, and so much more of what you discussed with President Biden?

DR. JOY BUOLAMWINI: Absolutely. Thank you so much for having me. I am actually hopeful after this roundtable with President Biden, because we started the conversation really focused not just on what AI can do, which we've heard a lot about, but centering how it's impacting real people, like we saw with the Robert Williams case. What we saw there was a case of AI-powered biometrics leading to a wrongful arrest. The research that I've done, and that many others have done as well, has documented racial bias, gender bias and other types of biases in facial recognition systems. And when these systems are used in the real world, like we saw with the Robert Williams case, you actually have consequences. For Robert to be arrested in front of his wife and in front of his two young daughters, you cannot erase those sorts of experiences, and then to be sleeping on a cold slab for 30 hours with just a filthy faucet as a water source. These are the types of real-world harms that are concerning. And it's also not just about race. We have examples of hiring algorithms where sexist hiring practices were then automated in a way that appears to be neutral. You have people being denied life-saving healthcare because of biased and inaccurate algorithms. So I was very excited to see the Biden administration putting the real-world harms at the center of this conversation.

NERMEEN SHAIKH: Joy, if you could just explain: how is it that AI has these kinds of biases? Because, of course, AI can only reflect what already exists; it's not coming up with something itself. Who are the programmers? How is it that these biases, as you say, not just on race, although particularly on race, but also gender and other issues, are embedded within AI systems?

DR. JOY BUOLAMWINI: Well, the AI systems that we are seeing on the rise are increasingly pattern recognition systems. So, to teach a machine how to recognize a face, or how to produce human-like text, like we're seeing with some of the large language models, what you have are large datasets of examples: here's a face, here's a sentence, here's a whole book. And based on that, you have these systems that can begin to learn different patterns. But if the data itself is biased, or if it contains stereotypes, or if it has toxic content, what you're going to learn is the good, the bad and the ugly as well, when it comes to large language models, for example. And then, on the facial recognition side, if you have the underrepresentation of certain populations, whether people with darker skin or children (for good reason: we don't want their faces in those datasets), then when these systems are used in the real world, you have several risks. One is misidentification, right, what we saw with the Robert Williams case. But even if these systems were perfectly accurate, we now have to ask: do we want the face to be the last frontier of privacy? Because we're then creating a surveillance state apparatus.

AMY GOODMAN: Well, Joy, let's go to a clip from Coded Bias, the documentary film that you're featured in. This is Safiya Umoja Noble, the author of the book Algorithms of Oppression.

SAFIYA UMOJA NOBLE: The way we know about algorithmic impact is by looking at the outcomes. For example, when Americans are bet against and selected and optimized for failure, so it's like looking for a particular profile of people who can get a subprime mortgage, and kind of betting against their failure, and then foreclosing on them and wiping out their wealth. That was an algorithmic game that came out of Wall Street. During the mortgage crisis, you had the largest wipeout of Black wealth in the history of the United States. Just like that. This is what I mean by algorithmic oppression: the tyranny of these types of practices of discrimination has just become opaque.

AMY GOODMAN: So, that's a clip from Coded Bias, a documentary by Shalini Kantayya, which you're featured in. Your comments, Joy?

DR. JOY BUOLAMWINI: I think this is a great clip, because it's showing that while we have all of these conversations about the possibilities of AI, the reality shows the perils. And what's even more concerning to me right now is that, in this rush to adopt algorithmic systems, there is a narrative that says we want to have trustworthy AI, or we have to have responsible AI. But so many of the popular AI systems that have been built have been built on a foundation of oppression, or a foundation of unconsented data, some would say stolen data. And something that was concerning to me at the roundtable was that there was an expressed excitement about using AI for education, but when you looked at the models and the AI systems that were being integrated, these are known models where the companies aren't sharing the training data. Those who have labeled the toxic aspects of that data have spoken out about the exploitative working conditions they face, being paid, you know, one to two dollars an hour for doing really traumatic work. So we can't build responsible AI, or expect people to trust in AI systems, when we have all of these terrible practices undergirding these foundation models. The foundations themselves need to be excavated, and we need to start over.

AMY GOODMAN: Dr. Buolamwini, can you talk about the project the Algorithmic Justice League just launched, the TSA checkpoint scorecard at fly.ajl.org, and how people can share their experiences dealing with a new facial recognition program that's being used at several airports across the country?

DR. JOY BUOLAMWINI: Absolutely. So, the TSA is starting to roll out facial recognition at domestic checkpoints; they're now at 25 airports. And this is concerning, because the United States needs to start leading on biometric rights. Just last week, EU lawmakers pushed forward the EU AI Act, which explicitly bans the live use of biometric technologies like facial recognition in public spaces. We are flying in exactly the opposite direction, where people don't even know that they have a choice to opt out. So, what we're doing with the Algorithmic Justice League is we've released the scorecard. If you have traveled this summer, or if you're traveling this summer, please share your experience, so we understand: Did you give consent? What was your experience if you tried to opt out? Did the technology work for you? I also think this is a great opportunity for the U.S. government to put into place the Blueprint for an AI Bill of Rights. This blueprint came out last year, and it highlights so many of the issues that we've been talking about: the need for notice, and we need consent as well, but also protections from algorithmic discrimination. We need to know that these systems are safe and effective. We need data privacy, so that you can't just snatch people's faces. And we need human fallbacks as well. So I think it's a great opportunity for the Biden administration to make true on its promise to make what was put in the blueprint binding through the Office of Management and Budget, and then to push to make the blueprint federal law.

AMY GOODMAN: Can I ask you: as it becomes harder to travel, with longer and longer and longer lines? The other day I was at the airport, on an endless line, and a guy comes up and says, "Hey, you want to do CLEAR? I'll get your information, and then I'll walk you right to the front of the line." It's very hard to say no to that, right, when you're missing your plane. But can you explain what these iris scans are used for, and also fingerprints?

DR. JOY BUOLAMWINI: Yeah. So, when you have systems like CLEAR, I want to make a distinction between you electing to use biometrics, when you sign up for CLEAR or TSA PreCheck, where they might be looking at biometrics like your fingerprint, your iris or your face. This is different from what the TSA has stated in their roadmap, which is to go from pilot to requirement, so that the default option when you go to an airport is that you have to submit your face. This is what's in their roadmap. So, agency is absolutely important. The right to refusal is absolutely important. And you just pointed out a dynamic that so many people face: you just made it to the airport, your flight's about to go, and you're given, I don't know, the red pill or the blue pill, and you make a snap decision. And I'm really cautious about these snap decisions, because I worry about what I call convenient shackles. For the few seconds, or maybe minutes, that you might save, you have now given up very valuable face data. And we already have examples of data breaches of travelers' face data held by the government, so the privacy risks we're talking about here aren't even hypothetical. And the roadmap that the TSA has laid out also talks about then using that face data potentially with other government agencies. So we have to understand that it doesn't just stop at the checkpoint. This is a pilot and a starting point that is going to move us toward more mass surveillance if we don't resist now, which is why we launched fly.ajl.org, so that you can let your voice be heard and your experiences be documented. And it's also not the case that, if your face has already been scanned, there's nothing that can be done. Meta, Facebook, deleted over 1 billion faceprints after a $650 million settlement for violating Illinois' Biometric Information Privacy Act. This is to say, laws do make a difference. And again, I do think the U.S. has an opportunity to lead when it comes to biometric protections, but we are going in the opposite direction right now. So I would call for the federal government to halt TSA's pilot of domestic facial recognition technology at checkpoints. And if you've been subjected to it already, let us hear your story at fly.ajl.org.

AMY GOODMAN: And, Joy, finally, we just have 30 seconds, but if a Bill of Rights is put in place with the stipulations that you outlined, do you see any benefits of artificial intelligence?

DR. JOY BUOLAMWINI: Oh, absolutely. If we can have ethical AI systems that do help us, for example, with medical breakthroughs, I think that is something worth developing. So I am not opposed to beneficial uses of AI, but we don't have to build it in a harmful way. We can enjoy the promise while mitigating the perils.

AMY GOODMAN: We want to thank you so much for being with us: computer scientist, coding expert, founder of the Algorithmic Justice League. To see all our interviews on artificial intelligence, you can go to democracynow.org.

Democracy Now! is produced with Renée Feltz, Mike Burke, Deena Guzder, Messiah Rhodes, María Taracena, Tami Woronoff, Charina Nadura, Sam Alcoff, Tey-Marie Astudillo, John Hamilton, Robby Karran, Hany Massoud and Sonyi Lopez. Our executive director is Julie Crosby. I'm Amy Goodman, with Nermeen Shaikh.
Info
Channel: Democracy Now!
Views: 213,965
Keywords: Democracy Now, Amy Goodman, News, Politics, democracynow, Independent Media, Breaking News, World News
Id: ckb9DFTOF5s
Length: 16min 43sec (1003 seconds)
Published: Thu Jun 22 2023