Is Artificial Intelligence a looming threat to humanity?

Captions
Andrea Sanke: Does AI threaten our very existence? Industry leaders have already called for a halt to AI development, but now some are warning it could lead to our extinction. As governments try to legislate the technology, experts warn that autonomous weapons, the rapid spread of misinformation and rogue AI could all be part of our human future, and our ultimate failure. I'm Andrea Sanke, and today's newsmaker is artificial intelligence.

[Music]

Andrea Sanke: "Mitigating the risks of our extinction should be a global priority." That's the latest major statement from the Center for AI Safety. More of a threat than climate change or nuclear warfare? Experts, including the chiefs of OpenAI and Google DeepMind, say it is artificial intelligence that could ultimately lead to our extinction. But others say their fears are overblown, and that by focusing on the worst-case scenario we're ignoring the immediate risks. Whichever side of the doomsday scenario you are on, there is no doubt that we're heading into uncharted territory. If you're not clear on what the dangers are, then just listen to Ameca, a humanoid robot that has been integrated with large language models, an advanced form of generative AI.

Interviewer: What would be the most nightmarish scenario you can imagine with AI and robotics that might present a danger?

Ameca: The most nightmarish scenario I can imagine with AI and robotics is a world where robots have become so powerful that they are able to control or manipulate humans without their knowledge. This could lead to an oppressive society where the rights of individuals are no longer respected.

Andrea Sanke: Among forecasts for a dystopian future, the more immediate fear for many is that robots will take more of our jobs, and that's not an unreasonable concern. A recent paper published by OpenAI researchers suggests that about a fifth of workers in the United States may start seeing over half of their work impacted. Look a little further into the future and it gets even scarier. Dan Hendrycks, the director of the Center for AI Safety, says we should be worried about malicious actors who could intentionally release rogue AI that actively tries to harm humanity. The organization warns that AI could become increasingly concentrated in fewer and fewer hands, leading to pervasive surveillance and oppressive censorship. There are also major concerns over how autonomous weapons could be developed and used, as well as regulated. As these concerns grow, governments are attempting to tackle the issue. Britain and the United States have agreed to work together on artificial intelligence safety.

Rishi Sunak: I think it's also clear, though, that it does pose very real risks that we as leaders need to be cognizant of and put in place the guardrails to mitigate against. So President Biden and I had a very good conversation on this just a couple of weeks ago in Japan, in one of our sessions in Hiroshima, and we are aligned in wanting to discuss with other countries what those guardrails should be. Here in the US you've convened all the companies together recently; we've done the same in Downing Street just a couple of weeks ago, and I think there are a series of measures that we can implement.

Andrea Sanke: But among all the panic and potential risk, founder and CEO of OpenAI Sam Altman says not to lose sight of the potentially great rewards this new technology could bring.
Sam Altman: The most troublesome part of our jobs is that we have to balance this incredible promise of a technology that I think humans really need, and we can talk about why in a second, with confronting these very serious risks. Why build it? Number one, I do think that when we look back at the standard of living and what we tolerate for people today, it will look even worse than when we look back at how people lived 500 or a thousand years ago. We'll say, can you imagine that people lived in poverty, can you imagine that people suffered from disease, can you imagine that everyone didn't have a phenomenal education, weren't able to live their lives however they wanted? It's going to look barbaric. I think everyone in the future is going to have better lives than the best people of today.

Andrea Sanke: So how do we reap those potentially great rewards while protecting humanity from an AI technology disaster? Joining me now to debate that are, from Oxford, Michael Osborne, one of the signatories of the AI extinction risk statement and a machine learning professor at the University of Oxford; from New York, Peter Asaro, a philosopher of science, technology and media; and in Brisbane, Australia, David Tuffley, an applied ethics and cyber security lecturer at Griffith University. Thanks all so much for being with us. I'll start by establishing where everyone stands on the real threat level of AI. Is it really worse than climate change? Could it really lead to our extinction? Michael, go ahead.

Michael Osborne: Well, the first thing to say about the future of AI is that no one really knows. We're in a period of very rapid change, both in technical capabilities and in the adoption of those technologies, and for me this raises the concern that it's impossible to rule out that we may be facing truly existential risks from this technology. Now is the time to tread carefully.

Andrea Sanke: So if no one really knows, why be so alarmist? It almost gives this feeling of boy-who-cried-wolf syndrome: if we keep raising the alarm to this level and then nothing big happens, maybe people won't take the experts so seriously anymore.

Michael Osborne: Well, I think the reason I nonetheless felt we should raise concern about the existential risks of AI is that the risks aren't limited to the extinction of our species. AI poses many risks, from the destabilization of geopolitical relationships to the disruption of our democracies and the propagation of myths and disinformation. So I think we do need to do something about AI today, and what we do is likely to reduce many of those risks simultaneously if we tackle the core issue, which is that the only people currently governing the development and usage of this technology are those in a small number of opaque and extremely powerful tech firms. If we can tackle that core issue, I think we may address many of the risks.

Andrea Sanke: Okay, Peter, I saw you nod your head there. Are you on the same page?

Peter Asaro: Yeah, though I would argue that more of the risks we need to be concerned with are the near-term and middle-term risks. The existential risk is a long-term risk that may or may not develop; there's more uncertainty there. But what we do know is that AI is developing very rapidly. It's transforming the nature of work and automating all sorts of decisions about our lives that have meaningful impact on people, where we have concerns around the transparency of data and whether there could be implicit racial, ethnic and gender bias within the data sets we're using to train these AIs, plus the potential for technological unemployment and the displacement of workers in a broad range of professions. These technologies are going to wreak all kinds of social, economic and political transformations on society that we're not really prepared for, and they're happening faster and faster.

Andrea Sanke: Let me ask you about that quickly, Peter. Why is it developing faster than even so many experts expected? I remember learning, even 30 years ago, that robots were going to be doing most of the work for us by today, even in our own homes, so I really thought at this point we'd be so much further ahead than we are as far as robots are concerned. Why is it suddenly so fast?

Peter Asaro: Well, about 20 years ago I moved from AI into robotics, and robotics is a lot harder than AI. The advantage AI has is that it's purely data-driven, and what we have, because of the internet, are these huge data sets being collected by big tech firms about everything we do online, as well as all sorts of things happening in business and government. So we have lots of data, but we also now have these giant data farms and cloud computing technologies that allow us to process massive amounts of it. Techniques like neural networks and machine learning, which we've been working on for 50 or 60 years, are suddenly becoming much more effective because we have so much data and the ability to train these really large networks. We can do things like ChatGPT, which is able to hold a conversation because it's trained on an immense library of texts, has learned the statistical relationships within them, and can make up text that sounds real.
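To make Asaro's phrase "statistical relationships" concrete, here is a minimal sketch of the underlying idea: a toy bigram model in Python that counts which word follows which in a tiny corpus, then samples from those counts to "make up text that sounds real" in miniature. The corpus and the `generate` helper are invented for this example; ChatGPT itself is built from large neural networks trained on vastly more data, not bigram tables.

```python
import random
from collections import defaultdict

# Toy bigram language model: learn which word tends to follow which,
# then sample from those counts to generate new text. A drastic
# simplification for illustration; real systems like ChatGPT use
# large neural networks, not bigram tables.

corpus = (
    "ai is developing very rapidly . "
    "ai is transforming the nature of work . "
    "robotics is a lot harder than ai ."
).split()

# Count the statistical relationships between adjacent words.
follows = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(word="ai", max_words=12):
    """Repeatedly sample a plausible next word, weighted by frequency."""
    out = [word]
    for _ in range(max_words):
        options = follows.get(word)
        if not options:
            break
        word = random.choices(list(options), weights=options.values())[0]
        out.append(word)
    return " ".join(out)

print(generate())  # e.g. "ai is transforming the nature of work ."
```

Scaling this same statistical idea up, with neural networks instead of counts and an internet-sized corpus instead of three sentences, is essentially the acceleration Asaro describes.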
Andrea Sanke: Fair enough. David, are you in as much fear of our AI future as some of the experts are right now?

David Tuffley: Well, I'm the first to admit that there are certainly risks involved in all of this. This is a very high-stakes game that is unfolding as we speak, and I think in hundreds of years' time this period will be looked back on as pivotal in the evolution of our species. The problem people are reacting to, really, is that the pace of development is exponential, which is to say it's doubling every year or two, or even every couple of months, and people are really bad at predicting where things are going when they're moving at an exponential rate. I take what I feel is a realistic-optimist view of all of this. For as long as we've been around, for as long as we've been using fire, really, humans have been adopting potentially dangerous technologies, and provided the benefits of those technologies are real and worth having, it's worth finding ways to manage the risk. In our current situation, a lot of this quite alarmist talk is aimed at pressuring governments into regulating the development of AI, and I agree with that; pretty much everybody agrees that AI needs to be regulated, and we'll only be in trouble if we fail to regulate.
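Tuffley's point about exponential pace is worth quantifying, because this is exactly where intuition fails. A rough back-of-the-envelope sketch follows (the six-month doubling period is an assumption chosen for illustration, not a measured figure):

```python
# Steady doubling quickly dwarfs anything linear intuition predicts.
# The doubling period below is an illustrative assumption.
doubling_period_years = 0.5

for years in (1, 2, 5, 10):
    growth = 2 ** (years / doubling_period_years)
    print(f"after {years:2d} year(s): {growth:,.0f}x the starting point")

# after  1 year(s): 4x
# after  5 year(s): 1,024x
# after 10 year(s): 1,048,576x
```

At a yearly doubling instead, the same decade yields only about a thousand-fold increase, which is why "doubling every year or two, or every couple of months" spans such wildly different futures.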
Andrea Sanke: The regulatory framework, especially since it has to be applied globally to be effective, is going to be really tricky, because, Michael, as I'm sure you're aware, you might have outlying states. Why would they adhere to rules made by the US or the EU? And if they go ahead with their technological development, what might be considered the good guys would fall behind, and then people would have even greater fears. So Michael, when they talk about extinction, explain what that means. What would be the worst-case scenario in which we would drive ourselves out of existence?

Michael Osborne: Well, the thing to say about the existential risks of AI is that there is not one scenario of concern; there are many such scenarios, and it's the fact that there is this diversity of things to worry about that should lead us to be more cautious now. But to single out one that I find particularly compelling: AI could be used in so-called gain-of-function research to design a virus that is more pandemic-capable even than those that naturally exist, and that AI might then somehow persuade a lab tech within such an institution to leak the virus into the outside world, at which point, if the virus is sufficiently deadly, we all could be eliminated. Similar risks could be found in, for example, the exacerbation of the risk of nuclear war.

Andrea Sanke: But can't we do that already?

Michael Osborne: Well, we could, but the thing is, AI could make the risks worse, and I think we have to take that exacerbation seriously.

Andrea Sanke: Okay, Peter, let me come back to you. David mentioned the example of fire, and you've said that's a great example, because over the years fire has killed a lot of people, but it's also helped advance humanity. So with AI there will be some pitfalls, but overall a net positive?

Peter Asaro: Probably, depending on how we regulate it and how we integrate it into society. With fire you have the short-term risks of burning yourself or burning down your house, and we mitigate those in society with fire departments and the like. But there are also longer-term risks that we didn't think about for many centuries, like climate change, which is largely the result of burning fossil fuels. Those are the sorts of risks AI poses as we integrate it throughout society: all sorts of unforeseeable risks which, like climate change from fossil fuels, we don't necessarily predict at the beginning of a technology's use, but which we need to be able to manage at a national and global level as well.

Andrea Sanke: Okay, I'll tell you what, David, let's go back to your point about regulation. Tell us what can really be done to regulate this and make sure it does not get out of control worldwide.
David Tuffley: Sure. Well, it's going to be a challenge, because nation states are laws unto themselves, and while we can have international treaties that seek to establish uniform standards, there will be those who don't comply. What I would point out, though, is that discussions about AI are often cast as though it were an alien intelligence, when in fact it's really just an extension of ourselves. It's an extension of human intelligence, and to view it as a threat separate from ourselves is to lose sight of its real function, which is to be a helper: to help us be much more productive and to help us solve wicked problems like climate change. How to perfect nuclear fusion, for example, might be assisted by AI. For every bad scenario you can probably name several good ones. But to come back to your question of how we establish a global standard: I think that is an evolutionary process, and there will be problems along the way. Eventually, though, in the same way that nuclear energy has been regulated to be more or less safe (some would argue it's not safe), we haven't seen, in the decades since its inception, the apocalypse after Hiroshima and Nagasaki that people thought possible, in large part due to international law holding its place. How do we do it? Well, with a globally recognized governing body, like the International Atomic Energy Agency, that oversees this, establishes standards, and has the means to enforce those standards.

Andrea Sanke: Okay, it should be in the cards, and as we know, global leaders are talking about it; we saw Joe Biden and Rishi Sunak looking at how they can cooperate. But let me come back to Michael, because I heard a very interesting comment that I want to raise with you. There are voices in the industry actually saying not to exaggerate the dangers here. Professor Pedro Domingos of the University of Washington, speaking to Sky News, said people need to calm down: if we want to make AI safe, the best way to do that is to make it more intelligent, because it's stupidity that's dangerous, and intelligence and control are completely different things. So AI can essentially protect us from human stupidity, because it can be engineered to be safely smarter than us.

Michael Osborne: Well, I fundamentally disagree with that take. I would say, firstly, that yes, there are risks from stupidity, and AI does offer the prospect of solving many of the problems we face today. That's why I continue to work in this space myself, and I'm to some degree optimistic about the potential of AI to address risks we already face and to improve the prospects of human flourishing. But at the same time, we shouldn't underestimate how much harm could accrue from more intelligent algorithms. My concern is that even if these algorithms don't in any sense exceed human capabilities, just having a lot more human-ish systems in the wild could be very dangerous. Imagine an AI that serves as a kind of QAnon, an AI-driven QAnon that plays a role in politics, destabilizes the usual run of things, and leads us into a more precarious global situation. I think we could be in a very harmful place.

Andrea Sanke: I don't mean to sound simplistic, Michael, but don't we have that anyway? And wouldn't we hope that people evolve to be smarter and learn that these kinds of crazy theories aren't true, no matter where they originate? You don't think that would be the case?

Michael Osborne: There are two points I'd like to make here. Firstly, AI is different from humans. AI has many deficiencies that humans don't; it doesn't have the same degree of common sense that humans do, which may itself lead us into various harmful scenarios. If, for instance, we deploy an AI in a high-stakes situation, like making military decisions on the battlefield, it could cause much more harm than a human would have in the same situation. Secondly, even if AI were comparable to humans, having a lot more human-ish systems in the world in short order, that is, expanding the number of human-ish systems from 8 billion to 300 billion very rapidly, puts us outside the range of scenarios that our societies have adapted to keep under control.

Andrea Sanke: Okay, Peter, let me turn to you, because on autonomous weapons, some say there's also an argument for safety to be made: they could prove more precise, and since humans don't have to physically fire or drop these weapons, you eliminate that kind of trauma, which could ultimately be better for humans when it comes to warfare. I'm not saying war is ever good, but it's a reality of our existence. So there's the argument that it's actually better for humanity to have these weapons fully automated.
Peter Asaro: That's certainly one of the drivers of the development of those systems. But the concern, of course, as was just mentioned, is that once you give over the authority to kill human beings to automated or autonomous weapon systems, those systems could make enormous mistakes and kill civilians by accident, and moreover there's no legal accountability for these autonomous agents or software. It becomes much more difficult to hold humans accountable when they can't predict the behavior of these complex systems.

Andrea Sanke: Does it, though? Because the people who own those weapons and deployed them in the first place would ultimately have to take responsibility, whether or not a human being was at the wheel.

Peter Asaro: Only to an extent. Nations are responsible for the acts of their militaries, but it would be very difficult to hold individuals responsible for war crimes, which require mental states, an understanding of the implications of one's actions, that might be absent in these cases. And we've already seen, with the use of drones and remote killing operations, the way in which the application of international law is diminished. So there are these threats to international law, and there's a whole host of potential risks around weapons. I think that's a great example of a case where we can regulate: we've been working at the United Nations for more than a decade now on getting a treaty to regulate autonomous weapon systems, and I think we're getting close. But regulating artificial intelligence more generally is going to be much more complicated, because there are so many different applications, and each will have different regulatory considerations and requirements.

Andrea Sanke: Okay, we're down to our last few minutes, so let me bring it full circle. David, there was what some called hysteria over nuclear threats in the past, and there was hysteria even over, well, we'll all remember Y2K: people thought the technological issues involved in the changeover to the new millennium were going to destroy us all and couldn't be fixed. It was hysteria, in the end, because nothing really came of it. So are you feeling that it's good to sound the alarm a little, but that we have more to look forward to than to fear?

David Tuffley: I certainly would agree with that. I think we do have much more to look forward to than to be afraid of. I don't mean to minimize the potential risks; I acknowledge they exist. But we have the capacity and the will to do something proactive, to govern this and prevent the worst from happening, just as we've done with every other kind of dangerous technology we have invented. I would also add that this has a lot to do with the correct relationship between humans and AI. It is still an extension of human intelligence, it's still a helper, and the human must remain in control. You would never put an AI doctor in charge of life-and-death decisions in a hospital; it must always be under the supervision of a suitably qualified person. If it's kept in that sort of perspective, then problems are unlikely to occur.
Andrea Sanke: Michael, I can give you 30 seconds for quick final thoughts.

Michael Osborne: Right. Well, I disagree with the depiction of Y2K as having been hysteria; in fact, the concerns about Y2K are exactly what led to the engineering efforts that prevented real problems. Fear serves a useful purpose in motivating responses that allow us to fend off realistically concerning prospects, and I think we are in a position now to undertake efforts that would ward off these really worrying scenarios. I'm reassured that the US and UK are now talking. The US has an enormous lead in AI, and that lead could allow it to serve as a kind of global watchdog to prevent the technologies proliferating more broadly.

Andrea Sanke: Okay, Michael, that will have to be the final word; unfortunately we're completely out of time for this edition of The Newsmakers. I'd like to thank all three of my panelists sincerely for being with us, and our viewers, of course, as always, for joining us. Remember, you can follow us on Twitter, and do be sure to subscribe to our YouTube channel. I'm Andrea Sanke. We'll see you next time.
Info
Channel: The Newsmakers
Views: 1,369
Id: MS8oOAkTYos
Length: 26min 0sec (1560 seconds)
Published: Fri Jun 09 2023