AI Insider Asks the Question: What If We Can't Control AI? - [Part 2] | Intelligence Squared

Video Statistics and Information

Captions
So what should government, or let's be concrete, what should this government be doing? We're in the UK, and presumably most people here are from London. The British government wants to be an AI superpower and is holding a conference on AI safety in November, so there's a big focus here. What should this government, or indeed other governments, be doing concretely to minimize the risks? Is there stuff that should be banned now? Are there rules of the road that should be put in place?

So the first thing is that governments have to build technology. We've got into this habit of outsourcing and commissioning third parties to create technology, and I think it's really difficult to control what you don't understand, and unless you build it you don't deeply understand it. That's the first thing, which in itself is very controversial: when I propose it in government, people throw up their hands. There's a lack of will, a lack of self-confidence, a lack of belief that government can be a creator and a maker, especially on the technology front.

The second thing is that we have to have deeply technical and engineering people, as well as technologists more generally, in cabinet positions and at the heads of every government department. It's pretty crazy to me that we don't have a CTO, a chief technology officer, in cabinet running our big institutions; all of that is outsourced. The challenge is that to do this you have to pay close to private-sector salaries, which is another highly sensitive topic that no one wants to talk about: the idea that a civil servant should never earn more than the Prime Minister. To me this makes no sense. How can we have an open labor market where on the one hand we're saying to people, go and work for whoever you like, and people are being paid ten times as much, and on the other we're saying, take this huge sacrifice in the name of public service? The practical reality is that if that happens over many decades, the net effect is that you have quality of one type over here and quality of another type over there, and that's really what we're facing. We have to confront that reality. It's very difficult for people to accept that we should be paying very large salaries, and it creates other issues around how we hold those people accountable given how much of the public purse they might be earning, and so on. But fundamentally those two things enable the third thing, which is that governments have to take risks with regulation.

There is a fear that governments will act too aggressively or too experimentally and upset the big companies. As someone who is on the receiving end of this quite a lot, and has been in the past where mistakes have been made, I still think the right thing to do is to give governments a break: let them make mistakes, let them make investments that don't work, praise the experimental governance structures, have faith in the political process, participate, encourage it. Because otherwise there's just this spiral of decline, this lack of confidence that we can actually do the right thing, that we should do the right thing, and that ultimately leads to a self-fulfilling prophecy, much like with China.

And do you think that your view is the exception in your industry? The stereotype is a bunch of 30-year-old tech bros who think the government is useless and who are going to change the world with AI. Is that an accurate stereotype? Are you the exception? Should we worry about the hubris of people in your industry?

I think we have polarization everywhere, so the stereotype is probably true, but the counter-position, that we can do it without technology, I think is totally wrong. Technology is an absolutely necessary but not sufficient part of the process. Silicon Valley does have a tendency to be much more techno-libertarian, there's no question about that: the government is the problem, and the objective is to eradicate the state and run things completely independently. I'll be honest, there are some very influential, very powerful people who have that objective and are building towards it with both their companies and their fortunes. I'm very skeptical of them, and obviously I'm on the other side of that.

And that's what shapes a lot of the public fear about this: that you have a bunch of hyper-powerful people who are shaping this with a kind of disdain for the state and the democratic process. Two quick questions from me, which I know someone would ask otherwise, and then we're going to audience questions. The first is the whole question of the singularity; we can't have a conversation about AI without it. Will it happen? When will it happen?

I honestly think it's a very unhelpful framing of what's to come. People jump to this framing because it's easy to point to Terminator and Skynet, but it's almost like leaping to the Moon before we've even invented the transistor. It's hundreds of years away, and it's really unhelpful. There are many practical, near-term, operational capabilities that you can predict, just as I've tried to describe, and you can then use those to wrestle with what the consequences are for the nation state, how this changes our businesses, and what this means for our governments. So in general I don't make those predictions; I'm very skeptical that the superintelligence framing is useful to us.

What about the other one that backyard wannabe AI commentators are always talking about, which is the odds of existential catastrophe? What are the odds that we will wipe ourselves out with this?

Again, I think very, very low.

What's very low?

I think infinitesimally small, such that it's not worth putting a number on it.

The reason I ask is that I asked someone somewhat similar to you what this was. Oh, very low, they said. And I said, what's very low? Oh, about 5%. So you think it's infinitesimally small, effectively zero. Okay, well that's a good place to end that. All right, we're going to open now to your questions, and to questions from the online audience. Oh, this is a good question from Kitty Hadock, who asks: what will be the impact of all that computing power on our carbon emissions, or will AI be able to enhance productivity so that we reduce carbon elsewhere?

Another hot take on this: very low, and really inconsequential. The amount of carbon that we spend on our data centers is genuinely minuscule, relatively speaking. Secondly, most of that happens in completely renewable data centers; Google and Microsoft are both entirely 100% renewable, and Google actually owns the largest set of wind farms in the world. One of the projects that I worked on whilst I was at DeepMind was making the entire wind farm fleet 20% more efficient.
So right from the outset they have been focused on this. I'm not saying there aren't other environmental consequences, like the use of gallium and cobalt in the actual chip manufacturing and so on, but I honestly think that relative to the benefits we're seeing, and with respect to the absolute cost of carbon per unit of computation, it's very, very small.

Just to follow up on that, because an argument I have often heard is that the cost of electricity and access to power will be a constraint on the development of these AIs and their proliferation. Do you also think that's not true?

I think that's not true. Some data centers will be at the 100-megawatt scale, which is maybe a single-digit percentage of a small city's electricity consumption, but we're talking about a very small number at the 100-megawatt scale; that really is enormous, and nothing like that exists today. Don't worry about the carbon consequences of the actual AIs.

Now to questions from the audience here. Yes, the lady here in the second row.

Thank you. Sheru, from a number of education companies that use AI. My question to you is this: if you think about two industries, say healthcare and education, and you think about the applications that AI has, could you choose between the two which you would hold the most hope for, and how should they be thinking about it? Should they be thinking about procuring it, and how do you procure it safely and well? Or again, as you said, you could produce it, but some of those organizations may not be in a position to produce it anytime soon. So if you're a procurer, how do you do that well, and what are some of the frameworks that should be used for that?

Thank you, that's a great question. In terms of immediate, near-term impact I'm probably most excited about education. These models are already being used; I think the primary use case of ChatGPT is in fact homework help. People often think, oh, my kids are just copying and pasting, but if you actually watch the way they're using these models, and many people use our model Pi for exactly this reason, it's a conversational interaction, much like the way an enthusiastic teacher might speak to a child about an interest they have. The child, or the learner in general, gets to phrase the question in exactly their own style, picking on exactly the thing they're interested in, asking the odd obscure, poorly phrased, incomplete question, and of course the AI is infinitely patient and provides really detailed, mostly factual information. It's not always perfect, but it will be, and I think that's an unbelievable meritocratic gain for everybody. We need to picture a world in five years' time where the best education in the world, completely personalized and entirely factually accurate, is available pretty much for free to absolutely everybody on the planet who wants it.

Which sounds amazing. How do you go from where we are now to that world?

I think the beauty of these models is that they have an inherent tendency to proliferate and get smaller; this is the upside of proliferation. They spread because everybody wants access, everybody wants to integrate them, and there are so many competing models now that the cost of buying a model per word has come down 70x since January, because we're all competing with each other. If you're building an app, for example, you'll go to one of the three or four big model creators and pay per word. That means you can now take a regular app that you might have been developing for years in its current instantiation and add a conversational widget.

In fact we're doing this at The Economist with the Ecobot, a secret project underway. Clearly not so secret anymore. Thank you, sorry.

And you integrate the conversational element into your existing workflow, so you should be able to ask any question, in the style and theme of your brand, about the specific content that you have, and it will be like a plug-and-play widget that you can put anywhere in the app. That's what I mean about proliferation: obviously everybody finds that useful, and you'll be able to use that tool in a low-code or no-code environment. Look at how the image-generation models are being integrated into Adobe today: if you're already a user of Adobe, you're using the absolute cutting-edge AI models in a drag-and-drop way, with no training required. If you're building a new website today, it's drag and drop: you grab a little widget and plop it over here, and suddenly you have a YouTube player with your video, and suddenly you have a conversational interaction with a language model that is conditioned on all your data. So I think it's important to wrap your head around the idea that this is going to be widely available to everybody. There isn't going to be an access issue; the risk and harm lies in mitigating the downsides of the bad actors who might use it for nefarious purposes, but the upsides are incredible.
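To make the pay-per-word, plug-and-play integration described above concrete, here is a minimal illustrative sketch, not taken from the talk, of how an app might send a user's question to a hosted chat-completion API and condition the answer on its own content. The endpoint URL, model name and API key are placeholders, and the request shape follows the widely used OpenAI-style chat format rather than any specific provider; billing in such services is per token, which is roughly the pay-per-word pricing he mentions.

```python
import requests

API_URL = "https://api.example-llm-provider.com/v1/chat/completions"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                                              # placeholder credential

def ask_brand_bot(question: str, brand_context: str) -> str:
    """Send one user question to a hosted language model, conditioned on the app's own content."""
    payload = {
        "model": "example-chat-model",  # placeholder model name
        "messages": [
            # The system prompt conditions the reply on the brand's own content and tone.
            {"role": "system",
             "content": "Answer in the voice of this publication, using only the "
                        "context provided.\n\nContext:\n" + brand_context},
            {"role": "user", "content": question},
        ],
        "max_tokens": 300,
    }
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=payload,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    context = "Our latest coverage explains how AI chips are made and who controls the supply chain."
    print(ask_brand_bot("Why does chip manufacturing matter for AI?", context))
```

In an app this call sits behind a single "conversational widget" component, which is why the integration can feel like drag and drop rather than a multi-year engineering project.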
Let's go to questions; lots and lots of hands. Yes, the lady there, but I'm going to take one from online while you get your microphone, and the one from online is a question from São Paulo. Gosh, your audience is coming from a long way away. Renee Dealo Jr asks: when you say "we will solve this and that," who is this "we"? Humanity, corporations, the UN, or Elon Musk? A good question.

I definitely hope it's not Elon Musk. I think of it as the community of researchers, inventors and creators. There's this sort of dialogue: sometimes you see snippets of it on Twitter, sometimes you see it in the research papers that academics publish, and you see it in the blogs and the products that big companies produce. There is this unfolding, evolving mold of an ecosystem which is referencing each other, creating and evolving. So when I say "we" I certainly don't mean me at Inflection, my current company; I mean the ecosystem of humanity. We're trending collectively in a direction of invention and creation.

Just one tiny follow-up: does that ecosystem include Chinese scientists?

Ten years ago Chinese scientists were not really part of the conversation; they weren't really very relevant. Over the last ten years they have launched onto the scene, producing very high-quality, creative research. The old stereotype was that they can only copy and steal, which again I think was a demonization, partly by Elon Musk actually, who was a big proponent of the idea that they were just robbing our intellectual property. There was some of that, but largely they were just as creative as us, and they wanted to get access to these tools to build their own businesses and provide new products and services for their own citizens, for the same reason as we do. So if you start from that assumption, then of course they're participating in this ecosystem, and of course they're creating incredible models. They have their own constraints with respect to censorship, and that has slowed them down a little, but they're actually not going to be that far behind. There are some issues with the export controls, and they don't have access to cutting-edge models, but I don't think that's going to hold them back for very long.

Interesting. Go ahead.

Thanks, this is excellent. My question is about AI ideas and the people needed to think of them. If you take someone like Steve Jobs, for instance, you had a very specific person with very specific interests, skills and talent, able to develop not only technology but the brand and a point of view on the world that came with it. Do you think AI would be capable of coming up with, let's say, the next version of the Apple idea, now or in the future, or will it simply be a machination of past information?

People have often characterized these AIs as regurgitating their training data, or reproducing whatever they have seen previously, and I think that's a misunderstanding of what they do. They're almost always doing interpolation. The thing I described earlier is predicting the space between two ideas: they're saying, let me mash together these two concepts, just like the dog and the yellow spots, or take your pick of any combination, and that's creativity. Fundamentally, when I invent something I'm really being inspired by a huge range of different experiences and ideas, and I'm using those to produce a novel prediction or generation at any given moment, and I'm testing it out and seeing if it's useful, or if it makes sense, or if it catches on, and then it has a life of its own, independent of me. So I think for the next couple of decades these AIs are going to aid the human in that process of creation and invention and discovery. They're not going to wander off and have their own agency and do their own thing; it's just not possible, the capabilities aren't there and won't be there in the near term. So I think it's going to be the human-AI combination that does the creation for a good time to come.

So it does the creation more as a brilliant assistant, presumably?

Exactly, it's more of the assistant. Exactly: the brilliant assistant.
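As a loose illustration of the interpolation idea, here is a toy sketch, not from the talk, of blending two concept embeddings in vector space. The three-dimensional vectors and the two named concepts are entirely made up for illustration; real models interpolate in much higher-dimensional learned representation spaces.

```python
import numpy as np

# Toy 3-dimensional "concept embeddings"; real models use thousands of learned dimensions.
concepts = {
    "dog":          np.array([0.9, 0.1, 0.0]),
    "yellow spots": np.array([0.0, 0.8, 0.6]),
}

def interpolate(a: np.ndarray, b: np.ndarray, alpha: float) -> np.ndarray:
    """Linear interpolation: a point part-way between two concept vectors."""
    return (1.0 - alpha) * a + alpha * b

# A 50/50 blend lands in the space between the two ideas, which is the sense in which
# generation mashes familiar concepts together into something new.
blend = interpolate(concepts["dog"], concepts["yellow spots"], alpha=0.5)
print(blend)  # -> [0.45 0.45 0.3 ]
```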
Right, let's go further back. Yes, the gentleman there, four rows back.

Are you seriously trying to suggest that the AI companies are able to self-regulate? Didn't the banks prove that that is an impossible concept?

Oh, the banks are highly regulated, and not just by themselves. But look, I'm absolutely not proposing self-regulation; if that came across then I apologize, I'm wrong. In the book I really don't say that; I go to great lengths to say that independent, external technical expertise is required to do governance properly. The practical challenge, as Zanny pushed back on me earlier today when we were talking with Yuval, is where are these competent regulators who get the technical aspects, and where is the democratic process that gives us confidence that we can appoint people to conduct that kind of oversight. So there is some pessimism that they're capable of doing that, but that should not mean we sit around and do nothing in the meantime.

For example, I visited President Biden at the White House six weeks ago now, with the other six AI companies, Microsoft, Meta, Google DeepMind and so on, and we signed up to voluntary commitments that are a precursor to regulation, which the White House designed because they realized they can't pass new primary regulation anytime soon. But the voluntary commitments are very material. We have basically said publicly that we will expose our models to expert independent scrutiny, to red-team or stress-test them and find weaknesses in our own models; once we identify those weaknesses, we share them with each other and we share them publicly, in transparency, in the open light of day. We know that that framework, the voluntary commitments, is a precursor to an executive order which is coming from the President sometime in the next few months. They're also the basis for Prime Minister Rishi Sunak's AI Summit in November at Bletchley Park, where many world leaders and all the big tech companies are coming, and those voluntary commitments are going to form the basis of the discussions for what becomes binding, not just in the UK but hopefully worldwide. So I'm totally with you that we're not going for a self-regulatory approach.

But you don't think there's a conflict of interest?

Well, there is definitely a conflict of interest, of course there is; we are a for-profit company. In fact, we're a public benefit corporation, which I think is an important clarification. It's a new type of company, closer to a B Corp, a hybrid of a for-profit and a nonprofit mission. It means that our directors have a legal obligation to factor in the impact of our activities on the wider world, both the environment and the people materially affected by what we do who aren't just our customers. That doesn't solve all the issues with for-profit businesses and the conflict that you described, but it's a first step in the right direction, and I believe that's how change happens: taking small steps in the right direction.

Let's take a question from over there. Yes, the gentleman quite near the back with the white T-shirt. Yes, right there.

Hello. My question to you, as an electronics engineer, is: should we now focus on the hardware part of this, considering there's a monopoly going on and the concentration of chips in a certain country? The hardware part is raising a very big question; we saw during Covid that things get really bad when the hardware supply goes down. So is this a great time to focus on hardware, considering we are good with the software part for now?

That's a great question. We didn't really talk about that too much here, but just for everyone's benefit, these AI models are trained on GPUs, graphics processing units, chips that were previously used for gaming, for rendering graphics in computers. We take each one of these chips and we daisy-chain them together thousands and thousands of times. We have a computer at Inflection which is the size of four football pitches and has 25,000 of these chips daisy-chained together, an enormous cluster. It cost about a billion and a half dollars.
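As a rough, back-of-the-envelope check on those cluster numbers, here is an illustrative sketch that is not from the talk: the per-GPU price and the overhead multiplier are assumptions chosen only to show how a count of 25,000 accelerators can plausibly reach a total in the billion-and-a-half-dollar range.

```python
# Back-of-the-envelope cluster cost, with assumed figures (not quoted in the talk).
num_gpus = 25_000                 # accelerator count mentioned for the Inflection cluster
assumed_price_per_gpu = 40_000    # USD; assumed rough price of a cutting-edge data-center GPU
assumed_overhead_factor = 1.5     # assumed multiplier for networking, power, cooling, facility

hardware_cost = num_gpus * assumed_price_per_gpu          # ~1.0 billion USD
total_estimate = hardware_cost * assumed_overhead_factor  # ~1.5 billion USD

print(f"GPU hardware alone: ${hardware_cost / 1e9:.1f}B")
print(f"With assumed overheads: ${total_estimate / 1e9:.1f}B")
```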
Now, all of these chips are made by one company, NVIDIA, whose share price, as I'm sure people will have heard, has gone up by 350% since January. Their chips are manufactured entirely by one company, TSMC, the Taiwan Semiconductor Manufacturing Company, which is obviously in Taiwan. And the key components of their fabrication facilities are made by one company called ASML, a Dutch company. So the supply chain, and we could talk about how this happened over 30 years, is extremely narrow; there really are no competing providers that are material at any of those three stages. The good news is that this means there are choke points that can be used by regulators to monitor who has access to the critical chips that enable the training of these models, and of course to restrict access for certain people. I loosely alluded to the export controls a minute ago, a new rule that the US administration imposed last year, which prevents anyone in China, any manufacturer in China, from getting access to the latest version of these chips, which means that they won't be able to train a GPT-5-level model. A number of people have referred to this as a declaration of economic war on China, and I think we have to be very cognizant of that: denying them access is likely to deliver a significant counterattack on the West, and we are hugely dependent on their supply chain in many, many respects. So yes, chips are absolutely at the heart of this, in both good and bad ways. If you're focused on a chip company, it's a big bet and it takes a long time to mature, but it has the potential to be the critical component here.

Just a follow-up question: do you think open-source hardware will help in creating a better setup, considering very few companies are focusing on creating the hardware and all of them are for-profit? Would something like open-source hardware help us create better computers, and better models, with less power?

Open-source hardware is a serious effort, and just to clarify, open-source elements of hardware design are used in many, many areas. Open RAN, for example, is a hardware design for 5G masts which ensures they're interoperable: it means that the software that runs your telephone networks can run on any type of hardware, because the interface is standardized. That's a great thing for competition; there isn't a lock-in between the hardware, the builders of the masts, and the software, the people who run the operating system that sits on top of it. The downside is that it has tended to be a bit more flaky than the fully integrated alternatives. So I think you should be wide-eyed about it: it isn't going to be the panacea that solves all of our problems anytime soon.

Let's take another question there. Yes, the lady in the fourth row.

Hi, thank you both. I'm Javah Rari, and I lead digital regulation work at techUK, which is the digital tech trade body in the UK, with over a thousand members ranging from big tech, DeepMind, Google and Meta, all the way through to cybersecurity providers and SMEs. Many of our members are harnessing the really positive impacts of synthetic media, but many are becoming increasingly concerned with the rising malicious use of deepfakes: everything from revenge pornography to undermining digital ID verification to fraud, which is a big one. In your opinion, what should companies do now to address the rising problem of deepfakes? I know you mentioned voluntary charters, which we already do with things like fraud, but what should we do now?

It's a great question. The first thing to say is that political parties and political campaigns shouldn't be allowed to use AI generators for their content.
I think we should just start by taking that off the table. That's a precautionary principle; there are potentially some downsides to it, but it feels like a safer and sensible thing to do. The second thing to say is that we shouldn't allow the big tech platforms, Facebook or Twitter or anywhere there's a broadcast of information, to have counterfeit digital people. If you have the handle Zanny on Twitter, for example, only Zanny should be allowed to represent herself as Zanny on Twitter; I shouldn't be able to come along, create a perfect synthetic fake of Zanny, and have it imitate her language. I think that's a reasonably straightforward, sensible thing that all the big tech platforms will commit to. It doesn't address other platforms outside of the big providers, and those tools and techniques are going to be widely available; again, it's a proliferation question. It's going to be really difficult to say to somebody, well, you're using synthetic media to generate a new product design or a new fashion outfit or all these other good uses, but you're not allowed to have it because there's a risk that you're going to generate some deepfake.

I think we should also be wide-eyed about how quickly we adjust to the risks. Twenty-odd years ago people said we would never be able to do financial transactions on the internet because there's so much fraud, that we would be inundated with fraudulent activity. We now do tens of trillions of dollars of transactions, it has completely transformed our world, and we have a minuscule amount of fraud; it's a constant back and forth. Likewise with spam detection: everyone thought we were going to be inundated with spam and all this automated content. Increasingly, the next threat is that older people are being tricked by AIs that can imitate the voice of, say, your daughter or your child, who might be asking you for a loan or something. This con-man scam is now more possible and more capable, and of course that's a new threat vector that causes real harm. On the flip side, there's value in spreading knowledge and information about it, and there's a very simple defense, which is just to say: never provide access to my account over the phone; I'll never call you out of the blue asking for that. So we adjust, we adapt. It doesn't mean that we can eliminate all of the harms, but it means that, net-net, we just have to be more resilient and more focused on adaptation.
Info
Channel: Intelligence Squared
Views: 30,137
Keywords: intelligence squared, debate, intelligence squared debate, top debates, best debates, most interesting debates, intelligence2, intelligencesquared, iq2, iq2 debate, iq squared, Intelligence Squared +, IntelligenceSquared, Intelligence squared plus, IntelligenceSquaredPlus, IntelligenceSquared+, intelligencesquaredplus, intelligencesquared+, AI insider, what if we cant control ai, artificial intelligence, science, technology, ai, mustafa suleyman, mustafa suleyman on artificial intelligence
Id: DYWhphtqodY
Length: 28min 38sec (1718 seconds)
Published: Sat Sep 23 2023