Francis Fukuyama on AI, technology and how it will impact democracies and world politics

Welcome to the University of Oslo; it's a great pleasure to be allowed to welcome you here. My name is Svein Stølen, and I'm rector of the University of Oslo. One year ago, on this stage, Håvard Danielsen received the Cancer Society's research prize from the King; he was using artificial intelligence to take pictures of tumours, to understand tumours and to produce prognoses. In September Emmanuelle Charpentier is coming to Oslo, a Nobel Prize winner working on CRISPR technology at the interface between biotechnology and nanotechnology. I myself am a materials chemist, and I know to what extent nanotechnology has affected battery technology; the fact that we have a lot of electric cars in Norway really is extremely important for renewable energy and the changes we are going to see. Enabling technologies, ICT, bio- and nanotechnology, are making an enormous impact on society. Most people do not know too much about it, and politicians may not know about it, but the effects of enabling technologies are extreme. And of course, what we see is that CRISPR is not necessarily only a positive technology: what are the limits of what you can do with medicine? ChatGPT is a new technology; in March we celebrated that we were able to use OpenAI to automatically subtitle videos at the University of Oslo, universal design, very efficient. Still, what we also see is that this technology is accelerating, and there are challenges for education, but also for society. How does this affect polarization and echo chambers? How might it increase social inequality? How might it interfere with the political system? That is the topic of today: how might artificial intelligence and technology interfere with politics? We are going to have a conversation between Francis and Mathilde, and we're looking forward to that. Is it a curse, or is it a blessing, or is it something in between? After the conversation there will be prepared questions from the audience; there's
not going to be open questions, but there are prepared questions. So with this, I really welcome you again to the University, and I'm looking forward to the conversation.

Thank you so much for this introduction, and good morning and welcome to all of you, and of course welcome to you, Frank. The last time you were here was right before the covid pandemic hit Europe, by a couple of weeks; it was dark. Since then Russia has invaded Ukraine, and you usually talk about all these things, because they are huge things happening in the world right now. But today we are talking about something else, something that you proposed we should talk about, and I'm happy to say that it may be the first time we're talking about these things in front of an audience like this. We're going to look at how AI, biotechnology and technology in general affect and influence global politics. A while ago you wrote, in American Purpose, a blog post called "AI, Technology and Equality", in which you explored possible social impacts of AI. Even if you said in that article that it's a fool's errand to predict something like the long-term social consequences of AI and technology, can you tell me what we know so far about the overall effect of social media, the internet and technology on democracy? We could start there.

Well, first of all, it's a delight to be back at the University of Oslo and to see a lot of old friends here in Norway; I'm glad we can travel again. And thank you, Mathilde, for organizing this. First, let me say something: I think everybody here has been reading the stream of articles by people opining about what the effect of AI is going to be on society, and I'm not going to try to do that, because I think it's just a fool's errand to try to make these predictions. Think about what it was like in the 1870s, when electricity was first invented, and
people back then, if they had had social media, would have started speculating about what the impact of electricity would be on society as a whole, and they wouldn't have had any idea. I think we're in a similar position right now: you can see certain trends and you can extrapolate them, but the thing is moving very, very quickly, and the ultimate social impacts, I think, are going to be very hard to foresee. We can talk more specifically about some particular dangers we need to think about with regard to AI, but to get back to your bigger question: that blog post Mathilde referred to was really one where I was trying to look back at the impact of earlier generations of digital technology, because I think it is sometimes misunderstood what really happened. So let's think about the internet in general; we can get to social media later. If you think about this big transformation that has occurred over the last 50 years with the rise of this whole class of information technologies, there's a broad conclusion among economists that it has increased socio-economic inequality, and there's doubtless a lot to this, because right now the main social divide in most countries, certainly in most advanced democracies, is one based on education. If you have a higher education, if you've gone to the University of Oslo or you have a degree or a higher degree, you're doing well: your income is much higher, and the gap between those people and people with just a high-school education or less has grown enormously almost everywhere. So there's no question that a lot of the current populism we see around the world is based exactly on that division: the people who vote for populist politicians are usually less educated, they don't live in big cities, and they are not connected to the larger global economy, and that is obviously really upsetting our democratic politics. However, one of the
consequences that I don't think people have sufficiently recognized, within this massive increase in inequality brought about by the transition from an industrial to a post-industrial economy, concerns gender relations. With the replacement of physical labor by machines, and then with the shift in the nature of work from an industrial economy to a post-industrial one, in which most people, instead of working in factories or lifting heavy objects, sit in front of computer screens all day in a service industry, there has been an enormous impact on the role of women in the economy. Beginning in the late 1960s, virtually every advanced society began to see enormous increases in female labor-force participation. Now, some people would say this was an ideological or cultural change, that a movement for women's equality appeared at that time. This is one of those areas where I think cause and effect are very hard to disentangle, but it is certainly the case that you could not have had that degree of female empowerment if there had not been occupations for women in which they could earn salaries, support families and be independent of their husbands, and this began to happen as tens and hundreds of millions of women all over the world entered the workplace in the late twentieth century. Now, this shift: I think the one book I wrote that got the least attention was The Great Disruption, which I published in about 1999, and I argued there that it was not just this change in the nature of work but also the invention of the birth-control pill and other biomedical technologies that really shifted the whole nature of the family and gender relations. So women were empowered; there was an increase in the divorce rate; there was an increase in the number of children brought up in single-parent families; but there was also a big class division that appeared as a
result of this, because in fact for educated middle-class people that disruption wasn't that great, but it was really bad for working-class people, among whom, to this day, a lot of children are brought up in female-headed families. All of this was brought about by the technology, but in the end it had this enormous impact on the role of women in any modern economy, and indeed around the world this change has gone on. So the consequences of technological change are really very complicated and very hard to foresee, and I think that in that respect the shift to digitization has had both positive and negative impacts; it's very hard to say that overall it has simply increased inequality.

You mentioned the book from 1998, and I'll bring up a book from 2003, another one that's 20 years old this year. You wrote a book called Our Posthuman Future: Consequences of the Biotechnology Revolution. How do you see that the ability to change or modify human behavior or biology will affect liberal democracy, and how do you think new advances in biotechnology will affect politics and society? Now you have the benefit of hindsight: you can look at what you wrote 20 years ago, and I'm sure lots of things have happened in this area as well. So tell me about changes in biotechnology.

In the late 1990s I was running a seminar on new technologies and their impact, and we focused on both IT and biotech. At that time I actually thought that the biotech revolution could be more consequential, and I think it still may be, though in the interim we have obviously seen the downsides of social media and information technology. The reason I think the biotech revolution could have big political effects is the tools it provides for certain people to control the behavior of other people. We had the experience of totalitarianism in the twentieth century, in which you had highly centralized, strong
governments that tried to control the behavior of their populations. They had things like agitation and propaganda; they had re-education; they had a police state that tried to enforce changes in people's behavior; and in the end those technologies were not sufficient. The Soviet Union broke down. China is now reinventing itself as a centralized, controlling state, but it too has had trouble using those techniques to control its population. It seemed to me that these new biomedical technologies were opening up new avenues, and in particular, if you could actually do germline interventions that altered certain characteristics of human nature which would then be heritable by later generations, that would have a very serious impact on the way we think about things like human rights. I believe, and I can go into a longer defense of this if anyone's interested, that our understanding of human rights is ultimately based on a certain theory we hold, implicitly or explicitly, about human nature, and that the most important rights we grant people are those that respond to the deepest aspects of what we regard a human being to be. If you can start to manipulate that, you will in the end change the nature of rights. Germline engineering is not the only such technology. We are using psychopharmacological interventions, different kinds of drugs, to control mood, behavior and so forth, and it has been a major revolution. These days we control the behavior of our children with SSRIs, selective serotonin reuptake inhibitors like Prozac, and with Xanax and so forth; we control children's behavior through Ritalin and other stimulants like that; and it's going to increase. But it turned out that the IT revolution produced bigger short-term impacts, and I think that's really what we've been
worried about. When the internet and social media were first introduced, I, like many other people, thought this would be good for democracy, because information was power, and if you increased access to information you would spread power out, and that would be democratizing. It turned out that this was true, we were spreading power out, but we were also eliminating all the hierarchical structures that certified, verified and ensured the quality of the information we receive. I think the big challenge we have now is really the basic lack of trust that has pervaded a lot of our societies, because we no longer disagree just about ultimate values; we are not able to agree on simple factual information. You can see this quite strongly in the United States, where we have an incredible degree of polarization, and it is ultimately built on the fact that the two sides of this polarization simply have different understandings of what's going on in the world. We have one group of conservative Republicans who think Donald Trump had the election stolen from him in 2020, and they believe this very passionately, because when they turn on Fox News or go to their internet websites or their favorite people on Twitter, that's the story they hear, and no amount of contrary information can shake them from this belief, because they really begin from a different factual basis. That, I think, is one of the dangers of the coming AI revolution, where the verifiability of virtually any digital artifact is going to be very difficult. I do a lot of photography and videography, this sort of thing, and Adobe just introduced a new beta version of Photoshop that has a feature called Generative Fill. It used to be that if you wanted to take somebody out of a picture, there was a tool that would draw a line around
that person and you could get rid of them, but it would just leave a blank white space. Now the program actually fills in the background: if the person is standing in front of a brick wall, it will draw the bricks in back of them; it will even put objects that weren't there into the picture; and it's very hard to tell that you've manipulated it. This is just the beginning, I think, of one stream of developments that is already with us in terms of deepfakes, where it's going to be very hard to authenticate any kind of digital document. I think one of the general characteristics of our age is an across-the-board decline of trust in social institutions. This has been well documented in surveys across many societies: people simply do not believe in traditional institutions and their authority. And imagine what's going to happen in court cases, for example. You say, well, we have photographic evidence that this person was guilty of this particular crime. In fact, we've got a case of this with Donald Trump: we've got these photographs of him having classified documents at his Mar-a-Lago resort, and already people are saying, well, these were doctored photographs, the FBI simply created them, they're fakes. I think this is going to get more and more intense as the quality of the kinds of interventions you can make increases.

So when I look at your Instagram post next time, I'll have to call you and ask: have you manipulated this, or were you really there? Let's go back to Our Posthuman Future and biotech, because I remember that you wrote something about differences between the West and China when it comes to manipulating things, and I'd like you to elaborate a bit on that.

Yes. One of the problems in regulating... well, okay, let me back up a little. I think
that if you look at the history of technology, there has always been this race between technological development and the ability of society to regulate it, and it always takes a long time for that to happen. Go back to the printing press: when Gutenberg invented movable type, it had a lot of impact on the Protestant Reformation, for one thing, and on the spread of different ideas and challenges to the Catholic Church, and that led to 150 years of religious warfare. Eventually people reconciled themselves to printing, but it did have these negative impacts. Think about radio and television and their relationship to the rise of fascism and Stalinism: these were technologies that allowed dictators to connect with mass audiences, and again, we eventually figured out how to regulate and deal with those kinds of technology. So there's always going to be this lag between the time a technology is introduced and the time society can catch up with it. And I think that in the case of biotech it's very hard to regulate. I have a colleague at Stanford who works with high-school students. There's a kind of standardized biotech lab that can do different kinds of interventions using CRISPR and so forth and that fits inside a shipping container, and a lot of high schools are holding competitions between students who want to do this kind of genetic manipulation. What she does is try to create a set of norms that these student groups will follow, because it is impossible to monitor what they're doing. This is the usual way we regulate things. How do we regulate nuclear weapons and other dangerous technologies? We've got overhead satellite photographs, we've got nuclear inspectors and so forth, and because it requires a country to engage in that kind of industrial production of nuclear fuel, we can
pretty much monitor what's going on in North Korea, Iran, this sort of thing. With biotech it's impossible: the technology does not require large facilities, it's very widespread, and really the only way you can hope to regulate it is through some kind of normative intervention that gives researchers standards they need to impose on themselves. And a lot of the work being done is very scary. I have another colleague at Stanford who runs a big biomedical research lab and who has been really worried about this. You are probably following the controversy over the Wuhan lab leak, which was dismissed by many people as right-wing propaganda, but now there's a certain amount of evidence indicating that the whole covid epidemic was the result of sloppy security at this Wuhan Institute of Virology. And I think we can expect further things. This colleague of mine has another very scary briefing, about how an American researcher actually downloaded the genetic code for a monkeypox virus and created it in his lab. This is the complete merger of information technology and biotechnology: it wasn't a virus created from another virus; it was simply digital information available to anyone on the internet, and he used it to create a completely novel variant of monkeypox that could actually be much more virulent than the existing one. And what's going to prevent the spread of this? You can't monitor whether anyone in the world is doing this kind of research; you really have to depend on the responsibility of the organizations and the individual scientists working on this sort of thing. So it's very hard to know at what point somebody is going to breach some of those norms and do things that are going to
be very dangerous. Now, I spent a long time in the early 2000s thinking about how to regulate biotechnology, and one thing I observed and felt was that this stuff is going to happen in Asia long before it happens in Europe or in the United States, and frankly this is just a cultural thing. In countries with a Christian religious heritage there is a belief in a fairly sharp dichotomy between human and non-human nature: God endowed human beings with a certain degree of dignity that non-human nature does not have, and this becomes the basis for a lot of our understanding of human rights and their universality. But it also means, in a way, a kind of downgrading of the natural world on the other side of that dichotomy. The dichotomy doesn't really exist in most Asian cultural traditions; there's a kind of continuum from non-human nature through human nature, and the ability to manipulate one spills over into the other. And indeed China was the first country in which someone actually did a germline experiment on a human embryo. Now, that was shut down and the scientist who did it was punished, but I think those cultural inhibitions are going to be much stronger in Europe and North America than in other parts of Asia. So this is again the larger problem of regulating technology of any sort: you can decide to do it in your territorial jurisdiction, but it's going to happen somewhere else. And I think this is the problem now with the idea of regulating AI: if we do it in Europe or in the United States, there is this competition with China and with other big countries, and they are going to be going ahead, and you're going to ask yourself, well, are we self-limiting this critical technology, which will then be developed by somebody else and used against us? Unless you have a global
regulatory scheme, we're stuck in this competition.

Well, finally, a question before we get to the audience. This spring a Norwegian scientist working on AI, Inga Strümke, whom I mentioned to you, wrote a book about machine learning and AI, and I recommend it to the audience if you haven't read it; it's sold out, but it's still available on Kindle for the moment. She points to the European Union and to the Artificial Intelligence Act being put forward in the EU, and both she and Martin Sandbu, for instance in the Financial Times, say that the EU seems to be taking the right approach. What are your thoughts on the EU and what they are doing on regulation, even though you just mentioned that you have to do it worldwide and that it's difficult?

Well, compared to the United States, I think the EU is doing a lot of good work. The Digital Services Act, this new set of rules about the internet, is something we need in the United States. The EU pioneered privacy law with the GDPR; we don't have that in the United States except in California and certain other individual states. And I think this American hostility to state regulation has been a big problem for the U.S., because we now leave it up to the big technology companies to regulate themselves, and they're not doing that great a job. And then it turns out that Twitter gets bought by Elon Musk, and he can change the way they moderate content to suit his personal and political preferences, and that's not something that ought to be left up to rich individuals in the private sector; this is an issue of public concern in a democracy. So laying the foundations for the ability to regulate technology is important. The problem, I think, is this: we've now had decades' worth of experience with social media, and we kind of understand now what some of the
unanticipated dangers of that technology are, and therefore we have a clear regulatory target; whether you can actually do that regulation is another question, but we have a much better idea of what the dangers are. In terms of AI, we really don't know what it is we want to worry about: maybe deepfakes, maybe this intensified ability to target advertising, maybe the move toward general artificial intelligence, which I think is much further down the road than most people think. But again, go back to the 1870s, at the birth of electricity, and imagine you had something like the EU at that point saying: we've got this new technology; we need to constrain the bad uses of electricity and encourage the good uses; we want to save jobs and protect people; and so forth. What would they have done? You can imagine all sorts of false moves and false directions they might have taken that would have completely missed the target. There's one case, for example: a tech guru who, back in about 2015 or 2016, said that if you're a radiologist you might as well give up now, because your job is going to be taken away by artificial intelligence; these programs are so good that they can read X-rays much better than any human being. It turns out, seven years later, there's a shortage of radiologists and their average incomes have gone up, because a radiologist does a lot more than simply interpret X-rays. And I think with AI you're probably going to get similar kinds of effects. There's a question about whether the technology will completely replace a particular occupation or whether it will be complementary, allowing people to do the same occupation but do it better, and I think in the case of radiology it will have that effect: it actually gives the radiologist more tools, and then he or she
can do other things connected with that particular job. That's my attitude personally about ChatGPT. It seems to me there's a lot of writing that goes on that's pretty low-level: thank-you letters, technical manuals, all sorts of things. It's very possible that people with lower levels of education will be able to use ChatGPT and do useful work with it; it'll be like giving them a word processor, except one that generates a lot more words, and it may actually help people with lower levels of education do things that are useful in a modern economy. But we don't really know what the impact is going to be, and we tend to assume the worst, that it's going to take away jobs and not simply complement them. That's why I think premature regulation of this technology is going to be really tricky: we simply don't know what the real impacts are going to be, and there is a danger of aiming at the wrong target. I do think, however, that it's worthwhile investing in regulatory capacity, because this field is moving ahead so quickly and governments really aren't focused on it. If you create a regulatory body that keeps track of the technology, that's something good to have in place, but I'd be a little careful about exactly what powers you give it, because I don't think we really know what it should be aiming at at this point.

Yes, that's what they're saying, and they also say something about transparency, which is important as well, because you have to know what's going on inside the machines, and as far as I'm concerned, I'm not sure we know that. Well, let's go to the audience. My colleague here, professor of philosophy and my colleague at Civita, Lars Fr. Svendsen.

Thank you, Professor Fukuyama. Recent developments within artificial intelligence raise all sorts of interesting questions, but I
will narrow it down to one: will the emergence of artificial intelligence undermine liberal democracy? We can assume that artificial intelligence will help authoritarian and hybrid regimes suppress democratic opposition. It will assist governments in detecting unwanted information faster and more comprehensively, and it will also help in censoring and otherwise neutralizing such information. Such regimes will attempt to suppress or neutralize unwanted information not only domestically but also internationally, and artificial intelligence will enhance their capacity for doing so. One could say that the introduction of artificial intelligence doesn't really bring anything new to the table; it's more a question of efficiency. Censorship by means of artificial intelligence already exists, of course: in January this year Russia introduced the Oculus system, which monitors online information, both text and images. But what is perhaps a bigger threat is how artificial intelligence can be used to create competing content, and the competing content doesn't necessarily have to be very good: if there is enough of it, the sheer amount will have a destructive quality. If you put enough junk into any discussion, you will ruin it. Again, artificial intelligence perhaps does not bring anything radically new to the table, but it makes such practices far easier and cheaper. You can create digital clones, which is far easier and cheaper than making analog clones. Russian opposition politicians are used to having clone candidates with the same names running against them in elections in order to confuse voters at the polls; artificial intelligence will make the creation of digital clones far easier, and the originals will probably just drown in the sea of fakes. Perhaps the biggest threat these technologies pose to liberal democracy is not that they will be used to suppress messages, but rather that they
will make us so intellectually lazy that we will not have much of a desire to express anything of substance at all. So, in brief, my question is: do you think artificial intelligence will further accelerate democratic decline, or recession, or do you think there is light at the end of the tunnel?

I think there are two, or maybe three, clear challenges that the current generation of artificial intelligence poses for democracy. One of them is clearly the general dissolution of our certainty about the information we receive. I've mentioned that already; it's obviously a big problem. With something like deepfakes, it's going to be very hard to know the authenticity of anything we see on the internet. But the solution to that is actually also in AI, because there are authentication technologies that can be used to certify the provenance of a particular digital artifact; it's just that we haven't gotten used to using them, and the amount of money going into deepfakes is a lot more than the amount of money going into research on authentication technologies. This is an example of where regulation really needs to catch up with the speed of the technology, because once we recognize that there is a general problem, that we can't trust anything we see, people are going to realize that they need technological means of verifying that what they're seeing is a real photograph or a real video and so forth. So I think it will come, and the solution is not going to be to ban the technology as a whole; you're going to have to use the technology to control the technology. Now, there's another dimension of this which is really an intensification of what already exists: social media has been very good at manipulating people through targeted advertising. This is what Facebook and Twitter and all these other social media platforms were
aiming at, and we've kind of exposed that; we're sort of aware of the commercial uses this is being put to. But with artificial intelligence the targeting can get smarter; it can get adaptive, so that once people realize they're being targeted, the targeting can change, and this will be done automatically by machines, not necessarily by human beings. I think this is going to be a real problem, because it will open up new avenues for manipulation that are going to be very, very difficult to detect. I've been watching this on Twitter. I've been a Twitter user for many years, and after Elon Musk took over, it's not that he banned certain categories of speech; he certainly allowed conservatives to get back on Twitter. But, for example, the coverage of the Ukraine war really changed once he took over: you just saw a lot less information on the pro-Ukrainian side and more things that were sympathetic to Russia. So it's a very subtle kind of manipulation that's not obvious at first glance, but over time it could have a cumulative effect. So I do think that's going to be something important. On the question of totalitarian control, that's something that really is new. China, through its social credit system, is attempting a level of individual monitoring that has simply not been possible until the invention of machine learning and this kind of large-scale data. You think about totalitarian regimes in the 20th century and what they could know about an individual: they tapped telephones, or they'd follow people around with agents. Now China, especially after covid, because of the integration of the covid monitoring system with the general social control system, knows where you've been, who you've talked to; they can monitor every conversation you've had, and so forth. That's a very powerful form of control.
On the other hand, when people got tired of zero covid, they rose up and protested, and the authorities really had to back down. So even that level of control didn't give the right signals to the authorities as to what they needed to do, not until the thing broke into actual social protest. So again, it's very hard to know whether this stuff is going to work as well as we fear it will, and I think in many cases the only solution we have is to use the technology to defeat the technology.

Thank you. I'm not sure if Målfrid Braut-Hegghammer is here? Yes, there you are; you can come up here, because the microphone is here, and there's a seat for you. Good to see you. She is professor and founding director of the Oslo Nuclear Project, and she will have a question.

Sorry, coming all the way from the back. What a fascinating and important conversation. My question really picks up, I think, on where you left off, and it relates to the relative advantages of democracies and autocracies in developing... of course, I'm having trouble hearing. My question relates to the relative advantages of democracies and autocracies in developing and applying artificial intelligence applications. Do you see this as a winner-takes-all race, or do you see this as a competition where democracies and autocracies may have different areas of interest and arenas of competition? And do you think there may be reasons to worry about the ability of democracies to compete with authoritarian regimes in some of these areas, and if so, are there things we should do to fix that? Thank you.

You got that question? Yeah. So, whether this gives a systematic benefit to autocracy over democracy: again, I think it's very hard to say, because there has been this dialectical process where the technology is used by a regime for certain purposes, and then it generates a reaction, and that sometimes negates the initial advantage
that it gave. People... I think that in the contest between democracy and autocracy there are many other variables besides technology that will determine the outcome of this race. For example, after this I'm going to go over to the Oslo Freedom Forum, and this is what I'm going to tell them: there are some big disadvantages to autocratic government, among them the fact that by concentrating decision-making power in one single individual, they can make decisions quickly and very authoritatively (they are authoritarian, after all), but they can also make huge mistakes, much more readily than a democratic system that spreads out decision-making and requires consensus, consultation, deliberation, and so forth. And I think we've seen examples of both of these in Russia and China. The Ukraine invasion was something that Putin seems to have invented just by himself, not really talking to very many people, certainly not knowing very much about what Ukraine was thinking and doing. Zero covid was another really bad authoritarian decision that could only have taken place in a country with that kind of concentrated decision-making authority in one individual. In both cases the leaders lost control of subsequent events, and the technology, in a way, didn't help them restore that authority. And I think that's the general advantage of liberal democracies: because they require greater social buy-in in order to produce decisions, in order to govern, they tend to be more durable. So yes, I think the technology will give certain authoritarian countries greater control over their citizens, and they will look very strong up until the moment they collapse, because they don't really have that kind of social support from a broad segment of their societies. But look at Venezuela: if there's ever a regime that deserved to collapse, it was probably
that one, and it's still there, two decades after Hugo Chavez started down this road. So it's really hard to know, in the end, which side is going to win this fight.

And then the last question from the audience, currently a political editor of a business publication.

Thank you. My question is about the middle class, the economy, and liberal democracy. There's an idea, going at least back to antiquity, that the middle classes are a fundamental part of a functioning democracy, or at least of a well-governed society. But if we're in the middle now of a technological revolution, as some say, how do you think that will impact the economy, and thus also, possibly, our politics?

Right. Well, that's a hard question to answer. The first response that many economists would give is that it has put a lot of pressure on what used to be considered the middle class. If you think back to a European or North American democracy in the 1960s, who would be considered middle class? Well, you could easily be middle class with a high school education, working in a factory; with a manufacturing job you made enough money that you could afford nice vacations, you could take your family out to eat, and so forth. Clearly that kind of middle-class occupation has disappeared, and part of what's happened with the rise of populism is that people with that level of education have not really participated in the huge expansion of wealth that has been created as we shifted into a kind of post-industrial, fully digitized economy. There's an easy way to think about this: if you were really good at math in the late 19th century, what would that have gotten you in terms of your income? You could have been an accountant; you could have been a professor in a university; but it wasn't an obvious ticket to becoming fabulously wealthy. Whereas today, if
you're really good at math, you can go to Wall Street, you can work for a financial firm and make yourself a billionaire, you can do startups; you can create enormous amounts of wealth with that kind of very technical knowledge, which is possessed by a pretty small part of the population. That's been one of the drivers of inequality: the rewards to certain kinds of cognitive ability have gone up tremendously, and the level of cognitive ability that made you middle class 30 or 40 years ago is not going to give you the same degree of income. That's really what's driving the populism and the inequality in our societies. Now, this is where, with artificial intelligence, I just don't know what the impact is going to be. If it turns out to be a technology that simply replaces entire categories of work that had been done by lower-skilled people, then it's simply going to intensify that disappearance of the middle class that we've already seen up to this point. On the other hand, it may allow people with lower levels of education to do things they weren't able to do before, because the technology will complement their uniquely human abilities: to relate to people, to organize things socially, to have social intelligence; and it will actually add to their incomes. I just think at this point we don't know which of these impacts is going to be the bigger one. But that doesn't mean we're not in a pretty problematic situation right now, with the undermining of what had been middle-class incomes and lifestyles that has already taken place as a result of the technologies implemented over the last couple of generations.

Let's go to a topic that I'm sure the audience here, or at least Norwegians in general, are really preoccupied with: the U.S. elections. And I'm sure that you are as well, because I know we talked
a lot about that before. How do you think AI technology, and the development that we've seen since 2016, will influence the forthcoming U.S. election next year?

Well, it's complicated, because we've already had this back and forth. I think that in the 2016 election people were not aware of the potential dangers posed by social media, and it really took something like the Cambridge Analytica scandal, where Trump was helped by targeted advertising done by Cambridge Analytica. People didn't realize that this could actually have a real-world impact on election outcomes, and since all American elections depend on something like five swing states, and changing a hundred thousand votes in those states can actually swing the election one way or the other, it potentially could be very big. So this led to a big revolution in the way Americans have thought about social media: there's this recognition that these companies, these tech companies, had this power over democratic discourse, and they were misusing it because they were interested in their bottom lines and not in the health of American democracy. So this has led to a big upsurge of interest in the whole question of technology and democracy, and a backlash against the tech companies. In 2016 all my students wanted to go work for Facebook, because that's where all the big money was; today they say, well, I had to take a job at Facebook because I couldn't find anything better. So there has been this backlash, but I don't think we've solved the underlying problem, because in a democracy I do not think it is legitimate to have a single large private company have that degree of influence over democratic discourse. It's like the situation, in a certain way, in the 1960s with the big broadcast channels, where you had just three big broadcasters, and they actually were regulated
quite strictly so that they didn't get too partisan, and so forth. Right now we've got three big social media companies that can have a big impact on the way people think about politics, but we don't have a regulatory structure in place that will put some limits on the kind of political speech they can carry. And I don't see an obvious way out of this conundrum, because I don't think you want to give it to the government; you don't want the government to simply dictate the limits of acceptable discourse. In a modern democracy, if somebody raises a question about the efficacy of vaccines, is that something you simply ban, or is it a kind of legitimate question that people can raise, one that in a democracy you've got to at least debate? I think that's very difficult, and it's very dangerous if governments take too much of a lead. On the other hand, I don't think it is democratically legitimate to let a private-sector company exert that kind of power. Now, I led a Stanford working group on platform scale back in 2020, and we originally were going to look at antitrust, competition issues. We decided that that body of law is really primarily directed at the economic impacts of large-scale corporate concentration and not at the impact on democracy, and that was the thing we had to worry about. So we spent the whole year talking to people, deliberating about this, and we came up with a solution that to my mind is still the only workable one out there, which is something we call middleware. At the present moment, the algorithm that determines what you see when you do a Google search, or go on Facebook, or go on Twitter, is a proprietary algorithm designed by the platform to maximize its own benefit. And like I said with Elon Musk, you can see the impact of this: these
companies have been run by people who were kind of socially pretty liberal back in 2016, but then Musk decides he doesn't like it, so he buys the company, and it changes the whole direction in which Twitter influences people. That, I think, illustrates something of the problem. Our metaphor for this was that social media is sort of like a gun on the table in front of you: you're trusting that the person sitting on the other side of the table isn't going to pick up the gun and shoot you with it, but you have really no way of preventing that from happening. And in effect, that's the situation we're in right now. Middleware would take the ability to curate content, to moderate content, away from the platforms and give it to what would hopefully be a layer of smaller, competitive, third-party companies that would moderate the information you saw from the internet in the way that you wanted. So if you leaned conservative, you could have your Twitter feed or your Google search moderated by a conservative firm; if you're on the left, you could have a different one; if you wanted things that were made domestically, you could have another one; if you want something that will give you green content, because you're very strongly pro-environmental, you could have yet another content moderator that will serve you tailored information. At the moment, the algorithms we are subject to from these big companies are completely beyond our control, and they're non-transparent: we have no idea why we're being served the content we are. We have a rough idea that it's based on our past behavior on the internet, but who knows how they're actually deciding. If you had something like middleware, you would restore competition and choice to the act of moderation. The main complaint we got about this idea was that it would reinforce the compartmentalization of political actors, so that
conservatives would only see conservative content, and progressive people would only see things they wanted to see. That's true, but I think the important thing was to take away the ability to influence political discourse on a very large scale that the social media platforms currently exercise. Where we got stymied was the business model, because it wasn't clear how you were going to fund these companies, and it seems that people are not willing to pay for this particular service in a way that would make it something market forces by themselves would create. Therefore I think you have to have a regulatory environment that forces the big platforms to give up this power over content moderation and gives it to other companies that can offer consumers more choice. Right now we don't have that, but I honestly don't see a good solution other than trying to restore some degree of competition to the question of content moderation.

We have just a few minutes left, so this will be the last question. I know that in 1992, or in 1989, you were accused of being the most positive person on Earth, saying it was the end of history. I know that's not true; I know that you're actually quite pessimistic, or at least sober. So what are your thoughts on AI and technology in the future? Are you a bit optimistic, or are you even more pessimistic now?

Well, I do think there has been a little bit of an overreaction to AI, because of our immediate experience with social media. With social media, everybody began by thinking it was a great thing, and then they realized there's a big downside to it, and now we're kind of negative on any new development. I think that with a lot of AI it's premature; we just don't know what the impact is going to be, and you can see a lot of areas where it will actually be of great benefit and actually
could be a benefit to democracy. So I'm guardedly optimistic. But I guess my more fundamental view is that we can't really stop this thing. We can regulate it, we can try to guide it, and we can try to put limits on the misuses of it, but the nature of these technological changes is a global environment of competition in which it's very hard to hold the broader shift in technology back. So we have to get smart about it and figure out how to put some guardrails around it, but accept the fact that it's going to happen, and that we might as well benefit from the parts we can benefit from. That's about as much as I can say.

Thank you so much. And before we give you a big applause, I'll just say that for people interested in reading what you are writing at the moment (you're writing all the time), you have a really fabulous blog on American Purpose, and it's worthwhile looking into that, and of course you're writing in all the other papers as well.

Yes, and I also have a new YouTube channel.

Yeah, you have, yeah. And the podcast, of course. And Frank is also coming on Civita's podcast later this summer, not talking about liberal democracies but about something else: Adam Smith. Well, thank you so much for coming, and thank you to the University for organizing this event here at the University of Oslo. Thank you so much, and thanks to Civita, who wanted to have this breakfast meeting here in the Aula. Thank you. Thank you.
Info
Channel: Tankesmien Civita
Views: 30,362
Id: zuRJ3RwtXqE
Length: 59min 56sec (3596 seconds)
Published: Tue Jun 13 2023