System Error: Where Big Tech Went Wrong and How We Can Reboot

Captions
Good afternoon. My name is Brent Orrell, and I'm a senior fellow here at the American Enterprise Institute, where I study workforce development policy. It's my pleasure to welcome you to our book event on System Error: Where Big Tech Went Wrong and How We Can Reboot, with two of the book's authors, Dr. Robert Reich and Dr. Jeremy Weinstein of Stanford University. Unfortunately, Dr. Mehran Sahami, who had wanted to be with us today, was detained by his work in Palo Alto and is unable to join us.

The framing around this issue, and why we're here, is extremely important. Why is AEI interested in AI? To begin answering that, let me read you our mission statement: the American Enterprise Institute is a public policy think tank dedicated to defending human dignity, expanding human potential, and building a freer and safer world. The work of our scholars and staff advances ideas rooted in our belief in democracy, free enterprise, American strength and global leadership, solidarity with those at the periphery of our society, and a pluralistic, entrepreneurial culture. As a general-purpose technology that is likely to affect all aspects of our lives and our relationships with one another as people and as nations, AI touches every strand of the AEI mission. Defending human dignity? Check. Expanding human potential? Also check. Building a freer and safer world? Yes. AI has clear implications for democratic institutions, free enterprise, and maintaining America's role in the world as the main global pillar defending democracy and human rights. Low-income and disadvantaged populations perhaps have the most to hope for from AI, in my view, as well as much to worry about. We are concerned with maintaining an open, pluralistic public square, and we see vast potential for good and ill in how AI could affect public dialogue. And of course, the potential of AI to add trillions of dollars of value to our economy means we have an intense interest in maintaining an open, competitive, and entrepreneurial AI ecosystem.

Our topic today is incredibly timely. The public release of ChatGPT in November of last year, followed by GPT-4 earlier this year, has caused multiple frenzies of both hope and, in some cases, despair. President Biden and Majority Leader Chuck Schumer are pressing hard to construct new regulatory structures to manage AI risk. The public dialogue around this topic appears, as with so many other issues, to be polarizing, but not between progressives and populists or Republicans and Democrats. It is polarizing mainly between techno-optimists and pessimists: between those who look at AI and see the enormous potential it has to solve problems, improve health, and increase prosperity, and those who see a different potential, a world in which human beings are sidelined by machines and, in the worst case, might be hurt or destroyed by them. We want to foster a different dialogue here today. AI will not deliver on the millenarian hopes of some, nor is it likely to cause an apocalypse as others fear. We are intelligent, adaptable creatures; we have gotten through challenging transitions in the past, and we can do the same with AI. But there are bugs, and sometimes even features, of AI that require scrutiny and planning to help bring about the best while mitigating potential dangers. Our speakers are going to help us understand more clearly what they see as some of the key challenges in AI development, including some of the challenges in its underlying assumptions about progress and humanity, and the ways it might amplify undesirable aspects of our economy, society, politics, and culture.
Robert Reich is a professor of political science at Stanford, who also holds an appointment in the Department of Philosophy and is one of seven associate directors of the Stanford Institute for Human-Centered Artificial Intelligence. Jeremy Weinstein is a professor of political science and a senior fellow at the Freeman Spogli Institute for International Studies, as well as the Stanford Institute for Economic Policy Research. For our program today, this is the run of show: Rob and Jeremy are going to do a bit of a tag team, talking through the main themes of System Error, before we turn to a conversation facilitated by Shane Tews, an AEI senior fellow and leader of AEI's technology policy team, and myself. Following that, we'll turn to questions from our in-person audience and our virtual audience. If you're watching online, please direct questions via Twitter to the hashtag #AEISystemError or by email to my research assistant, Hunter Dixon, at hunter.dixon@aei.org. One final note: you'll notice at the front of the room the table with copies of the book, which you are encouraged to take at no charge at the end of the session. So with that, I'm going to turn it over to Rob and Jeremy to walk us through System Error.

Thanks so much, and thanks everyone for coming out today. I'm Jeremy, and I'm the political scientist, the social scientist, of the writing team. This is Rob, my colleague, who was introduced as a political philosopher, and our third co-author is Mehran Sahami, who is a computer scientist. Together we've been engaged in an effort on campus at Stanford over the last many years to think about how we prepare and educate computer scientists, in particular the next generation, to navigate the really complex ethical and policy issues they confront as they build and design new technologies. In that sense, sometimes when we come to Washington I feel like we're coming from a planet far away. We're drinking the water of Silicon Valley; we're living the reality of a world that has been focused on large language models since long before Washington, DC discovered what they were. And I knew the uniform to wear when I come to Washington; the only thing I'm missing is my badge. Rob didn't bring his tie, but I knew that when I come back to Washington I need to bring mine.

Part of the way I want to open the story, from my perspective, is that I've spent most of my career working on issues of foreign policy. I served in the Obama administration on the White House National Security Council, and in the second term I was the deputy ambassador to the United Nations. It was in that role that I saw, in all sorts of powerful ways, how new technologies were transforming the policy landscape in a way that government was wholly unprepared to address. This came up, of course, around issues of encryption and the balance between privacy and security, which was very much at the top of the agenda during the Obama administration. The first set of cyberattacks on major private sector companies were also something we experienced when I was at the White House and on the Deputies Committee. So when I finished my service in government and went back to Stanford, I was thinking: what's my role in thinking about this set of frontier policy challenges we're likely to confront? When I got back to Stanford, having been gone for many years during the Obama administration, I found a campus that had been absolutely transformed by the growth of computer science as a major and a discipline.
We had gone from about 100 majors a year in computer science to 350: the largest major for men, the largest major for women. A campus that previously had a real diversity of fields of study was narrowing down on this emergent discipline of computer science. At that time we had headlines like this in The New Yorker: if you want to get rich, you go to Get Rich University, and that's in Silicon Valley, the paved pathway from a Stanford education straight up to Sand Hill Road to get your VC funding and then start your company. And of course, when I got back, we were at a moment when tech utopianism and tech optimism were being replaced by a set of concerns about the potential harms and consequences of new technologies. We'd had the 2016 election campaign, concerns about the pollution of the information ecosystem, concerns about concentrated tech power. So Rob and Mehran and I came together and said: at this moment in time, when we're equipping students with these extraordinary superpowers of the 21st century, how do we cultivate a mindset among young technologists that helps them navigate the challenges being created by the technologies they're building? The book is a story, from our perspective, of the kind of interdisciplinary mindset that's necessary to manage the challenges of this new technological frontier and also to amplify the benefits that Brent spoke about at the outset. How should technologists approach their work, and how should policymakers think about their relationship with the tech sector? We're going to walk you through a bit of the outline of the book to get the conversation started, and then we'll dig into some of the details of the tech policy conversations of the moment.

The book opens with a set of stories, and these stories capture what we think was one of the particular pathologies of the culture on campus at Stanford when we started doing this work. We tell the story of Josh Browder. Josh Browder was a Stanford undergraduate. He arrived as an 18-year-old, he developed skills as a computer scientist, and very quickly he had an intuition that he ought to drop out of Stanford, like many successful founders before him, and start a company. He had a pretty simple problem he wanted to solve: when he was a 17-year-old living in London, he got a bunch of parking tickets, and those parking tickets were incredibly annoying because they led to large fines he needed to pay, and he wasn't yet at a place in his life where he was earning money, so he didn't want to pay them. The first thing he did was build a bot that could basically get you out of parking tickets. It would learn from interaction with government service providers what the best answers are to give to particular questions, and then it would automatically file a form on your behalf and get you out of having to pay a parking ticket. He called this app DoNotPay, and he offered up DoNotPay as the kind of technology and tool that can get you out of all sorts of really annoying situations you might find yourself in. He was able to raise a round of venture capital, I think the first round was seven million dollars or so from Sand Hill Road, as a 19-year-old, and he dropped out.
One of the reasons we start with this story, and we've brought Josh Browder to class and engage with him in hearty debate on a regular basis, is that Browder wasn't asking some basic questions that we think technologists who are building products ought to ask: why do we have parking tickets? What function is served by parking tickets in society? I assume many of you live in DC. There's street cleaning that happens in DC; you have that annoying moment every week where you need to get up early and move your car to the other side, and maybe there aren't enough spaces in Dupont Circle or Adams Morgan or wherever you're living. We have handicapped spaces, and we preserve spaces for people who are disabled, for a reason. In some places parking tickets are used to manage congestion and are part of a strategy for reducing congestion and emissions. In the UK, parking tickets are actually used to finance the maintenance of roads, so they are a central revenue-raising strategy for government. There are all sorts of reasons that parking tickets exist, but Josh Browder wasn't focused on those reasons. He was focused on getting himself out of parking tickets, because that was a pain point for him as an individual, and it speaks to some of the trade-offs involved in designing new technologies. Browder's vision doesn't stop with parking tickets, because his ambition is to replace lawyers, to replace the legal sector: to use automated tools to negotiate divorces, to use automated tools to manage the difficult challenge of custody of children when couples separate. A very ambitious vision about replacing an entire sector of our society with a bot. We use the parking ticket example to say: look, there's a lot more complexity here than building the tool, and the question is how we weigh the kind of value trade-offs that are inherent in new technologies as they're built.

That's a story that goes back at least four or five years, and I want to give at least an initial framing, before I give you a sense of the contents of the book, to bring us up to date. I used ChatGPT a few weeks ago to ask the question, should AI have professional norms? And you get this on-the-one-hand, on-the-other-hand sort of answer. These days, when you think about ChatGPT, you think about all the productivity boosts it might bring you in any number of different domains. But if you're an educator like us, you also think about the ways it can be used by students in classrooms to cheat. For the first time, in the past six months, nearly every professor and nearly every middle school and high school teacher I know has had to craft a policy about the use of language tools in the completion of assignments. Since the release of ChatGPT in November 2022 and the shift in the marketplace from a research lab orientation to a product lab orientation, you've got companies like ParagraphAI using these base models, these foundation models, to try not merely to offer up writing resources that would be used by people like you, in a think tank or a research institute, to supplement your already developed writing skills and judgment, but catnip for eighth graders writing their essay on whatever it turns out to be: a replacement for human judgment rather than a supplement to human judgment. This is the sense in which the frame we use in the book, from DoNotPay all the way up to ChatGPT, is that the rapid development, deployment, and release of digital technology has created a set of negative externalities.
In this particular case, with ChatGPT, the decision to release it dumped onto the backs of teachers across the world the responsibility of trying to figure out how to diminish the negative use case of cheating. It could have been the case that the really talented people at OpenAI, as they are now doing, released the tool with watermarking or various forms of authentication and provenance built into the technology. But they didn't. They put it out there, and it became the problem of every instructor across the world to figure out what to do with these negative externalities.

That idea of negative externalities is something that came up in the hearings that happened just recently with Sam Altman, the CEO of OpenAI. We're now at a moment in which, I think it's fair to say, a policy window is finally opening in the United States, after 30 or 40 years of a deliberate orientation toward a policy oasis around Silicon Valley, a policy oasis around the development of technology, so that America could win the race to pave the information superhighway. This policy window that's just opening is the moment we find ourselves in: the release just yesterday of Senator Schumer's AI framework, and I happened to have the extraordinary honor and privilege of being in conversation with President Biden in San Francisco on Tuesday afternoon, two days ago. Things are beginning to ferment here, and the idea of confronting some of the negative consequences of technology is now on the front burner.

This frame, the idea of externalities, will seem to people here at AEI, I hope, an ordinary frame, nothing especially controversial, nothing even novel in terms of intellectual architecture. It is a familiar frame for anyone with economics training: when we get private investment in the marketplace and products begin to roll out, consolidation of market power sometimes begins to happen, and the rising use of some particular product then often brings about some negative consequences. When we have negative externalities, there is a potential role for regulation to internalize those externalities in order to diminish the negative use cases, the unintended consequences, while still keeping the great benefits. We're going to offer you a frame for thinking about a variety of different decisions about technology through this language of negative externalities.

I'm going to begin by giving you the three-part diagnosis in the book of what's gone wrong, what the system error is. We want to make every effort we can to communicate that what we see happening is not the product of a bad person, of a badly motivated person. We don't want to say that Josh Browder is somehow ill-informed or badly motivated, and the same for Sam Altman. We want to communicate something about a system of incentives that is in place and that reliably produces the outcomes we get. Don't fixate, in other words, on the Elizabeth Holmeses of the world, people who break the law. Fixate instead on the system structure of how technology is developed and deployed.

Part one of that diagnosis starts with the mindset of technologists themselves, and that mindset is one of relentless optimization, applied not merely to technology but to the world as a whole. Optimization, despite the impression technologists give of it being a superpower, if you can optimize something, that makes everything better, is actually, in our view, a potential liability. Here, putting on my philosopher's hat, the important thing to understand is that optimization, efficiency, maximization: these are means to accomplish some end. If you don't have a complementary portfolio of skills to evaluate the worthiness of the end you're trying to optimize, if you optimize for something bad, you can make the world worse, not better. Optimization has to be seen as a strategy for getting you to a goal, but you have to be able to have confidence that the goal itself was worth pursuing in the first place. To put it in slightly wonkier language, efficiency maximization is a derivative value, not an intrinsically good value; it only becomes a good thing to optimize if the thing you're optimizing for is independently worthy itself.

So the basic spirit here is that there are three different problems with the optimization mindset. First, you can get the choice of bad goals or objectives: you optimize for something that's objectively bad, increase the efficiency of producing that bad thing, and you make the world a worse place, not a better place. I don't think that's frequently the case with technologists, although it is something to be aware of. The much more common problems are the second and third issues with optimization. Number two is that when you take a large and general mission, take Facebook's mission to connect the world, or take Google's mission to organize the world's information, and try to make it computationally tractable, to create a technological solution to that interesting mission, you need to create what a technologist would call a representationally adequate, computationally specifiable version of that mission. You need to reduce the grand mission to a set of tractable proxies. It's a familiar problem in economics that if you begin to strive to accomplish a proxy rather than the main goal, incentives organize around accomplishing the proxy, and if the proxy is distant from the main goal, then you end up optimizing for something that falls far short of your mission, and you forget, because you had to work with this computationally tractable thing. We give an example in the book that you might know of: in 2018, I think it was, a memo from one of Facebook's chief strategists was released that said, at Facebook we strive to connect the world, and the way we measure that is by growth and engagement on our platform; even if people do bad things with Facebook, even if terrorist incidents happen because we connected people, still we strive for more growth and more engagement, because this is our measure of whether or not we're connecting people. That's the sense of the problem of finding a proxy for a good goal.
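To make the proxy problem concrete, here is a minimal, purely illustrative sketch (not from the book): a toy ranking exercise in Python in which an "engagement" proxy stands in for a mission of "connection." All of the variable names, numbers, and the modeled effect of inflammatory content are hypothetical; the only point is the shape of the result, that an optimizer ranking by the proxy selects for what drives the proxy, not for the underlying goal.

# Illustrative sketch only (not from the book): optimizing a proxy metric
# ("engagement") instead of the real goal ("connection"). All values hypothetical.
import numpy as np

rng = np.random.default_rng(1)

n_items = 1_000
connection_value = rng.normal(0, 1, n_items)        # the thing we actually care about
outrage = rng.exponential(1.0, n_items)              # inflammatory content ...
engagement = 0.4 * connection_value + 1.0 * outrage  # ... drives the proxy hardest

def top_k_mean(rank_by, value, k=50):
    """Average true value of the k items an optimizer would pick when ranking by rank_by."""
    picked = np.argsort(rank_by)[-k:]
    return value[picked].mean()

print("mean connection value when optimizing the proxy (engagement):",
      round(top_k_mean(engagement, connection_value), 2))
print("mean connection value when optimizing the goal directly:",
      round(top_k_mean(connection_value, connection_value), 2))
# Ranking by the proxy reliably selects for outrage rather than connection,
# the drift the Facebook memo example describes.

In this toy setup, the proxy-optimized selection scores markedly lower on the quantity the mission actually named, which is exactly the gap between a tractable metric and the goal it was meant to represent.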
The third problem is the problem of multiple valuable goals. Let me ask here, and this will be a difference between Washington, DC and Silicon Valley, I predict: how many of you have enjoyed a Soylent lunch anytime recently? How many of you know what Soylent is? Yeah, most of you. Soylent is a meal replacement powder. Many of the things technologists try to solve can be solved successfully for a particular goal, but if what we care about socially is a balance of goals, a balance of objectives mixed together, the successful optimization for one thing can upset that greater balance. Why do I mention Soylent in that regard? Soylent is described as an optimal replacement for the body's nutritional needs; the actual website says food is an inefficient delivery vehicle for what the body needs. The value of food, of course, consists in cultural identity, social connection, pleasure in eating. If you have Soylent alone, you're going to lose out on those other values, even if we're charitable in thinking that Soylent optimizes for the body's nutritional needs. Many products have this flavor: optimizing for one thing successfully, even if that thing is objectively good, like the body's nutritional needs, comes at the cost of other values that also matter. So that's the problem with optimization.

The second part of the diagnosis is the venture capital funding structure of tech companies, in which there's a desire to have a unicorn. You try to scale something as quickly as possible; you try to lock in your market position, get that classic hockey stick of growth, and if there turn out to be socially unintended consequences, you mop those up later. First you lock in your market position, you go for scale, hyper-scaling, blitzscaling, and only downstream do you try to hire people who will anticipate socially unintended consequences and begin to mitigate risks. The venture capital structure puts a premium on scaling as quickly as possible, reaching unicorn status, and dealing with potential problems later on.

And then, finally, the third part of the diagnosis is the long-standing regulatory oasis produced by Washington, DC: the decision not to put in place some ordinary guardrails. These days the conversations about AI have the following flavor. I'm trying to introduce basic common-sense ideas: we don't let cars on the road without some basic safety standards; we actually expect industry norms as well as regulatory arrangements; we don't put milk or meat on the grocery store shelves unless we have inspection regimes for them; we don't allow drug discovery to take place in people's garages, where you tinker around with a lab set and then sell it to the neighbors down the street to see what happens to them. Why should the release of super-powerful technologies like artificial intelligence be any different? It's a common-sense approach to thinking about basic safeguards. Well, in Washington, DC in the 1990s there was a decision to put a regulatory oasis around the technology sector. The inventor of the internet, you all know him, Al Gore, promised to eliminate many of the regulatory barriers in the path of the information superhighway and performed the most major surgery on the Communications Act since it was enacted in 1934. That's what led to Section 230 of the Communications Decency Act, passed in 1996, the famous provision that immunizes from liability any social media or internet company that hosts user-generated content in any form or fashion. That was the policy decision that accelerated the developments of Silicon Valley. But now that we've reached a moment in time where these negative consequences, these negative externalities, are so apparent, a regulatory oasis is no longer a wise approach.

So the next part of the book, the second part, takes you through four different kinds of technological questions where, instead of just thinking "technology good" in one case and "technology bad" in another, we face and confront value tensions, or trade-offs, between rival things that we care about. The technologist, whose mindset is an optimizing mindset, always wants to think there's a correct answer, a uniquely optimal solution, and is often flummoxed by the idea, familiar to economists and to any human being reflecting on the whole balance of their life, that there are value tensions to be navigated. So, for example, in algorithmic models, whether they're used in hiring or credit scoring or criminal justice or wherever else they happen to be deployed, we're familiar these days with various questions about bias. Can we get increased accuracy in the model, and what happens if we're also trying to solve for algorithmic fairness, algorithmic explainability, algorithmic due process? How do we ensure that we get the maximum possible accuracy in a model while also ensuring that we diminish bias and get various forms of due process and explainability? We have a variety of examples in the chapter on algorithmic bias to show how this has worked in practice. A famous example is Amazon doing its very best to build a hiring tool for its own company, since it was trying to hire so many thousands of people a year, and finding that it could not eliminate a bias against hiring women from its own hiring model. Amazon eventually scrapped the algorithmic hiring model because it couldn't determine how to eliminate this persistent bias. These are ways in which algorithmic decision making has to confront these kinds of value trade-offs.
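As a purely illustrative sketch (not from the book, and not Amazon's actual system), here is the kind of simple audit that surfaces the trade-off this chapter describes: a toy scoring rule with made-up data, a hypothetical cut-off, and two hypothetical groups, checked for how differently it selects across those groups.

# Illustrative sketch only: auditing a toy "hiring" score for group disparities.
# The data, groups, and threshold are all fabricated for demonstration.
import numpy as np

rng = np.random.default_rng(0)

n = 10_000
group = rng.integers(0, 2, size=n)              # 0 = group A, 1 = group B (hypothetical)
# Historical bias baked into the training signal: group B scores lower on average.
score = rng.normal(loc=np.where(group == 0, 0.60, 0.52), scale=0.1, size=n)

threshold = 0.58                                 # hypothetical cut-off the model "learned"
hired = score >= threshold

rate_a = hired[group == 0].mean()
rate_b = hired[group == 1].mean()

print(f"selection rate, group A: {rate_a:.2%}")
print(f"selection rate, group B: {rate_b:.2%}")
print(f"demographic-parity gap:  {abs(rate_a - rate_b):.2%}")
# The 'four-fifths' heuristic used in US hiring audits flags a ratio below 0.8.
print(f"disparate-impact ratio:  {min(rate_a, rate_b) / max(rate_a, rate_b):.2f}")

Measuring the gap is the easy part; the value trade-off the chapter describes arises when closing it (by changing thresholds, features, or training data) reduces the accuracy the builder was optimizing for in the first place.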
Another chapter focuses on various questions about data maximization and privacy. So many of the debates about the internet these days concern the data-abusing practices of internet or social media companies that, in exchange for offering us a free product, suck up all of our data, and then the various invocations of privacy guarantees, whether from a company like Apple, with privacy by design, or from a messaging app; you probably have on your phone right now Signal or WhatsApp or some other end-to-end encrypted messaging app. The tension here is that guaranteeing privacy, let's call it an objectively good thing but not the only good thing, can come at the expense of national security or personal safety. Signal is a fantastic technical accomplishment: end-to-end encrypted technology that guarantees not only that the government can't inspect your messages but that neither can the company itself. But Signal has been used for child pornography and for organizing the January 6th insurrection: all kinds of problems arise from the use of these private messaging services. If you ask Signal why it gets to decide the balance between privacy and national security while no one else has a say, its answer will be: we just think privacy is really important; that's what we care about here at Signal. On these larger social questions, these value trade-offs about how much privacy, how much national security, how much personal safety, these are social questions that deserve wider input.

Another frequent topic of conversation is the deployment of automated systems. These days they are frequently AI systems, but they needn't only be that: automated vehicles, various forms of robotic solutions in factories. How do these automated systems interact with human well-being? In the chapter on automation we point to two particularly important trade-offs: the potential for extraordinary increases in productivity through automation at the expense of, first, human material welfare, which is to say simply jobs and the income that comes from the jobs people had, which might be either transformed or displaced by the machines; but also, and importantly, human agency itself, the value that we as human beings attach to our own exercise of agency to deliver certain outcomes. You probably have a grandparent or an uncle to whom you've explained that in the near future it will be possible to have an automated vehicle fleet and to construct roadways in a way that will be massively safer than humans driving cars, with gains across a whole variety of places: less traffic, fewer emissions, much greater safety. And your uncle is going to say, I don't care, because I want to have my hands on the wheel and my foot on the gas pedal driving down the road; it brings me pleasure; let me have my agency. Multiply that across a whole variety of different fields: some human beings will feel that trading away their own exercise of agency for a variety of productivity and safety gains isn't worth it, and we have to confront those value trade-offs too.

Finally, there's a chapter on perhaps the most familiar of all the trade-offs: free speech in an online digital public sphere. How do we balance the interest in having an information ecosystem that is not filled with the pollution of misinformation, disinformation, and hate speech against the ability of people, now at unprecedented scale, to express themselves and to reach an audience through these online mechanisms? We rightly want an attachment to freedom of expression; that is an objectively important value. But so too is a healthy information ecosystem, and if you have an amplification system driven algorithmically for engagement that incentivizes the production of extremist information, misinformation, disinformation, and hate speech, because that's what generates engagement, we have another value tension to confront. This is going to be all the more true with AI-driven tools that will pump automated information into our digital public sphere at a new scale.

So these four chapters take the initial diagnosis of the system, optimization, the venture capital funding obsession with profit maximization and hyper-scaling, and the policy oasis from Washington, DC, and map it onto four different domains in order to identify what the value trade-offs are and how to confront them. In the book we have a whole variety of proposals about how to manage these trade-offs and how to think about changing the basic systems arrangement. I'm going to turn the floor back over to Jeremy for the last bit of the book and where we are today.
An important thing to consider about the moment at which we wrote this book is that people found themselves looking at this balance of extraordinary benefits from new technologies and this emergent view of their negative externalities, and they felt largely powerless to think about the consequences and what to do about them. In essence, the operating view of the moment was one in which technology washes over you like a wave and there's not much you can do about it; if you want the benefits, the extraordinary returns that come with new technologies, you have to accept all the costs. This is a mindset that we wanted to go straight after in our teaching, in the work we do with professional technologists, and in the book itself, because one of the myths of Silicon Valley is that we can't really know the consequences of technologies when we design and build them, that there's no way of identifying what some of the potential harms might be in advance, and that this gives industry the license to move forward with deploying products in the world without any effort to think about those consequences or how to mitigate them. We think that while that may be true under some conditions, some second- and third-order consequences can't be evaluated or seen until a technology is rolled out at scale, it's mostly not true. It's mostly the case that when you're designing a product for the world, you have to make all sorts of consequential decisions about how you design it, how you set up its interaction with human beings, and what kind of policy environment you think ought to exist to either enable the technology or mitigate some of its harms. In fact, when you integrate the thinking of computer scientists with the thinking of social scientists, we can be in a position to anticipate potential impacts, measure those impacts empirically, and think concretely about how we can influence and shape the effect of technology on society. This may seem obvious today, because it's the conversation we're having about large language models and about AI, but it hasn't been a conversation we've had about technology for much of the last 25 years. Part of our message in the book, and part of our message to students and to all of you, is that the effects of technology on society are ours to shape. And by ours we don't just mean people working in the policy sphere; we mean our young computer scientists, we mean those who are financing companies from Sand Hill Road, we mean voters and consumers. Ultimately we all have something at stake in how we referee these value trade-offs, and for much of the last 25 years we've left the refereeing to those who run the companies, treating it as something that doesn't merit public attention, debate, and transparency about how these decisions are being made. Now, in many ways, that serves the motivation of our largest tech companies. When tech companies are at the table talking about the potential harms we've shared today, their first orientation is to say: well, all of you are consumers, and if you don't like the impact of technology on society, don't use our product, just opt out. But the challenge when it comes to externalities is that you cannot opt out of the consequences of social media platforms for the health of our information ecosystem.
You cannot opt out of the implications of gig work platforms for the financial burden that falls to federal and state governments when people lack basic protections around their labor. You don't have the ability to opt out of the consequences of labor displacement, which might require major investment in either income support or retraining. So just walking away from new technologies as a consumer doesn't enable us to deal with these externalities.

Now, the book talks in various places, and we can come to some of this in the Q&A, about what we might call point solutions: what do we do about data privacy, what do we do about algorithmic auditing to deal with bias, how do we think about retraining? These are all point solutions to some of the challenges that each new technology presents. But the broader frame of the book, built around the system error, is that with each technology we see the repetition of a dynamic: the realization of extraordinary benefits from scientific advances and the monetization of those advances through our robust private sector, but then the development of externalities that go largely unaddressed. We see that as a system error that requires a set of systemic solutions. Think about the roadway system: we roll out cars, an extraordinary intervention in the world that changes where we can live and where we can work, but we don't encourage everyone to pick which side of the road they drive on, and we don't encourage people to drive as quickly as they want when they go by a school if that's their preference. We set in place some guardrails for the introduction of this new technology, recognizing that it might have potential harms.

So we offer a frame that says there are three core elements of a systemic change we need to contemplate, which go beyond the point solutions of the moment. The first is that we need to think about this new field of computer science, and the engineers who both build products and manage companies, as developing a responsible professional ethos: a code of conduct, an ability to self-regulate their own behavior, because of course disruption is always going to move at a pace faster than democracy can. In the book we trace the emergence of a set of professional norms in the biomedical sciences, and in particular around the other consequential technology of the most recent decade, the development of CRISPR and gene editing. Think about all the elements of professional norms and social sanction that exist in the life sciences, all the way up to formal modes of government regulation; it begins with the self-policing that happens among scientists and among companies themselves, the standards to which they hold themselves. Contrast that with computer science, where the ACM, the professional community of computer scientists, has a weak code of ethics, no IRBs exist that really exercise oversight and control over projects in data science, and of course there is no regulatory body. This is a vastly underdeveloped space, even when it comes not to government action but to social coordination among technologists about how to referee these value trade-offs and balance the benefits and harms.
The second element is to think about the concentration of power that has emerged in a small number of very large tech companies. This issue is again on the table as we think about potential proposals for regulating AI. A world in which consumers demand and want variety in privacy policies, in content moderation policies, and in AI tools that may be tested or evaluated in particular ways to manage some of these downside consequences, but in which it is so hard for smaller companies to enter spaces largely dominated by a small number of firms, is one that justifies the intense focus on antitrust concerns in the present moment.

And then the final piece of the puzzle, and this is something I think all of you can appreciate, is that we can't make progress on these issues without a government that is capable of evaluating concerns related to new technologies and thinking about the appropriate balance between policies that promote innovation and policies that protect consumers and society more broadly from a set of harms. I don't think you could find anyone who would disagree with the argument that we don't have this government now. Part of the argument we make in the book is that we don't have this government for a reason: we have not designed a pipeline of people into our governmental apparatus, with the appropriate incentives, positioning, and salaries, to enable us to build out the scientific and technological expertise that we need in Washington. In fact, we've done the reverse; we've stripped our government of scientific and technological expertise over the last 30 years, and that's a story we tell.

So as we come to the Q&A, we find ourselves at a fork in the road. When we wrote this book, and it came out in hardback about a year and a half ago, the last chapter, on how we build democracies capable of governing tech, opened with the story of large language models before anyone was focused on them. We said: this is the example of the next frontier technology that is going to raise these value trade-offs and demand a response from society. Today we find ourselves at that fork in the road, and people are waiting for Washington to act, because Europe has acted. Europe passed its AI Act, which it had been developing long before large language models were on the table, and it has adopted a risk-based approach rooted in the precautionary principle, with an orientation toward transparency, testing, and tracking for algorithmic decision making and AI-driven tools that might have significant consequences for society. How are they going to do this? We still don't know, but the articulation of a framework is already in place. Yesterday at CSIS we got a speech from Senator Schumer, an articulation from the perspective of the majority leader, but one trying to reach out broadly to both sides of the aisle: we're going to need to establish a distinctively American view about the appropriate strategy for regulating artificial intelligence, large language models, and the like, built around a set of principles. These are principles we've seen in the AI principles of companies and in statements of the G7, but what does it mean to make these principles operational? That's the question on the table for all of us, and really the next step in this opening policy window: how we seize the benefits of these new technologies while mitigating the harms and beginning to address the system error that has given us these repeated externalities over time. Thank you. [Applause]

Thank you both for writing this book. I got the early edition, so apparently I'm missing a whole other chapter and I'm going to have to read the new book as well, but this one is still good. I read it with lots of notes in mind.
One of the things you started with is optimization, and one of the challenges of being in Washington, as you both have spent time here, is regulation. Those two things don't normally blend together. On top of that, I spent a lot of time in the internet governance world from the very beginning, and I'd always be challenged with: well, how are the zeros and ones supposed to know about humanity? You're just thinking about it from an engineering perspective. So as I was reading this, I was thinking about where we can try to land on regulation that encompasses a lot of what you focus on in this book, and I've really come down to data. You talked about data maximization and privacy; I always say I don't really believe in privacy, it's a feeling, but data is something that we can regulate. So data governance seems like a good place to start, because I think we've got both the lawyers and the engineers understanding the objectives. Then we have the question, and it's interesting as I was looking at the Schumer framework, of how we continue to keep the verticals, which have more security measurement to them, versus the horizontal approach we're seeing Europe take, where they kind of throw everything into one basket, and sometimes I think that's not the best way to do it, because we end up with something either too strong or too weak in the process. So what do you think about data governance as maybe our starting point here in Washington, because we've got to start somewhere?

Can I start on that? Sure. I want to emphasize the way you began the question, Shane, about this mismatch between regulation and optimization. One of the stories that I didn't mention just now, but that is in the book and was really a big awakening for me: I have basically oriented my life as a political philosopher around thinking about democracy, theoretically and then as an issue of measuring the performance of democracies. I got invited to a dinner in Silicon Valley with a bunch of venture capitalists and technologists, some of whose names you would recognize. We were seated, and the person who had invited us said: the topic for conversation I'd like to have at the salon table tonight is to imagine what it would be like if we could find a place on Earth, a piece of land, that we devoted to the maximal progress of science and technology. The guy across the table raised his hand and said, Sergey, Larry, and I have specced it out already, and went on from there. About fifteen minutes into the conversation I said, excuse me, professor over here, can I just ask: is this a democracy we're discussing, this plot of land? And the answer was: absolutely not; democracy holds back the progress of science and technology; this has to be a beneficent technocracy. My response to that type of view, of optimization as something to apply to the world at large, is that you technologists are fundamentally misunderstanding what democracy is for in the first place. Democracy is a certain type of technology itself, an institutional arrangement for confronting persistent disagreements among citizens who are equals; it's an institutional design for compromise and negotiation that treats people fairly in some form or fashion. If you look to democracy to optimize some social output, you have engaged in a category mistake.
Don't think that you should get optimal results from democracy; democracies are valuable for other reasons. So I think that mismatch is, in certain respects, really profound between the optimizing orientation of the technologist, who at the end of the day in certain cases doesn't attach value to democracy itself, because it's a sub-optimal arrangement in certain ways.

All right, on data governance, as a way of thinking about how something that's in the interest of certain kinds of technologists can map onto a regulatory approach: yes, I think you're right. The initial foray into this arena by Europe, the GDPR and various forms of data management and data governance, takes, let's be honest, the weak form of dealing with notice and consent through all those pop-ups we get whenever we're browsing the internet about what our cookie settings should be. We're trying to explore data governance now not as a matter of individual choice, let me choose my data settings and preferences that maybe follow me across the web, but rather as something we position at a social level: who owns the data, under what circumstances can it be granted to other people, under what circumstances can it be aggregated and then analyzed by various prediction machines. I think that is a place to start, because it will limit the basic fuel on which computing itself operates, and so it will shape all of the downstream choices, not merely about the development and deployment of new technologies, but also, if the United States or any other place goes in this direction, some understanding of data as a right for individuals or a social guarantee. But I think that's still an untested proposition; the GDPR is just the initial way to orient ourselves toward the idea that this basic suctioning-up of data can't be the healthiest approach of all. So I haven't given you a particular plan, but I'm agreeing with you that it's the right place to begin.

Let me just add one thing to Rob's comment, which is that of course one of the central challenges in regulating data access or data governance is that data is the fuel for the advances in technology we're experiencing in the current moment. It is also the fuel for the extraordinary profits that the large technology companies have reaped, which have not only made people extraordinarily wealthy but also fueled reinvestment in this technological growth. So policies like the GDPR, or the various bills that continue to be introduced in each Congress, run into the challenge that they threaten the core business model that has enabled the tech industry to emerge in its present form. And that's a reason why, even though we think of privacy as low-hanging fruit, people would like more transparency about how their data is being used, they'd like the ability to move their data with them as they go from one platform to another, they'd like to know when their data is being sold to a data broker, all seemingly common-sensical changes, you have to contrast those commitments to people's rights over their own information with the current framework that we have. Rob referenced it as notice and consent. Notice and consent is effectively that moment when you download an app and it asks you to accept the terms, and you have to scroll through something you cannot even read, or, if you do spend time reading it, that is not written in a language designed to facilitate your understanding of the end use of your data.
So the situation is really tilted right now in the direction of sharing everything and facilitating all of the potential end uses: to power new technological developments, to enable the reselling of information, to facilitate all sorts of personalization of advertising, because big tech is largely the most effective advertising model we've ever seen. In that sense, it's no surprise that while privacy feels like the low-hanging fruit, we see very little progress in passing the privacy bills; we get them every year, but they don't pass.

And we're now in an economy where we need each other because of cross-border data flows. It's interesting, because some places, like parts of Africa, are very far behind, but they're so far behind they can just leap ahead; they're saying, we don't have that much data and we don't have much money to spend in the market, so we haven't been there. We're watching Asia come more online with that. And I've always been a big proponent of something like terms of use expressed in emojis: here are the five things I care about, and if one of them is missing, I say yes or no to it, and maybe AI will get me there faster now because I'll be able to pick that.

So then my second part. Schumer this week came out with the SAFE framework, and SAFE stands for, and this is a favorite Washington game, right, figure out what you want to say and then find the words that fit the acronym, or figure out the acronym and then find the words. I'm pretty sure the acronym was a longer duel in the room, from what I've heard, than the entire operation might be. But security, accountability, foundations, and explainability are the simple ones; you had it on the screen there. Do you think they nailed it? How did they do?

I'll offer a first comment on this, which is that I think what we're seeing from Congress is the beginning of a conversation in which we're going to try to find our way to a bipartisan path on a set of initial regulatory steps. What I see in the SAFE Innovation framework is largely a set of high-level principles, and a lot of folks who have concerns about this next moment in AI are going to see themselves in some aspect of it. Now, of course, innovation is the one that didn't make it into the acronym, but Senator Schumer spoke about it yesterday: it's not SAFE AI, it's SAFE Innovation. And I think innovation is going to continue to be a priority for both Democrats and Republicans. Successful advances in AI are the fuel of our continued economic growth, and our technology sector is a comparative advantage for the United States in the world, so the focus on safety is going to have to be balanced in important ways with this innovation agenda. But what you see in this framework is attention to adversarial uses; attention to concerns about bias and discrimination; concerns about the lack of transparency around models and the potential harms models might cause without an existing liability regime in place; concerns about copyright as it relates to the data that fuels the models; and attention to some of the rights we talk about in algorithmic decision making, the ability to understand when models are being used and how a model is making a decision, the justification that you might expect from a human being but cannot get from an algorithmic model.
But these are all at the level of principles. This is not a regulatory model; it is a framework to begin the conversation. I think what we heard from Senator Schumer yesterday is the need to initiate a robust conversation, a process of self-education among members of Congress, a conversation across party lines and between the legislative branch and the executive branch, and one that has to wrestle with how the initial regulatory play we've seen from Europe, built around the precautionary principle and risk assessment, relates to a distinctly, as Schumer described it, American model of approaching artificial intelligence. We've got lots of technical documents: the AI Bill of Rights from the Biden administration, the NIST framework around risk management, and of course what we've seen from Microsoft, from OpenAI, from Google, and others. These principles will not seem unfamiliar to anyone operating in any of those spaces, but they don't yet answer the questions about what the transparency, testing, and tracking regime is. Is it done by government? Is it done for every kind of product? Is it done before something is released into the world, or only once we see harms develop? These are the kinds of critical design questions on which, at least so far, the SAFE Innovation framework doesn't yet speak.

So I want to pick up on this thread. I feel like somebody needs to defend big tech, so I'm going to be that person, and somebody needs to defend optimization. I would dispute the idea that we are in some sort of oasis or Wild West situation as it relates to regulation of the U.S. economy, including high tech. We have vast systems, bureaucracies, extensive law and regulation; there are many, many different ways to address harm right now, and the list is very long. So this idea that big tech, or any tech, and we've got an awful lot of small tech moving right now too, is out there operating without any kind of constraints or oversight just doesn't seem realistic to me. I think we've got extensive oversight. That's part one of the statement-slash-question. The second part is that the precautionary principle you were talking about cuts both ways. It isn't just harms that we are avoiding with the precautionary principle; we are also inflicting harm with it, and that is particularly evident in health. We had a physician from the Cleveland Clinic who manages the standing up of artificial intelligence platforms for clinic activities. She took that job just a few weeks before COVID started, and because of the national health emergency she had a lot more latitude to do what needed to be done to serve patients, and, I will add, mainly minority, low-income patients who were being affected by the pandemic. She looks at what she was able to do there and says: we could apply all of this to diabetes, to kidney disease, to high blood pressure, to every chronic health condition plaguing the American public, and we could improve outcomes, but we can't, because the thicket of regulatory authorities is already so substantial that our legal team just won't sign off.
Okay, so what am I missing there? It seems like we already have a pretty heavily regulated situation. Why do we need more in advance? I'm not saying harms don't occur; harms occur, we respond to them, and maybe the law doesn't take a certain case into account and we need an amendment to the law. So why the precautionary principle?

You want to start with the precautionary principle? I'll take on the big tech thing. Well, where I was going to start is that the health example you picked speaks directly to a space where constraints on access to personal information are significant and highly regulated. This is absolutely a space where the potential of learning from and powerfully affecting people's health via algorithmic or AI-driven tools runs directly into a regulatory arrangement around access to personal health information that has deep roots in law. When we think about the challenge of enacting electronic medical records and building these systems, the challenges of data access and data sharing across hospital systems, between hospital systems and academia, between hospital systems and private sector companies, this is a space that I think is different from other spaces in big tech. You're absolutely right that having optimized so far for privacy, which means maintaining for all of us ownership of our medical records and all the information about the treatments we've received and how our bodies have responded, comes at the cost of what you described from the perspective of this doctor at the Cleveland Clinic, who looks at the situation and says: I can be a social planner and a social optimizer, I can make everyone's health better off, if you're willing to give me your data. And I say, let's have that conversation in our democratic institutions. Is this the time to revisit the way we have approached electronic medical records and health information? Because there are good reasons we set the prior regulatory architecture in place. Why? Because when you apply for a job, you do not want someone looking at your medical files to judge whether you should be hired and how much they should invest in your training over time as they think about whether they'll get a return on their investment in you. That's just one example of the kind of discrimination that has led us to embrace privacy. But as we think about the precision medicine moment we're in, you're absolutely right that a precautionary principle frame which doesn't consider the opportunity cost of going slow, which doesn't consider the opportunity cost of forgone benefits, is probably going to miss the boat.

Can I jump in right there? It's more obvious in the health case, but it is not obscure in a lot of other cases. At the broadest level, the estimates of the potential benefits, and this is just the economic side, the growth in GDP, in productivity, and so on, we only get that by going fast. If we don't go fast, we don't get it, ever. So again, the precautionary principle of trying to limit harm is itself a harm: it's the harm shaped as the good that never occurred, and that's really what I'm getting at. I don't think it's limited to health, although I take your point that one of the reasons is that we've done a pretty rigorous job of regulating health information, and I think everybody is mostly glad about that.
maybe we shouldn't be so glad — that's part of your point. But let me make one other point and then I'll hand it to Rob, which is to say that the precautionary principle is the European approach, right? The question for us in the United States at this moment — and part of what we suggest in the book — is that the front line of responsible AI is going to be inside the companies themselves. That has got to be central to our strategy, and it's going to be not just company by company but the development of a set of industry-wide practices. We already see the seedlings of these industry-wide practices; each of the companies is talking about them: responsible-AI approaches, building it into your production process, red-teaming your products, thinking about unintended consequences of what you build. That kind of ethos of professional and social responsibility in technology companies, which we see in the life sciences, is the first and most important place to start, because there is no way we can achieve our innovation goals with an orientation toward new technologies that says stop, don't do anything, until we can put this through a multi-year process of testing. I don't think that's where we're going to end up, and I don't think it's realistic given the kinds of value trade-offs we care about. But that said, should we have a conversation in our politics about whether there are a set of sectors, or a set of potential use cases, where second- and third-order effects or potential adversarial uses justify going slower? That's the conversation we are just beginning to have about AI, and I think that conversation makes sense given the power of these models.

Let me try to hop in here. I want to say two things, but I'm going to try to say something on behalf of the precautionary principle, just to test the waters in a way that's not hypothetical but, I think, true to the debate about AI these days. First, it's certainly conceptually possible that the healthcare arena is over-regulated and the AI or tech arena is under-regulated, so I'm likely to sign on to some statement of that sort. I'd point to Section 230, in addition to a whole variety of statements in the 1990s about deliberately trying to pave the way for America to win the information-superhighway race, down to small things like EULAs, the end-user license agreements that we sign for software. Our colleague Mehran often gives this example in classes: if you're a civil engineer and you build a bridge and the bridge falls down, you can lose your license to practice and you can be sued. If you're a medical professional and you do something that violates basic standards of practice, again, legal liability, plus you can lose your license to practice. Microsoft sells you Office, and included in the end-user license agreement is your agreeing that even if the underlying software in Excel that computes things mathematically turns out to have a flaw and systematically miscalculates something, Microsoft is not responsible legally, financially, or otherwise — it's all on you, basically, by signing the license agreement. The immunization from even ordinary consumer protections there strikes me as part of the developmental immaturity of the professional space that technology has long existed in.

Okay, on the precautionary principle. I don't know how widely this was discussed in Washington, DC, circles, but one of the things which is
always lurking beneath the surface in Silicon Valley when it comes to AI conversations is the presence of effective altruism and its orientation toward the existential risk that large language models and foundation models might pose to us. The kind of thing that Sam Altman might say — possibly sometimes in public, but certainly what people from OpenAI or Anthropic say in private — is: look, with these large language models we're not even aware of what their capabilities are; we're developing them as quickly as we can; they're going to have extraordinary benefits; but one thing we're worried about is whether these could be used in a kind of drug-discovery mode to build novel pathogens, and if you just open-source them to the world and allow people to play with them — democratize AI — you're basically giving a free pass to anyone to tinker in their garage with bioterrorism. That sounds really dangerous; dear government, wouldn't you like to have a licensing and registration regime? I'm just rehearsing Altman's points here. Does that type of precautionary principle arrest your attention in a way that makes you feel there are some genuine concerns?

I think any time you talk about somebody cooking up pathogens in their garage, you will get people's attention. And this is something I wrote about recently: the negativity bias that we have toward technology. We are much more afraid of what we are going to lose than of what we might gain. My sense is that this is underlying — I think it's genetically selected; evolution has taught us this — and given that bias, that's our real problem. I grew up in the era of disaster movies, like The Poseidon Adventure and The Towering Inferno and Jaws, and all these things. Those things don't happen, right? Those are Hollywood-generated figments that take advantage of this fear that we have. So we can't make that the basis for policy, and unfortunately I think that's where we're headed right now, because the entire conversation is focused on risk: what are the dangers, what is going to be taken away from me, what am I going to lose? Very little time is given over to what you are losing by not doing it.

But let me ask you one follow-up on this, because I read your piece on the negativity bias and I want to ask you: is this a description of the current moment, or would you say the negativity bias has characterized the period from 1993 to the present? Because I think the negativity bias characterizes today. As we walk around in this town, people have discovered large language models and they are freaked out. They're also really fun — people are using them to write rap lyrics for their partner in the style of Snoop Dogg or whatever it might be — but aside from the fun, people are freaked out. And it doesn't help when you get resignations from leading AI scientists who basically say, I am scared about what I have built and the consequences for the world. People say, well, I don't know anything about what these things are, but if the person who built it — like the people who created nuclear weapons — is walking around saying this could potentially destroy the world, then people have that negativity bias. But I wouldn't say — I mean, I'm interested: do you have the view that the negativity bias characterized the first two decades of tech? Because I think it's quite the opposite.

Okay, so I remember I was working in the Senate at
the time that we got internet access. It was a revolution — it was so fun. There was no more waiting on CRS to send you the policy brief; there was no more — the interns have no idea what I'm talking about, it doesn't matter, just trust me, it took a long time and it was frustrating. And then we had the internet, and it was wonderful. And you're right — although I do remember, and I've gone back to look at the YouTube clips of news broadcasts talking about the internet, there was some of this negativity: oh, this is weird, I'm just not sure this is a good idea, that kind of thing. So there was some of that negativity bias. But I think what happened is that we didn't go straight from 1993 to 2023; we had 2007, '08, '09, '10 in there, in which we also got social media, and that, I think, has been the conditioning event for the way we're trying to think about AI.

That's good. I'm going to join you here if you're willing to accept that the earlier period — let's call it the 1990s to the early 2000s — was a period of techno-optimism; it was almost all upside. In our teaching and in the book we use these lines: Ronald Reagan said in the 1980s that the Goliath of totalitarianism would be taken down by the David of the microchip, and a successor, President Bush, said to imagine if the internet took hold in China and how freedom would spread. There was this belief that something about the inherent properties of digital tools and technologies made them somehow liberatory, or unleashing of human creativity. And then we got the huge backlash, because social media awakened us to the fact that there were these negative externalities, in the language that we use in the presentation. What I want to think is that we're finally entering a period where we see this within the scope of technological innovation and scientific discovery over a long time horizon, where we needn't attribute uniquely optimistic properties to it nor uniquely pessimistic views of it. Let's treat it as a technology with negative externalities that deserves an effort to use the ordinary tools of regulation, as well as self-governance and industry norms, to get the benefits and contain the risks. In that respect there's no rocket science involved here, and entering a moment of ordinary politics is a really healthy sign.

No, I agree with everything, and workforce, which is one of the issues you focus on, is a perfect example of where what we need is ordinary politics. What we don't need at this moment is to look at AI and say jobs are going to be erased, because we know over the long historical durée that the labor market adjusts over time, but that there is some delay in that adjustment, that some people benefit disproportionately from the changes and other people are harmed, and that you need a set of investments to facilitate that kind of evolution. But are we having a conversation about what those investments are? We are — I can't speak for everybody else, but we are — and that's exactly the kind of conversation we need to be having in the Senate and the House and with the White House, because ultimately that's the normal politics of adapting to this new technological frontier.

I want to give Shane a chance to get in here with another question. A clarification: it was Rick Boucher, a lovely former congressman from Virginia, who allowed the internet to become legal outside the
US government, not Al Gore; people love to give Al Gore way too much credit for that. The other thing is, I'm here for the machines. Think about APIs: I think the large language models are really cool, and I'm having a lot of fun with them, but it's like when people first had to figure out how to attach themselves to the bigger tech industry that was coming along and do all the cool things — all the apps that are on your phone. Probably half the room here doesn't really sit in front of a computer most days; they're just sitting in front of their mobile device, which I can always tell when you send me things. We're having, I think, a net-positive moment. This whole idea that the machines are going to kill us — I keep reading the articles and thinking, God, maybe we deserve it. But the internet, I think, is very much a mirror of society, and so part of what we need to put into our process on this is: what is it that we are so fearful the machines will reflect back upon us that we need to be thinking about? Maybe the guidance we need is psychotherapy, not necessarily better coding — or maybe they go together. But going back, because we still have Schumer here on the screen: one of the things I loved about his announcement was just wanting to avoid Congress altogether. He said, let's get a bunch of smart people — hopefully smart people — together in a room this fall and start thinking this through, and he didn't give it all the magic words that we call things in Washington; it was just, let's try to figure this out and then move it forward, because Congress is never getting around to it. And the first thing Congress did was hold a hearing today, as if to say, no, we are going to be in on the game. So I think we have a lot to watch in this space. We mentioned it, but I think it's worth going back to: what about the hit-the-pause-button letter? Was that a bunch of people who are really fearful, or was it a bunch of people for whom this came out faster than they thought it would, who were not in the game fast enough? They have the ability — they're running their own operations — to hit the pause button themselves if they're worried. That kind of befuddled me a little bit.

I think that petition, which garnered whatever it was, 20,000-plus signatures, is not something I would have signed or would have wished to get as much attention as it did. But if I want to be as charitable as possible to some of the people who initiated it: one of the very important differences to mark between the social-media age of Web 2.0 — where, as Brent said, we began to have the tech backlash and all kinds of concerns — and the current moment is that a large number of the scientists who are building these models, and the people who work in these organizations, have this effective-altruist orientation. They're concerned with what they would describe as the existential risks introduced by runaway or malevolent AI, and the pause was an attempt, I think, by people who are close to the technology — these aren't outsiders but rather technologists themselves — to convey a kind of alarm bell to the rest of the world about the potential power of what's advancing more quickly than people even
inside the technical community believed would have been possible just a year ago. And if there are even a few kernels of truth to the idea that we're on the brink of accomplishing artificial general intelligence — "sparks of AGI," as the Microsoft document called it — then confronting and grappling with the significance of that is certainly worth somebody's attention. Why don't I think the pause letter was a good idea? The simple answer is that the responsible actors, who might have been persuaded by the document or already felt something of the sort and internalized it, might have slowed down, while those who don't care at all had no reason to heed it anyway. So it had the perverse incentive of slowing down the already responsible people and giving a green light, and various forms of competitive advantage, to those who were never going to abide by it in the first place. Rather than that kind of simple petition and public letter, I would rather have seen behind-the-scenes, closed-door forms of what we might call track-two diplomacy. Maybe the best we can hope for from it is that an alarm bell was sounded about the potential of AGI, but there I think we're in the realm of the unfalsifiable. I don't want to call it science fiction; I just don't think we have any empirical basis by which to make an honest assessment of whether that's true or not.

I just want to add two comments on this. The letter itself is a strategy, but let's think about what the underlying motivations behind it might be. We might talk about explainability or interpretability: part of what we know from the AI scientists closest to doing this work is that getting their heads around why models are doing particular things, and what the potential power of these models is, is something they themselves can't manage. That level of uncertainty and discomfort that many of our colleagues on the front lines of this work feel is what justifies, for many of them, a step like the resignation we saw from Google or a letter like this. If you can't even understand what the model is doing — if it is so far beyond human cognition and capability that it surprises you with the responses it provides or the steps it takes, and you can't even figure out how to tune it — this is what the sparks of AGI look like. And a warning shot from those inside the companies, those closest to the science, to those who are not paying attention — saying, hey folks, this is beyond what we think we can understand and handle — is, I think, an important signal for everyone to wake up and take some time. The second piece is that we've done a bit of a historical tour here, through tech utopianism to the tech backlash, and I'd also suggest that part of what we saw with the letter is learning some lessons from Web 2.0: lessons about when and under what conditions it might make sense to take some time to think about second- and third-order effects, to red-team unintended consequences, to think about design choices. And this all happens against the backdrop of an economic contraction in tech in which many of the people who were laid off were on the responsible-AI teams. That's something we know: those individuals and teams were gutted in the most recent round of layoffs — this was before ChatGPT, right? And so the architecture that was being built up in the boom time to do the kind of responsible-AI
practices that ultimately are going to be necessary was being eviscerated at a moment of economic contraction and anxiety for the companies. So I take this as a signal both of how challenged they are in understanding the new technologies and of concern that the industry's ability to manage this on its own is not really in place.

Okay, this has been fantastic, thank you so much, but I want to give the audience a chance to get in, and I want to give the right of first refusal to our senior fellow in science policy, Tony Mills, who's been watching, to see if he's got a question he wanted to ask. Okay, go ahead.

Thank you. Tony Mills, I'm a senior fellow here at AEI, and I really enjoyed the conversation. I have two questions that pull in somewhat different directions. On the precautionary principle, and on the point about knowable harms in advance, the question gets to linking that discussion up with what we were just talking about. It seems to me that in a lot of the discussions about AI regulation going on right now — highlighted by the letter and other high-profile conversations — the kinds of risks we're talking about are existential-type risks, which have the characteristic, as you say, of being perhaps not falsifiable; I would describe many of them as science fiction. This seems like a very different scenario from one where we can say, look, there are some obvious kinds of harms, and we can think about design choices. These seem like risks which are impossible to refute, but the stakes are so high that to defend not doing anything puts you on the side of being in favor of human extinction, and that makes it very difficult to have a reasonable, deliberative conversation about risk. So the first question is: do you see the conversation about AI governance as a counterexample to what you were describing, in terms of trying to identify risks? The second question relates to more identifiable risks. Think about using foundation models to genetically engineer a pathogen — that seems like a clearly identifiable risk, one that worries me. But of course we can already do that scientifically and technologically, and we also don't really have a good model for governance of that: dual-use research of concern. There is a whole array of scientific and technological developments which we don't really know how to deal with; we have a patchwork of regulatory, non-regulatory, and self-regulatory structures that try to deal with it, and we don't deal with it well. If AI tools are adding more capability to do those kinds of things, what's the model for thinking about that governance? Are we going to try to solve that problem by building a new regulatory structure, even though we can't solve the first problem already? I'm curious, concretely, what we can do about that.

All right, let me take a crack at this, and I know Jeremy's going to want to add something too. Let me just use some of the language of the people who invoke existential risk, language that I myself am occasionally inclined to use — not because I think existential risk should be in the foreground of our minds, but because I think this is the way to shape one particular version of the debate that's live right now, which is about the closed models of OpenAI, Google, and Microsoft, as opposed to the more open-access, open-source models that are on Hugging Face or that Meta is releasing:
how should we think about those pathways in the marketplace dynamic, now that there are open alternatives and closed alternatives? If you're worried about things like the creation of pathogens, and you have in mind something akin to the nuclear age, or to bioengineering in the hands of adversarial actors, you don't want to democratize access to uranium and plutonium and tell people to play with it to explore its capabilities. You don't want to say, let's open-source the smallpox genome, now kept in a small number of laboratories across the world for scientific purposes, and have at it in your garage with a CRISPR kit and maybe we'll find something good. You'd like to have these things limited in access and with some regulatory controls around them. Now, of course, everyone's going to point out that AI is not like the nuclear age: it's not as if we have scarce resources — computing resources are in certain respects scarce, but not like uranium and plutonium — and it's not as if we have a laboratory structure across the world with some loose coordination around things like smallpox or other kinds of diseases. So we have to deal with the particular facts on the ground as they present themselves to us. You're going to find this either inadequate or a way of restating your basic premise: that we just don't have the mechanisms in place at the moment to address some of the existing harms, much less to confront these things that seem potentially science fiction but that, if they're real, ought nevertheless to command our attention.

I just want to advert to the idea that computer science is a young field; it came into existence only in the 1950s and 1960s, and people who study artificial intelligence have only acquired power in the world in the past 20 or 30 years. So compared to biomedical research, compared to nuclear science, AI is developmentally immature. The provocation I offer to computer scientists is to say: you're like 19-year-olds who are newly aware of their power in the world, but your frontal cortex is massively underdeveloped and you're a bunch of socially irresponsible people — can we accelerate you to your late 20s as fast as possible, please? That might happen through a combination of various regulatory threats, or at least the announcement of a regulatory presence, and the concentration of the mind that could be brought about by responsible actors who want to confront possibilities of bioterrorism or whatever the case may be. So in that respect I want to lean on the idea that over the course of time it's natural for professions to professionalize, and I don't see any reason to think that AI and computer science are immune from that. Do we want to wait for a catastrophe in order to have the right kind of reaction, the way that in biomedical research there were Nuremberg and the experimentation in the death camps, or Tuskegee, and the responses to that? Well, it's not a logical necessity that we have to wait for catastrophe, even if sociologically that often is what happens. So I want to put a lot of energy into the professionalization of AI science, AI development, and deployment, which I think will happen in relation to a policy window that's opening. Would I predict extraordinary success? I don't think I'm that optimistic, but I think it's a natural progression for the field to professionalize far beyond its current immaturity.

What I'd
add to that is that even your reference to dual-use research of concern is in some sense a recognition that there are spaces that have approached the oversight and responsible management of technological frontiers with a level of nuance and attention to potential harms and unintended consequences that is far more mature and fully developed than what we see at the AI frontier. Part of that is the practices of the scientists themselves in the life sciences: it begins with concerns about recombinant DNA in the '70s, Asilomar, and the gathering of that kind of expertise, and then takes its form in professional associations, scientific associations, and scientific norms. My view is that there is no silver-bullet solution to this. The EU AI Act does not solve this problem, and if people think it does, they haven't thought through any of the kinds of complexity and nuance we're talking about today: balancing in the direction of innovation while being attentive to these risks, but also the serious operational challenges associated with any regime designed to bring transparency. Transparency about what, and at what time? Transparency about the existence of a model, or how the model works, or what generates the model's predictions? What does testing look like? Who does the testing — testing internally, testing independently? These are all hugely consequential issues. What I'd say to you is that there's no perfect solution out there, but we're at the incipient moment of beginning to build out an architecture that traverses industry and government and that has to grapple with the question of what our social objective is, as a democratic society, with respect to these new technologies, and what kinds of inputs we need. That will help us address not just concerns with existential risks — which aren't the ones that animate the two of us, and Mehran in particular — but the more near-term risks that we can anticipate, that we can see down the road, that we know might be amplified or enabled by these technologies, so that we can continue to preserve the space for the kind of economic boom we can expect from this while also maintaining the political support for this industry to thrive.

I think that's a great place for us to end the conversation, because I agree that when we get way out there, that's where catastrophizing happens, and if it's too close, we're being irresponsible, right? But getting into these near-term issues where we can see things, and asking ourselves not just whether that should be regulated but what our options are for doing that, what our options are for protecting the public — it may be the CRISPR model of, look, you guys, this is dangerous, you need to figure out how to regulate yourselves on this; it may be that lawsuits and other regulatory actions can occur that can address it. So I think that's a really smart way to think about this: let's pull our time horizon in rather than pushing it way out there. A round of applause, please, for our panelists — absolutely fantastic, thank you. And please, if you want a copy of the book, they're available. Thank you for coming, thank you all very much, thanks so much, thanks.
Info
Channel: American Enterprise Institute
Views: 1,972
Keywords: AEI, American Enterprise Institute, politics, news, education
Id: OUCw1JMUvq8
Length: 93min 9sec (5589 seconds)
Published: Fri Jun 23 2023