"Click Here to Kill Everybody": A Book Talk with Bruce Schneier

Captions
A very warm welcome to everyone on behalf of the Berkman Klein Center. I'm Robert Faris, the research director there, and I'm really thrilled to act as the very brief host on the occasion of the launch of Bruce Schneier's new book, Click Here to Kill Everybody. I hope you're going to tell us that indeed that's not true, that we can't yet do that. I think you all know Bruce. Bruce is a longtime member of the Berkman Klein family. He's a cryptographer, an expert in security; he is perhaps one of the handful of the clearest and most incisive thinkers on cybersecurity out there. More than that, he's really good at thinking about systems and institutions and understanding how technology intersects with political and social structures. He's a very prolific author; I'm going to have to get a new bookshelf with a Bruce Schneier section, which is a very good problem to have. He's going to be in conversation today with Abby Jaques. Abby is a postdoc at MIT; she's a philosopher working on the ethics of AI. And without further ado, I'm going to turn the floor over to Bruce and Abby. Welcome. (Are there chairs? Sure, there are chairs; next time, raise your hand. There are a bunch of chairs, so please come in.)

I also forgot some housekeeping. Number one, you are all under surveillance; know that this is being filmed and recorded. Number two, for folks out there who want to lob in a question over Twitter, the hashtag is #BKCHarvard. And number three, after this talk, books will be for sale, probably over in that general vicinity, where Ruben is. Anything else I'm forgetting? Okay, all yours.

So I wanted to start by laying some groundwork. Bruce's really terrific book (can everyone hear me? great), Bruce's excellent book, calls our attention to a crucial inflection point. We've gotten used to dealing with computer security, and in security we know
that sometimes we have data loss, sometimes we have data breaches, so we try to protect against identity theft, we have backups, things like this. But Bruce points out that we're entering a really new era, and this new era is characterized by the fact that now everything is becoming a computer. All kinds of things that used to have computers in them, or maybe didn't even have computers in them, are now computers with various other systems attached to them: our phones, obviously, but also cars and power plants and airplanes and all kinds of other systems. And these are things that operate in the physical, not just the digital, world, and that ends up making a really big difference. A hacked car can lose control of its brakes on the highway; a hacked power plant or a hacked water treatment plant can cause blackouts or public health emergencies; a hacked bioprinter could release a deadly virus into a hospital. And not only that: if all these kinds of hacks were carried out at scale, if we suddenly could hack all the cars or all the airplanes or all the power plants, we could have really catastrophic new kinds of risks. So Bruce calls our attention to this moment, and he very carefully lays out what it would take to protect ourselves from this, what a better, more secure networked world would look like, the kinds of forces that are going to make that difficult to achieve, and then how we might navigate the imperfect situation in which we find ourselves. He's really thinking about how there are technological solutions for the majority of these kinds of problems, but the real challenges come on the policy side: the political will, the incentive structures, to get the changes to actually happen. So Bruce, is there anything you want to say about what's going on in the book that I haven't covered?

Generally, I think that's the crux of it. I'm trying to write about a changing environment. We all know that the old way of dealing with computers is
to stare at a screen, and the new way to deal with computers is going to be to interact with them in our environment: our refrigerators, our cars, other appliances and toys and systems. I mean, we're interacting with computers in this room, we're being recorded, but I don't see them; that's the new way to interact with them. So there's the pervasiveness of these systems, and then there's this new power these systems have: they directly affect the physical environment. This is automation, this is autonomy, this is physical agency, and that changes things. We've had all these assumptions, kind of like negotiated détentes with security: we're doing authentication this way, we're doing patching this way, we know software's not that great but we're managing it this way. And that's all worked really well, but when you move into the new environment, maybe they don't work so well anymore. That's what I'm exploring, and I think that's a really important change. It does mean that those of us who know security have a lot to teach the other parts of our ecosystem that don't have that history.

Yeah. So, because we're here at Berkman Klein, we have people here who think a lot about policy, so I wanted to ask you a question about what levers are available in the policy domain. You talked quite a bit about how you think it's very important to get this right, that we're going to need good government doing good. You really think that this is a place where market-based solutions or voluntary self-regulation are just not going to be enough, as you see it, and that seems very plausible in the context of the kinds of solutions you're proposing. But of course you're also very clear-eyed about the challenges that presents in our moment. I think I may be even more pessimistic than you about our moment. You mentioned that once these systems start killing
people, governments often think that's the moment to regulate. I worry that that will be an excuse to make things much worse. But if we're worried about the federal level, do you think there are other avenues? You mentioned that states often are doing better on these things: Massachusetts, our home, and California sometimes. So do you think there is enough leverage at the state level, given that software is this write-once, run-everywhere thing, the way the GDPR is seeming to have some effect because the EU's regulations are trickling outward? Do you think we could tackle this by trying to go around something like a federal institution and use state levers, or do you think we're just going to have to try to make federal solutions work? I'm curious where you think I should locate hope.

I think the answer is going to be all of the above. Right now we have a pretty dysfunctional federal government, and it's not the place to look for answers. Software has this interesting advantage: it is write once, sell everywhere. The car I buy in the United States is not the same car I buy in Mexico; environmental laws are different, and the manufacturers tune the engines to the local laws. But the software I buy here is the same software I buy everywhere, because it's much easier for a software manufacturer to write it once and sell it everywhere. So if there is a law passed in California, and California has a bill that's about to be signed making a very small change in IoT security, that you can no longer have default passwords: of the fifty horrible things in IoT security this is one of them, and it's not going to make a big difference, but it will make a difference. If a company wants to sell a product in California, they'll remove the default password. They're not going to have a separate build of the software for us here in
Massachusetts; it makes no sense. So we will all benefit. Similarly, the EU is about to pass a law regulating the security of Internet-connected toys. There were some pretty bad Internet-connected dolls that allowed for super creepy spying on children, and that's going to be fixed, and we will benefit from that. I'm sure the old toys will be dumped on our market because they can no longer be sold in Europe, but after we buy those, the ones we get will be the better ones. And that's something we really don't have in the privacy realm. With the GDPR, the big European data protection regulation, Facebook very much wants to figure out who is under that jurisdiction and who isn't, because if they can, they can differentiate offerings: they can spy on the non-Europeans more and make more profit. That same desire to circumvent doesn't exist in safety. In safety, all you have is the desire not to spend the money to fix things, but once you're forced to spend the money, you fix it everywhere. The refrigerator will be improved everywhere, the thermostat will be improved everywhere, even the car will, in a way you don't have in privacy. So I think it gives us an advantage in this particular area that we don't have when someone's trying to steal our data for profit.

So now I want to talk about something that is an interest you and I share: AI. Your focus is really on bad actors hacking, or even just corporations, under the influence of surveillance capitalism, exploiting their users. But when we think about the AI context, often people think about the kind of stuff my work actually focuses on, which is more the unintended problems: whoops, the autonomous vehicle won't do what we expected, the helper robot is out of control, or there's bias built in that we didn't anticipate. So I'm curious what you think the specific risks are around AI, and whether you think that there are
differences between the domain of specifically bad actors and the more unintended-consequences kind of stuff.

It's a debate that I think always exists in the security field: is security a subset of safety, or is safety a subset of security? What you're thinking about are mistakes, things that happen randomly, while what I'm thinking about is an intelligent, malicious adversary. It's different in defenses. If you are doing environmental safety and you need to secure buildings against hurricanes, there are lots of things you can read about how to make buildings hurricane-proof, but deep down, the hurricanes will never change what they do based on your security. The hurricanes don't get smarter; they don't adapt. That's very different if you're doing, say, ATM security, where your adversary immediately adapts and immediately figures out how to circumvent what you're doing. But these things are very related, and actually, when you get to things like crashing cars, even when it's malice, it is more the realm of safety. It's going to be the safety regulators who look at bad actors taking over cars; it won't be the security people. In some ways they're very similar. Look at some security events: after the event happens, the safety and security responses are the same. After the terrorist attacks of 9/11, one of the stories I would tell is that, I think in the 1940s, a plane crashed into the Empire State Building. It was an accident, it wasn't on purpose, but if you think about it, everything that happened the moment after the crash was exactly the same: we need firefighters, we need rescue people, we need people who understand how buildings stay up. Everything was identical. If you're an emergency responder, you don't care whether it was a deliberately set bomb or someone accidentally punctured a fuel line and
exploded it's the same response so I think there's a lot of overlap in in in safety versus security but it's all the difference is going to be in that adaptation you know that the the bad actor my programmers you know don't won't make different in new mistakes because you've protected against the old mistake so they keep making mistakes in some Gaussian bell-curve of mistakes space I don't know yeah I mean I think that the overlap point seems right to me I'm not super clear that there's a bright line I mean you mentioned at one point that the of the catastrophic risks that scare you the one that really worries you as a criminal attack that sort of gets out of hand and we've seen some of these before where there's a sort of a bad actor but then what they thought they were doing ends up being not what they actually did and even you know Mirai botnet and things like this that it looks like these are gonna be you know cases where there's sort of a combination of these effects so it seems right that we shouldn't think that there's a big separation between the bad actor cases necessarily and the safety in the security cases and the inadvertent case isn't a bad actor cases and we share the same issues in in in transparency and it's systems that will adapt and to a point where they're you know can't be understood who do you hold accountable right for uh for when an algorithm does something which you would normally homie hold human cur accountable which which human do you pick what do you do if there isn't an obvious human right what do you if the human says yeah I wrote that algorithm but it's way different now and I don't know what it's doing and I'm not in charge and don't blame me for that it's gone weird right now what can you hold an album into account what does it mean to incarcerate an algorithm can you you know maybe we can program it it doesn't like that I don't know you you have to solve this stuff I mean you talk about it or can we pretend they do right I'm 
useful way, even if we know better? I mean, you've talked about how the courts have been reluctant to hold programmers responsible for vulnerabilities when they were exploited, that it's tended to be: oh, it's the hacker who did the bad thing, not the programmer who put the vulnerability in there. And you might worry that there would be analogous problems with assigning responsibility in this domain.

This is worth looking at. There is a history of software liability in the United States, and we deliberately decided not to hold programmers responsible. The belief was that doing so would be a huge detriment to innovation, so we would stick to the proximate cause. Windows is lousy, someone finds a vulnerability, a hacker breaks into your machine and steals your money: we could blame a lot of people in that chain, but we're going to blame the hacker. We choose, deliberately, not to blame Microsoft for selling you a shoddy piece of goods and pretending it's not. We're not even going to blame the person who discovered the vulnerability and published it; we're going to blame that proximate cause. I think that holds less well now, and this again gets to the change in where computers are. It'll hold less well in a car. We have a lot of case law that when a car has a flaw in it and someone crashes, we assign a lot of liability way back to the manufacturer of that car, of that part, and some, of course, to the driver, the road conditions, maybe whoever designed the bad intersection. We think of a lot of other causes on that chain, but we do go back in a way we don't in software. And I see this changing, because where we don't have all these rules about software, we already have rules on cars and medical devices and consumer goods and appliances and all those things. These rules aren't going away; as software moves into this world, these rules are
going to, I think, chew their way back.

Let's hope so. So, one thing, sort of a small point, but I found it striking. One thing late in the book you mention is that you think we need to demilitarize the internet, that we need to think about changing the models and metaphors we use for thinking about cybersecurity. In particular, you say that this militarized talk isn't necessarily the most productive, that maybe what would make more sense is talking about pollution, or public health, these kinds of different models that naturally suggest a different way of relating to the problems we find in our networked world, and also of solving them. It struck me because another theme in the book is the way it's very tempting and common to focus on offense rather than defense, and that feels to me like a particular kind of militarized, macho-posturing mindset that's cued by this particular way of thinking. If we switched over to something like a public health model, we might get a really different intuitive sense of how to approach these problems, and it would also help us bring certain kinds of problems under the same umbrella. I think about the way that something like the Cambridge Analytica problems look like unauthorized, illegitimate experimentation on human subjects. So you might think it's worth thinking about these kinds of models that we use to structure our thought about these issues.

This is hard. I mean, the military attack-defense model is pervasive in security; I use it all the time. It is how we talk about these issues, and I think it does really limit the way we think about them. We think in terms of attack and defense, and this is a very adversarial framing. A public health model just gives us different tools to think about things. I think of cyber peace, really channeling Camille François, who many of us remember, who very much talked about this other way of thinking: that when you
talk about cyber war, even if you don't like it, you're buying into the frame that there is this natural hostility. And there is, kind of; I see it, I know it's there. But you can use a public health model, which does have attackers and defenders, bad actors and good actors, without thinking of it in those same militaristic terms. This really is kind of idealistic right now. What we can do is expand the frame of discussion. I don't think we can go to US Cyber Command and say, surprise, you're now like the NHS, and they'd say, great. But it is a step, and I think that's important. I think these other ways of thinking are going to give us windows into answers that we don't necessarily have right now. Again, no time soon, but these problems aren't going away.

So let's talk about power for a second. It looks like, with regard to these problems, most of the power resides with governments and corporations, and you bring out very well how those are precisely the locus of many of the problems. And there's this hope that government power can be the way to mitigate and manage these risks. So I'm wondering: how can we leverage government power in the right ways when it is itself so much a part of the problem? Is this just a remember-to-vote moment? Is this a we-need-to-cultivate-sites-of-power-outside-of-governments-and-corporations moment? Is this a man-the-barricades moment? What do you think about how we can disrupt the power structure enough to actually get these changes to happen?

This is hard; otherwise you would have done it already. I tend to think that our best answers lie in multiple power sources watching each other. We know that unbridled government power is bad; we know that unbridled corporate power is bad. But if we can get the two of them watching each other, they can keep each other in check. And it can't be just the two of them. There's a strong
role for civil society, for NGOs, to monitor both; there's a strong role for journalism to monitor both. In any robust political environment there are multiple political parties that are monitoring and judging each other, and there are multiple sources of corporate power that are monitoring each other. The problems we have today we can, very broadly, blame on the large monopolistic powers that don't really have the checks you have in a more dynamic and fluid market, and, at least in the United States, on a very narrow spectrum of political thought: we have a far-right party and a middle-right party, which really limits the amount of monitoring they're doing on each other. For the NGOs and the press, these are hard issues. People may have noticed that Julia Angwin has a new media venture; this is exciting. She's left ProPublica, along with Jeff Larson, and is going to do data-driven investigative journalism. This is fantastic; this will serve as a check against some of these problems with algorithms and autonomy and automated decision-making that we wrestle with. That is a phenomenal thing; we need more of those, we need dozens of those. Every time we get an email from Ron Deibert telling us of another great piece of investigative computer forensics he's done, exposing government abuses of power, surveillance, and control in various countries: we need dozens of those too. There isn't a good answer here. We want to push power down, take our autonomy and push it up, but, hand-wavy, we need more distributed power, and I think that's how we get it. And this is again where I will turn to you political philosophers: how do we do that? How do we make government work in the 21st century? What does a representative democracy look like in this century? I mean, you
can really make the argument that the current constitutional democracy is the best form of government mid-18th-century technology can produce. When travel and communications are hard, we need to pick one of us to go all the way over there and make laws on our behalf; that makes a lot of sense. But now travel and communications are easy, so maybe that makes less sense. But what will replace it?

All right, we're going to open up for questions in a minute, but I have one more question for you. You talked about the need for the Internet+, as you call it, to be resilient, and that connects to something I've been thinking about, which is that I think we're going to need to spend more time thinking about how to make our systems fail gracefully. Part of the problem with the Google Photos debacle wasn't just that black faces weren't recognized at the same rate, and as well, as white male faces; it was also the particular way that failure happened. A photo of a black face was classified as a photo of a gorilla. That's a whole different thing from differences in rates of "oh, I can tell that's the same person" or something. And I think that especially in this new era you've called our attention to, where all of a sudden these systems are reaching out into the real world in various ways, where there's no undo button once we've got this problem in a 757, this failing gracefully is going to be an important feature. Is that part of what you have in mind when you talk about resilience? Maybe you could say a little bit more about what you're thinking there.

It is, very much, and we know how to make systems fail gracefully. There are sort of two ways, both about redundancy. There's the airplane way, where there are multiple ways to do something: if the landing gear fails to deploy via the deploy-landing-gear button, there are two or three backup systems, including going under the bottom of the plane and hand-cranking
them down manually. So that is one way systems can fail gracefully: there are multiple backup systems. The other main way we do it is like this building, which doesn't really have multiple ways to hold up the roof; it's just been over-engineered. If we think the load is going to be X, we design the struts to take a 2X load. Both of those are ways to fail safely. Your car, as much as possible, fails safely: if you take your hands off the steering wheel, it doesn't erratically go left and right. You can imagine steering that felt like you had to hold it steady, but no, it naturally is steady, because that is a better way for it to fail. I think we need to start doing that with our systems; this failing catastrophically isn't going to work. That's really what I'm talking about with the cover and the title. It's a little bit science fiction, but not really: the notion that you could have a system where in one click you ruin it for everybody is how our computers work. People may have followed this: there's a lock company called Onity; they make locks for hotel rooms, the ones with the key cards. A vulnerability was discovered in their product, I think it was last year (the links are in the book), and what it means is that every single hotel room secured by this lock is now insecure. Surprise. That's a big mode of failure for computers, and they do not fail gracefully: the way you fix this is you walk up to each lock in the world, one at a time, and fix it. That is not failing gracefully; that is failing catastrophically, maximally catastrophically, and I think that is the wrong way of thinking. So we do need this better way, because we're not going to design absolute security. Nothing is absolutely secure; these systems are too complex. But maybe we can contain
the insecurity. After the blackout of 2003, where we lost power in the northeast quadrant of the United States and the southeast quadrant of Canada, the power grid was redesigned so you wouldn't have those kinds of catastrophic failures. The failure was a particular power line in mid-Ohio, and that started a cascade of failures that became a huge blackout. We try to limit those. Airlines finally realized that they can redo how planes are scheduled, so that if there's a weather failure in a certain city, it doesn't affect the entire country; it just affects planes going in and out of that city. These are ways to decouple, to decentralize, to disengage, in order to fail more safely and securely. And there's a lot here. You talked about some of the tech problems, and there are a lot of tech solutions that aren't being deployed for all these policy reasons, and there are a lot of tech solutions we don't have yet. But what I tell people is: this stuff is hard, but it's send-a-person-to-the-moon hard; it's not faster-than-light-travel hard. We can do this if we have the economic incentives, but we're missing all the incentives for companies to do it better. Equifax: the one-year anniversary was a couple of weeks ago. Big deal, everybody's personal information in the country was stolen, big press event, legislators were annoyed. I testified in front of one of the House committees, and there were angry legislators on both sides of the aisle saying this cannot stand, something must be done. Fast forward one year: nothing was done, zero. The lesson you learn is: skimp on security, hope for the best, and if the worst happens, weather the press storm, get beaten up verbally by Congress, and then nothing happens. Facebook has learned the same lesson; this is going to happen again, and don't think anything is going to be different. That's unfortunate.

On that note, I think it's time to open questions. I think there are mikes. Question right there. Sir, thank you.
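The layered-backup pattern Schneier describes, a primary mechanism with independent fallbacks behind it, like the hand crank behind the landing-gear button, can be sketched in a few lines of code. This is an illustrative sketch only; the function and layer names are invented for the example, not taken from the talk or the book.

```python
def fail_gracefully(layers):
    """Try each mechanism in order, from primary to last resort.

    Each entry in `layers` is a zero-argument callable returning True on
    success. A crash in one layer is contained so it never takes the other
    layers down with it; total failure happens only when every independent
    backup has been exhausted.
    """
    for attempt in layers:
        try:
            if attempt():
                return True
        except Exception:
            pass  # contain the failure and fall through to the next backup
    return False


# Hypothetical landing-gear layers: the button fails, the electric backup
# fails, the manual hand crank still works, so the system as a whole succeeds.
button = lambda: False
electric_backup = lambda: False
hand_crank = lambda: True

print(fail_gracefully([button, electric_backup, hand_crank]))  # True
```

The same shape is what the Onity locks lacked: one shared break became a global failure instead of many small, local ones, which is exactly the decoupling Schneier points to in the power-grid and airline-scheduling examples.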
Sir, wait, let's get a microphone to you. Thanks, that was incredibly interesting; I'm really happy to be here. I'm a fellow and a journalist, and I'm interested in algorithmic accountability reporting. I had my own little party when I learned yesterday about the twenty-million-dollar grant to The Markup, which you mentioned. Could you expand a little bit on the role journalism has played in this field up till now, whether there has been a positive example that changed the field, and whether you have a wish list of issues journalists should focus on next?

I can talk a bit about the history. There have been great wins. Julia Angwin has done some great reporting on racism in the algorithms that make bail and parole decisions. Kashmir Hill has also done reporting on algorithms and discrimination and bias and opacity. Frank Pasquale has written a book called The Black Box Society, about algorithms, their lack of transparency, and how that's bad for society. Right now, I think journalists are the only people holding algorithms and algorithm designers to account. Maybe in Europe there is some government accountability, but I don't think it's very much, so journalists are what we've got here. Among non-journalists, Latanya Sweeney here at Harvard has done some great work; she's the one who proved that some of Google's ads were racist, in horrifying ways, in ways where you just look and say: don't you pay attention?

I would add that the examples Bruce mentioned are doing such important work, and it really is one of the only areas where this stuff is being called attention to in a really public way, and it's vital. As for what journalists should be doing, the other part of your question, I would say: precisely focus on the kinds of issues those pieces are about. There's also a
temptation in other parts of the press to focus on what Bruce likes to call movie-plot scenarios, the really wild, extreme disaster scenarios, and I think that's not helpful in terms of people understanding what the technological challenges actually are, or what the real, plausible locations of harm are. So focus on these not-as-sexy kinds of things, things you don't feel like Michael Bay would turn into a film: these kinds of scenarios about bail, parole, lending, even things like hotel room keys, just so people have a sense of the ordinary objects involved rather than the exotic ones.

I mean, Harvard and many universities use automatic scoring mechanisms. We know Palantir has been hired by the US government to use big-data analytics to find illegal aliens. That sounds horrifying, but can we understand: what's your false positive rate, what's your false negative rate, how good are you, what sorts of controls, what sorts of legal protections, what sorts of appeals are there? Any of those things. There's a lot algorithms are going to do. They're going to make more and more decisions, and they're going to be hidden. You're going to be denied a government service, you're going to be denied admission to some kind of corporate event, you're going to see a certain kind of ad when you go on Facebook and not see another. Algorithms make all those decisions, and they make them choosing some definition of fairness. David Weinberger, who isn't here, sent around a great little essay recently on five definitions of fairness; go read that. This stuff is robustly hard, but a little bit of transparency could go a long way.

Can I slip a question in? Well, you're in charge. Exactly. So Bruce, you had written several years ago about the feudal internet. I was wondering if your thinking on that has changed.

Not at all. Feudal with a "d," not futile. By this I mean the Internet
where you have a protector. And we know this, right? Some of us are Apple people: we have iPhones and Apple computers, our data is in iCloud, they keep our calendar and our email and our photos, and they are our protector. Others of us use Google in the same way, or Microsoft, and it's almost like we are serfs to these feudal lords: they offer us protection in exchange for all of our data. And, kind of annoyingly, it's not half-bad a deal, because doing it yourself is hard. You don't want to be a ronin; you want to have a protector. I don't think that's changing. As we move to the Internet of Things, we are seeing these big ecosystems, and the fight now is over who's the controller. The Amazon Echo is all about being the central hub for all the things in your home. Right now, if you have an IoT anything, your phone is the controlling hub, which means the phone gets to set the rules. The coffee maker doesn't; it's just a coffee maker. But if it wants to have its app in the iPhone store, and if it doesn't, nobody's going to buy it, it has to follow Apple's rules. So Amazon wants the Echo to be that hub; Google is doing it with Android and with its voice assistant. Everyone wants to be that point of throttle, and that's all about control. That's going to be another point of feudalism, and you will say, I don't know anything about these devices, but they've got an iPhone app, so I trust that Apple has done some vetting. This is good as long as our feudal lords are benevolent; it goes bad if they turn evil. The history of corporations doesn't bode well, but yes, that's what this is all about.

Hi, my name's Parker Abel. I'm a secure and assured systems engineer at Draper Labs. MITRE recently released the Common Weakness Enumeration list, and the top 25 most common
weaknesses for systems are hardware-based. We have discussions about policy issues and software issues, but the more software you add, the more insecure a system gets. That's why DARPA started a challenge for inherently secure processors; it's actually been won, and they've been created. So the defense industry is really concerned about hardware-based security, and often we see the military being ahead of the curve in terms of advancement. What is it going to take for industry to start thinking, OK, hardware-based security is crucial and we actually need to focus on it? Because I work on it every day, and what I see is everyone plugging fingers into holes in the dam when we need to reface the dam.

I talk about that, and I think the reason no one talks about it is that it's insurmountably hard. This is supply-chain security. You saw this in public debate recently: should we, in the United States, buy Kaspersky antivirus? Should we trust a Russian antivirus program? And also in the debates over whether we should buy ZTE phones and Huawei network equipment: should we buy Chinese-made devices that plug into our network? That's an important question, and of course it's not just the U.S. In 2014 China banned Kaspersky, and they also banned Symantec, because something US-based can't be trusted. India has banned Chinese-made hardware. In 1997 there were debates in the United States over whether to trust Check Point, an Israeli company, with our security. And remember Mujahideen Secrets, the encryption program written by ISIS, because of course you can't trust Western encryption. But that really is just a discussion of which country the company that makes the product is located in. This is not a U.S.-
made product. It's made in one of several Asian countries; the chips are made in one of several, I think again, Asian countries, probably different ones; and the programmers are probably carrying a couple hundred different passports. Any one of those steps in that chain can subvert the security of this device. There's a great paper from last year: you can break an iPhone's security with a malicious replacement screen. Surprise. So these hardware problems, the reason they're hard is that the industry is deeply international. No one will ever buy a US-only iPhone; it would cost ten times the price and nobody wants it. The reason nobody's thinking about this is that no one wants to, because it's hard. Even the US military just kind of pretends it's fine to buy chips from China, because it has no choice. And yes, another paper, from about four years ago: you design your chip and you make a mask, which is basically what you give to the chip maker to say, make me a couple of million of these, and they can take your mask and slip in another layer that you don't know about. When you get your design back, it does, from every test you can run, exactly what you asked for and nothing more, yet it has been subverted and you don't know it. So that's doable. If I were a country, I would be doing that to other countries. Wouldn't you? So yeah, these are big problems, and what will it take to get people to think about them? I think it would take a disaster, and even then: it's really hard to think about something that is expensive and that nobody wants. You have to be forced to, and this one is super expensive. We really have built a very international tech industry that gets its expertise in programming from all over the world, and its expertise in hardware and in fab. We go where labor is cheapest to do some parts; we go where labor is smartest to do other parts; we go on the net to do parts
that are distributed, and nobody wants to do anything about that. That is a terrible answer, and I'm sorry, but I'm glad you're thinking about it, and glad somebody is.

I want to pause: you've mentioned the title and the cover. So the title is mine, and I'm so proud of it, and I'm happy with the cover. I'll give you two reasons I like it. One, there's a button that says OK; there's only one button, it says OK, and it's clearly not OK. And two, it looks like this thing has been throwing error messages for the past hour and no one's been reading them. So the cover has curb appeal, which is what we want; I have actually seen it in airports, which is kind of cool. And this is my theory of book writing (we might have another book-writing seminar in the spring; I'm thinking about it). There's a chain of readers: the title gets you to read the subtitle, the subtitle gets you to read the flap, also known as the Amazon summary, and that gets you to read the book. It is very much a chain, and at any step I can lose my reader. It all has to work or I don't get a reader. This is my first ever clickbait title, and I kind of back off from it around page three of the book: all right, I got you here, but let's be reasonable, guys. But I really need that flow. No one, unless they kind of know me, is blind-buying a book like this, and that's not the reader I'm trying to hook. I'm trying to hook a reader who will go through those steps. So I tend to like a provocative title, a descriptive subtitle, a slightly sensationalist but not inaccurate flap, and then a kick-ass good book. That's my recommendation if you are writing a book.

My question is: is monopoly our friend or our foe when we're trying to deal with this kind of problem? Because it sounds like there's just one manufacturer of hotel keys, and when they fail,
every hotel in the world fails. If there were 20 of them, the failure would be more contained. On the other hand, imagine if there were 20 dominant operating systems instead of three or four, for your phones or your computers. It seems like that would make things harder to fix. So how do you see this?

Well, this is the trade-off between having a few and investing, versus having many and getting some kind of safety in numbers. You see this in reproductive strategy: there are two basic systems of reproductive security. One is what we primates tend to use: have very few offspring and invest a lot of resources in bringing them to adulthood. Then there's the lobster method: have a couple of million offspring, ignore them completely, and play the numbers game. Both work. We're likely going to have some hybrid, because there are costs to multiplicity that aren't security costs. Multiple OSes are annoying for a lot of reasons, for interoperability, and in some places we don't want it: we want everyone to use TCP/IP, we want everyone to use PDF files, because we want that ability to transfer things, and to use the same photo format and video format; otherwise it's not going to work. So in some places there are natural monopolies of interoperable formats. Some monopolies are accretive because they just get more valuable. Facebook: no one's on Facebook because they like Facebook. We're all on Facebook because if we're not, we don't get invited to parties, or whatever. We're on Facebook because the people we need to communicate with are on Facebook. And everyone remembers the moment they had to join Facebook, right? There was something you couldn't do otherwise.

I am probably the only person still not on Facebook, and that's OK.

That's right, but it has a social cost. There
are things you don't know about in your friends' lives; there are social events you don't hear about. It is a social cost. I notice it, I feel it; I am ornery enough to pay it, but that makes me three sigma and not a useful example. On the other hand, there is a lot of benefit to there being multiple sources of even social media platforms, or lock manufacturers, or operating systems, or phones, or apps; there is more security in that diversity. It's going to be some combination, and different industries have different sweet spots for where you draw that line. I don't know where it will be drawn, but there will be a line, and it will be different in different things.

Hi Bruce, thanks for this fascinating and slightly scary talk. I want to do a little bit of a dive into one of your examples, the hotel key problem. Pretty much everybody kind of gasped when you said you have to go to every lock to fix it, but I had a completely different reaction, which was: well, every hotel employs people who go to every lock pretty much every day to perform something you could consider a public health function, which is cleaning the room. Now, I'm sure the hotel lock industry isn't geared to have maids fixing locks, but why not?

I think it's a good question. I think it's because the company never envisioned it. You could easily imagine, if we here were designing a better hotel lock, we would say: hey, we're going to need to do maintenance, we're going to need to do software updates, so let's make it so that you just plug a USB stick in. Or maybe that's a bad idea, but pretend it's OK, just gen one. Let's put a USB port in there so an unskilled person could do it, and you could integrate it into the normal life cycle of a hotel room, instead of thinking, well, we designed it perfectly, nothing bad can ever happen, we don't have to think about that. But yeah, I think that's going to matter when we think about
failing safely and failing securely: what is going to be the update mechanism? And that's actually a really good idea. So maybe I don't make it a USB key, but you want to make it something like that. Or maybe we mail each hotel a specialized device, and they plug it in, push a button, maybe type a code, and it pushes the software update. But I think that is going to require some better engineering, and right now the lock company isn't that sophisticated. This is one of the problems with IoT devices. The reason this phone is secure is that there's a team of engineers at Apple (and there's one at Microsoft and at Google for their devices) designing it to be as secure as possible in the first place, and when a vulnerability appears they write a patch and push it to my device, and this thing improves. That lock was designed offshore by a third party, by an engineering team that came together to design it and then dispersed. There's no group of engineers waiting to patch it, and maybe that lock isn't even patchable. Your router at home: the way you patch it is you throw it away and buy a new one. That's the mechanism. There's no patching mechanism, let alone a team that could write the patch. Now, throw it away and buy a new one is a valid security upgrade mechanism. We know this phone is also secure because every three to five years we all get new devices; these things have a pretty fast churn, and the new iPhone, the new Windows, the new whatever is better designed and more secure than the previous one. When you get to consumer goods, you do not have that. You buy a DVR, you're going to replace it in ten years. You buy a refrigerator, you want it to last for 25 years. I bought a new thermostat for my home last year; I expect to replace it approximately never. We don't know how to deal with that kind of life cycle.
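A hedged sketch of the kind of update mechanism being described for a lock or similar IoT device: the device installs a new firmware image only if a vendor-supplied authentication tag verifies. Every name here is hypothetical, and a real design would use public-key signatures (for example Ed25519) rather than a shared secret, so the lock never holds a key capable of signing updates:

```python
import hashlib
import hmac

# Hypothetical key provisioned at manufacture; illustrative only.
VENDOR_KEY = b"example-secret-provisioned-at-manufacture"

def sign_firmware(image: bytes, key: bytes = VENDOR_KEY) -> bytes:
    """Vendor side: compute the tag shipped alongside a firmware image."""
    return hmac.new(key, image, hashlib.sha256).digest()

def accept_update(image: bytes, tag: bytes, key: bytes = VENDOR_KEY) -> bool:
    """Device side: install the image only if the tag verifies."""
    expected = hmac.new(key, image, hashlib.sha256).digest()
    # Constant-time comparison, so verification itself doesn't leak the key.
    return hmac.compare_digest(expected, tag)

firmware = b"lock-firmware-v2"
tag = sign_firmware(firmware)
print(accept_update(firmware, tag))        # True: genuine update installs
print(accept_update(b"evil-image", tag))   # False: tampered image rejected
```

The hard part, as the talk stresses, isn't the crypto; it's having an engineering team still around to produce the signed image, and a delivery path (USB stick, mailed dongle) that hotel staff can actually use.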
Or think of a car. You buy a car today, let's say it's two years old, and you drive it for ten years and sell it. Someone else buys it, drives it for ten years, and sells it; it probably gets put on a boat and sent to Latin America, where someone else buys it and drives it another 20 years. OK: go home, find a computer from 1976, boot it up, try to run it, try to make it secure. We actually have no idea how to secure 40-year-old systems at the consumer level; we haven't the faintest clue. So how do we make this work? Option one is to replace cars on the same life cycle as the phone. That won't work; that would literally cook the planet. That is not going to be the answer. Is it going to be that Ford has to maintain a testbed of 300 makes and model years and test every patch? Anyone who's an engineer is going to cringe at that notion; we don't know how to do that either. We're going to have to figure this out, both at the level of cars, the expensive things, and at the level of the cheap things. You have a DVR: it could have been part of the Mirai botnet, and one, it was a really dumb vulnerability; two, you have no way of knowing, and you kind of don't care; and three, the only way we're going to remove that vulnerability is when you turn the thing off and throw it away. We're stuck. This is hard.

We're approximately 40 days away from midterm elections, so could you take everything you mentioned about unique systems and disparate responses, apply it, and maybe give a prognosis on our voting systems?

So I've written a lot about voting and election systems. The good place for information is verifiedvoting.org; that's where I send people who want to learn about which machines are used in which jurisdictions and what the vulnerabilities are. There is a lot to be worried about, and there are three areas of concern: the computerized systems that determine who's allowed to vote and where, that is, our registration systems; the actual voting machines; and the computerized systems that tabulate
all those machines into a final result. All three are vulnerable in different ways, and we know that at least the first two were targeted at some level by the Russian government in the 2016 US election. It's hard to know what will happen; certainly there are lots of vulnerabilities. I worry just as much about appearances as reality, and this is important: elections serve two purposes. The first is to choose the winner; that's the obvious one. The second is to convince the loser. To the extent an election doesn't convince the loser, it has failed as a democratic mechanism, and if the loser can say that election wasn't fair, I didn't actually lose, then we've lost everything. So elections need to be secure in appearance in addition to reality. I don't know if anything will happen; my guess is not, just because the propaganda is so much easier. But we don't know. Lots of things have happened where we don't think it's enemy action, where we think it's mistakes. There have been weird mistakes: machines have been opened up and there have been zero votes; machines have been opened up and some count showed a negative number of votes. But those all seem to have been mistakes, errors, not actual malicious action. So: don't know, but hard. I would hear what Abby's thinking about this.

One of the questions I get from listening to Bruce speak is where the locus of control and responsibility lies, and the same question applies to where the public interest resides in the system. Are you happy with the answer here, and what's your perspective on what we're going to do about this?

Well, thinking as a philosopher about voting, I take Bruce's point really seriously about the communicative role of the voting process. Just this week The New Yorker has a piece about
Russia turning the election through Facebook activity in a few targeted places. It looks like there's actually pretty good evidence that the election may have turned on that kind of social media manipulation; it didn't need to be the voting machines, just as you were saying. And I think what you worry about, as a philosopher, is how we can manage all of these questions about what to do about our elections precisely when the system is distributed and vulnerable in so many ways. It becomes the kind of thing where we can't just say, oh well, we will secure our elections. It's about how we're going to communicate about this. Is it more worrying to really publicize these vulnerabilities, from the point of view of keeping our system going? We're already in a vulnerable moment for democracy; do we think it matters enough on the margins that we risk saying too much? There are really very puzzling questions about what to do in this moment, where things feel a little bit like they're teetering on a brink.

The US has two particular problems that other countries don't have. One, we don't have a bureaucracy to ensure the integrity of elections in the way other countries do. For security we relied more on mutually distrustful parties watching each other: put a Democrat and a Republican at the table, and each will watch what the other does. That was great for mid-twentieth-century threats; it was a reasonable security solution, but it works less well today. Our second problem is that we don't have one election; we have something like fifty-two separate elections, and they're all very different, under different rules and different machines and different systems and different authorities. We can't, as a country, secure our elections, because we don't, as a country, have an election. We kind of pretend we do. Those two things make it harder for the U.S.
to do this than the UK or Australia or France or Japan or any other country that tries to run free and fair elections.

Philosophers talk about the difference between ideal theory, which is about what things should be like, and non-ideal theory, which is about what we do from where we are, and this feels like a moment where we're deep in the weeds of non-ideal theory, and there are striking puzzles about that. So I'll end there.

Thank you very much for your talk. You were speaking about not blaming developers for unintended flaws, but what about the growing industry of zero-day vulnerabilities? That is another actor that is playing a huge role.

So there is a market. There's basically a cyberwar military-industrial complex that has sprung up, and it has several tiers. It has major defense contractors selling cyberweapons to countries like the United States. It has mid-tier cyber-arms manufacturers selling weapons to countries that we'd all agree probably shouldn't have them: a Kazakhstan, and a Sudan, and a Uganda, Mexico; you see a Syria in the Citizen Lab reports. And there are people who sell cyberweapons to criminals, and there are markets in vulnerabilities and attack tools and exploits. One of the ways to judge how secure your system is, is to look at the going price for a vulnerability in it. Which means, if you've got an iPhone you're doing pretty well, and if you've got an Android phone you're doing less well, because I think a good iPhone exploit is now worth half a million dollars. And that market perturbs the world, because if you are a software engineer you can make a legitimate sale (this is not selling to criminals; this is selling to actual companies that have offices and mailboxes and pay their taxes): you can sell an iPhone exploit to a cyber-arms manufacturer. It'll be used in ways you
probably don't like; you don't have to look that carefully. But if you can get by the ethics, vulnerabilities go that way. Or you send it to Apple, and they'll give you a bounty of a few thousand dollars, and you probably feel better for the world, but what's that worth? This is hard, and it again shows how where we are is making solutions harder. I argue in the book that we need to adopt a defense-dominant strategy and say that defense has to win; the system is too important. What's his name, the name will come to me in a second: Dan, Dan Farmer talks about this, that the US should buy all the vulnerabilities. We should pay top dollar and buy everything, and then immediately give it all to the defense. That would be a smart use of our dollars. Buying them and using them for offense is actually a dumb use of our dollars, and letting other people buy them is also a dumb use. We should corner the market, corner the market and destroy it. That's radical, but it's an interesting way of thinking about it, and it's much more of a public-health way of thinking. If we can eradicate malaria in Africa, that improves us here; that's not just foreign aid, that's planetary health, and that's a good idea. And if we can subsidize China to produce cleaner energy, that's not foreign aid, that's helping us here. I mean, come on, people, we're all in this together. But yeah, this is a different way of thinking.

Jim Gettys has been arguing that the best approach to dealing with some of these security issues is requiring everything to be open, including firmware, hardware specs, and software: essentially the bazaar approach to ensuring, or trying to ensure, system security, as opposed to the cathedral approach of trusting Apple or Google or the Chinese government to protect
everybody. What do you think of that trade-off?

You know, I don't think it's that important. Things that are theoretically open are often practically not open. I think there's some value in openness, but there's also value in proprietariness. I'm not convinced it would make an appreciable change in security. It might make a change in other things; it might be good for society in ways that are broader than security. But for security: Microsoft actually has a really secure OS right now; they did a good job. I think they are more secure than Linux, if you know what you're doing. And that's not because they're closed; it just shows that closed doesn't necessarily have to be worse. So I don't have an ideological dog in that fight, although you can certainly argue from a lot of other social goods that openness is important. Certainly, if there's going to be an algorithm that determines whether I get released on bail, I think it's important for society that the algorithm be open. If an algorithm decided that I was drinking, a breathalyzer algorithm, I should be able to examine that source code and even test it in court; that just seems like a no-brainer. And that's less about security and more about public process, because what we learn again and again is that when these algorithms are the subject of scrutiny, they're lousy; they're embarrassingly bad; they work at random occasionally. And we have this bias to trust computers. We might not in this room, but go outside this room: the computer is always right, is what people think, because it's a computer, of course it's correct, how could it be wrong? It does calculations; it doesn't make mistakes. We can laugh, but that is not the prevailing opinion. And we do know how to deal with that. We can put that algorithm in escrow; we
can deputize a commission that signs whatever nondisclosure agreements and analyzes it. We would all accept that. I'm going to go to a reception tonight, and I'm not going to vet the food, but I know that there's an organization that did: there were health codes and inspectors, and it all happened. So we can set up a system where somebody we all trust looks at Google's search algorithm and makes sure it is not racist or sexist or otherwise biased, or classist, or subservient to Russian trolls, whatever things we agree we don't like. We don't all have to look at it ourselves. We can solve that; it's not the only time that we, as a public, have had to vet proprietary things. I really think that's an argument that might make progress.

I'm going to ask the last question. Before we disappear out into the world, what should we do? You've been thinking about this. Who are we going to lobby, what are we going to write, what are we going to study, what do we invest in?

Here's something I'm working on that I would like help with, so maybe you can help me. I'm trying to think about how we educate people in different pieces of the process so that they can play a role in making these things better. I'm working on a curriculum for engineers to help them identify and address ethical issues created by their work. There's also a program I'm involved with at the Media Lab that's going to be about democratizing AI through K-12 education, so that all kinds of kids can grow up fluent with these tools. We need to think long term about things like this: how we make it the case that all of us are more equipped to engage with these issues in the places where we find them, as we encounter them in our lives and in our work and in our politics. That, I think, is a crucial thing. And also: vote is my answer.

Similar. I end my book with a call for getting policymakers and tech people to understand each other.
Not just on cybersecurity: pretty much all of the hard problems of this century, the hard policy problems, are deeply technological. AI, the future of work, climate change, food policy. And to the extent that we have policymakers and technologists talking past each other (go watch the Facebook hearings if you want to see what bad looks like), we're going to get terrible policy and terrible tech. So I love the idea of teaching programmers and engineers ethics, and you also want to teach policymakers what software is like. We need to have this discussion across what C. P. Snow calls the two cultures. This is not a new problem, but it has become, I think, much more urgent. The going-dark debate, over whether the FBI should be able to break into iPhones, is all tech and policy completely talking past each other. This is what places like Berkman should be doing: getting tech and policy together. We need technologists on congressional staffs, at federal agencies, at NGOs, in the press. I am trying to teach computer security at the Harvard Kennedy School; I'm going the other way, trying to get policy people to understand tech. Everybody's got to meet in the middle, and I think any long-term solution is going to include that.

That's great. This has been wonderful, so buy a book. Bruce, are you willing to sign?

I'm willing to sign.

Excellent. Please join me in thanking Bruce and Abby. Thank you. [Applause]
Info
Channel: The Berkman Klein Center for Internet & Society
Views: 820
Rating: 4.53 out of 5
Keywords: Internet
Id: FMv2CudhtOs
Length: 70min 46sec (4246 seconds)
Published: Thu Sep 27 2018