[MUSIC PLAYING] BRUCE SCHNEIER: Thank you. So this is the-- this is the book. So it has one-- my
first-ever clickbait title. [LAUGHTER] And I like the cover. I like the cover
for two reasons. One, there's only one
button that says, OK, when it's clearly not OK. [LAUGHTER] And it looks like this thing's
been throwing error messages for the past hour, and nobody
has been paying any attention to them. [LAUGHTER] So this is-- a
book as a computer. And what I'm writing about
is security in a world where everything is a computer. And I think this is the way
we need to conceptualize the world we're building. This smart phone is
not a phone, it's a computer that
makes phone calls. And similarly,
your microwave oven is a computer that
makes things hot. Your refrigerator is a computer
that keeps things cold. An ATM machine is a
computer with money inside. A car is now a computer with
four wheels and an engine. Actually, that's wrong. A car is a distributed system of 100-plus computers with four wheels and an engine. Right? This is more than the internet. It's the internet of things, but
it's more than that, as well. In my book, I use the
term internet plus. I hate having to invent
a term, but there really isn't a term we have for
the internet, the computers, the things, the big systems,
like power plants, the data stores, the
processes, the people. And it's that
holistic system that I think we need to look at
when we look at security. So if everything is
becoming a computer, it means two things
that are relevant here. That internet security
becomes everything security, and all the lessons and problems
of internets and computers become problems everywhere. So let me start with
six quick lessons of computer security, which
will be true about everything everywhere. Some of them are
obvious in computers, not so obvious elsewhere. The first, most software is
poorly written and insecure, right? We know this. Basic reason is the
market doesn't want to pay for quality software. Good, fast, cheap--
pick any two. We have picked fast
and cheap over good. With very expensive exceptions,
like avionics and the space shuttle, most software
is really lousy. It kind of just barely works. Now, for security, lots
of vulnerabilities. Sorry-- lots of bugs. Some of those bugs
are vulnerabilities. Some of those vulnerabilities
are exploitable, which means modern
software has lots of exploitable
vulnerabilities, and that's not going to change any time soon. The second lesson is that
the internet was never designed with security in mind. That seems ridiculous today. But if you think back to the
late '70s and early '80s, there were two things
that were true. One, the internet was used
for nothing important ever. [LAUGHTER] And two, you had to be a member
of a research institution to have access to it. And you read the
early designers, and they talked about how
limiting physical access was a security
measure, and the fact that you could
exclude bad actors meant that you didn't have
to worry much about security. So a decision was
made deliberately to leave security
to the end points, not put it in the network. Fast forward today,
and we are still living with the results of
that in the domain name system, in routing, in packet
security, email addresses. Again and again, the
protocols don't have security, and we are stuck with them. Third lesson, the extensibility
of computerized systems means they can be
used against us. Extensibility is not something
that non-computer people are used to. Basically, what
I mean by that is you can't constrain the
functionality of a computer because it's software. When I was a kid, I had a
telephone-- big, black thing attached to the wall. Great device. But no matter how
hard I tried, I couldn't make it be anything
other than a telephone. This is a computer
that makes phone calls. It can do anything
you want, right? There's an app for that. Because this can be programmed,
because it's a computer, it can do anything. You can't constrain
its functionality. It means several
things for security. Hard to test this thing
because what it does changes, how it's configured changes. And it can get additional
features you don't want. That's what malware is. So you can put
malware on this phone, or on an internet-connected
refrigerator in a way that you can't
possibly ever do it in an old electromechanical
refrigerator. Because they're not computers. Fourth lesson is
about complexity. A lot of ways I can say this. Basically, the
complexity of computers means attack is
easier than defense. I can spend an hour
on that sentence, but complex systems
are hard to secure. And it's the most
complex machine mankind has ever built by
a lot, which makes this incredibly hard to secure. Hard to design
securely, hard to test-- everything about it. It is easier to attack a
system than to defend it. A fifth lesson is that there
are new vulnerabilities in the interconnections. As we connect things
to each other, vulnerabilities in one
thing affect other things. Lots of examples. The Mirai botnet attack on Dyn. Vulnerabilities in
internet-connected digital video recorders-- and
webcams, primarily-- allowed an attacker to create
a botnet that dropped a domain name server that, in
turn, dropped a couple of dozen really popular websites. In 2013, Target Corporation was attacked through a vulnerability at the HVAC contractor for several of their mid-Pennsylvania stores. Earlier this year, there was a
story of a casino in Las Vegas. We don't know the
name of the casino. They had their high
roller database stolen, and the hackers got in through-- and I'm not making this up--
their internet-connected fish tank. [LAUGHTER] So vulnerabilities-- this can
be hard because sometimes, nobody's at fault. I read
a blog a few months ago about a vulnerability that
results from the way Google treats email addresses. The dots don't
matter for your name. And the way Netflix treats email
addresses, the dots do matter. Turns out, you can play
some games with that. Who do we blame? I'm not sure we blame anybody.
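To make that dots example concrete, here is a minimal sketch, assuming, as Google documents, that Gmail ignores dots in the local part of an address while a dot-sensitive service compares the string literally. The function name and sample addresses are hypothetical.

```python
# Minimal sketch of the Gmail/Netflix mismatch: Gmail ignores dots in the local
# part, so the dotted and undotted forms deliver to the same inbox, while a
# service that compares addresses literally sees two different accounts.

def gmail_canonical(address: str) -> str:
    """Collapse an address to the mailbox Gmail actually delivers to."""
    local, _, domain = address.lower().partition("@")
    if domain in ("gmail.com", "googlemail.com"):
        local = local.replace(".", "")
    return f"{local}@{domain}"

a, b = "john.smith@gmail.com", "johnsmith@gmail.com"
print(gmail_canonical(a) == gmail_canonical(b))  # True  -> one Gmail inbox
print(a == b)                                    # False -> two "different" accounts
                                                 #          on a dot-sensitive service
```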
There's a vulnerability in PGP, which is actually not really a vulnerability in PGP. It's a vulnerability in the way email clients handle PGP, where everyone is convinced everyone else is at fault. And these
kind of things are going to happen
more and more. The last lesson is that
attacks always get better. Attacks always get
easier, faster, cheaper. Some of this is Moore's law. Computers get faster,
so password guessing gets faster as computers get
faster, not because we're smarter about it. But we also get smarter. Attackers adapt, attackers
figure out new things, and expertise flows downhill. What, today, is a top-secret
NSA program, tomorrow, becomes a PhD thesis, and the next
day is a common hacker tool. And you can see this
again and again. An example might be IMSI
catchers, fake cell phone towers, sting rays. The cause of them
is that cell phones don't authenticate to towers. They automatically trust
anybody who says, I'm a tower. So if you put up
a fake tower, you can now query phones
and get their addresses and sort of know who is there. This was something that
the NSA used, the FBI used. Big government
secret for a while. Expertise flowed downhill. A few years ago-- I think it was-- Motherboard looked
around Washington, DC, and found a couple of dozen of them
run by we-don't-know-who around US government buildings. Right now, you can
go on Alibaba.com, buy one of those things
for about $1,000. In China, they're used
to send spam to phones. Or you can get a software-defined radio card, download free software, and make your own. What started out as
something that was hard to do is now easy to do. So those are my six
lessons that are going to be true for everything. And none of that is
new, but up to now, it's been basically
a manageable problem. But I think that's
going to change. And the reasons are automation,
autonomy, and physical agency. Computers that can do things. So if you do computer security,
you've heard of the CIA triad-- confidentiality, integrity,
and availability. Three basic properties
we deal with in security. By and large, most of our issues are confidentiality issues. Someone stole and
misused our data. That's Equifax, that's Office
of Personnel Management, that's Cambridge Analytica. That's all of the
data thefts ever. But when you have
computers that can affect the world in a direct
physical manner, integrity and availability
become much more serious. Because the computers
can do stuff. There's real risk to
life and property. So yes, I'm concerned that
someone hacks my hospital and steals my private
patient medical records. But I'm much more concerned
that they change my blood type. I don't want them to
hack my car and use the bluetooth microphone to
listen in on conversations. But I really don't want
them to disable the brakes. Those are data integrity and
data availability attacks, respectively. So suddenly, the effects
are much greater. And this is cars,
medical devices, drones, any kind of weapon systems,
thermostats, power plants, smart city anything, appliances. I blogged, a couple of days ago,
about an attack where someone-- just theoretical-- if you could hack enough major appliances, you could turn their power on and off in synchronization, affect the load on power plants, and potentially cause blackouts. Now, very much a side effect,
but once I say that, you say, well, yeah, duh, of
course you can do that. Very different sort of attack.
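As a rough back-of-envelope sketch of why synchronized appliances matter to the grid: the device counts and per-device wattage below are illustrative assumptions, not figures from the talk.

```python
# Rough numbers for the synchronized-appliance attack described above.
# Bot counts and per-device wattage are illustrative assumptions.

WATER_HEATER_KW = 4.5  # assumed draw of one electric water heater, in kilowatts

for bot_count in (10_000, 100_000, 1_000_000):
    swing_mw = bot_count * WATER_HEATER_KW / 1_000  # kW -> MW
    print(f"{bot_count:>9,} compromised heaters switched together ~ {swing_mw:>7,.0f} MW load swing")

# Even 100,000 devices is roughly a mid-size power plant's output appearing or
# disappearing at once -- the kind of sudden imbalance grid operators must absorb.
```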
There's a fundamental difference between my spreadsheet crashing, where I may lose my data, and my implanted defibrillator crashing, where I lose my life. And it could be the same CPU,
the same operating system, the same vulnerability,
the same attack software. Because of what the
computer can do, the effects are much different. So at the same time we're
getting this increased functionality, there are
some longstanding security paradigms that are failing,
and I'll give three. The first one is patching. Patching is how we get security,
and it's now having trouble. Actually, there's two reasons
why our phones and computers are as secure as they are. The first is that there are
security engineers at Apple, at Microsoft, at Google that
are designing them as secure as they are in the first place. And those engineers can
quickly write and push down patches when vulnerabilities
are discovered. That's a pretty good ecosystem. We do that well. The problem is, it doesn't work
for low-cost embedded systems like DVRs and routers. These are designed and built
offshore by third parties, by ad hoc teams that come
together, design them, and then split apart. There aren't people who can
write those patches when a vulnerability is discovered. Even worse, a lot
of these devices have no way to patch them. If your DVR is vulnerable
to the hack that allows it to be
recruited into a botnet, the only way you can
patch it is to throw it away and buy a new one. That's the mechanism. We have no other. Now, actually, throw it
away and buy a new one is a reasonable
security measure. We do get security in the
fact that the lifecycle of phones and computers is
about three to five years. That's not true
for consumer goods. You're going to replace
your DVR every 10 years, your refrigerator
every 25 years. I bought a programmable
thermostat last year. I expect to replace it
approximately never. [LAUGHTER] Think about it in
terms of a car. You buy a car today. Let's say the software
is two years old. You're going to drive it
for 10 years, sell it. Someone else buys it, drives
it for 10 years, they sell it. Someone else buys it,
puts it on a boat, sends it to South America, where
someone else there buys it, drives it for another
10 to 20 years. When you go home, find
a computer from 1976. Try to boot it, try to run
it, try to make it secure. We actually have no idea how
to secure 40-year-old consumer software. We haven't the faintest clue. And we need to figure it out. So what, does Chrysler
maintain a test bed of 200 chassis for vulnerability
testing and for patch testing? Is that the mechanism? We're not going to be able
to treat these goods like we treat phones and computers. If we start forcing
the computer lifecycle onto all these other things,
we are probably literally going to cook the planet. So we need some other
way, and we don't have it. The second thing that's
failing is authentication. We've always been only
OK at authentication. But authentication
is going to change. Right now, authentication
tends to be me authenticating to some object or service. What we're going to
see an explosion in is thing-to-thing
authentication, where objects authenticate to other objects. And there's going
to be a lot of it. Imagine a driverless
car, or even some kind of
computer-assisted driving car. It will want to
authenticate to thousands of other cars, road
signs, emergency vehicles and signals-- lots of things. And we don't know how
to do that at scale. Or you might have 100
IoT objects in your orbit to authenticate to each other. That's 10,000 authentications. 1,000 objects, a
million authentications. Right now, this tends
to be our IoT hub. If you have an IoT
anything, you likely control it via your phone. I'm not sure that scales
to that many things. And while we can do
thing-to-thing authentication, it's very much deliberate. So right now, when
I get into my car, this phone authenticates to the car automatically. That works. Bluetooth works. But it works because I
was there to set it up. And I'll do that for 10
things, for 20 things. I'm not doing it for 1,000. I'm not doing it for a million. So we need some way to do
this automatic thing-to-thing authentication at scale,
and we don't have it.
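A minimal sketch of why those numbers blow up: with n devices each authenticating to every other device, the work grows roughly as n squared. The helper function is hypothetical.

```python
# Why naive pairwise thing-to-thing authentication doesn't scale: n devices each
# authenticating to every other device is on the order of n^2 relationships.

def pairwise_authentications(n: int) -> int:
    """Ordered pairs: each device authenticates to each of the other n - 1 devices."""
    return n * (n - 1)

for n in (10, 100, 1_000):
    print(f"{n:>5} devices -> {pairwise_authentications(n):>10,} authentication relationships")

# 100 devices is already about 10,000 relationships; 1,000 devices is about a
# million. Hand-pairing each one, the way we pair a phone with a car, won't scale.
```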
The third thing that's failing is supply chain. Supply chain security is
actually insurmountably hard. Now, we've seen-- you've seen
the papers in the past year. It's been one of two stories. It's Kaspersky. Should we trust a Russian-made
anti-virus program? And Huawei and ZTE. Should we trust Chinese-made
phone equipment? But that really is just
the tip of the iceberg. There are other stories,
not just the US. It turns out, in 2014,
China banned Kaspersky. They also banned
Symantec, by the way. 2017, a story from India
identifying 45 Chinese phone apps that they say
shouldn't be used. In 1997-- I don't know
if people remember-- there were worries in
the US about Checkpoint, an Israeli-made
security product. Should we trust it? Also, I like to remember
a 2008 program called Mujahideen Secrets, which was
an ISIS-created encryption program because, of course, you
can't trust Western encryption programs. But the country of
origin of the product is just the tip of the iceberg. Where are the chips made? Where is the software written? Where is the device
fabbed? Where are the programmers from? This iPhone probably
has, what, a couple of hundred different
passports that are programming this thing? It's not made in the US. And every part of the
chain is a vulnerability. There are papers showing how you
can take a good chip design-- the masks-- and maliciously
put in another layer, and compromise the
security of the chip without the
designers knowing it, and it doesn't
show up in testing. There was another paper
about two years ago. You can hack an iPhone through
a malicious replacement screen. You have to trust every
piece of the system. The distribution mechanisms. We've seen backdoors
in Cisco equipment. Remember the NSA
intercepted Cisco's routers being sent to the Syrian
telephone company? That was one of the great
pictures from the Snowden documents. We've seen fake apps in
the Google Play Store. We know that Russia attacked
Ukraine through a software update mechanism. I think my favorite story-- this is a hard one. 2003, there was actually a very
clever, very subtle backdoor that almost made it into Linux. We caught it, and we kind
of just barely caught it. We got very lucky. And you look at the code. It really is hard. You have to look for
it to see the backdoor. That could have
easily gotten in. We don't know what else
has gotten in, in what. And solving this is hard. No one wants a US-only iPhone. It's probably, A, impossible
and, B, it'll cost 10x. Our industry is, at every
level, international. It is deeply international. From the programmers, to
the objects, to the cloud, the services. We will not be able
to solve this easily. So in a lot of ways,
this is a perfect storm. Things are failing
just as everything is becoming interconnected. And I think we've been OK
with an unregulated tech space because it fundamentally didn't
matter, and that's changing. And I think this is
primarily a policy problem. And in my book, I spend
most of the time on policy, and I talk about a lot of
different policy levers we have to improve this. I talk about standards,
regulations, liabilities, courts, international treaties. The thing is, it's a very
hard political battle. And I don't think
we're going to have it, in the US, without a catastrophic event. I look more to Europe to lead. I could go through
all of this, but I want to give two principles
that I want to pull out. The first is that
defense must dominate. I think we, as a
national policy, need to decide
that defense wins. That no longer can
we accept insecurity for offensive purposes. That, as these computers
become more critical, defense is more important. Gone are the days when
you can attack their stuff and defend our stuff. Everyone uses the same stuff. We all use TCP/IP and Cisco
routers, and Microsoft Word, and PDF files. And it's just one world,
one network, one answer. Either we secure our stuff,
thereby, incidentally, securing the bad guys' stuff. Or we keep our stuff
vulnerable in order to attack the bad guys,
thereby, incidentally, rendering us vulnerable. And that's our choice. And it means a whole
bunch of things. To disclose and fix
vulnerabilities; to design for security,
not for surveillance; encrypt as much as
possible; to really separate security from spying; make
law enforcement smarter so they can actually
solve crimes even though there is security;
and create better norms. One other principle is that we
need to build for resilience. We need to start
designing systems assuming they will fail. And how do we contain failures? How do we avoid catastrophes? How do we fail safe,
or fail secure? Where can we remove
functionality or delete data? How do we have systems monitor
other systems to try to provide some level of redundancy? And I think the
missing piece here is government, that the market
will not do this on its own. But I have a problem
handing this to government because there really isn't an
existing regulatory structure that could tackle this
at a systemic level. That's because there's a
mismatch between the way government works and
the way tech works. Government operates in silos. The FAA regulates aircraft, the
FDA regulates medical devices. The FTC regulates
consumer goods. Someone else does cars. And each agency will have its
own rules, and own approach, and own systems. And that's not the internet. The internet is this
free-wheeling system of integrated
objects and networks, and it grows horizontally,
and it kicks down barriers. And it makes people
able to do things they never could do before. And all of that
rhetoric is true. Right now, this device
logs my health information, communicates with my car,
monitors my energy use, and makes phone calls. That's four different--
probably five different regulatory agencies. And this is just
getting started. We're not sure how to do this. So in my book, I talk
about a bunch of options. And what I have, and I think
we're going to get eventually, is a new government agency that
will have some jurisdiction over computers. This is a hard sell to
a low-government crowd. But there is a lot of
precedent for this. In the last century, pretty
much all major technologies led to the formation of
new government agencies. Cars did, planes did, radio
did, nuclear power did. Because government needs to
consolidate its expertise. And that's what happens
first, and then there is need to regulate. I don't think
markets solve this. Markets are short-term,
markets are profit-motivated. Markets don't take
society into account. Markets can't solve
collective action problems. So of course, there are
lots of problems with this. Governments are terrible
at being proactive. Regulatory capture
is a real issue. I think there are differences
between security and safety that matter here. Safety against things like
a hurricane, and security against an adaptive, malicious,
intelligent adversary are very different things. And we live in a fast-moving,
technological environment. And it's hard to see how
government can stay ahead of tech. This is something that's changed
in the past couple of decades. Tech moves faster than policy. The devil's in the details,
and I don't have them. But this is a conversation
I think we need to have. Because I believe that
governments will get involved regardless. The risks are too great and
the stakes are too high. Governments are already
involved in physical systems. They already regulate cars
and appliances and toys and power plants
and medical systems. So they already have this
ability and need and desire to regulate those
things as computers. But how do we give them the
expertise to do it right? My guess is the courts are going
to do some things relatively quickly, because
cases will appear, and that the regulatory
agencies will follow. I think Congress comes last,
but don't count them out. Nothing motivates the
US government like fear. Think back to the terrorist
acts of September 11. We had a very small-government administration create a massive
bureaucracy out of thin air. And that was all fear-motivated. And when something
happens, there will be a push that
something must be done. And we are past the choice
of government involvement versus no government
involvement. Our choice now is smart
government involvement versus stupid
government involvement. And the more we can
talk about this now, the more we can make
sure it's smart. My guess is any good regulation
will incent private industry. But I think the reason we
have such bad security is not technological. It's more economic. There's lots of good tech. And while some of these
problems are hard, they're "send a man
to the moon" hard, they're not "faster-than-light
travel" hard. And once the incentives
are in place, industry will figure
out how to do it right. A good example might
be credit cards. In the early days
of credit cards, we were all liable
for fraud and losses. That changed in 1978, the
Fair Credit Reporting Act. That's what mandated that the
maximum liability for credit card fraud for the
consumer is $50. And you understand
what that means. That means I could
take my card, fling it into the middle of
this room, give you all lessons on
forging my signature, and my maximum liability is $50. It might be worth
it for the fun. [LAUGHTER] But what that meant-- that change, that even if
the consumer is at fault, the credit card
company is liable, that led to all sorts
of security measures. That led to online verification
of credit and card validity. That led to
anti-forgery measures, like the holograms and
the micro-printing. That led to mailing the card and the activation information separately, and requiring you to
call from a known phone number. And actually, most
importantly, that enabled the back-end expert systems that troll the transaction database looking for fraudulent spending patterns.
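A toy sketch of the kind of back-end pattern-matching being described here; real fraud-detection systems are far more sophisticated, and the rules, thresholds, and field names below are invented for illustration.

```python
# Toy fraud-scoring rule: compare each transaction against the cardholder's usual
# pattern and flag outliers for review. All thresholds and categories are invented.

from dataclasses import dataclass

@dataclass
class Txn:
    amount: float   # dollars
    country: str    # where the charge originated
    merchant: str

def fraud_score(txn: Txn, typical_amount: float, home_country: str) -> int:
    score = 0
    if txn.amount > 10 * typical_amount:   # far above normal spend
        score += 2
    if txn.country != home_country:        # charge from an unusual country
        score += 1
    if txn.merchant.lower() in {"wire transfer", "gift cards"}:  # high-risk category
        score += 2
    return score

txn = Txn(amount=2400.00, country="RO", merchant="gift cards")
if fraud_score(txn, typical_amount=80.00, home_country="US") >= 3:
    print("hold transaction and call the cardholder")
```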
None of that would have happened if the consumers were liable. Because the consumers had
no ability to implement any of that. You want the entity
that can fix the problem to be responsible
for the problem. That is just smart policy. So I see a lot of
innovation that's not happening because the
incentives are mismatched. So I think Europe is
moving in this direction. The EU is, right now,
the regulatory superpower on the planet. And they are not afraid
to use their power. We've seen that in the
GDPR in the privacy space. I think they're going to
turn to security next. They're already working on what
responsible disclosure means. You ever see, on
manufactured goods, there's a label called CE? That's an EU label. It basically means "meets
all applicable standards." They're working on
standards for cybersecurity. And you will see them get
incorporated into trade agreements, into GATT. And there's an interesting
rising tide effect. It's not necessarily obvious. The car you buy in
the United States is not the car
you buy in Mexico. Environmental laws
are different, and the cars are tuned
to the different laws. Not true in the computer space. The Facebook you get is pretty
much the same everywhere. And if you can imagine
there is some security regulation on a toy, the
manufacturer meets it, they're not going to have a
separate build for the United States. They're going to sell it
everywhere because it's easier. There'll be times
when that's not true. I think Facebook
would like to be able to differentiate between
someone who is subject to GDPR, someone who is not. Because there's
more revenue to be gained through the
greater surveillance. But when you get
to things, I think it's more likely that it'll be a
rising tide and we all benefit. In the United States, look to the states more, specifically New York, Massachusetts, and California, which are more
aggressive in this space. But I think this is coming. And I want to close
with, I guess, a call. What we need to do is to
get involved in policy. Technologists need to
get involved in policy. As internet security
becomes everything security, internet security
technology becomes more important to
overall security policy. And all of the
security policy debates will have strong
technological components. We will never get
the policy right if the policymakers
get the tech wrong. It will all look like
the Facebook hearings, which were embarrassing. And you see it even in some--
you see it in the going dark debate, you see it in
the equities debate, you see it in voting
machine debates, in driverless car
security debates. That we need
technologists in the room during policy discussions. We have to fix this. We need technologists
on congressional staffs, at NGOs, doing investigative
journalism, in the government agencies, in the White House. We need to make this happen. And right now, you just
don't have that ecosystem. So if you think about
public interest law, 1970s, there was no such thing
as public interest law. There actually wasn't. It was created primarily by the
Ford Foundation, oddly enough, that funded law clinics, funded
internships in different NGOs. And now, you want to make
partner at a major law firm, you are expected to do
public interest work. Today at Harvard Law School,
20% of the graduating class doesn't go into
corporations or law firms. They go into public
interest law. And the university has
soul-searching seminars because that
percentage is so low. The percentage of computer science graduates who do the same is probably zero. We need to fix that. And that's more than
just every Googler needs to do an internship,
because there aren't spaces for those people. So we got to fix the supply,
got to fix the demand, the ecosystem to link the two. This is, of course,
bigger than security. I think, pretty much, all
the major societal problems of this century have a
strong tech component-- climate change, future
of work, foreign policy. And we need to be in the room,
or bad policy happens to us. So that's my talk. There's, of course, a lot more
in the book that I didn't say, and I'm happy to take questions. [APPLAUSE] AUDIENCE: Hi. Do you imagine that some of
the sociopolitical things that we're seeing crop up
fit within this framework? Or do you think that that might
be an entirely separate problem that needs an entirely
separate set of solutions? BRUCE SCHNEIER: I
think it's related. The problems I'm talking about
are pretty purely technical. The problems of internet
as a propaganda vehicle are, I think, much more
systemic and societal. I do blame surveillance
capitalism for a bunch of it. The business model that
prioritizes engagement, rather than quality, has learned
that, if you're pissed off, you stay on Facebook more. So I think there are
pieces that fit in. So some related, some different. AUDIENCE: You seem to be
talking a lot about policy that the United States and,
to some extent, the EU can do. But I wonder, what do
you think will happen as policy everywhere is-- policy is local, the
internet is global. How's that going to play out? BRUCE SCHNEIER: So I think
that never goes away. And some of it's going to be
the rising tide I talked about that-- especially when-- less about privacy, but
when you get to safety, I think it's more
likely that we benefit from a European regulation that
ensures that the smart vacuum cleaner you bought can't
be taken over by somebody, and then attack
you and trip you. We're likely to benefit
from that more than, look, you can't have a
microphone on the thing. We have to assume that there
will be malicious things in whatever system we have. So if we have a
US-only regulation, it will clean up a
lot of the problem because Walmart won't be
able to sell the bad stuff. But you can still buy it and
mail order from Alibaba.com. So there will be some stuff in
the network that is malicious. Much lower percentage,
easier problem. We're still going to
have to deal with that. And I don't think it ever
goes away because we're not going to have world government. There will be a
jurisdiction, or there will be homebrew
stuff that doesn't meet whatever regs we have. That will always happen. AUDIENCE: You talk
about the need for intelligent technologists to
get involved in making policy, but there are only so
many hours in a day, and probably most of us
would be taking a huge pay cut to go work in government
and lend our expertise there. So how can we fix
the incentives there? BRUCE SCHNEIER: Some
of it is desire. I know ACLU attorneys
that are making 1/3 of what they would
make at a big law firm, and they get more resumes
than they have positions. So it works in law. The desire to actually
make the world better turns out to be a
prime motivator. So I think, once we
have the ecosystem, we will get the supply. I think that enough of us will
say, we've had great careers. We're going to take a break. Or we're going to do
something before we go work at a startup or a big company. Or maybe, there will be
a use for sabbaticals, like you see in law firms
or bits of pro bono work, like a 20% project. So yes, people will
be making less money. I don't think that is
going to harm the system. I think we just need to
get the system working. AUDIENCE: The most jarring thing
I saw you write, as a Googler, was that data is a toxic asset. What do you say about
this to this audience? BRUCE SCHNEIER: I know. What does that make you guys? [LAUGHTER] The promise of big
data has been save it all, figure out what
to do with it later. And that's been driven by the fact that the marginal cost of saving it has dropped to zero. It's basically cheaper
now to save it all than to figure out what to save. Disk storage is free, processing
is free, transport is free. But it turns out that
data is a toxic asset. For most companies, having
it is an enormous liability because someone is
going to hack it. It's going to get stolen. You're going to lose it. And I think we need to
start talking about data not as this magic goodness,
but it decays in value and there are dangers
in storing it. The best way to secure
your data is to delete it. And you're going to delete it
if you know you don't need it. So I've seen lots of studies on
data and shopping preferences. And it turns out, some pieces
of data are very valuable, and a lot of it just
isn't very valuable. So is it worth the
extra 0.25% of accuracy to have this data that
is potentially dangerous, and will get you
fined or embarrassed, and your stock takes a hit if it gets stolen? So I think we need to make
more of those decisions. That the data is radioactive. It's toxic. We keep it if we need to. But if we don't,
we get rid of it, and we figure out how to get
rid of it safely and securely. Take, I don't know, Waze. Waze is a surveillance-based
system, very personal data. But probably only valuable
for, like, 10 minutes. Or at least can be sampled. I mean, lots of ways I can
treat that data, understanding it's a toxic asset,
get my value at much less risk to my organization. And that's what I mean by that.
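A minimal sketch of that idea, treating raw personal data as a liability with a short retention window; the ten-minute window echoes the Waze example, and the record format and function name are invented.

```python
# Treat raw personal data as a liability: keep detailed location traces only
# briefly, then drop them. Window, record format, and names are illustrative.

from datetime import datetime, timedelta, timezone

RAW_RETENTION = timedelta(minutes=10)

def prune(records, now):
    """Drop raw records older than the retention window.
    Each record is a dict like {'user': ..., 'lat': ..., 'lon': ..., 'ts': datetime}."""
    return [r for r in records if now - r["ts"] <= RAW_RETENTION]

now = datetime.now(timezone.utc)
records = [
    {"user": "u1", "lat": 42.37, "lon": -71.11, "ts": now - timedelta(minutes=2)},
    {"user": "u1", "lat": 42.36, "lon": -71.10, "ts": now - timedelta(hours=3)},
]
print(len(prune(records, now)))  # 1 -- the three-hour-old trace is gone
```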
AUDIENCE: It's interesting that there are
stuff, but there seems to be no demand and no supply. It's marginally more expensive
to do federated machine learning than do
everything in the center, but companies don't care, and
consumers decidedly don't care. BRUCE SCHNEIER:
Consumers don't care. That's why-- you need
these decisions made not by consumers, but by citizens. Consumers don't care. Consumers are buying
the Big Mac at 10% off. Consumers truly don't care. At the point of
purchase, nobody cares. At the point of reflection,
people care a lot. And that's why you don't
want the market doing this. You want us, as our
best selves, doing this. And about anonymity, it
is harder than you think. Most of our ways of
anonymizing data fail. It is a very hard problem. The anonymity research is really-- actually, the anonymity-breaking research is very good these days. And it's outstripping the anonymizing research. Go for the next question.
that I'm thinking about is, a lot of times, when you
see a big vulnerability-- so say there's a big operating
system vulnerability-- it's actually a genuine mistake. It's not that someone put
it in there on purpose. It's that they missed something. So how does regulation
solve that problem? Sure, you could have some
great regulation in place that something's supposed
to be done a certain way. But oh, the implementation
was slightly off or slightly broken. How do you fix that
genuine mistake, even if they were trying to do
what the regulation specified as this would be
a secure system? BRUCE SCHNEIER: So you'd
be surprised, but financial motive-- money motivates companies. If companies will be
fined a lot of money if their employees
make a mistake, they figure out ways
for their employees to make fewer mistakes. AUDIENCE: But doesn't that
only take effect after the mistake has already been-- the evil has already
been done, essentially, when a mistake is found. BRUCE SCHNEIER: Initially, but
there's a deterrence effect. AUDIENCE: OK. BRUCE SCHNEIER: So yes. Arresting someone for
murder only takes effect after he's done
murder, but the goal is that the threat of
being arrested for murder will keep you from
murdering someone tomorrow. So we want this
deterrence effect. How to reduce software mistakes? We actually know a
lot of techniques that pretty much all
software manufacturers never do because it would be
slightly more expensive. AUDIENCE: Yeah. BRUCE SCHNEIER: But if
it's a lot more expensive not to do them, suddenly
the math changes. And I need the math to change. I need security to be
cheaper than insecurity. Right now, the market rewards,
let's just take the chance. Let's hope for the best. AUDIENCE: OK. BRUCE SCHNEIER:
And no industry-- did I say this already? Remind me. Yes, no? No. OK. No industry, in the past
100 or something years, has improved security and
safety without being forced to. Cars, planes, pharmaceuticals,
medical devices, food production,
restaurants, consumer goods, workplace, most recently,
financial products. The market rewards doing a
bad job, hoping for the best. And I think it's too risky
to allow that anymore. AUDIENCE: Hi. You mentioned that you were
embarrassed by the Zuckerberg hearings. BRUCE SCHNEIER: I was
embarrassed by the questions at the Zuckerberg hearings. AUDIENCE: OK. BRUCE SCHNEIER:
Let's just be fair. The congressmen embarrassed me. AUDIENCE: Yeah, yeah. So I assumed correctly. I assumed you were
embarrassed by the senators. BRUCE SCHNEIER: Yes, yes. AUDIENCE: Whereas I have
the opposite problem. I was embarrassed by Zuckerberg. What I-- BRUCE SCHNEIER: To be fair,
there's a lot of embarrassment to go around. AUDIENCE: Yes, it's true. No, but-- BRUCE SCHNEIER: We
can both be right. AUDIENCE: I understand. Yeah. But I have a serious
point here, though. While the senators
don't know about tech, I think the tech doesn't
know about law, ethics, political science, philosophy. Do you think Mark
Zuckerberg can even teach an introductory college
course on free speech? Has he even read what anyone
has ever said about it? So shouldn't we all be
learning about the world? BRUCE SCHNEIER: Yes. [LAUGHTER] This has to go in
both directions. AUDIENCE: Yes. BRUCE SCHNEIER: Right? I want techies in
policy positions. I want policy people
in tech companies. So yes, I think we need both. AUDIENCE: Yeah. BRUCE SCHNEIER: We need both
sides talking to each other. AUDIENCE: Right. BRUCE SCHNEIER: And so
I agree with you 100%. AUDIENCE: Good, OK. [LAUGHTER] BRUCE SCHNEIER: So right now,
I teach internet security at the Harvard Kennedy School,
at a public policy institution. So I'm trying to push
people in that direction. At the same time,
there are people at Harvard, in the computer
science department, trying to teach policy issues,
going the other direction. AUDIENCE: I think you
probably know [INAUDIBLE].. [LAUGHTER] BRUCE SCHNEIER: I know,
but I'm not in charge. AUDIENCE: You mentioned
sort of shock events as things that drive
government policy. And I thought the
example of 9/11 was instructive, maybe in a
way you did or did not intend, in that the government
response to 9/11 was to launch two illegal
wars and create a surveillance state that violates
our civil liberties on a day-to-day basis. BRUCE SCHNEIER: No, no, no. AUDIENCE: So-- BRUCE SCHNEIER: I did
intend to evoke that. AUDIENCE: --I'm not
sure that's a good-- I guess I'm curious how you-- BRUCE SCHNEIER: It's terrible. Right. AUDIENCE: So how do
you see the reaction to the mounting threats in
technology as being different? What's going to prevent the same
sort of thing from happening? BRUCE SCHNEIER:
Absolutely nothing. AUDIENCE: OK. BRUCE SCHNEIER:
And that's my fear. That something bad will happen. Congress will say,
something must be done. This is something. Therefore, we must do it. AUDIENCE: It has to be done. [LAUGHTER] BRUCE SCHNEIER: So my goal of
having this conversation now, before this happens, is that
we will, as a community, figure out what should be done
when we have the luxury of time and insights and patience. Because I agree with you that, if there is a disaster, we will get a disaster as a response, and it will be just as bad. So let's get ahead
of it this time. Let's do better. AUDIENCE: How do you
envision preventing everything degenerating to
the lowest common denominator? You said, client side, you
can't really restrict people from doing what they want. Even if we say, OK,
any company that wants to make money in the US
has to follow these provisions. I'm just going to
encrypt my data and send it to
Alibaba Translate. It's 1/3 the price
of Google Translate, but they steal all my data. How do we prevent this? Is there anything we can do? BRUCE SCHNEIER:
Some of the answer is going to be no,
some, it's going to yes. So if you think about
other consumer goods, we do make it hard for
consumers to modify something. It's actually hard
to modify your car to violate emissions control. You can do it, but it's hard. And then we try to
have spot checks. You can imagine
some sort of regime. You can imagine some
system that tries to maintain security anyway. Because there will be
a minority doing that. I think, once we start
hitting the problem for real, we'll come up with
tech solutions. Ways for the system to watch
itself, other systems to watch each other. Can we do this non-invasively? I think we have
to figure it out. So I don't have
the answers here, but these are
certainly the problems. AUDIENCE: I really liked your
phrasing of the problem of, we need to give up on offense
so we can go all in on defense. And I think it's
pretty clear to me where a lot of the
offensive focus is, in terms of law enforcement. But I think one thing
that remains mostly an unknown is on
the military side, and how there is a
ton of investment in military offensive stuff. We kind of know a
little bit more, maybe, about what Russia and
China are using offensively against us. BRUCE SCHNEIER: We aren't
seeing the good stuff yet. AUDIENCE: I hope not. Anyway-- BRUCE SCHNEIER: We're not
sure what we hope, right? [LAUGHTER] AUDIENCE: Yeah. But do we have a sense
of what the military-- the US military, let's
say-- would be giving up to give up this offensive idea? And-- I don't know--
how willing they would be to go that direction? BRUCE SCHNEIER: They
wouldn't be willing, but it's not their
job to be willing. That's why you
don't want the NSA in charge of your privacy policy
because that's not their job. We need people above
the military, the NSA, to make these tradeoffs. Because they are security
versus security tradeoffs. Is the security we get
from being able to spy on and hack the bad
guys greater or less than the security we get
from the bad guys being unable to spy on and hack us? Right? So security versus
surveillance is the wrong way to describe it. It's security versus security. So someone above the military
needs to decide that. It can't be the military. Because the military is not
in charge of overall policy. They're in charge of
the military part. And what we know about the
capabilities is very little. We get some shadows
of it here or there, and it seems to be,
on the one hand, cruder than we'd like it to be. On the other hand, StuckStack
was pretty impressive. In general, the stuff you
see is the minimum tech it has to be to succeed. There's sort of this myth of
these super powerful cyber attacks. They're basically one
iota more than just barely necessary to succeed. You don't need to do
more if you can take out the DNC with a pretty
sloppy phishing campaign. [LAUGHTER] Why bother using
your good stuff? So there's a lot
we just don't know. AUDIENCE: At risk of
revisiting an earlier question, I was interested in
what you thought about-- one of the things you often see in the finance industry is the cynical view that people in finance can outmaneuver all the people who are regulating them, in part because the regulators are paid less. So I was wondering if
you could revisit that. Because I think law is, maybe, the exception because it's a little
bit more directly related to human rights
and things like that. BRUCE SCHNEIER:
Yeah, I don't know. Certainly, I worry about
regulatory capture, regulations being evaded. I think all of those
are real risks. This is not a great
answer I have, it's just the best one I have. Because I don't
see any way to put a backstop against this
massive corporate power other than government power. Now, in a sense, I
don't want either power. But tech naturally
concentrates power, at least as it's
configured today. So that's my missing piece. I think you're
right that that is a serious problem and
worry, and something we just have to deal with. Policy is iterative. As techies, it's
hard to accept that. We like to get the answer
right and implement it. Whereas policy gets the
answer slightly less wrong every few months. But that's the way it works. The real question is, can
we do this at tech speed? And that, really, I think,
is an open question. So with that, I'm going to end. Thank you all. Thanks for filling the room. Thanks for coming. [APPLAUSE]