>> SPEAKER: Please
welcome Bruce Schneier. >> BRUCE SCHNEIER:
Hey, good morning. Thanks for coming out. Let's talk about the tax system
as an IT security problem. It has code. It's just a series of
algorithms that take as inputs tax information for the year and produce as outputs the amount of tax owed. It's incredibly
complex code. It consists of laws,
government laws, tax authority rulings, judicial
decisions, lawyer opinions. There are bugs
in the code. There are mistakes in how
the law is written, how it's interpreted. Some of those bugs are
vulnerabilities, and attackers look for
exploitable vulnerabilities. We call them
tax loopholes. Right? Attackers exploit
these vulnerabilities. We call it tax avoidance. And vulnerabilities,
loopholes are everywhere in the tax code. And, actually, there are
thousands of black hat security researchers that
examine every line of the tax code looking for
vulnerabilities. We call them
tax attorneys. Some of these
bugs are mistakes. There is -- in the 2017
tax law, there was an actual mistake, a typo,
that categorized military death benefits as earned
income, and as a result, surviving family members
got unexpected tax bills of $10,000 or more.
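To make the analogy concrete, here is a deliberately toy sketch, nothing like real tax law, of the kind of bug I mean: two rules that each look reasonable on their own combine into a loophole nobody intended.

```python
# A toy "tax code" -- purely illustrative, not real tax law.
# Rule 1 sets a flat rate. Rule 2 allows a deduction intended for
# genuine licensing costs. Combined, they let profits routed through
# a foreign affiliate escape tax entirely.

def compute_tax(profits, license_fees_paid_to_affiliate):
    taxable = profits

    # Rule 2: fees paid to a foreign affiliate for "intellectual
    # property" are fully deductible.
    taxable -= license_fees_paid_to_affiliate

    # Rule 1: flat 30% rate on whatever taxable income remains.
    return max(taxable, 0) * 0.30


# Intended use: a company with $1,000,000 in profits owes $300,000.
print(compute_tax(profits=1_000_000, license_fees_paid_to_affiliate=0))

# The exploit: pay your own offshore affiliate a "license fee" equal
# to your profits. The code allows it; nobody anticipated it. Tax: $0.
print(compute_tax(profits=1_000_000, license_fees_paid_to_affiliate=1_000_000))
```

The vulnerability isn't in either rule alone; it's in their interaction, which is exactly how a lot of software vulnerabilities work, too.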
Some of these are emergent properties. There is the, I'm going to
read it, the double Irish with a Dutch sandwich. This is the trick
that lets U.S. companies like Google and
Apple avoid paying U.S. tax, and, actually,
Google is possibly being prosecuted for
that right now. Some of these
vulnerabilities are deliberately created in
the tax code by lobbyists trying to gain some
advantage to their industry. Sometimes a legislator knows
about it; sometimes they don't. I guess this is analogous
to a government sneaking a programmer into Microsoft to
drop a vulnerability in Windows. All right, so this
is my big idea. We here in our community
have developed some very effective techniques to deal
with code, to deal with tech. We started by examining
purely technical systems. Increasingly, we study
sociotechnical systems. Can our expertise in IT
security transfer to broader social systems
like the tax code, like the system we use to
choose our elected officials, like
the market economy? Is our way of thinking,
our analytical framework, our procedural mindset valuable
in this broader context? Can we hack society? And, actually, more
importantly, can we help secure the systems
that make up society? So back to the tax code. We know how to fix
this problem before the code is deployed. Secure development
processes, source code audits. How do we do that
for the tax code? Like, who does it? Who pays for it? And what about
those deliberate vulnerabilities? We know how to fix the
problem with running code. Vulnerability finding by
white hat researchers, bug bounties, patching. How do you patch
the tax code? How do you create laws and
policies to implement the notion of patching? I mean right now passing
tax legislation is a big deal politically. And here's the big
question: Can we design a security system to
deal with bugs and vulnerabilities in the
tax code and then build procedures to
implement it? So security technologists
have a certain way of looking at the world. It's systems thinking with
an adversarial mindset. I call it a
hacker mindset. We think about how systems
fail, how they can be made to fail, and we think
about everything in this way. And we've developed what
I think is a unique skill set: Understanding
technical systems with human dimensions,
understanding sociotechnical systems,
thinking about these systems in an adversarial
mindset, adaptive malicious adversaries, and
understanding the security of complex adaptive
systems, and understanding iterative
security solutions. This way of thinking
generalizes, and it's my contention that the worlds
of tech and policy are converging, that the tax
code is now becoming actual code, and that
where once purely technical systems are increasingly sociotechnical systems. And as society's systems
become more complex, as the world looks more like
a computer, our security skills become more
broadly applicable. So that's
basically my talk. It's preliminary work. I have a lot of examples
and a lot of detail. I'm going to throw a
bunch of stuff at you. And I want to convince you
that we have this unique framework for solving
security problems, and there are new domains
we can apply them to. I guess I want to put a
caveat here in the beginning. I don't want to say that
tech can fix everything. This isn't
technological solutionism. This isn't Silicon
Valley saving the world. This is a way that I think
we can blend tech and policy in a new way. All right. So to do this, we need to
broaden some definitions. Let's talk about a hack. A hack is something a
system allows but is unwanted and unanticipated
by the system designers. More than that, it is an
exploitation of the system. Something desired by the
attacker at the expense of some other part
of the system. So in his memoirs, Edward
Snowden writes that the U.S. intelligence community
hacked the constitution in order to justify
mass surveillance. We can argue whether
that's true or not, but everyone here intuitively
knows what he means by that. Other examples of hacks:
So lack of standing is a hack the NSA used to
avoid litigating the constitutionality
of their actions. The vulnerability, of course,
is that there's a body of law out of reach of
conventional judicial review. Using the All Writs Act
against Apple as the FBI did in 2016 is a hack. Maybe you think
it's a good hack. All hacks aren't bad. But it is definitely
an unintended and unanticipated use
of a 1789 law. So this all makes
sense to me in my head. And my guess is it
makes some sense to you, but is it useful? I think it is. I think this way of
looking at the world can usefully inform
policy decisions. Let's talk about hacking
the legislative process. Bills now are so
complicated that no one who votes on them truly
understands them. You just add one sentence
to a bill, it makes references to other laws,
and the combination results in some
specific outcome unknown to most everyone. And there's a whole
industry dedicated to engineering these
unanticipated consequences. It sounds like
spaghetti code. We can think of VC funding
as a hack of market economics. So markets are based on
knowledgeable buyers making decisions amongst
competing products. The pressure to sell to
those buyers depresses prices and incents
innovation. That's basically the
mechanic of the markets. VC funding hacks
that process. The external injection of
money means that companies don't have to compete in
the traditional manner. The best strategy for
a start-up is to take enormous risk to be
successful, because otherwise they're dead,
and they can destroy competitors without providing viable
alternatives as long as they have that external
funding source to do it. And this is a
vulnerability in the market system, which
makes Uber a hack. Right? VC funding means they can
lose $0.41 on every dollar until they destroy
the taxi industry. WeWork is a hack. I guess it was a hack. Are they still around? Their business model
loses $2.6 billion a year. We could look at money and
politics as a similar example. The injection of private cash
hacks the democratic process. So think about markets
more generally. They're really based on
three things: Information, choice, and agency. And they are all
under attack. Complex product offerings
obscure information. Just try to compare
prices of cell phone plans or credit cards. Monopolies remove our
ability to choose. Products and services we
can't reasonably live without deprive
us of agency. There's probably an
entire talk on this. So metaphors matter here. Most people don't consider
our democratic process or the market as
sociotechnical systems. And I think this is
similar to us only thinking in terms
of tech systems. Remember 15 years ago when
we thought our security domain ended at the
keyboard and chair? Today we know that all
computer systems are actually complex
sociotechnical systems, that they are embedded in (systems people say nested in) broader social systems. And it turns out all modern systems are like that, too; it's just that the balance between socio and technical is different. There's a difference
between determinism and non-determinism that
I think matters here. A bug in software
is deterministic. Who gets elected, world
events, social trends, those are
non-deterministic. Users are
non-deterministic. Hackers are
non-deterministic. Determinism is a majority
condition of computer systems. We in security deal with
non-determinism all the time and it's a majority
condition in social systems. I think we need to
generalize non-determinism better, both in our
systems and in social systems. Also, what do we
actually mean by a hack? In our world in computer
security, we tend to work with conventional
systems created for some purpose by someone. Social systems aren't
really like that. They evolve. New purposes emerge. A hack can be an emergent
property; it's not clear whether they're
good or bad. There's a lot of
perspective that matters here. If VC funding is simply
a way for the wealthy to invest their money,
then it's the market working as intended. And it's not obvious to me how
to handle this generalization. Another concept that
generalizes: Changes in the threat model. So we know how this works. A system is created for
some particular threat model and then
things change. Maybe its uses changes,
technology changes, circumstance changes, or
just a change in scale that causes a
change in kind. So the old security
assumptions are no longer true. The threat model has
changed, but no one notices it, so the system
kind of slides into insecurity. I've heard political scientists
call this concept drift. So let's talk about a
change in the threat model. Too big to fail. So this is a concept that
some corporations are so big and so important to
the functioning of our society that they can't
be allowed to fail. In 2008, the U.S. government
bailed out several major banks to the tune of $700 billion
because of their very bad business decisions because
they were too big to fail. The fear was if the
government didn't do that, the banks would collapse
and take the economy with it. The banks are literally
too big to be allowed to fail. Not the first time. In 1979, the U.S. government
bailed out Chrysler. Back then, it was
national security. They were building
the M1 Abrams tank. It was jobs, saving
700,000 jobs, saving suppliers and the whole
ecosystem, and there was an auto trade war going on
with Japan at the time. So this is an emergent
vulnerability. When the mechanisms of
the market economy were invented, nothing could
ever be that big. No one could conceive of
anything being that big. Our economic system is
based on an open market and relies on the fact
that the cost of failing is paid by the entity failing
and that guides behavior. That doesn't work if
you're too big to fail. A company that's trading
off private gains and public losses is not
going to make the same decisions, and this
perturbs market economics. We can look at threat model
changes in our political system. Election security. The U.S. system of securing
elections is basically based on representatives
of the two opposing parties sitting together
and making sure none of them does anything bad. That made perfect sense
against the threats in the mid-1800s. It is useless against modern
threats against elections. The apportioning of
representatives. Gerrymandering is much
more effective with modern surveillance systems. Like markets, democracy
is based on information, choice, and agency, and
all three are under attack. So another thing we need
to generalize is who the attackers and defenders are. So we know that the term
attacker and defender doesn't carry moral weight. All security systems
are embedded in some broader social concept. We could have
the police attacking and criminals defending. We could have
criminals attacking and the police defending. To us, it's basically
all the same tech. But normally our
attackers and defenders are in different groups. This isn't true
with the tax code or political gerrymandering. The attackers are members of
the same society that's defending. The defenders are society
as a whole and the attackers are some
subset of them. Or worse, it's two groups
trying to game the same system, so each trying to
immunize the system against attacks by the other group while leaving it vulnerable to their own attacks. And you can see this in
voting rights where the different groups try
to attack and defend at the same time. It's more about abstract
principles, notions of equality, justice,
and fairness. And this gets back to our
definition of the word hack. When a lobbyist gets a law
passed, have they hacked the system, or are they
just using it as intended? All right. Some more examples. Let's talk about hacks
of cognitive systems. Remember the security
adage that script kiddies hack computers while smart
attackers hack people? Lots of attackers
hack people. Advertising is a hack of
our cognitive system of choice. It's always been psychological;
now it's scientific. Now it's targeted. Lots of people have
written about modern behavioral advertising and
how it affects our ability to rationally choose. It feels like
a hack to me. And kind of all of my
market and democracy examples really bubble up
to persuasion as a hack. Social media hacks our
attention by manufacturing outrage, by
being addictive. And AI and robotics
are going to hack our cognitive systems because
we all have a lot of cognitive shortcuts. Two dots over a line is a face;
a face is a creature; language indicates
intelligence, emotion, intention, and so on. These are all really
reasonable cognitive shortcuts for the
environment we evolved in, and they will
all fail with artificial people-like systems. All right. So this is a
lot of examples, but I really want to give you a feel
for sort of how I'm thinking about this. Let me talk about one
thing in a little more detail. So, last fall, I started
using computer security techniques to study
propaganda and misinformation. So I did this work with
political scientist Henry Farrell at George Washington University. Here's our thinking:
Democracy can be thought of as an information
system, and we're using that to understand
the current waves of information attacks,
specifically this question. How is it that the same
disinformation campaigns that act as a stabilizing
influence in a country like Russia can be
destabilizing in the United States? And our answer is
that autocracies and democracies work differently
as information systems. So let me explain. There are two types of
knowledge that society uses to solve
political problems. The first is what I call
common political knowledge. That's information that
society broadly agrees on. It's things like who the
rulers are, how they're chosen, how
government functions. That's common
political knowledge. Then there is contested
political knowledge, and that's the stuff
we disagree about. So it's things like how
much of a role should our government play
in our economy? What sorts of regulations
are beneficial and what are harmful? What should the
tax rates be? That's the stuff
we disagree about. That's contested
political knowledge. So democracies and
autocracies have different needs for common and
contested political knowledge. Democracies draw on
disagreements within their populations to
solve problems. That's how we work. But in order for it to
work, there needs to be common political knowledge
on how governments function and how political
leaders are chosen. All right? We have to know how
elections work so we can campaign for our side. And through that process,
we solve political problems. In an autocracy, you need
common political knowledge over who is in charge,
but they tend to suppress other common political
knowledge about how the government is actually
working, about other political movements
and their support. They benefit from those
things being contested. So that difference in
information usage leads to a difference in threat
models, which leads to a difference in
vulnerabilities. So authoritarian regimes
are vulnerable to information attacks that
challenge their monopoly on common political knowledge. That is why an open
internet is so dangerous to an autocracy. Democracies are vulnerable
to information attacks that turn common political
knowledge into contested political knowledge, which
is why you're seeing information attacks in the
United States and Europe that try to cast doubt on
the fairness of elections, the fairness of the police
and courts, the fairness of the Census. The same information
attack, but it increases the stability in one regime and decreases the stability in another. Here's another way of
saying this: There is something in political
science called a dictator's dilemma and it
kind of goes like this. As a dictator, you need
accurate information about how your country is
running, but that accurate information is also
dangerous because it tells everybody how badly your country is running. So you're always trying
to balance this need for information with this need
to suppress the information. There is a corresponding
democracy's dilemma, and that's this: The same
open flows of information that are necessary for
democracy to function are also potential
attack vectors. This feels like a useful
way of thinking about propaganda and it's
something we are continuing to develop. So let's hack some other
cognitive systems. Fear. I've written years ago
that our sense of fear is optimized for living in
small family groups in the East African highlands in
100,000 BC and not well designed for 2020
San Francisco. Terrorism directly targets
our cognitive shortcuts about fear. It's terrifying, vivid,
spectacular, random. It's basically tailormade
for us to exaggerate the risk and overreact. Right? Trust. Our intuitions are based
on trusting individuals peer to peer; trusting organizations and brands is not what we're used to. And this can be misused by
others to manipulate us. We naturally
trust authority. Something in print
is an authority. The computer said
so is an authority. Lots of examples of those
trust heuristics being attacked. You can even think of
junk food as hacking our biological systems of food
desirability because our security is based on our
100,000-year-old diet, not on modern processed
food production. The change in the threat model
has led to a vulnerability. I think any industry
that has been upended by technology is worth
examining from this perspective. Our system for choosing
elected officials, not voting specifically,
but election process in general, the news
industry, distance learning and
higher education. Any social system that has
slipped into complexity is worthy of examination. The tech industry,
of course, the media industry,
financial markets. In all of these cases,
differences in degree lead to differences in kind, and
they have security ramifications. We know this is true
for mass surveillance. I think it's true for a
lot of other things as well. The ability of people to
coordinate on the internet has changed the
nature of attack. Remember the great -- I
don't know if it's great -- the story of
Microsoft's chat bot Tay? Turned into a racist,
misogynistic Nazi in less than 24 hours by a
coordinated attack by 4chan. More recently, the people
running the Democratic caucuses in Iowa didn't
realize that publicizing their help number would
leave them vulnerable to a denial-of-service attack. We have moved in a lot of
places from good faith systems to ones where
people and institutions behave strategically. And security against that
stuff is what we're good at. I think power
matters here. All of these hacks are
about rearranging power, just as cryptography is
about rearranging power. In her great book Between
Truth and Power, Julie E. Cohen, law professor,
wrote that in the realm of government, power
interprets regulation as damage and
routes around it. Once the powerful
understood that they had to hack the regulatory
process, they developed competence to do just that,
and that impedes solutions. So elections are
a good example. I have already mentioned
money and politics are changing the threat model. So most U.S. election spending
takes place on television, secondarily on the internet. Now there are ways
to regulate this. Other countries restrict
advertising to some small time window, and there
are other things they do. But the platforms on which
this debate would occur are the very ones
that profit most from political advertising. And power will fight
security if it's against their interests. Think about the FBI
versus strong encryption. Those in power will fight
to retain their power. So one last concept
I want to look at. The notion of
a class break. So, in general, and we
know the story, computers replace expertise and
skill with an ability. You used to have to train
to be a calligrapher. Now you can use
any font you want. Driving is
currently a skill. How long will that last? This is also true
for security. One expert finds a
Zero-day, publishes it, now anyone can use it,
especially if it's embedded in a
software program. So this generalizes when
you deal with complex sociotechnical systems. Someone invented the
double Irish with a Dutch sandwich, but now
it's a class break. Once the loophole was
found, any company can take advantage of it. Misinformation on social
networks is a class break. And Russia might have
invented the techniques; now everyone can do it. Different techniques of
psychological manipulation are class breaks. The notion of a class
break drastically changes how we need to
think about risk. And I don't think that's
something well understood outside of our world. So we also need to
generalize the solutions we routinely use. I'll hit on a few of them. Transparency is a big one. And we see that in the
greater world, open government laws,
mandatory public tax and informational filings,
ingredient labels on products. Truth in lending
statements on financial products reduce corporate
excesses, even if no one reads them. I think we can achieve a
lot through transparency. We have other solutions in
our tech toolkit, defense in-depth,
compartmentalization, isolation, segmenting,
sandboxing, audit, incident response, patching. Iteration matters here. We know we never actually
solve a security problem; we iterate. Is there some way to iterate
law, to have extensible law? Can we implement some
rapid feedback in our laws and regulations? Resilience is an
important concept. It's how we deal with
systems on a continuous attack, which is the normal
situation in social systems. So when I wrote Beyond
Fear back in 2003, I gave five steps to evaluate
a security system. What are you
trying to protect? What are the risks? How well does your
solution mitigate the risks? What other risks does
your solution cause? And what are the
non-security trade-offs? I think we can generalize
that framework. Systems that are
decentralized and multiply controlled, they're
a lot harder to fix. But we have
experience with that. We have a lot of
experience with that. So all of this leads
to some big questions. What should policy for the
information economy look like? What components will
rule of law 2.0 have? What should economic
institutions for the information
economy look like? Industrial-era
capitalism is looking increasingly unlikely. How do we address the
problems that are baked into our technological
infrastructure without destroying what
it provides? And one problem I see
immediately is we don't have policy institutions
with footprints to match the technologies. And Facebook is global, yet
it's only regulated nationally. Those that have been
around for a while remember when tech
used to be a solution; now it's the problem. In reality, it's both. And our problems tend
to be social problems masquerading as tech
problems and tech solutions masquerading
as social solutions. And we need to better
integrate tech and policy. Computer security has long
integrated tech and people. I think we can do this for
a much broader set of systems. I think we need to upend
the idea that society is somehow solid, stable,
and naturally just there. We build society. Increasingly, we build
it with technology. And technology is not on
some inevitable trajectory. It interacts with the
country's political and social institutions. So there's not just one effect of a technology. It depends on the
details of society. Computer security has already
had an impact on technology. And now we need to have
an impact on the broader public interest. So this is what I'm
working on right now. Currently, it
is this talk. It will probably become
some articles and essays. Maybe it'll be a book. I think this framework
has some value. It gives structure to
thinking about adversaries inside a social system,
how we delineate the rules of the game, how people
hack the game, how they hack the metagame, and
how we can secure all of that. I think it's easy to get
carried away with this kind of thinking. All models are wrong,
but some are useful is the great quote. Which systems are
analogous to networked computers and
which are not? When are innovations
analogous to a hack with security implications and
when are they just novel uses or innovations
or social progress? There are bugs
in everything. When is a bug a
vulnerability? When is a vulnerability
deserving of attention? When is it catastrophic? There's probably a good
analogy to cancer here. Everybody has cancerous
cells in their body all the time, but most
cancers don't grow. It depends on the
environment and other external factors. I think it's the
same in our field. The difference, of course,
is that cancer cells are not intelligent, malicious,
adaptive adversaries, and that's who we're dealing with. I also think it's
important to have humility in this endeavor. All the examples I used
are large policy issues with history and expertise
and a huge body of existing knowledge. And we just can't think
that we can barge in and solve the world's problems
just because we're good at the problems in
our own world. The literature is filled
with intellectuals who are experts in their field,
overgeneralized, and fell flat on their face. Kind of want
to avoid that. And the last thing we want
is another tech can fix everything solution,
especially coming from the monoculture of Silicon
Valley, at the expense and lives of, like,
everybody else. I think we need a lot
of people from a lot of disciplines working
together to solve any of this, but I like tech
to be involved in these broader conversations. So I once heard this quote
about mathematical literacy. It's not that math
can solve the world's problems; it's just that
the world's problems would be easier to solve if
everyone just knew a little more math. I think the same thing
holds true for security. It's not that the security
mindset or security thinking will solve the
world's problems; it's just that the world's
problems would be easier to solve if everyone
just understood a little more security. And this is important. So I have one final
example about a hack against the tax code. In January, the New York
Times reported about this new kind of tax fraud. It's called cum ex
trading, which is Latin for with/without. I'm going to read a
sentence from the article. Through careful timing
and the coordination of a dozen different
transactions, cum ex trades produce two refunds
for dividend tax paid on one basket of stocks. That's one refund obtained
legally and the second illegally received. It was a hack. This was something
the system permitted, unanticipated and unintended
by the system's creators.
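To put that in system terms, here is a grossly simplified sketch, not real custody or tax mechanics, of the missing check: each refund claim is validated on its own paperwork, and nothing enforces the invariant that total refunds stay at or below the tax that was actually withheld.

```python
# A toy model of the reconciliation bug behind a cum-ex style double
# refund. Deliberately oversimplified; real dividend taxation and
# custody chains are far more complicated.

WITHHOLDING_RATE = 0.25

def approve_refund(claim, refunds_issued):
    """The buggy rule: approve any claim backed by a plausible-looking
    dividend certificate. There is no check against the amount of tax
    actually withheld on that stock."""
    if claim["has_dividend_certificate"]:
        refunds_issued.append(claim["amount"])
        return True
    return False

dividend = 1_000_000
withheld_once = dividend * WITHHOLDING_RATE   # 250,000 collected at source

refunds = []
# Through carefully timed trades around the dividend date, two parties
# each end up holding paperwork saying they bore the withholding tax.
approve_refund({"has_dividend_certificate": True, "amount": withheld_once}, refunds)
approve_refund({"has_dividend_certificate": True, "amount": withheld_once}, refunds)

print(f"withheld once: {withheld_once:,.0f}")  # 250,000
print(f"refunded:      {sum(refunds):,.0f}")   # 500,000 -- more paid out than collected
```

The exploit lives in the gap between per-claim validity and the global invariant, which is exactly the kind of property our field is trained to look for.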
From 2006 to 2011, the bankers, lawyers, and investors who used this
hack made off with $60 billion from EU countries. Right now, there are
prosecutions, primarily in Germany, and it is unclear
whether the law was broken. The hack is
permitted by the system. They're debating whether
there is some metasystem of don't do anything this
blatantly horrible that they can convict the
person of, or we have a vulnerability in our laws
that we need to patch. So a year ago, I stood on
this same stage and talked about the need for public
interest technologists, for technologists to
understand the social ramifications of their
work, for technologists to get involved in public
policy, to bridge the gap between tech and policy. So this is a piece of it. Hacking society and
securing against those hacks is how we in the
computer security field can use our expertise for
broader social progress. And I think we
have to do that. So thank you. >> BRUCE SCHNEIER: So I
left a bunch of time for questions and comments
because I really want questions and comments. This is a work in progress
and something I'm thinking about, so I'm curious
what you all think. There are two microphones
that everyone is scared to get in front of. Here comes one person. And if you don't want
to get in front of a microphone, email me. If you have an idea, a
rebuttal, another example, send it to me. I'm really curious. I'll chase down
the details. But anything that is
sparked by this talk, please tell me. Yes? >> AUDIENCE: So I have
an idea about comparing social systems to IT systems when it comes to securing both of them. I would like to get your opinion on what I call the permit hack, where society, through fear of all of these kinds of things, moves into a situation where the actual
objectives of the system completely change, right? In IT, we only have one
variable where the system itself that we need to
defend is very clear. And society of the
objective of what we are trying to do keeps
changing, right? >> BRUCE SCHNEIER: Yeah.
I think it's less different than you want. We like to think that
our systems end at the keyboard and screen, but,
in fact, they don't. We are used to systems --
the internet was invented with one particular threat
model and that completely changed, and we have an
internet designed for a very benign threat
environment being used for critical infrastructure. We're used to
system drift. I think we're used to
systems that expand. We often don't think in
that way, but those of us, I think, who do security
well are constantly thinking about that. But, yes, I think there is
a difference that social systems tend
to evolve more. They are not
deliberately created. Who created the
market economy? Well, it kind of showed up
when it was the right time. We know who created our
constitution and we can look at their debates and
really learn about the threat model they were
thinking about, the vulnerabilities they were
looking at, what they missed. But I think you're right. This is going to be a
tough expansion because of those fuzzy borders. I'm not sure of the
answer, but I think it's still worth poking at. >> DAN WOODS: Dan Woods
from Early Adopter Research. Do you know of any book
that explains the kind of basics of political theory
from utilitarianism, Locke, all of these things for
technologists so that this could be -- they could
then do a better job of mapping what we know to
what the political and social frames know? >> BRUCE SCHNEIER:
It's funny. I teach at the Harvard
Kennedy school and teach tech to policy kids. We're constantly writing
these papers like machine learning for policymakers. You want the other one,
like social systems for techies. I don't know. That's a great idea. I have been reading
political theory books. The way you do this is
you go online, look for political theory classes
at universities, and buy their textbooks. So I have been
reading a bunch. I don't remember names. But there are political
theory books that are used in these undergraduate classes
that go into all of that. They are not
written for techies. They're written for
humanities majors. But one for a techie? That's a great idea. If you need a project, I
would love to read it. >> DAN WOODS: Okay. >> BRUCE SCHNEIER:
It's done. >> DAN WOODS:
I'll get started. >> BRUCE SCHNEIER: Okay.
Let me know next year. >> ALEX ZERBRINA: Hi, Bruce.
Thank you for speaking today. My name is Alex Zerbrina. I am currently a student
at San Francisco State University, and I'm a
political science major who specializes
in terrorism. >> BRUCE SCHNEIER:All right.
Tell me why I'm all wrong. >> ALEX ZERBRINA: Oh, no,
no, no. I'm not saying that. >> BRUCE SCHNEIER: This
is my nightmare scenario, someone who
knows something. >> ALEX ZERBRINA: No, no,
I'm not going to tell you that. No, I'm not going
to tell you that. What I wanted to know
is that you spoke about terrorism and attacks that
seem to be random, but they are really not. How do you think we should
prevent those attacks, especially if they use
technology such as terrorists recruiting
on, say, Twitter? >> BRUCE SCHNEIER: This is
not really the topic of the talk, but the
answer is you can't. I mean, you know, random acts
of violence cannot be prevented. And that's, in a sense,
why it's so fearful. And, unfortunately, I
think the best we can do -- well, a lot of what we
do is we move it around. We block off certain
techniques and targets and force the terrorists
to choose other techniques and targets. That largely doesn't
work very well. We do a lot of stuff
against airplanes because airplanes are particularly
disastrous targets. Right? A bomb goes off in this
room, and some people die, some will get injured,
and everyone else is okay. A bomb goes off on this
airplane and the airplane crashes and we all die. That has a particular
failure mode, which is why that is protected more
than other things. Once you get out of
airplanes, you're just moving around what the
terrorists are doing, and that sends you upstream
to geopolitical solutions very quickly, that the
rest of the money is just expended in forcing the
bad guys to change their tactics and target. I see you there. >> AUDIENCE: Hi. When we are
talking about security and securing organizations and
systems, we often raise up the security awareness. What about here if you try
to secure the society? What about the awareness
of the people and how to raise their level
of education? Because with a low level of education, they are certainly a target for different kinds of hacks, like disinformation and stuff like that. >> BRUCE SCHNEIER: I
think that's interesting. I haven't done a lot of
thinking about awareness as a security measure. I should. Off the top of my head,
a lot of these attacks aren't attacks
against the user. They are more attacks
against the code. You need to think about
what are attacks against the users in these
social systems. If we have those,
how much is awareness going to be
a defense? So maybe an example might
be nutritional labels. If high fructose syrup
is a hack against our biological need for quick
energy and our nutritional labels are some kind of
literacy or education solution. That's where I'd look. My guess is
it's part of it. I tend not to be a big fan
in our field of education as a solution. I mean, I want our systems
to work even with an uneducated user. And I think this is just
sophistication of our field. The early automobiles
were sold with a toolkit and a repair manual. Now they're not. Now everybody can drive. You don't need to be
an expert in internal combustion to drive a car. And you shouldn't need to
be an expert in anything to use a computer. I want the fixes more
embedded in the system than to rely on the user. But that's worth thinking
about in these broader systems because they are
so much more user-focused than a tech system. Thank you for that. Something to think about. Yes. >> AUDIENCE: Some very
interesting ideas. Security, tech security
folks are very good at spotting problems, very good
at coming up with solutions. But what you haven't
talked about is how are we going to get that bell,
that terrific bell on that cat because we're not the
people, usually, who have the power or the
influence to implement. >> BRUCE SCHNEIER: What I
want is more techies in the room. I mean, this is really what I
push in public interest tech. Right now, we don't. We are not involved in the
conversations, and I think we can contribute to
these conversations. Last year, we had
a sitting U.S. Senator in a public hearing asking
Mark Zuckerberg this question: How does Facebook make money? Right? On the one hand, my
god, you don't know? And two, no one on your
staff told you that was a stupid question? The bar is
really low here. And we need to do better. How? I don't have
a good answer. I think we're trying
a lot of things. But, yes, I think that
is a big part of the solution, getting
technologists involved in public policies because
all of these problems have some tech component. That's not a great answer. It's what I got for you. Let me go to
the next person. >> LOGAN: Hi.
My name is Logan. I am a researcher in both
government and computer science at
Georgetown University. I really like
this paradigm. I think it's very
fascinating; however, just off your talk, it seems
like it's focused on ironing out the kinks and
the bugs in systems. But when you look at how
entrenched some of these broader sociopolitical
systems are, some of them may be flawed to the core. Are you worried that this
paradigm may focus more on just ironing out the bumps
when some of the systems may need to be
replaced entirely? >> BRUCE SCHNEIER: Yeah.
That's a good comment. And you're right. In computer security,
we tend to iron out the bumps. That's what we do. Rarely do we say that the
internet is fundamentally broken; make a new one. If we say that, people
look at you and say are you an idiot? That's never
going to happen. So I think this kind of
thinking is about the bumps. You're right. We're not going to fix the
broad structural issues with this kind
of thinking. That takes sort of another
level of abstraction. Am I worried that this
will obscure that? I am not. I think both are a thing
and we have to deal with them. Society is terrible at making
broad structural changes. I don't think
I can fix that. But in the absence of
that, I think starting to think about what power is
doing as hacking and what they're exploiting as
vulnerabilities, I think that would go some way to
changing the way we think of the dynamic. Hopefully that will help. But you make a
very good point. >> LOGAN: Thank you. >> BRUCE SCHNEIER: All right.
You're my last question. >> TOM SEGO: I'm Tom
Sego, CEO of BlastWave. My question is really
around incentives and purpose. I kind of see
two broad groups. One in which one group
is trying to gain and leverage power and
maximize what they can do with that power, and then
the other group is trying to immunize the system
from these hacks. They're trying to
make it invulnerable. And I'm curious, like, how
do you deal with those different types of
opposing purposes? There's no single
requirements document that we've all agreed upon. >> BRUCE SCHNEIER: Right. And I think that's what
I talk about in that the systems evolve. It's not like we have
a spec we can look at. Although there are
vulnerabilities in specs, too. I don't know if I
have a good answer. That's a good question. And I think this might
speak to the edges of where my generalization
starts failing. What do we do when there
isn't a consensus on what the system is
supposed to do? I think I got to that when
I talked about VC funding. Is that a hack or not? From this
perspective, it is. From that perspective,
it's just that's the way the system works. Are lobbyists a hack? Yeah, kind of. But, no, that's how
we get laws passed. I don't know. As I flesh this out, I'm
going to have to be a little more rigid, but I
think there's value in having a squishy
definition. You can claim legitimately
that gay marriage is a hack. It's taking this
particular system and we're going to use
it in this new way. A lot of us think
that's a great idea. But, you know,
it was a hack. So is that good or bad? Well, you know,
there are good hacks. The question is now, what
is it supposed to do? What are its goals? Whose society? That's where that
gets embedded. So I don't have
a good answer. But that's a
good question. >> TOM SEGO:
Okay, thank you. >> BRUCE SCHNEIER: So I
don't -- I wasn't taking notes. Can you email me
that question again? Just send me an email. Thank you. All right. I have to leave the stage. Thank you all. Any other questions,
comments, suggestions for examples, things to look
at, places where I'm completely wrong, please
email me because I want to keep thinking about this. Thanks, all. Have a great conference. I'll see you next year.