Good evening, everybody,
and welcome back. So if you're like me, you've
probably heard quite a bit about artificial intelligence
over the last few years. It's covered a lot of ground.
On any given day, it might be taking our jobs, beating us in
Jeopardy, powering driverless cars, inspiring medical
breakthroughs, or maybe even, as Elon Musk says, posing the
biggest existential threat to humanity. As AI gets probably
much smarter than humans, he says, the relative
intelligence ratio is probably similar to that between
a person and a cat, maybe even bigger.
We need to be very careful about the advancement of AI.
Tonight, we're not gonna discuss every
potential application of AI, but instead focus on
a specific application that demands our attention, the use
of artificial intelligence for military purposes and national
security. To begin, let's rewind just a little bit. For
decades, there have been long standing ties between Silicon
Valley and the US Military. As described by Leslie Berlin,
a historian and archivist here at Stanford,
all of modern high tech has the US Department of Defense
to thank at its core, because this is where the money came from to be able to develop a lot of what is driving the technology that we're using today. Even the Internet
itself was initially seeded with defense funding.
In the late 1960s, the Department's Advanced
Research Projects Agency, DARPA, stimulated
the creation of ARPANET, which became the basis for
the modern web. And many of the technologies that
are ubiquitous today have similar origin stories. DARPA
funding of SRI International throughout the late 60s and
70s culminated in Shakey the Robot, the first
AI powered mobile robot. And over the course of decades,
that effort would grow into the world's most popular
virtual assistant, Siri, which Apple famously
acquired from SRI in 2010. Another example, in 1995, two Stanford graduate students
received a DARPA NSF grant of their own to conduct research
into digital libraries. That pair turned out to be Sergey
Brin and Larry Page. And their research, supported in
the early days by this grant, was at the core of Google's
original search engine, the flagship offering of the
company they would co-found a few years later. In short,
the military's fingerprints are all over the history of
the technology industry. And by some accounts, the connection between
the government and tech firms is as strong
as ever. Amazon and Microsoft are actively
competing for a $10 billion contract with the Pentagon today. The CIA spun out an organization to
strategically invest in cutting edge startups. And
the Department of Defense has set up an innovation unit
just down the road. But of course, the relationship
between the Pentagon and Silicon Valley hasn't
always been cooperative. Right alongside military
funding of research has always been fierce opposition
to the use of technology for violent ends. The biggest
focal point of such opposition today is the development
of so-called killer robots, lethal, autonomous weapons
that can select and engage targets without
human intervention. Human rights activists and
international policy makers alike are sounding the alarm
about a future in which machines rather than humans
may determine who lives and who dies. Now sometimes,
opposition to the development of autonomous weapons or the
technology that underlies them emerges from within tech
firms themselves. Nowhere was this more apparent than in
the high-profile debate within Google about the company's
work with the Pentagon on an initiative called Project
Maven. So as a refresher, in 2017, Google began working
on a project using AI to analyze the massive
amount of data generated by military drones every day, classifying objects, tracking movements, and detecting images in video footage far more accurately and quickly than humans ever could. Upon learning of this
contract, 4,000 Google employees signed an internal
petition in opposition and a dozen employees resigned
in protest. As they saw it, the military was trying to
appropriate the technology they were building in
a nonmilitary context and re-purpose it for possibly objectionable ends.
Specifically, they were troubled that their
work might be used to improve the targeting of drone
strikes, a practice around which there were considerable
human rights concerns. And they were firm in
their beliefs that, quote, Google should not be
in the business of war. So the DoD said they had initiated this project to make drone warfare less harmful to innocent civilians. But as Google employees wrote,
this contract puts Google's reputation at risk,
and stands in direct opposition to our core values.
Building this technology to assist the US government in
military surveillance and potentially lethal outcomes is
not acceptable. Weeks later, Google decided not to
renew its contract, citing this employee backlash
as a primary reason for the decision. Now, how
should we think about this? On the one hand, activists and organized tech employees
claimed the decision as a victory,
seeing it as an opening for broader reform. A group of
them joined with the tech workers' coalition to urge not
just Google but also Amazon, Microsoft, and IBM to say no
to future military contracts. In their words, DoD contracts
break user trust and signal a dangerous alliance.
Tech companies shouldn't build offensive technology for
one country's military. But on the other hand,
critics of Google's decision have at times described
the company's handling of it as at minimum ill-informed, perhaps worse, unpatriotic, and perhaps worst of all, something amounting to a unilateral disarmament in a new global AI arms race. In the words of
Christopher Kirchhoff, who helps lead the Pentagon's recent push to collaborate with Silicon Valley, the only
way the military can continue protecting our nation and
preserving the relative peace the world has enjoyed
since World War II, is by integrating the newest
technology into its systems. Denying the military access
to this technology, he says, would over time cripple it,
which would be calamitous for the nation and for the world. So are these newly-empowered
employees right to protest their company's
partnerships with the Pentagon? Or are they short sighted in
their demands, clamoring for change that might threaten the
very global order upon which their lives and companies
depend? Google is after all an American tech company
protected by the rule of law here in the United States.
Now, this debate gets to the heart
of why our topic tonight is so important. It's not merely
about individual companies' decisions to pursue or
forgo specific contracts, but about the broader geopolitical
story in which these decisions are unfolding. The military
isn't seeking out and investing in new technology
just because it's exciting, but rather because AI
represents a new frontier for global competition.
These events are taking place in a hyper-competitive
environment. One in which countries are vying not just
for technological superiority, but the economic and military
advantages that will accompany it. Around the world,
global leaders are taking this seriously. Russian President
Vladimir Putin has said, artificial intelligence is
the future, not only for Russia, but for all humankind.
Whoever becomes the leader in this sphere will become
the ruler of the world. Emmanuel Macron announced
a significant AI strategy for France just last spring,
and 15 other countries have released national
plans as well. But when it comes to the global AI
race, the US is primarily up against one rival, China. Over
the last few years, Chinese leaders have forcefully
championed AI development as a national priority. They've
laid out an ambitious strategy that promises major
breakthroughs by 2025, and pledges to make China the
world leader in AI by 2030. Experts estimate that they
will commit something on the order of $150 billion to
the goal over the next decade. And some argue that, absent significant
action to the contrary, China is poised to surpass
the US in the years ahead. As investor Kai-Fu Lee sees it, China is a more hospitable climate for AI development
at this stage, and is acting more aggressively.
In a competition that he says essentially requires four key inputs, abundant data, hungry entrepreneurs, AI scientists, and an AI-friendly policy environment, China has some distinct advantages. As an authoritarian regime, the government permits few boundaries between itself and its tech sector, which includes some of the world's most powerful companies: Tencent, Baidu, Alibaba.
They have centralized decision making, and access to as much
data as can be collected. Now, noting the profound risk this could pose to the United States, former deputy secretary of defense Robert Work has said, we
are now in a big technological competition with great powers.
We cannot assume that our technological superiority is a
given, we are going to have to fight for it. Taking a step in
that direction just last week, President Trump signed a new
executive order to establish the American AI Initiative,
a high-level strategy for AI development here in the US.
Many have criticized the plan for lacking specifics and
funding, but it is yet another reminder of
the global competition for AI dominance and the challenge
that will play out in the years ahead. Examining
that challenge together with a set of formidable guests
will be our task here tonight. We have with us three
extraordinary experts. First is Courtney Bowman,
the director of privacy and civil liberties engineering at
Palantir. His work addresses issues at the intersection
of policy, law, technology, ethics, and social norms. And
in working extensively with government and commercial
partners his team focuses on enabling Palantir to build and
deploy data integration, sharing, and analysis
software that respects and reinforces privacy,
security, and data protection principles and
community expectations. We also have with us
Avril Haines, who's a senior research scholar at Columbia
University, the deputy director of Columbia World
Projects, and a lecturer in law at Columbia Law School.
Prior to joining Columbia, Avril served as assistant to
the President and principal deputy national security
adviser to President Obama. In that role she chaired
the Deputies Committee, the administration's principal
forum for formulating national security and foreign policy.
And before that she served as deputy director of the Central
Intelligence Agency. And finally, we have Mike Cull, who heads the artificial
intelligence and machine learning portfolio at
the Defense Innovation Unit Experimental, DIUx. He served
as a pilot and air staff officer in the US Air Force
and has extensive experience in the software industry
in product development, product management, and as
a senior executive at multiple technology companies and
startups. Please join me in welcoming
everybody to the stage. >> [APPLAUSE] >> So thanks to our esteemed panelists for being with us,
and thank you all for coming back for yet another, I hope, energizing conversation. I wanna start with you, Avril. We had the privilege of serving together in the Obama administration, when the tables were reversed. You held the gavel, and you
were directing the questions at the rest of us
around the table. Now I hold the gavel.
>> My god. [LAUGH]
>> So welcome back. But I wanna start, you know,
the job of principal deputy national security adviser put you at the front lines of managing a wide range of national security threats, whether it was the civil war in Syria, nuclear proliferation in Iran and North Korea, the Russian invasion of Ukraine, many things that involved reacting in really rapid
succession to events that were unfolding
around the globe. But you also made it
a priority to try and look over the horizon and
think about the long-term. And it's hard not to think
about the challenge that we have before us tonight about
the role of AI in national security. And think that what
we are confronting today in terms of our national
security threats is likely to change pretty dramatically
over the next decade or two as a function of
developments in AI. And so what I'd like you to
start us off with is your perspective having been in
that role about what the real challenges are that AI
developments present for US national security, and also
what some of the opportunities are that are enabled by this
kind of technological change. >> Sure, so, I think as part of this,
maybe it's useful to talk about the third offset and
in a sense how we think about that in the context of defense
strategy. And the third offset is a very unsexy name for, effectively, a strategy that was really intended to be another way to think about how to promote
conventional deterrence and preserve peace. And
in essence, it was looking at ways in which you could
offset potential adversaries' advantages. With the idea that
the United States can't have the military be everywhere,
and yet at the same time needs to project power into areas in
order to create that kind of deterrence. And there were
really three elements to it, and technology was one
element. There was also the concept of essentially
operational concepts changing, and organizational structures
being a piece of it. And in each of these areas,
the view of the Department of Defense was these
are spaces in which we need to be thinking
about how to position ourselves in order to offset
effectively the potential, you know,
adversaries' advantages. And I think if Bob Work
were here today, you know, former deputy secretary of
defense who really spent an enormous amount of time
thinking about this and working this through the
Pentagon and, and through to the interagency in many
respects. He would look at AI as being really at the hub of
the technology aspect of this. And that that was critical, both an opportunity and a challenge in many respects, looking at it across the world. So a challenge in the sense that we're not the only ones interested in this technology, and we have to constantly think about how it is that adversaries may use it against us in effect. But also, in many respects, an opportunity to improve how it is that we function from a defense
perspective, but also across a range of issues
in the United States that we have to think about in
that context. So in so many spaces and I think
I'll just end on this and, Hillary pointed out
one aspect of this. You can think about
the opportunities that a certain technology
presents in the defense and national security space such
as maybe it makes you capable of being more targeted in your operations, so that you can actually limit any kind of collateral damage that's occurring in terms of civilians or civilian infrastructure and so on.
That can be one aspect of it. It can be a tremendous
advantage in terms of trying to analyze enormous amounts of information or data that's coming in and really homing in on
what actually matters and analyzing it more effectively
by recognizing patterns and so on that are useful to you. There's a whole series of
things in the defense area and then in the intelligence area
that we would think about as being useful uses of AI, but
there's also just a tremendous area of science that can
benefit whether you're combining it with biology
in the context of health or you're thinking about it in
the context of education or in the context. So
many different spaces and part of what you really have to do
from a process perspective and this is one of the challenges,
but also one of the opportunities
is really think about how do we ensure as we're creating
a strategy for a potentially disruptive, but also
advantageous technology for the United States
government as a whole. How do we make sure that we
have all the right people in the room thinking about
that strategy, thinking about the different aspects of
it, so that we can actually take the greatest advantage
of it in terms of science, and in terms of our commercial and private sector advantages and so on. But also
in terms of our defense and our foreign policy, and
all of these other pieces. How can we actually create
a comprehensive strategy in this space?
>> That's great. I wanna use this as a jumping
off point, Mike, to bring you into the conversation, because while Avril was grappling with these issues and thinking about strategy, you know, from the table in the West Wing, you're out here as the ambassador, in some sense, for the Defense Department to
Silicon Valley, right? So your job is to think
how operationally we take advantage of advances in AI to
achieve our national security, and our national
defense objectives. And part of the challenge that
you confront is, whereas 30 or 40 years ago, as Hillary described, Department of Defense
investment was absolutely central to technological
change. Now so much of what's happening is in the commercial
sector and not in defense department research projects,
either at universities or in the national labs. What
does that challenge look like? How can the US have a national
AI strategy when much of the technological
innovation is happening in the private sector?
>> It is, it is a challenge, and, kind of taking off on the
notion of the third offset. Today's third offset is AI
from our perspective. It is, and the evidence is in what Hillary said earlier. All the other nations are imposing a national strategy. Countries are saying
this is an imperative, we must do this and the US
needs to do the same thing in the context of having the
technological advantage that we need for our own defense
and for our own good. It is a challenge. We've come from the Defense Department funding all of the initiatives, or a lot of the initiatives, that drove technology early on. The venture community
took over and has done an excellent job
of funding technology, but that funding now outpaces the
R&D funds that DOD provides. And so we've gotta kinda close that gap a little bit, if we're gonna get somewhere
and actually advance the state of the art. The
opportunity, it's a challenge. But the opportunity is to cast
the things, the scenarios, the capabilities the Defense Department needs in the context of the pull
on the science. The science needs to be pulled
along various dimensions. Certainly in terms of health
care, societal benefits, it's all there. But we talked
about recognizing objects. Recognizing objects in
a defense context is a hugely difficult task and it pulls
on the science in a very, in a very powerful and creative
way. If we can combine that, we end up with the ability to find objects for the DOD that are the same kinds of
capabilities we need to help humans when we have big fires
in California to rescue them. To do things such as figuring out how best to find where trouble spots are after a flood, and where do you deploy
forces to do that? So there's a synergy between
the requirements for the DOD and the capability
that's needed by the DOD, and the science that's being provided in the Valley. And the trick is to try to bring
it together in a meaningful way and at the same time
have a debate about, is this an object I'm recognizing because I'm going to target it, which I don't wanna have to do, or is it something that will benefit everybody? So there's that fine line, but there's an opportunity. And if
we're gonna do it as a nation, we do need to do
it as a nation. >> So just one followup, you know, on this front, I get that there are probably
some synergies out there. But let's talk about the cases
when there aren't synergies, right? Part of what is
different about this moment is that we are outsourcing to
a set of venture capitalists in Silicon Valley
the financing to develop the military capabilities that
we need in some fundamental sense to achieve the third
offset. Isn't that a problem from your perspective?
>> No, because I don't see it, I don't quite see it to that
extent. I don't think we're outsourcing the financing
of the capability. The, the capability will
get financed in any case, by the venture community.
What our task is I think is to provide the pull on
the technology with the things that we really need to help
bring it forward faster and to do it in a coherent
way with use cases that are of importance from
a national perspective, and that pulls on the technology.
So I don't think it's all about the money, although
that helps, but I think it's, it's about what capabilities
we're trying to drive into the marketplace.
The same capabilities that we want as
consumers that play into the national sector, as well.
>> Let me bring Courtney into the conversation and
then, you know, invite my colleagues to join
in in the questioning. So Hilary started us off with
this discussion about project Maven and the decision of
Google employees to challenge, you know, the contract that
the company had to be involved in thinking about the use
of AI for drone strikes and the like. Palantir represents
a very different model of a company for Silicon Valley. Everyone knows that Palantir
collaborates with government. In fact, that's part of
your MO as a company, why? I mean, you know, what
Google's got going on here is an internal revolt among
incredibly talented people about their collaboration
with government. Yet, Palantir claims that as a
badge of honor. Is that a good business strategy for
Palantir? Aren't you at risk of losing the really
talented and bright-eyed idealistic engineers that
are gonna be needed to advance your company?
>> Well, I think, I think one thing that Palantir has done
a good job of, since the early days of the company, is very
explicitly acknowledging that there are certain types of
work that we wanted to enable. When the company was
founded, the initial set of programs that we had built this infrastructure around were allowing government institutions to address some of the challenges that had been identified in the wake of the 9/11 Commission reports, identifying that institutions within the intelligence community had failed to pull the pieces together amongst information
that was readily available. So we constituted a company
around the idea of data integration and analysis
with this initial problem set in mind of helping our government institutions and agencies to defend the role of democracies and the institutions that inform and preserve the type of society that we want to live in. So
we made that very explicit. And I think that was something
that was reflected in decisions that employees that
came to the company, as they thought about different
opportunities. And by the way, we draw from the same talent
pool as Google and Facebook and other companies that have dealt with these issues in public discourse. And, by and large, we're also composed of a similar set of community members. So I think, to some degree, there are students that come from Stanford and other elite institutions that make a choice to be involved in this type of work. There is some self-selection bias, but I think there is a world of opportunity amongst
people who recognize that there is maybe more nuance
to these sorts of questions. And there is an opportunity
to engage in a meaningful way on the development of powerful technology, but to do so in a way that is also respectful of some of the considerations that I think we are here to talk about in terms of the ethics of AI applications and powerful information systems.
>> All right, let's stick with this for
a second. So Courtney, you've just identified as the, you know, the founding kind of mission of Palantir to work on behalf of liberal democratic values in the wake of 9/11.
>> [COUGH] >> We've heard about Project Maven and Google now,
so I want to ask Mike and Avril, is Google
being unpatriotic, as a company that's made this
decision not to partner with the US government for these particular purposes
on Project Maven or any other AI deployment on
behalf of the military. Or, if you don't prefer the language
of unpatriotic, how about insufficiently loyal to the
values of liberal democracy, which doesn't, doesn't make it
just about the United States, but about the set of allies
that the country stands for? >> That's a loaded question. [LAUGH]
>> Well, Palantir is here to show up on behalf of
liberal-democratic values, and I would imagine the folks
representing the U.S. Government would not feel
a whole lot of anxiety about saying the same.
>> I think that it's not about it being unpatriotic. We're finding companies here in the Valley that are coming to us. We put out the problem sets and say, here are the problems we're trying to solve. Do you have that capability?
And we are surprised by the number of companies that
are coming forward and saying, we'd like to work with you.
And in some cases it's not so much about helping the country
necessarily being patriotic or not being patriotic. In some cases it's
a business decision. In some cases it's a I have
this technology, I wanna advance it by working on your
problem kind of a decision. >> Mm-hm. >> So to some degree, I characterize the Google
situation as not so much unpatriotic, but
maybe uninformed. With some information
companies are choosing, or not choosing, to work with
the Department of Defense. But it's after some information,
some why are we doing this, what's this all about.
There's some conversation, there's some debate.
There's some discourse. I think the Google situation
was completely void of all that.
>> But, so what's the information then?
I'm at Google, you show up from the Department of Defense unit for investing, and you say, you've made this decision, I got some information I wanna
share with you. What is the information I haven't been considering as Google? >> I think what didn't get considered at Google- >> Yeah. >> -was what was really the problem set and what is the science
we're trying to pull on. I think the DOD frankly
made a mistake- >> Mm-hm. >> By not being open about what they were really trying
to do at Maven. It came across in the press as, we're taking movies from drones, we're gonna analyze
those pictures, and we're gonna use them for bad things.
>> Mm hm. >> And it just sort of flowed in the paper and in the media.
And that, that may sound naive, but I don't think
that's the case. I think if there'd been more information
and more communication about it, with people from Washington coming here and talking about it, the outcome may have been different to a degree. Maybe not
completely different, but I think the outcome would
have been different. >> One more pass at this. So let's say I'm
a Google employee. I'm part of the group that was
protesting the company about the project. And I say as a
Google employee something like I came to work here because I
bought into the mission of do no evil.
>> Mm-hm. >> And I wanna deploy my talents on behalf of
the development of technology. So long as I'm convinced it's not being used to
help kill people. And when I think about making
contracts with the Department of Defense, that's oftentimes what's involved, even if it means killing people that folks at Palantir or elsewhere in the government, you know, I mean people who are terrorists-
>> [COUGH] >> That might be something that as a citizen I might wish for.
>> Mm-hm. >> But as a personal endeavor of my own technological talent, it sounds scary. What do you say to that person?
>> You do have to make a choice. And it's not
necessarily binary. >> Mm-hm. >> A lot of the ability to know precisely whether that's a bad thing or not a bad thing on the ground helps defend the cause, in the sense that if you decide to do something about that, you're
preventing collateral damage, for example.
So that's the extreme. If you are going to use it for
lethal reasons, you're doing, you know, a job that has to be done for lethal reasons, but you're also precluding all of the collateral damage that may come to bear. That's one way to look at it.
The other way to look at it is, and this is indirect but it's very true, if we have the capability of knowing who's a bad person and who's not a bad person on the ground
from a drone in the air halfway around the world.
If we have that capability, it precludes the other guys
from doing what they're going to do because they
know that we're watching. >> Mm-hm. >> So that's not a bad outcome.
>> I mean if I could maybe push this a little bit further to think
about government capability, maybe start with Avril on this
and then move to Courtney. And start with kind of the
strange question, which is, why does Palantir exist as a company? And so the reason why I ask that, if you could give us a little
bit of a notion of dynamics, you were saying at
the beginning, Palantir was identified as a company
because there was this need the government had. And
we could develop information systems, which seems very
worthy. But if that need was identified early on,
why didn't the government itself actually go to
develop this technology, and develop the capabilities
to be able to do this? Why was there a need to rely
on a private company to do it? So, could you help us
understand a little bit of the dynamics of how the government viewed that, and then how that turned
into a private company? >> Sure, so, would you mind if I just
started with a little bit of-
>> Please. >> Any on that, just to throw in my hat
a little bit on that one too, because I think it's relevant. But, so from my perspective,
I would not have come to the same conclusion that
Google did, on this issue. Maybe that's obvious, but
just to state it boldly. And, and I actually,
I think, and Courtney to credit you
on this, I think you phrased this in the following
way which I quite agree with. I think it was a missed
opportunity for Google to engage in a conversation
with the government about, essentially, what it was they
were doing in that context. But I do think the
conversation that was had at Google, with its employees and more publicly in some
respects, was an appropriate conversation. In other words,
I think it is critical for, individuals who
are working for a company, individuals who are owning and
managing a company, citizens, to think about the ethics and
the appropriateness of activities that
the government's taking. And whether or not you wanna
participate in that and, effectively facilitate
it in certain ways. And I think that's something that
we should be talking about, and that is relevant to,
you know, a decision that they will then
have the opportunity to make. I also think it's
important to recognize, when you have the skills,
and when you have the capabilities,
in your company or otherwise to do something that
may be useful for society, for your community, for the
country in which you live in. That that should be a factor
that plays into your decision making on these issues. You
know, that there is, talent, for example, in different
parts of the country that can be brought to bear on issues
that we're dealing with. And I think one of the challenges
that I at least saw in this conversation, and that I think
is worth thinking about, is this question of whether or not Google sees itself
as an American company, as part of America or
as a global company, as both. You know, what does that mean?
How do we think about our responsibilities in that
context? You know, what, what are sort of the, the factors
that we should be bringing to bear in that conversation on
the other side of things. And I think the reality is
whether Google decides to contribute to Maven or pull
out of Maven, those are both decisions that have impact
on the United States and its defense and
strategies for the future. So, there is no sideline
that you sit on in this conversation. You're
either doing it one way or you're doing it the other, but
you're having an impact on it. So, to put it on that,
I think to your question, the reality is,
there is talent and there is work that is done
outside of the United States government that the United
States government itself within it does not have the
capacity to do. And, you know, one of the ways in which
the intelligence, you know, community and others thought
to essentially promote and fuel work that would be of use, in effect, to the national security of the United States, was through, essentially, an entity called In-Q-Tel. And In-Q-Tel provides, essentially, seed money for companies that do work in certain areas that are of interest. And that is in fact a part of the mechanism that led to Palantir. And
I'll let Courtney take it from here.
>> So, Palantir, I should clarify by way of level-setting, is not primarily an AI company.
Or at least we don't think of ourselves as primarily an AI
or machine learning company, we think of ourselves as
a data integration and analytics company. Which is to
say we work with institutions that have complex data assets
that they have access to as part of their normal course of
action. And those data assets are in all different sorts
of forms and shapes, and there're siloed systems and
they don't interact. So you can imagine when you have
one institution that has 20, 30 different data systems,
they're trying to stitch that together to carry out
their mission or mandate. And those, those systems
don't talk to each other, they don't come in
a common format. And you're trying in exigent circumstances to deliver on a task, and to address a significant need. It's a very complex order to carry out. So imagine scaling that to the
size of multiple government institutions, where different
government institutions are holding different
pieces of information. And while they may have the lawful
means to be able to share that information, because they
themselves are individually dealing with the complexity
of their legacy systems, they can't do it. So what,
what we set out to do, and one of the reasons
that In-Q-Tel made a, made a small initial
investment that helped fund Palantir, was to deal with
this very discrete problem, that in a lot of ways is
a very unsexy problem. How do you get data to come
together in a way that analysts, typically
human analysts, can make sense out of that
information, enrich it and, and take action according to
the institutional mandates. That was,
that was sort of the focus and the drive behind what we set out to do. Why Palantir, as opposed to government entities, as an institution to carry out this type of work? I think if you look at some of
the complexities of government institutions, you, you see
that there are for better or worse bureaucracies
that come into play and make this type of information
sharing particularly complex. And the technology to be able
to do it may not be an easy thing for certain
institutions, and so there is, there is an opportunity for
private entities to be able to plug into this space.
And there may also be opportunities to expand that
technology, that sort of integration of information
technology, in broader domains. Because the reality
is that this is not just an issue that exists within
government institutions, but virtually every large
institution that, over time, builds up data assets,
is grappling with the same issue.
>> So, I wanna follow up a little bit
on this point. Which is to ask the following question.
It's a badge of honor for Palantir that you
partner with government. It's something that
you celebrate. It's something that's core to
your identity. In the language that's been used by some
of our panelists in the last couple months, it's
maybe part of your north star. Your guiding mission in terms
of what Palantir is about. The question is,
what are the limits on that? How do you draw lines? Are
there, you know, Google has faced this challenge from its
own employees about sort of not participating with its
technology in, in the killing of human beings. What
are Palantir's lines? Is there anything that the government
could come to you saying, this is what we wanna do?
Either this government or another government
in another country, where you would say you know
what? That's not consistent with our mission.
>> We, we are, we are proud to work with,
with government agencies, in the intelligence community,
in defense, with special forces, with state and
local, institutions. But this commitment to, to working
with the public sector as well as our work in the commercial
sector, is not without limits. And we as a company have
to make decisions and tradeoffs about what we're
comfortable supporting. The reality of how this decision-making framework plays out is that it's not
easy, because the problems that you deal with in these
spaces are inherently complex. And as we mentioned in the discussion earlier, in earlier days at Palantir, we had set out with a task of kind of defining a set of red lines that we could apply
universally to all customers, or all potential customers.
To define what we would do or wouldn't do in the scope
of engagements. And what we thought would be this
very brief set of very clear maybe five to ten red lines
turned into this sprawling 40-page exercise. And when we applied it to both the universe of existing customers and all prospective customers, not just in the government sector but also in the commercial sector, and also with respect to potential philanthropic customers, we found that virtually none, or maybe you end up with a completely null set, of people that you can work with. Because every situation that
you work with in the world is going to be fraught with some
set of issues. That's the trade-off of kind of engaging with thorny, knotty, real-world problems. So the approach that we took over time, which we built up through a lot of pain and experience, was grappling with the hard questions on the ground, and gradually realizing that there are sort of families of resemblance that create a heuristic that you can apply
in any given environment, such that you may struggle
with the first three engagements that kind of feel
similar. But the next few, you start to really see what
the similarities are and you're much more effective at,
at addressing those questions. But the, the short answer is
there is no short answer and the reality is you have
to really struggle and toil with the questions, inclusive of
the moral dimensions of these types of engagements.
>> And can you tell us anything that's on the other
side of the red line? >> So, one example, in our commercial work, we made a decision that we would not work with tobacco companies. And that was just a principled decision that our CEO made after some discussion within the company,
but there are other instances along those lines.
>> Yes, so if you have a question-
>> No, go ahead. >> So, okay, so I actually wanted to pull us
back away from this, though I hope we do come back to this
question of bright lines, and red lines, and
where we don't cross them. But to something that
Avril brought up, which was this conception of,
these companies, and it's something that we've heard
in here in previous sessions of these companies as American
companies versus global platforms or
global companies. And, and what the implications are of
that self identification for the work that
they go on to do. So, so, you said a couple
of different things. You said, you know, first of
all it's totally good and we should celebrate the fact
that these companies and their engineers or employees
are asking hard questions. You also pointed out
to us that, you know, regardless of which way
they make their decision, either one of those is
actually a decision. So deciding not to act is as
much of a clear action as deciding to go through
with the contract. But then you said, and I, we
sort of kept moving past it, you said that you would have
made a different decision. So for companies who
are based in America but are struggling with this question of whether or not they are an American company, what do you think are their
responsibilities to the US government versus other
entities that might have demand for their services?
>> Yeah, so I mean, I think part of the challenge is
I don't know that my answer is appropriate or acceptable to
be applied across the board. But I will give you my answer,
right? So from my perspective, I think it's informed by
my own life experience, not surprisingly.
One of the things that was sort of fascinating for
me, I started off in science, in physics, and doing work in that area. And then I opened up a bookstore cafe in Baltimore, and it was my first experience as any kind of business owner.
My parents, one was an artist, one was a scientist and
a teacher. And having a business, in my experience, it made me feel part of the community in a completely different way than, you know, living in an apartment building in New York City when I was a kid. And
it was the first time that I started to think about what
did it mean to be a part of the community in that kind of
way to have a business and be part of the community. And
you, you recognize, you know, oddly enough politicians would
stop by and talk to you, you know [LAUGH] like this
is very strange, you know, council people and so on. And,
and part of what it started to mean to me was essentially
that I had a kind of a heightened responsibility
within the community to make it a better community in
effect. That I needed to think about things like what am
I selling in my business? Am I doing things with
the community that promote, you know,
sort of getting people who are out of work into work? Am
I, thinking about, you know, sort of zoning issues,
all kinds of things that, that I needed to start to
think about in the context of being a community member. And,
and what I came to realize from other business owners
that I respected and you know was learning from was
really if I'm not doing it, nobody's doing it, right? If
I'm not taking responsibility for the government and the community that I'm living
in, then you know, it's not going to be the community that
I want to be in in ten years. And it's not gonna essentially
move in the right direction. And I see companies like
Google which are enormous companies, have incredible
power within our communities, right, and within our country.
Taking advantage of many of the opportunities
offered by the United States. Taking advantage of
the infrastructure of the, you know, political
environment, all of these things. That both provides them with responsibility, in the sense that since they are taking advantage of these things, they should be giving
something back to the communities in which
they live. But also really thinking about the future
of where they're living, they should be contributing
to that as well. And I think that's something
that should be a part of the conversation and the way
in which they interact in this country.
>> I mean, so taking actually that,
that same community-oriented approach to what a business sees as its mandate, or its community, and to bring it back to Courtney, is there a community with which Palantir would identify as its primary audience that it is providing services for, aside from its customers? And
if so, how would you define that community,
who you are in service of? >> So I, I think the first layer of community is the
community of the employees. And those are people who
come from institutions like, like Stanford, people with
diverse viewpoints and perspectives and
political views. Probably not that far
removed from the sorts of political persuasions that are
represented in this audience. And, and so one thing we find
is that, when we engage with that community on these, these
hard questions about who we should be working with or who
we should not be working with, in the scope of those types of deployments. Some of the hardest discussions are internal discussions. And we have to pass muster with
that community before we can even go beyond that. I would
say another layer of community is, as we set out from the early stages, we're a company that's directing our efforts towards enabling certain institutions
to preserve the society and the values that we consider
to be core to our identities, as employees of the company
but also citizens. And so that's probably the next
layer of community, and I think there's a lot
that falls out from that.
As the company has grown and expanded into other sectors,
and has moved into government engagements internationally, as well as working with commercial institutions, we've had to broaden that vision to
think about how, for example, private organizations,
private companies, also play into this view of,
what're the institutions that are critical to preserving
the society we wanna live in. And that implicates a certain
set of decisions. But I would say,
go back to what I said before, this idea of being a part,
being an active part and having agency in preserving the world's democracies, is kind of central to our identity as a company. And it is a north star when we make decisions that oftentimes have complicated externalities.
>> Let me try here to get in too. So, I mean, the question
I had in mind just before when Jeremy was asking you about
red lines that you might, you know, definitely not wanna
cross. So you left there again invoking the values of
the society that we belong to, the institutions of democracy
that you want to defend. Again, those are phrases that understandably would trip off the tongue of someone like Avril, representing the US Government, but sound stranger, perhaps, coming from a private company. But so again,
let's just see, so I imagine that means
something like, you wouldn't do business with
the North Korean government. That's a bright line you don't go past. You could then
make a more complex decision. Maybe you do business with
certain non-democratic regimes and their intelligence
services, or government agencies. But
because you have a view that, by doing business with them, in some long-run horizon, you're acting on behalf of
liberal democratic values. So, take Saudi Arabia, say,
does Palantir do business with Saudi Arabia? And if you do,
how do you think about it as democracy preserving?
>> No, we don't actually work with Saudi Arabia, but
I think it's an interesting hypothetical. It does raise
questions about whether there are strategic alliances that make sense for us to engage in. And by nature of the work that we do and the set of institutions
that we serve domestically, we would have formal
responsibilities to discuss the prospects of working with
countries like Saudi Arabia with our counterparts in the US, and effectively ensuring that they're comfortable with that type of work. But the details of whether we would engage, under what circumstances, really come down to the complexity of what's being asked and
understanding the actual context. Treating a country as a monolithic entity that is only represented by the depictions that we see in a few brief newsreels,
I think may not do justice to the fact that governments
aren't always monoliths. We, we know this to be
the case in the US, and so you have to really
engage on the specific institutions that you might
be, considering contracting with or working with.
And, even more specifically, the type of work that, that
would be involved, and, then, beyond that understanding,
the trajectory of that work. Whether it aligns with
a broader set of values, according to the institutions
that we serve primarily, and then making a holistic
decision based on all of those considerations.
>> So, one more thing that, you know, comes to mind for me
here is to ask. You know, I'm getting the impression
that Palantir has a foreign policy, and
it's making me anxious. >> [LAUGH] >> And, not cuz I necessarily disagree with the objectives,
but because I don't know what business a private company
has having a foreign policy. And the idea will go
something like this, on behalf of democratic
values, democracies typically have civilian control of the
military. So if the work of these folks here is seen,
over some time horizon, to be distasteful to a
sufficient number of citizens, the various leaders get
voted out of office, and the direction of the military changes over time. But Palantir has no such accountability structure
to citizens. You might internally feel like you work
with various agencies, and in some intermediary way there's
an accountability to citizens. But from my perspective on the
outside, since I'm neither in the government agency that
you're consulting with, nor in any way connected
to Palantir, I wonder how it is that you feel accountable to external members of the very societies whose values you aim to defend. Why should I feel good about Palantir
having a foreign policy? >> I, I, I think it's, it's a great question. I
wouldn't frame it as Palantir as a private company
holding a foreign policy, maybe foreign opinions,
but I think your point about accountability is
a very fair one. And I think it draws back on the point that people who work for the company have a sense of responsibility. And so accountability, foremost, is to reflect the way that people within the company think about their comfort levels in working with certain institutions. But the
reality is we acknowledge that our view has to be much
more sophisticated, and it cannot just be sort of
a go-it-alone, technocratic approach to
the world. We operate within a broader context, where
political discourse needs to be factored into the decisions
that we make. And so what that means is that, if we're going
to go into potentially fraught environments, we need to have conversations with advisers and officials that we know and trust within government institutions, to make sure that some of the approaches
that we're taking are in alignment with considerations
that they could bring to bear. >> Mhm, Cameron? >> Well, I wanted to bring Mike into this conversation
actually. So, it sounds like you have a point to
follow up on this, and so maybe you can also expand on
the point with respect to the US government, which has
a pretty clear foreign policy. And so we see, you know, this
arms race developing in AI, where one could think of China as an autocratic nation having very centralized control
over the investment they put into things like
education and science. And how they wanna develop
particular technologies for particular ends that
they wanna achieve. And the United States seems to
have a very decentralized view of seeing, are there
particular companies that will work with us? Are there
particular projects that we could potentially fund,
that may or may not decide that they're gonna take the money and do something. And in terms of
how that plays out long term, how do you actually see the
competitive situation between the United States and China,
with respect to their policies in terms of AI going forward?
>> Well, I think, we talked about this,
earlier or we hinted at it. China has an advantage because
they can centralize the way they decide things. They, they
pick and choose the areas they want to fund, they make
data available to anybody that wants to use the data.
They force the use cases that are going to advance the science. So they've come at it top-down, and there's not sort of these lines of responsibility. They can start a company from the top down. We as a democracy don't
get to do that. And so to some degree we're at
a disadvantage. And so you have to pick and choose where you can exploit what
you're good at. And where you need to get
better at things. So the DOD is a very bureaucratic organization, okay? And that's just the way it is. What we've gotta start to realize is that the AI talent and the AI growth, the science, is in commercial companies.
And how do we make decisions about policy, funding, all the way down from what comes out of Congress in a five-year funding cycle, and what use cases we're gonna advocate to get the commercial companies to work for us. I wanna comment a little bit about Palantir also,
to kinda step back for a second. Years ago we didn't
have debates about why was Northrop making airplanes for
us. Years ago we didn't have debates about McDonnell
Douglas building F-4s that the country needed. What's
changed a little bit is that, the technology advantage
is not in carriers and ships and airplanes, which
are clearly defense things. But it's in
the softer science and art of information technology.
It's in how data flows and disparate databases and around
integration. And these things are being developed for
the commercial sector. And so, I dare say what's causing
this debate is the fact that we're trying to buy software
rather than buy jet airplanes. And that makes it harder
to draw the line between, you know, what is good and
what is evil. Which I think is way too
binary anyway. So really, I think that we have
to create the advantage by exploiting the things
we're good at. Now, one of the things I think that we need to get better at is what I'm hinting at, and
that is, the DOD needs to understand that they need to
buy commercial technology and buy software at the speed of
software. We fund things in a five-year cycle, and even now we find technologies in the Valley that users in the field would love to have.
And we can fund it and get it to them, but to scale it,
the funding is four years out, and in those four years we
lose a technology advantage. So we've gotta work on
some of these things. It's not,
it's not a slam dunk. China has the advantage, I think. I think we're playing catch-up, but we
need to be clear about what we can fix and do very quickly.
>> So I wanna move to a very
specific example and get all three of you to
react to this. You know, when I was a kid I might've
been afraid of monsters. But today's kids probably should
be afraid of killer robots. Not just because they'll be
on the television set or in the movie theater, but because
it's a very real possibility, whether through artificial
general intelligence or the kind of technological
advances that we have in the military. That you can imagine either
the American military or other foreign militaries
really putting in the hands of autonomous decisions
the question of whether lethal action should be taken
against a human being. So I wanna ask you, can you
imagine a situation, and we'll start with Avril, but I wanna
hear from both Courtney and Mike on this as well.
Can you imagine a situation in which you would be
supportive of delegating to machines the decision about
whether to take lethal action against a human being?
Now, in answering that question,
I want you to think about [COUGH] an environment
in which not doing so might put us at a military disadvantage vis-à-vis our adversaries. So if your answer is no, do you really think that principle of preserving a human in the loop is so worth it to bear costs with respect to our own military superiority?
>> So part of the challenge in answering these kinds
of questions I find is in actually drilling down on
what it means to have a human in the loop, right?
In other words, I'm delegating authority to an autonomous machine to make a determination of whether I want to kill, if I'm flying an aircraft and I have a target in front of me, and it has human beings in it, and
that target is an identified target that we have decided
we need to hit. But I give the airplane,
essentially, or the machine, or the computer, you know, a
certain amount of space within which to decide when
to take the strike because it's more capable of
figuring that out than I am, right?
I've identified the target, it knows what the target
is in a sense, but it can decide when
it's gonna do it and it can do it within a certain
range of space, etc. And the reality is we've sort of
moved into that space already, right? Where there
are places where, you know, we delegate to machines,
within certain guidelines, things that they are supposed
to be doing according to our instructions.
So to that extent, yes, I can imagine it, right?
But at the same time, I can't imagine a scenario in
which I essentially abdicate all responsibility for
what target is gonna be hit, who will be hit, etc.
And I say to a computer that, you know, somehow is capable,
supposedly, of determining this, you,
you're now responsible for the defense of the United
States. And you decide w-who it is and when, you know, you
should be taking strikes, etc. And it's everything in between
that's really important and much more likely to be, right,
like dealt with. And so in a way I, the way I think
about this is as follows. I think, and there's,
you know, and you could spend an entire hour
on this issue, right? But, or more a class, maybe.
It is, on the one hand, I believe that, we are gonna
see more and more, AI, other technology, etc.,
be brought to bear on, on, on our defense through
machines in a way that is similar to what I described,
but even more advanced. And that is clearly because more
and more we are seeing how quickly, essentially, it is
possible for an adversary to, present a challenge to us
in effect by striking us or doing other things like that. And the question is when that
loop becomes so short, right, how does a human being
actually, are they capable of responding quickly enough to
defend against the attack that is coming in, right? And
the question then becomes, so how do we manage that in a way
that retains both our humanity in a sense, our principles,
our, you know, the sort of broader guidelines
under which we are willing to use force? How do we do it in
a way that is consistent with what we expect to be lawful
under the circumstances? Which is connected to
our principles and our humanity and the way we've
designed the law. But also, how do we do it in a way
that retains accountability when you do things that
are outside of that box? Because that's one of
the key issues, I think, that lethally autonomous
weapons systems raises. And it's one of the key issues
that's being discussed, both internal to
the government, I'm sure. I know it was discussed when
I was in government. And it's being discussed within
the international community. There's a group of
governmental experts under the Convention on Certain Conventional Weapons
that is looking at this issue and trying to devise
principles upon which to deal with it. And I think, as we
move through this, the key is really to do, you know, to
sort of very thoughtfully and sometimes more slowly than
people wish but nevertheless go through these cases and
think through, okay, are we, you know, do we have sort of a
rubric here? Do we think this is acceptable? Do we think
this is out of bounds? That sort of thing. While
at the same time, mind you, I think keeping an eye on
the policy development so that you don't actually create
an arms race in this area, that actually is
counterproductive to what you're trying to achieve
which ultimately is really effectively defending and preserving peace.
>> So let me ask you one followup
question before I invite the others in because Avril
is one of the most talented lawyers in the US government.
You played an important role in thinking about the rules
that should govern the use of drones. And to set in
place an architecture for making decisions and doing
this kind of careful judgement that's required about
when force should be used in that way. Of course
one of the challenges is that that kind of system depends
on the faith that people have in policymakers to operate in
a non-transparent space, and make those judgments in
accordance with our ethics or values. We now find
ourselves in a chal-, in a situation where a lot
of the architecture that was built during the Obama
administration, with respect to human
enabled drone strikes, has been rolled back by a
subsequent administration. So I'm interested in reflecting
on that experience. What lessons would you draw
for this careful, you know, calibration and experimentation, with greater
autonomy in decision making, in a space that is not visible
to the democratic process, that depends on a lot of trust
and faith that people have in policymakers and experts to
make reasoned decisions and at a time when that trust
is evaporating you know, given the events that
we see in Washington? Is this something that really
can be left to the executive branch to carefully navigate
in the way that you described? Or do we need some democratic
deliberation about this, and some external oversight of
the use of these capabilities? >> [LAUGH] There is a lot there. All right, so
first of all, I am not one of the most talented lawyers in
government. I'm not even in government anymore but there
are so many extraordinarily talented lawyers in
government. And anyway, and it's true I participated in
this effort to do this. I'll try to boil it down to
a few things I'd say on this. One is I, I went into law in
large part because I thought lawyers really understood in a
sense how to effect change in society. And I was inspired
through civil rights and a variety of other spaces
where I saw law have an enormous impact. I've come
now to a point where I think, I still think law is
critically important, and, you know, important to
change it, and so on. But I think at least as important,
and possibly more important, in this moment
in history is culture, in a way. It's sort of the norms
that we have that are not necessarily legally binding,
but nevertheless are the ways in which we accept, you know,
determine what is acceptable behavior, what's
unacceptable behavior, and how we think about things
and approach challenges. And, and it's largely because I
think these things are so important in their
interaction together. To create an environment
in which you can actually promote what you think is,
for example, better decision making or activities that you
believe reflect society. And I think in the national
security space, you do naturally have an area where
first of all if the executive branch is not doing it,
I don't know who is doing it, right? So you really
only have one option. And in the context
of the executive branch because it's national
security there are going to be certain aspects of it that
are not gonna be public. And when it comes to oversight,
then you have to rely, in effect, on Congress. And
that is a key piece because they can receive
classified information. And they are a separate
branch of government. And you rely very little on
the courts simply because so little of what is in national
security can be brought to the court. So, that is sort of
the structure that you have. In which case, you need to
invest in those structures and in those oversight spaces in
order to actually make them effective, that's one piece.
At the same time, I also believe and, you know,
this is something I know President Obama took a very
strong view on, and that we tried to do a lot of. But, you
know, still, I think he would even say not nearly enough and
as much as we would want, which is to create as much
transparency as possible around the frameworks and in
the context of the activity, as it relates to, you know,
remotely piloted aircraft and a variety of other ways
of taking action against terrorist targets
outside of areas of active hostilities, which is a very
challenging space, and one in which there had
not been transparency. You know, President Obama
gave a speech, provided a lot more transparency
than had been provided. And I think it was the right
thing to do because we do have to have debates in our
democracy about these issues. And it's the only way in which
we can actually effectively, I think, challenge some of
the preconceptions that are in the executive branch sometimes
that become part of your groupthink concern but-
>> Okay, so in in about five minutes we're
gonna turn to the two mics that we have in each of
the rows to open it up to your questions. We know there's a
lot of demand to ask questions in the group. And so we wanted
to try that format tonight. Let me flag for you that in
asking your question, your voice will be on the video
that's being recorded and made public although your face
will not, just given the way the cameras are set up. Know
that if you're standing up to speak, you will be heard but
perhaps not seen. We will do that at 8:20. And let me just invite either
Courtney or Mike to jump in on this question of
sort of humans out of the loop with respect to making
decisions about lethal action, and Avril really nicely
laid out the continuum. We know the tough cases
are in the middle, right? The tough cases are not what
we're already doing and not fully delegating to a
machine the decision about who to target, when, why, and how.
The tough decisions are gonna be in the middle.
How do you think about that when you're sitting at DIUx
sort of trying to drive technology development for
the military that surely has an appetite to think about
how to reduce exposure for US service members in the
context of being able to carry out and achieve the effects that
they want on the battlefield? And how do you think about it,
you know, from the private sector perspective?
>> Well, I'm of the belief, and
I do look at it this way, that as we're driving
technology change and we're picking and
choosing the things that are getting investments,
we want less exposure for our military members as
they're trying to do their jobs. But we also want the
accountability built in. And then there's a part of me
that, as an airplane driver myself, looks at it
in the context of an actual combat situation,
what scenario plays out in
terms of time, distance, and all of those things that
happen. Whether it's a close-in thing, or a very far away
thing, in terms of combat. I think that the human in
the loop has to be there. So if, if you're looking for
the binary yes or no, I'm on the side of human in the loop
in the decisions we make and investments, as well as the
decisions that we're making in, in terms of what
technologies we're bringing forward and in what,
into what applications and into what contexts.
Trying to decide what target,
where, when, and provide the, the box within which to
operate can be delegated and should be delegated to
a machine cuz it'll probably do it faster, better,
cheaper, more accurately. But making the final
decision of, or delegating the decision to say, within
these parameters, go for it, I think that works. But
leaving it completely to a, to a machine to decide
anything from strategy to tactics and executing it, I
just don't think is feasible. Mostly because of our own
ethics, values, and, and culture. You could probably
get the technology to work. But I don't think it's a good
idea and it doesn't fit in, in terms of our value system
to make it happen that way. So I'm, I'm on the side of no,
we, human in the loop.
>> Courtney? >> I'm, I'm a skeptic with respect to the question of, of
whether the technology will, will get us there. And I think
this is a point that ties onto some remarks that Avril made
earlier around the point of the criticality of having
public discourse ar-around these issues. I think part
of that discourse needs to involve an effort to, to peel
back the layers and understand the nature of what this technology
can and, and cannot do. We mentioned AGI, artificial
general intelligence. It often comes up in,
in these discussions but in my mind, it's a, it's a bit
of a bogey man when you think about history. Look at
the history of artificial intelligence. The golden age
began not a few years ago. The golden age of artificial
intelligence was supposed to have started in 1956,
and then we had not one but two AI winters in between.
And in the early days, the late 50s or late 60s, you
had these towering figures in computer science like
Marvin Minsky making declarations that
within a generation, all the significant problems
of artificial general intelligence would be
substantially solved and we, we would likely have
robots walking amongst us. That's clearly not the case.
By contrast, you, you had other figures like J.C.R.
Licklider, who presented this alternative view of
what the future would hold, this concept of man-computer
symbiosis. And I think that's very much where we are today
and where we're likely going to be. But this discussion,
this historical discussion that I think we should really
draw upon calls attention to some fundamental limitations
that need to be factored in. And one of those fundamental
limitations when we think about not just
the capability of artificial general intelligence, but
specialized applications of artificial intelligence
like computer vision, is the realization that
computers don't understand, and they don't, they're not
capable of carrying out cognitive tasks including
making moral judgments. And I don't think we're, we're
very close to actually getting to that point.
So when you acknowledge those, those limitations, it forces
you into, to the recognition that this, there is
a fundamental necessity for humans to be very deeply
involved in the loop, and understanding the,
the chain of action. So apart from the prospect of
the Chinese and the US and other superpowers developing
AI robots that go out into a field and have a battle
royale amongst themselves. And we all sit back on
the sidelines and eat popcorn. I can't imagine a vision of a
world where we would allow, or even want to as
responsible technologists, to have computers making
these decisions without a very significant
component of, of humanity being involved to
the degree to which there's moral culpability that
accrues to us directly.
>> So I just wanna push on the we. You can imagine a situation in which we
allow this to happen. Who, who's the we you
are talking about? Is it we in this room? Is it we in the United States
of America? Is it we in a set of countries that adhere to
liberal democratic values? Is it we including Vladimir
Putin, is it we including Kim Jong-un, who's the we?
>> Well, obviously it, it starts in this room.
I, I think there is a responsibility to, to spread
the, the, the discourse much more broadly so that people
have a clear understanding cuz I think there is a lot
of mystification around these questions. And
there's a risk that in the fog
of this emerging AI drive, we get kind of caught
up in things that may not actually precipitate
and so create our own destiny. So I, I don't know what that
means on an international scale. I don't know how exactly
you drive that, that discussion across
global institutions. But I think there, there
are historical examples that we may be able to draw
from with respect to the Cold War, the earlier technology
offsets, and discussions around the prospects for,
for nuclear armament. There may be some lessons
that we can derive from, from those experiences in
terms of how to facilitate that international dialogue.
>> Okay, feel free to stand up by
the mic if you have questions that you wanna ask our
distinguished panelists. We'll start over here.
>> Hi, I'm Raj. So where do we draw the line?
Is it the business we need to make for the company? Or
do we focus on nationalism? Or do we focus on corporate
social responsibility? An example here is,
when it's a local US company, we are focused on one nation.
Now, when you're a distributed organization like Google,
like, are you responsible to nationalism, supporting
nationalism in each country? Or should you be only in that
country where you operate? That's the debate
a company goes through in any given organization.
>> Mm-hm, so it's
>> Go ahead, yeah, take it, Mike.
>> So I would, I would take it the following
way. It is the business's decision. So in this
area we start companies. And when we start companies,
we start with a product, and we start with
a business model. We very seldom, you're,
you're a bit of the exception, and I hope there are more
exceptions, that start in the military context or
the government context. So we start with some product
that we're gonna sell. And along the way we make
decisions about what markets we're in, who are our
competitors. So even, even in the commercial
sector you know, we used to make decisions
about who to sell to, who not to sell to for
a variety of reasons. And that was before we'd
even considered going into the military or
the national security sector. So I think as
companies evolve, they'll start with, with, with
a product and a business, but they'll evolve and create
the market. And along the way they'll create a value system
and a culture that supports the answer to the question
when it arrives: Hey, look, DIUx has come to us with an
opportunity to get a contract with the government. Should we
say yes or no? And I think, I think, it's not as clear as
are we a patriotic company or not? Are we a nationalistic
company or not? I think your own business model coupled
with your culture and values will cause you to
answer the question when the time comes.
>> Thanks for coming. Pablo, you said we need to rely on
the oversight of Congress. What we have heard, taking it
all together over several of these sessions, is a great
concern that we all have: that the people who
are in Congress today don't seem to understand a
great deal about tech at all, whether it relates to
defense or other things. And that's in spite of all
the briefings that I'm sure are coming for
people who do understand tech. And [INAUDIBLE], as well as
all the people that looked next door, they do the lobbying
in all of Silicon Valley, sticking people next door
to go and teach them. But they're still
not getting it, short of trying to have
all the people in the room here get elected. >> [LAUGH] >> [INAUDIBLE]
>> But you should think about that. Go ahead, yes. [LAUGH]
>> [INAUDIBLE] investment that you mentioned that we need to
make in the institution to get Congress to understand
these issues or others as it relates to tech.
>> Yeah, I, I mean, I think you've
alighted on what I see as the answer, in a sense,
to this point. Which is to say that, my point is that the
reality is that there are at least some things that, the
executive branch is not gonna be able to make public in the
context of national security. And as a consequence, all
you're left with, in effect, for the oversight piece
is Congress. And when you think about that, and you
should push in every respect, as I noted, to make as much
as possible transparent. But when you accept that there is some nub at least that
is not gonna be ultimately disclosed, and that Congress has to be
your oversight mechanism, that's the system we live in.
Then I think the answer, it, it sorta makes you realize
how much we need to invest in Congress in that sense in
order to ensure that they are prepared to be the kind of
oversight that you want them to be in that sense. Cuz I
think it's, it's a fair issue. And it's one where I think,
you know, it's a, it's a constant piece that I
think people have been seeing. Which is that we need to
actually bring the education of technology into
the government in all sorts of ways, including in Congress,
and I wouldn't say it's alone in that respect. It needs
to be seen throughout. And there has to be more of
a dialogue, in a way, between technology, in my
view, and the foreign policy sector in order to promote
the kind of conversation that really gets to the next level
beyond the sort of issue spotting space and really
developing policy together. >> It's apropos, that question. We were
joking earlier today, when reflecting with our
panelists about [COUGH] the undergraduate course
that we're teaching, about the whiplash that our
students are experiencing. They can't figure out whether
to trust the companies, or the government, or
neither. And each week it seems like a different
sort of institution is the one that you can count on
until you figure out that no, you can't count on them
either. And I think all of us are struggling to navigate in
this environment, but, but about whose responsibility
is it to look at the whole? Where can you rely on
mechanisms of oversight and accountability to keep
some moral compass and ethical guidelines that
drive this space? And these rules are being
written right now. I mean, they really
don't exist. And so these capability gaps that
exist on the governmental side are really real. Over to you.
>> I think the question that I'm curious about
is around this sort of, where do we draw the line
around what's acceptable for a company to be able to
provide technology for military use and what is
not acceptable? And to add more context to this question,
I'm kind of curious why is it even important to
have that line? It's not necessarily
my point of view, but I think it's an interesting
hypothetical. I would argue there's some
really interesting parallels around that. If we look at
like, some of the previous conversations that have
happened in this class, for example, around drawing a
line around what's acceptable around data sharing.
I think there's sort of this, interesting parallel there
around taking the Google case, for example. People being
uncomfortable with using Google because of how Google
is sharing their data. Now, the workers at Google
are uncomfortable with how Google is using their work, because of
how Google is now selling their work. So I think,
that in general also is sort of a modern problem that, that
we're trying to break apart in this class. But I'm just
sort of curious of why does it even matter to like have, this
distinction around what is or is not acceptable for
a given company to do? Because seemingly, some other
company will come along and do it anyways.
>> Courtney, why don't you take
that one first? Why does Palantir have a line
at all, beyond the law? >> [LAUGH] >> Yeah, so, so obviously we have an
obligation to treat the law as a floor. And we, we think
of some set of, of ethical, principles or, or frameworks
as a way of defining a, a higher standard or
higher threshold of, of how we engage. I mean, I think
there's an interesting thread to, to, to your question,
which is why have red lines when they're invariably going
to be crossed? Not just by other institutions, but in
different contexts. And, and, and this is one of the points
that I was trying to draw out earlier when I talked
about this exercise to define red lines. There's
an acknowledgement that so much of the decision-making
around these hard questions comes down to the context.
And if you are invested in, in making ethical decisions,
I think you have to, you have to grapple with context.
And, and that puts you in this hard position where you don't
always end up with, with easy rules that you can just, you
know, check the box and, and follow the line and see,
see where you, where you land. But, but
your other question of, why think about ethics
at all when if, if one company is going
to choose to be ethical, other companies may just
take that business and, and act as mercenaries? I mean,
I think there's, there's an, there's an element of rising
tide kind of lifting all, all boats. If companies
agree that there are higher standards, well, people in the valley
communicate with each other. And, I, I mentioned before
that the community that, that Palantir is immediately
most responsive to is the community of, of employees
that we have, our engineers. And if they're not happy,
we have a bit of a problem. If they don't feel comfortable
with the work that we're doing, we,
we have a real problem. They're highly fungible
resources. They're coveted, they can command
high salaries, they can walk anywhere else.
And so there is that, that, that real tension that
plays out. If, if we're not actually responsive to,
to the ethical demands, and we saw this play out in the
Maven case, if companies are not responsive to, to the ethical
demands of their employees, then those employees disperse
to other institutions and they might make the same demands
of those other institutions. >> Can I add something to that, too? In my mind
is the tension between wanting your own hands to be
free of any complicity, and then being so self-involved
with your own moral complicity that you're willing to remove
yourself from the entire rest of the structure, effectively
leaving the entire structure intact, with nothing other
than your hands being clean. You, you don't wanna buy
gasoline anymore cuz that indirectly supports
authoritarian regimes. You stop buying any type of
meat because that contributes to factory farming. You try
to go off the grid entirely, in which your hands
are completely cleansed, and you are removed from
the entire superstructure. But the entire
superstructure's still in place with you living
in the middle of nowhere. If you care morally about
making a change in the world, the price of making a change
in the world might be to get your hands a little bit dirty.
Now, on, on the other hand, it's important, it seems to
me also to think that when you're involved in
getting your hands dirty, there is something important
in terms of communicating what it is you're standing for,
even if in certain ways you're
compromised. So here's another example that's relevant to any
university context, Stanford and lots of other universities
have a big endowment. They typically invest
the endowment to maximize the return on investment. A
generation ago there was a big movement to divest the money
in the Stanford endowments and other university endowments
from companies that did business in Apartheid
era South Africa. Any sober economic analysis
would show you that this had zero effect on the market
capitalization of any of the companies. There was
no effect on the market when a university withdraws
an investment from it. But there's an important
communicative effect, a symbolic expression about
disapproval of something. Which might over the course of
some longer time horizon have a powerful motivating,
effect on other people. So if it turns out that
the people, you know, protesting at Google, all
that means is that some other company gets the Defense
Department's business. But there's some type of view,
that by having a large, prominent company and
its employees express this particular principle, that
other people could pick it up, and it stimulates some of the
very debate that we're having. Or in the other direction, with Courtney coming and
saying how it's important to actually work with
government institutions, on behalf of liberal
democratic values. The point is to allow the symbolic
expression of where you stand to happen, and allow that to
have an educative effect. Rather than just being so
self involved with your moral purity, that you ditch
the whole system and happily live by yourself.
>> Good point.
>> So you have talked about the advantage of China
in the AI development, because of their centralized
decision process. So what do you think
are the main impacts of China surpassing the US in
the AI development? >> Everybody turned to me. >> [LAUGH] >> Well, I think, I think it'd be a disaster
if China gained and kept the advantage.
Not just militarily but from an economic perspective
as a country as well. So, I, I think this, this is
the sort of the moment, we can't fall behind any
longer. So the simple answer from my perspective is,
question, what, what is the, what is the impact of
the Chinese getting ahead? I think it's a disaster.
>> Okay. >> [LAUGH] >> We'll stop, and. >> That was my same question. I'd like to first of all
thank Rob for arranging the question and
answer session for this. This is a great way to end
the evening, so thank you for that. I would like to explore
that just a little bit more, Mike, because you, you
obviously are sensitive to it. When you talk about top
down arrangement for China, what do we envision
20 years from today? You mentioned
this being a disaster, a dangerous situation; what could that look like
from your perspective? >> The disaster itself. >> [LAUGH] >> Well, I, I think, I think militarily,
it's the third off, offset going the other way. Our whole
notion of the third offset was to have several elements of,
of our infrastructure, our organization, and
then our technology, not necessarily in that order,
where we had an advantage. And that caused us to have the
advantage, in a way that we weren't overspending, that we
could afford is a better way to put it, and
I think we would lose that. And there would be decisions
militarily, but, but actually more importantly
there would be decisions economically and in terms of
economic advantage, that, that would be, that would
really have consequences that, that I can't even completely
describe. I think, if you go back to the notion of, what is
the role of the military, it's the sort of, I probably won't
say it as well as others can. But, the whole idea is to have
enough defense, that you can carry out your other policies
and your other desires, right? And if you start to lose,
if you start to lose that, then you cave in on other,
other decisions. The other way around, if you
lose the economic advantage, you end up caving in on other
decisions as well. So I think it's all interrelated, it's
not just about the military. >> Maybe I can build on that too.
>> Please do. >> So, I mean I think, it's hard to describe what's
the specific weapon, right? That they could develop or use
to essentially put us on our heels. But, but I think,
building on Mike's point is, I think, in each of the offsets
that we've had historically, the concept has really been
to promote deterrence. And, and that is, you know, to
the extent that we're capable of projecting power in
a way that says to our potential adversaries
under the circumstances, whether it's China or
otherwise. That, you can't move with
impunity to, to do things that ultimately, you believe
are to your benefit, but there will be no response to,
in effect. Because you have such superiority from
a military perspective, that, you know, we have no ability
to push back on that issue. And, and I think, and, you
know, and also drag us into then conflicts that we can't
ultimately succeed in. And so
in the context of that, if you're looking at China,
you know, and their activities in the South
China Sea, for example, right? What we've seen is,
China promote effectively, policies and
actions that ultimately make, their sphere of influence more
effective in this area. And we have allies that we have,
you know, through treaties and otherwise, committed to come
to the defense of, for example, the Philippines, or
in other areas in the space. And, and ultimately,
we have to push back enough so that we actually can be sure that we can come
to the defense of those allies in the event that China pushes
in, right? And if they gain such military superiority
through technical advantage, essentially, in this context,
it's much harder for us to push back in the ways
that we've pushed back, which have not involved us
using force. But simply through actions such as,
you know, using, warships that go through the, you know, the
South China Sea in ways that are consistent with the law of
the sea. And, you know, saying that we're essentially not
going to ask your permission, when they say, you need to ask
for permission. And, you know, there's a sort of all of these
very small ways in which we push back and prevent them
from continuing to push in on allies and partners that
we have in that region. And that kind of balance
in a sense shifts, and that has an enormous impact on
the United States. So, I mean, just to fill in a bit on the sea
space in, in the South China Sea. That's
important from a military perspective, because of
our allies and partners. But it's also important for our private sector,
because there's so much trade that goes through
the South China Sea, and we need to be able to promote
open waterways and so on. So it's, it's all different
spheres of influence and power in a sense that it affects the
United States in that context. >> Over here. >> So, thank you again for coming. And, according to
an article in the Economist, they mention two
different strategies. One is Americans acquiring
startups all over the world, and then we know
that they acquired them. But when China
acquires, they acquire probably
a percentage of the startup or the company. So
that means the label or the name of the company
stays the same, so the local people don't
even realize that China is, you know, a part owner.
And they can be part of the board,
or have access to data, or even intellectual
property, you know. So, what is the United States
doing in order to understand the process
of getting into the, the different technologies,
you know? When the local people don't even
know that they acquired it, and it's a Chinese company. >> Well, to, to, to my knowledge, we
are starting to get aware of that situation, the situation
in particular that, that a company is getting
an investment from a venture capital firm. But if you trace
it back, the, the central funding is coming from
a Chinese source. And, we're starting to get the awareness,
and that's also translating into looking at policies,
where we take a look at having
more transparency around who are the funds and the
so-called funds of funds that are
actually funneling the money to a VC here in the valley
that might be making an investment in the company.
So, there's a series of things that I think are, are going
on around transparency and policy, and it'll take some
time. But there's some, there's some aggressive moves
to immediately identify technologies, for
example, that must have examination
before any investment goes in. So it's starting both from
a technology that we wanna protect perspective, but also
putting in the framework for taking a look at where those
funds really come from in general.
>> And there is, is there any indication in
terms of who owns this startup to know about that?
>> Yes, there is. We're taking steps to get
founders to understand that as they start traveling up and
down Sand Hill Road looking for money,
here are the dos and don'ts of trying to see who
they're taking investments from.
>> Okay, thank you.
>> Hi, so something that's come up in, I think,
every one of these classes so far is the importance of
being able to have complex discussions about
this type of problem, because there's
no easy answers. So my final question is, are,
are any of you, do you have or have you experienced or
are you aware of like the best ways to make sure that
in a hiring process, for either a company like Palantir
or Google or you know, government agencies.
That a hiring process will assess someone's ability to
have these type of complex conversations about
technical ethical questions? And then secondary
question is, do you think there's any ways
for us to assess, you know, potential elected officials
we're considering voting for if they can have that
type of discussion? >> Courtney, do you wanna start?
>> You can do a fair bit of hiring, so.
>> Yeah, I can, I can take a pass at answering the first
part of the question. Yes, the answer is yes, in fact,
this is something that we look at all the time. And I struck
a note of caution earlier around the exuberance about the words
artificial intelligence, the prospects of
artificial intelligence and machine learning as panaceas
for all the world's problems. And so this is sort of
a filter for me when I and others at Palantir
interview people. If candidates come in with
strong computer science backgrounds from esteemed
institutions like Stanford, and all they can talk about
is how they wanna just get in there and get the data and do machine learning and
artificial intelligence. Without any thought as to,
as to what the, the deeper implications are,
to, to that type of work, that's kind of a strike, an
immediate strike against them. I'm looking for people who can
have a critical eye towards, towards the type of work
that they might engage in. And can understand
the complexities of
the applications of, of these,
these powerful technologies. We are always on
the search for, for people who are willing to ask
these hard, hard questions. In fact, given the nature of
the work that we do, we sort of operate
from this position that we, we don't accept this false
dichotomy that we originally started with, which was, you know, you
either trade privacy for, for security or
the other way around. In this case we,
we also reject the, what seems an emerging false dichotomy,
that you either choose a strategic advantage through
artificial intelligence, or you choose moral purity. The world's, world's much more
complicated than that, and if you can't engage in, in
those complexities and come up with nuanced answers that
really dig at those questions, then you're probably not gonna
fare well in those sorts of environments. But if you
can come to the table and have an informed conversation
about these sorts of things, that's gonna be a mark in
your favor at a company like Palantir. How to foment
that at other institutions, I don't have a great
response to. Other than that, the computer
science course that you're taking right now with a focus
on ethics, is, I think, critical to building
a generation of engineers who are able to go beyond
just the technical dimensions of their trade.
>> Who wants to help us choose qualified politicians?
>> [LAUGH] >> It's too hard. >> [LAUGH] >> There you go. I mean, I was gonna just talk about,
in the context of government, hiring. I mean, I, it, it is,
you know, with respect to the hiring processes that
I've been involved in or associated with, I think,
I've certainly seen effort in trying to find people who will
be able to think independently about complex issues overall. I think the challenge of
trying to get government to think deeply about
the ethics of technology and kind of create a space for
that conversation, is the part that's been more challenging
in a way. And you know, and in some respects, so, you
know, during the time I was in government, at least,
and I know this continues, there has been an increasing
effort to bring technologists at different
levels into government. And to allow them to be part
of the conversation more effectively throughout
agencies and departments. But
also just to come in and come out of government more effectively.
It has, you know, I'd say the success of that
has been pretty modest, right? Which is to say like,
there's been more, but, you know,
there was almost none before. And we're still having to move
significantly into that area. That still doesn't
answer the question. In other words, that, that did
change the conversation a bit, but it didn't
revolutionize it. And I think this
is a constant effort that we're gonna have to
continue to engage in. And, frankly, it's not just
about technology. There's so many different areas of
expertise, whether it's, you know, climate change in
the environment, or you know, it's about certain aspects of
the economy, or different, whole, different areas of
expertise. Where increasingly we see the interconnectedness
between them and, you know, national security and foreign
policy in all of these places where there was sort of
a certain set of actors that were perceived as core. And
now there's a need to break in new areas of expertise and
knowledge and thinking. And ethics,
you know, I think is an area again where
it, it tends to get stovepiped
into specific places. And it doesn't get spread
across the enterprise in the most effective way.
And I think that's just something we're gonna have
to continue to struggle with. And I fully endorse Courtney's
point that having classes like this, and other places
where we do start to try to do that, is absolutely critical
to seeing that happen, both in government and in
the private sector, you know, in different ways.
Cuz I think they also have that challenge.
>> Let me take one last question
from the audience and then we're gonna wrap things up.
>> This is, is a question for Courtney. You mentioned a red
line that you drew with a commercial client
with tobacco companies. I'm curious if you can give a
specific example of a red line you've drawn with
the government, a potential government client.
And then explain how you reached that decision,
and how it violated your company's values.
>> So there's been many
cases where we've, we've made conscious decisions
not to work with both foreign governments and agencies
within the US government. One example that often comes
up in, in conversations with, with communities is
our work with ICE. So it's well-publicized that,
that we work with ICE. We have worked with ICE for,
for several years, going back to, to the Obama
administration. And we've been criticized for,
for that work. But much of the criticism around that work
has not addressed some of the, the nuance about how ICE
operates specifically with respect to having two separate
subdirectorates. One being what's called Homeland
Security Investigations which is focused on
transnational and criminal investigative workflows. So
things like weapons smuggling, drug trafficking,
human trafficking, usually multi-year,
large-scale investigations. The other division of ICE is
focused on enforcement and removal operations. It's the
title of the subdirectorate. It's ERO, Enforcement and
Removal Operations. That's the division of ICE that's
largely been responsible for carrying out deportation under
the current administration. So our work has been squarely
aligned with HSI, Homeland Security
Investigations. Those are, that's the part of
the institution that, that we work with and enable,
and there are administrative controls that separate how
those two divisions operate. But we, we've made
a conscious decision, especially in the wake of some of the, the past
couple of years of executive orders from the current
administration that we would not engage with the, the other
side of the house within ICE. For, for reasons
of concern around how some of the policy was developing,
and what that would entail in terms of enforcement
prioritizations by, by that, that subdirectorate
within ICE. So that's one example, and
there's many other examples where we made decisions not to
work with certain agencies, or to descope potential work.
>> So Rob had offered us this wonderful vision of ditching
the entire system and living by ourselves off
the grid with clean hands. And as appealing as that
vision might be, I hope you'll join me in thanking the three
of our guests for not ditching the system, for thinking
hard about these issues and joining us.
>> [APPLAUSE] >> And we will look forward to seeing you at our next
discussion where we'll be focused on the power of
private platforms and the implication of that for
our public deliberation and debate.
>> Mm-hm, great.