And now for something
completely different. As a financial economist,
I study financial markets and risk. And if you've been watching
financial markets over the last few weeks, you've probably
heard about the so-called fear index-- the VIX. This is an index that measures
a forward-looking perspective on stock market volatility. And it's actually based
on research that was done at MIT many years ago-- the Black-Scholes-Merton option
pricing formula, and the idea of inverting that formula to calculate what the
market thinks volatility will be going forward. That work was done
by our former dean at Sloan, Dick Schmalensee,
with a student, Robert Trippi.
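As a rough sketch of the inversion being described-- and only a sketch; the VIX as published today actually uses a model-free formula averaged over many option strikes rather than a single Black-Scholes inversion-- backing volatility out of an observed option price looks something like this in Python, with made-up inputs:

# Minimal sketch: recover the volatility implied by an observed option
# price by numerically inverting the Black-Scholes call formula.
from math import log, sqrt, exp
from scipy.stats import norm
from scipy.optimize import brentq

def bs_call_price(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

def implied_vol(market_price, S, K, T, r):
    """Find the sigma that reproduces the observed market price."""
    return brentq(lambda sig: bs_call_price(S, K, T, r, sig) - market_price,
                  1e-6, 5.0)

# Hypothetical 30-day at-the-money S&P option quoted at $45:
print(implied_vol(market_price=45.0, S=2700.0, K=2700.0, T=30 / 365, r=0.02))
# roughly 0.15, i.e. the market pricing in about 15% annualized volatility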
And if you've been watching the VIX, you'll notice something
rather strange over the last few weeks. In particular, you'll notice
that, for most of 2017, the VIX was somewhere
around 10%-- 10% market volatility. And then, over the course of
the first few days of February, it shot up to about 37-- a really striking phenomenon
that scared many, many people. And if you go back and look
at what happened to the VIX, not just over the last year,
but over the last decade since the financial crisis,
you see something much more reminiscent of fear at the very
heart of financial markets. In fact, for you Lord
of the Rings fans, this should look a little
bit like the landscape for the land of Mordor. All we're missing is
the Eye of Sauron. And investors react to
fear in the obvious way. If you look at the S&P 500, a
measure of the stock market, you'll notice that,
at the very beginning of this period of
the financial crisis, the stock market
dropped dramatically in response and in tandem
with this increase in the fear index. In particular, if you were
holding equities in the stock market early in 2007, you would
have been right around 1,500 to 1,600 on the S&P. And within
a matter of a few months, your 401K would have
become, roughly, a 201K. You would have lost
half your wealth. And of course, investors
reacted as we expect they might. They freaked out. And they pulled money
out of the stock market, missing the rebound that took
about four years to get you back to where you were. And investors took more
money out and, ultimately, missed the great bull market
that occurred since then. This roller coaster
ride is a problem that investors, financial
economists, practitioners have been working on
for many, many years. And so we've been
focusing on applying artificial intelligence
methods to try to solve this problem about what
an investor is supposed to do. So to begin, we have to
ask the question, what do investors want? And I'm going to ask all
of you to think about that in the very specific context
of four financial investments that I'm going to show you. I'm not going to tell
you what they are or over what time
period they span. All I'm going to
do is to show you what happens to a $1 investment
during this investment period and ask you to pick one
of these four investments. The green line is a
very safe investment. It turns $1 into $2 over this
unspecified investment period. The red line is quite
a bit more risky. It turns $1 into about $5. The blue line is even more
risky, but it's more rewarding. And the yellow line is
somewhere in the middle. And if you could pick one, and
only one, of these investments for your retirement or for
your kid's college education or for your grandparents'
funds, which would you choose? By a show of hands,
how many people would pick the green line? Nobody? Wow. OK, a couple of people. How about the red line? Anybody take the red line? Wow. I want you to
remember this moment. Because after I tell you
what that is, most of you are going to have
some rethinking to do. How about the blue line? Anybody want it? There are the venture
capitalists and the hedge fund managers. [LAUGHTER] And now, the yellow
line-- how many people-- yeah-- by far the most
popular, because it seems to have the best trade-off
between risk and reward. Well, let me tell
you what you picked. First of all, the time
period is from 1990 to 2008. The green line is
US Treasury bills, the safest asset in the world-- not very interesting from
a return perspective. And if you had put your money in it,
by 2008 you would not have done particularly well, but
you wouldn't have lost much. The red line that most
of you did not pick, well, that's the S&P 500. Most of you already have
that in your portfolio, so you better
rethink that decision given what you just said. If you had put your money
in the S&P, by 2008 you would have done just fine. You would have done quite well. The blue line is the single
pharmaceutical company Pfizer-- much more volatile,
much more risky, but, also, quite a
bit more rewarding, and you would have
done well as well. What about the yellow line--
the one that most of you did pick-- the optimal trade-off
between risk and reward? Well, the yellow line is the
returns to the Fairfield Sentry Fund, which is a
private fund that was the feeder fund for the
Bernie Madoff Ponzi scheme. [LAUGHTER] That's why I had
to stop it at 2008. Now you know how the Ponzi
scheme got as big as it did. It is absolutely
innate human nature for us to be drawn
to investments that are high-yielding
and low-risk. In the finance parlance, we
call those high Sharpe ratio investments.
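To make that concrete-- this is an aside, not part of the talk-- the Sharpe ratio is just the return earned above the risk-free rate divided by the volatility endured to earn it. A back-of-the-envelope version for the four lines, where the $2 and roughly $5 growth multiples come from the talk and the other multiples and all of the volatility figures are illustrative guesses rather than data, shows why a Ponzi scheme that fabricates smooth returns scores so well:

# Back-of-the-envelope annualized returns and Sharpe ratios for the four
# investments. Only the $2 and ~$5 multiples come from the talk; the other
# multiples and every volatility figure below are illustrative guesses.
def annualized_return(growth_multiple, years):
    return growth_multiple ** (1.0 / years) - 1.0

def sharpe_ratio(annual_return, annual_vol, risk_free=0.04):
    # Excess return per unit of volatility taken on.
    return (annual_return - risk_free) / annual_vol

years = 18  # roughly 1990 through 2008
investments = [
    # (name, what $1 grows into, assumed annual volatility)
    ("T-bills (green)",            2.0, 0.01),  # essentially the risk-free benchmark
    ("S&P 500 (red)",              5.0, 0.15),
    ("Pfizer (blue)",              8.0, 0.30),
    ("Fairfield Sentry (yellow)",  4.0, 0.03),  # *reported* figures, which were fiction
]
for name, multiple, vol in investments:
    r = annualized_return(multiple, years)
    print(f"{name:28s} return {r:5.1%}   Sharpe {sharpe_ratio(r, vol):5.2f}")

A tiny (and, in the yellow line's case, fabricated) volatility in the denominator is exactly what makes that trade-off look too good to pass up.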
And we do this, sometimes, to our great detriment. So that's what investors want. What do investors need? Well, it turns out
that technology has played a role in what
we offer to investors. A great revolution
occurred in the 1970s with the advent of index funds. All sorts of indexes now
exist that allow investors to put money in various
different assets at relatively low cost
to be able to capture the broad returns of
the market portfolio. But over the course of
the last few years-- particularly, the
last few weeks-- we understand that
that's not enough. The future of
investment technology, thanks to AI and other
forms of innovation, has given us the
possibility to create what I call precision indexes,
sort of like the personalized medicine that you
hear about nowadays, being able to tailor
a particular treatment to an individual. So instead of the Dow Jones 30
or the FTSE 100 or the S&P 500, imagine creating the Rafael
Reif 30 or the Rebecca Saxe 100 or the Daniela Rus 500. And imagine using technology
to tailor these indexes so that they take into account
things like your tax bracket, your income level, your
health, your age, your family-- all the various different
hopes and dreams that you want to accomplish
over the course of your life. And now imagine if you
can automate all of that, stick it into a black
box, and put it on an app. Well, that's fantastic. But it doesn't exist. And the question is, why not? What's missing? It turns out that it's not
artificial intelligence. We've got plenty of AI
to be able to do this. What's missing, in my view,
is artificial stupidity. We need to be able to
model algorithmically how investors actually
behave, as opposed to how we think
they should behave. And I think to call it
artificial stupidity is a little bit unkind. I think it's really
based on human nature. We're reacting to threats-- fear and greed. And so what we
really need to do is to develop artificial humanity. And it turns out that the
recent breakthroughs in AI have given us a hint on
how to go about doing that. So let me give you one
example of something that I suspect all of you
have been involved in. A few years ago,
I got interested in the biomedical field. And so I decided to purchase a
book on the biotech industry. And the best book
that I knew of, based on recommendations
from friends and family, was a book about Genentech,
one of the most successful biotech companies in the
history of the industry. So I did what most
of you would do. I went to Amazon. I looked for Genentech,
and I clicked Add to My Shopping Cart. And as soon as I
did that, Amazon does this thing that I
find incredibly annoying. And you know what that is. They tell me, well, people
who bought your book bought these other five. And sure enough, I had to
have two more of those books. [LAUGHTER] It's really nasty,
nasty technology. This is part of the new AI. What Amazon does is
something devilishly simple. They simply take a look at their
database of all the individuals who purchased this
book on Genentech, and maybe they do something
even more sophisticated by stratifying based
upon demographics, comparing people
with my demographic, and then showing me books
that they bought. The algorithm is really simple. But the use of data is enormous.
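In the same spirit-- a toy sketch, not Amazon's actual system, and skipping the demographic stratification just mentioned-- the co-purchase idea fits in a few lines of Python: count which other items show up most often in the baskets of people who bought the same book.

# Toy "people who bought this also bought" recommender: count co-purchases.
from collections import Counter

purchases = {  # hypothetical customers and the books they bought
    "alice": {"genentech", "billion_dollar_molecule", "emperor_of_all_maladies"},
    "bob":   {"genentech", "billion_dollar_molecule"},
    "carol": {"genentech", "the_gene"},
    "dave":  {"the_gene", "bad_blood"},
}

def also_bought(target, purchases, top_n=5):
    counts = Counter()
    for basket in purchases.values():
        if target in basket:
            counts.update(basket - {target})  # everything bought alongside the target
    return counts.most_common(top_n)

print(also_bought("genentech", purchases))
# "billion_dollar_molecule" comes out on top, with two co-purchases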
And that's actually a very different way of thinking about AI than we
did in the 1970s and '80s. Because in the early days of
AI, while we had expert systems, we had incredibly
complicated algorithms and virtually no data. Because back then, storing
data was a lot more expensive than it is today. And so the idea of
focusing on using data and detecting patterns using
relatively simple algorithms versus trying to figure out
every possible use case you would encounter in an
expert system, that's really what we do as humans. So the current approach
to AI is much closer to human intelligence. And I want to give you
an example of that. Because it's something that's
really innate to us and makes us make those decisions
that we will later regret. The example has to do with
something that all of us can do instinctively, which
is threat identification-- friend or foe. I'm going to give you an
example that comes from a scene that I suspect many of
you have participated in, which is a cocktail party. You're at a cocktail party. You're meeting lots of people. And you're trying to figure out
who's a friend and who's a foe. And so over the course of
the evening's conversation, you will talk about various
different kinds of things and learn things about the other
participants at this event. For example, you'll learn about
an individual's gender, perhaps their sexual orientation. And if you think that there
are two major genders and two major sexual
orientations, that's four possible identities
for that individual in that category. You might find out about
their race, ethnicity, their age group,
educational background, and so on and so forth. So over the course
of the evening, you'll learn various
things about the individual and put them into
various buckets. So I want to tell you about
two particular individuals that you might encounter
at such a cocktail party. And then I'm going to
ask you to make decisions about these individuals. So I want to introduce
you to Jose and Susan. Jose is a gay Latino male. He's a young professional
from California-- no religious affiliation,
Democrat, middle class, with an MBA. That's Jose. Susan, on the other hand, is a
middle-aged heterosexual white female from Texas-- Christian, Republican, affluent,
and with a bachelor's-- no MBA. And so now that I've introduced
you to Jose and Susan, I'm going to ask you three
questions about them. And just tell me what you
think in terms of how you would make the following decision. Imagine you're doing a startup,
and you need to hire somebody to help you with that startup. Who would you rather hire-- Jose or Susan? How many people would
hire Jose for the startup? OK, how about Susan? All right. Most of you would hire
Jose for that startup. Fine. Second question--
you are organizing a fundraiser for
breast cancer, and you need to hire somebody to help
you plan that fundraiser. Who would you hire-- Jose or Susan? How many people would hire Jose? OK, how many people
would hire Susan? OK, most of you would say Susan. Fine. Third question-- you're
an auditor at the IRS, and you're looking to try
to find who's cheating on his or her tax returns. But you can only audit one
of these two individuals. Who would you audit-- Jose or Susan? How many people
would audit Jose? OK, how about Susan? Most of you picked Susan. Wow. That's amazing. I can't believe how
judgmental you people are. [LAUGHTER] Now, I know I asked you. I was the one who asked you. But you didn't hesitate
to make a decision. And it's because
all of us are wired to make these snap judgments. From an evolutionary
perspective, that's what's kept us around
for the last 100,000 years. It's part of our human
cognitive faculties to make quick decisions. And we do it the
way Amazon does it. This is machine
learning via humans. What we're doing is looking back
in our database of all sorts of experiences we've had
in doing cancer fundraisers or in doing startups
and asking the question, the people that were
successful in those roles, did they look more like
Jose or more like Susan?
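That mental lookup can be written down as a toy calculation-- every record below is invented, and this is only meant to show the mechanism: keep the remembered examples that resemble the person in front of you, then compare success rates.

# Toy version of the snap judgment: among remembered people who share
# enough traits with the candidate, what fraction worked out well?
past_hires = [
    # (attributes, succeeded_in_the_role)
    ({"age": "young",  "education": "MBA", "region": "CA"}, True),
    ({"age": "young",  "education": "MBA", "region": "NY"}, True),
    ({"age": "middle", "education": "BA",  "region": "TX"}, False),
    ({"age": "middle", "education": "BA",  "region": "TX"}, True),
]

def success_rate(candidate, memory, min_shared_traits=2):
    similar = [ok for attrs, ok in memory
               if sum(attrs.get(k) == v for k, v in candidate.items()) >= min_shared_traits]
    return sum(similar) / len(similar) if similar else None

jose  = {"age": "young",  "education": "MBA", "region": "CA"}
susan = {"age": "middle", "education": "BA",  "region": "TX"}
print(success_rate(jose, past_hires), success_rate(susan, past_hires))
# 1.0 versus 0.5 -- a confident-looking verdict built on four remembered cases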
In fact, if you go through the different characteristics that I listed on this
page and you calculated the number of different
personality types that you would be
able to come to, it turns out that there are
about 350,000 unique categories if you just do
the combinatorics. That's more pixels than in
a 640 by 480 photograph.
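The talk doesn't spell out which attribute counts produce that total (the exact figure quoted a moment later is 345,600), so the breakdown below is an assumed one, chosen only so that the product comes out right; the point is simply how quickly a handful of categorical attributes multiplies out.

# One *assumed* breakdown of attributes and the number of values each can
# take; the talk quotes roughly 345,600 categories without giving the counts.
from math import prod

attribute_values = {
    "gender": 2,
    "sexual_orientation": 2,
    "race": 4,
    "ethnicity": 4,
    "age_group": 4,
    "religion": 3,
    "education": 5,
    "income": 5,
    "region": 6,
    "political_affiliation": 3,
}

print(prod(attribute_values.values()))  # 345600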
The problem, though, is that our data set is very, very sparse. Unlike Amazon's data set
of people who bought books on Genentech, how
many people here have met more than 345,600
people in their lifetimes? Show of hands. I actually met a marketing
person who said yes, they did. [LAUGHTER] So most of our data is empty. We don't have observations
on a lot of these things. And by the way, this is part
of the problem with fake news. It doesn't take a lot for
me to change the entries in your very sparse matrix
of data that can completely change how you behave. And this is the challenge with
financial decision making. We have very sparse data about
experiences of bull and bear markets. And we're influenced
by very small things, like stories about somebody
who lost all their money because they invested in the
wrong stock or somebody who made a ton of money
because they happened to pick the right stock
at the right time. And so what we're
doing in the Laboratory for Financial Engineering is to
try to come up with algorithms using large data sets that we've
obtained from brokerage firms-- anonymized data sets of
individual household accounts-- using machine
learning to understand how people make mistakes,
how they freak out at the wrong times,
and what kinds of financial strategies
and products and services can actually help them
make better decisions, so that, ultimately,
we are going to be able to have the
algorithms to create precision indexes. Thank you. [APPLAUSE]