[MUSIC PLAYING] SHOUCHENG ZHANG:
Thank you so much. It's a great pleasure for
me to come here to Google, but also a special privilege to
be introduced by [INAUDIBLE]. We have also been constantly exchanging ideas. And today, I'd like to talk to you about what I view as the three frontiers
of information technology for the future-- quantum computing, artificial
intelligence, and blockchain-- but especially also
the possible symbiosis among these three major trends. I think in these
days in the world, there are many experts in
each one of those subjects. But I think the really exciting opportunity is possibly the convergence, or the symbiosis, among these three major trends of the future of
the information technology. Let me start with a story of
a recent scientific discovery, a recent discovery, but
it had a long history. A lot of great discoveries in science also relate to some deep changes in philosophy. We seem to live in a world of
opposites, a world of dualism. Whenever we have
positive numbers, we have negative numbers. When we have credits,
we have debts. We have yin and yang, good
and evil, angels and demons. But in the natural
world, there's also a counterpart to this
philosophy of the opposites or the duality. So in 1928, perhaps one of the
greatest theoretical physicists of all time, Paul
Dirac, was trying to unify Einstein's theory
of special relativity with quantum mechanics. And in the process of doing so--
he was doing some mathematical derivations-- he had to encounter an
operation of square root. And then he remembered
from his high school days that the square root
of 9 is not just 3-- because 3 times 3 is
9, but also minus 3, because minus 3 times
minus 3 is also 9. So whenever you take
a root, you have to take both the positive
and the negative root. At that time, it was
very perplexing what that negative root means. And he, in one stroke of genius, predicted that, for
every matter in the world, there's the opposite
matter or the antimatter. And when you visit Westminster Abbey, you can try to find the plaque commemorating the famous Dirac equation.
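The equation on that plaque, in the compact form it is usually written (standard notation, my addition rather than something stated in the talk):

```latex
% The Dirac equation, as engraved on the Westminster Abbey plaque:
i\,\gamma^{\mu}\partial_{\mu}\,\psi = m\,\psi
```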
And in 2012, one of the most humbling experiences of my life was to receive the Paul Dirac medal. So just as I said, whenever
you take the square root, you have a positive branch
and a negative branch. And he preemptively
interpreted the negative branch to be a universal law of
nature, that for every particle there is in the universe,
there's also an antiparticle. Everybody viewed this as a beautiful equation, except at the time, in 1928, when he made this prediction, there was simply no known antimatter. So for example, the
antimatter of the electron will be something that
has a positive charge, but the same mass. The proton has the opposite charge to the electron, but about 2,000 times the mass of the electron. So nobody believed him. Then you know what he said? He said, my equation
is so beautiful, you guys simply
just go look for it. And people did. And he was lucky. And five years later, in
cosmic ray radiation-- it's very hard to naturally
produce that on Earth-- but in the cosmic
ray radiation, people discovered antimatter,
namely the positron, which has exactly the same
mass, but the opposite charge of the electron. So I think this is one of the greatest predictions in all of humanity, that
something conceived of beauty also turned out to be true. Today, we actually use this
antimatter in medical devices. A famous medical
imaging technique called PET scan, Positron
Emission Tomography, was actually based on this
antiparticle, the positron. It also captured the
imagination of Hollywood. So there's the famous novel
and the movie of "The Da Vinci Code". Many of you have read the
book and saw the movie, but there's also a
sequel to it called, "Angels And Demons," also
the book by Dan Brown, but also played by Tom Hanks. Basically, the novel
depicts the epic struggle between angels and demons,
culminating in the inhalation of particles and antiparticles. So actually, it's the
highest information density one can possibly achieve
anywhere in the universe. If you have antimatter with
matter, the energy they release is the most powerful
there can ever be. But it's also a fun analogy. Just as we have
angel, we have demon. Whenever we have
positive particle, we have the opposite
antiparticle. But human curiosity
didn't stop there. So after Dirac's
prediction, viewed as one of the greatest
predictions of all time, curiosity didn't stop there. So there was another great
theoretical physicist, but somewhat elusive during his
time, named Ettore Majorana. And he asked a curious question. Could there be matter which doesn't have antimatter-- a particle which has no antiparticle, because it is its own antiparticle? Is that possible? So he asked this
question, and he also wrote down a beautiful
equation which described it. But this time, he
was not so lucky. Nobody believed him
and nobody found it. So he actually got very
disappointed about that. So ever since then, it became a
mystery in fundamental science. So we have, in fundamental
science, a "most wanted" list. For example, the
list included what is called a God
Particle, or Higgs Boson. But in 2012, it was
discovered in CERN, in the laboratory in Geneva. There's also the
gravitational wave. Einstein was less
lucky than Dirac. Dirac, his prediction only
took five years to be experimentally confirmed. But Einstein's prediction of gravitational waves took about 100 years. Only two years ago were they discovered, whereas Einstein predicted them 100 years ago. So this is such a list. And there's also something called
the dark matter particle, which we still try to find. But also very much on
the top of the list is this very interesting concept
of Majorana fermion, which is a particle which does
not have an antiparticle, or is its own antiparticle. But it's more mysterious. Among all those on the most wanted list, maybe the Majorana fermion is the most mysterious. Because not only had the Majorana fermion not been found-- like I said, he was very disappointed when nobody believed
his prediction. And he was Italian. And he boarded a ferry
from Naples to Palermo, but he never reappeared
from that ferry ride. So he became a
deep, deep mystery. And this year is exactly the 80th year since his disappearance. But we also have some
good news to report. Even though he himself
was never found, his particle now has been found. And that's the highlight
of my talk today. He simply wrote down the equation, but he didn't tell people where to find it-- that's why it took 80 years. Nobody knew where to find it. But my theory group at Stanford
predicted where and how to find this
mysterious particle. Between 2010 and 2015, our theory group wrote three theoretical papers-- the first one exactly to predict where. Quite surprisingly, this particle is not to be found in some huge accelerator; it could be found in a tabletop kind of experiment, very much like the semiconductor devices people usually use. It's a material called a topological insulator-- Diane already mentioned it in the introduction-- something I discovered 10 years ago. The topological insulator can be something like bismuth telluride, and into it you put some magnetic dopants, which could be chromium. And then on top of it, you
apply a superconductor. So we predict that,
in this system, you can find these
mysterious Majorana fermions. But that's not good enough. Not only do you have to predict where to find it, but also what to measure in order to find it. And there, I think common
sense can even guide us. So somehow, the regular particle
is like two sides of a coin. Whenever you have the upside,
you have the downside. Whenever you have a
positive particle, you have the antiparticle
associated with it. But this Majorana
particle is only one side. It is only a particle,
but no antiparticle. So in some vague sense, it
is half of a usual particle. So this concept of a half
will be very, very important in the later part of my talk
about quantum computers. So somehow, this
Majorana particle is half of a regular particle. Now, regular particles show a phenomenon in their conductance: the resistance or conductance we usually measure can be quantized in integer units-- 0, 1, 2, 3, and so on. So they behave like integers in quantization steps. We once had a eureka moment: if the Majorana particle is in some sense half of a regular particle, then it should display plateaus at half-integer steps-- namely, at 1/2, 3/2, and so on and so forth. So that became our prediction: in this system, which you can experimentally construct, what you should measure is this 1/2 step.
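To put the prediction in standard units (the e²/h units are my addition; the talk itself only says "integer" and "half"):

```latex
% Ordinary quantized conductance plateaus come in integer steps:
\sigma = n \, \frac{e^2}{h}, \qquad n = 0, 1, 2, \dots
% A Majorana edge mode is half of a regular fermionic edge mode,
% so it should produce a half-quantized plateau:
\sigma = \frac{1}{2} \, \frac{e^2}{h}
```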
And last year, in close collaboration with experimental colleagues at UCLA, UC Davis, and UC Irvine, they constructed exactly this system as we theoretically proposed. And they performed the
measurement exactly according to our
theoretical prediction. And lo and behold, besides the integer step at 1 and something at 0, you see there's a step at 1/2. And this 1/2 is the crucial idea: whereas a regular particle displays integer quantized steps, a Majorana particle, being half of a regular particle, should give you a half-quantized step. So that is really a smoking gun. And it was celebrated last year with a publication in Science magazine. So in that very
exciting moment, I remembered the famous
novel and the famous movie I saw about angels and demons. And I proclaimed that it's as
if we discovered a paradise with only angels and no demons. So I call this the
Angel Particle. So now what is it good for? So today, classical
computers are already very, very powerful. But they are good at
doing some things, and not good at doing
some other things. So if I give you two very large numbers and ask the computer to multiply them, it does this in a split second. On Google Cloud, it may take a nanosecond. But if you give it a number
and ask the computer whether that number factorizes into two other numbers-- to give an example, 15 is equal to 3 times 5, but 11 cannot be factorized as a product of two smaller numbers; the only thing you can do is to say 11 is 1 times 11, which doesn't mean very much. If I give you a very, very large number and ask whether it can be expressed, just like 15, as the product of two other numbers, or whether it is more like 11, which cannot, the classical computer will have a very, very hard time answering. The only way it can do it is an exhaustive search. It tries to divide this very large number first by 2, then by 3, then by 5, by 7, and so on and so forth. And it takes forever to do this exhaustive search.
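As a minimal sketch of what this exhaustive search looks like (an illustration in Python, not anything from the talk itself):

```python
# Trial division: the brute-force factoring described above.
# The loop runs up to sqrt(n), so the time grows exponentially
# in the number of digits of n -- hopeless for very large numbers.

def trial_division(n: int):
    """Return a nontrivial factor of n, or None if n is prime."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d      # e.g. 3 for n = 15, since 15 = 3 * 5
        d += 1
    return None           # e.g. 11: only 1 * 11, so it is prime

print(trial_division(15))  # 3
print(trial_division(11))  # None
```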
So when you think about many of the most important computational problems-- what we would like our computers to do with Google Cloud, with all the data-- what we would like to do is to find some optimal solution. And when we try to find an optimal solution, we basically have to enumerate all possibilities and compute all of them-- maybe there's some optimization function associated with it, and you try to find the shortest path, or the biggest profit, or something like that-- but you also have to do an exhaustive search. And that takes a very, very long time. So that's why computers still have a long way to advance. But then enter
the quantum world. What is the mysterious
nature of the quantum world? So if I have two slits
and I use a classical gun to randomly shoot through these
two slits, then obviously, a bullet either at one given
time goes through the right, or it goes through the left. And on the back of it,
you will see two blobs-- one coming from the right, and
the other coming from the left. But not so if you try to
shoot an elementary particle through the double slits. On the back screen, you don't see two blobs, one associated with the right and one associated with the left. You actually observe a rather
intricate interference pattern. And that pattern can only be explained if the particle went through both slits at exactly the same time-- through both the right and the left. If it didn't do so, and if you knew which way it went, it wouldn't lead to this intricate interference pattern. So somehow, the mysterious quantum world is parallel. At one given time, a particle is both going through the right and going through the left.
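In standard quantum mechanics notation (textbook material, my addition rather than the talk's own formula), the interference comes from adding amplitudes rather than probabilities:

```latex
% Amplitudes for the two paths add coherently:
P(x) = \left| \psi_L(x) + \psi_R(x) \right|^2
     = |\psi_L(x)|^2 + |\psi_R(x)|^2
       + 2\,\mathrm{Re}\!\left[ \psi_L^*(x)\,\psi_R(x) \right]
% The cross term is the interference pattern; it disappears
% the moment you know which slit the particle took.
```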
And then people started thinking that this very difficult problem the classical computer struggles with-- namely, serially going through an exhaustive search of all possibilities-- maybe it can be done by a quantum computer, which is intrinsically parallel. Basically, it can then search through all these possibilities at exactly the same time and give you one result in one step of computation. So that would truly,
truly be wonderful and will increase our
computational power in such a tremendous way. But in order to construct
such a quantum computer, you first need to have the
basic elementary units, which will be called a
quantum bit or a qubit. So a classical bit, as you have
on your classical computer, is either 0 or 1. But just like a quantum mechanical particle can go through double slits at the same time, a quantum bit, a qubit, is somehow a linear superposition between 0 and 1. It's neither exactly 0 nor exactly 1. Somehow it lives in this mysterious superposition state between 0 and 1.
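In the standard notation (textbook quantum mechanics, not specific to this talk), a qubit state is written:

```latex
% A qubit is a normalized superposition of the two basis states:
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1
% Measuring collapses it: outcome 0 with probability |\alpha|^2,
% outcome 1 with probability |\beta|^2.
```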
So in order to build a quantum computer, you necessarily have to construct such an elementary qubit, a quantum bit. But being quantum mechanical, it's also very, very fragile. If you are very curious to see-- well, is it really 0? Is it really 1?-- and you try to observe it, it immediately collapses to a classical 0 or 1. And you lose this
mysterious quantum concept. So therefore, in all or most of the approaches that have been proposed to construct a quantum computer, there are lots and lots of errors. The qubit is very, very fragile and very unstable, and it very easily collapses into a classical bit. So there's a daunting overhead: for one useful logical qubit, you have to use 10 or even perhaps 100 error-correcting qubits. And that's obviously very, very difficult to scale. And that's why we don't yet have a truly functional quantum computer which can
factorize a very big number. Now back to my scientific discovery. We discovered this mysterious but very interesting Angel Particle, which is half of a regular particle. This is a slightly complicated scientific diagram, but somehow, when you enter with one qubit, which is a regular particle, it can be immediately split into two of these Majorana fermions, these Angel Particles. Each is half: you'd think one qubit is already the minimal thing you can have, but one qubit is now stored in two Angel Particles-- one qubit entering here is partially stored here and partially stored there. Then a local perturbation finds it very hard to destroy the global state-- these two Angel Particles together function as one qubit, so it's very hard for a local perturbation to destroy the qubit. And therefore, it's a very, very robust way of doing computation.
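In the standard formalism (a textbook relation, my addition rather than the talk's own equation), one ordinary fermion is built from two Majorana operators, which is exactly the sense in which the qubit is stored non-locally in two halves:

```latex
% A Majorana operator is its own antiparticle (self-conjugate):
\gamma_i = \gamma_i^{\dagger}, \qquad \{\gamma_i, \gamma_j\} = 2\,\delta_{ij}
% One ordinary (Dirac) fermion mode is assembled from a pair,
% so the qubit's information is shared between the two halves:
c = \tfrac{1}{2}\,(\gamma_1 + i\,\gamma_2), \qquad
c^{\dagger} = \tfrac{1}{2}\,(\gamma_1 - i\,\gamma_2)
```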
And in fact, in this experimental measurement, what is happening is that these Angel Particles are braiding with each other. If you have some lines and you try to braid them, that is kind of a digital operation: either you braided them or you didn't. Whereas in most other approaches to quantum computing, it's almost an analog computation-- you can very easily make little errors. But if you do what is called a topological operation of braiding, then it's actually very, very robust. So in our approach, one qubit is just one qubit. You don't need error
correcting qubits. Our discovery is still kind of a new approach, so it's just coming up. But compared to other
approaches, which may already have many, many qubits,
but a lot of them are serving as error correcting
qubits to one useful qubit, I believe our approach
will eventually scale up much, much faster,
because it's one-to-one. So this is the first part of
my talk, about quantum computing. But now, let me switch to the
second part of my talk, which is about artificial
intelligence. When we look at human history, it took a very long time for the most intelligent species to develop on Earth-- maybe three million years of evolution. But finally, we became the dominant species. Now, though, we're actually faced with a challenge: a more intelligent species, namely AI, could soon be emerging. AI has been developing maybe since the '60s. So why do we suddenly have such a rapid increase in the progress of AI? This is basically due to the convergence of three major trends in computation. One is Moore's Law. Moore's Law basically
is about computational power. So the computational power
doubles every 18 months, according to the progress
of Moore's Law. Now, Moore's Law is facing some challenges-- that's the bad news. But the good news is that maybe we have something so much more powerful than Moore's Law predicts. Moore's Law has been a
quantitative, incremental increase, even though
it's very fast. But quantum computer
can be one quantum jump in the computational
power because of this massive
parallelism associated with quantum computing. So on the horizon, in terms
of computational power, we see those challenges to
the classical Moore's Law as the device gets
smaller and smaller, but we also see tremendous hope. Maybe quantum computers can arrive in time. And when you try to search in an optimization problem, you can do the search in one step, rather than an exhaustive search in a serial fashion. So this is something
on the horizon that could really fundamentally
be a game changer. But the other reason why
artificial intelligence today is exploding is because, with
the arrival of the internet and the internet of
things, it provided massive amounts of data. And machines need to learn. And they learn
only from big data. The other reason is the rapid progress of AI algorithms-- for example, deep neural nets, which provide the main engine behind this rapid growth. So in the field of AI, we
always ask this question: when will AI someday surpass humans? And what is the objective test? We were all totally amazed to see the progress Google made, announced two years ago, with DeepMind's AlphaGo, which beat a top human player at the game of Go. And I was very fortunate that our son, Brian, was also working at DeepMind on these kinds of projects at that time. So when we ask this
question-- so I'd like to revisit a question
that we're always asking, namely the so-called
Turing Test. What is the objective test that AI has really matched the human mind? Turing proposed the following test a long time ago. He says: suppose we have a human having a conversation with something behind a curtain-- either another human or an AI machine. If you talk for one long day and afterwards cannot tell whether there's a human behind it or a machine behind it, that may be the day when AI really reached true human intelligence. But I think it's not
an objective test. First of all, the human brain took a long, long time to evolve, and it has a lot of irrational, emotional components which maybe cannot be so easily imitated by the machine. Maybe it's also totally unnecessary for the machine to imitate every possible human irrationality. If one strategy is to talk to the machine in a totally irrational way, a rational machine will find it very hard to fool the human into believing it's actually a human. But then what
about Google DeepMind's success with AlphaGo, which looks a bit more objective? Still, Go is a game invented by humans. So why should an intelligence test be based on a game invented by humans? What would be the most
objective test that AI really reached human intelligence? So I'd like to have a
proposal which could possibly replace the Turing Test. I ask to play a game of nature: namely, ask the machine to make a scientific discovery before the humans do-- such as the prediction of Majorana fermions or gravitational waves, some of the greatest predictions of the human scientific mind-- and see if the machine can make a prediction before the humans do. Then we do an objective experiment and verify the prediction. That, we can say, is the day when machine intelligence surpassed human intelligence. So can we see whether
this is possible or not? So I am usually a
theoretical physicist, but I, for the first
time, wrote a paper on AI, which will
soon be published. The basic idea: first of all, we haven't yet made the progress of having a machine make a prediction that humans have not made. But our idea is to rewind history-- to say, suppose humanity is still at a point where one great discovery hasn't yet been made, and ask whether the machine, given the same level of knowledge, can make that scientific discovery. So we know some
great predictions in theoretical physics,
such as gravitational waves, the Dirac antiparticle, and so on. But maybe the greatest scientific achievement in chemistry is Mendeleev's periodic table. So Mendeleev looked at all
the chemical compounds, and he discovered,
in a brilliant stroke of genius, the organizing
principle of the world-- namely, that all the
materials that we see can be reduced to elements. But these elements
organize themselves into a periodic table. At that time, only a limited number of elements had been discovered. And once he organized them into a periodic table, he saw some holes in the table. And he said, oh, these elements must be there-- you guys go look for them. So that was the
brilliant prediction. And I think, certainly,
I would rank this as the greatest scientific
discovery in chemistry, maybe of all humanity. So the question we like to ask ourselves is: if we rewind history to a stage where the periodic table has not yet been discovered, but we feed all the chemical compounds to a machine, would the machine be able to come up with the discovery of the periodic table? That maybe is quite
related to all the AI work that's going on at Google. And we actually call
our algorithm Atom2Vec. Once you see the name, you immediately see that there must be a lot of connection to the work you guys are doing here-- namely, Google Translate and all the natural language processing are based on an algorithm called Word2Vec, which maps words into vectorial form. And once you map words into vectorial form, the machine can understand them: the vector actually encodes some semantic meaning of the word itself. And then it can discover
certain relationships. So how does Word2Vec work? Basically, it tries to understand words in the context of the surrounding sentence. If two words often occur together, like king and queen in one sentence, the machine will understand that in the vector space they should somehow be close to each other. So our idea is to
borrow this kind of idea from the natural
language processing and try to see if it
is possible to use it to make scientific discoveries. Just like Google here feeds a whole corpus of text into a machine using Word2Vec, then discovers the meaning of the words and does translation, and so on and so forth, we feed, in a totally unsupervised way, the list of all chemical compounds to the machine, to see whether the machine can come up with the organizing principle. And lo and behold, the
machine, or the algorithm, discovered the periodic table. The periodic table can be viewed as nothing but a two-dimensional vectorial arrangement of all the elements. So if you can do something like Atom2Vec, it will similarly map each element into some vectorial form. And when you collapse these vectors to two dimensions, you exactly discover
the periodic table. For example, in a large corpus of text, whenever you see king, you see queen a lot-- they co-occur a lot. In chemistry, whenever you see NaCl, you see KCl a lot. So somehow, the machine will understand that Na and K may be very related to each other, so in the vector space they must be close to each other. By borrowing these ideas from natural language processing, in a totally unsupervised fashion, the machine actually discovered the periodic table.
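A minimal sketch of this analogy in Python, using gensim's Word2Vec (an illustration of the idea only, not the actual Atom2Vec algorithm from the paper): treat each chemical formula as a "sentence" whose "words" are element symbols, and learn element vectors from co-occurrence.

```python
# Illustration only: element embeddings from compound co-occurrence.
from gensim.models import Word2Vec

# A toy corpus: compounds written as lists of element symbols.
compounds = [
    ["Na", "Cl"], ["K", "Cl"], ["Li", "Cl"],
    ["Na", "Br"], ["K", "Br"], ["Li", "Br"],
    ["Mg", "O"], ["Ca", "O"],
]

# Learn a small vector for each element from its "contexts".
model = Word2Vec(sentences=compounds, vector_size=8,
                 window=2, min_count=1, sg=1, epochs=300)

# Elements appearing in similar compounds (Na, K, Li) end up close
# in the vector space -- echoing the columns of the periodic table.
print(model.wv.most_similar("Na"))
```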
So I think we are getting to a very, very exciting time, when one of the greatest scientific discoveries can at least be replicated by a machine without any supervision whatsoever. And once these algorithms start to work, we can use them to discover new materials, and possibly to discover new drugs, before humans do. So now let me move to the
third topic of my talk today, namely the blockchain. And maybe some of you are already wondering what AI, quantum computing, and blockchain can possibly have in common with each other. So basically, the
internet has always provided tremendous value
as a communication tool for all of us to communicate. But then, at some point,
we have to exchange values over the internet. But whenever we have to exchange
value over the internet, we have to agree on a
common standard of value. So the most important thing, when you try to move to the next stage of internet development-- possibly moving into the world of finance-- is that the key essence of finance is to have some consensus about value. The reason we used gold previously, compared to something like an apple as a medium of exchange, is that everyone can agree on what one ounce of gold actually means. We can do precise measurements to determine its content and quality. But it's very hard to do that for one apple, because there are so many different kinds of apples. So it's not suitable as a medium of exchange. Therefore, the key element
of a medium of exchange is consensus. If there is a very broad distribution of opinions about the value, then it's not suitable to use as a medium of exchange. If we all agree on the value, reaching consensus, then it is extremely valuable. So the internet taught us
one very important thing, namely to do things
in a distributed fashion. But if you have a very
distributed network, how can they possibly
agree on something? So previously in human
economy, we always thought, there has to be some
centralized entity which is trying to control
all of it and get people to agree on some values. But when you actually
observe the natural world, there is a way for the natural
world to reach consensus. So let me give you one
example from physics. Every day, when you wake up and walk towards your refrigerator to get a glass of milk or something-- people usually like to stick a magnet on their refrigerator. So how does a
magnet really work? So actually, all materials
consist of electrons. And an electron works like a compass: it has a north pole and a south pole, so each electron actually works like a tiny magnet. But most of the time, they don't agree on the direction to point to. They're all pointing in random directions, and therefore, globally, macroscopically, the material doesn't behave like a magnet. But in the magnet that sticks on your refrigerator, somehow, miraculously, a consensus has been reached. All the electrons decide to point in the same direction. And that happens without any centralized entity telling the electrons what to do. Somehow, through a mechanism, a protocol of exchange, they miraculously agree to point in one direction. So that tells
something very, very profound about
the natural world. To agree on something is what is called a low entropy state, and to be disordered is a high entropy state. The natural trend of the world is that entropy always has to increase over time-- the world always becomes more and more disordered. But somehow, in a subsystem, you can actually reach consensus and reduce entropy. Then, necessarily, there has to be a cost: you have to dump the extra entropy somewhere else. So consensus can happen in a self-organized, distributed way, but there has to be a cost associated with it-- since consensus is a state of low entropy, you have to dump the extra entropy somewhere else. That, I think, is the fundamental explanation of why blockchain works.
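In second-law bookkeeping (standard thermodynamics, my notation rather than the talk's):

```latex
% The total entropy of subsystem plus environment never decreases:
\Delta S_{\text{total}} =
  \Delta S_{\text{subsystem}} + \Delta S_{\text{environment}} \ge 0
% Consensus means \Delta S_{subsystem} < 0, which is allowed only if
% at least as much entropy is dumped into the environment:
\Delta S_{\text{environment}} \ge -\,\Delta S_{\text{subsystem}} > 0
```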
Now, a blockchain is distributed over a world of computers. And the early approach to managing a distributed system of computers was to ask whether some centralized, deterministic master algorithm is possible, which would coordinate and direct all these distributed computers, even though some of them have very long latency, a very broad distribution of latency. And some of them can even be
hacked and behave maliciously, whether there's still, in all
these circumstances, a master deterministic algorithm
possible to tell all these computers exactly
what to do and reach consensus. Then there's the famous result
in computer science called the Fisher Lynch Paterson
theory, which actually is a no go result,
which says such a master deterministic
algorithm is not possible. So this actually very reminds me
of a central result of physics, namely the entropy
always have to increase. If such kind of a
master algorithm exists, actually we have a name for it. It's called Maxwell's Demon. So somehow, this demon has
very high intelligence. For example, if you have a compartment of gas with a wall dividing it in two, and there's a little hole: when Maxwell's Demon sees a high energy particle coming from the left, it opens the shutter and lets it through; when a low energy particle comes, it closes the shutter and doesn't let it through. If this demon can do all this choreography in an efficient way, then a little bit later, one side will be much hotter than the other side. And then you can extract some work from it. So such a centralized coordinating entity would really be able to extract energy out of nowhere. And this obviously
is not possible. So I like to make the analogy between the Fischer-Lynch-Paterson theorem and the concept of Maxwell's Demon. Neither is possible: the master algorithm is not possible, and Maxwell's Demon is not possible. So what's the solution? The solution is provided
by the blockchain. If you want the entire distributed internet to agree on some temporal order-- which is the most crucial thing for financial transactions: which transaction happens first and which happens later-- you ask the machines to vote, but voting at a cost, by solving what is called a hash puzzle. The hash puzzle is very difficult to solve but very easy to verify, so once a machine solves it, every machine will agree: yes, this is true, and we agree on this temporal order. It's a stochastic algorithm, and it actually requires energy to compute and to solve this hash puzzle.
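A minimal sketch of such a hash puzzle in Python (the proof-of-work pattern used by Bitcoin; the difficulty parameter here is illustrative):

```python
import hashlib

def solve(block_data: str, difficulty: int = 4) -> int:
    """Brute-force a nonce so the hash starts with `difficulty` zeros."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce        # costly to find: pure trial and error
        nonce += 1

def verify(block_data: str, nonce: int, difficulty: int = 4) -> bool:
    """Anyone can check the answer with a single hash call."""
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = solve("tx1;tx2;tx3")
print(nonce, verify("tx1;tx2;tx3", nonce))   # prints the nonce and True
```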
So in the self-organized blockchain consensus mechanism, we reach consensus, namely a state of low entropy, but we dump the extra entropy somewhere else through the computation of the hash puzzle. And that is very similar to what's happening in the physical world: we can, in principle, reach this state of consensus, of low entropy, provided we dump extra entropy somewhere else. So I really think
this is really one of the most brilliant inventions in human history. Somehow, we can have a natural
and objective mechanism in a distributed world
to reach consensus. But there's a cost to it. Namely, you have to
do this mining work so that the extra entropy
can be dumped somewhere else. So once you have this
consensus mechanism, I think this offers a
great new opportunity to a new kind of a symbiosis
between blockchain and AI. I talked about AI being a convergence of three major trends. I alluded to computational power-- Moore's Law, and then possibly quantum computers. I also talked about new inventions in algorithms. But what AI needs most is data, so that AI can learn. Right now, though, all data are concentrated in centralized platforms. So there's very little
incentive for individuals to contribute data,
because they basically get nothing in return. And maybe their privacy
could even be violated. So I envision a future world where the ownership of data is completely returned to the individuals. All my personal data,
all my behavior data, all my online data,
all my genomics data, all my medical
records, everything should be owned
by the individual. And the privacy should
be completely protected. But then you say, wow, how can the machine possibly learn anything if everybody keeps their secrets private? Well, there is a beautiful thing called privacy-preserving computation. And that will make it possible to have a data marketplace. First of all, I protect all my private data, but I can release information, one bit at a time, totally under my control. Such a world will be a data marketplace-- a peer-to-peer marketplace where individuals own their private data, and then there can be a bidding and selling process, very selectively controlled, performed as a privacy-preserving computation. So such a future world
of a marketplace, based on one principle, which
I call "In math we trust." And that is possible that you
can still preserve privacy, but still can do
computation that only leaks out very, very
selectively, one piece of information at a time. The famous problem is called secure multi-party computation, or the Millionaires' Problem. Obviously, private wealth is very, very private; people don't like to reveal it. But it could so happen that two millionaires want to compare who is richer, without revealing their wealth to each other. If they reveal their wealth to each other, obviously they will find out, but it leaks too much private data. There is a computational protocol, called Yao's Garbled Circuits, that they can run with each other. At the end of the day, they only find out one bit of information, namely who is richer, without revealing anything else. Then there's the idea of
differential privacy, namely adding noise
to private data so that they don't become
individually identifiable. If I want to conduct a collective survey, I can add noise in such a way that, in the statistical aggregate, the noise cancels out. So the statistical information is completely accurate, but not much individual private data has been leaked, because there's enough noise that individually identifiable information is not there, while the overall statistical information is still accurate.
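A toy sketch of that cancellation in Python (illustration only; real differential privacy calibrates the noise scale to a formal privacy budget):

```python
import random

# Private values: each individual's true number.
true_values = [random.gauss(50, 10) for _ in range(100_000)]

# Each person reports value + zero-mean Laplace noise. A Laplace
# sample is the difference of two exponential samples of equal scale.
def laplace(scale: float) -> float:
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

noisy_reports = [v + laplace(5.0) for v in true_values]

# No single report is trustworthy, but in the aggregate the noise
# averages out and the survey mean is still accurate:
print(sum(true_values) / len(true_values))
print(sum(noisy_reports) / len(noisy_reports))   # nearly the same
```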
And then there's also the idea of zero-knowledge proofs. I can prove to you, for example, that I solved a very difficult game-- let's say a Sudoku-- but I only want to give you one bit of information, namely that I solved it. I don't want to reveal my entire solution to you; I want you to keep trying harder. And this is also possible, through a zero-knowledge proof. So there's really a
world where mathematics will enter economics in a
very central way in making a data marketplace possible. So that's the way all of us
will own our individual data. And then Google Cloud and all these entities can still compute-- they can compute useful statistical information without even having us reveal this private data. So I really think about this world where AI and blockchain combined can do great social good, in this new era of cryptoeconomic science, based on "In math we trust." Because when you really
think about what the problem with our society today is, it's that there's discrimination against minorities. And that is a fundamental problem of society. But think about AI learning: let's say my AI algorithm is already working accurately 90% of the time, but I want some extra data so that I can go from 90% to 99%. The data I need is not yet another kind of data which looks very similar to all the previous data I have seen. I want data with what is called high mutual entropy, namely the data that's most distinct. And that, by definition,
is owned by the minority. But then, in such
a data marketplace, I would bid the highest
for those data which are most in the minority. So then the economic incentive
structure would be aligned. Our society will value
the minority the most. And that's exactly what
we need to do social good. So finally, there's a vision
that the ugly duckling can somehow become
a beautiful swan. Because the ugly duckling
is not ugly, it's different. But now, difference
will be valued the most. Minorities in this
fair data marketplace will not be
discriminated against. So I really see this wonderful new world in the convergence of three major trends-- quantum computing, AI, and blockchain-- but I also see myself, coming from academia, offering interactions with colleagues in industry. We really can enter a new world driven by the latest scientific ideas. It's really, really fascinating and totally amazing that mathematical concepts purely invented by mathematicians in the abstract could turn out to be so useful. Something like
number theory: every day, when we conduct a transaction using HTTPS, we use number theory in the most essential way.
So this is a wonderful new world, where collaboration between academia and industry can really lead to great progress. As I said, the greatest
opportunity for making progress oftentimes lies in seeing the convergence of some major trends before anyone who stays in their own specialized area can see the overall picture. And I really think that the
symbiosis among these three major trends will
be the defining characteristic of the future
of information technology. Thank you. [APPLAUSE] Should I entertain
some questions? SPEAKER 2: Yeah. AUDIENCE: So you talked
about consensus and how a group of proof-of-work systems achieves consensus by distributing-- like [INAUDIBLE] entropy. SHOUCHENG ZHANG: Yeah. AUDIENCE: How does that work in proof of stake [INAUDIBLE]? SHOUCHENG ZHANG: Yeah. So actually, I think,
at the end of the day, there should always be some trade-offs. I see the future of the blockchain world and cryptocurrencies happening like what we have in the current world. The current world has M0, M1, M2-- different layers. So I believe, at the most fundamental layer, a universal currency should be completely based on proof of work, because then the entropy that you dump is totally transparent. Not only does it have to be there, but it is also totally transparent. I think, at the most basic and fundamental layer, proof of stake will not work, because there's so much possibility of collusion: you can lose something on chain, but gain
something off chain. It can be bribery and so on. So I think the truly exciting
thing about the blockchain world is that, at the
most fundamental layer, there can be something
that's totally objective and only connects to the
natural world, namely energy-- and not so much about proof of stake, where human irrationality can get involved. But I can very well imagine that, on the higher layers, they will [INAUDIBLE]. The most fundamental layer, such as M1 or M0, should be completely robust. And I still think that's proof of work-- or there's another approach, which is called proof of spacetime. Proof of space, which is storage. Those, I think, are quantifiable, physical resources. At the most basic layer, human things shouldn't be involved, but maybe at the higher layers. AUDIENCE: Could you
elaborate on how you feel quantum computing
relates to AI and blockchain? So by nature, quantum
computing requires unitary transformations. And it's like something
that should be reversible, unlike hashes, which
seem to be the basis of-- SHOUCHENG ZHANG: Yeah. So I mostly think quantum computing may be useful for AI as a search algorithm. One of the most interesting approaches to AI is the GAN, right? Generative Adversarial Networks. So I don't mean these three
trends always necessarily have to work together. They can actually
lead to progress by competing with each other. So in one aspect, quantum
computing and blockchain are somewhat competing
with each other, because a lot of the
cryptographic algorithms could be broken by quantum computing. But on the other
hand, I also see that quantum can
help AI in doing the most efficient search. And that's also what
AI needs to do, right? So this relationship is
very much like a symbiosis in our ecosystem. There's both competition
and collaboration. Yeah, we cannot just use our
human will to dictate they will always do the same thing. I think, in the
process of competition, they will all become stronger. AUDIENCE: You mentioned the
universal currency, or M0, M1. SHOUCHENG ZHANG: Yeah. Yeah. AUDIENCE: I'm curious. I know you're a
theoretical physicist, but in executing that-- when you think about an iPhone, for example, my iPhone 7 talks to the iPhone 6, which talks to the iPhone 5. But there's a metalayer of consensus to be reached that's like, I actually agree to join this distributed system. Currently in crypto, there are many fragmented pools of "liquidity," quote, unquote. So how do you bridge
that gap between where we are now in these-- SHOUCHENG ZHANG: So
I think, for example, the relationship between
the bitcoin blockchain and Lightning Network
very much fits this framework of M1, M2. At the basic layer, the blockchain is completely objective, based on proof of work. This is to reach the most universal consensus among parties which totally don't know each other, and still need to transact. But when you really think about business transactions-- maybe two of us have already been working very well as partners for the last 10 years, so why should we still treat each other as total strangers? What we can do is enter into a state channel with each other by putting our [INAUDIBLE] on the blockchain. Then we keep on doing
very, very fast trading, but we still settle
once a month. So this is, I think, exactly
like the relationship between M0, M1, and M2. The relationship between
Lightning and Bitcoin is like the relationship
between M0 and M1. As you go up each layer, it's less robust, but more efficient. The trade-off draws on our history-- we already have a history of trust. Business partners already somewhat know each other. They don't absolutely have
to use the most universal robust layer. They can establish
a higher layer where they sacrifice
some universality, but in exchange for efficiency. Yes? AUDIENCE: I have a question
on the Angel Particles. SHOUCHENG ZHANG: Yes. AUDIENCE: Angel Particle is the
one that's not positive or-- SHOUCHENG ZHANG: Negative. Yeah. AUDIENCE: Not negative. Right. SHOUCHENG ZHANG: Yeah. Yeah. So it's a half of a qubit. AUDIENCE: That sounds
like an identity element. SHOUCHENG ZHANG: Huh? AUDIENCE: It sounds
like identity element in your [INAUDIBLE]
field, right? Identity element, you know? When [INAUDIBLE] itself, it
stays the same as any other-- SHOUCHENG ZHANG: No, the more precise analogy is that a complex number can be expressed in terms of two real numbers. So a complex number is like a particle. AUDIENCE: Right. SHOUCHENG ZHANG: The complex conjugate is like the antiparticle. AUDIENCE: Right. SHOUCHENG ZHANG: But if you have a real number, the complex conjugate is the same as itself. AUDIENCE: OK. How would you-- SHOUCHENG ZHANG: So the Angel Particle is more like a real number.
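In symbols (my illustration of the analogy he is drawing, not a formula from the talk):

```latex
% A complex number packages two real numbers:
z = x + i\,y, \qquad \bar{z} = x - i\,y
% Particle <-> z, antiparticle <-> conjugate \bar{z}.
% A real number is its own conjugate, z = \bar{z},
% just as the Angel Particle is its own antiparticle.
```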
AUDIENCE: I see. How would you-- now the only thing you have is yin and yang versus-- SHOUCHENG ZHANG:
Yin and yang, yeah. AUDIENCE: --and
angel versus demon. What would be the
neutral element? What would be the
angel and the-- SHOUCHENG ZHANG: Yeah. Yeah. So yeah. Well, I think the analogy is just to say that here there's one incoming qubit, but before you do actual computation, you are splitting it. And by splitting it, the two halves already become kind of non-local. They're entangled, but the classical noise is not entangled. So it's impossible to destroy the qubit using classical noise. That's why a topological quantum computer can be so much more robust. Yes? AUDIENCE: OK. So combining a couple of
the themes of your talk-- if we're able to harness the
power of quantum computing, and if we're able to
then secure our data through privacy-encrypted ways
of being able to share it, I'm wondering how you
see the future of Google? Because that seems like a
truly existential threat. If anyone can spin up
a quantum computer that can do an extremely
efficient parallel search, and then they can
harness everyone's data, it seems like-- SHOUCHENG ZHANG: Well,
I think the only way is to not resist changes,
but to embrace changes. AUDIENCE: Right. Right. SHOUCHENG ZHANG: [LAUGHS] AUDIENCE: So how do
you see a Google-- SHOUCHENG ZHANG: Yeah. So for example-- AUDIENCE: --operating
in this future world? SHOUCHENG ZHANG: Yeah. Yeah. Yeah. Actually, I have
an answer to this. In this world, we can actually do the following construction. For example, I want to store my private data in a secure way, but still make it possible to do some computation. We know Google Cloud competes with Amazon Cloud. So what we can do is: on the Amazon Cloud, I store completely random numbers, and on the Google Cloud, I store my information plus the [INAUDIBLE] information I store on Amazon Cloud. If I can really assume that these two entities are competing very hard-- there's no collusion and no way they will secretly exchange data-- then you can use the protocol of secure multiparty computation to do a computation which yields only one result, without revealing any details.
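One simple scheme matching this description (my illustration-- the exact construction he mentions is partly inaudible) is XOR secret sharing across two non-colluding clouds:

```python
import secrets

secret = 0b10110101                    # my private data (one byte)
share_a = secrets.randbits(8)          # pure randomness -> cloud A
share_b = secret ^ share_a             # secret XOR randomness -> cloud B

# Each share alone is a uniformly random byte, so neither cloud
# learns anything; together the shares reconstruct the secret.
assert share_a ^ share_b == secret
```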
So in this world, centralized entities are still useful. But in order for this to work, you have to assume that they are competing, but not colluding. AUDIENCE: Hi. Just wondering. The use of the term
entropy is interesting, because it seems to be this mysterious thing, but it's really precise: in thermodynamics, you have a logarithmic definition in classical thermodynamics. And then you have [INAUDIBLE] with the information theory of entropy. And then you make an analogy using energy. That kind of reminds me of [INAUDIBLE] free energy. SHOUCHENG ZHANG: Yeah,
Yeah, exactly. Yeah. So I think the blockchain world is exactly like extracting some free energy out of it. You are basically achieving something. But whatever you achieve, the useful amount is only the energy you spent minus the entropy that you have to waste. Today, you still see a lot of white papers that claim to do miraculous things. And those kinds of white papers remind me of the proposals in the 18th century for perpetual motion machines. AUDIENCE: I'm wondering. Can you extrapolate
the analogy further than-- you need a temperature
term for the [INAUDIBLE] to work. Is there a temperature-- SHOUCHENG ZHANG: Yeah. Yeah. Yeah. Yeah. AUDIENCE: [INAUDIBLE] SHOUCHENG ZHANG:
Actually, temperature occurs very naturally. Whenever you have a conserved quantity, such as conservation of energy, the temperature concept naturally emerges. Anytime you have a random but conserved system, you get the most generic distribution, what is called the Boltzmann distribution.
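In formulas (textbook statistical mechanics, my addition):

```latex
% A random system with conserved energy settles into the most
% generic distribution, the Boltzmann distribution:
p_i \;\propto\; e^{-E_i / (k_B T)}
% The temperature T appears naturally as the parameter
% conjugate to the conserved energy.
```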
So the temperature comes in naturally. But the reason I get so excited about this is that, for the first time, I see a convergence between social science and natural science-- it provides an anchor for the social scientific world. So in my framework of M0, M1, M2,
the fundamental anchor is now anchored in natural science. We can precisely see the entropy that's wasted, so we can see why a consensus is reached. And then you can build more human things on top of it. But the most basic layer is now common between social and natural science, and fundamentally reduces to energy, entropy, and information. AUDIENCE: Thanks. AUDIENCE: Thanks so
much for your time. So I think, in your talk, you were saying that you can see this first layer of one blockchain, and then further layers built on top of that. So what do you think of the various projects or companies that are trying to build their own blockchains? And how does that relate to your talk? Do you think-- SHOUCHENG ZHANG:
Well, I think, yeah. There has to be some unique thing that you provide. The Bitcoin blockchain and Ethereum are really different. As a fundamental layer of trust, you actually don't want a universal Turing machine, because it can maybe be hacked. But then you have to do some more complex transactions on top of it, and there Ethereum looks more natural. So the evolution of the blockchain world will emulate the evolution of biological species. You see forking, you see different species. If they fork for long enough, maybe they become different species. But there's always something fundamental-- namely, all biological beings are based on cells. This kind of basic contract will not change. But the organization-- the different organisms, the different organizations of cells-- that may change. Yes? AUDIENCE: Thank
you for your time. So my question is, when do you think quantum computing will be in application? Like, after your findings and research? And when it is in application, do you think it's going to be in the hands of only certain big companies? Or will it scale into having [INAUDIBLE]-- SHOUCHENG ZHANG: Yeah. So yeah. So I think quantum computing
research, most ideally, should be done in
open environment. I think because-- yeah. Let me just make this
statement, because I know a lot of companies are trying. But the very nature of a company trying is that it has to protect shareholder interests; it has to protect its secrets. For something so powerful, with its implications for humanity so unknown, I think it would be best conducted in open university research. And this is exactly
what I'm doing. So my approach to a
quantum computer-- I have many, many
temptations to do a company on quantum computers,
but I've resisted that. AUDIENCE: And what is your
prediction of application of quantum computers? SHOUCHENG ZHANG: With
or without my invention? [LAUGHTER] I think, if you use the current way of trying, it will take a long, long time. Can you just imagine? For one useful qubit, you'd need 70 qubits to serve it? I think that wouldn't scale. But with this approach,
it would scale. AUDIENCE: OK. I think we are about to wrap up. I'm going to ask
one last question. SHOUCHENG ZHANG: OK. AUDIENCE: About
your Angel Fermion. SHOUCHENG ZHANG: Yeah. AUDIENCE: Does it change
any other requirements of quantum computing, such as absolute zero temperature? SHOUCHENG ZHANG:
No, no, no, no, no. Oh, well, it still operates at-- most proposals operate at low
temperature, unfortunately. Yeah. AUDIENCE: OK. SHOUCHENG ZHANG: Yeah. But our approach could
work at room temperature, if a room temperature
superconductor is discovered. But that hasn't
been discovered yet. AUDIENCE: So [INAUDIBLE]. SHOUCHENG ZHANG: But
we shouldn't mind that. Maybe for some very,
very hard computation, if there's really a
qualitative improvement, we can just cool it
to a low temperature. SPEAKER 2: OK. Well, thank you so
much, Professor Zhang. SHOUCHENG ZHANG: Yeah. [APPLAUSE]