Translator: Ai Van Tran
Reviewer: Ellen Maloney
What is an idea? What is a thought? And how do we think of these great
and new ideas that are worth spreading? My name is Henning Beck. I'm a brain researcher, and I want to show you
what is going on in your mind when you use information
to give rise to new thoughts. This is important because
information is all around us. Many people think it all starts with data. Data, the resource of the 21st century. Data is everywhere. Companies collect our data,
we do data analysis and data correlation, but in fact, data itself
is pretty simple: It's just a collection of letters
and numbers, signs you can process electronically,
but have no meaning. And you can measure data,
but you cannot measure an idea. Because when are you
really creative or innovative? When you have a thousand thoughts,
only one is the real game changer. So, maybe information is more important. We have so many tools nowadays
to acquire information. We have smartphones, mobile devices,
the internet everywhere. But never mix up information
with having an idea or knowledge, because you can Google information, but you cannot Google an idea. Because having an idea,
acquiring knowledge, understanding stuff, this is what
is happening in your mind when you use information
to change the way you think. So, what is that kind of thinking? Well, everything you see here is just
the surface of what is going on in your mind when you think. Most of the things run subconsciously,
which makes it damn hard to investigate but even more interesting. So, let's zoom into the brain to check out
what is going on when we think. Many people think the brain
is something like a supercomputer, like the ultimate calculating engine. It is supposed to be extremely fast,
super connected, and highly accurate. When you have something on your mind,
like right now hopefully, a picture, or an image,
or something like that, you can see it very sharply and precisely
and switch very easily, much faster than a computer, right? Because what can you do with this one? This one's at least stylish, it has appeal
on your desktop, but that's it, right? Well, look at this again and you see
it's totally the other way around. A computer you can put on your desktop easily calculates
3.4 billion times a second. Brain cells are much slower and only do
500 operations per second at maximum speed. Computers hardly ever make mistakes; a rough estimate is one error
in a trillion operations, and brains, as you probably
know from your personal life, are much more error-prone and make mistakes
a billion times as often. A computer you can
plug into the internet, and you're connected
with the world. Not so the brain, because the brain
is 99 per cent self-oriented: most of its nerve fibers
never get outside of your skull, and most of the brain cells never see
what's going on in reality. So, from this perspective you have to say
"Okay the brain is everything but perfect. It is lame, it is lousy,
and it is selfish, and it still works." Look around you, working brains
wherever I look, more or less. But still, each one of you
has the power to outperform every computer system
in a very simple experiment, which I can show you in a minute. So, what do you see here? A face,
you might say. Totally correct. I could also say it's just a collection
of fruits and vegetables, but you see a face. And what's interesting is not that
you do it, but how fast you do that. Because when your brain cells
are really that slow, you can only do like 20, 30, maybe 40 operations
within that split second. Computer software needs
many more steps, thousands, even millions of steps
to come to the same result.
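As a rough back-of-the-envelope check of those numbers in Python (the figures are the ones quoted above; the exact length of that split second is my own assumption):

```python
# Back-of-the-envelope check of the numbers quoted above.
# Assumption: recognizing the face takes roughly 60 milliseconds.
split_second = 0.06                    # seconds

computer_steps = 3.4e9 * split_second  # "3.4 billion times a second"
neuron_steps = 500 * split_second      # "500 operations per second"

print(int(computer_steps))  # ~200,000,000 steps available to the computer
print(int(neuron_steps))    # ~30 serial steps available to a chain of brain cells
```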
This leads us to the fundamental principle of how we think, because it's totally different from
anything we know of in our world. So, how would a computer approach
that kind of problem? Well, computers use algorithms. Algorithms are basically stepwise recipes
telling you what to do. So, when a computer faces
a certain problem, for instance recognizing a face
or solving an equation or whatever, the basic principle goes like this: You have an input, then you process
that input according to the algorithm, finally reaching an output. Input, processing, output. That's great. That's great as long as
you don't make any mistakes, because if you make a mistake
at the beginning, you're screwed at the end. That's why computers sometimes break down
and end up in a blue screen. What a sad face, by the way, poor guy. Computers break down.
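To make "input, processing, output" concrete, here is a minimal toy pipeline in Python. The recipe (a temperature conversion) is invented for illustration; the point is only that the steps are fixed, and a mistake at the start either travels straight to the end or crashes the program.

```python
# A toy algorithmic pipeline: input -> processing -> output.
# The recipe itself (Celsius to Fahrenheit) is a made-up example.

def read_input(raw: str) -> float:
    return float(raw)                # step 1: parse the input

def process(celsius: float) -> float:
    return celsius * 9 / 5 + 32      # step 2: apply the recipe

def write_output(fahrenheit: float) -> str:
    return f"{fahrenheit:.1f} degrees Fahrenheit"  # step 3: format the output

print(write_output(process(read_input("20"))))   # good input      -> 68.0 degrees Fahrenheit
print(write_output(process(read_input("200"))))  # wrong input     -> confidently wrong output
print(write_output(process(read_input("2O"))))   # malformed input -> ValueError, the program "breaks down"
```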
Brains do not break down, unless you apply some external force,
or alcohol, or something like that; usually, brains are very robust. And that's because we think with a trick. When an input hits our eyes, the input is processed
by the sensory cells in our eyes, and they get activated and activate
their neighbors in this neural network. So, this is like a simplified model
of your brain in action. You'll have to agree that, for didactic reasons,
I drastically oversimplified that model of your personal brain. At least I hope so. But you still see the basic principle. Brain cells are dumb on their own;
they cannot do much. But if you have a lot of them,
you end up with something we call an activity pattern, an activity state
of that specific neuronal network. And this activity pattern,
this is what we call a thought. That's a big difference from a computer. The brain does not distinguish
between processing and output, because processing the information
is the thought itself.
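Here is a drastically oversimplified toy model in Python, just to make the "activity pattern" idea tangible (the wiring and the inputs are invented, and real neurons work nothing like this): cells activate their neighbours, and the resulting pattern of active cells is the result itself; there is no separate output step.

```python
# A toy network of cells; each cell simply activates its neighbours.
# The wiring and the stimulated cells are invented for illustration.
neighbours = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": [],
    "E": ["F"],
    "F": [],
}

def think(stimulated, steps=2):
    """Spread activation from the stimulated cells for a few steps."""
    active = set(stimulated)
    for _ in range(steps):
        active |= {n for cell in active for n in neighbours[cell]}
    return active  # the activity pattern is the "thought"; there is no separate output

print(sorted(think({"A"})))       # one activity pattern, one "thought"
print(sorted(think({"A", "E"})))  # slightly different input, different pattern, different "thought"
```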
That's kind of tricky, but maybe it's similar to an orchestra: If you look at an orchestra
from the outside, seeing all the musicians sitting next to each other
but not playing any music, you have no idea what melody
this orchestra is able to play. Just like the brain: when you look at the brain from the outside, you have no clue what thoughts
this system is able to think. In an orchestra, the melody emerges when the musicians start to play
together and synchronize themselves. So, the music, the melody,
is among the players. Just like a thought in the brain
is among the brain cells when they synchronize with each other. So, a thought is not located anywhere. A thought is how the brain cells interact, and how they process the information. So, that's different from a computer, because no matter what kind
of processing the computer uses, whether it's algorithmic
or a deep learning network, or whatever fancy method we will
come up with in the future, it will always be "input, processing,
output," without mistakes, hopefully. But if you don't do any mistakes, you just end up at the place
you were programmed for. But nowhere new. Computers are intelligent,
but intelligence is nothing special. Intelligence means you follow the rules
as fast and efficiently as possible, but not to change rules. No super intelligent computer
will ever ever rule the world because intelligence is not enough. You need to be a rule-breaker,
a game-changer, you need to be crazy and creative too. And it's the mistake in our thinking, not the perfection, that separates us
from the noncreative machines. What do I mean by mistake? Well, it means that we can come up
with a new thought, a new activity pattern without knowing
before whether this is correct or not. Slightly differently actuated, we get
a new pattern, a new thought, but we don't know whether
this one is correct or not. We try. We fail. We do it again. But there are no objective criteria
for a good idea. Well, there's one thing. When is an idea a good one? When somebody else says,
"This is a good idea," but this social interaction,
this social feedback, this trial and error, this social practice,
cannot be digitized right now. And that's one reason why truly new ideas
will stay analogue in our future. You see that this kind of thinking
gives us great advantage when it comes to creating new ideas,
and we call that special type of thinking "concept thinking,"
or categorized thinking. And instead of explaining
the theoretical background and explaining what's behind that, I'll give you an example. How you create new ideas: Who thinks that this is a chair?
Please raise your hand. Alright thank you. Who thinks that these are chairs too? Please raise your hand, right, very well.
You're quite familiar with furniture. That's very good, yes. And here's the task:
Who thinks that these are also chairs? The longer I wait, the more hands I see.
Thank you. But why? Why do you think this blue plastic ball
with three stumps is a chair? Well, this is the big difference between
the computer world and the brain world. What you see up here
is what we call deep learning: you give a self-learning algorithm a gazillion images,
among them a couple of hundred images of chairs, and then it analyzes the whole
data and says with 98 per cent certainty that a chair is an object with four legs,
a seat, and a backrest. But we don't do that; we understand that a chair is not
a special shaped object, but something you can sit on. And once you've understood that,
you see chairs everywhere. You can create new chairs,
with new designs and features.
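To caricature that difference, here is a hypothetical sketch in Python. The objects and the rules are invented, and real deep learning does not use hand-written rules like this; the sketch only contrasts a learned shape description with the concept "something you can sit on".

```python
# A caricature of the two approaches; objects and rules are invented.

def looks_like_a_chair(obj):
    # "Learned" shape description: four legs, a seat, and a backrest.
    return obj.get("legs") == 4 and obj.get("seat") and obj.get("backrest")

def can_be_sat_on(obj):
    # The concept: a chair is something you can sit on.
    return obj.get("sittable", False)

blue_ball_with_stumps = {"legs": 3, "seat": True, "backrest": False, "sittable": True}

print(looks_like_a_chair(blue_ball_with_stumps))  # False: wrong shape
print(can_be_sat_on(blue_ball_with_stumps))       # True: still a chair to us
```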
Here's another example of how you basically do that. Computers learn. But learning is nothing special, because animals learn;
blackbirds learn, dolphins learn, elephants learn, computers learn,
we understand. Deep learning is great,
but deep understanding is better. Because when you learn something,
you can unlearn it, but once you've understood it,
you cannot de-understand it again because understanding means that you change the way
you process information, and since processing and output is one
and the same in our brain, we understand very fast on the spot. As I said another example: no computer is able
to solve this kind of problem, but you're able to understand it
within a minute. I'm not a good illustrator at all
but it should be enough. I say that this is a flower. If this is a flower, what is this? A tree. If this is a flower, and this
is a tree, what could this be? A forest, or a garden, or whatever. So, you understood this,
but I say that this... is a child. If this is a child, what could this be? A grown-up. If this is a child and this
is a grown-up, what is this? A family, of course. Data is the same. Information is different, and understanding is how you use
the information to change your knowledge. That's easy for you, but this problem
is impossible for computers to solve right now. And you see how powerful you are
at understanding things on the spot. A couple of weeks ago,
my two-year-old neighbor walked into my apartment, looked up
and said, "Oh, smoke detector." I said, "What? What kind of parents
does this little boy have? Are they showing him hundreds
of pictures of smoke detectors and fire exits until he finally learns
what a smoke detector is?" Probably not. He might've heard the word
"smoke detector" only once or twice, but that was enough for him to understand
the meaning of the whole thing. We know from lab studies that children
are able to understand the meaning of new stuff - of toys,
of words, of animals - at first sight. That's amazing, and you do this as well. How long did it take you
to understand the word Brexit? Maybe you've seen it once or twice
in the media; that was enough. Once you've understood it,
you can do stuff with it. When you know what a Brexit is, what could a Swexit be?
Or Fraxit? Or Itaxit? Never seen before, but you
understand them at first sight. Even better: when you know what a Brexit is,
what could a Bremain be, or a Breturn? (Laughter) Never seen before, but you
understand it on the spot. Remember the sad face you were seeing
on the blue screen before? This poor guy differs from this happy one
by 50 per cent in its data. But for us, it is a 100 per cent difference.
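Assuming the two faces were the classic two-character emoticons (that is my assumption about the slide), the 50 per cent figure is easy to check in Python:

```python
# The sad face and the happy face as two-character emoticons (an assumption).
sad, happy = ":(", ":)"

differing = sum(a != b for a, b in zip(sad, happy))
print(differing / len(sad))  # 0.5 -> the data differs by 50 per cent, the meaning by 100
```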
You see how powerful
our way of thinking is, but how do we make use of it to stay
creative and innovative in the future? Well... The first thing to realize:
don't do it like computers. Many people think that it is important
nowadays to work harder, faster, and more efficiently
to solve problems, but that's exactly what
machines can do. Every efficient procedure will be replaced
by algorithms eventually. What cannot be replaced
is inefficient thinking integrating new ideas, giving rise
to understanding of stuff. How do we do that?
Well, first we take breaks. We sleep, we lose attention
and focus and get distracted. This seems inefficient at first sight, but sometimes it's very effective if you step back, and see
the same things in a different context. Because if you only dig into the data, just rely on data or information,
and never step back to take time off to understand the whole picture of it,
you'll never be able to make sense of it. Second, when you ask people,
"Where do you get a good idea?", 70 per cent answer, "Under the shower,"
followed by: During sports, driving a car, vacuum cleaning the apartment,
washing the dishes, see the pattern here? Whenever we do some automated
routine of boring stuff, we seem to clear our mind
and attract new ideas. In fact, this is a great technique
many creative geniuses used. Dig into the problem, focus on the task, get involved, and then step back. Do something unrelated. Not a new problem; just chillax. In Ancient Greece, this was
deified as being "kissed by a muse," but you have to create a kiss-friendly
atmosphere for such a muse. And this means that you step back,
that you do something unrelated, some automated routine, sometimes
boring stuff, because otherwise it won't be possible to put the same
things into a new context. Third, escape the echo chamber. Nowadays, it's so easy to be surrounded
by convenient but non-inspiring opinions. In fact, we don't like other opinions. Facebook's algorithm filters
the online posts down to the ones that suit us. Most of our friends share
similar points of view. We read newspapers that feature
the opinions that we like. We use online media, TV, and radio
to confirm our beliefs, but never to challenge them. But by doing so, we create echo chambers,
filter bubbles around us. In order to spark new ideas,
you should do the opposite. Try to provoke yourself from time to time. Read something you disagree with,
have a productive argument with a friend, try to see things from the other side. Don't think like an algorithm. Now try to challenge your opinions
and ideas instead of confirming them. Try to break the thinking rules
instead of following them. So, what is the next great idea
that is worth spreading? We don't know yet. But we can be pretty sure
that it will be created by brains. Not because we are smarter, faster,
or more intelligent than computers. But the opposite. We are slower, we're irrational,
and imperfect. That's why we understand the world
instead of analyzing it, and this gives us the ultimate edge. We should appreciate this, and be more proud of it, because
this is what makes us human. Thank you. (Applause)