[MUSIC PLAYING] SARAH: Welcome to this presentation by the renowned David Eagleman on the question, can we create new senses for humans? David is a neuroscientist and
a "New York Times" bestselling author, as well as
an adjunct professor at Stanford University. He's known for his work
on sensory substitution, time perception, brain
plasticity, synesthesia, and neurolaw. He's the writer and presenter
of the PBS series, "The Brain with David Eagleman",
and he said one of his most
impressive credentials is that he is scientific
adviser to "Westworld". And so, without further
ado, David Eagleman. DAVID EAGLEMAN: Thank you, Sarah. Thanks for having me. Specifically, what I'd said
Thank you, Sarah. Thanks for having me. Specifically, what I'd said
is that the scientific advisor at "Westworld" is
the only thing anyone remembers, even though it's my
least impressive credential. OK, so here's what I
want to talk about today. So I'm a neuroscientist,
and one of the things that's been of great interest
to me for a long time is this issue that
when we try to perceive the reality around
us, we're only perceiving a little bit of it. So we're made out
of very small stuff, and we're embedded in this
extremely large cosmos, and the fact is that
human brains are really terrible at
perceiving reality at either of these scales. And that's because we
didn't evolve for that. We evolved to operate at the
level of rivers, and apples, and mates, and food,
and stuff like that, right here in the middle. But the part that has
always been strange to me is that, even at
this scale that we call home, the scale
that we perceive, we're actually quite bad at it. We don't see most of the
action that's going on. So an example of this, take
the colors of our world. So this is
electromagnetic radiation that bounces off objects and
hits specialized receptors in the back of our eyes. And as many of you may know, the
part that we call visible light is actually less
than a ten billionth of the amount of light
that's out there. So all this is
electromagnetic radiation. It's just that we have
receptors for this part and not for the rest of it. So you have radio waves,
and x-rays, and cosmic rays, and microwaves, and
all that stuff that is passing through
your body, and it's completely invisible to you. You have no idea
that it's out there. There are thousands of cell
phone conversations passing through your body right now, and
it's totally invisible to you. Why? It's because you don't
have the specialized receptors for that frequency. Instead, you only have it for
this little range in between. Now, it's not that the
stuff is unseeable. So rattlesnakes,
for example, include part of the infrared range
in their view of reality, and honeybees include some
of the ultraviolet range in their view of reality. It's just that you can't see
any of this, at least not yet. So what this leads to, I think,
is this very counterintuitive idea that your
experience of reality is actually constrained
by your biology. And that goes against
the common sense notion that your eyes, and your
ears, and your fingertips are just picking up the
reality that is out there, and all you need to
do is open your eyes. Instead, what's happening
is that we're sampling just a little bit of the world. And what's interesting is that
when you look across the animal kingdom, you find that
different animals pick up on totally different signals. So they have different parts of
reality that they're detecting. So just as an example, if you
are the blind and deaf tick, then what you're
picking up on is temperature and butyric acid. And those are the signals that you receive, and that's how you figure out your world. Those are the only signals that
are telling you your reality. If you're the black ghost knife
fish, you're in the pitch dark and all you're picking
up on are perturbations in electrical fields. That's how you're figuring
out what's around you. If you are the blind
echolocating bat, all you're picking up on
are air compression waves that are coming back to
you from your chirps. And the idea is that
that's everything. That's your whole world. And we have a word
for this in science. It's called the Umwelt,
which is the German word for the surrounding world. And presumably, every animal
thinks that their Umwelt is the entire objective reality
out there because why would you ever stop to imagine that
there's something else beyond what you can sense? So let me do a consciousness
raiser on this. Imagine that you are
a bloodhound dog, so your whole world
is about smelling. You've got this
very large snout. You have 200 million
scent receptors in here. You have wet
nostrils that attract and trap scent molecules. You have slits in your
nostrils so you get big, giant nosefuls of air. You have floppy ears
to kick up more scent. So everything for you
is about smelling. It's your whole world. So one day, you're walking
along behind your master, and you stop in your
tracks with a revelation. And you look at
your master's nose, and you think, what is it like
to have the pitiful little nose of a human? How could you not know that
there's a cat 100 yards away? Or how could you not
know that your best friend was on this very
spot six hours ago? But because we're humans, we
are used to our Umwelt. It's not like we have some sense
that we're missing something. We're used to the
reality that we have. We accept the reality
that we're given. But the question is,
do we have to be stuck in our [NON-ENGLISH]. And so as a neuroscientist, what
I'm interested in is the way that our technology might
expand our Umwelt and how that's going to change the
experience of being human. So what many of
you probably know is that there are
hundreds of thousands of people walking around
now with artificial hearing and artificial vision. So the way this works is
with a cochlear implant, you take a microphone, you
slip an electrode strip into the inner ear, and you
feed in this digitized signal into the inner ear. And the way it works
with a retinal implant is that you have
a digital camera, and that feeds into an
electrode grid that plugs into the back of your eye. Now, this works, but
what's interesting is that as recently
as maybe 20 years ago, there were a lot
of neuroscientists who thought this wouldn't work. And the reason is because
these things speak the language of Silicon
Valley, and that's not exactly the same dialect as your
natural biological sense organs. And so they thought,
the brain's not going to be able to understand
these digital signals. But as it turns out,
it works just fine. People plug these things
in, and they figure out how to be able to
proceed with them. Now, how do we understand that? It's because-- here's
the big secret-- your brain is not directly
hearing or seeing any of this. Your brain is locked in a
vault of silence and darkness, and all it ever sees are
electrochemical signals, and that's it. So it has all these
different cables that are plugged into it
that are bringing signals in. It doesn't know what those are. It has no idea what we would
even mean by eyes, or ears, or nose, or fingertips. All it knows is
there's data coming in. And what the brain
is very good at doing is extracting patterns, and
assigning meaning to those, and building your entire
subjective world out of that. But the key thing is that
your brain doesn't know and it doesn't care where
the data's coming from. It just figures out what
it's going to do with it. And this is really an
extraordinary machine. Essentially, you can think about
this as a general purpose compute device. And there's a lot of
talk in Silicon Valley and here about AI and all the
great things that it's doing, but in fact, we can't even
scratch the surface yet of a system like this
that just takes in all of the sensory information, figures out how to correlate the sensors with each other and with your motor movements, and just builds this
world around you. So the point is, what I think
a general purpose device like this allows for is that
once mother nature has figured out these principles,
then she can mess around with the input channels. She doesn't have to figure
out the principles of brain operation every time. And so this is what I call
the PH model of evolution. And I don't want to
get too technical here, but PH stands for Potato Head. And I use this name to
emphasize that all these sensors that we know and
love, these are just peripheral
plug-and-play devices. You stick them in,
and you're good to go. The brain just figures
out what it's going to do with that information. And what's cool is that when you
look across the animal kingdom, you find lots of different
peripheral devices that can be plugged in,
even though the brains across different animals
all use the same principles. So just as an
example, with snakes, you've got these heat pits. That's how they detect the infrared. And with the black ghost
knife fish that I mentioned, its body is covered with
these electroreceptors by which it picks up
these perturbations in the electrical field. The star-nosed mole
has this funny nose with 22 fingers on it with
which it feels out the tunnels that it's boring
through in the dark, and that's how it constructs
a three dimensional representation of
its tunnel world. Birds-- so it was just
discovered last month-- have cryptochromes
which allow them to detect the magnetic
field of the Earth. I mean, the fact that they
could tell the magnetic field has been known for a long
time, but it was just discovered how they do it. But cows have this,
most insects have this. They're all aligned
with the magnetic field, so it's called magnetoreception. So the idea here
that I've proposed is that mother nature doesn't
have to continually redesign the brain with each animal. Instead, all she's doing is
redesigning peripheral devices to pick up on information
sources from the world, and to plug it in,
and you're good to go. So the lesson that
surfaces here is that there's nothing really
special or fundamental about the senses that we happen
to come to the table with. It's just what we
happen to have inherited from a long road of evolution. But, it's not what we
have to stick with. And I think the best
proof of principle for this comes from what's
called sensory substitution, which is the idea of feeding
information to the brain via unusual channels,
and the brain figures out what it's going to do with it. Now, that might
sound speculative, but the first
demonstration of this was published in the
journal, "Nature" in 1969. So there was a scientist
named Paul Bach-y-Rita, and he put blind people in
a modified dental chair. And the idea is that
he had a video camera, and he puts them in
front of the camera, and whatever was in
front of the camera, you feel that poked into your
back via this grid of solenoids here. So if I put a coffee cup
in front of the camera, [CLICKS WITH MOUTH] I feel
that poked into my back. If I put a triangle,
[CLICKS WITH MOUTH] I feel that poked into
my back, and so on. And blind people got
pretty good at this. They were able to tell what
was in front of the camera just based on what
they were feeling in the skin in the
small of their back. So that's pretty amazing.
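To give a sense of how simple the underlying mapping can be, here is a toy Python sketch of a camera-to-tactile-grid conversion (my own illustration; the 20-by-20 grid size is an assumption, not the published spec of Bach-y-Rita's device): downsample the camera image to a coarse grid and poke harder wherever the image is brighter.

import numpy as np

GRID = 20   # coarse grid of tactile stimulators on the back (size is illustrative)

def image_to_tactile_grid(image: np.ndarray) -> np.ndarray:
    """Downsample a grayscale frame (values 0-255) to a GRID x GRID array
    of poke strengths in [0, 1]; brighter regions poke harder."""
    rows = np.array_split(np.arange(image.shape[0]), GRID)
    cols = np.array_split(np.arange(image.shape[1]), GRID)
    out = np.zeros((GRID, GRID))
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            out[i, j] = image[np.ix_(r, c)].mean() / 255.0
    return out

# Example: a bright shape (say, a coffee cup) near the center of a dark frame.
frame = np.zeros((240, 320))
frame[80:160, 120:200] = 255
print(image_to_tactile_grid(frame).round(1))

And it turns out there have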
been many modern incarnations of this. So one of these is called the
Sonic Glasses and the idea is that-- this is for blind people,
again-- there's a camera here, and whatever the
camera is seeing, that gets turned
into an audio stream. So you hear
[MAKES PITCH CHANGING NOISE] And at first, it sounds
like a cacophony, and you bump into things. And then after a little
while, blind people get really good at
being able to interpret [MAKES PITCH CHANGING NOISE]
all the stuff, the pitch, and the volume, and
so on, to figure out how to navigate the world. So they're able to tell
what is in front of them just based on what they're
hearing through their ears.
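One common scheme for this kind of vision-to-sound mapping (devices like the vOICe work roughly this way; this is a Python sketch of the general idea, not the exact glasses being described) is to scan the image left to right over time, map height in the image to pitch, and map brightness to loudness.

import numpy as np

def image_to_soundscape(image: np.ndarray, sample_rate: int = 16000,
                        sweep_seconds: float = 1.0) -> np.ndarray:
    """Turn a grayscale image (0-255) into a short audio sweep: columns play
    left to right, higher rows become higher pitches, brighter pixels louder."""
    height, width = image.shape
    samples_per_col = int(sample_rate * sweep_seconds / width)
    t = np.arange(samples_per_col) / sample_rate
    pitches = np.linspace(2000, 200, height)      # top row = high pitch (assumed range)
    columns = []
    for col in range(width):
        tone = sum((image[row, col] / 255.0) * np.sin(2 * np.pi * pitches[row] * t)
                   for row in range(height))
        columns.append(tone / height)             # keep the amplitude reasonable
    return np.concatenate(columns)

# A bright diagonal line becomes a one-second falling sweep.
img = np.zeros((16, 16))
np.fill_diagonal(img, 255)
print(image_to_soundscape(img).shape)             # about one second of samples

And it doesn't have to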
be through the ears. This is a version where
there's an electrotactile grid on your forehead. And whatever the
camera's seeing, you feel that poked
onto your forehead with these little shocks. Why the forehead? It's because you're not
using it for anything else. The most modern incarnation
is called the BrainPort. Same thing, for blind people
the camera sees something, and then it's put onto
a little electrotactile grid on the tongue. So it feels like Pop Rocks on the tongue, and blind people
get so good at this that they can do things like
throw a ball into a basket, or navigate a complex
obstacle course. So if this sounds
completely insane, to be able to see
through your tongue, just remember that's
all that vision ever is. All vision ever is is
signals coming from-- in the usual case, from the retina-- turned into spikes
and sent back to the brain. And the brain figures
out what to do with it. Same thing here. So in my lab, one
of the things that I got interested in many years ago
was this interesting question of, could I create sensory
substitution for the deaf? And so the question is, if I
had a person say something, could a deaf person
understand exactly what is being said just with
some sort of technology that we build? And so here was the
idea we came up with. So first of all,
let's say I have a phone that's picking up
on the different frequencies in the room. So here if I go,
[MAKES PITCH CHANGING NOISE] you can see the thing picking
up on the different frequencies. And the idea is, could I
turn all those frequencies into a pattern of vibration
on the torso, let's say? So that whatever sounds
are being picked up, you're feeling those patterns
of vibration on the torso.
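To make that mapping concrete, here is a minimal Python sketch of the idea (an illustration only, not the actual vest firmware; the sample rate and the 300 Hz to 8,000 Hz band are assumptions borrowed from numbers mentioned later in the talk): take a short frame of audio, compute its spectrum, lump the spectrum into 32 bands, and let each band's energy set how hard the corresponding motor vibrates.

import numpy as np

NUM_MOTORS = 32          # motors on the vest, as described in the talk
SAMPLE_RATE = 16000      # Hz; assumed microphone rate
FRAME_MS = 16            # the talk mentions 16-millisecond frames

def audio_frame_to_motor_levels(frame: np.ndarray) -> np.ndarray:
    """Map one frame of audio samples to 32 motor intensities in [0, 1]."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / SAMPLE_RATE)
    keep = (freqs >= 300) & (freqs <= 8000)             # roughly the speech band
    bands = np.array_split(spectrum[keep], NUM_MOTORS)  # 32 frequency bins
    energy = np.array([band.mean() for band in bands])
    peak = energy.max()
    return energy / peak if peak > 0 else energy        # loudest bin drives its motor fully

# Example: a 440 Hz tone mostly lights up one low-frequency motor.
n = int(SAMPLE_RATE * FRAME_MS / 1000)
t = np.arange(n) / SAMPLE_RATE
print(audio_frame_to_motor_levels(np.sin(2 * np.pi * 440 * t)).round(2))

Each new frame updates the whole pattern, so speech becomes a continuously shifting pattern of vibration across the torso. And so that's what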
we ended up building. And so this is the vest that
we built. And the idea is, I'm feeling the sonic
world around me. So as I'm speaking--
can you guys see the lights from where you are? I know it's sort of
bright where I'm standing. So as I'm speaking,
the sound is getting translated into a pattern
of vibration on my torso. I'm feeling the
sonic world around me as a pattern of vibrations. So we've been working
with the vest for a while, and it turns out
that deaf people can start understanding
and feeling what is being said this way. So let me just give
you an example. This was actually our very first
subject, Jonathan, 37 years old, born profoundly deaf. And so we trained
him on the vest for four days, two hours a day. And here he is on his fifth day. Oh, could you turn
the volume on? Let me start that over. [VIDEO PLAYBACK] - You. DAVID EAGLEMAN: So my graduate
student, Scott, says a word. Jonathan, who's totally
deaf, feels it on his vest, and writes on the board
what he's understanding. - Where. Where. Touch. Touch. [END PLAYBACK] DAVID EAGLEMAN: So the
thing is, Jonathan's not doing this consciously. It's not a conscious
translation, because the frames
are 16 milliseconds, and there's 32 motors,
and it's very complicated. Instead, his brain is
unlocking the patterns. And the way to really
understand this is to think about what
your own ear does. I mean, your own ear is
picking up on all the sound, and breaking it up into
frequencies from low to high, and sending that to the brain. And your brain is
just figuring it out. It sounds like, oh, that's
Eagleman's mellifluous voice that's going-- but in fact, your brain is
busting it up into frequencies and doing all this work on it. And that's exactly
what Jonathan is doing. And you can think
about this also with-- like when somebody is reading
Braille, a blind person, it's just bumps
on the fingertip, but they can read a
novel and laugh and cry because it has meaning to them. The meaning has nothing to do
with how it's getting in there, it has to do with how your brain
is interpreting that centrally. And that's exactly
what's going on here. I'll just show you
this, and this is useful because maybe this
is brighter than what you can see on the stage. But here she's saying sound,
here she's saying touch. And you can just watch
the pattern for a minute and you get the difference here. So just as an example,
the word "touch" has a high frequency bit when
she says c-h, and so you see, the touch. And here she's saying "sound". And so you can see how this
works just by looking at it. And maybe that gives
you a sense [INAUDIBLE], because the reason I think
this is really important is because the only option
for people who are deaf is a cochlear implant. And that's $100,000 and
an invasive surgery. And we can make our
vest for less than $500, and that opens it up
to the whole world. That means that
deaf people anywhere don't have to worry about
something like that. Obviously, insurance
typically covers this, but you still pay about
$9,000 out of pocket. And so this is something
that doesn't require surgery and is much less expensive. So that's why I think
this matters a lot. We recently had National
Geographic at our offices and we were filming. Here's a guy who's deaf, but
it's actually not because of him that we were filming. It's because of his daughter,
who's deaf and blind. And we made a
miniature vest for her. We actually have a
second subject now, another little girl
who's deaf and blind. And this is the only
input she's getting. I mean, the whole world
is cut off to her. The Umwelt is not
something she's receiving. Here, her grandmother's
taking her around and touching her feet against
things saying, OK, that's soft, that's hard, that's
cold, whatever. Here, it's hard to see, but
she's on a bed that's going up and down, and so the
grandmother's saying, down, down, down, and
then, up, up, up. And she's just training her
on these correlations, which is exactly how you learn
how to use your ears, just by understanding these
sorts of correlations. So this is ongoing work: over the next six months or a year, we'll have
a lot more participants on this and a lot more data
about how that's going. But the key is that
young brains are so plastic that this is where
things are really going to fly. We've also built
a wristband that does the same thing as the
vest, but instead of 32 motors, it's got 8 motors on it. So it's slightly low resolution,
but it's much less friction. As far as people using it,
this is our first subject with the wristband. He happens to be the president
of the San Francisco Deaf Association, and
he ended up crying when he wore this because the
whole world was coming to him. And so he's just describing
here what kind of things he's able to do. [LAUGHTER] So anyway-- so we're
doing lots of stuff with this sensory substitution. It's been very heartwarming
and encouraging to us how all this is going, and
we're screaming along with this. And if anyone's ever in
Palo Alto in California, please come by and
visit our offices. I'll show you what we're doing. But what I want to
tell you about now is the stuff that
we're doing not just with sensory substitution,
but I started thinking a lot about sensory addition. What if you took
somebody who didn't have deafness, or blindness,
or something like that, and added something on? So for example, what if you
took a real-time stream of data from the internet and fed it in? Could you come to have a
direct perceptual experience of something that's new? So here's an experiment
that we did in my lab where this guy is feeling a
real-time feed of data for five seconds, a feed of
data from the internet. And then two buttons appear,
a yellow and a blue button. And he chooses one,
and a second and a half later he gets feedback either of
a smiley face or a frowny face. Now, he doesn't know
that what we're doing is feeding him real-time
data from the stock market and he's making buy
and sell decisions. And what we're seeing is whether
he can tap into and understand or develop a direct perceptual
experience of the stock market and the economic
movements of the planet. This is a totally new kind
of human Umwelt, something that humans
don't normally experience. Another thing we're
doing, we can obviously scrape the web for
any kind of hashtag and feel what's going on with
the community on Twitter. And again, this is a
new kind of experience for humans to be plugged into
the consciousness of thousands or millions of
people all at once and feel what's
happening with that. It's a bigger experience than
a human can normally have. We're doing lots of things
like taking a molecular odor detector and hooking it up
to somebody, so that you don't need the dog anymore. So that you can experience
the same sorts of smells that the dog can and feel the
different substances that way. We're working with
robotic surgery so that a surgeon doesn't have
to keep looking to understand what the data is with the
patient in terms of blood pressure, and how
the patient's doing, and so on, but instead
can feel all that data. We're working with patients
with prosthetic legs where-- for somebody
with a prosthetic, it's actually hard to learn
how to walk because you're not feeling your leg. You have to actually look where
the leg is to understand where it is sitting at all moments. So we just hooked up
pressure and angle sensors into a prosthetic, and then
you feel that on your torso. And it turns out,
this is unbelievably helpful in getting someone
to just use it and walk, because it's just
like your real leg. It's just like your real
leg, and you're feeling what your real leg is doing. It's just you feel it on
a slightly different patch of skin. And it turns out it's no problem--
that's actually quite easy for the brain to figure out. Another thing that we're doing
that's very easy for the brain to figure out is we did this
collaboration-- oh, sorry-- it's a collaboration that we did
with a Google team in the Bay Area where they have LIDAR
set up in their office. So we came and tapped
into the data stream so that we could tell the
location of everything, and then we brought
in a blind participant and put the vest on him. And he could tell
where everybody was by feeling where
people are around him. But then also, we put in
this navigation function where we said, OK, go
to this conference room. And he's never been
here before, and he just follows, OK, go straight,
go left, go right. And he just follows
along and gets right to where he's going this way. I was at a conference two weeks
ago that Jeff Bezos puts on, and last year at this conference
he got in a mech suit. So this is a giant robot,
and he's sitting here, and he can control
this mech suit. And so what my team did
this year is put together-- this is just in VR, but
we did this demo of OK, if you were actually
in the mech suit, then what would you want
to feel from the robot? And specifically, it's
every time the robot steps, you feel that. When the robot's moving
its arms, you feel that. You feel all the
data from the robot. If somebody throws something
at the robot and hits it, you feel that. So the idea is if you're
inside this mech suit, the thing that really
ties you in and makes you one with the
machine is feeling what the machine is doing. So we had a very
cool demo of that. We're doing various things with
VR where inside the VR world you are-- in this
case, it's just sort of a shooter game
for entertainment. But the idea is you're getting
shot at from different angles, and you turn around,
and you see where people are shooting you from. But what we're
doing with this now is we've just made this for
social VR, where you can-- it's a haptic suit, so
that while you're in VR and people are touching
you, you feel that. So if someone touches you in
VR, you feel it in real life. Or you feel the raindrops,
or bumping into a wall, or somebody throwing
a tomato at you, or whatever the thing is, in VR
you're actually feeling that. Have you guys seen
"Ready Player One"? Who's seen "Ready Player One"? OK, a few of you. So there's a haptic suit in
there, and so we've got that. And so we're launching this
with High Fidelity, which is-- if you guys remember Second Life, High Fidelity was started by the guy who started that. High Fidelity, it's the
new social world of VR. So that's what we're
doing with that. As Sarah mentioned, I'm the
advisor for "Westworld", and so the vest is in
"Westworld" season 2, which starts Sunday at 9:00 PM. And I'm calling it
"Vestworld" now. So we're doing various things. We have with drone
pilots, we hooked it up so that the drone is
passing the pitch, yaw, roll, orientation, and heading to
the person wearing the vest. So it is essentially like
extending your skin up there. So you are feeling exactly
what the drone is experiencing.
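As a rough sketch of what passing that state down to the vest could look like (a toy Python mapping with made-up channel assignments and ranges, not the actual demo code), each attitude channel gets squashed into the 0-to-1 range and sent to its own patch of motors:

def drone_state_to_motor_levels(pitch_deg, roll_deg, yaw_rate_dps, heading_deg):
    """Map a few drone state channels onto motor groups, each level in [0, 1]."""
    def squash(value, lo, hi):
        return min(max((value - lo) / (hi - lo), 0.0), 1.0)

    return {
        "pitch_motors":   squash(pitch_deg, -45, 45),     # nose down ... nose up
        "roll_motors":    squash(roll_deg, -45, 45),      # banking left ... right
        "yaw_motors":     squash(yaw_rate_dps, -90, 90),  # spinning left ... right
        "heading_motors": (heading_deg % 360) / 360.0,    # which way it's pointing
    }

# Level flight, pointing roughly east:
print(drone_state_to_motor_levels(0, 0, 0, 92))

And the advantage is that you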
can learn to fly in the dark, in the fog, things like
this, because you are-- it's just like the mech suit. You're becoming one
with the machine and you're feeling it that way. There's a lot of
talk about brain-computer interfaces
where you're-- I mean, two of my
colleagues and friends are doing companies where
they're thinking about, how do we implant
electrodes into the brain? But the fact is that planting
electrodes in the brain has a lot of limitations, the
main one being neurosurgeons don't want to do it
because there's always risk of infection and
death on the table. And consumers don't
necessarily want to get a hole drilled
in their head, so this is a solution that's
readily available right now. And where this is going, by the
way, is with things like this. So this is what a modern
cockpit looks like, and there's an unbelievable
number of gauges and things to look at. And the thing is, our
visual systems are very sophisticated
in certain ways, but what they're good at is
detecting motion, and edges, and blobs. What they're bad
at is looking at high dimensional information. So what you have to
do if you're a pilot is look at each one
of these individually. You can only attend to
one thing at a time. It turns out that with
the somatosensory system, you can take in high dimensional
information, which is why you can balance on one leg. There's information from
all these different muscle groups coming in, and my brain
has no problem integrating this high dimensional
information to do that, whereas your visual system
runs in a very different way, and it's very much about
serial, focused process. And so the idea is, we're living
in a world of big data now. And is there a way to,
instead of just having access to big data, to
experience it directly? So this is one of the places
we're going with that. Our goal is to do this
with factories, as well. Instead of staring
at monitors, just imagine feeling the
state of the factory in this high dimensional system. And I'm not talking
about alerts. Alerts are easy, you don't
need something like this. But I'm talking about feeling
how the whole system is going and where it needs-- where the pattern is moving in
this high dimensional space. And the key is, I think
with the right sorts of data compression, there's really
no limits to the kind of data that we will be able to take in. And so, just
imagine an astronaut being able to float
around and instead of look at all the monitors,
to understand how the International
Space Station is doing. Just, they feel it at all times. Or having access
to the invisible states of your own health. So your blood
pressure, and the state of your microbiome, and
so on, all these things that are invisible
to us, imagine having them made explicit
so you're feeling that. Or being able to see
infrared or ultraviolet. Or being able to
see in 360 degrees. So essentially, there's no
end to the possibilities on the horizon here. And I think the key is, as
we move into the future, we're going to increasingly
be able to choose our own peripheral devices. So we don't have
to wait for mother nature's sensory gifts
on her time scales, eyes, and ears, and nose,
and fingertips, and so on. We don't have to wait
around for that anymore, because that takes several
million or hundreds of millions of years for each new iteration. But instead, like any good
parent, what she's given us is the capacity to
go out there and create our own trajectory. And so the question, especially
with a smart audience like this, is how do you want
to experience your universe? Thank you very much. [APPLAUSE] The applause feels
good on the vest. So I'll take any
questions about anything. I think I'm supposed
to tell you guys to go to the microphones for that. AUDIENCE: So yeah,
I'm wondering, what are the limits of the
haptic perception that you have? Or where does it break down? Or is there fatigue after
a while that you get tired, or you start getting
numb to the perceptions? DAVID EAGLEMAN: Great. Let me answer this in two ways. So as far as the getting
numb part goes, no. What's interesting
is, when I first put the vest on every day or
the wristband, for the first-- I don't know-- let's say
60 seconds, I'm feeling it and I'm really aware of it. And then it fades
into the background. But it's not because
I'm getting numb, because if anything
happens that is unexpected, I immediately feel it. So instead, it's
just like the feeling of your shoe on your left foot. You're not paying
attention to it, but it suddenly you
get a pebble in it, then you're paying
attention to it. Or you can attend
to it right now and think about how
your foot feels. So it's exactly like
that with the vest. And the key thing
about using the skin is that the skin is the
largest organ of the body, and it's incredibly
sophisticated. It's got all these
receptor types in it, and it's this
unbelievably useful organ, but we just don't
use it for anything. The joke in the lab is
that we don't call this the waist for nothing. It's just totally not used. And so yeah, anyway,
you don't fatigue. As far as the limits
go of what kind of data we can pass into it,
we don't know yet. What is clear is that some
things you learn instantly. Just as an example, the thing
we did with blind people, where there's LIDAR which knows
the location of everything. And the guy who's wearing
it, he can tell, OK, there's someone walking up on my left. Oh, now the person's walking
around behind me, and so on. No learning. I mean, it was
instantly he got it. With something like
deafness, people have to-- people immediately do get
some things right away. Like if we present to
the wristband or the vest a dog bark, or a smoke
detector, or a baby crying, whatever, they get
that right away. But other things are
more challenging. It feels to me like the more
removed the data set is-- like let's say I'm
doing factory data-- it just has to be something
where you train and learn on it. AUDIENCE: Thanks. DAVID EAGLEMAN: Thanks. Let's go over--
let's switch sides. Yeah? AUDIENCE: Is there any
kind of problem with having your skin do double duty? Like, could you get so used
to hearing through your skin that if someone
were to touch you, you kind of hear something then? DAVID EAGLEMAN: Great question. The answer probably
is no in the sense that the way that you
hear is this very high dimensional pattern. And so someone would
have to touch you in a very particular way
every 16 milliseconds. So that's why we haven't
run into that yet, and I don't foresee
that happening. Yeah, that's the answer. And the general story is that,
like I said, because we all wear clothes nowadays and
so on, it's not really-- we're not utilizing this
for much of anything. By the way, other
people have come up with very clever ways of
using hearing, or sight, or anything like that
to pass on information. But the problem is, those
are senses that you're using. You actually need to use
your vision and your hearing. And the thing with
that BrainPort that I showed you, the thing
that sits on the tongue, it's a great proof of principle
for sensory substitution. But it's really stupid as a
device, because you can't eat and you can't talk when
it's in your mouth. So this is why I really
wanted to do something that was totally unobtrusive,
as in you guys didn't even know I was wearing it. It's just something
worn under the clothing, and something that takes
advantage of all the skin that you're not
using for anything. AUDIENCE: Thanks a lot
for doing this talk. This is extremely interesting. I was curious to learn more
about the learning process, because if you make an
analogy with machine learning, there usually needs
to be some labeled data, there needs to be a training signal: this prediction's extremely wrong, this prediction's OK. So I was curious,
have you started thinking of how to
make the brain pick up the interpretation
faster, better? Is there [INAUDIBLE]. So how does it work? DAVID EAGLEMAN: Yeah, thank you. Great question. So of course, you know
that the difference between artificial neural
networks and brain neural networks is miles of difference,
because with an artificial one you need millions of
exemplars, and you just don't need that with the brain. But the way that we train deaf
people, for example, is we'll present a word to, let's say,
the wristband or the vest. So you [BUZZING SOUND]
and then you'd see four choices on
the screen, and you have to guess which
word you just felt. And at first, you have no idea. So you make a guess, and
you're given feedback about what's right and wrong. This is just like
this foreign language learning programs
where you get feedback, and you start getting better
and better at it every day. The reason we do
those sorts of tests is so that we can quantify
exactly how things are going. But the real way that deaf
people learn is two of them. One is, they watch your lips. And as they're watching
your lips and feeling it, they're making the
correlation that way between what they're seeing
and what they're feeling. And the other way, which is even
better, is when they vocalize. They say something
and they feel it. And that's, by the way, how you
trained up your own ears when you were a baby. You know, you'd babble,
and you're hearing it, and that closes the loop. And you figure out
how to use your ears. And that's what's going on here. Thank you. Yes? AUDIENCE: So you talked a lot
about substituting new senses for an organ that maybe
doesn't exist for someone, or introducing some new sense. Do you know of any work
about expanding a sense that you already have, such as
seeing a wider range of light, or hearing new things,
or getting better at touching things? DAVID EAGLEMAN: Yeah. So thank you for the question. Ask me this question
again in three months. I'll be able to tell you more
than I can tell you right now. But my deep interest
is in, for example, with the visual spectrum that
I showed at the beginning, if we were born
500 years ago, it would have been a very
different situation because the world was unmapped. And you would have been able to
sail around and find new lands. Now, we can't do that. Everything is already
known in the world. But that's not true for
the visual spectrum, for the EM spectrum. I feel like I get
to be a pioneer and walk around
on this 10 billion sized grid to find out
what is meaningful to us as humans on that grid. And no one's ever walked
around in there before. And obviously, we build
machines in our cars to pick up on radio waves, we
build machines in hospitals to pick up on x-rays. And so we have various things
that pick up on different parts here. But there's a difference when
you're actually a human walking around in this spectrum. Just as an example,
some friends of mine make microwave cameras
that sit on satellites for various reasons. But what they discovered
totally accidentally is that you can tell if water
is drinkable or polluted just by looking at it in
the microwave range. But no one ever
knew that before. Why? Because you needed to be a human
who cares about these things to say, oh look, there's this
thing here and it's strange. So the point is, I
feel like there's-- if I had to make
a guess, I'd guess there's 30 Nobel
Prizes that are hidden along the spectrum for
people to just make discoveries about cool stuff. So I should mention, one of
the things that we're doing is we're releasing the vest and
the wristband with an open API. So people can put in whatever
data streams they want. They can wear cameras
for different parts of the range, hearing for
parts of the hearing frequen-- anything like that,
and go around and see what's out there in the world. AUDIENCE: Thank you. DAVID EAGLEMAN:
Thanks very much. AUDIENCE: Hi. I'm working on the
intersection of VR and empathy, and I think a lot about
perception of emotion. And I was wondering if you think
we could use a similar device to help people understand
the other person's emotions or the emotions around them. DAVID EAGLEMAN: Thank
you for the question. It would totally
depend on having a sensor that can do that. In other words, if
I had a machine that did, let's say, facial
recognition, or pitch recognition of
voice, or whatever, and could figure out
the answer, then it's easy to feed that in so that
I'd become more aware of how somebody's feeling. But I would need
the sensor in order to tell me what the right
answer is to feed it in. And the other thing
is, I've gotten a version of this
question several times about whether this would
be useful for autism. Probably not. And the reason is many
kids with autism have what's called sensory processing
disorder, where they can't stand the feel of things like
the clothes they're wearing, or whatever. And so having all this
buzzing probably wouldn't work for them, unfortunately. But anyway, that's the answer
about empathy or anything else, is if there's a way
to sense it, then it's very easy to feed it in
so you're getting that data. AUDIENCE: If I can have
a second question-- so you work on
sensory data, do you think it can help change
the way the brain works? Like if there's
a brain disorder, can these devices
be a complement to the way we process data? DAVID EAGLEMAN: Yeah,
I totally think so. I mean, this is just
one example of many, but the thing about
the prosthetic leg, it's that you just don't
have that data anymore coming from your leg. And it just took us two hours
to be able to just fix that. So now somebody can feel their
leg as though it's a real leg. And I think one of the
big problems with stroke, with Parkinson's
disease, and so on, is losing sensation in a limb. So forget prosthetics for a
minute, just hooking this up so that you can feel what
your limb is doing. So it doesn't just feel
like this big, numb thing, but you're feeling it. That's another example. Thanks very much. AUDIENCE: Hi. So I find the navigation
applications very interesting. And my question is, before
we get to the airplanes and spacecraft navigation--
not that that's not important-- is there
an application to more immediate navigation need? For example, I can't tell
you how many times I almost got hit by a car looking
at Google Maps on my phone, no offense to Google. Or navigating back to safety. Like, I do have a friend
who got lost skiing because he lost his way. Or like that episode
of "The Office" where Michael drives his car
into a lake because of the GPS. So I was wondering if there's
any more immediate applications that we can use in
our daily lives? DAVID EAGLEMAN:
Yeah, the thing-- thank you for asking-- the thing that we did with
the blind participant, where he's getting navigation
directions that way-- AUDIENCE: In the office? DAVID EAGLEMAN: In
the office, right. Exactly. And as I said, that's a product
we're doing in collaboration with Google. I'm now transferring that
over to the wristband. And so I built the
wristband with eight motors, so that you have the
cardinal directions plus the in-between directions.
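As a toy illustration of why eight motors are enough for direction cues (a Python sketch of the idea, not the product's code), any compass bearing can simply be snapped to the nearest of eight motors spaced 45 degrees apart around the wrist:

NUM_MOTORS = 8   # one motor per 45-degree sector around the wrist

def bearing_to_motor(bearing_deg: float) -> int:
    """Snap a 0-360 degree bearing to the nearest of the 8 motors."""
    return round((bearing_deg % 360) / 45.0) % NUM_MOTORS

# "Straight ahead" buzzes motor 0, "turn right" the 90-degree motor, and so on.
print(bearing_to_motor(0), bearing_to_motor(95), bearing_to_motor(180), bearing_to_motor(350))

And it doesn't have to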
be someone that's blind. It can be for any reason at all. First of all, if there's any
kind of detection about what's around you, you can know, oh,
there's someone to my right, there's someone behind
me, there's whatever. Or you can be told, oh
yeah, when you get up here, turn right, turn left, blah,
blah, blah, that sort of thing. AUDIENCE: OK. Because I remember
thinking I wish Google Maps had a
vibration thing that vibrated on my wrist so that I would know which way to
turn instead of having to look down on my phone. DAVID EAGLEMAN: Exactly
right, that's exactly right. And I'll just mention
for clarification that some of these people
say, oh, well wait, doesn't like the Apple
iWatch do stuff like this? But of course, it doesn't. It just has a
single motor in it. And so by having the spatial
pattern of the motors, one of the things that's
trivial for the brain to learn is, oh you know, OK I got it. That's left, that's
the right, that's behind me, that's
in front of me. That's easy. AUDIENCE: OK, thank you. DAVID EAGLEMAN: Yeah, thanks. Yes? AUDIENCE: Hi. How much does your team
know about how difficult it is for someone to switch
between different kinds of sensory augmentation? In other words,
will I be limited to a single sensory
augmentation app in my vest? DAVID EAGLEMAN: That's
a great question. We don't know the
answer to that yet. Here's what I can
tell you, the brain has what are called
schema, where it's like OK, in
this situation, this is what this data stream means. In that situation,
this is what it means. I'll just give you an example. A few months ago, I
was throwing a football around with some friends
and I-- it hit my vehicle and knocked the rear
view mirror off. So that afternoon, I got in
my vehicle and I was driving. And I noticed, I kept making
eye movements up here, and I was seeing into the trees. And I thought, what am I doing? And it's because, of course,
I'm used to looking that way to see behind me. But I'm only doing that when
I'm sitting in my car seat. I would never do that
walking around the street. I would never suddenly look
there to see behind me. So my brain had unconsciously
learned a schema, which is when I'm
in this context, then I've got these completely
different sensory capacities. So the point is, the
brain's always doing this. So it may be possible
to learn more than one. We don't know. We just haven't tried that. My best guess for
what would be easiest is to have, like, two
wristbands, or you know, an ankle bracelet, or what-- we're building all sorts
of other form factors, too. And so, depending on the
apps that you wanted-- like, if there were mainly
two that you wanted-- possibly, it would
be easiest just to have them on separate
parts of the body which go to separate parts of the brain. Thank you. Ask me that again in
six months, and I might have more data to tell you. Thanks so much. AUDIENCE: I have two questions. What happens to the
visual cortex of someone who is born blind? And second, if you're
translating visual signals to auditory for
someone who is blind, do you see activity in
that area of the brain? DAVID EAGLEMAN: Great questions. And this is actually the
topic of my next book that comes out next year called
"Live Wired", which is to say, what you have are these cables
that plug into the cortex. So from the eyes, you
have data cables that go, and they plug-in
back here, and then we say, oh, that's
the visual cortex. But in fact, the only
reason we ever think of that as the visual cortex
is just because that's where the information goes
that becomes the visual cortex. But if you are
born blind, that's no longer the visual cortex. Instead, it gets
taken over by hearing, by touch, by vocabulary
words, by all that stuff. Why? Because the cortex is
actually the same everywhere, all over the brain. And what it looks like, and
what we call it in textbooks, is just a matter of what kind
of data is plugging into it. So back in the
early '90s, in fact, [INAUDIBLE], a
colleague of mine, took the visual neurons
that would normally go to the visual cortex,
and he rerouted things so they plugged into
what we normally call the auditory cortex. And then that became
the auditory cortex. Sorry, it became
the visual cortex. In other words, if
you plug that data in, that's what shapes that area. What we now know is that
this is incredibly fast, this whole process. So if you blindfold me tightly
and stick me in a scanner, within 90 minutes
my visual cortex is starting to respond
to sound, and touch, and things like that. So in other words, the
takeover of these areas is extremely fluid. So that's the answer
to the question is there's nothing special
about visual cortex or whatever. It's just a matter of how
much information is coming in, where that information
is coming in. And if the brain
finds it relevant, salient, then it
devotes territory to it. Thanks. AUDIENCE: So a lot
of the applications that we saw in the vest have
been for strictly communicative purposes. Is there also, say, like a
possible emotional response if you played a
song to the vest? Could you learn to perceive
something like music through the more
tactile sensation and get the same
kind of response you get from hearing it? DAVID EAGLEMAN: Yeah,
that's a good question. So one thing we've
discovered quite accidentally is that deaf people
really like listening to music on these things. AUDIENCE: It probably
feels really good DAVID EAGLEMAN: Exactly,
it feels really good. And in fact, one
thing we've done is listened to, for example,
the radio with this on. And it's broken up in all
the different frequencies. And the singer hits a high
note, and you're feeling it, it's an amazing feeling. And when you turn the
vest off, The music feels sort of thin like
you're missing something now. So it is terrific. One thing I'll just point out is
that we only have 32 frequency bins on here, so you're
not actually capturing all the possible notes. You're just capt-- sort of
lumping binning of those. Nonetheless, what
you get out of it is the rhythm, and the feeling,
and where the music's going, and the highs, and
lows, and all of that. So people, even though I
hadn't predicted the vest, they like that possibly
more than anything else to do with the vest. AUDIENCE: 32 is a
lead-in to my question. How do you characterize
the richness of what you can input through the vest? I mean, I suppose there's
frequency, and amplitude, and spatial resolution. And what are the dimensions? And the follow-up
question then is, how does that compare to
the potential capability of the torso? DAVID EAGLEMAN: So let me
say three things about that. One of them is we've also built
a version with 64 motors on it, and the only reason we're
not using that is because 32 seems to be totally sufficient,
and it's easier and cheaper to build. So there are several things. One is what is the
spatial resolution? How close can we get
these motors together? There's something called
two-point discrimination, which we measure, which is
just-- at some point, if you move signals on the
skin too close together, your brain can't
distinguish those. So we've carefully measured
everything on the torso and published on
this sort of thing about how far they need to be. Anyway, the point is,
sixty-four is easy. We could probably fit, I
don't know, up to 80 or 90 on the torso with no problem. As far as what the
motors represent, I probably with this audience
should have been more technical about it. Each motor is representing a
different part of the frequency bin from low to high. So in other words, this is
the sound that's captured. We typically cut it off from
like 300 Hertz to, let's say, 6,000 to 8,000 Hertz
at the upper end because you don't actually
need anything higher than that even though ears can hear
a little higher than that. And so then each
motor represents some binning of the
frequencies and just represents the amplitude. So if this bin has
a lot of amplitude, then that motor is driven hard-- yeah. I think that was all the
questions that you asked. Did I miss something? Anyway, so what we have-- I'll just mention
one other thing-- we've got a lot of sophisticated
software, three years' worth of stuff that
we've worked on to do all these other tricky
things like the noise floor, and so let's say we're
talking and suddenly the air conditioner kicks on. It goes [BUZZING SOUND]. Within 20 or 30 seconds,
that will get canceled out. So I'm not hearing noise in
any different frequency bin, and we have an adaptive ceiling
and adaptive noise threshold, and all kinds of other
tricks we put in.
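As a rough illustration of that kind of adaptive noise floor (a simplified Python sketch of the general idea, not the team's actual software), you can keep a slowly updated estimate of the steady background level in each frequency bin and pass along only what rises above it, so a constant hum fades out after a while but new sounds still get through:

import numpy as np

class AdaptiveNoiseFloor:
    """Per-bin background subtraction: steady noise fades out over many frames,
    while changes (speech, a dog bark) still come through."""

    def __init__(self, num_bins: int, adapt_rate: float = 0.02):
        self.floor = np.zeros(num_bins)
        self.adapt_rate = adapt_rate   # small value -> adapts over many frames

    def process(self, bin_energy: np.ndarray) -> np.ndarray:
        self.floor += self.adapt_rate * (bin_energy - self.floor)  # track the background
        return np.clip(bin_energy - self.floor, 0.0, None)         # keep what's above it

# A constant hum in bin 3 disappears after enough frames; a new burst in bin 10 doesn't.
nf = AdaptiveNoiseFloor(num_bins=32)
hum = np.zeros(32)
hum[3] = 1.0
for _ in range(500):
    out = nf.process(hum)
burst = hum.copy()
burst[10] = 1.0
print(out[3].round(3), nf.process(burst)[10].round(3))

But essentially, think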
of it like a Fourier transform with binning. AUDIENCE: That answers
the question for sound, but it doesn't really say,
is this equivalent to what's in "Ready Player One"? DAVID EAGLEMAN: Oh, with the
"Ready Player One" thing, I'm almost
embarrassed about that because that hardly utilizes
all the capabilities we have. In "Ready Player One" it's,
if there's a collision here, buzz that motor. So I'm just feeling where
everything is going. And there's all
sorts of illusions that we implement about-- even though there's a
motor here and here, we can make it seem like any
point along anywhere in between has been touched.
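One classic way to get that effect (a Python sketch of the general trick; their exact method may differ) is amplitude weighting: drive the two neighboring motors at complementary strengths and the skin reports a single touch somewhere in between.

import math

def phantom_point(position: float) -> tuple:
    """Drive levels for two adjacent motors so a single touch is felt at
    'position' between them (0.0 = at motor A, 1.0 = at motor B).
    Square-root weighting keeps the overall felt intensity roughly constant."""
    position = min(max(position, 0.0), 1.0)
    return (math.sqrt(1.0 - position), math.sqrt(position))

# A touch felt a quarter of the way from motor A toward motor B:
a, b = phantom_point(0.25)
print(round(a, 2), round(b, 2))   # 0.87 0.5

I can explain how we do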
those illusions and so on, but that's simply, hey,
where was my avatar touched? That's where I get touched. So that's the easy part. Yes? AUDIENCE: Should I be
worried that I'm too old and my brain isn't going
to be able to pick it up as well as a younger person? DAVID EAGLEMAN: Great question. No. We've tested this on 432
deaf people, as an example, and the oldest is
probably around 70 or 75. And they can get it
pretty easily, as well. AUDIENCE: Is there a difference
in how quickly they pick it up? DAVID EAGLEMAN: Yes, exactly. Very good. So if we plot things from
16-year-olds to 75-year-olds, and we're looking at let's just
say how fast they pick it up, it does go down. And it's essentially linear,
so it just goes down. So it just takes a 75-year-old
longer to learn it. They still learn it,
it's just harder. AUDIENCE: And do they get
to the same level of mastery? DAVID EAGLEMAN: I
think so, I think so. Ask me that again
in about a month and I'll be able to
tell you the data. But the cool part
is on day one, right when people come in, when we
present sounds to the wristband and we say, hey, was that a
dog barking, or footsteps, or a microwave ding,
or whatever, people are pretty good at that straight
away without ever having worn it before. It's sort of
surprisingly intuitive when you're feeling stuff. By the way, and who's
ever around at the end, you can come feel
what it feels like. Yeah, thanks. Yes? AUDIENCE: Have you
spent much time focusing yet on security
and privacy in these? Privacy being if someone could
extract all the sounds that you heard, or security being
if someone could just make it seem like you're
hearing something else? DAVID EAGLEMAN:
Yeah, good question. The answer is yes, we've made
sure this is really secure. So as far as recording
sounds, there's no recording that goes on-- so just as an example,
the wristband, the microphones are
built into here. And it's capturing the data
and doing the Fourier transform and all the other
tricks that we're doing, but it's not getting
recorded anywhere what is actually happening. And with this thing, we
don't record anything either. So we're sure about that. And then for the
passing the information, we're just making sure
that it's all secure. But we've thought
about that, also. It would be-- this is a sci-fi
story 50 years from now that somebody puts in
information that, hey Bob, and you turn around
there's-- anyway. So, thank you for that question. Yeah? AUDIENCE: Hi. I have two related questions. One, what is battery
life on those things? And two, can I buy one? DAVID EAGLEMAN: Great. The battery life
is 16 hours, and we wanted to make it so it's
just like a cell phone. So that you wear this all
day long, for example, and then you plug
it in at night. And the answer is, this you'll
be able to buy in December. And this, we are about July. So this is available for
preorder on the website, and this will come out in seven
months, eight months from now. Thanks very much. Any other questions? SARAH: That's the
end of our time. DAVID EAGLEMAN: Great. SARAH: Thank you very much. DAVID EAGLEMAN: Great. Thank you guys so much. [APPLAUSE]