Let me tell you about Oliver Sacks, the famous
physician, professor and author of unusual neurological case studies. We’ll be looking at some of his fascinating
research in future lessons, but for now, I just want to talk about Sacks himself. Although he possesses a brilliant and inquisitive
mind, Dr. Sacks cannot do a simple thing that your average toddler can. He can’t recognize his own face in the mirror. Sacks has a form of prosopagnosia, also known as face blindness: a neurological
disorder that impairs a person’s ability to perceive or recognize faces. Last week we talked about how brain function
is localized, and this is a particularly striking example of that. Sacks can recognize his coffee cup on the
shelf, but he can’t pick out his oldest friend from a crowd, because the specific
sliver of his brain responsible for facial recognition is malfunctioning. There’s nothing wrong with his vision. The sense is intact. The problem is with his perception, at least
when it comes to recognizing faces. Prosopagnosia is a good example of how sensing
and perceiving are connected, but different. Sensation is the bottom-up process by which
our senses, like vision, hearing and smell, receive and relay outside stimuli. Perception, on the other hand, is the top-down
way our brains organize and interpret that information and put it into context. So right now at this very moment, you’re
probably receiving light from your screen through your eyes, which will send the data
of that sensation to your brain. Perception, meanwhile, is your brain telling
you that what you’re seeing is a diagram explaining the difference between sensation
and perception, which is pretty meta. Now your brain is interpreting that light
as a talking person, whom your brain might additionally recognize as Hank. [Intro] We are constantly bombarded by stimuli even
though we’re only aware of what our own senses can pick up. Like I can see and hear and feel and even
smell this Corgi, but I can’t hunt using sonar like a bat or hear a mole tunneling
underground like an owl or see ultraviolet and infrared light like a mantis shrimp. I probably can’t even smell half of what
you can smell. No! No! We have different senses. Mwah mwah mwah mwah mwah. Yeah. There’s a lot to sense in the world, and
not everybody needs to sense all the same stuff. So every animal has its limitations, which
we can talk about more precisely if we define the Absolute Threshold of Sensation: the minimum
stimulation needed to register a particular stimulus 50% of the time. So if I play a tiny little beep in your ear
and you tell me that you hear it fifty percent of the times that I play it, that’s your
absolute threshold of sensation. We have to use a percentage because sometimes
I'll play the beep and you’ll hear it and sometimes you won’t even though it’s the
exact same volume. Why? Because brains are complicated. Detecting a weak sensory signal like that
beep in daily life isn’t only about the strength of the stimulus. It’s also about your psychological state;
your alertness and expectations in the moment. This has to do with Signal Detection Theory,
a model for predicting how and when a person will detect a weak stimuli, partly based on
context. Exhausted new parents might hear their baby’s
tiniest whimper, but not even register the bellow of a passing train. Their paranoid parent brains are so trained
on their baby that it gives their senses a sort of boosted ability, but only in relation to
the subject of their attention. Conversely, if you’re experiencing constant
stimulation, your senses will adjust in a process called sensory adaptation. It’s the reason I have to check and
see whether my wallet is still there when it’s in my usual right pocket, but if I move it to my left
pocket, it feels like a big uncomfortable lump. It’s also useful to be able to talk about
our ability to detect the difference between two stimuli. I might go out at night and look up at the
sky and, well, I know with my objective science brain that no two stars have the exact same
brightness, and yeah, I can tell with my eyeballs that some stars are brighter than others,
but other stars just look exactly the same to me. I can’t tell the difference in their brightness. Are you done? Is it time for your to go? Gimme, gimme a kiiiissss. Yes, yes. Okay. Good girl. The point at which one can tell the difference
is the difference threshold, but it’s not linear. Like. if a tiny star is just a tiny bit brighter
than another tiny star, I can tell. But if a big star is that same tiny amount
brighter than another big star, I won’t be able to tell the difference. This is important enough that we gave the
guy who discovered it a law. Weber’s Law says that we perceive differences
on a logarithmic, not a linear, scale. It’s not the absolute amount of change that matters; it’s the percentage change. So if you can just barely tell a 100-gram weight from a 102-gram weight, a two percent difference, you’d need about a 20-gram difference to notice that a one-kilogram weight had gotten heavier. Alright. How about now we take a more in-depth look
at how one of our most powerful senses works? Vision. Your ability to see your face in the mirror
is the result of a long but lightning quick sequence of events. Light bounces off your face and then off the
mirror and then into your eyes, which take in all that varied energy and transform it
into neural messages that your brain processes and organizes into what you actually see,
which is your face. Or if you’re looking elsewhere, you could
see a coffee cup or a Corgi or a scary clown holding a tiny cream pie. So how do we transform light waves into meaningful
information? Well, let’s start with the light itself. What we humans see as light is only a small
fraction of the full spectrum of electromagnetic radiation that ranges from gamma to radio
waves. Now light has all kinds of fascinating characteristics
that determine how we sense it, but for the purposes of this topic, we’ll understand
light as traveling in waves. A wave’s wavelength, and thus its frequency, determines
its hue, and its amplitude determines its intensity, or brightness. For instance, a short wave has a high frequency. Our eyes register short wavelengths with high
frequencies as bluish colors, while we see long, low-frequency wavelengths as reddish
hues. The way we register the brightness of a color,
the contrast between the orange of a sherbet and the orange of a construction cone has
to do with the intensity or amount of energy in a given light wave. Which as we’ve just said is determined by
its amplitude. Greater amplitude means higher intensity,
means brighter color. Someone’s just told me that sherbet doesn’t-
isn’t a word that exists. His name is Michael Aranda and he’s a dumbhead. Did you type it into the dictionary? Type it into Google. Ask Google about sherbet. So sherbet is a thing. So after taking this light in through the
cornea and the pupil, it hits the transparent disc behind the pupil: the lens, which focuses
the light rays into specific images, and just as you’d expect the lens to do, it projects
these images onto the retina, the inner surface of the eyeball that contains all the receptor
cells that begin sensing that visual information. Now your retinas don’t receive a full image
like a movie being projected onto a screen. It’s more like a bunch of pixel points of
light energy that millions of receptors translate into neural impulses and zip back into the
brain. These retinal receptors are called rods and
cones. Our rods detect shades of gray and are used in
our peripheral vision as well as to avoid stubbing our toes in twilight conditions when
we can’t really see in color. Our cones detect fine detail and color. Concentrated near the retina’s central focal
point, called the fovea, cones function only in well-lit conditions, allowing you to appreciate
the intricacies of your grandma’s china pattern or your uncle’s sleeve tattoo. And the human eye is terrific at seeing color. Our difference threshold for colors is so
exceptional that the average person can distinguish a million different hues. There’s a good deal of ongoing research
around exactly how our color vision works. But two theories help us explain some of what
we know. One model, called the Young-Helmholtz trichromatic
theory, suggests that the retina houses three types of color receptor cones that register
red, green and blue, and when stimulated together, their combined power allows the eye to register
any color. Unless, of course, you’re colorblind. About one in fifty people have some level
of color vision deficiency. They’re mostly dudes, because the genetic
defect is sex-linked. If you can’t see the Crash Course logo pop
out at you in this figure, it’s likely that your red or green cones are missing or malfunctioning
which means you have dichromatic instead of trichromatic vision and can’t distinguish
between shades of red and green. The other model for color vision, known as
the opponent-process theory, suggests that we see color through processes that actually
work against each other. So some receptor cells might be stimulated
by red but inhibited by green, while others do the opposite, and those combinations allow
us to register colors. But back to your eyeballs. When stimulated, the rods and cones trigger
chemical changes that spark neural signals, which in turn activate the cells behind them,
called bipolar cells, whose job it is to turn on the neighboring ganglion cells. The long axon tails of these ganglion cells braid
together to form the ropy optic nerve, which is what carries the neural impulses from the
eyeball to the brain. That visual information then slips through
a chain of progressively more complex processing levels as it travels from the optic nerve to the thalamus,
and on to the brain’s visual cortex. The visual cortex sits at the back of the
brain in the occipital lobe, where the right cortex processes input from the left visual field and
vice versa. This cortex has specialized nerve cells, called
feature detectors, which respond to specific features like shapes, angles and movements. In other words, different parts of your visual
cortex are responsible for identifying different aspects of things. A person who can’t recognize human faces
may have no trouble picking out their set of keys from a pile on the counter. That’s because the brain’s object perception
occurs in a different place from its face perception. In the case of Dr. Sacks, his condition affects
the region of the brain called the fusiform gyrus, which activates in response to seeing
faces. Sacks’s face blindness is congenital, but
prosopagnosia can also be acquired through disease or injury to that same region of the brain.
just one type of stimulus, like posture or movement or facial expression, while other
clusters of cells weave all that separate information together in an instant analysis
of a situation. That clown is frowning and running at me with
a tiny cream pie. I’m putting these factors together. Maybe I should get out of here. This ability to process and analyze many separate
aspects of the situation at once is called parallel processing. In the case of visual processing, this means
that the brain simultaneously works on making sense of form, depth, motion and color and
this is where we enter the whole world of perception which gets complicated quickly,
and can even get downright philosophical. So we’ll be exploring that in depth next
time but for now, if you were paying attention, you learned the difference between sensation
and perception, the different thresholds that limit our senses, and some of the neurology
and biology and psychology of human vision. Thanks for watching this lesson with your
eyeballs, and thanks to our generous co-sponsors who made this episode possible: Alberto Costa,
Alpna Agrawal PhD, Frank Zegler, Philipp Dettmer and Kurzgesagt. And if you’d like to sponsor an episode
and get your own shout-out, you can learn about that and other perks available to our
Subbable subscribers at subbable.com/crashcourse. This episode was written by Kathleen Yale,
edited by Blake de Pastino, and our consultant is Dr. Ranjit Bhagwat. Our director and editor is Nicholas Jenkins,
the script supervisor is Michael Aranda who is also our sound designer, and our graphics
team is Thought Cafe.