It’s Professor Dave, let’s do some processing. We’ve learned about how neurons transmit
information, as well as the way that neurons are organized so as to integrate that information. So now let’s start to look at some of the
specific information that is to be integrated. We receive information about our surroundings
through the senses. We see objects with the eyes. We hear sounds with the ears. We smell and taste things with the nose and mouth. How does this information get to the brain,
and what does the brain do with it? Let’s get a closer look at this process
now, starting with vision. We got a basic introduction to the structure
of the eye in the anatomy and physiology course, when we looked at the nervous system. We will assume that knowledge here, so make
sure to check out that tutorial if you need to review that terminology. With the structure of the eye understood,
we will now get a closer look at the mechanism by which visual information is transmitted
to the brain, and how that information is interpreted. So let’s start at the beginning. Vision is based on light interacting with
the eyes. There is plenty of information on light in
my physics tutorials, but if you’d rather not get into all that, let’s just begin
with the knowledge that light can behave either as a wave, or as a particle, which is called
a photon. What we refer to as visible light is actually
electromagnetic radiation, just like microwaves, and radio waves, and X-rays, and all the rest;
it just happens to have a wavelength between about 400 and 700 nanometers. This is the portion of the electromagnetic
spectrum that we have evolved the ability to perceive. If the wavelength is just a bit longer, in
the infrared portion, our eyes can’t pick it up. A little shorter, in the ultraviolet range,
again we can’t pick it up. But right in this thin band, we are in business. Other species have slightly different ranges
to work with, but this one is ours. Within that range, the specific wavelength
determines the color of the light, and the intensity determines the brightness of the source.
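If it helps to see the numbers, here is a small Python sketch that checks whether a wavelength falls in that roughly 400 to 700 nanometer window and names the approximate color band; the band boundaries are rounded, illustrative values, not precise ones.

```python
def describe_wavelength(wavelength_nm: float) -> str:
    """Roughly classify electromagnetic radiation by its wavelength in nanometers."""
    if wavelength_nm < 400:
        return "too short to see (ultraviolet or beyond)"
    if wavelength_nm > 700:
        return "too long to see (infrared or beyond)"
    # Approximate color bands within the visible range (rounded boundaries).
    bands = [(450, "violet"), (495, "blue"), (570, "green"),
             (590, "yellow"), (620, "orange"), (700, "red")]
    for upper_bound, color in bands:
        if wavelength_nm <= upper_bound:
            return f"visible light, roughly {color}"
    return "visible light"

print(describe_wavelength(350))   # too short to see (ultraviolet)
print(describe_wavelength(560))   # visible light, roughly green
print(describe_wavelength(1000))  # too long to see (infrared)
```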
Light passes through the pupil to enter the eye. The iris, or the colored part surrounding the pupil, regulates the amount of light that passes through. This can be seen when the pupil changes size
to adjust to the intensity of available light, which we call constriction if it gets smaller,
and dilation if it gets bigger. Behind the pupil sits the lens, which focuses
the light onto the retina. We do this with two eyes at once, and the
brain is able to interpret the binocular disparity, which is the difference in the position of
a particular image on the two retinas. The greater the disparity, the closer the object is, so having two eyes is what gives us our depth perception.
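To make the disparity-to-depth relationship concrete, here is a rough sketch of the standard stereo geometry, where depth falls off as one over the disparity; the eye separation and focal length numbers are purely illustrative, not measurements of a real eye.

```python
def depth_from_disparity(disparity: float, eye_separation: float, focal_length: float) -> float:
    """Simple stereo geometry: depth = (eye_separation * focal_length) / disparity,
    so a larger disparity between the two retinal images means a closer object."""
    if disparity <= 0:
        return float("inf")  # no disparity: the object is effectively at infinity
    return (eye_separation * focal_length) / disparity

# Illustrative numbers only: ~6.5 cm between the eyes, an arbitrary focal length.
for disparity in (0.5, 2.0, 8.0):
    depth = depth_from_disparity(disparity, eye_separation=6.5, focal_length=100.0)
    print(f"disparity {disparity:>4} -> depth {depth:7.1f} (bigger disparity = closer)")
```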
Now, the process by which light hitting the retina is converted into neural signals is quite complex, to say the least. To begin to understand this, let’s zoom
in a bit on the retina. We see five layers of different types of neurons. There are receptor cells, horizontal cells,
bipolar cells, amacrine cells, and retinal ganglion cells. Interestingly enough, the photoreceptors,
which are referred to as rods and cones, depending on their shape, are located at the back of
the eyeball, so light must pass through the other layers before reaching them, and then the resulting signals propagate back out
towards the retinal ganglion cells. The signal continues along the axons from
these retinal ganglion cells to leave the eye, and the disruption in the receptor layer
that this bundle of axons creates is what results in a blind spot in our vision. Let’s now get a closer look at these rods
and cones. Looking at their shapes, the origin of their
names becomes quite obvious. So how do they differ? As it happens, they mediate different kinds
of vision. Photopic vision is what happens when there
is plenty of light, which allows us to perceive our surroundings in great detail. This is mediated by cones. Scotopic vision is what happens when things
are very dim, and there is not enough light to excite the cones. This is mediated by rods, and it produces
a kind of perception that lacks fine detail. This is better understood by looking at
how these neurons converge. Looking at cones, only a few of these converge
onto one retinal ganglion cell. So the input received here is from just a
few photoreceptors. By contrast, several hundred rods converge
onto a single retinal ganglion cell, so the input received in this case is from a great
many photoreceptors. This is why the rods can take a very dim light
source and amplify the stimulation, whereas this would not work with cones. This system trades acuity for sensitivity,
because an amplified signal means the information will be received, but it can’t be pinpointed
to a single rod, so it is less clear where the signal is coming from. With cones, there is less sensitivity, but
there is little ambiguity. The stimulus is coming from a distinct place,
so with a brighter light source, this is more effective.
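A toy calculation can illustrate this trade-off. The convergence ratios below echo the “a few cones” versus “several hundred rods” figures from above, but the per-receptor signal strength and the firing threshold are invented for illustration.

```python
import random

def ganglion_drive(n_photoreceptors: int, signal_per_receptor: float) -> float:
    """Sum the graded responses of every photoreceptor converging on one ganglion cell."""
    return sum(signal_per_receptor * random.uniform(0.8, 1.2) for _ in range(n_photoreceptors))

random.seed(0)
THRESHOLD = 5.0    # arbitrary drive needed to make the ganglion cell fire
DIM_LIGHT = 0.05   # arbitrary per-receptor signal under very dim light

rod_drive = ganglion_drive(300, DIM_LIGHT)   # several hundred rods converge
cone_drive = ganglion_drive(3, DIM_LIGHT)    # only a few cones converge

print(f"rod-fed ganglion cell drive:  {rod_drive:.2f} -> fires: {rod_drive > THRESHOLD}")
print(f"cone-fed ganglion cell drive: {cone_drive:.2f} -> fires: {cone_drive > THRESHOLD}")
# The rod pathway detects the dim light but cannot say which of the 300 rods was
# stimulated; the cone pathway keeps that spatial precision but misses the signal.
```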
In terms of the distribution of photoreceptors on the retina, we must be aware of the fovea. The fovea, which means “pit” in Latin,
is a small indentation at the center of the retina, and there are no rods at all in the
fovea, only cones. Because this region is all cones, this is
the part of the retina that specializes in high-acuity vision, which means it can resolve
small details. Then moving out from the fovea, there are
fewer cones and many more rods. The rods reach a maximum at twenty degrees
away from the fovea on either side, and then begin to decline as well. The sensitivity towards particular wavelengths
of light is slightly different for these two receptor cells as well. With photopic vision and the cones, maximum
sensitivity happens at around 560 nanometers, which is roughly greenish-yellow light. As the wavelength deviates from this peak, the light must be more and more intense in order to be perceived as equally bright. With scotopic vision and the rods, maximum
sensitivity happens around 500 nanometers, sort of a bluish-green light. As light dims and our eyes switch from photopic
to scotopic vision, an interesting phenomenon can be observed that relates to this discrepancy
in sensitivity, which is called the Purkinje effect. In bright light, yellow and red objects appear brighter than green and blue ones, but as the light dims, this relative brightness appears to reverse.
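This reversal can be demonstrated with a toy model. Purely for illustration, assume the photopic and scotopic sensitivity curves are Gaussians peaking at the 560 and 500 nanometer values quoted above; then a reddish stimulus and a bluish-green one swap their relative apparent brightness as vision switches from cones to rods.

```python
import math

def sensitivity(wavelength_nm: float, peak_nm: float, width_nm: float = 60.0) -> float:
    """Toy Gaussian stand-in for a spectral sensitivity curve (not real data)."""
    return math.exp(-((wavelength_nm - peak_nm) / width_nm) ** 2)

def apparent_brightness(wavelength_nm: float, mode: str) -> float:
    peak = 560.0 if mode == "photopic" else 500.0   # the peaks quoted above
    return sensitivity(wavelength_nm, peak)

reddish, bluish_green = 620.0, 490.0
for mode in ("photopic", "scotopic"):
    r = apparent_brightness(reddish, mode)
    bg = apparent_brightness(bluish_green, mode)
    winner = "reddish" if r > bg else "bluish-green"
    print(f"{mode}: reddish={r:.2f}, bluish-green={bg:.2f} -> {winner} looks brighter")
```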
To get even more specific with cones, we must understand that there are three cone types. These are red, green, and blue, which are
also called L, M, and S, which stand for long, medium, and short wavelength. Trichromatic color theory explains how differential
activation of these three types is responsible for our perception of all colors. This is why a computer monitor can use only
red, green, and blue pixels to represent any color. This can be combined with opponent process
theory, which outlines opponent systems that describe how the cones connect to the ganglion cells. In this model, a combination of excitatory
and inhibitory responses controls which color combinations can be perceived. One ramification of this mechanism is the negative afterimage that can be seen after staring at something for an extended period of time, whereby the afterimage always appears in the complementary color. This is because staring at something of a
particular color will cause certain cones to become fatigued, while others do not fire
at all, so when shifting focus to a blank surface, only the opposing cells will fire.
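As a sketch of how trichromatic coding and opponent coding fit together, the snippet below starts from hypothetical L, M, and S cone activations and combines them into simplified red-green, blue-yellow, and luminance channels; the weights and formulas are illustrative simplifications, not the retina’s actual wiring.

```python
def opponent_channels(L: float, M: float, S: float) -> dict:
    """Combine cone activations (0 to 1) into simplified opponent channels.
    Positive red_green leans toward red, negative toward green; positive
    blue_yellow leans toward blue, negative toward yellow."""
    return {
        "red_green":   L - M,            # L cones excite, M cones inhibit
        "blue_yellow": S - (L + M) / 2,  # S cones excite, L and M together inhibit
        "luminance":   L + M,            # overall brightness signal
    }

# A reddish stimulus strongly drives the L cones...
print("red stimulus:   ", opponent_channels(L=0.9, M=0.3, S=0.1))

# ...and once those L cones are fatigued, a blank surface that would normally
# drive all three types equally is coded as leaning toward green: a negative afterimage.
fatigue = 0.5
print("blank afterward:", opponent_channels(L=0.6 * fatigue, M=0.6, S=0.6))
```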
So now we have a basic grasp of how light is received by the retina, but how is this converted into information for the brain? Let’s talk a bit about visual transduction. Take a look at this protein, a pigment molecule
called rhodopsin. The absorption spectrum of this molecule is
nearly identical to what we saw for the scotopic spectral sensitivity curve. This is no coincidence. This protein is a G-protein-coupled receptor
that is found in all of the rods, and the degree to which it absorbs light of different
wavelengths determines how readily we can detect light of those wavelengths. The way this works is that in the dark, when
rhodopsin is not activated by light, sodium channels are open, which keeps the rod slightly
depolarized, generating a graded potential which results in the continuous release of glutamate. This is a neurotransmitter that has an inhibitory
effect, keeping the bipolar cell hyperpolarized and inhibiting neurotransmitter release towards
the ganglion cell. When exposed to light, rhodopsin becomes bleached,
initiating a cascade of events that results in the closure of the sodium channels. The rod becomes hyperpolarized and the release
of glutamate is minimized, so the inhibition of signal transmission ceases. The bipolar cell can now depolarize and release
neurotransmitters towards the ganglion cell, which will generate an action potential that
can propagate along the optic nerve. We should note that rods contain just this
one pigment, rhodopsin, while cones come in three types, each with its own pigment, and these can act together in varying ways to allow for the perception of any color in the visible spectrum.
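Because this chain involves several sign flips, here is a simplified sketch of it in code; the two discrete states are illustrative only, since the real rod response is graded rather than all-or-none.

```python
def rod_pathway(light_present: bool) -> dict:
    """Trace the simplified rod -> bipolar cell -> ganglion cell chain described above."""
    if not light_present:
        # Dark: rhodopsin is inactive, sodium channels stay open, the rod sits
        # depolarized and releases glutamate continuously, inhibiting the bipolar cell.
        return {
            "rhodopsin": "inactive",
            "rod": "depolarized (Na+ channels open)",
            "glutamate_release": "high",
            "bipolar_cell": "hyperpolarized (inhibited)",
            "ganglion_cell": "no action potential",
        }
    # Light: rhodopsin is bleached, the cascade closes the sodium channels, the rod
    # hyperpolarizes, glutamate release drops, and the bipolar cell is freed to
    # depolarize and drive the ganglion cell.
    return {
        "rhodopsin": "bleached (activated by light)",
        "rod": "hyperpolarized (Na+ channels closed)",
        "glutamate_release": "low",
        "bipolar_cell": "depolarized (disinhibited)",
        "ganglion_cell": "action potential sent along the optic nerve",
    }

for condition in (False, True):
    label = "light" if condition else "dark"
    print(label, "->", rod_pathway(condition))
```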
Zooming out a bit, action potentials that start at the retinal ganglion cells will travel from the retina, along retina-geniculate-striate pathways. These begin with axons from the ganglion cells,
which leave the back of the eyeball and form the optic nerve. This will pass through the lateral geniculate
nuclei, found in the thalamus, and then continue all the way to a region of the brain called
the primary visual cortex, which is divided into left and right sections, found in the
left and right hemispheres of the brain. Notice that all the inputs from the left visual
field are carried to the right primary visual cortex, and inputs from the right visual field
are carried to the left primary visual cortex. Within these pathways we can also identify
a few layers of neurons. There are parvocellular layers, also called
P layers, which are composed of neurons with smaller cell bodies. These respond to color, fine details, and
objects that are still or moving slowly. Then there are magnocellular layers, or M
layers, composed of neurons with larger cell bodies, which are more responsive to objects
in motion. These layers project to different areas of
the visual cortex. Another aspect of the organization of the visual
cortex is the way we can identify vertical columns, perpendicular to the cortical layers
themselves, which each correspond to a specific area of the retina from a particular eye. What other areas of the brain are involved here? Aside from the primary visual cortex, which
receives most of its input from the lateral geniculate nuclei, we also have the secondary
visual cortex. This is named as such because it receives
input from the primary visual cortex. Then there is the visual association cortex,
which receives input from the secondary visual cortex as well as other areas of the brain. To help distinguish the locations of these
regions within the brain, we can see that the primary visual cortex sits in the posterior
region of the occipital lobes. The secondary visual cortex can be found in
the prestriate cortex, which surrounds the primary visual cortex, as well as the inferotemporal
cortex, which is found in the inferior temporal lobe. The visual association cortex is found largely
in the posterior parietal cortex, and also distributed around other areas of the cerebral cortex. As we said, the hierarchy is such that visual
information goes from the primary cortex, to the secondary, then to the association cortex. These other two areas of the brain are responsible
for visual analysis, or the interpretation of the information brought to the primary cortex. This information travels to these regions
via the dorsal stream and the ventral stream. Roughly speaking, neurons in the dorsal stream
interpret spatial information, like the location of an object and its motion, while the ventral
stream is for object characteristics, like color and shape. The dorsal stream flows from the primary visual
cortex to the dorsal prestriate cortex to the posterior parietal cortex, while the ventral
stream flows from the primary visual cortex to the ventral prestriate cortex to the inferotemporal
cortex, as is shown here.
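To recap that routing, the same information can be written out as a small data structure; the labels are just shorthand for the descriptions above.

```python
# The two processing streams leaving the primary visual cortex, as described above.
VISUAL_STREAMS = {
    "dorsal": {
        "route": ["primary visual cortex", "dorsal prestriate cortex",
                  "posterior parietal cortex"],
        "specializes_in": "spatial information: object location and motion",
    },
    "ventral": {
        "route": ["primary visual cortex", "ventral prestriate cortex",
                  "inferotemporal cortex"],
        "specializes_in": "object characteristics: color and shape",
    },
}

for name, stream in VISUAL_STREAMS.items():
    print(f"{name} stream: {' -> '.join(stream['route'])} ({stream['specializes_in']})")
```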
There is much more to discuss regarding how the brain processes visual information, and we will elaborate on all of this in the future,
but for now we will have to stop here and move on to some other topics, so let’s check
out the other senses next.