[music] >>CHIARELLO: Thank you very much, and thank
you for the invitation. Understanding how language is represented
in the human brain has a long history in brain science. Aphasia lesion localization began
with the work of Paul Broca in the 19th century, and it's still actively investigated to this
day. However, we really can't overemphasize the importance of contemporary neuroimaging
techniques in enlarging our view of language organization in the brain. Functional neuroimaging techniques such as
fMRI and PET have allowed us to observe activity in brain areas as participants
actually use language [unclear]. Using functional connectivity analyses, we can track the conjoined
activation of several different brain regions in both healthy people, as well as in those
with injured brains. Advanced structural imaging techniques using high-resolution MRI allow us to localize lesions more precisely after brain injury.
And using DTI (Diffusion Tensor Imaging) we can image structural connectivity along white
matter pathways that connect language relevant brain regions. As a result, over about the
past 20 years, our view of language and the brain has been greatly expanded, and we've
come to appreciate just how dynamic this system is, as we engage in various linguistic activities. So, what I am going to try to do in my talk
today is to provide a broad overview of current thinking on language organization in the brain.
In preparing for this talk I was guided by recently published meta-analyses and review
articles that attempted to distill what we've learned so far. But, I want to make it clear
that this is my own personal interpretation of the current state of the field, based on,
I hasten to say, over 30 years as a student of and a contributor to the study of language
organization in the brain. So, why don't we start out with the classical
view of language in the brain. When most people think of language in the brain, they think
of Broca's area, in the left inferior frontal cortex, and Wernicke's area in the posterior
part of the left temporal cortex. The thumbnail sketch is that Broca's area is involved with
language expression, and Wernicke's area is responsible for comprehending language. A white matter pathway, the arcuate fasciculus,
is usually said to connect these two language relevant regions and to mediate their coordination.
Actually, though, this is an oversimplification of the classical view of language in the brain.
It's usually acknowledged that surrounding areas, as well as areas that provide sensory
motor access to the language regions, are part of the left hemisphere's language system.
We can call this the textbook view of language in the brain because it often appears in introductory
works. Now I want to contrast this with a recently
published compendium of fMRI data that illustrates brain areas that are active during language
use. The first thing to say is: don't worry;
I'm not going to describe every one of those colored blobs. But, the point here is to indicate
that what we think of as the classical language regions really represent just the tip of the
iceberg in terms of how the brain subserves language. And it's worth pointing out that
even this illustration is incomplete because it only shows regions within the left side
of the brain. So, I'm going to structure the rest of my
talk as follows. I'm going to concentrate on four themes that have emerged from contemporary
language brain research. First, a number of regions outside of the classical language
areas are now known to be recruited for various language functions. So, I'm going to describe several of these,
to show how they exemplify the ways in which our notions of language-relevant cortex have been greatly
expanded in recent years. Secondly, I'll describe parallel pathways
that are involved with language use. For both speech and reading, processing unfolds along
two separate pathways with differing functions. And, I'll try to give you an introduction
to the "what" and the "where" of these parallel streams of language information processing. Third, I'll also discuss language networks;
that is, interconnected brain areas that function together to perform various language tasks.
I'm using the plural by intention here because there's no single language network. Rather,
there's a confederation of partially overlapping networks, each of which is recruited in different
task situations. And, then finally, I'll describe how activity
within any of these language networks changes in response to task demands and individual
differences, in order to illustrate the dynamic nature of language processing in the human
brain. So, what are some of these newly recognized
language areas? Well, first I want us to consider an area that's clearly outside of the classical
language zone, but that plays an important part in reading. So, what I'm showing you here is just the
medial surface of the brain, if you sliced it down from front to back and looked inside.
And we see a gyrus that is highlighted in yellow here – the fusiform gyrus, in the
inferior part of the temporal lobe. The posterior part of that gyrus, where it transitions between temporal and occipital cortex, is sometimes referred to as the "visual word form area". And this has been shown to play an important role in early stages of
reading. Here, again, we're looking at the bottom surface
of the brain, and highlighted here are regions of activation from fMRI studies of the so-called
"visual word form area". Numerous fMRI studies have indicated that this area is activated
when we present visual words or letter strings that approximate words. We call these "pseudo
words". Activity in this part of the brain codes for spelling regularities, or the orthography
of the language. And, oftentimes we see greater activation for an actual word (a familiar word)
than a pseudo word. Some have hypothesized that this region extracts
the visual form of words. However, this doesn't have to imply that words are the only visual
patterns that are processed in this part of the brain. When patients have brain injury that damages
this area or disrupts input to the area, they can have very profound difficulty reading
words. Many of these people can only read on a letter by letter basis, so if they were
to see the printed word "cat", they would approach it this way, "c … a … t … oh
‘cat'". Okay. So, basically what they're doing then is recognizing each individual
letter in order to be able to identify the whole word string. Of course, a strategy like this (letter-by-letter reading) would be of very little use for reading connected text. So they end up with very profound reading difficulties. So this tells us that this left inferior temporal-occipital
area is clearly essential for reading, even though it was not considered to be one of
the classical language areas. An additional region that has been shown to
be relevant to language is the insula. And, this is a part of the cortex that's buried,
or hidden, beneath the lateral surface. So what I'm showing you here is a dissection
of the left hemisphere that exposes the insula. Actually, Wernicke originally speculated that
the insula might be involved in language function, but the idea really wasn't pursued much for
a very long time. However, a number of recent findings have suggested that, in fact, the
anterior part of the left insula does play an important part in language. And here I am just highlighting the anterior
part of the left insula which is the part that has been most implicated for language
function. So, what are some of the supporting findings?
Well, initially, there was a surprising finding from a modern lesion localization study done
by Dronkers. Those data indicated that patients who had speech production problems were most
likely to have brain injury in the insula, rather than in other areas of the left hemisphere.
Later on, neuroimaging research with healthy individuals confirmed a role for this area
in the articulation of speech. Currently, the extent to which the insula is involved
in wider language functions is under active investigation. It probably is involved with
things beyond articulation, but that's where we have the strongest evidence at the moment. In addition, research from my lab and other
labs has demonstrated that this anterior part of the left insula is structurally asymmetrical,
like other relevant language regions, with greater surface area on the left side of the
brain than on the right side of the brain. But what was particularly interesting was
that the asymmetry of the insula structure was correlated with functional language asymmetry;
so functional differences between the left and the right sides, whereas the asymmetry
for more classical language areas, like Broca's and Wernicke's areas, was not associated with
functional language lateralization. So this probably implies a critical role for the insula
in the establishment of the left hemisphere language network. Wernicke's area is generally described as
the posterior part of the superior temporal gyrus. However, it's now clear that other
parts of the temporal cortex might be even more important for our ability to understand
language. The posterior part of the middle temporal gyrus, shown in yellow here, is actually
more critical for language comprehension than classical Wernicke's area, which is shown
in this bright green here. When patients have injury that's restricted
to this left middle temporal gyrus, they have severe impairments in understanding words
and sentences when they're spoken, and their comprehension difficulties are worse than
patients who have brain injury in any other part of the left hemisphere. This same region of the left hemisphere is
also consistently activated in healthy individuals when they engage in verbal semantic tasks
that require the comprehension of words. And this was demonstrated in a meta-analysis of
over 100 studies that was published by Binder. More recently, a very elegant DTI (Diffusion Tensor Imaging) study indicated that this part of the middle temporal cortex was more strongly
connected anatomically to the other language regions than any other part of the left hemisphere,
implying that it's a key component in the left hemisphere language comprehension
network. So now if we travel more towards the front
of the brain, still within the temporal lobe, and look at this anterior temporal lobe region,
which is, again, rather distant from Wernicke's area, we come to another part of the brain
that's been shown to be important for subserving the meaning part of language. This part of the brain is the focus of degeneration
in a condition called semantic dementia. Patients who have semantic dementia have severe naming
deficits, and they have great difficulty producing names in spontaneous speech. Testing reveals
that their problems with naming are actually not due to word retrieval problems, but, rather,
to the progressive loss of semantic or meaningful knowledge. In such patients, the anterior
temporal lobe degeneration is usually bilateral (left and right), however, it's generally
more severe in the left hemisphere. Now, up until recently, it's been difficult
to get a good functional view of this part of the temporal lobe, just due to its anatomical
location, when using fMRI. However, very recently a study by Visser and colleagues was able
to surmount the technical challenges and actually do a really fine fMRI study of this part of
the cortex. And what they found was that this same anterior temporal lobe region was highly
activated in healthy participants in tasks that required meaning judgments. As an example of a meaning judgment used in this study, someone would be shown either a picture of a pyramid or the word "pyramid", and then they would have two choices. The choices could either be words or pictures, but one choice was a fir tree (an evergreen tree),
and the other was a palm tree. And the task was to pick which picture or which word went
better with the pyramid. Okay, so it's really a test of semantic or meaningful knowledge.
And they found similar activation in this anterior temporal region, regardless of whether
the stimuli were words or pictures, which implies that this area may be an amodal region
for processing meaning. It's generally accepted that concepts are
represented in our brain in a distributed way throughout the cortex. So, for a concept
like "dog", visual features associate with "dog" would be stored in visual association
cortex, and "doggy" type actions would be stored in brain areas that mediate actions.
Emotional features associated with dogs would be stored in limbic cortex, and so on. So
in this little cartoon here, the distributed features are just represented as these colored
ellipses. It's been proposed that the anterior temporal
lobe area functions as a hub to link together all of these diverse features for various
concepts, and such linkages would enable us to make semantic generalizations and classifications. [pause]
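A minimal toy sketch of this hub-and-spoke idea, in Python. It is purely illustrative: the concepts, features, and similarity measure below are hypothetical stand-ins, not anything from the studies discussed. Modality-specific "spokes" hold distributed features, and a hub binds them so concepts can be compared.

```python
# Toy sketch of the hub-and-spoke model of semantic memory.
# Modality-specific "spokes" store distributed features; the "hub"
# (anterior temporal lobe) links them for cross-concept comparison.
# All concepts and features are illustrative placeholders.

SPOKES = {
    "visual":  {"dog": {"furry", "four-legged"}, "wolf": {"furry", "four-legged"},
                "canary": {"feathered", "yellow"}},
    "action":  {"dog": {"barks", "fetches"}, "wolf": {"howls"},
                "canary": {"sings", "flies"}},
    "emotion": {"dog": {"companionship"}, "wolf": {"fear"},
                "canary": {"cheerfulness"}},
}

def hub_features(concept: str) -> set:
    """The hub binds each concept's features across all modalities."""
    return set().union(*(spoke.get(concept, set()) for spoke in SPOKES.values()))

def semantic_similarity(a: str, b: str) -> float:
    """Jaccard overlap of hub-bound features: one way such linkages could
    support semantic generalization and classification."""
    fa, fb = hub_features(a), hub_features(b)
    return len(fa & fb) / len(fa | fb)

print(semantic_similarity("dog", "wolf"))    # relatively high overlap
print(semantic_similarity("dog", "canary"))  # relatively low overlap
```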
So far, everything I have described has been focused within the left hemisphere. However, the traditional view of language as the sole
province of the left hemisphere has also been overturned. Even before the neuroimaging revolution,
there was some acknowledgment that the right side of the brain also was involved in some
language activities. For example, patients who have right sided brain injury can have
difficulty understanding intonation, or the melodic content of speech, and they also experience
problems interpreting non-literal language, as well as the pragmatic implications of various
speech acts. So, in other words, they have difficulty whenever the underlying message
differs from the actual literal content of the words being expressed. In addition, lateralization research with
split-brain and healthy individuals also has documented right hemisphere language capacity.
So, for example, my lab and other labs have demonstrated a broader activation of word
meanings within the right side of the brain than within the left side of the brain. So
it probably shouldn't have been surprising to us that, when people began doing functional
neuroimaging studies of language, they actually observed, frequently, bilateral activation. Typically, the activity is, of course, more extensive within the left hemisphere, but actually right hemisphere
brain activity during language processing is more the rule than the exception. And,
throughout my talk I'll give different examples to support this idea of a more bilateral system
for processing at least some types of language. But, so far, this view that I've expressed
of an expansion of the language areas isn't really all that satisfactory, because
just adding a bunch of areas to a list doesn't really provide any kind of a big picture idea
about how language is organized in the brain. But, accompanying this finding that many regions
of the brain are relevant for language is the discovery of parallel pathways for language
processing. And this brings me to the second theme of my talk. [pause] Support for the parallel streams of information
processing view comes from both functional and structural imaging of healthy individuals, as well as brain-injured individuals. For both speech, as well as reading, we can
identify distinct types of processing that occur along dorsal and ventral pathways within
the brain. So when we're talking about brain anatomy, dorsal just refers to areas more
towards the top; ventral, to areas more towards the bottom. The dorsal and the ventral pathways both begin
in sensory cortex, whether we are talking about auditory cortex or visual cortex, and
then they diverge. So the dorsal stream proceeds through parietal and frontal cortex, while
the ventral stream proceeds through more inferior temporal lobe areas, in a posterior to anterior
direction. So, in order to illustrate this view of parallel
streams of information processing, I'm going to describe to you the model of Hickok and
Poeppel. According to their model, the major function of the dorsal stream is to map sounds
onto action, while the ventral stream maps sounds onto meaning. Both of the streams begin bilaterally in the
auditory cortex, which is shown in green here. And, within the auditory cortex, of course,
there are acoustic analyses of speech. Within the ventral stream, areas near the superior
temporal sulcus, shown in yellow here, perform phonological, or language specific, sound
analyses of the acoustic signal. Then, these phonological representations provide access
to word meaning through the inferior temporal cortex, which is shown in this magenta here.
Note that this part of the ventral stream is postulated to occur bilaterally, on both
sides of the brain. Within the left hemisphere, then, the final part of the ventral pathway is in this more anterior region that is involved with extracting meaning across an entire sentence. So, then, the major function of the ventral pathway is to start with the sound and end up with a full representation of the meaning.
And, I want to point out to you the prominence of these temporal lobe areas, beyond Wernicke's
area, within the ventral stream. So, now let's consider the dorsal stream,
which is shown in blue here. Again, we start with the acoustic processing in the auditory
cortex, but next there's a region at the boundary of the temporal and parietal cortex, right
here in the left hemisphere, that provides an interface, or a mapping, between acoustic
and phonological representations and the articulatory system within the frontal cortex. So the articulatory system includes Broca's
area, the underlying anterior insula, as well as more dorsal parts of the frontal cortex.
The dorsal stream functions to translate heard speech into an articulatory code, and this
involves a segmental analysis of speech, which does not occur for the sound to meaning links
that go through the ventral pathway. Note, as well, that the dorsal pathway is only hypothesized
to be operating within the left side of the brain. So, again, the function of the dorsal
pathway is to map sounds onto actual articulation. The dorsal and ventral pathways are going to be used in parallel when we hear speech, with the dorsal stream performing sensory-motor and segmental analyses, while the ventral pathway subserves semantic and conceptual processes.
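As a schematic recap of the dual-stream model just described, the two streams can simply be written out as ordered stages. The sketch below is in Python, and the stage labels are paraphrases of the talk's description, not Hickok and Poeppel's exact terms.

```python
# Schematic of the Hickok & Poeppel dual-stream model as described in
# the talk. Stage labels are paraphrases, not the authors' exact terms.

DUAL_STREAM_MODEL = {
    "ventral": {  # sound -> meaning; operates bilaterally
        "bilateral": True,
        "stages": [
            "acoustic analysis (auditory cortex)",
            "phonological analysis (superior temporal sulcus)",
            "lexical/semantic access (inferior temporal cortex)",
            "sentence-level meaning (anterior temporal, left hemisphere)",
        ],
    },
    "dorsal": {   # sound -> action; left-lateralized
        "bilateral": False,
        "stages": [
            "acoustic analysis (auditory cortex)",
            "sensory-motor interface (temporo-parietal boundary)",
            "articulatory network (Broca's area, anterior insula, dorsal frontal cortex)",
        ],
    },
}

def trace(stream: str) -> None:
    """Print the posterior-to-anterior flow of one stream."""
    info = DUAL_STREAM_MODEL[stream]
    side = "both hemispheres" if info["bilateral"] else "left hemisphere only"
    print(f"{stream} stream ({side}): " + " -> ".join(info["stages"]))

trace("ventral")
trace("dorsal")
```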
Now, I haven't said anything about grammar or syntax. Actually, recent evidence suggests that these types of grammatical analyses occur within each of the processing streams, and it is currently under debate whether there are
actual differences in the kinds of grammatical processing that occur within the dorsal and
the ventral stream. But one proposal, that makes some sense, is that the dorsal stream
mediates more sequential analyses of the syntax, while the ventral stream rapidly identifies grammatical violations that might occur. So I've talked to you about functional evidence
for the dorsal and ventral pathways, but there's also some very nice anatomical evidence supporting
the idea that there are actual white matter pathways that can serve as conduits of communication
within these dorsal and ventral streams. So what we're seeing here are axon bundles
within the arcuate fasciculus, shown in blue, which connects the frontal and posterior areas of the dorsal stream. And then another pathway, the inferior
fronto-occipital fasciculus, which has two subdivisions here, actually connects areas
within the ventral route. So there's both anatomical and functional evidence supporting the idea
of these parallel language pathways. Parallel pathways have also been identified
for reading, and the focus of this research is on how visual words are decoded. Functional
neuroimaging research suggests that when we read there's differential processing of the
visual words in the ventral stream, involving the inferior temporal-occipital cortex [unclear];
the so-called visual word form area that we talked about earlier, as well as a dorsal
pathway which includes [unclear] parietal and posterior superior temporal areas. Now, as we saw earlier, the ventral region [unclear] is more strongly activated for words than for letter-like word strings (pseudo words), and it functions to extract spelling regularities across the entire word.
So this pathway is thought to process letters in parallel as we read, and to subserve rapid
whole-word reading. However, the dorsal stream provides another
way in which words can be read. Within the dorsal area we find greater activation for
pseudo words than for familiar words. And this pathway has been shown to subserve a slower, more segmental type of word decoding, in which individual letters are serially mapped onto phonemes prior to recognizing the word.
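To make the two routes concrete, here is a minimal toy sketch in Python. It is purely illustrative (the tiny lexicon and letter-to-phoneme table are hypothetical stand-ins, not a real model of reading): the ventral route recognizes a familiar word as a whole, while the dorsal route decodes it letter by letter.

```python
# Toy sketch of dual-route word reading. The ventral route maps a
# familiar whole word straight to its pronunciation; the dorsal route
# serially converts each letter to a phoneme. Both the lexicon and the
# grapheme-phoneme table are hypothetical miniatures.

LEXICON = {"cat": "/kaet/", "dog": "/dog/"}          # ventral: whole-word store
LETTER_TO_PHONEME = {"c": "k", "a": "ae", "t": "t",  # dorsal: letter-sound rules
                     "d": "d", "o": "o", "g": "g", "z": "z"}

def read_ventral(word: str) -> str | None:
    """Rapid whole-word recognition; fails for unfamiliar strings."""
    return LEXICON.get(word)

def read_dorsal(word: str) -> str:
    """Slower serial decoding: map letters to phonemes one at a time."""
    return "/" + "".join(LETTER_TO_PHONEME[ch] for ch in word) + "/"

for item in ["cat", "zat"]:                  # "zat" is a pseudo word
    pronunciation = read_ventral(item) or read_dorsal(item)
    route = "ventral" if item in LEXICON else "dorsal"
    print(f"{item}: {pronunciation} (via {route} route)")
```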
In children, the dorsal pathway predominates early in reading acquisition, as children develop phonological awareness and learn to sound out letters. However, the ventral path becomes increasingly important with development, as reading fluency and reading skill increase. However, even in a fluent adult reader,
the dorsal system is still available and is still used. And, this was demonstrated nicely
by a recent study that looked at the effects of degrading visual words. And they degraded
the words in ways that would disrupt the ability to see the whole word form, or to process
the whole word form. So they rotated the words from their normal horizontal orientation,
or sometimes placed abnormal spaces in between letters in the word. And when they did that,
they found increased activation within the dorsal region. The idea here is that the words
can no longer be recognized as familiar visual patterns, and have to be read in this more
sequential way. Just as an aside, there's an interesting finding with respect to the dorsal and ventral pathways in children who have developmental reading problems. Many of these children have problems with phonological awareness and the ability to sound out words, so it was originally thought that we would see abnormal activation within the dorsal pathway. And, in fact, that was found. Children who have difficulty learning
to read have reduced levels of activation within the dorsal stream. However, they also
have reduced activation within the ventral stream, which was surprising. And the idea here is that we actually need
to have a functioning dorsal pathway in order to bootstrap the whole-word reading that
is probably occurring within the ventral pathway. So, in addition to having both of these areas of the brain under-activated, children with developmental reading problems also tend to have other areas of the brain with unusually high levels of activation: areas within the right hemisphere and within different areas of the frontal cortex. So, in general, both across speech and reading,
the ventral pathways predominate for rapid word analysis, while the dorsal pathways are
involved with slower, more analytical processes that require segmenting the language input.
So one important feature of language organization in the brain is the divergence of information
processing into these two parallel streams. So, it should be clear by now that there are lots of different brain regions that are important for, and utilized in, language. However, this does
not mean that each one of these areas is brought into play in all language contexts. Rather,
the evidence suggests that somewhat different networks of brain areas are recruited, depending
on the type of linguistic activity that we are engaged in at the moment. And this brings me to the third theme of my
talk. Any language activity requires an interconnected
network of synchronized brain areas that function together. We can investigate this using functional
connectivity analyses. And, what functional connectivity analyses do is to take the actual
fMRI signal and process it in a way to identify brain areas whose activity changes in concert.
So, in other words, areas whose activation rises and falls at the same time. These areas then represent a functional network, or a coalition of brain areas that work together to perform a particular task.
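A minimal sketch of what such an analysis computes, in Python, using simulated regional time series. Real fMRI connectivity pipelines involve preprocessing, confound removal, and statistical testing that are omitted here; this just shows the core idea of correlating regional activity.

```python
# Minimal sketch of a functional connectivity analysis: correlate the
# BOLD time series of regions of interest (ROIs). Real pipelines add
# preprocessing, confound removal, and statistical testing; this is
# just the core idea of finding areas that rise and fall together.
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: 200 time points x 4 ROIs. ROIs 0 and 1 share a
# common driving signal, so their activity changes in concert.
n_timepoints = 200
shared_signal = rng.standard_normal(n_timepoints)
roi_timeseries = rng.standard_normal((n_timepoints, 4))
roi_timeseries[:, 0] += shared_signal
roi_timeseries[:, 1] += shared_signal

# Functional connectivity matrix: Pearson correlation between all ROI pairs.
fc_matrix = np.corrcoef(roi_timeseries.T)
print(np.round(fc_matrix, 2))  # entry [0, 1] stands out from the rest
```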
So let's start by considering functional connectivity when people listen to speech. In a recent study by Saur and colleagues,
participants heard three types of sentences: meaningful sentences, sentences with pseudo
words that replaced the real words but otherwise had normal sentence structure, and the pseudo-sentences
played in reverse. Now, when you play speech or pseudo-speech in reverse, you present the person with the exact same acoustic content as the normal speech; however, the content can't be processed using the phonology of a language. So, then, comparing the activation for the pseudo-sentences to that for the reversed pseudo-sentences will reveal brain areas that are processing the phonological content of the sentence, devoid of meaning, because neither of them has any meaning. And we would expect to see greater activation for the pseudo-sentences.
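A minimal sketch of that subtraction logic, in Python. The numbers and "regions" are simulated placeholders (real analyses fit voxel-wise statistical models, with thresholding omitted here); the point is just which comparison isolates which kind of processing.

```python
# Minimal sketch of the subtraction logic behind these fMRI contrasts.
# Real analyses fit voxel-wise statistical models; here we just subtract
# simulated condition means to show which comparison isolates what.
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 1000
phon_voxels = slice(100, 200)   # hypothetical "phonological" region
sem_voxels = slice(500, 650)    # hypothetical "semantic" region

def noise() -> np.ndarray:
    """Baseline acoustic-driven activation plus measurement noise."""
    return rng.normal(0.0, 0.05, n_voxels)

# All conditions share acoustics; pseudo-sentences add phonology;
# meaningful sentences add phonology plus semantics.
reversed_speech = noise()
pseudo = noise()
pseudo[phon_voxels] += 0.5
meaningful = noise()
meaningful[phon_voxels] += 0.5
meaningful[sem_voxels] += 0.5

phonological_map = pseudo - reversed_speech  # phonology, devoid of meaning
semantic_map = meaningful - pseudo           # meaning, over and above phonology

print("voxels in the phonological contrast:", np.flatnonzero(phonological_map > 0.25).size)
print("voxels in the semantic contrast:", np.flatnonzero(semantic_map > 0.25).size)
```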
When we compare the meaningful sentences to the pseudo-sentences, however, we can see a network of areas that corresponds to the semantic or meaningful content of the spoken message. And we would expect to see greater activation for the meaningful sentences, relative to the pseudo-sentences. So what I'm showing you here is the strength
of the functional connectivity for five relevant language areas that are labeled in the diagram.
The phonological network is shown in "a", and the semantic network is shown in "b".
Note that the phonological network is only found within the left hemisphere, whereas
the semantic network is more bilateral; however, the correlations between the brain areas are
definitely stronger within the left hemisphere than within the right. There was also very
strong functional connectivity between corresponding areas in the left and the right hemispheres
for the semantic network; in other words [unclear], a strong functional connectivity between this
area and this area, and so on. So these data support somewhat different functional
networks for extracting the phonological versus the semantic content of sentences. One coalition
of brain areas processes the phonology, another coalition processes meaning. The investigators went further and used diffusion tensor imaging to identify the white matter pathways that underlie these networks. And
these analyses revealed both dorsal and ventral connections for the phonological pathways,
and ventral connections exclusively for the semantic network. So then, functional connectivity analyses
go beyond identifying which regions are critical for language; they reveal which regions
are activated in synchrony as we engage in language activities. So there isn't just a
single language network, but multiple language networks in the brain, and which networks
are utilized depends on characteristics of the task and of the individual. For example, another recent study examined
the processing of spoken sentences again, but now contrasting sentences that had emotional
content and emotional intonation with sentences that were emotionally neutral. In both conditions, the emotional as well as the more neutral sentences, the functional connectivity analyses revealed bilateral networks for processing both types of sentences. And
these were on the lateral surfaces of the left and right hemispheres. However, in addition to these regions, areas
on the medial surface of the left and right hemispheres were activated only for processing
the sentences that expressed an emotion. These additional areas of the brain are, from other
studies, associated with theory of mind and the ability to make inferences about other
people's emotional states. So these kinds of results illustrate that language networks
are dynamically altered depending on the message that's being expressed. We've already seen that many brain regions
can be involved in language, and that subsets of these regions are organized into multiple
partially overlapping functional networks. My final theme is that functional connectivity
within these networks can be modulated by task demands and individual differences. And,
I'll mention two recent studies that illustrate this point. So, consider what happens when we are hearing
two people express two different messages at the same time. Buchweitz and colleagues
did an interesting study, and they found that about one-third of the people that they tested
could successfully comprehend two messages simultaneously. So then they went on to use functional MRI
and functional connectivity analyses to investigate processing in these individuals. So they contrasted
situations where the individuals were hearing only a single spoken sentence to conditions
when they heard two different sentences [unclear] spoken simultaneously. Not surprisingly, they found activation for
a bilateral set of regions when the person was comprehending either one message or two
messages. Okay, and these represent some of the core language comprehension areas that we've seen earlier; they are shown in white. But the areas that you see in red, sort of surrounding those white areas, were activated only when the people were comprehending two messages at the same time. Note that these areas that were recruited
[unclear] only when comprehending two messages surround these core language areas. And this
implies that additional cortex is recruited as task demands increase. But they also did
functional connectivity analyses, and that revealed a really interesting change in the
network dynamics under the dual message condition. And here, the network activity became more
synchronous, or more tightly coupled or correlated, when they were listening to two messages simultaneously. The functional connectivity increased between
frontal and temporal areas within a hemisphere, as well as for corresponding areas across
hemispheres. Furthermore, they found a really interesting individual difference in that
this increase in synchrony within the network was greatest for the people who had
the lowest verbal working memory capacity. So, this suggests that when the language system
is challenged by increased demands, functional connectivity within the network increases,
again, demonstrating the very dynamic nature of the system, which responds as demands change. Language experience [unclear] definitely differs
across individuals, and the current evidence suggests that this can also modulate the brain
networks that are used for language. So Zou and colleagues studied individuals whose native
language was Mandarin Chinese. Half of the people that they tested were monolingual in
Mandarin, and half were bilingual in a second language, and the second language was Chinese
sign language. The bilinguals learned Chinese sign language later in life, well after the original language networks would have been established for their first language. So the question these investigators asked
was whether experience with a second signed language could alter the brain organization
for the first language. So in this study, both of the groups, the
monolinguals and the bilinguals, named pictures in Mandarin, their first language; and they
were equally accurate at doing so. Using fMRI, they obtained similar bilateral networks,
during this task, again for both monolinguals and bilinguals; but there were two interesting
additional findings that suggested that learning a second language alters the brain organization
for the first language. First, the bilinguals had greater activation
for several regions within the right hemisphere. And, second, increased functional connectivity was observed for the bilinguals between the regions shown in this diagram. And, what you're
seeing in the lines here are areas that had increased functional connectivity in the bilingual
participants. And we can see that there was increased functional connectivity both within
the left hemisphere, as well as between right hemisphere areas and the corresponding left hemisphere areas. So, even though the monolinguals and bilinguals
were speaking in the same language (Mandarin), there was a somewhat different language organization
based on their language experience. Or, to put it another way, when two individuals perform
the same language task, they may rely on differently organized brain networks due to their differences
in lifetime language experience. So I've covered a lot of ground, and I'd like
to close by summarizing some of the important take home messages. I hope I've convinced you that our ideas about
brain organization have evolved considerably from the classical ideas of a small number
of left hemisphere language centers. Broca's area and Wernicke's area are clearly important
for language, but, as I said at the outset, they represent just the tip of the linguistic
iceberg. Modern brain imaging techniques demonstrate
conclusively that many individual areas are important for normal language function. Language
is not subserved by a small number of left hemisphere hot spots, but, rather, by a much
more extensive constellation of regions that span both hemispheres. Second, when we take in linguistic stimuli,
whether it's auditory or visual, this information is processed in parallel streams within dorsal
and ventral cortex. These different pathways extract different kinds of information from
the linguistic signal, and the flow of information generally proceeds from a posterior to anterior
direction. As we've seen, the dorsal stream processing
involves sequential analysis, segmentation, and sensory motor associations. Ventral stream
processing involves rapid word analysis and meaning access. So any model of language in
the brain has to include at least two parallel information processing circuits. Third, language is subserved by multiple functional
networks. And, I'd like you to consider the various linguistic activities that you might
have engaged in during my talk. Hopefully, you've comprehended my auditory message. And,
as we've seen, a rather extensive bilateral network would be recruited to accomplish this.
But you've also processed the visual language [unclear] presented in my slides; in many
cases simultaneously with comprehending the auditory spoken message. You might have held
parts of my talk in verbal working memory, while taking written notes. Maybe you were
also texting someone or talking to the person seated next to you. Perhaps some of the time
you were day dreaming, or imagining that you were conversing with a loved one, or recalling
what someone told you right before the talk. All of these linguistic activities will have
recruited somewhat different brain networks: brain regions activated in synchrony to perform
specific linguistic computations. Language isn't one thing. It's many things, and so
it shouldn't be surprising that the brain organization for language is dynamic, recruiting
relevant brain regions as needed. And, finally, within any given language network,
functional connectivity will be modulated as processing demands change. Functional connectivity
can also differ across individuals based on their ability and their past experiences.
And this implies a higher degree of plasticity in brain organization for language than might
have been appreciated in the past. We need to begin thinking about language networks
as organic and adaptive, altering their organization over time, in response to linguistic experiences.
This view implies that the fine-tuning of language networks is an ongoing process that
continues throughout our lives. Well, we still have a long way to go in exploring
language organization in the brain, but I hope to at least have conveyed to you a more
sophisticated view than what you see here [audience laughs]. Thank you for your attention.