Today is an opportunity for Ann and Bill to have a conversation
through our brain decoding to facilitate what we think
would resemble a real-life application
of the technology. We basically want to
create a demonstration where we’re decoding the text, the speech, and the avatar at the same time from Ann’s brain as she silently
attempts to speak. We just trained some
new models yesterday, so we might need to test it
a little bit. It should work. The device is an
electrocorticography grid attached to a pedestal that is screwed onto the participant’s skull. Ann is the very first person
to have this combination. [Ann] I think you are wonderful. [Sean] In the Chang lab
we really focus on restoring voice to people who have lost it due to conditions like stroke or ALS. [Alex] What we're picking up on is neural activity directly related to
her attempts to move her facial muscles, and that's what we're able
to decode into speech. [Bill] Hey Ann. How's it going? [Ann] It is good to see you. [Sean] Giving them
the ability to communicate again with their loved ones and caregivers is really
what we’re looking to do. [Bill] I was thinking about running to the store. [Ann] What time will you be home? [Bill] In about an hour. [Ann] Do not make me laugh. [Bill] That's the first time
we've ever had a conversation using this system. [Margaret] In order to communicate in
her day-to-day life she has an assistive
communication device. These are dollar-store glasses, and she interfaces with it using a little reflective sticker on them. It's very slow, and the device is now very old, but she relies on it. [Bill] For us to have that conversation
using her Dynavox would probably be like a 5-to-7-minute conversation. [Ann] It was nice to have a conversation. I forget how
slow this machine is. [Margaret] When Ann was 30 years old, she was playing volleyball
with some friends and had a stroke which led to the condition that she has now, which is locked-in syndrome. At this point, she had a 6-month-old child and a 7-year-old child. [Ann] You are truly wonderful people. [Margaret] The physicians have
no idea why this happened. That was now 18 years ago. How can we create technology
that can help people of all abilities contribute meaningfully? [Ann] Hi. How are things going? [Bill] Hi Ann. Things are going fine. How are you feeling about the Blue Jays today? [Ann] Anything is possible. [Bill] Well, you're not showing a lot of
confidence in them today, are you? [Ann] You are right about that. [Bill] I guess we’ll see, won’t we? [Edward] I feel really lucky to work with a group of students
and fellows and engineers and scientists, all as a team that have had this
singular goal for the last 10 years: to build a device that can restore speech. And we're getting so close to making this something that is going to be
a real solution for patients. [Ann] Will you do me a favor? [Margaret] It's incredible to hear her
discuss her journey. [Ann] Hand that to me please. [Margaret] Apart from her motivations to actually advance this technology – [Ann] I thought it would be good for me. [Margaret] If we are able to produce
text, synthesized speech and then also a personalized avatar, she believes that would really – and we do too – advance her ability to become a counselor
and to work with people. It's important to develop technologies which can better support
individuals with disabilities, because there is a huge number of people whom
we exclude from workplaces. [Edward] My hope is that this is going to be just a stepping stone to many
other things that can be done for people who have lost the
ability to communicate, helping them realize their full potential.