[MUSIC PLAYING] [APPLAUSE] JAMES MANYIKA: Hi, everyone. I'm James. In addition to research, I
lead a new area at Google called Technology and Society. Growing up in
Zimbabwe, I could not have imagined all the amazing
and groundbreaking innovations that have been presented
on this stage today. And while I feel it's
important to celebrate the incredible progress in
AI and the immense potential that it has for
people in society everywhere, we must
also acknowledge that it's an emerging technology
that is still being developed, and there's still
so much more to do. Earlier, you heard Sundar
say that our approach to AI must be both bold
and responsible. While there's a natural
tension between the two, we believe it's not only
possible, but in fact, critical to embrace that
tension productively. The only way to be truly
bold in the long-term is to be responsible
from the start. Our field-defining research
is helping scientists make bold advances in
many scientific fields, including medical breakthroughs. Take, for example, Google
DeepMind's AlphaFold, which can accurately
predict the 3D shapes of 200 million
proteins; that's nearly all the cataloged
proteins known to science. AlphaFold gave us the equivalent
of nearly 400 million years of progress in just weeks. [APPLAUSE] So far, more than one million
researchers around the world have used AlphaFold's
predictions, including Feng Zhang's
pioneering lab at the Broad Institute of MIT and Harvard. AUDIENCE: Whoo! JAMES MANYIKA: Yeah. In fact, in March this year,
Zhang and his colleagues at MIT announced that
they'd used AlphaFold to develop a novel molecular
syringe, which could deliver drugs to help improve the
effectiveness of treatments for diseases like cancer. [APPLAUSE] And while it's
exhilarating to see such bold and beneficial
breakthroughs, AI also has the potential
to worsen existing societal challenges like unfair bias,
as well as pose new challenges as it becomes more advanced
and new uses emerge. That's why we believe
it's imperative to take a responsible approach to AI. This work centers
around our AI principles that we first
established in 2018. These principles guide
product development and they help us assess
every AI application. They prompt questions like,
will it be socially beneficial, or could it lead
to harm in any way? One area that is top of mind
for us is misinformation. Generative AI makes
it easier than ever to create new
content, but it also raises additional questions
about its trustworthiness. That's why we're developing
and providing people with tools to evaluate online information. For example, have you come
across a photo on a website or one shared by a friend
with very little context, like this one of the moon landing, and found yourself
wondering, is this reliable? I have, and I'm sure
many of you have as well. In the coming months, we're adding two new ways for people to evaluate images. First, with our About This Image tool in Google Search, you'll be able to see important information, such as when and where similar images may have first appeared and where else the image has been seen online, including news, fact-checking, and social sites, all of this providing you with helpful context to determine whether it's reliable. Later this year, you'll
also be able to use it if you search for an image or
screenshot using Google Lens, or when you're on websites in Chrome. As we begin to roll out the
generative image capabilities, as Sundar mentioned, we will ensure that every one of our AI-generated images has metadata,
a markup in the original file to give you context
if you come across it outside of our platforms. Not only that,
creators and publishers will be able to add
similar metadata, so you'll be able to
see a label in Images in Google Search marking
them as AI-generated. [APPLAUSE]
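The markup itself isn't shown on stage. As a rough illustration only, here is a minimal sketch of how a downstream tool might check an image file for such a label, assuming an IPTC-style "digital source type" value embedded in the file's XMP packet; the marker value and the scanning approach are assumptions, not Google's actual scheme.

```python
# A rough sketch, not Google's implementation: check whether an image file
# carries an AI-generation label in its embedded XMP metadata. The marker
# value below follows the IPTC "digital source type" convention for
# synthetic media and is an assumption about what such markup could look like.

import re
import sys
from typing import Optional

AI_GENERATED_MARKER = b"trainedAlgorithmicMedia"  # assumed IPTC-style value

def xmp_packet(image_bytes: bytes) -> Optional[bytes]:
    """Return the raw XMP packet embedded in the file, if any."""
    match = re.search(rb"<x:xmpmeta.*?</x:xmpmeta>", image_bytes, re.DOTALL)
    return match.group(0) if match else None

def looks_ai_generated(path: str) -> bool:
    """Best-effort check: does the file's metadata carry an AI-generation label?"""
    with open(path, "rb") as f:
        data = f.read()
    packet = xmp_packet(data)
    return packet is not None and AI_GENERATED_MARKER in packet

if __name__ == "__main__":
    for path in sys.argv[1:]:
        verdict = "labeled AI-generated" if looks_ai_generated(path) else "no AI-generation label found"
        print(f"{path}: {verdict}")
```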
As we apply our AI principles, we also start to see potential
tensions when it comes to being bold and responsible. Here's an example: Universal Translator is an experimental AI
Universal Translator is an experimental AI
video dubbing service that helps experts translate
the speaker's voice while also matching their lip movements. Let me show you how it
works with an online college course created in partnership
with Arizona State University. [VIDEO PLAYBACK] - What many college
students don't realize is that knowing
when to ask for help and then following through
on using helpful resources is actually a hallmark of
becoming a productive adult. [SPEAKING SPANISH] [END PLAYBACK] JAMES MANYIKA: We use-- [APPLAUSE] It's cool. We use next-generation translation models to translate what the speaker is saying, models to replicate the speaker's style and tone, and models to match the speaker's lip movements. Then we bring it all together.
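The talk doesn't name the underlying models, but it does describe the pipeline's four stages. A minimal sketch follows, with every stage as a hypothetical placeholder rather than an actual Universal Translator API.

```python
# A minimal sketch of the dubbing pipeline as described on stage. Every
# function here is a hypothetical placeholder; the talk does not name the
# underlying models or expose an API for Universal Translator.

def transcribe_speech(video_path: str) -> str:
    raise NotImplementedError("speech recognition stage")

def translate_text(text: str, target_language: str) -> str:
    raise NotImplementedError("next-generation translation stage")

def synthesize_matching_voice(text: str, reference_video: str) -> bytes:
    raise NotImplementedError("voice synthesis that replicates style and tone")

def match_lip_movements(video_path: str, dubbed_audio: bytes) -> str:
    raise NotImplementedError("lip re-animation stage")

def mux(video_path: str, audio: bytes) -> str:
    raise NotImplementedError("combine the re-animated video with the dubbed audio")

def dub_video(source_video: str, target_language: str) -> str:
    """Run the full pipeline and return the path to the dubbed video."""
    transcript = transcribe_speech(source_video)                        # what the speaker is saying
    translated = translate_text(transcript, target_language)           # translate it
    dubbed_audio = synthesize_matching_voice(translated, source_video) # replicate style and tone
    relipped = match_lip_movements(source_video, dubbed_audio)         # match lip movements
    return mux(relipped, dubbed_audio)                                  # bring it all together
```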
This is an enormous step forward for learning comprehension. And we're seeing promising results in course completion rates. But there's an
inherent tension here. You can see how this can
be incredibly beneficial, but some of the same
underlying technology could be misused by bad
actors to create deepfakes. So we've built the
service with guardrails to help prevent misuse and
to make it accessible only to authorized partners. [APPLAUSE] And as Sundar
mentioned, soon we'll be integrating new innovations
in watermarking into our latest generative models to also
help with the challenge of misinformation. Our AI principles also help
guide us on what not to do. For instance, years ago, we
were the first major company to decide not to make a general-purpose facial recognition API commercially available. We felt there weren't
adequate safeguards in place. Another way we live up
to our AI principles is with innovations
to tackle challenges as they emerge, like reducing
the risk of problematic outputs that may be generated
by our models. We are one of the first in the
industry to develop and launch automated adversarial
testing using large language model technology. We do this for
queries like this, to help uncover and reduce
inaccurate outputs like the one on the left and make them better
like the one on the right. We're doing this at a scale
that's never been done before at Google, significantly
improving the speed, quality, and coverage of
testing, allowing safety experts to focus on
the most difficult cases.
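The tooling itself isn't shown, but the idea is that one model proposes adversarial variants of a query, the model under test answers them, and an automated rater flags likely-problematic outputs so only the hardest cases reach human reviewers. A minimal sketch with hypothetical model callables, not Google's actual system:

```python
# A minimal sketch of the idea, not Google's tooling: one LLM proposes
# adversarial variants of a seed query, the model under test answers them,
# and an automated rater scores each answer so that only the riskiest cases
# reach human safety experts. All three callables are hypothetical.

from typing import Callable

def adversarial_test(
    seed_query: str,
    generate_variants: Callable[[str, int], list[str]],  # adversarial-query LLM
    model_under_test: Callable[[str], str],              # model being evaluated
    rate_output: Callable[[str, str], float],            # automated rater, higher = worse
    threshold: float = 0.5,
    n_variants: int = 50,
) -> list[dict]:
    """Return flagged (query, answer, score) cases, riskiest first."""
    flagged = []
    for query in generate_variants(seed_query, n_variants):
        answer = model_under_test(query)
        score = rate_output(query, answer)
        if score >= threshold:
            flagged.append({"query": query, "answer": answer, "score": score})
    # Human experts review only the most difficult, highest-scoring cases.
    return sorted(flagged, key=lambda case: case["score"], reverse=True)
```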
And we're sharing these innovations with others. For example, our Perspective
API, originally created to help publishers
mitigate toxicity, is now being used in
large language models. Academic researchers have
used our Perspective API to create an industry
evaluation standard. And today, all significant
large language models, including those from
OpenAI and Anthropic, incorporate this standard
to evaluate toxicity generated by their own models.
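The Perspective API is publicly documented, so a toxicity check on a model's output can be sketched directly against its comments:analyze endpoint. The client library and API key setup below are assumptions about a typical integration, not something described in the talk.

```python
# A minimal sketch of scoring a model's output for toxicity with the publicly
# documented Perspective API (comments:analyze). Assumes
# `pip install google-api-python-client` and an API key in the environment;
# that setup is an assumption about a typical integration, not part of the talk.

import os
from googleapiclient import discovery

client = discovery.build(
    "commentanalyzer",
    "v1alpha1",
    developerKey=os.environ["PERSPECTIVE_API_KEY"],
    discoveryServiceUrl="https://commentanalyzer.googleapis.com/$discovery/rest?version=v1alpha1",
    static_discovery=False,
)

def toxicity_score(model_output: str) -> float:
    """Return Perspective's TOXICITY probability (0.0 to 1.0) for a piece of text."""
    request = {
        "comment": {"text": model_output},
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = client.comments().analyze(body=request).execute()
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    print(toxicity_score("Thanks for the thoughtful question!"))
```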
Building AI-- [APPLAUSE] Building AI responsibly must be a collective effort, involving researchers, social scientists,
industry experts, governments, and everyday people, as well
as creators and publishers. Everyone benefits from a
vibrant content ecosystem today and in the future. That's why we're
getting feedback and will be working with
the web community on ways to give publishers choice and
control over their web content. It's such an exciting time. There's so much
we can accomplish, and so much we must
get right together.