[MUSIC PLAYING] PRABHAKAR RAGHAVAN:
In the mid '90s working as an academic
computer scientist, I was struck by the explosion
of information in the world, and in contrast, the immense
difficulty of accessing that information. How do we take these
rigid notions of computing and turn them into vibrant
experiences that humans adore? That is so energizing
and so fascinating. Hello, everyone. Thanks for tuning in
to our Search On event. On behalf of everyone
at Google, we hope you're staying
safe and well and taking care of
yourselves and each other. In today's rapidly
changing world, information is absolutely
critical to people's lives. This is why over
a billion people every day choose to
use Google Search. Search has always been at
the very core of our mission to organize the
world's information and make it universally
accessible and useful. Building the best
possible search engine is an incredibly
difficult problem, one that I've personally
worked on for most of my academic and
professional career. It's also a problem that
my Google colleagues have worked on tirelessly
for over 22 years. And yet the engineering
challenges we continue to face are significant. Every single day,
15% of search queries are ones we've
never seen before. To solve these
information needs, we are constantly
exploring new technologies to understand what
you're looking for. Much has changed since we first
launched our search engine. There are more ways than ever
to search for information on Google. Whether you're asking your
Google Assistant for the latest COVID-19 news, looking for small
businesses to support on Maps, or identifying a new bird
that's visiting your bird feeder using Google Lens. There are also more ways than
ever to find information. News on Twitter, flights
and Kayak and Expedia, restaurants on OpenTable,
recommendations on Instagram and
Pinterest, items on Amazon, and many others. There's never been more choice
and competition in the ways people access information. So we're humbled and honored that people continue to use Google because they find us helpful. People trust us to give
them the best information and that trust is why
we continue to innovate. We are here today
to highlight some of the new technologies we've
been applying to Google Search and to demonstrate why we
believe that search will never be a solved problem. But before we do, I want
to spend some time talking about what makes us unique. Specifically, four key elements
that drive forward innovation. These elements form the foundation for what allows us to answer trillions
of queries every year. The first is an
unwavering commitment to deeply understand all
the world's information. Two decades ago, we built
the most comprehensive index of websites to rank
search results. But, of course,
people don't just want access to web information. They want to know about
the world around them. Why is the sky red today? How busy is my local
grocery store right now? What does that sign say in a language I can't read? The next step was the Knowledge Graph, which embeds relationships and facts about people, places, and things. So if you ask, how tall is Europe's tallest mountain, we give you the answer. We can also show you other users' explorations, such as in which country is Europe's tallest mountain.
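To make that concrete, a knowledge graph can be pictured as a set of subject-relation-object triples that an answer engine walks. Here's a toy sketch in Python; the data and helper names are purely illustrative and are not Google's actual Knowledge Graph or any API of it.

```python
# A toy knowledge graph stored as (subject, relation, object) triples.
# Illustrative only -- not Google's Knowledge Graph or its API.
TRIPLES = [
    ("Mount Elbrus", "is_a", "mountain"),
    ("Mount Elbrus", "elevation_m", 5642),
    ("Mount Elbrus", "located_in", "Russia"),
    ("Mont Blanc", "is_a", "mountain"),
    ("Mont Blanc", "elevation_m", 4809),
]

def lookup(subject, relation):
    """Return the object of the first matching triple, or None."""
    for s, r, o in TRIPLES:
        if s == subject and r == relation:
            return o
    return None

# "How tall is Europe's tallest mountain?" -- resolve the superlative,
# then follow the elevation edge to give a direct answer.
mountains = [s for s, r, o in TRIPLES if r == "is_a" and o == "mountain"]
tallest = max(mountains, key=lambda m: lookup(m, "elevation_m"))
print(tallest, lookup(tallest, "elevation_m"), "m")  # Mount Elbrus 5642 m
# The follow-up "in which country" is just another edge away:
print(lookup(tallest, "located_in"))                 # Russia
```

Of course, much of the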
information in the world isn't contained on the web. So back in 2007, we started
driving cars down streets in order to capture
real world imagery. At the time, people thought
it was an impossible feat. Fast forward to today. Street View imagery
enables people to learn more about the world. It also helps place
businesses on the map so people can find and
support local stores. We've mapped 87
countries, including previously undermapped
places like Armenia, Lebanon, Tanzania, and Tonga. And we've enabled people
everywhere to virtually explore not just their streets, but
Everest Base Camp, the Great Barrier Reef, the Grand Canyon,
and many other destinations. Another way we make
real world information available to help people
is with busyness indicators on Google Search and Maps. You can see popular
times as well as how busy a
place is right now, which is incredibly helpful in
the age of social distancing. Over the past few
months, we've expanded this to include new outdoor
categories, such as parks and beaches, and we are on
track to add live busyness information to
millions of new places, including restaurants and
stores, by the end of the year. Today, we are announcing
that soon in Google Maps, you'll be able to see how busy
a place is directly on the map without having to search
for a specific business. This can help you make more
informed decisions about what places you visit
and when, helping you and your loved ones stay
safe, healthy, and informed. We're also adding COVID-19
safety information front and center on business profiles
across Google Search and Maps. This will help you know
if a business requires you to wear a mask, if you need
to make an advance reservation, or if the staff is taking
extra safety precautions, like temperature checks. One striking example where
AI is helping us better understand information
is with video. Through our ongoing
research, we've advanced our ability
to understand the deep semantics of a video. When you search for the
best cordless vacuums, the technology
will automatically highlight the points in videos
that compare different products, so you don't need to painfully
scan the entire video. Crucially, all of
this analysis happens behind the scenes before
you even ask your query, and these examples highlight
what sets our approach apart. So having done the heavy
lifting behind the scenes, a second element is
to deliver the best results on every query. Delivering high
quality results is what has always set Google
apart from other search engines, even in our earliest days. And in a year when access
to reliable information is more critical than ever, from COVID-19 to natural disasters to important moments of civic participation around the world, our long-standing commitment to quality remains at the core
of our mission. While dozens of
ingredients go into this, I want to highlight two. Natural language understanding
and the rigorous evaluation of search results. As we shared last year, we
made a huge breakthrough in natural language
understanding with a new model we call BERT. BERT models consider the
full context of a word by looking at the words that
come before and after it, which is particularly useful for understanding the intent behind search queries.
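To see what "context on both sides" buys you, here is a minimal sketch using the openly released bert-base-uncased checkpoint via the Hugging Face transformers library; this illustrates the technique only, not Google's production Search stack, whose models are far larger.

```python
# pip install transformers torch
from transformers import pipeline

# Load a public BERT checkpoint for masked-word prediction.
fill = pipeline("fill-mask", model="bert-base-uncased")

# The words BEFORE the blank are identical in both sentences; only the
# words AFTER it differ, yet the model's guesses change accordingly,
# because it reads context from both directions.
for text in [
    "He went to the [MASK] to buy some bread.",
    "He went to the [MASK] to catch some fish.",
]:
    best = fill(text)[0]
    print(f"{best['token_str']!r:>10}  <-  {text}")
```

At launch, BERT helped improve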
search on a massive scale, impacting 1 in 10 searches
in the US in English. Today, I'm thrilled to
share that BERT is now used in almost
every English search and we've expanded to
dozens of languages and seen particularly impressive
gains in Spanish, Portuguese, Hindi, Arabic,
German, and Amharic. Advances in natural
language understanding are also helping
people save time. Over the last two years, Duplex,
a conversational technology, has helped people complete
more than a million tasks, like making reservations
at a restaurant. And since the beginning
of the pandemic, Duplex has made calls to
businesses in eight countries to confirm things
like opening hours or whether they offer takeout and delivery. This enabled us to make
three million updates to business information that
have been seen over 20 billion times in Maps and Search. So that's natural
language understanding. But delivering the highest
quality results also necessitates a rigorous
evaluation process. Every year, we do
hundreds of thousands of tests and
experiments to measure the quality of our results and
ensure that any improvements really make our results better. Last year alone, we made
more than 3,600 updates to make Google more helpful. The third element is world
class security and privacy. As we design our
products, we focus on keeping your
information safe, treating it responsibly,
and putting you in control. Protecting you
starts with security. From the beginning,
we've led the industry in keeping you safe
while searching. We encrypt every
search on Google, including both the queries
and the results returned, and we're constantly
fighting against bad actors, whether that's protecting
you against malware or alerting you to phony
sites that try to steal your personal information. Every single day, we detect more
than 25 billion spammy pages. If each of those pages
were a page in a book, that would mean almost 20
million copies of "The Lord of the Rings" trilogy. To keep you safe, we
deployed technology like Google Safe Browsing,
which shows warnings when you attempt to
navigate to dangerous sites or download dangerous files. Today, more than
four billion devices are protected by
Google Safe Browsing and we show three million Safe
Browsing warnings per day. Now let's talk about privacy. We believe that
privacy is personal and it's our responsibility to
provide users with easy to use controls so they can
choose the settings that are just right for them. I want to highlight three. The first is the
Google Account, which is all of your easy to use
privacy settings in one place. This year alone, it's
been visited more than 12 billion times. The second is Privacy
Checkup, which lets you choose the settings
that are just right for you. So far this year,
291 million people have taken a privacy checkup. The third is auto
delete controls, which give you the choice
to have Google automatically and continuously delete
your search data after three months or 18 months. We believe products should
keep your information for only as long as it's
helpful and useful to you. We believe that privacy
is a universal right and we're committed to giving
every user the tools they need to be in control. I also want to affirm that your searches are between you and Google. We have a responsibility to
protect and respect your data. That's why we never sell your
personal information, period. A fourth and final element
is open access for everyone. The internet is nothing short
of a technical and economic marvel. Over the past two decades,
we're proud to have been a part of the creation
of millions of businesses, entire industries. The traffic we send
to the open web has increased every year
since we first created Search. That's more readers of
articles, downloaders of apps, and customers of
businesses every year for the last 22 years. We also believe in
open access no matter what language you
speak, which is why Google is available in
over 150 languages across 190 domains. We are constantly
adding more languages, including ongoing work to
help you set your search language to ancient
Egyptian hieroglyphs. And we're continuing
to invest in scripts, such as recent improvements to Zawgyi, a widespread font encoding
used across Myanmar. And fundamental to open
access is the belief that Google Search should
be free for everyone, accessible on any device, so
you can find what you're looking for and form your own opinions. The ranking factors and
policies are applied fairly to all websites and this
has led to widespread access to a diversity of information,
ideas, and viewpoints. I want to close by
reiterating how seriously we take our mission. We are able to deliver
a higher standard for Google Search because
of our ability to innovate in these four elements. Underlying all of the
work I just spoke about is the need for prodigious
amounts of processing far beyond traditional hardware. That is why we built Cloud
TPUs, accelerator chips developed specifically
for neural network machine learning. The latest Cloud TPU parts are
capable of over 100 petaflops. To put this into perspective,
the difference in speed between one such
part and the Cray 1, an iconic supercomputer
built in 1975, is greater than the difference
between the speed of light and the speed of a turtle. I honestly never
imagined in my lifetime comparing the Cray to a turtle.
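For the curious, the arithmetic behind that comparison roughly holds up; here's a hedged back-of-envelope in Python, where the turtle's walking speed is my own assumption rather than anything stated in the talk.

```python
# Back-of-envelope check; all figures are approximate.
tpu_pod_flops = 100e15   # ~100 petaflops per Cloud TPU pod
cray1_flops   = 160e6    # Cray-1 peak was roughly 160 megaflops
light_mps     = 3.0e8    # speed of light, m/s
turtle_mps    = 0.5      # assumed turtle walking speed, m/s

print(f"TPU pod vs. Cray-1: {tpu_pod_flops / cray1_flops:.1e}x")  # ~6e8
print(f"light vs. turtle:   {light_mps / turtle_mps:.1e}x")       # ~6e8
```

From new technologies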
to new opportunities, we're really excited
about the future of Search and all of the ways it can help
us make sense of the world. As I said in the
beginning, search will never be a solved problem. So to show you new
ways in which we're bringing the most advanced
AI into Search, here's Cathy. [MUSIC PLAYING] CATHY EDWARDS:
Information has the power to unlock so many
things for people in so many different situations. To actually deliver good,
helpful, reliable information is really what drives me. When you type into
Google fruit that looks like an orange
puffer fish, what happens in the next
few milliseconds is the work of
thousands of engineers writing hundreds of millions
of lines of code over 22 years. Perfect search is a holy
grail and we will never stop on our quest. But recent
breakthroughs in AI have helped us make some of the
biggest improvements to search ever, to give you the
name of the fruit kiwano when you need it and much more. Now, we've all got those
words that, try as you might, you just can't
remember how to spell. Mine is bureaucracy. Is the E-A-U bit before
the R or after the R? Where do the Cs go? I can never remember. In fact, 1 in 10 search
queries are misspelled. But it doesn't matter, because
our Did You Mean feature is there to help. We've been building
this spelling technology for 18 years. How? By looking for mistakes. We look at all the ways
in which people misspell words in queries and text
all over the web and use that to predict what
you actually mean. It's working pretty well. But by the end of this month,
it's going to work even better. And not just a little
bit better, a lot better. It's going to improve
more in this one day than it has for the last
five years combined. This is because of a new
spelling algorithm that uses a deep neural net with
680 million parameters that can better model the weird
edge cases in our mistakes, like when you need
context to figure out that what looks like a correctly
spelled word is actually a misspelling, or when
your typo is just really, really, really far off. And it runs in under
three milliseconds, which is faster than one flap
of a hummingbird's wings.
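To give a flavor of the candidate-generation idea that classical spell correctors are built on, here's a tiny illustrative sketch; the vocabulary and helpers are invented for the example, and the production system described above is a 680-million-parameter neural model, not anything like this.

```python
# Toy noisy-channel spelling correction: generate nearby candidates and
# pick the most frequent dictionary word. Illustrative only.
WORDS = {"bureaucracy": 1000, "democracy": 900, "accuracy": 800}

def edits1(word):
    """Strings one delete or one adjacent swap away from `word`."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = {a + b[1:] for a, b in splits if b}
    swaps = {a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1}
    return deletes | swaps

def correct(word):
    if word in WORDS:
        return word
    candidates = [w for w in WORDS if word in edits1(w) or w in edits1(word)]
    return max(candidates, key=WORDS.get, default=word)

print(correct("bureacracy"))  # -> bureaucracy (the dropped 'u' restored)
```

So we can now correct even more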
obscure things than before. It's an amazing example
of using AI in practice. Beyond spelling, AI
is improving results for all types of searches. Broad searches and
super specific searches. Specific searches
are the harder ones. Sometimes it's like looking
for a needle in a haystack. That one sentence that
answers your question is buried deep in the page. It's pretty hard to
return the right results in these situations because the
information you're looking for isn't the main
focus of the page. The relevance of
that one section is sort of watered down by
the lack of relevance of all the other words on the page. To understand what's
happening here, you need to understand
how Search works. At the heart of
Search is an index, like the one at
the back of a book. And our index contains
not just webpages, but images, places, books,
products, streets, buildings, videos, and more. We've recently made
another breakthrough and are now able to not
just index webpages, but individual passages
from those pages. This helps us find that
needle in a haystack because now the whole of
that one passage is relevant.
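In spirit (and only in spirit; the production ranking system is vastly more sophisticated), passage-level scoring looks something like this sketch, where every page, query, and function name is made up for illustration.

```python
import re

# Split pages into fixed-size passages and score each one on its own, so
# a single highly relevant paragraph can beat diffusely on-topic pages.
def passages(page_text, size=50):
    words = page_text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def tokens(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def score(query, passage):
    q, p = tokens(query), tokens(passage)
    return len(q & p) / len(q)  # fraction of query terms the passage covers

def best_passage(query, pages):
    scored = [(score(query, psg), psg) for page in pages
              for psg in passages(page)]
    return max(scored)

pages = [
    "all about uv glass and the special film you need for your windows",
    "to check if your house windows are uv glass, hold up a flame and "
    "look at the color of its reflection in the pane",
]
print(best_passage("how can i determine if my house windows are uv glass",
                   pages))
```

So, for example,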
let's say you search for something pretty niche like,
how can I determine if my house windows are UV glass? This is a pretty tricky query. We get lots of webpages
that talk about UV glass and how you need a special
film, but none of this really helps a
layperson take action. Our new algorithm can zoom
right into this one passage on a DIY forum that
answers the question (apparently, you can use the reflection of a flame to tell) and ignore the rest of the posts on the page that aren't quite as helpful. Now, you're not going to
do this query necessarily, but we all look for very
specific things sometimes. And starting next
month, this technology will improve 7% of search
queries across all languages. And that's just the beginning. So AI has helped us
with specific searches, but what about broader searches? Here's one. When gyms were closed, a
lot of people, including me, set up mini gyms in their homes. I started by doing a search
like home exercise equipment, and I got a lot of websites
that list all the equipment you might need. This is helpful, but wait. What if I have a small
space, or a small budget, or I just want to go for
the best of the best? How do I figure out
what's right for me? We think about these
different aspects as subtopics of
your broad question. We've now started
to use AI to learn about these
subtopics, which lets us capture the particular nuance
of what each page is really about. We can then use this
information in ranking to show a wider range of
results for broad queries, so we can be more helpful
to all the types of people who want to find
something that's just right for their
particular need.
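One simple way to picture using subtopics in ranking: tag each candidate result with a learned subtopic, then interleave across subtopics so the first page covers more than one interpretation of a broad query. In this sketch the subtopic labels are hand-written stand-ins for what the models learn, and everything else is invented for illustration.

```python
from itertools import chain, zip_longest

# Candidate results for "home exercise equipment", grouped by subtopic.
results = {
    "budget":      ["resistance bands", "jump rope"],
    "small space": ["foldable bench", "door pull-up bar"],
    "premium":     ["smart rower", "adjustable dumbbells"],
}

def diversify(by_subtopic):
    """Round-robin across subtopics: one result per subtopic per round."""
    rounds = zip_longest(*by_subtopic.values())
    return [r for r in chain.from_iterable(rounds) if r is not None]

print(diversify(results))
# ['resistance bands', 'foldable bench', 'smart rower',
#  'jump rope', 'door pull-up bar', 'adjustable dumbbells']
```

Sometimes the answer you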
need is in a webpage, but other times it's in a video. Getting to the right
information inside the video quickly can be a
challenge, though. For example, I was
looking for a quick fix for my leaking bathtub plug. It's one of those annoying
ones with the push button. I found a video to help
me that seemed promising, but the crucial tip
I was looking for came at the seven minute
and 14 second mark. Some content creators mark these
key moments in their videos, making it easier to jump
straight to what's useful. But doing this is
work for the creators. As you heard Prabhakar
mention earlier, we've been experimenting
with a new AI-driven approach to deeply understand videos,
combining advanced computer vision and automatic
speech recognition. This lets us tag
these key moments and allows you to navigate
them like chapters in a book. So, for example, if you're
making smothered baked pork chops, you can jump
right to the part of the video that shows how to
thicken the sauce at the end.
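As a rough intuition for what that tagging produces, imagine the video's speech has already been transcribed with timestamps; key moments then become labeled jump points. The cue phrases and timestamps below are invented, and the real system combines computer vision with speech recognition rather than keyword matching.

```python
# Timestamped ASR output for a cooking video (invented data).
transcript = [
    (0,   "welcome back today we're making smothered baked pork chops"),
    (95,  "first step season the chops on both sides"),
    (310, "next step sear them in a hot pan"),
    (434, "final step thicken the sauce with a little flour"),
]

CUES = ("first step", "next step", "final step")

def key_moments(segments):
    """Keep segments whose text contains a chapter-like cue phrase."""
    return [(t, text) for t, text in segments
            if any(cue in text for cue in CUES)]

for seconds, label in key_moments(transcript):
    print(f"{seconds // 60}:{seconds % 60:02d}  {label}")
```

We just talked about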
how we're applying AI to make Google
search even better, but AI can do even
more, and we're particularly excited about the
ways it can help journalism. Over the past couple
of years, we've been working closely with
journalists on a new tool to help with one of the most
important parts of their work, uncovering information. For those of you watching
who aren't journalists, just try and imagine
how much time and effort it can take to sift through
everything from court documents and email archives
to stacks of images, and don't forget about those
pesky handwritten notes in the margins. So we developed Pinpoint,
an anchor product of Journalist Studio,
a new suite of tools that harnesses the power of
Google to help reporters. Pinpoint helps you rapidly go
through hundreds of thousands of documents by automatically
identifying and organizing the most frequently mentioned
people, organizations, and locations.
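The technique underneath (named-entity recognition plus frequency counts) can be approximated with open-source tools; this sketch uses the spaCy library and is only an approximation of the idea, not Pinpoint's actual implementation.

```python
# pip install spacy && python -m spacy download en_core_web_sm
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")

def entity_index(documents):
    """Count the people, organizations, and places mentioned in a corpus."""
    counts = Counter()
    for doc in nlp.pipe(documents):
        for ent in doc.ents:
            if ent.label_ in ("PERSON", "ORG", "GPE"):
                counts[(ent.label_, ent.text)] += 1
    return counts.most_common()

docs = ["Gov. Smith met with Acme Corp officials in Springfield."]
for (label, name), n in entity_index(docs):
    print(label, name, n)
```

This lets journalists sift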
through their document trove much more quickly,
whether you're looking for all
references to a source or if a person and a place
are ever mentioned together. AI surfaces new
connections that we've heard from multiple newsrooms
would be practically impossible to discover by hand. And reporters don't have
to start from scratch. Pinpoint includes shared
document collections from some of our key partners. Pinpoint will allow
reporters to upload and analyze documents
in seven languages, and it's available for free now. Here's a look at
how journalists are using Pinpoint to
uncover new patterns and find the facts that matter. [MUSIC PLAYING] MARISA KWIATKOWSKI:
The biggest challenge for reporting honestly
is time, right? It's really difficult to
say no when somebody wants you to look into something
that you believe is important but you just don't
have the time to do it. My colleague Tricia
Nadolny and I were asked to handle
investigations relating to social services
during the pandemic. Our focus really became
largely about nursing homes during that time period. We requested records
from all 50 states and I uploaded
them into Pinpoint. Pretty much immediately
when you upload documents and that side populates with the
list of names that are included or places that are included,
you can kind of very quickly get an idea of what is in there. We compiled that into a
database that we then analyzed and provided to the public. For the first
time, some families were able to look up their loved
ones' assisted living facility and see whether there had been
a case because in some states and in some counties, facilities
were not communicating directly with families in a way where
they felt like they understood what was going on. In terms of
investigative journalism, our job is to expose
wrongdoing, to shed light on something that's
going wrong in the hopes that maybe it can go
in a better direction. APARNA CHENNAPRAGADA:
The magic of search for me is like, regardless
of where you are, who you are, what you do,
you have all that information at your fingertips. As Cathy just shared, we're
applying advanced machine learning to help you
find the answers you're looking for on Google, even
for the most obscure questions. But with all of this
firepower, Google can also help you
really understand a topic, whether it is how
does photosynthesis work, or how has the median age
in the US changed over time, or what type of car is best
for your growing family. And we can help you understand
regardless of how you search, whether it's through a
text query, your voice, or even with your
smartphone camera. Let me show you
a couple of areas where we can be
particularly helpful. The first is learning. Students around the
world, like my son, have been handling remote
learning with resilience and grace all the time. Maybe most of the time. At Google, we are
working hard on tools to help lighten their load, like
Google Lens, our visual search tool that lets you
search what you see. So if you're learning
a new language, you can already use
Lens to translate over 100 languages with your camera. But now you can tap to hear
words and sentences pronounced out loud. GOOGLE LENS: [NON-ENGLISH] APARNA CHENNAPRAGADA: And
if you're a parent like me, your kids may be
asking you questions about things you never thought
you'd need to remember, like quadratic equations. Remember them? From the search bar
in the Google app, you can now use Lens to snap
a photo of a homework problem and learn how to
solve it on your own. To do this, Lens first turns
an image of a homework question into a query. And based on the
query, we show you step by step guides
and videos to help you understand the problem. And then, using a new learning
element of the knowledge graph, we show you the broader
foundational concepts you need to solve this problem,
but also others like it. This works across math, chemistry, biology, physics, and more.
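Since quadratic equations came up, here is the kind of step-by-step walkthrough such a guide gives, as a small self-contained Python sketch; it is illustrative only and, unlike Lens, starts from coefficients rather than a photo.

```python
import math

def solve_quadratic(a, b, c):
    """Solve a*x^2 + b*x + c = 0, showing each step."""
    print(f"Equation: {a}x^2 + {b}x + {c} = 0")
    d = b * b - 4 * a * c
    print(f"Step 1: discriminant d = b^2 - 4ac = {d}")
    if d < 0:
        print("Step 2: d < 0, so there are no real roots.")
        return None
    x1 = (-b + math.sqrt(d)) / (2 * a)
    x2 = (-b - math.sqrt(d)) / (2 * a)
    print(f"Step 2: x = (-b ± sqrt(d)) / 2a = {x1} or {x2}")
    return x1, x2

solve_quadratic(1, -3, 2)  # roots: x = 2.0 and x = 1.0
```

But sometimes seeing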
is understanding. For example, it can be
more powerful to see the inner workings of a plant
cell than to read about it. We think AR can make hands-on
learning a little easier, letting you explore concepts
up close in your space. Here's how one
teacher from Texas is incorporating AR
into her lesson plans. MELISSA BROPHY-PLASENCIO:
I heard about a teacher. The teacher was
very animated, a lot of fun anecdotes and stories. And the teacher was
so involved in what they were saying
that they simply didn't recognize that they had
been on mute the entire time. It's very hard engaging students
in this new environment. Turn to page 56, OK. There's page 56. With AR, we are able to
bring it into real life and manipulate it. Hey, what's inside of there? Oh, wow, I never
realized metallic bonding looked like that. That's where we really
see kids catch on fire and start to learn things
that are exciting to them. It opens up possibilities that
they never thought of before. We're talking about an
experience that's going to stick with kids forever. Oh, my goodness. There are so many ways. It's super exciting. I think it's an
enormous game changer. APARNA CHENNAPRAGADA: Whether
or not we are in school, we are all learners and we all
have questions about the world. And sometimes, answering
these questions requires data, data that's
spread across multiple sources, that's in messy formats,
and just not easy to get to. That's why, since 2018,
we've been working with the US Census Bureau, the World Bank,
and many other organizations on this open database
called Data Commons. And this data is
already being used by students and researchers
across many universities. Now we are going further. We're making Data Commons
available directly in Google Search as a new
layer of the knowledge graph. So if you want to learn
about a topic like employment in Chicago, you can simply ask Google. Let's take income levels. We can break it
down by individual versus household incomes. We can also show it in
the context of the trends overall in the US. Or you want to know
what jobs are trending. Jobs, it turns out, are defined across more than 2,500 categories, so we have to aggregate them at the right level so that you and I can easily understand them. In both these examples, we use
natural language processing to understand your
intent, then map it to relevant sources
in Data Commons, and then present the
results in a visual format.
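Data Commons also has a public Python client, so the same data behind these answers can be queried directly. A hedged sketch follows; the place and variable identifiers are my best guesses at the relevant ones and may not match exactly what the Search feature uses.

```python
# pip install datacommons
import datacommons as dc

chicago = "geoId/1714000"  # Data Commons ID for Chicago, IL (assumed)
value = dc.get_stat_value(chicago, "Median_Income_Person")
print(f"Median individual income in Chicago: {value}")
```

Another area where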
you need to make sense of a lot of information
to make decisions is shopping. This new normal that
we are all living has dramatically accelerated
the shift to e-commerce. That's why we are
making improvements to shopping that help both
merchants and consumers. For merchants, we
recently made it free to list and sell on
Google, so it's easier for them to reach the
millions of shoppers who come to Google every day. For consumers, we're
making it easier to discover and learn
about new products, even when they're difficult
to describe in words. Say you're shopping for
your work from home clothes. You're browsing online and
you come across a picture of this sweater. What would you search for? Mustard yellow, ruffled sleeves,
cotton, crewneck sweater? Starting next month,
you can just long press on any image you
come across as you browse on the Google app or Chrome. Not only will Lens help you
find the exact or similar items, it'll also suggest
ways to style it. To pull this off, we take
the world's largest database of products and combine it with
our Style Engine technology, which understands different looks and concepts, like ruffles or vintage denim, and also figures out how to match them with other apparel based on millions of style images. So that's how you can shop what you see with Lens. But what about being able
to see what you shop, especially when you cannot
go into stores to check out a product up close? With AR, we're bringing
the showroom to you. If you're in the
market for a new car, you can just search
for it on Google and you can see a photo
realistic model right there in front of you. We've now built cloud
streaming technology so that if you're connected
to a high speed network, you can check out what your
dream car looks like in blue, in black, or metallic red. And you can zoom in to
see intricate details like the steering wheel or
the material of the seats. You can also see the car against beautiful backdrops or bring it right into your driveway. We're working with
top auto brands to bring these experiences
to you in the coming months. Another area that
AR can be helpful is in understanding
the world around you. That's why last
year we introduced Live View on Google Maps,
which uses AR to show you which way to walk using arrows
and directions overlaid right in front of you. In the coming months, we
paired the same technology with helpful information
about a place right from within Live View. For example, you can
quickly see information about a restaurant: is it
open, how busy it tends to get, its star rating, and more, by
simply pointing the camera. Whether you type, use your
voice, or point your camera, you can use Google
in different ways. And there's one more way
that some of our engineers have been working on. Take a look. SPEAKER 1: [WHISTLES] SPEAKER 2: [SCATS] SPEAKER 3: [HUMS] SPEAKER 4: [HUMS] SPEAKER 5: [WHISTLES]
Oh, come on. [WHISTLES] SPEAKER 6: What's that song? [HUMS] [MUSIC - TONES AND I, "DANCE
MONKEY"] APARNA CHENNAPRAGADA:
That's right. Now you can hum to search. Believe it or not,
people ask Google what song is playing almost
100 million times a month. And behind the scenes,
it's no surprise to you we use machine learning
to match your hum or whistle to the song. We hope you have fun with it.
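One classic intuition behind melody matching: reduce the audio to a key-invariant pitch contour (did the melody go up, down, or stay the same?) and compare contours. The sketch below is only that intuition; the real feature uses learned embeddings, and the note sequences here are invented, not the actual tune.

```python
def contour(pitches):
    """Map a pitch sequence to 'U'p / 'D'own / 'S'ame steps."""
    return "".join(
        "U" if b > a else "D" if b < a else "S"
        for a, b in zip(pitches, pitches[1:])
    )

SONGS = {  # toy reference melodies (made-up MIDI pitches)
    "Dance Monkey (toy data)": [64, 64, 67, 64, 62, 60],
    "Some other song":         [60, 62, 64, 65, 67, 69],
}

def match(hummed):
    """Return the song whose contour best overlaps the hummed contour."""
    target = contour(hummed)
    def overlap(name):
        ref = contour(SONGS[name])
        return sum(x == y for x, y in zip(ref, target)) / max(len(ref), 1)
    return max(SONGS, key=overlap)

# Same shape, different key -- the contour still matches the first song.
print(match([52, 52, 55, 52, 50, 48]))
```

So today, you've heard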
from Prabhakar and Cathy about how we're using the
most advanced machine learning to constantly innovate
on Google Search, as well as examples
of how we're committed to deeply understanding all
of the world's information. And we're excited to
bring you on our journey to provide high quality
information to all the world. Information that's open,
useful, and easily accessible to everyone, that's
built with world-class privacy and security
to keep you safe, that provides rich understanding
no matter how you search, and even in these difficult
times, brings a little joy. Thank you. Be safe. [MUSIC PLAYING] PRABHAKAR RAGHAVAN: How do
I work with the hypotenuse? APARNA CHENNAPRAGADA:
Do we have three hours? No, I'm kidding. CATHY EDWARDS: You know,
I never did spelling bees when I was a kid. APARNA CHENNAPRAGADA:
P-A-R-A-P-H-E-A-R-- PRABHAKAR RAGHAVAN: N-A-L-I-A. APARNA CHENNAPRAGADA: How
does photosynthesis work? CATHY EDWARDS: A plant needs
to get energy from the sun. PRABHAKAR RAGHAVAN:
And I was never very good at the life sciences. APARNA CHENNAPRAGADA: Beats me. CATHY EDWARDS: I have
no idea and I apologize. [MUSIC PLAYING]