[MUSIC PLAYING] MANJUNATH DEVADAS: First of
all, it's my honor and pleasure to be presenting along
with Adam from AB InBev. ADAM SPUNBERG: Thank you Manju,
it's a pleasure to be here. Very happy to be onstage
with you and Pluto7, and part of this whole GCP operation. MANJUNATH DEVADAS: Great. Thank you. So a little bit of background-- anyway the slides
will be distributed. So I'm Founder-CEO of Pluto7. We are one of the key
partners for machine learning and AI implementation
at Google, so we work with customers like AB InBev. When they raise innovation
challenges-- right-- when they say, OK, we want to see what machine learning and AI can do for our industry, and whether it's relevant for us. With all the domain
expertise that we bring in-- personally I've been doing 17
years of predictive analytics work and solving for
many large enterprises. I started my career
at Cisco and then went on to solve for
30 different companies. Now with the power of
machine learning and AI, it really opens up our minds
in terms of what's possible. Like we have heard in
many different sessions, including the
keynote today, this is the next big wave which
we are going to embark on. Now with that said,
what we are going to show you is a real
example of something that matters to all the beer
drinkers in this room-- the taste of the beer. So we're going to talk
about how we improve the taste of the beer, and
of course dollar savings, ROI, and so on, with
machine learning and AI. So I'd like Adam to
quickly introduce himself, and then we'll get started. Adam, do you want to give
a little bit of background about you and AB InBev? ADAM SPUNBERG: Sure. So my name's Adam Spunberg. I'm actually in my
first year at AB InBev, but already have had a chance
to work with some great partners like Pluto7 and Manju. I'm leading the digital
transformation of the company on the supply side. Our team is called Tech Supply,
and what we're trying to do is find cutting-edge
opportunities to transform to what we call the
brewery of the future, and that is a world
where, perhaps, predictive maintenance,
artificial intelligence, machine learning--
all of these aspects are incorporated
into the automation so that we're functioning
on a higher level, and doing things that quite
frankly weren't even imaginable a few years ago. So we have a lot of
products at AB InBev. Here's some examples. Budweiser-- I think
you've all heard of that-- Bud Light, as well, of
course, Stella Artois-- we have some up here. Corona-- technically
Constellation has Corona in the US, but we
have the rest, and all of that is brewed in Mexico
in our breweries. And we're the number one
beverage company in the world, and actually the number one
consumer products company in the world-- by some metric anyway. Sure, so we have obviously
a huge operation. It's a truly global
company, and that's something that really excited
me about the opportunity when I jumped on board. The fact that there really is
a presence in every continent-- well, not Antarctica, I should
say, but everywhere else. If someone can find a
way to use technology to make brewing possible
in Antarctica maybe we would get that too. But we have all the continents. We have nine zones,
and that presents a lot of extraordinary
opportunities, but also challenges
with the demographics. And, you know, one area that
can speak a common language, I guess you could say, is
innovation and technology. And that's why one of my
plans in working with Pluto7 and Google is to
find opportunities for using that to create
that common thread, and create a global operation
instead of a fragmented-- this zone's doing this,
that zone's doing that. So machine learning
is actually-- it's not just a cool pilot,
or an interesting cost-saving initiative, or a beer
tasting improvement. It's actually a
mechanism-- a vehicle-- for bringing about more cohesion
within the organization. And that's pretty exciting. MANJUNATH DEVADAS:
Thank you, Adam. Now a lot of the
elements of innovation involve going back to
some fundamental aspects of how we humans behave. Now I'll set the
context, we'll go through a use case and a very quick demo, so that you can relate to how machine learning and AI is applied in a real-world use case-- in this case,
improving the taste of beer with filter
replacement and so on. The last few minutes
we'll have a Q&A session, and we will have some
of my team members who are sitting in the front
as well as some Googlers to help out if there are any
specific questions that we're not able to answer
in a larger setting. So a few things to think about. Why is machine learning and
AI a big deal now, right? It's been around for 30
years, used by the space, defense, and security industries, and so on. But what's the inflection point? The inflection point, really, is a combination of cloud computing, mobile, and the cost of processing going down, which means you
can run a lot more mathematical computation now
to make simple decisions. The decisions could be around
a classification problem-- you're trying to classify
whether the filter is good or bad, which is
an example we look at-- or regression, which is
what I did for 17 years. You know, what's the demand
forecast for the next 12 months, 24 months, and so on? And we used all kinds of statistical models and got an accuracy of 70%.
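To make those two decision types concrete, here is a minimal, hypothetical sketch-- the numbers and column meanings are invented, and this is not the model from the talk-- showing that the same kind of tabular data can feed either a classifier (a discrete label) or a regressor (a number):

    # Toy illustration only: two made-up sensor readings per row.
    import numpy as np
    from sklearn.linear_model import LinearRegression, LogisticRegression

    X = np.array([[4.2, 71.0], [5.9, 88.0], [3.1, 65.0], [6.5, 93.0]])

    # Classification: a discrete label, e.g. 0 = filter OK, 1 = replace it.
    y_class = np.array([0, 1, 0, 1])
    clf = LogisticRegression().fit(X, y_class)
    print(clf.predict([[5.0, 80.0]]))      # -> a class label

    # Regression: a continuous target, e.g. next month's demand.
    y_demand = np.array([120.0, 95.0, 130.0, 90.0])
    reg = LinearRegression().fit(X, y_demand)
    print(reg.predict([[5.0, 80.0]]))      # -> a number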
Now, when we went into one of these companies, and when we said, hey, we are going to beat whatever best model you have-- and we have statistical modeling software and so on-- they were like, we have been doing this for 20 years. There's no way you come in
for six weeks and beat that. To cut the long story
short, we beat it by 20%, and it was a bit shocking
and disturbing to them that we challenged some of
their best demand planners. Right? Now the point I'm
trying to make here is, this is a paradigm shift. And why there's a
paradigm shift is because we are taking something
very fundamental to us and allowing a machine to do
that, which is decision making. If you look at an enterprise-- whether it's invoice processing, supply ordering, demand planning, replacing a part, maintenance-- there
are small decisions made at every stage, many
decisions made by a human. Some decisions are
very expensive. If you make a bad decision,
it can cost a lot of money as well as a lot of disruption. Now you're taking that
very simple decision making and making that into a
machine learning model, which you'll host on GCP, which is a
very quick demo I'll show you as well. So in this journey, whether it's filter replacement or forecasting, if you leave it up to humans-- and by the way, this is by no means to say that we will replace humans. It's to augment humans-- to improve the accuracy level, to help make better decisions, to watch the filter replacement,
not waking up and checking the filters and the
beer quality every time, but letting the machine do that
day in and day out 24 by 7. So leaving it up to humans,
in this decision making there is bias involved. We call it experience,
to some extent. But by the time we get
decades of experience, there's a bias about which data columns are important and which are not. We miss some of the signals in the data patterns of the columns we ignore. Like, in some of the
larger enterprises, we looked at 200
columns of data. Humans wouldn't practically have
time and energy and capacity to scan through that
every hour 24 by 7, and look at patterns
across those 200 columns continuously being generated. So that's where machine
learning plays a role, and some of the
slides that I'm going to share before we go into the brewing example are to help you understand that the conversations that we are having, and the experimentation and innovation that's happening, are not just about predictive maintenance, but about the whole
supply chain ecosystem. We get involved with how
machine learning and AI can help at beating a demand forecast-- in the example I gave for that enterprise, beating it by 20%-- or reducing supply inventory carrying costs by 50%. There's a publicly posted case study on Google Cloud on how we improved [INAUDIBLE] for an Amazon seller by 50%. Their inventory carrying
cost went down by 50%. So similarly, like, can
the deliveries be improved? What opportunities did
I miss because I did not understand my customer better? When I say customer engagement,
customer experience management, it's seen as a marketing
and a sales problem, but you are selling a product. In the e-tailer or online retailer world, the customers are looking at a website, looking at your email campaigns, placing an order. You do not want to run into an out-of-stock situation. Or even if you have
stock, you do not want price to be the factor
where the customer doesn't buy. Or you do not want it to
be a quality issue which is a factor which is reflected
in the customer comments, sentiments, and so on. So these are this variety
of problems that we are solving for our customers. Now, when it comes to preventive
maintenance and so on-- so whether it's tracking inventory,
defective inventory, logistics, or equipment failing, these
are all different scenarios where humans are applying
their decision making. Again, there are certain decisions which are simple, or which we can break into components-- as in, for this particular decision I can build an ML model, and for this one I cannot. And realizing that
is a very key element in this innovation exercise. So Adam, would you like to add
anything around the various use cases that you guys are
thinking in AB InBev, where you think machine learning may
help, or already is helping? ADAM SPUNBERG:
Sure and, you know, part of that digital
transformation I was speaking to before-- we're trying to find ways
to automate the breweries, but in a smart way. It's not just enough to-- we're not trying to replace
people with technology. We're trying to find
technology that-- as Manju alluded to-- improves quality,
reduces costs, and just makes us more efficient in
general as an operation. And for example, take something
like predictive maintenance, which we're alluding to here. Say you have a machine
that, if you're able to predict an equipment failure or an equipment problem-- say, some wear and tear-- in advance, and you replace that equipment faster, you might prevent a shutdown that stalls production or produces a lower-quality product at that time, in a way that goes beyond what a human being is capable of doing. So in essence, it's a
big challenge for us, but as the technology
keeps improving you get this influx of
exciting new opportunities. And our attitude is we want to
work with these cutting-edge possibilities and just keep
exploring and seeing what we can figure out-- testing pilots,
proof of concepts, and hopefully if something
really sticks and shows promise, to then expand
that into a business as usual type of possibility. MANJUNATH DEVADAS:
And when you think of businesses like AB InBev-- and to me, it was
a bit overwhelming when we showed up at their plant for a visit. I have some of my
colleagues here who walked through
the production floor. I believe they make, like,
100,000 barrels a month. Now think about a
production downtime impact. You're trying to
replace a filter, and you have to
bring down a line. Think about the trucks
that are coming in. Think about the packaging, and
the human resource, the labor, customer impact,
distribution impact. So something as simple
as one part replacement can disrupt the
whole supply chain. So when companies-- whether
it's AB InBev or some large enterprises-- look at automation and talk to Google-- and fortunately I've been involved in many of these factory automations for global companies who are distributed around the world-- the C-level executives think about the entire factory automation, all the way from the time a raw material is brought in the door to the time a finished good leaves the door, right? What all can I automate using machine learning, AI, IoT sensors, and the fact that we are collecting all the data on the cloud? Now what can I automate? Because now the expectation is that delivery lead times go down, and the expectation for reducing downtime keeps increasing. So there are a lot of these
things happening at the same time, because the C-level
executives-- the CTOs, the CIOs-- are realizing now
this is a power that they need to harness. So AB InBev, through
their incubator program that Adam just
mentioned, has been thinking about that
for more than a year, or a year and a half. And these are companies
that are, like many, many decades old. And for them to be looking at
this cloud and machine learning and saying, where can I
implement machine learning and AI to improve my
overall manufacturing experience, and so
on, is something that, whether it's AB InBev or
similar large companies, we are looking at where we
plug in which component. Can I improve the
safety of manufacturing? Can we decrease the
production downtime? Can we improve the
quality of the product? So some of you may have seen that in the keynote today by Fei-Fei Li-- I believe she presented on AutoML, and it's pretty fascinating. We are working with a few customers on some pretty innovative use cases. So AutoML is like a hybrid, where you don't really need to get into the depth of the machine learning model, yet with your dataset you can get prediction results. And then there are two extremes, to the left and to the right. On the left are pre-built ML models, such as the Vision API or the Translation API. These are machine
learning models that Google is
continuously training. All you need to pass is an
input, and you get a result.
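As a rough illustration of that pre-built style-- assuming the google-cloud-translate client library and default credentials, and nothing specific to the brewing project-- calling such an API really is just input in, result out:

    # Pre-built model: no training on our side; Google maintains the model.
    from google.cloud import translate_v2 as translate

    client = translate.Client()
    result = client.translate("La cerveza está lista", target_language="en")
    print(result["translatedText"])   # the translated text comes straight back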
Then on the right side is the custom ML model, which is what I'm going to demo,
which is more sophisticated, which gives you a
lot of flexibility, but you have to build your
own machine learning model. So when you look
at this process, you don't want to
look at it as, where can I use this fancy or
shiny object called machine learning and AI-- which is how many customers sometimes start off thinking: OK,
there's something cool. I can use this, and so on. Because at the end of the
day, if you don't really tie to the use case, if
you don't tie to the KPIs, and if you don't convince
somebody like Adam, this project doesn't
go too far, right? And then he has to, in turn,
align with the stakeholders and so on. So you have to relate
to the business and how we are
going to transform. 20, 25 years back,
if somebody said the internet will change many aspects of your business, and every department, it would have been hard for many people to comprehend. We are dealing with the same thing, and some leaders claim-- which I believe is true-- that this is going to be a bigger revolution than the internet era. Why? Because we are handing over part of our decision making and taking the help of a machine. That's quite hard for humans to do. But once you do
that, and you know that it's incredibly
powerful, now we get to the next level with that. We're going to obviously talk
about the brewing example-- what we did and so on. So I spoke to you about
a few other examples. So now the customers are looking
at it from a 360-degree angle, right? What can I do for my
preventive maintenance, my demand, my supply? If I can control my
demand and supply, can I also adjust my price? Think about an online retailer who has to make decisions on price points every hour or every few hours based
on stock, based on demand, and so on. So for them pricing is
not a separate department, marketing is not a
separate department, and supply demand planning
is not a separate department. Even if they exist as
a separate department, they work in a coordinated, harmonious way. So with that, now the
customers are looking at, can you automate different
machinery, devices, X-ray machines? Can you tell me ahead of time when the machine will go down? That's the most important thing. If you tell me right when it fails,
or just a day before it fails, maybe it's useful. But if you can predict a month
or two months in advance, they can line up servicing
folks to show up at the factory. They can have downtime planned
during a Sunday morning when nobody's working. So these are things
that we are looking at. You know, how you take existing
data in the ERPs or wherever and apply GCP machine learning,
and reduce the risk level, improve fault detection, and
have more accurate replacement dates, and so on. So going a little deeper into
the beer exercise itself. It all started with: what can we do with machine learning and AI? Here is our business-- now figure out what you can do. So we had an option to
pick 16 different ideas. We picked filter replacement. Why? Because it had a lot of data. With machine learning-- if
you've not heard of this before-- without data you don't have
a machine learning model. Without enough data, I mean. Number one. Number two is, you have
to bring in a little bit of your innovation thinking hat, because there's a bit of a paradigm shift in designing these models, right? And also you have to think about how you harness information that you have not harnessed before. Data exists in many
forms, and we'll talk about that in a minute. So when we talk about this
filter replacement use case, we validated that we
have a lot of data. And we understood the
business process behind it. If we replace this
filter, what's the business benefit,
how much does it save, what's the business
impact, and so on. So the business rationale
to do this was justifiable. Now beyond that, not only
were we going to save money, but we had a chance to
improve the taste of the beer. And with that started
this whole journey of figuring out what can
be harnessed with data. And there are lots of data. When I say lots of
data in their ERP, it's almost like 200 columns
of many different aspects of the whole brewing
and production process. Now we have to look at all
these different sets of columns of data. Think of it as a massive
Excel spreadsheet for the sake of simplicity. A massive amount of information
coming in continuously, and a brewmaster
relies on a few columns, because that's what he has relied on for a long time, for decades, and for the most part he's been right. Now we have all this data, and we have to improve the accuracy level. So when we looked at it-- before we actually rolled out the model, the accuracy level was at 60% to 70% at best. This is a brewmaster
looking at the beer as it comes out of the kettle,
passes through the filter, and checking certain gauges,
and then also looking at the beer quality itself. There are particles in the beer, and there are terms like turbidity, and so on. So he gauges that, and he determines whether the beer is good or not good. Now the model has to tell when the filter has gone wrong-- not too early, not too late. So now we got to a 92% accuracy level. Now this is something that has not been attempted before, nor was it thought possible. So what was even more
brilliant was the fact that the brewmaster was very
involved in building this out, and he acknowledged that it's
hard for humans to beat this. And we were quite amazed that somebody as experienced as that is ready to go through a transformation and a change. So Adam, do you want to add
anything to it as we progress? ADAM SPUNBERG: Sure. And, you know, the
thing I find interesting is we're trying to improve
the quality of the tasting of the beer, but I find that
the more beers you drink, the better machine learning
and AI seems to taste as well. So it's kind of a
compelling counterpoint. No, but here's what I'll say is
that, I think a lot of people don't know this about
brewing, and I myself actually didn't know this
until I really got into the fabric of AB InBev. Brewing is a really
difficult, long process. It's not like you just
create some concoction, pour it in a can,
and lock it up. In some cases, depending
on what the brand is, it can be a month-long
fermentation process. It's really almost an art form. And so to make an analogy to what you were saying about the brewmaster-- obviously you still need
that human component, but it's sort of like in chess
when the computers finally started beating humans. At some point, no matter
how talented you are, you just can't quite be at that
level that a computer can be. And so we as a company can
either shy away from that or try to find a way to embrace
that in a way that keeps us competitive and trying to have
that marketing position that we have within the world. So if we're not interested
in these things, and not exploring what I think
is unmistakably the future-- and it's not just beer. We're using this
as our case study, but this could be anything. I mean, I think even
beyond the level that we can even
comprehend right now. This is just the beginning. So I don't know if I'm
convinced, personally, that it will be a bigger revolution
than the internet, but I also think that
100 years from now we might be bowing to robots. So who knows? It's really hard to predict. But I think at this stage,
this is responsible AI. This is safe. These are controlled settings in
which you're using technology to improve production in
a way that helps business, and also, we hope,
gives you a chance to take that gulp of
Budweiser and say, oh, the aftertaste was
just a little bit better than last week. That's what we're
going for, anyway. MANJUNATH DEVADAS: Like
we were talking about, the filter replacement itself. So it's about looking
at these kettles, looking at the filters,
and instead of a human judging when to
replace the filter, it is using the patterns in that data-- those 200-odd columns. And you do feature engineering and get to selecting 15 or 20 columns, which our data scientists do. They say, OK, these are the 20 which seem to be relevant.
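A very rough sketch of that narrowing-down step might look like the following-- the file and column names are invented, and in the real project the shortlist was driven as much by domain experts as by scores like these:

    # Hypothetical: score ~200 numeric columns against a 0/1 "replace filter"
    # label and keep the 20 most informative ones as a starting shortlist.
    import pandas as pd
    from sklearn.feature_selection import SelectKBest, mutual_info_classif

    df = pd.read_csv("brewing_history.csv")        # placeholder export of the ERP columns
    y = df.pop("filter_needs_replacement")         # placeholder 0/1 label

    numeric = df.select_dtypes("number").fillna(0)
    selector = SelectKBest(score_func=mutual_info_classif, k=20).fit(numeric, y)

    shortlist = numeric.columns[selector.get_support()]
    print(list(shortlist))                         # candidates to review with the brewmaster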
Now, again, keep in mind that these data scientists are not experts in the brewing business. We have separate domain experts
who understand supply chain and so on, and we have
the customer work with us through this innovation. And we say, hey, we see a
data pattern which indicates that the filter is going wrong. It doesn't seem to make sense. And the brewmaster and
experts at AB InBev say, oh yeah, that
seems to be more accurate than how we do it. So that's kind of how you
go through it iteratively. And when you do this,
what's important is, as the beer
quality degrades, it's not a very definitive
time when the beer goes bad, and it kind of
gradually goes bad. And ideally you want
to get to a point where you know that it has now crossed
that line where it doesn't meet quality control. And because it's
a drink, and there is subjectivity to
taste and clarity, and a few other factors
of quality control, you have to kind of pick
it at the right time. If you pick it too slowly or too late, then you've let go,
you know, you've rolled out a whole bunch
of lower quality beer. If you replace it too
early, you've not only wasted a whole
bunch of good beer, but also you've increased the
cost of filter replacement-- more filter replacements. So getting it right
at the right time was the most important thing. And moving away
from 60% accuracy, and avoiding the situation
where there is 40% inaccuracy. When I say inaccuracy, as
in you replace the filter when you should not have, or you did not replace the filter when you should have. And a lot of the inaccuracy was because of the fact-- and this happens
in many enterprises in many different ways in
other use cases as well. Because of our experience,
we think certain attributes are most relevant. It could be clarity of beer, it
could be temperature, pressure, it could be something, right? Basically, if I'm the brewmaster, I'm going to look for those data points, and whenever something exceeds a threshold, or the beer looks a certain color, then I'm going to decide that the filter is ready to be replaced, and I do that. But there is a lot of data which doesn't have very obvious signals or signs, and it's hard for us to apply that,
because when you have hard data continuously getting
updated, and 200 columns, and so on, it's
kind of hard for us to look and see the
significance of this other data. This is where, when you apply a machine learning model, you are now looking at not only
data that you always looked at, but also data which you
thought was less significant. This is where our data
scientists go into feature engineering, and we look
at all the columns and say, these columns seem to be significant, because when the data arrives at this pattern, it's indicating that it's time to replace the filter. So with that, what you're really doing is understanding your process, understanding the parameters of the data that you capture, and building the machine learning model. Before I go into
the model itself, Adam, do you want
to add anything? ADAM SPUNBERG: No. I would just say to echo
what you were saying, Manju, there really
is a precise art to this. And to give a really
simple example, I don't know if you all remember
those games that, at least, we had when I was a kid, where
you'd try to roll a ball, and there'd be two humps,
and if you push it too hard it would go over
the second hump, but if you didn't push it far
enough, it didn't make it. You're trying to find that
sweet spot in the middle. And because you have to err
on the side of caution-- this is, after all,
a consumer product that people are
drinking-- we're never going to release something
into the marketplace that's not up to standard. We're going to err on the
side of being more careful. What this allows us to do
is to know with certainty-- or with near
certainty, and we'll keep perfecting that model-- that no, we can wait to replace
that filter at this point, or in the alternative scenario,
replace that filter right now, because we're going to have
to throw out some beer that's not going to pass muster. So it really is a
tremendous value that this is bringing to us. And we'll see how
it impacts things as we keep working with Pluto7. But we have really
high hopes and optimism for how this is going to affect
our business and our quality. MANJUNATH DEVADAS: Great. Thanks, Adam. So what does the underlying
architecture itself look like? It's pretty simple, and if
you're familiar with GCP, if you're already using it, you're like, ah, you recognize some of these logos. So there is your ERP, where the data comes from, and Dataflow, which is the equivalent of ETL in a traditional analytics architecture, but with more power, scalability, and auto-scaling. The data, we brought that into BigQuery.
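As a sketch of what that ingestion leg can look like with Apache Beam on Dataflow-- the bucket, project, table, and file names here are placeholders, not AB InBev's actual pipeline:

    # Hypothetical Dataflow job: read ERP exports from Cloud Storage into BigQuery.
    import json
    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    options = PipelineOptions(runner="DataflowRunner", project="my-project",
                              region="us-central1", temp_location="gs://my-bucket/tmp")

    with beam.Pipeline(options=options) as p:
        (p
         | "ReadErpExport" >> beam.io.ReadFromText("gs://my-bucket/erp/brewing_*.json")
         | "ParseJson" >> beam.Map(json.loads)
         | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
             "my-project:brewing.sensor_readings",   # assumes the table already exists
             write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))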
And we built a custom ML model on GCML. And when I say custom ML model, remember the three kinds of ML models I mentioned: the one you saw this morning, AutoML, is the one in between. The one on the left is the predetermined, prebuilt model by Google,
where you don't build or train. And the one on the right
is the custom ML model. Well, for the small section of the audience that is relatively new to machine learning models, here's how you want to think about using a machine learning model-- I use this analogy whenever I run customer workshops: think of it as if you just hired an intern. You need to train the intern on a job, then you check whether the intern is doing well, and then you put them on the real job. It is the same process that you go through with machine learning. You build the model, you train the model, you give it the data and see whether the model learned it. And then once you have confidence that the model has learned it, then you deploy the model and operationalize it. So we are kind of mimicking
the same process here, and the model continues to learn based on how you define it, and so on.
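To make that intern analogy concrete, a toy version of the build/train/check/deploy loop might look like this-- random data and an invented feature count; this is a sketch, not the production model:

    # Toy lifecycle: hire the intern (build), train it, check it, put it on the job (deploy).
    import numpy as np
    import tensorflow as tf

    # Build: a small binary classifier over, say, 20 selected sensor features.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    # Train: historical rows with a 0/1 "replace the filter" label (random here).
    X_train, y_train = np.random.rand(1000, 20), np.random.randint(0, 2, 1000)
    model.fit(X_train, y_train, epochs=5, batch_size=32)

    # Check: held-out data tells us whether the "intern" actually learned the job.
    X_val, y_val = np.random.rand(200, 20), np.random.randint(0, 2, 200)
    print(model.evaluate(X_val, y_val))

    # Deploy: export a SavedModel that the cloud serving layer can host.
    model.save("exported_filter_model")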
So I'll kind of quickly go into the GCP console, just to show you what it
means to build a model and to deploy a
model, and so on. While we won't run
the model itself, you have to kind of think: OK, for this scenario, we understood this process of data coming in, and we understood that it's a classification problem-- saying whether the beer filter is good or bad based on the data pattern. And we do this, and we train the model so that it's making good decisions. And training the model
is based on history. Over the last x months or x years, when did filter replacement happen? And whenever those filter replacements happened, what was the pattern of the data at that point in time?
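As a rough, hypothetical sketch of how that history can be turned into training examples-- the table names, column names, and time window are invented for illustration:

    # Hypothetical labeling step: mark sensor rows recorded shortly before a
    # logged filter replacement as "replace" (1), everything else as "OK" (0).
    import pandas as pd

    readings = pd.read_csv("sensor_readings.csv", parse_dates=["timestamp"])
    replacements = pd.read_csv("filter_replacements.csv", parse_dates=["replaced_at"])

    window = pd.Timedelta(hours=12)   # assumed look-ahead window, not the real value

    def label_row(ts):
        upcoming = (replacements["replaced_at"] >= ts) & \
                   (replacements["replaced_at"] <= ts + window)
        return int(upcoming.any())

    readings["replace_soon"] = readings["timestamp"].apply(label_row)
    print(readings["replace_soon"].value_counts())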
And the model is continuously learning from that. The more data and the more training there is, the better-- but it doesn't mean you continuously keep training. There's an optimal point up to which you want to train to get to your level of accuracy, and so on. So with that, let me just
quickly go into GCP Console. So during this whole exercise
of understanding the business process and understanding the use case, we then understand the data, and from there we build a model. The model, once it's built, is deployed in Google Cloud. So this is the GCP Console, in case anybody's new. So we would go and build the model, and we would deploy
the model here. So the model itself is-- at the core of it, you
want to think of it as a mathematical function that
has been data mined, right? Based on your data pattern,
it's a mathematical function, and that's hosted on TensorFlow. So now we leverage the TensorFlow framework, and then you have deployed the model-- version one. And this is a dummy model; for confidentiality reasons, we can't show you the actual model, and so on. So you can put the model to run repeatedly, every day, or however often you want the model to run, and train the model. To train the model,
you schedule it: you have the model that's running, and then you have a training job.
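Once a version like that is live, an application can ask it for a prediction with a simple API call. A minimal sketch-- the project, model, version, and feature names are placeholders, not the real deployment:

    # Hypothetical online prediction request against a deployed model version.
    from googleapiclient import discovery

    service = discovery.build("ml", "v1")
    name = "projects/my-project/models/filter_classifier/versions/v1"

    body = {"instances": [{"turbidity": 4.2, "pressure": 71.0, "flow_rate": 12.5}]}
    response = service.projects().predict(name=name, body=body).execute()
    print(response["predictions"])    # e.g. the probability the filter should be replaced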
So essentially, you build the model, you deploy the model in the cloud, and you train the model. So behind the scenes, when I said feature engineering, and the column
selections, and so on, it's a combination of applying your domain knowledge, applying data science, and understanding some of the practicalities of the model itself-- what you want the model to do. So based on all those things, you select your columns, or features. That's what we have on the left; this is part of the model code. And then on the right is which
deep learning model do you want to apply based
on your use case-- whether it's a regression or, as in this case, a classification-- and which model works best? And there are
different factors that go into which model to select. There was a session earlier
today by Prashant Dhingra. He's probably around
in the audience. He's over there. And you can always ask him more
about which models to select. He's from Google, and has a lot
of experience, not only in data mining-- knowing which is the right model for the data scenario, the business scenario, and the kind of data that you have. It also comes with a lot of the experience that Google brings in by solving not only Google's own internal problems-- internal operational problems where machine learning has been applied; there are thousands or tens of thousands of machine learning models Google uses internally to run its own business-- but, along with that, the industry experience of solving for some of the largest companies. Today you saw Target in the keynote, right? So there are many, many companies Google is continuously solving for. All those learnings are going into these deep learning models built using TensorFlow, and when you use some of the models recommended by Google, there is a lot of intelligence behind them as well, so that some of the feature engineering and all the burden that you would otherwise put on a person building machine learning models-- that intelligence is part of it as well. So there are various sessions
at Google Next which go deeper into these models,
and also during Q&A if there are a few things
that we can answer, we'll do that as well. One of the common
questions that I get asked when we are running
workshops, and proof of concepts, and
production rollouts is, very early on people
ask which model is the best model for what I want to solve. So we have to kind of take
them a few steps back. Do you even need a
machine learning model? Let's figure that out first. Half the time, people don't
need a machine learning model. They think they need a
machine learning model. They don't need it. The second thing
is, OK, even if you need a machine learning model,
do you have enough data-- the right kind of data? Or even if the data is there, sometimes the effort for you to prepare the data means the whole ROI is not justified. And so let's go through
some of these basics. Google strictly follows this discipline of asking whether you need it at all-- and in fact, when I first attended a Google machine learning training, the message was: if you don't need a machine learning model, don't build one,
because you're going to just spend money and realize
that there are better ways to do it. But when you realize that you have the right use case and the right dataset, the power of machine learning will be obvious by itself. So taking this a little further: the whole exercise was done in, like, six weeks, taking accuracy from 60% to close to 92%. And based on the region and zone, we see about $1 million in savings, and so on. Now it's one experiment done
in one part of AB InBev. Now like this, there are
about 15-odd experiments or 20-odd experiments
AB InBev is considering. It's not so much
just the significance of what came out
of this project. It's rethinking how
you run your business. Now looking at your
manufacturing floor and saying, what else can I do with
this now that I'm on GCP, and I'm on the cloud. And I understand how
a machine learning model can be built mimicking
human decision making. So again, it's not
about replacing humans, it's about augmenting humans. We are not going to replace
every human decision-making capability into an ML model. It doesn't work that way. The human brain is incredibly complex. You can't mimic every decision, and those decisions are not like-- you cannot combine them into one holistic set. There are many smaller decisions humans make, and there's an overarching
decision that's made, which kind of works
in coordination. So beyond building
models like this, it's also getting
stakeholder alignment and realizing that
you need more data. These are critical things that
become obvious to our customers as we build these models. Do you want to add
anything, Adam? ADAM SPUNBERG: Yeah, I do. So the stakeholder
alignment, this is an interesting question,
and obviously not every company has the same view of innovation. And within a company, there
are all kinds of apples. So the important thing--
and I'll be honest, this is something
that I've learned is just incredibly
valuable in navigating any kind of big
corporate culture-- is you have to get buy-in
from the right people early on, because if you
start working on something and it's very exciting
maybe to the local team, maybe the brewmaster is really
into it, maybe people on site and even people in the
innovation program. But if you don't have those
higher stakeholders really supporting this,
you're going to run into obstacles along the way. And the way to
overcome that is, you have to make that
case early, and you have to make sure
what you're doing gets visibility
and is understood. And I think this is a
challenge for all of us. To some people machine
learning is a scary concept, or something that's just
beyond their realm of interest right now. And you can't
force that, but you have to find a way to slowly
bring about that understanding and evolution. So that's where I think it's
really effective when you can work collaboratively-- and
this is where Manju and I have really found some
common ground-- is we both saw the
value in this project-- obviously from two
different sides, but really we were
a team on this. We said, OK, what
can we do to make this compelling to
the right people? And we're still winning
some people over, but we got enough
people on board to really understand this. So for those of you
out there in companies that are looking to
pursue this track, I would say definitely
think about that aspect, because the impulse is
just go, how exciting, just run with this. But you have to think
bigger picture if you really want to implement an
impactful change that goes beyond the local aspects. MANJUNATH DEVADAS:
One key element I would like to highlight
is that ML is a journey. You start off with a project,
and you will comprehend first why this is powerful,
and so on, in a business context for our use case. Then it's an effort to kind
of bring everybody else along with you in the journey. And you have to assume that
some portion of the organization will not align with this, for various reasons. Either they just don't want to deal with the new technology, or it appears too complex, or it's not reliable-- whatever their reasons are, there is a certain amount of awareness and education needed, and some folks don't
want to go with it. So you have to realize
what battles you're picking and keep progressing, and
realize that this is a journey. You're in it, your
company is in it, the whole ecosystem is in it. So just to summarize from a
prediction quality standpoint, with a 92% accuracy, taking
about six months of data, and getting to the
right kind of values so that the business
has the confidence that, yes, if we do this there
is a certain amount of money that's getting saved. Not only money
getting saved, there is improvement in
our production-- reduction in our production
downtime, and so on. So I had to go in and present
to the C-level executives on how it was
done, and why there was so much of a difference, and how the manual approach compares to the machine learning model, and so on. So it's pretty
impressive, too, for a-- I believe you're a 100-plus
year old company, right? ADAM SPUNBERG: Yeah, the first
breweries in Anheuser-Busch, anyway, were in St. Louis. And obviously there's been
mergers and acquisitions and all kinds of
stuff since then. But you know, this isn't a
technology or a product that was invented
yesterday, but it keeps evolving with time just
like anything else. MANJUNATH DEVADAS: And
one of the executives is messaging that they want to make it an AI-based brewery-- I mean, that's pretty amazing. So in summary, there's a
filter replacement problem, and the outcomes
we just discussed, which is replacing the filter at
the right time driving millions of savings by region,
zone, and so on. [MUSIC PLAYING]