[MUSIC PLAYING] VINAY BALASUBRAMANIAM:
Good afternoon, everyone. Thank you all for
coming for the session. My name is Vinay
Balasubramaniam. I'm a product manager
in the BigQuery team. I'm joined by Ian and
Carolina from King, who are going to share their experience of using Looker and BigQuery. King has transformed their business using Looker as their enterprise BI solution, and they're here to share their story soon. I work in the data space, and your feedback is super important for us; your feedback is our data. So do share your feedback after the session is done. So let's get started. If we look at
organizations who are going through digital
transformation, they're collecting a lot more data than they used to before. Data is being generated
from clickstream analytics, mobile data, social media
posts, [INAUDIBLE] sensor data. This particular
statistic from IDC shows that, of the data generated in the next four to five years, more than a quarter is going to be analyzed in a more real time fashion. If we think about it, in
the next four to five years, there are going to be about
135 zettabytes of data. A quarter of that is
going to be analyzed in a more continuous manner. So as organizations, you
need to invest in a big data platform which not only
can analyze the variety and volume of data
but can do that in a more continuous manner. Google Cloud Platform
Data Analytics Solutions provide real time analytics
as part of the capability. It's already there, ready for you. But what is driving the
growth of real time data? If we look at our
personal lives, we leave digital traces
everywhere on a daily basis. Or it could be a business who's
looking at all the sensor data and acting on that
in more real time. Or it could be a
marketing executive who's looking at
clickstream data to understand what sort
of ads they need to serve. Or it could be a
game developer who's looking at all the different
game players to figure out what sort of levels they need
to customize for every use case. And if we look at
this particular study that Forrester published, insights derived from data have a shelf life, which means you need to analyze that data in a more real time fashion before the insights perish. So on the x-axis is
the perishability of data as it ages. And on the y-axis is
the time to insight. Not all use cases require
real time analytics. But some of those use
cases are the ones where you take automated decisions. For example, you need to detect if there is fraud happening at the point of sale. Or it could be a consumer who's applying for a loan, and based on the credit score, you need to understand what sort of rate you need to offer as a financial institution. Those things happen in a more automated fashion, and you need to act on them much sooner. The second level is operational insights. These are more human-oriented,
where humans are looking at dashboards. For example, a consumer
walks into a retail store. And as a merchant, I need to
look at their past purchases to understand what sort of
discount or customized discount I need to offer. There is a little bit
more time to act on that. On the performance insight side,
you are looking at day-old data to understand what sort of changes you need to make in your sales pipeline. And on the strategic insights side, these are longer term, where you are looking at quarter-over-quarter data to understand what sort of changes you need to make on the business side. There is a spectrum
of use cases. Our vision for
real time analytics is to be the
platform which allows you to derive always
fresh and always fast data with no limitations
as the data volume grows. If we look at BI
solutions out there, they force you to make trade-offs
between freshness and speed. If you want the most
fresh data, it's not fast. If you want the data to be
really fast, it's not fresh. We want to provide
a platform which allows you to analyze both
real time and historical data. If you map the same thing
with what BigQuery is today, BigQuery is widely used for
strategic insights, performance insights, and
operational insights. But where we're investing
this year and the next year is to push BigQuery to also
be in the real time space. So you have one
single platform where you can look at
real time insights and historical insights. And oftentimes, we
have seen customers using two different
systems for this. That's where BigQuery is today. And if you compare BigQuery with traditional warehouses, there you need to manage servers, you need to scale them, you need to patch them, and you need to manage complex ETL pipelines to load data in. With BigQuery, we make the overall infrastructure management invisible, so you have a lot more time to look at insights, ask questions, or even ask new questions which were not possible before. But the key differentiator for BigQuery, which is unique to it, is
support for real time insights. So there are two use cases
where BigQuery already does that today. BigQuery on the
ingest side natively supports streaming ingest. Whereas, a lot of
other data warehouses require you to do batch ingest,
which delays your freshness. The second area where we
announced earlier this year, is an in-memory
analysis engine built into BigQuery called
BI Engine, which allows you to get sub-second query response times for some of these dashboard-style use cases. And if you combine those two
capabilities of BigQuery, you're really getting
to a point where you can derive
continuous intelligence as the data comes in versus
waiting for data to be loaded over time. In terms of streaming,
BigQuery, when it was launched a few years ago,
supported both batch and streaming modes of ingestion. But we had certain limitations in terms of how much we could scale, because it was built for what real time insights meant two years ago. But since then,
you have seen a lot of customers moving from batch
to streaming mode ingestion. So we took time last year to
create a completely new backend for BigQuery streaming. And this particular backend
right now is in beta. And it can really scale. We've removed some of those limitations which we had with the current streaming backend. The default ingestion quota we give for all your streaming ingestion with the new backend is 10x more than what it used to be, so it's about 1 million rows per second. And we have tested it up to 20 million rows per second. That's pretty massive in terms
of how much you can ingest. Also, not only on
the ingest side, but also on the read
side, the performance, the read latencies have
significantly improved. As you start streaming data in, you can read it back at much lower latency. So that's an improvement on both sides. Sometime next year, we're going to have a new API for streaming, which is binary and has native Avro support. And that makes the overall streaming platform much richer for you to build real time insights on top of BigQuery.
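To make the ingest side concrete, here is a minimal sketch of streaming rows into BigQuery with the current google-cloud-bigquery Python client; the project, dataset, table, and field names are made up for illustration, and the new binary Avro API mentioned above is not shown because it is not available yet.

    from google.cloud import bigquery

    client = bigquery.Client()

    # Hypothetical table; streamed rows become queryable within seconds.
    table_id = "my-project.telemetry.game_events"
    rows = [
        {"event_time": "2019-11-20T12:00:00Z", "player_id": "p-123", "event": "level_complete"},
        {"event_time": "2019-11-20T12:00:01Z", "player_id": "p-456", "event": "purchase"},
    ]

    errors = client.insert_rows_json(table_id, rows)
    if errors:
        print("Streaming insert errors:", errors)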
A second thing, which we announced earlier, is BigQuery BI Engine. BigQuery BI Engine, for those
of you who have not heard of it, it's an in-memory OLAP engine,
which is built into BigQuery. It's a column-oriented database
on top of BigQuery storage. And it can scale horizontally. It's especially built for dashboard and reporting style use cases, where you have a lot of concurrent queries. And all of this analysis happens in the memory layer, so it can scale out. So as you have a lot more queries
coming in from your dashboard, BigQuery BI Engine
can automatically scale to support
concurrent queries. And BigQuery BI Engine
natively integrates with BigQuery streaming. So as data changes,
we can incrementally update that data in-memory
and keep the data fresh much more quickly. If you look at
traditional BI tools, they require you to take the
data from your data warehouse into the in-memory
layer for it to be fast. But that comes with a lot of
architectural complexity in terms of managing
an ETL pipeline. With BigQuery BI Engine, we
have taken the same OLAP idea and baked it into
BigQuery itself. So it sits on top
of BigQuery storage. And we can move the data from
the BigQuery storage layer into the memory layer
much more seamlessly. And that simplifies
your overall ETL. And there's no need to move
data outside of BigQuery. When we launched it, we
announced BigQuery BI Engine only with Data Studio. But we are committed
to delivering a SQL API to make it available
outside of Data Studio. So tools like Looker, Tableau,
any of the existing tools which can talk to BigQuery
through JDBC, ODBC, can still make use of
this particular engine. And that is sometime next year. BigQuery BI Engine also has
a notion of smart tuning. So it really doesn't
cache all the tables. It only caches the
columns which are being used in your dashboard. And it can automatically
tune the columns using different encoding types. And it keeps the data
compressed in-memory. That way, even if your table is bigger, it can still handle a lot of data. And in terms of reservations,
it starts as small as 1 gig. You can create 1 gig
increments of reservation. And you can go up to 50 gigs. But as we make the
SQL API available, we are going to allow you to
add a lot more memory. If you have a bigger
table, you can pretty much keep everything in-memory. A lot of data about BigQuery BI Engine and how it's being used, such as which columns are cached, how long we cache the data, and what kind of data is being cached, is published to Stackdriver. So you can get visibility into how much memory you need to reserve.
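To give a sense of the workload BI Engine targets, a dashboard tile usually boils down to a small aggregate query like the sketch below; the table and column names are made up. Today the acceleration applies to Data Studio, and once the SQL API mentioned above arrives, the same query sent through the regular client would be served from the in-memory layer with no change to the SQL.

    from google.cloud import bigquery

    client = bigquery.Client()

    # A typical dashboard-tile query: small result set, high concurrency.
    sql = """
        SELECT country, COUNT(*) AS sessions
        FROM `my-project.analytics.events`
        WHERE event_date = CURRENT_DATE()
        GROUP BY country
        ORDER BY sessions DESC
        LIMIT 10
    """
    for row in client.query(sql).result():
        print(row.country, row.sessions)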
So here's a good example of a customer who has taken this journey with us: Zulily. For those of you who
are not aware of Zulily, Zulily is an e-commerce
company based in Seattle. They use AI and machine
learning to offer a personalized shopping experience for users. When they started
off, they were on-prem in a colo using
traditional data warehouse technologies on Hadoop. They were collecting
data on a regular basis. But their whole
pipeline was slow and the data used to be
analyzed every 24 hours. And that was not
good enough for them to understand what
sort of daily deals they need to offer
to the customer. Since they moved to BigQuery,
they started streaming data into BigQuery and they
collect a lot more data points. For example, today, they collect
up to 50 billion data points about different parts of the
shopping experience of users. And they are able to create
new shopping experiences, new products dynamically based
on the data they've collected. And since then, they've
really added a lot more users within the organization-- a lot
more customers to their site. This is a great example
of a customer who has transformed their business
and still staying competitive with Amazon. So they're able to do
that because they've invested their time on
BigQuery as their data platform of choice. Earlier this year, we announced
the acquisition of Looker. And that's a great complement
to what we are doing already with BigQuery. But the acquisition is still
pending so I cannot share a lot of details of what we
want to do jointly. But essentially, the key
point is, with Looker, we have a lot of data
about joint customers who use Looker and BigQuery. And these customers,
who have taken this journey of using Looker
as their BI platform, love it. And that's one of
the reasons we felt there was a really good
synergy between what Looker was offering and
what BigQuery is providing. If you look at the overall
portfolio of our stack, we have services
on the ingest side, we have services on
the analysis side. But what we were missing is
the last mile of delivery. We were relying on
our great partners to offer the BI solutions. A lot of our customers wanted
an out-of-the-box experience from Google. Looker really fit
in that category. We will still continue working
with all the great partners we have in the BI space. But we are also going to
offer a first-party experience with Looker. What makes Looker
highly differentiated and complementary towards
what Google already provides? With Looker, we provide an
end-to-end data analytics platform, from ingestion to BI. Looker fits very well
with Google Cloud's multicloud strategy. Looker runs on GCP,
AWS, and Azure. And we will continue
to support that. And Looker connects with all
different data warehouses, including on-prem
databases. We'll continue to do that. So it fits well with
our open cloud strategy. The second thing is the common
data model over your data. There are a lot of BI tools
that provide self-service BI, which allows you to connect to your data source and create these dashboards. But the challenge with that model is that every department gets their own, myopic view of the data. And there is no
single unified source of truth of what's going on. With Looker, what
they have done is they've invested in
a modeling platform so that you can express
your business logic, you can express your KPIs in
a more centralized manner. And then you can provide that
access to your dashboards so that different
teams, when they look at churn data, or revenue
data, or net revenue data, it's all consistent. So there is a little
bit of upfront work in creating those data models. But once it's
created it's widely used by the organization. King is going to talk about
how they have transformed their business using LookML. The third thing is, where
Google complements Looker is around augmented analytics. This is where Google brings
the machine learning and AI technology right
into your BI tool. This could be where you
can use machine learning models to understand or
predict how your churn is going to happen in the next
few months based on past data. Or it could be where you can
predict whether a customer will buy a given product. This is where you
blend AI seamlessly into your BI solution. And that's where Google Cloud
platform's AI capability can come in. And lastly, we want
to help our ISVs and SIs to build data-rich
applications with Looker as the data platform and
BigQuery as the data warehouse. So with that, let
me invite Carolina-- and she's going
to share how King has started using Looker as
their enterprise BI solution. CAROLINA MARTINEZ:
Thank you, Vinay. Good afternoon, everyone. I'm Carolina Martinez and
I have been working at King for four years now. Currently, I'm working for
the incident management team. Today, with my
colleague Ian, we want to talk to you about how
we use Looker at King. Ian will explain how we did
it, why we migrated to Looker, and the benefits
it has provided. Afterwards, I will
show concrete examples of how we use Looker in the incident management team. But first, for those who
don't know who King is, we are better known as
the makers of Candy Crush. We are a game developer. We developed more
than 200 titles, of which 18 are currently live. Our games are played all over
the world by 247 million people each month. We are currently located
in studios and offices all over the world. We have more than 2,000
employees currently. And since February 2016, we are
part of Activision Blizzard. But what does King look like in terms of data? Sorry. First, you can see
here a selection of our most popular games. You might have played
them at some point or heard of someone
just playing them. Behind this, in
terms of data, we have 7 billion events
generated daily from our games and consumed into
our data warehouse. We count at the moment 500
terabytes in fact tables and 60 terabytes in
dimensions, which equates to 3 trillion rows and
500 billion rows, respectively. On a daily basis, this
data is systematically processed in BigQuery. I can show you briefly how
this happens behind scenes. We have a combined batched
and streaming infrastructure that allows us real time
capabilities as well as robust and complete analytics. Regardless of how
we ingest this data, we can have real time
anomaly detection. We can explore this data using
the typical data scientists tool kit with R or
Python Developers. And we can also explore
this data with Looker. Also, we ingest this
data and ingest it into other analytical apps. In this environment, Looker has
played a very interesting role and has expanded our
capabilities of analysis. Next, Ian will talk you through
how and why we moved to Looker. IAN THOMPSON: My
name's Ian Thompson. I've been working in BI at King
for over five years now. And I'm currently looking
after the BI platform. Google asked me to talk today
about our journey with Looker, a little bit of information
about how Looker works, and a use case we've done
with Looker and BigQuery. We started with Looker
about three years ago when it won an internal BI
evaluation for a complete BI solution. It became our main BI
tool after just one year. After two years, it
was our sole BI tool. King also completed its
migration to BigQuery from an on-prem cluster
and MPP database, about 18 months ago. But let's take a
quick look back in time to see what our problems were
before we started with Looker and what we needed. Our biggest problem was that
we had one team bottleneck. So we had one team in
charge of reporting with a huge backlog, which meant
that we had minor changes that had to wait. We had many reworked copies of
all sorts of reports and data models. And that led to lots
of duplicated code and lots of wasted effort. We had no single source
of truth, really. And people were
defining what should have been standardized
dimensions and measures in varying
different ways, which results in different KPIs. And that led to a lack of trust
of the data and the tools. That led to rogue behavior, and people were spinning up all sorts
of different solutions and software. Users found it
quite hard to adhere to a single source
of truth model because the processes
were quite complicated and the workload
was quite heavy. And we had no traceability
or responsibility. So who owned these products
and where were they running? Was it in someone's
personal schema? And who was going to action
these bugs if they came up or any edits on the solution? What did we want to achieve
by integrating Looker? We wanted to disperse
our engineers. We wanted to embed our BI
engineers into the business to be a dedicated resource. And we wanted to
empower and devolve building those analysis
to those who fully understood the subject matter. The BI team understood
best how KPIs worked but not the intricacies
of how the different games and campaigns
worked, for example. We even wanted to
empower and devolve people who understood
the data best. The BI team, they understood
the core data model really well. But they might have
struggled to understand the intricacies of data
from all sorts of platforms across the business. We're better off assisting rather
than bouncing a solution back and forth until it's perfect. And we wanted to get
the decision makers closer to the right analysis. And they needed to get
closer to the process. So we needed the decision makers
sculpting the landscape of what actually could be analyzed. This is a useful time to take a
quick break from how we rolled out Looker and see
how Looker works and how simple its SQL
abstraction language, called LookML, actually is. At the core of the
Looker data platform, and not seen by the majority of
its users, is a query builder. You describe the attributes
of your database data model and any business
logic to Looker just once. Looker is then capable
of building and running tailored SQL queries based on
what the user wishes to know. And this is known as
exploring in Looker. The abstraction from
SQL has many positives and is coded in a
language called LookML. LookML is a language for
describing dimensions, aggregates, calculations, and
data relationships in a SQL database. Looker uses a model
written in LookML to construct SQL queries
against a particular database. So for some context, I
wanted to show you what an explore actually looks like. This is your explore page that
you're faced with in Looker. And the user is able to select
from the left-hand panel there some fields to
create their analysis and visualize the result set. Looker will create a
necessary SQL query based on the LookML
to fetch these results from the database. So what are the
benefits of LookML? Looker says that LookML
isn't a replacement for SQL, it's a better way to write SQL. And I agree. Some of the major points of the
benefits would be reusability. A majority of data analysis
isn't likely to be used again, yet a lot of the
same steps would be. With LookML you'd define
a table, a dimension, or a measure, a
relationship just once and you build upon it rather
than rewriting the SQL every time. Version control. Ad hoc queries and
multi-step analysis is very difficult to
manage in version control. With LookML, version control
is built right into the tool. And you can integrate
that into Git. LookML is also
architected to make collaboration natural and easy. And we also love that
it can empower others. So no longer are there
secrets of how and why things are designed and defined. Everyone's able now to chip in. This is a very, very
quick introduction, an example of LookML. And we can see here
we are defining the orders data set in LookML. The majority of LookML
is auto generated and it's very quick
and easy to enrich. In this orders data set,
known as a view in LookML, there is a dimension, which
is based on the ID field from the database. And that also has a
name, a data type, and is marked as a primary key. Here we can see a
few more dimensions. For example, we have a
formatting change there. And we've also got the
created time dimension. In LookML, someone
has decided there to expose the time, the dates,
the week, and the month. And that will expose
itself in the front end explore nicely and
easily to the user without you having
to specifically code week, date, month. At the bottom, we
have a measure, which is a sum of the amount
field in the underlying database table. So this concludes
our orders view.
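As a rough sketch, the view being walked through on that slide would look something like the LookML below; the table and column names are illustrative rather than the exact slide contents.

    view: orders {
      sql_table_name: demo.orders ;;

      dimension: id {
        primary_key: yes
        type: number
        sql: ${TABLE}.id ;;
      }

      dimension: customer_id {
        type: number
        sql: ${TABLE}.customer_id ;;
      }

      dimension_group: created {
        type: time
        timeframes: [time, date, week, month]
        sql: ${TABLE}.created_at ;;
      }

      measure: total_amount {
        type: sum
        value_format_name: usd
        sql: ${TABLE}.amount ;;
      }
    }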
We can now see how this is actually utilized. This is the explore definition. An explore is how you offer your
users to create an analysis. It's the playing field
you offer to them. In this case, we can see
that we have the orders data set and any customers that
may have made those orders. Depending on what dimensions
and measures the user actually selects in the front end,
Looker may or may not include the customer's data
set into your underlying query. Looker will always create
the most performant query to execute against
your database in order to get the results the user
is actually requesting.
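And a sketch of the corresponding explore, again with assumed names; the customers join only makes it into the generated SQL when the user actually selects a customers field.

    explore: orders {
      join: customers {
        type: left_outer
        sql_on: ${orders.customer_id} = ${customers.id} ;;
        relationship: many_to_one
      }
    }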
The collection of all of this LookML is known as a model. That's a very simple
example of LookML there. And it needn't be any
more complicated than that to add some serious value. I personally love LookML
because of its ability to deal with complex issues
while still remaining clean and simple, along with
some of the other benefits I mentioned earlier, such as
the reusability is a big one and how easy it
is to collaborate when working with LookML. But let's get back to how we
rolled out Looker inside King. So how did we integrate Looker? We ran POCs with teams
disillusioned with the BI offering at the time. It took off, as shown
by a rapid decommission of our other platforms. And we did this by
running hands-on workshops with people, where
we'd start off with an introduction
to the tool. And we'd actually end up
building a product with them so the team left with
a working solution. We would always make sure
that we were on hand. And we were proactively
searching our issues in order to make the adoption
as simple and satisfying for our users as possible. Our power users were also
at the heart of our success. They would do a lot
of our work for us from inside their own teams. And we also introduced a concept
of what we call core models. So what is a core model? We think a core
model for us is owned by a specialized
BI team who best understand KPIs inside King. Teams are then seeded
with this core data model, which is a
starting foundation point for them to build on. So for free, they get all
the KPIs and the ability to add, edit, and remove
bits for themselves. And because of this,
there was a lot of buy-in from them, as well. They owned something
for themselves. They were free to
express themselves and could truly
tailor the experience for their own business areas. The solution of this is, I
feel, very robust, centralized, and controlled, yet adaptable. But this is only possible
through extension in LookML. What is extension? In short, you inherit everything
from the parent object and you can alter and
include new attributes.
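In LookML terms, extension looks roughly like the sketch below; the explore and view names are invented to illustrate a core object being inherited and then tailored.

    # Owned by the central BI team: the seeded core explore.
    explore: core_kpis {
      view_name: players
    }

    # A game team inherits everything from core_kpis and adds its own join.
    explore: bubble_witch_kpis {
      extends: [core_kpis]
      join: bubble_levels {
        type: left_outer
        sql_on: ${players.id} = ${bubble_levels.player_id} ;;
        relationship: one_to_many
      }
    }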
When I was putting this slide together, I lazily googled the word
"extension" or "extending." And I was faced with
loads of extending tables. I feel this is almost
a perfect metaphor, so long as you are a
carpenter, for example. You'd be crazy to discard a
six-seater to table to build a separate eight-seater table. You'd be much better off
adapting your six-seater table to make it an eight-seater
table for the few times that you actually need it. You could reuse you're
perfectly suitable table legs and perfectly
suitable tabletop. Regarding King's
specific use cases here, we couldn't possibly cater for
all of the every single game nuance to build
different metrics. We can cover the
standardized ones and allow teams to extend our
model to include their own. So when we merged the
concept of our core models with LookML extension,
it's incredibly powerful. We seed teams with
this model and they can add value by tailoring
it to their precise needs. This results in a
single source of truth, which is based on a secure
code base and save in Git. We now have traceability. We can track all models and
analysis back to the data warehouse and the ETL. We can distinguish owners
of parts of the model and assign responsibility. And teams can really focus
on the differentiation. A few years ago, King started
working in partnership with some small
independent game studios. King supports these studios
in many ways, one of which is offering analytics. Our partners could leverage
our analytics offering through Looker to improve
their marketing campaigns, to improve gameplay
and engagement, and, obviously, increase sales. We are currently refactoring
the data ingestion process for our publishing partners. But this is a use case
based on a proof of concept with a very light touch ETL. So let's take a look
at how we created this multi-tenant analytics
service for our partners. We ingest all of our
publishing partners' data combined into
one GCP project. And we create a useful data
model from these events. Our internal publishing
business performance unit, otherwise known
as a BPU, may also wish to create some tables
specific to publishing. We expose only individual
partners' data to that partner. In order to do this,
we create a GCP project for that partner,
and every partner. And the data sets of
what we want to expose are created one-to-one
mapping with our source data. And each object there is
exposed via an authorized view from the source data. This allows us to do
column and row level access and offer our partners something
different for each partner. These views are built programmatically, as authorized views, in the partner-specific projects.
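A minimal sketch of what creating and authorizing one of those per-partner views could look like with the BigQuery Python client; the project, dataset, table, and column names are placeholders rather than King's actual setup.

    from google.cloud import bigquery

    client = bigquery.Client()

    # 1. A filtered view in the partner's own project exposes only that
    #    partner's rows and columns.
    client.query("""
        CREATE OR REPLACE VIEW `partner-a-project.shared.game_events` AS
        SELECT event_time, game_id, event_name
        FROM `publishing-source-project.events.game_events`
        WHERE partner_id = 'partner_a'
    """).result()

    # 2. Authorize that view on the source dataset so it can read the data
    #    even though the partner has no direct access to the source tables.
    source = client.get_dataset("publishing-source-project.events")
    entries = list(source.access_entries)
    entries.append(
        bigquery.AccessEntry(
            role=None,
            entity_type="view",
            entity_id={
                "projectId": "partner-a-project",
                "datasetId": "shared",
                "tableId": "game_events",
            },
        )
    )
    source.access_entries = entries
    client.update_dataset(source, ["access_entries"])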
So inside Looker, it starts with our core model that we looked at earlier. We allow our internal
publishing BPU to extend this model to apply
standardized publishing data additions, whatever
that might be. This is then extended a
further time for each partner. And that's where
partner-specific additions are made. These models are configured to
read the partner-specific GCP project we saw before using
a partner-specific service account. Employees from these
partners access Looker. And due to permissions,
they are only able to access their own models. There are a couple of other
safety nets put in place here, as well, not shown, that
would stop accidental access to other partners' data. So I'm going to hand
back over to Carolina, and she can talk to you
about the incident management process. CAROLINA MARTINEZ:
Thank you, Ian. In the incident
management team we have benefited a lot on
this transition to Looker. Let me introduce
you briefly how we work in the incident
management team and what we are responsible for. In the incident management team,
the process has four steps. It's detection, investigation,
communication, and postmortem. Detection. When an incident
occurs, the first step is to detect and determine if
what is going on characterizes as an incident. This needs to happen
as fast as possible. Once detected,
investigation comes in. Our analysts in the team
jump right into analyzing the impact and the root
cause of the incident. Communication. At the same time,
incident managers start and keep conversations
with all involved parties, teams, and potentially
impacted stakeholders. When the root cause is found
and the fix is [INAUDIBLE] in place, a postmortem is run. This is all about retrospecting,
defining, and following up on actions so that that type of
incident does not happen again. Swift and fast detection
and investigation is crucial for
incident resolution. And they depend
heavily on reporting. For this, Looker
has become a must. Before having Looker,
the data on the model belonged exclusively to the date
engineers and BI developers. All investigations
were limited by what was in the backlog of the data
engineering and BI development team. There was also a really
strong dependency on how well requirements were
transmitted and understood by all involved parties. The fact that we
had a central team and that engineers
and BI developers were only part of that team was
making it really difficult to have a dedicated person
when an incident was happening. And it depended a
lot on workloads. At the same, for
exploration, there was a need to delegate
deeper investigation to data scientists
since it was required to have specific skills for
R development and Python. As this capability to dig
deeper was limited by the tech, the incident
managers were forced to go back and forth to
developers and engineers for a constant redesign. This was having a very
important impact on resolution and was limiting the
capacity we had to be fast and quick in resolution. With Looker, Looker
came in with fresh air. It allowed us to
band inside teams that engineers and developers. The line between stereotypical
roles got blurred. As iterating in
Looker is very quick, through the flow you
can see on this slide, this empowered all users to
own their data, independently of whether they were developers or not. For the incident
management team, this empowered everyone to tune
dashboards, looks, explores, and add extra tweaks and
extra logic to existing ones. Even the bolder ones went as far as looking into how the data model works. All parties involved
feel now included in the analytics process. And rather than being
just an end customer, they can also become a creator. In the words of one of my
colleagues, an incident manager, she said, "With minimal knowledge, I can create dashboards and create my own personalized workspace for an investigation." This gives them freedom to
investigate and to load data as they need, with less
dependency on the availability of engineers and developers. They can also easily
interact and collaborate with other data users, and
share with data scientists and engineers dashboards
collaboratively. Everyone can just add their best
to the current investigation. So more flexibility gives us
better investigation capacity. Next, I'm just going to
go through some use cases. We use Looker for traditional
dashboarding to load events, to explore those events, and
for interactive dashboarding. Traditional dashboarding
is very simple in Looker. It allows us to just
integrate in one single view different data sources with
different representations in a single, unique dashboard. This helps us tell a story. More importantly, when
investigating and analyzing, it allows to correlate
the different data sources with only a
glance and dig deeper when an anomaly is spotted. How dashboards are built? They allow us to just
click on its tile and then explore from there,
and create a new explore based on the same data. And start a whole
new investigation based on what we saw
previously in the dashboard. Next, I'm going to just
show how simple it is to add new data into one dashboard. Thanks to how Looker
is structured, it is very easy for any
user to add new data. They can go to the
Development Mode. They just head
directly to the model of interest and click Add Data. When they create
views from tables, they can just link
to views that exist directly in BigQuery. And with one click, voilà, they have a LookML view with all the
information ready to explore. Next step, we add the
new explore to the model. It can be joined
with other tables. It can just be an explore
dependent on the data we just linked in. And we can visualize. Once this is done,
that is of interest. This can be saved into
the initial dashboard we were looking at. And one non-deep data
development person can just continue to go
adding information and going through this loop
and investigation. Another very powerful
tool we have at King is a Slack bot that allows us
to integrate data on Looker dashboards into conversations. We have a bot which
we can ask questions. And then this bot launches
queries against Looker, and Looker comes
back with an answer, be it in the form of
a number or a graph. This is very useful when
communication is going on because it allows us to quickly
check without having to change context, without having
to change the tool we are working with at that moment. This allows us to fuse together communication and exploration.
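The bot itself is not shown in the talk, but the handler behind such a Slack command could be as small as the sketch below, which runs a saved Look through Looker's REST API and returns the rows; the host, Look ID, and credentials are placeholders.

    import os
    import requests

    LOOKER_API = "https://king.looker.com:19999/api/3.1"  # placeholder host

    def run_look_as_json(look_id):
        # Authenticate with API3 credentials, then run the saved Look.
        token = requests.post(
            LOOKER_API + "/login",
            data={
                "client_id": os.environ["LOOKER_CLIENT_ID"],
                "client_secret": os.environ["LOOKER_CLIENT_SECRET"],
            },
        ).json()["access_token"]
        resp = requests.get(
            LOOKER_API + "/looks/{}/run/json".format(look_id),
            headers={"Authorization": "token " + token},
        )
        resp.raise_for_status()
        return resp.json()

    # A Slack slash-command handler would map the question to a Look ID,
    # call run_look_as_json, and post the number or chart back to the channel.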
Last but not least, we are now making use of interactive dashboards. We use Looker as a front
end thanks to actions, where we can ask the
user's parameters for an investigation. These parameters are collected. This is sent to Cloud Functions. And in Cloud Functions
we can execute Python packages developed
by our data scientists. They can just work their magic,
dump results in BigQuery, and from there, results
can be explored, analyzed, and, of course,
visualized back in Looker. This a simple example of
an interactive dashboard where incident managers
can select a game. They can introduce training dates for a model and which dates they want to run the analysis on, and all of this data is collected. Thanks to the LookML behind the scenes, with actions calling Cloud Functions, the data is processed and dumped back to BigQuery. And the user can see a completely new analysis that they ran by themselves.
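As a sketch of the glue being described, an HTTP Cloud Function receiving the Looker action payload might look like the following; the parameter names, table, and response shape are assumptions rather than King's actual code, and the data science step is reduced to a placeholder.

    import json
    from google.cloud import bigquery

    client = bigquery.Client()

    def handle_looker_action(request):
        # Looker actions POST a JSON payload; user-entered fields arrive
        # under "form_params".
        payload = request.get_json(silent=True) or {}
        params = payload.get("form_params", {})

        # Placeholder for the data scientists' Python package doing its magic.
        result = {
            "game": params.get("game"),
            "train_start": params.get("train_start"),
            "train_end": params.get("train_end"),
        }

        # Dump the result to BigQuery so it can be explored back in Looker.
        client.insert_rows_json("my-project.incident_analysis.model_runs", [result])

        return json.dumps({"looker": {"success": True}})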
Next, I'm going to hand back to Vinay. Thank you very much. [APPLAUSE] VINAY BALASUBRAMANIAM:
Again, thanks, Ian. Thanks, Carolina. The incident management
is a good example of a data application,
where you could use Looker as a platform to build data
apps powered by BigQuery. And you can share
that as a Looker blog within the organization. So to summarize, a
few calls to action. On BigQuery streaming, we have a new version there; there's a recent blog we have written on what the new streaming backend offers. Tomorrow, there
is a session which covers all the new announcements
which are happening in the data platform, including BigQuery. If you can attend that
session that'll be great. It talks about the roadmap. And lastly, King also
has a very technical blog which describes
the solution which they have built with
Looker, and in general, what they are doing with BigQuery. So with that, again,
thanks for coming. Again, your feedback is
very important for us. So take some time
to give us feedback. And have a good rest of the conference. Thank you. [APPLAUSE] [MUSIC PLAYING]