[MUSIC PLAYING] LEE BOONSTRA: Hi, everyone. My name is Lee Boonstra, and I'm a developer advocate for Google Cloud, focusing on conversational AI. I'm also the author of the Apress book "The Definitive Guide to Conversational AI with Dialogflow and Google Cloud." You can find me on Twitter at @ladysign. And today I'm going to present
to you the workshop on how you can integrate Dialogflow
in your Flutter applications. For this, we of course,
will need Flutter. And Flutter is a
development suite for building UIs.
In the past, you could use this framework
for building mobile UIs. But lately, you can build
any type of application with Flutter, whether it's
also for desktop or for web. And people choose
Flutter because you can very quickly
develop an application, and it runs cross-platform. And you can make use of all
these nice UI components. Now Dialogflow is a development suite for building conversational UIs. So that has everything to do with chatbots and voicebots, like speech to text and text to speech. And it's powered by Google Cloud. It makes use of machine learning. For example, we do intent detection, which is kind of like intent classification. The way it works is that Dialogflow runs in the cloud: you log in to the interface. And then as a developer
or a designer, I feed it training
phrases as examples. And Dialogflow makes use of
an NLU part, natural language understanding. So when your end users are typing to your chatbot or speaking to your chatbot, it will map whatever they say or type to the training phrases that you used in Dialogflow. And based on that, it will
return you a response. Now Dialogflow comes with
lots of integrations. For example, you
can integrate it in Twitter or Facebook or web. But if you want to build
a mobile application, yeah, you have to
do this custom. And that's what we're
going to do today. Now with Dialogflow, you can
create all types of chatbots. Obviously, the web chatbots
with a textual interface. And these are like little
popups or iFrames on websites. But you can also build
voice assistant apps with Dialogflow, such as for the Google Assistant, or callbots, where you have chatbots in telephony contact centers. And the last category, these
are the custom voice AI bots. Maybe you have custom
hardware or IoT devices. And you want to
integrate your voicebot, your voice AI on top of that. Now that is exactly what we
will do today, although we don't use custom hardware. We will use a mobile phone, in our case an Android. But this workshop will also
run on iOS devices. Just follow along the steps. Now maybe you're
wondering like, well, why would you create
your own voice AI? Why shouldn't you use
the Google Assistant? I actually get often
a lot of people asking me like, hey, Lee. How can I integrate the
Google Assistant in my app? And let's say you're
a bank, and you want to integrate a voice
in your banking app. And people always ask
me how can I integrate the Google Assistant in my app? And then I always
ask my customers but, like, well, is that
really what you want? Because they're-- I mean, the
Google Assistant is a great ecosystem. And it's great. But it's also
publicly available. It runs across every
Assistant-powered device. And you can invoke those actions
by saying the wake words-- so "Hey Google, talk to my app." And with that, you get
the full ecosystem. So like, instead of only asking
questions about my banking app, I could also ask for recipes
or the weather information. And it might be overkill if the only thing that you want to do is to have
a voice AI in your custom mobile app. So this is exactly
why we're going to build our own voice AI. And we can do this
with Dialogflow. And Flutter will use that
for the mobile app interface. Now there are typically
two approaches on how you can
integrate Dialogflow in your mobile Flutter app. The first approach,
which is the approach that we will use today,
that is by using a Flutter package, or a Dart package. And there are pros and cons to both approaches. The Flutter package approach is very fast.
your [INAUDIBLE] pack. And now you start writing
the code, and it works. You don't need the back-end. You don't need to
pay for a server. It just works. But there is an important part where you need to be very careful. As soon as you're going to build production applications, you need to think about how you secure your application. What do you do with
your service accounts? Now when you are building
applications in Google Cloud, you always make use of an
API key or service account. It is a JSON file. And a JSON file contains the
project ID of your Google Cloud project and also the API keys. And you give it access
restrictions or access rights in order for the
service account to make use of all these Google
Cloud technologies. And now you could
say, of course, like well, I only
assign Dialogflow to my service account. So there's not a big deal
if somebody gets access to the service account. Because let's face
it, hackers could decompile your
mobile application and then take the service
account out of it. And you could think like, well,
it's not a big deal because I only give access to Dialogflow. So nobody can mine
bitcoins on it. Yeah, that's true. But the fact is with
Dialogflow, you could enable logging in Dialogflow. And therefore you could collect PII data from all the chats that are coming in, and that data is accessible through your Google Cloud account. If a bad actor has that service account key, then they can get access to that data. So that could be embarrassing
for a production app. If you do want to go with this approach-- and it's definitely possible-- you probably want to do some additional tasks on top of it. Either you want to do
something with rotating tokens. And I'm sure there are Flutter
packages available for that. Or you maybe run
parts on the back-end, or maybe everything
on the back-end. The nice thing is, since
we're using a package, which is Dart code, you can run it
in Cloud Run if you want to. But you just need to
be careful of this. For production apps,
just be careful what are you doing with
your service account. We won't go that deeply today,
but I am pointing it out. The other approach
that you could use is by building the
whole Dialogflow part on your server side. This is great for when you're
building multichannel bots. So let's say that
you're also creating a telephone chatbot
[INAUDIBLE] and it also talks to Dialogflow. And then maybe you want to
build the back-end only once. You can also do this, of course, with Dart for the code, or you can write it in
some other language. That's fine. Yeah, but it means,
of course, that you need to maintain a server
and also pay for it. And on top of that,
that's also not as easy. You talk from Flutter to the back-end with HTTP and websockets to communicate. Code for an HTTP request could look somewhat like this.
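As a minimal sketch of such a call from Flutter, using the http package and a hypothetical /detectIntent endpoint on your own back-end (the endpoint URL and the response shape are assumptions, not part of the workshop code):

```dart
import 'dart:convert';

import 'package:http/http.dart' as http;

// Hypothetical back-end endpoint that proxies Dialogflow's detectIntent.
Future<String> detectIntent(String text) async {
  final response = await http.post(
    Uri.parse('https://example.com/detectIntent'), // placeholder URL
    headers: {'Content-Type': 'application/json'},
    body: jsonEncode({'query': text, 'languageCode': 'en-US'}),
  );
  // Assumes the server returns {"fulfillmentText": "..."}.
  final data = jsonDecode(response.body) as Map<String, dynamic>;
  return data['fulfillmentText'] as String;
}
```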
And a multichannel bot could have an architecture like this. So maybe you have a Google Assistant, a web chat, and a Flutter application, and they all talk to the same back-end. It's a really nice architecture. But again, that takes a lot of
knowledge, and when you're working in Enterprise,
probably also multiple teams. So this is exactly why we're going with the approach of using a Dialogflow Flutter package. I created my own package, which
we will use for the workshop today. It's called dialogflow_grpc. And it will
be a very easy way to build chatbots and
voicebots-- so speech to text bots, which means
that I speak and then it captures the text and it returns
me an answer on the screen. You can do that
with this package. So before we dive into the steps, I'm going to show you how it will look. So here, you can see this is
the app on my Android device. And you will see that there is
like, this little microphone button that I can press. And I could ask a question
like, who is using Dialogflow? And while I'm talking,
you see that it starts capturing my voice. And when I press Stop, then
it shows the result on screen. We'll see it later
also in the emulator. All right. Now, I know that this workshop has a lot of steps. So it's probably best if you just watch me going through all the code and explaining all the steps before you dive in yourself. I mean, you could do that. But I just want to mention it's a long codelab where we use lots of different technologies. It's probably best if
you just have a look and see how I am
doing these things. The chatbot that we
will build basically scrapes FAQs from a website. And I'm using my own
website in this case where I have some Q and A's on
the product Dialogflow flow. And the chamber will load these
FAQs inside of Dialogflow. And Dialogflow is then
integrated in Flutter. So you can ask these
types of questions and it will return
you the answer. Now feel free to ask questions,
if you have any questions. Or if you have any
comments or feedback that you want to
share with me, I invited today a group
of Google engineers that are helping me with
answering the questions. So if you are registered
for the workshop, I think there will be a Q&A
feature in this interface. Press that button and
start interacting. And also me, I will
dive into the questions afterwards at the end. Yeah, let's get started! So I'm expecting, though of course I'm not sure, that you already have a Flutter environment up and running. You can have a Flutter environment for macOS or Windows. You could even run it on Linux or Chrome OS. But what you will need is to download the Flutter SDK on your machine. And probably you want to
download Android Studio. And Flutter also comes with flutter doctor. So when you run flutter doctor on the command line, it will tell you exactly whether you correctly installed all the tools and all the emulators. And if that feedback is all OK, then you can continue. And what we will do
then is we will start by creating a Dialogflow agent. And you can do this by going
to the Dialogflow console. So you type in the URL console.dialogflow.com. And the very first time, you log in with a Google account. Then you click on
this dropdown bar. And it's probably empty for you. As I'm a Dialogflow
developer advocate, I have many projects here. But you probably
won't have that many. But you click on
Create New Agent. And when you click
on Create New Agent, you get this interface
where you give your chatbot a name. You select a time zone. Take the one that's
the closest to you. Please do not switch
the mega agent setting, because you're not
building a mega agent. A mega agent is kind
of an orchestrator that can talk to many chatbots. But we're not doing that today. And I would also advise
you to not switch the global dropdown to switch to
a different region of chatbots, because since we're making
use of the FAQ feature, we are loading FAQs through
the knowledge base in our app. So that only works
on the global region. So you create an agent that way. I think, after those steps, you
will click on the cog wheel. And when you click on the cog wheel, you have to enable the beta features, because the knowledge base feature that we will use is a beta feature. Additionally, you could
go into the Speech tab, and you could enable the
speech quality settings. This works best if you are
on a paid Dialogflow plan, then you can improve
the speech model. For the workshop,
it's not necessary. You can leave it the way it is. Now the next step
that we will do is we will go to the
Google Cloud console where we will create
a service account. So you go to your
console.cloud.google.com. And you can start searching
for Dialogflow API. You need to be in the project
that Dialogflow created. So when you create a Dialogflow
agent, under the hood, it creates a Google
Cloud project. So you select that Google
Cloud project in the top. And then in the search bar, you
can search for Dialogflow API. When you search
for Dialogflow API, you should see Manage button. You click on Manage. And on the left side there will
be a menu with credentials. You click on Credentials. And this is the place
where you can start creating service accounts. So you create a service account. You give it the name. You can give it a description. And then you click on Create. And then the most important
step of this service account is you need to
assign it some roles. And we want to give it
Dialogflow access rights. So you search for the
Dialogflow API Admin role. You assign that. Then you click on Done. And at that moment, it has
created a service account. You don't have the key yet. The key you will get when you
click on the service account name. And then you hit the Keys tab. And from here, you can say Add Key, Create New Key. It comes with a popup. You choose the JSON key
and you click Create. And at that moment,
you store the JSON key on your hard drive. And later, in Flutter, we will copy this
over to our project. Right. Now let's start with Flutter. And we will start by creating a boilerplate application, which is like the demo application
that you get when you start a fresh new Flutter project. You can do this in Android Studio when you create a new Flutter application, or you can write, in your terminal, flutter create followed by the name of your Flutter project. So I called it flutter_dialogflow_agent. And what this does is generate the whole folder structure for me. That's great.
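On the command line, that step looks like this (the project name is just the one I'm using; pick your own):

```shell
# Generate the boilerplate Flutter project
flutter create flutter_dialogflow_agent
cd flutter_dialogflow_agent
```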
Personally, I prefer to use the command line. That is because I tend to do a lot of things on the command line, but also because I have the idea that it's super fast. So if you don't have a preference, I would say maybe choose
project all generated, and you can open it in
your favorite editor. I'm using Visual Studio Code. But you could also
use Android Studio that comes out of the box
with that, with Flutter. And there is an
important setting here that we need to do. And we need to do the next couple of steps before we start running and installing our application. That is because our Flutter application makes use of a microphone plug-in called sound_stream. And that plug-in only supports SDK version 21 and higher. So for that, we need to
open the build.gradle file. And there are multiple build.gradle files, but you need the one in the app folder. So you go into the app folder and you open build.gradle. And then you scroll down, and here is where you change the minimum SDK version. I believe by default it's 16, and you need to change it to 21.
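In android/app/build.gradle, that change looks roughly like this (the rest of the file stays as generated):

```groovy
android {
    defaultConfig {
        // ...
        minSdkVersion 21  // raised from 16, required by the sound_stream plug-in
    }
}
```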
Then there is another setting that we need to do: we need to give permissions to our app, to allow us to talk over the internet and to allow the microphone to record. So of course, Dialogflow
is a Cloud solution, so you always need to be online. Therefore, we need the internet permission. And we need the recording permission in order to use the microphone. So what we will do is go into the app/src/main folder, where you can find the Android manifest. And this is the place where you place the user permissions. You can copy them from the codelab and paste them right underneath the opening tag.
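In AndroidManifest.xml, the two permissions are these standard Android ones, placed right under the opening manifest tag:

```xml
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
```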
And again, please do this before you start installing the app on your emulator or on your physical device, because it might become difficult otherwise. Now you can see I have my
emulator running; I'm using an emulator here. And I could start
typing to the chatbot. I could say hi. And the chatbot
talks back to me. I could also start
recording my voice. So I could say who
is using Dialogflow? And you see that it
was recording my voice. Maybe you're wondering
what's that console showing on the left? Well, I'm actually logging the
audio stream that is coming in. And as long as these
are not zeros-- so this is a list or an array
with only 8-bit integers-- as long as this
is not zeros, then I know that there is
audio streaming in. So at that moment, then
I know that Dialogflow should work on this as well. So this will be the end solution that we will eventually build: speech to text to capture the audio as written text. And then we will match it to a response from Dialogflow, which in my case is my FAQ. Before we get there, we need
to add some dependencies in our application. And like I mentioned, I'm
making use of the sound stream plug-in in Flutter. This is just an open source
plug-in that I found. There are multiple microphone
plug-ins available online. You need a plug-in for this
because this is a hardware feature. And this one works
on Android and iOS. So you can start using it. You go to your pubspec.yaml file, and here you specify the dependencies that you're using. So there is the sound_stream one that we're using. We will also make use of dialogflow_grpc; that's the one that I created that's integrated with Dialogflow. And maybe you're wondering what rxdart is doing there? Well, the sound_stream example makes use of it. So that's why it's here.
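In pubspec.yaml, that dependencies section looks roughly like this (the version numbers here are illustrative; use the ones from the codelab):

```yaml
dependencies:
  flutter:
    sdk: flutter
  sound_stream: ^0.1.2
  dialogflow_grpc: ^0.2.9
  rxdart: ^0.24.1
```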
Now we also need to link it to our service account, the one that we just downloaded to our hard drive. This is the moment when you start copying it into your project folder. So what you do is create a new folder called assets. And in the assets folder, you drop the JSON key that you downloaded. I renamed it to
credentials.json, and I paste it in the assets folder. And then in my pubspec.yaml, I write down assets: assets/credentials.json. So basically, what I'm doing is making a reference.
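That reference in pubspec.yaml looks like this (I renamed my key to credentials.json; your file name may differ):

```yaml
flutter:
  assets:
    - assets/credentials.json
```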
And if I go too fast, don't worry. You will see all these steps in the codelab; you can do this later. I think in [INAUDIBLE] Studio,
when you press Control Save, it starts downloading all these
dependencies under the hood. If you work from the command line, you'll want to run flutter pub get. And then it starts downloading all these libraries into your project. And you can start
playing around with this. And again, here's the
big caution warning. Like, if this is
a production app, you do want to pay extra
attention on the service account because
yeah, a hacker could decompile your
application and then get access to it, which
is what we don't want. Now you can run the application
either on your physical device or on your virtual device. I showed you both. In the beginning, I
showed you that it was running on the Android. I also showed you how
it looks into emulator. If you want to run it
on a physical device-- and actually, if you
don't have a preference, I probably would go with
the physical device one, because then you're completely
sure that the audio works the way it should
work, while on the emulator your microphone goes
through your laptop. And I have seen occasions
where the audio got distorted. And when you have
distorted audio-- I mean, if you picture it as a wave, distorted audio would have very big ups and downs. And that means that Dialogflow-- I mean, the speech to text-- will probably have a hard time figuring out what it means. So a physical device probably
won't have those issues. But it might work. As you've seen on my
laptop, it worked. Also with my virtual device. It depends on which
one you create. On the physical device--
so if you use Android, the way you can debug on your Android is you go to
your Android settings, and then you will tap on your
build number a couple of times. And on the latest
Android version, you need to tap it seven times. And then you get a secret developer menu, or developer options screen. And in this developer options screen, that is where you flip the USB debugging switch. And once you have the
USB debugging enabled, then you can just
plug in your USB cable. And as soon as you run flutter run, or press the Run button in your
Android Studio, it will recognize
your physical device and it starts installing
the APK on your device. That's amazing. If you want to make
use of an emulator in your virtual devices,
that's fine too. You can go into Android
Studio and you can create one. You can pick whatever
operating system and/or Android version you want and
what type of device, and you can set it all up. Now when you do this
for the very first time, you need to go into
your emulator settings and make sure that you
enable all the switches that allow the microphone to work. And if you have very
strange problems where you have that idea
like, well, I don't feel that the audio is working. I don't see zeros in my logs,
but it still doesn't work. Sometimes it might be
wise to record your audio and play it back in a wave
file through your Flutter code, because then you can exactly see
how Dialogflow gets the audio. By now, what we did is create this example app. It doesn't do anything
with Dialogflow yet. But that's fine. Everything is set up
in the way that we can start using the
right code, which is what we'll do in the next step. We will create the
chat interface. And you could copy
it from the codelab or you can click on this link. Because I put the code
all in GitHub for you. So that's easy. That means you can just do a
Control-A, Control-C, Control-V and paste it in your project. You will create a chat.dart file. So that's a new file. And that's where you
put all the content in. And this is the place where
we will eventually integrate our Dialogflow stuff into. We will also create-- or rather, we already have-- a main file. And the main file is where we link the chat.dart interface. You can also get that from this link and start copy-pasting it. And what this file does is link in the chat interface, which you can see here. Now, what this chat.dart is
doing, it basically draws out the whole
interface for me. So in my case, it's
an orange interface where you see a list view
for all the chat balloons. You will see a little text
area with two buttons. One is the Send button, but
it also works on [INAUDIBLE]. And you will see the
microphone button that you can press and hold
to start recording your audio. That's all for now. I have to speed up time-wise. But let's quickly go to
the Dialogflow settings that you need to do. And what I did here is create some to-dos. The first to-do is loading the library in your package, or in your chat.dart file. So you'll load my custom code, the dialogflow_grpc library. The next step is that you will create a Dialogflow instance. Then you will integrate the service account. And for the service account, we now have the reference, so this will work: it links to the service account. And once you have the service account, you can create a Dialogflow instance. And it now knows the project ID that you're in. So only your Flutter app can talk to your Dialogflow agent.
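Sketched out, those to-dos look roughly like this; this is based on the dialogflow_grpc codelab examples, so exact class and method names may differ per package version:

```dart
import 'package:flutter/services.dart' show rootBundle;

import 'package:dialogflow_grpc/dialogflow_grpc.dart';

late DialogflowGrpcV2Beta1 dialogflow;

Future<void> initDialogflow() async {
  // Load the service account key that we copied into assets/
  final keyJson = await rootBundle.loadString('assets/credentials.json');
  final serviceAccount = ServiceAccount.fromString(keyJson);
  // The instance derives the Google Cloud project ID from the key file
  dialogflow = DialogflowGrpcV2Beta1.viaServiceAccount(serviceAccount);
}
```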
And you can start running the intent calls on it. This is what we're doing in the handleSubmitted method. The handleSubmitted method runs once you type text in your emulator and press Enter. Then what it will do is take your written text, show it on screen, and make the detect intent call. And what you can see is that this is a Future, which means it's asynchronous code. It sends the text, starts detecting, gets the answer from Dialogflow, and stores it in the fulfillment text. And from the fulfillment text, we will print this on the screen.
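A sketch of that handleSubmitted call, again based on the dialogflow_grpc examples (names may vary by version; showing the message on screen is left out here):

```dart
Future<void> handleSubmitted(String text) async {
  // detectIntent is asynchronous: it returns a Future with the response
  final data = await dialogflow.detectIntent(text, 'en-US');
  final fulfillmentText = data.queryResult.fulfillmentText;
  if (fulfillmentText.isNotEmpty) {
    // In the real app this becomes a ChatMessage widget via setState
    print(fulfillmentText);
  }
}
```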
Now, the second part is the stream method. This is the place where we will create the voicebot, or in our case, the speech to text. In the speech to text, first of all, there is an audio stream subscription. This is done for the microphone part. As soon as you press the microphone button, it starts listening to the audio stream and logging it. But it also adds it to the subscription in this audio stream. This is what we will eventually send to Dialogflow, because that is where
it gets my voice from. The very important step here is that you need to create a configuration where you assign a sample rate. You need to tell Dialogflow the sample rate that it's using, and that needs to match the sample rate that the microphone is using. In our case, the sound_stream package makes use of 16-bit PCM mono data. So we need to set the sample rate to 16,000 and set the encoding to AUDIO_ENCODING_LINEAR_16. This is very important,
otherwise it won't work. I did an additional step here: I am creating a bias list. Because my FAQs make use of product names, I am not sure if my speech model will understand these words, or whether it will misunderstand my wording. So that is why I'm giving certain words a boost. That way I influence the speech model to make sure that it always understands whatever I'm saying. And then the last thing
what you're doing here is the streaming detect intent, where you pass in the config and the audio stream that we get from the microphone, and we start listening. So in real time, Dialogflow will listen in and return answers to me. And if an answer was matched, so if an intent was matched, then we print it on screen in a chatbot message.
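Put together, the streaming configuration and call look roughly like this (based on the dialogflow_grpc codelab examples; the bias phrases, boost value, and the _audioStreamController name are my assumptions):

```dart
// Speech context: boost product names the model might otherwise mishear
final biasList = SpeechContextV2Beta1(
  phrases: ['Dialogflow CX', 'Dialogflow Essentials', 'Dialogflow'],
  boost: 20.0,
);

// Must match the microphone: sound_stream records 16-bit PCM mono at 16 kHz
final config = InputConfigV2beta1(
  encoding: 'AUDIO_ENCODING_LINEAR_16',
  languageCode: 'en-US',
  sampleRateHertz: 16000,
  singleUtterance: false,
  speechContexts: [biasList],
);

// Stream the microphone audio to Dialogflow and listen for matched intents
final responseStream =
    dialogflow.streamingDetectIntent(config, _audioStreamController.stream);
responseStream.listen((data) {
  final fulfillment = data.queryResult.fulfillmentText;
  if (fulfillment.isNotEmpty) {
    print(fulfillment); // shown as a chatbot message in the real app
  }
});
```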
OK, I have a few more minutes left, and that's great. If you've built this, you can now start playing around with Dialogflow. And this is also
the moment where I would say that, after you're done writing the code, you should start testing it in your
emulator or physical device. The very first time you start your application, you will get a popup that asks you for permission to use the microphone. And you need to say yes, while using the app. Once you do that, again, make sure that you enable the microphone in the emulator. Then it will work. And then you can start
playing around with this. And like I said, if you see only zeros, then there's likely something wrong with the permissions. If you don't see zeros,
if you do see numbers, but Dialogflow is not matching
the audio to whatever you're saying, make sure that you
have the right sample rate set and that it matches the hardware of your device. And if you're 100% sure
that whatever you're setting the configuration
is the right setting, maybe you want to
record your voice and play that back to be sure
that the audio that comes in is the right one, or test
it on a physical device, because then you're 100% sure. And then you're good to go. You can improve the model, as I
said before, in the Speech tab in Dialogflow. But you need to be
on the paid version. And then you have a much
better speech model. But the free version
works fine as well, especially if you provide the bias list, where you're steering the model. Let's say in your application someone could say, "I'm going out to eat pizza" versus "I'm going to Ibiza"-- which one is the right one? Well, it's the one that you give the boost. So that's what you're doing here. As you've seen, our app
is making use of speech to text where I use the
microphone to record my voice and I show the answer back
on the screen in text. Maybe you're also curious how you would play that audio back to the users instead of showing it on screen. Well, the good thing to know is that Dialogflow can return text to speech if you enable this switch in your Dialogflow interface. And then it returns a base64 stream in your response. You probably need to do some conversion to get it into Flutter. But if you have a look into the sound_stream plug-in, you see that they also have a player. So as long as you have a stream and you attach it to the player, then you can start playing whatever is in the stream. So yeah, you could build this with text to speech as well.
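A rough sketch of that idea, assuming you enabled output audio in Dialogflow and already decoded the base64 response into raw PCM bytes (the writeChunk-based player API is what I recall from sound_stream's example; verify it against the plug-in's docs):

```dart
import 'dart:typed_data';

import 'package:sound_stream/sound_stream.dart';

final PlayerStream _player = PlayerStream();

Future<void> playResponseAudio(Uint8List audioBytes) async {
  // sound_stream's player plays raw PCM chunks written to it
  await _player.initialize();
  await _player.start();
  _player.writeChunk(audioBytes);
}
```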
Now, for the last part of this workshop: modeling the Dialogflow agent. As I mentioned in the beginning, we are making use of FAQs. Here is where I'm loading
FAQ questions into my app. The first step here is that I am modifying the default intents. By default, if you start a Dialogflow agent, you get two intents: the welcome intent and the fallback intent. The welcome intent shows you the welcome message. And everything that cannot be matched will go to the global fallback. And you can start creating more
intents with custom training phrases, or what
we will do today is we will create a knowledge
base, where we link it and fetch FAQs from a website. So you click on Create Knowledge
Base, you give it the name. And now here is the magic. You click on Create
The First One. And from here, you give
it a document name. You specify the type. I choose FAQ, because I
have questions and answers on my screen. You can make it even
more experimental. If you have a whole
article on the website, then you want to import
and let Dialogflow figure out the questions and answers. So you could try that as well. That's with the second option. We're choosing FAQs. And this is a website
that is available online. It's important that
this website is available to search
engine bots, because it uses similar screen
scraping techniques as search engines are using. So you select text
HTML as a MIME type, and then you specify
the URL that I have in my set of instructions. But feel free to
use a different FAQ if you want to build a
different type of bot. I'm loading the FAQs from
my website into Dialogflow. And when you click on Save,
it starts scraping that file. And it will get all the
questions and answers into my Dialogflow. I can check this checkbox to make sure that it does automatic reloads. So every time I make
changes on my website, then a couple of times a day it
will start fetching it again. So make sure that the
chatbot is up to date with whatever's on the website. Now, for the FAQs in Dialogflow, there is natural language understanding behind this. So I could mistype a question and it will still figure out what the answer is. But if I were to ask this
question in a totally different way-- instead of the question, which in our case would be, "When should I use Dialogflow Essentials versus Dialogflow CX?", let's say I would ask, "Which one is better, CX or ES?" That's a different type of question, right? But I do want to return this answer. What I would do in that case is provide training phrases. So you promote it to an intent: you say Convert to an Intent, and it becomes an intent. And now I can start feeding
it training phrases. Which one is better? And then it will also make a
match on questions like these. Typically, I would
say give it like, 10 to 15 training phrases. Then you're good to go. Then you're training
a very decent model. Now if you go back to the
knowledge base settings, you see a slider. There you can specify which one will win: will the knowledge base have a stronger preference than the default intents, or not? We're going to stick to the
default, so it's like a 50-50. You scroll down. And this is an important step. You need to specify the first knowledge answer in the response, so it returns the answer to the question. So if you look here, you need to specify what answer will be returned by the chatbot. And you could even put text before it or after it. But you specify the first knowledge answer to make sure it will be returned on screen. You hit Save.
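In the intent's response section, that reference to the knowledge answer is a template variable. As a sketch (this is the variable syntax I recall from Dialogflow ES knowledge connectors; verify it in the console, and the surrounding text is just an example):

```
Here is what I found:
$Knowledge.Answer[1]
```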
And that is the moment you can start testing your application. And you should see a similar app to mine. And it should work,
so, congratulations. Now if you thought like, wow. This is amazing. I would like to play
around more with this, or maybe you're
interested in, "Lee, how did you actually build that gRPC plugin?" Well, I explained
this on my weblog. So if you go to my weblog,
I'm explaining the steps on how you can create
your own gRPC package. Because the gRPC package is basically auto-generated code from the SDK specs that are written in the docs. So I'm explaining how you
could do that, just in case you have other ideas for
Google Cloud components that you want to run in Flutter. If you're totally into building
integrations like this, definitely stay tuned,
follow me on Twitter or keep looking into the
comments on this video later next week, because we will introduce a Dialogflow CX competition where developers can build their own integrations. Everybody gets a free T-shirt, and you can win cool prizes. So that might be interesting. And the other two links
that I'm sharing here is I created a
white paper on how you use analytics for chatbots. And if you're interested
in reading my book, please look on the
Apress website. With that, I want to thank you
so much for watching today. And have a lot of fun during
the rest of the conference. Thank you. [MUSIC PLAYING]