Hello! Recently, the Flutter team announced a
new Dart SDK - google_generative_ai. It enables developers to use Google's state-of-the-art
generative AI models in their applications. Now, I am no AI guru, but since robots
will take over the world soon, it may be a good idea to mess around and find out.
Today, we will dive into the journey of building my first AI-powered photo scavenger hunt game
using the new AI Dart SDK, Gemini and Flutter. The main idea of the game is simple.
Choose your hunting location, AI generates a list of items that should be
available there, and the hunt starts. Now, find the item, and take a photo
of it. The AI validates the photo, and you either get points or need to try again.
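Sketching those rules in a couple of lines of Dart (the names here are my own, not taken from the project):

```dart
// Purely illustrative sketch of the game loop described above; all names
// are my assumptions, not the actual project code.
enum ItemStatus { hunting, found }

/// The game ends once every item on the list has been found.
bool isGameOver(List<ItemStatus> items) =>
    items.every((status) => status == ItemStatus.found);
```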
The game ends once all the items are found. The project consists of the main
app code and a separate package for integration with Gemini API. Even
though the app is relatively small, I decided to split the code per
feature. We will review them shortly. In the app's entry point, we set up all
the dependencies based on environment variables. You can see that it is possible
to launch the app using fake data instead of real integration with Dart AI SDK, which
is helpful for testing purposes. To do that, pass the use-fake-data flag
when launching the app. The photo picker feature is responsible
for taking a picture and making sure that the image bytes are ready to
be passed to the API a bit later. For state management, I used flutter_bloc.
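As a tiny runnable sketch of that environment-based setup (the flag names USE_FAKE_DATA and API_KEY are my guesses, not necessarily the project's):

```dart
// Hedged sketch: compile-time flags read in the entry point.
// Launch with fake data: flutter run --dart-define=USE_FAKE_DATA=true
const useFakeData = bool.fromEnvironment('USE_FAKE_DATA');
const apiKey = String.fromEnvironment('API_KEY');

void main() {
  // In the real app, this is where either the fake repository or the
  // Gemini-backed one would be created and handed to runApp.
  print(useFakeData ? 'Using fake data' : 'Using the Gemini API');
}
```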
Each application feature has a dedicated business logic component. The GameBloc is
responsible for tracking the game status, and storing the location of
your hunt and the final result. Once the location is set, a ScavengerHuntBloc is
initialised that loads and stores the hunt items. For every item in the scavenger hunt, an
ItemHuntBloc is used to track the hunt progress of a single item. This BLoC is responsible for
taking a picture, validating it and calculating the score. For score calculation, we are using
a stopwatch. The maximum score for finding an item is a hundred points. However, for every
5 seconds elapsed, you get a 5-point penalty. Each feature contains a view widget that initiates
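That scoring rule can be written down in a few lines (clamping the score at zero is my assumption):

```dart
// The scoring rule described above, as I understood it: start from 100
// points and subtract 5 for every full 5 seconds on the stopwatch.
int calculateScore(Duration elapsed) {
  const maxScore = 100;
  final penalty = (elapsed.inSeconds ~/ 5) * 5;
  final score = maxScore - penalty;
  // Clamping at zero is my assumption - the original rule doesn't say.
  return score < 0 ? 0 : score;
}
```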
the BLoC and covers all the different state views. Ok, I know that you LOVE talking about state
management, but that was it for this video. Let’s get to the juicy part - generative AI and
how to use it in our Dart and Flutter projects. The google_generative_ai package simplifies
the process of using Google’s generative AI models by providing a unified way to
pass prompts and receive responses. In the app, I created a dedicated Dart package
and added google_generative_ai as a dependency there. The package consists of two main
classes - a scavenger hunt repository and a client. The repository class is used by the app. It
is responsible for calling the Gemini API, validating results and passing them to
the business logic components. Also, a fake version of the repository
is created for testing purposes. The main AI magic lives in the
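A minimal sketch of that repository pair - an interface the blocs depend on, plus a fake implementation for testing. The method names and canned values are my assumptions:

```dart
// Hedged sketch of the repository described above; names are my guesses.
abstract class ScavengerHuntRepository {
  Future<List<String>> generateHuntItems(String location);
  Future<bool> validatePhoto(String item, List<int> imageBytes);
}

/// Fake implementation so the app can run without calling the Gemini API.
class FakeScavengerHuntRepository implements ScavengerHuntRepository {
  @override
  Future<List<String>> generateHuntItems(String location) async =>
      const ['BENCH', 'OAK TREE', 'FOUNTAIN', 'TRASH BIN', 'SQUIRREL'];

  @override
  Future<bool> validatePhoto(String item, List<int> imageBytes) async => true;
}
```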
ScavengerHuntClient class. First of all, the class has two constructors - a default one
that needs only an API key and a vertexAi one that also expects a project URL. If you live in
the region where Gemini API is already available, just use the default constructor and only pass
the API_KEY when launching the app. Instructions on how to get your API key are provided
in the official Gemini API documentation. If you are from a region where Gemini API
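A rough sketch of what those two constructors might look like (the real class lives in the author's package, so treat the exact shape as an assumption):

```dart
// Hedged sketch of the client with its two constructors.
class ScavengerHuntClient {
  /// Default constructor - for regions where the Gemini API is available.
  ScavengerHuntClient({required this.apiKey}) : projectUrl = null;

  /// Vertex AI constructor - also expects a Google Cloud project URL.
  ScavengerHuntClient.vertexAi({required this.apiKey, required this.projectUrl});

  final String apiKey;
  final String? projectUrl;
}
```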
is not directly accessible, you can still access generative AI models through Google
Cloud - it just takes a bit more setup. First, create a Google Cloud project by following the
instructions in the Vertex AI documentation. Then, generate your API key using the gcloud CLI
and build your project URL, which should look like this. Finally, pass the API key
and a project URL when launching the app. The first API call we do in the app is for
generating scavenger hunt items based on the selected location. All we need to do is
prepare a prompt, call the generateContent method from the SDK, passing the prompt as text
content and return the result as text. Honestly, the most challenging part of this
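With the google_generative_ai package, that call might look roughly like this (the model name and the function shape are my assumptions, not the author's exact code):

```dart
import 'package:google_generative_ai/google_generative_ai.dart';

// Hedged sketch of the text-only generation call described above.
Future<String?> generateHuntItems(String apiKey, String prompt) async {
  final model = GenerativeModel(model: 'gemini-pro', apiKey: apiKey);
  final response = await model.generateContent([Content.text(prompt)]);
  return response.text;
}
```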
code is building the right prompt. After some trial and error, I found that this structure
worked best, with some random hiccups from time to time. First, we set the role for
the AI so that it could play along. Then, we specify the task and add some context
that Gemini should handle. In this case, we ask it to generate a list of 5 items, but not all
of them should be equally easy to find. Finally, we specify the rules and the structure
of the response. The result should be a JSON string that contains a list of
short, upper-cased scavenger hunt items. The logic for validating an image is
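A hedged reconstruction of such a prompt, plus parsing of the expected JSON, might look like this (the wording is mine, not the author's exact prompt):

```dart
import 'dart:convert';

// Reconstruction of the prompt structure described above: role, task,
// context, rules. The exact wording is my assumption.
String buildHuntPrompt(String location) => '''
You are a scavenger hunt game master.
Task: generate a list of 5 items a player could find and photograph in this
location: $location. Not all items should be equally easy to find.
Rules: respond only with a JSON object of the form {"items": ["ITEM", ...]},
where each item is a short, upper-cased name.
''';

/// Parses the expected JSON response into a list of item names.
List<String> parseHuntItems(String json) =>
    List<String>.from(jsonDecode(json)['items'] as List);
```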
pretty similar. However, in this case, we also need to pass an image, thus we
are using a multi-part content object that allows us to send a prompt and an image
as a single request to the Gemini API. The prompt structure is identical to
the previous one, just the task is different - we ask to match the item with
its photo and return either true or false. The final result is an AI-powered
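Based on the package API, the multi-part call might look roughly like this (the vision model name and the prompt wording are my assumptions):

```dart
import 'dart:typed_data';

import 'package:google_generative_ai/google_generative_ai.dart';

// Hedged sketch of the image-validation call described above: the prompt
// and the photo bytes go out together as one multi-part request.
Future<bool> validatePhoto(String apiKey, String item, Uint8List imageBytes) async {
  final model = GenerativeModel(model: 'gemini-pro-vision', apiKey: apiKey);
  final prompt = 'Does this photo contain the following item: $item? '
      'Respond with a single word - true or false.';
  final response = await model.generateContent([
    Content.multi([TextPart(prompt), DataPart('image/jpeg', imageBytes)]),
  ]);
  return response.text?.trim().toLowerCase() == 'true';
}
```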
photo scavenger hunt game. Honestly, when building the app I didn’t feel that I
was working with AI specifically. The whole code structure is similar to any other public
API in the world, but the result is somehow different every single time. The fact that with
minimal AI experience, I managed to build this app over the weekend shows that the new AI
Dart SDK truly boosts your productivity - a lack of AI expertise should no longer be an
excuse for not using generative AI models to solve problems in your app. Thanks for watching! Save trees,
stay SOLID, and see you around.