- All right. Good afternoon, everybody. Today is the first day
of 2023 AWS re:Invent, and welcome to this breakout session, Building a First Generative AI Application with Amazon Bedrock. My name's Sherry Ding, a Senior AI/ML Specialist
Solutions Architect at AWS. Today together with me is my
co-speaker, Kaustubh Khanke. Kaustubh is a Principal Product Manager on the Amazon Bedrock team at AWS. In this session, we hope to
give you a good understanding of generative AI fundamentals, as well as a comprehensive
overview of Amazon Bedrock. So we are gonna get started
with a brief introduction to generative AI, followed
by a feature walkthrough of Amazon Bedrock, together with a demo. So we hope to give you
some good understanding about how Amazon Bedrock addresses some of the challenges in building
generative AI applications. Next, we'll talk about some use cases and the customer stories
to give you insights about how you can use Amazon Bedrock to build generative AI applications. And last, we're gonna wrap up
this session with some guidance to show you how you can get started on your generative AI journey on AWS. Now as you can see, there's a ton of exciting content in this session. So now let's get started. As all of you are probably
aware, so there are a lot of things going on in the
field of technology recently. The transformative and
innovative technologies enable us to solve really complex problems, and to reimagine how we do things. So one of those
transformative technologies that is gaining a lot of traction
these days is generative AI. While a lot of attention has been given to how consumers are using generative AI, we believe there's an
even bigger opportunity in how businesses will use it
to deliver amazing experiences to their customers and employees. The true power of generative AI goes beyond a search engine or a chatbot. It will transform how every aspect of your company and organization operates. The field of artificial intelligence has been around for decades. Now you may have this question. With so much potential, why is this technology, generative AI, which has been percolating for decades, reaching its tipping point now? Because with the massive
proliferation of data, the availability of highly scalable compute capacity, and the advancement of machine learning technology over time, generative AI is finally taking shape. So what exactly is generative AI? Generative AI is an AI technology that can produce new original
content, which is close enough to human-generated content
for real-world use cases, such as text summarization,
question answering, and image generation. The models powering
generative AI are known as foundation models. They're trained on massive amounts of data with billions of parameters. Developers such as yourself can adapt these models for a wide range of use cases with very little fine-tuning needed. Using these models can reduce the time for developing new
AI applications to a level that wasn't really possible before. Now, you may be wondering how generative AI fits within the world of artificial intelligence. So for anyone who's embarking
on their generative AI journey for the first time, I know
this might be confusing. So let's start with the
broadest area first. The broadest area is
artificial intelligence. Artificial intelligence is any technique that allows computers to
mimic human intelligence. This can be through
logic, if/then statements, or machine learning. Within the scope of
artificial intelligence, machine learning is a subset where machines are used to
search for patterns in data to build logic models automatically. Machine learning models have developed from shallow models to deeply multi-layered neural networks, and these deeply multi-layered neural networks are called deep learning models. So deep learning models
perform more complicated tasks, like speech and image recognition. Generative AI is a
subset of deep learning. It is powered by foundation
models that are extremely large and can perform the most complex tasks. Now let's dive a little deeper
to understand the complexity of generative AI. Traditional forms of machine
learning models allow us to take simple inputs,
like numerical values, and map them to simple outputs
like predictive values. With the advent of deep learning, we could take complicated
inputs, like images or videos, and map them to relatively simple outputs, like identifying an object in an image. And now with generative AI, we can leverage massive
amounts of complex data to capture and present the
knowledge in a more advanced way, mapping complicated inputs to complicated outputs,
like extracting a summary, and the key insights
from a large document. So this is how traditional
machine learning models, deep learning models, and
the generative AI models under the scope of artificial intelligence
differ from each other from an input and output point of view. Now let's take a look at the differences from a model point of view. Most traditional machine learning models are trained via a supervised learning process. So these models use architectures that require data labeling and model training to produce one model for one specific task. However, the foundation models that power generative AI applications are driven by a transformer-based neural network architecture. So this architecture enables models to be pre-trained on massive amounts of unlabeled data via a self-supervised learning process. So these models can be
used for a wide variety of generalized tasks, and they can be easily
adapted for a specific domain or industry via a customization
process known as fine-tuning, with a very small amount of labeled data. So this difference between the traditional machine learning models and the foundation models is what can save you tons of time and effort. There are mainly three
types of foundation models. First type is text-to-text models. So these models accept the text as input and output text as well. For example, you can input text from a newspaper article to these models, and get a text summary as a response. Second category of foundation models is text-to-embedding models. These models accept text as input and output numerical
representations of the input text. So these numerical representations
are called embeddings. We are gonna cover embeddings in a little more detail
later in this session. And the third category of foundation models is
multimodal foundation models. So these models take text as input, but generate output in another
data modality, such as images. Now, if you've been able to follow along so far, you basically officially understand the basics of generative AI. Now let's walk you through what has been Amazon's
generative AI journey so far. Machine learning innovation
is in Amazon's DNA. We have been working on
artificial intelligence and the machine learning
for over 20 years. Our e-commerce recommendation engine is driven by machine learning. The paths that optimize
robotic picking routines at our fulfillment centers are
driven by machine learning, and our supply chain, forecasting, and capacity planning are all
informed by machine learning. Generative AI is not really new in Amazon. We have been doing
generative AI for years. For example, we use it to deliver highly accurate
search results on amazon.com. We use it to deliver the human-like conversational experience of Alexa, which millions of people are enjoying every day. At AWS, we have played a key role in democratizing machine learning and making it accessible to
anyone who wants to use it. Now more than a hundred
thousand customers of all sizes and industries are using
AWS for machine learning. Customers have asked us how they can quickly
begin using generative AI within their businesses and organizations to drive
new levels of productivity and transform their offerings. Keeping these requirements in mind, we have identified four most
important considerations to quickly build and deploy generative AI
applications at scale. All of the generative AI capabilities that we already have on AWS, and that will be rolled out in the future are built based on these considerations. At AWS, you have the easiest way to build generative AI applications with leading foundation
models on the most performance and a low-cost infrastructure. You can use your data to
fine-tune these models, and we always ensure that all your data is in a secure and private environment. We also provide you out-of-the-box applications and services to increase
your productivity, and shorten the time to market. Now I'm going to hand you
over to my colleague Kaustubh, and Kaustubh is gonna talk about some of those challenges in building
generative AI applications, and how Amazon Bedrock
addresses these challenges. Here you go, Kaustubh. - Thank you, Sherry. Building generative AI
applications is challenging. Customers have told us that there are a few big things standing in their way of using
foundation models more broadly. First, there is no single
optimized model for every task, and models are going to
be constantly improved with newer advances in technology for the foreseeable future. So for any particular use
case, customers often need to put together several models
that work with each other, or upgrade the same
model to newer versions. This can take time and resources. Second, customers want it to be easy to take the base foundation model, and build differentiated
apps using their own data. And since the data that customers want to use for customization is incredibly valuable
intellectual property, they need it to be completely protected, secure, and private during that process. And they want control over how their data is shared and used. Foundation models on their
own are also limited, because they cannot complete
complex tasks that require them to interact with external systems. To make these kind of
capabilities possible, developers must go through
a multi-step process to accomplish even simple tasks, tasks like booking a flight,
filing an insurance claim, or returning a purchased item. This process involves
providing specific definitions and instructions to the foundation model, configuring access to
company data sources, and writing API code to execute actions. Lastly, our customers have told us that they want application
integration to be seamless without having to manage huge
clusters of infrastructure or incur large costs. To address all of these challenges, we announced the general availability of Amazon Bedrock on 28th
of September this year. Amazon Bedrock is a fully managed service that offers a choice of
high-performing foundation models from leading AI companies. Along with this, you also get a broad set of capabilities that you need to build generative AI applications, which simplifies development while maintaining privacy and security. With Amazon Bedrock's
comprehensive capabilities, you can easily experiment with a variety of top foundation models,
customize them privately with your data, using
techniques such as fine-tuning and retrieval augmented
generation, or RAG, and create managed agents that execute complex business tasks, from booking travel and
processing insurance claims, to creating ad campaigns and managing inventory, all
without writing any code. Now since Amazon Bedrock is serverless, you don't have to manage
any infrastructure, and you can securely integrate and deploy generative AI applications using the AWS services you
are already familiar with. Now, you may be wondering,
how does Amazon Bedrock work? Well, first you pick a
foundation model from the range of models that are
provided by Amazon Bedrock. Next, you can customize the
experience using your data through techniques such as fine-tuning and retrieval augmented generation. Then you can orchestrate your
tasks using tools offered by Bedrock to solve those complex problems that you're working on. The key point to remember here
is that with Amazon Bedrock, your content is not used
to improve the base models, and is not shared with
third-party model providers. Now let's go through each
of these steps one by one. We believe in offering the best choice and flexibility of foundation
models for your use cases. We have our own Amazon Titan models, which includes Titan Text
for text summarization and our Titan Embeddings model
for embeddings and search. We also offer AI21 Labs'
Jurassic-2 models, which use natural language to generate text in a variety of different languages. Anthropic's Claude models are built with the latest research on responsible AI to perform conversational
and text processing tasks. Command, Cohere's large language model, is a text generation model
trained for business applications such as summarization, copywriting, dialogue, extraction,
and question answering. Fine-tuned versions of
Llama 2 models from Meta are ideal for dialogue use cases. Bedrock also supports Stability
AI's foundation models, including the widely popular
Stable Diffusion model, which is capable of
generating unique images, art, and design. Now, once you've selected
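To make the model-choice step concrete, here is a minimal sketch of invoking one of these models through the Bedrock runtime API with the AWS SDK for Python (boto3). The model ID, prompt wording, and region are illustrative assumptions (the prompt shape shown follows the Claude v2 text-completion style), and the call requires AWS credentials with Bedrock access:

```python
import json

def summarize_with_claude(text: str, region: str = "us-east-1") -> str:
    """Sketch: invoke Anthropic Claude on Amazon Bedrock to summarize text.
    Assumes AWS credentials with Bedrock access are configured."""
    import boto3  # local import keeps this sketch self-contained
    client = boto3.client("bedrock-runtime", region_name=region)
    body = json.dumps({
        "prompt": f"\n\nHuman: Summarize this text in two sentences:\n{text}\n\nAssistant:",
        "max_tokens_to_sample": 300,
    })
    response = client.invoke_model(modelId="anthropic.claude-v2", body=body)
    return json.loads(response["body"].read())["completion"]
```

Each provider expects its own request and response schema, so consult that provider's documentation before swapping in a different modelId.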
your foundation model, one of the important
capabilities that Bedrock has is the ability to customize a model, and tailor it to your business,
your data and your products. You can fine-tune
Bedrock's foundation models by providing a few
hundred labeled examples. You can do this by pointing Bedrock to your data sets in your S3 bucket, making it possible to create
an accurate, customized model for your specific problem
without the burden of collecting large
volumes of annotated data. Now while foundation models
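Pointing Bedrock at a labeled data set in S3 can be sketched with boto3's create_model_customization_job call. Every name, ARN, bucket path, and hyperparameter value below is a placeholder, not a recommendation:

```python
def start_fine_tuning() -> None:
    """Sketch: launch a Bedrock fine-tuning (model customization) job
    from labeled examples stored in S3. All identifiers are placeholders."""
    import boto3  # local import keeps this sketch self-contained
    bedrock = boto3.client("bedrock", region_name="us-east-1")
    bedrock.create_model_customization_job(
        jobName="my-tuning-job",
        customModelName="my-custom-model",
        roleArn="arn:aws:iam::123456789012:role/MyBedrockRole",
        baseModelIdentifier="amazon.titan-text-express-v1",
        trainingDataConfig={"s3Uri": "s3://my-bucket/train.jsonl"},
        outputDataConfig={"s3Uri": "s3://my-bucket/output/"},
        hyperParameters={"epochCount": "2"},
    )
```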
are incredibly powerful, and have a robust understanding
of natural language, like I've mentioned
before, they require a lot of manual programming to
complete complex tasks, like booking a flight or
processing an insurance claim. That's because out-of-the-box foundation models are not able to access
up-to-date knowledge sources, like your recent company-specific data. They're also unable to
take specific actions to fulfill those users' requests. So to make this happen,
developers need to follow a number of resource-intensive steps, steps like defining
instructions and orchestration, configuring the foundation model to access company data sources, and writing more custom
code to execute these steps through a series of API calls. Finally, developers must
set up cloud policies for data security while also
managing the infrastructure for the foundation
model-powered application. Now all of these, as you
can imagine, take weeks. To address these challenges, we offer Agents for Amazon Bedrock. Now with a few clicks,
Agents for Amazon Bedrock can configure your foundation model to automatically break
down and orchestrate tasks. And all of this can be done
without writing any manual code. The agent securely connects
to your company's data sources through a simple API. It automatically converts your data into a machine-readable format, and augments the user's request
with relevant information to generate a more accurate response. Now, Agents in Bedrock also take actions to fulfill the user's request by executing API calls on your behalf. And you don't have to worry about complex systems integration and infrastructure provisioning. As a fully managed service, Agents for Bedrock takes
care of all of this for you. Now, you may remember that Sherry had mentioned the concept of embeddings. You know, this was earlier in the context of types of foundation models. Let me explain this concept
a little bit further. So vector embeddings are
numerical representations of your text, image, audio, and video data. While humans can understand the meaning and context of words, machines
only understand numbers. So we have to translate them into a format that's suitable for machine learning. By assigning a number to each feature of a word, we can view vectors in a
multidimensional space, and measure the distance between them. Words that are related in
context will have vectors that are closer together,
which helps machines understand the similarities and
differences between those words. Now for instance, cat is closer to kitten, while dog is closer to puppy. So by comparing embeddings in this way, the model will produce more relevant and contextual responses
than simple word matching. Now, while embeddings aren't new to machine learning-based applications, their importance is growing rapidly, especially with the
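A toy sketch makes the cat/kitten intuition concrete. The three-dimensional vectors below are invented for illustration; a real embedding model such as Titan Embeddings produces vectors with hundreds or thousands of dimensions:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """1.0 means the vectors point the same way; values near 0 mean unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up toy embeddings; a real model places related words close together.
vectors = {
    "cat":    [0.90, 0.80, 0.10],
    "kitten": [0.85, 0.75, 0.15],
    "dog":    [0.10, 0.80, 0.90],
    "puppy":  [0.15, 0.75, 0.85],
}

# "cat" sits closer to "kitten" than to "dog" in this space.
cat_kitten = cosine_similarity(vectors["cat"], vectors["kitten"])
cat_dog = cosine_similarity(vectors["cat"], vectors["dog"])
```

Because cosine similarity depends only on the angle between vectors, related words score close to 1.0 while unrelated words score noticeably lower.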
availability of generative AI and natural language
processing in general. For example, embeddings can
superpower semantic search for use cases like rich media search and product recommendations. In this scenario, you can see that the semantic search is
greatly enhancing the accuracy of our outputs for a simple query, like bright-colored golf shoes. Now, in addition to semantic search, embeddings can also be used
to augment your prompts for more accurate results
through a technique called retrieval augmented
generation, or RAG. Now let me try to explain RAG
in a few sentences if I can. RAG is an artificial
intelligence technique that allows foundation models to incorporate information
from a knowledge repository for generating more informed responses. RAG allows you to customize
foundation model responses without the overhead that is required with techniques like fine-tuning. Although the degree of
customization may not be as much as fine-tuning, if your use case can
be addressed using RAG, you may not need to put
in the effort required to perform fine-tuning. So Amazon Bedrock offers a
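The core RAG loop, retrieve relevant context and then augment the prompt, can be sketched in a few lines of Python. The word-overlap retrieval here is a deliberately naive stand-in for the embedding-based retrieval a real system would use, and the documents are invented:

```python
import string

def tokenize(text: str) -> set[str]:
    """Lowercase, split on whitespace, and strip surrounding punctuation."""
    return {word.strip(string.punctuation) for word in text.lower().split()}

def retrieve(query: str, documents: list[str]) -> str:
    """Naive retrieval: return the document sharing the most words with the query."""
    query_words = tokenize(query)
    return max(documents, key=lambda doc: len(query_words & tokenize(doc)))

def augment_prompt(query: str, documents: list[str]) -> str:
    """Prepend retrieved context so the foundation model can ground its answer."""
    context = retrieve(query, documents)
    return f"Use the following context to answer.\nContext: {context}\nQuestion: {query}"

documents = [
    "Standard rooms at the resort start at $120 per night.",
    "The resort offers a kids' club and two swimming pools.",
]
prompt = augment_prompt("What is the lowest price for a room per night?", documents)
```

The augmented prompt, not the bare question, is what gets sent to the foundation model, which is why the answer can cite up-to-date, company-specific facts.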
feature called Knowledge Base. Now, Knowledge Base lets you
perform RAG without having to do any of the heavy lifting. All you have to do is point the foundation model to a data source. We currently support the three most commonly used data sources for RAG applications. Once you've pointed the foundation
model to the data source, the foundation model will
start responding with the help of the knowledge repository. The foundation model will
also provide citations that clarify which documents in the knowledge repository were used to generate the response. Citations are a good way to fact-check the foundation
model's responses, and reduce the problems associated with hallucinations, which you've probably heard about. Now let's take a quick look at
a demo that shows both Agents and Knowledge Base in action. In this example, we're gonna see a travel company that has
built a bot for their users. We are gonna walk through
all the different kinds of inputs that the bot can take. So now this is the website of the company. They've created a chatbot. The user is interfacing
with the agent. The user's asking for,
you know, a recommendation for a family vacation, right? So all of this information
is now being pulled up by the foundation model, and it's able to give
a very succinct summary of that recommendation. And now the user is asking
for more relevant information, like, you know, are there
any special events happening? You know, what's happening recently? And then the user's also asking, what are the offers and discounts that are happening right now? Now this kind of information
can only be accessed through your data source, but the agent is able to link to that kind of information, and is able to respond based on that. It's also able to tell you the
lowest price for a room. It's able to give you that from up-to-date and detailed information. And then finally when the user says, "Hey, I wanna book that room," it's actually able to
do that booking itself. So that is basically the final step, which is not just serving the information, but making those API calls on your behalf to make sure that that action
that the customer wants to do actually goes through. Now let's take a look
at how you would've gone about configuring that exact experience. So we are gonna pull up
the Amazon Bedrock console, and we're gonna look at two steps. The first step is setting up
the Knowledge Base itself. So here's the Bedrock console. In the console, I'm going to
the Knowledge Base section. I'm gonna create a Knowledge Base here. I'm gonna give it a name. I'm gonna, you know, sort of explain what this
Knowledge Base is for, and then I'm gonna point
it to the data source. Right here, I'm gonna
point it to an S3 bucket, and I'm gonna select an embeddings model. In this case, I've selected the Titan Embeddings model, and then I'm selecting the vector store that
I want the embeddings to be stored in. Once I've done that, I
come to the review page, and then as soon as I hit
Create, pretty much that's it. Your embeddings have been created. The next step now is to
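For completeness, a Knowledge Base can also be queried from code rather than the console. The sketch below assumes boto3's bedrock-agent-runtime client and the retrieve_and_generate operation as shaped at launch in late 2023; the knowledge base ID and model ARN are placeholders:

```python
def ask_knowledge_base(question: str) -> str:
    """Sketch: RAG query against a Bedrock Knowledge Base.
    The knowledgeBaseId and modelArn values are placeholders."""
    import boto3  # local import keeps this sketch self-contained
    client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")
    response = client.retrieve_and_generate(
        input={"text": question},
        retrieveAndGenerateConfiguration={
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": "KB12345678",
                "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2",
            },
        },
    )
    return response["output"]["text"]
```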
go from the Knowledge Base to the agent itself. So now we're gonna create the agent, and give it access to that Knowledge Base. So step one, I'm gonna name the agent, and then I'm gonna describe
what the agent is for, so that anyone, you know, can understand. And then once I've set that
up, I'm gonna select the model that I want to use for that agent, and I'm gonna provide
specific instructions to the agent as to its purpose. Now, once I've done that, I'm gonna break the action into two steps. One is finding the travel booking, and the second is to actually
make the travel booking. So I've broken that action down into two. Then I'm gonna give the agent
access to the Knowledge Base. Once I've given that access,
I get the review page where I'm able to see the action and see the Knowledge Base
that's been handed to it, and then I'm just gonna create it. And that's pretty much it. So like you saw, there was
that simple chatbot experience where you were able to pull in information from your repository from APIs, and you were able to make
those invocations as needed. You were able to do all
of that without having to write any code just
by pointing the agent to the right Knowledge Base
to the right repository. And that's it. So you know, as you can imagine, like there is so many possibilities here that you can address in your
own organizations, right? And we can't wait for you to build all these cool new innovations, and solve all these complex challenges that you may be facing on
your day-to-day, you know. Now with Agents and
Knowledge Base, as I said, you are closer to deploying that generative AI application without having to build any of that orchestration layer. Now this is the sample architecture, which is similar to what
you saw a little earlier in the demo, so you start off by providing a prompt to
your foundation model. That's the first step in that journey. Once you do that,
basically what's happening behind the scenes is the
agent is pointing the model to the right knowledge sources. It's then retrieving data
from some of the other APIs that were provided to it. The agent is then collating
all of that information, and then handing it off
to the foundation model for the foundation model to
then generate that response. And then once the foundation
model has generated the response, that gets rendered over to the user. Now it's worth noting here that this could be either
a single iteration, or you could have multiple iterations. All of those can be handled
automatically by the agent based on configurations
that you would make. Now you may be wondering, how do I figure out when to use RAG versus fine-tuning? Right, so here's a simple
way to think about it. As you go from left to right
on this graph, you are able to maximize the degree of customization. While, you know, on the flip side, what you'll find is that the complexity and the cost along that same
axis is going to keep going up. So at the end of the day, it's
gonna be up to you to decide where along this journey, from
left to right, are you able to maximize customization
for your organization, while keeping those complexity and costs in check so
that it meets your needs. On the left, we start with
prompt engineering, right, which is basically simple techniques that you probably are familiar with on how you can provide instructions to the foundation model, right? And you would do this if, let's say the out-of-the-box
experience that you're getting with simple prompts are, you
know, not giving you the kind of return that you wanted. So a good example, a very
simple example here is, let's say if you're
getting huge responses, but you want like very
short and concise responses, you would craft your prompt in a way to say, "Hey, foundation model, I want you to keep your
reply under five sentences." And once you do that, you know, it basically would understand and comply. There are so many more customizations that you could do through prompt engineering. Now, if that's not enough,
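That keep-it-under-five-sentences trick is just string construction before the model is ever called. A trivial sketch, with illustrative instruction wording:

```python
def build_prompt(user_request: str, max_sentences: int = 5) -> str:
    """Wrap a user request with an instruction that constrains response length."""
    return (
        f"{user_request}\n"
        f"Keep your reply under {max_sentences} sentences and be concise."
    )

prompt = build_prompt("Recommend a family vacation destination in Europe.")
```

The same pattern extends to tone, format, persona, and other instructions you layer onto the user's request before invoking the model.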
that's when you would go to the next step, which is RAG, right, which we just talked
about a little earlier, you know, linking the knowledge
repository to the model, so that it gives you better responses. Now, let's say even after that, you're not able to get the
responses that you want. That's when you would tap
into fine-tuning, right? And you would take annotated
data that is available to you, you would provide it to the model, and you would change it
fundamentally in a way that is more customized to your use case. Now, you may also be wondering, how do you consume Amazon
Bedrock as a service, right? So there are basically two
modes of operation here. There is an on-demand mode, and then there is a
provisioned throughput mode. With on demand, you're
gonna pay as you go, and you don't have to
make any commitments. Pricing is based on
input and output tokens. Now if you aren't familiar
with the word tokens, it's similar to a word count. On-demand is great for prototyping, and you know, small scale
production workloads. However, there are limits
on requests per minute and tokens per minute that
you will have to deal with when you're using on-demand. On the other hand, provisioned
throughput will give you a stable performance at a fixed cost, but in exchange for that stable
performance, you will have to make either one-month
or six-month commitments. You'll be paying at an hourly rate, and this will be discounted for extended commitments that are made. Now, provisioned throughput is great for production workloads where you need that
assured performance, right? Customers that have gone
past their prototyping stage and have graduated beyond small-scale
production workloads will find that this provisioned throughput is going to address your performance needs. Now, let's talk about one
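The on-demand versus provisioned throughput trade-off comes down to simple arithmetic. The token volumes and prices below are invented placeholders; check the Amazon Bedrock pricing page for real numbers:

```python
def on_demand_cost(input_tokens: int, output_tokens: int,
                   price_in_per_1k: float, price_out_per_1k: float) -> float:
    """On-demand: pay per input and output token (prices are per 1,000 tokens)."""
    return (input_tokens / 1000) * price_in_per_1k + (output_tokens / 1000) * price_out_per_1k

def provisioned_cost(hours: float, hourly_rate: float) -> float:
    """Provisioned throughput: a fixed hourly rate for the commitment period."""
    return hours * hourly_rate

# Hypothetical monthly workload and prices, purely for illustration.
monthly_on_demand = on_demand_cost(50_000_000, 10_000_000, 0.0008, 0.0016)
monthly_provisioned = provisioned_cost(24 * 30, 20.0)  # a full month at $20/hour
```

Comparing the two totals at your own expected volume shows which mode is cheaper at that scale, on top of the throughput-limit and assured-performance considerations.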
of the most common questions that most customers have
regarding generative AI, right, security and privacy. Bedrock uses a bunch of techniques to address these concerns. So first and foremost, you
can use AWS PrivateLink with Amazon Bedrock to establish private connectivity between the foundation model and your on-premises network without exposing your
traffic to the Internet. Second, your data is
always encrypted in transit and at rest, and you can also use your own keys to encrypt the data. If you've used AWS Key Management Service, or AWS KMS, you can use that with Bedrock, and you can create, own, and manage those encryption keys, so you have full control over how you encrypt the
data that is used for, you know, model customizations. Next, Amazon Bedrock is compatible with compliance programs
like GDPR and HIPAA, and there are many more coming. And last but not least, like I've mentioned before, with Amazon Bedrock, your content is not going to be used to improve the base models, and it's not shared with any of those third-party model providers. The other key facet that comes up quite often is
governance and auditability. Now with Amazon Bedrock, we offer a comprehensive set of monitoring and logging capabilities so that you can support, you know, governance and any audit requirements. You can use Amazon CloudWatch
to track usage metrics and build customized dashboards
that are required for any of your auditing purposes. Additionally, you can use AWS CloudTrail to monitor API activity
and troubleshoot any issues as you integrate other systems into your generative AI applications. You can also choose to store the metadata and any requests and responses
in Amazon S3 buckets. So that's completely up to you. Now you know, by now, you should have had a good
understanding of the basics of Amazon Bedrock as a service. Now, as you can imagine, the potential of generative AI is incredibly exciting, right? There's a forecasted $7 trillion increase in global GDP that is expected over the next
10 years with generative AI. Now let's look at a few common use cases that generative AI enables. You can apply generative AI
across pretty much all lines of business, including
engineering, marketing, customer service, finance, and sales. You can use generative AI to
improve customer experience through capabilities such
as chatbots, virtual agents, intelligent contact
centers, personalization, and content moderation. You can boost your employees' productivity with generative AI-powered
conversational search, content creation, text summarization, and code generation, among others. You can use generative AI
to turbocharge production of all types of creative
content, from art and music to text, images, animations, and videos. And finally, you can use generative AI to improve business operations with things like intelligent
document processing, maintenance assistance, quality control, and visual inspection, and even synthetic
training data generation. The use cases we've reviewed so far can be applied pretty much
to all of the industries. Let's have a quick look at some of these, so you can see, you know,
financial services, healthcare, automotive, manufacturing,
the list goes on, right? And keep in mind this is not
even a comprehensive list of industries or use cases. Let's take a look at a
few customers from some of these industries that
are using Amazon Bedrock. So Lonely Planet is a
premier travel media company that's celebrated for more
than 800 travel books. You know, these offer
travel advice and guidance. So Lonely Planet has been developing a generative AI solution on AWS to help their customers plan epic trips, and create life-changing experiences with, you know, personalized travel itineraries. By building with Claude 2 on Bedrock, Lonely Planet has reduced its
itinerary generation costs by nearly 80%. Lonely Planet quickly created a scalable and secure AI platform that organizes their contents in minutes to deliver cohesive, highly
accurate travel recommendations. Now they can repackage, and repurpose, and personalize their
content in various ways on their digital platforms, and they can do this all
based on customer preferences, while highlighting trusted local voices. Similarly, NatWest Group is a leading bank in the UK
serving over 19 million people and supporting communities,
families, and businesses. Amazon Bedrock allows NatWest group to leverage the latest
generative AI models in a secure and scalable platform, which
their teams of data scientists, their engineers, their
technologists, are using to experiment and build new services. Now with these tools, NatWest Group is combating
the next generation of threats from financial crime, as well as they're allowing the customers and NatWest employees
access to the information that they need, in the
format that they want, when they need it. Finally, Salesforce is extending their bring your own AI
integration to Amazon Bedrock for generative AI. You can access Salesforce
Data Cloud securely and easily with zero-ETL, and use that company data to quickly and securely customize your choice of foundation models using Bedrock. These customized foundation
models are tailored for your company, and
then can be easily invoked from Salesforce Data Cloud
to be used across Salesforce. Now, as you can see,
there's so much going on with generative AI, and these
are just like a few examples. Now I'm gonna hand it back over to Sherry. She's gonna take you
through the final stretch of this session. - Thanks, Kaustubh. All right, so just now, Kaustubh mentioned that generative AI can be
applied to a lot of use cases. So identifying the right use case that would benefit your organization and innovate your business
is really a good start for your generative AI
journey on AWS Cloud. To help you explore and select the most relevant use case, we have created the AI Use Case Explorer. It is an easy-to-use search tool that will help you find the right use case based on your industry, business function, and the desired business outcome. This tool allows you to
explore a curated list of AI use cases for your organization, and to discover how
organizations around the world are using AI to drive business outcomes. It also allows you to follow
our expert curated action plan to realize the power of AI. After you select the right use case, next step is to empower your
developers of all skill levels through a variety of training opportunities. AWS Skill Builder is a broad library that provides more than
600 digital courses. In addition, we also have
programs like AWS Academy, AWS Restart, and AWS Educate
to empower your team. We now offer a collection of free and a low cost of trainings
to help people understand, implement, and begin using generative AI. For example, we recently
released a training course on Coursera called Generative
AI with Large Language Models. This course was developed with AI experts and AI educators like Dr.
Andrew Ng from DeepLearning.AI. This is a great opportunity to learn about large language models and to get hands-on experience
in training, fine-tuning, and deploying large language models for real world applications. Next, consider working
towards a proof of concept with help from AWS experts. These AWS experts guide customers through their generative AI journey on solving diverse business problems, aligning business and
technical stakeholders, and building an executive roadmap for you. In order to help customers
successfully build and deploy generative AI solutions, we established the AWS Generative AI Innovation Center. It is a $100 million investment in a new program that connects AWS AI/ML
experts with customers and partners worldwide to accelerate enterprise innovation and success with generative AI. Lastly, I'd like to share
some useful resources for you to get started with Amazon Bedrock. Now, please grab your phone and take a picture of
these three QR codes. The first QR code is Amazon
Bedrock's product page, and the second QR code is a step-by-step tutorial on YouTube. The third QR code is Amazon
Bedrock hands-on workshop. This brings us to the end
of this presentation, and I thank you so much for
your time and attention. (audience applauds) Yep. Please do complete the
survey in your mobile app, and Kaustubh and I are happy
to take some questions. (audience chatting)