[music playing] Good morning, good afternoon,
good evening, wherever you are. Welcome and hello.
My name is Deepak Singh. I have been with AWS
for over 12½ years. There, I run
our containers, Linux, and high performance
computing organizations, and the Amazon Open
Source Program Office. Some of you may remember me
on a stage a little larger than this one,
it even had a chandelier, in Las Vegas
the last couple of years. I was sharing that stage
with David Richardson who runs our serverless org,
and in those talks we talked a lot about how we think about what modern
applications mean to us at Amazon, how it has influenced
the products that we build, the services that we design
and make available to you, and quite honestly,
how we run our teams ourselves. This year we went virtual,
so we decided to split up and cover some of the same topics
just a little bit differently. So what am I going to talk about? Well, for one, instead of telling you
how we run our teams and how we think about software, you will learn from lessons that we
have learnt from our customers, how they gel with our vision, and how those customers
are modernizing with AWS and influencing the products
that we build, the services that they use, and some of the announcements
that you will hear today. In fact, some of you
have already heard them, because you heard them in Andy
Jassy’s keynote a couple of days ago. This talk is going to stay
at somewhat high level. I am going to talk
about the lessons learned, the patterns that our
customers are using, and announce and introduce
some new services and products. But there are a number
of deep dives from other AWS speakers
throughout this conference. At the end, I will link
to some of these topics, you should check them out, and quite honestly,
check out all the talks on containers, on serverless,
on modern application development, on developer tooling. You will find them very useful,
you may find them interesting, or you might actually learn
something from them and apply them
in your own organizations. Many years ago, I read
a series of blog posts, four of them actually,
by a gentleman named Jamis Buck. For those of you
who don’t know Jamis, he is the author of Capistrano, one of the early core members
of Ruby on Rails, and I believe still works
at 37Signals. The title of those blog
posts was, “there is no magic,
there is only awesome”. And there are few blog posts that have influenced my thinking
as much as those do. And here is what he argued
for in those posts. He said that it’s human nature
to look at something exceptional and think of it as magic. Another way of saying
it is the old adage that anything sufficiently advanced
seems like magic to humans. It’s much the same thing. But, what Jamis said
was it’s not magic that separates the exceptional
from the mundane, it’s awesomeness. And by awesomeness he meant
how we think about the tools that we use, how well we use them,
and what do they allow us to do? It actually influenced
my thinking in such a way that I paraphrased what he said
and tell people around me that there are three things
I don’t believe in. Magic, silver bullets and unicorns. Because when people think
about things in those terms, they assume it will paper
over any deep-thinking. They don’t need to understand
what they are building, and they sometimes end up assuming
everything is going to be fine and they learn
the hard way that it’s not. But what I do believe in, and what Jamis believes in,
is awesomeness. And today, I am going to dig
into a whole lot of awesomeness over the next 50 minutes or so. The agenda for today’s talk
is somewhat simple. We are going to talk about
patterns for modernization. These are patterns
used by our customers. There are many, many patterns
that are out there, we are going to talk about three, and we’ll go into some detail
into each one of them. These are strategies that
our customers are using to modernize not just their applications,
but the businesses too. And quite honestly,
it’s the business drivers that impact how they think
about modernizing their applications. We will take a look at the core
building blocks that they’re using
to modernize, to leverage these strategies
and jump into the cloud, adopting technologies
like serverless, like containers. We will talk about how operators
and DevOps teams can scale these production
environments, how they can get
the rest of their org to adopt these new technologies
and be successful with them, while maintaining all the guardrails
and tooling and compliance requirements
that any large organization has to. And last, but not least,
to get any developer to adopt what you are telling them to do, even if you haven’t figured out
the mechanisms to scale it across the organization, those tools have to be exciting,
they have to be delightful. And so we’ll talk a lot
about usability and how we can pull all of these
together to excite developers. But let’s start with a journey. I am sure all of you have heard
about Vanguard. Vanguard’s one of the largest
investment management companies in the world. Like many other companies,
they want to become faster, they want to move quickly,
they want to adopt DevOps, microservices,
continuous delivery, all the things that organizations
think of when they want to move quickly, when they want
to reinvent themselves. But Vanguard had a challenge. They are an old company, they have
been around for a long time, their applications have
a traditional tech stack, they are heavily virtualized, we are talking of 30-50 million
lines of code. They thought a lot about
how they could change, how they could move faster,
how they could allow their teams to innovate and move quickly
while taking care of security, the governance,
the compliance required. They are still
a large investment company, and they needed to think about that. They ended up
choosing ECS on Fargate. Many of you may have seen them
talk about this journey at re:Invent last year, when Vanguard came out
to talk in Andy Jassy’s keynote. And the reason
they liked ECS with Fargate was they really,
really liked the security model, the security by
isolation model of Fargate fits into how they thought
about security. Add to that the consumption model, the deep integration with the rest
of the AWS services out there, and what did it mean for them? Well, they were able to move
much more quickly, they were able to lower costs,
so 30% lower costs of compute. It cost them a lot less
to build their applications, 30% lower costs to build,
and, perhaps most importantly, they were deploying 20 times
more often than they used to before. So they were able to reduce costs, they were able to build
at a lower cost, they were able to deploy
a lot more often, and at the end of it, they still were
able to maintain the guardrails, the governance, and integrate into
the rest of AWS if they wanted to. There is a ton of good content
on how Vanguard has leveraged Fargate and you should go
and check those out. The Vanguard modernization story
is just one. Companies are different. Customers
are different. Needs are different. So let’s step back a little bit
and talk about the common patterns and strategies that we’ve heard
from our broad customer base. They are divided into
some discrete topics, but one of the things
I’d like you to remember is there is
no clear difference between these. Very often people are in the edges
between these types of patterns, and they pick based on their need
at that point in time. And two teams in the same company
can pick a different pattern. In the end, my hope is that
you are able to take this talk and other talks that you’ll hear
at re:Invent this year, and identify your own road
to success. The one constant we all have to deal
with today is change. Just look at the world
we live in right now, the environment we are in,
change is everywhere. Our customers face
unprecedented challenges, but, at the same time, they want
to move quickly, they want to innovate. Very often this is the time
that they can reinvent themselves, and many customers are. They need to be able to scale
quickly to millions of users. I don’t know how many of you were
sitting after the election in the US and pressing ‘refresh’
on the New York Times page for a week.
I know I was. So your applications
need to scale quickly. They need to be responsive,
they need to respond in milliseconds, and they are often built
on terabytes of data. These are some of the characteristics
of modern applications. They include applications that could
be web apps, backend services, mobile apps, you could be doing
a ton of data processing, maybe you started
adopting machine learning. Our customers have a portfolio
of these applications and workloads. When they move to AWS, they have
to make a choice for each one. Should they be retained?
Should they be retired? Should you rewrite your app? These are all the things
that our customers care about. I won’t go through
every one of these cases, but we’ll go through
three specific ones of those, and we’ll talk about that. So what are the questions that
we often hear from our customers? They are actually not
that surprising. I suspect all of you
have asked these. Our goal is to help you go
through these questions and make it easy for you
to answer them yourself, and based on what you know, choose the right path based on
the things that we talk about today, the options that we provide, and the things that work best
for your organization. Common questions are, how can
I move workloads to the cloud? The assumption is,
for most organizations, is modernization and a move
to the cloud come together. It may not happen today,
it may not even happen tomorrow, but at some point of time
the majority of your applications are going to be running in the cloud.
How do you get there? What are the things
you need to do today that make success five years
down the road much more assured and much less painful? The second part is,
how do you actually do it? What are the technologies
that you use? How do you change
your organizations to do that? Very often, and this is actually
a conversation that I have with customers a lot,
how does Amazon move so quickly? And they realize, and they have read,
that companies like Amazon allow their developers
to serve themselves, to build, to be able to run without
having to ask them any questions and ask for too much permission while maintaining all the guardrails
that a company of this size needs, because you have compliance needs,
you have quality requirements, there is a bunch of things
that can drive that. So those are some of
the questions that get asked. Then you get into
the more specific ones. Should I pick containers?
Should I pick ECS? Should I pick EKS? Should I pick Lambda?
And we’ll talk about that. And last, at the end of it, are you
setting yourself up for change? Because what you know today
may not be true tomorrow, and you may have to serve
way more traffic, or perhaps, in a negative world,
serve much less traffic. So I think these are
some of the questions that we hear from our customers, and many of things
that we talk to them about, about how they should think
about modernizing, are reflected in these questions
and driven by them. So why do you modernize? Well, you want to run
all apps at any scale, you want to improve
your customer experience, which means you should be able
to scale to millions of users. You may not always
have a website
that's responsible
for election results, but at least in the US,
once every four years for a presidential election,
you will be. And you need to be able
to handle that traffic. You need to have
global availability and increase the efficiency
of developers. A great example of this
is websites where they go down because they have to do
some maintenance. That’s not fun.
As a consumer, that’s not fun. It’s not fun for the teams
that are doing it because they have to operate it,
go down. So they want to be able
to work in a world where their applications
are always available. And, in fact, one of the most
interesting parts of the cloud is that the applications can be
available anywhere globally. It is not uncommon for an enterprise
to come to AWS and say, “we have a well oiled machine we are
running in our own data centers, but we want to run
these applications because of some new regulation, GDPR,
for example, in Germany. AWS has regions in Germany. How do we take our existing stack
and move that to Germany?” That's very often a question
that starts a great discussion between AWS and that company. And then, of course,
you go from there and you end up having more
and more interesting discussions, as they take on modernization. And as our world becomes
much more data-driven, it’s super-important for us to be able to handle
terabytes of data, handle lots of compute,
but you need to be able to do it in a much more efficient
cost structure. You have to be able
to show better ROI, better TCO,
otherwise it’s somewhat pointless. So I’ve been teasing this
for a while. Here are the three strategies
that we’re going to talk about today for modern applications. We are going to talk
about re-platforming, and I’ll go into all of these
into some detail. But essentially,
re-platforming means you are taking
an existing application and moving it to the cloud,
and as you can guess, we’re going to talk
about containerizing as a means
of re-platforming. We’ll talk about refactoring,
which is the most fun for me. If you aren’t a software
development shop today, if you develop applications, you are refactoring code
all the time. Now how do you refactor code
with modern applications in mind? And I’ll talk about that too.
My suspicion is, for most customers, refactoring is where they spend
a chunk of their time. And last, and perhaps
the easiest one, the most obvious one, which is,
“I’m building a new application. How can I build this application
to take advantage of everything
that the cloud gives me?” Because you are not encumbered
by anything. So we’ll talk about
that as well. The reality is not so distinct. Depending on who you are, you may
be doing all three at the same time. You could have a team here
that’s re-platforming, a team here that refactoring, and that funny little team
in the corner that gets to build new apps
all the time and doesn’t have to worry
about anything. I like to be in that team.
It’s much easier. Let’s talk about these
in some more detail. Re-platforming is often
the first step to the cloud. For an enterprise it typically is.
Just a few weeks ago I was talking to someone
and the question they asked me was, “what’s the best way to lift
and shift to the cloud?” And that became
a super-interesting discussion about what ‘lift
and shift’ means. You could argue that
just taking an application and moving it to the cloud
as is a terrible idea. And some people would say
there’s merit to that. But that’s also really complicated. You have to figure out
all their dependencies, you have to figure out
what application depends on what database
somewhere, what processes are in place.
So the advice that I give, and what we found works
really well for customers is, if they slow down,
they think about the things that are really important for them
to operate in the cloud. Things like making sure
that their identity and access management
and identity systems are well-aligned with
how you do IAM on AWS. Do they have
the right Control Tower,
guardrails in place, so that when the things
start moving to the cloud they are using tools
and talking a vocabulary that they are already
familiar with. Once you set them up,
you can start lifting and shifting. You could take advantage of a few
things that the cloud provides. You have a variety of EC2
instances to choose from, so you could
potentially save costs, because you haven’t had
a chance to update hardware in your own data center
for a while, or you’ve been running
on a bigger piece of hardware because that’s what you had.
You could get better security because you have fine
grained permission control, and you can put guardrails on
your networks for example with VPC. You do have more flexibility.
You could choose to say, “hey, this application’s
running on MySQL in my data center. I can run that application
on a managed RDS MySQL on AWS, it's still MySQL.”
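As a minimal, hypothetical sketch of that kind of like-for-like move (all hostnames and names below are invented, not from the talk): the application keeps speaking plain MySQL, and only the endpoint in its connection string changes when the database becomes RDS-managed.

```python
# Hypothetical sketch of a lift-and-shift database move: the app still
# speaks plain MySQL, so only the host in the connection string changes
# when the self-managed database is replaced by RDS MySQL.
# All hostnames and names below are invented examples.

def mysql_dsn(user: str, host: str, db: str, port: int = 3306) -> str:
    """Build a MySQL connection string; only `host` differs before/after."""
    return f"mysql://{user}@{host}:{port}/{db}"

# Before: self-managed MySQL in your own data center.
onprem = mysql_dsn("app", "db01.corp.internal", "orders")

# After: the same schema and driver, pointed at a managed RDS endpoint.
rds = mysql_dsn("app", "orders.abc123.us-east-1.rds.amazonaws.com", "orders")

print(onprem)
print(rds)
```

Everything else, drivers, SQL, schema, is untouched, which is what makes this step of re-platforming comparatively low-risk.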
You are not changing anything, but you’re no longer
managing a database, which I think everybody will
understand is a great thing to do. So a lot of the effort of platform
teams in your own data centers, where they spend a ton of time
keeping things up and provisioning systems, a lot of that goes away
the moment you come to the cloud, because provisioning and maintenance
are fundamentally different things, and you have access
to a lot more managed services. What we found
is that containerization has become a critical part
of this lift and shift process. Once you know your dependencies,
you can put things in a container, once they have
all these guardrails and tooling in place
that I talked about, and there’s a small amount
of investment upfront, you can choose not to do it,
but I would suggest you do, you can move your application
to the cloud and that’s where you go
and that’s where you start from. It works, it has worked
for a lot of customers, and it’s also helped us evolve
how we think about our containers. And I’ll talk about
that a little bit later, on some of the tools
that we have that can help you move your existing
apps into the cloud. The second strategy,
my favorite one, is refactoring. Actually, I’ll step back
and talk about my team. Every team at Amazon and AWS
that I have been involved with, including mine, start off by
building applications and services, not as one big monolith. But at some point of time,
even your small service becomes big, and you need to break it apart. For us, the reason we often
break things apart, one is ownership. If you have a small team,
we call them two pizza teams, that are responsible for an API,
or a service, you can continue
to operate that service, have a lot of ownership over it,
make sure it’s amazing, and make sure it has really good
contracts with every other service. It helps the organization
move faster, it helps you stay out of the way,
it helps you deploy more quickly, and perhaps, most importantly,
when you break services up, it’s also improving
the resiliency of the application. The term we use is blast radius,
and you've probably read all the case studies
from AWS and from Netflix, where you want to say,
in a monolithic application or even in an application
with lots of dependencies, when one thing goes,
everything goes. When you break things up
into smaller and smaller pieces, you have the advantage that
the whole application keeps running, even if one component is not. You’ve probably seen this
on the Amazon.com webpage on the Netflix app. But there are many reasons
why you want to refactor. You may have an existing application,
and you can pull out some services to take advantage
of cloud technologies. There may be some piece of it that you find you can easily move
to AWS Fargate or Lambda. Why not?
If you can, you should, and it helps you start
getting the leg-up on moving faster, even with existing applications.
As you move more to the cloud, you could take other pieces
and start breaking them up. Something could start running
on a database like Aurora, or you find they are essentially using a database
like a key value store, and you start running that part
of your application on DynamoDB. This breaking up of things,
whether it’s intentional, from applications
that have been running in the cloud for a while on VMs, and are built in
a certain architecture, and they are growing into smaller,
more discrete components, is great. Or if you have
an existing application running in your own data center,
you are starting to take pieces off it
and running them
as independent services. Either way, refactoring gets
a ton of benefit from the cloud, because you can start taking
advantage of the native capabilities that a cloud provides,
managed services, multiple stacks of databases, more and more abstractions
that help you focus on your business, not on the things
running underneath. And this is where we find that a lot
of our customers choose serverless. I talked about Vanguard
and their modernization strategy. This is a great example
of where serverless can really help, in their case with Fargate. Our third one
is the most obvious one. In the early days of the cloud, most of the use cases
that you saw on AWS was building new applications.
Here is the fun part. The application you build today, you will be refactoring
a few years from now, because the cloud is going to give
you that much more new capabilities, some new thing beyond
containers and serverless is probably going to come by
in five or six years. Lambda didn’t exist seven years ago,
but now it does. Fargate didn’t exist
five years ago, now it does. But when you don’t have
anything to undo you are not encumbered by any constraints
or previous legacy, and you can take advantage of
everything that the cloud gives you. If you’re a start-up, you don’t have
any organizational constraints. You get to pick the way
you want to build your org based on how you want
to structure your applications, and it’s almost reverse
Conway’s Law in some ways. What a lot of our customers
have found, especially larger ones, because they
do have organizational legacy, is that they build
a shared services platform to make it easier for
their operators and developers. Some of them will pick
a bespoke one. Those usually don’t work,
because they run into barriers, whether they be cost barriers
or flexibility barriers. They don't work across a company
and all the business units it has. But the idea of a shared
services platform is it gives you standardization.
There is a small team of experts thinking through how to bring
the cloud and enable innovation. And then you have
the application developers who focus
on the application, what you need to do to make
the application successful. So, again, you have all
the guardrails in place. They’re very important,
you can’t forget them. You can’t have an application
being built and then six months down the line,
or a year or a year and a half, when you’re ready to go
into production, find out that, “oh, no, this doesn’t work
with our security posture “, or “it doesn’t work with our compliance
and governance needs”. That’s not something
you want to do. But the net result is,
everyone is more productive. Your operational
and infrastructure production engineering teams are happier, because they have the right
level of controls in place, they get to build and innovate, and on the flipside
your business application teams are building
amazing applications, and they are not being stopped
by the fact that they have now got to think about infrastructure
or how its deployment should be done. Somebody solved that for them.
They are just deploying. And a great example of that
is a company you all know as well. And that’s Snap. For the few of you
who don’t know what Snap is, Snap is a company that runs
Snapchat and other properties. One of the things they have done
as they have continued to grow is they are building
new products. And for those products
they launch them as new services, they are using EKS
and ECR for them, they are taking
their monolithic applications, so there’s a bunch
of refactoring here as well, and they are doing over 2 million
transactions per second. And this was some time ago, so
the number is probably even higher now. And, at the same time, their development
effort went down by 77%, because of how they built
the applications using EKS and ECR. And you’ve now seen
different types of case studies across the whole gamut
of company types. They all have some variation
of all the strategies that we talked about today.
And they are, by and large, today using containers
and serverless technologies to build their applications. I see very few applications in this day
and age, it's not universally true, but, by and large, when you're
building an application today, definitely in the refactoring
and new application case, but increasingly so
in the re-platforming case, that are not being
built using containers, or using something
like Lambda or Fargate. So, since I mentioned
a bunch of products, let’s look into the core technologies
that you can use to build. One of the nice things at AWS is that we start with
the foundational building blocks. They are driven by the ‘why’. We just talked about
all the reasons customers care and want to adopt these strategies,
the business needs. We are going to talk
about the ‘how’ now. In the early days of AWS,
if you go back and look at any talk from Werner Vogels
from 12 or 13 years ago, you will see that one of our calling cards was that
we like taking care of the muck, or stated in slightly better words,
undifferentiated heavy lifting. These are things that don’t
bring you any business value. Managing servers,
managing storage, managing networks
and operating systems is time-consuming, it’s expensive,
and in 99.9% of cases, or somewhere thereabouts,
they don’t help you move quickly. They don’t solve
your business problems, and it’s no surprise
that the earliest AWS APIs were APIs
on top of these pain-points. The beauty of EC2, and it was EC2
that really drove it home for a lot of people, including me,
I was not at AWS at the time, but I remember the day
EC2 got launched, and I remember turning to my wife
and saying, “you know what,
I can now get a Linux machine, and all I need to do
is call RunInstances, and with one CLI call
I suddenly had a server and all I had to do to get
that was swipe my credit card. And that was pretty cool.
It was exhilarating. Today, it doesn’t seem
that big a deal, because we are all used to it.
But in 2006 it was a big deal. It was a big deal in 2010
as well by the way. And in some places,
and in some companies, it’s probably a big deal today. But within a few days
of this happening, we had spun up a 20-node cluster, we had some data
and some code lying around, and we were processing protein
secondary structures. That is pretty amazing,
and that was that time. As infrastructures evolved,
as the business problems have evolved, as the technology problems
have evolved, and the customers have
run more and more workloads and different types
of applications on AWS, the definition of muck, the definition of undifferentiated
heavy lifting, also evolved. The layers of abstraction
keep getting built. It’s not just about provisioning
storage or provisioning servers. With VPC you start
provisioning networks. With RDS you start
provisioning databases. With things like DynamoDB, you literally have a database
that was just an API, and that, being able to take care of
the undifferentiated heavy lifting, for things that drive
little to no value for customers is super-important, and everywhere we find it,
we try to eliminate it. And containers were
a big part of it. The reason containers
became popular were one of the most
common problems people had, and quite honestly,
this was a problem people were trying to solve
before containers, is how do you package something,
and how do you deliver it? Can you do it
in a way that’s immutable, because that has become
a very popular way of deploying applications. Netflix very famously was baking
AMIs, Amazon Machine Images, putting their software on them; every time there was a change
they would bake a new AMI, and that would go through
their deployment tooling system. Other people
were doing the same. And containers came along
and gave people a much more convenient way
of doing exactly that. And what Docker
brought to the table was a really nice user experience
on building, shipping, and running a container,
because the runtime and registry and the build tooling were all part
of the container ecosystem. Over time, as people
got more sophisticated, you had to add
container orchestration. Whether this was ECS,
whether it was Kubernetes, whether it was Mesos, which
adopted containers quite well, or even the tools
that Docker generated. You suddenly were using this unit
that you could run on your laptop, manage all your dependencies, and you were building
a lot on top of it. And it made it much easier for people
to run their infrastructure, to run their applications
and share infrastructure. It enabled the shared
services platforms that I was talking about. And it also allowed
legacy applications to get containerized,
because you could do it. There’s a bunch
of container diehards that say you should have one process
for a container and that’s it. But it turns out reality
is not that. What people started doing was
figuring out what the application was and all its dependencies and using
containers to package them up, the code, the run-time,
the libraries. And it made it much easier
for them to start moving those applications
across the cloud. And while there is no such thing
as perfect portability, the container itself was the closest
we ever got to having a run-time that can be run on a laptop
and somewhere else, and you can get
a consistent development lifecycle. Reality is more complicated
than that, but, as a starting point,
that’s why it got so popular. At AWS we have a fairly vast
portfolio of container services. I’ll start with the container
management tooling, which is Amazon ECS
and Amazon EKS. ECS is our AWS-built
container orchestration service, and I’ll talk about that
a little bit in a few slides, and EKS is a managed
Kubernetes service, because all of you love Kubernetes, but the part that most people
don't like is managing Kubernetes, the control plane,
the etcd, all of that. You need to run
these containers somewhere, and on AWS we have two ways
you can run your containers. You can run them on EC2 instances, or you can run them in
a serverless container on AWS Fargate, which,
and you heard me say this before, is the way to run containers,
because running containers and BMs and orchestrating them,
it’s almost regressive, because you are suddenly introduced to a whole lot of leaky obstructions
that, in many ways, when EC2 came, you are taken away
from infrastructure. You need a high-quality container You need a high quality container
registry, you can run your own. You can really run any,
but most of our customers pick ECR because they like the scale,
they like the availability, they like the fact
that it just works. And at the end of it,
once you have broken all the applications
up into little pieces, and all of you are talking
to each other, you need to connect
those applications and services, not just using low-level networking, but high-level application
networking. That’s where something like AWS
App Mesh comes in. It probably means that you need
some service discovery in there, which is what you use
AWS cloud map for. There are other tools.
We have Lightsail containers, we have Beanstalk, CodeBuild,
that fit into this ecosystem, but this is basically what
I’m going to talk about today. So we talked a lot
about re-platforming. How do customers do that? Well, they are toughing
it out and doing it the hard way. And to make life
a lot easier for them, we built something called
App2Container, or A2C. What App2Container does
is it makes getting started with containers a lot easier. It looks at your Java and .NET
applications, figures out
what the dependencies are, and actually builds
a container for you. And once you have the container and you also create
some of your manifest, etc., you can then start running them,
in your own data center, or in AWS. You can move very easily
to something like ECS or EKS once you've started with App2Container. App2Container is relatively new,
but you have a good idea, it gives a good insight
into where we are going with it, and it's explicitly aimed,
today, at that re-platforming case, where you are taking
existing applications, you want to containerize them
as a step into moving
your applications to the cloud, and over time you might
start refactoring them and evolving them
into more modern applications. But this is a place to start. Many of you like OpenShift. One of the things
that we hear a lot from customers is that they like the fact
that, with OpenShift, they have an existing
relationship with Red Hat. They like the high level
abstractions that OpenShift brings to something
like to Kubernetes, and perhaps,
most importantly these days, is once they’ve invested
in something like Red Hat Enterprise Linux, once they’ve invested in other
Red Hat software, or IBM software, OpenShift is a great way for them
to run those applications any way they want to, taking advantage of all
the modern practices that OpenShift has underneath it
by default. So, instead of saying,
“hey, yeah, AWS supports OpenShift, Red Hat has OpenShift
Dedicated and OpenShift Online,” we decided to collaborate
with Red Hat and build a jointly operated
service called Red Hat OpenShift Service on AWS, which we announced
the preview for at KubeCon just a couple of weeks ago.
And the whole idea of OpenShift Service on AWS
is it feels to you like OpenShift as an AWS service.
You come to the AWS console, you start using it,
you get billed through AWS, you can call up AWS for support if there are
more complicated things. If you are in an existing
relationship with Red Hat, you can go straight to Red Hat.
But the idea is that this is going to, over time,
evolve to integrate even more deeply
with the rest of AWS. So if there are things you like about AWS
and you wish you could use them more natively from OpenShift,
you’d be able to do that as well. So I am super-excited about
how our customers use this, and we already have
some enterprises that are champing
at the bit to adopt OpenShift on AWS. The other great part of the cloud is all the compute
options that you get. I used to be on EC2
for many, many years, and I used to specifically
work on EC2 instances, so if there are a million instance types, I take some of the blame or credit,
depending on your point of view. But the great news is
the plethora of EC2 instances that you get: you now have Intel-based instances,
AMD-based instances and, most importantly, Graviton,
our ARM-based instances. And if you’re a company
that’s thinking ahead, you have seen the performance
and capabilities that Graviton gives you. Containers actually
make it a lot easier. You have multi-architecture images that you can store
on a registry like ECR, and you can go,
if you have code that runs on ARM. Or you could choose
to run on Fargate, run it natively, without having
to think about instances, without having to think
about servers, without having to think
about clusters, you get the isolation model, and in fact, our customers
like Fargate so much that today, more than half of all new customers
that choose to run containers on AWS pick Fargate.
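To make that model concrete, here is a minimal sketch, in CloudFormation-style YAML, of what running containers without managing servers looks like: a task definition plus a service, and the capacity comes from Fargate. The family name, image URI, cluster name and subnet IDs are placeholders, and a real template would also need an execution role.

```yaml
# Sketch only: family, image, cluster and subnet IDs are placeholders.
WebTaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    Family: web-app
    RequiresCompatibilities: [FARGATE]    # no EC2 instances to manage
    NetworkMode: awsvpc                   # required for Fargate tasks
    Cpu: "256"
    Memory: "512"
    ContainerDefinitions:
      - Name: web
        Image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/web-app:latest
        PortMappings:
          - ContainerPort: 80

WebService:
  Type: AWS::ECS::Service
  Properties:
    Cluster: my-cluster
    LaunchType: FARGATE                   # capacity comes from Fargate
    DesiredCount: 2
    TaskDefinition: !Ref WebTaskDefinition
    NetworkConfiguration:
      AwsvpcConfiguration:
        Subnets: [subnet-aaaa1111, subnet-bbbb2222]
```

Notice what is absent: no instance types, no AMIs, no cluster sizing.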
And that number is accelerating. About a year ago,
that number was about 40%, and now it’s over 50%,
and it's growing. Most of them come through ECS, but many of them
come from EKS as well. But the key message is,
it’s pretty clear, and that’s what we hear
from our customers, that they really, really like
the model that Fargate gives them, which is: “I have a set of containers, I want to run them,
I want to run services, and I do not want to think
about cluster management. I do not want to think about what
instances to pick, or which applications I want to run
on which cluster, and so on.” And it shows in the growth. A great example of a customer
that’s adopted Fargate is Samsung. Historically, they had many
administrators and operators that were dedicated to managing
the web services for the Samsung developer portal. What they chose to do
was move to Fargate, and so they significantly reduced
their administrative needs. They saved cost at the same time, because while an individual
Fargate SKU, if you want to call it that,
is a little bit more expensive than the equivalent EC2 instance,
right-sizing is much easier, and, by the way, for services
that scale up and down it’s much easier staying within the Fargate construct,
and you end up saving costs. And, at the same time, your developers
are that much more efficient, so you combine all of that, it’s a win-win
for most people running on Fargate. They were able to migrate super-easily,
and their operational efficiency improved as well, so they got wins
in every area that they cared about, and that’s the foundation of
the Samsung developer portal today, and they are just one
of many examples of customers that have adopted Fargate
and got wins on cost, on operational efficiency,
on developer efficiency, and that also is driving a lot
of how we think about what serverless containers mean,
and serverless compute in general. So, what are the architecture
and infrastructure choices people have to make?
On AWS there are many, many options. It ends up depending on the kinds
of decisions you want to take. You have options
of many storage resources. You could choose EFS, and, in fact,
for containers and serverless now that Lambda and Fargate
both support EFS,
we see a ton of adoption of EFS,
because you want a shared file system to store data
between all the containerized tasks that come in and out,
because the lifecycle of those tasks can be very short,
but the application is long-lived. For people who care about
other kinds of characteristics they may choose EBS,
they may choose FSx, we see all of that.
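As a sketch of that shared file system pattern, an ECS task definition can mount EFS directly, so short-lived tasks share a long-lived file system. The file system ID, image and paths below are made up for illustration.

```yaml
# Sketch only: the file system ID, image and paths are made up.
SharedDataTask:
  Type: AWS::ECS::TaskDefinition
  Properties:
    Family: shared-data-app
    RequiresCompatibilities: [FARGATE]
    NetworkMode: awsvpc
    Cpu: "256"
    Memory: "512"
    Volumes:
      - Name: shared
        EFSVolumeConfiguration:
          FilesystemId: fs-12345678       # the long-lived shared file system
    ContainerDefinitions:
      - Name: app
        Image: my-app:latest
        MountPoints:
          - SourceVolume: shared
            ContainerPath: /mnt/shared    # data outlives any individual task
```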
I have mentioned Lambda often. One of the most fun parts is that
people building modern applications tend to pick Lambda and containers. Even AWS services that we build are
a great example of DAS AWS match, which service in our organization. It’s frontend, it’s all serverless,
API Lambda it’s backend ECS, and that’s not
an uncommon architecture. You have to think through how you
want to secure your applications. You will be using things
like IAM, CloudTrail,
with things like CloudWatch, and you want to start thinking about
how you want to do your networking. You could start off with
simple load balance services, but over time you probably want
something like a service mesh to manage communication
between all these services, to manage security between them, and it’s requirements like that
that end up driving our roadmap. So, for example, today I’d like
to announce that in Q1 of next year, and you can go to the App
Mesh roadmap and see where we are with this,
we are going to add mTLS security to all our mesh-based
communications. It’s been a major
ask from our customers. You start using a service mesh
because you want to manage traffic. You continue using service mesh
and you will get excited about it because you can also use it
to secure your traffic. And those are the kinds of
capabilities that we’re driving, and in this particular case,
this is coming soon in Q1. You can already run it
in preview. I published a blog post
about a month ago on how you should think
about a container orchestrator. One of the most
common questions we get is, “should I pick ECS,
or should I pick EKS?” And the fun part is, and I guess this is where
it’s been easy for us in some ways, is that we’ve learnt
that from our customers. They have told us why they make
the choices that they make. They like the powerful
simplicity of ECS. And what I mean
by powerful simplicity is it’s just a control plane that lets you run containers
on EC2 instances or Fargate. You get to use the rest of AWS, you don’t have to think
about too many options, because for load
balancing you are using ALB. For networking you are using VPC. For logging, well, there you have
some choices, but we made it simpler. But your defaults are going to be
CloudWatch, and so on and so forth. But there are many others
who want all the flexibility and capabilities
that Kubernetes gives them. They want to be running it on
premises, in their own data centers. They want to bring the same
practices onto the cloud, or they start in the cloud
and they want to take some of their own practices
back to the data center. Or they want to integrate
with a wide ISV ecosystem. Those customers pick Kubernetes,
and on AWS they’ll pick EKS. So they like
the open flexibility of EKS, they love the powerful
simplicity of ECS, and what it means is,
those customers have helped us evolve our roadmaps
in different directions based on the reasons
and choices that they are making. And I think and hope that over time it will make that decision
even easier for you, because you’ll have a set
of capabilities or requirements, and you’ll pick the tool
that fits those requirements best. And the best part is that
many organizations are on both. So, now comes the fun part. Over the last year, the definition
of what AWS Cloud is has evolved. So far, every time I said cloud
I mean a big AWS region like US East 1, or US West 2,
or Frankfurt to Tokyo regions. These are regions
with multiple availability zones that are multiple data centers, and you can run these
highly available applications with all AWS services
available to you. Over the last year, year and a half,
couple of years, we have launched Outposts, which allows you
to take AWS infrastructure and run it in your data center.
You can choose AWS Wavelength, which is taking
that infrastructure and running it
in a Telco edge, or AWS Local Zones which are sort of
a localized version of an AWS Region, like we have one
in Los Angeles if you want to be close
to other media companies. So, effectively, now we have AWS Regions, Local Zones, Wavelength
and Outposts, but guess what? Our customers still run applications
on their own infrastructure. Very often, they may run it
next to an outpost, but they have a lot of applications
that are running, they have infrastructure
floating around, they can’t just throw it
into the drink, or they may be in a location where just the rest of them
are not options. So what do you do there? I told you that
our customers love ECS because of
its powerful simplicity. And the way ECS runs across of
all of these is you still have
the same control plane, you have ECS Control Plane,
you have the same APIs, and you just point it
to different types of capacity. That capacity could come from
an EC2 instance running in AWS Region, it could come from an EC2
instance running in an Outpost or Wavelength, or Local Zone.
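One way this choice of capacity shows up in practice is a capacity provider strategy on a service, sketched here in CloudFormation-style YAML with placeholder names, mixing on-demand Fargate and Fargate Spot behind the same service definition:

```yaml
# Sketch only: cluster name and counts are made up.
BackgroundService:
  Type: AWS::ECS::Service
  Properties:
    Cluster: my-cluster
    DesiredCount: 10
    TaskDefinition: !Ref MyTaskDefinition  # assumed to be defined elsewhere
    CapacityProviderStrategy:              # replaces a fixed LaunchType
      - CapacityProvider: FARGATE
        Base: 2                            # always keep two tasks on-demand
        Weight: 1
      - CapacityProvider: FARGATE_SPOT     # the rest can run on Spot capacity
        Weight: 3
```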
Or it could be AWS Fargate. You could choose to run on Spot,
you could choose to run on demand. That’s up to you.
It isn’t that hard to imagine a world where the definition of capacity
could be any capacity. You probably heard Andy announce it
but I am going to say it anyway. I am super-happy to announce
Amazon ECS Anywhere, and what it means
is you bring capacity to AWS, to ECS, and ECS will run containers
on that capacity, and the best example of that
is this demo, which if you haven’t seen it already you will, from Massimo
Re Ferre, running ECS tasks on a Raspberry Pi
in his apartment, or something like that. The idea is, we don’t care, you bring
some sort of compute capacity to ECS, all you have to do
is run the ECS agent, and you’re off to the races. Now, you remember, this is running
from the ECS Control Plane, running in the AWS Cloud, so it gives you this wonderful
distributed architecture that you can use. And because of what we have heard
from our customers about how they want to use
this tooling and these APIs, moving from one capacity type
to another is simply a deployment. You could start off
in your own data center, switch to an outpost,
and switch to an AWS Region and all of it would just be
a deployment option. And it’s as easy as that.
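As a sketch of what that deployment-only switch looks like, ECS Anywhere exposes your own machines through an EXTERNAL launch type. The names below are placeholders, and since the feature is still coming, the exact shape may differ from this illustration:

```yaml
# Sketch only: names are placeholders, and details may differ
# from the shipped feature.
EdgeTask:
  Type: AWS::ECS::TaskDefinition
  Properties:
    Family: edge-worker
    RequiresCompatibilities: [EXTERNAL]   # your own machines, not EC2 or Fargate
    NetworkMode: bridge                   # awsvpc networking stays in the cloud
    ContainerDefinitions:
      - Name: worker
        Image: my-worker:latest
        Memory: 256

EdgeService:
  Type: AWS::ECS::Service
  Properties:
    Cluster: my-cluster
    LaunchType: EXTERNAL                  # same control plane, your capacity
    DesiredCount: 1
    TaskDefinition: !Ref EdgeTask
```

Switching this workload to Fargate or EC2 capacity is then just a change of launch type and a new deployment.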
This is just the start. We have a lot of plans for ECS
Anywhere. I am super-excited
how all of you are going to use it, and what kind of feedback
you are going to give us. But in the end,
what you have now is taking ECS and just being able to run it on
whatever capacity you want to bring it,
regardless of where it is. What about Kubernetes? I think we spoke about the fact
that many people pick Kubernetes because they want to run it
in their own data centers. So how do you run EKS
outside of our Regions? Our customers have workloads
and application portfolios that span AWS and on-prem. Well, for them, the first thing
we decided to do was, and it’s available today, you can download it and run it,
is Amazon EKS Distro. This is the same distribution
of Kubernetes that underlies Amazon EKS
that we are running in AWS today. It’s upstream Kubernetes, with the add-ons, defaults
and configurations that we use to run EKS. We have an opinion on how
Kubernetes should be operated, and you get the same thing
when you take EKS Distro: you can download it from GitHub and run
it inside your own data centers. But we want to take
that one step further. You can all get the Distro;
what that gives you is the same bits. But we also have strong opinions
on how Kubernetes gets operated and the tooling
that should accompany it. One of the things that I didn’t
point out earlier was ECS Anywhere is going to be
available in Q1, and we are announcing it today. You can see the demo and very soon
we will open up previews, but availability will be
in the first half of 2021. I’ve almost forgotten which year
we are in these days. The same is true for EKS Anywhere. In the first half of 2021,
you will start seeing capabilities that we’ll provide to you. They help you operate these clusters
wherever they are running, and then you can get
a single pane of glass, a single view,
into all of these clusters. So you are starting with
the same core bits, and then you can add to it with all
the operational tooling that EKS Anywhere gives you, regardless of
where your applications are running. You can connect to AWS for updates,
you can run your Kubernetes apps consistently with EKS Anywhere. You don’t actually have
to be connected to AWS to run your applications. So there are a lot of interesting
workloads that you can run with this new capability that
will be available to you next year. Here’s the fun part. The same reasons and advantages
that lead people to pick ECS or EKS, the same benefits
that I talked about earlier, the powerful simplicity
and the open flexibility, all apply here, and I am super-excited again
to see where customers take this and what kind of capabilities we end
up building over the next few years, because I am sure we’ll learn a lot. The EKS Anywhere story is actually
not complete without talking about
the next two features, one of which is EKS Add-Ons. These are packages that are useful
in running Kubernetes, that we use on AWS, and that we will patch
and maintain for you. So whenever you spin up an EKS
cluster on AWS, for example, you can pick the add-ons
that you want to add to it. We are starting off
with just one or two, but over time there will be more.
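As an illustration, assuming the AWS::EKS::Addon CloudFormation resource that accompanies this feature, enabling the VPC CNI add-on on a cluster looks roughly like this (the cluster name is a placeholder):

```yaml
# Sketch only: assumes the AWS::EKS::Addon CloudFormation resource;
# cluster name is a placeholder.
VpcCniAddon:
  Type: AWS::EKS::Addon
  Properties:
    ClusterName: my-cluster
    AddonName: vpc-cni                    # the Amazon VPC CNI networking plugin
    ResolveConflicts: OVERWRITE           # let EKS manage the component's config
```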
These are Kubernetes components. And then you know that you’re getting
one that’s been qualified, that’s been patched, that runs with
your Kubernetes clusters on EKS. So that’s a pretty important one.
And the last is the EKS Dashboard, which is a single pane of glass
that I was talking about. You can use it just to monitor
your clusters on AWS, but you can also
use it to monitor clusters running anywhere you might
be running a Kubernetes cluster. So again, this is going
to evolve over time, I am super-excited to see
how people use these capabilities and what kind of add-ons you want. So you should tell us which are
the add-ons that you care about once you see the initial list
and the ones that we plan to build. In general, the vision
is a simple one, and it’s well reflected
by the story from Capital One. Capital One was very early
in adopting containers, and the cloud in general, sticking with our theme
of financial services companies. They have done a lot,
and we have learnt a lot from them. They want to be able
to run their apps with the same consistent
controls everywhere, and they want to give their teams
the same benefits, whether they are running on AWS
or on premises. And to do that they built
this amazing platform, and they used different types
of container services as required for their applications. I will encourage you
to check out their case study, but the good thing is, their goal is
to enable their DevOps teams to give them
the right building blocks so that those teams can enable
the rest of their company, the application developers. And that kind of mindset
is what we see from a lot of our customers,
the really successful ones. So this is a great segue
to how do you, once you’ve got all these
building blocks and once you’ve got all these
technologies, how do you then help
your organization or your company adopt them at scale. People have built all kinds
of internal tooling to do this; their end goal is they want
to create a developer platform. At Amazon we have a team
called Builder Tools, which has a very simple goal. Their goal is to allow developers
at Amazon to build and deploy without worrying about
the underlying infrastructure. The best practices
are codified in the tools built by the Builder Tools team. Many of our customers,
Capital One is a great example, have similar organizations that
build these shared services platforms that allow their developers
to run quickly. They built the right abstractions.
The challenge is, it’s hard. And what we found was our customers
were doing the same thing again and again, either succeeding, but in many cases
having a really hard time. And they came to us for help. The good news is we’ve seen
what customers have done with ECS and EKS, with Fargate and ECR
over the last few years. We understand,
and they told us what they needed, because they were very clear
on what their requirements were. At the same time they wanted
to make sure that they had the right guardrails and organizational
governance in place. So, once again, you can call them
SSPs, that’s the term that we use. But what are the shared services
platforms that I talked about? It’s basically something
that your developers can come to. It’s self-service. You know that if they’re
choosing something from there, it’s going to run in a way that
the organization will approve, you can bring new technologies there
without having all your developers becoming experts in the technology. If you have new ways of deployment
you can put them in there without developers
actually having to figure that out necessarily on their own. And so it allows these organizations
that have an operational team or a platform engineering team
and development team, or just two sets of developers
wearing different personas to actually move quickly, and we tend to see it mostly
in larger companies. And to support that, I am very happy
to announce in preview AWS Proton. What is AWS Proton? We think it’s the first
fully managed deployment service for container
and serverless applications. We are not trying to solve
every problem in the world, we are trying to solve
the problem of people who want to modernize
their applications, containers and serverless. It gives developers a super-easy way
to deploy their code that enforces architecture
and security standards. So they can move quickly. They are just coming
to the self-services portal, picking up the approved application
patterns and just running. And your DevOps team,
your platform engineering teams, your production engineering teams, whatever you call them
at your company, they are the ones who are building
these application patterns or stacks that define
everything that’s needed. Whether it be the control stack
you need to put in place, what kind of code repositories
you are allowed to use, what does the architecture
look like, what code pipeline software
are you allowed to use? How do you do observability?
Proton ties everything together, it coordinates
all the elements to it, so any time there’s a change in the stack
because you made a component change, it rebuilds everything,
your infrastructure, your deployment pipelines,
your monitoring, and your developers
don’t have to worry about it. And as an operator,
you have visibility into everything that’s running,
who’s running what, everywhere. So the developer, all they’re doing
is pointing to GitHub and picking the pattern
that their application needs, and then they can run. We think Proton is going to change
the way many of our customers build these shared
services platforms. We think it’s going to have
a big impact on their agility, and it’s in preview today,
you have a small set of integrations and a small set of tools
that it supports right now. By the time we go GA next year,
that will be much richer. You should be able to pick
how you want to define
your application stacks as infrastructure as code;
Proton is strictly infrastructure-as-code based. It is strictly immutable deployments,
at least for now. You can pick which pipeline;
today it’s CodePipeline, but in the future we’ll support
other ways to deploy software. And you can pick whether you want
to use Lambda, ECS or EKS, and then you can choose
from one of many partners who help you run
their monitoring tools. You can choose CloudWatch, you’ll be able to pick
from others as well. And I think that pattern is
what makes Proton super-exciting. But talking of observability,
there are other tools that we have built
over the last few years to help organizations simplify
how they get this visibility into their applications.
The first thing we built was FireLens which was logging through
Fluent Bit on ECS. We now have it
for EKS as well. It just makes log shipping
that much easier, and you can send it
wherever you want to. You can send it to S3,
you can send it to CloudWatch, you can send it to one of the many
partners that we have, whether it be Sysdig,
Datadog or Sumo Logic. We also have invested
heavily in OpenTelemetry, and now have managed services
for Prometheus and Grafana. You can leverage
basically the entire toolkit that you have to run
your applications to get the right level
of visibility into them, and this integrates really well
into the rest of our services. So whether using ECS
or EKS and now Proton, you can take advantage
of the tools that we provide and integrate
into observability tools that you like using, because people can get
pretty religious about their observability. The last part is putting it
all together. I like CLIs,
some people like consoles, but in the end,
the goal is the same. Let’s abstract away
the developer pain. Building applications is hard, you want people to think
about it less and less, and you want them to focus
on creation, on innovating, and be more productive and agile. For them, the starting point is many
ways of sharing container software. We have Code Artifact, which is an artifact depository
for all code types, and then we have ECR which is
your private container registry, there are billions of image
pulls happening on ECR today. And now, we pre-announced
this about a month ago, we are announcing
ECR Public Registry. So anyone can publish
images to ECR. There is a gallery where you can go
and search for these images, and you don’t even
have to be an AWS customer. You can pull those images. If you are an AWS customer,
you have additional benefits. You can use them with ECR, you can use them with other AWS
services in the future, with AWS Marketplace to monetize
your registries. But the idea is that
you have a really, really easy way of publishing
images and consuming them. We also give you ways
to model your applications, where they are using AWS CDK which is an imperative way
to define your app, or using the serverless
application model which his built in a model
very similar to CDK. Or if you’re building
front-end sites with AWS Amplify. In the end, our goal
is to give developers the tools that they need to define
and model their apps. There is no ‘one size fits all’.
Developers are different. We want to make sure
that they have the options available to them to move
as quickly as they can. In fact, there is a specific
version of CDK for Kubernetes that’s now a sandbox project
in the CNCF. So that model is very popular
with our customers. And on top of that, we end up
building domain-specific languages, or interfaces, to help people, developers in particular,
use all of this. And the one that I like
talking about is Copilot. Copilot is a CLI.
It was originally written for ECS but it has evolved
quite a bit from there. The whole idea is that
you can initialize a project, define it, define
your deployment system, and you can run things,
all using a command line. So init, deploy,
release your software. And I would argue that
if you are on ECS today and you like CLIs,
you should be using Copilot. It’s by far the best
and most powerful way to do it. But some of our customers,
many in fact, like using Docker. So we started talking to Docker
about what we could do for our mutual customer-base, who love using Docker Desktop
or Docker Compose or Docker CLI. And we ended up
partnering with Docker to make it super-easy to start
with Docker Desktop, use Docker Compose and are
running services in ECS and Fargate. And we have all kinds of plans
and ideas together on how we can take this forward.
But we’ve already seen the excitement that our customers get from
using the tools that they love, like Docker with AWS
Services. So, after the last 45-50 minutes, we’ve talked about patterns that
our customers are using to modernize. We are talking about
all the tools and building blocks
that are available to them. We have talked about new tools
and new services that help our
customers modernize even more. They will help you adopt the cloud,
adopt containers, adopt serverless, at speeds
that you never could do before. What if you are just starting? We are expanding ECS and EKS
to any infrastructure. We are starting to think about
what are the right abstractions for our customers
that they can adopt and come into that fits the needs
of their organization. If you want to run VMs
and orchestrate containers on them, you can do so. In fact, now you can do so
on your infrastructure. But I think our goal
should be further abstractions. I think the muck
that we have to deal with, the undifferentiated heavy lifting
that we have to deal with, has only gone up the stack. What used to be yesterday’s muck
has been long forgotten, and we’ve added new
undifferentiated heavy lifting. Today, we are thinking
about clusters, we are thinking about networks, we are thinking about
how to organize many clusters. I think that’s regressive.
Tools and services like Fargate
help you take all of that away. And it’s only one of the ones
that we are talking about. It really excites me
that our customers tell us in no uncertain terms
what they want to accomplish. This talk today
talked a lot about patterns that they are using to modernize.
Tomorrow there may be new patterns. So we should keep talking about
what those patterns are, how we can help you move faster, and, in the end,
we want to help you lower your costs, we want you to become more agile, and we want to help you
transform your businesses. There are a number of talks that will
go into more detail than I did today. We talked at a high level
about all the exciting things that are happening
in the world of containers. We talked about companies that are
transforming their applications, transforming their businesses,
however big they are. They could be start-up
social media companies, or some of the largest
investment companies in the world. If you go through these talks and all
the other ones that you will see, hopefully you will get
some new insights into how you can build applications
at your companies, how you can transform
your organizations, and how you can modernize
the infrastructure. Let’s go build. [music playing]