Hello, and welcome
to my talk on the topic of Infrastructure as Code,
a Deep Dive. This session will cover said topic, but with a more advanced approach: we will be looking at more advanced topics, best practices, and tooling when it comes to Infrastructure as Code on AWS. Who am I, and why should you care? My name is Darko Meszaros. As you can see,
I go by different spellings. I'm a Developer Advocate for AWS. I've been working for AWS
for over 4 years now. I started over in Premium Support and have now moved over to Developer Advocacy. I'm currently based in Berlin, Germany, but I'm originally from Serbia. One fun fact about me
is that I have lived in 5 countries in the last 7 years, so fun times! Feel free to follow me
on any of the social media platforms. I do a lot of video content as well, so I hope to see you there too. But let's move on. So, this will be a 300-level, or advanced, session. What does this mean? Well, there are a few things I will not talk about, or at least will try not to talk about. I will not be doing Infrastructure as Click – just the code. I will not be explaining to you
or telling you why you should use
Infrastructure as Code; that should already be handled. I will not tell you how to do Infrastructure as Code, at least not the basics. I expect that you already know
how to do this and all of these things,
so if this is you, welcome. Additionally, I will definitely not
be comparing different frameworks of Infrastructure as Code out there – well maybe just a little bit. So what will you talk about, Darko? You mentioned a lot of things
you will not talk about. What is this thing
you will be talking about? Well, be ready
for a lot of terminals, be ready for a lot of terminal screenshots and a lot of code. Yeah, I hope you really enjoy that. So let's start off. Let's start off
with a story about a person, and this person is an engineer. This engineer is tasked
with building a globally scalable, secure, stable, whatever-bells-and-whistles system or workload on the cloud. And what we want is
to make that engineer's life better. To be fair, we all want to make our day-to-day jobs and experiences better, be that by making our work more fun, more challenging in a good way, or more engaging. So how can we make
all of these experiences better for that person? But let's step back a little. Long gone are the days of racking and stacking, where you would provision infrastructure by putting servers on metal racks. I mean, that still kind
of happens at the backend, but you don't have to worry
about it too much. I mean, I come from that kind
of a background. I used to work in a company as… my title was
Server Provisioning Administrator. So I would basically
provision operating systems on physical boxes
somewhere in Fairbanks, Alaska. So I would literally have
to load the ISOs using the lovely remote
administration consoles, Java-based
remote administration consoles, and hope it all works. It took a couple of months
to do like 500 boxes. So it wasn't the speediest of things, but you know,
things have moved on since then. And how would my life look today if I were provisioning servers,
if that would be my job? Well it would look
something like this. We switched over from a lot
of racking and stacking, do a lot of typing and commenting
and doing, you know, a lot of code. This is a lot less cables
but a lot more code, which is super useful, but it also
comes with its own caveats. So talking about this code,
what is this code? Well, this is that piece of code. In this case, it is CloudFormation code that deploys a CodeCommit repository and a Cloud9 environment. We can see that we're deploying two types of resources here on the cloud and defining some of their properties – and that's it.
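In spirit, a minimal version of such a template could look something like this – a sketch with illustrative names, not the exact template from my slide:

    AWSTemplateFormatVersion: '2010-09-09'
    Resources:
      # A Git repository, fully managed by AWS
      DemoRepo:
        Type: AWS::CodeCommit::Repository
        Properties:
          RepositoryName: my-demo-repo
      # A cloud-based IDE attached to a small instance
      DemoIDE:
        Type: AWS::Cloud9::EnvironmentEC2
        Properties:
          Name: my-demo-ide
          InstanceType: t3.small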
These lines of code replace the manual cable configuration, the OS installation of software, the clicking of a whole bunch of buttons. And this is
much more efficient, right? And I'm not here to argue whether you should do this – you absolutely should. This is the baseline; I expect you to do this. This is nothing new. But how can we make this better?
But how can we make this better? How can we make working
with this more enjoyable, more secure, more stable,
more repeatable, etcetera? Well, say hello to pipelines. The goal, when I talk about Infrastructure as Code, is to present you with one requirement: when you do Infrastructure as Code, treat it as any other application code. So introduce it to CI/CD pipelines. That means
take all the best practices from software delivery
to infrastructure delivery: taking your code into a source repository, versioning your infrastructure, running tests against it, packaging your infrastructure code, running linting and syntax tests, actually deploying it to a test environment, and finally, if all of this is good, deploying to a production environment – all in a safe, secure,
and repeatable manner. This is kind of the key
when I talk about these things. But when I talk about these things,
how does this actually look? I mean you may not come from
a software development background, so I'll just explain a bit. So the workflow
of your infrastructure delivery or infrastructure architecture and designing and building
your infrastructure through code would look something like this: You would have your architects,
engineers, DevOps engineers
if there is such a thing, basically code your infrastructure. They would write the Terraform, CloudFormation, Pulumi, CDK, SAM, whatever. They would commit that code
inside some sort of a Git repository – be that Git, maybe SVN, whatever; some code repository. This code
will be versioned there, etcetera. Then you will do
a lot of tests on that code. How are you going to test it?
It really depends. You can do it manually, you can do it
automatically through a pipeline. And then finally, you will deploy
this code to production. But what's important
within all of these things is that in each of these stages,
or at least a few of these stages, you get valuable information or emit information
on the quality of your code, the quality of your changes,
the impact your changes have had on some measurable metric in the cloud – basically something that lets you say, "Hey, looking at some KPI, my change has actually made a positive impact on my infrastructure." Why is that important? Because the most important thing
you want to do here is repeat. Every time you make a change,
every time you ship something, learn from it, understand what you did, and repeat. Keep the cycle going; keep understanding what you did and how that change has impacted your current production. And this is kind of the proper way
you should do delivery. Now speaking of delivery,
let's talk about delivery of a specific Infrastructure
as Code framework – in this case, CloudFormation. So, I know I promised I'm not going to talk about the basics, but I want to just make a good, solid intro into CloudFormation for you; maybe not everybody
is using CloudFormation. So, what is CloudFormation? It's a framework around a common JSON/YAML language: it interprets that language from files called templates and basically creates AWS infrastructure. Well, a bit more than just AWS infrastructure, but we'll get to that. So, you make these templates,
give it over to CloudFormation, CloudFormation does a whole bunch
of API calls on your behalf, and some magic
in the back-end happens, and it creates those resources
within AWS. It also keeps track of those resources,
so it understands what's the current state
of that resource. So it's easy to create,
it's easy to update, and most importantly, it's easy
to delete – because if you delete the stack, everything within it goes away with it. Now, how does that code look? You've actually seen this code on one of the first slides; it's the same code for CodeCommit and Cloud9. Code is written in these things called templates, which can be written in either YAML or JSON. I remember a time
when there was no YAML. Say goodbye to your comments,
but yeah, you can do either/or. These templates create stacks, and within those stacks,
you have resources. A resource is – well, think of an S3 bucket: that's a resource. A specific S3 bucket with specific properties which you define within that template. And those resources are plentiful: over 490 resource types are currently supported, and with the features that exist right now you can basically create anything within AWS. But I'll get to that. And also, CloudFormation
handles the dependencies for you, so it's not just
blindly executing some code, it also understands
some dependencies for you. For example, if you create a VPC
and a subnet, it will understand that it needs
to create VPC first and then the subnet,
so there will be no problem there. But also if you have
some very specific dependencies, for example "instance A needs
to wait for instance B to start up", or vice versa, CloudFormation
doesn't know that. So you can also specifically define
dependencies as you go along. But let's start
with some best practices. Where do you start
with CloudFormation? Well, you start in your text editor. It all starts from your text editor: no matter what you do, you need a text editor, and no matter which one you use, all the best practices should start there. Now, you should definitely use
a good text editor, maybe the best text editor out there. I'm not going to say which one, we don't want
to start that discussion right now, but choose a text editor
that can benefit from the plethora of plugins, tools and utilities for writing/testing
CloudFormation code. So for example, you can see here that you have a screenshot
from VS Code that shows a bunch of errors, and all of these errors are coming from a tool called "cfn-nag" – available as a plugin for VS Code – that checks for insecure infrastructure patterns within your template. Or you can use something like the AWS Toolkit, which can help you with additional AWS resources, CloudFormation included. Now, speaking of "cfn-nag",
so the tool behind that plugin, "cfn-nag" is not just a plugin,
it's also a CLI tool. So in this case, you can see it
running from the command line and basically running "cfn-nag"
standalone against my template. You can run it against one
or multiple templates and it will detect,
as I mentioned before, patterns that indicate
insecure infrastructure. So it will basically find
that "Darko, your IAM role that you created here
hasn't allowed for everything. That's bad
and I will fail you for that." Which it should.
It also shows some warnings, some security grouping stuff,
etcetera. Pretty cool. You can define
your own rules as well, so you don't have to adhere
to the currently existing rules, so you can define
your own rules as well if you want to bring them in. Pretty powerful, pretty cool. How do you get this tool? "gem install cfn_nag - - user". Do note, package name is "cfn_nag" while the actual binary is "cfn-nag".
Moving on: another lovely tool, which is also a plugin
which is also a plugin and a CLI tool is "cfn-lint" and I cannot really
stop talking about "cfn-lint." It's just an amazing tool.
So "cfn-lint" is, you guessed it, a linter
for CloudFormation templates. So it's a static testing tool, so basically it goes
through your template and validates is it okay, did it find
any potential problems there. It will also point out
certain best practices you can improve
within your CloudFormation template. But it also checks
your CloudFormation templates against the CloudFormation spec,
so it understands that something is not good here
or something can be different. It will basically fail to start
to a point. You can lint it
against multiple regions, multiple templates, etcetera. It's pretty powerful. How to get it? "pip3 install cfn-lint - - user". Pretty cool and amazing tool,
you should really check it out. And here's an example why I say you can do it
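The multi-region check is just a flag away – something like this, with an illustrative template name:

    pip3 install cfn-lint --user
    cfn-lint --template asg-template.yaml --regions eu-west-1 eu-north-1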
This is really useful. So, I'm testing a simple Auto Scaling group template here in eu-west-1 and eu-north-1 – so Dublin and Stockholm. You can see that in eu-west-1 I get a couple of warnings that I shouldn't hardcode AZs, but in eu-north-1 it fails, because there is no t2.micro instance type there. So it automatically told me, "Hey, this will not pass." Even though my syntax was perfectly good, it wouldn't deploy like this. Now, static tests like this are great: linting, detecting security issues,
and maybe some spec checks – those kinds of things are cool. But this will not catch everything. It will not catch limit problems: if you try to launch an instance and run into a limit issue because you have no more EC2 capacity within your limits, neither lint nor nag will detect that. So we need some more active tools to do this. And the next tool is Taskcat. Taskcat basically allows you to test at scale. It's not just linting – it also has
"cfn-lint" included for linting, but what Taskcat does,
it actually deploys your template to a specific region
with a specific subset of parameters and if it's successful,
the test is successful. If it doesn't launch properly,
it will fail. Now why would you use this? So imagine having
a large-scale library of CloudFormation templates,
maybe you were using them within your organisation through
something like Service Catalog. So you want to make sure
that your CloudFormation templates will always launch. You don't want to test it
by attempting to launch it when you really need it. So in scenarios when you have a lot
of templates, what would you do? You would use Taskcat to constantly, like every day, every week,
every set amount of time, you would launch this template,
see if it works, and if it works that's fine. So templates can fail,
dependencies can be lost. So if you have a dependency on an AMI which you maybe have deleted, this is a great way to catch that
before you need that template. Super useful, you define
a bunch of tests with it, and these tests look like this: in this case, I'm launching one template in two regions with different sets of parameters. I can also do some other things – maybe set some tags, or deny-list a couple of AZs I don't want this to launch in, etcetera.
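A minimal version of that configuration file – a ".taskcat.yml" – might look something like this; the project name, template path, and parameters are illustrative:

    project:
      name: my-iac-demo
      regions:
        - eu-west-1
        - eu-north-1
    tests:
      default-test:
        template: templates/app.yaml
        parameters:
          InstanceType: t3.micro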
just run "taskcat test run", which is again,
Taskcat is a CLI tool. If I run the "taskcat run" within the directory
where my template is and my actual
Taskcat configuration file, the file you've seen before, it will deploy the same template
in two different regions with a different set of parameters
that I have defined before. And if all of this is successful, you will get this really nice report – greens for success, and reds if something is not okay – showing you the successful deployments, and you can actually have a look at the CloudFormation events or logs that happened as this thing got deployed. Now, all these tools are great. They'll work within your text editor
or within your CLI, which is great,
but that's not their real home. There's a better place for these tools: the pipeline. So this is how an ideal testing pipeline would look for CloudFormation: from committing our code and pushing it to a pipeline, to running all the linters against it, deploying it with Taskcat to a test environment, and finally, hopefully, promoting it to a production environment.
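If you were building that test stage with CodeBuild, a minimal buildspec could look something like this – a sketch, assuming the tools from earlier and an illustrative template path:

    version: 0.2
    phases:
      install:
        commands:
          - pip3 install cfn-lint taskcat
          - gem install cfn-nag
      build:
        commands:
          # Static checks first: syntax/spec, then security patterns
          - cfn-lint --template templates/app.yaml
          - cfn_nag_scan --input-path templates/app.yaml
          # Then an actual test deployment
          - taskcat test run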
This is something we should all strive for. Okay, moving on. Let's talk about some best practices
when it comes to CloudFormation. There's a few of them that are not
strictly connected to CloudFormation, but yeah, you'll catch those
as you go along. So, first and foremost – and again, this applies to a lot of different IaC frameworks as well – one of the key things
I like to tell my customers is layer your application. Do not put all of your eggs
in one basket. Do not create one big old template that contains everything under the sun, because if something goes wrong, or somebody decides to delete a stack, everything can be deleted at once – which is not the best thing. Also, having smaller templates,
smaller files, makes it easier to work with. And if actually something goes wrong
or if you want to replace something, you want to rip out
one specific part of your infrastructure
and put in a new one, it's much easier to do it like this. And how would you split it? Into specific security stacks, network stacks, frontend, backend, staging, development, blah blah – different kinds of stacks for different roles. So this is how you should do it
no matter which language you use. Now something
CloudFormation specific, in the past, if you wanted
to refactor your stacks or for example if you have a database
and you want to move the database from one stack to another stack,
like a new stack – this was a problem; you couldn't do that, in essence. You would basically have to try to delete the stack, maybe contact Support. Super complicated. But right now, what you can do is use imports: basically, you can import an existing database from one stack into another while deleting it from the old one, and it will live on in the new one. A super useful feature, and a great one for the times you need to refactor your stacks. Speaking of your stacks, stacks
can sometimes drift in configuration. For example, you launch an EC2 instance, some wiseass goes in and changes a setting that you do not wish to be changed, and you haven't noticed the change. One of the things you can do is use drift detection to detect that potential drift – for example, an IP address has been removed from an instance, or a volume has been attached; something has been changed manually, out of band. So you can detect that
with drift detection. But if you want to accept that change – accept the thing that has happened – you can also use import to fix it: you re-import the resource with that setting, and it will basically bring those settings back into the template. So pretty cool, pretty neat features.
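From the CLI, kicking off drift detection and reading the results looks something like this – the stack name is illustrative:

    aws cloudformation detect-stack-drift --stack-name my-network-stack
    aws cloudformation describe-stack-resource-drifts --stack-name my-network-stack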
Now, a few very important things – again, not CloudFormation specific per se, but super important. First of all, for CloudFormation:
use parameters and mappings. Do not create
a CloudFormation template without parameters. Make sure you use parameters, and mappings as well, so that you can make a more agnostic template – one that can work in multiple different types of environments. And if you use mappings, you can create a map of AMIs to regions, which means you can use the same template in multiple regions and it will automatically know which AMI to use depending on the region it has been launched in.
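As a sketch, such a mapping looks like this – the AMI IDs here are placeholders, not real ones:

    Mappings:
      RegionMap:
        eu-west-1:
          AMI: ami-11111111111111111
        eu-north-1:
          AMI: ami-22222222222222222
    Resources:
      Instance:
        Type: AWS::EC2::Instance
        Properties:
          # Pick the right AMI for whatever region the stack runs in
          ImageId: !FindInMap [RegionMap, !Ref 'AWS::Region', AMI]
          InstanceType: t3.micro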
Now, a very important thing – I need to have a look at you here for this one. Do not store sensitive information
within your CloudFormation templates. I know it's common knowledge – nobody should have to say this – but somebody should say it. Anything that you do not wish
to be publicly visible from a GitHub repository, do not put
in your CloudFormation templates. If you want to store a password
to a database, use a parameter – or better yet, use Parameter Store or Secrets Manager. Then, instead of even passing the parameter along as you create the template, you can just resolve it from within: as you can see in the snippet, we're using a resolve function within CloudFormation to resolve a specific parameter value from Systems Manager Parameter Store.
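As a sketch, that dynamic reference looks something like this – the parameter name is illustrative, and I'm leaving out the other required database properties:

    Resources:
      Database:
        Type: AWS::RDS::DBInstance
        Properties:
          # Engine, instance class, storage, etcetera omitted for brevity
          MasterUsername: admin
          # Resolved at deploy time from Systems Manager Parameter Store
          MasterUserPassword: '{{resolve:ssm-secure:MyDBPassword:1}}'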
and this will help you basically never even have to look
at the database password, because it will be sitting
within the parameter store. And if you want to update a password, you change the parameter
within the parameter store and just run a blank update command
on your CloudFormation template, it will pick up the new password
and update it. Awesome things. Now let's move away a bit
from CloudFormation. Let's talk
about its younger little brother called CDK or Cloud Development Kit. Let's talk about delivery of CDK
and how that should be done. Just again like CloudFormation,
let me take a step back. Why would you use CDK?
Well maybe you don't like JSON. I wouldn't say
I'm the person who likes JSON. Maybe you don't like YAML.
Maybe tab spacing isn't your thing. So maybe you are just good
in another programming language, a general programming language
such as Python, Java, JavaScript, and you wish to use that knowledge
and experience to build infrastructure. Maybe you want to use some routines, some constructs, some elements from these languages to make your IaC code much better. Well, say hello
to Cloud Development Kit. Cloud Development Kit
is a multi-language framework that allows you
to model your infrastructure as code. So it allows you
to create infrastructure as code by using some general
purpose languages – languages such as JavaScript or TypeScript, Java, Python, C#, and F#. Pretty cool, pretty useful. I love CDK,
it's just so easy to work with. But before I continue talking
about CDK, I really need to talk
about one of our customers who's a big fan of CDK apparently. So a customer called Alma Media. It's a Finnish digital service
business and media company that, besides their news services, also provides information related to lifestyle, career, and business development. So they are massive. They have 750 million page views
per month, they reach over 80%
of the Finnish population, they have over 2000 people
working for them, they have 100+ websites and apps.
That's a lot. They run a lot of serverless, for example, they had
to the last of my knowledge, over 2 billion lambda function
invocations in a month. That's huge. And speaking of complexity,
they have about 100 AWS accounts. So, that is pretty big. They had been using CloudFormation up to now, but they wanted to improve the experience for their developers when building serverless applications. When their developers program infrastructure, they wanted it to be better; they wanted the actual serverless part of it to be even tighter than just doing it with CloudFormation. So what they did: they opted
in to use the Cloud Development Kit for their serverless projects. They use Cloud Development Kit – CDK – with TypeScript, and this has in turn improved
their developer experience and that's the most important thing, we want to keep
the developers happy. So when they run
their Infrastructure Orchestration, the experience has improved greatly. They create these building blocks
or constructs that can later be reused
for different elements or different parts
of a different project maybe. They use all the software
development best practices such as static analysis
while testing and versioning and dependencies
within their packages. So pretty cool. And as they go along,
they plan also to introduce some additional languages
such as Python and Java, and to build more and more building blocks with CDK – constructs and libraries – to be reused ever more widely within the company. A nice story; I love hearing
when customers are just successful with one piece of software. Okay, so moving on.
From constructs to the cloud. We're talking
about some specific aspects of CDK. This is how CDK works
at a 10,000-foot level. You write your apps, your stacks, and most importantly your resources – the actual part of CDK that creates things. You pass that on to the CDK CLI. The CLI takes that code, does some magic to it,
creates CloudFormation code, and then everything else
is left to CloudFormation. This is really cool
because the entire complexity of managing
your infrastructure as code is part of CloudFormation and it's built in
and it's tried and tested. CDK just enables you to do it
in a different language and just basically interprets it
for CloudFormation. Really cool. And if we go a bit deeper into CDK
or rather the main concepts of CDK: I mentioned the core framework – apps, stacks, and resources. Resources are the definitions of the actual AWS resources in your stack. A stack can contain one or more resources, and an app can contain one or more stacks. Amazing. For example, you might have an app with staging, development, and production stacks, each with multiple resources.
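To make that hierarchy concrete, here's a minimal sketch – CDK v1-era imports, and the names are illustrative:

    import { App, Construct, Stack, StackProps } from '@aws-cdk/core';
    import * as s3 from '@aws-cdk/aws-s3';

    class StorageStack extends Stack {
      constructor(scope: Construct, id: string, props?: StackProps) {
        super(scope, id, props);
        // One resource inside one stack
        new s3.Bucket(this, 'MyBucket', { versioned: true });
      }
    }

    // One app containing one (or more) stacks
    const app = new App();
    new StorageStack(app, 'StagingStorage');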
The Construct Library, then, is a library where you define specific constructs – specific groups of resources, or a specific way
you configure a certain resource. For example, a DynamoDB table is a construct, with some sensible defaults already coming from us, that you can use. And the CLI, as I mentioned before,
is the grunt, the workhorse, of CDK – the thing that actually takes your code, interprets it into CloudFormation, and deploys it as well. So it doesn't
just generate CloudFormation, it also handles deployments.
Pretty neat. Now let's talk
about those constructs, because constructs are important. There are 3 types of constructs
within CDK. CloudFormation constructs: basically the entire CloudFormation resource specification, used just as you would in vanilla CloudFormation. AWS service constructs: these are higher-level abstractions, very similar to the CloudFormation ones but with sensible defaults – they have some sensible parameters already set. And the big kahuna:
the design pattern constructs. These are opinionated reference architectures and design patterns using multiple AWS services. And when I talk about this,
this is basically it. So, a pattern: in this case, these 5 lines – essentially 3 lines of code – create an application-load-balanced Fargate service. Within these 3 lines of code, you basically generate 817 lines of CloudFormation: all of the services, all of the elements needed to create a load-balanced Fargate service in the backend. Again, very opinionated. You can make some changes as well,
absolutely, so it's not just a vanilla thing,
you can also do some changes which you will see actually
in the upcoming slides. So talking about each one
of these separately. I mentioned CloudFormation constructs
and this is how it looks. A CloudFormation construct is useful if you want to do
some very much primitive things, in a sense you want to create
just a VPC without any defaults. I'm creating a VPC in this case. You'll see "CFNVPC"
as the name of the resource. I'm passing you the "cidrBlock". Once I do this, bam,
I have just created a VPC, nothing else, a blank VPC. The same thing would happen
if you do it in CloudFormation, you would just get nothing, just a sample VPC
with nothing else in it. Okay, pretty cool, pretty powerful. But moving on to a service construct: I'm doing the same thing, I have just removed the "Cfn" part from the construct name, and I'm passing in the same parameter. But in this case, because it's a higher-level abstraction with sensible defaults, it will also create elastic IPs, internet gateways, and some other things. Again, it has some… opinions about what it should do for you as well.
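Side by side, the two levels look something like this – a sketch using CDK v1-era imports:

    import * as ec2 from '@aws-cdk/aws-ec2';

    // Level 1, the CloudFormation construct: a blank VPC, nothing else
    new ec2.CfnVPC(this, 'CfnVPC', { cidrBlock: '10.0.0.0/16' });

    // Level 2, the service construct: subnets, gateways and more included
    new ec2.Vpc(this, 'VPC', { cidr: '10.0.0.0/16' });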
And now, finally, the pattern. It's the same pattern I've shown before, and here we have a few more lines – 5 lines of code in essence,
with some settings. We also define some memory and CPU
and task image options, blah blah. And again, it creates all the different things – from the load balancer and listeners to the ECS service, the whole nine yards. So it does a lot of things for you, again with some things that can be changed as well.
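As a sketch, the pattern looks something like this – CDK v1-era imports, and the container image is just a sample:

    import * as ecs from '@aws-cdk/aws-ecs';
    import * as ecsPatterns from '@aws-cdk/aws-ecs-patterns';

    new ecsPatterns.ApplicationLoadBalancedFargateService(this, 'Service', {
      memoryLimitMiB: 1024,
      cpu: 512,
      taskImageOptions: {
        image: ecs.ContainerImage.fromRegistry('amazon/amazon-ecs-sample'),
      },
    });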
Now, which one you should use really depends on your situation. Most people use the service constructs, which are the easiest to get into and give you the most flexibility – the middle-of-the-road option. Now, how can we prevent mistakes? When you write your CDK code,
how do you prevent mistakes when deploying or running or whatnot? Well let's introduce testing
as before. There are a couple of ways you can do tests on CDK – three types of tests you can run as of now: snapshot tests, fine-grained assertions, and validation tests. So I will go into at least
two of these in more detail. I'll cover validation tests now, because I'm not going to explain them much later on: validation tests are nothing CDK specific, they are just a general programming best practice – a fail-fast test where you basically burn into your code, or hardcode,
some specific parameter constraints. For example, let's say you're creating a dead letter queue on SQS and you want to specify a retention period of a certain number of days. Instead of creating a test for a valid value, you can just make that validation within your code: the value must be between certain numbers, or else. That's a really nice way to fail fast – your code will stop executing because some value is not within range. Really cool.
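A sketch of such a fail-fast validation – the property names and range are illustrative:

    interface DlqProps {
      retentionDays: number;
    }

    function validateRetention(props: DlqProps): void {
      // Hardcoded range: stop execution before anything gets deployed
      if (props.retentionDays < 1 || props.retentionDays > 14) {
        throw new Error('retentionDays must be between 1 and 14');
      }
    }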
But let's get on to the actual tests. For these, we're using Jest and the CDK assertion library, and they work with TypeScript – I don't think we have support for any other languages at this moment. Snapshots: so, what are snapshots? As the name says,
it's a kind of point-in-time record. As you can see in this test, I'm creating an SQS queue – or a "DeadLetterQueue" in this case – and the test here, a snapshot test, basically tells me that the stack being created here should match a snapshot. And what is a snapshot? A snapshot is just a point in time of your resources – how your resources look when they're created, or synthesized, or whatnot. So how do you get that snapshot? You create your test like this and then run "npm test".
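A sketch of such a snapshot test – the stack class here is hypothetical:

    import { SynthUtils } from '@aws-cdk/assert';
    import { App } from '@aws-cdk/core';
    import { DeadLetterQueueStack } from '../lib/dead-letter-queue-stack';

    test('stack matches the stored snapshot', () => {
      const stack = new DeadLetterQueueStack(new App(), 'TestStack');
      // Compares the synthesized CloudFormation against __snapshots__
      expect(SynthUtils.toCloudFormation(stack)).toMatchSnapshot();
    });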
The first time it runs, if there are no snapshots, it will create one – as you can see, "1 snapshot written". Now, where is that snapshot? Well, it's actually sitting
within the test directory, in the "__snapshots__" subdirectory, and it looks almost like CloudFormation, with some additional things. So this is what it compares against. Now, what's important is
if you do these things in a pipeline, make sure to commit
your snapshot directory so you can move it along
in your pipeline as you go. Pretty important.
Snapshots are great and if you're not matching
your snapshot like this, if you kind of make a change
that is out of bounds that you will get an error saying,
"Hey it doesn't match." So adhere to this,
if you change your mind and you're like, "You know what,
I really want this change to apply to my snapshot
and to create an updated snapshot," you just run
the "npm test - - - u" command and it will
update the existing snapshot to a new one and this will now
be a valid configuration. So as I said, snapshots are great,
but we can do a bit more. So let's say you want
to have more fine grained control, you want to actually control
what you test, like you do with unit tests. Application development, anyone? You write your unit tests, you write your expectations – you set expectations for what your code or application should do. You can do the same thing here by being, let's say, more assertive about it: you do assertion tests. So instead of expecting my stack
to match a snapshot, I can expect that my stack will have a resource of type "AWS::CloudWatch::Alarm" with these settings, or that a specific "AWS::SQS::Queue" resource has this setting as well. So if these expectations are not met – if that number is not that value, or if that resource doesn't exist with those parameters – my test will fail.
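A sketch of a fine-grained assertion – again, the stack class is hypothetical:

    import '@aws-cdk/assert/jest';
    import { App } from '@aws-cdk/core';
    import { DeadLetterQueueStack } from '../lib/dead-letter-queue-stack';

    test('queue retains messages for 14 days', () => {
      const stack = new DeadLetterQueueStack(new App(), 'TestStack');
      // Fails if the resource is missing or the property differs
      expect(stack).toHaveResource('AWS::SQS::Queue', {
        MessageRetentionPeriod: 1209600,
      });
    });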
And there's no snapshot to recreate here – you basically update your tests as you go along. Now, the best practice here is that you
write your tests first then write your actual code. Pretty cool, pretty useful
in the long run. Now again, all of these things
should sit in a pipeline, all of these things
should be tested as you go along – it's not just for your desktop, although it's also great on your desktop, to make sure you don't break anything. And as I said, 3 types of tests: snapshots, fine-grained assertions,
and validation tests, which are kind
of more programming types of tests. Now going back to CDK's big brother
called CloudFormation. I want to talk about a really neat feature that came out – and we will have a demo coming up as well: the CloudFormation Registry. So what is this CloudFormation
Registry thing? Well let's say going back
to our lovely infrastructure engineer and let's say that not everything
that you run is within AWS or not everything you run
can be triggered by an AWS API call. Let's say you will have
a GitHub repo or use Opsgenie or you have a line
of business application that generates unicorns. So how can you create
those resources, how can you create a unicorn
from CloudFormation? Well in the past you could do things such as use a custom resource
with lambda functions, you can tie
in a lot of different things there, it would work but it was
a bit more sticks and rope. So how can we make it better? Well, introducing
CloudFormation registry. So CloudFormation registry
is basically an open extensibility model for CloudFormation: you can develop, submit, discover, and manage your custom resources, or external resource providers. Think of it as baking in a specific resource type – the same way you would use an AWS EC2 instance as a resource in CloudFormation, you can create a Darko unicorn factory as a resource and then use it like any other resource within CloudFormation. So there's no need for you
to create Lambda functions to do all of these weird things. Basically, once you create it and submit it, you have it within your AWS account and can freely use it with any of your templates – and anybody else within your AWS account can use it too. Really great:
it gives a lot of functionality to third-party vendors
if you have a product out there – a SaaS product that you wish could be invoked, created, or modified with CloudFormation – you can develop a resource type, a custom resource, for CloudFormation and give it out to anybody to use. We already have customers such as Datadog, Fortinet, and Spotinst that have CloudFormation
registry resources available to use. Basically you can pull them
into your AWS account and use them as you will. Really super strong feature. So how do you get started? There's a few things you need
for this, and like everything, we start with a CLI: we need the newfangled CloudFormation CLI. Now, there are 3 flavours of this – a Go version, a Java version, and a Python version. The Go and Java ones are generally available, while the Python one is currently in developer preview. Now,
why are these languages important? You'll see this later: when you create your CloudFormation resources, or custom resources, you need to write your code in one of these three languages. The actual logic of how you create, list, delete, whatnot, these resources – you need to write it in one of these languages. So, the first thing you need to do:
you need to model your resource, you need to explain
to CloudFormation what is your resource,
how does it work, what are its properties. So here's an example
of my unicorn factory. I'm defining some of the properties
for my unicorn – for example, a superpower: it's of type string, and these are the lengths it can accept as an input value. Then come the handlers, and handlers are kind of the meat and potatoes of your custom resource: it's where you define the create, read, delete, update, and list handlers – the pieces of code, the logic, that actually do all this. This is also where you define things such as the permissions those handlers need. Once you have defined all of this, let's work on those handlers –
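Trimmed down, such a schema looks something like this – a sketch; the type name and properties are my illustrative ones:

    {
      "typeName": "Darko::Unicorn::Factory",
      "description": "A unicorn created through an external API.",
      "properties": {
        "Name": { "type": "string" },
        "Superpower": { "type": "string", "minLength": 1, "maxLength": 250 }
      },
      "primaryIdentifier": ["/properties/Name"],
      "handlers": {
        "create": { "permissions": [] },
        "read": { "permissions": [] },
        "update": { "permissions": [] },
        "delete": { "permissions": [] },
        "list": { "permissions": [] }
      }
    }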
let's work on those handlers, let's actually do this thing. So a handler, again
this is a Go lang example. This is my create handler. The create handler here
basically invokes an API endpoint – my unicorn factory – passes some parameters on to it, and basically creates something externally.
So even if it's not a unicorn factory, think of it as if you're doing GitHub: if you want to create
a GitHub repo you can invoke the GitHub API
through the create handler and it would create a repo. Again, this is really up to you
how you want to do this. There are certain things you need to work with – progress events, errors, and how you return things to CloudFormation – but again, that's something to explain another time. Tests – did somebody mention tests? Like with everything,
we need to test things. So instead of passing faulty CloudFormation code along and then going through the entire registration to see it not work, we can use "sam local". We use "sam local" – basically the SAM CLI – to locally emulate CloudFormation and invoke this create event locally, calling out to the live side of your API. So this is how it would look: I'm basically invoking a local endpoint to create a unicorn, and I get
a "success create complete". That's all I want. Once that is done,
time to upload the new "cfn", our CloudFormation CLI the "cfn submit" command basically submits everything
within your current directory off to CloudFormation
and hopefully as you can see here, registration is complete
and it works. Once that is done, go to CloudFormation in the web console: on the left-hand side, under Resource types, if you check the Private view, you will see your new resource type,
but I don't think we – I could dedicate a whole session
just to this. Okay, but let me dedicate
something else to you. Let me show you a demo. I'm going to actually show you
how this looks in the console, the CloudFormation registry as well,
instead of looking at the screenshots. So demo time. Okay, what you see right now
is my command line. I'm currently in a directory
that contains all of my files that I need to create
my unicorn factory. So, the model – the schema as you have seen before: you have these properties set up here for my unicorns; then, if I go further down, I have some required parameters that I need to use to create my unicorns; and finally, the handler definitions with the permissions they require. Okay, that's fine. But the meat and potatoes of this one is sitting
in "cmd resource resource.go" file, and this file contains
all the business logic, all the logic
that actually creates unicorns. So you can see
that I define my API endpoint; if I go further down, you can see my create function, and you can see my read function here as well. We'll just skip past the update function, delete function, and all that stuff. All of these things are here, and basically all of them are
just making API calls to an endpoint. If I close this down – because this is Go, I need to compile it. So what it will do: it will run "cfn generate" to basically generate some code and some static files – things that you need to provide to CloudFormation to explain the resource. These are all kind of done for you. Once this is done,
I want to run that local test. So I just run "sam local invoke TestEntrypoint" with a specific SAM test event which I've created – the JSON file you have seen before. Once I run this, it will basically invoke CloudFormation locally – using Lambda in Docker, in essence – and it will actually execute the API call back to my API endpoint. And as you can see here, I'm using
"crud crud" for my API endpoint. If I refresh this, you will see
another unicorn created here as such. That's cool. Once I'm happy
with this, "cfn submit". That's it. So this takes a moment or so,
so it creates all the things you need like permissions, roles, etcetera, and it generates a request
to CloudFormation to do all of this for you. Once this is successful, you will be able to see it within CloudFormation, like so, if I refresh. I mean, it's already there,
but if I refresh it, if I go back to private, you will see
the Darko unicorn factory resource sitting here in all its glory,
and it can be used immediately within CloudFormation as we go. Also, the GitHub link – it's not connected to my exact repository here, but that can also be changed. Awesome. So, going back to the slides, I want to talk about some takeaways. A few things. First of all: best practices start
within your text editor. Use proper tools, use proper plugins, use the best text editor
you can find, whichever that is. Important. Treat all infrastructure
code as any other code, as any other application code. Put it through a pipeline,
do all the tests with it, make sure you have versioning
enabled or version control, put it in a repository,
do all the things you would do for any line-of-business application, whether you do it through CloudFormation, CDK,
Terraform, whatever you're using. Finally, test, test, test. There are so many testing utilities
out there. Depending on which language
or framework you use, make sure to use these tools. They'll help you save a lot of time – you don't want to trial-and-error things in the console; you want to test things locally or in a pipeline, so you can do this safely. Finally, before I go:
if you're interested in DevOps, if you need some help
with DevOps or management tools which these things are part of, feel free to check out
some of our partners, which either have
connections to CloudFormation or can help you with DevOps. Maybe ask them some questions about the registry and the custom resources I mentioned before. Check them out
in the Partner Discovery Zone and you can see them as well. Once again, thank you very much. It has been a pleasure
to deliver this session. I hope you enjoy
the rest of the event, and I hope you have enjoyed it so far. Send the positive comments to my social media, and send the negative comments to my email, somewhere over there. And I really hope to see you
at some further event. Thank you. Bye.