CDK Day 2020 - Multi-account and multi-region - Deploy your CDK app to multiple environments

Captions
So next we're going to move on, and Thorsten is going to be joining us. Thorsten, welcome! He's going to do a live presentation about one of the things I get a lot of questions on: how do I go multi-account? I've got this working, but how do we go to multiple accounts, to multiple environments? So with that, I'm going to turn it over to you to school us on that, and off you go.

Yeah, thanks. As you said, I will talk about multi-account and multi-region: how to deploy your CDK application to multiple environments. As we all know, people have test environments and production environments, and we definitely want to put them into different accounts and different regions to reduce blast radius.

Who am I? I covered this earlier, but just as a recap: I'm CEO and Cloud Evangelist at Taimos, which is my own company, and I'm an AWS Hero and CDK contributor. You can find me on Twitter; the handle will be in the lower right corner the whole time.

So what do we want to do? We will build a serverless API using the AWS HTTP API (API Gateway), but that part will already be there; it's our starting point. Then we will deploy it to different regions and different AWS accounts, and even sprinkle a little bit of testing into our pipeline.

What does this look like? This is our account setup: we have our organization root, and we have organizational units: a deployments OU containing the CI/CD account that holds the pipeline, and a workloads OU with our pre-production and production environments, that is, our dev and prod accounts. And Eric will love it: it's a serverless application. We also have the organization root account, just for the sake of it. If you want to create these diagrams yourself, I've written a tool called aws-org-graph that creates them for you.

So let's dive into the code. We have a CDK application as we all know it: we have our app file, and the app file instantiates our stack and says, please deploy into this account, in this region. I'm a big fan of always specifying this, so I can't accidentally put a stack somewhere else. Our ServerlessDemoStack then lives in the lib folder; in this case it's a TypeScript application. We have a DynamoDB table, an HTTP API, and Lambda functions. I'm using the NodejsFunction construct here, so it automatically wraps and builds the TypeScript code with Parcel. I'm adding a POST route for tasks and a GET route for tasks. The details don't really matter, because it's just an application we already have; this talk is about spreading it across multiple accounts.

One important caveat: everything I'm talking about is in developer preview, not GA yet, so context lookups do not currently work in a fully supported manner. We should avoid VPC lookups and hosted zone lookups and use only things defined in our application; everything else is perfectly fine.

The standard workflow would be: run cdk synth, and we get the template for exactly this little serverless application. The demo gods will decide if this works, because it spins up Docker in the background, runs Parcel and builds everything. What we now want to do is add more libraries, because we want to build a pipeline. What we need to add is the CDK Pipelines package: a single pipelines package that bundles everything we need, using CodePipeline, CodeBuild and friends under the hood. Then we import everything we need in our application file.
The application file is the only thing we'll be touching, because the application itself is fine as it is; we're only changing how it gets deployed. So we add imports for the pipelines package, codepipeline and the codepipeline actions. Next we create a new stack, and this will be the only stack our application deploys directly; everything else will then be deployed by the pipeline. We create a new pipeline stack, and inside it we need to define some artifacts: a source artifact and a cloud assembly artifact. The source artifact holds the source code, coming from GitHub in my case, and the cloud assembly artifact holds the synthesized cdk.out folder.

The next thing we do is create a CDK pipeline. I'll paste it in first and then walk through it line by line. We create a new CdkPipeline from the pipelines package, give it a name, and provide the cloud assembly artifact it should use between the stages. We have to define a source action, that is, where the source code comes from: in this case a GitHub source action with everything it needs, the owner, the repo and the branch, plus the OAuth token coming from Secrets Manager, which holds our personal access token for GitHub, and it outputs to our source artifact. Then, instead of specifying all the build steps manually, the CdkPipeline construct has something called a synth action: we add a SimpleSynthAction using the standard npm synth, because we're using npm in this case, and provide the source artifact and the cloud assembly artifact so it knows where to find the sources and where to put the assembly. In my case we override the install command, because we don't just need an `npm ci` at the top level: the lambda folder holds our Lambdas with their own package.json, so it needs an `npm ci` as well.
And as we might need Docker, I use a privileged build container. The moment we do this, we have our first pipeline defined. It's just a pipeline; it's not doing anything yet: it builds and prepares everything, but deploys nothing. For this to work, we first need to remove the direct deployment: we're no longer deploying our application with cdk directly; instead we add our pipeline stack to the application. So we have a new pipeline stack with everything it needs, in the same account and the same region; it just deploys the pipeline instead of the application.

Our build finished, so we can compare: before, the output was just the template of our application; from now on it will be different. If we deploy this, it looks like this: it deploys a pipeline which has a source stage and a build phase. In the build phase it spins up CodeBuild, runs cdk synth and prepares all the assets. In the next phase it starts a CodeBuild project and self-updates the pipeline: whenever we change the pipeline, the first step is deploying the pipeline stack, and if the pipeline stack changed, restarting the whole pipeline. So when we add new stages or change assets, it always self-mutates the pipeline first and then deploys everything else.

Now we need to create something called a stage. We add a class extending Stage, call it our app stage, and in this stage we instantiate our ServerlessDemoStack, which was our whole application before. We just moved it into this stage, and that's all we need for the application to be there. For our pipeline we then add stages, which is just pipeline.addApplicationStage, and we instantiate our stage, giving it a name and an environment telling it which account and region the stage goes to.
As you can see, one stage is in Frankfurt and one in Dublin; that's why I named them differently, and the stage name is prepended to the stack names. If we then deploy this, we get another setup: in CodePipeline we now have the source, the build, the pipeline update, and then a new CodeBuild project called "assets", because the pipeline uploads the assets, in this case the lambda folder, to S3 buckets in the desired accounts; deploying all your assets is an extra step. Then it creates the stage for my Frankfurt deployment, using the native CloudFormation "create change set" and "execute change set" actions in CodePipeline, so I'm not paying for some CodeBuild project just to deploy things to CloudFormation. And then it's the same for our Dublin deployment. So this is multi-region with just one line, and I can keep adding more lines for more regions; it just works: it redeploys the pipeline and then redeploys the application.

Now we want to add some tests. In our app stage we need some minor changes: first we declare some outputs, and then we use those outputs. What I'm doing is creating CloudFormation outputs in my stack, based on the table and the API, and putting them into variables on my stage; that's everything. Now that I have these outputs, I can see them in a concrete execution: if I go to an execution and look at the deployment, I can see them as output variables. But the even better part: I can now change things a little. I extract the app stage into a variable so I can do some interesting things with it. Before, this stage was just creating a change set and deploying it; now I add a new action called a shell script action. This shell script action has a name, like everything in CodePipeline, and it has commands, which will be executed in a CodeBuild project. I can use environment variables, in this case URL, and by specifying useOutputs I can populate these environment variables from stack outputs. So what happens here is that the API output I defined in my stage is put into the environment variable of this CodeBuild task, which then runs the validation commands.

If I go back to my pipeline, I added this to the Dublin stage, so there's a new action called "validate dev dublin", and it's a CodeBuild project. If I go to the details I can see the build log, which says: I'm doing a curl against this API, and because I'm using curl in verbose mode I can see the output, so everything's working. I can run tests here, and if everything's fine, the pipeline transitions to the next stage; otherwise it blocks the stage and the run. So I can do automated testing of these deployments.

And now comes the "hard" part: I want to put it into another account. For this really hard thing to be possible, I add another stage and specify a different account id. And that's it; that's nearly everything I need to do for this stage. What I had to do beforehand, and this is the secret magic, is bootstrap my accounts using the new CDK bootstrapping experience, which is documented in the pipelines package. For this to be allowed, I had to specify in my second account that it should trust my CI/CD account. That's why you should definitely follow the reference architecture of having a CI/CD account and multiple stage accounts, where each stage account trusts the CI/CD account: everything running in your CI/CD account, especially CodePipeline, is then allowed to assume the CDK roles in your target accounts and deploy things there. That's everything you need to do for multi-account: just add the stage, and the pipeline picks it up.
In CodePipeline it now shows up as a new stage. In this case I decided to show that it's possible to put multiple environments into one pipeline stage, so not one stage for prod Frankfurt and one for prod Dublin, but both in one; arrange it however you like. You could even run them in parallel to speed up deployment to multiple production environments. The really interesting part is bootstrapping your account, or your accounts, correctly, using the new bootstrap experience and the trust relationship to your CI/CD account; then the rest just works.

There are blog posts about this, and if you look at the AWS CDK repository I can show you where to find more: it's in the packages directory, under pipelines. I cannot emphasize it enough: this is developer preview, so be aware. I won't say "don't use it in production", because I'm using it in production, but you shouldn't. Everything is laid out there again, including what you need when provisioning your accounts. Bootstrapping the accounts with the new experience and the cross-account trust is really important; then everything just works, because the new bootstrapping experience deploys IAM roles whose names can be determined by other accounts, since they follow a specific schema. These roles can then be assumed across accounts, and deployment happens through CodePipeline.

One thing to note: in CodePipeline there's a "details" button that links directly to the CodeBuild project or the CloudFormation deployment. This does not work for cross-account deployments, because it's a deep link in the web console, and it links into CloudFormation in the CI/CD account, not in the target account. So you have to switch to your target account and go to CloudFormation there; it's not something you're doing wrong, it just doesn't work like this.

So let's see what's happening behind the scenes.
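The bootstrapping step Thorsten emphasizes could look like the following, assuming 111111111111 is the CI/CD account and 222222222222 a workload account (both placeholder ids); at the time of the talk the new-style bootstrap had to be enabled with an environment variable:

```shell
# Bootstrap a workload account so the CI/CD account may deploy into it.
# Run once per target account and region; both account ids are placeholders.
export CDK_NEW_BOOTSTRAP=1
npx cdk bootstrap \
  --trust 111111111111 \
  --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess \
  aws://222222222222/eu-central-1
```

The `--trust` flag is what lets the pipeline in the CI/CD account assume the CDK deployment roles in the workload account; `--cloudformation-execution-policies` must be given explicitly when trusting another account.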
There are many stacks deployed here. First of all, in the CI/CD account, in this case in eu-central-1 because that's where I deployed it, there's the pipeline stack. It then deploys my dev Frankfurt and prod Frankfurt stacks into the dev and prod accounts in the same region, and it also deploys the Dublin stacks in the Dublin region. For this to be possible, under the hood (you don't see it, it just happens) it deploys a support stack in the CI/CD account in each of the other regions. So in every region where you deploy your application there will be a support stack, because it needs a bucket for the assets in that region (assets for Lambda, for example, have to be in the same region), and it sets up bucket replication so that all assets are in all regions for the deployment; the same goes for the CloudFormation templates and so on. This is something you should know is happening, but you don't have to worry about it.

What you might need to worry about, if you're just testing: everything is encrypted by default, and all buckets created by CDK bootstrapping are encrypted. Because you need cross-account access, encryption only works with customer-managed customer master keys, not with AWS-managed ones, and these CMKs cost about a dollar a month each, per bootstrapped region and account. Keep this in mind: it doesn't matter for enterprise setups or anywhere you deploy real workloads, but if you're doing this just for fun, maybe don't deploy to 22 regions.

So that's it for the stacks that get deployed. One very important thing: when you create the pipeline for the first time, you have to deploy it with cdk deploy; after that you should not use cdk deploy anymore, but let the pipeline self-mutate. It's very important to push your actual code to your repo first; once the current state is in the repo, you can do your one manual deployment, because the pipeline will start immediately when it's created, check out the current state of the repository, and self-mutate to whatever is in the repo. So make sure the current state is in the repo before you do a local deployment.

That's it for my presentation; I think we can do a short Q&A after this. Definitely visit the AWS CDK GitHub and give your feedback on all these pipeline features, because they're still under active development; there's room for comments and things that can be improved. I've written a blog post on creating a CI/CD pipeline for your CDK app; you can find it on my web page, taimos.de/blog, or in the AWS Heroes group on dev.to. Little self plug: I provide coaching for CDK, and you can find all my GitHub projects, with all the CDK demos, the org-graph tool and other tooling, under github.com/taimos. So that's it, and I'm open for questions.

All right. Hey, sorry Matt, I jumped first. Thorsten, there are a couple of quick questions. So: each account gets a customer-managed KMS key, is that correct?

Each account and each region that you bootstrap with the new bootstrap experience gets a customer-managed customer master key. This is needed for cross-account deployments. If you don't need cross-account deployments, then you're not really in my talk right now, but there is a pull request currently open to disable customer-managed customer master keys, to remove that cost. But it's a dollar per account, so you definitely have one dollar per account and region.

All righty, we have another question, from Brian: what are the gotchas or pain points of dev iterations when defining the pipelines? So, for the development of the pipeline itself?

Yeah, it takes a little while, because you always have to push to GitHub and then it goes through the pipeline self-mutation and so on. There's another open pull request for this: to stop self-mutation for development purposes and then do cdk deploy manually for every change, so you can say "I'm just updating the pipeline whenever I like, and when I'm happy with it I do another deploy that reactivates self-mutation" and go back to production mode.

All right, excellent. One other question we have here: the CDK role, is that something you'd have to set up up front?

You just have to do the CDK bootstrapping, and if an account doesn't have the right bootstrap version, it will tell you to please update to the latest version. With the latest bootstrap version it creates the cross-account roles for deployment, for asset uploads, for repository pulls; I think it's three or four IAM roles per account and region. They follow a specific pattern, something like "cdk" plus a random-looking string that is actually the same for all accounts, plus the account id and the region, so CDK knows what the roles in all the other accounts are called.

All right, excellent. That's all the questions I'm seeing. Matt, do you see any others?

Yeah, we had a couple over from Slack. Thorsten, would you have any recommendations about how best to divide up stacks, especially in the case of these pipelines? Does everything go in one stack, should you have different stacks, how would you approach it?

To keep it simple in the beginning, I would start with one stack per environment, because having multiple stacks for different environments is crazy enough for a start. But you may hit limits, which in the serverless case can happen fast: if you build one Lambda per endpoint of an API Gateway, for example, each with a role and everything that comes with it, you might hit the 200-resource limit pretty fast. That's where I start to break it down into more stacks: this part of the service goes into one stack, one for the API, one for this group of Lambdas, one for that group, and so on. But this is something you can do inside your application stage: you can have one stage for your real application and do everything inside it. So in your stage, you can stick to one stack, or at least one construct, for a very long time.

Gotcha. And any advice for anyone looking to adopt pipelines? Rough edges or gotchas that tripped you up originally, that you'd recommend they watch out for?

One thing that, as I said, is definitely not yet supported is lookups. The moment you use context lookups, like VPC lookups or hosted zone lookups, they currently don't work in all cases. There is a workaround: if you do the lookup once locally and the cdk.context.json is there, it will use that file instead of performing the lookup. So it works in some cases, but it's definitely a rough edge, and it's being targeted for GA to work properly.

Okay, so would a developer then be responsible for running those lookups locally and committing that cdk.context.json file? Would that be the way for the time being, or just try to avoid these lookups?

That's why I demoed a serverless application: then I don't need VPCs, which are the main lookups I'd be doing, and everything else can be defined in the stack.

I would just like to say: you're welcome. And if you deploy VPCs, you don't care about the KMS key costs anyway, because you have NAT gateways; compared to those, it saves you a lot of money.

Well, good deal. Thank you very much, Thorsten, we really appreciate it.

Yeah, thanks.
Info
Channel: CDK Day
Views: 2,291
Keywords: aws, aws cdk, cdk pipelines, cdk day, multi account, multi region, pipelines
Id: v74PvMEhMhQ
Length: 29min 41sec (1781 seconds)
Published: Sun Oct 11 2020