DevOps Master Class - Part 6 - Infrastructure as Code

Captions
Hey everyone, welcome to part six of our DevOps master class. In this class we're going to dive into infrastructure as code. This really is a critical piece of the all-up flow we have in these pipelines: we need resources to be there, and infrastructure as code helps us get those resources there in a predictable way. First, as always, a lot of work goes into creating these classes, so a like, subscribe, comment and share really is appreciated, and hit that bell icon.

I previously talked about pipelines, and at the start of any pipeline where we're deploying an application and doing some testing, the first thing we have to do is deploy the infrastructure. That could be virtual machines, a Kubernetes environment, databases, storage accounts, whatever it is: the infrastructure has to be there before we can put our applications and data on it. Then, sure, we think about the application, unit tests and other things in that pipeline. But how do we get the infrastructure there? How do I make sure it matches my requirements? If my requirements change, how do I modify what exists to match? I can't have a user going into a portal and click-click-clicking, and I don't want to run commands by hand. I want it done as part of my pipeline.

When I think about infrastructure, realize there are layers. Sure, there might be VMs or Kubernetes environments I create in some fabric, in the cloud or on premises. But if it's a VM, what image does it run? What additional middleware or runtimes do I have to install, and what configuration is required? If it's Kubernetes, what builds the container image I deploy into the pods, and which pods do I deploy to which environment? If it's a database, what commands am I running to create or modify the schema? All of these layers build up my complete solution, and I have to have every one of them for this to work.

So how do we normally create things? There's imperative, and there's declarative. Imperative is about telling it what to do and how to do it; declarative is about stating what I want.

Imperative means I'm saying how to perform something, the steps I want performed. In a portal: click this button, then that button, type in this value, click this button. In a script: create a resource group, now create a storage account with this configuration, now create this container, now create this virtual machine. There are steps to perform. I like to use a travel example: picture some streets, a destination I want to get to, and a starting point. If it was imperative I'd say: go forward 100 meters, then turn left, go forward 200 meters, turn right, then go forward 50 meters. I'm dictating the steps to perform, and it works from that particular starting state: if I start at my house and follow those steps, I always end up at that destination.
But there are challenges. What if it failed halfway through? If my car broke down partway and I had to get a different car, I can't just replay those steps from where I am; I'd end up somewhere completely different. What if my starting point changed? The steps don't work anymore. My starting state matters: this only works if the starting state is always the same, and I can't simply re-run an imperative list of steps; I'd have to modify them and add checks to work out where I am. How can I tell the end result matches what I put in those steps? How do I know I'm in the right place? How do I validate, how do I audit, how do I detect drift? How do I repeat it, just re-running it to make sure I really am at the right point?

If it was creating resources, can I just re-run the entire script to make sure the resources are there? No. If the script says "create a VM" and I run that command again, most of the time it will fail, so I'd have to build checks into my script: is it there already? If not, go and create it. What if I want to modify the goal, my end state? I can't just re-create the VM with different parameters; that errors too. I'd have to use a different command, a modify command, to make it match my desired state. It's really hard to track all of that imperatively: there are lots and lots of extra steps to handle a modified end goal, a different starting place, auditing that I really got there, and being able to version control it and detect what changed. It's very difficult in an imperative manner.
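To make that concrete, here is a minimal sketch (mine, not from the video) of the kind of existence-checking an imperative PowerShell script ends up needing; the resource names are hypothetical:

```powershell
# Imperative scripts have to check whether each resource already exists before
# creating it, because re-running a plain "create" against an existing resource errors.
$rgName   = 'MyAppRG'          # hypothetical names for illustration
$stgName  = 'myappstorage001'
$location = 'eastus'

if (-not (Get-AzResourceGroup -Name $rgName -ErrorAction SilentlyContinue)) {
    New-AzResourceGroup -Name $rgName -Location $location
}

if (-not (Get-AzStorageAccount -ResourceGroupName $rgName -Name $stgName -ErrorAction SilentlyContinue)) {
    New-AzStorageAccount -ResourceGroupName $rgName -Name $stgName `
        -Location $location -SkuName Standard_LRS
}
# Changing the desired SKU later would need yet another command (Set-AzStorageAccount),
# which is exactly the bookkeeping a declarative template avoids.
```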
Now let's take declarative. Here I'm saying what I want. Same streets, same starting point, same destination, but instead of telling the driver "go forward 100 meters, turn left, go 200 meters, turn right, go 50 meters and you're there", it's like phoning an Uber and saying "I want to be at A." That's all I say; it's then up to the technology to handle it. All I do is state my desired final state: be at A. It's all about the goal, the final end state, and it's up to whatever technology I'm using (in this case the Uber driver) to work out the steps to get me there. If I'm at a different starting point, my desire is still the same, to be at A, and it's up to the technology to work out what to do. If I'm at A and I change my desired end state to B, it works out how to do that, whereas with imperative I'd have to work out a whole new set of exact steps.

This gives me a lot of nice benefits. I can say "I want to be at A" over and over: when I'm at home, when I'm somewhere else, even when I'm already at A, in which case nothing changes. It's idempotent: I can re-run it as many times as I want and it will make reality match the state; if I'm already in that state, nothing changes. "I want a VM in South Central US of this type": if it already exists, nothing has to change; if it's a different type, it gets changed; if it doesn't exist, it gets created. Because it's idempotent it's totally repeatable, and in a pipeline that's super attractive. I can just declare the infrastructure I want, and if I run that pipeline fifty times a day it doesn't matter: if the infrastructure is there and matches my desired end state, nothing happens; if it doesn't match, maybe because it's the first run, it gets created; if I've modified my desired state because requirements changed, it modifies what's there to match.

Because I'm declaring what I want, it's easy to validate and easy to detect drift: the technology compares what's there against what I declared, rather than trying to work out where a list of steps would have got me. And I can easily version control it, because it's just my desired end state; if I change it, the diff shows the old desired state against the new one. With an imperative list of steps, it might not be obvious what the end state was before compared to now. This is huge. This declarative nature is what we want for infrastructure as code: I state the desired end goal and the technology makes it so. I can always re-run it to make sure I'm in that state, I can easily see what has changed, audit it for compliance, and version control it. That's the desire: a declarative technology.

Now, there are different declarative technologies depending on exactly what we're doing and which layers we're working at, and I want to touch on some of them. I'm not going to go into huge detail about specific technologies, because this is a DevOps course and your technology may differ, but I want to give some real-world examples so you understand what's going on.

The first thing to bear in mind is that for many organizations using the cloud, the process has changed. The cloud doesn't uniquely cause this, but it really drove the change in behavior. In the past, call it yesterday (maybe it's still today in certain environments), a business user or app owner wanted some resource: a VM, a Kubernetes environment, a public-facing site, whatever. That request went to the IT admin, the system operators, and the whole point was that the IT admin was aware of corporate requirements: security, regulatory and so on. They would validate the request.
That validation could be security focused, it could be regulatory (maybe only certain regions are allowed based on data sovereignty), it could be corporate guidelines around spend, sizes, allocations, whatever that might be. After the request and the validation, the IT admin would actually go and create the resource: a VM, a container, it doesn't matter. That was the flow, and that was yesterday.

Today, with the cloud's consumption-based, self-service nature, that doesn't work anymore; it's too slow. Think about auto-scale and reacting quickly to changing requirements. We have the same app owner, but now the request essentially goes direct: some technology just creates the resource. (I'm drawing the cloud, but it doesn't have to be; a private cloud on premises can offer self-service too.) We still have the IT admin, because as a company I still have the same security requirements, regulatory requirements and corporate guidelines. So how do we handle that? I could try to educate the users; that's probably not going to work very well. I could create corporate templates with some of this built in; that may or may not help, because people can create things in different ways, ignore my template, click around in the portal.

What we really want is policy. The exact nature varies by platform (Azure, AWS, Google Cloud, on-premises Kubernetes, whatever), but policy acts as a guardrail: no matter how I create or provision a resource, the guardrail enforces the rules. The IT admin is not out of a job; instead of reacting to each request and validating it, they set up the guardrails in advance, before people start creating resources, by creating policies that apply no matter what I'm doing. That's the big shift: yes, it's self-service, but the guardrails still have to exist, making sure I'm not creating things in regions I'm not allowed to use for data sovereignty, that I'm using the right replication to protect business data, and that I'm not creating massive resources and overspending for a dev environment. We can't rely on the user to do the right thing, so we use policy. That's the first thing: before I even think about infrastructure as code, get the guardrails in place.

For example (I'm an Azure person, so I'll always use Azure as the example): everything in Azure goes through Azure Resource Manager. Whether I'm using the portal, PowerShell, the CLI, or some template technology, it all goes through the Azure Resource Manager RESTful API. So that's where I apply policy: Azure Policy effectively wraps around Azure Resource Manager, and any request, no matter how I interact, goes through that policy. In Azure Policy I create the policies that meet my security, regulatory and corporate standards, and it will stop the request: if a request comes in to create something and it breaks policy, it won't let it be created. That's far more useful than a process that runs periodically, looks at what's already deployed and says "you're out of compliance", because now what do I do? Send an email to the app owner saying they created something out of compliance? They'll argue; it's already running, it's live, and it's very hard to clean up something that exists, as opposed to stopping it being created in the first place and giving useful information about why it's out of compliance. So use the native capability at the entry point of the API if possible, to stop it and get it enforced at the very start. And there's a huge range of things this can cover: requiring certain firewalls, certain network isolation, certain logs captured, a whole bunch of things.
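To give a feel for what one of those guardrails actually is, here is a simplified sketch in the style of Azure Policy's built-in "Allowed locations" definition (not copied from the video, and trimmed): the rule just says that if a resource's location is not in the allowed list, deny the request.

```json
{
  "properties": {
    "displayName": "Allowed locations (sketch)",
    "mode": "Indexed",
    "parameters": {
      "listOfAllowedLocations": {
        "type": "Array",
        "metadata": {
          "description": "Locations that resources may be deployed to.",
          "strongType": "location"
        }
      }
    },
    "policyRule": {
      "if": {
        "not": {
          "field": "location",
          "in": "[parameters('listOfAllowedLocations')]"
        }
      },
      "then": {
        "effect": "deny"
      }
    }
  }
}
```

An assignment of a definition like that, scoped to a subscription or resource group with the parameter set to something like ["centralus", "eastus", "northcentralus"], is what produces the denials shown in the demo below.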
Also realize these policies can be used to enforce, actually stopping something, or just to check and track compliance. And when we think about policies and our pipelines, one of the pipeline steps could absolutely be a compliance check: we've done these deployments, am I getting alerts from the cloud's compliance technology saying I'm out of compliance, and how many? Maybe I have a threshold: more than three compliance warnings and I fail the run and go back to work out what's happening. So I can hook into that in the pipeline itself, but my goal is to leverage the native capability of whatever cloud I'm using.

Now a quick demo. (I meant to mention this at the start: don't forget the GitHub repo for this class is filling out quite nicely with the sample code and the whiteboards.) If I go and look at Azure Policy, straight away I can see compliance state across this subscription and the types of policies I have. If I look at definitions, there are definitions for everything, across all the different types of technologies. Take something very simple like storage: I can check that account keys should not be expired, that Private Link should be used to protect the networking, that only certain SKUs (only certain types of storage account) can be used, or whether shared access keys are in use. All of those controls are available to me.

And rather than starting from scratch and working out every policy I need, there are initiatives, which are collections of policies. For example, there's an initiative for FedRAMP Moderate that I could just assign to certain resources; 262 policies make up FedRAMP Moderate. There are initiatives for FedRAMP High, PCI, NIST, HITRUST, HIPAA and so on. Again, this is Azure, but they're starting points I can use.
Some great examples of things I might want to do: allow only certain storage account SKUs, i.e. certain replication types (maybe I want global resilience), or allow only certain locations; I've got this "US locations" policy. Notice some assignments have a scope of the subscription, while others target particular resource groups, so they're more granular. You have these sets of requirements, but you can assign them at different levels, because if I think about my resources (I'm drawing Azure here) there's a whole structure: management groups, a hierarchy for organizing resources (that could be dev, that could be prod); under management groups, one or more subscriptions, which actually contain the resources; and within those, one or more resource groups. At all of those levels I can apply policy: one policy to this subscription, a different policy to that resource group. Policies are inherited down, so the top level might be corporate-wide must-haves, production and dev might differ on allowed VM SKUs, required replication and required logging, and particular projects may differ again. And these are enforced no matter how I interact.

As an example, I have this "policytest" resource group, and on it I have assignments like "only certain locations are allowed". If I edit the assignment, the parameters restrict the allowed locations to Central US, East US and North Central US. I can also see a storage account SKUs policy applied at the subscription level, this time allowing everything except ZRS, zone-redundant storage. Let's see that in action and try to create a storage account. One of the policies is linked to a specific resource group, so I'll target that resource group, give the account a name, pick a region that isn't in the list (West US 2), and pick ZRS for the SKU. Notice that straight away, before I've even clicked create, it shows a policy validation error: West US 2 was denied. It's checking in real time through the portal that I'm not allowed to pick that region. If I go to create anyway, the validation fails for two reasons: the SKU isn't legal because I picked ZRS, and the location isn't legal. If I change to one of the legal locations, that error goes away, and if I change the SKU to GRS and do review and create, it now passes validation. So I get real-time feedback as the user; I could click the link and see the policy, and as the IT admin I could put useful text there so they understand why it's failing.

And that happens no matter how I create the resource. What if I try to do exactly the same thing through PowerShell? I'm trying a region that's not allowed (North Europe) and Standard_ZRS. If I run it, it's disallowed by policy: non-approved region. If I grab the last error into a variable and dump it out as friendly JSON, I can see I'm breaking the US locations policy, and if I keep scrolling there's also the violation saying the storage account SKU is not allowed. So no matter how I create this thing, it's blocked, because it's implemented at that RESTful API. It's not just something in the portal; it's Azure Resource Manager itself that stops me.
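The PowerShell side of that demo looks roughly like this (a sketch with illustrative names, not the exact script from the video):

```powershell
# Try to create a storage account in a non-approved region with a disallowed SKU.
New-AzStorageAccount -ResourceGroupName 'policytest' `
    -Name 'savtestdisallow' `
    -Location 'northeurope' `
    -SkuName 'Standard_ZRS'
# Fails with a RequestDisallowedByPolicy error. The block happens at the ARM API,
# so it applies no matter which client sent the request.

# Grab the last error and dump it out to see which policy assignments
# (allowed locations and allowed storage SKUs) were violated.
$e = $Error[0]
$e.Exception.Message
$e | ConvertTo-Json -Depth 5
```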
And this isn't just enforcement, as we saw in the picture; it also helps me from a compliance point of view. I can go to the Azure Policy overview and see how compliant I am, and drill into the compliance tab to see the detail of all those policies to help me track it. So this is a key part: I want those guardrails in place right at the start.

Also make sure you have standards. I want naming standards, and if I'm using tags (key/value metadata, really useful for tracking information about resources), have those defined too. In Azure there are documents that help with naming conventions and give suggestions for the components of a name, and documents on tagging with the key tags that are recommended; all of these are linked in the description and in the repo. I want all of that in place so I can really track and make sense of all the resources I have, before I start creating anything.

OK, let's say we've got that. "OK John, we get it: we want policy for the guardrails, we've got naming conventions. I want to create some stuff." So we want to create things as code, using declarative technologies. The first thing is to pick a technology. I have some cloud environment (I'll draw this big to make sure there's space for later), and I want to create different types of infrastructure: virtual machines, container environments such as Kubernetes, maybe a database, maybe just a regular storage account. And the way I want to create those resources, remember, is a template: some declarative technology where I specify what I want it to look like.

Also realize I don't want to modify the template as I move through environments. We've probably got a dev, a test, maybe an acceptance test, and production. I don't want different templates for different environments; I want what I'm deploying to be consistent across them. So I have one template under version control, and then per-environment parameter files. It's the same template, but maybe dev/test has one set of names for the database and a certain size, while production has multiple instances, different naming and different resiliency. Same template, different parameters, across the different environments.
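As a purely hypothetical illustration of that split (these files and values are not from the video), the same template might be fed a dev parameter file:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": { "value": "savstgdev001" },
    "location": { "value": "eastus" },
    "storageSku": { "value": "Standard_LRS" }
  }
}
```

and a production parameter file that differs only in naming and resiliency:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": { "value": "savstgprod001" },
    "location": { "value": "eastus" },
    "storageSku": { "value": "Standard_GRS" }
  }
}
```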
I also want to limit the number of technologies: ideally I can use the same thing on premises, in the cloud, and across different clouds. So the first decision is: as a company, am I multi-cloud, or hybrid (cloud plus on premises)? If the answer is yes, ideally I want a technology I can use across them all. A very popular one right now is Terraform. Terraform has providers for different environments: a provider for Azure, for AWS, for Kubernetes, for VMware. It's declarative (I'm saying what I want to happen) and it's a single syntax: I create Terraform (.tf) files, and again I have parameter files. Realize, though, that because there's a provider for each target environment, it's not as if I can take a Terraform file I wrote for Azure and deploy it to AWS; the resources are still different. It's the same syntax and the same technology, but it doesn't transparently move between environments; I would have to modify it.

The way Terraform works: I create the Terraform files, then we plan, working out what is going to happen when we execute. There's this whole idea of a state file. The reason it exists is that Terraform supports all these different environments, so to work efficiently it tracks what it believes the state of reality is (it does do some checks and refreshes). When I have a new desired state, Terraform compares it to the state file and works out the plan of what it should do; then the apply changes the resources, which feeds back into an updated state file. So that state file is really important; I can get into a world of hurt if something bad happens to it, so protect it. If I'm storing it in a blob, for example, I'd want that to be GRS so it's redundant. But if I'm doing multiple deployments at the same time, the file might get locked and then I struggle. Terraform also has a cloud solution that maintains the state for me: I just submit deployment jobs, it has secrets capabilities, it's free (I think) for teams of up to five with one concurrent run, there are paid plans for multiple concurrent runs and bigger teams, and there are private installations with the enterprise offering.

And where do those Terraform files live? In my repository, the repo for my project, so I can version control them, they're centrally stored, and I can track them. This works for AWS, Azure, Kubernetes, on-premises VMware, wherever that may be. Terraform even has a policy capability (Sentinel, I believe) so policy can be defined at the Terraform level to check whether I'm doing something I shouldn't and stop it proceeding. So Terraform is really attractive if I'm multi-cloud or hybrid: one technology to specify my desired state. It's not unique, there are other technologies that can do this as well; Terraform is just very, very popular right now.
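As a rough sketch of what that looks like for Azure (illustrative names, assuming the azurerm provider and a remote-state backend; this is not a file from the video):

```hcl
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
  # In a team setup the state file lives in a protected remote backend
  # (e.g. an Azure storage account) or in Terraform Cloud, not on a laptop.
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"
    storage_account_name = "savtfstate001"
    container_name       = "tfstate"
    key                  = "project.tfstate"
  }
}

provider "azurerm" {
  features {}
}

variable "location" {
  type    = string
  default = "eastus"
}

resource "azurerm_resource_group" "app" {
  name     = "MyAppRG"
  location = var.location
}

resource "azurerm_storage_account" "app" {
  name                     = "savappstorage001"
  resource_group_name      = azurerm_resource_group.app.name
  location                 = azurerm_resource_group.app.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}
```

`terraform plan` then compares this desired state against the state file and shows what it would do, and `terraform apply` makes the changes and updates the state file.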
What if I'm not multi-cloud; what if I'm just one cloud? I still might opt to use something like Terraform; there may be features that make it worth using. Or I can use a native technology. In my example, for Azure, there are the existing ARM JSON templates, and the new thing is called Bicep. I have links in the description and in the GitHub repo to deep dives on these technologies, but Bicep is a new language that's very easy for us humans to use to define what I want my state to be. That's the key point of these technologies, and all the main cloud providers have native equivalents.

Let's quickly look at an example. This time it's a Bicep deployment: we saw the portal, we saw PowerShell, now a Bicep file. I've got some parameters being passed into the file: a storage account name and a location. And I'm saying I want a resource; the symbolic name I use to reference it is "stg"; the type is a storage account (the storageAccounts type under the Microsoft.Storage resource provider) at a particular API version; and it has some attributes: a name, a location, a kind and a SKU. I'm just stating what I want: a storage account, in this location, of this kind, with this SKU. Then I have different parameter files: a dev parameter file that picks ZRS and North Europe (which is probably not going to go very well), and a good parameter file with Standard_LRS and East US. I just point the deployment at those files.

Back in our demo folder, I can do a what-if: tell me what would happen if I deployed that template to my resource group with the bad parameter file. There's no state file with Bicep; it just compares against the reality that exists right now, working directly off of reality. So it does a quick check and says it failed: policy violation, non-approved region, and again I could look at the detail.
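The Bicep file he walks through looks roughly like this; it's a reconstruction from the description in the video, so treat the API version, the kind and the file names as illustrative:

```bicep
// main.bicep - declare a single storage account; values come from per-environment parameter files
param storageAccountName string
param location string
param storageSku string = 'Standard_LRS'

resource stg 'Microsoft.Storage/storageAccounts@2021-04-01' = {
  name: storageAccountName
  location: location
  kind: 'StorageV2'
  sku: {
    name: storageSku
  }
}
```

And the what-if and deployment commands are along these lines (PowerShell, with assumed file names):

```powershell
# Ask what would change, using the "bad" (policy-violating) parameter file.
# Bicep has no state file, so this is compared directly against what exists.
New-AzResourceGroupDeployment -ResourceGroupName 'policytest' `
    -TemplateFile ./main.bicep `
    -TemplateParameterFile ./main.parameters.dev.json `
    -WhatIf

# Deploy for real with the "good" parameter file.
New-AzResourceGroupDeployment -ResourceGroupName 'policytest' `
    -TemplateFile ./main.bicep `
    -TemplateParameterFile ./main.parameters.json
```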
Now let's deploy with the good parameter file, and do a what-if again. Again it compares to reality, what exists right now, and works out what it would need to do; this should work because those parameters don't violate the policy. While that's running, notice it's going to create the account, and notice it's also adding a tag I didn't specify, because my policy says I want that tag present. If I go ahead and run it for real, then jump over to my resource groups and look at that policytest resource group, which was empty, it should go and create the resource to that configuration: it looks at reality, sees the account isn't there, and makes it so. Now, notice I actually overrode the SKU on the command line and passed GRS, so it's going to create it as GRS even though my parameter file said LRS (I ran the wrong command; that was meant to be the second demo). The portal can take a minute to catch up, but refresh and there it is: it created it, and you can see it's GRS.

Now let's run it again, this time the first command, where the parameter file says it should be LRS. My desired state is that this storage account exists and is LRS; maybe I've changed it in my repo from GRS to LRS. I'm not giving it a different command. It finishes and reports the storage SKU as LRS, and if we look at reality and refresh: this was geo-redundant, all I did was re-run the template with a different parameter value, and now it's LRS. If I ran it again and again and again it wouldn't care; it just makes sure reality matches. That's the key point: it's idempotent, I can re-run it as many times as I want and it will make reality match. And again, those Bicep files live in our repo.

What about actions that aren't possible in a template? Maybe I need to add some users to some permissions and I can't do that with a template. Most technologies will let you run arbitrary commands in some temporary, ephemeral environment, maybe a container: in Azure there's the deployment script component where I can run any commands I want, or I could call into Azure Automation. There are ways to accomplish that.
So the first thing we're going to do with these templates is integrate them with our pipelines. Remember we had that pipeline (it's too far away on the whiteboard for me to draw an arrow to it). I have a repository for my project, and in there are my .bicep file and my .json parameter file. What my pipeline does is: step one, check out the repo so it has that code available; step two, authenticate; step three, deploy, using the template to go and create those resources. There's a sequence here. To authenticate, the pipeline (this could be GitHub, for example) can use a credential stored as a secret in my repo, and maybe an environment-level variable sets the subscription, because I want to use the same template and just change where I'm deploying it. All of those pieces come together.

So let's look at using the template from a pipeline. First, I'm going to delete the storage account, because I want to see it created by the pipeline. That same Bicep file I executed manually, we now want to use from GitHub. I've already uploaded the files: there's my Bicep file under the part 06 IaC folder, along with the parameter files. Two things before we go any further. In Settings, remember I can have secrets: I have a repository-level secret, AZURE_CREDENTIALS, which can be used by any environment. I also have an AZURE_SUB_ID secret with one value for the "qa" environment and one for "test" (test is actually my dev subscription, qa is my lab subscription), so depending on which environment I target, resources get created in a different subscription.

Under Actions I created a "deploy bicep demo" workflow, and if we look at the YAML: I've configured it for manual execution, just because I didn't want it triggering automatically. There's a deploy-infrastructure job running against environment "test", which maps to my dev Azure subscription. It checks out the repo, then logs into Azure using the repository-level AZURE_CREDENTIALS secret, so now it's authenticated. Then, using the ARM deploy action, it takes the environment-level AZURE_SUB_ID as the subscription I'm deploying to, targets my policytest resource group, and passes the Bicep file with my bad parameters. So now we're doing it from a pipeline.

Let's trigger it. Run the workflow and we can see it's queued; click on it and you see all the parts that are going to happen. It waits for a runner, checks out the code, authenticates, all good, deploys the Bicep file to that subscription and... it failed, with these nasty errors. Why? No surprise: policy violations, all the same ones. The storage account SKU failed because it was ZRS, and the line above shows the non-approved region. That policy was enforced everywhere; this is why we love policy. I can obviously go and fix it: edit the workflow directly in the browser, change it from the bad parameter file to just my parameters file, and commit that.
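The workflow he shows is along these lines (a sketch reconstructed from the description; the action versions and file paths are assumptions on my part, while AZURE_CREDENTIALS and AZURE_SUB_ID are the secret names he calls out):

```yaml
name: deploy-bicep-demo

on:
  workflow_dispatch:        # manual trigger only, so it doesn't fire automatically

jobs:
  deploy-infrastructure:
    runs-on: ubuntu-latest
    environment: test       # 'test' maps to the dev subscription via its AZURE_SUB_ID secret
    steps:
      - name: Check out the repo
        uses: actions/checkout@v2

      - name: Log in to Azure
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}      # repository-level secret

      - name: Deploy Bicep file
        uses: azure/arm-deploy@v1
        with:
          scope: resourcegroup
          subscriptionId: ${{ secrets.AZURE_SUB_ID }}  # environment-level secret
          resourceGroupName: policytest
          template: ./Part06-IaC/main.bicep
          parameters: ./Part06-IaC/main.parameters.json
```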
Now we're passing the good parameters file, so run it again. Remember, right now the resource group is empty; there's nothing in there. It's deploying again: it finds the job, waits for a runner, and I can run this as many times as I want; it just makes reality match this state, and it won't error if the resource already exists. The job is set up, it runs through, and this time, because the values are good, we should absolutely see it work: log into Azure, deploy the Bicep file, validate the template, create the deployment, and we should see the resources. That's the whole goal. This could run every single time the pipeline kicks off; if the resource already exists it doesn't matter, it's "fine, I don't need to do anything", but it validates that what's there definitely matches my desired state. It's idempotent: I can run it as many times as I want and it just makes sure reality matches my desired end state. That's why this is so powerful.

Let's go back and prove it actually worked. You can see the run where I changed it to the good file deployed just fine, successful, and if I refresh the resource group (the portal can be a little behind, and it took longer than usual because I'm trying to record a demo) the storage account shows up. It was successful: no errors, no policy violations, and through our pipeline it went and deployed those resources. I could re-run it and it wouldn't fail, because the resource already exists and I'm just declaring my desired end state. That's the key point, and that's what we want for our infrastructure.

Now, what are we creating in that infrastructure? A storage account is probably the most basic thing, but let's say it's a virtual machine. What does that VM really comprise? An operating system, and on that operating system we might install runtimes, middleware, and the applications themselves. So how do I make the OS match my configuration? There are really two levels to this, and as much as possible I want to avoid custom images, so let's put those to the side for now. First: how do I configure the operating system, be it Windows or Linux?
For the configuration, once again our goal is some kind of declarative technology. There's PowerShell DSC (Desired State Configuration), there's Chef InSpec, Puppet, Ansible; all these different technologies I can apply to say: I want the OS configured this way, I want these apps present, I want this middleware and this runtime here, configured this way. For the most part these are declarative: I'm stating my desired intent, which again means I can version control them, I can see drift, I have all of those capabilities. (As I mentioned earlier, Terraform, Chef, Puppet and others can also handle the infrastructure layer in that multi-cloud way; here I'm focusing on the configuration inside the OS, but again I'm stating my desired intent.)

From a native technology perspective, one I've used is PowerShell Desired State Configuration. Here I'm looking at an "IIS install" file, and what am I doing? I'm saying I want the Windows feature IIS to be present (the actual feature is called Web-Server); I'm not telling it to install it, I'm saying I want it present. Then I'm saying I also want the ASP.NET Windows feature to be present. Then I'm using an extension to say the default website should be stopped; that's my desired end state. Then I'm saying files from one location should be present in another location as well (basically, make sure they're copied there and match), and I want a website (my Savill site demo) that points at that path to be present and started. I'm not telling it how to do it; I'm saying this is what I want reality to look like. Again, it's idempotent: if it's there already, great; if the files have changed in the source, copy over whatever changed; if something was modified, make it match. It's just my desired state: go and make that the reality of the situation.

I talked about Azure Policy earlier; Azure Policy actually has an in-guest capability that uses PowerShell DSC for Windows and Linux, and Chef InSpec for Linux as well, so using Azure Policy I can define what I want inside the guest with that one technology. So there are declarative ways to handle configuration inside the OS, and again, all of these files would be in my repo. That's a key point.
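A minimal sketch of a DSC configuration in that style follows. The Windows feature names are real; the site name and paths are illustrative, and the website resources assume the xWebAdministration DSC module, which is the kind of "extension" he refers to:

```powershell
Configuration IISInstall {
    Import-DscResource -ModuleName PSDesiredStateConfiguration
    Import-DscResource -ModuleName xWebAdministration   # provides the xWebsite resource

    Node 'localhost' {
        WindowsFeature IIS {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }

        WindowsFeature AspNet45 {
            Ensure = 'Present'
            Name   = 'Web-Asp-Net45'
        }

        # Default site should exist but be stopped
        xWebsite DefaultSite {
            Ensure       = 'Present'
            Name         = 'Default Web Site'
            State        = 'Stopped'
            PhysicalPath = 'C:\inetpub\wwwroot'
            DependsOn    = '[WindowsFeature]IIS'
        }

        # Copy the site content and keep it matching the source
        File WebContent {
            Ensure          = 'Present'
            Type            = 'Directory'
            Recurse         = $true
            SourcePath      = 'C:\source\site'
            DestinationPath = 'C:\inetpub\savillsite'
        }

        # The demo site, present and started, pointing at that path
        xWebsite SavillSite {
            Ensure       = 'Present'
            Name         = 'SavillSite'
            State        = 'Started'
            PhysicalPath = 'C:\inetpub\savillsite'
            DependsOn    = '[File]WebContent'
        }
    }
}
```

Compiling the configuration produces a MOF that the Local Configuration Manager applies and can keep re-checking, which is where the idempotency and drift detection come from.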
Now, what if I do need custom images? As much as possible we want to use the vanilla images from whatever cloud or vendor we're using, but if I do need one, most of the time the pattern is: there's some gold starting image, maybe from the marketplace (it was provided for us), and then there are customizations, things we want done, and we store those customizations in our repo, because they're our desired intent and we want to version control them. Then we have some technology; a very popular one is Packer, from HashiCorp, actually the same people who do Terraform. It takes the customizations I've defined, applies them to the starting image, and spits out my custom image, which I'd typically put into some kind of shared image gallery that my environment can then use. It can replicate to multiple regions so it's easily usable, and I can deploy my base VMs from it, maybe with certain agents pre-installed and certain key configurations I just have to have. The point is that these are the desired customizations Packer will apply. Some clouds have this built in: Azure has Azure Image Builder, which is built on Packer; I give it the customization and it spits out the images, and as I change things it creates new versions. The nice thing is these images can be versioned (1.2.3, 2.3.4, whatever), so as a new version is released, environments can see it and stage it out in a controlled manner.

That's great from a declarative perspective: we deployed the infrastructure, configured the operating system, the apps and the runtimes, and even where there are custom images, I'm using infrastructure as code to define the customizations that build them. Everything is based on something in a repo that's version controlled. I can rebuild anything: my custom images, the environment, the infrastructure, all from my one repo. Everything can be built, and that's the key point of having this as code: I can see exactly what's been done, I can see the version that was used, and I know exactly what I have at any point in time.
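For a feel of the Packer piece, here is a trimmed sketch of an azure-arm build (illustrative values; the authentication details and gallery publishing options are omitted, and none of this is from the video):

```hcl
# Build a custom image from a marketplace "gold" image, then apply our customizations.
source "azure-arm" "win2019_custom" {
  use_azure_cli_auth                = true
  subscription_id                   = "<subscription-id>"
  managed_image_resource_group_name = "images-rg"
  managed_image_name                = "win2019-custom-1.0.0"

  os_type         = "Windows"
  image_publisher = "MicrosoftWindowsServer"
  image_offer     = "WindowsServer"
  image_sku       = "2019-Datacenter"

  location = "eastus"
  vm_size  = "Standard_D2s_v3"

  communicator   = "winrm"
  winrm_use_ssl  = true
  winrm_insecure = true
  winrm_username = "packer"
}

build {
  sources = ["source.azure-arm.win2019_custom"]

  # The customizations we keep in the repo, e.g. install roles, agents, middleware.
  provisioner "powershell" {
    inline = [
      "Install-WindowsFeature -Name Web-Server",
      "Write-Output 'install monitoring agent here'"
    ]
  }
}
```

Azure Image Builder takes an equivalent customization definition and handles the build and gallery distribution for you.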
Let's continue to build on this. That was a VM; what about containers? We've got a Kubernetes environment, and Kubernetes is the orchestration layer for containers. A container fundamentally starts with a container image, and a container image is built from layers: some base OS, some tweaks to that OS, some customizations, and then a read/write layer at the top so things can actually be modified. We store these images in some kind of container registry; that could be Docker Hub, that could be Azure Container Registry, whatever. How do we get those images? It's layers, remember, so chances are there's some base container image, maybe a certain OS (an Ubuntu, a Windows Nano Server, whatever), which itself might be built from other images; someone built that image. Then what I have is a definition of what I want my container to look like: a Dockerfile. The Dockerfile specifies the changes we want made to create our container image, and guess where it lives (this gets a bit redundant after a while): in my repo, so I can version it and see what happened. Then through Docker we go through a build process: take that base, apply the Dockerfile, and build our new container image. That's step one, creating the container image I want to use.

Now, we're covering containers, Kubernetes, GitOps and that sort of thing next week in part seven, but just to show it quickly: if I look at my Docker folder, I have a Dockerfile, and you can see I'm building off a base container image and making a change to it: deleting its default web page and putting in my content. That's it. If I run the builds, it pulls the images, and you can see I'm building my own container image ("badfather", the thing I demo all the time) from the content in this folder; this website file is all of that actual content. So I have this very prescriptive Dockerfile of what I want to do, and from it I can recreate my container images very, very simply. Again, everything builds on everything else, and now we have a container image; that's a key step.
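The Dockerfile he flips to is tiny, something along these lines (my sketch; I'm assuming an nginx-style base image, and the actual image and paths in his demo may differ):

```dockerfile
# Start from a base web-server image, remove its default page, add our own content.
FROM nginx:alpine
RUN rm -rf /usr/share/nginx/html/*
COPY ./website/ /usr/share/nginx/html/
```

Built with something like `docker build -t badfather:v1 .` and pushed to a registry, that image is what the Kubernetes deployment that follows would reference.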
At larger scale we use Kubernetes, and the whole point of Kubernetes is that I tell it what I want. So what do we have? A Kubernetes deployment YAML file, and guess where that is: in our repo. That YAML file declaratively defines our desired state: I want this many of these pods on worker nodes with these taints and tolerations, I want this service available on this port. Essentially (one pen colour disaster later; never use that colour) I'm doing a deployment, and it uses particular images as part of that. When it runs, it spins up pods (maybe multiple, depending on the config) that run my container image, a container instance that offers my service; services get spun up too, which may be internal or public IPs with load balancers. All of that happens, but I'm essentially doing a deployment from a YAML file that states my desired configuration.

If we go and look (and again, I'll cover this in detail next week, so don't worry too much about it), this YAML file for Kubernetes states what I want: I want one replica of azure-vote-back; here's the template, the label is azure-vote-back, and the image is the redis image; I specify its resources and the ports I want; then I have a service for it. Then I want a front-end deployment, again one replica, built off a different image, v1 of this one, and then I want a service for that: the spec says it's a load balancer, and I want to offer port 80. I deploy that, stating my desired end state. If it's deployed and I then change my desired end state (I could comment a couple of lines out to make it an internal load balancer instead), it would modify what's deployed to match the new end state. We'll go through this in detail next week, but it's declarative: I'm saying what I want my end state to be, and it's as code. That's the key point of all of these.

And once again there's the idea of policy: there's Gatekeeper for Kubernetes, so I can put policy around this too. Azure Kubernetes Service actually has Azure Policy for containers, which uses Gatekeeper underneath, so I can define policy in Azure Policy in one place and it uses different technologies to do the enforcement, including at the Kubernetes level. So we have that single policy technology.

There are other components, like Helm charts. If an application is made up of multiple YAML files bringing in multiple services, rather than me manually working out which bits to use, a Helm chart is basically a structure that says: these are all the different YAML files that make this up, and I can deploy the Helm chart to get a complete solution deployed for me.
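The manifest he scrolls through is essentially the well-known Azure voting-app sample; a trimmed sketch of its shape (image names, resources and ports are from that public sample, so treat the details as illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-back
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-vote-back
  template:
    metadata:
      labels:
        app: azure-vote-back
    spec:
      containers:
      - name: azure-vote-back
        image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
        env:
        - name: ALLOW_EMPTY_PASSWORD
          value: "yes"
        resources:
          requests: { cpu: 100m, memory: 128Mi }
          limits:   { cpu: 250m, memory: 256Mi }
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-back
spec:
  ports:
  - port: 6379
  selector:
    app: azure-vote-back
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-front
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-vote-front
  template:
    metadata:
      labels:
        app: azure-vote-front
    spec:
      containers:
      - name: azure-vote-front
        image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
        env:
        - name: REDIS
          value: "azure-vote-back"
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
spec:
  type: LoadBalancer   # public IP; on AKS an azure-load-balancer-internal annotation makes it internal instead
  ports:
  - port: 80
  selector:
    app: azure-vote-front
```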
Something else we'll talk about next week is GitOps. Often we think about deployments as pipeline-driven: a pipeline runs, gets the repo content, and pushes it to the environment; in this case a pipeline would grab the YAML file and apply it to Kubernetes, and that pipeline has to run for anything to happen. GitOps is another concept (get out the gold pen). There's this Flux component, an operator that runs as pods (it does interact with other things, like Helm operators), and fundamentally it synchronizes against a repo of YAML files. It watches for changes to those YAML files, and it also watches whether a new version of an image has been released. If it sees a newly committed desired state (I've changed what I want to exist, maybe the number of replicas, or a new service), or a new version of the image, it applies it to the Kubernetes environment. I'm not having a pipeline run: once I commit, maybe to my main branch, it sees the new desired state and applies it; if it sees a new image version, it goes and updates. It basically runs itself. (To deploy Flux initially, I could still do it as code: a template that deploys the Flux GitOps environment, so I can use all those same capabilities.) This is super powerful: it isn't a pipeline, it's natively just running, and as I change my desired state in my repo it's constantly synchronized and applied.

I also drew a database up here; what about databases? Most of the time I define a database with SQL files, or some other structure, that are my configuration. Guess what: those SQL files go in a repo, and as part of my pipeline I can apply them to create my schema, create my tables, and modify whatever I want in that environment.

So every element (I never want a human clicking things or doing things) is as code: creating the infrastructure, defining the state of the operating systems, what goes into a shared image, creating my container images, deploying my Kubernetes environment, configuring the databases that the infrastructure as code created. Everything is as code. I've drawn a million different repo buckets all over the place so the picture makes sense, but really it's one project repo, which gives me version control and central management; everything is prescriptive, so I can understand it and easily track what's deployed. In it I have my Bicep files, my Terraform files, my JSON parameter files, my PowerShell DSC or Chef or Puppet configurations, my Dockerfiles to create my containers, my Kubernetes YAML manifests for the configuration I want, my SQL files, and my Packer files with all of those customizations.

And what's interesting: all that Azure Policy stuff I talked about, I can do policy as code too. Even the IT admin shouldn't be clicking buttons. Remember I said everything comes through ARM, so I can define policy using a template. Yes, the world has changed, but when the IT admin does that configuration, they can do it as code, check it into a repo, and as they commit, it can deploy via a pipeline: continuous deployment of the new version of the policy to the environment. That's the key point: all of these things are in the repo, so it's version controlled, it's prescriptive, it says what I want, and ideally it's declarative. Sometimes it isn't, but we want declarative because it gives us idempotency: I can keep re-running it every time the pipeline starts and it doesn't matter. That makes it auditable; I can track compliance, and whether there's drift or not. Those are key points to everything we're doing.

And yes, I drew many, many different repos; there are not really that many. It might depend on the project: policy, for example, is probably not project-specific, so it would not be in the project repo (I don't want to duplicate code); it would be somewhere else, centrally managed, and referenced from those various deployment points. But I have all of this goodness: I have not clicked a button in a portal, I've not manually run a script. All of this I can do as code, and that's critical when I think about those pipelines: it needs to be repeatable and trackable, not someone doing something slightly different between environments. It has to be consistent: all of my environments and stages, dev, QA, production, have to be production-like (maybe not the same scale, but configured the same way) so my testing is valid, and this lets me do all of that. Infrastructure as code really is a cornerstone of everything we're doing.

I really hope that made sense, and I hope it was useful. This was part six, and as I tried to stress, all of this is in the repo: all of the whiteboards, all of the code, links to the videos and to other videos that go into more detail (I had six by the time you watch this), and of course there's the playlist. I do recommend you start at the beginning and walk through in order; the parts build on each other to give you the complete view, but it's all there. As I always say, a huge amount of work goes into creating these, so I really would appreciate a like, subscribe and share with others if you find it useful. But until next time, take care.
Info
Channel: John Savill's Technical Training
Views: 4,295
Rating: 4.98 out of 5
Keywords: azure, azure cloud, microsoft azure, microsoft, cloud, devops, github, github actions, terraform, bicep, templates, infrastructure as code, iac, powershell dsc, kubernetes, docker, policy, governance
Id: 8AgEwM8RhAU
Length: 78min 3sec (4683 seconds)
Published: Tue Sep 21 2021