Getting Started with GCP and Terraform

Captions
In this video I'm going to give a crash course on getting started with the Google Cloud Platform. We'll be deploying a simple, scalable web application architecture using an nginx "hello world" Docker image. This is aimed at people with experience of AWS already, so some cloud computing background is beneficial. I'll be using the GCP counterparts of the familiar AWS services; some of them are similarly named, such as VPC networks; where there's EC2 in AWS, in GCP there's Compute Engine; and load balancing in GCP is located under Network Services. Now, this is a high-level overview of what we'll be deploying: it involves a VPC, a cloud load balancer and a compute instance group. This is it in a bit more detail, showing the various resources that we'll be connecting up; I'll go into these during the course of the demo. And finally, I'll be using Terraform for infrastructure as code, as it reduces the barrier to entry by using tooling already familiar to some people using AWS. So if you're ready to get stuck in, let's go.

Okay, so I'm going to assume that you've already got Terraform and the Google Cloud SDK installed on your laptop. If you haven't already done so, there are details on how to do this on my blog, which I'll include a link to in the description of this video. The first thing that we're going to do here is create a new project in GCP, and we're going to do that using the web console. What we've got here is the GCP home page, and what I can do is go into the projects drop-down and select "New Project". Similar to AWS accounts, there is the notion of projects in GCP: they serve as a container for the resources that you deploy as part of an application or a platform you're building. They also have the additional benefit that whenever you're done with a project, you just delete the project, and everything under that project is deleted automatically, so there's no risk of any lingering resources.
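As an aside, project creation can also be scripted rather than done in the console. A minimal Terraform sketch of that idea (not what the video does; the project_id below is a made-up example) might look like:

```terraform
# Sketch only: the video creates the project in the web console instead.
# project_id is a hypothetical example; it must be globally unique and,
# as noted above, cannot be changed after creation.
resource "google_project" "quickstart" {
  name       = "quickstart"
  project_id = "quickstart-example-123456"
}
```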
What you can also do quite nicely in the GCP UI is quickly switch between projects. So what I'm going to do here is create a project called "quickstart". You get an automatically generated project ID from GCP; you can edit it, so you can change it to what you want, but you can't change it later. And finally you can group it under a location if you want, but for the purposes of this I'm not going to. Okay, so we now have a project being created.

What I'm going to do next is initialise the gcloud CLI tool in order to work with it, so I'll run gcloud init. Similar to the AWS command line tools, there is the notion of configurations, which are similar to the credentials and profiles concepts that you'll know from the AWS CLI. There are a couple that I already have here; I previously had a project called "quickstart", so I'm just going to reuse that configuration but with new settings. There are a couple of accounts that I've authenticated against the CLI; I'm going to choose my personal one, and then there is the new project ID which we created in the previous step, so I'm going to assign that, and that's it: I've now got the Google Cloud CLI tools associated with my new project. Just to show that there's nothing already running in here, I can run gcloud compute instances list, and because I haven't used the compute service at all in this project yet, I get a prompt saying "you haven't enabled this yet, would you like to?". That's another difference between GCP and AWS: whenever you use a service for the first time in GCP, you have to enable it. This is what that looks like if you were to do it from the UI: if I go to Compute Engine and VM instances, for example... ah, okay, I need to be in my quickstart project. There you go, you just get a splash screen, same thing, you need to enable this, do you want to? I'm not going to do that for now, because I'm going to show you how to automate it in Terraform. So let's get on
with that. The first thing that I'm going to do is create the networking infrastructure that's going to serve as the base for my compute instances to stand on. I'm not going to hand-crank this all out here; I've got some things I came prepared with earlier, so I'll get them deployed and running, and then whilst that's happening I can give you a walkthrough of what they all are and how they fit together. I'm just going to grab my project ID out of there and chuck it into my Terraform.

The first thing I need to do is run a terraform init: if you are working with a project on a new platform and you need to install the Terraform plugin for that platform, terraform init is what does that. I've already got the Google plugin installed, so there wasn't really much to do there. The next thing I'm going to run is a terraform apply. If you're new to Terraform, you can either run a terraform plan, which generates the plan but allows you to save it for execution at a later date, or you can run terraform apply, which does it all in one go. So I'm going to run terraform apply here; what that does is generate a plan, which it's showing me here, telling me what resources it's going to create, and then I just say yes to confirm that I'm happy for it to start creating them.

So whilst that's creating, I'll give you a walkthrough of what we are standing up. The first thing in the Terraform is this constants area, where I specify values that I'm going to use in a number of places in my configuration. I've got the provider block here, which specifies Google as the provider, and that's what triggers the plugin install as part of terraform init. It takes the project ID, which I've stored up above, and it has a similar concept to AWS of regions and availability zones, which I've just set to us-central here. Next is the google_project_service resource, and what this allows me to do is automatically
enable various services within GCP; in this case, the compute service. After that we've got a compute network, which is effectively a VPC in AWS terms. I've called it "quickstart-network", and this is what you will see displayed in the GCP web console. There are two attributes here that I've explicitly set. The first is auto_create_subnetworks, which is false: if you don't set that explicitly, by default GCP will create you a subnet for every single availability zone, so you end up with 20 or so subnets being created. I don't want that, because I want to show how to put it all together manually, so I've set it to false. Similarly for delete_default_routes_on_create: I've set that to true so it doesn't create me any default routes; I want to do all of that myself as part of this demo too. depends_on is the final attribute in here. Terraform is normally pretty good at inferring dependencies between the resources that you build, but sometimes it doesn't quite get it, so what this says is: wait for the compute service to be enabled before you try to deploy this network. After that we are deploying a subnet as part of that network. These are the bare-minimum attributes you give to a subnet: the name, a CIDR range you want assigned to it, and the network you want it attached to, which is the one above. Next up we've got a google_compute_route, which is essentially the default route that would have been created, but as something I can manually create and control myself. And then finally there's a NAT gateway: it's a combination of a router and some NAT configuration that sits on that router. By default in GCP you get an internet gateway, but you don't get a NAT gateway, so if you want to be able to utilise one you have to create it, which is what I've done here. So let's see how our Terraform is getting on with that.
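The exact code walked through here is on the author's GitHub and blog; as a rough sketch, the networking resources just described (names, region and CIDR range are illustrative assumptions, not necessarily the video's exact values) look something like this:

```terraform
provider "google" {
  project = "quickstart-example-123456" # assumed: your generated project ID
  region  = "us-central1"
  zone    = "us-central1-a"
}

# Enable the Compute Engine API for this project automatically.
resource "google_project_service" "compute" {
  service = "compute.googleapis.com"
}

# The VPC: no auto-created subnets, no auto-created default routes.
resource "google_compute_network" "quickstart" {
  name                            = "quickstart-network"
  auto_create_subnetworks         = false
  delete_default_routes_on_create = true
  depends_on                      = [google_project_service.compute]
}

resource "google_compute_subnetwork" "quickstart" {
  name          = "quickstart-subnet"
  ip_cidr_range = "10.0.0.0/24" # assumed range
  network       = google_compute_network.quickstart.id
}

# Manually recreated default route out via the internet gateway.
resource "google_compute_route" "default" {
  name             = "quickstart-default-route"
  network          = google_compute_network.quickstart.id
  dest_range       = "0.0.0.0/0"
  next_hop_gateway = "default-internet-gateway"
}

# NAT is a router plus NAT configuration sitting on that router.
resource "google_compute_router" "router" {
  name    = "quickstart-router"
  network = google_compute_network.quickstart.id
}

resource "google_compute_router_nat" "nat" {
  name                               = "quickstart-nat"
  router                             = google_compute_router.router.name
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
}
```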
That's great, that's all been created. Next up I'm just quickly going to add a virtual machine to this as well. Okay, and whilst that is applying, I'll walk through the VPC config and demo it in the UI, and then give you a run-through of the instance configuration. So whilst that's creating, if we cancel that, open the burger menu and scroll down to VPC networks, you'll see, once I minimise this default one, that we have a quickstart network, which is what we've just created. It has a single subnet with the CIDR range that I assigned to it, and it all looks good from there.

We can see now that the virtual machine has been created. It will take a few minutes to bootstrap, so whilst that's happening I'll try to connect to it via SSH, just so we can see what's going on. You can connect to these instances using a web-based SSH client, similar to how you can in AWS, so I'll do that now. What that does is transfer some temporary keys over to the instance, which we can then use for this session. Whilst that's happening, I'll give a bit of a run-through of the instance first. We've got a google_compute_instance here, which I've called "nginx-instance", because that is what this demo is presenting: nginx instances being run. The machine type I've specified is an e2-highcpu-2, which is effectively a virtual machine with two gig of memory and two vCPUs. Now, there's this notion of tags here, and this is different to what you'll be used to in AWS: labels in GCP are what you'll be more familiar with as tags in AWS, whereas what "tags" refers to here in GCP terms is actually the concept of network tags. I'll touch on them in a bit more detail later, because things will start to become clearer then. Next up we've got the OS image which we use to bootstrap this VM, for which I'm just sticking with CentOS 7.
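Pieced together from the description here and just below, the instance resource looks roughly like this. This is a sketch: the startup script is an assumed simplification of the video's "crude Docker bootstrapping", and the network resource names are carried over from the earlier sketch rather than taken verbatim from the video:

```terraform
resource "google_compute_instance" "nginx" {
  name         = "nginx-instance"
  machine_type = "e2-highcpu-2" # 2 vCPUs, 2 GB memory

  # Network tags, used later by firewall rules to target this instance.
  tags = ["nginx-instance"]

  boot_disk {
    initialize_params {
      image = "centos-cloud/centos-7"
    }
  }

  # Roughly equivalent to user data in AWS; crude Docker bootstrap that
  # ends with nginxdemos/hello listening on port 8080 on the host.
  metadata_startup_script = <<-EOT
    yum install -y docker
    systemctl start docker
    docker run -d -p 8080:80 nginxdemos/hello
  EOT

  network_interface {
    network    = google_compute_network.quickstart.id
    subnetwork = google_compute_subnetwork.quickstart.id

    # Presence of access_config gives the instance an external IP;
    # remove the block to make the instance private (NAT egress only).
    access_config {
      network_tier = "STANDARD"
    }
  }
}
```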
We've got a metadata startup script here, which is similar to user data in the AWS world, and I've used it to create a crude Docker bootstrapping mechanism; the outcome is that we have an nginxdemos/hello Docker container running on port 8080. Then finally in here we've got some network interface configuration: you specify the network and the subnetwork that you want the instance to run on, and then there's this access_config block, which is what determines whether your instance gets an external IP or not. Another difference between AWS and GCP is that you don't define subnets per se as being public or private; it's set more at a per-instance or per-resource level via that access_config block, and the presence of an external IP decides whether traffic to and from the instance is routed via an internet gateway or a NAT gateway. So I've included the block here because I want an external IP. I've also specified this network_tier property. There are two network tiers in GCP: standard and premium. Premium will aim to utilise as much of the GCP infrastructure as possible when routing requests to your instance, for a little bit more money, whereas standard will route using more general-purpose internet infrastructure up until the traffic needs to enter the GCP domain.

Okay, so if we come back to our SSH window, we can see that there's a problem connecting to the instance, so I'm going to close that down for now and have a look at "view network details". This window gives you some helpful hints and tools to triage connectivity issues that you may have with your instances. If you go to ingress analysis down here, you can see that there's a default firewall rule that explicitly blocks all traffic. So what we're going to do is add a firewall rule that will allow connectivity via SSH. I'm just going to add that down there, and we'll do another apply. Okay, so whilst that's creating,
I'll give a quick rundown of the rule. I've called it "public-ssh" and linked it to the network. We are allowing TCP traffic in on ports 22 and 8080, inbound, hence the ingress direction here. The next bit leads on to my point about network tags from earlier. Firewall rules in GCP, as opposed to AWS, where you attach them to a security group and then assign that security group to an instance, are assigned to the network itself, and the way you attribute them to specific instances or resources is via the network tags associated with the instance. So what I'm saying here is: allow traffic in from anywhere that's targeting an instance tagged with this tag. Once that's done I can refresh this, now that Terraform has hopefully applied correctly, which it did, and I can see there's a rule assigned which will allow traffic on ports 22 and 8080. If I try SSH again, it should hopefully be a little quicker this time. Okay, great, and we're in. What I can do now is a sudo docker ps, and I can see that I have an nginx container up and running, listening on port 8080. To demonstrate the public connectivity that this instance now has, I can paste the external IP into my browser window and hit enter, and we can see that we've got an nginx container up and running, and that's the container ID it's running under. I can auto-refresh that and it will keep going as well, so that proves we've got external connectivity to the instance.

Next up, I want to see if I can scale this out using a load balancer. I have a load of load balancer config that I can apply; there are a few resources that make up a load balancer, so I'll just paste all of that in and run another apply, and whilst that's applying I'll give a run-through of what they all do alongside the deployment. So we'll create that.
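Before moving on, the "public-ssh" firewall rule described above can be sketched in Terraform roughly as follows (the network resource name is an assumption carried over from the earlier sketch):

```terraform
# Firewall rules attach to the network; target_tags scopes the rule
# to instances carrying the matching network tag.
resource "google_compute_firewall" "public_ssh" {
  name    = "public-ssh"
  network = google_compute_network.quickstart.name

  allow {
    protocol = "tcp"
    ports    = ["22", "8080"]
  }

  direction     = "INGRESS"
  source_ranges = ["0.0.0.0/0"]       # allow traffic in from anywhere
  target_tags   = ["nginx-instance"]  # only to instances with this tag
}
```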
So the first thing we've got in here is a compute instance group. These are similar to auto-scaling groups in AWS, except you can also have a variant called an unmanaged instance group, which is basically a manually maintained list of instances that you define and maintain in Terraform. I've got "terraform-webservers" as the name, and the set of instances here, the only one being the instance that I created further up in the Terraform code. I've also got a named port here: I'm saying that port 8080 will, from now on, be referred to as the "http" port. Next up we've got a health check. This should look fairly similar to the health checks you use in AWS: you can specify attributes such as the timeout, how often you check, and how many consecutive checks constitute a healthy or an unhealthy state. You specify the port name to use for the health check, which is the one we defined above, and I've also added a depends_on attribute here, because if you try to deploy this whole Terraform file at once you'll get a race condition with the compute service being enabled earlier. Next up we've got a backend service, which effectively wraps the instance group we created earlier with some specifics about load balancing. We can define the timeout in seconds, specify connection-draining timeouts, choose what type of load balancer this will be assigned to, whether internal or external, set the protocol and the port name used to move traffic on, and then assign a health check as well. So it's within the backend service that you have an instance group married up to a health check. Next up we've got a URL map and a proxy, specifically an HTTP proxy. This is where you'd add path-based routing if you want it, but for the purposes of this demo I'm keeping it very lightweight, so we'll just route all requests to our nginx container. The final
part of a load balancer, the front-facing aspect of it, is the forwarding rule. The forwarding rule takes an IP protocol and the port to use, so I've specified TCP and port 80 here for standard HTTP. You can use the load_balancing_scheme attribute to define whether it's an internally facing load balancer or an externally, public-facing one; I've gone with external in this case. Then, similar to the compute instance we set up, you can specify a network tier here as well, so I've just set that to standard. You then use the forwarding rule to forward traffic onto a target, which for us is the HTTP proxy defined above. Finally, you need to create another firewall rule which allows traffic from the load balancer to reach your instance. A couple of things about this: it's just allowing TCP traffic on port 8080 inbound to the instance, and because external load balancers in GCP aren't deployed within your VPC (they're a managed service as part of GCP), you have to specify a couple of CIDR ranges that you will allow traffic in from. Then, as per the other firewall rule, you specify the target tags you want this firewall rule applied to: the nginx-instance tag. Okay, another thing to mention is that load balancer resources in GCP can be either global or regional, so you have the flexibility of creating regional load balancers or, if you need the performance and reach of something more global, defining global load balancers as well. Although global and regional resources are used somewhat interchangeably here, the combination of resources I've used effectively builds you a regional load balancer, and that's determined really by the fact that this compute forwarding rule doesn't have the word "global" in it. If you wanted a global one, I think there's a global forwarding rule resource you can specify, but then you've just got to make sure that all of your corresponding
resources up above are supportive of a global forwarding rule as well. So now that we've got all of that hopefully deployed, great, I can go back into the GCP web console, go down to Network Services and into Load Balancing, and I should see my load balancer here. What I can do now is copy this IP and paste it, and you can see that we've got connectivity established through the load balancer as well. Just to double check: they're both showing the same server name, which is the same Docker container ID, so that's all good.

Now, the next thing I'm going to do is add a new instance, to demonstrate the load balancing capabilities and prove they work. I'm also going to remove public SSH and port 8080 access to the instance itself, so I can demonstrate that I can remove public access from an instance, effectively isolating it and making it private, but still have connectivity coming through the load balancer. The way I'm going to do this is: remove the firewall rule, remove the access_config component of this instance, duplicate the instance (very originally, just putting a number two at the end of the name), and then, in the web servers instance group here, add the extra instance to the list. So if we save and apply that, I'll just say yes to that, okay.

So whilst all that's going on, I'm going to demonstrate first that public connectivity has now been revoked from the original instance. If I go into my web servers group here, I can see that neither the original instance nor the new instance has an external IP, so they've been revoked, and I can see also that my request is now hanging in my web browser. That's good: there's nothing for it to connect to anymore, but the load balancer is still auto-refreshing just fine. So whilst we're doing that, I'll quickly walk through how to interact with the load balancer through the web console, whilst waiting for that
new instance to spin up. You can find load balancers again by going into Network Services and Load Balancing, and you can click on your load balancer there. You've also got backends and frontends here, relating to the backend service and the forwarding rule we created in the Terraform code. You can go in and drill around: it gives you details about the frontend, the host- and path-based rules, and your backend services. You can see that there's only one of two healthy instances at the minute, whilst we wait for our new instance to spin up. If you want more detail about your load balancing resources, and something that looks more representative of the resources we deployed in Terraform, you can click the advanced menu here, which breaks things down a little more: you now have forwarding rules, backend services, and backend buckets as well if you want to serve static content. What you can do in here is go to the backend service, and it gives us some more detail about how that backend service is being utilised. We can see that the requests are coming in from America and they're being served to one backend at the minute, still one of two instances healthy, but with a little bit of luck, in the next few seconds we should start seeing them flip over, which we are already: they're reporting as healthy, they've come up, and we've now got a load balancer flipping traffic between two instances. If I give this a refresh, we might see that it is now reporting as healthy. Great. Okay, so we've got two of two instances healthy there, and we now have a load balancer demonstrating that we can balance traffic across two nginx instances.

The last thing for us to do in all of this is destroy it; we've got the demo done. What you can do with Terraform now is just run a terraform destroy, and it'll very quickly ask: do you want to destroy all of your 15 resources?
Yes, I do, and that's it. At some point in the next few seconds we should see the auto-refresh start to hang as well, as things get torn down. So I hope you've enjoyed that and found it useful. All the Terraform code that I've walked through today is on my GitHub, and there's a write-up of this on my blog as well, which again you'll find a link to in the description of this video. But until next time, thanks for watching.
Info
Channel: Ben Foster
Views: 2,721
Keywords: Terraform, GCP, AWS, Cloud Computing
Id: 2xaZQHhNO04
Length: 27min 54sec (1674 seconds)
Published: Wed May 19 2021