Amazon EC2 Masterclass

Captions
Hello, welcome to this AWS webinar. My name's Ian Massingham, I'm a Technical Evangelist with Amazon Web Services based in Europe, and I'm going to be hosting today's session, which is a masterclass focusing on a deep dive on Amazon EC2. We have another webinar series called the Journey Through the Cloud. You can find the series page for this in the links panel in the webinar interface, or in the video description if you're watching this on YouTube on demand. That's a solutions-oriented look at how you can apply a variety of different AWS services to a particular set of usage scenarios or challenges that you might face in your organization, so check out that other set of webinars if you're interested in taking a look at AWS services from that perspective. Today we're on a masterclass, which means a deep dive into one specific AWS service, and we're covering EC2. We have a lot of material in today's session. Lots of the slides that we're going to be using have links added to them where you can find further information about the topics we're going to be covering, and if you want to make your life a little bit easier you can download the materials from today's session: in the files panel in the webinar interface you'll find a place where you can download a PDF of today's slides, and if you're watching on YouTube there's a link in the video description to SlideShare, where you can also download a PDF of the materials or just review them online and access those links, which you may find helpful. If there's stuff that we don't cover during today's session, or topics that you want to know more about, please ask us a question using the Q&A panel (once again, that's visible in the webinar interface) and a member of our team will get back to you after the webinar. We'll try to come to some of these questions at the end of today's session. We do have a lot of content today; EC2 is a very expansive
service with a lot of topics to cover, so we may have difficulty fitting everything in inside the hour. With that in mind we're going to get on in a second with the content. Just one other thing to say, which is that at the end of the session I'm going to do a couple of things. I'm going to put the webinar into Q&A mode, which will give you an opportunity to give us some feedback, and I'd like you to spend a couple of minutes just to give us a rating between one and five, with five being the best, to let us know how well we've met your expectations today. And of course if you've got qualitative feedback that you want to leave for us, we'd really welcome that via that Q&A panel as well. If you think there are things we could improve upon for this webinar for the next time we run it, please let us know about that. You'll also see some social media links at the end of the session, in addition to my Twitter which you can see on the screen right now, and if you follow us on the social media links that we show at the end you'll have a good opportunity to stay up to date with AWS and also continue to stay up to date with our education program in the English language here in the European time zone. So follow us on social media at the end of this session, and of course you can find me on Twitter as well. Okay, that's a little bit of housekeeping out of the way; let's get on with today's session, a masterclass looking at Amazon EC2. These masterclass series webinars are intended to be a technical deep dive that goes well beyond the basics of one individual AWS service, helping to educate you on how you can get the best out of the service upon which we're focusing, show you how things work, and show you how you can get things done using a variety of different interfaces to control and manage AWS services. The focus for today's session, as we've already said quite a few times, is Amazon EC2: the Elastic Compute Cloud, providing resizable compute capacity in the
AWS cloud. This service is designed to make web-scale cloud computing much, much easier, offering a true virtual computing environment, enabling you to launch instances with a wide variety of different operating systems, and of course to run as many or as few instances as you desire. The basic characteristics of the service: first, elasticity, so it enables you to increase or decrease capacity in minutes, not hours or days, and you can commission one, hundreds, or thousands of servers simultaneously. We've got good examples of customers operating on EC2 at really high scale. Everything's controlled with those familiar AWS web services APIs, and this means that your application can automatically scale itself up or down depending on its own needs. Those APIs offer complete control of your instances, and we also provide full root access or administrator access to each instance that you're running, meaning that you can interact with those instances just as you would any other machine, any other system that you might be managing. You're able to stop and start your instances while retaining data, then subsequently rehydrate those instances and bring them back into service, and of course you have the ability to restart or reboot instances using the web services APIs, and you have console output access to your instances as well. It's flexible, giving you multiple different instance types, which we'll cover in more detail later, as well as a variety of different operating system types and software packages. The scalability and reliability of the platform are also very important: the fact that there's a reliable platform there where instances can be replaced if necessary, rapidly and predictably commissioned. It's secure, working in conjunction with Amazon's Virtual Private Cloud, or VPC, something we're going to cover in much more detail later on in today's session. It's inexpensive, with the familiar on-demand billing model with no upfront
commitments, allowing you to pay for compute capacity by the hour. And it's easy to start: we'll show you during the session today just how easy it is to get started with EC2 using either Amazon Machine Images or the AWS Marketplace, where you can find a very wide variety of pre-configured software for use on the EC2 service. That's a quick overview of some of the benefits. I normally have a slide here which has a variety of different use cases on it, but for EC2 it's somewhat tricky to do that, and I think rather than trying to flood this slide with different potential use cases for EC2, I'll just say that the use cases for EC2 are any software which runs on a supported operating system. Any use case that you can put a compute resource to is a use case that can be fulfilled by the EC2 service, so there's not a lot more that you can say beyond that, just that you can do just about anything you can do with a computer with the EC2 platform, with the EC2 service. I want to say something quickly about the history of EC2. It's a service that's been around for a long time, and I doubt very much that when my colleague Jeff Barr wrote this post back in August 2006 he thought his holiday pictures would feature in a webinar in the middle of 2015. But it's an interesting post to go back through the AWS blog archives and take a look at, because you can see that the EC2 service started with very humble beginnings. We offered a single virtual CPU equivalent to a 1.7 gigahertz Xeon, 1.75 gigs of RAM, and 160 gigabytes of local disk space. This was billed at 10 cents per clock hour. So we're talking about the mists of time here, and you'll see over the course of the session that a lot of depth and a lot of breadth has been added to the EC2 feature set over that period. We've been operating EC2 for a long time, and as a result of that we've got a very well-evolved service which is very well understood and has some very powerful characteristics for developers today. So thanks
to Jeff Barr's original launch post back in 2006. The evolution of the service continues, if anything at an accelerating pace, today, and we've added a lot of features over the course of even the last few months. Here you can see an extract of AWS compute-related service announcements from the latter period of 2014 and the early part of 2015, and you can see that we're enhancing the service, launching both enhancements to the EC2 core service as well as evolving the way in which computing services can be delivered in the AWS cloud. We'll come back to touch on a few of these new innovations later in the session. Before we do that, let's take a look at the agenda and get into the meat of today's webinar. So what are we going to be covering today? We're going to be taking a look at EC2 concepts and fundamentals; storage and networking; monitoring, metrics and logs; security and access control; deployment; and cost optimization techniques. There is a lot to cover today, which means that some of these topics aren't going to be covered in the same depth as they might normally be in another masterclass webinar, because there's simply so much breadth to cover with this service. So, EC2 concepts. Importantly, EC2 is organized into AWS regions around the world. These are geographical locations where EC2 will launch the instances that you create. You should choose a region to optimize latency, minimize costs, or address specific regulatory requirements you might have for data placement. We currently have 11 AWS regions around the world, and it's important to make a distinction, because there's still confusion about this unfortunately: a region does not equate to a data center. A region comprises multiple availability zones, and each of these availability zones contains one to six data centers, depending on where you go in the world and precisely how big the region is in terms of scale. So the availability zones themselves, each of which as I've said
comprises multiple data centers, are distinct locations that are engineered to be insulated from failures in other availability zones, or AZs. They also provide inexpensive and low-latency connectivity to other availability zones in the same region, and those eleven regions that we have around the world contain between two and five EC2 availability zones. Here in Europe we have three AZs, or availability zones, in our Dublin, Ireland region, and two in our Frankfurt, Germany region. When you wish to launch an EC2 instance, you place it in one of those availability zones, and we provide a wide selection of different instance types that are optimized for different use cases. When I say different types, I'm talking here about varying combinations of CPU, memory, storage and networking capacity, and in the case of one particular instance family, the provision of NVIDIA floating-point GPU units inside the instances as well that you're able to access. So that's a little bit about fundamentals in terms of the logical constructs of regions, AZs and of course instances themselves. The next thing I'm going to do is take a look at some of the fundamentals of EC2. The first is that EC2 is available, as we've already mentioned, in several different locations around the world; these are called regions, as we've mentioned. If you check out the URL that you can see at the bottom of the page here, you can find always up-to-date information about the regions that EC2 is available in, as well as details of things like how many availability zones there are in each region, and also information about which specific instance types are available in which regions. There's an extensive list of supported operating systems and software: both operating systems, which you can see listed here and find out more information about in the EC2 frequently asked questions, as well as commercially packaged software, which is made available either under bring-your-own licensing schemes from ISVs or in the AWS
Marketplace, where you're able to purchase and deploy commercial software, and also deploy open-source software, directly onto the EC2 platform through a simple one-click installation process. So if you're interested in running commercial software on the AWS platform, I'd say check out the Marketplace. It may be possible for you to license your commercial software by the hour in the same way that you pay for your EC2 resources, and if you're not familiar with it, it's something that's well worth learning a bit about. There's integration between other AWS services and EC2. In the course of this session we're going to cover the Elastic Block Store (EBS), CloudWatch, VPC and AWS IAM, or Identity and Access Management, but there are also other AWS services that are built on top of the EC2 platform. It's quite interesting if you look at Amazon Elastic MapReduce, our Hadoop service, for example: when you create an EMR cluster, you're creating a group of EC2 instances which have various roles within that cluster. EC2 is a service that we use ourselves as the foundation for many other things that we do, and there is strong service integration between EC2 and other AWS services. There are also purchasing options. We'll return to this in more detail at the end of today's session, but there are several different ways in which you can optimize your cost with EC2. You can also import and export virtual machines in other formats. You may have VMware or Microsoft Azure virtual machines that you wish to bring in to Amazon Web Services to run on the Amazon EC2 service. You can do that with our VM Import service, and you can find out more details about the process for that at the URL you can see at the bottom of this slide. And then lastly, instance types. There are a variety of different instance types available to you that are supplied with resources in different ratios, whether that's the M4, the general-purpose balanced CPU and RAM instances, or the GPU-optimized G2, which gives
access to an NVIDIA Kepler floating-point unit, or whether it's the latest generation of C4 compute-optimized instances. You can optimize the instance that you use to match your workload, and more importantly perhaps, instances that are optimized offer their optimized resources at the best price point. So a gigabyte of RAM will cost less in an R3 than in any other instance type, a CPU cycle will cost less in a C4, and so on. You should choose your instance type in order to optimize for performance and for cost. When we name instances, we denote them in this format: the generation, the size and the family. There's a couple of these specific types and families that I want to talk about now. First of all, the T2. This is a low-cost EC2 instance with burstable performance. What happens with the T2 is that you're allocated a baseline performance level, but the instances also accrue CPU credits, and you can deplete these CPU credits for improved CPU performance over a period of time. Okay, so baseline performance: take the t2.small. It receives credits at a rate of 12 CPU credits per hour, and it provides baseline performance equivalent to 20% of a CPU core. If at any moment the instance does not need those credits, it stores them in a CPU credit balance for up to 24 hours, and then when your t2.small needs to burst to a level more than 20% of a CPU core, it draws down from that CPU credit balance to handle that surge seamlessly. Over time, if your workload depletes all the CPU credits that you have and you don't maintain a positive credit balance, you'll be capped to that 20% of a CPU core, and maybe then it's time to upgrade to either a larger T2 or to a fixed-performance instance type. It actually fits very well for applications that need burst performance, which is actually a significant percentage of many different types of apps, so you can take a look at that maybe as the first target when you're testing or piloting and establishing whether that offers a good
platform upon which to run your apps. At the other end of the spectrum, to some extent, are the C4 instances. These are the highest-performing compute instances that Amazon EC2 has ever provided. We're making use here of the Intel Haswell Xeon v3 processors, and we're exposing advanced features from the processor, such as the AVX2 enhanced vector extensions, and we're also offering an instance here with 36 of these Intel Haswell cores made available to you, together with 60 gigs of RAM, in the c4.8xlarge. So a very good instance for running compute-intensive or high-performance workloads. Next we're going to move on and talk quickly about the lifecycle of an instance. When starting an EC2 instance, you have a variety of different options that you can use to launch an instance: using the console, CLI tools, or AWS SDKs. We're going to cover these in more detail later in the session. You then configure your instance using either an AWS AMI, an Amazon Machine Image, which is the baseline from which your instance will be created, or you can use automation tools, and again we're going to cover that in a little bit more detail later in today's session. One of the tools that you have for doing this configuration is a service called the EC2 instance metadata service. This is a private web service that's made available to each EC2 instance that you create, and it contains metadata about instance state: things like the AMI ID, the hostname, block device mappings, the kernel ID that's being run, as well as information about the network, the placement of the instance, security groups, and the public hostname. They're all available through this EC2 metadata service, and a guide to EC2 metadata is available at the URL you can see at the bottom of this slide. One of the ways that you can use this is to access something called EC2 user data, which is specific data that is passed to the instance in the metadata service, and this is used by AWS-provided AMIs, Amazon Machine Images, to perform instance
bootstrapping actions, what happens at startup time when the instance is created. The AWS-provided AMIs have software components on them that read the EC2 instance metadata and will execute commands that they find there. On Windows you can use Windows scripting or PowerShell, and on Linux you can actually use any command interpreter, but here we're showing usage of the bash shell, so shell scripting, to execute an action at creation time on a Linux instance. We'll come back to this in a few minutes and show you how this can work in practice in a short demo, but this is actually a very powerful tool to allow you to bootstrap and configure instances at creation time, using data which is passed to the instance through that instance metadata service as user data. Once you've done that, you can connect to your instance using standard protocols, subject of course to having set a security policy that enables you to do so. Once again, we'll cover that in a few minutes, but you then have the normal approach for managing and controlling your instance using standard tools like SSH, RDP, or any other tool that can be carried over TCP or UDP. And then once you're done, you can terminate your instance to minimize cost. This is important: we really like to encourage customers to think about EC2 instances as disposable commodities that can be available when you need them and then stopped when you don't need them, and obviously sensible use of that facility of terminating or stopping instances is a really good way to optimize your costs and reduce the amount that you spend. So we really like to encourage customers to think about their compute as a disposable resource that's available on demand, and to use that as a mechanism for cost optimization. Again, we'll talk a little bit more about that later in the session. Okay, the next thing I want to do is show you a quick demo of getting started with EC2 on the AWS console, so let's do that right now. Here we are in the AWS console, and obviously we're
going to be working with the EC2 service today. You can see also that we're operating in the Oregon region, in the northwest US, here today. So on to the EC2 sub-console, which we'll access just by clicking, obviously, on the EC2 service here, and you can see we've got an empty account here with no running instances. I've got a pre-created key pair, which I'm going to use for accessing my instances later on, and I've got some security groups that I'm also going to make use of for accessing my instances as well. Obviously what we're going to do here is show you the launch process, so I'll go to Launch Instance, and at this point I'm taken into a wizard-driven launch process which will go through the few process steps that you can see at the top of the screen here. I'm going to launch several Amazon Linux instances by way of a demo, so I'll select the Amazon Linux AMI, and I'm going to launch several t2.mediums out of that T2 burstable performance tier; I'm just selecting the specific type of instance that I want to launch there. If I scroll down, you can see that all of the current instance generations and variations that are available are shown here in this list, so picking the particular instance type that I want is just as easy as picking from this list. Then I go to configure my instance details. I'm going to launch three instances, and I'm going to put them in my default VPC. What we are going to change here is that we're going to use some of that instance user data to automatically configure these instances for us at creation time, so I'm just going to cut and paste that from another window that I have on my other screen, and I'll paste it into the user data field that you can see at the bottom here. If you just review this content, you'll see that I'm executing actions on the Linux environment that's going to be installed on these instances to install Apache and a PHP environment, turn on the Apache server and start it, and I'm going to grab a simple piece of PHP
from this location on S3, which is a publicly accessible location on S3, and I'm going to move that into a location on the instances where it can be served to deliver a piece of content, just for the purposes of this demo. So the next thing we would do, if we were going to complete the process, is add storage. We're not going to change anything here; we'll show you this a little bit later on. And then lastly we can tag our instances with a name, so we'll put a name in here so that we can identify them easily in the console. Then we're asked to configure security groups, and these are the software firewalls that we're going to use to give us access to our instances so that we can access the content that's on them. For the time being I'm just going to enable HTTP access, so I will not be able to log into these instances remotely; I just want to show you that the bootstrapping actions that I've included there have concluded. You can see that we're not going to be able to connect to this instance using SSH; that's fine. Then we have an opportunity to review the configuration of our instances, and then launch. We are going to set up a key pair, which I have pre-installed as I said, which is the public-private key pair that's going to be used for SSH to these instances later on. We modify our security group, we go to launch instances, and you can see here that we can now view our instances in the console. If we hit that, you'll see those are three new instances that are being created. You can see the name I just typed in, their instance IDs, instance type, the availability zone they're being launched in, and the instance state. It will just take a couple of minutes now to launch these instances and get them into a state where they're ready for use. We're going to open another browser tab and access the HTTP server on one of those three instances that we've created, and when we do that you'll see that we have our very simple piece of PHP, which is actually accessing EC2 metadata itself to return
information about the instances from the metadata server. You can see this is instance i-95d7..., which is the top instance in our list here, so we have in fact just created three different instances there. We'll go to another one of them, just to illustrate that the metadata service is unique for each one of the instances that is running. This is the second instance in the list, i-96d7..., which you can see corresponds to the instance ID just here. So that's a very quick demo of getting started with EC2 using the AWS console. Let's return to the slides now and continue with the webinar. The next thing I want to show you is that you have parity of control if you're using the AWS CLI. The next couple of slides just show an annotated CLI command for creating instances via the AWS CLI, passing in a set of parameters including the image ID; the instance type that you want to launch, in this case t2.micro; the count, the number of instances that you want to create, in this case one; referencing security group IDs (here we're using the security group IDs themselves, but you can also use security group names if you slightly change the parameter that you're using, to the security group name parameter instead); and then lastly the key pair that you want to use to secure access to the instances that you're creating. If you do run that command on the CLI (here we've set the output of the command to table format using --output table at the end of the command), you can see at the top there that it will return back to us data about the instances that have just been created for us as a result of that API call. Of course you can get that output in text or JSON format if you wish to do so as well. I don't know how familiar you are with the AWS CLI, but what I would say about it is that you can get detailed help on every single AWS CLI command. So here we would run "aws ec2 run-instances help", and this would return to us comprehensive help, including a command synopsis and a full list of
all the different options and attributes that are available to you if you're going to use this specific CLI command. I've added a link at the bottom of the slide here to the latest command reference for EC2 within the CLI, and you'll find a similar command reference for every AWS service actually, but there's a lot of information there that makes it very easy to get started with the AWS CLI for scripting or for everyday automation of your AWS management activities. You can also control EC2 via the SDKs that we provide, and here's a code sample from the brand-new Python boto3 SDK showing the creation of an EC2 resource type and execution of the create_instances method, which is going to create an instance. Once again we're passing in data; we're actually creating exactly the same thing that we created a second ago: same AMI, same instance type, slightly different key pair name, same security groups. That will return back to us a value for the object, which is the instance ID that we've created, and then you can use the instances filter to terminate instances by instance ID, so immediately shut down that instance, and once again we're returned status information indicating that that has returned a 200 code and the status is shutting down. So it's very simple to work with EC2 using these SDKs. I've shown you boto3 there, but there's a variety of different SDKs available, as I'm sure you know. So that's getting started with a variety of different control options for EC2. Let's move on now and take a look at instance storage, or storage more generally to be precise. There are a couple of different storage options that are available to you when you're making use of EC2. The first is, as I just said, instance storage, which is ephemeral storage within the host computers that run your instances, and then you have Amazon EBS, the Elastic Block Store: network-attached block devices, and that's integrated with S3 for snapshotting. So if you take a
snapshot of an EBS volume, that will be stored with the familiar durability and regional specificity that you have with the S3 service. It's worthwhile just digging into these two options, instance storage and EBS, in a little bit more detail. So, instance storage is physically attached to the host computer; the type and amount varies by instance type, and you can find more information about instance storage by specific instance type on the EC2 instances page on the AWS website. Data here is dependent on the instance lifecycle. Contrast that with EBS, where you've got persistent block-level storage volumes that are attached over a network, available in a variety of different options: magnetic, general-purpose SSD, or provisioned IOPS SSD, and in this case your data is independent of the instance lifecycle. So EBS volumes can, and do, survive instance stop, start and termination actions. This is in contrast again to instance store: data is lost if the underlying instance drive fails, an EBS-backed instance is stopped, or the instance is terminated. In the case of instance storage volumes, they do persist through a reboot, but they don't persist through stop-start or termination actions. That's very important to understand. There are situations where you would still use instance storage, because those volumes are locally attached, physically attached to the host computer, so they offer excellent performance, and they also offer performance without I/O restrictions, so you can drive them to the I/O limit without any additional charges becoming applicable to your account, which is important for certain types of workload. But they shouldn't be used in situations where they are the only persistence mechanism for data which needs to be persisted, because that will invariably lead to issues down the track. EBS is a much better solution for that persistent data use case. Here we're replicating volumes automatically within the availability zone, so we're providing for good durability of volumes,
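The persistence rules just described can be captured in a small sketch. To be clear, this is a simplified illustrative model written for this transcript, not anything from the EC2 API; it just encodes which storage type survives which instance lifecycle event, as described above.

```python
# Simplified model of the EC2 storage persistence rules described above.
# Illustrative only, not an AWS API call: it encodes which storage type
# survives which instance lifecycle event.

def data_survives(storage_type: str, event: str) -> bool:
    """Return True if data on the given storage survives the event.

    storage_type: "instance-store" or "ebs"
    event: "reboot", "stop-start", or "terminate"
    """
    if storage_type == "instance-store":
        # Instance store persists through a reboot only; data is lost on
        # stop-start and termination.
        return event == "reboot"
    if storage_type == "ebs":
        # An independently attached EBS volume survives stop/start and
        # detaches with data intact on termination (the root-volume
        # DeleteOnTermination behavior is not modeled here).
        return True
    raise ValueError("unknown storage type: " + storage_type)

print(data_survives("instance-store", "reboot"))      # True
print(data_survives("instance-store", "stop-start"))  # False
print(data_survives("ebs", "terminate"))              # True
```

The point of the sketch is the asymmetry: only a reboot is safe for instance store, while EBS data is independent of the instance lifecycle.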
and you can use them in conjunction with EBS-optimized instances that have a dedicated throughput path between EC2 and EBS volumes. There are options there for between 500 megabits per second and 4,000 megabits per second depending on your instance type, so you can drive really significant performance through these EBS-optimized instances in conjunction with EBS volumes. The lifecycle, again, is something which you need to know about. EBS volumes attached to a running instance automatically detach from that instance with data intact when the instance is terminated. EBS volumes created and attached to an instance at launch are normally deleted, unless you modify the behavior of those volumes by changing the value of the DeleteOnTermination flag. So you can set your EBS volumes to persist by changing the DeleteOnTermination flag. This can help with persistence, and it just underlines that the data is independent of the instance lifecycle, as I said a second ago. EBS snapshots are integrated with S3, so a snapshot is a point-in-time backup copy of an EBS volume which is stored on S3, independent of both EBS and of the instance that the EBS volume is attached to. Snapshots are incremental, so only blocks that have changed since your most recent snapshot are saved. When you delete a snapshot, data exclusive to that specific snapshot is removed, but data which is required for subsequent snapshots in the lifecycle is not removed. They can be shared across AWS accounts, and they can be copied across regions. There is an option to encrypt EBS volumes, and if you do this, data stored at rest on the volume, disk I/O, and snapshots created from the volume will all be encrypted. The encryption occurs on the servers that host Amazon EC2 instances, and this provides for encryption of data in transit from EC2 instances to the EBS storage. There are a couple of different options for key management: it uses AWS Key Management Service master keys by default, unless you select a customer master key. If
If you create your own CMK, your own customer master key, you store that in the AWS KMS service, and you then have the ability to create, rotate, and disable that key, define access controls, and audit access to that encryption key as part of KMS. So it's a very good, strong service if you have a requirement to demonstrate key control as part of a compliance standard that you may have to adhere to. This is relatively new functionality, provided through the launch of the AWS Key Management Service. We've recently also enhanced EBS volumes to provide additional capacity and additional performance: General Purpose SSD now supports volumes of up to 16 terabytes in size, with a 10,000 IOPS burst and up to 160 megabytes per second of throughput per volume, and the Provisioned IOPS volume, also on SSD, is again up to 16 terabytes in size, with 20,000 IOPS and double the throughput, up to 320 megabytes per second. The next thing I want to show you is a quick demo of working with EC2 storage, with EBS, in the console, so let's jump over now and take a look at that. You can see we're back in the EC2 console here, looking at those three instances that we created for the earlier segment of this demo, and we're going to have a look now at storage with EBS, the Elastic Block Store. If we go to the EBS section of the navigation panel here and click Volumes, you'll see that we already have three eight-gigabyte volumes, one attached to each of the instances that we created earlier, and you can see the attachment information down here at the bottom; if we tab across these different instances you can see the attachments down there. Creating a new volume is as simple as just clicking the Create Volume button here, selecting the volume type that you want (we'll have a 250 gigabyte General Purpose SSD volume) and checking it's in the right Availability Zone, because EBS volumes are AZ-specific: you can only attach them to instances in the same Availability Zone. We're not going to encrypt this volume, so we'll just click Create, and that will result in the creation of that brand-new 250 gigabyte volume for us, as you can see here. Then we'll create a second volume. This one we're going to have as Provisioned IOPS, a little bit larger at 500 gigs, and we'll have seven and a half thousand IOPS for this volume; again it's in the right Availability Zone for us, and we'll turn on encryption. Because we have not already created a key in KMS, the Key Management Service, you'll see we only have the default EBS encryption master key available to us, so we'll use that, and that will then create that volume for us. You can see that our first volume, the 250 gigabyte volume, has already been created. If we wish to attach it to an instance, it's as simple as just right-clicking on it and selecting Attach Volume, and then the instance names that are available to us appear in this drop-down. We'll pick one, and it takes a second or two to attach the volume; and there you are, you can see that we've successfully attached that volume to one of the instances that we created a moment ago, cross-referenced here. It's really that simple to work with storage in the EC2 console. The next thing we're going to take a look at is EC2 networking, using the VPC, the Virtual Private Cloud feature. There's a lot of documentation on VPC at aws.amazon.com/vpc. VPC allows you to create a virtual network, your own logically isolated area within the AWS cloud. You can place infrastructure, platform, and application services into this virtual network, and if you do that, those will share a common security and interconnection framework. There are actually many different sub-features within VPC: ENIs, which are Elastic Network Interfaces that you can attach to EC2 instances to get persistent IP addresses that you might use as service endpoints (game servers are quite a good use case for that); subnets, where you can lay out a topology for your VPC and split it up into separate subnets; network access control lists, that allow you to control traffic flows to and from subnets; route tables, to control the flow of traffic in, out, and around your VPC; Internet gateways, for internet access; virtual private gateways, for building connectivity out of your VPC over a VPN to a corporate network or other data center; and Route 53 private hosted zones, which are private DNS hosted zones within your VPC. So there are a lot of features that you can use there to take control of your network topology and layout. A VPC itself can span multiple AZs, but each subnet must reside within exactly one Availability Zone, so you'd use at least two subnets in different AZs for each layer of your network in order to build a multi-AZ architecture, enabling you to tolerate failure more effectively. You have control over subnets and routing tables, and you can lay out your network in the topology which suits your use case. It might be a very simple approach with two public subnets, where you're going to distribute EC2 instances across two AZs and put them in different subnets; it could be something more complex, with public subnets, private subnets, and VPN routing; or you might be sharing services across a number of different customers using VPN-style hubbing, where you've got services in the cloud hubbed across a number of different VPNs. To simplify this we provide something called the VPC wizard, which has four different usage scenarios that you can see on the left-hand side of the picture on this slide, and you can use those to quickly and easily create VPCs with common layouts: a single public subnet, public and private subnets, and some more complex layouts that include private subnets and VPN access as well. So take a look at that if you want to create your VPC but you don't want to deal with the plumbing involved in building it yourself.
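The multi-AZ subnet layout described above (at least one subnet per AZ for each network layer) can be sketched with the standard library's ipaddress module. The VPC CIDR block, AZ names, and layer names below are invented for illustration:

```python
# Hypothetical VPC layout sketch: carving a VPC CIDR block into one subnet
# per Availability Zone for each network layer, using only the stdlib.
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")    # example VPC CIDR, not from the talk
azs = ["eu-west-1a", "eu-west-1b"]           # two AZs for a multi-AZ layout
layers = ["public", "private"]

# Split the /16 into /24 subnets and hand them out layer by layer, AZ by AZ.
subnets = iter(vpc.subnets(new_prefix=24))
layout = {(layer, az): next(subnets) for layer in layers for az in azs}

for (layer, az), net in layout.items():
    print(layer, az, net)
# Each subnet sits in exactly one AZ; each layer spans both AZs.
```

Keeping the carving deterministic like this makes it easy to see why each subnet can only live in one AZ while the VPC as a whole spans several.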
If you do want to exercise full configuration control over your VPC, a common way to do that is to use AWS CloudFormation, which is a templating language allowing you to describe AWS resources as code in a JSON-format template. You run that through the CloudFormation engine and have those resources created for you by CloudFormation. It's common for customers to use this as a mechanism for defining VPCs and exercising configuration control over them. If you want more information about that, aws.amazon.com/cloudformation is where you can find out more; there's also a masterclass webinar from earlier this year where we covered this topic in a lot of detail. It's possible to peer VPCs. This is a one-to-one connection between two different VPCs that you own, or a VPC that you own and a VPC that might be owned by a third party. There's a process for that: the requester, the VPC that wants to make the peering request, sends a request to the other party, and that other party then has to accept the request, at which point the VPC peering connection will be created and those VPCs can begin to exchange traffic. Obviously it's important that the address blocks do not overlap in order to enable you to do that. More details are at the URL that you can see at the bottom of the slide here, with several different topologies that you can use depending on what your use case is. Then we recently added something called ClassicLink. If you've been a long-term user of Amazon EC2 and you have EC2-Classic instances, which are not in VPCs and use a different networking model, ClassicLink allows those instances in the classic networking model to take advantage of services within VPCs. You enable this on a per-VPC basis, and that will then enable private communication between classic EC2 instances and those VPC-based resources. You can find out more about that if you check out the blog post referenced at the URL at the bottom of this page.
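That non-overlap precondition for peering can be checked in a few lines of stdlib Python; the CIDR blocks here are invented examples:

```python
# Sketch of the peering precondition mentioned above: two VPCs can only
# exchange traffic cleanly if their CIDR blocks are disjoint.
import ipaddress

def can_peer(cidr_a, cidr_b):
    """Return True when the two VPC CIDR blocks do not overlap."""
    a = ipaddress.ip_network(cidr_a)
    b = ipaddress.ip_network(cidr_b)
    return not a.overlaps(b)

# Example CIDRs are invented for illustration.
print(can_peer("10.0.0.0/16", "10.1.0.0/16"))  # True  - disjoint, peerable
print(can_peer("10.0.0.0/16", "10.0.1.0/24"))  # False - second sits inside first
```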
Elastic Load Balancing: this is a very powerful service that enables you to distribute balanced traffic across a fleet of EC2 instances. It has some advanced features like timeout configuration, connection draining, and cross-zone load balancing. I think the best way to show this is through a quick demo, where we're going to create an ELB and put it in front of those EC2 instances that we created a few minutes ago. So let's jump over now into the console for a quick demo of creating an Elastic Load Balancer that can distribute traffic across a fleet of EC2 instances. In the EC2 console we're in the Load Balancers section of the Network & Security navigation on the left-hand side, and creating a load balancer is as simple as just hitting the Create Load Balancer button. We'll then be asked to define our load balancer: give it a name, create it inside our default VPC, and we're going to be listening for traffic on the HTTP protocol only. Then we're going to assign a security group, reusing that HTTP access group which will allow traffic on port 80 to reach our load balancer. We do not add a secure listener next, as we're not running a secure web server on our fleet of instances, and for health checks we're going to be pinging the index.php page, which is the name of the root document on the three web servers that we set up earlier. We'll add those EC2 instances to our load balancer just by selecting them here, set a name, and once again we're asked to review and create. When we hit Create you'll see that our load balancer is being created, and we're warned that it will take a few minutes for our instances to become active in the new load balancer. If we hit Close at that point, you can see we've got information about the load balancer here, our instances are here, and it will take a few minutes for those instances to become active, so I'm going to speed the video up at this point and we'll return in a second, when those instances have been registered, to demonstrate to you that the load balancer has been successfully created. You can see the three instances are now in service, and if we visit the endpoint address that's been allocated to this particular load balancer, which you can see here, in a new tab in our browser, you'll see that our endpoint is now available. If we reload this, and enlarge the size of this text, and you watch the instance ID here, you can see that the instance ID rotates around the different instances in our group, because our traffic is indeed being balanced across those instances. That's how simple it is to create a load balancer using the ELB service; we'll just clean that down, and that's it. The next thing we're going to cover is monitoring, metrics, and logs. I'm going to move a bit more quickly now, as we've not got a lot of time left. Amazon CloudWatch is a monitoring service for the cloud resources and applications that you run on AWS. It allows you to collect and track metrics, collect and monitor log files, and set and respond to alarms, and you can use Amazon CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health. Most importantly, you can use those insights to react, and the way in which you can react is enabled because Amazon CloudWatch monitors resources and the applications you run in real time: when generating alarms, it can send notifications or automatically make changes to the resources that you're monitoring, on the basis of rules that you define. For example, you could monitor the CPU usage or disk reads and writes of EC2 instances and then use that data to determine whether you should launch additional instances to handle increased load, through an Auto Scaling action. You can also use it in other use cases, which we'll touch upon in a moment, to take actions in response to alarms. There's integration between CloudWatch metrics and the EC2 console, so you can actually see CloudWatch metrics and alarms for your instances directly within the EC2 console, without leaving it. Here you can see some examples from my personal account.
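As a rough sketch of how a CloudWatch-style alarm behaves (a simplification, with invented thresholds and datapoints): an alarm fires when a metric breaches its threshold for a run of consecutive evaluation periods.

```python
# Hedged sketch of the alarm idea described above: an alarm fires when a
# metric breaches a threshold for `periods` consecutive evaluation periods.

def alarm_state(datapoints, threshold, periods):
    """Return 'ALARM' if the last `periods` datapoints all exceed threshold."""
    recent = datapoints[-periods:]
    if len(recent) == periods and all(v > threshold for v in recent):
        return "ALARM"
    return "OK"

cpu = [35.0, 42.0, 81.0, 84.0, 90.0]                # CPUUtilization samples (%)
print(alarm_state(cpu, threshold=80.0, periods=3))  # ALARM - could trigger scaling
print(alarm_state(cpu, threshold=80.0, periods=4))  # OK - 42.0 breaks the run
```

Requiring several consecutive breaching periods, rather than reacting to a single datapoint, is what keeps an alarm from flapping on a brief spike.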
These are metrics coming off instances that I've been running there for some time, and it's very simple to access those metrics inside the console. There's a variety of monitoring scripts for EC2 instances, because what you're seeing there in the prior slide is information collected from the hypervisor and the other infrastructure that AWS manages. Obviously we don't manage operating systems on your behalf; that's something that you do under our shared responsibility model. So you may want to post metrics from your operating system, for example memory usage, into CloudWatch, so that you can view them alongside those other metrics in the console. We've made available some code samples in the form of scripts so you can see how to do this, and of course you can write your own scripting, or develop your own software, to enhance monitoring for your particular applications; it's very simple to post custom metrics into CloudWatch and treat those like you would any other metric. We also recently announced an additional service called CloudWatch Logs, which allows you to monitor log data from your applications and systems. It's stored in very durable storage, you can set retention periods, and you can access those log files via the web interface (the console), the CLI, or an SDK. We support Amazon EC2, AWS Lambda (which is a service for event-driven execution of JavaScript or Java code inside the AWS cloud), and several other services as well, so that's a mechanism for log management. The combination of these two things, metrics management and log management, allows you to take action in two ways: you can respond to metrics with alarms and actions, and you can also respond to log filters, which create metrics that then trigger alarms and take action. So you can drive actions directly from CloudWatch Logs as well as driving them directly from metrics.
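A hedged sketch of what posting a custom OS-level metric involves: build a metric payload and hand it to the CloudWatch API. The instance ID, namespace, and values below are invented for illustration; in a real script you would pass this payload to boto3's CloudWatch `put_metric_data` call, as noted in the comments.

```python
# Sketch of publishing a custom OS-level metric (e.g. memory usage) to
# CloudWatch. The payload is built as a plain dict; in practice you would
# hand it to boto3, roughly:
#   boto3.client("cloudwatch").put_metric_data(**payload)
import datetime

def memory_metric(instance_id, used_percent, when):
    return {
        "Namespace": "System/Linux",          # custom namespace, not AWS/EC2
        "MetricData": [{
            "MetricName": "MemoryUtilization",
            "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
            "Timestamp": when.isoformat(),
            "Value": used_percent,
            "Unit": "Percent",
        }],
    }

payload = memory_metric("i-0123456789abcdef0",   # invented instance ID
                        63.2,
                        datetime.datetime(2015, 7, 22, 12, 0))
print(payload["MetricData"][0]["MetricName"])    # MemoryUtilization
```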
The alarm actions can take a variety of different forms: Simple Notification Service notifications, which you may consume or use for a variety of other purposes; Auto Scaling actions, which we're going to talk about later in the session; and EC2 actions, which are simple things like recovering, stopping, or terminating instances on the basis of an alarm. So, for example, you could terminate or stop idle instances after a certain period of inactivity, or you could recover an instance, in combination with status checks, to automate instance recovery in the event of an instance hang or crash. You've got several different options there for taking simple actions with EC2, which you can use as part of your cost optimization strategy. Security and access control: two components here. First, access credentials; these are the API credentials, the access and secret key, that you use when accessing AWS APIs. Then there are key pairs: public and private keys used to authenticate when accessing EC2 instances via SSH on the Linux platform, or for decryption of administrator passwords on the Windows platform. You can also use IAM roles to pass access credentials to an instance, enabling your instances to work directly with the AWS APIs without having to store credentials on the instances themselves, or worry about credential rotation. I'm just going to give you a very quick demo of how you can use this feature, in combination with a service like S3, to authorize an instance to perform certain actions with an AWS API. So, a very quick demo of using IAM roles with EC2. We're back in the EC2 console here, and for this demo I'm actually going to launch another instance in just a second. Before I do that, I'm going to jump into the IAM console, for Identity and Access Management, and create a role that I can assign to the new instance I'm about to create. So I access my IAM console and take a look at my roles here; you can see I've got a couple of system roles that are already in place, but I want to create a new role (we'll call it our new EC2 role), and I'm going to make it possible for EC2 instances that are using this role to work with the Amazon S3 API, but only the Amazon S3 API. So we're going to allow our EC2 instance to call AWS services on our behalf, meaning it's an EC2 instance role that we're creating, and I'm going to attach the AmazonS3FullAccess managed policy to my role, which is a policy that will enable my instance to do anything within my account with the S3 API. And that's it, really: I've created that role, and as you can see here, I've got that policy attached to the specific IAM role I've just created. The next thing that I'm going to do is launch another EC2 instance that carries this role. So I go to EC2 and launch an instance. I can leave pretty much everything the same as I did previously; I don't need bootstrapping data this time around, because I'm not going to be bootstrapping this instance, but I am going to assign my new EC2 role to this instance, and then review and launch. In the case of this instance I do want SSH access, so I'm going to use this automatically created security group that will give me SSH access to this particular instance, and I'm going to use my key pair. OK, so we'll allow this instance to launch. If I view my instances, the console shows my new instance launching right here. I forgot to add a tag, so let's do that with inline editing while my instance launches. We'll just wait a second or two for this instance to launch, and while that happens we'll prep our console command, ready to log in to this instance so I can show you use of that role. So I have a terminal window here, and I need my instance's public DNS name, which is not actually available to me yet; here we are, so it's this public DNS name that I need. Back to my terminal, and what I'm going to do now is log in to the instance that I've created using SSH, using the key pair that I created earlier, and using the username ec2-user, which is the default username for instances created from the Amazon Linux AMI. You can see that I'm now logged in to that instance.
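The permission behavior in this demo can be mimicked with a toy policy check. This is a big simplification of real IAM evaluation, for illustration only: the role's policy allows any S3 action and nothing else.

```python
# Toy policy check mirroring the demo: the role allows any S3 action ("s3:*")
# and nothing else, so an EC2 API call is refused. A big simplification of
# real IAM evaluation, for illustration only.
from fnmatch import fnmatch

role_policy = ["s3:*"]   # actions granted by the attached managed policy

def is_allowed(action, allowed_patterns):
    return any(fnmatch(action, pattern) for pattern in allowed_patterns)

print(is_allowed("s3:ListAllMyBuckets", role_policy))    # True
print(is_allowed("ec2:DescribeInstances", role_policy))  # False - UnauthorizedOperation

# After detaching the policy, as at the end of the demo, nothing is allowed:
print(is_allowed("s3:ListAllMyBuckets", []))             # False
```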
This instance has been up just a couple of minutes, and I wanted to show you that I have permission here to access the S3 API. I can do that using the list-buckets command; this shows all of the buckets created in my account, and you can see that I just have one, an AWS demo bucket. Conversely, I do not have permission to access other AWS APIs. We'll go for the describe-instances API, the EC2 API (I need a region for this), and you'll see that I get an unauthorized operation error, because I don't have access permission for that particular API with my credentials. If I jump back into the IAM console, I'm going to strip myself of the role, or rather, remove the policy from the role I created. So, looking at that new EC2 role, we'll detach the AmazonS3FullAccess policy from it; that role now has no policies attached to it, and the default, obviously, is no access. If I jump back onto my terminal and repeat that command, you'll see that I no longer have permission to perform that list-buckets operation. I hope that shows how you can control the permissions that are available to an EC2 instance to work with the AWS APIs, by using IAM instance roles. OK, we're going to finish up today's session now by talking about deployment and cost optimization. The basic premise for deployment on EC2 is that if you need to SSH into your instance, or connect to your instance in some other way, your deployment process is broken. It's a slightly glib statement, but what we're saying here, really, is that it's ideal to treat EC2 instances as immutable components, and one of the ways to do that is to use Amazon Machine Images. AMIs are the base images from which new instances are created, used as a mechanism for deployment, and there are three different types: Amazon-maintained, a set of Windows and Linux images kept up to date by Amazon in each region; community-maintained, which are images published by other AWS users and shared through the Marketplace or through other techniques such as giving image IDs to other parties; and your own machine images, AMIs that you can create yourself from EC2 instances, which you can keep private or share with other accounts, effectively creating your own community AMIs. There's a continuum here, really, for deployment. At one end is baking an AMI, a fully ready-to-go machine image with all of your software installed and fully configured, where you just start from the AMI when you want to start a new instance; at the other end is dynamic configuration, which is using services such as the instance metadata service, user data, and cloud-init to perform actions on instances when they start. You can combine these two things, of course. I painted it as a rather black-and-white picture there, but you can combine them together, using base images with custom initialization scripts, which could be your golden base, and then using bootstrapping with that to pull in custom information and perform post-launch tasks, like pulling in code from source code repositories such as GitHub. So you've got this continuum that you can land upon: at one end of it is more heavyweight configuration and faster startup time, combined with a degree of static config and therefore less opportunity to change; at the other end you have continuous deployment and the latest code, environment-specific, but it takes longer to do that dynamic configuration at instance startup time, which means going too far along that continuum to the right might not be the best option when you want to combine it with Auto Scaling.
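The dynamic-configuration end of the continuum can be sketched as composing a user-data script for cloud-init to run at first boot. The package names and repository URL below are invented; in practice a string like this would be passed as the user data when launching the instance.

```python
# Sketch of composing a user-data bootstrap script that cloud-init runs at
# first boot. Package names and repo URL are invented for illustration.

def build_user_data(packages, repo_url):
    lines = ["#!/bin/bash"]
    lines += [f"yum install -y {pkg}" for pkg in packages]
    # Post-launch task: pull the latest application code at boot.
    lines.append(f"git clone {repo_url} /opt/app")
    return "\n".join(lines)

user_data = build_user_data(["httpd", "git"], "https://example.com/app.git")
print(user_data.splitlines()[0])  # #!/bin/bash
```

Everything this script does at boot is time your instance spends not yet serving traffic, which is exactly the trade-off against a pre-baked AMI described above.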
Now, Auto Scaling is a mechanism for doing two things: maintaining EC2 instance availability, in other words keeping your fleet at a predefined size, or automatically scaling your EC2 fleet following the demand curve of your applications, in response to metrics, typically metrics from CloudWatch, although you can also use other sources for those metrics and triggers. What you have here is reusable instance templates combined with automated provisioning and adjustable capacity. When creating an auto scaling group, three things are required. First, a launch configuration, which defines what Auto Scaling will create when adding instances; it's a one-to-one mapping, one launch configuration per auto scaling group at any one time, and there's an example of a command-line execution for creating a launch config on the slide. Then you have your group, an Auto Scaling managed grouping of EC2 instances which automatically scales the number of instances according to policy, and you can see an example command line for creating an auto scaling group there too. And then you have your policy, which sets the parameters for driving that up- and down-scaling action: typically CloudWatch alarms, though policies can be triggered by external sources as well, where you might drive your own scaling actions in response to something external. Maybe it's the weather that you're using to increase the number of instances, if you run a journey planner in the rail industry or some other weather-critical service; so you can use external metrics as well. Granularity is an important factor here. Consider this hypothetical workload profile: you can address it in several different ways to cover the profile of CPU usage across the day. You could have 41 instance-hours of an m3.large, which would cost you $6.31 per day and scale in coarse steps, or you could have 70 instance-hours of a t2.small, costing you $1.96, because the scaling is more granular and the units themselves are more cost-effective (lower cost because they don't have quite the same CPU performance). So you can consider that an opportunity for a pretty significant saving, and we would say evaluate carefully what instance size you're going to use to fit the profile of your workload. You can experiment with it, of course, tweaking your launch config and your auto scaling group over time.
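The granularity comparison works out like this. The daily totals ($6.31 versus $1.96) come from the slide; the per-hour rates below are simply derived from those figures, not quoted prices.

```python
# Reproducing the granularity cost comparison from the talk. Daily totals are
# from the slide; the implied per-hour rates are derived, not quoted prices.

m3_large_hours, m3_large_daily = 41, 6.31
t2_small_hours, t2_small_daily = 70, 1.96

implied_m3_rate = m3_large_daily / m3_large_hours   # ~$0.154 per hour
implied_t2_rate = t2_small_daily / t2_small_hours   # $0.028 per hour

saving = m3_large_daily - t2_small_daily
print(f"daily saving: ${saving:.2f}")
print(f"relative saving: {saving / m3_large_daily:.0%}")
```

More instance-hours of a smaller unit can still come out far cheaper, because the fleet hugs the demand curve instead of stepping over it.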
We recently added more responsive scaling policies, which enable you to escalate the number of EC2 instances that you add when your thresholds are breached by different amounts, effectively setting threshold bands and specifying the number of EC2 instances that are added if an alarm is hit in each of those bands. With this step policies model, the alarms are continuously evaluated during a scaling activity, and while unhealthy instances are being replaced. So let's say CPU load increases and the first step in the policy is allocated, but during the warm-up period the second threshold is triggered: in this scenario Auto Scaling will look at both policies and choose the one that has the highest magnitude. It's a good way to help you respond to changes in demand more rapidly, and more aggressively if you need to do so, and there's a comprehensive blog post on that from Jeff that you can find at the bottom of the page. Other deployment options: we've not got time to cover everything, but there are two that are very interesting. AWS CodeDeploy is agent-based deployment of software onto large fleets of EC2 instances. We have a CodeDeploy agent that is deployed onto the instances, a small package for Linux and Windows. It reads the YAML deploy spec file, or AppSpec file, which contains the deployment info, and it will then execute deployment hooks, place artifacts or assets onto the file system in locations that you specify, and then execute further deployment hooks, enabling you to do smart deployment. It's aware of things like Availability Zones, and it works with Auto Scaling, so it's a very good tool that you can use for code deployment onto EC2 instances once they're running and have the agent installed, policy-controlled by IAM.
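The step-policy selection rule described a moment ago, where the highest-magnitude applicable step wins, can be sketched like this. Band edges and adjustment sizes are invented for illustration:

```python
# Toy version of the step-scaling behaviour: each band maps a breach size to
# a number of instances to add, and when several steps apply during a scaling
# activity, the highest-magnitude one wins. Bands are invented for illustration.

steps = [
    (10, 1),   # metric 10+ over threshold -> add 1 instance
    (20, 3),   # 20+ over                  -> add 3 instances
    (40, 5),   # 40+ over                  -> add 5 instances
]

def adjustment(breach):
    """Largest step whose lower bound the breach has crossed."""
    applicable = [add for bound, add in steps if breach >= bound]
    return max(applicable) if applicable else 0

print(adjustment(12))  # 1
print(adjustment(45))  # 5 - highest-magnitude step applies, not just the first
```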
Additionally there's the Amazon EC2 Container Service, so if you're deploying containerized workloads in Docker, this could be a good solution for you. It's a highly scalable, high-performance container management service that allows you to really simplify the process of launching and scheduling large numbers of Docker containers on a substrate, an underlying foundational layer, of container instances running on Amazon EC2. This is a service that has continued to evolve quite quickly: just the day before yesterday we had a post from Jeff Barr about how we've added new features to this over the first half of the year, things like additional regional support, support for a long-running service scheduler, load balancing support, console support, and CloudTrail integration. We quite recently announced at DockerCon in San Francisco that we're going to add support for Docker Compose and Docker Swarm in the near future as well, so if you're interested in running Docker on AWS, aws.amazon.com/ecs is where you can find out more. We're pressed for time today, and I'd like to apologize for that, but before we wrap up let's just look at cost optimization with EC2, specifically the three different options that you have for cost-optimizing your purchasing of EC2 instances. There's On-Demand, which is the familiar compute capacity by the hour with no long-term commitments or upfront payments; that's the default when you launch an instance. Then you have Reserved Instances and Spot, which we'll focus on quickly now, and you can find out more about all three options if you check out the purchasing options page linked at the bottom of this slide. Getting started with Reserved Instances: this is pre-reserving EC2 capacity to optimize your costs over time. There's a comprehensive guide to this on the AWS website, which you can find at the URL on this slide, but savings of upwards of sixty to seventy percent are available to you if you choose to reserve your EC2 capacity for a one- or three-year period.
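As a back-of-envelope illustration of what a saving in that sixty-to-seventy-percent range means over a year: the $0.10/hour on-demand rate below is an invented round number, not a real price, and the 65% discount is just a midpoint of the range quoted above.

```python
# Illustrative arithmetic for the "upwards of 60-70%" Reserved Instance
# saving. The $0.10/hour rate and 65% discount are invented assumptions.

on_demand_rate = 0.10                    # $/hour, hypothetical
hours_per_year = 365 * 24

on_demand_annual = on_demand_rate * hours_per_year
reserved_annual = on_demand_annual * (1 - 0.65)   # assume a 65% discount

print(f"on-demand: ${on_demand_annual:.0f}/year")
print(f"reserved:  ${reserved_annual:.0f}/year")
```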
You can use EC2 usage reports or AWS Trusted Advisor, which is a feature of our Business-level premium support service and above, to identify opportunities that you might have to reduce your costs with Reserved Instances. So check out those resources; if you're running long-lived workloads on the EC2 platform, don't miss an opportunity to save yourself money. Spot Instances: these are spare Amazon EC2 instances that you can bid on, where the price fluctuates in real time based on supply and demand. It's integrated with the console, so you may have noticed, when I launched my first instances earlier, that I could have requested Spot Instances at that point. If I'd done so, I would have been asked to place a bid price. If my bid price had been higher than the current market price, I would have been allocated the instances; if it had been lower, my request would have been put in a queue and would basically wait there until the price fell to a point at which my bid exceeded the market price. That's how the model works, and there are quite a few third-party and AWS tools that are integrated with Spot price bidding. It's used across a wide variety of different use cases (analytics, big data, financial modeling, geospatial, and others), and you can find much more detail, once again, if you check out the URL at the bottom of this slide. Some resources where you can learn more: obviously the most important and primary resource is the EC2 product detail page, and there are also several good getting started guides: the Getting Started with EC2 guide, and an excellent guide on getting started with Auto Scaling, which includes the new feature that I talked about earlier for accelerated scaling under heavy load, or in situations where metrics exceed one or more thresholds, so that's worth checking out. If you want to build your AWS skills more generally, please check out our training and certification offerings. And that concludes everything that we have for you today.
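Coming back to the Spot model for a second, the bid-versus-market rule can be sketched in a few lines; the prices here are invented for illustration.

```python
# Sketch of the Spot allocation rule: a bid at or above the current market
# price is fulfilled; a lower bid waits until the price falls. Prices invented.

def spot_request(bid, market_price):
    return "fulfilled" if bid >= market_price else "queued"

print(spot_request(0.050, 0.032))  # fulfilled - bid exceeds market price
print(spot_request(0.020, 0.032))  # queued    - waits for the price to fall

# A queued request is fulfilled as soon as the market price drops far enough:
price_history = [0.032, 0.027, 0.018]
states = [spot_request(0.020, p) for p in price_history]
print(states)  # ['queued', 'queued', 'fulfilled']
```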
We're not going to have time for questions, because we've overrun, and I'm very sorry about taking up more of your time than we said we would; there's so much content to do with EC2 that, as I hope you appreciate, it's very difficult to get it all into an hour. If you want to stay up to date with us on social media, you can find me and AWS at the Twitter accounts that you can see here, and of course if you do have any questions, anything that you feel we haven't covered during today's session, we're going to leave this webinar open in the Q&A mode that we'll switch to now. Please leave us a rating, and please also let us have your questions; we will come back to you by email over the course of the next few days. Thanks very much for giving us a bit of your time to learn a little bit more about AWS, and we hope to see you on another webinar very soon. Thank you; bye-bye.
Info
Channel: AWS Online Tech Talks
Views: 235,131
Keywords: Amazon Web Services (Website), Amazon Elastic Compute Cloud (Software), Software (Industry)
Id: jLVPqoV4YjU
Length: 62min 42sec (3762 seconds)
Published: Wed Jul 22 2015