Best Practices for Getting Started on AWS

Video Statistics and Information

Captions
Hello, and welcome to this AWS webinar. My name is Ian Massingham, I'm a Technical Evangelist with Amazon Web Services based in Europe, and I'm going to be hosting this session for you today. This is the second in our three-webinar series on getting started with AWS. In the last episode, a week ago, we introduced the AWS cloud and talked a little about what some of the AWS services are. Today we're going to move on from that and discuss some best practices for getting started with AWS.

As usual, there are materials available for download for this session. You can find those in the files panel of the webinar interface if you're watching us live; if you're watching on demand, check out the link in the description of the YouTube video. If you're watching live, you can also submit questions, which we'll answer either verbally during or at the end of the session, or follow up on in an email from our solutions architecture team here at AWS. So please do ask us questions — we've got a team on standby to make sure you get the answers that you want — and if you don't feel you got the answer on the first attempt, please do send us a follow-up email and we'll work with you to make sure we get you the information that you need.

At the end of the session today I'm going to show you some social media links that you can use to stay up to date with AWS webinars, and with other AWS events and news as well, so be ready at the end of the session with your Twitter account, maybe to follow us on social media. Also at the end of this session you'll have an opportunity to rate the session and provide some feedback. We'd really appreciate it if you could let us know how we're doing in this program: towards the end of the session we'll switch the webinar into a Q&A mode, which will also bring up a rating panel where you can score us from one to five, with five being the best. So we'd really appreciate your feedback on the session today — let us know how we're doing and, most importantly, how we can improve things.

OK, so let's get started with the content for today's session. It's going to be a relatively short session today, with three content items to cover. First, very quickly, we're going to discuss getting started with AWS and talk about some resources you can make use of which will guide you through the process of, for example, creating an AWS account, launching your first virtual machine, and some other common tasks. We'll then move on to the bulk of today's session, which is about eight best practices you should focus on when getting started with AWS. I'm going to take a quick tour of those eight best practices, talking about each of them in turn and introducing some resources, concepts and ideas that you might want to bear in mind when you're getting started with AWS, to maximize the value that you get out of those first early steps. And then thirdly, very quickly at the end, we're going to show you some resources — some links — that you can follow to learn more about the topics we cover during today's session.

OK, so let's get started with getting started with AWS. The main thing I want to do here is give you a pointer to a really important part of the AWS website. You can see the URL at the bottom right of the screen here: aws.amazon.com/getting-started. If you visit that, you'll find the Getting Started Resource Center — you can see a screenshot of part of it on the slide here.
It will walk you through things like creating an AWS account, a short walkthrough of how to launch a virtual machine and how to store media and files, as well as further information about some of the concepts and services behind, and provided by, Amazon Web Services. If you're a new user, I really would like to encourage you to spend five minutes after you've watched the session today to bookmark that link and use the resources that are available there. They're intended as an introduction for new users, and I think you'll find them a really useful resource for getting started from the very ground level with AWS services.

So let's move on and take a look at these eight areas of best practice. The first is about choosing your first use case. This is, maybe surprisingly, something that can have quite a big impact on whether or not your first experience with AWS is a successful one, and the guidance that I always give to customers here is to make your first project a SMART one. Think about the way in which you might set goals if you're a manager, or be set goals if you're a contributor in your organization — you're probably using this acronym: specific, measurable, achievable, realistic and time-bound. I think this applies really well to selecting the first use case, the first project, that you might deploy within the AWS cloud.

Be specific: set clear goals, and target a specific project, goal or use case that you want to work on. Make sure it's measurable: understand when you will claim success, and use metrics to help you define that. Do something that's achievable: specify a goal which is reachable for that first project. And make it realistic — one that can realistically be achieved. Don't attempt to boil the ocean, or eat the elephant in one bite, in your first project; make it something you could call a quick win, something where you can quickly assess whether or not you've been successful and move on to the next project. And "quickly" also speaks to time-bound: you want to understand the timescales, and how long you're going to give yourself to achieve this first goal that you're setting yourself in working with AWS. That's a really important and useful piece of guidance: don't try to boil the ocean on day one. Start with something small, and make sure that first project has a good set of success criteria coupled with it, so you can establish whether or not you've been successful, or maybe tweak and tune what you're doing if you want to improve on the level of success that you've had with that first project.

There are four common use cases that we see customers deploying with AWS as their first project which are broadly successful, and those four use cases are as follows. The first is development and test. It's a super use case to start with, because by definition you're working with non-production workloads, so if you have a few learning experiences with this first project, it's unlikely to be visible to your end users — and it's a great idea to learn how to work with the AWS cloud on non-production use cases. There are also some natural characteristics of the cloud which are really well aligned with dev and test workloads: things like the fact that you can spin environments up and down on demand, and that you can decouple development and test environments from normal operational constraints.
The fact that you might not have enough environments when you need them, for example — that's the kind of thing which is very easy to overcome with the AWS cloud, and you can find a lot of financial and other benefits from running dev and test workloads in AWS. This is something that we'll return to quite soon in our webinar series Journey Through the Cloud, where we talk specifically about some of the benefits and practices that you might use when running dev and test workloads on AWS. It's a very common first use case, and we really encourage customers to take a look at it as their first use case — it's a great way, as I've said, to learn about AWS with non-production workloads.

Similarly, backup and DR. Once again you're working with non-production workloads here, non-production data, although obviously there is an area of criticality if you need to restore from a backup or invoke your DR. You can take your data or business application step by step from non-production into production as part of your DR testing process, so once again it can provide a useful framework within which you can learn about the AWS cloud, understand the dynamics of the cloud, and test them carefully during that controlled failover process. Backup and DR is also a very common first use case that customers will look to.

Thirdly, greenfield projects. If you're developing or deploying a new application or a new workload, it's often helpful to start with a blank sheet of paper, and the cloud certainly gives you an opportunity to do that. You can build your architecture from scratch and embody some of the best practices for cloud computing in these unconstrained projects — and of course you can tear down that architecture and rebuild it as necessary if you're using AWS, at very low cost and with very minimal impact. So greenfield projects can also be a good way to get started with the AWS cloud; this quite often takes the form of self-contained web applications, or storage-intensive projects like document archiving and the like. So take a look at greenfield projects.

And then lastly, you might have a specific pain point that you want to focus on. You might have a specific service which is causing undue cost or heavy management overhead, and by applying the AWS cloud to those challenges you may be able to solve them. Some of the things we see here quite often are processing workloads, search indexing and media streaming, with archiving once again popping up here too. So that's the fourth use case, or fourth class of use case, that you might want to think about when trying to identify the first project that you can deploy on the AWS cloud.

Once you've got that first project identified, think about the lifecycle. We often see customers move through a common lifecycle when they're deploying that first project: starting with a proof of concept, understanding the specific AWS services that are going to apply to the use case, testing for performance, architecting for scale, and, importantly, developing the capabilities and skills of the team — whether that's in-house or working with an AWS consulting partner to help with that proof-of-concept project. That can help you familiarize yourself and your team with the way in which AWS operates, and with how your workloads will be deployed and managed inside the AWS cloud. Once you've done that, you may move into production with that workload, implementing things like monitoring,
improving your control framework around change control and management and your ongoing security and operations management framework, and of course working to scale that service up. Once you've stabilized in production, you can then move to the automation phase of this lifecycle: looking to automate corrective actions, implementing auto scaling, working with continuous or zero-downtime deployments, and making sure that your system backup and recovery procedures are slick and work very effectively. So we mostly see customers move through the lifecycle in this kind of manner. That's all about workload selection and what the project evolution is going to be.

Number two in this list of eight best practices is to think about the foundations that you're going to be working with, and there are four components to this. The first is account structure. There's nothing to stop you — and in fact in some circumstances we would encourage you — creating multiple AWS accounts. You can use accounts like environments where you might need separation: maybe separation of duties, or maybe you want to restrict the span of control that individual users have. So you might create an account for your development environment and another one for test; you might create an account for each business unit that you have in production; you might create accounts orientated around individual products or services that you're going to support on the AWS cloud. That's very common, and you need to create an account structure that makes sense for you. Have a think about that before you start: what are the business units, environments, products or services that you want to support with the AWS cloud over the medium and long term, and how are you going to map that onto the account structure that you're going to create?

This is also important from a billing perspective. You can do something called consolidated billing, which we're going to talk about later in this section. It's a very powerful feature of AWS: it allows you to have multiple separate accounts but let one account pick up the bill for those multiple sub-accounts, and you can use that as a mechanism for exercising cost control and improving the granularity of cost reporting — something that's actually quite difficult to do with traditional on-premises resources, where you may have to arbitrarily allocate capital resources to individual business units or projects. That's quite tricky to do; this is a helpful feature of AWS that makes it very simple, and we'll talk about it in just a couple of moments. So think about billing as well, and think about how you're going to structure billing, mapping your cost controls across to the organizational units or accounts that you're going to set up.

One thing you can do here is to set up delivery of billing reports. This is a really powerful feature: in your AWS console, visit the billing and account management section, which you can find from the dropdowns at the top right-hand side of your console, then visit your billing preferences, and you can turn on — enable — delivery of billing reports with resources and tags. This is how you can get granular information about how your AWS services are being consumed: you make use of these resources and tags to record metadata about your AWS usage, and then you can report on that.
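For illustration, applying such tags from the AWS CLI might look something like the sketch below — the resource IDs and tag values here are hypothetical, and you'd also need to activate the tag keys as cost allocation tags in your billing preferences before they show up in the reports:

# Tag an EC2 instance and an EBS volume with cost-allocation metadata
# (hypothetical resource IDs and tag values)
aws ec2 create-tags \
    --resources i-1a2b3c4d vol-4d3c2b1a \
    --tags Key=CostCenter,Value=Marketing Key=Environment,Value=Test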
That structure looks a little bit like this: you'll have a master account which you're going to use for billing — this is where we're going to consolidate the billing data from our sub-accounts — and what we've done here is to create a specific email address for our AWS resources within our company. This is helpful because it means that if an individual who might be responsible for billing leaves the company, you can redirect that alias elsewhere, or maybe have that alias go to a list of individuals within the company that hold responsibility for payments. That's a very good idea to do on day one.

Once you've got that master account, you can then create sub-accounts, linking them together through this consolidated billing relationship, and each one of those sub-accounts can have its own administrator and its own administrative namespace in the form of IAM users — we'll talk more extensively about what IAM, Identity and Access Management, is later in the session today. Once you've got your sub-accounts set up, you can then start to make use of tags. Tags are key-value pairs — metadata that you can associate with AWS resources within each account. By setting an appropriate tagging schema, and using it across a variety of different sub-accounts, you can analyze and report upon the usage of AWS resources across the different accounts that you're operating, aggregating this at the top level and using it to determine where your bills are being generated — in other words, who is consuming AWS resources and how much spend they are accruing as a result. It's a really good idea to do this: different administrators per sub-account, different management namespaces in their separate IAM namespaces, but the same tagging scheme across the different accounts, to enable you to consolidate your costs together for cost reporting purposes. I'd really recommend doing that.

The other thing that you can do within the different accounts is set up spend alarms. This uses an AWS service called CloudWatch: inside CloudWatch you have metrics, and you can trigger alarms when those metrics pass certain thresholds. One of the metrics that you can report upon is estimated spend, so you can set an alarm for estimated spend in each of your different accounts, which would trigger an email to the account administrator in each sub-account to let them know they've passed a particular spend threshold. You can have multiple spend thresholds per account, and as many metrics as you want. It's a really good way of helping you control your bill — or at least have visibility of your bill in-month — and preventing cost overruns is of course another helpful benefit that you'll derive if you implement that, so I'd recommend doing that as well. (There's a short sketch of creating one of these alarms from the CLI just below.)

In the master account, by virtue of turning on those billing reports, you get programmatic billing access. What this does is deposit CSV files into a specified S3 bucket, and you can use those CSV files for analysis. You can see, for example, here you can identify the owners or the stacks that are being used, by applying metadata tags that contain this data to each resource that you create, and you can use that to slice and analyze the total cost that you're incurring through your usage of AWS.

There are also a variety of third-party cost management tools available to you. You can see three examples here — Cloudability, CloudCheckr and Cloudyn. These are third-party tools that will take those billing data files I talked about creating a second ago, through that programmatic billing access, and analyze them with some very sophisticated cost analysis, reporting and even optimization tools.
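As promised, here is a minimal sketch of a spend alarm from the CLI. It assumes billing metric data has been enabled for the account (billing metrics are published in the us-east-1 region), and the SNS topic ARN standing in for the email notification is hypothetical:

# Alarm when estimated month-to-date charges exceed $200
aws cloudwatch put-metric-alarm \
    --region us-east-1 \
    --alarm-name monthly-spend-over-200-usd \
    --namespace "AWS/Billing" \
    --metric-name EstimatedCharges \
    --dimensions Name=Currency,Value=USD \
    --statistic Maximum \
    --period 21600 \
    --evaluation-periods 1 \
    --threshold 200 \
    --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:us-east-1:111122223333:billing-alerts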
I really would recommend taking a look at this ecosystem of cost management partners if you think that you're going to incur anything like significant spend on AWS. They will give you enhanced visibility, and also allow you to do some modeling and cost optimization that could be helpful to you over the long term. Many of these partners have free trials, so once again I'd like to encourage you to take a look at them and see if they could offer value in your particular use case. Those are just three examples; many more are available in the AWS partner ecosystem, which you can find at aws.amazon.com/partners — if you look in the technology partner section you'll find many more partners operating in this space of providing cost management and analysis services to customers. So there's lots in the third-party ecosystem.

Moving on from accounts and billing to the other areas that you need to focus on in this foundational stage: think about your access keys. These are the keys that you will use to access any EC2 instances you create in your account, via SSH or via the process of encrypting and decrypting the administrator password on any Windows instances that you might create. You can create as many of these EC2 key pairs as you wish, so you need to think about how you're going to separate access, and separate roles, by using different public/private key pairs, by rotating keys, and maybe by using bootstrap automation to grant developer access with unique key pairs — all things that you can consider at this stage when thinking about your key management strategy.

And then lastly, groups and roles. You can use Identity and Access Management groups to manage console users and API access: setting up a group hierarchy, assigning permissions to groups, and then placing users in those groups is a much simplified way of managing who can do what within your AWS account. There's also an IAM construct known as IAM roles, which allows AWS to manage API access credentials for instances. Granting system entitlements to make use of AWS resources is an abstract concept, I guess, for those of you that might be new to AWS, but it's a very powerful tool. You have an account; within that account you have groups, and to those groups you can add users, doing things like restricting access using multi-factor authentication and attaching usage policies to those users to give them, or deny them, access to specific sets of AWS resources. You also have these roles: once again you can assign credentials to roles, and those are system credentials that allow EC2 instances or other AWS services to interact with the AWS APIs. For example, accessing an S3 bucket or starting additional EC2 instances could be something that you grant to a role, which is then able to do that programmatically from a system, or from code that you might develop using the AWS SDKs. It's a very, very powerful tool.

You attach policies to these groups or roles. There's an example policy here — a very simple one — which just allows a particular group or role to work with only a subset of AWS services; you can see that if you attached this policy to a group, users within that group would only be able to work with the AWS services that are listed there. Policies are created using JSON, and they control access to the AWS APIs. It's a deep subject — we could probably spend, and in fact later in the year we will spend, pretty much an hour talking solely about IAM. It's a very powerful and deeply featured service.
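The example policy itself isn't captured in these captions, but a policy of the kind described — one that allows a group or role to work with only a listed subset of services — would look something like the following; the particular services granted here are just an illustration:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:*",
        "s3:*",
        "cloudwatch:*"
      ],
      "Resource": "*"
    }
  ]
}

Any API call outside the listed services is implicitly denied for users in a group carrying this policy.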
But for the purposes of this session, if you want to learn more about it, I'd advise that you visit aws.amazon.com/iam and take a look at the product documentation that's available there. It will give you a good grounding in the way in which you can use user credentials, groups and roles to manage permissions within your AWS account and make sure that you have the right security posture.

Now, moving on from that to talk more generally about security, which is the third area that we're going to focus on today. AWS operates what we call the shared security responsibility model. Because you're building systems on top of the AWS cloud infrastructure, the security responsibilities are shared: AWS has secured the underlying infrastructure, but you must secure anything that you put on the infrastructure or connect to it. For infrastructure-as-a-service offerings like EC2 or S3, you have more control and therefore you have more configuration work to do. For example, with EC2 instances you're responsible for patching the guest OS on an instance once you've created it; you're also responsible for things like configuring security groups — these are akin to software firewalls that permit outside access to your instances — or setting up the VPC, the virtual private cloud subnet, that your instance is going to reside within. Similarly, with S3 you have to set access control policies to permit access to any S3 buckets that you might be creating. Moving up the stack a little, with platform services like RDS, or Amazon Redshift, our petabyte-scale data warehouse service, you have less security configuration to do: you don't have to worry about launching or maintaining instances, or touching the guest OS or applications — we do that for you — but you do still need to manage things like security groups to permit access to those RDS instances or those Redshift data warehouses from other AWS instances that you might be running. And there are certain AWS security features, like IAM, which we've just talked about, that are account-wide, and you have to configure those no matter which AWS services you're going to use.

Security is a very, very important area for the vast majority of customers — in fact the top priority, really, for the vast majority of customers — and I just want to share with you some good practice that we've seen customers use to help them deploy or migrate applications to AWS in a secure manner that meets their needs. The first area to talk about is understanding your customer and determining your security stance. You can see here that you have an external audience that you might have to work with — namely your customers, or stakeholders within your business — and from that perspective you need to focus on the certifications that you need, the security management processes that you're going to use, and things like penetration testing to validate the security practices and processes that you might have put in place. That's supported — hence the drawing here — by an internal audience and a regulatory audience that you also need to work with: things like the architecture that you use, to ensure that you have security by design, and the administration of the services that you're using, to ensure that they are managed and controlled in a secure manner. That will lead you, ultimately, to use tools like IAM, just as one example of a tool that you might use to satisfy that administration requirement. On the other side — maybe less hands-on, though
still quite technical in nature — is the regulatory audience. This is where things like security certifications, white papers and the QSA process become important, and these are things that AWS can help you with. We have done a lot of work to ensure that we meet the highest security standards; having said that, security assessments still take time, so it's important to allow for this in your planning cycle and to undertake architecture reviews early in your design and deployment process. We find that the customers that follow these good practices make use of things like AWS's ISO 27001 accreditation and our PCI DSS Level 1 service provider accreditation. There's nothing to fear in achieving the right level of security accreditation: you need to work with the accreditors in the way that many thousands of our customers have already done, and make sure that you do that up front in your design and deployment process.

On the resources that are available to help you do this well: the first thing to say is that there's comprehensive documentation about AWS security practices and processes at aws.amazon.com/security. If you visit that location on the AWS website, you'll find a lot of resources — things like the risk and compliance white paper, the AWS security processes white paper, and details of how you can get access to the consensus assessments that AWS has completed for previous security audits; those are available under a customer NDA. So take a look at the documentation, and if you're running any kind of security-critical workload in AWS, make contact with us using the processes there to get access to the resources that you need. Many resources are self-serve; for some of them you need to contact us via a form. We're obviously very happy to work with you and help you achieve whatever security accreditation you need.

And lastly, build upon the security features of AWS to implement security by design. We've obviously got many, many customers that are using Amazon Web Services today to run security-critical workloads, and there's a helpful characteristic of the cloud here, which I call the highest-common-denominator effect: whenever a demanding customer requests a particular uptick in our security processes, or asks for another service or feature that will help them implement or validate their security stance on AWS, the nature of our platform means that it becomes available to all customers that are using Amazon Web Services. This has led to some great innovations in the area of security and control. Things like IAM, which we've talked about already, enabling you to implement a tiered access model, with separation between the APIs that control AWS services and the public/private keys that are used for instance logon — enabling a separation of roles between administrators, operations personnel and developers. Also things like the ability to create temporary API credentials, which you might need for permitting untrusted devices to access the AWS APIs for a short period of time — you may make use of that if you're rolling out or deploying mobile applications on AWS. Controlled and audited instance firewalls — security groups — we've touched upon already, and we've got other services like AWS CloudTrail and AWS Config that allow you to record and audit access to the AWS APIs, or record and audit configuration changes within your environment: very helpful security management tools, and again tools that are quite hard to replicate in a traditional on-premises data center or private data center environment.
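To make the security group configuration mentioned a moment ago concrete, here's a minimal sketch using the AWS CLI. The VPC ID, group ID and bastion address are hypothetical; in practice, the group ID comes back from the create call:

# Create a security group for an application tier inside a VPC
aws ec2 create-security-group \
    --group-name app-tier \
    --description "App tier: SSH from the bastion host only" \
    --vpc-id vpc-1a2b3c4d

# Allow inbound SSH only from the bastion host's address
aws ec2 authorize-security-group-ingress \
    --group-id sg-1a2b3c4d \
    --protocol tcp --port 22 \
    --cidr 203.0.113.10/32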
Then there's the use of the VPC, the virtual private cloud: things like subnet control, and the creation of bastion hosts which might be restricted — not operated when not needed for access, with their startup restricted by multi-factor authentication, by MFA. You can build a very strong security posture using that kind of technique. And things like VPC peering, enabling private connectivity to other VPCs: peering your VPCs together allows you to share resources across multiple networks owned by you or by other AWS accounts. All of these innovations have been driven principally by customers asking us to implement these features — it's very customer-directed development. And then lastly, Direct Connect and VPN access: you can create secure private network connectivity from your network, or from a point of presence on your wide area network, into AWS, using either Direct Connect, which is a private, high-bandwidth, low-latency network connection directly from you into the AWS cloud, or a VPN service where the endpoint is provisioned for you and you can control traffic routing from that VPN in and out of your VPC using your own defined routing table. So that's a very powerful set of security features that you can build on top of, in order to make sure that you build the right security posture for your particular application or workload.

In a similar vein, building on the strengths of the AWS cloud more generally is also something that we encourage customers to think about before starting to deploy applications inside the AWS cloud, or migrating applications to the AWS cloud. There are some obvious strengths of the cloud: the fact that you can support variable capacity requirements, and that you can quickly and repeatedly create standard technology stacks or defined reference architectures — so review your application architectures early and see if you can make use of any of those characteristics from day one. There are also services that can be plugged in, if you like, to pre-existing architectures that you may have, which might enable you to reduce costs or improve reliability with a really minimal level of effort and a really minimal level of outlay. Things like Amazon S3 and CloudFront: you can use those to deliver static content that might be required by public applications, and do so in a very cost-effective and easy-to-configure way. Thirdly, there's an opportunity in many cases for the cloud to help improve business performance more generally: helping you deliver products more quickly if you're an organization that relies heavily on software development, maybe reducing the capex that you need for IT, or boosting your agility. You can reinvest that money in other activities that your organization might get more value from, because you're not having to spend it on IT any more. This is something that we see many, many customers do some time into their usage of AWS — rebalancing their investment, spending less money on IT overall — and of course the earlier you look at that in your process and your deployment cycle, the more quickly you can deliver those benefits to your organization. That's also something we really encourage customers to take a look at. And then fourthly, the opportunities that you might have to deliver services that are more robust, more agile or more secure by adopting automation throughout the lifecycle of your applications: using things like fully scripted deployments, using IAM and EC2 instance roles to improve your security posture, or rolling or continuous
deployment techniques — again, things that customers often turn to after they've been using AWS for a little bit of time. We'd really like to encourage customers to look at this early, and establish whether or not you can take those strengths of the cloud and get value out of them for your organization as quickly as possible.

Looking at a few different dimensions here, there are really four areas that customers can focus on — and do focus on — to deliver these benefits. Things like disposable compute: designing systems that can tolerate instance failures also helps you dispose of compute when it's not required, so your costs are going to be more elastic as a result, and will more accurately track the level of demand you might have for your applications. So think about disposable compute. Think about the flexible nature of capacity in the cloud: the fact that you can design systems that dynamically scale from zero to hundreds or thousands of instances, and that you can use a variety of different events, schedules or triggers to drive that auto scaling, so that capacity availability accurately matches the amount of demand that you have. You can cost-optimize by using the cost-effective storage options that are available to you within Amazon Web Services: things like S3 for durable and cost-effective storage of data and objects, and also other techniques like using RDS or DynamoDB to provision and scale the persistent storage that might be required for your application, whether that's relational or non-relational database storage — once again a very cost-effective way to deliver that. And lastly, automation and control: automating everything from deployment to scaling to incident recovery from failure can dramatically reduce the amount of resource that has to be devoted to operations, and also help you build applications and services that are robust and recover from failure with little or no human intervention. Who wouldn't like to avoid being called at 2:00 a.m.
to fix an application? That's something that you can resolve and automate using some of the features that are inherent within the AWS platform.

Just on the deployment and scaling side of this, there are techniques for bootstrapping available to you — that is, getting your computing resources into a state where they're ready to start supporting your applications. There are a couple of different approaches available to you here. One is bootstrapping with custom AMIs — Amazon Machine Images, the baseline images from which you create the instances that you're going to run within EC2. The process for this, really, is that you create an instance from your operating system of choice, configure your environment, and install the software that you might need; you can then snapshot that instance and create an AMI from that snapshot, and that enables you to launch a fully configured instance from the AMI — and to do so very rapidly, and in whatever volume you require. This is a technique that can be used to enable auto scaling or programmatic deployment of resources, so that's one of the options available to you: the use of custom AMIs.

The second is bootstrapping through the use of the metadata service — configuring on startup. The metadata service contains and provides information about a running instance, and it's specific to the particular instance: accessing that API on different instances will give you different data. Even though, on the face of it, you're accessing the same API, it's actually private to the specific instance that it's accessed from. You can use it to gather information about your instance — things like the ID of the AMI that was used to launch it, the instance profile you're using, the public hostname of the instance, and so on. You can also embed user data in this metadata service, and that user data can contain scripts which will be executed on launch; you can use this to perform programmatic configuration of your instances at startup time. You can use it on the Linux platform with shell scripts, or on the Windows platform with PowerShell, so it's available across platforms. When you use this user data to drive bootstrapping, you can do a variety of different things with it: installing software; pulling application or data packages from S3; publishing metadata for the instance into another system — for example registering with a monitoring system that you might be using, or registering as part of a cluster — all of which can be executed via this bootstrapping process; and also setting up the security profile of an instance, for example determining whether or not it's a production instance, and maybe configuring it in a certain way on the basis of that piece of metadata. That's also open to you as an option.
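As an illustration of that second approach, a Linux user data script might look something like the sketch below. The package, bucket and paths are hypothetical; pulling from S3 assumes the instance has an IAM role granting read access, and a yum-based distribution with the AWS CLI preinstalled (as on Amazon Linux) is assumed. You'd supply the script at launch, for example with the --user-data option of aws ec2 run-instances.

#!/bin/bash
# Hypothetical user data script, executed once at first boot.

# Query the metadata service, which is private to this instance
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
AMI_ID=$(curl -s http://169.254.169.254/latest/meta-data/ami-id)

# Install and start a web server
yum install -y httpd
service httpd start

# Pull an application package from S3
aws s3 cp s3://example-app-bucket/app.tar.gz /tmp/app.tar.gz
tar -xzf /tmp/app.tar.gz -C /var/www/html

# Publish identifying metadata, e.g. for monitoring or cluster registration
echo "instance ${INSTANCE_ID} launched from ${AMI_ID}" > /var/www/html/info.txt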
The next thing we're going to do is take a look at making use of some of the availability constructs that are available within AWS in your architecture. The first thing to say is that AWS regions, as you may know, are split into separate logical availability zones, and each one of these availability zones comprises at least one physical facility — at least one physical data center. So by distributing your application across availability zones, you're making use of more than one physical data center to support your application, and that is a way that you can architect for availability inside the AWS cloud.

Many of the services that we provide are aware of this availability zone construct. With RDS, for example, you can specify masters and slaves that are going to be distributed in different availability zones for you, and you can also specify the distribution of read replicas across availability zones to help boost the performance of relational databases running inside the RDS service. And of course you can also distribute your EC2 instances across these availability zones. So if you use RDS with replicas and slaves, not only will your compute capacity be distributed across AZs, but so will your data — your persistent data. You can then use auto scaling groups to provision and deprovision compute capacity as your application needs it, on the basis of metrics; auto scaling itself is aware of the availability zone construct and will seek to balance your resources across the availability zones that you configure. You can see here we would be adding our instances in pairs, one in each of the availability zones that's been configured with this particular auto scaling group. We can use elastic load balancing — the ELB service — to distribute incoming traffic across those auto scaled instances; that again is a regional service which is aware of availability zones and can distribute traffic across them. And then on the inbound side we can use Route 53 to host our DNS zones, ensuring that the front door to our application, in essence, is always available — a very reliable service with extremely good availability characteristics and very good performance on the resolver — and that will bring traffic to our load balancer, from where it will be distributed across our AZs, hit our web server tier, and make use of that database, which again is distributed across those AZs.

So, just to recap on building on the strengths of the AWS cloud. Make use of services like elastic load balancing — I've just described how you can use that to distribute traffic across your availability zones. Route 53 for inbound traffic: beyond reliability and high performance, it has sophisticated features like weighted routing and the ability to control TTLs on DNS updates, and it integrates with other AWS services like the elastic load balancer. Make use of RDS, and you can scale your databases without the admin overhead that you might normally associate with running relational databases on private infrastructure — it's very simple with AWS to add features like high availability or data replication, creating master and slave configurations with data replication between those availability zones that we talked about just a second ago. And lastly, auto scaling: this enables you to dynamically scale resources, and by doing that to more accurately match the amount of capacity that you have to the amount of demand that you're seeing across applications, and therefore to control costs. So those are the four areas of strength that we've talked about during the session today. For more details, visit the AWS Architecture Center at aws.amazon.com/architecture, where you can find a lot more detail about some of these areas of good practice, as well as a large library of architectural best practices and templates that have been developed from our work with other AWS customers around the world. I really would encourage you to take a look at that before you deploy anything substantive in the AWS cloud — you may find that we've already worked out an architectural model that can be used to support your specific application. So that's aws.amazon.com/architecture, if you want to take a look at that.
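To tie the auto scaling piece together, here is a hedged sketch of that multi-AZ setup from the CLI. The AMI, key pair, security group, AZ names and load balancer name are all hypothetical, and the load balancer is assumed to exist already:

# A launch configuration describes the instances to create
aws autoscaling create-launch-configuration \
    --launch-configuration-name web-lc \
    --image-id ami-1a2b3c4d \
    --instance-type m3.medium \
    --key-name my-keypair \
    --security-groups sg-1a2b3c4d

# The group balances instances across the two AZs and registers them
# with the existing load balancer
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name web-asg \
    --launch-configuration-name web-lc \
    --availability-zones eu-west-1a eu-west-1b \
    --min-size 2 --max-size 8 \
    --load-balancer-names web-elb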
The fifth area of focus we're going to talk about is services, not software. This really goes to the core of something that I often talk about when I'm discussing the AWS cloud with new customers — customers that might not use the cloud today — and that is that getting maximum value out of AWS quite often has a lot to do with not doing things, or maybe more accurately, stopping doing things that you might have done before. With a traditional self-managed software and infrastructure stack, you spend a lot of time doing things which aren't particularly differentiating, whether that's building data centers, installing servers, or maintaining the software that you've got installed to deliver the features and applications that your business relies upon. A lot of that is not actually particularly value-adding, and by using the AWS cloud you can get away from a lot of it — stopping a lot of that undifferentiated heavy lifting, and instead spending a small amount of time configuring the services that you want to consume from the AWS cloud. This can free up time for you to focus on things that are value-adding and do genuinely differentiate your business. So stop doing things if you're going to use AWS, and think about how you can use that time more productively to deliver more value to your organization.

Some examples of services that you might want to consider in order to achieve that goal of doing less undifferentiated heavy lifting: things like RDS, the relational database service, for relational data sources, or Amazon DynamoDB, our non-relational NoSQL database service, with its fast, predictable performance and support for document and key-value data models. Both are services that you can use to deliver functions that are likely to be required by the applications that you're running, without you having to focus on operating, maintaining, scaling or recovering the platform in the way that you would traditionally have had to do with on-premises or private dedicated IT. So I'd encourage you to take a look at AWS services in that light and work out whether there are opportunities for you to stop doing things. There are other services as well, like the Simple Queue Service, or Amazon EMR, our Elastic MapReduce service for running Hadoop workloads in the AWS cloud, and others like Amazon Kinesis, some of the mobile services that we've recently announced, and Amazon Redshift, where you can make use of services that will simply reduce the amount that you have to do in order to build and operate the applications that you and your organization need. So please spend some time familiarizing yourself with the breadth of AWS services, and check out what opportunities you've got to do less.
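Just to make one of those services concrete: with SQS there's no message broker to install, patch or scale — a working queue is a couple of API calls away. A minimal sketch from the CLI; the account number and region in the queue URL are hypothetical (create-queue returns the real URL):

# Create a queue, then send and receive a message
aws sqs create-queue --queue-name orders

aws sqs send-message \
    --queue-url https://sqs.eu-west-1.amazonaws.com/111122223333/orders \
    --message-body '{"orderId": 42}'

aws sqs receive-message \
    --queue-url https://sqs.eu-west-1.amazonaws.com/111122223333/orders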
Sixth: cost optimization. We're not going to cover this in a huge amount of depth during the session today, because we have another webinar coming up next week where we're going to spend 45 minutes to an hour talking solely about cost optimization on AWS. In that session we'll be covering these ten topics: how you can use the right instance types; use auto scaling; turn off or shut down unused instances or other services that you might be making use of; the purchasing models available to you, like reserved instances and spot; the use of storage classes; the use of offload — moving parts of your workload to AWS services like CloudFront; the use of services not software — again, we'll revisit some examples of ways in which you can save money by doing that; the use of consolidated billing to aggregate your demand together; and the use of cost management tools to give you visibility and insight into your costs. So that's going to be the outline agenda for next week's session on cost optimization.

Just to whet your appetite and give you a bit of a teaser: on using the right instance type, for example — we covered this very briefly in last week's session — we have a broad range of different instance families: things like the general-purpose M3 series instances, the new compute-optimized C4, the GPU-enabled G2 instances, memory-optimized R3 instances, as well as storage- and I/O-optimized instances. It's important to dimension the instance that you're going to use around the workload that you have. So if you're running a large non-relational database — rather than using DynamoDB, you're running it on EC2 instances — it's likely to be a memory- or I/O-intensive workload, depending upon the nature of the database that you're running and its size, and you should dimension your instances appropriately. Make sure you're running on an instance that offers the best price characteristics for the resource that you need most: if it's memory, then make sure you're running your workload on an R3 — you will pay less per gigabyte of RAM than you will on any other instance family. That assessment is an important activity to conduct when you're getting started with a new application on AWS, to make sure that you're minimizing your bill.

Another technique that you've got for minimizing cost is to use the different instance purchasing options: on-demand, reserved and spot instances. We'll talk about this in much more detail in next week's session, but there are three ways in which you can buy EC2 instances, with different economic characteristics that fit different use cases well. There's on-demand for those short, spiky or unpredictable workloads; there are reserved instances for long-term workloads, where you might want to reserve capacity over a one- or three-year period; and there's spot — if you're running high-performance computing workloads, supercomputers in the AWS cloud, the spot market is a great way to acquire the compute capacity that you need at very low cost. So the options that you have around how you buy your instances can have a significant effect on how much they cost, and you should be aware of that. If you want to learn more ahead of next week's session, visit the EC2 purchasing options page that you can see at the bottom of the slide, and it will give you more detail on those three purchasing options and how they will affect the amount that you pay for your computing capacity with AWS.

Number seven to look at is tools and frameworks. This is a very broad and very interesting area to look at, and maybe the foundation to start from is to consider that everything is programmable with AWS: you can access everything via the CLI, via the APIs, or via the console that we have — and the console, in fact, is built on top of the APIs itself. You can use one of nine SDKs — today, soon to be ten, with the announcement of upcoming support for the Go programming language — to create or make use of AWS resources within your own code, within software that you develop.
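For instance — and this is just a sketch, with hypothetical tag values and instance IDs — the same APIs that the console sits on can be driven from the CLI, so anything you can click, you can also script:

# Find instance IDs carrying a particular tag...
aws ec2 describe-instances \
    --filters "Name=tag:Environment,Values=test" \
    --query "Reservations[].Instances[].InstanceId"

# ...and act on them programmatically
aws ec2 stop-instances --instance-ids i-1a2b3c4d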
You can also make use of a broad ecosystem of open source, free and commercially licensed tools to work with AWS services: things like community-contributed SDKs for languages that we might not support, and automation frameworks like Ansible or Puppet that might be able to provision AWS resources using plugins that have been developed to make use of those SDKs. So there's a very comprehensive ecosystem for automating deployment, management and control of AWS services, and also for working with AWS services as primitives within other software. By combining these things you can build or deploy the highest levels of automation: define your infrastructure as code, support continuous deployment, or automate other development operations — DevOps — processes. There's a lot that you can do as a result of the programmable nature of the AWS cloud, and there's a resource at the bottom there, aws.amazon.com/developer/getting-started, which will enable you to download and make use of sample applications that have been built in various programming languages, make use of the SDKs, and make use of AWS services within them. I'd encourage you to take a look at that if you're developer-minded.

In a similar vein, there are also a variety of AWS-provided deployment and management tools. Things like Elastic Beanstalk, which is a container for running applications of various types inside the AWS cloud, removing the requirement for you to focus on infrastructure of any type — whether that's EC2 instances, VPCs or RDS instances, Elastic Beanstalk can automate the creation of the resources that you need to run your application, whether that's a piece of PHP code or a Docker container; it supports a variety of different containers for execution. OpsWorks, which enables you to define stacks of AWS resources comprising layers, which are themselves defined using Chef, a configuration management tool, and executed by OpsWorks, enabling you to dynamically scale your environment in that way. CloudFormation — a more complete templating language, actually — allows you to define collections of AWS resources and then programmatically create or destroy the resources defined within a template or set of templates: a very powerful tool, used by many sophisticated AWS customers to support continuous deployment and continuous integration on AWS. And a new service called AWS CodeDeploy, which is intended to ease the deployment of code onto fleets of running EC2 instances, providing features like rolling, phased deployments with awareness of availability zones, and the ability to define precisely what you're going to deploy and how you're going to deploy it using a templating language that makes it extensible and able to support a wide variety of different application types and deployment models — take a look at AWS CodeDeploy if you want more information on that service. So those are just a few examples of deployment and management tools from AWS that fit into the broader ecosystem that I described earlier.
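To give a flavor of the infrastructure-as-code idea behind CloudFormation, here's a minimal, illustrative template defining a single EC2 instance — the AMI ID and tag values are hypothetical. You would launch it with: aws cloudformation create-stack --stack-name demo --template-body file://template.json

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Minimal illustrative template: one EC2 instance",
  "Resources": {
    "WebServer": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": "ami-1a2b3c4d",
        "InstanceType": "t2.micro",
        "Tags": [
          { "Key": "Environment", "Value": "test" }
        ]
      }
    }
  }
}

Deleting the stack tears down everything the template created, which is what makes it easy to build and rebuild environments repeatably.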
Finally, then, in our eight areas of good practice — areas you should focus on as you're getting started with AWS — I'd like to talk about getting supported as the last focus area. There are four tiers of support available if you're making use of AWS: basic, developer, business and enterprise, and you can find full details about all of these at aws.amazon.com/premiumsupport. Basic support is free; developer costs $75 per month; business starts at $100, or 10 percent of your AWS bill — but if your usage scales up, the percentage of charges scales down, and there are full details of how it scales down at the URL you can see on the slide. Enterprise starts at $15,000 per month and also scales down as your usage increases, to 3 percent with spend of over $1,000,000 per month on AWS services. And as you can see, there are tiered benefits delivered with these different support packages. With business support, which I would recommend for the majority of customers, you've got a response time of under an hour; you have unlimited users that can submit technical support cases within your organization (that's defined using IAM users); and you've got 24-hour technical support access by phone, chat, email and live screen sharing — a very useful and powerful set of features.

You also get access to something called Trusted Advisor with the business and enterprise support tiers. This is a really sophisticated tool that programmatically checks the status of your AWS account and makes recommendations about how you can, for example, optimize your costs, improve the performance of your resources, or improve your security. There are over 30 checks — 37 at the moment — in Trusted Advisor, organized into those four categories, and all of them make recommendations. You can see here the recommendations around fault tolerance on one of my accounts that I use for demo purposes: I've got out-of-date EBS snapshots, and I've also got EC2 availability zones that aren't balanced properly. I'm running an instance here in one of my AZs, but I'm not running one in my other availability zone, and I've been warned about this because I have an imbalance — in other words, if that AZ fails for any reason, or is temporarily inaccessible, my instance will be out of service and I don't have another one to back it up. You can see the details of that here, as well as the recommended actions that have been proposed to me by Trusted Advisor. I'd really recommend that customers make use of this service; it's very powerful, and customers that do use it give really positive feedback about how it helps them minimize and mitigate incidents that might have occurred if they hadn't implemented some of the recommendations of Trusted Advisor.

Another, maybe less well-known, feature of AWS support — once again on the business and enterprise tiers — is extensive support for third-party software, over and above the software that operates the AWS services themselves. You can get operating system support, infrastructure software component support, and support for commonly deployed web and database servers, by virtue of having a business or enterprise support subscription. Again, you can find more details about third-party software support at that same URL. So there's a range of different support services available to you, and like all AWS services these are commitment-free: you can take them for just a month or two, establish whether you get value from those support services, and if you do, continue — if you don't, you can cancel with no penalty. So there's nothing to lose, quite literally, by trying AWS support for a couple of months and establishing whether or not it adds value to your use of the services.

OK, so those are the eight areas of focus that we wanted to look at. I did say I would quickly share with you some resources where you can learn a little more about some of the things we've talked about during the session today — we've shared many of these URLs with you already:
the getting started guide at aws.amazon.com/getting-started, the security center, the architecture center, and details of premium support — you can see them here. If you'd like to register for and attend the next webinar in this series, which will talk about cost optimization on AWS in a little more depth, then visit the campaigns/emea-getting-started URL that you can see at the bottom there, and you can register for next week's webinar. And of course, if you want to access the materials from today's session, so you can get to all of the links in today's presentation without having to copy the URLs manually into your browser, just grab a copy of the presentation materials as a PDF from the files panel — you can then click the links straight from the slides.

Something else I just want to quickly touch upon before we close up today: we have a comprehensive set of training and certification offerings at AWS. If you're interested in pursuing more AWS training, or maybe even AWS certification, we have a portfolio of hands-on, self-paced labs that you can make use of; formal classroom-based training in the form of instructor-led courses; and a certification program where you can become an AWS certified developer, architect or systems operations professional. You can find more details about all of those offerings on the slide that you can see here.

OK, so that's all we have for you in today's session in terms of content. I did say at the beginning that I would show you these social media links. If you're interested in keeping up to date with AWS events and webinars, including the in-person events we're running around the UK, and keeping up with AWS user groups here in the UK, find us on Twitter at @AWS_UK. Similarly, for AWS service announcements and global news you can follow @awscloud on Twitter, and you can find me at @IanMmmm if you're interested in keeping up with the life of an AWS evangelist here in Europe.

OK, that's all the content we have for you today. We're going to turn to a few questions now and put the webinar into Q&A mode. Please stay on the line if you want to ask us any questions — we'll hang around and answer any questions that you might have; submit those via the Q&A panel that you can see in the webinar interface. I'd like to thank you for joining us for today's session. Let's take some questions now.
Info
Channel: AWS Online Tech Talks
Views: 20,651
Rating: 4.75 out of 5
Keywords: Amazon Web Services (Website), Best Practice, Cloud Computing (Industry), Software (Industry)
Id: T64qFcyTGAU
Length: 56min 36sec (3396 seconds)
Published: Tue Feb 17 2015