AWS re:Invent 2019: AWS Outposts: Extend the AWS experience to on-premises environments (CMP302-R1)

Captions
Good afternoon, everybody. My name is Anthony Liguori, I'm a Senior Principal Engineer within EC2, and I am the engineering lead for AWS Outposts. With me I have Rich Rodolfo from Philips; he's going to come on stage a little bit later and talk about what they're doing with Outposts. I'm really, really excited to be here today. This whole week has been amazing, but I'm especially excited to talk about Outposts, because the thing that I love the most, the thing that gets me up every morning and keeps me excited about my job, is building a combination of hardware and system software to solve really unique customer problems. A lot of the time people think about things like operating systems and hypervisors and hardware as stuff that isn't really customer facing, but I actually think there's a lot of cool stuff that can be done and a lot of interesting problems to solve. With Outposts, in addition to solving those problems for customers, you can now actually see and touch the hardware that we've been building over all of these years and do cool things with it. I hope everybody got a chance to go to the expo floor in the Venetian; we actually have an Outpost rack there. I think it closes during this talk, but it's been here all week, so you could touch it and take pictures with it and give it a hug if that's your kind of thing. It certainly is my kind of thing.

For this presentation we're going to start out with what an Outpost is, and we're going to talk about how it works. You'll hear a common phrase throughout this entire talk about sameness, but even though there's lots of sameness, there are new things with Outposts. One of the big new things is ordering and installation, because unfortunately it just can't be as simple as a single API request; we actually have to physically deliver something to you and get it installed. Then I'm going to talk a little bit about how we're thinking about services on Outposts, and I think this is one of the areas that's super exciting, because it brings so much of AWS to wherever you need it to be. Finally, Rich is going to get on stage and talk about the things that they're doing with Outposts, and towards the end we'll do a Q&A section.

Now, this isn't exactly the way that the marketing team wanted me to start the talk, but given that I'm an engineer and I'm kind of crazy about hardware, we're going to start with pictures of racks, because this is what I care about and this is what I really want to talk about. So this is an Outpost. It's about four feet long, two feet wide, and about seven feet tall, so it's the standard size of a typical rack. What that means is that if you have an existing on-premises data center, it should fit into any of the positions that you have today. But everything on the inside is different from what you've probably seen before and what you've experienced, because over the last seven or eight years we've been reinventing the way our systems work to really optimize for solving all of your particular problems.

This is what the inside of the rack looks like. At the very top we have a patch panel; this is where the networking comes into the rack and ultimately connects to your network. We've got servers in the center, switches in the middle, and there is a power shelf, which I'll talk a little bit more about. One of the questions I've gotten a lot this week is: what is all that black space? We build our data centers explicitly for EC2, and one of the things that
we've done over the years is really optimize the density of servers, so we almost always have 1U servers, and they tend to have more horsepower than what you'll find elsewhere in the industry. A consequence of that is that our data centers have a lot of power, usually two or three times the amount of power that you'll typically have in a data center. So this is probably the most compute that you could fit in a typical data center, since it's power constrained, which is why it's half empty; but if you have high-end data centers with lots of power, we can fill it up a lot more than this.

Like I said, we typically have 1U servers; this is what they look like. If you saw Peter DeSantis's keynote, where he showed a server opened up with the two Nitro cards in it, that's these servers. Those exact same servers that Peter showed in the keynote, you can now get in your data center. This is not a different hardware platform, this is not a different architecture; this is exactly the same stuff that we're using everywhere else within AWS.

One of the big innovations we've introduced in the last few years is something called a power shelf. The typical thing that you see in a data center is that every server has its own power supply; the power supplies convert AC power to DC power, and having a lot of little power supplies is far less energy efficient than having a centralized unit doing all the AC-to-DC conversion in a single place. Power consumption and energy efficiency are hugely important for us, so this is something we've been investing in for a long time, and we're now able to bring that to your data centers thanks to Outposts. Despite what some people have speculated on Twitter, those bottom drawers are not snack shelves; I would highly recommend not putting snacks in those shelves. They're extra space for additional power supplies. Finally, this is the back, and you can see the large metal bus bar; this is what the servers actually plug into in the back, and this is ultimately what powers the servers.

Okay, so we've satisfied my need to show some pictures of hardware. It will be at least another 20 minutes before we see more hardware, but we'll get back to the normal presentation. So I showed a bunch of hardware, but what is Outposts, and what problem is it really solving? Fundamentally, Outposts brings AWS, not just EC2, not just EBS, but AWS, to your on-premises facility. The way we achieve this is by using the same hardware, the same software, and the same control plane that we use for capacity in our regions, in your local data centers. It is fully managed: that means we deploy the software updates, we perform all the patching, and we perform all the monitoring, not just for software but also for hardware failures. So if anything happens in a rack, we care for that rack just like we care for any other capacity in our data centers. And finally, as a user, you can interact with AWS Outposts just like you interact with any other AWS capacity. To launch an instance on an AWS Outpost, you use the same management console as the region that you ordered the Outpost in. It's not a different API endpoint, it's not a different SDK, it's not a different console; it's the same console, SDK, and API endpoints.

So, I've said we've had this hardware for a long time, and I've described how it all works, but this isn't a new problem that customers have, needing on-premises compute capacity, so why are we only introducing this now? There's a little story we like to tell inside AWS, that since AWS was first launched about 13 years ago, every single year
in our roadmap planning we had a plan for doing AWS on premises, and every single year we didn't actually get it done. It's been one of those projects that has gone on and on and on, and we've never actually been able to make it work. So I wanted to talk a little bit about why this is such a fundamentally hard problem and why Outposts is so interesting in the way that it solves it, and to do that we need to talk a little bit about how AWS works.

If I was going to launch an EC2 instance, and I was going to use the command line, I would run an aws ec2 run-instances API command. At the surface it seems like a relatively simple thing: I execute the command, about 20 seconds later I have an instance, I can connect to that instance, and everything works the way that you would expect it to work. What's happening underneath the hood is that when you execute that command, you're talking to a service, and just like a lot of you, the way we build our services is that we build microservices. We love microservices because we also believe in having small teams; you might have heard of two-pizza teams, and microservices allow us to divide our architecture up into things that small teams can operate and build and drive in an effective way.

The very first microservice that the run-instances call reaches is our API front-end service. This is a service that handles authorization and authentication, it checks to make sure the APIs are well formed, and it does just enough work to figure out which other microservice it needs to hand the request to. The first observation is that we build our microservices to be AZ-aware and to have a multi-AZ failover strategy, so in order to stand up any microservice we need hosts in at least three Availability Zones. Now, one host in an Availability Zone isn't good enough, because one, it doesn't allow you to survive spikes in traffic, and two, if you have some kind of hardware failure, you'll lose that entire Availability Zone, and that's usually not a good experience for customers. So as a rule we always use at least three hosts in an Availability Zone. That means that every microservice that's part of the run-instances API has to have at least nine distinct hosts, and they can't be on the same physical server, because if they are, you don't get the failure characteristics that you actually need.

The thing that happens after the front-end API microservice is usually a specific control plane front-end service. Front-end services are the things that typically handle business logic; they decide, if you're going to create, let's say, an instance, that I also need to create an ENI and an EBS volume, and things of that nature. Again, this microservice requires at least nine hosts throughout the region. The other thing that I didn't previously mention, but which is equally important, is that every microservice has to be operated. I think the big difference between a service and a software product is that a service has an operations team behind it. It has dashboards that people are watching to make sure that changes in traffic aren't going to affect the service in a negative way, it has people to react for scaling and things of that nature, and importantly it has alarms so that if something is going wrong or unexpected, a person will get paged and can do something about it. Every one of our microservices has a team of people behind it, monitoring and making sure that the system is healthy.
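For reference, the single API call that kicks off that whole chain of microservices looks something like this from the command line. This is just a minimal sketch; the AMI ID, key pair, and instance count are placeholders, not values from the talk.

```bash
# One call to the EC2 API; behind it sit the front-end API service,
# the control-plane front-end, and the data stores described above.
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type c5.4xlarge \
  --key-name my-key-pair \
  --count 1
```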
After the front-end API service, we usually invoke some kind of database back-end. It might be DynamoDB, it might be S3, it might be something custom, but as you can see, something as simple as a run-instances API call ends up involving tens to hundreds or even thousands of these microservices. Now, this is all fine, because even though we're involving potentially thousands of these microservices and hosts, in something like an AWS region it ends up being a drop in the bucket in terms of the capacity we've got. It's a very, very large system, you're getting economies of scale, and so we can afford to have all of these hosts and all of these microservices because of the scale of a region.

Now, the problem that customers have always had is: I need a rack of capacity in my data center. So the question is, how do you take something like AWS, with its hundreds or thousands of microservices, and actually put that onto a single rack? You have to make compromises, and all of the offerings that at least I'm aware of today have made a lot of these compromises. The first one is that your microservices can't have nine instances; you don't have Availability Zones, so you're just not going to be able to have that architecture. What you usually do is end up having one instance per service, and if that one piece of hardware goes down, you're probably going to lose your API availability. So the net consequence is that you typically have poor availability in these types of offerings.

The second one is that this microservice architecture, which I think the entire industry has been embracing because it enables you to be very dynamic and have a really high feature velocity, gets combined into a more traditional monolithic piece of software, because you just can't afford to scale yourself out horizontally; you need to be really conscious about how much capacity you're using. This creates an uncanny valley, because no matter how careful you are about maintaining API compatibility, when you rewrite a piece of software you're going to introduce subtle differences if it's a different codebase. The errors from APIs might be slightly different, the latency of individual requests might be different; it's just going to be different, and ultimately you're going to have to port your software to that new, different system. I think POSIX is a great example of this. If anybody's ever had to program on, say, maybe I'm showing my age here, HP-UX and AIX versus Linux: even though it's highly standardized by a large industry consortium, you can't write code that works on all of those platforms. You always have to have branching and conditionals, and it takes work to support new architectures.

Finally, you're dealing with a fundamentally different software stack. It is not the same control plane that you're using in a public cloud architecture. This means you're going to end up with different features, because ultimately, unless you're double investing, building for the public region and for the local environment, you're going to end up with feature gaps, and what almost always happens is that the one-rack version of the software lags significantly behind in capabilities. Ultimately, any kind of offering that says "this is the cloud in a single rack" is just not the cloud. I know that's a statement that a lot of people might get upset with, but I fundamentally believe that you just can't do that; it's just not possible. However, I'm here today
talking about bringing the cloud in the form of a rack, so obviously we've found a solution to this problem. So let's think about how we could solve it. I like to look at a simple, even if ridiculous, solution to a problem, and then try to figure out how to make that solution work. And there is a kind of silly but accurate solution to this problem: you take the customer's data center and you just connect the network to your data center, and you make the customer's data center an extension of your region. Then you can use your control plane to manage the capacity in the customer's data center, and everything will work; as long as you provide the same hardware and software, everything will just work, and you can have cloud in a customer's data center with a single rack, because effectively what you're doing is just extending the region. That's all: you're just creating yet another data center that's part of the Availability Zone.

But as I mentioned, this can't work. Fundamentally, one of the problems with this type of approach is that it violates trust boundaries. As part of the AWS shared responsibility model, we ensure the physical security of data centers, and if we tried to connect our network to another person's data center, we couldn't provide those same kinds of guarantees anymore. The second problem is that you have a fundamental bootstrapping problem. There's this really interesting paper called "Trusting Trust" that talks about how you can ever trust a piece of code that's compiled by a compiler, because ultimately something had to compile the compiler, and you end up in an infinite loop of trying to find the source code for the compiler that compiled the compiler; at some point in time you just have to trust that you have a binary that is what you expect it to be. So there's this whole notion of how you establish an initial set of trust that something that may be malicious is what you expect it to be, and it's usually a really hard problem. The way it gets solved most often, because it's easy, is you just say: I'm somehow going to get a secret onto that device, and then I'll rely on everybody having a shared secret. The problem is, if you have physical access to a server, you cannot store a secret on it in a way that is a hundred percent bulletproof; with physical access and enough effort, somebody is going to be able to get at the data there. Finally, and this one's a little bit more subtle, a lot of what happens in the public cloud relies on having large idle pools, particularly for software updates. A typical strategy for dealing with software updates is live migration, and live migration works great if you have lots of spare capacity where you can move things around and then reboot physical servers, but when you start talking about a single-rack form factor, you just don't have the spare capacity, especially if it's a relatively small rack, to really make this work. So this approach doesn't work, but potentially these problems are solvable, and this is where Nitro comes in.

Nitro is an effort that we began in 2013. It was a long journey for us, culminating in 2017, about two years ago, when we launched the C5 instance type. Since then, every platform we've launched has been based on Nitro. Fundamentally, Nitro has three things, and Werner talked about Nitro this morning; I think Andy and Peter also talked a lot about Nitro, because it's
been a fundamental shift for us in terms of what it enables us to do. It all really began with us asking: if we were to build a system from the start, building all of the software and all of the hardware specifically for EC2 and to solve the problems that EC2 has, what would it look like and how would we do it? What we came up with is three main areas. One is the Nitro card; the Nitro card consists of custom silicon that we built to enable I/O acceleration. Then there is a security chip, which allows us to effectively address this trusting-trust problem and deal with servers without having to worry about how we bootstrap their identities. And finally, there is the Nitro hypervisor, which, as Werner talked about this morning in his keynote, has the ability to perform what we call live update, so we're actually able to update all of the software on the platform while your instance is running, without your instance having any downtime as a result. These three things really became a game changer for us, and they allowed us to look at these problems in a new light.

This is what we came up with. AWS Outposts is actually fairly simple in concept, even though it allows us to do a lot of things that we could never do before; it really builds on all of the innovation that we've been developing over the last years. We take the same Nitro hardware and software that we're using in the rest of our data centers, and all we really need to do is introduce two new concepts. The first is the Outpost edge device. This is a device that sits in the rack, and it is the thing that establishes communication back to the region and effectively creates a VPN tunnel. The VPN tunnel is then used by the Outpost proxies to send control plane traffic down to the rack in a way that we've been able to establish as very secure and very robust, and it doesn't actually bridge any networks together; it's just forwarding API traffic, so think of it like a very, very sophisticated load balancer. This Outpost edge device also acts as a network edge for the local rack, so your local network connects into the Outpost edge device, and it is the thing that acts as your gateway in your VPC, allowing the instances within the rack to talk to the rest of your network. From the AWS region's perspective, the Outpost proxies represent the servers that are in your data center, so almost all of our control plane services talk to these Outpost proxies, and they don't even really care that those servers are in your data center. As far as most of our control plane is concerned, those servers are just part of our region, and it manages them just like the region; all of the services see them and treat them just like any other capacity in the region.

Because we're using the same AWS control plane, and because we're using the same hardware and the same software, you get the true AWS experience. The best way to explain this is: if you have a development team and they write a piece of software, launch that software onto c5.4xlarge instances, and do load testing and stress testing, you gain a certain degree of confidence that when you run the software on a c5.4xlarge it's going to meet your needs from a performance perspective. Now, if that software has to run on a different instance type, or even on a local server, actually making sure that you know what the performance characteristics are and how things behave takes effort; you have to do work, you have to do validation. What we wanted
to make sure of is that if you had an application that ran in the public region, you could just take it, run it on an Outpost, and it would behave exactly the same way, because it's the same CPU, the same memory, the same networking, and ultimately the same underlying platform.

Now, as I mentioned at the start of the talk, there are new concepts in Outposts that are necessary in order to be able to operate in your data center. The first one is the local gateway. If you create an instance in an Outpost and it is in a VPC, it follows all of the same principles as any other instance in EC2: you have a route table, and there are routes in the route table; there are VPC local routes, and then you can add gateways, whether an internet gateway, a VPN gateway, or even a transit gateway. What we've introduced for Outposts that is new is the concept of a local gateway, and the local gateway is the thing that is connected to your network, such that even if your network is using private IP addresses, like RFC 1918 addresses, or even if you have public IP addresses, your VPC can talk to those addresses when the instances are in the local rack.

The other new concept is how you launch an instance into an Outpost, and the way you do this is by associating a subnet with the Outpost. An Outpost has an ARN just like any other AWS resource, and when you create a subnet, you'll see that there's a new option for associating that subnet with an Outpost ARN; that is ultimately what enables you to launch instances or services into the Outpost. The great thing about using subnets as the placement construct is that almost all existing tools that accept subnets, whether they're our first-party tools like CloudFormation or third-party tools like Terraform, will just work with this model.
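To make the subnet association and local gateway ideas a bit more concrete, here is a minimal command-line sketch. The Outpost ARN, VPC, CIDR blocks, Availability Zone, and route table IDs are all placeholders, and the flag names reflect my understanding of the AWS CLI (create-subnet accepting an Outpost ARN, and create-route accepting a local gateway ID); treat this as an illustration rather than a reference.

```bash
# Find the Outpost and its ARN (the Outposts API is separate from the EC2 API).
aws outposts list-outposts

# Create a subnet that is placed on the Outpost by passing its ARN.
aws ec2 create-subnet \
  --vpc-id vpc-0123456789abcdef0 \
  --cidr-block 10.0.128.0/24 \
  --availability-zone us-west-2a \
  --outpost-arn arn:aws:outposts:us-west-2:123456789012:outpost/op-0123456789abcdef0

# Route traffic destined for the on-premises network through the local gateway,
# so instances on the Outpost can reach it directly.
aws ec2 create-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 192.168.0.0/16 \
  --local-gateway-id lgw-0123456789abcdef0
```

Because the placement is expressed entirely through the subnet, tools such as CloudFormation and Terraform that already take subnet parameters carry the same pattern over unchanged.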
Ordering is the next new concept. The first step in ordering is creating a site. One of the things we need to make sure of before you actually purchase an Outpost is that the location is going to be able to support the weight of an Outpost; the rack I showed you earlier in the presentation weighs about 2,000 pounds, so you have to make sure that the flooring can support it, things of that nature. There's a fair bit to just making sure you can physically roll the rack into place. The second issue is power requirements: we need to make sure that there's enough power, but we also want to make sure that you don't order a rack that isn't going to fit the power envelope that you have at a particular site. We also validate the network configuration. An Outpost rack requires two uplinks; those uplinks can be 1, 10, 40, or 100 gigabit, and you can aggregate multiple uplinks into a LAG. We support up to 16 uplinks aggregated into a LAG, so ultimately you can bring a lot of bandwidth to an Outpost if you'd like.

The next step in the ordering process is to select your Outpost configuration. While we can build an Outpost with any combination of EC2 instance types and EBS storage that you'd like, we've created a set of configurations that we think will satisfy a lot of needs, and we've pre-validated that they fit within common weight, power, and networking configurations, to try to make the ordering process as simple as possible. When you're browsing the catalog, which everybody can do right now via the console, if there's a configuration that doesn't suit your needs, feel free to contact us, and we're happy to help define custom configurations for you.

Finally, you submit an order, and as part of submitting the order, our installation team will contact you to schedule the installation appointment. We do the installation for you; you don't just get a box that shows up on your doorstep. We'll actually come in and do the install process as part of ordering the Outpost. On the installation day, an AWS technician will meet you at a loading dock, they will roll the rack into place, and then you provide an electrician to energize the rack. Once the rack is energized, we'll connect the uplinks, and then you can launch an instance shortly thereafter. If you're interested in what this process is like, we actually filmed an Outpost install here in Vegas a few weeks ago, and we've put a video on YouTube that walks through the install process and shows what it looks like to roll a rack into place. If you've never seen what data centers look like on the inside, or what a rack install process actually involves, it's a really cool thing to go and check out. Whereas installing a rack in a traditional data center and setting up software on it can take, I've heard customers say, six to eight weeks from having the rack arrive to when you can actually use it, a typical Outpost install takes about four hours before you can use it, and a lot of that is waiting for electricians to arrive and things like that.

Werner likes to say that all things fail, and he's absolutely right. We put a lot of effort into making sure our hardware is of high quality, and we have lots and lots of data about how things fail, but at the end of the day failures are going to occur; it's part of having physical things. All of the active components in an Outpost rack are redundant, so there's no component in the rack that we expect can fail that will bring the entire rack down. As a consequence, we can remotely monitor the rack, and when individual redundant components fail, our systems will try to identify and repair the issue, and if we can't repair it, we'll send you a notification saying that we need to come on-site to replace a component. We will ship the component ahead of time to make sure that it's there in a timely fashion, and then we'll schedule the replacement with you based on when you're available to have us come into your data center. Typically we're looking at a two-day SLA to replace a component, so that we can meet all of our availability SLAs. The installation and the servicing of an Outpost, including hardware replacement, is all baked into the cost of an Outpost; it is not an additional thing that you have to pay for. You are paying for EC2 instances and EBS storage, and we are taking responsibility for making sure that you're always getting what you're paying for.

One of the neat pieces of technology in an Outpost, which I can talk about now that you can actually see the Outpost, is the Nitro security key. A common problem in data centers is what to do about data at rest. The typical thing to do, because you're not really sure whether the data has been encrypted and you're not really sure what's happened to the encryption keys, is that a lot of data centers institute hard policies of destroying all non-volatile media any time it leaves a data center. For SSDs and hard drives that usually means physical destruction; if you've ever worked in a data center, you'll know that there are machines that can crush drives. Another common thing, for components that are less standard, is that technicians will use drills and templates and literally drill holes in motherboards and things like that.
Because we built Nitro from the ground up, we built the system with this in mind, and we know that all data at rest in a Nitro system is encrypted. Not only is all the data encrypted, but all the keys are managed in a central, tamper-resistant way. The Nitro security key is actually an external device, and if you look carefully, you can see a little bullseye; that bullseye is for a screwdriver. You can use a power drill or even just a hand screwdriver, and when you turn that screw enough, the black screw right there, which is sitting over a microcontroller that contains a tamper-resistant key, physically shatters that chip. That key is used to encrypt the key that's used to encrypt everything else on the platform, so you can be assured that when you physically destroy that chip, you've destroyed all the data in the server. Now, instead of having to open up a server, remove a bunch of drives, and destroy all of those drives, you can destroy this little tiny microcontroller and rest assured that all the data has been destroyed. This allows us to come in, pull the server out, screw in the screw, put it in a box, done; a very, very simple process for doing hardware replacement.

Even though I'm a hypervisor guy and it's what I love the most, the virtualization bit is probably the least important part of AWS; the services are what's important. I want to talk a little bit about how we're optimizing services to work on Outposts. Now, we don't just want to lift and shift every service to run on top of an Outpost, because you'd very quickly run out of capacity if we brought all of those control plane services along, and you'd have the same problem that we had at the start. So instead, what we're looking at is optimizing services to split apart the bits that are latency critical, that have to run on your premises, from the control plane parts that can still run back in the public region.

ECS is an example of a service where we really didn't have to do anything to make it work within an Outpost. The team did add a lot of features to make it work really well in an Outpost, but architecturally it kind of just worked. ECS manages clusters of instances that you ultimately run containers on top of, and each of those instances has an agent that reaches back and talks to the ECS control plane; because of this architecture of reaching back to the control plane, in the context of an Outpost it just works, because you can talk back to the public region and everything makes sense. The Elastic Kubernetes Service, or EKS, is a service where we actually had to do a bit of work to change the way the service orchestrated itself, because every time you launch an EKS cluster you actually create a new control plane instance, so we had to explicitly teach EKS to launch the control plane in the public region and rely on having worker nodes in the local Outpost.

As you think about using an Outpost to solve your on-premises requirements, I would highly encourage thinking about these types of architectures, where you split the control plane that runs in the region from the things that must run locally. If you think about it, even if you have a database that's super critical and has to be single-digit milliseconds from other things, you probably also have batch workloads that run in the evening to do HR reports or something like that, where it would be really great to run that stuff in the public cloud using Spot Instances to reduce cost; you don't really want to use an Outpost for that type of work. Having these types of architectures that can span local and remote can be a really great way of reducing your costs and improving your agility.
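As a concrete illustration of that split, here is roughly what the ECS pattern described above looks like from the command line: the cluster and its control plane live in the region, and an instance launched into an Outpost subnet joins it through the local ECS agent. The cluster name, subnet ID, AMI, and instance profile are placeholders, so treat this as a sketch rather than a recipe.

```bash
# The ECS cluster (control plane state) is created in the region as usual.
aws ecs create-cluster --cluster-name outpost-demo

# An instance launched into the Outpost subnet runs the ECS agent, which
# reaches back to the regional control plane. The user data just tells the
# agent which cluster to join; the AMI would be an ECS-optimized image.
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type c5.xlarge \
  --subnet-id subnet-0123456789abcdef0 \
  --iam-instance-profile Name=ecsInstanceRole \
  --user-data '#!/bin/bash
echo "ECS_CLUSTER=outpost-demo" >> /etc/ecs/ecs.config'
```

Containers scheduled onto that cluster then run locally on the Outpost, while ECS itself keeps orchestrating from the region.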
On Tuesday, when Andy announced Outposts, we announced the services that are available to run locally today, and these are all services that are optimized specifically for that split of control plane versus data plane. This includes EC2, VPC, EBS, ECS, EKS, and EMR, and RDS is currently in preview. He also announced a version of S3 that we're developing for Outposts, which will be available in the first half of 2020. As you can imagine, S3 is particularly complicated to tease apart into data plane and control plane, but we also know it's a service that customers are really excited about having in an Outpost, so we've been working on it since early last year. Finally, I just want to stress that even though these are the services that currently run locally, all AWS services are accessible from an Outpost, and there are a number of services, like CloudFormation, that we're most likely always going to keep in region, because there's no real reason to have CloudFormation run locally; it's well served running in the region. So that's an example of a service that we always want to keep in the public region. With that, I want to turn it over to Rich and let him talk a little bit about how they've been looking at and evaluating Outposts.

Thanks, Anthony. So, I'm Rich Rodolfo, I'm from Philips, Royal Philips as we're known. Some of you will be familiar with us from the consumer space; we have consumer products that cross everything from general wellness to cooking to audio, we're everywhere. In the clinical space we have a full range of healthcare products, some of which are in the critical care path, so the things that Anthony was talking about as reasons to have this really come into play in some of the products that we have. We've been building solutions similar to the use cases Anthony's talking about for decades, and I will tell you at the outset, I've got a team that's been working on building small, remotely managed clouds in hospitals for about 20 years, and this is one of the hardest problems I've encountered in my career. Everything that he's talking about solving here, we've solved a couple of times, usually not as well as what we see here in Outposts, so we'll talk about that.

The product I'm responsible for, from the operations and infrastructure side, is the Philips HealthSuite digital platform. This was started about 2013 to be our common cloud platform for all of the Philips products. It is also a platform that serves third parties, so we have pharma and other healthcare-related ventures outside of Philips building on top of it. As you can see, and this is probably a three-day discussion, we're using just about every service that AWS has, and as soon as they release a new one, I've got people clamoring to figure out how to add it into our overall offering. Bringing services on-prem is an incredibly difficult problem for us, and so, to Anthony's earlier point about the uncanny valley, it's exactly the problem we have. I can replicate all of my APIs on different hardware, I can try to implement them in a way that looks just like what I'm doing in the cloud, but it just doesn't work the same, and
it doesn't work the same because the underlying infrastructure isn't exactly the same, and it's impossible to make that work right. So when we look at something like Outposts, it does allow me to pull this application that I have and run some of that functionality locally. Why would I want to do that? I may have a large amount of data that is just too costly to move to the cloud, where I'll then do some pre-processing. It could be something as simple as de-identification: you just want to anonymize it before it goes up there. It could be normalization: many of the workloads in healthcare, although they adhere to standards, don't actually implement the standard the same way, so you have a little bit of work to do before you can even begin to transport it. Some of it may not be designed to be transmitted away from the enterprise; some of these technologies are decades old, so you may need to transform it or encrypt it or do something different before you move it off-prem.

The second set of problems that we have is regulatory regimes. There are certain things that we can't move off-site in some regions, and so, you know, how do I tap into the seemingly unlimited scale of the cloud but still satisfy all those local requirements? That's where this comes into play: I can do latency-sensitive pre-processing on site, I can do some data normalization before I move it, and I can have site survivability in place in case the network is not reliable. I've got systems deployed around the world, and I will tell you that internet availability is a problem; I've got systems sitting at the other end of satellite links, so you may not be able to move the data fast enough or reliably enough to process it remotely. And if you're in an interventional situation where you're actually delivering care, that is mission-critical; that's probably not something where you want to rely on an intermittent connection. I think the last thing that's important to understand is that some of the industry is just not ready yet. They're not comfortable yet with the idea of relying solely on this thing that is far away, and this is a way for us to start a, you know, kind of risk-averse industry on this idea that we're going to move these services to the cloud: we can prove them first on-prem, and we have a transitional path.

There's a lot going on underneath the hood. We've been testing this for a few months, we had some very early versions of it, and I will tell you the short summary on some of the calls that we've been having is: it just works. Compared to other products where we've tried to emulate a cloud behavior on-prem, for my folks the learning curve was essentially flat. We're able to deploy our applications the same way we do in the cloud, and the APIs that we use today to orchestrate everything just work. There are some small nuances, and I'm sure we'll get to some of this in the questions later, small nuances of latency: if you've got to download an image to launch a new instance, that takes a little bit longer because distance comes into play, but from the application's perspective we've seen no difference for the services that were listed in the initial release. I think we want to leave a little bit of time for questions, but so far this has been a great experience for us.

Great, awesome, thanks, Rich. At this point we'll take questions. If anybody has any, if you could just come up to
the front of the room, and then I'll repeat the questions so the whole audience can hear. Yeah, just right there is fine. Yes, sorry, I was just thinking it's always hard to remember to repeat the questions in situations like this. The question was: can you have an Auto Scaling group that consists of instances in an Outpost but also instances in the cloud? The short answer is yes, but I would not recommend it. What I would instead recommend is having two Auto Scaling groups and basically using the same metric on both to decide when to scale. One of the patterns I really like is to have an Auto Scaling group with a maximum number of hosts in the Outpost, and then, if you need more, the next Auto Scaling group scales into the cloud, so that you conserve the capacity that's local and don't completely brown out the local Outpost in terms of capacity availability (there's a short sketch of this two-group pattern a little further down). Yes?

[Music]

Right, so the question was about the power for the rack, what is the preferred power. We support single-phase and three-phase power today, in common voltages that you see in the US, Europe, and worldwide. Right, the question was what power density we see. We're currently offering Outpost configurations that are 5, 10, and 15 kVA. We can go higher, so if you want higher power density we totally can do that; we often use 25 kVA in our data centers, it's just that we don't see a lot of folks who want that much power in a single position. Yes?

The gentleman noticed that RDS is in preview; the question was which database engines. MySQL and Postgres. Aurora is three AZs, and so we're still trying to figure out how to make Aurora work in the context of an Outpost, but the common open-source, non-Aurora database platforms are available in RDS.

The question was whether EFS is available. EFS is another example of a multi-AZ service that we're trying to figure out how to bring into an Outpost. One of the things we want to make sure of with a lot of these services, and EFS is a great example, is this: EFS has an amazing durability model; when you store something in EFS, it's not going anywhere. We want to be very sure that as we introduce these capabilities, we can figure out how to either replicate that or at least appropriately message how the durability changes. So EFS is something we're working on; the other managed file system services, like FSx, I think you'll see probably sooner than EFS.

The question was how an Outpost scales beyond a single rack. An Outpost today can be anywhere from 1 to 16 racks. We are working on designs of Outposts that will allow us to scale beyond that; we haven't announced the time frame for that yet, but you should expect to see solutions that allow hundreds or even thousands of racks in an Outpost.

Right, so the question was: does an Outpost have to come as a rack, or can we use it in existing racks, especially where we have custom PDUs? Unfortunately, the answer is no. One of the things I feel really strongly about is making sure that we don't deviate Outposts from what we do in our data centers. In our data centers we deal with racks, we always ship full racks, and we think of the system as a single combined unit, so I don't see an obvious path to how we'll get to individual servers. But we always listen to what customers are asking from us, and if that is a need that you have, reach out to us, let us know, and we'll always try to figure out ways of solving these problems.
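Going back to the Auto Scaling question above, a minimal sketch of the two-group pattern might look like the following. The launch template name, subnet IDs, and capacity numbers are placeholders; the idea is simply that the Outpost group is capped at the capacity that physically exists locally, and the in-region group absorbs anything beyond that.

```bash
# Group 1: capped at the capacity available on the Outpost (subnet is the Outpost subnet).
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name app-outpost \
  --launch-template LaunchTemplateName=app-template \
  --min-size 2 --max-size 6 \
  --vpc-zone-identifier subnet-0outpost0000000000

# Group 2: same workload, scaling into regional subnets once the Outpost group is full.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name app-region \
  --launch-template LaunchTemplateName=app-template \
  --min-size 0 --max-size 20 \
  --vpc-zone-identifier "subnet-0regional000000001,subnet-0regional000000002"
```

Both groups can then watch the same metric (queue depth, request count, and so on) through their own scaling policies, with thresholds set so the regional group only grows after the Outpost group has hit its maximum.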
Thank you. It was a good answer, by the way. The question was: what's the survivability of an Outpost if the WAN connection fails? The specific example was: sometimes we lose networking for three to four days. Losing networking for three to four days would make me nervous; I think you'd have to be very cautious about how you built your applications. If an instance is running in an Outpost and you lose WAN connectivity, it's going to keep running. Now, in order for that instance to be useful, you probably need to use local DNS; you won't be able to use Route 53 back in the region. There are ways you can combine local DNS with Route 53 back in the region, so you can create configurations where you get the best of both worlds, and this is another place where we're likely going to try to figure out solutions to help customers. Generally speaking, I feel more comfortable with a few hours as a time frame versus a few days; definitely, if you plan to disconnect for two weeks, that's beyond what I would consider a good use for Outposts; I would say that's not the right solution. Again, bring these kinds of use cases to us; we're happy to talk with you about them and work through them with you.

So, Anthony, just one thing that I would suggest on that topic, and this is something we've been looking at: depending on how you develop your applications, you really want to rethink or revisit your architecture. To Anthony's point, if you're telling me a few minutes of downtime, or network downtime, or even a few hours, is okay, I personally know how we've developed a lot of our applications, and one of the problems developers have with AWS is that they've made it very, very easy, so you don't encounter the problems you would have if a back-end service like CloudFormation that your app depends on is suddenly not available. So if you do a lot of dynamic scaling inside of your application, of course that's going to be a problem on-prem. I think, again, the thing we have to be very careful of is that the context is very close but a little bit different, so you always want to ask: what is different with one AZ, what is different when you lose the control plane? But it works; at this point we've tested it for a long period of time, and it does work.

Right, the comment was that they're often in remote locations where something political happens and you lose connectivity for three or four days, or potentially you're on a ship and the ship might be disconnected for weeks at a time. One of the things I'll say is, I love Rich and his team, because one of the first things they did was to start testing disconnecting the Outpost to make sure their application would behave the way they thought. Any time you're dealing with failure modes, the way to make sure you're going to handle them is testing: test constantly, test that scenario. It's not an easy thing to do, so I would say that, especially for those multi-week outages, it's not the right fit. If you're willing to really put the effort into it and work with us, I think in the short term we can probably figure out solutions for you. I hope, and I expect, that next year's re:Invent will have lots of awesome talks from customers like Rich and others, talking about how they're solving these problems with Outposts, and that this will get easier over
time as more and more patterns emerge. Next question.

[Music]

All right, the question was: with the local gateway, can you steer traffic from the region, through the local Outpost, to the local network? The answer right now is no; we've specifically disabled that. It's not really a technology limitation; we were concerned it would create weird behavior for customers that they didn't expect, mostly duplicated routes and ambiguity about which path was taken. If that's an important feature for you, that's feedback that we would love to have. We're really trying to balance making it work and not creating problems for customers with these types of things, so I would encourage you to give us feedback if that's not the right answer for you. Yeah, and to add to that, for my situation, and it might be for some of you, I'm bridging my software applications with a customer's network, and so, to your point about boundaries of trust, you may inadvertently cause a penetration into what you think is a secure network if you're not very careful about it. So I think we'll probably have some similar asks about that, but at the moment we don't think we know enough about how this is going to behave.

Right, okay, the question was about the pricing model. All of the pricing is available on the website today. Roughly speaking, it looks very much like a three-year Reserved Instance; you can pay for it with all the same mechanisms, partial upfront or all upfront, with roughly the same terms, so I would encourage you to look at the website for pricing.

The question was when we expect Graviton support for Outposts. I love Graviton; it's one of the coolest things we announced this week, besides Outposts obviously. We announced a preview for Graviton2, so as Graviton2 becomes generally available, you should expect to see it in an Outpost shortly thereafter. All EC2 instances that we launch moving forward, you should expect to see in Outposts, usually almost immediately.

Right, the question was about trust boundaries and whether data could leave an Outpost and go into the public cloud, and the answer is yes, if you actually push it. You have connectivity to, say, S3 from the local Outpost, and you can absolutely push data into the cloud, but you're in control of that; it's not something that we're going to just transparently do underneath the covers.

Sure, right, so the question was: if I have a bunch of different offices, can I have Outposts in different offices and basically use that as a high availability strategy? You absolutely can create multiple Outposts, and you can tie them to different Availability Zones, and when you do that you'll inherit a lot of the same characteristics of an Availability Zone. But we can't guarantee that your multiple offices are actually far enough apart to survive natural disasters, or that they have genuinely diverse power feeds from the utility; that's a really complicated topic that takes a lot of effort to figure out. The other one is diversity of fiber: one of the things we put a lot of effort into is making sure that different Availability Zones have different fiber connectivity, so it's unlikely that you would lose connectivity to all of them at the same time. So there's a lot of effort that goes into building an AWS region. It may be possible to get something that's a close approximation of that, but my suspicion is it's always going to be easier in the region. In fact, we didn't really talk about it in the presentation, but I
do want to emphasize: if you can run in the public cloud, it's going to be cheaper, it's going to be more robust, and it's going to have higher availability, because not only do we design our hardware for EC2, our data centers are designed specifically for EC2; we have really advanced power monitoring and cooling, and so you're almost always better off being in the cloud unless you have a really strong need to run stuff locally.

The question was whether we had looked at supporting GovCloud, and the answer is yes. I'm not sure if we put a time frame on the website, but it will come soon. GovCloud, like other things, has special accreditation requirements, and it takes time to go through those accreditations; we're working on it, though, it will come.

The question was: is an Outpost compatible with Direct Connect? Yes; in fact, Direct Connect is highly, highly recommended. For the connection going back to AWS you have two options: you can go via the public internet with the built-in VPN, or you can use Direct Connect. Direct Connect is definitely going to give you the best possible experience with an Outpost. That's correct; the observation was that you can use Direct Connect to have consistent experiences without involving your ISP, and that's absolutely right. I actually think the combination of Direct Connect and an Outpost gives you an experience with very high availability, you're not subject to congestion on the internet or random failures from internet-facing traffic, and it also gets you jumbo frames, so you can drive much higher bandwidth. Lots of goodness comes from Direct Connect with Outposts.

[Music]

Right, the question was whether we plan on working with colo partners to rack and stack servers and do installations. Over the coming year, one of the things we're going to focus really heavily on is making the installation experience as clean and simple as possible. We're very open to opportunities to work with third parties to make sure we can meet customers wherever they need to be. Ultimately, whether it's Outposts or Local Zones or even Wavelength, the common theme of all of it is that we want to have AWS everywhere our customers need it to be, and we're trying to figure out how to do that in the most effective way, to meet as many demands as possible.

So, right, the question was: what's the difference between a Local Zone and an Outpost? Ultimately it's all the same fundamental technology, because it's all Nitro underneath the covers. The difference between a Local Zone and an Outpost is that you can choose where an Outpost is relative to the rest of your network. That means that if you want sub-millisecond latency, you can actually control the fiber distance to whatever system you care about; you can inject it deep within your network. With a Local Zone, you're always going to be transiting some kind of backbone fiber or something like that. So ultimately it comes down to latency and how much control you want over the physical hardware itself. Okay, the question was whether the services are the same. It's different, just because of the nature of the way Local Zones work; I'm not actually sure what we announced in terms of services with Local Zones, so I would suggest checking the website. I just didn't catch that part of the keynote, so I don't have the answer offhand.

Great. The question was minimum bandwidth and maximum latency tolerance, and we just have a few more
seconds. So, on latency: we've tested up to 150 milliseconds; I think that's where we're at right now, and this is something we're going to push the envelope on down the road. On bandwidth: we recommend a gigabit of bandwidth, although the rack really only needs 10 megabit, but we think the best experience is going to come with a gigabit. Just one more question before we have to stop. Okay, the question was how it looks with multiple accounts. The answer is you can absolutely use multiple accounts; in fact, you can share an Outpost with any account within an organization, so if you use AWS Organizations and have many accounts, that'll work just fine with Outposts. So, we're out of time. Thanks, everybody, I really appreciate it. [Applause]
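As a footnote to that last question: one way to share an Outpost across accounts in an organization, assuming the Outpost ARN is shareable through AWS Resource Access Manager the way other shareable resources are, would be a resource share like the sketch below. The ARN and the account ID are placeholders.

```bash
# Share the Outpost (by its ARN) with another account in the same organization.
aws ram create-resource-share \
  --name outpost-share \
  --resource-arns arn:aws:outposts:us-west-2:123456789012:outpost/op-0123456789abcdef0 \
  --principals 210987654321
```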
Info
Channel: AWS Events
Views: 10,340
Keywords: re:Invent 2019, Amazon, AWS re:Invent, CMP302-R1, Compute, Philips, AWS Outposts, Not Applicable
Id: n7AWdZVCq7g
Length: 59min 57sec (3597 seconds)
Published: Sun Dec 08 2019