AWS re:Invent 2021 - Keynote with Dr. Werner Vogels

Captions
[Music] A road trip to Vegas. [Music] re:Invent 2021. How has it been 10 years? Please welcome the Chief Technology Officer of Amazon.com. [Applause] Nope. [Music] Now that's what I call a ride. But first, music. Shall we? [Music] What are your stats? AWS Lambda, which is an event-driven compute service for dynamic applications, and you have to run no servers, no instances. Server-free. Back up. Wait, I can't stop here, it's bat country. [Music] It was only last year: it's never been a better time to use your knowledge, skills, and talents to make a difference in the world. [Music] Can you believe it? 10 years. 10 years. [Music] From 2012 to 2021, a celebration of innovation on behalf of our customers. I'm ready, Las Vegas, are you? Please welcome the Chief Technology Officer of Amazon.com, Dr. Werner Vogels. [Applause]

So, welcome, Las Vegas. How are you doing this morning? Hey, did you like my choice of music today? Can we get an applause for the Catfish Quartet out there? They were absolutely amazing. And this may be year 10 of re:Invent, but that doesn't mean I'm not going to wear a t-shirt. I've realized that many of you have little insight into the 80s punk and new wave era, so in that case, for the few of you from the UK that actually know this band that had 19 albums on the charts: The Stranglers.

Okay, 10 years. 2012, when I did the first keynote at re:Invent, is still very much alive in my head, because I thought that was such an amazing first re:Invent. I had never done anything like that before. The cloud was five years old at that time, and we already had quite a bit of experience with it, and we were giving you lots of advice that has come back over the years. I think what you will see today is that I will be reusing some of the advice that I gave you in that first keynote, because it is still valid today.

If I look back and think about how this all got started, I'll take you a little bit along my memory and see how things have evolved over time. Things started off, of course, with innovation that was constrained, because before the cloud you had to get massive investment: you had to buy hardware, you had to hire IT people, things that had nothing to do with actually building a product. The one thing that the cloud really did in those days was take everything that was constrained, all the hardware pieces, and make it all programmable. Suddenly, getting access to capacity was just a click of a button. Whether that was networks, which became VPC, or AZs as data centers, ELB for load balancers, RDS, S3, EC2: all of those were physical resources before, but now became virtually programmable. And that made all the difference, because it wasn't only that you could scale up by buying more hardware; you could scale down just as easily, with a click of a button.

So if we look at EC2 in those days, it was really simple: one single family, three different sizes, and the interface was simple too: create, launch, terminate. The kinds of things that you've done over the years with EC2 instances have become, let's say, very ambitious, and almost every day you ask us for a different instance family: you want the storage-optimized ones, you want compute-optimized, you wanted large-memory instances where you could run your SAP HANA.
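Here is a minimal sketch of that create/launch/terminate lifecycle through boto3; the region, AMI ID, and instance type below are placeholders, not anything from the talk.

    # Launch and terminate an EC2 instance: the "create, launch, terminate"
    # interface described above, via boto3. IDs below are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Scaling up: ask for capacity with a single call.
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI
        InstanceType="m5.large",          # one of many instance families today
        MinCount=1,
        MaxCount=1,
    )
    instance_id = resp["Instances"][0]["InstanceId"]
    print("launched", instance_id)

    # Scaling down is just as easy: one call, no hardware to decommission.
    ec2.terminate_instances(InstanceIds=[instance_id])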
And so, where the first instances were all based on a Linux hypervisor, we needed to make massive investments and innovations in our data centers to make sure that we could get the next generation of compute platforms into your hands as well. Nitro, which I talked about two years ago, going in depth into exactly how we were using it: the Nitro hypervisor made it possible to introduce all these new hardware platforms for you. And especially our shift to Arm has had a significant impact on cost and performance for everyone. But to be honest, the most popular launch at the last re:Invent was actually a Mac. The one thing that Apple has done is follow the same path that we've taken at AWS: they also went to their own silicon and built their own Arm chips. So today I'm happy to announce that you will get your hands on the EC2 M1 Mac instances. Let's make that the most popular launch this year as well. Just like with the Graviton processors, the price performance of Arm is significant; I think Apple claims for the M1 Mac instances a 60 percent price-performance improvement over, let's say, the Intel platforms that they used before.

Now, all these different instance types mean that you are doing a lot of work, really a lot of work, and that results in 60 million launches of EC2 each day. That is double the number of launches that we saw in 2019. You can't do this on bare metal. And remember, these are launches per day; that's not even the steady state of running EC2 instances out there. Quite impressive, don't you think? 60 million instances a day.

Now, as I said earlier, the cloud removed those constraints. However, it didn't remove all constraints. If you go back to the 2012 presentation I gave about the new world and removing constraints, I promised you I would fix the speed of light. It's still on my to-do list. So there are a number of what I would call laws that are continuously the constraints we have to deal with, whether that is latency and bandwidth, the business of your network connections, or the law of the land: data residency. That last one is actually one of the, I'd say, easier ones, but it's one of the areas where we're making the most progress: working together with regulatory agencies around the world to educate them about cloud and what the capabilities are, and that the security and the protection that they get with data in the cloud is much better than they would ever be able to get in their own data centers.

Now go back to 2006, and think about removing, or dealing with, these constraints. In 2006 we only had North Virginia; that was the first region we launched. You can imagine that for customers in Japan, or in India, or in South Africa, latency to North Virginia was significant. So we launched the other regions; the first of those was in Ireland in 2007, and then we launched Singapore and North California. Customers could now start deciding which region to use based on where their customers are. For example, a company in India, Gaana, a media streaming company, would be streaming their content out of Singapore, but would actually be using North California for doing transcoding, because remember, these regions do not all cost the same. So they could make choices: things that weren't latency-critical could go for lower cost, versus maybe a bit higher cost but lower latency. And there are applications that could not have been built if you didn't have these regions closer to you.
Now take, for example, Alexa. I think in 2006, or 2009, or even 2013, Alexa would not have been possible. And why not? Because you need ultra-low latency to make sure that you can reach the Alexa voice service, which sits in the cloud, and if you don't get a response back from Alexa within a second, it doesn't feel like a natural conversation. As such, the advances in hardware and GPUs and machine learning, both in hardware and in software, have enabled the capabilities of Alexa, but we still needed to drive the latency down, so that the network would not be in your way.

To do that, we developed this global footprint, where today we have 25 regions and 81 Availability Zones across six continents. And I promise you, by the time we finish this keynote, you will know that it is actually on seven continents. We're continually working to expand this: there are nine more regions planned that we will bring online in the coming two years, and each of them will continue to help you reduce your latency, but also address data residency requirements.

Now, if you think about latency, it's not only the regions that are important to you. We have 310 points of presence around the world, and we use these PoP locations to deliver content at high speed for CloudFront. But it's not only a content distribution network: with CloudFront Functions and Lambda@Edge, you're also able to do computation at the edge. And if you make use of S3 Transfer Acceleration, you make use of each of these endpoints to move data into S3 at really high speed, beating the bandwidth limitations that you would otherwise have.

Now, that's not the only way that we bring you closer to AWS capabilities. In the past two years we've been building Local Zones, which bring our services even closer to you. We currently have 14 Local Zones. We started off in the US to understand how to best build these zones, how to best use them, and how our customers would be using them. There's a subset of capabilities running in a Local Zone, which is attached to a region: there's EC2 and EBS and ECS that you can run there. Basically, you can build a VPC in the region, extend that VPC into a Local Zone, and move your containers back and forth seamlessly within the same VPC.

Now, I am very excited to announce that we will be expanding Local Zones internationally, across the world. There are at least 30 new AWS Local Zones coming online: in Europe, in South America, in Africa, in Asia, and in Australia. And of course, we'll have one in my home city of Amsterdam.

One of the very interesting applications I've seen that makes use of Local Zones is a company called JackTrip, and they do remote live performances of music. To be able to do that, the maximum time for synchronization is about 25 milliseconds. So imagine there's a group of musicians spread out through Austin, after all the live music capital of the world. If they had to connect to one of the regions, let's say in Virginia or Oregon, latency would be too high; it would be impossible for them to play synchronized music. So they connect to the Local Zone in Houston, which makes it possible for them to rehearse and actually do performances live over the internet, remotely. And given the situation that we've been in for the past two years, this was really important for them.
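To make the "extend your VPC into a Local Zone" idea concrete, here is a small boto3 sketch; the VPC ID, CIDR block, and the Los Angeles zone names are illustrative stand-ins.

    # Extending an existing VPC into a Local Zone: opt in to the zone group,
    # then create a subnet whose availability zone is the Local Zone itself.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-west-2")

    # Local Zones are opt-in, per zone group.
    ec2.modify_availability_zone_group(
        GroupName="us-west-2-lax-1", OptInStatus="opted-in"
    )

    # A subnet in the Los Angeles Local Zone; resources placed here run close
    # to end users while staying inside the same VPC as the parent region.
    subnet = ec2.create_subnet(
        VpcId="vpc-0123456789abcdef0",        # existing VPC in us-west-2
        CidrBlock="10.0.128.0/20",
        AvailabilityZone="us-west-2-lax-1a",  # Local Zone, not a regular AZ
    )
    print(subnet["Subnet"]["SubnetId"])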
Now, that's not the only way we've been trying to overcome the limitations of the speed of light. AWS Wavelength is a good example of that: with the low latency promised by 5G, we don't want to add more latency on top of it for you to go to a region or to a Local Zone. That's why we put AWS capabilities inside the 5G access points, again trying to overcome the restrictions that the speed of light and latency give us. Quite a few of our customers are running things like machine learning inference on those instances, or game streaming, and this already starts to get close to the network.

So we've taken a look at what we've done in the physical infrastructure; now let's look at the networking side, and let's go back to 2006 again. When we launched North Virginia, the world was pretty flat: basically, you launched an instance, we assigned an IP address to it, and the network across it was flat. If you wanted to talk to other instances, you needed to go over the public internet. The early enhancements to EC2 really focused on the networking: how to do auto scaling, how to do load balancing, how to get insight into your network with CloudWatch. One of the first things we did was introduce Elastic IP addresses, meaning that you could have persistent IP addresses instead of one assigned at boot time.

Now, the most important launch in the networking space has been VPC, which we launched in 2009. This allowed you to cordon off a piece of the cloud, assign your own addressing blocks to it, and then connect it back to your own data centers, so it seemed like your data centers were seamlessly extended; you could use VPN for that, or later Direct Connect. In the early days this was still a pretty flat network. In 2013, however, we started making VPC the default, and EC2-Classic, which we recently started deprecating, is now the old-style network; VPC really is the one that you're using by default.

If you look at how the AWS backbone has evolved over time, it looks very different from what it looked like in, let's say, the late 2000s. The backbone of today consists of a fully redundant 100-gigabit network, and it's not just for our control plane: you can use our backbone too. Where in the past, if you wanted to communicate between two VPCs, you would go over the internet, now you just transfer over the AWS backbone. This is really one of the most highly scaled, purpose-built global networks ever assembled, and it's growing really, really fast. There's a whole variety of core networking products that we've built on top of it; I mentioned earlier S3 Transfer Acceleration, which basically makes use of this backbone to get your data to S3 as fast as possible.

Now, even with all of these components, building a global network connecting maybe hundreds of your offices to the cloud is still a big challenge. It's really not the first time that a customer has shown me a massive spreadsheet which they use to manage all the connections between manufacturing sites and back offices into the cloud. So there's a lot of work going on there, and quite a few of these customers are literally running thousands of VPCs, if not tens of thousands. So we started to think: how can we help these customers overcome all the heavy lifting that you have to do if you have a very widespread network yourself that you need to connect to the cloud?

So today I'm happy to announce AWS Cloud WAN, which gives you the ability to build, manage, and monitor global private wide-area networks using AWS. How does this work? Imagine you are a very large global company: you have dozens of manufacturing sites around the world, you maybe have offices in each of the big cities around the world, and you need to connect them all to AWS, because that's where your applications are running. The first thing you do is select the regions you would like to use, and then, once you've defined this, all your remote users and sites and data centers will automatically connect to the geographically closest location using a VPN or Direct Connect. This is built for you in minutes, using the big AWS backbone: we basically give you a highly reliable and highly available software-defined wide-area network running over the AWS infrastructure.

And it's not just that: you can also segment these pieces of software-defined network that you've created. Maybe you create a separate segment for your corporate offices and a different segment for your manufacturing sites, and these segments cannot communicate with each other unless you explicitly allow it. Of course, we'll give you a dashboard with which you can monitor all your network activity; it gives you visibility into availability, you can troubleshoot connectivity and performance issues, and so on. This would not be possible without the wide range of partners that we've been working with, whether that is the telcos, the system integrators, or the software-defined wide-area network vendors, who help us with their technology to build a software-defined WAN privately for you over the AWS backbone.
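To sketch that segmentation idea, here is what a Cloud WAN core-network policy might look like, written as a Python structure: two isolated segments, with communication only where it is explicitly shared. The field names follow my reading of the policy document format at launch and should be checked against the current documentation.

    # Hypothetical sketch of a Cloud WAN core-network policy: segments are
    # isolated by default; cross-segment traffic must be allowed explicitly.
    import json

    core_network_policy = {
        "version": "2021.12",
        "core-network-configuration": {
            "asn-ranges": ["64512-64555"],
            "edge-locations": [{"location": "us-east-1"},
                               {"location": "eu-west-1"}],
        },
        "segments": [
            {"name": "corporate", "isolate-attachments": True},
            {"name": "manufacturing", "isolate-attachments": True},
        ],
        "segment-actions": [
            # The only place corporate and manufacturing are allowed to talk.
            {"action": "share", "mode": "attachment-route",
             "segment": "corporate", "share-with": ["manufacturing"]},
        ],
    }
    print(json.dumps(core_network_policy, indent=2))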
Now, think about what we've talked about: there's much more than regions by now. We already said that to overcome latency challenges we give you points of presence, we give you Wavelength Zones, we give you Local Zones. But the cloud has gone much further, and you've asked us for so much more than just these zones and these points of presence.

One thing, of course, is that another way to overcome latency requirements is by moving AWS closer to your data centers, and we've done that with Outposts. Over the years we really heard that you have certain workloads that you cannot move out of your data center; it might be that your SAP system is connected to a database that, for some reason, you're not willing to move. However, you want to build all your new applications, or even older applications, running on AWS. So the prime reason for Outposts is to beat the latency between the AWS cloud and the capabilities that you're running in your own data centers: by moving these 42U racks into your data centers, you can have extremely close connectivity.

Now, if I look at customers that have been very successful with Outposts, it's for a variety of reasons. FanDuel does it to beat latency, reducing the latency for betting between themselves and their customers. First Abu Dhabi Bank in the UAE makes use of Outposts to meet data residency requirements, and they actually make use of multiple Outposts in different locations in the UAE, so they can do business continuity using Outposts. And Philips does this for local processing, as well as for data residency requirements around healthcare.

Now, we started off with a 42U rack, but we also give you two other Outposts form factors, so that in, for example, retail locations or your edge offices, you can bring AWS into those offices. There's a reduced set of capabilities running on these Outposts, of course, and again, they connect back to a region from which you can monitor and manage all the Outposts capabilities that you have. They do have all the important ones, EBS, EC2, ECS, and the interfaces run directly in your data center.
So those are your data centers and your offices, but what about the other billions of devices out there that are connected? How can you actually connect all these devices to the cloud? How can we help connect those? I call this the internet of billions of things. For that, we give you FreeRTOS as a stable base, an operating system for these devices; we give you AWS IoT Core to manage all of those devices and to create digital twins and shadows and things like that; or, if you have a need to run the IoT capabilities on your devices themselves, we give you Greengrass.

Now, for Greengrass I've seen a number of very interesting applications. One of them is autonomous trucking in hazardous mining conditions. These trucks are autonomous; they'll go into a mine, and on that truck, Greengrass runs completely autonomously, collecting data, managing the device, and actually storing data at the gateway, which, once connectivity is restored, it will upload to the cloud. So it's really the internet of billions of things.
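The pattern behind that trucking example, collect locally while disconnected and upload once connectivity returns, can be sketched in a few lines. This is plain Python with boto3, not the Greengrass SDK; the buffer path, bucket name, and connectivity probe are all stand-ins.

    # Store-and-forward at the edge: buffer readings on local disk while
    # offline, drain them to S3 when the network comes back.
    import json
    import pathlib
    import socket

    import boto3

    BUFFER_DIR = pathlib.Path("/var/spool/telemetry")  # local gateway storage

    def is_connected(host: str = "s3.amazonaws.com") -> bool:
        # Crude probe: can we open a TCP connection to the upload endpoint?
        try:
            socket.create_connection((host, 443), timeout=2).close()
            return True
        except OSError:
            return False

    def record(reading: dict) -> None:
        # Always write locally first, so nothing is lost while offline.
        BUFFER_DIR.mkdir(parents=True, exist_ok=True)
        (BUFFER_DIR / f"{reading['ts']}.json").write_text(json.dumps(reading))

    def flush_to_cloud(bucket: str = "example-telemetry-bucket") -> None:
        # Once connectivity is restored, upload and delete the backlog.
        if not is_connected():
            return
        s3 = boto3.client("s3")
        for path in sorted(BUFFER_DIR.glob("*.json")):
            s3.upload_file(str(path), bucket, f"truck-42/{path.name}")
            path.unlink()  # remove only after a successful upload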
But those are the billions of things that were already connected, or could be connected, to the internet. There are also many devices, definitely in manufacturing environments, that are not connected. They're old: the typical age of manufacturing equipment in the US is 27 years, which means these devices were not modern data generators at all. So for that we give you Monitron, a physical appliance with a bunch of sensors that captures the data coming off these devices. They observe vibration and temperature and things like that, record all of it, move it through a gateway into the cloud, and you can manage it there, for example for preventive maintenance.

There's also something very special around one particular data stream, and that is video. For many of us, video is still something to be watched, but in essence, video is nothing else than a data stream to be analyzed. Many manufacturing sites, but also retailers, have many cameras on site, and you can make use of those data streams and analyze them. With the Panorama Appliance, you can move custom machine learning models onto it and run inference there on the data streams coming off IP cameras. For example, I've often been asked by retailers: how can I understand how people are moving through a store, how much time they're spending in front of this promotion, or the impact of changing the layout of my store? And I've always told them: you don't really have that data. Now, you have 30 security cameras in your store, but you have someone watching those streams to look for fraud or theft or things like that; in essence, though, those are data streams that you can use to analyze exactly how people are moving through your store, and how much time they spend where. For example, the airport of Cincinnati is making use of the Panorama Appliance to understand how people are moving through the airport. That's all because video has become a data stream to be analyzed instead of something to be watched.

Now, if I think about environments that are even further out, outside of manufacturing or retail, there are all sorts of requirements that suddenly come up there as well. Most of them need to be compliant, especially, for example, in medical environments; or they need to be rugged, because we use them in environments like deserts; or they really need to be able to be remote. And this is where I deliver on that promised seventh continent, because a bunch of these devices have been taken to Antarctica, and I'm happy to announce that the seventh continent now also runs versions of the AWS cloud.

The rugged edge is what we call this, and you have seen most of these devices over the years. There are Snowcones, which are small and very mobile; Snowballs, which are still transportable but a bit bigger and heavier, and come in a storage-optimized version and a compute-optimized version; and then, of course, the Snowmobile: if you have so much data, petabytes of data, sitting in your data center that you need to move to AWS, Snowmobile is the way to go.

Now, this is the current state of the cloud world, but can we go even further? What is the next frontier that we really should be addressing when we think about cloud computing? Of course, we think about space, we think about the deep sea, we think about Antarctica. Let's start off with space. AWS Ground Station, again, was one of these approaches to virtualize something that was ridiculously expensive if you had to build it yourself. By making ground stations programmable, we are certainly enabling a lot more innovation in space. So again, you have a ground station, data comes off the ground station, you basically just rent antenna time, you get the data off your satellites and move it into AWS. One example of that: the Mohammed bin Rashid Space Centre in the UAE launched the Hope probe in 2020, and in 2021 it arrived at Mars. It is now circling Mars, doing research about the Martian atmosphere. That data goes through ground stations, is moved into AWS, pre-processed, and then within 20 minutes made available to the global research community.

Now, to tell you more about space and innovation in space, I'd like to welcome Payam Banazadeh, the founder and CEO of Capella Space. Thank you. [Applause]

Thank you, Werner. Imagine if no change in our world went unnoticed and unmeasured. Imagine if we could build a digital clone of our physical world that was getting refreshed in real time. Imagine if we could have seamless and automated interactions between our physical and our digital world, such that you could set a trigger that involved not just events in the digital world, but could also be triggered and acted out in the physical world. Let me give you an example. Imagine you could monitor all the ports and all the shipping lanes in the world, all the time, in real time, and you could set the following logic that was working for you in the background: if there are more ships than container capacity, then increase trucking support; if that's not possible, move some of the ships in this shipping lane to this other shipping lane, to this other port, optimized for ETA. You would be able to catch issues and potentially prevent supply chain disruptions from happening in the future.
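That conditional flowchart can be written down as a toy rule; every name and threshold below is invented for illustration, not part of any Capella or AWS API.

    # The port-rebalancing logic described above, as a toy Python rule
    # (everything here is hypothetical).
    from dataclasses import dataclass

    @dataclass
    class Lane:
        name: str
        eta_hours: float  # estimated time via this alternate lane

    def rebalance(inbound_ships: int, container_capacity: int,
                  can_add_trucking: bool, lanes: list[Lane]) -> str:
        if inbound_ships <= container_capacity:
            return "no action"
        if can_add_trucking:
            return "increase trucking support"
        # Otherwise divert ships to the lane optimized for ETA.
        best = min(lanes, key=lambda lane: lane.eta_hours)
        return f"divert ships via {best.name}"

    print(rebalance(12, 10, False, [Lane("lane-A", 30.0), Lane("lane-B", 22.5)]))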
In 2019, on this same stage, a company called Saildrone told you about deploying sensors in the ocean to collect valuable measurements in real time. There has also been significant growth in the quantity and quality of terrestrial sensors in the last few years. However, the next frontier that is now becoming accessible is space, and sensors in space have a truly unique vantage point. They are needed for us to have a connected world. In fact, to make truly global decisions based on global information, you need global access, all the time. For this, space is the missing link.

Over the last six years, we've been designing and building some of the most sophisticated satellites on our planet. We can track changes on the surface of our planet to millimeter accuracy. These are powerful satellites, and they have some magical capabilities: we can take images through the storm, at night, in all conditions. That gives us reliability, and it gives us visibility into our planet in all conditions, and that's a fundamental requirement for getting to real-time monitoring. In fact, this picture of the George Washington Bridge was taken at three in the morning, on a stormy night, from space.

We have five of these satellites orbiting Earth right now, and we're launching more and more of them in the coming quarters. As we launch and scale our constellation, we're going to be accumulating a lot of data; in fact, over the coming years, we're going to have more than 500 petabytes of data accumulating, coming down from space, from our satellites. That's a lot of data. Handling that much data at scale is a challenge on its own, but it's an even greater challenge when you combine our data with data from other space sensors, other terrestrial sensors, and other ocean sensors. In order to lay the groundwork for integrating our sensors with other sensors in real time, with low latency, we had to build our business on a completely different foundation than has traditionally been tried before. We had to think about so many things: a distributed network, no-human-in-the-loop operations with full automation, real-time processing of data, unlimited storage and processing power, and instant scalability.

This is why we work with AWS to enable this future. We needed a company that could support us with resiliency, could scale with our constellation and demands, could support our automation, and, by the way, do all of this in real time. So, 500 petabytes of data? No problem, scaling that out on S3. A flurry of new hurricane images coming down? Handled with auto-scaling infrastructure on EC2. And if we need that hurricane image even faster, that's pretty simple: we'll just grab the next pass at the AWS Ground Station. The AWS Ground Stations are integrated into the AWS fabric; they're also in close proximity to the data centers, which means the data going from our satellite to the ground station, through the secure VPC, into the cloud, happens within milliseconds.

With our entire system on AWS, and our platform built on a robust API, ordering imagery from space is like ordering food from your favorite food delivery app: fully automated and with no human interactions. Let's take a look at how easy it is to request an image from one of our satellites. You pick a location of interest, let's pick Las Vegas, and literally three clicks into it, you've tasked the satellite. We take care of the rest: the satellite goes over the area of interest, collects imagery, comes over the ground station, dumps the data down, the data goes to the cloud, gets processed, and boom, you've got the data for your consumption.
And here's the best news about it: this is all an open API, which means you are one API call away from accessing a global network of extremely powerful satellite sensors. That means, if you want to automate your end of this process, you can use our open API and just blend us into your workflow. Let me emphasize that: you are, literally, right now, while sitting in this room, one API call away from a global network of powerful satellites. This is a game changer. Machine-to-machine operations and full automation are, together, the fundamental catalyst for taking action in the digital and in the physical world.

So let me tell you a few stories from this year where machine operations led to humans making better decisions. Oil spills were detected using our imagery, automatically. A Chinese dam failure was verified using our imagery, automatically. Volcanic researchers identified new vents by peering through the smoke of an ongoing eruption, automatically. Local agencies were informed of deforestation in Amazonia, automatically. And of course, during Hurricane Ida, our satellites were monitoring the daily progression of the hurricane and the flooding, through the clouds and storm, day and night, and pushing out updates in real time to our customers, automatically. And many, many more. I'm sure you can tell we're really, really excited about this capability.

When I took the stage, like, seven minutes ago, I opened by showing a conditional flowchart of setting triggers and connecting our physical to our digital world. Thanks to companies like AWS and Capella, that future is a reality today. So before I leave, I ask you all this: what would you do, and what would you build, if you were one API call away from seeing our planet and its billions of changes? Thank you very much. [Applause]

Thank you. There's so much interesting innovation going on in space; it absolutely is the next frontier. As always, AWS has been very closely connected to the startup world: recently we launched the robotics startup accelerator, we launched an early-stage accelerator for startups in Europe, and of course we also recently ran a space accelerator, which had 10 different companies that are making use of the AWS cloud and are going into space. One of those is very interesting: it's called D-Orbit, and they're working on supply chain logistics in space. They're also working together with an AWS partner called Unibap, who actually brings the cloud into space. So if, in the future, you're looking for, let's say, a lunar region, we may not be that far away from that. I hope so.

So, if you think about all these different components of the cloud, there's a spectrum of hardware and devices and services that expands the reach of the cloud way beyond regions and Availability Zones. I have nicknamed this the everywhere cloud. Customers will always want to manage their applications centrally, even though those applications are distributed, but you can push them out all the way to the edge, and you see AWS services running transparently regardless of whether you're running them on the rugged edge, close to your satellites in space, or in your own data centers.

And it is not just hardware. From day one we've been thinking about all of our software capabilities that need to span all of these different components that make up the everywhere cloud. One of the earliest ones is still one of my favorites: AWS Storage Gateway, which brings the power of cloud storage directly into your own data centers.
One of the things that I really want you to walk away with is that, yes, this is a massively distributed system, but it's not decentralized. It's distributed, but not decentralized. And I would like to spend some time on one of the services that touches each and every component of the everywhere cloud, and that is identity and access management. AWS IAM gives you very fine-grained control over any of the resources in AWS: you can specify who can access which services, under which conditions, and it gives you identity.

At first it appears as if IAM is relatively simple; after all, permission systems have been around for a very long time. But consider the requirements for IAM: we literally have millions of AWS customers, thousands of different types of resources, hundreds of AWS services, and this spans all of our regions, our zones, all the way out to the edge. So the requirements for IAM are mind-boggling, and they come down to a few things that really need to work: it needs to be ultra reliable, it needs to be absolutely secure, and it needs to be incredibly fast, because remember, IAM is part of every API call that you make.

So, given the importance of IAM, I'd like to pull back the covers a little bit and show you how IAM is designed and how it scales. I do this because it is truly an example of building a high-scale yet secure distributed system, and core to all of that is that you need to keep simplicity in mind, because otherwise you cannot scale to the scale that IAM needs to meet. The components in the overall system: you either use the console or the CLI to talk to the IAM control plane, which then talks to the IAM services that sit in each of the regions, or each of the devices that you're using.

IAM means two capabilities, identity and access management: authentication, who you are, and authorization, what you are allowed to do. I'll dive into authentication; I think that's one of the more interesting pieces. If you've ever set up an AWS account, which I think most of you must have done by now, you know that you get an access key, which is public, and you get a secret key. The secret key is used with every API request that you make, so that we understand who is actually, truly, making this call. So what do we do? We ask you to sign every request that you send to AWS, sign it with your secret access key, and this way we know exactly who you are. We do this with a signing process called Signature Version 4, SigV4 for short. So you sign the request, it goes to the AWS service, which then passes it on to the IAM service, which does exactly the same signing and sees whether the signatures match. If they match, your request is allowed; if they don't match, your request is denied.

Now let's take a look at what we do with signing. It's not your secret key itself that signs the request; the secret key is only part of the signing process. First, we concatenate the key with "AWS4" to indicate what procedure we're going to be using. Then you apply a one-way hash function to that first secret together with the date; then you take that result and the region, and again apply a one-way hash function; and in the end you end up with a key with which you're going to sign your request. So it's not just the secret key that we're using to sign: we actually derive a key for a particular date, a particular region, and the service you're trying to access within that region. And we do exactly the same thing on the other side.
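This derivation is documented as part of SigV4, so it can be written out exactly; here it is as runnable Python, with an obviously fake secret key.

    # The SigV4 signing-key derivation: the raw secret never signs anything
    # directly; it is folded through a chain of HMAC-SHA256 steps that bind
    # the key to a date, a region, and a service.
    import hashlib
    import hmac

    def _hmac(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    def derive_signing_key(secret_key: str, date: str, region: str,
                           service: str) -> bytes:
        k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date)
        k_region = _hmac(k_date, region)
        k_service = _hmac(k_region, service)
        # The final step binds the key to the SigV4 scheme itself.
        return _hmac(k_service, "aws4_request")

    key = derive_signing_key("wJalrEXAMPLEKEY", "20211202", "us-east-1", "ec2")
    print(key.hex())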
Now, to be able to do that, we need to have access to your secret key, of course. We could move your secret key into the IAM service everywhere, but that would be a massive violation of security principles: the secret key needs to stay in the IAM control plane, because that is where we can truly protect you. So what do we do? We repeat some of the steps of SigV4. The control plane creates a derived key; that derived key is just the secret, the date, and the region, and we store that in the IAM service.

Now, whenever a request arrives, it arrives at what are called the IAM endpoints, and there are literally millions of hosts in the IAM endpoints. What the team decided was to make use of the capabilities of these millions of hosts to accelerate the overall checking and signing process. Basically, the IAM service creates another derived key: it takes the derived key it got from the control plane and adds the service to it. So this is a key that is unique to one user, one service, one region, on one date. And with this, suddenly, you can cache it at the IAM endpoints and make the checking of the validity of requests really, really fast and really, really reliable.

So in essence, what we've built here is an intelligent hierarchical cache. There are many interesting aspects to all of that; we could spend a lot of time on, for example, what the cache validation protocol looks like. But at very high scale, we've been able to build an ultra-fast access control system that doesn't exist anywhere else. And you can imagine what the other components are: we really moved cryptography to the edge, making use of the millions of hosts that are the IAM endpoints.

So how big is this really? Why is this so important? It's because IAM routinely handles half a billion API calls a second. Just imagine, just for this keynote, 90 or 120 minutes long, I don't know how long we're going to take, how many API calls IAM has handled during that time. And the only reason that is possible is that the team made the system as simple as possible, because that way you can actually scale to these numbers while meeting the security requirements that a modern IAM system has at scale.
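Here is a sketch, with invented names, of the split described above: the control plane only ever hands out a key already bound to a date and a region, and the endpoint fleet binds in the service and caches the result.

    # Sketch of the hierarchical derivation and cache (names are my own).
    import hashlib
    import hmac

    def _hmac(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    def control_plane_key(secret_key: str, date: str, region: str) -> bytes:
        # Derived inside the IAM control plane; the raw secret never leaves it.
        k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date)
        return _hmac(k_date, region)

    _endpoint_cache: dict = {}  # (access_key_id, date, region, service) -> key

    def endpoint_signing_key(access_key_id: str, regional_key: bytes,
                             date: str, region: str, service: str) -> bytes:
        # Derived and cached at the endpoint fleet: one entry per user, date,
        # region, and service, which is what makes request checks so fast.
        cache_key = (access_key_id, date, region, service)
        if cache_key not in _endpoint_cache:
            k_service = _hmac(regional_key, service)
            _endpoint_cache[cache_key] = _hmac(k_service, "aws4_request")
        return _endpoint_cache[cache_key]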
Now, the principle there, of course, is that if things become more complex, you shouldn't start by building a complex system. All complex systems that actually work evolved from a simpler system that works. If you look at simple machines, these are mechanical devices that were used to make work easier. Most of them work on the principle of a small effort having a really big effect. Take, for example, an inclined plane: if you had to move this box from here up to there, it would take significant effort; using the inclined plane, however, you only have to use a little effort to get a major effect. There are six of these standard types of simple machines, and you can use some of these components to start building compound machines. If you take a screw, a lever, and a wheel, for example, it's easy to build a wheelbarrow out of them. But still, these compound machines are built out of simpler machines, and that's the reason why they work.

And of course, you may want to build fancier things: not just wheelbarrows, but maybe the nice Mustang that I drove up here, or maybe something completely different, a monster truck or a Formula One car. This is only possible because they're built out of primitives, and with all these primitives together you can construct these compound machines.

This is also the principle we've used from day one at AWS: we give you primitives, not frameworks. And why? Because if you have to build a framework, it probably takes four or five years before you have everything together, including the kitchen sink. That means that by the time you deliver it to your customers, it's already five years old, and then they have to build with it, for something that has to last for the coming ten years. It doesn't really work like that. What we wanted to do in the cloud is give you these small, primitive components, so that you can build exactly what you want to build, without us telling you how to build it. You can see these simple machines as being SQS and S3 and EC2. But you've always asked us for more of these simple machines, more of these components, purpose-built databases for example, which means that by now we have well over 200 of these simple machines, these services. And believe me, I know it's sometimes overwhelming, but remember: you have asked for this. It is basically your fault.

Now, each and every one of these services has a purpose, and I think about 95 percent of them have been created from your feedback, really helping you build exactly those things that you want to build. Some of these services are truly simple, but some of them are compound machines themselves. Take, for example, Lake Formation; we've seen Adam talk about that, and Swami talked about that. Lake Formation, under the covers, is built out of other AWS services: Aurora and DynamoDB and Glue and IAM, all these components together make up Lake Formation. Or, if you're building container applications, you may want to use App Runner, so you don't have to worry about Fargate, or about application load balancers, or IAM, and things like that. Or, if you're a web developer building web and mobile applications, Amplify is probably the tool you want to use: again, bringing in Lambda, bringing in S3, setting up Route 53, connecting you back to DynamoDB. Amplify really targets a set of developers that, in general, is not that happy doing back-end work. These are front-end developers that really want to build beautiful applications, but often are burdened with becoming full-stack developers, because they also have to connect to the back end.

So, if you think about that: most front-end developers are really good at the visual aspects of building an application; that might be HTML and CSS and JavaScript. But there are all these other things with web apps and mobile apps: frameworks, server-side languages, databases, servers, APIs, all these kinds of things that front-end developers don't really want to deal with. So the challenges for front-end developers are really, on the one hand, that the visual side is still manual for lots of them, and on the other, that they have to connect the front end to the back end, an area in which they're really not that interested.
To help you with all of that, I'm happy to announce today AWS Amplify Studio, a completely visual environment to build feature-rich apps in hours instead of weeks, and connect them back to the back end without you having to do anything. It's really a developer-first approach: a truly low-code visual environment where you can literally build web apps and mobile apps in a matter of hours. On the one hand, it will help current front-end developers move much faster; on the other, it will unlock this world for a new type of developer that may not have that much experience in front-end development. To explain AWS Amplify Studio to you in more detail, I would like to welcome Ali Spittel, senior developer advocate on the Amplify team. Ali. [Applause]

AWS Amplify is a set of tools that brings the power of AWS to front-end web and mobile developers. With Amplify, you can enable data storage, file storage, authentication, and hosting, and you can use the front-end libraries, hosting, and back-end resource provisioning either à la carte in an existing application, or end-to-end in a new one for a seamless experience. When I have an idea for an app, I cannot wait to get it built and shipped; development speed is so, so important. So I am so excited to introduce Amplify Studio. It allows me to build my ideas faster, and my designs are pixel-perfect without needing to write custom styling code. Amplify Studio makes it easy to work with AWS no matter your cloud knowledge; you'll find you can quickly build feature-rich, cloud-connected apps that have scale and security built in.

To show you what Amplify Studio can do, let's build an app together. I end up at a lot of technical events like this one, and I'm thinking of starting my own, so I'm going to build an app to show off the sessions. I want to put the sessions on a site so that potential attendees can start giving me feedback about them. I've gotten a couple of the session descriptions back, and I'm waiting to hear from a few others, so I need to make sure that the site is dynamic, so that I can keep adding sessions as they come in.

Earlier, I created this data model, and it has four fields, as you can see: name, description, speaker name, and speaker image. I can add data fields, or even another data model if I need one as my application grows, for example what room each speaker is going to speak in. I'll change the field type to DateTime here; you can see how customizable this is, and you can also add relationships between the data. You can see that I had seven sessions already entered into the data manager provided by Amplify. While I was backstage, I asked Werner if he would like to talk, and he said he wanted to give a talk about the everywhere cloud, so let's use the information that I got from Werner to create a new session. I'll plug in that title, description, speaker image, and speaker name, and now you can see that there are eight sessions for my app.

Amplify Studio helps developers and UX designers work better together. In addition to the significant time it takes for developers to make pixel-perfect designs, oftentimes those UX designs aren't implemented properly, which leaves UX designers frustrated and means the end-user experience isn't as good as it could be. With Studio, developers can import custom UI code from their designers using the popular design and prototyping tool Figma. So, in Studio, we'll paste in the link to our Figma file. This will import the UI components that my designer has made into Amplify Studio. You can see that there are all sorts of different ones: marketing, navbar, e-commerce, and my session card.
So now I have a data model, and I have these UI components; I need to link the two, and make it so that my session card shows the information about my speakers and their sessions, not just this hard-coded speaker name. So I'll configure my component, and then I can click to add a new component property. You can see that I can name it myself and then choose the type, which autofills from the models that I've created. Then I can click into the attributes on the UI, click into, for example, the image, and set the source attribute to my image from my database. Then I'll do the same for speaker name, setting it to the speaker name from, again, my data model. I'll do the same for title and description, and now I have my live data on these components in my application.

I want to display all my sessions, not just this one, so I'm going to take this session component and create a collection. This will allow me to create a list or grid of all the different sessions that I have. I can choose from different layouts here, the grid or the list; I can choose which direction they go in, how many columns, the centering, the margin, the padding, all those fun types of things; and I can also choose exactly what data goes in the component, so I can add filters, and also sort, for example by the name of the talk.

Now I need to integrate this into my application, and I call this part copy-and-paste-driven development. Studio gives me code snippets that I can use directly within my own application code to integrate the Studio components. I have already created a boilerplate React app with Create React App, so it's a brand new project, but you could also use these components in an existing one if you wanted to. First we'll run the command amplify pull, and this will generate React components from all the Figma components that I had synced into Studio; you can see all the different components as they're listed out. Then I can use these React components however I want within my code base.

So Amplify Studio saves developers from having to write thousands of lines of custom code. It auto-generates human-readable React code, and later, if I need to, I can extend this code myself for additional control over my components, or add more data properties as well. In my React code, I just need two lines of code: one to import that session card component, and one to render it on the page. And I can just copy and paste these from Studio; I don't even have to think about it. So here's what my app looks like: all the sessions are displayed, and it didn't take a lot of time to build.

At any point, you can make changes to your designs in Figma. I'm not a designer by any means, but let's round these corners out a little bit on the card, and I'll also bump up the font size on the speaker name. I'll go back to Studio and import those changes, and you can see that I can either accept or reject the differences between Figma and Studio; for example, if I'm not ready yet, or there's a problem with the styling, it's going to save me from accepting too early. Also, I don't need to hand-author all of these changes to the design in my own code: I go back to my development environment, rerun amplify pull, and all of my component changes and the designer's changes are implemented in the app. Look at those beautiful rounded corners!

As an app's use cases expand, or an app moves into production, developers may require deeper control over their back end and deployment operations.
Amplify's newly launched extensibility features give developers the ability to integrate any AWS service into their application, beyond just those offered by the Amplify tool chain. First, you can override the auto-generated Amplify settings using amplify override; within your code, you can then use the CDK to get exactly the settings that you need. You can run amplify add custom to add any of the 175-plus AWS services directly into your Amplify application, again using the CDK. And you can also run amplify export: if you need to implement Amplify within your own deployment pipelines, you can export your Amplify app to CDK, and you can run that export process over and over again. It's not an eject, where you escape out of the ecosystem. With many other front-end-developer-focused cloud solutions, you hit a wall when you keep growing and building; with Amplify, that does not happen. With Amplify Studio, you can quickly go from a designer's vision to a full-stack, AWS-connected application that can scale as your business does. Thank you all so much. [Applause]

Hey, Ali, great story. I'm really looking forward to seeing what kinds of things you are all going to build with Amplify Studio.

Now, how do we build these compound machines? How do all these components come together and get connected? They are, of course, all driven by APIs. But there is one thing within AWS: we value innovation over coordination. It's not just a service; it's a team behind the service that is in contact with their set of customers, really thinking about how we can move as fast as possible, addressing exactly the needs of our customers, and working backwards from what they need. They're not coordinating with other teams that have other sets of customers, and waiting for them to get their interfaces together. Each team is really focused on their set of customers, working backwards from the customer to create their APIs. If we were waiting for coordination, we would be in this five-year waiting game: innovation would get stifled, and we would not be able to move as fast as we're now able to move.

Now, this lack of coordination comes with a downside, namely that our APIs are not necessarily coherent and consistent with each other. And why? Because these teams are focused on their customers, not on coordinating with other teams. Quite a few of our partners have asked whether we can't do something about that, because for them, the fact that, for example, a Lambda API call is GetFunction while Kinesis wants DescribeStream, all addressing different types of resources, is a burden. You've asked us: can we do something that supersedes this, that makes it easier to start integrating new features and services into your products?

So we delivered the Cloud Control API, which gives you a more coherent approach across the different AWS API calls. It is not replacing the API calls that you have now: all of our teams continue to be focused on their customers, working from their customers backwards, and giving them exactly the APIs that they need. And I really, really want to continue to push for that, because I think premature optimization is a killer: starting to think about uniform APIs before you understand how your customers are going to use your services is very risky. To be honest, we've sometimes launched services where customers then used the service in every way but the one that we intended. As such, the Cloud Control API is layered over the standard APIs, and this gives our partners the ability to quickly integrate new features and services into their systems. Companies like Pulumi and HashiCorp are using the Cloud Control API to really quickly integrate new AWS resources into their applications.
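As a sketch of what that uniformity looks like in practice, here the same generic verbs create and read a resource through boto3's Cloud Control client; the log group name is a placeholder, and note that creation is asynchronous, so a real caller would poll the returned request token before reading.

    # Uniform verbs via the Cloud Control API: the same create_resource /
    # get_resource calls work across resource types.
    import json

    import boto3

    cc = boto3.client("cloudcontrol", region_name="us-east-1")

    # Create a CloudWatch Logs log group through the generic interface.
    progress = cc.create_resource(
        TypeName="AWS::Logs::LogGroup",
        DesiredState=json.dumps({"LogGroupName": "example-group",
                                 "RetentionInDays": 30}),
    )
    # Creation is asynchronous; in practice you would poll
    # get_resource_request_status with this token until it completes.
    token = progress["ProgressEvent"]["RequestToken"]

    # Read the resource back with the same generic verb used for any type.
    resource = cc.get_resource(TypeName="AWS::Logs::LogGroup",
                               Identifier="example-group")
    print(resource["ResourceDescription"]["Properties"])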
Now, if you think about 15 years of AWS, we've created a lot of APIs over time, and there are also lots of lessons learned from that. I just picked six of them, but there could be many more. If I think about how to build APIs, this is my short list of things that I tell engineers they need to be thinking about first.

It starts off with: APIs are forever. That means you cannot change or remove an API once you've created it. Why? Because once you put an API out there, customers are going to build their business on top of it, and changing that API will basically break their business. That's the last thing you want. The same goes for evolving your API: you can never break backwards compatibility. The way to make that easier is to not put your engineers in charge of what should be in the API; you should put your customers in charge. So use your customers, work backwards from their use cases, and then come up with the minimal and simplest form of API that you can offer them.

And this is really important. Imagine you have an advertisement system and you do three or four different types of campaigns. Should you have one API call to create a campaign, with all sorts of flags and other things to modify the call? Or should you just keep it simple: one create-campaign call, and then other API calls to modify the resource that you've created with it? Make sure that you keep your APIs as simple as possible, because then you create really good building blocks for others to use.

Also, APIs should be self-describing: you should be able to look at the API and more or less figure out immediately what it does. And not only what it does: you also really have to document the failure modes, because it's not only about what your API does, it's also about how it fails. Whether that is because of the input parameters, or maybe security reasons, or something else, you have to make sure the failure modes are well described. The last one is a tricky one: avoid leaking implementation details through your API. Because if you do, your customers will rely on them; they will figure out how you've implemented things under the covers and start to use that knowledge. That's not a good plan, because it means you can no longer change your implementation: your customers are now relying on non-functional information. With all of this, I hope that you take a look at the documentation, and the Builders' Library documents that we have, to dive into best practices around API design.
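To make the campaign example concrete, here is a hypothetical API surface shaped by those rules: one simple create call plus single-purpose modifiers, rather than one call with a flag for every campaign variant. Everything here is invented for illustration.

    # A hypothetical API surface following the six lessons above.
    class CampaignAPI:
        def create_campaign(self, name: str) -> str:
            # One clear purpose, no variant flags: new behaviors become new
            # calls on the resource, so this signature never has to change
            # (APIs are forever; never break backwards compatibility).
            raise NotImplementedError  # sketch only

        def set_campaign_schedule(self, campaign_id: str,
                                  start: str, end: str) -> None:
            # A single-purpose modifier for the created resource.
            raise NotImplementedError  # sketch only

        def set_campaign_budget(self, campaign_id: str,
                                amount: int, currency: str) -> None:
            # Failure modes (e.g. an unknown campaign_id) would be explicit
            # and documented, and nothing about the implementation leaks
            # through the interface.
            raise NotImplementedError  # sketch only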
now, if you think about 15 years of aws, we've created a lot of apis over time, and there are a lot of lessons learned from that. i picked six of them, though there could be many more; these are the things i first tell engineers they need to be thinking about.

it starts with this: apis are forever. you cannot change or remove an api once you've created it, because once you put an api out there, customers are going to build their business on top of it, and changing it will break their business. that's the last thing you want. the same goes when you are evolving an api: you can never break backwards compatibility. the way to make that easier is to not put your engineers in charge of what should be in the api; put your customers in charge. work backwards from their use cases, and then come up with the minimal and simplest form of api you can offer them. this is really important. imagine you have an advertisement system and you run three or four different types of campaigns. should you have one api call to create a campaign, with all sorts of flags and parameters to modify its behavior? or should you keep it simple: one create-campaign call, plus other api calls to modify the resource that the create call produced? keep your apis as simple as possible, because then you create really good building blocks for others to use. next, don't only describe apis; make them self-describing. you should be able to look at an api and more or less immediately figure out what it does. and it's not only about what your api does, it's also about how it fails: you have to document the failure modes well, whether they come from input parameters, security reasons, or something else. the last one is a tricky one: avoid leaking implementation details through your api. if you do, your customers will rely on them; they will figure out how you've implemented things under the covers and start to use that knowledge. that's not a good plan, because it means you can no longer change your implementation, since your customers now depend on non-functional information. with all of this, i hope you take a look at the documentation and the builders' library articles we have that dive into best practices around api design.

now, of course, we want to give you tools for wherever you are, and where most of you are right now is using sdks to connect to aws. you're very comfortable with that, and we've created sdks for lots of different programming languages. i'm happy to announce today that we have three more: swift, kotlin, and rust. swift and kotlin, of course, target particular platforms, but i'm especially happy that we finally have a rust sdk, because i see enormous interest in rust, especially when it comes to sustainability and making use of the most energy-efficient programming languages out there.

it's not only about where you are, though; it's also about where you're going. as much as i think the sdks will be around forever, we also have to look at how you would really like to build your applications. many of you are programmers and would like to build your infrastructure as code: no longer a descriptive list that you use to set up your infrastructure, but something you drive from inside your code. that's what the cdk targets. you can build constructs using the cdk and share them with others, whether that's with your colleagues or as open source. the cdk has become very popular, and we've been working really hard to improve it. in cdk v2 we fixed a number of the problems and issues that were in cdk v1. one mistake we made in v1 was that we had a separate package for each service, which made dependency management very hard; our partners building constructs on top of it basically needed to load 50 or 100 different libraries before all of their dependencies were resolved. now everything goes into one library, the single library you need, which opens the door to much less complex dependency management. we also started making use of semantic versioning, so that you know exactly which versions of particular objects you're using. and there's something really cool they call cdk watch. i'm not really sure about that name; i call it cdk hot swap. if you write a new lambda handler or a new ecs task, you do not need to redeploy your complete infrastructure and rerun your cloudformation templates; you can just hot-swap your lambda code in and out without taking your application down. i think that's absolutely cool, because your application keeps running even while you're updating and changing code. also generally available today is the construct hub, a place where you can find all the open-source and partner cloud development kit libraries out there.
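As a sketch of what v2 looks like in practice: everything comes from the single aws-cdk-lib package, and while cdk watch is running, an edit to the handler asset below can be hot-swapped into the deployed function without a full CloudFormation deployment. The app, stack, and asset names are illustrative.

```typescript
import { App, Stack, StackProps } from 'aws-cdk-lib'; // one library, not 100
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

class WatchDemoStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // While `cdk watch` runs, edits under ./handler are hot-swapped
    // straight into this function instead of re-deploying the stack.
    new lambda.Function(this, 'Handler', {
      runtime: lambda.Runtime.NODEJS_14_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('handler'),
    });
  }
}

new WatchDemoStack(new App(), 'WatchDemoStack');
```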
to talk a little more about the cdk and how our customers have been using it, i would like to welcome matt coulter, technical architect at liberty mutual insurance. matt! [Applause]

thank you, werner. driven by the conviction that progress happens when people feel secure, liberty mutual is a global property and casualty insurer protecting everything from homes to cars to businesses. we're there to back up our millions of customers when the unexpected happens. today every customer interaction is digital, from our applications to our call centers to our websites, and these interactions need to be easy, 24/7, 365 days a year. our story today is about adaptability, change, new ways of working, and innovation. as a 109-year-old company, we've learned a thing or two about adapting to change, but in the last 10 years we've seen more change than in the previous 100 combined. to thrive amongst this change, our journey to the cloud became our north star.

this journey began in 2014. we needed to relearn everything about how we delivered software, at a time before most serverless aws services even existed. to succeed, we needed to spark a culture of continuous learning and experimentation among our developers. that's why, between 2014 and 2019, we created world-class automated guardrails on our aws accounts. this meant our developers could have the confidence to go out and experiment with new capabilities and then bring the learnings back for everybody else. we also instilled the attitude that code is a liability, not an asset: every single line of code must have demonstrable business value. this has been core to our evolutionary architecture strategy, rapidly delivering business value in a well-architected way.

in 2019 it was time for another evolution, this time using the aws cdk. but like everything else in life, we needed to start slow before we could gain speed and then optimize on best practices. it was august 2019 when we started slow: i created a typescript proof of concept, an l3 construct for a fully compliant private api gateway with a custom authorizer lambda function. that might not sound like much, but it reduced over 1,500 lines of cloudformation down to just 14 lines of cdk. and it's not only impressive because it's less code: all our teams could share a common construct, which reduces the code liability of the overall company, and it enabled our standard ci/cd practices like unit testing, which was amazing. we then took things to the next level by pairing the aws cdk with aws sam for enhanced local testing. this felt like a game changer, but we were still a long way from using it at scale.

to gain speed, we needed to win the hearts and minds of our developers. that's why, before the end of 2019, we launched a platform called the software accelerator. the accelerator allows any developer to clone a working pattern and have a compliant, production-ready pipeline in seconds, and it allows anyone to contribute back their working software, further accelerating our continuous-learning flywheel through innersource. my l3 construct was added to the accelerator at launch, along with a stripped-back, compliant starter pattern. from there, education was key. we ran the tutorial at cdkworkshop.com several times internally across the globe, and we always finished with every developer creating a working pipeline through our accelerator; that meant that after they left, they could always add the business logic later from any of the open-source cdk patterns. then i worked with some amazing people to create cdk day, a global virtual conference, and it was important to us that liberty mutual had several speakers feeding back into the wider community.

a great example of how the cdk, combined with our accelerator, has been core to our continuous learning came at re:invent last year, 2020. aws lambda container image support was the big announcement on the tuesday; by the friday a working pattern was in our accelerator, and it was being used to solve real business problems by the following monday, less than seven days later. throughout 2020 our flywheel accelerated to the point where we deployed more than three and a half thousand serverless patterns. that meant that in 2021, optimization became key: we'd spent all this time embracing divergence, and now we needed to converge on best practices. this is where we used the aws well-architected framework to drive those discussions. we looked at what was common across all of our stacks and introduced core constructs for everyday capabilities, like vpc lookups, right out of the box. we then experimented with cdk aspects to automatically warn on compliance issues right in the ide, further reducing developer burden.
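As a rough sketch of the kind of aspect Matt describes: an aspect visits every construct in the tree at synthesis time, so warnings can surface at build time, right in the IDE. The specific rule here, flagging unencrypted S3 buckets, is my own example rather than Liberty Mutual's actual compliance check.

```typescript
import { Annotations, Aspects, App, Stack, IAspect } from 'aws-cdk-lib';
import { CfnBucket } from 'aws-cdk-lib/aws-s3';
import { IConstruct } from 'constructs';

// An aspect is applied to a scope and visits every construct inside
// it during synthesis, so compliance feedback arrives before deploy.
class EncryptionChecker implements IAspect {
  public visit(node: IConstruct): void {
    if (node instanceof CfnBucket && node.bucketEncryption === undefined) {
      Annotations.of(node).addWarning('Bucket is not encrypted at rest');
    }
  }
}

const app = new App();
const stack = new Stack(app, 'CheckedStack');
Aspects.of(stack).add(new EncryptionChecker());
```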
we now have an ecosystem of well-architected, reusable constructs that allows us to rapidly deliver business value in a well-architected way. the proof? we have teams leaving discovery and framing and delivering working, well-architected products less than three days later. we're delivering industry-leading capabilities like our unstructured data ingestion pipeline: it is 96% faster than the manual process and every bit as accurate, all built using well-architected cdk patterns. in fact, we're so fast now that an intern who joined my team last year was pushing code to production on his first day, and nothing broke.

this morning you heard me talk through how we started slow by learning what the cdk could bring to liberty, gained speed through developer education, scaled through innersource, and finally took advantage of everything the cdk has to offer, rapidly delivering business value in a well-architected way. we've embraced continuous learning and experimentation to be there and deliver on our promise to our customers in their time of need. this is our mission. a great example: we've built a serverless call center that can automatically process a claim after a natural disaster in less than four minutes, helping our customers piece their lives back together. this is why we want to use the cdk to tell dozens of stories like this. going forward, we are going to continue to empower and enable our developers to be the best versions of themselves that they can be. we see this as a key differentiator, enabling us to push the boundaries of digital product reuse and innersource whilst keeping our developers in lockstep with our customers' needs. so i challenge all the builders, dreamers, and doers out there: what was announced this week that you're going to enable tomorrow? thank you. [Applause]

come up here. [Applause] matt has done tremendous work for the community, creating a truly vibrant community around the cdk, and he's gone way beyond just sharing his knowledge and learnings. as such, i would like to give you the now go build award as a reward for all the efforts that you've done. thank you so much. [Applause]

what i haven't told you is that matt is part of our aws heroes program. the aws heroes are a group of enthusiasts worldwide who are deeply invested in sharing their knowledge around aws. there are about 230 of them around the world right now, and i'm very happy that about a hundred are here today, in a room up front over there. i'd like to thank every one of those heroes for all the incredible hard work they're doing to build a community around aws.

now, if you were at peter's keynote yesterday, or quite a few of the other leadership sessions, you heard us talk about something that is probably at the forefront of everybody's mind today. we've been building technological infrastructure around the world that consumes significant energy, and everyone should be aware of how much energy they are consuming. i'm taking this quote from peter from last year: the greenest energy is the energy you don't use.
so how can you be more efficient and greener at the same time? according to a 451 research report, moving on-premises workloads to aws can lower your carbon footprint by 88%. remember, we're able to get much higher cpu and memory utilization through the management techniques we use to place your instances and run your functions for you. especially in the serverless services we can really make an impact, because there we can engineer for sustainability and efficiency in ways that go far beyond what you would ever be able to do yourself. i remember, in the earlier days, talking to enterprise data center managers: typical utilization in a data center is 12 to 15 percent, which basically means that 85 percent of the energy being used is wasted.

so at aws we're introducing what we call a shared responsibility model for sustainability. aws is responsible for sustainability of the cloud: we practice good water stewardship, we innovate heavily in energy management, and we build our own silicon so we can drive cost down and efficiency up. you, as our customers, are responsible for sustainability in the cloud: making sure you pick the technologies that have the most impact for you. one of them, for example, is moving to serverless. lambda, our serverless function service, spins up compute in response to incoming requests, and as a result we're able to get far greater cpu and memory utilization, and therefore energy utilization, because we manage that fleet of servers and can do the absolute best placement. you can also get insights into how many of these services are running and for how long they've been running; that's a good proxy for cost, of course, but often also a very good proxy for how much energy you are actually using. without detailed insights like that, it's very hard to really understand your carbon footprint. coming soon, as peter announced yesterday, is the aws carbon footprint tool, which gives you clear metrics on how much carbon your applications are responsible for.

one really simple thing: if you provision something, use it; don't let it sit around doing nothing. going back to 2012 again, what i like to say is: don't forget to turn off the lights. what i mean by that is that with one click of a button you can scale up, but with that same button you can also scale down. scaling is not only about scaling up; scaling down is equally important, and every resource that you're not using is the greenest resource you can think of.

i've also been thinking about something else, and it doesn't really fit in this shared responsibility model yet: start thinking about what is actually a good latency towards your customers. imagine the average, or the 99.9th percentile, latency of your web pages were not 1.2 but 1.5 seconds. how much energy would you save? if you look at a current twitter stream, probably 60 or 70 percent of tweets have rich imagery associated with them. what if your images were not at the highest resolution? what if you used less imagery? how much less energy would you be using? so think about your customers, how you present your products to them, and what you can do at the design level; designing it one way may cost more energy than another.
so think about your architecture: not only "am i using aws in the most sustainable way possible?" but also "am i building the most sustainable application?" keep that in mind. now, to help you think about sustainability: i've always talked about the well-architected framework, which had five pillars, namely security, reliability, cost optimization, performance efficiency, and operational excellence. i'm happy to announce today that there is a new pillar: the aws well-architected sustainability pillar. [Applause] it will give you advice about the best practices we've seen, and we'll continue to update the sustainability pillar as we gain more and better insight into how to build sustainable applications.

sharing knowledge like this has always been important, so that we can collectively work together to improve software development. one particular area where i think we've been lacking is helping to answer your questions in an authoritative way. in past years we had an internal tool called aws answers, where our support people and engineers would post questions from customers that they couldn't find answers to, and our engineers would answer them: authoritative answers to questions that really matter to our customers. so i'm happy to announce today that we're launching aws re:post, a question-and-answer site where you can get authoritative answers to any questions you have about aws and the architectures you want to build.

and don't forget: two years ago i launched the builders' library, and i want you to keep checking it out. it contains experiences not just from our distinguished engineers but from all over amazon; basically 25 years of experience building high-scale distributed systems. there are two articles i want to call out. one, by malcolm featonby, is about how to build idempotent apis, and a very interesting one by colm maccárthaigh is about doing constant work and how that helps you build scalable systems. i urge you to check out these articles, because they're absolutely cool and full of advice you can apply immediately to building your own systems.
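The idempotency idea is worth a small illustration. This is my own minimal sketch of the pattern that article covers, not code from the Builders' Library: the caller supplies a client token, and a retried create returns the original resource instead of creating a duplicate.

```typescript
import { randomUUID } from 'node:crypto';

interface Campaign { id: string; name: string }

// Stands in for a durable store keyed by the caller-supplied token.
const byToken = new Map<string, Campaign>();

function createCampaign(clientToken: string, name: string): Campaign {
  const existing = byToken.get(clientToken);
  if (existing) return existing; // a retry replays the original result

  const campaign: Campaign = { id: randomUUID(), name };
  byToken.set(clientToken, campaign);
  return campaign;
}

// A retried request with the same token is safe: same campaign back.
const first = createCampaign('token-123', 'spring-sale');
const retry = createCampaign('token-123', 'spring-sale');
console.log(first.id === retry.id); // true
```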
now, the cloud continues to change development, and that has been our goal, because we want to work together with you to build the tools you need: the simple machines and compound machines you need to build the applications of the future, not the applications of five years ago. and it's not just that the cloud changes development; you change the way that we work, and you change the way that the cloud works. we have a very tight feedback loop with our customers that continuously changes how we deliver new features and new services.

now i'd like to bring back some advice i gave a number of years ago. it was a joke i made, but i hope you take it seriously. there's a very famous quote from edsger dijkstra, the dutch computer scientist who spent decades at the university of texas at austin and was really into formal verification: if you do something quick and dirty, you suddenly visualize that i am looking over your shoulder and think, "dijkstra wouldn't have liked that," and that would be enough immortality for me. i've modified that quote: if 10 years from now you're putting something piecemeal together instead of just using aws, i hope you suddenly visualize that i am looking over your shoulder and think, "werner wouldn't have liked that," and that would be enough immortality for me.

now, why have we created all these services, all these features? i think the words of the famous philosophers daft punk really apply here: work it harder, make it better, do it faster, makes us stronger. that is what has driven this huge collection, the biggest and broadest array of cloud services you can imagine, and with all these components you can build many different types of systems in infinitely different ways. but what if you wanted to build something completely new, something that has never been built before, and that you can only build because you build it on aws? how would you build something really, completely new? let's watch this.

"the stories you've heard... they don't tell the whole truth. the island is indeed a place of legend. there is power, and vast riches as well, but many who've sought to claim them have simply vanished without a trace. your ship is stocked and your crew assembled. [Music] chart your course, and your fate. we can pretend no longer; this corruption must be stopped. [Music] i will say a prayer for the souls of your crew."

so new world is a game designed by amazon games. it's a massively multiplayer online game where players from all over the world can connect and play together. you can create a custom character, enter the world, and progress over time; you carry more items and gain experience. it's not a session-based game in the sense that once you log out you lose everything you had; it's something where you can really grow over time. players navigate the world and can explore all its different areas, and it's really expansive. what you normally get with these games is that once you move from one piece of an island or one piece of the world to another, you hit loading screens. not here: it is completely immersive. and you can do the things you usually do in games: combat, fighting, crafting, trading, and going on missions with other players. players are free to explore this world in any way they want.

now, when you look at new world you may see a game; i, however, see a massively distributed system. so let's take a look at how it is constructed. first of all, think about the old world: game servers were actually real servers. you would connect to one, the game would run on it, and it was a single physical server for you to use. in the old world, if things needed to scale, you scaled up, basically by buying bigger hardware. in the new world it's all about scale-out. so what does the architecture look like? it starts with what are called remote entry points, or reps. there are four of them in each world, and they're basically a proprietary application router: clients connect to them, and that's where both security and resilience are handled. the reps are also the only public-facing access points to the whole game.
behind the reps sit the computational processing units, which we call hubs. you get seven of those per world, and each hub owns the compute for a portion of the world you're operating in. the third piece is a warm instance pool, shared on a per-az basis, used when you go on specific missions, like fights or expeditions; instances from those pools then power that single piece of gameplay.

each world, as it's called, is built out of 14 smaller regions, and overlaid on top of that is a series of grids. in a single world you can have over 2,000 players, around 7,000 different ai entities, and literally hundreds of thousands of objects to interact with. now look at how the hubs process this: we've taken the world and put a grid over it, and each of the hub instances picks up two pieces of this grid. these two pieces are not adjacent, so if you move, for example, from block two to block three, you're handed off to another hub instance; you won't stay on the same hub. this spreads the load out very nicely over all the compute servers. each of these grid cells represents a physical simulation volume, and objects and ai entities continuously move around the world, moving from hub server to hub server.

when we launched new world, we started with roughly 185 of these worlds, spread out over multiple regions across the globe. within 10 days there were 500 worlds, and believe me, the only way to reach that scale is because of how this architecture is made. and there's something else that is radically different, which you just saw in the gameplay. in a traditional game, user state and actions are processed maybe five times a minute. in new world we're able to do this 30 times a second: everything is simulated, redrawn, analyzed, and events are processed 30 times a second for all the world's entities. this kind of game simulation requires a lot of cpu and is really enabled by the use of multiple ec2 instances powering each world. it leads to very immersive gameplay and a much better end-user experience.

now, of course there is a lot of game state, but in essence the game servers in the hubs are stateless: they have ephemeral, in-memory state, but they can always be restarted. even if all the hub instances fail, we can quickly bring them back online and restore the game state as it was. why? because everything is written to dynamodb: about 800,000 writes every 30 seconds to store the game state.
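As a hedged sketch of that checkpointing pattern, here is what a periodic entity-state write to DynamoDB could look like with the JavaScript SDK's document client. The table layout and attribute names are my own invention, not New World's actual schema.

```typescript
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, PutCommand } from '@aws-sdk/lib-dynamodb';

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// Hub servers keep state in memory and checkpoint it to DynamoDB,
// so a failed hub can be restarted and the world state restored.
async function checkpointEntity(
  worldId: string,
  entityId: string,
  state: Record<string, unknown>,
) {
  await ddb.send(new PutCommand({
    TableName: 'world-entity-state', // hypothetical table
    Item: {
      pk: `WORLD#${worldId}`,
      sk: `ENTITY#${entityId}`,
      state,
      updatedAt: Date.now(),
    },
  }));
}
```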
in such a massive distributed system, it's important to have good insight into what's happening with all the different components; remember, there are 500 worlds, each with its own set of servers, and you need to be able to operate all of it. as i talked about in my keynote last year, observability is extremely important. to really get observability, meaning that from the outside you can infer, from what you're seeing, what the internal state of the system is, you need to log everything, and in a distributed system you need to log it in a way that gives you insight into exactly what each and every component is doing. in new world, they use kinesis to log events: about 23 million events every minute are pushed through kinesis into s3, where they are then processed using athena. this enormous data flow is used by every part of the team. data analysts can discover everything from which wolves in the forest are fought the most to which paths are most traveled; game designers can see how players are enjoying the game and modify it in real time based on this data; and operationally, the team knows what the system is doing and how it is behaving, can anticipate service issues, and can deal with them really quickly.

next to the in-game experience itself, there is also everything around it, whether that's the operational side or, for example, how trading and buying are done. all of that is not running on ec2 at all: it uses lambda. lambda powers the control plane, and it also powers all the non-core gameplay activities. one thing i mentioned earlier is that there are also session-based modes: you can invite five of your friends, step through a portal into a dungeon, and go slay some dragons. under the covers you're actually playing on one of the ec2 instances that comes from the shared instance pool, which frees up the traditional gameplay processing; the session modes are run on a separate pool of instances, and once you finish your quest and return to the world out of the dungeon, that ec2 instance is returned to the shared instance pool. that's how you end up with a quite interesting large-scale architecture that is really built with a serverless mindset, and serverless indeed plays a very important role in it.

now look at the numbers: there are well over 150 million lambda invocations each minute, 200 million calls to api gateway, and elasticache easily gets 2,275 hits a minute. none of this would have been achievable on premises with your own hardware. this truly is an mmorpg born in the cloud: it would only have been possible to run it in the cloud, and it would never have been possible to build this totally immersive experience on bare metal.

now if you think about this: with aws you can suddenly build systems the way you always wanted to. there is nobody telling you what you cannot do; there are no gatekeepers. you can build your applications the way you always wanted to, and it means you end up with applications that are fault-tolerant, secure, scalable, cost-effective, and sustainable. that's really important; going back these 10 years, i call these 21st-century architectures. and i urge you to look back at that first presentation i gave, because there is lots of really solid advice in it from day one. the advice i gave in those days is all still very solid, and it has allowed many of our customers to grow to unprecedented scale while maintaining ultra-high reliability and security over time. so looking back at some of it: protecting your customers is, and should forever be, your number one priority. it is definitely ours, and it will forever be our number one investment area: to protect you.
and when you're talking about protection, you should build security in from the ground up. this is a conversation i often have with younger businesses who are so gung-ho about building the new product they want to build; i urge them to start thinking on day one about how to protect their customers, how to be as secure as possible, and which tools and techniques aws gives them so they can protect their business as well as their customers. the same goes for the other non-functional requirements: instrument everything, all the time. why? because these two things, security and operations, are things you will be doing for the rest of the lifetime of the system you're building. the time you spend operating and protecting a service greatly outweighs the time you spend building it, so every investment you make in security and in operations will pay off forever.

now, with this advice i will leave you. i hope to see you all tonight at the party with zedd. and with that: thank you, and now go build!
Info
Channel: Amazon Web Services
Views: 257,694
Keywords: AWS, Amazon Web Services, Cloud, AWS Cloud, Cloud Computing, Amazon AWS, wener vogels, reInvent 2021, reInvent 2021 keynote
Id: 8_Xs8Ik0h1w
Length: 114min 5sec (6845 seconds)
Published: Thu Dec 02 2021