AWS re:Invent 2017 Keynote - Werner Vogels

Reddit Comments

I was at re:Invent and saw this presentation in person and have been surprised that I haven't heard anyone talking about it in the community until I saw an article about it on mmajunkie today. It'll be interesting to see what useful information they'll be able to gather and make use of from all these sensors and the AI.

πŸ‘οΈŽ︎ 12 πŸ‘€οΈŽ︎ u/tooon πŸ“…οΈŽ︎ Dec 12 2017 πŸ—«︎ replies

What are they adding that's different? It sounds like more stats

πŸ‘οΈŽ︎ 6 πŸ‘€οΈŽ︎ u/onceiwasnothing πŸ“…οΈŽ︎ Dec 12 2017 πŸ—«︎ replies

Just give us the ref cam and STFU!

πŸ‘οΈŽ︎ 6 πŸ‘€οΈŽ︎ u/[deleted] πŸ“…οΈŽ︎ Dec 13 2017 πŸ—«︎ replies

Does this mean that Amazon will sign UFC to a TV/web service deal?

πŸ‘οΈŽ︎ 2 πŸ‘€οΈŽ︎ u/[deleted] πŸ“…οΈŽ︎ Dec 13 2017 πŸ—«︎ replies

Lame. It just sounds like in-fight metrics would provide the commentators with a few more lines to spit out during boring fights. Would it have been interesting to have the data about Ngannou's hand speed when he knocked out Overeem? Sure, after the dust settled, but in the moment, everyone's reaction still would have been "Oh my God he just killed him" instead of "Wow, I'd love to know exactly how fast Ngannou's left hand was moving."

πŸ‘οΈŽ︎ 4 πŸ‘€οΈŽ︎ u/-Mangarang- πŸ“…οΈŽ︎ Dec 12 2017 πŸ—«︎ replies

It's interesting. A lot of the metrics he talked about were absolute bollocks. However, the concept is interesting; I'd like to see how it gets visualised.

Like another lad pointed out below, I wonder if this will be available to a cornerman if they have the app in front of them on their phone. Would that not give away some key points a fighter wants to keep to themselves and their corner/camp?

By default, I wonder how Amazon will use this to market to me?

πŸ‘οΈŽ︎ 2 πŸ‘€οΈŽ︎ u/BigFang πŸ“…οΈŽ︎ Dec 13 2017 πŸ—«︎ replies

I want to know how hard the punches are. I saw this camera setup at a baseball game one time where they could pause the action or have a replay and move the camera's view around to get better views of what was going on. I want that too. It'd be amazing for fouls and highlights.

πŸ‘οΈŽ︎ 1 πŸ‘€οΈŽ︎ u/Lovehat πŸ“…οΈŽ︎ Dec 13 2017 πŸ—«︎ replies

It looks good, but will fighters have data about their opposition then? Seems a bit unfair, I feel.

πŸ‘οΈŽ︎ 1 πŸ‘€οΈŽ︎ u/[deleted] πŸ“…οΈŽ︎ Dec 12 2017 πŸ—«︎ replies
Captions
Ladies and gentlemen, please welcome Chief Technology Officer of Amazon.com, Dr. Werner Vogels. [Applause] [Music] Wow, good morning, Vegas! Can I get some love? How y'all doing? You survived the pub crawl last night, at least. Thank you all for taking the time to show up here this early; this is my favorite time of the year, absolutely. Yesterday you were probably amazed by all the things Andy announced: so many different things, so many advances in AWS, that I actually thought I would do something different. So if you're here in this room today for announcements, you can leave now. Or maybe there are just one or two, but I won't tell you where they are. For me this is a bit of an anniversary. Five years ago, in 2012, we started off with re:Invent; it was a small, intimate event compared to this one, although I like this one. Look how happy we were. It was amazing, and I really loved that presentation. Looking back five, six years now to that time, we got an opportunity to really talk about how our customers were radically reinventing the way they were developing their applications, and we could give them lots of advice about the experience some of them had that might be relevant for other customers as well. There were no announcements in that keynote, not at all, and in my mind, coming back, it's actually still my favorite presentation I ever gave at re:Invent. For this year I wanted to go back to that, because we've been doing so many things, and it has had so much impact on the way we develop our systems, that maybe it's time to revisit that presentation on 21st-century architectures. A lot has happened in the past five years; radical things have changed in development. So maybe it's just time to revisit some of those lessons learned for many of our customers, and I hope that for you in the room and for
maybe others watching on the livestream (and I don't know whether there are even viewing parties out there) this might be relevant as well. Going back to 2012, I divided things up into four different categories. In those days I said that if you want to build these architectures in a scalable manner, they need to be controllable; that meant decomposing them into small building blocks, each of which you could control and scale separately. We didn't talk about microservices and things like that yet, but that's definitely how we know them now. They need to be resilient; you need to protect your customers, and I think my advice was to deploy to at least two Availability Zones, which I think is standard for everybody these days. Adaptive meant: do not make capacity decisions up front; you are allowed to make mistakes. Build your systems first and then see which instance types best fit them. In those days we had far fewer instance types than now, so that advice actually still stands. And make data-driven decisions: don't guess; start measuring things in production and then use facts to drive the way you build your systems. Now, since those days we have launched a few new services. Actually, we launched 3,959 new major features and services since that day. As you can imagine, this has had some impact on the way development is done, and development has changed. So the title for today's talk will again be 21st-century architectures, but reimagined with all the tools that you have today. How do we do development? There are many different areas we could talk about; I'll pick a few, and I'll also pick a few about how I think the coming four or five years are going to look in terms of system development, in terms of what the real drivers are for how we are going to build
our systems differently. And of course I don't have a crystal ball, because five years ago, at the first re:Invent, I absolutely could not have predicted that I would be here five years later. What was clear was that we had built technology that meant a lot for our customers and that allowed them to be successful in ways they could not be before. But I think there's another part to the crucial DNA of AWS and of Amazon: we are not building this by ourselves. We're not in an isolated environment thinking, this is how you shall develop your software. One thing that we knew almost from day one was that if we were going to be successful with AWS, it was going to radically change the way we build our systems, and with that, we weren't so arrogant as to say, and now we shall tell you how to do it. Our goal has always been to build a whole collection of very nimble, fast tools, where you decide exactly how to use them. And to build those tools we needed your help. I know that the 3,959 major features and services have sometimes been confusing, and the pace at which we rolled them out frustrating, but that has much to do with you: each and every one of our service teams has a whole set of customers they are in very close contact with, and they allow those customers to drive their roadmap. We don't determine our roadmap alone; we only determine some of it, and we allow our customers to drive it. One of the cool examples, in the beginning: when we launched DynamoDB we knew customers wanted secondary indexes. We didn't launch with them, though; we launched without them, with a minimal feature set that was rock solid, and then we worked closely with our customers to determine what the roadmap after that should be. Of course secondary indexes showed up there, but there were other features that also showed up that we hadn't thought about at all, for example IAM-level access control: the fact that you
wanted to set access controls on individual columns and rows. We hadn't thought about that at all before we built it; we didn't know that was going to be extremely important to our customers. So we allow you to set our roadmap, and this very quick iteration means that we are going to deliver tools that allow you to develop the way you want to develop now, not the way you were developing two or three years in the past. The way I think about it is that we are delivering the tools now for the systems you want to be running in 2020. If you look at the keynote yesterday, many of those services will allow you to build the applications and systems that are going to target your customers in the coming two, three, four, or five years. So we continue to build services for the future, not for the way we were developing in the past. Now, if we look at the drivers these days that are creating this radical shift in the way we develop software, there are four things that I want to pick out. There are probably many more, but these are the ones I've been thinking about recently that are really shifting the way we develop our software. The first is data. That sounds like a rather general term, but if there is one thing the cloud has done, it has created a completely egalitarian platform. Everybody has access to the same compute capabilities, everybody has access to the same storage, everybody has access to the same analytics; you don't have to build them yourself, you can buy them. So what then becomes the major differentiation between different companies, next to the unique business logic that you have, is the quality of the data that they have, whether it is data about your customers or data about your manufacturing processes, so that you can actually build data-driven environments
and data-driven systems that optimize the way you build new products or that create safety on your factory floors. Data will have a crucial impact on the way companies are changing their behavior and building new systems. A great example there, I think, is GE, General Electric. Jeff Immelt, the departing chairman of General Electric, once said that one night you go to bed as a manufacturing company and the next day you wake up as a software and analytics company, and that's really the transition GE is going through. GE is investing massively in creating analytics tools, both for their internal processes and as a product, their large analytics environment that runs on AWS, to help their customers be more effective with the equipment that GE has built; and they are providing those resources now also to their competition. They are really shifting from being at core a manufacturing company into a data and analytics company, and given that GE is almost the only company left, or is the only company left, of the original S&P 500, their adaptiveness really shows the transitions you have to go through if you want to be successful in the future. Another shift, another technology driver, I think, is IoT. The capabilities that we've been developing at AWS allow a whole environment to become active, and what I mean by that is that every device that draws a current now has the ability to become connected, and to become an input or an output device of the systems that we're building. Other technology drivers: if you looked at all the machine learning services that we launched yesterday, they have been enabled by fundamental changes that have happened in the past two or three years. The availability of the P2 and P3 instances, as well as advances in infrastructure pieces such as MXNet and TensorFlow, have driven the fact that we can now build neural networks that we can execute in real time.
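The real-time inference point can be made concrete with a toy sketch: a single forward pass through a small fully-connected network is just a few matrix multiplies, which is exactly the work that GPU instances and frameworks like MXNet and TensorFlow accelerate enough to run inside an interactive request. Everything below (the layer sizes, the NumPy stand-in for a real framework) is illustrative only, not something from the talk.

```python
# Illustrative only: a tiny fully-connected network in NumPy.
# Inference is a handful of matrix multiplies plus nonlinearities,
# which is the workload that GPU instances and deep-learning
# frameworks make fast enough for real-time use.
import numpy as np

rng = np.random.default_rng(0)

def forward(x, weights):
    """Run one inference pass through a stack of ReLU layers."""
    for w in weights:
        x = np.maximum(x @ w, 0.0)  # matmul followed by ReLU
    return x

# A small three-layer network: 256 -> 512 -> 512 -> 10
weights = [rng.standard_normal((256, 512)),
           rng.standard_normal((512, 512)),
           rng.standard_normal((512, 10))]

out = forward(rng.standard_normal((1, 256)), weights)
print(out.shape)  # -> (1, 10)
```

One forward pass like this takes well under a millisecond on commodity hardware; the point of the specialized instances is keeping that latency when the network is millions of parameters instead of a toy.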
That's a major shift: it's not just that we can do machine learning in an offline fashion; we can execute it in real time. Amazon, the retailer itself, has long been a user of machine learning, but most of those processes were not necessarily real-time: inventory-level setting, price prediction, abusive-review detection. None of those mechanisms really needed to execute in real time. But now, with these new hardware and software advances, we're able to execute neural networks in real time, and that drives a complete shift in the way we are going to access our digital systems. Also, look at the interfaces to many of our systems today, or actually historically: they have all been driven by the capabilities of the machinery. Screen, keyboard, mouse, maybe finger. The way you interact is by typing at the terminal, or through web pages, or through cards on your phone. This is not the way you naturally interact with other people; it is driven by the capabilities of the machinery. And as digital natives we have become used to that: each of us knows exactly how to manipulate a search engine with weird keywords to get exactly the answer we want. We're not writing some fuzzy natural-language query; we know exactly how to manipulate the machinery, and that's mostly because the interfaces to our machinery have been created by what the machinery can do, not by how we want to communicate. So I believe that the interfaces to the digital systems of the future are no longer going to be machine-driven; they will be completely human-centric, because with the advances of the P3s and the MXNets of this world, we can build neural networks that can execute human, natural interfaces at speeds that allow us to build natural-feeling interfaces to our digital systems. And with that, a whole environment will become active: anything that you feel is normal human communication, and it
might be voice, it might be vision, it might be touch; I'm not really sure if it can do smell, but maybe that's possible in the near future. The things that we as humans use to communicate are going to be the normal interfaces to our digital systems. The first step in that, of course, is going to be voice. We've already reached that point with all the advances in automatic speech recognition, natural language understanding, and text-to-speech; we really see this shift going on. And this is normal. Look at us: we're talking. This is not a Slack channel; I'm actually talking to you, and apparently you all came here to listen to someone talk. This is a normal interface, a natural interface; it is how we interact. Everywhere, you talk to the people next to you. If someone is sitting next to you, you're not sending text messages to each other; talking is the natural way of interacting. In digital systems we have kind of lost track of that, because in digital systems we use all these cards and pages and things like that. What voice will do is allow you to have a normal, natural way of communicating. The surgeon in the operating room will use voice to manipulate the machinery around her instead of having to take her hands off the patient. If you are a young dad and your kid is ill, you don't want to go to a website and fill in a form; you want to scream at something, preferably in a very fuzzy manner, and have an interaction, question and answer, where the other side actually understands that you're in a panic and comes back at the right moment with the advice you're looking for. That's a normal way of interacting. And the last thing is actually something that happens to me all the time: I like cooking, but I'm a stupid European and can never remember how many milliliters go in a cup. And then you have to
clean your hands, and then you have to go and type your query into your cell phone, instead of just using your voice to ask your environment to give you the right answer. More importantly, I think there's something else happening when we use natural interfaces. Currently, our digital systems are limited in their application to us digital natives, because we know how to use keyboards and touch screens; we know how to use all these apps. But if you've ever given a tablet to one of your grandparents, what you know is that they do one thing: hit that Skype button, and that's about all they do. However, if it had a natural interface, voice, they might actually be using digital systems in many different ways. There are young kids that can't even read or write yet who already use these voice interfaces. And we also see that digital natives, once they have gotten access to voice-driven systems, get pretty annoyed if they have to go back to their cell phone. There's also something, and I'd like to believe it has happened (it's definitely happening to me), called app fatigue: oh no, yet another app to learn; which app do we need to use for this one? Once you've actually been talking to your digital systems, you no longer want to go back to having to type things in or look for apps or things like that. There is another area where I think voice will be crucial in unlocking our digital systems for everyone to use. If we look at developing countries around the world, forever we've been saying that smartphones will unlock the internet for those people to use. Well, it's definitely not happening, is it? And why not? Because, believe me, apps are not the first thing on those people's minds; surviving is probably first on their minds. Most of them don't have cell phones; maybe there's an old Nokia, or maybe a phone in the village, and that's about all
that they have. There's a great story from a former customer in the Philippines, the International Rice Research Institute. This is a magnificent research institute: they have 70,000 strains of rice in their freezers, they can grow any type of rice that they want, and their target is to make the lives of the poorest farmers around the world better, targeting especially women and children that have no other source of income than growing rice. One of the systems that they've built, based on their knowledge, tells farmers how to apply fertilizer at the right moment on the small patches of land that these farmers have. They built the digital system, but nobody actually wanted to access it. Why? Because web pages and computers are things these farmers don't have access to. The big success of the system they've built came from putting a voice interface in front of it. What the farmer can now do is go to the village phone, call a number, select from 27 different dialects, and describe their patch of land in a fuzzy way; machine learning goes off and comes back describing exactly how much fertilizer to buy and when to apply it. Then the farmer has great advice. This apparently reduces the amount of fertilizer by 90% and doubles the crop. The big success of this overall system is putting a natural interface, voice, in front of it. Now, we've been gaining a lot of experience, definitely with the launch of all the Echo devices; it's been a great help to us at Amazon to really understand how people want to use voice, because this is a learning curve as well. If you look at these devices, the devices themselves are actually not that terribly smart; the smarts are not in the device. The device has a speaker, or a set of speakers, some speech-synthesis hardware, and the wake-word detection, and that is all that lives in the device. All the smarts of
these devices live in the cloud, in the Alexa Voice Service. There are two parts to the whole software platform of Alexa. On one hand there is the Alexa Voice Service, which basically manages automatic speech recognition, natural language understanding, figuring out which skills to trigger, and then talking back to the customer or managing their home automation devices. On the other hand there is the Alexa Skills Kit, which allows you to build very unique skills triggered by utterances of the user. If you take a quick look at how this actually works: on one side is the Alexa Voice Service, with automatic speech recognition, natural language understanding, triggering of the skills, and text-to-speech. The device does the wake-word detection, captures the speech that comes after the wake word, and sends it over to the Alexa Voice Service. It goes into automatic speech recognition, which comes back with text; that is fed into natural language understanding to figure out what the intent of the user is, and that intent is used to figure out which skill to trigger. The skill comes back with a speech directive, and then text-to-speech creates the audio file that is returned to the Echo device and presented to the customer. So all the smarts live in the cloud; this is a pure cloud service. Actually, the ASR and NLU components are something that you find back in Lex, if you want to build your own, and the text-to-speech is Polly; so in essence you can build your own Alexa-style system, if that's what you would want to do. The Alexa Voice Service is not something that is unique to Echo: anybody can integrate the Alexa Voice Service into the devices that they have, whether it's Harman Kardon or Sonos into their audio devices, or the car makers, Volkswagen, Mercedes-Benz, and Ford, that are all putting Alexa in their cars. Everybody is capable of putting the Alexa Voice Service into their devices. And it's not just limited to device
manufacturers; if you're a hobbyist, you can actually put Alexa into anything. This guy, for example, put it in a Big Mouth Billy Bass. 'Alexa, what's the weather?' 'Currently in Cambridge it's 45 degrees with showers. Tonight you can look for rainy weather with a low of 43 degrees.' This is amazing, isn't it? You know what's more amazing? This is a hobbyist; it is just for fun. So the CEO of the company behind Billy Bass sees this, and what does he do? There is now an actual device: you can make a Bluetooth connection from any Alexa device or any Echo device to a Billy Bass that you have hanging on your wall somewhere, and have the mouth move along with anything that Alexa is saying on the device. It's actually really cool. This shows that because it's just cloud software, you can integrate it into any device that you have, or any little software component. I think this is broader than just voice. Voice is important, but also think about our environment: just like Billy Bass, all of our environment is going to become active. We already see quite a few of these things, although sometimes I'm not really sure whether it's the right way to interact with our environment. I don't know if you've seen these internet-enabled toothbrushes: basically you hold up your cell phone while you're brushing your teeth, and you look at your phone to see whether you're brushing your teeth right or not. I'm not really sure whether cell-phone output is actually the right way to go; maybe some haptic feedback would be better in that particular case. Or if you set energy goals for your house: if you really have to look at night at a graph of your energy usage during the day, it doesn't really change behavior. But what about a wall clock that goes green when you are meeting your energy-usage goals, and, more importantly, goes red if you're violating them? That will change behavior. So it's not just voice that will be the
interaction with the environment; it is many other pieces that will all become active, and things that you can't even imagine yet. When GE puts Alexa into their fridges, which they've done, that creates a whole different interaction pattern with your fridge. It's very interesting. Another area where Alexa is pretty strong is home automation, where it can discover all the home automation devices in your world, connect to them, and manipulate them. And certainly, believe me, when I now walk into my house and actually have to walk over and flip a light switch, it's annoying. I want to be driving home in my car and say: 'Alexa, open the garage door, turn on the porch light, set the temperature to 20 degrees, and start playing the Red Hot Chili Peppers.' That's what you want to do; you no longer want to go around your room and flip all the switches to get all these lamps on. Once you're used to a more natural way of interacting with your environment, you will not go back. But next to home automation, I think there's another area where we have lots of devices that annoy the heck out of us, at least me, and that is the conference room at work. I don't know if you've ever tried to connect your laptop to whatever presenting device you have, or to find the printer. That's why meetings always start ten minutes late: it's mostly because you have to connect your laptop, and you have to dive into the conferencing system, where you first have to open up your laptop to find the meeting ID and then type the ID, often five times. It's a nightmare. And it's not just that; it's also the devices on your desk. I always need to use multiple devices: when is the meeting starting? You have to grab your phone; do you really have to use the phone for this particular case? And so we've been thinking: if voice is this natural way of interacting
in your home, in home automation and things like that, why don't we build something that you can actually use at work as well? So I'm very happy to announce today that we're launching Alexa for Business, which is a fully managed service for having many Alexa devices at work: to manage the devices, to manage the users, and to manage the unique skills that you may have at your work, both standard skills, depending on the devices that you have, and skills that are uniquely developed for your business itself. Of course, one of the first places where we've been working is making sure that Alexa works really well in conference rooms. There's been integration with Cisco, Crestron, and Polycom conferencing systems, so that you never have to type in a conference ID again. You just say, 'Alexa, start the meeting,' because it knows which room you're in and which meeting should be taking place there. There's also integration with all sorts of other devices in and around your conference room. After your administrators have set up Alexa to really make use of all the devices there, you can dim the lights, lower the blinds; all these kinds of things are suddenly available for you in your workplace. There's a great startup called Teem that does conference-room management, and the integration of Alexa for Business with Teem allows you to quickly find another conference room when the one you booked is too small, or maybe occupied by another team. And it's not just in the conference room; at your desk you can now have Echos as well. You can do the usual things that you can also do in the conference room: join a meeting, call an individual. And the cool thing is that we actually give you a mechanism to merge your home Alexa environment with your Alexa for Business environment at work, meaning that you have access to all the private skills that you've set up for your own
use at your work as well; you immediately get all your Spotify and Amazon Music and things like that on your Echo device at work too. Meetings are an important part of our business, so there is the integration that Alexa already had with Office 365 and G Suite, and starting today we also have support for on-premises Exchange for all Alexa for Business users, so you can get access to all your calendaring and scheduling and meetings and things like that. There's also integration with a lot of well-known software and service providers at work. RingCentral has PBXs, so you get access to your voicemail and things like that. Integration with Salesforce, so you can ask questions against it. Concur, which probably holds your flight information and reimbursements. If you use SuccessFactors for HR, there are skills that integrate with that. Splunk has built a number of great skills with which you can query, using voice, your Splunk environment that runs in EC2. And of course AWS: we've done that as well, so you can get access to things like CloudWatch and other information. Acumatica is a cloud-based ERP system which you can now also reach with your voice. So it's interesting: you now have access to all these software pieces, and you no longer need to go to web pages or anything like that; you can just use your voice to access them. And it goes beyond just the office environment. The Wynn, the luxury hotel that's a bit further down the Strip, is putting Echos in each and every one of the rooms in their hotel, with a whole set of private skills that really allow you to access all the capabilities of the room. You can use voice to play music, to lower the blinds, to set the temperature, to control the TV. All these things now have a natural interface, instead of you having to run around with five different
remotes to get these things done. The service that we built in AWS is mostly the management side of things: you can provision and manage these devices, know who is using which device, configure the conference rooms, manage users, and assign skills, deciding which users are allowed to use which skills. You can build your own skills, unique to your business, and extend the system with them, and there is a growing number of partners that build these skills for you as well. We also already see a growing community of companies that are really putting these devices to work. WeWork is interesting: they have all these co-working spaces around the world, and they are putting Echos in all of the conference rooms that they have. Another customer has done the same thing across 25 offices around the US, where they placed Echo devices; they are big Chime users, both internally and for their customers, so the integration between voice and Chime is important there as well. Voice is the key thing here; it's a natural way of interacting with your systems. I think voice is the first disruption that will happen, driven by the capabilities of the deep-learning tools that we are giving you. It will also mean that you're going to build your backend systems differently. Many of today's skills still interact with page-based systems; the next generation of systems will be built using conversational interfaces, because this will become the main interface to your systems. This is how you will unlock a much larger audience for the systems that you're building. And so you need to build your backends differently: no longer thinking about just page-based output, but about how a conversation works, how the psychological frameworks for conversations translate into different software components, or different stages in how you
interact with your customers. If you want to build conversations, these systems are going to change, and we are going to be your partner in that, because we are going to deliver the capabilities and services that will allow you to build these systems of the future using AWS, so that you can build conversational systems that will delight your customers — and that is not trivial. If we think about new ways of building our systems, it is important that we start thinking about our architectures differently. If you look at 21st-century architectures, definitely one of the first things is that they will have natural interfaces. But then, when we think about the rest — what's happening under the covers, what's happening in our back ends, what kind of help do we get from AWS — there is of course a lot of help that we will give you, with lots of experience over the years. When I think about these architectures, I think there are three different planes that you need to be aware of. There is the administrative plane of your systems: that's basically your deployment cycles — how you deploy and manage instances, containers, functions, whatever mechanisms you're using. Then there's the control plane, which manages the resources that you have deployed. And then there's the data plane, which actually gives you access to those resources. Each of those planes plays a different role and may have different reliability requirements and different security requirements, and all of them are things we see come back in all of our systems. We'll talk a bit later not just about how we see this with our customers, but also, for example, how we do this at AWS. Now I want to pull up an architecture that looks deceptively simple. We have a great customer in Kuala Lumpur called iflix; they are a video and TV streaming company
that targets most of Southeast Asia, the Middle East, and Africa — very successful, relatively young business. If you look at their architecture, this is the typical architecture you would see when someone talks about a video streaming company. But this is not the real architecture, and if any of you have ever built a real system, you know that this is just way too simple — this is not how your real architectures look. There's a lot more: there is no eventing here; where's the analytics? How does data flow from one area to another so that your data scientists can get access to it? So let's do a little bit of a deep dive on what the rest of the architecture looks like. If you pull out that application subnet, it actually contains a whole set of subnets and a whole set of different pieces, each with load balancers, with NAT, and with Kubernetes clusters for each of the applications that are running. Each of them has databases associated with it, and they have a whole range of these applications running. Then look at the next piece: if you want to do some level of control of your application, you need to move pieces for your organization, you need to move data — this is their eventing infrastructure. And if you look at this, many of the components in here are AWS components. iflix is a company that from day one made the decision that if AWS provides a service, they will not build it themselves, so they are really built around AWS components. Still, their architectures are what I would call extensive — not necessarily complex, but extensive — and this is just how they get events. Then you get to where these events go in real time: you have S3 buckets that hold the data lake data, then you have Athena — they are really running Athena over this data — plus FTP importers, log file processors, and events coming
back from their players — all this data flows into their system. Interestingly enough, we help them with most of these; the components they really need to build themselves are relatively limited, and you see AWS services coming back everywhere. But these environments are extensive, and we haven't even gotten to the part where they actually do the analytics — where they have to create analytics for their partners, the telcos, internal customers, or external customers. In each and every one of those, AWS helps them, but still, these are extensive architectures. And that's not even the environment where their data scientists can experiment with this data, and we're not even talking about other parts of their architecture, like this one, which is how they ingest video data. The point I want to make is that our applications and our systems are pretty extensive, but AWS helps you in most of those places to control the complexity and keep things simple. Still, real architectures are a lot more than just that one marketing slide you saw at the beginning. For many of our customers this is a challenge, with all the different services that we have, but we give you help with that. We launched, I think two or three years ago, a program called the Well-Architected Framework, and this framework has become extremely successful for many of our customers. Our solution architects have performed literally thousands of framework analyses at different customers, and we've learned a lot from that. Where a number of years ago the Well-Architected Framework was one single document, it is now a set of five different pillars with very deep advice and knowledge about operational excellence, security, reliability, performance efficiency, and cost optimization. I really urge you to visit the Well-Architected Framework, because there's a lot of advice about how you can
build very extensive architectures on top of AWS. Next to these five very extensive documents, the team has been really active in taking all the knowledge they have and applying it to different areas. Next to the five pillars underneath the Well-Architected Framework, they have also created lenses, and they started off with two of them — basically, how to apply the Well-Architected Framework if you have a high-performance computing application or a serverless application. The Well-Architected Framework is now also part of every associate-level certification that you get, and we run a number of boot camps here to train others to do these Well-Architected evaluations as well. If you look at the Well-Architected Framework, there are a number of general principles that lie underneath it. Stop guessing capacity needs: that's the old-style approach, where up front you need to figure out how much capacity you need. Build your systems first, and then figure out the best instances to run them on at that moment. The cloud also makes it possible to test your systems at production scale; it's no longer the case that you have five servers in a corner that you pretend are a production system. Use automation to make experimentation easier: make use of CloudFormation and other declarative tools like Terraform to lay out your infrastructure, and let that instantiate new versions of your production environment to experiment with. And understand that the system you're building now is probably not the same system you're going to be running six months, a year, or two years from now — business requirements may change, scale may change, reliability and security requirements may change over time — so make sure that you can evolve your system over time.
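The "use automation to make experimentation easier" principle comes down to declaring infrastructure so a throwaway copy of an environment is one command away. A minimal CloudFormation sketch might look like this; the instance type and AMI ID are placeholders, not values from the talk.

```yaml
# Hedged sketch: a tiny CloudFormation template that can be instantiated
# repeatedly for experiments. ImageId below is a placeholder.
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  EnvName:
    Type: String
    Default: experiment
Resources:
  AppInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      ImageId: ami-12345678   # placeholder AMI
      Tags:
        - Key: Environment
          Value: !Ref EnvName
```

With a template like this, `aws cloudformation create-stack --stack-name experiment-1 --template-body file://env.yaml` stands up a fresh copy of the environment, and deleting the stack tears it all down again.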
The same has gone for Amazon and Amazon Web Services itself. I remember the first days of Amazon S3, doing the architecture discussions there, trying to figure out what the number of objects in the service would be about a year later — and just for the heck of it, we put two extra zeros behind it. We blew through those numbers in the first three months. With almost every order of magnitude — or two orders of magnitude — of scale in your service or in your software, you will need to revisit your architecture. The one thing we did know up front at S3 is that we were not going to run the same software two years or four years later; it needed to be able to evolve. You need to decompose into smaller building blocks, so that you can evolve these independent building blocks over time without having to change the overall system. We talked earlier about driving your architectures with data — and actually use game days: just break things, a lot, if possible. One particular pillar that I'm really excited about is the one on security, because if there is something that is really important today, it is security. Protecting your customers should be your number one priority; without that, you don't have a business. This comes before any feature development, and at AWS it will forever be the number one investment area. Protecting your customers and your business should be your number one priority. Fortunately, the Well-Architected security pillar gives you a whole set of best practices to follow. Implement a strong identity foundation — for example, go for least privilege. I know that once we start building our systems, in general everybody gets root privileges to do everything they can do; that is something you have to reduce over time. Basically, start taking away IAM roles until you can no longer do your job, because least privilege is a very important principle in keeping your system
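In IAM terms, least privilege means policies that allow only the specific actions on the specific resources a role actually needs. A hedged sketch of such a policy, with a hypothetical bucket name:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyOnOneBucket",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-app-data",
        "arn:aws:s3:::example-app-data/*"
      ]
    }
  ]
}
```

Everything not explicitly allowed is denied, which is exactly the "take away roles until you can no longer do your job" exercise expressed as policy.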
secure. Centralize privilege management to avoid long-term credentials. Make sure you have all the tracing and auditing tools ready, and automate your security processes as much as you can, because we humans in this room are still the major vulnerabilities in terms of security out there. Protect your data at all cost, and make sure that you are prepared for things that can go wrong, because sometimes they will — and if you are not prepared for that, you need to have your playbooks and runbooks ready. I want to come back to this: protect your data in transit and at rest, because I believe we've not taken encryption seriously enough. This is a quote I like to use — first of all, nobody wants to see me dancing, so that's good — but you should encrypt like everyone is watching. Encryption is the only tool you have to be absolutely sure that you are the only one who controls access to your data. Maybe five years ago these tools weren't easy enough to use; today, that's a whole different story. Encryption is integrated in almost all of our AWS services and ready for you to use. Five or six years ago we were still discussing whether HTTPS was too expensive; now every consumer service runs over HTTPS. Encrypting your data, both in transit and at rest, should be your default behavior, and we give you all the tools for that. Take a look at the signal-to-noise toolkit, s2n: it's a very lightweight implementation of TLS that doesn't come with all the millions of lines of code that sit in libssl — make use of it. You can encrypt between your load balancers and your instances, you can encrypt traffic between your instances and all the services that you're using, and you can encrypt the data at rest. We can generate the keys for you, or you can bring your own keys. For example, Redshift encrypts by default every data block with a random key; that whole set of random keys is encrypted with a master key, and we can generate
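The Redshift scheme described here — a fresh random key per data block, with only the keys encrypted under a master key — is the envelope encryption pattern. Here is a toy illustration of the pattern itself; the repeat-XOR "cipher" stands in for real AES and must never be used for actual data.

```python
import os

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy cipher (repeating XOR) standing in for real AES. Illustration only."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def envelope_encrypt(master_key: bytes, plaintext: bytes):
    """Envelope pattern: a fresh random data key encrypts the data,
    and only the data key is wrapped under the master key."""
    data_key = os.urandom(32)
    wrapped_key = xor_stream(master_key, data_key)
    ciphertext = xor_stream(data_key, plaintext)
    return wrapped_key, ciphertext

def envelope_decrypt(master_key: bytes, wrapped_key: bytes, ciphertext: bytes):
    # Unwrap the data key with the master key, then decrypt the data.
    data_key = xor_stream(master_key, wrapped_key)
    return xor_stream(data_key, ciphertext)
```

The point of the pattern: bulk data never touches the master key, so the master key can live in a service like KMS while only small wrapped keys travel with the data.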
the master key for you, or you can bring your own master key. With KMS you can be assured that you are the one who controls access to your keys, by using both IAM and CloudTrail to make sure you have full control over your keys. There is no excuse anymore not to use encryption. At minimum, encrypt the personally identifiable information of your customers; and your critical business data you have to encrypt as well, because of the threat models — make sure that you can protect your customers and your business at all cost. I'd like to believe that security is all of our jobs now. There is no longer just the security team: if you do continuous integration and continuous deployment, you have to make sure that everyone is now a security engineer. Where this used to be your security team, central in your security team will now be developers — you as developers have to become security engineers. It's all of our task to protect our customers. Maybe in the past, when you had a three-month development cycle, you could have a review afterwards by the security engineer; that's no longer the case. We move fast — we sometimes do many deployments a day — and this pace of innovation really needs a pace of protection, and the best way to do that is to use automation. The good thing is we help you with that, but there are many places in your development pipeline where you need to start taking care of security. It's both security of the pipeline — make sure you have strict control over who has access to your build servers, make sure the build servers have CloudTrail enabled, make sure you know exactly how they are operated so they can't be compromised — and security in the pipeline: make sure that every update you make to your source code is validated, do static code analysis on it, and integrate that into your process. This is a typical continuous integration and deployment pipeline, isn't
it: version control, the CI server, the deployment service, different stages. This is how they should look, and each and every piece should get security integrated into it, to make sure that the software you are developing is as secure as you can make it. When do you need to do that? You need to do it pre-deployment, before you actually start deploying, and afterwards you also have to have a set of rules for what you do after you've deployed. Definitely, if you find that critical API usage has changed — for example, any changes to IAM — they need to be manually inspected; you need to figure out why those changes were made. Any new set of libraries added to your deployment cycle should be audited. And when do you do this? Whenever you make a change — maybe on a daily basis, maybe on a periodic basis — make sure that you always do this. With all of that, we give you a lot of help with this automation. You can make use of Trusted Advisor to give you static analysis. You can make use of Amazon Inspector, which has a whole set of vulnerability libraries that can continuously inspect the software you're deploying — meaning that if you do continuous deployment, you can have Inspector continuously validate your environment to make sure you continue to meet the compliance regulations you are probably subject to. Macie and GuardDuty make use of machine learning: Macie inspects your data to find personally identifiable information and the access patterns around it, while GuardDuty protects the identities and the access to your systems, using machine learning to learn what the normal patterns for your usage are, find anomalies, and notify you of those. So we give you lots of automation tools. One of my favorite ones is actually Config rules, because Config can discover your whole environment — all the tools and all the
services and all the resources that you're using — and then start building an audit trail of all the changes you're making to the resources you use. You can get streams out of that, you can get SNS notifications, you can create snapshots — compare how your system looked two weeks ago to how it looks now. This is a very important tool, often overlooked by many developers, but please start making use of it, because it will track all the changes in your environment. CloudTrail is similar: CloudTrail will log every API call to every service and then put it in a secure bucket in S3, where you can do analysis on it and put alarming on it too. So we give you all of these automation tools — use them, because the more you can automate around security, the better it is. So how has development changed over time? Definitely, you have to be more security-aware. I think development has also become way more collaborative, where you work together with many different teams, not necessarily in your office — maybe around the world. I met one CIO yesterday who said, "Well, I'm not really a technologist, but I have 30 scrum teams around the world collaborating together," which I thought was a real challenge for her. More languages: I think five or six years ago it was Java, C++, and some others, with maybe some dabbling in Python; now there's a whole slew of languages to choose from, and we're moving much faster — more services to integrate, more different platforms we're developing for — and QA and operations are deeply integrated into our systems now. If we build systems in this fast-changing development environment, you need help, and I think the CodeStar tools that we developed in the past year really, truly help you in that world. But I think there's something extra that we needed to do for you. I believe that every great platform has a great IDE, and I'm happy to announce today the general availability of AWS Cloud9, a fully managed cloud IDE for
writing, running, and debugging code that is actually pretty spectacular. To show you more of Cloud9, I would like to invite on stage Clare Liguori, senior software engineer at Amazon Web Services, who's going to tell you more about Cloud9. [Applause] [Music] "You ready, Ankur? Let's do it. Hi, I'm Clare Liguori. I'm a software engineer at AWS Developer Tools, and with me here today is Ankur Agarwal, the product manager for AWS Cloud9. We are super excited to be here on stage at re:Invent today, and we're going to give you a quick demo of Cloud9 — we're really excited to show you everything that we love about this IDE. So let's get started. This is AWS Cloud9. This is Ankur's Cloud9 environment, which we're able to pull up in any browser, like on this computer on stage. Ankur and I have been working on some side projects together recently, which I want to walk you through: we've been building applications that help us throw team parties for the Cloud9 team. On the left panel of the screen you can see we have a DJ app, where people can vote on the music they want to hear at our team parties, and there's a trivia night app, pre-loaded with some '80s trivia for our team trivia nights. On the right-hand side you can see the Cloud9 code editor with some of our project code open. Cloud9 also has a full terminal built right into the IDE. I personally really like to customize my IDE, so I always go in and change the theme — I like this dark theme; Ankur likes it a bit lighter. I also always change the key binding: Emacs is the obvious right choice here, I'm a die-hard user, though Ankur likes the default. The other thing that I really like about Cloud9 is the number of languages and syntax highlighters that come pre-installed. We've mostly been working in JavaScript and Ruby, but we've also been experimenting with Python and Go. So let's jump in and run one of these applications. We're now going to run the trivia night app right in the IDE,
and then we can go and preview the running application. So without having to install Ruby or Rails on the stage computer, we're able to jump in and make sure we've got the right AWS trivia installed. That was our trivia night app. We also have our DJ app, where people can upvote or downvote songs for the party, so let's switch over to the terminal. We have the list of songs that people can vote on stored in a file in S3, and Ankur is now using the AWS CLI to download it. The CLI was really easy to get started with, because it comes pre-installed with all of our AWS account credentials. So let's go ahead and open up that file — and Ankur, I think this is your personal playlist. That's a lot of Britney Spears! I love Britney too; I hear she has a great show here in Vegas. Another thing with a fantastic show here in Vegas this week is serverless applications. Ankur and I, for our side project, have been working with serverless a lot, and we're really excited about the possibilities here. Cloud9 really helped us iterate quickly on the Lambda functions we've built: right here, from within the IDE, you can see all the Lambda blueprints, which helped us get started really quickly. I want to show you a sample Lambda application that we built that orders food for our party. Our big plan is that we'll be able to place an order with our local pizza place at the push of a button, but for now we've got a proof of concept that orders pizza. What Ankur is doing now is importing this Lambda function into his Cloud9 environment, and now we're looking at the code that orders the pizza. Let's go ahead and jump down to the order details. Because this is a proof of concept, we currently have our order hard-coded to two pepperoni pizzas. We've got that set, but we want to throw a party here at re:Invent, so we're going to change the address from Amazon headquarters to the MGM Grand. And we should probably test this before we actually place an order, so we'll set a breakpoint
and run the debugger. We're going to run this with the "run local" option, which means we're testing what our code would do if we ran it directly in Lambda. Let's see how we did: we'll turn on debugging and go to run. Our debugger hit the breakpoint, so we'll scroll down and look at the variables to make sure we made the right change. Perfect — you can see the MGM Grand, we have the right address — so we'll go ahead and resume this and let it run. Awesome, our test order was successfully placed. But of course this was just a test; before we push this to production I want to go in and review Ankur's change. Ankur, can you share your Cloud9 environment with me? This is one of my favorite Cloud9 features: right now Ankur is giving me full access to his environment, so we can pair program together and review code. Great, so I'll go get in my environment. Okay, I'm logged in. What you're seeing now is my screen on the left-hand side, with my blue theme, and Ankur's on the right-hand side with the light theme. You can see our initials in the upper right corner of the screen, which means we're both online in this environment. We can also hover over each other's cursor and see what code we're looking at — Ankur and I use this a lot to pair program. I can go in and highlight some lines on my screen, and they show up on Ankur's screen in purple. [Werner:] "Wait, wait, wait — I'll do that. Okay, my first chance to write production code! Okay, fine — what do I need to do, Clare?" So, Werner — Ankur and I use this to pair program a lot — is there a change that you want to make? "Oh yeah, it needs to be beer with the pizza, obviously." Obviously. One of the things that Ankur and I like to do is chat within the Cloud9 IDE, so go ahead, on the right-hand side of the screen, and tell Ankur that you're going to make a change. Okay, so Werner says, "Hey Ankur, I have one more thing to add" — I wonder what that could be — and
Ankur says, "Go for it." So Werner, on the left-hand panel, go ahead and make your change. ("Wait, I'm getting old...") As Werner is typing, you can see his changes showing up on Ankur's screen on the right-hand side, and Cloud9 autocomplete is popping up, helping him fill in the right order parameters. It looks like he's added our two beers. "Okay, thank you." Thanks, Werner. So Ankur, now that that's done, go ahead and push it to production. What Ankur is doing now is taking the changes that Werner made in our local environment and deploying them to Lambda, and then we can actually run these changes — actually invoke this function. We're going to select "run remote" and really place our production order. Awesome — our production order was placed, and pizza and beer are on the way. Thank you so much; please check out AWS Cloud9 in the console later today. [Applause] [Laughter] [Werner:] Well, that's fast delivery. Thank you guys for a great demo — I left the beers backstage, enjoy them. A great IDE: faster deploys, deep integration of all the different services, pair programming — and what I really, really like in the Cloud9 IDE is the ability to debug your Lambda functions; there's no other tool out there that gives you that support for Lambda. Also the deep integration with all the CodeStar tools: it doesn't just leave you in an isolated environment — CodeStar's pipelines, deploy, commit, build, all of those are integrated in the IDE. I just got to lecture you on security; now I get to lecture you on availability. We all want to build highly available systems that are up 100% of the time — that's what we build our systems for, that's how we build them; we never assume that something bad will happen to them. But remember this quote from many years ago, where I said: everything fails, all the time. After the last failure I promised to revisit that — haven't done that yet; I'll wait for the next failure, which will happen, we don't know
when, but it might. So if we think about availability, the best advice comes out of the Well-Architected Framework we talked about earlier: test recovery procedures, and stop guessing capacity — those two were general principles in the Well-Architected Framework. Make your recovery as automated as possible; if you use horizontal scaling of smaller components, the likelihood is that you can build much higher-availability systems; and then manage those changes through automation. Let's take one step back and talk a little bit about the fundamentals of reliability. You've all heard of the famous two nines, three nines, four nines, five nines: that basically indicates the amount of successful running time over the time that your application should have been running. From the way you architect your systems, you can make an educated guess about the minimum bound on the availability of your application. Of course, if you're lucky, availability is a hundred percent, but based on your architectural principles and the architecture you've built, there is a lower bound that you can achieve. Dependencies play an important role there: if you have one component that has four nines of availability — 99.99% — and another component that also has four nines, with a hard dependency between them, the best the overall system can achieve is 99.98% availability. That is with a hard dependency, so figure out exactly how you depend on your dependencies and what the effect on your overall system is. If you have redundant components, it becomes a lot better, because the rule there is that you take a hundred percent and subtract the product of the failure rates of each. So with two four-nines systems, you get a hundred
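The two composition rules described here — hard (serial) dependencies multiply availabilities, while redundant components multiply failure probabilities — can be sketched as:

```python
# Hedged sketch of the availability arithmetic from the talk.

def serial(*availabilities):
    """Hard dependencies: overall availability is the product of the parts."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

def redundant(*availabilities):
    """Redundant components: subtract the product of the failure probabilities."""
    failure = 1.0
    for a in availabilities:
        failure *= (1.0 - a)
    return 1.0 - failure

# Two four-nines components in a hard chain: about 99.98% overall.
print(round(serial(0.9999, 0.9999), 6))      # 0.9998
# Two redundant four-nines components: far better than either alone.
print(round(redundant(0.9999, 0.9999), 10))  # 0.99999999
```

This is why the same pair of components gives roughly 99.98% when chained but 99.999999% when redundant: redundancy multiplies the tiny failure probabilities instead of the large success probabilities.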
percent minus 0.01% times 0.01%, which means that with two redundant components that each have four nines of availability, the overall availability of your system rises to more than five nines. Redundancy is a very important tool for creating higher-availability systems. If you think about availability, there's a great set of advice that comes back in the Well-Architected Framework, and many of those are principles that we apply within AWS ourselves. For example, fault isolation zones: the minimum form of that is something we call cells. Even within your application, you can build cells that are independent of each other; you use a resource ID or something similar to spread your traffic, or your access, over the different cells, and you keep inter-cell communication to a minimum. That means the blast radius of a failure is limited to that particular cell. The next level of fault isolation is an Availability Zone, which we give you in AWS: AZs are independent of each other but still have networking that allows you to do synchronous replication between objects in the different AZs. And then, of course, the absolute best isolation is that of regions: regions are completely autonomous, and there are no control or data planes that span multiple regions. Thinking about control planes: a number of our services, for example EC2, have a control plane that spans multiple AZs, but it allows the API calls to be filtered per AZ. That is, if there is a failure going on in one of the AZs, you can still access all the resources in the other AZs without being bogged down by the fact that your control plane in one of the AZs may be slowing down. So think about that when you build control planes that have to span multiple AZs. In all of this, distributed systems best practices of course always come back.
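The cell-routing idea — spread traffic over independent cells by resource ID so a failure's blast radius stays within one cell — comes down to a stable hash. A minimal sketch:

```python
import hashlib

def cell_for(resource_id: str, num_cells: int) -> int:
    """Map a resource ID to a cell with a stable hash, so the same
    resource always lands in the same cell and a cell failure only
    affects the resources that hash into it."""
    digest = hashlib.sha256(resource_id.encode()).hexdigest()
    return int(digest, 16) % num_cells
```

SHA-256 is used here (rather than Python's built-in `hash`) because it is stable across processes and machines, which matters when every router in the fleet must agree on which cell owns a resource.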
I'm a real stickler for this, making sure that both our customers and our engineers internally stick to these distributed systems principles. You have to throttle if you get overwhelmed — don't try to do all the work. On the other hand, if you are the client, make sure that you retry with exponential backoff when you see errors coming back. All the AWS SDKs do this, but if you build your own access mechanisms, please take care to do exponential backoff when you see errors happening — don't overwhelm the service with extra requests. When you build your service, if you know that you might not be able to handle a request, fail fast and release all the resources that you hold. And for all of those: circuit breakers are important. If you notice that one of your downstream services is giving you trouble, and it's not a critical service, flip a circuit breaker and take it out of the loop. Make sure that you minimize what I would call bimodal behavior — that is, failure behavior where your recovery paths are extensive. Avoid your services ending up there: fail fast, don't do massive recovery work, and take, for example, a recovery-oriented approach to availability. There's a great story by, I believe, Tom Van Vleck, one of the Multics engineers — you are probably way too young to remember what Multics is; it's a very famous operating system built at MIT, I think in the '60s and '70s. One of the engineers wrote that half of the Multics code was error recovery code, and Dennis Ritchie, one of the original developers of Unix, replied: "Oh yeah, in Unix we took all of that code out and replaced it with one function: panic" — and then you would scream down the hall, like, "reboot the server." That was their approach; however, it made sure that in isolation you could have very fast recovery, because rebooting the
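The two client-side patterns named above — retry with exponential backoff, and a circuit breaker that fails fast — can be sketched as follows. This is a minimal illustration, not the AWS SDK implementation; the SDKs add jitter and service-specific retry policies on top of the same idea.

```python
import random
import time

def call_with_backoff(op, max_attempts=5, base=0.1, cap=5.0, sleep=time.sleep):
    """Retry op() with capped exponential backoff and full jitter."""
    for attempt in range(max_attempts):
        try:
            return op()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            # Sleep a random time up to min(cap, base * 2^attempt).
            sleep(random.uniform(0, min(cap, base * 2 ** attempt)))

class CircuitBreaker:
    """Trip after `threshold` consecutive failures; callers then fail
    fast instead of piling load onto a struggling downstream service."""
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0

    def call(self, op):
        if self.failures >= self.threshold:
            raise RuntimeError("circuit open: failing fast")
        try:
            result = op()
            self.failures = 0  # success resets the breaker
            return result
        except Exception:
            self.failures += 1
            raise
```

Jitter matters: without it, all the clients that saw the same error retry at the same instant and overwhelm the recovering service again.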
machine was probably a much faster recovery path than going through massive trees of error-handling code for failures you'd never seen before. Now, if you really want to do this well, you have to apply automation as well. We just gave you access to canary deployments in Lambda: that means a subset of your customers gets exposed to new code. Blue-green deployments: basically two fleets next to each other, where you can route maybe half a percent of your customers to the new code, and flip back if needed. Feature toggles are the mechanism where you deploy the new code or the new system but don't enable it yet, and you slowly start toggling on features one by one to see how they impact your customers. And of course, do deployments to fault isolation zones in sequence, not all at the same time. In all of this, I like to believe that it's business rules that drive availability choices: as much as we all want hundred-percent-available systems, they cost money, so it must be a business decision what kind of architecture you're going to use to support the availability you need. If you look at systems that have two nines, for example, they basically have a single instance and a database — maybe the database even runs on the same instance — and backups in S3. The best availability that you can get out of this is 99.9%. Imagine that if something goes wrong, it takes you 30 minutes to figure out what went wrong, 10 minutes to deploy a new stack, and then 30 minutes to load the backup from S3. If you have four of these failures a year, that takes you four times 70 minutes — 280 minutes — and that puts an upper limit of 99.9 percent on your availability; on top of that, you'll do some software deployments to that single instance, taking it out a number of times a year. The best-case bottom line
is 99.9 percent ah it's cheap though and it might be quite a few services that you have a quite a few applications that are fine there's two nines availability however if you want more maybe you need three nines in that case you need to run at least in two availability sounds you may even master/slave between the databases the master lives in AC one so all updates flow for AC one if something goes wrong with one of the ACS or one of the stacks if they basically failover the master to the other availability zone and that's sort of that's then you fail your handling again diagnosis 30 minutes 30 minutes recovery 60 minutes in total probably and again if you have two of these outages year that have actually take the whole system down you actually are able to actually get 99.95 out of this one now if you want to go really really sort of extensive you might want to deploy to three availability zones and actually be prepared for if you lose a whole if you lose the system is in availability zone to still have 100% capacity available for your application that means you have to apply 50 you have to deploy 50 percent of your capacity and it statically in each of these zones it may be in this particular case you will use Aurora use multi master so that you don't need to worry about cross a Z code communication for your databases but if you lose one the estilo 400% capacity available in the two other ones yes so that means that probably your failures are going to be seriously limited the complete failures where your whole system is offline it's going to be seriously limited to just a few minutes a year you can achieve four lines now if you want to achieve five nines you can't do that with just one single region now you actually need to be able to make use multiple regions so a five nines will basically deploy this active active over two regions make use of route 53 and what you're going to be using here is dynamodb global tables yes so that gives you singleness replication of 
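The downtime arithmetic above is easy to check; a quick sketch, using the illustrative outage counts and durations from the talk:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def availability(total_downtime_minutes):
    """Fraction of the year the system was up."""
    return 1 - total_downtime_minutes / MINUTES_PER_YEAR

# Two-nines example: four failures a year, each costing
# 30 min diagnosis + 10 min redeploy + 30 min restore from S3.
two_nines = availability(4 * (30 + 10 + 30))   # ~0.99947 before counting
                                               # deployment downtime
# Three-nines example: two full failover outages of ~60 minutes each.
three_nines = availability(2 * 60)             # ~0.99977

print(two_nines, three_nines)
```

The raw numbers land slightly above the quoted 99.9 and 99.95 percent; the extra downtime from routine deployments to the same instances is what pushes them down to the figures in the talk.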
If you lose a complete region, you will still have everything in your application available. This really gives you the ultimate in availability design, and it's still the business rules that decide whether or not you're willing to make these kinds of investments; that has nothing to do with AWS. In all of this, you will need to have your business make decisions about what kind of architecture you want to run to support the kind of business that you have. The cool thing is that with AWS we give you all these mechanisms and tools to implement any availability scenario that you have in mind.

And if you look at the AWS services themselves, you can see that there are differences between the control planes and the data planes in the availability they are designed for. If you go to the Well-Architected document on reliability, you will see some of these numbers: three and a half nines, or four nines for KMS, in terms of the control plane. I'm not sure whether all of you can see the numbers over there, but the Route 53 data plane is actually designed for 100 percent availability. Why? Because of the spreading out of all that capacity around the world, we can design for 100 percent availability. The IAM and KMS data planes both have four and a half nines of availability, because they are such crucial services in all of the operations of AWS. So if you are interested in these numbers, check out the reliability pillar of the AWS Well-Architected Framework.

In all of this, testing plays an increasingly important role. Test, test, test: break everything that you can and see how your systems respond to it. Game days play an important role within that: just break things, and have a whole day where you do nothing else but try to break your systems. Many years ago we tried this for the first time, not at AWS but at amazon.com, where we took a data center out, and we thought we were really good at that. Well, all the things you learn when you have these major failures are very interesting and very important. All of this has resulted in a whole set of new theories and practices, and that's called chaos engineering. It's practiced by many organizations around the world, and I'm very happy that Nora Jones, co-author of the book on chaos engineering, is ready to come and talk to you. She's a senior software engineer at Netflix and has great experience in chaos.

[Music]

Thank you, Werner. Good morning, everyone. This is a typical screen that you see when you log into Netflix. You might see a Trending Now row, a Popular on Netflix row, and a Continue Watching row, and if you're like me, your Continue Watching row probably contains Stranger Things season two. So let's take a look at this Continue Watching row. Say, for example, that we weren't able to load what you had been watching previously. That's a part of our system, and so that failure probably shouldn't result in an error; it should result in a fallback. Maybe a fallback where we just don't show you that row anymore, or maybe a fallback where we show you something else instead. What shouldn't happen is that that service failing results in a streaming error. And so we exercise fallbacks like this one pretty regularly through chaos experiments at Netflix.

Now, you may be thinking: couldn't I just beef up my known forms of testing and get that out of the box? Do I need to do chaos experiments as well? So let's talk about some of those known forms of testing. One of them is a unit test. A unit test is where we take a single component, we look at the input, and we make sure the output is okay: given an input X, I expect Y to happen. We're testing knowns here; we're testing things that we expect to test and that we think to test. Another known form of testing is integration tests. Integration tests are also testing knowns, except we do it between components: given an input X, I expect component A to output Y, and then I expect component B to output Z. We can also do it on a service level, with components and services talking to other services, and knowing our expectations from there.

Now that we've described some known forms of testing, let's talk about chaos experiments. You'll notice they look a lot like integration tests, except here, from service to service, we have the option to either add a failure, or add latency by adding time in between our calls: the kinds of trouble that hit our distributed systems pretty regularly. And you'll notice that we drop the word "testing" and call them experiments instead. That's because here we have unknowns. We assume that we're resilient to these failures, we assume that we will be fine and will provide our fallbacks, and so failing these calls, or making them latent, results in unknowns.

So now that we've defined chaos experiments and we've talked about known forms of testing, what's the discipline of chaos engineering as a whole? The discipline of chaos engineering is experimenting on a distributed system in order to build confidence in the system's capability to withstand turbulent conditions in production. Chaos engineering is not meant to replace unit and integration tests; they're meant to work together in harmony to give you the most availability possible, in order to ensure that your customers have a great experience and that your business stays up and running. In my role as senior chaos engineer at Netflix, it's my job to expose the chaos in our systems before it renders your home screen unusable, and before it renders you unable to watch Stranger Things. Most recently I've had the opportunity to work on the chaos engineering book with my colleagues at Netflix, Casey, Aaron, Lorin, and Ali, and we talked about how we perform chaos at Netflix scale. We also talked about how you can apply it to other types of businesses as well.
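A minimal version of the experiment she describes, failing the dependency on purpose and checking that the fallback keeps the screen usable, can be sketched in code. This is an illustrative toy (the service names and fallback row are invented), not Netflix's actual tooling:

```python
def continue_watching_service(fail=False):
    """Stands in for a downstream service that may be failing."""
    if fail:
        raise TimeoutError("continue-watching timed out")
    return ["Stranger Things S2"]

def homescreen(inject_failure=False):
    """Render the row, degrading gracefully instead of erroring."""
    try:
        return {"row": "continue-watching",
                "items": continue_watching_service(fail=inject_failure)}
    except TimeoutError:
        # The fallback: show something else rather than surface
        # a streaming error to the customer.
        return {"row": "popular", "items": ["Popular on Netflix"]}

# The "experiment": inject the failure and verify the customer
# still gets a usable screen.
assert homescreen(inject_failure=True)["row"] == "popular"
```

The point of the experiment is the unknown: the assertion verifies an assumption about resilience that ordinary unit and integration tests never exercise.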
From early-stage startups to companies at the scale of Netflix, and to different types of industries as well. I've had the opportunity to do chaos at startups and to do chaos at Netflix scale, and I want to talk to you about some of my experiences introducing it at a startup.

I worked at an e-commerce startup, and in my second week of work we had an issue that caused the site to go down for an entire day, over a weekend period. And we realized we could have caught this, we could have caught it through regular chaos experiments. So we decided to start introducing chaos, and we did this in the form of graceful restarts and degradation. Now, a lot of you may know this as Chaos Monkey, and if you're running in the cloud, you should definitely be resilient to a single node or single instance failure. So what happened when we did these graceful restarts and degradation was that our system was already chaotic. We didn't totally evangelize this properly, and we ended up bringing QA down for a week. Now, luckily we had enough foresight not to do this in production yet, but we found issues that were inherent and that would have happened in production anyway if we hadn't done this. And so we had started with an opt-out model, but then we moved to an opt-in model, and we worked with teams, with non-critical services that wanted to participate in chaos experiments, that were ready, that had a good steady state defined.

And so we moved into targeted chaos. At that point the startup was trying to do regional failovers, and we were heavily reliant on Kafka during these regional failovers, so we decided to start chaos testing Kafka. We tried to think of ways Kafka could fail in these regional failovers, so we added experiments in the form of maybe changing the offsets, partially deleting a topic or fully deleting a topic, maybe adding a bunch of consumers onto one topic. We had a bunch of different chaos experiments running, and we realized we weren't resilient to all these failure modes, and we were able to find that out before we did these failovers, and before we brought these failures with us to other regions.

So next, at the startup, we were getting momentum, and we tried to move on to cascading failure. A cascading failure is when a failure in one part of the system triggers a failure in another part of the system, which triggers successive failures. So we tried to cause a cascading failure: a bunch of service teams and I got in a room together, we had monitors up on the screen, and we triggered a failure. And we did cause a cascading failure, but we did not cause the one that we intended. What ended up happening was that search started receiving timeouts, pricing wasn't handling that, and then Elasticsearch went completely down in QA. And again, we were still doing this in QA, but these were things that we would have seen in production had we not been chaos experimenting yet. And that was when I really saw momentum pick up at that startup, and I actually saw a culture change: a culture change from people asking "what happens if this fails?" to people asking "what happens when this fails?"

When I moved to Netflix, I was lucky enough that this was already the culture. Netflix was doing things differently, and they were doing it through failure injection testing, so that service owners can easily add latency or failure. So for that screen that we looked at earlier, the chaos experiment screen, we allow service owners to easily add latency or failure based on predefined criteria, at different injection points and building blocks of our system. But what's important here is safety and monitoring: we realized that with our failure injection testing framework we had no way to limit how much traffic we were affecting. And so we built the Chaos Automation Platform on top of it, which allowed us to determine how much traffic we were impacting, and we called it ChAP. We're working on it actively today.

And so the way it works is that calls from service A to service B are behaving as normal. Now, at Netflix, our key business metric is whether or not you can actually press play. Maybe you see a screen and you're pressing play and nothing's happening; if that happens, we see a spike in that key business metric, which we call SPS, stream starts per second. Or maybe the play button isn't available at all, and in that case we see a dip in SPS. But we always keep a watchful eye on this key business metric, SPS, when we're doing these chaos experiments. And so we look at our SPS, and then we calculate the smallest fraction of traffic possible in order for us to get a signal that the chaos experiment is working properly. Say, for example, in this experiment we calculated 2 percent: 2 percent of traffic is what we needed to get a signal here. So we take that 2 percent, we split it in half, we route 1 percent of it into a control cluster, and we route the other percent into an experiment cluster. The control cluster we don't do anything to; we let it behave as normal, because that's our control. And the experiment cluster, that's where we add our failure or our latency.

And that SPS, the key business metric that I was talking about earlier, we actively watch during the experiments through automated canary analysis. We look at our control graph and our experiment graph, and we see if those deviate too far from each other, and if they do, we automatically short the experiment. This is a key safety mechanism when we're doing our chaos experiments, because if the experiment shorts because we see an error, we're able to stop the experiment early, and the engineer is able to go offline and debug the issue without being under the fire of a pager going off or a customer seeing an issue. And so we're able to debug this before it renders Netflix unusable for you. So what's the future of this?
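The safety mechanism she describes, comparing the control and experiment clusters on the SPS metric and shorting the experiment if they diverge, might be sketched like this. The deviation threshold and metric series are invented for illustration:

```python
def should_short(control_sps, experiment_sps, max_deviation=0.05):
    """Short the experiment if the experiment cluster's stream-starts-
    per-second falls too far below the control cluster's at any point."""
    for c, e in zip(control_sps, experiment_sps):
        if c > 0 and (c - e) / c > max_deviation:
            return True
    return False

# Healthy experiment: the two clusters track each other closely.
assert not should_short([100, 102, 98], [99, 101, 97])
# Unhealthy: the injected failure is hurting real traffic; stop early.
assert should_short([100, 102, 98], [99, 80, 97])
```

In the real platform the comparison is done by automated canary analysis over live metrics; the sketch only shows the shape of the decision that stops an experiment before customers feel it.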
Right now, in order to do chaos experiments, we had to meet with teams, we had to decide what good injection points were, we had to decide what good failure scenarios were, and it was taking a lot of time; it would take a few hour-long meetings. And we decided that we needed to start getting smarter about this: maybe we could come up with an algorithm that decided what the best chaos experiments were. And so that's what we did: we looked at the entire system as a whole and started coming up with the chaos experiments on our own, and automated them. We're automating the criticality of them, too: if there's a more critical experiment, we'll run it more often, because we know it's something that needs to be run more often. We're in the middle of this right now, and it's been going great so far. It allows us to run way more experiments a day, it eliminates the meetings that we had to have before, and we're seeing a lot of issues that we're able to reveal before the customer sees them.

So you may be thinking here, "I need more chaos in my life," and you totally do; everyone can and should be doing this. If you want to learn more, you can go to principlesofchaos.org to hear more about the discipline. The book that I mentioned earlier is also free online on O'Reilly. And if I can leave you with one final truth today, it's that chaos doesn't cause problems: it reveals them. Thank you.

[Applause] [Music]

Ah yes, we need more chaos in our lives, don't we? I love that. If you go back in time, which I often like to do: John Gall was a systems researcher in the '70s, and he had done a lot of research on what makes systems reliable and what does not. He came up with what is now known as Gall's Law. His observation was that complex systems that failed were in general built as complex systems from the start, and that a complex system that works has invariably been found to have evolved from a simple system that works. So you start off with building smaller and simpler components, and in the aggregate you can build your more extensive, more complex architectures, because in the end we will end up with more complex systems.

So if we take a step back and think about how you can reduce complexity on day one, remember the admin plane, the control plane, and the data plane. Compare, for example, RDS with running your own relational database management system. With your own RDBMS, you are responsible for all the planes, from deployment and management to creating the instance, controlling your database, and the data plane. In the case of RDS, the admin plane is taken away from you, we take care of that, and large parts of the control plane are removed from you as well; so part of the control plane, and the data plane, continue to be in your world. If you look at running your own NoSQL system, again you're responsible for managing the reliability and the security of each of those different planes. However, if you use DynamoDB, there's no more admin plane, the control plane is reduced to only telling us how many reads and writes you want, and the data plane is where you interact with the service. So the more managed services you use, the fewer moving parts you have. And so if you want to build super simple systems that can be reliable and secure, then you need to make use of as many managed services as possible, so that you don't have to take care of all that.

Now, we continuously roll out these services. We may not have made many great announcements yesterday in Andy's keynote, but on the side we've been launching all these other services. Did you notice that we launched a time synchronization service yesterday, so that you can keep all your instances in check? Did you know we launched Amazon MQ, which is a managed ActiveMQ service?
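To illustrate how small DynamoDB's remaining control plane surface is: creating a table is essentially just declaring a key schema and the reads and writes you want. A sketch of the request you would hand to boto3's `create_table`, built as plain data so the shape is easy to see (the table name and throughput numbers are invented):

```python
# Parameters for dynamodb.create_table(**table_spec) via boto3.
table_spec = {
    "TableName": "customers",  # hypothetical table
    "KeySchema": [
        {"AttributeName": "customer_id", "KeyType": "HASH"},
    ],
    "AttributeDefinitions": [
        {"AttributeName": "customer_id", "AttributeType": "S"},
    ],
    # This is essentially the whole control plane left to you:
    # how many reads and writes per second you want.
    "ProvisionedThroughput": {
        "ReadCapacityUnits": 100,
        "WriteCapacityUnits": 20,
    },
}
```

Everything else, replication, storage, failover, the admin plane, is handled by the service, which is the "fewer moving parts" argument in concrete form.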
We complement all of these components more and more with managed services, so that you have to do less and less and can truly focus just on the business logic that you want to write. Whether it's coordination, where Step Functions helps you coordinate your systems, or whether you make use of ALB or API Gateway, all of these tools are managed for you. And so the more you make use of these services, the higher the likelihood that your system will be reliable and secure, and can scale.

In all of this there's of course the rise of microservices, and I'm really happy that this is taking hold, because I believe these are crucial steps toward building more secure, more reliable, and better scalable services. So in my eyes, what do microservices look like? It's basically scaling components down to the minimum business logic that has unique scaling, reliability, and security requirements. Now, you may have heard that amazon.com, the retailer, in 2000 went through a service-oriented re-architecture; we were one of the first pioneers there, and it was a great success over time. But what we've probably never told you is that we went through a few phases. In the first round of services, we pretty quickly figured out that we had made a mistake. One of the mistakes we made was that in this decomposition of the monolith into smaller services, some of these services were data-driven: basically, all the code that operated on the customer data sat together in one service, and the same was true for orders, and the same for items, which is the catalog. So the customer master service held all the code that operated on customer data, and that meant that that one component contained the recognize-customer service, the login service, as well as the address book service, and everything in between. As you can imagine, the login service gets hit on almost every page; the address book service only when you have to check out. So when you need to scale this component, you need to scale it at the rate of the login service, not of the address book service. Plus, it became a security issue as well, because now suddenly the component that manages the address book also has access to your customer data, and the login service has access to the address book, which it has no need for. Decomposing this into smaller building blocks that each have unique scaling and reliability requirements improves your overall posture in terms of scale, reliability, and security, and improves fault isolation.

I think this has really taken off because of container technologies, both in terms of ECS, which has deep integration with all of the different AWS components, and now also Kubernetes: the Elastic Container Service for Kubernetes gives you this as well. And so container technology has become the default mechanism if you want to build microservices. So allow me to invite Abby Fuller on stage, probably the only person who can make Wu-Tang Clan jokes and explain containers for you at the same time. Abby Fuller is a senior technical evangelist at AWS and probably the world's expert on containers. Abby!

[Music] [Applause]

Hi everyone, thanks Werner for having me. I'm super excited to be here, and if you all are half as excited about the container announcements from yesterday as I am, we'll have a really great time today. So, as Werner mentioned, looking into how you can work with containers: what we got yesterday is a lot more options for running containers on AWS than there used to be. We have ECS, we have Fargate for ECS, we have Amazon EKS coming soon, and we have Fargate for EKS. So, tons of options, but that's really a good thing, because we want to be the best place for you to run your container workloads however you want to run them. Kubernetes? Great. ECS? Also great. Want to use something else? Sure. Part of the power here for you is in those choices: being able to choose what works best for your workload, what supports you, how you can run something that just works. And what Werner mentioned is, how can I get the minimum viable system with containers, how can I get something that just supports my business logic, where I have to do less and less of that undifferentiated heavy lifting. But the question that I'm always asking myself when I'm listening to these talks is: but how can I get started? How can I make this real? How can I choose the right tool for me? How can I put my fingers on a keyboard and leave this keynote right this second? But don't do that. Let's get moving.

To do that, let's rewind for a second and look at what we've built and why we've built it. The container space has been evolving for many years, from the orchestration to the automation to the layers themselves, and we've been running containers on EC2 from the very beginning. But what we heard from our customers is that there are some difficulties for them, some pain points; there was a lot of what we like to call heavy lifting. So we built ECS. And why would we build something like this? A managed platform for all of those pain points: cluster management, container orchestration, with that deep AWS integration that Werner mentioned, which is basically, how can I get those things that make a well-architected app, how can I have someone else help me with that, how can I get a shared responsibility model and still get things like auto scaling and load balancing and logging and monitoring. So that's what we mean when we talk about deep AWS integration, and ECS was a managed way for us to do that for you: run your containers on EC2 on AWS with a little bit less of that heavy lifting. And customers are not only doing amazing things with ECS, they're running mission-critical workloads. These are just a couple of them, and I'm actually really glad that the screen gets bigger and bigger every year, because I can fit more and more beautiful logos of all of you. From startups to enterprise companies, every imaginable sector on that slide is using ECS to run their applications.

So let's dig into a couple of those customers, because containers are for everyone. Financial enterprises like Capital One use ECS to run their containers behind Application Load Balancer, for a faster pace of development, to make things a little bit easier, and to separate out their services. And it's not just about big enterprises; it's for startups too. So some of you may be familiar with Segment: they're a marketing startup that collects data from sources like web apps and mobile apps, puts it through their system, and gets it out to places like business analytics tools and CRMs. So they collect all that data, but to do that, they process a ton of events: they handle more than 160 billion analytics events each and every month. That's, and I can't do on-stage math like Werner can, up to 270,000 events per second. So they have to do something right: they can't be sitting at their desks at Segment and deploying all of this by hand, because I think that might be a ton of work. So they use auto scaling to scale up to thousands of containers to handle that traffic when they need to handle extra traffic, and then to scale back down when they don't need that capacity anymore. And that breaks down to 25,000 tasks across over 300 services. But it's not just for event processing: they also run their ETL jobs on ECS. So they collect data, they run it through roughly 500,000 short-lived containers a day, and put it into places like Redshift and Postgres. That's 3.5 million containers spinning up and down a week, just for ETL. And what ECS is letting them do is focus on what really matters for them, and that's their application, because all that really matters, ultimately, the bottom line, is what can I deliver to my customers.
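The on-stage math above roughly checks out: 160 billion events a month averages out to around 60,000 events per second, so the 270,000/s figure is presumably a peak rate, and that gap between average and peak is exactly what the auto scaling has to absorb. A quick check:

```python
# Back-of-envelope check on Segment's quoted numbers.
events_per_month = 160e9
seconds_per_month = 30 * 24 * 3600  # ~2.59 million

average_rate = events_per_month / seconds_per_month
print(f"average: {average_rate:,.0f} events/s")
# Peaks several times the average (like the quoted 270,000/s) are why
# you scale containers out on demand rather than provisioning for peak.
```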
But ECS is not the only way to run containers in production. We have a lot of customers running Kubernetes on EC2, and we've thrown this one around a lot recently: more customers are running Kubernetes on AWS than absolutely anywhere else. One of those customers is Monzo. They're a mobile-only bank, they're based in the UK, and they've been cloud native from the very beginning: they're built with 350 microservices right on top of EC2. And for money, I feel like high availability is extra important. So, I live in New York, and please don't judge me, but if I want to be able to buy a rainbow cookie at two o'clock in the morning, my bank's planned downtime should not prevent me from my cookie. So they have high availability at every single level of their infrastructure, from multi-master to hundreds of workers, all backed by highly available etcd. But safety is important too, so they use Direct Connect to peer between their resources and third-party locations like payment providers, back to their AWS infrastructure. Plus, containers have let them scale their infrastructure with a really small and flexible team, so they've been able to use their infrastructure to their advantage: they started with 80 services in 2016, they grew to 167 in 2017, and in 2018 they're on track to have 350. So they've been able to keep their small start-up team and that flexible mentality, but use their infrastructure to help support their rate of growth. But they have a quote that's been passed around the internet a lot, I think, and it said that highly available Kubernetes was not for the faint of heart. And that was a theme for a lot of our customers: run Kubernetes for me, help me do this; this is a technology that I want to use, can I get a little help doing it? And that's what led us to create EKS. What's EKS? Managed Kubernetes for AWS.

And there's a common thread here that I think started with just running containers on EC2, that brought us ECS, that brought us to Kubernetes, that brought us EKS, and that's: run containers for me, and let me focus on the application. But what if I could ask for more? Containers in production can be really hard work; as Werner mentioned, you end up with lots of moving pieces, and we want you to focus on what really matters, and that's delivering a really great experience to your customers. Remove that heavy lifting: just run my application for me. Which brings us to Fargate, and the future. So Fargate isn't a service so much as an underlying technology that you can use to power your container workloads. That means no infrastructure: launch easily, scale quickly; just take your task definition or pod, drop it into Fargate, add some resources, and away you go.

But I want to make that future real, and I want it to be more than just a product announcement, so I'd like to see Fargate in action. I'd like to preface this by saying that this is not the flashiest demo that I've ever done. It used to be that I could come up here and create a VPC from scratch and be, well, look at how awesome I am, and it's not going to be like that; this is a real-life demo. What I want to do is take something that you all might do at work, deploy an app, and see Fargate in action. So I'm going to start with my tasks, waiting for my thing, great. So I'm in the Fargate first-run experience. I can add my container name, so this is pretty much very similar to the ECS experience; I add my resources, I'm adding a port mapping, I save it. Now I get to add my task definition: I add a name, I tell it where to look for my container, I add my load balancer, because high availability is important, go next, I save it. I wait a couple of seconds, because this is going to deploy now, and then I'm going to click over, and you can see that my app is out there in the wild. And since this is the Werner keynote, I'd just like to let everyone know that there are just about 11 hours until the replay party, and that's what really matters on Thursday, right?
replay party and that's what they're really matters on Thursday right so whoa I don't know who the artist is either so for gate for me is less about the how and more about the how well and all I want to do is I want to just run my workload nothing else should really matter to me don't worry me about setup or tuning or environment or images just run it for gate lets me focus solely on that workload my user experience and it lets AWS handle the heavy lifting and that's really powerful me and Verner mentioned right at the beginning of today about building systems that would support you in 2020 and that's what Fargate is how can I make the future not only real but easier how can I do how can I focus on what really matters delivering something really awesome to my customers how can I iterate faster how can I scale more easily how can I do less work on the infrastructure side and more on just the running inside that's my goal no setup no service discovery none of those little bits and pieces and I it's a shared responsibility right so I want someone else to worry about them I want someone else to know that they're working on all the internals and I want to be able to build with a new primitive just my containers but all I ask is that you build with a couple of things in mind secure scalable reliable that's it the future and I think we're rocketing there really fast with services like far gate focuses on units of work a function a job a container workload first full stop thank you [Applause] [Music] there [Music] amazing we do you think'll two speakers let me not tell you that there are no walks there are female engineers out there yeah at least I had no trouble finding them yeah so something attached on what we just said is that for me how does the future look like yeah it's all the code that you will ever write will just be business logic yeah if you use these many services and his managed environment that is the only code you will write in the future yeah CPU scaling 
will take care of lambda and ECS are your scaling will be done by time with DB and auto scaling reliability by lambda ACS dynamodb lower surface all of these components will take care of this for you so you can focus on the two treatments of cloud which is you only have to write business logic now surface the place in important role in this as well yeah this is really we no longer have to worry about any server management you have flexible scaling that you don't need to take care of yourself now you get automatic high availability of your functions out-of-the-box you never have to pay for i/o capacity and it is actually not just a tool the service for sort of the the young cool kids what we've seen is a immediate tremendous pickup by surface of large enterprises or large applications if you look at job doc so the business lead is very successful business that actually matches patients with doctors you see that lambda in their architecture is crucial there is only business logic in lambda there are no servers in this architecture iRobot the consumer robot company yeah they actually have a very strict cost component in mind because their customers only pay for the robot they don't pay for the cloud services that come with it and if you look at all the different components that they have in all of these yeah all the different functionality that they need to do for the robots they make use of as many managed services as possible and all their code is service because it means that they never have to pay for idle so if you pop out one of those the registration for example you can see the typical move yeah API gave a lambda Q's logging cloud watch all managed services in a delivers a Gera is a company that actually for most insurance companies bill delivers this software and tracking mechanisms for how well people drive and whether or not you can actually detect whether they are in accidents and so one in three vehicles in the u.s. 
All of this is real — all Lambdas, queues, and ML prediction engines; all the data functionality has been moved to using Lambda — and they actually make use of additional managed services such as Twilio to do phone calls for you. HomeAway is a very well-established older business that lets you rent vacation homes, and they take in about six million photos a month now, because all of these individuals who advertise their vacation homes make use of more and more high-quality imaging. As you can see in their image-ingestion pipeline, there is no server — it is only serverless functionality, both in terms of DynamoDB, S3, and the Lambdas they are using. Basically, data comes in — these days, very high-quality images get dumped into S3 — they turn that into a master image of about four thousand by three thousand pixels, and then they reprocess all of them into thumbnails and into other resolutions for different devices. They manage metadata streams for that, and everything there is managed — there is no server involved in this at all. So, some of the things I hope you keep in mind when you are building serverless applications: if you use more than one function, start making use of state machines and Step Functions to orchestrate them; no longer think about databases, but think only about tables and events as your interfaces; and manage the execution of the different versions of your serverless environment using the encrypted parameter store. And we've delivered quite a bit of technology for you in the past year. The Serverless Application Model, and the SAM Local that came with Lambda, allow you not only to take a declarative approach to laying out your serverless application, but also to run it on your laptop, on a plane, while you're actually debugging and working.
There's Lambda@Edge and Greengrass, support in API Gateway for usage plans to separate different usage groups, and I think the recent updates to Step Functions are very interesting, where you can actually update the state machine in real time without having to take it down. So we're doing a lot of good things in and around this world to continue to be the pioneer in this area, to deliver technologies to you that you cannot get anywhere else. If you really want to continue to be at the forefront of serverless, continue to check out how fast we are innovating in this space. You give us a lot of requests continuously: more languages, more memory, more functions, deeper integration with our security tools — and I'm happy to announce that today we are knocking off a few of these for you. You get API Gateway with VPC integration, meaning that you can put API Gateway in front of endpoints inside your VPCs. And by the way, did you see the other announcement that has happened, cross-region VPC peering? Did you check that one out? There you have completely encrypted tunnels between your different VPCs in different regions — and not only do we run encrypted tunnels there, we actually anonymize all the packets for you, so whoever would be inspecting these streams cannot even figure out what your IP addresses are, and things like that. These small things that we announce around here at re:Invent as well — they have a tremendous impact on how we build our systems. Better concurrency control, so you control how many function instances you run in parallel; 3GB memory support; and language support for .NET Core 2.0. And one more language — you asked, we delivered: I'm very happy that we now have Go support. I think many of you have been asking for this one, so why don't you go build and show us what you can do with it.
We also want to make sure that you can learn from each other — and not only from each other, but from people who already have quite a bit of experience in the world of serverless — so we are launching today something called the AWS Serverless Application Repository, a place where you and everybody else can put your serverless functions, where you can publish and deploy them and allow other people to reuse your functions and learn from them as well. Quite a few of our partners are putting their functions in there, like Splunk, and these functions are immediately ready for you to use and to be deployed in your world. So serverless plays an important role not only in systems development but also in and around areas such as IoT and machine learning, and many of our enterprise customers have built very extensive architectures with them. To talk about probably one of the most exciting customers that we have around the world, with a lot of imagery and a lot of very cool applications that they are building on AWS, I'd like to invite Dr. Walter Scott on stage, Senior Vice President and CTO of DigitalGlobe. Walter! [Music]

Thank you. You know, DigitalGlobe sits behind many aspects of our mobile digital lives. If you've ever looked up directions, or called for an Uber, or searched for a trailhead, you've used DigitalGlobe imagery or information that was derived from it. But it's more than just enabling our mobile digital lives; it's about helping make the world a better place — whether it is helping find a boat in the middle of the ocean that let reporters, and then authorities, release over 2,000 slaves who were involved in fishing in the South Pacific; or mapping villages in Africa to find where people were, to deliver the right amount of vaccine to the right number of people at the right time; or providing first responders with accurate and up-to-date information before, during, and after a natural disaster like the devastating fires that went through Santa Rosa, California just a little over a month ago. Now, there's a lot of information in one of these high-resolution satellite images. Looking overhead you can see cars, manhole covers, street markings — but it's a really big planet, and that same amount of information is present over every square inch of it. So our satellites collect 80 terabytes of that information every day and downlink it. Think of that as data boulders from space, big enough to crush just about any IT department on the planet. Well, back in the day, before AWS, we built our own data center to store all hundred petabytes of our imagery, but it was stuck in jail. So we decided we were going to go all in on AWS, and let me talk to you about three ways in which AWS enabled us to build the digital globe, which is a living digital inventory of the surface of the planet. The first step was: how do we get our data out of jail? Remember this from last year? It's a Snowmobile. DigitalGlobe was the first customer for Snowmobile, the big data center on wheels. Here you can see a selfie of our Snowmobile at our Colorado headquarters, taken from space.
We moved 17 years' worth of data to the Amazon cloud in a single, very cost-effective operation, moving it into two separate regions and into Glacier — a huge step up from our legacy tape-plus-disk. But in today's digital age, who doesn't expect instantaneous gratification? And the problem was that Glacier was very cost-effective but didn't provide instantaneous access. That leads me to the second challenge: how do we provide on-demand access to our hundred-petabyte library while still managing costs and keeping most of the data in Glacier? So let's take a look. Our satellites collect imagery over the entire planet, and our first step was to say, well, we'll take maybe the last hundred and eighty days and do age-based caching. The blue dots represent what was stored in the cache — which, by the way, is in S3, which is obviously very performant. But there were two problems: problem number one, it's a big cache, and problem number two, it turns out it still missed a lot. Let's take a look at the customer access patterns. Over the last few years we have moved from viewing imagery to analyzing imagery, and the net result was that the hit rate went from about 70 percent down to 40 percent, which is a horrible cache hit rate. If you take a deeper look, this is the pattern that occurred over the course of several days, so it's highly variable — the orange dots are showing you what the accesses are, and the purple line is the hit rate. It was diverse, and it also included both current and historical data, because a lot of the accesses were for trending. So we turned to machine learning — enter SageMaker, machine learning as a service — and we applied SageMaker to our caching problem. Let's take a closer look. Imagine you're trying to map villages in Africa — the vaccine example that I was using. Those villages appear within large image strips that are hundreds of gigabytes, hundreds of square kilometers. So we needed to train the caching algorithm to find relevance in customer access patterns: are people looking for something in the same image, or in images nearby? Can we predict, based on the usage patterns, where the next access is likely to be, and preload that from Glacier before it's even needed? Well, the net result was yes, and we were able to get our cache hit rate up by more than a factor of two — in fact it's at 83%, trending to 90%. The cool thing was who did it: literally four infrastructure engineers, total noobs in machine learning, and in one week we were able to achieve the result that I just described — all enabled by SageMaker. So now let's move on to the third problem that AWS allowed us to solve, which is extracting information out of that huge image library, and doing it at scale. A hundred petabytes of data, with 80 terabytes more per day: if you had one human looking at one day's collection, trying to pull out one kind of feature — like the cars or the buildings or the roads — it would take that person about 85 years, and they'd go blind in the process. Not a great answer. So we introduced something called GBDX, our geospatial big data platform. It is geospatial analytics as a service — a live service designed to convert that unstructured imagery into structured, analyzable data, pulling out various kinds of features in the imagery. Now, we don't try to do it all ourselves; we enable an ecosystem of developers, and to make it super easy for those developers we introduced GBDX Notebooks, built on top of the open-source Jupyter notebooks framework, which is probably familiar to most of you in the data science community, and super easy to use. It provides access to a number of machine learning patterns that might be the same as, or close to, what you are interested in doing; it provides you the ability to modify them, to select the data you want to operate on, to train your model, and then to operate your model at scale by leveraging the tools underneath, including SageMaker.
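The caching idea Walter describes — predict that accesses cluster spatially and prefetch nearby imagery from Glacier before it is requested — can be illustrated with a toy sketch. The "model" below is a trivial neighbor heuristic standing in for the SageMaker-trained model DigitalGlobe actually used; tile coordinates and cache behavior are assumptions for illustration.

```python
# Toy predictive cache: on every access, warm the cache with neighboring
# tiles, on the assumption that nearby imagery is likely to be read next.
class PredictiveTileCache:
    def __init__(self):
        self.cache = set()
        self.hits = 0
        self.misses = 0

    def _neighbors(self, tile):
        x, y = tile
        return [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]

    def access(self, tile):
        if tile in self.cache:
            self.hits += 1
        else:
            self.misses += 1  # in the real system: a slow Glacier restore
        # "Prediction": prefetch nearby tiles before they are asked for.
        self.cache.update(self._neighbors(tile))

    def hit_rate(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```

A workload that walks across adjacent tiles hits this cache almost every time, whereas a pure age-based cache would gain nothing from the spatial locality — which is the qualitative jump from 40% to 83% described above.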
By the way, I encourage you to try this out — there's a link at the end of the presentation that will take you to an opportunity to check it out. So combining those components enabled us to solve what are basically three hard problems. Number one, satellite imagery is pretty hard to deal with — it's not what most of us are used to. Machine learning is hard. And operating at scale is hard. AWS enabled us to do all three. And when I say scale, I mean continent scale — like the continent of Australia. It's a big place: seven and a half million square kilometers, 20 million buildings. If you wanted to map Australia using traditional techniques, at the level of detail that is needed in the modern digital world, it would be totally unaffordable — or you might be able to do it every five years. We had the challenge of doing it as a continuously updated service. So we took 200 terabytes of imagery collected over Australia and worked with our partner PSMA Australia to create Geoscape, which is a continuously updated service of the buildings, the roads, the roof materials, the roof heights, the solar panels, the swimming pools, the tree canopies, the heights of the tree canopies — the world that we live in, work in, and play in, made digital. Now let me give you two examples of how this is being used. The first is telecom. As telecom moves to 5G — none of us likes dropped calls — unfortunately 5G is blocked by a lot more than the older telecom generations were. Like, for example, trees. Who thought to map trees? Who thought that was actually important to have in a map? Well, for 5G telecom it turns out that's critical if you want to be able to do tower placement without guesswork. So that's a great example of how GBDX and Geoscape let you save time and resources. How about saving lives? Another example is in the Outback in Australia, where the typical property is huge, and the address is a dot in the middle of the property that says nothing about where the buildings are or where the people are. Well, in 2015 Australia was suffering from wildfires across a large portion of the continent, and think about what a first responder has to do: they have to figure out how to evacuate people, and what the structures are that have to be protected. If they're guessing, that's time, and in a wildfire, time equals risk to life. Geoscape takes the guesswork out of that sort of first response. This has only been possible because of the image library at DigitalGlobe and GBDX, powered by the scale and ease of use of AWS — and the possibilities are endless. Imagine what you could do. Or actually, better: we have a challenge for you, GBDX for Sustainability. DigitalGlobe and AWS want you to join us in an experiment with purpose. Come forward with ideas for solutions using machine learning toward the UN Sustainable Development Goals — like, for example, good health and well-being. The example I used earlier about vaccines in the developing world was actually a machine learning example: we used machine learning combined with crowdsourcing to figure out where the villages were and estimate the number of people in them. We encourage you to submit; the winning proposals will get access to GBDX and the 100-petabyte library, and the results will be announced in the late spring. Let's all work together to see a better world. Thank you very much. [Applause] [Music]

An amazing challenge — I urge you all to take part in it. The GBDX platform is amazing, and you get access to the whole hundred petabytes of data. So machine learning, as you can see, has impact across the board. Within AWS, you've probably seen Macie as well as GuardDuty, both security services driven by machine learning, where GuardDuty looks at the normal access patterns to your accounts and then uses the models it has created to start looking for anomalies. So we see machine learning being applied everywhere.
Here is one of those simple building-block patterns — putting things in a row: you take your Twitter stream, push it into Kinesis, use Translate, push it into Comprehend, store it in S3, use Athena, and then push it into QuickSight for a quick demonstration of what the international discussions are about a particular Twitter topic you're looking for. This is just building blocks put next to each other; you have to write almost no business logic at all. If you go to the builders' area, the Quad, you will see a 737 simulator there. Behind that sit a lot of technologies. On one side is the hardware interface — the simulator actually has the unique capabilities of a real flight simulator — but on the other side it is all machine learning. Amazon Lex and Polly will talk to the pilots, and the machine learning is used to give the pilot advice about the best actions to take — or basically, it will tell you whether you're crashing the airplane or not. Very crucial advice, I would like to think. Again, all of this is services: there's a training data flow — data flows in, Lambda puts it into DynamoDB and into Firehose — and then you have a whole set of training models, both MXNet and TensorFlow, as well as some random forest stuff, in EC2 containers, to create the networks. And once you've created those, you need to deploy them: the simulator actually has an NVIDIA Volta board on board, so you can deploy models to the board locally. You used to have to do all this work for that — and let me tell you, if you use SageMaker, all of that goes away: all of the deployment, all the management, all of the complexity you had before to manage your models, to train your models, and then to load them onto hardware, wherever that hardware is — it's all gone.
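The Twitter-to-QuickSight chain described above can be mocked locally to show the wiring pattern. The stage functions below are crude placeholders for Amazon Translate and Comprehend — their logic is invented for illustration and is not how those services work; the point is that the application is just small stages composed in a row, with essentially no business logic of your own.

```python
# Local mock of the building-block pipeline: tweets -> Kinesis ->
# Translate -> Comprehend -> S3 -> Athena -> QuickSight.
def translate(record):          # placeholder for Amazon Translate
    return {**record, "text_en": record["text"].replace("hola", "hello")}

def comprehend(record):         # placeholder for Amazon Comprehend
    positive = "hello" in record["text_en"]
    return {**record, "sentiment": "POSITIVE" if positive else "NEUTRAL"}

def pipeline(stream):
    stages = [translate, comprehend]
    for record in stream:       # Kinesis would deliver these in batches
        for stage in stages:
            record = stage(record)
        yield record            # next stop: S3, then Athena / QuickSight

results = list(pipeline([{"text": "hola mundo"}, {"text": "adios"}]))
```

Swap each placeholder for the corresponding managed-service call and the structure of the program does not change — which is the point Werner is making.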
Last year I was fortunate to have Trainline on stage, a very cool company out of the UK who basically sell you train tickets. Now that they are all in on AWS, they have started doing all sorts of experiments with the data they have around train tickets. They made use of machine learning to start giving customers advice about the right moment to buy their train tickets, to drive the cost down for them. It seems to be extremely effective — you see all sorts of reports in the UK press about how well they're serving their customers using these machine learning models. One area where I think there's a deep integration between machine learning and the real world is machine learning and AI in IoT — where you have lots of sensors around your environment that create new data streams, and in response to those sensor streams you can execute your machine learning models. And there is probably no better person to demonstrate this to us than Mati Kochavi, the CEO and co-founder of AGT and HEED, who will give us an amazing demo of sensors, machine learning, and real humans. [Applause] [Music]

Stories. People have been telling stories since our early days. Stories are part of our ancestors' ancient society and of our modern society. The person who was painting on the wall of the cave and the person who is using the most sophisticated software machine to create an image are both members of the same tribe: the tribe of storytellers. This tribe of storytellers is on an endless search for telling stories in new ways. Today we all join this journey. Ladies and gentlemen — this is sports today: we're here with UFC superstars Edson "Junior" Barboza and Martin "Bonecrusher" Kasey; it's early in round 3 of 5 here in Las Vegas; a nice flurry of punches from Kasey. And this is sports tomorrow: [Applause] HEED's deep analysis ranks these fighters as two of the most explosive fighters in the UFC.
A big fancy right from Barboza — his sensor measures the force at 7.2 of 10. He has Kasey wavering. Kasey left-cheek impact detected at 8 of 10 — a hard punch. Kasey's agitation is higher than the average for that impact. He's taunting Barboza. Barboza's all business — don't get confused by that smile; HEED's emotional analysis shows a different perspective: Barboza is confident, while Kasey is passive. The Octagon floor sensor measures Barboza's energy up 18%, Kasey down 10%. All locked up here — Kasey's aggression index is rising fast. Kasey's back up, he's got his leg. The global audience shifts to Kasey — England's with Kasey, while Brazil is pulling hard for their Barboza; the local audience stays with Barboza. Both are strong, they are bringing it — this is tremendous, both fighters are gaining confidence. Wow, double 360 from Barboza, Kasey flops — oh, what a kick by Kasey! Spectacular! Now for the inside story with HEED, as the numbers begin to add up: Barboza's punishment index tops out at 8.9, Kasey's aggression index is all the way at 8.7, and historical fight analytics say it's an even fight. You heard it from HEED — invite your friends, we're in for an exciting night.

What an amazing fight. Barboza is from Brazil, Kasey is from England. They just performed a scripted fight, but they are about the real thing: those guys stand in the Octagon in the toughest, most demanding sport in the world, the UFC. What you also saw is how we want to tell sports. What you've seen right now is two minutes of a UFC fight where 70 new insights were introduced around that fight. Those insights cover entire aspects of the fight of Kasey and Barboza: they cover their passion, they cover the power of the fight, the resiliency, the strategy — all those things that happen in the Octagon — and we tell the story of sports that way. Sports can be told in real time with real data, real information, real insights, and real emotions. That's what you saw on the big screens, and we're also going to show you how it's going to look today on the smartphone. We are a company which is trying to revolutionize the way we tell sports and live events. So first, who are we? We are a partnership between two companies: AGT International, which is my company, which has been building such solutions for industry; and Endeavor, which is a WME-IMG company headed by Ari — Ari, will you please stand up — Ari Emanuel, one of the well-known people in the industry. WME-IMG is the largest entertainment, sports, and fashion organization in the world. Together we envisioned this story of how to tell sports and entertainment with the Internet of Things. So how do we create 70 insights in two minutes — insights that have not been seen before? It all starts with this: it starts with the data that we collect from sensors. If you look up there at what we have right now, we have a camera with very smart analytics on the edge. The mat that you see over here is a very smart mat: it can measure movement, analyze pressure, analyze so many things. This mat and the camera are connected to the same center.
We also have the glove, which is a smart glove, and inside this glove we have this sensor. This sensor alone can create about 12 different stories, from the strength of the punch all the way to the impact the punch had on the other fighter — very sophisticated stuff. The glove is connected to the mat, which is connected to the camera. We are agnostic to sensors. Our philosophy: a sensor is a storyteller, and it's a great storyteller if it has very good analytics connected to it. So we deal with any sensors — we can work with thousands of sensors. Here, in this UFC fight, we covered the cage with audio and video and the things that I just described. Now, all these sensors create data which flows up into what we call our world graph. The world graph is the centerpiece of our product; it's where everything happens. The world graph is the semantic representation of the physical world and the entities participating in it. The physical world over here was the UFC fight, and the entities were, of course, the two fighters, Kasey and Barboza. Usually the other entities are going to be the cornermen — some of us think of them as coaches; each team has about three cornermen — and the referee, and of course the physical entities such as the Octagon, and an abstract entity, which is the fight itself. All of those are entities on the world graph. The world graph knows everything about those entities: their properties, their dynamics, their complex behavior. The world graph also knows about the relationships between the entities, because we want to tell a story — we are not about the statistics of the fight, we are about the story of the fight. So the world graph knows things like the connection between a fighter and his coach. The world graph also covers the historical behavior of the fighters, so we know what they've done in the past. So what we have on the world graph, in real time, is the information that comes from the sensors describing the fight, plus the historical data. On top of the world graph we have our AI agent. Here's the way I want you to think about it: think about a robot that sits, right now, at every sports event in the world; that robot is connected to sensors, as I just told you, and that robot has one mission — to provide each one of you with what you want to know about the sports event, even if you are not there. So the AI agent that runs on top of the world graph is looking at the world graph and asking questions — what's interesting, what's important, and what's fun — so it can provide this information in a cool way to my audience and to my friends. That's what the AI agent is doing — an extremely sophisticated agent. And it also knows what each one of us would like to get: it might be that I would like the emotional part of the fight and someone else wants to know about the physical part. So it's also going to understand what each one of us wants, and pick, from the fight that we saw and from the 50 to 70 analytics, what to send to the relevant people, whatever they want to see. Now, how does this work? I'm going to show you this specifically, because this is very complicated, comprehensive information — think of such an intense fight in such a small Octagon, and being able to identify all those great emotions and great things. That's the reason why in our company we have more than 100 PhDs who are data scientists, the same number of developers, and we have artists and visual people — because we believe that the way to tell a story right now is the combination of science and technology and art, to tell a great story to all of us: the story of this UFC fight. So, you saw the moment in the fight where Kasey was taunting Barboza — of course it was scripted, but it's based on a real fight.
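The "world graph" as Kochavi describes it — entities with properties, relationships between them, and live updates layered on historical state — can be sketched minimally as below. Class, entity, and field names are illustrative assumptions, not HEED's actual schema.

```python
# Minimal world-graph sketch: entities, relations, and an append-only
# history of property updates flowing in from sensors.
class WorldGraph:
    def __init__(self):
        self.entities = {}   # name -> dict of current properties
        self.relations = []  # (subject, relation, object) triples
        self.history = []    # (name, property, previous value) log

    def add_entity(self, name, **props):
        self.entities[name] = props

    def relate(self, subj, relation, obj):
        self.relations.append((subj, relation, obj))

    def update(self, name, prop, value):
        self.history.append((name, prop, self.entities[name].get(prop)))
        self.entities[name][prop] = value

    def related(self, subj, relation):
        return [o for s, r, o in self.relations if s == subj and r == relation]

g = WorldGraph()
g.add_entity("fighter_a", energy=100)
g.add_entity("cornerman_a", stress="low")
g.relate("cornerman_a", "coaches", "fighter_a")
g.update("fighter_a", "energy", 90)  # a sensor reading flows into the graph
```

The relation triples are what let the agent reason across entities — a worried cornerman saying something about his fighter — rather than over isolated statistics.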
In that moment something else happened: Kasey got a very strong punch from Barboza. The AI, which is looking at the world graph, identified that the punch was in the top 10% for that weight class. The AI knows that because the sensor on the glove is connected to machine learning and has already seen thousands and thousands of fights; the glove tells the AI this was a unique punch, in the top 10%. At that moment, the AI begins to trigger the story, and the first thing it's going to do is tell all of you watching the fight — or maybe you're at home, not watching the fight — hey, by the way, I know you like UFC: right now there was a unique punch that Barboza gave Kasey. But another thing is going to happen: it's going to keep searching and looking for more interesting information, and what it's going to look for is the impact of the punch. Eight out of ten is a strong punch, but what impact did it have on Kasey? The world graph will inform the AI agent — the robot sitting and watching the fight for us — that the impact was eight out of ten, where ten is a knockout. Now, a knockout is a different number for each one of us — my ten is definitely not their ten — but eight out of ten is a very high number. At this point the world graph knows that Kasey is under stress. It starts to ignore Kasey's taunting — it doesn't look at that anymore, because it knows he's under stress — but it keeps checking more information. It finds out that Kasey's energy went down ten percent — all real information. It is also going to look at the coach, the cornerman. The cornerman is participating in the fight; the cornerman has emotions, the cornerman has feelings, so when the cornerman is worried, it means something about his fighter. We used three different analytics here: physiological analytics, video analytics, and audio analytics.
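The "top 10% punch" trigger described above amounts to a percentile check against historical readings. The sketch below is a hedged illustration — the threshold and the idea of a flat list of forces are assumptions; in HEED's system the comparison is per weight class and learned from thousands of recorded fights.

```python
# Story trigger: fire when a punch force beats at least 90% of the
# historical readings it is compared against.
def is_top_decile(force, historical_forces):
    """True if `force` exceeds at least 90% of historical readings."""
    beaten = sum(1 for f in historical_forces if f < force)
    return beaten >= 0.9 * len(historical_forces)
```

In the live pipeline, a True here is what starts the chain of follow-up queries against the world graph (impact, energy, cornerman stress) that Kochavi walks through next.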
Why audio analytics? Because we can see that the verbal activity of the coach toward Kasey increases relative to historical data. So right now the world graph knows for sure that Kasey is under stress, because it got it from the fighter and it got it from the cornerman — and it got it from someone else too, by the way: we also have family members over here watching the fight. Family members, can you stand up for a second? OK — they also have sensors. Thank you. They have sensors because they are part of the fight: they want to say how they feel about the fight, and when they get nervous, we know it. And by the way, we also do the same thing you saw: we find out what people feel in Brazil and what people feel in England about the fighters. So right now the whole world is connected to this fight. What happens next is that the cornerman is also under stress. The AI agent now knows clearly that Kasey is manipulating the fight: he actually took a very tough punch, and the AI is able to measure something that you should never measure in Las Vegas — it is able to measure the poker face. It is able to detect that Kasey was playing it. And you know what happened to Barboza? He fell for it. Because Barboza, in 85% of past fights, every time he feels he can take his opponent down, he attacks him — but here Barboza chose to stay away. He believed the taunting; he didn't believe the data that we have. With AI, we can tell our fans and the people who are watching the fight what's really happening. Now, how do we tell it? I've been talking about it for four minutes now, and that is really long — we know no one watches anymore after four minutes; information videos are going down to less than a minute. So how do we tell it? We take all this data that I just talked about for four minutes, and we visualize it in the way people look at the world: we visualize it through avatars.
us in the UFC by the way in other sports were using different visualization the avatars is the best way for us to be able to take a lot of data and to summarize it in real time now when I tell something which is very important the avatar is connected to sensors so the avatars over here are telling us the real story they not just we shall think they are the real thing and sometimes the avatar is going to show us that Wojcik ASD is tenting is basically in tough situation the avatar will be the real story and this is the way we're gonna tell you the story if you cannot watch the fighting you don't have time to watch the fight and if you don't have time to see the data that we are sending you you can be connected to jkz and you can be connected to Barbosa through their avatars and you forget it they have a fight but you might see right now suddenly on your phone in Avatar comes in and tells you jackass is in trouble or Barbosa is winning or Jack Hayes isn't great in great shape or both of them are winning and when we talk about winning you can't win without having great partners and the great partners that we have for this is Amazon all this extremely complicated situation a process that I just described to you and by the way today's we are talking over here Donna I think that we have two fights I think that wave - excuse me wave today - basketball games in Europe because the entire European basketball league which is the NBA of Europe not as good as the NBA of America but that's what we got the NBA of Europe is right now all on our IOT grid and all the players over there are connected to sensors and we are testing it right now as we are talking to you because on January in February next year all of this is going to be on basketball in the for the Euroleague so all this process that I just described you is on the Amazon which what's one thing that can say about them we are bidding over here a very complicated process of how to build storytelling in Amazon is working 
with us not as a giant, but also as a start-up: they are a start-up that works as a giant with us on what we are doing. Ladies and gentlemen, we are focused on telling sports and live events in a new way, on bringing it to you in real time, on bringing the same energy that you have in the game right into the information that you get. And all of this is in order to serve you. I hope you're going to be on our platform and really enjoy it. Thank you very much. [Applause]

Thank you. [Music] So, originally I was supposed to fight; however, they couldn't find another fighter in my weight class willing to do this. Yeah, all of this: hardcore IoT, analytics, machine learning, all of the important tools that you've also seen yesterday in Andy's keynote. Also in yesterday's keynote was DeepLens, the new device that we make available for you so you can play with machine learning. And I'd like to invite my good buddy Dr. Matt Wood back on stage, who was with me in the first keynote in 2012 as well, to talk about all things machine learning. Matt! [Applause] [Music]

Thank you, Werner, and good morning, everybody. So, I'm going to talk a little bit about machine learning, and particularly about machine learning moving to the edge. We've seen a couple of trends over the last couple of years when we talk to customers about their machine learning needs. The first is that a huge amount of machine learning is now happening in the cloud, both with application services such as Rekognition Video, Transcribe, Comprehend, and Translate, but also with customers building custom models using training, tuning, and inference, which is making predictions against pre-trained models. The second trend that we've seen is around moving that inference, taking trained models and deploying them into connected devices to run them at the edge. Customers do that today in a number of different ways, for example just by putting the model on the device and burning
it into the silicon, or by using services such as Lambda@Edge. This is a very useful way of embedding intelligence where a round trip to the cloud has prohibitive latency, or where these devices may need to operate in disconnected environments: you still want that intelligence to operate on the device even when it's disconnected from the cloud. But a more common trend is to bridge these two worlds, to take devices that are running inference and integrate them with the cloud. This allows you to put some level of intelligence down on the device, effectively the most sophisticated model that you can get away with given the hardware that you're running on, and let it integrate back with the cloud.

Now, this is actually the exact approach that we take today with Echo and Alexa. The Echo device runs some speech models locally, and there we're detecting the wake word: Alexa, Computer, Amazon. You need three syllables to be able to do it reliably. But as soon as we hear that wake word, the Echo device starts to stream the audio back to the cloud, where we can run much more sophisticated models. When we were designing Echo, the idea really came out of a brainstorming effort where we were thinking: what could we build with the infinite amount of compute that we have in the cloud, paired with some sophistication down on the device? And this is the exact same approach that we take with DeepLens. With DeepLens we're able to run sophisticated vision models down on the device and integrate with more sophisticated models that run in the cloud.

So I thought I'd give you a deeper dive into the technology and the innovations that live inside the DeepLens device. Let's take a closer look. This is the DeepLens device. We announced it yesterday; it's the world's first deep-learning-enabled wireless video camera for developers. Its specific task is to help developers hone their machine learning skills.
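The two-stage Echo pattern described here, a tiny always-on check for the wake word on the device, with everything after it streamed to heavier models in the cloud, can be sketched roughly as follows. The function names and the token-stream simplification are illustrative, not the actual Alexa implementation:

```python
# Toy sketch of the two-stage Echo pattern: a small on-device check
# watches only for the wake word; everything after the wake word is
# forwarded to the (simulated) cloud for full processing.
# All names here are illustrative -- this is not the real Alexa code.

WAKE_WORDS = {"alexa", "computer", "amazon"}  # the three supported wake words

def run_device(token_stream, send_to_cloud):
    """Scan a stream of audio tokens; once a wake word is heard,
    forward the rest of the utterance to the cloud handler."""
    streaming = False
    buffered = []
    for token in token_stream:
        if not streaming:
            if token.lower() in WAKE_WORDS:   # cheap on-device check
                streaming = True              # start streaming to the cloud
        else:
            buffered.append(token)            # heavy models run cloud-side
    if buffered:
        send_to_cloud(buffered)
    return buffered

# Nothing before the wake word ever leaves the device:
heard = run_device("play some music alexa what time is it".split(), print)
```

The design point is the same one Matt describes: the device-side model only needs to be good at one tiny task, and the cloud handles everything else.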
But under the hood, it's actually a reference architecture for this hybrid world, where we have some sophistication running on the device and integration with more sophisticated models up in the cloud. So let's take a look inside. This is the data flow inside the device. We have a custom chip in there which has three main components: an image processing unit, a CPU, and a GPU. We take the high-definition 1080p H.264 feed off the camera and send it into the image processing unit, and that splits the feed into two parts (we call it MJPEG-ification): now we have an H.264 feed and JPEG frames that we can run inference against. Those frames get passed off to the CPU, which is running Greengrass, which allows us to run Lambda functions. And Lambda, with a single line of code, can pass the input image onto the model which is running on the GPU, and it's able to do this in real time. It takes the images and passes them to the GPU, the GPU performs inference against them in a high-performance way (we're running at over a hundred gigaflops on the device), and then passes the results back to Lambda. And then Lambda can do anything with the results; it's just a normal Lambda function after that. So for example, the GPU can process a frame, identify a face, and pass back "yes, Lambda, I've seen a face", and only at that point does Lambda say "OK, I've seen a face, now I'll connect back to the cloud." Because we run on Greengrass, we're able to connect back, using secure short-term certificates, to the rest of the AWS fleet. So we can then invoke Lambda functions on the backend on AWS, we can send messages through to AWS IoT, we can even initiate text messages or push notifications through Amazon SNS, put images in S3, write items to DynamoDB; the list goes on and on. In addition to that, the device can emit a Kinesis video stream. This is another service that we announced yesterday, which provides real-time video feeds coming onto AWS.
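The gating behavior in this data flow, inference stays local and the device only connects back to AWS once something is detected, can be sketched as follows. The stub model, the `handler` signature, and the `publish` callback are stand-ins, not the real Greengrass or DeepLens APIs:

```python
# Minimal sketch of the DeepLens gating pattern: a Lambda-style handler
# receives frames, runs local inference, and only "phones home" when
# something interesting (here, a face) is detected.  On a real device the
# inference runs on the GPU via Greengrass, and the publish target would
# be AWS IoT, SNS, S3, or DynamoDB.

def fake_face_model(frame):
    """Stand-in for the on-device model: returns (label, confidence)."""
    return ("face", 0.92) if "face" in frame else ("background", 0.97)

def handler(frames, publish, model=fake_face_model, threshold=0.5):
    """Run inference on every frame locally; publish only positive hits."""
    published = []
    for frame in frames:
        label, confidence = model(frame)          # inference stays on-device
        if label == "face" and confidence >= threshold:
            message = {"label": label, "confidence": confidence}
            publish(message)                      # one small message, not video
            published.append(message)
    return published

# Three frames arrive; only the one containing a face triggers a message.
hits = handler(["empty", "face_frame", "empty"], publish=print)
```

Note that the round trip to the cloud carries only a tiny result message, not the video itself, which is exactly why this pattern works in high-latency or mostly-disconnected environments.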
You can send that real-time video feed, both the H.264 feed and the results of any inference or image manipulation you've done down on the device, back to Rekognition Video. Kinesis Video Streams integrates natively with Rekognition Video, which has much more sophisticated facial analysis, activity detection, content detection, even celebrity detection, all running in a fully managed service up on AWS. So just from a single device, because we have this power on the device and the connectivity back to AWS, we're able to build this hybrid mode of running intelligence down on the device.

Now let's zoom out one more time: what happens when you want to deploy a new model down onto the device? Well, as we talked about yesterday, you can use SageMaker. You take all of your data, which is stored in S3, put it into your model, train and optimize the model, and then as soon as you're ready and you've got a good result, you just click a button and we send it over the air down to the device. Now, there is a little bit of smarts here: we don't just send the raw model. We actually process and optimize the model in the background for you before we send it down to the device. We do a couple of things. The first thing is that we translate the neural network model that you've trained into the right format to run on the optimized inference engine down on the device. The second thing we do is pruning: we remove all the neural network pieces that you don't need to run inference in a high-performance way. And the third step sounds awesome: it's quantization. With quantization, you're able to take advantage of a neural network's ability to filter out and ignore noise in its signal. This is how image processing works: you're able to look at an image and, despite the noise in the image, anything that might be in the way, or the fact that it's upside down, you can still identify an object inside it, because the neural network is tolerant to the noise. And we can take advantage of that.
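The quantization step can be illustrated in plain Python; here, the `struct` module's half-float format plays the role of the 16-bit inference precision, and the logits are made-up numbers, not output from any real model:

```python
# Sketch of the quantization idea: reducing inference precision from
# 32-bit to 16-bit floats perturbs the numbers slightly, but a network's
# output ranking (which class wins) is tolerant of that noise.
import struct

def to_fp16(x):
    """Round-trip a float through IEEE 16-bit (half) precision."""
    return struct.unpack("e", struct.pack("e", x))[0]

def argmax(scores):
    return max(range(len(scores)), key=lambda i: scores[i])

# Hypothetical class scores from a model, in full precision.
logits = [0.123456789, 0.987654321, 0.456789123]
quantized = [to_fp16(v) for v in logits]

# The individual values lose precision...
print(logits[0], quantized[0])
# ...but the prediction -- the winning class -- is unchanged.
print(argmax(logits) == argmax(quantized))   # True
```

This is why the precision reduction is essentially free for inference: the prediction survives even though every individual weight and activation is slightly off.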
By reducing the precision of the inference, we improve the performance: we can move from 32-bit precision to 16-bit precision down on the device, still have the inference work, and get a good-quality result. So that allows us to deploy over the air, using Greengrass, and push down to the device. Now, this is great for learning, because you can iterate really, really quickly; you don't have to worry about the quantization and the pruning, it's all done for you.

The next thing you can do is a cutting-edge area of research. It's kind of cool that with a device that is only $249 you can still take advantage of cutting-edge research, and that's something people call human-in-the-loop learning. One of the approaches you can take, when you get a low-confidence result back from your model (let's say you're trying to detect a cat, and you only get a confidence of 0.4 on a range of 0 to 1), is to collect up all the images that your model is doing poorly on. You can store those in S3 and write items to DynamoDB, and then you can pass those images back to real humans so they can annotate the data more effectively. This is human-in-the-loop learning, and it allows you to iteratively improve the data sets you are using to produce your model, using human intelligence. We have a service for this that we call Mechanical Turk, which you can take advantage of today, or you can use experts internal to your domain. Once you've got their human annotations, you can pass those back up into AWS and improve the quality of your data. Then, because SageMaker is so easy to use, you can just throw those back into your model, click the button, and deploy it back to your device. So this is a very capable way of running machine learning at the edge, which we think is an awesome way to get fast iteration and immediate gratification, as well as to explore some of the more advanced concepts of running machine learning in the cloud and at the edge and integrating it with real humans.
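The low-confidence triage at the heart of this human-in-the-loop step can be sketched as follows; the 0.5 threshold and the record format are illustrative choices, not a specific AWS API:

```python
# Sketch of the human-in-the-loop step: predictions the model is unsure
# about are routed to a review queue for human annotation, and the
# corrected labels feed the next training run.

def triage(predictions, threshold=0.5):
    """Split predictions into confident results and a human-review queue."""
    confident, needs_review = [], []
    for item in predictions:
        if item["confidence"] < threshold:
            needs_review.append(item)     # send to human annotators
        else:
            confident.append(item)        # trust the model's answer
    return confident, needs_review

preds = [
    {"image": "img1.jpg", "label": "cat", "confidence": 0.93},
    {"image": "img2.jpg", "label": "cat", "confidence": 0.40},  # low confidence
    {"image": "img3.jpg", "label": "dog", "confidence": 0.88},
]
confident, needs_review = triage(preds)
# img2.jpg goes to human annotators (e.g. Mechanical Turk workers);
# their labels are added to the training set and the model is retrained.
print([p["image"] for p in needs_review])   # ['img2.jpg']
```

Each pass around this loop (predict, triage, annotate, retrain, redeploy) improves the data set, which is why spinning the loop faster matters.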
The faster you can spin this loop, the better it is for your learning, both for yourself as you hone your skills and for the learning of your models. Now, we were very lucky on this project to work very closely with our friends over at Intel, and so, to talk a little bit more about our partnership, I'd like to welcome on stage Naveen Rao, who's the CVP for artificial intelligence at Intel. Naveen! [Music]

Hey, Naveen, good to see you. Thanks for coming. So, everyone knows we're not going to be fighting; they've taken the mat away. No one wants to see that. When we started thinking about the DeepLens project, we quickly arrived at the idea that we wanted to use the Intel Atom processor, so we need you to tell us all about what that is and why it's so awesome.

Well, first off, it's a rock-solid platform to start from. Atom is used in a variety of different endpoint applications, so we could leverage the investments that have been made across many different types of applications for this project. It's also a great computer-vision platform: as Matt mentioned, it has a video codec on it, so we can actually encode video natively in hardware. It also has a GPU, and it has standard Intel architecture support, which makes it very easy to move these cloud-based applications, like the Lambda functions, onto this device. In addition to that, we provided a lot of optimized libraries, including a compile-time library called clDNN, which allowed us to stay within the power budget that was allotted by you guys and actually build the device that we needed.

Yeah, it's been a great partnership. So, as we start to look forward, can you tell us a little bit about your vision for machine learning at the edge?

Yes. It's still really in its infancy in a lot of ways, and I think we're starting to see some applications come out there. These kinds of devices are
very exciting for us, because they really do start linking the data center with the edge. We're seeing a really insatiable demand for data-center AI, and now it's moving out to the edge, where we can actually interact with the world. IoT devices are great because they allow us to monitor what's happening in the real world and then drive that back, and I think that's really what drives demand here. And having things like Greengrass is really the way to make that happen for developers, right? They don't have to think about all the details of moving these models out there, so it's a great platform.

Awesome. So I'm going to put you on the spot now. You didn't bring your crystal ball, but: any predictions for artificial intelligence over the next 18 months?

Well, my training is actually as a neuroscientist, so I tend to look at biology sometimes, and what we do see in biology is an aggregation of data at different points. Right now, like you mentioned with Alexa, you have the edge device and then you have the cloud. I think we'll start seeing some aggregation locally, and some local learning; some of the products we'll be coming out with are actually built around that kind of concept. And yes, I'm looking forward to this partnership continuing.

Awesome. Thank you very much; I can't wait to see what you build with DeepLens. And with that, we'll hand it back to Werner. [Applause] [Music]

OK. [Music] So, I should probably figure it out; I can't keep time. Thank you, all of you still here, for sticking around; we've got a few more things, a few small things, to go. I know quite a few of the sessions have been pushed back, definitely the ones here around the area, so don't panic too much. This is a famous quote from Edsger Dijkstra, who is one of the founders, I would say, of modern computing as well as of distributed computing. One of the things about Dijkstra was that he was an
extremely principled computer scientist, and you should actually read the writing that he did; it's amazing. Basically, 40 or 50 years ago he predicted much of how we do development today. So this is a very famous quote of his: "If, in ten years from now, when you are doing something quick and dirty, you suddenly visualize that I am looking over your shoulders and say to yourselves, 'Dijkstra would not have liked this,' well, that would be enough immortality for me." So I've actually modified this quote, not to ten years from now but just to five years from now: if, five years from now, you are doing something and not using an AWS service that is available for you, and you think to yourself, "Werner would not have liked this," well, that would be good enough for me.

So with that, I've got one more announcement to go, of course: you've got a party tonight. I think we've had amazing DJs in the past years: deadmau5, Skrillex, Zedd, Martin Garrix last year, the number-one DJ in the world according to DJ Magazine. So I thought it was time to go even more international and get ourselves a French DJ tonight, and we're very fortunate that we could get the DJ that was closing out Ultra Music Festival in Miami this year: DJ Snake.

So with that, we're getting to an end, and I want to thank you all for sticking it out here. A few things to recap. Voice will be the next major disruption in how you're going to build your systems; if you're building systems, think about a voice interface. You are all security engineers now; security should be priority number one. When you start thinking about reliability, think about your business reasons as well: why do you want that reliability, and what are the trade-offs in cost? And lastly, I think the future really is in the way that we've been building our services for you, and that future of cloud is just around the corner: soon you will only be builders, you will only be writing business logic. So with that, with all these tools that
we've given you, please: go build! [Music]
Info
Channel: Amazon Web Services
Views: 65,875
Keywords: AWS, Amazon Web Services, Cloud, cloud computing, AWS Cloud, keynote, Werner Vogels, matt wood, alexa for business, alexa, ufc, cloud9, abby fuller, aws launches, netflix, nora jones
Id: nFKVzEAm-ts
Length: 171min 17sec (10277 seconds)
Published: Fri Dec 01 2017