AWS re:Invent 2019 - Keynote with Andy Jassy

Thank you. Hello and welcome to the eighth annual AWS re:Invent. It is awesome to be here; this is our favorite time of the year. You're here with 65,000 of your peers, and as you know, AWS re:Invent is not a sales and marketing conference, it's a learning and education conference, so everybody's favorite part of the week is the breakout sessions. This year will be no different: we have over 3,000 of them, with most of them led by partners or customers, so you can get the real scoop about the platform. But I'm going to need every minute of the three hours we have scheduled. I have a lot for you, including some things that are good at the end, so I'm going to get right to it and giddy-up.

When we were thinking about throwing a conference back in 2012, we spent a lot of time debating what the name of the conference should be. You can decide whether you like what we chose or not, but it was very intentional, and the reason we chose this name was that it reflected what we were seeing at the time: it was incredible how many large and small companies alike were inventing, the pace at which they were inventing, and how many large enterprises were completely reinventing their customer experience, all with AWS and the cloud as the linchpin. You can see it in the millions of active customers using the platform today. If you look at what's happened over the last six years or so, enterprises have dramatically changed their view of the cloud and have adopted AWS and the cloud in every imaginable vertical business segment. In financial services you see it with Capital One, Goldman Sachs, Barclays, HSBC, Intuit, and the Commonwealth Bank of Australia. In life sciences and healthcare you see it with Merck, Pfizer, Bristol-Myers Squibb, Johnson & Johnson, Novartis, and AstraZeneca. In manufacturing with GE, Schneider Electric, Siemens, and Philips. In energy with Shell, BP, Hess, and Halliburton. Every imaginable vertical business segment is using AWS and the cloud in a very meaningful way. You also see it in the public sector: we have 7,000 government agencies worldwide using AWS, 10,000 academic institutions, and over 25,000 nonprofits. So, very broad adoption.

The reason these bigger companies have started moving to the cloud is that, over the last 12 to 13 years, startups have completely disrupted long-standing industries, largely from a standing start, on top of AWS. This is usually the point where I rattle off a number of startups that are using the platform, and most of the successful startups over the last 13 years have built their businesses on AWS, but I'll give you a few examples that illustrate what I'm talking about in terms of this revolution. Think about the old days when you needed a taxi: you'd call the taxi company, and the dispatcher would sometimes pick up and sometimes let it ring forever; sometimes they'd be nice to you on the phone and sometimes they'd be rude; sometimes the taxi would come when they said it would, and you never knew when it was going to come; you'd get in the car and it would be a little bit filthy. Think about that experience today and how completely it has been revolutionized by companies like Lyft and Uber and Grab and Ola and Careem. Totally different. Or look at accommodations: when you wanted to stay somewhere out of town, you used to just default to a hotel. Look at how Airbnb has completely changed that industry: they have four million active listings at any one time and two million people a night staying in their rentals. It's just crazy.
Think about exercise. For those that used to use the bicycle for exercise, you could go out and ride a bicycle if you felt like dealing with the elements; you could go to a spin class, but then you had to get in the car and go there at a particular time; you could have a bicycle at home, but it's a little bit boring to exercise by yourself. Think about how Peloton has used technology to completely change that experience: every day they have 20 live streams, and they have 10,000 on-demand streams you can look at. It has totally changed the way you exercise. And then think about delivery of food. In the old days, if you lived in New York City, the only thing you got delivered was pizza, but now with companies like DoorDash, Grubhub, and Postmates you can get virtually anything delivered to you. Believe me, I know this; my son is one of their best customers. So if you think about it, these startups have disrupted long-standing industries that had been around for a long time, and really from a standing start.

It's why, when we thought about what we should choose as a theme for the keynote today, we kept coming back to the same topic, the question we talk most about with companies: how should we think about transforming ourselves? How can we reinvent our business and our customer experience so we can be meaningful and sustainable over a long period of time? Transformation can mean transforming yourself, or it can mean transforming to meet new technology situations or opportunities. This transformation question was the number one question, so we thought we would share what we think are six of the most critical components if you're going to make a significant transformation. If there's a big change and a big transformation you have to make, you don't want to procrastinate. It doesn't get easier if you wait; in fact, the ditch gets deeper.

When you think about the first element of a big transformation like this, it turns out it's not technical; it's all about leadership. When we look at the companies that make this transformation successfully versus those that just talk about it, there are four differentiators. The first is that you actually need to figure out how to get your senior team aligned that you're going to make this change. It's not easy to make a big shift like this, and inertia is a very powerful thing. It's easy to block in various parts of the organization, sometimes for well-intended reasons, sometimes for self-interested reasons, but it's easy to block. If you don't find a way to have that senior management conviction and alignment that you're going to make the change, and a mechanism to get the issues on the table so you know you're making progress, and you don't go several months down the road thinking you're making progress when you're not, you won't make that change. So you need that senior-level alignment. The second thing you need, right alongside that, is an aggressive top-down goal that forces the organization to move faster than it organically would otherwise, and let me give you two examples of this point. Several years ago the CIO of GE, Jamie Miller, decided it was critical that GE move to the cloud to move much more quickly, and she got her top technical leaders together and said, "We're going to move 50 applications to AWS in the next 30 days."
She said that for 45 minutes they told her what a dumb idea that was and how it would never work, and she listened to them very patiently and said, "I hear you, but we're going to do it, so let's go." They got to about 42 applications in 30 days, but along the way they figured out their security model, their governance model, and their compliance model, and they had success and built momentum. All of a sudden the ideas started flowing in on what else they could move, and they're now about three-quarters of the way through moving several thousand applications to AWS. That wouldn't have happened if they hadn't set an aggressive top-down goal to force the company to move.

Now let me give you a counterexample. I went to see a company in the life sciences space, and it turned out that I knew the CIO through a friend of a friend. I'd never met him before, but we had this connection, and when I got to the meeting they were a little bit late coming out to see us because they were in with their CEO. The CIO came out and said, "I'm going to grab you," I think because he felt bad, because we had this friend-of-a-friend connection, and he said, "Before my infrastructure leader gets in the room, tell me how we're doing together." I said, "Well, we're doing fine, but you're not doing very much with us." He said, "That's definitely not true. I know we're doing a lot; we've got all kinds of workloads running, we're experimenting, we're kicking the tires." I said, "Well, you're using three EC2 instances." He said, "That can't be right." Then his infrastructure leader came into the room, and he said, "John, are we doing a lot with AWS?" He said, "Oh yeah, we're doing tons. We're kicking the tires, we're experimenting with lots of things." He asked, "How many EC2 instances are we using?" And he said, "Like three or four." He said, "That's not what I mean by doing a lot." It's easy to go a long period of time dipping your toe in the water if you don't have an aggressive goal that lets the organization know it's a priority to make this transformation. By the way, that CIO went away, set an aggressive top-down goal two weeks later, and they're one of our top five life sciences customers today. But it takes that top-down aggressive goal.

The third thing is you've got to train people. Lots of times you have these conversations around a table, senior people get excited, they decide to move to the cloud, they come back to their companies and say, "Good news, here's the cloud," and nobody has any experience using it. Now, it's not that hard to use the cloud, but it takes a little bit of training, and that's why we train hundreds of thousands of customers every year. The fourth thing that's important is to make sure you don't allow the organization to get paralyzed because you haven't figured out how to move every last workload. What we do a lot with our customers is a portfolio analysis, where we go through all their applications with them and classify them: the applications that are easy to move, those that are medium-hard to move, those that should go last because they have the most dependencies and legacy, those that can easily be lifted and shifted, and those that should be re-architected before they move to the cloud. Then we build a thoughtful, methodical, multi-year plan to migrate. What companies almost always find is that so many workloads are relatively easy to move to the cloud and get all those benefits, and in fact the early workloads inform a lot of the later ones that are the hardest to move.
So you've got to make sure you don't get paralyzed by that. This first step of the transformation is not technical; it's very much about leadership. It's about making sure you have senior-level alignment, an aggressive top-down goal that forces the organization to move faster than it otherwise would, the right training, and then a thoughtful, methodical, multi-year plan to make the migration.

Now, what you find is that once you make the decision as a company that you're going to make this transformation and move to the cloud, your developers are raring to go. They're ready to go, they want the broadest possible capabilities, and they don't want to be slowed down. Last year we talked about "I Want It All" and "I Want It Now." This year we're talking about "Don't Stop Me Now": I'm having such a good time, I don't want to stop at all. Once you decide as a company that you're going to make this transformation to the cloud, your developers want to be able to move as fast as possible. They don't want to be constrained; they want all the capabilities they need to move everything they want to move and to build anything they can imagine. So they want the broadest possible capabilities, and there's nobody who has the capabilities that AWS has. We have over 175 services; nobody has close to the same number of services. But it's not just the number of services, it's the depth of features and capabilities within those services. There's a lot of noise out there, and there are a lot of companies who've become pretty good at being checkbox heroes, where they look at something we have, rush to get something out there, and say, "We have it too," but when you look at the depth and the details of the offerings, they're pretty different. You'll see this across all the major infrastructure components: compute, storage, database, analytics, machine learning, IoT, robotics, messaging, content distribution, the marketplace, end-user services. Very big differences. This is a quote we hear a lot, this one happens to be from Expedia, but what we typically hear from customers when we talk to them is that they think we're a couple of years ahead, both on functionality and with regard to maturity.

So: instances, containers, network. I'm not going to spend a lot of time on the networking piece, in part because I'm going to let you be surprised by the press release we have coming out later today, and also because Dave Brown is going to reveal a lot of the details of our new networking features in his networking presentation on Wednesday. But I'll say a couple of things. First of all, you don't have compute without a great network attached to it, and you won't find a network with more functionality and more capability than AWS's. If you look at our footprint of PoPs and the ability to get onto our backbone, we have a much broader footprint than you'll find elsewhere. It's also a network with more places for you to do Direct Connect between your on-premises data centers and AWS, and it's the only one to have 100 gigabits per second for standard instances. You also have the most capable network hub, what we call our Transit Gateway, which allows you to connect your on-premises data centers with AWS and to set up connectivity across multiple AWS VPCs in different regions. It can take more connections and has much more throughput than you'll find anywhere else, it lets you connect your branch offices to AWS more easily than you'll find elsewhere and connect branch offices to each other with SD-WAN integration, and it even supports multicast IP, which nobody else has and which we're launching today. So it's a much more broadly capable network than you'll find elsewhere.
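The hub-and-spoke setup described above maps to a couple of API calls. Here is a minimal sketch using boto3 that creates a transit gateway and attaches one VPC to it; the region, ASN, and resource IDs are placeholders rather than anything from the keynote, and a real workflow would wait for the gateway to become available before attaching.

```python
import boto3

# Placeholder region and resource IDs; substitute your own.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the transit gateway that acts as the network hub.
tgw = ec2.create_transit_gateway(
    Description="Hub for on-premises and VPC connectivity",
    Options={
        "AmazonSideAsn": 64512,
        "AutoAcceptSharedAttachments": "enable",
    },
)
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach a VPC to the hub (in practice, poll until the gateway is
# 'available' first). Repeat for each VPC; VPN or Direct Connect
# attachments cover on-premises data centers and branch offices.
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0"],
)
```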
But I'm going to focus most of my comments on compute, on instances and containers. If you look at instances to start, it's not just that we have meaningfully more instances than anybody else; we've also got much more powerful capabilities within each of those instances. We have the most powerful GPU machine learning training instances, the most powerful GPU graphics rendering instances, the largest in-memory instances for SAP workloads with 24 terabytes, the fastest processors in the cloud with the z1d, the only standard instances with 100 gigabits per second of network connectivity, and the only instances that give you all the processor choices: Intel, AMD, and Arm-based. A very different set of capabilities on the instance side. And if you look at the pace of innovation at AWS in the number of instances we've built, it has accelerated in a very significant way: we have four times more instance types today than we did two years ago.

There are a couple of reasons we've been able to innovate much faster. The first is that we spent a significant amount of time over a couple of years totally rethinking and reinventing our virtualization layer. We rebuilt it with a system we call Nitro. What Nitro does is take the virtualization of security, networking, and storage off the main server, where the lightweight hypervisor and the customer instances are, and give you back all the CPU it was consuming, which means you get performance indistinguishable from bare metal at a much lower cost. It also means that because we've taken all those pieces off the main server and put them on Nitro chips that we build, we can innovate much more quickly: every time we make a change to one of those pieces, we don't have to change all of them in lockstep. It's why we were able to add our network-optimized instances, for instance, so quickly, something you won't find elsewhere, because we've separated those pieces into separable functions. The other thing is that Nitro gives you a security capability I think is meaningfully better for you. Most traditional hypervisors have a trusted domain, which people often call Dom0, used for things like adding VMs and agents and troubleshooting, and as a provider you allow a limited number of people access to that trusted domain; because it sits with all the customer instances, you have to be careful to lock it down. With Nitro, because we've moved the security off that main server onto a separate Nitro chip, we simply lock down the main server that holds the customer instances so that no one can access it. We don't have to worry about controlling access to it; it's just not accessible, which is a much better security posture for you. So the first thing was that we totally overhauled and reinvented the virtualization layer with Nitro, which has helped us innovate at a much faster rate, and you'll see that throughout this conversation.
The second thing is that we decided we were going to design and build chips. I think a big turning point in the history of AWS was when we acquired Annapurna Labs, a group of very talented and expert chip designers and builders in Israel, and we decided we were actually going to design and build chips to try to give you more capabilities. Lots of companies, including ourselves, have been working with x86 processors for a long time; Intel is a very close partner, and we've increasingly started using AMD as well. But if we wanted to push the price-performance envelope for you, it meant we had to do some innovating ourselves. So we took this Annapurna team and set them loose on a couple of chips we wanted to build that we thought could provide meaningful differentiation in performance on things that really mattered and that we thought people were doing in a broad way.

The first chip they started working on was an Arm-based chip we call our Graviton chip, which we announced last year as part of our A1 instances, the first Arm-based instances in the cloud. These were designed for scale-out workloads: containerized microservices, web-tier apps, and things like that. We had three questions we were wondering about when we launched the A1 instances. The first was: will anybody use them? The second was: will the partner ecosystem step up and support the toolchain required for people to use Arm-based instances? And the third was: can we innovate enough beyond this first version of the Graviton chip to let you use Arm-based chips for a much broader array of workloads? On the first two questions we've been really pleasantly surprised. You can see it in the number of logos on the slide: loads of customers are using the A1 instances in a way we hadn't anticipated, and the partner ecosystem has really stepped up and supported Arm-based instances in a very significant way. The third question, whether we could really innovate enough on this chip, we just weren't sure about, and it's part of the reason we started working a couple of years ago on the second version of Graviton even while we were building the first version: we just didn't know if we were going to be able to do it, and it might take a while.

So I'm excited to announce today the launch of a new set of instances: the M6g, the R6g, and the C6g instances for EC2, a new generation of Arm-based instances powered by AWS Graviton2. These are pretty exciting, and they are a pretty significant step beyond the first version of the Graviton chips. Each of them has 64-bit customized cores on AWS-designed 7-nanometer silicon. All the instances have up to 64 vCPUs, 25 gigabits per second of enhanced networking, and 18 gigabits per second of EBS bandwidth. Compared to the first Graviton chip, they have four times more compute cores, five times faster memory, and overall seven times better performance. But arguably most importantly, they have 40% better price performance than the latest generation of x86 processors. That's unbelievable if you think about it. [Applause] So we're very excited to give these to you today. The M6g is available today; the R6g and the C6g will be available in early 2020.
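For developers, a Graviton2 instance is just another instance type to launch; the main prerequisites are an arm64 AMI and arm64 builds of your software. Here is a minimal boto3 sketch, with the AMI ID and key pair name as placeholders, that lists Arm-compatible instance types and launches an M6g instance.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# List instance types that run on 64-bit Arm (Graviton) processors.
arm_types = ec2.describe_instance_types(
    Filters=[{"Name": "processor-info.supported-architecture", "Values": ["arm64"]}]
)
print(sorted(t["InstanceType"] for t in arm_types["InstanceTypes"])[:10])

# Launch a Graviton2-based M6g instance from an arm64 AMI (placeholder ID).
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="m6g.4xlarge",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",
)
```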
What we also did, simultaneously, was split off a piece of the Annapurna team to build a second chip, and again we were trying to pick things that could totally change the game for you. We started with the Graviton2 chips, which, with that 40% better price performance than the latest generation of x86, let you run virtually all your workloads on them; that's a huge game changer. But we also started working on something else we thought would be a game changer, and that's around machine learning. We've talked a lot as a group over the last few years about training in machine learning. Training gets a lot of the attention; those are hefty workloads, and we have the instances that perform the best and are the most powerful for machine learning training with our P3 and P3dn instances. But if you do a lot of machine learning at scale and in production, like we have and like a lot of you in this room have, you know that the majority of your cost is actually in the predictions, the inference. Take Alexa as an example: we train that model a couple of times a week, and it's a big model, but think about how many devices we have everywhere making inferences and predictions every minute. About 80 to 90 percent of the cost is actually in the predictions. That's why we wanted to work on this problem: everybody's talking about training, but nobody was actually working on optimizing the largest cost you all have with machine learning. We announced last year in my keynote that we were working on an inference-optimized chip called Inferentia, and I'm excited to announce today the launch of the Inf1 instances for EC2, which are backed by our new Inferentia chips. [Applause] Our Inf1 instances have a lot to be excited about. They have low latency, three times higher throughput (just think about that, three times higher throughput), and 40% lower cost than the current best inference instances, which are NVIDIA-based; that's a significant innovation from this team, up to 2,000 tera operations per second. We've integrated it with all the major frameworks, with TensorFlow, with MXNet, and with PyTorch, and it's available today in EC2; we'll make it available for SageMaker, ECS, and EKS in early 2020.
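From the EC2 side, an Inf1 instance launches like any other instance type; compiling the model for the Inferentia chip is done separately with the AWS Neuron SDK and isn't shown here. A minimal sketch with a placeholder AMI (a Deep Learning AMI with Neuron preinstalled is one common choice) and key pair:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch an Inferentia-backed Inf1 instance for inference serving.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder; e.g. a Deep Learning AMI
    InstanceType="inf1.2xlarge",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",
)
```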
So when you think about instances, in addition to having the most instances, the most powerful instances within each of those categories, and the capability with Nitro to innovate at a much faster clip, when you layer on top of that our desire and willingness to design and build chips, it gives you capabilities in the instance space unlike anything you'll find anywhere else.

But that's instances; let's talk about containers for a second. Increasingly, customers are using containers for all kinds of workloads, both because of the advent of microservices architectures and also because it makes it quicker and easier to deploy. This is another area where we have a lot more capabilities and a lot more resonance than elsewhere. If you look at all the containers in the cloud, 81 percent of them run on AWS, and that's in significant part because we just have a lot more capability. All the other providers effectively have one container service, a managed Kubernetes service; we have three. When we started building container services back in 2014, before there was a really popular orchestration engine, what customers said to us was, "We don't just want a container, we want a container service deeply integrated with the rest of the AWS platform capabilities." So we built Elastic Container Service, or ECS, which is growing unbelievably fast and continues to win a lot of customers; Verizon, GoPro, Fox, McDonald's, and Duolingo all use it. Because we control the development of ECS, with a lot of help and input from all of you (thank you very much, and keep it coming, please), when we launch new features they integrate with ECS right from the get-go. So customers who want the most deeply integrated container service out there choose ECS.

Not surprisingly, as Kubernetes became very popular, lots of customers wanted us to do something there, and if you look today, 84% of the Kubernetes that runs in the cloud runs on top of AWS. We have a lot of Kubernetes customers, and they understandably wanted a managed service, so we built the Elastic Kubernetes Service, or EKS, which has also grown like a weed since we launched it a couple of years ago, really, really fast. Then customers said, "It's awesome that I have these two managed container services, ECS and EKS, but I would prefer not to have to worry about servers or clusters; I want to manage containers at the task level." That's why we built something we launched a couple of years ago called AWS Fargate. Fargate is a serverless container offering, the only offering like it anywhere out there. All you do is tell us the CPU and the memory you want and upload the container image, and Fargate does all the rest for you: it deploys it and right-sizes the compute. No servers, no clusters, no provisioning for you. If you look at the popularity of Fargate, it kind of blows us away. We thought people would be interested in it, but I don't think we anticipated it would be this broadly demanded: of new container customers on AWS this year, 40% start with Fargate, and that's because it's so much easier to run containers that way.

Now, we launched Fargate with support for ECS because, again, since we control the development of ECS, we didn't have to coordinate with anybody; it was much easier to do. Making it work for Kubernetes has not been easy, and customers who use Kubernetes understandably said, "Well, we love the idea of Fargate, but why won't you make it work for Kubernetes?" So I'm excited to change that for you right now with the launch of AWS Fargate for Amazon EKS. Now our Kubernetes customers can get all the same serverless benefits of running containers on AWS. What that means is we now have four container offerings for you to choose from: for those that like to manage at the server and cluster level and want that flexibility to stitch things together, you can use either ECS or EKS, and for those that want to operate at the task level and not worry about servers and clusters, you can use Fargate for either ECS or EKS. Very exciting.
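With Fargate for EKS, the way you tell a cluster which pods should run on Fargate is a Fargate profile. Here is a minimal boto3 sketch, assuming an existing EKS cluster, pod execution role, and private subnets; all of the names, ARNs, and IDs below are placeholders.

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")

# Pods that match the selector (here, the "default" namespace) are scheduled
# onto Fargate, so there are no EC2 worker nodes to provision or manage for them.
eks.create_fargate_profile(
    fargateProfileName="default-namespace",
    clusterName="my-cluster",
    podExecutionRoleArn="arn:aws:iam::123456789012:role/eks-fargate-pod-execution",
    subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
    selectors=[{"namespace": "default"}],
)
```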
So when you make the decision that you're going to transform yourselves and move to the cloud, and you get that senior-level alignment and set that aggressive top-down goal that forces the org to move, your developers want to go. They don't want to be held back; they want the platform with the most capabilities, not a fraction of those capabilities, especially because you don't have to pay for it up front. If you look across the platform, this is the bar for what people want. For compute, they want the most instances, the most powerful machine learning training instances, the most powerful machine learning inference instances, the most powerful GPU rendering instances, the biggest in-memory instances for SAP workloads, 100-gigabit-per-second connectivity with standard instances, and access to all the different processor options. They want not just one container service but multiple container services, at both the managed level and the serverless level. And then they want the network with the most capabilities, the most functionality, the broadest footprint, the best performance, and capabilities like Transit Gateway that make it much easier to set up your global network. That is the bar for what people want with compute, and the only one that can give you that is AWS. It makes it, by the way, so much easier not only to move all your existing workloads over, but also to let your developers build anything they can imagine with the right tools at the right price as quickly as possible.

Now, you may have noticed as you were walking in that we had some DJs playing music, as we often do before the keynote. This year we had two DJs: a woman DJ, and a male DJ who did the last 15 minutes. I'm going to bring that male DJ, whose DJ name is DJ D-Sol, up to the stage, and he's not going to talk to you about music, although he actually has very interesting things to say about music; I enjoyed his selection, by the way. It turns out that DJ is the CEO of Goldman Sachs. It's true. He's awfully good at DJing in addition to being a CEO. So I'm going to ask David Solomon to come to the stage and share with you the transformation that Goldman Sachs is making using AWS and the cloud. David.

Thank you, Andy. Thank you for having me. Good morning, everybody. The one thing I'll tell you for sure: the most fun I will have today was the 15 minutes from 7:45 to 8 a.m. I'm really happy to be here. I'm standing up here looking at all of you, some of the world's best technologists, at one of the biggest technology conferences of the year, and I know you're wondering what the head of a bank is doing here. Well, the world's changing. You just saw the CEO of Goldman Sachs DJ in Las Vegas, and not for the first time, I would say. But let me tell you a little bit about how we're using cloud technology to change our business. Goldman Sachs is a financial services company. We provide advice, we lend and invest money, we make markets, and we manage risk. We stand in the middle of markets with trillions of dollars of flows; we advise on hundreds of M&A transactions a year and underwrite stock and bond offerings. Ultimately, we help our clients achieve their financial goals. We care deeply about our clients' goals because they aren't intangible, abstract things; they are real-world problems, and everything we do, from simple lending to complex derivative trading, serves a real purpose, like helping you renovate your kitchen or finance your expanding contracting business. Finance is as simple as that, or it should be. The reality is that finance is much more complex, and Goldman Sachs's job is to try to simplify it, to make it as easy, intuitive, and effective as possible. How do we do it? In-house we have around 9,000 engineers, who make up about a quarter of our 38,000-person workforce. We have some of the best engineers in the world working on some of the most interesting problems, and everything we are building is trying to make finance work better. Cloud technology allows us to do our job in a way that's simple, all while accounting for the complexity of our industry and helping us ensure that our work is safe, secure, and responsible. While we've worked with a few cloud providers, AWS was the first, because of their immense capabilities and the astounding pace of their innovation.
A few years ago my colleague Roy Joseph was on this stage to talk about how we worked together to develop the bring-your-own-key solution for AWS Key Management Service, a crucial data privacy development that allowed us as an organization to fully embrace the cloud. Not to be too on the nose, but AWS's cutting-edge approach to technology really unlocked something for us here. Since then, we've been busy, with the help of AWS, building predominantly cloud-based businesses, and I want to talk to you about a couple of those for a few minutes today.

The first is our credit card business. In conversations with our clients we realized that the credit card could be a much simpler thing, and that it could provide consumers with a new way to relate to their spending, repositioning the credit card as a tool that was truly on your side. Building the credit card platform that underpins Apple Card took a number of our engineers, along with a very strong partnership with Apple, Mastercard, and of course AWS. It wouldn't have made sense for us to get into the business if we had to maintain fields and fields of on-prem data centers to do it; the only reason we were able to deliver these capabilities digitally and at scale is cloud technology. Apple Card launched just a few months ago, and it's already one of the most successful credit card launches ever. It's part of Marcus, our digital consumer banking business, which today is just three years old. Although we're a 150-year-old company, we're still new to consumer finance, but I think it's pretty clear that we're onto something: today, through Marcus, we have 55 billion dollars in retail deposits and millions of clients.

We're putting cloud technology to work in other areas of our business as well. Big companies make trillions of dollars of payments every day, but right now this space is dominated by legacy architectures, manual processes, and slow turnaround times. Next year we'll launch our transaction banking service, a digital platform that helps corporations manage their cash, built entirely on a cloud-native stack provided by AWS. Goldman Sachs is already using it to make billions of dollars of our own payments in five currencies every day, giving us enhanced transparency and tracking around our cash flows while saving us a lot of time and money in the process. We're eager for our clients to experience the service, and we look forward to sharing more details about our rollout plan in 2020.

Historically, financial technology has been powerful and fast, but it has lagged behind consumer and high tech in terms of elegance and simplicity. If you want a bank account, you have to wait for your funds to clear. If you want a loan, you have to wait for approvals. If you want advice for your company on how to better manage your balance sheet, you have to send an email and ask for a meeting. It doesn't have to be this way. We can do better, but we're not there yet. Finance is the perfect place to take new technologies and have an immediate, real-world impact. Our data scientists have been using AI and machine learning techniques for years, and we're already pushing the research community to consider what's possible when you apply quantum computing power to financial problems. While Goldman Sachs serves corporations, governments, institutions, and individuals, we're also building for developers. The same way you go to AWS for their best-in-class cloud services, we want to be your first choice for services that let you build financial functionality directly into your applications and workflows.
Already, for our institutional clients, we're making the capabilities of our powerful securities database available directly through a platform called Goldman Sachs Marquee. The real power of Goldman Sachs Marquee lies in the scalable services you can access directly through our APIs. We've published some of our APIs on developer.gs.com, and we will continue to add more over time. Our clients already use the power of AWS to access a number of these services; we're migrating production Marquee into AWS, and starting next year we'll be delivering new products and services to our clients there directly. My goal is for Goldman Sachs to lead the way in building financial services technology, and we're going to succeed in no small part through the work we have done, and will continue to do, with Amazon Web Services. Still, our job will remain the same: to assume the burden of complexity and make finance as easy, intuitive, and effective as possible. We want to enable you to focus on building things that we couldn't dream of, things that change the way the world works for the better. Because sure, finance is complex, but it should never be a drag on innovation. Like AWS has been for Goldman Sachs, finance should only ever be a business accelerant. I look forward to seeing what we can build together. Thank you all very much for having me; have a great day. [Applause] [Music]

Thanks, David. It's really an honor for us to be partnering with Goldman Sachs, and we're really excited about what we're doing together, so thank you for the partnership. So when you're making this transformation, and you've made the decision that you're going to make this big move, and you've given your developers access to all the tools to let them make the change quickly and flexibly, there's another thing you've got to think through: what you're going to take with you and what you're going to leave behind when you make this big move. It's what people often call modernization. That's a question companies have been asking about their on-premises infrastructure for many years when they compare it to the cloud: it costs more capital, it's more expensive on a variable basis, you don't get the elasticity, it doesn't have anywhere near the capabilities of a cloud that's launching 2,500 new services and features a year, you have to spend your scarce resources, your engineers, on undifferentiated heavy lifting, and you don't have the same security posture. When you objectively look at whether you want to toil away at refining your on-premises infrastructure for the next several years, most companies are saying, "Yeah, no thanks, I'm moving up and I'm moving out." So when you make that decision that you're going to modernize, you have to make a lot of decisions, and you have to decide for yourself what you're going to bring. It's a little bit like moving out of a home: there's a mainframe; oh, and it also looks like there are some audit notices, some licensing changes, and some price increases. When you move, you have to decide what you're actually going to take with you and what you're going to leave behind, and it turns out that when companies are making this big transformation, all bets are off; they reconsider everything. So I thought I would share some of the biggest modernization trends we see from companies making this transformation.

The first is that companies are trying to move away from mainframes as quickly as they can. Every industry has lots of companies with mainframes,
but people want to move away from them because they're expensive, they're slow, they're complicated, and hardly anybody has mainframe skills anymore. In fact, one of the stories I like: one of our enterprise customers was making a big shift, teasing apart its mainframe and moving it to AWS, and they got to the very last step, where they needed the credentials to decompile something, and they couldn't find anybody in the company who had them. They realized that only one person did: a woman who had retired ten years earlier and moved out of state. Her name was Ginger. They were able to find her, she gave them the credentials, and they made that last step. But you can't always find Ginger, so people don't want to be using mainframes over the long term. We have lots of customers moving away from them, and we see a couple of different patterns. Some companies basically tease the whole thing apart into microservices and relaunch it on AWS; that's what Western Union did. Some companies methodically pick certain workloads and move them away from the mainframe one by one, so the mainframe becomes less central and ends up holding just non-essential tasks; that's what companies like Vanguard, Allianz, and the US Air Force are doing. But make no mistake about it: companies are trying to move away from mainframes as quickly as they can, and they're having a lot of success doing so.

The second thing we see people modernize away from are those old-guard, commercial-grade relational databases, and we've talked about that in this keynote for a few years. People are trying to move away from Oracle and SQL Server because they're expensive, they're proprietary, they have high amounts of lock-in, and the licensing terms are just downright punitive. I think we don't meet customers who aren't looking to flee Oracle, but one thing that's changed over the last couple of years is that people are trying to get away from SQL Server pretty quickly as well. One of the reasons is this return to the ways of old from Microsoft, where they're not prioritizing what matters to you, their customers. Let me give you an example. For many years you were able to take the SQL Server licenses you'd bought yourself, bring your own license, and run it where you wanted to, in our Relational Database Service as an example. One day Microsoft decided they didn't want to let you do that. Was it good for you? Hell no. Was it good for Microsoft? Maybe; I think they think so. But people are sick and tired of being pawns in this game, and it's why they're moving as fast as they can to the open engines like MySQL, PostgreSQL, and MariaDB. But getting the performance you need from these open engines, performance that compares well to the commercial-grade databases, is hard; it takes a lot of work. We do a lot of it at Amazon; it's doable, but it's hard. That's why all of you asked us to solve this issue: to give you a database option that works on the open engines but has performance comparable to the commercial-grade databases. It's why we built Amazon Aurora, which we launched in 2015. Aurora has versions that are completely compatible with PostgreSQL and with MySQL, it has several times faster performance than the typical high-end implementations of those community editions, and it's at least as durable, performant, and available as the commercial-grade databases, but at one-tenth of the cost.
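As a concrete illustration of the open-engine path, here is a minimal boto3 sketch that creates an Aurora cluster using the PostgreSQL-compatible engine and adds one instance to it. The identifiers and credentials are placeholders, and a production setup would also specify networking, subnet groups, and encryption settings.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create the Aurora cluster (the shared, distributed storage volume).
rds.create_db_cluster(
    DBClusterIdentifier="orders-aurora",
    Engine="aurora-postgresql",
    MasterUsername="dbadmin",
    MasterUserPassword="change-me-please",  # placeholder; keep real secrets in Secrets Manager
    DatabaseName="orders",
)

# Add a database instance (the writer) to the cluster.
rds.create_db_instance(
    DBInstanceIdentifier="orders-aurora-writer",
    DBClusterIdentifier="orders-aurora",
    DBInstanceClass="db.r5.large",
    Engine="aurora-postgresql",
)
```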
It's why Aurora has been the fastest-growing service in the history of AWS since its inception in 2015, and you've got tens of thousands of customers using it and moving to it. It's a very broad list: Bristol-Myers Squibb, Fannie Mae, Electronic Arts, AstraZeneca, Liberty Mutual, Nasdaq, Hulu, Verizon, FINRA. A very broad group, and it's pretty amazing to us how fast people are moving to Aurora.

Mainframes, relational databases; the third area where we see a lot of modernization decisions is moving from Windows to Linux, and this has been happening for several years anyway. IDC estimates that in 2020 about 80 percent of the workloads deployed will be Linux workloads, and Linux is growing about four times faster than Windows. There are a few reasons for this. First, people don't want to pay the tax anymore for Windows, particularly when they know the price goes up frequently. Second, because there's such a vibrant community around Linux, a more vibrant community than around anything else, all the features and all the security work happen much more quickly there. And third, again, people aren't so keen to have one owner of an operating system, especially when that owner is prone to raising prices, which happens a lot, or to just changing the licensing terms. If you look at Windows in the cloud, 57 percent of the Windows in the cloud runs on AWS. The company that owns that operating system maybe isn't so crazy about that, so they decided to change the licensing rules on you, saying that new versions of Windows licenses can no longer be brought to dedicated instances at other cloud providers, trying to stranglehold those workloads back onto their platform. Again, I don't think customers are so keen on having one company own the operating system they're building their business on, which is why you continue to see people moving from Windows to Linux.

The fourth area of modernization we see is around the partners you choose. If you look at most ISVs and SaaS providers, they have to adapt their software to work on a technology infrastructure platform. Some will do two; very few will do three. And they all start with AWS, just because we have such a significant market-segment leadership position, which is why you see a much more vibrant collection of ISVs and SaaS providers on AWS as you're moving to the cloud and want those capabilities. You see it with Salesforce, which runs the vast majority of what they do on top of AWS, as do Workday, Splunk, Informatica, Infor, Acquia, Datadog, and Databricks; just a much broader collection of software that you can use really easily on top of our platform when you're moving to the cloud. The other interesting thing we see is that companies are very often moving to the cloud with different systems integrators than the ones they've used on premises for the last number of years, and a lot of that has to do with the fact that when companies are making this big a change, they want to work with an SI that's really deep on a cloud platform and has a lot of dedicated, trained professionals on that platform. I think a lot of the large SIs have had a dilemma here, because they've had these big outsourcing businesses and built practices that hedge their bets across lots of different companies, and if you don't actually get deep on one of these platforms, the
companies that are taking the risk of moving get a little bit nervous. There are some big SIs that I think have made that shift pretty well and done so successfully, companies like Deloitte and Accenture, but what we see is that a lot of the heavy lifting of moving enterprises to the cloud is being done by SIs that either pivoted their model quicker and realized what the future was, companies like Slalom and Rackspace, or by born-in-the-cloud SIs who don't have to worry about cannibalizing their existing business and who are very happy to pick up the small pilot projects for all of you that don't pay very much and don't seem like they're worth it; but everybody knows you can't move unless you get pilots done successfully, so they're willing to bet on the future. These are companies like Onica and ClearScale in the US, Cloudreach in the UK, All Cloud in Europe, and Versent in Asia-Pacific, who are doing a lot of the heavy lifting. So there are lots of different choices in partners as people move to the cloud as well.

So when you think about transformation, there are some transformations that you have to do to yourself: deciding you're going to make the move, energizing your company to make the move, equipping your developers with the broadest possible capabilities so you can make the move quickly, successfully, and efficiently, and then making the hard decisions about what you're going to take with you into the new era versus what you're going to leave behind. But there are also transformations that are really about meeting new environments and new technical challenges, and if you look at the age we're in, with respect to how much data we're trying to store and process and analyze, it's like nothing we have ever seen before. This is a very different era from what's existed in the past. We're not in a world where you're storing mostly gigabytes of data and sometimes terabytes; we're in a world where we're consuming petabytes of data and sometimes exabytes. There are lots of reasons for it. Some of it has to do with the borderless internet and all the applications that can reach everywhere in a much broader way than was possible before, but a lot of it has to do with the cloud, how much more cost-effective and accessible compute and storage are, and how much faster you can get work done. The companies who thrive in this new era of this much data are not using the same old tools they've used forever; they are adapting to meet the new technology challenge and the tools required to do so.

One of the challenges companies face is that they have all this data they've accumulated over a long period of time, and unfortunately it lives in data silos all over the place, which makes doing analytics and machine learning painful, expensive, and slow. It's why companies are so excited about being able to build data lakes: to bring all that data together and make analytics and machine learning much easier. And if you look at what people are building data lakes on top of, nothing comes close to how many data lakes have been built on Amazon S3, our object store. There are a few reasons people are choosing S3 as their data lake base. The first is that it just turns out to be more reliable, scalable, and available than anything else. A lot of that has to do with our multi-Availability Zone architecture, where we take your data and store it in multiple data centers, typically three.
Those data centers are a few miles apart, no more than a hundred, so that you have the right low latency to operate your application. If you compare that to what other providers do, they mostly have regions with only one data center, or if they have a multi-AZ capability, the AZs live in the same building or right next door to each other, so if there's a problem in that building or on that street, that's the end of your availability and durability story. So it's very different on availability and scalability.

Second, S3 is the most secure data store you have. There are lots of examples I could give, but in this era of giant data you really want granularity down to the actual object level. S3 is the only object store that allows you to block public access at the bucket and account level. It's the only object store that gives you inventory reports on all your objects, so you can answer questions like "Are all my objects encrypted?" And it's the only object store that allows you to analyze all the access permissions on all your objects and buckets, with a feature we launched yesterday called IAM Access Analyzer. So it's the most available and the most secure.

It's also the object store and data lake that gives you the most ways to get your data into it, and that turns out to matter a lot because you have data coming from everywhere. You can get it in over the wire, through Direct Connect onto our backbone, through streaming, through IoT, through Storage Gateway, through SFTP if that's what you want to do, through Snowball appliances, even through a 45-foot container with Snowmobile. Almost every way you can imagine getting your data from wherever it is into your data lake, you can do with S3, in significantly more ways than you'll find elsewhere.

The other thing is that S3 gives you the ability to automate more of your actions, which again matters in an era of very large data. Look at S3 Intelligent-Tiering, a storage class unlike anything you'll find anywhere else, where we assess, using a machine-learning-powered algorithm, which objects are hot and which are being accessed less frequently, move them to warmer or colder storage, and adjust your price accordingly. Or, if you're operating on hundreds of petabytes of information, sometimes exabytes with some of our customers, you don't want to run operations across every single one of those objects one at a time; you want something like S3 Batch Operations, which lets you take actions across all those objects at once, again something you won't find elsewhere. And then S3 has the lowest-cost options: with S3 Glacier Deep Archive you won't find anything more cost-effective, not even tape, by the way. Those are the things you find in S3 that you won't find elsewhere, and it's why so many people are choosing S3 as their data lake.
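The tiering and archiving behavior described above is typically wired up with a bucket lifecycle configuration. Here is a minimal sketch, assuming a hypothetical data lake bucket with raw data under a raw/ prefix; the bucket name, prefix, and retention periods are illustrative, not from the keynote.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-data-lake-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-then-archive-raw-data",
                "Status": "Enabled",
                "Filter": {"Prefix": "raw/"},
                "Transitions": [
                    # Let S3's access-pattern monitoring move objects between
                    # frequent- and infrequent-access tiers automatically.
                    {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"},
                    # After a year, move objects to the lowest-cost archive class.
                    {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
                ],
            }
        ]
    },
)
```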
But if you think about this world where you now have data lakes, and we have a lot of customers with very large data lakes now, it's actually pretty challenging. You have all these places you're sending data in from, all this data being aggregated and normalized and indexed and reformatted, all these applications trying to access the data lake, and lots of people, and for the person whose job it is to decide which application should have access to which data, it's very complicated and stressful. We have a capability in Identity and Access Management, or IAM, that gives you all kinds of flexibility there, and customers have used that flexibility, but you end up with all these different access policies and objects to configure for who needs what, and people often make mistakes. Customers have asked us to help them find a way to simplify that in this age of very large data. So I'm excited to announce the launch today of Amazon S3 Access Points, which radically simplifies managing access at scale for applications using shared data sets in S3. Access points give you a customized path into a bucket, with a unique hostname and an access policy that enforces the specific permissions you've set up, and they're very flexible: you can also create access points that allow only VPC access, so you can lock down your network. Now, when you have to decide who should have access to what data, and only that data, it's so much easier: you don't have to layer thousands of policies on top of one bucket, you can assign a different access policy to each application, it scales much better, and you'll make far fewer mistakes. This is a very unique and helpful feature when you're building a broad data lake.
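Access points are created against an existing bucket through the S3 Control API, and each one can carry its own policy. Here is a minimal sketch with a hypothetical account ID, bucket, VPC, and application role; the policy grants that one role read access to one prefix, reached only through this access point.

```python
import boto3
import json

s3control = boto3.client("s3control", region_name="us-east-1")
account_id = "123456789012"  # placeholder account ID

# One access point per application, optionally restricted to a single VPC.
s3control.create_access_point(
    AccountId=account_id,
    Name="analytics-app",
    Bucket="my-data-lake-bucket",
    VpcConfiguration={"VpcId": "vpc-0123456789abcdef0"},
)

# Attach a policy scoped to just the data this application needs.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{account_id}:role/analytics-app-role"},
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:us-east-1:{account_id}:accesspoint/analytics-app/object/curated/*",
    }],
}
s3control.put_access_point_policy(
    AccountId=account_id, Name="analytics-app", Policy=json.dumps(policy)
)
```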
But even with those things, at the end of the day, if you don't have the right analytic services it doesn't matter, and so you want the broadest capabilities in the analytics space, and that's what AWS has provided over the last number of years. Nobody has the selection of analytic services we have. If you want to do ad hoc query of unstructured data, you can use Athena. If you want to process vast amounts of unstructured data using dynamic clusters of distributed frameworks like Spark, Hadoop, Presto, Pig, Hive, and YARN, you use EMR. If you want to do super-fast queries on structured data, like a data warehouse, you use Redshift. If you want to do analytics on real-time streaming data, you use Kinesis. If you want to do BI with beautiful visualizations and great embedded ML capabilities like you won't find elsewhere, you use QuickSight. So, a very broad array of analytic services, but I want to go back to Redshift. When we launched Redshift in 2012 it really changed the data warehousing space: it was the first data warehouse built from the ground up for the cloud, and it really changed the pricing equation, at less than a thousand dollars per terabyte. It was the fastest-growing service in AWS for its first three years, until Aurora launched, and it's continued to grow very fast for us. We have tens of thousands of customers using Redshift; it's the most broadly used data warehouse in the cloud, with companies like Electronic Arts, Aetna, McDonald's, Yelp, Dow Jones, Pfizer, Intuit, and Liberty Mutual, and we have more and more customers gravitating to Redshift in significant part because we're continuing to iterate at a fast clip. Over the last year alone the Redshift team has added over a hundred features, and I won't share all of them, but I'll touch on a few that have really mattered for customers. The first is concurrency scaling, which we launched about a year ago, which automatically adds and removes capacity based on unpredictable demand, and it's incredible how many of our Redshift customers are already using it; because we give you an hour a day of free concurrency scaling usage, about 97% of you who are using it are using it for free. Second, just a few days ago we released materialized views, which take your most frequent queries and precompute and cache the aggregations, filters, and even joins between tables, which makes your queries go much faster; people are pretty excited about that. The third thing, which we've been working on for a year or two, is something we called Spectrum but now refer to as the lake house, which is really about being able to query not just the data you have stored locally in Redshift but also across your data lake in S3. Not surprisingly, as people start querying across both Redshift and S3, they also want to query across their operational databases, where a lot of important data sets live, and so today we released something called federated query, which lets you query across Redshift, S3, and our relational database services, including Aurora PostgreSQL. And then, when you're querying across all these data stores and getting aggregated data sets in Redshift, customers want to move those data sets back to the data lake so that all the other analytic and machine learning services can use them as well, and that turns out to be difficult and a pain to do; you have to do all kinds of work. So we've made that easy for you with a new feature called data lake export, which we're also releasing today for Redshift. So Redshift continues to iterate at a very fast clip based on what you tell us matters to you. When you step back and look at Redshift, it's the most broadly used data warehouse in the cloud, it's two times faster than anything else out there if you don't fudge the benchmarks, and it's 75% less expensive than anything out there. And yet, what we would argue is that as you look at the age of data we're in today, and fast-forward even a couple of years to how exponentially the amount of data people are storing and processing and analyzing will increase, you have to be looking around the corner and adjusting and evolving. So we've thought a lot about this space, and one of the first things you've told us is: "I really want to be able to scale my storage and compute separately. Redshift has instances that contain both storage and compute, and if I have a workload that needs more storage, I have to provision another instance even if I don't need the compute." You want to scale those separately, which seems like a pretty reasonable request. So I'm excited to share our way of letting you scale storage and compute separately, with our new Redshift RA3 instances with managed storage. What these RA3 instances have is very fast, big SSDs in the local instance, and if you have a workload that exceeds the amount of storage in those local SSDs, we've built technology that intelligently and automatically moves the less frequently accessed data to S3.
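For reference, a small sketch of the three Redshift features just mentioned as they look in SQL, submitted here through psycopg2. The cluster endpoint, credentials, IAM roles, Secrets Manager ARN, and table names are all hypothetical placeholders, and the exact clauses should be checked against the Redshift documentation for your setup.

```python
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",  # hypothetical endpoint
    port=5439, dbname="analytics", user="admin", password="...",
)
cur = conn.cursor()

# Materialized view: precompute and cache a frequent aggregation.
cur.execute("""
    CREATE MATERIALIZED VIEW daily_sales AS
    SELECT order_date, SUM(amount) AS total_amount
    FROM sales GROUP BY order_date
""")

# Federated query: expose an Aurora PostgreSQL schema inside Redshift.
cur.execute("""
    CREATE EXTERNAL SCHEMA orders_fed
    FROM POSTGRES
    DATABASE 'orders' SCHEMA 'public'
    URI 'my-aurora.cluster-abc123.us-east-1.rds.amazonaws.com' PORT 5432
    IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftFederatedRole'
    SECRET_ARN 'arn:aws:secretsmanager:us-east-1:111122223333:secret:aurora-creds'
""")

# Data lake export: UNLOAD query results back to S3 as Parquet.
cur.execute("""
    UNLOAD ('SELECT * FROM daily_sales')
    TO 's3://my-data-lake-bucket/exports/daily_sales_'
    IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftS3ExportRole'
    FORMAT AS PARQUET
""")
conn.commit()
```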
But then, because we have Nitro, like I was talking about earlier, we built unique instances with very fast bandwidth, so that if you actually need some of that data from S3 for a query, it moves much faster than it would without that high-speed-bandwidth instance. So with RA3s you get to separate your storage from your compute, and by the way, if you're not using all of the local SSD, you only pay for what you use. A pretty significant enhancement for customers using Redshift. At the same time, if you think about the prevalent way people separate storage from compute and scale them separately, and how you do large-scale compute where you move the storage to a bunch of waiting compute nodes, there are some issues you have to think about. The first is how much data you're going to have at the scale we're at; just fast-forward a few years and think about how much data you'll actually have to move over the network to get it to the compute. We have very large-scale networking, with multiple petabits of capacity per Availability Zone and 100-gigabit-per-second instances, but it's not that hard to look around the corner and realize that it's going to saturate the network at some point, slow down performance, and become a real bottleneck for you. And if you could get through that networking bottleneck, which I'm not sure you can, but let's imagine you could, you've got a second problem: if you look at hardware trends, the throughput to and from SSDs, and between nodes and SSDs, is growing at roughly six times the rate of the ability of CPUs to process data in memory. So even if you get through the networking bottleneck, the CPUs won't be able to keep up with the storage, and that means you're going to have a performance problem unless you provision more compute, but then you're adding cost and you're not really separating storage from compute anymore. This led us to think about what we need to do and how we need to evolve to give you better performance in the world we're moving into, and the team has spent the better part of a year or two thinking about this. I'm excited to announce for you AQUA, the Advanced Query Accelerator for Redshift, which is an innovative hardware-accelerated cache that gives you up to 10x better query performance than any other cloud data warehouse solution out there. [Applause] Here's what AQUA does. First of all, it totally flips the equation of moving the storage to the compute on its head: we're moving the compute to the storage. What we built with AQUA is a big, high-speed cache architecture on top of S3, and the cache can scale out in parallel across lots of different nodes, and in each of those nodes we have AWS-designed processors to make things go much faster; we've taken a Nitro chip, adapted it, and innovated on top of it to speed up compression and encryption. This makes your processing so much faster that you can actually do the compute on the raw data without having to move it, and it also saves you a lot of work, because in the past there was all this work to build data movement pipelines and to precompute things, and the movement of all that data from storage to compute is a lot of muck.
So with AQUA you get the double benefit of it being much faster and being able to do the compute on the raw storage, plus the time and energy saved from not having to do the muck of moving the data. This will give you 10x better query performance than you'll find anywhere else, it'll work with your existing Redshift implementations, we'll do all the work to make the migrations simple and easy, and it'll be available for you in mid-2020. We're really excited about this, and I think it's a pretty good example of something we see all the time, which is that when you build something new, it's only new for a period of time, and when you build something shiny, it's shiny until something shinier comes along. The great products and the great companies find a way to carefully listen to customers and relentlessly keep innovating on their behalf, and you've seen that not just in the Redshift space but in compute and storage and data analytics and machine learning from AWS in our first 13 years, and you should expect it to continue. Now, when you think about this scale of data and some of the ways we're changing Redshift to help you manage it, even looking forward a few years, Redshift is not the only analytic service that has to think about this. You also have to think about it in areas like Elasticsearch, and so many of you in this room use Elasticsearch. We have the Amazon Elasticsearch Service, which we launched a few years ago and which is growing like a weed; it's incredible how much people are using this service. We have tens of thousands of customers, very large customers like Nike, Intuit, Airbnb, and Pinterest, and as these customers think about their use of Elasticsearch and how the amount of data is changing what they want, they also realize there are challenges. There's an explosion of data because so many people are building with microservices now, so the amount of log data people want to use to monitor and assess their operational performance is just gigantic, and one of the extra challenges in Elasticsearch is that the file format is optimized for search, not for storage size, so it's relatively inefficient. We have a number of customers for whom storing months of operational data means many hundreds of terabytes, and what happens is it's expensive enough that customers don't do it; most of our customers are storing just a few days, maybe a week's worth, of operational data in Elasticsearch, and that's not what they want. There are a lot of reasons why you want to analyze your operational data, your log data, over a longer period of time. This again is something we've thought about, and we've tried to figure out whether we could build a solution that works for all of you, and I'm excited to announce the launch of UltraWarm, which is a new warm tier, really a warm tier on steroids, for the Amazon Elasticsearch Service. [Applause] Typical warm storage layers for Elasticsearch services aren't used very pervasively because the performance is pretty laggy and the durability is not very good, so we've taken a different approach in how we built UltraWarm: it's designed to be a warm tier on steroids, with much better durability, backed by S3. There are several things UltraWarm does a little differently. First, we use very sophisticated technology and advanced placement techniques to look at the blocks of data, down to the block level, and determine which are frequently accessed and which are not, and the ones that are not frequently accessed we move to S3 so you can save money; we think that with UltraWarm, if you use it right, you'll save about 90 percent on your storage cost versus what you're doing in Elasticsearch today, and it's 80 percent less expensive than any other warm tier you'll find out there for an Elasticsearch service in the cloud. And then, again leveraging Nitro and unique instances we built with fast bandwidth, if you need to make a query that pulls data from S3, we have a very high-bandwidth instance that makes that much faster, so you get the snappy, interactive performance you need and expect when you use Elasticsearch. We're very excited to provide this for you today; it's easy to sign up for, the preview starts today, and we're excited to have you take a chance on it.
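As a rough sketch of how a warm tier like this is turned on, here is a hypothetical boto3 call that creates an Amazon Elasticsearch Service domain with UltraWarm nodes enabled; the domain name, node counts, and instance types are placeholders, and the exact prerequisites (dedicated master nodes, supported Elasticsearch versions) should be checked in the service documentation.

```python
import boto3

es = boto3.client("es")

es.create_elasticsearch_domain(
    DomainName="ops-logs",                 # hypothetical domain name
    ElasticsearchVersion="7.1",
    ElasticsearchClusterConfig={
        "InstanceType": "r5.large.elasticsearch",
        "InstanceCount": 3,                # hot data nodes
        "DedicatedMasterEnabled": True,    # UltraWarm requires dedicated masters
        "DedicatedMasterType": "c5.large.elasticsearch",
        "DedicatedMasterCount": 3,
        "WarmEnabled": True,               # turn on the UltraWarm tier
        "WarmType": "ultrawarm1.medium.elasticsearch",
        "WarmCount": 2,
    },
    EBSOptions={"EBSEnabled": True, "VolumeType": "gp2", "VolumeSize": 100},
)
```

Older, less frequently queried indices can then be migrated to the warm tier, which is where the storage savings described above come from.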
So if you think about how the amount of data is changing what you do and what you use: it's changing how you think about data lakes, it's changing how you think about the access policies that give lots of people access to your data lakes, it's changing the analytic services you use, and it's changing the price-performance you're going to expect and need to process the amount of data you want in those analytic services. But it's also impacting databases, and databases are not immune to this issue; you have all the same types of challenges we talked about with analytics. For a couple of decades, a lot of companies primarily used relational databases for every one of their workloads, and the day of customers doing that has come and gone: there is just too much data for it to make cost sense or complexity sense anymore. What's happened is that this has led customers to ask for, really demand, purpose-built databases. If you're a company like Lyft and you have millions of drivers and geolocation coordinates, you don't want a relational database; it's too expensive, too complicated, and not performant enough. You want a very fast, high-throughput, low-latency key-value store, which is why we built DynamoDB. If you have a workload that requires microsecond latency, you want an in-memory database, which is why we built ElastiCache for Redis and Memcached. If you want to connect relationships across large data sets, you want a graph database, which is why we built Neptune. If you're doing a lot of IoT, like many of our customers, where what you're actually trying to measure is change over time, you want a database anchored on the variable of time, time series, and that's why we built and announced Timestream. If you run a supply chain and want a transparent, immutable, cryptographically verifiable ledger, you want a ledger database, which is why we built QLDB. Or if you do a lot of work with JSON documents, you want a document database, which is why we built DocumentDB, with MongoDB compatibility. This is a set of purpose-built databases that came from listening to what developers care about in optimizing their customer experience, and a lot of our developers look at this list and say, "this is pretty awesome, you have a selection unlike others, but there's one obvious missing one that I don't understand why you don't help us with," and that obvious one is Cassandra.
People say, "look, we manage Cassandra on premises, and when I tell you what it's like to manage Cassandra on premises, it's kind of the same story as all these other things we're trying to move to the cloud": it's hard to manage the hardware, it's hard to manage the software, it turns out it's really difficult to scale up and down, so we all scale up for the peak and sit on a lot of wasted cost, and the rollback features in Cassandra are pretty clunky, so people often keep operating on old versions of Cassandra, which is dangerous for obvious reasons around security. So customers said, "why don't you do something about it?" A lot of our customers, when they get to very large-scale Cassandra, tend to move to DynamoDB, and that's what companies like Nike and Samsung have done, but understandably companies have said, "I don't want to have to move; I want to be able to keep the Cassandra interface if I want to as I scale." So I'm excited to announce today the preview of the Amazon Managed Cassandra Service. This new Managed Cassandra Service is compatible with the Cassandra 3.11 release, there are no clusters to manage, and you get single-digit millisecond latency for all your workloads. You can choose provisioned capacity if you know what you need at a certain point, or you can just choose on-demand and pay on demand. It uses the same Cassandra tools and drivers, so it'll be easy to migrate your Cassandra workloads from on premises to the cloud, and we've integrated it across our various AWS platform capabilities so you can use it as part of the platform. So I'm excited to give Cassandra users this capability today. When you think about this collection of purpose-built databases, you won't find this collection anywhere else, and when a company says, "you don't need that many databases, I have a relational database and it can take care of all of this for you," you should nod and say "hmm"; and when some companies say, "no, I have a non-relational database and it does key-value really well and document really well and graph really well and time series really well," you should listen, you should be polite, and you should be very skeptical. Swiss Army knives are hardly ever the best solution for anything other than the simplest tasks. If you want the right tool for the right job, which gives you differentiated performance, productivity, and customer experience, you want the right purpose-built database for that job, and we have a very strong belief inside AWS that there is not one tool to rule the world: you should have the right tool for the right job, to make you spend less money, be more productive, and change the customer experience.
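Because the service keeps the standard Cassandra interface, existing drivers should work largely as-is. Here is a minimal, hypothetical sketch using the open-source Python cassandra-driver; the regional endpoint, port, CA bundle, and service-specific credentials are assumptions about the preview setup and should be confirmed against the service documentation.

```python
import ssl
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

# TLS is required; AmazonRootCA1.pem is the Amazon root CA, downloaded separately.
ssl_context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
ssl_context.load_verify_locations("AmazonRootCA1.pem")

# Service-specific credentials generated for an IAM user (hypothetical values).
auth = PlainTextAuthProvider(username="app-user-at-111122223333", password="...")

cluster = Cluster(
    ["cassandra.us-east-1.amazonaws.com"],  # assumed regional endpoint
    port=9142,
    ssl_context=ssl_context,
    auth_provider=auth,
)
session = cluster.connect()

# Regular CQL from here on, just as against a self-managed cluster.
row = session.execute("SELECT release_version FROM system.local").one()
print(row.release_version)
```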
Now, as I mentioned earlier, there are some transformations where you have to transform yourself, and there are some where you have to adjust to a changing environment, like what we're seeing with the huge amount of data, or to changing technology, and some of those transformations are relatively well-understood walks while others are longer and less well-understood journeys. That has been the case in machine learning for the first several years: developers and data scientists and companies are so passionate and so excited about getting value out of their data with machine learning that they've been willing to deal with clunky tools and walk those 500 miles to get there. However, as many of you know, most people aren't willing to walk 500 miles, so a lot of what we've tried to do in machine learning over the last few years at AWS, and what continues to be our mission, is to let you get done what you want to get done and get value from your data using machine learning much more easily than ever before. We have a lot of machine learning happening in AWS, and it's grown very substantially. I think sometimes people forget that machine learning isn't just something called "machine learning": you have to start with the right highly reliable, available, and scalable data store, then you need the right access control and security on top of that, and the right analytics, and then the right machine learning capabilities, and there's nobody with that collection of capabilities for machine learning like AWS, which is why we have tens of thousands of customers using us for machine learning, and twice as much machine learning happening on AWS as you'll find anywhere else; lots of companies you know, like Intuit, FICO, GE, Change Healthcare, Guardian, Volkswagen, the NFL, NASCAR, Formula One, Expedia, and Dow Jones, a very broad array of companies leveraging AWS for machine learning. When we look at the future of machine learning, one of the areas we're increasingly excited about, that we think can completely change the world and change outcomes for everybody in and out of this room, is how machine learning can help healthcare, and one of the leading healthcare companies in the world is Cerner. I'm going to welcome to the stage the CEO of Cerner, Brent Shafer, to share how they're transforming healthcare with AWS. [Music] Thank you, Andy, and thanks to AWS for the opportunity to be here with you today and share our story. At our core, Cerner is a healthcare technology company: we develop software that powers healthcare delivery throughout a patient's lifetime, across multiple venues and providers of care, and we do that while helping health systems efficiently manage complex billing and revenue cycles. We're a global company managing the data of nearly 250 million people around the world, and every day close to three million healthcare professionals in more than 30 countries access our secure systems. At 23 petabytes, Cerner has both one of the largest collections of personal health information in the world and a tremendous opportunity to transform the well-being of the world's population. Transformation is something we're accustomed to: for 40 years we've ushered in healthcare's digital age by moving medical data from paper charts and manila folders to electronic health records, and this process is nearly complete. It's providing a more organized view of patient medical history, it's improving communication among care teams, and overall it's improving the quality of care. But as you know, healthcare is a journey of continuous improvement, and we have a lot more to do globally. If you think about it, more than seven trillion dollars is spent on healthcare each year, and the United States alone accounts for roughly half of that, and recently the American Medical Association estimated that up to a quarter of US spending on healthcare, nearly a trillion dollars, is wasted. That waste comes in many forms: it comes in variation in care delivery, it sometimes comes in over-treating patients, and we know for sure we have data gaps that lead to challenges as patients age and receive care from different doctors and different facilities spread out across the country. So the volume of data and care delivery requires new tools. Let's start by thinking bigger: what Cerner is really doing is making the world's healthcare data actionable.
For example, we made paper data more accessible by going digital, and now we're working on reducing variation in how providers actually deliver care. But what we, and the rest of healthcare, really haven't done well yet is learning, predicting, and preventing problems by leveraging the power of the data. So who has done this before? Who knows how to leverage the power of data really well? Amazon Web Services. So we're delighted to be here, and in July of this year we announced an expansion of our AWS collaboration to help us drive our strategic priorities, which are around migration, modernization, and innovation. We're migrating our privately hosted platforms to AWS, and we're doing that under our joint pledge for the responsible and ethical treatment of healthcare data. We want to modernize the way we deliver our solutions by enabling software as a service and data science at scale, and it's really the infrastructure and machine learning services that will help us do that. And we're innovating by turning these services into new solutions for the marketplace. Our overall goal is to improve patient outcomes, to reduce administrative and operational complexity, and to predict and prevent health issues as early as possible. Let's look at two examples. As you may know, and you hear about it a lot in the press, unnecessary follow-up visits to a hospital are a huge contributor to waste in the healthcare system. We call these readmissions, and a readmission is basically a redo: think about taking your car to the local mechanic to have something fixed, and within 30 days you're back in the shop having the same work done again, very expensive and very frustrating. Ideally that second visit, that readmission, should not happen. In healthcare, readmission costs are actually higher than the initial visit for two-thirds of the most common diagnoses, and if we could predict that readmission and use it to help the clinician modify the treatment plan, we could possibly prevent it in the first place, which of course is much better for the patient, the caregiver, and the family, and it saves on cost. One of the largest healthcare providers in the United States asked us to help them predict which patients are at risk of being readmitted, so we leveraged five years of data we had aggregated together with them on AWS, and we used machine learning services to build, train, and deploy the predictive model. With that prediction, the caregiver can change their approach before discharging the patient, and that's really the key to preventing second episodes of care, which are often very significant for patients with traumatic brain and spinal cord injuries, stroke, and other neurological conditions, and many more. The model we built drew on the knowledge, skills, and capabilities of both the healthcare system and Cerner, and the collaboration was really made possible through the cloud infrastructure provided by AWS. As a result, the healthcare system reported its lowest readmission rate in more than a decade while simultaneously increasing its discharge-to-community rate, which basically means fewer patients were being readmitted and more patients were returning to their homes, which of course is where they prefer to be. As we leverage more of the infrastructure and machine learning services from AWS, we expect to see more successes like this.
Another example of the work we're doing with AWS is focused on returning the joy of practicing medicine. If you read the popular press, one of the things you hear is that in the United States about 40% of physicians report that they feel depressed or burned out by the stress of the role, and part of that is certainly that they're often spending more than half their day documenting information, just doing data entry, so this has really become a burden. Yes, we've digitized healthcare, but at the same time the documentation requirements have gone way up. So what if we could reduce or eliminate data entry for the physician? Think what an incredible opportunity that is. Cerner is developing a virtual scribe application that captures the doctor/patient interaction using speech recognition; it suggests allergies, medications, and medical problems and integrates that information directly into the physician's workflow, and we're using Amazon Transcribe Medical for the speech recognition that powers this innovation. I'm confident that AWS's clear leadership in voice and speech, combined with our expertise in healthcare workflows, will give doctors more time to spend with patients, which is what they really want. So we're excited about how this collaboration moves us closer to Cerner's vision, which is really a seamless and connected world where everyone thrives, created by breakthrough innovation, innovation that shapes the future of healthcare, because we can all appreciate that healthcare is far too important to stay the same. Thanks for your time this morning, and I hope you have a great day; thank you very much. [Music] Thank you, Brent. It is really an honor for us to be partnered with Cerner the way we are, and to be a small part of helping them change healthcare outcomes for all of you, so I really appreciate the partnership. I thought I would spend a little time sharing how our view of machine learning continues to evolve. We continue to believe there are three macro layers of the machine learning stack. The bottom layer is for expert machine learning practitioners who are very comfortable at the framework level, and this group deals with the three primary frameworks: TensorFlow, PyTorch, and MXNet. TensorFlow has the most resonance and the largest community today, and it continues to, and we have a lot of TensorFlow experience: if you look at the amount of TensorFlow that runs in the cloud, eighty-five percent of the TensorFlow in the cloud runs on AWS. We have a lot of customers running it, and we have a team that does nothing but work on optimizing TensorFlow performance on AWS, and they have done a lot of innovation: they have built the fastest-running TensorFlow you'll find anywhere, and they've really changed the scaling efficiency to achieve close to linear scalability across hundreds of GPUs, by innovating on the way TensorFlow shares model parameters between multiple instances, making that sharing faster and more efficient. So we have this separate team just on TensorFlow to optimize performance, but this is also where I think we have a pretty big difference from what others do: most other cloud providers try to funnel everybody through TensorFlow, and some have started to support a bit of the other frameworks, but really the vanilla versions out of the box, not tuned, which leaves all the work for you.
It's a little bit of a self-fulfilling prophecy that everybody's using TensorFlow, but one of the things we've realized as we've done research and talked to developers and data scientists is that 90% of data scientists use multiple frameworks, and that's because algorithms are being invented all the time, by people all over the place, in every type of framework, and people don't want to take the time to port an algorithm back to TensorFlow. So we support all three of the major frameworks equally well, and we have dedicated teams that don't just work on TensorFlow but teams that work just on PyTorch and just on MXNet, and it yields different results. Let me give you an example. This is a common computer vision algorithm called Mask R-CNN, and the previously fastest time to train it was from a company in Mountain View that did it in 35 minutes, using hardware that's not available to any of you; it's in some kind of private beta. Our team that works on TensorFlow in AWS, because of the innovation I mentioned, was able to get it done 20% faster, in 28 minutes, using P3 instances, which are available to you. But because we actually care about and support all the major frameworks, not just TensorFlow, our PyTorch team and our MXNet team also optimized how this algorithm ran, and actually did it 22% faster, in 27 minutes. We are always going to give you all of the major tools you need to do your job; we're not going to make decisions for you about what you must use; we will give you the right tool for the right job. Now, there aren't that many expert machine learning practitioners in the world, and most of them hang out at the tech companies, so if you really want machine learning to be as expansive as we all believe it can and should be, you have to make it more accessible to everyday developers and data scientists. That's why we built SageMaker, which we launched a couple of years ago and which is really a sea-level change in the ease with which developers and data scientists can build, train, tune, and deploy machine learning models, and it's incredible how many customers are already using SageMaker: we have tens of thousands of customers and thousands who are standardizing on top of SageMaker, like Avis, Bristol-Myers Squibb, Chick-fil-A, Conde Nast, Dow Jones, GE, Hearst, Liberty Mutual, Panasonic, and Siemens, a very broad group. The SageMaker team has gotten a lot of great input from all of our developers (thank you, keep it coming, please), and they have been hard at work: they've launched over 50 features just in the last year, and I'll touch on a few. About a year ago we launched Ground Truth, which makes it much easier for you to label your data. We gave you a marketplace with hundreds of algorithms from others that you can use in your machine learning. We were the first to build reinforcement learning into a service like SageMaker, which people have been using in conjunction with DeepRacer for a year (by the way, the DeepRacer championship is right before Werner's keynote on Thursday). You've always been able to do one-click training with SageMaker, but now you can do it on Spot instances if you're willing to have your training take a little longer, and save up to 90%. And then we built something called Neo, which lets you train once and then compile and run practically anywhere, including on the edge. These are the types of capabilities that make SageMaker so much easier to use and so popular.
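As a concrete illustration of the managed Spot training capability mentioned above, here is a minimal sketch using the SageMaker Python SDK; the container image, IAM role, bucket, and channel names are hypothetical placeholders.

```python
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()

estimator = Estimator(
    image_uri="<your-training-image>",  # hypothetical: any SageMaker training container
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",  # hypothetical role
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    use_spot_instances=True,            # run training on Spot capacity
    max_run=3600,                       # max training time, in seconds
    max_wait=7200,                      # total time including waiting for Spot capacity
    checkpoint_s3_uri="s3://my-bucket/checkpoints/",  # resume after interruptions
    sagemaker_session=session,
)

estimator.fit({"train": "s3://my-bucket/train/"})
```

The trade-off is exactly the one described above: training may take longer end to end because of Spot interruptions, in exchange for a large cost reduction.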
And while customers have said "you've made it so much easier to do machine learning than it was before, and you've made every step easier," it's still true that all the work in between those steps, getting visibility and figuring out what's going right or wrong, is harder than we wished. You could say the same has always been true of software development, which is why we have integrated development environments, or IDEs. The problem is that there's never really been an end-to-end IDE for machine learning, until now. I'm excited to announce SageMaker Studio, which is the first fully integrated development environment for machine learning. [Applause] SageMaker Studio is a web-based IDE that allows you to store and collect all the things you need, whether it's code, notebooks, datasets, settings, or project folders, all in one place with one pane of glass, and it makes it much easier to manage all those pieces in building a model. So I thought I would share some of the things that are part of SageMaker Studio that we're making available today. First are notebooks. I think a lot of people know what notebooks are in machine learning: they're a place people use to build machine learning workflows, and they contain sections of code, documentation, visualizations, and results. SageMaker notebooks are paired with compute, and if it turns out you need more compute or less compute, you have to go spin up another instance and then do all the work to transfer the contents from the first notebook to the new notebook, and it's just a little tedious, and customers asked us to make that easier. So I'm excited to announce the launch of SageMaker Notebooks, which are one-click notebooks with elastic compute. [Applause] Now you can spin up a notebook with a click, it happens in seconds, and if you need more compute than you thought, you just tell us the compute you want for that notebook and we manage it for you; we do all the heavy lifting of transferring the contents from the first notebook to the second and then shutting down the first notebook if that's what you want. So, a much easier way to manage notebooks. And people say, "that's great, notebooks are such an important part of doing machine learning, but let me tell you about another problem: when you're doing machine learning, you're trying all kinds of experiments, and you're iterating like crazy across lots of different parameters and dimensions, and as you iterate it creates all these artifacts, and they live all over the place and I can't find them and I can't share them; please make this easier." So I'm excited to announce the launch of SageMaker Experiments, which is a way to capture, organize, and search every step of building, training, and tuning your models, automatically. [Applause] With SageMaker Experiments, it captures all the input variables, all the parameters, the configuration, and the results automatically, and saves them in what SageMaker calls an experiment. You can have multiple experiments in a project, and now you can not only browse your active experiments and see them in real time, you can also search for older experiments by name, by input parameters, by the dataset used, by the algorithm, or even by the results. So it's a much, much easier way to find, search for, collect, and share your experiments as you're building a model.
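A rough sketch of what the Experiments pieces look like through boto3, with hypothetical experiment, trial, and bucket names; training jobs launched with an ExperimentConfig are then tracked under the experiment automatically.

```python
import boto3

sm = boto3.client("sagemaker")

# Create an experiment and a trial to group related training runs.
sm.create_experiment(
    ExperimentName="house-price-models",
    Description="Comparing feature sets and hyperparameters",
)
sm.create_trial(TrialName="xgboost-depth-6", ExperimentName="house-price-models")

# A training job launched with an ExperimentConfig is captured under that trial,
# e.g. via the SageMaker SDK:  estimator.fit(..., experiment_config={
#     "ExperimentName": "house-price-models",
#     "TrialName": "xgboost-depth-6",
#     "TrialComponentDisplayName": "training"})

# Later, list everything recorded under the experiment.
components = sm.list_trial_components(ExperimentName="house-price-models")
for tc in components["TrialComponentSummaries"]:
    print(tc["TrialComponentName"])
```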
So, a much easier way to manage notebooks and a much easier way to manage experiments. People then ask, "how can you make training easier?" Training is actually quite difficult for lots of reasons, not least that you're trying to work across dozens and dozens of parameters, and a lot of the time you don't really know which dimensions are actually impacting the model. Looking at a trained model is a little like looking at a compiled binary to understand how an application works: it's just totally opaque, like gibberish to the naked eye, and people want a better idea of what's driving their model so they can adjust it, fix it, and explain it. So I'm excited to announce SageMaker Debugger, which allows developers to debug and profile their model training to improve the accuracy of their machine learning models. With Debugger, it's on by default; we've done a bunch of work with all three major frameworks and SageMaker so it automatically sends you the metrics you want to monitor so you can see what's actually happening. And then we have a capability in Debugger called feature prioritization, which puts a spotlight on the actual dimensions, the features, that are having an impact on the model, and this does three very useful things. First, you actually know what's driving the model, which is hugely helpful as you're training it. Second, if you have an underperforming neural network model, you might want to know which dimensions it's leaving out, which can help you understand why you're getting predictions that don't match what you think should be the case. And third, if feature prioritization shows that a model is overly reliant on just a small number of dimensions, you might have bias in your model that you want to change. So it's very useful to help you train, to understand what matters, and to be able to interpret your model.
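A minimal sketch of attaching Debugger to a training job with the SageMaker Python SDK; the built-in rules shown here are examples of the automatic checks described above, and the image, role, and S3 paths are hypothetical placeholders.

```python
from sagemaker.estimator import Estimator
from sagemaker.debugger import Rule, rule_configs, DebuggerHookConfig

estimator = Estimator(
    image_uri="<your-training-image>",  # hypothetical training container
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    # Tensors emitted during training are saved here for inspection in Studio.
    debugger_hook_config=DebuggerHookConfig(s3_output_path="s3://my-bucket/debug-tensors/"),
    # Built-in rules watch the training run and flag problems automatically.
    rules=[
        Rule.sagemaker(rule_configs.loss_not_decreasing()),
        Rule.sagemaker(rule_configs.vanishing_gradient()),
    ],
)

estimator.fit({"train": "s3://my-bucket/train/"})
```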
So: easier notebooks, easier experiments, a debugging and profiling capability. And people say, "how can you help me when I have models that have been working for a long period of time and all of a sudden the predictions aren't relevant anymore?" This happens. To take an overly simplified example, let's say you built a model in 2016 that estimated housing prices. The model worked well in '16 and it worked in '17, primarily because the conditions were the same, but then in 2018, as interest rates changed and housing prices went up, the model stopped making accurate predictions. This is a concept in machine learning that people call concept drift, and if you know there's concept drift you can actually make changes to the model, but the overwhelming majority of models are far more complicated than the simple example I just gave, and it's really hard to find the concept drift: it's all kinds of data wrangling where you have to look at what data the model was trained on, what data you're making predictions on now, how they're different, and how they've changed, and it's just really complicated. So we're excited to help solve that for you today with SageMaker Model Monitor, which is a way to detect concept drift by monitoring models deployed to production, automatically. With Model Monitor, we create a set of baseline statistics on the data on which you trained the model, then we analyze all the predictions, compare them to the data used to create the model, and give you a way to visualize where there appears to be concept drift, which you can see in SageMaker Studio, so you can take charge of it and figure out how to make adjustments when you have concept drift. Now, all of these things I just mentioned, notebooks, experiments, training with debugging and profiling, and concept drift, assume that you are building models. But we know there are a whole bunch of data sets that are very useful, where people just don't have the time, the wherewithal, or the capability to train a model. A simple example: let's say you had an operational database of all your sales leads and the leads that actually ended up becoming real sales; if you could build a simple model that predicted which variables lead to leads converting to sales, you would spend your scarce resources following up on those particular opportunities. That's where the promise we've talked about in the past of something people call AutoML, automatic machine learning, comes in, but there have been a couple of problems with the AutoML offerings people have tried to roll out. The first is that they build an okay, simple model initially that is a total black box, so if you want to improve a mediocre model, or evolve it because it matters to your business, you have no idea how the model was built and there's nothing you can do about it. Or if you want to make trade-offs, maybe in some cases you don't take the absolute best accuracy and instead trade a little accuracy for something like faster prediction latency given the nature of your application, you're out of luck; you have just this one simple black-box model. So customers have said, "we want AutoML, but we want more visibility." I'm happy to announce SageMaker Autopilot, which is AutoML with full control and visibility. Here's what happens with Autopilot: you send us a CSV file with the data you want a model for, or just point to its S3 location, and Autopilot does all the transformation of the data to put it in a format where we can do machine learning, it selects the right algorithm, and then it trains 50 unique models with slightly different configurations of the various variables, because you don't know which ones are going to lead to the highest accuracy (by the way, even if you know how to build machine learning models, training 50 of them takes quite a bit of time, so this is very useful). Then we give you, in SageMaker Studio, a model leaderboard where you can see all 50 models ranked in order of accuracy, and we give you a notebook underneath every single one of those models, so when you open the notebook it has the whole recipe for that particular model: how it was built, the configuration, the parameters, the algorithm. It gives you the whole recipe, so that if you want to take that model and evolve it into something that really changes your business over a long period of time, you can. And you can also look at that leaderboard of 50 models and notice that between model 1 and model 2 the difference in accuracy is tiny but the difference in latency is significant, which makes your application make predictions much quicker, and you may choose to take that one.
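Here is a short sketch of launching an Autopilot job from the SageMaker Python SDK, using the house-price example that the demo below walks through; the target column, S3 locations, role, and job name are hypothetical placeholders.

```python
import sagemaker
from sagemaker.automl.automl import AutoML

automl = AutoML(
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",  # hypothetical role
    target_attribute_name="sale_price",   # the column Autopilot should learn to predict
    max_candidates=50,                     # how many candidate models to train
    sagemaker_session=sagemaker.Session(),
)

# The input is simply a CSV in S3, as described above.
automl.fit(
    inputs="s3://my-bucket/housing/train.csv",
    job_name="housing-autopilot",
    wait=False,
)

# Once the job finishes, the ranked candidates (and their generated notebooks)
# can be inspected; here we just print the top of the leaderboard.
best = automl.describe_auto_ml_job(job_name="housing-autopilot")["BestCandidate"]
print(best["CandidateName"], best["FinalAutoMLJobObjectiveMetric"])
```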
So Autopilot lets you do AutoML in a way that not only creates a model automatically but gives you full visibility and control, so you can evolve the model and make trade-offs yourself. When you look at SageMaker before today, it had become a sea-level change in the ability to build, train, tune, and deploy machine learning models; with SageMaker Studio it's a giant leap forward: it's the first IDE for machine learning, and it's going to make it even easier for everyday developers and data scientists to build machine learning models. To show you how the whole thing comes together, it's my pleasure to welcome to the stage, as I do every year, the inimitable Dr. Matt Wood. [Music] Thank you, Andy, and good morning, everybody. SageMaker Studio pulls together, for the first time, dozens of machine learning tools into a single pane of glass that contains all the tools you need to build, train, tune, and deploy your machine learning models, and our aim here is to put machine learning in the hands of even more developers and data scientists than we have in the past. I'm going to give you a guided tour of SageMaker today using a very simple example: we're going to build a machine learning model that predicts house prices. We're going to do this using a couple of simple parameters, things like the number of bedrooms and bathrooms for each individual house, along with things like the mortgage rate and the price, and we're going to go ahead and collect this; machine learning learns by example, so the more examples you have, the better. We collect all of this sales information from across the US, then we simply wrap it up into a CSV file and drop it into S3. Now we can move into SageMaker Studio, and in just a few clicks from inside the IDE we can launch a new Autopilot job; we simply tell SageMaker where the data is. What SageMaker Autopilot does at this point is start to spin up multiple different models, each with a different set of algorithms, data sets, and parameters; the dirty secret of machine learning is that you don't just train a single model, you train dozens and pick the best one. Here SageMaker is using all the information it has to automatically pick the algorithm and the parameters and train multiple different models, and it does this iteratively: we're just showing four here, but in the next iteration it picks the best features, using machine learning under the hood, to seed the next iteration and the next, and over time SageMaker Autopilot starts to home in on the best set of algorithms, data features, and parameters to provide the best possible model, and it provides a ranked leaderboard of candidate models, here ranked by accuracy. Now, in almost all cases you're going to want to choose just the best-performing model, the one with the best accuracy, but because SageMaker Studio is using the Debugger under the hood and providing profiling information, and Autopilot is automatically generating the notebooks, you can dive into any one of these individual models and get a closer look: you can see exactly how SageMaker Autopilot pulled together all that data, you can look at the exact parameters and the exact algorithms used to generate it, and because we're using the Debugger under the hood by default, you can start to inspect the features and their prioritization inside those models.
This allows you, with new levels of visibility and insight, to pick the best model, one that not only has the right accuracy but also meets other expectations, such as whether you're treating the data correctly or whether your model potentially contains any bias. At this point, when you've selected the model you want to deploy, you can do that with just a single click inside SageMaker Studio; it gets deployed, and you can turn on SageMaker Model Monitor. What we're doing here is, step by step, starting to compare the data being used to make predictions with the baseline data used to train the model, and at any point, if SageMaker Model Monitor starts to detect statistical deviations, it will give you an alert, and that's a really good indicator that you want to go back, look at your model again, and potentially retrain it. Here we can see that the mean of the mortgage-rate data has started to change; this probably has something to do with interest rates. At this point we can jump back into Studio, we can either pull up our notebooks or start from scratch using Autopilot and train another set of models, probably with fresh data, then we go through the process again: we select our best-performing model, we deploy it into production in a fully managed, elastic, multi-AZ environment, and model monitoring continues, watching for any deviation going forward. So, for the first time, SageMaker Studio pulls together the tools developers are used to using with traditional software, debuggers, profilers, automation, management, into a single pane of glass that can be used to build, train, deploy, and manage your machine learning models in a way that is far easier and far more accessible for even more developers and even more data scientists. And with that, I'll hand it back to Andy; thanks.
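The drift-detection step in the demo above corresponds roughly to the following Model Monitor sketch in the SageMaker Python SDK, assuming an endpoint that was deployed with data capture enabled; the endpoint name, role, and S3 paths are hypothetical placeholders.

```python
from sagemaker.model_monitor import DefaultModelMonitor, CronExpressionGenerator
from sagemaker.model_monitor.dataset_format import DatasetFormat

monitor = DefaultModelMonitor(
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",
    instance_count=1,
    instance_type="ml.m5.xlarge",
)

# Build baseline statistics and constraints from the training data.
monitor.suggest_baseline(
    baseline_dataset="s3://my-bucket/housing/train.csv",
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri="s3://my-bucket/monitor/baseline/",
)

# Compare captured prediction traffic against the baseline on a schedule;
# deviations (like the shifting mortgage-rate mean above) appear as violations.
monitor.create_monitoring_schedule(
    monitor_schedule_name="housing-endpoint-monitor",
    endpoint_input="my-housing-endpoint",  # hypothetical endpoint with data capture on
    output_s3_uri="s3://my-bucket/monitor/reports/",
    statistics=monitor.baseline_statistics(),
    constraints=monitor.suggested_constraints(),
    schedule_cron_expression=CronExpressionGenerator.hourly(),
)
```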
Thank you, Dr. Wood. I always love Dr. Wood's presentations; I appreciate it. Okay, so we talked about the bottom layer of the stack for expert machine learning practitioners, and the middle layer of the stack with SageMaker, and now SageMaker Studio, for everyday developers and data scientists. Then there's the top layer of the stack, which we call AI services because these services most closely mimic human cognition, and we have a broad array of services there. For vision we have Rekognition, which handles things like "here's an image, tell me what's in it" or "here's a video, tell me what's happening in it." For speech we have text-to-speech with Polly, and the ability to transcribe audio with Transcribe. On the text side, people want that transcribed text translated into multiple languages, which we do with Translate; we have an OCR++ service, Textract, that pulls data not only from printed material but also from complicated formulas, tables, and graphs; and we have the ability to do natural language processing on top of all that data, so you don't have to do all the work yourself to read it and know what's going on in there. Then we have a number of services that were born out of things we've done at scale at Amazon that you've asked us to expose as services for you: we took the natural language understanding and automatic speech recognition in Alexa and made it available in a service called Lex, and at last year's re:Invent we gave you the deep personalization and forecasting experience we have at Amazon in services called Personalize and Forecast. Customers have asked us to think about other areas where we have really deep experience that could help, and one of the obvious ones is fraud. Tens of billions of dollars around the world are lost to fraud every year, and people have fraud detection services and systems, but they're pretty expensive, pretty clunky, they don't use much machine learning, they're hard to manage, and they have a lot of hard-coded rules that don't scale well. One of the things we've learned in over 20 years of doing fraud detection in Amazon's consumer business is that machine learning is unbelievably helpful, but for all the reasons we've talked about with walking 500 miles, it's hard for most companies to use machine learning in fraud detection. So we've thought about that, and we decided to announce today the launch of a new service called Amazon Fraud Detector, which is a new machine learning service that does fraud management for you. Here's how it works: you send us your transaction data, and we take that data, along with the algorithms we've built to detect fraud in Amazon's consumer business, and build a unique model for you. We also have a set of fraud detectors we've developed for our consumer business at Amazon, and we overlay those on top of the base model we built for you, creating your own unique model that we expose to you through a private endpoint API. Then, as you have new activities, like signups or online purchases, you call the API, we run them through the model, and we return a fraud score, among other things, so you can take action, much of which you'll automate, based on that score. It's a completely different way to manage fraud, using machine learning, that we're excited to give you today.
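A small sketch of what the scoring call might look like through boto3 once a detector has been set up; the detector name, event type, entity, and variables are hypothetical placeholders configured in the service beforehand.

```python
from datetime import datetime, timezone
import boto3

fd = boto3.client("frauddetector")

response = fd.get_event_prediction(
    detectorId="new_account_detector",            # hypothetical detector
    eventId="802454d3-f7d8-482d-97e8-c4b6db9a0428",
    eventTypeName="new_account_registration",      # hypothetical event type
    eventTimestamp=datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
    entities=[{"entityType": "customer", "entityId": "customer-123"}],
    eventVariables={
        "email_address": "jdoe@example.com",
        "ip_address": "203.0.113.10",
    },
)

# The returned scores and matched rules drive the automated action described above,
# e.g. approve, send to manual review, or block.
print(response["modelScores"], response["ruleResults"])
```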
Now, personalization, forecasting, fraud detection, automatic speech recognition, natural language understanding: these are all things we have done at scale at Amazon that we've exposed as services, and I think one of the things that's different about AWS from other technology companies is that a lot of companies in the technology space tend to work on things they think are cool, where they're attracted to the technology, which is understandable (we like working on cool things too), but our priority and our focus is not just working on cool technology or something that looks good in a press release; we spend all our time trying to build solutions that help you get your job done better and change your customer experience. The question that dominates our conversations as we think about what to work on and spend resources on next is: what else can we build that gives value to you? And in places where we've done something over a long period of time at Amazon, we're going to see if we can find a way to expose it as a service so you can get your job done better. One of the areas where, as we were racking our brains, we thought we might be able to help further is code. Most people in this room know the routine: you write code, you review the code, you have a mechanism to build and deploy the code, then you measure it, improve it, and rinse and repeat. The problem, of course, is that if there's a problem with the code, the other steps don't really matter; you're going to have a bad customer experience, and that's why we all do code reviews. They're all manual code reviews, and there are a lot of organizations that don't have enough people to do them, and even their best people miss things because they're moving fast and have a lot going on. So we thought about this issue and wondered if we could provide some help, and I'm excited to announce the launch of a new service today called Amazon CodeGuru. [Applause] It's a new machine learning service to automate code reviews and also to identify your most expensive lines of code. As I mentioned, the service has two components, and I'm going to start with the automated code review. You write your code and commit it as you always do; we'll support GitHub and CodeCommit to start with, and other repositories over time. When you submit a code change and open a pull request, you do that as normal, but you just add CodeGuru as one of the recipients of that pull request. CodeGuru then runs it through models and algorithms we've built from millions of code reviews done at Amazon over the last 20 years, along with training we've done on the 10,000 most popular open-source projects, and it provides an assessment of your code; where we see a problem, we give you a human-readable comment that tells you what the issue is and points it out, down to the line of code. What are some of the things it will detect? The first is AWS best practices: if you're missing pagination, if you're not handling errors correctly, or if you're using the APIs or SDK features in a way we think is suboptimal. When we shared CodeGuru privately with a few customers, just this first piece, adherence to AWS best practices, was a game-changer for them. But it will also identify concurrency issues, things like atomicity violations
or using non-thread-safe classes, which turn out to be pretty difficult to find; incorrect resource handling, like failing to release streams or database connections; or unsanitized inputs that could lead to injection attacks or denial of service. CodeGuru will identify all of these types of problems for you in human-readable comments, making your code reviews much faster and more reliable. The second part of what CodeGuru does came out of a question from a number of operational readiness reviews at Amazon, where we talk to different teams about their applications and whether there may be issues, and we're always asking ourselves: how can we find the most inefficient, unproductive, most expensive lines of code? That's the second piece of CodeGuru, which is really a machine-learning-powered profiler. It's simple to get started: you configure it in the console, you install a small, low-profile agent on your application, and then CodeGuru observes your application, creates a profile every five minutes, and tells you things like latency and CPU utilization, helping you identify the most expensive lines of code in your application so you can improve them. This can make a big difference. We've used it at Amazon for a couple of years now; we have 80,000 applications internally using the profiler part of CodeGuru, and it's led to tens of millions of dollars of savings for us. I'll give you a simple example: on Prime Day, which is the largest e-commerce day in the world, our consumer payments team was using CodeGuru for profiling, and they found most expensive line of code after most expensive line of code and made changes, and throughout the year, even as their application was growing very substantially because our consumer business is growing quickly, they were able to improve their CPU utilization by 325 percent and save 39 percent in cost from where they were before, in just a year. So it makes a big difference, and we're very excited to give you CodeGuru today.
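Returning to the CodeGuru Reviewer workflow described above, a minimal sketch of hooking a repository up for automated reviews through boto3 might look like the following; the repository name is a hypothetical CodeCommit repo, and in practice the association can also be done from the console.

```python
import boto3

reviewer = boto3.client("codeguru-reviewer")

# Associate a repository so pull requests are analyzed automatically.
assoc = reviewer.associate_repository(
    Repository={"CodeCommit": {"Name": "my-payments-service"}}  # hypothetical repo
)
print(assoc["RepositoryAssociation"]["AssociationArn"])

# Once a pull-request analysis has run, its human-readable findings can be listed:
# reviewer.list_recommendations(CodeReviewArn="<arn-of-the-code-review>")
```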
Now, a year ago in this keynote, one of the things I talked about was that we have these two macro segments of developers and customers: there are some builders who want access to all the low-level building blocks so they have the flexibility to stitch applications together however they see fit, and there's another segment of builders who say, I'm willing to give up some of that flexibility in exchange for getting 80 percent of the way there faster; they want a different level of abstraction. The same is true in machine learning. We have loads and loads of customers using all those building blocks I talked about at that top layer of the stack, but we also have customers who say, look, I would actually like you to stitch some of these together so I don't have to do as much of the work. Let me give you an example. Look at our service called Amazon Connect, which is our call center in the cloud service, and which is off to an unbelievably fast start, one of the fastest growing services in the history of AWS, with customers like Intuit and Capital One and GE and Citigroup and John Hancock and Best Western and Johnson & Johnson and Hilton. It's just off to a blazing start. The reason people like this service so much is that it uses the same customer service technology that Amazon has used for several years, it's very scalable, it's easy to scale agents up and down, it's very cost-effective, and it's really easy to get started with and use; you don't have to be highly technical to use it. And it's the first call center in the cloud that's set up right from the get-go with the cloud and machine learning in mind. Actually, the use of AI here is pretty interesting. We have capabilities in there like chatbots and IVR, which is interactive voice response, and people could have easily built those themselves on top of Lex, but they love the fact that we made them push-button features where they could just get IVR or just get chatbots, and they said, can you do more of that? So we looked at an area where we hear a lot of feedback from customers that they wish it were easier, and that is doing analytics around their calls. People say: I want to store all these phone calls, I want to transcribe them to text, I want to be able to search them, I want to know what's in them without having to read every single one, I want to know if there's positive or negative sentiment, and I'd like an alert if there's some kind of problem. Our answer to date has been: sure, that's really easy. You store all the data in S3, you use Transcribe to transcribe the audio to text, you use Elasticsearch to make it searchable, you use Comprehend to do the natural language processing, and you use SNS, our Simple Notification Service, to create alerts. Some customers say, great, that's what I'll do, and they're on their way, but we have other customers in that second segment who say, okay, but can you just make that easier? And so I'm happy to announce today the launch of Contact Lens for Amazon Connect, which is machine-learning-powered contact center analytics for Connect. What Contact Lens does for you is this: you activate it with a single click, and Contact Lens starts to transcribe all this data and analyze it automatically. For each call it provides a full text transcription, it tells you the positive or negative sentiment of each contact, and it captures things like whether there were long periods of silence, which often means an agent doesn't know the material or maybe there's some unhappiness, or whether people are talking over each other, which also, by the way, usually means a bad customer experience. Then it lets customers search against the transcriptions by keywords, by specific phrases, by sentiment, by things like long periods of silence or people talking over each other, or by multiple dimensions, like give me all the contacts that had negative sentiment and talked about shipping delays. You can create dashboards to show the status of your overall contacts and your level of compliance against the SLAs that you set. So it is a very different capability, and in mid-2020 we'll also give you the ability to see these transcriptions happening in real time, and we'll call out for you whether or not you have a problem so that you can take action in the middle of that contact. We're very excited to give customers this capability. Some of the customers we shared this with privately have said: well, this is awesome, I love that you can make sense of my customer service contacts, but if you can provide this level of understanding for all my customer service phone calls, why can't you provide this level of understanding for data that lives inside my own enterprise?
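For readers who want to see what that do-it-yourself analytics pipeline looks like in code, here is a minimal, hedged sketch in Python with boto3, assuming call recordings already land in S3. The job name, S3 URI, and SNS topic ARN are hypothetical, and a real pipeline would also index transcripts in Elasticsearch for search and would poll for the finished transcription job rather than assuming the transcript text is already available.

```python
import boto3

transcribe = boto3.client("transcribe")
comprehend = boto3.client("comprehend")
sns = boto3.client("sns")

def start_transcription(job_name, recording_uri):
    # Kick off a Transcribe job for a call recording stored in S3,
    # e.g. s3://my-call-recordings/call-123.wav (hypothetical).
    transcribe.start_transcription_job(
        TranscriptionJobName=job_name,
        Media={"MediaFileUri": recording_uri},
        MediaFormat="wav",
        LanguageCode="en-US",
    )

def alert_on_negative_sentiment(transcript_text, topic_arn):
    # Run sentiment analysis on the finished transcript and raise an SNS
    # alert when the dominant sentiment is negative.
    sentiment = comprehend.detect_sentiment(Text=transcript_text, LanguageCode="en")
    if sentiment["Sentiment"] == "NEGATIVE":
        sns.publish(
            TopicArn=topic_arn,  # hypothetical alerts topic
            Subject="Negative customer call detected",
            Message=transcript_text[:1000],
        )
```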
And if you think about it, if you have an enterprise that's anything like ours (and Amazon is a reasonably technically savvy enterprise), there is all of this data that you have internally that lives everywhere: it lives in SharePoint files, it lives on intranets, it lives in file systems all over the place. It's really hard to do the work to unite all these silos and then actually build some kind of index, and then, if you are lucky enough to unite them and build an index, building search that is actually useful is quite hard, because almost all the search on this type of internal enterprise data is keyword-based, which often doesn't answer the questions you want. If I want to know where the IT help desk is in the re:Invent building, I'm not sure exactly which keyword to use. So, on top of it being hard to unite and index the data and build sophisticated search, the results you get when you search your internal data in enterprises are gobbledygook: long lists of links with low relevance that don't really help you find anything, so all that data inside your enterprise remains hidden and frustrating. This was another very obvious place where, frankly, our developers and customers pointed out that we should try to help, and that's why I'm excited to announce the launch of Amazon Kendra, which is a new service that reinvents enterprise search with machine learning and natural language processing. We're super excited about Kendra; we think it's going to totally change the value you get from all the data that lives inside your enterprises. And to give you an idea of how it all comes together and works, I'm going to welcome back to the stage Dr. Matt Wood. [Music] Thanks again, Andy. So Amazon Kendra allows you to completely reimagine your internal enterprise search using machine learning, but without requiring your teams to have any machine learning expertise, and I'm going to give you a quick example of how to set it up and some of the key capabilities. With Amazon Kendra, you can set everything up through the AWS console. You get started very simply by configuring the data sources inside your organization; these are the silos of data, and we have custom-built connectors where you just have to provide your credentials, and Kendra will go ahead and inspect and index all the data inside those silos. Next, you can optionally provide a set of FAQs, frequently asked questions. These are really common in things like knowledge bases, support workloads, or new-hire documentation, where you have a set of questions and a set of answers; you provide those, and the locations of your documents in S3, and we index them separately using our machine learning models under the hood. In the next step you simply sync and index your data: Kendra goes ahead, pulls in all of your data, and builds an index. And we're not just indexing the keywords inside the documents here; we're using machine learning and natural language understanding to identify the concepts and the relationships between the documents, based on the text inside those documents. Unlike the World Wide Web, where you can rely on the structure of HTML and on the links between documents, that structure doesn't exist inside enterprise data sets; you just have tons and tons of unstructured data spread across all of these silos. Kendra can pull all of that in, understand and relate the concepts inside that data, and then build an index which you can query using natural language, and you can set Kendra to automatically refresh that index whenever you want.
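The same setup flow is also exposed through the API. Here is a rough, hypothetical boto3 sketch of creating an index, attaching an S3 data source and an FAQ file, and kicking off a sync; the names, ARNs, and bucket are placeholders, and in practice you would wait for the index to reach the ACTIVE state before adding sources.

```python
import boto3

kendra = boto3.client("kendra")

# Create an index (in a real script, poll describe_index until it is ACTIVE).
index = kendra.create_index(
    Name="internal-docs",
    Edition="DEVELOPER_EDITION",
    RoleArn="arn:aws:iam::123456789012:role/KendraIndexRole",  # hypothetical
)
index_id = index["Id"]

# Point Kendra at an S3 silo; connectors for other sources are configured similarly.
source = kendra.create_data_source(
    IndexId=index_id,
    Name="intranet-exports",
    Type="S3",
    Configuration={"S3Configuration": {"BucketName": "my-intranet-exports"}},
    RoleArn="arn:aws:iam::123456789012:role/KendraSourceRole",
)

# Optionally load an FAQ file (question/answer pairs) stored in S3.
kendra.create_faq(
    IndexId=index_id,
    Name="it-helpdesk-faq",
    S3Path={"Bucket": "my-intranet-exports", "Key": "faqs/it-helpdesk.csv"},
    RoleArn="arn:aws:iam::123456789012:role/KendraSourceRole",
)

# Sync: pull the documents in and build the index.
kendra.start_data_source_sync_job(Id=source["Id"], IndexId=index_id)
```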
Next, directly in the console, you can test and refine your queries: you can do test searches and then refine the results in real time. For example, if you have a query for sales reports where you want the most recent sales report to pop up to the top, you can just drag a little slider and away it goes. Finally, you can deploy it. Kendra comes with a pre-built web application which you can just drag and drop and host on your intranet, or you can cut and paste code (Kendra will automatically generate the code for you) and drop it into your existing internal applications, and it will just start to integrate search with all the capabilities that you expect, including things like type-ahead prediction. So let's take a look at how this search performs. Here is an old-school, janky search where we're just doing keyword matching; this is the current state of the art, and you can see that a question as simple as "where is the IT support desk" returns just a ton of low-quality, spurious results which don't actually answer my question. Now I have to go in and click on these links and try to figure it all out for myself, so I'm not really saved any time. By doing the search with Kendra, because we're looking inside and understanding the relationships and the content itself, we can answer natural language queries such as "where is the IT support desk" with a real answer: we can say it's on the first floor and point to the document where we got that from. We can do other things as well, because we understand the concepts inside the documents. Take "what time is the IT help desk open": here Kendra understands you're searching for a time, it understands the concept and the relationships of times, dates, and places, and so we can pull up from the document the answer to your question, that it's open from 12:30 to 5:00 p.m. daily. Another benefit of using machine learning under the hood is that just by using the service, the machine learning models get better and better. Without your teams lifting a finger or going into any of the machine learning models, we can take feedback, smiley faces and upside-down smiley faces, to show whether those results were useful, and we'll also automatically track which links your end users are clicking on and use that to continually improve the models under the hood, without you having to do any of the custom labeling or training yourself. So Amazon Kendra is incredibly easy to set up, it allows you to combine your data sources to provide accurate answers to natural language queries, and, using our machine learning models, it continuously improves, with no machine learning expertise required. It's incredibly exciting to be able to reinvent search with machine learning, and with that, I'll hand it back to Andy. Thanks.
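And here is a hedged sketch of what those natural-language queries and the feedback loop look like through the API, again with boto3 and a placeholder index id.

```python
import boto3

kendra = boto3.client("kendra")

def ask(index_id, question):
    # Natural-language query against an existing index.
    response = kendra.query(IndexId=index_id, QueryText=question)
    for item in response["ResultItems"]:
        # ANSWER-type results carry an extracted answer; DOCUMENT results are ranked sources.
        excerpt = item.get("DocumentExcerpt", {}).get("Text", "")
        print(item["Type"], "-", excerpt)
    return response

response = ask("my-index-id", "Where is the IT support desk?")

# Relevance feedback (the "smiley faces") feeds back into the ranking models.
kendra.submit_feedback(
    IndexId="my-index-id",
    QueryId=response["QueryId"],
    RelevanceFeedbackItems=[
        {"ResultId": response["ResultItems"][0]["Id"], "RelevanceValue": "RELEVANT"}
    ],
)
```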
Thank you, Dr. Wood. I am very excited to see what you all do with Kendra. So when you look at machine learning end to end, as we said earlier, sometimes people trick themselves into thinking that machine learning is a single service. That's not what machine learning is. You need the right highly secure, highly reliable, fully featured data store with the right access control and the right security, the broadest set of analytics, and really robust offerings in all three layers of the machine learning stack, and we believe that most companies with modern technology capabilities will, in the future, clearly operate in all three layers of that stack. That's what machine learning is and what's needed, and nobody has that set of capabilities collectively like AWS, which is why twice as many companies are using AWS for machine learning as anybody else. Now, when you think about transformation, I mentioned earlier there are things you have to do to transform yourself and things you have to do to transform to meet new technical opportunities and situations, and one of the things we notice, once you get through those first five pieces of transformation we've talked about, is that oftentimes people get addled if they can't figure out how to move every last workload. If they have some workloads that must remain on premises, somewhere not in the cloud, they sometimes stall their whole plan for making this transformation. You know, it's easy to run or to hide when you have angst about a big change you've got to make, but it doesn't accomplish very much, and we see this type of activity and thought process sometimes in companies that have the hard task of making this big transformation: if they can't figure out how to move every last workload, they sometimes don't know how to move forward at all. That's something we have tried to help with over the last number of years, and we're continuing to make it easier for customers, so I'll mention three of these areas. The first is really around customers who say, I'm moving the overwhelming majority of my applications to the cloud, but there are some workloads that have to stay on premises, maybe because they have to be close to something like a factory; how can you help me do that? This is how we got started with the very unusual and deep partnership that we have with VMware, and the joint offering we spent a bunch of time working on and launching together, called VMware Cloud on AWS, which allows customers to use the same software and tools they've used to manage their infrastructure via VMware on premises, but also to use them to manage their infrastructure in AWS. This has been something customers are very excited about; it's the only managed offering of this sort that VMware runs, and it has a fair bit of traction: we have about four times more customers than we had a year ago at this time, nine times the number of VMs, and a number of companies you know of that are using it, like Cerner and Accenture and PennyMac and S&P Global and Scripps and the State of Louisiana. But customers have appropriately said: well, that's great, I love that I can use the same tools I've used to manage my on-premises infrastructure to manage AWS, and that makes it easier to move applications to the cloud, but what about those applications that I told you I have to keep on premises for a while? And this was very much something we were thinking about over the last few years, because there was a model out there to try to solve this issue that
we've heard from customers didn't work for them, and that's because the solution provided different APIs, different tools, a different control plane, and different hardware, and it was really hard for customers to use. That's not that surprising: when you take two things that are as different as on premises and the cloud and then try to connect them with a clunky bridge, you end up with something that's clunky and hard to use. So we thought about this problem and took a different approach. We thought about it less as building a clunky bridge between two different things and more as distributing AWS on premises, and that's why we announced last year in this keynote that we were building something called AWS Outposts. Outposts are racks of AWS servers that we deliver to your on-premises data center, with AWS compute and storage and database and analytics; you decide what composition you want, and we'll deliver it and plug it in for you, and set it up, maintain it, and patch it, so it's not a lot of work for you. We announced it was coming a year ago, we've had a lot of customer conversations about what was really important for us to build inside of it, and I'm excited to announce the general availability today of AWS Outposts. Outposts will come with EC2 and EBS and ECS and EKS and EMR and VPCs and RDS, and we'll be adding S3 in the first half of 2020 and a bunch of other things over time. This makes it so that you can now run those workloads that need to live on premises, because they have to be close to something, with the same AWS APIs, the same AWS control plane, and the same AWS hardware and tools, which lets you leverage what you already know and seamlessly connect with all your other AWS applications in our public regions. It's really easy to get started: you go to the console, you set up an Outpost, you decide what composition of compute, storage, or database you want, we deliver the Outpost to your door and install it, we handle all the maintenance, and then, once it's plugged into power and network, you'll be able to see your new AWS Outpost in your AWS Management Console and use it to provision resources into your Outpost. So, very excited that this is available today. It comes in two variants: if you're somebody who is used to, and wants to use, the AWS APIs and control plane to operate your Outpost alongside what you're doing in the rest of the AWS public regions, you'll use the AWS-native variant of Outposts, and that's available today; if you're somebody who wants to keep using the VMware control plane, like you're using for VMware Cloud on AWS, as a number of customers are, you'll use the variant of Outposts called VMware Cloud on AWS Outposts, and that will be available in the early part of 2020. So, very excited to give this to you today. Now, we've solved for the issue of workloads that need to live in my on-premises data center and can't move to the cloud.
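Once an Outpost shows up in the console, provisioning onto it uses the same EC2 APIs used in-region. Here is a minimal, hypothetical boto3 sketch (the Outpost ARN, VPC, and AMI IDs are placeholders, and it assumes the Outpost is already installed and associated with the account) that creates a subnet on the Outpost rack and launches an instance into it.

```python
import boto3

ec2 = boto3.client("ec2")

OUTPOST_ARN = "arn:aws:outposts:us-west-2:123456789012:outpost/op-0123456789abcdef0"

# Create a subnet that lives on the Outpost rack in your own data center.
subnet = ec2.create_subnet(
    VpcId="vpc-0abc1234",
    CidrBlock="10.0.128.0/24",
    OutpostArn=OUTPOST_ARN,
    AvailabilityZone="us-west-2a",  # the Region AZ the Outpost is anchored to
)

# Launch an instance into that subnet with the same RunInstances API as in the Region.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    SubnetId=subnet["Subnet"]["SubnetId"],
)
```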
But there's a second barrier we're hearing about from a number of customers, particularly larger organizations, who say: I have end users in a particular geography with workloads that are latency-sensitive, where they need single-digit millisecond latency, and where I don't have a data center, or I have some kind of colo or clunky servers under my desk that I don't want to manage anymore; what can you do about that? That's a very interesting issue if you're a cloud provider like us, because it's very expensive to launch these mega-regions like the ones we have; we have a lot more coming, but I don't know that we've really contemplated 200 or 300 more in cities around the world. So we thought about this issue and about whether we could actually provide a solution. Take the typical examples, and these are real examples: if you're a media company in LA doing content creation or video games, those workloads they're building need single-digit millisecond latency; or take companies in New York or Switzerland that are in financial services, which have to be close to market data and need that single-digit millisecond latency. So we thought about whether there's a different type of construct we can provide that solves this issue at scale, and I'm excited to announce a brand new type of AWS infrastructure deployment called Local Zones, which places compute, storage, and database services close to large cities, starting with our LA Local Zone, available today by invitation. [Applause] Local Zones have compute and storage and database available to you, and we kind of went back to our roots as we were solving this problem. If you think about the history of AWS, we have built highly flexible, low-level building blocks that you can build a lot on top of; think about how many services, both AWS services and your services, are built on top of things like S3 and EC2. As we were thinking about this type of problem with Local Zones, we realized that Outposts was a really useful low-level, flexible building block. So we've taken Outposts, done some innovation and variants on it, and now, for customers where we couldn't bring an Outpost to their on-premises data center because they didn't want on-premises data centers in those areas, we've built Local Zones in metro cities: buildings that we manage, with Outposts in them, with compute and storage and database and analytics, so that you can have single-digit millisecond latency for your end users in those metro cities where they need that latency to get their job done. The first one is available in LA today, and you can expect more from us moving forward.
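Because a Local Zone is exposed as an additional zone attached to its parent Region, using it follows a familiar opt-in pattern. Here is a hedged boto3 sketch using the LA zone group; after opting in, you create a subnet in the Local Zone and launch instances as usual.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Local Zones are opt-in; the LA Local Zone group is attached to us-west-2.
ec2.modify_availability_zone_group(
    GroupName="us-west-2-lax-1",
    OptInStatus="opted-in",
)

# List all zones visible to the account, including Local Zones.
zones = ec2.describe_availability_zones(AllAvailabilityZones=True)
for zone in zones["AvailabilityZones"]:
    if zone["ZoneType"] == "local-zone":
        print(zone["ZoneName"], zone["OptInStatus"])
```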
So we've solved for the workloads that have to stay in on-premises data centers, and we've solved for workloads in certain geographies where your end users need single-digit millisecond latency but you don't want data centers. The third barrier we're increasingly hearing about from customers is that, as they have more and more mobile and connected devices all over the world that have to be connected to the cell network, how can they get that type of single-digit millisecond latency there as well? With the promise of 5G, people have become very curious about this and have started to believe it's possible, but like most new technology (this was true with cloud and big data and machine learning and IoT, it's becoming true with quantum computing, and it's true with 5G as well) there's a lot of hype and a lot of misunderstanding about what the technology is. So if we want to figure out collectively how we can leverage 5G to deliver those types of customer experiences, first we have to really understand what 5G is and what it does, and then how we can leverage it. And I can't think of anybody better to explain to all of us how 5G works than our very close partner, the CEO of Verizon, Hans Vestberg. Please join us. [Music] Welcome, Hans, I appreciate you being here. Let me start with a question: what the heck really is 5G, how is it different, why is it so much better than 4G, and why does it matter for all of these folks? It matters a lot for everyone in this room, and it matters a lot to you, because 5G is different from any other G we've ever seen before. Think about 2G to 3G to 4G: you basically took a 2G phone with SMS and voice, and today you have a 4G phone with a great experience; if it's a Verizon phone you can stream on it. Two different capabilities, speed and throughput, were really the only two. 5G is actually eight capabilities; we call them the eight currencies, and I'll give you some examples so you understand how big the difference is. Speed is going from, if you have 4G today, 40 to 60 megabits per second, up to 10 gigabits per second for an individual device, and if you have a 5G phone in this room right now, you have 1.2 to 1.8 gigabits already, because we have 5G in here. Throughput is going to terabytes per square kilometer, where today we do gigabytes per square kilometer. Latency, which is super important for the services you are developing, is going from maybe 40 to 80 milliseconds down to 10 milliseconds in the network. And you can combine all of this with how many devices you can connect: today we can connect a hundred thousand devices per square kilometer, and in 5G it's one million, so it's many, many more devices, and you can go on with the other currencies. But I think the most important thing is that when you can slice these and give them to individual applications and things, you have a transformative technology that's going to transform consumer behavior, transform businesses, and transform society. You need to build this in a certain way: you need fiber to basically all the base stations, you need high-frequency spectrum, you need SDN, software-defined networks, you need a virtualized network, and you need a lot of real estate at the edge. Verizon builds for all of these eight currencies; not everyone is going to do it, and we build it right, as you say, because we want to give this to all our customers. Already we have launched 5G Home in 2018, which is 5G home broadband, and we launched 5G mobility in April; we are now in 18 cities and will be in 30 by year end, and of course we are continuing with a lot of new innovation, but this is just the start. So the eight currencies are what matters for everybody in here, because that's what you can develop on, and that's the new thing we're bringing with 5G. That's pretty awesome, so people won't have to be tethered anymore to Wi-Fi or their lower-performing LTE networks. I think, first of all, with our spectrum position with millimeter wave, these big frequencies, you can get enormous throughput, you can get the 10 gig, and you can get the latency of 10 milliseconds; of course that's why we can do it. But connectivity and speed are just two things. The other thing is that we can now bring the processing out to the edge, because we have a virtualized network; we call that the mobile edge compute for 5G. That's really what we can do, and here the innovation for consumers, low latency, immersive experiences, machine learning workloads that are heavy, all of that can happen at the edge right now because of the virtualized network and the eight currencies. Yeah, that makes sense. Well, you can tell 5G is pretty compelling, and if you think about, as we've thought about, what that means
for AWS customers: if you want the types of applications that have that last-mile connectivity but that actually do something meaningful, those applications always need some amount of compute and some amount of storage, and what they've done in the past is reach back to AWS to get that compute and storage. The problem is that there are so many hops along the way: you have to go from the device to the cell tower to the city aggregation site to the regional aggregation site to the internet and then to AWS, and then back. The types of applications that are most excited about using 5G, things like machine learning at the edge, autonomous industrial equipment, smart cars and cities, and augmented or virtual reality, can't afford and don't want that round trip back and forth. What they really want is for AWS to be embedded somehow at these 5G edge locations. But think about what you'd have to do as a customer: you can't just walk up to a telco and say, can I be at your edge? Because no one telco can serve the entire world geographically, you'd have to go to lots of telcos and ask to be at their edge, then manage all the work across those different telcos with some kind of abstraction, and you'd still want all the same tools and APIs and control planes and things of that sort. So this has been a hard problem for people to try to solve. About 18 months ago, Hans's team and my team started working on this and collaborating very deeply, and I'm excited to announce a new AWS service for you now called AWS Wavelength, which allows you to build applications that deliver single-digit millisecond latency to mobile and connected devices, with AWS compute and storage embedded at the edge of 5G networks. [Applause] What we've done is embed AWS storage and compute at the edge of 5G networks, and we started with Verizon, who was a collaboration partner and really a true innovator in this space. Now, for the latency-sensitive portions of your application where you want single-digit millisecond latency, you have far fewer hops to get to the compute and storage: you go from the device to the 5G city aggregation site, and right at that city aggregation site is AWS, so it's a much better experience. We're launching with Verizon as our first partner, we'll also have KDDI and SK Telecom and Vodafone as part of this, and more over time, and you don't have to worry about figuring out how to manage across all the different telcos; we've built an abstraction that makes it easy for you. As I said, we started with Verizon, who is often the leader and the innovator in this space and tends to invest the most in its network.
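As a rough illustration only: the Wavelength-specific networking pieces (Wavelength Zone subnets and the carrier gateway) were not public APIs at the time of this keynote, so the sketch below reflects how the service later shipped, with illustrative zone and resource names, and should be read as an assumption rather than as the keynote's own material.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Wavelength Zones are opted into much like Local Zones; names are illustrative.
ec2.modify_availability_zone_group(
    GroupName="us-east-1-wl1",
    OptInStatus="opted-in",
)

# Place the latency-sensitive tier of the application in a Wavelength Zone subnet.
subnet = ec2.create_subnet(
    VpcId="vpc-0abc1234",  # hypothetical VPC
    CidrBlock="10.0.200.0/24",
    AvailabilityZone="us-east-1-wl1-bos-wlz-1",
)

# Traffic to and from devices on the carrier's 5G network flows through a
# carrier gateway rather than an internet gateway.
cgw = ec2.create_carrier_gateway(VpcId="vpc-0abc1234")
route_table = ec2.create_route_table(VpcId="vpc-0abc1234")
ec2.create_route(
    RouteTableId=route_table["RouteTable"]["RouteTableId"],
    DestinationCidrBlock="0.0.0.0/0",
    CarrierGatewayId=cgw["CarrierGateway"]["CarrierGatewayId"],
)
```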
Thank you. For us this is a historic day. As I said, we were first with 5G Home, we were first with 5G mobility in the world, and of course to be here today talking about how we are now bringing 5G mobile edge compute out to the edge, which we call Verizon 5G Edge, and how, with the collaboration of our engineers who have been working together for 18 months, we virtualized our network all the way from the radio to the packet core and brought Wavelength in there, in order to create a platform for all of you out there to start innovating on the eight currencies, low latency, massive throughput, and all of that. I think it's just a massive moment for us at Verizon to do this and lead this sort of movement in the world, with a lot of transformation on this new platform. We are already in preview in Chicago, so we already have customers up and using this technology, so it is much more than just buzzwords: we actually have it in preview, we have the NFL there, and we have Bethesda, a gaming company that is using the low latency. Over time we will be able to do slices with all these currencies together with AWS Wavelength, where we can move workloads all the way from the higher levels, the big data centers, down to Wavelength, and that's the beauty, because that's going to be a game changer, and I think everyone here should be extremely excited about it. We at Verizon are extremely excited to collaborate with the best and greatest cloud company in the world and bring this to all the innovators out there, and if you really want to see it today, you can go to our booth; Verizon has a booth here where you can start to experience these eight currencies. As the year progresses in 2020, we're going to deploy much more real estate when it comes to edge compute, together with AWS, and see that we bring this to all of you so you can innovate for your customers and partners. So it's a great thing. In case it's not obvious, we're pretty excited about this, and we think it pretty dramatically changes what customers are going to be able to get done. We couldn't have chosen a better collaboration partner; it's a deep partnership, and we love working with Verizon and with your team. Hans, thank you. Thank you. [Applause] [Music] So, we talked about how, as you're making this big transformation, sometimes companies get stalled when they can't figure out how to move every last workload. That's what we've spent a fair bit of time trying to solve, and we'll continue working on it for you moving forward, but these are three big barriers I think we've been able to help you knock down: how do I deal with applications that have to stay in my on-premises data center but that I want to work seamlessly with the rest of my AWS applications; how do I deal with end users in geographies where they need single-digit millisecond latency but I don't want data centers, again working seamlessly with the rest of my AWS applications; and how can I leverage this incredible innovation of 5G to get single-digit millisecond latency for all my connected devices and the latency-sensitive parts of those applications. Those are three big barriers that we have knocked down for you today. So I'm going to close with just a few comments. I think that when there's a big change happening, and I think the change that's happening with the move to the cloud is the most titanic shift we've seen in technology in our lifetimes, it's sometimes hard to think about how to handle it. A lot of people will tell you that they love and embrace change, but I would say that, in my experience, that's not necessarily true. I think a lot of people get nervous about change: they don't know what it means for them, whether they have the skills to be successful in that change, what it means for the scope of their job and all the things they've spent a lot of time working on and the suppliers they've built relationships with. And as such, a lot of times when there's a big change and transformation like this, the first reaction is to
dismiss it. Then, when it becomes hard to dismiss because people are moving to it and there's value, the reaction oftentimes is: well, we can do it better, we can do it less expensively, we can do it more performantly. And when that doesn't appear to be true, a lot of times the reaction is just to try to slow-roll it, to do just enough to nod at it so that it looks like you're actually paying attention to it. But the problem is that if you dip your toe in the water for long periods of time during transformations that radically change industries, you find yourself at the tail end of a big shift and suddenly way behind, and sometimes it's pretty startling how far behind you become, and how quickly. The history of business is littered with companies that did not adjust to big technical transformations and were left in the dust. The reality is that what the cloud offers you is a once-in-a-lifetime chance to totally reinvent the customer experience, to totally reinvent your business, and to build things that were never possible before. And that, to me, is an opportunity that all of us have that is unlike any other in our lifetime, and it probably will be unlike any other moving forward. So the opportunity is right here, right now. Take it. We'll be here every step of the way to help you, and I hope you have a great rest of the week at re:Invent. Thank you. [Applause]
Info
Channel: Amazon Web Services
Views: 710,354
Keywords: AWS, Amazon Web Services, Cloud, cloud computing, AWS Cloud, reinvent 2019, andy jassy keynote, Hans Vestberg, Brent Shafer, Goldman Sachs, David Solomon, aws launch, aws keynote, dr matt woods, aws 2019 event, Amazon Fargate, Amazon Redshift, Amazon SageMaker Studio, AWS Local Zones, AWS Wavelength, AWS Outposts
Id: 7-31KgImGgU
Length: 159min 29sec (9569 seconds)
Published: Tue Dec 03 2019