Thursday morning general session - May 9 - Red Hat Summit 2019

Captions
Ladies and gentlemen, please welcome to the stage Red Hat senior vice president of Customer Experience and Engagement, Marco Bill-Peter.

Good morning! Good morning. What a great day, day three of Red Hat Summit. What have you learned the last few days? You heard it from the videos: amazing, a lot of new things to learn. Now, yesterday Paul Cormier talked about bold ideas, shooting for the moon, and how to execute. But having ideas is one thing; having ideas and innovating while running a smooth, stable operation, that's a whole different thing. Today is all about innovation and the future and how it's shaping up. It's happening today in startups. You see it in startups: they change the development process, they change how much they release, how many times they release a day, or an hour. It's happening. But you know what's more exciting for me, from what I've learned the last two days and what I've learned the last few years visiting clients? It's happening in global enterprises as well, not just in startups, and that's a really great achievement of you all: to innovate while running a smooth operation.

When we talk about innovation, especially at Red Hat Summit, there are of course the Red Hat Innovation Award winners. We heard about them. This year's Innovation Award winners were BP, Deutsche Bank, Emirates NBD, HCA Healthcare, and Kohl's. We heard from Deutsche Bank already, from Kohl's, and from BP, and later today, from Chris Wright, we'll hear about HCA Healthcare and Emirates NBD. But before that, we're going to talk about innovation and how Red Hat supports you through that lifecycle, giving you stability, security, and, I'd also say, accountability and influence. Let me talk more about that.

Having ideas, having bold ideas, and making them a reality: that needs a really, really trusting partnership with your partners and with your suppliers. We understand that. We've understood that for the last 20 years, because we were always an ecosystem around your IT needs. This is where my organization, Customer Experience and Engagement, comes in. Red Hat products are not just software; they are a subscription. In the subscription you get software, but you also get a lifecycle. You heard about RHEL 8: we have 10-year lifecycles, 13-year lifecycles, to make sure you can operate safely over those times. It's also security, product security; I'll talk a little bit more about that. But I also mentioned accountability. What does that mean? Accountability means having the expert knowledge, from the technical support, from the trusted engineers who built the software, that can provide you real answers and changes. A lot of people say supporting open source is easy, it's open source. Trust me, it's not. Because when you have to analyze software, starting from the middleware layer, into OpenShift, into the Linux, and further down, you have to understand all the levels, you have to be able to diagnose it. And then when you find the issue, what do you have to do? You have to produce a patch. And it's not just patching that system: you have to patch it so it aligns with upstream, because we don't fork, but then you also get it aligned with our software, so we can stand behind it for the ten years of the lifecycle. That is real accountability. If we don't support something, it's not because we want to be difficult. If we don't want to support it, or we can't support it, it's because we can't stand behind it: we don't have the partnerships or the certifications, or we don't know the code.

Now let's talk about security quickly. Patches for security vulnerabilities are important, but not enough. Let me give you a quick story. A company I know spent 18 months on a 15 percent performance improvement. They have tens of thousands of servers, so 15 percent is meaningful. The next week a security vulnerability comes along that could wipe out 20 percent of your performance, and your 18 months of effort are gone. That's where the Product Security team comes in, to really give you
guidance, as in: here you've got a patch, here you've got to adjust these firmware settings, here you just add this. That is the real value. That's how we build the customer experience at Red Hat, to be meaningful and valuable for you. I mention it every year: customer experience at Red Hat is at the core of what we do. We don't squeeze or outsource support to save cost. We understand it is really, really important for you to succeed. That's how successful customers engage with us, and we hope to keep doing that. Keep in mind: having a smooth operation is the foundation that enables innovation.

Now, speaking of innovation, I want to talk about previous winners, not the winners from this year, because I want to show that the previous winners keep innovating. I'm going to talk with two clients. They have really good reputations to protect; at the same time they have to innovate around their bold ideas and stay relevant for their clients. The company Hilti got recognized in 2009 for standardizing their large, mission-critical SAP landscape on Red Hat Enterprise Linux. Let me welcome Christoph to the stage. Please welcome Hilti's head of IT infrastructure, Dr.
Christoph Baeck. [Music]

Christoph, great, great to have you here. What a pleasure to be here. Christoph, tell us first a little bit about what Hilti does and what your department does. Absolutely. Hilti is a leading supplier in the construction industry. We serve, out of the Principality of Liechtenstein, about 120 countries with our services, solutions, products, and software for the professional construction industry. We are a family-owned company with about 29,000 employees all over the globe, and we do about 5.6 billion in revenue a year. IT is a super important topic for us. Just to bring two facts to you: we are a direct sales company, so we have about 250,000 touch points with customers every day, and we record all of them, including about 75,000 orders every day that we take from our customers. A second fact: our customers are very demanding. Think about a professional job site out there: customers do not expect to wait for the tools and products they are ordering. So we are able to serve our customers in almost 99 percent of cases on time, and that means same-day or 24-hour deliveries.

That is massive, and that's all running on your SAP landscape. Absolutely. So back in 2009 Hilti won the award for moving their SAP landscape to Red Hat Enterprise Linux, and you can see from the quote here, we wanted to bring up the quote here, we have a very long, 20 years or more, relationship with SAP, and we have many customers that do that. But let me ask you, Christoph: how are you pushing the envelope today? What's the latest? I mean, 2009 was a milestone for us. We moved from legacy hardware, from proprietary operating systems, to open operating systems, together with you, with Red Hat, together with Intel-based technology, not only for our SAP landscape but for all our servers and databases. Now in 2018 we took another bold step. We placed a big bet: basically, SAP is pushing us to move to S/4HANA, and that meant that we decided, yes, we're going to go that way. And again we placed that bet with our partners, Red Hat, and set up our S/4HANA system to run and serve our business.

That is massive. Just moving straight... I was really impressed; I worked with you over those years. To jump straight to S/4 on the application side, to HANA on the database, what made you confident you could take that step? I mean, we were the first movers in that direction, from a size and complexity point of view, for SAP; no company of our size had done that before. We're talking about a 12-terabyte RAM system that we need to handle. Together with the partnership with Red Hat, that worked out, and we could only do it because we trust in Red Hat, we trust in our partners, and finally it was the personal relationships as well, and the trust that we have with all our partners, in particular Red Hat, that made us confident to pull it through. Yeah, and I remember, just two days after the go-live, we were all excited about the go-live, I knew it was successful, and we were sitting in a plane together by coincidence, and I was really happy to tell Christoph that we had just elevated our SAP competence center in Walldorf. It was a great coincidence, and it showed us that we bet on the right partners and the right systems, because you've continued to advance up to today. That's right. Christoph, thank you very much. Thank you, Marco. Thank you. Thank you.

You know, Jim Whitehurst always gives a history lesson; every year I give a little geography lesson by having a customer from Liechtenstein here. Now let me welcome another Innovation Award winner on stage. In 2018, UPS built an innovative solution on Red Hat technologies like OpenShift Container Platform, Fuse, and Red Hat Enterprise Linux. They recently migrated their largest mainframe application to OpenShift. In December last year they exceeded 1 billion transactions per day on several occasions. Pretty amazing. Let me welcome Ken from UPS
to the stage to tell his story. Please welcome UPS president of information technology, Ken Finnerty. [Music]

Good morning, everyone. It's a pleasure to be here today and highlight how Red Hat is playing a key role in UPS's digital transformation. I think all of us, regardless of the industry or geography where we're working, share one common thing, and that is that we're facing increasing competition in our marketplaces. We might also, though, find increasing opportunities as well. And I think you'll all agree with me that reacting to both of these conditions requires technology, specifically making clever and innovative use of technology. But futurists tell us that windows of strategic advantage won't last as long as they once did, and to us that means we can't afford to be good only once in a while. We need to create technology organizations that can continually deliver value, and that requires a modular design for your software, along with modern tools and platforms, and all of those things need to coexist in a healthy and productive technology ecosystem. To UPS, being digital means having the ability to action the important events that occur within a supply chain, and to do that through software. For us it means affecting the moments that matter to our customers, and doing it without heroics. There's an old saying that a good idea is worth a dollar, but a plan to implement it is worth a million. Well, at UPS we have a plan. We're building a global smart logistics network, one that is fully digitized, operational in over 200 countries and territories around the world, and delivering 21 million shipments every day. We're already using technology to provide mass personalization to customers. We're simplifying shipping, delivery, and returns; we're removing friction from the e-commerce marketplaces; and we're even helping our customers improve their sales through increased digital traffic. Let me tell you how.

UPS My Choice is our flagship digital engagement product. We have over 59 million members signed up worldwide, and it offers those members a personalized and flexible delivery experience. It's personalized because members can go online and save their delivery preferences, which we honor. But it's flexible too, because they can make changes to that on an ad-hoc basis, shipment by shipment, and they can do all of that without having to pick up the phone and call UPS. Shippers also enjoy the benefits of My Choice, because they can promote their brands in the electronic notifications that we send to consumers, and consumers receive those through three channels: through text, through email, or through push notification if they have the mobile app. One of the best features of My Choice is that it offers consumers the ability to redirect deliveries to an alternate delivery location. We call that UPS Access Points: a network of more than 25,000 retailers worldwide that will receive and hold deliveries for consumers who can't be home to receive them during the day. Giving consumers a simple, digital way to manage their deliveries improves the experience for all parties involved in e-commerce. It improves the experience for consumers, because they enjoy proactive shipment notifications along with flexible delivery terms. It improves it for sellers and shippers, because they experience a reliable service that includes digital promotions. And it's good for UPS as a carrier, because we improve our operating efficiency. Now, you can see how My Choice creates value in the marketplace, but I can tell you as a technology person that it also demands a healthy ecosystem, one that begins with a modular software design where business services are exposed through web APIs; in other words, it requires application building blocks. However, one of the things we found was that building blocks alone don't make up the whole ecosystem. We needed a platform, a platform that would allow us to deploy, to support, and most importantly, to scale
as we innovate. Enter Red Hat. We picked OpenShift because it is highly scalable; it's an enterprise-ready container platform with powerful orchestration through Kubernetes. It has become a foundational element for us. It's enabled us to realize our cloud-native microservices architecture, because, after all, we want to go beyond the 12-factor app. Pairing our cloud-native microservices apps with Red Hat has given us the foundation to process those billions of transactions that you just heard about and provide customers with accurate and timely shipment information. Along with OpenShift, we've also deployed other tools in our toolchain: things that allow us to do build orchestration, to do automated testing, and also to instrument our applications. All of these things, working together in harmony in a technology ecosystem, have enabled us to reduce our deployment windows, to increase our change control success, and to improve our overall system uptime. Having Red Hat's team of experts working with us on our journey has provided valuable insight and knowledge, and it's accelerated our success. They've been a true partner to UPS. We all know transformation isn't easy. It requires a robust technology ecosystem, it requires talented and engaged people, and it requires a process that's fully automated. Red Hat gave us the foundational elements for our technology ecosystem, an ecosystem we're using to create a global smart logistics network. Thank you for allowing me to share our story today.

Thank you, Ken. This was awesome, unbelievable logistics. Now, innovation doesn't stop because you achieve something, because you win an award. After every success you actually have more opportunities and can take greater risks, because you have established a culture of innovation, a culture of change. That's really important, and Red Hat will be there for you, not just with bits and bytes, but as a company standing behind you with a real partnership. Everything you do with us, and every time we have an interaction with you, only helps us make our products and solutions better: better for the ecosystem, better for the open-source community, but also better for your business. Thank you for being awesome customers. Now, the Innovation Awards: we're getting closer to that, so stay around for Chris Wright's keynote and the demo from Burr Sutter and team, and then we will reveal the 2019 Red Hat Innovator of the Year that you voted for. At this time I would like to hand the mic over to Chris Wright, our CTO and a good friend of mine. Please welcome Red Hat vice president and chief technology officer, Chris Wright. [Applause]

All right, good morning, day three! Thank you, Marco, and thank you to Hilti and UPS. You're doing amazing work, showing that you have to constantly embrace change and new innovations to stay relevant. We know that business is ever-changing. You have to continually adapt to changes in technology, needs, and demands, all of this while meeting your customers where they are. Over the last couple of decades we've seen a cycle of innovation that has not only fundamentally changed our view of IT, but also business and even society: the ubiquity of the internet, open source, smartphones, DevOps, big data, and cloud. Red Hat has been your trusted, stable, open-source platform through all of this: a common platform where you can innovate and not worry about availability, flexibility, scalability, security, or lock-in. Now we're entering the next round of innovation: the data-centric economy, the mass adoption of AI, and distributed computing. True innovation and disruption doesn't come from technology on its own. True innovation happens when new technology connects people and new ideas in your business. Disruption comes from the culmination of incremental improvements in technology, paired with creativity, diverse ideas, and open processes: how you actually apply that technology to create something entirely new. An important enabler for this is the ability to utilize technology without undue hurdles: a quick time to
value, stability, a platform that enables you to focus on the innovation in your core business. Throughout my time with you today we have a lot planned. We'll see how Red Hat's platforms, built around RHEL and OpenShift, are truly your trusted platforms for the next cycle of innovation, and we'll hear from software partners that are helping operations and developers use AI and ML in novel ways. We'll show you how these new capabilities are helping businesses deliver value in disruptive ways, not only for themselves but for entire industries. But before we get too deep into the software and platform side of things: everything starts with hardware. Without hardware innovation we can only squeeze so much power out of our platforms, and modern innovation using AI is hardware accelerated. Hardware is changing. Moore's Law is hitting the laws of physics, and the next round of hardware innovation is about specialized hardware, especially for AI: see Google with TPUs, Intel with DL Boost, GPUs, even FPGAs. During my keynote last year I talked about performance and tried to make the raw numbers very real by normalizing a CPU cycle to one second. With that time normalization, we saw that advancements in hardware reduced the time I/O operations take from months to days and even hours. That was a year ago, when we were looking at CPUs with just under 30 cores, compared to more than 5,000 cores for a modern GPU at the time. What a difference a year makes: the same convolutional neural net training that I tested last year is 56 percent faster. We're also seeing performance improve as much as two times within the same power envelope. Now, this is great for inference deployed at the edge, for example, but in the post-Moore's-Law world this isn't just scaling up hardware anymore. Achieving maximum performance out of hardware also depends on end-to-end optimization: the hardware and drivers, optimized network fabric, specialized toolchains, optimized software stacks and AI frameworks, and of course applications. A partner in this effort is the leader in GPUs: NVIDIA. NVIDIA has been with Red Hat all along, ensuring that their newest hardware capabilities are available and supported end to end in our solutions. To help tell this story, and to show you what you can do with all these optimizations, I'd like to welcome Chris from NVIDIA to the stage. Please welcome NVIDIA vice president of computing software, Chris Lamb. [Applause]

Good morning and welcome, everyone, and thank you, Chris Wright. NVIDIA and Red Hat have a long history of working together to deliver enterprise-class, high-performance solutions. I've worked on CUDA for about 12 years now, and I remember back at CUDA 1.0 how key it was for us to launch with Red Hat, and here we are now, 10 generations of CUDA later, working closer than ever. Most recently we certified Red Hat Enterprise Linux on DGX, our enterprise-class AI supercomputer. We now have Red Hat Virtualization integrated with vGPU for virtualized graphics, and of course OpenShift support on NVIDIA GPUs. I'm talking to you now at a critical time in computing. Moore's Law has ended, data is exploding, and it's completely clear that in this new era of computing, acceleration is going to be the key that allows our data centers to continue getting faster. NVIDIA GPUs are the most widely adopted accelerator across the industry, and this is for a really good reason: solving this problem requires an accelerator that is programmable across multiple domains of software, but within a single architecture, so it can be deployed everywhere. And this is a really hard problem. It's a full-stack acceleration problem. You have to do optimization from the hardware, to the firmware, to the drivers, the communication libraries, algorithmic libraries for data science and machine learning, framework optimizations, all the way up to the services and the application layer itself. It's a really difficult problem, and this is one of the reasons why we've taken an optimized version of that
stack and put it inside containers in NGC, so there's a repository of fully accelerated applications with their stacks, across multiple domains. An example might be the major AI training frameworks, or RAPIDS, which is a Python-based framework for accelerating common primitives in data science, such as extract, transform, and load, or classical machine learning algorithms. Ultimately, what we want to do is make it easy to install and run these optimized containers on OpenShift. Our goal in working together here has been to simplify the deployment and management of the full stack needed for this era of accelerated computing. So let's start with operations: you're an operator, and let's see a demo of how easy this can be. OpenShift 4 makes installing the cluster really easy; it reduces the install time from the better part of a day to less than an hour. Here you see OpenShift 4 installed on a bare-metal cluster in our lab: you've got three bare-metal master nodes and three worker nodes with GPUs installed. Then let's install CUDA using the GPU operator. This is really, really easy, and the upgrades going forward are easy too. This also installs a monitoring stack that gives you metrics, so you can make sure you're getting great utilization out of your GPUs. This is something we hear again and again from our customers: it's really important that you're getting good utilization out of them. Ultimately, the point here is to make it easy for a data scientist to launch one of these optimized NGC containers from a web console with just a couple of clicks. So let's now switch over to the persona of a data scientist, somebody who wants to set up an accelerated Python notebook. Here we're setting up JupyterHub on OpenShift so that data scientists can spin up a notebook on demand. The one we're using here uses the RAPIDS framework, so that somebody could work, say, with the DBSCAN clustering algorithm. Ultimately, this is what's going to make it easy for a data scientist.
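For readers unfamiliar with the DBSCAN algorithm named in the notebook demo: it groups points that sit in dense neighborhoods into clusters and marks isolated points as noise. Here is a rough, dependency-free sketch of the idea; the notebook shown on stage would use the GPU-accelerated implementation in RAPIDS cuML, not a toy version like this.

```python
import math

def dbscan(points, eps=0.5, min_samples=2):
    """Toy DBSCAN: label points by density-connected cluster; -1 means noise."""
    def neighbors(i):
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]

    labels = [None] * len(points)   # None = not yet visited
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_samples:
            labels[i] = -1          # noise (a cluster may reclaim it later)
            continue
        cluster += 1                # i is a core point: start a new cluster
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster        # reclaim noise as a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            more = neighbors(j)
            if len(more) >= min_samples:   # j is also a core point: expand
                queue.extend(more)
    return labels

pts = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1), (10, 0)]
print(dbscan(pts))  # → [0, 0, 0, 1, 1, 1, -1]
```

With `eps` as the neighborhood radius and `min_samples` as the density threshold, the two tight groups become clusters 0 and 1, and the far-away point is labeled -1 as noise.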
They can just go and get the resources they need, on demand. So, in short, what you've seen here is a preview of what accelerated AI and data science looks like with OpenShift and NVIDIA. We're really excited to work with you, our joint current and prospective customers, and to have you join us in a Developer Preview to try this out with your AI teams and your workflows. Ultimately, our aim is to make it easy to deploy clusters on-prem and in cloud environments. We've got a blog talking about preview access and how you can sign up to be part of our early access program, and we're also working with leading OEMs to make it simple to get starter kits for GPU-accelerated OpenShift clusters. So if you haven't had a chance to stop by our booth in the expo, please come by and see what we're doing; we can share more about our work together. Thank you all, and enjoy the rest of the show. [Applause]

All right, thank you, Chris. Having NVIDIA's hardware work with our latest platforms is crucial for these workloads. So, with all of this impressive hardware delivering better AI performance and throughput on RHEL and OpenShift, we need to talk about the people keeping that hardware and those platforms running: it's operations. Modern software stacks are increasingly complex. OpenShift and containers make it really easy to develop and deploy, making it available on demand, but we know that operations can be a thankless job. You have pressures from the business to deliver more with less. You have to keep everything running, and you have to look for ways to make that ever more efficient while delivering new capabilities for your developers and applications, and ultimately your customers. But the unexpected happens. You have, hopefully infrequently, hardware failures. You have demand issues that go beyond what your hardware and infrastructure are planned for. You have puzzling performance issues to track down, to discover why they happened, and to fix them. With Murphy's Law, you expect the unexpected; you just don't
know when. This is where AI can help. To enable innovation you need everything running reliably, and that means enabling ops to be better than ever. The operational paradigm of the cloud offers a service abstraction that encapsulates operational excellence, but in order to enable you to innovate, that concept has to meet you where you are. This is where AIOps comes in. We're really talking about autonomous clouds, self-driving clusters. AIOps is the combination of platforms, big data, and AI/ML that enhances practices like performance monitoring, event correlation and analysis, and management. At Red Hat we're actively enabling this with solutions like Red Hat Insights and core concepts in OpenShift 4 such as Operators. With AIOps, the infrastructure learns from the data and gains the ability to predict issues before they become problems. An example of a partner doing incredible work in the AIOps space is ProphetStor. Their solutions are built on OpenShift and enhance its scaling and scheduling capabilities. ProphetStor and AIOps help operations teams predict and optimize workloads and resources in your cluster. And now I'd like to welcome Brian to show you AIOps in action. Please welcome ProphetStor solution architect Brian Jiang. [Applause]

Thanks, Chris. So, as introduced, I'm Brian, a solution architect over at ProphetStor, and I'm going to be talking about our AIOps solution, Federator.ai for OpenShift. With Federator.ai, what we do is simplify the cost optimization process for both day-one and day-two operations in OpenShift. For day one, as many of you know, there are hundreds of cloud providers out there, each with their own instance types and price structures, and users new to cloud environments might not readily know which cloud provider to choose, or even know their own application workload. That's where we come in. With ProphetStor, you just tell us the application and the optimization policy, and we'll recommend which cloud provider and instance type to choose based on that profile. So we do all the legwork for you. And then for day two, that's where our machine learning AI comes in. We learn the resource usage of each pod in your cluster and we predict future resource usage, and with those predictions we can drive the native Kubernetes scheduler and autoscalers for much more intelligent resource utilization than the current, purely historical, algorithmic method. From these two solutions you can see that we're a force multiplier for OpenShift, enabling your team to streamline operations, save resources, and manage systems more efficiently and in less time. Some more details about our day-one solution: the user tells us what application they want to deploy, about how many requests per day, and the optimization policy, whether it's cost, performance, or SLA, and from that we'll generate which cloud provider and instance type to deploy the application into. But if the user wants to deploy directly into their own cloud, we can directly recommend how many resources they'll need in terms of CPU and memory. And then for day two, as I said before, that's where machine learning AI comes in. We predict the future usage of each pod and we apply it to the native Kubernetes scheduler and autoscalers, and according to our test results we can improve your resource utilization by sixty percent. If you can see the graph behind me, there's a white dotted line, which is our predicted CPU usage; it's about ten minutes ahead of the blue solid line, which is our currently observed CPU usage. You can see that these two lines are really intertwined, which shows that our prediction engine is really accurate at this point. With these accurate predictions we can recommend where to set your pod request and limit, which are the green line and yellow line respectively, and we can also automatically execute these recommendations, so your operators don't even need to worry about scaling the cluster themselves.
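ProphetStor hasn't published the model behind Federator.ai, so purely as an illustrative stand-in, here is how a naive forecaster might turn a pod's recent CPU samples into a request/limit recommendation. The exponential moving average and the headroom policy below are assumptions for the sketch, not Federator.ai's actual logic.

```python
def ewma_forecast(samples, alpha=0.3):
    """Exponentially weighted moving average as a naive one-step-ahead
    forecast of a pod's CPU usage (in millicores)."""
    level = samples[0]
    for x in samples[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def recommend(samples, headroom=1.3):
    """Derive a CPU request from the forecast and a limit from the
    observed peak plus headroom. Purely an illustrative policy."""
    request = round(ewma_forecast(samples))
    limit = round(max(samples) * headroom)
    return request, limit

usage = [200, 220, 210, 260, 300, 280, 310]   # millicores, sampled per minute
req, lim = recommend(usage)
print(req, lim)  # 272 403
```

A real predictive autoscaler would use a far richer model and re-evaluate continuously, but the shape of the loop is the same: forecast usage, then translate the forecast into request/limit settings (the green and yellow lines in the demo) instead of reacting to the latest observation.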
All right, OK. Then let me just switch over to a web browser. This is a Grafana environment; it's a side-by-side comparison of the native Kubernetes horizontal pod autoscaler and our Federator.ai horizontal autoscaler, and I'm going to call it the HPA from now on because it's shorter. You can see we outperformed the native one in these three main categories. For the same identical workload, we can use 19 percent fewer replicas, so that's directly saving you 19 percent of resources. We can also reduce your CPU over-limit instances by 61 percent; every time your application hits a CPU over-limit instance, it gets performance issues and slows down, so it's really big that we help out there. And actually the biggest thing right now is that we reduce your out-of-memory instances by almost 90 percent; every time an application hits an out-of-memory issue, it stalls out and crashes, so you want to avoid these at all costs, and we cut them by 90 percent. So this is a Grafana environment; there's all this data, and we have a booth where you can go over it with us later. Just know for now that this is only a side-by-side comparison of the HPA, the horizontal autoscaler; the vertical autoscaler, the cluster autoscaler, the scheduler, and all these different facets of your OpenShift cluster can be optimized using machine learning AI with our Federator.ai solution. And the really cool thing is that once we have those usage predictions for each pod, the future resource usage, we can feed that back into our day-one tool and you can get new recommendations of which cloud provider and instance type to choose. So now your full stack, from your resource usage all the way up to your cloud provider, is fully optimized using Federator.ai by ProphetStor. All right, so that's it. We're going to be in the general session, and we have a booth, number 1134, so if you have any questions or want to get more details, come say hi and talk to us. Thank you. [Applause]
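For context on the native HPA used as the baseline in this comparison: upstream Kubernetes scales replicas with a simple ratio rule, and a predictive autoscaler effectively feeds forecast usage into that rule instead of the latest observation. A simplified sketch of the documented rule (the real controller also applies a tolerance band and stabilization windows):

```python
import math

def desired_replicas(current_replicas, current_utilization, target_utilization):
    """Core scaling rule of the Kubernetes Horizontal Pod Autoscaler:
    desired = ceil(current * currentMetric / targetMetric)."""
    return math.ceil(current_replicas * current_utilization / target_utilization)

# 4 replicas averaging 90% CPU against a 60% target: scale out to 6
print(desired_replicas(4, 90, 60))  # 6
```

Because this rule only reacts to the current metric, it scales after load has already spiked; replacing `current_utilization` with a ten-minute-ahead prediction is what lets a predictive autoscaler add capacity before the spike arrives.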
So, we're using AI and machine-learning-driven algorithms to help keep operators operating. As you just saw, we're using data to make predictions about the future, keeping containers and pods running within set standards. Our apps built on top of OpenShift can automatically adjust and become more intelligent out of the box, and the key here is data. We have data: the dataverse is growing exponentially. I saw numbers recently from IDC: half of the world's data has been created in just the last two years, yet only 2 percent has been analyzed. So how do we connect data to your business to create true innovation? Let's talk about AI as a workload, and how we help you innovate with AI. Developers need to be able to use AI. They need to simplify these complex activities and have access to meaningful data. The work of training models and connecting them to apps has historically been done by small groups of highly skilled data scientists. In a traditional AI workflow, data scientists are a precious resource, but they can become a bottleneck, so we need to enable developers to help data scientists scale. OpenShift enables AI innovation for the developer, and this benefits users, as apps are able to improve more intelligently, faster. My next two guests are democratizing developers' access to AI in different ways, to make the AI workflow easy, intelligent, and accessible. You'll hear from PerceptiLabs, who've made deep learning model training as simple as a mouse click with their drag-and-drop interface. That's fantastic for cases where you know what your data is, but you just need to apply it to glean the insights, or in this case return a result for your customers. Then you'll see how H2O.ai is approaching the problem of massive amounts of data where the model you should use isn't clear. Thanks to work from companies like PerceptiLabs and H2O.ai, developers now have access to AI platforms built on OpenShift, made with them in mind. I'd
Please welcome PerceptiLabs chief executive officer Martin Isaksson and PerceptiLabs chief technology officer Robert Lundberg. [Applause] Hi, I'm Martin. And I'm Robert. We are the cofounders of PerceptiLabs. Unlike many Silicon Valley startups, PerceptiLabs was not built in some Bay Area garage; PerceptiLabs was built in a Swedish garage. So to start with: how many of you have ever worked with AI? Let me see you raise your hands. That's some. For those of you who don't work with AI, take my word for it: developing an AI model can be a long, tedious, and complicated process requiring specialized knowledge and skills. Well, we now have a solution aimed at simplifying that process. We built a tool that helps enterprises save time and money when creating AI models. We simplified the model development process by substituting math and code with a simple drag-and-drop interface, and we built all of it using UBI-based containers on Red Hat OpenShift to make the deployment quick and easy. OK, so let's show you how it actually works, and I just want to mention that if we weren't running this as containers, we would have had to wait at least five minutes for the virtual machines to start up; I'm just saying. Anyway, to your left you see the different operations, or the ingredients of the AI; in the middle we have the workspace, where we can mix all different kinds of operations; to your right we see a project menu, and on top a toolbar. Pretty simple. We're going to do this a little bit like a cooking show, where we first will build, or bake, a model from scratch and start training it in the oven. So we created a dataset containing images of red fedoras, or red hats, and we want the model to learn to classify whether an image contains a red fedora or not. Let's load this data into a dataset on the workspace, and we will be able to see a sample to verify that the data looks like we expected.
You can continue to use this drag-and-drop interface to build out the entire model workflow, but here we will switch over to a finished workflow and show you the complete process. So we can see the image data layer on the workspace; we have also loaded the label data, or the ground truth, into another data layer, and this is what our model's output will be compared against during training. We set the output size to 2, since we have two classes: red hat and not red hat. In the platform we can define what sort of AI technique we want to use by choosing a training layer, and we can select from multiple options, including reinforcement learning, genetic algorithms, dynamic routing, and so on, but here we chose normal supervised learning. And if you're too lazy to set the hyperparameters manually, you can do it like us and auto-generate them. Well, let's start training our model. First we'll set some general settings, for example for how long we want to run the model. We automatically get thrown into this attached statistics dashboard, where we can see various kinds of metrics, and this might look a little confusing at first, but Robert is here to walk us through it step by step. Thanks, Martin. So here we see three different boxes. At the top we have the statistics for the training layer; that's the overall performance of the model. Here we can see things such as the input to the model or the current accuracy; we can also see the network output in blue compared to the labels in yellow. To the right of that we see the same thing, just averaged over many samples, to give you a nice distribution; if you want to keep track of how the model is progressing, you can swap out your accuracy and see it there. At the bottom right we have something which we call the view box; it's like a peek inside your model, where we can look into all its different parts. You can select which part you want to look into by clicking on the map over to your left. This gives you full transparency into what's going on inside your model while it's training, and you can see things such as the outputs, the biases, or even the gradients.
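To make the "training layer" concrete: supervised learning of a two-class model like the red-fedora classifier boils down to fitting weights against labeled samples. The toy sketch below is a one-feature logistic regression in pure Python, invented for illustration; it is not PerceptiLabs code, and a real image classifier would use a deep network over raw pixels.

```python
import math

def train_red_classifier(samples, labels, epochs=500, lr=0.5):
    """Toy supervised 'training layer': logistic regression on a single
    feature (say, mean red-channel intensity in 0..1), trained by
    stochastic gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid output
            w -= lr * (p - y) * x                     # gradient step on weight
            b -= lr * (p - y)                         # gradient step on bias
    return w, b

def predict(w, b, x):
    """1 = 'red fedora', 0 = 'not red fedora'."""
    return 1 if 1.0 / (1.0 + math.exp(-(w * x + b))) >= 0.5 else 0

# Very red images labeled 1, not-red images labeled 0
w, b = train_red_classifier([0.9, 0.85, 0.8, 0.3, 0.2, 0.1],
                            [1, 1, 1, 0, 0, 0])
```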
If you ever want to, you can just pause your model, change some things up on the workspace, and then keep on training. OK, so it looks like the model has finished its training, so let's go to the test view to see how it would perform against real, live data. Now, it hasn't really trained for that long, so it's not going to be that good, but if you want to know how it works, we have a clickable map right here as well. And if you're satisfied with your model, you can scroll over here and export it as either raw TensorFlow model files or a container image that comes with a full API. So the key points we want to highlight are simplicity and efficiency: a modeling and monitoring dashboard that helps with both understanding and research, all built upon a very flexible core allowing you to custom-edit every component, and finally the ability to perform cutting-edge AI on OpenShift. OK, Martin, do you want to talk a little bit about how the audience can engage with this? Sure. This model is hosted on OpenShift, and we can communicate with it via Twitter. So we gave this a try and uploaded a picture of Robert wearing a red fedora, and we got the response that it classified it correctly. You can try this out yourself: just go to Twitter, upload a picture, and mention @fedorafinderbot, and it will tell you whether it thinks there's a red fedora in the picture or not. This will be running through the entire day, so have fun with it. We will be hanging around the emerging tech booth if you want to come and see us. So in closing, we just want to say: containerize it. Thank you. Please welcome H2O.ai chief executive officer and founder Sri Ambati. [Music] What an amazing demo from PerceptiLabs. Thank you for having us, Red Hat; we're so excited to be here as a partner of OpenShift and to bring AI to everyone. Open source is about freedom, not free.
It's about culture, code, and customers. Ardent customers like you have supported us over the last seven years, and supported Linux for the last few decades; thank you for your support, community. We are really excited to talk about Driverless AI: automatic machine learning has come of age. There are five common steps for large and small enterprises to use AI. First, you need to drag and drop your data, so connectors to your data sources are key. Then you want to do quick data science: you want to understand how your data is laid out, and automatic visualization allows you to understand your data in quick, simple ways. Data scientists are the most wanted talent across the world today, and automatic machine learning augments them, enabling them to build models of high quality and deploy them quickly as scoring-engine pipelines. H2O Driverless AI allows you to prevent the common pitfalls in doing data science, like overfitting, and automatic feature engineering gives you that extra edge to get your model into production. Finally, trust is so important in AI: your models are not trustable if you cannot really interpret them and explain them, so explainability is key, and that's the final step before you can take a model to your business user. Let's look at how this works in real time. We deployed our software on the OpenShift Container Platform, and Driverless AI allows you to connect to several data sources, whether traditional file systems, on the cloud, open source object stores like MinIO, or time-series databases like kdb+. We have a simple demo set up here to understand sentiment analysis of the conference, a classic Twitter dataset. A correlation graph allows you to understand the hidden structure inside your data and its dimensions. Let's run a simple prediction, with a classic test-and-train split on the dataset, looking for conference sentiment. We'll just look at text; NLP is at the heart of all AI. Obviously you can use not just one framework but multiple frameworks, including H2O open source, XGBoost, gradient boosting machines, TensorFlow, PyTorch, and your favorite algorithms of choice.
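The conference-sentiment demo runs real NLP models at a scale far beyond this, but the underlying idea of scoring text can be caricatured with a tiny lexicon-based scorer. The word lists below are invented purely for illustration; nothing here is H2O's API.

```python
# Hypothetical tiny sentiment lexicon -- illustrative only
POSITIVE = {"awesome", "beautiful", "spectacular", "excited", "great", "love"}
NEGATIVE = {"bad", "slow", "crash", "broken", "hate", "bug"}

def sentiment(tweet: str) -> float:
    """Score in [-1, 1]: (positive hits - negative hits) / total hits,
    or 0.0 when no lexicon word appears."""
    words = [w.strip(".,!?#@").lower() for w in tweet.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)
```

A real model replaces the hand-written lexicon with learned features (n-grams, embeddings), which is exactly the feature engineering Driverless AI automates.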
H2O's Driverless AI now allows you to tune and eliminate features: there are columns that are collinear, and common pitfalls in how you do your test-and-train division and your validation set, so it prevents leakage of data and signal. Time series, text, and transactional data need a lot of smart recipes that can pull the best signal from your data. Right now you're seeing automatic feature engineering from the data, pulling up signal so you can quickly improve the accuracy of your model. We ran an experiment earlier and deployed it on OpenShift; about 400 features were analyzed in just a matter of a few minutes. Several of our customers have built models to democratize credit, to prevent fraud, and to save lives, and you'll see one of our customers later who's using it to fight sepsis across multiple hospitals. Now it's time to deploy the model. H2O models generate code automatically, so you can deploy them as real REST endpoints. Red Hat Summit has been a beautiful, awesome experience for all of us, and we're really excited to show that the conference sentiment has been spectacular. Finally, OpenShift allows us to truly democratize AI and take it to where developers, data scientists, and DevOps for machine learning can truly get it. AI is a very powerful way to build code: data is automatically generating code for you using AI, and we're really excited to democratize it and make it for all. Thank you for having us. [Applause] Creating ecosystems of solutions for our customers will always be important to Red Hat. However, it's also important to consider how our customers are innovating with our ecosystem partners, customers, and the broader open source community. We're creating the trusted platform for the next cycle of innovation. Together we're enabling better hardware to drive AI and IoT, and I know it's a little cliché, but in ways we've never seen before.
We're enabling ops to be even more efficient by peeking into the future to predict issues before they become very real, very expensive problems. We're giving developers the tools they need to train their applications to take advantage of historical, shared data to make decisions. We want to create the right primitives and connective tissue in the platform to enable a broad ecosystem of specialized solutions. At Red Hat, the key upstream open source projects we focus on in this area are Kubeflow and Open Data Hub, which you can learn more about today in the emerging technology track. Now, all of this innovation delivers value to organizations and to customers, but what is value? We can talk about money, saving money, or expanding and creating new sources of revenue, but technology and innovation aren't only about money. I want to talk about value that's beyond financials. Value to some organizations is related to health, where delivering value means keeping people alive. It's important to enable innovative customers who understand how innovation impacts both technology and people, like HCA Healthcare, who is one of our Innovation Award winners. To tell their story today, I'd like to introduce Dr. Jackson. Please welcome 2019 Innovation Award winner HCA Healthcare chief data scientist Dr. Edmund Jackson.
Morning, everybody. When the organizers at Red Hat told me that the theme of this conference was "expand," I did not really appreciate that this meant the stage as well; this thing is vast, I could get lost up here. I'd like to start on a personal note, to thank the open source communities: both our commercial partners, such as Red Hat and Sri from H2O.ai, whom you heard from just now, but also the non-commercial projects. The list of such projects upon which our work relies is long: GNU/Linux, Clojure, Elixir, Kafka, even our good buddy JavaScript. Without the contributors, the moderators, and the maintainers of these projects, our work would be impossible, so thank you. [Applause] It's my privilege this morning to represent a company, HCA Healthcare, a Nashville-based healthcare provider operating 180 hospitals and about 1,800 sites of care across America and the United Kingdom, and to tell you one of our stories: SPOT. One of the themes of our organization is a relentless commitment and energy towards improvement, and a few years ago our clinical leadership decided to take a bite out of sepsis. Sepsis, as you can see, is the body's overwhelming, toxic reaction to a bloodstream infection. It's little known, but deadly. When thinking about how to defeat sepsis, there are two important things. One, every hour that it goes untreated results in an increase in risk of mortality of between four and seven percent. But two, if you can detect it, treatment is relatively easy, as far as these things go. So the name of the game is rapid identification and even more rapid treatment. We looked into our data to try and understand how we were doing. Here's the data for one hospital: rows are days, columns are hours of the day, and the numbers in color represent the number of sepsis screens performed at that time. A sepsis screen is a unit of nursing work in which we look for sepsis in our patients, and you can see we're doing this at 8:00 a.m. and 8:00 p.m., shift change.
Now, there's 12 hours between them, and in the fight against sepsis, 12 hours is too long. But our answer couldn't be, "Hey nurses, could you please do some more sepsis screens?" because nurses are already the most heroically busy people basically on the planet. So we had to get smarter. We looked deeper into our data: we used our data warehouse, we looked backwards in time, and we applied a pretty simple algorithm, and we saw, hey, we can see sepsis in the data. If only we could do that in real time; if we could do it in real time, we could alert the nurses, we could coordinate a workflow, and we could give people the time advantage necessary to fight sepsis. If only we could do it in real time. Our IT teams, networking and data, said, "No worries, we've got this." They pulled data from all of our hospitals in real time to our data centers, with five-minute latency. That's awesome; now we can do this. The SPOT product teams pick up that data and create a patient object that represents every single person under our care, at all times, every hour of every day, and with every single transaction they update that state and look for sepsis. If it's detected, SPOT coordinates a pretty complex symphony of action and care in the hospitals to make sure timely care is provided. All of this is done on OpenShift, all of it. Now, the key point here is that data by itself is worthless; absolutely nothing counts in this world except action, and what SPOT provides is not just an alert or an algorithm, it's the coordination of a very complicated workflow. It coordinates nurses, who care about all of the care of one patient; sepsis coordinators, who care about sepsis across all of the patients; and a management structure that cares about the performance of the entire system. But my favorite part isn't the lines of workflow, it's the lines of communication between those teams, and also back from those teams to us: in the application, people can send us a message, ask for feature requests, report bugs, or just say hello, which sometimes they do.
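HCA's actual SPOT algorithm is not public, so as an illustration only, here is a sketch of the kind of rule-based screen described above, using the standard SIRS (systemic inflammatory response syndrome) criteria long used in sepsis screening: flag a patient when at least two of four vital-sign criteria are met.

```python
def sirs_screen(temp_c: float, heart_rate: int,
                resp_rate: int, wbc_k: float) -> bool:
    """Classic SIRS screen: flag possible sepsis when >= 2 criteria hold.
    (Illustrative only -- not HCA's SPOT algorithm.)"""
    criteria = [
        temp_c > 38.0 or temp_c < 36.0,   # abnormal temperature
        heart_rate > 90,                  # tachycardia
        resp_rate > 20,                   # tachypnea
        wbc_k > 12.0 or wbc_k < 4.0,      # abnormal white blood cell count (x10^9/L)
    ]
    return sum(criteria) >= 2
```

In a real-time pipeline like the one described, a rule such as this would be evaluated on every transaction that updates the patient object, so a positive screen can page the care team within minutes rather than waiting for the next shift change.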
And as a standard practice, an unheroic part of our business, we can turn around those feature requests the same day. Zero-downtime major version upgrades: with OpenShift and our culture of empowerment, we've done that. But all of this technical juju provides one thing: a time advantage to our clinicians. What we provide them is a five-hour head start in treating sepsis, and in the hands of clinicians, five hours saves lives, every day, across over a hundred and fifty hospitals. SPOT the algorithm isn't the story here; SPOT the platform is. We have created a system that will provide high-reliability care at scale and allow us to radically transform the way that we, with our providers, improve lives and provide care; for us, it's the care and improvement of human life. Thank you. [Applause] At this point I was promised somebody would come and talk to me. There he is, thank you. All right, Dr. Jackson, I love this story, and one of the things we were talking about is that the introduction of AI into the workplace, and technology in general, can actually make people uncomfortable, even a little afraid. So one of the things I wanted to ask you about is: where do you see AI in the workplace and the concerns about job loss coming together? What does that look like for you at HCA? That's a really key question, Chris. I mean, we deal with human lives; it's a sacred responsibility, and I hope that SPOT provides an example of the right way of doing it. We try to let the computer do what it does best: look through data, identify patterns, coordinate complex workflows. That frees our people to do what they do: provide empathy, provide care and dignity for people who are sick. And I think, at this sort of Promethean time of AI, if we as engineers and creators hew to that line of trying to enhance humanity rather than compete with it, we'll be fine. Thanks very much, Dr. Jackson.
I love that. So that's computers doing what they do best, crunching numbers, and nurses doing what they do best, providing health care and empathy; really, we're talking about machine-enhanced human intelligence, and I think that's a great way to think about it. So I want to expand a little bit on healthcare and the implications of the work you just saw. We know that data is valuable: it helps you learn, it helps you do important work. Some data, however, is sensitive, private data, like patient records, for example. The tension here is that you need to be able to use this data while respecting the privacy of the party it's tied to. Patient records have to be kept confidential, for both legal and ethical reasons, but we must study them; the future of medicine depends on it. Red Hat, along with many other partners, has been working on ways to share and generate new knowledge across businesses around the world without sharing the data that must remain private. The result of this work is secure multi-party computation, or MPC. MPC is a cryptographic solution wherein multiple parties jointly make computations using private data from each party, some secret number, for example. Together they want to compute a function using these numbers, but they don't want to reveal their private data. I know it sounds a little bit like a magic trick, but let me quickly convince you that it can actually be done. Let's say I have a group of people who want to compute their average salary. I choose a random number, add it to my salary, and pass the result to the next person. They pick a random number, add that number and their salary to the number I gave them, and pass the result on, and so on around the room. At the end, we have one very large number. Now I subtract my random number and pass it on; my neighbor does the same, and so on around the room again. When the result comes back a second time, it's now the sum of everyone's salaries, which we can divide by the number of people. In the end, I know the average salary of everyone in the room, but no one has shared their salary with anyone else.
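Chris's salary-averaging walkthrough maps directly to code. Below is a sketch of that two-pass ring protocol; note that in a real MPC deployment each party would hold only its own mask and see only the masked running total, whereas this single-process sketch holds everything in one place just to show that the masks cancel.

```python
import random

def secure_average(salaries):
    """Ring protocol from the talk: on the first pass each party adds
    its salary plus a private random mask to the running total; on the
    second pass each party subtracts its mask. The masks cancel, so the
    final value is the true sum -- yet no intermediate total reveals
    any individual salary."""
    n = len(salaries)
    masks = [random.randrange(10**9) for _ in range(n)]  # one private mask per party
    running = 0
    for salary, mask in zip(salaries, masks):  # pass 1: add salary + mask
        running += salary + mask
    for mask in masks:                          # pass 2: remove each mask
        running -= mask
    return running / n
```

As Chris says, real protocols generalize this to arbitrary computations far more efficiently, but the masking-and-cancelling idea is the same.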
Now, you have to take my word for it that you can do this with any calculation, using more efficient protocols. You can imagine a city doing traffic planning using data from ride-sharing companies without direct access to that data, or patient info shared across multiple hospitals without breaching privacy laws or revealing sensitive information. Now, as fun as it is to talk about the theoretical use of technology, I'd like to make this real. Let me bring up someone who is exploring MPC right now to improve patients' lives: please welcome Dr. Grant. Please welcome Boston Children's Hospital director of the Fetal-Neonatal Neuroimaging and Developmental Science Center and professor of radiology and pediatrics, Dr. Ellen Grant. [Applause] Good morning. Thank you, Chris; it's wonderful to be here and to talk to you about why we think multi-party compute is critical to the future of medicine. There are two key ingredients to this future: one is Red Hat's infrastructure, and the other is ChRIS, a platform we've been developing at Boston Children's Hospital. Unfortunately, Chris, ChRIS has no relation to you. What ChRIS does is take data in the hospitals, such as images, and allow us to compute on the cloud. It leverages OpenStack for rapid analysis, OpenShift so we get reproducible results, and multi-party compute for secure computation across multiple enclaves. Let me walk you through an example of how this would work; it's almost in production now. We have a ten-year-old boy who arrives in the emergency room with his parents. He's had multiple seizures, and of course they're very distraught. One of the first things we do is get images of his brain to make sure there's nothing grossly wrong. What you see behind me are multiple images from one of the many volumes of images that we acquire: one from the side, one from the front, and one from the top. So as a radiologist, I would be sitting now in the reading room looking through these in detail, about 5,000 images in total, looking for any subtle abnormalities.
I look closely, I review, and I don't see any abnormality, so I call it normal. The patient has blood work done, has a neurological exam, and is basically normal at that point in time, so he's sent home on a standard anti-seizure medication. However, over the next weeks, months, and years, his seizures are not controlled. They bring him back, they try multiple different medications, he gets more MRIs, he gets other tests such as the EEG and EMG; no focal onset is found, and so he goes on to have continuous, poorly controlled seizures that result in poor school performance and incredible family hardship. Now, is there anything we could have done differently in this scenario? If I had more data and more compute, could I have picked up something subtle earlier and guided more targeted therapy? Now imagine this child is in the emergency department again; an MRI is done and the images are sent to me. While I am visually scrolling through the images looking for a subtle abnormality, I pull up a web interface called ChRIS. I select this child's images, I go to the different plugins that we have, which are all in containers, and I decide to choose one plugin called FreeSurfer, which does a detailed neuroanatomical assessment of the brain. I point and click, the data goes off, and before I'm even finished reviewing the images, which takes me around 15 to 20 minutes, the results are back, and this is the colored image you're seeing up here on your left: I now have a detailed characterization of that individual child's anatomy. Some key features to notice: it was easy, I just pointed and clicked in a web interface; using OpenStack, it is rapid; using containers, it is reproducible. Those are three key features needed to get anything to the front line of medicine. Now I have this boy's brain characterized in far more detail than I could ever do by eye; even as an expert doing this for 25 years, I could not remember or even perceive this kind of detail.
And I've characterized all these functional regions of the brain, with volumes, surface areas, and incredible detail, and again, this was easy, it was rapid, it was reproducible: point and click, back to me on the front line. But we don't stop there. I now know the detailed anatomy of this one individual, but I want to compare him to other individuals, to see whether he's within the normal range or whether there are subtle abnormalities. What if I could get similarly detailed information, not just from my patient records, but from many hospitals across the country, across the world, working together? This is extremely important in pediatric medicine, because we have many rare disorders that may be seen at only a couple of hospitals, and if we want to really get a good comparison across demographics, I have to bring multiple hospitals' data together, or my data will be skewed. For example, if I want to match the same gender, the same age, and the same ethnic background, I need to pool resources across multiple hospitals to be able to get that data together. If I can put all that data together and access it, we have an amazing depth and breadth of comparative data that I can use to guide my decisions. So through our collaboration with Red Hat, we've now taken ChRIS to the next level: it communicates with the MPC infrastructure, and encryption is built into how ChRIS manages the MPC compute. We have set up a set of enclaves in the cloud with collections of brain data. Each enclave participates, but as Chris mentioned, no data about any individual is disclosed, so I maintain patient privacy at each of these individual hospitals. Now imagine I'm sitting there at the front line. I've analyzed my data, and I send it to the enclaves to compare my index case, the patient I'm seeing, against this wealth of knowledge that we are now sharing. The first question I ask is: how different is this brain from typically developing kids of the same age and background?
And I get the results here: red is keyed as standard deviations above normal, and blue as standard deviations below normal. So now I've exquisitely characterized how different this child's brain is from a collection of normal kids, in a much more detailed way than I could ever do by eye. But again, we don't stop there. I go back to these multiple enclaves and I search for matches with the same pattern of abnormalities and the same history of seizures, and we flag a result: a collection of cases from different hospitals, only a few, that suggest this may be very similar to a particular genetic disorder. We then get a blood test on the patient, and it confirms that rare genetic variant, which then allows us to give a specific, targeted gene therapy that addresses the poor function resulting from that gene. So the child is given that gene therapy within, say, two to three weeks of us actually doing the MRI, his seizures are controlled, and he goes on to lead a productive life without the disruption of multiple tests and failed medications. Although this is a theoretical example, the images are from an actual MPC calculation our developer created, performed securely across prototype enclaves. Now, this is the future of precision medicine. This is what we want to do, and it's not possible without the Red Hat infrastructure and ChRIS to bridge those two worlds together. One other important point is the open source community, because our lead engineer, Rudolph Pienaar, is working hand in hand with the Red Hat engineers, so there are no black boxes, and that's another critical point in medicine: I need to know what happens to my data, I need to trace it through, so that I understand the analysis that I get. Working together in open source, yet encrypted, environments has now helped us share our collective knowledge to better serve and save lives while protecting individual identity.
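The red/blue "standard deviations above or below normal" maps Dr. Grant describes reduce, for each brain region, to a z-score of the index patient against the pooled cohort. A minimal sketch of that comparison (the region names and values are invented for illustration):

```python
import statistics

def z_score(index_value: float, cohort_values: list) -> float:
    """Standard deviations above (+) or below (-) the cohort mean --
    one value per brain region, rendered red or blue in the talk's maps."""
    mu = statistics.mean(cohort_values)
    sigma = statistics.stdev(cohort_values)
    return (index_value - mu) / sigma

# e.g. hippocampal volume of the index patient vs. a pooled normal cohort
cohort = [10.0, 12.0, 14.0, 16.0, 18.0]  # hypothetical regional volumes
```

The point of the MPC enclaves is that the cohort mean and standard deviation can be computed jointly across hospitals without any hospital revealing its individual patients' values.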
Together we are changing how healthcare works, and it's about time. Thank you. [Applause] [Music] [Applause] These are awesome stories; thank you, Dr. Grant. So these technological advancements, and more importantly how they're applied, are obviously having a massive impact. As you've just seen from HCA Healthcare and Boston Children's Hospital, the innovation here is changing lives: the lives of patients and their health care providers. So it's really about fundamentally changing these organizations, and, well, you really heard it from Dr. Grant: it's disrupting their entire industry. And it doesn't stop there. This massive shift of machine-enhanced human intelligence is making data actionable and usable, pairing it with truly innovative approaches, and it's happening across many organizations; it's having an impact on those organizations' industries as well. Telecommunications, banking, even the way you interact with your car: they're all undergoing disruption thanks to innovation and thanks to Red Hat's solutions and platforms. Take something as elemental as the human voice, how we communicate. Sure, there have been advancements: SMS, instant messaging platforms, emojis, GIFs; these all help us communicate, but there's room for innovation yet: enhancing the human voice itself, changing not only how we communicate, but opening the doors on who we communicate with, in ways we couldn't easily do before. Please welcome Guillaume and Vasily from Optus to the stage; I think you're really going to enjoy what they're up to. Please welcome Optus senior innovation manager Guillaume Poulet-Mathis and Optus principal software engineer Vasily Chekalkin. [Applause] Good morning. Morning. For thousands of years, people have been talking to each other. In ancient Greece, Aristotle was walking with his pupils and having conversations that would revolutionize the world. Later, conversations were carried by other means: in the 19th century, Graham Bell invented the phone call.
Telephones have been used to carry conversations ever since, whether it is a conversation about love, about war, or to make peace. In the modern world we have a multitude of ways to carry conversations: chat, emails, social media, you name it, but phone calls are falling out of fashion. There is this perception that phone calls are dead; in fact, they lag in terms of features: no emojis, no chat history, no multitasking. On the other hand, our voice is the most natural way of communicating, and many organizations are working on improving their products using voice: voice assistants to answer your questions, voice biometrics to authenticate you with your bank, voice bots to improve service. So a little while ago at Optus, we asked ourselves: can we bring modern voice technologies to the phone call? We have the capabilities to establish and carry native phone calls; we have towers, data centers, fiber channels. But our mindset hadn't changed: phone calls had remained wires and switches. So how might we, as a provider of communication, challenge ourselves to rethink the phone call, to leverage our core capabilities and open our mindset to lead innovation within our very own network? Today we'd like to show you a step change in how we see the phone call. "Hey mate, how's it going?" "Great, and feeling a little bit nervous; I hope this demo goes well. You know, we are making this call because we want to demonstrate some of the cool things that we are doing at Optus." "Let me keep a record of this. Voice genie, start taking notes." So this phone call is now being transcribed in real time. "Yes, and look at the potential: we just made this phone call open and digital." "You mean that we can integrate this conversation into different systems?" "Yes, of course: email, calendar, contacts." "Can you develop, for example, a web service?" "Exactly." "Is this system only for English speakers?" "Of course not. Do you speak French? Voice genie, translate to French." This call is now being translated by Optus voice translate. [Speaking French]
"You're kidding." "No, I'm not." [The translated conversation continues about the Optus telecom network.] "Why not? Let's have a conversation about it." "Pourquoi pas? Nous devrions expliquer à nos amis de Boston comment ça marche." "We should explain to our friends in Boston how it works." So, phone calls are a little complex: voice is not asynchronous, you have to do it in real time, and please do not drop too many packets or add jitter, otherwise you will have very bad quality. Let's look at an example. John is in Perth, and he wants to call Mary to tell her about the weather. Mary's in Sydney, sipping a coffee and checking her emails. When John starts the call, his cell phone sends a message to the tower, and this message gets carried all across the country to Mary via the mobile core. When Mary answers the call, an RTP media connection is established between a Western Australia exchange and New South Wales. There are 4,000 kilometres between them, and if you don't want bad quality of service, you need to be cautious with latency. To avoid additional latency for the call, we must be able to deploy our virtualized media functions on the same path as the call, and that brings additional challenges. As a software developer, you want a simple way to package your software in a portable format; from an infrastructure point of view, you want to deploy this package in multiple geographical locations, have convenient means of upgrades, and monitor the health of the platform; and from a carrier point of view, you want reliability and support. Which brings us to containers and OpenShift. We package our virtualized network functions into containers, then we distribute them across our exchanges as OpenShift pods, and by doing so we get the benefits of a single platform to build, scale, and monitor our software. We do all this while introducing a repeatable software development lifecycle for a new generation. By abstracting the complexity of our network and harnessing software advances, we are opening a safe environment for software developers to build, deploy, and operate a new breed of telco applications.
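The latency concern is easy to quantify: light in glass fiber travels at roughly two-thirds of its vacuum speed, so 4,000 km of Perth-to-Sydney fiber adds on the order of 20 ms of propagation delay each way before any processing happens. That floor is physics, which is why media functions must sit on the call path rather than in a distant data center. A back-of-the-envelope sketch:

```python
SPEED_OF_LIGHT_KM_S = 299_792.458   # vacuum speed of light
FIBER_VELOCITY_FACTOR = 0.67        # light travels ~2/3 c in glass fiber

def one_way_latency_ms(distance_km: float) -> float:
    """Minimum propagation delay over fiber, in milliseconds --
    a hard floor that no amount of software can remove."""
    return distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_VELOCITY_FACTOR) * 1000.0

# Perth to Sydney, ~4,000 km: roughly 20 ms one way, ~40 ms round trip,
# before queuing, codec, and processing delays are added on top.
```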
This is a unique opportunity to uplift legacy equipment into the digital age. Thank you for your time today. [Applause] Come on, that was awesome! All right, thanks Guillaume and Vasily. When you think about how Optus is changing phone conversations, you can see how new expectations for an old medium led to a game-changing innovation. It's remarkable to see how we're still iterating on something that's inherent to humans: communication. Now, the basic premise of banking has also remained consistent over the years, but advancements in digital technologies have led to a whole new set of expectations and places where you must meet your customers. This is something that Emirates NBD is acutely aware of, and they happen to be another one of our Innovation Award winners, so please join me in welcoming Ali Rey to discuss how they're meeting shifting expectations in a long-standing industry. Please welcome 2019 Innovation Award winner, Emirates NBD vice president cloud platform, Ali Rey. [Music] Great to be here. All right, so thanks for joining me. Emirates NBD started a four-year digital transformation project; can you tell us about the goals and milestones of this project? Absolutely. Emirates NBD is a leading bank across the Middle East. Over the last few years, with rising customer expectations and increased competition, we realized as a bank we needed to change, so with that, in 2017 we started a four-year digital transformation. One of the first key tenets of that transformation was launching our private cloud based on Red Hat OpenShift technology. We named it Sahab, which is Arabic for white cloud, and it was very much the enabler for our digital transformation. So you just mentioned Sahab, my first Arabic word, and that was a major enabler for your transformation ambitions; can you walk us through some of the challenges and maybe some of the benefits you've seen over the last 18 months? Sure. So we worked very collaboratively with Red Hat: we took the best of our internal IT talent and brought some great engineers from Red Hat on board to deliver our private cloud, which was very much a first for the region. From there we looked at how we can deliver on our strategic goals. We want to really push forward with always-available banking, very similar to utilities such as electricity or the internet: how could we serve our customers 24/7, so banking whenever they needed it? But we also wanted to pivot: how could we change and innovate at speed, how can we deliver faster? We wanted to deliver in days and weeks, not months and years. These two strategic goals were very much the overall view of how the bank wanted to move forward with its international growth and expansion. So you mentioned Sahab, and it's a key part of Emirates NBD's infrastructure; now, how will you use this private cloud to personalize banking? Great question. So Emirates NBD is continually looking at ways of pushing the innovation boundaries by adopting open source technology, and a great example of this is WhatsApp digital banking. We built this using plug-and-play APIs on our cloud within three days; this would have taken weeks, months, even years previously, and we've now launched that as a global first to all our customers. We're now looking at how we move, over the next couple of years, ninety-five percent of our applications over to our private cloud platform by the end of 2020. So you've called us more than a vendor, so thank you. Throughout this journey with Red Hat, what have you learned, and what are you looking forward to? So working with Red Hat was amazing, a fantastic experience. I think, me personally, what I learned was how we change the culture of the bank, how we drive forward with a new culture; and the culture that Red Hat brought really worked well with us and enabled us to achieve anything we wanted to do and move towards our vision and our goals collaboratively, which was great. Looking into the future, we've got some goals: as the public clouds become available in the region, we want to work with the partners and move more towards a hybrid model; we're looking at how we enable real-time banking with Kafka; and finally we're working with a lot of our partners and vendors to move more of our critical workloads onto our private cloud, such as our core banking system. So an amazing opportunity over the next year that I'm very much looking forward to. That is awesome; talk about expanding your possibilities. Thank you. [Applause] So one thing you've seen today is that a single innovation is just the beginning. For many of our customers, to deliver value from that innovation, Emirates NBD had to continually transform their business, using data to personalize their customers' experience. This data- and customer-centric approach is important in every industry, including the auto industry. Our next guest, Dr. Lenk, is here to discuss how this transformation has taken place at BMW Group. Please welcome BMW Group lead architect connected vehicle digital backend, big data and blockchain, Dr. Alexander Lenk. [Applause] Hey everyone, I'm Alex, I'm with BMW. I hope you all know BMW; the company has been around for more than 100 years. We've been building cars, and have been doing this quite successfully, but in recent times not just the car but also the digital services have become more and more important to our customers, and even if you know BMW you might not know how the connected car and IT play into that. So first of all, we have a product; this product is called ConnectedDrive, and it's basically everything in your car that is a digital service and needs some connection to the back end. We have map updates over the air; we have a concierge call, where you can push a button and get a call connection to someone who helps you, maybe, make a reservation in a restaurant and so on; you can use your cell phone to open the car, or to heat it up in the morning when it's cold outside: all the things that are really required when you have a car. So ConnectedDrive has been out there for quite some time. The system itself started 20 years ago and was designed for a few cars, because 20 years ago all the digital services were not that important. Today they are important, and we have on these systems, which consist, by the way, of 300 microservices, about 12 million cars. Because we are thankfully selling 2.5 million cars every year, or IoT devices, very fun IoT devices to be honest, we're getting more and more cars onto our back end every year, and because digital services are so important for the company and for our customers, we in addition add new services to the existing fleet, and of course new cars also get these services. So what we end up with is basically growth of the request rate, on a yearly basis, of 30%. You can imagine, if you have traditional IT, with all these microservices running on shared infrastructure and all the processes behind them, and suddenly you get this immense increase of requests, that you run into problems, and you need systems that can deal with that. For us this basically means we have really 1 billion requests per week to deal with; at some point you just cannot tackle this with traditional IT. So we decided a few years back, we started our journey with Red Hat and with OpenShift in 2016, and decided to have our back end completely migrated to the OpenShift platform. We started with the connected-car application, migrating first, slicing our applications from monoliths to microservices and putting them on our OpenShift platform. We have four clusters worldwide, all running on OpenShift, and by the end of this year we want to be done with this migration. So it took us, let's say full-time migration, about two years, but it came with a big transformation of the culture of the company as well. Today we are running 12,000 containers, and we're not just having the four connected-car clusters but about nineteen clusters worldwide. Because we need to be more scalable in the future, we're really looking into OpenShift Dedicated; I think this is a product that helps us in the future when we need to scale even more. We want to utilize the public cloud, because this gives us the scalability and the resilience we need; we can localize clusters in different markets if we need to, and therefore serve our customers on a worldwide scale in the best way. We really think that the public cloud in many cases is the future for us, especially when it comes to data. You've heard Chris talking about data, how data is increasing over time, and we see this as well. In order to give our customers the best possible experience we really need that data: we need the data from the processes, we need the data from the customer, in order to really tailor the services to our customers and make sure they get the best possible service from us. We're building our data lake currently up in the public cloud, and of course it then makes sense to also have, on OpenShift Dedicated, some runtime for applications that can really utilize this AI platform we are building up there, because in the end it's all about creating a good customer experience, creating new services, and giving you basically the best experience possible. I'm very excited for the future and to see what comes; I hope you are as well, and maybe we see each other again in a few years, where we then explain how we use our AI platform to get you the services you need. Thank you. [Applause] All right, thank you, Dr. Lenk. Now, that's five back-to-back examples of how technological innovation is truly disrupting business and entire industries. The key ingredients to broad-scale human and economic change are transportation, how we move goods and people; communication, how we talk to each other; and how we manage power, money, time, and resources. A set of key changes in key industries is what really unlocks the next huge phase of humanity, and what that means for us, you and me, is that we will have seen massive change in our lifetimes. Sometimes we see change in unexpected places. I travel a lot, and not too long ago I was in Vietnam, using Google Translate to speak with taxi drivers. I wasn't just asking them to get me from point A to point B; we were actually trying to have something like a real conversation, and it excited me to see how much that kind of thing opens up the world. Think of how much more real-time translation, like you saw with Optus, would allow. And the most delightful thing about that is innovation never stops, not in the world and not at Red Hat. We continue to build the platforms that help make more innovation possible, and now we're going to show you what is possible today in the realm of event-driven AI and industrial IoT. I hope you've got your dance moves on, because you're gonna need them, so let's bring Burr's team back to the stage. Please welcome Red Hat global director of developer experience Burr Sutter, with Sanjay Arora, Hiram Chirino, Geoffrey De Smet, Stuart Douglas, and Paolo Patierno. All right, we're about to kick it up several notches. Are you guys ready? Well, that's not good enough: are you ready?! All right. I know you're a bunch of IT professionals, and we're gonna talk about that in a second, but let me tell you a little story real quick. One thing first, though: did you enjoy our RHEL 8 and OpenShift and Kubernetes-native infrastructure demos yesterday? Well, now we're going to show you something a little bit different. We have built an application in this case, and we're gonna show you how we built that application, and then we're gonna let you interact with it. You're gonna be my beta testers, because we're going to production here in just a few moments, so be ready with your phone. Now, there's one thing I know about the Summit audience: you're all some of the IT elite of the industry, meaning you know bits and bytes better than anybody, right? Even those arcane commands on the keyboard we talked about yesterday; that's what you know. But we actually live in a world of physical things, and as you saw a moment ago with BMW, we live in a world where things have to be connected. And when I say things, I mean things like you see in my simulated factory right here: we have large machines that might be paint-processing machines, might be metal sheet presses, they might be large industrial fans or conveyor belts. These things have to be monitored, and we have the ability now to monitor them at greater scale than ever before. But the hard part is: how do we process all that data? How do we receive that massive stream of data, make sense of that massive stream of data, and then do something about it in a real, business-positive kind of way? So be paying attention to that, because you're gonna see this dashboard a lot more. All right, but just remember: physical things have certain physical properties, especially in the large industrial Internet of Things technology like you see here, and one of the key elements to understand is vibration. Vibration is a leading indicator of machine failure; that is how you know it's going to break before it actually does break, because when it breaks it could cost you millions of dollars in machine outage. If you have a failure in this processing pipeline, you may have a situation where your customer orders are getting backed up, and of course those machines have to go into maintenance: a mechanic has to get routed to go solve the problem. More importantly, let me just make it easy for all you folks here in IT, because I'm an IT person too: remember that moment in time when you had an old car? Some of you may still have an old car, and you appreciate all cars, but you know that if you feel a vibration that seems a little bit odd, you probably take it in to the mechanic. And the rhyme I like to make is: when things go shaky, things go breaking. That's the theme of what we're gonna be showing you right now. Okay, so let's jump into the architecture diagram; I want to show you that. You can see right here we have our sensors, and guess what: you have a ton of sensors in that smartphone in your pocket right now. Pull it out of your pocket, because you're gonna need it in a moment; be ready. By the way, I should mention we have special prizes for people here in the room, and you've got to be present to win. So we're gonna basically monitor those sensors in real time. We have to receive a massive influx of data from your phone, and we're gonna do that through this technology called Node.js; one of our engineers built this amazing piece, you're gonna see it basically streaming that data in, and it's part of Red Hat Application Runtimes. We're gonna of course receive that massive influx of data with Apache Kafka, because you need something that can handle that at scale, and you're gonna see that with Red Hat AMQ. We of course have to filter the signal from the noise: we have to know that not all shaking is created equal. Some vibrations don't matter at all, it's everyday vibration, and some vibrations really, really matter, and we want to make sure we dial in on that through an AI technology you're gonna see here.
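The "when things go shaky, things go breaking" idea can be sketched as a simple threshold check on accelerometer readings. The demo uses a trained model for this; the following is only a crude hand-rolled stand-in, with made-up sample values and a hypothetical threshold factor:

```python
import math

def rms(samples):
    """Root-mean-square amplitude of a window of accelerometer samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def vibration_alert(samples, baseline_rms, factor=3.0):
    """Flag a machine when vibration energy exceeds `factor` times its
    normal baseline - a crude stand-in for the trained anomaly model."""
    return rms(samples) > factor * baseline_rms

healthy = [0.1, -0.1, 0.12, -0.08]   # everyday vibration
shaking = [1.5, -1.4, 1.6, -1.3]     # "things go shaky"
baseline = rms(healthy)
```

In practice the interesting part is exactly what this sketch glosses over: learning which vibration signatures matter, which is where the AI piece below comes in.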
Also, we're gonna talk about our back-end infrastructure, where we have Salesforce.com, where all our customer data is held, and we want to make sure that our customer orders which are in flight, which may be impacted by a machine outage, are at least part of the algorithm we use to decide whether we actually prioritize this event, to make sure that we solve that problem so we don't have customer satisfaction issues or loss of revenue; that's the key element. Also, if you look at the Quarkus element right there: you're gonna see us live-code that here on the stage. We're gonna build a new REST API endpoint, that's what you're gonna see from Stuart in a moment, and we're gonna show you what it means to take Java to a whole new level. Then of course we have to integrate these worlds, and this is where Red Hat Integration comes into play: we're gonna bring in that Quarkus technology with that new API, Salesforce.com, and our event-stream input, and we're gonna run all that on top of Knative with dynamic autoscaling; you're gonna see that happen live on the stage too, and you're gonna see us build that application integration component. Then, of course, once we've analyzed that data, we get the damage record, we understand all the information, and we put that in our data grid, and that's Red Hat Data Grid. Then we have another step, which uses OptaPlanner, part of the Red Hat business automation middleware, where we basically determine the most impactful, most efficient route to move mechanics from point A to point B, and of course you see that all on that dashboard you saw earlier. This is where we've worked incredibly hard to unify and create a coherent application environment, one underlying set of application middleware, so you can run all that on top of OpenShift, and that's what we're doing right now. You guys ready to get started? We'll go a little bit slower in a moment. Okay, so first I want to take you through AMQ. We have here on stage Paolo; Paolo is my AMQ Streams expert, my Apache Kafka expert, and he's gonna show us how we set that up on OpenShift. Paolo? Thank you, Burr. Let me show everyone how we solved the first challenge. First of all, our Internet of Things back end needs to be highly scalable, because thousands of sensors, like the ones in your pockets right now, can easily overwhelm our back-end infrastructure. But I'm not worried about that, because this is what Kafka is built to handle. We also want Kafka to be really easy to deploy and manage, so here we have OpenShift handling all our containerized infrastructure, and there is a new software operator, which is part of the AMQ Streams product, handling our Kafka cluster running on OpenShift right now. The AMQ Streams operator provides you a really simple way to deploy and manage a Kafka cluster running on OpenShift, with all the related entities, like for example topics and users, in a cloud-native way. Okay, so we saw software operators just yesterday in a big way, and I love the fact that we now have this software operator to run Kafka at great scale, the brokers, the ZooKeeper nodes, all that, so it's great for operations. But one of the things I hear from developers all the time with this Kafka thing is: it takes me weeks to get my IT department to provision a new topic, and that's a huge challenge; I don't want to wait as a developer. So how do we solve that problem, Paolo? Yes, absolutely, and we have been hearing the same story, so I'm excited to show you a new self-service console. Using this console, users can see all the topics already provisioned in the Kafka cluster running on OpenShift, with all the related data like partitions, replicas, and consumers, and users with the right permission can easily create a new topic: just fill in a name, let me say "sensor-stream", and then set, for example, the number of partitions, the number of replicas, and the data retention policy.
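Under the hood, an operator-managed topic is just a Kubernetes custom resource that the operator reconciles into the cluster. A Strimzi-style `KafkaTopic` for the topic created above might look roughly like this (the cluster name and retention values are illustrative, not the demo's actual settings):

```yaml
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: sensor-stream
  labels:
    strimzi.io/cluster: my-cluster   # the Kafka cluster the operator manages
spec:
  partitions: 10
  replicas: 3
  config:
    retention.ms: 604800000          # age-based retention: 7 days
    cleanup.policy: delete           # or "compact" for a compacted topic
```

A self-service console like the one shown can simply create or update resources of this shape, and the operator takes care of applying them to Kafka.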
Retention can be based on age or on storage, or you can use a compacted topic; then, if needed, you add some more advanced configuration, and then, just clicking on a button, the topic is provisioned in a few seconds. Oh wow. So right now we just provisioned a new Kafka topic with a few points and clicks; can you imagine that self-service empowering your developers at some point in the future? Thank you so much for that, Paolo, that is fantastic. My pleasure! All right, so up next, I mentioned that we have more of our back end to talk about, and we have some really cool things to show you there. Now we want to talk about a technology called Quarkus, where again we've changed the game for Java developers, and you're gonna see exactly why right now. So right now we have Stuart on the stage; he comes from way down under, and he's gonna walk us through what Quarkus is. Can you tell us about Quarkus? Well, Quarkus is supersonic, subatomic Java, where we've optimized Java for Kubernetes and OpenShift environments, with all your favorite frameworks and APIs, plus we can compile it down to a native executable. Wait, you said native executable? Tell me more about that. Well, there's no JVM involved; it's just a normal native Linux executable. Okay. Now, Quarkus is a really funny name, wouldn't you guys agree? Now, I'd like a better reaction; can you tell me where the name comes from? Well, a quark is a reference to the subatomic particle, because it's very small and light, and the "us" refers to the hardest part of software development: us humans. Okay, but it's incredibly smaller and dramatically faster, and it's not some funny Australian exotic animal? No. So, seeing is believing: I'm gonna show you what Quarkus can do by finishing off a half-complete Quarkus app live on stage. What we have here is the machine maintenance API of our application, which is a JAX-RS- and JPA-based REST app. It has two endpoints at the moment: one that lists the machines, and another one that shows the maintenance history. Now, at the moment this isn't that useful, because even though it tells us how much damage a mechanic repaired, it doesn't tell us what the final health of the machine was, so let's fix that now. As you can see from the URL here, this isn't actually running on my laptop; this is running on an OpenShift cluster, so I'm gonna do some cloud-native development. The first thing I need to do is connect my laptop to the cloud by running the remote dev command, and now we're connected. Now, the maintenance history is stored in the maintenance record JPA entity, so I'm going to add a new field here to store the health. Now I'm gonna go to my JAX-RS resource and expose that new field to the application. Now, if I just go back to my browser and hit refresh, we should see the change; so now I've got this final health field. I love that edit-save-refresh development model, it's fantastic. Yeah, and this is running on the cloud too, in an OpenShift pod; this is true cloud-native development. Awesome. So now we've got this information, but what I'm most interested in is the current health of the machine, so let's add a new endpoint to do that, and to do that I'm going to use something called Apicurio Studio, which is an API designer and part of Red Hat Integration. Now, you can see that I've got two paths there; I'm gonna add a new one, and add a path parameter which is of type integer. Now I'm gonna add a GET operation. This operation can have an ID, called currentState, which will translate into the Java name. Now I'm gonna set the content type, which will be JSON, and add a response type, which will be MachineState. Now I've designed my API, I need to get it into our application, and to do that I'm going to use the code generator. I'll just click through this wizard here, and the end result will be a pull request on GitHub; it just takes a few moments to generate. Once this has been generated I can review it as normal and see what changes it's made; you can see my new method there, and that all looks good. So I'm just going to be a bit naughty and merge my own pull request, and then sync it with my local workspace. You can see here we've got this new method that I just designed, so now let's implement it. Now, I've actually got a method here, calculateState, that has most of the logic we need, so I'm just going to call that, and I'll update this to include my new field. Go back to my browser, to my new endpoint, and we should see it... oh wow, I've got a query exception. Sorry, two secs... oops; that worked when rehearsing backstage. All right, we'll just pretend that didn't happen. So you can see here I've got my new API with the current health of the machine. Now, because I've been doing development, I've been working on the JVM with standard Java libraries, but you can also compile this down to a native executable. So if you have a look here, here's one I prepared earlier that's running on OpenShift using only 20 megabytes of RAM, and this isn't a hello-world app either: this is JAX-RS, JPA, transactions, all the stuff you need to write an enterprise app. And because it's a native executable it will also start really quickly, so you see here I can just scale this up to 20 pods, and if you have a look we can see these pods coming up now. This is a full-stack enterprise Java application deploying right now into OpenShift, scaling out, and you can see they're already running. So that is fundamentally game-changing, and this really matters a lot: we're building an application here that has to dynamically scale based on an influx of sensor data, so think about it, in a moment we're gonna basically pound this system to death, and if you don't have live scaling like that, with a super-small, super-fast runtime, it completely changes the way you think of Java, for sure. Thank you so much for that, Stuart, totally awesome stuff, hardcore live coding in front of several thousand people. Okay, we've got more to show you. All right, so up next is Hiram, and what we want to do now is talk about how we integrate these technologies: how do we take an API like you saw Stuart just create, as well as Salesforce.com, but also take that influx of sensor data and bring all those worlds together, so we can basically understand better what our damage looks like, where our repairs are, and how our repairs should be prioritized. So let me introduce Hiram; he's a key part of the engineering team, part of the Red Hat Integration team specifically, and he's also our Apache Camel guru here on stage. So Hiram, please show us what you have there. Hey Burr, I'm gonna demo to you guys how we can easily add Stuart's new API endpoint into an integration pipeline. This pipeline's job is gonna be to combine data from different systems so that we can schedule machine repairs. I'll be doing this with Fuse Online, which is a part of Red Hat Integration. Let me select the integration that we're gonna be updating here. Okay, so this first step here is connected to the Kafka topic that was created earlier; it's receiving all those machine sensor events. We then need to understand how customer orders are linked to machines, so we're gonna query our corporate asset management system in Salesforce, and that's gonna let us know the cost of failure for the machine that reported the event. We then combine all that data and map it into a repair record, which we then send to an API which deals with scheduling those machine repairs. What we'd like to do is improve this by prioritizing based on which machine is closest to failure, so let me edit this integration, and we can use Stuart's new API endpoint to get the current machine health and include that in the repair record. All right, I'll add a call to his API here; let me scroll down and select his API, and here's his new endpoint. All we need to do now is add a data mapping step, which lets us configure the input into his API; let me map machine ID into the ID parameter. Finally, let's also update this last data mapping step, which is combining information from the original sensor event and Salesforce and mapping it into that repair record.
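The mapping steps wired together visually here amount to a join-and-enrich: take the sensor event, look up the failure cost and the current health, and emit a prioritized repair record. A plain-Python sketch of that logic (field names and the priority formula are illustrative, not the real Fuse Online schema):

```python
def build_repair_record(event, failure_cost_by_machine, current_health_by_machine):
    """Combine a Kafka sensor event with Salesforce cost data and the
    machine-health API response into one repair record."""
    machine_id = event["machine_id"]
    cost = failure_cost_by_machine[machine_id]      # from Salesforce
    health = current_health_by_machine[machine_id]  # from the new health API
    return {
        "machine_id": machine_id,
        "damage": event["damage"],
        "cost_of_failure": cost,
        "health": health,
        # lower health and higher cost of failure => repair sooner
        "priority": cost * (1.0 - health),
    }

record = build_repair_record(
    {"machine_id": "E", "damage": 0.4},
    {"E": 100_000.0},
    {"E": 0.25},
)
```

The integration platform does the same thing declaratively, with the connectors handling the actual Kafka, Salesforce, and REST calls.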
So let's also map in the health field from the API response into the repair record, and now we're done and we can publish. In a few seconds this is gonna be running as a Knative service, thanks to a new upstream Apache project called Camel K. Okay Burr, we're published; we just need a whole bunch of sensor events sent into that Kafka topic. All right, so it's already deployed out on OpenShift, right over there? That's right. Okay, and we want to also see our Kafka Grafana dashboard, because we need a big stream of events flowing in, right? Let's make sure we've got that. All right, fantastic. You see there are no messages right now, if you look over here, so what we need is a huge influx of data. Just to give an example of what this looks like, I happen to have a ton of sensors right here in this smartphone. Should I give it a try? Are you ready? My console here shows the deployment's got zero pods running; as soon as you shake that phone... all right, this is Knative auto-scaling responding to a stream of events. So I'm gonna get shaking right now; here we go, pushing in accelerometer data, at great volume, into this application. Let's see what happens here. Okay, we've already spiked up to 334 messages per second on that side, and you can see a hundred pods coming online right now; there we go, sixteen available, nineteen, twenty-one: that's the dynamic auto-scaling of Knative happening right there, right now. All right, fantastic, look at that. What do you guys think of that? This is a great job: you shook that phone, generated a ton of Kafka events, and that caused Knative to scale. All right, the integration is up from zero, and did you notice how quickly those pods started up? That's because Camel K is also running those integrations using Quarkus: Camel K on Quarkus, dynamically auto-scaling with Knative Serving, 101 pods now available. Look at that, fantastic. Okay, for those of you watching on the live stream right now, this is where you want to tweet your friends and let them know some more cool things are about to happen, because we have even better things to show you. And I know all of you right now are anxious to be shaking your phone, aren't you? You're gonna get that opportunity, but right now we want to tell you a little bit more about our application. So, you've seen now how we architected the back end; let's talk a little bit more about the things you see on the front end, and we want to talk about artificial intelligence, specifically machine learning. We have here with us on stage Sanjay from the office of the CTO, where they worked really hard to train a very special model to help analyze certain vibrations and understand what that shaking needs to look like. So Sanjay, can you tell us more about the role of the data scientist and machine learning on top of OpenShift? Absolutely. So analyzing data like this for patterns is a perfect fit for some of the technologies we have been working on; one of those technologies is called Open Data Hub, and you can check it out at opendatahub.io. Now, the workflow of the typical data scientist involves data curation, exploratory analysis, model training, validation, and serving; OpenShift and Open Data Hub let data scientists do all of these using their favorite open source tools, and at scale. So let's see how one would get started with Open Data Hub: you can go to your OpenShift console, select the developer catalog on the left, search for Open Data Hub, and install it right there. This gives us access to a Ceph instance to store our sensor data, as well as Jupyter notebooks. These notebooks are the main interface between data scientists and their data, and they serve as the arena where all data modeling takes place. So let's take a look at a notebook. This is what a typical notebook looks like: you can write your regular Python code, and in this case we plotted the sensor time-series data.
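An exploratory cell in such a notebook might look something like the following sketch. It uses only the standard library and synthetic readings standing in for the Ceph-stored sensor data; the real notebook plots with the usual data-science stack:

```python
import statistics

# Synthetic accelerometer magnitudes standing in for data loaded from storage.
series = [0.1, 0.12, 0.09, 1.4, 1.5, 0.11, 0.1, 1.6]

summary = {
    "count": len(series),
    "mean": statistics.mean(series),
    "stdev": statistics.pstdev(series),
    "max": max(series),
}
# Readings far above the overall mean are exploration candidates for
# "damaged machine" labels later in the training workflow.
spikes = [x for x in series if x > summary["mean"] + summary["stdev"]]
```

This kind of quick summary-and-threshold pass is typically the first step before any actual feature engineering or model training.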
know I'm the greatest challenge we're getting data science and a IML integrated into your overall development you know cloud native development world is capturing data training a model you know performing iterations on that train model and coming up your hypothesis so how do we train our model for this application yes if we had to get a bit creative there this is a simulation we obviously don't have your own machine parts here so we want our audience members all of you to serve as proxies for the machines now it is very hard to tell someone to move like a broken machine versus a healthy machine so we picked a few specific moves that represent machines with various levels of damage would vibrate we spent the last few weeks training our models around these movements and the goal for all of you is to emulate them as well as you can but let's first take a look at how we train the model itself the first step was to collect the data and some of our colleagues wrote this great app that lets us do just that so as you can see on the screen we have a volunteer showing the shake motion and a user would click on train model execute the motion with the phone in their hands and submit the data which we would then use as our training data some very enthusiastic Red Hatters including BER generated a ton of data for us in our Raleigh office let's take a look those are models to train a model and at this stage we have training data so a data scientist can get to work we would go back to our notebook load the data refine it train our model and once we are satisfied with our models performance persisted and create a rest endpoint this resin point will then evaluate your moves so far we have a few thousand people in this room we should get them involved yeah yeah well I think this is where we reveal what our little application does and we get you guys a chance to connect our back-end you guys ready for it all right you got to go to red dot HT slash demo 2019 red dot HT slash demo 2019 
On your phone you will see that the game will begin shortly, because we have it paused right now. Are you ready? OK, let's turn it on. Let's see that first one. You'll notice the shake there; Sanjay will click on the shake icon, and all you've got to do is do it like you saw with Kyle when we were training our models. OK, and you'll notice you'll score points; you'll see the points show up there, and there you go, you've got the shake. All right, it's that simple. And you're probably thinking, wow, we have a whole data science team to worry about that? Exactly, this is how we do it here. OK, but actually there are a couple more fun things we added to the application. So if we now could push out and dynamically add the X and the circle, there we go, so there's your circle. All right, come on, get those phones moving; I don't see enough waving. Oh, now we've got a hundred. Yeah, I see a few selfie wavers now. All right, and now the X is kind of tricky: you've got to go kind of slow; if you're over-aggressive about it, it won't pick it up. Fantastic. OK, so we have tons of data flowing into the system right now. What do you think about that? Yeah, this is great, and I would like to clarify one point: each of you is associated with one machine. In my case I'm associated with machine E, which is in the lower left corner of my app. So the machine learning model actually thinks you are the machines, and when you move, it thinks the machines are vibrating. It then evaluates what move you made and how good it was, and updates the damage on that machine. Yeah, you can see machine E right there: it actually is orange, because the sensors are now picking up damage associated with it. All right, now I know you guys are having fun, but I need everyone to stop. Check that out, we took control of your phones. All right, be ready, there's more coming. Now we actually have to show you more of our back-end architecture here, because there's one
more thing that is incredibly important to understand. When you see all our machines up here on the big screen, you have to understand that we have to route mechanics in an optimal way. Just like you take your old car to the mechanic when it starts shaking, we've got to make sure that our mechanics move to these big industrial machines and start fixing them. So what we have here right now is Geoffrey, who's part of our Red Hat business automation team, the process automation team, and he's going to talk to us about OptaPlanner. So, routing repairmen might seem simple; after all, we have a prioritized list of repairs. But it's actually quite complex: we have a limited number of mechanics, we need to decide for each mechanic which machine he will fix, and there's a traveling time between all of the machines, so we need to do that efficiently. And there are also a heck of a lot of audience members here generating machine damage, so we need to handle that as efficiently as possible. We're going to use our business automation product, and specifically my baby, OptaPlanner, to do this. OK, so how does OptaPlanner help us with a problem like this one? Well, OptaPlanner is an AI constraint solver; it optimizes planning problems by using advanced algorithms. For example, it can solve the vehicle routing problem, in which we need to send a fleet of vehicles to a number of locations across the country, and when we do that, we want to decrease their travel time and their fuel consumption. OptaPlanner can reduce travel time and fuel consumption by 15% or more. Obviously, saving fuel is good for the environment, and it saves some of our customers hundreds of millions of dollars per year. Hundreds of millions of dollars per year, just by more efficient routing; that is fantastic. Now how does it make an impact on our mechanics? It's basically the same problem: instead of trucks we have mechanics, and instead of locations to go to we have machines
to go to. So it's the same problem: we want to reduce their travel time so the mechanics spend less time walking around and more time actually fixing the machines. This productivity boost allows us to keep all the machines afloat while you guys send in machine damage. So in the scenario set up here, you can see that all the machines are damaged. The three machines in the middle are heavily damaged: we have a red machine and two orange machines. But even the green machines have some damage, so there's actually a reason to send a mechanic to all of these machines. Now, which machine should be fixed first, and in which order should we fix them? You might think to just send a mechanic straight to machine D, because that's the most heavily damaged machine, but then he would have to head back to machine H and then on to machine E, and that's actually not efficient, because he would spend a lot of time traveling between those machines. On the other hand, we could find the shortest path, which in the academic world is known as the traveling salesman problem, but that's not going to be super efficient either, because then we might lose some of these machines. So we need to do something in between. So I'm going to add a mechanic, and I call this one Mario, and you can see the order in which Mario will fix the machines. Let me just zoom in for you. Here we go: Mario will first go to machine H, then machine D, and then machine E. As you can see, as he goes to machine D, he fixes machine H along the way, because that's just more efficient. I see it: one, two, three, and you see Mario standing right there by the break room. But what happens if one of our machines really takes a lot of damage and starts deteriorating rapidly, you know, really falling apart? Well, if all of you guys, for example, focus on machine C, it gets a lot of pressure. So let me simulate that: I'm going to put some pressure on machine C, and you can see
that OptaPlanner immediately changed its mind. That is fantastic; it still maintains an optimal route based on what it sees right now, in real time. Yes. So let's get back to our factory floor. OK, well, I think we should let the mechanic loose and see what it looks like. Here we go: as you can see, the mechanic starts fixing the machines in the order that OptaPlanner optimizes. Well, I think at this point, with mechanics in play, we're ready to start playing the game again. Hold on there just a second: there are thousands of sensors here in the room, actually a thousand and thirty. Yeah, that's a bit much for just one mechanic, so let me bring in a second one. OK, I call this one Luigi, the green one there. All right, fantastic. So as you guys know, we basically have this application running right now, with a thousand of you or more connected to the game. You can go to that same URL you see at the top of the screen right now and get connected to our game, because there are special prizes for the top ten winners. I mentioned that earlier; maybe you weren't paying attention, but now you can join the game. Also, I'll tell you one other trick: if you actually score all six of the moves you're about to see, you get a thousand bonus points. Remember that, those of you who really want to be hardcore about this. OK, now we need to get this game started one more time, so go ahead and turn it back on for me. Let's let people play a moment. All right, you want to make sure that you pick that shake motion and get that one nailed, or you want to pick that circular motion and make sure you nail that one too. There are like a thousand phones waving at me right now; that's awesome. OK, and of course you can see Sanjay playing the game there, he's knocking down some points, and wow, look at that one way up there on top of the leaderboard. Oh yeah, fantastic. OK, stop one more time. OK, you guys like that one, don't you?
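OptaPlanner itself is a Java library, and its actual solvers (construction heuristics plus metaheuristics such as tabu search) explore far more of the route space than this; but the trade-off Geoffrey described earlier, damage urgency versus walking time, can be illustrated with a toy greedy heuristic. All machine positions, damage values, and the urgency weight below are invented for the sketch.

```python
import math

# invented layout: machine name -> (x, y) position on the factory floor
positions = {"C": (5, 5), "D": (1, 1), "E": (2, 6), "H": (1, 2)}
# invented damage levels, 0.0 (healthy) .. 1.0 (about to fail)
damage = {"C": 0.9, "D": 0.7, "E": 0.6, "H": 0.3}

def travel(a, b):
    """Straight-line walking distance between two floor positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def plan_route(start, positions, damage, urgency_weight=3.0):
    """Greedy heuristic: always walk to the machine whose urgency best
    justifies the trip. The objective has the same shape as the demo's:
    fix urgent machines soon (damage term) without wasting time
    criss-crossing the floor (travel term). Pure damage priority and
    pure shortest path fall out as the weight goes to infinity or zero."""
    pos, remaining, route = start, set(positions), []
    while remaining:
        best = max(remaining,
                   key=lambda m: urgency_weight * damage[m]
                                 - travel(pos, positions[m]))
        route.append(best)
        pos = positions[best]
        remaining.remove(best)
    return route
```

With these made-up numbers the mechanic heads for badly damaged D first, sweeps up nearby H, and only then crosses the floor to C and E: the "something in between" pure damage priority and the traveling-salesman shortest path that the demo describes.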
All right, we have the power up here. Now, we wanted to challenge our AI team. We were like, OK, circles and Xs, that was easy; training that model to separate different types of vibrations wasn't that hard at all, actually. So we wanted to take it up a notch, and I challenged the team with a couple of different things. If you remember that famous movie from 1977, it's called Saturday Night Fever, with John Travolta rocking the disco stage. This is my era, people; that's what we're talking about here. So we've actually added a couple of other things, and right now I have a couple of other people who are going to come up on stage and help us do this. Come on up here. You know, we had some volunteers who wanted to make sure you guys understand how these motions are created; we're going to get everyone down here. Fantastic. OK, all right, now we're ready. Let's unpause the game. Show them the roll motion, there it is. This is the fever move, there it is. Oh wow, look at you guys going; as you can see, people are scoring huge points there. All right, pause one more time; again, we have the power. You guys are looking good up here, that's fantastic. Now, the younger people on the team were like, Burr, that seventies stuff is cool and all, but what about Fortnite? You guys with me? Your kids playing Fortnite? They learn those same moves I learned back in the day, but that's a different issue. They were like, well, we need to train this AI model to do one more thing. You guys ready for it? We want to show you the floss. Let's add it in there and get going. Here we go; if you want to win this game, you've got to score those points. All right, let's stop. Game over. Wow, look at all you folks. Think about this for a second: over a thousand people playing the game live with us right now, over 12,000 transactions going through our system, scaling out on that back end you saw us put together earlier with all that infrastructure, and of
course we had nine thousand five hundred recognized motions, which is absolutely incredible. And if you remember, all that Red Hat middleware that you just saw we pulled together into a coherent, unified application environment running on top of OpenShift, where you guys could interact with it directly. Absolutely fun stuff. So I know you're super excited about what you just saw; you can come to our booth, the Dev Zone booth, at 1:00 p.m. and actually get a behind-the-scenes tour of all the cool things we built in here. There are actually a lot more cool things you didn't even see today, so keep that in mind. All right: one p.m., Dev Zone. Now, I recognize we're IT professionals here; we don't all dance a lot, so catch your breath after all that movement. But you do innovate. Oh, and by the way, I should mention: if you see your name up here and on your phone, come see us at the end of the keynotes; don't rush the stage right now. We do have special prizes for our top ten winners. OK, but you do innovate as IT professionals, and one of the things you want to see right now is one more award being given away: the Innovator of the Year. So I'd like to welcome back Marco and Chris. Please welcome back to the stage Chris Wright and Marco Bill-Peter. [Applause] Thank you, Burr. Wow. Chris, want to do some flossing? Every year it gets more difficult to pick the most outstanding and innovative projects from the pool of nominees. We look at the impact on the business, the nature of the transformation, and elements of community and openness; we also look for projects that are unique and creative. And the top five become our Innovation Award winners; you've heard from them throughout the week. All of these organizations are winners, though, not because we picked them but because they are showing us the way forward. This year's top five include BP, developing things differently with OpenShift; Deutsche Bank, really curating everything as a
service; Emirates NBD, using data to personalize the customer experience; HCA Healthcare, enabling machine-enhanced human intelligence; and Kohl's, all in on cloud and open source. And from these five winners, you chose the winner, the Innovator of the Year. First, give me the envelope. Dude, you're killing me. Oh, thank you, thank you. While you're here, Jim, why don't you do the honors? Oh, it's good to be the CEO. All right, did PwC certify this? Absolutely. All right, the winner, and I'm very excited, I did not know literally until this moment: HCA Healthcare, saving lives. [Applause] [Music] Congratulations, really, truly an inspiring story. All right, Chris gets the honors. [Applause] Thank you again; congratulations, truly inspiring. I have to say, we spend so much of our time deep in technology because we're passionate about it, we love it. But having an opportunity to observe what that technology can do, to literally have life-changing impact, is an extraordinary thing to see, and it really helps personalize everything that we all work so hard on. So again, we're thrilled with all our Innovation Award winners, and certainly HCA; it's just such a great story. So Summit has brought us so many new things: a new version of RHEL, a new version of OpenShift, a new logo, and even new ways of working. We are so excited to have you here to share them all with us. For those of you who've been with us for a decade or more, we really appreciate you being here, and for those of you who just joined us, welcome; it's great to have you. You know, I went to a number of receptions last night, and several people said to me, God, you know, we're here for the last Red Hat Summit. So I want to be clear: we're not going anywhere. The party's definitely not over; we have a great party tonight, and we will see you at Red Hat Summit next year in San Francisco. It's going to be bigger
and better than ever. Thank you so much for being here. See you tonight. [Applause] [Music]
Info
Channel: Red Hat Summit
Views: 11,177
Rating: 4.9375 out of 5
Keywords: Red Hat, innovation, Hilti, UPS, Nvidia, ProphetStor, PerceptiLabs, H20, HCA Healthcare, Boston Children's Hospital, Optus, Emirates NBD, BMW Group
Id: FUu4kMc0PL8
Length: 134min 0sec (8040 seconds)
Published: Thu May 09 2019