Keynote: Artificial Intelligence with Geo

Video Statistics and Information

Captions
[Music] Good morning. How many of you are using AI? Show of hands... okay, a small number. Great. Let me tell you why AI is exciting in just a few words: artificial intelligence is the opposite of natural stupidity. That's all you have to remember; that's the definition of AI.

Let me start explaining the power of AI with a story. Behind me here is a timeline of the Earth. It is 4.6 billion years long, right from the beginning, and as you follow that timeline I want to call attention to Earth's four billionth birthday. That is the start of the Cambrian explosion. Before that four-billion-year birthday, all life on the planet was single-celled, or at most small aggregations of cells. Then, in a very short time period of about 80 million years called the Cambrian period, all sophisticated life on the planet, including nervous systems, emerged. A large number of you know this story, and people have been wondering why it happened. One of the most interesting theories proposed so far is the emergence of sight, the ability to see at a distance.

You might ask why the ability to see at a distance should drive an incredible explosion in nervous systems, body complexity, and the diversity of life. Here is an answer. Take a simple predator-prey scenario. When prey develops the ability to see at a distance, it becomes very effective at avoiding predators. The predator then has to evolve incredible senses in turn to catch the prey so it can survive, and the prey in turn becomes smarter still. There is an incredible acceleration in evolution driven by this ability to compete, and the predator-prey cycle is just one of several examples. The creatures on the planet start evolving reasoning systems, nervous systems, limbs, ways to locomote faster, and so on, until a new equilibrium is reached and a very sophisticated ecosystem forms. That is what the ability to see can drive. To see, remember, is not just to register pixels on a retina but to understand what they mean and to be able to act predictively.

We are at a very similar threshold now in the world of software. Until now, software was really about collecting data, recording it, and writing programs against it. In the future, with AI, software becomes sentient: it develops the ability to understand what the data means, it lets us build models that predict things, and it lets us act without being explicitly programmed. That drives an incredible explosion in the sophistication of software. Make software sentient, and it in turn will drive a fundamental change in the economy.

Think about how that revolution is coming together today. The cloud, with its unlimited computing power and its ability to integrate vast amounts of data across applications, is the ocean in which AI is being born. Data from everywhere, not just from regular applications but from sensors of all the new types being invented, and data about all our interactions and communications, is the oxygen that feeds AI. And the algorithms, the incredible advances in AI such as deep learning, give us sight: the ability to see into the data and act predictively with it. All software of the future is really going to integrate these three things, cloud, data, and intelligence, and it will live in the cube that combines all three of these innovations in massive ways. Very powerful.

So how is Microsoft playing in this technology revolution? We have three major directions. First, we build an AI platform
for developers like you to build amazing applications on top of. A platform is what Microsoft has always been about: empowering developers to build amazing applications infused with new technologies. The second direction is infusing AI into every product, whether it is Microsoft Office, Excel, Exchange, Outlook, or even Xbox; all of these products gain incredible innovation from AI. And finally, in specific areas where there is a big opportunity to help our customers, such as customer care, where a tremendous amount of conversational data is already available, we apply AI to improve those customer experiences and create incredible efficiencies.

So now let me tell you about the platform. The Microsoft AI platform is a first-class platform built in the cloud. It brings the best of AI to the cloud and the best of the cloud to AI, and that marriage is really important for developers looking to build complete, end-to-end systems of intelligence that then operate day in and day out, 24 by 7, in some workflow. It has three major buckets or components: AI services, the infrastructure all of those services sit on, and the tooling to build applications.

Let me tell you a little bit about them. In services we again have three groups. The first is custom AI: developers bringing their own data, building machine learning, AI, and statistical models, and then deploying them in the cloud as web services, creating hosted applications that are reliable, monitored, and managed. The second bucket is pre-built AI, for example Cognitive Services. Let me make an analogy. These days we rarely build buildings from brick and mortar all the way up; we assemble them from prefabricated parts. That is why, a few years ago in China, they were able to build a hotel in six days and rest on the seventh: it was assembled from prefabricated parts. In the not-too-distant future there will be a million prefabricated AI APIs in the cloud that developers such as yourselves will be able to compose into very powerful applications. What are examples of these APIs? Speech recognition, video understanding, image captioning, very powerful search, natural language understanding, sentiment analysis, and so on. Anything in AI that you can imagine becomes a service component. Think of these almost like integrated circuits on a circuit board: in the semiconductor industry there is a vast number of integrated circuits created by manufacturers all over the world, and you assemble some of them into actual products. It will be similar with AI. The third bucket is conversational AI: creating chat bots and intelligent bots that can interact with you in natural ways and get tasks done for you. It is not just about conversation; it is about task completion, about engaging you in interesting ways and helping you discover more. That is a very powerful direction as well.
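As a rough illustration of composing one of these prefabricated APIs, here is a minimal Python sketch that asks the Computer Vision image-description (captioning) operation to caption a photo. The region, API version, and key are placeholders, and the routes may have changed since this talk, so treat it as a sketch rather than a reference implementation.

```python
# A minimal sketch of calling one "prefabricated" AI API: the Computer Vision
# image-description (captioning) operation. The endpoint region, API version,
# and key below are placeholders; check the current Cognitive Services docs
# for the exact route your subscription uses.
import requests

SUBSCRIPTION_KEY = "<your-cognitive-services-key>"        # assumption: supply your own key
ENDPOINT = "https://westus.api.cognitive.microsoft.com"   # assumption: region varies

def caption_image(image_url: str) -> str:
    """Ask the Computer Vision 'describe' operation for a caption of an image."""
    response = requests.post(
        f"{ENDPOINT}/vision/v2.0/describe",
        headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
                 "Content-Type": "application/json"},
        json={"url": image_url},
    )
    response.raise_for_status()
    captions = response.json()["description"]["captions"]
    # Return the highest-confidence caption, if any.
    return captions[0]["text"] if captions else "(no caption returned)"

if __name__ == "__main__":
    print(caption_image("https://example.com/some-photo.jpg"))
```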
All of this magic then sits on top of the infrastructure of Azure: the data infrastructure, such as data lakes, NoSQL databases, graph databases, and Spark, which lets you bring in vast amounts of data and do distributed computing, querying, and processing on top of it, and the compute services available in the cloud, such as Spark, Data Science Virtual Machines, and Batch AI. Batch AI is a really interesting concept that is available only in the cloud: the ability to fire up thousands of virtual machines, schedule a job on them, and have them automatically shut down when the job is done. It is completely elastic, and you pay only for the time the VMs spend on the computation. In fact, if you choose to use preemptible VMs, VMs that can be taken away from you, you pay about one fifth of the price of a regular VM. That means when you are running a machine learning training job or an AI processing job at scale with a large amount of data, you have at least four retries before you end up paying as much as holding all of those VMs for a whole hour or more. It is very efficient and very cheap. Then you have capabilities like Kubernetes clusters, and things like IoT Edge. IoT Edge is an innovation that lets you manage a very large number of Internet of Things devices, put AI on them, and manage that AI and software from the cloud: a managed collection of IoT devices. All of these are cloud innovations that let you build amazing systems of intelligence in a variety of ways.

And then there is tooling. Microsoft has always built a very strong developer platform, with Visual Studio and Visual Studio Code, which is cross-platform across Linux, Mac, and Windows, and we provide AI tooling on top of them. We also have the Azure Machine Learning Workbench, which lets you do data transformations in very simple ways, and a number of other capabilities; it is a very rich collection. Finally, we are a very open platform. The algorithms supported are not just Microsoft's: we support all the modern deep learning frameworks, like TensorFlow, Caffe, and CNTK, which comes from Microsoft but is open source, and we support scikit-learn, and you can bring your own frameworks. Come to Azure, develop models very easily, experiment with them, connect them to data, and then deploy your applications. That is the Microsoft AI platform, and there is a tremendous amount of detail you can find on the Azure AI web pages.

So now it is my pleasure, a real pleasure, to announce an important component of this AI platform: the Geo AI Data Science VM. It is an exciting step, and I am sure all of you will find incredible value in it. It combines ArcGIS, hosted in a Data Science VM on Microsoft Azure. The VM comes pre-installed with all the popular deep learning toolkits, R and Python, and a number of components for data exploration and visualization. It can be stood up on any number of compute instances, from the smallest machines to the largest VM sizes with incredible amounts of memory and even GPU support. You can then use capabilities like Azure Batch to run amazingly large-scale distributed machine learning jobs on geographic GIS data, which is very, very powerful, and once you build those models you can deploy them as web services in the cloud and have a complete application. Very exciting.
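The pattern behind those large-scale distributed jobs is essentially: cut a big extent into independent tiles, classify each tile on its own worker, and merge the results. A minimal, self-contained Python sketch of that fan-out idea follows, using a local process pool as a stand-in for a service like Azure Batch; the extent, tile size, and the classify_tile stub are illustrative assumptions rather than any Esri or Microsoft API.

```python
# A minimal sketch of the fan-out pattern behind large-scale geospatial inference:
# cut a big extent into tiles, process each tile independently, then combine.
# Run here with a local process pool as a stand-in for something like Azure Batch,
# where each tile would become its own task. The extent, tile size, and the
# classify_tile stub are illustrative assumptions only.
from concurrent.futures import ProcessPoolExecutor
from typing import Dict, List, Tuple

BBox = Tuple[float, float, float, float]  # (xmin, ymin, xmax, ymax)

def make_tiles(extent: BBox, tile_size: float) -> List[BBox]:
    """Enumerate square tiles covering the extent."""
    xmin, ymin, xmax, ymax = extent
    tiles = []
    y = ymin
    while y < ymax:
        x = xmin
        while x < xmax:
            tiles.append((x, y, min(x + tile_size, xmax), min(y + tile_size, ymax)))
            x += tile_size
        y += tile_size
    return tiles

def classify_tile(bbox: BBox) -> Dict[str, int]:
    """Placeholder worker: fetch imagery for bbox and run a land-cover model on it.
    Here it just returns an empty per-class pixel count so the sketch is runnable."""
    return {"forest": 0, "field": 0, "water": 0, "impervious": 0}

def merge_counts(results: List[Dict[str, int]]) -> Dict[str, int]:
    total: Dict[str, int] = {}
    for r in results:
        for k, v in r.items():
            total[k] = total.get(k, 0) + v
    return total

if __name__ == "__main__":
    tiles = make_tiles((0.0, 0.0, 100.0, 100.0), tile_size=10.0)
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(classify_tile, tiles))
    print(len(tiles), "tiles ->", merge_counts(results))
```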
So now I want to talk a little bit about applying AI to improve the world around us. If you think about it, AI is at its core about empowering people and organizations to reason about and interact with the increasingly sophisticated world all around us, and at Microsoft we believe we can help transform the way we as a society work to solve some of our greatest challenges. I'm pleased to tell you more about this, and pleased to welcome Lucas Joppa, who is our Chief Environmental Scientist and the founder of AI for Earth. Lucas!

Thank you, Joseph. Thank you very much. I just wanted to start by saying how incredible it is to actually be here. The Esri Developer Summit is something I look forward to every year, because it is really the one-stop shop to come and see the incredible number and diversity of applications that all of you can build on Esri's GIS stack, and I always find it hugely motivational. I have been particularly looking forward to this year's event because I've been really excited to finally be able to tell people about the Geo AI virtual machine that Joseph just described. The Geo AI virtual machine is really intended to empower data scientists and developers to build applications that classify, observe, make predictions from, and optimize over the rapidly increasing amount of geospatial data coming from sensing systems like GPS tracking devices, IoT deployments, and the growing constellation of high-resolution imaging platforms like satellites and drones. As most of you already know, over 80% of the data that's already out there has some sort of spatial component to it, and that is only going to increase. So I spend most of my time in my job thinking about how to apply advances in artificial intelligence to spatial data in order to change the way that we as a society monitor, model, and ultimately think about sustaining Earth's natural systems, in a fairly different manner than we have been doing in the past.

To explain where I'm coming from in that space, I'll tell you about a one-sided argument that I've been having for most of my career with Captain Kirk from Star Trek and his famous description of space as the final frontier. It's an argument because I think that, of course, space is the final frontier, but for me space isn't about the moon and the stars and the outer reaches of some distant galaxy. Instead, by space I mean the space all around us, the study of geography, or, as our friend Jack Dangermond would say, the science of where. And I say that it's unexplored, the last frontier, because we really are in many respects flying blind in our understanding of the planet we call home. There's a little bit of irony in the fact that astronomy as a science has already been fundamentally transformed by artificial intelligence: it's machine learning algorithms that scan and sift through the incredible amounts of data being collected every day from telescopes and satellites and extract the semantic information that accelerates the work of astronomers in identifying interesting observations in distant galaxies. But here on Earth, our ability to understand what is where, how much is there, and how fast it is changing is at a much reduced capacity compared to where we are with astronomy. At Microsoft we really want to help change that. As Joseph said, I have the honor of serving as Microsoft's first Chief Environmental Scientist, and as part of that, of helping the company roll out a new five-year, 50-million-dollar commitment to a program called AI for Earth, a program dedicated to deploying Microsoft's deep investments in artificial intelligence against critical environmental challenges.
And we have some big challenges, as I'm sure you're all aware if you pay attention to the news or read the scientific journals. We find ourselves at a time when we have to somehow figure out how to mitigate and adapt to changing climates, ensure resilient water supplies, and sustainably feed a rapidly growing human population, one growing to ten billion over our lifetimes, all while stemming an ongoing and fairly catastrophic loss of biodiversity. Fail, and we will pay a significant price; succeed, and we will have made a pretty important contribution to the human experience. But in order to succeed, I'm convinced we're going to have to do exactly what Joseph was just explaining to all of you: we're going to have to teach computers how to see into, and extract information from, this incredible archive of raster and vector data that we've been collecting over the years and will only increasingly continue to collect moving forward. That's why, from the very beginning of the AI for Earth program, we turned to Esri's geospatial toolkit, both to accelerate our work and to let us start thinking about how we could directly integrate Microsoft's AI services, applications, and algorithms with Esri's GIS stack. That's really where this Geo AI virtual machine came from. It's the preferred go-to platform for my development team, and it's intended to let developers and data scientists work seamlessly between these two, at least occasionally, seemingly disparate worlds of GIS and AI. The way we let people move between those worlds so effortlessly is through simple, easy-to-use Python and R APIs, and what that means is that it doesn't really matter so much, from a technical perspective, whether you are principally a GIS developer or somebody who spends most of their time thinking about how to build AI-first applications; now you have all of those tools available together on a single scalable machine, backed by the incredible elasticity of the Microsoft cloud.

One of the first projects my team and I took on, and what has turned out to be one of our more interesting and difficult projects with the Geo AI virtual machine, was a partnership between Microsoft, Esri, and a small nonprofit on the East Coast called the Chesapeake Conservancy. What that partnership is trying to do is empower more people in more places to sustainably manage their lands. We want to do that by providing information about land cover at a national scale at 1-meter resolution; it would be the first map of its kind here in the United States. The way we're working is by taking advances in an area of artificial intelligence called deep learning, and a specific class of algorithms called convolutional deep neural networks, in order to classify freely available high-resolution imagery, like you see in one of these hexagons here, into labeled, classified land cover data in categories like forests and fields, water, and impervious surfaces like roads and houses. One of the things that really motivates me about this project is of course the conservation aspect, but also the fact that it's a fairly canonical example of requirements across some of the world's biggest industries, whether that's retail or defense, municipal planning or natural resource management, in areas like agriculture or forestry.
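To make the deep learning side of this concrete, here is a minimal Keras sketch of a convolutional network that classifies small aerial-image patches into the four land cover categories mentioned above. It is not the Chesapeake project's actual model, which performs dense per-pixel segmentation; the patch size, architecture, and random stand-in data are illustrative assumptions only.

```python
# A minimal Keras sketch of a convolutional network that classifies small
# aerial-image patches into four land-cover classes. This is NOT the
# Chesapeake project's actual model; patch size, depth, and hyperparameters
# are illustrative assumptions only.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

CLASSES = ["forest", "field", "water", "impervious"]
PATCH = 64  # 64 x 64 pixel RGB patches (assumption)

def build_model() -> keras.Model:
    model = keras.Sequential([
        layers.Input(shape=(PATCH, PATCH, 3)),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(len(CLASSES), activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Random stand-in data just to show the training call; real training would
    # read labeled imagery patches exported from the GIS instead.
    x = np.random.rand(256, PATCH, PATCH, 3).astype("float32")
    y = np.random.randint(0, len(CLASSES), size=256)
    model = build_model()
    model.fit(x, y, epochs=1, batch_size=32, verbose=1)
```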
The thing we found so empowering about having Esri's GIS stack embedded in our AI development environment is that we now have instantaneous access to Esri's solutions for all of our traditional GIS workflows, things like data visualization and reconciling the incredible diversity of geographic coordinate systems and projections across many different data sets. But we can also use ArcGIS Pro, in this instance, to operationalize our model, and that's what I'm showing here, actually using ArcGIS Pro's raster functions. In the upper corner you can see we're showing 1-meter-resolution imagery from a location on the east coast of the United States; in the pane next to it we're showing the data our model was originally trained on; and in the bottom two panes I'm showing the output of our model, which we were able to create instantaneously by using raster functions, passing that image to our deep learning model through Microsoft's Cognitive Services AI libraries, and then immediately visualizing the output. That lets anybody using the application pan around the United States, and ultimately gives people the capacity to calculate land cover statistics over the almost 10 trillion pixels at 1-meter resolution that make up the United States. I didn't talk about how we actually did that, and I know this is a developer conference; the reason is that my colleague Mary Wahl is going to go into depth in a technical tutorial on this project at 2:30, in the Scalable Geo AI in the Cloud session. So if you're interested in how you might use the Geo AI VM to spin up a GPU-backed deep neural network and, after training it, embed and operationalize it inside ArcGIS Pro, please go see Mary's session at 2:30. Between now and then, I would just leave you with this: at Microsoft, we're investing in AI for Earth because we fundamentally believe in the power of artificial intelligence to change the way that humans and computers can work together to solve some of our most pressing environmental challenges. So if any of you or your organizations are working in the four areas of climate, agriculture, water, or biodiversity, and are interested in how an AI-first approach might accelerate some of your work, I'd highly encourage you to look up the AI for Earth program, apply for one of our small seed grants, use those Azure credits to provision one of these new Geo AI virtual machines, and start to experience everything that's possible when you have the full power of Esri's GIS stack and Microsoft's AI solutions together for the first time. Thank you very much, and Joseph, we can switch back over.

Thank you. So, three points to make. One: AI helps you understand everything here. In the past we could statistically analyze traditional data, which is structured data: numerical data, vector data. But the vast amount of new data being created now is unstructured data, like images and raster data, which you can now understand with AI in so many different ways and get deep insights into. Then there is text data, and there is data from sensors streaming in as time series, and so on; a vast amount of this data simply cannot be used without sophisticated analytics, AI, signal processing, and related techniques, and that is one of the areas where AI comes in in a big way. The second point is about reasoning and learning: developing predictive models, forming conclusions, and deploying them in production workflows.
The third point is about interacting intelligently with people, giving you access to very sophisticated levels of information and knowledge. And it's not just data anymore: data are now being taken to a new level of knowledge and insight, and then being correlated. So let me now give you examples of a number of these things and show you a few demos.

I want to tell you about some of the capabilities in the Microsoft cloud that let you get this kind of deep understanding of your unstructured data. For example, we have vision APIs. The vision APIs let you recognize faces; we just recently released an updated face detection API. You can verify people. For example, Uber uses it: when a driver gets into the car, how does Uber know that the person actually driving is the person who registered the Uber account? What they do is ask the driver to take a picture and send it, and it is matched against the image from the driver's license. That's one example, but you can also do identification. We just launched an update where you can identify people from among up to a million faces: you submit a picture, perhaps coming from a camera, and with reasonably high confidence pick out the right person and identify them. That is one example of several things that can be very useful, for security for example. But there is a lot more. We have a great optical character recognition system that recognizes natural handwriting very well. You can take an image and get a caption out of it: it gives a textual caption for the image, it understands what's in the image. Very powerful. Then speech: this is an area where Microsoft has years of experience building world-class speech recognition systems, and not only can you recognize speech in many languages, you can customize the speech models to your language, your vocabulary, and your domain using Custom Speech, which is very important. Then there is language translation; in fact, Microsoft's language translation is used by one of the largest Chinese companies to communicate with its multinational employees, and it's a simple API in the cloud. Then there is knowledge, and there are capabilities like search. Microsoft's Bing search engine is one of the widely used ones, now with about 20% market share, and it comes as an API, which is itself very useful to integrate into a developer's application. These are just the start of a very large number of pre-built components that you will see on the Microsoft cloud, and they can be applied to any type of data.
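The face identification flow described above, detecting a face and then matching it against a previously trained person group, might look roughly like the following sketch. The endpoint, API version, and person group are assumptions taken from the classic Face API, so check the current documentation before relying on these routes.

```python
# A rough sketch of the Face API flow described above: detect a face in an
# image, then try to identify it against a previously created and trained
# person group. Endpoint, API version, and the person group are assumptions.
import requests

KEY = "<your-face-api-key>"                                         # assumption
ENDPOINT = "https://westus.api.cognitive.microsoft.com/face/v1.0"   # assumption
PERSON_GROUP_ID = "employees"  # assumed to already exist and be trained

def detect_face_id(image_url: str) -> str:
    resp = requests.post(
        f"{ENDPOINT}/detect",
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={"url": image_url},
    )
    resp.raise_for_status()
    faces = resp.json()
    if not faces:
        raise ValueError("no face detected")
    return faces[0]["faceId"]

def identify(face_id: str) -> list:
    resp = requests.post(
        f"{ENDPOINT}/identify",
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={"personGroupId": PERSON_GROUP_ID, "faceIds": [face_id]},
    )
    resp.raise_for_status()
    # Each result carries candidate person IDs with confidence scores.
    return resp.json()[0]["candidates"]

if __name__ == "__main__":
    fid = detect_face_id("https://example.com/driver-photo.jpg")
    print(identify(fid))
```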
Here is an example. This is actually an image from one of our customers who wanted to organize and understand all of their information in the cloud; it happens to be the Nigerian Ministry of Justice. Their challenge is to scan all that information, organize it, and understand it. How do you make that possible? By applying a lot of the capabilities I just mentioned we can make headway, and the best way for me to show you is with a demo.

Let's step back. On November 22nd, 1963, in the streets of Dallas, a lone gunman assassinated President John F. Kennedy, or so they say; there's so much controversy around it that more than 25 years ago Congress mandated that all documents associated with the JFK assassination be released by 2017. How much data is that? The first tranche itself was about 34,000 documents, a stack 17 feet high, and then they kept releasing vast amounts of these scanned documents. Wouldn't you want to understand what's in all of that? I could spend a lot of time trying to read and parse them and make connections among them, but now we have AI to help. How do we do that? One developer on my team took all of these scanned images, uploaded them to the cloud, and applied AI to them. He enriched them with all of these cognitive services we have, put Azure Search on top, and created a simple exploration interface, all of it in about a couple of weeks, one smart developer doing it. And by the way, this agility is the power that the cloud and the AI capabilities in Microsoft's AI platform give you. Here is the architecture: the data is brought into Azure Storage; we apply enrichments, optical character recognition, image captioning from the vision services, natural language processing, and so on; we create those annotations and store them in a graph database called Cosmos DB; and then, as a first approach, we put search on top of it.

So now let me show you the kind of insights we were able to find; there are lots of amazing things in there. I bring you the JFK Files. [Music] I'm just going to bring up Safari. Here is an application we built using the architecture you just saw: the JFK Files. In fact, everyone can go to the JFK Files demo site on azurewebsites.net. Let me type a simple query into the search box; I'm going to search for Oswald. Look at what we found: all of that data was automatically extracted after OCR. On the left you have all the key terms; a key phrase and entity extraction API has been applied, so the key terms are here on the left, and I could select any of them to get a subset. Then here is a large collection of documents that relate to Oswald or mention him. Remember, these are images; they have now been understood, and keywords like Oswald have been highlighted. Now, the most interesting thing: look at this handwriting, for example. There are documents like this that are handwritten, and, I think I have to show it, okay, you will see Oswald has been recognized in yellow. It's actually rather difficult handwriting to read, but it shows up here on the right; handwriting recognition made that possible. Then let me show you something interesting. If I click on this image, you'll see the image itself has been captioned automatically. If you look on the right, it says "Lee Harvey Oswald posing for the camera," and right here, "New Orleans LA 1 1 2 7 2 3," which comes from the image itself: it recognized "New Orleans LA 1 1 2 7 2 3" from the image, so it did some great optical character recognition. And there are some really interesting relationships to explore: here is a graph that was built, showing all the relationships Oswald had with other people, including people like Sylvia Duran. Sylvia Duran happens to be a go-between between the KGB and Oswald, and if I search for Sylvia Duran, a whole collection of documents comes up, and so on and so forth. The key thing is that we were able to digest all of this data, understand it, and make these relationships very, very visible.
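A compressed sketch of the enrichment idea behind this demo follows: OCR a scanned page, extract key phrases, and push the result into a search index. The real pipeline used Azure Functions, Cosmos DB, and custom skills; the endpoints, API versions, index name, and field schema here are illustrative assumptions.

```python
# A compressed sketch of the enrichment idea behind the JFK Files demo:
# OCR a scanned page, pull out key phrases, then push the result into a
# search index. Routes, API versions, index name, and schema are
# illustrative assumptions only.
import requests

COG_KEY = "<cognitive-services-key>"                                  # assumption
COG_ENDPOINT = "https://westus.api.cognitive.microsoft.com"           # assumption
SEARCH_SERVICE = "https://<your-search-service>.search.windows.net"   # assumption
SEARCH_KEY = "<search-admin-key>"                                     # assumption
INDEX = "jfk-docs"   # assumed to exist with fields: id, text, keyPhrases

def ocr_page(image_url: str) -> str:
    r = requests.post(f"{COG_ENDPOINT}/vision/v2.0/ocr",
                      headers={"Ocp-Apim-Subscription-Key": COG_KEY},
                      json={"url": image_url})
    r.raise_for_status()
    words = [w["text"] for region in r.json().get("regions", [])
             for line in region["lines"] for w in line["words"]]
    return " ".join(words)

def key_phrases(text: str) -> list:
    r = requests.post(f"{COG_ENDPOINT}/text/analytics/v2.0/keyPhrases",
                      headers={"Ocp-Apim-Subscription-Key": COG_KEY},
                      json={"documents": [{"id": "1", "language": "en", "text": text}]})
    r.raise_for_status()
    return r.json()["documents"][0]["keyPhrases"]

def index_document(doc_id: str, text: str, phrases: list) -> None:
    r = requests.post(
        f"{SEARCH_SERVICE}/indexes/{INDEX}/docs/index?api-version=2017-11-11",
        headers={"api-key": SEARCH_KEY, "Content-Type": "application/json"},
        json={"value": [{"@search.action": "upload",
                         "id": doc_id, "text": text, "keyPhrases": phrases}]})
    r.raise_for_status()

if __name__ == "__main__":
    page_text = ocr_page("https://example.com/scanned-page-001.png")
    index_document("page-001", page_text, key_phrases(page_text))
```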
Let me show you one more interesting thing. When a government releases an incredible tranche of classified documents, you hope your name isn't in there. Well, my name isn't in there, but the name of one of our products is in the JFK files. I searched for SQL, and it turns out there is a document mentioning SQL Server. This is a document from the CIA, and they even give us a wonderful architecture diagram here. The thing that is really magical is that we didn't know anything about this; it was all surfaced by OCR and cross-correlation in the data, and capabilities like this let you relate what we found to all of these other concepts. That is the magical ability to explore the data. Going back to Oswald, for example, you can see all these relationships; you can click on Sylvia Duran and find her previous statements: she was a go-between, she was meeting the KGB officer in Mexico, so there is this Mexico connection, she is a Mexican citizen, and so on. The ability to just browse through all of this, and even understand very sophisticated handwriting, is now possible. (SQL Server, by the way, was used by the CIA to develop a secure classified-information facility and to store all of the documents associated with the JFK files, back in 1997.)

What was really amazing was the whole architecture. Data came into blob storage; there is a serverless compute capability in the cloud called Azure Functions that can be used to orchestrate services such as optical character recognition, computer vision, and entity linking; we did all of that, took those annotations, and put them in Cosmos DB; you have a cognitive skill set; you then put search on top of it; and we even built some custom AI to do topic extraction on that data, so that Azure Search can now understand topics as well. You bring all of those things together and you have an architecture that any developer here can build, and remember, these things are very easy to build because they are fully managed services in the cloud; it is an assembly of prefabricated components. Capabilities like these are in incredible demand across enterprises, for example for analyzing legal contracts, for understanding engineering plans, for extracting information from forms, and so on; any type of unstructured data can now be digested and organized. So that explains how AI is able to help you gain deep insights into vast volumes of data.

Now let's talk about reasoning with such data. One of the things you can do when you have vast amounts of data is predict things. For example, one piece of work we did collaboratively with Esri was to predict accident probability with Azure Machine Learning and ArcGIS. Here is how it works: you take ArcGIS maps and data, you use Azure Machine Learning to do some code-free data wrangling, and then you develop predictive models for accident probability using Jupyter notebooks, where you train machine learning models. But you're not done there: the challenge is to build an entire application and deploy it live, so it can be used in production. So what you do is dockerize the machine learning models along with the associated code, create web APIs that stand up on a Kubernetes cluster, and serve those predictions from the cloud.
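Once such a model is served from a Kubernetes cluster as a web API, a client, for example a geoprocessing tool in ArcGIS Enterprise, might call it roughly as in the sketch below. The URL, authentication header, and JSON schema are invented for illustration; the real contract is defined by the scoring script the service was deployed with.

```python
# A sketch of what a client call to a deployed accident-risk web service might
# look like. The URL, the auth header, and the input/output JSON schema are
# invented for illustration only.
import requests

SCORING_URL = "http://<aks-service-address>/score"   # assumption
API_KEY = "<service-key>"                            # assumption

def predict_risk(segment_features: dict) -> float:
    """Send one road segment's features (time, weather, geometry-derived
    attributes) and return the predicted accident probability."""
    resp = requests.post(
        SCORING_URL,
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Content-Type": "application/json"},
        json={"data": [segment_features]},
    )
    resp.raise_for_status()
    return resp.json()["predictions"][0]

if __name__ == "__main__":
    example = {"hour": 15, "day_of_week": "Sun", "temperature_c": -1.0,
               "precipitation": "snow", "speed_limit": 70,
               "dist_to_intersection_m": 120.0, "sinuosity": 1.08}
    print("Predicted accident probability:", predict_risk(example))
```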
The best way for me to explain this is with a demo, and to do the demo let me invite Omar Maher, who is the AI lead at Esri. Omar!

Hey, Joseph, thank you. Okay, can we have the slides? Hello everyone, how is the day going? Awesome, thank you. So, this is a very unfortunate event that we see on a daily basis. It causes 1.3 million deaths a year and 35 million injuries or disabilities; it is the ninth leading cause of death globally, and it costs 518 billion dollars annually to deal with road car crashes. The question is: can we use machine learning to fight this? The answer is yes. We are going to use ArcGIS with Azure Machine Learning to predict the probability of accidents per segment per hour in Utah. But before jumping to machine learning, we asked ourselves what could cause an accident in the first place. Is it weather features, like temperature, rain, or fog? Or time features, like time of day, rush hour, or day of the week? Or spatial features, like proximity to intersections, road sinuosity, the direction of the road, the sun direction, or other factors? In fact, we believe we are dealing with tens of variables, and the amount of data we're analyzing to train our model is really large: we're talking about seven years of data, four hundred thousand accidents, five hundred thousand segments. It's nearly impossible for any human being to manually analyze this and predict, but we think we have a shot with machine learning.

So what we're really going to do is the following. We develop our model using ArcGIS Pro to prepare the training data set and pass it to the Azure Machine Learning Workbench to train the model. Once we have a trained model, we want to deploy it into production, so using Azure cloud services we deploy the model, as a dockerized container, on a Kubernetes cluster. Once the model is deployed in production, it needs real-time feeds to produce results and predictions, and hence the need for ArcGIS Enterprise on Azure: the model receives different feeds, like the time and weather feeds, produces predictions for accidents, and sends them back to ArcGIS Enterprise. Once the predictions are available on Enterprise, they are accessible to the whole company, a single system of engagement where people can build apps, web maps, and so on. We have built a geoprocessing service that acts as a web tool and invokes the model, either on hourly schedules to produce hourly information products, or dynamically as needed. So we have the data, the model, and the analysis all in one place in the cloud.

Why don't we see this in action? What we started to do, in the demo you're going to see right now, is collect all the data sources: the accidents for the seven years; the road network layer, which we spatially joined to them; the average daily traffic for every segment in Utah; the intersections and whether they are signalized or not; and weather feeds coming from about 27 weather stations. As you can see, there are lots of accidents happening near intersections, so we used proximity to intersections as a feature for our model, and we extracted different spatial features as well, like the road curvature or sinuosity, the road direction, the number of lanes, the road width, and so on, and we added a billboards data set to see if there is a correlation between the locations of billboards and accidents. All of this processing can be automated using ArcPy: we used Python through ArcPy to create our geodatabase.
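A rough ArcPy sketch of the kind of spatial feature engineering Omar describes, joining accidents onto road segments and computing proximity to intersections, is shown below. It assumes an ArcGIS Pro Python environment, and the geodatabase, layer, and field names are placeholders; the team's actual scripts were considerably more involved.

```python
# A rough ArcPy sketch of the spatial feature engineering described above:
# join accident points to road segments and compute each segment's distance
# to the nearest intersection. Dataset and field names are placeholders.
import arcpy

arcpy.env.workspace = r"C:\data\UtahAccidents.gdb"   # assumption
arcpy.env.overwriteOutput = True

# Count accidents per road segment by spatially joining points onto segments
# (JOIN_ONE_TO_ONE adds a Join_Count field with the accident count).
arcpy.SpatialJoin_analysis(
    target_features="RoadSegments",
    join_features="Accidents",
    out_feature_class="SegmentsWithAccidentCounts",
    join_operation="JOIN_ONE_TO_ONE",
    match_option="INTERSECT",
)

# Add proximity to the nearest intersection as a feature (NEAR_DIST field).
arcpy.Near_analysis(
    in_features="SegmentsWithAccidentCounts",
    near_features="Intersections",
)

# Export the enriched attribute table for the Azure ML side of the workflow.
arcpy.TableToTable_conversion(
    in_rows="SegmentsWithAccidentCounts",
    out_path=r"C:\data\exports",
    out_name="segments_training.csv",
)
```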
We extracted weather from online and physical sources, did some data processing, and, most importantly, extracted the spatial features, the kind of features I have been talking about: proximity to intersections, the direction of the road, the road curvature, and others. Finally, we joined the weather feeds and the road collisions to the segments. After we did so, we wanted to load this data into the Azure environment, so the first step is using the Python API to load the data from the geodatabase into CSVs, and then transferring those CSVs to Azure blobs, which is an amazing data storage mechanism. Once our training data is available in Azure Blob storage, it's accessible through the Azure Machine Learning Workbench, so let's have a look at that.

The first thing is that the Azure Machine Learning Workbench provides an amazing data wrangling and preparation experience. You can see that this is our training data set: for every accident we have different spatial, weather, and temporal features, about 40 features in total, so you can see the speed limit, the surface type, the road orientation, some weather data, and so on. We can do lots of things here, like removing columns, producing histograms, producing column statistics, you name it. Once we think our data is ready for training, we start the training process. Basically, we load the training data you have just seen and select our model, which is gradient boosting. Gradient boosting is a very powerful machine learning technique; not only is it good at predictions, it is also good at explaining why those predictions happen in the first place. We split our data into 90% training and 10% testing for validation, we optimize some parameters like the maximum depth of the trees and the number of trees, and we start the training. One other important aspect of Azure Machine Learning is its logging service, a very interesting service with which you can log and record the different statistics of your model that you want to visualize afterwards. So let's have a look at the output of the logging service. One important aspect of the data science process is that you iterate continuously, so we want insights along the training process: what is the duration of the training, what is the value of the F1 score, the precision, the recall, the accuracy, and so on. Azure Machine Learning records every one of those training cycles; you can click on any cycle and explore it in more detail to see how your training process is going. For that cycle we can see all of these details, like the scores I have been talking about, and the visualizations we have been capturing through the logging service.

So now we have a trained model that is producing good results, and we want to deploy it into production so it is accessible to the whole enterprise. Azure Machine Learning has an amazing experience to help you deploy your model really easily. The score file is a default file in any Azure Machine Learning project, and it comes with a skeleton that helps you develop the needed functions: for example, defining the schema for the model's REST API; initializing the model, like loading the model itself; and finally the scoring function, which is the function that takes the real-time input, like the time and weather, and produces the output.
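A minimal sketch of that scoring-script convention, an init() that loads the trained model once and a run() that takes the real-time inputs and returns predictions, might look like this. The model file name, request schema, and feature order are assumptions, and the exact entry-point signatures and schema helpers differ between Azure ML versions.

```python
# score.py -- a minimal sketch of the scoring-script convention described
# above: init() loads the trained model once, run() takes the real-time
# inputs (time, weather, segment features) and returns predictions. The
# model file name, request schema, and feature order are assumptions.
import json
import pickle

model = None

def init():
    """Called once when the web service container starts: load the model."""
    global model
    with open("accident_model.pkl", "rb") as f:   # assumption: model file name
        model = pickle.load(f)

def run(raw_data: str) -> str:
    """Called per request: parse JSON features, return accident probabilities."""
    try:
        records = json.loads(raw_data)["data"]           # assumed request schema
        features = [[r["hour"], r["temperature_c"], r["speed_limit"],
                     r["dist_to_intersection_m"], r["sinuosity"]]
                    for r in records]                     # illustrative feature order
        probs = model.predict_proba(features)[:, 1]       # probability of an accident
        return json.dumps({"predictions": probs.tolist()})
    except Exception as exc:
        return json.dumps({"error": str(exc)})
```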
The final stage is deploying that model to the Kubernetes cluster using the Azure cloud services, and what's really good about this is the scalability: it scales dynamically based on need, so as the number of users and requests grows, it scales with them. So let's have a look at the results of that model. Again, what we want to do is predict the probability of risk per segment per hour. Today is December 3rd, it's a rainy and icy day, it's 3 p.m., we are close to the rush hour, and we are looking at the Woods Cross area near Salt Lake City. Where has our algorithm predicted the highest risks? Here we go: red is the highest risk, orange is still high but not as high as red, and green is the least, which is safe. We can see that the risk is mainly on the interstates and highways, and on some internal roads here. What about seeing where the accidents did actually happen? Interesting: they are happening mostly on the segments we predicted, not only on the highways and major roads but on some internal roads as well, like this one here and that other one there, and you can explore the effect of road curvature on that. Here's the South Salt Lake area, which is really near the South Salt Lake downtown: our model predicts that those interstates and highways are risky, along with some internal areas here; you can see, for example, that this area is more greenish and this one is more reddish and orange, and again the accidents are happening at the intersections we predicted. We can see a pattern here: many of the accidents on the inner roads are happening on or very near intersections. Finally, for the South Valley Regional Airport, these are the model predictions and these are the actual accidents; again, some internal and curvy roads besides the highways. So what we've seen today is using ArcGIS with the powerful Azure Machine Learning tools to help predict the probability of accidents per segment per hour, to fight this global phenomenon and save more lives.
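To ground the training step Omar described earlier, gradient boosting with a 90/10 train/test split and standard classification metrics, here is a minimal scikit-learn sketch. The CSV path, column names, and hyperparameters are illustrative assumptions, and the real project additionally logged each run through Azure ML's logging service.

```python
# A minimal scikit-learn sketch of the training step described earlier:
# gradient boosting, a 90/10 train/test split, and the usual classification
# metrics. The CSV path, column names, and hyperparameters are assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Training table exported from the GIS side: one row per segment-hour,
# with spatial, weather, and time features plus an accident label.
df = pd.read_csv("segments_training.csv")                 # assumption
feature_cols = ["hour", "temperature_c", "speed_limit",
                "dist_to_intersection_m", "sinuosity"]     # illustrative subset
X = df[feature_cols]
y = df["had_accident"]                                     # assumed label column

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.10, random_state=42, stratify=y)

model = GradientBoostingClassifier(
    n_estimators=200, max_depth=4, learning_rate=0.1)      # tuned in practice
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
```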
Now, this is not the only example; there are more examples where we have used the Microsoft AI services together with ArcGIS, and we're going to show them today at 2:30 p.m. in the Scalable Geo AI in the Cloud session. I'm really excited to share different examples with you today. Thank you. [Applause]

All right, so now let me talk a little bit about interacting. Conversational apps have become the replacement for the intranets of yesterday. When internet technology came along, most enterprises asked: can we harness it to make our internal enterprise information visible and usable? Now people are taking it one step further and putting conversational interfaces in front of it. Why is this interesting? Imagine, for example, the use case of one of our customers, Unilever, a very large multinational company operating in 130 different countries that has to provide HR support in all of those places. Typically, when a Unilever employee has an HR task in one of those countries, say someone gets married and wants to change their last name, they go through a number of steps, and it may take up to a week to get the name changed across all of the systems. After they launched a conversational interface, an HR bot, that whole process takes three minutes of chatting with a bot. It's so much more efficient and so much more natural.

Now let me give you one simple example of this capability in action. We took the ArcGIS Online help that's out there and asked the question: how quickly could we put a bot in front of it that answers the FAQs? So here's a bot. Once the bot is created, you can go to Facebook Messenger and start talking to it. I'm going to play a quick video that shows how this works: in about 35 seconds you have a bot. This is a small application we put together called a bot maker. You give it a couple of URLs: the landing page content, then the FAQ link, all from the ArcGIS website, and this is happening in real time, by the way. So I give it a couple of URLs, an FAQ URL, and say "create," and when that's done, behind the scenes a whole bot is being provisioned. A chat window comes up, and now you have a chat bot, live, right there. You can ask a question, "Can ArcGIS be integrated with Bing Maps?", and you can ask that question in many, many different ways and immediately get an answer. So in about 30 seconds it's possible to stand up an intelligent chat bot. Now, it won't be perfect; it can be improved. But the thing is, when you put these conversational interfaces in front of systems, you set up an opportunity to learn from interactions with people, understand which questions are not being answered, tweak your text models to answer those questions better, and have a continuous learning loop, so the chat bot becomes better and better. For example, Progressive Insurance did this a few months ago, built a chat bot called Flo, and sold their first auto insurance policy over a chat bot in November. Every kind of customer interaction like this is now going to become better and better, and automated, with chat bots.

So here is a brief summary of what the AI platform is capable of doing. You can build powerful understanding applications using Cognitive Services; you saw Omar Maher talk about building custom AI models from data using Azure ML and GIS data; and you can build conversational AI using bots. These are the first steps of infusing understanding into software.
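As a toy illustration of the idea behind that FAQ bot, matching a user's question against an FAQ and returning the closest answer, here is a tiny self-contained sketch. It is not the Microsoft Bot Framework or the service used in the demo; it is just word-overlap matching over a made-up two-entry FAQ, to show the concept.

```python
# A toy illustration of the idea behind the FAQ bot demo: match the user's
# question against a small FAQ and return the closest answer. This is NOT
# the Microsoft Bot Framework or the provisioning service shown in the demo;
# it is just word-overlap matching over a made-up FAQ.
FAQ = {
    "Can ArcGIS be integrated with Bing Maps?":
        "(answer pulled from the ArcGIS help page would go here)",
    "How do I share a web map?":
        "(another answer from the FAQ page would go here)",
}

def tokenize(text: str) -> set:
    return {w.strip("?.,!").lower() for w in text.split() if w}

def answer(question: str) -> str:
    """Return the FAQ answer whose question shares the most words with the input."""
    q_tokens = tokenize(question)
    best_q = max(FAQ, key=lambda k: len(q_tokens & tokenize(k)))
    if not q_tokens & tokenize(best_q):
        return "Sorry, I don't know that one yet."
    return FAQ[best_q]

if __name__ == "__main__":
    print(answer("can I integrate arcgis with bing maps"))
```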
And now, because all of this power is available to developers, AI is truly becoming the new normal. You touch AI every single day on your mobile phone: every recommendation, every spam filter in your email, every fraud screening that happens in payment processing, all of these things are driven by AI, and capabilities like chat bots are taking it to an entirely new level as well. AI can also be very empowering, and I wanted to share with you a very profound story from an ISV that built an application to help mothers communicate with autistic children. I'm going to play a video, and then I'll explain what the application does.

[A video plays. The audio is in French and the automatic captions are not intelligible.]

[Music] [Applause] Amazing, isn't it? That was an enterprising developer with a powerful idea. He asked himself the question: why is it so difficult for a mother or a caregiver to communicate with an autistic child? Why should it be so difficult? The way they communicate today is by taking pictures from a book and assembling them, communicating through pictures while speaking at the same time. Here's what the developer did. He said: instead of pictures from the equivalent of a comic book, why couldn't the mother take pictures from the household that the child is already familiar with, images of cups and butter and other things, and use those to communicate? Couldn't the mother simply speak, couldn't speech recognition understand what she's saying, recognize the entities in it, match them to the cup and the butter and so on, and compose a conversation? And couldn't it all be shown on a mobile device, so that on an iPad she could hold it up, share it with the child, and speak at the same time? That's what he did. He used Xamarin, the cross-platform mobile development capability in Microsoft Visual Studio, to build a cross-platform app. He made it very simple for the mother to go around the house and take images, and he put those images through the Vision API, which understands what
those pictures are and labels them with the right name, so a cup is recognized as a cup and that text is attached. Then he used speech recognition, so when the mother speaks, the speech is automatically recognized, the key terms are extracted, and those are matched to the labels of the images. The conversation is then composed on an iPad, and that allows the mother to communicate. This is a new way of interacting that he created.

How many of you have Android or iOS? You can also see other applications like this. There is a very powerful application called Seeing AI, developed by some Microsoft researchers, which allows the blind to see the world. They can point their camera at an object and it recognizes the object; pointed at a room like this one, it will say "a crowd of people"; in a smaller room it will count the number of people; you can even show it currency and it will recognize the currency and its denomination, and so on. It's so good that there are even people putting the Seeing AI app in front of a TV and watching TV with it. All of these capabilities are now coming to light, and AI has the power to empower us all. It's actually really amazing what developers such as yourselves can potentially build when you combine the power of AI with data.

Finally, I wanted to show you one more thing: where you can get a lot of resources. If you go to the Azure AI Gallery, there are a very large number of examples of these kinds of capabilities, and you will see industry solutions, experiments, machine learning APIs, custom modules, and so on. If we click on, say, aerial image classification, you see a whole lot of documentation describing what to do; you click on "View Project" and you will see a GitHub page for aerial image classification with lots of details about it. If you go back, there is a very rich collection of things. If I go to the solutions on this page, you will see solutions for the edge, or a carbon emissions data platform, for example; if I click on this, you will see how to build a model for, in this case, real-time carbon emissions and global weather data. You can click "Try it now"; there is a Power BI dashboard that will be available soon for that one; and if you click "Deploy," it walks you through a whole process for deploying it at your end. It is a very, very rich collection of examples, and you can pick many, many topics and find them: in this case I just typed "heart" and you see heart disease prediction and machine learning models for it, and if I type "predictive maintenance," you see a solution for predictive maintenance of aircraft, identifying component failure, with a whole architecture, and so on. So I invite you to go to the Azure AI Gallery and look at all the resources available; many of them are instantly available for you to try and develop with, there is code, there are machine learning experiments, a vast collection of these capabilities.

So now let's switch back to the PowerPoint and I'll point you to some of the resources here. azure.com/ai is a place to start; AI School at aischool.microsoft.com has a tremendous amount of training material; and the Azure AI Gallery has a large number of examples. They are wonderful places to start, and I invite all of you to get on this journey of applying AI to geospatial data and all types of data, to create the magical software of the future. Thank you. [Applause]
Info
Channel: Esri Events
Views: 6,099
Rating: 4.9148936 out of 5
Keywords: Esri, ArcGIS, GIS, Esri Events, Esri Developer Summit 2018, Keynote Speaker, Joseph Sirosh, Microsoft AI, Artificial Intelligence, Esri DevSummit, Esri DevSummit 2018, AI, Lucas Joppa, Chief Environmental Scientist
Id: x-hVkgBHkT8
Length: 64min 17sec (3857 seconds)
Published: Wed Mar 07 2018