Performing Deep Learning Analysis in ArcGIS

Captions
Good morning, everyone, and thank you so much for attending today's webinar. This is the third in the series of webinars we have planned this year on the topic of imagery and remote sensing for the higher education community. For this webinar we would like to take up the topic of deep learning analysis using ArcGIS, and we would like to approach it from the angle of educators like you who teach this topic: how to start with a simpler workflow, so it is easier to adopt in the curriculum and easier for students to digest as well.

Before we start, a little housekeeping. All attendees are in listen-only mode, but feel free to enter your questions in the questions window at any time. We have staff monitoring the questions, so they may answer your question in the chat directly; some questions we will cover during the Q&A session. This session is also being recorded, and we will make the recording and slides available afterward. At the end of the webinar we will launch an exit survey; please do participate so you can help us improve our webinars. It is also a good way to get connected with us at Esri if you would like to ask questions or request assistance.

One more thing before we continue: we would like to ask you a poll question. Let's open the first poll. The question is: are you teaching imagery and remote sensing? This is a multiple-choice question, so feel free to check all that apply. Very nice; it seems that many of you have responded, thank you so much. It looks like the majority of you teach undergraduates, which is awesome, and some of you are researchers. That's great; thank you so much.

Now I would like to introduce our presenters for today's webinar. The first presenter is Vinay Viswambharan. Vinay is a principal product manager with the ArcGIS Imagery product team. The second presenter is Sandeep Kumar. Sandeep is a senior product engineer on the data science team at the Esri R&D Center in New Delhi. And myself, Canserina Kurnia; I am a senior solution engineer with the education team at Esri. I will mostly be moderating the session, and I will share resources at the end.

For the agenda, we will start with Vinay, who will give a brief introduction to deep learning for those of you who are new to the technology; he is also going to show some applications and a demo of an end-to-end example. Then Sandeep will take over and cover the deep learning workflow in ArcGIS. He will start with two options: the first is a simpler workflow using the pre-trained deep learning models that are already available in ArcGIS Living Atlas of the World; the second option is to train your own deep learning models, which you can do if none of the pre-trained models works for you. Sandeep will also go deeper into the deep learning concepts in detail. As much as we would like to demystify deep learning, it is a pretty complex technology, so we still want you to understand the complexity of deep learning analysis; the more you know, the more you can tweak your model and achieve the results you want. Then Vinay will take over to talk about a few best practices for applying deep learning analysis in teaching and research, and we will close the webinar with a Q&A session and resources.

Now, before I hand it over to Vinay, let's do another poll question; we need your interaction here. The question is: what is your level of comfort with deep learning? Are you new to deep learning, do you know very little, are you getting familiar, or are you already implementing it?
We would like to gauge this. This is awesome: 52% of you know very little, so hopefully with this webinar you will get a better idea of deep learning analysis. Okay, we can close the poll results, and with that I'm going to stop sharing and hand it over to Vinay, who has a lot of content to cover. Vinay, the time is yours.

Thank you, Canserina. Firstly, I'm really excited to see you all here. Before I go ahead, I'd like to talk about deep learning in general. Why is deep learning important? There's a firehose of imagery, a ton of data streaming down the pipe daily. We have huge numbers of satellites and massive numbers of drones capturing high-resolution, high-temporal-frequency data, which quickly becomes rather hard to manage. And it's not just about image management: the traditional approach of taking a couple of images and manually extracting features really doesn't scale anymore. Hence we have deep learning, essentially teaching the machine to extract features for you. The de facto solution for automation that we've seen historically is artificial intelligence and machine learning, so let's see how we can use AI and ML for automation purposes.

Before proceeding, let's talk a little about what deep learning is. A lot of you have probably heard of artificial intelligence and machine learning; deep learning is still relatively new to a lot of our users, and hence you are here. Let's see how these terms fit together. Machine learning refers to data-driven approaches, or algorithms, that learn from a lot of existing data and use that to predict the outcome for new or unseen data. Deep learning is just one type of machine learning, inspired by the human brain. In the context of ArcGIS, all of you have been doing machine learning for a very long time; it's exposed in the form of simple tools to achieve a certain level of automation and, in many cases, reusability, even across different seasons. Clustering, image segmentation, space-time pattern mining, and predictive analysis are all classic examples of machine learning.

So we've seen machine learning, the tools, and how long it's been around. Now a little background on deep learning and its history. Computer vision is now almost as good as, if not better than, human vision, at least in some image-related tasks, and at this point I'd say most of them. The chart on the right shows how the error rate at recognizing images in the ImageNet visual recognition competition went down every single year. Until 2012, machine learning was used, and the error rate was around 25%. Since then deep learning took over, and in the past few years the error rate became so low that computers got better than humans at recognizing images. For this reason the competition was discontinued: we humans can no longer judge a machine that's better than us at extracting features.

Now let's see how deep learning integrates with ArcGIS. There are more than 30 different models targeted at specific tasks, ranging from object detection to image classification to image enhancement, change detection, and much more.
While they are mostly imagery-related models, we also have models for non-spatial data such as text and tabular data, and these are also available to our users directly from within ArcGIS. Additionally, ArcGIS enables integration with third-party deep learning frameworks and machine learning libraries.

Now, pictures speak a thousand words, so let me switch to a demonstration where we can go over the key applications of deep learning in ArcGIS and a demo that really showcases its value within ArcGIS. First, as I said, we have about 30 different deep learning models. If I had to go over all 30 of them via demos, it would probably take the entire presentation, so I decided to pull all of that into this story map. It is publicly available, so you can take a look at it. I'll go through each of the model types we have.

The first one is object classification. As you can see, there are different building structures here, and I want to classify the structures as either damaged or undamaged; that is exactly what my deep learning model is doing for me here. Going further down, here is the most common task: everybody wants to detect objects in an image. You have a collection of images and you want to detect the number of pools, solar panels, cars, or planes; you use the object detection model to derive your own trained model for any given geography. Here's an example of how we detected pools. Pixel classification, as the name suggests, is used to classify your imagery into multiple themes so that you get a classified thematic map; here we've used a Sentinel image and created a classified map using our pixel classification models. The instance segmentation model: how is it different from object detection? If I had used object detection, I would have gotten bounding boxes for each of these buildings; with instance segmentation, it actually delineates the geometry of each and every building. You see the difference. Edge detection, again as the name suggests, is used to detect edges; a typical use case is detecting field boundaries in the agriculture industry or detecting parcels, which is useful for urban planning and other scenarios. Then we have road extraction, which is massively requested, typically for urban planning, transportation planning, and much more; you can train your own road network deep learning models based on your imagery and your geography.

This is an interesting one: change detection. When you have a stack of images, the primary use case is detecting change. Here's a model that enables you to do that easily: we have two different images, and as I splice through them you can see the structures that have changed; everything in pink is automatically detected by the system using the change detection model. This next one is an interesting model where I've simulated optical imagery using radar data: we started from radar, or SAR, imagery, and from there we've generated optical imagery. Next is image captioning: click on a given location and the model constructs a sentence; here it detected a number of planes, determined this to be an airport, and constructed a caption saying that multiple planes are parked at this airport.
Lastly, image enhancement; this is almost like something out of the movies. This is what the image looked like previously, a 30-centimeter-resolution image, and as I swipe across you can see a synthetically derived image: a high-resolution image derived from my lower-resolution image. So you can see there is a ton of models we've made available within ArcGIS, and there are more for point cloud data, for tabular data, and for text as well, as I just said.

Now I'll go over a demonstration showcasing the real value of deep learning: how you can do it within ArcGIS, and the value of doing it within the context of a GIS. I'm not going to go over all of the tools; Sandeep will be doing a little of that. Firstly, some context: this area of interest, the boundary you see, is the fire perimeter of the Woolsey Fire that took place in late 2018. Within the fire perimeter there are about 10,000 buildings; all of these little polygons in blue depict buildings. My objective is to go through the deep learning process and classify these building structures as damaged or undamaged. This is an exercise we put together with USAA, an insurance company; they wanted to distribute their insurance claims as soon as possible.

The whole deep learning process breaks down into multiple steps. The first step is capturing training data. We provide a series of tools: you can either edit features and create your own training samples, or use the specifically designed tools to label your features for training purposes. Here you're essentially capturing training samples and telling the system what a damaged structure looks like and what an undamaged structure looks like. You can pick a structure here, add a new class, call it undamaged, and give it a different color. You obviously need a ton of training samples to define these locations for the system, so in the interest of time I've created a set of training samples in advance; this is what they look like. Zooming out, we've captured about 800 training samples and classified them manually as either damaged or undamaged structures.

Then we go through a series of geoprocessing tools, available as part of the Image Analyst toolbox in the Deep Learning toolset. You have access to tools to create training chips from the training samples; those chips are fed into the Train Deep Learning Model tool, which trains a model for you; and then there is a series of tools that do the inferencing, the feature extraction, for you. After going through each of these steps, this is what my result looks like: those blue buildings you saw at the start of the demonstration have now been classified as either damaged or undamaged. Zooming in a little tighter, you can see the results; these automated results are actually better than those of a trained assessor.
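For readers who want to script this three-step workflow rather than run the tools interactively, the outline below sketches it with ArcPy's Image Analyst module. It is a minimal sketch, assuming an Image Analyst license; the paths, layer names, and parameter values are illustrative placeholders, and the keyword arguments are abbreviated from the full tool signatures in the ArcGIS Pro tool reference.

    import arcpy

    arcpy.CheckOutExtension("ImageAnalyst")

    # Step 1: export the manually labeled polygons as image chips plus labels.
    arcpy.ia.ExportTrainingDataForDeepLearning(
        in_raster="post_fire_imagery.tif",      # hypothetical post-event image
        out_folder=r"C:\dl\training_chips",     # hypothetical output folder
        in_class_data="training_samples",       # the ~800 labeled structures
        image_chip_format="TIFF",
        tile_size_x=400,
        tile_size_y=400,
        metadata_format="Labeled_Tiles",        # format used for object classification
    )

    # Step 2: train a feature (object) classifier on the exported chips.
    arcpy.ia.TrainDeepLearningModel(
        in_folder=r"C:\dl\training_chips",
        out_folder=r"C:\dl\damage_model",
        max_epochs=20,
        model_type="FEATURE_CLASSIFIER",        # damaged vs. undamaged structures
    )

    # Step 3: run inferencing, classifying each building footprint polygon.
    arcpy.ia.ClassifyObjectsUsingDeepLearning(
        in_raster="post_fire_imagery.tif",
        out_feature_class=r"C:\dl\results.gdb\classified_buildings",
        in_model_definition=r"C:\dl\damage_model\damage_model.emd",
        in_features="building_footprints",      # the blue building polygons
    )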
Now, as I said, the real value of doing deep learning within ArcGIS goes beyond just extracting features: out of the box you have access to more than 1,400 geoprocessing tools, and all of this can be used for downstream analysis. I used these extracted features as barriers and fed them into Network Analyst, and from there I identified locations that are within a 20-minute walk for anybody who was affected, the best areas to place shelters. So these are all of the affected buildings, and using Network Analyst we identified locations within a 20-minute walk for anybody affected. That was simple analysis: we just used a set of geoprocessing and feature analysis tools within ArcGIS on the extracted features. Next I enriched those polygons and brought them into this completely configurable Operations Dashboard, and for any given location I can see the total number of damaged buildings, undamaged buildings, the average damaged-building value, and the block population. As I pan around, all of this pops up dynamically; it's a completely configurable app, so you can extract insights as needed.

All right, so in that demonstration this was our workflow: we labeled our data, we trained the deep learning model, and then we ran inferencing. At a high level, if you look at my complete demonstration, the real value was the entire ecosystem playing together: all of the imagery was managed by Image Server, we used a suite of tools to perform downstream analysis on the results, and then we used Operations Dashboards to translate those results into actionable insights. A complete end-to-end geospatial deep learning system. And with that I will hand it over to Sandeep, who will talk about how to get started with deep learning in ArcGIS: the tools I just mentioned, in depth, the APIs, and much more. But before that we have another poll question for you; Canserina, can you help with that?

Yes, thank you so much, Vinay. You can see the poll slide now. The question is: what is the primary data type you would like to run deep learning on? A few answers are coming in; this is a single-choice question, so choose the data type you would most like to use in your deep learning analysis. It seems that 72% of you would use satellite and drone imagery. Very interesting; thank you so much for your participation. Now let's hand the session over to Sandeep to continue.

Thank you, Canserina. So there are two options to get started with deep learning in ArcGIS, but we recommend you start with option one, especially in a classroom where time is a constraint. Option one is to use pre-trained models; option two is to train your own models. The pre-trained models are ready-to-use models, as easy to use as GUI tools, and you can deploy them directly on your imagery or other datasets. With the second option you train your own models. But first, the pre-trained models. Vinay showed you the whole workflow in which we label the data, prepare a training dataset in a format from which we can train the model, and then deploy the trained model. The best way to introduce deep learning in your classrooms, though, is pre-trained models: it's simple, it's ready to use, and it reduces the imagery requirements for model training. It reduces the labeling requirement, and you don't need massive compute resources to train those AI models. We can't make it any simpler.
So let me give you a tour of the pre-trained models available in the Living Atlas. The first one is building footprint extraction for the USA, which works on high-resolution imagery. Next we have building footprint extraction for Africa; road extraction for North America; swimming pool detection; a car detection model that works with drone imagery; a solar panel detection model that also works with drone imagery; a shipwreck detection model that works with BAG bathymetry data; and a ship detection model that works with SAR data. We also have a few land cover classification models: a land cover classification model that works with Sentinel-2 imagery, another that works with Landsat 8 imagery, a human settlements model for Landsat 8 imagery, and a human settlements model for Sentinel-2 imagery. Apart from these imagery models, we have models for point cloud classification; the first is powerline and tree point classification, with which you can detect infrastructure lines and electric poles. We also have a model for image redaction, such as this face and license plate blurring model, and we have object tracking models that you can use with the full motion video support in ArcGIS Pro to detect moving objects in a video like this one.

So we have seen the what; now let's see the why. If you have a large set of imagery and you want to create information layers out of it, you can use pre-trained models to create a GIS directly from it. Here I have loaded imagery for Grenada. Grenada receives 78 inches of rainfall annually, and because of that, some buildings and other infrastructure are constantly at risk of damage from flooding. This is what the flood susceptibility looks like. We will now use pre-trained models to extract buildings and roads. The tool we'll use to extract the building footprints is the Detect Objects Using Deep Learning tool, and we can browse all of these models on the Living Atlas. In the interest of time, I already ran the tool. I'll zoom in to the city of St. George's: the model has detected around 50,000 buildings, and of those, about 1,500 buildings fall in flood-prone areas. We then also extracted roads, and we combined these layers with lidar data to create a digital twin. Here we can see all these layers together in one place, and along with that we have also extracted the power lines and poles using the corresponding pre-trained model. This digital twin can now be used for urban planning, disaster management and prevention, and many other use cases. This is the power of automation, and we try to make it very simple for you; you can use these pre-trained models to start adopting deep learning in your classrooms. So that was about pre-trained models.
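For reference, deploying one of these Living Atlas packages can also be scripted. The sketch below is a minimal outline, assuming an Image Analyst license and a downloaded model package (.dlpk); the imagery name, paths, and package file name are illustrative placeholders, not the exact item names on Living Atlas.

    import arcpy

    arcpy.CheckOutExtension("ImageAnalyst")
    arcpy.env.processorType = "GPU"   # inferencing is far faster on a GPU

    # A pre-trained package is applied directly to the imagery;
    # no labeling and no training step is involved.
    arcpy.ia.DetectObjectsUsingDeepLearning(
        in_raster="grenada_orthoimagery.tif",   # hypothetical imagery mosaic
        out_detected_objects=r"C:\dl\results.gdb\building_footprints",
        in_model_definition=r"C:\models\BuildingFootprintExtraction_USA.dlpk",
    )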
Sorry to interrupt, but folks, as you've seen this demo, and I have seen it a couple of times, it always blows my mind: his input was just imagery, a massive collection of imagery, and using those pre-trained models he was able to create a full-blown foundational GIS. This enabled him to derive valuable insights in a matter of minutes. Imagery plus pre-trained models, put together, equals valuable insights. Back to you, Sandeep; I was just excited about that and had to express it.

Thank you, Vinay. So that was pre-trained models; they really give you a jump start. But there are several scenarios where you need to train your own models, specific to your geography or your imagery properties, or you're simply looking for a different asset for which we don't provide a pre-trained model. For that you can use the complete workflow, and we've got you covered with all the required tools in ArcGIS: you can do an end-to-end workflow and design models for your specific geography, resolution, imagery properties, or the specific asset you are looking for.

To train your own models you can take two routes. The output of both is the same; it depends on your preference for a GUI tool or an API script. Even in your classroom you may have seen that some students want to use GUI tools while others prefer an API. The best part is that the two are completely interoperable and deliver identical results, so you can combine them and use any portion of either route. Now we'll see a workflow where I combine both routes to complete an end-to-end workflow.

Here in ArcGIS Pro I have loaded a land cover layer on top of high-resolution imagery. This land cover layer is from the Chesapeake Conservancy, dated 2013-14; we downloaded the portion of this land cover covering Kent County. Once it's loaded into ArcGIS Pro, we can see what our training imagery looks like and what our land cover looks like. The first tool I'll use is Export Training Data For Deep Learning: I'll export my land cover along with the imagery to a format that can be used to train deep learning models. And here it is: these are the images in the form of small chips, or tiles, and these are the labels corresponding to those images. Now I'll bring them back and load them in ArcGIS Pro to train a deep learning model. The tool I could use here is the Train Deep Learning Model tool, and I've filled in all its parameters, but as a data scientist I always prefer a notebook over a tool for training my deep learning models. So I load the training data into my notebook, here in ArcGIS Pro itself, using ArcGIS Pro notebooks. I can use the prepare_data function to load my training data, passing in the path where I've stored it, and once it's loaded I can use the data's show_batch method to visualize a few samples of my training data. This is how it looks: the land cover is overlaid on top of the imagery.
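In code, the two notebook steps just described look roughly like the sketch below; the chip folder path is a placeholder for wherever Export Training Data For Deep Learning wrote its output.

    from arcgis.learn import prepare_data

    # Load the exported chips and labels; prepare_data also holds back a
    # validation split and wires up the default data transformations.
    data = prepare_data(r"C:\dl\landcover_chips", batch_size=8)

    # Draw a random batch with the land-cover labels overlaid on the imagery,
    # a quick sanity check that chips and labels line up.
    data.show_batch()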
Once we have loaded our data, we can create a model; this one is a pixel classification model. I've created the model but not trained it yet, so let's see how it does without training. I can use the model's show_results method, and because we have not trained it, its output doesn't even come close to the ground truth; it doesn't understand yet what it has to do. Now we can train this model. To do that we use the model.fit method; you can ignore the extra parameters because we've covered you with defaults, so you can just call model.fit even if you don't understand those parameters right now. On the left side you'll see "epoch": that is how many times the model has seen the training data so far, and next to it is an accuracy column. As the model sees the data again and again, it learns the relation between the imagery and the land cover, and the accuracy keeps improving. For now we'll just train it for a few epochs and check how well it works; it takes less than half a minute. Checking the results, we can see the model is starting to understand the relation between the imagery and the land cover, but to get a good model we need to train it further. Once we train it further we get a good accuracy number, and if we check the results now, they compare very well to the ground truth; our model captures even the finer details.

Once trained, we can save this model using the model.save method and then load it back into this tool, Classify Pixels Using Deep Learning. We can pass in testing imagery that is different from the training area. Right now I am limiting it to a testing extent, and yes, I have a GPU, so I'll also specify GPU here. Once I run it, I compare the output with the ground truth. The ground truth is from 2013, imagery from a different time, so some of the buildings are missing on the left side; in the output of our model, those buildings are there. This is the power of automation I was talking about, but with your own training data and your own custom models, and you can do it all with tools within ArcGIS Pro itself.

So we saw ArcGIS Pro, and we also saw part of the API. To show how it compares with the underlying open-source APIs: on the left side you can see raw Python code, and on the right side the arcgis.learn code, which gives an increased level of abstraction and tight coupling with large-scale deep learning models. With this API you can train a model in six lines; you don't need to write much code, and it's a very simple and consistent API.

So we've used pre-trained models, and we've seen a workflow for training our own. We recommend you try the first option in your classrooms, due to time constraints, and you can definitely try the second option if you find the first doesn't cover your needs. Whichever option you take, I'd like to share one more thing about installing the deep learning libraries. To start in ArcGIS Pro, you need the Deep Learning Libraries Installer, which is available on a GitHub repo; we have the links, so you can take it from there and install it. Then we have the arcgis_learn conda meta-package if you are an Anaconda user, and we also have resources for disconnected users.
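Putting the notebook steps together, a condensed training run with arcgis.learn might look like the following sketch; UnetClassifier is one of the library's pixel classification models, and the path and epoch count are illustrative.

    from arcgis.learn import prepare_data, UnetClassifier

    data = prepare_data(r"C:\dl\landcover_chips")

    # Create a pixel classification model from the prepared data.
    model = UnetClassifier(data)

    # Each epoch is one full pass over the chips; per-epoch accuracy is
    # reported against the held-back validation set.
    model.fit(10)

    model.show_results()          # predictions shown beside the ground truth
    model.save("landcover_unet")  # writes a package the Pro tools can consume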
Now I'll talk about deep learning concepts in a little more detail. Everything has been fun so far, but deep learning at a conceptual level can get complex, so this section might be a little overwhelming; it is, however, a set of concepts we think is very important for you as educators to understand.

The first thing I want to talk about is the training set and the validation set. We have seen this graph, and this type of table printed by our API, earlier; so what are training and validation? We hold back some amount of the training data just to check whether our model performs as well on data it has not seen as on the data on which it was trained. You can also hold back some data to use later, for example when you get a pre-trained model, just to verify that the model works on unseen data as well as it does on the training data. You can control this split between the validation and training sets using a parameter in the API or the validation field in the GUI tool.

Next, I wanted to show you this image. This is the same image, transformed in many ways to look like different images. Even if you have less training data, you can train very good models using these data transformations. This is on by default and you don't need to do anything, but I want you to know it is being used; because of it, the models perform really well in real-world scenarios. It reduces underfitting and overfitting and gives you an appropriate fit whether you are using a pretty small dataset or a pretty large one. You can also control this, if you want, using the transforms parameter in the API.

Next is the learning rate finder. When we train these deep learning models, they look at our data multiple times, and with each pass they learn something of the relation between, say, the imagery and the land cover. But the amount of adjustment per pass has to be controlled so that you can train a robust model within a given amount of time. We provide tools such as the learning rate finder so that you can easily find an appropriate learning rate; it also runs by default in the API as well as the GUI tools, so you need not worry about it. It is an important hyperparameter: technically you are not required to set it, but you can control it when needed.

Just to give you an overview of what we saw: the model training, saving, and loading cycle. Once you have your data, you create a model, fit it, save it, and deploy it on the testing AOI. If you want to come back and train your model further, on new data or on the same data, you can do that using the model's load method and fit it again for more epochs.
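A sketch of that resume-and-refine cycle, assuming a model saved earlier in the session; from_model is one way to reload a saved model from its .emd definition, and the paths are placeholders.

    from arcgis.learn import prepare_data, UnetClassifier

    data = prepare_data(r"C:\dl\landcover_chips")

    # Reload the previously saved model to continue training.
    model = UnetClassifier.from_model(
        r"C:\dl\landcover_chips\models\landcover_unet\landcover_unet.emd", data
    )

    # lr_find() plots loss against candidate learning rates and suggests one;
    # the same finder runs implicitly whenever fit() is called without a rate.
    lr = model.lr_find()

    model.fit(10, lr=lr)              # a few more epochs at the suggested rate
    model.save("landcover_unet_v2")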
Now I'll hand it over to Vinay, who will give you a few best practices. Over to you, Vinay.

Thank you; that was a lot of information. The concepts might have been overwhelming for some of you, but as Sandeep said, we think it's important for you, especially as educators, to know and understand them. Now that we've gone over everything deep learning and demystified it for you, let's talk best practices.

The number one factor that determines the accuracy and quality of your deep learning model is the quality of your training samples. So what goes into preparing an ideal training set? You have to be selective, and there needs to be a balance of classes, so reduce the over-represented classes. If you just want to detect structures and ponds, like here on the right, capture them accurately as two separate features; but if you want to detect this site as a whole, label them together with that context included. Also, try to apply image augmentation to oversample under-represented classes; it's a parameter in both the API and the tool. Next is the size of the chips, a question I constantly get: I'd say it should be greater than or equal to 400 pixels. Why do we recommend these larger chip sizes? They provide better context to your model. Lastly, the universal question: how many image chips do I need? Usually somewhere between 400 and 40,000 is what I recommend. I know the range is really huge, but it depends on your use case: if you need a generic model that is applicable across a wide geography, you need training data that represents that variety and geography, but if you apply the model to a limited area, you can get relatively good results with limited training data.

Now we've trained the model; how do I know that it is good without having to run it over my entire project area? Try to run inferencing over multiple regions with large variability, and by variability I mean variability in terms of context or the number of positive examples. The best way to judge the results is to perform a visual scan over your entire inference results, and make sure these results are over a test area, an area the training data has not seen, one that doesn't contain any of the training data. Once you've trained the model, we provide you with a model metrics file; if you look into the output folder you'll see it, and I have a screenshot here on the right. Look at the training metrics: you get a confusion matrix and a validation accuracy, both of which are good indicators of model quality. You can avoid standardized indicators like the mAP and F1 score; those indicators are useful when training happens over known benchmarks, so they don't really tell you how well a model will perform in practice.

Once you've saved your model, look at this graph as well; it's available in the model_metrics.html file. The trend of the graph tells you which category your model falls into. The graph on the left, for instance, indicates the model is not complex enough, so try increasing the backbone size. In the center graph, both the training and validation losses are decreasing but have not yet converged, which means you need to keep training your deep learning model. In the last graph the model has overfit: implement early stopping in this case, reduce the number of epochs, or let the system automatically stop when the model is no longer improving; this is a flag available in the tool, enabled by default, and it's available as part of the API as well. On the next slide, the curves on the left represent ideal model behavior, but they also indicate the model converged a while ago, so you're wasting time and resources; implementing early stopping would be good there too. The graph on the right shows the loss oscillating wildly; here you should try a smaller learning rate.
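That stop-when-no-longer-improving behavior is exposed in arcgis.learn's fit call as well; a minimal sketch, with a placeholder path and a deliberately generous epoch budget.

    from arcgis.learn import prepare_data, UnetClassifier

    data = prepare_data(r"C:\dl\landcover_chips")
    model = UnetClassifier(data)

    # early_stopping halts training once the validation loss stops improving,
    # guarding against both the overfit curve and the wasted-epochs curve;
    # checkpoint keeps the best weights seen during the run.
    model.fit(50, early_stopping=True, checkpoint=True)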
Another question you'll run into a lot: what imagery do I need, and what hardware do I need? First, confirm your imagery is suitable; again, deep learning is not magic. Can the object of interest be identified by you as a human? If you can identify it within a couple of seconds, the machine can do it for you automatically. Ensure the resolution, bands, and bit depth match the model requirements. Something to keep in mind for your analysis is that 8-bit, 3-band imagery is no longer a limitation: if you have multiple bands, as with Landsat data, NAIP imagery, or Maxar imagery, you can use all of those bands together; deep learning accounts for higher bit depths and additional multispectral bands.

Training deep learning models has huge compute requirements, so we typically recommend that training and inferencing be done using GPUs. A good desktop GPU would be something in the RTX 2000 or RTX 3000 range, or the P4000 or GV100 series. As for cloud GPUs: the T4 and V100; for AWS, the entire G4 or P3 instance series is worth a look; for Azure, the ND6s are good instances to consider.
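The deep learning tools run on PyTorch under the hood, so one quick way to confirm that your Pro or notebook environment actually sees a CUDA-capable GPU is the check below.

    import torch

    if torch.cuda.is_available():
        print("GPU detected:", torch.cuda.get_device_name(0))
    else:
        print("No GPU detected; training and inferencing will fall back to the CPU.")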
And with that we're nearing the end of our presentation. The key takeaways from our session today: ArcGIS has powerful deep learning capabilities, with powerful tools and APIs to let you breeze through your deep learning workflows, and these tools are accessible through a variety of clients, much of which Sandeep showed you today. We support a range of tasks, from image classification to object detection to change detection and much more; the first story map showed many of the tasks supported natively by ArcGIS. The processing is massively scalable, so you can use ArcGIS Enterprise to scale out your processing. To complement our deep learning solutions, we have a powerful image management solution and over 1,400 geoprocessing tools, as I mentioned, to perform downstream analysis on your results. Lastly, we're further democratizing AI and making deep learning accessible to the larger geospatial community by providing pre-trained deep learning models. And with that, let me hand it over to Canserina, who will cover resources and field questions as well.

Thank you, Vinay. So we are in the Q&A session; thank you so much to the many of you posting questions. I'd like to cover the questions related to resources first. One of you asked whether you are permitted to use the webinar recording to engage your students. The answer is yes: after this webinar we're going to send you the link to the recording and to the slide deck, and feel free to use them in your classroom. I'm going to send each of you an email about a week after the webinar listing all these resources again, the story map, the recording, and everything, and you can use them for teaching or research. One of you also asked whether you can get the demo as a step-by-step white paper, and where the data can be found. For that we're working with the Learn ArcGIS team, which creates deep learning lessons you can use in class, including step-by-step instructions and the data; a couple of them, excitingly, are in the pipeline, one on the fire destruction model and one on mangroves, and I will keep you posted. All of those resources you can use for teaching and research, I will send them all about a week from now, and you can always engage with us and ask more after the webinar.

There's a question here that I will send to Sandeep: is there a specific deep learning model that is applicable to any environment worldwide? Most of our deep learning models are trained on training data from a specific geography, but we have seen that they work well even in other parts of the world where they were not trained. Additionally, we have tested the ship detection model, the shipwreck detection model, the land cover model, and the roads model, and they work fairly well across the whole world. It's subject to your imagery properties and the geography you apply it to, so it may or may not work really well at every location. I see; for many of them, we've put notes in the story map: some are meant to work worldwide, and for some we've noted a region such as North America, but give them a try and see whether they work in your area of interest. Thank you, Sandeep.

I think this next question I'll send to Vinay. Vinay, you gave the demo of the fire destruction model, and that demo had no step for validation: is it really that easy and trustworthy? What if the training samples aren't that good? So, as I mentioned later, in the best practices, in addition to the model we provide you with a collection of files, including the model metrics, and in the model metrics we show you screenshots of what the ground truth looks like versus the predicted result. When you're training your model, you can specify that 10 percent, or 20 percent, be kept aside just for validation purposes, and then we provide you with a confusion matrix and a set of scores that enable you to judge the accuracy of the deep learning model. Even for the detected features, once you run inferencing, there is a confidence value associated with each extracted feature. So at every step of the way there is a certain amount of validation being done. Awesome; thank you, Vinay.

The next question is: how can I split training, testing, and validation data in the ArcGIS deep learning tools? In ArcGIS Pro it does this automatically for you. One of the tool parameters is the validation percentage; this is related to the previous question. You can specify that 10 percent or 20 percent be set aside, and the system divides the data for you: if you've captured 100 training samples and set aside 10 percent, 90 of those training samples will be used to train your model and 10 will be used for validation. Sandeep, feel free to chime in at any time. Yes, that 10 percent is the default, so even if you don't specify anything it will keep 10 percent aside for validation.
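On the API side, the equivalent of that validation-percentage parameter is val_split_pct on prepare_data; a one-line sketch with a placeholder path.

    from arcgis.learn import prepare_data

    # 0.1 (10 percent) is the default split, matching the tool's behavior;
    # here 20 percent of the chips are held back for validation.
    data = prepare_data(r"C:\dl\landcover_chips", val_split_pct=0.2)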
Related to that, on working with models trained outside Esri: the question is, can I use an externally trained model for inferencing in ArcGIS Pro? There is a deep learning model framework, and we can provide you a link to it; you can bring in your own trained models, but you need to do some extra work to glue them into that particular framework. I see; so for any of you who asked that question, feel free to indicate in the survey if you want us to contact you afterward.

We talked about best practices, sample size, GPUs, and so on, and there's one question here: we have access to a high-performance computing cluster, but the OS is Linux; can I run the deep learning tools in that environment? If it's ArcGIS Enterprise, yes, but ArcGIS Pro is not designed for Linux. With the Python API you can, I believe, when you run it in a Python notebook; is that correct, Sandeep? Yes, you can use the Python API in your Linux environment: we provide the arcgis_learn meta-package, you can get it from the Esri channel, and your environment will be ready. Very well, thank you.

There are a few other questions here that we didn't have time to answer, but again, indicate in your exit survey if you want to be contacted, and we'd be more than happy to answer your questions one on one. With that, I will share the resources. This is the list of resources, and I believe we will also copy and paste it into the chat. The first one is the Deep Learning Libraries Installer from GitHub, for when you want to use deep learning in ArcGIS Pro or ArcGIS Server. There are sample notebooks; the Esri Community for ArcGIS Pro and the Python API, which will help with some questions as well; the GitHub repo for the ArcGIS API for Python; links to the resources for the GUI tools; and, for the deep learning models, the last one is my blog post on them. And again, a week from now you'll get an email from me with a more comprehensive list of the resources, including the Learn lessons that you can use in your classroom, and the recording as well.

With that, I would like to thank all of you for attending this webinar. Please stay tuned for the last webinar of the series, in December, on the topic of image management; I know many of you have massive imagery collections. We really appreciate your participation in this webinar. Please take part in the exit survey, and if you'd like to get in touch with us, feel free to contact me; my email is here on my slide. Thank you again, thank you to Vinay and Sandeep for presenting, and have a wonderful day. Thank you, everybody, for attending; we really appreciate it, and feel free to reach out to us with any questions you have.
Info
Channel: Esri Industries
Views: 488
Rating: 5 out of 5
Keywords: Esri, ArcGIS, GIS, Geographic Information System, ArcGIS Pro, Education, Higher Education, Imagery, Remote Sensing
Id: m82OYTm8fH8
Length: 56min 56sec (3416 seconds)
Published: Fri Sep 24 2021