Google Earth Engine 101: An Introduction for Complete Beginners

Captions
and zoom here, and I don't believe we'll be using any pop-ups, so this should show for you. Let me make sure I'm sharing the right one. There we go. Hopefully you can see my opening slide now. Welcome! My name is Stace Maples, and I'm the Geospatial Manager here at the Stanford Geospatial Center. This afternoon I'm going to run a short webinar. Our in-person workshops are usually about three hours long, which is a bit much for Zoom, so I'll give a short lecture on Google Earth Engine as a platform, and once I'm done with that I'll do a brief walkthrough of a prepared repository of scripts that you should have links to. I sent a couple of links out to the Eventbrite list for those of you who signed up, so you should have them; if not, they're in the slide deck as well. There's a link to my slides in the deck itself: if you go to the link at the top, you'll see my slides in your browser, and they'll advance as I go through the opening lecture.

So let's get started. On this slide is a Google shortened link to my repository. If you click it, Google Earth Engine will open in your browser and let you sign in. If you're in this webinar, try signing in with your stanford.edu email address; that's the address I used to add everyone today. If you can't get into Google Earth Engine right now, we won't be able to troubleshoot that during the webinar, but you can catch me on Slack and we can figure out how to get you into the platform later on. I'm recording this webinar, and it will be posted in a couple of places. I'm also live streaming on Facebook, so you'll be able to find the recording on the Stanford Geospatial Center Facebook page later as well. So even if you can't get into the platform, it's probably worthwhile sticking around, because I'll walk through these simple scripts, explain what they're doing, run them, and then we'll talk a little about what's happening in Google Earth Engine.

First, though, I think it's useful to start with a poll. I've never done this before, so I'm going to try launching it now; you should see three questions. Just check whichever answers convey your level of experience with three things: remote sensing, programming, and spatial data. I'm interested in your experience levels so I can tailor this session. That said, this is billed as an intro for the absolute beginner, and I don't want to scare off anyone who has no programming experience; you're specifically who I designed this workshop for. Those of you with some remote sensing and programming experience will get something out of this as well, but for those who have never programmed before, I think Google Earth Engine is a wonderful first step into learning what you can do with code and what programming can do for data science.

It looks like everyone has voted. A couple of people don't know what remotely sensed data is; that's great, because I'm going to talk about what remote sensing is and the nature of the data we'll be working with today in Google Earth Engine. A lot of you have used multispectral data in a GIS before, but it doesn't look like many of you have done advanced remote sensing analysis; one person says they have. This will still be useful for you, but again, this is a very beginner-level introduction to Google Earth Engine and the Code Editor. How's your programming? About half of you say "I don't think I'm a programmer." That's great; this workshop is designed just for you, and I love that your first time coding might be this webinar. The rest of you know how to move things around in the terminal, but none of you use Python or JavaScript on a regular basis, and obviously nobody writes haiku in Fortran. Maybe my friend Tom MacWright can do that, but the top-end answers are mostly there to be silly. Have you worked with spatial data before? Most of you have some experience; just a couple of you have little to none, but you at least know that Google Maps exists and how to use it.

So we're going to start today with an introduction to Earth observation, beginning with what I call my tiny introduction to remote sensing. If someone could give me a thumbs up or a "yes" in the chat to confirm you can see my slides, that would be great. Beautiful. Thank you very much.

The thing to know about remote sensing data is that it generally comes to us formatted as images. Remote sensing data sets aren't really images, though; they're data sets that can be displayed as images, and we'll see how that works in a little while, along with how Google Earth Engine handles visualization while it's doing the number crunching it's so good at. Fundamentally, a digital image is made up of pixels. You've seen these if you've ever zoomed into an image too far: those squares of color are pixels, and to the computer those pixels are actually numeric values. The computer is just showing you colors because that's what you've asked it to do with the data set at that moment. So remote sensing data is imagery made up of pixels, and those pixels are numeric values.

Going a little deeper into what we're doing when we do remote sensing: the values in those data sets represent the amount of reflected electromagnetic energy from whatever we're pointing at. For the most part we're talking about multispectral imagery, which is imagery recorded by satellites orbiting the Earth. The sun's light hits the Earth and, in the simplest explanation, reflects off the surface, and we capture that reflected light with a satellite carrying what is basically a digital camera, sampling the reflected light at different parts of the spectrum. Everything on Earth reflects electromagnetic energy differently. My sweater is absorbing essentially all of the electromagnetic energy hitting it right now, which is why it appears black: there isn't much light bouncing off of it. In the image we're looking at here, you see a couple of sports cars, and we're going to walk through why one sports car is red and one is blue as a way of explaining how we capture these different spectral samples, which is what multispectral remote sensing data actually is.

Electromagnetic energy has wavelength, and that's how we place it on the electromagnetic spectrum. We see within a very narrow band of that spectrum, just above the ultraviolet and below the infrared: the visible part, the rainbow we're all used to seeing when we pass light through a prism. But electromagnetic energy is everywhere, and it exists outside of what we can see. One of the nice things is that we can use technology to capture electromagnetic energy outside the visible part of the spectrum, which is really useful for the applications we care about in Earth observation.

When you put an image onto your computer or phone, you're displaying what's called an RGB image: red, green, blue. Any picture you've taken with your cell phone is, for the most part, one of these RGB images, which means it's actually three images, not one. When you take a picture, you're taking three spectral samples in the visible part of the spectrum: one sample of the red light being reflected, one of the green, and one of the blue, then reassembling them, using the RGB model, into an image that shows you millions of colors. You can see that in the RGB graphic I showed before: when you mix red, green, and blue you get various other colors. If you've ever opened the color settings in an image editing application, you'll see red, green, and blue values you can set anywhere between 0 and 255. Those 0-255 values of red, green, and blue give us millions of possible colors, and that's what makes it possible to create realistic photography and pass it across the network into a browser.

Let's dig into this idea of spectral sampling a little more. Here again we have the image of sports cars: a red one, a blue one, a green one, plus a white car in the background and a black car over on the right side as well.
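Since we'll be writing JavaScript in the Code Editor later anyway, the 0-255 arithmetic above is easy to sketch in a few lines of plain JavaScript. The function name here is mine, purely for illustration; it is not part of any Earth Engine API:

```javascript
// Each pixel in an RGB image is three numbers, each in the range 0-255.
function mixRGB(r, g, b) {
  // Clamp to the valid 8-bit range, the way an image editor would.
  const clamp = (v) => Math.min(255, Math.max(0, Math.round(v)));
  return { r: clamp(r), g: clamp(g), b: clamp(b) };
}

// 256 possible values per channel, three channels per pixel:
const possibleColors = 256 ** 3; // 16,777,216 -- the "millions of colors"

const redCar = mixRGB(255, 0, 0);     // reflective only in the red band
const white = mixRGB(255, 255, 255);  // high in all three bands
```

Mixing (255, 0, 0) gives a pure-red pixel, (255, 255, 255) gives white, and 256 values per channel across three channels is where the "millions of colors" figure comes from.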
Now, this is the red band of our red-green-blue image of the sports cars. In this image you can see that one car is very reflective in the red part of the spectrum, which clues us in that it's the red car. The car on the right is very absorbent in red; it's hardly reflecting any red light at all, and that hints that it's probably the blue car. In the background you've got a bit of a mix: the green of the sports car back there is a mix of red and other colors, so it's still reflecting a little more red than the blue car. You'll also notice that the white car in the background is reflecting almost as much red light as the red car. If you pay attention over the next slides, the other cars change, but that white car keeps reflecting quite a bit of light across the spectrum.

Here in the green part of the spectrum, we naturally expect the green car to be very reflective of green light. We expect the blue car to be a little less reflective, but certainly more than the red car, which is hardly reflecting at all in green, while our white car in the background is highly reflective in green as well as red. Finally, we'll see it's also highly reflective in blue, and when you mix high values of red, green, and blue you get white, as we saw in the middle of that RGB graphic. Here in the blue part of the spectrum, the blue car is obviously reflecting strongly and the other two are not; they're absorbing a lot of blue light. Once we put those red, green, and blue bands back together, we get the millions of colors we're used to. Those pixel values of 0 to 255 for each of the three colors result in all of the rich browns and greens and blacks, the gravel, the white street lines; all of that is the result of each pixel having three values.

Now, we're not interested in sports cars. That just happened to be a great image with red, green, and blue objects in it, so I could explain what we're doing. We're interested in observing the surface of the Earth, and we don't do that with our cell phones either. There are great YouTube videos of people sending their phones up on weather balloons and taking video, but we need something more structured than that: we need to take that imagery and co-locate it with other data sets where it belongs on the surface of the Earth. The way we do that is with satellites. These satellites have very carefully monitored trajectories, so we know where the satellite is relative to where it's imaging on the ground, and we can place the images it captures in a geographic context with all of the other images. Many of these satellites snap discrete images; some do a sweeping motion, capturing imagery back and forth one line of pixels at a time; and there are now even systems like planet.com, where a constellation of hundreds of satellites in polar orbit image the Earth together, and all the pixels each little satellite collects are reassembled into massive images of the Earth. In fact, Planet images the Earth once a day at three-meter resolution, which is pretty astounding and incredibly useful.

One of the great things about satellites is that they can see outside the red, green, and blue, outside the visible part of the spectrum. Your phone has a filter on it that limits the light coming through the camera onto the CCD to the visible part of the spectrum, but we build cameras all the time that can see beyond it, and we put those cameras in satellites. Those satellites can see in the near infrared, the shortwave infrared, and the thermal infrared, and they can take what's called a panchromatic image: basically a black-and-white image that records all of the reflected light across the red, green, and blue into a single pixel, essentially the intensity of all the light being reflected. So we get a whole constellation of data types beyond just red, green, and blue. Red, green, and blue alone would be fine, but it turns out these other bands are incredibly useful for discriminating between targets like water, vegetation, urban built-up area, and so on.

When you have all of these extra bands of information available, you can more easily discriminate between the targets you're interested in on the ground. For a long time, what we've mostly wanted from satellite imagery is to classify what each pixel on the ground is in terms of its land cover, so we do a lot of classifying of water bodies, vegetation, tree canopy, bare earth, and things of that sort. This graphic shows why data outside the visible spectrum helps: within the visible part of the spectrum, the spectral curves of these targets sit pretty close together, which in some cases makes it very difficult to discriminate, say, vegetation from water. But once you get outside the visible, into the near infrared for instance, the spectral curves for the targets we're interested in begin to diverge, and they diverge at a huge scale, which makes it much, much easier to tell them apart.
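The talk doesn't name it, but the classic way to exploit this near-infrared divergence is the Normalized Difference Vegetation Index, NDVI = (NIR − Red) / (NIR + Red). A plain-JavaScript sketch, with reflectance values I've made up for illustration:

```javascript
// NDVI exploits the fact that vegetation is bright in the near infrared
// and dark in red, while water absorbs the near infrared strongly.
function ndvi(nir, red) {
  if (nir + red === 0) return 0; // avoid dividing by zero over no-data pixels
  return (nir - red) / (nir + red);
}

// Healthy vegetation: bright NIR, dark red -> strongly positive NDVI
const vegetation = ndvi(0.5, 0.08);
// Open water: absorbs NIR -> NDVI near or below zero
const water = ndvi(0.02, 0.05);
```

Two targets whose visible reflectances are nearly identical can land on opposite sides of zero in NDVI, which is exactly the divergence the spectral-curve graphic is showing.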
Access to the infrared makes discriminating between these targets on the ground far easier, and this is why one of the biggest things in drones right now is cameras that shoot red, green, blue, and infrared: it's so useful, particularly in small-scale agriculture applications.

All right, the satellites we work with. In this slide I'm showing you the spectral samples we have access to in these data sets, for Landsat 7 as well as Landsat 8. You can see we've got spectral bands down here in the visible part of the spectrum, near-infrared bands, red-edge bands, and the thermal up here. If you're wondering what these valleys are: the gray parts are what are referred to as atmospheric windows. It turns out that some wavelengths of electromagnetic energy are absorbed or scattered by the atmosphere quite effectively, and that's where you see the very deep values in this graphic. So we can't sample across the entire spectrum, but we can sample at discrete points where transmission through the atmosphere is really good.

So what can we do with all of these extra spectral bands? We can take the RGB model we use to display digital images on a computer or phone, and cheat nature. We could put the blue, green, and red layers from the satellite image into the RGB channels and create a true-color image. But more often, what's most useful is to shift the bands: instead of putting the red band into the red channel of the RGB image, we put the infrared there; then we put the red band of data into the green channel, and the green data into the blue channel, and we throw away the blue band, which scatters a lot and isn't terribly useful for most of the applications we're interested in. When we do that, when we put the infrared where the red goes, look at the image on the right: it's a peak with snow cover at the top, and below that a really beautiful deep red. That deep red is the result of healthy vegetation reflecting very strongly, as we saw in the last slide, in the infrared part of the spectrum. We can use that to increase the contrast between things like the vegetation you can see here and the water cover you can see here.

Now let's talk about spatial resolution, because that's important too. These data sets are made up of pixels, squares of color, but when we're talking about images of the surface of the Earth, those pixels also have geographic dimensions. For Landsat, one of the more popular data sets and one we'll work with today in Google Earth Engine, most of the bands come in 30-meter pixels. Here you have an image of a baseball stadium; it's a high-resolution image, probably half-meter resolution, so you can really see the field. If you look at the infield, that yellow square is a 30-meter square. That's the size of the pixels we have to work with in Landsat data. Other platforms have other pixel sizes: ASTER has 15-meter pixels, which is roughly the distance between the pitcher's mound and home plate here. And some bands further up the electromagnetic spectrum, like the thermal, have larger pixels still, because their wavelengths are so much longer than the wavelengths we capture down in the red, green, blue, and near infrared that the CCD elements have to be larger; you can't capture them any smaller than that. So our thermal pixels end up being about 90 to 100 meters in size.

What does that actually look like? On the left is a Landsat image of that same stadium. If you didn't have the outline of the stadium there, you might not be able to tell it's a stadium at all. You can certainly see there's grass or some vegetation here and there; other things might, if we zoomed back out into geographic context, resolve themselves in our brains as urban built-up area and buildings; but for the most part it's just squares of color. On the right you see the high-resolution image again. So when you're working on a project, one of the important things to consider is the size of the target you're interested in resolving: what is it you want to see? You're not going to count cars with Landsat data; you would not be able to see the cars in the parking lot of this baseball stadium with Landsat imagery. But if you acquired imagery from Maxar, or half-meter-resolution imagery from planet.com's high-resolution satellites, that would be perfectly suitable for counting cars in parking lots. The rule of thumb, as my remote sensing professor once told me, is that whatever you want to detect in a remote sensing data set needs to be about twice the size of your pixels.

And then finally, cadence. Cadence is how often a satellite revisits a location and images the same point on Earth. For Landsat we get 16-day periodicity.
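That rule of thumb is simple enough to encode. A hypothetical helper in plain JavaScript, using the pixel sizes mentioned above and a roughly 5-meter car as the target:

```javascript
// Rule of thumb: a target is reliably detectable only if it is at least
// about twice the size of the sensor's pixels.
function isDetectable(targetMeters, pixelMeters) {
  return targetMeters >= 2 * pixelMeters;
}

isDetectable(5, 30);   // a car vs. Landsat's 30 m pixels -> false
isDetectable(5, 0.5);  // a car vs. half-meter commercial imagery -> true
isDetectable(60, 30);  // a 60 m building vs. Landsat -> true
```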
That means every 16 days, Landsat takes another picture of the Stanford campus, where I am right now. Sentinel-2, the European Space Agency's rough equivalent of Landsat, has 10-day periodicity and slightly higher resolution, with 10-meter pixels; that's 10 days at the equator, and it differs a little at other latitudes. planet.com, the company I mentioned earlier with the microsatellite constellation, images the Earth once a day at 3-meter pixel resolution in red, green, blue, and infrared, and they're beginning to introduce more bands of information, like a blue edge and a red edge and some others, to correspond to the Landsat bands. And DigitalGlobe platforms like Maxar, and even planet.com's higher-resolution satellites, can be tasked: if you need a shot of a particular location next Friday at five o'clock in the evening, you can have a satellite company move their satellite and grab that image. So it is actually possible to take images of the Earth whenever you want to.

Now I'm going to shift and talk a little about Google Earth Engine, and we're going to discuss what makes it revolutionary in remote sensing. When I first started doing remote sensing work, back in the late '90s, I was doing everything on a desktop. I remember downloading Landsat images, which by itself took forever because we were on 56k unless you were at a university, and then you had to do all of this pre-processing: you had to stack the image, warp the image, pick out the bands you were interested in, and in some cases pan-sharpen the image to make the pixels sharper. There was a lot of processing involved, and it would sometimes take weeks, depending on how big your area of interest was, before you were ready to do what you were actually interested in doing.

What's happened over the last 20 years is a proliferation of Earth-observing platforms, a truly exponential increase in the amount of Earth observation data available to us, and our questions have gotten bigger. When you can image the Earth every day at 3-meter pixel resolution, or when you have a Landsat collection 40 years deep with 16-day periodicity for that entire span, you naturally want to ask bigger, more complex questions: questions at 30-meter pixels for the whole world that might take hundreds of years to calculate on a desktop.

A few years ago a book came out, The Fourth Paradigm, and I highly recommend at least reading the introduction by Jim Gray. Around 2006 or 2007, Gray proposed that we were reaching a point where it was no longer practical to bring the data to our question, that is, to download data to a desktop and analyze it there, because our questions were too big. The download alone would take forever, and once it's there, what do you do with a petabyte of data on a desktop? You can't do much. So he proposed that we start pushing our questions to the data: actually putting our questions in the same place the data is stored. Google Earth Engine was created, maybe 10 or 12 years ago, to do exactly that. When you have 60 petabytes or more of satellite imagery stored in a data center like the one in this slide, and you co-locate the algorithmic primitives that let you manipulate and make calculations on that data, along with the firepower of 20,000 or 30,000 processors working in parallel on a single calculation over a single image of the Earth, that changes everything. Calculations that would once have taken 300 years can take a weekend, because you're breaking the job into 30,000 little jobs, calculating them all at once, and then bringing the answers together and aggregating them so you can see your result.

Google Earth Engine was born, in a sense, out of the desire for a pretty image of the Earth. This is an image, I believe, of the southern coast of Borneo, and if you've ever worked with satellite imagery in Indonesia or other island regions, you know it's incredibly difficult to get a good cloud-free image of some of these areas. The Google folks were frustrated that they couldn't have a nice, clean, cloud-free image as a base map in Google Maps. Over the course of several years they had built up a longitudinal set of high-resolution satellite images, some with clouds in one place and some with clouds in another. That's the beautiful thing about clouds: this image was taken in an instant, and five minutes later all of those clouds were somewhere else, so you could take another image and have more pixels exposed, not hidden under clouds. What they figured out was that they could take all of the images they had collected over time, stack them up, look down through that stack at every pixel location, and find the best pixel. It turns out the best pixel is usually the median pixel: the one that's not too bright, because it's not a cloud or snow cover, and not too dark, because it's not a shadow. By taking that stack of images and finding the median pixel value, they were able to create a very nice cloud-free image from dozens of other images of the Earth.
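In Earth Engine a median composite is essentially a one-liner on an image collection, but the idea is simple enough to sketch for a single pixel location in plain JavaScript. The values here are made-up 8-bit samples:

```javascript
// The cloud-free mosaic trick: stack every image's value for one pixel
// location and keep the median -- not too bright (cloud or snow),
// not too dark (shadow).
function medianPixel(stack) {
  const sorted = [...stack].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 1
    ? sorted[mid]
    : (sorted[mid - 1] + sorted[mid]) / 2;
}

// One pixel location across five dates: a cloud (255), a shadow (8),
// and three honest looks at the ground.
const observations = [255, 82, 8, 90, 86];
medianPixel(observations); // 86 -- the cloud and the shadow both lose
```

Run per pixel location across the whole stack, this is exactly the "look down through the stack and keep the best pixel" operation described above.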
of the earth and so when you go into google maps and you're in the google maps view and you click on the little icon down at the bottom and switch to satellite view and zoom all the way out that's the reason you don't see any clouds now that's really powerful and once they've figured out that they could do that they realize well if we can do that if we can make a you know a cloud-free image of anywhere on the earth uh you know from from just a set of of uh you know images that were taken concurrently well then we can also create animations of those images then it's just a simple another step to create one of those cloud-free images per year for say 30 years worth of landsat imagery and so what you're seeing here is an animation of exactly that 30 years of landsat imagery turned into an animated gif and suddenly we can see these riverine dynamics we can see these serpentine rivers and these oxbow lakes forming and dissolving we can see the urban built up area growing in the deforestation we can see that in this image as well we can see that the the forester roads moving out into uh the virgin forest and then we can see that forest going away through time uh as deforestation occurs and so google earth engine made all of this possible now as i mentioned before we just have this glut of earth observing data too much really to handle it's just coming in too fast this is just a slide of the available commercial satellites and and there are um maybe as many available uh non-commercial public domain uh data sets available as you see here and so this is all just to to point out that there's so much data that this was true until 2008 only four percent of the landsat archive had been examined because it just simply wasn't practical it wasn't possible to look at all of that data now google earth engine has been ingesting all of that data pre-processing it preparing it for analysis and now this is actually an old slide they've got well over 60 petabytes of data available in the 
google earth engine catalog and we'll go through the catalog after we run through my slides in a little while just to show you what's available and how you can go through the catalog and sort of shop for data all right at this point now we're ready to go ahead and start the demonstration i'm going to look up here and see if i have any questions in the q a and it doesn't look like it so if anybody has any questions please do feel free to put them in the q a or drop them in the chat i'm going to watch that but what i want you to do is you should have received an email that confirmed that i had added you to the google earth engine account for stanford that would have been with your stanford.edu email address so if you click on this link that's in the slides that should take you to the google earth engine platform it should allow you ask you to log in and you'll log in with your google with your stanford email which will then bounce you out to our single sign-on that'll be familiar and then you'll come back into a platform that will look very much like this i'm going to go a little bit slow and talk about the interface here because i want to give everyone who wants to follow along an opportunity to get the get signed in and get the slide to get the scripts open and so on so i'm just going to go through just a little overview of what you're looking at here in the interface and i just want to confirm if somebody can just drop a yes in the chat to let me know that you can see the uh the code editor in uh in my screen excellent thanks travis all right so so what we have here is the google earth engine javascript code editor and google earth engine actually actually is a set of apis there's the javascript api which we interact with through this code editor and this is a great way to learn google earth engine it's a great way to play it gives you immediate response gives you immediate gratification if you break something you can you can reload the script you can play around 
with changing values and really this i think is one of the best ways to learn not only google earth engine but how to code and how to work with code it's a great interface especially if you're interested in earth observation so here up at the top obviously google's single search box and you can search for places so if i wanted to search for san francisco and zoom in there i could do that but you can also search for data so if i want to search for landsat data i'll just start typing landsat and those data sets become available here and i can import them directly into the code editor and begin working with those you also have here on the left this set of scripts now as a new user you're not going to have any scripts other than my sgc earth engine 101 sample scripts that you import through that link and i should probably go ahead and post that link in the chat for you all just in case anyone doesn't have it there you go so there's the link in the chat if you need it you can grab it from there and let's get back to earth engine all right so this is your script repository scripts that you've imported from other people will show up here scripts that you've saved will show up here and then here you have the docs so you can begin searching for things so if you're interested in calculating the slope on an elevation data set you can type in slope and you can begin to examine the algorithms that are built into google earth engine that allow you to calculate slope from an elevation data set your assets are all of the data sets that you've uploaded to google earth engine and google earth engine gives you i think about a quarter terabyte of storage so about 250 gigabytes which in satellite imagery terms is not very much but it's enough for you to upload a few small images and start playing around another thing
that you can do is there's actually if you're if you're a stanford affiliate you can get a planet.com account through our stanford planet account and just email me later on about that and i'll put you on our account but planet has an orders api now that allows you to order data through a python api and push it directly into google earth engine so you can actually begin working with proprietary data very high resolution proprietary data using the tools that we'll go through here in just a few minutes all right this center part here is the is the javascript window this is where your scripts will show up when you load them or as you're writing this is where you'll write your javascript to create scripts and there's a number of buttons that are really useful up here at the top so the most useful button of all probably is the get link button so you see i've got a script here that is in the script window if i click on get link and copy that any of you would be able to follow that link and open up this script in your own code editor and this is actually probably the easiest way to help someone troubleshoot code this is how i help people when we're on slack and they have trouble with their code and they can't figure out what's going wrong well they just grab their link for their code put it in slack i open it up and i can troubleshoot the code from there and then pass the pass a new link back to them to open up the fresh and edited code so this is a really useful and important link to know about one thing that's really important to know is that copying this url up here at the top from the address bar and giving that to someone will not in most cases allow them to look at the code that you're editing in the script editor you've got this run button which has a couple of options but typically you just want to hit run usually i think i've got my code editor set to when i load a script it runs it automatically often that's not optimal but but in this workshop it runs so that 
you can see what's going on so i just clicked run again it runs the script again and the results will show up in the map if the code produces map layers or in the console and the console is similar to the console you would have in a browser if you were doing some javascript coding there we'll go through these other tabs later on because they're relevant to some of the scripts that we'll play with but i just want to show you that finally you have the map window and we'll talk more about some of the elements of the map window as it comes up in the scripts so if you would go find this earth engine 101 folder and load this first javascript syntax script and run it if it doesn't run automatically what you'll see is i've just got a few things in here that kind of introduce you to the syntax of javascript so for instance we do a lot of declaring variables everything we do in these scripts is going to be declaring a variable and passing that variable to a little bit of code to do something to whatever we've put in that variable and often what we've put into that variable is an image or a collection of images and so on so variables are declared by saying var and then the name of the variable and then to assign a numeric value you would use an equals sign or to assign a string value you would wrap it in quotes like 'i am a string' you can use double quotes but best practice is to always use single quotes in javascript and definitely don't mix them because that can cause problems now here you'll see that all of my little bits of code end in a semicolon and that's actually best practice the code editor will run things without it but it will complain and so often you'll see feedback here in the left part of this code editor panel and here it tells me i'm missing my semicolon so i can just put that right in there and run it again and now i no longer have that
error message now you can see i've declared the answer to be 42 my other variable is i am also a string i've got a variable called test and i've also got a variable that is a list so you can declare a number of things into a variable as a list and here i've declared my list as eggplant apple and wheat and if i print my list at index zero which is the index of the first object because in computers we start counting at zero you see that the result is the word eggplant and that makes sense because it's the first object in my list curly brackets define dictionaries and dictionaries are pairs of a property and its value so here i have a dictionary that has a property of food and its value is bread and you can build up these dictionaries and here i'm printing my dictionary and i'm printing the value for color and when i do that it gives me red it doesn't give me color it gives me the value for that particular property and then finally you can create functions and these are like mini programs that you can pass a variable into and they will do some things with that variable and then pass the result back into your program which then continues running we'll talk a little more about this in the scripts because at some point things do get a little complex and it becomes a lot easier to build these little functions and pass things to them instead of just writing out all the code in line all right so that's just a javascript syntax intro i'm going to go ahead and click on hello world the 01 script because again we start counting at zero on computers and i'm going to abandon my changes now that's fine because this script is already saved it'll just keep my original script there and load my new script and my new script if i click run prints hello world and this is traditional you have
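the syntax covered above condenses into a few lines of plain javascript you can paste into the code editor and experiment with (this is my paraphrase of the sample script, not the script itself):

```javascript
// variables are declared with var and assigned with =
var answer = 42;               // a number
var msg = 'i am a string';     // single quotes are the convention in the code editor

// lists hold ordered things and are indexed from zero
var myList = ['eggplant', 'apple', 'wheat'];
var first = myList[0];         // 'eggplant', the first object in the list

// dictionaries (objects) pair property names with values
var myDict = {food: 'bread', color: 'red'};
var theColor = myDict.color;   // 'red', the value, not the property name

// functions are little reusable programs you pass values into
function reflect(element) {
  return element;
}
var result = reflect(msg);
```

breaking any of these lines and re-running is a cheap way to learn what the error feedback in the left panel looks like.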
to do this in any programming class your first code has to be writing hello world to the console and the way i did this is i called a function called print and i wrapped the value that i wanted to pass to that function the the string that i wanted to print to that function by putting it in the parentheses here and then i've got my semicolon at the end and when i run that the result is it prints hello world i'm just going to mute for a second excuse me all right so so that's the simplest way to do it but often we want to declare these as variables and so we're going to do it that way as well so here what you'll notice is these two slash marks that i have in front have made the code green and that means the code is being ignored there i've commented these code lines out and now i'm deleting those comment slashes and so you can see my code has suddenly become alive right it's got the colors and and so it's suddenly active i can comment this first bit of code out so that it doesn't run and now when i run uh my my little script what's happening is i'm declaring a variable called message msg as equal to the string hello world and it's slightly different so you can see that it actually is printing a different thing and then i'm passing the variable instead of the string into the function called print and that is resulting in hello world coming into the console over here all right so that was sort of obligatory it's a rule you have to do hello world first but now we're going to do hello images because that's much prettier and much more exciting and we can begin to talk a little more about the rest of the interface here so what i've got right now and i haven't run it yet so i'm going to talk a little bit about this first so you can see up at the top i've got this weird section that suddenly appeared and what i've done here is i've declared a variable called srtm and i've declared that variable as a collection of data and that collection of data is this srtm digital 
elevation data notice that the name is hyperlinked so if i click on it i can see the information about this data set it pops up and i can see the metadata the range of dates when it was collected how to call it in my code but more importantly i can see what bands are in the data and in this case there's only one band and every value is between negative 10 and positive 6500 and gives me the value of the elevation at any point on the earth in meters all right so we'll close that and so that's how you can examine the data sets that you've got so i've declared two data sets as variables here and now if we move down into the script what i'm doing with this first line is displaying that first data set so to my map which is down here i'm going to add a layer that's the function that i'm going to call and the layer is srtm and here it is up here the variable srtm so what i'm actually telling this script is add the digital elevation model to my map and then this is a little visualization parameter so you noticed that the values of that elevation model went from negative 10 to 6500 but if i visualize them stretching to those values the map is going to look really grayed out and kind of washed out and so what i generally do is stretch to some smaller value and we'll talk about stretching in just a few minutes because it's easier to explain in another way all right and then finally i'm naming this layer srtm so i'm going to go ahead and click run and if we're lucky we'll start to see some things happening and so what happened is that first of all i'm in the ocean so let's move out of the ocean because we're not going to see a whole lot of elevation there let's go over here i'm going to focus on this area of the rockies because it looks nice all right so i clicked run and what happened was i added my srtm layer but you'll also notice
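the step just described looks roughly like this in the code editor (the exact srtm asset id is pinned in the script's imports section, so the id here is my assumption):

```javascript
// srtm digital elevation model, one band of elevation in meters
// (asset id is an assumption, check the script's imports section)
var srtm = ee.Image('USGS/SRTMGL1_003');

// stretch the display to 0..3000 m rather than the full -10..6500 m
// data range so the map isn't washed out
Map.addLayer(srtm, {min: 0, max: 3000}, 'SRTM');
```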
there's another active line of code here called map add layer so add a layer to my map called forest and that's this data set up here the hansen global forest change data set if we click on it we can see that it's got lots of bands of data the one that i'm interested in in this case is the loss band and that loss data set is binary it's either a zero if there was no loss of forest in that 30 meter pixel or a 1 if there was a loss of forest in that 30 meter pixel during the 16 years that this data set covers and so that data set was also added to my map and you can see that because it's a black and white data set it's zeros and ones it's just black and white so if i want to get rid of that or turn it off and see what's underneath it you'll notice there's this new little layers widget here and i can toggle layers on and off so i'll turn off loss and now i can see my elevation data and as i zoom in and move around in the map notice that the tiles come back in that's because it's recalculating this on the fly and sending new tiles based on the zoom level that you're looking at the world in and so as your scripts get more complex and you're doing more and more calculations on these images before you display them in the browser you'll often find that the delay to display them gets slightly larger it's still nothing like waiting weeks for an image to pre-process all right so now i've got this layer of loss and i can see here i've got areas of white that demonstrate that at some point between 2000 and 2016 those pixels lost forest cover so you'll notice i've got some other code commented out and i'm going to go ahead and delete those comments because what i'm going to do now is make this layer that's on top transparent so that i can see my other layers through it often you want to be able to stack layers up and look at layers in the context of other data layers and so this is a really
useful trick to do that what i'm going to do is i'm going to mask this this forest loss layer with itself now the mask function basically takes a binary image of ones and zeroes and wherever there are zeros it makes the target image invisible where those images where those zeros are and so here you can see that i'm adding a layer and the layer that i'm adding is loss but i'm updating it with an update mask based on itself and so what's going to happen there is the zeros in the data set the black pixels are just going to disappear so now we'll run that and actually i'll have to take this loss layer out and you'll see now that and and one other thing was i gave it a palette of red so that my ones my pixels that were that remain uh will be red and now you can see that those areas that lost forest are superimposed on top of the other layer the uh the elevation data set and you only are seeing the the ones the the pixels where forest loss was all right let's move on to computations and as always you can just click on the next script and and uh and throw away any changes that you made and these scripts will go back to their pristine form and and i've built these scripts to be useful to you in uh in building up your own scripts change out you know an elevation data set or change out the landsat data that we're using in the sample script for say sentinel data or change the location all right so this one is actually going to take those data sets and we're going to calculate on the srtm data that's the elevation data set again we're going to use the slope algorithm right and we're going to apply that slope function to our srtm collection of data and so we'll run that and what we get is this nice colorful slope data set and you can see that i've created this time i've actually instead of embedding the visualization for this slope layer in the actual map add layer code line i've created a variable because that's what one does in javascript you create a variable and then you 
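the self-masking trick just described looks roughly like this (the hansen asset id, version, and band name here are my assumptions; the script's imports pin the exact ones):

```javascript
// hansen global forest change (asset id and version are assumptions)
var forest = ee.Image('UMD/hansen/global_forest_change_2015');
var loss = forest.select(['loss']);   // binary: 1 = forest lost, 0 = no loss

// mask the layer with itself: zero pixels go transparent,
// and the remaining ones draw in red over the layers beneath
Map.addLayer(loss.updateMask(loss), {palette: ['FF0000']}, 'Loss');
```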
can apply it in lots of different places and so that's what i've done instead of writing out the visualization like i did for the dem i'm simply putting the variable that describes the visualization here instead and the code knows to substitute that and so now we have this nice slope layer here i'm going to skip on because we are moving a little slower than i had hoped let's take a look at spatial reductions now those of you that have worked in a gis before will be familiar with spatial reductions under the term zonal stats this is where you take an area of an image and you characterize the pixel values in some way either by summing them up or averaging them and so on and so this is how you can reduce an image to some value that describes it in some way and so here what i'm doing is taking that slope data set and computing the mean slope in a particular geometry and you can see that geometry right up here in the imports section of my script and if i click on this little icon that looks like a gps symbol it's going to center on that geometry so now i can see that i'm centered with a polygon on grand canyon village if i run this script what's going to happen is it's going to calculate that slope i didn't bother with the visualization now because what i'm really after here is a calculation of the average value the mean value of the slope in this particular polygon and so in this code that i'm using to call it there are a couple of things that are important to point out so here i'm declaring a dictionary of parameters for the reduce region call and you can see this dictionary has properties and the values of those properties right and so here i'm declaring my dictionary and we're going to use the mean to reduce this region the geometry that we're going to use is named geometry you can see that right up here and
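the slope step with its visualization pulled out into a variable looks roughly like this (the asset id and the 0..60 degree stretch are my assumptions):

```javascript
var srtm = ee.Image('USGS/SRTMGL1_003');   // asset id is an assumption
var slope = ee.Terrain.slope(srtm);        // per-pixel slope in degrees

// visualization parameters as a reusable variable instead of inline
var slopeVis = {min: 0, max: 60};
Map.addLayer(slope, slopeVis, 'Slope');
```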
the scale now this is important i said this before as you zoom in and out of the map google earth engine recalculates on the fly and it's important to understand that it's actually recalculating based on different pixel sizes when you do that so if you want to get a numeric value out of google earth engine you have to declare the scale at which you want to calculate it so here i want to calculate the mean slope based on 30 meter pixels and if i look over here that's exactly what i've done i've calculated the mean slope as 19.745 and i've done that based on 30 meter pixels now if i change this value to 60 meter pixels i should get a different mean slope value slightly different not much but slightly because that change in the scale at which i'm calculating has made a change to the calculation all right so i'm going to move on to load and filter an image collection and it looks like we're going to go a little over so you're welcome to stay for this i will send an email out after i've finished walking through these scripts with a link to the recording so you can go through and jump forward if you want to but you're welcome to stay as well i love having folks spend time with me while i'm doing this stuff all right so here what we're doing is getting a little simple again right we've only really got three lines of code we've declared our variable called l8 and we've declared that variable as the landsat 8 collection right and so here we've declared another variable and that variable is called filtered and what we're doing is using a function called filter date on our landsat 8 variable right up here so we're going to filter the landsat collection between the dates 2017-07-01 and 2017-07-31 so we're asking for one month of landsat imagery here and then we're going to add that to the map just as is and what will actually happen is the default is to display only whichever
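the reduce region call with its parameter dictionary looks roughly like this (variable names are my assumptions; geometry is the polygon from the imports section):

```javascript
// reduce the slope image to one number: its mean inside the polygon
var meanSlope = slope.reduceRegion({
  reducer: ee.Reducer.mean(),
  geometry: geometry,   // the drawn polygon from the imports section
  scale: 30             // compute on 30 m pixels; 60 m gives a slightly different mean
});
print(meanSlope);
```

the explicit scale is the important part: without it the result would depend on whatever pyramid level the map happened to be drawing.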
image was taken most recently at the top it doesn't make sense to load you know all of the images underneath it if you can't see them anyway so if we run that what we'll see is we'll get those latest pixels loaded into our map and frankly they look pretty terrible so if i zoom out we'll recalculate a little bit and this is because we're not doing any stretching at all and so this gives me an opportunity to just talk a little bit about what stretching is what we're doing here remember i talked about this in the beginning uh lecture the rgb image model and how the values of those rgb bands are actually between 0 and 255 well the reflectance values for these data sets are often wildly different than 0 through 255 sometimes they're they're you know negative 1 through 1 or they they range between zero and one or they range from like in the case of the srtm data from negative 10 to 6500 and we have to figure out a way to put those values in a scale of 0 to 255 because that's what we have available in the rgb model here to to display this data on uh on the browser window and so what happens in a stretch is we take the actual range of values and this is easier to show you if i go down here to the display widget and click on this little settings wheel now you'll see right now that i'm displaying the range of this data as just 0 to 1 without adding any other parameters what i'm going to do is i'm going to stretch 100 and so what this is going to do is it's going to look at the data values of all the bands in these data sets that i'm using and it's going to find the lowest value and the highest value and that's what it's done here the lowest value is 0.014 so and so and the highest value is 1.13 so and so and so now what it's going to do is it's going to take that range and it's going to normalize it to 0 to 255 and turn that into an image and display it in in the browser window so if i apply this what you'll notice is that the image doesn't become great but it does become a 
little bit clearer now if we go to a stretch of 98 where we're chopping off the ends the highest values and the lowest values and apply that you'll see that the image does become a little richer now we're going to do some things in a minute that's going to give us much better images what i want to the other thing i want to point out here is that what we're doing here is simply calling the data in we're not doing anything else so you can actually see that there's clouds in here you can see the seams between these images you can actually see this image this landsat scene is actually off by itself right right there so you can see that these are just little tiles of imagery that have come in with no mosaicing no sort of blending or anything and it looks kind of terrible so let's fix that let's start to fix that and here's where we start to play with image bands in the data so here i'm using the same landsat image collection and i'm still declaring it as l8 and here what i'm going to do is i'm still filtering it to the same date range and i'm going to create here i'm creating a true color visualization so what that means in is now i'm declaring what are the bands from the landsat collection that i want to visualize and so if i click on this landsat link up at the top let's take a look at what that means hopefully we'll get some landsat metadata here soon let's try clicking on it again there we go all right and so here are our bands and i can see all of my different bands my blue band is called b2 my green is b3 and so on and if we go back we'll see that i'm using that in the code so here for a true color visualization i'm using the the red green and blue and so b4 is red b3 is green and b2 is blue in the landsat 8 collection here i'm going to add that as a true color image and then if you look at this next bit of code what i'm doing is i'm using a different set of bands this is that false color infrared that i showed you an example of in the lecture so band 5 is the 
infrared band and then again b4 is the red and b3 is the green and we kind of forget about blue and throw it away because we only have three bands to work with in the browser to to visualize this and then we'll add this as a false color image so let's go ahead and run that and see what happens and again this is just adding the images as the most recent pixels without any other any other manipulation to get rid of clouds or the seams between tiles and so on and you can see now that we've applied a stretch and picked specific bands things look a little more familiar now right um so this band that with this image that we can see that's come in already is our true color image and it came in first because we listed in our code first and now you can see the false color image is starting to come in that came in second because it we've declared it second here in our code and if we look here we can begin to turn those on and off and toggle those and if we zoom in to a particular area again it will recalculate on the fly and give us the new images based on this new zoom level that we're at all right moving along now we're going to do a little of that cleanup that i talked about so now we're still filtering to a month of imagery actually here i think we're filtering to a year of imagery this will be great because now we're going to have like 22 images stacked up for every location we've defined our true color and probably did i do a false color it doesn't look like it did a false color but we're going to display the most recent pixel just like we were before but now look at this line so now we've declared this variable called median and what we're going to do is we're going to take our filtered image collection and remember this is actually a stack of images for a for a whole year and we're going to calculate the median pixel from every band across all of those images and then add that as our true color visualization and remember when we take the uh the median value of that 
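the filtering and band selection just described look roughly like this (the collection id and the 0..0.3 stretch assume a top of atmosphere reflectance collection, so treat both as assumptions):

```javascript
var l8 = ee.ImageCollection('LANDSAT/LC08/C01/T1_TOA');   // id is an assumption
var filtered = l8.filterDate('2017-07-01', '2017-07-31'); // one month of scenes

// true color: red, green, blue = B4, B3, B2 in landsat 8
var trueColor = {bands: ['B4', 'B3', 'B2'], min: 0, max: 0.3};
// false color infrared: B5 (near infrared) takes red's slot, blue is dropped
var falseColor = {bands: ['B5', 'B4', 'B3'], min: 0, max: 0.3};

Map.addLayer(filtered, trueColor, 'True Color');
Map.addLayer(filtered, falseColor, 'False Color');
```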
stack of pixels it tends to be the best pixel the greenest pixel because it's not in shadow and it's not in cloud and so if we run this what we should see is a little bit better image a little prettier so this is the ugly image still with the tile seams and the clouds and so on and here is the true color image without the seams and the and the and the clouds and if i just turn off the filtered you still have some of the seams there but for the most part you can see that what we've done is we've we've created what's called a composite we've taken the best pixels and we've created an image the best image we can uh from landsat for the for the whole year using a median now there are better functions to do this and you can see if you're in a cloudy area this uh code down here that you can un you can uncomment this code and run it and we'll just run that you know these things will run much faster if i zoom into an actual area of interest you can feel google's tile servers smoking on the other end all right so here's our cloudy image and as well as our image our median image but then as it's coming in i'll explain we've got this uh landsat 8 image collection declared and we're using this this simple composite algorithm that's designed specifically for raw landsat imagery in the google earth engine platform and this is going to give us a much better cloud masking it's going to use the metadata from the imagery that tells uh earth engine which pixels have clouds in them and so on if you look in the in the metadata you'll see that there are cloud mask bands and and things of that sort but now we have this much better composite if i toggle back and forth between this composite version and the median version you can see the difference between those two versions you get a much cleaner image using the composite algorithm that goes and finds all that ancillary metadata and uses that all right to isolate an image it's quite easy often you're interested in just one area and if you 
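the median and simple composite steps look roughly like this (the collection id and the display stretches are my assumptions; the simple composite algorithm expects raw landsat imagery, as the transcript notes):

```javascript
// a year of raw landsat 8 scenes (collection id is an assumption)
var l8 = ee.ImageCollection('LANDSAT/LC08/C01/T1');
var filtered = l8.filterDate('2017-01-01', '2017-12-31');

// median of the stack tends to pick cloud- and shadow-free pixels
var median = filtered.median();
Map.addLayer(median, {bands: ['B4', 'B3', 'B2'], min: 0, max: 15000}, 'Median');

// the built-in compositor reads scene metadata for better cloud masking
var composite = ee.Algorithms.Landsat.simpleComposite(filtered);
Map.addLayer(composite, {bands: ['B4', 'B3', 'B2'], max: 128}, 'Composite');
```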
just want to see a single image then you can use a point as your region so if i zoom into this area and then zoom back out we'll see i think this is in the bay area here right down in the south bay and let's get zoomed back out some more there we go ah this is actually just south of bataan all right so i've got a point there and you'll see that i'm applying filter bounds with that point as my roi and i'm declaring this image as the data that is within those bounds and within this filter date and i'm sorting it by cloud cover and so that means i'm going to get the best image at the top and i'm only going to pass back the first image i'm going to take the first image and then here this is just my true color visualization that you're used to seeing so i'll go ahead and run that and what we should get is a single image with the true color visualization this image is the best the lowest cloud cover for this year 2017 for this particular location all right so i want to skip over some stuff but a lot of these are really important and kind of cool ndvis are normalized difference vegetation indices and these are where you take the red and the infrared bands and you subtract them and add them and then divide the difference by the sum to normalize it and what this provides you with is an image whose values from low to high indicate the relative vegetation health and here if i run this what we're getting is just the composite right now we're going to get the composite image for this area that we're looking at and while that's loading i'm going to go ahead and uncomment the code for computing the ndvi and so here what i'm doing is rather than calculating on the collection i'm going to calculate on my composite that i've made this nice image that is the best pixels from this entire year of pixels for this area that i'm looking at and i'm
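the filter, sort, and first pattern for grabbing the least cloudy scene looks roughly like this (the point coordinates, collection id, and metadata property name are my assumptions):

```javascript
var roi = ee.Geometry.Point(-122.1, 37.4);   // coordinates are an assumption
var image = ee.Image(
  ee.ImageCollection('LANDSAT/LC08/C01/T1_TOA')
    .filterBounds(roi)                    // only scenes covering the point
    .filterDate('2017-01-01', '2017-12-31')
    .sort('CLOUD_COVER')                  // least cloudy scene sorts first
    .first()                              // take just that one image
);
Map.addLayer(image, {bands: ['B4', 'B3', 'B2'], min: 0, max: 0.3}, 'Best 2017');
```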
going to run the normalized difference algorithm on it and i have to tell it what the infrared and the red bands are for this data set because band numbering differs across data sets the landsat 7 infrared band i think is a different band number than the landsat 8 infrared band and sentinel uses a different infrared band as well so to use these you have to tell them which is the infrared and which is the red band and then we're going to rename this ndvi and then display it so if i run this now that i've uncommented the code we'll get two layers we'll get our composite and then on top of that we'll get this ndvi layer visualized from white to green and the darker the green the more healthy the vegetation is all right now because we do have a longitudinal data set you'll notice that i skipped a couple of these scripts they're commented and you can go through them and run them and see what they're doing and play around with them after the workshop i just want to show you that we can run that same calculation on a series of images over time and then chart the ndvi values through time so i've got this region of interest that i'm interested in and i'm going to zoom out so i can see where that is and there we go this one is in the bay area and so what i'm going to do is chart using that point the ndvi value over the course of a year and you can see here i'm mapping that function over the collection i'm creating the filter and then i'm creating an ndvi band and putting that into my resulting image and here's where i'm making the chart and so the chart is going to be a result of my with ndvi image which is right here i've declared it and put an ndvi band into the actual rgb landsat image data set that i'm working with and this is going to be at 30 meter pixel resolution remember you have to declare that explicitly when you're reducing an image i'm using my roi which is actually just a point and then
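the ndvi computation on the composite from the previous step looks roughly like this (B5 and B4 are landsat 8's near infrared and red bands; the palette is my assumption):

```javascript
// (nir - red) / (nir + red); bands are passed explicitly because
// numbering differs between landsat 7, landsat 8, and sentinel
var ndvi = composite.normalizedDifference(['B5', 'B4']).rename('NDVI');
Map.addLayer(ndvi, {min: 0, max: 1, palette: ['white', 'green']}, 'NDVI');
```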
here i'm using google charts to chart the values that are returned from this and so if we run that we don't get any layers i didn't put any of the layers into the map i just created the chart and so these are the ndvi values at every point in time that i have an image across the entire year for that particular point on the surface of the earth and so in this way you can begin to look longitudinally at these earth observing data sets which is handy because we've been looking at the earth for 40 years and one of the things we're really interested in is cycles and change all right exporting imagery is important it's one of the main things people want to do with this because you have such easy access to these massive data collections you can very easily zoom in prepare a landsat image and then download it usually to your google drive which is the easiest thing so here again i've declared the landsat 8 collection and i'm making one of those cloud free composites using the simple composite function from one year's worth of imagery and then i'm just going to display that composite then i'm going to export that imagery and there are two ways to do this you can export the data or you can export the visualization so the first one i want to go over is exporting the data and you can see here i've called the export i want to export my image i want to export it to drive and the image that i'm going to export is composite you can see that that data set has been declared up here here it is so i'm going to export composite and this is just a name that you can give it so that you can discriminate it from other files in your google drive and then the region that i want to export and i'll just go ahead and zoom to that is this polygon so i'm going to clip the landsat image to this polygon and export it to my google drive now at the same time i'm also going to export the visualization now the data export is going to export the actual pixel values of the
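the charting pattern described a moment ago can be sketched like this (function and chart signatures as i recall the code editor api, so treat the details as assumptions):

```javascript
// add an ndvi band to every image in the year's collection
var withNDVI = filtered.map(function(image) {
  var ndvi = image.normalizedDifference(['B5', 'B4']).rename('NDVI');
  return image.addBands(ndvi);
});

// chart mean ndvi at the point through time, at an explicit 30 m scale
var chart = ui.Chart.image.series(
    withNDVI.select('NDVI'), roi, ee.Reducer.mean(), 30);
print(chart);
```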
The visualization export, by contrast, is going to give you the RGB image, which is just those 0-to-255 values. That's really appropriate for throwing into, say, a PowerPoint or a paper, while the data export is really appropriate for actually working in a geographic information system or another remote sensing application of your choice. Again, you specify a scale, and that determines the pixel size. I'll go ahead and run this, and it actually requires a little extra work. You can see the visualization coming into the map, but you'll also notice that this Tasks tab is now flashing, and this is an important thing to understand: there are some functions in Google Earth Engine that can run more or less interactively and be pushed to the browser, and there are some tasks that are so computationally intensive that you end up having to queue them. That's what we're doing here: we're going to queue our exports, and they often take a few minutes to finish. There's a question in the Q&A: what format is the data exported in? The data will be exported as a GeoTIFF. A GeoTIFF is a TIFF image with extra information in the header that tells geographic information system software where to put that image on the surface of the earth; it's like a TIFF that knows where it belongs. The visualization export to Drive, I believe, defaults to a PNG. So what I need to do now is actually run these tasks. You can change the parameters you've already set in the code if you want to, but I'm just going to go ahead and run them. It's unlikely they'll finish before this workshop is over, but you can run the code yourself, then go to your Google Drive on your Stanford account, and you'll find this little clipped-out area of the image available as an RGB PNG and also as a GeoTIFF.
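The difference between the two exports comes down to a display stretch: the data export keeps the raw pixel values, while the visualization maps them into 0-to-255 bytes. Here's a hedged plain-Python sketch of a min/max stretch of that kind; the cutoff range below is invented, not Earth Engine's actual default:

```python
def stretch_to_byte(value, lo, hi):
    """Linearly map a raw pixel value in [lo, hi] to a display byte in [0, 255].

    Values outside the range are clamped first, which is what a min/max
    visualization stretch does before an RGB image is written.
    """
    clamped = max(lo, min(hi, value))
    return round(255 * (clamped - lo) / (hi - lo))

# Made-up reflectance range for one band:
lo, hi = 0.0, 0.3
print(stretch_to_byte(0.15, lo, hi))  # mid-range value
print(stretch_to_byte(0.45, lo, hi))  # over range, clamped to 255
```

Once a band has been squashed through a stretch like this, the original data values are gone, which is why the PNG is fine for a slide but the GeoTIFF is what you want for analysis.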
The GeoTIFF is the one that actually has the data values in it. Finally, I want to show you one last thing, and this one is really scary and complex because it's a combination of several of my scripts: the Sentinel dam inundation script. It's actually an expansion of this classification script I've got here. It's taking a minute to load, but here's what I'm going to do: I'll walk through this one, and then we'll be all done and I'll let you go. If anyone wants to follow up with me, the best way to get in touch is on Slack, and at the end of this I'll put my slides back up; there's a Slack link there for the Stanford Geospatial Slack org. So let's walk through this script real quick. Up at the top I've got a bunch of variables. I've declared a variable called sentinel because I'm using the Sentinel dataset; this is ESA's equivalent of Landsat, but it's got better resolution, 10-meter pixels. Then these are my feature collections, the sample data I'm using for this classification. Let me zoom to the area we're going to work in, and I'll turn on my geometries, just the ones that are my sample points, so you can see what's going on here. These are classes of targets on the ground that I'm interested in identifying in an image. I've created geometries; you just create a new layer and start dropping points. For instance, these green points are green space, canopy or grass; these yellow points out here are bare earth; the blue is water; and the red is urban built-up area. What I've done is zoom in really close on a satellite image and identify as many pure pixels for these particular targets as I can.
So I've identified a pixel that I know is just water and called it water, and I've done that over and over; you can see up here I've got about 40 sample points for each of the classes I want to classify my image into. All right, let's talk about the code and what all of that means. Here I'm filtering to date: it looks like one month of data for the before and one month of data for the after. What I'm looking at here is a dam project. There was a dam built here, and between these two dates, June of 2019 and June of 2020, the project was finished and the flooding, the inundation, was created; you can see the difference in the satellite imagery between those two dates. So I've created two datasets, each a filtered set of Sentinel data for one of those date ranges. I create a false-color visualization of the before Sentinel data here; these are just my visualization parameters. Then my before image is going to select the median pixel from that date range, that month of data, for my before dataset; you can see that declared here. I'm taking my before data, which is a month of imagery, and getting the median of these three bands, and I'm doing the same for my after. Now we're calculating the median, and here we have a Map.addLayer, so it's going to add these layers to the map. Why don't I go ahead and click run, because this particular script takes a little while, and I'll scroll down here. Up to this point I've just been collecting the bands, creating the median composites and their visualizations, and adding those to the map, but now we're getting into the actual classification. Remember, our datasets have many bands; they're not just red, green, and blue, not just three bands, and while I can only visualize three bands at a time, the classifier can use all of the bands.
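The median compositing described above takes, for each band, the median of every value observed at a location across the date range, which is why cloudy outliers drop out. A plain-Python sketch for a single pixel location, with invented observations:

```python
from statistics import median

# One pixel location observed on several dates; each observation is a
# {band: value} dict. Values are invented; the cloudy date is an outlier.
observations = [
    {"B4": 0.08, "B3": 0.07, "B2": 0.06},  # clear
    {"B4": 0.09, "B3": 0.08, "B2": 0.07},  # clear
    {"B4": 0.60, "B3": 0.62, "B2": 0.65},  # cloudy outlier
]

def median_composite(obs):
    """Per-band median across all observations; the bright cloud never wins."""
    bands = obs[0].keys()
    return {b: median(o[b] for o in obs) for b in bands}

print(median_composite(observations))
```

Earth Engine does this for every pixel in the collection at once; the sketch just shows why a month of imagery collapses into one mostly cloud-free image.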
It can use all of the spectral information from every one of those bands. Remember, what we're doing is looking at that spectral curve we saw in the lecture earlier and trying to discriminate between targets based on the divergence of those spectral curves across the spectrum. So here I've chosen all the bands that are suitable, the ones that carry spectral information. Then, in this part of the script, I'm merging all of my different classes of training data, those point datasets (the water, the bare earth, the vegetation, and the urban), into a single dataset called training. That gives me one point dataset with a property called class, and the value of that property is the name of the class each point came from. Here's where I'm building the classifier, and the features I'll use for training are named training, which is what I defined up here. I'm going to use the CART classifier; there are a number of different classifiers, and you can read up on them, but this is the one I use most often. It's fast, it's pretty accurate, and I like the results it gives me. Here I begin training my classifier using this training dataset, inputting all of those bands I enumerated in that list earlier. Once that's done, I declare a layer called classified as the actual prediction: the classification of all of my image pixels based on their similarity to those sample pixels. You'll see I've also got a classifiedB, because I've got a before and an after image. Finally, I'm centering the map on my area of interest and adding that area of interest as a geometry so you can see what's going on. I've also got some more code that takes this area of interest and counts the number of water, urban, bare earth, and vegetation pixels within it.
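The merge-then-train flow described above can be sketched outside Earth Engine. Note this toy uses a nearest-centroid rule as a stand-in for CART (CART builds a decision tree; this only illustrates merging labeled samples and then predicting), and all band values here are invented:

```python
# Invented two-band samples per class; real training would use every
# spectral band the script enumerates, not just two.
water_pts = [(0.02, 0.01), (0.03, 0.02)]
urban_pts = [(0.30, 0.28), (0.32, 0.31)]

# Merge into one training set with a 'class' property, as the script does.
training = [{"bands": b, "class": "water"} for b in water_pts] + \
           [{"bands": b, "class": "urban"} for b in urban_pts]

def train_centroids(samples):
    """Average each class's samples into a centroid (a toy stand-in for CART)."""
    sums, counts = {}, {}
    for s in samples:
        c = s["class"]
        counts[c] = counts.get(c, 0) + 1
        acc = sums.setdefault(c, [0.0] * len(s["bands"]))
        for i, v in enumerate(s["bands"]):
            acc[i] += v
    return {c: tuple(v / counts[c] for v in acc) for c, acc in sums.items()}

def classify(pixel, centroids):
    """Assign a pixel to the class whose centroid is nearest."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda c: dist(pixel, centroids[c]))

centroids = train_centroids(training)
print(classify((0.01, 0.02), centroids))  # looks like water
print(classify((0.29, 0.30), centroids))  # looks like urban
```

The shape is the same as the script: labeled samples in, a trained model, then a prediction for every pixel; only the decision rule differs.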
It makes charts out of those counts, and we'll talk about those in a second, but first let's take a look at these layers. I'm going to turn these off and start with the before. Here's the before, and it's easier to see if we turn off my geometries. Here's this area before, in false color, and you can see there's not much water visible. Now, in the after image, we can clearly see the increase in the presence of water after the inundation occurred. The next thing to do is to classify the before image, and you can see this is now an image with four pixel values: bare earth, urban, water, or vegetation. Then I also created an after classification, and in this way I can isolate the actual extent of the inundation; I could take this into another geographic information system, turn it into a vector dataset, and so on. Finally, I put my area of interest on top of all of this and create a chart based on it. Basically, I'm tabulating the number of pixels in each of these classes before and after the inundation. Here you see the before chart, where there's barely any water measurable, and here you see the after, where the inundation has happened and we're measuring much more water in the same area of interest than in the before image. The rest of the script is grayed out; it allows you to export all of these layers and so on. All of these should be available to you in the sample scripts, and you should be able to go through them. What I would suggest is that if you make changes you want to keep, you go up here and Save As with a new name, so you can keep the sample scripts pristine and refer back to them in the future.
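The before/after tabulation is just a per-class pixel count inside the area of interest. With invented lists of classified pixels standing in for the real classified images, it's a Counter:

```python
from collections import Counter

# Invented classified pixels inside the area of interest (in the real
# script these come from the before/after classification images).
before = ["bare"] * 50 + ["vegetation"] * 45 + ["urban"] * 4 + ["water"] * 1
after  = ["bare"] * 20 + ["vegetation"] * 30 + ["urban"] * 5 + ["water"] * 45

before_counts = Counter(before)
after_counts = Counter(after)

print("water before:", before_counts["water"])
print("water after: ", after_counts["water"])
# Multiplying each count by the pixel area (10 m x 10 m for these
# Sentinel pixels) would turn the tallies into inundated area.
```

That count-per-class structure is exactly what the before and after charts in the script are plotting.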
When you need to write your own scripts and customize them in certain ways, you can start from those. Now, with the last maybe five minutes, I want to do one other thing. That's the end of the walkthrough of the scripts, and I'll be available, as I said, on Slack if you have trouble with these scripts or are having trouble figuring out how to apply them to a different dataset; I'm pretty hyper-available on Slack, so you can grab me there. What I want to do now is walk very briefly through the data catalog, so you can see how to shop for datasets in Google Earth Engine. I showed you that you can search for datasets up here in the search bar, but there's actually a Google Earth Engine data catalog that you can browse in several ways. The landing page has themes (climate, surface temperature, weather, and so on), and there's also this nice tags page where you can look for particular tags you're interested in. For instance, if you're interested in fire, we can go look at the fire datasets; there are 26 datasets available that have to do with fire. I like FIRMS; FIRMS is really cool data and easy to work with from the get-go, and you'll notice I've actually created a FIRMS filter-date sample script here. What I want to show you in the data catalog is that when you find a dataset you're interested in and go to its landing page, you get lots of metadata about that dataset: the range of dates it covers (here, from the year 2000 up to yesterday), the name you would use in your code to call it in your script, a brief description, and the bands in the dataset. Here we have this band called T21, which is the brightness temperature, so you can detect fires from this value; this other band is the confidence that a pixel represents an actual fire; and so on. But really, the most important thing is this: there's a sample script for every one of these datasets.
Each is a really brief sample script that guides you through manipulating the parameters of that particular dataset. If I click on Open in Code Editor, it should bounce me into a new browser tab of the Code Editor and load the sample script for FIRMS. If I run it, it zooms into the area and shows me the data for this particular dataset, and you can see it's filtered to what looks like the first 10 days of August 2018. Why don't we do the first 10 days of August 2020? I think that one was actually pretty bad, wasn't it? If we run that, we get a different result, because we filtered to a different range. And there's a lot you can do: right now we're visualizing the T21 band, but if we wanted to, we could use filtering and selecting like we did before, and masking like we did before, to show only those fire pixels with a certain confidence percentage, and so on. That's what I wanted to show you about the data catalog, because it's really rich; as I said, there are about 60 petabytes of data, and it's being updated constantly. In fact, I think it's the case that for some of these datasets Google is grabbing the data directly from the download sites, processing it, and putting it into Google Earth Engine, and in some cases that's faster than organizations like NASA, ESA, and JPL can get the data released into their own platforms. It's almost certainly more useful once it's in Google Earth Engine as well. All right, I'm going to return to my slides now one last time, because I just want you all to have these links. If you're following along in my live slides, you should have this link, and it should be live; you can click on it. These are just really great getting-started guides for Google Earth Engine.
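The confidence masking mentioned for FIRMS (keeping only fire pixels above some confidence) can be sketched in plain Python; the detection tuples and their layout here are invented, not the actual FIRMS band schema:

```python
# Invented fire detections: (brightness_temperature, confidence_percent).
detections = [
    (330.5, 92),
    (305.0, 40),
    (345.2, 81),
    (312.7, 55),
]

def mask_by_confidence(pixels, threshold):
    """Keep only detections whose confidence meets the threshold,
    analogous to masking an image by its confidence band in Earth Engine."""
    return [p for p in pixels if p[1] >= threshold]

high_confidence = mask_by_confidence(detections, threshold=80)
print(high_confidence)
```

In the Code Editor you'd express the same idea as a per-pixel mask on the confidence band rather than a list filter, but the selection logic is the same.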
I've taken a lot of my scripts from their materials, and there are a lot more materials if you look under Community, in particular under education resources and EDU. That section is very heavy on actual remote sensing labs that use Google Earth Engine, rather than tutorials on how to use Google Earth Engine itself, and they assume no experience whatsoever with remote sensing. So if you want a really nice introduction to remote sensing through Google Earth Engine, in a browser, with no other installations, very simple, very satisfying, and easy, this is a great place to start. And then, finally, the Stanford Geospatial Center, which is where I work: it's a center we run to support the use of geospatial technology and data in research and teaching at Stanford, whatever that means at any given time. It means we distribute software like ArcGIS, and we maintain cloud platforms like ArcGIS Online, subscriptions to Google Earth Engine, and other things like Planet.com. If you're interested in any of those, I'd urge you to go sign up for the Stanford Geospatial Slack, and if you have any questions at all about any of the stuff I went over in this webinar today, please hit me up on Slack. If it's two o'clock in the morning I might not see you, but the second I do, at five o'clock in the morning when I get up, I'll get back to you and we can start working on your code. And remember, if you're troubleshooting code, if you're playing around with code, that Get Link button is the magic button: if you need help with code, get a link, put it in a Slack message to me, and as soon as I can take a look at it, I will, and I'll see what I can do to help you along with your coding. That's all I've got for you today. Thank you so much. I see a couple of folks are taking off, and thank you, Travis, for helping me out by letting me know that folks could see my screen share.
I'm going to go ahead and stop sharing now. If you have any questions at all, please reach out on Slack, reach out by email, or just go to gis.stanford.edu, where you can find lots more resources from the Stanford Geospatial Center. Thanks a lot. I think this went well for my first webinar; we'll see. I'll take a look at the video, and once it's posted on Zoom I'll send out a link to the recording to everyone who signed up. Thanks a lot, you all have a great weekend, and stay safe.
Info
Channel: Stanford Geospatial Center
Views: 10,667
Keywords: Stanford, Geospatial, Center, Spatial, GIS, Maps
Id: oAElakLgCdA
Length: 95min 7sec (5707 seconds)
Published: Thu Jan 28 2021