RUS webinar: Processing Sentinel-2 data with R - R01

Captions
Good afternoon everybody and welcome to another RUS Copernicus webinar. My name is Miguel Castro Gomez and I work as a remote sensing specialist for Serco Italia in the RUS Copernicus project. Today I will be your host for this session, in which we will address the topic of processing Sentinel-2 data using the R programming language.

Before starting, I would like to share with you the outline for this session. The first thing I want to introduce is the RUS Copernicus service, the project that is hosting this series of webinars and with which you might already be familiar. Next I will briefly talk about the technical aspects of the Sentinel-2 mission, to make sure we are all on the same page and have the same knowledge about the data we are going to be using in the third point of this session, which is going to be the exercise; for that I will be using a RUS Copernicus virtual machine to run the code and explain it to you. At the very end we will dedicate some time to a Q&A session to make sure we can answer all your questions. I would recommend you to send your questions as soon as you have them; do not wait until the Q&A session, because we might not have enough time. And as always with the RUS Copernicus webinars, this session is being recorded and will be available on our YouTube channel and on our RUS training website; I will give you more information about that later.

Okay, so let's get started, and for that let's have a look at the RUS Copernicus service; let me give you a brief introduction to it. First of all, RUS stands for Research and User Support for Sentinel core products. It is an initiative funded by the European Commission and managed by the European Space Agency, with the objective to promote the uptake of Copernicus Sentinel data and support R&D activities. This service provides a free and open scalable platform in a powerful computing environment, hosting a suite of open source toolboxes pre-installed in virtual machines, which allow you to handle and process the data derived from the Sentinel satellites. What does that mean, in other words? Well, with the large amount of data produced by the Copernicus program, the challenge in Earth observation with Copernicus data is no longer data availability but rather storage and processing capacity. To solve that, RUS Copernicus offers virtual machines, so that you have the appropriate computing environment to handle the data. In addition to all that, RUS also provides a specialized user help desk to support your remote sensing activities with Sentinel data, and a dedicated training program that includes webinars like the one of today, but also face-to-face events that we run throughout the year.

Now some updates regarding the RUS Copernicus project. Since last September 2020, RUS Copernicus has designed a new offer in order to reach a greater number of users and provide them with tools and services. As before, we organize the RUS offer around two main activities: first, the support to individual users and their R&D projects, and secondly our well-known training activities, which include face-to-face events, webinars and support to external trainings. Let me comment on the main changes in each category. Regarding face-to-face events, not much has changed: we maintain a small audience to ensure a high level of interaction; VMs in this case will have four or eight cores, depending on the processing needs of the exercise, and will last one month maximum, a period in which
the ICT support will also be available. Regarding webinars, we will maintain the rate of one webinar per month and we will keep publishing the step-by-step guides as before; the update here is that there will be a limited number of VMs for webinar repetition, and their duration will be two weeks maximum. Next, the support to external training organization and operation: we will maintain this activity, but there will be a limit on the number of VMs available, which will also have the ICT support. And finally, the last update regards the support we provide to individual users: in this case the VMs will no longer be available for this category; we will provide a Docker image to make the RUS working environment available. This will include a suite of pre-installed open source tools for processing Copernicus data, and support will be available to help users set up their Docker containers on their local PC in case they are not familiar with the container technology or with Docker. Finally, and for your information, RUS VMs are provided from Copernicus DIAS and other cloud providers.

So let's move on to the RUS websites, where you can find all the details. The first one, rus-copernicus.eu, contains all the information about the project, and it is here where you can register for the service and access your virtual machine. The second one, rus-training.eu, contains all the information about the training activities we organize, such as this webinar. If you want to stay updated on upcoming training activities, I recommend you to check the website from time to time. Finally, and as I said at the beginning of this webinar, we also have our YouTube channel, called RUS Copernicus Training, where you can find all our previously recorded webinars. I recommend you to have a look later on, after this webinar; I am sure you can find the topics of your interest. We already have a very long list where we cover many applications using Copernicus data, not only Sentinel-2 but also Sentinel-1, Sentinel-3 and Sentinel-5P, which are the Sentinel missions that are up and running as of today.

Okay, so let's move on with our presentation; now we are going to focus on the Sentinel-2 mission. I want to share with you some basic information about this satellite. I am sure most of you are already familiar with it, but here is some key information for those of you that might be new to this data. The Sentinel-2 mission, first of all, is formed by two twin satellites, known as Sentinel-2A and Sentinel-2B, that are phased at 180 degrees from each other. Together with its wide swath of 290 kilometers, this delivers a high revisit time, which is five days at the Equator with the two satellites; of course, if you go further north or south in latitude, this revisit time will decrease. The satellites carry a multispectral high-resolution instrument with 13 spectral bands, and a note on these bands: there are spatial resolution differences. Four bands in the red, green, blue and near-infrared are provided at 10 meters pixel size; those are bands 2, 3, 4 and 8. Then we have six bands located in the red edge and in the shortwave infrared, which are provided at 20 meters pixel size, and finally we have three bands which are used for atmospheric correction and are provided at 60 meters. A small note here on the different products that are available from the Sentinel-2 mission: you can find mainly the Level-1C and the Level-2A; those are the levels available to users. The Level-1C provides reflectance values at the top of the atmosphere, while the Level-2A
provides reflectance values at the bottom of the atmosphere. So Level-2A has already gone through an atmospheric correction, using the Sen2Cor algorithm, and these products, the ones with an atmospheric correction, are produced and delivered systematically by ESA; something to take into account and, I think, a good detail. Okay, so that is all for my introduction about the Sentinel-2 mission. Of course you can find more information on the official website of Sentinel-2, and I do recommend you to have a look if you are not familiar with the satellite yet.

So let's get started with the exercise. What is going to be the objective of our analysis today? What we want to do is to plot an NDVI time series in a region of interest located in the United States, in particular in California, by deriving the vegetation index, NDVI, from Sentinel-2 products using the R programming language. In this exercise I wanted to add the temporal dimension by deriving this time series, since I think it is one of the key components of the Copernicus program. If you remember from the introduction I just gave about the Sentinel-2 satellites, one of the characteristics is that you have a five-day revisit time with the two satellites, Sentinel-2A and B, at the Equator; and the further north or south you go in latitude, the more this revisit time decreases, so you will get a lot of products over your region of interest. Whenever you start working with the Copernicus program in general, or in particular with Sentinel-2, you will see that you have a huge amount of data over your specific location, so the temporal component of the analysis is one of the things you will face for sure if you do such work. In this exercise I do not want to make the final result something very advanced, but rather keep it simple and easy, touching one of the key components of Copernicus, which is the temporal dimension, to make sure that both people with experience in R attending this webinar, but also beginners, can follow and understand the logic behind the use of R for processing satellite data, in this case optical data from the Sentinel-2 mission.

So what are the tools we are going to use in this exercise? Well, we will be working within one of the components of the RUS offer, which are the virtual machines. Within that virtual machine I will be using an R script that has already been written, that I have already created, and that will be available for those of you repeating this exercise, and I will be using the Jupyter Notebook, or actually JupyterLab, to run my R code. If you are not familiar with what Jupyter Notebook or JupyterLab is, I will talk a little bit more about it later on, once we start the exercise. But for now, let me go to the RUS Copernicus VM and let's start the exercise.

Okay, so here I am on the welcome page of the RUS Copernicus VM, and as you can see we are accessing this environment, which is sitting on the cloud, via a web browser, in this case Google Chrome. I am just going to go full screen here to have a better visualization, and now, with the credentials inserted, I am just going to log in. So here we are in the RUS Copernicus VM; let me give you a brief introduction to it, in case this is the first time you see it or the first time you work with a virtual machine like this one. As I said during my introduction, RUS Copernicus VMs come already pre-configured and with a list of pre-defined software
installed. For example, we can see here we have SNAP, which is ESA's dedicated software to analyze Copernicus data; we also have QGIS; we also have RStudio, for example, to run R code; Jupyter Notebook, etc. As you can see, the machine comes ready to use, so you can start your projects straight away. For those of you that are new to virtual machines, I think this is one of the easiest and most friendly steps you can take when you start working with cloud computing, basically because a virtual machine gives you an interaction that is very similar to what you can find on your regular or personal laptop. For example, we have a terminal window to interact via the command line with the VM, we have the file manager where we can put our files, we also have a dedicated internet browser within the virtual machine, and so on. Everything you can expect from a computer, you can find within the VM; the main difference here is that we are backed up by cloud resources, so that we have the appropriate environment to handle and process our task, in this case the analysis of satellite imagery.

The first thing I want to show you in this exercise is how to access Copernicus data. Some of you might already be familiar with that, but I think it is important to cover it in case some of you are new to Sentinel-2 or to Copernicus in general and do not know where to find the data. Remember, Copernicus data is free and open for everybody; the only thing you need is an account. The next thing you need to do is to go to SciHub, scihub.copernicus.eu, and there access the Open Hub. Once you are here, and again once you are registered, you just need to log in, and you access this interface where you can set the search parameters that you want. For example, by opening the advanced parameters panel we can first of all select the sensing period we are interested in; let's say, for example, from the 1st of November until the 15th of November. Then we can select the Sentinel mission we are interested in; in this case let's pick Sentinel-2, since this is the data we are going to be using today. We can specify the particular satellite that we want, either Sentinel-2A or B; if you do not want to make this distinction, you can just leave it empty and it will select both. You can then select the product type; remember, Level-1C is top-of-atmosphere reflectance and Level-2A is bottom-of-atmosphere reflectance. In our case I am going to pick Level-2A, and finally you can set a threshold for the cloud cover, in case you want to apply that.

Once you have your advanced search parameters, it is just a matter of drawing your region of interest on the map. Let's pick, for example, the study area of today, which is in southern California, very close to the border with Mexico, basically here. Once you are over your region, you can just change to the drawing mode, and now you can draw the polygon that represents your region of interest. Once you have all of this, you can click on the search icon here, and in a few seconds you will get the list of products that match your search. In this case we got three products, and you can see here on the left side this panel that pops up with the results and a preview of the images. If you want to access more details about a particular product, you can click on the "i" icon here and you will see more details about the product. Something relevant, for example, is the
true-color preview that you can see; I think this is very helpful, especially if you care about the clouds, which with optical data is always a concern. Together with this you can also access metadata about the product, which you can explore here. Once you have identified a product of interest, you can just click on the arrow here, which will start the download process, and remember, in this case we are using the internet connection and the storage capacity of this specific virtual machine. As you can see, the download is quite fast, and in a few seconds you will get the product in your virtual machine, ready to be analyzed. For today's exercise I have pre-downloaded the products we are going to be using, so I will not download all of them. Just to let you know, I have taken several products throughout the year 2020, to see the temporal evolution of NDVI at specific locations within this product.

So let's get to the next point I want to explain. As I said while I was introducing the exercise, today we are going to be using Jupyter Notebook to run our R script; more in particular, I am going to be using the R installation that comes by default with the Anaconda distribution. Anaconda is an open distribution of the Python and R programming languages that not only gives you access to these programming languages, but also to some tools for package management that help you create virtual environments, or conda environments, to handle different versions of modules and packages, dependency management, etc. It is just a distribution of Python and R with some extra tools to handle and manage that installation. Anaconda is part of the list of software that is available and pre-installed in the virtual machine, and I am just going to make use of that. The first thing I need to do is to activate my conda environment, and this I can do by writing the command conda activate followed by the name of the environment that I have created in advance and that contains all the dependencies I need, all the R packages, and also JupyterLab in this case. By writing this and pressing Enter I get the confirmation that I have activated this environment, I can see it here, and now I can just launch JupyterLab; a sketch of these two commands follows below.

So what is JupyterLab, or what is Jupyter Notebook, let's say? Well, the Jupyter Notebook is a web-based application that allows you to combine your code, in this case Python or R, together with rich text elements like images, paragraphs, hyperlinks, etc. For example, this JupyterLab environment gives me the possibility to write my text with hyperlinks, images and so on, and alongside it I can write some code. The whole idea of Jupyter is to create a framework where you can share your code in an easier way, so you can share it with others, and obviously it is going to be easier for them to understand the logic behind your analysis if you combine your code with some explanations or some context, like the text you can find here. I think this is something helpful, especially in the context of this webinar; that is why I have chosen it instead of RStudio, but obviously you can just take this code and run it in RStudio, and it is going to run just the same.
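For reference, the environment activation described a moment ago looks roughly like this in the terminal; the environment name is a placeholder for whichever conda environment was created in advance:

```bash
conda activate <my_r_env>   # pre-created environment with R, the packages and JupyterLab
jupyter lab                 # launch JupyterLab in the browser
```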
So let me go full screen here, so that we get a better visualization, and we can start the exercise. As you know, we are going to be running this temporal series of NDVI over, in this case, the Imperial Valley in the USA, and I just want to show you now the way I have structured the exercise. Basically, if you are repeating this exercise, you will find this text here that introduces you to the problem and to the Copernicus program, and I am also leaving you here a reference to the technical documentation of Sentinel-2; if you are interested in this, I would recommend you to read it. I will not do it now, since it is basically the same as what I explained in my presentation a few minutes ago, so we can just skip that part. I am also leaving you some links and references to some R tutorials that I think are nice to follow, and also to the Jupyter Notebook documentation and tutorials, in case you are not aware of JupyterLab or Jupyter Notebook; I think they can be helpful if you are new to this.

So how are we going to proceed? Well, the first thing we will do for this analysis is to load some R packages; I will briefly comment on the ones we are using and give you a little bit of context on why I have chosen them. Then we will load some auxiliary data, some files that we need to perform our analysis. We will then load the Sentinel-2 products that we will be using today into the R session, into the R kernel of JupyterLab, and once we have the data loaded we will start to do some processing, in this case, as you can see, rather simple: we will do a subset; once this is done we will derive NDVI; we will extract the NDVI values at specific pixel locations; and finally we will plot the evolution of those values, which is basically the temporal series.

So let's get started, and the first thing we are going to do is to load the R packages. Before that, I want to point out that in JupyterLab you have basically two types of cells. The most important ones are the code cells; you can identify them because they have this kind of gray background color, also, for example, because when you click on them you can see here it is written "Code", and, well, basically because it is here where we write the script, in this case R, or Python if you are working with Python. In opposition to code cells you have markdown cells, which is basically this kind of cell where you can write the rich text elements that can be added to your code for better understanding; this is a cell that follows the Markdown structure, and whenever you execute it, it renders the output and you can read it more easily. Now comes the question of how to run the code, how to execute a cell. There are different alternatives in JupyterLab, and I am leaving you here a note on that, but basically the first thing you need to do is to select the cell that you want to run, and you do so by just clicking somewhere on the cell. Once you do so, you can, for example, click on the play button here, the Run button; you can also go to the Run tab and select one of those options; but I think the most convenient option is to just press Ctrl+Enter or Shift+Enter. Whenever a cell gets executed, you will see that this little white dot becomes dark; it means that the kernel is busy, so it is executing the code, and here an asterisk will show up. Whenever that asterisk becomes a number, it means that the cell has been executed, and that number will be the index of the cell; for example, here we will get a number 1, in the following one we will get a number 2, and so on. In that way you can keep track of the order you have followed in your execution. Okay, so, point
number one: load the R packages. Which packages are we going to use? As you know, R comes with several tools, several packages, by default; however, most of the time you will need to download, install and activate other R packages that contain the tools, the algorithms, for a specific application, a specific analysis. In our case, for this Sentinel-2 data processing, we need some extra packages that I have already pre-installed and that I am only loading here in this R session, and you can see the list here. If you are familiar with R, I am sure you recognize them; for those of you that are new, I think it is helpful if I give you a brief description of them. tidyr is a very well-known package in R that will help us to reorganize our data; it is, let's say, a data management package. The rgdal package gives us access from R to the GDAL library, which is extremely well known in satellite data processing and geospatial analysis. ggplot2 is going to be a library to create some visualizations, some graphics. raster is, I would say, an almost mandatory package in R to process raster data, so the name is very descriptive. leaflet is an R version, an R adaptation, of the Leaflet library of JavaScript, which will allow us to create interactive maps. rasterVis, gridExtra and RColorBrewer are again packages that we need for raster data manipulation and visualization, and finally plotly is an R version of the Plotly library in Python, which is mainly used for interactive visualization as well.

Okay, so those are the tools we are going to use, and I have created here a list in R, you can see it is called pck, and I am going to pass this variable to this function here to load the packages. Some comments on R, especially if you are coming from Python and you are new to R, or maybe if you are completely new to both Python and R: in R we use this arrow here, a "less than" sign followed by a dash (<-), to assign values to a specific variable; on the left side you will see the name of the variable, and on the right side whatever is happening. So I am creating this list, well, this vector of names, containing the names of the R packages that we are going to use, and then basically I am telling the R kernel: look for these packages in my installation, and if they are not present, install them. Since they are already installed, this will be skipped and they will only be activated. To activate them I will use the sapply function in R. sapply is a function from the family of apply functions, and you can find alternatives to it like lapply or apply. Basically, those are functions in R that take a list as input, together with a specific function that you want to apply to each of the elements of that list, and give you back an output following the same order; it is a very handy tool in R that you should definitely check out. So basically I am saying: use this variable pck, which contains the list of packages I want to use, and to each of these elements apply the function require. require, or library, in R is the function we need to activate a specific module, just like in Python you use import followed by the name of the module; in this case it is require.
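As a rough sketch, the cell just described looks something like this; the install-if-missing lines are an assumption about how the pre-installation was scripted, while the pck vector and the sapply(require) call follow the webinar's description:

```r
# vector of package names used in this exercise
pck <- c("tidyr", "rgdal", "ggplot2", "raster", "leaflet",
         "rasterVis", "gridExtra", "RColorBrewer", "plotly")

# install any package that is not yet present, then activate them all;
# sapply() returns TRUE for every package that loads successfully
new <- pck[!(pck %in% installed.packages()[, "Package"])]
if (length(new) > 0) install.packages(new)
sapply(pck, require, character.only = TRUE)
```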
Once we do so, the next thing we will do is to set our working directory, something very common and very advisable when working in R in particular: setting the folder of reference for this specific project. You can see this is the folder I have set; it is a folder located in my virtual machine, so if I just go to my file manager you can find it here; it is basically folders with the data, and in the Original folder you can see, for example, the list of products. Actually, now that I am here, I want to point out that, as you can see, the Sentinel-2 products we are going to use in this exercise are in the .SAFE format. Basically, what I have done is that I downloaded them in advance and unzipped them. I do this because, when working with R, it is better to have the products unzipped, so that I can navigate through the different subfolders and do my work. This is just a piece of advice I would give you, but this is definitely something you can also automate in your R scripts.

Okay, so we just go back to full screen and run the first cell, and now the output comes. We get some prints, some messages from the different packages, and at the very end we get the output of the sapply function. Basically it is confirming that all the packages have been loaded, with this TRUE value: TRUE meaning that yes, the package is installed, it is active and ready to use. If you are repeating this exercise, or if you are loading your packages for other projects, and you see a FALSE here, that would mean that that specific package was not loaded, it is not active, and basically you would also see an error here. If you rerun this cell, you can just get rid of all this printing and you get directly the output of sapply.

Okay, so we got our packages activated, and now we can start with the next step, which is to load the auxiliary data. In this case what we are going to do is load the shapefiles that we will need for our analysis. The first one is going to be the study area; it is a shapefile I have created in advance that represents, as the name indicates, the region of interest. Then I am going to load another shapefile, in this case points, which I am going to call points_sa, and those are the points, the coordinates, that will identify the pixels I will use as reference to extract the NDVI values later on and plot my temporal series. To load shapefiles into the R session there are different alternatives; one of the very well known is to use the readOGR function, which is the one I am using here. The next thing I am going to do is to plot the study area, so that we can locate it on a map, and for that I will be using the leaflet package to plot an interactive map in R. At the same time I am going to specify a particular tile, a particular background, that I want to use; this is something you can customize, and you can find a lot of information about it on the internet. Once I have this tile, this background map, I am going to add a polygon which will represent my study area. As you can see, I have here another variable, called study_area2, that I am using as the variable in the interactive map. The reason for this is that the leaflet package works with spatial elements that are projected in the WGS84 projection, so in lat/lon coordinates; however, my original study area shapefile is projected in UTM. The reason for that, in turn, is that when working with Copernicus data and other layers like shapefiles, you want them, as always when working with geospatial information, to have the same CRS, the same coordinate reference system. By default, Sentinel-2 data is projected in UTM, so
whenever you create something to represent your study area, for example, make sure you project it in the same CRS. Since I also want to represent my study area on a leaflet map, I just need to reproject it, to transform it into lat/lon coordinates, and I do so by using the spTransform function: I pass the original study_area variable as input and I specify that I want a CRS in lat/lon following this specific datum. Okay, once this is done, I am just telling R that I want to visualize the m variable, in this case the leaflet map, so let's run this and have a look.

The first thing we get is this print of the shapefiles; this is basically the output of these two functions here. What you can see is that R, like any other programming language, does not directly give you a graphical visualization of the files you have loaded; this is normal and expected, but this specific function, readOGR, gives you key information about the dataset that you have just loaded into the session. First of all, we see it is a shapefile, so that is good to know, it is just a confirmation. We get its source, so from where in my virtual machine this specific file is coming, and if you pay attention, when we were loading it we did not specify the folder where it was; we did not do so because we already set the working directory in advance, so by default R goes to our working directory and tries to find the file it needs. So we get the path here, and we also get a key piece of information, which is the number of features and the number of fields. If you remember, a shapefile, or vector data in general, has an attribute table, and each attribute table will have specific columns, specific attributes, and a number of entries for each attribute. The study area has one feature and one field, so one polygon and one attribute, which is going to be the ID; for the other one, the points, we get four features and two fields, so four features meaning four points, and two fields meaning two columns in the attribute table. As you can see, just by loading the data we can get a lot of information about, in this case, the shapefile, simply by reading the metadata that is printed.

Just below we can see the interactive map that we have opened here in Jupyter, so we can locate our study area, and as you can see, this is what I told you: it is in the United States, in California, very close to the border with Mexico. We can find the Imperial Valley; it is an area that is irrigated for agriculture, and I think it is going to work just fine for our study case today. One of the advantages, I would say one of the nice features, of working in Jupyter Notebook is that you can create these interactive maps that I think make the understanding of the analysis much easier. Obviously in RStudio you can also create interactive maps, I am not saying it is not possible, but I think the integration of everything in JupyterLab or Jupyter Notebook is a little bit more friendly; but this is definitely a personal opinion.
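A minimal sketch of these auxiliary-data cells, assuming illustrative dsn and layer arguments and an arbitrary background tile (the exact names differ in the webinar script):

```r
# read the two shapefiles; readOGR() prints source, features and fields
study_area <- readOGR(dsn = "AuxData", layer = "study_area")  # 1 polygon
points_sa  <- readOGR(dsn = "AuxData", layer = "points_sa")   # 4 points

# leaflet expects lat/lon (WGS84), while the shapefiles are in UTM,
# so reproject a copy of the study area before mapping it
study_area2 <- spTransform(study_area, CRS("+proj=longlat +datum=WGS84"))
m <- leaflet() %>%
  addProviderTiles(providers$Esri.WorldImagery) %>%  # any background tile works
  addPolygons(data = study_area2, color = "yellow", fill = FALSE)
m  # display the interactive map
```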
Okay, so let's go to the next point, point number three, which is to load the Sentinel data. What we are going to do now is ingest the products that we have into the R session, into the R kernel, so that we can work with them. The first thing we are doing is setting a specific folder where the data is stored in our virtual machine, and as you can see this is located in shared, Training, R01 Sentinel-2 processing, Original; this is the folder I just showed you before, and with it we create this variable, s2. Once this is done, we overwrite this variable with a new command: basically we create a list, and we specify this by using the list.files function of R. I am saying: okay, go to this specific path (the s2 variable at this point is pointing to this path), go into the different subfolders that you can find, and look for files that follow a specific pattern. That pattern is specified here in this argument: B0 followed by 2, 3, 4 or 8 (that is what these brackets mean, we are just passing this variation of the sequence after the zero), and then _10m.jp2, the JPEG2000 format. That is basically what we are saying, and once we create this list, what we are going to do is again use the family of apply functions in R, this family of functions like apply, lapply, sapply that I talked about before. We are saying: the input is going to be this list of files, this list of absolute paths to my Sentinel-2 products, and I want to apply to each element of the list a specific function to load the file into R, and that function is going to be the raster function. raster is a function from the raster package that allows you to load a raster file into R. Once this is done, everything will be saved, again overwriting the variable s2, and finally we will just have a look. So let's run this and see what happens.

Well, it is done, and we are now visualizing the first element of s2. Remember, since we are using lapply, what we get back is a list; the structure, let's say the data type, is a list, and that is why we can use square brackets to specify an index into it. For example, since this is a list, I can just write head() and visualize the top part of it, and we get back the first six elements; this is just pure R programming. If I write tail(), for example, I get the bottom part of the list, and again, if we use a specific index with square brackets, we can access an element. So what can we see here, what is the output of this raster function? First of all, it is telling us that the file has been loaded as a RasterLayer, so its class is RasterLayer. It gives us some metadata information that is very important, for example the dimensions, in this case the number of rows, the number of columns, and their multiplication, which is of course the number of cells. And we get the resolution: 10 and 10.
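Put together, the loading cell looks roughly like this; the folder path is the one shown in the VM, written here as an assumption about its exact spelling:

```r
# point s2 at the data folder, then list all 10 m band files (B02/B03/B04/B08)
s2 <- "/shared/Training/R01_Sentinel2_Processing/Original"
s2 <- list.files(s2, recursive = TRUE, full.names = TRUE,
                 pattern = "B0[2348]_10m\\.jp2$")
s2 <- lapply(s2, raster)  # load each file as a RasterLayer

s2[[1]]   # inspect one element; head(s2) and tail(s2) also work on the list
```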
Another question is: 10 by 10 what? Well, we know in advance, since we know the characteristics of Sentinel-2, that this refers to meters; however, there is something to point out here: R is going to use the units that are specified in your coordinate reference system. If we go to the crs attribute here, we get the projection of the data, so UTM zone 11, the datum that is being used, and there is one specific parameter called units. Most of the time, when working with, let's say, standard CRSs, the units are going to be meters, but this is definitely something you need to know in case it is different for your work. So this 10 by 10 in R refers to whatever unit is defined in your projection, in this case meters, which obviously matches the specification of Sentinel-2, which is 10 by 10; no surprise there.

Okay, we also get the source of this specific raster layer. So what are we looking at? Well, we are looking at band number 4, from a product that was loaded from this specific folder, and it ends in _10m.jp2. Something you maybe do not know, or maybe you do, is that the Sentinel products follow a specific naming convention, meaning that the name of the product here is not randomly generated; it follows a structure, and the thing I want to point out is that part of the name contains the sensing time. So we have here: Sentinel-2, satellite A; MultiSpectral Instrument, Level-2A, so bottom-of-atmosphere reflectance; then this is followed by the year of the sensing time, 2020 in this case, 05 for the month and 05 for the day; this is followed by this fixed character, which is just a T, and then we get the hour of the acquisition, so 18 hours, 19 minutes, 21 seconds. Then you can see the other numbers, which are not very relevant, and the last element I want to point out is this one here: in this position you will always find the tile of the product. When the Sentinel-2 products are acquired, they all follow, let's say, a tile division, and this specific product belongs to the tile 11SPS; this is information you can use later on to look for products from the same area. So this is a product from May, for example; if I open the first one, we can see that it is also from May, because it is band number 2, but if we open number six, for example, now we get a product from June, and so on.
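To summarize the naming convention just described, here is an annotated, illustrative product name; the elided middle fields vary from product to product:

```r
# S2A_MSIL2A_20200505T181921_..._T11SPS_...
#
# S2A      -> Sentinel-2, satellite A
# MSIL2A   -> MultiSpectral Instrument, Level-2A (bottom of atmosphere)
# 20200505 -> sensing date (2020-05-05), then the fixed "T",
#             then 181921 = acquisition time 18:19:21
# T11SPS   -> the tile this product belongs to
```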
Okay, so now we have our products ingested in R, and they are stored in this s2 variable, which by the way has a length; we can check that by using the length function in R. This list has a length of 24: 24 raster files have been loaded. If you pay attention, for each product we are loading the bands 2, 3, 4 and 8, so with some simple math you now know the number of products we are using: six products, with four bands per product.

So once we have the data in R, let's start to do something with it, and of course one of the first things you want to do is to visualize it, to plot it, to see it. That is what I am going to show you now. In this cell, what I am doing first of all is setting some layout parameters; this is JupyterLab-specific, so nothing very important here. What you want to do first of all is to create a stack. You might already be familiar with what a raster stack is: basically, we are putting together raster layers that are individual, we are, let's say, stacking them on top of each other. The raster stack structure is very common in R and in other programming languages, where it might be called something different, but it is very important because many functions in R are designed to work with a stack as input; it is a structure that helps you manipulate the data, index it, etc. So the first thing we are going to do is to convert the list of raster layers that we created before into a raster stack, and I am going to save that into a variable called s2_stack; nothing very surprising. Once I have this, I can do a plot, I can visualize it, and I am going to use a specific function from the raster package called plotRGB. This is very convenient, because the function already knows that I have a multi-layer stack, let's say a multi-layer raster where each layer is a band, and I just need to say: here is my stack of layers, of rasters, and you only need to specify in which layer you have the red, the green and the blue bands. Since I am importing the files in the order 2, 3, 4, 8, we know that the blue band, which is number 2, is at index number 1; the green band, which is number 3 in Sentinel-2, is at position number 2 in my list; and the red band, which is number 4 in Sentinel-2, is at position number 3.
I can confirm that by, for example, opening here the band at index 1; we can see that it is band number 2, and of course, if I go to index number 2, I get band number 3, and so on. By the way, if you are coming from Python and you are new to R: R indexing does not start at zero but at one, so that is something you want to know as soon as possible; if I write zero, I get nothing.

Okay, so we were saying that we wanted to plot an RGB. We give the stack, we specify the indices for the red, green and blue channels, and then we set the max values, the range of values for our pixels: we are saying they go from zero to the maximum value of a band of reference, in this case I have used layer number 2, but you can use any other; this is just to help the function distribute the colors from a minimum pixel value to a maximum pixel value. The minimum is usually zero, so you do not need to specify that. Finally, we set the stretch parameter, which in this case is going to be histogram, or "hist"; let's plot this, and we will talk about this parameter later on. So I am just going to run this, and here we get the output.

What we are visualizing are true-color and false-color compositions of one of the products we are going to be analyzing today. You can see that the first plot, the one on the left, was created with this specific line, where we specified that the red, green and blue channels were on 3, 2 and 1, and then what we have done is we have added the bounding box of the study area with this yellow rectangle here. We have done so with this other line, where we say: plot the study area, we specify a border color, in this case yellow, a width for the line, 5, and the key parameter here is add = TRUE. add = TRUE in R means that we want to add this specific plot to a previously created plot, like this one here. If you do not plot anything before using add = TRUE, it is going to fail, because R does not know where to plot it; first you need to visualize something, then another thing, and with the add = TRUE parameter you can stack the visualizations together. As I was saying, lwd is just the width of the line, so you can change that and play with the visualization; for example, let's have a look: if I put 10, you see now that the lines are much more visible. As you can see, R is not by default, let's say, a visualization tool; it is definitely more for analyzing the data, running your analysis and exporting it, and maybe finalizing the layout in a GIS software or something like that, but you can definitely play with it, there are a lot of parameters you can set, and this is going to help you automate the creation of the layouts that you want. Here we do the same to create the false-color composition; the only difference is that now we are saying that in the red channel we want to pass index number 4, which is the near-infrared, we move the red band into the green channel, and the green band into the blue channel; basic operations.
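A compact sketch of these two plotting cells, under the band order described above (B02, B03, B04, B08 per product; the graphical parameters are the ones mentioned in the webinar):

```r
s2_stack <- stack(s2)  # convert the list of 24 layers into one RasterStack

# true color: red = band 4 (index 3), green = band 3 (index 2), blue = band 2 (index 1)
plotRGB(s2_stack, r = 3, g = 2, b = 1,
        scale = maxValue(s2_stack[[2]]), stretch = "hist")
plot(study_area, border = "yellow", lwd = 5, add = TRUE)  # overlay the ROI box

# false color: near-infrared (index 4) in the red channel
plotRGB(s2_stack, r = 4, g = 3, b = 2,
        scale = maxValue(s2_stack[[2]]), stretch = "hist")
```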
I was saying before that I wanted to comment a little bit more on this stretch = "hist". What we are doing here is telling R the way we want the colors to be distributed along our data: the values of our raster go from zero to the maximum value of the specific raster that is taken as reference, and we want to assign the colors to these pixels using a histogram stretch; that is, we follow the data distribution to place the minimum and maximum colors according to the histogram. This is something very common when doing data visualization; however, it is something you can change in R, and it is going to give you a very different output. The alternative is to set the parameter to "lin", and this changes the way the colors are assigned to the pixel values: it just follows a linear distribution, and as you can see, the visualization gets heavily affected, or at least affected somehow; it is up to you, whatever you prefer. Okay, so this is our product, a true- and false-color visualization of our products with the region of interest added, and remember, before this we located it on a map so that you can place it in the world.

Okay, so the next thing we are going to do is to start processing our data, and the first thing someone might want to do is to crop it, right? It does not make sense to carry this large and heavy dataset through the different processing steps if you are only interested in a specific region, so I would recommend that you crop your data as soon as possible, so that you only keep whatever you want, you save some memory space, and you definitely speed up your analysis. Cropping, or subsetting, or clipping, is one of the things you will do first, and in R this is very easy. How to do it? Well, first of all, again I am setting some layout parameters in JupyterLab, so nothing in particular here, and then, how do you crop a raster in R? By using the crop function of the raster package, which, as you can see, is very self-explanatory. What I am doing here is creating a new variable called s2_stack_crop, and I am saying: in this variable I want to save the output of the crop function of the raster package. This function needs as input the raster files that have to be cropped, in this case the Sentinel-2 stack; remember, this s2_stack is what we created before plotting, after building the list of files in the previous cell. So the input is the Sentinel-2 stack, and we want to crop it using as reference the study area. study_area is pointing to the shapefile of our study area, and the key thing here is that it must have the same CRS, the same projection, because if not, and this is common to all geospatial analysis, they are not going to fit in space, let's say, and it is going to crash. So let's run this; after the crop, we are again just going to plot the true color and false color. Right after the cell has been executed we get our outputs, and again we see the result: nothing very surprising, we just crop our images to the size of our study area. As you can see, now I am not adding the polygon itself to the plot, because it is basically the same as the extent of this product.
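The cropping step, sketched with the variable names used above; both objects share the UTM zone 11 CRS, which is what makes the crop valid:

```r
# clip the 24-layer stack to the study-area extent
s2_stack_crop <- crop(s2_stack, study_area)

# replot the cropped true-color composite
plotRGB(s2_stack_crop, r = 3, g = 2, b = 1,
        scale = maxValue(s2_stack_crop[[2]]), stretch = "hist")
```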
Something to point out and visualize again is the difference in color representation that we get if we use the histogram stretch or the linear stretch, so let's change one of them. Actually, what I am going to do is create a new cell, to avoid cropping the products again, which is a somewhat time-consuming step, not a lot, a few minutes, but it does not make sense to do it again; so I am just going to copy-paste this, change the parameter to linear, and execute the visualization. Here we get the output, and as you can see, the difference in the way the colors are stretched is much more noticeable here. This is just the consequence of using a histogram stretch or not: in that case the colors are concentrated in the range of values that are more common, while here we go linearly from zero to the maximum value, putting one color at every step, and the contrast changes, etc.

Something you might want to do when working in R is to check the help of a specific function. For example, here I was saying that to do the crop we were using the crop function, but maybe you do not know what the parameters are, or what the many other arguments this function can take are. This is something you will end up doing a lot when working with R or any other programming language. One of the nice features of R is that you can access the documentation of a specific function directly; if you are a user of RStudio, you already know where to find it, there is a dedicated tab for that. In Jupyter Notebook you can follow the same syntax: we write a question mark followed by the name of the function, and if you do so, the documentation of the function pops up, and here you can access all the information; for example, the crop function belonging to the raster package, its description, and basically everything you might want to know. I think this is very helpful, and also the fact of working in Jupyter and having it displayed this way is very nice, but obviously this is information you can also find on the internet if you just look for the documentation online.

By the way, something to point out as well: it can happen in R, especially if you start to work with a lot of packages, that one package has a function with the same name as a function in another package, and this can cause confusion, and will cause confusion in your analysis. Let's say we have two packages with a function named the same, crop: how can I specify that I want to use the crop function of the raster package in particular, and not the crop function of another package? In R this is done by writing the name of the package before the function, followed by this annotation here, two colons; we are saying: go to the raster package and use that one, not any other package. By default, the way R works is that, when looking for the crop function, it checks for it following the order in which the packages have been loaded; so if the first package loaded was raster, it takes the crop function of raster. If you want to use the function of another package with the same name, either you load that package first, or you just specify this in your scripts, which I think is much better. Just a side note on this.
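In code, the two lookups just described are simply:

```r
?raster::crop                                        # open the help page for raster's crop()
s2_stack_crop <- raster::crop(s2_stack, study_area)  # unambiguous, package-qualified call
```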
Okay, so let's continue with our exercise. We have cropped our images, we have visualized the output, and now we can continue by deriving the NDVI index. I assume you are all familiar with the NDVI index, so I will not go deep into its description, but basically, for those that might be new to it, the NDVI, or Normalized Difference Vegetation Index, takes advantage of the jump there is in reflectance for vegetation between the red part of the spectrum and the near-infrared. I do not want to go into why this happens, it is definitely related to the structure of leaves and we would go into chemistry, but the idea is that there is a jump between the amount of light that is reflected and absorbed in the near-infrared and in the red, and by normalizing this difference we can create an index that tells us where we can find vegetation; it is obviously a very well-known index in Earth observation. So, something very basic.

How can you derive it in R? There are definitely a lot of ways, and the one I am going to show you today incorporates a little trick for when you are working with a lot of products for which you want to derive the index; there are dedicated functions to derive NDVI, there are many ways you can do it, and this is just one of them. So let's follow the logic. What we have here is six products, six Sentinel-2 images, and for each of them we want to derive the NDVI index, so that we get its temporal evolution. We could definitely go one by one and do it; however, the idea of working with programming languages is somehow to automate our analysis and save time, things like that. Since we have six products, you can assume that what we want is six NDVIs, and this is more or less something you want to store in a list in R. So what I am doing first of all is creating a variable called ndvi, and this variable is going to be a list; so far I do not know how I am going to populate it, but it is going to be a list, and each element of the list will be the NDVI of one of my products. What I am going to do next is to start a for loop, which I am starting here. I am saying: for i in 1 to the length of s2 divided by 4; so, for every element that goes from 1 to the length of s2 divided by 4. Remember, s2 is the variable that contains the list of rasters, so if I write length(s2) here and run it, it gives me 24, right? Four bands, 2, 3, 4 and 8, times six products: 24.
Of course, if I divide that by 4 it gives me the number of products I have, 6. So I am saying: for i in 1 to 6, which is the number of products I have, I want to do a specific operation. I am going to say that element i of my list, so ndvi double square bracket i: in R the double square brackets are used to access a specific element in a list, so if I write it like this with s2, for example, I am accessing this specific element of the list so that I can operate within it; if I use single brackets I am not at that level, I am, simply speaking, visualizing the first element, but I am not within it. That is why we use double brackets. So I am saying: for ndvi position i, and i goes from 1 to 6, so let's say we are in the first iteration, i equals 1: for the first element of my ndvi list I want to overlay. overlay is a function in R coming from the raster package, and it allows me to do algebra with the rasters. I am going to say: s2_stack_crop, this is the variable that holds my cropped data, and as you can see here I am using double square brackets, I open them here and close them there, and here you see this little trick, this is the key part of the line: I am saying (i - 1) * 4 + 3. So what is happening here? In the first iteration of the for loop, i is going to be 1; in the second iteration i is going to be 2, then 3, 4, 5, 6, right? So when i is 1, let's do the math together: 1 minus 1 is 0, times 4 is 0, plus 3 is 3. So what I am saying is: the NDVI at position 1 has to take the s2_stack_crop layer at position number 3, and if we check, position number 3 here is band number 4, the red. Then I have to take the other band: if we continue with the sequence here, we take the one at (i - 1) * 4 + 4, so 1 minus 1 is 0, times 4 is 0, plus 4 is 4, and if I go to number 4, what I see is that we are taking band number 8. So as you can see, we are taking the two bands that are needed to derive NDVI: the near-infrared and the red. In Sentinel-2 the near-infrared is stored in band number 8 and the red in band number 4.

So I am saying: just take these layers, and then apply a function. I am writing here fun = function, and this function must have two parameters, x and y; these x and y come from the layers I identified just before, and I am saying: once you have these parameters, do y minus x divided by y plus x. If you know NDVI, this is its formula: near-infrared minus red, divided by near-infrared plus red. So this is the little trick: in the first iteration of the for loop, when i is equal to 1, I am saying: go and take band number 4, the red band, of the first product, pay attention here to the date, 20200505, then take the NIR band of the same date, and do near-infrared minus red divided by near-infrared plus red. When i is equal to 2, let's do the math together: 2 minus 1 is 1, times 4 is 4, plus 3 is 7; if I go to number 7, I can see that now I am taking band number 4, but this time from the 20200604 product, so one month later. And for the NIR, when i is equal to 2: 2 minus 1 is 1, times
4 is 4, plus 4 is 8; if I go to number 8, I see that we are taking band number 8, so the NIR band, again from the 20200604 product, and then I apply the function, I create my NDVI, and I store it in my ndvi list. Of course we can do the same for all of them, I think you got it: when i is equal to 5, for example, 5 minus 1 is 4, times 4 is 16, plus 3 is 19; if I go to number 19, you see I am taking band number 4, the red band, of this specific product, 20200902, and if I do one more, number 20, I am taking band number 8, and so on. So I think it is clear: this is a way of deriving NDVI that just makes use of this little bit of algebra, this little trick, to iterate over the different raster products that I have in my list and give back whatever you specify in this fun parameter, in this case the NDVI function.

Okay, so once this is done, what we are going to do is set the name of the result to something a little bit more meaningful than what we have here. I am saying: the name for each NDVI layer has to be NDVI_ followed by the sensing time. The sensing time, as you can see, is specified here, but it comes together with all the things we do not want, so what I am doing first of all is splitting this string using the underscore character: in this piece of code I am saying, take this string, the name of the product you are using, and split it into different elements whenever you find an underscore. Once we do so, I want to look for a T, which is this fixed character that we saw, and that you will find in all the Sentinel products, and once you find the T, split again; in that way we can isolate the sensing date only, and then just attach it to NDVI_, which I think in the end produces a better name. So let's run this and see the output.

Well, it is doing all the math, it is doing everything; remember, the NDVI calculation happens at the pixel level, so every pixel goes through this function, and we get the results back. What we get is a list, and we know that because we created it here, but also, if you are familiar with R, you already recognize this type of indexing as a list. The list has six elements, which makes sense because we have six products. Each element is a RasterLayer, so that is good. The dimensions, in number of rows and columns, correspond to our study area, good. The resolution is 10 by 10 meters, which keeps the original resolution of our bands, great. The CRS is still the same as in the original data, UTM zone 11. The source, in this case "memory", means that this specific variable is stored in the RAM of our virtual machine, so this is not something that exists in a file; if I close this session, I will lose this calculation. Then we see the names, NDVI_ plus the date, which is what we were doing here, as I just explained, and then the values go from a minimum of -0.99 to almost 1, so from almost -1 to almost 1, which is totally fine, because it is a normalized index, it is part of the name, and that is why the values are normalized from -1 to 1. Okay, so we have our NDVI calculated; again, we are not visualizing it directly, we are just seeing this metadata information, which is great, but of course we want to see it.
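Putting the loop and the renaming together, a sketch of the cell; the indexing trick and the NDVI formula are as described above, while the token positions in the renaming lines assume band names like T11SPS_20200505T181921_B04_10m:

```r
ndvi <- list()
for (i in 1:(length(s2) / 4)) {
  # red (B04) sits at index (i-1)*4 + 3, NIR (B08) at (i-1)*4 + 4
  ndvi[[i]] <- overlay(s2_stack_crop[[(i - 1) * 4 + 3]],   # x = red
                       s2_stack_crop[[(i - 1) * 4 + 4]],   # y = NIR
                       fun = function(x, y) (y - x) / (y + x))
  # rename to "NDVI_<date>": split on "_" to isolate the date-time token,
  # then on "T" to keep the date only
  nm <- names(s2_stack_crop)[(i - 1) * 4 + 3]
  names(ndvi[[i]]) <- paste0("NDVI_",
                             strsplit(strsplit(nm, "_")[[1]][2], "T")[[1]][1])
}
```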
So let's go down and set the layout. For that I am using a specific function that belongs to the rasterVis package, called levelplot, to visualize raster data in R. There are definitely many ways to do this, with different options and different packages; I just think the levelplot function from rasterVis is very nice, but you can follow your own way. First of all, I set some layout parameters in JupyterLab; this is specific to Jupyter and basically sets the dimensions of the figure I want to create. Then I create a variable called breaks, a vector of numbers going from -1 to 1: those are, let's say, the steps I am going to use to assign the colors in my NDVI. After that I pick a color palette in R, in this case the red-yellow-green one, and you can change that to whatever you want. Once this is done, I say: levelplot my NDVIs. As you can see, as input I am passing a function call, stack(ndvi): instead of stacking the NDVI beforehand and then passing that to levelplot, I do it inside the same call, which is also fine. So let's run this; I will comment on the other part of the plot later.

Here we have the visualization. At the top of each panel we see the name of that specific layer, so the NDVI for 2020-05-05; one month later we see the same area, the NDVI of June 4th; then July 4th, August 3rd, September 2nd and October 2nd, so more or less one-month steps. Things to point out: as you know, NDVI relates to vegetation, so red and yellowish colors mean a low NDVI, little or inactive vegetation, and the greener the color, the more active, the greener the vegetation. As you can see, on each panel there are little blue marks: I have added to this NDVI plot the locations of the points I am going to use as reference to extract specific NDVI values and then plot the temporal series. The final objective of the exercise is basically: go to this specific point, give me the pixel value, then the same pixel in the next layer, and so on, and plot a line out of that to see how it changes. We are going to do this for four points over our study area. The way I have added these points to the plot is again the levelplot syntax; let me move this to another line. First we define the first plot we want to create, the NDVI plot I just explained. If you remember, earlier I was adding a polygon using the basic plot function of R and passing the parameter add = TRUE; here, instead, we use the rasterVis package again, which has its own syntax, and I think it is also very convenient and easy to understand. We say the first layer of our plot is the NDVI, then plus, and then we pass some points: layer(sp.points(points_sa)). points_sa, if you remember well, is what we loaded at the very beginning with the auxiliary data: that shapefile is basically those four points.
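A compact sketch of that call, assuming the `ndvi` list from above; `points_sa` is the name used in the script, while the exact pch, col, cex and lwd values here are only illustrative.

```r
library(rasterVis)     # levelplot() for Raster objects; attaches latticeExtra's layer()
library(RColorBrewer)  # brewer.pal() for the red-yellow-green palette

breaks <- seq(-1, 1, by = 0.1)  # steps used to assign the colors
cols   <- colorRampPalette(brewer.pal(11, "RdYlGn"))(length(breaks) - 1)

levelplot(stack(ndvi), at = breaks, col.regions = cols) +
  layer(sp.points(points_sa, pch = 21, col = "blue", cex = 1.2, lwd = 2))
```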
So I am saying: load these points, and then pch, col, bg; these are, sorry, the parameters for the visualization of those points. pch is a number indexing the symbol being used; 69 indexes this specific blue color; bg is the background, which here is not showing up; and cex and lwd control the size of the symbol and the width of its outline. Basically, visualization. I am pointing this out just to show you that even if R is, let's say, not the final tool you use to create your outputs, it definitely has a lot of options, and if you know them and know how to play with them, you can automate the creation of plots. Part of any analysis is creating visualizations that actually show what you have done, so you can do advanced graphics with R as well.

Here we see the evolution, and if you pay attention and follow the trend a little: early in May most of the fields were quite green, and slowly towards the summer most of them move into yellow colors, so less active, while in mid-summer some are still very green. This is an irrigated area, and local knowledge of your study area is always going to help: there is definitely some irrigation happening here through September, October and so on. So that is a first result of our analysis. However, we want to see this evolution, this change in value, so we are actually going to do what we said and extract the pixel values. To do so we use again the lapply function, that family of functions used to apply a specific function over a list. I say: to my NDVI layers I want to apply the function I specify right after, with parameter ndvi. The key function is extract, from the raster package. It requires a raster as input (the ndvi parameter coming from here), a location (where does this need to happen? at the location of these points), and then a method specifying the way the data is extracted. Remember, if you do not understand what the method "bilinear" is, you can always type ?raster::extract and check the description. If we scroll down to the method argument, it tells you it is a character, so you pass a string, "simple" or "bilinear": with "bilinear", the returned values are interpolated from the values of the four nearest raster cells; with "simple", it takes the value of the exact pixel where the point falls. I am using "bilinear" because I think it is a good way of taking the neighboring pixels into account. Obviously, if you are using, for example, a GPS location, if you went to the field and know the exact location you want to sample, then use method "simple"; but if you are not very sure and want a little bit of context, because maybe that exact pixel was affected by something in particular, bilinear helps, and it is a nice feature of the extract function that I wanted to show you. So: method "bilinear", and then df = TRUE, meaning the result is going to be returned as a data frame.
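As a sketch, the extraction step just described might look like this, again assuming the `ndvi` list and `points_sa` from before.

```r
library(raster)

ndvi_values <- lapply(ndvi, function(x) {
  # "bilinear" interpolates from the four nearest cells around each point;
  # df = TRUE returns the result as a data frame (an ID column plus the value)
  extract(x, points_sa, method = "bilinear", df = TRUE)
})

ndvi_values[[1]]  # first element of the list: one row per point
```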
A data frame is something very common in R, a very common structure: simply put, it is a table. So we run it and we visualize the first element, which you can see here. Remember, since we are using lapply, the output is a list, in this case a list of data frames: if I remove the index, we can see the whole list, our first data frame, then a second one, and so on. Let's go to the first one for the demo. We see the data frame, we see the name NDVI_20200505, and we see the data type, "dbl", which means we have numbers with double precision. The first column is the id, 1, 2, 3, 4, and the second column is the NDVI value itself, 0.95 and so on. What is that 1, 2, 3, 4? Obviously the points, but which one is which? Most of the time, when you play with the data, you will know, especially if you created the points yourself. This 1, 2, 3, 4 refers to the order in which the points were loaded into R. If we go to points_sa and access its coords slot (maybe you do not know this, so let me show it: once you load a shapefile you can access different attributes, for example the bounding box, the data, the projection, and one of them is the coordinates), it gives us the UTM x and y locations of these points: this is point number one, and those are its coordinates, so if we plot the coordinates we can tell which one it is. These little details are very important, but when you work with your own data, when you create your own reference points, there are definitely ways to figure it out.

So we get these data frames, and I can see that point number one goes from 0.95 to 0.94 a month later, then to 0.89, and so on; but obviously this is not the best visualization you can do, so let's move on. We are going to arrange this data frame a little so that we can plot the lines better. First of all we combine these data frames into a single one, and for that I use the function cbind, creating a single data frame for everything; then we remove some duplicates. Let me comment that line with the hash character so that we can see its effect afterwards. If we run this, all the data frames that were individual elements in the list are now combined, but something is repeating: the id column. That is fine, but I do not need to be reminded of the id every time; I know that row number one is point number one. Here you are seeing something very common when working with data in a programming environment, and with data in general: you will have to go through data cleaning and data preparation. It is part of the work, part of the job, and most of the time it can be a big part of the job.
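A minimal sketch of that combining step, assuming `ndvi_values` is the list of data frames from the previous step; using do.call() to cbind the whole list at once is my shorthand, not necessarily how the script writes it, and the duplicate-removal line is the one explained next.

```r
# One wide data frame: an ID column plus one NDVI column per date
ndvi_df <- do.call(cbind, ndvi_values)

# Each extracted data frame brought its own ID column; keep only the first
ndvi_df <- ndvi_df[, !duplicated(colnames(ndvi_df))]
```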
So in this case we say: let's get rid of these repetitions, we do not want them. In my data frame I want all the rows, and for the columns I want to avoid the duplicates, so I use the exclamation mark to say: if duplicated is TRUE for a column name, remove it, and I check the column names for that operation. If we run that, we see the output is a clean data frame. Great, now it is easier to see the evolution of the points; but of course what we want is a plot, a graphic, which will be easier for everybody, so let's do that.

What we have here is a data structure where one sample in one row has several values: the sample in row number one has this value, then this value, then this value. This does not follow, let's say, the preferred data structure for visualization in R. Here we go a little deeper into the way different packages in R work: I am talking about the structure required by the tidyverse, the tidyr series of packages, whose functions expect one observation per row. Here, instead, we have several values for one sample in the same row, and the idea is to have, in each row, one sample and one value. So we need to, let's say, stretch this data frame, duplicating some values, so that we get that structure; let me show you what I mean, and it will be easier to understand. What I have done here is use the gather function of the tidyr package that we loaded: take this data frame, create the columns date and value, and gather it, so that per row we have one variable and one value. This is key, something you need to understand when working with packages like ggplot2: they all rely on this structure, one row, one sample, one value. That is why, instead of having the dates as separate columns, we now have a single column called date, and you see it repeating, 2020-05-05 four times, and for each of those we get one value. Again, this falls under data preparation, data cleaning, data management; it is part of analyzing data with R, or with any other programming language, and as I said it usually takes a big part of the time you will dedicate to analyzing your data.

So we can see the data rearranged here, and now we can just plot it. For that I am going to use ggplot2, again a very well known library for creating graphics. I pass the data frame as input; I say the axes are the columns date and value; I want to group by id, so this number one goes together with this number one and this one; and I want to link the colors to the ids, so the samples with id number two must all share the same color. Then I want to draw a line between the records, and on the line I want to plot the points so that I can see them. As you can see, I have commented one line here; let's run this, and I will comment on it later.
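The reshape and plot just walked through might look like the following sketch, assuming `ndvi_df` from above and that the id column produced by extract() is called ID.

```r
library(tidyr)
library(ggplot2)

# gather() turns the wide table (one NDVI column per date) into a long one:
# one row per point and date, with a single 'date' and a single 'value' column
ndvi_long <- gather(ndvi_df, key = "date", value = "value", -ID)

ggplot(ndvi_long, aes(x = date, y = value, group = ID, color = factor(ID))) +
  geom_line() +   # one line per point, linking its values through time
  geom_point()    # mark each individual observation on the line
```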
Oh, sorry: since I commented that line out, the plot is not showing; I just need to do this. Okay, so here we get the plot, and as you can see it is not very nice. That is obviously because I have been playing with the way the Jupyter Notebook displays outputs. I am showing you this to illustrate what happens when you deal with data: for this kind of exercise I have of course run the code beforehand and fixed the issues, but this is something that can happen to you, and you need to know how to approach it. What is happening is that the plot comes out very big. If you remember, in my previous plots I have been using this option here to set the width and height of the figures, and that changes the default values Jupyter uses to render outputs; since I did not specify anything in this cell, it reuses the last settings, and that is why the graph looks like this. There are of course ways to change that manually, and ggplot has a lot of options to customize your graphs. However, here I want to show you something you might not be aware of; if you are an intermediate user of R this can be of help, and definitely if you are a beginner as well. It is how to create an interactive plot: it would be nice, for example, to move along a line, read the values, and interact with the graph itself. For that we are going to use the plotly library, which is available in R. It is mainly known as a Python library, and you will mostly see examples in Python, but it has come to R and is very easy to use: basically, we can create a ggplot graph and then embed it into a plotly visualization. Let me just run it and show it to you.
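A sketch of that embedding, with `p` as a placeholder name for the ggplot object.

```r
library(plotly)

p <- ggplot(ndvi_long, aes(x = date, y = value, group = ID, color = factor(ID))) +
  geom_line() +
  geom_point()

# Wrap the ggplot in an interactive plotly widget; hovering shows the values,
# and the toolbar offers zoom, pan, autoscale and "download plot as png"
ggplotly(p, height = 500)
```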
Here it is. As you can see, we now get a better visualization. I am setting a height here, but by default plotly arranges everything: it has sensible default values and sets up the graph for me. We can see the plot and the different lines in their different colors. The thing you want to check when you do this kind of thing is that the color assignment is correct and that the grouping of the data is the one we set, so the ones go with the ones, the twos with the twos, and so on. As I told you, this is an interactive graph, so we can use the mouse to check the value at a specific location (you can do that with a non-interactive plot as well, but I would say it is less friendly), and we get the date, the value, the information on the axes, the id and so on. The other nice thing is that you can zoom: this is not a very advanced graph, we have just a few lines, but you can imagine that on a heavy graph with many lines, being able to zoom and pan around is very useful, and it is the kind of thing that makes your analysis look a little better, a little more professional. Maybe you did not know it, and I just wanted to share it with you. On top you can see different options, autoscale among them, and one of the good ones is that you can download the plot as a PNG: you click here and it triggers a download of the graph. I have done that, and this is now a PNG file in my virtual machine, a file that I can share with others, put into my reports, or whatever you do. So I am going to finish the exercise here, go back to my slides, and we will close the session.

To finish, I just wanted to share some conclusions with you. With the massive volume of data being produced by the Sentinel satellites, the challenge in remote sensing and Earth observation is no longer data availability but rather storage and processing capacity; on top of that, there is a knowledge gap: how to actually use the data. To address this, RUS Copernicus provides free cloud computing environments, virtual machines, where you will find a dedicated helpdesk with remote sensing experts to support your activities with Sentinel data and, as you already know, a dedicated training program. I know it has perhaps been a long webinar; we did not go through all the details of the R programming language, but I went into some detail on the script. For those of you who are beginners, I hope this motivates you to continue your work with R and keep learning this language, which has a lot of proven potential to help your data analysis. For intermediate and advanced users, I hope you have learned some new details today; and maybe you are familiar with R but not with Earth observation or with Sentinel in particular, so in general I hope you have learned something new. We are now going to move to the Q&A session; I will wrap up here and ask you for a few minutes so that I can read through your questions, and then we will be back live. Thank you very much again, I appreciate that you have been here today, and I hope to see you soon in another RUS Copernicus webinar. Bye.
Info
Channel: RUS Copernicus Training
Views: 3,741
Id: 1dAhrc-kw8o
Length: 94min 45sec (5685 seconds)
Published: Tue Nov 24 2020