RUS Webinar: Land Subsidence mapping with Sentinel-1 - HAZA03

Video Statistics and Information

Captions
Hello everybody, and thank you for joining this webinar. Today I will be guiding you through this session, in which we will use the RUS service to identify and map land subsidence in Mexico City using Copernicus Sentinel-1 data.

Let us begin with a quick look at the summary and the main steps of the webinar. During the webinar, which will last about one hour and thirty minutes, I will first describe the study area and the problem of land subsidence that exists there, and I will give you some insight into the background and the causes of the subsidence. Then I will continue with a short description of the Sentinel-1 data we use to monitor land subsidence and the interferometric method we apply in this case. Next I will introduce you to the RUS service, which provides the capabilities and support to help users with Earth observation applications such as land subsidence monitoring. After the theoretical part we will proceed with a practical exercise using the RUS platform, where I will show you how easily you can perform such monitoring with the RUS service, since it provides free access to Sentinel data, storage for a large amount of data, the provision of virtual machines and free use of open-source toolboxes. At the end there will be a question-and-answer session where you can pose your questions; however, I would kindly ask you to also submit your questions during the webinar, so that we can reply to as many as possible.

So let's focus on the study area, which is Mexico City. Mexico City has been sinking over the last century, and land subsidence is a severe problem there. The city is built on highly compressible clays, and because of intense groundwater extraction a total subsidence of more than nine metres has been observed during the last 100 years, resulting in damage to buildings, streets, sidewalks and stormwater drains. The collapse in the central region of the city reached ten metres by the end of the 20th century, while current subsidence rates lie between five and forty centimetres per year. Groundwater-related subsidence often causes major damage to urban areas in Mexico City: as you see here in the slide, the buildings interact with the settlement, causing cracking, tilting and other major damage, and in many places large sinkholes open up, as well as surface cavities.

Let's now continue with a brief presentation of the Sentinel-1 data we will use and the interferometric method we will apply later in the exercise to monitor and study land subsidence in Mexico City. The Copernicus Sentinel-1 mission, based on a constellation of two satellites, is used to track changes in land and to monitor ground movements. The revisit cycle of six days on a global scale provides a high level of service reliability, which is really important for Earth observation and risk management applications. Sentinel-1 data are acquired in three sub-swaths using the Terrain Observation with Progressive Scans (TOPS) SAR imaging technique, as you see here in this video. Sentinel-1 operates in four exclusive acquisition modes: Stripmap, Interferometric Wide swath, Extra Wide swath and Wave mode. For each mode the SAR products are: raw data; Single Look Complex (SLC) data, comprising complex imagery with amplitude and phase; Ground Range Detected (GRD) data with multi-looked intensity; and Level-2 Ocean data with retrieved geophysical parameters of the ocean. Interferometric Wide swath is Sentinel-1's primary operational mode over land, and for interferometric techniques we use the SLC data, exploiting the complex imagery with both amplitude and phase.
Let's explain a bit more about the Interferometric Wide (IW) swath mode, which is the main acquisition mode over land and satisfies the majority of service requirements. It acquires data over a 250-kilometre swath at 5 by 20 metre spatial resolution, and it captures three sub-swaths using the TOPS SAR imaging technique, as you see here in the slide: IW1, IW2 and IW3. The Interferometric Wide SLC products contain one image per sub-swath and per polarisation channel, so for a single polarisation we have a total of three images in an IW product, or six if we have dual polarisation. Each sub-swath image consists of a series of bursts, shown here with the red rectangle, where each burst has been processed as a separate SLC image; if you look here at sub-swath IW3, we have nine bursts, included in azimuth time order, with black-fill demarcation in between.

Today we will use Sentinel-1 data to apply SAR interferometry. So what is SAR interferometry? The InSAR technique is used to detect and monitor surface deformation phenomena. A SAR signal contains amplitude and phase information: amplitude is the strength of the radar response, and phase is the fraction of one complete sine-wave cycle; in other words, the phase of the SAR image is determined primarily by the distance between the satellite antenna and the ground targets. What interferometric SAR exploits is the phase difference between two complex radar observations of the same area, taken from slightly different sensor positions, and from this it extracts distance information about the Earth's terrain. By combining the phase of the two images we produce an interferogram, where the phase is correlated with the terrain topography and deformation. If the phase contribution related to topography is removed from the interferogram, the resulting product shows the surface deformation patterns that occurred between the two acquisition dates; this methodology is called differential interferometry, and what we obtain in this case is a differential interferogram of the ground deformation.

Here are some results of InSAR monitoring in Mexico City with different sensors and different monitoring periods. In the middle we see the SAR interferometric subsidence map for 1996, derived from ERS data, with subsidence rates of about 5 centimetres per year. To the right, the interferogram shows the surface deformation for 2013, derived from Radarsat-2, with rates of about 20 centimetres per year, shown in red. And to the left, the results present the ground deformation derived from Sentinel-1 data for the period from 2014; there you can see in red some areas of the city subsiding at rates of up to 2.5 centimetres per month.
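To make the differential-interferometry idea a bit more concrete, here is a compact, textbook-style summary of the relations involved (this is general background rather than material shown in the webinar, and sign conventions vary between references):

```latex
% Interferometric phase between two SAR acquisitions (standard decomposition), and the
% conversion of the unwrapped differential phase to line-of-sight displacement.
% Differential InSAR removes the flat-earth and topographic terms using the orbits and a DEM.
\[
\Delta\phi \;=\; \phi_{\mathrm{flat}} + \phi_{\mathrm{topo}} + \phi_{\mathrm{defo}} + \phi_{\mathrm{atm}} + \phi_{\mathrm{noise}},
\qquad
d_{\mathrm{LOS}} \;\approx\; -\,\frac{\lambda}{4\pi}\,\Delta\phi_{\mathrm{unw}}
\]
% with \lambda \approx 5.5\,\mathrm{cm} for the C-band Sentinel-1 sensor.
```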
So what is RUS and what does it offer? RUS stands for Research and User Support for Sentinel core products. It is a service funded by the European Commission, managed by ESA and operated by CS SI and its partners. RUS is freely available to everyone, from first-time data users, the general public and students to specialist users such as researchers, scientists and public authorities. What RUS offers is a team of experts on Earth observation applications and Sentinel sensors to help users exploit Sentinel data resources: it provides experts supporting users in the use of virtual machines with processing facilities, and experts to help integrate your own algorithms on the existing virtual machines. Furthermore, RUS offers a scalable platform in a powerful computing environment, pre-installed virtual machines and free open-source toolboxes. It also provides an adaptable working environment, which allows users to develop and prototype their own algorithms and tools. Regular training sessions are organised for all kinds of users, with online tutorials and webinars accessible to everyone, face-to-face training sessions on handling and processing data, and train-the-trainers events for future trainers.

So how do you find and access the RUS service? For cloud resources, tools, data and support you can visit the RUS Copernicus website, where you can register for the RUS service and request a virtual machine or additional support. If you are interested in training, you can visit the RUS Copernicus Training website, where you can find information on upcoming training sessions, either face-to-face or webinars, and learn how to download, process, analyse and visualise Sentinel data for a variety of applications in different thematic areas. You can also visit us on YouTube, where you will find dedicated videos such as how to download Sentinel data, how to register for the RUS service or how to request a virtual machine. So please feel free to visit us.

Now we proceed with the practical exercise. In this case our processing will be divided into four parts, since I want to show you some basic steps of the processing and also make you more familiar with the intermediate products of each step. After we have downloaded our Sentinel data and imported them into SNAP, we begin with the first processing part, the InSAR analysis, where we obtain the phase measurements between the two images; the final product here will be the interferogram, the phase difference between the two images, which contains both topography and deformation, and here we can also obtain a coherence image estimation. Then we continue with the second processing part, differential interferometry and the study of ground deformation; the output product here will be the differential interferogram, which contains only the deformation, after we have removed the topography from the interferogram. The third processing part includes the phase unwrapping, in order to obtain the displacement measurements, and then there will be a geocoding of the product, in order to finally produce the terrain-corrected displacement map. Our input dataset for this exercise consists of two images from 2016, the first from June and the second from September, so they have a time difference of three months.

To proceed, we access the RUS platform and log in in the upper right corner; here you can also request a new user service or chat with the support desk. We access the virtual machine, and here we are in the RUS Linux environment with some pre-installed software. We open the SNAP toolbox, which is the software for processing Sentinel data, and the first thing we have to do is open our SAR images, which have been added to the Product Explorer pane. If I expand my first image, you see here that we have the metadata of the image, with information about the processing time, the orbits, the incidence angles and anything else you may need, and if we expand the Bands folder we can see three images, one for each sub-swath, IW1, IW2 and IW3, in the VV polarisation channel.
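Everything we do by clicking in SNAP can also be scripted, since the same operators are exposed through SNAP's Python interface (snappy) and the gpt command-line tool. As a minimal, hedged sketch (the file name is a placeholder, not the actual scene name used in the webinar), opening a product and listing its bands looks roughly like this:

```python
# Minimal snappy sketch: open a Sentinel-1 SLC product and list its bands.
# The file name is a placeholder; point it at your own downloaded scene (zip or .SAFE/manifest.safe).
from snappy import ProductIO

product = ProductIO.readProduct('S1A_IW_SLC__1SDV_20160608.zip')
print('Opened:', product.getName())
for band in product.getBandNames():
    # expect i/q bands per sub-swath (IW1-IW3, VV) plus the virtual Intensity bands
    print('  band:', band)
```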
The Intensity image here is a virtual band, which means it is not stored but calculated on the fly; what we actually have in the product are the bands i and q, which correspond to the real and imaginary parts of the complex data. So the i and q bands are the bands that are actually stored, and the virtual Intensity band is there to assist you in working with complex data.

If we open one image (you can open it by right-clicking and choosing Open Image Window, or you can simply double-click it), the image opens in the view window here on the right, and as you can see it consists of a number of bursts: counting from top to bottom we get one, two, three, four, five, six, seven, eight bursts within this image. Mexico City is located around here, almost in white, and it is contained between bursts 3 and 5. So the first thing we have to do is split our images and work only with the three bursts that contain Mexico City; in this way we reduce the processing time in the following steps, and such a process is usually recommended when the analysis is focused on a specific area rather than the complete scene.

We go to Radar > Sentinel-1 TOPS > S-1 TOPS Split, and here we have to set some parameters. The first one is to select the input image, so we select our first image, from June; then we set the name of the output product (I use the default name the software produces, and you can see that the software adds a suffix for the process we perform); and then we define the output directory. In the Processing Parameters tab we have to select the sub-swath in which Mexico City is included, so we select IW3, we select the polarisation, and then we define the bursts that contain Mexico City, dragging the arrows from 3 to 5; you can see it here as we drag the cursors, and if you zoom in right here you can see, with the red rectangle, the extent of the SAR image and, in white, the three selected bursts. If we run the TOPS Split operator we obtain the split product, and if we open it and expand the bands we see that we now have only the IW3 sub-swath image; here is our final split product with the three bursts, and if we zoom in you can also see the black-fill demarcation between the bursts.

Next we apply the same split process to the second image; I have already prepared it. As you can see here, an output product of SNAP consists of two parts, a data folder and a .dim file, and what we have to open in SNAP is the .dim file. So here is our second image; let's check that everything is all right. Also, a tip: if you want to synchronize your views and see them together, you can go to Window > Tile Horizontally, then go to the Navigation panel and press the two bottom buttons here, so your views are synchronized.
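The TOPS Split step we just performed in the GUI can be scripted in the same way. This is a hedged sketch: the operator name matches the gpt operator, but the parameter names and file paths are assumptions you should verify on your own SNAP installation (for example with `gpt TOPSAR-Split -h`):

```python
# Hedged snappy sketch of the S-1 TOPS Split step: keep only sub-swath IW3,
# polarisation VV and bursts 3-5, which cover Mexico City.
import snappy
from snappy import ProductIO, GPF

HashMap = snappy.jpy.get_type('java.util.HashMap')

src = ProductIO.readProduct('S1A_IW_SLC__1SDV_20160608.zip')   # placeholder scene name

params = HashMap()
params.put('subswath', 'IW3')
params.put('selectedPolarisations', 'VV')
params.put('firstBurstIndex', '3')
params.put('lastBurstIndex', '5')

split = GPF.createProduct('TOPSAR-Split', params, src)
ProductIO.writeProduct(split, '/shared/output/S1A_20160608_IW3_split', 'BEAM-DIMAP')  # placeholder path
```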
After the split pre-processing step we continue with the main processing. Of course you can follow a manual step-by-step process, but when working with a large amount of data it is better to proceed with automatic processing of the images, so in this case we use the Graph Builder tool, which is right here. You can see that it has a top and a bottom panel: in the top panel we design the graph of the processing chain we want to follow, adding an operator for each processing step, and in the bottom panel we set the parameters for each operator we add.

Now we are ready to perform the first step of our processing, the InSAR analysis, in order to produce the interferogram and the coherence map. For this we design a graph by adding the operators for each processing step, and we begin with the Read operator. As you see, the two default operators are Read and Write: Read for the input images and Write for the output products. Since we have two images, we have to add a second Read operator; to do this we right-click, go to Input/Output and select Read. The next step is to apply the orbit files to the Sentinel-1 products, in order to provide accurate satellite position and velocity information, so we go to Radar > Apply Orbit File; we do the same for the second image, again Radar > Apply Orbit File, and then we connect the operators by dragging the red arrow from each Read to the corresponding Apply Orbit File.

The next step is to coregister the two Sentinel-1 images. Coregistration ensures that each ground target contributes to the same pixel, in range and azimuth, in both the master and the slave image; for this reason the second image, the slave, will be coregistered with respect to the first image, the master. For this process we use the Back Geocoding operator, which coregisters split products of the same sub-swath using the orbits of the two products and a digital elevation model. We go to Add > Radar > Coregistration > S1 TOPS Coregistration > Back Geocoding, and then we connect the operators. Next we add the Enhanced Spectral Diversity operator, which follows Back Geocoding: the ESD approach exploits the data in the overlap area of adjacent bursts and then performs range and azimuth corrections for every burst, so it is a refinement step. We follow the same path, Radar > Coregistration > S1 TOPS Coregistration > Enhanced Spectral Diversity, and we connect it.

At this stage we produce the interferogram between the interferometric pair, meaning the master and the slave, and we can also obtain the coherence estimation from the stack of coregistered complex images. To add the Interferogram operator we go to Add > Radar > Interferometric > Products > Interferogram, and we connect it. Then we apply the TOPS Deburst operator: since our image consists of a series of bursts, we deburst the image in order to produce continuous coverage of the ground. To add the operator we go to Radar > Sentinel-1 TOPS > S-1 TOPS Deburst. That is our processing chain for the InSAR analysis, and finally we connect it to the Write operator for the output product, here on the right.

Now we set the parameters for each operator. In the first Read operator we select the master image, the split product of the first image from June; in the second Read operator we select the slave image, the second split product. In Apply Orbit File we select the precise orbits, which become available about 20 days after the acquisition, and here you can also select "Do not fail if new orbit file is not found"; we do the same for the second image. Then, in the Back Geocoding operator, we can select the digital elevation model; here we use the SRTM DEM and keep the default parameters for the interpolation method, and since areas outside the DEM or in the sea may optionally be masked out, it is better to select that option. Also, when Enhanced Spectral Diversity follows Back Geocoding, it is better to select "Output Deramp and Demod Phase" in order to improve the coregistration.
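The same chain we just assembled in the Graph Builder can also be written as a script. The sketch below mirrors the graph (Apply Orbit File on both images, Back Geocoding, Enhanced Spectral Diversity, Interferogram, TOPS Deburst); the operator names correspond to the gpt operators, while the parameter names, the coherence window values and the paths are assumptions to check with `gpt <Operator> -h` on your SNAP version:

```python
# Hedged snappy sketch of the InSAR analysis graph built above.
import snappy
from snappy import ProductIO, GPF

HashMap = snappy.jpy.get_type('java.util.HashMap')

def run_op(name, params, sources):
    """Small helper: wrap a Python dict into the Java HashMap expected by GPF.createProduct."""
    h = HashMap()
    for key, value in params.items():
        h.put(key, value)
    return GPF.createProduct(name, h, sources)

master = ProductIO.readProduct('/shared/output/S1A_20160608_IW3_split.dim')  # placeholder paths
slave  = ProductIO.readProduct('/shared/output/S1A_20160918_IW3_split.dim')

orbit_params = {'orbitType': 'Sentinel Precise (Auto Download)', 'continueOnFail': 'true'}
master = run_op('Apply-Orbit-File', orbit_params, master)
slave  = run_op('Apply-Orbit-File', orbit_params, slave)

# Coregistration of the split products using the orbits and a DEM, then the ESD refinement.
stack = run_op('Back-Geocoding',
               {'demName': 'SRTM 3Sec',
                'maskOutAreaWithoutElevation': 'true',
                'outputDerampDemodPhase': 'true'},
               [master, slave])
stack = run_op('Enhanced-Spectral-Diversity', {}, stack)

# Interferogram formation (flat-earth phase subtracted) with an enlarged coherence window,
# followed by debursting to obtain a continuous image. Window sizes here are illustrative.
ifg = run_op('Interferogram',
             {'subtractFlatEarthPhase': 'true', 'cohWinRg': '20', 'cohWinAz': '5'},
             stack)
deburst = run_op('TOPSAR-Deburst', {'selectedPolarisations': 'VV'}, ifg)

ProductIO.writeProduct(deburst, '/shared/output/ifg_deburst', 'BEAM-DIMAP')
```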
Then, for the Enhanced Spectral Diversity operator, it is better to leave the default parameters. In the Interferogram formation step we remove the flat-earth phase, which is the phase present in the interferometric signal due to the curvature of the reference surface; it is estimated using the orbital and metadata information and then subtracted from the complex interferogram. We can leave the remaining parameters as they are, but we can also change the coherence range and azimuth window sizes; the defaults are 10 by 2, but I will increase them (you see that whenever I change the range window size, the azimuth window changes automatically), and in this case we increase them in order to get a better estimation of the coherence. In the TOPS Deburst tab we select the VV polarisation channel, and finally in the Write tab we set the name of the output product; note also that new suffixes have been added for the operators we included, such as the orbit files, the coregistration, the interferogram and the debursting. Then we define the output directory, and after all the parameters are set we must save the graph; so let's save it and then run the processing chain. I won't run it now, as I have already prepared the output product; this process should take around 20 to 25 minutes, depending on your machine.

What we obtain is the debursted interferogram. If we expand the product here, we see two new bands: the interferometric phase band and the coherence band. If we open the phase band, we see our interferogram, which represents the phase difference between the two images, and you can also see that after the debursting we have a continuous image, which means that any subsetting of the image should be applied from this step onwards. We can also look at our coherence map, which is an indirect measure of the quality of the interferogram: the coherence shows how similar each pixel is between the slave and the master image, on a scale from 0 to 1, and areas of high coherence appear bright while areas of poor coherence appear dark (a standard form of the coherence estimator is given just after this part). In this image you see that the black areas, closer to 0, represent the vegetated areas, while the bright areas, closer to 1, correspond to buildings and the urban fabric.
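As mentioned above, here is a standard, textbook form of the sample coherence estimator, computed over the range-by-azimuth estimation window set in the Interferogram operator (SNAP's implementation may differ in detail, e.g. in window weighting):

```latex
% Sample coherence between the coregistered master (s_1) and slave (s_2) SLC pixels,
% estimated over the window W (range x azimuth) chosen in the Interferogram operator.
\[
\hat{\gamma} \;=\; \frac{\left|\sum_{n \in W} s_1(n)\, s_2^{*}(n)\right|}
{\sqrt{\sum_{n \in W} \left|s_1(n)\right|^{2}\;\sum_{n \in W} \left|s_2(n)\right|^{2}}},
\qquad 0 \le \hat{\gamma} \le 1
\]
```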
With this interferometric processing part we have produced the interferogram, which contains both topography and deformation, so now we continue with the second processing part, the differential interferometry. For this step we design a processing chain whose final output will be the differential interferogram, containing only the deformation. We open a new Graph Builder, and the first step is to remove the topography-induced phase from the debursted interferogram; to do this we go to Add > Radar > Interferometric > Products > Topo Phase Removal. Then, as the original SAR images contain inherent speckle noise, multilook processing is applied to reduce the speckled appearance and improve the image interpretability; to add the operator we go to Add > Radar > Multilooking. Next we perform phase filtering of the interferogram, in order to reduce the phase noise both for visualisation purposes and to aid the phase unwrapping that I will show you in the next step; so we go to Radar > Interferometric > Filtering > Goldstein Phase Filtering. We then connect the Write operator to save the output product, and the final step in this processing part is to export the data for SNAPHU processing, in order to apply the phase unwrapping. To export the data in a format compatible with SNAPHU we go to Add > Radar > Interferometric > Unwrapping > Snaphu Export, and we connect it.

Then we set the parameters for each operator. In the Read tab we select the debursted interferogram. In the Topo Phase Removal tab we use a digital elevation model to remove the topography from the interferogram; in this case we use the SRTM 3-second DEM. Here we also have the option to create some new bands, which of course depends on your needs; in this case we will create a band containing the topography, so we select "Output topographic phase band", just to show you the topographic phase that we remove. In the Multilook operator we can obtain a square ground pixel for the image, and we also have the option to change the number of range and azimuth looks; the defaults here are 4 and 1. By increasing the multilook factor we can smooth the phase, but since we get bigger pixel sizes we also get a worse spatial resolution. I will increase it; you can see that when we change the number of range looks, the number of azimuth looks changes automatically, and we end up with a mean square pixel of almost 27 metres. In the Goldstein Phase Filtering tab we use the default parameters proposed by the software, and then we save the output product: we set its name (here you can see the new suffixes added for each operator) and define the output directory.

Then we go to the Snaphu Export tab, where we set the target folder for the SNAPHU processing. For the statistical-cost mode we use DEFO, for deformation, and for the initial method we use the minimum cost flow (MCF) algorithm. Here we can also change the number of tile rows and tile columns: since unwrapping is a computationally demanding process, there is an option to process the scene in a number of patches. The default is 10 by 10, meaning 10 patches in range and 10 in azimuth, so we would end up with a hundred patches; this can introduce artifacts that may result in unwrapping errors between the patches, and to avoid that we set it to one by one, so the unwrapping is performed over the whole area at once. Note also that the number of tile rows and columns you can afford depends on the memory allocated to your machine. Then we save the graph and click Run; this process should take a couple of minutes, and I have already prepared the output product.

If we open it, this is the multilooked and filtered differential interferogram, with phase, topographic phase and coherence bands. If I open the topographic phase band, we see the topography that we removed from the debursted interferogram using the digital elevation model. Then we can look at the phase band: here is the differential interferogram in the form of fringes, and if we go to Colour Manipulation and look at the histogram, the values of the fringes vary between minus pi and pi. I suppose you have also noticed the different extent of the image: the multilooked interferogram has square pixels, so it is closer to map geometry and in this way it is easier to interpret.
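As before, the differential-interferometry graph can also be scripted. The sketch below covers Topo Phase Removal, Multilook and Goldstein Phase Filtering; the parameter names, look factors and paths are assumptions to verify against `gpt <Operator> -h`, and the Snaphu Export step (statistical-cost mode DEFO, MCF initialisation, one-by-one tiling) is easiest left to the Graph Builder or a gpt graph, as shown in the webinar.

```python
# Hedged snappy sketch of the DInSAR graph: remove the topographic phase with a DEM,
# multilook to an approximately square ground pixel, then filter the phase.
import snappy
from snappy import ProductIO, GPF

HashMap = snappy.jpy.get_type('java.util.HashMap')

def run_op(name, params, sources):
    h = HashMap()
    for key, value in params.items():
        h.put(key, value)
    return GPF.createProduct(name, h, sources)

ifg = ProductIO.readProduct('/shared/output/ifg_deburst.dim')   # placeholder path

dinsar = run_op('TopoPhaseRemoval',
                {'demName': 'SRTM 3Sec', 'outputTopoPhaseBand': 'true'},
                ifg)
multilooked = run_op('Multilook',
                     {'nRgLooks': '6', 'nAzLooks': '2'},   # illustrative looks; pick values giving roughly square pixels
                     dinsar)
filtered = run_op('GoldsteinPhaseFiltering', {}, multilooked)

ProductIO.writeProduct(filtered, '/shared/output/ifg_deburst_dinsar_ml_flt', 'BEAM-DIMAP')
# The SNAPHU export itself (target folder, statCostMode=DEFO, initMethod=MCF, 1x1 tiles)
# is then run from the Graph Builder / gpt, as demonstrated in the webinar.
```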
What we have here is the phase that corresponds to deformation: where the fringes are denser, the deformation is larger. If we also open the coherence band, which is now closer to map geometry, you can see the areas of low coherence, in black, and the bright areas of good coherence, which correspond to the urban area of Mexico City. If we synchronize our views, we can see that in the areas of low coherence, which correspond to vegetation, the phase measurements are not accurate, so the results can be trusted only where the coherence values are higher; here in the middle, and here on the right side, we can also see some areas with inaccurate phase measurements.

Now we proceed with the third processing part of the exercise, the phase unwrapping and the displacement measurements. If we open the snaphu folder where the data were exported for SNAPHU processing, we can see several files, such as the phase and the coherence; we also have a header file for the unwrapped phase, which we will later use to import the data back into SNAP, and a SNAPHU configuration file, in which all the processing commands and parameters are stored, and which we can open. When SNAPHU tries to create a log file during processing an error occurs, so we have to comment out the log file line so that the log file is not generated: we go here and place a comment character in front of the LOGFILE line. To continue with the phase unwrapping we open a terminal window and navigate to the SNAPHU export path; from the configuration file, right here, we copy the command that calls SNAPHU, paste it into the terminal and execute it. I won't run this process now because it takes around eight minutes. The results are stored in the same path, and here you can see the final result of the unwrapping, the unwrapped image.

The next step is to import the data back into SNAP. We open a new Graph Builder and I will load an already defined graph; don't worry, I will explain each operator, and you can find more information in the tutorial you will receive if you want to repeat the exercise. Here we have two Read operators: the first Read is for the wrapped image and the second for the unwrapped image. In the first input we select the wrapped phase of the differential interferogram, and in the second Read tab we select the unwrapped data, so we go to the snaphu folder and select the header file of the unwrapped image. Then we add the Snaphu Import operator, which imports the data from SNAPHU; here you can also check "Do not save wrapped interferogram in the target product", since we have now performed the phase unwrapping. We can then continue by converting the phase to displacement and producing our displacement map: we add the Phase to Displacement operator, and here you see that there are no parameters to be defined; the phase along the line of sight is converted to metres. Finally we save our output product, the displacement map: we set the name of the output product (note the new suffixes for unwrapping and displacement), we define the output directory and we save the graph; then we click Run. The output product, which I will show you right now, is the displacement map, and if we expand the bands we see that we have only one band, the displacement.
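After SNAPHU has produced the unwrapped phase, the import and phase-to-displacement steps can likewise be scripted. This is a hedged sketch: the operator names correspond to the gpt operators used by the graph we loaded, but the file names are placeholders and the exact option names should be checked on your installation.

```python
# Hedged snappy sketch: import the SNAPHU result back into SNAP and convert the
# unwrapped line-of-sight phase to displacement in metres.
import snappy
from snappy import ProductIO, GPF

HashMap = snappy.jpy.get_type('java.util.HashMap')

def run_op(name, params, sources):
    h = HashMap()
    for key, value in params.items():
        h.put(key, value)
    return GPF.createProduct(name, h, sources)

wrapped   = ProductIO.readProduct('/shared/output/ifg_deburst_dinsar_ml_flt.dim')   # wrapped differential interferogram
unwrapped = ProductIO.readProduct('/shared/output/snaphu/UnwPhase_ifg.snaphu.hdr')  # header written by SNAPHU (placeholder name)

# The "do not save wrapped interferogram" tick-box in the GUI maps to a SnaphuImport
# parameter; it is left at its default here - check 'gpt SnaphuImport -h' for its exact name.
imported     = run_op('SnaphuImport', {}, [wrapped, unwrapped])
displacement = run_op('PhaseToDisplacement', {}, imported)

ProductIO.writeProduct(displacement, '/shared/output/displacement', 'BEAM-DIMAP')
```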
So here is our displacement map. Now we can work a little with the colours: you can go to the Colour Manipulation tab and, from the Basic editor, choose another colour palette; from the Table view you can change the colours and the values; or you can go to Sliders, where I can show you how to create a colour palette with the colours we want. We first remove some sliders, then we give blue to the uplift, the positive values, red to the negative values, and white to the zero value; then we can change the deformation values here, for example to 0.1 and minus 0.1 for the subsidence, and distribute the sliders evenly between the highest and the lowest value. Here is our final displacement map: if we go to the Pixel Info tab we can see that the red areas have negative values, so they represent subsidence, while the blue areas, with positive values, represent uplift; you can see the displacement values, in metres, right here.

Since our product is still in SAR geometry, we have to project the data from SAR geometry to the selected map geometry. In this case we go to Radar > Geometric > Terrain Correction > Range Doppler Terrain Correction. In the I/O Parameters tab we select the displacement product as the input image, then we set the name of the output product (note the new suffix for terrain correction) and define the output directory. In the Processing Parameters tab we can leave the defaults proposed by the software; however, I will change the pixel spacing to 100 metres in order to have a faster processing run, which I will show you right now. Since this procedure involves a DEM, the final results are also orthorectified. If we run the process, we obtain our final product, the orthorectified image of the displacement measurements. We can change the colour palette and import the predefined palette we created before (after you have created a colour palette you can save it right here and then import it again from this button). So here is our final orthorectified image, showing the displacement patterns in Mexico City.
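The geocoding step can be scripted as well. This sketch applies Range Doppler Terrain Correction to the displacement product with the coarse 100 m pixel spacing used in the demonstration; the parameter names and paths are again assumptions to verify with `gpt Terrain-Correction -h`.

```python
# Hedged snappy sketch: project the displacement map from SAR geometry to map geometry.
import snappy
from snappy import ProductIO, GPF

HashMap = snappy.jpy.get_type('java.util.HashMap')

params = HashMap()
params.put('demName', 'SRTM 3Sec')
params.put('pixelSpacingInMeter', '100.0')   # coarse spacing for a faster demo run

displacement = ProductIO.readProduct('/shared/output/displacement.dim')   # placeholder path
terrain_corrected = GPF.createProduct('Terrain-Correction', params, displacement)

ProductIO.writeProduct(terrain_corrected, '/shared/output/displacement_TC', 'BEAM-DIMAP')
```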
Of course, after that we could proceed with some post-processing of the image; such a step is not included in this exercise, but I can show you the result after removing the low-coherence areas from the product. Here is the final result, keeping only the accurate phase measurements: in this case our results are more reliable and accurate, and we can interpret the deformation patterns in a more robust manner.

To summarize: in this exercise we focused on the interferometric processing of Sentinel-1 TOPS data; we followed a straightforward processing approach that has been extensively tested on different sites, and our goal was to obtain the displacement patterns in Mexico City. The final results can of course be enhanced by post-processing, for example by defining reference areas or masking incoherent values, just like the result I showed you here, and we aim to demonstrate these steps in a follow-up webinar. If you want to repeat the exercise, you can log in to the RUS platform, request a new user service and then add the code for this exercise. Thank you for joining, and note that we remain available to reply to your questions; you can contact us at any time. Thank you for joining this webinar.
Info
Channel: RUS Copernicus Training
Views: 34,562
Rating: 4.9792747 out of 5
Id: w6ilV74r2RQ
Length: 54min 57sec (3297 seconds)
Published: Fri Jun 15 2018