Hello everyone, welcome to the 12th lecture on Introduction to GIS. In this lecture we are going to discuss a technique called interpolation. Since we work with spatial data, it is also called spatial interpolation. There is a variety of interpolation techniques available, depending on your requirements, but there are a few things I would like to mention first. Generally, interpolation is done to go from discrete data to continuous data; that is, from vector to raster.
In general, the input data can be either point data or even line data. You know that a line is made of several nodes, and these nodes can be treated as points; so sometimes your input might be contours, and these contours can be converted into points using the tools available in your GIS software, and then you go from points to a surface, that is, from discrete to continuous. Another important thing here is why interpolation is required at all. This is because your point representation, or contour representation as mentioned, is a discrete representation. Between two contours you do not have information; there is a gap in the information. Suppose one contour is at 100 metres and another at 200 metres: we do not have any information between these two contours. The same holds for point data, and that is why we call them discrete; between two points we have no information, so if the attribute is height, we do not have the height at locations where no observations are available, and we have no height information between two contour lines. So, if we want to create a surface that represents the terrain, in the case of elevation, we go for interpolation.
Now, there is an assumption, or a belief, one has to accept before going for interpolation: where no observation is available, whatever value we derive mathematically has to be assumed to be reasonably accurate. For example, take a simple case: say one point is at 100 metres and another at 200 metres, and I want to know the height of a location between them. A very simple way of thinking is to connect the two linearly and say that if the location is at the centre, its height may be 150 metres. This is called the linear method of interpolation (a small sketch of this idea follows below), but this is in 2D and we need it in 3D; that means we are going to create a surface. All these details we will see step by step: what the different methods are, and what the advantages and disadvantages of the different methods are.
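As a quick illustration of that linear idea, here is a minimal sketch in Python; the function name and the positions used are only for illustration, not from the lecture:

```python
def linear_interpolate(x0, z0, x1, z1, x):
    """Estimate the value z at position x, assuming z changes linearly
    between the two known observations (x0, z0) and (x1, z1)."""
    t = (x - x0) / (x1 - x0)          # fractional distance along the segment
    return z0 + t * (z1 - z0)

# Halfway between a 100 m contour and a 200 m contour the estimate is 150 m.
print(linear_interpolate(0.0, 100.0, 1.0, 200.0, 0.5))   # -> 150.0
```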
And we will also try to think about which interpolation will suit a given data set; that can also be considered here. So, what is spatial interpolation, basically?
It turns raw or discrete data into continuous data, into useful information. It adds informative content and value, because it is very difficult to visualise a terrain using just point data or contour data, but if you have a surface representing the same area, then the terrain can be understood very well. The best examples of such terrain surfaces are the digital elevation models we use nowadays; many of them have been derived from satellite data or through interpolation of contour data and spot heights. Once we convert discrete data into a continuous surface, or points into a surface, it can also reveal patterns, trends and anomalies, maybe higher locations, maybe lower locations, and so on. It is not necessary that the input be elevation values only. It can be any value: suppose somebody is working on groundwater, it can be the pH value, or groundwater quality values such as cations, anions and total dissolved solids. So, for subsurface conditions interpolation can be done if the data are available, and for surface conditions it can be done as well. It also provides a check on human intuition: based on certain mathematical concepts it will predict a value for an unknown location, a location for which you do not have any information, and it helps in situations where we cannot easily judge, because a mathematical surface can be drawn which gives a near-true representation of the value of the phenomenon, be it elevation or some other quantity or quality.
Now, if we go to the definition of spatial interpolation, it is the procedure of estimating the value of properties, which can be elevation, chemical properties, groundwater level or water table and so on, at unsampled sites for which we do not have any observation or data, within the area covered by the existing observations. Beyond the observations we call it extrapolation; within the observations it is called interpolation. So, if you have data distributed like this and do the interpolation for this area, this part we call interpolation and this part we call extrapolation, because beyond these points on the right side we do not have any observations. The tools available in GIS can be used when you take point data to create a surface, and interpolation and extrapolation are done simultaneously.
Interpolation predicts a value for each cell in a raster from a limited number of sample data points, and that is also the advantage of interpolation: you might have, say, a hundred observations for an area, and you can create a surface which will represent it and give you a lot of information that is otherwise very difficult, or nearly impossible, to extract from the point data alone. It can be used to predict unknown values for any geographic point data, such as elevation, rainfall, chemical concentrations, noise levels and so on. And in almost all cases the attribute must be interval or ratio. Since we have discussed the different types of attributes, remember that for cyclic data interpolation cannot be done, for counts and amounts interpolation cannot be done, and for nominal data interpolation cannot be done. So, you need to have either interval data or ratio data to do the interpolation.
The example shown here has point data as input, and their values are written here; assume these are elevation values. When we decide the size of the grid, or of the surface we intend to create, the original values are kept intact in the cells where they fall, like the 20 and 24 here, while the remaining values have been predicted. So we may consider this a sort of exact interpolation; we will see different types of interpolation techniques. Thus we convert from discrete data here to a continuous surface, a raster, through the interpolation. It is also shown here that you have a lot of points, they all have values in your attribute table, you choose the particular field containing the values for which you want to create a surface, and then, depending on the method you choose, a surface will be created accordingly. Here it is shown in a continuous fashion, and some colours have been assigned which represent ranges of values; if these are elevation values, the ranges of elevation are represented through the colours. And if it were an elevation model, we could call it a relief map.
Interpolation can be thought of as the reverse of the process used to select the few points from a DEM which accurately represent the surface. The rationale behind interpolation is that where we do not have observations, we want to know, or predict, the value, and therefore we go for interpolation. This concept is based on Tobler's Law of Geography; the neighbourhood rule comes in again here: for the unknown point whose value you want, the closest observation will have the maximum influence. If I take the example here, the prediction of the value at this location will be influenced most by this nearby value rather than by that one; and if the location were in the middle, both would have the same influence. So, this is based on the concept of Tobler's Law of Geography. Interpolation may be used in GIS to calculate some property of a surface at a given point, maybe elevation, maybe groundwater or water table or water quality, to provide contours for displaying data graphically, and it is frequently used in spatial decision-making processes such as terrain analysis, hydrology, mineral prospecting, hydrocarbon exploration, etcetera.
Because when you use point data to create a surface, say the point data is elevation data and you have created a surface which you will use as a digital elevation model, there are various derivatives of that digital model that can come out of it: slope, aspect, maybe a gradient map; using surface hydrologic modelling in GIS you can create the drainage network, you can create watershed boundaries, you can create indices such as the stream power index, an erosion index, and many, many other things (a small slope example is sketched below). This is the terrain analysis you can perform once the data is in surface or raster form; you can also use it in hydrology or in groundwater hydrology, or derive various outputs useful for hydrology; you can do cut-and-fill analysis, dam simulation, reservoir simulation, and many other things, once the data is in raster or surface form, derived through interpolation from point data.
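As a small, hypothetical illustration of one such derivative, the sketch below computes a slope map from a DEM grid with NumPy using simple central differences; the tiny test array and the 30 m cell size are invented, and real GIS tools typically use more elaborate neighbourhood formulas (for example Horn's method):

```python
import numpy as np

def slope_degrees(dem, cell_size):
    """Approximate slope (in degrees) of a DEM stored as a 2-D array,
    using central differences; rows run along y, columns along x."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)        # rise over run in both directions
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

dem = np.array([[100.0, 110.0, 120.0],
                [100.0, 112.0, 124.0],
                [100.0, 114.0, 128.0]])
print(slope_degrees(dem, cell_size=30.0))             # slope map, same shape as the DEM
```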
Now, as we know, in these raster surfaces each cell has a value, and the cells are of equal size. As you also know, the terrain conditions differ for different situations: for Himalayan terrain the conditions are different, so the interpolation technique might also be different; for point data from the Indo-Gangetic plain, again, the interpolation might be different. So this matters very much: before we go for interpolation we must understand the phenomenon, and local information, if available for that area, will definitely help us choose the right interpolation technique. Then, in the grid representation, because we are representing the data as a raster, and that too a uniform grid rather than an image, you will have your x, y locations and a z value, which is the single attribute stored in the raster; that z value can be your elevation value, or a concentration value, depth value, height value, etcetera. These functional surfaces are continuous, because a raster is continuous, not discrete (a minimal sketch of this x, y, z idea follows below).
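A minimal sketch of that x, y, z grid idea, assuming a raster is described only by an origin, a cell size and a 2-D array of z values; all numbers here are made up:

```python
import numpy as np

# A toy raster: upper-left corner (origin), square cells, one z value per cell.
origin_x, origin_y = 500000.0, 4200000.0   # map coordinates of the upper-left corner
cell = 30.0                                 # cell size in map units
z = np.array([[120.5, 121.0, 122.3],
              [119.8, 120.6, 121.9],
              [119.1, 120.0, 121.2]])       # the single z attribute (e.g. elevation)

row, col = 1, 2                             # pick one cell
x = origin_x + (col + 0.5) * cell           # x of the cell centre
y = origin_y - (row + 0.5) * cell           # y of the cell centre (rows go downward)
print(x, y, z[row, col])                    # -> the (x, y, z) triplet the grid stores
```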
Now, these functional surfaces can be used to represent terrestrial surfaces, terrain which depicts the earth's surface conditions; we can also use them for statistical or mathematical representations, so we call them statistical surfaces, describing for example demography, or mathematical surfaces based on arithmetic expressions. This surface representation, in its simplest form, is done by storing an x, y and z value at the defined location of each sample. Later on you can also create contour lines, or isolines, but that is a sort of backward process, because you are again going from continuous to discrete. Sometimes we have to do it, because many people understand contour data better than a surface and would prefer contours, and it is very easy to create contours at the desired levels from a surface in any standard GIS software. As you know, contours typically join points of equal value, or equal height if they are topographic contours.
In the case of contour lines representing elevation, a contour is a line drawn on a map that connects points of equal elevation above the datum, as we know. The TIN is also sometimes considered a form of interpolation, because the input is points and then you create triangles; it is a different way of representing the surface, and since we have already discussed TINs we move on from here. Grid surfaces, like digital elevation models, are generally in the form of a grid, and we know that each cell represents a value. Some examples are shown here: in discrete form your point data is here; we can also represent it as contours, the same point data can be represented in the form of TINs, and the same point data can also be used, through interpolation, to create surface raster grids.
At this point we have already discussed why we interpolate: because we do not have information everywhere we want it, all the time, and therefore we interpolate. Visiting every location in a study area to measure the height, magnitude or concentration of a phenomenon is usually difficult or expensive. Instead, strategically dispersed sample locations can be selected, and a predicted value can be assigned to all other locations. The input points can be either randomly or regularly spaced points carrying height, concentration or magnitude measurements. It is not necessary that the input points be distributed uniformly, not at all: wherever observations are available, surfaces can be created from those points through interpolation.
As I said, the major assumption, or belief, that makes interpolation a viable option is that spatially distributed objects are spatially correlated; in other words, things that are close together tend to have similar characteristics. This is Tobler's Law of Geography. For instance, if it is raining on one side of the street, one can predict with a high level of confidence that it is raining on the other side of the street. One would be less certain if it were raining across town, and less confident still about the state of the weather in the next country, because the closer the point, the higher the confidence level; it is easier to predict with high confidence for a nearby point than for a point which is very far away, where prediction becomes very difficult. The same concept has been employed in interpolation, and by the same logic it is easy to see that the values of points close to sample points are more likely to be similar than those that are farther apart. This is basically the basis of interpolation. A typical use of point interpolation is to create a continuous surface from discrete data. There are various methods available for interpolation, which we will go through first in a sort of generic form, and then as the individual interpolation techniques which have been implemented in standard GIS software, so we will see those as well.
The first distinction is global versus local, another is exact versus approximate, then stochastic versus deterministic, and then abrupt versus smooth. Let us take global versus local. Global interpolators determine a single function which is mapped across the whole region; for example, a trend surface, where one function is calculated and then applied throughout the data set (a small trend-surface sketch follows below). Local interpolators apply an algorithm repeatedly to small portions of the total set of points; an example is the inverse distance weighted method. These interpolation techniques have been implemented in standard GIS software, so they can be used readily.
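As a hedged sketch of a global interpolator, the code below fits a first-order trend surface z = a + b·x + c·y to all the sample points by least squares and evaluates it anywhere; the coordinates and values are invented for illustration:

```python
import numpy as np

def fit_trend_surface(x, y, z):
    """Fit a first-order (planar) trend surface z = a + b*x + c*y by least squares."""
    A = np.column_stack([np.ones_like(x), x, y])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs                                   # [a, b, c]

x = np.array([0.0, 10.0, 0.0, 10.0, 5.0])
y = np.array([0.0, 0.0, 10.0, 10.0, 5.0])
z = np.array([100.0, 110.0, 105.0, 118.0, 108.0])   # e.g. elevations

a, b, c = fit_trend_surface(x, y, z)
print(a + b * 5.0 + c * 5.0)                        # trend estimate at (5, 5)
```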
Now, exact versus approximate: exact interpolators honour the input data points, which does not mean that the surface is exact; it means that while the surface is being created, the input values are kept intact as far as possible, though not always exactly. Approximate interpolators allow for uncertainty in the input data points, which allows for smoothing. So, when we go for approximate interpolators, we are moving more towards a non-linear, smoothing interpolator: the original values are not kept in the raster data, and there may be quite some deviation, but the surface created through this type of interpolation is going to be quite smooth. If we go for an exact interpolator, the surface it creates is not going to be as smooth. So it depends on the phenomenon. If I know that I am going to interpolate for a terrain like the Indo-Gangetic plain, which we know is very smooth, then the approximate interpolators would be more appropriate than the exact ones. But if, say, I am going to do the interpolation for a terrain like the Himalayas, which is highly rugged, then the smooth or approximate interpolators are not suitable and I would prefer an exact interpolator. That is why it has been mentioned that we must know the phenomenon: which phenomenon, which type of data is going to come for interpolation; if it is elevation data, what the terrain conditions are. If prior information is there, it will allow us to choose an appropriate interpolation technique. The next one is stochastic versus deterministic.
Stochastic methods incorporate the concept of randomness, similar to a linear regression model or a surface of best fit, whereas deterministic methods do not use probability theory. So, these are the generic forms of interpolators, about eight types arranged in these contrasting pairs.
Now, in the case of abrupt versus smooth: sometimes in nature there are certain barriers, and you want to keep the information about those barriers and their influence while creating a surface from point data. Such barriers can also be used in the interpolation, and in that case you would prefer abrupt interpolators rather than smooth interpolators. For example, suppose I am working on groundwater and I know there are, say, some dykes or quarries; these will act as barriers which do not allow water to flow across them, so the water might not flow smoothly. If I ignore them and create a surface of the water table, it is going to be a smooth surface, which will be an inaccurate representation of the true water table. Therefore, I need to incorporate such barriers while creating the surface, so that it is closer to the real thing, and that is why these abrupt inputs are allowed in different interpolation techniques; we will discuss this a little later. Abrupt interpolators allow for barriers, for example faults, fronts, dykes, reefs, etcetera, while smooth interpolators produce a smooth surface. So, depending on your requirements, depending on the phenomenon, depending on the terrain conditions, one would choose the interpolation.
Now, these same interpolation techniques can be divided in a different way, into two categories based on the mathematical function: one is linear interpolation, as in the example I gave earlier where the value is easy to predict, and for certain phenomena linear interpolation works very well; otherwise you can go for non-linear interpolation. In linear interpolation, the value at an unknown location can be predicted easily: if the location is closer to the 130 observation than to the 140 one, the 130 will have more influence, and the value is easy to predict. So, linear prediction is much simpler; the surface changes in a linear fashion and can be described by a simple mathematical function. Next, point-based interpolation: in most of the interpolation tools available in GIS the input is, most of the time, point data. And if you have, say, contour data and you want to create a surface, no problem: you can convert the lines to points and then the points can go for interpolation. That is why it is also called point-based interpolation, used for data that are collected at point locations, for example rainfall or other weather readings, maybe spot heights, maybe soil properties, maybe groundwater levels, water quality and other such things.
water quality and other things. Let us take a one example of exact method
for point based interpolation which is also known as the Thiessen polygon. Then we did
not have this GIS tools. In hydrology lot of people use to have the input as a rainfall
surfaces they used to go for Thiessen polygon. So, what it is done that along around this
is observation points the polygons are created and their perpendicular distance is kept like
this and then lines are drawn and then these polygons are created. So, this is a proximal
which is a Thiessen polygon is also called local, exact, abrupt, deterministic all four
things are there and the data can be input data or ratio data. This kind of non-linear interpolations which
Now, the non-linear interpolations based on Tobler's Law of Geography: points close together in space are more likely to have similar values than points farther apart. A distance-weighted interpolator, of which IDW is the example, works exactly this way: the closer an existing observation is, the more influence it has; the farther it is, the less influence it has; and because an inverse relation with distance is used, it is called the inverse distance weighted method. So, the IDW interpolator estimates the cell values of your raster or surface by averaging the values of the sample data points in the neighbourhood of each processing cell. If several observations are available around a location with an unknown value, the distances are measured, and the observation points at the closest distances get more weight, more influence, in determining the cell value for that location. The closer a point is to the centre of the cell being estimated, the more influence or weight it has in the averaging process. Recall the resampling techniques in geo-referencing: it is almost the same concept, and that is why I mentioned there that people sometimes put those resampling techniques under the interpolation category, but they are not really meant for interpolation from points to a surface; they are just to find the pixel or cell value after the registration, once the polynomial transformation function has been formed. So the purpose there is different, but concept-wise it is the same thing.
Now, the IDW interpolator assumes that the variable being mapped decreases in influence with distance from its sampled location, which is very logical. Here, for example, there is another choice we have: either we can fix the size of this circle, the search radius as we call it, or we can fix the number of points which will be used as input to determine the value at the centre point. For example, when interpolating a surface of pollution, we know that the greater the distance to a location, the less influence the pollution there will have. The same is shown in the example here: at the point where the question mark is shown, the estimate will have the maximum influence from point A, comparatively less influence from point C, and the least influence from point D. So it is an inverse relation with distance: the farther the distance, the less the influence.
So, first the distance from each neighbouring point is measured, and then the number of points to include in the search can be selected. When we go for interpolation, which I will show through the options available in one particular software, we can restrict the search either by a search radius or by a number of points. So the number of points to include for each cell can be selected, and then the value is calculated from the distances according to the formula. An example is here: in RGIS software, when I go for interpolation, you can see that the point locations are there, the value I am going to use for the interpolation is the pH value, the input map is this file, and these details we will see a little later. The search radius here is chosen as variable, whereas the number of points has been fixed at 12; if you fix it here, you need not do anything there. At this stage you can also decide the spatial resolution of your output grid, and where you want to store it, and once you have chosen these things you go for interpolation and the surface will be created.
power here is. That power controls the significance of known points on the interpolated values
based on their distance from the output. That means, the high power; higher power means
emphasis is placed on the nearest point and the resulting surface will have more details,
but less smoothing. So, depending on your requirements sometimes when we do not know
the best thing is to look for some help first understand and then create surface, rather
than just choosing the default value and created a surface and then you do not know what kind
of accuracy it is carrying. And the same is just reverse of high power, low power value
if you give then more influence on the points that are farther away resulting in a smoother
surface. So, it depending on what phenomena here it
is pH we can assume that it is it is power two will have a quite good influence and we
are not looking for a very smooth surface and then we can go for interpolation. So,
the power two is most commonly used with IDW and that is why it is kept in default. Similarly,
the search radius as I have already mentioned that you can fix the size of the search radius
or you can keep as variable and then fix the number of points and then interpolation is
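Here is a minimal IDW sketch showing the role of the power and of a fixed number of nearest points. This is a plain NumPy illustration, not the exact implementation of any particular GIS package, and the sample values are invented:

```python
import numpy as np

def idw(sample_xy, sample_z, target_xy, power=2.0, n_points=12):
    """Inverse-distance-weighted estimate at one target location,
    using the n_points nearest samples and weights 1 / distance**power."""
    d = np.hypot(sample_xy[:, 0] - target_xy[0], sample_xy[:, 1] - target_xy[1])
    nearest = np.argsort(d)[:n_points]              # fixed number of neighbours
    d, z = d[nearest], sample_z[nearest]
    if np.any(d == 0):                              # target coincides with a sample: exact
        return z[d == 0][0]
    w = 1.0 / d**power                              # higher power -> nearer points dominate
    return np.sum(w * z) / np.sum(w)

xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
z = np.array([7.1, 7.4, 6.9, 7.8])                  # e.g. pH values
print(idw(xy, z, (2.0, 3.0), power=2.0, n_points=3))
```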
Similarly, the barriers; this is the point I wanted to bring up here: the use of a barrier polyline. Whenever you use barriers, you are not going for a smooth interpolator, you are going for an abrupt interpolator, because of your requirements, because the terrain or phenomenon you are working on has those discontinuities. This barrier information has to come in the form of a polyline: a line theme has to be added, then you can choose this option, and a near-real surface will be created, even though it may be abrupt. The polyline acting as a barrier may be a fault line, maybe a quarry or a dyke, and so on; sometimes, for surface water flow, we can even use a river as a barrier. So, a polyline data set is used as a break line, representing some interruption in the landscape, and only those input sample points on the same side of the barrier as the cell currently being processed will be considered. See, the surface shown here in 3D without barriers is quite smooth, but when I give the barrier as an additional polyline input, an abrupt surface is created. Depending on the phenomenon, if we have such information we must bring it in during interpolation, so that the surface we create is closer to reality. So, barriers reflect the presence of fault lines, cliffs, streams and other features that create linear discontinuities in the surface, and they also control how the surface is generated; barriers are not supported by all interpolation techniques, only IDW and Kriging support them (a hedged barrier sketch follows below).
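As a hedged sketch of that barrier idea, restricted to a single straight-line barrier (GIS packages accept whole polyline themes), a sample point is used only if it lies on the same side of the barrier as the location being estimated; all the data here are invented:

```python
import numpy as np

def side(p, a, b):
    """Sign of the cross product (b - a) x (p - a): which side of line a-b point p is on."""
    return np.sign((b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]))

def idw_with_barrier(sample_xy, sample_z, target_xy, barrier, power=2.0):
    """IDW estimate that ignores samples lying across the straight-line barrier (a, b)."""
    a, b = barrier
    keep = np.array([side(p, a, b) == side(target_xy, a, b) for p in sample_xy])
    xy, z = sample_xy[keep], sample_z[keep]
    d = np.hypot(xy[:, 0] - target_xy[0], xy[:, 1] - target_xy[1])
    w = 1.0 / np.maximum(d, 1e-12)**power
    return np.sum(w * z) / np.sum(w)

xy = np.array([[1.0, 1.0], [2.0, 4.0], [8.0, 1.0], [9.0, 4.0]])
z = np.array([12.0, 14.0, 30.0, 33.0])              # e.g. water-table depths
barrier = ((5.0, -1.0), (5.0, 6.0))                 # a vertical dyke at x = 5
print(idw_with_barrier(xy, z, (3.0, 2.0), barrier)) # uses only the two left-side samples
```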
This is about the search radius. You can have either a maximum number of points, so if you have decided on 12 points it will search for only the 12 nearest points in the neighbourhood all around, use those points, measure their distances, weight them accordingly and determine the value; or, in the case of a fixed radius, whatever number of points fall within the radius will be used, their distances measured and the value calculated. Both options are available; when you fix the number of points it is fine. Now, a problem may come at the edges, beyond which you do not have data; if I am calculating values near the boundary of the area there might be some problems, so in those cases what is effectively done is extrapolation rather than interpolation. This I have already explained: the maximum number of points, fixing the number, or a fixed radius, and so on.
Now, here is another interpolation technique, the Spline, and in this technique we can have different variants. By default you might get regularized or tension. When you go for the regularized method, it creates a smooth, gradually changing surface, with values that may lie outside the sample data range. That means, whenever you go for any smooth interpolator, you have to be prepared that the values are not going to be the exact observed values; values may go beyond your observed range, because the surface you have asked for is a smoother surface. If instead of regularized you go for tension, then tension controls the stiffness of the surface according to the character of the modelled phenomenon; it creates a less smooth surface compared with the regularized one, with values more closely constrained by the sample data range. So you will have values nearer the real ones, but not as smooth a surface as in the regularized case. The concept of the Spline comes, of course, from mathematics, but think of the so-called French curves: architects and draughtsmen used to keep these curves for drawing, and they were nothing but tools for smoothing the lines they were drawing. The Spline method estimates values using a mathematical function that minimizes the overall surface curvature, resulting in a smooth surface that passes exactly through the input points; those French curves used to be made from acrylic, and wherever the observations were, people would fit the curves and then draw the lines. Now the mathematical ways of doing this are all available in GIS, so we can go for a Spline interpolator very easily, and there are different options that can be used in your GIS software. As I said, there are variants of the Spline method: the major ones are the linear Spline interpolator, the quadratic Spline, and the third one, the cubic Spline interpolator. The mathematics behind them is also given here.
Why am I showing this particular slide? The purpose is that whenever you start using a particular GIS software, as I said at the beginning of this series of lectures, one should resort to the help; the online or offline help of the software can be very, very useful. So, suppose you are going for an interpolation and some options are displayed and you do not understand what an option means: before you go and create a surface, check the help. Very detailed help is always available with standard GIS software, maybe online or offline, and online is better because it is updated all the time; so if you have access to the internet you can go for the online help and read before you choose the options for creating a surface. That way you know which one to choose, whereas by default it might be, say, the linear spline that comes up. Now a question: when you scroll down you will find that two more methods are there. Which are those two methods, what are the variations, what is the mathematics behind them? That you can check very well through the online resources. The Spline methods here, again, are regularized and tension, and the other options are already there: you can set the weight and the number of points; in the default you may get 12, and you can choose according to the smoothness you want in your data. There are basically no fixed guidelines here, but generally the defaults that are kept are the commonly used values (a minimal spline sketch follows below).
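Before moving on to Kriging, here is a minimal spline sketch using SciPy's radial basis function interpolator with a thin-plate kernel; this is one member of the spline family, not necessarily the exact regularized or tension formulation of your GIS software, and the sample heights are invented. A smoothing of 0 passes exactly through the points, while a positive smoothing behaves like an approximate interpolator:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, 5.0]])
z = np.array([100.0, 112.0, 104.0, 118.0, 96.0])    # e.g. spot heights

# smoothing=0.0 -> exact (passes through the samples); larger values -> smoother surface.
spline = RBFInterpolator(xy, z, kernel="thin_plate_spline", smoothing=0.0)

gx, gy = np.meshgrid(np.linspace(0, 10, 5), np.linspace(0, 10, 5))
grid_z = spline(np.column_stack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
print(grid_z)
```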
Now, the last type of interpolation technique, which is very popular and has been implemented in standard GIS software, is the Kriging method. Kriging is a group of geostatistical techniques to interpolate the value of a random field, maybe the elevation, the z value of the landscape, as a function of geographic location, at an unobserved location from observations of its value at nearby locations. The theory behind Kriging interpolation and extrapolation was developed by the French mathematician Georges Matheron, based on the Master's thesis of Daniel Krige, whose name was carried over, and so the Kriging method was developed. Kriging is stochastic, something that incorporates randomness; it can be exact, smooth or abrupt, global or local. As you can see here in this simple plot, shown in a linear fashion, the blue boxes are the observations, the surface interpolated through Kriging is shown as the red line, and the 95 percent confidence intervals are on both sides of the interpolated surface.
The advantage of the Kriging method is that you can also estimate errors: after the interpolation you get an estimation of the errors, whereas with the other methods that kind of estimate is not available. So Kriging is found to be one of the best interpolation techniques, but again it depends on the phenomenon and on the local terrain conditions. Natural data are difficult to model using smooth functions, because normally random fluctuations and measurement errors combine to cause irregularities in the sample data values. What does that mean, basically? That natural data, like elevation points, are different for the Himalaya than for the Indo-Gangetic plain, and therefore one has to be careful while choosing the interpolation technique. Kriging was developed to model such stochastic behaviour; it is based on the concept of a regionalized variable, which has three components in your data: a structural component, a spatially correlated component, and random noise. As I have said, error estimates are available with Kriging. In this x-y plot, what do we find? The blue dots show the observations, the best-fit line is shown in red, the green line tries to connect all these points, and the deviations along the green line show the random noise, or errors.
So, error estimation is also possible using Kriging. The regionalized variable, that is, the components of Kriging, can be segmented as in the previous view graph: we can see the structural component, which is the straight-line fit; then the spatially correlated component, which goes through your input data and varies with distance; and the third one, the random noise component, which is the non-fitted part. This extra information that comes with Kriging is not available with the other interpolation techniques. Kriging is implemented using a semi-variogram, and many different varieties or variants of Kriging are available, so sometimes it becomes difficult to decide which one to choose. Various options might be available in your software, and one has to be very careful before pressing the button: of the options available, are you selecting the right one or not? Ultimately the computer will create a surface, but that surface has to be realistic, near to reality, not just simply a surface. And therefore, understanding the options and variants of the different Kriging techniques is very much required.
In a software like ArcGIS, these are the options available, for the same example I was showing: for the z value you can take the pH value or some other value; then you can choose the method, either ordinary Kriging or universal Kriging; for the semi-variogram model you can choose among various models; for the search radius you can have a variable radius or a fixed number of points; and you can always control the resolution of the grid you are going to create. And if you already have some variance of prediction available, that too can be used here when creating the output. Here is the example: the semi-variogram is based on modelling the squared differences in the z values as a function of the distance between all of the known points, which is shown here as well (a minimal sketch of this computation follows below).
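A minimal sketch of that computation: take half the squared z difference of every pair of known points and average these within distance bins; the bin width and the toy data are arbitrary:

```python
import numpy as np

def empirical_semivariogram(xy, z, bin_width=1.0, n_bins=10):
    """Average semivariance 0.5*(z_i - z_j)**2 of all point pairs, binned by separation distance."""
    i, j = np.triu_indices(len(z), k=1)                 # every pair once
    h = np.hypot(*(xy[i] - xy[j]).T)                    # pair separation distances
    gamma = 0.5 * (z[i] - z[j])**2                      # semivariance of each pair
    bins = (h / bin_width).astype(int).clip(max=n_bins - 1)
    return np.array([gamma[bins == b].mean() if np.any(bins == b) else np.nan
                     for b in range(n_bins)])           # one value per distance bin

rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, size=(50, 2))
z = xy[:, 0] + rng.normal(scale=0.5, size=50)           # a trend plus noise, just for the demo
print(empirical_semivariogram(xy, z))                   # semivariance rises with distance
```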
One of the very useful outputs from a Kriging analysis is the uncertainty surface that can be generated. We can answer the question of how good the predictions are; that is, error estimation, which is not possible with the other interpolation techniques. One can create an ordinary kriged map and a map showing the standard error of prediction, and then compare them with models such as the TIN and others (a hedged sketch of such a run follows below).
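If a geostatistics library is available, the kriged estimate and its variance can be produced together. The sketch below assumes the third-party pykrige package, which is not mentioned in the lecture, and invented sample values:

```python
import numpy as np
from pykrige.ok import OrdinaryKriging   # third-party package, assumed to be installed

x = np.array([0.0, 10.0, 0.0, 10.0, 5.0])
y = np.array([0.0, 0.0, 10.0, 10.0, 5.0])
z = np.array([100.0, 112.0, 104.0, 118.0, 109.0])

ok = OrdinaryKriging(x, y, z, variogram_model="spherical")
gridx = np.linspace(0.0, 10.0, 6)
gridy = np.linspace(0.0, 10.0, 6)
z_pred, ss = ok.execute("grid", gridx, gridy)      # kriged estimates and kriging variance
print(z_pred)                                      # the ordinary kriged map
print(np.sqrt(ss))                                 # standard error of prediction per cell
```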
Some examples are shown here, using the same input and creating different surfaces with different interpolation techniques: the first example is IDW, then Spline, then Kriging. See, the input data is the same and many options are kept the same, but still the output surfaces are very different. And if we go for a 2D representation to compare them, the IDW comes out as the green curve, the Spline as the blue one, the observations are shown as blue dots, and the kriged surface is the other one coming here. So, different interpolation techniques are going to create different surfaces, as you can see.
because whenever you interpolate from point data to a surface a question can be asked
how accurate this representation is. So, this accuracy so will always come. And here it
becomes very difficult to answer except if you have done Kriging then you can provide
that kind of values, but if you have used some other interpolators probably you do not
have the exact answer. So, accuracy so will always be there. There
are certain ways of choosing appropriate one and avoiding or avoiding large errors in your
interpolations and this question of accuracy may be handled quite easily. Now, visual representation
is very much there. So which surface it looks more close to the real one that is also has
to be seen. And edge effects or lack of data at the margins. After like in this example also that here
the data where you know area of interest is available fine, but when we go beyond this
one then we do not have the data. So along the margins along the borders this will remain
there in case of interpolation, because then extrapolation is done and extrapolation may
not be as accurate as interpolation. So, these issues are always associated with this interpolation. Thank you very much.