OpenCV Python Camera Calibration (Intrinsic, Extrinsic, Distortion)

Captions
Alright, in this video we're going to talk about camera calibration in OpenCV using Python. We'll start off with what it is, why we need it, and how it works, and then jump right into a coding example, so by the end of this video we'll see how we can take a distorted image and undistort it.

So what is camera calibration? Camera calibration is used to obtain the camera parameters, and there are two types: intrinsics and extrinsics. For the intrinsics you have the focal length, the principal point (the center of your image, which we call cx and cy), and the distortion parameters, which are called k1, k2, p1, p2, and k3. For the extrinsics you have the rotation matrix, typically a 3x3 (although OpenCV also uses a 3x1 rotation-vector representation), and the translation, which is a 3x1 vector.

To really understand this you have to know the camera model. In the camera model you typically have a camera center, an image plane, and a world point out in the scene. For camera calibration, the idea is that we want to find these parameters so that if we were to project that world point onto the image plane, we project it correctly, so all of the parameters need to be accurately determined. The general equation is little x equals big P times big X. Big X is your world point, so you have X, Y, and Z, plus a 1 as a filler so the dimensions work out with the camera matrix. The camera matrix P is 3x4; sometimes you'll see it written with a different size, like 3x3, if part of it has been truncated, but it's the same idea. And little x is the image point, the actual point in your image.

If we break up that matrix, the P matrix can really be decomposed into two matrices. First there is the intrinsics matrix, the part that looks like a 3x3, which tells us the focal length and the principal point px, py (you'll sometimes see it written as cx, cy depending on the convention). Then there are the extrinsics: the rotation matrix and the translation vector. For the distortion you have the different coefficients k1, k2, p1, p2, and k3, and the way they come in is that when you map your original points, you get a different mapping depending on what those coefficients are. The example shown here is what's called a barrel distortion; depending on the distortion coefficients you'll get a different mapping from the radial and tangential distortion equations, so you'll have different distortion factors depending on which direction you're looking at.
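To make that projection equation concrete, here is a minimal NumPy sketch (not code from the video); the focal length, principal point, rotation, translation, and world point below are made-up placeholder values, not calibrated ones.

```python
import numpy as np

# Assumed intrinsics: focal lengths fx, fy and principal point cx, cy (placeholders)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Assumed extrinsics: identity rotation and a translation along the camera axis
R = np.eye(3)
t = np.array([[0.0], [0.0], [5.0]])

# P = K [R | t] is the 3x4 camera matrix described above
P = K @ np.hstack((R, t))

# Homogeneous world point X = (X, Y, Z, 1); the trailing 1 is the "filler" dimension
X = np.array([[0.1], [0.2], [1.0], [1.0]])

x = P @ X
x = x / x[2]                  # divide out the third coordinate
print(x[:2].ravel())          # (u, v): where the world point lands on the image plane
```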
So why do we need camera calibration? It's typically useful for applications like AR, SLAM, and 3D reconstruction, or just for having an undistorted image.

So how does camera calibration work? First off, as you've seen in a lot of cases, people take chessboard images, and those are used for calibration. What we do is find all the corners inside the chessboard, but only the inner ones, not the outer ones, and then we run a refinement process to find the best locations for those corners. After you find all the corners you can set up the projection matrix we looked at, which maps between our world points and our image points. We parameterize it column by column with the different parts of the rotation and translation, so that we get a system of equations; we move everything to one side to get the form Ax = 0, and then you essentially just solve for your H. We also make an assumption: normally you have X, Y, Z, but since the chessboard points all lie on a plane we can say the Z value is zero. After you set up your system of equations, you can use a solver to find the actual parameters.

Okay, so let's jump right into the coding. Here we have our camera calibration program. We import the modules we need, and we have a function called calibrate. The first function we run is called runCalibration, which calls calibrate with showPics set to True.

So what does this calibrate function do? It gets our calibration directory and reads in all of the images inside it. First we initialize the number of rows and columns of our chessboard; this is very important, and you want to make sure you counted the number of corners correctly. Then we set up a termination criteria, which is a tuple of three values: an enum for the stopping condition, the max iteration count, and the error threshold; here we use a combination of the two stopping conditions. If we jump ahead, this criteria gets passed into cornerSubPix later on. Then we create some world points, which we'll use for the projection later; for now we're just creating numbers from one up to the number of rows and columns as placeholders. We also create two empty lists to store our points as we calibrate the camera.

To find the corners, we go through all of our images, read in each image, convert it to grayscale, and call the findChessboardCorners function. It takes the image (we pass in the grayscale image here), the pattern size, which is nRows by nCols, and some flags, which we're not using. It returns a flag saying whether the corners were found, plus the corners themselves, which are an N x 1 x 2 NumPy array of the x and y coordinates.
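As a rough sketch of the setup just described (the calibration/ folder, the 6 x 9 board size, and the variable names are my assumptions, not necessarily the video's exact values):

```python
import glob
import cv2
import numpy as np

nRows, nCols = 6, 9   # inner corners per column / per row (assumed board size)

# Termination criteria: stop after 30 iterations or once corners move < 0.001 px,
# whichever happens first (a combination of the two stopping conditions)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

# Placeholder world coordinates for the inner corners (the board plane is Z = 0);
# these are used in the calibration step sketched further below
worldPtsTemplate = np.zeros((nRows * nCols, 3), np.float32)
worldPtsTemplate[:, :2] = np.mgrid[0:nCols, 0:nRows].T.reshape(-1, 2)

for path in glob.glob('calibration/*.jpg'):          # assumed image folder
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, (nCols, nRows), None)
    # corners is an N x 1 x 2 float array of (x, y) pixel locations when found is True
    print(path, 'corners found:', found)
```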
If we found the corners, we append the world points to the list of all our world points, and then we pass the corners into cornerSubPix, which finds a more accurate location for your corners. We pass in the grayscale image, the corners we found, the window size (we're using 11 x 11), the zero zone, which is a region in the middle of the search window that is ignored to avoid singularities (negative one, negative one means none, so we're not using it), and then the termination criteria we just talked about. After that we append the refined corners, and if showPics is set we plot them out with drawChessboardCorners: we pass in the image, the pattern size, the corners, and whether the corners were found, and it draws the corners onto the image.

The final step is calibrateCamera, which does the actual calibration and finds the intrinsics and extrinsics. We pass in the world points list and the image points list (these are lists with one entry per picture: N x 3 arrays for the world points and N x 1 x 2 arrays for the image points), then the image size as a tuple, and we set some of the other arguments to None because we're not using them. For the return values you get the reprojection error, which is a float, the camera matrix, which we'll need, a 3 x 3 matrix, the distortion coefficients, which are 1 x 5, and the rvecs and tvecs, which are tuples of 3 x 1 vectors, one per image. We print out the camera matrix and the reprojection error to take a look, and then there's some logic to save the calibration parameters, which we'll use in a later video; we just want to avoid having to run two programs, so that's why we're saving them.

If we go ahead and run this, we can see the calibration happening: for each picture we find all the corners, and you can see they're all nicely detected. We print out the calibration matrix and get a reprojection error of 0.1687 pixels, which is pretty good; you can see the focal length values, which are the same here, and then the cx and cy, and those all look pretty good. Some things to note when you're calibrating: as I mentioned, make sure you count the number of corners correctly; you typically need at least 10 images; the images should be taken in different planes, so you want to rotate and pivot the board so the shots aren't all in the same plane, otherwise you'll get a degenerate case that your solver will not be happy with; and you want to make sure the entire chessboard is inside the image.
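Putting the refinement and calibration steps together, a minimal end-to-end sketch might look like this. It repeats the detection loop from the previous sketch for completeness, and the folder name and the calibration.npz filename are assumptions, not the video's exact code:

```python
import glob
import cv2
import numpy as np

nRows, nCols = 6, 9
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
worldPtsTemplate = np.zeros((nRows * nCols, 3), np.float32)
worldPtsTemplate[:, :2] = np.mgrid[0:nCols, 0:nRows].T.reshape(-1, 2)

worldPtsList, imgPtsList = [], []
imgSize = None

for path in glob.glob('calibration/*.jpg'):          # assumed image folder
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    imgSize = gray.shape[::-1]                        # (width, height)
    found, corners = cv2.findChessboardCorners(gray, (nCols, nRows), None)
    if found:
        worldPtsList.append(worldPtsTemplate)
        # Refine to sub-pixel accuracy in an 11 x 11 window; (-1, -1) = no zero zone
        corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
        imgPtsList.append(corners)

# Solve for the intrinsics, distortion coefficients, and per-image extrinsics
repError, camMatrix, distCoeff, rvecs, tvecs = cv2.calibrateCamera(
    worldPtsList, imgPtsList, imgSize, None, None)

print('Camera matrix:\n', camMatrix)
print('Reprojection error (pixels):', repError)

# Save the parameters so a later script can reuse them without recalibrating
np.savez('calibration.npz', camMatrix=camMatrix, distCoeff=distCoeff)
```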
Okay, so next up we're going to try to remove the distortion. We have a function called removeDistortion. What it does is read in a special picture that we're going to try to undistort, get the height and width of the image, and call getOptimalNewCameraMatrix, which computes a new camera matrix that accounts for the distortion based on the distortion coefficients we found. It takes in the image size, an alpha value, and the new image size (along with the camera matrix and distortion coefficients); the sizes are the same, which is why we pass in the same width and height. The alpha here is a scaling factor between 0 and 1: for alpha of zero, the rectified image is zoomed and shifted so that only valid pixels are visible, and for alpha of one, all the pixels from the original image are retained in the rectified image. We're going to use the alpha of one option here to retain all the pixels.

Then we have the undistort function. You pass in the source, which is your image as a NumPy array, the camera matrix, which is 3 x 3, and the distortion coefficients; there is also a dst (destination) argument, which we're not using because we take the undistorted image as the return value instead. After that we draw a straight line to visualize how much distortion is happening, draw the same line on the undistorted image, and compare the results. If I go ahead and run this, we see a picture of a chocolate bar. It's very subtle, but if I zoom in and draw a straight line you can see a little bit of a gap, so you can tell there's a little bit of curvature in that image. After undistortion, the line pretty much matches the edge of that part of the chocolate, so you can see that part of the distortion has been removed.

Okay, if you found this video helpful, give it a like and subscribe, and I'll see you in the next one.
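For reference, here is a minimal undistortion sketch along the lines described above; the calibration.npz and image filenames are assumptions rather than the video's actual paths:

```python
import cv2
import numpy as np

# Load the intrinsics and distortion coefficients saved by the calibration sketch
data = np.load('calibration.npz')                    # assumed filename from above
camMatrix, distCoeff = data['camMatrix'], data['distCoeff']

img = cv2.imread('distorted.jpg')                    # assumed test-image filename
h, w = img.shape[:2]

# alpha = 1 keeps every original pixel in the rectified image
# (alpha = 0 would crop to only the valid pixels); output size stays the same
newCamMatrix, roi = cv2.getOptimalNewCameraMatrix(
    camMatrix, distCoeff, (w, h), 1, (w, h))

# Remap the image so straight edges in the scene come out straight in the picture
undistorted = cv2.undistort(img, camMatrix, distCoeff, None, newCamMatrix)

cv2.imwrite('undistorted.jpg', undistorted)
```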
Info
Channel: Kevin Wood
Views: 6,875
Keywords: camera calibration, camera calibration opencv, camera calibration opencv python, camera calibration computer vision, camera calibration explained, calibrate camera with opencv python, python camera calibration, camera calibration python
Id: H5qbRTikxI4
Length: 14min 24sec (864 seconds)
Published: Mon Sep 18 2023