Canny Edge Detection? ORB Feature Matching? - OpenCV Object Detection in Games #7

Video Statistics and Information

Captions
What's up, guys. So right off the bat, I've got to tell you this video didn't go as planned. My original plan was to use Canny edge detection to find the outlines of the limestone deposits we're looking for, so that in our processed images the outlines of the deposits would look the same whether it was day or night in Albion. Then we could pass that through matchTemplate like we have been doing, and everything would be awesome. But the problem is, that didn't work. I tried saving it using ORB feature detection, which is a more advanced topic than I ever planned on covering in this series, but that didn't work either. So I thought I'd still make a video showing you what I tried and explaining why it didn't work. Maybe these techniques will still be useful to you in your own project, or hopefully you'll at least find it interesting. But if you're just looking for the answers, maybe skip this one and check out the next video. I really want to get this working for you guys and not just leave it as a dead-end project, so I think what we need to do is bring in some machine learning. I know I told some of you we weren't going to use machine learning in this series, but I think that's what we need to really get this working. Hopefully we have some better luck with that; otherwise I just took you guys down a really long dead-end road. Lesson learned: from now on I'll have my tutorials completely planned out before I ever start filming. Hey, I'm Ben, and this is Learn Code By Gaming.

So the first thing I did was add a few more trackbars to our control GUI. I'm going to use the erode and dilate features of OpenCV, and those will give us more control over the outlines that appear when we're detecting edges. This kernel size parameter will also play into that. The Canny edge detector itself has two parameters we can adjust, and those allow us to make our edge detection more or less sensitive. Just like we did for the HSV filter, I created another custom data structure to hold the data from the trackbar positions, just for our edge filter, and just like we have get_hsv_filter_from_controls, I created pretty much the same function for getting the edge filter controls. This method looks up the current position of all the trackbars relevant to our edge detector and returns an edge filter object from those positions. And just like we have an apply_hsv_filter method, I also wrote an apply_edge_filter method: if we have a defined edge filter we want to apply, we can go ahead and use that, but if we don't have one, we'll get one from the controls in our GUI.

Before I show you the Canny edge filter itself, I want to show you the erode and dilate. First we define a kernel size, which is one of the parameters we can play with, and we'll use that same kernel for both erode and dilate. We pass our original image into erode first, then take the result of that and pass it over into dilate. The iteration counts for erode and dilate are also parameters we can play with to see how they affect our results. For now, let's return the result of eroding and then dilating our original image, and once we see what that looks like, we'll come back here and apply our edge detection.
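Here's a minimal sketch of those pieces in code. The names (EdgeFilter, apply_edge_filter, the attribute names) are my approximations of what's described, not necessarily the video's exact code:

import cv2 as cv
import numpy as np

class EdgeFilter:
    # holds the trackbar positions for the edge detector
    def __init__(self, kernel_size=5, erode_iter=1, dilate_iter=1,
                 canny1=100, canny2=200):
        self.kernel_size = kernel_size
        self.erode_iter = erode_iter
        self.dilate_iter = dilate_iter
        self.canny1 = canny1
        self.canny2 = canny2

def apply_edge_filter(original_image, edge_filter):
    # use the same square kernel for both the erode and the dilate
    kernel = np.ones((edge_filter.kernel_size, edge_filter.kernel_size),
                     np.uint8)
    # erode first, then pass that result into dilate
    eroded_image = cv.erode(original_image, kernel,
                            iterations=edge_filter.erode_iter)
    dilated_image = cv.dilate(eroded_image, kernel,
                              iterations=edge_filter.dilate_iter)
    # for now, just return the eroded-then-dilated image; Canny comes next
    return dilated_image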
Over in main, of course, we've got to import our new edge filter. Then let's remove the fixed HSV filter we had from the last video, so that we can still play with those adjustments in our GUI before we pass the image over to our edge detector. We'll apply that edge filter to our processed image, and I'll disable the object detection part for now. Let's take a look at the processed image from the HSV filter, and let's make another window for the edge detection image.

All right, let's run this and see what we get. Back here we have our processed image, and we haven't adjusted any of the trackbars on it yet, so we just have the normal recording from the game. In this other window we're seeing the application of the erode and dilate calls. In our trackbar GUI I've initialized these to one iteration of erode and one of dilate, so that's why we're already seeing it blurred like this. If you play around with these trackbars, you can see that as you make the kernel size larger it gets even blotchier, or you can reduce it to zero and it has a much smaller effect. We can also change the iterations of the erode and the dilate: if you set both of these to zero, you can see it doesn't apply any changes to the processed image, and as you add iterations it gets blurrier and blurrier. As you play with these, you'll notice the erode takes the small features and blurs them out, while the dilate has kind of the opposite effect: as you add more iterations of it, it takes the large features and bleeds them over the smaller features. You can use these in combination with the kernel size to make these blocks either larger or smaller in size.

So why are we doing this? Well, the purpose of the edge detector is to find edges between different parts of our image, and eroding and dilating makes the edges a lot clearer. Without them applied at all, there are edges all over the place, even really thin ones like the rigging between these masts. If we only want to pay attention to the larger edges, we can add some erode and some dilate and those smaller features will disappear. In the end, this just gives us more control over the edges we detect.

So now let's finish this out and add the Canny edge detection itself. In apply_edge_filter, we now take that dilated image and pass it over to cv.Canny, along with the two parameters that control the fineness of our edge detection. The result that comes from Canny edge detection is a black-and-white image: the black pixels are where it didn't detect any edges, and the white pixels are where the edges are. To get that result to play nicely with our other filters, let's go ahead and convert it back into a BGR image.
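Picking the sketch back up, the end of apply_edge_filter would now look something like this, replacing the earlier return:

    # pass the dilated image through Canny; the two thresholds control
    # how sensitive the edge detection is
    result = cv.Canny(dilated_image, edge_filter.canny1, edge_filter.canny2)
    # Canny outputs a single-channel black-and-white image; convert it
    # back to BGR so it plays nicely with the rest of the pipeline
    return cv.cvtColor(result, cv.COLOR_GRAY2BGR)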
Now when we run our code again, you'll see something like this for the results of the Canny edge detection, where each of these white lines is an edge that got detected, and everything that isn't an edge is ignored. By playing with these parameters we can get more or less detail in our detection; in particular, turning those two Canny thresholds down low gives you a whole lot of detail in your edge detection. Depending on what you're looking for, you may or may not want all that detail. You can see that without any erode or dilate at all we really get a ton of activity going on, but we can get that under control and produce a result that looks a little more blocky by using our erode and dilate. And of course I'll put links in the description to the official tutorials for Canny edge detection and for erode and dilate, so if you need any more information about how these work, that's the first place I'd check.

Now, I thought the next step was going to be this: we would use these trackbars to adjust our HSV filter and edge filter until we had a good image of the limestone deposits isolated. The idea was that these edges would remain pretty much the same whether it was day or night in Albion, so we'd crop out our needle image from there, send the whole thing through our find method, matchTemplate would line them up, and we'd be good to go. So let me quickly adjust these trackbars and see what I can come up with. I was thinking maybe something like this would be a good representation of what a limestone deposit looks like, and hopefully it would match the other limestone deposits a lot better than it matched, say, these trees over here. Of course we could get much better results if we applied the hue filtering here (each of these three things down here is clearly a deposit), but with our day/night cycle problem we don't want to be touching the hue filter at all. So then I take a screenshot of this, crop it down to something like this, and that would be the needle image I'd use for matchTemplate.

But that doesn't work at all, and let me explain why. If I move my character even a little bit and take a screenshot, our new haystack image that we pass into find would look something like this, and of course this would be our needle image. The way matchTemplate works is it takes your needle image and drags it all across the haystack image until it finds a good match; more exactly, it drags it all across your image and gives you a score at each position for how well it matches.
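To make that scoring concrete, here's a tiny matchTemplate sketch. The file names are placeholders, and TM_CCOEFF_NORMED is one common scoring mode, an assumption on my part rather than something fixed by this video:

import cv2 as cv

haystack = cv.imread('edges_screenshot.png')  # placeholder file names
needle = cv.imread('edges_needle.png')

# result is a 2D map with one match score per needle position
result = cv.matchTemplate(haystack, needle, cv.TM_CCOEFF_NORMED)
min_val, max_val, min_loc, max_loc = cv.minMaxLoc(result)
print('best score:', max_val, 'at top-left corner', max_loc)
# on a mostly-black edge image, even a visually close alignment
# produces a poor max_val here, which is the failure described below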
So if I take this new image, turn its opacity down to 50%, and try to overlay it on top of that same exact deposit we just captured for our needle image, you can see that visually I can get it to lay over the top pretty closely. But even though it looks like a close match to us, matchTemplate is checking each individual pixel, and when you have a black-and-white image like this, only a pixel that's white in the needle image landing on a pixel that's white in the haystack image counts as a close match; a white pixel in one image and a black pixel in the other is a complete non-match. All the pixels you see here that are gray are pixels that don't match at all, and it's only these white pixels where we actually have an overlap between our needle image and our haystack image. So overall, matchTemplate is going to give this a really poor score, even though it's a really close match to our own eyes. And it gets even worse when you consider that we were hoping this one template would also match other deposits: even though this is a limestone deposit over here, it's not matching up to this black-and-white outline of a deposit whatsoever. So overall this was a big fail. This doesn't work even slightly, and I really don't know what I was thinking, to be honest.

In contrast, if you're dealing with an image where a lot of the color data is still present, let me show you what that looks like using our overlay comparison. If we have a needle image like this and we go to overlay it on top of a colored haystack image, you can see that even when it's not an exact match, it still matches a lot of pixels. Even down here, these two clearly aren't the same shape, but we're still seeing a lot of those same blue pixels in the haystack image that we see in the needle image, and again down here. That's why this method was working pretty decently for us before, especially when you cut out all the other colors, but it just doesn't work at all once you reduce an image to black-and-white outlines. Furthermore, while this edge detection does produce pretty similar results between the day and night cycle in Albion, it's not a perfect match throughout the entire cycle, so some of these edges from the same exact deposit will still change a little bit over time.

Once I realized Canny edge detection wasn't going to work with matchTemplate, I didn't want to completely abandon it, so I thought: what if we use feature detection? Feature detection is a more advanced concept that's supported by OpenCV, and they've got a ton of official tutorials about it on their website, so I tried hacking something together just to see if I could get it to work. The first thing to know is that the SIFT and SURF detectors are proprietary, so they're not open source; technically, if you use them, you owe somebody money, and the latest versions of OpenCV don't even support them anymore. So the one I started playing with was ORB, which is advertised as the free alternative to SIFT and SURF. The idea is that we can use ORB to detect all these different features in our needle image and then search for those same features in our haystack image, and that should allow us to find the object we're looking for. So rather than checking pixels in a really basic way, where we just overlay the images and see how they match up, here you're searching for small features that appear in both images. It's much more sophisticated, and it's got a lot of stuff that I don't understand at all, but let me at least show you the code I came up with and what my results were.

This match_keypoints method in the Vision class is what I ended up with, and again, this is intended to replace the find method. We're going to give it a screenshot, take all of the keypoints we found in the needle image, and look for those same keypoints within the screenshot. Most of the code for this is just from the official tutorials, and I also found a really helpful Stack Overflow post, which I'll link in the description, that got me on the right path for what these different parameters should be. Let me summarize what I understand about this function. The patch size is basically how big of an area will be considered when determining keypoints: as you make the patch size smaller, it finds finer and finer keypoints, whereas with a larger patch size it should be detecting larger features and noticing those as keypoints. The min match count is how many keypoint matches from the needle image you need to find in the haystack image before it's considered a good detection. And I'm calling ORB_create twice here so that I can have a different number of features on the haystack image versus the needle image: our needle image is really small, so I didn't want 2000 features on it, but on our haystack image I really wanted to consider a lot of different things, so I ramped up the number of features there.
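Here's a hedged sketch of that ORB setup. The file names are placeholders, and the exact numbers are my guesses; the grounded part is simply using far fewer features on the small needle than on the haystack, plus a tunable patch size:

import cv2 as cv

# placeholder file names for illustration
needle_img = cv.imread('limestone_needle.png')
haystack_img = cv.imread('game_screenshot.png')

# patchSize is the tunable discussed above; edgeThreshold should
# roughly track patchSize. 32 and the nfeatures values are guesses.
PATCH_SIZE = 32
orb_needle = cv.ORB_create(nfeatures=500, edgeThreshold=PATCH_SIZE,
                           patchSize=PATCH_SIZE)
orb_haystack = cv.ORB_create(nfeatures=2000, edgeThreshold=PATCH_SIZE,
                             patchSize=PATCH_SIZE)

# find keypoints and compute their descriptors in both images
keypoints_needle, descriptors_needle = orb_needle.detectAndCompute(
    needle_img, None)
keypoints_haystack, descriptors_haystack = orb_haystack.detectAndCompute(
    haystack_img, None)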
Then there are these other parameters, which, again, I don't really understand very well. I did try a different algorithm and it didn't work quite as well as this one does, and I did play around with these other parameters as well, but I don't remember seeing any immediately noticeable effects, so I just left them where they were. If you do have a better understanding of this, definitely let us know in the comments what the heck's going on here. All of these parameters get passed into the FLANN-based matcher, and then we take the descriptors of the keypoints in the needle image, compare those to the descriptors in the haystack image, and use the knnMatch method on the FLANN-based matcher to get our matches. Once we have our keypoint matches, we want to go through them and keep only the good ones, and it looks like it's looking at the distance to determine that; again, this part of the code is all from the official tutorial on how to do this. Then, if we do have enough good matches to exceed our min match threshold, we'll go ahead and return those as a successful match. I was also interested in the points on the haystack image where those matches occurred, so that we could use them to find where the limestone deposits are in our screenshot. And then I return a whole bunch of stuff out of this method, everything that'll help me figure out what's going on so I can debug it in the main file.

Over in main, I just call match_keypoints, and I'm going to use the raw screenshot to start with, so I'll also be passing in the unprocessed limestone image as the needle. With all those variables I'm returning from match_keypoints, we collect them because we're going to use them in this drawMatches function. drawMatches is built into OpenCV and it's really nice: we give it the needle image and all the keypoints found on that image, then our second image, which will be the haystack image, and all the keypoints found on there, and then we pass in all of the good matches we found. drawMatches will circle all of the keypoints on both images, and it will also draw lines from the keypoints on one image to the keypoints on the other image wherever there was a good match. Then we take that match image and go ahead and show it.
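Continuing from the ORB setup above, here's roughly how that matching-and-drawing flow looks. The FLANN LSH parameters, the 0.7 ratio, and MIN_MATCH_COUNT = 10 are the usual values from the official tutorials, assumptions on my part rather than the video's exact numbers:

# FLANN with ORB's binary descriptors uses the LSH index
FLANN_INDEX_LSH = 6
index_params = dict(algorithm=FLANN_INDEX_LSH, table_number=6,
                    key_size=12, multi_probe_level=1)
search_params = dict(checks=50)
flann = cv.FlannBasedMatcher(index_params, search_params)

# for each needle descriptor, get its two nearest haystack descriptors
matches = flann.knnMatch(descriptors_needle, descriptors_haystack, k=2)

# ratio test: keep a match only when it's clearly better (closer in
# descriptor distance) than the runner-up; LSH can return fewer than
# k results, hence the length check
good = []
for pair in matches:
    if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
        good.append(pair[0])

MIN_MATCH_COUNT = 10
if len(good) >= MIN_MATCH_COUNT:
    # haystack positions of the good matches, for locating the object
    match_points = [keypoints_haystack[m.trainIdx].pt for m in good]
else:
    match_points = []

# circle the keypoints on both images and connect the good matches
match_image = cv.drawMatches(needle_img, keypoints_needle,
                             haystack_img, keypoints_haystack, good, None)
cv.imshow('Keypoint Search', match_image)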
Oh, I spilled water on my keyboard; hold up, just gotta dry that off. OK, let's run this. All right, here's the output of match_keypoints. In the upper left over here, all of those little circles are the different keypoints found in our needle image; these are hopefully distinctive features that are unique to this needle image. Then it looks for matches between those features and the keypoints it found on the haystack image, and once it finds a good match between features, it draws a line from the needle image to where it found that same feature on the haystack image. So once we find a good detection, what we expect is a bunch of lines being drawn from our needle image over to the object we're looking for in the haystack image. One thing you should notice is that it's detecting a lot of keypoints on our user interface: in the upper left over here, the upper right, down by the map, it thinks those are keypoints, but in actuality we're not interested in any of those things. One thing that would help is to crop out all the parts of the image where we don't want it to detect anything, so I went ahead and wrote some code to do that cropping for us.

What you see now is all of the UI removed from our haystack image. It still sees a lot of keypoints, like on my nameplate, but at least we're also seeing a lot more keypoints detected around the objects we're interested in, which are the limestone deposits. Now let me crop out the exact image of one of these deposits so you can see what a good match looks like. Of course, you want to remember to take that screenshot from one of your windows that doesn't have those circles and lines everywhere, and when you're making these crops, make sure you leave a little bit of white space around the features you're interested in, because this keypoint method is really interested in the features along the edges. All right, so now we're using this needle image, and you can see it's finding a ton of matches between the features in our needle image and the actual object we're looking for in the haystack.

So now we could write some code that says: if you're detecting this many features, go ahead and assume those points are all near the object we're looking for, take the average of all the matching points, and that should end up somewhere in the middle of the object. I'll show you that code real quick. Here in the Vision class I wrote this centroid method, which takes a list of (x, y) coordinates, all those points that matched a feature on the haystack image, and returns a single (x, y) coordinate representing the middle of all those points. Then in main, we call centroid if we have a list of good match points, which gives us our center point, and we draw a crosshair at that position. We also need to add the width of the needle image to the x position, because the output from drawMatches puts the needle image over on the left.
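A minimal version of that centroid idea, continuing from the sketch above; using drawMarker for the crosshair is my choice of drawing call, not necessarily the video's:

def centroid(point_list):
    # average of all (x, y) match positions
    x = sum(p[0] for p in point_list) / len(point_list)
    y = sum(p[1] for p in point_list) / len(point_list)
    return (int(x), int(y))

if match_points:
    center = centroid(match_points)
    # shift right by the needle width, because drawMatches places the
    # needle image on the left side of the combined output image
    needle_w = needle_img.shape[1]
    draw_point = (center[0] + needle_w, center[1])
    cv.drawMarker(match_image, draw_point, (255, 0, 255),
                  markerType=cv.MARKER_CROSS, markerSize=40, thickness=2)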
So now you can see the crosshair being drawn at the centroid point, somewhere near the middle of the limestone deposit, and theoretically we could click where that crosshair is and select the limestone. Now let's see what happens when we move around a little. You can see it sort of hooks onto some of these other limestone deposits, but the connection just isn't as strong as I'd like; even over here we've got three good limestone deposits, and it's not really detecting any of them. Now let me show you what this looks like in combination with our Canny edge filter. In main, I'm just going to set the needle image to our edges image, and for our keypoint image, the haystack, we'll give it the results from apply_edge_filter. With the edge detection, I cropped out this part of this deposit right here and used that as my needle image, and you can see it finds that one well, but as I move around it pretty quickly loses focus. You also run into issues like this, where it's detecting the bottom part of one deposit and the top of another, and if we were to just take the simple average of all those positions, it would put our crosshair somewhere here, which is nowhere near any of the deposits. (I killed a bunny.) I'm sure there's some algorithm we could use to focus only on detections that are clustered close together, but even then, just from this debug output, it doesn't look like we're going to get great detection results if we go and do all that extra work.

So this is about as far as I got. It's about two days of development work for me, and I'm just not real happy with the results; I don't think we have anything that's better at detecting objects than what we had in the last video. But maybe you want to work on this some more. It'd be a pretty decent project, I'm sure there must be some good uses for this keypoint stuff, and at the very least it looks cool when you're shooting colorful lasers everywhere. If you want to take that on, definitely let me know how it goes. As for the rest of this series, I'm going to dive into OpenCV's machine learning a little bit; specifically, I'm going to check out the cascade classifier that's mentioned in the object detection tutorial. Hopefully that works better than what I've shown you today, but I'll let you know how it goes either way. Thanks for sticking with me, guys. Hopefully you still got something useful out of this, and I'll see you next time.
Info
Channel: Learn Code By Gaming
Views: 14,104
Keywords: opencv, python, opencv canny edge detection, opencv orb feature matching, opencv feature detection, opencv feature matching, feature matching, object detection, image processing, edge detection, opencv tutorial, feature detection, computer vision, python object detection, open cv, object detection python, opencv orb, opencv canny, opencv canny python parameters, canny edge, canny edge detector, opencv keypoint matching, opencv erode dilate, opencv erosion and dilation
Id: PcOAB6lZ5l0
Length: 20min 48sec (1248 seconds)
Published: Thu Jul 30 2020