Face Detection on Custom Dataset with Detectron2 & PyTorch using Python | Object Detection Tutorial

Captions
Hi guys, today we're talking about face detection using Detectron2, and this is the first video of the Deep Learning with PyTorch series, so let's get right into it.

So what is face detection? Well, if you take an image such as this one, let's say of Charlie Chaplin, the great one, your task is to find the human face in this image, if one exists. A good result might look something like this. I'm going to show you how you can take a pre-trained model, fine-tune it using a relatively small dataset, and draw these bounding boxes around the human faces. Why is this useful? Well, when you do some sort of face recognition, say you want to tell who a person actually is for security reasons, or you want to detect the emotions on a human face for something like marketing, this task is probably the first step in solving your problem.

Let me show you a bit more about the Detectron2 framework, and then we're going to open a Jupyter notebook, a Google Colab notebook, and I'm going to show you how you can fine-tune a pre-trained model on a small dataset. Using Detectron2 you will be able to detect faces in under half an hour or so, if you get lucky and get a good video card from Google Colab.

So what is Detectron2? Detectron2 is a state-of-the-art object detection and image segmentation framework by Facebook AI Research, and it's the second iteration of the framework, so this is still cutting-edge, alpha technology. Probably the most important place to look is the GitHub repo. As you can see, development is really active; almost every day there are a few commits. I've been using Detectron2 for the past month or more, and I can say that it's relatively stable and extremely easy to use, even for a newbie like myself. As you can see, this is the power of Detectron2: it does object detection, image segmentation, and even pose estimation of people, which is pretty cool. It has a lot of pre-trained models in the Detectron2 Model Zoo and Baselines: models trained on the COCO object detection dataset, with ImageNet pre-trained backbones, I believe (yes, we have ImageNet pre-trained models in here), and some models based on the Cityscapes dataset. There's an extreme variety of pre-trained models, and I'm going to show you how we can use one of them; I believe it's probably this one, but we'll have a look at that.

The next thing we're going to need for the object detection part (and face detection is just object detection of one particular thing, the human face) is some sort of dataset with human faces. Lucky for us, we have the DataTurks "Face Detection in Images" dataset on Kaggle. It's a pretty old dataset, and it has about 500 images, which is a really small dataset, but on those images we have around 1.1k faces tagged manually. So we have the images, and we have the bounding boxes around the human faces.
If you open the URL from Kaggle, you can see some examples. You should take a look at this one in particular, because you can clearly see that some annotations are missing, so the quality of this dataset might not be perfect, but we'll use it to train our model. Let's have a look at another example: some sort of movie poster, or a book maybe. The next one has multiple faces, and this one is good, actually. These images look like they're taken from movies. So these are good, and I hope that most of the labels are pretty good as well. Given that we have the data and a framework we can use for detecting human faces, we can open a notebook and start coding, I guess.

The last thing I forgot to mention is that Detectron2 is actually built on top of PyTorch, but the way you use Detectron2, you don't even notice that PyTorch is used under the hood; it's a very well abstracted framework for object detection and image segmentation. I don't really think there will ever be a backend based on TensorFlow or Keras or whatever, but keep in mind that the backend is written in PyTorch, and the code of Detectron2 is actually really readable. You can dive into the GitHub repo and probably understand what is happening and when it's happening; it's a pretty well structured project, in my opinion at least.

The last things I'm going to mention about Detectron2 are that there is a blog post, a Google Colab notebook, and a getting started document. The blog post is relatively early in the project's timeline, but it shows the power of the initial project, and there have been improvements since then. Training a model with Detectron2 is really, really fast compared to Detectron (the first version) or other frameworks, and there is a benchmark for that; you can see it here. The documentation for Detectron2 is on readthedocs.io, but it's relatively sparse at the moment. There are some good insights you can get from it, but mostly, at least for now, I believe you should check the source code. There's a getting started guide, where you can use a pre-trained model, and there is a Detectron2 beginner's tutorial, which is really good; it shows all the steps for training on a custom dataset of balloons in images, which is actually really similar to what we're doing here. Go and look at those, they're good starting points for understanding Detectron2. This tutorial is just a tiny bit more complex, because we need to do some pre-processing on the face detection data.

OK, so let me start by opening the Colab notebook. I'll reconnect, since we were disconnected, and let's start by having a look at which GPU we got assigned, because that's really important. We have the P100, so we're really lucky in that regard. I'm going to start installing the dependencies for Detectron2, and the next thing I'm going to do here is clone the repo and install the Detectron2 project. After the installation is complete, we have to restart the runtime, because we won't be able to import the libraries unless the runtime is restarted.
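In case it helps to see it in one place, here's a sketch of what that Colab install cell might look like. The exact commands have changed across Detectron2 releases (this video predates the pre-built wheels), so treat this as an outline rather than the exact cell from the notebook:

```python
# A sketch of the Colab install cell, assuming a GPU runtime with a
# compatible torch/torchvision already installed. Check
# https://github.com/facebookresearch/detectron2 for current instructions.
!pip install -q pyyaml

# Clone the repo and install it from source, then restart the runtime
# so the freshly built package can be imported.
!git clone https://github.com/facebookresearch/detectron2.git
!pip install -q -e detectron2
```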
Let me dive a bit into the imports. As you can see, we have pretty much the standard stuff. I know those imports are really verbose, but I don't care; I just import everything and start experimenting from there, so some of those might not end up being used. We have most of the Detectron2 imports: a DefaultPredictor, a DefaultTrainer, a Visualizer, a DatasetCatalog, an evaluator, and the model zoo, which we can use to get a model config file and the model's pre-trained weights; I'm going to dive a bit deeper into that later on. We also have seaborn and the plotting stuff, and we seed numpy and torch here.

While this is running, I'm going to go to Kaggle again and have a look at the data itself. The data is just a JSON file, and this JSON file contains a "content" field, which (let me open this up for you) is a URL to an image in the dataset, so we have to go and download each image. As you can see, those images are hosted on S3, on Amazon Web Services, which is great. We also have "annotation", which contains the bounding boxes for the human faces, and each one has points with x and y values: x min, y min, x max, y max. We have the image width and image height, and then another annotation. For this image you can see that there should really be three faces, but the annotators probably annotated this face and this face here as well; that's just my guess based on the annotations. So we basically have to download each image locally onto the Google Colab runtime, and then convert the annotations into a format that Detectron2 understands.

The next thing I did was to upload this same JSON file, face_detection.json, to my Google Drive, so it's available via gdown and you don't have to go to Kaggle, download the JSON file, and upload it to Google Colab yourself. OK, this is taking a bit of time, so let me pause. Now that the installation of Detectron2 is complete, we can go ahead and restart the runtime, do the imports, and download the JSON file.

The next thing I'm going to do is load the JSON file into a pandas DataFrame, but if we do it naively, we get an error (sorry, I also hadn't included the file name correctly at first): "Trailing data", and this is because of the format. If I go ahead and print the first couple of lines of the JSON file, you can see that each line is just another JSON object. Fortunately, we can use the lines=True argument to turn each line into a pandas row. Let's have a look at what we have here: we have the content, the annotation (which is really not well parsed), and something called extras, which I'm not going to talk about because I believe every extras value is NaN. So we have a bit of work to do; we need some pre-processing to extract the data into a format that is understandable by us and by Detectron2.
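A minimal sketch of that loading step, assuming the file was downloaded as face_detection.json:

```python
import pandas as pd

# Each line of the file is a separate JSON object (JSON Lines format),
# so a plain pd.read_json() raises "ValueError: Trailing data".
# Passing lines=True turns every line into one DataFrame row.
faces_df = pd.read_json("face_detection.json", lines=True)

print(faces_df.shape)
print(faces_df.columns)  # content (image URL), annotation, extras (all NaN)
```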
Recall that we have to actually download these images. To do that, I'm going to start by creating a folder called faces, and into it I'm going to download every image and rename it, because the original names are kind of insane. After we download each image, I'm going to record every image's height and width, and I'm going to parse the annotations, so each face annotation gets a single line in the DataFrame I'm about to create. If you don't quite understand what I'm talking about, let me just show you the code, and after that we'll review the resulting DataFrame, so it should get a bit clearer.

Let me start by creating a dataset list, and then I'll iterate over each example in the faces DataFrame, using tqdm to follow the progress. Right now tqdm won't work as expected on a DataFrame iterator, so you have to specify the total number of items, which is just the number of rows in the DataFrame; if I do that, tqdm works properly. Let's start by downloading the image from the URL. I'm going to use urllib.request for this, and the URL in the row is "content": I take that, convert it into an image using PIL, and convert it to an RGB image. So now we have the image data stored in the image variable. Then I build the new name of the image, which is going to be face_ plus the current index plus the extension, inside the faces folder, and I save it there, specifying the JPEG format.

Next I take the annotations. As you can see, those are actually a list, so I iterate over them, and for each one I create a new data row, which we'll append to the dataset at the end. In that row I record a couple of things: the image width, the image height, and the points. The points are the annotated corners between which the bounding box should lie and, let me show the example again, we have the minimum point and the maximum point, and those are actually fractions of the image width and height. To get the real coordinates, we have to multiply by those numbers, and I'll show you how we can do that in a bit; it's not hard at all, let's be honest. So in the data row I record the file name (just the image name), the width, and the height, then I calculate x_min: that's the 0th point's x value multiplied by the width, which gives the absolute coordinate of x min. This should just be an integer, so I round it and convert it to an int. I do exactly the same thing for y_min, multiplying by the height. Then we do exactly the same thing for the max coordinates, x_max and y_max, taking the next element from the points. The final thing we need is the class. You can have multiple classes; for example, you could try to detect human faces and cat faces, and those would be two separate classes, but for our example we just have a single class, which we'll call face.
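Here's a sketch of the whole download-and-flatten loop. The annotation field names (points, imageWidth, imageHeight) follow the DataTurks schema as shown on Kaggle; if your copy of the file differs, adjust accordingly:

```python
import os
import urllib.request

from PIL import Image
from tqdm import tqdm

os.makedirs("faces", exist_ok=True)
dataset = []

# tqdm needs an explicit total when it's fed a DataFrame row iterator.
for index, row in tqdm(faces_df.iterrows(), total=faces_df.shape[0]):
    if not isinstance(row["annotation"], list):
        continue  # a few rows may come through without annotations

    # Download the image (hosted on S3) and force a 3-channel RGB image.
    img = Image.open(urllib.request.urlopen(row["content"])).convert("RGB")

    # Rename the file to something sane and save it locally as JPEG.
    image_name = f"face_{index}.jpeg"
    img.save(f"faces/{image_name}", "JPEG")

    # One flattened row per annotated face on this image.
    for annotation in row["annotation"]:
        width = annotation["imageWidth"]
        height = annotation["imageHeight"]
        # Two points: top-left and bottom-right corners, stored as
        # fractions of the image size, so multiply back to pixels.
        points = annotation["points"]
        dataset.append({
            "file_name": image_name,
            "width": width,
            "height": height,
            "x_min": int(round(points[0]["x"] * width)),
            "y_min": int(round(points[0]["y"] * height)),
            "x_max": int(round(points[1]["x"] * width)),
            "y_max": int(round(points[1]["y"] * height)),
            "class_name": "face",
        })
```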
Now we just run this. As you can see, it can take a bit of time, so I'm going to pause the video again here. OK, the dataset is now downloaded and converted into everything we need, and I can show you the faces folder. This might take a little bit of time, but here we have the images, and here's the first image we had, right on the local disk of the Google Colab machine. That's great, but we have the dataset only as a list, so I'm going to convert it into a DataFrame and show you the final result. As you can see, for face_0, the first image, we have the two annotations for the faces, with different coordinates for x_min, y_min, x_max, and y_max. This should give you a rough idea of what the final result is.

Next, let's look at the unique files we have: what is the size of our dataset, how many images do we have, and what is the total number of rows? We have only 410 images, which is fewer than the 500 we were told about, and actually a bit more than 1.1k annotated faces, so that's quite all right.

Let's have a look at a simple annotated image. To do that, I'm going to copy and paste some code in the name of saving time; I'm not going to walk through this source code, you can do that on your own. Let me just start by selecting an image: I take the first image's annotations, annotate it, disable the resizing, use pyplot to show it, and turn the axis off. This is the first image again, and as you can see, we have the faces annotated at what look like the right places, so the dataset should be all right. Let me show you the next one: we have another face here, this looks good. Let's do another one; this is pretty interesting, I mean, look at how the horse is looking at those guys. One final one: here you can see that a lot of annotations are actually missing, for example around this guy's face, and there are a lot of faces here that weren't annotated. This is basically the real world: sometimes the annotations are missing, and your model will probably try to do its best, but given that your data is not as high quality as it could be, you can't really expect your model to do a perfect job. So when you prepare your own datasets, be aware that you probably want to put the effort into high-quality annotations.
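The annotation helper I paste isn't shown line by line in the video, but a minimal equivalent with OpenCV and matplotlib might look like this, assuming the flattened DataFrame from above is called df:

```python
import cv2
import matplotlib.pyplot as plt

def annotate_image(file_name, df):
    # Draw every annotated face bounding box for one image.
    img = cv2.cvtColor(cv2.imread(f"faces/{file_name}"), cv2.COLOR_BGR2RGB)
    for _, row in df[df.file_name == file_name].iterrows():
        cv2.rectangle(
            img,
            (int(row.x_min), int(row.y_min)),
            (int(row.x_max), int(row.y_max)),
            (0, 255, 0),  # green boxes
            2,
        )
    return img

plt.imshow(annotate_image("face_0.jpeg", df))
plt.axis("off")
plt.show()
```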
OK, so the next thing I'm going to do is start preparing the data for Detectron2; let me mark this section here: face detection with Detectron2. We've had a look at the annotations and the images, and the next thing we're going to do is create a training dataset and a test dataset. I'm going to split the data, but the split itself won't use train_test_split from scikit-learn. Why? Because we can have multiple rows annotated for a single image, and if we split the raw rows, we might end up with rows for the same image in both the training and the test set, so we're going to avoid that.

Let me start by getting the unique files. Then I build the train files using numpy's random choice: I take some random files, and the number of random files will be 95% of the data, so I take the length of all files and multiply it by 0.95, and I don't want to sample with replacement. This should get us the train files; let me check them and have a look at the first few. Yes, we have a set of file names. Next I take the training rows from the whole dataset by checking that file_name is in the train files, and the test rows are the rows whose files are not in the train files, which I express with the tilde operator; this is basically a negation. Let's have a look at train_df's shape and test_df's shape, if I can type properly. Hmm, that's a bit strange; sorry, I had an error here, I was sampling with replacement, which I don't actually want. Yep, that's more like it: we have a lot of examples for training and only 54 annotations for testing, so we can continue with this.
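A sketch of that file-level split, again assuming the flattened annotations live in df:

```python
import numpy as np

np.random.seed(42)  # the notebook seeds numpy earlier; exact seed assumed

unique_files = df.file_name.unique()

# Sample 95% of the *files* (not rows) for training, without replacement,
# so no image contributes annotation rows to both splits.
train_files = set(
    np.random.choice(unique_files, int(len(unique_files) * 0.95), replace=False)
)

train_df = df[df.file_name.isin(train_files)]
test_df = df[~df.file_name.isin(train_files)]  # tilde negates the boolean mask

print(train_df.shape, test_df.shape)
```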
Then I take the classes. We only have one, but this is generic code that you can reuse when you have multiple classes, so you could train this with multiple classes. Next I define a constant called IMAGES_PATH, set to faces, because that's the folder we created.

The next thing I'm going to do is create the dictionaries required by the Detectron2 framework. This will be a function that takes a DataFrame and converts it into a format suitable for Detectron2, and it's the final pre-processing step, so let's go ahead and do it. I define a function called create_dataset_dicts and pass in a DataFrame and the classes we have. I iterate over all unique files in the DataFrame, and for each file I create a record. Each record will contain the coordinates of the bounding boxes, some sort of segmentation (which I'll explain in a bit), and the category ID, which is basically the class ID of the example. Let's start by filtering on the file name of the image: let me show you this again, we take all the rows that contain the current file name (for the first image, for example, those two rows), and that is what's contained in image_df here. I create the file path, which is the images path plus the file name, and start filling up the record dictionary: the file name is the file path, then the image ID; Detectron2 requires that each image has a unique ID, and in our case this is just the index of the unique file. We also have to record the height and width of the image, and I take those from the first row, recalling that every row for an image has the same width and height.

Then, for each record, I specify the actual annotations: each image gets a single record, and the annotations in that record are the actual face bounding boxes. I start by creating a list for those, and I iterate over all the rows for the image; each row is one annotation, and I take x_min, y_min, x_max, y_max from it. I also create a polygon as the segmentation part, which matches the bounding box of the annotation at 100%. For this polygon we specify the four points that define it, which are basically the top left, top right, bottom right, and bottom left corners, and I use itertools to flatten the points of this polygon into a single chain. Finally I create an object that represents a single annotation: I specify the bbox as x_min, y_min, x_max, y_max, and the bbox mode, which is going to be XYXY absolute coordinates; of course, you can use another bounding box mode, but then you have to provide the coordinates according to that mode. The segmentation part is just the polygon, the category_id is the index of the class, and iscrowd is going to be 0; I believe this parameter is used when annotating large crowds of possible objects, and we're not doing anything like that here. I append the annotation to the objects list, and after the loop over the rows is complete, I attach the annotations to the record and, finally, append the record to dataset_dicts. At the end the function returns the dataset. That should be it, except that I missed a comma here; yeah, this should be it. It's a really large chunk of code, but this is the format we have to use for Detectron2 datasets.

The next thing we're going to do is register the datasets we have. Detectron2 has a catalog of datasets and metadata, and here I'm going to register two new datasets, one for training and one for validation. To do that, I iterate over a simple array containing the train and validation names, and I use DatasetCatalog.register to register faces_train and faces_val; the lambda for each dataset calls create_dataset_dicts with train_df if we're registering the train set, otherwise test_df, and with the classes. With the DatasetCatalog registration complete, we move to the MetadataCatalog: for faces_train and faces_val I set thing_classes, which says what classes we're looking for, and that's again just face. So we've registered the datasets and the metadata about the classes, and then faces_metadata is MetadataCatalog.get("faces_train"). Let me run this; it should be working now. So we registered the datasets and the metadata catalog, and we have the metadata here as a variable.
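Putting the conversion function and the registration together, a sketch might look like this (note the lambda's d=d default argument, which makes each registered dataset capture its own name rather than the loop variable):

```python
import itertools

from detectron2.structures import BoxMode
from detectron2.data import DatasetCatalog, MetadataCatalog

IMAGES_PATH = "faces"
classes = df.class_name.unique().tolist()  # just ["face"] here

def create_dataset_dicts(df, classes):
    dataset_dicts = []
    for image_id, img_name in enumerate(df.file_name.unique()):
        image_df = df[df.file_name == img_name]

        record = {
            "file_name": f"{IMAGES_PATH}/{img_name}",
            "image_id": image_id,  # Detectron2 needs a unique ID per image
            "height": int(image_df.iloc[0].height),
            "width": int(image_df.iloc[0].width),
        }

        objs = []
        for _, row in image_df.iterrows():
            xmin, ymin = int(row.x_min), int(row.y_min)
            xmax, ymax = int(row.x_max), int(row.y_max)

            # A rectangular "segmentation" polygon that exactly matches the
            # bounding box: four corners, flattened to [x1, y1, x2, y2, ...].
            poly = [(xmin, ymin), (xmax, ymin), (xmax, ymax), (xmin, ymax)]
            poly = list(itertools.chain.from_iterable(poly))

            objs.append({
                "bbox": [xmin, ymin, xmax, ymax],
                "bbox_mode": BoxMode.XYXY_ABS,
                "segmentation": [poly],
                "category_id": classes.index(row.class_name),
                "iscrowd": 0,
            })

        record["annotations"] = objs
        dataset_dicts.append(record)
    return dataset_dicts

# Register a training and a validation dataset, plus their metadata.
for d in ["train", "val"]:
    DatasetCatalog.register(
        f"faces_{d}",
        lambda d=d: create_dataset_dicts(
            train_df if d == "train" else test_df, classes
        ),
    )
    MetadataCatalog.get(f"faces_{d}").set(thing_classes=classes)

faces_metadata = MetadataCatalog.get("faces_train")
```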
The next thing I'm going to do is copy and paste our custom trainer, which will basically let us evaluate the face detection model on the validation set while we are training it. I'm not going to go into much detail here; there are more details on that in the blog post I'll link, as well as in the source code.

Now let's finally look at some Detectron2 code. We start by configuring our model, and this will be much different from what you might be used to when working with, let's say, plain PyTorch. First of all, we just create a config object, and then we have a look at the models from the model zoo. I'm going to look at the Mask R-CNN models, this one from here: as you can see, it takes a lot of time to train and requires a lot of memory, but its box average precision is pretty high, and the segmentation mask average precision is high too. There are a lot of other models you can choose from, but this one seems to perform really well, and it's available. I'm going to paste the config paths that I've already prepared, so I don't make a typo: this is instance segmentation, Mask R-CNN, the big model we talked about. As you can see, I'm getting the config file from the model zoo, and I'm getting the checkpoint URL from the model zoo as well: first the config file, then the weights, so this is just great. Then I specify the datasets we're going to use for training and testing; for testing I use faces_val, the validation set, and I specify the number of workers, which are basically the threads that load the images. This should be fine too.

The next thing I'll show you is how to configure the solver, the optimizer. The solver config contains the regular stuff, like the learning rate, the maximum number of iterations, and the batch size, but it's a bit more configurable than, let's say, vanilla PyTorch with Adam or something like that, so let me show you how to configure it. Let's start with the batch size: I specify 4 here. You might try to increase this a bit when you have more GPU memory, but it may be advisable to keep a relatively small batch size. The next thing I specify is the base learning rate, SOLVER.BASE_LR, which is going to be 0.001; this is a pretty standard initial learning rate. The next thing that I found a bit interesting is the warmup iterations, for which I specify 1k. Let me also specify the max iterations and the steps, and then walk you through it: we have a maximum of 1.5k iterations, and for the first 1k of those we warm up our solver, so the solver basically goes from a learning rate of 0 and progresses to the base learning rate. At step 1,000, just as the warmup ends, the solver reduces the learning rate, dividing it by a configured factor, so we get a much smaller learning rate in the 1k to 1.5k range. Let's execute this.

Now we configure the model itself. This part is specific to the model you've chosen, but here you can specify the batch size per image and, this is really important, the number of classes that the model should classify, ROI_HEADS.NUM_CLASSES, which is the number of classes we have.
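As a sketch, the configuration described so far might look like this; the exact YAML name and the solver values are my reading of what's used in the video:

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg

cfg = get_cfg()

# Mask R-CNN with a ResNet-101 FPN backbone from the Detectron2 model zoo.
model_yaml = "COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x.yaml"
cfg.merge_from_file(model_zoo.get_config_file(model_yaml))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(model_yaml)

cfg.DATASETS.TRAIN = ("faces_train",)
cfg.DATASETS.TEST = ("faces_val",)
cfg.DATALOADER.NUM_WORKERS = 4  # data-loading threads

# Solver: warm up from LR 0 to BASE_LR over the first 1,000 iterations,
# then decay the LR at the iteration(s) listed in SOLVER.STEPS.
cfg.SOLVER.IMS_PER_BATCH = 4
cfg.SOLVER.BASE_LR = 0.001
cfg.SOLVER.WARMUP_ITERS = 1000
cfg.SOLVER.MAX_ITER = 1500
cfg.SOLVER.STEPS = (1000,)  # assumed decay point within MAX_ITER

cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 64
cfg.MODEL.ROI_HEADS.NUM_CLASSES = len(classes)
```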
The final config setting is how often, after how many steps, we evaluate the performance of our model. If you go to the documentation, to the config references page, you can see pretty much every possible config setting you can set, and there are really a lot of them. If you look, for example, at MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE, it's described as the total number of regions of interest per training minibatch: the default value is 512, and it's multiplied by the images per batch, so in our case we have 64 per image, multiplied by 4 images, giving 256 regions of interest per training minibatch. I didn't know that; it's a pretty interesting setting.

So we have the configuration pretty much done. I specify an output directory, which is "output" by default, and I create a trainer that accepts the config. I call resume_or_load on the trainer with resume=False, and then I call trainer.train(), which should hopefully start the process of training on our dataset. This might take a lot of time, but the GPU on the Google Colab instance is really fast. As you can see, it has already loaded the images; we have only 388 training images, with this many annotations. The first thing it does is download the model, if you haven't downloaded it already, and another thing it does is apply some transformations: random flipping, some resizing... oh, we have an error, so I made a mistake somewhere; let me pause. There were a couple of errors here: I forgot to replace this with width, and here I referenced the wrong DataFrame, which is strange, so I'll have to reset the runtime and redo the imports. I actually went ahead and saved the annotations to a local CSV file, so I don't have to redo all the pre-processing and wait for it. So let's run all of this and start the training process. Hopefully this works out: "starting training from iteration 0", so yep, this starts the whole thing; let's wait and see if it's going to work out.

In the meantime, I'll tell you that I already went ahead and trained a model for about two hours on Google Colab and saved the trained model. We can now download this trained model from Google Drive, and, as you can see, training is actually even faster with the P100. I'm going to go ahead and interrupt the runtime, download the trained model from Drive, and store it as output/model_final; this will be just a checkpoint. I'll have to wait for this, and as you can see, it's a pretty large model, but that's okay. So this is the model, and the next thing I do is change the model weights config to point to the trained checkpoint, which is what will get loaded. Then I specify the threshold at which the model makes a detection: we want the model to be about 85% certain of a prediction, and only then will it report it. Finally I instantiate a DefaultPredictor using the config; this does the loading, and we have the predictor.
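A sketch of the training and predictor setup. The EVAL_PERIOD value is an assumption (it isn't legible on screen here), and DefaultTrainer stands in for the copy-pasted custom trainer, which is a DefaultTrainer subclass that adds a COCOEvaluator for in-training evaluation:

```python
import os

from detectron2.engine import DefaultTrainer, DefaultPredictor

cfg.TEST.EVAL_PERIOD = 500  # assumed: evaluate on faces_val every 500 iters
cfg.OUTPUT_DIR = "output"
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)

# The video uses a custom CocoTrainer subclass; the stock DefaultTrainer
# works the same way, minus the evaluation during training.
trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()

# After training (or after downloading the checkpoint from Drive),
# point the config at the final weights and build a predictor.
cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.85  # keep only confident detections
predictor = DefaultPredictor(cfg)
```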
I'm going to basically skip ahead and show you the result of the evaluation: this runs the model on the faces_val validation dataset, does inference on top of it, and spills out the results here. So how do we evaluate object detection models? As you may already know, the most important metric here is IoU; let me show you the formula. This is Intersection over Union: the area of overlap between the predicted box and the ground truth box, divided by the area of their union. If you don't know what I'm talking about, there is a really great article on Medium by the one and only Jonathan Hui, and here's a very good example of what IoU is. We have the ground truth, the annotation made by a human (probably), and the predicted bounding box. The area of overlap is this region right here, and the union is the area covered by either one, so a much larger area. Dividing them gives a number between 0 and 1: the greater the overlap relative to the combined area of the predicted and ground truth boxes, the closer the number is to 1. You can set a threshold on the IoU, compute precision and recall based on it, and from that get some pretty good average precision metrics and all of that. This is just a whirlwind tour of how object detection models are evaluated; go ahead and read that great article to get a better understanding.

Now let's do something to show us the results of the model. I'm going to take all of the test images, and for each image I take the file path, read the image using OpenCV, get the outputs of the predictor (our model), and do the visualization using the Visualizer class. In here I do some color conversion, set the metadata (we have faces_metadata), specify the scale (we want a full-scale image), and set the instance mode to the image color mode, so we just get a nice image with the original colors. Then I take the instances from the predictions, put them on the CPU, and remove the segmentation masks: recall that our model is also an image segmentation model, but we're using it only for object detection. Then I draw the actual predictions on top of the image, get the resulting image, do the same color conversion again, get the file name of the image, and use OpenCV to write the result into the annotated_results folder. Let's run this; it's running our model on the test data that we have.
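A sketch of that inference loop, assuming the predictor, test_df, and faces_metadata from earlier:

```python
import os

import cv2
from detectron2.utils.visualizer import ColorMode, Visualizer

os.makedirs("annotated_results", exist_ok=True)

for file_name in test_df.file_name.unique():
    img = cv2.imread(f"faces/{file_name}")

    outputs = predictor(img)  # DefaultPredictor expects a BGR image

    v = Visualizer(
        img[:, :, ::-1],  # BGR -> RGB for drawing
        metadata=faces_metadata,
        scale=1.0,
        instance_mode=ColorMode.IMAGE,
    )

    instances = outputs["instances"].to("cpu")
    instances.remove("pred_masks")  # drop segmentation masks, keep the boxes

    result = v.draw_instance_predictions(instances)
    cv2.imwrite(
        f"annotated_results/{file_name}",
        result.get_image()[:, :, ::-1],  # back to BGR for OpenCV
    )
```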
This is now complete, and I can open an image, and hopefully this should be all right. Yeah, this is the image from the beginning of the tutorial, and as you can see, it detected the face. In this next one a lot of faces were detected, but there are actually two errors: we have a face-within-a-face detection, which might be mitigated by choosing a different threshold (though picking a threshold for face detection is a really interesting problem in itself), so the model over-detected here. In this one we have two missing faces and another over-detection; again, some of the faces were detected and some are missing. This one is probably correct; yeah, this works pretty well actually. This one it did quite well too, and here again, very good. Recall that we're using just under 400 images to train this model, and this is actually very good, considering we fine-tuned a model that was pre-trained on ImageNet.

Just for the fun of it, let's take another image that isn't anywhere in this dataset; let's search for a person, okay, let's take this girl. Let me remove the annotated results and upload the image of the girl. This might take some time because the image is 10 megabytes, so I'll pause the video. OK, the image is in Google Colab, and I'll rename it to girl.jpg or whatever (I made a typo there). Now I take the inference code from above, paste it, specify the file path of the girl image, run it, and open the annotated results. Let's actually download the resulting image, so it will be a bit easier to look at, and again I'll pause the video. OK, the image is downloaded, and this is the result. So yeah, our face detector is actually quite good, and keep in mind that this image is really large; these are its dimensions. Our model was able to take this really large image and detect the face, I would say, very well.

So this is pretty much it for the tutorial on face detection. Again, you can use a model like this as a first step when you want to do something like face recognition, understanding human expressions, reading smiles or emotions from a person, or for security reasons, however you want to use it. There are a lot of pre-trained face detectors out there, but you can use Detectron2 or a similar framework to build your own custom dataset and train your own model. I'll link the complete source code and the complete tutorial down in the description. Please like, share, and subscribe, and I'll see you in the next one. Bye bye!
Info
Channel: Venelin Valkov
Views: 31,657
Keywords: Machine Learning, Artificial Intelligence, Data Science, Python, Face Detection, Object Detection, Face Recognition, PyTorch, Deep Learning, Computer Vision, Detectron, Detectron2, Jupyter, Google Colab, Tutorial
Id: 8eLHZ0R5nHQ
Length: 67min 20sec (4040 seconds)
Published: Sat Feb 15 2020