Train, Validate, Predict, Export & Benchmark Ultralytics YOLO Models | Episode 12

Video Statistics and Information

Captions
Hey guys, welcome to a new video. In this video we're going to take a look at the Modes tab in the Ultralytics documentation, so we're basically going to go over all the different modes that they have there. We also have videos covering some of the modes, so let's jump straight into the Modes tab. In the documentation we can see that we have a train, a validation, and a predict mode, and then we also have an export mode for when we actually want to do inference and deploy our models. We can also do tracking directly with Ultralytics, and then they also have this benchmark mode, which we're going to cover in this video as well. I have videos about some of these already, but these modes are very important and also very useful when you're playing around with Ultralytics and training your own custom object detection model, or if you want to use them for segmentation and so on. First of all, you can read an introduction to what all the different modes do. In this video we're going to take a look at both the validation mode and the benchmark mode, because all the other modes are already covered in the documentation and on the Ultralytics YouTube channel.

First, let's jump straight into the validation tab. We can see when we want to validate our model and also how we can do it. This is specifically used for tuning our hyperparameters if we want to optimize and make our model even better. You can see why you'd validate with Ultralytics: precision, convenience, flexibility, and the hyperparameter tuning I just talked about. We get automated settings, multi-metric support, both the CLI and the Python API, and broad data compatibility. There are also usage examples that you can basically copy-paste directly into your own Python script. They take different arguments: image size, batch size, whether to save the results as a JSON file, whether to use half precision, the device to run on (CPU or GPU), the intersection-over-union threshold, and so on.

Now let's take a look at the benchmark tab. After we've trained and validated our model, we can run this benchmarking test to evaluate the model's performance in various real-world scenarios. It's basically there for informed decisions, resource allocation, optimization of the models, and cost efficiency. We can benchmark the different formats that we export our model into, but it could also just be for testing it out on your own hardware, or checking the model's performance on different datasets and so on. Here we can see some usage examples as well, the supported export formats, and the key metrics of the benchmark mode. We're going to run the benchmark on all supported export formats, including ONNX and TensorRT, and we can also see all the arguments we can specify, for example whether to use INT8 quantization or FP16 quantization. It would actually be pretty cool to see, with INT8 quantization, whether we lose any accuracy, and to compare the models when we export into different formats. So let's copy-paste this example, jump into a Python script, paste it in, open a new terminal, and run the program. First of all, when we take a look at the benchmark results, we can see the PyTorch model and its inference speed: 68 milliseconds per inference.
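For reference, a minimal sketch of what the copy-pasted script looks like, combining the validation and benchmark examples from the Ultralytics docs that are discussed above. The model name, dataset, and argument values are illustrative choices, not the exact values used in the video.

```python
from ultralytics import YOLO
from ultralytics.utils.benchmarks import benchmark

# Validation mode: evaluate a trained model and report metrics such as mAP.
model = YOLO("yolov8n.pt")
metrics = model.val(
    data="coco8.yaml",   # dataset config
    imgsz=640,           # image size
    batch=16,            # batch size
    save_json=True,      # save results to a JSON file
    half=True,           # FP16 half precision
    device=0,            # GPU 0 (use "cpu" for CPU)
    iou=0.6,             # IoU threshold for NMS
)
print(metrics.box.map)   # mAP50-95

# Benchmark mode: export to all supported formats (TorchScript, ONNX,
# OpenVINO, TensorRT, ...) and compare accuracy and inference speed.
benchmark(model="yolov8n.pt", data="coco8.yaml", imgsz=640, half=False, device=0)
```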
When we're doing our validation it's 35 milliseconds; we're running this on an RTX 3070. If we scroll a bit further down, we can see that the model was also exported to TorchScript, and when we do inference now we get down to 4 milliseconds per inference, which is significantly faster than the PyTorch model. Let's go down and see if we have some more. ONNX also exported successfully, and for ONNX we detect the exact same objects, but ONNX is running at 21 milliseconds. We can also optimize for running on the CPU; right now we're doing it on the GPU, so let's go up and try that by setting the device to CPU. We might also want to go into the documentation again and add the INT8 parameter, so int8 should be equal to true. Let's go and use a quantized model and run it.

If we scroll up to the top again, we can see that the PyTorch model is now running at around 70 milliseconds per inference, and this should be on the CPU. Yes, we're now running on the CPU; I have a 13th-generation i9 CPU. We can see that TorchScript is actually slower when we run it on the CPU, at 115 milliseconds per inference. Let's go down and see the results for ONNX. ONNX is faster, at 84 milliseconds per inference, and OpenVINO gives us 23 milliseconds per inference on the CPU. So running on the CPU, optimized for Intel hardware, we can get roughly the same inference speed as some of the exported formats running on the GPU. This shows how important it is to actually run these benchmarks on your models once you have trained them, especially if you have custom hardware or want to run on the edge. Again, you just have to run this single function; the different arguments are in the documentation. Test it out, look at the output directly, compare the models, and then you're ready to go: you can deploy your models into production.

Thank you guys for watching this video. I hope you have learned a ton about this benchmark feature and all the mode tabs in the Ultralytics documentation; the framework is really cool, so definitely go check it out, play around with the code snippets they have, and use them in your own applications and projects. Hope to see you guys in the next video. Stay tuned, bye for now.
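A hedged sketch of the CPU and INT8 variation tried in the video. The quoted timings depend entirely on the hardware used (RTX 3070 GPU, 13th-generation i9 CPU), so your numbers will differ; the `int8` argument is assumed here based on the benchmark arguments mentioned in the docs.

```python
from ultralytics.utils.benchmarks import benchmark

# Re-run the same benchmark on the CPU with INT8 quantization enabled,
# so OpenVINO and ONNX CPU results can be compared against the GPU run.
benchmark(model="yolov8n.pt", data="coco8.yaml", imgsz=640, int8=True, device="cpu")
```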
Info
Channel: Ultralytics
Views: 17,723
Keywords: Ultralytics, Object Detection, YOLOv8, Computer Vision, Artificial intelligence
Id: j8uQc0qB91s
Length: 5min 23sec (323 seconds)
Published: Thu Oct 26 2023