Object Detection Using the TensorFlow Lite C API on Windows

Captions
Hello everybody. We continue our series about cross-platform object detection using TensorFlow Lite. In the previous video we converted an object detection model from TensorFlow to TensorFlow Lite, so we have a model. In this video we are going to develop the object detector in C++ and test it using a Windows application. As I mentioned in the previous video, for each platform we need the TensorFlow Lite libraries, so we are going to obtain those for Windows as well.

Let's start by setting up the environment. I created a repository I called tflite-cross-platform, inside which I already placed the model that we converted in the previous video. I also put the labels file there, which is basically the list of detection categories: when the detector detects an object from category 0 it's a person, category 1 is a bicycle, and so on. Of course, in a real project you don't want to check binaries into the repository; this is just for our example. I also created an object-detection folder in which we will have our object detector, a header file and the implementation; currently these files are empty. There is also a WinTestApp project, a C++ project which is also empty at the moment. And I have a test image that we are going to test against. I took this image from Unsplash, so thanks to Jack Carter for the image.

Next we need the TensorFlow Lite libraries for Windows. We can build these libraries ourselves using Bazel, the build system for TensorFlow, or we can just download pre-built binaries from a repository that I try to maintain, called tflite-dist. This repository contains the header files and the libraries for TensorFlow Lite, both in C and C++, for Windows, Android and iOS. So we'll go to Releases and download the latest release, which is
TensorFlow 2.4.1; I'll just download this file. OK, the file finished downloading; let's see what we have inside. We have this tflite-dist folder, and inside we have the include folders for TensorFlow: the headers for C++ and the header for C. These are all the TensorFlow headers, not only the headers you need for running inference; I included everything because I didn't know exactly which headers are needed, so I just put them all in. Inside libs we have the libraries: for Android we have the C++ library, the C library and the GPU delegate, for each CPU architecture; for iOS we have the TensorFlowLiteC framework; and for Windows we have the TensorFlow Lite C DLL. For Windows we don't have the C++ DLL, because that DLL is really big and not optimized, so we have only the C DLL. Let's take the archive and copy it to some folder; under C:\tools I'm going to extract it. OK, it finished extracting; it took some time because there are a lot of header files, most of which are not needed, but they are there. So we have the tflite-dist distribution.

The next thing we will need is OpenCV. The reason I'm using OpenCV is that our model expects images of a specific size, 640x640, and OpenCV gives me an easy way to resize images. It is also cross-platform; we have OpenCV for Windows, iOS and Android, so it makes it really easy to prepare the data for the model and write that code only once instead of doing the resize on each platform. Under Library Releases we download OpenCV for Windows; we will use the latest version. The download is an executable, I believe a self-extracting archive, so let's run it from Downloads and extract it to C:\tools as well. OK, it finished
extracting; let's see what we got. We have OpenCV for Android, which we are going to use when we build the Android application, and for Windows we have opencv\build\x64\vc15\lib, where we have the libraries we are going to link against, and under bin we have the DLL which we will need at runtime.

So we have OpenCV and we have the TensorFlow Lite distribution. In order to reference them from our C++ project, let's create environment variables that point to these folders. Going to the environment variables dialog, you can see that I already have a variable for OpenCV Android, and I already have one for TensorFlow Lite; let's rename it to TFLITE_DIST. Then let's add a new one for OpenCV, which we'll call OPENCV_DIST, pointing to C:\tools\opencv. One more thing: we need to edit PATH and add the bin folder of OpenCV, because, as I mentioned, at runtime we will need the opencv_world DLL available to us, so I'll just put this folder in PATH. That's it.

Now let's set up our test application project. I already created a C++ console application here, and it is currently empty: here is the main file, where we have our int main, and the object detector header file and implementation file, both empty. In order to use OpenCV and TensorFlow Lite we need to set up the project to reference the header files and the libraries. Before we do that, please make sure to switch the build configuration here to 64-bit and Release; we are going to work directly on Release. If you want to debug you can set it to Debug, but we are going to configure the project for Release, 64-bit. We go to the project properties, and under C/C++ > General we
have Additional Include Directories. We add the OpenCV header files, which are under $(OPENCV_DIST)\build\include (if I remember correctly; we'll check later), and for TensorFlow it's $(TFLITE_DIST)\include. First we check at the bottom that the environment variables are evaluated correctly, and here I see that one is not, because I forgot a character. Now we have C:\tools\opencv\build\include; let's verify, and yes, this is the root of the OpenCV header files. And $(TFLITE_DIST)\include is the root of the TensorFlow Lite header files as well. That covers the include files.

Next we need the libraries for link time. Under Linker > General we have Additional Library Directories, where we add the directories the libraries are in: for OpenCV it's $(OPENCV_DIST)\build\x64\vc15\lib, and for tflite-dist it's $(TFLITE_DIST)\libs\windows_x86_64. Let's verify both paths; yes, they are correct. The library we'll need is opencv_world451. There is the release library and the debug library, which ends with a d; I'm going to use the release library, so I'll copy its name. After we specify the library directories, we need to specify the libraries themselves, so under Additional Dependencies we add the opencv_world library and the TensorFlow Lite library, which is this one.

Oh no, I see we did this configuration for Debug, so let's do it for Release. "Do you want to save the changes?" Yes... OK, now it deleted everything that we've done, great. So I did it all again for the active Release x64 configuration. That's it for the project setup.

Now let's see if it's working by trying to load our test image.
We'll include OpenCV... OK, we have TensorFlow but we don't have OpenCV, so let's check the properties again: C/C++ > General, edit; it should be C:\tools... remove this one... OK, and now OpenCV resolves, thank goodness. We'll include opencv2/core, and in order to display the image we need opencv2/highgui. Now let's load the image. The image we are testing is od_test.jpg, so: Mat img = imread(...) (we need using namespace cv), and we just give the path; then imshow with a window name (the name doesn't matter) and the image, then waitKey, and return 0. Let's see if our logic builds: we build the solution, and it builds. Let's run it, and if everything is good we should see our image. And here is the image, so OpenCV is set up correctly.

Now we will go ahead and implement our object detector, starting with the header file. Instead of you watching me typing endlessly, I'll just paste the code and we'll go over it. So this is the header file. We include OpenCV and the C API from TensorFlow Lite, using namespace cv, and we define a struct which is going to be our single detection result. For each box where we detect an object we return this struct, which has the label of the detected object, the score, which is the confidence of the detection, between 0 and 1, and the location of the box of the detected object in the image, given as ymin, xmin, ymax and xmax. So this is the struct we define as our result.

Then we define a class, the ObjectDetector class. In the constructor we receive the TensorFlow Lite model, and we receive it as a byte array: each platform, Windows, iOS and Android, is going to read the TensorFlow Lite model file into a byte array in its own way, because file loading is platform-specific and cannot be cross-platform,
but once the platform loads the file as a byte array, we just take it. We also need the size of this byte array, and whether the model is quantized or not. Testing a quantized model is not part of this series; quantized models are models where the weights are stored not as floats but as integers. Our detector is written to support quantized models, but I didn't test that path, so I'm not 100% sure it works.

We have a destructor, and then the main method that we expose to the application: a detect method, which takes an input image as an OpenCV matrix and returns an array of our DetectResult structs. The rest of the declarations are the private members of our class. First we have the detection model size, which is basically the input size of the image; note this is not 300, the model that we used expects a 640x640 image.
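The header pieces described so far can be sketched roughly like this. This is my reconstruction from the walkthrough, not the video's exact code, and the OpenCV and TFLite types are forward-declared here only so the outline stands alone; the real header includes the OpenCV headers and tensorflow/lite/c/c_api.h instead:

```cpp
// ObjectDetector.h (sketch). Forward declarations stand in for
// <opencv2/opencv.hpp> and tensorflow/lite/c/c_api.h.
namespace cv { class Mat; }
typedef struct TfLiteModel TfLiteModel;
typedef struct TfLiteInterpreter TfLiteInterpreter;
typedef struct TfLiteTensor TfLiteTensor;

// One detected box: category index, confidence in [0, 1], and the
// box location in input-image coordinates.
struct DetectResult {
    int label = -1;
    float score = 0;
    float ymin = 0, xmin = 0, ymax = 0, xmax = 0;
};

class ObjectDetector {
public:
    // The model arrives as a byte array; each platform reads the
    // .tflite file itself and hands us the buffer.
    ObjectDetector(const char* modelBuffer, int size, bool quantized);
    ~ObjectDetector();
    const int DETECT_NUM = 5;               // max detections returned
    DetectResult* detect(cv::Mat src);
private:
    const int DETECTION_MODEL_SIZE = 640;   // model input is 640x640
    const int DETECTION_MODEL_CNLS = 3;     // RGB
    bool m_modelQuantized = false;
    // Created once in initDetectionModel; creating the interpreter and
    // allocating tensors are expensive, so they are not per-call.
    TfLiteModel* m_model = nullptr;
    TfLiteInterpreter* m_interpreter = nullptr;
    TfLiteTensor* m_input_tensor = nullptr;
    const TfLiteTensor* m_output_locations = nullptr;
    const TfLiteTensor* m_output_classes = nullptr;
    const TfLiteTensor* m_output_scores = nullptr;
    const TfLiteTensor* m_num_detections = nullptr;
    void initDetectionModel(const char* modelBuffer, int size);
};
```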
The number of channels the model expects is 3, RGB; then whether the model is quantized or not; and then we store some TensorFlow Lite objects. The first is the model itself: we take the model buffer, the byte array of the model, and load it as a TensorFlow Lite model. Then we create an interpreter, which runs the inference for us, and store it in this variable. Then we have our input tensor, which we get from the loaded model, and also from the loaded model we get four tensors as the outputs. Creating the tensors and creating the interpreter are heavy operations, so we do them only once; we are not going to do this every time someone calls detect, which is why we store them as private variables. Finally there is a private method, initDetectionModel, which just loads the model and the interpreter and makes sure everything is correct. That's the header file of our detector class.

Now let's start implementing the class. Again, instead of you watching me typing endlessly, I'll paste the code in sections and we'll go over it. So this is our C++ file: I'm including the object detector header and the OpenCV image-processing module, so we can resize the input image. This is our constructor: from the application we get the model as a byte buffer, and whether this model is quantized or not; we just store those, and then we call our private method initDetectionModel. In initDetectionModel, the first thing we want to do is actually create a model from the buffer that we got, so we create the model by calling the TensorFlow Lite
C function TfLiteModelCreate. If I hit F12 I go to the definition in the header file, and you can see that we are in the C API of TensorFlow Lite: TfLiteModelCreate just returns a model from the provided buffer. If you want, you can also use TfLiteModelCreateFromFile; obviously that is not a cross-platform-safe method, but if you only target Windows or similar you can use it. So we take the model buffer, create a TensorFlow Lite model, and check that we got a model and that it was successful.

The next thing is to build the interpreter, and for that we need options. We create the options with the corresponding method, and then we can set different options on the interpreter; here we just make sure that we are using a single thread to run the inference. You can play with it, try multiple threads and see whether it works or not, but to be on the safe side we are going to use one thread. Of course, you can take a look at the other methods that are available for setting options when creating the interpreter. Then we just use the model and the options to actually create an interpreter, and we check that the creation was successful.

Now that we have the interpreter, we can go ahead and allocate the tensors, which is what we do here by calling TfLiteInterpreterAllocateTensors. What does this method do? Again, we can read in the header file: "update allocations for all tensors, and resize dependent tensors using the specified input tensor dimensionality". I believe this just allocates memory for the input and output tensors. There is a note that this is a relatively expensive operation, and that's why we do it only once, when we load the model, and not on every call that does the actual detection. And now our
tensors are ready, and in theory we could start to run inference and do the object detection, but we want to make sure that the model we loaded is what we actually expect. So let's examine our model once again using Netron and look at the exact input and output format. I'm loading the model here and tapping on the hamburger menu, and we can see that we have a single input of float32, and this is its size: a single input of 640 by 640 pixels with three channels. In the output we have four tensors, which are also float32. So this is the model we expect to load, and this is what we verify in the code.

The first thing is TfLiteInterpreterGetInputTensorCount: we are asking the interpreter how many input tensors it sees in the model, and if it's not one, as we expect, we print an error. Then we get a reference to the input tensor, with TfLiteInterpreterGetInputTensor at index 0, and we check the type of this tensor. If this is a quantized model (again, we are not exercising quantized models here, but suppose it were), we expect the type of the tensor to be unsigned 8-bit integer; on the other hand, if it's not quantized, we expect the type to be float32, which is exactly what we saw when we inspected the model in Netron. So here we verify that our input tensor is correct. Then we check the size of the input tensor: again, in Netron we saw that it is 1 x 640 x 640 x 3, and this is what we check here, the dimensions of the input tensor at each position, which we expect to be 1 and then the detection model size, which we set in the header file to be 640.
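Put together, the input-tensor checks described here look something like the sketch below. The function names are from the TFLite C API; the wrapper function and the error messages are my simplification, not the video's exact code:

```cpp
#include <cstdio>
#include <tensorflow/lite/c/c_api.h>

// Verify the loaded model's input matches what we expect:
// one input tensor of shape [1, 640, 640, 3], uint8 if quantized,
// float32 otherwise.
bool checkInputTensor(TfLiteInterpreter* interpreter, bool quantized) {
    if (TfLiteInterpreterGetInputTensorCount(interpreter) != 1) {
        std::printf("Detection model graph needs to have 1 input!\n");
        return false;
    }

    TfLiteTensor* input = TfLiteInterpreterGetInputTensor(interpreter, 0);
    TfLiteType expected = quantized ? kTfLiteUInt8 : kTfLiteFloat32;
    if (TfLiteTensorType(input) != expected) {
        std::printf("Unexpected input tensor type!\n");
        return false;
    }

    const int kSize = 640, kChannels = 3;
    if (TfLiteTensorNumDims(input) != 4 ||
        TfLiteTensorDim(input, 0) != 1 ||
        TfLiteTensorDim(input, 1) != kSize ||
        TfLiteTensorDim(input, 2) != kSize ||
        TfLiteTensorDim(input, 3) != kChannels) {
        std::printf("Unexpected input tensor dimensions!\n");
        return false;
    }
    return true;
}
```

For a grayscale model the last dimension check would compare against 1 instead of 3.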
So the second and third dimensions are expected to be 640, and the last dimension we expect to be 3, as defined in the header. If, say, you had a model that expects a grayscale image, the last dimension would probably be 1 and not 3. So that is checking the dimensionality of the input tensor.

Next we check the output tensors. Again, we ask for the output tensor count, how many output tensors there are in the model, and if it's not four, that is not what we expect. Then we just get references to these output tensors. We know that the tensors are, in order, the output locations, the output classes, the output scores, and the number of detections in the last tensor. How do we know this? To refresh your memory: in the previous video, when we converted the model, we used the export_tflite_graph script, and its description says the model is going to have one input and four outputs: detection boxes (which I just call locations here), detection classes, detection scores, and number of boxes (which, again, I call number of detections). That's how we know it. And that's it: we created our interpreter and verified that everything in the model seems to be what we expect to find.

Now we can write the destructor for this class. In the destructor we check if the model is not null, and call TfLiteModelDelete, which deletes and frees the memory of the model; this will also free all the tensors and so on.

Next we have our main method, the one that actually performs the detection. This is its definition: we return a DetectResult array and we get as input an image, an OpenCV Mat. The first thing we do is
define some default results, and in case the model is not initialized we just return those defaults; we will not do anything. The next thing is we need to resize the image to the input size of the model, which should be 640x640, so we call OpenCV's resize method, which takes an input image, an output image, and then the target size that we want, 640x640.

The next thing we want to do is make sure that our image has three channels, because this is what our model expects to receive, so we check and do the needed conversions. We get the image type by calling type() on the OpenCV image; note that it's not exactly the number of channels, it is the image type. If the image type is CV_8UC1, an 8-bit unsigned single-channel image, this is grayscale, so we call OpenCV's cvtColor method, which converts between image formats, to change it from gray to RGB. And here is an important thing to note if you're not familiar with OpenCV: OpenCV stores color not in RGB order but in BGR order. This is what we handle next: if this is a three-channel image, we convert from BGR, the default OpenCV format, to RGB, the format our model expects, and if this is a four-channel image, we convert from BGRA to three-channel RGB. So now we know that we have the correct format for the image.

The next thing is to copy the image into the input tensor, and here we need to check whether our model is quantized or not. Again, we are not handling quantized models here, but just in case you want to try a
quantized model (I didn't test it), in theory what we do is get a reference to the input tensor in its unsigned 8-bit integer representation. This is an already-allocated buffer, so we don't need to allocate it, and we just copy the image data, which is simply the byte array of RGB bytes of the image, into the destination, the input tensor's memory chunk. So we just copy the memory, and that's it.

If this is not a quantized model, we need to convert our RGB bytes from unsigned byte to float32, which is what we do here, and we also need to normalize the image so each pixel is between -1 and 1. How do we know that? Again, going back to the script we used to convert the model from TensorFlow to TensorFlow Lite: when describing the input, it says it contains the normalized input image, and also notes "see the preprocess function defined in the feature extractor class in the object detection models directory". So that's what I did: under object detection models there is the feature extractor relevant to our model, the MobileNet V2 FPN one, and its preprocess method says it maps pixel values to the range -1 to 1. This is how they do it: we take each pixel value, which is between 0 and 255, divide by 255, multiply by 2, and subtract 1. So if a pixel has the value 0, it becomes 0 minus 1, which is -1, and if a pixel has the value 255, it's 255 divided by 255, which is 1, multiplied by 2, which is 2, minus 1, which is 1. So this is how we normalize the input to the range -1 to 1.
Now let's see how we do this using OpenCV. First we need to convert the image from unsigned byte to float, so we define an image which is going to be our float image (that's why I added the f suffix), and there is a method in OpenCV, convertTo: the first parameter is the destination image, and the second parameter is the type this image is going to be, a 32-bit float with three channels. In this method we can already do scaling: there is alpha, which, as the documentation says, is an optional scale factor, so we scale each pixel by 2 divided by 255, and there is beta, an optional delta, so our beta is -1. Here we are doing exactly what the feature extractor does; this is how we map our input to what the model expects, and it is really important.

Then we copy the pixels into the input tensor: we get a reference to the input tensor in its float representation (again, this is already-allocated memory), and we just call memcpy and copy the image data into the input tensor. The size of the chunk of memory is sizeof(float) multiplied by the number of pixels, width times height, multiplied by the number of channels. So this is how we prepare the image to fit into our model. The three important things: first, resizing the input image to the input size the model expects; second, converting the image from OpenCV BGR or BGRA to RGB, which is important; and third, for a non-quantized model, normalizing the input to be between -1 and 1.

Now comes the most important piece of code of this entire detector, actually doing the detection: we call TfLiteInterpreterInvoke, passing a reference to our interpreter, and
we check the result: if it's not OK, we print an error and just return our default detection results (you can decide whether you want to return the error to the caller or not). That's it; this single line runs the inference, and now we can check the results.

We have our output tensors, and we get references to the result arrays. For the output locations, the detection boxes, we get a reference to the float array that holds the detected boxes, and we do the same for the array that holds the class of each detection, the array that holds the scores, and the single number that holds the number of detections (it's a float, but we just convert it to an integer).

Now that we have the results, we can loop over them, and for each detection we build a DetectResult. This is our for loop: we start from zero and go up to the number of detections. And there is a variable I don't think we explained; it's in the header file, I think I missed it: DETECT_NUM. In the detector I wanted to limit the number of detections that I return, so here I limited it to five detections; you can limit it to one, or not limit it at all and return all the results that the model found. I decided to limit it to five, and that's the other condition in the for loop: we either run over all the found detections, or, if we reach the desired count, we stop. We start by setting the score and the label in the DetectResult struct, which is what we do here.

One more thing: let's talk a bit about the structure of these arrays. Netron doesn't show us the structure of the arrays, so let's sketch it in Notepad. This is just an array
of float numbers. For the classes and the scores we have a single number per box: for the first box, or detection, we have one float, then one for the second, the third, and so on. So the classes array is class 1, class 2, class 3, class 4, and so on, and the scores array is score 1, score 2, score 3, score 4, and so on; the size of these two arrays is the number of detections.

As for the locations, the array looks like this: the y minimum of the first box, the x minimum of the first box, the y maximum of the first box, and the x maximum of the first box, and then the same for the second box. So the number of elements we expect in the detection locations array is the number of detections multiplied by four values per box.

So we can take the score and the class just by using the index of the detection we are currently processing. Then we get the box location: ymin is at detection_locations at index i multiplied by 4, so for the first box ymin is at index 0 of the array, and when we move to the next box, i becomes 1 and ymin is at index 4 of the array. We continue to xmin, which is just the next float, then ymax, which is the next, and xmax, which is the last one. So for each detection, for each box, we take four floats from the array.

Now, the location values, the x and y coordinates, are actually normalized between 0 and 1, and in our DetectResult we want to provide them in the coordinates of the input image, so we are going to scale up each location.
For that we take the width and the height of the original image; then each y value we multiply by the height, scaling it from [0, 1] to the coordinate system of the image, and each x value we multiply by the width. And this is our detection result. Once we are done going over all the detection results, we can just return our result array, which, as we can see, was initialized with DETECT_NUM elements. That is our detect method, which concludes our ObjectDetector class.

Next we are going to run a test detection and see how, and if, it works. This is our main function: we already load the image, and I have here code for reading the model into a byte array; we just need to include fstream and use namespace std. So we read the model, and it's important to specify the flag saying that this is a binary file. The executable runs from this folder, and the model is one level up, inside the models folder, so one up and into models should be the correct path to the TensorFlow Lite model. We get the length, the size of the file, seek back to the beginning, allocate the memory for the model, and then read the file.

The next thing is to create our object detector. We include the header file of the object detector, which, relative to this folder, is one level up in the object-detection folder. Then we create the detector: ObjectDetector detector with the model buffer, the file that we just read, the size of this byte array, and false, because it's not quantized. And this is our detection result array: we just call detector.detect and
give it our input image, and that's it. Now we expect to get some results. We know we expect DETECT_NUM detections back, the number of detections we set, so we can just loop over all the detections, and for each one we are going to print the label and the score and draw the bounding box. The label is just the category index, not a text or anything (maybe the name "label" is a little deceiving), and the score is a float. Then we have the box locations: ymin, xmin, ymax and xmax, the four corners of the detection.

Now let's draw a rectangle. In OpenCV we don't have the rectangle method yet, because we need to include image processing, opencv2/imgproc; now we have rectangle. We draw on the image that we just read, and we either pass a Rect or give two points. Point 1 is a vertex of the rectangle, so we create a point with (xmin, ymin), which is the top left, and the other point is (xmax, ymax), which is the bottom right. The next parameter is the color; we'll make it red, and remember, OpenCV is BGR, so (0, 0, 255). And thickness: let's do two pixels. This should draw the rectangle.

OK, the moment of truth. We build (I hope the relative path to the model is correct)... "The code execution cannot proceed because tensorflowlite_c.dll was not found." I remembered that at the beginning of the video I put the folder where this DLL lives in PATH, so let's check: it's in C:\tools\tflite-dist\libs... ah, no: we did it for the OpenCV DLL, not for the TensorFlow Lite
DLL. So we can just take this DLL and copy it next to where our exe is. Let's try now. Okay, we got an exception here, so probably the file path to the model is not correct. Let's check that I don't have any typo: this is WinTestApp, and we go one folder up... what else? Oh, it's not "od-ssd", it is "od-model". Let's try again. Okay, it just stops, which makes sense; it would be a good idea to actually show the image, so we add cv::imshow and cv::waitKey. Third time's the charm: we got our five boxes. We have one around the orange, one around the red apple, one around the green apple, one more small one here, which is maybe this apple, and the larger one. But these are just the boxes without the labels, so we don't know what they mean. Let's print the labels now and we'll see what each box means.

In OpenCV we use the putText method on the image. Our string is going to be the label converted to a string, then a dash, and then the score, so we'll have the label as a number followed by the score. Then we need the bottom-left corner of the text; we'll just print it at the top-left corner of the box, so that's (x_min, y_min). For the font we have FONT_HERSHEY_PLAIN, the font scale is 1.5, the color will be red (BGR again), and the last parameter is the thickness, which we'll set to two pixels. Okay, let's run again, and hopefully we will get the print of the label, or actually the category id, and the score.

Okay, so we have 51, 54, 52, 52, 52. Three boxes have the same category id, so if there is an error, at least it's a consistent error. If you remember, we have the labels file here, so
we're going to take a look at the labels. The category ids here are zero-based, so let's bring up the line numbers. Okay: 51 is actually going to be line 52, which is "banana". Cool. Then let's see the three 52s; they are going to be line 53, which is "apple". Nice. And 54 is going to be line 55, which is "orange". So we got five objects and they are all correct. This is really nice, and that's it for this video.

Okay, one more thing. I'm now recording from the future: after I finished recording this video I went and started to implement the Android and iOS applications, and when I came to run the actual detection, on the line where we run the interpreter, the application just crashed. I tried again and it crashed. I tried to replace the libraries, thinking maybe I had used the wrong ones, but it kept crashing. So I went to investigate the issue, and what I found is that we get a bad-memory exception, as if we are trying to access memory that was already freed. I thought that maybe the copy of the image into the input tensor didn't work well or somehow freed the memory, but that was not it.

Then I noticed the following. In this test application we load the model: we allocate memory for it, read it from the file, give it to the object detector, and run the detection. During all this time we keep the model in memory, and that is fine in this test application. But what happens in the mobile applications? We embed the model as an asset in the application, then we read the embedded asset and hand the model over to the object detector, and after that the operating system is free to release that memory. So the memory chunk that we are
passing to the object detector is actually getting freed, and that is the crash I was seeing.

In order to fix this, we need to make sure that the model bytes stay allocated as long as the object detector object is alive, and this is what we are going to do. We declare modelBytes inside our object detector, and then, when we create the model, we simply copy the buffer that we got from the user into memory we allocate ourselves. We check what we got: if the size is zero, there is nothing to do. Otherwise we allocate memory for our modelBytes using the size that we got from the user, and then we copy the memory over. Now the OS, whether Android or iOS, can go ahead and free the buffer that the user passed in; we don't care, because we are not using it anymore. We will use the modelBytes that we keep locally, so when creating the TensorFlow Lite model, instead of the user's buffer we just pass modelBytes. With that, the model is guaranteed to stay allocated in memory as long as we are using this object.

Of course, it is now our responsibility to make sure that we free this memory in the destructor: if we have something, we first delete the TensorFlow Lite model, and then, if we still have the bytes, we free them. After I made this change, everything worked as expected.

And that's it. This concludes our second video; in the next videos we are going to develop the applications. Thank you, bye!
Info
Channel: The Coding Notebook
Views: 1,476
Rating: 5 out of 5
Keywords: tensorflow, tensorflowlite
Id: EPF0R0M4Fb0
Length: 67min 8sec (4028 seconds)
Published: Mon Mar 22 2021