So this is the project we will be working on in today's tutorial: a real-time license plate recognition project. You can see that we're detecting all the license plates and all the cars, and we're also reading the text from the license plates. We're going to use Python, YOLOv8, SORT, and AWS; we're going to build this super amazing and super complex project in AWS. Now let's get started.

So today we are going to work on a real-time license plate recognition project, and this is exactly the pipeline we will be working with today. As you can see, this is a very complex and very advanced project, and we will be using many different products and services from AWS: we're going to launch an EC2 instance, create an S3 bucket and a Lambda function, and use Textract, DynamoDB, SQS, and Amazon Kinesis Video Streams. So this is definitely a very complex and very advanced project.

Now let me show you something else, which is the GitHub repository I created for this tutorial. It has a very comprehensive readme file with all the instructions, all the steps we need to take in order to get this project up and running, and in this video I am going to walk you through all the steps in this process so you can get this project up and running.
But this is very important: this is going to be a very high-level description of the entire process, and I am not going to show you all the details involved in the different parts of this project. Remember, we are dealing with a very complex tutorial, so there were definitely many challenges I had to solve along the way while working on this project, and many details involved in the different parts of this process. In this video I am not going to show you all of those details; this is going to be a very high-level description of the entire process. But I also created another video where I do show you all the details: I show you absolutely all the challenges I had to face while I was creating this project, and every single detail. For example, I show you the object detector I trained in order to detect license plates and cars; remember, I trained an object detector with YOLOv8, and I show you the entire process of how I created the data, how I curated the data (which was a very complex process), and finally how I trained the model. So in that other video I show you absolutely all the steps in this process, and it will be available on my Patreon, to my Patreon
supporters. But now let's continue and get started with this project. The first thing I'm going to do is give you a very quick, very high-level description of Amazon Kinesis Video Streams. This is a very important product and perhaps one of the most important parts of this tutorial, so in case you are not familiar with it, let me give you a very quick and very high-level description of how it works. It's a very popular product which is commonly used to deal with real-time data, in particular real-time video data. The way it works is that you have a producer, which is going to be producing data, and then you have many different consumers, which will be consuming that data in real time; each of these consumers takes care of a different part of your process, and everything happens in real time. In our case, in our project, we will have a producer which is going to be streaming data, streaming frames from the video we are going to use to test this project, and then we will have two different consumers: one of them is going to take care of the object detection and the object tracking, and the other one is going to take care of the visualization. So in our case we will have one producer and two consumers, and that's a very quick and very high-level description of how Amazon Kinesis Video Streams works.
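To make that a bit more concrete, here is a minimal, illustrative sketch of what a consumer can look like with boto3: it asks Kinesis Video Streams for the GET_MEDIA endpoint and then starts reading fragments from the stream. The region and stream name are placeholders, and this is not the exact code used by the consumers in this project.

```python
import boto3

REGION = "us-east-1"           # assumption: the region used later in the video
STREAM_NAME = "my-kvs-stream"  # hypothetical name; replace with your own stream

# The control-plane client is only used to discover the endpoint for GET_MEDIA.
kvs = boto3.client("kinesisvideo", region_name=REGION)
endpoint = kvs.get_data_endpoint(StreamName=STREAM_NAME, APIName="GET_MEDIA")["DataEndpoint"]

# The media client streams MKV fragments starting from "now".
media = boto3.client("kinesis-video-media", endpoint_url=endpoint, region_name=REGION)
response = media.get_media(
    StreamName=STREAM_NAME,
    StartSelector={"StartSelectorType": "NOW"},
)

# Each consumer would parse these fragments and run its own logic
# (object detection in one consumer, visualization in the other).
chunk = response["Payload"].read(1024)
print(len(chunk), "bytes received")
```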
Now let's continue: let's get back to the GitHub repository of this tutorial and get started with this project by executing all the steps in this process. The first step is to go to AWS and log in to your account; I am already logged into my account, and this is my AWS Management Console, so that's the first step in this process. Then, go to Kinesis Video Streams and create a video stream. Let me show you how to do that: I'm going to type Kinesis Video Streams, click on Create video stream, name the stream something like real time automatic number plate recognition python AWS tutorial, and then click on Create video stream. That's pretty much all; this is the video stream we have created.
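By the way, if you prefer to create the stream from code instead of the console, a minimal boto3 sketch would look roughly like this (the stream name below is just an example):

```python
import boto3

kvs = boto3.client("kinesisvideo", region_name="us-east-1")

# Create the video stream; data retention controls how long fragments remain available.
kvs.create_stream(
    StreamName="real-time-anpr-tutorial-stream",  # example name
    DataRetentionInHours=24,
)

# Check that the stream is ACTIVE before streaming to it.
info = kvs.describe_stream(StreamName="real-time-anpr-tutorial-stream")["StreamInfo"]
print(info["Status"])
```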
Now let's continue with this process: the next step is to go to EC2 and launch a t2.small instance, so let's go to EC2... This is the instance we are going to use as the producer; we are in this part over here, which is setting up the producer. So let's launch an EC2 instance: I'm going to click on Launch instance, and I'm going to name this instance something like real time automatic number plate recognition python AWS producer tutorial. I'm going to select Ubuntu, then for the instance type I'm going to select t2.small, and for the key pair I'm going to select my key pair. This is very important: if you do not have a key pair, this is where you need to create a new one, but in my case I already have one, so the only thing I'm going to do is select my key pair, and that's pretty much all.
Then I'm just going to click on Launch instance, and the instance is launching.
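For completeness, the same launch can be done from Python; a rough boto3 sketch, where the AMI ID, key pair name, and tag are placeholders rather than the exact values from the video:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a single Ubuntu t2.small instance for the producer.
# ImageId and KeyName are placeholders; use your own Ubuntu AMI and key pair.
response = ec2.run_instances(
    ImageId="ami-xxxxxxxxxxxxxxxxx",
    InstanceType="t2.small",
    KeyName="my-key-pair",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "real-time-anpr-producer"}],
    }],
)
print(response["Instances"][0]["InstanceId"])
```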
Okay, now I'm going to move on to the next step in this process, which is to SSH into the EC2 instance: I'm going to select the instance, copy the public IPv4 DNS, open a terminal, and type the ssh command... I'll type yes to accept the host key, and that's pretty much all, now I am logged into this EC2 instance. Let's continue; we need to execute all the steps in this process.
The first one is very simple: the only thing we need to do is an apt update. Okay, now let's continue: we need to clone this repository, but let me show you the repository first, because this is very important. This is the Amazon Kinesis Video Streams producer SDK, the official producer SDK from Amazon, and it's going to be a very important repository for setting up the producer. So I'm just going to clone this repository and continue executing all the steps in this process: we need to create a new directory, I'm going to cd into it, and then all the other steps are very straightforward, the only thing I need to do is execute each one of them, one at a time. That's what I'm going to do now, and I'm going to resume the video once I have completed absolutely all the steps in this process. Okay, now we have these two exports and that's going to be pretty much all. Okay, so I have completed all the steps we have over here.
Now I'm going to continue over here, which is to download the video we will be using to test this project. Remember, this is the video we are going to use to test the project, and we need to download it onto the EC2 instance, so I'm going to run cd and then wget... and that's pretty much all; that's ready. Now we need to go to IAM and create a new user with these permissions. So I'm going to IAM, then Users, then Create user, and the name will be something like real time automatic number plate recognition tutorial kinesis video user. I'm going to click on Next and attach this policy, which is Amazon Kinesis Video Streams full access, so I'm going to search for Amazon Kinesis Video... this is the policy we need to attach, Amazon Kinesis Video Streams full access. Then I'm going to click on Next and then Create user, and that's pretty much all; the user has been created.
Now let's see what's the next step in this process: we need to select the IAM user we created, go to Security credentials, and create access keys. So: Security credentials, Access keys, Create access key, use case Local code, I understand the above recommendation and I want to proceed to create an access key, and then Create access key. These are the keys we have just created: this is the access key and this is the secret access key.
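If you would rather script this part, a rough boto3 equivalent of creating the user, attaching the managed policy, and generating the access key pair might look like this (the user name is just an example):

```python
import boto3

iam = boto3.client("iam")
user_name = "real-time-anpr-kinesis-video-user"  # example name

# Create the IAM user the producer will authenticate as.
iam.create_user(UserName=user_name)

# Attach the Kinesis Video Streams full-access managed policy.
iam.attach_user_policy(
    UserName=user_name,
    PolicyArn="arn:aws:iam::aws:policy/AmazonKinesisVideoStreamsFullAccess",
)

# Create the access key / secret pair used by the producer on the EC2 instance.
keys = iam.create_access_key(UserName=user_name)["AccessKey"]
print(keys["AccessKeyId"], keys["SecretAccessKey"])
```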
Now let's continue: we need to go back to the EC2 instance and run this command, so I'm going to copy it and paste it over here, and obviously we need to change a few things. We need to paste the secret key over here, so I'm going to copy and paste it; then I'm going to paste the access key; then the region name, which in my case is us-east-1; and the other thing we need to change is the stream name, so I'm going to copy and paste the name of the stream we created. That's pretty much all, so I'm going to press enter, and this is going to stream all the frames from this video in real time. And let me show you something: I'm going to click on Media playback in the Kinesis video stream we created, and now we should be seeing the frames from this video over here; this is the video we are streaming using Kinesis Video Streams. So we are almost ready: we have already completed the first step in this process, which is setting up the producer. The producer is ready; now let's continue, and I'm going to show you how to set up one of the consumers, the one that's going to take care of the object detection and the object tracking.
Okay, now let's go to EC2 again; this time we're going to launch a t2.xlarge instance. So I'm going to EC2, Launch instance; the name will be something like real time tutorial consumer object detection. I'm going to select Ubuntu and t2.xlarge, and then I'm going to select my key pair. Then, and this is very important, we need to create this instance with 30 GB of storage, so I'm going to type 30 for the storage size... okay, 30, okay, Launch instance. Now I need to SSH into this instance, and this is how we're going to do it: I'm going to copy the public IPv4 DNS, open another terminal, and type the ssh command... I'll type yes, and now I am logged into this EC2 instance.
Now let's see how to continue: we need to execute all these commands. The first one is sudo apt update... then I'm going to install virtualenv, create a new virtual environment, and activate it. Okay, now I'm going to clone another repository, and let me show you this repository as well, because it's very important. Remember, we used the official Amazon SDK to set up the producer; now we're going to use the Amazon Python SDK to set up this consumer. This is the repository we're going to use, and actually we are going to use a fork I made of this repository: this is the original repository from AWS, and we are going to work on a fork in my GitHub account. The reason we are going to use this fork is that I made a lot of changes, a lot of edits, because I needed to set up everything that's involved with the object detection, the object tracking, and all the processing we are going to do on this EC2 instance. So we are going to clone this fork, which is in my GitHub account. Now let's get back here, and I'm just going to git clone this repository... okay, and now I'm going to cd into this repository.
Then I'm going to clone SORT, because remember, we are going to do object detection but also object tracking, and we are going to use SORT as the object tracking algorithm; this is the repository we're going to use for tracking. Now let's continue: I have already cloned SORT, so let's start installing all the requirements. First, let's install all the requirements of the Amazon SDK repository; then let's install all the requirements from SORT, so I'm going to run pip install -r on the SORT requirements file. Okay, I'm getting an error when installing one of the requirements from SORT, and if this happens to you as well, this is what you need to do to fix it: open the SORT requirements file and comment out the line that pins scikit-image to a specific version. We are not going to install this requirement for now; we're going to install it later on. So I'm going to close the file and run the same command again, pip install -r on the SORT requirements, and now everything is okay. Now let's install a more recent version of that library: pip install scikit-image, version 0.22.0. Okay, now everything seems to be just fine.
Now let's continue: we're going to install ultralytics. Remember, this is a very important library, because it's the one we are going to use for the object detection; we are going to use YOLOv8, so we definitely need to install this library. Okay, and now I'm going to execute these two commands... okay, and now this is the last one... okay, and that's pretty much all.
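Just to illustrate how these two libraries fit together inside this consumer, here is a minimal sketch of running a YOLOv8 model on a frame and passing the detections to SORT. The model path, thresholds, and import layout are assumptions for the example, not the exact code in the fork.

```python
import numpy as np
from ultralytics import YOLO
from sort.sort import Sort  # assumes the cloned sort repo is on the Python path

model = YOLO("license_plate_detector.pt")  # placeholder path for the trained detector
tracker = Sort(max_age=5, min_hits=3, iou_threshold=0.3)

def detect_and_track(frame):
    """Detect objects in one frame and return tracked boxes as [x1, y1, x2, y2, track_id]."""
    results = model(frame)[0]
    detections = []
    for x1, y1, x2, y2, score, class_id in results.boxes.data.tolist():
        if score > 0.5:  # arbitrary confidence threshold for this sketch
            detections.append([x1, y1, x2, y2, score])
    dets = np.asarray(detections) if detections else np.empty((0, 5))
    return tracker.update(dets)
```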
Now let's continue: we need to go to IAM and create an access role in order to provide this EC2 instance with all of these permissions. So let's go to IAM and create a new role: Roles, Create role, the service will be EC2, Next, and now let's attach all the policies we need. One of them is Amazon Kinesis Video Streams full access, so I'm going to select that policy; the next one is Amazon DynamoDB full access; and then the other ones are Amazon S3 full access and Amazon SQS full access, so I search for Amazon S3 and then Amazon SQS. That's pretty much all, so I'm going to click on Next, and the role name will be something like access role EC2 consumer object detection and tracking, something like that. Create role... oh, the name is too long, so I'm going to shorten it over here... okay, the role has been created.
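The same role can be created programmatically; a rough boto3 sketch, where the role name is an example (note that when you create a role for EC2 in the console, an instance profile with the same name is created for you automatically):

```python
import boto3, json

iam = boto3.client("iam")
role_name = "anpr-consumer-object-detection-role"  # example name (console names have length limits)

# Trust policy allowing EC2 to assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(RoleName=role_name, AssumeRolePolicyDocument=json.dumps(trust_policy))

# Attach the four managed policies used in the video.
for policy in [
    "AmazonKinesisVideoStreamsFullAccess",
    "AmazonDynamoDBFullAccess",
    "AmazonS3FullAccess",
    "AmazonSQSFullAccess",
]:
    iam.attach_role_policy(RoleName=role_name, PolicyArn=f"arn:aws:iam::aws:policy/{policy}")
```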
Now the next step is to attach the IAM role to the EC2 instance, so let's go to the EC2 dashboard... I'm going to select this instance, then Actions, Security, Modify IAM role, and I'm going to choose the IAM role we just created, which is this one, real time automatic number plate recognition access role EC2 consumer object detection, then Update IAM role, and that's pretty much all. Now the next step is to download the object detector onto the EC2 instance.
Let me show you how to do that. There are many ways we could do it, but I'm just going to use scp, so I'm going to copy and paste this command and update the IP address; the command will be something like this: scp -i, then my key pair, then the model (the object detector we are going to use), and then the destination, which is ubuntu@ followed by the instance's IP address. I'm just going to press enter... okay, and that's pretty much all. Now if I do an ls in the EC2 instance, this is the consumer, and if I do ls here, this is the file we have just copied, so everything seems to be just fine. Now let's continue.
The next step in this process is to go to S3 and create an S3 bucket, because remember, we need to set up this entire object detection and tracking consumer, and that means there are a lot of different services and products we will need to create. The first one is S3, so I'm going to S3 and I'm going to create a new bucket: I click on Create bucket, the name will be something like this, and I click on Create bucket. The bucket has been created, and this is the bucket we just created. Okay, now let's continue.
Now we need to go to DynamoDB and create two tables, because we are going to use two DynamoDB tables to save, to store, all the information related to our object detection, object tracking, and text detection. So let me show you: Tables, Create table. The first one will be named something like object detection and tracking; the partition key, and this is very important, will be the fragment number (you are going to see why later on), the sort key will be y1, and both keys will be strings. Then I'm going to customize the settings and choose on-demand capacity; this is exactly how we are going to create this table, and then I click Create table. Then I'm going to create another table, which is where we're going to save the numbers of all the license plates we detect, so this one will be named something like license plate numbers, and the partition key will be something like car ID. That's pretty much all; we don't really need a sort key in this case, because we are going to use the car ID to put items into this table. Now let's continue: I'm going to click Create table, and the two tables have been created.
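For reference, here is roughly what creating those two tables looks like with boto3; the table and attribute names are illustrative spellings of the names used in the video, not necessarily the exact ones:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Table 1: detection/tracking results, keyed by fragment number with y1 as the sort key.
dynamodb.create_table(
    TableName="object-detection-and-tracking",   # example name
    AttributeDefinitions=[
        {"AttributeName": "fragment_number", "AttributeType": "S"},
        {"AttributeName": "y1", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "fragment_number", "KeyType": "HASH"},   # partition key
        {"AttributeName": "y1", "KeyType": "RANGE"},               # sort key
    ],
    BillingMode="PAY_PER_REQUEST",  # the "on-demand" setting chosen in the console
)

# Table 2: license plate numbers, keyed only by car id (no sort key).
dynamodb.create_table(
    TableName="license-plate-numbers",   # example name
    AttributeDefinitions=[{"AttributeName": "car_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "car_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)
```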
Now let's continue: we need to go to SQS and create a first-in-first-out (FIFO) queue. So let's type SQS... I'm going to click Create queue, this will be a FIFO queue, and it will be named something like real time ANPR tutorial queue. Okay, and this is FIFO, and that's pretty much all; I'm just going to click Create queue, and the queue has been created. Now let's see how to continue.
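Again, the boto3 equivalent as a sketch; note that FIFO queue names must end with .fifo, and the deduplication setting below is an assumption for the example:

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

# FIFO queues must have a name ending in ".fifo".
response = sqs.create_queue(
    QueueName="real-time-anpr-tutorial-queue.fifo",   # example name
    Attributes={
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",  # an assumption; dedup can also be set per message
    },
)
print(response["QueueUrl"])
```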
Now we need to go to Lambda and create a new Lambda function, and these are the files we need to use in order to create this function; the files are over here. So I'm going to Lambda, Create function, function name real time..., runtime Python 3.11, and I'm just going to leave the default execution role for now, I'm going to change it later on, so I'm just going to click Create function. Okay, so we have created the S3 bucket, the Lambda function, the DynamoDB tables, and the SQS queue. Now let's set up this Lambda function.
I'm going to enlarge the font a little, okay, and I'm going back to the GitHub repository over here, to the code of the Lambda function, and the only thing I'm going to do is copy and paste the code here. Okay, and I'm going to click on Deploy... Now I'm going to click on File, New file, and into this new file we are going to copy the other file we have over here, which is util.py. Remember, in the other video I created, the one available to my Patreon supporters, I give you many more details of the entire process; right now the only thing I'm doing is showing you how to set everything up and running and giving you a very high-level description, but in that other video, oh my God, I give you so many details of absolutely the entire process. But for now let's continue: this file is called util.py, so let's see how to change the name... okay, we need to click here and then Save, and the name will be util.py, and that's pretty much all. So now we have two files, util.py and lambda_function.py.
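I won't reproduce the repository's actual Lambda code here, but conceptually it is triggered by an S3 upload, runs Textract on the uploaded license plate crop, and writes the recognized text into DynamoDB. A heavily simplified, hypothetical sketch of that idea (the table name, key names, and key derivation are placeholders, not the real lambda_function.py):

```python
import boto3
import urllib.parse

textract = boto3.client("textract")
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("license-plate-numbers")  # placeholder table name

def lambda_handler(event, context):
    # The S3 event notification tells us which object was just uploaded.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["object"]["key"])

    # Run OCR on the uploaded crop with Textract.
    result = textract.detect_document_text(
        Document={"S3Object": {"Bucket": bucket, "Name": key}}
    )
    lines = [b["Text"] for b in result["Blocks"] if b["BlockType"] == "LINE"]

    # Store whatever text was read, keyed here by the object key for illustration;
    # the real project derives the car id differently.
    table.put_item(Item={"car_id": key, "plate_text": " ".join(lines)})
    return {"statusCode": 200}
```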
Now let's continue... okay, I see I have a step missing, which is creating an IAM role for the Lambda function; I'm going to update these instructions later on, but for now this is what we need to do: we need to go to IAM again and create a new role. So this is IAM, I'm going to Roles and then Create role; this one will be for a Lambda function, and I'm going to click on Next. These are all the policies we need to attach: the first one is SQS full access, then we are going to attach S3 full access, then DynamoDB full access, and also Textract full access. Remember, we are here at the Lambda function, and the Lambda function communicates with S3, DynamoDB, SQS, and Textract, so we definitely need to provide all these permissions. Then the role name will be something like real time automatic number plate recognition Lambda function, something like that, and then Create role.
Now let's get back to the Lambda function: I'm going to Configuration, then Permissions, I'm going to click on Edit, and I'm basically going to change the existing role to the one we just created, real time anpr lambda function. Okay. Another change I'm going to make is over here, the timeout: I'm going to change the default timeout and set it to something like 1 minute, and click on Save. Then another change is under Asynchronous invocation: I'm going to click Edit and change the number of retry attempts. That's pretty much all; I'm going to update the instructions in the readme file, but for now let's continue.
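The same two configuration changes can be made with boto3; a small sketch, where the function name and the retry value of 0 are assumptions for the example:

```python
import boto3

lam = boto3.client("lambda", region_name="us-east-1")
function_name = "real-time-anpr-process-plates"  # placeholder function name

# Raise the timeout from the 3-second default to 1 minute.
lam.update_function_configuration(FunctionName=function_name, Timeout=60)

# Reduce the retry attempts for asynchronous invocations (S3 triggers are asynchronous).
lam.put_function_event_invoke_config(FunctionName=function_name, MaximumRetryAttempts=0)
```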
So we have created the Lambda function and provided all the necessary permissions; now we need to go to the EC2 instance and change some variable names. Let's go to the EC2 instance and open this file... I'm going to scroll all the way down to this section over here, and these are the variables we are going to edit. The first one is the region name, which in my case is us-east-1; then the stream name: let's get back to my browser, and I'm going to copy and paste the name of the stream we are using in this tutorial... okay. Then the bucket name... then the table name, and this is the table where we are going to save all the object detection and object tracking information, so I'm going to paste it over here. Then the queue URL: let's go to SQS, and I'm going to copy and paste the queue URL... and that's pretty much all, if I'm not mistaken, so I'm just going to close the file and save the changes.
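In other words, near the bottom of that consumer script there is a small block of configuration values that must match the resources we created; something along these lines, where the variable names and values are only illustrative, not the fork's exact code:

```python
# Illustrative configuration block for the object detection and tracking consumer.
REGION_NAME = "us-east-1"
STREAM_NAME = "real-time-anpr-tutorial-stream"      # the Kinesis video stream created earlier
BUCKET_NAME = "real-time-anpr-bucket"               # the S3 bucket for cropped license plates
TABLE_NAME = "object-detection-and-tracking"        # DynamoDB table for detection/tracking results
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/real-time-anpr-tutorial-queue.fifo"
```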
Now let's get back to the GitHub repository and see what's the next step in this process: we need to do exactly the same thing, but with the Lambda function we just created. So let's go to the Lambda function, which is this one, and I'm going to edit the variable names: I'm going to start with the region name, and then we also need to edit the queue URL... and that's pretty much all; if I'm not mistaken, those were the only two variables we had in this Lambda function. And I'm obviously going to deploy the changes.
Now let's get back to the GitHub repository. Although the next step in this process is executing these two commands, I realize now that I completely forgot to execute this step over here, so let's do this step first and then continue: we need to go to the S3 bucket we created and create a new event notification to trigger the Lambda function. This is very, very important, and I completely forgot to do it, so let's go to S3. This is the S3 bucket we created, real time anpr bucket; let's go to Properties, then Event notifications, and I'm going to click on Create event notification. The event name will be something like trigger Lambda function read license plate, and we are going to trigger this Lambda function only for objects with the suffix '_1.jpg'. If you want to know why we are triggering the Lambda function only in this case, I invite you to take a look at the other video I created, where I explain absolutely every single detail about this project. For now let's continue: I'm going to trigger this Lambda function on all object create events, so I'm going to select this option, scroll all the way down, select the Lambda function we created, real time anpr process plates Lambda, and click on Save changes. That's pretty much all; we have completed this step.
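Scripted, that event notification looks roughly like the following; the bucket name and Lambda ARN are placeholders, and note that outside the console you also have to grant S3 permission to invoke the function with lambda add_permission:

```python
import boto3

s3 = boto3.client("s3")

# Placeholders: bucket name and the Lambda function's ARN.
bucket = "real-time-anpr-bucket"
lambda_arn = "arn:aws:lambda:us-east-1:123456789012:function:real-time-anpr-process-plates"

# Trigger the Lambda only when an object whose key ends with "_1.jpg" is created.
# (The console adds the invoke permission for you; from code, call lambda add_permission first.)
s3.put_bucket_notification_configuration(
    Bucket=bucket,
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [{
            "Id": "trigger-lambda-read-license-plate",
            "LambdaFunctionArn": lambda_arn,
            "Events": ["s3:ObjectCreated:*"],
            "Filter": {"Key": {"FilterRules": [{"Name": "suffix", "Value": "_1.jpg"}]}},
        }]
    },
)
```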
Now let's go to the EC2 instance and execute these two commands. I'm going to clear this output, then execute the first command, which is a cd into this directory, and then we need to execute this script, which is basically the consumer: we are going to be consuming all the data which is being streamed by the producer. And we got an error; I think I know what the problem is: we need to go to sort/sort.py and comment out these two lines. I'm not sure why we get that error, but if we make that edit, it's fixed, so that's pretty much all. Now let's see what happens... okay, everything seems to be just fine; basically we are waiting until we receive a fragment of data streamed from the producer, but we are not streaming anything at this time, so we're not going to receive anything. So I'm going to stop this script, go back to the producer, and execute it again; actually, the way we need to do it is to execute the consumer first and then the producer. So now I'm going to execute the producer, and you can see that we are broadcasting data, we are streaming the data, and everything is just fine: now we are processing all the data we are receiving from the producer, you can see we are going through all the frames in this video, and this is the huge output we get from the object detection and the object tracking, all the data we are saving into the database. So everything seems to be just fine; we have completed the second step in this process, which is setting up the consumer that takes care of the object detection and the object tracking. Everything here is done, it's ready, and we can continue with the last part of this process, which is setting up the visualization.
So let's see how we can do that. I'm going to show you how to do it on my local computer; I'm not going to use an EC2 instance this time. Eventually you could potentially set up the visualization on an EC2 instance in the cloud as well, but I'm going to show you how to do it on a local computer, my local computer. I'm going to use PyCharm, so I'm going to open PyCharm, click on File, New project, and select a directory I have prepared for this tutorial, which is this one over here. I'm going to click OK, create a new environment using Python 3.9, Create, this window, and that's pretty much all: we have created a new project and we also created a new virtual environment, which is very important.
Now let's get back to the GitHub repository and see how to continue. The first step in this process is cloning the same repository we used before, so I'm just going to clone exactly the same repository, but now we are going to use another script: remember, in the first consumer we executed the kvs consumer library example for object detection and tracking, and now we're going to execute the other one, the kvs consumer library example for visualization. So this is what we are going to do now: I'm going to clone this repository, so I copy the URL, go to the terminal, and run git clone with the repository URL, and that's pretty much all; the repository is now cloned. Now let's get back to the GitHub repository over here: we need to download these two files, main_plot.py and process_queue.py, so let's go to the visualization folder, and I'm just going to copy and paste the content of these two files. Going back to PyCharm, this is main_plot.py... okay, and then let's do the same for process_queue.py... okay, let's continue.
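To give a feel for what the queue-processing side does, here is a hypothetical sketch of a consumer loop that long-polls the SQS queue and hands each message to the rest of the visualization pipeline; it is not the repository's actual process_queue.py, and the queue URL is a placeholder:

```python
import json
import time
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue.fifo"  # placeholder

def poll_queue():
    """Poll the FIFO queue and hand each message to the rest of the pipeline."""
    while True:
        response = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MaxNumberOfMessages=10,
            WaitTimeSeconds=20,  # long polling
        )
        for message in response.get("Messages", []):
            body = json.loads(message["Body"])
            print("received fragment info:", body)
            # ...fetch the matching frames/detections and prepare data for main_plot.py...
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])
        time.sleep(1)
```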
Now we need to go to IAM and create a new user with these two access permissions. So let's go to IAM, I'm going to open it in a new tab... now let's go to Users and create a new user; the user name will be aws consumer visualization tutorial. Next, Attach policies directly, and we're going to attach these two policies, Amazon DynamoDB full access and Amazon SQS full access. I'm going to click on Next, and that's pretty much all, so I'm going to create the user; the user has been created and it's this one over here. Now let's get back to GitHub: we need to select the IAM user we created, go to Security credentials, and create access keys. So let's do that: Security credentials, Create access key, use case Local code, I understand the above recommendation, click Next, Create access key, and the access keys have been created. So we are one step closer to setting up this other consumer.
Everything is going well; now we need to go to main_plot.py, process_queue.py, and this other file over here, and we need to edit the variable names the same way we did before. Let's go to PyCharm: in main_plot.py, let's take a look at the variables we need to edit... actually, we don't have any in main_plot, so we need to go to process_queue... here we have several: the first one is the queue_url, then the table name, the access key and the secret key, and the region name, which in my case is us-east-1, and that's pretty much all. Now let's go to the other file, in the Amazon Kinesis Video Streams consumer project, the visualization one... and the variables we have here are the region name, which is us-east-1 in my case, then the stream name, which we need to copy and paste over here, okay, we have pasted the stream name; the next value we need to copy and paste is the access key, and then the secret access key. And that's pretty much all; we have set all the variable names in this file.
Now let's get back to GitHub and see how to continue: we need to create a virtual environment and install all the requirements, but in our case we already created a virtual environment when we started this project, when we created the project, so the only thing we need to do now is install all the requirements. Let's get back to PyCharm; the requirements we need to install are over here, within the Amazon Kinesis Video Streams consumer library for Python directory, and these are the requirements over here. So what I'm going to do is go to the terminal and execute pip install -r on the Amazon requirements file. Okay, now we need to wait a couple of minutes... okay, the requirements have been installed, and now we need to go to the next step in this process, which is to execute process_queue.py.
But before we process the queue, we need to do something first. Let's go to SQS, to the queue we created; you can see that it says 7 messages available, we already have 7 messages in the queue, and we need to delete all the messages we have so far. So I'm going to purge the queue: I type purge to confirm, and that's going to be pretty much all.
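The same cleanup can be done from code; a small boto3 sketch with a placeholder queue URL:

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue.fifo"  # placeholder

# Delete every message currently in the queue (the purge may take up to 60 seconds).
sqs.purge_queue(QueueUrl=queue_url)

# Check the approximate number of messages left.
attrs = sqs.get_queue_attributes(
    QueueUrl=queue_url,
    AttributeNames=["ApproximateNumberOfMessages"],
)
print(attrs["Attributes"]["ApproximateNumberOfMessages"])
```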
If I refresh, the number of messages available is now 0, so we are okay to continue. Remember, when we executed the object detection and tracking consumer earlier, we already started processing all the data, all the frames: we computed the object detection and the object tracking, we saved the data into the database, and we put data into the queue; we already ran the entire process, so we definitely need to delete the messages that were already in the queue in order to continue. Now we should also have some messages, some data, available in this table over here, but I have already checked and we do not have any, and the reason we don't have any data saved into this table is that I forgot to make another edit to another variable in the Lambda function, which is the table name over here: we need to change this value and put the name of this table, license plate numbers. So remember, we edited some of the variable names in the Lambda function, but we forgot to edit this one, which is the table name; I'm going to do it now, then click on Deploy, and now everything is ready to continue. Now let's go back to PyCharm and execute process_queue.py.
I'm going to click on Run for process_queue... and I get an error: no module named pandas. We didn't install this requirement, so I'm going to install it now with pip install pandas. Okay, now let's try again: I'm going to run process_queue and see what happens; everything is okay, we are processing the queue and starting the whole process we have over here, so everything is just fine, everything is going well. Now let's go to my directory, and you can see that we have these four directories: detections, frames, license plates, and processed fragments. These are very important directories; this is where we are going to save all the information, all the data we receive from the producer and also all the data we compute with our process.
Now let's take a look at the next step, which is to execute main_plot.py. So let's go to main_plot.py and click on Run... and I get an error: no such file or directory, loading frames. Okay, this is what we need to do: we need to go back to the GitHub repository, where under visualization we have this directory, loading frames, and we need to download it. Maybe the best way to do it is to just clone the entire repository... and take this directory from over here... okay, and actually it goes over here. This is a very important directory with very important data for the visualization, and obviously later on I'm going to update the readme file so this is listed as one of the steps in this process. Now let's try again: I'm going to execute main_plot.py, and now you can see we get something, which is only this loading animation: we are just loading, just waiting for data, we are running the visualization, but obviously we are not streaming anything, so we are not receiving anything either. So everything is ready.
But we still have one more step in this process: executing this file over here, which is obviously the file that takes care of this consumer, the file we're going to use in order to consume data. But in order to consume data, we need to stream data, we need to produce data first, so let's go to the other EC2 instance, the other consumer, and also the producer, and now we are going to execute everything at the same time. This is how we're going to do it: I'm going to execute this one first, oh, it's not this one, it's this one; I'm going to execute the one for the visualization first, and then I'm going to execute this one over here... We didn't install ultralytics in this environment, but we don't really need ultralytics here either, so the only thing I'm going to do is remove that import, and that's pretty much all. Let's see now... okay, we got an error; it seems there's something wrong with the permissions, so let's go back to IAM and see if everything is okay... We forgot to attach another policy, the Kinesis Video Streams one; obviously I'm going to update the GitHub repository, all the instructions in the readme file. Okay, Next, and that's pretty much all.
Let's see what happens now. I'm going to open these two terminals, the producer in this one and the other consumer in this one, and remember, we need to execute everything at the same time, so let's see if we can do it. After a few seconds, this is what we get: you can see that we're detecting all the license plates and all the cars, and everything is working just fine. So this is exactly how you can get this project up and running. The last thing I'm going to say is: remember, absolutely everything we do in AWS is not free, we need to pay for it, so please remember to always keep an eye on all the costs associated with this project; keep an eye on your AWS cost management console so you know for sure how much money you are spending while you are working on this project. And also remember that I created another video, another course, where I give you a much more comprehensive description of this project, so if you want to know absolutely every single detail involved in this project, I invite you to take a look at that other video. This is going to be all for today. My name is Felipe, I'm a computer vision engineer, and see you on my next video.