Speech Recognition using Deep Learning Part 2 - GPU Inference

Video Statistics and Information

Captions
What's up guys, this video is going to be a quick continuation of my first video, DeepSpeech speech recognition part 1. In this video we're just going to start running inference on the GPU, so not much is different; we just want to see if we can run everything on the GPU rather than the CPU.

We'll begin by creating another, separate virtual environment for this. We'll use conda, and you can see I already created one called ds-gpu. So create one really fast; you can call it whatever you want, I'm just calling it ds-gpu, and giving it the latest version of Python. After you create it, activate it, and then we'll just pip install deepspeech-gpu. This makes sure the binary actually runs on your GPU rather than your CPU. I've already installed it, but you'll want to run that command: pip install deepspeech-gpu.

The next thing you want to do is get the model and audio files. If you're following along from part 1 you probably have these already, so you don't need to pull all of this down; but if you don't, just curl down the models, curl down the audio samples, and untar them both.

After that, you should make sure you have an NVIDIA GPU and the proprietary drivers installed. You'll be able to tell you have them if you can run nvidia-smi. nvidia-smi shows you the load on your GPU, all the processes that are using your GPU, and so on and so forth, so it's a pretty useful command and also a good way to test that the proprietary drivers are installed.

After that, you need CUDA and cuDNN. You might have installed them before, and if you already have them you probably already have a way to do this; but if you don't, a very easy way to install them is to run conda install cudatoolkit. We're going to be using 10.0, so you can run that. I already have it installed, so it's not going to say much for me. Then install cudnn as well, and I already have that installed too.
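The setup steps above can be sketched as the following shell session. The environment name, Python version, and the 0.6.1 release URLs are assumptions for illustration (the video's "untar them both" matches the 0.6.x releases, where both the models and the audio samples ship as tarballs); substitute whichever release you used in part 1.

```shell
# Create and activate a fresh conda environment for the GPU build
# (the name ds-gpu and the Python version are just examples).
conda create -n ds-gpu python=3.7
conda activate ds-gpu

# Install the GPU-enabled DeepSpeech binary instead of the CPU one.
pip install deepspeech-gpu

# Pull down the pretrained model and sample audio and untar them both
# (0.6.1 shown here as an assumed release version).
curl -LO https://github.com/mozilla/DeepSpeech/releases/download/v0.6.1/deepspeech-0.6.1-models.tar.gz
curl -LO https://github.com/mozilla/DeepSpeech/releases/download/v0.6.1/audio-0.6.1.tar.gz
tar xvf deepspeech-0.6.1-models.tar.gz
tar xvf audio-0.6.1.tar.gz

# Confirm the proprietary NVIDIA driver is installed and working.
nvidia-smi

# Install CUDA 10.0, then cuDNN; pinning cudatoolkit to a version first
# lets conda resolve the matching cudnn build for it.
conda install cudatoolkit=10.0
conda install cudnn
```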
These can take a little while; the cudatoolkit install took significantly longer than cudnn for me. If you install cudatoolkit first and pin it to a specific version like this, cudnn will just install the proper build for that specific version of cudatoolkit.

After that, we're ready to run inference again, so let's take a look at the command. Now, if you remember from the first video, we ran it on the same exact audio file, the one that transcribes to "experience proves this", and we got the same exact output here. We got the same output even though we ran on the GPU because it's the same exact model; the idea is just that it runs faster. So you shouldn't get better results, you should just get a faster inference time. You can tell it actually ran on the GPU if you start seeing GPU-specific output like NUMA nodes, CUDA, and all that kind of good stuff. And that's pretty much it.
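For reference, the inference command looks roughly like this. The flag names follow the 0.6.x CLI and the file paths come from the 0.6.1 tarballs, which are assumptions; adjust them to whatever release you downloaded.

```shell
# Run inference on one of the bundled audio samples. With deepspeech-gpu
# installed, this same command now executes on the GPU; the transcript
# should still be "experience proves this", just produced faster.
deepspeech --model deepspeech-0.6.1-models/output_graph.pbmm \
           --lm deepspeech-0.6.1-models/lm.binary \
           --trie deepspeech-0.6.1-models/trie \
           --audio audio/2830-3980-0043.wav
```

Watching the log output here is how you verify the GPU is actually in use: look for lines mentioning CUDA libraries and NUMA nodes rather than CPU-only warnings.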
Info
Channel: chris@machine
Views: 4,213
Keywords: deepspeech, speech recognition, deep learning, machine learning, tensorflow, gpu
Id: bKKVk6GehfA
Length: 3min 26sec (206 seconds)
Published: Mon Apr 20 2020