Conversational AI w/ Jarvis - checking out the API

Video Statistics and Information

Reddit Comments

I want Nvidia Jarvis to control my operating system and do voice programming. I love coding, but it hurts to use my hands too much.

πŸ‘οΈŽ︎ 5 πŸ‘€οΈŽ︎ u/metal88heart πŸ“…οΈŽ︎ Apr 16 2021 πŸ—«︎ replies

that is the fastest speech to text I've ever seen, wow

πŸ‘οΈŽ︎ 6 πŸ‘€οΈŽ︎ u/george_watsons1967 πŸ“…οΈŽ︎ Apr 16 2021 πŸ—«︎ replies

Does it work only with Linux?

πŸ‘οΈŽ︎ 3 πŸ‘€οΈŽ︎ u/_santoshp_ πŸ“…οΈŽ︎ Apr 16 2021 πŸ—«︎ replies

I love sentdex

πŸ‘οΈŽ︎ 3 πŸ‘€οΈŽ︎ u/whitelife123 πŸ“…οΈŽ︎ Apr 16 2021 πŸ—«︎ replies
Captions
So yeah, that's super cool — and that is nearly instantaneous. "What is object oriented?" — "a programming paradigm based on the concept of objects." I want to ask "what is OOP?" — I wonder if it can figure out an acronym... oh, this one is correct, I bet, because of all these other ones. Let's see what it says. No way!

What is going on everybody, and welcome to a video of me just poking around the Jarvis library — project, I guess I'd call it — from NVIDIA. Jarvis is contained within NVIDIA's NGC, which I've looked into and wanted to do a tutorial on, I just haven't gotten around to it. If you're not familiar with NGC, it's probably something you should take a peek at. One of the things I did with NGC was grab a particularly well-trained text-to-speech setup — WaveGlow and Tacotron. On NGC you can download containers, for example — a container is all the project files you'd need, good to go; it is a Docker container — or you can download literally just the model files and grab the code from GitHub, so rather than using a container you can incorporate it into whatever you're doing. That's actually what I did: I downloaded the model files and used them that way.

Jarvis, though, is quite the large project, and it would be super tedious to cover each little part. What Jarvis is, is all the things you'd need for conversational AI — everything along the way, starting with speech-to-text. If you want to speak to the AI first and hopefully get a response, they've got speech-to-text, and from what I've seen in demos it looks really good. Then there's a ton of natural language processing you can do, from your typical named-entity stuff down to all kinds of things we'll be able to peek at once we get the container in, and at the end of the day you get back to text-to-speech. So it's really end-to-end conversational AI.

For someone like me, one of the biggest projects — and apparently the hardest thing ever for me to figure out — is a general-purpose chatbot. I'm focused on the NLP side of things, and I'd be very curious to use their BERT model to train my chatbot. My idea of a chatbot is something that isn't boring like Google's assistant or Apple's Siri; I'm trying to train it off of Reddit chat data, and this has turned out to be a very hard problem — I didn't go in thinking it would be so difficult. It's been a side project of mine for years. Long story short, we're going to check out Jarvis and see if it can end my pain and get me a chatbot — especially one you could actually talk to, like in Discord, literally talk and get talked back to — and hopefully everything works seamlessly.

First order of business: the terms. NGC is where NVIDIA houses a whole bunch of stuff, and Jarvis is the housing for a bunch of NLP and conversational-AI stuff. The very first thing I'm going to do is connect to the DGX Station back there. You could do this locally — Jarvis, as I understand it, is basically a server of its own: it houses all the models and you query it. But I'm going to run it against the DGX Station; I can't imagine a better place to test it.
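The end-to-end loop just described — speech in, NLP in the middle, speech back out — can be sketched with placeholder functions. These stubs are purely illustrative (none of this is the Jarvis API); they just show the shape of a conversational pipeline:

```python
# Toy sketch of the conversational-AI loop Jarvis covers end to end.
# Each stage is a stub standing in for a real service call.

def speech_to_text(audio: bytes) -> str:
    # Real system: an ASR model transcribes the audio.
    return "what is natural language processing"

def nlp_respond(text: str) -> str:
    # Real system: QA / intent / dialog models produce a reply.
    return f"You asked: {text}"

def text_to_speech(text: str) -> bytes:
    # Real system: a TTS model synthesizes a waveform.
    return text.encode("utf-8")

def converse(audio: bytes) -> bytes:
    # The whole round trip: audio in, audio out.
    return text_to_speech(nlp_respond(speech_to_text(audio)))
```

Swapping each stub for a call to the corresponding model is the entire design; the value of something like Jarvis is that all three stages come pre-packaged and fast.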
So first, let's SSH in — okay, nobody look at my password. Cool, we're in. First order of business is Docker. This machine comes from NVIDIA, so I'd wager Docker is already installed; if you didn't have it, you'd need to install it. Also, somewhere — we'll stumble on it at some point — note that not everything on NGC has this requirement, but Jarvis specifically requires a GPU architecture later than Pascal. So a GTX 980 or a 1080 Ti will not work with Jarvis; if that's what you have, you could rent a GPU in the cloud or something like that, if what you see here appeals to you. Anything past Pascal — Turing, Ampere, Volta — is totally fine.

Okay, so we have Docker. You want the NVIDIA flavor of Docker, and I believe you install nvidia-docker on top of the regular one. First, sudo apt-get update, then pull up the install docs — those are only required if you hadn't already installed Docker. The other thing we definitely want is the post-install steps, so Docker runs without sudo. It wants me to log out... let's see if we can get away without that. "Verify that you can run docker without sudo" — so now I'll run docker run hello-world. "This shows that your installation appears to be working." Okay.

Once regular Docker is installed, we set up the NVIDIA Container Toolkit — thank you for the easy copy button. Coming down... ah, a note about getting access to USB and the new MIG capability. I'm tempted — the Ampere cards do have MIG and I definitely want to play with it, but depending on how much time I have with the machine I may or may not get to it, so we'll skip that. I already ran the update, so let's grab nvidia-docker... huh, it was already here — makes sense, the machine came from NVIDIA. (Default config... Discord? Why'd you do that? I did put Discord on there; I'll hope that doesn't cause problems.) Okay, restart Docker. This command checks that we can run nvidia-smi through the container — beautiful, just a wonderful-looking 81.2 gigs of memory, oh my goodness — and can we run it without sudo? We sure can. Good.

So NVIDIA Docker is done. Checking my notes, the next thing we want is the NGC CLI so we can quickly grab stuff from the command line — in this case, the Jarvis Quick Start. Let's check whether ngc is already here, since they preinstalled Docker... no, so the NGC CLI isn't set up yet. To set it up we go to the command-line install page — I'm on Linux, not an ARM processor — and grab these commands. (Oh, that one's just a check.) Anyway, continuing on — just running
commands blindly — how safe can it be? ngc config set — okay, it asks for your API key. Hopefully this does not display the key; I'll blur it out if it does. To get your key you need an NGC account, then you go into your account settings — basically ngc.nvidia.com, Setup, I think; it's where you'd expect a key to be, not a hard thing to find. Please don't show it... oh my god, it showed it. Great, phenomenal. Let me see if I can deal with that key before continuing — you know how much I hate having to move a blur around in editing. The website says generating a new API key invalidates the old one, so I'll leave that key for now and hopefully remember to change it later. Probably won't. ASCII output format is fine with me. Cool, that part is done.

On to the next step. We set our key; did we already export the path? Yep. Now let's copy the CLI command — where was the Jarvis Quick Start... there it is. I'll put this link in the description, but to view it you have to make an account and sign in. Everything is free — you don't pay for anything, you just need an account — but this command won't show if you're logged out. Also take note: I am using 1.0 beta 0.2. This might change over time — hopefully not much. Please don't do me dirty, NVIDIA; I swear, every time I do tutorials on beta stuff it changes massively. Let's see what happens.

Okay — it's really done that quick? Is it really only 256 kilobytes? Let's go check it out. Oh, you know what — it's going to download the models later; that's why I was thinking those models are not that big. It'll definitely download them in a moment.

So once you're in here you get your Jarvis scripts — these are what start, stop, and initialize the server. First we need to make them executable. I'm not sure what jarvis_clean.sh does, but we definitely want at least init, start, and stop: chmod +x jarvis_clean.sh — we'll throw that in there too — jarvis_init.sh, jarvis_start.sh, and I guess we'll grab jarvis_start_client.sh as well. What are we missing — init, start, start_client, clean — and we'll throw in the old stopparoo too, jarvis_stop.sh. Cool.

Now we start with init, which I bet is what downloads all of our models: jarvis_init.sh. (Hey bud, keep it down, I'm filming a tutorial.) This will probably take some time, as the message states, so I'll be back when it's done.

Okay, looks like we're good to go. The whole process took maybe 20 minutes — I ended up walking away and brewing some coffee. That was jarvis_init.sh; what we run now is jarvis_start.sh, which I think starts up all the models we just downloaded — that's what was taking so long, grabbing all those models. My understanding is that once this starts — again, at this point we've shelled into the DGX Station, but Jarvis itself runs via a Docker container — we can run jarvis_start_client.sh, and that will open up a client container. Let me check something real quick... okay, so now the server is ready. I'm not actually sure if we should start the client or — I think we should probably just run
that. Let's do jarvis_start_client.sh. ls — yeah, we have Jupyter here. cd into notebooks: a QA demo, a speech API demo. Let's launch Jupyter. Okay — it took me all too long to read the manual, as it usually does; the proper command is jupyter notebook with the IP set to 0.0.0.0, --allow-root, and the notebooks directory. Run that, and I'll open it over here.

The first thing we'll take a peek at is this speech QA demo. We've got our licensing information here, and essentially these notebooks are just here to show you some examples of what's possible — contained within Jarvis is everything you could need, and these show some of it. First up: Wikipedia question answering. I believe the model here is BERT — the transformer model. The idea is you take a bunch of Wikipedia articles, pass them through as context, and then do question answering against that.

So we install the wikipedia package as needed, then import wikipedia, grpc for the server communication, and the NLP pieces from the Jarvis API. Coming down, this is an example input query: it searches Wikipedia for however many articles we want, and builds one gigantic summary from the content of those articles. We'll run that and wait for it to finish.

Cool — all of that was just assembling the context; now we query the Jarvis server. In this case we hit localhost, because this notebook runs on the same machine. Later, if I wanted to query Jarvis on the DGX Station from my main computer, I'd use the hostname — dgxa100omg, or whatever you can see I named this machine — or the local IP, I presume.

Let's run it. The notebook came saved with output, but can we take a moment to appreciate the inference speed of this server? That's like nothing. The Wikipedia part took a little bit because it queries Wikipedia's API, but this — an inference on what's probably a BERT model — is really fast.

Of course, I want to use a question that isn't Wikipedia's — NVIDIA's — choice. First: "who is Elon Musk." I'm not even sure the question mark is actually required; let's see what happens. "Views of Elon Musk"... uh-oh. Okay, I think I remember seeing this before — it's a Wikipedia-side thing. I don't even know how to read this error, and I'm not going to bother figuring it out at this stage; this is just meant to be a demo. My expert solution: wrap it in a try, tab that over, except Exception as e, and — just so we know it's happening — print the error. I'm just not used to working in notebooks; I keep expecting the editor to move my cursor when I type parentheses.

Okay, now it did the search... "not explicitly partisan" — gosh, that's really philosophical. Let's come back up here and, instead of a max of three articles, go with like seven — something substantial, throw in a bunch of data, and see if we get a better answer. Okay, we grabbed a bunch of articles; come down, query the Jarvis server: who is Elon Musk? "Business magnate, industrial designer and engineer." That's a highly accurate answer — apparently all we needed was more articles.

Let's ask "what is object oriented programming," and then I want to ask "what is OOP." I'm curious if that'll work — I'm expecting no, but it would be so cool if it did. "What is object oriented" — "a programming paradigm based on the concept of objects." I wonder why it chopped off the closing — I can't think of the word — quote. I'm getting kind of hungry. Now, "what is OOP" — can it figure out an acronym? Oh — this one is correct, I bet, because of all these other ones. Let's see what it says. No way. Oh my god, I can't believe that worked. Obviously that may be because of what was passed in as context — I don't know, but I'm impressed; I really thought we were going to fool it with that question. You can tinker with that on your own if you want; let's check out one of the other notebooks. Man, I can't believe that worked.

Okay, what is this — Python API examples, the basics. Again, just examples of using Jarvis; the last notebook was very specific question answering, and this one has a bunch of different examples. It looks like there's a demo video of using Jarvis in here — I'm not going to play that, for fear of YouTube issues. They're introducing some of the things Jarvis can do, and this really interests me.
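The context-assembly step of that QA demo is worth seeing in miniature. A sketch, assuming the summaries have already been fetched (in the notebook they come from the `wikipedia` package; the Jarvis gRPC call that consumes the result is not shown here):

```python
def build_context(summaries, max_chars=20_000):
    # Concatenate article summaries into one big passage for the QA model.
    # Raising the article count (three -> seven in the video) simply makes
    # this passage larger, which is what improved the "who is Elon Musk" answer.
    return " ".join(s.strip() for s in summaries)[:max_chars]

# Hypothetical summaries standing in for wikipedia.summary(...) results:
context = build_context([
    "Elon Musk is a business magnate, industrial designer and engineer.",
    "SpaceX is an aerospace manufacturer founded by Elon Musk.",
])
```

The question plus this context then go to the QA endpoint; the model extracts its answer span from whatever passage you hand it, so more relevant context generally means better answers.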
Because question answering — once you have things in text — is really cool, but if you want to build a home assistant or a chatbot you can actually, fluidly converse with, this is really nice. And I cannot stress enough how awesome that inference time is. A lot of the time you inference on a CPU; this is inferencing on the GPU, but even inferencing a transformer model on a GPU you're often looking at two seconds or more — a considerable delay, enough that you couldn't carry on a fluid conversation. This step, at least, is darn fast.

Anyway, I digress. Beyond this, if you really want to converse you need speech-to-text and then, later, text-to-speech, to make it fully end to end. Here they're talking about ASR — automatic speech recognition, essentially speech-to-text. It's not just speech-to-text, which I think is why they say ASR: there's a whole lot more going on. There's a live microphone demo that I'm going to try to get to by the end of this video — that'll probably be where I leave off, and maybe next video I'll start diving into more specifics. Again, my goal is essentially that Reddit chatbot with all of these pieces incorporated — a home assistant that is not boring. That's the dream.

So we'll run through some of these examples — you can work through them on your own, but we'll see if we hit any issues, like earlier, where changing the article count to seven was pretty substantial and we added the try/except around the Wikipedia call. I still can't believe that answer; I hope you're as impressed as I am.

Okay, in this case we can see they're doing some NLP — general understanding of sentences — then ASR, which is mostly speech-to-text, and finally, obviously, text-to-speech. We'll do all those imports, and again, we have all of this because it's in a container. What actually makes this neat: everything runs here on this machine, but you can query it from any machine — you're just making a request. You could query from a Raspberry Pi, for example; you don't need a super high-end machine except to actually run the server.

So, imports done. Here we've got an offline ASR example — they also have a live-mic example, I think in a separate file, and I will run that, because it's so cool. The offline case is as if, for whatever reason, you're bringing in recordings as wave files — if you stored them in a database or something, you'd store them in some such format. "What is natural language processing" — hopefully you could hear that; it's just somebody's voice saying "what is natural language processing." So far we haven't done anything Jarvis-related; we just loaded a file. Then it looks like we build the Jarvis automatic-speech-recognition request and set some values. Most of these you'd probably keep as-is: max alternatives is one, and — I bet there's a confidence field; yes, there is — you could check alternatives if you wanted. My guess is that the more alternatives you request, the longer it might take. Who knows — I'm just making guesses.
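For the offline case, the request is basically the raw audio bytes plus the wave file's own header info (sample rate, channels). Using only the standard-library `wave` module, pulling a file into that shape looks like this — the dictionary keys are placeholders, since the real request field names depend on the API version:

```python
import wave

def load_wav(source):
    # Pull raw PCM bytes plus the metadata an ASR request needs.
    # `source` can be a filename or any file-like object.
    with wave.open(source, "rb") as wf:
        return {
            "audio": wf.readframes(wf.getnframes()),  # all PCM frames as bytes
            "sample_rate_hz": wf.getframerate(),
            "channels": wf.getnchannels(),
        }
```

An offline recognizer gets the whole buffer at once, which is why it can only answer after the clip ends; the streaming mode shown later trades that for incremental results.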
Let's run it real quick — although these outputs already came saved — and we see "what is natural language processing." So, no surprise: the wave file from NVIDIA's own demo worked. If you wanted to apply it to wave files, this is one way you could do so. But the live-mic example is what I want to get to in a moment.

Again, the thing I really want to stress — I wish they had timed all of these in the notebook — is that this is definitely sub-one-second. I don't know exactly what it is, but the response time here is insane. If I built this it'd probably be three to four seconds, and that's including on the GPU. Maybe it wouldn't be — I'm pretty sure it would, though. It's so fast. Granted, it's running on that DGX Station, which, as we found in a previous video, is something like two to three times faster at inference — but I have run this on the Titan RTXs in this machine here and it is still blazing fast. I wish I could make models go this quick.

Okay — NLP. That was your speech recognition, basically speech-to-text; now the NLP examples. First, punctuation: add punctuation to a sentence (the demo input mentions nvidia, all lowercase). We run that — wait, where did it go? Oh, here. Oh, this is actually interesting; I don't think I even noticed this my first time through the notebook, and it's very useful. If you've ever done chatbots, generally — well, I haven't removed punctuation from my chatbots, but you generally lowercase everything, for example. So that's pretty cool: in this case it just knows, hey, that one needs a capital. That's so cool. Now that I've seen it — although shouldn't "NVIDIA" be all caps? — it makes sense why you'd normalize: you can have way fewer tokens. If there's no uppercase version of a word, or you don't have to use punctuation, that's fewer tokens you'd need for your model. I'm not sure I'd want to get rid of punctuation — I think punctuation often helps the model learn — but there are probably all kinds of reasons you'd want to add it back, and as far as capital letters are concerned, I really like that. I did not see that my first time through.

Coming down here — I always want to actually run stuff just to make sure. Why do I keep getting stuck? That's so weird — I guess because it's highlighting the cell below, and then I run that one. Anyway, I want to run the cells so we know we don't hit errors. Next: NER, which I believe stands for named entity recognition — and that's what it says right here. It takes in some sentences and recognizes: Jensen is a person, NVIDIA is a corporation/organization, and Santa Clara and California are locations. Depending on what you're doing — this is an NLP task, not question answering — a lot of times you're going to want to figure out who we're talking about, or where, in certain scenarios. If you're building, say, a weather app like the one they've got going on down here, figuring out how to pull that location can really help. Then, text classification — I guess you're trying to classify what some text corresponds to.
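The token-count point is easy to see in code: normalizing text the way raw ASR output arrives collapses several surface forms into one vocabulary entry, which is exactly why a separate punctuation/capitalization model is useful afterwards. A minimal illustration:

```python
import string

def normalize(text: str) -> str:
    # Lowercase and strip punctuation, the way raw ASR output tends to look.
    return text.lower().translate(str.maketrans("", "", string.punctuation))

def vocab(texts):
    # Distinct tokens remaining after normalization.
    return {tok for t in texts for tok in normalize(t).split()}

# "NVIDIA", "Nvidia," and "nvidia" all collapse to a single vocabulary entry.
print(vocab(["NVIDIA", "Nvidia,", "nvidia"]))  # -> {'nvidia'}
```

A model trained on normalized text needs a smaller vocabulary; the punctuation/caps model then restores readable formatting as a separate step, which is what the demo shows.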
In this case there are only four classes, so this gets interesting when you have a real specific use case — you'd probably transfer-learn on your own dataset, unless you're doing weather stuff. I'm curious to see how well the Transfer Learning Toolkit works.

Then finally, text-to-speech. I don't know how well this will work: "is it recognize speech or wreck a nice beach?" That sounds pretty good. Not sure if punctuation matters in this case — "this is a lot of cool stuff" (whoops, I got lost), "this is a lot of cool stuff" — no, I don't think the punctuation was changing anything there.

Wow, there are so many examples here. Analyze intent: "is it going to rain tomorrow?" Again, if you wanted to convert this into some sort of app — you spend so much time in NLP trying to figure out what a person actually wants, so depending on your app and how general or specific it is, this is very interesting. What did I just paste in there? Oh, some sort of score. Okay.

I still kind of want to peek at more of these, but I think we'll end on the microphone, since the microphone is the coolest one. And again, with each of these things — I could never get anywhere near this speed, not even a tenth of it, even on RTX cards (and the Ampere cards will be faster still). I've done all of these things — text-to-speech, speech-to-text, question-answering models — so I might have to make the transition. It depends; I'll have to look at transfer learning the BERT model too, and see what we can figure out.
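Classification and intent results come back as per-class scores, and picking the model's answer is just an argmax over them. A hedged sketch — the class names and scores here are made up for illustration, not the demo's actual four classes:

```python
def top_intent(scores: dict) -> str:
    # Highest-scoring class wins; ties resolve arbitrarily.
    return max(scores, key=scores.get)

# Hypothetical score map, like the one pasted into the notebook:
scores = {"weather": 0.91, "smalltalk": 0.05, "time": 0.03, "other": 0.01}
print(top_intent(scores))  # -> weather
```

In a real app you'd usually also threshold the winning score, falling back to a default response when the model isn't confident about any class.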
So, the last thing I want to show is the microphone example, and I'm not positive exactly how I want to do that just yet — it depends on how this is all set up. For the Jarvis API package, as well as that transcribe-mic file, there are many ways you could do it. One easy way would be to install the NGC CLI on your local machine and grab the Quick Start locally; or you can copy the files over with scp or FileZilla. As for the files we need: I am local, but I'll get into the DGX Station here. At least for me, the Quick Start installed literally to the home directory, so we cd into jarvis_quickstart, and inside is where you'll find the wheel file. You could just copy that over locally, and then to install it — if you didn't know — you just pip install it; in this case jarvis_api, and that installs the API for you. Finally, the microphone file is in this examples directory, so cd into examples. There's some other stuff in there you might want to take a peeksie at too — alongside the notebooks there are some cool flat files, and if you're interested in this stuff I strongly recommend checking some of the others out. For now, though, we're looking at this transcribe-mic file.

Basically, all I did was copy the transcribe-mic file to be local rather than where it was; the only other thing you'd need to do is install the Jarvis API. Briefly, about this program: if you open it up, it's using a lot of PyAudio, at least to start — it just pulls from your microphone. The only things we really need to pay attention to are the server — by default it's localhost, and we need to change that, because we're going to query the DGX Station; that's where Jarvis is hanging out — and the input device. You probably have no idea which input device to use, and I'm actually not sure either; we'll run it and see if it figures that out, but my guess is we'll have to change it as well.

So: python3 transcribe_mic.py, with the server set to — what is that DGX hostname... there we go — dgxa100omg, and then what was the port... 551 — okay, I think that's it; let's just see what happens. ...Oh, it fixed it — oh my gosh. Yeah: if you don't get giddy about what you're seeing here, then something is wrong with you. That's super cool, and it is nearly instantaneous. I don't know what to say other than: that is just incredible. Let me pull this up so you can see it — oh, I moved the window and it got all freaked out; I'm sure there's all kinds of fancy console stuff going on to make it live-update the screen. I cannot believe how good and how fast that is.

The thing is, not only do these models actually work — wait, why is it doing that? It's probably because I adjusted the window. Let me just restart it real quick; I'm curious why it's spamming that out. Anyway, I think I'm going to end it here. Obviously that was a whole lot of just... stuff, but mostly it's about being aware that this exists and it's out there for you, for free. I wonder why it spams that — I'm highly confident that's an issue with the display rather than the actual model glitching out.

So anyway, this is so cool. You can either do the streaming method or, as with the wave file, break the whole recording down and do a prediction on the entire thing — so you can begin to predict live, like what's going on right here, or predict at the end. "What's just going red here" — interesting; it obviously makes some mistakes for sure. I'd like to compare the streaming mode to something that listens and then just calculates at the end what it thinks you said. The other interesting thing: as I talk, you can see that a lot of the time it goes back and adjusts what was said. That spam is driving me bonkers, though — I hadn't seen that before, and I wish it wouldn't do that.

Anyway, that's all for now. I hope you guys are thoroughly impressed, because this is all so cool — except for the spam, but I'm confident that's some sort of console issue rather than the model. There's so much here. Now I just need a working chatbot, and then the home assistant is done — and I know a lot of people who watch also want to build home assistants, so definitely take a peeksie into the old Jarvis. I think you'll be seeing some Jarvis in the future from me, because this is really cool. Hopefully you guys don't mind this kind of format where I'm just running through stuff — I spend so much time checking things out, and sometimes it makes it on camera and sometimes it doesn't. I do wish NVIDIA would fix the spam; the last time I ran this I don't recall that ever happening, so it's kind of weird — I don't know if it's because I zoom, or what's causing it... and man, it fixes itself, so... well, anyway. That's all for now. Questions, comments, concerns, whatever — you know the deal, feel free to leave them below. Otherwise, I will see you guys in another video.
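The difference between the two modes shown — transcribing a whole wave file versus the live mic — comes down to feeding the recognizer fixed-size chunks as they arrive instead of one big buffer. The chunking itself is simple; in transcribe_mic.py the chunks come from PyAudio's microphone stream, but here they come from a plain bytes buffer so the sketch runs anywhere:

```python
def audio_chunks(buffer: bytes, frames_per_chunk: int = 1024, sample_width: int = 2):
    # Yield fixed-size slices of PCM audio, the way a streaming
    # recognizer consumes a microphone feed chunk by chunk.
    step = frames_per_chunk * sample_width
    for start in range(0, len(buffer), step):
        yield buffer[start:start + step]

# A streaming client sends each chunk as it is produced and prints interim
# transcripts, revising earlier words as more audio arrives -- which is why
# the live demo visibly rewrites its own output on screen.
chunks = list(audio_chunks(b"\x00" * 5000, frames_per_chunk=1024))
```

With a real microphone you would replace the bytes buffer with reads from a PyAudio input stream; everything downstream stays the same, which is what makes the streaming and offline paths share so much code.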
Info
Channel: sentdex
Views: 34,666
Rating: 4.9687905 out of 5
Keywords:
Id: fQzjgaKSrkc
Length: 43min 54sec (2634 seconds)
Published: Fri Apr 16 2021