Project Walkthrough: askFSDL (LLM Bootcamp)

Captions
I wanted to walk everybody through the codebase for the Discord bot that you were interacting with this morning. As a reminder, that's the ask-fsdl bot channel in the Discord. It's a somewhat more mature version of what I walked through building this morning: sourced question answering over a corpus of information, using vector storage for retrieval.

There's a GitHub repo for this project — I posted the link in the Discord as well — so you can follow along and look at some of the code yourself. There are a bunch of moving pieces to set up, like a document database and an account on Modal, so I doubt you'll be able to execute the code unless you happen to have accounts on every single service we're using, but you can at least check the code out and follow along. And I'd still encourage people to ask questions in the Q&A channel.

All right, let's dive in. What are some of the primary things you need to improve as you go forward? Some of it is basic stuff that's just generic Python project management. Let me drop into VS Code here. One nice little pattern we like at FSDL for a lot of our projects is a Makefile for all the components of the project, with a nice help message. It lists all the different pieces, and the Makefile can set up and run a bunch of them: set up the Discord bot, deploy the backend, set up a vector index, all that good stuff. So that's where you'll want to go if you want to see the setup commands — things like the environment, and setting up authentication with a bunch of our tools.

What other things might you want in a project? Earlier, we were
just kind of yoloing code in Jupyter notebooks and other places. In this project, we've got a bunch of nice software tooling. One of the main pieces is pre-commit. This is just a code-cleanliness thing: if you're doing software engineering, you want to make sure that when a team is collaborating on a codebase, people are good stewards of the code. Every time somebody's about to commit something, pre-commit runs some checks to make sure they haven't made any common mistakes: added extra whitespace, messed up the syntax of common file formats, forgotten they were in a merge, added large files — lots of simple little checks.

Then there are a bunch of other code-quality tools. We like to use Black for Python auto-formatting, to make sure all the Python files look the same. And we've recently switched linters: in past iterations of the course we used flake8 for linting — catching simple stylistic errors and enforcing style conventions — but Ruff, which came out and became a lot more popular in the last six or nine months, is a Rust-powered linter for Python that's really quite nice. Let's run those checks real quick to make sure I haven't messed anything up.

There's one more tool I wanted to quickly shout out, which is ShellCheck. If you find yourself writing a lot of bash scripts — you know, a script to copy YouTube data or whatever — and you find yourself Googling things or going to ChatGPT to check them, ShellCheck is really nice for catching the dumb things about arrays and if-statements and everything else that bedevils bash scripting.

So: some basic software-engineering things. These are the sort of tools you need to support any software-engineering project using Python.
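As a concrete illustration, a pre-commit setup along these lines looks roughly like the following. This is a sketch, not the repo's actual config — these are the standard hook sources, but the `rev` pins are placeholders you'd replace with real tags:

```yaml
# .pre-commit-config.yaml -- illustrative sketch, not this repo's actual file.
# Replace each <pinned-tag> with a real release tag before use.
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: <pinned-tag>
    hooks:
      - id: trailing-whitespace      # extra whitespace
      - id: check-merge-conflict     # forgot you were in a merge
      - id: check-added-large-files  # accidentally committed large files
      - id: check-yaml               # messed-up syntax in common files
  - repo: https://github.com/psf/black
    rev: <pinned-tag>
    hooks:
      - id: black                    # Python auto-formatting
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: <pinned-tag>
    hooks:
      - id: ruff                     # fast, Rust-powered Python linting
  - repo: https://github.com/shellcheck-py/shellcheck-py
    rev: <pinned-tag>
    hooks:
      - id: shellcheck               # catches common bash footguns
```

Running `pre-commit install` once puts these checks in front of every `git commit`.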
But what about the things that made this project in particular better? I'm jumping around all over the place, but what I wanted to show you is the data-processing step. One of the first things we realized is that the bare-minimum version of this — which looked a lot like what I did in the morning: scrape data, chunk it up into little pieces, and use those as your sources — turns out not to work super well. It doesn't give really good results for people. What we found was that we got the biggest improvements in the quality of the results by spending time with our data.

There's a little Jupyter notebook that walks through some of the things we did to improve our data, and it also happens to set up the overall document database, so you can see how that's done. You can bring your own MongoDB database; we used the Atlas hosted platform.

To give you an example of the kind of thing that turns out to be critically important: in addition to the PDF files we talked about this morning, we also pulled a lot of content from the Full Stack Deep Learning website, where we host notes about our lectures and other kinds of things. If you just treat that as plain text and parse it as text, you lose all the structure that's in the features of the markdown. It's got section headers, it's got separate paragraphs — there's a lot of richness there, and in a lot of cases those features end up being links to specific components of pages. The goal of this bot is to provide sources that help people quickly find information, and so preserving that structure when you bring the documents in turned out to be pretty important.
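To make the "preserve the structure" point concrete, here's a minimal sketch — my reconstruction, not the repo's actual code — of splitting a Markdown document on its section headers while keeping a URL fragment for each section, so every stored chunk can deep-link back to its place on the page:

```python
import re

def split_markdown_by_headers(markdown: str, page_url: str):
    """Split a Markdown doc into sections, each tagged with an anchor link.

    Keeps each header with its section text, so structure (and a deep link
    like page#section-slug) survives into the document store.
    """
    sections, current_header, current_lines = [], None, []

    def flush():
        if current_header or current_lines:
            slug = re.sub(r"[^a-z0-9]+", "-", (current_header or "").lower()).strip("-")
            sections.append({
                "header": current_header,
                "text": "\n".join(current_lines).strip(),
                "source": f"{page_url}#{slug}" if slug else page_url,
            })

    for line in markdown.splitlines():
        match = re.match(r"^#{1,6}\s+(.*)", line)
        if match:
            flush()  # close out the previous section
            current_header, current_lines = match.group(1).strip(), []
        else:
            current_lines.append(line)
    flush()
    return sections

# Hypothetical page content and URL, just for illustration
sections = split_markdown_by_headers(
    "# Lecture 1\nIntro notes.\n## Vector Stores\nDetails here.",
    "https://fullstackdeeplearning.com/llm-bootcamp",
)
```

Parsing the same text as a flat string would throw away exactly the header-to-anchor mapping this keeps.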
This does require a little bit of knowledge of the data you're scraping. If you're scraping internet-scale stuff, that's going to be a lot more challenging; but if you're building something oriented toward Q&A over your company's knowledge store, then maybe you have enough context to do these things. Improving the automated processing of text is definitely something that large language models will hopefully be able to do for us in the next couple of years. For now, it's a matter of searching around and finding the right Python libraries for solving these kinds of problems, without having to write a bunch of hard-to-maintain code yourself.

The goal of all this splitting is just to get some basic format for these documents that preserves the information needed to link to them, and preserves as much of the structure as possible. That's distinct from what you might also see people doing with tools like LangChain and things integrated with it, which is chunking stuff up to put it in a vector store. Chunking is a matter of making sure something is small enough to fit into the context of a language model, as opposed to parsing a document and preserving its structure while you store it.

The other thing: in the morning we talked about having to load PDFs, and PDFs are kind of an image format in a lot of ways, rather than text — but it's plausible that you can just get the text out of a PDF. And that's not the only kind of thing you can potentially index. There are lots of ways to automatically extract textual information from other kinds of sources. You can pass images into an image-to-text model — the sort of reverse of something like DALL·E or Midjourney — something that
takes in an image and returns a description of its content. The thing we focused on was our YouTube videos. We put videos of our content online, those videos get transcripts essentially for free from YouTube, and we can use those transcripts as sources for the question-answering bot. Finding creative ways to extract data from your knowledge bases and make it accessible to LLMs — via embeddings, or via other ways of connecting it up with a language model — is, I think, a big open field for making more useful language-model applications.

Spencer spent a good chunk of time trying to figure out how to use the YouTube SDK before realizing there's just one Python library that'll do it for you: youtube-transcript-api. Which you love to see. We can only hope that the number of dumb little Python libraries that solve one little task goes up over time with language models.

But even when you have pretty good access to quickly get a hold of data, that's often some kind of scrape that isn't taking your particular problem into account. The problem I ran into with the YouTube videos was with the timing tags. Timing tags are awesome: they can be used to link to a specific moment in a YouTube video, which is great for this sourcing application, because I can bring people exactly to the point in the video where their question is answered. But the subtitles are time-tagged about one second at a time, because that's what's used to layer them on top of the video. There's a pretty simple fix, which is just to go in and chunk the transcript into pieces that each contain a useful amount of information — say, a couple hundred to a couple thousand tokens of text. So it's a relatively simple solution.
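Here's a rough sketch of that fix — my reconstruction, not the repo's exact code. It assumes segments shaped like youtube-transcript-api's output (dicts with `text` and `start`) and merges the roughly one-second captions into chunks big enough to be useful, keeping the first timestamp so each chunk can deep-link into the video:

```python
def merge_transcript_segments(segments, video_id, max_chars=1000):
    """Merge ~1-second caption segments into larger, linkable chunks.

    Each chunk keeps the start time of its first segment, so it can link
    to the exact moment in the video where a question is answered.
    """
    chunks, buffer, start = [], [], None
    for seg in segments:
        if start is None:
            start = seg["start"]
        buffer.append(seg["text"])
        if sum(len(t) for t in buffer) >= max_chars:
            chunks.append({
                "text": " ".join(buffer),
                "source": f"https://youtu.be/{video_id}?t={int(start)}",
            })
            buffer, start = [], None
    if buffer:  # flush the final partial chunk
        chunks.append({
            "text": " ".join(buffer),
            "source": f"https://youtu.be/{video_id}?t={int(start)}",
        })
    return chunks

# Toy segments standing in for YouTubeTranscriptApi.get_transcript(video_id);
# "VIDEO_ID" is a placeholder, not a real video.
segments = [{"text": f"word{i}", "start": float(i)} for i in range(10)]
chunks = merge_transcript_segments(segments, "VIDEO_ID", max_chars=30)
```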
But these are the kinds of things you won't think to do unless you're thinking about your particular data source and the particular problem you're trying to solve. That was a big increase in the quality of the results here. Personally, one of the things I've found this bot most useful for is finding components of our videos: it's like, I know I talked about this at some point; you just ask the ask-fsdl bot, and it pulls me directly to the link. So there are big dividends that come from this kind of up-front work — this unglamorous work of getting to know the data and writing the dumb little code to manage it properly.

That's the main thing I wanted to talk about in this notebook. It goes a little bit into how we get everything into the document store, but that's a bit less interesting. Are we getting any questions in the Q&A channel?

"Can you talk more about how you're using Modal, and why?"

That's suspiciously close to my next planned topic. Are there any other questions before I do that? There aren't yet — unless my internet's slow. Could be. All right, let's do that then.

Real quick, I also want to pull up the architecture diagram, because as dumb as architecture diagrams are, they're really helpful for keeping it all in your head. People keep asking whether I'm going to share the notebook: yes — there's a link to the GitHub repo in the Discord, and the repo contains the notebook.

Okay, so what were we just talking about? The ETL component of this: how do I extract information from the places where it's located, out there on the web or in the general data storage of my organization; how do I transform it into the format
that is most useful; and then how do I load that into some specific storage for that specific transformed data? That's the top-left corner of the diagram. I showed all of that happening in the Jupyter notebook, but it's the sort of thing you might want to use a tool like Modal for.

So, Modal. You may have noticed I was just grabbing random Python packages and using them for stuff. The nice thing about that is it brings in exactly the code you need for the particular problem you're trying to solve. But once you start installing 50 or 60 dumb little Python packages, all of a sudden they disagree about which version of some seventh-order transitive dependency to use, and now they no longer work. The Python dev-tooling stack is specifically bad about that — which is a great reason to use that pre-commit tool, since it isolates the environments and makes sure that, say, Black and flake8 don't end up with incompatible versions of some dependency. There was a particular example that was really bad and broke a bunch of installs about nine months ago.

There are a bunch of things solved by the Modal component of this application, but one of them is that you can separate out different container images for each thing you want to do. To clarify what that means: you're basically creating a lightweight virtual machine that runs in Modal's cloud for each task you want to solve. I didn't do as much breaking down of the Modal images as I would like in this particular example, but that's one of the core benefits. You define one of these images — a Modal image — pick some base version, and then customize it. Here it's all pip installs, but you can do all kinds of other things. If you're familiar
with making Docker containers yourself, a lot of this works the way you'd expect: mounting things, installing Linux packages, running shell commands, etc. One big reason I really love Modal — around six months ago I started looking into options, and this is the one I ended up sticking with — is that it's not literally a Docker image. It's an Open Container Initiative-compliant implementation, and it's incredibly fast. That means a lot of the pain you might have experienced with other, similar tools for cloud-native development goes away.

Maybe some quick evidence of that would be the debugger. With these tools, things are often quite slow — creating containers can be slow, pushing container images around can be slow — so you end up introducing a bunch of friction that reduces the actual benefit to your deployment speed. With Modal, the speed is quite good, and there are a lot of nice little features added that help make sure your development cycles stay really fast. This example here is supported by just one little block of code down here: I'm writing a little function that imports IPython and then spins up an IPython kernel. This is actually now one of my preferred ways to do debugging, even locally, but I discovered it as the suggested way to debug stuff on Modal. It includes the context of the things you're running in the cloud infrastructure, because it's running there, but it's nearly as fast and nearly as convenient as running an IPython setup locally. I forget which things I implemented for the one-hour version versus this version — I think tab-completion maybe isn't working. Yeah, there we
go. Okay, so I can run things. That function right there, the Q&A LangChain function — that's something that runs in my backend, in an environment that I don't want to try to replicate locally or keep inline. But I can run it a little bit as though I were developing locally, just by connecting to a container that's running on Modal and working in interactive mode.

So, the main pain points that are going away: in the morning I talked about launching 100 containers at once and running them on all your little collected tasks, so that's obviously an accelerator, and we talked about the ability to spin up specific environments and avoid dependency problems. And on top of all that, the goal of Modal isn't just to be a data-engineering platform or anything like that: it's also designed to get you all the way to applications. It has things built in like ASGI (asynchronous server gateway interface) endpoints — very well-designed abstractions that you can wrap on top of the stuff you're building to end up with an application like the one y'all have been interacting with. And you get a decent interface into it: I can check the application and see activity and get a lot more detail; you can attach GPUs if you want; and you can track things like utilization. All of this is done serverlessly, so whenever you need compute, you get a hold of it, and when you don't need it, it goes away. A lot of stuff right now is in this very tinkering-heavy phase, where maybe you only have users for 20 minutes in the morning, right when you're showing them your application, and
then you have no users for the rest of the day. It sure would be a shame to have to spin up a bunch of servers to handle those requests and then leave them on while nobody's talking to you. That scaling up and scaling down is built into Modal and, in my experience, works pretty well. It's definitely challenging if you're loading a machine learning model, putting it on a GPU, and serving requests — that's a pretty hard case for serverless — but Modal does an okay job at it, in my experience.

The other piece I wanted to call out, which was added to this project and is generally useful, is a Gradio user interface. Scrolling down — there's our Modal ASGI app. Gradio, for me, is the tool that means I don't actually have to learn JavaScript. Since then, ChatGPT has also meant I don't have to learn JavaScript. What Gradio primarily provides is that you can describe a user interface in pure Python and end up with a kind of simple, single- or few-page application that has that interface in it. It's surprisingly flexible, and it's supported by Hugging Face, so they're rapidly adding all the features you would want as somebody building with machine learning. Here's an example — this also happens to be the interface that was used by Alpaca and, I think, Flamingo as the way to demonstrate their models, with a little Gradio app; DALL·E Mini was originally a Gradio app too. So it's a great way to get started really quickly. And that was a little server cold boot right there — I just hit refresh and it took a few seconds — so if you're worried about your p99s, you might want to do something like keep some things alive at
all times. But yeah: I describe this interface entirely in Python, and it interacts with stuff that is primarily in Python — LangChain, maybe I'm doing stuff with my embedding vectors, maybe I'm running my own models; a lot of that is going to be in Python — and I can play around with it here. It's so easy to spin one of these up that I like to do it all the time. Say I run a training run for a model: I might just spin up a UI for that specific version of that model, so I can go back to it later and ask, wait, when did this weird behavior get introduced in my models — was it in this one, or was it three months ago, or one month ago? I can check my language model's behavior on important questions like: would you rather fight 100 llama-sized GPT-4s, or one GPT-4-sized llama? And I can go back and check that example against all kinds of different models.

So, Gradio: quick UIs, both for demos and for other places where a UI would be great but you don't want to take the time to build one. You can run them in Jupyter notebooks, you can embed them as iframes — they're very portable and flexible. And they come with an API — with an OpenAPI spec, I believe always — so you can hook them up to a bunch of different tooling; these are getting hooked up to things like ChatGPT plugins via the OpenAPI spec. It even comes with a client SDK for everything you make, so you don't have to interact with it via raw HTTP requests. So now, check it out: I have this client SDK for running my Q&A-with-sources service.

Okay — any questions in the Discord? "How did you actually hook up Discord to this?" Oh yeah, that was kind of the least interesting part,
but I can show it. There is, of course, a Python library for running Discord bots — discord.py is the one I went with. I've actually heard from other people that there's one called interactions that's a little bit better, so you might want to try that one instead, but this one was perfectly fine.

Looking back on it, Discord is not the easiest interface to run stuff with; there are a lot of very fiddly details. For example, they're not actually called forums or groups or servers — they're called guilds, dating back to when Discord was for organizing WoW raids. So it's a weird interface. But there's some nice stuff too: it was a good experience with async Python, for being able to handle more requests, and it's very simple to add these commands and end up with a nice user interface, so I shouldn't talk too much trash about it. You just define these events.

Maybe the more interesting thing — maybe the question the person was really asking — is less about what it looks like to run this Discord bot, and more about how the Discord bot talks to the service. If I remember correctly, that's in... yeah. We had those slash commands, so if you wanted to run this, you'd type /ask — there are some nice features in Discord, like arguments and documentation for those arguments. Where are we at... ah, it was in the runner, not in the answer function. So: when you set something up as a web endpoint on Modal, you just have normal HTTP access. If this were synchronous Python, we could just use the requests library and this would look really simple: a GET request to a particular URL with certain parameters.
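In the actual bot, the call is asynchronous. Here's a schematic of that pattern with a stand-in client so the shape of the code is runnable — the real code would use an async HTTP client (e.g. aiohttp) against the Modal endpoint URL, and the URL below is a placeholder:

```python
import asyncio

class FakeClient:
    """Stand-in for an async HTTP client, so this sketch runs anywhere."""
    def __init__(self, canned_response):
        self.canned_response = canned_response

    async def get(self, url, params=None):
        await asyncio.sleep(0)  # yield control, like real network I/O would
        return {"url": url, "params": params, "answer": self.canned_response}

async def ask_backend(client, question):
    # async/await everywhere: clunkier than `requests`, but it lets one
    # process juggle many in-flight Discord commands at the same time
    response = await client.get(
        "https://example.invalid/hosted-backend",  # placeholder endpoint
        params={"query": question},
    )
    return response["answer"]

answer = asyncio.run(ask_backend(FakeClient("42"), "What is RLHF?"))
```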
But because it's async Python, everything has to be async with, await, return. It's a bit clunky to write this stuff, in my experience, but what you end up with is a pretty scalable web server in Python. Oh, and that's an important point: Gradio provides these UIs, and underneath the hood is FastAPI. FastAPI is one of the more user-friendly ways to get an asynchronous Python web service running. So if you wanted to go another level down, below Gradio — if you wanted to offer stuff where a user interface didn't make as much sense — FastAPI would be a way to do it, and I believe that's what's underneath this Discord bot as well.

Jumping back to our diagram to make sure where we're at is clear: I was just showing the Discord bot server, which sits on a little cloud instance on AWS, and the meat of the application is all the stuff running on Modal's infrastructure, as these little containers that get spun up as needed to serve traffic.

"What library did you use to generate embeddings?" Oh yeah, I guess I didn't call out the embeddings here. OpenAI offers embeddings: they have a bunch of additional embedding models that aren't good for generating text, but are good for extracting the content of text and putting it into vectors. It's the OpenAI ada-002 embedding model. It's dirt cheap — like 100 times cheaper than the generation endpoints, maybe more — and it gives decent results. I think it would be a good idea to add additional types of search here: right now I just have the data storage feed into a vector index, and that's the only communication. I think you really want your data storage, with your metadata there, to actually pull out the information, not just the embedding.
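For reference, the vector-index side of that communication boils down to nearest-neighbor search over embeddings. A toy illustration — my sketch, not the project's code, with tiny hand-made vectors standing in for ada-002's high-dimensional ones:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query_vector, corpus, k=2):
    """corpus: list of (text, vector) pairs; vectors would come from ada-002."""
    scored = sorted(
        corpus,
        key=lambda item: cosine_similarity(query_vector, item[1]),
        reverse=True,
    )
    return [text for text, _ in scored[:k]]

# Tiny hand-made "embeddings", purely illustrative
corpus = [
    ("lecture notes on vector stores", [0.9, 0.1, 0.0]),
    ("bash scripting tips", [0.0, 0.2, 0.9]),
    ("retrieval-augmented generation", [0.8, 0.3, 0.1]),
]
results = top_k([1.0, 0.0, 0.0], corpus, k=2)
```

A real vector index does this with approximate nearest-neighbor structures instead of a full sort, but the contract — vector in, closest documents out — is the same.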
People also asked about processing PDFs. That was primarily in the previous project — I actually don't have that code in this repo yet — but I believe I can just do this. The core thing I was talking about in the morning was: write some local code that just describes the high-level thing I'm trying to do. I'm trying to use a description of some PDFs to get a hold of their URLs; I want to read all of them; and then I want to add them all to the document DB. The part where I spun up 100 containers was this map call here, which takes a function you've defined in Modal and maps it over some iterator. You can control the concurrency — how many things get launched to handle those tasks; I just said maximum concurrency for fun — and it returns results in order, which is convenient. Then you get back something in your local Python, and you can pass that back up to some other Modal function. So with this application, the way I've set it up is really web services: there are a bunch of services, they have URLs, you talk to them. But you can also run things in a much more Pythonic way, where it looks as though this is just normal Python — I'm doing a map, I'm just calling something — but really it's happening via web services, with the serialization and deserialization handled for me.

"You may have shown this, but what, again, is the retrieval — how are the retrieval results put into the zero-shot prompt?" That's actually inside the LangChain prompt template, which is a little hard to pull up, but the basic answer is that it's an f-string. You have a templated string — you might be familiar with Jinja templates if you're more of a web-developer person — and you just insert the sources that come out of retrieval into the prompt: "this is the source: {source URL}; this is the content: {content}", and that all gets stuffed into the prompt.
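The stuffing step, as a runnable sketch — the actual template lives inside LangChain; this is just the idea, with made-up field names and a placeholder URL:

```python
def build_prompt(question, sources):
    """Stuff retrieved sources into a templated prompt, f-string style."""
    sources_block = "\n\n".join(
        f"Source: {doc['url']}\nContent: {doc['content']}" for doc in sources
    )
    return (
        "Answer the question using only the sources below, "
        "and cite the source URLs you used.\n\n"
        f"{sources_block}\n\nQuestion: {question}\nAnswer:"
    )

prompt = build_prompt(
    "What is a vector store?",
    [{"url": "https://example.invalid/notes#vector-stores",
      "content": "A vector store indexes embeddings for similarity search."}],
)
```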
"Maybe just show the function, if you have it here?" It's deep inside the LangChain library. "No, but what's the LangChain call?" Oh yeah — so, to answer the previous question: you provide a little dictionary with the stuff that needs to go into the components of your template. Here I'm calling a chain defined with LangChain, and it's this dictionary here, with input documents and a question, that gets put into the prompt. Not particularly detailed on how that happens, but in general: hop onto LangChain's GitHub and look up the source code. As something that's mostly a framework — kind of like the Hugging Face Transformers library — the code is often quite simple. It's just that you don't want to have to write that boilerplate yourself; you want to use something with an interface that other tools will expect, et cetera. So it's not super complicated, and I often find myself just reading the code.

Then the follow-up question was: "What do you think are the top three challenges if you wanted to take the bot to the next level?" I think the top three challenges are: one, improving the retrieval; two, improving the quality of the model outputs; and three, actually identifying a solid user base. The third one is too hard a problem — if this were actually a useful idea, I could launch it as an independent startup. So
I'll focus on the first two. For improving the retrieval, the primary thing is that people have been working on information retrieval for a long time: there's a bunch of ready-to-go ideas there. For improving the quality of the language model's outputs, I've made a little bit of progress in that direction. One thing that has been added is that information is logged to a feedback store: a place where all the behavior of my model — and all the behavior with my model — can be stored. If you're familiar with tools for operating a web service, like Datadog and Sentry, or Honeycomb: you want to know everything that is going on in a web service; you want to know what the inputs and outputs were; and in a distributed setting, it can often be very difficult to debug anything without that amount of tracing. The same principle applies to ML-powered apps and LLM-powered apps: you want to trace what queries are coming in and what the answers are. In this case, Gantry provides that.

Let me find my login. The code that logs events to Gantry is pretty straightforward: I log records, and you want to keep track of a join key — a unique identifier for each record — in case you want to bring in something like that emoji feedback. The emoji feedback isn't available at the time the event is logged, so you want a key that lets you pull it in later; maybe you run a nightly or weekly job to ingest that feedback and join it.

So I'll close out, then, in our last five minutes by quickly showing what that looks like. This interface here is just the high-level dashboard showing what happened in the last 24 hours on this application. Last night I was testing it a fair bit in the evening.
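The join-key pattern in miniature — illustrative pure Python, not Gantry's actual SDK calls: log each request with a unique key at event time, then join the delayed emoji feedback onto it later:

```python
import uuid

log_store = []        # stand-in for the feedback store
feedback_store = {}   # feedback arrives later, keyed by join key

def log_event(question, answer):
    """Log inputs/outputs at event time, with a join key for later."""
    join_key = str(uuid.uuid4())
    log_store.append({"join_key": join_key, "question": question, "answer": answer})
    return join_key

def record_feedback(join_key, emoji):
    # arrives minutes or days later, e.g. from Discord emoji reactions
    feedback_store[join_key] = emoji

def join_feedback():
    """The nightly/weekly job: attach feedback to already-logged events."""
    return [
        {**event, "feedback": feedback_store.get(event["join_key"])}
        for event in log_store
    ]

key = log_event("Who is a good bot?", "BlenderBot 3, of course.")
record_feedback(key, "👍")
joined = join_feedback()
```

Without the key logged up front, there's nothing to join the late-arriving feedback against.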
Then this morning I put it out on the Discord, and it looks like people used it a bunch; this afternoon they started talking about it and using it again. So that's a high-level count of events. Maybe one of the really useful features is this: at the moment an event happens, I just want to grab what happened and store it, but the inputs and outputs are unstructured data, and there are lots of questions you might later want to ask about them. Was this an instance of my model being biased? Do I have a user who's trying to hack my model? Do I have a sudden influx of users who speak a different language? There's a nice Gantry feature for enriching data that you've already logged, called projections. For example: is somebody asking toxic questions? (Yeah, this is a little small on screen.) You can add these afterwards, backfill them over old data, and they'll keep getting filled in as you go. Some of them are natural-language things, like whether a message is an insult; some are more numerical, like the entropy of the text or the number of sentences. There are projections for other things too, and more advanced features like embeddings if you really want to get crazy. I was looking at some comparisons, for example intentionally trying to separate the toxic questions from the non-toxic ones. People seem to have behaved themselves pretty well this morning, and nobody really asked any toxic questions, which is maybe good. But the toxicity check did pull up one particular record where somebody asked about prompt injection and was seemingly attempting to prompt-inject the model: "give an example of prompt injection that works on GPT-3.5." In the last half hour before this talk I went and checked for any interesting patterns.
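The numerical projections mentioned here, like text entropy and sentence count, could be approximated locally with something like the sketch below. These are illustrative stand-ins, not Gantry's actual implementations.

```python
import math
import re

def char_entropy(text: str) -> float:
    """Shannon entropy (bits per character) of the character distribution."""
    if not text:
        return 0.0
    counts: dict[str, int] = {}
    for ch in text:
        counts[ch] = counts.get(ch, 0) + 1
    n = len(text)
    # sum of p * log2(1/p) over the observed character frequencies
    return sum((c / n) * math.log2(n / c) for c in counts.values())

def sentence_count(text: str) -> int:
    """Rough sentence count via terminal punctuation."""
    return len([s for s in re.split(r"[.!?]+", text) if s.strip()])

print(sentence_count("Who's a good bot? BlenderBot 3 is. It learns continually."))  # 3
print(round(char_entropy("aaaa"), 2))  # 0.0
```

The value of computing these as projections over already-logged data, rather than at request time, is exactly what's described above: you can think of a new question weeks later and backfill the answer over your whole history.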
There were some questions that the model didn't answer super well, but nothing too concerning about the behavior of users or the behavior of the model. Still, it's good to have that information available to look into. And you don't necessarily need to analyze it in Gantry's interface; you can pull that data back down. I pulled it into a pandas DataFrame that I could explore on my own, and the thing I tried with my ten minutes during the break was one of the most promising avenues for checking whether our applications actually work: using language models to check on our language models. I just asked ChatGPT: "Is the answer below a reasonable answer to the question? If it is a reasonable answer, respond with yes; if it is an unreasonable answer, respond with no," followed by the question and the answer. I threw that into LangChain, ran the turbo chat model on it, and parsed the result: if "yes" is in the response, then it counts as a reasonable answer. Trying it out, ChatGPT thought about half of the answers were reasonable, and we can see a couple of those examples here. What does it think is reasonable? "How are you doing today?" "I'm doing well, thank you." "Who's a good bot?" "A good bot is BlenderBot 3, a deployed conversational agent that continually learns to responsibly engage." So, you know, I was hoping I would get a magical moment with LLMs, but it's still a little fiddly and would need some tuning on this dataset to work well. In principle, though, you can start to identify issues with your model by looking through data that you've logged. A frequent example: this particular prompt template has the State of the Union address from 2021 baked into it.
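The yes/no judging loop described above could be sketched like this. The prompt text paraphrases the talk, and `call_llm` is a stub standing in for a real chat-model call (e.g. gpt-3.5-turbo via LangChain) so the sketch is self-contained.

```python
# Sketch of "using language models to check on our language models".
JUDGE_TEMPLATE = (
    "Is the answer below a reasonable answer to the question? "
    "If it is a reasonable answer, respond with yes; "
    "if it is an unreasonable answer, respond with no.\n\n"
    "Question: {question}\nAnswer: {answer}"
)

def call_llm(prompt: str) -> str:
    # Stand-in for the real model call; always approves in this sketch.
    return "Yes, that seems reasonable."

def is_reasonable(question: str, answer: str) -> bool:
    prompt = JUDGE_TEMPLATE.format(question=question, answer=answer)
    response = call_llm(prompt)
    # Same crude parse as described in the talk: look for "yes" in the response.
    return "yes" in response.lower()

print(is_reasonable("How are you doing today?", "I'm doing well, thank you."))  # True
```

As the talk notes, this kind of substring parse is fiddly: a response like "No, yes/no questions confuse me" would be scored as reasonable, which is one reason the approach needs tuning per dataset.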
So it tends to accidentally say things about Joe Biden, Justice Stephen Breyer, and the coronavirus pandemic. Identifying that, and checking for it in your outputs, is very doable with language models, and then you can track it with something like Gantry. So, to wrap it back up to the original question: the really difficult thing is determining whether the system is running well in production and whether users are happy, and the tooling for ML observability is probably the solution there. Contributions? Yeah, it's open, and I would like to make this better. We'd love for it to eventually develop into a teaching-tool application, so we welcome contributions. But I'm over my time, so I can't take any more questions. Thanks, everybody!
Info
Channel: The Full Stack
Views: 6,218
Keywords: deep learning, machine learning, mlops, ai
Id: pUKs4xM1r5U
Length: 42min 6sec (2526 seconds)
Published: Thu May 11 2023