Getting Started with RAG in DSPy!

Video Statistics and Information

Captions
Thank you so much for checking out this tutorial on getting started with RAG in DSPy! I am more excited about DSPy than ever after completing this tutorial. Just the end-to-end of loading in a dataset, defining an LLM metric, and then seeing how DSPy can compile, or optimize, the prompt to achieve better performance is all just super exciting. I think we're walking into the next era of LLM programming with this prompting framework and automatic optimization framework, so I really hope you find value in this video.

It's designed to be a full end-to-end: we're going to load in our dataset and see how to wrap each example into dspy.Example objects, we're going to define an LLM metric, and we're going to write our RAG program using the DSPy programming model, looking at details like the built-in modules such as ChainOfThought and ReAct. Then, the super exciting part: we're going to compile, or optimize, the prompts by bootstrapping few-shot examples and using the Bayesian Signature Optimizer to tweak the task description. We're going to use all the good machine learning stuff like train/test data splitting to evaluate our new program. I really hope this video helps inspire your interest in DSPy. Our uncompiled RAG program is improved by about 30% in this tutorial by using the optimizers, and I think we're just scratching the surface. This tutorial is meant to be kind of a hello world of DSPy, or maybe more accurately something like a CIFAR-10 classification tutorial when you're starting with PyTorch. So I hope this end-to-end helps you see how to use DSPy, how to write your programs, and how to optimize them, and then the sky's the limit with your creativity on where you want to go next with the kinds of programs you want to write and optimize. Let's dive into it.

Really quickly before diving in: if you're looking for the code used in this example, it'll be open source on github.com/weaviate/recipes. This is a group effort with a lot of colleagues at Weaviate that I'm just beyond lucky to work with, and it contains all sorts of other examples if you're interested, but these notebooks in particular will be in integrations/dspy. The second thing I wanted to say in this intro is that I'm terribly sorry about the zooming in and out in the last video; rest assured there will be zero zooming in this video, so hopefully that makes it a little more watchable.

Also before diving in, I wanted to give some mentions to the DSPy community. First of all, a major thank you to Krista Opsahl-Ong, who debugged the Bayesian Signature Optimizer over the weekend; without her help I would have been delayed in making this video, so thank you so much. Also thank you to Omar Khattab, Michael Ryan, Karel D'Oosterlinck, Arnav Singhvi, and the entire DSPy team. It's such a talented group of people, and I'm just so excited to see DSPy evolve. I also want to quickly mention some people in the DSPy Discord community who are doing amazing things. First, Sean Chapman, who gave a live talk in the DSPy Discord about some updated work on Pydantic and DSPy, and who also has a newsletter on LinkedIn and the DSPy GPT guide, which is super cool. Then Thomas Ahle, who's created a pull request for Pydantic support with DSPy signatures.
I'm not super familiar with how this all works personally, but what I do understand from studying Instructor and DSPy is that this kind of Pydantic-style validation should be a major unlock for the framework, so thank you both so much; it's all super exciting. Then thank you to Knox, who tweeted out a gist on how to use Ollama with DSPy. I think that will warrant its own video on this channel, because after watching this video you'll definitely appreciate the value of being able to use local LLMs with this kind of compilation framework, so thank you so much. And then Steven Byron, who's written an article, "Why I'm excited about DSPy," which I found really inspiring; I highly recommend checking that out as well. There will be links in the description to some people in the community, and I highly recommend joining the DSPy Discord. To everyone in the DSPy community: thank you so much for all the support on the first video and all the conversations as we're all trying to figure out DSPy together. It's definitely a lot to wrap your head around, so it's so nice to build this community of people who can trade ideas, debugging support, and all that awesome stuff. Thank you so much, and let's dive in.

All right, let's dive into it. I'm super excited to share this notebook I've created on DSPy end to end, for compiling a basic retrieve-then-generate RAG program. This has been so exciting for me, and I hope this video helps share that excitement and explain the concepts a little better. To kick things off, I think there are two major things to DSPy. One, the programming model is super fascinating: you have this way of writing LLM programs, you have a lot of default modules like ChainOfThought and ReAct, and there's some overlap there with the structured-output work that's been done. Two, you have this compilation, or optimization, of the prompts used in the LLMs (or you could fine-tune the models with the synthetic data). In this example we're going to look at two kinds of optimization for our RAG program, particularly the question-answering LLM: rewriting the task description, and bootstrapping few-shot input-output examples to use in the prompt, which is also known as in-context learning.

When we're building RAG programs, we often start our program off with a prompt like "please answer the question based on the following context," and this is where all the manual prompt tuning starts to come into the picture. One way to try to improve this might be to add "VERY IMPORTANT" in all capital letters, and then "please, please make sure that the answer is based on the context!!" We're trying to use the prompt space to make sure the model is really following the retrieved context and understanding the task. The first thing I want to point you to is that when we use this signature optimizer, DSPy is going to rewrite that initial prompt of "please answer the question based on the following context" into "Assess the context and answer the given questions that are predominantly about software usage, process optimization, and troubleshooting. Focus on providing accurate information related to tech or software related queries." And you'll see even more interesting examples of how it combines that task description with some input-output examples of retrieving context and reasoning, by using the dspy.ChainOfThought built-in module, overall creating a super powerful prompt to serve as our RAG program.
This notebook is going to be a full end to end. I'd say it contains four parts, although there's also a part zero. We're going to start off with DSPy settings and installation. Then we're going to dive into DSPy datasets and wrapping each example from your dataset into dspy.Example objects; in this tutorial we're going to be using a retriever that has indexed a corpus of chunks from the Weaviate blog posts, and we're going to retrieve those chunks in order to answer the 44 Weaviate FAQs. The second key part is LLM metrics in DSPy. This is a super interesting concept where, instead of using exact match or keyword-overlap scores to assess the quality of the LLM-generated answer, we use another LLM that rates the answer along different dimensions, such as how faithfully grounded in the context it is, how engaging and detailed it is, and an overall assessment of answer quality. The third part is LLM programming with dspy.Module and DSPy signatures. We're going to dive into how to write RAG programs, and I'll give a little teaser of the next video where we'll build more advanced RAG programs, just to help you get the concept of how you can add depth to these programs to get more performance. That's part of this super interesting analogy between DSPy LLM programming and, say, PyTorch or Keras neural network optimization: the idea of adding depth and inductive biases by adding more layers to your LLM programs. But that's not really the focus of this video; the focus is the end to end, which brings us to the fourth key part, optimization. Again, we're optimizing the examples in the prompt as well as the description of the task, and we're going to do this with DSPy's BootstrapFewShot, BootstrapFewShotWithRandomSearch, and the Bayesian Signature Optimizer. So let's dive in.

Setting things up with the maybe less exciting stuff: we're going to be using the dspy-ai library, version 2.1.9; just in case you're unfamiliar with the syntax, this is how you tell pip which particular version of a library you want. In these tutorials we're using the Weaviate retrieval engine, and it's very important that you use one of the v3 versions of the client rather than the v4 version. We'll be updating this soon for the new v4 client, but nothing will change on the side of the DSPy interface. Then we're going to be using the OpenAI models. I wanted to give a quick thank you again to Knox on Twitter, who posted that gist on how to connect DSPy to Ollama; that'll definitely be a future video, this whole idea of using local LLMs, these super powerful Mistral and Llama 2 models. It'll be so exciting to keep building on the LLMs we use in these DSPy tutorials, but for now we'll use the OpenAI model. So all you do is set the LM with dspy.OpenAI, then you set the retrieval engine; in this case the Weaviate syntax is that you just connect, whether you're locally hosting Weaviate, it's embedded in the notebook itself, or you're using the managed service, and you pass in the name of the collection. It assumes that you have a default text key called "content" that it will retrieve from. Then you set the DSPy settings with the LM and the retrieval model, and that's basically all you need to get set up.
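Roughly, that setup looks something like the sketch below. It assumes a locally running Weaviate instance and a collection named WeaviateBlogChunk with a `content` text key; the exact WeaviateRM constructor arguments may differ slightly between dspy-ai versions.

```python
# pip install "dspy-ai==2.1.9" "weaviate-client<4"   # v3 client, not v4
import weaviate
import dspy
from dspy.retrieve.weaviate_rm import WeaviateRM

# Language model used inside the RAG program.
gpt3_turbo = dspy.OpenAI(model="gpt-3.5-turbo", max_tokens=500)

# Retrieval model: a Weaviate collection already filled with blog-post chunks.
weaviate_client = weaviate.Client("http://localhost:8080")  # or your hosted/managed URL
retriever = WeaviateRM("WeaviateBlogChunk", weaviate_client=weaviate_client)

# Register both as the defaults DSPy uses throughout the program.
dspy.settings.configure(lm=gpt3_turbo, rm=retriever)
```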
I just quickly wanted to highlight the dspy.context feature. When you configure dspy.settings, DSPy uses that language model and retrieval model as the defaults throughout the processing, but you might want to use a different LM or a different retrieval engine for part of the program. Something I think is really exciting is combining, say, your private vector database like Weaviate with a web search API like You.com or Perplexity, and you can use dspy.context to work with multiple retrieval engines and multiple language models. Here's a quick example: you can access GPT-3.5 Turbo with dspy.settings.lm and ask it to write a three-line poem about neural networks, just to show that these are indeed two different models; this is how you would define another model and then pass it in.

Next up is the dataset we're going to use to optimize our LLM program. As a quick primer, the Weaviate RM that I connected to DSPy is already filled with chunks from Weaviate's blog posts; if you want the full tutorial on how to set up Weaviate locally and load chunks into the retrieval engine, you can follow the link to Weaviate Recipes. We're going to use the frequently asked questions from Weaviate's website as the set of questions we want our RAG program to answer. I have these in a markdown file, so this is just parsing the format of the markdown file and loading in the questions; we end up with questions like "Do I need to know about Docker to use Weaviate?", 44 of them in total.

The next thing to know is that once you have your dataset in DSPy, you wrap each example into a dspy.Example object. What this involves is defining the input keys. In this case we're just using one input key, question=question, to keep the tutorial simple and get out of the gate, but you could also supervise intermediate predictions of your LLM program: maybe you want gold documents that should be retrieved, and maybe you're trying to optimize the prompts of LLMs that sit in the retrieval layer of this retrieve-then-answer system. If you're interested in reading more about that, LlamaIndex has done an amazing job of outlining these different retrieval strategies. You can add supervision to the retrieval part by adding it as one of the keys in the dspy.Example, so the metric can access that value as well.

Then we do some standard machine learning stuff, and that's one part of DSPy I love so much: it's bringing us back to train/test splits and all this machine learning thinking in our LLM development. The first 20 examples are used for training, for tweaking the prompts; we'll then come up for air and see how we're doing on the metrics with the 10 examples from 20 to 30 in our development set; and only when we're absolutely finished with our optimization will we evaluate on the test set.
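As a rough sketch of how that wrapping and splitting might look (the FAQ list, variable names, and parsing are illustrative; only the split sizes follow the description above):

```python
import dspy

# 44 Weaviate FAQ questions, e.g. parsed out of a markdown file.
faq_questions = [
    "Do I need to know about Docker to use Weaviate?",
    # ... 43 more questions
]

# Wrap each question in a dspy.Example and mark "question" as the input key.
dataset = [dspy.Example(question=q).with_inputs("question") for q in faq_questions]

# Standard ML-style splits: 20 for training, 10 for dev, the rest held out for testing.
trainset = dataset[:20]
devset = dataset[20:30]
testset = dataset[30:]
```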
Okay, now that we've installed the libraries and loaded in our dataset, it's time to really begin the DSPy party. We'll start off with LLM metrics. We're keeping it relatively simple in this tutorial: we're just building a question-answering system, and assessing the quality of an answer, whether it's grounded in the context and so on, is a relatively straightforward assessment. You can imagine really long-form things you could do with DSPy, like writing an entire blog post, or if you're writing code you might take a different approach with, say, programming test metrics. Hopefully this kicks off that thinking and helps you get a sense of it, but I also highly recommend checking out the tutorial in the DSPy examples on using LLM metrics to evaluate the quality of LLM-generated tweets.

The first thing, as we showed with dspy.context, is that we're going to use a different LLM for our metric than the LLM in the RAG program answering the question: we'll use GPT-4 as the metric LM. So the first thing we do is set up a signature to initialize the prompts for the LLM evaluator. We begin with the general instruction "Assess the quality of an answer to a question," and then we give it input fields and an output field. Our input fields are the context for answering the question (the retrieved documents from the retrieval engine), the evaluation criterion (we'll be changing this: one prompt for "is it detailed," one for "is it grounded in the context," and one for an overall assessment of the answer to the question), and the answer itself. The output field is a rating between 1 and 5, plus a little bit of that prompt-engineering stuff we're trying to get away from: "only output the rating and nothing else."
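A sketch of what that evaluator signature can look like; the field names and descriptions are paraphrased from the walkthrough rather than copied from the notebook:

```python
import dspy

class Assess(dspy.Signature):
    """Assess the quality of an answer to a question."""

    context = dspy.InputField(desc="The context for answering the question.")
    assessed_question = dspy.InputField(desc="The evaluation criterion.")
    assessed_answer = dspy.InputField(desc="The answer to the question.")
    assessment_answer = dspy.OutputField(
        desc="A rating between 1 and 5. Only output the rating and nothing else."
    )
```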
It would be an idea to optimize these metric prompts as well, as part of the DSPy program, and it's pretty meta; that's part of what makes all this so exciting, that the abstractions are finally open to this kind of synthetic data, LLMs-optimizing-LLMs work. But let's keep it easy, just to connect the program and get started. So then we define our llm_metric. We pass in the gold, which is one of our examples, so we get question = gold.question; and as we write our DSPy program we'll have the prediction, so we get the predicted answer from its answer key. Then we use f-strings, first to log things for ourselves, and then to put the question and the predicted answer into the criterion prompts that we pass into the metric LLM.

The first thing to note is that we retrieve the context again inside the metric. You could instead put the context in the output of the prediction; not to jump ahead too much, but our DSPy program is just going to output the answer, although you could also return the context from the forward pass. So we just retrieve again, and then we use DSPy's ChainOfThought. DSPy has these built-in modules: if you want to pass in the prompt exactly as you've written it, you would use dspy.Predict, but dspy.ChainOfThought has this really interesting way of formatting the prompt to come up with a rationale, a reasoning, before the final output. I've found it super fascinating to watch how it reasons and how that improves performance, and it's just a super effective thing you get out of the box with DSPy. Again, I think there are two values to DSPy: the programming model, being able to format your prompts and clean up your code base by writing the prompts in docstrings plus descriptions of the input and output fields that the model adheres to; and then these built-in modules like ChainOfThought, ReAct, or ProgramOfThought that give you advanced prompting right out of the box, with no need to figure out how to parse the outputs, format the prompts, and all that kind of stuff.

So we initialize the module with this Assess signature and run a forward pass. You could also split it up, say define a detail module first and then run its forward pass separately, but we're just going to do it in one line. That gives you the three ratings, and then we cast them into floats. This is one of the most interesting topics, in my opinion, in all of LLM programming right now: the kind of Pydantic work that Jason Liu is doing with Instructor, making sure the model really does output a float. As we'll see later on, if the model says "no" or "yes" instead of a rating, the DSPy compiler will just sort of move on, but that's maybe something to fix as we scale this up. We weight the faithfulness rating times two, because in my opinion, when building RAG, that's generally the most important thing you're looking for, and then we return the total divided by 5.0; the maximum score you could receive would be 5 + 10 + 5 = 20, divided by 5 is 4. (Quickly: I had left an error with an underscore in the module name, but it's fixed now.)

So let's test our metric and see how it runs. In this example we pass in the question "What do cross encoders do?" and test the LLM metric with the answer "They rerank documents." In this case you get a faithfulness score of 5, since the context does say that cross encoders rerank documents, you get a 1 rating for detail because the answer is just three words, and you get a 5 overall. Then let's try an incorrect answer: "What do cross encoders do?" "They index data." Maybe you could use cross encoders to index data, but it's not really typical practice, and in this case you get 1 for faithfulness, 1 for detail, and 1 overall.

And this is a general thing with DSPy: if you ever want to see the last inferences that went into your LLM, you can call inspect_history(n=3). We inspect the history of our metric LLM to get another sense of what's happening; we use n=3 because every time the metric runs, it performs three inferences: how detailed is the answer, how faithful is it, and then the overall rating.
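Putting the metric together, a sketch along these lines, reusing the Assess signature above; the criterion wording, the retrieval inside the metric, and the GPT-4 settings are paraphrased assumptions rather than the notebook's exact code:

```python
# GPT-4 as the judge, separate from the LM answering questions in the RAG program.
metric_lm = dspy.OpenAI(model="gpt-4", max_tokens=250)

def llm_metric(gold, pred, trace=None):
    question = gold.question
    predicted_answer = pred.answer
    print(f"Test question: {question}")
    print(f"Predicted answer: {predicted_answer}")

    # Criterion prompts for the three assessments.
    detail = "Is the assessed answer detailed?"
    faithful = ("Is the assessed text grounded in the context? "
                "Say no if it includes significant facts not in the context.")
    overall = f"Please rate how well this answer answers the question, `{question}`."

    # Retrieve the context again so the judge can check groundedness.
    context = dspy.Retrieve(k=3)(question).passages

    with dspy.context(lm=metric_lm):
        detail_score = dspy.ChainOfThought(Assess)(
            context=context, assessed_question=detail, assessed_answer=predicted_answer)
        faithful_score = dspy.ChainOfThought(Assess)(
            context=context, assessed_question=faithful, assessed_answer=predicted_answer)
        overall_score = dspy.ChainOfThought(Assess)(
            context=context, assessed_question=overall, assessed_answer=predicted_answer)

    # Cast the ratings to floats and weight faithfulness twice as heavily.
    total = (float(detail_score.assessment_answer)
             + float(faithful_score.assessment_answer) * 2
             + float(overall_score.assessment_answer))
    return total / 5.0  # maximum possible score: (5 + 10 + 5) / 5 = 4
```

A quick usage check, mirroring the test above: `llm_metric(dspy.Example(question="What do cross encoders do?"), dspy.Example(answer="They rerank documents."))` should come back close to the maximum, while "They index data." should score near the bottom.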
Here's what those prompts end up looking like, and again, these are uncompiled prompts, the prompts you get right out of the DSPy programming-model box. The first one is the detail assessment. First you give it "Assess the quality of an answer to a question," then a description of the inputs and outputs, and then you pass in the current inference: the assessed question is "Is the answer detailed?", the assessed answer is "They index data," and then dspy.ChainOfThought adds the reasoning, "Let's think step by step in order to produce the assessment answer. We need to consider if the answer provides enough detail to fully answer the question," followed by assessment answer: 1. Then we see the same with the other ratings. For the groundedness one we pass in the context as well, because the assessed question is "Is the assessed text grounded in the context? Say no if it includes significant facts not in the context," again with reasoning about how it produces its rating, and the same again for the overall rating. Hopefully that gives you a sense of what the metric LLM is doing, and again, you can always inspect the history. You can similarly inspect the history of the LLM making the predictions by accessing it the same way (well, not yet in this case, because we haven't actually passed anything through the RAG program).

So let's step ahead to the next section, where we write our DSPy program. Awesome: now that we have our dataset loaded into dspy.Example objects and our LLM metric, let's dive into the DSPy programming model. Again, we're building a pretty simple RAG program where we just retrieve and then answer the question. The first thing is to understand signatures and DSPy modules. Instead of an explicitly written signature, we could also use shorthand like "context, question -> answer", and DSPy will parse that into a signature. I think this is a super interesting syntax for quickly sharing your DSPy programs; I imagine we'll probably see something like a DSPy gallery emerge, similar to the story of LangChain, and as people build out these LLM programs I think this shorthand is a really great way to quickly explain what a program is trying to do. But when you want a little more control, you can instead pass a full signature to a DSPy module. That's where we give it the initial prompt in the docstring, as well as descriptions of our input and output fields. We then connect these components together in a dspy.Module; in this case we just have a retrieval engine and a generate-answer LLM.
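Here's a sketch of what that signature and module can look like, following DSPy's standard RAG pattern; the docstring, field descriptions, and k=3 passages are paraphrased or assumed rather than copied from the notebook:

```python
import dspy

class GenerateAnswer(dspy.Signature):
    """Please answer the question based on the following context."""

    context = dspy.InputField(desc="Retrieved chunks from the Weaviate blog posts.")
    question = dspy.InputField()
    answer = dspy.OutputField(desc="An answer grounded in the context.")

class RAG(dspy.Module):
    def __init__(self, k=3):
        super().__init__()
        self.retrieve = dspy.Retrieve(k=k)
        self.generate_answer = dspy.ChainOfThought(GenerateAnswer)

    def forward(self, question):
        context = self.retrieve(question).passages
        prediction = self.generate_answer(context=context, question=question)
        return dspy.Prediction(answer=prediction.answer)

# Run a forward pass with the uncompiled program.
uncompiled_rag = RAG()
print(uncompiled_rag(question="What are rerankers in search engines?").answer)
```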
This is pretty standard, and you can imagine adding other layers (sketched below). If we wanted a ranker, we'd add another predict step with a signature like "question, context -> ranked_documents"; or if we wanted to add summarization, we'd add a summarize module that runs after retrieval, call it with the question and the context, set the context to that summary, and then pass it into the final prediction. Hopefully that gives you a sense of how you can add depth. When we say adding depth to our DSPy programs, we mean adding more prompts, more components, in the middle of our computation graph of LLMs that transform text through many layers of processing. In this case, though, we're keeping it pretty simple with just retrieve-then-generate, because in my opinion it's good to have just one LLM to optimize in the beginning, to see what the Bayesian Signature Optimizer and BootstrapFewShot are doing; again, the goal of this video is to be something like a CIFAR-10 for DSPy, a simple example.

Anyway, that's how you connect the program. Let's have a little more of a look at these built-in modules. If you just want to use your prompt out of the box without any of DSPy's built-in modules, you can use Predict, and asking "What are cross encoders?" you'll see that Predict just answers the question based on the context and the initial description you gave it in the signature. ChainOfThought, which I'm super fascinated by, adds the reasoning, "Let's think step by step in order to produce the answer," and you can also access that reasoning from the prediction. Then here's another module, dspy.ReAct. ReAct is the tool-use, action kind of prompt: you'll be given context and a question and respond with the answer, and to do this you interleave Thought, Action, and Observation steps, so now it's making actions like Search, Finish[answer], Action 1, Thought 2, and so on. I haven't dived too deep into this, but I hope it gives you a quick preview of how you can instantly get more value out of your prompts just by using these built-in modules; there are a few others as well, like MultiChainComparison or ProgramOfThought, which I haven't personally gone deep into, but I wanted to quickly make you aware of them.

Okay, so now we have connected our program. Again, we've built this RAG program as defined here, with retrieve and then ChainOfThought on the generate-answer signature. We define the program with uncompiled_rag = RAG(), run a forward pass with our input, "What are rerankers in search engines?", and access the answer; and as we've already seen a few times, you can inspect the history.

Okay, now let's get into the fourth part: evaluating these programs and optimizing them. So now the moment you've been waiting for; first of all, thank you so much for making it this far in the video, or if you skipped ahead to see the optimization, also welcome. We're going to use the DSPy compilers, the teleprompters, to optimize the prompts in the RAG program and achieve better performance according to our LLM metric. As a quick reminder, we have our development set with its questions, and we evaluate our programs on the development set while we optimize them with the training set.
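And here is the hypothetical "adding depth" sketch mentioned above: a deeper program that summarizes the retrieved chunks before answering. The RAGWithSummarization name and the shorthand summarize signature are purely illustrative (it reuses the GenerateAnswer signature from the earlier sketch), not code from the notebook:

```python
import dspy

class RAGWithSummarization(dspy.Module):
    """Hypothetical deeper program: retrieve -> summarize -> answer."""

    def __init__(self, k=5):
        super().__init__()
        self.retrieve = dspy.Retrieve(k=k)
        # Intermediate layer: condense the retrieved chunks before answering.
        self.summarize = dspy.ChainOfThought("question, context -> summary")
        self.generate_answer = dspy.ChainOfThought(GenerateAnswer)

    def forward(self, question):
        context = self.retrieve(question).passages
        # The summary becomes the new context for the answer generator.
        context = self.summarize(question=question, context=context).summary
        prediction = self.generate_answer(context=context, question=question)
        return dspy.Prediction(answer=prediction.answer)
```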
We're using the DSPy Evaluate object: we construct it, pass in the development set, tell it the number of threads to run the evaluation with, and ask it to display the progress and display the table at the end. Then we give it a newly initialized, uncompiled RAG program along with our LLM metric and run it. It runs through the dev set and evaluates the metric on each of the examples. We have 10 examples in the dev set, so I think our maximum rating would be 40; I'm not sure, I think it's multiplying by 10 here, but anyway, we'll say 244 is the score for our uncompiled RAG program. We can look at the history of the uncompiled program: without any optimization, the prompt just has a description of the task, the retrieved context, the question ("Does Weaviate use HNSWlib?"), and then the answer. And we can inspect the metric LM whenever we want, as before.

Now let's step into optimization. The first optimizer we're going to use is BootstrapFewShot. It looks for examples to put in the prompt, and it stops once adding more examples to the prompt no longer increases performance. So we run this, and it does its thing, optimizing the program. I'm not super familiar with the details of BootstrapFewShot and all these optimizers; I think that's something we'll continue to explore in this video series. But I think the analogy here is with PyTorch and Keras: when we were training neural networks to classify cats or dogs, we had all these optimizers like Adam and Nesterov, and the abstraction was that you weren't necessarily interested in all the details of the optimizer (some people obviously specialize in that), and I think the abstraction is the same way here. We have these different optimizers, and you might just want to try which one works best for you in the end. I don't want to tease the results too much, so let me cut this and come back when it's finished compiling.

Okay, we're back, and we've finished compiling the RAG program with BootstrapFewShot. It runs through and bootstraps four full traces, four passes through the program, that maximize the metric, say with 5 for faithfulness, 5 for detail, 5 overall, and now we use those in the prompt. We can see an example of the new answer, and then look at the history to see the prompt it came up with: we have our initial description of the task, and now we have examples in the prompt. So we have this example of looking at context, reasoning about the context, and then answering, and this particular example was determined to have a high rating by the LLM metric; then we have another example added to the prompt; and then we have our current inference, where we ask "What do cross encoders do?" and retrieve the context. The idea is that with these additional examples in the prompt, the LLM is better at doing the current reasoning and producing the current answer. Now for the exciting part: let's evaluate our compiled RAG and see if we've improved the metric; I'm just as excited as anyone watching.
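A sketch of the evaluation and the first compilation; argument values like num_threads=4 and max_bootstrapped_demos=4 are illustrative rather than the notebook's exact settings:

```python
from dspy.evaluate.evaluate import Evaluate
from dspy.teleprompt import BootstrapFewShot

# Evaluate a program on the dev set with our LLM metric.
evaluate = Evaluate(devset=devset, num_threads=4, display_progress=True, display_table=5)
evaluate(RAG(), metric=llm_metric)  # score for the uncompiled program

# Bootstrap few-shot demonstrations: run the program on the training set,
# keep traces that score well on the metric, and insert them into the prompt.
teleprompter = BootstrapFewShot(metric=llm_metric, max_bootstrapped_demos=4)
compiled_rag = teleprompter.compile(RAG(), trainset=trainset)

evaluate(compiled_rag, metric=llm_metric)  # score for the compiled program
```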
Okay, so we're running through the metric. I might be getting rate-limited a little bit, because I already ran this offline so we wouldn't have to wait again, and there will probably be a bit of cutting around as we run these compilations so you're not just watching it load. In the end, we compiled our program by bootstrapping these few-shot examples to put in the prompt, reran the evaluation with the compiled RAG, and our metric increases from 220 to 274; so that composition of faithfulness, detail, and overall scores has increased from 220 to 274.

Okay, now things are really getting exciting: now we're using BootstrapFewShot with random search. What that means is, say we have eight example traces through our program that result in high scores on the LLM metric, but we only want to use four of them in the prompt; we then randomly search for the combination of those eight examples that, when used in the prompt, results in the best performance for the system. This is quite a long trace, quite a lot of optimization, as it searches through all these candidate programs and evaluates their scores, but in the end we improve our metric up to, I think, either 288 or 300; we'll see in a second when we run it through the evaluation. So now we see our new example, and inspecting the history we see the input-output demonstrations (context, reasoning, answer) that were determined to give the best performance with this new compilation. Now let's run another one through the evaluation, and maybe again I'll have to cut; this is maybe the problem with doing notebook tutorials. Okay, so after evaluating our second compiled RAG program, the one from BootstrapFewShot with random search, we end up with a score of 264, and I'm not sure what that's about; I'm not sure why it's lower.

Anyway, let's now dive into our third teleprompter, or compiler: the Bayesian Signature Optimizer. This one is my favorite, because it's not only coming up with examples, it's also rewriting the task description and then choosing the optimally performing description. In this case we're also introducing a third LLM to the party: we already have the LLM used in the RAG program and the LLM used in the metric, and now we also have an LLM used to rewrite the task descriptions. After running this (again, I ran it offline rather than live while talking through the notebook), we end up rewriting the instructions to "Assess the context and answer the given questions that are predominantly about software usage, process optimization, and troubleshooting. Focus on providing accurate information related to tech or software related queries." Again we bootstrap the examples, so we combine this rewritten task description with bootstrapped demonstrations, and this time when we evaluate we get up to 290.

One more thing I'm running through now: we're taking the party to the test set. We've been using the training set to tweak the prompts and the examples, and we reported performance on the development set to play with these optimizers; now we're going to test our programs on the held-out test set.
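And a sketch of the other two teleprompters, assuming the BootstrapFewShotWithRandomSearch and BayesianSignatureOptimizer classes shipped with dspy-ai 2.1.x; the constructor and compile arguments here (candidate counts, demo limits, the prompt_model/task_model split) are illustrative and may differ between versions:

```python
from dspy.teleprompt import BootstrapFewShotWithRandomSearch, BayesianSignatureOptimizer

# Randomly search over combinations of bootstrapped demos for the best-scoring prompt.
random_search = BootstrapFewShotWithRandomSearch(
    metric=llm_metric,
    max_bootstrapped_demos=4,
    num_candidate_programs=8,
)
compiled_rag_rs = random_search.compile(RAG(), trainset=trainset, valset=devset)

# The Bayesian Signature Optimizer also rewrites the task description,
# using a third LM (here GPT-4) as the prompt-writing model.
bayesian_optimizer = BayesianSignatureOptimizer(
    prompt_model=metric_lm,          # LM that proposes new instructions
    task_model=dspy.settings.lm,     # LM that actually runs the RAG program
    metric=llm_metric,
    n=5,                             # number of instruction/demo candidates
)
# Some versions also expect extra arguments here (e.g. a number of search trials).
compiled_rag_bayesian = bayesian_optimizer.compile(
    RAG(),
    devset=devset,
    max_bootstrapped_demos=2,
    max_labeled_demos=2,
    eval_kwargs=dict(num_threads=4, display_progress=True),
)
```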
Okay, the moment we've all been waiting for: the results of our compiled programs compared to the uncompiled program on the held-out test set. We get 271 with the uncompiled RAG program, then an improvement to 285 with BootstrapFewShot, then 255 with BootstrapFewShot with random search (so I think there was a bug in the way I set that one up), and then my favorite, 345.8 with the Bayesian Signature Optimizer.

Thank you so much for watching this tutorial on getting started with RAG in DSPy. There are four major parts to this tutorial: setting up the libraries and loading in your dataset, configuring the LLM metrics, LLM programming with DSPy, and then optimization with the teleprompters. I hope this end-to-end notebook helps you better understand how to use DSPy and what the framework is trying to help you achieve, and I can't wait to see what people build with DSPy. For me personally, my interest is now going into adding more layers to RAG, such as multi-hop retrieval, reranking, and maybe summarizing search results, all these exciting things to extend RAG further. But I'm also just so excited about, say, building chatbots with this, or writing blog posts, or writing code; I think the sky's the limit on the kinds of programs we can build with this DSPy model, thanks to the flexibility of the abstractions. Please subscribe to the channel if you're interested in following along with the rest of this DSPy Explained series, and I highly recommend joining the DSPy Discord. If you have any issues with anything in the video, or just want to share your thoughts generally, please feel free to leave a comment and I'll try to respond to all of them. Thank you so much for watching this video!
Info
Channel: Connor Shorten
Views: 6,680
Id: CEuUG4Umfxs
Length: 31min 53sec (1913 seconds)
Published: Mon Feb 12 2024