Summarize a 1 hour long YouTube video using Groq and Phidata

Video Statistics and Information

Captions
Hi guys, continuing with our Phidata series, today I have a special video for you: we are going to implement a YouTube video summarizer app powered by Groq and built using Phidata. I'll paste a link to this cookbook in the description and you can have a run, but basically what we need to do is select the appropriate model from here, which is Llama 3 70B, Llama 3 8B, or Mixtral 8x7B (you can put in any number of models), select the appropriate chunk size, and then just put in the video URL. This is a video of mine, my last video; I just put in the URL and click on generate summary. Then you get the video here, and you have the processed captions here, so all the captions have been accepted and processed, and then we have this beautiful summary: the overview section, development of the RAG application, then setting up the environment, installing Ollama and Phidata, running the application, and then the takeaways. So the RAG application is just Llama 3, Ollama (it identifies them correctly), and Phidata to generate high-quality reports on complex topics, and the application can be set up using conda and pip. This is a beautiful summary.

Now before we move on to test it more, let us start from the beginning. Let me close everything and let's see the magic. I have my Visual Studio Code running here; I'm going to close this as well. So this is Phidata; the author of the Phidata library is Ashpreet Bedi, many thanks to you for giving me this opportunity to share Phidata with my subscribers and viewers. What you're going to do is click on Code and copy the URL of the repo, then go to the folder where you are going to work, type cmd, and press Enter to get a command prompt. I'm going to say git clone and paste in the link; this is going to download all the files from this GitHub repo. Then we need to change into the phidata folder, and from there go to the folder which contains the files for this particular video: copy the path and change directory to that particular path. Next I'm going to type code . and press Enter to open the editor (the commands so far are sketched below).
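Roughly, the setup commands look like this; the cookbook folder path is my assumption, so check the repo's readme for the exact location:

    git clone https://github.com/phidatahq/phidata.git
    cd phidata
    :: assumed path to the video summarizer cookbook; check the readme
    cd cookbook\llms\groq\video_summary
    :: open the folder in Visual Studio Code
    code .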
With Visual Studio Code open, I'm going to open up a new terminal. We know the drill: we need a virtual environment to work with. I can see the list of environments I have with conda info --envs; these are my virtual environments, and I'm going to select the phidata environment, which we can activate using conda activate phidata. If you need to create the environment first, the command is conda create -n phidata python=3.11 -y. Now we have the libraries listed in requirements.txt, and pip install -r requirements.txt is going to install all of them.

Next we can have a look at the readme file, but I'm going to repeat the steps for you. We have the assistants here, and there are two functions: the first assistant is get_chunk_summarizer, and the next is get_video_summarizer. I'm just going to read through one of these assistants. Basically it's a function: we have the model, a string whose default value is Llama 3 70B, and debug mode, a boolean set to true, and we return an Assistant with a name, the LLM it uses, and a description: you are a senior New York Times reporter. The instructions say you will be provided with the YouTube video link, information about the video, and pre-processed summaries from junior researchers; carefully process the information, then generate a final New York Times-worthy report in the format provided below. Using the dedent function we add that report format to the system prompt: the video title with the link, then the overview, some sections, key takeaways, and "report generated on" with the month, date, year, and the time in a.m./p.m. We also set markdown to true, add datetime to instructions, and debug mode to true; those are the instructions. A sketch of roughly what this assistant looks like follows.
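This is a minimal sketch of get_video_summarizer, assuming phidata's Assistant and Groq classes; the exact wording of the description, instructions, and report format in the cookbook may differ:

    from textwrap import dedent

    from phi.assistant import Assistant
    from phi.llm.groq import Groq


    def get_video_summarizer(
        model: str = "llama3-70b-8192",
        debug_mode: bool = True,
    ) -> Assistant:
        # Assistant that turns the chunk summaries into the final report.
        return Assistant(
            name="groq_video_summarizer",
            llm=Groq(model=model),
            description="You are a senior NYT reporter tasked with summarizing a YouTube video.",
            instructions=[
                "You will be provided with a YouTube video link, information about the "
                "video, and pre-processed summaries from junior researchers.",
                "Carefully process the information and think about its contents.",
                "Then generate a final New York Times-worthy report in the format "
                "provided below.",
            ],
            # The report format is appended to the system prompt via dedent.
            add_to_system_prompt=dedent("""
            <report_format>
            ## Video Title with Link
            ### Overview
            ### Section 1 ... Section N
            ### Takeaways
            Report generated on: Month Date, Year (hh:mm AM/PM)
            </report_format>
            """),
            markdown=True,
            add_datetime_to_instructions=True,
            debug_mode=debug_mode,
        )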
So those are the assistants. The main app is where we set up everything, starting from the Streamlit page config: we import streamlit as st and set the title and the markdown, and inside the main function we have a sidebar where we can select the LLM. Let me just run this. The installation is complete, but before we run the app we need to set the Groq API key, so I can say set GROQ_API_KEY= and put in the key. In order to get the key, go to Groq, then to the playground for the Groq API, then to API Keys, click on create key, put in some name, click on submit, copy the key, and paste it in. That's done. Now you can just say streamlit run app.py; this should start up a Streamlit app on your local system and you are good to go.

Once the app is loaded, I'll just explain the features. You can select any model from here; the three models are Llama 3 70B, Llama 3 8B, and Mixtral 8x7B, and you can see in the code that we can select any one of them. Then there is a slider where you can change the number of words per chunk; let me keep a chunk size of 6,000. Then you need to put in the URL, click on generate summary, and the summary will be created.

Now we have these trending videos: "Intro to Large Language Models", which is a video by Andrej Karpathy, and "What's next for AI agents", which is a video of Harrison Chase. I'm going to run the latter; just click on "What's next for AI agents". In the background we have added the URL of this video, so we don't need to enter a URL here. You can see "What's next for AI agents" with the overview, understanding AI agents, planning and UX, memory and future development, the key takeaways, and the report generated on the date and time. You can go back to the code and see the link for that particular video; these three defaults have been given in the sidebar, and you can put in more defaults here, but this is how you get started quickly.

Now for summarizing an entire long YouTube video: you can see this one is about an hour long, and even that can be summarized. Just click on "Intro to Large Language Models". It loads up the video and reads the captions; once the captions have been read, it starts summarizing chunk one and tries to generate the report. We can see that the rate limit has been exceeded, so it's not possible at this chunk size, but we can reduce the chunk size to, let's say, 3,500, and then try to summarize again. You can build upon this; we have so many examples. I have collaborated with the founder of Phidata, Ashpreet Bedi, and I'm going to bring in more videos. It is still summarizing; it takes a while because it's a one-hour video, so let's wait for some time and see if it is able to summarize this well. While we wait, here is roughly how the two-stage chunked flow works.
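A minimal sketch of the chunked summarization flow described above. The two assistant factory names (get_chunk_summarizer, get_video_summarizer) follow the transcript, but the chunk splitting and the run() calls are my assumptions rather than the cookbook's exact code:

    from typing import List


    def split_captions(captions: str, chunk_size: int) -> List[str]:
        # Split the caption text into chunks of roughly chunk_size words.
        words = captions.split()
        return [
            " ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)
        ]


    def summarize_video(video_info: str, captions: str, chunk_size: int = 3500) -> str:
        # get_chunk_summarizer / get_video_summarizer are the cookbook's
        # assistant factories (see the sketch of the latter above).
        # Stage 1: summarize each caption chunk separately to stay under
        # Groq's rate limits.
        chunk_summaries: List[str] = []
        for i, chunk in enumerate(split_captions(captions, chunk_size)):
            chunk_summarizer = get_chunk_summarizer(model="llama3-70b-8192")
            summary = chunk_summarizer.run(chunk, stream=False)
            chunk_summaries.append(f"Summary of chunk {i + 1}:\n{summary}")
        # Stage 2: hand the video info and all chunk summaries to the
        # video summarizer, which writes the final NYT-style report.
        video_summarizer = get_video_summarizer(model="llama3-70b-8192")
        return video_summarizer.run(
            video_info + "\n\n" + "\n\n".join(chunk_summaries),
            stream=False,
        )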
Okay, it was able to create the summary. This is the name of the video, there's the YouTube link, the overview (understanding large language models), then training and applications, fine-tuning and reinforcement learning, scaling laws and future directions, security challenges, and the key takeaways. I mean, that's a one-hour-long video, and it has been compressed so well using Groq, with Phidata as the framework. I'm sure you liked this video and this simple implementation using Groq and Phidata; it's so easy to create. If you don't have time to listen to an entire video and just want the summary, to judge for yourself whether it's a video worth spending an hour on, you can put it in the summarizer, read the summary, and then decide whether to watch it or not. I'm going to bring in more use cases from Phidata, but I think this should be it for this video. In this video you have learned how to use Phidata and Groq to create a YouTube summarizer; it's so easy to get started with LLMs nowadays. Having said that, I think this is the end of the video. Subscribe to my channel for more interesting content like this, and follow Ashpreet Bedi; I'll put a link in the description for all the materials I've used to create this video. Support me on Patreon if you want to support the work that I'm doing. Thank you, everyone, for joining me today; I'll be back with more interesting content tomorrow. Thank you and have a nice day. Please watch this next video.
Info
Channel: Prompt Engineer
Views: 444
Keywords: prompt, ai, llm, localllms, openai, promptengineer, promptengineering
Id: Rk6anBVQLIk
Length: 12min 5sec (725 seconds)
Published: Thu May 02 2024