Using Llama3 with Ollama & LlamaIndex | Generative AI Tools

Video Statistics and Information

Captions
hey everybody, welcome back to another video in the generative AI tools series, and in this video we'll talk about how you can use Llama 3 with Ollama within LlamaIndex. does that make any sense? well, Llama 3 is out there everywhere: it's all over Twitter, it's all over LinkedIn, it's even on your WhatsApp, and it's even available as the Meta AI assistant on the web. it's making the right kind of buzz for the right kind of reason; the performance is off the charts. so let's go ahead and build a RAG solution out of it. so here's a video where you can use Llama 3 with Ollama. I can do it, I can do it, seems like a llama takeover or something, I can do it [Music]

so welcome back everybody. I'm in the official documentation for LlamaIndex, and the good thing about LlamaIndex is that you can use out-of-the-box LLMs as well, whether it's Mistral, whether it's Claude by Anthropic, whether it's OpenAI, and even Llama 3 and previous versions such as Llama 2. so here we have a long list of all the available configurations that you can do for all kinds of LLMs out there, and that makes LlamaIndex a really flexible tool. so let's go ahead into the code and do it right now.

all right, so standard stuff here. in the very first line I'm importing Ollama, which is again a great tool through which you can use open-source models like Llama 3 and Mistral, and you can even customize them over there. in the second line I'm importing VectorStoreIndex, SimpleDirectoryReader, Settings, and PromptTemplate, and in the third line I'm importing my .env file and loading it over here. I also have another video on LlamaIndex which introduces you to the concepts of VectorStoreIndex and SimpleDirectoryReader; I'm going to hook it up in the cards right now, so go ahead and check it out, you'll gain more understanding from it.

so let's start with the very first line of code, where we initialize Ollama and tell Ollama that I need Llama 3 as my model, and we provide it with a request timeout over here. within the Settings that we imported from llama_index.core, I'm setting this LLM. the third line of code is, again, about the documents on which we want to build a RAG solution: I have a data directory here, and within the data directory I have a summarized version of "The Dancing Men," which is one of the Sherlock Holmes stories; it's kind of interesting, go ahead and read it if you haven't. and in the fourth line we create an index out of it. if you're a new viewer, just for your understanding, VectorStoreIndex is more like a data structure which indexes the embeddings that are generated from the documents right here. then we are initializing the query engine: we are setting streaming to true and providing similarity_top_k equal to four, so this is us telling LlamaIndex, or its query engine, that whatever embeddings it gathers for our query, it should just pick the top four; we're sort of providing it a window in which to fit the most similar results. and finally we are querying through our query engine and asking who the culprit is, we're catching the response here, and in the fifth, sorry, sixth and final line we're printing the response out.

so let's go ahead and run this. I'm just going to shrink this right here, or, all right, let me just fit it over here. all right, wow, that was subtle: Abe Slaney, you know, just like that, seems like a llama model. so let's go ahead and ask more questions.
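(the script itself isn't reproduced on this page, so here is a minimal sketch of what the code described above plausibly looks like, assuming the standard llama_index.core and llama_index.llms.ollama packages, a local ./data folder holding the story summary, a .env file with any key the default embedding model needs, and a 120-second timeout; the exact names and values used in the video may differ)

    from dotenv import load_dotenv
    from llama_index.llms.ollama import Ollama
    from llama_index.core import (
        VectorStoreIndex,
        SimpleDirectoryReader,
        Settings,
        PromptTemplate,  # imported in the video's walkthrough, unused in this sketch
    )

    load_dotenv()  # load environment variables (e.g. an embedding API key) from .env

    # serve Llama 3 locally through Ollama and make it the default LLM for LlamaIndex
    Settings.llm = Ollama(model="llama3", request_timeout=120.0)

    # load the documents to build the RAG solution on (a summary of "The Dancing Men")
    documents = SimpleDirectoryReader("data").load_data()

    # build an in-memory vector index over the embeddings generated from the documents
    index = VectorStoreIndex.from_documents(documents)

    # streaming query engine that keeps only the four most similar chunks per query
    query_engine = index.as_query_engine(streaming=True, similarity_top_k=4)

    response = query_engine.query("Who is the culprit?")
    response.print_response_stream()  # stream the answer to stdout as it is generated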
so, who is Abe Slaney? that is the best one-liner answer I've gotten from an LLM, nice. all right: a former criminal associate of Elsie's from America who has been sending coded messages as a warning... wait, I'm just going to stretch it all out... so, coded messages as a warning to her to stay silent about their past association, which could jeopardize his plans for a new life in England. oh nice. let's say, tell me the summary of the case, and this should be interesting. whoa, I haven't saved the file; save, let's go. all right: a country squire receives a series of cryptic drawings of dancing stick figures on the estate, which seem harmless at first but soon escalate into a mystery with potentially deadly consequences; the victim's wife appears disturbed by the symbols. so I hope this is enough of a spoiler for you guys and enough detail for me to say that, yep, this is working. so yeah, this is it for this video, I hope you liked it, I hope you learned something out of it, and that's it for this video and I'll see you in the next one.
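(for reference, the follow-up questions in the demo could be reproduced with something like the sketch below, reusing query_engine from the snippet above; the prompt strings are paraphrased from the video rather than copied exactly)

    # ask the follow-up questions from the demo and stream each answer as it arrives
    for question in ("Who is Abe Slaney?", "Tell me the summary of the case."):
        print(f"\nQ: {question}")
        streaming_response = query_engine.query(question)
        streaming_response.print_response_stream()
        print()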
Info
Channel: The Code Cruise
Views: 1,439
Keywords: llama 3, llama, llama3, llama 2, ollama, llama3 demo with ollama, llama 3 8b, llama 3 ollama, ollama llama 3, ollama llama3, llama llama red pajamas, ollama run llama 3, llama 3 70b, run llama3 ollama, llama3 via ollama, ollama llama index, llama3 demo wwith huggingface, ollama llama2, llama3 demo with kaggle, meta ai llama 3, what is llama 3, open source models using ollama, llama 2 rag, funny llama, how to run llava with ollama locally, llama 2 llamaindex
Id: JQA79dv2v2Q
Length: 6min 18sec (378 seconds)
Published: Wed Apr 24 2024