Chat with CSV Streamlit Chatbot using Llama 2: All Open Source

Captions
Hello everyone, welcome to the AI Anytime channel. In this video we are going to build something called a chat-with-CSV bot, using the Llama 2 model. Llama 2 is a new large language model released by Meta AI; it is open source and commercially usable, and that's what we are going to use in this video. I have one more video on Llama 2 that I created a few days ago — "Build and run a medical chatbot using Llama 2 on a CPU machine, all open source" — and we will do the same kind of thing here: load a Llama 2 model from Hugging Face. I have downloaded the quantized model from the TheBloke repository on Hugging Face. Tom Jobbins, the creator of the TheBloke repository, is one of the most important people in the open-source community right now working on LLM quantization and ops, and we are going to leverage one of his GGML models. That model we are going to load through something called CTransformers.

Let me quickly write down the things we need to consider in this video. The first thing is the quantized LLM — in this case a Llama 2 GGML model. TheBloke also has GPTQ models on Hugging Face, but I am going to use a GGML model, which at least works well for me and for the few use cases I have tried, and which runs on commodity hardware — a single consumer GPU, or even a CPU machine. I am going to run this on a 16 GB RAM CPU machine. And to load this model we need something called CTransformers, which provides C/C++-backed Python bindings used to
load transformer-based models. So we are going to load the model not through Transformers but through CTransformers. These are the two things required for a CPU machine, because we are going to run this on limited-compute devices. Andrej Karpathy has released llama2.c — a "baby Llama" that runs through a single C file — and that's the direction we need: we don't need these models sitting on GPUs, to be honest. We have to look at smaller models, or domain-specific models, in the near future, and that's what we are seeing: a lot of models coming out with fewer parameters that still perform well — models like Phi-1, Orca, and others have fewer parameters and have shown good progress on the performance side.

So those are a couple of the things we need to run this on a CPU machine. What else do we need? We need Sentence Transformers — again, a model available on Hugging Face. We are going to use all-MiniLM-L6-v2 (I'll double-check the exact version). This is the model we will use to create the embeddings on your data, so let me note it here: this is for embeddings. Once you create the embeddings you have to save them somewhere, so we will store them in a vector store, and in this video I am going to rely on FAISS (faiss-cpu). FAISS is again by Facebook — you have to give credit to Facebook, guys; Meta AI have been very instrumental in strengthening the open-source community. It's a race between open source and closed source nowadays, with Claude, Bard, and the GPT models coming out closed-source or through
commercially available APIs. But Facebook — Meta AI — has done a great job strengthening the open-source community, so we are going to use FAISS (faiss-cpu) as the vector store. If you want to use Chroma instead you can, but make sure you look at the Chroma settings: they have recently migrated from DuckDB to SQLite, so if you are getting an error there, check that — you don't need DuckDB for it anymore. That's the vector store.

And here comes LangChain, for all the heavy lifting — we'll use some chains here, guys. So these are the things we need: LangChain, a vector store, Sentence Transformers, and a couple of other things to run this on a CPU machine. The last thing is Streamlit: we are going to use streamlit and streamlit_chat. streamlit_chat is a library for chatbot interfaces — conversational interfaces — within a Streamlit application. Those are the requirements and prerequisites for this tutorial video.

I will be a little fast in this video, guys, because if you really want to understand the details you can watch my one-hour video where I used Llama 2 to create a medical bot; go through that one if you want the in-depth explanation. For this chat-with-CSV video, let me also explain a very simple thing: within LangChain I am going to use CSVLoader to load my CSV data. You can also look at something called the unstructured file loaders in LangChain, if they are not deprecated, but I will use CSVLoader to get the CSV data. And I like to do everything on the fly — by "on the fly" I mean I will let the user upload
the CSV file on the Streamlit interface, and then the embeddings will be created on the fly. I will use a simple CSV — a spreadsheet — but you can also have complex or multiple CSVs; it depends, you can extend this further.

So let's jump in and start writing some code, guys. Let me create a new folder; I am going to call it chat-csv-llama2 or something like that, and go inside it. From my earlier Llama 2 demo I need the model file, so I'll copy the llama-2-7b GGML model file over — you can see it's a .bin file, and it's the chat variant. Llama 2 has models in three different sizes, and I am going to use the one with the fewest parameters, which is 7B.

Now let me open this folder in a terminal and activate my langchain conda environment, in which I have installed all the required dependencies — you will get the dependencies from my GitHub repository, don't worry about it. So: conda activate langchain, and let's open this in VS Code. In VS Code I'll quickly create a file called app.py, and within app.py let's import a few things. First, import streamlit as st — that's the first thing I need. Then from streamlit_chat I am going to use message. streamlit_chat is a library that helps us create conversational interfaces within a Streamlit app; someone in the community created it, and all credit goes to him for this wonderful library — you can also explore it further in some of your other applications. Let me also keep my AI Anytime GitHub open, just in case we need the Llama code from
the medical chatbot — a couple of functions from there — and I'll open its model.py as well. Now let's come back; we are OK with the couple of Streamlit dependencies. I also need temporary files, so import tempfile. Now the LangChain things: from langchain.document_loaders.csv_loader import CSVLoader — we will use this to load the CSV file, guys. Then from langchain.embeddings I am going to use HuggingFaceEmbeddings; if you are facing any challenges with HuggingFaceEmbeddings, you can also get the Sentence Transformers embeddings class, which is available directly in LangChain. Now we have to store those embeddings in a vector store, and for that we are going to use FAISS here: from langchain.vectorstores import FAISS — that makes sense. The next thing: from langchain.llms I am going to use CTransformers to load the model. And the last thing is that we need a chain to retrieve, to have some conversations — I am going to use ConversationalRetrievalChain, so from langchain.chains import ConversationalRetrievalChain. We are done with our imports, guys.

Now, if you have multiple files, or bigger files, once you create the embeddings you may want to store them locally, somewhere in a folder — you don't want to recreate the embeddings for every interaction if you are not changing your file. For that case it's better to have a folder where you can save the embeddings. So let me define DB_FAISS_PATH as a folder path, something like vectorstore/db_faiss.
Something like that — OK, we are set with it. Now let's load the model, guys; I'll write "loading the model" here, and within this I am going to use the CTransformers function from my previous video. You can see the code here — a simple thing, and I'll explain what we are doing. CTransformers is a fantastic library (you can also look at vLLM if you want to load transformer-based models faster). Let me pull up the CTransformers GitHub: it's by marella — again, marella, thank you so much for creating this wonderful library — and you can see it says "Python bindings for the transformer models implemented in C/C++ using GGML library". It's fantastic, and it helps you load the model and run inference faster. So we are OK with it.

You can see we are using llama-2-7b-chat GGML v3, the q8_0 quantization. If you want to use any other model from TheBloke, let me quickly show you what I mean. TheBloke on Hugging Face — what would we have done if there were no TheBloke? It's scary even to think about. You can see Tom Jobbins is always working on quantization and fine-tuning; TheBloke has quantized more than 600 models himself, and you can find all of them there. The model here is Llama-2-7B-Chat-GGML. Go to Files and versions and you can download the files; I am using the large file over here — you can see it's 7.16 GB, a Git LFS large file, quantized to 8-bit. Depending on what model you want to use, if you are interested in any other models, please go ahead — just make sure you change the model type: for example, if you are using a Vicuna model you have to set vicuna, if you are using an Alpaca model you have to change it accordingly, and make sure CTransformers supports the model type.
You can find the supported model types on the CTransformers GitHub repository anyway. Now we are OK with our model load, guys — we have a function that will load the model.

Next, let's write a title: "Chat with CSV using Llama 2". That's my title. I am also going to use some markdown here, guys, so let me go back to my GitHub repository — I have a chat-with-PDF project and I need the header from there. Let me find that markdown file quickly; you can see this markdown will do — I don't want to write a lot of code on these things in this video, at least. Here is our markdown: "Built by AI Anytime". Streamlit supports markdown; you can make a lot of changes on the design side, but you have to write them in markdown and then pass unsafe_allow_html=True. Now let's make this color white — that's it. There were some issues with the indentation; let me just make it right. OK, we are done with that.

Now let's have something called uploaded_file, a variable that will store our file. What I will do here is st.sidebar, and in that sidebar let's have a file uploader: st.sidebar.file_uploader("Upload your data") — or "data", whatever you call it — and let's give it a type for the extension, so we are looking at type=["csv"] in this case. Now let me run this app: streamlit run app.py. Once I hit Enter it opens in a new tab in the browser, guys — you can see, on port 8501. It looks good, by the way: we have "Chat with CSV using Llama 2, built by AI Anytime with love", and on the left-hand side the end users — the people who want to use this app — can upload a CSV file, and then we will write the code for processing. Let me see if I can find a llama
icon on GitHub or something like that — I want to use some icons to make it look a little nicer. Can we get a parrot for LangChain as well? Parrot emoji — yes, I need this emoji for LangChain, and I'll just put it over there. Now if I do a rerun over here, you will see it looks nice and we have a llama emoji as well. Now we would like users to upload their files here; when they upload a file, we will create the embeddings on the fly, and then we'll have an interface where the user can interact. Let's write the code for that quickly.

A very simple thing: we have uploaded_file, so let's say if uploaded_file: (you can also handle the None case — just play around with it; I am not focusing much on Streamlit in this video, guys). Inside it I'll use with tempfile.NamedTemporaryFile(delete=False) as tmp_file: — delete=False for now, so the file persists. This will also help you when you have multiple CSVs and want to return the source documents, to retrieve them later. Then tmp_file.write(uploaded_file.getvalue()) — getvalue() is a method, so call it to get the file's bytes. The reason we do this is that CSVLoader only accepts a file path; that's why we save the upload to disk, to feed it to the CSVLoader in LangChain. Then let's keep a variable tmp_file_path = tmp_file.name — that's it, we are done with this, guys. Next I am going to create a loader variable.
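That temp-file step can be seen in isolation with just the standard library — the point being that the upload arrives as bytes in memory, while CSVLoader wants a path on disk (the helper name is mine, for illustration):

```python
import tempfile

def save_upload_to_disk(raw_bytes: bytes) -> str:
    """Write uploaded bytes to a temp file and return its path for CSVLoader."""
    # delete=False keeps the file on disk after the `with` block closes it
    with tempfile.NamedTemporaryFile(delete=False, suffix=".csv") as tmp_file:
        tmp_file.write(raw_bytes)
        tmp_file_path = tmp_file.name
    return tmp_file_path
```

In the app, `raw_bytes` is `uploaded_file.getvalue()` from the Streamlit uploader.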
The loader is pretty much straightforward from the LangChain documentation. We'll use the CSVLoader that we have imported — if you want to use DirectoryLoader, do it; if you want to use PyPDFLoader, do it; if you want to use any other type of loader, please do — it depends on what kind of data you have. On CSVLoader I am going to pass file_path, and my file path is nothing but the tmp_file_path from the line above. Let's give the encoding for the CSV — utf-8 in this case — and a couple of other arguments: csv_args, which is a dictionary of key-value pairs. Let's set a delimiter there, guys; if your CSV is not in a proper format, you might get an error here. In our case the delimiter is a comma, so let's do that. The delimiter is done, and I think we are OK with our loader now.

Next, let's load the data: data = loader.load() — that's the LangChain call. Once I have data = loader.load(), let's print that data first and see if we get any errors. Let me run this and see, guys — I think we are OK with the delimiter and the CSV handling. On "Browse files" I need a CSV file; which file can I take? I already have a Pandas AI bot that I built two months back — that video is also on my GitHub repository, where I used the Pandas AI library, a GPT integration for pandas — and I am going to use its data file, a GDP file for countries that I downloaded somewhere. Browse to Downloads, click on it — oh, I already have this file. Cool, fine. So now we have data = loader.load().
Now what we do next is create the embeddings. Very quickly, I am going to write embeddings = HuggingFaceEmbeddings(...), and within it the model name, which is a Sentence Transformers model. One way is to type it all out, but let me just take it from my previous video: I'll go to the Llama 2 medical chatbot repo, take it from model.py — here it is — copy the embeddings line, remove the extra bits, and paste it over here. Sweet, I am OK with the embeddings. You can see we have our embeddings model name, and the device is "cpu" because we want to run on CPU, right? We are using all-MiniLM-L6-v2 — sorry, earlier I said V6; no, it's the L6 model, version 2 — and we are OK with our embeddings now.

So let's have a db, and in the db I am going to use FAISS.from_documents: I pass my data, which is nothing but the CSV we loaded, and I pass the embeddings, which contains our Sentence Transformers embeddings model. That's it — db. What you can also do is save it if you want; let's write the code for that, it depends: db.save_local(DB_FAISS_PATH), with the DB_FAISS_PATH we defined at the top — you can see we have that path, right? That's it; now we are saving the embeddings.

Next, the LLM: llm = load_llm(), using the function we wrote, because we have to pass the model into a ConversationalRetrievalChain or something like that. Let's create that chain: ConversationalRetrievalChain.
You can see it over here — we imported it at the top. We call ConversationalRetrievalChain.from_llm, and within it I pass llm=llm and then retriever=db.as_retriever(). Yes, that's it — that creates the chain, and now we have to pass each query through that chain to get the response from the LLM, based on the embeddings we created.

Now let's write the function for streamlit_chat. First I am going to write a function called conversational_chat, and within this function I'll have my query — this will contain the query that the end user writes on the Streamlit interface. Let's have a variable called result: within it I am going to use the chain, and the chain takes a key-value pair — it has "question", which is nothing but our query, and then "chat_history", since this is a ConversationalRetrievalChain (you can also display the chat history if you want to show it). For the chat history I'll use Streamlit's session state: st.session_state['history'] — I pass my history there. That's OK; we are done with this.

Then st.session_state['history'].append(...) will append the responses: we pass the query and then the result — result["answer"], the model's answer. That's it. So: chat history, query, append query and result — this looks nice. Now let's return it: I am going to return result["answer"]. And here I can see I have made a mistake — between question and chat_history it's a comma, not a dot — sorry.
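The conversational_chat logic can be checked without Streamlit or LangChain at all. Here `chain` is any callable taking {"question", "chat_history"} and returning {"answer": ...}, and `history` stands in for st.session_state['history'] — both stand-ins are my assumption, for illustration:

```python
def conversational_chat(chain, query, history):
    """One chat turn: ask the chain, remember the (query, answer) pair, return the answer."""
    result = chain({"question": query, "chat_history": history})
    history.append((query, result["answer"]))  # later turns see this pair as context
    return result["answer"]
```

In the real app the chain is the ConversationalRetrievalChain built above, and the history list lives in st.session_state.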
OK, fine — we are done with the conversational function. Now let's handle the session state. We'll have three conditions: history, generated, and past. The first condition: if 'history' not in st.session_state, then execute the code below, which is very simple — st.session_state['history'] = [], an empty list; that's what we are appending to in the conversational_chat function (you can see history.append there). Next: if 'generated' not in st.session_state, then st.session_state['generated'] = [...] — let's have a welcome message in there, inside a list: "Hello! Ask me anything about " plus the data's name, which is uploaded_file.name — yes, uploaded_file.name — and what else can we have? Let's add an exclamation mark for now (mark, not marks). The last condition will be for the past responses — the previous, user side: if 'past' not in st.session_state, then st.session_state['past'] = ["Hey!"] or similar — this is the first line shown on the left-hand side.

Let's use some emojis. Let me just go to GitHub quickly and search for the emoji cheat sheet — I need some emojis, guys, just to make it nicer; so many emojis are there. Let's take this one — I just want to put it over there. Let me go back to the emoji cheat sheet and see what I'm getting. I'll keep this one, and I want a waving-hand one as well — where is that? Here we go: "wave". We can copy that and put it over there. It says wave; I didn't copy it properly.
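The three init conditions above can be sketched framework-free, with a plain dict standing in for st.session_state (an assumption for illustration — the real object supports the same `in` and `[]` operations):

```python
def init_session_state(session_state: dict, uploaded_name: str) -> dict:
    """Seed the chat state exactly once per session; re-runs leave it untouched."""
    if "history" not in session_state:
        session_state["history"] = []  # (query, answer) pairs fed back to the chain
    if "generated" not in session_state:
        session_state["generated"] = [f"Hello! Ask me anything about {uploaded_name}"]
    if "past" not in session_state:
        session_state["past"] = ["Hey!"]  # the user-side opener
    return session_state
```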
If I have to copy the wave, I'll copy the shortcode in the same emoji format and see if it's supported — I am not sure about it, and I don't know why I am not able to copy it, but anyway, I'll just paste it and see. We'll keep this for now and check it later. We are done with this part.

What I am going to write next: we need a container for the chat history, so let me write a comment — container for the chat history. Let's have a response container: response_container = st.container(). And then another container for the user's text input, because we are going to have a form — that's where we'll take the input. So container = st.container(); we are using Streamlit containers — you can go through the Streamlit documentation for more detail, guys.

Now, with the container I am going to have a form: with st.form(...), and I'll give the form a key — like in HTML where we name a form, let's call it 'my_form' — and clear_on_submit=True, so it clears on submit; let's keep that true for now. Then we have the user input: user_input = st.text_input(...), with a label "Query:" and a placeholder — the placeholder is "Talk to your CSV data here" or something like that, and you can use some other emojis there as well if you want — and then key='input'. Close that. Then we'll have a submit button, so that the user can hit it to
retrieve the answer: I am going to call st.form_submit_button — yes, form_submit_button — and it should have a label, so let's set label='Chat' or something like that. That makes sense.

Now we have two variables, so let's just use them if they are available. Coming out of that with block, what I am going to do is: if submit_button and user_input — if both have been given — then execute: get an output using that conversational_chat function we wrote, passing the user_input. So this takes our user input. Then the session state: the first one is 'past', and we start appending — the first append is the user input, st.session_state['past'].append(user_input) — and then st.session_state — not "future", it's 'generated' — appends the output, st.session_state['generated'].append(output). So 'past' appends the user input and 'generated' appends the response, the output. We are OK with it.

Now come out of that with block, out of that container, and: if st.session_state['generated']: — let me write that — then with response_container: for i in range(len(st.session_state['generated'])): — yes, this is what I need; let me close the brackets properly. Then print the messages with streamlit_chat: message(st.session_state['past'][i], ...) — the first one is 'past', with is_user=True, and the key is str(i) plus '_user' for that session. We can also pass something called avatar_style: in avatar_style let's have "big-smile" or something like that, so
we have an avatar_style of "big-smile" in that streamlit_chat message() function for the user. Then let's have the final message() for the generated one — that previous call was for 'past', and the next one is the generated: message(st.session_state['generated'][i], ...) — it will not have is_user=True; it will only have the key, which is just str(i), not '_user'. Let's give it an avatar here too: avatar_style="thumbs". That's it, that's what we need — we are done with our code, guys. (Avatar style "thumbs" is fine — and I don't know why I had typed a plus there; it's not a plus, it's just a comma. OK.)

Now we are OK with our code here, guys, so I'll just explain quickly from top to bottom before running the application — the bot we have created. We have imported all the required libraries. You can see we have a FAISS path, a vector store path, where we will store our created and generated embeddings. Then we are loading the model using CTransformers: a GGML model, model_type llama, max_new_tokens 512, temperature 0.5 — you can customize the hyperparameters; I am not going to do that. Then I have some titles, markdown, etc., for the Streamlit app. Then I have a file uploader in the left-hand sidebar, where the user will upload a CSV file. Then I save that file locally — I have to save it so I can read it and get its value — and we need a temporary file path, because CSVLoader needs a file path; you can see we pass that tmp_file_path, along with some csv_args. Then loader.load() loads the data with LangChain. Then we create the vectors — the embeddings — using Sentence Transformers, one of the most used open-source models for embeddings, and then I have FAISS.from_documents; if you want to use Chroma that would be Chroma.from_documents, and if you want to use Qdrant, Milvus, etc., you can do that.
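Going back to that display loop for a second: since 'past' and 'generated' grow in lockstep, the loop just interleaves them. A pure-Python sketch of the ordering the message() calls produce (the helper name is mine, for illustration):

```python
def interleave_chat(past, generated):
    """Return the render order: user message i, then bot message i."""
    turns = []
    for i in range(len(generated)):
        turns.append(("user", past[i]))       # message(..., is_user=True, key=f"{i}_user")
        turns.append(("bot", generated[i]))   # message(..., key=str(i))
    return turns
```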
Then I am saving it locally at DB_FAISS_PATH. I am loading the LLM — llm = load_llm() — and then I have a chain, a ConversationalRetrievalChain; if you want to use only RetrievalQA you can do that as well, or only the retrieval chain if you want. I am using ConversationalRetrievalChain, passing the llm and passing the retriever, which is db.as_retriever(). Then I write a function which takes a query from the end user, passes the question into the chain along with a chat history — and the chat history is coming from the session state within Streamlit — appends the session-state history with the answer, and just returns the result's answer to the UI so it can show it.

Then we have three conditions. The first condition: if 'history' is not in the session state, initialize an empty list. If 'generated' — which holds the output the model will generate — is not in the session state, initialize it with "Hello! Ask me anything about" plus the uploaded file's name, something like that, with an emoji. And then we have the same for 'past'. Then we create a container for the chat history, using st.container, and with that container we have a form where the user will submit their query — the user input — and then we have a submit button. We check that both values are there — that there is a value inside user_input; you can also check len(user_input) and do it that way — then output comes from that conversational_chat function, we append it, and at the end we check the state and print the messages. That's what we are doing, guys, in this thing here.

Now let's go back and run this, upload some file, and see if it gives us the right response. Once you go back to the tab in the browser, we already have "Chat with CSV"; let's just do a refresh here. "Chat with CSV using Llama 2, built by AI Anytime" — something like that — and you have "Upload your data", where you can upload your data. So let's
upload this data and see if we get any error. Now, once we upload our data, you can see we have something like "Hey, hello! Ask me anything about <your file name>". So you upload this file, and we have this conversational interface within a Streamlit app, which sits inside a container — a Streamlit container, that's what we call it — where we have a form where you can pass your query, "talk to your CSV data" or something like that.

Let me just show you the data quickly, guys. This data is also available on Kaggle; you can explore it on Kaggle itself if you want, and I'll show it to you. It has data about countries and their GDP, their score, social support, healthy life expectancy, freedom to make life choices, and so on — basically how happy a country is; you can find that out based on these parameters within the data. Now, this is a simple CSV file; you can have thousands of CSV files, you can have different types of questions, you can check the complexity level of the model's understanding as well, how well it reasons on your data, and a lot of other things too, guys.

So let me just ask a question — for example, "What is the GDP of China?" This is my first question, this is the query that I am making to the model, and I'm assuming that it will retrieve the information for me. So let's ask this. Now, once I click on Send, you can see it says "Running". As we are using Llama 2 here on a CPU machine, I'm not expecting a response within 10 seconds; it will take time — it might take up to a minute as well, depending on what kind of machine you have and how big your embeddings and the similar chunks that it retrieved
are — right, the model has to process all those chunks from the embeddings and find the relevant information for you, so it will take a little time.

Meanwhile, just to give you some information on this, guys: some of you requested that I create "chat with CSV" using Llama 2, so I thought, okay, I'll create this video — how you can load a CSV file and build it in a similar way to a normal chatbot. You can also create a mechanism for batch processing, where you upload multiple CSV files and run queries on top of them; with batch processing you don't have to wait in real time for the response. And if you have the compute power to deploy this kind of bot for real-time interactions, you can do that as well. The code will be available on the GitHub repository; you can find all the code snippets for this.

And you can see the answer: the GDP of China is 1.029 trillion dollars, or something like that. This is what we got. Let's now ask, "Which country has the least GDP according to the data?" — something like that. So it took around 30 to 40 seconds for a response, which is fairly good, I'll say, on a CPU machine. I'm running it on a CPU — I do have a GPU in my current system, but I wanted to create this on CPU, because most of you requested to see how we can run it on a CPU machine, since not everyone has a GPU. So the code will be available on the same GitHub repository, and you can also find my code from the earlier video — build a medical bot on a CPU machine using Llama 2 — that's also available; you can just go ahead and get it. You can also use the source-documents option or something like that to get the row, basically, because it's
a tabular dataset, so it will give you the row — I'd just have to write that logic there. You can also put some validation checks with Pydantic, or with Guardrails AI — there's a library in Python now — so you can add certain guardrails or validation checks to verify that the responses are actually coming from the same CSV that you uploaded. Because, since we are not writing any custom prompts, if I ask this bot, "Tell me more about the GDP of China", it will basically give me some garbage response, or some response from outside of these embeddings or this file — we are not using a context mechanism or a custom prompt implemented in LangChain here.

But anyway, let's wait for this response, because I want to show you live that this works — it's not a static page. The potential is immense, guys: chat with CSV, or you add some speech-to-text and text-to-speech on top of this and make it completely automated, end to end. You can write some Python script that connects with your SharePoint, connects with your databases, collects all the data from there, then passes it to LangChain, and so on.

You can see here: "According to the data provided, Chad has the lowest GDP per capita among all countries and regions listed, with a score of 0.35." Fantastic — I liked it. It's taking around 25 to 30 seconds on average to generate a response, to retrieve information from your dataset. So this is the data that we have taken — you can see it over here — not a very complex dataset, but you can test with your own complexity as well, guys. So this is fantastic; I just wanted to see if we could build something like this, and just to show you something, you can
see Chad over here at 0.35 — you can see 0.35 as the GDP over here, which is very low, and that's the response that we got in our "Chat with CSV using Llama 2". That's it, guys, and I think that's all for this video. Now you have two videos, and I am going to create two more on Llama 2: one on Llama 2 deployment on AWS, and the second on containerizing this through Docker and deploying it on any cloud. So two more videos on Llama 2 will be posted soon on the YouTube channel — please stay tuned for that if you are interested. Let me know if you want to extend this further; if you extend it with some charts or plots, please let me know, I would like to see what you do with it. And if you have any thoughts or feedback for me, please share — I'll be more than happy to take the feedback and incorporate it in my future videos to improve further. That's all for this video, guys, not wasting any more of your time. Please go ahead and get the code from GitHub, try to implement it on your data, and let me know your views. Thank you so much for watching — see you in the next one, guys!
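To recap the ingestion pipeline walked through above as code — a minimal sketch, not the exact script from the video. The GGML model filename, the embedding model name, and the FAISS path are my assumptions; the LangChain import paths are from the 0.0.x releases current when this video was published and may have moved in newer versions.

```python
import tempfile

# Assumed constants — point these at your own model file and store location.
DB_FAISS_PATH = "vectorstore/db_faiss"
MODEL_BIN = "llama-2-7b-chat.ggmlv3.q8_0.bin"  # a TheBloke GGML quant (assumed filename)

def save_uploaded_csv(raw_bytes: bytes) -> str:
    """CSVLoader needs a file path, so persist the uploaded bytes to a temp file."""
    with tempfile.NamedTemporaryFile(delete=False, suffix=".csv") as f:
        f.write(raw_bytes)
        return f.name

def build_vector_store(csv_path: str):
    """Load the CSV row by row, embed with Sentence Transformers, index with FAISS."""
    # Heavy dependencies imported lazily so save_uploaded_csv works without them.
    from langchain.document_loaders import CSVLoader
    from langchain.embeddings import HuggingFaceEmbeddings
    from langchain.vectorstores import FAISS

    loader = CSVLoader(file_path=csv_path, encoding="utf-8",
                       csv_args={"delimiter": ","})
    data = loader.load()
    embeddings = HuggingFaceEmbeddings(
        model_name="sentence-transformers/all-MiniLM-L6-v2")  # assumed model choice
    db = FAISS.from_documents(data, embeddings)
    db.save_local(DB_FAISS_PATH)
    return db

def load_llm():
    """Load the quantized Llama 2 GGML model via CTransformers (CPU-friendly)."""
    from langchain.llms import CTransformers
    return CTransformers(model=MODEL_BIN, model_type="llama",
                         config={"max_new_tokens": 512, "temperature": 0.5})
```

Swapping `FAISS` for `Chroma.from_documents` (or a Qdrant/Milvus store) changes only the last few lines of `build_vector_store`.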
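The chain wiring and per-turn history handling described in the walkthrough can be sketched like this. `conversational_chat` mirrors the function in the video, with the Streamlit session-state history passed in as a plain list of (question, answer) pairs; the Streamlit form itself is shown only as comments.

```python
def build_chain(llm, db):
    """Wire the LLM to the FAISS retriever; a plain RetrievalQA chain also works
    if you don't need conversation history."""
    from langchain.chains import ConversationalRetrievalChain
    return ConversationalRetrievalChain.from_llm(llm=llm, retriever=db.as_retriever())

def conversational_chat(chain, query: str, history: list) -> str:
    """One turn: send the question plus accumulated history, record the new pair."""
    result = chain({"question": query, "chat_history": history})
    history.append((query, result["answer"]))
    return result["answer"]

# In the Streamlit app this is driven by a form, roughly:
#   st.session_state.setdefault("history", [])
#   with st.form("chat_form"):
#       user_input = st.text_input("Query:")
#       if st.form_submit_button("Send") and user_input:
#           output = conversational_chat(chain, user_input,
#                                        st.session_state["history"])
```

Keeping the history outside the chain makes the turn logic trivially testable with a stub chain, independent of Streamlit or the model.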
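The batch-processing idea mentioned above — answering queries over many CSVs offline instead of waiting in the UI — could look like this. `make_chain` is a hypothetical factory (e.g. build a vector store and chain per file); only the plain-Python loop is shown.

```python
def batch_answer(make_chain, jobs):
    """Offline batch mode.

    make_chain: callable taking a CSV path and returning a query-able chain
                (hypothetical helper, e.g. vector store + retrieval chain).
    jobs: dict mapping csv_path -> list of questions.
    Returns: dict mapping csv_path -> list of (question, answer) pairs.
    """
    results = {}
    for csv_path, questions in jobs.items():
        chain = make_chain(csv_path)   # build once per file
        history = []                   # each file gets its own conversation
        answers = []
        for q in questions:
            out = chain({"question": q, "chat_history": history})
            history.append((q, out["answer"]))
            answers.append((q, out["answer"]))
        results[csv_path] = answers
    return results
```

Run this from a cron job or script and users read precomputed answers instead of waiting 30-plus seconds per query on a CPU machine.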
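The custom-prompt and source-document points raised above, sketched as code: constraining the model to the retrieved context so off-topic questions don't produce garbage, and returning source rows so a Pydantic or Guardrails check could verify the answer came from the uploaded CSV. The prompt wording is my own; `combine_docs_chain_kwargs` is how the LangChain 0.0.x releases accepted a custom prompt for this chain.

```python
GROUNDED_PROMPT = """Use ONLY the context below to answer the question. If the answer
is not in the context, say you don't know instead of guessing.

Context: {context}
Question: {question}
Helpful answer:"""

def build_grounded_chain(llm, db):
    from langchain.chains import ConversationalRetrievalChain
    from langchain.prompts import PromptTemplate

    prompt = PromptTemplate(template=GROUNDED_PROMPT,
                            input_variables=["context", "question"])
    return ConversationalRetrievalChain.from_llm(
        llm=llm,
        retriever=db.as_retriever(),
        combine_docs_chain_kwargs={"prompt": prompt},
        return_source_documents=True,  # exposes which CSV rows backed the answer
    )
```

With `return_source_documents=True`, the chain's result dict includes a `source_documents` list you can display next to each answer or feed into a validation step.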
Info
Channel: AI Anytime
Views: 14,421
Keywords: llama2, llama, meta ai, generative ai, llm, language models, large language models, youtube, ai, langchain, huggingface
Id: _WB10mFa4T8
Length: 45min 15sec (2715 seconds)
Published: Sun Jul 30 2023