Chat with Multiple PDFs | LangChain App Tutorial in Python (Free LLMs and Embeddings)

Captions
Good morning everyone, how's it going? Welcome to this new video tutorial, in which I'm going to show you exactly how to build the application you see right here: a chatbot that lets you chat with multiple PDFs from your computer at once. Let me show you how it works. For this example I'm uploading the Constitution and the Bill of Rights; when I click on Process, the app embeds them and puts them into my database. Now I can start asking questions, for example "What are the three branches of the United States government?", which relates to the Constitution, but also a question about the First Amendment, which is in the Bill of Rights, and it answers both. It only answers questions related to the PDF documents you upload, so it really sticks to the information you provided. I'm going to show you how to build this not only with OpenAI but also with free Hugging Face models, so you don't break your wallet while learning. It's a bit more complex than the previous projects I've shared on this channel, but be sure to follow along to the end; I'm sure the result is worth the effort. I hope you enjoy it, and if you like videos like this, don't forget to subscribe.

Real quick, let me guide you through the setup. I've already created my virtual environment, which is where all of my dependencies will be installed, and as you can see I'm using Python 3.9. We'll use a .env file to store our secrets, and a .gitignore file that tells Git to ignore those files so our secrets and local configuration aren't tracked. There's also a file recording my Python version, and app.py, which is where all the action will take place. The first thing we want to do is
install the dependencies we'll need. Run pip install with the following packages: streamlit, to create the graphical user interface; pypdf2, to read our PDFs; langchain, to interact with our language models; python-dotenv, to load our secrets from .env; faiss-cpu, as our vector store; and openai and huggingface-hub, because I'll show you how to do this both with OpenAI models and with Hugging Face models. Hit Enter; it will probably take longer for you than for me, because I already have everything installed. Now that our environment is completely set up, we can actually start coding.

Let's begin with the graphical user interface. First, a quick note on this test right here: the if __name__ == '__main__': guard checks that the application is being executed directly and not imported, so whatever is inside that condition only runs on direct execution. Then we create our main() function, and whatever is inside it is what runs in the application; if I put print('hello world') in there and run it, I see Hello World in the terminal, because I'm executing the file directly. Now, as mentioned, we're using Streamlit for the GUI, so we start by importing the package we installed earlier: import streamlit as st. The first thing I want to do is set the page configuration with st.set_page_config. I'm going to pass in two
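The dependency list above, as a single command (package names as I understood them from the walkthrough; versions aren't pinned here):

```shell
pip install streamlit pypdf2 langchain python-dotenv faiss-cpu openai huggingface-hub
```

Run this inside the activated virtual environment so the packages land in the project, not system-wide.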
parameters, but you can pass as many as you want: page_title, which I'll set to "Chat with multiple PDFs", and page_icon, which I'll set to the books emoji. I'll also add a header with st.header; this will be the main header of the application, also reading "Chat with multiple PDFs" with the books emoji. Remember that below the header we wanted a text input where the user can type their questions, so let's add st.text_input; whatever you put inside the parentheses is the label that appears above the input, e.g. "Ask a question about your documents:". We also wanted a sidebar where the user can upload their PDF documents. For that you use st.sidebar as a context manager: write with st.sidebar: followed by a colon, and whatever you indent underneath goes inside the sidebar. Watch out, do not add parentheses here, because otherwise it won't run; with parentheses you'd have to pass in parameters, and you don't need that. Inside the sidebar we add a subheader that reads "Your documents", and then the Streamlit element that lets you upload files, which is called st.file_uploader. Just as with the text input, you pass the label inside the parentheses; my label is going to be
"Upload your PDFs here and click on Process". That's pretty much it for now; let's just add a button, st.button("Process"), and save. Remember how you run a Streamlit application: you don't run python app.py, because that won't work; you run it with Streamlit, so streamlit run followed by the name of your file, in my case streamlit run app.py. Now the app is running, and as you can see I have my graphical user interface: I can ask questions here, and over here I can upload files, in this case the Bill of Rights and the Constitution. But so far it's just a graphical user interface with nothing happening behind it, so let's add some logic to it.

First, the API keys. We're going to use services from OpenAI and Hugging Face, so to connect to their APIs we need their API keys. We'll store them in our .env file, because that's the place for things that are supposed to stay secret and not be tracked by Git: whatever you put there won't be tracked when you upload your repository to GitHub. That's how you keep your secrets away from the public. Create two variables: OPENAI_API_KEY and HUGGINGFACEHUB_API_TOKEN. To create the OpenAI key, go to platform.openai.com, create an account, go to Account → API keys, and create a new key; I'm just going to call it PDFs. Then you
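Putting the GUI steps together, here's a minimal sketch of app.py as described so far. Two hedges: the streamlit import is deferred into main() only so this sketch loads even without Streamlit installed, and the closing guard is shown as a comment for the same reason; in the real file both sit where the video puts them (import at the top, guard at the bottom).

```python
def main():
    import streamlit as st  # normally a top-level import; deferred here so the sketch loads on its own

    st.set_page_config(page_title="Chat with multiple PDFs", page_icon=":books:")
    st.header("Chat with multiple PDFs :books:")
    st.text_input("Ask a question about your documents:")

    with st.sidebar:  # everything indented under the context manager renders in the sidebar
        st.subheader("Your documents")
        st.file_uploader("Upload your PDFs here and click on 'Process'")
        st.button("Process")

# In the real app the file ends with the usual guard:
# if __name__ == "__main__":
#     main()
```

Running streamlit run app.py against the real version of this file produces the header, the question box, and the sidebar uploader described above.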
copy it, come back, and paste it into .env. Same for Hugging Face: go to huggingface.co, Settings → Access Tokens, and create a new token; I'll also call this one PDFs, and I'll give it write access in case I want to use it again. Copy it and paste it into .env as well.

Now that my API keys are set, I need to be able to access them from the app. For that we use the python-dotenv package we installed before: from dotenv import load_dotenv. load_dotenv() is the function you run inside main() to enable your application to use the variables in .env, so call load_dotenv() there, and now LangChain will be able to access our API keys. One important detail: since we're using LangChain, we have to name our variables exactly like this. If you were building your own framework you could name them however you want, but LangChain expects these specific API key variable names, so remember to name them exactly OPENAI_API_KEY and HUGGINGFACEHUB_API_TOKEN and to call load_dotenv(). Click save, our API keys are set, and we can start on the rest of the logic.

Next, let me show you how the logic of this application works. If you've already watched my previous video on how to chat with a single PDF, this will sound very familiar; if you haven't, be sure to watch it, because it's a much more thorough and detailed explanation of how this application works. In case you just want a quick refresher, I'll cover it now, but if you do want a detailed
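To make the load_dotenv() step concrete, here is a rough, stdlib-only sketch of the core of what python-dotenv does (the real library also handles quoting, "export" prefixes, and variable interpolation; this is just the idea):

```python
import os

def load_env_file(path=".env"):
    """Minimal stand-in for dotenv.load_dotenv(): read KEY=VALUE lines into os.environ."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blank lines, comments, and malformed lines
            key, _, value = line.partition("=")
            # setdefault mirrors dotenv's default: don't clobber variables already set
            os.environ.setdefault(key.strip(), value.strip())

# LangChain then reads the keys straight from the environment, e.g.
# os.environ["OPENAI_API_KEY"], os.environ["HUGGINGFACEHUB_API_TOKEN"]
```

This is why the variable names matter: LangChain looks them up in the environment by those exact names.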
description with examples of how this process works, take a look at that video, because it will make it easier to understand what's going on. Quickly, then: we take the PDFs from our user, as many as they want, and read all of their text, giving us one huge string. We divide that string into smaller chunks of text, and it's those chunks that we later convert into embeddings. Now, what are embeddings? You can think of an embedding, in a very simple way, as a vector representation of your text, a list of numbers. Something very important about this list of numbers is that it also contains information about the meaning of your text. This means we can potentially find text with similar meaning just by comparing the number representations, and that's exactly what we'll do later. Once we have the vector representation of each chunk of text, we store all of those embeddings in a vector store, or knowledge base: basically a database of all your vector representations. It can be Pinecone, Chroma, or FAISS; in our case we're using FAISS, but Pinecone is the most popular one, so I added its logo to the diagram to help you see what's going on. Now that we have our database, we can take questions from the user; the user asks something like, for example, "What is a neural network?"
We then embed the question using the same algorithm we used for the chunks of text. That lets us find, inside the database, the vector representations that are similar: the chunks whose meaning or semantic content is closest to the question our user asked. This gives us a ranked list of the chunks of text that are relevant to the question, and we send those as context to our language model. The language model doesn't actually know what's in the PDFs; it's already trained, whether it comes from Hugging Face or from OpenAI. What we're doing is finding the chunks of text relevant to the user's question, ranking them in order of importance, and sending them as context. Behind the scenes, the prompt looks something like: "Based on the following chunks of text, answer the following question", then we pass in the chunks selected by our vector store and ask the question. The language model answers based on the context we gave it, and that answer is sent back to our user. That's what's actually happening behind the scenes, and LangChain makes all of this extremely easy with just a few commands, so let me show you how it works.

Now let's deal with the sidebar. Remember, we have our document
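The retrieval idea described above — embed the question, compare its vector to each chunk's vector, and rank by similarity — can be illustrated with toy three-dimensional vectors and cosine similarity (real embeddings have hundreds or thousands of dimensions, and these numbers are made up for illustration):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction (similar meaning)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Pretend embeddings for three chunks of text (toy numbers)
chunks = {
    "the three branches of government": [0.9, 0.1, 0.0],
    "freedom of speech and religion":   [0.1, 0.8, 0.2],
    "how to bake sourdough bread":      [0.0, 0.1, 0.9],
}
question = [0.85, 0.2, 0.05]  # pretend embedding of "what are the branches of government?"

# Rank chunks by similarity to the question, most similar first
ranked = sorted(chunks, key=lambda c: cosine_similarity(question, chunks[c]), reverse=True)
print(ranked[0])  # the government chunk ranks first
```

A vector store like FAISS does exactly this ranking, just over thousands of high-dimensional vectors with an efficient index instead of a Python loop.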
drag-and-drop here, but so far it only takes one file, as you can see: "only one file allowed". So we're going to enable multiple files, and we'll also handle what to do when the user clicks on Process. To take more than one file, go to the file_uploader in our sidebar: there's a very convenient parameter called accept_multiple_files, which we set to True, and we store the contents of the upload in a variable called pdf_docs. To do something whenever the user clicks the button, we just add an if before st.button; the button becomes True only when the user clicks it, and that's where we start processing information. Inside this if we do three things: first, get the PDF text, just the raw contents of all the PDFs; second, get the text chunks, dividing that text up; third, create our vector store with the embeddings. We'll build these three functions in a moment, but first, something very useful when you're dealing with processes like this in Streamlit: add a spinner. You write st.spinner("Processing"), and just like the sidebar it's used as a context manager, with st.spinner("Processing"):, so wrap everything inside it. All the contents inside the spinner are
processed while the user sees a spinning wheel, which tells them the program is actually running and processing things, not frozen; it's just to make the app more user friendly. Now let's build these functions, starting with getting the text from the PDFs. I'll create a new variable called raw_text and a new function called get_pdf_text, which takes our pdf_docs. The objective of this function is to take our PDF documents, which is a list of PDF files, and return a single string containing all of their text content. To create it we need a library we installed earlier, PyPDF2, from which we import a class called PdfReader; you'll see how we use it in a moment. Inside the function, I first initialize the variable that will contain all the raw text of my PDFs, then loop through my PDF objects, read them, and concatenate their contents onto that variable. So: for pdf in pdf_docs, initialize a PdfReader object from the PDF object. That creates a PDF object that has pages, and it's actually the pages you're able to read from, so we loop through the pages as well, to
read each page and add it to the text: for page in pdf_reader.pages, the page has a method called extract_text(), which extracts all the raw text from that page of the PDF; we concatenate it onto our text variable and return the final text at the end. Let me just recap what happened here: we initialized a variable called text, in which we store all the text from our PDFs; we looped through all of our PDFs, initializing one PdfReader object per PDF; we looped through all the pages of each PDF, extracted the text from each page, and concatenated it onto our text variable. In the end we get a single string with all the contents of our PDFs, stored in the variable raw_text. Let me show you how this looks: if I st.write(raw_text), then when I upload my documents and click Process, you should first see a spinner saying "Processing" and then the raw text displayed. Uploading the Constitution and the Bill of Rights and clicking Process, there's all the text, and now I can divide it into chunks.

So what we want to do now is split this huge piece of text into chunks that we can feed our model. If you've seen the previous video, you already know what I'm going to do; it's actually very simple. I'm going to create a new variable to hold the
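A sketch of get_pdf_text as described above, with the page-concatenation loop factored into a dependency-free helper; the PyPDF2 import is deferred into the wrapper only so the sketch stands alone (in the real app it sits at the top of app.py):

```python
def concat_pages(readers):
    """Concatenate the extracted text of every page of every reader into one string."""
    text = ""
    for reader in readers:
        for page in reader.pages:
            page_text = page.extract_text()
            if page_text:  # extract_text() can return an empty string on image-only pages
                text += page_text
    return text

def get_pdf_text(pdf_docs):
    from PyPDF2 import PdfReader  # deferred import; normally at the top of the file
    return concat_pages(PdfReader(pdf) for pdf in pdf_docs)
```

The uploaded files from st.file_uploader are file-like objects, which is exactly what PdfReader accepts.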
text chunks, and a new function called get_text_chunks, which takes a single string of text and returns a list of chunks of text that we can feed our database. To divide our text into chunks, pieces, or paragraphs, we use a class from LangChain called CharacterTextSplitter: from langchain.text_splitter import CharacterTextSplitter. First we create a new instance of it: our text_splitter. CharacterTextSplitter takes several parameters. The first is the separator, which we set to a single line break, "\n". Then the chunk_size, which we set to 1000, meaning a thousand characters, and the chunk_overlap, which we set to 200. To be clear: the chunk_size is the size of each chunk, so if you start at the beginning, a thousand characters will end somewhere in the middle of the text, and the chunk_overlap is basically there to protect you whenever a chunk ends in a place like the middle of a sentence. You don't want to start the next chunk right there, because you'd lose the meaning of that sentence, so the overlap starts the next chunk a few characters before: with an overlap of 200, the next chunk starts 200 characters earlier, to be
sure it contains full sentences and all the meaning you need in a single chunk. Finally, the length_function is Python's built-in len function. Then we create our chunks with text_splitter.split_text(text), passing in the text, and return the chunks. If I'm not mistaken, we now have a splitter whose split_text method returns a list of chunks of about a thousand characters each, with an overlap of 200. Let's see how that looks: st.write the chunks, refresh the page, load the two documents again, and click Process. There you go: the first chunk is this one, the second one is this one, and as you can see the second one starts a little before the first one ended; that's the overlap in action. Now all your chunks are divided, and it's time to use them to create the vector store. It's very quick, so bear with me.

Now that we have our chunks of text, we're going to create our embeddings. If you remember correctly, this is the part where we create the vector representations of our chunks of text in order to store them in our database, so that we can run a semantic search and find the chunks relevant to our question. I'm going to show you two ways of doing this. The first way, which I'll show right now, is by using
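To make chunk_size and chunk_overlap concrete, here is a simplified, character-only version of the splitting arithmetic (the real CharacterTextSplitter also splits on the separator and merges pieces; this sketch only shows how the size and overlap interact):

```python
def split_into_chunks(text, chunk_size=1000, chunk_overlap=200):
    """Slice text into windows of chunk_size that step forward by chunk_size - chunk_overlap."""
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# With size 1000 and overlap 200, consecutive chunks share 200 characters:
# the tail of one chunk equals the head of the next.
chunks = split_into_chunks("some very long document text " * 100)
```

This is why, in the app, the second displayed chunk visibly starts a little before the first one ended.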
OpenAI's embedding models. Keep in mind that these are paid, which matters for your business model if you're going to be loading documents that are thousands of pages long. The prices are at openai.com/pricing under embedding models; the latest one is ridiculously cheap. I remember someone on Twitter saying you could embed the entire transcription of Joe Rogan's podcast for about 40 dollars, so it's not super expensive, but it's definitely something to keep in mind. The second option, which I'll show afterwards, is a free one called Instructor, and it's actually very good; however, it will be way slower if you just run it on your computer and expect your CPU to compute all the embeddings. You really want a GPU or similar hardware for it to be performant. Something worth mentioning: with language models, GPT-3.5 and GPT-4 are pretty much the benchmark for what a language model should be; OpenAI's language models are undoubtedly the best on the market, and other models are measured against them. When it comes to embedding models, though, OpenAI is not at the top. If you look at the official leaderboard on Hugging Face, you can see that OpenAI's model, Ada v2, is actually in sixth position, while Instructor, the one I'm going to show you, is in second position, and it's probably the one I'd recommend if you have your own hardware. I'm not sure whether it's available through the Hugging Face Inference API, but keep that in mind. For now, let's do the OpenAI one.
Let's now create our vector store from these text chunks using OpenAI's embeddings: vectorstore = get_vectorstore(text_chunks). Then we define that function below, and it's actually a very simple one. Since we're using OpenAI's embeddings for now: from langchain.embeddings import OpenAIEmbeddings. We're also using FAISS as our vector store. FAISS is much like Pinecone or Chroma: just a database that lets you store all these numeric representations of your chunks of text. The difference is that FAISS runs locally, so we store all our generated embeddings on our own machine instead of in the cloud, which means they're erased when we close the application; in another video I'll probably show how to use an external, persistent database. So: from langchain.vectorstores import FAISS. Inside the function we set embeddings = OpenAIEmbeddings(), then build the database with FAISS.from_texts, which takes two parameters: the first is the chunks, the actual text, which are our text_chunks, and the second is the embeddings we just created. We return the vectorstore, and there you go: we've successfully created the vector store using OpenAI's embeddings. Let me
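The finished function, as I understood it from the walkthrough (imports are deferred into the function body here only so the sketch stands alone; they normally sit at the top of app.py, and actually running this needs the langchain, openai, and faiss-cpu packages plus an OPENAI_API_KEY in the environment):

```python
def get_vectorstore(text_chunks):
    # Deferred imports; in the real app these sit at the top of the file
    from langchain.embeddings import OpenAIEmbeddings
    from langchain.vectorstores import FAISS

    embeddings = OpenAIEmbeddings()  # paid API: each chunk is sent to OpenAI to be embedded
    # Build the FAISS index in memory from the raw chunks plus the embedding model
    vectorstore = FAISS.from_texts(texts=text_chunks, embedding=embeddings)
    return vectorstore
```

Because FAISS lives in memory, the index disappears when the app stops; that's the trade-off against a hosted store like Pinecone.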
show you how fast that is, because we're just sending the chunks of text to OpenAI's servers, and it's them doing all the heavy lifting. Run streamlit run app.py, upload the same files as before, click Process, and see how long it takes; remember we're using the OpenAI API key I set before. And it's ready: that wasn't very long, and it was OpenAI's servers that did it. If we want to do it on our own computer, we can use the Instructor embeddings instead; let me show you how to do that right now.

So now I'll show you how to do exactly the same thing, creating our vector store, our database with all of our embeddings, but for free. Just a moment ago I was charged for embedding the twenty-ish pages of my two documents; now we'll do it on my machine, for free. As mentioned before, we're using the Instructor embeddings, which, as you can see, are actually ranked higher than OpenAI's. One important thing I forgot to mention: you have to install a couple more dependencies to use this. Run pip install with InstructorEmbedding, the main package we'll be using, and sentence-transformers, a set of dependencies that InstructorEmbedding relies on. This was super fast for me, but these are pretty heavy
packages, so don't worry if it takes several minutes to finish downloading. Once that's installed, the class lives in the same langchain.embeddings module: HuggingFaceInstructEmbeddings. Instead of OpenAIEmbeddings, we initialize our embeddings from HuggingFaceInstructEmbeddings, and this one just takes a model_name parameter, which is exactly the model id you see on the model page: copy it and paste it in. Now we can pass these embeddings into our vector store exactly as before, and it should work; however, as you'll see, it's way, way slower. Let me show you what I mean: I run streamlit run app.py, put the app on one side and the terminal on the other. This is barely twenty pages or so, and remember that OpenAI took about four seconds to embed everything. If I click Process, you'll see it start: it loads the Instructor transformer on my computer, then starts on the CPU, printing the start time in the terminal. I don't have a GPU connected right now, so I'll pause the video and show you in a moment how long it actually took to embed only twenty pages. It finally finished, and it actually didn't take that long: two minutes on my machine. However,
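The free variant is the same function with one line swapped. Note the model id below, hkunlp/instructor-xl, is my assumption for the model shown on screen — substitute whichever Instructor checkpoint you chose on Hugging Face. Running this needs the InstructorEmbedding and sentence-transformers packages installed:

```python
def get_vectorstore_free(text_chunks):
    # Deferred imports; normally at the top of app.py
    from langchain.embeddings import HuggingFaceInstructEmbeddings
    from langchain.vectorstores import FAISS

    # Runs locally: slow on CPU, much faster with a GPU.
    # "hkunlp/instructor-xl" is an assumed model id; use the one from the model page.
    embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-xl")
    return FAISS.from_texts(texts=text_chunks, embedding=embeddings)
```

Everything downstream of the vector store is unchanged; only who computes the embeddings (your machine vs. OpenAI's servers) differs.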
do keep in mind that this can easily scale up if your computer is not powerful enough or if you're relying only on a CPU. So that's how to use the instructor embeddings in your application. Now that we have successfully finished all of this part, we can move on, and LangChain actually allows you to do the next step super quickly, in just one single chain, and we're going to include memory in it as well, so let me show you that in just one moment. All right, so now it's time to start creating this part right here, and it's actually super quick and super simple, because LangChain provides a chain that does this out of the box. It's very convenient because it allows you to add memory, which means that you can ask a question about your document and then ask a follow-up question about the same thing, and the chatbot is going to know the context of your question. So let me show you how to do that real quick. We're going to come right here and create an instance of this conversation chain. I'm going to store it in a variable called conversation, and, just like before, I'm going to create a new function, get_conversation_chain, and this one is going to take my vector store. There you go. Now, just like before, we have to define this function up here, and right here we have to initialize a few things. First of all, since we're going to be dealing with a chatbot that has memory, we have to initialize an instance of memory. To do that, we import it from LangChain; it's called ConversationBufferMemory. So from
langchain.memory we're going to import ConversationBufferMemory. There you go, and now we can initialize it right here: memory equals ConversationBufferMemory. To initialize it we have to set a memory key, which I'm just going to call chat_history, and we're going to set return_messages to True. If you want to know more about how memory works in LangChain, and how you can use buffer memory, entity memory, or other kinds of memory, be sure to check out my video; I have one especially about that. Once we have this memory, we can initialize the chain itself. Let's call it conversation_chain, and it's going to be a ConversationalRetrievalChain, which I actually haven't imported yet, so: from langchain.chains we're going to import ConversationalRetrievalChain, which allows us to chat with our vector store, with our context, and have some memory attached to it. To build it, we call ConversationalRetrievalChain.from_llm, and this one takes a few things. The first one is the language model we're going to be using, so let me just initialize that right here: the language model is going to be OpenAI, which I import with from langchain.llms import OpenAI. There you go, so now I can use OpenAI; this one uses DaVinci by default. Actually, you know what, let's use a chat model instead: from langchain.chat_models we import ChatOpenAI, and we initialize that as our chat model. There you go, now we can use this llm right here. So the first argument that our conversational
retrieval chain takes is the language model, so I'm going to say llm equals my language model. The second argument is going to be the vector store, or the retriever: I take my vector store right here and call as_retriever on it. There you go. And then memory is going to be the memory that I initialized just a moment ago. Now this is my conversation chain, and I'm just going to return it. There you go. Something important to keep in mind is that we have just created our conversation object, which is going to allow us to generate the new messages of the conversation. As a very high-level explanation, all it does is take the history of the conversation and return you the next element in the conversation. This is the one we're going to be using throughout the entire application later on, so it's a good idea to make it persistent. Something about Streamlit is that whenever something happens, like someone clicks on a button or submits something on an input field, Streamlit has a tendency to re-run its entire script. So if I click on something or just submit a text input, it's going to reload the whole thing, which means it's probably going to re-initialize some variables. If I don't want that to happen, if I want some variables to be persistent over time, I can use st.session_state, and that way the variable is linked to the session state of the application, and the application knows that this variable is not supposed to be re-initialized. In this case it's not strictly about the re-initialization, because this is only triggered when we click on the button; it's also useful when you want to use a variable or an object throughout the entire application. So as you can see, right here we initialize the
conversation object right here, but we may want to use it outside of the sidebar, and that would be outside of the scope of this piece of code. A good thing about the session state is that you can use it outside of that scope: if you access st.session_state.conversation, it's going to be available anywhere, so that's a good way to use objects outside of their scope when you're working with Streamlit. Something important about this: it is good practice, when you're using a session state variable, to initialize it first. So we come right here and test whether conversation is not in st.session_state, and if it's not, we initialize it with st.session_state.conversation = None. This way, if the application re-runs itself, it's going to check whether conversation is already in the session state, set it to None if it hasn't been initialized yet, and leave it alone if it has. Now we can use it anywhere during the application. We're going to do the same thing with the history of the chat messages, but now you know how to make your variables persistent during the entire life cycle of your application. Just to make it clear, this is not about refreshing the page: the value only lasts during the session of the application, which is while the application is open; Streamlit just re-runs some code from time to time. There we go. All right, so now that we have done this, I'm going to show you how to display messages. In a previous video I showed you how to do this using a package
from Streamlit called streamlit-chat, which is pretty convenient if you want to get it up and running real quick. However, I'm going to show you a different way to do it right now, which is basically just inserting custom HTML into your application. If you're at ease with HTML, this is probably a good idea for you; if you're not, probably not, but it's pretty convenient. I'm going to create a new file right here and call it htmlTemplates.py, and this is some code that I had already prepared. What we have here is basically the CSS styles that are going to style these two classes, the chat message for the user and for the bot, and we have two templates: this one is the template for the user and this one is the template for the bot. As you can see, I already added some avatar images right here, but you can add your own just by replacing what's in this src attribute. Actually, I don't need this part anymore, now that I think of it; there you go. Here's the message, and this is the part that we're going to be replacing; actually, I don't like it this way, I usually write my placeholder variables like this. There you go. Now we can save this and import these three elements into our application: from htmlTemplates we're going to import css, bot_template, and user_template. Remember that we have to add our CSS up at the top, because just like on a website, you add your CSS on top. So we add it here with st.write, passing our css and allowing unsafe HTML. And then, just to show you how it works outside of the sidebar, I'm going to add it
right here underneath the input element. I'm going to say st.write, pass in the user_template, and allow unsafe HTML; this is only to tell Streamlit that it's supposed to render the HTML inside of it, otherwise it's just not going to parse the HTML as HTML. And this one is going to be the bot_template. Last but not least, let me show you real quick how to replace the placeholder inside; in Python the function is replace. So here we're going to say .replace, and we're going to replace the MSG placeholder with my message: here it's going to be hello human, and here we do the same thing and replace MSG with hello robot. There you go, let's see how that looks. If I refresh right here, there you go: hello robot and hello human. As you can see, this looks pretty professional, like a chatbot: here is the human, here is the bot, and all I had to do was replace the MSG variable inside the user template with my personalized message. I suppose you can start to see how this is going to play out when we replace this with our own messages, so let's do that right now with our actual conversation: let's generate the conversation and use these templates to display the new messages. All right, so now it's time to actually generate the conversation. What we want to do is, when the user fills something in here, we want to be able to handle that input. First of all, I'm just going to get rid of the HuggingFaceInstructEmbeddings, because that's just too slow and it was only for demonstration purposes, so I'm going to continue using OpenAI embeddings for now; but at least now you know how
to do it with HuggingFaceInstructEmbeddings. So right now I'm going to come to my text input and handle the submission. We use user_question to store the value from the input, and then we say if user_question, which is only triggered when the user submits a question, we call handle_userinput and pass in the user_question. Just as before, we create the function up here. There you go. And here, this is pretty interesting: we're going to be using the variable that we created just a moment ago in the sidebar to generate the answer to the user's question, and it's actually very simple to do. Right here we say that the response is going to be equal to st.session_state.conversation, called with a key-value pair of question mapped to the user's question. Now let me just show you what this looks like: st.write(response). So when the user submits a question, I handle that input and write the response from the language model. Remember, this conversation chain already contains all of the configuration from our vector store and from our memory; this means that if we use it again, it's already going to remember the previous question. Since I already set up the memory, if I keep asking questions it's going to remember the previous context. All right, let's see what this looks like. I'm just going to refresh this and bring my two test files right here again
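The input-handling flow just described can be sketched like this. To keep it runnable without any API keys, fake_conversation below is a hypothetical stand-in that only mimics the shape of the real chain's response, and the templates are trimmed-down versions of the ones in htmlTemplates.py; in the app, conversation is the st.session_state.conversation chain and the HTML is rendered with st.write(..., unsafe_allow_html=True).

```python
# Hypothetical minimal templates; the real htmlTemplates.py also carries the
# CSS and avatar images.
user_template = '<div class="chat-message user"><div class="message">{{MSG}}</div></div>'
bot_template = '<div class="chat-message bot"><div class="message">{{MSG}}</div></div>'

def fake_conversation(inputs):
    # Stand-in for the ConversationalRetrievalChain: the real one returns a
    # dict with "answer" and the accumulated "chat_history".
    return {"answer": f"(model answer to: {inputs['question']})"}

def handle_userinput(user_question, conversation):
    # In the app: response = st.session_state.conversation({"question": ...})
    response = conversation({"question": user_question})
    user_html = user_template.replace("{{MSG}}", user_question)
    bot_html = bot_template.replace("{{MSG}}", response["answer"])
    return user_html, bot_html

user_html, bot_html = handle_userinput("What is the First Amendment about?",
                                       fake_conversation)
print(user_html)
print(bot_html)
```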
I'm going to process them, since I'm using OpenAI. So I can now ask: what is the First Amendment about? If I press Enter, it's supposed to tell me the answer; however, it's going to return an entire object with a lot of things in it. Here you have it: it has the answer, and it also has the entire chat history, and that is what's important to us, because remember that we want to display the entire history of the chat. So we're going to take this object and show everything in the chat history, formatted with the templates from before. Let's do that. What I'm going to do is remove this part right here and create a new session state variable, st.session_state.chat_history, and it's going to be equal to my response object's chat_history. There you go, and this is the one I'm going to be displaying. Now we say for i, message in enumerate(st.session_state.chat_history); this basically allows me to loop through the entire chat history with an index and the content at that index. And if i mod 2
equals zero, we're going to st.write our user_template, and remember, just as before, we're going to be replacing MSG, and so on. So let me paste this right here: replace MSG, but this time we're not replacing it with something in particular, we're replacing it with the message itself, and where is it located? Inside message, inside content. This is the entire message that we're looping through, and I want only the content of the message, so I'm going to do message.content right here. Since I did mod 2, this is only going to take the even indices of the history, and then, in the else branch, for the odd indices of the history we're going to st.write as well, but this time it's going to be the bot_template, with the same replace. There you go, now we can delete this part right here, and if I save this it should work. Oh, one thing: just remember that when you are using session state you have to initialize it up here, at the beginning of your application. So if chat_history is not in st.session_state, we're going to initialize it to None, so that we never start using it without it having been initialized. All right, let's go and test it. Right here I'm just going to drop my two test documents, click on Process, and let's see how this works. It seems to be processed; I uploaded the Bill of Rights, so it's supposed to know: what does the First Amendment say? Let's see if it knows, and now it's supposed to actually be displaying the message templates that I created before. There you go: what does the First Amendment say, the First Amendment says, blah blah blah. Then let's see if it grasps some sort of context here, so if I say how about the second one, and I click this, let's see
what it gives us. There you go: it knows that we're talking about the Second Amendment, because we were talking about the First Amendment before. So it has some sort of memory, it has this chat-like structure, and I hope you found that useful and educational. Let me just show you super quick how to do the same thing using Hugging Face models instead of OpenAI's models, because remember, right here we used ChatOpenAI, but we can do pretty much the same thing with Hugging Face. To do that, I'm just going to copy what I have right here and paste it. It's pretty much the same thing, but we have to import HuggingFaceHub, so from langchain.llms we're going to import HuggingFaceHub, and now we can use it right here. In this case I'm just using Google's Flan-T5, but you can use any language model from the Hub: if you come right here and find a model that you want to try with this structure, just remember that we installed huggingface-hub before, so this comes with that; you shouldn't have to install anything else, but in any case just read the errors, they usually tell you exactly what dependency you're missing. Just write the repository ID right here, choose a language model that works, and then set the temperature; for this one in particular, a temperature of zero was causing problems, but anything other than zero is supposed to be all right. So I'm just going to test it like this and show you how it works. I'm going to come right here, refresh this, load the two files again, and once it's
processed, I should be able to ask about the same First Amendment. So: what does the First Amendment say? And it's supposed to answer, yes: Congress shall make no law respecting an establishment of religion, or prohibiting free speech, there it is. The answer is a bit shorter; here I passed in Google's Flan-T5, but feel free to use the language model of your choice. I am not running this locally, I am using the Hugging Face inference API, so this works pretty much just like OpenAI: I'm sending the request to Hugging Face and getting the result back, but this is free, and it's rate-limited, just for testing. So I hope this was useful for you and that you enjoyed it. I hope you now have a very nice project that you can show your employers and your clients, and that you start creating really nice, productive, and beautiful applications like this to solve real-world problems. If you want to see more of this, be sure to subscribe, and if you have any questions, just let me know in the comments. Congratulations for following to the end; it was a somewhat longer and more complex project than the ones I had done before. Let me know if you like this kind of project, but I'm going to continue publishing content for beginners as well. Thank you very much for watching, and I will see you next time.
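For reference, here is roughly what the conversation-chain helper from this section looks like, with the Hugging Face swap as an option. This is a sketch, assuming langchain and huggingface-hub are installed and the API keys are in your .env; google/flan-t5-xxl is an example repo ID standing in for the Flan-T5 model used in the video, and the use_huggingface flag is my own addition for illustration.

```python
# Sketch only -- assumes langchain + huggingface-hub are installed and
# OPENAI_API_KEY / HUGGINGFACEHUB_API_TOKEN are set in .env.
from langchain.chat_models import ChatOpenAI
from langchain.llms import HuggingFaceHub
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain

def get_conversation_chain(vectorstore, use_huggingface=False):
    if use_huggingface:
        # Free via the Hugging Face inference API; a temperature of exactly
        # zero caused problems in the video, so keep it above zero.
        llm = HuggingFaceHub(repo_id="google/flan-t5-xxl",
                             model_kwargs={"temperature": 0.5, "max_length": 512})
    else:
        llm = ChatOpenAI()  # chat model instead of the default DaVinci LLM
    memory = ConversationBufferMemory(memory_key="chat_history",
                                      return_messages=True)
    return ConversationalRetrievalChain.from_llm(
        llm=llm,
        retriever=vectorstore.as_retriever(),
        memory=memory,
    )
```

Calling the returned chain with a {"question": ...} dict yields a response whose chat_history the app loops over to render the user and bot templates.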
Info
Channel: Alejandro AO - Software & Ai
Views: 20,884
Keywords: openai, chatpdf, chatgpt tutorial, langchain, langchain chatbot, langchain pdf, langchain pdf chatbot, langchain chatbot memory, langchain agents, prompt engineering, chatgpt api python, langchain ai, llm, langchain llama, chatgpt, chat multiple pdf, multiple pdf chatgpt, chatgpt plugins, chat with your data, chat with your documents, private gpt, artificial intelligence, ai, chat with files, open-source GPT, GPT-4, gpt4all, gpt4all langchain, python
Id: dXxQ0LR-3Hg
Length: 67min 29sec (4049 seconds)
Published: Mon May 29 2023