Chat With Multiple PDF Documents With Langchain And Google Gemini Pro #genai #googlegemini

Video Statistics and Information

Captions
Hello, my name is Krish Naik, and welcome to my YouTube channel. In this video we are going to create an end-to-end project for chatting with multiple PDF documents using Google Gemini. One amazing thing about this project is that we'll also be using LangChain, and we'll see how LangChain provides some important ways of integrating Google Gemini Pro so you can develop amazing applications. There are two main features we are going to use; one of them is a vector embedding technique — you can use whatever vector embeddings you like, but today I'll show you one that was created by Facebook. We'll talk more about it as we go ahead.

First of all, let me show you the demo, and then we'll develop the entire application. When I say "chat with PDF", I'm talking about chatting with multiple PDFs. I'll click Browse — let's say I have two important PDFs here, both research papers: "Attention Is All You Need" and the YOLO paper — and open them. Once they're uploaded, we click Submit. What happens then is that these PDF files get converted into vector embeddings and stored locally, or in a database — I've already shown you how to use the Pinecone database or Cassandra DB to store this kind of vector. You can see the status now shows "Done". So that's one important piece of functionality: we browsed and uploaded the files — you can upload any number of PDFs, with a limit of 200 MB per file — and as soon as I clicked
Submit & Process, it all got converted into vectors. Now whatever question I ask about these PDFs, I'll be able to retrieve an answer. As I said, I've uploaded two PDFs, one being "Attention Is All You Need", so you can pick any question and ask it. Let's say I ask "what is scaled dot-product attention" and see whether I get a response. It's running — the documents are already converted into vectors, so I should get some output. And here you can see: "Scaled dot-product attention is a mechanism used in the Transformer model for calculating the attention weights between different positions in sequences." So we really are getting the response. Not only that: you can write any prompt you want — say, "please provide a summary about multi-head attention" — and submit it. Many people have asked me, "Krish, how do I interact with multiple PDFs?" There can be any number of PDFs here; I've even tried with 10 to 15 different PDFs. Here you can see the reply: "Multi-head attention is a technique used in the Transformer model to improve its ability to attend to different parts of the input sequence." That was with respect to the attention PDF. I also have the YOLO PDF, so let's ask "provide a detailed summary about the VOC 2007 error analysis". I'll press Enter, and I should get the response. That's the entire demo, but building the application yourself will teach you multiple things, so we'll go through it step by step. Here you can see you've got the
answer as well, and you can see the entire result. Okay, so let's now develop this application, completely from scratch. As I said, we're going to use Facebook AI Similarity Search (FAISS). Whenever you work with vector embeddings you can use it; you can also use ChromaDB if you want — I've already used ChromaDB in one of my other projects — but in this particular video let's use FAISS.

So guys, let's build this project step by step; please follow all the steps I show you. The best thing is that we'll keep it modular — a front end plus back-end functions — and we'll see how, once vector embeddings are created, you can store them locally (or, if you have the option, in any database).

First of all, as usual, I'll open a terminal. Whenever you start a project, you always need to create a new environment. I've already created one called venv, but here is the command to create it: conda create -p venv python=3.10 — venv is the environment name, and Google Gemini Pro works well with Python versions greater than 3.9, so I'm using 3.10. As soon as you run this and press Enter, an environment named venv gets created. Please do this step; I'm not executing the command on screen because I've already done it in my previous videos, so we don't have to wait for all of that to happen again here. So once this venv
environment is created, we need to run conda activate venv, because it's inside this environment that I'll install all the libraries. So these are the first two steps you really need to follow: create the environment with conda create -p venv python=3.10 (again, Google Gemini Pro works well with Python versions greater than 3.9), then activate it.

The second thing we'll do is create our environment variable. Here I have to use a Google API key, and I've already shown you how to get one: go to makersuite.google.com/app/apikey, click "Create API key in new project", copy the API key, and paste it into your .env file — that's exactly what I've done here. The name I'm giving the variable is GOOGLE_API_KEY. Without this environment variable I will not be able to use Google Gemini Pro.

Next, I'll install all the libraries that will be required for this specific use case.
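The .env file described just above holds the key under the name GOOGLE_API_KEY; it would contain a single line like this (the value is a placeholder — paste in your own key):

```
GOOGLE_API_KEY=your-google-api-key-here
```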
Writing them out step by step, the libraries I'll be requiring are: streamlit; google-generativeai, which is for Gemini Pro; python-dotenv, to load all the environment variables; langchain, because we need to read the PDFs and LangChain has a lot of functionality for that; PyPDF2, for actually reading the PDFs; and faiss-cpu — if you want you can use ChromaDB instead, but right now I'll go with faiss-cpu. There is also faiss-gpu, but I'm using the CPU version because I have a small number of files; if you have more files and want parallel processing, you can use faiss-gpu. One more important library is langchain_google_genai, which basically means that with the help of LangChain you'll be able to access the APIs of Google generative AI. So these are the important libraries required for this project. Now run pip install -r requirements.txt. When I do this you can see it says "requirement already satisfied" — I've already done the installation so that we don't waste much time on these two steps, and I'm proceeding with it. So this is done: I'm good with the requirements.txt file, and my API key is ready.
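Collecting the packages just listed, the requirements.txt for this project would look roughly like this (left unpinned here; in a real project you'd pin versions, and you can swap faiss-cpu for faiss-gpu if you need parallel processing):

```
streamlit
google-generativeai
python-dotenv
langchain
PyPDF2
faiss-cpu
langchain_google_genai
```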
Everything is ready, so now let's do the coding — this time completely from scratch, understanding everything as we go. First of all, as I said, I'll import streamlit as st. Along with this, I'll also use from PyPDF2 import PdfReader — with the help of PdfReader we'll read all the documents. Apart from that, there are some important libraries to import with respect to LangChain. First, from langchain.text_splitter I'm going to use RecursiveCharacterTextSplitter. And as you know, as soon as we read the PDF we need to convert it into vectors, and Gemini Pro also provides an embedding technique, so for that I'll write from langchain_google_genai import GoogleGenerativeAIEmbeddings — don't worry about the warnings; they'll go away once we execute it for the first time. Along with that, I'll import google.generativeai as genai, because I'm going to use its generative AI functionality. Then I'll import four more things, and I'll explain each of them step by step. One is FAISS, for the vector embeddings, from langchain.
vectorstores — that's where we'll create the vector store from. Then langchain_google_genai also has ChatGoogleGenerativeAI, and I'll explain why we're using it. Then from langchain.chains.question_answering we import load_qa_chain, which basically helps us do the chat. I also want to define prompts, so I import from langchain.prompts import PromptTemplate. And finally from dotenv import load_dotenv, specifically to load all the environment variables. Those are the imports. Now I'll call load_dotenv(), which makes the environment variables visible, and then genai.configure(api_key=os.getenv("GOOGLE_API_KEY")), because I need to configure my API key for Google Gemini — we're configuring the API key with whatever GOOGLE_API_KEY we put in the .env file.

Now, remember the demo: on the left there was a place to upload the PDFs and convert them into vectors, and on the right there was a text box. As soon as we upload a PDF on the left, we should be able to read it and get at whatever data is inside. For that I'll create a function, get_pdf_text, which takes pdf_docs — whatever documents I'm uploading. In it I'll create a text variable, and write for pdf in pdf_docs — meaning: go through all the PDFs in pdf_docs — and read each one with PdfReader, which I hope I have
imported — yes, okay, I have it up here. So PdfReader will be responsible for reading the specific PDFs. Now, when it reads a PDF there will be multiple pages in it, and the important internal detail is that as soon as we read a PDF with PdfReader, we can get the details of all its pages — the reader exposes them like a list. So I'll write for page in pdf_reader.pages to go through every page, and then text += page.extract_text() — extract_text() means that from this particular page we extract all the text inside it. Once we've done this, we finally return text. Perfect: in short, these steps say we read each PDF, go through every page, and extract the text. That's the first piece of information we want, because later we'll take this text and divide it into smaller chunks.

So, as usual, I'll create another function, get_text_chunks. We'll do this step by step — please follow all of it, because this pattern is generic for the PDF side of whatever application you develop in generative AI. This function takes the text, and inside it I'll create text_splitter using RecursiveCharacterTextSplitter, which we imported above. I'm telling it: make the chunk size 10,000 instead of 1,000, and, since I have a very big PDF, set the chunk overlap to 1,000.
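The reading step just described — open each PDF, walk its pages, accumulate the text — can be sketched as below. This is a minimal version of the video's get_pdf_text; the `or ""` guard is my own addition, since extract_text() can return None for image-only pages:

```python
def get_pdf_text(pdf_docs):
    """Concatenate the text of every page of every uploaded PDF."""
    # Lazy import for this sketch; the video places it at the top of app.py.
    from PyPDF2 import PdfReader

    text = ""
    for pdf in pdf_docs:
        pdf_reader = PdfReader(pdf)
        for page in pdf_reader.pages:
            # extract_text() can return None on pages with no extractable text
            text += page.extract_text() or ""
    return text
```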
So overlapping can happen between chunks. What's going on here: I have the entire text, and I want to divide it into smaller chunks of that particular size — 10,000 words or tokens, roughly — with an overlap of 1,000, so that I get a detailed answer to any question I ask. Once that's done, I write text_splitter.split_text(text) — taking the text and splitting it according to that configuration — and then return the chunks. Perfect: that was the PDF, I got the text, and I divided the text into chunks.

Now the main thing is to take those text chunks and convert them into vectors. So I'll define get_vector_store, taking text_chunks — whatever chunks we got — and here I set up my embeddings. Which embeddings? The GoogleGenerativeAIEmbeddings we used above: if you look at the documentation, there is a model called models/embedding-001, and that's the embedding technique I'll use. LangChain supports different embeddings, OpenAI has its own, and you can also use Hugging Face, but right now I'm using an embedding that is completely free for everyone — Google generative AI embeddings already provides so many features, so why go with others and pay money? Then I write vector_store = FAISS.from_texts — FAISS is already imported — telling it to take all these text chunks and embed them according to the embeddings I've initialized.
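The two functions above, as described in the narration, come out roughly like this. The langchain calls (RecursiveCharacterTextSplitter, FAISS.from_texts, save_local) are the ones named in the video, though exact import paths vary by langchain version; naive_chunks is my own simplified stand-in to show what overlapping chunks mean, and is not part of the app:

```python
def get_text_chunks(text):
    """Split the raw text into overlapping chunks before embedding."""
    # Lazy import for this sketch; the video places it at the top of app.py.
    from langchain.text_splitter import RecursiveCharacterTextSplitter
    splitter = RecursiveCharacterTextSplitter(chunk_size=10000, chunk_overlap=1000)
    return splitter.split_text(text)

def get_vector_store(text_chunks):
    """Embed the chunks with Gemini's embedding-001 model and save a FAISS index locally."""
    from langchain_google_genai import GoogleGenerativeAIEmbeddings
    from langchain.vectorstores import FAISS  # langchain_community.vectorstores in newer versions
    embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001")
    vector_store = FAISS.from_texts(text_chunks, embedding=embeddings)
    vector_store.save_local("faiss_index")

def naive_chunks(text, size=10000, overlap=1000):
    """Illustration only: fixed-size chunks where each chunk re-uses the last
    `overlap` characters of the previous one."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]
```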
That's the embedding step — this is how the vector store gets created. Now, this vector store can be saved in a database, or in the local environment. For now I'll write vector_store.save_local("faiss_index"). The name faiss_index means I'm saving all this information locally: a folder with that name will be created, and inside it I'll be able to see my vectors in some unreadable format, plus a pickle file, so that whenever I ask a question against those vectors I can get the information back. So those are the three steps: first we got the PDF text, then we converted it into chunks, and then we turned the chunks into vectors.

Next I'll write get_conversational_chain. Here I'll define my prompt template, and let me provide a meaningful prompt: "Answer the question as detailed as possible from the provided context, and make sure to provide all the details. If the answer is not in the provided context, just say 'answer is not available in the context'; don't provide a wrong answer." That's what I've put in the prompt template — I'm telling the model how to behave, like a person who knows how to read PDFs. That's the first step; we'll see what happens next. With the prompt template written, I'll now create the model with ChatGoogleGenerativeAI, and here I'll specify the model, which is
gemini-pro — I hope everyone is familiar with it. So I initialize the model: ChatGoogleGenerativeAI with model gemini-pro, and I'll use a temperature of, let's say, 0.3. Then, since I've created a prompt, I'll use the PromptTemplate available in LangChain: prompt = PromptTemplate(template=prompt_template, ...), and you know the input variables — they go in as a list, and there are only two: context and question. So that's my prompt. Once this is done I create my chain: load_qa_chain, passing in whatever model I defined, with chain type "stuff" — because I also need it to do internal summarization over the documents, and the stuff documents chain helps with that — and passing in my prompt. Done, this looks good; finally I return the chain. So that's all get_conversational_chain does: load the Gemini Pro model, create our template, and get the chain.

Finally, for the user input: as soon as I type something into the text box, something should happen. So I've created a user_input function, which takes the user's question.
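Putting the chain pieces above together, get_conversational_chain comes out roughly as below — a sketch using the classes named in the narration (ChatGoogleGenerativeAI, PromptTemplate, load_qa_chain); exact import paths and signatures depend on your langchain version:

```python
def get_conversational_chain():
    """Build a 'stuff' QA chain over Gemini Pro with a custom prompt."""
    # Lazy imports for this sketch; the video places them at the top of app.py.
    from langchain_google_genai import ChatGoogleGenerativeAI
    from langchain.prompts import PromptTemplate
    from langchain.chains.question_answering import load_qa_chain

    prompt_template = """
    Answer the question as detailed as possible from the provided context.
    Make sure to provide all the details. If the answer is not in the
    provided context, just say "answer is not available in the context";
    don't provide the wrong answer.

    Context:\n{context}\n
    Question:\n{question}\n

    Answer:
    """
    model = ChatGoogleGenerativeAI(model="gemini-pro", temperature=0.3)
    prompt = PromptTemplate(template=prompt_template,
                            input_variables=["context", "question"])
    # chain_type="stuff" stuffs all retrieved documents into a single prompt
    return load_qa_chain(model, chain_type="stuff", prompt=prompt)
```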
In user_input, these are the embeddings I'm loading, and then I load the FAISS index from local storage. The point is: when I type anything into the text input — "hey, tell me the summary of this particular topic from that PDF" — that PDF has already been converted into vectors and stored in the FAISS index. That's why we load the FAISS index from local storage, then do a similarity search based on the user's question, and then, after the similarity search, call get_conversational_chain. That gives us the chain back; we get the response, and finally we display it. Those are the steps in user_input — this is what happens behind the text box. Once more: first we give the question; as soon as we do, it runs a similarity search over all the FAISS vectors that were created; then I get the conversational chain; then we call the chain with input_documents set to the docs from the similarity search and question set to the user's question; and finally we get the response.

Now, quickly, let's create our Streamlit app. I'll define a main function to hold the Streamlit code — and again, I've used ChatGPT to help build the front end. It's "chat with multiple PDFs": st.header("Chat with multiple PDFs using Gemini"), and then user_question = st.text_input("Ask a Question from the PDF Files").
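The user_input flow just described — load embeddings, load the saved index, similarity-search, run the chain, display — might look like this. FAISS.load_local and similarity_search are the calls named in the video (newer langchain versions also require allow_dangerous_deserialization=True on load_local), and get_conversational_chain is assumed to be defined earlier in app.py:

```python
def user_input(user_question):
    """Answer a question against the locally saved FAISS index."""
    # Lazy imports for this sketch; the video places them at the top of app.py.
    import streamlit as st
    from langchain_google_genai import GoogleGenerativeAIEmbeddings
    from langchain.vectorstores import FAISS

    embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001")
    new_db = FAISS.load_local("faiss_index", embeddings)
    docs = new_db.similarity_search(user_question)  # most relevant chunks
    chain = get_conversational_chain()              # defined earlier in app.py
    response = chain(
        {"input_documents": docs, "question": user_question},
        return_only_outputs=True,
    )
    st.write("Reply: ", response["output_text"])
```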
If user_question is filled in — that is, as soon as I type a question there and press Enter — then user_input() gets executed automatically. There's also a sidebar where I upload the PDFs and convert them into vectors; that's what this code is for: with st.sidebar:, then st.title("Menu:"), then a file uploader and a button, and as soon as we hit the Submit button we call get_pdf_text, then get_text_chunks, and finally get_vector_store. Those three functions are there only to make sure the FAISS index gets created — that's why they live in the sidebar, where we upload the files and process them. Very simple; I hope everybody's familiar with this. Finally, I'll add if __name__ == "__main__": and call the main function.

Now let me quickly run it: streamlit run app.py. See, on the left there's a Browse button; as soon as I hit Submit, this whole pipeline gets called, one function after another — get_pdf_text first, then get_text_chunks, then get_vector_store. Right now nothing is there — no faiss_index, nothing. So I'll browse, select both PDFs, and hit Submit & Process. It's processing, and now you can see one folder has appeared.
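The Streamlit wiring described above — question box on top, sidebar to upload and process — can be sketched as follows; widget labels are paraphrased from the demo, and the helper functions are assumed to be defined earlier in app.py:

```python
def main():
    """Streamlit page: a question box, plus a sidebar for uploading PDFs."""
    import streamlit as st  # lazy import for this sketch; normally at the top of app.py

    st.header("Chat with multiple PDFs using Gemini")
    user_question = st.text_input("Ask a Question from the PDF Files")
    if user_question:
        user_input(user_question)  # defined earlier in app.py

    with st.sidebar:
        st.title("Menu:")
        pdf_docs = st.file_uploader(
            "Upload your PDF files and click on the Submit & Process button",
            accept_multiple_files=True,
        )
        if st.button("Submit & Process"):
            with st.spinner("Processing..."):
                raw_text = get_pdf_text(pdf_docs)        # 1. read all pages
                text_chunks = get_text_chunks(raw_text)  # 2. split into chunks
                get_vector_store(text_chunks)            # 3. embed + save FAISS index
                st.success("Done")

# In app.py this runs under the usual guard:
# if __name__ == "__main__":
#     main()
```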
That folder is faiss_index. It contains index.faiss — which, obviously, you can't read — and an index.pkl pickle file. This FAISS index is where the vector similarity search will be done; as I said, all this information could also be stored in a database and fetched from there. Now I'll go ahead and ask a question — let's say "provide a detailed summary of" and I'll paste in the topic. As soon as I press Enter, where does it go? To that user_input function we saw: if user_question is set, it calls user_input, the embedding model gets loaded, the FAISS index gets loaded, the similarity search runs, it finds the chain, and based on that chain it gives me the response — the output_text. Okay, let me reload; I think I didn't press Enter or something. "Provide a summary of" — Enter. The FAISS index is already there, so let's see; I think we should see some chat. Yes, here you get it.

So guys, I hope you liked this video — this was it, an amazing end-to-end project. You can now see how to chat with multiple PDFs using Gemini, and I've shown you how to use it with LangChain, with a FAISS index and everything; you could also do it with ChromaDB. That was it from my side; I'll see you all in the next video. Thank you, take care, have a great day, bye-bye.
Info
Channel: Krish Naik
Views: 46,256
Keywords: yt:cc=on, chat with multiple pdfs, generative ai tutorials, google generative ai tutorials, google gemini pro, FAISS vector stores, end to end Langchain projects, krish naik generative aI
Id: uus5eLz6smA
Length: 29min 21sec (1761 seconds)
Published: Tue Jan 09 2024