LangChain Crash Course - Build apps with language models

Video Statistics and Information

Captions
Hi everyone, I'm Patrick, and welcome to this new tutorial about LangChain. LangChain is a framework for developing apps powered by large language models. For example, if you want to build your own ChatGPT app based on your own data, then this framework is perfect for it, but there is a lot more to it. Here you can see all its key functionalities, which are divided into different modules: there are models, so you can access different models with LangChain; there are prompts, so you can easily create your own prompt templates; you can manage memory with it; there are indexes, which are needed to combine the language models with your own text data; there are chains, which are sequences of calls, so for example you can combine multiple different models or prompts; and lastly there are agents, which are super powerful, for example you can tell an agent to access Google Search. In this video we go over all of these modules, and in the end you should have a great understanding of how this framework works so that you can hopefully build your own AI apps with large language models. So let's get started.

Installation can be done with pip install langchain, and later, when we want to use specific LLMs or a specific vector database, we also have to install the corresponding packages; we will get to this in a moment. The first core functionality is a generic interface for different LLMs, and we can have a look at the different integrations: for example there is OpenAI, there is Cohere, there is Hugging Face, and a lot more. Most of these integrations work via their API, but we can also run local models with LangChain. Let me show you how to use OpenAI. In this case we also have to install the openai Python SDK, and then we have to set our OpenAI API key; you can either do this in Python code or set it as an environment variable on your local system. Then we can import the OpenAI interface from LangChain and create the LLM.
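For reference, the installs used over the course of the video can be collected in one place. The first command is the one mentioned here; the later package names (for the Wikipedia tool, the embeddings, and the vector store) are my assumptions based on the tools shown later in the video:

```shell
# Core framework
pip install langchain

# LLM provider SDKs (for the OpenAI and Hugging Face Hub sections)
pip install openai huggingface_hub

# Agent tool and indexing dependencies (assumed package names:
# Wikipedia tool, Hugging Face embeddings, FAISS vector store)
pip install wikipedia sentence-transformers faiss-cpu
```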
When we create our OpenAI LLM we can set different parameters, for example a different model name (there is a default model), and then we can give it a text and run the model. This should give you the same output as you would get with the official OpenAI API. Here it created a company name; if we run it again we get a different output, and this is the company name it suggests.

Now let me show you how to use the Hugging Face Hub as a second example. In this case we have to set the Hugging Face Hub API token, which you get on the Hugging Face website. Then we import the HuggingFaceHub class and create our LLM by setting the repo ID; in this case we use a model from Google. Here again we can set different parameters, and then we can run our LLM; in this case we say "translate English to German" followed by the sentence, and this works. So now you know how to access different models with LangChain.

The second important functionality is prompt templates. LangChain facilitates prompt management and optimization, because most of the time we don't want to pass a question directly to the model. Here we ask: can Barack Obama have a conversation with George Washington? Let's run this and see what we get. The output is that Barack Obama is the current president and George Washington was a past president. This is not quite correct, and it also didn't answer the actual question. A better way to design our prompt is to write "Question:", then the actual question, then "Let's think step by step.", and finally "Answer:". If we pass this to the model, the answer is: George Washington died in 1799, Barack Obama was born in 1961, so the final answer is no. This time we get a correct answer. LangChain makes it super simple to create these prompt templates; for this we can import the PromptTemplate class from LangChain.
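The two prompt layouts just compared can be written down as plain strings before any LangChain machinery is involved; the wording follows the video, and this is only a sketch of the layout, not LangChain code:

```python
question = "Can Barack Obama have a conversation with George Washington?"

# Naive prompt: the bare question, which tended to get a shallow,
# not-quite-correct answer in the video
naive_prompt = question

# Better prompt: label the question, ask for step-by-step reasoning,
# and leave an explicit "Answer:" slot for the model to fill in
better_prompt = (
    f"Question: {question}\n\n"
    "Let's think step by step.\n\n"
    "Answer:"
)

print(better_prompt)
```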
The PromptTemplate is the most basic one. We specify our template with a placeholder, and when we create the PromptTemplate we also give it the input variables as a list; here we have to use the same names that we used for the placeholders. Now let's run the cell. Then we can say prompt.format and pass the question as an argument (again using the same name), and you can see this will be our final prompt. But we cannot pass this prompt directly to the LLM; if we try, we get a TypeError. To combine a prompt template with a model, we have to use a so-called chain.

With chains we can combine different LLMs and prompts in multi-step workflows, so we can chain multiple models and prompts together, and there are a lot of how-to guides for different use cases. The most basic one is the LLMChain, but there are also chains for conversations, question answering, or summarization; for these, have a look at the documentation. Now let's look at the LLMChain: we import it and create it, giving it the prompt template and the LLM as parameters. Then we create the same question again and call llm_chain.run with only the question. Remember, this question is now passed to the prompt template, and the final prompt is given to the LLM. If we run this, we should again get a good response, and as you can see, we again get a correct answer. So this is how to work with chains in LangChain.

Now let's talk about agents and tools. This is another core functionality in LangChain that can make your application extremely powerful; with it we can solve very complex questions and tasks. In the concrete example I want to show you, we will ask the model: in what year was the film The Departed with Leonardo DiCaprio released, and what is this year raised to the 0.43 power?
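To make the two pieces concrete, here is a minimal offline sketch of the mechanics that a prompt template and an LLM chain wrap; the class names and the fake_llm below are my own stand-ins, not LangChain's actual implementation:

```python
class SimplePromptTemplate:
    """My own bare-bones stand-in for LangChain's PromptTemplate."""

    def __init__(self, template, input_variables):
        self.template = template
        self.input_variables = input_variables

    def format(self, **kwargs):
        # The placeholder names must match input_variables,
        # just like in the video
        missing = set(self.input_variables) - set(kwargs)
        if missing:
            raise KeyError(f"missing input variables: {missing}")
        return self.template.format(**kwargs)


class SimpleLLMChain:
    """Stand-in for LLMChain: format the prompt, then call the model."""

    def __init__(self, prompt, llm):
        self.prompt = prompt
        self.llm = llm

    def run(self, **kwargs):
        return self.llm(self.prompt.format(**kwargs))


def fake_llm(text):
    # A fake model so the sketch runs offline; a real chain would call
    # LangChain's OpenAI or HuggingFaceHub wrapper here
    return f"[model answer to: {text!r}]"


template = "Question: {question}\n\nLet's think step by step.\n\nAnswer:"
prompt = SimplePromptTemplate(template, input_variables=["question"])
chain = SimpleLLMChain(prompt, fake_llm)
print(chain.run(question="Can Barack Obama talk to George Washington?"))
```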
Often your model cannot answer this on its own, but it can access different tools; in this example it will access Wikipedia to look up the film, and then it will use llm-math to perform the actual math. This is super powerful if used correctly, so let's first talk about how it works. Agents involve an LLM making decisions about which actions to take, taking that action, seeing an observation, and repeating that until done. For this we have to differentiate between the tools, the LLM, and the agent. A tool is a function that performs a specific duty; this can be things like a Google search, a database lookup, the Python REPL, Wikipedia, llm-math, and more. The LLM is the language model that powers the agent, and then there is the actual agent. Let's first look at some of the supported tools: for example there are ChatGPT plugins, Google Search, the Python wrapper, requests, the Wikipedia API, Wolfram Alpha, and some more. We also have to differentiate between different agent types; right now there are four available, and the one you will probably see the most is the zero-shot-react-description type, which decides which tool to use based solely on the tool's description. Now let me show you how to use this. We import load_tools and initialize_agent. In this case we want to use the Wikipedia tool, so we also have to install the wikipedia Python package. Then we set up a model again, and here I have to say that agents and tools work best with the OpenAI models. We create our LLM, then we call load_tools, where we can pass any of the supported tools as a list (you will find their names in the documentation) together with the LLM that will power the agent, and then we call initialize_agent with the tools, the LLM, and the agent type. Then we can give it our complex question; let's run this and see what we get.
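Mechanically, the loop described above (decide on an action, run the tool, observe the result, repeat) can be sketched with toy stand-ins; the hard-coded fact table and the two mini tools below are my own illustrations, not LangChain's agent:

```python
def wikipedia_tool(query):
    # Hard-coded fact table standing in for a real Wikipedia request
    facts = {"The Departed": "The Departed was released in 2006."}
    return facts.get(query, "No article found.")

def calculator_tool(expression):
    # llm-math ultimately boils down to evaluating a math expression
    return eval(expression, {"__builtins__": {}})

def run_agent():
    # Thought: find the release year, then raise it to the 0.43 power
    observation = wikipedia_tool("The Departed")      # Action: Wikipedia
    year = int(observation.rstrip(".").split()[-1])   # Observation: the year
    power = calculator_tool(f"{year} ** 0.43")        # Action: Calculator
    return year, power                                # Final answer

year, power = run_agent()
print(f"Released in {year}; {year} ** 0.43 is roughly {power:.2f}")
```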
Here we get the whole output, and we can follow the thought process. The model said: I need to find out the year the film was released and then use the calculator to calculate the power. The first tool it wants to use is Wikipedia, so it queries Wikipedia and then says: I now know the year the film was released; the next action to take is to use the calculator. It uses the calculator, gets the math output, and then says: I now know the final answer. The final answer is that the film The Departed with Leonardo DiCaprio was released in 2006, and this year raised to the 0.43 power is this value. This is correct, and as you can see, this concept of agents and tools is super powerful and enables a lot of complex questions and workflows with your models.

The next important concept in LangChain is memory. With LangChain we can easily add state to chains and agents, and the most popular example for this is of course building a chatbot. We can do this very easily with the ConversationChain: we import it, create a model again, create our ConversationChain with the model, and then we can call conversation.predict with the first input. Let's run this, and since we set verbose=True we can have a look at the whole output. First of all you see what the ConversationChain does: it formats the prompt with a preamble saying that the following is a friendly conversation between a human and an AI, and that the AI is talkative and provides lots of specific details; then comes the current conversation, with what the human said and how the AI responded, and then we get the response: hi there, it's nice to meet you. Now if we run this again with the next input, can we talk about AI, we can again see the whole prompt including the whole current conversation, so it remembered the previous questions and answers, and the answer this time is: absolutely, what would you like to know about AI?
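The prompt assembly that the verbose output shows can be sketched like this; SimpleConversationMemory is my own stand-in for the buffer a ConversationChain keeps, not LangChain's class, and the preamble wording follows what the video displays:

```python
class SimpleConversationMemory:
    """Stand-in for the conversation buffer a ConversationChain keeps."""

    def __init__(self):
        self.turns = []  # list of (human, ai) pairs

    def add(self, human, ai):
        self.turns.append((human, ai))

    def as_prompt(self, new_input):
        # Rebuild the prompt the way the verbose output shows:
        # preamble, then the running conversation, then the new input
        lines = [
            "The following is a friendly conversation between a human "
            "and an AI. The AI is talkative and provides lots of "
            "specific details.",
            "",
            "Current conversation:",
        ]
        for human, ai in self.turns:
            lines.append(f"Human: {human}")
            lines.append(f"AI: {ai}")
        lines.append(f"Human: {new_input}")
        lines.append("AI:")
        return "\n".join(lines)


memory = SimpleConversationMemory()
memory.add("Hi there!", "Hi there! It's nice to meet you.")
print(memory.as_prompt("Can we talk about AI?"))
```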
So this is how easily we can add memory to a chatbot. The next important module is document loaders. With document loaders you can very easily load your own text data from different sources into your app and then feed it to your models. Let's go over the supported document loaders; here is the whole list, and you can see there are quite a bunch of them: for example a CSV loader, an email loader, Evernote, Facebook chat, HTML, Markdown, Notion, PDFs, PowerPoint, and you can also easily load URLs. For each of them it's super simple to set this up. For example, if we click on the PDF loader, this is how you would use it: you import the loader, set it up, and then call load() (or, in this case, load_and_split()). In this notebook I use the Notion directory loader: we give it the Notion database and then call loader.load(), and now we have the raw docs.

But before we can feed this to the model, we need to understand indexes. Indexes refer to ways to structure documents so that LLMs can best interact with them, and the indexes module in LangChain contains utility functions for working with documents. In order to work with documents we have to understand three concepts. First, embeddings: an embedding is a numerical representation of your data, for example of your text. Second, text splitters, which split long pieces of text into smaller chunks. And third, vector stores: these can be different vector databases, and with them we can capture the meaning of the data and, for example, get more accurate search results. Usually, in order to feed our own data to the model, we need to combine all of these concepts, so let's go over a concrete example to make it clearer. In this example I want to load a text file that I downloaded, a .txt file, and the first step is to apply a document loader; there's also a dedicated one for plain text.
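What a plain-text document loader produces is essentially the file's content plus some source metadata; the sketch below mimics that shape with a plain dict instead of LangChain's Document class, and the file content is just filler text:

```python
import os
import tempfile

def load_text_file(path):
    """Minimal stand-in for a text loader's load(): returns a list
    with one document dict (content plus source metadata)."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    return [{"page_content": text, "metadata": {"source": path}}]

# Create a small .txt file to stand in for the downloaded text file
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("LangChain makes it easy to load your own text data.")
    path = f.name

docs = load_text_file(path)
print(docs[0]["metadata"]["source"], len(docs[0]["page_content"]))
os.remove(path)  # clean up the temporary file
```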
In this case we use the TextLoader, which we set up here. The next step is to apply a text splitter; again there are different ones available, and in this case we use the CharacterTextSplitter: we set it up and call split_documents. The next step is to set up embeddings, and again there are different ones available; in this case we use the HuggingFaceEmbeddings, for which we have to install a third-party package, and then we simply create them. The last step is to use a vector store, and there are different ones supported in LangChain; for example you could use Elasticsearch, FAISS, Pinecone, or Weaviate. In this small code snippet I use the FAISS vector store: we import it and call FAISS.from_documents, passing the docs and the embeddings. Then, for example, you can easily perform a similarity search: you could ask what the president said about Ketanji Brown Jackson, and this is the most similar result it finds, the most similar text chunk; as you can see, it contains the name, so this is working. This is typically how you load text into your app: first a document loader, then a text splitter, then embeddings, and lastly a vector store. To understand this even better, there's also a very cool end-to-end example you can check out, the chat-langchain repository; the link will be in the description below. So yeah, these are the most important concepts you should know about LangChain. I hope you enjoyed this tutorial; if so, then drop me a like, and I hope to see you in the next video. Bye!
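The whole load, split, embed, store, search pipeline from this section can be sketched offline; the fixed-size character splitter here is a crude simplification, and the bag-of-words "embedding" with cosine search only mimics what HuggingFaceEmbeddings and FAISS do for real:

```python
import math
from collections import Counter

def split_text(text, chunk_size=40):
    # Crude fixed-size character splitter; the real CharacterTextSplitter
    # also respects separators and chunk overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def embed(text):
    # Toy bag-of-words "embedding"; real pipelines use model-based
    # embeddings such as HuggingFaceEmbeddings
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def similarity_search(query, chunks):
    # What a vector store does at query time: embed the query, compare
    # it against every stored chunk, and return the closest chunk
    return max(chunks, key=lambda chunk: cosine(embed(query), embed(chunk)))

text = ("The president spoke about the economy. "
        "He also nominated Ketanji Brown Jackson to the Supreme Court.")
chunks = split_text(text)
print(similarity_search("Ketanji Brown Jackson", chunks))
```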
Info
Channel: Patrick Loeber
Views: 87,646
Keywords: Python
Id: LbT1yp6quS8
Length: 15min 19sec (919 seconds)
Published: Sun Apr 09 2023