The easiest way to work with large language models | Learn LangChain in 10min

Video Statistics and Information

Captions
Is LangChain the easiest way to interact with large language models and build applications? It integrates with various LLM providers, including OpenAI, Cohere, Hugging Face, and more. If you want to create a question-answering or text-summarization bot over your own documents, no problem: LangChain has you covered. LangChain supports a lot of different document loaders and various retrieval methods, including the recent ChatGPT retrieval plugin, and LangChain memory will keep your chat history and can even summarize it. The most exciting features of LangChain are chains and agents: you can chain various LLMs together and use your language models with a suite of tools like Google Search, a Python REPL, and more. It also recently integrated with ChatGPT plugins, which you can try out right away. And I forgot to mention that LangChain is completely open source and a community effort. I'm going to walk you through some code and the docs that will help you get started with LangChain right away.

Okay, let's take a look at the code. You will need to install the required packages, import them, and define your API keys. You will at least need the OpenAI API key. Cohere is another LLM provider, if you want to use Hugging Face models you will need a Hugging Face Hub API token, and SerpAPI is for Google Search.

First of all, as I mentioned earlier, LangChain integrates really well with many LLM providers. Here are some examples: I loaded the ChatGPT model, the GPT-3 model, the command-xlarge model from Cohere, and the FLAN-T5 model from Hugging Face Hub, and gave each of them the prompt "how to be happy". ChatGPT returns this message; note that with the chat model we can specify whether each message is a HumanMessage, an AIMessage, or a SystemMessage. Because we want to ask a question, it is a HumanMessage, and you can get the text right here; it looks pretty reasonable to me. GPT-3 provided eight tips, and Cohere also returned a reasonable answer. FLAN didn't do a good job; it just said "laugh at yourself". (A rough code sketch of this setup appears below.)

Next up, if you want to do question answering over an external document, LangChain makes it super easy. There are many document loaders: if your data lives in a text file you can use the text loader, and there are loaders for Notion, PDF, HTML, and more. I have a local .txt file, so I use the text loader, then create a vector store index, and when I query it I get the answer to my question. Basically, LangChain breaks your document into chunks and creates an embedding vector for each chunk of text. When you ask a question, the vector database and LangChain perform a semantic search to find the chunks whose vectors are most similar to the question vector, retrieve those chunks, and add that text to the prompt so the language model has the most relevant context to look at. (A sketch of this step also appears below.)
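Here is a minimal sketch of the provider setup described above, based on the early-2023 langchain API shown in the video. The model names and environment variable placeholders are illustrative assumptions; newer langchain releases split these integrations into separate packages.

```python
# Minimal sketch: loading several LLM providers with the early-2023 langchain API.
import os

from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI, Cohere, HuggingFaceHub
from langchain.schema import HumanMessage

os.environ["OPENAI_API_KEY"] = "..."             # required for the OpenAI models
os.environ["COHERE_API_KEY"] = "..."             # only if you use Cohere
os.environ["HUGGINGFACEHUB_API_TOKEN"] = "..."   # only if you use Hugging Face Hub

prompt = "How to be happy?"

# Chat model: messages are typed as human / AI / system messages.
chat = ChatOpenAI(temperature=0)
print(chat([HumanMessage(content=prompt)]).content)

# Plain text-completion models share the same call style.
gpt3 = OpenAI(model_name="text-davinci-003")
cohere = Cohere(model="command-xlarge")
flan = HuggingFaceHub(repo_id="google/flan-t5-xl")
for llm in (gpt3, cohere, flan):
    print(llm(prompt))
```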
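And a sketch of the document question-answering step. The file name is a placeholder, and the assumption here is the default setup of that era (OpenAI embeddings plus a Chroma vector store behind VectorstoreIndexCreator).

```python
# Question answering over a local text file: the index creator chunks the
# document, embeds each chunk, and does a semantic search at query time.
from langchain.document_loaders import TextLoader
from langchain.indexes import VectorstoreIndexCreator

loader = TextLoader("my_document.txt")   # hypothetical local file
index = VectorstoreIndexCreator().from_loaders([loader])

# The most similar chunks are retrieved and stuffed into the LLM prompt.
print(index.query("What is this document about?"))
```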
Another amazing feature of LangChain is that it can help you keep your chat history, and there are various options for how you want to keep that memory. For example, the conversation buffer window memory keeps a certain number of conversation rounds in the prompt; you can define how many rounds you would like. Here I set k = 1, meaning that only the latest round of conversation is included in the prompt. You can also use a summary memory, which will summarize all the previous conversation for you. This is especially helpful if you want to save on tokens or avoid exceeding the token limit of the prompt.

Here's a template we can use: "The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not." Then we include the summary of the conversation from the summary memory, the current conversation from the conversation buffer window memory, and finally the human input, to which the AI responds.

Here's an example. I said "hi"; there is no conversation summary yet because the chat just started. Then I said "can you tell me a joke?", and the summary became "the human greeted the AI and the AI responded, offering help." The AI answered, "Sure, what did the fish say when it hit the wall? Dam!" Then I asked, "Can you tell me a similar joke?" Now the summary became "the human greeted the AI and the AI responded, offering help; the human asked the AI to tell a joke, to which the AI replied with a joke about a fish hitting a wall." You can see the current conversation no longer contains the initial "hi" exchange, because I set k = 1 and only keep the previous round of conversation. The AI responded, "Sure, what did the fish say when it swam into a concrete wall? Dam it!" It is a similar joke, so the AI did have that context in mind when generating the response. (A memory sketch follows at the end of this passage.)

Chains are a very important concept in LangChain. A chain connects different LLM calls together, which means the output of one LLM can become the input of a second LLM, and you can keep chaining them. Chains can also be wired in different ways; for example, if you want the outputs of two chains to be the inputs of a third chain, that is possible too, so it is very customizable. Let's look at this example. We want to ask for a good name for a company that makes a certain product. We use the LLMChain module to define a chain and call chain.run("colorful socks"), asking the AI for a good brand name for a company that makes colorful socks: "Vibrant Hues Socks", not bad. The second prompt could be "write a catchphrase for the following company", where the company name is the output of the previous chain. With a simple sequential chain we can link those two together, using the output of the first chain as the input of the second chain, and then run the overall chain on "colorful socks". The AI now proposes a different name, "Rainbow Socks", and a catchphrase for it: "Put a little color in your step with Rainbow Socks." So that is how chains work. (A chain sketch follows the memory sketch below.)
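A minimal memory sketch, assuming the early-2023 langchain API. The video combines a window memory (k = 1) with a summary memory in one custom prompt; here each is shown separately, which is enough to see how the chat history is carried between calls.

```python
# Conversation memory: window memory keeps only the last k rounds, while
# summary memory compresses everything said so far to save tokens.
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferWindowMemory, ConversationSummaryMemory

llm = OpenAI(temperature=0)

# Keep only the latest round of conversation in the prompt.
window_chain = ConversationChain(
    llm=llm,
    memory=ConversationBufferWindowMemory(k=1),
    verbose=True,
)
window_chain.predict(input="Hi there!")
window_chain.predict(input="Can you tell me a joke?")

# Or summarize the whole history instead of storing it verbatim.
summary_chain = ConversationChain(
    llm=llm,
    memory=ConversationSummaryMemory(llm=llm),
)
summary_chain.predict(input="Can you tell me a similar joke?")
```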
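And a sketch of the two-step company-name example: the first chain proposes a name, and a SimpleSequentialChain feeds that name into a second chain that writes a catchphrase. The prompt wording is paraphrased from the walkthrough above.

```python
# Two LLM chains composed so the first chain's output becomes the second's input.
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain

llm = OpenAI(temperature=0.9)

name_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["product"],
        template="What is a good name for a company that makes {product}?",
    ),
)

catchphrase_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["company_name"],
        template="Write a catchphrase for the following company: {company_name}",
    ),
)

overall_chain = SimpleSequentialChain(chains=[name_chain, catchphrase_chain], verbose=True)
print(overall_chain.run("colorful socks"))
```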
Another very important concept is agents. You can use various LLMs with different tools. What do I mean by tools? Here's an example where I use SerpAPI for Google search and llm-math for math problems. We initialize our agent with those two tools and choose which agent type to use; zero-shot-react-description uses the ReAct framework to determine which tool to use based on the tools' descriptions. As an example, we can ask: "Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?" The agent chain first uses the GPT-3 model to rephrase and understand the question: it needs to find out who Leo DiCaprio's girlfriend is and then use a calculator to solve the equation. So it does a Google search, finds the girlfriend's name, decides "I need to use a calculator to solve the problem", and goes to llm-math to do the calculation; now it knows the final answer, which is about 3.7.

Here's another example where I give it the Python REPL tool. Instead of using the calculator, it uses the Python REPL: you can see it prints out Python code and actually executes it. Let's check whether it's correct... yes, it looks correct. There are many tools you can give the agent access to; they are optional, and the LLM decides whether and which ones to use. So that's the gist of chains and agents. (A sketch of the agent setup appears after this transcript.)

In this last example I'd like to talk about ChatGPT plugins, because everyone is excited about them, and LangChain is moving fast: it has already added ChatGPT plugin support through the AIPluginTool, using exactly the plugin specification that OpenAI uses, which is pretty exciting. Here is an example with the Klarna shopping plugin; the site has different categories and sells different products. We ask what t-shirts are available on this website. The LangChain agent's first action is to look at the Klarna plugin, which returns the plugin's OpenAPI description. The next action is to make a request to the website, which returns products directly from the site; then comes the thought "what are the available t-shirts", and the final answer lists some men's t-shirts. Looks pretty nice. (A plugin sketch also appears after this transcript.)

So this video covered the very basics of LangChain. There are many, many other features, like model comparison, evaluation, and more. It's a very exciting project, and I hope you will give it a try. Let me know if you liked the video. Thank you, bye!
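For reference, a minimal sketch of the agent example from the walkthrough, again assuming the early-2023 langchain API; the SerpAPI key setup and exact tool-name strings may differ in newer releases.

```python
# ReAct-style zero-shot agent choosing between Google search (SerpAPI) and a
# calculator tool. Swapping "llm-math" for "python_repl" gives the Python REPL
# variant shown in the video (tool names from the same early release).
import os

from langchain.llms import OpenAI
from langchain.agents import load_tools, initialize_agent

os.environ["SERPAPI_API_KEY"] = "..."   # needed for the search tool

llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)

agent = initialize_agent(
    tools,
    llm,
    agent="zero-shot-react-description",
    verbose=True,
)
agent.run(
    "Who is Leo DiCaprio's girlfriend? "
    "What is her current age raised to the 0.43 power?"
)
```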
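And a heavily hedged sketch of the ChatGPT-plugin example: AIPluginTool and the "requests_all" tool name follow the langchain docs from around the time of the video (older releases exposed a single "requests" tool), and the Klarna manifest URL is the one those docs used. Treat this as illustrative rather than exact.

```python
# Loading a ChatGPT plugin manifest as a tool and letting an agent call its API.
from langchain.chat_models import ChatOpenAI
from langchain.agents import load_tools, initialize_agent
from langchain.tools import AIPluginTool

# The plugin manifest describes the API; the requests tools make the actual calls.
plugin = AIPluginTool.from_plugin_url(
    "https://www.klarna.com/.well-known/ai-plugin.json"
)

llm = ChatOpenAI(temperature=0)
tools = load_tools(["requests_all"]) + [plugin]

agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
agent.run("What t-shirts are available on Klarna?")
```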
Info
Channel: Sophia Yang
Views: 31,543
Id: kmbS6FDQh7c
Length: 9min 41sec (581 seconds)
Published: Sun Mar 26 2023