Getting Started with LangChain | JavaScript Tutorial #1

Video Statistics and Information

Captions
Hello and welcome back! In this video you're going to learn how to build apps using LangChain, a cutting-edge framework that lets you use language models like ChatGPT in a whole new way. Language models are awesome, but they do have limitations: they can only respond to simple prompts, and they might not know much about your specific domain, business, or your own data. LangChain solves these problems by adding two superpowers to your language models. First, it makes them data-aware: you can connect your language model to any data source or knowledge base you want, like a database. Second, it makes them agentic, which means they can use tools and interact with their environment. In this series I'm going to show you how you can use LangChain to create amazing AI-driven apps. LangChain has support for both Python and JavaScript/TypeScript; in this video we're going to focus on the JavaScript version of the framework, and by the end of it you'll be able to create your own AI assistant that can search the web for the most up-to-date answers to your questions.

Let's talk about the prerequisites. This video is beginner-friendly, and you only need a basic understanding of JavaScript to follow along. Since we'll be using Node in this video, we need to install Node on our machines, so head over to nodejs.org and install the LTS version of Node. You will also need a code editor; in this video I'll be using VS Code. You can find the links to both Node and VS Code in the description of this video. Then go ahead and create a new folder on your PC and open that folder in VS Code.

We now need to instantiate a new Node project. We can do this by opening the terminal (Terminal > New Terminal), and in the terminal we need to run the following command: npm init -y. After executing this command you should see a package.json file in your folder. Now that we have our Node environment set up, we need to install LangChain, so again in the terminal we need to run the following command: npm install -S langchain. This will install all the packages and dependencies for using LangChain.

Now that we've installed LangChain, we can start building our first application. Our first step is to decide on the language model that we'd like to use. LangChain offers integration with several popular LLMs, including OpenAI, Azure OpenAI, Hugging Face, Cohere, Replicate, etc., and LangChain is constantly adding additional support to the framework. Because we want to have a look at extending the core functionality of ChatGPT, we'll be integrating with the OpenAI LLM.

Let's create a very basic app that will generate a unique business name based on the business's description. In the root of our project, let's create a new file and call it demo.js. The first thing we need to do is import the LLM that we want to use in our project. At the top of this file I'll type import OpenAI from langchain/llms/, and after typing the slash we can see a drop-down list appear with all the different LLM integrations: Cohere, Hugging Face, OpenAI, Replicate, etc. In our scenario we want to select openai, and from openai we are importing the OpenAI class. After importing the OpenAI class, we need to create an instance of that class. We can call this whatever we want; I'll just call it model and set it equal to new OpenAI, so we are creating a new instance of this OpenAI wrapper. OpenAI takes in an object as a parameter, and this object requires two properties to be passed to it: the first is our OpenAI API key, and the next is the temperature. In order for us to integrate with OpenAI, we need to provide it with an OpenAI API key, which we'll generate next. The temperature property tells the AI how creative it's allowed to get with its answers: a value of zero means it needs to be strict with its answers, so no creativity is allowed. A temperature of 1
represents full creative freedom, and because we want the AI to be creative in generating business names, I'll give it a value of 0.9.

Let's go ahead and get our OpenAI API key. To do that, go to platform.openai.com and create a new account; after creating your account you should be presented with a screen like this. From the top right corner, click on Personal and then click on View API keys. From here, click on Create new secret key. You can then give your key a name, like "langchain-demo", click on Create secret key, and then copy the key. Back in our project, we can paste that key in between these quotes. Please note that I'm going to delete this key after this recording is done, so please use your own key.

Now that we've instantiated a new instance of the OpenAI class, we can interface with this model, so let's test that out. On the model instance we can hit period to see all the available functions on this object. For now I'll just select the call method, and within call we can pass in our prompt, something like "What would be a good company name for a company that makes colorful socks?". What this line of code will do is pass this prompt to the OpenAI API, and it will return a response from the model. I'm just going to close this window a bit so we've got some more room. This call function returns a promise, which we need to await, and we also want to assign the response to a variable, so I'll type const res (for response) equals await model.call. Let's go ahead and write the response to the console.

We can now test this out by going to our terminal and typing node followed by the name of the file, which we've called demo. When I execute this, I'm getting this "cannot use import statement outside of a module" error. We can resolve that by going to the package.json file, and then, perhaps just below the license value, we can add a
new value: "type": "module". We can save that and try to run this again. After running this, we get a response back from the model, something like "Funky Footwear Socks". This means our integration with OpenAI is working.

It's best practice to store private values like this key in an environment variable file, so let's set that up real quick. For this we need to install a new package: in the terminal we can type npm install dotenv. This will allow us to use environment variable files in our project. Then, at the root of our project, we can create a new file called .env. In the .env file we can now specify the variable name, which I'll call OPENAI_API_KEY, and I'm just going to cut the key from demo.js, paste it in the environment variable file, and save. Something to note: the name of this variable is important, as LangChain will automatically look through the environment variables for this specific variable name. That means we no longer have to populate the value in the constructor, so I'll just remove it completely, and LangChain will automatically try to find that value in the environment variables. What we need to do then is import a few things from dotenv: we need to import * as dotenv from "dotenv", and then on dotenv we need to call .config() with parentheses, and we can save this. If we run this again, the model should still work. So in the terminal let's run node demo, and we got a response back, "Fancy Footwear". Right,

next let's talk about prompt templates. What prompt templates allow us to do is take the input from the user and reformat it in a specific way that our model will understand. As an example, in this application we would typically want the user to provide only the description of the company, like "makes colorful socks", in some input field on the front end. We then want to take this input and format it into a prompt string like this
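(for illustration, here is a plain-JavaScript sketch of that reformatting step — the formatPrompt helper and the exact template wording are illustrative stand-ins for what LangChain will do for us):

```javascript
// Illustrative sketch (not LangChain itself) of turning raw user input
// into a full prompt string via a template with a {product} placeholder.
const template =
  "What would be a good company name for a company that makes {product}?";

function formatPrompt(template, values) {
  // Replace each {name} placeholder with the matching value.
  return template.replace(/\{(\w+)\}/g, (_, name) => values[name]);
}

const formatted = formatPrompt(template, { product: "colorful socks" });
console.log(formatted);
// → What would be a good company name for a company that makes colorful socks?
```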
and this will ensure consistency across all our users. Luckily, LangChain makes it very easy to construct these templates. What we can do is, at the top of our script, import PromptTemplate from langchain/prompts. Then we can specify a new variable, which I'll just call template for now, and assign this prompt string to the template variable instead. We then replace the text that should be dynamic input from the user with a kind of variable or placeholder. We can do that by adding curly braces, and within the curly braces we can give this thing a unique variable name, something like product.

Now that we have this template, we can create a new instance of the PromptTemplate class: const promptTemplate equals new PromptTemplate, which takes an object as input. PromptTemplate needs two properties: the first is the template, which we also called template up here; the second is inputVariables, which is an array of values. In inputVariables we can specify the list of variable names; in this template string we only have the one called product, and that's what I'll pass in here.

We can now have a look at what this will do to the template string by typing const formattedPrompt equals promptTemplate.format. Format takes in an object, where the key is the variable name, which we called product, and the value for product is "colorful socks". This format function returns a promise, which we need to await. We can now write the formatted prompt to the console to see its value. Let's go ahead and run this; for now I'm just going to comment out this other code over here. In the terminal I'll run node demo, and now we can see the formatted string. Great, we now have a reusable template which takes in dynamic values.

Back in our code, I'm
going to remove this console.log, add back in the creation of our model, delete this commented-out code here, and also remove the formattedPrompt code over here, so now we're only left with the prompt template as well as the model.

Up until now we've been interfacing with the prompt template and the model directly, but in LangChain you want to take these two components and chain them together, and fundamentally that is how LangChain works: we create several components and then we use LangChain to bring it all together. To create a chain, all we have to do is, at the top of our code, import LLMChain from langchain/chains. Note that there are many different types of chains that you can use with LangChain, and we'll be having a look at quite a few of them in this series, but because we are simply taking a prompt and passing it to an LLM, we'll be using the LLMChain class. Below our model definition I will create a new instance of the LLMChain. The LLMChain takes in an object as input: it's expecting a property for the llm that we want to use (our LLM was defined as model up here), and secondly it needs the prompt, and for the prompt value we'll pass in our promptTemplate.

Previously we called the call method on the model directly and passed the prompt to it, but what we've done now is create a new chain and tie our model and the prompt template together using this chain. So what we can do now is call the chain's call method instead. This takes in an object as a parameter, and within this object we need to pass in the value of the variable we called product, followed by a value like "colorful socks". Call returns a promise which we need to await, and we'll assign the response to a variable like so.

If we just have a look at the code again: we defined our prompt
template using this template string and declared a variable called product; we then instantiated a new instance of the OpenAI class; we then used the chain to tie the model and the prompt together; and now we are calling the call method on the chain, passing in a value for that variable. Let's console.log this response and run it. In the terminal I'll just clear this for now and run node demo. After running this we receive feedback from our chain, and it's given us back this text, "Rainbow Socks Co.". And there you go — you've now created your very first LangChain application!

So let's start talking about the real reason you're here: LangChain adds some advanced functionality to our models by introducing the concepts of agents and tools. Let's first talk about agents. LangChain uses agents to let our LLMs determine which action to take next. This action can be to use a tool to fetch information or to interact with its environment (like writing files, etc.), or it can be to take the output from one LLM and pass that to another LLM for further processing. We also have the concept of tools. Tools allow our LLM to perform some advanced functions: a tool is there to perform a specific function and extends the capabilities of the LLM. An example of a tool is the ability for the LLM to go online and perform Google searches.

Let's put agents and tools to the test. We'll create an application which uses OpenAI as the LLM, and we'll give our LLM the ability to go online and Google information. We will also provide our LLM with a calculator tool to help it perform accurate calculations. In the root of our project, let's create a new file for this; I will call it agent.js. I'm actually going to close these other files, as we no longer need them. At the top of the file let's import our environment variables, then let's also import OpenAI from
langchain/llms/openai. Then let's create a new instance of our model: I'll call this variable model, equal to new OpenAI, and in the parameter list we'll set the temperature to zero. I'm setting the temperature to zero because we will be asking our model to perform mathematical calculations, and we don't want it to get creative. Besides the model, we also want to specify the list of tools that LangChain should make available to the model. We can do that by creating a new variable called tools, which is an array of values. What I want to do is provide a tool to our model which will allow it to go online and search for information using Google; secondly, I also want to provide a calculator tool to our model. We need to import these tools from LangChain. We can do that by importing from langchain/tools: what I want to import here is SerpAPI. SerpAPI is one of the many tools that's been made available to LangChain for browsing the internet. We also want to import the calculator, which we can get from langchain/tools/calculator; from calculator we want to import Calculator.

Then, in the tools array, we can specify the list of all tools that should be made available to our model. The first tool that we want to make available is an instance of SerpAPI. SerpAPI takes in a couple of parameters: the first value is the API key for SerpAPI, so let's go and get that value. Go to serpapi.com and register an account; after logging in you should see your private API key over here, so go ahead and copy this key. Then, back in your project, go to the .env file and create a new variable called SERPAPI_API_KEY, and add your API key over here. Back in our agent.js file we can provide this API key by typing process.env.SERPAPI_API_KEY. SerpAPI takes a second attribute as well, which is an
object, and for this object we can simply specify an hl value of "en" (English) and a gl value of "us" (United States). Optionally you can also specify the location, which will assist the agent in performing Google searches relevant to your location; I'll just leave this blank. The second tool that I want to make available is a new instance of the Calculator, which doesn't take in any parameters.

Now that we have our model as well as the list of tools, we can get LangChain to trigger the agent to manage the input to our model and to assign tools when the model needs assistance to get the answers. In order to execute our agent we need to import something from LangChain, so at the top of our code we'll import initializeAgentExecutorWithOptions, which we can import from langchain/agents. Below tools we can create a new variable called executor, which will await initializeAgentExecutorWithOptions. This takes in a few parameters: first the list of tools, then the model, then we need to pass in an object with a property called agentType, which we will set equal to "zero-shot-react-description". We can see the different agent types in the LangChain documentation. In our case we're using zero-shot-react-description, and if we look at the documentation, it says that if you're using a text large language model, first try zero-shot-react-description; if you're using a chat model, try chat-zero-shot-react-description; and if you're using a chat model and want to use memory, try chat-conversational-react-description. In our example we're not really implementing a back-and-forth chat conversation; this is really just a once-off: we'll ask it a question, it will go online, get the answer, and come back to us, and that's why I'm using this specific agent type.

Back in our code, after we've instantiated our agent, I'll just write a message to the console to say that we've loaded the agent, and we can now pass a prompt to our
application. What we can do is call our executor: on executor we've got the call method. The call method takes an object as input, and we'll pass in a property called input followed by our prompt. I'll actually use the example from the LangChain documentation, which is "Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?". The call method returns a promise which we need to await, and I'll assign the response to a variable called res. Let's go ahead and write this to the console: I'll say console.log(res.output). We also need to install the serpapi package, so in the console let's run npm install -S serpapi.

After that's installed, we can run our file by typing node agent, which will run the agent file. We got the feedback "loaded the agent", and after a few seconds we get this response back saying that Harry Styles is Olivia Wilde's boyfriend, and it gives us his current age raised to the 0.23 power. This is very cool: our model was able to go onto Google, fetch this information from the internet, and then use the calculator tool to perform this calculation.

We can also see the processing of this agent in action by specifying the verbose property on the executor, and I'll set verbose to true. Let's clear the console and run this script again. Setting verbose to true is like a debugging feature: in the console we can now see each and every step that the agent is executing. After loading the agent, it tries to run the chain with the input that we provided; then it passes this prompt to our model; it then determines that it needs to go online and use the search tool to get this information; we can then see some of the results that came back from Google; and after getting the information about Olivia Wilde's boyfriend's age, the agent determines that it now needs to use the calculator tool to perform this calculation. The verbose property
is available on most of the components in LangChain, and it's a brilliant way of troubleshooting and debugging your application. I'm actually going to remove it for now.

Next, let's talk about memory. Up until now our interaction with the application has been quite straightforward: we pass in a prompt, get a response, and then the session terminates, so basically our application is stateless. In LangChain we can introduce the concept of memory to our models, so that our model can remember our previous interactions. A good example of this would be a chatbot, where we want the model to remember the previous messages that we sent. So let's have a look at memory in LangChain. I'm going to close this file, and in the root of our project I'll create a new file; I'll just call it memory.js. At the top of the code we need to import our environment variables; then we will import OpenAI from langchain/llms/openai. We also want to import BufferMemory from langchain/memory, and since I want to create a simple little chatbot, we'll also import ConversationChain from langchain/chains.

We first need to create our model, which I'll call model, equal to a new instance of OpenAI. If you want, you can specify the temperature, but I'll just leave it blank in order to use the default values. In order to use the buffer memory, I'll create a new variable called memory, which I'll set equal to a new instance of the BufferMemory class. We can now define our chain as well, by creating a new variable called chain, which we'll set equal to a new ConversationChain. ConversationChain takes in an object with two properties: for the llm we'll pass in our model, and for memory we'll pass in our memory object.

Let's test this out. We can now call chain.call. This takes an object as input with a property called input; I'll pass in something like "Hi, I'm Leon". The call method returns a promise, which we need to
await, and I'll assign it to a variable called response1. Let's write this to the console to see what the model comes back with. Back in the terminal, I'm going to clear all of this, and we can now run node memory. After a few seconds we get this response back saying "Hi Leon, I'm an AI. Nice to meet you! What brings you here today?". In our previous examples this would have been the end of the whole chain, and there would be no concept of memory and no chance of the model remembering who we were. But let's see if the model can remember my name. I'll create a new variable called response2, which will await chain.call, and this time we'll ask it "What is my name?". Let's also write this to the console: console.log(response2). Let's test this out by running node memory. The first response is the "Hi Leon, nice to meet you" message, and the second response says "You just told me your name is Leon". This means the model was able to retain information within the session, and it was able to recall my name using memory.

What we can also do is stream the response from the model as it is generated. Up until now we had to wait a few seconds for the response to come back, but we can use streaming to show the response as the model is generating it. Let's have a look at that. I'll create a new file called stream.js. In stream.js we'll also import our environment variables, and let's also import OpenAI from LangChain. Let's go ahead and create our model by calling new OpenAI. This takes in an object as input, and what we can do is set the streaming property to true. When streaming is set to true, we can also specify a callbacks property, which is an array of handlers. I'll just create one object here with a method called handleLLMNewToken, which takes a token as input; within this function we can then call process.stdout.write and pass in the token. Let's see what happens if we call the model now: I'll await model.call and pass in "Write a song about sparkling water", and let's save this. Let's go ahead and run this file by typing node stream and pressing enter. After doing this you should see the response streaming back to us as the model is generating it. As you can see, the response is now being streamed back to us.

I think this concludes the first video in the series. If you found this helpful, please consider subscribing to my channel, and please like this video. In the next video we'll take what we've learned so far to create our very first chat model. I'll see you in the next one. Bye-bye!
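As a rough sketch of the streaming pattern described above, here is a plain-JavaScript simulation that runs without calling the OpenAI API: the fakeStream generator is an illustrative stand-in for the model, while handleLLMNewToken and process.stdout.write mirror the callback shape used in stream.js.

```javascript
// Illustrative stand-in for a streaming LLM: yields the response one token at a time.
function* fakeStream(text) {
  for (const word of text.split(" ")) yield word + " ";
}

// Mirrors the handleLLMNewToken callback from the video: write each token as it arrives.
let streamed = "";
const handleLLMNewToken = (token) => {
  streamed += token;            // collected only so we can inspect the result afterwards
  process.stdout.write(token);  // what the video's callback actually does
};

for (const token of fakeStream("Bubbles rise and sparkle bright")) {
  handleLLMNewToken(token);
}
process.stdout.write("\n");
```

With the real model, the same handleLLMNewToken callback is invoked by LangChain for each token as it is generated, so the console fills in word by word instead of all at once.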
Info
Channel: Leon van Zyl
Views: 13,928
Keywords: langchain, langchain tutorial, langchain javascript tutorial, langchain explained, langchain guide, langchain node tutorial, langchain agent, langchain javascript, how to build AI apps, chatgpt, gpt 4, how to build a chatbot, chatbot tutorial, langchain for beginners, langchain quickstart
Id: W3AoeMrg27o
Length: 26min 25sec (1585 seconds)
Published: Wed May 10 2023