Building a ChatGPT-like Chatbot using LangChain and Hugging Face || Step by step LangChain tutorial

Video Statistics and Information

Captions
Hi all. In the last video we talked about the capabilities of LangChain, what it can do, and why you should care about it. In this video we are going to use those capabilities to build our own conversational bot, and we shall also build a ChatGPT-like interface to use that bot. So let us get started.

First of all, there are certain libraries you need to install to build this conversational bot: huggingface_hub, transformers, langchain and chainlit. I have already installed them in my environment, so I am not going to reinstall them. Chainlit is a Python framework that enables you to create conversational interfaces. To test whether Chainlit has been set up successfully in the environment, we run the command chainlit hello. Once I run this command, it opens a chat interface much like ChatGPT's, where we can pass our messages and the AI assistant responds. If I type a short greeting, you will see that, since we do not have any LLM set up behind this conversational environment yet, it simply returns whatever I passed as input and shows a message confirming that the Chainlit installation is working.

Let us head back to the notebook and import the libraries we need: chainlit, and from LangChain, HuggingFaceHub, PromptTemplate and LLMChain. Just as in the previous video, we use getpass to set our Hugging Face API token; I have already set it here, and as you know, when you enter the token it appears as asterisks or dots in order to keep it private.

The prompt template is one of the elements of LangChain that is essential for building applications based on large language models. We talked about prompt templates in the last video; if you haven't watched it, I'll attach the link above as well as in the description, so make sure to check it out after this video. The prompt template defines how the model should interpret the user's question and in what context it should answer it, so it is where you instruct your model how to answer the queries you pass in.

Next we are going to set up our conversational model. For this we can use any of the models available on the Hugging Face Hub. If you go to the Models section on Hugging Face and filter by text generation, because LangChain supports text-generation models through the Hugging Face interface, you will see a large number of existing models you could use. Sorting by most downloads shows the models used most frequently; for example, tiiuae/falcon-7b is a model with 7 billion parameters. Since we are running this on a CPU, and for the purposes of this tutorial, we are going to use a smaller model so that it runs faster: a version of GPT-2, specifically the GPT-2 Medium based model, which has around 355 million parameters. Certainly, a Falcon-based model with a much higher number of parameters would give you far better and more relevant results than this GPT-2-based model, so you can play around with these models and compare the results you get.

I create the conversational model by passing my Hugging Face API token and the repo ID of the model, and I set the parameters for the model: a temperature of 0.8 and a maximum of 200 new tokens. Temperature ranges between 0 and 1: 0 means you get completely deterministic output, whereas values towards 1 give you non-deterministic, probability-driven output, so with this setting the output written by the model will be probabilistic.
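A minimal sketch of this setup, assuming the LangChain API as it was around the time of the video (the top-level HuggingFaceHub import and model_kwargs names reflect the ~0.0.x releases and may differ in newer versions):

```python
# Sketch of the model setup described above (LangChain ~0.0.x style).
from getpass import getpass

from langchain import HuggingFaceHub

# The token is entered interactively and shown only as dots/asterisks.
huggingfacehub_api_token = getpass("Hugging Face API token: ")

# GPT-2 Medium (~355M parameters) keeps things fast on a CPU; a larger repo
# such as "tiiuae/falcon-7b" would give more relevant results but runs slower.
model_id = "gpt2-medium"

conv_model = HuggingFaceHub(
    huggingfacehub_api_token=huggingfacehub_api_token,
    repo_id=model_id,
    model_kwargs={
        "temperature": 0.8,     # 0 = deterministic, towards 1 = more random
        "max_new_tokens": 200,  # upper bound on generated tokens
    },
)
```

Swapping model_id for any other text-generation repo on the Hub is all it takes to try a larger model.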
Now we create a prompt template. In this template we specify that the model is a helpful AI assistant that makes stories by completing the query provided by the user. We pass this text to PromptTemplate and specify the input variable, query, which is the placeholder where the user's input goes. Then we create a conversational chain using LLMChain, passing the conversational model, the prompt, and verbose set to true.

I already ran this, with the initial query "Once upon a time in 1947". Since our GPT-2-based model completes the story, it presented us with one: the American public discovered that they had been lied to by a television network; they called it the Fox News effect, and soon enough they began to believe that the world was full of invisible entities. So you can see our model has done a pretty good job of creating a story based on the initial prompt we provided. In this way you can play around, create your own prompts and queries, and see the results; with other, more advanced and complex models you will get even more relevant results.

Now that you understand how this LLM conversational chain works, let us create the chatbot interface. For this we use Chainlit, a Python package for creating UIs for chat applications, and to connect it to LangChain we use Chainlit's decorators. I have already created an application with Chainlit, chatbot_demo.py; I'll attach a link to the Chainlit documentation in the description, so go ahead and check it out. This is the environment that Chainlit provides. Because we are going to start a new Chainlit app, I will first close the chainlit hello instance that is still running, so that the port numbers do not clash.

In chatbot_demo.py we provide the model ID and create the conversational model just as before, we provide the same template, and we create the conversational chain with LLMChain, passing the conversational model, the prompt and verbose. All of that remains the same; what differs is how we use Chainlit. We rely on two decorators, @cl.on_chat_start and @cl.on_message. The function under @cl.on_chat_start is the starting point, the entry point, of your chat application: here we build the LLM chain and store it with cl.user_session.set, giving it a name. The function under @cl.on_message runs every time a message is passed from the chat interface: we fetch the LLM chain we stored in the user session, get the response with llm_chain.acall, the asynchronous call, using the asynchronous LangChain callback handler; we can perform some post-processing on the response we receive, and then we send the message back to the interface.
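Putting those pieces together, here is a sketch of what chatbot_demo.py might look like, assuming the LangChain ~0.0.x and mid-2023 Chainlit APIs described above; the template wording is paraphrased from the video, and the message-handling details may differ across Chainlit versions:

```python
# chatbot_demo.py -- sketch of the Chainlit app described above
# (LangChain ~0.0.x and a mid-2023 Chainlit release assumed).
from getpass import getpass

import chainlit as cl
from langchain import HuggingFaceHub, LLMChain, PromptTemplate

# Prompted once in the terminal when the app starts.
huggingfacehub_api_token = getpass("Hugging Face API token: ")

model_id = "gpt2-medium"
conv_model = HuggingFaceHub(
    huggingfacehub_api_token=huggingfacehub_api_token,
    repo_id=model_id,
    model_kwargs={"temperature": 0.8, "max_new_tokens": 200},
)

template = """You are a helpful AI assistant that makes stories by completing
the query provided by the user.

{query}"""
prompt = PromptTemplate(template=template, input_variables=["query"])


@cl.on_chat_start
def on_chat_start():
    # Entry point: build the chain once and keep it in the user session.
    llm_chain = LLMChain(llm=conv_model, prompt=prompt, verbose=True)
    cl.user_session.set("llm_chain", llm_chain)


@cl.on_message
async def on_message(message):
    # Runs for every message typed into the chat interface.
    llm_chain = cl.user_session.get("llm_chain")

    # Depending on the Chainlit version, the message arrives as a plain
    # string or as a cl.Message object.
    query = message.content if hasattr(message, "content") else message

    # Asynchronous call into the chain; the async LangChain callback handler
    # streams intermediate steps back to the Chainlit UI.
    res = await llm_chain.acall(
        query, callbacks=[cl.AsyncLangchainCallbackHandler()]
    )

    # Any post-processing of res["text"] would go here before replying.
    await cl.Message(content=res["text"]).send()
```

Storing the chain with cl.user_session gives each connected user their own chain instance rather than sharing one global object.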
Now, to execute our chatbot application, we run chainlit run chatbot_demo.py -w, additionally specifying the port as 8080. The -w flag tells Chainlit to watch the script and automatically reload the application whenever you make changes to it. It asks for the password, and I enter my Hugging Face API token, which you can create from the Hugging Face portal, as we saw in the last lecture. Once it is running, you will see that the application is available at localhost:8080, so let me go over and open that.

Here I type a short introduction of myself, and the chatbot responds with something along the lines of: I am a software engineer interested in developing high-performance software for machine learning and AI translation, I have been a software engineer since 2009, you can follow me... It simply generates some continuation. If I then ask how it can help me, it takes a little time to load; as you use larger models, computing the response takes a lot more time, but since we are using a GPT-2-based model the responses are fairly quick here. It replies with something like: the best way to get started is to send me a message with a link to your site; once I get a review, I will send you the link to review.

One thing you will notice is that the chatbot we have built lacks context: it does not remember the previous turns of the conversation. To give it that context, we need to use conversation memory. There are two memory types in LangChain for this, ConversationBufferMemory and ConversationBufferWindowMemory; I'll provide a link in the description where you can read about them, and we shall discuss them in detail in another video, to keep things simple here and go step by step.

If you like the content, don't forget to give it a thumbs up, and make sure to subscribe to the channel so you never miss a video. Stay on top of your learning, have a smooth learning experience, bye-bye, and see you in the next lecture.
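As a supplementary sketch (the video only mentions these memory classes without implementing them), here is one way conversation memory could be attached to the chain so that previous turns are fed back into the prompt; the chat_history variable and window size k are illustrative assumptions:

```python
# Sketch: adding conversation memory to the chain (LangChain ~0.0.x API).
from langchain import HuggingFaceHub, LLMChain, PromptTemplate
from langchain.memory import ConversationBufferMemory, ConversationBufferWindowMemory

# Token is read from the HUGGINGFACEHUB_API_TOKEN environment variable here.
conv_model = HuggingFaceHub(
    repo_id="gpt2-medium",
    model_kwargs={"temperature": 0.8, "max_new_tokens": 200},
)

template = """You are a helpful AI assistant.

Previous conversation:
{chat_history}

New query: {query}"""
prompt = PromptTemplate(template=template, input_variables=["chat_history", "query"])

# Keep the entire conversation so far...
memory = ConversationBufferMemory(memory_key="chat_history")
# ...or only the last k exchanges, to bound the prompt length:
# memory = ConversationBufferWindowMemory(memory_key="chat_history", k=3)

llm_chain = LLMChain(llm=conv_model, prompt=prompt, memory=memory, verbose=True)
```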
Info
Channel: datahat -- simplified data science
Views: 577
Keywords: data science, machine learning, data analysis, python, langchain tutorial, langchain, langchain ai, langchain in python, langchain demo, langchain tutorial python, long chain tutorial, langchain prompt, langchain chatgpt, langchain explained, langchain ai tutorial, python langchain tutorial, what is langchain, gpt 3 tutorial, langchain chatbot, langchain coding tutorial, chainlit, chainlit with, chainlist with langchain, chainlit tutorial, chainlit chat, chainlist chatbot
Id: cKjh5ZOWqus
Length: 11min 10sec (670 seconds)
Published: Sat Aug 19 2023