Build an AI Chatbot with Streamlit & OpenAI!

Welcome to the channel. In this video we're going to build our very own chatbot, with Streamlit on the front end and the OpenAI API on the back end.

Before we get started, let's see what we're going to be building in action. I've got the chatbot open here, and I'm going to ask it a few questions to see the final project we'll have as an outcome once we finish this tutorial. First I'll ask the chatbot to tell me a short story, and we can see it's generating a response and streaming it back into the user interface for us. I'll ask another question: give me Python code to add two numbers. Nothing too complicated; we just want to see what kind of output we get, and you can see it streamed out the Python code to add two numbers together.

That's the application we're going to build from start to finish. Over on the left-hand side we've got some chat options: we can select the GPT-4 model if we want, and there are other options such as temperature, which we'll dive into in more detail as we build out the application. Increasing the temperature creates more uniqueness in the output of the underlying model. There's also an option to set the max token length for our chatbot's output. So that's what we're going to build, from developing the front-end user interface all the way to coding the back-end Python code that interacts with the OpenAI API.

Before we get started, I've put together a GitHub repo with multiple branches, so you don't have to start completely from scratch setting up your environment or the project folder structure. I'll put the link down in the description of the video. You can grab the clone command for the starter branch so you don't have to build out the project structure yourself, and there's also the main branch, which has all the completed code for this chatbot. As a matter of fact, I'm going to walk through this step by step so you can see how to do it on your own: I'll copy the clone command, open a terminal, paste the command in, and clone the project into my Downloads folder. I'll rename the folder to starter-project for simplicity and open it in Visual Studio Code (use whatever IDE you prefer; I prefer VS Code).

Now we can see the complete project structure I've put together. I'll briefly outline it, and a lot of this will make more sense as we develop the code. I created a helpers folder with some helper Python files: llm_helper.py is where we're going to put the code for communicating with the back-end OpenAI API (it's blank for now), and config.py is where I store configuration information. It will be a Python class that holds things like references to environment variables, so we don't have to type things out and hard-code values. Next is requirements.txt, which stores a reference to all the underlying Python libraries we're going to need for this project: openai; streamlit, because we're building our user interface with Streamlit; and python-dotenv, because we'll be dealing with environment variables in our development environment as we build this application. Lastly, simple_chatbot.py is our main file, where the core functionality of our project is going to live.

So I've outlined the project structure at a high level. If you've cloned the project, you can follow along from here, and by the time you're done with this video you'll have a completely functioning chatbot. If you're in Visual Studio Code, open up a new terminal (go to the top menu and click New Terminal), because we have a couple of commands to run. Just a heads up: I'm running this in a dedicated Python environment using Anaconda. If you're unsure how to set up your own dedicated Python environment with Anaconda, I've got a short video linked in the description that shows you how to do it fairly quickly. Everything we're going to be coding is in Python; we're not going to be using any other languages.
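For reference, a requirements.txt matching the three libraries mentioned might look like this (versions left unpinned here; pin them in a real project):

```text
openai
streamlit
python-dotenv
```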
So I'm going to type `conda activate streamlit_env` (let me blow the terminal up a little so you can see it on a smaller screen). I have an environment called streamlit_env where I develop any Streamlit-based applications. I'll hit Enter, and we can see from the `(streamlit_env)` in parentheses that we're now in that environment.

Because we need some Python libraries to build this out, we have the requirements.txt I mentioned before, where our Python package dependencies are referenced. That way we don't have to run multiple pip install commands or type out openai, streamlit, and python-dotenv individually; we can just run `pip install -r requirements.txt`, where `-r` lets us reference the requirements file to install our dependencies. I'll hit Enter and it installs them; if you've already got some of these, like openai, installed in your environment, it will just say "Requirement already satisfied". I'll run `clear`, and now the Python packages we need are set up.

Just to make sure everything is running, I'm going to run `streamlit run simple_chatbot.py`, which references our main Python file. It opens up, and of course we get a blank screen; we haven't put any widgets in our application yet. I'll close that out and stop the server.

Now we're going to need an environment file, so I'm going to create a `.env` file. For those who aren't familiar, this is a way to store environment variables in a development environment. We'll call this particular environment variable OPENAI_API_KEY, because we'll need our OpenAI API key; we're just doing some quick setup and configuration that we know we'll need later on. I'll go over to the browser and search for the OpenAI docs. I'm already logged in, so I'll go to my API keys. I already have one that I set up for a demo, so I'm going to create a new key called "demo two" and click Create. Just a heads up: OpenAI has added extra capabilities in the settings where you can restrict what API keys can do; you can set a read-only key, or restrict it to only accessing certain models. Here we just want access to everything in the OpenAI API, so we'll create the secret key and copy it (don't worry, I'll be deleting this key before the video launches). I'll save that value in the `.env` file.

Another heads up: never check your `.env` file into a GitHub repo. Fortunately, if you do check a key in by accident, OpenAI will detect it and disable the key for you, but for future reference, make sure you don't commit the `.env` file. One way to prevent that is to make sure your `.gitignore` ignores `.env` files; I've also got some other entries in there, like `__pycache__` folders for Python caching files, because we don't want to check that type of thing in either.

All right, we've got our environment variable set up to use when we build out our chatbot.
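As a sketch, the `.env` file and the relevant `.gitignore` entries might look like this (the key shown is a placeholder, not a real key):

```text
# .env
OPENAI_API_KEY=sk-your-key-here

# .gitignore
.env
__pycache__/
```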
Now, like I said earlier, we have the config file, where we want to reference certain things in one place and not have to worry about whether we typed them correctly. So I'm going to code this file out. We're going to `import os`, then create a class called `Config`, and inside it create some constants that we'll reference later on when we're actually making calls to the OpenAI API. First we grab the environment variable we created, using the same name we set up in our `.env` file, OPENAI_API_KEY.

Next we're going to create a system prompt. This is a way for us to steer our chatbot to operate within certain constraints; we could just say "you're a helpful chatbot," or whatever we want. I've got some text here that I'm going to paste in so you don't have to watch me type it: "You are a helpful chatbot assistant that can answer questions for users." We kept it pretty simple; you could add anything you want to that system prompt. We'll use the system prompt and the API key later on.

So now we've got some housekeeping done: our environment variables are set up, and our Config class has some constants in it. Now we can start coding our actual chatbot, so navigate to the simple_chatbot.py file. We need to do some imports first. We're going to `import streamlit as st` (st is just a shortcut), and we'll jump into the Streamlit documentation as we build this out, so don't be too concerned if you don't understand certain things that are going on; I'll explain in deeper detail as we go. The other thing we want to do is import our helpers.
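To recap, the config.py we just coded might look like this sketch (the prompt wording is from the video; adjust it as you like):

```python
import os

# Config centralizes values referenced elsewhere, so a typo in an
# environment-variable name or prompt only has one place to hide.
class Config:
    # Read at import time from the environment loaded via .env.
    OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
    # Steers the model; swap in any persona you want.
    SYSTEM_PROMPT = (
        "You are a helpful chatbot assistant that can answer "
        "questions for users."
    )
```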
Actually, we're not going to import the llm_helper yet; we haven't added the methods and functions we're going to need, so we'll add that import when we get to that piece of code. We are going to import our Config class from the config file, and then we need to import the function from the python-dotenv library that loads environment variables.

With our import statements in, at the very top I'm going to call `load_dotenv()`, which loads in our environment variables; that's what the function we imported does. Next we'll set up our API key reference at the very top, and this is where you'll see why I set certain things up the way I did: we're going to reference `Config.OPENAI_API_KEY`. Normally you could just type your API key in here, but that can be error-prone if you copy something incorrectly, so we avoid that by referencing the constant on our class; we never have to worry about whether we typed the API key incorrectly.

Next we're going to set up our page configuration for Streamlit. Here we can set some basic options, like the page title, which we'll set to "Streamlit OpenAI Chatbot", and one more setting: we'll set the initial sidebar state to expanded, so we can see the options we set up. With that done, let's set up the title with `st.title` (again, a quick explanation for anyone who missed it: we're referencing streamlit as st, which is why we're calling st). The page title is what shows up in the tab in your browser, and the title shows on the page itself.

Let's rerun this; something should show up on the page now. To make sure everything's working properly, I'll type `streamlit run simple_chatbot.py`, it opens up, and we can see the title on our page. That's all that's showing right now, so let's go back to our code.

Now we need a way to track the messages our chatbot has generated. One thing Streamlit has is session state, a way for you to store things for the duration of a browser session. We want to make sure the session state is initialized, so we do a quick check: `if "messages" not in st.session_state`. This is part of the Streamlit library; we're checking inside Streamlit's session state to see whether there's a reference to messages at all, and if not, we set `st.session_state.messages` to an empty list.
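The init-if-missing check matters because Streamlit reruns the whole script on every interaction, and only session state survives those reruns. The pattern can be sketched with a plain dict standing in for `st.session_state` (the dict and helper here are illustrative stand-ins, not Streamlit APIs):

```python
# Stand-in for st.session_state, which behaves like a dict that
# persists across Streamlit's script reruns.
session_state = {}

def init_messages(state):
    # Only create the list on the first run, so reruns don't wipe history.
    if "messages" not in state:
        state["messages"] = []
    return state

init_messages(session_state)
session_state["messages"].append({"role": "user", "content": "hello"})
init_messages(session_state)  # a rerun must not clear existing messages
print(len(session_state["messages"]))  # → 1
```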
Now that we have session state set up to store our messages (and again, that's handled by Streamlit for you), let's set up our sidebar navigation. This is where we'll put the options a user has when using our chatbot: which model we're going to use, the temperature we'll pass to the model, and the max token length. I'll type `with st.sidebar:`. The `with` is a way of saying we're operating within the context of the sidebar, so anything we place indented underneath it shows up as part of the sidebar.

Inside it I'm going to use `st.markdown`, because we're going to embed some markdown. If you're familiar with creating markdown files (for example, our README over there is a markdown file), you can use what you're used to. We just want a header up there for "Chat Options", and if we refresh, we can see the chat options header now showing up. Back to our application.

Now let's add some controls and widgets. I'm going to open up the Streamlit documentation for `st.selectbox`, which is a dropdown box (in the completed code there will be a reference to this documentation link). I won't go over everything here; if we scroll down to the example in their documentation, we can see that the options entered there, email, home phone, and mobile, show up in the dropdown. So if you're ever wondering how to use one of the widgets Streamlit offers, that's how you do it.

Let's go back and add a widget of our own. We'll create a variable called `model` to store the model the user selects; we're using GPT-3.5 and GPT-4 as the options. We call `st.selectbox` with the label "What model would you like to use?" and, for the options (I'll put a line break in so it shows up nicely), "gpt-3.5-turbo" and "gpt-4". I'll show you later where to look up these model names if you're not familiar with them.

Now let's add another widget, `temperature`. This is the parameter the model expects that controls how unique or creative the model's response will be. We'll use `st.number_input` with the label "Temperature" (let me close the terminal so we get a better view). The `value` argument is the default; I like to start with 0.7 as the temperature when I'm testing things out with different large language models. Then `min_value` is the minimum the temperature can be set to, 0.1, and `max_value` is 1.0. Temperature values run from zero to one, but in this instance 0.1 is the lowest, since it doesn't make much sense to allow a zero temperature here.

Next, let's add one more widget, like we saw at the start of the video: `max_token_length`. Again this is a number input, and I'll copy the previous one so we're not typing things out for the heck of it. We'll label it "Max Token Length" and give it a default value of 1000. This could go up to something like 8,000 tokens (I know there are models with larger context windows), but we'll stick to round numbers to keep it simple: a `min_value` of 100 and a `max_value` of 1000.

We've now added our widgets to the sidebar, so let's see if we have any errors. Go back, refresh, and our options show up: the dropdown shows gpt-4 and gpt-3.5-turbo (you can also type into it), and the number inputs are there. One thing I do want to add is a step value, so back up in the temperature widget we add `step=0.1`; this is the amount the value changes when you click the plus or minus sign. Refresh again, and now it goes up or down by 0.1.

Okay, we've now coded our sidebar, so let's move into the meat of the application and start coding the real functionality. I'm going to start with a for statement.
What I'm coding right now is a way for our user interface to loop through the messages stored in session state: `for message in st.session_state.messages`. Again, we're storing our messages in session state under the key "messages", which is why we set that up and initialized it at the very top. So if there are messages stored there, meaning we've had a conversation with our chatbot, they should render onto the screen. I'll code this as `with st.chat_message(...)`. This is a widget built into Streamlit that displays messages based on role; there are multiple roles this chat message widget recognizes, such as "user" and "assistant", and you'll see that further down in the code, so I won't go too much deeper into it here. Inside the `with`, we do `st.markdown(message["content"])`. You'll see how messages get written to session state once we get further down in the code, but again: we're looping through session state for any messages we have, displaying a chat message widget for each one with its distinct role, assistant or user, and rendering that message's content, which could be a user message or the actual chatbot reply. Feel free to put questions below in the comments if something's not super clear and I need to clarify a little more; I'm more than happy to answer.

So now we've got our messages displaying, if we have any in session state.
Next, we're going to start prompting the user for their question, before we actually communicate with the model. We'll store the input in a variable called `user_prompt`: we're saying, display the chat input to the user on the screen, and if they type something into it and hit Enter, store it in `user_prompt`. So we write `user_prompt = st.chat_input(...)`. The chat_input is a Streamlit widget that lets the user enter content into a chat box; it's basically an easy way for you to build chatbots without having to build all the controls from scratch. It accepts a placeholder, which we'll set to "What questions do you have?".

Now we need some more setup. If a user has entered something, we want to display the question they're asking on the screen, because right now we're just gathering that input into a variable. So this next bit basically echoes the user's question back: `with st.chat_message("user")`, a user chat message, and, working in the context of that chat message, we output the question. I like to use markdown because we've got a little more control over formatting, so we write the user's question out with `st.markdown(user_prompt)`.

Let's see if this is functioning. Back in the browser, refresh, and one thing to call out is that we now see the chat message box down at the bottom. The `st.chat_input` call is what causes that particular input box at the very bottom to appear; it handles rendering the chat input box and the button all at the same time. Normally, if you're used to building things in React or some other JavaScript framework, you'd have to do a lot of work to build out that control and to disable the button when the user hasn't entered input; all of that is taken care of for you.

Actually, let's code up one more line here and add the question the user is asking to session state. We append to `st.session_state.messages`; because this is a list and Streamlit expects each entry to come through as a dictionary with a role, we append `{"role": "user", "content": user_prompt}`. The role also determines the icon that's output: if the role is assistant, the icon looks like a little robot, and if the role is user, it shows a person's head or face. You'll be able to see a bit of this later once the app is fully functioning. So now when we ask a question, the value is stored in `user_prompt`, we display a chat message showing markdown of what the user asked, and we also store it in session state. All right, let's go back over to the browser.
Let's see if this works: I'll type something random, "who are you", and it just echoes it out on the screen for us. We're not communicating with the OpenAI API yet, so we're not getting anything back; we're just coding up the user interface and the session state that stores the user's questions, so we can see a history of them. If I type another question, "can you write python code", and hit Enter, you can see it echoes anything I type and stores it in session state.

So our user interface is functioning: the sidebar has our options, and we can type things in, hit Enter, and see them appear on the screen. Now we're going to dive into actually communicating with the OpenAI API, which is the bulk of the rest of the application. Back in the code, we'll add one more user interface component. If you remember, at the beginning of the video we showed a spinner while the app was communicating with the back-end OpenAI API. Here we type `with st.spinner("Generating response...")`; again, `with` just means we're operating within a certain context after a widget, because we're basically nesting widgets.

Since we can't communicate with the API yet, let's start coding up our llm_helper. I'll comment out that spinner block for now and go over to the llm_helper file. We need to import some things: we're going to be using OpenAI, so `from openai import OpenAI` imports the Python library we'll use to communicate with the OpenAI API, and we'll also import our Config. Again, that's the beauty of creating the config file: we don't have to fear hard-coding something or having typos.

First we initialize the system prompt. Remember, over in the config file we created the system prompt as a constant in the Config class, because we don't want it embedded in our code; now we reference it with `Config.SYSTEM_PROMPT`, which just pulls that value in for us.

Next we define a quick helper function, which we'll just call `chat`. It has some parameters, because we want to pass the options we outlined into the model when we're communicating with it. The first parameter is `user_prompt`, the question the user is asking; then `model`; then `max_tokens`, with a default of 200 so you don't have to pass a value if you don't want to; and likewise `temp` with a default of 0.7. Then we create an instance of the OpenAI client so we can communicate with the API, and we create what the OpenAI documentation calls a completion.

Let's pull up the documentation and add a reference here. If you're ever unsure how to make a call to a particular API endpoint with a particular model within OpenAI, you can always reference the documentation. I'm basically using the exact same snippet, with some minor changes: import the library, create a client instance to communicate with OpenAI, and then the completion is `client.chat.completions.create(...)`, which basically allows us to create a chat conversation with OpenAI. If you want to make your chatbot a little more robust and allow it to accept more parameters, you can add all of these; I just put in the core parameters that are super important. `max_tokens` obviously controls the output, but it can also help keep costs down: if you built this application for a production environment, you might want to limit the token output so you're not hitting 8K tokens whenever someone communicates with the underlying API. I didn't make `stream` a parameter, because we want our chatbot to stream by default, so I didn't leave that as an option, and then there's `temperature`. You can come in and set as many of these parameters as you want and pass them through; I'm only setting max token length and the temperature. References to all the documentation will be in the final code repo anyway, so you can always clone that if you want the links, or just download fully functioning code if you ran into an issue and couldn't get the code to work.
code to work, you can just download the fully functioning code. So let's finish coding this out: type in `client.chat.completions.create`. The first parameter we're going to set is the model — again, we're passing this through as a parameter because we can communicate with either GPT-3.5 Turbo or GPT-4. Next we do `messages`, and we're going to keep it pretty simple: we create an array, and we're not storing history in this chatbot. We could always set this up so the model kept track of the conversation and built on top of previous exchanges — you could ask one question and keep building context on top of the last one — but we're keeping this chatbot fairly simple: it answers one question at a time. Next we pass a dictionary with `role` set to `system` and `content` set to `system_prompt`. This is our system prompt — it's a way for you to steer the underlying model in a certain direction. For us, we just said it's a helpful chatbot assistant — let me see what we put in here — "You are a helpful chatbot assistant," or whatnot. You could instead say "You are an expert at health and performance," or whatever you want, and steer the chatbot down that particular path. Now, let's see — where do we have `user_prompt`? Okay, let me make sure I didn't type something incorrectly; actually, I'm just going to grab this piece that I know is working. There we go. And for the next part, I already have a piece of it, so there's no point in typing it out completely — again, you can always go view the documentation. Now, the next thing that we're going to do is we're going to set the
temperature parameter for the model — again, we're passing that through as `temp` — and then we set `max_tokens` to equal the max tokens we're passing through. If we don't set it, it defaults to 200; if we don't set temperature, it defaults to 0.7. The last parameter we're going to set is `stream`. This is what gives you the output experience similar to ChatGPT — if you don't enable this, you can't get that same type of user experience, so you have to set it to true if you want that. Let's check the documentation to see what it says `stream` does: if you set it, partial message deltas will be sent, just like ChatGPT — you get chunks of data sent back to you when you set `stream` to true, and as the data comes through, you display it on the screen. The next function we're going to add to our LLM helper helps us with that. I created a helper function for this — if you don't create this function or something similar, whether you put it in your main chatbot code or elsewhere, you will not get the streaming output to work. We're just going to call it `stream_parser`, and it accepts a stream; you'll see this in action. Like I said, when you set this option, your data comes back in chunks, so we need to loop through those chunks as we receive them and display them on the screen. That's what this function does. We say `for chunk in stream`, and then we work with the chunk's choices — and we go with the first one. The API could pass us back multiple results at once, but we only want a simple exchange: ask a question and get one result back. So that's why we're saying
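Putting the pieces described so far together, the `chat` helper might look roughly like this. This is a sketch, not the video's exact code: the system prompt string is an assumption, and the `openai` import is deferred into the function purely so the sketch can be loaded without the package installed — in a real helper module you'd import it at the top of the file.

```python
def chat(user_prompt, model="gpt-3.5-turbo", max_tokens=200, temp=0.7):
    """Send one question to the Chat Completions API and return the stream."""
    # Deferred import so this sketch loads even without the openai package;
    # move it to module level in real code.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    completion = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "You are a helpful chatbot assistant."},
            {"role": "user", "content": user_prompt},
        ],
        temperature=temp,
        max_tokens=max_tokens,
        stream=True,  # send partial message deltas, ChatGPT-style
    )
    return completion
```

The defaults (200 tokens, temperature 0.7) match the fallbacks mentioned in the narration, so callers only pass what they want to override.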
`choices[0]` — because you could ask for more values back. Let me go to the documentation so I can show you — oh, I already have it open. Let's see if I can find that parameter... okay, there you go: the `n` parameter, "how many chat completion choices to generate for each input message." So if you set it to three and ask the underlying OpenAI model a question, it'll pass you three results back; because we didn't set it, it defaults to one. Just a heads up: the higher you set this, the larger your token window needs to be. If you say "hey, create me a long story" with `n` raised and your max tokens set to 2,000, you could run into issues with things being cut off because you hit your token limit. So, back to our code — we didn't set `n`. We could set `n=2` and get two outputs back, but we're not doing that, which is why we take `choices[0]`. Then we say `delta` — this gives us the difference from what was passed back to us in the previous chunk. And then we check that it's not `None` — because the last chunk that's passed back carries a `None` value, and we don't want to display that. So as long as the value that comes back isn't `None`, we want to yield it. `yield` just lets you iterate over a generator — I won't go too deep into that; if you've got questions, I'll leave a reference to what `yield` does in Python. So we say `chunk.choices[0].delta.content` — give us the delta of what was previously passed, and give us its content. I'm going to go back to the documentation again — I'll probably keep jumping to the documentation whenever I comment on something, because I prefer for you all to be able to see what I'm talking about. So we'll go back over here, and if you look here
so here's what it looks like to make a call to the API — the request — and this is what your response looks like. If you look at the response we get back, you see this `choices` field: it comes back as a list, it tells us the index, and then there's a `message` that tells us the role — this would be the assistant replying back — and the content. We're grabbing that content — that's what's going on there, just a heads up. All right, going back over: that's it, that's all there is to building the stream parser. It's just a little helper method I put together that I found pretty useful when coding up a chatbot for streaming. Now, we do need to return the completion. So this is our LLM helper: we have a method that allows us to chat with the OpenAI API, and then we have the stream parser, a little helper method that lets us parse the stream coming back from OpenAI. Now that we've coded that up, let's go back into our simple chat. Remember when I said we weren't going to add that one import statement earlier? Let's add it now: `from helpers.llm_helper import` the `chat` function and `stream_parser` — we're going to use them here. Okay, we've got that all set up, so now we can reference and call those within our code. Let's uncomment this here: again, we're going to create a spinner, and it's just going to say "generating message." Within this spinner is where we're going to do `llm_response` and then call `chat`. Of course, `chat` takes `user_prompt` — we just set the user prompt — and then `model`, which comes from what the user selects in the select box, and then `max_tokens`, which is going to be the max token length that we set here
— do a line break, and then we set the last thing, which is `temp`, and that's going to be `temperature`. So we set all of these — we get all of the values from the chat options in the left-hand chat options section. Now we've made our call to OpenAI, and we get the response back as a stream, because we set that parameter. Next, I'm going to call this `stream_output`, and what we're going to do now is use `st.write_stream`. The beautiful part of using Streamlit to build applications like this — whether it's a production application or just prototyping — is that you get a lot of capabilities right out of the box that you don't have to code yourself. This will, by default, write a stream out to the screen. But again, we have our stream parser, so we call that method inside it and pass in `llm_response`, because it expects a stream to come through; then it iterates through it for us. So we get the stream output here — it's going to stream out to the screen for us, like you're used to seeing with ChatGPT — and it stores the final value as text for us, so we keep that in a variable. Now we're going to call `st.session_state` and then we're going to call messages.
append — and this is where we append the response from the actual model. I'm going to copy some code so I don't have to type it all out; let's add a line break here. Again, up here — where did we do that? — up here you can see we add a message with the role of `user`, and we add the user prompt to the `content` key of that dictionary. Now here, the role is `assistant`, and we're storing the output the model returned to us, and then we display it. If we go back to our app, remember I told you about this icon — you get the little human head because we set the role to `user`, but when we set the role to `assistant`, we see a little robot head instead. A lot of this is done completely behind the scenes for us. Let me go back — what did we call it? Yes, we call it with the system role for the system prompt. Let's continue coding. The last thing is a little tweak I put in, because I was getting duplicate responses displayed on the screen, so I track the last response. Let me copy this code and explain it — and let me close this panel so you can see more of the screen. I store the last response: I grab all the messages stored in session state, index by minus one — because that's the last response back from the underlying model — and then grab the `content` of that entry. That's how I get the last response from the model. The next thing we're going to do is display the response in the chat — we want to display the assistant's, the chatbot's, response. Just like we display what the user is asking at the very top, we're going to be displaying what the
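The chat history being appended to here is just a list of role/content dictionaries. Sketched without Streamlit itself (the `add_message` helper is made up for illustration — in the app these are inline `append` calls on `st.session_state.messages`), the pattern is:

```python
# Sketch of the structure held in st.session_state.messages, shown as a
# plain list so the pattern is visible without a running Streamlit app.
messages = []

def add_message(history, role, content):
    """Append one chat turn; role is 'user' or 'assistant'."""
    history.append({"role": role, "content": content})

add_message(messages, "user", "Tell me a short story")
add_message(messages, "assistant", "Once upon a time...")

# On each rerun the app loops over this list and renders every entry,
# e.g. with st.chat_message(m["role"]).markdown(m["content"]).
last_response = messages[-1]["content"]  # how the video grabs the last reply
```

Indexing with `[-1]` is the "minus one" mentioned in the narration: it always picks the most recently stored message, which is the model's latest reply.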
underlying chatbot is saying to us. I'm going to say `if not` — again, this is just a minor tweak controlling output — so I check whether the last response and the stream output are the same, because if they are, it shows duplicate output on the screen. If they're not the same, I do `st.chat_message`, and this time it's the assistant. Remember, the `with` statement means we're operating in a certain context — now we're operating in the chat message context of the assistant, whereas further up we said it's the user. Again, the role controls the icon displayed for the message: when we say `assistant`, you'll see the little robot head. We do `markdown` and pass in the stream output, so it shows as output. And that wraps it up, so let's see if our chatbot actually works. We'll save, go back to the browser, and refresh. If you recall from the beginning of the video, this is what it looked like. We're going to change this to GPT-4, set the temperature so it's super unique — again, temperature determines how unique the output will be — and max out the tokens, at least for the options we set here (you can go higher than this if you want). So let's see — I'm going to say "write me a short story about a boy and girl" and see what happens. Oh — we got an error. Let's see if we can fix it: "missing required arguments; expected `messages` and `model`, or `messages`, `model`, and `stream` arguments to be given." Let's go track down this particular bug. It shows you the stack trace: we can see line 46 within our simple chatbot, and if we go further down, line 10 in the LLM helper is where we're running into the problem — that's where our bug is
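The duplicate-output guard described above boils down to one comparison. Here it is as plain Python (the function name `should_display` is hypothetical — in the app it's an inline `if` wrapped around the `st.chat_message` block):

```python
def should_display(last_response, stream_output):
    """Render the assistant bubble only if this output isn't already the
    newest message stored in session state, avoiding the duplicate the
    narration describes."""
    return last_response != stream_output

# If the freshly streamed text already matches the stored last message,
# skip re-rendering it; otherwise show it.
show_new = should_display("old answer", "new answer")      # True -> render
skip_dup = should_display("same answer", "same answer")    # False -> skip
```

The duplication happens because the history loop at the top of the script already re-renders every stored message on each rerun; without this check, the just-streamed reply would appear twice.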
hiding. Let's go back over to the code — you're going to run into this as you build things out; I figured we'd hit an issue eventually. Where's my wrong thing... all right, still wrong. Okay, let's go back — I believe it said line 10 in our helper file, so let's see where we messed up on line 10. Line 10 here, `model`... and if we look around — okay, I found the issue: we put `message` here instead of `messages`, and that's what was causing the problem. Let's go back to the terminal — no, we already have it activated — so let's restart the app, and it opens up for us. Again, let's go to GPT-4, set the temperature to one, leave the tokens the same, and say "write me Python code that adds two numbers together and put it in a function." Let's see what our output is. We can see it's generating the response and streaming it — and there we go: it wrote out some code for us and put it in markdown, so if we actually wanted to use that code we could copy it, and it gave us an explanation of the generated code up at the top. Let's do one more thing: "tell me a long story about a boy and girl" — let's see what it does. It's now generating, and just like ChatGPT it's streaming the output for us, because of the option we set and the helper function we created. It's still going — it listened to us; it's definitely telling us a long story about a boy and a girl, and it's still writing. I set the max token length to a thousand, so it used quite a bit of those tokens. So we've now created our very own chatbot using Streamlit and OpenAI. If you found this useful, feel free to like and subscribe. I'll be creating other videos that are similar and a little more complex — this one was probably on the simpler side when it comes to functionality. Again, we could have added more
parameters over here, and we could have had it really track the history — the context of what the user is asking. Right now we're only displaying the history: if we were to ask the model a question about something we previously discussed, it's not tracking that; it's only showing the history on the screen for you. So we could add some more fine tweaks to this application. Again, Streamlit is great if you're trying to build out proofs of concept. You could have built this in any web framework of your choice — React, Vue, whatever you'd like — but Streamlit is great for prototyping: if you've got an idea and you want to build it out, Streamlit is really the route to go. Again, if you like the content, please like and subscribe. If you've got questions, put them below — I try to get to all of them — and if you've got ideas about a particular type of application or solution you'd like me to build, feel free to let me know. This channel is geared toward those who are in the tech field or looking to learn about other technologies. Right now I'm really focused on generative AI, but I also focus heavily on cloud-based technologies, so if there's another technology you'd like to see covered on this channel, let me know. Thanks for watching the video from beginning to end — I hope you liked the content, and I'll see you in the next video.
Info
Channel: AI DevBytes
Views: 543
Keywords: ChatGPT, Chatbot, DevTechBytes, GPT, Streamlit, ai development for beginners, chatgpt explained, gpt 4, open ai gpt 4, chatbot ai, gpt-4, gpt-4 demo, chatgpt, gpt-4 turbo, ai, artificial intelligence
Id: UKclEtptH6k
Length: 70min 2sec (4202 seconds)
Published: Sat Mar 23 2024