How I Use OpenAI Assistants API To Control My Streamlit App

Video Statistics and Information

Captions
I try to recreate the assistant demo from the OpenAI Dev Day in Python, in which the user asks GPT to mark the Paris top 10 tourist spots to visit, and GPT-4 apparently updates the map in real time. Hello data fans! As a developer enthusiast, I didn't immediately understand this black magic: is GPT-4 generating and running JavaScript code that updates the map for you? When I don't understand a new feature, I go read the source code for the demo. I mean, they're called OpenAI, I'm sure they open-sourced the demo app, right? Or I build my own Python web app in Streamlit to play with the new Assistants API. So let's find out if GPT-4 actually crafts code to control my Streamlit map, or if it's not as magical as it seems.

Open your favorite editor, create a new app.py script, and run the `streamlit run app.py` command to display your new app in a browser tab. Write the st.title call in the script and check that it quickly live-reloads into a beautiful app by clicking the "Always rerun" button; if you miss it, you can still activate it in the top-right menu. Create two columns using st.columns, write some placeholder text in the left column and a Plotly Express map widget in the right column. Finally, wrap up your script with a chat input widget at the end, and that makes up the skeleton of our app.

Now, to add some OpenAI muscles to our skeleton, grab your OpenAI API key from the API keys page and paste it into a new .streamlit/secrets.toml file, on its own OPENAI_API_KEY line. While we're at it, I already know I'm going to use Mapbox to display map tiles, so grab yourself a Mapbox API key and add it to the same file. This file is like the keys to your house: you don't want to throw them out into the public, so add the file to your .gitignore, so you don't push your secrets to the GitHub streets by accident later on.

If you're a developer, you have probably tried to let ChatGPT generate code for you, like I did in a previous video. This is actually more stressful than I expected, even with all of the prompt engineering.
Saying things like "you're a wise 100-year-old Python master who should only generate code without any bugs", would you really trust GPT-4 enough to run the code without validating it, when it's capable of hallucinating? A better way would be: "here are some predefined functions, with a description, for you; this one will move the center of the Plotly map." To create those predefined functions, head to the OpenAI assistants page, create a new assistant, name it whatever you want, instruct it to be an expert travel assistant that can display a map, and in the functions modal window add a JSON description of an update_map function whose role is to move to coordinates on a map. The JSON schema consists of the function name, a description of the function, plus the typing and explanation of each argument you want GPT-4 to generate and send back. Save your new assistant: it's now ready to chat with you and control your Streamlit app.

But are we ready to let our assistant control our Streamlit app with predefined functions? Not quite yet. Let me add a single entry point for our LLM to control our entire Streamlit script, so it's not a mess. First, any global variable you want to reuse through the app, like the full chat conversation, should be stored in Streamlit session state, so initialize an empty conversation and current map coordinates in session state. Also prepare references to OpenAI objects, like an assistant object, a thread of messages, and a current run; we'll figure out each of those in a little bit. For debugging purposes, I like to display the full session state as a dictionary in the Streamlit sidebar using the st.sidebar context manager. That sidebar is a bit annoying, so I hide it by default through st.set_page_config at the beginning of the script. We want a beautiful view of our session state in our app, so in the left column, browse through the session state conversation in a for loop to display each message as a chat message. Nothing will be displayed for now, since the conversation is still empty.
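The JSON description of the update_map function mentioned above could look like the following, written out as a Python dict (the exact argument names and wording are my assumption, not a transcript of the video's schema):

```python
# Function specification for the assistant: a name, a description, and a
# JSON Schema typing each argument GPT-4 should generate and send back.
update_map_spec = {
    "name": "update_map",
    "description": "Move the center of the map to the given coordinates.",
    "parameters": {
        "type": "object",
        "properties": {
            "latitude": {
                "type": "number",
                "description": "Latitude of the new map center",
            },
            "longitude": {
                "type": "number",
                "description": "Longitude of the new map center",
            },
            "zoom": {
                "type": "integer",
                "description": "Zoom level of the map",
            },
        },
        "required": ["latitude", "longitude", "zoom"],
    },
}
```

You would paste the JSON version of this into the functions panel of the assistant page; GPT-4 only ever sees this documentation, never your Python code, which is exactly why the description and argument explanations matter so much.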
It's just the beginning of the date with the chat. In the right column, pass the session state map coordinates into the Plotly Express map. Basically, the conversation and Plotly map views you're seeing in the app are now fully modeled by session state, so all that's left is to control the session state model through user interaction.

Now, time for a little test: which app interaction should run a callback that edits the conversation and map session state? I'll let you think about it for a second. If you guessed "whenever you submit text from the chat input at the bottom", congratulations, you win an OpenAI... still, wait. Add an on_submit argument to the chat input that calls a new on_text_input method before rerunning the Streamlit script from top to bottom. Apart from initializing the app, this on_text_input callback should be the only, and I mean only, place where you control session state. I mean, for this app at least. Wait, you don't know how to get the value from the chat input in the callback? Well, here's a Streamlit pro tip for you: whenever you're in a widget callback, you can retrieve the value of any widget from session state by looking for the key argument of that widget in session state. Look up the chat input's key in session state, and there is the value of the chat input widget.

Now that the conversation is edited, let's update the map state to some random coordinates. Write a new update_map function that takes a latitude and a longitude, and also a zoom level, to update the map session state. The update_map function here and the update_map in the assistant page feel very similar, right? I hope you noticed. Then you can build a new key-value dictionary which maps the string "update_map" to the actual update_map function. Take a small break and play with your app a little: write text into the chat, look at it update both the conversation and map session state at every submit, and... yeah, it works! This is it: we have a single chat input callback that controls our full app by updating session state.
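Here is a sketch of that single-callback idea, with a plain dict standing in for st.session_state so the logic runs anywhere; the "input_user_msg" key and the function names mirror the ideas above but are my own naming:

```python
# A plain dict stands in for st.session_state in this sketch.
state = {
    "conversation": [],
    "map": {"latitude": 48.8566, "longitude": 2.3522, "zoom": 11},
}

def update_map(latitude: float, longitude: float, zoom: int) -> str:
    """Move the map by rewriting the map session state."""
    state["map"] = {"latitude": latitude, "longitude": longitude, "zoom": zoom}
    return "Map updated"

# Maps the function *name* the assistant sends back to the actual function.
TOOL_FUNCTIONS = {"update_map": update_map}

def on_text_input() -> None:
    """Chat-input callback: the only place where session state is edited."""
    # In Streamlit you would read st.session_state["input_user_msg"],
    # i.e. the `key` you gave to st.chat_input.
    user_message = state["input_user_msg"]
    state["conversation"].append(("user", user_message))
    # ...later: forward the message to the assistant thread and poll the run.

# Simulate a submit, then a tool call dispatched through the mapping.
state["input_user_msg"] = "Move the map to Bordeaux"
on_text_input()
TOOL_FUNCTIONS["update_map"](latitude=44.8378, longitude=-0.5792, zoom=12)
```

The string-to-function dictionary looks redundant now, but it is the hinge of the whole design: the assistant can only name functions, so something has to translate that name into a call.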
If you want GPT to control the app from one place, this is the only bridge. Create a new OpenAI client using the OpenAI API key from st.secrets, then link to your remote assistant and store its reference in session state at initialization. The OpenAI assistant workflow has three levels of abstraction. First, you have the assistant object, which defines the instructions, the library of predefined functions, and the files to retrieve knowledge from. Against this assistant object, you can store a list of user messages inside what is called a thread. After adding user messages to your thread, you ask GPT to generate an answer given the full thread conversation: this creates a run. Now, imagine there's a run going on whenever GPT is generating an answer. That run can take some time depending on the length of the answer, so you will need to poll that run until it is finished to get the full answer.

But wait, there's more! If you ask to move the map to Bordeaux, the run will detect it needs to run the update_map method to move the map, so it pauses and asks you: "hey, I think I need to run update_map with those Bordeaux coordinates; please run it on your side and send back the results." GPT only uses the function documentation and argument signatures to decide which function and arguments you have to run. And by the way, I'm actually impressed that it's not hallucinating the coordinates of Bordeaux; this should maybe be a separate get_coordinates function that uses a Python package to geolocate harder international cities.

Now that you know how the assistant works in theory, let's implement it in practice, in the only bridge to our application. By the way, this part may be a little too fast if you're trying to live-code it; you can find the source code in the description, under the Subscribe button, you can't miss either of them. From the on_text_input callback, add the user chat message to the assistant message thread, then ask the assistant to run the thread, which will initiate the generation of the answer given the conversation.
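The polling part of the run lifecycle described above can be sketched like this, with a fake runs client standing in for the real SDK (with the real client you would call client.beta.threads.runs.retrieve(thread_id=..., run_id=...)); the status names follow the Assistants API:

```python
import time

class FakeRuns:
    """Stand-in for client.beta.threads.runs: each poll advances the run."""

    def __init__(self):
        self._statuses = ["queued", "in_progress", "completed"]

    def retrieve(self, thread_id: str, run_id: str):
        # Pop the next status until only the terminal one is left.
        status = self._statuses.pop(0) if len(self._statuses) > 1 else self._statuses[0]
        return type("Run", (), {"status": status})()

runs = FakeRuns()

# Poll the run until it reports "completed", sleeping between polls.
completed = False
while not completed:
    run = runs.retrieve(thread_id="thread_abc", run_id="run_abc")
    if run.status == "completed":
        completed = True
    else:
        time.sleep(0.01)  # in the real app: tens to hundreds of milliseconds
```

The sleep between polls matters: without it you would hammer the API in a tight loop for the whole duration of the generation.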
Store the current run in session state so we can track it while it is running. Build a while loop that depends on a completed boolean flag. In this infinite loop, poll the run object stored in session state: if the run is complete, flip the flag to True; else, wait for a few dozen or a few hundred milliseconds. When the run is completed and you leave the infinite while loop thanks to the completed flag, retrieve the full conversation, which contains the generated answer at the end, and store it all in the conversation session state.

Finally, add another if step to detect whether the run pauses with a function call request; for example, here's what happens when you ask to move to Bordeaux from the web playground. In that case, the run retrieval will return a JSON object with the predefined function name and the Bordeaux arguments to run. For each tool request it holds, check your string-to-Python-function mapping and run the correct function with the generated arguments; here it will run update_map with the Bordeaux coordinates. Run the function and send the tool results back to the run so it can resume. And our update_map, operated by the LLM assistant, has actually updated our session state, so our map has moved at your request, and your assistant is aware of it because you sent it back some results. Easy peasy!

So it's not magical to the point of having an autonomous agent generating and running its own code in a code interpreter or in your app. Instead, you configure a library of function specifications, so it maps user intents to those functions. Yet you're still in full control of the code: at this point, the LLM only orchestrates through the library of predefined functions, depending on what you ask. You can add other tools, like add_markers to place markers on the top 10 Paris tourist spots, or get_weather to send the assistant weather information. With multiple tools entered, you'll be able to ask more complex questions, like "mark the top 10 tourist spots around the Eiffel Tower that will be sunny tomorrow."
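The function-call branch above can be sketched like this; run_stub mimics the shape of the SDK's Run object when its status is "requires_action" (the attribute path required_action.submit_tool_outputs.tool_calls follows the real API as I understand it), and update_map is a tiny local stand-in:

```python
import json
from types import SimpleNamespace

def update_map(latitude: float, longitude: float, zoom: int) -> str:
    """Local stand-in for the app's session-state-updating function."""
    return f"Map moved to ({latitude}, {longitude}) at zoom {zoom}"

TOOL_FUNCTIONS = {"update_map": update_map}

# A stubbed run paused on a tool request, shaped like the SDK's Run object.
run_stub = SimpleNamespace(
    status="requires_action",
    required_action=SimpleNamespace(
        submit_tool_outputs=SimpleNamespace(
            tool_calls=[
                SimpleNamespace(
                    id="call_1",
                    function=SimpleNamespace(
                        name="update_map",
                        arguments=json.dumps(
                            {"latitude": 44.8378, "longitude": -0.5792, "zoom": 12}
                        ),
                    ),
                )
            ]
        )
    ),
)

tool_outputs = []
if run_stub.status == "requires_action":
    for tool_call in run_stub.required_action.submit_tool_outputs.tool_calls:
        func = TOOL_FUNCTIONS[tool_call.function.name]   # name -> Python function
        kwargs = json.loads(tool_call.function.arguments)  # arguments arrive as JSON
        tool_outputs.append({"tool_call_id": tool_call.id, "output": func(**kwargs)})

# tool_outputs would then go back through the SDK's submit-tool-outputs call
# so the paused run can resume and finish its answer.
```

Note that the arguments come back as a JSON string, not a dict, so the json.loads step is easy to forget and a common source of bugs here.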
It will simultaneously run the map move, get_weather, and add_markers calls, and sometimes it even suggests things by itself, like adding markers. So you basically have an automated natural-language-to-rules-engine translation. So, yay! Okay, so now I understand... but is there a better way to style a similar demo in Python? If you want a more closely styled app, I'll be trying to build it with another library, like Solara, in the next video. So I'll see you around, bye!
Info
Channel: Fanilo Andrianasolo
Views: 7,109
Keywords: how to use streamlit, learn streamlit, python, python streamlit, python web app, streamlit, streamlit release, streamlit tips, streamlit tutorial, streamlit book, openai, assistants api, openai assistant, function calling, openai streamlit, genai, generative ai, streamlit session state
Id: tLeqCDKgEDU
Length: 11min 41sec (701 seconds)
Published: Thu Nov 16 2023