Python and ChatGPT API - Introduction

Captions
Welcome back. As you know, I am Eli the Computer Guy, and in today's class I'm going to show you how to use the ChatGPT API. This is not a class for normies; this is a class for the geeks. A lot of folks out there, when they start talking about ChatGPT, go to chatgpt.com or whatever it is, they have a nice little text box, they tippy-tap type out their question or their prompt, they get a response back from ChatGPT, and they say, "Yeah, I know how to use AI." People like me look at those folks and go, okay, good for you, you can enter things into a text box and read the response. That is just wonderful. But here's the thing: if you are going to be a technology professional, if you're really going to try to extract the most value out of things like AI as possible, you do not want to be using the user interfaces the normies are using. If you sit down, tippy-tap type out your question, and get a nice pretty response from the ChatGPT platform, that is simply not going to cut it if you're going to be creating your own systems.

So what I'm going to show you how to do today is use what's called the API. We're going to use Python (Python is an interpreted programming language) to query ChatGPT with a question, give it some requirements and that type of thing, and then actually get a response back from that API. With that response we'll be able to do any number of things. For Silicon Dojo, when I do the in-person hands-on classes, we've done a lot of stuff with ChatGPT: things like web scraping, actually going to websites, scraping the text from them, and auto-summarizing, and automatically creating blog posts and dumping them directly into the database. Imagine if you simply typed out ten titles and said, "I want blog posts about these ten topics," then essentially hit run, and, I don't know, 30 seconds later, three minutes later, however long it takes to run, those posts are live on your website for people to read and peruse. That is why the API is so valuable: you can make it an integral component of a much larger system that you will be creating. Today we're just going to be dealing with Python and the API itself, because much bigger systems are not a class we're going to go into. But I think by the end of this you'll have an understanding of how this API works, why it's so powerful and interesting, and why folks like me laugh at the normies who think they're using AI when they just go and type things into a text box and think they're all special. So that's the class we're going to be doing today; give me a second and we'll dive right in.

Now, before I start explaining a lot of things to you, I think it's a good idea to give you a basic demonstration of what we're going to be working on today, so you can visualize the concepts I'm talking about until we get to the point where I'm actually explaining the demonstration and all the code. So let's go over to the computer for a second to look at a bit of demonstration code using the ChatGPT API, so you can understand what I'm talking about. This is the kind of code we're going to be dealing with for the rest of today's class; if you're scared by this kind of code, trigger warning, you should probably leave now. Basically, all of this is Python. This is a Python script, and it allows us to access the ChatGPT API to get a response. I'm not going to go over a lot of it now; we're going to talk about it more in depth as we go through the class. I just want you to visualize what's going on. If we come down here, this is the heart of what makes all of this code work. This is where we're using a function from OpenAI, from ChatGPT, and basically we make a request of their system. We give a role, we say what we want ChatGPT to be, and we say, "You are a helpful assistant." You could say you are Shakespeare, you could say you are a professor, you could say you are a police officer; basically this is where you say what character you want the AI to play. Then down here, under the user role, we simply plug in whatever question we want a response to. Here I'll do something pretty simple: what is the radius of the Earth? Then we get a response back, and this is where we can print the response on the screen. So, what is the radius of the Earth? I hit run up here, it runs the script, and the radius of the Earth is approximately 6,371 kilometers. With this we can simply change the question to whatever we want, I don't know, what is the radius of Jupiter, and it's basically the same thing: we get the radius of Jupiter back as the response. What comes back is just the response as text, and once the response comes back as text we can dump it into a database, we can give it to a speech engine to turn text into speech so the computer can actually talk to us, we could run it through an if/else statement so that different events get triggered based on what comes back, all kinds of really cool, fancy things. But this is the basic idea of what we're going to be dealing with today.
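For reference, here is a minimal sketch of a script along these lines. It assumes the pre-1.0 openai Python package that was current when this video was made (newer releases of the library use a different client interface), and the API key is a placeholder:

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder, use your own key here

    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            # The system role sets the character the AI plays.
            {"role": "system", "content": "You are a helpful assistant."},
            # The user role carries the actual question.
            {"role": "user", "content": "What is the radius of the Earth?"},
        ],
    )

    # The answer comes back as plain text inside the response object.
    print(completion["choices"][0]["message"]["content"])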
Now, the first thing to remind everybody whenever we start dealing with artificial intelligence is the fact that it's most likely not artificial intelligence. I have a buddy who runs an AI analyst firm, and I was talking to him a couple of years ago about all the AI products out there. We were sitting there having a beer, and I asked him, out of curiosity, how many of these products are actually AI. He looked at me and said, "Eli, I don't think any of them actually are." I think that's an interesting thing to keep in mind whenever we deal with these products and use words such as artificial intelligence. The question really should be: what does that actually mean, what are we expecting out of these systems, and is it reasonable to expect the things we're expecting out of them? Because if you're going to deploy this, especially into a production environment, the results you get from the computer system are going to be based on how it's built and how it's designed, and if you design your system based on the response you want rather than the response you're actually getting from the system, all hell is going to break loose at some point. I will warn you about that.

So when we talk about something like ChatGPT: ChatGPT is something called a large language model. A large language model has been fed a tremendous amount of text, the Library of Congress, apparently the entire internet. What a large language model does is go through all of that text, turn the text into something called tokens, and then figure out the statistical relationships between tokens. So when you ask a large language model a question, what it's actually doing is figuring out statistical relationships between these things called tokens, and then giving you a token response that happens to get turned back into English. When we talk about tokens, it's important to understand that a token is a word, a part of a word, or a symbol that has a numeric identifier within that large language model. Essentially, all the large language model is doing is figuring out the probability that one token should follow another: if I have these two tokens in a row, what token is most likely to come after them, and what token is most likely to come after that, and after that, and so on and so forth. So it's very important to understand that these large language models do not understand English and are not actually intelligent; they're just doing statistical analysis at an insane speed. That's why they need all those Nvidia GPUs, and don't you wish you had invested in Nvidia a couple of years ago?
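To make the token idea concrete, here is a small sketch using OpenAI's tiktoken tokenizer library. tiktoken is not used anywhere in this video; it is just an assumption for illustration, showing how a sentence breaks into numbered tokens and how those tokens are often fragments of words rather than whole words:

    import tiktoken  # OpenAI's tokenizer library (an assumption here, not used in the video)

    enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
    text = "How much wood would a woodchuck chuck?"

    token_ids = enc.encode(text)                    # each token has a numeric identifier
    pieces = [enc.decode([t]) for t in token_ids]   # the text each token maps back to

    print(token_ids)                                # a list of integers
    print(pieces)                                   # fragments such as ' wood' and 'chuck'
    print(len(token_ids), "tokens for", len(text.split()), "words")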
Again, one of the important things about a large language model is that all it's doing is statistical analysis on tokens. So when you ask it math questions, if you say 2 + 2 or whatever else, and it has seen those numbers, if it has seen that equation in the past, it knows that statistically, 99.99% of the time, 2 + 2 should equal 4. It is not doing math. It simply knows that every time it has seen 2 + 2 in the past, it has always been followed by 4. That's one of the reasons people talk about large language models being quote-unquote bad at math, and yes, they're horrible at math, because they don't do math. That's also why using one of these scripts I'll be showing you today, using the API call, can be very valuable. If you know the large language model can't actually do math itself, one of the things you can do is dump a whole bunch of information into ChatGPT and say, can you please create an equation for what you think this data should look like? Imagine a word problem: a truck leaves Chicago going 500 miles an hour and a tsunami goes off in Tokyo, I don't know, what time should you eat lunch today, that type of thing. Since it's able to figure out the relationships between tokens, it may actually be able to give you back the type of equation you need to solve the problem. You get the equation back as the response, and then you have some kind of mathematical subsystem within your infrastructure do the actual math, because again, the LLM doesn't do math, to provide the results you're looking for.

When I look at these large language models, artificial intelligence, whatever, what I think of them as is a supercharged version of something called regex, regular expressions. One of the issues you run into in the coding world is parsing data. Parsing data means you read information coming into your system, which could be user input, PDF files, websites, whatever else, and you create a script to read that incoming data and pull out the specific information you care about. If the information is properly formatted in a standardized way, something like JSON (JSON is a data format), it's really easy to parse. But if you're dealing with humans, especially in a world where we're digitizing everything, a lot of the text that comes in is a complete and utter mess, and you as a coder trying to figure out how to parse a complete and utter mess can be a real pain in the butt. One of the cool things is that you can just throw all of that mess up to ChatGPT and say, I need their first name, I need their last name, I need their social security number, and I need their email address, and it will look through all of it and say, here you go. That's one of the things that makes large language models valuable, but it's important to understand both what they can do and, absolutely, what they can't.
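As a sketch of that kind of extraction (again assuming the pre-1.0 openai package, with a made-up blob of messy text and a placeholder key), you can ask for the fields back as JSON so the reply is easy to parse:

    import json
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    messy_text = "hi its BOB   johnson here,, email me at bob.j@example.com  ss# 123-45-6789 thx"

    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a data entry assistant."},
            {"role": "assistant", "content": "Respond with JSON only, using the keys "
                                             "first_name, last_name, ssn, and email."},
            {"role": "user", "content": f"Pull the fields out of this text:\n{messy_text}"},
        ],
    )

    reply = completion["choices"][0]["message"]["content"]
    record = json.loads(reply)  # in a real system, validate this; the model is not
                                # guaranteed to return clean JSON every single time
    print(record["first_name"], record["email"])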
The other thing to understand when you're dealing with ChatGPT is that it's something called generative AI, which means it actually creates the response as it goes, which is kind of cool. With the API, and I'm going to show this to you in a bit, you can actually get a streaming feed from ChatGPT that shows you how it's building out the response. Basically, you ask a question, it figures out what the first word of the answer should be, then, once it has that, what the second word should statistically be, then the third word, fourth word, fifth word, sixth word, and so on; it's building the answer out as it goes. When you go to Google, when you do a query on a database, it simply provides you a response, and you will always get the exact same response back. With this, it's always creating the answer anew, which is cool from a technical standpoint but can also run you into a bit of a problem, since it's giving you a new response every time. When Bob makes a query, it gives one response; when Sue does the exact same query two minutes later, it gives a slightly different response; when Patty does the same query a little bit later, it gives a completely different response, blah blah blah. So it's one of those things that can be technically cool but can actually run you into some issues.

That's one of the things to think about with this API. Imagine somebody makes a request to ChatGPT and you get the response back. Now imagine you take that response and dump it into a database, so that when the next user on your system asks the exact same question, or maybe 90% the same question, what happens first is that you simply pull from the local database and say, this question has already been asked; is this the answer you're looking for? That way it stays internal, you're not paying any prices, you're not doing anything like that. If it's not the right answer, they can say, no, I want the AI response; if it is the right one, you actually get a lot of value out of that. These are some of the things to be thinking about when you're going to be using ChatGPT or any of these quote-unquote AI models or tools, and to really understand what's going on, so that when you implement it within your system, you understand the response you're going to be getting.
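Here is a minimal sketch of that caching idea, with a plain Python dictionary standing in for the real database you would use in production, and the usual placeholder key. It only catches identical questions; fuzzier matching is left as an exercise:

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    answer_cache = {}  # stand-in for a real database table of question -> answer

    def ask(question):
        # Serve a stored answer first, so repeat questions cost nothing.
        if question in answer_cache:
            return answer_cache[question]
        completion = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": question},
            ],
        )
        answer = completion["choices"][0]["message"]["content"]
        answer_cache[question] = answer
        return answer

    print(ask("What is the radius of the Earth?"))  # hits the API
    print(ask("What is the radius of the Earth?"))  # served from the local cache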
Now, the next thing to talk about, as real technology professionals, because we be real technology professionals here, is price. This is always funny when I talk with noobs. Noobs never want to talk about terms of service or licensing schemes or pricing or any of that stuff. No, Eli, I just want to get straight to the functions, I want to get to lists and dictionaries and replication strategies, I want to do the cool stuff. Yeah, well, bucko, most of our job is not the cool stuff, and if you want to keep your job, you'll make sure the not-cool stuff is done properly.

So ChatGPT does have a pricing model. Normally when I use ChatGPT I use something called the 3.5 Turbo model. When you start using ChatGPT, you're given different models you can use; currently, while I'm doing this video, GPT-4 is the best model and 3.5 Turbo is basically yesterday's model. The thing you find out when you start using these systems is that the best model is a lot more expensive than yesterday's model. With ChatGPT 3.5 it basically costs you about a fifth of a penny per 1,000 tokens, and 1,000 tokens is supposedly somewhere around 750 words, because again, remember, tokens are words, parts of words, or symbols, so when they do that math you don't necessarily get 1,000 words; it's approximately 750. So the 3.5 Turbo model costs you about a fifth of a penny, which is great, and you max out initially at 4,000 tokens, so even if you do something really, really, really dumb, you're going to pay a penny. A penny. That's not too bad; make it rain the pennies. The issue you get to with GPT-4 is that you can go up to 32,000 tokens. Isn't that a lot better? A lot of people go, wow, 32,000 tokens is so much better than 4,000 tokens, why would I ever use 3.5? Well, here's the thing: it costs you anywhere between 6 cents and 12 cents per thousand tokens, which means, depending on what you're doing, you can spend a buck fifty to almost four dollars per request. Every time you click run, that might cost you $4. You don't even have to be in a production environment to realize that might get bad really damn quick. That's one of the reasons I like using the 3.5 model: it gives me the responses I generally care about, especially for doing these classes and those types of things, at a very, very low price point.

The other thing to be thinking about is that once you start getting into the GPT-4 model, where you can put in up to 32,000 tokens, having 32,000 tokens available might allow you to be, frankly, stupid and waste a lot of your company's money. We did the web scraping class at Silicon Dojo a little while ago, and we'll be doing the same class here in videos, but one of the interesting things I found is that you can use a simple Python script to scrape literally all the text from a web page, send that up to ChatGPT, and ask what the article is about, or whatever. Here's the interesting thing: when you scrape all the text from a web page and send it up to ChatGPT, all that text can end up being around 21,000 tokens. You go to Ars Technica, you see a little 500 or 600 word article, you send everything up to ChatGPT, and it'll be 21,000 tokens, which, depending on how it works out with the pricing, will cost anywhere between about a buck twenty and a buck eighty for that one request. In all seriousness. You're sitting there thinking, that doesn't make any sense, an 800-word blog post, why would it be 21,000 tokens? It's 21,000 tokens because you're not thinking about all, and I do mean holy crap all, of the JavaScript, all of the CSS, all of the text that makes that web page display properly for you, which you simply don't recognize because it's in the source code, not in what you think you're seeing. So that's one of the things to think about with the 4 model: it allows you to be lazy, and in certain situations that may make a lot of sense, but if every time you hit the run button it costs you a buck twenty, just imagine a thousand users hitting your site or your platform every minute, times 60 minutes in an hour, times 24 hours in a day. Yes, this is basically how a startup company can burn through literally all of its money in the blink of an eye. I was talking to somebody about that earlier; apparently there is a startup company that ran out of all their money because they didn't quite understand how this pricing system worked as far as ChatGPT was concerned. So this is something to consider when you're looking at using ChatGPT. We are going to be using 3.5 for almost everything today, simply because it costs me a penny every time it runs versus however much GPT-4 would cost us, and this is something you're going to have to consider.
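Here is a rough back-of-the-envelope cost calculator using the per-1,000-token prices quoted in this video (August 2023; check OpenAI's pricing page for current numbers). It is only a sketch to show how quickly the models diverge in cost:

    # Dollars per 1,000 tokens (input, output), as quoted in the video, August 2023.
    PRICES = {
        "gpt-3.5-turbo":     (0.0015, 0.002),
        "gpt-3.5-turbo-16k": (0.003,  0.004),
        "gpt-4":             (0.03,   0.06),
        "gpt-4-32k":         (0.06,   0.12),
    }

    def estimate_cost(model, prompt_tokens, completion_tokens):
        in_price, out_price = PRICES[model]
        return prompt_tokens / 1000 * in_price + completion_tokens / 1000 * out_price

    # Worst case on 3.5 Turbo's 4,000-token context: well under a penny.
    print(estimate_cost("gpt-3.5-turbo", 2000, 2000))   # about $0.007
    # A 21,000-token page scrape plus a 1,000-token answer on the 32K GPT-4 model.
    print(estimate_cost("gpt-4-32k", 21000, 1000))      # about $1.38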
Okay, so are you still awake? I know this kind of stuff gets boring, but again, the boring stuff is what we get paid for. I want you to understand, as a real technology professional: if your job is exciting, you're doing it wrong. Have exciting hobbies; have a boring job. I know for the 20-year-olds out there that sounds horrifying, but I'm telling you, that's how the real world works.

So with that, let's go over to the computer. What I want to do is show you how to log into the dashboard you're going to use to get the ChatGPT API key, and show you some of the information within that dashboard and on their platform, so you can get the API key and set a budget for your spend with ChatGPT. That way you don't go over that budget, you don't do some kind of stupid while True loop and start burning four bucks every time the damn thing triggers. I'll show you all that kind of stuff, so let's go over to the computer and a lot of this will make a lot more sense.

So here we are at my computer. Again, pricing, just so you see it: if we come down here, it shows you the different models, some of the different contexts, and the prices. The 3.5 Turbo, and I really do have to say the 3.5 Turbo just works really, really well and is so inexpensive, now costs less than a fifth of a penny per 1,000 tokens if you're only doing the 4,000-token context. It's important to understand, too, that when they talk about tokens used in these prices, that is for both the request and the response. When you ask a question or upload something to ChatGPT, that's going to cost you tokens, and when you get the response back from ChatGPT, that is also going to cost you tokens. You may upload a lot of information and get a little response back, or you may basically say, hey, I want a 10,000-word booklet on God knows what, and it will try to give you all of that back, but it's important to understand you'll get billed for both sides of that transaction. So for a request under 4,000 tokens, it costs you less than a fifth of a penny for the input and about a fifth of a penny for the output; worst case scenario, you're getting billed literally one penny every time you hit the go button. If you go up to the 16K context, which has just come around in the last little bit, you can use the 3.5 model with up to 16,000 tokens, and even with this you're paying a third of a penny for the input and slightly less than half a penny for the output. So even if you screw something up with the 16K context, you're spending, I don't know, two or three pennies per mistake, which is awesome.

If we come up here to GPT-4, the quote-unquote better model, what you're going to see is that where 3.5 is 0.15 cents, so less than a fifth of a penny, GPT-4 is 3 cents per 1,000 tokens on the input and 6 cents per 1,000 tokens on the output, and that's for the 8,000-token context. So if you use 8,000 tokens, it's going to cost you a little bit. If we come down here to the 32,000-token context, what you're going to see is that it's 6 cents for the input and 12 cents for the output. So if you look at 32 times 12, what is that going to end up being, $3.60, $3.80? Depending on what you're asking of the system, that's where I'm saying it can literally end up costing you almost $4 every time you hit run, which is why things like caching make so much sense. You have prices down here for things like fine-tuning models and embedding models; we'll talk about those in other classes, but again, I really just want to hammer this home for the umpteenth time.
Okay, so in order to get to the API dashboard, what you're going to want to do is go to openai.com, not chatgpt.com. You come here and either sign up or log in; I already have an account, so that brings us here to OpenAI. ChatGPT is the platform you're used to: if I click on this, this is the text box, I ask it a question, it gives me a response, as the normies do, and that's not what we're doing. DALL-E is for images; we'll talk about that in a different class. Here is the API, which is what we care about, so we click on the API and it drops us in here.

From here there are some interesting things. They have examples, and the examples actually show you small demonstration projects. I do have to say they're kind of crappy, to be honest with you, not the best examples, but they are examples. Then we have the API reference over here, and the API reference will tell you how to install openai onto your computer. We're going to be using Python today, and when you're using Python you almost always have to install modules; the module we're going to have to install is, surprise surprise, openai. This shows you how to do that, and if you're using Node, it gives you that information here. If we come down, it gives us information on how to make the request and what the responses look like for ChatGPT. Over here is basically an example; as you saw before, this is what the code looks like. From here it will show you the code for different models: if I want to use the 16K version of Turbo, I can click on this and it just changes that line; if I want to use GPT-4, it shows me how the code gets modified; and so on and so forth. It's also kind of interesting that you have "no streaming" and "streaming." I'll show this to you further in the class, but what no streaming does is give you the full response: you make a request, it figures out what the response should be, and it sends you that response all in one go, which is frankly the way you should do it. What streaming does, which is kind of cool, is that as ChatGPT, as OpenAI, is figuring out the answer, generating the response, it literally sends you the words as it's generating them on the spot. This is the code for how to do that, which is kind of interesting from an interactivity standpoint; generally I probably wouldn't use it, but I will show you how to deal with it today. Anyway, you have all kinds of things here: completions, embeddings, and images again for when we do classes on images; all of that information is here. What you're going to want to do is go to the API reference, then Chat, and scroll down until you get to the chat completion, and that will give you the code. If you need examples of the code, click on "no streaming," generally just use 3.5 Turbo, and you can copy Python, Node, or curl, whichever you want to use.

Okay, then for the account itself. We come up here, and it shows my account name, which will get deleted out of this video. I can manage the account and I can view my API keys. One of the things I'll do now is click on "View API keys," and this is where my API keys are stored. You'll notice that once an API key is created, I will not see it in plain text again. Basically, you give the key a name, you copy the key when it gets created, and you save it somewhere else, because you will not be able to see it again. If you want a new API key, like the one we'll be using today, you can click "Create new secret key," and I can give it a name, say "for class," and hit create, and this is the API key. At this point in time you must copy this API key and put it somewhere else, because as soon as I hit done, all I see is "for class" and the last few characters, so I know which key is which, but beyond that I can't actually see the key again. You can delete keys too: to delete a key, all I have to do is click on it and revoke the key, and the key is gone. That's all you get to do: you create the key, you copy the key, and when you need to, you delete the key. That's about it.
When you come up here to Limits, which is something important to think about if this is a production environment, it shows you the tokens per minute and the requests per minute you can use. For 3.5 Turbo you can use up to 90,000 tokens per minute and make 3,500 requests per minute; with GPT-4 it's 10,000 tokens per minute and 200 requests per minute. So depending on what your system is doing, only being able to make 200 requests per minute might actually be a major limiting factor for what you're doing. Again, just things to be thinking about; for our projects it does not matter.

Beyond that, I can come down here to Billing, and the most important thing with billing is setting your usage limits. You click on usage limits; I have a hard limit of $10, and when your organization reaches this threshold each month, subsequent requests will be rejected. So even if somebody does a while True loop, even if somebody steals my API key, I will not be out any more than ten bucks. The soft limit means that when I reach that amount, I get a notification email, just so I know what's going on. When you start playing with this, give yourself a hard limit, and make it five or ten bucks. Especially if you're using the 3.5 model, you really don't use very much and it doesn't cost you very much, so this will keep you from doing anything particularly stupid. So that's basically the back end for how you deal with ChatGPT and the API; the big thing is for you to get your API key so you can use it in the projects and demonstrations I'm going to be showing you coming up.

So now that we have the boring stuff out of the way, let's dive into the code so I can explain what's going on, so that you can start writing some code yourself for whatever projects you have going on. We're going to go over to my demonstration computer. At Silicon Dojo, basically all our lab computers are Ubuntu; I think Ubuntu desktop with VS Code is the best way to go. Don't tell Louis Rossmann this, but we actually use old MacBook Pros: we take MacBook Pros, install Ubuntu desktop onto them, and they work fabulously. Great hardware that's incredibly cheap, because we use 10- or 11-year-old MacBook Pros with a brand-new operating system and VS Code, and it actually works very well for us. So if you're sitting down at your computer, whether it's Windows, whether it's Mac, whatever else, and you start to code, some things might be slightly different for you, but in today's class it should basically all be the same; there's nothing here tied to the operating system or anything else, so whatever platform you run this code on should work. Good for you.

So anyway, with that, let's go over to the computer so I can show you how to start playing with the code. The first thing you're going to have to do, before you start playing with the API, is install the openai module. Again, this is a Python module, and with Python you use pip; being on Ubuntu, we use pip3 install openai. When you run this command, it goes up to the repository, pulls down the openai module, and then you'll be able to use openai on your system. If you create the code I'm showing you today and it fails out because it doesn't know what the hell the openai module is, it's because you have not actually installed it. Now, depending on your operating system version or configuration, you will use either pip or pip3; pip was for version 2 of Python and pip3 is for version 3, but you can change how your computer responds to those commands. So on the latest version of Ubuntu desktop, use pip3; if that doesn't work for you, use pip; and if that doesn't work either, I don't know, go to ChatGPT and ask it why it doesn't work.
Anyway, with that, let's go over and actually take a look at some code. This first project is openai-no-stream.py. What it's going to do is simply get the response from OpenAI and then give us the entire response once it has it. Now, it is important to understand that if you start asking for a long response, say 500 words, 10,000 words, something like that, it may take a long time for OpenAI to respond to you, which is one of the reasons you might want the streaming version, so you know it's actually doing something. But for small responses it won't be any big deal.

The first thing you need to do is import openai. Again, every time you do anything in Python you have to import a module; even to get a random number you have to import a module. So you import the openai module. Then you're going to feed OpenAI the API key. This is normally done as an operating system environment variable; I'm just putting the entire API key here in the script because it's a lot easier to deal with many times, and again, this API key will get deleted right after I do this class, but the environment variable is the more secure way of doing it. Either way, you basically just feed it the API key. Then we have completion equals: this is where we make a request of OpenAI, and the response becomes completion. So, openai.ChatCompletion.create. Here's where we set the model; if you wanted to use GPT-4, or GPT-3.5 16K, or whatever else, this is where you would modify that. And then we have the messages.

With the messages we have different roles: we have the system role, we have the user role, and then you can have the assistant role, which we'll deal with in a second. The system role is what character this artificial intelligence will be. Will this be the President of the United States, will this be Shakespeare, will this be a professor, will this be Tiny Tim? Basically, what is the voice you want this AI to respond in? Here it's "You are a helpful assistant." This can be useful: if you're making blog posts, you might say "You are a journalist"; if you're trying to do some kind of press release, maybe "You are a politician"; and it changes the wording slightly. Then down here we have the user role, and the user role is where you ask the question: what question is it that you're going to ask? Here that's "What is the radius of the Earth?" Between the system role and the user role you can have assistant roles, which I'll show you a bit later, and you can have as many assistant messages as you want; they guide the response. You could say "I want a 500-word blog post," "I want certain things in there"; basically you can put numerous assistant messages in there to guide what the response is going to look like. So: the system role, you get one, because there's only one character the system will play; the user role, you get one, because that's the question you're asking; and the assistant roles, you can have as many as you want.
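As a sketch of that structure, here the key is read from an environment variable, which he notes is the more secure route, and a couple of assistant messages guide the response. The exact guidance wording is made up for illustration, and the pre-1.0 openai package is assumed as before:

    import os
    import openai

    # Pulled from the environment rather than hard-coded into the script.
    openai.api_key = os.environ["OPENAI_API_KEY"]

    messages = [
        # One system message: the character the AI plays.
        {"role": "system", "content": "You are a helpful assistant."},
        # As many assistant messages as you want, guiding the shape of the answer.
        {"role": "assistant", "content": "Answer in a single sentence."},
        {"role": "assistant", "content": "Give the value in kilometers."},
        # One user message: the actual question.
        {"role": "user", "content": "What is the radius of the Earth?"},
    ]

    completion = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    print(completion["choices"][0]["message"]["content"])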
Then we get the completion back, and the completion comes back in something called JSON format. Again, remember when I was talking before about formats that are easy to parse: JSON is easy to parse; it comes back in a format that's easy to understand, so you can get at the response you care about, the actual information you care about. Right now, all I'm going to do is print out that entire JSON response. So, what is the radius of the Earth? I click on this, it takes a second, and we get this beast of a response, though this one isn't too bad because the answer isn't very long. In here, this JSON response has the ID, basically the identifier for this particular request; it was created at this particular time, with this particular model; and then we come down and we get the content: the radius of the Earth is approximately 6,371 kilometers.

Then there's the finish reason. Why did it finish? It finished because it got to the end, and this is something you may need to look at, because you may have run out of tokens instead. That can be a big problem; when I was first messing with this, sometimes I would get half a response back and think, I don't understand, it's half a blog post. Because it's generative AI, it wrote half the blog post and stopped in the middle, and when you look at the finish reason, it basically says it ran out of tokens. Oh, okay, I need to figure out how to deal with that. If you see the finish reason "stop," it means the response went through the way it was supposed to, and you're getting the answer you're supposed to get; if you see a different finish reason, you may want to take a look at what's going on. Then we come down here to the tokens. Again, it's important for you to keep track of tokens, especially for your boss. Prompt tokens are basically how many tokens were uploaded to the system, which here was 24 tokens; completion tokens are the tokens for the completion, here 18; so the total tokens used were 42. This is very useful for you, especially from a logging vantage point with your systems: you can log how many tokens are being used in near real time, to verify you're not going over budget, to adjust things, and so on.

So anyway, this is the response we're getting back. Now, we took a look at this response; what do we actually want? We just want the content. If we come and look at the line I commented out here, this is what gives us just what we want (I'm horrible with my keyboard). completion is the variable we get back, and then we have choices, so completion, then choices as an index. In the Python world, whenever you see a curly bracket, that's a dictionary, a named index, so choices is a named index; whenever you see a square bracket, that's a list, which just has numbered indexes. We see the opening square bracket, then a curly bracket right after it, so we know this is index 0. Then we come down to the named index message, and then to content. So this goes completion, choices, index zero, message, content, and it will print only that out. If I come down here, type clear just so the screen is clear, and run the script again, now I literally only get that response, and I can send that to the user, put it in a web form, do whatever else with it. That's how you access just the response.
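Putting those pieces together, here is a sketch of pulling the interesting fields out of the completion object, with the same pre-1.0 openai package and placeholder key assumed:

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "What is the radius of the Earth?"},
        ],
    )

    choice = completion["choices"][0]
    # "stop" means it finished naturally; "length" means it ran out of tokens
    # and you probably only got half an answer back.
    print("finish reason:", choice["finish_reason"])

    usage = completion["usage"]
    print("prompt/completion/total tokens:",
          usage["prompt_tokens"], usage["completion_tokens"], usage["total_tokens"])

    # The part you actually care about: the text of the answer.
    print(choice["message"]["content"])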
So now, this is a version of the code I just showed you, but it is actually going to stream the response to you. Instead of collecting the entire response and sending it to you in one go, the words come in as the large language model generates the response. I think this is good as a party trick for your users, if you want to make your system look cool, and it might actually be useful if you're getting a massive response from ChatGPT. Imagine you ask for a 10,000-word blog post or something like that; it'll take a while for ChatGPT to create that, and having the completion come in as a stream might be useful just to verify the system didn't lock up, that type of deal. But generally I would probably stay away from this unless I needed it. Still, it's here, and it's kind of cool, so I'll show it to you.

With this, again, you import openai like you did before, you have the API key like you did before, and we have the completion essentially like we had before; again I'm using 3.5 Turbo because I don't want to get billed a lot of money. We come down to the roles: the system role is "You are a helpful assistant," and the user role is "How much wood could a woodchuck chuck if a woodchuck could chuck wood?" I actually like this question because the response will show you, again, that tokens are not words; this will make sense when I show you the response. Past that, we now add stream=True, which we did not have before. Then, coming down here, we have a for loop in order to run through the completion: as the completion comes in, we print out what arrives. So, for chunk, and we can call it whatever we want, we're calling it chunk, in completion: completion is what's coming in, we're turning each piece into a chunk, and then we print chunk, choices, index zero, delta, and print out what's there. When I do that and hit the go button, you can see how this comes in as a stream: instead of arriving as one solid response, it now comes in piece by piece. Why I like the whole woodchuck thing is that it also shows you it's not entire words coming in. "ch": it knows it's supposed to have "ch," and statistically, given the words that have come before, "u" should come after "ch," so the response is "ch," then "u," without any space. That's not the same as the word "chuck" as far as the computer is concerned. Again, it's important for you to be considering this when you're dealing with these systems, to really grasp what they're trying to do.
Past that, one of the things we can do is this: we see that the delta has a content index, so if we take chunk, choices, zero, delta, and add content to it, it will now simply print out just the value in content. With this there will be a small error at the end, because I don't want to get into a whole if/else statement, but basically it will print out the words for us and then error out. When I hit run, there we go, it failed at the end, but whatever: a woodchuck could chuck as much wood as a woodchuck would chuck if a woodchuck could chuck wood. We get all the words. But that's also something for you to be thinking about: how do you deal with this stream as it's coming in? Because remember, as a computer professional, simply getting this response is not good enough; you're going to have to figure out how to turn it into some kind of data the user is going to care about. Does it get dumped into a database? How does it get printed out onto a computer screen? That type of deal. But this basically gives you the idea of how the stream works.
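Here is a sketch of that streaming loop, with a small guard added so it doesn't error out on the empty chunks the way the demo did. The same pre-1.0 openai package and placeholder key are assumed:

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "How much wood could a woodchuck chuck "
                                        "if a woodchuck could chuck wood?"},
        ],
        stream=True,  # send the answer back piece by piece as it is generated
    )

    for chunk in completion:
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:  # the first and last chunks carry no content
            print(delta["content"], end="", flush=True)
    print()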
So now, here is the final demo, and this is going to show you why the internet is about to go to hell. I don't know what's worse than a dumpster fire, but if the internet is currently a dumpster fire, it's going to get even worse. You've probably heard from all the SEO marketers out there; they've been trying to figure out how to pay the rent, and they've realized AI marketing is the way to go. They used to SEO-optimize all these web pages, and that turned the internet into a disaster, and now the idea is: hey, why don't we have artificial intelligence write the blog posts? Why pay Bangladeshis 50 cents per blog post to spam the internet that way when we can pay ChatGPT a penny, a literal penny, per blog post and really customize how we're going to spam the hell out of the internet? Basically, our little script here is going to create a little auto-blog for us. When we finish this script, we are going to write literally to a web page: there's going to be an autoblog.html, we write to this web page, we open it up in Firefox, and we will see a web page. Now, normally, if you're in a real production environment, you would want to dump the responses that come back into a database system; if you're using WordPress or something like that, you would dump them into the MySQL database, or Postgres, or whatever you have on the back end. Here we're simply going to write to a file, because we're not going to get into all that mess today.

So anyway, with this, we import the openai module as we did before, we have the API key as we did before, and this is where we create a list of titles. I want three different blog posts written, and, me being me, they're just kind of snarky: why fish run, why cats fly, why bats swim. And it is interesting: ChatGPT will tell you why fish run, why cats fly, and why bats swim. What's going to happen is that we iterate through this title list, and it is literally going to create a blog post for each of these. We come down here and create a file: we open autoblog.html with "a+", which means we either append to the file or create it if it does not currently exist. Then we do that for loop I talked about before: for every item in the title list, we go through this loop, all the way down to the file write. The completion looks like we had it before, openai.ChatCompletion.create, with the model like we've had before, and down here it's very similar. The system role content is "You are a journalist," because I want this done in the voice of a journalist. Oh, remember when you trusted journalists? Anyway, that's another show. Then here we have the assistant role. You can only have one system role and one user role, but you can actually have many assistant roles; for this I'm just going to use one, because that makes my life easier, and what I'm doing with it is restricting this to a 250-word blog post. You do have to be careful with this: if you don't put an assistant message here, it may give you an insanely long post, or it may give you an insanely short post, so you have to think about how many words you want this thing to respond with. It'll be horseshoes and hand grenades: you might get 200 words, you might get 240 words, you might get 150 words, but basically you're just giving it a limit here. This can be very useful. At Silicon Dojo we did a class last night where we were using AI with speech, text to speech, and with that whole system, you get the text back from OpenAI, but then you actually have to process that text and turn it into an audio file, and then you have to play that audio file. If the response is long, well, the longer the response, the longer it takes to process, and you can have a lot of lag. So that's one thing to be thinking about with this assistant message: do you need 250 words, 500 words, maybe 10 words, maybe 25 words, maybe two sentences? You can put all that kind of stuff in here. On top of that, say you want a list of information back: you could say "I want this in a Python list," and then you could have additional assistant messages saying the format should look like this, the naming convention should be this, blah blah blah. That's the kind of stuff you can do with assistant messages, and you can play with it.

Then we come down to the user role, and for the content you'll notice there are no quotation marks; you just put x. x is the item, the loop variable, so basically we're just putting the variable in: every time it loops through, it puts one of those titles in and sends it off. Then we print the title out onto the console screen, just so we see what it is, and we print out the actual response onto the screen so we can verify it's there. Then reply equals the response; all I want is the text coming back from ChatGPT, so that's what goes into reply. Then what we're going to do down here is format the text with HTML p tags. When this comes back, it will have newlines in it, those \n characters. Whenever you're dealing with Python, or any programming language, something to keep in mind is the formatting of the text: HTML uses tags, p tags, h1 tags, that type of thing, for formatting, while normal ASCII text uses escape characters like \t and \n, where \t is a tab and \n is a newline. That's how it deals with formatting.
So basically, what I'm going to do here is take reply; this reply is going to come back with newlines in it, and I want to split reply wherever there's a newline. reply is a string, and we split that string on the newline character into the different values of a list, and assign that back to reply. Then we have post; post is what's going to get written to the HTML web page. Then: for y in reply. reply is now a list, so for each item in the list, post is going to equal an f-string: whatever was in post before, plus y wrapped in p tags. What this does is take anything that had a newline character before it and essentially wrap it in p tags, and that gets assigned back to post. So what we're doing here is: whatever came previously, add to it this line with p tags around it. Then we do file.write: for each one, x, the title, goes inside the h1 tags, and then we put the entire post. And once we get to the end of this whole massive thing, we close the file.

So with this, I can now hit go. It's going, and it's running, and it'll take a minute, because it's doing three different posts, so it might actually take a couple of seconds. Okay, so that is the first post there: why fish run, and it gives us why fish run. That's the second post there: why cats fly, and it gave us why cats fly. "Have you ever seen a cat fly? No, I don't mean in an airplane or helicopter; I'm talking about a cat taking to the sky on its own, gracefully soaring through the air." Anywho. And if we scroll down, it has finished why bats swim: "One might think bats are exclusively aerial creatures, but..." Anyway, what you'll notice here are the newlines: this is coming back as ASCII text and we're getting the newline characters. If I simply wrote this to an HTML web page as it is, it would be one really, really long line and it would be worthless, but because we put everything within the p tags and the h1 tags, we can now get a web page. I double-click on this, and as we can see: why fish run, why cats fly, blah blah blah. You can see this is actually a web page, and the text reflows and readjusts itself as you change the size of the screen. Why fish run: "Although fish are primarily adapted for life in water, there are certain species, like mudskippers and walking catfish..." blah blah. Why cats fly: "Additionally, cats have a unique skeletal structure that enables them to rotate their bodies midair." And why bats swim. So again, imagine when the SEO experts out there start understanding how to use this API. Literally imagine when you could write an entire blog, not a blog post, an entire blog, literally every hour, something along those lines. Imagine when those SEO marketers stop getting the Bangladeshis to write 20 blog posts per day and start publishing 20 entire blogs, with a thousand posts each, per day. That is the power of being able to use the API and what it can do.
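Here is a sketch of the whole auto-blog script along the lines of what's described above, with the same pre-1.0 openai package and placeholder key, and the file handling tidied up slightly with a with block:

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    titles = ["why fish run", "why cats fly", "why bats swim"]

    # "a+" appends to autoblog.html, or creates it if it does not exist yet.
    with open("autoblog.html", "a+") as blog_file:
        for title in titles:
            completion = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",
                messages=[
                    {"role": "system", "content": "You are a journalist."},
                    {"role": "assistant", "content": "Write an approximately 250 word blog post."},
                    {"role": "user", "content": title},
                ],
            )
            reply = completion["choices"][0]["message"]["content"]
            print(title)
            print(reply)

            # The reply is plain text with \n newlines; wrap each line in <p> tags
            # so the browser shows paragraphs instead of one long run-on line.
            post = ""
            for line in reply.split("\n"):
                if line.strip():
                    post = f"{post}<p>{line}</p>"
            blog_file.write(f"<h1>{title}</h1>{post}")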
So there you go: now you know how to use the ChatGPT API. What I showed you today works with 3.5, works with 4, works with the 8K model, the 32K model, the whole nine yards; essentially all you do is change the model name to whichever model you want to use. Just do remember that once you start using the GPT-4 models, it's going to start costing you a little bit of money, so do be very careful about that.

Again, what I think is really exciting about this is that by using the API, you can now make this an integral component of whatever system you're going to be creating. So many folks right now are using the ChatGPT web interface, and, oh yay, they're using AI. Who the hell cares? Great, you're using sort of a better, and let me be clear, sort of a better, version of Google, depending on how you define it. It's not really that interesting. Oh yay, you can have it create an agenda for your next meeting; oh yay, you can have it create a marketing plan. Normies think that kind of stuff is exciting; it's not so interesting as far as I'm concerned. What's really cool is that you can now make ChatGPT, make OpenAI, an integral component of your projects, so you can do a lot more interesting things.

Again, imagine emails. So many times when people talk about AI right now, it's that AI can write emails, so somebody plugs some information in, gets a response, copies and pastes it. Yay, whoopee, who cares. What I think is more interesting is this: imagine you have some kind of alerting system for your infrastructure, for environmental controls, with all kinds of stuff going on; a water sensor trips, a CPU goes over 100%, that kind of thing. Now imagine you code out your notification system, who is supposed to be communicated with at what times, that type of issue. One of the problems you can run into is how you give your administrators or your end users information they understand and can read fairly reasonably. What's cool about this with ChatGPT is that you could actually dump a whole bunch of information into a request and say, hey, turn this into some human-readable format that's easier to understand, and it can spit out a very concise little email alert. So imagine CPU fans or whatever going off, and you say, turn this information into an email; once it sends that response back to you, you take that response, and you use SendGrid or whatever email service provider you're using to automatically send the email out to the appropriate person. Or imagine you have issues going on with your system and you want to send a text message, an SMS: you go up to ChatGPT, you say turn this into something more human-readable, and then you use Twilio to send out the text message. Or even something like a phone call: OpenAI gives you back text, you use some kind of text-to-speech module with Python or whatever else to turn that text into a voice, somebody picks up the phone, and essentially this computer is talking to them, telling them what's going on. That, I think, is cool. Imagine that kind of thing being done.
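Here is a sketch of that alerting idea, with made-up monitoring data and the same assumptions as before. Actually sending the result through SendGrid or Twilio is left out, since that is a separate topic:

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    # Hypothetical raw data pulled from a monitoring system.
    raw_alert = "host=web03 cpu=101% fan2_rpm=0 water_sensor=tripped last_contact=420s"

    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a systems administrator."},
            {"role": "assistant", "content": "Write a short, plain-English alert email "
                                             "with a one-line subject."},
            {"role": "user", "content": f"Summarize this alert data:\n{raw_alert}"},
        ],
    )

    email_body = completion["choices"][0]["message"]["content"]
    print(email_body)
    # From here you would hand email_body to your mail or SMS provider
    # (SendGrid, Twilio, and so on) to actually deliver it.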
Again, you can have ChatGPT make decisions; we're going to have a class on this coming up, I still have to create it. What's interesting is that it doesn't really make decisions, it doesn't really understand how decisions work, and again, it's not actually intelligence, but it does understand statistical relationships, so what's kind of interesting is that you can actually use it as a rudimentary brain for your system. In that class we're going to do things like play tic-tac-toe, or play simple games: basically, you can feed it what your game looks like, you can feed it the previous moves, and then you can ask it which move it would like to make; it does a little statistical analysis and then it gives you a move back. That is where you can use ChatGPT, use OpenAI, to start making decisions.

One of the things I want to play with, and don't quote me on this, if it doesn't work, don't blame me, is IoT: little robots and that type of thing. One of the issues you get into with moving robots around, let's say you're doing lawn mowing or vacuum cleaning or that kind of thing, is that you have this room and you're trying to figure out the best route to take, and when your little vehicle needs to turn around. Well, if you turn your room, your environment, into an array, a very, very large array, one of the questions is: could you just send that array up with a basic understanding of how your robot works and have ChatGPT figure out the code? Oh my God, guidance systems for robots are a pain in the ass; ChatGPT, you do it. Again, a couple of lines of ChatGPT code might save you weeks of trying to figure that crap out on your own.

So that's some of the interesting stuff that can be done: feeding in all kinds of information, pulling from a weather API, pulling from your calendar, different things. Everybody wants to work from home now, and the point of working from home is so you can do cool stuff. So let's say you want to do a hike. It's like, oh, I really want to do this hike, I'm not sure when I'll be able to do it. Imagine coming up with a system that pulls in from your Google Calendar, pulls in from a weather API, pulls in some other information, and you just put in "when should I do this hike?" It sees everything, figures everything out, does a little statistics, and says, Thursday afternoon, you should do the hike. That's some of the kind of cool stuff you can do, pulling in information from other APIs and other data sources. I think it's really interesting and should be fun to play with.
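Here is a toy sketch of that kind of decision-making, with hard-coded stand-ins for the calendar and weather data you would really pull from their APIs (all of the data below is made up), and the same openai package assumptions as before:

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    # Made-up stand-ins for data you would pull from a calendar API and a weather API.
    free_slots = ["Tuesday afternoon", "Thursday afternoon", "Saturday morning"]
    forecast = {
        "Tuesday afternoon": "rain, 55F",
        "Thursday afternoon": "sunny, 68F",
        "Saturday morning": "thunderstorms, 60F",
    }

    details = "\n".join(f"{slot}: {forecast[slot]}" for slot in free_slots)

    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a scheduling assistant."},
            {"role": "assistant", "content": "Answer with one of the listed time slots "
                                             "and a one-sentence reason."},
            {"role": "user", "content": f"I want to go on a hike. My free slots and the "
                                        f"forecast for each are:\n{details}\nWhich should I pick?"},
        ],
    )

    print(completion["choices"][0]["message"]["content"])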
So anyway, that's all we've got for the class today. Again: be careful, be careful, it can get expensive. I will remind you that ChatGPT model 4 can cost you almost $4 every time you run it if you do something stupid; if you put that into a while True loop, you can become poor really, really quickly. Be careful about this. I know I keep bringing this kind of stuff up, but this is why things like architecture are so important. I know in the modern world a lot of programmers don't really think about architecture; things like default gateways and all that kind of stuff seem so antiquated, public cloud versus private cloud, all that kind of stuff, oh, that's not what developers worry about. But they should, but they should, because with these systems, and with the price point of these systems, sometimes just a little bit, just a teeny tiny little bit, of architecture design might save your company a metric crap-ton of money. And remember, when your company runs out of money, you no longer have a job. That's the whole thing. A lot of modern programmers like to be activists; they protest their own company, which is what it is. Here's the thing: when you protest your company and your company has a lot of money, they can make the decision about whether they're going to keep you or fire you. If your company runs out of money, yeah, protest all you want; there's no money for you. Oh, you want a severance package? Well, you should have thought about that before you did the while True loop on a GPT-4 model. That's where your severance package went, and your buddy's severance package, and your boss's severance package. Again, four dollars per request is brutal. So anyway, as always, I enjoyed teaching this class, and I look forward to seeing you at the next one.
Info
Channel: Eli the Computer Guy
Views: 4,774
Keywords: Eli, the, Computer, Guy, Repair, Networking, Tech, IT, Startup, Arduino, iot
Id: sOBtexyC34Q
Length: 65min 59sec (3959 seconds)
Published: Thu Aug 24 2023