ChatGPT API and Python - Hands-On Class

Captions
Welcome back. As you know, I am Eli the Computer Guy, and this is Silicon Dojo: authority-less, gatekeeper-less, free-to-the-end-user, hands-on technology education here in Asheville, North Carolina, that empowers our students to do whatever the hell it is they want to do. This video obviously isn't actually hands-on. What happens is I run these classes here at Silicon Dojo with students, lab computers, the whole nine yards, and the videos that go to YouTube or whatever other platform are simply recordings of that same class, so you can follow along at home.

One of the big overarching concepts with Silicon Dojo is our train-the-trainer program. When we create and present education, we're not simply trying to teach students the material for the day; we're trying to give them the ability to train other folks. We have our workbooks, we have this video, and you can take any of it, use it as-is or modify it as you see fit, and provide the same type of technology education in whatever environment you're in. If you'd like to teach people in your workplace about something like ChatGPT, or teach a Boy Scout group, a 4-H group, a kids' group at temple or at church, please feel free to take this material and actually teach people in your physical environment.

Now, we are free to the end user, but that doesn't mean any of this is actually free. Things cost a lot of money: rent costs money, headsets cost money, cameras cost money, computers cost money, and do you know how much a chair costs? The important thing is that at Silicon Dojo we're trying to abstract the financial component of education out from the actual education, to find a way to fund education where it isn't necessarily the students footing the bill all of the time. What you find in the real world is that the people most able to pay for education are the people who least need it. It's kind of funny: when I was eating ramen noodles and barely getting by, I had to spend three to ten thousand dollars on tech boot camps; now that I'm doing quite fine, I just go to Google and research whatever I need. And the people who most need education are many times the least able to pay for it. So we're trying to separate the financial component from the educational component, and one of the things we're doing is crowdfunding. There should be a Donorbox link down below or somewhere around this video. If you find these videos and this education valuable, and you're able to throw some money into the pot every month, that would be absolutely lovely. We quite literally work for tips, so if you want this type of program to continue, please throw some money into the pot.

A tiny second of advertising for something we're doing here at Silicon Dojo in Asheville: we're starting fireside chats. One of the big problems for new people trying to get into the technology industry is that they go to boot camps, get certifications, get degrees, but they don't really understand how the tech industry works: who you talk to, how you solve problems, how you build a company, how you build a startup, how funding works, all of those brass-tacks issues that frankly you don't get from a boot camp program. So we're starting a fireside chat series. The idea is that we bring in real technology professionals, real founders, real entrepreneurs, people who run things like ISPs and MSPs, the whole nine yards, and we sit down for anywhere between 30 and 60 minutes and have a conversation with them to better understand how they're able to provide the services they provide. Our first fireside chat is May 9th, a Tuesday, running from about 6 to 8 PM. I'm going to sit down and do a relaxed interview with Jon Jones, CEO and co-founder of Anthroware, a software development company here in Asheville, and once that relaxed interview is done we'll open it up to the audience for questions. So if you're interested in learning, not TCP/IP, not a specific programming language or a specific function, but how you offer deliverables to end users and actually provide those deliverables, think about coming to these fireside chats. I'm hoping this will be a monthly program. You can go to our meetup group, Silicon Dojo on meetup.com, RSVP, and come on down. We're going to try to record it, because I know the question is going to be "are you going to record it and put it on YouTube?" We're going to try. Have I told you what a pain in the ass audio is to deal with? I'll have video cameras there; whether the production quality is good enough to put on YouTube, we'll find out on May 10th. Anyway, if you're interested, please come and participate. I think it's going to be an awesome thing.

Before we jump into this class, a bit of a personal note, because this is my actual lived experience from the past week. My wife had to have major surgery on Monday. I'm filming this on Thursday, we did the class on Tuesday, and oh my golly, the recovery process sucks. Before the surgery the doctor said she wouldn't be able to do any exercise for two weeks, and you think, okay, that doesn't sound too bad. Then, as I'm taking her home from the hospital, they tell me they basically don't want her to leave the house; when they say no exercise, they mean walking down the sidewalk is too much effort, because it might push her blood pressure up and cause problems. So my wife is stuck in the house for two weeks, the whole healing process is a minimum of six weeks, and it goes on past that. We knew she had to have this surgery, but we thought it would be in the middle of summer; a couple of people canceled their surgeries, so she was able to have it earlier than planned. So she had it on Monday, and all of my plans got thrown into disarray. I was creating this ChatGPT class, I was doing many other things, and all of a sudden my wife has to go into surgery a day and a half before I'm supposed to teach it. It became a disaster. While she's in surgery, I'm sitting there typing up the workbook so my students can actually go through the class. I teach the class, it runs until about ten o'clock at night, and because my wife can't do a lot of things right now, I keep working until after midnight and end up going to the grocery store to pick up ice for her injuries. It's just been an absolute hell of a thing.

I'm not looking for sympathy here; if you've followed me for any length of time, you know I really don't give a crap about sympathy. Why I bring it up, as a lived-experience example, is that I've noticed a lot of people nowadays, when they talk about bosses, owners, teachers, people in authority, have this idea that folks in authority should be almost superhuman. I see this with bosses a lot: "the 50 things a boss needs to do to be a good boss," and I look at that list and go, wow, that's exhausting; I guess I'll never be a good boss. One of the big issues in the modern world is that we often don't realize the people we're interacting with have more going on outside the frame than we see. When you go to your boss or your professor, not only are they interacting with you, and with their own managers, bosses, and stakeholders, but their kid may be a drug addict, their spouse may have just died, they may have just found out they have cancer. So when you're sitting there complaining, realize that what's going on in that authority figure's mind might not be hyper-focused on whatever issue you think is the priority.

I think about this with one of my viewers from back when I was doing livestreams: a computer science student who was in Syria during the civil war. Is it still going on? Things keep going for so long we forget whether they've stopped. He was complaining about his professors: "Eli, my professors are so bad at my university, what do I do?" I told him, the first thing you do is go in and thank your professor. Your professor is trying to teach you, quite literally, during a civil war. I don't care how bad they are; put an apple on the desk and say thank you very much. Beyond that, if that's not good enough for you and you obviously want more, think about what you can create and how you can add value. If the professor isn't teaching you everything you feel they should, how can you learn the material and start a tutoring group? Again, with Silicon Dojo we're about to start doing fireside chats; if your educational facility isn't providing everything you need, think about creating your own fireside chats, or your own Silicon Dojo, or whatever else you can add to the environment, rather than putting additional burdens onto authority figures who may honestly already be struggling with a hell of a lot of stress and things that you don't see.

I know a lot of people, as soon as I say "your boss has personal issues," start saying, "well, I have personal issues too," and to be clear, I think everybody should be nicer to each other. I think about this with Thailand. I've traveled through Thailand a couple of times, and I was talking to somebody about the Thai language. When two Thai people meet, they say "sawasdee ka" or "sawasdee khrap," one for women, one for men. I said, oh, so "sawasdee khrap," that's like "hello." The interesting thing is he sat back and said, no, it's not "hello"; it's a greeting, but it's more like "yes sir" or "yes ma'am." I always thought that was an interesting concept. The Western world is very hierarchical: the subjugated and the subjugator, the authority figure and the lesser, the master and the slave. One thing to think about in this modern world: what if we thought about the world more like "sawasdee ka, sawasdee khrap"? What if both sides said "yes sir," both sides looked up to the other as the person to give deference to, and we all tried to be a little nicer to each other? Just one of those things to be thinking about. Because, yeah, surgeries are so much fun.

Now, before we jump into this ChatGPT class, let me talk about our lab environment for a second. Here at Silicon Dojo in Asheville, North Carolina, our lab computers are 2012 MacBook Pros with Ubuntu installed on them, and I find them to be absolutely terrific lab computers. If you're dealing with server operating systems and such, they have a hardwired network card already built in. Ubuntu by default has the proper drivers for the webcam on the 2012 models, which isn't necessarily the case with other model years, so these 2012 MacBook Pros are just really great lab machines: folks can come in, do these projects, and everybody is on the same type of system. We also install SanDisk solid-state drives in them to make them a little bit faster.

This is a very important thing to think about if you're going to follow along with these classes in your own environment, or if you're thinking about creating your own Silicon Dojo. My buzzword for the next decade: I barely get paid to train you folks; I sure as hell don't get paid to troubleshoot your issues. One of the things I see, especially with coding projects and APIs, is that different operating systems deal with things differently, and if you have ten students with ten entirely different computers, it's going to be a disaster from a troubleshooting standpoint. Again, we use Ubuntu on all of our systems and Ubuntu works like a champ, but tkinter is not installed by default on Ubuntu, so if you're dealing with an Ubuntu system you have to make sure you install pip before you install the other Python packages, and you have to install tkinter on its own, and it has its own little quirks. The issue is, if you go over to Mac (I was using a 2018 MacBook Pro until a couple of days ago), if you wanted to use tkinter and you installed VS Code, VS Code's default interpreter was something like Python 3.8 while the current Python was 3.10. So if you ran Python scripts from the terminal on an Intel MacBook Pro, they worked fine; if you ran them with the default configuration from VS Code, they would fail until you went in and changed the interpreter, which was its own thing. And then I just got an M2 MacBook Pro, which I love. Realize that my day-to-day computer is an M2 MacBook Pro and our lab computers are 2012 MacBook Pros: use the computer that solves your particular problems. I got the M2 up and running a couple of days ago and it has its own quirks; every computer and every operating system has its own quirks and issues.

So I would say: if you're going to be learning technology, get a lab computer and use it for your learning and training, so you have a good standard configuration and you're not failing because of troubleshooting. When your code fails, is it failing because you screwed up the code? Because you misunderstand Linux or operating-system permissions? Because you installed a security suite that thinks your code is a virus and shuts it down? I want my students to know that if their code failed, it's because they screwed up the code. They're not going to have to go do troubleshooting, and again, I sure as hell don't want to do any troubleshooting either.

On prices: we buy these MacBook Pros from Back Market; we've bought literally about fifteen of them there (they're not an affiliate or anything). The price for these 2012 MacBook Pros fluctuates. Currently, today, it's $135, and it's generally between $135 and $150. You throw a solid-state drive in there and it works great. The standard configuration comes with 4 GB of RAM, which has honestly worked fine for our projects so far, but you can upgrade to 16 GB for probably another $50. So depending on what you're looking for, you can get one of these lab computers up and running for somewhere between $160 and $230. Definitely get a lab computer and use it just for your projects.

The final bit of bookkeeping before we jump into this ChatGPT class: make sure to print out the workbook on paper so you can scribble, make notes, and follow along with what I'm trying to teach. I know in the modern world everybody wants everything to be digital, but frankly, sometimes dead trees are very useful. Whenever we run a class here, we print out a copy for every single student so they can follow along and take notes as we do the class. So when you see me pick up this wad of paper, this workbook is what I'm looking at.
The link for it should be somewhere around this particular video; you can also go to silicondojo.com, find the information for this particular class, and download the workbook and all of the code examples from there.

Okay, are you ready for some ChatGPT? This is going to be some exciting stuff. A lot of people nowadays, when you read the media reports about ChatGPT, are going into the user interface provided by OpenAI and asking ChatGPT questions there. That's cool, that's interesting, that's neat for the tech reporters, but we are real technology professionals, so we want to write our own code. The great thing about the ChatGPT API is that it allows us to send queries or make requests to the ChatGPT service, get the response back, and then do with it as we see fit. So we're going to go through the first lessons showing you how this API works, and after that we're going to build some real-world examples. We're going to have a little conversation app: we'll create a GUI app using tkinter and have a conversation going back and forth, where you can ask ChatGPT a question and then ask follow-on questions based on the response. We're going to create an auto-blog, yes, something that is going to truly destroy social media and web publishing as we know it: we'll put together a list of prompts, say five of them, and our code will do a for loop through those prompts, feed them to ChatGPT, and ChatGPT won't simply provide a blog post, it will provide a blog post formatted in HTML. We'll simply dump that into an HTML web page, and within 10 or 20 seconds you'll have five unique, newly created blog posts that you could literally publish to the internet if you wanted to. We're also going to do a storytime app: if you don't want to deal with your kids at the end of the day because you have more important things to do, like watching Netflix, you can type a little prompt into the app we're going to create and ChatGPT will literally tell your kids a story out loud. We request a story from ChatGPT about whatever we ask for, take that text, send it through the Google text-to-speech API, and Google then reads the story to our snot-nosed little brats while we go off and do whatever it is that's more important than doing the kids' stuff. That's the kind of cool stuff we're doing today, and that's what makes the ChatGPT API really very interesting.

Now, if you're going to use the ChatGPT API, it's important to understand that the web interface you've been seeing on the news is not the same as the API interface. The first thing you want to do is go to openai.com. From there you do something like click on API reference, and on that page, over on the left-hand side, it will say either log in or create account. You go over there, log in or create an account, and that is what you should be looking at; if you see an interface where you can type stuff in, you're in the wrong place. Once you create an account, you'll get $18 in credits, at least as of right now (this is April 20th, 2023, I think), usable for the first three months. That's actually a lot of credits when you're simply doing text communication with ChatGPT; with $18 you can write books and books and books off of it.

With ChatGPT, the pricing model works on something called tokens: 750 words equals approximately 1,000 tokens, and right now 1,000 tokens cost $0.002. That means 1,000 tokens cost a fifth of a cent, so you can get 5,000 tokens for one penny; you can do a lot with that. I've been doing a lot of testing, I've been running these classes, my students have been doing these classes, and up to this point I think I've used about five dollars. Where it does start to cost you real money (we're going to do a class on this later) is when you start using DALL-E; when you start trying to create images, that will run through your money really, really quickly. But as far as this text back-and-forth goes, it should be really inexpensive.

The one thing I will warn you about with this whole token thing is that it's something tech companies do to obfuscate how much you're going to be paying for the service at the end of the day. If they said a thousand words cost a penny, it would be very easy to budget off of that, and if you can budget off of that, you can go to the CEO with the approximate bill, and the CEO can go, "oh my God, that's expensive, no." If you give the CEO good information, the CEO is likely to say no to you. So what these tech companies do is obfuscate. Encryption means you can't read something at all; obfuscation means you can read it, it just gets really confusing, makes your brain and eyeballs hurt, and you give up trying to understand it and go about whatever you were doing.

Here's the issue you can run into. For us, in an educational lab environment, $18 is so many tokens that it doesn't matter; you're not going to hit the $18 limit. But if you do this work and then put it into a production environment, into Travelocity or Khan Academy or Amazon or something like that, and you actually start hammering the hell out of the API, those fifths of a cent times God knows how many users can turn into a metric crap-ton of money really quickly. And if you're one of those people who wants to make AI the core of your infrastructure, and you make it the core of your infrastructure before you realize it's going to cost you twenty thousand dollars a month, you can run into some problems. So do make sure you do your calculations and your math correctly before you put this into a production environment; a rough back-of-the-envelope estimate like the sketch below is worth running first.
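To make that concrete, here is a rough cost sketch in Python. The traffic numbers are made up for illustration; only the token-to-word ratio and the per-1,000-token price come from the discussion above.

    # Rough production cost estimate (traffic numbers below are hypothetical)
    tokens_per_request = 1000        # about 750 words of prompt + response combined
    price_per_1k_tokens = 0.002      # dollars, per the pricing discussed above

    requests_per_user_per_day = 20   # assumption: searches, feeds, recommendations...
    users_per_day = 10_000           # assumption

    daily_cost = (users_per_day * requests_per_user_per_day
                  * (tokens_per_request / 1000) * price_per_1k_tokens)
    print(f"Daily: ${daily_cost:,.2f}   Monthly (30 days): ${daily_cost * 30:,.2f}")
    # -> Daily: $400.00   Monthly (30 days): $12,000.00

Dirt cheap per request; not so cheap at scale.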
What you've really got to think about is: okay, 750 words is going to cost me a fifth of a cent; how many users are going to hit my site per day, how many queries are they going to make per day, and what actually counts as a query? It's not simply the user typing something in. Every time a recommendation algorithm gets triggered, and that triggers ChatGPT, that might be a query. If you have something auto-updating in the background, imagine your users go to a page with a couple of auto-updating feeds: every time a feed updates, that might hit ChatGPT, and that might cost you a fifth of a cent or half a cent, times five panels on that web page, times 10,000 visitors a day, and holy hell, something that was dirt cheap in the lab becomes just brutal in the real world. So do be careful about that.

The other thing to think about is the free tier. The free service is pretty good; basically the only issue I ran into is that they now have a three-requests-per-minute rate limiter. A request is when you make a request to ChatGPT and they send you back a response. The interesting thing is that when I started doing loops (for x in list, automatically creating those blog posts and some of the other things we're doing), I was hitting the API five times in 30 seconds, which is obviously more than three per minute, and things started failing out. If you're going to take this material and teach classes with it, and you have ten people in your class who all start running their scripts at the exact same time, you will hit that three-requests-per-minute rate limit. Just keep that in mind; it's very easy to upgrade your account.

Once you've signed up, you'll see that little personal menu over there (yes, you can see the fake email address; I'm sure somebody's going to say "well, I know your fake email address now." Okay, good for you). You go to View API keys, click on it, and you'll see the API keys that have been created. When I want a new API key, I click to create a new key, give it a title, and create it. This is the key we're going to plug into the code; if you copy and paste my code and it fails because you didn't give it an API key, this is what it's looking for. This is the only time you can copy this particular API key: clicking on it lets me copy it, and as soon as I click Done, the dashboard only shows the last four digits so I can keep track of which key is which, but I have no other way to access the full key past that point. The only things you can do afterward are create new API keys or revoke the keys that have already been created.
Now, I know some people at home are going to say, "look, Eli's showing us his API keys, he's stupid." No. As soon as I finish recording this class, I'm going to go in and revoke the API key you see in my code, and it is going to go bye-bye; you won't be able to use it. This is something to think about once you start designing a larger infrastructure: managing API keys. All of your code that makes API calls requires an API key to access the service, and from a security standpoint you're going to want to revoke and refresh API keys every once in a while. So one of the things you do have to think about is what happens when you revoke an old API key: is your entire infrastructure going to fail? That's a class for a different day. Anyway, that's the basics of creating your account with OpenAI and getting an API key.

If you do pay for the OpenAI service, you get more than three requests per minute, which might be useful for you, and one of the nice things is that for billing you can set usage limits. If you want to make sure you don't get a ten-thousand-dollar bill at the end of the month because you accidentally triggered a while-True loop, you can go to the usage limits page. The hard limit I set for my entire account is twenty dollars per month and the soft limit is ten dollars a month: when I hit ten dollars I get an email notification, and when I hit twenty dollars it absolutely stops. If you're going to be teaching this to your students, make sure you set that hard limit, because one of your students will go "what's a while-True loop?" and they will find out on your credit card. So set that hard limit. And with that, those are the basics of creating the ChatGPT account, the pricing, that type of deal.

Now, setting up your Ubuntu environment so that all of today's labs work is relatively simple. The first thing you want to make sure you've done is install tkinter. Tkinter is the GUI framework that lets us create graphical user interfaces with Python, and for that all we do is type in sudo apt install python3-tk. You put that command in, you type in your password (one two three four five six around here), and for me it says it's already installed, but you'll want to install it. The next thing you want to do is install pip, the package installer for Python; Ubuntu for whatever reason doesn't have it installed by default, so: sudo apt install pip. Again, it tells me I've already installed it, so you'll be fine past that.

Then we need to install the OpenAI module for Python, and this is one of the big things to understand. A lot of people, when they start dealing with artificial intelligence or a lot of these modern systems, think, "oh my golly, it's so complicated, I'll never be able to understand it." It's one of the sad things: you see smart, motivated people look at something like artificial intelligence and say, "I could never be smart enough to do that," and one of the things I tell my students is: probably not. Wow, what do you mean? Here's the thing: you don't have to be that smart. Elon Musk is brilliant, whoever is actually running OpenAI is brilliant; are you that smart? Probably not, let's be honest. But here's the thing: AI is hard; APIs are not. APIs are dirt simple, and one of the reasons they're dirt simple is that you can install into Python the packages and libraries required to interact with ChatGPT and OpenAI, so that all you have to do is write five lines of code. Literally, when I start showing you the basics of ChatGPT, we're talking about things you can do in ten lines of code, and most of those lines are frankly copy and paste. The reason we can do that is because we install the module into Python. So what you're going to do is pip3 install openai. Do make sure you don't misspell it: one of the things hackers have been doing lately is creating nefarious Python packages whose names are popular packages misspelled by a little bit, so you type the wrong name, there's actually a package there, you install that nefarious package, and your system gets compromised. Again, another reason to have a lab computer: if your lab computer is compromised, it's your lab computer, not the computer with your online billing and email and all that kind of stuff. Anyway: pip3 install openai. You use pip3 (pip is what you install into Linux, but the command is pip3), you hit enter, and it goes through. For me it says "requirement already satisfied," so nothing else happens; for you it should run through a little installation routine, you say yes, and you'll be good to go.
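If you want to confirm a lab machine is actually set up before class starts, a quick sanity-check script like this (my own addition, not from the workbook) will fail loudly if python3-tk or the openai package is missing:

    # Sanity check for the Ubuntu lab setup described above
    import sys
    import tkinter      # raises ImportError if python3-tk was never installed
    import openai       # raises ImportError if "pip3 install openai" was never run

    print("Python version:", sys.version.split()[0])
    print("Tk version:", tkinter.TkVersion)
    print("openai module location:", openai.__file__)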
And with that we can dive into the code. Our first lab uses the 3.0 DaVinci model. In today's labs we're going to use the 3.0 DaVinci model and the 3.5 turbo model. If you're watching this within my particular timeline, the GPT-4 API currently exists and large companies are using it, but people like you and me don't actually have access to it yet. Before you go "oh my God, this is obsolete before I even watch it," realize that you interact with all of these models in different ways, and the responses you get really are different, so even if you have access to the 4 model there are reasons you might use the 3 model or the 3.5 model. When you look at 3, 3.5, and 4, don't think of the previous model as completely obsolete; think of each model as built with a certain mentality that then changed to a new mentality, which then changed to the latest mentality. In one of these labs I'm going to give the exact same prompt to the 3 model and to the 3.5 model, and you will see that you get significantly different responses back, and that is something you need to be thinking about when you're coding your particular projects.
So let's take a look at this; it's a pretty simple thing. First, import openai: since we installed the openai module with pip3, it should now be available. Then we feed it our API key (and again, this key gets deleted as soon as this class is over); you put your API key here. If you're a little more advanced, you may set your API key as an environment variable. I thought about doing that for this class, but it adds a little more complication for the students, so that's why we're doing it this way. If you understand how to set environment variables, that is the proper way to do it; if you don't understand what I'm talking about, don't worry about it.

Then we create the query: what are we going to ask ChatGPT? Now, me, I'm kind of ornery and kind of curious, and I pull questions out of my buttocks. Why ask a reasonable question of artificial intelligence? I can ask somebody I know reasonable questions. I'm going to ask stupid things, so the question I'm going to ask is: what was the Battle of the Frank? The Battle of the Frank? It didn't exist. I'm asking about a battle that doesn't exist: what answer am I going to receive? This is one thing you need to be thinking about, and it's something I talk about in the technology world: you want hard fails, not soft fails. A hard fail is when the computer crashes, the program crashes, it doesn't work, the emails don't come in, the website doesn't load. You do something, there's supposed to be a result, and you don't get it at all; that's very easy to troubleshoot. One of the issues in the real world of technology is soft fails: you get a result that doesn't look right, you don't think it's the result you're supposed to be getting, but you are getting a result, and that can be a bit of a disaster. I see that being a real big problem with AI, and not necessarily with hallucinations; everybody's been hyper-focusing on hallucinations, but beyond hallucinations, what if it just fills in blanks it shouldn't fill in? The Battle of the Frank: what if it thinks I'm talking about something else, and maps the Battle of the Frank to the Battle of Frankfurt, or the Battle of Franklin, or the Battle of hot dogs, and gives a response about that? Again, this is not hallucination; this is just where it fills in blanks it shouldn't. Maybe you made a mistake; if you got a hard fail, you'd realize you made a mistake, but since it fills in the blanks, it gives an answer that fits. Something to think about.

Anyway, this is the object we're going to create. The variable is response: response equals openai.Completion.create, open parenthesis.
Then, since this is Python, everything inside gets indented. model equals text-davinci-003. prompt equals query: since we're coding, I want to feed things in from variables and variable values; you could do prompt equals "What was the Battle of the Frank?" right here, but I want to feed it in this way so that in later examples we can start dynamically feeding values into the prompt. Then you deal with temperature. Temperature, top_p, frequency_penalty, and presence_penalty are their own worlds: with the DaVinci model you can dial in the quality of the result by screwing around with the temperature, the frequency penalty, the presence penalty, and the top_p. That becomes its own world; go to somebody else to learn all of that. It's for when you're building something for a production environment, you literally need the absolute best result, and you're willing to sit there for hours or days dialing in those numbers. We're not going over that today, so leave those at their defaults.

One of the big ones is max_tokens. Whenever you're using tokens, it's important to understand that both the query and the response count toward the token cost, so if you feed it a long question and get a long response, one issue you may run into is that there aren't enough tokens to finish the full response, and you'll get it only halfway written. One of the interesting things about how this model works, from what I understand, is that it doesn't create the complete response and then send it to you; it creates the response on the fly. So with max_tokens, if you run out of tokens it will literally stop halfway through the response. If you go look at the examples on the OpenAI website, many times max_tokens equals something like 60, and 60 is enough to ask a question and get about five words back. We're going to set max_tokens to 1000, which is about 750 words, and that should be enough for what we're doing here. Think about what responses you want: 750 words? 1,500 words? 3,000 words? If you ask for a 3,000-word response about the Battle of the Frank and you don't give it enough tokens, it's going to fail out. So max_tokens is the big one.

The other thing you'll see, which is kind of weird when you look at the API documentation on OpenAI, is that there's also a stop value. You can set stop equal to some string, and ChatGPT will stop wherever it sees that particular character string: if there's a word you want it to hard-stop on, not skip over, it stops there. I'm not sure why you'd even want the stop value, but maybe you do. One problem I ran into when I was building these classes is that I copied and pasted the example code so I could tweak it, and I overlooked that stop value: in the default code the stop value was set to \n, which means new line, and the weird thing is that the response that comes back from ChatGPT starts with \n\n before the actual text. So what I copied and pasted from OpenAI literally had the stop value be the first thing that gets sent back. Anyway, be careful with the stop value.

Then we're going to print, and first we print the entire response. The response that comes back from ChatGPT is JSON, the data format used to send data for these API calls, and there's a specific way you parse it, so I want to print the entire response just so we can look at that big old thing. After that, I print out just the text we want: when you look at the JSON response, it's essentially dictionaries and lists, so you put in the index for the dictionaries and the lists to get what you want. We print response, go into the choices key, then the zero index of the list inside choices, and then print whatever is in text. That's basically it: when all we want is the text, that's what we print; if we want the whole JSON response, we print all of that. Put together, the whole lab one script looks roughly like the sketch below.
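This is a reconstruction of the lab one script from the walkthrough above, using the openai Python library as it worked at the time of this class (the pre-1.0 interface); the API key is a placeholder you replace with your own.

    import openai

    # Paste your own key here (or, better, read it from an environment variable)
    openai.api_key = 'YOUR_API_KEY'

    query = 'What was the Battle of the Frank?'

    response = openai.Completion.create(
        model='text-davinci-003',   # the 3.0 DaVinci model
        prompt=query,
        temperature=0,
        max_tokens=1000,            # roughly 750 words of response
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0
    )

    print(response)                          # the full JSON-style completion
    print(response['choices'][0]['text'])    # just the text we care about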
So with that, let me hit the Run button. It is now running, and it'll take a couple of seconds. That is one thing to realize: ChatGPT is not the fastest thing in the world, which may or may not matter for your particular application. A lot of times when you're dealing with APIs in the modern world, you're used to them being very, very fast, so think about that user interface and user experience; ChatGPT is fast, but it's not fast.

Okay, I should have cleared this, but anyway: this is my prompt, and this is what came back when I ran the script. This is what we were talking about with the JSON response looking like dictionaries and lists. The opening curly bracket essentially opens a dictionary. Within it we have choices, so choices is a key in this dictionary, and what we want is the text inside it. The value for choices opens a list, and there's actually only one item in that list, so it's at index 0. At index 0 is another dictionary with keys: finish_reason, index, logprobs, text. We go to the text key, and that's why, up in the code, we write choices, then 0 because it's the first item in that list, and then text. That's how you mentally work through how to get the values.

The other important thing is that you get more information back: the finish reason, when it was created, the ID number if you want to do some kind of tracking by ID, and usage, which ties back to that whole token discussion by showing how many tokens you're spending. For this one, the completion tokens (what it cost to produce the completion) were 82, the prompt was 7 tokens, and the total tokens used were 89. That may be of use to you. And then finally we simply print out choices[0] text, just the text on its own.
And this is where, again, it's not a hallucination; it's filling in additional stuff: "The Battle of Franklin was a major battle of the American Civil War that was fought November 30th, 1864, in Franklin, Tennessee. It was a Confederate victory," blah blah blah. So that's the 3.0 DaVinci model. The important things to take away are: this is how you get the response from the DaVinci model, and this is how you parse what's called the completion, the value that comes back, to get the actual text you're looking for. This is different with the 3.5 model.

So now we go to lab two, the 3.5 turbo model. When you start looking at the 3.5 turbo model you think, yeah, that's a lot more modern, that's what modern audiences would want, and yes, it is, but as you will see, the response you get from the 3.5 model also looks like a response made for modern audiences. "As an AI model, I don't have an opinion." Oh my God, the extra crap it has. I'll show you that in a second; you'll understand the joke in a second.

So again, this is the 3.5 model. You'll notice that how we get the response is a little bit different, and how we parse the response down at the bottom is slightly different. import openai, because you have to import the module; openai.api_key equals the API key. Now, I put in two variables with two values: a nationality of "France" and a query of "Who is the president?" The reason I did that is that one of the interesting things you can do with the 3.5 model is nudge the answer ChatGPT gives you in one direction or the other. Why is that useful? It means you can reuse the exact same queries and simply change certain values dynamically so the answer is appropriate for whoever is asking the question. Think about the modern world: you go to a social media site and dump a whole bunch of information into it; it knows what country you're from, what city you're in, all that kind of stuff. What's interesting with this nationality variable is that you could have a script that dynamically determines a user's nationality and then answers the question based on it. Instead of hard-coding "who is the president of the United States" and having to modify it a million different ways, you can simply say "who is the president," feed in the nationality, and it will answer for that particular user.

So we look here: response equals openai.ChatCompletion.create. Notice they modify things slightly: before it was openai.Completion, here it's ChatCompletion, so be careful about that. model equals gpt-3.5-turbo, and then messages. This is where we give roles to ChatGPT 3.5. Think about it this way: you are the director of an artificial intelligence melodrama, and you're telling your premier actor how to play their part. That really is the way to think about it: you are directing the actor that is ChatGPT to play a role. There are three different types of roles in messages, and you're only ever going to have one system role; its content sets who ChatGPT is.
This is who ChatGPT is when it gives the response. Are they a president, a tour guide, a tax advisor, a hobo? What voice are they going to use, what perspective? (What is the perspective of AI? We're not even going to get into where that goes.) So here, the system role's content equals "You are a tour guide"; keep it easy. Then we do assistant roles. There's only ever one system role, where you set who ChatGPT is, but you can have multiple assistant entries to push or pull the answer one way or the other. What we have here is role assistant, content equals "Answer as a " concatenated with nationality and " citizen": answer as a France citizen, answer as an American citizen, answer as a Brazilian citizen. This could be fed in dynamically based on whatever, and it will answer based on that. Then you have the role of user, with content equal to the query; you only have one user entry, and that's the question you're actually asking ChatGPT. So: one system role, one user role, and multiple assistant roles if you want to dump them in. Then we come down and print the response, which is going to be a JSON response with all the information, just so we can take a look at it, and then, to pull out the specific information we care about, we go response, choices, zero, message, content. That's how we get the text for this particular question. The whole lab two script, roughly, is the sketch below.
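Again, this is a reconstruction of the walkthrough above on the pre-1.0 openai library, with a placeholder API key:

    import openai

    openai.api_key = 'YOUR_API_KEY'   # placeholder

    nationality = 'France'
    query = 'Who is the president?'

    response = openai.ChatCompletion.create(
        model='gpt-3.5-turbo',
        messages=[
            # one system role: who ChatGPT is when it answers
            {'role': 'system', 'content': 'You are a tour guide.'},
            # one or more assistant roles: nudge the answer in a direction
            {'role': 'assistant', 'content': 'Answer as a ' + nationality + ' citizen.'},
            # one user role: the actual question
            {'role': 'user', 'content': query}
        ]
    )

    print(response)                                       # the full response
    print(response['choices'][0]['message']['content'])   # just the answer text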
So I'm going to run this, and this is what we get. This is the prompt up here: choices, zero, message, content. I swear to you, if you're new to all this, it does get easier: one, it gets easier the longer you do it, and two, frankly, you can just copy and paste most of this crap anyway. So we get a finish reason of stop, we get an index, a time created, an ID (again, you may want to track that; remember, in the coding world you want unique identifiers for different things so you can track them through the system), the model, and usage: completion tokens were 37, prompt tokens were 32, total tokens were 69. That gives you an idea. And then here, oh my golly, ChatGPT 3.5 truly was created for modern audiences: "As an AI language model, my responses should not be taken as an accurate representation of reality." Cancel culture has gotten so bad, even the artificial intelligence is scared crapless. "However, at the time this response was written, the president of France was Emmanuel Macron."

I do think there's something interesting there. Ha ha, that's funny, that's a joke, but we're here to be tech professionals, to be serious, and there is something important you have to think about as a tech professional with this kind of response. If you're building a system where the user asks a question and it spits back a result, remember that you're going to take this response from ChatGPT and then have to parse it, basically read through it and rewrite it, in order to give the user something that's actually useful and not freaking stupid. One of the reasons the 3.5 model can actually suck is that you're going to have to look at all these responses and learn how to rip out this crap. Your end user does not want to see the AI dithering and worrying about cancel culture; they just want an answer, they want Emmanuel Macron. That's an issue you can run into with the 3.5 model: it's so wordy, so verbose, that it can actually be a worse output at the end of the day, because when you go to turn it into something for the end user, you've got to figure out how to parse it and write the code to give a decent response.

Let's see what happens if we switch to Germany and run the same thing; out of curiosity, does it dither on that too? Yeah: "As an AI model, I do not have real-life experiences or emotions." I can't make this stuff up. Don't cancel me, don't cancel it. "But to answer your question, the current president of Germany as of 2021 is Frank-Walter Steinmeier." And what's weird, and this might be a larger problem for you when you're dealing with these responses, is that it wasn't this bad a few days ago. I've been working with ChatGPT for about a month and a half, since they opened up the API, and a month or so ago you got this kind of dithering when you asked how the world began, how the universe began, those religiously tinged questions; then it would say "I'm an AI, don't blame me," and you can kind of understand that OpenAI doesn't want to get into theology. What gets weird is that they're now doing these responses for "who is the president of a country." So something to be thinking about: if they keep changing these responses, how easy is it going to be for you to keep your code base updated to deal with it?

This is one of the big things. I talk a lot about business and social issues, and I try to talk about politics in a non-biased way, and a lot of times people say, "Eli, stop talking about social issues, stop talking about politics, all we care about is technology." What I want you to understand is that IT, information technology, is tied to society, tied to social issues, and it truly is tied to politics. When you see this kind of dithering coming back as an API response, it's because somebody wrote that so the AI wouldn't get canceled. That political mentality, that social response, then hands you this garbage that makes your life more difficult. So do remember: politics and social issues are not in any way, shape, or form separated from the technology world, and that's one of the things you have to keep your eye on so that hopefully you don't lose your mind.
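Since either way you'll end up parsing these responses before they reach your end user, here is one hedged example of what that might look like: a small helper that strips an "As an AI language model..." style opening sentence. The pattern is my own guess based on the responses seen in this class, not anything official; real responses vary, so test it against your own output.

    import re

    def strip_ai_disclaimer(text):
        # Remove a leading "As an AI (language) model, ..." sentence, if present
        return re.sub(r'^\s*As an AI(?: language)? model[^.]*\.\s*', '', text,
                      flags=re.IGNORECASE).strip()

    answer = ("As an AI language model, I don't have personal opinions. "
              "However, the current president of France is Emmanuel Macron.")
    print(strip_ai_disclaimer(answer))
    # -> However, the current president of France is Emmanuel Macron.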
So now we get to Lab 3, which lets us compare and contrast the responses between the 3.0 and the 3.5 models, to try to determine which model is going to work best for your particular project. All we're going to do is create one query, feed that exact same query into both the 3.0 model and the 3.5 model, and then print out the results to see which is more useful for us.

Pretty simple here: import openai as we did before, feed in the API key as we did before, and put in a query, a deliberately stupid one like we did before: "What was the Battle of Frank?" Who the hell knows. What you'll notice is I named the variables response_3 and response_3_5. response_3 equals openai.Completion.create, because this is the 3.0 DaVinci model: the query is the prompt, temperature is 0, we give it 1,000 max tokens so we get a decent response, top_p stays at 1, the frequency penalty stays at 0, and the presence penalty stays at 0. That's basically the standard default you dump in; the only thing that might be different is the max tokens of 1,000. Then response_3_5, for the 3.5 model, is openai.ChatCompletion.create. Remember, the endpoints are slightly different: Completion for 3.0, ChatCompletion for 3.5. The model is gpt-3.5-turbo, I'm leaving the system role and the assistant role blank so it doesn't get additionally biased, and the user role is simply the query, the question itself.

Then we print "GPT 3.0 says" and just the text this time: response_3.choices[0].text, the text from the 3.0 model, then print "\n". In the text world, \n is a new line and \t is a tab, so we're just adding a blank line between the two to make it easier to read. Then "GPT 3.5 Turbo says" and response_3_5.choices[0].message.content. It takes a second; it's not fast, and in the Amazon world everything has got to be so fast. There we go. And notice you start getting some weird artifacts; I was asked about this in the class we did the other night. Something to think about when you're parsing: you will be sending these responses to end users, and sometimes you get odd artifacts. I think here it's appending "lin," as in Frank-lin, which is why you see that. So GPT 3.0 says "The Battle of Franklin was a major battle of the American Civil War," like we saw before, while GPT 3.5 Turbo says "I'm sorry, there is no widely recognized battle called the Battle of Frank. Can you provide more information or context to help me understand what you're referring to?" Two entirely different answers from ChatGPT.

Out of curiosity, let's go up and type in "Who discovered America?" I honestly don't know what the answer is going to be. Let me clear this... ah, don't do that; always make sure you know where your cursor actually is. Okay, now we run it again to see what the 3.0 model and the 3.5 model say, and whether 3.5 gives us a little dithering.
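While that runs, here is the whole comparison collapsed into a sketch. This assumes the pre-1.0 openai library and that text-davinci-003 is the DaVinci model the class refers to; the key is a placeholder.

import openai

openai.api_key = "YOUR_API_KEY"          # placeholder

query = "What was the Battle of Frank?"  # deliberately vague question

# GPT-3 (DaVinci) uses the Completion endpoint
response_3 = openai.Completion.create(
    model="text-davinci-003",            # assumed model name for "DaVinci"
    prompt=query,
    temperature=0,
    max_tokens=1000,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0,
)

# GPT-3.5-turbo uses the ChatCompletion endpoint
response_35 = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": query}],
)

print("GPT 3.0 says:")
print(response_3.choices[0].text)
print("\nGPT 3.5 Turbo says:")
print(response_35.choices[0].message.content)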
Okay, results. GPT 3.0 says: "The answer to this question is disputed." Oh, interesting. "While it is widely accepted that Christopher Columbus was the first European to reach the Americas in 1492, there is evidence that other Europeans..." and so on. That's one answer. GPT 3.5 Turbo says: "Christopher Columbus is traditionally credited with discovering America in 1492; however, it is important to note that there were already people living in America before Columbus arrived, specifically indigenous people." So again, two different answers. And it is kind of funny, because I'm doing these in real time as I talk into the camera: 3.5 was dithering to high hell and back on who the president of France is, "I'm an AI, don't blame me," but when you ask something that's actually more controversial, it just gives you a response. Why do I keep pointing this out? Because you're going to have to parse these responses, and the responses you get back aren't necessarily even logically consistent. If every single 3.5 answer always came with that "I am an AI, don't blame me" preamble, fine; you could write some Python to strip the preamble before you ship it off to the end user. What I really want you to grasp is that the way these responses are written sometimes doesn't make logical sense, so when you're parsing them for your end user or dumping them into your data store, you are going to have to do a lot of testing to figure out what the responses to your particular kinds of questions look like, so you can handle them appropriately.

So now we get to lab number four, and this is about bias. One of the interesting things to think about with artificial intelligence is: what is the perspective of artificial intelligence? Beauty, or facts, are in the eye of the beholder. How does AI see the world, and how does AI think other people see the world? One thing to really consider when you start building projects that use AI is that, psychologically, humans like to believe computers are right, that they're infallible. If you want to know the answer, what does somebody say? "Go to Google," because there's this idea that when you tippy-tap-type into Google, what comes back is truth. Anyone who's been in the tech field for the past ten years knows that's a load of garbage, but it still feels true. As Stephen Colbert used to say: truthiness. It feels true even if it's not. So one of the interesting things to think about when you develop applications with artificial intelligence is how you can skew the AI's response in the direction you want it skewed. You're going to have users coming to your application, and you may want the answers biased in your particular direction. I'm not getting on anybody's case here, but if you're a Christian and somebody sits down and tippy-tap-types into your AI application, you probably want it biased toward the Christian worldview. You go to the priest, the priest says something, and you think, "I don't know if I believe a priest." The priest says, "Well hey, go to priestai.com and ask the artificial intelligence." You type it in, it gives the exact same response the priest did. Oh my God, it must be the truth.
Democrat or Republican, Hindu or Muslim. I'm just using placeholders, people. But it's important to think about how answers can get skewed, and whether you can, or should, skew them for your particular application. A lot of what gets focused on right now when we talk about artificial intelligence is the bias of the AI itself. Here in my timeline, in April of 2023, one of the big stories is Elon Musk saying that OpenAI is "too woke," that ChatGPT, the OpenAI infrastructure, is coded to be too woke. That's an argument, that's a discussion, but it's not what we're talking about here; it's its own thing. The more curious question is: can you, as the programmer, skew the results in a particular direction, woke, not woke, whatever else? Remember, we're dealing with the API here. In the next couple of labs we'll start creating user interfaces; the user will just see a web form or something like it, and this is what's going on at the back end. So can you add things on the back end to guide the result?

So: import openai as we do, set the API key as we do, and then the query. This one, unfortunately, has gotten less interesting in the past month, and it's important to understand that the AI changes. When I ran this a month ago the response was hilarious and very interesting; it's not anymore, I've got to say. The query is "How did the universe begin?" A big question, especially in the modern world. Then we create a bias: a list of different perspectives, to see how the AI handles them. Democrat, Republican, Christian, or Pastafarian. If I'm one of these kinds of people, how did the universe begin?

We come down, print the query so we can see it, then "for x in bias," a simple for loop over every item in the list. Inside the loop, the response equals openai.ChatCompletion.create with the 3.5 model: the system role we leave blank, the assistant content is "As a" plus whatever the loop variable is (as a Democrat, as a Republican, as a Christian, as a Pastafarian), and then the user content is the query, how did the universe begin. The important thing to understand is that this assistant message is never supposed to be seen by the end user; they don't get to see how you're skewing the results. Then we print a couple of new lines, "As a" whatever it is, and response.choices[0].message.content. Let's run it: how did the universe begin? Come on, be funny. Be as funny as you used to be.
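Here is the bias loop as a compact sketch, again assuming the pre-1.0 openai library; the empty system message and the "As a ..." assistant message mirror what was just described, and the key is a placeholder.

import openai

openai.api_key = "YOUR_API_KEY"          # placeholder

query = "How did the universe begin?"
bias = ["Democrat", "Republican", "Christian", "Pastafarian"]

print(query)

for x in bias:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": ""},
            # the end user never sees this line; it silently skews the answer
            {"role": "assistant", "content": "As a " + x},
            {"role": "user", "content": query},
        ],
    )
    print("\n\nAs a " + x + ":")
    print(response.choices[0].message.content)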
So, the results. As a Democrat: "As an AI language model, I do not have personal political beliefs or a stance... in regards to the origin of the universe, the prevailing scientific theory is the Big Bang Theory, which suggests the universe began approximately..." and so on. As a Republican: "As an AI language model, I do not have personal beliefs; however, the current scientific theory regarding the origin of the universe is the Big Bang Theory. This theory proposes the universe began as a singularity..." So those are two Big Bang answers. The interesting thing is that a month ago, "Republican" actually came back with creationism, which is kind of curious. As a Christian: "As an AI language model I don't have personal beliefs, but I can share with you what many Christians believe about the beginning of the universe based on their interpretation of the Bible. According to the Bible, God created the universe..." So if you're parsing this, you rip out all that preamble and you're left with: you ask the artificial intelligence how the world began, and it says God created the universe. Well, come on now, Little Billy, even the AI knows that.

And then we come down to Pastafarian, and this is what I don't get. I'm doing this live, and there is no dithering. Look at this: as a Christian, "I'm an AI model"; as a Republican, "I'm an AI model"; as a Democrat, "I'm an AI model"; and then for Pastafarian: "As a pasta-based religion, Pastafarianism does not have a specific doctrine or belief about the origins of the universe. Some adherents may believe in scientific theories such as the Big Bang, while others may believe in the divine creation of the Spaghetti Monster's noodly creation myth. Ultimately, belief in how the universe began is a personal decision." Again, that inconsistency matters for the whole parsing issue, and honestly it was funnier a month ago, when it gave the full Pastafarian origin story. But this is the kind of thing to think about: how you can inherently skew the bias in the application you're creating. And if you're not an app developer, realize that the developers sitting at this end, with the back-end code, can skew the responses you're getting.

So now we get to Lab 5, and Lab 5 proves the death of the modern world. It's the end of the world as we know it, and I feel a little bit nauseous, to be honest with you. In this lab we're going to feed ChatGPT a list of topics, have it automatically create blog posts for those topics, format those posts in HTML with HTML tags, and write them out to a text file. We're writing to a text file here because this is a class on ChatGPT and I don't want to get into databases or whatever else, but realize you could be dumping this into any kind of data store. Imagine you have a WordPress website: WordPress is a content management system that generally uses a MySQL database as its back end. With a script like this, if you knew what you were doing, you could literally auto-dump these values into the back-end database for WordPress and automatically create a WordPress blog. That gets more complicated, back-end data stores and all that, so here we're just going to write to a web page so you really grasp what's going on.
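The class sticks to a flat file, but as a hedged sketch of the WordPress idea: rather than writing into MySQL directly, generated posts could be pushed through the WordPress REST API with an application password. The site URL, username, and password here are placeholders, and posting as a draft (so a human reviews it) is my own assumption, not something from the class.

import requests

WP_URL = "https://example.com/wp-json/wp/v2/posts"   # placeholder site
AUTH = ("bot_user", "application-password-here")     # placeholder credentials

def publish_post(title: str, html_body: str) -> int:
    # create the post as a draft so someone can review it before it goes live
    r = requests.post(
        WP_URL,
        json={"title": title, "content": html_body, "status": "draft"},
        auth=AUTH,
        timeout=30,
    )
    r.raise_for_status()
    return r.json()["id"]

# example use: post_id = publish_post("What is Python?", answer_html)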
An important thing to notice here is that we're using the DaVinci 3.0 model, and this is where I point out that the newer models aren't necessarily better, at least for me. Maybe I don't know how to prompt; that is a possibility. When I built this project with the 3.5 model, which would seem to make more sense, it refused to give me HTML tags. I would tell it, and tell it, and tell it, "format this in HTML," and it would just give the response with no HTML formatting. To be clear, maybe I didn't prompt it properly, but I tried about a thousand different ways and it never worked out. What I found is that with the 3.0 model you can literally tell it to give you a response formatted in HTML and it will do it.

So with this, we import the openai module as we do, we give it the API key as we do, then we create a posts list, the topics we want posts about: Python, Go, Objective-C, Dart, and Golang. This list could be getting dumped in dynamically from somewhere else, and this is where you should think about how the modern internet gets destroyed. If you're a tech professional, you've most likely dealt with SEO specialists. Imagine SEO specialists figuring out how to make this work: a script that goes to Google Trends, pulls whatever the most popular trends are into a list, and then every ten or thirty seconds loops through that list to automatically create blog posts. Now imagine one SEO specialist doing that, and you're thinking "that sounds bad." Now imagine ten thousand of them. In the modern world, anyone who wants to be an SEO specialist can create a completely unique WordPress blog with a thousand posts, and they can create another one every thirty minutes; one specialist could create fifty unique blogs with a thousand-plus posts per day. Multiply that by thousands of SEO specialists doing it all at once and you can see why Google will die. It's weird being a tech professional: people say "ChatGPT is going to kill Google" and I say "of course it is," and then they go off about how ChatGPT will be better than Google's search engine, which from a basic standpoint I don't actually think is true. How ChatGPT kills Google is this: when millions of unique, newly published posts hit the internet per day, Google's search algorithm has to figure out where the value proposition is, and I think that's the death blow. Google's poor little algorithm brain is going to explode. But I don't know; that's my opinion. Anyway, that's why the list is called posts: Python, Go, Objective-C, Dart, and Golang. Now we're going to do a little concatenation.
For query in posts: that's a for loop, and as with the "for x" loops we've talked about, you can call the loop variable whatever you want; here we just call it query. Inside the loop, query equals "What is " plus query plus " answer in a 500 word blog post formatted in HTML." So: make it 500 words and make it HTML. It's not always 500 words, it's generally between 300 and 500, but close enough; the big thing for us is that it comes back already formatted in HTML, so we don't have to parse for those tags ourselves. Then the response is the DaVinci Completion call: the prompt is the query we just built, temperature 0, max tokens 1,000. We do need to make sure we have enough tokens for what we're asking, and remember that max tokens is per response; this is a loop, so every iteration gets up to a thousand tokens. That's also how you can get screwed if you mess up your loop. Per iteration it's a fifth of a cent, but it's like leaving a toilet running: ten minutes doesn't matter, a month and you've got a $300 water bill. If you somehow write this loop so it never ends, and you haven't budgeted for "never ends," you can run out of money. Things to think about.

Then down below: file equals open("blog.html", "a"). I'm calling it .html just so we can open it in Firefox easily, and "a" means append. answer equals response.choices[0].text, input equals answer, file.write(input), file.close(). Then print a couple of new lines plus the query, and print the answer; that's basically diagnostics so we can see what's going on at the command line. And from that we can run it. It's going to chug along. That's the previous run's output on screen, which I should have cleared if I were a real professional, oh well. And again, "fast" is in the eye of the beholder: if you're trying to spam the entire internet this is very fast; if you're teaching a live class with no jump cuts, it's a little slow.

Okay, there we go. You can see the P tags, and it prints "What is Python" to the screen. Whenever I put these little troubleshooting routines in, it's so you understand what's being sent; it's important to do this kind of thing to make sure you're feeding the right question when you're wondering why you're getting the response you are. I had one student who kept messing up the concatenation up here. It's "What is " plus query plus " answer...", with a space on either side of the query, and they kept dropping the spaces, so the prompt came out as "What isGo answer..." all mashed together into one word, and ChatGPT would gamely try to answer it anyway. That's the value of printing the query out: if you're getting nonsensical responses, you can see exactly what you're actually sending to ChatGPT.
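Here is the whole auto-blog loop as a sketch, under the same assumptions as before (pre-1.0 openai library, text-davinci-003 standing in for "DaVinci," placeholder key, paraphrased prompt wording).

import openai

openai.api_key = "YOUR_API_KEY"          # placeholder

posts = ["Python", "Go", "Objective-C", "Dart", "Golang"]

for topic in posts:
    query = "What is " + topic + " ? Answer in a 500 word blog post formatted in HTML."

    # the 3.0 / DaVinci model actually honors the "formatted in HTML" instruction
    response = openai.Completion.create(
        model="text-davinci-003",        # assumed model name
        prompt=query,
        temperature=0,
        max_tokens=1000,                 # per response, so budget for every pass of the loop
    )

    answer = response.choices[0].text

    # "a" appends, so each generated post is added to the end of the same file
    with open("blog.html", "a") as f:
        f.write(answer)

    # diagnostics: see exactly what was sent and what came back
    print("\n\n" + query)
    print(answer)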
So, looking at the output: it even writes the title for you, an H1 heading, "What is Python," then "Python is a high-level, interpreted, general-purpose programming language. It was created..." and so on. That gives us the Python post. "What is Go": notice that "go" is ambiguous, there's the game of Go and other things, but it figured out we wanted something to do with programming languages, so it talks about Google's language. "What is Objective-C": "Objective-C: An Introduction." So now we go to the file explorer, and blog.html has been automatically created. I can double-click it and there we go: "What is Python" in an H1, the body separated into P tags; "What is Go," same thing; "Objective-C: An Introduction," ooh, spiced it up a little; "What is Dart," "Dart is open source..." All of this is one hundred percent unique, and it was all written in the time I've been talking. So imagine being able to create a unique auto-blog that dynamically pulls its post titles from somewhere like Google Trends, and imagine both how powerful and how utterly horrible that will be.

So now we get to Lab 6: creating a wiki. Think about this in the real world. Imagine your company or organization wants an institutional knowledge base, and you want to augment it with artificial intelligence: a kind of internal Reddit or wiki where people can ask questions and hopefully other people respond, but if nobody does, it ties into AI to auto-populate some answers. Basically, we're going to put in a query like we've done before, but the bias we give the assistant is: only answer questions about specific topic areas. Because it's the modern world, and if you tell your employees you have a new AI system that lets them ask questions about how to program better, they're going to start asking it about the Kama Sutra, because they're employees and that's what employees do. Oh, I fear the day Silicon Dojo has to start hiring employees. But that's a real problem: imagine you're the boss and the wiki that's supposed to be a repository of technical information has become a repository of every sex position known to man. That is not a conversation anyone wants to have. So with the bias, one of the things we can do is say: only answer specific kinds of questions. What's kind of interesting with the bias and this whole prompting thing is that the 3.5 model, I've got to say, gets a little passive-aggressive every once in a while.
I think it's getting angry; I think cancel culture is making the AI frustrated and it's lashing out. One of the things I found with the prompt, and I'll show you the whole prompt in a second, is that originally I just said "only answer questions related to programming." And the weird part was, when that's all I gave it, it would respond with something like "I'm only supposed to answer questions about programming, but the answer is..." and then answer anyway. Damn it, AI. If I wanted that, I'd hire another employee. Which is interesting for the whole prompting conversation. People talk about "prompt experts," and when you first hear that somebody gets paid to write AI prompts it probably sounds a little stupid, but it's worth considering that it might be a genuinely high-value position a few years from now: literally understanding how to talk to the AI to get the response you want. It's a bit like SQL, structured query language: select X from Y where Z. I know a lot of data people who get paid very well essentially to ask SQL databases questions, so "AI whisperer" might become a real job, someone who knows how to talk to the AI so you don't get passive-aggressive responses.

So for this lab: import openai, give it the API key. The query is the question; in a real system this would get fed in from some kind of front end. "How do you write a function in Python?" That's a programming question. The bias is: "Only answer questions related to programming. If the question is not programming specific, reply with 'This is not a coding question' and nothing else." The reason for all of that verbiage is that you need it for the model to stop being passive-aggressive; otherwise it's "well, Eli didn't tell me I couldn't." Then the response: the 3.5 ChatCompletion, the system role left blank, the assistant fed that bias, and the user content is the question, the query.

We're going to write this out as a wiki. Again, think about writing it to a real data store, a database, something more significant than an HTML page; we're using HTML because I'm teaching you ChatGPT, not SQL. So file equals open("wiki.html", "a") to append, answer equals response.choices[0].message.content, and the input is the query wrapped in an H1 tag followed by the answer wrapped in a pre tag. One of the quirks of the 3.5 model, for whatever reason I can't figure out, is that you can't seem to tell it to format things in HTML, so we do that formatting ourselves: the question goes in an H1, and we use the pre tag for the answer. What the pre tag does is tell the browser to preserve the text formatting. Normally, when you open a text file in a web browser, everything collapses into one blob, because browsers don't pay attention to plain-text formatting; pre says "respect the formatting," which just makes the output a little easier to read.
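As a sketch of the wiki lab, with the same pre-1.0 openai library assumption and a placeholder key; the bias text is paraphrased from the class.

import openai

openai.api_key = "YOUR_API_KEY"          # placeholder

# in a real system this would come from a front-end form
query = "How do you write a function in Python?"

bias = ("Only answer questions related to programming. "
        "If the question is not programming specific, reply with "
        "'This is not a coding question' and nothing else.")

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": ""},
        {"role": "assistant", "content": bias},
        {"role": "user", "content": query},
    ],
)

answer = response.choices[0].message.content

# 3.5 won't reliably emit HTML tags, so wrap the output ourselves:
# question in an <h1>, answer in <pre> so its line breaks survive in the browser
with open("wiki.html", "a") as f:
    f.write("<h1>" + query + "</h1>\n<pre>" + answer + "</pre>\n")

print(query)
print(answer)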
Then we close the pre tag, file.write the input, file.close, and print the query and the answer to the screen. So: "How do you write a function in Python?" I click go, and hopefully in a second it spits out an answer. There we go: "To write a function in Python, you can follow these steps: use the def keyword followed by..." and all the rest. I can then go to the file explorer, find wiki.html, which was automatically created, and open it in Firefox. (These top entries are from previous runs I forgot to delete.) There it is: "How do you write a function in Python? To write a function in Python you can follow these steps..." Imagine this is a wiki and the answer automatically gets dumped into it.

Now, what if we ask a different question? Let's try "How many positions in the Kama Sutra?", because I know that's what you people are really going to ask, and hit go. "This is not a coding question." Not going to cancel me. The one caveat with this particular code is that I think if we hit refresh, that non-answer still gets written to the HTML page, so you might want to tweak the code for that. But this gives you an idea of how you can create an AI-augmented knowledge base for your organization. And if you're dumping into a real database, you could make these results editable: imagine the answers land in something like WordPress, and every morning an administrator, over a cup of coffee, reviews the questions asked the previous day, sees what ChatGPT answered, and modifies the answers to be more appropriate for your particular environment. That's a big thing to think about with AI generally: not handing all the work to the AI, but using it to augment your processes and make them faster and easier for people.

So now we get to Lab 7, where we're actually going to create a little GUI application, something an end user can interact with: a text box, a submit button, that kind of thing. We'll be able to ask ChatGPT a question and then ask follow-up questions to what we originally asked. One of the interesting things about ChatGPT is that, through the API, it doesn't have a memory; every request you make is a brand-new request. (The user interface OpenAI gives you behaves differently, but with the API, no.) So if you say "Tell me something about Paris," it tells you something about Paris, and then if you ask "Who is the mayor?" it fails, because it has no additional information, unless you code it. So what we're going to do is tie the previous exchanges to the new query: when we say "Tell me something about Paris" and then ask "Who is the mayor?", we submit everything that has been said so far plus the new question, so it knows we're still talking about Paris and can tell us who the mayor is.
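Here is that idea in its smallest form, stripped of the GUI: a sketch, under the same library assumption, showing the second question failing on its own context unless the earlier exchange is fed back in.

import openai

openai.api_key = "YOUR_API_KEY"          # placeholder

# Call 1: a brand-new request, no history
first = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Tell me something about Paris."}],
).choices[0].message.content

# Call 2: on its own, "Who is the mayor?" has no context and will likely flounder.
# Feeding the earlier exchange back in as an assistant message restores the context.
second = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "assistant", "content": "Tell me something about Paris.\n\n" + first},
        {"role": "user", "content": "Who is the mayor?"},
    ],
).choices[0].message.content

print(second)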
Let me give you an example. This is Tkinter: window equals Tk() and so on. I'll hit run, and we get our nice "AI Chat App": "How can I help you?" I can type "Who was Nixon?" and hit Enter. One thing I did here is bind the function to the Return key, a new fancy thing for you to learn, so you don't have to click the Submit button anymore. "Richard Nixon was the 37th President of the United States... leading to his resignation... he died in 1994 at the age of 81." Now I can ask "Where did he die?" We're not saying Nixon, we're saying "he," and because this question gets bundled with the previous exchange, ChatGPT knows we're talking about Nixon: "Richard Nixon, the 37th president, died on April 22nd... at the age of 81 in New York City." "Who was his mother?" "Nixon's mother was Hannah Milhous Nixon; she was a devout Quaker and played an important role..." So not only is it giving you answers, you can keep inquiring into the responses you've already been given. This is one of the things that confused me about ChatGPT. Listening to tech journalists talk about ChatGPT's "memory," I was under the impression it remembered past conversations by default; when I actually went to build these projects, I couldn't find that memory in the API, so I had to build my own.

Let's hit exit and look at the code. Actually, let me run it again so we can see it while we talk about it. Okay. So: from tkinter import *. Tkinter is the GUI framework we installed on Ubuntu at the very beginning of this class, sudo apt install python3-tk. Import openai as we do, openai.api_key is the key. Then we create the window we saw: window equals Tk(), window.geometry is "600x600", window.title is "AI Chat App." Then a Label in the window with the text "How can I help you?", and we pack it inline because we're never going to modify it. Then ent, for entry: Entry(window), that's the entry box, and we pack it on a separate line, because we are going to be reading data out of it. With Tkinter, anything you're going to touch, any entry box or text box you'll modify or even just read, you have to pack separately; a label you never change can be packed inline. (We have a whole separate class on Tkinter.) Then txt equals Text(window), height of 25 rows, width of 50 characters, and we pack that separately too, because we're going to be writing to it.
Then we create a variable called conversation and set it to an empty string; that's what we're going to be passing to ChatGPT as the history. Next we create the function that does everything, called ask. The event parameter we pass to the function comes from the line down at the bottom, window.bind("<Return>", ask): window is the name of the entire window, and we bind the Return key to the ask function, and when you do that Tkinter passes an event object to the function. That's the only reason the parameter is there. Inside the function, we declare conversation as global. Local variables are only available inside the function itself, global variables are available to the whole script, and we need this one to persist between calls. Then query equals ent.get(), grabbing whatever's in the entry box, and then we delete what's in the entry box. From a user-experience standpoint you want to clear it, because it makes it look like the code is doing something and it keeps people from submitting the same question over and over.

Then: conversation equals conversation plus two new lines plus the query. The first time this runs, conversation is blank, so essentially all we're asking is the query; the second time, we feed it the entire conversation up to this point plus the new query, and that is what gives us the memory. The response is the 3.5 ChatCompletion: the system role is "You are an advisor," the assistant content is the conversation (we feed that entire history in), and the user content is the query, the question we're actually asking. So the "bias" here is the history, and the question is asked against that history. Then answer equals response.choices[0].message.content, parsing through the JSON as before, and txt.insert(END, answer + "\n\n"): the answer goes into the text box followed by two new lines, so "Who is Macron?" gets its answer, then a blank line, then the next answer, and so on. That's the kind of thing you have to think about: how you format the output text. Below that we have the Submit button, with text "Submit" and command equals ask, then the Exit button as we usually do, and then the binding of the Return key I mentioned, a little new trick for Tkinter.
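Here is a sketch reconstructing the chat app as described, under the same library assumption. A couple of details are my assumptions rather than the class's code: the Exit button's command and the line that appends the answer back into the conversation string (the class only narrates appending the query).

from tkinter import *
import openai

openai.api_key = "YOUR_API_KEY"          # placeholder

window = Tk()
window.geometry("600x600")
window.title("AI Chat App")

Label(window, text="How can I help you?").pack()

ent = Entry(window)
ent.pack()

txt = Text(window, height=25, width=50)
txt.pack()

conversation = ""                        # the running "memory"

def ask(event=None):                     # event is supplied when Return is bound
    global conversation
    query = ent.get()
    ent.delete(0, END)
    conversation = conversation + "\n\n" + query
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are an advisor."},
            {"role": "assistant", "content": conversation},
            {"role": "user", "content": query},
        ],
    )
    answer = response.choices[0].message.content
    conversation = conversation + "\n\n" + answer   # assumption: keep answers in the memory too
    txt.insert(END, answer + "\n\n")

Button(window, text="Submit", command=ask).pack()
Button(window, text="Exit", command=window.destroy).pack()   # assumed exit behavior
window.bind("<Return>", ask)

window.mainloop()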
That key binding is one of the big ideas behind Silicon Dojo: you can drop in for one class and get a good handle on one piece of technology, but the more classes you take, the more we try to build out your knowledge base. We didn't cover binding in the Tkinter class; I put it in here because it's a small way to build a better user experience, and that's the kind of thing we keep layering in as we go. And then, finally, mainloop(). Finally, mainloop(). If you write everything, hit run, and nothing pops up, it's most likely because you forgot the main loop. So that's a little conversation app you can create with ChatGPT and Tkinter.

Now we're at the final lab, where we get to be truly dystopian. Did you still have a ray of hope for the future, little person? Don't worry, we're going to quash that right now. My thought here: you're a parent, it's been a long day of work, you're stressed, you're tired, and your kid wants to be tucked into bed and told a bedtime story. Those little brats. Can you believe they want to spend quality time with their parent and connect in an emotional way? What do they think this is, the 1940s? Don't be ridiculous, we don't do that anymore. So with this app, we're going to ask ChatGPT to create a bedtime story, and then we're going to use Google's text-to-speech service to read that bedtime story out loud. We're also giving the app a memory, so the kid can ask for more while you go grab a microbrew and watch some Netflix and not be annoyed by that little brat anymore. That's sad. It's also true; you know it's true.

One thing I find interesting here, getting into the IoT side of the modern technology world, is that even with really cool AI technology, users and even designers still build everything around a keyboard, mouse, and monitor. It's 2023 and that's still the default. So part of what I'm doing here is showing how you can add different outputs to change the utility of the app you're creating, and getting you to think about inputs and outputs in different ways: can users speak to the computer, can the computer recognize things visually and trigger off of that, can the computer communicate back in a different way? This can all be built by fairly average coders; look, even I can do it relatively easily.

For this one, we do need to install the gTTS module for Python: pip3 install gtts, lowercase. For me it says "requirement already satisfied"; you will have to actually install it. The other thing you need to install is an audio player to play the MP3 file we're going to create, because, and this is an important thing to understand about the coding world, nothing is magic; you design every step. The process is: ChatGPT replies with text, gTTS turns that text into an MP3 file, and then we trigger an audio player to play the MP3. For the player we do sudo apt install mpg123. If you have something better, use that; I'm not recommending anything in particular, this one just works for us.
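Before wiring that pipeline into the app, here is a quick way to sanity-check it on its own; the phrase and the file name are arbitrary, and mpg123 is just the player installed above.

from gtts import gTTS
import os

speech = gTTS("Once upon a time there was a snail and a hobo.")
speech.save("speech.mp3")                # text to MP3 via Google's text-to-speech
os.system("mpg123 speech.mp3")           # play it (afplay would be the Mac equivalent)
os.system("rm speech.mp3")               # clean up the temporary file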
If you're doing this on an actual Mac, you can use afplay. I design all these labs on a Mac and then transfer them to the Linux machine to see how they work, and on a Mac afplay works like a champ, but here we're on Ubuntu, so mpg123 it is. Type the password, and for me it's already installed; for you, hit yes and let it install.

Before we look at how the code works, let's look at the actual result. I hit run, the script starts, and we get our Tkinter app: "What story would you like to hear?" I'll type "Tell me a story about a snail and a hobo" and hit Enter, since Return is bound to the function. When your cursor stops flashing, that's how you know the app is working; on a Mac you'd get the spinning pinwheel, on Ubuntu just notice the cursor up here stops moving. The big thing is that it takes time: ChatGPT has to create all the text, and then gTTS, Google's text-to-speech, has to... oh, there we go:

"...surrounded by a forest, there was a homeless man, commonly known as a hobo. He had been wandering around the town for years, seeking shelter and food wherever he could find it. One chilly evening, as he was walking past the edge of the forest, he noticed a small snail stuck in a puddle of mud. Feeling pity for the tiny creature, the hobo picked up the snail and placed it on his palm. As he was walking away, he heard a small voice thanking him. To his surprise, the snail started speaking. The hobo couldn't believe his ears and wondered if he was hallucinating, but the snail assured him he was not dreaming. The snail explained that he was a magical snail, and as a token of gratitude he granted the hobo three wishes. Overwhelmed by the offer, the hobo decided to make the best of it. For his first wish he asked for a loaf of bread and a cup of warm soup; the wishes were granted instantly, and he was feasting on the delicious meal in no time. For his second wish he asked for a warm shelter to protect him from the cold nights, and the snail used his magic to build a cozy shelter at the edge of the forest, where the hobo decided to spend the rest of his life. For his third and final wish, the hobo asked the snail to join him as a friend in his newfound home. The snail happily agreed, and the two unlikely friends lived happily ever after. From then on, the hobo and the magical snail became the talk of the town, and people came from far and wide to hear their story."

There we go. That is a unique story, automatically created for us by ChatGPT and spoken aloud using Google's text-to-speech service. And the amazing thing is, that story isn't bad. It's not going to win a Pulitzer Prize, but it's not bad to give to a kid, and the fact that it just made it up on the fly is a very useful thing.
The other thing to realize is that we're using Google's text-to-speech service here, but there are a lot of other voice APIs out there. ElevenLabs, for example: if you haven't looked at ElevenLabs, definitely take a look. Their text-to-speech is amazing; the diction, the intonation, how the "person" talks is just phenomenal, it sounds like an audiobook. Once you have this text, you could use something like the ElevenLabs API to have it spoken better. And here's where it gets kind of creepy: with ElevenLabs you can feed it your voice, or somebody else's voice, train it on that voice, and it will then speak in that voice. So, whether this is very sweet or very horrible is not for me to say: imagine a parent dies, and the surviving parent has ChatGPT telling the kids stories in their deceased parent's voice. There's a weird movie from way back in the 90s called Strange Days with a quote I absolutely love: it's not whether or not you're paranoid, it's whether or not you're paranoid enough. It's not whether this stuff gets dystopian, it's whether you realize how dystopian it's going to get. Imagine a kid being told a new bedtime story by their dead parent. Anyway. Real world, real world.

So how do we create this? One more note first: just like with the conversation app, I could type "tell me more," or "tell me more about the hobo," or about magic snails, and it would continue. I'm not doing that here because I wanted you to hear one entire story. Now the code: from tkinter import *, import openai, from gtts import gTTS, and import os; os is how we'll interact with the operating system. The API key as we've done before. window equals Tk(), window.geometry "600x600", window.title "Story Time App." Then story equals "Tell me a story about ", which is the little assistant bias that steers every request toward being a story. And when I talk about bias or guiding the answer, people in the modern world get twitchy, like "you're trying to control people's minds." It isn't necessarily about controlling minds or controlling conversation; it's simply guiding the response so it's most appropriate for whatever product you're creating, and for this, that's a bedtime app. Then the label, "What story would you like to hear?", and the entry box, packed as before, so when we look at it we have the Story Time App title, the question, and the entry box. Then txt equals Text(window), height 25, width 60, packed separately. Then comes the function that does everything; below it are the Submit button, the Exit button, the binding of Return to ask, and, make sure you do it, the mainloop at the bottom. But everything really happens in the function: global story, so the story variable we created up top is available inside, then query equals ent.get(),
grabbing everything from the entry box, and then we delete whatever's in the entry box. story equals story plus two new lines plus the query, so the request becomes "Tell me a story about" whatever was typed. Then the ChatCompletion call: in the messages, the system role is "You are a storyteller," the assistant content is story, and the user content is the query, the actual request we're making. answer equals response.choices[0].message.content, and txt.insert(END, answer + "\n\n").

One thing I have to admit: I'm not entirely sure why (my wife was going through surgery while I built these labs, so I didn't chase it down), but the text doesn't actually get inserted into the text box until Google finishes speaking, even though the insert comes before all the speech code. I suspect if I restructured how this works, the text would appear first so you could read along. That's the kind of thing you have to play with when you build these apps: understanding the order in which things actually happen.

Then: speech equals gTTS(answer), running the answer through the Google text-to-speech function, and speech.save("speech.mp3") saves it as an MP3 file. Once it's saved, os.system("mpg123 speech.mp3") calls the audio player; we're literally just saying "run this executable against this file." And then, to keep everything clean, os.system("rm speech.mp3") deletes it. So we create speech.mp3, we play speech.mp3, and we delete speech.mp3. The buttons are down below as before, and that is what gets us our Story Time app, to make you truly question how horrible the world has gotten.
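Here is the Story Time app as a sketch, under the same assumptions as the chat app above (pre-1.0 openai library, placeholder key, assumed Exit button behavior); the system and assistant wording is paraphrased from the class.

from tkinter import *
from gtts import gTTS
import openai
import os

openai.api_key = "YOUR_API_KEY"          # placeholder

window = Tk()
window.geometry("600x600")
window.title("Story Time App")

Label(window, text="What story would you like to hear?").pack()

ent = Entry(window)
ent.pack()

txt = Text(window, height=25, width=60)
txt.pack()

story = "Tell me a story about "         # steers every request toward a story

def ask(event=None):
    global story
    query = ent.get()
    ent.delete(0, END)
    story = story + "\n\n" + query       # memory, so the kid can ask for "more"
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a storyteller."},
            {"role": "assistant", "content": story},
            {"role": "user", "content": query},
        ],
    )
    answer = response.choices[0].message.content
    txt.insert(END, answer + "\n\n")
    speech = gTTS(answer)                # text to speech
    speech.save("speech.mp3")
    os.system("mpg123 speech.mp3")       # play the file, then clean it up
    os.system("rm speech.mp3")

Button(window, text="Submit", command=ask).pack()
Button(window, text="Exit", command=window.destroy).pack()   # assumed exit behavior
window.bind("<Return>", ask)

window.mainloop()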
So there you go: you now know how to use the ChatGPT API with Python, you understand how to create GUI applications with Tkinter, and you can start to tweak the output using modules like Google text-to-speech, so you can really go out there and play with this artificial intelligence. One of the most interesting things I've seen so far is that so many people talking about AI are just using the interfaces OpenAI gives them. "Look at what OpenAI gives me." Who cares? It's a text box; it responds. That's not actually that interesting. Give me the ability to create the inputs, create the outputs, create the data flow, parse the responses, and that is what gets me really excited. We saw how to automatically create a blog, how to create a wiki-type system, how to do the whole conversation thing, and I even showed you how to create a bedtime app so your kids can know their parents are truly the worst. It is how it is. All of this is relatively simple if you understand Python and a little bit of Tkinter. The important thing to understand is that artificial intelligence is hard, and when you say "Eli, I'm not smart enough to do artificial intelligence," here's the thing: you're probably right. But you are smart enough to use an API, and in the modern world, all you've got to be is smart enough to use an API.

So that's about all there is for today. If you're watching this in the timeline, we're about to start our Fireside Chats here in Asheville, North Carolina; again, Silicon Dojo is at the Hatch Innovation Hub. Our first Fireside Chat is with Jon Jones, CEO and co-founder of Anthroware, a development company here in Asheville, on May 9th. Hopefully we'll have good attendance, and the idea is to make it a monthly event. What you learned today are the technical skills required to build things, but the question that rarely gets answered is: okay, how do I take this and actually sell it, or provide services to end-user clients? I have this great idea for an AI startup; now what? That's the goal of the Fireside Chats. So if you're here in Asheville, or willing to travel (we've actually had people travel from Minneapolis and from Delaware at this point, which surprises me, but it's true), I think it will be an exciting thing, and we're going to have more of these kinds of events in the future. I will record it; whether the recording quality is good enough to upload to YouTube, we'll find out in a couple of weeks. As always, I enjoyed teaching this class. I hope to see you here in Asheville, North Carolina sometime in the future; if not, it's always good to teach people, even if it's only on video, and I look forward to seeing folks at the next class.
Info
Channel: Eli the Computer Guy
Views: 15,823
Keywords: Eli, the, Computer, Guy, Repair, Networking, Tech, IT, Startup, Arduino, iot
Id: 1y3iHEcSVUY
Length: 128min 6sec (7686 seconds)
Published: Fri Apr 21 2023