Getting Started with OpenAI API and GPT-3 | Beginner Python Tutorial

Video Statistics and Information

Captions
Hi everyone, I'm Patrick from the AssemblyAI team, and in this video we are going to explore the OpenAI API together. OpenAI is well known for creating GPT-3, a powerful deep learning model capable of producing human-like text. What makes this API special is not only the powerful model behind it but also that it isn't designed for a single use case: it provides a general-purpose interface, so you can try it on basically any task in the English language. It's really powerful, so let's dive in.

First we go to openai.com, sign up for an API account, and log in. Let's have a look at the pricing first. It's not free, but you get $18 of free credit in the beginning, which is enough to get started and play around with it, so I recommend just trying it out and having fun. After that it's a pay-as-you-go model: you see the different prices per 1,000 tokens, and the more powerful the model, the more expensive it is, of course. As I said, we can get started for free. Let's go back to the overview and then to the documentation. The official documentation is pretty good, so I recommend reading through it a little bit; this video is also closely based on the official guides.

The first thing I want to show you is how to use this in Python code, because there is a Python library we can use to interact with the API. All we need to do is run pip install openai, and then we can start using it. We say import openai, and we also need an API key, which you can generate and find in your dashboard. In this case I stored it in a separate file that you don't see, and I just import it. Then we set the API key and define our prompt; here we use "Say this is a test". Then we call openai.Completion.create, the Completion endpoint, which I will tell you more about in a moment. We have to specify an engine and give it the prompt, and we can pass more arguments; in this case we also specify max_tokens. We will look at the parameters in a moment. Then we print the response and run it, and we get the response back: it contains different choices, in this case only one, and the text is "This is a test". So it correctly applied the prompt and gave us a good result.

Let's try another prompt, because as I said we can use this for many different tasks. Let's try "Write a tagline for an ice cream shop" and run it; this time we get the text "The ultimate sweet treat". Pretty cool.

Now let's have a look at this endpoint a little more. We can hover over it and go to the documentation, where you see basically the exact same code I just used; you can also select a different language, for example JavaScript, or just a curl command, but in this case we used the Python code. There you also find the different parameters: the prompt; max_tokens, the maximum number of tokens to generate in the completion; and a couple more, for example temperature, which is also important: it is the sampling temperature to use, and higher values mean the model will take more risks. You can read through all of this yourself; it's pretty detailed. A minimal sketch of this first call is shown below.
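Here is a minimal sketch of that first call, assuming the pre-1.0 openai Python package used in the video; the engine name and the way the key is loaded are illustrative, not taken from the video.

```python
# Minimal sketch of the first Completion call, assuming the pre-1.0
# `openai` Python package shown in the video (newer releases use a different client).
import os

import openai

# The API key comes from your OpenAI dashboard; reading it from an
# environment variable (the variable name is illustrative) keeps it out of the code.
openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    engine="text-davinci-001",  # illustrative engine name from that era
    prompt="Say this is a test",
    max_tokens=6,
)

# The generated text sits in the first (and here only) choice.
print(response["choices"][0]["text"])
```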
Now let's talk about the engine we used. We can go to Engines in the docs, where we see the different GPT-3 models; in this case we used text-davinci. If we look at Davinci, we see that it is the most capable engine and can perform any task the other models can perform, often with less instruction. I recommend reading through this yourself and playing around with the different engines a little; in this case we simply stick with the Davinci model.

Then let's go to the endpoints. Here you see guides for the different endpoints. In the first example I showed you the Completion endpoint, but there are also, for example, the Classification endpoint, the Search endpoint, and the Question Answering endpoint, and we will go over them.

Let's start with the Completion endpoint, which can be used for a wide variety of tasks. What's important here is how you design your prompt. There is a whole chapter about prompt design, and it tells us that the models can do everything from generating original stories to performing complex text analysis, and because they can do so many things, you have to be explicit in describing what you want. So it's really important to define a good prompt. Let's go over a few examples of how we can design the prompt for different tasks.

For example, we can do a classification task. Here we write "Decide whether a Tweet's sentiment is positive, neutral, or negative.", then "Tweet:" followed by the tweet, then "Sentiment:" and leave that blank. Let's copy this text, put it into our prompt as a multi-line string, save it, and run it. We get a choice with the text "Positive", so it correctly classified this tweet as positive. This works, and this is how classification can be applied. We can also do multiple classifications in one prompt, for example "Classify the sentiment in these tweets", followed by the tweets numbered 1, 2, and so on. We have to be careful here, though, because the more we pack into one prompt, the harder it can get.

Another very popular task is text generation; that's what we already did with "Write a tagline for an ice cream shop". We can also say "Brainstorm some ideas", which is really fun. Then we can use it for conversation, and we can tell it how the assistant should behave, for example: "The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, and very friendly." Then we give it the first part of the conversation, for example the human already said "Hello, who are you?" and the AI answered "I am an AI created by OpenAI. How can I help you today?", and so on. This is how we set up a conversation; a sketch of such a prompt is shown below.
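As a sketch of that conversation prompt, again assuming the pre-1.0 openai package; the final human turn, the stop sequences, and the parameter values are assumptions added for illustration.

```python
# Sketch of the conversation prompt described above, sent through the
# Completion endpoint of the pre-1.0 `openai` package.
import openai  # assumes openai.api_key is already set as in the first sketch

prompt = (
    "The following is a conversation with an AI assistant. "
    "The assistant is helpful, creative, clever, and very friendly.\n\n"
    "Human: Hello, who are you?\n"
    "AI: I am an AI created by OpenAI. How can I help you today?\n"
    "Human: Brainstorm some ideas for Python apps.\n"  # illustrative next turn
    "AI:"
)

response = openai.Completion.create(
    engine="text-davinci-001",   # illustrative engine name
    prompt=prompt,
    max_tokens=60,
    temperature=0.9,             # assumed value; tune it in the Playground
    stop=["\nHuman:", "\nAI:"],  # keep the model from writing both sides of the dialogue
)
print(response["choices"][0]["text"].strip())
```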
Now let's try the conversation. What's also really cool about this documentation is that you can open most of the examples in a playground. We click "Open in Playground" and see this prompt together with all the different parameters: the engine, the temperature, the response length, and so on. Let's click Generate and see what we get. In this case the human turn was left blank (I was supposed to enter text here), so the AI responded with "Do you want me to be your virtual assistant?". Let's put in "Yes" and generate another response: "Great, let me know if there's anything I can do for you." So this already works, and this playground is super cool.

When we are done playing around, we can click "View code" and again find this code as Python, Node.js, or simply as JSON. We can grab the Python code, put it into our editor, and start using it; again we see the openai Completion call with the different parameters we set on the right side. This playground is simply awesome, so I recommend playing around with it yourself. That's how we do conversation.

Let's scroll further down. We can also do text transformation, for example translation: we say "Translate this into French, Spanish, and Japanese" and give it a text. Again, let's open this in the playground and simply test it. Click Generate, and we get the responses. I don't speak any of these languages, so if you know whether they are correct, let us know in the comments. We can also do a conversion, for example "Convert movie titles into emoji". We can do summarization, such as "Summarize this for a second-grade student" followed by the text. We can also do completion: in this example, "Vertical farming provides a novel solution for producing food locally, reducing transportation costs and..." the sentence is cut off, and the model tries to complete it for us. And we can design a prompt for factual responses: it gets a concrete question and should give the fact back, for example "Who is George Lucas?" or "What is the capital of California?". So this is really cool; there are so many tasks we can do, and I would say the Completion endpoint is the most powerful one with the most variety. Again, go through all of this yourself and play around with it a little bit.

Now let's go to the Classification endpoint. We have just seen that we can also do classification through the Completion endpoint, but this one is a little more involved. Let's read through it: the Classification endpoint provides the ability to leverage a labeled set of examples without fine-tuning, and it can be used for any text-to-label task. The labeled examples are combined with a query to produce the classification result. Let's look at this in code. The way it works is that we have a file, in this case called classification.jsonl, in the JSON Lines format, where each line is a valid JSON object. Each sample has a text, here "good film, but very glum", a label, and optionally metadata; we have one example with a positive label and one with a negative label.

To use this in code, we first upload the file: we call openai.File.create, give it the file name, and set the purpose, in this case "classifications". Then we print the result and run it. The file is now stored on the server, and the calculations it needs have already been done. What's important is the returned ID, because now we can create a classification query: we call openai.Classification.create, give it this file ID, and query, for example, "movie is very good". We can also specify more parameters, for example the search_model, the model, and the maximum number of examples. A sketch of this flow is shown below.
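A minimal sketch of that flow, assuming the pre-1.0 openai package (the Classifications endpoint has since been deprecated); the model names and the max_examples value are illustrative.

```python
# Sketch of the Classifications endpoint flow described above, assuming the
# pre-1.0 `openai` package; model names and parameter values are illustrative.
import openai  # assumes openai.api_key is already set

# classification.jsonl holds one JSON object per line with a text, a label,
# and optional metadata, e.g.:
# {"text": "good film, but very glum", "label": "...", "metadata": {...}}
upload = openai.File.create(
    file=open("classification.jsonl"),
    purpose="classifications",
)
print(upload["id"])  # the file id is what the query below refers to

result = openai.Classification.create(
    file=upload["id"],
    query="movie is very good",
    search_model="ada",  # model used to find the most relevant labeled examples
    model="curie",       # model used to assign the label
    max_examples=3,
)
print(result["label"])              # e.g. "Positive"
print(result["selected_examples"])  # the uploaded examples the label was based on
```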
Then we don't need the file-creation call anymore, so we comment it out and run it. This is our result: we asked about "movie is very good", the label is Positive, and we also see the other parameters that were used and the selected examples, which are basically the data we uploaded. This is how we can run a query; for example, let's query "movie is very bad" instead, save, and run, and we get the label Negative. So this works, and that's how the Classification endpoint works.

Let's go back and have a look at the Search endpoint. The Search endpoint allows you to do a semantic search over a set of documents: you provide a query, such as a natural-language question or a statement, and the provided documents are scored and ranked based on how semantically related they are to the input query. Let's see how we use this in code. Again we need to upload a file; in this case I have another file called search.jsonl, again in the JSON Lines format. The first sample has the text "puppy A is happy" and the metadata "emotional state of puppy A", and the second sample is for puppy B, which is sad. We again call openai.File.create, this time with search.jsonl and the purpose "search", save, and run, and again we get a response back with an ID.

Now let's create our search query. In this case we call openai.Engine, specify an engine, and then call .search, where we can use different parameters. What's important here is the query, in this case "happy", and of course the file ID, which we paste in. We save and run, and we get the result back: it returned one result, document 0, with the text "puppy A is happy". Let's query for "sad" and see if it finds the other one; indeed, it returns the text "puppy B is sad". You can see we also get a score back, which can be important for your application. That's how the Search endpoint works; a sketch follows below.
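A minimal sketch of the search flow, again assuming the pre-1.0 openai package (this Search endpoint has also since been deprecated); the engine name and the max_rerank value are assumptions.

```python
# Sketch of the Search endpoint flow described above, assuming the pre-1.0
# `openai` package; the engine name and max_rerank value are assumptions.
import openai  # assumes openai.api_key is already set

# search.jsonl holds one JSON object per line, e.g.:
# {"text": "puppy A is happy", "metadata": "emotional state of puppy A"}
# {"text": "puppy B is sad",   "metadata": "emotional state of puppy B"}
upload = openai.File.create(file=open("search.jsonl"), purpose="search")

result = openai.Engine("davinci").search(
    file=upload["id"],
    query="happy",
    max_rerank=5,  # how many candidate documents to re-rank
)

# Each returned entry carries the document index and a relevance score.
for doc in result["data"]:
    print(doc)
```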
Now the last endpoint I want to show you is the question answering endpoint. Answers is a dedicated question-answering endpoint, useful for applications that require high-accuracy text generation based on sources of truth like company documentation and knowledge bases. When we send a query to it, the endpoint first searches over the provided documents or file to find relevant context for the input question; semantic search is used to rank the documents by relevance to the question, and the relevant context is then combined with the provided examples and the question to create the prompt for the completion.

To use this in code, we again create a file. We can use the exact same file, but we have to change the purpose to "answers". We run this again and get a new ID. Now we can use the Answers endpoint: we comment out the file creation, grab the ID, and call openai.Answer.create with different parameters, for example again the search_model and the model. Then comes the important part: the question, "Which puppy is happy?". We also give it an examples context, for example "In 2017, U.S. life expectancy was 78.6 years", and examples of how to use that context, for example the question "What is human life expectancy in the United States?" with the answer "78 years". Let's save this, run it, and print the response. In this case the answer is "puppy B is sad", which is actually not correct this time. If we look at the documentation for how it should look when the correct answer is found, the answer should be puppy A, because we searched for which puppy is happy; in the best case the answer is correct, and we also get the selected documents back with a score. A sketch of this call is included right after the transcript. And that's how you can use the Answers endpoint; that's everything I wanted to show you in this getting-started guide for OpenAI.

As a last thing, I want to show you one cool possible application that I created: we can combine this with another API, in this case the AssemblyAI API, use speech-to-text, and then combine it with OpenAI, so we get a virtual assistant we can talk to. I already created this app, so I simply want to show you how it works. It has a button I can click, and then I can talk into my microphone. Let's try "Brainstorm some ideas about Python apps", and we get the prompt and the answer: a to-do list app, a note-taking app, a weather app, a news aggregator, an image editor, and it even goes further, a music player and a simple game. Super cool. The way it works is that it records from my microphone, then I call the AssemblyAI API to turn the speech into text, and then I feed that text to the OpenAI API and get the answer. It's actually pretty simple to implement; if you want a step-by-step guide on how to set up the AssemblyAI API, check out this tutorial.

And that's it, that's all I wanted to show you for today. I hope you enjoyed this tutorial; if so, please hit the like button and consider subscribing to our channel. I hope to see you in the next video. Bye!
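As referenced above, here is a minimal sketch of the Answers call from the walkthrough, assuming the pre-1.0 openai package (the Answers endpoint has also since been deprecated); the model names and max_tokens value are assumptions.

```python
# Sketch of the Answers endpoint call described above, assuming the pre-1.0
# `openai` package; model names and max_tokens are assumptions.
import openai  # assumes openai.api_key is already set

# Upload the same search.jsonl file again, this time for the Answers endpoint.
upload = openai.File.create(file=open("search.jsonl"), purpose="answers")

response = openai.Answer.create(
    search_model="ada",  # model used to rank the documents
    model="curie",       # model used to generate the answer
    question="Which puppy is happy?",
    file=upload["id"],
    examples_context="In 2017, U.S. life expectancy was 78.6 years.",
    examples=[["What is human life expectancy in the United States?", "78 years."]],
    max_tokens=10,
)

print(response["answers"])             # ideally something like "puppy A"
print(response["selected_documents"])  # the ranked documents with scores
```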
Info
Channel: AssemblyAI
Views: 233,182
Id: Zb5Nylziu6E
Length: 20min 24sec (1224 seconds)
Published: Wed Feb 16 2022