Creating your first ChatGPT plugin with Semantic Kernel (feat. Matthew Bolanos)

Captions
So we are going to be creating our first OpenAI plugin, and I'll bet a lot of you are already familiar with those if you're joining this call. This is the plugin ecosystem that's powering ChatGPT and soon even Microsoft products like Microsoft Business Chat and Bing Chat. Now, I know a lot of you don't have access to ChatGPT plugin testing, so we'll also be showing you how to test whatever plugin you build with Semantic Kernel.

Before we actually dive into building our first plugin, I just want to set the stage for where many of us find ourselves today. Bosses and managers, everyone's asking us: hey, can you add AI, can you make this AI-powered? Or you're thinking, in a couple weeks' time I'm going to be joining a hackathon and I want to build something really cool with AI. But the challenge is that it hasn't been super clear how to build AI integrations with your stack. Many of you are probably .NET developers, or TypeScript or Java developers, and you're looking enviously at what's happening in the Python world, where they have all these great tools to do AI. The goal from day one on the Semantic Kernel team is to support all of you as app devs, whether you like .NET, Java, or TypeScript, so that you can take advantage of these AI capabilities. We do that in two primary ways. One, and the reason why I'm doing a session like this, is we want to make sure you're armed with the knowledge, the metaphors, and the mental models so that you can intelligently bring these capabilities to your apps. And secondly, we provide this SDK called Semantic Kernel, which I'll tell you a bit more about, that's supposed to help you as an app dev bring and merge these AI capabilities with your code.

On the first point, understanding how to build these applications: on the Semantic Kernel team we really fundamentally believe it starts with understanding three key things, and over the year we will repeat these words over and over and over again. If there's one thing you take from this session, it's knowing what plugins, planners, and personas are. You'll hear me say it over and over again: plugins, planners, and personas. Obviously, because this session is all about building OpenAI plugins, most of our time will be spent looking at how plugins work and how to build them.

As I mentioned, the other thing we do is ship this thing called Semantic Kernel. If you don't know what that is, that's totally okay, it's still fairly new. It's essentially an open-source SDK; if you go to GitHub you can just search for Semantic Kernel and it will pop up. The main thing it tries to do, or does do, is merge AI capabilities, which we call semantic functions, with native code, which we call native functions. You can basically build these pipelines, or in the AI space they call them chains, that run these things together, and they can talk to each other using the same context. Because you can combine LLMs with native code, you get the best of both worlds. What's great about Semantic Kernel is it's been built for .NET, we have a Python flavor, and we have a Java version coming soon. If anyone is interested in helping us with our TypeScript version, volunteers are definitely desired. And it's very extensible: right now what's super hot is all the OpenAI models, so GPT-3.5 Turbo and GPT-4 and whatever comes next, but because it's extensible you can also use models like LLaMA from Facebook or whatever open-source model comes out. What we're going to be focusing on today are plugins, which is how you can enrich your Semantic Kernel application with new capabilities. And if through the session you're like, wow, this is amazing, this is super cool, I want to learn more, you can download our repo. You can see it on the screen right now: we have
the .NET folder open, and there's a notebooks folder. I highly, highly, highly recommend folks play around with these notebooks. If you've never used a Jupyter notebook before, it's as easy as clicking a play button: it'll run the code and you can see what happens. In this case, this one teaches you about semantic functions; we can see a particular prompt that we send to an LLM, and when we click the play button we can see how to run that function with Semantic Kernel and get a result back. It's really great. But enough about me trying to sell Semantic Kernel, let's build a plugin.

So what are plugins? Plugins, we believe, are the core building blocks of any AI application you're going to build, whether it's a copilot or some native integration. Right now most of the world thinks of them as just, oh, these are extensions that you can plug into OpenAI, but I want to challenge y'all to think bigger, and by the end of this session I hope you appreciate how you can create plugins to bring AI capabilities to any part of your app. You're probably most familiar with plugins from ChatGPT. Earlier this year they said, hey, here's this thing called plugins; you can give superpowers to ChatGPT by giving it access to the APIs of your service. And what's great is that Microsoft at Build said, hey, we're going to lean in and also adopt this standard, so that when we support plugins in Microsoft Teams, Business Chat, or Bing Chat, you'll be able to use the same plugins you've already developed. If you haven't already seen it, I'm pulling up ChatGPT right now; you have to pay for this, which I'll talk about in a bit. Here I have a conversation I've created earlier. There were a couple of questions, so I'll take those right now.

"Yeah, feel free to interrupt. There was a question from Vikram: how is Semantic Kernel different from LangChain?" They're very similar. If you're familiar with LangChain, Semantic Kernel is basically Microsoft's open-source equivalent. We've been focusing, though, on a different persona, a different developer, than LangChain: we focus on enterprise developers. What that means, and you'll see it a bit later, is we have somewhat better debugging, observability, and day-two features, which matter if you are a .NET developer building production apps with minimal dependencies, apps you can monitor and actually know what the hell is going on in. A few other questions very quickly: where do we find the notebook? The notebook is in the Semantic Kernel GitHub repository, which I have here. And the last one was whether I'm going to discuss the relationship between the other internal Microsoft efforts and Semantic Kernel. I wasn't planning on it; let's leave it to the end, and I'll share some of the internal Microsoft picture, like who's using what and how it all integrates. How does that sound?

Okay, so if we go to ChatGPT and ask how much it costs to travel to New York from Dublin, which is where I'm located: remember, the value of a plugin is that the LLM obviously doesn't know this, and if you really probed it, it would just make something up. With a plugin, however, and this one is using the Kayak plugin, it can call Kayak APIs and actually pull the correct information, and then the LLM can summarize it. It's really powerful to build on top of that. In terms of the services that y'all build, they basically sit in this little globe: since this is Microsoft, I'm assuming you're working on Microsoft products, so either a Microsoft Graph endpoint or some other endpoint that could theoretically be exposed to an AI agent like ChatGPT, or other applications, or even your own, or
you're working with a customer and they have their own services they want to expose. What's great about plugins is you can wrap this information and give it to these LLMs so they can start using it; these are the kinds of samples you might build, like a CRM app. Now, as I mentioned, what's great is that this is a really easy way for you to get started adding AI without jumping all the way in. We all have services, we all know how HTTP requests work, and so I'm hoping this AI concept is something y'all can easily grasp and easily build, so that you can start dipping your toes into AI without having to learn everything like vector databases and memory and embeddings, a lot of complicated AI terms. You don't need to know that for this first step.

The last thing I'll say about plugins before I dive in: a lot of folks think a plugin is just a collection of APIs, as if you throw the API endpoints at the LLM and you're done. But it's more than that: it's those functions plus semantic descriptions. What's different from what you've probably done before is that until now you've described these APIs in documentation for a human audience; now you have to describe what these functions are, what they do, and how to use them to LLMs. Which basically means that when you write these descriptions, when you write a plugin, when we semantically describe what is possible, you're doing prompt engineering so that the AI can effectively call your APIs. Here we have an example from the Semantic Kernel repo: we have this Writer plugin, which has a bunch of different functions. The functions themselves are great, you can call them, but what's most important is the description, because that description goes into the planner (we'll talk about what that is later) in order to satisfy a natural-language request. In this example, it knows to use the short-poem and short-story generator functions based on the descriptions the planner was provided. So keep that in mind as we build our plugin and put our two new plugins together: what's most important is not necessarily the code inside of it. All of you can code; it's super easy to create another API endpoint. What's really different, what's new, is writing those descriptions so that the AI can call your APIs in the right way.

For the rest of this session, I have a link you can go to; it points at a GitHub repo with all the sample code, and you can try coding along. As I was doing rehearsals I realized I move around a lot, so it might be difficult to listen to me and follow along in the repo at the same time. My recommendation would be, after this session is over (it is being recorded), to come back to this GitHub repo and follow along at your own pace.

So with this to-do plugin, what we're going to be doing is creating an Azure Function that wraps a bunch of APIs from a to-do service. With the to-do service you can create to-dos, mark them complete, get them: all the basic boilerplate behaviors you would expect. We're going to wrap that in an OpenAI plugin so that we can give it to something like ChatGPT, Semantic Kernel, or another Microsoft product. If you go to this repo, let's take a look at what this looks like. You'll notice this is using one of the starters the Semantic Kernel team has created; I haven't changed the README too much. If you're interested in actually understanding how I even got to this point, we do have a great tutorial on how to create and run a ChatGPT plugin, which walks through all the steps on how to take that starter and use it to build a ChatGPT plugin. That's another great resource for you to check out; it's in the Semantic Kernel docs, it's called "create and run a ChatGPT plugin". Now you might also be wondering, okay, what is
this service we're actually creating a plugin for? Azure has some really great samples on how to use every single one of their services, and they have to-do samples, and that's what we're using today. Before this session I deployed a version of this that I'll give everyone access to in a bit. This is what the UI looks like once you deploy it to Azure: on the left here we have different lists. If y'all are following along, go ahead and create your own list; you can see I created one with my alias. So go ahead and type your list name here, and when you navigate to your list you'll be able to create items and all the good stuff. This is going to serve as the service: imagine this is your Microsoft service sitting behind Microsoft Graph, and we're going to build a plugin for it. Because what I want is to not have to type in a bunch of new tasks and mark them complete myself. I don't want to click buttons when I can talk to an agent and have it perform those tasks. And then on the right here we have ChatGPT, but since a lot of you don't have access to that, I'll be showing you how to test with something else.

If we drill into the to-do plugin, what does it actually look like? These are the core building blocks of an OpenAI plugin; there are three big pieces I want to call out. I mentioned how important it is to semantically describe plugins, and the first place you start describing things is the ai-plugin.json file. This is a JSON file defined by OpenAI; if you go to their plugin docs you can actually see what the schema looks like and what they're expecting. So this is what the JSON looks like: you have basic things like the name of this plugin for a human and the name for a model, the LLM, and the same for the description: one for the human and one for the model. You can start to see with just this example how you might describe things subtly differently for a human versus an LLM; we'll talk more about what those differences are in a bit. The next piece is an OpenAPI spec. I hope this is very familiar to a lot of you; this is hopefully something y'all have been doing for years: oh, I built a service, I created some endpoints, and in order for someone else to actually use it I'd better document it as an OpenAPI spec. Because this is so familiar, it's one of the main reasons I hope this is a really great intro into AI: we're not doing anything radically different from what you're already doing. And then lastly, we have the HTTP endpoints themselves. This particular demo will show how to get tasks, create tasks, and mark them complete.

Alright, let's look at some code. I have already pulled down the to-do AI plugin; this is what you should see. There are two projects in here. One is the actual Azure Function with all the different endpoints, and that's where we'll focus most of our time; the other is this thing called the semantic functions generator. Let's pretend that doesn't exist yet; we'll talk about it a little later. So let's focus on the Azure Function. If we take a look at our diagram, let's find those three key parts I discussed earlier. First is the ai-plugin.json file: we have a class here that describes that for the Azure Function, and we can see it's being served at /.well-known/ai-plugin.json. OpenAI is very, very strict: they only look for this file in the .well-known folder at the root of your domain, so keep that in mind. Inside the starter we make it a bit easier for you to define all those settings within your app settings; here you can see all those separate values, description for model, description for human, et cetera, and we plug that in and serve it back out as JSON.
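To make that concrete, here is roughly what an ai-plugin.json manifest looks like. The field names below follow OpenAI's published plugin manifest schema, but the URLs, names, and descriptions are made-up placeholders for a to-do plugin like this one, not the exact values from the sample repo:

```json
{
  "schema_version": "v1",
  "name_for_human": "To-Do List",
  "name_for_model": "todo",
  "description_for_human": "Manage your to-do list: add, view, and complete tasks.",
  "description_for_model": "Plugin for managing a user's to-do list. Use it to add items, get the current items, and mark items complete. Always get the list first to find an item's id before marking it complete.",
  "auth": { "type": "none" },
  "api": {
    "type": "openapi",
    "url": "http://localhost:7071/openapi.yaml"
  },
  "logo_url": "http://localhost:7071/logo.png",
  "contact_email": "support@example.com",
  "legal_info_url": "http://example.com/legal"
}
```

Notice how description_for_model can carry usage hints ("always get the list first...") that you would never bother putting in the human-facing description.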
So that is the ai-plugin.json file. The next thing, of course, is the OpenAPI spec. (I'm just realizing the font is huge; is it distracting how large it is?) The way you define that is to go to one of the functions we've defined; here I have the GetTodos function open, and you can also see in the navigation there's the create one and the mark-complete one. We're using these OpenAPI decorators, this tooling, so that we can automatically generate our OpenAPI spec. What I want you to start focusing on is the descriptions I'm providing for these things, because that's what's going to be used by the LLM. Now, once you're ready to actually run this thing, there are a few things you need to configure; these are defined in the README if you want to follow along later. One is that you'll have to have your own local.settings.json, where we're going to put the API key and URL of the service. In the app settings you'll also need to provide the ID of your list, which is the ID you find in the URL; here the ID is "eeae"-something-something, so you'll copy that in. And then you'll want to update the endpoint information for your Azure OpenAI service. Just to connect the dots a bit: if you have your resource open, you'll want to copy in the endpoint, and for model deployments, under managed deployments, hopefully you've already deployed some models; if not, you can easily create a new one. Here we can see all the different models I've deployed in Azure AI Studio. For most of the scenarios we're showing today, what I recommend is GPT-3.5 Turbo; you can see the deployment name of that is "gpt-35-turbo" (we don't have the period), and that is what you put in for the deployment or model name.
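For reference, here is an illustrative fragment of the kind of OpenAPI spec those decorators generate. The path, operation ID, and wording are invented for illustration, not the exact output of the sample project:

```yaml
# Illustrative fragment of a generated OpenAPI spec for the to-do plugin.
paths:
  /api/todos:
    get:
      operationId: GetTodos
      summary: Gets all of the user's to-do items.
      description: >
        Returns every task on the user's list, including its id, name,
        and whether it is complete. Use this before marking a task
        complete, to find the task's id.
      responses:
        "200":
          description: A JSON array of to-do items.
```

The description field is where the prompt engineering happens: it tells the model not just what the endpoint returns, but when to call it.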
Once you've configured the app settings and created your local.settings.json file with your API key, it's time to actually run this thing and see what our plugin looks like. So we're going to click the Run button to start the Azure Function; I'll wait for it to build so that we can see all the different endpoints we've created. Now, while this is loading, you might be wondering: Matthew, I already have OpenAPI specs sitting around, why can't I just give those to the LLM? There are a few reasons. One: an LLM, as much as we think they're super smart, has a token limit, and if you dumped every one of these functions into it, it would quickly become overwhelmed and wouldn't know what to do with them. Two, and I mentioned this a bit already: the way you semantically describe something for a human and for a model is subtly different. You might provide extra hints in the model descriptions that you wouldn't give a human, and you don't necessarily want to pollute your existing docs, your existing OpenAPI specs, with a bunch of model-specific text. And the last thing: I would expect the return types to be different too. If you send a huge JSON with every single property of your API, that might work for a human, because they know which properties to pick out, but an LLM has to read all of it, and reading through very large outputs can get really expensive. So that's why I'm recommending, and it looks like it's spun up, that's why I'm recommending creating a dedicated API surface that's smaller and more maintainable for your plugin; in this case we just have these lean responses that are simpler for the LLM to use. I think I've seen a few questions; let me answer them in the sequence they came in. The first one was: how does the LLM choose the right plugin to use for generating an answer, let's say we have many plugins?
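As an aside, to make that third point about return types concrete, here is a hypothetical before/after for the same endpoint; both payloads are invented for illustration:

```text
Raw service response (every property; hard on the model's token budget):
  {"value":[{"id":"74","listId":"eeae…","name":"feed the dog","state":"todo",
             "createdDate":"2023-06-01T09:00:00Z","etag":"W/\"0x8DB…\"",
             "_self":"https://…/lists/eeae…/items/74"}],
   "continuationToken": null}

Trimmed plugin response (just what the LLM needs in order to act):
  [{"id":"74","name":"feed the dog","isComplete":false}]
```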
We will get to that: once we're done we're actually about to test it, and I'll show y'all how that works with planners. There was also a question about how to spin this up: we don't have a container, it's not a Docker Compose kind of thing, but we do have some starter scripts that make it easier to get started, which is a great segue. Okay, so we have our API with these different endpoints; now we want the LLM to actually use it, and how do we test it? The challenge we find ourselves in today is that, one, there's a waitlist, and even people within Microsoft, even when we annoy the OpenAI people for access, still don't get access. And it costs money, and no one likes spending money. So what we've done on the Semantic Kernel team is basically create a ChatGPT-style clone where you can import and test your plugins with logic similar to what ChatGPT has, so you can start seeing how your plugin would perform in something like ChatGPT or, in the future, Teams, Business Chat, or Bing Chat. That is Chat Copilot, which is Semantic Kernel's reference app for testing all of your plugins. So if someone asks, hey, is there an easy way to spin these things up and test them, I would highly, highly, highly recommend Chat Copilot as your reference application for how you would build an advanced AI product, which is exactly what Semantic Kernel is for. If you search for "Chat Copilot Semantic Kernel" it will pop up; this is its repo. You can see a sample of it working, and we have some great starter scripts. This is where, if you're trying to follow along, it would probably take too long and you might have to try it offline after the session, but basically you give the scripts the information about which AI service you're using and what your keys are, they set up all the configuration, and then you can start it and actually use this thing.

That is exactly what I want to do. So here I am inside the Chat Copilot repo, already inside the scripts folder, and I'm going to go ahead and start it up. This starts both the backend (we have this webapi folder) and the frontend (we have this webapp folder; it's all React), and in a bit you'll see our chat application where we can start testing our plugin. It's a standard chat interface; if I want to, I can create some new chats. What's most important for us is that we want to start loading in our plugin. We have this nice little card at the top for custom plugins, so we're going to select Add, and we'll enter where our Azure Function is being served, which is localhost, and it will look for the AI plugin manifest. It performs the exact same validation that OpenAI performs, so you can make sure your ai-plugin.json file and your OpenAPI spec are working correctly. Once we've added our plugin, we can enable it, and now this is the moment of truth where I pray to the demo gods that everything works out. We can now ask the AI questions that require the use of our plugin, so I can say something like "can you tell me all the tasks I have". One of the questions was: how does the LLM know which plugin to use? Behind the scenes it's using what's called a planner; we'll dive into that in a bit so you can actually see how it works. A planner is basically just a really big prompt, a really big prompt that tells the LLM: you have all of these options, these are all the things you can do, this is the user's goal, tell me how you can use those functions to achieve that goal. And what we're seeing here in our Chat Copilot application is it doing just that: it sees we have this function called GetTodos, and it's returned back a plan, which is this
really simple plan: just one step that gets the list of tasks and returns it back to the user. And what's nice is I can see the plan before it actually runs, so I can confirm it. I'm going to say: yes, this looks great, Semantic Kernel, I'm convinced; take that plan that was generated by the LLM, run it, and give the result back to me. What we're seeing right here is it spitting out raw results; this isn't great, and we'll fix that in a bit. Let me show you what it's doing right now. If we click on this little info button, and this is what I really like about our Chat Copilot experience, it actually shows you the inner workings, which is something you don't get with ChatGPT; that's kind of a black box. Here we can see what was sent as details to the LLM, and everything inside this result is the data that came back from our plugin; it basically pasted all of it in. If we went back to our OpenAPI spec and ran the GetTodos endpoint, you'd see it's the exact same content. So that's what's given to the LLM, and the LLM is then just like: well, this is what you wanted, this is what I got from the API, I'm going to spit it back out to you. Now this isn't ideal, right? We built a plugin, we tested it, and we realized it just spits out non-helpful things, so we need to find some way to poke the LLM into being more friendly. One way of doing that is to add an additional function to our set of APIs that summarizes, that makes it more usable. And to do that, what we're actually going to use is an LLM inside our plugin. This is a big moment, so I want to say it again: we can use LLMs, we can use AI, inside of our plugins. A lot of people think plugins just call APIs on existing services; you can also make the plugins themselves smart.

So that's exactly what we're going to do. If we go back to our plugin, we can see I have a folder here called prompts, and one called SummarizeListOfTodos. These folders are what we call, in Semantic Kernel, semantic functions. They're composed of two files: one is a configuration file that, again, semantically describes what this function does (in this case, it summarizes a list of tasks), and it also comes with a prompt that we send to the LLM so we can get the information back. This particular prompt asks the LLM to summarize some information, telling it what's most important. What's great about semantic functions is that with Semantic Kernel you can templatize your prompts: here we templatize the list of items that is sent in, so every time you call this function you get behavior tailored to the input. So I've created these semantic functions; now we actually need to expose them as part of our Azure Function. I mentioned we were ignoring the semantic function generator; you can take the blindfolds off, we can look at it now. Basically, it looks for all of the semantic function files and creates Azure Function endpoints for us to call. I'm going to come back to my project folder and enable this generator, so that when I run the Azure Function again we'll have those two additional endpoints. Just to recap what we've been doing: we created our basic plugin, which had some API calls; we tested it in Chat Copilot and realized it wasn't delightful, kind of scary to humans ("this is scary, I don't like this"); and so now we're going back to our plugin and making it better by giving it some intelligence, so it's smart enough to make the output a little more readable. It looks like our function is up; if you refresh the page you should see at least two new functions there. And now, if we refresh our Chat Copilot, it'll be back to a clean slate; we add our plugin back in, and now it knows about these new functions.
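For reference, a semantic-function folder like that SummarizeListOfTodos one is just two small files. The layout below follows the convention Semantic Kernel used around the time of this talk, but the exact field values and prompt wording here are illustrative, not copied from the sample repo:

```text
-- SummarizeListOfTodos/config.json --
{
  "schema": 1,
  "type": "completion",
  "description": "Summarizes a list of to-do tasks into a short, friendly overview.",
  "completion": { "max_tokens": 256, "temperature": 0.0 },
  "input": {
    "parameters": [
      { "name": "input", "description": "The list of tasks to summarize.", "defaultValue": "" }
    ]
  }
}

-- SummarizeListOfTodos/skprompt.txt --
Summarize the following list of tasks for the user.
Call out which tasks are still incomplete, and keep it to two sentences.

{{$input}}
```

The description in config.json serves the same role as the API descriptions earlier: it's what the planner reads when deciding whether to use this function.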
Now we're going to ask the copilot: "please summarize my tasks". Again it's sending this request over to the planner (again, I'll show you what the planner is in a bit), and here we now have not just a one-step plan but a two-step plan: the first step gets the to-dos, and the second step calls this new function we've created, summarize the list of to-dos. What I really like about the planner is that it's smart enough to take the output of the first step and pass it as an input to the next step. Pretty cool stuff; you can see how it can automatically create these chains of functions. So I'm going to go ahead and say "yes, proceed", and it's going to run. While we wait, the next couple of questions: "with what you've explained so far, how does the planner come into play and support the orchestration of plugins?" That's a great segue, so let me talk a bit about what a planner is. A planner, as I mentioned earlier, is this: you give it a goal, and there's a big prompt that takes that goal and creates a plan. We actually have three planners, and they basically show the evolution of how the entire AI community has been thinking about these things. Generation one, we call it action planner: it chooses a single function to call. Horrible at anything complex, pretty dumb; I don't expect many of you to use it. Generation two is what you've already seen in Chat Copilot; it's called sequential planner, and it generates a complete plan to accomplish a goal. What's nice about things being open source is that you can actually see how they work; if you're curious how it knows, we can take a look at the code and find out. If we go into our .NET folder and into the planning extensions, we can see the different planners I was describing: action planner, sequential planner, and stepwise.
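In spirit, a sequential planner boils down to surprisingly little code. Here is a toy, self-contained sketch, not Semantic Kernel's actual implementation: the function descriptions get pasted into a prompt, a model (stubbed out here as a hard-coded function) returns an XML plan, and the runtime parses the XML and chains the steps:

```python
import xml.etree.ElementTree as ET

# A toy registry of "plugin" functions with semantic descriptions.
# The descriptions (not the code) are what the planner prompt exposes to the model.
FUNCTIONS = {
    "GetTodos": ("Gets the user's list of tasks.", lambda _: "walk dog; buy milk"),
    "SummarizeList": ("Summarizes a list of tasks.", lambda x: f"You have 2 tasks: {x}"),
}

def build_planner_prompt(goal: str) -> str:
    """Paste every available function + description above the goal, planner-style."""
    lines = [f"{name}: {desc}" for name, (desc, _) in FUNCTIONS.items()]
    return ("Available functions:\n" + "\n".join(lines)
            + f"\nGoal: {goal}\nRespond with an XML <plan>.")

def fake_llm(prompt: str) -> str:
    """Stand-in for a real model call; returns the XML plan the prompt asks for."""
    return '<plan><function name="GetTodos"/><function name="SummarizeList"/></plan>'

def run_plan(xml_plan: str) -> str:
    """Parse the XML and run each step, feeding each step's output into the next."""
    result = ""
    for step in ET.fromstring(xml_plan):
        _, fn = FUNCTIONS[step.get("name")]
        result = fn(result)
    return result

plan = fake_llm(build_planner_prompt("summarize my tasks"))
print(run_plan(plan))  # → You have 2 tasks: walk dog; buy milk
```

The function names here echo the demo's endpoints, but everything else (the XML shape, the chaining convention) is simplified for illustration.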
I'll talk about stepwise in a little bit. If we go into the sequential planner, we can actually look at the prompt that is sent to the LLM to create a plan. Here it provides some instructions, like: this is the schema you should use, there's an input and an output, here's how you pass things; this is what it looks like. Probably most important is that at the top we paste in all the available functions, and at the bottom we provide the goal. This is all sent to an LLM, and that is enough for it to bring back some XML; the rest of this code just parses that XML so that Semantic Kernel can run whatever the LLM produced. Whenever you're bored and wondering how planners work, it's fun to just read these prompts and go, oh, that's kind of cool. The last one we call the stepwise planner; we believe this is more or less what ChatGPT uses, and more or less what Bing uses. Instead of asking the LLM to create an entire plan all at once, which can be very hard and sometimes gives wrong answers, it's allowed to iterate. It basically tricks the LLM into something like thinking, or close enough. If we look at the prompt, we give it instructions for the steps: start with the question, have a thought (how would I actually address this?), perform an action (do something in the real world), get some results back and observe them, and then loop: thought, action, observation, over and over, until it has a final answer it can return to the user. It's kind of what we all do, right? We interact with the world, see what happens, and try again until we get there. What I'd like to actually show you is that you can of course test this in Chat Copilot, though we probably don't have time for it, but you can also, of course, write things in code, and we haven't seen enough code yet.
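That thought → action → observation loop can be sketched in a few lines. This is a toy illustration, not the real stepwise planner: the "model" here is a hard-coded script of turns, whereas a real planner would send the growing scratchpad back to the LLM on every turn to get the next thought and action:

```python
# Toy tools standing in for the plugin's HTTP endpoints.
def mark_complete(task_id: str) -> str:
    return f"task {task_id} marked complete"

TOOLS = {
    "GetTodos": lambda _: "id=74: feed the dog",
    "MarkTodoComplete": mark_complete,
}

# Scripted (thought, action, action_input) turns a stub model might produce;
# "FINAL" means the model is done and its input is the answer for the user.
SCRIPT = [
    ("I need to find the task about the dog.", "GetTodos", ""),
    ("Task 74 matches; mark it complete.", "MarkTodoComplete", "74"),
    ("The task is complete.", "FINAL", "Done: feeding the dog is marked complete."),
]

def stepwise(script):
    """Run the thought/action/observation loop until a final answer appears."""
    scratchpad = []
    for thought, action, arg in script:
        if action == "FINAL":
            return arg, scratchpad
        observation = TOOLS[action](arg)                   # act on the world
        scratchpad.append((thought, action, observation))  # remember what happened
    raise RuntimeError("ran out of steps without a final answer")

answer, trace = stepwise(SCRIPT)
print(answer)  # Done: feeding the dog is marked complete.
```

The tool names mirror the demo's endpoints, but the loop itself is the generic pattern: each observation becomes context for the next thought.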
enough code yet, so I want to pull that up. What I have here is a simple program file where I am creating a Semantic Kernel kernel, and I am pulling in the plugin manifest for our to-do plugin; let's just call it the to-do plugin here. And we're going to ask it a simple question: "I finished feeding the dog, can you mark it as complete?" Let's make sure we actually have that task... okay, so we have this task, "feed the dog," and we want to mark it complete because I finished it. What this code is basically doing is creating a new planner using the stepwise planner, so it's going to do that thinking loop, and then we're going to get a result. Someone asked earlier, "what's the difference between Semantic Kernel and LangChain?" This is probably one of the biggest differences: we really, really want to make sure that people can see what the heck is going on. So we've added a ton of debug logs, we're adding hooks so that you can start adding telemetry, and we have App Insights support, because all this stuff is really complicated; an AI can regularly call random stuff, and if something breaks you need to know why and track down what happened. By using something like Semantic Kernel, we're just doing a better job of actually logging all this stuff. What we're seeing in the debug console right now is actually the LLM performing this chain of thought. So, right, it's starting with the question: "hey, I finished feeding the dog, can you mark it as complete?" It has its thought on how to actually achieve that. The first thing it says is something like, "I need to find the task to actually mark it as complete." It first performs that skill, get to-dos, and it gets back the list of all the to-dos returned by the API. It now says, "okay, I have a list of tasks, but I need to find the one that is actually about feeding the dog," because it's not necessarily
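The thought / action / observation trace visible in the debug console can be mimicked with a scripted stand-in for the LLM's decisions (everything here, including the to-do store and function names, is hypothetical and not the Semantic Kernel API):

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("stepwise")

# A fake to-do store standing in for the real to-do API.
TODOS = {74: {"title": "feed the dog", "done": False}}

def get_todos():
    """Return every to-do (the 'get to-dos' skill)."""
    return TODOS

def mark_todo_complete(todo_id):
    """Mark one to-do done (the 'mark to-do complete' skill)."""
    TODOS[todo_id]["done"] = True
    return "success"

# A scripted stand-in for the LLM's decisions: (thought, action, args).
SCRIPT = [
    ("I need to find the task to mark as complete", get_todos, ()),
    ("Task 74 is about feeding the dog; mark it done", mark_todo_complete, (74,)),
]

def run(script):
    """Replay the thought -> action -> observation loop, logging each step."""
    observation = None
    for thought, action, args in script:
        log.info("THOUGHT: %s", thought)
        observation = action(*args)
        log.info("OBSERVATION: %r", observation)
    return observation

result = run(SCRIPT)  # leaves TODOS[74]["done"] == True
```

In the real thing the "script" comes from the LLM one step at a time, with each observation fed back into the next prompt; the logging is the part the talk is emphasizing.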
clear. And so there's another function that I added called get relevant to-do, where it passes in all of that information plus a description of what it's searching for, and it comes back with "I've got element two," with the ID of the relevant to-do. Now, if we scroll to the bottom, once it's gotten that ID it has another thought: "okay, I found the task, I'm successful; it has the ID of 74-something; I'm now going to mark this as complete." So it calls the mark to-do complete function, it observes that it's successful, and it's done. Here in the terminal we can see it completed in four steps and used three different skills, and if I refresh this page... aha, the task is marked as done. And so that's kind of the power of the stepwise planner: it can do more complicated things than the sequential planner. If I had just asked the sequential planner to do that, it would probably just, like, vomit and give up, whereas the stepwise planner can actually get there. I'm going to transition now and see if there are any other questions, and then I want to wrap up and give y'all some last remaining thoughts. As I wrap up this session, are there any more questions? "Are there any specific limits on context size for Semantic Kernel?" It depends on the model. Each model, like GPT-3.5 Turbo, has its own context limit; I don't remember the exact numbers off the top of my head, but basically the more advanced the model, the larger the context size typically is. That's also why, if you had a keen eye, you probably noticed in the debug output which model I was using for the planner: when you're using a planner, you typically need more tokens. "Does OpenAI support GPT-4 now? Two months back it only supported GPT-3.5 Turbo." This may already have been answered in the chat, but basically you need access to GPT-4; you have to be on a waitlist, whether that's OpenAI's waitlist or Microsoft's.
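A naive stand-in for a "get relevant to-do" helper like the one described, using keyword overlap instead of an LLM (the function name and the data are hypothetical, purely to show the shape of the step):

```python
def get_relevant_todo(todos, description):
    """Pick the to-do whose title shares the most words with the description.
    A crude keyword-overlap stand-in for the LLM-backed matching function."""
    query = set(description.lower().split())
    best_id, best_score = None, -1
    for todo_id, title in todos.items():
        score = len(query & set(title.lower().split()))
        if score > best_score:
            best_id, best_score = todo_id, score
    return best_id

todos = {74: "feed the dog", 75: "water the plants"}
print(get_relevant_todo(todos, "I finished feeding the dog"))  # 74
```

The real step delegates this matching to the LLM, which is what lets it bridge wording differences like "feeding the dog" versus "feed the dog."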
Access to GPUs is very tight right now, so you might just have to wait a bit until more servers are stood up with the new models. "Again, this may have been answered in the chat, but I'll ask anyway: should the plugin always be an Azure Function, or can it be some sort of web service?" Oh yeah, make it whatever you want. What I'm hoping you got from this is that this isn't rocket science: you can create some API endpoints, and you can create them however you want, whatever makes sense from an ops perspective. All that matters is that your endpoints are described in a way that the LLM can understand. "Do we have a Python flavor coming for the plugins to be developed?" Yes, it's actually already available; we have a starter for that. It's not documented yet, so you can hate me for that; hopefully by the time the hackathon rolls around in a couple of weeks it'll be done. "Can the stepwise planner get into an infinite thinking loop? Is there a breakpoint if it's stuck?" In the configuration you can set max steps, but yes, it can get stuck in a loop; that could happen. So normally, depending on your scenario, you would say the max iterations is, for example, five. "Does it matter which version of GPT-3.5 Turbo it is, version 0613 or 0301? In a recent hands-on this mattered: 0301 was only available in East US but not East US 2." So, there's also a GPT-3.5 Turbo "version 1" that does not work at all, so don't use that. I've only tested with the 0301 version; I have not tested with the June version, 0613. My assumption is that it works; I haven't heard otherwise. "And can I do this with any application, as long as there is an API for it and I can create these planners, et cetera?" Yes, that's the goal: you can give these AIs the power to do anything, which is scary, and that's why in our work we practice
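The max-steps safeguard mentioned in that answer can be sketched like this (hypothetical names; it only illustrates the idea of bailing out of the thinking loop rather than Semantic Kernel's actual configuration API):

```python
class StepLimitExceeded(Exception):
    """Raised when the planner's thinking loop exceeds its budget."""

def stepwise(ask_llm, max_steps=5):
    """Run the thought loop, but bail out after max_steps iterations."""
    for _ in range(max_steps):
        done, answer = ask_llm()  # one thought/action/observation round
        if done:
            return answer
    raise StepLimitExceeded(f"gave up after {max_steps} steps")

# A stand-in LLM that reaches a final answer on the second round:
rounds = iter([(False, None), (True, "marked as complete")])
print(stepwise(lambda: next(rounds)))  # marked as complete
```

If the stand-in never returns `done=True`, the loop raises instead of spinning forever, which is the behavior the max-steps setting buys you.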
responsible AI. But that's kind of the beauty: anything that has an API endpoint works. I'll take one last question here. "How would it work when you have users authenticated? Does it have access to the user ID or email, and is it then able to use that as a parameter to an API call?" Great question. The copilot chat experience does; it has a login experience, and there are some samples of how to pass authentication around. The OpenAI plugin standard does have some ways of defining authentication, and I'm told by the Azure AD team that it was well thought through, but it is up to you to handle authentication and pass tokens down to the plugins. Okay, any final questions? Just come on to your mic and ask them. Otherwise, Matthew, is there a real quick wrap-up you want to do? One thing to note: planners aren't perfect. One of the recommendations I would have is, instead of having a bunch of atomic functions, wrap them up so that the planner doesn't have to do as much thinking. We're just getting started with the kernel. I think I'll end it there; we're at time now. Thanks, and get excited!
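That closing recommendation, wrapping several atomic functions into one composite call so the planner has less chaining to do, might look like this (all functions hypothetical):

```python
# Three atomic functions the planner would otherwise have to chain itself.

def get_todos():
    return {74: "feed the dog"}

def get_relevant_todo(todos, description):
    # Naive stand-in for the LLM-backed matching step.
    return next(iter(todos))

def mark_complete(todo_id):
    return f"marked {todo_id} complete"

def complete_todo_by_description(description):
    """One composite function: the planner picks this single step instead
    of reasoning through three separate calls."""
    todos = get_todos()
    todo_id = get_relevant_todo(todos, description)
    return mark_complete(todo_id)

print(complete_todo_by_description("feeding the dog"))  # marked 74 complete
```

Fewer, coarser functions mean shorter plans and fewer chances for the planner to wire a step up wrong.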
Info
Channel: Alex Chao
Views: 2,612
Keywords: semantic kernel, chatgpt plugins, plugin, ai, langchain, semantic kernel tutorial, semantic kernel planner, copilot chat, microsoft copilot, gpt4, semantic kernel plugin, openapi tutorial, c#, dotnet, c# tutorial, office hours, microsoft, sk, planner, skills, plugins, chatgpt, openai, copilot, copilot stack, tokens, token management, intent detection, machine learning, large language models, open source
Id: 9L29ObRCHjE
Length: 47min 22sec (2842 seconds)
Published: Fri Aug 25 2023