.NET ๐Ÿ’– AI | .NET Conf 2023

Captions
Hey friends, isn't this a lovely day? We're just having all kinds of fun at .NET Conf, John.

I feel like a friend in Scott's presence. I've seen so many of his videos, so I feel a bit awkward and awesome here right now.

Well, flattery, sir, will get you everywhere at .NET Conf. I appreciate you as well: I've read your books, you were recently on my podcast, and you're a joy to work with. So we've teamed up to talk to you today about .NET and about AI, about how I think about it, how you think about it, and see if we can think about it together.

Interleaving processes, indeed.

So I'm going to bring up my machine and explain a couple of things I've been thinking about. I happen to be at the OpenAI website, and I went there on purpose rather than Azure, because I think we need to talk about the base of the pyramid as we build up to the intelligent apps we're going to see a little later from Maria and Luis.

You want to go to the lumber yard, basically.

Exactly. We're going to build this thing from the basics. A lot of people are talking about ChatGPT. It's fun to say "ChatGPT," it's fun to say "AI"; it has good mouthfeel, and I think that's a problem, because it gets a lot of people hyped, and they forget that data science and ML and statistics and math have been around for 40 or 50 years. People are confused because it feels like AI just happened last year, but there's a lot of interesting science and work that's been happening in this space for a long time. So maybe we can demystify it a little. Is that cool?

Let's go.

All right, I'm in here, and I was just going to say, "It's a beautiful day, let's go to the..." I like to dictate my stuff because I don't like to type too much. Now, in the corner of my ChatGPT here, I've turned on "full spectrum of probabilities." If you just say, "Hey AI, tell me what the next word is," in this case after "It's a beautiful day, let's go to the...", we don't know what it's going to say. We can actually ask it to show the full spectrum of possibilities and turn on token highlighting, which shows which token was the most likely to be generated. So, John, it is a beautiful day: where would you go?

I would go to Seattle, where it's always sunny.

It is always "sunny" in Seattle. I'm from Portland, and maybe I would go somewhere else; maybe it's sunny there too. But that's interesting, because you and I have context about each other. I know where you're from, you know where I'm from, and as we learn to finish each other's sandwiches, we know what the next word is, because John and I, as new friends, are becoming statistical models in each other's heads about the next thing the other is going to say. Our contexts are lining up.

Classic John.

I knew you were going to say that, and I appreciate that you said it. You see how that works? I've been married for 25 years; my wife now knows exactly what I'm going to say. It makes me wonder, from an identity perspective, am I a good husband, or am I just the most likely thing that handsome one is going to say next?

And as someone who's watched all your videos, I complete the Azure sentence.

As your spouse in that way, I appreciate that. So let's see where John and I would go: "It's a beautiful day, let's go to the..." Ah, interesting. OpenAI said "beach," and if we look at that full spectrum, it suggested "beach" was about 21% likely. Surprisingly, not "park"; "park" is really common. We're effectively playing Family Feud.

But if you're in Hawaii, you might not say "park," right?

That's a fantastic point. Let's actually test that theory. I'm going to take that text, make a little space before it, and say: "We're currently in Hawaii. My wife was born in Hawaii, and
we just love this state." Now, of course, none of this is true, but we're going to test it anyway.

That's called grounding these days.

Yes, we are grounding. Look at that. Now this is interesting: what's the word "North" doing there?

Well, because North Beach is a known place on the Big Island, right?

So maybe that's why. By putting this little bit of context there, we could also say this: "I'm with my friend John. He's from Seattle. It's a lovely day here. We're big fans of the Pacific Northwest." Okay, now let's remove "the" and see if it does something else. Maybe it wants to go to the Gum Wall in Seattle, who knows. Oh, that's interesting: "let's go to the beach," but then it said, "and enjoy seafood from Pike Place Market."

You know, my mom was born and raised in Hawaii. Could you add in a little more context, like "John's mom is from Hawaii"?

Oh, that's a great point. "John's mom is from Hawaii, and she loves it there." So this is interesting. Would you call that conflicting context, or just more context? We don't know. Oh, look: it's built a whole narrative about how we've been friends since high school, and look, we both agree that someday we'll make a trip to Hawaii and take John's mom with us.

Ah, that's beautiful. She'll love that. Thank you, Scott.

You all saw it just now: "thank you, Scott." That's so cool, though, because what it's done is bring all of that context together and build a whole thing around it. So the question is: is the AI smart? Is the AI thinking? We don't know. We don't really know how these neural nets are working, but they are picking the next likely thing, using all that previous context.

And we only gave it two sentences of context. You've known your mom your whole life. You could give it a whole book of context and stories from your childhood. That could be context you feed into the model, couldn't it?
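The "full spectrum of probabilities" view in the demo is just the model's probability distribution over the next token, and grounding shifts that distribution. Here is a minimal Python sketch of the idea. To be clear about assumptions: the probabilities below are invented for illustration (only the 21% for "beach" comes from the demo), and a real model computes its distribution from billions of parameters rather than an if-statement.

```python
import random

def next_token_distribution(prompt: str) -> dict[str, float]:
    """Return a made-up distribution over the next token.

    A real language model would compute this from the whole prompt;
    here, grounding is faked by checking for the word 'Hawaii'.
    """
    if "Hawaii" in prompt:
        # Grounding context shifts probability mass toward 'beach'.
        return {"beach": 0.55, "park": 0.10, "mountains": 0.10, "store": 0.25}
    return {"beach": 0.21, "park": 0.30, "store": 0.25, "movies": 0.24}

def most_likely(dist: dict[str, float]) -> str:
    """The token that 'token highlighting' would mark as most probable."""
    return max(dist, key=dist.get)

def sample(dist: dict[str, float], rng: random.Random) -> str:
    """Pick a next token at random, weighted by probability."""
    tokens, weights = zip(*dist.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

base = "It's a beautiful day, let's go to the"
grounded = "We're currently in Hawaii. " + base

print(most_likely(next_token_distribution(base)))      # park
print(most_likely(next_token_distribution(grounded)))  # beach
```

The point of the sketch is only the shape of the mechanism: same sentence fragment, different surrounding context, different winner.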
I was never a competitive debate person, but they're really good at that, aren't they?

Extremely good: adding new extemporaneous speech, adding new context. And the more context you pile on, the more that next word is going to change. I'm a little surprised it didn't say "beach," but we could also say something like "I'm in London," and then maybe it sends us to the London Eye, or maybe it sends us to a museum. That context is important to understand, because we don't always know what the context is. It's unseen context.

Interesting. In conventional code, variables are the context.

That's a really great point, because we're effectively calling a function, aren't we? It's this big world function on the world computer, and all those parameters you pass in absolutely are context. And that's really interesting, because when I started in computers, they said, "Don't trust user input," right? You have a text box, you put your Social Security number in it, and you write a regular expression to validate it. We did everything we could to not trust user input, and now, fast forward, the entire internet is just one giant textarea, and we're still not supposed to trust user input. Who knows what they could type in there?

You're using speech recognition as input, but you couldn't do that writing regular code.

That's a great point. Writing my regular code by speech is more difficult, and it would change my context.

So one of the things we can also do here is see what's behind that prologue: not just the context I was providing about your mom and where we grew up, but also how I want this thing to behave.

Personality, maybe?

That's a great way to put it. I don't like anthropomorphizing these things, because it's not a person, but we are clearly interacting with it like it's a person.

Tonality.

Yeah, I like that. So in this case, I'm saying, "You're a helpful assistant," and I'm assuming that within "helpful" is a certain amount of kindness and patience. I would prefer it not be rude; I don't want to call it and have it go, "Ugh, this person again."

Okay, so one of the things I like to do: I like tacos. I have tacos basically all the time.

Why tacos?

I don't know. Tacos are my jam, breakfast, lunch, and dinner, and when I'm on an expense account and my wife's not here, nobody can stop me from having double meat.

Calzones are similar?

No, calzones are too many carbs. Two totally different things. So I could say, "I need a taco recipe." This is pretty straightforward: we have "I need a taco recipe" and "You're a helpful assistant," so I'd expect a pretty straightforward taco recipe. "Here is a simple and delicious taco recipe for you." Okay. This is coming from a very large language model that has effectively been trained on the corpus of all the text in the world, and one of the things that's interesting, of course, is that the internet is 49% nice, 49% mean, and 2% still deciding.

It's guessing?

It seems to be guessing really well, but I have no way of knowing if this is good. I don't know what part of the internet it got this from, the nice part or the not-nice part, and I don't have a kitchen to test any of this stuff.

I could also change that tone. Where it says "You're a helpful assistant," I could say: "You're a belligerent assistant. You'll eventually help me, but you're not going to be happy about it, and you're going to be sassy while we talk." This could be tragic; it could be really awful. We don't really know. But also, I as the user of the chat assistant don't know that bit of tonality that was set up by the user-interface people. So if I say "I need a taco recipe" to my new belligerent assistant, we'll see something different. Let's hit refresh. It is being belligerent. It's so belligerent it refused to give me a taco recipe! That is interesting. Let's try it again.

You know that model parameter you have there?

Yeah, that's a great point. You're pointing out that the model we're using is GPT-3.5 Turbo.

That's the fast one. Turbo, right?

Yeah, it is. Okay, here we go: "Drain any excess grease, because health." "I hope you know how to use a stove." It's being pretty rude. "One packet of taco seasoning, because clearly you need all the help you can get." Wow, that hurts.

It does hurt. If this were the way it was going to talk to me, I might not want to spend any time with this model. There was an article a couple of weeks ago about research showing that if you are kind to a very large language model, it will be kind to you, and you will get better results. I think that's because if you use kindness words, you end up on the kindness part of the internet: the kinder Stack Overflow questions, those midwesterners.

You know those Iron Chef people? There was a time in the late '90s and early 2000s when the chef guy was mean: "You're an idiot sandwich." They were really cruel, and we cheered on those mean chefs. Those mean chefs have become kind and nice now. So I would much rather have a nicer, kinder assistant.

That very large language model, I could then call myself. I could log into Azure OpenAI Studio, get the REST APIs, and write this myself. But things get complicated really fast.
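That hidden prologue is just the first message in the request: the chat-style APIs take a list of messages where the "system" role carries the tonality the UI developer chose and the "user" role carries what the end user typed. A minimal sketch of building such a request body (the message schema follows OpenAI's public chat-completions format; the helper function itself is our own invention for illustration):

```python
import json

def build_chat_request(model: str, system_prompt: str, user_prompt: str) -> str:
    """Build a JSON body in the chat-completions style.

    The 'system' message is the hidden tonality set by the app developer;
    the end user only ever sees and types the 'user' message.
    """
    body = {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }
    return json.dumps(body)

helpful = build_chat_request(
    "gpt-3.5-turbo",
    "You're a helpful assistant.",
    "I need a taco recipe.",
)

belligerent = build_chat_request(
    "gpt-3.5-turbo",
    "You're a belligerent assistant. You'll eventually help me, "
    "but you're not going to be happy about it.",
    "I need a taco recipe.",
)

# Same user question, different system message: very different replies.
print(json.loads(helpful)["messages"][0]["content"])
```

Note that the user's text is identical in both requests; only the invisible system message differs, which is exactly why the user can't tell what tonality was set up for them.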
I want to orchestrate things. I want multiple chatbots to talk to each other, all of these pieces talking to each other in some kind of orchestration: not just one chef, me making tacos, but a sous chef and a prep chef and all of these roles. My kitchen is going to get very messy very quickly.

Well, as your AI co-pilot, literally sitting next to Scott here: could you go back a couple of frames, to the OpenAI interface? There's that little flipper where you chose the model. We know there are all kinds of models; it's like walking into a supermarket. Which model do you use? You chose 3.5 Turbo, but 4 is a pretty good model.

That's a great point. I chose 3.5 Turbo on purpose. My thinking was that it's roughly sixteen times less resource-intensive than GPT-4. It may not be as good, but "good" in this case depends on my needs, and my needs were simple, so I wanted to use the smallest model that would get the job done, to be respectful of cost and of resources. I don't know if that was the right thing to do, though.

Well, you know, in Japan they have this business of selling very expensive fruit: a melon can cost $200, a peach can cost $150. It's very good fruit. But is it $150 good?

That's a question.

Sometimes it is, if you're really in the mood for that peach. I think of GPT-4 as that very expensive peach, but as we know from OpenAI Dev Day, that new model is going to get a lot cheaper.

That's a great point, because the quality required to bring it to market will be understood, it'll be codified, it'll become commoditized, and it'll go down-market, right?

Right. We got used to this relatively inexpensive model being "good," and then suddenly, every few months, we're like, "Wait, I thought that was expensive," and now it's cheap and good, and it happened within a few months. So weird.

So I think it is valid to use the cheapest model that meets your needs, but don't underestimate that the others are going to become cheap soon. Remember the era of the Pentium 150, the Pentium 170? We had to have the latest, and it kept dropping in price while getting better. Same thing happening here.

That's a great point. And you can in fact buy Pentiums and even 486s now, in 2023, for $3 in a bin.

So that slide of yours, the co-pilot stack, I find really useful as a segue to what Semantic Kernel is and why we love AI in the .NET universe. This toaster here is a kind of metaphor for Semantic Kernel: you turn up the temperature, you're cooking something, you're orchestrating something.

Okay, let's go to a two-shot, because John has actually brought a toaster. There we go. And maybe, to get in the mood, would you mind putting on a cozy AI kitchen apron?

Would you do mine?

We're going into cooking mode. We are now entering John's cozy AI kitchen, being respectful of our microphones. Okay, thank you, sir. Here we go.

So we're going to do some cooking, and we do the cooking to understand the materials. This is a fanboy moment, because I'm really new to Azure and I've learned so much through Scott's videos. What I realize, as a teacher myself, is that you have to understand where the materials came from, or you cannot understand what this is. I once worked with a curator in
Kyoto who told me a life-changing story. He was once able to interview one of the last miyadaiku, the carpenters ordained by the Emperor as special carpenters who can build temples. These temples in old Japan stand not just for 20 years or 100 years; they stand for a thousand years. So this carpenter was asked: how do you create these temples? How do they last so long, when modern architecture lasts 30 to 40 years? And the carpenter asked back, in Yoda fashion, "Why do you think they stand for so long?" The curator said, "It must be the architecture," and the carpenter laughed at him. He said, "No, no, it's actually very simple. We miyadaiku go to the mountain, pick trees from the east side of the mountain, bring them back to the site, and use them only on the east side of the site. Then we go back to the north side of the mountain, take trees, and use them only for the north side of the site." All the trees are picked in a way that fits nature's capability, and that's why it stands for so long.

There's an intentionality there, a deliberateness, a focus. It's not just being slapped together; it's being done with a plan.

The raw materials change everything. It's kind of like why I noticed you wanted people to start from the beginning, to go to the lumber yard, because then anyone can understand it. In that sense, that's why, in the cozy AI kitchen... oh my gosh. If this is GPT-4, the model that was expensive and is now a little cheaper, it is heavy. It is foundational.

Foundational, exactly. It is a foundational model. And what we do, you know, you were using tokens.

Yeah, that's a great point. I didn't get too far into that, but each one of those words gets broken down into tokens. In this case, we have tokens in the form of dice. Dice, rice.

Dice rice, exactly, there we go. I'm dropping rice all over the place.

Because they're random, right?

They are totally random, you're absolutely right, and we then pick the next token. I feed it tokens, and I get more tokens out.

So we need tokens, we need an orchestrator, we need our foundation model. So let's grab some semantic code, shall we, and put some tokens in the orchestrator. Would you mind pouring some in there? There we go, pour them in. And we're not paying for this? Okay, good, perfect. So we've got some tokens in there; that's semantic code. But as a longtime coder yourself, we didn't used to have this fancy semantic code; we had good old-fashioned code, right? The key to these systems is that you want to mix semantic code with native code. The two go together. So add some conventional code in there.

Okay, so we've got two kinds of code. That's a good point, because I might have an existing website, existing code, old-fashioned code I could have written yesterday.

Old-fashioned code is good. It's nutritious.

Okay, so that's the foundation: the generative model, the completion model, and that is ungrounded. We want to ground it. What are we going to do? Well, we don't use this GPT-4 model; we use a so-called embedding model, like Ada.

Ada. That's interesting, because I felt a moment ago maybe you were upselling me, saying, "Use the biggest, fanciest one."

No, no. Ada is a smaller model, and if we switch to my slides very briefly, you can see some of the models in the GPT family of models. Ada is a nice one for formatting text, for parsing. It's very inexpensive, fractions of pennies, but it's a solid foundational model for embeddings.

And with that slide up, for those of you who have heard these names before: A, B, C, D. Ada, Babbage, Curie, Da Vinci. Ada the oldest, Da Vinci the newest. I had a moment when that clicked for me.

Interesting. Also alternating in gender, too, which is pretty cool.

It's also interesting to point out, if you think about these in terms of planets, Da Vinci being the Jupiter of the family and Ada being the Earth, that this is not to scale. It's fair to say that at 175 billion parameters, if Da Vinci were drawn that large, Ada would become a pixel. You've got to understand how large these things are: the difference between 175 billion parameters and 2.7 billion doesn't fit on a slide. And when you look at it you think, "Ada is so small, it must not be good. What is it good for?" But we love it for embeddings.

And embeddings: we use Ada in particular to take words and convert them into long vectors of numbers.

If you want to read what it says here: "long-grain embedding vectors, like your grandma used to make." These are very comforting vectors. These are vectors for our vector database, in effect. We're adding grounding. If we can add some grounding into the Semantic Kernel, excellent.

Okay, so now this combination of embedding models and completion models enables a grounded semantic computation, right?

And by "grounded" you mean it's not going to make stuff up, it's not going to get confused; it's going to be more focused on what we want it to do?

It is going to be less likely to be ungrounded in how it communicates. It's going to be biased the way you bias the system.

That is a really great point, because we talk about bias as if it's a negative thing, but if I'm making a coffee-shop chatbot, I hope it is biased toward my coffee shop and toward selling the products that I have, and that it doesn't start making limericks about coffee.

It's also steering, right? You steer it.

Okay, great point. So that is a simple way of explaining how the two models interact.
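The embedding idea can be sketched with toy numbers: each piece of text becomes a vector, and "grounding" amounts to retrieving the stored texts whose vectors point in the same direction as the question's vector, then feeding those texts to the completion model. The three-dimensional vectors below are invented purely for illustration; a real embedding model such as Ada returns vectors with on the order of a thousand dimensions.

```python
import math

# Toy vector store: in a real system these vectors would come from an
# embedding model; these 3-D values are made up for the illustration.
documents = {
    "Our espresso roast is nutty and sweet.": [0.9, 0.1, 0.1],
    "The cafe opens at 7am on weekdays.":     [0.2, 0.9, 0.1],
    "Semantic Kernel orchestrates AI calls.": [0.1, 0.1, 0.9],
}

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 means 'pointing the same direction'."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def ground(question_vector: list[float], k: int = 1) -> list[str]:
    """Return the k stored texts most similar to the question vector."""
    ranked = sorted(documents,
                    key=lambda d: cosine(documents[d], question_vector),
                    reverse=True)
    return ranked[:k]

# Pretend this is the embedding of the question "When do you open?"
print(ground([0.25, 0.85, 0.05]))
```

This is the coffee-shop bias in miniature: the retrieval step can only ever surface your documents, which is exactly the kind of steering being described.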
And you get this magical interleaving of context and completion together.

This model, GPT-4, does not run on my laptop. It's huge. It runs in the cloud, on GPUs. Where is Ada located? Is it another website?

It's also up there, but as more people are discovering, you can run embedding models locally, and of course you can run these kinds of models locally as well.

Interesting. Smaller ones, so we say "running on the edge." Depending on the size of your company and the size of your project, the edge might be your laptop, in the case of a smaller model, or it could be a machine in your local data center if you have a hybrid cloud.

Yeah. As if I took a water-jet cutter and cut this into a tinier embedding model, a tinier foundation model for completion.

Absolutely. So now, because your audience is expecting .NET, I want to show you something new that we've created in Semantic Kernel to take it up a level, to what OpenAI Dev Day announced: something called assistants. Assistants are an abstraction on top of these components that makes it easier to write this kind of AI.

Let me just ask a quick grounding question for myself. I saw in my little demo that I can log into OpenAI or Azure OpenAI, get a key, and make API calls very manually; I could get down to the HTTP of it all, call into those models, send a string, and get back a string. But I'm going to outgrow that very quickly if I'm trying to build an intelligent app. Is that a fair statement?

Well, the thing is, you're not going to outgrow it; you're going to be swimming as fast as you can. You know how they say the duck doesn't look like it's paddling underneath the water? That's you and AI right now: "I got it, boss, I got it, boss," but you're swimming so fast, oh my gosh, it's hard. So this assistant abstraction makes it a lot easier to work with this kind of AI. One caution, since we talked about grounding and humanization: it is a more humanized metaphor, but what I'm going to show you is not trying to humanize the model. It's an attempt to bring back an idea from the '90s called agents.

I remember agents. The little paper clip, Clippy?

Thank you, yes. And the little wizard, Microsoft Bob.

Bob, thank you, exactly.

Bob was our first deployed agent. So what we're going to do is bring in two things. The first thing is plugins. You know the Semantic Kernel shorthand: plugins, planners, personas. Plugins are the foundation for enabling this AI to go and do things outside of its world. If you look at your co-pilot stack diagram, at the very top is plugins, and plugins essentially extend the capability of the AI. So what we want to do is add a few. Maybe we can add a Bing plugin.

While we're doing that, bring up my computer real quick, and let's just remind ourselves where we are. That orchestrator, trying to have these co-pilots, these agents, talk to each other, requires plugin extensibility, because when I said, "Hey, John's from Seattle, Scott's from Portland," it didn't go to Bing for that. That was all based on what it found in the foundational model. But I'd like it to grow arms and do stuff, to go out and search the web for things, and it's plugins that make that happen. You want to give it a tool belt.

Ooh, I like that. Or an apron, exactly. It's protection. Okay, so we've now given it the ability to access tools on its tool belt, and then we're going to use this new abstraction: personas, also called agents, also called assistants. Now, this is my jumbo jar of personas. Let's open it up. Okay, so let's put a few personas in
there. I think this is where it gets a little weird.

Yeah, I have a fascination with wooden dolls; I can't help it.

They're automatons. Let's call them automatons, computer-science style. So those are essentially persona definitions, this abstraction called assistants, and then we're going to give them some steering: we're going to define what a researcher is, we're going to define what a project manager is. Okay, now, if we can switch to my screen, we're going to show how assistants work in Semantic Kernel with the assistant kernel.

So we're thinking about these personas as we switch over to John's computer, which is the dark-mode machine right there.

Right over here, we've got some definitions under "assistants." Under each assistant we have the ability to define, say, a designer: the image designer, who you are, your design, that creates images.

This is a job description for the agent. That's interesting, because I was over there saying "you're belligerent" or "you're helpful," and you've taken that out, and that is now the definition in this agent.yaml. You've described how you want it to act.

Absolutely, yes, and what models it can use, like GPT-4, and what kind of inputs it takes. A project manager: "You're a project manager. You enable the user to finish things." And you notice there are four of these; you put multiple personas in there.

With the intent of enabling the agents to collaborate with each other to produce an outcome.

Interesting. Okay, if we go back to the kitchen metaphor, because we've got our lovely aprons here: you've got your sous chef, you've got your prep chef. This is about separation of concerns; you've got roles. These personas are explicit, and I love that each one can use the correct model that is appropriate for its job.

Well, I like how you described it: it's AI microservices, basically.

It is AI microservices, which is an analogy that makes sense to me as a .NET developer.

So let me show you this running. I have spun up the agent conversation space: "I want to make a house that is the size of the tallest building in Paris."

So you're asking a lot of it here. You want to make a house, which is a whole thing, but you also want it compared to the tallest building in Paris, so it needs to know that information, which may or may not be current. I don't know where that information is.

And now the project manager has passed the baton to the researcher to go and figure out the tallest building. It contacted Bing (Bing is in its tool belt), went off, got the information, and gave it back to the project manager, and the project manager then constructed the ability to create the house at the right size.

So you asked the project manager agent, which was described with a specific prologue and a specific model, to do something. How did it know that it couldn't do that and then needed help?
How does the project manager know to call on a researcher?

That's an example of scoping the AI microservice: "This is what you're good at," so it stays on point, and therefore, "If you don't know it, your job is to hand off; pick the agent you know about and hand it off." It's an example of how this general AI approach wants to be good at everything, but as you know, when you narrow its scope and focus, it gets really good at one thing, like picking a beach according to your context.

I really like that, because when I interview someone, I like it when they say that they don't know, because that means they know what they know, they know what they don't know, and they're willing to say it. I would rather it be grounded and say, "I don't know, let me ask and I'll get back to you."

And the coolest thing is that only some agents have access to certain plugins. The researcher agent has access to the Bing plugin, but everyone else doesn't, so it seems to manage security as well.

That's a great point, because with those multiple personas, each one having individual access to some third party, I might have them running under different security contexts. And it might not just be Bing; it could be a plugin that's looking at internal documents, and I don't want another, larger model to see those internal documents.

Totally in control, but orchestrated by the Semantic Kernel. Exactly. So this assistant kernel is a new concept we released for .NET developers to play with, and we're asking for opinions out there.

Okay, so folks should try out Semantic Kernel, and we're going to see more of it when Luis and Maria come around, because they're going to get even deeper into this; there are so many ways to tie these things together. Now, Semantic Kernel: it does .NET, but other languages as well, correct?

Absolutely. Python and Java.
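The scoping and handoff described here can be sketched as data plus a tiny dispatcher: each persona carries a job description, a model suited to its job, and an allowlist of plugins, and a task goes to the first agent whose scope claims it. Everything in this sketch, the field names, the routing logic, the agent specs, is an invented illustration of the idea, not Semantic Kernel's actual agent.yaml schema or planner.

```python
# Invented illustration of persona definitions plus handoff;
# not the real Semantic Kernel schema or planner.
AGENTS = {
    "project_manager": {
        "description": "You enable the user to finish things.",
        "model": "gpt-4",          # reasoning-heavy role gets the big model
        "plugins": set(),          # no web access for this persona
        "skills": {"planning"},
    },
    "researcher": {
        "description": "You look up current facts.",
        "model": "gpt-3.5-turbo",  # a cheaper model is enough for lookups
        "plugins": {"bing"},       # only the researcher may search the web
        "skills": {"web_lookup"},
    },
}

def dispatch(task: str) -> str:
    """Hand the task to the first agent whose scope claims it."""
    for name, spec in AGENTS.items():
        if task in spec["skills"]:
            return name
    return "unhandled"  # a grounded "I don't know"

print(dispatch("web_lookup"))  # researcher
print(dispatch("planning"))    # project_manager
```

Two properties of the transcript's design fall out directly: a narrowly scoped agent either claims a task or declines it (the grounded "I don't know"), and security is per persona, since only the researcher's spec lists the Bing plugin.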
just want to call out this uh this stack again the AI infrastructure at the bottom that's your gpus that's the cloud then you've got your foundational models like we saw with our heavy ingredients here in the form of ADA and Chachi pt4 then then there's the user's perspective which is wow I'm talking to a co-pilot that's amazing and it went and it banged for stuff that's cool that orchestrator in the middle that's the thing the orchestrator that is the the role of in this case semantic kernel and semantic kernel is that Center of that co-pilot stack yeah and it really it's the brainchild of Deputy CTO Sam scal who also in his history invented something called Google Docs and he had access to gp4 last year ahead of everyone else and I think he might have consumed the most gbt 4 tokens like popcorn oh yeah and uh he noticed it was good at reasoning so that's why the sematic kernel was designed plugin first to have tools planners that can access it sub do multistep and the personas have been we didn't expect it to happen like so soon but after Deb day for open AI it's here and I really like that you you've said grounding multiple times you're focused on making sure that this is doing the the job that it needs to do and not trying to step into places that it's maybe not good at well I'm so proud to be at Microsoft because that co-pilot stack another diagram i' like to show is that we built AI safety into each layer because as Engineers you know it's impossible to make make something completely safe but you can architect it to be safe so AI safety is in each layer that's a great point and also that you made that that comment about the uh the Japanese uh Carpenters being intentional and being deliberate about what they're doing you don't want these things to run off and cause any kind of trouble because you're just making a coffee shop or you're making a kitchen you really want them to do the right thing now I know that we're going to have some questions our friends who 
are running on the tagboard may have questions coming in. If you want to ask questions of us, because we're live right now, you can do that at #dotnetconf. We've got John here and myself to answer questions for you, so feel free to throw those questions at us, and then our friends over on the big board will jump in and ask those questions to us; we'll get the questions answered as much as possible. You know, I was able to get Semantic Kernel working very, very quickly. Sometimes with these kinds of demos you're like, "this is going to take me a while to set up," but I was able to get the hello-world example running within just a few minutes, and it was a lot easier as a .NET developer: I just brought in the NuGet package that I wanted, and in this case here, if you bring up my machine, it was quite easy. Let's go ahead and switch over to my computer very briefly as we wait for some friends to give us our Q&A. That's music to our ears, and also this is a community project, so for those of you who want to contribute and improve it, or even provide constructive criticism, please send it our way. Yeah, let's go ahead and switch to my computer, please. And we can see here on line one that I've got using Microsoft.SemanticKernel, just a NuGet package; bring it in, and we're going to build up that kernel. I have my Azure OpenAI key hidden there inside of my region; you can see it between lines 9 and 12. Oh, that's how you do that, so cool. So what I did is I just made a region, and you see I collapsed that region. Gosh, another aha moment, look at that. I got nothing else out of this talk except that was the thing: hide your stuff in regions. Appreciate that. So yeah, just say hey, I'm going to use the completion service, and then, to your point, ask for that Semantic Kernel. This is just probably six lines, seven lines of code, you know, with a bunch of whitespace, and then I can go and say "summarize." It's meant to be simple, made for the AI app developer, not the machine learning engineer, because a .NET developer is busy with their real work, and the boss said "go do AI," so we want to make it easiest for them to help make the boss a little bit happier. You said something funny earlier when we were talking about why this was a good idea, and you said, "I think that a lot of .NET developers maybe don't know what's going on around AI," and I said, "why do you think that?" and he says, "because they're working, they're getting stuff done, they're taking care of business," and I think that's really great. Let's see if we have any questions; I haven't heard anyone from the tagboard pop in quite yet, I'm waiting for that to happen. In about five minutes we're going to go into Maria and Luis, where we're going to go deep into how to do these things, creating bots and different things with .NET and Semantic Kernel, because these are now not just apps, they're intelligent apps; these are apps that are smart.
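The "six or seven lines" setup described in the demo can be sketched roughly as follows. This is a minimal sketch, assuming the Semantic Kernel 1.x C# API: the method names (Kernel.CreateBuilder, AddAzureOpenAIChatCompletion, InvokePromptAsync), the deployment name, and the endpoint are illustrative and may differ from what was shown on screen; the exact code in the demo was not captured in the transcript.

```csharp
using Microsoft.SemanticKernel; // the NuGet package mentioned in the demo

#region Azure OpenAI settings (collapsed on screen so the key stays hidden)
// Placeholder values; in practice these come from your Azure OpenAI resource.
var deploymentName = "gpt-4";
var endpoint = "https://YOUR-RESOURCE.openai.azure.com/";
var apiKey = Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")!;
#endregion

// Build the kernel with an Azure OpenAI chat-completion service.
var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion(deploymentName, endpoint, apiKey)
    .Build();

// Ask the model to summarize some text, like the "summarize" call in the demo.
var summary = await kernel.InvokePromptAsync(
    "Summarize this in one sentence: {{$input}}",
    new KernelArguments { ["input"] = "Text to summarize goes here." });

Console.WriteLine(summary);
```

The #region/#endregion trick is the "hide your stuff in regions" moment from the conversation: the key lives inside a collapsed region so it never appears on screen, though for real applications a secret store or environment variable (as sketched here) is the safer choice.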
Info
Channel: dotnet
Views: 7,617
Keywords: .NET
Id: A_PfDjcWIeA
Length: 35min 34sec (2134 seconds)
Published: Wed Nov 15 2023