An Implementer’s Guide to Microsoft 365 Copilot

Captions
Hey everybody, happy Friday! How's it going? How are you doing, Andy? I've got a happy Friday in a weird layout there. All right, welcome, we'll go ahead and kick this thing off. Good afternoon, good morning, and good evening, and welcome to 365 Deep Dive with John and Andy. He's John, I'm Andy, and today we have a very special guest joining us here on the show. Some say he works in a garage, but they only have Azure toolkits, and that the garage is powered by the Power Platform. Ladies and gentlemen, welcome to the show: Mr. Jeremy Chapman.

Hey, good to see you. Hopefully everything's working; my microphone disconnected and reconnected right before we went live, so I just wanted to check that first. Wow, it's hard to live up to an intro in the style of the Stig, from one of my favorite television shows of all time, but thank you for that, and I'm happy to be here today. This is a great honor, to speak with the MVP community and really go a bit deeper on a lot of the things we've talked about with Microsoft 365 Copilot, hopefully putting it all together and packing it into two full hours with lots of technical details that you might have missed along the way in the last few months.

Yeah, I'm really excited to have you. I've been excited about this since we talked about it back at MVP Summit in the spring, which I think was right around the time of the Copilot announcement, and I've been gearing up for this one ever since. We know who you are, but for those users who are tuning in for the first time, do you want to give a brief introduction and tell everybody what you do and a bit about your role at Microsoft?

Sure. I started out working in IT, probably like a lot of people on the stream are doing in their current capacity. I was doing Windows deployments, I worked in Germany and in China, I managed SQL databases,
data back ends, front ends, all of those things, and all the infrastructure and the metal around delivering the services, the backups, everything we had to do as part of just delivering those services. The reason I came to Microsoft is that our son was born. We were living in Shanghai at the time, and it was easier in terms of raising our son here in the U.S., so we came back, and at that point I joined Microsoft. My first job, and probably one of my favorite jobs ever, was working on the Business Desktop Deployment (BDD) solution accelerator to get Windows deployed at scale. That was my first foray into working at Microsoft and building out solution accelerators; there were a bunch for security and deployment, and if you remember the architecture series that we did, IPD and WSSRA and those kinds of things. Then I went to Windows proper to start doing imaging, deployment, app compat, all of those things. From there I went into Office, because at that point we were asking how we could get people to start using application virtualization as a way to deliver Office clients to the endpoints, out to different Windows machines, and that's when we built out the whole Click-to-Run set of solutions.

That's also around when the whole Mechanics thing started. I was doing a lot of evangelism, going to all the different events; I'd usually speak between half a dozen and ten times at a TechEd or a SharePoint conference, those kinds of things. Then we decided we had to scale this out and try to make people aware of how the cloud works: for example, that Click-to-Run and Office ProPlus at the time all runs locally on the hard drive, and you don't need to be connected to the internet at all times to use it. That just kept building. We built out to the rest of Office 365, then the Surface team wanted to take part, then Azure, and eventually Windows came in as another contributor to what we did on Mechanics. We changed the name along the way: originally it was The Garage, as you said in your Stig intro, but our name got taken from us when Microsoft Labs changed its name to the Microsoft Garage, so at that point we became Microsoft Mechanics. We're all about reverse engineering, showing how things work in everything that we do. It's probably the best job I've ever had, where I get to continually shift from topic to topic, going deep. You reverse engineer the way things work with the engineering PMs and architects for any solution that's being built, so that we're the double-click, giving additional depth on any of the announcements we make, or on the major initiatives, things like Zero Trust, or deploying Windows, or Copilot right now. This is part of what I just love doing, because you get to go in really deep, immerse yourself in a different area, and then change that up every few months as different things get the spotlight. I think it was renamed Mechanics maybe in 2014 or so, but we've been doing it almost ten years, and it's never dull, because everything continually shifts and the ground's always moving in terms of where to focus your attention.

As we were putting this together over the last few weeks, a lot of what you were saying about the behind-the-scenes of how you come up with the topics on the show reminded me of Top Gear, and that's why the Stig got stuck in the back of my head. You're deep diving into all this information, but
you've got to distill it down into something you can deliver in a 10 or 15 minute short video to emphasize the larger message, while you're shifting back and forth from product to product, app to service. You're like a walking encyclopedia when you start diving into this, so I imagine it's a lot of fun but also really challenging behind the scenes.

Yeah, some topics are more involved than others. I remember immersing myself in quantum compute before we did the major announcements back in 2018, and I spent weeks reading and watching everything I possibly could, because my main thing in terms of presenting is that I don't want to be on stage presenting something as the authority without knowing enough about it, to where the people watching me don't learn something or get something beyond what they already know. If I'm in a room full of Windows deployment experts, I'd better have something extra to add in terms of value. So I'll spend the time, the weeks, the months, to get as deep as possible. I probably installed a hundred thousand copies of Windows while I was doing Windows deployment on that side of product management, because I wanted to know where things can break, what happens when different things go wrong, when you lose connectivity, those types of things. That's just my method; I'm like a method actor in that sense, actually immersing myself in those different areas. And if you don't stay in an area, you do atrophy a little bit, so it's always nice to go back, revisit a topic, and get deep into it again, because after six or twelve months, if you haven't covered something or looked at it deeply, you need to get right back into it. It's a fun way to keep the brain neurons firing, to stay deep on various topics. But it is a bit of work to make sure we learn how things work and find the way to describe them, hopefully distilling it down so it's not too complicated as we explain stuff on Mechanics.

I think a big reason why Mechanics has been so successful over the years is that IT admins, and many of them are watching right now, can see right through it if something is just marketing, just somebody reading from a teleprompter without knowing what they're saying. Mechanics has that real feel: these are legit experts, they know what they're talking about, and you come away with much more information than you went in with. You guys aren't afraid to go deep and get in the weeds, because you know the audience can handle it, and I think that's been working really well for you.

Right, and I like the joke that, with a face like this, they wouldn't have hired me to be the spokesperson if I was just some random thespian who wasn't really interested in the material. I'm totally into whatever I'm presenting, and I'll go as deep as I can on any topic. And it's true for the whole team. There are a bunch of people behind Mechanics, and we're all trying to find the hook, find the story, find how the technology works, and present that in a way that isn't too long, that's consumable and the right length, so that if you invest those 10 or 12 minutes you come out with enough knowledge to be dangerous on that topic, and hopefully you'll want to learn more, go deeper, and try it. That's the whole point.

Cool. Well, I think that's a good segue into what
I think most people are really interested in hearing about today, and that's Copilot, Microsoft 365 Copilot. I think it's safe to say that 2023 is the year of AI. It's everywhere, and it seems like there are new services coming out every single day that introduce AI in different ways. People way smarter than me have said this is probably one of the biggest things, technology-wise, to hit since the internet, and it's a pretty seismic shift not only in how we work but maybe even in our day-to-day lives. For those joining us for the first time, welcome to the show; go back and check out some of the recordings in the library. We do a deep dive, so we're going to be here for the next two hours diving into this particular topic. For our friends who are returning, welcome back, and thanks for helping spread the word and being with us. We have a lot to cover over the next two hours on Microsoft 365 Copilot, so if you're joining us on LinkedIn or on YouTube, feel free to send questions, comments, and thoughts in the chat window; we're watching those and will work them into the conversation as we move along. With that, let's dive into today's topic: an implementer's guide to Microsoft 365 Copilot. First, why don't we start at a high level: how will Microsoft 365 Copilot enhance our productivity and day-to-day work in our everyday tasks?

I've got some thoughts there. I've been using this for a few months now, and just getting started, getting out of the blank page, has been huge. Even on Mechanics we're building out shows about topics that might have existed for a while, where it's either known to a large language model like GPT, or it's something that, through retrieval, we can find more information on. I was just thinking the other day about the show we built with Mark Russinovich on the
infrastructure, where there were a lot of technical terms and product names that have been around: NVLink and NVSwitch, various protocols, InfiniBand, and so on. You don't have to go down a rabbit hole of searching for that stuff; it will generate content for you, and content that's quite good and usable. It's a draft, and as we always note in any of the demos we do, there will be some mistakes in AI-generated content, but the time savings you get, versus going down a rabbit hole trying to find information about a certain topic, have been game changing for me. Just using that over the last couple of months to get past that blank page syndrome.

The other thing I love about it, and these are just two examples out of many, is meetings. I'm typically in back-to-back meetings, with a lot of vacations overlapping and different schedules going on, and often we're doing, say, a dry-run rehearsal on a show that we're building, and we're 20 or 30 minutes late to our next meeting that's been running. So going into Teams and being able to say: can you summarize what's been talked about so far? What's the mood, the sentiment, the vibe in the meeting right now? You could try to do that manually if the transcript was turned on, reading the transcript in parallel while the rest of the meeting goes on, but this gives you a summary of what's happened in seconds. I love the speed at which it can do that kind of work, and you can then ask a second or third question to get even more detail: hey, somebody mentioned me in the second bullet point of that meeting summary; can you elaborate on that, and what's the expectation for what I deliver on that item? Those are the kinds of things that, for me, are massive time savers. They're things you could do before, but they took a lot more effort. It's like anything we're doing with generative AI: to write a proposal based on an existing doc, yes, you could manually write it, go to the other doc, take its format, all of those things, but that takes a lot of time. If you get something that gets you 90 or 95 percent of the way there, that's awesome. That's the way I've been using it, including just writing prose text about various technical capabilities. It's been a game changer for me in the last couple of months.

I think what's interesting about Copilot, and what I'd like to delve into next with Bing Chat Enterprise, which was just announced, is that, like you said, it makes your life easier, and a big part of that is having it available right there in the interface. With other things, like ChatGPT or Siri on my phone, I forget they're there, because you have to go out of your way to get to them. But having the Bing bar up in the corner of Edge, it's just a click away at that point. So when they announced Bing Chat Enterprise at Inspire, I thought: if nothing else, if we can wrap that existing functionality in the browser we're already using, and secure it, it's so available that hopefully we lead people into that safer, more secure LLM to use in their daily work, because it's right there. And I'm really excited to see, as Copilot gets implemented into
the different applications, how it's going to surface so that it's there to be helpful. It's a fine line between being in the way and being so far out of the way that you forget it exists; it's about finding that middle ground.

And it takes away that extra step, because, and we showed this with Bing Chat Enterprise, you can take, say, a copied-and-pasted piece of text and ask: give me an analysis of what I've just pasted into the prompt. With Bing Chat Enterprise we're guaranteeing, from a data protection and privacy perspective, that all of those needs are met, and we'll go deeper into that topic in a bit. But doing that in an open kind of solution, where you're taking business content, pasting it in, and asking for a summary, you don't know where that's going, and there aren't necessarily the same kinds of tenets around how that service is run and managed. So it matters not only for the Office apps, where it's actually part of the workflow and contained in the app experience for the user; even when you're using Bing Chat Enterprise, there are clear descriptions of how we're using that data, the things we're not doing, and the data security and privacy we provide as part of including that data in prompts. You're not necessarily getting those kinds of assurances from other solutions out there where you might do something similar.

Then the apps take it to another level, because when you're using Microsoft 365 Copilot, it's able to run commands against the Graph. You can have not only retrieval of information that's pertinent to your role or to the thing you're trying to create, but even commands that put the result right into the document, so that it's formatted and has the right look and feel, to where it looks like it belongs in that document, that PowerPoint, or that Excel file. It's another layer of time savings versus grabbing a paragraph out of a .txt or .docx file, putting it into another service, seeing what it says, and pasting it back. If it's fully integrated with the apps, it's going to save you that much more time, and it's going to do all the formatting and the additional things around it.

So you prepared a couple of animations to show what this looks like for those watching who haven't seen things like Bing Chat Enterprise. Do you have those queued up, so you can talk through what it looks like from a Copilot perspective?

I've got it on the screen here: what is Copilot? It's really the culmination of using natural language to interact with large language models, leveraging data that's in the Microsoft Graph, which is data that's part of Microsoft 365 or things that are wired into the Microsoft Graph, and then the endpoints, being able to use the Microsoft 365 apps like we mentioned. In terms of the experiences, amazing things are happening with putting them into Word, Excel, PowerPoint, Outlook, including the Monarch app, the new version of Outlook, and Microsoft Teams in a few different ways: one, like we mentioned, with meetings, where you can do summaries in real time, and also through the cross-app experience, where you can chat using information from the Graph and the large language model and ask it questions, like what was the meeting yesterday with this
person that I missed? Or, when was the last time I worked with John, and what did we talk about? Those kinds of things, where you're actually able to do the retrieval against Graph data and then present it back with citations of the work. So you've got these nice endpoint inclusions that we're going to add across the main core Office app experiences, and there are others too; Loop is one of them. I don't have it on the screen, but Loop also has a great experience, and we even showed how multiple people can iterate against the same prompt, which is really cool. What it does is give you access to the content you use for business, with all the privacy and security controls, and we'll talk about how that works; it's all based on the user's context and the context of your work. The Graph is really good at knowing who your manager is, what different activities you've done, and how things are trending, so a lot of the different Graph API calls you can make can now be manifested through your prompts, effectively, through cross-app intelligence and through that chat experience we have in Microsoft Teams. There's a lot you'll be able to do.

Just to show what that looks like: this is the actual current experience. A lot of people have probably seen what the March 16th experience looked like, but if you were watching the Inspire partner conference, this is what the current iteration looks like. It's interesting when you demo this, because we do it from our microsoft.com account, since obviously you'd normally have all of your own conversations and those things in frame. In Teams, you have the ability to prompt Copilot and the large language model with questions that are going to call the Graph: prepare a 250-word sales pitch for the Relecloud Thunderbolt e-bike based on these different files and documents. You can choose the ones you want to use; here I'm going to choose this Word doc. While you're typing, it's doing those Graph calls to say, here are relevant people and relevant files that might be helpful in building the prompt, even before you've sent it. You can see it's already created the pitch, and it's got the reference cited there, so you can go and make sure it's the most relevant document. And I love this one: create a SWOT analysis. It knows what a SWOT analysis is and how to word things in terms of strengths, weaknesses, opportunities, and threats, and it can give you great data, in this case on the Thunderbolt e-bike you're building. It gives you all the references, and you'll see, this is new, the web reference as well, public information it can pull from in this case. And you can check the references, which you should, just to make sure everything being referenced is factual. Then you can ask even deeper questions, because it has that language inferencing ability; here it gives specific recommendations, in this case to upgrade the Thunderbolt e-bike to a premium version, what that would look like in terms of catching up to other market competitors and what the market looks like for e-bikes. It's a very intelligent experience, because it has that language inference and natural language input model, so you don't have to
be very formulaic. In the past, you would have had to be pretty formulaic in how you asked these types of questions or queried your data, whereas because it has the natural language ability, you can ask things in your own terms, effectively, and it can decipher what you're trying to get from it, infer what you want back, and give you these types of results.

I want to draw attention to the ability to follow up from the original prompt, query, or search: the ability to follow up and expand, or even, in some cases, pivot or cross-reference with something else. I think that's been the biggest benefit of working with these technologies since they became publicly available. Even just in Bing Chat, for instance, I'll start off on a specific topic. The example I use often is Excel: VLOOKUP versus XLOOKUP. I can ask what a VLOOKUP is, ask what an XLOOKUP is, and then ask for examples of both, and the pros and cons of when I might want to use one versus the other. That's getting pretty deep into Excel land, but it's the ability to ask that follow-up question and pivot or cross-reference something that really extends the capabilities. I would otherwise have to do all that work manually and parse through it all manually, and this can infer and start doing some of it automatically. For me, that's really the game changer.

And if we roll back, and we're going to go through the history in a minute, that's kind of where it started: the GPT-3 integration we had with Power Apps and the Power Platform, where you would say, I want to do this, and it would create the Power Fx for you as a means of doing that. It would just know; you don't need to ask whether it's going to be a VLOOKUP or some other type of lookup, you would just be able to say what you want.
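As an aside for anyone following the VLOOKUP versus XLOOKUP thread above, the difference can be sketched as a rough Python analogue (the product table and values below are made up for illustration): VLOOKUP matches only on the first column of a range and returns a value from a column to its right, while XLOOKUP can match against any column, return from any column, and supply a default when nothing matches.

```python
# Tiny in-memory "worksheet": each row is (SKU, Name, Price).
rows = [
    ("SKU-1", "Thunderbolt e-bike", 2499),
    ("SKU-2", "City e-bike", 1899),
]

def vlookup(key, table, col):
    """Like Excel VLOOKUP: match on the FIRST column only and
    return the value from the 1-based column index `col`."""
    for row in table:
        if row[0] == key:
            return row[col - 1]
    raise KeyError(key)  # Excel would show #N/A here

def xlookup(key, lookup, result, default=None):
    """Like Excel XLOOKUP: match against any column (`lookup`) and
    return the aligned entry from `result`, with an if-not-found default."""
    for k, v in zip(lookup, result):
        if k == key:
            return v
    return default

names = [r[1] for r in rows]
prices = [r[2] for r in rows]

print(vlookup("SKU-2", rows, 3))                      # 1899
print(xlookup("City e-bike", names, prices))          # 1899: matched a non-first column
print(xlookup("Cargo e-bike", names, prices, "N/A"))  # N/A instead of an error
```

In Excel terms, the first call corresponds roughly to `=VLOOKUP("SKU-2", A:C, 3, FALSE)` and the second to `=XLOOKUP("City e-bike", B:B, C:C)`.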
You'd say, I want this and this and this, and it would respond: here's the Power Fx I propose you try out. You try it out and think, that's exactly what I want. That kind of morphed into, and led to, GitHub Copilot and being able to generate more code. That's one of the real sweet spots of LLMs, natural language, and generative AI in general: generating these kinds of code samples and recommendations for how to automate something or really up your game, whether you're using Excel or Power Apps.

Yeah, I have to admit, when I saw the Copilot announcement in March, I was one of those people going, oh, here's Microsoft trying to ride the trend and promising too much too early. I'm not in the Power Platform community quite as much, and I'm certainly not in the GitHub developer world, so I didn't realize how long Microsoft had already been putting generative AI into things. I knew the learning models were there, like being able to take a picture of your receipts in OneDrive and then search them, AI kind of in the background, but I didn't realize how much history was already there with things like GitHub and Power Apps.

Right, it's not new; there are more legs to it than people think, and I'll get to some of the history and where we started in a bit. I wouldn't say we're at v1 of the capability of using GPT-based large language models; there's been a lot of work over the years, and I'll go through some of that shortly. It is pretty exciting from this week, though, just to go back to some of the things that were announced. You mentioned Bing Chat Enterprise earlier. This basically gives you the ability, if you want to use business data, to do the retrieval manually, like I mentioned: you're copying and
pasting information that might be in a file you have, stored in SharePoint or OneDrive, for example. It gives you the ability to do that with safety in mind and the right privacy controls in place. When you open it, you can see that it says it's protected: anything you put in there is going to be protected in terms of how Microsoft uses it and how the Bing Chat Enterprise experience leverages any information provided in the prompt, and we'll explain that more deeply in a second. Bing Chat Enterprise works the same way we showed before: when you bring it some information, you can see it's protected, it's designed for work. So when you paste something in, like we're doing here, it can generate, in this case, a sales pitch for our same Thunderbolt e-bike, just as we saw from the Teams perspective. The only difference is that here you already have in mind the text you want to give it as part of the retrieval, and we'll talk about retrieval-augmented generation in a bit; we'll build up to these things. This gives you those same types of abilities, with nice formatting, like you can see here, so it's easy to generate content that's easy to follow and to visually scan, in this case the different specs for our bike. It's very powerful, and the nice thing is that if you've got Microsoft 365 E3 or E5, this is part of it. It uses Entra ID as the authentication and authorization mechanism for logging in, and like I mentioned, it's a protected environment in terms of how it handles the information you provide in the prompt.
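The retrieval-augmented generation idea mentioned here can be sketched at a very high level: find the documents most relevant to the user's question, then ground the model by including them in the prompt with citations. This toy Python version is only a sketch of the shape of the idea; the corpus, the word-overlap scoring, and the prompt template are simplified stand-ins for what a real Graph-backed service would do with semantic indexes and embeddings.

```python
import re

# Toy document store standing in for files the user could retrieve.
docs = {
    "specs.docx": "The Thunderbolt e-bike has a 500W motor and 80km range.",
    "pitch.docx": "Relecloud sells premium e-bikes to urban commuters.",
    "notes.docx": "Meeting notes: review Q3 marketing budget.",
}

def words(text):
    """Lowercased word set; hyphenated terms like e-bike stay intact."""
    return set(re.findall(r"[\w-]+", text.lower()))

def score(question, text):
    """Relevance = shared-word count (real systems use embeddings)."""
    return len(words(question) & words(text))

def retrieve(question, k=2):
    """Return the k highest-scoring (filename, text) pairs."""
    ranked = sorted(docs.items(), key=lambda kv: score(question, kv[1]), reverse=True)
    return ranked[:k]

def build_prompt(question):
    """Assemble the grounded prompt that would be sent to the LLM,
    with bracketed file names acting as citations."""
    context = "\n".join(f"[{name}] {text}" for name, text in retrieve(question))
    return f"Answer using only these sources:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("What is the range of the Thunderbolt e-bike?")
print(prompt)  # the top-ranked source should be specs.docx
```

The same pattern scales up: swap the word-overlap scorer for a semantic index over the Microsoft Graph, and the bracketed file names become the clickable citations shown in the demos.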
What's happening to that information? It's all secured over an encrypted connection to and from the Bing Chat Enterprise service, so you can feel good about using it.

As we wrap up at the end, and I'm taking notes off to the side if everybody sees me looking over, we use a Loop workspace for that, what I want to end with is the implementer's checklist: what should you be thinking about, and what should you be implementing? I think Bing Chat Enterprise definitely goes on the list to prepare for the world of Copilot. In my mind, if you're already on E3 or E5, I don't know why you wouldn't turn on Bing Chat Enterprise and at least wrap that data in that security. I don't know why you wouldn't want to swing that agreement for using Bing from the consumer side over to your organization's account, with users logged in with Azure AD, or Entra ID now (it's going to take me a long time to stop saying AAD). Enabling that type of feature, so that at least the chat history isn't saved and you're not sending data to be learned by an LLM, is definitely something we'll want to put on our checklist of recommendations for the admins who are watching.

Exactly. And you mentioned how we got here; this isn't our first foray or the newest thing we've done with large language models. I wanted to go through a bit of how we got here and recap the major milestones. From a large language model and AI perspective, Microsoft has been doing this for a decade. We've been working with massive compute infrastructure for a long time, and we were really early in adopting things like networking GPUs via InfiniBand; you can look this up in terms of the
different n-series VMS things you can get in Azure and so um you know from that perspective if you look at even these are more more recent in terms of our Megatron during natural language generative model which I love the name of and it's because Transformer is the T by the way I think of GPT um Madness of naming with Megatron um but you know in terms of just the size and the growth of models if you look at from 2018 you know starting with Elmo going all the way through to the Megatron turn 530 billion parameters like these have grown exponentially like over the years and you know we've been building out compute infrastructure and this is you know this is massive in this show with Mark racinovich recently on the super computer that runs this and kind of some of the specs that I'll get to on like even running gpt3 but you know GPT 3 175 billion parameters you know these things are are not you know trivial in terms of their size and kind of the compute it takes to run training and inference against them so if we look at just relatively like GPT versus Megatron um you know it it grew three or four x and then it's even growing more you know now as we're getting into even more sophisticated models that as part of training are ingesting you know publicly uh known information and things you know part of the training set websites documents things that are again are public we never train on your private data or anything that's stored in Microsoft 365 or Azure and that's that's always part of the promise and just as I was working with um Mark rasidovich to build out the super computer so like they are two completely separate operations from a you know from an AI training and inference perspective like the the hardware the metal that's used to run training versus in the the frequency of the training versus the how inferencing runs 24 7 it's being used on completely different uh you know different spec uh systems different they're they're different spec hardware and used at 
different times and they're you know the training job could run for 45 or 60 or you know 90 days like it's it's a whole different thing and so we'll talk about that as well like we've built a bunch of things with project Forge and how you can basically use uh checkpoints as part of that process so that even if you had a power outage stay in a data center while a training of a large language model was running on day 44 out of 45 you don't lose 44 days of progress we've built for those types of things also as part of the platform to run Ai and really when you think about the 20 2016 that's all far back goes from a from a open AI perspective like that's when that's when openai came to Microsoft in terms of us running their AI infrastructure so then you know that led to 2019 which is already like four years ago almost to the day um in terms of them doesn't seem like that long ago yeah yeah and that's that's kind of when uh from you know from a Microsoft investment in that perspective going going all in on on these sets of Technologies and then quickly thereafter we started building that into products like I said you know with Power Platform you had you know powerapps things like powerapps ideas that I mentioned before where we could do things like author or power effects uh using gpt3 and you know have more capabilities that that were maybe less accessible because somebody would need to know like how does powerfx work and I want to do these various things with my app but if it's beyond their reach and you need like a little bit more kind of pro code we always say low low code Pro code if you need like a professional developer type expertise this this made that a lot more approachable and a lot easier to get you know those those more um Advanced capabilities running through powerapps and power Power Platform you know and then like I mentioned GitHub then was kind of the next iteration in in 2021 where with co-pilot there and and I've used this to write I haven't written 
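As an aside on the Project Forge point above: its internals aren't public, but the checkpoint-and-resume pattern Jeremy describes can be sketched generically. Everything here, the file format, the step counts, the stand-in "loss", is invented for illustration:

```python
import json
import os

def save_checkpoint(path, step, state):
    # Write to a temp file first, then atomically replace, so a crash
    # mid-write can never corrupt the last good checkpoint.
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"step": step, "state": state}, f)
    os.replace(tmp, path)

def load_checkpoint(path):
    # A missing file just means a fresh run starting at step 0.
    if not os.path.exists(path):
        return 0, {}
    with open(path) as f:
        ckpt = json.load(f)
    return ckpt["step"], ckpt["state"]

def train(path, total_steps, checkpoint_every=10):
    # Resume from wherever the last checkpoint left off.
    step, state = load_checkpoint(path)
    while step < total_steps:
        step += 1
        state = {"loss": 1.0 / step}  # stand-in for a real optimizer step
        if step % checkpoint_every == 0:
            save_checkpoint(path, step, state)
    return step, state
```

A run that dies on day 44 of 45 restarts from the last saved checkpoint rather than from step 0, which is exactly the failure scenario described above.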
I hadn't written a console app in a few years, and with GitHub Copilot I literally wrote a console app to call GPT models, and it fully worked; it gave me all the instructions to compile and run it. It's crazy how good this stuff is. You have to check the code, obviously, but in terms of generating code and quickly getting something operational, the time savings from a coding perspective are pretty amazing. And it got better: think of natural language to flow and some of the Power Automate work announced last fall, being able to say, when something arrives in this Excel file, send an email thanking them for appending their data. Power Automate flows were already fairly easy to create to some extent, but now you can just describe what you want naturally, and it infers what you're trying to do and builds most of it for you. These kinds of automations are getting way more within reach and way more accessible for anybody, so all of this is making life easier across multiple vectors.

Yeah, Daryl has a good point here that's worth stepping back for: training compute is why this costs extra; it's not cheap to do. What comes to my mind is that when Jeremy talks about 175 billion or 530 billion parameters, those exponential numbers, that is training that has already happened. With Bing Chat or Copilot, the trained model is one piece of what's used to build the response; that's why ChatGPT's knowledge goes up to, what is it, September of 2021. But what Bing does, as we saw when it came out in January, is combine the trained model with the live internet, everything created since the model was trained on that finite set of data, which is why you see Bing execute internet searches to learn things like who's in the current Super Bowl. Then you bring Copilot in, and there's the live Graph connection, and I want to make it clear: Copilot is not training on the data in the Graph, not at all. The Graph is just one of the sources used when it does that thing called grounding, where it checks the trained model, the internet, the people, the files, everything that's a live connection in the Microsoft Graph, to come back with a well-crafted response to the prompt. So it's not that we're paying thirty dollars a month to subsidize training LLMs on our data; the trained model is just one of the bubbles you get access to, above and beyond what you'd have just using ChatGPT.

Yeah, I was going to go there anyway, but you fast-forwarded me a bit; jumped ahead of your picture, yeah, no problem. I'll pause here, actually. This is all about retrieval augmented generation, and grounding is part of that process, which has pre-processing. There's something called a system, or default, prompt that sets rules for how things run, and that gets processed in, plus what the user asked for, plus whatever information can be retrieved through the Graph. And like you said, with Bing Chat Enterprise, or even consumer Bing Chat, it also pulls in information from the open internet, finds the most relevant content, works it in, applies the citations, all before the prompt is even presented to the large language model. Hold on, let me play this back; I'm having a little technical difficulty here. Okay: before the prompt is presented to the large language model, all of those steps run against the different data sources it has access to, all based on the user's permissions, in the context of the person using Copilot. Another thing we're doing is using what's called the semantic index, which captures relationships and other signals in your organization, so results come back faster than searching directly over OneDrive, SharePoint, and Teams; that information is parsed and brought back into Copilot as part of the prompt, so it can say, based on the information you have access to, here's the paragraph you wanted me to generate.

One way to think about retrieval augmented generation: if I asked you right now, and we haven't talked about this before the call, what's my shoe size, there's no way you could ever know. But if I said, hi, my name is Jeremy, my shoe size is 12, I'm six foot four, what's my shoe size, of course you can tell me. That's basically what's happening: we go out and retrieve that extra information, shoe size and height or whatever, and present it along with the question, versus just the bare question of what's my shoe size. The LLM can then answer because additional information has been appended to your question, and you don't see it because it happens instantaneously under the covers. We'll go through what that looks like in code, because I think the best way to see this is in the Jupyter notebook we showed with Pablo Castro, our distinguished engineer for AI and search. It's not just your question that goes to the LLM; it's your question plus, okay, to answer that, what must be retrieved so an LLM that doesn't know Jeremy's shoe size can answer the question posed to it. Those are the things you don't see that make the magic work behind the scenes, and that's really what the Graph, the internet, and the other sources it can cite are contributing to your prompt to bring that answer back.

So is Daryl correct here? He says, oh, I thought it was learning about our organization. According to what we've talked about, the LLM and the data the Graph has access to are separate and remain separate, right? The orchestration just constructs a better prompt with that additional context and folds the answer back to the end user; the peanut butter is not getting into the chocolate, nothing is mixed together and learned on. Yeah, there's no recall, and you can try this out with Bing Chat consumer if you're not using Enterprise right now.
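The shoe-size example can be sketched in a few lines. This is a toy stand-in, not Copilot's actual orchestration code; the profile store and prompt layout are invented for illustration:

```python
# Hypothetical profile store standing in for the Microsoft Graph /
# semantic index -- none of this is Copilot's real implementation.
PROFILE = {"jeremy": {"shoe size": "12", "height": "6'4\""}}

def retrieve(user):
    # Grounding step: pull only facts tied to the signed-in user.
    return [f"{k}: {v}" for k, v in PROFILE.get(user, {}).items()]

def build_grounded_prompt(user, question):
    system = "Answer only from the context provided below."
    context = "\n".join(retrieve(user))
    # The LLM never touches the data store; it only sees this final text.
    return f"{system}\n\nContext:\n{context}\n\nQuestion: {question}"

print(build_grounded_prompt("jeremy", "What is my shoe size?"))
```

Swap a real Graph or index query in for `retrieve` and you have the shape of grounding: retrieval happens per-user, before the LLM is ever called.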
Use the sweep command, the new-conversation button, oh yeah, the little broom: you can ask it a question today, it will respond, and you can keep drilling in and getting more and more responses. Then you can sweep the history of that conversation and ask the exact question you asked 30 seconds ago, and it has no memory of it. The reason is that there's no storage of what you've been prompting; it's not learning in real time. The fallacy is assuming it's like a human, somebody retaining information as you ask questions, and it doesn't work that way. Every prompt is new: each time you ask, the request is rebuilt from your previous questions plus its previous responses, and once you wipe that away, it has completely lost all context of what you were asking. Nothing is stored or retained by the LLM once you sweep the conversation.

That terminology is called turns, right? When you were demonstrating Bing Chat Enterprise, it says 1 of 30, 2 of 30, 3 of 30; those are the turns in the current conversation, and when you sweep it, your turn count starts over. Right, the whole conversation does. So, the way to think about turns, and this applies to prompting styles: a lot of people maybe started with things like smart speakers, asking, what's the temperature outside? That's called a zero-shot prompt, where you just ask a basic question: give me a more descriptive word for happy, and it will say joyful, and these are all real things we've tested. Restate "winter is coming": winter is coming soon, the days are getting shorter. That's zero-shot: you're not giving it any expectation of what type of reply you want; you're just throwing something at it. If instead you give it some Q&As, like we're showing here, that's called few-shot: because we showed two questions where the answer was yes or no, it can infer that the next question you throw up needs a one-word response. That's what few-shot prompting is about, and that's the groundwork on what prompting means.

Now, in terms of how things work, and if you haven't caught this we'll give links to the playlist at the end, this is the show we did with Pablo Castro using Azure OpenAI Service, and the sample app is something you can get from GitHub, by the way, and deploy into Azure; it's pretty easy to get everything up and running, the same code, the same back-end data, the whole thing, using Cognitive Search and Azure OpenAI.
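Before getting into the sample app, the zero-shot versus few-shot distinction above comes down to what text is actually sent, which can be sketched as plain string assembly (the Q&A pairs here are invented):

```python
def zero_shot(question):
    # No examples: the model must infer the expected style on its own.
    return question

def few_shot(question, examples):
    # Worked Q/A pairs go first, so the model infers the answer format
    # (here: a one-word yes/no reply).
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\nQ: {question}\nA:"

examples = [("Is the sky blue?", "Yes"), ("Do fish fly?", "No")]
print(few_shot("Is water wet?", examples))
```

The model never receives an instruction like "answer in one word"; it infers the format from the pattern of the examples, which is the whole trick of few-shot prompting.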
The nice thing with this one is that it connects employees, in this case, to their health plans and how they work. Like I said, the app has everything included, and as you exercise it, it shows what the system message is and what it's searching for. The <|im_start|> system message is the set of rules that must be abided by when the LLM responds to you: be helpful, be polite, do these things, correctly cite your information and your sources. When it goes out and retrieves a source, in this case it finds a PDF file, which you can see in the middle of the right pane, mounts that PDF, extracts the text from it and from a second PDF, and uses that to respond to your question. And to go a little deeper, because I always say there's truth in code: I love Jupyter notebooks, because you can not only see the code but interact with it and see the responses. In the notebook you can see that in our case, with Azure OpenAI Service, we're using Azure and authenticating with, sorry, Entra ID, I'm also getting used to the new naming, formerly Azure Active Directory, as the means to get in. When you run against the code interactively, you can see how the different pieces are constructed: there's our Entra ID auth, and there are the turns, the <|im_start|> marker and the user and assistant roles. If I send a prompt, the assistant responds, and we can see how it's responding; that history is something we can actually use to augment future prompts. So now, having a previous prompt and its response, if I ask a question, how about hearing, which is somewhat related to my last question, it knows I'm still referring to the Northwind Health Plus plan; it has the context from my previous prompt. Looking at the content, you can see it mounting our PDF, and looking at the code for the prompt itself, this prompt grew like a snowball: it has the <|im_start|> system prompt plus all the subsequent prompts and responses, so by your third or fourth prompt the model is being presented with the entire snowball. Then, once you save your completion and decide to remove the history, there is no recollection or recall whatsoever of what you've been prompting the LLM with; all of that information just goes away, because, like I said, training is a whole other exercise that happens at a whole other point in time with an entirely different stack of infrastructure. What we're doing here is retrieval augmented generation, generating based on the retrieval of information, and every turn, each time you send a prompt and it responds back, is stored in the turn history. When you leave, its memory of you is gone, and it waits for the next person to come up with another prompt.

Yeah, man, it's crazy to see how it expands. I'm learning prompt engineering, we're all trying to upskill into prompt engineering, but I am absolutely not typing 150 perfectly crafted words into these prompts; that assistance of adding all the context at the beginning and the end, which is really what John is talking about, is way more helpful than me massaging it into shape myself.
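The snowball Jeremy describes can be sketched with a chat-style message list, the shape most chat-completion APIs use; the orchestration details are simplified here:

```python
class Conversation:
    def __init__(self, system_prompt):
        self.system = {"role": "system", "content": system_prompt}
        self.history = []  # prior user/assistant turns

    def build_request(self, user_message):
        # Every turn re-sends the system prompt plus the whole snowball.
        return [self.system] + self.history + [
            {"role": "user", "content": user_message}]

    def record_turn(self, user_message, assistant_reply):
        self.history += [
            {"role": "user", "content": user_message},
            {"role": "assistant", "content": assistant_reply}]

    def sweep(self):
        # The new-conversation broom: all context gone, nothing "learned".
        self.history = []

convo = Conversation("Be concise and cite your sources.")
convo.record_turn("What does Northwind Health Plus cover?",
                  "Vision and dental. [plan.pdf]")
print(len(convo.build_request("How about hearing?")))  # 4 messages sent
```

After `sweep()`, the next request carries only the system prompt and the new question, which is why the model has no recollection of prior turns: the "memory" was only ever text re-sent on each call.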
It's interesting to see how the machine builds that up from the conversation you're having with it. Yeah, and these are the massive time savers: when you do a search and it finds that PowerPoint or Word doc or PDF file, there might be a 100-page file, and it extracts the right subsection and presents it within the prompt because that's the most pertinent section. Think of all the times you've wanted to find an answer and gone down the rabbit hole of a search engine, reading a bunch of material over and over; you might learn a little as you go, but sometimes it's the third or fourth or fifth attempt before you find the content you want. The orchestration engine can distill the most relevant content and give you back a response, saving all of that time. Like when I was researching quantum computing: if I'd had this at the time, all those different searches, I would probably have shaved days, if not weeks, off the time I spent, because all of those terms like superposition are part of quantum and are out there publicly, and I've tried it; it generates that in a very accurate way. And it's often even harder in enterprises, because it's like, okay, which silo has what I'm looking for, and maybe that's not connected to Microsoft Search: is it in the HR system, is it in my benefits, is it in the service portal, just to find your hearing coverage information? Having that connection all the way around is so much more helpful.

Yeah, and that's what the extensibility layer and the plugins, being able to connect out to other systems and other information, will effectively give you: the ability to do that in a secure way. If you've looked into how the ChatGPT plugins work, it's basically a JSON handshake with the security and authentication written into it, to make sure you have access to that external data source, then calling it so the retrieval can pull information from that source securely and present it to the prompt, the same retrieval pattern we were just showing. Teams message extensions are another way you can do it: if you've got Teams message extensions, you've probably already done a lot of the work to get the data hooked into and wired into Teams. Then there are Power Platform connectors, and there are a lot of controls there; we did a bunch of shows around Power Platform connectors and what you can do from a data loss prevention and permissions perspective. So there's a great story for how you can connect to systems, everything from Power Platform connectors to SAP and Salesforce and other services, with a massive number already pre-built. I want to make sure that if your retrieval needs to go outside the boundaries of Microsoft 365, there's a safe and secure way to go out, retrieve the information needed, and generate the responses you're after, with the peace of mind of knowing the right authentication and authorization has happened before that data is presented to the prompt and a response is generated for you.
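The plugin pattern, authenticate, retrieve from the external source, then inject the result as context, can be sketched like this. The backend, the token check, and the source names are all invented for illustration; real plugins describe their API in a manifest and call it over HTTPS:

```python
# Toy stand-in for an external system (an HR portal, say) behind a plugin.
FAKE_BACKEND = {"hr": {"hearing": "Covered at 80% after deductible."}}

def fetch_external(source, query, token):
    # Authorization happens before any data leaves the external system.
    if token != "valid-token":
        raise PermissionError("caller is not authorized for this source")
    return FAKE_BACKEND.get(source, {}).get(query, "no result")

def ground_with_plugin(question, source, query, token):
    snippet = fetch_external(source, query, token)
    # The retrieved snippet is injected as context, just like Graph results.
    return f"Context (from {source} plugin): {snippet}\nQuestion: {question}"
```

The key property is that an unauthorized caller gets an error, not data: the external source never contributes to the prompt unless the handshake succeeds.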
Yeah, and as you talk about plugins and extensibility, what pops into my mind as an admin is: hopefully Microsoft is thinking about what the admin experience will look like. You've got all these ChatGPT plugins; will I, as an admin, be able to control which plugins Copilot can access, if we use this ride-sharing program as an enterprise and not that one, or this travel platform? Can I get single sign-on? I think that'll be the next phase. It's hard to think past just turning on Copilot, something that right now isn't available outside of very few people, but further down the road, am I going to be able to do SAML authentication to these different services through those plugins? I'm starting to think a couple of steps ahead, after we already have Copilot, about how it gets even more useful.

Yeah, and this will make it even more game-changing. Like we showed through the example from Pablo, where Cognitive Search is wired into different repositories to grab information and present it to the prompt, effectively to the LLM, and get the response back: the more you can connect it to useful information, the better it's going to be from a user-experience perspective. A lot of the manual processes were, oh, that information is stored in an HR system somewhere, or, sorry, you can't find it through SharePoint search natively because it's in this other thing. Imagine if you have the right security and the right connections to that information, so you can answer these types of questions very quickly; it's going to, A, save people time, and, B, build more familiarity and comfort with people using the service and getting value from it, frankly, in their everyday work. The plugin and extensibility story, getting all of that additive to Microsoft 365 Copilot, is massive, because it gives you this larger boundary of secure information access to use as part of generating content from the LLM.

Daryl and I have such a soul connection; I was thinking the exact same thing: I'd prefer to admin my copilots from one admin center. I'm curious, Daryl, in the chat: what do you think the inevitable admin center is going to be called, and how long before the name gets changed and it gets merged into another admin center? A future episode of Mechanics. It pendulums a little bit: there'll be a massive consolidation push, then you realize, no, people are actually fairly domain-specific, SharePoint or whatever, and the pendulum swings back out to separate things. We will have admin experiences as part of the Microsoft 365 admin center; that's the plan and that's what we've shown, and I'll show an excerpt of the setup guide and licensing and the things you do in the Microsoft 365 admin center now. But yeah, I need a copilot for my copilot, and those types of admin scenarios are really cool. We just did some work, if you watched Build: there's Microsoft Cost Management, which is kind of its own admin center, frankly, basically how you manage your costs now, and there's an integration there using large language models and GPT: what if I changed my VM size to this N-series VM, or changed my networking endpoint from here to here, or changed my storage type from premium to standard? It gives you the ability to do lookups and queries against your cost management data, in the context of more admin control. So I'm excited to see more of these admin things: making my life easier writing things in the terminal and in code through GitHub Copilot is awesome, but when you start to incorporate it on the admin side, that's going to be another next-level thing.

Yeah, I know there were announcements, I didn't read much about it, but there were announcements at Inspire about a copilot for security coming. That was actually part of the Microsoft Secure event stuff, oh okay, so it was back in March, yeah. So even using Copilot to assist internal operations: all the sizzle demos are about this person in sales trying to get details about their customers, but internal operations will be able to benefit from these types of things too, and you're building copilots specific to those types of roles. Yeah, and that was one of the big stories this week from the Inspire conference: not only do you have these general-purpose copilots that are part of what you experience through Word, Excel, PowerPoint, Outlook, Teams, and Loop, but you'll have role-specific ones, like the Sales Copilot I mentioned before, or the Cost Management one in Azure we talked about at Build. These are more specific to the domain or area you work in, so you'll get even more specialized value from the large language models in those cases.

Okay, so Daryl has another good point in here: back onto the topic of preparing for Copilot in the organization, and how we remediate oversharing. What do admins need to think about to get ready, because it's not available now, but we also don't want to be caught flat-footed when Copilot is made available and the business starts demanding it? What do we do to prepare? Yeah, and I think the thing you can do right now, whether you have access to the Early Access Program for Copilot or not, and sometimes this is best practice in general, unrelated to Copilot, is enabling just enough access and zero trust and all of these things around information protection and the security of how your documents are found and discovered by people within your organization. In a lot of cases you'll have too much access, or what we call oversharing, where basically anybody in the organization can look. Here I've got a salesperson, and while the salesperson could gain by looking at future product plans, they probably shouldn't be able to access R&D team or acquisition exec team content around future product plans; but the sales team does need sales information to get their job done. Achieving what's called just enough access means making sure everything is locked down to the right roles and the right people who should be able to access that content. Again, that's not a Copilot-specific thing; that's just good data hygiene and making sure the permissions and controls are in place. And you're not even showing Copilot here; you're just doing a Microsoft 365 search and finding things you shouldn't have access to, right? Right, stuff we should be doing anyway. And it's the same search, basically: the Copilot search is run in the context of the user who's logged in and prompting within those Copilot user experiences, but the search rights and the permissions to files and documents are specific to that user interacting with Copilot in that specific app, or in Teams, or in any of those experiences. So from an admin perspective, these are things that, if I wanted to go into SharePoint and search, or search through other Microsoft 365 web properties, I'd be able to find with or without Copilot. So you want to make sure you get the right access controls and permissions around all your files and documents, all the document libraries, all the different containers.

There's also a condition, and it didn't even dawn on me until I was talking to our own internal IT, called undersharing, where knowledge of where certain documents are stored is somewhat tribal, like the health plans we just saw in Pablo's demo. The content is undershared, and you want to make it more accessible to the large language model. That could mean moving that information from a third-party domain into SharePoint and having that be the source of truth where those documents are stored, or creating a plugin so it can access that external information. For example, we've got a lot of health care coverage information at Microsoft that's outside of the microsoft.com domain, so it's not always easy to navigate between those two different domains and systems and where the different files are stored. These are things you can solve for: solving oversharing down to just enough access if there's too much sharing happening, and solving undersharing by making sure the orchestration layer and Copilot can find the information they need to respond.
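Security trimming, the idea that grounding only ever returns what the signed-in user could already open by hand, can be sketched with a toy ACL model (the titles and groups are invented):

```python
# Simplified ACL model: each document lists which groups may open it.
DOCS = [
    {"title": "Q3 sales targets", "allowed": {"sales", "execs"}},
    {"title": "Future product plans", "allowed": {"rnd", "execs"}},
]

def security_trimmed_search(user_groups, docs=DOCS):
    # Grounding only sees documents the caller's groups can already open.
    return [d["title"] for d in docs if d["allowed"] & set(user_groups)]

print(security_trimmed_search({"sales"}))  # salesperson never sees R&D plans
```

The point of the sketch: the filter runs before anything reaches the prompt, so fixing oversharing means fixing these ACLs, not adding rules to the model.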
SharePoint admins? Do you have, heaven forbid, too many Global admins? That's one thing that you can work through. Instituting things like DLP and sensitivity labels, like you said, is kind of a double-edged sword. I've heard some people in the industry saying things like, oh, I want to make sure that Copilot never surfaces a confidential document, and I think that's maybe a little bit too generic of a statement. I think what's a better question, or a better goal, is: I want to make sure that Copilot only surfaces a confidential document to the people that it should be able to surface that to. And we get there with things like sensitivity labels and information protection. So if your house is in order, then you worry less about, oh, are they going to accidentally stumble into something, are we going to have a Delve moment where somebody sees something that they shouldn't have seen because it was people-centric or something like that. Yeah, and in a former life I used to work on lots of almost government-type telco registrations and contracts and agreements when I was working in Germany, and the people that a lot of times can benefit more than anyone else, think of the attorney who's got to parse through a 2,000-page contracting agreement and summarize some of the concepts. Those types of things, even though that's a confidential document, again, if you lock the document down to just the people who should see it, a lot of times those scenarios are some of the most valuable scenarios where you're going to use the power of a large language model and generative AI. And I wanted to go into the law profession, actually, before I started college, and realized that there was a lot of documentation that you had to read, and it was
pretty involved, and I decided... You could have been that guy using ChatGPT for legal precedent! But I can kind of empathize with that scenario, because that's not the most compelling information that you can read as a human, typically, and there's a lot of nuance in some of the verbiage there. So having something that can parse that, infer what's in there, and summarize those concepts out, again, these are massive things. If you think about it from just e-discovery and the trainable classification that we have through a lot of the other solutions that use AI as well, these are massive areas for AI to provide value. So you don't necessarily want to say, if anything has a confidential label, block it from the system. It's more like, make sure that if Jeremy is searching for something, that document is a thing that only Jeremy should have access to, and that it's not being shared in a bad way where it's overshared with others that shouldn't have access. So yeah, Daryl has a good point, again, thank you Daryl for all the stuff you're throwing in here, about the reporting in SharePoint for how many people are sharing to everybody in my organization, or anyone in the world, and can we put controls around that, or can we expire links that are anonymous links where everybody in the world can access. That's built into SharePoint today, and we can look at it while we're preparing for Copilot. And what you have on your screen now about active sites puts in my mind another thing admins can think about to prepare: group lifecycle management. Not only having files that are too old from a liability standpoint, but also if you have all these orphaned groups out there with data that's years and years old, it also might reduce the functionality or the relevance of what you get out of Copilot, right? So
having fresh, accurate data by cleaning up your groups on an appropriate schedule is another thing that you could do to prepare for these things, and say, hey, do we really need more Microsoft Teams teams than humans working in this organization? Maybe we should clean house a little bit. Well, part of that too, in terms of the recency and those things, you can include as part of your prompt. So, you know, include information from files only from the past six months, or past twelve months. A lot of that too, and I think this will be the next frontier in terms of user training and adoption, is how are we successful with prompts in the best possible way? Adding those instructions, like we saw with few-shot prompts, where you say I want to have it in tabular form, or bulleted, and make the bullets all twelve words or less, and give me this type of information that's only six months or less old, all of those kinds of parameters that you put in, and then getting to a state where you're just very comfortable speaking that language of basically asking for what you want in the right way. It doesn't have to be like a query language, like you would do in SQL or T-SQL or something, but the natural language means of putting that information into the prompt, and then the LLM will be able to infer, based on what you're telling it to do, because it's got that natural language ability, and spit back the response that you're looking for with the different parameters that you added. Those parameters, I think, that's going to be the next area, right? Where especially you guys as MVPs can help in terms of spreading that message of what's working, what's giving you the best results in terms of prompting, and how that's working out. We'll do the same on the Microsoft side, obviously, but, you know, all of
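The prompt parameters mentioned above (recency, output shape, bullet length) can be folded into a single natural-language prompt. This is an illustrative sketch; the wording is made up, not a documented Copilot template.

```python
def build_prompt(question, months=6, form="bulleted list", max_words=12):
    """Wrap a question with the recency and formatting parameters
    discussed above, phrased as plain natural language."""
    return (
        f"{question} "
        f"Only use information from files from the past {months} months. "
        f"Answer as a {form}, with each item {max_words} words or less."
    )

print(build_prompt("Summarize the status of the Fabrikam contract."))
```

The value is less the code than the habit: getting comfortable stating constraints explicitly in the prompt instead of relying on the model's defaults.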
that, all of that, as people get more and more into it. You see it on some of the other models now, in terms of people using them for image generation, etc. The prompting is pretty key in terms of speaking that language and describing what you want adequately. And it's arguably a similar thing if you ask a person to do something for you: the more detailed and the better the instructions you can give, the better the results you're going to get back, and that's kind of the same way working with a large language model. Yeah, Andy, you're way ahead of me on that with your upskilling into prompt engineering and being in depth about this. I think that's something all of us as IT professionals can take away: learning and getting good at this, because in a lot of ways we're going to be blazing the trail ahead of others to say, okay, what are the best tips and tricks, and how do you get the most out of it? Because from my perspective, it's almost like I've had to relearn how to use a computer again, because I've been thinking for 20 years about how to talk to a computer in terms of search engine optimization and semantic search and stuff like that, and I have to break out of that mold to give it more information on what I'm looking for, rather than, well, I'm going to throw something into Google and then the hunt begins. Yeah, and that's where this concept comes in too, of building an internal community, a center of excellence. I know Carolina is working on this a lot, in terms of bringing a lot of the knowledge that she gleans in her team from getting Teams out there and having Teams get used in very powerful ways. This is the same type of thing that you can do, from a sharing-your-successes standpoint, things that work as you work with prompts. That's the internal side of it, and then there will also be a lot of, I think, information
sharing, you know, YouTube videos of course, and blogs and things that are working for people, in terms of how to share their prompt experience as well. So, just to go back to the conversation in terms of tooling that you would use to get to just enough access: the nice thing with the SharePoint admin center is it's a way to go in and audit who has access to information, and see the different site members, your visitor settings, the privacy setting of the site, whether or not external sharing is enabled for certain sites and Teams document libraries, etc., the labels that are applied to them, all of these kinds of things you can do. It is a process to work through that list, but there are other tools that can help in terms of automating some of those steps. With Microsoft Purview, the nice thing is, imagine you're doing a project-based assignment, and we've all done project-based stuff where we might think, okay, we've declared victory six months in, now that we've done all of our work in terms of looking at every single site and document library and triaged everything. Then you still want to have the different safety nets underneath that, that will do things like automatically label and apply protection to the files. That's where things like Microsoft Purview can help, in terms of document classifiers, trainable classifiers. So if we do see that contract, or that patent application, bank statement, etc., it will detect that type of document, and you can apply additional training to make sure it's even more accurate than our built-in models, which are pretty darn accurate as they are. Then that can apply the right label, and based on the label that's applied, it will apply the right DLP policy or the right sharing permissions to that file. And then you've got a second safety net, in case the designated
container of the document library for your legal team is not used and somebody stores that contract elsewhere. These kinds of things will help, in terms of being able to find content, label it, then apply the right policies to it, and then apply the right controls on top of it. And the nice thing with DLP in general is it's multimodal, in the sense that it will work across email, SharePoint, OneDrive, Teams, and Teams chats, even endpoint DLP that you can do on Windows and other platform devices, and then with Defender for Cloud Apps you can even wire in on-premises file shares. I actually did this at home when we were building the show out for DLP, and it all works; it's pretty amazing when you get all these different repos wired into your data loss prevention fabric of what's being protected. So these things will help, again, in terms of making sure that people have the right access to information. And then there's one other thing that I'll mention, and I didn't know this until talking with Sasha Manny, who we've worked with a long time, and Mark Cashman, who I see in the chat, knows them well. Something else that's brand new is SharePoint Advanced Management, part of Syntex, and that makes things even better in terms of identifying oversharing and being able to flag you. Here we can see we've got our oversharing flag, an email that our highly confidential site, Project Apollo, is being shared externally. So it can flag you, kind of proactively tell you about it; you can then request site access reviews from the site owner, so that they make sure the right people have access to the site, and then you can restrict it to security groups right from that one experience using SharePoint Advanced Management. So that helps, again, in terms of correcting oversharing situations and proactively getting notified as well. Mm-hmm, yeah, so getting to the admin setup
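The "safety net" flow just described, detect a sensitive document type, apply a label, then derive the policy from the label, can be sketched as a toy pipeline. The patterns, label names, and policies below are invented for illustration; real trainable classifiers in Microsoft Purview are far richer than a regex.

```python
import re

# Made-up detection patterns standing in for trainable classifiers.
PATTERNS = {
    "Confidential": re.compile(r"\bcontract\b|\bbank statement\b", re.IGNORECASE),
}
# Policy derived from the applied label (None = no label matched).
POLICY_BY_LABEL = {"Confidential": "block-external-sharing", None: "default"}

def classify(text):
    """Return the first sensitivity label whose pattern matches."""
    for label, pattern in PATTERNS.items():
        if pattern.search(text):
            return label
    return None

def apply_policy(text):
    """Label the document, then pick the sharing policy that follows."""
    label = classify(text)
    return label, POLICY_BY_LABEL[label]

print(apply_policy("Draft contract with Fabrikam"))
print(apply_policy("Team lunch menu"))
```

The key design point survives the simplification: the policy hangs off the label, not off the individual file, so re-labeling a document automatically changes how it can be shared.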
guide, I think we've kind of delved into it a little bit, as far as pre-work and almost setting the baseline, the foundational stuff. Even if you don't go into Copilot, these are good activities, and like Daryl said, it's an ongoing process. But delving into actually implementing Copilot: are the prerequisites set up? How difficult is it going to be to actually turn on Copilot whenever it arrives in our organization? Yeah, so there are prereqs. At the high level, one: licenses in place for the users that you are going to scope in for Copilot licenses, so Microsoft 365 E3 and E5 are currently the ones in scope for that. And then we have something called the setup guide that walks you through the prerequisites, because beyond that you're going to want the right endpoints. Like, if you're using a perpetual-licensed version of the Microsoft 365 apps from 2016 or something, you want to make sure that you're on a current channel, effectively, of your Office 365 and Microsoft 365 apps. And then a lot of the experiences, obviously, you can get to from the web, and if you're doing hybrid Exchange, those types of things, you want to make sure the mailboxes of the people that you scope in are online mailboxes, and that they have the new mail client, so they can use those abilities. So there's a little bit of prereq in terms of making sure, a, the right licenses and services are in place, and b, your endpoints and apps are the right ones. But it is a cross-app, cross-platform thing, so we have experiences that will work on Windows, Mac, Android, iOS, like we've shown all these things since March, and the web experiences are kind of any platform because they're from the web. But it gives you more flexibility, once you have those prereqs in, and then the
setup guide is something that we've built for the admin center. Okay, so there's already a checklist, basically a guided tool. This walks you through what you need: the Microsoft 365 apps for enterprise; Microsoft Entra, making sure that you're using Entra ID, because there are third-party IdPs that you might use that won't necessarily have the same type of fidelity, and remember, we're doing search based on your information access, and a lot of that's governed by the Graph, for which Entra ID is the access and authorization source; then you've got Outlook, getting the new Outlook; Microsoft Teams, making sure that's a sufficient client level; then Microsoft Loop, for which there are some steps to get Loop running, as you know, and probably Daryl Webster on the call knows all too well; and then the semantic index for Copilot, something we will provision on the Microsoft side as part of getting that ready. And then after that, it's basically around getting licensing assigned to the right people. So from here in the setup guide you can apply, if you need to, E3 and E5 Microsoft 365 licenses to people, but if those are already in place and you've already kind of standardized on, say, Microsoft 365 E3, at that point you just need to assign the Copilot licenses. What you do there is, once you click in, you can assign those licenses to the right groups at a group level, and when you do that, those, in that case, nine individuals in the IT team that I selected there will have the right licenses in place. And then after that there's an email that you can use as a template, and I've seen the feedback, you might want to do this through Teams, but you've got a template that you can send through whatever modality you prefer, to go out and say, hey, we're deploying this and you're part of the initially scoped group of people that will have access, because we just
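For admins who prefer scripting the group-level license assignment over the setup guide's UI, Microsoft Graph exposes an `assignLicense` action on groups. The sketch below only builds the request, it never sends it, and the group id and SKU id are placeholders you would substitute from your own tenant.

```python
import json

GRAPH = "https://graph.microsoft.com/v1.0"

def build_assign_license_request(group_id, sku_id):
    """Construct (but do not send) a Graph assignLicense call that
    adds one license SKU to every member of a group."""
    return {
        "method": "POST",
        "url": f"{GRAPH}/groups/{group_id}/assignLicense",
        "body": json.dumps(
            {"addLicenses": [{"skuId": sku_id}], "removeLicenses": []}
        ),
    }

# Placeholder ids; look up the real values in your tenant.
req = build_assign_license_request("<it-team-group-id>", "<copilot-sku-id>")
print(req["url"])
```

Sending it would additionally require an authenticated Graph client with the appropriate directory permissions, which is out of scope for this sketch.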
gave or assigned the licenses. And then you can communicate what the experience is going to look like within the different apps and services, and this email is always being updated, by the way, so it looks a little bit different now. You can have that built for you, and it's also optional, which is nice for certain organizations that are extremely large, like, maybe we want our service adoption team to handle those communications. But depending on the culture of your organization, you could let it send the email. You know, as the chat said, it's nice to see a setup wizard like this. I mean, this is as simple as what we saw with enabling Loop and going through to create the Office config policy; it's very user friendly. It looks like I'm not going to have to get into PowerShell; me and Matt Wade are pretty PowerShell-averse. Now there's the new PowerShell and the reworking of PowerShell, so I've got to get used to that. I'm hoping to do a PowerShell show so I personally can immerse myself. Yeah, or the newer ways of calling it via Graph versus the older ways using AAD, for example. But I'm a huge fan of PowerShell, and I have been since it was new. Yeah, this is nice though; the readiness checklist, I think, will be really helpful for people. There's a really good question in the chat that ties in with this, and I wonder if we could briefly touch on it: are there any considerations we need to have with people that are licensed for Copilot working with somebody that is not licensed for Copilot? Anything that users need to look out for? Hmm, that's a good one. Well, the Copilot controls will only be exposed to people who are licensed for it. So for the people who aren't licensed, in other words, as you generate content, it's going to be
the license holder who's generating the content. That was one of the questions that we had, even, when we just did an AMA a few weeks ago: the content that's generated, does that belong to somebody? And no, it's your content that you generate at the end of the day. So if Copilot writes a paragraph into a document, yes, you can share that with your organization, because your organization effectively owns that generated paragraph. Yeah, so this is a good point that came to my mind: the only place I've seen where it would be a collaborative Copilot experience is in places like Loop, things like that, where you can both see it. I don't know the answer to that one, because this is still fresh enough that that's the only experience I know of where two people can kind of see the prompting experience, and maybe the unlicensed one would need to have a license for that. I don't know what the controls look like if user A has a license and user B doesn't. They're the passenger, then, if they're unlicensed, right? Yeah, maybe they're just watching in that Loop type of experience. In theory, the one that doesn't have the license shouldn't be able to add to the prompts that keep getting generated against and refined; in theory, the one that has the Copilot license would be the one that could type and submit a prompt. But I haven't posed that question, and that's one to look into, because I think that's the only case where two people can kind of jam together on a prompt, basically. That sounds like something to test. From a document standpoint, let's say we're co-authoring on a document, right? We both have the same document open at the same time, and I have Copilot and you don't, and I'm using Copilot to prompt against maybe a report that we did previously, or other content. Boy, I would assume that the other
user is just going to see the output, right? Yeah, because it generates the content for you, and then once you commit it, basically, to that file, that's when the other user would see it, from the co-authoring standpoint. A mental note, Andy: I think whenever this hits public preview, and we inevitably do a show deep-diving on Copilot hands-on, that might be something to test, like, can we have a Contoso user not licensed for it and see what their experience is in Loop? Yeah, we'll have to remember that, going through the various apps. Yeah, sure. There were some other questions that I saw, like one from, I'm not sure if I'm pronouncing your name correctly, Marjaline? Maryland? Okay, I'm praying I enunciated it properly. From a privacy perspective on Bing Chat Enterprise, as mentioned, and we've got a how-it-works kind of visual for this, any information you bring to the prompt for Bing Chat Enterprise is not used as part of training an LLM; it's not saved or shared. It's just part of the way the service is designed, so that all of the to-and-from traffic is encrypted. If you think about a consumer generative service, whether it's Bing Chat or it's ChatGPT, you're going to use it for consumer-type workloads: planning your parties, traveling to Greece, or whatever. But rewriting the marketing plan, that's where things get a little bit sketchier if you're going to an unprotected type of service. That's why you need that extra security and that extra promise around, where is my data going, am I protected? So with the commercial protections in place, in terms of Bing Chat Enterprise, when you do paste in that paragraph of information that might be from your business, your organization, it's not saved, there's no access to it from an eyes-on-access perspective, and it's never going to be
used to train the large language model, whereas those assurances aren't necessarily written into all the other modalities that you might use to build a prompt for other, more public services. So this gives you, again, it's designed to give you that peace of mind that you can build out that experience. Just to go a little bit deeper in terms of Microsoft 365 Copilot, this is where the orchestration layer in the middle really comes in, and if you want to see how it works, there's a really nice article on learn.microsoft.com; we've created an aka.ms link for it, aka.ms slash M365 copilot priv sec, I think, and you can throw the lower third up on the screen. But it walks through all of the steps in terms of what's happening, from a processing and pre-processing perspective, effectively, that the orchestration layer does within your trust boundary. So you ask the prompt; that gets sent to the Copilot orchestration service. The Copilot orchestration service says, okay, assuming that the large language model is not able to answer that question or respond to that prompt on its own, let's grab the information required from the Microsoft Graph. And the nice thing with Graph is it does use Microsoft Search, it's the GET command in the Graph API, and it can find the information that it needs, and it can find context around the user. If you say, show me the Fabrikam contract or whatever, it knows that it's only going to need to search for people within a few degrees of separation from you; versus if someone at Fabrikam in Southeast Asia is working and you're in North America, both multinational orgs, it's not going to find the one that's completely separated from your normal work stream. So it has that context around your user account, and who you work with, and those types of things, to isolate the right file, and that's part of the grounding process for pre-processing. Then the information that's retrieved is sent into a prompt, you know, like
we saw before, where the PDF was kind of mounted against the initial question. There is a system prompt that's appended to the top of it that says be helpful, cite your sources, be polite, all of the things that you would want from a system prompt, in terms of giving you useful information back with the right tone and the right level of friendliness and usability, etc. Then, once it returns that information, the orchestration layer again does a bit of post-processing to make sure that it's still relevant and all of those things are going to work for the user. At that point, once it's done the post-processing and the grounding on that end, it's sent to the app, whether that's Word, Excel, PowerPoint, Outlook, or Teams, to either respond to your prompt or to insert that content into the kind of staged area for Copilot to commit to your file. But that's kind of the circle of life in terms of how all of this stuff works. And as I was mentioning, like the Azure OpenAI Service and the work that we did with Pablo before, that's all on the back end. One thing that I hear a lot, and I see this in my YouTube comments too, is people will assume there's just one GPT model that runs globally, like there's only one: when I ask a question to GPT, regardless of version or flavor of GPT, it's the one thing, and hey, aren't all the API calls being sent under the covers to that one model? Like it's just one all-knowing being that's out there, the Wizard of Oz. The main thing is, there are multiple instances of the GPT model running, and the ones that are running for Microsoft 365 Copilot are ones that are maintained by Microsoft, running specifically for that service. There are literally lots of these models running concurrently; it's not just one. And like I said, it's completely separate from training. So for inference, there are multiple models running concurrently
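The orchestration flow just walked through, permission-trimmed grounding, a prepended system prompt, inference, then post-processing, can be summarized in a small simulation. Everything here is a stand-in: `call_llm` is a dummy function, not a real service call, and the tiny document index is invented.

```python
SYSTEM_PROMPT = "Be helpful. Cite your sources. Be polite."

def search_graph(user, query, index):
    # Grounding (pre-processing): only content this user can access.
    return [d["text"] for d in index if user in d["acl"] and query in d["text"]]

def call_llm(prompt):
    # Stand-in for a hosted GPT instance.
    return f"[response grounded on: {prompt!r}]"

def copilot_turn(user, query, index):
    grounding = search_graph(user, query, index)          # pre-processing
    if not grounding:                                     # post-processing guard
        return "No accessible content found."
    prompt = f"{SYSTEM_PROMPT}\nContext: {grounding}\nUser: {query}"
    return call_llm(prompt)                               # inference

index = [{"text": "Fabrikam contract terms", "acl": {"jeremy"}}]
print(copilot_turn("jeremy", "Fabrikam", index))
print(copilot_turn("andy", "Fabrikam", index))
```

The simulation makes the ordering concrete: the permission check happens during grounding, before anything reaches the model, which is why the model can never leak a document the prompting user could not already open.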
with one another, across different hardware and for different purposes. These are instances that are intended for and used with Microsoft 365 Copilot. And then the actual OpenAI ChatGPT model is also running in Azure; that's not a Microsoft-managed, quote-unquote, or maintained instance; that one's maintained by OpenAI. So the instances that we're maintaining as Microsoft are the ones being used for inference in the Azure OpenAI Service, in the Power Platform, in Microsoft 365 Copilot, or a Microsoft Cost Management type solution; all those kinds of things are running on the Microsoft-maintained instances of the LLMs. Okay, Michael had a good question in the chat here about the permissions; it's really helpful, by the way, to see this graphic, to see the steps that it takes and the number of turns that it does through the Copilot system. But do you know at what point the permissions are checked? They're pre, so it's all during that first trip to the Graph. You can go to the Graph API Explorer right now, I forgot the link and I can't give you that lower third unfortunately, but you can go to the Graph API Explorer right now, log in with your Microsoft 365 credentials, or Entra ID, and start asking it things like who's my manager, what file did I last work on, what was my last email. Maybe Andy or John, you can find that; it's something like aka.ms Microsoft Graph API Explorer or something. But that's effectively the permission set you have for the initial search. So it's the search that you're doing through API Explorer, and through the API effectively, since that's just a GUI on top of the API, that's what's being found in pre-processing, and so it won't be able to surface up additional information beyond what you have permissions to see in the
very first run. The first time you prompt something, that's going through that initial level of permissions, and it can't pull in more information than you personally have access to whilst using Copilot. Oh, that's a cool idea, though: you can get into the Graph Explorer and start seeing what it has access to, and that'll hopefully give you a little bit of insight into what you can expect. It's the same, like mentioned: if you can get to it from search now, if you can get to it from the Graph Explorer, it's the same thing. Yeah, and these are things that are personalized to you. If, let's say, you and Andy were in the same org, and you were in the leadership team and Andy was in the sales team, you're going to have completely different results when you search for, you know, updates with Fabrikam or whatever you're searching for, even though you're typing the exact same search string. Just like I was showing before for the sales team person: the executives are able to find the future product plans, or whatever they have access to, but the sales team in my example don't. It's going to be predicated on your specific permissions, and then you won't even see the files; you won't even know they existed, if the user doesn't have access to them, because they're not being enumerated as part of the results, right? So that's the other part of it: you don't even see the titles of the documents, even if they exist, because it's all based on your permissions to those files. But I love this, because this is my whole thing, like, I like to reverse engineer and do the whole mechanics thing in general, and these are the kinds of visuals that I will consume in my free time, to actually explain the whole flow and process. And there are some more details to this that aren't on here, in terms of the different search calls and
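The security trimming described above, two users running the identical query and seeing different results, with non-permitted titles never enumerated at all, reduces to a simple filter. The roles and documents below are invented for illustration.

```python
# Hypothetical index: each document carries an ACL of roles that may read it.
DOCS = [
    {"title": "Future product plans", "acl": {"exec"}},
    {"title": "Q3 sales figures", "acl": {"exec", "sales"}},
]

def trimmed_search(role):
    """Return only the titles this role can open. Documents outside
    the ACL are never listed, so the user can't even see they exist."""
    return [d["title"] for d in DOCS if role in d["acl"]]

print(trimmed_search("exec"))   # both titles
print(trimmed_search("sales"))  # only the sales-visible document
```

Note that the trimming happens at enumeration time, not at open time; the sales user gets a shorter result list, not an "access denied" error, which is exactly the behavior described for Copilot's grounding search.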
various things, but at a very high level, like I said, the permissions that you have as the person using Copilot, everything is predicated on your permissions. All the different results are going to have that; it's going to ground that information, and the way that it responds to you, and check the information going into the LLM and the information returning from the LLM as part of processing. So there are nice checks and balances at every level, in that sense. And these are all things that, again, if you wanted to use Bing Chat Enterprise effectively, you'd be manually grabbing information from the Microsoft Graph and pasting it into the prompt, like we saw with the e-bike example; this automates and orchestrates all of that for you. So that's part of the value here; the way it saves you time is that it will orchestrate all these things for you. Yeah, it's cool to see this visual kind of maturing over time and adding those more critical details, probably in response to customers coming back. Daryl has a good question here: are there steps that we need to take to set up that semantic index for Copilot? Like, we saw in that step-by-step guide the prerequisites: you need to be on Entra ID, you need to have the 365 apps, you need to have cloud mailboxes. What about that semantic index? It looked like it was just kind of a this-is-going-to-happen as you turn it on, but do you have more detail about how that index gets built, and how long it takes for certain-size organizations, any of that type of info? I think, from a documentation standpoint, this is something that Microsoft builds. If you look at some of the documentation around the connectors, and adding semantic details inside of the connectors, with human-readable and understandable kinds of parameters, some of those things I
think will play into it. I don't have a complete background or expertise in terms of what the semantic index does, in terms of curating that beyond what the Microsoft 365 and Graph data streams, the native ones to the M365 service, provide. That's an area that I'll certainly get deeper on as we have more information, and we'll definitely have content around the semantic index, what it looks like in terms of interactions with connectors, plugins, those types of things. There are some articles out now, and I've been reading through those as they're getting published, but I think there's more coming, in terms of how that works when you start using more plugins and more extensibility, effectively, for the model. That's an area, I think, of opportunity in terms of adding more documentation, once it's a lot more used. Michael has another good question here: so we see an acronym that's not defined, like RAI, that responsible AI check? That's responsible AI, yeah, and there's also RAG, which is retrieval augmented generation; too many acronyms now. Do you have more that you want to say about responsible AI, and what Microsoft's doing as far as making sure that bias is exposed and dealt with, how do we handle those kinds of things? Yeah, so there's a processing angle to that, in terms of responsible AI and making sure that we adhere to all the different compliance and security and privacy requirements that we need to, in terms of just working internationally with our customer set. And then there's the part that's kind of like, as was mentioned on the system prompt, making sure that the responses are helpful, the tone is friendly, that information sources are cited, and that we're not going to recommend you do anything that's going to be self-harm, those kinds of things, like all
of those types of things are part of the rule set that's always part of a standard prompt, so that when it does respond to you it's got all of these rules in place, so you'll get use out of the response, it will be citing its sources, and it will be responding in a friendly and appropriate way. And there are other nuances: some things might be regionally different from country A to country B, and all of that is part of the science of getting those things right in terms of how it responds, responding in a way that's accurate based on the user and the location. That's all part of the science of building these services out and doing it in a way that's responsible toward the consumer of the service. A lot of it, like I said, is part of the pre- and post-processing, but a lot is based on the data handling itself in terms of being responsible, and the fact that we're not using any of the content or your prompts or the responses to train the large language models. All of that's part of being responsible with AI, and there's a lot more than what I'm presenting here in the last minute or so in terms of our approach to responsible AI, but all of these things contribute to giving you the right types of usable responses, ones that can be used in a business context, that are helpful and not damaging in any way. Yeah, and Michael, you asked about documentation. Sorry for the long link, but I did go find that for you; I put it in the chat so you can click it rather than try to type it. I think there is an aka.ms/ResponsibleAI or something. I think I created that one; I create a lot of aka.ms links, by the way. I'm probably one of the aka.ms link creation champions at the company.
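The "standard prompt rule set" Jeremy describes, helpful tone, cited sources, refusal of harmful requests, with possible regional variations, can be pictured as a fixed preamble composed into every request. This is a toy illustration; the actual Copilot rules and system prompt are not public, and every rule below is made up.

```python
# Toy illustration of a standing "system prompt" rule set, as described
# above. These rules are invented for the sketch, not the real ones.

RULES = [
    "Respond in a helpful, friendly tone.",
    "Cite the information sources used in the answer.",
    "Refuse requests that could cause harm, including self-harm.",
]

def build_system_prompt(locale="en-US", extra_rules=()):
    """Compose the fixed rule set, optionally adding region-specific
    rules, since responses may differ from country A to country B."""
    rules = list(RULES) + list(extra_rules)
    numbered = "\n".join(f"{i + 1}. {r}" for i, r in enumerate(rules))
    return f"[locale: {locale}]\n{numbered}"

# A German-locale prompt picks up one extra regional rule:
prompt = build_system_prompt("de-DE", extra_rules=["Use formal address (Sie)."])
```

The useful property is that the rule set travels with every request, which is why the transcript describes these checks as happening "at every level" rather than as a one-time configuration.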
So yeah, I would say it's something like aka.ms/ResponsibleAI, or microsoft.com responsible AI, but those are probably ways to get to the same information. Yeah, here we go, I just found it. Did you find the aka one? Put that up. All right, the responsible AI standard, a PDF, okay, that works. And those are things that we're continually adding to, adding more detail as these things develop, because it's been such a rapid development cycle, as you can imagine, since December and even before that, November, frankly. We want to make sure we're capturing all of that information through responsible AI and in the documentation that we have. So we've seen the experience, and we've talked through the prompting level and some of the orchestration behind the scenes. Can we talk a little bit about the infrastructure topics? Oh yeah, you want to talk supercomputers, right? I love hardware. As you might know, even if you follow my personal channel, I'm a total gadget and hardware person at the end of the day, and some of the coolest tech, I want to say, is in this space. I've got to say I'm loving all of the work that NVIDIA especially is doing in terms of their GPUs, the Hopper series and everything. So I'll just go through some of that from a hardware perspective. If you want to see the real source of this, check out the show we did with Mark Russinovich; it's over 400,000 views on YouTube now since it launched at Build, and Build was what, the 18th or something of May, so it's really doing well. It covers how the service itself, both the public and the Microsoft instances, is run, how the training works, and the hardware and the metal
that's used. But basically we're using state-of-the-art InfiniBand-connected and NVLink-connected GPUs to run these, because it takes a massive amount of compute to do this well, and this is something that's basically exclusive to Azure, because that background in using InfiniBand networking is pretty much unparalleled in the public cloud. As I talked and worked with Mark on building all this content out, this is something we've got a pretty unique level of skill in running. The Azure data centers that run this use InfiniBand networking; it's all fat-tree network based, so if you're a networking person, it's good. We have VMs that can run up to eight GPUs per VM, the ND series, using the H100, the best of the best in terms of GPUs from NVIDIA. Then you've got the Azure layer of manageability on top of it, with Entra and the protections, ARM templates, ONNX Runtime, DeepSpeed to do hardware acceleration, all of these things that contribute to having the right platform to run both training and inferencing jobs at massive scale. One of the things that's really cool from the Microsoft side of managing it is that, as you can imagine, because of all the different stakeholders using OpenAI and GPT LLMs, you've got stakeholders in Bing, you've got the Microsoft 365 team, you've got the Power Platform, and then the customers buying Azure OpenAI Service who obviously need capacity. Being able to balance who gets to use what hardware and when is something we have to engineer for, and we need a way to do things like checkpointing of training runs, like I mentioned before. For example, if you think about the way an LLM gets trained, it could take days, weeks,
months in some cases, and so what you want to be able to do, if a training run fails, is resume it so you don't lose any time. That's what we've built in Project Forge. And if, say, resources or capacity are limited at one data center, you can pause and migrate those jobs to other data centers; all of that is what you can do as part of Project Forge. It sounds like my video is frozen, so I will change it. You're fine, one second, you're good. You need a supercomputer to run all these graphics that you brought for us. I'm sending a lot of pixels over my Comcast network connection from home. So now I'm on the Sony A7, for anyone who cares; I went from a ZV-E10, if you're interested. Okay, so this is basically the way that we're able to manage at massive scale across these mega services, and a mega service too is OpenAI itself, running ChatGPT and other services for its customers. They have their own ways of managing GPU utilization, by the way, but for the Microsoft services we're using Project Forge, and this is something we're working on releasing for customer use as well. So those are all part of the metal and the platform that runs it. And then, from a training and fine-tuning perspective, we've developed something called low-rank adaptive fine-tuning. For example, if you do need to fine-tune a model, which we don't need to do in many cases because the prompts themselves are enough to provide the additional information, but in the days of, say, GPT-3 we fine-tuned for GitHub Copilot, and that's basically giving it additional ability to code. If you think about the metal that's used to run the checkpoints that
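The checkpoint-and-resume idea behind Project Forge, pausing a multi-week training run and picking it up again without losing progress, possibly on capacity in another data center, follows the same pattern as this minimal sketch. The real system is far more sophisticated (real checkpoints capture model weights and optimizer state, not a counter); everything here is invented for illustration.

```python
# Minimal checkpoint/resume pattern, illustrating the Project Forge idea
# described above. Toy state only: a step counter and a fake loss.

def train(total_steps, checkpoint=None, fail_at=None):
    """Run (or resume) a 'training job'. Returns the state dict,
    which doubles as a checkpoint if the run stops early."""
    state = dict(checkpoint) if checkpoint else {"step": 0, "loss": 100.0}
    while state["step"] < total_steps:
        if fail_at is not None and state["step"] == fail_at:
            return state              # simulated failure: hand back the checkpoint
        state["step"] += 1
        state["loss"] *= 0.99         # pretend the model improves each step
    return state

# A run that fails at step 500 loses nothing: resume from the checkpoint,
# possibly scheduled onto a different data center.
ckpt = train(1000, fail_at=500)
final = train(1000, checkpoint=ckpt)
```

The design point is that the job's progress lives in serializable state rather than in the process, which is exactly what makes pause-and-migrate across data centers possible.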
we're doing: in the old days, when you didn't have low-rank adaptive fine-tuning, you might have to run an entire mega checkpoint as you ran through that training exercise. Now you just target a portion of the LLM; basically, think of it like targeting your left brain versus your right brain, if you want to teach it art or something. You only have to use a much smaller set of GPUs to get that additive fine-tuning in, and your checkpoints are smaller because you're only touching fewer parts of the LLM. And when you think about the hardware itself, these are massive: for GPT-3, 285,000 AMD InfiniBand-connected CPU cores and 10,000 NVIDIA Tensor Core GPUs, InfiniBand-connected. We don't have the stats around GPT-3.5 or GPT-4, but the compute used to run this is massive, and there are physics around how far apart the GPUs can be, in terms of the number of meters, so all of that's designed into our data centers. It's crazy how much technology is packed into what we're doing to provide these services across Microsoft, and also as OpenAI themselves consume the service. The other thing I think is fun is that there are two different networking paradigms, NVLink and NVSwitch. NVLink is 3.6 terabytes, capital B, per second in terms of how fast it can transfer bisectionally, and then you've got 400 gigabits per second per GPU on InfiniBand, which is already pretty fast; you can get up to 3.2 terabits, not bytes, per VM, and this is when the GPUs might be 10 meters apart from each other. So it's crazy fast at pretty much all levels, and when you compare it against the normal NICs or speeds that you have on the RJ45 kind of
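The low-rank adaptation (LoRA) idea mentioned here, touching only a small, targeted part of the model so fine-tuning jobs and checkpoints stay small, comes down to learning a narrow update W + B·A instead of retraining a whole weight matrix. A quick parameter-count sketch shows why the GPU footprint shrinks; the sizes chosen below are for illustration only.

```python
# Parameter-count sketch of LoRA (low-rank adaptation), as described
# above. Instead of updating a full d x d weight matrix W, you learn two
# skinny factors B (d x r) and A (r x d) with small rank r, and apply
# W + B @ A at inference time.

def full_finetune_params(d):
    return d * d                      # every weight in the matrix trains

def lora_params(d, r):
    return d * r + r * d              # only the two low-rank factors train

d, r = 4096, 8                        # sizes chosen for illustration only
full = full_finetune_params(d)        # 16,777,216 trainable weights
lora = lora_params(d, r)              # 65,536 trainable weights
savings = full / lora                 # 256x fewer weights to train and checkpoint
```

Fewer trainable weights means both smaller GPU sets for the fine-tuning run and smaller checkpoints, which is exactly the "targeting a portion of the LLM" point from the transcript.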
networking side, this makes those speeds look really, really slow when you're talking about InfiniBand GPU networking. And then on the board itself, when you've got the eight GPUs connected on basically the motherboard, that's when it gets crazy fast. We had a stat in our show: imagine being able to copy 80 4K Blu-ray discs in one second; that's how fast the bisectional bandwidth is in terms of the GPUs interacting with each other locally on the same board. Wow. So what you're saying, Jeremy, is that with my little mini rack of Raspberry Pis, I'm not going to make my own Copilot and save the license cost? Right, right. I always like the comments of "will it run Crysis"; inevitably those will come up. But yeah, these are supercomputers, logically designed supercomputers. They're not necessarily in one rack, but they're things that very smart individuals are architecting logically to work together. And you can look up the InfiniBand stuff; I want to say it's all command-line, CLI-based, in terms of wiring up virtual machines to use InfiniBand. I've looked into that, and when you look at the VMs, the N-series VMs, they're expensive VMs to run. So when you're talking about wiring up 285,000 cores across multiple VMs and 10,000 GPUs, you can imagine the magnitude of what's being run here for training and for inference on some of these models; it's non-trivial, I guess, in terms of the compute being used. Wow. So if you want more detail on this, I'm going to throw this up on the screen again; this is in a video that Jeremy did on Mechanics with Mark Russinovich, and Jeremy actually provided a playlist of all of the Mechanics episodes for AI topics and Copilot topics together, this AI Mechanics playlist. What else do you
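The "80 Blu-rays per second" stat is easy to sanity-check against the quoted figures. The disc size here is an assumption: 4K Blu-ray capacities actually range from roughly 50 to 100 GB, so this is strictly back-of-envelope arithmetic, not an official derivation.

```python
# Back-of-envelope check of the bandwidth figures quoted above.
# BLURAY_GB is an assumed round number; real discs are ~50-100 GB.

NVLINK_TBps = 3.6                 # terabytes/second, bisectional, on-board
INFINIBAND_GBPS_PER_GPU = 400     # gigabits/second per GPU
BLURAY_GB = 45                    # assumed size of one 4K movie, in GB

# 3.6 TB/s = 3600 GB/s; divided by one disc's size gives discs per second.
discs_per_second = NVLINK_TBps * 1000 / BLURAY_GB

# The per-VM InfiniBand figure also lines up: 8 GPUs x 400 Gbps = 3.2 Tbps.
vm_tbps = 8 * INFINIBAND_GBPS_PER_GPU / 1000
```

With a 45 GB disc the NVLink figure works out to exactly 80 discs per second, so the on-air stat is consistent with the 3.6 TB/s number; the 3.2 Tbps per VM likewise falls out of 8 GPUs at 400 Gbps each.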
think we should be keeping track of to learn more about Copilot and stay up to date? There are different layers of altitude you want to stay up to date at. So of course I'm going to recommend Mechanics, because it's something I'm deeply involved with, but if you do invest the time, there's probably 90 minutes or so across those 14 videos that we've done since, I want to say, November; Power Platform was the first one in that series, with Charles Lamanna. If you go through that, you'll learn a lot in terms of the copilots, the endpoints, the way to deliver copilots if you're building Power Apps, the way to use them inside of Loop, how they work across Power Platform. There's some really cool stuff in Power Platform, like site Q&As and various things you can do now too. And then all of the stuff we talked about on Copilot setup, how it works from the user and retrieval augmented generation side, how the Copilot orchestration works, all of those things, and then the content I showed from Pablo around ChatGPT and search. I would encourage anyone wanting to get deep on the topic to watch all of this, not just for a selfish reason, but because it took dozens, hundreds of hours to get to where we could document this stuff, and hopefully it's all pretty accessible in terms of being easy to understand; that's our goal, at least. And then at a very high level, and we just saw the announcements this week, the Microsoft 365 blog is where things like the availability of copilots in a more public preview, or things like Bing Chat Enterprise, get announced. The Microsoft 365 blog is the source for the top-level announcements, the ones that are really
important, the ones that very senior people are obviously authoring and have a large stake in, so you can't ignore those; make sure you're checking them out. And a lot of times the nice thing with these, because they're so important, is that the press will cover them and post links to them as part of their stories, and there are PR teams that work with these blogs, so a lot of this stuff is hard to miss if you're following tech and you're on Twitter or Threads, pick your social platform, or just reading third-party tech news sites: you will see a lot of what's here if it's important. Then, at the more drilled-in levels, we just launched the Microsoft 365 Copilot blog; you can see there's exactly one post on it now, but between that and the Tech Community there are a few, and we just published one this week on the Bing Chat Enterprise FAQ, which is a good one I would read as well, and I will link to it, because there are a few admin steps to get that running that are part of the FAQ from Jared Anderson. And by the way, Jared is another person who worked in mainland China after I did; that was one of our connections. So that's another one, the Microsoft Tech Community in general and the Copilot blogs, and then the other things that are coming from a Message center perspective. Assuming you're a Microsoft 365 admin, the important stuff, like when things will be rolling out and getting ready for Microsoft 365 Copilot, will be in Message center as well; this is an actual tenant that I have and an actual post that I filtered, so these types of posts, the important ones, will show up there. So hopefully between all these different channels we
can reach you and make sure that you've got access to the information that you need. Awesome, perfect, that's a lot of resources; hopefully, if you're watching live or if you go back later... Yeah, I would say in general, if you follow tech news, and I listen to a lot of podcasts like Windows Weekly and those kinds of things, if it's at that level and it's from the Microsoft 365 blog, it will get covered somewhere, and then you'll know where to go for deeper information. With Mechanics, we do what we can to be part of the first wave of announcements. It kind of depends; sometimes we're asking whether there's enough of a Mechanics angle to the thing being announced, like how much admin story there is, before we do it. So sometimes we're not there the very first day, like March 16th, but as soon as we start documenting how retrieval augmented generation works, the different pieces of the orchestration, admin setup, all of those things, we'll be delivering them through Mechanics. In the same way, there are more policy controls we're working on now that will light up, and as those get released we'll cover all of that on Mechanics too. It's going to have more of an implementer and admin feel to it, of course, but that's the area I'm passionate about and where I think I can at least help drive some of the traffic and discoverability, so that's where a lot of that content will find itself on my side. And then obviously anything we do is going to be covered extensively on learn.microsoft.com as part of product documentation. So we're coming near the end of our time; I want to take over the screen, as rude as that is. I've been taking notes the whole time, and something we've started doing, Jeremy, is that Andy and I we
will do a collaborative Loop workspace, to kind of live this life, right? So I like to land and give everybody who's watching a takeaway, something you can screenshot, somewhere to go, kind of a call to action. What I'm feeling here is that some of the things we can do to prepare for Copilot and get ready are the prerequisites we talked about: getting yourself into the right state as far as E5 or E3 licensing and Azure AD, and getting your Office apps onto Current Channel so that you can experience and use these copilots as they launch. And when we get to the checklist, correct me if I'm wrong or add anything to this list: step one, what you're doing right now, is learning this stuff. We talked a little bit about the basics of generative AI, and there are some great LinkedIn Learning classes you can take on understanding generative AI. Jeremy talked a little bit about the types of prompting, zero-shot, one-shot, few-shot. I think it's important to embrace that we as technology professionals are going to be leading the way a lot of the time in learning this, so it would behoove you to start upskilling right now as your first step. Then confirming your prerequisites, and getting your house in order with oversharing; again, some of the things we mentioned, quick fire: auditing how you're sharing and what your admins are doing, getting into information protection and lifecycle management. These are hopefully good things you can take away to start working on. Then Bing Chat Enterprise, which we talked about near the beginning, and some of the reasons why you want to look into it: it's already included in your Microsoft 365 licensing costs if you're E3 or E5, and it secures things so that those prompts that are happening
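The zero-shot, one-shot, and few-shot prompting styles mentioned in the checklist differ only in how many worked examples you pack into the prompt ahead of the actual question. A minimal sketch, with a made-up classification task and made-up examples:

```python
# Minimal sketch of zero-/one-/few-shot prompt construction, as mentioned
# above. The task and examples are invented for illustration.

def build_prompt(task, examples, query):
    """n examples = n-shot: 0 -> zero-shot, 1 -> one-shot, 2+ -> few-shot."""
    parts = [f"Task: {task}"]
    for example_in, example_out in examples:
        parts.append(f"Input: {example_in}\nOutput: {example_out}")
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

zero_shot = build_prompt("Classify sentiment.", [], "Loved the demo!")

few_shot = build_prompt(
    "Classify sentiment.",
    [("Great keynote", "positive"), ("Stream kept freezing", "negative")],
    "Loved the demo!",
)
```

The few-shot version gives the model a pattern to imitate, which is often enough to steer output format and labels without any fine-tuning, which is exactly the point made earlier about prompts usually being sufficient.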
in Bing, which people are probably already using today in your organization, aren't saved, nobody has access to them, and they're not used for advertising, things like that. So I think that's definitely a no-brainer to look at. And then the other thing I would say: if you're using an add-in, like a Chrome extension or an Edge extension, I try not to let people use things that you haven't checked or don't trust. The LLM won't get trained in these cases, because the person building those Chrome or Edge extensions has no way to be part of the training process, so that's a non-starter in terms of worrying about it; the problem is that the prompt is going somewhere, and you don't necessarily know where. So if you are using, say, a Chrome extension to summarize the doc that's open on screen in your browser, just make sure you're looking after your browser extensions and that you trust them. If you're in a managed environment and you have control over those types of things, test and authorize the ones you trust. That's the high-level guidance, and it should be true for all extensions, but especially for something that's pitched as the way, quote-unquote, to use an LLM; you need to make sure you have the right controls around those things. That's the nice thing with Bing Chat Enterprise: we've got the right assurances in place that we're not doing anything undesirable in that sense. A quick question that just came in from Daryl: do you know if Microsoft is planning on extending Bing out to other browsers like Chrome? I don't have any information on that; I could ask, but I don't currently know anything about it. That's fair, yeah. Cool, but no, that's a good point; the sidebar works, I
mean, I've been using that with Bing chat. Yeah, I love that; it's effectively similar, but I don't know what the extension story is, to be honest. Okay, but I think that's a really good point: I'm always an advocate of enabling your people with the right tools so they don't have incentive to go use the wrong tools, because otherwise you're just playing whack-a-mole, and if you keep up with AI in general, you cannot play whack-a-mole and keep plugging every little place where LLMs or plug-ins launch. It would be a better use of your time, in my opinion, to set up the proper path for people to do the right thing. Cool. And that final thing is assigning licenses, right? That's kind of the last step. That's actually the easiest part, to be honest, in terms of the IT admin-style work. Like I said, get your house in order in terms of Graph, API search, those types of things that are best practice with or without Copilot, and that you should be doing anyway: DLP, sensitivity labels, getting trainable classifiers and Purview deployed. It's optional to do the higher-end stuff like SharePoint Advanced Management and Syntex, but those are all good; you want layers of protection on that kind of content, so all of those are very much recommended. So I want to put your information back up on screen, as we're a little bit over time, but again, another resource when we were talking about how to stay up to date: make sure you follow Jeremy on Twitter and his fantastic YouTube channel as well. Like you were talking about your cameras a little when you switched over; if you're interested in cameras, which I know a lot of people who watch Andy and me are, Jeremy has an awesome channel where he tests out amazing new cameras and just things that help you work better
at home remotely. So if you want to pitch your stuff... I have a bit of a gadget addiction; I've got a collection now of these things, and I'm just interested to try them out. I've got one coming where you can actually detach the microphone and wear it as a lav mic, and the camera is wireless. I'll see these things pop up in my feed and I've got to have them, like an illness. But yeah, I'm naturally curious, I guess, and that's what I do in my spare time on the weekends: I test this stuff out and try it, and hopefully it's helpful. I've got my own channel on YouTube, and I would say in general I'm not doing a lot elsewhere at the moment; I'm kind of on the decline on Facebook myself. From a Mechanics perspective we do regularly tweet and post on all the platforms, but I'm more frequently on YouTube than any other platform, both for work and personal. Awesome, very cool. So I think we sufficiently satisfied the requirements of a deep dive, because we went really deep into this, all the way down to the architecture. This has been super enlightening. We do have a bunch of aka.ms links that we're going to share; we'll put those in the description on the YouTube channel, and if you follow me on Twitter, I've actually been live-tweeting this throughout the entire session, including links. There's the main video, and I'll put that link in the chat window for anybody who wants it, but follow that and all the links are there as well; we'll make sure we share out all the references and materials from this. I think that's going to bring us up to the end of this one. I know we'll look at a future episode of hands-on with Microsoft 365 Copilot, and around the holiday time we like to geek out with tech, so maybe Jeremy can join us on an episode around the
holidays and talk some more audio, video, and other hardware outside of the supercomputers that you get to work with every single day. I would love to, though I don't want to feel bad that my camera froze about two-thirds of the way through. You handled that really well; the pivot went really well without having to undock and re-dock. But no, this has been great, and I do want to thank you for taking the time not only to be here today, but for the prep and the resources you shared with John and me leading up to this. This has been really eye-opening; I know we've got a whole lot to debrief on over the next couple of weeks as we start to learn more about this and take it back to our organizations. So thank you so much for being here, and with that, thank you everybody for joining in. That is 365 Deep Dive for July of 2023. We look forward to seeing you in the next episode at the end of August; we're working on that one now, so hopefully we'll have something to announce soon. With that, thanks everybody, and we'll see you again in the next one. Thank you, Jeremy. Thanks, everybody. Thank you. [Music] [Applause]
Info
Channel: John Moore
Views: 2,426
Id: s5JAM7jMlRI
Length: 127min 58sec (7678 seconds)
Published: Fri Jul 21 2023