[MUSIC] Kevin Sherman:
Hello everyone. First of all, a huge thank you for sticking
with us today. When they gave me the
news that this would be the last breakout
session on the last day, they quickly also told
me that they like to save the best
for last to make sure that everybody stays there. I still don't know if
they told me that to make me feel better about
myself or if it's true, but we will do our best
to hold it true today. So, I will get off the stage in just a couple of minutes
because I've got three of the most knowledgeable
people there are about Copilot from Microsoft
365 with me. But before I kick
it over to them, a few housekeeping things. First, what we're going
to be talking about. We've broken this out
into three sections. The first will be focused
on Copilot orchestration. A little peek under the hood of how Copilot works. Where does the data flow, how
does the data get there? How does Copilot do
these magical things? We're going to unlock a little
bit of that magic for you. Second piece is around
in-app data flows. Copilot does some incredible things in the apps that we know today. It can create decks in PowerPoint. It can create beautiful new drafts in Word. It can even do conditional formatting in Excel, all via natural language. But how? Dave will give a little
bit of background on that. The last piece is around
customization and extension. You've heard a lot about Copilot
Studio this week. Going back to May, we heard
a lot about extending through things like
graph connectors and message extensions. In this session,
we want to bring all of that together to give you a complete picture
of what options are available to you to extend and customize Copilot. Lastly, we've got microphones
on either side of the room. We hope to leave about 10
minutes for Q&A at the end. So please line up and
make sure you're getting your questions answered
before we leave. Before diving in,
a quick step back, I love talking about this
topic of how Copilot works. That's because it does something so helpful for me inside of Microsoft when talking to customers: it demystifies what feels like
a really complex topic. It helps users get more out of the product when
they know how it works. But more importantly, for people like you, in your shoes, it helps to build that trust
and that confidence that you understand the product before trying to implement it or push
it in your organizations. I've spoken with
probably 100 customers to this point since we
launched Copilot in March. There's always a part
of that conversation where their eyes light
up and they say, ah, I get it, and I hope to be able to create
that with you today. I wanted to ground
the conversation on two of the topics
that we hear most often. The first is, how does it work? And the second is, how can I extend
and customize it? The first question comes
from a good place. We are, to some extent, naturally skeptical about new technologies, especially when they're as new as this, and especially when a company like Microsoft is claiming they're as transformational as this. Admittedly, I was skeptical. I had a lot of questions
when I first saw it. The good news is the answers to a lot of those questions
are really simple, and we'll talk through
those answers today. The second is around
customization and extension. We have talked about
Copilot as being a Copilot that is with you through your entire
life at work. We mean that, but we
also realize that people's work isn't only contained in the
Microsoft graph. How do we connect it to those other services
and how do we extend it to make all of that
available within Copilot? Just as important as how it works is where these questions are coming from. And again, every time I'm
talking to customers, if they ask a question
like how does it work? How does the data flow? How do you commit to X, Y, or Z? Or if they ask, how
do I extend it? How do I bring more data in? Before answering, I really want to understand why
they're asking. Are they coming from a place of skepticism or concern or
are they just curious? Is there a service that
they really know they need to get in Copilot
to get the most of it? Or do they have some
misconception that you need to extend it to be able
to get any value from it? Being able to answer
that why has helped set me up for success when going into the details. Again, I hope to be able to help you on that
journey as well. Last point, I want to ground on some principles
we'll talk about today. The first is that
Copilot inherits the Microsoft 365 commitments. We built Copilot on a
platform of security. It was not an
afterthought that we had to figure out how to
add it in at the end. A big part of the
way that we did that is that we borrowed and leveraged the existing infrastructure and architecture of Microsoft 365. That inheritance is a lot of what Mary will talk about in a few minutes, and a lot of what helps to drive the trust story that we continue to talk about with Copilot. The second is that
Copilot can be customized and extended to expand value. I mentioned it a second ago. It is not a
prerequisite for value, but it's a way to expand
value by connecting to different services that
you have in your ecosystem. You get more out of Copilot. The good news is we have lots of different ways to get there, from low code to pro code, to extensions to plugins
and things like that. Ben will be talking a lot
about that in a minute. Without further ado, I'm
going to have Mary come up and she's going to
talk through some of the Copilot orchestration. Mary David Pasch:
Well, thank you all for being here. My name is Mary David Pasch. I am so excited to be
here today to talk you through how Copilot
orchestration works. I am principal
product manager on the Copilot engineering team and we build the platform that all the Copilot experiences
are built on top of. Before I jump into the flow, I really want you to make sure that you walk out
of here feeling like you understand how
Copilot works so that you know what you need to do to get your organization
ready for Copilot. And so that you have
the confidence that your organization is
ready for Copilot. The thing I want to
address first are some of the concerns and some of the topics that come up the most when
I talk to customers. And that's all around
data protection, security, and privacy. At Microsoft, the way we build AI follows our responsible
AI ethical framework. This really permeates
our culture and how we build product. I hope that as we walk
through how the flow works, how orchestration is happening, that you get a good
sense of how we built in the responsible AI principles into every layer
of orchestration. As we walk through
the flow, there are a few things that I
want you to remember. The first thing is that
your data is your data, and that means it's yours
to own and control. The second is that
we do not train the foundational large
language models on your data. The last piece is that your data is
still protected by Microsoft's comprehensive enterprise compliance
and security controls, and that last point is probably
the most important one. People hear about GPT
and LLMs that have really taken off
in the last year, but we've been building
the platform for Copilot and Microsoft
365 for decades. In particular, we've been
investing in how do we keep your sensitive
content secure, and all of the investments
that you've been making in keeping your data secure
accrue directly to Copilot. There are a few things
you should think about when you think about how Copilot works and how it protects your sensitive
business data. The first piece is
that Copilot is all within the Microsoft 365
compliance boundary. People hear GPT and
they're thinking that somehow they don't have
control over their data. That it's going to
go somewhere that they don't understand. But we're using large
language models that are in the Microsoft 365
compliance boundary and those are private instances of the large language models. The second really
important piece is that the data
that Copilot has access to is limited
to the data that you as a user have access to. Copilot is not able to
somehow elevate your access or maybe even generate insights on data that you
don't have access to. This is really important. It makes the third point
here really critical, which is access controls. You can control what people
have access to in Copilot by controlling what
they have access to using our data
access controls. I worked for many years
in SharePoint permissions, so I get really
excited about this because this topic was important before LLMs and it's
even more important now. Make sure that you're following Microsoft's best practices
for ensuring that your organization's data is only shared with the people that
it should be shared with. In particular, earlier this
week there was a session that talked about how to get your organization
ready for Copilot. Purview has some great
features and mechanisms that allow you to prevent oversharing, with things like DLP, sensitivity labels, and other policies that you can set up for your organization. I highly recommend taking a look at that when
you have a chance. The last piece here just to call out is that users have access to data via Copilot for the
data that's in their tenant. You're not going to have people somehow able to generate
insights over data that's outside of their
tenant, or other people outside of your tenant somehow generating insights
on your data. Now that you have a foundation
of how we think about protecting your sensitive
business data in Copilot, I want to walk through the flow and some of the
scenarios you could do. One of my favorite
scenarios with Copilot, and there are many ways to be productive, is being able to ask questions and get answers to things that might be buried in your tenant's data. In particular, I think this is really useful because today
you can already do this. You can go to search, search through, and find all the content, and read through a bunch of emails or files to find the answer. But with Copilot, it
does that for you. As an example, you could be in office.com, you
can ask the question, maybe I want to know what are the project milestones for this Project Falcon
that's coming up. What Copilot will do is
it will search for all of your content across
Microsoft graph, the content that you
have access to to generate the response and
it'll actually list out, here's a list of project
milestones for you. At the bottom, it'll also reference where it's
generating that content. If you have something like
Purview set up where you're labeling all your data to say
this data is confidential, this data is general, actually in the references, it'll say what type of data was used to generate this response. If you then want to go further and validate the references, you can actually open
the file because it's listed right
there in the response and see exactly which slide in this PowerPoint the
answer came from. What's really happening
behind the scenes here? The first thing is that I ask a question which is
the user prompt, and this gets sent to our
Copilot orchestration engine. The orchestration engine is what powers all of our
Copilot scenarios. But it doesn't just
send your user question directly to the LLM. The first step is a
very important piece, which is pre-processing. This is where grounding happens. Grounding is the way that we get enough information to answer
your question accurately. It's a big part of how
we don't hallucinate. The way that it gets
more information is by calling Microsoft Graph. Simply put, Microsoft
Graph is your data. It's your emails
and your files, who you're working with, what you're working
on, and all of that context and relationship
around that data. This is also the part where, when you control what users have access to, Microsoft Graph honors that with Copilot. In this step, Copilot only has access to the data that you as the user have access to. Once the orchestration
engine says I have enough information to
answer your question, it takes your question, the additional grounding data, and sends it in a modified
prompt to the LLM to give the LLM enough information
to generate a response. There's also some
other modifications to the prompt that we do here where we make sure
that we're following our responsible AI principles. I'll talk a little bit
more about that in a bit. The large language model
will generate a response. Then the next step, I think
is the most important step, which is the post
processing step. We do additional
grounding where we can call the Microsoft Graph, validate that the
responses are accurate, make sure that we have the right references
and citations, and also check to make sure
the response isn't generating harmful content and it is still maintaining our commitments
to being ethical and fair. Once the orchestration engine decides that there's enough to give a good response, it responds back to the app. This was a very linear flow that I showed you, but actually the orchestration engine is an iterative process, so it can call the Graph multiple times and it can call multiple other skills. All of this flow that I showed you happens within the Microsoft 365 compliance boundary, and as you see, the large language models here are also our private instances that are within the Microsoft 365 compliance boundary.
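To make that flow concrete, here's a minimal sketch in TypeScript of the loop just described. Every interface and function name in it is invented for illustration, since the real orchestration engine's internals aren't public.

```typescript
// Minimal sketch of the orchestration loop described above. Every interface
// and function here is invented for illustration; the real engine's
// internals are not public.
interface Grounding {
  snippets: string[];   // security-trimmed content pulled from Microsoft Graph
  citations: string[];  // the references surfaced at the bottom of the answer
}

interface Services {
  // Pre-processing: search only returns what *this user* can already access.
  ground(userId: string, query: string): Promise<Grounding>;
  callLlm(prompt: string): Promise<string>;
  passesRaiChecks(text: string): Promise<boolean>; // responsible AI filter
}

async function answer(userId: string, userPrompt: string, svc: Services): Promise<string> {
  // Grounding: gather enough information to answer accurately (RAG).
  const grounding = await svc.ground(userId, userPrompt);

  // Modified prompt: the user's question plus the grounding data.
  const modifiedPrompt = [
    "Answer using only the provided context, and cite your sources.",
    ...grounding.snippets,
    `Question: ${userPrompt}`,
  ].join("\n");

  const response = await svc.callLlm(modifiedPrompt);

  // Post-processing: validate the response before it goes back to the app.
  // (In reality this step can also re-query the Graph, iteratively.)
  if (!(await svc.passesRaiChecks(response))) {
    return "Sorry, I can't help with that.";
  }
  return `${response}\n\nReferences: ${grounding.citations.join(", ")}`;
}
```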
This flow is a pretty simple flow, where you saw the orchestration engine get some additional grounding data from Graph. But what I love about
the orchestration engine is that it can pull together multiple different
skills to actually go from just chatting to actually commanding apps. This is how we do
things like generating an email for you or writing
the document for you. Dave is going to come up here in a second and tell you how the orchestration engine
can be more sophisticated. You can also make the
orchestration engine even more powerful by customizing it with things like connecting it to public web data to give
it up to date knowledge. Adding additional
grounding data with our graph connector
story and adding more functionality
with plugins. This is what Ben is going
to walk through in a bit. Something to call out really quickly here is
that this can live outside of the Microsoft
365 service boundary. This is an area
where the admin has a lot of control over
what they want to enable or disable
and then the users also will have a toggle
to enable or disable it. But before we jump into
that complicated flow and our more
sophisticated flows, I just want to call out a
couple things that I hear a lot from customers and that's
around the Microsoft Graph. This is a really core point
of how we actually get grounding data to answer
your questions in Copilot. Microsoft Graph is your data
as I was mentioning earlier. The way Copilot has
access to your data and the insights in Microsoft Graph is through Microsoft Search, which provides relevant
search results and insights on
top of your data. If you've ever used email, like if you've ever
searched for an email in Outlook or searched for a file in SharePoint, then you've used Microsoft Search. We are constantly evolving it, adding better relevancy ranking, extracting insights and answers. All of those innovations
will actually accrue to a better
search experience, but it will also make your Copilot experience
even more powerful. Because Copilot is built on the Microsoft 365 platform
and on Microsoft Search, it means that you
can control it with our existing Microsoft
Search features and our existing
security features. You'll also see in a little
bit how you can extend it the way that you can in Microsoft Search
with graph connectors.
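Because Copilot's grounding rides on Microsoft Search, you can get a feel for the security-trimmed retrieval underneath it by calling the Microsoft Graph search API directly. A simplified sketch; token acquisition is elided and response handling is reduced to the essentials:

```typescript
// Query Microsoft Search through Microsoft Graph. Results are automatically
// security-trimmed to what the signed-in user is allowed to see.
async function searchGraph(accessToken: string, queryString: string) {
  const res = await fetch("https://graph.microsoft.com/v1.0/search/query", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      requests: [
        {
          entityTypes: ["driveItem", "message"], // files and emails
          query: { queryString },                // e.g. "Project Falcon milestones"
        },
      ],
    }),
  });
  if (!res.ok) throw new Error(`Search failed: ${res.status}`);
  return res.json(); // hits over the same content Copilot grounds on
}
```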
Being able to get the most relevant information from Microsoft Graph is really important, because we leverage a technique called retrieval
augmented generation. I told you earlier that we don't train
on your tenant data. Then how does the LLM have enough information to answer questions about
your tenant data? We do this by having search find that grounding data, as we saw in the flow earlier. You ask a question
in the user prompt, we get some grounding data. But there are two
other things that we send in the modified prompt. One is that we send
chat history and that allows the LLM
to actually remember what you were talking
about and have a conversational
experience since we don't actually train the LLM and
it doesn't actually learn. This is important for the user, it's also important to you as an admin because
you're now able to actually do features
like e-discovery, retention, auditing on top of
that chat session history. The last piece here
is the system prompt. Microsoft 365 has a
default system prompt that gives Copilot responsible
rules for interaction. These are things like where you should search for information, how you should cite your sources, but also things like style and tone, and how you should act responsibly. This is how we ensure that we're not generating harmful content. All of this, the user prompt, the grounding data, the chat history, and the system prompt, makes up one large modified prompt that we send to the large language model to answer your question.
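As a toy illustration of that assembly, the four ingredients might combine like this; the actual system prompt and formatting are internal to the service, so every detail below is an assumption:

```typescript
// Illustrative only: the four parts listed above, combined into the one
// large modified prompt that goes to the LLM.
interface Turn { role: "user" | "assistant"; text: string; }

function buildModifiedPrompt(
  systemPrompt: string,    // default rules for responsible interaction
  chatHistory: Turn[],     // gives the stateless LLM its "memory"
  groundingData: string[], // security-trimmed results from Microsoft Graph
  userPrompt: string,
): string {
  return [
    `SYSTEM: ${systemPrompt}`,
    ...chatHistory.map(t => `${t.role.toUpperCase()}: ${t.text}`),
    "CONTEXT:",
    ...groundingData,
    `USER: ${userPrompt}`,
  ].join("\n");
}
```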
The great thing about Copilot, though, is that it can go beyond just responding in chat and actually make you more productive by commanding apps, doing things like generating the email for you or creating a slide for you. Dave is going to walk you through that more complex flow, in terms of how we go from just chatting with Copilot to actually doing, to making you more productive with Copilot. (applause) David Conger:
Thanks everyone for being here. I'm David Conger from
the Office AI team, and I'm here to take you
a little bit deeper today on how Copilot works and
can actually command apps. I focus on bringing Copilot
into those applications. For me, one of the
things that's most important is how Copilot can help us take action to complete a task end to end. I want to show you commanding
in action to start out. Now we've all been
in this situation. We've written a long,
extensive document, only to be asked to turn it into a PowerPoint deck and
present it quickly. This is something that Copilot is very apt to help us with. It can help do it for
us and get us started. It processes the
document that we have. It can outline along
the way so we can see what progress it's making on
generating the deck for us. It even can create
speaker notes, so from that rich content, you also have a rich deck
that you're ready to present. In not too long, a whole
presentation full of content is ready for us
to start iterating over, including designs from Designer. I can keep taking the
next steps through Copilot iterating in
natural language, asking it to create an
additional slide for me of something I think
maybe it's missed or needs to be included, and I can even ask it to make revisions to the slides itself, changing colors or changing functionality
within the slides, just using Copilot to interact with the PowerPoint
presentation that it created. One of the biggest questions I get when folks see this is, well, how does Copilot
actually drive the app? Let's demystify that
a little bit on how we actually accomplish
this behind the scenes. There's a few things
that we need. The first is we need the
current state of the document itself or the application
that we're working with. We need the content
that's of course going to get generated and
created by Copilot, and we need a safe
execution flow. I know this is
important to many of you on how we actually
interact with the app to produce
that final content and create that
content in the app. Now when we start
talking about this, a lot of people immediately assume that we likely do this by using a general-purpose language like Office-JS and asking the LLM to create code
for us in that language. But this presents a
series of problems. First, the language
is far too low level. It's very verbose
in what it creates, and it's hard for us to
manipulate and validate. Second, we know that LLMs
are prone to hallucination, and this becomes a problem
when the code seems correct, but quickly can be
found to be wrong. It opens up a lot
of surface area. Office-JS and other
native languages are really powerful and there's some things we don't
want the LLM to be doing within the
application at this point. With that, what we really find is that we need to
separate concerns. The LLM itself is very good
at intent understanding, it's good at solving
certain types of problems, and our API's are very good
at executing within the app. How do we do this? Well, we leverage symbolic
references to entities. We need to make
sure that those are encompassed in what
we're creating. We also need to make sure
that we're not losing track of being able to
create complex flows. We know that a human developer could do this within
a native language, and we need the LLM
to be able to do this, again in a safe way. Finally, we do need to be
able to detect and recover. Because we don't
have an active group of developers developing this, we need to find problems and fix them before we execute the code. Program synthesis is the
method that we use to do this. The LLM is abstracted from the code itself, and then we use transpilation to end up generating the code and executing it on the client. For us, we've created a DSL, a domain-specific language, that we call ODSL, or the Office domain-specific
language to do this. We dynamically construct
prompts and we translate the natural user
intent into a DSL program. It's easily authored by the LLM, it's very clear how
it can create this, and then we can generate
consistent code from that DSL to be executed. Let me briefly walk through the major components that
you see here so you can understand the data flows
a little bit better and how the execution happens
and how we keep things safe. First, we do an entity
classification. We want to understand which types of entities we're going to need
to interact with, either to make changes to or to generate within the
content itself, and we want to determine how relevant the document
context itself is. You saw that in some of
the examples earlier. The slide I created,
for instance, didn't require any
document context, but the later manipulation of a slide that already existed did. As we move into
construction and synthesis, there's a few things to
take a look at here, which is we take basically a five step process through preparing the
prompt for the LLM. The first is the rules themselves that the LLM needs to abide by. The second is that
syntax guide for ODSL, because of course,
we need to teach it what it can and
cannot generate. Now, we want to keep the
prompt as tuned as possible, and so we have a large
prompt library of potential types of DSL
that could be generated. We use that with
the previous step, the analysis that we did to find code samples that might be
most relevant to the LLM. This allows us to keep
the package that we send up very small, and targeted to just the types of actions that we think are most likely
to be needed by the LLM. We, of course, also include the document context and the user query itself.
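A rough sketch of that five-part assembly might look like the following; the selection logic and every name in it are stand-ins, since the actual prompt library is internal:

```typescript
// Hypothetical sketch of the five-step prompt construction for ODSL.
interface DslSample { task: string; program: string; }

// Step 3: pick only the library samples relevant to the classified entities,
// keeping the package we send up small and targeted.
function pickRelevantSamples(library: DslSample[], entities: string[], max = 5): DslSample[] {
  return library
    .filter(s => entities.some(e => s.task.includes(e)))
    .slice(0, max);
}

function buildSynthesisPrompt(
  rules: string,           // 1. rules the LLM must abide by
  odslSyntaxGuide: string, // 2. what it can and cannot generate
  samples: DslSample[],    // 3. the relevant few-shot DSL examples
  documentContext: string, // 4. current state of the document
  userQuery: string,       // 5. the user's request
): string {
  const shots = samples.map(s => `Task: ${s.task}\nProgram:\n${s.program}`).join("\n\n");
  return `${rules}\n\n${odslSyntaxGuide}\n\n${shots}\n\n${documentContext}\n\nTask: ${userQuery}\nProgram:`;
}
```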
On the right, you can see that program synthesis getting generated. You'll notice a few
things within it. First, there's a level of
uniformity in the code. Similar statements,
it's easy to read, it was easier for
the LLM to produce. You can see these
patterns if you look across the different
apps that are getting generated as well to see that it is a very similar
concept across apps. Second is the syntax
is very compact. Quite a lot is happening here in a small amount of code; we've really optimized on both ends for tokens in and tokens out to try to limit the amount that we need to utilize within the system. Finally, we're very
document context aware, as you can see the fields
and properties and data being manipulated into
the program synthesis here. In the final step we
interpret and execute. This is where we
transpile into native APIs like Office-JS and
execute on the client itself. We control permissible
statements in this flow, so we only allow certain types of code to
be generated from the DSL, protecting us from things
like file write actions, which we don't need to do. We also have rigorous syntax checking and validation to make sure the program is correct before we try to execute it. We can even correct buggy code: because of how the DSL works and how it is manipulated, we can make sure that corrections are made on the fly, again before we execute. Very quickly, the DSL is generated and executed on the client, and at this point we have a robust, safe, and verified program to complete the task.
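ODSL itself hasn't been published, but the shape of that interpret-and-execute step might look something like this sketch: an allowlist check on the synthesized program, then transpilation of each statement into a vetted native call. The statement names and the Office-JS-style output are invented for illustration.

```typescript
// Illustrative only: validate a synthesized DSL program against permissible
// statements, then transpile each one into a native (Office-JS-style) call.
type DslStatement = { op: string; args: string[] };

// Note: nothing like file writes or network access is ever permissible.
const PERMISSIBLE_OPS = new Set(["insert_slide", "set_title", "format_shape"]);

function validate(program: DslStatement[]): void {
  for (const stmt of program) {
    if (!PERMISSIBLE_OPS.has(stmt.op)) {
      throw new Error(`Blocked statement: ${stmt.op}`); // detect before executing
    }
  }
}

function transpile(program: DslStatement[]): string[] {
  // Each DSL op maps to a fixed, vetted snippet of native API code.
  return program.map(({ op, args }) => {
    switch (op) {
      case "insert_slide": return `presentation.slides.add(${JSON.stringify(args[0])});`;
      case "set_title":    return `slide.title.setText(${JSON.stringify(args[0])});`;
      case "format_shape": return `shape.fill.setColor(${JSON.stringify(args[0])});`;
      default: throw new Error(`No mapping for ${op}`);
    }
  });
}
```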
I'm going to invite Ben up to the stage. While we took this low-level view into how commanding works, Ben's going to take us through how you can now extend Copilot to your needs. Ben? (applause) Ben Summers:
Thanks, David. Good morning everybody. Yes, at the beginning
of this talk, Kevin said there
were basically two things you needed to understand. One was how it worked
and the other was how you could extend,
customize Copilot. I'm going to close
this session up by talking about extending
and customizing Copilot. Let's start with "why". I think
we're going to go through "why" and then we're going to go through "what" and then we're
going to go through "how". The Copilot for Microsoft
365 has a certain set of, let's say, baked-in skills and baked-in data. It has a set of skills that
it knows that are related to productivity around the emails and PowerPoint presentations
that David just showed us, Outlook, Teams,
all these things that are part of
your productive day, but don't necessarily encompass the entirety of your
productive day. There are going to be other
things that you're going to want to do in
your organization. For example, there's
accounting functions, there's sales functions,
there's legal functions, there's all these other
things that require skill and that could be
incorporated into your Copilot, but they're not necessarily part of the Microsoft 365 suite. The point is you can actually
bring those things in. Similarly, you can
also expand the set of knowledge that
Copilot has access to. We talk about having access
to the grounding data that is in the Microsoft graph or publicly available
data on the Internet. But you guys have all other data sources that could also potentially
be used to further ground what it is that Copilot can understand
about your organization. For example, PDF files or
items from say, Azure DevOps. There's all these different
things that you can bring in to expand that
base of knowledge that Copilot can
look at in order to provide you with richer,
more relevant responses. Then I think the other
point that I want to make, and Mary has talked about
this at some length, is that you can do this and leverage all
the work that we've already done to make Copilot
work the way that it works. All the work we've done
with the responsible AI and governance and our
boundaries of security, our UX, all the things
that we've invested in. Because it's a platform, you can leverage that
and get more value. Which is again, something
that Kevin alluded to from the extension
and expansion of the Copilot from Microsoft
365. That's the "why". Now let's take a few minutes to talk about the "what". There are basically two ways. If you want to extend skills, if you want to, say, add legal skills to it or add sales skills to it, you build a plugin, and a plugin is a really simple thing. It's essentially an API plus a manifest: the API is the data that it will take in at run time, and the manifest describes the skills that you're going to enable in Copilot that will interact with the LLM.
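To give a feel for what "API plus manifest" means, here's an illustrative description of a plugin as a TypeScript object. The field names approximate the idea, not the actual manifest schema:

```typescript
// Hypothetical plugin: an API Copilot can call at run time, plus a manifest
// describing the skill. Field names are illustrative, not the real schema.
const ticketPlugin = {
  name: "TicketTracker",
  // The description is what the orchestrator matches against its
  // interpretation of the user's intent.
  description: "Looks up work items assigned to the current user",
  api: {
    url: "https://tickets.contoso.example/api/assigned", // hypothetical endpoint
    auth: "oauth", // the user's own identity flows through (SSO)
  },
  commands: [
    {
      id: "listAssignedWork",
      description: "Returns open items assigned to the signed-in user",
      parameters: [{ name: "status", type: "string", optional: true }],
    },
  ],
};
```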
The other way to do this is to expand the baseline of knowledge by building a Microsoft
graph connector. Mary told you about the graph. Well, this is a way
of indexing data from third party sources and bringing it directly
into the graph, into that 365 service
boundary where it is then managed as part of your Microsoft 365 tenant
with our commitments, and so that's actually a very
powerful way to do things. I want to pause here because I think it's important
for you guys to understand a little bit more about the distinctions
between these two things. When I'm talking
about a plugin, I'm talking about something that is deployed to individual users, and that is generally great for run-time interactions with, often, structured data. When I talk about Microsoft
graph connectors, I'm talking about
something that I think the word I want
to use is persistent. That's probably not
the right word, but I want to say
it's persistent. It is something
that is deployed at the tenant level by
your administrator and is bringing in data that
becomes widely available to everybody once
your administrator has actually enabled it. There's this idea of runtime, individual user
interactions versus persistent tenant
wide interactions with data versus skills. Think about that
and we'll talk a little bit more I
guess as we go along, or in Q&A if you have
questions about that. Skills and knowledge, plugins and Microsoft
Graph connectors. We're going to walk through how these things work a little bit. Let's talk about plugins first. Obviously, the first
thing you do when you interact with a plugin
is you're going to ask a question in the UX that's
going to invoke a plugin. Let's say, for example, that we're asking about
something that requires a skill that Microsoft
365 doesn't have. Let's say it's a ticketing
system or something like that. Do I have any work
assigned to me? Well, the Copilot service is going to go out there and
look and it's going to say, well, I don't necessarily
know this skill, but do I have a plugin
that does this? It's going to go
look at that catalog of plugins and you
can see I have, like, Polly for polling and Adobe for documents and Smartsheet
for documents. There's Priority Matrix as well. That catalog is going to return available plugins to the
Copilot orchestration system. It's going to look at
those and say which one of those actually can do what the user has asked me to do based on my
interpretation of intent. It's going to look
at this and say, Priority Matrix
is the one that I want and it's going to generate a plan
for responding to me, it's going to pass it through the process that you saw Mary outline. It's going to execute
that and return me a natural language
output that tells me what it is that I have in Priority Matrix in a
natural language way. That's the basic flow, obviously highly simplified and stylized, but that's what you guys need to know.
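A toy version of that selection-and-dispatch step, with every name hypothetical:

```typescript
// Sketch of the plugin flow just described: match intent against the
// catalog, run the chosen plugin, then have the LLM phrase the structured
// result as natural language. All names here are invented.
interface Plugin {
  name: string;
  description: string;
  run(userId: string): Promise<unknown>; // calls the plugin's API at run time
}

async function answerWithPlugins(
  userId: string,
  question: string, // "Do I have any work assigned to me?"
  catalog: Plugin[],
  pickByIntent: (q: string, c: Plugin[]) => Plugin | undefined, // LLM-backed
  phrase: (q: string, data: unknown) => Promise<string>,        // LLM-backed
): Promise<string> {
  const plugin = pickByIntent(question, catalog); // e.g. Priority Matrix
  if (!plugin) return "I don't have a skill for that yet.";
  const data = await plugin.run(userId); // structured data, user's own auth
  return phrase(question, data);         // natural language response
}
```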
That's plugins, and I saw some phones go up. I'm going to just give you a second to take a picture of that, and then pause, and I'm going to click. Now let's talk about
Microsoft Graph Connectors. Actually, let me back up
for one second and say plugins are actually
now in public preview. If your organization is interested in starting
to deploy plugins, it's an opt-in process. There is a link that we can
share with you that will allow you to actually opt into that process and start deploying plugins in your organization. Having said that, let me go back to Microsoft Graph connectors. These are actually
generally available and we've actually
had graph connectors out in the world for I
think about two years now. The idea of a graph
connector is, as Mary said, and as I've alluded to as well, you're indexing data
and bringing it into your Microsoft 365 tenant. You're essentially
augmenting the graph. You're doing this at
a tenant wide level. Literally, your
administrator goes into the, I think it's called the
Security Intelligence Blade in the Microsoft 365 Admin console and can turn these things on. There are a bunch of
prebuilt connectors, but you can also roll your own if that's
what you want to do. When you light a connector
up in your tenant, you're not just lighting up
the connector for Copilot, it's actually powering
a lot of other rich experiences, like Search and the Microsoft 365 app. It's giving more context and more data to Viva and to Context IQ. Even if you are not necessarily already rolling out all your
licenses for Copilot users, you do get a ton of benefit from rolling out graph connectors
even before that. Then once Copilot comes in, bang, you get that
extra value as well. I just want to emphasize again, this is a tenant wide
deployment when you do this. Everybody gets the benefit of a connector when
it is connected. The other thing I want to
make sure you understand is that the data that you bring in, the access lists, the labeling, all those things are preserved
when you bring them in. It's not like you're
just going to take a bunch of data
that used to have permissions and it all
gets stripped out and somehow just dumped in
for everybody to find. Those permissions, those capabilities are preserved as you bring that data in with your connector.
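The heart of a custom connector is pushing items into your tenant with their ACLs attached; that acl block is why permissions survive the trip. A simplified sketch against the Microsoft Graph external connectors endpoint, with token acquisition and schema registration omitted and the IDs as placeholders:

```typescript
// Push one item into an existing Microsoft Graph connection. Access is then
// evaluated per user at query time, in Search and in Copilot alike.
async function pushExternalItem(token: string, connectionId: string, itemId: string) {
  const url = `https://graph.microsoft.com/v1.0/external/connections/${connectionId}/items/${itemId}`;
  const res = await fetch(url, {
    method: "PUT",
    headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      acl: [
        // Only this group can see the item; permissions are not stripped out.
        { type: "group", value: "<azure-ad-group-id>", accessType: "grant" },
      ],
      properties: { title: "Q3 design spec" }, // must match the registered schema
      content: { value: "Full text to index...", type: "text" },
    }),
  });
  if (!res.ok) throw new Error(`Ingestion failed: ${res.status}`);
}
```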
Let me wrap up here and talk about the "how". This is the tooling slide. This is a pretty oversimplified view of things, but I'm going to try to explain
it as quickly as I can. If you want to build
plugins to put skills into Copilot
that it doesn't have, you can do it with Pro Code
and with Low Code tools. The Microsoft Copilot Studio
that you've heard a lot about this week is a
great place to go get started building
plugins that, as I said, will bring data in that individual users can use
and can use with the LLM. Absolutely, the right
place to start. You can also build plugins with our professional coding tools, Visual Studio, and particularly with the Teams Toolkit extension, which is where you can start building plugins. Microsoft Teams message extensions, for example, are the canonical example that I'd give you. Plugins can be built with low-code or pro-code tools. I recommend that you start with the low-code tools. Then, if you need to customize, work in the pro-code environment. Microsoft Graph connectors, at least at this point in time, can only be built
with pro-code tools. You will go to Visual Studio, you will get the Teams Toolkit, you can build these things. I'll pause and remind
you that there are a lot of prebuilt connectors, things that you don't
necessarily have to worry about, so you can look
at those as well. I'm going to leave
it at that and say I'm going to call my colleagues back up to the stage
for a little Q&A. There's a slide here that you guys can take a picture of, with a bunch of calls to action and resources. While we're at it, let's give the sign language interpreters a hand for everything
they've done at this event. It's amazing to watch them. (applause) But Mary, Kevin, David, let's come back up on stage and open up the
microphones. Thank you. (applause) Kevin Sherman:
Thank you everybody. While people are hopefully walking up to the
microphones with questions, I'll prime you with a question
that I had this morning. We were thinking
about the session, I'm like what are
we going to do if no one asks questions? What questions should
we start with? I decided to go to Copilot
and I said Copilot. Copilot says, what
can I help you with? I said, what are some of
the most common questions we're getting related
to how Copilot works? It's a long answer with
four different citations. Some of the most common
questions we're getting relate to data processing, security, and privacy. For example, users want to know how Copilot interacts with their organizational data,
"blah blah blah". It also says,
additionally, users are interested in learning what resources and
best practices are available for using Copilot. That was one question
we didn't answer, so I said, great question. What's the answer to that? It said, Microsoft's
perspective on the adoption of Copilot
is that it is an ongoing process, not just a single moment. To ensure a successful adoption, Microsoft has created resources such as the Microsoft 365 Copilot Adoption Guide workbook, which provides a high-level overview of the adoption framework. It also talks about
the prompt tool kit. It also talks about Copilot lab with citations for
each of those as well. Anyway, we've got people at the mics now. I just wanted to also give
you a little bit about adoption and show you some
of the power of Copilot in action. Please, first question. Speaker 2:
(mic cuts out) PowerPoint decks and Word docs and stuff, is that going to be
available in plugins? Can I write a plugin
that modifies a PowerPoint deck? David Conger:
It's a great question. I don't have anything to talk about in that
area at the moment, but it is an interesting place that we'll want to
follow up on because I'm guessing a lot of
people will want to extend how the apps are
actually manipulated. Speaker 3:
Hi, the plugins today mostly focused on the chat experience. What about the rest
of the integration that we have in the Copilot? Do you foresee that the plugins
will come from the apps? If I'm in Outlook and I'm
writing an email today, there's no plugin
(mic cuts out) very limited to the functionality
Copilot (mic cuts out). Ben Summers:
I think the answer is actually the same as the one that David gave. Which is to say we really aren't quite ready
to talk about that yet, but I certainly know
why you want it. (laughter) Speaker 4:
Got a question around general availability. Microsoft seemed
to have redefined that term, given that it's only available to organizations with 300 seats. Is there any timeline to open that up to medium-sized businesses, some 100 seats? Kevin Sherman:
Yeah, and Copilot didn't tell me someone would ask that, but I knew somebody would. Most recent guidance
on that is online. We have launched our
early access program for small and medium
sized businesses, which was an invitation
only program in much the same way as the general enterprise
early access program was. We will have more to share soon. I've got nothing more to share on that today unfortunately. I will say there's a version of that question which is also my organization has much
more than 300 seats, but I only want to try
it with a small handful. That one is a bit of
a different answer. I think one of the big things that we looked at when making that decision is we looked at our early preview customers
and we talked to them. Especially leadership, leadership and people
executing the program. One of the things
that we learned is the ability to have
a forcing function, to get more broad and
diversified usage across their organizations
did two things. One is it gave them
a better sense of where the value
could come from. It wasn't just a small pocket
that used it and said, it's not really
related to my work. It gives a more
diversified sense of where value could come from. The second piece is it
got people talking. We've noticed this inside
of Microsoft as well. The more people are
using it and the more those users interact
with other people, we see a scaled value of what people are getting from it. Because you learn from others, you're inspired by others. Being able to start that flywheel, or kickstart that flywheel, has been massively important for
early preview customers. Which is another reason
that we wanted to encourage that higher minimum.
Let's go over here. Speaker 5:
Good job. Thank you for the insights. It's very helpful. When it sees the enterprise's data, the LLM is not getting trained. I'm sure it'll be there in the documentation as well, but for audit purposes and for security reasons, to prove it, is there any way to see proof that it is
not getting trained? Mary David Pasch:
That's a good question. Today what we have, is we're persisting
the session history. When you interact with Copilot, you ask a question
and Copilot gives a response that has references. All of that chat history
now gets persisted so that you as the admin could then do things like e-discovery
on top of that. You can see that type of thing. Now I think some people are
a little bit worried that the large language model will somehow remember or learn
and so they're like, give me evidence that
it's not learning. The large language
models are stateless. Let me tell you, I wish
it was that easy to just train a large
language model and say, now this is your
large language model. But it really takes a lot of
effort to do that training. It doesn't happen automatically. For the large language
models, they're stateless. The way that you can see
what is being fed to the large language models is through the chat history and some of those
particular features. Kevin Sherman:
Back over here. Speaker 6:
My question was about whether it's possible, when we saw the PowerPoint presentation, to instruct it to follow the company's guidelines and branding requirements, and is that also possible in Azure and things like that, where it's released? David Conger:
We do follow the capabilities through Designer, allowing for brand toolkits and other materials to be included. We'll definitely
continue to look and understand what
customers want around customization and being
able to pull that data in and have Copilot
take account for those. Speaker 6:
Doing it right now would be really complicated at
the current stage. David Conger:
Right now, it's whatever Designer is able to handle with regards to the style of the
presentation itself. You can help direct
Copilot with your intent of what you want things to
look like and how you want them to look to
give it that guide. But we'll continue tooling on it to make sure that they do that. Kevin Sherman:
On the first day, we announced that corporate-approved images will soon be part of the package for Copilot in PowerPoint as well. I can ask Copilot in PowerPoint to insert an image that's part
of my corporate library. It can access that as
well. Go back over here. Speaker 7:
Hi, my question was around Copilot Studio. If we wanted to
use Copilot Studio to expand our connections
and so forth, what type of licensing
requirements would that need: one, to build something in Copilot Studio to connect to external data sources, and two, for the users to make use of that? Kevin Sherman:
Copilot Studio will be included with Copilot for Microsoft 365. That was also part of that initial announcement on the first day. I would say, look back to that. Then we've got
online documentation which is probably not
in one of those links, but we already do have some online
documentation to give more details behind
that as well. Speaker 7:
Thanks. Speaker 8:
Hello. She said speak loud. Two questions here. I submitted my question
here via this QR code. There's a handful of
other questions there. I'm just not sure whether we'll get notified after the session with
respect to responses? Kevin Sherman:
We will look into that. Ben Summers:
There should be people working the chat. Speaker 8:
Thank you. Second question: my understanding is that the LLMs won't be trained using our data. Will the Copilot service use our data for its own training? Is there an opt-out process, especially in the GCC High, DoD, or Azure Gov instance when it gets there? Will there be an
opt-out process? Kevin Sherman:
It's a good question. I am not going to speak to that, not because I want
to be evasive. I want to make sure that we
get the precise right answer. I don't have that answer
with me right now. Speaker 8:
Thank you. Kevin Sherman:
I apologize. You've put that in the chat? Speaker 8:
It's in the chat. Mary David Pasch:
We'll follow up on that one. Ben Summers:
Good. Speaker 8:
Thank you. Speaker 9:
Just a real quick question. One of the requirements
from what I understand with the
Microsoft apps, it requires you to be
basically on the Current Channel. Are there any plans to extend that to Semi-Annual? Kevin Sherman:
We've announced, I believe, that they've extended it to Monthly. Semi-Annual I don't have details on. We're getting closer. Last question. We've got, it says, a minute left. A quick one and we'll
get you a quick answer. Speaker 10:
Quick question. It's a follow-up question to Copilot Studio. According to the online documentation, you just rebranded Power Virtual Agents to Copilot Studio. According to the online documentation, you need a SKU for Power Virtual Agents. At the same time you say, well, it's included in
the Copilot license. Kevin Sherman:
That is feedback that I will take and we'll
look into that. We'll update. I apologize. Speaker 11:
Hello, question about the plugins and extensibility. How does Copilot maintain
the security between other systems that may
use different users? Ben Summers:
It's effectively, in many cases, calling APIs. For example, if you install a plugin for ServiceNow, it's effectively calling the exact same APIs that you've already approved, because you own ServiceNow and you have the same security policies applied. We actually vet those apps
when you publish them. There's a permission
and validation process where we go through a whole store validation process when the app is published to make sure that it not only complies with our
existing standards for, say, a Teams plugin, but also all the other security and compliance policies that we've established, largely in line, effectively, with OpenAI. That same set of parameters is applied to the evaluation of every plugin that's submitted. Speaker 11:
But how does it know who I am now in this scenario? What permissions I
have in this now? How permission is-- Ben Summers: --Azure Active Directory. AAD. Speaker 11: So,
it will respect the single sign-on-based applications? Ben Summers:
Basically, your identity is required in order for you to get to it, so there is a common understanding that it will be able to inherit SSO slash OAuth for that. Speaker 11:
Is there licensing for these types of connectors? Ben Summers:
No, they're third party. The licensing for
connectors will generally be established
by the third party. The license would come, in the example I just gave, from ServiceNow, or from whoever it is that
published that connector. Obviously, if it's internal, you don't have a
licensing issue. Kevin Sherman:
--Sorry. We are getting told that we are out of time. We'll still be around for the follow-up. Thank you everybody so
much. Really appreciate it. Mary David Pasch:
Thank you so much. Ben Summers:
Safe travels home. (applause)