>> Hi everyone. Thank you for joining. My name is Raman Kalyan. I'm on the Microsoft 365 Product Marketing Team, and I'm excited to be here with my colleague and friend Talhah Mir, who is the Principal Program Manager on our Insider Risk Management Engineering Team. We're going to talk to you today about Insider Risk Management from Microsoft 365, a solution that's helping organizations worldwide intelligently investigate and take action on insider risks. Talhah, I remember a couple of years ago when we first met, we were having a conversation with your old boss, Brett Arsenault, about what kept him up at night. I was surprised to hear that it was insiders at Microsoft. Because from my perspective
going into that conversation, I was thinking that it
was external adversaries, whether it be nation-state
hacking or malware, phishing or something like that. But he said that really,
it was insiders, insiders like these people here, not working for Microsoft but
working for another organization, a large car manufacturer, and they stole information by
just e-mailing it to themselves: intellectual property with the subject line, "you sly dog, you..." Hence why it's called the Sly Dog Gang. And this is an interesting case in that we've got both a malicious risk, where they took intellectual property and then left the organization, and an inadvertent risk, in that the only way the car manufacturer found out about this was that the information was inadvertently leaked by the competitor they went to. >> That's right. And if you really
step back and look at this, this is really one of the most common insider risks
that companies face today, which is departing
employee data theft. >> Yeah. >> So when people
leave an organization, they take content with them either intentionally, with malicious objectives in mind, or for inadvertent, think of them as benign, reasons. You might want to take some content with you because you have a sentimental attachment to a project that you worked on. We've even seen cases from customers where people wanted to take content with them because they felt such loyalty towards the company that, if they were reached out to after the fact, they still wanted to be able to provide support and have all the documents with them to do it. This case, on the other hand, was an explicit example of somebody with intent to cause harm: steal intellectual property, go to a competitor, and get a leg up. >> Yeah, but there are tons of other
types of risks as well. There's a broad range of risks and violations from insiders. You've got IP theft, and you've also got leaks of sensitive data, inadvertent or malicious. You've got corporate sabotage and corporate espionage. You've also got workplace harassment types of violations, people inappropriately sharing something in communications that they shouldn't be. So as we think about that, what we've heard over
the last two years since we've been on
this journey together, is that 90 percent of organizations around the world feel
vulnerable to insider threats. And they're concerned both about the malicious threats as well
as the inadvertent threats. However, as we look at the solutions that organizations are using to try to combat insider threats, most often they're using solutions that are not purpose-built for insider threat detection and remediation. They're using a hodgepodge of solutions, such as user behavior analytics combined with a user activity monitoring agent that they need to actually capture the signals, or data loss prevention. It results in this fractured approach, and I know that you explored a number of these options at Microsoft before we built our own solution, which Insider Risk Management was born from. What were some of the bigger challenges that you saw here? >> Yeah. I mean, that's
a great question. That question comes up from
customers all the time as well. Before we dig into that, I think there are two things that are important to keep in mind when you think about insider risk management. To have an effective program to manage insider risks, you need a diverse set of data that you can curate and reason over. So that's number one. Number two, you have to acknowledge the fact that, unlike traditional security, to manage the risks of insider threats you have to be able to collaborate effectively beyond security: with your Human Resources department, with your Legal department, with your Compliance department, with your Privacy department. So when you take that into account, and you step back and look at the technology stack that most organizations have, you quickly find out that, to your point, you can't really just hodgepodge some things together, glue them together, and expect it to be an effective insider risk management solution. >> Yeah. >> So as an example, let's
dig into this a little bit. Traditionally, you see companies that have a piece of technology called a UEBA, or a SIEM, or a security analytics platform. The idea is, you have this magic black box, you're supposed to go pipe all the signals into it, do some sort of orchestration, and it gives you these alerts. That's expensive to do. It's not a purpose-built solution. You have to write all these custom rules on top of it. It's very security-driven, so you might not have a lot of the context that you need to effectively manage the risk with the information that's available in those systems. So that's one. The other end of the problem that you see is agent-based approaches: UAM-type solutions, user activity monitoring, where you might take an agent, deploy it on your important assets, and collect these signals. Or DLP, where you have agents that you deploy to try to manage either the auditing of data loss or the prevention of data loss. >> Right. >> Those are not
going to be effective because they're very limited to
what you see on the endpoint. They're costly to do, and they're very focused
on looking at data loss. Whereas, as you and I just talked about, insider risk extends beyond data loss into sabotage, physical concerns, fraud, and other areas. >> To your point, context is key. So I could certainly
write a number of rules for data loss prevention to say, "Hey, if somebody is trying to take sensitive data from my organization and share it over e-mail, flag it." But what if they actually take sensitive data, put it on their desktop, do some renaming to try to obfuscate it, and then share it via some other channel? How do you write a rule for that? If you try to write rules for all of that, you end up with multiple rules. Now you're trying to manage multiple rules and multiple signals, and you end up with not only alert fatigue, but a signal-to-noise ratio that results in poor identification of risks.
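To make that concrete, here is a minimal sketch, in Python, of the alternative being described: rather than firing a separate rule per exfiltration path, many weak signals are aggregated into one per-user risk score. The signal names, weights, and threshold are illustrative assumptions, not Insider Risk Management's actual detectors or scoring model.

```python
from collections import defaultdict

# Illustrative only: signal names and weights are assumptions for this
# sketch, not the product's actual detectors or scoring model.
SIGNAL_WEIGHTS = {
    "email_sensitive_attachment": 3.0,
    "file_renamed_before_copy": 2.0,   # possible obfuscation
    "copy_to_usb": 4.0,
    "upload_to_personal_cloud": 4.0,
    "resignation_notice": 5.0,         # HR-sourced trigger
}

def score_users(events):
    """Aggregate weak per-event signals into one per-user risk score,
    instead of raising a separate alert for every event."""
    scores = defaultdict(float)
    for user, signal in events:
        scores[user] += SIGNAL_WEIGHTS.get(signal, 0.0)
    return scores

events = [
    ("user_a", "resignation_notice"),
    ("user_a", "file_renamed_before_copy"),
    ("user_a", "copy_to_usb"),
    ("user_b", "email_sensitive_attachment"),
]

# user_a surfaces as one correlated, high-score case (11.0), while
# user_b's single event (3.0) stays below an alerting threshold of, say, 8.
for user, score in sorted(score_users(events).items(), key=lambda kv: -kv[1]):
    print(user, score)
```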
>> Yeah, and I think that's worth drilling into, Raman, for a little bit. This context thing is absolutely critical. When you think about context, you need context on the individual's actions, the person that you are potentially looking to build a risk profile around. And when you look at data loss prevention, the tools you have are really focused on the transaction, what actually took place, versus bubbling up and looking at the actual individual's risk, which is what you really have to focus in on to better mitigate the risk that you have. I think one of the things
that I learned from you is that designing an intelligent insider risk solution requires a lot of effort, not only on the technology side; it also requires a lot of collaboration across different stakeholders within the organization. >> That's right. We talked
about this briefly a while ago, but security is important, right? >> Yeah. >> The curation of signals, the identification of risks. But then you have to partner with HR. Why? Because HR is the one that really defines the culture that you have in place, and what is and is not an acceptable extent for a program like this to go to. Legal is important because, with the kinds of individuals that you're looking at, you're bound to them through employment law. So the kinds of detection that you might put in place for your employees in the US could differ dramatically from the kinds of detection you do for your employees in Germany, versus Japan, versus other parts of the world. And last, certainly not least, compliance and privacy. How do you take those into account as you're building out your program so that you're not compromising regulatory requirements, or other policies that you need to abide by in your organization? >> Yeah, absolutely. So before we get into the demo, and I'm excited to see a lot of the new things that you have on deck here, a quick recap: we launched Insider Risk Management for Microsoft 365 in February of this year. It's been a great journey, and we're excited about it. Customers certainly tell us that some of the things
that they're excited about are the built-in machine-learning templates that we have, which require no agents to be deployed and no scripting to be created to actually get the signals and start understanding and identifying hidden risk patterns within your organization. We just talked about privacy; privacy is built into the solution, starting with anonymization. These are employees that you're dealing with. You want to make sure that you are respecting their privacy rights, as well as conforming and being compliant with regulatory requirements and employment laws in your region. Then finally, to your point, we've got stakeholders across the organization, and having end-to-end investigation workflows is important to address these risks in a well-coordinated fashion. Today we'd love for you to take us through a demo. I know that we have some new investigation capabilities, some new templates, some new insights. It's going to be awesome. >> Yeah, let's do
it, Raman. So, Raman, this is our Insider Risk Management dashboard. Super excited, like I said, to get into some of the new features that we have. Let's do a quick overview of the whole solution. This here is our home screen. You can see, for example, we talked about anonymization and privacy being key, so all the usernames here are anonymized. This has been there since GA. It's important to call out the feedback that we're
getting from customers: this isn't just about privacy, it's also about bias control. Because again, you're talking about internal employees and contractors. In this case, Raman, if I was working at Contoso Electronics and I saw an alert pop up for you, and I knew it was you, you know I'm not even going to do the due diligence. I'm going to escalate that immediately.
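As an aside, here is a minimal sketch, in Python, of the kind of pseudonymization being described: each real identity maps to a stable alias that investigators see instead of a name. The HMAC construction, key handling, and alias format are assumptions for the illustration, not the product's actual anonymization mechanism.

```python
import hmac
import hashlib

# Illustrative only: not how Insider Risk Management anonymizes users
# internally; this just demonstrates a stable, non-identifying alias.
SECRET_KEY = b"tenant-scoped-secret"  # hypothetical; store securely in practice

def pseudonym(user_email):
    """Map a real identity to a stable display alias via a keyed hash."""
    digest = hmac.new(SECRET_KEY, user_email.lower().encode(), hashlib.sha256)
    return "AnonIS-" + digest.hexdigest()[:6].upper()

# The same user always gets the same alias, so an analyst can follow a case
# over time without knowing, or being biased by, who the user actually is.
print(pseudonym("talhah@contoso.com"))
print(pseudonym("talhah@contoso.com"))  # identical alias both times
```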
But let's look at a day in the life of setting up one of these policies, and what that looks like. Because as you and I talked about earlier today, ease of use is absolutely critical. We've heard this from our customers over and over: it's complicated to set up these UEBA, UAM, and SIEM-type solutions and get the right orchestration happening and the signals getting fed in. I'm going to show you how easy it is to get things started. >> Awesome. >> So you go to the Policies page. I have some policies already set up, but I'm going to show you how to create a brand-new policy. So we go to "Create Policy." And immediately what you notice is a bunch of policy
templates that are available. Remember, early on we talked about how insider risk is really this ocean of different types of risks that you can come across. And one of the biggest mistakes that people can make is to try to boil the ocean; it's not going to work. You have to compartmentalize, prioritize, and then tackle the risks that you care about. There's data theft, data leaks, sabotage-type issues, security policy violations. So I am going to get started. >> I see here you've got some other
ones that we just introduced, like, for example, disgruntled employees and priority users. >> Yeah. >> It's just a way for you to get more granular. >> That's exactly right. So even data leak can have different flavors to it. Insider risk is a game of indicators: the more you have, the more proactive you can be in identifying the risks. One of the leading indicators of insider risk is actually poor sentiment. If you have employees, contractors, or business guests that are disgruntled, that could be a leading indicator that they're about to potentially cause some harm. So super excited about this. What we launched just
about two months back is this idea of capturing disgruntlement triggers. With our HR connector, you can now feed in data like somebody getting a poor performance review, getting a demotion, or being put on some sort of performance improvement plan. Research shows over and over again that those are leading indicators of somebody becoming disgruntled. So you can feed that in, if you choose, and use it to build out data leak policies that are very specific to looking at disgruntled users.
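For a sense of what feeding in that HR data might look like, here is a minimal sketch, in Python, that writes a CSV of HR events for upload. The column names and event types are illustrative assumptions, not the documented schema of the Microsoft 365 HR connector.

```python
import csv

# Illustrative only: columns and event types are assumed for this sketch,
# not the HR connector's actual schema.
hr_events = [
    {"EmailAddress": "user1@contoso.com", "EventType": "PerformanceReview",
     "Detail": "Below expectations", "EffectiveDate": "2020-09-01"},
    {"EmailAddress": "user2@contoso.com", "EventType": "Demotion",
     "Detail": "Level change", "EffectiveDate": "2020-09-10"},
    {"EmailAddress": "user3@contoso.com", "EventType": "PerformanceImprovementPlan",
     "Detail": "90-day plan", "EffectiveDate": "2020-09-15"},
]

with open("hr_events.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(hr_events[0]))
    writer.writeheader()          # one header row, then one row per HR event
    writer.writerows(hr_events)   # a file like this would be fed to the connector
```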
Same with priority users. You heard about this from customers as well: not all of their user base is exactly the same. Typically it's cut up into two categories: you can add priority users that are high value or high risk. Think of HR watch-lists, for example, or high-value users that have the ability to cause a lot of damage because they hold a lot of the crown jewels in your organization. Maybe you want stricter policies for those individuals as opposed to the rest of your population. Now you can get into that fine-grained policy setup, all in an effort to help you produce more signal and less noise. >> Nice. >> So we'll pick "Data Leak by Disgruntled Employees," as we
were just talking about. I'll use this template for a pilot test and click Next. The first thing you get to do is define the users or the groups that you want to target. Now, of course, you can choose a selection of groups or users from your Azure Active Directory, or, in this case, just check this box so that everybody comes into scope automatically. >> Yeah. >> Next is the content
you want to prioritize. This gives you the ability to say that certain content, defined by location, such as SharePoint sites, by the sensitive information types it might contain, or by the sensitivity labels it might have, is considered higher priority for that specific policy. For example, maybe there's a policy for your research and development group. They're working on some super-secret project, and all the files for that project have certain labels on them. So now you can mark that and define it in the policy. What the system does is look for any time a user in scope touches content with that type of identifier, and then it automatically risk-ranks that activity higher, because you told it that that is priority content.
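Here is a minimal sketch, in Python, of the kind of risk boosting being described: an event's score is scaled up when it touches content in a priority location or with a priority label. The site, label, and boost factor are illustrative assumptions, not the product's actual logic.

```python
# Illustrative only: the priority definitions and the boost factor are
# assumptions for this sketch, not the actual scoring logic.
PRIORITY_SITES = {"https://contoso.sharepoint.com/sites/project-x"}
PRIORITY_LABELS = {"Highly Confidential"}
BOOST = 2.0  # priority content counts twice as much

def risk_score(event, base_score):
    """Scale an event's base score up when it touches priority content."""
    touches_priority = (
        event.get("site") in PRIORITY_SITES
        or event.get("label") in PRIORITY_LABELS
    )
    return base_score * BOOST if touches_priority else base_score

print(risk_score({"site": "https://contoso.sharepoint.com/sites/project-x"}, 3.0))  # 6.0
print(risk_score({"label": "General"}, 3.0))                                        # 3.0
```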
Now let's look at the indicators. As I said, insider risk management is a game of indicators. One of the hardest things for our customers to do is to set up those scripts, set up those data feeds, pull the data in, normalize it, all that stuff. That's a significant overload for your IT department. >> Yeah. >> With the Insider Risk Management solution, it's literally as easy as the check of a box; that's it. What you see at the top are all these Office indicators that we can pull in. If you check off downloading content from SharePoint, then we can go into your Microsoft 365 tenant, go into your audit log, and then pull in that content and orchestrate over it. It is important for me
to stress that it has to be a checkbox, because you're always in control. You get to choose which indicators you want to use, ones that align with your risk appetite and your cultural considerations, to build that holistic insider risk management program. The system will work regardless of how many or how few indicators you let it choose, because it's built on a progressive detector. The more information it has, obviously, the richer the context it can provide, but it can work off a few indicators as well. It's not going to be a problem.
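A minimal sketch, in Python, of what a progressive detector could mean in practice: scores are computed only over the indicators an admin has enabled, so detection still functions with one indicator or with ten. The indicator names and the averaging scheme are assumptions for the illustration, not the actual model.

```python
# Illustrative only: a detector that degrades gracefully by scoring over
# whatever subset of indicators is enabled, rather than requiring all of them.
ALL_INDICATORS = ["sharepoint_download", "usb_copy", "printing", "badge_anomaly"]

def user_risk(enabled, observations):
    """Average observed activity over the enabled indicators only, so fewer
    indicators narrow the context but never break the detection."""
    if not enabled:
        return 0.0
    return sum(observations.get(i, 0.0) for i in enabled) / len(enabled)

obs = {"sharepoint_download": 0.9, "usb_copy": 0.8}
print(user_risk(["sharepoint_download"], obs))  # works off a single indicator
print(user_risk(ALL_INDICATORS, obs))           # richer context with more enabled
```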
But the thing I'm really excited about is the device indicators. This is where you traditionally have to worry about taking agents, deploying them on your machines, and then getting all these signals from them. With the breadth and depth of Microsoft 365, it's quite literally the check of a box. If you say, "I want to look at people printing documents or copying files to USB," you check that off and everything happens for you behind the scenes. We'll get everything orchestrated. The pipe is all there. The information comes into your tenant from your endpoints, and we can orchestrate on that. >> That's awesome. So on Windows 10, all I have to do is check this box. There's no endpoint agent to deploy, which also means none of the performance degradation that comes with deploying all these agents. You can just pick the signals that you want. >> That's exactly right. Not only is the performance degradation such a huge thing for our customers
and their IT departments, but there's another benefit as well, which is that now you have reduced the attack surface. As you know, the more agents you deploy on the endpoint, the more you're inadvertently increasing the attack surface, because when vulnerabilities are found in those agents, that in turn compromises your environment. So not only is it great for the overall security posture of your environment, it's great for performance management, and it's an amazing thing from an overall IT cost-of-management point of view, because there's nothing new to manage. >> That's awesome. I
see that we're going beyond Office 365 and Windows 10 endpoints to now introduce physical access indicators. >> Yeah, super excited about this. This is something we are announcing right now. This is how we're transcending beyond the digital footprint into the physical footprint as well. With the physical badging connector that we have available today, regardless of what badging system you're using, if you choose to feed that data in, we can also orchestrate over it and try to identify signals such as somebody trying to access your physical building after their digital accounts have been shut off, or making failed attempts to access sensitive physical assets, which could be an indicator of potential sabotage that's about to happen.
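To illustrate the kind of correlation being described, here is a minimal sketch, in Python, that flags badge swipes occurring after a user's accounts were disabled. The record layouts are assumptions for the example, not the badging connector's actual schema.

```python
from datetime import datetime

# Illustrative only: record shapes are assumed, not the connector's schema.
account_disabled_at = {"user1@contoso.com": datetime(2020, 9, 1, 17, 0)}

badge_events = [
    {"user": "user1@contoso.com", "time": datetime(2020, 9, 2, 23, 40), "granted": True},
    {"user": "user2@contoso.com", "time": datetime(2020, 9, 2, 9, 5), "granted": False},
]

for event in badge_events:
    disabled = account_disabled_at.get(event["user"])
    if disabled and event["time"] > disabled:
        # Physical access attempted after digital access was cut off:
        # exactly the kind of signal worth risk-ranking highly.
        print("ALERT: badge activity after account disable:", event["user"])
```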
Last thing I'll call out here before we move on: at GA, we shipped with out-of-the-box thresholds that Microsoft recommends. But now you have the ability, with a single slide of a toggle, to customize them to your heart's content. Different organizations have different risk appetites, and as I mentioned before, now you can align that risk appetite to the kinds of detectors that are built. You can define the threshold that has to be met for something to be considered a low, medium, or high impact to the risk score.
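A minimal sketch, in Python, of threshold customization as described: an observed activity count is mapped to the severity bucket it falls into. The cut-off values are illustrative assumptions, not Microsoft's recommended defaults.

```python
# Illustrative only: cut-offs are made up, not the recommended defaults.
# Example activity: files copied to a USB device in a single day.
THRESHOLDS = {"low": 5, "medium": 20, "high": 50}

def severity(event_count):
    """Map an activity count to the severity bucket an admin configured."""
    if event_count >= THRESHOLDS["high"]:
        return "high"
    if event_count >= THRESHOLDS["medium"]:
        return "medium"
    if event_count >= THRESHOLDS["low"]:
        return "low"
    return None  # below the lowest threshold: contributes nothing

print(severity(3), severity(12), severity(60))  # None low high
```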
>> That's awesome, man. >> The policy time frame is another setting that you have complete control over; I'll skip over that in the interest of brevity here. You click "Next," then "Submit," and that's it. That's all you've got to do. No complicated scripts, no agents to deploy. You're good to go. There's our policy right there. >> Sweet. Voila, like
Christophe likes to say. >> That's right. Voila, indeed. Now let's turn over and look at what an actual case might look like. Let's say you're at day 100, you're seeing a couple of cases come in, and you're investigating. I want to use this as an opportunity to highlight some of the new features that we're announcing at Ignite as well. Going to our same old case that we've been talking about for a while now, Case 84. Open that up, and I get into my quick overview of the case. We still have our amazing macro
exploration, as we like to call it, which is this user 360. I can span back quite a bit in time and see what's going on with this user; that's all available there. Another thing that we announced just a couple of months ago at Inspire was this ability for you to now provide feedback. This is something I'm personally super excited about, with all the work that we're doing behind the scenes with ML and our research work there. This is information that's starting to build that pipe for us so we can get smarter and smarter and start to fine-tune these detections specifically for your organization. But a new thing that
we're announcing as part of Ignite is this
activity explorer. So not only do we have the macro exploration of what's going on in the environment with that potentially risky user, which can literally go back months, but we can give you micro explorations as well, digging into specific minutes and seconds and telling you what's going on with that user: when they created a file, downloaded a file, shared a file, and so forth. Super excited about this ability and the kind of opportunity it opens up for your investigation capabilities. We've always had the content explorer,
and that's still available today. Our customers tell us how important it is in providing that deep, rich context so you can understand what's going on with that user. Of course, that content is all available to you. You can still escalate the investigation to Advanced eDiscovery in case this needs to turn into litigation or a possible law enforcement escalation. Two new things that we're
super excited about here: the first is the ability to create a Microsoft Teams team. We talked about how critical it is for collaboration to be built into the solution. Given that you have to collaborate with human resources and legal, we thought, why not give you the ability to spin up a Microsoft Teams team that gives you a secure workspace to do all the collaboration, note-taking, and evidence gathering that you need to do? You can do that right out of the box. Secondly, we're also super excited about the ability to integrate with the Power Platform, which now gives you the ability to automate some of these workflows, where you might have to notify a user that they're in scope, or notify the manager, or maybe initiate a conversation with a manager to verify whether what you observed in your system is in line with expected duties or whether something more nefarious might be happening.
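Here is a minimal sketch of the kind of manager-notification workflow being described, written in plain Python rather than as an actual Power Automate flow. The alert fields, addresses, and local mail relay are hypothetical assumptions for the illustration.

```python
# Illustrative only: a plain-Python stand-in for a workflow one might build
# with Power Automate; the alert shape and addresses are hypothetical.
import smtplib
from email.message import EmailMessage

def notify_manager(alert):
    """Ask the user's manager to verify whether observed activity is in
    line with expected duties."""
    msg = EmailMessage()
    msg["From"] = "insider-risk@contoso.com"
    msg["To"] = alert["manager_email"]
    msg["Subject"] = "Please review activity for case " + alert["case_id"]
    msg.set_content(
        "An activity alert was raised for one of your reports. "
        "Is this in line with their expected duties?"
    )
    with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
        smtp.send_message(msg)

notify_manager({"case_id": "84", "manager_email": "manager@contoso.com"})
```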
Last, certainly not least, I'll just touch on this a little bit. We have the integration with ServiceNow that we announced back at Ignite, so you can integrate with your SOAR platform, like ServiceNow, for ticketing and whatnot. Of course, we also have the ability to integrate with your SIEM, such as Sentinel, with the ability to export alerts, which we also announced at Inspire. Super excited about some of the stuff that we're pushing out the door based on the tremendous feedback that we've been getting
from our customers. >> That's awesome, man, tons of great value introduced at this Ignite. We've come a long way in our journey in just a few short months, from February GA to now, September Ignite. I'm looking forward to what's coming next, but I'm certainly very excited to see how our customers are going to utilize these capabilities to help them identify and remediate risks within their organization. Wow, so that was a lot of great information from Talhah. I am super excited to see how our customers worldwide are going to start leveraging the capabilities we showed today to help them identify and remediate insider risks within their organization. For those of you watching, there are several ways
that you can start leveraging Insider Risk Management within your own tenant. First and foremost, if you have an E5 license, you can just go to compliance.microsoft.com and start implementing insider risk policies. You saw how easy it was. If you don't have E5, we have an E5 trial that you can sign up for. Of course, to get more information about the features we showed today, we have a blog that we released at aka.ms/insiderriskignite. With that, thank you
so much for watching. [MUSIC]