>> ANNOUNCER: Please
welcome from Microsoft Security, Vasu Jakkal. >> VASU JAKKAL: Well hello,
hello, hello, everyone. Good afternoon. It is great to be
here with you all. Another RSA Conference. This holds a special place
in my heart as I know it does yours as well. We get to be here to form
community and belonging, to learn something new and
exciting, to meet old friends and make new ones. And I certainly hope you have
met someone new this year. What an exciting
age it is to be in. Today we have one of the most
consequential technologies that are going to shape
our world and our future, artificial intelligence. I love the theme of this
year's RSA Conference. The Art of Possible. As I was listening to Doris, it
just reminded me of the limits of our own perception. And as we embrace this art of
the possible, as we embrace AI, what are the limits that
it's going to take us to? Limitless possibilities. Two years back when I was here
with you, I talked about AI for security and what our hypothesis was at that time. This was the slide we showed and
how we believed innovation was going to take shape, leveraging
generative AI for security. Last year, we came back and
that timeline had shrunk. What I thought was going to take
us three years took us a year. And that pace of innovation is rapidly accelerating. Not surprisingly, when ChatGPT was launched, in about two months,
100 million users were using that technology. Me included. And if you look at the curve of
innovation and if you look at other technologies which have
had a meaningful impact on our lives, mobile phones,
the internet, you can see the numbers here. It took us sixteen years
for mobile phones to have 100 million users. Just gives you an idea of
the scale and the magnitude, the pace and rate of
adoption and innovation. And today, fast forward just
eighteen months later, more than a billion people around the
world are using large language models in some way. That is incredible. Absolutely incredible. So, what does that mean for us? These are great numbers. What is the promise of AI? AI is going to have a
profound impact on all dimensions of our life. In healthcare. When someone is diagnosed
with cancer, it changes their life in a split second. But what if we could – imagine
if we could diagnose that faster and with accuracy
and could find a cure. Imagine if we could use AI to
reach the thousands and millions and billions of people around
the world in rural corners. In education, imagine if you had
a personalized tutor who could teach your kids, patiently – I
have kids so I know that is a real feat to accomplish –
patiently, at their pace, helping them understand
in their ways. Imagine a more equitable future
where you could use AI to teach thousands of students who do
not have access to education. Autonomous transportation. Yes, we have work to do on
safety but think about a future where transportation is
efficient, it is convenient, it is safe, and it's sustainable. Imagine. Climate, one
of the most pressing challenges of our time. Imagine if we can use AI for our Mother Earth, if we can detect and help endangered species, or monitor deforestation. If we can even detect natural
disasters before they happen and maybe prevent the impact, what
would that world look like? And bringing it right home,
in our world, cybersecurity, it's hard, isn't it? Many days it feels like we
take three steps forward and four steps backward. It's been hard. Imagine if we could put the
superpowers of generative AI in the hands of every defender and
reduce the barriers to entry to becoming a defender so that we
can defend at machine speed and scale. Even in its relative infancy,
AI for cybersecurity is already able to do incredible
things, solve complex, sophisticated challenges,
do reverse engineering of malware in seconds. It is able to tilt the balance
in favor of defenders. Natural language is the most powerful coding tool, and using it in multiple languages simultaneously is helping us bring diverse perspectives quickly to the table and form an international community of defenders. Imagine that. So, clearly, it is evident that
generative AI is going to have an impact on everything. Here is some data from the
users of AI that we're seeing. They're getting faster. They're getting more productive. They're getting more accurate. We're better protected. It's pretty cool. And it's for all. We're big believers at
Microsoft in security for all. No matter who you are, no matter
where you are, no matter where you are on the journey or
what part of the world you are, AI can help you. So, no wonder when I meet
people and organizations, the first question we get is,
well, how can it help me? I'm a small and medium business. How can it help me? I'm a large enterprise. How can it help me? What is that? What are the outcomes
it can drive for me? And right at its heels is the
next question, tough question, which as defenders and
security professionals I know you get a lot. Well, is it safe? Are these models safe? Should I use them? How do I know that generative AI
is going to be used only for what it is supposed to be used for? How do I know that? And how can I protect
something which is evolving so rapidly, so fast? How can I keep pace
with that innovation? These are all great questions
and we all must ask them. It is crucial we ask them. It is crucial. Because today, we live in
one of the most complex threat landscapes ever. Every time I come to RSA, I use
the word unprecedented, and I'm waiting to not use that. But it is unprecedented. Identity continues to be the
battleground for security. Identity-related attacks have increased by 10X just year over year. Cybercrime spans both nation-state attacks and ransomware. Cybercrime is a gig economy. And if we measured it in those terms, if cybercrime was an economy, a country, it would be the third largest GDP in the world, and it's growing at 15%. Imagine. Imagine what we could do
if we made a dent in that. Not to mention the talent
shortage that each and every one of you is facing right now. And a complex
regulatory environment. Microsoft is tracking 250,
on average, regulatory updates every day. That's a lot. And what happens with AI? Because generative AI is
a tool that attackers will use creatively. They can use gen AI to
proliferate malware rapidly and create new variants, and do password cracking more intelligently with more context. They can use it to prey on what makes us human, our curiosity, using phishing
and new techniques there. And I'm sure each of you has
your own favorite deepfake video that you've seen. Super concerning. Did you know that just a
three-second voice sample can train a gen AI model
to sound like anyone? Something as innocuous as your
voicemail greeting can be used. Combine that with the phishing
attacks and think about the harm that can be done. And just like attackers can use
generative AI creatively to do all those things I talked about,
there are also new surfaces that they can go and harm. Along with our traditional
attack surfaces like devices and identities and data and cloud
and infrastructure, and so on, the new gen AI attack surfaces include prompts and LLMs, AI data and orchestration. So, we have to think about that. Okay, so, what do we have to do? Because clearly, AI has so
much potential and promise. We as defenders have the
responsibility to protect it and we need to protect it
comprehensively, with security at the center but also thinking
of privacy, thinking of fairness, thinking of
inclusion, thinking of quality, reliability,
all of those things. That is what is needed. I absolutely believe AI will
elevate the human potential. I'm seeing it every single day. It reminds me of the beautiful
poetry that Doris was saying. What is beyond our perception? It's going to help
us understand that. Gosh, beautiful. But it has to start
with security. It has to. You defenders are the heart
of trust, the heart of an organization's trust in AI. It is a really important
responsibility. You're the ones who
provide a safe and secure space for exploration. You're the ones who make it
possible so someone can lose themselves in inspiration. You are the foundation. And I love this. You are the yes. So, I hope you know that because
at RSA, we come and we do a lot of things and we are immersed in
technology, but sometimes it is good to take a moment and pause
and think about how much you do and how important you
are for this world. You are the yes for AI. So, as we embark on this magical
AI journey, we cannot do this transformation without
security transformation. So, how do we do that? There is clearly a lot
of literature, a lot of research out there. Based on our learnings at
Microsoft, we have come up with a simple framework. There are three pillars. Discover, protect, and govern. Discover. What we don't know creates fear. Fear creates anxiety. I have seen organizations who
are so afraid of AI, they're just not using AI anymore
because they don't know how to manage the risk. Now, as I say this, 93% of
organizations have some AI usage, and only
1% of leaders feel equipped to deal with that risk. So we have to start with
understanding the AI usage. We have to understand what
are the gen AI apps being used in a company? How are they being used? How much are they being used? Who is using them? What is the data
flowing in and out? Create that blueprint. Next, map that
blueprint to the risk. And we have three ways
in which we assess risk. First one is application
and model risk. Think about just the models: large language models and model poisoning, or prompt injections, or you can also think about jailbreaks. Those are the things. Supply chain vulnerabilities. That's your application
and model risk. The second one is data risk. Data loss, insider risk,
unintended use of data. That's data risk. And then the third one is
governance risk because there's a lot of regulations coming up. This is related to regulatory
compliance but also code of conduct and policies
that you may have. The second pillar after
you discover is protect. Mitigate all the risks you can. Prevent them from happening. And for the things you cannot
anticipate, have the right guardrails and controls. If you look at application and
model risk, one of my favorites continues to be zero
trust architecture. Verify explicitly, use
least privilege access, and assume breach. Three everlasting tenets. Use zero trust architecture. Start with identity
and expand that to AI. And what I mean by that is use AI threat intelligence. Like, what are we observing
with new TTPs and new indicators of compromise? Do threat modelling using
that and expand your threat protection because your
traditional techniques are not going to work here. So, expand your threat
protection, your XDR technologies to really be
inclusive of these AI risks. Posture, one of my absolutely
favorite, favorite things because it starts
with how secure am I? Make sure you understand
your posture and you're managing this posture. There are so many great tools. There is cloud security posture,
extended security posture. Integrate your AI
threats into that. Be expansive. And the lastly, have
content safety controls. What is your LLM processing? And what are the outputs? Have the right filters so
harmful content, malicious content is not served up. Data risk. Really, really concerning. What is the data
that is being used? Where is this data? How do I understand that? Data is the fuel of AI. That is what it uses. So, you have to make sure that
you understand your data, you know where the data is, and
have the right labeling. I cannot stress this enough. Labeling and classification
is going to be critical. Sensitive data needs
to be protected. And this data is being generated
so fast that one of our biggest takeaways from our own learnings
is auto classification. Use auto classification. You want this at the speed
at which the data is being created and used. Risk-based access controls. Both user risk as well as data
risk, combine them to understand who should have access, how
much access, what access, and integrate that into your
threat protection, into your zero trust architecture. And then lastly, make sure
that you are looking at data loss prevention
policies holistically. These are some of the things
we are doing at Microsoft. And then the third
pillar, governance. Now, I know governance can
be an interesting term but I think governance is so
integral to everything that we're doing right now. We have so many learnings
from not just the technology revolution of the last
few decades, but just the Industrial Revolution. And governance is about
that human agency. It's making sure that we put
ethics, we put humans at the front and at the heart of
technology to understand how should we build this
safely, deploy this safely, use this safely. We need to be really
thoughtful about this. I shared with you earlier that
Microsoft is tracking 250-plus regulatory updates every day. We're seeing new
regulations on AI. These are just a few: the EU AI Act, the Artificial Intelligence and Data Act, the Digital Act in India, and more across the world. And we need to make sure
that we're compliant. So, from a governance
perspective, one of the things I love about the EU AI
Act is its risk-based approach to governance. I love it. Because then it helps you
understand, okay, I'm going to prevent access
to these risky apps. I know how to evaluate
the risk of that. And for the low risk apps, I'm
going to make sure I have the right controls in place. Maybe it's for some users. Maybe it's about some
data so I'm going to put those controls in place. We need to think about policy
and we need to think about regulatory violations. Both in terms of are we
compliant to this and making sure that your tools integrate
that in some sort of automated way, because otherwise
it's going to get complex. And if someone is trying to just
use AI in harmful ways, how can you detect that and how can you
make sure they're compliant with your code of conduct policies? Content safety filters, we
talked about that earlier. It matters deeply here. And then the last one, which is critical, is user education. It is not just for governance, it's across both of the other pillars as well. It starts with user education. It starts with us. Just understanding it. How do these models work? What are they using? How do I need to evolve
threat protection? How do I need to educate
people on content? How content needs to be created? Because you have so
many content creators. How content needs to be
labeled and classified. What tools do they need to use? All these are really,
really important and that education is critical. I shared this earlier. While security is at the heart
of the AI transformation, there is so much more that
we need to think of. Our roles as
defenders are growing. Your impact is growing. When we think about AI and when
we think about security, we need to make sure it is for
all and it is by all. We need to make sure that
we're including everyone in building this. Diversity matters. Cognitive diversity
helps us build better AI and better security. Fairness matters. Equity matters. Inclusion matters. Transparency matters. Accountability matters. We've got to think
about all of these. Clearly, if you can't
tell, I'm an AI optimist. I think the age of AI is going
to unlock massive potential. I believe it will help us
stretch our imaginations. It will help us dream a little
bit brighter, a little bit bigger, help us make each
other's lives better. Build a better world, a
more equitable world, a more equal world. Wouldn't that be wonderful? I truly believe it is
going to do that but we have a huge responsibility. As Uncle Ben in Spider-Man –
someone corrected me – said, "With great power comes
great responsibility." That is the responsibility
we bear as defenders. I love science fiction. I grew up with Star Trek
so I'm a hardcore Trekkie. Live long and prosper. And I thought I would end with
a science fiction thought. The limits of the possible can
only be defined by going beyond them into the impossible. So, as I come full circle with
this conference's themes on the art of the possible, I invite
you to join me, and I invite you to join me fearlessly, bravely,
keeping security at the heart. And with care, together, I
invite you to really dream big, to make our world safer, and
to work together, because I think it's going to
be a beautiful world. Thank you so much
for joining me.