What if I told you that you could quickly spot real-time indicators of
issues as they unfold, without the need to poll or manually monitor changes in your data, and without writing a single line of code? That's the goal of the new
Real-Time Intelligence service, part of Microsoft Fabric's platform. It extends Fabric to the
world of streaming data across your IoT and operational systems. As a data analyst or a business user, you can easily explore
high-granularity, high-volume data, and spot issues before
they impact the business. And as a data engineer, you can more easily track
system-level changes across your data estate to manage and improve your pipelines. Today, I'm joined by Courtney Berg, who helped build Microsoft Fabric. Welcome.
- Hey, Jeremy. Thanks for having me on the show. - Thank you, and congrats
on the availability of Real-Time Intelligence
in Microsoft Fabric today. Now, most of us are familiar with event orchestration systems, but this is quite different, so what sets it apart?
- Well, this will give you an intelligent, unified, no-code way to listen to and analyze
real-time changes in your data wherever it lives. So, for example, you might have telemetry data collected from your business systems sitting in other clouds, along with streaming data from
your IoT devices on the edge. You can pull from the sources you want and transform and combine
the data in real time. And then with interactive
real-time analysis, you can explore the data,
spot emerging patterns, and isolate that single data point that could be the first
indicator of an issue. We make query authoring
easier with generative AI to help you quickly discover insights, and you can act faster by establishing rich conditions
for active monitoring and defining what should happen next, whether that's notifying the right team or triggering automated workflows for system-level remediations. So, you can build your own custom, integrated, and automated systems to detect changes in
data across your estate, analyze these events in context, and trigger early actions, all without writing a single line of code. - Right, and if we compare that to most event-based
systems out there today, which are tied to specific data sources and aren't listening to changes in data at an aggregate level, this changes things significantly, so what's behind that? - Well, as you mentioned, Real-Time Intelligence is
part of Microsoft Fabric, and what it does is it
orchestrates the process of ingesting streaming and event data in real time, analyzing and transforming it, and then acting on it. It does this through a number of capabilities. First is Eventstream, which lets you bring in data, whether it's from Microsoft
Services, external sources, or even change data feeds
from operational databases using available connectors, and a major thing we're solving for here is how to capture real-time data in motion so that you can act directly on insights while they're fresh. Underlying everything
is the new Real-Time hub. This provides a single
location for streaming data, as well as discrete event
data across your organization, happening at a system level. Now importantly, as a central location, it also catalogs the data and makes it easy to search for and discover real-time data: something that's been
historically difficult. And from here, you can take two paths with your available data. First, as data comes in, you can take immediate action
using our Reflex capability, which is part of Data Activator, to look for rich conditions to trigger notifications
or specific processes. Secondly, your data can go
directly into our Eventhouse, which provides a unified workspace for all of your data and is optimized for time series data. From there, you can
easily query it with KQL and visualize it on
our Real-Time Dashboard before acting on it with Reflex.
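For reference, a time-series query over an Eventhouse might look something like this minimal sketch (the table name SensorReadings and its Timestamp and Temperature columns are assumptions for illustration, not names from the demo):

// Average temperature per minute over the last hour, rendered as a time chart
SensorReadings
| where Timestamp > ago(1h)
| summarize AvgTemperature = avg(Temperature) by bin(Timestamp, 1m)
| render timechart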
- So, just to pause you there for a second: where do existing Fabric components like Synapse Real-Time Analytics fit into the picture? - Yeah, what we're doing is removing those tech silos so that we can better
orchestrate the entire lifecycle of capturing, analyzing, and acting on real-time intelligence. We've built Real-Time Intelligence on top of proven technologies while adding new functionality. So, capabilities from
Synapse Real-Time Analytics and Data Activator in Fabric have been unified, and there are also a number of
real-time streaming analytics and data exploration features
from the Azure platform, along with Fabric's strength
in data visualization from Power BI that we
bring in under the covers. - Right, this is really going to provide a familiar experience, while expanding it to address the specific challenges of working with real-time data. - That's right, and all your Entra ID, information protection
and governance policies all apply here. - So, can you walk us through an example with all this running? - Sure, I'm going to show you a scenario focused on business events, where change data recorded in one system ripples into another and causes a butterfly effect. In this case, we're a
direct-to-consumer food retailer. So, imagine it's a hot spell and our sales and marketing team wants to improve customer
loyalty and satisfaction. They've come up with this idea of an aggressive discount on ice cream, which sounds like a great idea, but there's a chain of dependencies with different teams and systems on point, and it's not until you
integrate these systems that you can catch and react to what's unfolding in real time. Plus, we want to catch that
sweet spot of early indicators, and Real-Time Intelligence
will do that for you with zero code. Here in our Real-Time Dashboard, you can see we've brought in all the relevant information
across multiple systems into one view. Data is coming in from our
sales and stock system, which gets updated hourly. I have real-time information from our IoT sensors with the refrigeration temperatures in our stores, and on the right here, I see data from the Postgres backend of our mobile delivery app, showing orders ready to be picked up and available drivers. In fact, we see average temperatures across our freezers have been increasing over the last few hours. - So, because we have
perishable goods here, and like you said, it's a hot day, we've also got logistical dependencies, including different
teams that are on point, there are a lot of things that could potentially go wrong here, so how do we get ahead
of something like this? - Yeah, listen closely, because
the devil's in the details. You'll notice that our dashboard currently shows aggregate
numbers across multiple freezers in different departments. So, the first gotcha is that I don't have a view at the individual freezer level to spot hidden issues with the freezers containing the ice cream, so let's dig in a little bit more. Now, I can manually query
to see what's going on, but to save time, I'll ask
Copilot to do this for me. I'll paste in my prompt, "Show the average
temperature by department as column chart," and it generates a KQL query that I can insert to get my chart. This looks pretty useful, so
I'll pin it to a dashboard to keep me updated in real time. Now, I'll select the existing dashboard, give the tile a name, and add it. I'll move it where I want it, resize it, and now I have visibility over
the freezers in each department.
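The query Copilot generates for a prompt like that would look roughly like the following sketch (the FreezerTelemetry table and its Temperature and Department columns are assumed names, not the exact query from the demo):

// Average freezer temperature by department, rendered as a column chart
FreezerTelemetry
| summarize AvgTemperature = avg(Temperature) by Department
| render columnchart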
- So, I can tell there are a lot of different things to look into here, so from the report directly, can we slice and dice that data? - Yes, everything is filterable live and can be queried on each tile. Let's explore the freezer data some more. I'm going to drill into the
aggregate freezer temperature. This should normally be
pretty flat over time. I'll start by removing the summarization to look at the data
across all of the stores. There's way too much here, so I can aggregate it in different ways. I'll start with the average temperature, group it by timestamp,
and also by department. Now, I can see the
frozen dessert department is trending up over time.
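Expressed as KQL, that exploration amounts to something like this sketch over the same assumed FreezerTelemetry table, binning the timestamp so the per-department trend becomes visible:

// Average temperature over time, grouped by department, to spot the upward trend
FreezerTelemetry
| summarize AvgTemperature = avg(Temperature) by bin(Timestamp, 15m), Department
| render timechart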
Now, we've found something interesting in the data: the temperatures in the frozen dessert freezers are moving up, which is not a good thing when you're dealing with ice cream. Of course, I can get notified of the issue from any of these tiles. I could set up an alert if a threshold is exceeded, but that wouldn't be super useful in our case, because we'd be alerting on changes to the aggregate temperature across multiple freezers.
- That makes sense. So, could we zero in then on the freezers that we care about, maybe the frozen dessert
ones that have our ice cream? - Yeah, absolutely. You can
get pretty granular here. To go into particular data streams, the Real-Time hub is the best option. Here, I can find and use all
the streaming data in Fabric. I can filter with these options on top and I can search through these streams. I'll start typing "IoT" to
pull up all of my IoT sensors. This top row represents
our freezer sensors. In the details, I can see here what other
items are using the stream, and then over here on the right, I get to preview the
actual events coming in with the details about each event, and from here I can also set an alert. I'll do that, and this time, I'll set my condition to be on event grouped by. In the grouping field, I'll select FreezerID; in when, I'll choose temperature; for the condition, I'll select it becomes greater than; and for the value, I'll set it to 29. So now, if any freezer goes above that threshold, I can get alerted in Teams, and I'll use the same workspace as before and the same item name, TemperatureAlerts.
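The alert itself is configured without code, but the condition it evaluates is roughly equivalent to a check like this (sketched as KQL against the assumed FreezerTelemetry table, purely for illustration):

// Latest reading per freezer, keeping only freezers above the 29-degree threshold
FreezerTelemetry
| summarize arg_max(Timestamp, Temperature) by FreezerID
| where Temperature > 29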
- So now, you can see all these alerts per freezer as they happen. - Yeah, and the logic can
get more sophisticated to look at business conditions and how they're changing when the alert condition
happens over a period of time, not just event by event. So, let's look here. In fact, if I go into the
reflex that was created, I can see the trigger and the individual streams
that it's monitoring. So, it looks like freezers D1 and D2 have gone over our threshold, so we're seeing a few
indicators of issues. - So, that's one sign,
but you might also recall that we saw some orders and wait times that were trending up in the dashboard, so what can we tell from that? - Yeah, that's valuable data. That might give us an early warning from our mobile customer order app. So, if I go back into Real-Time hub and search for our orders data, I can't find that information yet, so let's bring that data
in from our app's database. From Real-Time hub, I can add more data to build a complete view. I click Get Events, and you can see all the data sources that you can connect to, like Confluent, a few Azure services, Google Pub/Sub, and Amazon Kinesis. In this list, there are a few marked CDC, which use the open-source
Debezium framework that we host for you. Here, I'll connect a Postgres database and listen for the changes. From there, you'll add the
connection details, the server, and then the database instance. I also need to specify the Eventstream that's going to manage the connection stream. And finally, I'll put
in the name of the table that I want to monitor for changes. In this case, it's delivery orders. Now, I just need to
confirm it and that's it. - So, is that then going to streamify our CDC data feed in this case so that we can get real-time updates from our Postgres database? - Exactly, in just a few steps. In the Eventstream, I can see and transform the events. This is a preview of the
list of all the changes to the orders as they come in. If I go into Edit, I can do
all sorts of transformations, like aggregate, expand,
filter, group, or join the data to integrate multiple events into a cleaned-up feed before I publish it back to Real-Time hub. I'll choose Manage Fields, and now I can select the fields by reaching into the schema under the payload: order_id, customer_id, order_type, delivery_type, and waitTime. I'll refresh, and now the preview shows me just those fields. This looks pretty good. It's the sort of output I need, so I can choose the stream output to publish it back to the Real-Time hub as my orderWaitTime feed, and now it's configured.
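For comparison, the same field selection could be written as KQL if the raw change events landed in an Eventhouse table with a dynamic payload column (an assumption for illustration; in the demo this shaping happens visually in the Eventstream editor):

// Project just the order fields out of each CDC change event's payload
RawOrderChanges
| project order_id = tostring(payload.order_id),
          customer_id = tostring(payload.customer_id),
          order_type = tostring(payload.order_type),
          delivery_type = tostring(payload.delivery_type),
          waitTime = toreal(payload.waitTime)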
- Okay, so now the data from the mobile delivery app is flowing into the Real-Time hub. - Yeah, it just takes a moment
for those events to come in. Now, I can try the same search
as before in Real-Time hub. I'll search again for those orders, and then there's the
orderWaitTime feed I just created. When I open it, I get a preview of the number of events
that have been generated and are flowing through the stream. I can see connected
items, the event stream, and things that are
subscribing to the events, and I can create alerts
directly from here as well. To save time, since I
showed you this before, I'll fast forward and head
right over to Data Activator, because I already have my trigger ready for when the wait time
crosses nine minutes, and it will send out an email.
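If you wanted to sanity-check that trigger, the condition maps to a query along these lines (a sketch that assumes the orderWaitTime feed is also queryable as an Eventhouse table with the fields selected earlier):

// Orders whose wait time has crossed the nine-minute threshold
orderWaitTime
| where waitTime > 9
| project order_id, customer_id, delivery_type, waitTime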
- And there's nothing better than getting an email or a Teams notification, except for maybe a mobile
app notification, right? - It's funny that you mention that. This is right up your
alley: workflow automation. So, in this case, I might
change the promotion to something that reduces
the number of orders but balances that out by increasing the average amount of each order. You've probably seen examples where, instead of getting
25% off a single item, the promotion might be something like when you spend $100, you save $25. So, instead of an email, I can actually have it start
a Power Automate workflow. In fact, in this tab,
I've created one here. It listens to when the
reflex trigger fires, then it creates an approval, and an Adaptive Card is
posted in the Teams channel to start a new campaign. This should reduce the number
of smaller customer orders and drive higher-value orders so that we don't tie up our drivers. Back in my reflex item, I just need to change the action I want it to take to this custom action for the campaign approval, and that's all I need to do. - And that's going to then
kick off a Power Automate flow effectively every time that trigger fires, so does this only work then
with business-related events? - It works with system events too. Every Fabric item
generates a system event, and activity in Azure storage does too. We can use these events
just like business events. You'll remember that our inventory system only updates hourly, and we can make that closer to real time. Starting from the pipeline, I'm going to create a new trigger, and then ask it to listen
for a particular event in Real-Time hub. Our inventory stock system writes all the recent transactions into an Azure storage account. I'll connect to an existing account and choose the correct subscription. Now, I'll choose my storage account, ContosoStockOutput. In Eventstream name, I'll paste in ContosoStorageEvents. Then I can choose the event types. In my case, I only want the create events, so I'll deselect everything else. Then I just need to create, and after that, hit Save. Like before, I need to fill in details for the workspace and the new item. I'll name it EventBasedDataLoad
this time, and that's it. If I head back over to the monitor, you'll see that the batch
sales load succeeded, so now Real-Time Intelligence
is listening for events where files are added to
my Azure storage account, and will kick off the
pipeline automatically. The analytics over the stock
system are event driven, so you'll see updates faster than the hourly poll we had previously. - And I can see a lot of cases where Real-Time Intelligence would be really useful, whether that's for recommendation engines or for things like generative AI. It could also be used to
ground large language models with up-to-date information for a lot more accurate responses. - Yeah, and of course,
you can use system events to start other Fabric jobs. For example, you could run a notebook to train an LLM using
real-time streaming data routed into OneLake via
Real-Time Intelligence. So, basically any data activity in Fabric can now be event driven
rather than scheduled. - So, it's really great
to see all the updates for Microsoft Fabric, so where can all the
folks watching right now go to learn more? - Yeah, Real-Time Intelligence
is in public preview today, and you can learn more at
aka.ms/RealTimeIntelligence, and for all things Microsoft Fabric, check out microsoft.com/fabric. - Thanks so much for
joining us today, Courtney, and of course, keep
watching Microsoft Mechanics for all the latest tech updates. Be sure to subscribe
if you haven't already, and as always, thank you for watching.