>> Need to profile your program but aren't sure where to start? There's a new profiling tool in Visual Studio called the .NET Counters tool that you should definitely check out, which is what we're going to do as we continue our profiling series on this
episode of Visual Studio Toolbox. [MUSIC] Hey everyone, welcome to another
episode of Visual Studio Toolbox. I'm your host, Leslie Richardson. Today we are continuing our very long and awesome
profiling series in Visual Studio. To do that, I am once again
joined by Sagar Shetty, who's a PM on the Profiling
team. Welcome, Sagar. >> Thanks, Leslie. Good to be back. >> Cool. We've talked about a lot of different profiling tools
on the show lately. What are we going to talk about today? >> Like you said, today
we're going to be continuing the profiling
toolbox series, talking about another
tool in the toolbox and that is the .NET Counters tool, which is a tool that will help
us visualize .NET Counters right from within the
Visual Studio profiler. >> That's great. Why is
this important to people? >> I think we should
start with talking a little bit about what
.NET Counters are. .NET Counters is a tool developed by the .NET team, as you might imagine. Basically, they're a collection of first-level metrics and diagnostics that help you get a better understanding of how your application is performing, especially if you've never really done any performance investigation before. They're really meant to help
you start your investigation. Up until this point, the
primary way you'd visualize these metrics and counters
was through the command line.
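For reference, the command-line route being described here is the dotnet-counters global tool; a typical session looks roughly like this (exact options can vary between versions of the tool):

```
# Install the dotnet-counters global tool (one time)
dotnet tool install --global dotnet-counters

# List running .NET processes, then stream counters for one of them in the terminal
dotnet-counters ps
dotnet-counters monitor --process-id <pid>
```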
Basically, what we've done with the .NET Counters tool is allow you to visualize them right from within Visual Studio, in the profiler. With that, you get tons of extra capabilities: richer visual representations and graphs, more analysis and robust tables with metrics associated with those counters, and, because it's in the profiler, something we're going to stress today is that you can run the Counters tool alongside other tools for really nice, deeper performance investigations. We're going to look at
all of those things. >> That's awesome. As powerful
as the command line is, it's not always the
most accessible thing for everybody, definitely. So it's nice to have another
tool as an alternative. >> The command line tends to be like that interface that you
know very well, Leslie; it's kind of the MVP and it's that starting place, but now we're moving it along and bringing it into Visual Studio. >> Sweet. Let's see it in action. >> Let's just jump right
into Visual Studio and I have a demo prepared today. First thing we want to do, of course, is launch the Counters tool. To do this, it's the same experience as all the other profiling tools, which is to get to the Performance
Profiler. How do you do that? If you've watched any of our other videos you've
probably seen this already, but you can go to the Debug menu and then launch this Performance Profiler right here. There's also a keyboard shortcut, Alt+F2; if you ever forget that, it's always listed there in the menu. I'm just going to go ahead and launch that. Now we get to this page. For today, we're talking
about the Counters tool, so of course, we want to
have .NET Counters selected. Something we've talked
about before, Leslie, is that you can run multiple
of our tools at once. I know we've mentioned this before, but that's something
I especially want to stress today, and hopefully I can show with one example why
that's really valuable. In this demo, I actually want to run the Counters tool with
the CPU Usage tool. One last quick note before I run it: as with a lot of our tools, you have icons next to them that allow you to configure extra settings in case you're interested. For the Counters tool, I'm just going to go ahead and click on this in case people are interested. These are essentially the counter providers that are specified by .NET and the .NET Counters tool. Currently, the tool we have in the Profiler supports all of the counters that you would already get to see from the command line interface. Over time, you can imagine
we're going to add more to this and allow you to
add custom counters. But that's more in the future. >> This is a new UX
experience, isn't it? I feel like I'm seeing
this for the first time? >> Yeah. This is newer. The Counters tool I believe has
been up for about a year now. But this UI page is a little
bit more recent than that. >> Awesome. It's cool how, over the course of this profiling series you've been doing, we get to see the progression, and you're just making the profiling tools even
cooler than they already are. >> Definitely, we're
spending a lot of time in terms of revamping the
UX for all of our tools. Yeah, this page is just one
of the many examples of that. You can also change the collection interval if you want to collect more or less frequently. For now, I'm just going to leave
the defaults because that's good enough for our purposes today. Hit "Okay", and then
go ahead and start. Now, this is going to
launch my application and start collecting data. Now's a good time to talk a little bit about the demo app we're going to use today. This is an application we've actually shown off a little bit before in some of the previous episodes. A shout-out to David Fowler from the .NET team, because this is his application. Basically, this is an ASP.NET scenarios app that shows off various scenarios in .NET that you might be doing and shows you some good and bad
practices as far as coding patterns. We'll dig into all of that. As you can see, there's
various scenarios here as far as returning and retrieving
JSON responses, JSON POST. We're going to run a
few of these scenarios. Actually, to do that today I
have a command-line utility, which essentially will allow
me to generate a lot of load on this application instead of manually clicking over
and over and over again. I'm going to run a few
of these scenarios, the first three, actually. This first command essentially sends about 1,500 requests to the first scenario. I'm going to let that run real quick; that's the first scenario about retrieving and returning a JSON response. I have a couple more commands for the other two that I'd like to run, so this next one is about using the HTTP client to retrieve a JSON response, and that's going to run. Then lastly, we're going to do the big JSON POST scenario. Now we've essentially generated some load on our application.
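The load-generation utility itself isn't shown in the transcript; as a rough, purely illustrative C# stand-in for what it's doing (the endpoint URL and request count here are hypothetical, not the demo's actual code), it amounts to something like:

```csharp
using System;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

// Hypothetical stand-in for the load-generation utility used in the demo:
// fire a batch of requests at one scenario endpoint so the counters and
// the CPU graph have real work to reflect.
var client = new HttpClient();
var url = "https://localhost:5001/big-json-content"; // illustrative endpoint
var requests = Enumerable.Range(0, 1500)
                         .Select(_ => client.GetAsync(url))
                         .ToArray();

await Task.WhenAll(requests);
Console.WriteLine("Load generation finished.");
```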
Let's go back to Visual Studio. As we can see here, the Counters tool is up and running. We have the list of counters as specified by the counter providers, a ton of different metrics that are all showing their current value. We have the swimlane up here showing
the graph of CPU utilization. I'm going to go ahead
and stop this collection just so this diag session
doesn't get too big, and we'll do a deeper dive into
some of the individual metrics. >> Cool. >> As it's running, basically
what it shows you is just the current value of
the specific counters. But now that we've stopped
collection, as you can see, we're getting a few more data
points for each of the counters, basically, over the
entire time range: what was the minimum value for each counter over that time, the max, and the average. Just to piece this
apart a little bit, we have a lot of different
counters here that give you information on tons of different
aspects of your application. We have things like allocation rate, stuff about different
timers, CPU usage, and a number of counters around garbage collection, like the size of the Gen 1 heap and how many times garbage collection is running for various generations. I really encourage
people to go and look at all these counters because
there's just a ton of different metrics that
cover a lot of ground. >> That's interesting.
Do you still get some of those counters, like the garbage collection ones and the CPU usage ones specifically, if you didn't use the CPU Usage tool or the garbage collector tools? >> Yeah, that's a great question. The counters you get
are specified by that UI we looked at on the summary
page, that little key icon. Regardless of whether you run CPU usage or any of
those other tools, you're going to get
all of these counters. You'll really get a
lot of these insights even if you don't run
those other tools. But it's still good to run the tools as we'll see later on in the demo. We have a bunch of
different counters. One question might be, how do I start my investigation? What counters really matter, what counters would I
want to look at first? To be honest, this is going to depend on the nature
of your investigation. I'm going to suggest a
few counters here today, just for the investigation I have in this demo, that
might be interesting. For that, I'll call out
the CPU Usage counter just because CPU Usage is really
interesting to look at. A lot of performance problems
are related to CPU utilization, and so that's always an
interesting counter to look at. Of course, we did run this
with the CPU Usage tool. As you're probably seeing here, these two graphs are very similar. As you probably noticed, whenever you click a check box
next to a counter, you get a graph in the swimlane, and that helps you visualize how
the counter changed over time. As far as other useful counters, I also want to look at
those garbage collection counters that I mentioned
a little bit earlier. Specifically for today, I'm going to highlight
the Gen0 GC count, the Gen1 GC count, and the Gen2 GC count. In previous episodes, Leslie, I know we've talked a little bit
about garbage collection before, especially with the .NET Object Allocation tool. Just as a refresher for anyone that may not remember, or doesn't necessarily know a whole lot about garbage collection, basically, it's this awesome way that .NET automatically helps
manage your memory and clean up a lot of memory that's being allocated towards unused objects. That's great that that exists, but it is quite
expensive for the CPU. Unfortunately, in
software development and in a lot of applications, sometimes you might find
yourself in situations where you're forcing the garbage
collector to run constantly, and that can lead to a lot
of performance problems. With the Gen0, the
Gen1, Gen2 GC counts, you're essentially looking at
how many garbage collections happen for that generation of
objects over a period of time. As you can see, whenever
we ran those scenarios, and that's indicated by
these spikes in CPU, we're also forcing the GC to run. That's something that we see
right here from these graphs, and that's something we're
going to want to look at a little bit more in the future. I'm also going to highlight
the large object heap size. Basically, what this is
is a portion of memory that is reserved for objects
that take up a lot of memory. You want to be very careful about when you're allocating memory that goes on the large object heap, because generally speaking, when you do this, you're also forcing the GC to run on the back end, since these objects take up so much memory. It's one thing to take up a lot of memory, but if you don't actually need these objects to be stored in memory and maintained, you're going to force the GC to run a lot, so it's something to keep in mind.
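To make the generation counts and the large object heap a bit more concrete, here's a small illustrative snippet (not from the demo app) that uses the standard GC APIs; the roughly 85,000-byte large-object threshold is standard .NET behavior, while the array sizes are just for demonstration:

```csharp
using System;

// Small objects start out in Gen 0; arrays of roughly 85,000 bytes or more
// go straight to the large object heap, which is collected along with Gen 2.
var small = new byte[1_000];
var large = new byte[200_000];

Console.WriteLine(GC.GetGeneration(small)); // 0
Console.WriteLine(GC.GetGeneration(large)); // 2 (large object heap)

// The Gen 0 / Gen 1 / Gen 2 GC counts surfaced in the Counters tool reflect
// how many times each generation has been collected so far.
Console.WriteLine($"Gen0: {GC.CollectionCount(0)}, " +
                  $"Gen1: {GC.CollectionCount(1)}, " +
                  $"Gen2: {GC.CollectionCount(2)}");
```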
From these counters, Leslie, the point I'm trying to make is that we ran this investigation without really knowing a whole lot about the performance of our application to begin with, we ran a few scenarios, and we see that the CPU is being utilized a good amount and that garbage
collection is running. So from the counters tool, I may ask myself the
question now, like okay, I see that all of these
things are happening, I see these metrics, what functions are costly right now? What is forcing the garbage
collector to run so much and where in my code can
I potentially optimize this? This is where I get back to how the counters tool really
plays well with other tools. For now I've run it with
the CPU Usage tool, so we're actually going to jump into the CPU Usage tool to help
answer that question. Before I do that, one quick note about the swimlane, and this is consistent across all of our tools: it's not just a graph, it's also
a way to filter down by time. I can click here and drag. I'm just going to drag
around this first spike. What this is going to do is
update our table and our dataset to just the selected time
range across all of our tools. This also filters down the data in the CPU Usage tool. Now, finally, I want to jump over to the CPU Usage tool. Let me drag this up a little bit so that everyone can see the table. I'm actually just going to go to the Open Details view, although the summary page already shows the hot path, which is what I want to use to figure out what the really hot functions are that are taking up a lot of our CPU time. As you can see, it seems to be this GetPokemonBufferedStringAsync function that is taking up a lot of our CPU usage each time, and it's expensive. Another great aspect of being right within VS is the fact that you can get to the source. Let me just double-click on this. >> Makes sense. There's
a lot of Pokemon. >> Now we're at the CPU Usage
tool on the summary page, looking at the hot path, and we see that this Pokemon service's GetPokemonBufferedStringAsync function seems to be a function where our CPU
is spending a lot of time. I'm just going to click on this
Open Details View and get to the call tree where I can see the
full call tree, full call stack. I see that this is on the hot path. One thing I might want to do now
that I see that this function is on the hot path is
investigate this further. Because we are in Visual Studio, we have this close
connection to source code, and I can ultimately look
at my source code and see maybe how I might optimize
this a little bit more. I can double-click and
get back to source. I actually have the
function up right here. Now, we're going to look at
this function right here, the GetPokemonBufferedStringAsync
function, and at one of the reasons why this particular function might be taking up a lot of CPU time as well as forcing a lot of garbage collection. Leslie, see if you notice:
within this function, essentially what we're doing is
we're creating this variable, this JSON variable, this
massive JSON object, because we're retrieving
the JSON object and then returning it back. Essentially, when you put this massive object in a variable and store it in memory, it goes straight onto that large object heap that we were talking about earlier. This is what's forcing
the garbage collection. Like I said, David Fowler is the
one that wrote this application, and a big part of this application is showing off good and bad patterns, and this is an example of
one problematic pattern. I'd encourage people to check it out; we'll link to this repo, and he has examples of how you might rectify this issue. One solution it suggests is just to read from a stream asynchronously: stream that JSON payload and read from it asynchronously instead of having this massive JSON variable being stored over and over again, and he shows a way of doing that further below in case you're interested in looking at an example.
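The repo's exact code isn't reproduced here, but the problematic buffered pattern and the streaming fix look roughly like this sketch (the URL and the streaming method name are illustrative, not the actual implementation from the repo):

```csharp
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

public class PokemonService
{
    private static readonly HttpClient Client = new();
    private const string Url = "https://example.com/pokemon.json"; // illustrative

    // Problematic pattern: buffer the entire payload into one huge string.
    // A string this large lands on the large object heap and keeps the
    // garbage collector busy under load.
    public async Task<string> GetPokemonBufferedStringAsync()
    {
        var json = await Client.GetStringAsync(Url);
        return json;
    }

    // Better pattern: read the response as a stream and parse it
    // asynchronously instead of materializing the whole payload as a string.
    // (The caller is responsible for disposing the returned JsonDocument.)
    public async Task<JsonDocument> GetPokemonStreamedAsync()
    {
        await using var stream = await Client.GetStreamAsync(Url);
        return await JsonDocument.ParseAsync(stream);
    }
}
```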
>> Plus, we'll link to that repo; that repo is really good about explaining good async practices and bad async practices. >> For sure. It's a really educational repo. But walking back to the Counters tool in terms of where
we started this investigation, when we started it,
we didn't really know a whole lot about how our
application was performing. We had not run a profiling
session on it before. We used the counters metrics to get a high-level snapshot or high-level overview of how our
application is performing. We dug into various counters, in particular the CPU usage and
garbage collection counters. Then ultimately, we used
those counters to make an informed decision on
which tools to select next, and we selected the CPU Usage tool, because that's a tool that's more designed for a deeper dive, a deeper investigation into specific areas. Then ultimately, we found a function that we could optimize that was using perhaps a bit of a bad pattern in terms of storing a large JSON object in memory in a variable. Yeah, that's how our investigation went. >> That's great.
Overall, it seems like the Counters tool would be the best tool to start with if you don't know where to start with your performance profiling journey. >> Well said. That's a huge motivation for using the Counters tool. It's a great entryway. It doesn't necessarily give you that deep dive, but that's also not really the point of the tool; it really gives you the high-level overview. Then from there you go to your CPU Usage tool, maybe the Events Viewer, just a lot of other tools, but it really helps you get that initial snapshot of how
your application is performing. >> That's great. If people want to learn more, where should they go? >> Yeah, with all of our tools, we have docs that give you the deeper dives. Definitely check out our other Toolbox episodes, because as was pointed out, with this tool you're going to be using a lot of the other tools alongside it. We can definitely link, for example, the CPU Usage tool episode. Just to reiterate a few of the key pairings that work well: I think the CPU Usage tool, hopefully as I demonstrated today, works really well with this tool. If you're doing a lot of
stuff with asynchronous code and you want to get a better understanding of what tasks might be taking a long time to complete, the .NET Async tool works well. I mentioned the Events
Viewer as well. One of the counters is
related to exceptions. Within the Events Viewer tool, we show you a ton of events. We haven't made a toolbox
episode on the Events Viewer, but that segues into
what's coming next. That's our next planned episode, covering the Events Viewer. With that tool, you'll see that there are a lot of events and information emitted; well, you'll see in the next episode. But if you have the CLR exceptions event provider enabled, you can get a nice understanding of what exceptions are being thrown in your application. There's a lot of different pairings. But to answer your question about where to go if you want to learn more, we're constantly trying
to put out blog posts. All the tools have docs, and check out our
entire toolbox series. >> Awesome. It's really great since this is one of the
newer profiling tools. Now there's a first stop for those who are new to profiling especially, or who just don't know where to start. >> Yeah, absolutely. >> Great. As you mentioned, next time we're probably going to be talking about the Events Viewer tool. So that's really exciting. We're continuing our profiling series. Yeah, thank you so much once again for coming and sharing, Sagar. >> Yeah, thanks for having me.
Pleasure as always, Leslie, thanks. >> Great. Until next
time, happy coding. >> Happy profiling. [MUSIC]