>> [MUSIC]. >> Welcome to today's ASP.NET Community Standup. We're in our nine-plus years of service here. Pretty proud of that. >> We're in our tenth year now. >> That's ridiculous. We've got to schedule that. There's got to be people jumping out of a cake or something. >> It really does. >> I'm super happy I roped both of you in. This is a very busy time of year. We're getting ready to ship .NET 8. We have a bunch of conferences coming up. Somehow I roped you both in. We're going to talk about David's Twitter thread about what features you want in ASP.NET Core for .NET 9, which is going to be fun. I will start as always
because I love community, with community links. Let's go. These links are in
the chat and I have a thing and there's a banner. Have to share all the ways. They're all here. I had
a whole bunch this week. First of all, this post series started by Rich Lander is talking about API design, and it answers questions like, for instance, why is there more than one way to read a file in .NET and things like that. I thought this was really interesting and it sets the stage. You're talking about different ways of reading a file, what you want, the evolution of the APIs and stuff. >> This is a good one because Rich has a series coming with examples of these. I think we actually spent, I want to say, maybe a month or two
working on one of his samples. That's coming up in his
next blog post and it shows the differences between going super low level and a high
level and the perf differences. It was fun just watching someone who wasn't steeped in
writing code like that have to come in and
learn and then ask the experts which he
has direct access to how to do things and
we found bugs on the way. It was fun. this will
be a fun series. >> It leads to some great discussion here and
online and other places too. It's good to go deeper and not
just be like boom here it is, but here is why we are
thinking about it. This reminded me too that
we do these API reviews. I think it's weekly, on the .NET Foundation YouTube channel. They're continuing to go up and they're live and stuff. Let's see. This is neat here: debugging enhancements in .NET 8. This rolls up some stuff that's also been mentioned
in some other posts, but just showing a lot of
quality of life stuff here. Instead of seeing a bunch of
types for debugging output, seeing the actual values, which is super nice. Anyway, this is something I'm going to be happy
about every single day as a developer just to see that. What's new in System.Text.Json for .NET 8: tons of stuff here. I love seeing all the performance. Also the serialization stuff like
combining source generators. Just tons and tons of stuff. I read through this, I
wasn't sure I got it all. I signed Eric up to do a .NET Conf session so you can teach me. >> Excited to do that. >> A few more things: we've got .NET Conf coming up. .NET Conf itself runs November 14th through 16th. Day one and two, the 14th and 15th, and then the night of the 15th, all the way through the 16th, 24 hours. We do a community 24-hour
around the World stream. I had a lot of fun looking
at time zones yesterday, making sure all our speakers
were not scheduled to speak at 3:00 A.M. their time.
That is coming up. Before that starts,
there's a student zone, so that's November 13. This is great for
beginners and learners. The idea is you can show up. You can get started
with .NET during that day. Then you can show up for the rest of .NET Conf if anything else seems
interesting to you, you've got more stuff you can learn. This is a great free thing. It's run by a lot of great community leaders and
covers a lot of different things. AI, Backend, Frontend, Mobile and Game, etc.
Good stuff there. Also, if you're getting
started with .NET, we have this .NET certification that just came out on freeCodeCamp. Actually, we have these live series on Learn Live that will walk you through those. A lot of great alternatives, including international ones spoken in Korean and Chinese, and also in Taiwan. Very cool. This is something
that was new to me. I've been paying a
little more attention to unit testing again lately and I was seeing this
on contract testing. This is interesting; I don't know if you folks have seen it. The idea is for testing microservices. He talks about how you can do integration testing, but it can be slow and there are interactions and stuff. The idea with contract testing is you're more validating the agreed contract. >> What does that mean? That the test doesn't run? It's just not in the spec? >> I think it is verifying both
consumer and provider and also the format of what's
transmitted between them. It didn't go super deep. I was looking at the code a bit. The idea here is like, here's a test and you're verifying both what you receive and what
you're outputting and stuff. He's doing a high-level
walk through, and then here's the
provider test too. You're validating I guess more like
all the different parts of it, not just when I'm called, this is what I respond with. Then I saw that he's writing
about this Pact document, and there's this whole Pact Foundation and there's a bigger thing. It's something I'm going to explore more, but I just thought it was interesting. There's a whole Pact Foundation. What the hell is
going on over there? Exciting new thing to learn about. I will put us down
in a different spot because I'm going to
hit our other tabs. A little bit of
spiciness here today. Andrew has been walking through
a lot of stuff with .NET 8. Should you use the .NET
8 identity endpoints? His basic premise was, no, you should not. I thought it was interesting. There we are. He talks about what they are. He's already gone over those and he explains what they do, let me bump up the font size a bit, what they solved. High level to me: we've always had ASP.NET Identity, but it's been baked into MVC and Razor Pages as a request/response server-side identity solution. It doesn't work well with APIs and SPAs, and that's this thing in .NET 8. Here he talks through some of the different ways that you can integrate them. What he's saying is, these APIs allow you to have bearer tokens, you can set them up with that, you can manage APIs without requiring a server-side page interaction. He's got a bunch of stuff about why he doesn't think it's the best option. I think this was the main one that I took away from those: bearer tokens in the browser should be avoided when possible. >> Fully agree, I'm not going
to hear me argue with that. >> It's the idea
here that basically, but it's not always
possible to avoid barrier tokens and then
there's also the cookie, you can use cookies instead. >> I think Andrew has some
good points in the post. Maybe I don't agree with the
conclusion wholesale. But for the most part, the identity endpoints basically let you do cookies from a SPA, having built-in endpoints for doing the common operations, so you can build your own UI, so it has two purposes. One is if you have to build custom
UI in your framework of choice, could be Blazor, or
Angular, or React, or whatever, you can then have
the UI be filled out by calling the back-end APIs to sign up, to change users in general. The login endpoint is probably the one that's the most sticky. We do want cookies to be the default for SPAs, and then the reason we did bearer tokens was for simple APIs and for mobile apps. >> You don't really have
options for mobile apps. >> You can use cookies, it's just not [inaudible] in non-browser clients. I think there's a really good point
about scaling up or being tied to the ASP.NET Core
specific identity system. I think if you're in a polyglot environment
where you have other apps that are not on.NET, you definitely wouldn't
want to use this. Because then how do
you pass this token to your Java app or your
Python or whatever. There are some places where
you don't want to use it, but I think in general for
the scenario where you have a back end and a front-end SPA, it really is better to use this than to spin up an OIDC server. No matter what anyone tells you, it's really hard to hide the details and we've
seen all the attempts. But you basically have to understand this bare minimum of
scopes and apps and all these primitive concepts to even exchange a token
for credentials. >> It's very easy to get wrong. At least this has been like
reviewed and gone through community review and all that stuff as opposed to something
you'll write on your own. >> The other thing to bear
in mind is we know this is an area that there are rightfully very strong opinions about, not just in our community but in the ecosystem, and we are definitely not
the only stack who has some built-in
authentication/authorization/identity support that others feel is lacking and they need to go
off and do something else. As others have pointed out, including Andrew in
this excellent post, because he only ever
does excellent posts, is that this is an
evolving area too. The specs are still being written, and this is very much a cat and
mouse game between people seeking vulnerabilities and developers wanting convenience. Just make it work. Make it easy. It's like, well, I'm sorry, it's just not that easy. It's just not. >> On the note about security, there's been two changes to
flows that everyone was using: the implicit flow and then the code flow. First, you were supposed to use the implicit flow from SPAs for OIDC, and then that was insecure, it was broken, and then you were supposed to use the code flow, then code flow with PKCE after another thing was found. I think there's a lot of angst around not using
the industry standard, yet there's a reluctance to, well, you have to understand it. It's really hard to hide things that leak. We tried really hard with Duende in the previous version, but it's really hard to make it simple for these cases, it doesn't scale down, is my take. >> And the thing that I
guess we would say again, which we've said before, is that not all of our
customers are the same. We have customers
who are like Andrew, who are very well-versed, have no problem in going off deep into a new
problem area like this that is very daunting
and continually changing and feeling confident
they can make the right decisions. Then there are others who literally
just want something turnkey and they never want to talk to an outside service for this. They don't want to have to rely on OS or platform auth; they just have, what is in their view, very
simple requirements. That is why we keep investing
in the built-in stuff, is because there are
customers who will not use a third party identity provider. We would say to people, you should use a third
party identity provider. That is the industry
best practice now, whether it's Entra from us or whether it's OpenID or you pick
your poison, I don't care. Or use a white-box style self-hosted product. What's the one that we were looking at, Fowler? The Go one, is it in Go? No. Which one is it? The container one, where you just run the IdP in a container. What's that one called? Something thread, something steel, something, something. What is it? It's going to bug me now. My brain can't remember like it used to; they're using something like that, but [inaudible]. >> But to Fowler's point, Keycloak implements the standards and then that gets exposed to you. You can't avoid it. Now a lot
of the time, that's fine. I've been around long
enough that I've implemented one version of OpenID Connect as a client before, or written custom auth flows to some external provider. But doing that well, and integrating it deeply into
the app with the concept of permissions and scopes and
all that type of stuff is hard and not everyone wants that. We've tried to focus
very much on how do we make something
more approachable and simple for folks who just have
first-party auth requirements where they own all
the pieces of the app whether it's the SPA front end
and the ASP.NET Core back end, and some other ASP.NET
Core service or a WinForms app and an ASP.NET Core
API or whatever it might be. They own all the pieces, and so that does simplify
the requirements somewhat. But we also fully understand that there are people like Andrew and many others who
strongly feel that no, that's not a good approach. All people should be doing
this other way because you'll ultimately
likely need that or if you ever do need a
thing it'll be easier. As Fowler said we just don't
come to the same conclusion. We just know that we have
lots of customers who just flat out when we talk to
them and we interview them, and we've talked to them over
many years just saying no, I just don't need those things. It's far too complicated
for my requirements. This is not about us
saying everyone should use identity endpoints or everyone
should even use the in-box Identity. As Andrew again points out in this article, there's a lot of functionality in ASP.NET Core Identity, and it's deceiving. If you've never been exposed
to this stuff before, you think it's a registration page and a login page, and that's it. There's a lot more stuff going on under the covers and a
lot more pages exposed, and so all that's been
replicated in endpoints now but it is very specifically about the
two scenarios that Fowler said. We still recommend if it's a browser
based scenario, use cookies. That's the simplest, most
secure solution still today. The identity endpoints will allow you to do that, because you can return a cookie from those API calls and still use cookie auth. If you're using a rich client or another client other than the
browser, you can now use tokens. If it's not first-party, then you use one of the many other options available to use an external identity provider. >> I saw a comment that says that
you shouldn't use ROPC anymore. >> I think that's Damien Bowden. >> I trust everything he says. >> I agree, his articles are amazing. It's incredible how many articles he cranks out on auth. I think it's important to clarify why, if this flow is in the OIDC spec, what we're shipping looks like this. >> Yeah. >> Which is not the
recommended practice, but why it's okay to ship it. >> The whole idea is that if
you're doing first-party auth. If you own the server that is accepting credentials and
you are storing passwords, it is fine to send the
password for that specific app to your specific back end to
get a token for that call. The reason you don't want
to do this normally with OIDC is because you're delegating auth to an external service,
a third-party service. You have an app called John's great app and you want to accept
Google credentials. There's no way I'm
going to trust John. Sorry John. I don't trust you. >> But it's my Google password. >> My credentials are for Google. With ROPC you're basically not doing an OIDC server; we're sending credentials directly to the auth provider for the app. So don't share your credentials for a third-party auth server with some running app that you don't know where it came from. >> Imagine there was a sign in
with Google button on damien.com, and when you click it, it just pops a form on damien.com and asks for your user name and password, and then it sends the user name and password to Google and logs in there. >> Exactly. That's
what it is basically. It's funny like when we
were researching this, we looked around; I looked at the apps I have installed on my machine and I looked at the number of apps that use a variant of this flow for this scenario where it's first-party. The NVIDIA app asks for the NVIDIA user name and password and then sends it to the endpoint. It's not doing a flow, it's not doing OIDC. It's their app and it's their endpoint, so I'd have no problem putting my NVIDIA credentials into the NVIDIA app. Because it's no different than if I went to nvidia.com and put my user and password into nvidia.com. Now there's a whole other discussion
about whether the passwords themselves should be removed
from the face of the Earth. Absolutely, there's lots of great efforts underway with
things like passkeys and whatnot. >> Use passkeys. >> To reduce their importance, and so we have a long way to go. >> Passkeys, I don't remember
what they are, I will say. >> It's a public/private key pair. There's a new standard in the browsers, a bunch of APIs, and then there's a flow that you implement with a bit of JavaScript and server stuff, like a classic negotiate: do a nonce, do a thing, do a blah blah, back and forward. You can effectively use a
physical machine, like a browser, like an app running
on a physical device as a credential once you've
done that initial flow. That includes signing up for
a site for the first time. You can say, hey, I'm a new user, I'm going to register
with a passkey, and then the site in their database only knows you as that passkey ID. They have no credential, they don't have a password for
you. They just have that ID. >> Pretty cool. >> It's pretty cool, but I
admit, the concept of passkeys, I've set them up a couple of times on Google and stuff, and I still don't really get it as a user. I'm not sure if that passkey is tied to my machine. Then I go to another machine and it asks for my passkey, and I don't know what I'm supposed to do. I've got YubiKeys as well. It still hasn't really sunk in. I actually implemented a
passkey implementation in JavaScript but I still don't
quite understand the user flows. >> I was honestly hoping
you would clear it up for me because I haven't learned it. >> I'm sorry. Maybe
someone on the chat. >> Yeah. >> Someone says it is tied
to your machine, which was my instinct, which is why it was strange to me when I did it on my Google account. I went to some other machine and it said, all right, sign in with your passkey, and I'm like, but I'm on a different machine. I can't do that. Why is it prompting for that? It actually made it more confusing. >> Is it tied to your
machine? I thought so. I forget how, but they stored it in my 1Password, and I went to my phone and I signed in. >> 1Password has the passkey implementation. >> That will roam, is my take. >> That's what I thought, why else would they do it that way? >> Is it tied to the machine
or is it tied to the password manager? >> I think it's tied to
whatever it's stored in. In the browser it's tied. I thought passkeys in
Edge were synchronized if you signed in with
Edge, but maybe I'm wrong. >> It could be that. >> Jeremy's saying it depends on
the thing storing the passkey. That's what makes
most sense. I'm just reverse engineering this
from what I was seeing. >> It's private keys. >> It makes sense to me that if I signed in to Edge and use passkeys, they would be synchronized
because they would ultimately be stored in
my profile in the Cloud. Whereas if I just use Windows' one and I'm not signed in with an MSA, it's just stuck on my local machine. There you go. That makes sense. >> I've had some where I sign
up and it's stored on my phone, and then it's like,
do you want to use the passkey from your phone? But then it's like, I don't have a passkey for this
domain on my phone, I'm like, I don't have it. >> Then to Fowler's point about
using a password manager, which I also use, it then adds another element of like, wait, where did I put that passkey? Is it in Edge? Is it in Windows? Is it in my 1Password? Oh god, where did I put it? It doesn't solve everything. It's just yet another thing
I have to think about. Anyway, that's not what
we're here to talk about. >> What we're here to
talk about today is David tweeted about this
thing here I'm sharing, I realized that I used
a different window. I'm now sharing my live Twitter
window which could go wrong. There it is. Let's put us over here. I just saw this fun thread
pop up and I was like, hey, let's do it live. You tweeted about what
are some features you want in .NET 9 and C# 13? I don't know, there are
just some cool responses, I thought we could just
talk through some of them. I'm zooming in some. I will try to keep it readable, but I don't know, because
it's Twitter and they do all these things where they
don't want to embrace the web. >> You could use Damian. >> It seems watch a little bit. >> One of the first things here
was about threading and WASM. I don't know, is that
something we want to? >> I think we added experimental
support, didn't we? >> We did. In the WASM runtime at least, which is the Mono-based runtime, I thought we did add
experimental support. Is WASM threading like ratified? Is that actually over the
line yet in the spec? >> I don't know. But
people seem to want it. What I find funny about it is multithreading makes everything hard. >> It does. >> But JavaScript basically got away from that. In the browser you're very happy and there's no locks and no threading. The first thing people asked for in the browser was this: can you give me threads so I can have starvation in the browser? >> I can shoot myself
in the foot. I get it. For me the web has other things like web workers and stuff, which are parallel-adjacent or whatever. You can do things, but they're not the same. Some people just want threads. They just want to
be able to do stuff using similar threading models. Says it was ready to
try in Chrome 70. I'm assuming we're way past Chrome
70 because that was in 2019. That's probably way out of date. >> Yeah. >> Why is it so hard to find info on some of this stuff? Here we go. Threading proposal. WebAssembly, github.com/webassembly. This still doesn't help me. Maybe someone in the chat knows: is it actually done? Is threading finished? >> I guess I should have known
this was going to come up, but when we talk threading, people are already saying what about green threads and all that stuff. >> Well, so there's
been an update on that. >> You should read Fowler's
other tweet from last week. >> We had a super good
update about green threads. >> Which basically said
we aren't doing them, but the first post was very high level saying, we looked at doing them and we had an existential crisis thinking about: do we want to have a third model? Do we want value tasks? Do we then want to have new sync APIs for every async API so they can work well? >> [inaudible] with
the previous models, the deprecated APM models. >> Exactly, those
are the previous ones. >> We invented the async model. >> Then there would be all
these models that you could very easily be crossing the streams. >> There are also some
interesting technical challenges that don't exist in Java and Go. Because Java and Go don't have programming primitives that let
you do things like pin memory. In .NET you can fix memory in place and you can pass it to native code, which means the GC can't move it. So you can't copy stacks. There are a lot of small things that make the
implementation more difficult. Not impossible, just more difficult. Our implementation just allocated a big enough stack so we
didn't have to grow it. Typically, these systems allocate a super tiny stack in the run time. As you run, they hit the end of the frame and they
allocate more stack space. The two strategies are
normally you allocate a new block of memory and you have a linked list of blocks, or you basically copy: you allocate new memory, you double the size or whatever, and you copy the old stack over. Go actually started with the linked list approach. They're called linked stacks. What ends up happening
is if you get unlucky, let's say you're in a
hot for loop and you end up on the boundary of the linked list, what can end up happening is you
can end up pushing and popping. You end up allocating on
the boundary over and over. That is called the
hot split problem. That's the thing that makes performance unpredictable
because it can happen anywhere in your code base at
any point in time based on how deep you are in the call
stack, and Go [inaudible]. >> It all keeps coming back to: there's no free lunch. >> Nothing's free. The
other super cool issue that happens with green threads
is when you do have a tight loop, normally the OS can preempt and run arbitrary code on other threads because it just knows
how to do that stuff. We would have to implement the
same thing, or we just say when you're in a hot loop on a green
thread, that thing is done. Basically, at that point you lose the benefits
of the thread being green because you're burning an
actual core doing high CPU work. There are some trade
offs to be made. >> Part of that just
to finish on that, is also that .NET is optimized for interop. That's one of the key things: P/Invoke, raw platform interop. .NET made a bunch of choices in its runtime that
optimize that type of stuff, because we don't have managed
implementations of everything. We call out to the OS
to do a lot of stuff including crypto and networking
and a bunch of stuff, whereas on other stacks that
use green thread style stuff, they implement everything
in user space, in a managed area. They limit interop. Because as Fowler pointed out, one of the challenges
is if you're not actually running on real threads, when you do need to do something
at the platform level, now that it's a crap ton of
work you have to do to set up an OS thread to execute
what you wanted to execute. Then you have to deal
with asynchrony. For example, if it's an async
API you can't fake it anymore. Introducing yet another way
to do that would also mean, but what happens when you
hit one of these cases where you have to
call out to the OS? It's just not free. It
doesn't matter what we do. It's a fundamental thing that's
different about how we do stuff. >> Cool. Let me see. Next general one, let
me see, I don't know, performance improvements for Blazor. There's continual stack [inaudible]. Do you want to talk about anything? >> That's mostly my assumption. My assumption is that it's talking about Blazor WASM specifically. >> My guess. >> It's my guess because that's where it shows up the most right now, obviously. >> There's a big thing in 8. The jiterpreter was the big improvement. >> I don't know how big of a
deal it'll be for your scenario, but constant improvement
is being made. >> Let me see, debugging with HTTP client network issues?
Gosh, I don't know. >> I have a theory. Whenever you're in the Cloud, everything is virtual. There are layers of networking
beneath you and packets get lost. When you get a failure, you're trying to figure out
which layer caused the failure. Was it because of DNS, was it because of the call itself? It's really hard to know without
doing real network debugging. How do you get a network
trace in the Cloud? >> How do you get like
a wire trace from a random virtual [inaudible]? >> Do you get more of that with OpenTelemetry and stuff? >> I think the thing Fowler is
alluding to is that the thing I have to keep reminding myself with networking is that
ultimately it's not magic. It feels like magic until
you start looking at it and like you understand what
happens underneath: the Ethernet frame happens and then a packet goes on, how TCP/IP works. It's like everything
is just guessing. It's all just timeouts,
everything is fudged. It's like, well, there's no such thing as a live
connection to another machine. It's still made up.
It's not like there's a live electrical signal and we're measuring resistance across that. We're sending data through pulses in physical or through radio waves, and we're just waiting some time window to see
if we get another one. If we don't, then the protocol decides if we retry or
we just say screw you, I didn't hear from you and we
send a final packet to say, well, just in case you do get this, I don't care anymore. Which is what this message is. Once you realize that,
to Fowler's point, it's like once you're remote
and you're in the Cloud, it's always going to fail somewhere. Now I appreciate that, it does suck if you seem
to be seeing consistently, things that are just stopping. Having worked in data
centers and stuff before, sometimes it's bad
firmware on a switch, sometimes it's a
corrupted routing tables. There's so many things that
can cause networking issues, I'm not sure putting more logging in HttpClient is going to help you. You know what I mean? >> We do have an item to
ship a logging stream, so you can see the bytes, that helps you when you're
in an environment where you can't just run Wireshark or something. But it's not a thing you leave turned on because it'll destroy performance. It's literally logging all the bytes on the network, which is [inaudible]; some things are encrypted, so it's not always super useful, so you have to put it in the right
place before it gets encrypted. >> I'm sure it's probably
changed again by now. But this reminded me when we were talking about how
crazy networking is. I remember at some point
someone was saying that cable Internet works by basically packing
Internet frames into what was expected to be video. >> Video. >> A series of images, so instead of a JPEG, here's a frame of data. >> Yes. >> Why not? It works. >> It's great, isn't it? >> This one was interesting
here from Tolga, about easier usage for binding
options and secrets in JSON. I agree on this. I don't have an idea
of how to fix it, but like some magical syntax, right now, you have to do a little bit reaching into config and say, give me this call by value. >> Yeah. You want a type
provider type thing, like just magic members
from JSON type of stuff. >> It could be easier.
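To make the discussion concrete, here is a minimal sketch of the explicit binding gesture as it exists today. The `Smtp` section name and `SmtpOptions` type are made-up examples, and this assumes the Microsoft.Extensions.Configuration and Microsoft.Extensions.Configuration.Binder packages:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Extensions.Configuration;

// Stand-in for appsettings.json content.
var config = new ConfigurationBuilder()
    .AddInMemoryCollection(new Dictionary<string, string?>
    {
        ["Smtp:Host"] = "mail.example.com",
        ["Smtp:Port"] = "587",
    })
    .Build();

// The explicit "reach into config" step: you name the section yourself;
// nothing maps SmtpOptions to "Smtp" automatically.
var smtp = config.GetSection("Smtp").Get<SmtpOptions>()!;
Console.WriteLine($"{smtp.Host}:{smtp.Port}"); // prints "mail.example.com:587"

public class SmtpOptions
{
    public string Host { get; set; } = "";
    public int Port { get; set; }
}
```

In a host you'd typically wire the same section up with `builder.Services.Configure<SmtpOptions>(builder.Configuration.GetSection("Smtp"))`; either way, the section name is something you state explicitly, which is the gap being discussed here.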
Right now it's a little bit explicit where you
have to first declare the type, which you may already have or not. That's the thing that
is required anyway. The question is, what
is the gesture to make that type be populated
from some section. We don't have any
defaults on purpose, like we don't assume the
name of the type exists in config. If you look at Spring, Spring is annotation-heavy, so you would have an attribute on your class and it would magically just, like, get your values from config. >> Get rehydrated automatically. >> Yeah. >> We're never going to do that. I'm pretty hardcore
about that level of magic not being there by default. >> Okay. >> Yeah. >> We're going to skip that. >> That's the thing too, you could have, from like a NuGet
package or whatever, that adds magic attributes or
does something if you want to. >> It's funny how much of this
ends up looking like minimal API. You could imagine an API where you call a method
and you give it a string, which is your route, which is
like the section in config. Then you just have
some parameters and it just binds them to the
parameters for you. >> From configuration. >> Yeah, that would be super
trivial for us to build. Then you could basically
do inline binding to variables or complex types with inferred rules like this type
of stuff we have in minimal APIs. The question is where are you using it and what are you doing it for? >> Yeah, that does sound
interesting actually. From config attributes
and then it'll reach into the config system whatever
configuration of certain environment. Thoughts on dotnet watch in general, I feel it works a lot of the time. I feel like when I go into it expecting it to maybe
work, then it's awesome. But then I'll work with other
people that are newer to .NET and expect dotnet watch to always catch everything. You know what I mean?
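Part of why expectations differ: with no editor in the picture, dotnet watch can only react to file-change notifications from the file system. A minimal sketch of that kind of mechanism (the temp directory and file name are made up for illustration):

```csharp
using System;
using System.IO;
using System.Threading;

// dotnet watch-style detection: subscribe to OS file-change notifications.
// If the OS never delivers the notification, nothing is ever rebuilt.
string dir = Directory.CreateTempSubdirectory().FullName;

using var watcher = new FileSystemWatcher(dir) { EnableRaisingEvents = true };
using var signal = new ManualResetEventSlim();
string? changed = null;
watcher.Created += (_, e) => { changed = e.FullPath; signal.Set(); };
watcher.Changed += (_, e) => { changed = e.FullPath; signal.Set(); };

// Simulate an editor saving a file into the watched directory.
File.WriteAllText(Path.Combine(dir, "Program.cs"), "// edited");

// The change is only "seen" if the notification actually arrives.
bool observed = signal.Wait(TimeSpan.FromSeconds(5));
Console.WriteLine(observed ? "change detected" : "no notification");
```

On some file systems those notifications are unreliable, which is exactly when dotnet watch appears to "miss" an edit.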
They're surprised. >> I have a very soft spot in
my heart for dotnet watch. We've been there
since its inception. It was built by the ASP.NET team before when we were building DNX, and we did a lot of
updates to it in .NET 6. The primary [inaudible] who worked on it unfortunately left the company
and then it moved teams. So dotnet watch is now actually
owned by the tooling team. I'm not aware of any
fundamental issues. It actually shares a lot
of the same infrastructure as Visual Studio's edit and continue support, but more so the hot reload stuff that was added in .NET 6 for when
you're not debugging. Because there's two
types of hot reload, there's when the debugger
is running hot reload and then there's when the debugger
is not running, hot reload. They're quite different mechanically
and dotnet watch obviously uses the one without the
debugger because there's no debugger in dotnet watch,
it's just running your app. Then it has like an outer
loop effectively where it's monitoring files on disk for
changes because there's no editor. dotnet watch, it's just
command line tools, so it doesn't know that you're
typing in an editor and seeing a buffer change or anything
like VS does or VS Code. It's just looking for file changes. In my experience, historically, there's been a few places
where flakiness gets in. One is that file change notifications on some systems for whatever
reason can be flaky. On Windows it's pretty good. We've seen things on
certain Linux distros, and even on Mac where whether
it's what we're doing, what.NET is doing or just how
the OS file system works there, it just doesn't always
detect everything. Obviously, if you don't
get the notification, then you'll get a bad experience because we won't detect
that there's a change. The other area that can have errors is just the normal hot reload thing, like what is compatible as a hot reload-level change versus a fallback to shutting down the app, recompiling, and restarting the app, like a restart. dotnet watch used to only do the latter. >> Yeah. >> Obviously we invented
hot reload and did it in six and we added it to dotnet watch. Then there are things like what type of app are you watching? Is it a Blazor app or a Razor app, which has another
level of complexity? Is it a web app that relies
on the CSS bundling feature of the Razor compiler, which
has another level of complexity which is a
whole other mechanism? Now the watch mechanism has to understand that if
this CSS file changes, that's not just a simple refresh
the file in the browser. I need to run a specific
MSBuild target to go and recompile that bundle and then
tell the browser to refresh. There's a lot of moving parts, but my understanding, I'm not
aware of any outstanding issue. I'd just like please
log issues if you find them and give us a
little bit more detail. I know it can be hard because
some things just feel unreliable. Sometimes you just use the thing, and your experience with it for the project you're working on is that it just feels like it doesn't work more than it does work. When you try and get them to narrow it down, it's just difficult. It just doesn't work, anything I try is not working. Most of the time I find, I have that experience a
lot of the time as well. It comes down to a specific thing in your project that is causing the
whole thing to become flaky. That specific thing often is
just some complexity that's resulted in some bug in the
way the hot reload is working. >> Yeah. I think this is it, it's multiple things working
together that as a user, I'm not thinking through, oh, there's dotnet watch, there's hot reload,
there's the project. >> You shouldn't have to. You should just, I want to run the thing and just have
it work like, I get it. It just as we're all
software engineers, we know it's hard to reproduce
if we don't have detail. >> One thing that I always
like for that sort of thing is some indication like
what went wrong. For instance, this is a rude edit because of what you edited. >> We've done a lot of work in
hot reload to improve that. I know VS now will show by default the reason a hot reload cannot be applied. Your change cannot be applied due to a rude edit. It will tell you the change. It'll say hey, cannot change a protected member of blah,
blah, blah, whatever the thing. dotnet watch actually
would echo that stuff as well because all that
data is there and it's just about making
sure we expose it in a user helpful way rather than just spam you
with a bunch of stuff. The more challenging ones are when it's just like, this is a supported change. I know it's a supported change, but you're telling me
that the project changed in a way that can't be applied, or you're saying there's a
build error when there's not a build error or you're not
detecting the change at all, it's those state management issues. That can be really frustrating for the user and for us
to try and diagnose, but all I can say is please keep logging the issues and try
and help us narrow them down. >> Cool. Let me see. I don't know. This one says, ability to override the new operator as well as Activator and Dispose. >> That isn't a new
request by the way. That actually I've heard
that several times. Let me control like what happens when my object gets
created and destroyed. >> Could you do that
using an interceptor? >> No. You can't today
because they only support hooking method calls. You can't hook new. >> Okay. >> Yeah, no way. That's a
cool thing to talk about. I don't see how we do that in C#. It basically makes C# super unsafe in the super obvious
main line code path. Right now you have to write unsafe, that's always true. You have to write unsafe to do unsafe things is the mantra. We don't want normal unsafe
code looking like safe code. I don't see a world
where we allow new and going out of scope to just call your stuff. I think maybe there'll
be other things to give more control over
pooling in the future. That foot gun is like C++. >> Yeah. Cool. >> I guess here just a quick, there's a question about, what do we do on here? What chat is this? So this is the .NET Community Standup. And the idea here is this is behind the scenes where we talk with the product team, especially at this point in the product cycle. .NET 8 is pretty much wrapped up and we're
getting ready to release it. We're doing the final
thing, so like document and talk about it when it
goes live in November, so this is an early chance
to just chat about .NET 9,
a good question. The planning cycle. Planning is
already happening [inaudible] >> There's a plan;
the planning cycle, which I'm not even joking, and that'll start soon. The team is still
primarily focused on execution and delivery right now
of eight because we're not done. We've still got
another release to put out and then we've got a
final release to put out. But yeah, there's a
planning exercise that gets kicked off around
this time every year. And then there's an effort to
try and improve it based on what we did last
year and what did we learn and that type of stuff. It gets a bit better each time, which is normal. I think every team has that thing. But yeah, so now is a good time. What I find interesting
about these types of Twitter threads is
just the low level, esoteric stuff that people ask for. Some part of me
expects when you say, what would you like to
see in the next version, you would get very high level
type of feature requests, like something in Razor Pages, something in Blazor. Can you make it do this and this, please? Or make it easier to do XYZ in tooling. But there are a lot of
people who are just, I want to be able to
hook new or I want these really low level
cross cutting things, which is mad scientist level stuff. They always surprise me. >> I've done this enough
times internally as well to understand the
spectrum of requests. >> Of the type of asks. Yeah. >> It spans from super specific, give me this API, to let me write WinForms and have it render in the browser. I think it really depends
on what you need. There's a lot of people
thinking about what they need to make it easier
for them to do something, and then there's, let me do this insane thing. Then they'll be like, well, even thinking through the steps to make that possible is kind of tricky, not trivial. I love when I ask
people for more detail, then it forces you to think
through the ask a little bit more to see like do you really
want this? Do you need this? That often happens on issues. People will file issues
saying, can you add X? We'll come back a year later
and say, is it done yet? It's like there's no detail,
there's no scenarios. You just ask for this
super obscure thing. You have to do the GPT thing where you're refining the prompt to get more information to figure out what is really being asked for, unless it's super clear and obvious. >> I mean, I've heard
Hanselman say that before. I mean this is old school now, but like somebody walks into
the cell phone store and says, I want a bigger antenna on my phone. It's like, why do you need a bigger? I dropped my connection going
through the tunnels or whatever. It's okay, what problem
are we solving here? >> We have a lot of people using .NET, literally
millions of developers, and so they operate
at different levels. Like a lot of people never get down below the understanding
of what APIs to call. They're extremely proficient when
it comes to writing apps in .NET. They understand most of the app level language
primitives and they know all the API's they can
call to build apps, and they never think about
underneath that line. They don't write libraries
for other people. They don't do anything advanced. Then there are other people
who think all the way down to the assembly being
generated by the Jit. That's like a whole other world compared to just writing
an app in C#. They're both perfectly valid
and we invest in both levels, and that's part of one of the
fun things about our product. The challenge is trying to figure out how we allocate time and investment across the full spectrum of what .NET is, which is pretty deep. It has a lot of layers. Yeah, I want people to understand that just because a lot of
people ask for something, it doesn't mean it will happen. I think there's this sense
of everyone's asking for, or this person is asking for, or it's been open since .NET 3 and you didn't do it yet. There's a combination
of people asking, us having to do
research to understand a scenario and our desires, like do we want to have this be
the direction that we're going? There's a thing in the
chat talking about, could we add a PDF reader in the framework? It's not that we couldn't do it. We could add anything you
asked for into the framework. But should we do this? Why should we do this versus having people in the ecosystem have one? One of the eternal struggles we have is our customers, rightly so, expect everything in the box, literally everything. They'll take any product that they have used in a NuGet package and they'll ask us to put it in the box. And we'll go, no, we won't do that, keep using the package. But why don't you just put
this in the box and it's like, why do you need it in the box? I understand it's
supported, it's there. If we keep doing that, we would literally
have this never ending growing framework of
everything out of the box. There's this line we
have that maybe isn't fully called out or drawn anywhere in public. But there is this line
I think in the back of our heads around, that's too far. That shouldn't be in the framework. It hasn't reached the point of ubiquity that it should
be in the framework. Because the thing needs to be pervasive and it needs
to have staying power. It can't be a thing
that is fleeting. I think there's a lot of thought that goes into when
or why we do stuff. So you will get frustrated
because we will not ship discriminated unions,
which everybody wants. I want them. I want it. We want
it in the framework, it's not as though we
don't want to do them. But any software team, they're like, things to solve and issues to
get over to make progress. >> I remember way back
before working at Microsoft, going through with Reflector and finding Passport. I found namespaces for Passport authentication in System.Web and stuff like that. I was like, you end up with this big unmaintainable thing
if you ship everything. Because use cases change over time. You got to ship a
tight focus product and then people can
add on other stuff. Then also I feel realistically, if Microsoft shipped PDF support, there would be a PM who would have that as one of the things they were tracking, then there would be a dev that did a little bit of work on it, and then it might release in .NET 9. But how much is it going to
be maintained and updated for new PDF support and
vulnerabilities two years later, when the PM's moved to another project? Are they going to love PDF the same way as a company, or a passionate community, that's shipping a PDF package? >> We have examples of this already. Like the ZIP support was
like this in the framework. >> There are a lot of them. >> It doesn't support every bell and whistle that you can possibly do. There are many file formats and protocols that we don't support in the box that are used in places, and they still don't meet the level, to Fowler's point, that
we feel like it deserves to be the next thing that
we prioritize and spend resources on and put in
the box for the next release. >> What Dennis is saying here: leave it for the community. Like why not, you know? >> Yeah. Community and ecosystem, and it is zero-sum. At the end of the day, we are finite. People look at Microsoft and see a multi trillion dollar company
and it's not the .NET team, sorry. A lot of Microsoft runs on .NET so
obviously it's well invested in. But the team might not be anywhere near as big as you
might expect it to be. Especially when you
consider how much of the team has to be spent
on maintenance and turning the crank to
ensure that we can produce reliable software and that we can react to things like security
incidents and stuff. There's a lot of costs
associated with doing that part. It's not just people
slapping on keyboards, putting features out every day. Which I'm sure it's not for
most professional developers, people understand that I
think. It is zero-sum. When we choose to do one feature, it means we're choosing not to do another one or three other ones
if they were the same size. We don't always get it right or
you don't always agree with us. That's part and parcel
of having a product, that's just the
reality of the world. We have millions of
customers and they don't all agree on what's more important. Then we have strategic goals. As a product, you put
aside the fact that we're a framework and a language and a
tool, but we're also a product. We have organizational
product strategy and focus which is separate from
the open source project. >> That influences where Microsoft spends its dollars when
building .NET as well. I think that's well understood. I don't think that's contentious. I think that's why Microsoft does spend money building .NET. Sometimes those things
get higher priority than something that you
might care a lot about, but we try and balance that. We do our best across the team
to really try and balance that. Talking about planning, we
talk about top-down and bottom-up planning and we try and balance that
every release as well. To your point, John,
the team working on a particular feature
or product area are typically the ones that
know their customers best. They're the ones that
hear the screams, that react to the GitHub issues. The bottom up process is like, well, that team should make sure that
they've got capacity to address the top issues and top feature requests that their
customers are asking for, but they also have to leave
capacity for the top-down asks coming from the product owners. That's a balance that shifts from release to release,
from team to team. That's just the reality
of doing business. >> Investment shifts as well. I feel like, as a result
of those two forces, we may choose to invest in
SignalR more next release or MVC more next release or some other area. You'll see shifts in
investments from release to release based on the combination
of bottom-up and top-down work. >> As others have
pointed out in the chat, anyone who works in
software development knows you don't get linear
scaling with adding more software developers to a
project like not infinitely anyway. Like certainly for some things
that are highly parallelizable, you can do that, but then
there's an added cost. There's an overhead associated
with every new set of hands on keyboard that has to be paid, and then there are tipping points
where that cost can suddenly, that debt, you can't
avoid it anymore. Now you have to spin
up another team to manage the fact that you've
added 10 new people over here. We are also a global company, so we have to care about laws and
requirements and whatnot that a lot of software development
groups don't have to because you're
building software just for your team or
whatever it might be. But we're building an international
product. We have partners. We have partners that
take the.NET source code and they build it themselves
and ship it in their product, whether it's internal at
Microsoft like PowerShell or a partner like RedHat that does
it in their Linux distribution. That's a lot of investment
required continuously. >> To provide a life cycle for all of them. >> Yeah. I think people would
agree that that's worthwhile. We get a lot of benefit and they
get a lot of benefit out of that. But I think sometimes it's worth just remembering
that there's a lot of stuff we have to do
that isn't just adding an API to your thing
that you wanted to. >> I think too it's important to be clear as much as possible about
what our product focus is and isn't, because then people know what they can rely on us to stick with long term so they can invest in it. But then also, somebody in the chat said, please let my company
continue to exist. Say you're a PDF producer company, Microsoft could jump into a bunch of different things and lay waste
to it by shipping a PDF, and now they're an image
editor and now they're doing UPC scanning, whatever. It's good to signal, hey, we're focused on
delivering a runtime that's for developers
to ship these apps. Then the ecosystem and
other software vendors can get a feeling of like, here's an empty space where we feel safe developing a product in. >> Yeah. I agree with you, and I can see that, but I also think it's not that simple. I wish it was. I think we're not simple
like Node is simple. Node is a really simple product. Go and look at the source code, go and look at the docs
on the Node website. There's a very finite amount
of things you can do with it. Node's power is the
ecosystem behind it and also its problems in some way. Similar with Java, and Go, and lots of other stacks, we are actually not like
a lot of other stacks. There is very much
not a stack like us. We are, in some ways, a product of when we were born and of that prior time, and then we've adapted, and grown, and changed as the
industry has changed. But it's still left us being a product that is quite
different from other products. We tried in the
beginning of .NET Core, Fowler will remember very well. We tried like, what would we look
like if we modeled ourselves more on
something like Node, and we got a very strong rejection. >> I remember that. >> We're not doing that. We think we found a fairly good balance. But to your point
about making it clear, we're building, this is our goal, I think sometimes it's very nuanced
and it's hard to articulate in a single message that resonates with anyone who's
asking that question. Often it requires a
discussion that's tailored to the audience asking the question rather than just a simple
manifesto we can stick on dot.net. It is difficult. The closest we've ever
gotten I think is Fowler once said that we're a
batteries-included framework. That speaks to what we were talking about before, that everything should just be in the box. There's challenges to that.
There's two sides of that coin. If we say we're batteries-included, then it's natural for folks to say, great, put this battery in place
because I need that battery. It is nuanced and it's evolving, and one year when we
think we're going to be in a certain thing
for the long haul, that can change next year
because the industry changes. If we don't change, we
know we might upset 10,000 people who looked at what we
did the previous year and said, I thought it was clear you were
in this for the long haul, even if we didn't
explicitly say that. Because we have a
support life cycle. It's three years in
LTS, that's what it is. But we don't have a
crystal ball like anyone else in the
industry doesn't have one, and we don't know where the next
big shift is going to come from. We don't always get
the timing perfectly. I think there are some things we've done well on even in retrospect, like our investment in Kestrel, in ASP.NET Core specifically, and investment and focus on
performance and protocol support like HTTP/2 and HTTP/3 really early. Earlier than most other mainstream
open source server stacks. I'm proud of that. We made
that choice. That was a bet. It required a lot of internal
partnership as well. It wasn't without its
frustrations or complications. We added HTTP/2, but nothing in Azure supported it. We went out with our product saying look, we've got HTTP/2 [inaudible], and if you wanted it on your cloud we'd say, you have to stand up a VM, because none of the PaaSes supported it. We have to deal with that. It's like okay, and then
Fowler in particular has done a lot of work to help partners adapt Kestrel and get that support in some of our
Azure PaaSes and stuff. But that cycle continues
every release. It's never as straightforward as we just have to worry about
what's in the open source repo, or we just have to worry about what's in File New in Visual Studio, or we just have to worry about
what our Azure partners or our third-party
partners are asking. It's always some combination
of all those things. >> Well, we're getting
short-ish on time. I think two things we
should wrap up with. There's one kind of meta question I wanted to jump to, and then when we finish too, I want to make sure we talk about how can people get their
suggestions and stuff in. I'll get to that next. But this just general one, there's a few that
are saying this, like more stability and performance, less change. Then there's some other
things like delay, do minor, don't do major features. I think there's a little bit
of fatigue on some people. >> Did Jon freeze? >> Did we lose Jon? >> We could talk about this one, though. >> I think you get the point. >> We had an internal
discussion recently about some of the comments on that blog post that Richard wrote and about some of the
comments in my tweet. We were talking about, do you ever feel a sense of mastery if the thing you're
trying to master keeps moving? For some people who
really want mastery, I can feel the pain because you're trying to know the entire language,
the entire framework. You feel like you've mastered these things, and
then Span comes out, and this comes out, and that
comes out and you're like, why do I have to keep
learning new things? It sucks. Fortunately, I would say that is not just a .NET thing, that is industry-wide, and in some ecosystems it's worse than in others. But I think there is merit to some of the
complaints about the speed. It doesn't mean that I think we should stop or slow down, but I feel the empathy of how do you master something
that keeps changing. I think there will be discussions
as we get more mature about what kind of language features we add, do we do it every release or every two years. There's been no actual
concrete changes. It was just an
acknowledgment of, yes, it makes sense because if you're
trying to master something, understand it, not have to learn new syntax when you're reviewing code. The hardest thing
for me personally to understand I guess when we started this journey of having a release every single year was I'm someone
who embraces the new naturally. Whenever a new feature comes in, I'll find ways to use it. Obviously, I'll use this new thing. But there's this other side of the
coin where it becomes a burden. Imagine you're on a
team and you have figured out C# and you
have all these rules. Now someone new joins or someone who wants to
use the new stuff joins and they add all this syntax
that looks crazy to you. Because you are used to C-style curly braces, if statements, and this very clear structured code. I think it's funny the
adjectives people use when they describe the new syntax because
it really shows you the lineage. I think that is right because we came from a
C-based language, C#. C is a fixed, simple language. It's supposed to look like this. What is this arrow syntax? Why are you adding F# style
glyphs to my language? There's a real sense of like loss of ownership of like this
used to be my language, and now you've made it
into something that I don't understand and it's C++. I think that the
complaints are valid. But personally, I think the
changes have been quite nice. >> I think the short answer
is humans are complex and we absorb lots of different things into our
sense of self and identity. I don't want to get too profound. But Alan and I have long said in the team that it's very
clear that some people associate their code
with themselves and they feel it when someone says
we're going to delete that code. One of the things that we used to
say to new developers was like, you're not your code; you need to not get attached to the code you wrote. Like this code is going to live as long as it needs to live for
and then it's going to go away. And I think I made that realization a long time ago in my
career that it's just code, it's literally called software because it's designed to be changed, that's the whole intent of it. But people get attached
to their work. People absorb things
into their sense of self and part of that is
this idea of mastery, or this idea of understanding
your domain and the world that you live in and so they get
strong feelings about things. But also developers are kind of a quirky bunch and
I'm part of that cohort. I think we all often joke
about how quirky we can be and like how two people in
the exact same industry who have the same experience and
can argue so vehemently about brace style or white spacing or regions or all these things. It's just like, it's so stupid. But it's part of what
makes our industry so fun and it's just
part of our make up. I think again, as a product, we had to strike a balance. We had to look at what
the industry was doing, we had to look at what our own
history was with our product, and I think the
balance is well made. We're never going to be
able to suit everyone. I know some people
don't agree with us, I know some people are just like no. If you just made LTS five years, that would solve all the problems and no one would hate you anymore. Well, everyone would be happy. It's just not true; there are tradeoffs in doing that. That would mean some other person
people would get less than they currently get and they would be upset with that because they have no interest in a five
year LTS at all. That's just the reality.
We have to pick a balance. We listen to the feedback
and we consider it as Fowler said and we do make changes
when we feel it's necessary. But yeah, this pace, I think, overall, has been
good for the product. We are empathetic with folks
who feel it's overwhelming. I used to know everything
about CSS in 2010. >> Oh my God. >> [inaudible] I know about SS, but I know about CSS. >> I used to feel like such a
CSS star and now I look at it, I'm like, what is all this about? >> Yeah, so I get it. But I still think that that is
the nature of our industry. It's just the nature
of the industry. C# is modern, .NET is a modern stack. It's not a legacy stack and
legacy stacks don't change, modern stacks keep evolving. >> I feel like an important
thing too here is that the backwards compatibility
thing is real and it's good. Like for instance, if you feel
overwhelmed, I don't know, I'm always telling people
like upgrade to the new .NET and you don't have to use all the new features. Basically, there's
all these analogies. It's like your favorite store just added a new wing, or you just downloaded
a new DLC for the game. Like you unlocked
some new capabilities but you don't have to use
them. You know what I mean? Like for the most part, I was looking through breaking
changes for .NET 8 the other day and they're
pretty minimal and focused for the most part. I would feel pretty comfortable
upgrading a .NET 7 app to .NET 8 pretty quickly. I personally feel
overwhelmed sometimes like, what is all this new syntax? What are all these new API's? You don't have to
use them in order to just take advantage of
faster performance. >> I think that it gets
tricky when you're on a team and one person on the team
wants to use the new features. I think what ends up happening to
people who really get annoyed at the changes are, someone sends a pull request and now it's my problem, so I can't ignore it. You would think if you don't want to use the new
syntax, don't use it. The software is a team sport, so now your team has to
all agree to use features. We had this big debate when we first did ASP.NET Core and it was like var, are we going to use var?
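As an aside, teams usually settle this kind of debate with an .editorconfig rather than arguing per pull request. A sketch using C#'s built-in code-style options (the option names are real; the specific values and severities here are just illustrative, roughly matching a prefer-explicit-types policy):

```ini
# Prefer explicit types except when the type is apparent from the right side.
[*.cs]
csharp_style_var_for_built_in_types = false:suggestion
csharp_style_var_when_type_is_apparent = true:suggestion
csharp_style_var_elsewhere = false:suggestion
```

A team that prefers var everywhere would simply flip these values to true.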
You know what's funny? The ASP.NET team and the .NET team, like the core runtime team, disagree on var. If you look at the .NET runtime, var will only exist when the right side is very clear, and in the ASP.NET code base you'll see var everywhere. It's just funny, it's the same overall team, just two different groups that chose different things, and those things happen all the time, even in the same company. >> I guess one last thing
just to wrap up here is for people as they're
starting to think about like this Twitter thread is cool. But for people that
formally want to make requests or get involved in
discussions on the next version, is GitHub issues the place, if you're not sure where to put it, that kind of thing? >> Definitely GitHub is
where we will triage. I don't want to say it too much, but upvotes, or hearts, on issues do matter, at least to a point where we will pay more attention to things that have more votes. One of the reasons we decided
to work on auth this release was because of the number of likes and
Upvotes on that specific issue. If you file an issue and you have no Upvotes and your issue
doesn't seem to be one that is, I guess, part of the overall vision. We have that thing, Themes of .NET, where we have high level themes that plumb down to scenarios. Sometimes work gets pulled in because it happens to coincide with one of our existing scenarios. But if you're going to file a PDF reader issue, I can tell you for sure we'll just ignore it or put it in the backlog or a future milestone, because that's not happening. It's not a thing that we're
going to do in the short term, but definitely look for upvotes. If you don't file the issue, it won't happen. I'll just say one thing I see pretty often is people will
suggest things at me on Twitter that are
reasonable and could be done and it's like we're open
source and if you file the bug, sometimes someone else
has the same concern. We'll find your issue
and we'll send a fix. Sometimes the fixes are just small paper cuts and we'll fix them. If they're big features, that's where it gets
more complex because it requires back and
forth between like, do we want this,
should it be in here, should it be in your own package. But yeah, definitely file issues, and upvote other issues. I can mention before we go, AOT keeps coming up. >> Yeah, a lot. >> We want to do more AOT in nine. The goal is to do more of the framework to make
sure that things work. It's going to be slow. There are so many features that
don't just work. >> Not the performance, you mean the process of making things
native AOT [inaudible]? >> Exactly. Yes, that's right. Trying to make the entire framework all work at once was
too much work in one go. Just making minimal APIs work, like the dependency graph of things that we had to make function to make minimal APIs work, taught us a lot about how to do the other parts of the stack. I think in nine, we're hoping to tackle MVC and SignalR and how those things work. We still have ideas of how they can work for EF, for example. I'm hoping we will get to
a place where at least most of the heavily used parts
of .NET are AOT compatible. Once you're compatible it's
easy to stay compatible. The first hop is like getting there. You make all of the big changes
and all the big design changes. Then once you're there, new features just end up
having to be compatible because you're now thinking of them when you're building
it to begin with. That is the big issue today with trying to retrofit native AOT on existing .NET. Some patterns make it really hard to do certain things. We're learning from our experience, and hopefully the broader
ecosystem also does that as well. I'm hoping that will be a big
source of future work for us. >> We should probably wrap up
before my browser crashes again. I don't know what's up with it. Awesome, thanks a bunch and anytime I
can rope you folks in, I will. Let me know when a good
time is to do another. >> Sure. >> I will play the thing before my computer catches on
fire. Thanks everybody. [MUSIC]