ASP.NET Community Standup - ASP.NET Core Architecture - Part 4

Captions
[Music] [Applause] And hello, welcome to the ASP.NET Community Standup at a crazy different time. We shuffled things around. This morning I was actually on a call, Ask the Experts for ASP.NET Core APIs, and somehow by some mistake I was an expert on that panel. You were there. I was there. Strange, John. I'm kidding, you're an expert. This is part four of the ASP.NET Core architecture series with Damian. Part four of the ongoing drama. How many slides are in the deck? Can we predict this? It keeps growing. Yeah, it gets longer and longer. Oh my gosh. Well, awesome. Okay, I want to jump right into this. I also love community, and it's important to go through the "John loves community" community links, so let's do it. As always, I share these out; they'll go out in the chat. Here we go. And I just have to call out, we've got friends from all over: we've got Egypt, we've got Korea, we've got Argentina, and we've got Cincinnati. All over the place. Oh, Canada up there too. Hello, friends. Okay, let's do it.

So first of all, this was an interesting one: Hassan talking about getting up and running with OData and ASP.NET Core 6, how to set it up and get it going. He walks through the setup. His example uses .NET 6 and Visual Studio 2022: he creates a new application and then installs OData, which is just a NuGet package, and then he shows the wire-up in ConfigureServices, adding controllers and the OData options. So, pretty straightforward, and definitely handy. I talk to people who have existing applications they've built out with OData, or for whatever reason OData is a great fit for them. It's cool that OData is available as a NuGet package under the .NET Foundation and everything, so that is neat. And then one other thing he kind of sneaks in here. Of course he shows off all the filtering and the $orderby and all that sort of stuff, but he sneaks in this interesting thing, and I'm going to be paying attention to it: the future of OData. He and the OData team are looking into OData vNext, looking at making things more modular, technology agnostic, et cetera, and he gives a GitHub issue that you can follow along on. Interesting stuff.

This is neat from Niels, talking about building a Discord bot, and when he sent me the link he was also like, hey, David Fowler commented that he liked it when I tweeted about it. So this is pretty neat. This is using the worker template. He first goes through the Discord side, doing the setup, creating an application, configuring the scopes and all that sort of stuff; the scope is associated with the bot permissions. Then he uses the worker template, creates that, and he's hosting it using Azure Container Instances. Oh, I skipped over one neat thing: he's using the DSharpPlus library, which makes it a lot easier to interact with Discord. The worker just responds to messages, responds to things, so you see there an "on message created" handler, and he walks through those sorts of things, the ping-pong and responding to messages, and then he shows Dockerizing it and throwing it in Azure Container Instances. Pretty neat to be able to do that, and pretty neat how you can host these sorts of worker services with very light resource usage. I built out a small one like this in the past. Not Discord, actually; this was Slack. Our team was using Slack for communicating, and it handled integration with that. It's pretty neat what you can do with the bots. What's that, Slack or Teams? So I did
Slack, because this was a team that was migrating from Slack. And actually just a few weeks ago, or maybe a month ago, we had John Miller show off building Teams bots using Blazor. Yep. So that's the next bot I would build; I'd want to try that out, it looks pretty cool. Yeah, if folks aren't familiar with the worker template, it's just a fancy console app, right? Don't be scared of it. It's a really cool console app that uses the same underlying hosting library that ASP.NET Core uses, from Microsoft.Extensions, so you get DI and the config support and the logging, and then on top of that we add support for easily hosting it as a Windows service, or obviously as a console app, which I said already, or as a daemon in Linux using something like systemd. We have first-class support for that, and we have docs for it. If you run it as a Windows service, it'll automatically write to the event log like it should and automatically coordinate with the Windows Service Manager; or you can just run it from the command line, if that's what you want to stick in a container. So it's a very versatile template for doing these long-running console apps, basically. Check it out if you haven't looked at it. It's nice how you describe it too, because it's got the simplicity of a console app, but all those things you mentioned, logging and DI and all that: if you start with a plain console app and wire that up yourself, it's a bit of work. Every time I do it, it's like, oh, I need this package, I need to set this up, and my program bootstrapping code and so on. So it is pretty nice to have this all set up. Yeah, and just to separate it slightly more, I guess: if you're building an interactive console app that you expect someone to type commands into, then there are other libraries you should use for that, whether it's System.CommandLine or, what's the really cool one, Fowler, that they're prototyping right now? "This is not command line, this is my command line." And there's the DragonFruit one. DragonFruit, yeah. Those are really designed for building interactive command line utilities, whereas this is more focused on long-running services. That's why it's a worker service template, and that's the API it comes with. So check those out if you haven't looked at either of those two things before. Yeah, and as Niels calls out here (he's the one that wrote the post, but he commented), it's nice if you want to pull config from different sources. Yep, very cool.

Which, side note, I've got a post about: I created a function app that was connecting to SQL Server, so I figured I'd use Entity Framework, and because of that I ended up doing a .NET 5 isolated-model function, and all of a sudden I was like, oh, this is why it's cool. I can use all my configuration; I get all the power of the ASP.NET Core stuff. That hadn't totally clicked for me before, but being able to pull in configuration that way is a lot more than I would normally get just from the function app support. Anyways.

All right, this is cool from Andrea, talking about permission-based security for ASP.NET Web APIs. This is on the Auth0 blog, and he's showing how to configure those APIs. He has a few call-outs here that I thought were interesting. I have mixed up permissions and privileges in the past, and his definition here says the scoping is different. I'm not sure if this is a security standard thing that I've just never heard before, or if it's more Auth0-focused. The way I'm reading this is that it's the computer security way: when you say permission, that is within the scope of a resource, whereas a privilege is within the
scope of the application. It seems like a small distinction, but it is interesting to know. It's funny, because I'm going to further nuance what you just said, because I don't think it gels with what you've got on the screen right now. Okay. The way I'm reading it, a permission is specifically defined as an action that can be performed on a resource. It doesn't involve a person; it doesn't involve granting that permission to anyone. It is just the permission, like "can read document" or "can upload file". It names a resource of some type, like the file or the document or the app or whatever it might be. A privilege is a permission that has been granted or assigned to a user or an application, because you can have system identities, right, or application-level identities, in order for them to be able to use resources that are protected by certain permissions. So I think that's the nuance there. You know, it's funny, I keyed in on resource versus application, but that's not the distinction; the thing you pointed out is that it's the grantee that makes something a privilege, and the permission doesn't have one. Yeah, the privilege is the granting of it to an identity, whether that's a person or an application or a machine or a subnet. It could be anything you can address. You give that thing the permission you defined, as in, yes, now you can read this resource or edit it, whatever it is. Okay, I will speed up, because I want to make sure I give David plenty of time. One other call-out he made in here that was interesting: he also talked about permissions versus scopes. This is another one of those clarifications, and the thing he calls out there is first-party versus third-party authentication scenarios. So, pretty interesting.

All right, Sayed. This is a blog about Web Forms support with Web Live Preview in Visual Studio 2022, so this is the new Web Forms designer. Yep, and in this release it's specific to Web Forms. He was on the show previously, and we talked about potential other uses for it, but this is an area where it pulls in, for instance, a lot more accessibility support and things we didn't have before. And the Web Live Preview technology isn't just used for this new Web Forms design view. In its preview form it's an extension you can install in VS today, and it supports Razor files in ASP.NET MVC apps, not Core. And then in Visual Studio 2022 we're integrating the Web Live Preview hot CSS reload capabilities into ASP.NET Core applications. That's a step beyond what you get with something like dotnet watch, or just hitting F5 in the browser, in that you can just edit the CSS file, not even save it in VS, and those changes will go over and be live-diffed and applied to your application without you having to refresh the page or lose any state. It literally comes from the CSS file as you edit it. They do that for Razor files as well, for the old framework, in the preview, and our hope is that with the hot reload support we're introducing in .NET 6, and then beyond .NET 6, we'll get something similar for Razor and Blazor apps. So you won't even have to save the file or do anything; you'll just make your changes in the .razor or .cshtml file, and after an appropriate idle and validity check, those changes would just show up in your application. There's some really interesting tech in the Web Live Preview stuff, but as you said, in this context it's being applied to Web Forms apps. Very cool. Yeah, this is amazing, to be able to do that, and the productivity. A lot of the time, for more in-depth CSS stuff, I'm just using the browser tools and then copying my stuff over, you know, so being able to work directly in the source code is hot.

Okay, here's just the blog post about Visual Studio 2022 Preview 2, with links to download and try that out. And then we have the blog posts about .NET 6: this is .NET 6 Preview 6, the high-level one, and then this is the ASP.NET Core-specific one. There's a ton of stuff in here. I don't know, man, there's a lot. I'm excited about these minimal APIs; it's neat to see that. Also a lot of stuff for Blazor. I've been learning more about web accessibility lately, and it's good seeing some of these accessibility things that help show the right way to build things. And a lot of tightening up of the parameters and things for building controls and components. Gosh, just tons and tons of stuff in here. I might let the audience in on a secret, which is interesting, going back to the beginning of this show and why we originally did it, just to give folks a sense of the timing involved with these releases. Y'all are reading the blog post about Preview 6, right, which went out last week, like on Thursday or something. Yup, we did the blog post, it's great. The same day, we literally shut down Preview 7.
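[As an aside for readers: the minimal APIs mentioned a moment ago are one of the headline features in the .NET 6 previews. A rough sketch of the shape, as of the .NET 6 previews; details shifted between preview releases, so treat this as illustrative rather than definitive:]

```csharp
// Minimal API: the whole app fits in one file, with no Startup class
// and no controller classes.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Route handlers are just lambdas; parameters are bound from the route.
app.MapGet("/hello/{name}", (string name) => $"Hello, {name}!");

app.Run();
```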
So Preview 7 moved, and it was two days late, into the next phase, where we have to get sign-off from the director of directors, like Fowler's boss, the director of managers for engineering, to get bugs fixed in Preview 7. Preview 7 is done right now; we're all working on RC1, and Preview 7 is being verified. Now, of course, anyone can get any of these builds any day from GitHub, and that's the great thing about the engineering setup we have now with the open source stuff, thanks to the great work of the .NET engineering team: you can just go to the dotnet/installer repo on GitHub and grab the latest CI build of the SDK. Now, I can't guarantee how fresh any of the bits inside that are. You might get a really fresh compiler but an ASP.NET Core from four days ago, because it hasn't flowed yet; some days you'll get everything an hour old, because things are flowing really well. Infrastructure and issues happen, all that type of thing. It's funny, we still do these big blog posts and these monthly releases, and it's really important that we do, but we do have folks who work on the edge and actually pull down these daily builds, and then we have the really keen folks who are building it themselves. Yeah, which even I don't typically do for my projects. But Preview 6 is obviously out, Preview 7 is pretty much done and you'll see it in a few weeks' time, and if you want the stuff we're working on right now for RC1, just go and download a daily installer. Make sure you use Visual Studio 2022 Preview, otherwise you won't have a very good time, but you can do that and it'll work pretty well.

We have had requests in the past; a few people have said they would love to see how developers at Microsoft, like devs on the team, actually work. We should do that show. Well, okay, I would agree with you, because, as Fowler knows, back in ASP.NET Core 2 I was doing a little bit of coding in the repo. I contributed a few features; I wrote the original WebHost class that gave us the first higher-level hosting stuff in 2.0, and then I basically stopped coding in the repo for two releases. Once I got Razor Pages done, I kind of stepped back and didn't do any, and I only dove back in three or four weeks ago as part of the minimal API stuff and updating templates. So I'm neck deep in this stuff now, and I had a whole bunch of folks help me, but I was like, man, we should really do a show about walking up to the ASP.NET Core repo from nothing and getting it building. Because if you've done it recently, it's much easier to do again; if you haven't done it in a while, it can be a bit painful; and if you're doing it for the first time, we do have docs in the repo, but they're not always perfectly up to date, or you might hit some snags. I would love to do that show, John. Our repos are super unique: we basically live on the edge. What happens is we get builds from the dotnet/runtime repository literally every day, so there's a constant flow of changes. Which isn't the worst thing, but if you don't pull you won't see them, and if you fall behind you will pull a massive amount of changes, like every couple of days, this huge change, right? We have the SDK, we update the runtime, we get bumps to all the dependencies all the time, so you're in this constant state of moving forward over and over again. And it's not a thing you would typically do if you were developing in your own repo; you'd be on .NET 5 or 6 for, I don't know, two or three years. We literally update our .NET builds daily. And even that doesn't really, I think, convey the magnitude. What he means is that you rebase the branch you're working in after a pull from main, and you have to run restore on the entire repo again, and you probably need to run build on the entire repo again, because the dependencies of the things you're working on have likely changed. Close VS, and restore sometimes includes installing a new .NET SDK. And then, as Fowler just said, daily updates to Visual Studio. The first thing I do in the morning when I sit down, typically, is run the Visual Studio Installer; there are usually two updates, for the two different channels of Visual Studio that are revving frequently. I apply those while I'm checking email, and then I go and download the latest build of the SDK, because that's not in Visual Studio yet; that doesn't get inserted until much later. So I can use the latest full build of the SDK, which still doesn't represent what's latest in ASP.NET, so then you go to the ASP.NET repo, you do the sync there, the restore there, the build there, and if you're lucky all that goes well and then you're on the latest. And by the time you finish step 10, there's a new build at step 1. Why do you think I have a lovely powerful computer? Because doing this on a laptop... Or you do what Fowler does: you just carve out a little niche for yourself in the repo and hope that nothing else changes that breaks you. Yeah, it's funny. I haven't done this for a while, but I experienced this the most in the past when I was working on Build keynotes and conference keynotes. I would work with the teams that would build out the demo apps, and all the parts are moving: oh, you need this special new NuGet feed, you need this special thing. It would be crazy, and you're running all these command line things and, you know, strong-name hijacking and whatever to make it work, and then by the day it goes live, everything is just File, New Project, you know what I mean? All this stuff we've been hacking through, it's like, oh, it all just works. It was much worse. I mean, I
remember when I first joined Microsoft, it was just painful. Builds would take forever; it was a 150 gig enlistment on day one. You couldn't build your repo with a normal command shell; you had to use this special command shell to build and stuff. Was that Razzle? Razzle. It was very, very different. Yeah, it was just a pain compared to now, so I am grateful. Oh my gosh. Well, I am done with community links. I don't think it's worth digging through this one; this is a great post, and maybe we should do another show on it in the future. So this last one I'm just pointing out because it's the slide deck that David's going to be showing, and it's included in that community links list that I already sent out and will send out again. I'm going to stop sharing and pop over to David's desk. Boom, that's fast.

Let me see, should I show this? So I think I was here last time, where I showed this. This is Kestrel's guts, as we call it: the transport layer, the connection layer, the HTTP connection layer, and the protocol layer. I think last time we spoke about how we basically ended up with this model as a result of having two transports: libuv, when we first started ASP.NET Core, and then we added sockets, so we needed a way to plug in different transports. It kind of evolved from need. So now we have this pluggable model where you can actually plug components in at any point here. If you want to change the transport, if you want to plug in your own fast socket transport, you can; if you want to plug in a different transport, like a named pipe transport or something else like that, you can do that here as well. What's funny, we actually have a transport written by the Service Bus team at Microsoft. They wrote a transport that uses, what's that thing called again, the reverse gateway thing. Oh, YARP? No, no, no. It's for if you want to punch a hole through your firewall. Where's Clemens? He's gonna kill me. Instead of having to expose your computer to the internet, you basically punch a hole outwards into Azure Relay. So really they've built a Kestrel server implementation that uses Azure Relay to accept connections over an outbound connection: your app connects to Azure Relay, the client connects to Azure Relay, and it bounces data from the incoming client to your application, which then decodes the packets and turns them into HTTP requests for ASP.NET Core. Super cool. Sebastien actually ran our perf infrastructure on ASP.NET Core with that server implementation, to run it behind a secure network without having to expose it to the internet. Super cool. So the fact that you can replace the entire transport layer, and do that without having to change the rest of ASP.NET Core, was a goal. These boxes represent logical layers of our stack.

Do you ever hear from customers that are doing that? This is definitely something that we do internally, right? Yep. It's funny, there's internally, the .NET team, and then there are the customers using our stuff who are more advanced. For example, in the connection layer we see customers parsing the bytes before they get to the HTTP layer: doing mutations, doing rejections, doing super fast filtering, for example. I thought it was super rare, but you wouldn't believe how many people do this. Lots of our big customers that have reverse-proxy-style things slot into this component here and do all kinds of crazy stuff to get information about the connection and the host name before it goes to the HTTP layer. I figured this layer would be lightly used, but we ended up building a component that can filter, that can give you the TLS handshake information before handing it off to Kestrel itself, because it was such a common ask; a bunch of teams actually asked for that feature. Wow. When we made this layer, we made it for ourselves, for layering, and we knew it was useful for logging and that kind of stuff, but as we went forward, people ended up using it in their designs for this filtering stuff, which is pretty cool.

And then once we get past that layer, we have the piece that implements the HTTP layer, and it handles HTTP/1, 2, and 3, and then we kind of unify the model. Because HTTP/1, 2, and 3 are very different in how they're parsed: HTTP/1 is text, HTTP/2 and 3 are binary, HTTP/3 is over QUIC, HTTP/2 is over TCP. They're very different in how we handle the bytes, the parsing, all that stuff, and in the connection layer there are HTTP/1, 2, and 3 connections that are all different. But at the very end, you get a request object, and what we have is this base class called HttpProtocol, which implements the feature collection, the core primitive of ASP.NET Core. So it splits off and then comes back into a single abstraction exposed to users. When you do HTTP/2 and HTTP/3, you don't see the difference; you just get a request, over and over, and under the covers the server is doing different stuff to give you that request.

Two more questions just came in that look interesting. One is: is this primarily TCP, or are there other transports we're seeing? So, we only ship sockets in the box; there are named pipes; I think those are the main ones we see people use. QUIC is a new one that we just did, so there will be a QUIC layer for this stuff too, and QUIC is a bit different because QUIC is a multiplexed transport, but it has the same layers,
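[The feature collection described above, the core primitive that HttpProtocol implements, is visible from application code too. A minimal sketch, assuming .NET 6 minimal hosting, of reading a feature without caring which HTTP version produced the request:]

```csharp
using Microsoft.AspNetCore.Http.Features;

var app = WebApplication.Create(args);

app.Use(async (context, next) =>
{
    // Features are how the server exposes per-request/per-connection
    // capabilities; HTTP/1, 2, and 3 all surface this same abstraction.
    var connection = context.Features.Get<IHttpConnectionFeature>();
    Console.WriteLine($"Remote: {connection?.RemoteIpAddress}:{connection?.RemotePort}");
    await next();
});

app.MapGet("/", () => "Hello");
app.Run();
```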
so it's the same stack; it's just that the transport interface isn't the same. It's a different interface, because it has different features. But my gut is we'll see people do things directly on QUIC without having HTTP on top. Cool. So gRPC... gRPC will probably use HTTP/3 and not QUIC directly, right, but SignalR or something else will appear. SignalR can definitely use QUIC directly without HTTP, because it's effectively just a slightly more structured connection. It's basically TCP, but instead of having individual requests over one connection, you multiplex individual streams. Streams are a first-class concept in QUIC, where you can hold multiple streams over one connection. So it's like UDP merged with HTTP/2 and WebSockets all together somehow. Yeah, think of it like a multiplexed, secure TCP. It's TCP plus TLS without the head-of-line blocking problems, and it's multiplexed, right? So it has elements of HTTP/2; UDP, because it is UDP; and TCP, because it is stream-based. And ultimately, because the bottom layer is UDP, it'll work effectively everywhere that you can punch UDP through. Exactly. Which, I don't know where that is. I mean, if you use an old-school HTTP proxy server, like in the old enterprise I used to work in, you could still... we used to run little Python servers and stuff that could tunnel things over HTTP, right, the HTTP VPNs and things. But UDP, that's usually a different port, a different problem. Not necessarily in this world, though; with QUIC and whatnot, they just allow it on the same port, right, otherwise your browser couldn't connect to it. It's funny, I think the way HTTP/3 works is you send a request on HTTP/2, and the HTTP/2 response says, I am actually also hosting HTTP/3. I think it's called the Alt-Svc header: the server returns a header saying, hey, I'm also listening on this port for HTTP/3, you can send the next request there. Unless you have an optimistic client, right, like Chrome. Chrome is probably super aggressive. Chrome, to try and reduce latency, especially for mobile clients, as I understand it, for things like YouTube, will go: let me just try to connect with HTTP/1.1, HTTP/2, and QUIC all at the same time, with the assumption that QUIC will come back first if it's supported, and then I'll just cancel the other two. And even if HTTP/2 comes back first, I'll try upgrading to QUIC after that. Chromium, Edge, the classic browser stuff, is super aggressive: you type an address in the browser, you don't even hit enter, and it's like hitting breakpoints on your server. [Laughter]

So there's another question. Juan's asking, can the middleware be used separately? What's the contract? I interpreted that question as, what's the contract of the middleware, could I plug it into something else? You basically need a thing that can host the same model. As long as your transport layer could execute this kind of pipeline, it would work fine. If you look at my project, Bedrock Framework, it basically mimics this model; I basically copied it from Kestrel directly into that project. So in theory, yes, you can host it; it's not tied to the actual server. So if Magma ever became a thing, your same connection middleware could theoretically work on Magma, assuming you weren't tunneling in to get something specific from the underlying transport. Unless the transport is so different that it just wouldn't make sense. For example, when we did QUIC, we tried to make the current transport abstraction work, but it was deemed too different because of the multiplexing, so we had to fork the pipeline. So it's only if the transport cannot fit into this current model, if you don't have a connection abstraction, if you have something else, that it wouldn't fit. Got it.
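[To make Juan's question concrete: the connection middleware "contract" is tiny, which is why Bedrock Framework can reuse it outside Kestrel. A sketch of the shapes involved, simplified from Microsoft.AspNetCore.Connections; the logging middleware here is a made-up example, not anything Kestrel ships:]

```csharp
using Microsoft.AspNetCore.Connections;

static class ConnectionMiddlewareSketch
{
    // The whole pipeline is built from one delegate type (already defined
    // by the framework):
    //   public delegate Task ConnectionDelegate(ConnectionContext connection);
    //
    // A middleware component is just a function that wraps the next delegate.
    public static ConnectionDelegate UseLogging(ConnectionDelegate next)
    {
        return async connection =>
        {
            Console.WriteLine($"Accepted {connection.ConnectionId}");
            try
            {
                await next(connection); // hand off to the rest of the pipeline
            }
            finally
            {
                Console.WriteLine($"Closed {connection.ConnectionId}");
            }
        };
    }
}
```

Anything that can produce a ConnectionContext and invoke the composed ConnectionDelegate can host this middleware, which is exactly what Bedrock Framework does.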
yeah all right so kestrel optimization so casual is our premier web server is extremely optimized it is super efficient we're very proud of it um it's multi-layered there but there are a couple of kind of high level things that it does to kind of make the performance better all up uh buffer pooling is a thing you always hear about your iteration and poo poo buffers we do it aggressively we have a custom pool in case we're trying to get rid of because we want to not own it anymore but that's a different a different topic there's a pool there that pre-pins all the memory um as it allocates memory so since we're doing network i o um we end up having to interrupt with um the os apis which require pointers in into memory which require the actual address to do the actual i o so we have to pin the memory to do the i o typically when you pin the gc can't move memory so the technique typically in the past was you allocate huge chunks of memory and you pin it all up front and you and you tell it did you see what would just to leave it alone essentially um and was that to avoid fragmentation it was to avoid fragmentation um turns out it doesn't always do that super well actually we actually found a couple of dumps in the last like i was in the last year or so where if you just happen to allocate with certain patterns you could have two of our pin blocks between like a bunch of memory and since we have the memory permanently pinned it can never um compact the space in between so if you get lucky you'll be fine um in donet 5 we changed the allocations to to allocate in the pin object heap which is essentially a new space just for paint objects um yeah so and that and that i think will it will improve the overall fragmentation um there it's funny how much resource contention issues can just be solved by creating a new bucket until until the bucket becomes this rock is too big when i put it in this bucket and it's just taking up all the room i can't fit my little rocks in we'll 
just create a special bucket for this rock otherwise you could ask the question what happens to the pin algae even then i'll just go like yeah it's the same it's just it's not colliding with the unpinned stuff anymore so what we learned so what i learned i don't even know like when you allocate um statics we allocate a giant object array that's pinned right um and we start we store your statics in there and it's pinned because the jit can optimize not having to like figure out where it is so it just hard calls the address to a static because it knows it's always pinned it's not going to move that makes sense and then and what happened was the statics were like being allocated in between our blocks and it just caused this bad fragmentation really oops interesting so we move statics to the pin object on it five as well wow there you go yeah yep who knew i didn't know i think we should we should at this point it's worth just drawing a line under a part of what you've said is that i think it's fair to say kestrel an aspirin at core um by extension but kestrel has driven a lot of improvements in net core you know f5 plus now like you know from the network stack like we moved from a custom networking layer which we talked about last time to system.net.sockets and as a part of doing so we worked with the the networking team to make improvements there you're talking about buffering and pooling you see in the project although the structures to do the buffering yeah they used to be custom in castrol if i remember and then a lot of that learning was taken or they built better ones and they used theirs instead um and even things like the http parser has gone through lots of iterations yeah the the vectorization and the um what's the name of those libraries ah intrinsics there's been a again a sort of a symbiotic relationship through a lot of these pieces as kestrel would move ahead trying to push performance and then the run time would like catch up and then kestrel would 
Pipelines, of course — how could I forget pipelines. Pipelines, Span... It's funny, the parser started off as purely unsafe code and pointers, and we had a goal to figure out how to get back to safe code while keeping the performance. That took, I think, three versions. In 2.1 it was all pointers, all unsafe. You don't want to have unsafe code in your server — there are a couple of places you really don't want unsafe code, and one of them is the server. We did it for performance, and we were pretty sure it was fine, for the most part. But now it's fully safe, and it's actually faster. That took a lot of work — the JIT team spent a ton of time looking at our scenarios, trying to optimize Span for those scenarios. The goal was to make Span as fast as pointers, and safe — safer. And we got there eventually.
Just to touch on what you mean by "safe," because some people may not quite understand: .NET is a managed runtime and thus makes guarantees. You may have heard the term type safety — you can't call a method on a type where it doesn't exist; the runtime will go "no, you can't do that" and throw an exception rather than crashing the process. And because it's managing the memory for you, it can literally move an object from one part of memory to another, and it manages that for you. Now, you can turn off the guardrails and do unsafe things, and say "no, I know what I'm doing, in this case I am literally smarter than the runtime or the language or both together, and I know that this is safe even if you don't, so I'm going to enable unsafe operations so I can avoid you doing bookkeeping and checks that are slow." That's what's typically meant by doing things that are unsafe. You don't do unsafe things just because you think the runtime is wrong — there's usually a very specific reason: the runtime or the JIT doesn't have the logic in it yet, or it's a very complicated computer science problem to do it safely, with guarantees, without checks. But a lot of those things have changed, even in the last three or four years. The JIT is learning new stuff every release, the runtime is learning new stuff, the language has new primitives like spans, the compiler is getting smarter, and we've got a fantastic community of performance-interested folks who want to help the language advance. And then we've got experiments like the native AOT stuff, which drive new ideas into the runtime that get rolled into CoreCLR as well. It's an incredible amount of work, and as someone said, it's hugely the benefit of dogfooding: having a really engaged community and a workload that we really want to work well — in this case the web workload — that we push aggressively and have used to move the framework and the runtime forward.
As an example of the things unsafe turns off, a very common one is bounds checks. Whenever you access an array index, the reason an out-of-bounds access throws an exception is that there's a check emitted by the JIT — and by the libraries — that says: if i is greater than or equal to the length, throw. That's a bounds check, and it costs something every time you access the array. So the JIT tries to eliminate bounds checks when it knows you're within range. If you write a for loop and the JIT can recognize the Length property — it knows the length is five and you're accessing elements zero to four — it can safely remove the bounds check. The trick is you have to write code the JIT can understand well enough to remove the bounds checks, and a lot of the work in Span was around exactly those things. Spans are super, super cool — they can represent stack memory, heap memory, whatever.
But arrays are hella optimized — insanely optimized — so a bunch of the work on Span was getting the performance back to what arrays have. And people still find bugs every now and then: someone says "array is 10x faster than span in this one case, if I do a loop and change this length over here, and the JIT can't follow it" — and we go "oh wow, we should fix that too." So the parser is in a good state now, but it's funny — we were at a point, we were in a state once, where if you touched the parser and you didn't really understand what was happening, you could regress performance. The pattern is very specific: don't change the pattern. That's getting better; the JIT is beginning to understand more patterns now, so it's less of a maintenance burden. We used to record these in comments, because we would do tricks, or people would send us tricks: "I changed the order of these operations," or "I put this special offset in, because I know the JIT can recognize that pattern," or "if I do it this way, the JIT avoids a bounds check." It doesn't sound like much, but the thing is, in a high-throughput path, adding a bounds check can suddenly burn enough CPU cycles that you get flushed from a cache line, or you get preemptively scheduled onto a different core — you lose the magic. And it's funny how in our community we always have these arguments about stuff being too much magic — everyone has heard that before when we talk about framework design and APIs: "oh, it feels like too much magic." I'm going to tell you, .NET is nothing but magic. That's the entire point of the runtime and the language — so that you don't have to write things in assembly, or even C or C++, manage your own memory, and call system APIs that are different on every platform. The whole point is that you don't have to worry about that. You deal with objects, you deal with values, you call methods — you don't know where they are, you don't have to dereference them — it's all magic, and the magic gets better release to release.
I think the best hack I saw, that we learned trying to optimize code, was this: write to the array backwards. Say you have a byte array of length four and you want to write to indices zero through three. If I go zero, one, two, three, writing values as I go, the JIT emits a bounds check on each write, making sure the index is less than the length. But if I write index three first and count down, the JIT only needs one bounds check — at the first write — because then it knows two is less than three, and one is less than two, and so on. I remember changing our code to do this and thinking, maybe we're crazy, maybe this is insane. We have this code all over the place now, and I think it got better in .NET 5, maybe, so we can undo it. We keep having to write code like this and then undo it as the JIT gets better.
Well, there was a thing for a while within Kestrel — there's a big code generator, there still is: the headers. The header dictionary is actually insane. When we designed the headers, we wanted them to be super untyped and loose, so they would represent what the actual HTTP request had: a string name and a string array of values. We didn't want to allocate an array for every header value, because headers rarely have more than one value, so we have this type called StringValues, which is a "string or string array" type. As data comes in from the internet, from the client, we parse the bytes, and we know that this extent of bytes is a header key.
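Stepping back to the backwards-write trick for a second — it can be modeled with a toy bounds-check counter. This is a Python illustration of the reasoning, not the actual JIT: once the highest index has been checked against the length, every lower index is provably in range, so descending writes need only one check.

```python
# A toy model of bounds-check elimination: the wrapper counts how many
# checks it had to "emit". Once index i is proven < length, every index
# below i is known safe and needs no further check.

class CheckedBuffer:
    def __init__(self, length):
        self.data = [0] * length
        self.checks = 0          # how many bounds checks we performed
        self._proven = -1        # highest index already proven in range

    def write(self, i, value):
        if i > self._proven:
            self.checks += 1     # emit a bounds check
            if i >= len(self.data):
                raise IndexError(i)
            self._proven = i     # everything <= i is now known safe
        self.data[i] = value

buf = CheckedBuffer(4)
for i in (3, 2, 1, 0):           # descending writes: one check total
    buf.write(i, i)
assert buf.checks == 1

buf2 = CheckedBuffer(4)
for i in (0, 1, 2, 3):           # ascending writes: a check per element
    buf2.write(i, i)
assert buf2.checks == 4
```

As the speakers note, newer JITs recognize more of these patterns on their own, which is why hand-written tricks like this eventually get undone.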
That header key could be Content-Length, it could be Keep-Alive, it could be anything. We don't want to allocate a string for it. We know the byte patterns for the known header names, so we just do a fast comparison — we basically code-generate all the comparisons. For a given key we know ahead of time: does this thing match these bytes? So we match on length first, then we match on some unique prefix, and then we do some kind of crazy matching to make it fast. And then we don't allocate a key at all — we just know the index, like "the Content-Length header is number four." — How would you phrase that? I'm not a comp-sci person, but is that like a custom tree type of thing — is it a trie? — Yes, it's like a multi-phase trie, where we first compare on length and then we compare on a unique prefix. It's all about narrowing the set of things to look for. So we do that for the header key comparison: we have a bunch of known header names and we don't actually allocate anything for them. For values we do allocate — but then Ben Adams, one of our performance gods in .NET, made it so we can pool header values across the same connection. On HTTP/1.1 you have one connection for n requests, and over those n requests we can share the values if they're the same. — Oh wow. — Because the bytes are coming in dynamically off the wire, at some point you'd typically convert them into a string, and that results in an allocation. But instead of doing that, you go: "I know these bytes are the same as the ones already decoded for this same client, so just point at that string." I've seen some production workloads recently with headers that are massive — I couldn't even believe it — like 16 kilobytes of headers.
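The known-header matching just described — narrow by length, then a distinguishing byte, then one full compare, returning a canonical pre-existing name — can be sketched like this. It's an illustrative Python model; the header list and index structure here are invented, not Kestrel's generated code.

```python
# Sketch of trie-style known-header matching: narrow candidates by
# length, then by first byte, and only then do a single full compare.
# A hit returns the canonical name object, so nothing new is allocated.

KNOWN_HEADERS = [b"Content-Length", b"Content-Type", b"Connection", b"Host"]

# index: length -> first byte -> candidate known names
_INDEX = {}
for name in KNOWN_HEADERS:
    _INDEX.setdefault(len(name), {}).setdefault(name[0], []).append(name)

def match_known_header(raw: bytes):
    """Return the canonical known-header name, or None if unknown."""
    if not raw:
        return None
    candidates = _INDEX.get(len(raw), {}).get(raw[0], [])
    for name in candidates:
        if raw == name:      # final full comparison, rarely reached
            return name      # canonical object: no per-request allocation
    return None

assert match_known_header(b"Host") is KNOWN_HEADERS[3]
```

The point of the length-then-prefix ordering is that most unknown keys are rejected before the (comparatively expensive) full byte comparison ever runs.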
Wow. — Yeah, it's funny, I actually had to get that team to turn the pooling off, because it was hurting their memory. The headers ended up in gen 2, where typically they'd be in gen 0, because it was 16 kilobytes going up per request, and since we were pooling across the connection it was ending up in gen 2. I was like, oops — let's just turn that off for these connections. — John, can you highlight Juan's recent comment? I'd like Fowler to talk about that, because you'll hear us say "we don't allocate" quite a few times, and it actually means something different in different contexts. Sometimes it literally means we truly don't allocate at all, from app startup, for this thing; other times it means other things, doesn't it? — Yeah. For headers specifically, we literally have string constants. String constants are interned — there's a single object representing that string constant in the binary — and for known header keys we never allocate anything beyond the constant that's in the binary. And then "don't allocate" can mean a couple of things, like Damian said. For the most part, if I say we literally don't allocate, it's that constant case. If I just say "we don't allocate," it's normally amortized over some period. For example, when we say we don't allocate per request — we do allocate per request, we just happen to reuse all the memory across requests. There is a set of allocations that happens for a request, but they're amortized over the connection, so if you have multiple requests on that connection, you won't see any new allocations. You get one bunch up front, and then you won't see any more per request, and you can literally, in certain tests, get to a steady state where you do not allocate any more memory at all.
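The per-connection header-value pooling mentioned a moment ago — the Ben Adams trick — amounts to a small cache scoped to the connection. A hedged Python sketch (names invented; the real implementation works over raw wire buffers in C#): if the same raw bytes arrive again on the same connection, reuse the string decoded last time instead of allocating a new one.

```python
# Sketch of per-connection header-value pooling: decode a header value
# once, then hand back the same string object for identical bytes on
# subsequent requests over the same connection.

class Connection:
    def __init__(self):
        self._value_pool = {}    # raw bytes -> decoded str, per connection

    def header_value(self, raw: bytes) -> str:
        value = self._value_pool.get(raw)
        if value is None:
            value = raw.decode("ascii")   # the one allocation
            self._value_pool[raw] = value
        return value

conn = Connection()
first = conn.header_value(b"gzip, deflate")
second = conn.header_value(b"gzip, deflate")  # same header, next request
assert first is second   # same object: no new string per request
```

Scoping the pool to the connection is also what made the 16 KB-header workload above go wrong: large pooled values live as long as the connection, so they age into gen 2 instead of dying young in gen 0.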
You can send ten billion more requests and no more memory gets allocated, because the things those requests do in the application are so minimal, or only touch parts of the stack that are non-allocating once they've been set up on the connection. A lot of our performance tests are like that: you get your 100 megabytes allocated at app startup, when the first hundred connections come in for the load test, and then that's it — no more memory ever gets allocated, the GC never has to run, and everything is basically pure compute after that. For example, if you wrote a middleware that wrote "hello world" to the response in ASP.NET Core, returned a Task to the middleware pipeline, and ran it on Kestrel in a tight loop, you would see no per-request allocations. You would see the allocation of the connection itself, all the state for a connection, but if you hit it with a benchmark tool, you would just see the allocations that were per connection. And if you look at the tools that track allocations, you can compare how many of each thing was allocated against how many requests you sent. Typically you send a thousand requests and look for things on the order of a thousand, to see what was allocated per request. If you ran a performance tool against Kestrel, you would probably see a couple of connection allocations, but it wouldn't allocate anything per request. — Yeah, fixed load tests — where you have a fixed amount of load — are a really easy, cheat's way of finding stuff that's being allocated at each lifetime. I'm going to open 100 connections, and they're each going to do a thousand requests — that's the test. You run it with a memory profiler and then just look at the object counts afterwards: "oh, I've got 100 of these, and I've got 10,000 of those, and I've got a hundred million of those — oops, I've probably done something wrong with that one."
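That fixed-load-test trick is easy to demonstrate with a counter. A Python sketch with invented names: a pooled server allocates its per-request state once and reuses it, so after N requests the instance count stays at the per-connection level instead of being on the order of N — exactly the signature a profiler's object counts would reveal.

```python
# Miniature "fixed load test": count how many per-request state objects
# get created across many requests. With pooling, the count stays flat.

class RequestState:
    instances = 0                       # stands in for a profiler's object count
    def __init__(self):
        RequestState.instances += 1

class PooledServer:
    def __init__(self):
        self._pool = []

    def handle_request(self):
        # Reuse per-request state instead of allocating a fresh one.
        state = self._pool.pop() if self._pool else RequestState()
        # ... process the request using `state` ...
        self._pool.append(state)

server = PooledServer()
for _ in range(1000):                   # the "fixed load"
    server.handle_request()

# 1000 requests, but only one allocation: amortized over the connection.
assert RequestState.instances == 1
```

A non-pooling server would show a count of 1000 here — the "things on the order of a thousand" the speakers say they hunt for after a thousand-request run.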
It took us until .NET Core 3.0 to nuke the last of those allocations. The last were the thread pool dispatches — we allocate a small object per dispatch, and there were so many — but they're all reused now. It's absolutely incredible. Even things like routing: our "hello world" routing test doesn't allocate. Once the routes are set up in the system and the tree is built, a request doesn't allocate anything to look up the route — it's literally no memory. Once you have route parameters, it will allocate a dictionary, which is funny, because the dictionary has to exist — you want to actually access the values in the future, right? But you could imagine, if we had a system that was declarative and we knew you were going to parse a parameter into an integer, the routing system could basically say "I'm not going to store it, I'm going to parse it inline" and avoid the allocation completely. You'd parse the route parameter and, instead of storing it in the values dictionary, put it on the stack and pass it directly to the next caller you're dispatching to — your action method. Let me be clear, that almost never could happen in MVC, because there are so many layers between routing and your action method, but in raw routing and middleware it could. — Interesting. — So, yeah, we pooled the HttpContext objects. That was a super fun change we did in 3.0. HttpContext is what we call a god object — it is, by design, poorly designed from a SOLID standpoint: it does more than one thing, it has all these properties that reach out into the whole system, it's a convenience layer. We pooled these from .NET Core 3.0 on, and it caused some interesting bugs where people would hold on to the HttpContext past the request lifetime and touch things. You would think that wouldn't be a bad thing in .NET, because it's managed — you have a GC that tracks lifetimes. But the moment you do pooling is the moment you're saying "I am smarter than the GC." And we're not — we're absolutely not — but we thought we were. We said, we're smarter than the GC, we can do a better job, let's pool: we know when the request is over. But it turns out people lie all the time. They say "I'm done with the request," but it turns out they're still holding on to some object in a closure somewhere — "oh, it's hanging on." So we had this cool bug where people would see the next request's data in their code, because we reused the same object. The first time we saw it, I remember going "oh no." [Laughter] Crossing the streams. — To be clear, there's a really bad version of this bug and there's the version he's talking about. This version is bad in the sense that it'll probably cause exceptions for you. The really bad version is where you leak things you shouldn't across different HttpContexts — different users. But no one was brave enough, I think, to suggest that we would ever reuse contexts across connections, because the connection is the boundary of the security principal. — Yeah, exactly. We actually had a bug like that in 2.1, where the execution context was being reused across requests on the same connection, and the requests were coming from the out-of-process ASP.NET Core server, so people were getting the wrong Windows principal across requests. And it's not something you would notice directly — people typically don't mess with ExecutionContext in .NET; you get the current principal and assume it's correct, because it was set up correctly by IIS. We weren't doing it properly in Kestrel back then for that scenario.
It was just horrific what could happen. — Well, that's a bit of a segue to the next slide, where you're talking about HTTP.sys, right? — Oh yeah, HTTP.sys. So we spoke, I think in part two, about how we support multiple servers — let me find that slide — there's Kestrel, HTTP.sys, TestServer, and others. HTTP.sys is a kernel-mode driver in Windows. That means it's actually in the kernel; it comes with the OS, it's part of the guts. I think ".sys" is a typical extension for drivers, and it's called http because the actual HTTP protocol is implemented in there. It was built, I assume, for IIS — I wasn't here when it was built — but my assumption is: we wanted to put a web server in the OS, in Windows Server, and to put it in the OS it has to be on by default, and they wanted it in the kernel to take advantage of some of the benefits of being in the kernel, which we'll talk about. So there's a thing built into Windows that does HTTP on every Windows version. It's what IIS is built on top of, and we have our own managed wrapper on top of it. If you've ever used HttpListener in .NET Framework, it's the same component — HttpListener directly invokes those Windows APIs to create an HTTP.sys server. It's a bunch of P/Invokes for handling requests at the OS level. It has a bunch of unique features that you may want to use in ASP.NET. For instance, the reason IIS works the way it does on one machine — where you can share multiple applications on the same port 80, differentiated by path prefix, so I can have site 1, site 2, site 3 on port 80 all working in parallel — is that HTTP.sys supports port sharing natively. When you configure HTTP.sys for a process, you don't exactly say "I want to listen on this address and this port" — you say "I want to listen on this URL prefix," and the prefix can contain a path of any depth. The kernel will send requests from the listener in the kernel to the right process, assuming you have permission. So it does all the work required to do port sharing across processes in a very efficient way, and that's why we have a bunch of people who still use HTTP.sys directly. In general, request handling works through these things called request queues. You create a request queue, you bind it to a prefix — a URL, a path, and a port — requests come into that queue, and you dequeue from it. Our server in ASP.NET Core grabs a request from the queue, puts it on the thread pool, and executes your ASP.NET Core pipeline. If I were going to draw the architecture, it would be much smaller than the others: request queue, then thread pool queue, and we go from one to the other to execute your code. There are a couple of specific features of this server that don't exist on the others. Kernel-mode caching is one of them. You can tell HTTP.sys to cache a response, using the cache headers — it assumes the response is cacheable because it has all the right headers — and then HTTP.sys will serve the request directly from the kernel. It never enters user mode; it never has to go all the way up through the networking stack and all the way back, it just serves it directly. — Should we talk about the difference between user mode and kernel mode a little bit? We have a variety of people watching. — Sure. So think of it like there's this boundary in every OS. There are two layers — there are more than two, but think of two: there's where your code and applications run, and there's the kernel. The kernel has to interface with the hardware, and the kernel protects your apps and your code from the hardware.
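Back to the prefix registration for a second — the routing behavior it enables can be sketched as a longest-prefix match. This is illustrative Python only: the prefixes, queue names, and matching details are invented, and real HTTP.sys reservations also carry a scheme, host, and port, not just a path.

```python
# Sketch of HTTP.sys-style port sharing: several apps register URL
# prefixes on the same port, and the listener dispatches each request
# to the queue with the longest matching registered prefix.

prefixes = {
    "/site1/": "queue-site1",
    "/site1/admin/": "queue-site1-admin",
    "/site2/": "queue-site2",
}

def route(path: str) -> str:
    """Pick the request queue whose registered prefix is the longest match."""
    best = None
    for prefix, queue in prefixes.items():
        if path.startswith(prefix) and (best is None or len(prefix) > len(best[0])):
            best = (prefix, queue)
    return best[1] if best else "queue-default"

assert route("/site1/admin/users") == "queue-site1-admin"
assert route("/site2/index.html") == "queue-site2"
```

Longest match is what lets a deeper registration like `/site1/admin/` belong to a different process than `/site1/`, which is the multi-app-on-port-80 scenario the speakers describe.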
The hardware is doing all the crazy stuff, talking to all the devices directly, and the kernel is at a level where, you know, you get a blue screen — the blue screen is the kernel-mode equivalent of a normal crash, and you don't want apps blue-screening like crazy. And there are different levels of security at the OS layer. In the kernel you can see all the apps in a single space; in user mode, every app has its own security context. So think of it as different layers of security and access. Kernel-mode drivers let the OS kind of cheat — it's god mode, it can kind of see everything. — Yeah, you're in the kernel. — And user mode is where your app lives and is protected. And then those transitions between kernel and user mode are very expensive. — Right, you can think of them as thunking: you have to set up a bunch of bookkeeping to cross that layer, in both directions. Whenever the kernel has to come back into you, it has to know your process, your thread, and it has to set up all the context to change from where it is. It's funny — it sounds like magic, but at the core of it, it's just some structures that would look normal to you if you were a regular programmer. It's like a big Excel spreadsheet that someone is editing in real time every time they want to perform something. And like Fowler just said, it has to figure out where it is and then pause that so it can call you — that literally means: "okay, I've got all this information in the CPU right now, I have to get that off the CPU and put it somewhere I can get back to later." So you basically store the state — pull it out of the registers and things — and then set up the new context, which is just a fancy word for some more data, a memory structure, and then tell the CPU some things. It's basically that bookkeeping, over and over and over again. — Yep. So essentially, when we do networking, when we write .NET applications, we're not in kernel mode, and we have to transition between kernel and user mode to do things — P/Invokes, syscalls, any call into the operating system — and that has to return to us and give us the right data for our specific call to that API. So, yeah, that's the whirlwind tour of the high-level difference between kernel mode and user mode.
Sendfile is another unique capability — it's actually available at the socket level too. The feature is: I want to send a file as the response to a request. Someone sends a request to me, and I want to serve a file — it could be JavaScript, CSS, whatever file is required. Typically what happens in Kestrel, for example, is the request comes into our code, we have to open the file, and then copy the bytes from kernel mode through our code back into kernel mode. It goes from the file into our code, and then we copy the bytes into the response — that's two hops. What if I could just say, "hey kernel, can you send this file to the response?" — without ever having to change modes, without hopping at all, just blit the file over the wire. So HTTP.sys has this feature called sendfile, and it can basically blit the file into the response without changing modes. It just does it from kernel mode directly, because kernel mode can access and read those files itself — rather than the user-mode thing reading the file bytes up into user mode just to send them straight back down again.
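The two-hop copy versus sendfile contrast can be sketched with plain files. A hedged Python illustration: the function below is the user-mode path (kernel to our buffer, then our buffer back to the kernel, once per chunk); `os.sendfile` is the real zero-copy system call, shown only in a comment since its availability varies by platform.

```python
import tempfile

def copy_via_user_mode(src_path, dst_path, chunk=64 * 1024):
    # Two hops per chunk: kernel -> our buffer (read), then
    # our buffer -> kernel (write). This is the path sendfile avoids.
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            data = src.read(chunk)   # hop 1: bytes copied up into user mode
            if not data:
                break
            dst.write(data)          # hop 2: bytes copied back into the kernel

# The zero-copy alternative, roughly (platform-dependent):
#     os.sendfile(dst_fd, src_fd, offset, count)  # kernel moves the bytes itself

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"var x = 1;")           # pretend this is a static .js asset
    src_name = f.name
with tempfile.NamedTemporaryFile(delete=False) as f:
    dst_name = f.name
copy_via_user_mode(src_name, dst_name)
with open(dst_name, "rb") as f:
    assert f.read() == b"var x = 1;"
```

Every iteration of that loop pays the kernel/user transition cost described above, twice — which is exactly the overhead the kernel-side sendfile eliminates.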
You just say "send this file, here's the path" — copy that file to this response, two kernel objects. On top of that: this feature exists for sockets too, but we can't use it in Kestrel, because we do TLS in user mode. I can't just say "sockets, do the needful and copy the bytes," because that would bypass the TLS layer. You basically need to do everything in kernel mode, or everything in user mode, to get the whole thing to work properly — you can't have half and half. So that's why it is the way it is right now. Windows authentication isn't a unique feature of HTTP.sys anymore, I believe — it's built into HTTP.sys today, but I believe we now have a middleware in ASP.NET Core that can do Kerberos on Kestrel. Not NTLM, though — NTLM is, where's Barry, considered "never use this." — Yeah, legacy, it's the old one. — And request queue delegation is a brand new feature in .NET 5. This one is interesting — think of it like a fast reverse proxy. Say I have two HTTP.sys processes on the same machine. Typically, if you want to make a reverse proxy on the same machine, you could open a named pipe, or open an HTTP channel between the front-end and back-end servers, and send requests over that — but you have to re-serialize the request, send it over HTTP, receive it on the other server, et cetera. With request queue delegation, the kernel makes it possible to transfer requests from one queue to the other queue without having to re-serialize the state. So think of it as a fast way to forward a request to a different process on the same machine. — Is it a bit like fork on Linux? Like in node clustering, I think you fork, and the front process is the one that forwards the requests, at the node level. — This is a lower level than that, and it's faster, because you never have to re-serialize the request. It's lovely — shared state in the kernel. The kernel can cheat: it can just say "I'm going to send this request to this other queue over there; I have these two structures in kernel mode, and I'll move this request from this one to that one," and then the other process can just process requests that way. All you do is configure your request queue to allow data to be delegated to it, and then requests can be forwarded to you. — So has HTTP.sys had that feature for a long time? — No. — Okay, so the web garden feature in IIS did this differently — it didn't do this. — Right. Web gardens let you set up a bunch of processes, I believe on the same binding, and by default it just round-robins. With delegation, I can actually read the request data in one queue — I can dequeue it, read the headers, read a bunch of things — and then say "forward this to this other queue," and it will transfer. You can't touch the body — once the body is being read, that's over — but you can delegate requests across processes on the fly. It's super interesting in the context of a proxy server, a front-end proxy of some sort, where you could have a high-priority process for certain requests and a lower one for others — one with more memory, or containers, or some orchestration thing. You could do something interesting with that. But what is it mostly used for today — why did we add this feature?
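What request-queue delegation buys — handing the same request to another process's queue with no re-serialization — can be modeled with two in-process queues. A Python sketch with invented names; the real feature moves requests between kernel request queues across processes, which plain Python objects cannot do.

```python
# Sketch of request-queue delegation: the front queue dequeues a request,
# inspects its headers, and hands the *same object* to a second queue.
# Nothing is re-serialized; only a reference moves.

from collections import deque

class Request:
    def __init__(self, path, headers):
        self.path = path
        self.headers = headers   # body untouched: it may still be streaming

front_queue = deque()
backend_queue = deque()

def delegate(req, target_queue):
    # No copy, no re-serialization: just move the reference between queues.
    target_queue.append(req)

req = Request("/api/orders", {"X-Priority": "high"})
front_queue.append(req)

dequeued = front_queue.popleft()
if dequeued.headers.get("X-Priority") == "high":
    delegate(dequeued, backend_queue)   # route by header, body never read

assert backend_queue[0] is req   # same request object end to end
```

This mirrors the constraint the speakers mention: the front process may read headers to decide where to delegate, but must not consume the body, since the body still has to flow to whichever process ultimately handles the request.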
Yeah — initially we asked for it. We were thinking of using it for ANCM; that was the impetus. There are a couple of teams building on top of it now at Microsoft, so we have it built into ASP.NET Core 5.0 as a result. — It seems like a high level of abstraction, but I almost like that whole queue delegation idea — you could almost see something like that in a microservices context. — Yeah, exactly. I mean, Kubernetes has an ingress built in, but that comes at the connection level, not the request level, by default — and it's done in some super hacky way. — Okay, wow. — The request queue code itself is pretty boring — well, the code looks gnarly, but the design is actually pretty simple, because most of the code is deep in HTTP.sys. What's important to notice is that we do not get information about the transport layer here. Some things just aren't exposed to you, because you've been given this request abstraction: you get a dequeued request, and you don't really get the information about the TCP connection or anything under the covers. HTTP.sys does handle HTTP/1, 2, and 3 as well, and the model itself doesn't change: you get a request queue, that's all the same, and then we turn around and dispatch to the thread pool. — So this applies to anything hosted on Windows with this architecture, including, I'm guessing, Azure Websites and the like? — That's a good question. Azure Websites on Windows uses IIS, and IIS — it should be this, yeah. — Wow, okay. — Yep. It's funny, I had to read the IIS code recently to debug something, and I had to read how it interacts with HTTP.sys, and everything clicked — "oh, that's how it works." Super cool. Okay, now IIS. IIS is the last server — it's the one that everyone uses.
is this is how you use http system indirectly on most of your deployments if you've been doing net before i guess five years ago before we had kestrel um you hear i say a ncm and cm stands for the asp.net core module it is the is module that we wrote in net core a couple years ago five years ago now there's two most to it when we first started off there's only one mode the goal was to have a single web server castrol and that was going to be the one that wrote them all and it was beautiful but then it turned out to be a bunch of problems with doing this what we call the outer process mode now so there's two modes there's in process where the net application runs in the worker process so sub3.p that process um it runs in that process itself and then other process there's a 3.p and a.net process that gets that we forward request to um obviously improc is much faster there's no process hop to do http um we don't support running multiple applications in a single worker process i think this was supported on system web on a spinet of old so you could actually run more than one application in the same process because we had abdomen app domain isolation that's gone in.net core so we only let you run a single dot-net core application with internet pool that's what that polls were yeah exactly right exactly um our module does not interact with the entire is pipeline if you are familiar with asp.net and not asp.net core if you're familiar with like the integrated pipeline begin request http modules it's asp.net core used to sorry asp.net system web used to actually interleave manage code so you can handle every single is event in c sharp and spin net core that is not possible we run in a single um module event i think execute request handler um and you don't get to the pipeline you can run side by side with uh system web modules because they're both is modules so you can interleave but you don't get the same amount of control you can't like handle authenticate requests in 
asp.net core, you can't handle PostResolveRequestCache — there are like 50 events, right, you get to run in one handler, and there's no way to jump out of our code into the pipeline either. so once you enter asp.net core, it will own the response. in system.web you could do crazy stuff — system.web essentially gave you a way to write managed modules, it tried to expose the full behavior of IIS in c#, so you could in one event kind of say, i'm going to handle the request, and then in a second event say, you know what, i was wrong, i'm going to bail, and you should resume the pipeline. that control is lost in asp.net core. we give you access to the IIS http context, essentially, and that's about it. and then the overall design is that there are two shims. so when we wrote the new asp.net core module, ANCM, the goal was to have a model where we wouldn't have to update any global install. we wanted to be able to run and update the module without having to install a new msi, right — that was the mindset, we wanted to be able to install a package and you could update the IIS module. turns out IIS doesn't support — who would have thought — local modules, like on your machine. you have to install them into the global folder in c:\windows, with admin privileges. so that was a hard thing to swallow. so we broke up the design into this model, which we call the shim. um, the native shim exists globally, it's the thing you install — the asp.net core hosting bundle installs the shim globally — and the shim implements the IIS interface for modules. whenever you want to write an IIS module in c++, you have to write a piece of code, compile it as a shared library, a dll, install it globally on the machine, and then put it in applicationHost.config — here's the module, here's the path to the dll — and it has a certain shape and it all works fine. so our shim does
that, and then our shim does magic to load, from the right location — either the shared framework or your application path or wherever — another shim that is native, called the request handler, and it can be in-proc or out-of-proc. that thing will initialize the clr: it will host the clr, boot up the entire thing, run main, and then it will boot into asp.net core's typical program.cs logic. the implementation of the IIS server — what it does is it basically wires up all the callbacks required to handle requests in the in-process request handler shim, and then from then on, requests that come from IIS will get routed to asp.net core via the shim. and then the goal of the IIS server implementation in c# is to marshal request information from native to c#, and back. so when you call response.write, that buffers a bunch of data, and then we'll do a call into one of our p/invokes in the in-process request handler to write to IIS. so this middle component has a bunch of p/invokes, a dllimport layer, exports, where we essentially say, here are all the things you can do with IIS: from this code you can write to the response, you can read headers, you can read all these variables. and then we expose the feature collection, and the feature collection is implemented over those primitives — so when you call response.write it does the right thing, when you try to do websockets or any other interaction it does the right thing. um, it also has to manage the lifetimes properly, because we're doing some crazy interop here, so whenever the request ends we have to make sure that you don't touch it, so you don't access some piece of code that will crash your process at the wrong time — there's a lot of code in these two layers to manage that. um, also with this design, what's interesting is, whenever we want to deploy a new version we do this trick — you know, remember app_offline from the olden days? oh yeah — app_offline has to be recognized at
this layer, the shim — not the in-process handler, the shim — because if you have an app_offline, we don't even start this half of the universe, right. app_offline basically says, you can deploy a new build of your application and we will not load it until the file is gone. so app_offline gets read right here, and it says, i won't even boot your clr until you tell me app_offline is gone. so where components live depends on what needs to read them: if you need to basically not have any .net code run, you need to do it in the shim — the shim is the thing that runs before any .net code runs — and then the handler is where we actually interop between c# and native, but that code actually boots up .net, so it's too late there. if you want code to run before .net loads, it has to be in the shim. i mean, it means that we can deploy a new version of .net — exactly, like .net 11 can come out with new hosting native code and all wonderful new features — exactly, and in theory we only have to update... we don't need to do an admin install on a server, because you can literally deploy that middle box with your app in your bin folder. exactly, and what's important is, for self-contained deployments these components are not installed globally, these components are local in your app folder, right. that was the big innovation here with regards to how we hosted on IIS. and we avoided it for a few releases — i mean, i think we kind of had the idea, but it was like, yeah, we just did it the normal model, no, it's an admin install — and we just kept getting the feedback. and frankly, it was also a pain in the butt for the azure app service team, because we wanted them to be able to very quickly stand up new versions of .net, but if you're running a global fleet of hundreds of thousands of servers, you know, and you say, yeah, just run this installer — the door's closed before you even
got to the end of that sentence, right. so we had to do something better. so i think we've got one more slide left on this IIS group, and that's just the IIS architecture out of process, and we're kind of late in the day, so maybe we wrap up here and then jump back in after this one. yeah, sounds good. so the last piece is the out-of-process design. same overall design: there is still a shim, and the shim will load either the out-of-process handler or the in-proc handler, based on a switch in web.config — there's a mode for out-of-proc versus in-proc. if you mark your app as out-of-process, the shim will load the out-of-process handler, which is, i believe, installed globally, because we don't really update this thing super often — don't use it, um. if you look at it, this process is w3wp.exe, the worker process for IIS, and over here is kestrel. so what ends up happening is there are http requests being made — for every single request, we translate the incoming request and we send a request to the .net process using WinHTTP, which is a windows http client i didn't mention before — yeah, the same team owns both, which is good — um, so windows has a built-in http client and http server, there you go, and it uses the http client in windows to make requests to your application. so, pretty simple, but the perf hit is massive — i think in-process is four times faster than out-of-process — so it's a significant perf hit to want to use the out-of-process mode. but the benefit is you can run more than one process in the same app pool — even though that's kind of a lie, because you're not in the app pool, you're kind of, yeah, on the side of the app pool. you still have — IIS will manage the launching of this process, the port selection, it does all that for you on the fly. um, but yeah, out-of-proc is a mode that we want to deprecate, so stop using it, thanks [Laughter]. so to wrap up a bit, there's one question
from chris: what does the .net web hosting bundle install? when we first put that installer together — it installs the shim, yup, the in-proc handler and the shim and the out-of-proc handler, the two handlers, and .net itself. oh yeah, you're right, it actually — yeah, you're right, that's why it's a bundle, it has both things, it has the IIS support and .net, yeah. and so it does not have the sdk, it only has the runtime, and it has both the 32-bit and 64-bit versions of the runtime. that's also something unique about the web hosting installer, because IIS is a dual-bitness web server in windows — windows server still supports 32-bit processes even if you're on 64-bit hardware, hold on — and so it installs both, and there are two versions of every native module in this diagram: there is a 32-bit version and a 64-bit version, and they live in different paths, and you have to make sure you load the right one for the right application. um, and that's one of the other challenges with the in-proc hosting — .net is usually anycpu, and so you still have to load the right thing, though. um, and you know, the default — i know in azure websites for windows, i think it defaults to 32-bit processes and you choose 64-bit if you want it, but on windows i think it defaults to 64-bit and you have to set it to 32-bit if you want it. yep, that's right, so it's the opposite. um, and so yeah, we had to do all the work to make that work. so the bundle installs the 32-bit and 64-bit asp.net core module for IIS, the 32-bit and 64-bit .net host, dotnet.exe, and the .net runtimes and the asp.net core shared frameworks that run on that. that's what you get — all that stuff in the hosting bundle. but i assume it does not put the 32-bit .net on the path? probably — no, no. but you can get yourself in a pickle, though, and people do, because they don't realize that the runtime is installed, and so they go and get the 32-bit runtime installer,
install that, and that brings in the 32-bit host, which then gets installed so it ends up on the path last, and then they try to run dotnet.exe just from the command line normally and they get the 32-bit one. so, you know, in IIS it's all fine, because in IIS we do the logical thing, right, um, but it can muck you up if you're on like a dev machine where you're not running IIS and you get the wrong thing on the path. so you do have to be careful with the command line, because you typically only have one global thing on the path, and it's native, so it'll be one bitness. cool. all right, i'll do two last things. so, uh, question on which version this is — oh yeah, 2.2, i think, 2.2 was the first — yeah, 2.2, which is out of support, so 3.1 is the currently supported version, the lowest one that has that. okay, and we'll end with a slightly controversial thing: is IIS still a thing? shots fired. yeah, so one thing that we already covered is definitely azure app service — that's all running the windows version of it, the windows version obviously, yeah. but then, you know, i mean, there are some big companies that run a lot of stuff on it — you had nick on here a couple weeks ago, didn't you? yeah, yeah, so stack overflow, big companies like microsoft, by the way, yeah. i mean, but yes, it is a mature technology. it is a mature web server that's been around since before i had written any .net code, or even before .net existed — IIS has been around longer than .net, right, it was around for asp classic. um, and so yeah, it's been around a long time, but because of that it's everywhere, like it's highly optimized, and it's something that can be administered — like, it does not take you, you know, decades to become able to administer it. it's also a jack of all trades: because it's been around for so long and it's designed as a generic web server, there's a plug-in for everything, if not from microsoft then from a third-party company, or
you can write a module for it. so, you know, like apache in that way — it's been around for about as long as that. now, that said, there is something we can announce about .net 6 that i'm not sure we've shouted very loudly yet: for .net 6, the default launch profile for new .net 6 apps in visual studio is now kestrel instead of IIS Express. and we made that change just in this release — this is the release where we're going to do that. and so there you go: if you wonder why, when you create a new .net 6 app in visual studio 2022 and you ctrl+f5 or f5, you don't see the IIS Express tray appear or whatever, it's because we changed the default to kestrel. you can set it back to IIS Express, it'll still work just fine, it's still there, but we just chose the other one. awesome. okay, so i think we'll wrap here. can you flip to the next slide, just to give a preview of coming attractions? so we're moving on — we talked about hosting models, we talked a lot about the IIS interfaces and stuff, and then we move on to request processing next time. oh yeah, middleware. yep, middleware, a lot of good stuff. so there you go, sneak peek. wonderful. okay, this is cool — i see csharpfritz, hello — um, i hope some people were able to join that can't normally make it, because we did it at this random time, um, but we'll get the next one scheduled and we'll keep plowing through these. awesome. all right, thanks a bunch everyone, good evening, and i will play the happy music that says thanks for watching [Music]
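as a reference for the in-process/out-of-process discussion in this session: the ANCM mode is selected per application in the web.config that publishing generates. a minimal sketch might look like the following (the app name `MyApp.dll` is hypothetical, and the `processPath`/`arguments` shown assume a framework-dependent deployment):

```xml
<!-- web.config as generated for an ASP.NET Core app hosted under IIS.
     hostingModel="inprocess" loads the app inside w3wp.exe via the in-proc
     request handler; "outofprocess" proxies to a separate dotnet/Kestrel
     process over WinHTTP instead. -->
<configuration>
  <system.webServer>
    <handlers>
      <add name="aspNetCore" path="*" verb="*"
           modules="AspNetCoreModuleV2" resourceType="Unspecified" />
    </handlers>
    <aspNetCore processPath="dotnet"
                arguments=".\MyApp.dll"
                hostingModel="inprocess" />
  </system.webServer>
</configuration>
```

dropping an `app_offline.htm` file next to this web.config is what the shim watches for: it takes the app down (or refuses to boot the clr) until the file is deleted, which is the deployment trick described above.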
Info
Channel: dotnet
Views: 8,731
Id: Eq0Jvhk0o1Y
Length: 87min 3sec (5223 seconds)
Published: Tue Jul 20 2021