Writing async/await from scratch in C# with Stephen Toub

Video Statistics and Information

Captions
[Music] [Applause] [Music] mental model for how things work and I feel like you you don't have to understand every line of code that went into something that you're using but the more of it you understand the better you're able to use it and take advantage of it and you're right as syn8 has been around for so long and yet still or maybe because of that uh there are a lot of misconceptions about how it's actually implemented and built so last year maybe about a year ago I wrote a blog post called how async and a weight really works in C um but it's long and it's detailed and it's walking through individual lines of code so what I thought you and I could do would be sort of pretend it's 10 years ago and Implement a super simple version of async and8 from the ground up um and uh along the way kind of really see the the bits and pieces that that make a tick uh I I won't be particularly worried about performance which pains me uh and I'll be doing things that I wouldn't do in any other context but our goal here is just to sort of understand uh how things fit together and get a really good mental model for it well David fer and I recently came off of doing a like 20 or 30 part series on beginner C and we got right up to but did not get to async and a weight and the number one most requested thing on our on our channel is is more complicated more technical content would you put this at 200 level or 300 level like if you're beginner intermediate Advanced what should someone know when they join us here 300 level yeah 300 level okay so we're we're not going to be doing anything that is fundamentally difficult um but some of the concepts can be a little mindbending with how things relate to one another all right well I am going to put my empathy hat on and try to be like it's 10 years ago and I've been doing this so I'm going to be the audience and I'm going to ask questions if that's okay so let's take a look let's take a look at your Mach you're on Visual Studio here and we've got conso outright line hello Scott yep awesome so one of the first things you realize when you start talking about async weight and asynchrony is uh asynchrony sort of breeds concurrency right you you start something and then it might complete immediately it might not but you've just launched it and you can go off and do something else at the same time and so then when that thing completes and maybe it's going to do something else after now you have multiple things possibly happening at the same time and you need some way to enable multiple pieces of work to all run at the same time so at the very bottom of the stack you have thread pool uh and since we're going to try and build up this whole thing started from scratch we should write a thread pool a very simple thread pool yes so to level set though on definitions if I'm going to say I'm going to do things concurrently I'm going to do two things or more than n things at once if I'm going to do things asynchronously that's not quite the same as saying I'm going to do multiple things at once yeah so let's say I had a for Loop here let's say I equals Zer I is less than a thousand I Plus+ and then I'm going to use a thread pool we're now pretending that this does exist again even though we were about to pretend that it doesn't uh and I can come in here and you know put some work inside here um this this line here is just queuing a work item and immediately after I queue it I can do something else so I have cued this work asynchronously I'm launching whatever the body of this is 
I'm running it asynchronously from where I currently fire and forget fire and forget it'sing without the join it's you know yeah and that doesn't necessarily mean it's going to run concurrently if this was the UI thread of a Windows forms application and I wasn't using thread pool cues or work up I was instead using control. begin invoke that's really just asynchronously running that work but queuing it to run back on the the very place that I am so it's asynchronous invocation but it's not necessarily concurrency where concurrency comes in is uh the fact that this actually you know this piece of work here actually could end up running at the same time as this piece of work here and so the you can't have concurrency without asynchrony but you can have asynchrony without concurrency and then it's arguably not deterministic whether or not those two dot dot dots those two ellipses they might run at the same time depends on your processor depends on a million different things depends on if you get preempted who knows absolutely absolutely and in fact we can see that what this actually leads very nicely into what I just wanted to show which is let's say I have this very Loop here and I'm going to say console. right line uh I and then uh fred. sleep 1000 now you might think that when I run this it's going to print out the numbers zero through a th000 or 0 through $9.99 um and ideally that's what I wanted right I'm I'm queuing a bunch of work items this machine that I'm running on is has 12 logical cores the thread pool is going to have about 12 threads is uh and so you would kind of expect to see 12 numbers printed 0 through 11 and then 12 through 23 and so on and so on um but I actually have a bug here and if I run this it gets to exactly what you were just talking about oh this came up on the wrong monitor you can see it's printing out all 1000s and it's exactly because what of what you just said uh if I sort of you know just minimize this and forget what this was doing all I'm really doing is queuing 1,000 work items and then I'm going off to do something else by the time I do that I is now a thousand and then when these work items eventually end up running they're just referring to that same uh I value that was captured by this work they're all just referring to the same variable and so they all see it as uh a thousand keeps popping up on the wrong screen uh rather than printing out what I wanted to have it print out um in fact what's really cool is I can actually just select select this code and uh I could ask GitHub co-pilot for example why is this printing out 1000s wanted to say something well while it's while it's thinking about explaining that to us is it going to print out a thousand a thousands and then is gint out a thousand n99 this is just gonna print out a thousand a thousands that's that's the end of it yeah uh why is this printing out thousands when I expected it to uh print in incrementing numbers I love that you're using it as a rubber duck right rubber ducking is this idea that I'm just going to have something on my desk that I'm going to talk to and it's going to help me understand it I actually have a rubber duck that I have on my monitor that's just there to ask these questions to but now I can do it to co-pilot well the really neat thing about this is it's more than a rubber duck is it's explaining the problem to me oh wow it's but then it's actually recommending a solution which I can go and preview uh and then just accept the change so what done is explain the problem 
and then uh it's basically saying rather than using this variable which is sort of in this outer scope and is going to be reused across all of the the closures it's instead putting the thing that's being captured into the local scope so every work out of I queue will have its own copy and if I run this now uh we'll see indeed that we end up getting that behavior we were expecting where we're printing out these incrementing numbers as we go and we're seeing it across even though we're seeing it all in one line in one row here we're seeing uh them fight a little bit exactly so what we want to do is here I'm using the real threadpool but I need to do this with my own thread pool gotcha so we're going to go in here we're going to write a little class called my threadpool for lack of a better name since I'm not very creative when it comes to names and we need that same cues or work item uh that we just saw so cues or work item we'll take an action and we need to do something with this and one of the things I love about doing examples like this is they kind of write themselves in that I'm saying Q here so I need a q like I I need somewhere to store this data so I'm going to come in here and say static read only and there are lots of different data structures that I could use but I'm going to use one called blocking collection uh and the beauty of a blocking collection here is that um you can store things into it it's basically concurrent queue but when I want to take something out I will block waiting to take out uh the thing if it's empty and that's what I want my threads to be doing all of my threads and my thread pool are going to be trying to take things from this que to process it and if there's nothing there I want them to just wait for something to be available so my qes or work item is just going to say work items. 
add and put that into the queue uh and then I need a bunch of threads to do the processing so I'll say in a static Constructor for here I'll just kick off a bunch of threads I is less than environment I mentioned I'm on a 12 uh 12 logical core machine here so I'll just have 12 of these and each one of them is just going to kick off a thread uh and start it now interestingly sorry go ahead Scott if I may I just want to call if you scroll up just to smidge I want to make sure that for folks that are following along and learning in on line six there you said delegate uh and now you're using uppercase a action action is a delegate right it's public delegate void action so what why why did you went maybe explain the ju position there as you immediately and intuitively picked action yep so action in.net is the you can have delegates of all shapes and sizes that managed function pointers basically um net has built in definitions of some of those for very common shapes one of those super common shapes is just a parameterless void returning method that's all action is so this com to anything that is parameterless and and point return and so um because we're not doing anything fancy we're not accepting any state we're not returning any additional arguments I've just used action here and if if I write delegate and I Chang this to my thread pool we can see that it binds successfully because the compiler is able to convert this uh Anonymous method into an action it's also able to do that with a Lambda which is just another way of writing the same thing just a good reminder to folks an action is a delegate that has already been defined all right cool please thank you yeah no thank you so um I'm creating a thread here uh this isn't really here or there but interestingly net sort of distinguishes two kinds of threads it has what are called foreground threads and background threads and the only distinction between those is when your main method exits do you want your process to wait around for all of your threads that you created to exit as well foreground threads it will wait for them background threads it won't wait for them uh because I don't want these threads that are sitting here in a infinite while loop to keep my process alive forever I'm just going to say is background uh equals true uh and that way these threads don't keep my process from exiting now is that something that might not necessarily be intuitive to someone who came from a Unix world who is not going to think about that kind of foreground thread background thread and there's also the concept of green threads and native threads In some cultures yeah and frankly it's it's not something that you frequently run into or or that matters but since we're sort of looking at implementing the lower level stuff here call some of these things out along the way um so these threads just sit in an infinite Loop uh doing something and what are they doing well they're taking the uh next work item from work items take and running it and that's it now I've got my my thread pool and if I kick this off uh we can see that we get the the same behavior that we saw before even though we're using my thread poool it's behaving in a very similar fashion I've just we've just implemented our own sort of new threadpool now if you were to look at the real net threadpool it's a whole lot more codee than the what is this 15 lines or 20 lines here almost all of the real code goes into two things one making it super efficient and two uh not having a fixed number of 
threads a lot of the logic is about thread management and increasing and decreasing the number of threads that thread pool has over time in order to try and maintain good throughput for your application but as I said I'm not worrying about perf so I'm sort of but one of the interesting things here though because we're implementing this sort of at this lower level is there are other things that we need to think about that uh most developers implementing most Library implementing most code don't need to think about but because we're implementing the details here we do and in particular if you're familiar with say like aspnet right aspnet has this thing called HTTP context um and you're able to use this HTP context accessor to kind of basically say give me the current HP context for where I am or uh if you're using uh like principles with threads and you say who is the current principle associated with this threat that information somehow flows when you cue work items or you do other things there's all this sort of ambient state that somehow seems to be able to magically flow from one from one thread where you're doing something to the continuation or to the other work that you've queed okay and that has to happen somehow and it happens via something called execution context so in lines 4 through 18 here we have that captured value I does that then join thread local storage and kind of go along for the ride as you cue that that work item it's it's not exactly what we would call thread local storage so this value here is really just being stored onto an object that's being passed into qer work item um if thread local storage would be if I actually had a static uh a static field and I taged as thread static what that then ends up doing is saying that each thread has its own copy of that static field um but with something like asynch a weight or cues or work item I'm going to be hopping between threads so if I put something in thread local storage on one thread it may or may not end up being available in the work that I queed because it might run on a different thread so what we need is a mechanism to say that ambient state that we kind of have hanging out there like in thread local storage how do I enable that to automatically flow with my work because in the case of something like async await I do a and then I await something and then I do B and I await something I'd kind of like that ambient information to be present even across right all those possible hops think about like if you were if you were building a large distributed system maybe like a correlation ID that's allowing you to go and and track The Logical transaction over the course of a large distributed system in the early days of asp.net a lot of people got nailed by marking things as thread static and then a thread gets reused in the pool and then someone else's account number is there and like wait a second that variable is not my data exactly and you mentioned you mentioned you know for distribution or tracing or whatnot um that's a good another good example where we take advantage of this the activity stuff that's used for doing sort of distributed tracing and you have you can await any number of times yet somehow your correlation ID your your IDs for your your spans end up being the same and that is via this mechanism so there's a type in net called async local um and I'll just call this I and then I don't want I let's see uh let's call this um my value and then here I can say my value. value equals I and I can use my value. 
value here so this looks like I have a single shared thing right it's sort of the same as the initial problem I had it's it's Shar it's outside of my Loop uh and I'm just storing the value into it I'm using that value here uh but if I run this we'll see that it correctly does what I wanted it to do right I've still got my incrementing numbers and I'm not somehow sharing the same value across all these that magic is happening via something called execution context Inn net which is this thing that takes all of that thread local state that has been specifically put there and flows it with all of these asynchronous operations like Q use a work item or new thread or await or task. run or any of these things now normally that all just happens for you right that magic is happening for you by cues or work item or by task.run but if I switch this over to my threadpool and now I run this it's all zeros because right we're we're reimplementing that lower level of the stack we need to handle that in that flow so async local is this kind of Multiverse friendly interdimensional traveler that's G to like okay we're going to start hopping around from thread to thread and it's going to be passed in as a parameter to this function and then any changes to that object are going to be seen by the CER but then if you assign a new value it's not going to be seen by the Coller yeah and we and this is where seeing exactly how it works under the covers kind of helps clear up exactly how that flow happens it's one of the reasons I love learning about the stuff at the lower level because you kind of your mental model for this kind of locks in place so what is this actually doing well rather than just storing an action we need to also store that execution context that thing that's getting passed around and then here rather than just adding the action I'm going to add execution context. 
capture so I'm going to grab the current context and store it along with this action in this collection and then when I take this out I'll just take out that execution context as well so now I've not just I've not only dced the action but the execution context associated with it did you have a question okay so action is a delegate so I get that uppercase a action is a effectively a delegate execution context seems to be like a very friendly and convenient thing that you just happen to have available to you in the Base Class Library what is its underlying data structure if you were to use something a little bit lower level than even the fact that you could just say execution context execution context is basically just a dictionary of key value pairs yeah that is stored in thread local storage um it's a little bit more fancy than that but really just what it is and everything else is kind of an optimization and then it provides these apis to say capture it which just means grab the current one and then what we'll see now is we need to be able to actually use it so we we kind of captured the context that was present when we queed the work item and now we need to actually use that same context to kind of restore it temporarily while we invoke this delegate now it is possible that it's null because it's possible to suppress execution context flow that not really relevant for our discussion but if it is null I'm just going to invoke the delegate and not worry about it otherwise oh look get up co-pilot wrote it for me otherwise uh I'm going to take this context and run the delegate that I'm passing in using that context uh and so now I previously got all zeros now when I run this uh we'll see that again we kind of get that behavior that we wanted because now that ambian state is Flowing from where the work item was queed to the invocation of that work item let's let's scroll down and let's just spend a moment uh looking at line uh 36 just a little more deeply for those that may have seen that fly by because you're casting that state to action let's just make sure we understand what's happening there on that on that line so this is the line that GitHub co-pilot wrote for me and if I was writing this on my own that's what I would have written as well it might be a little bit clearer if I for our purposes here if I write it a little bit less efficiently uh and that is if I run it instead like write it instead like this not context work on it uh these are functionally the same thing I'm just saying invoke this delegate which is just going to invoke this work item with this context set basically as current have it restored and then it's going to undo it afterwards right the difference between these lines is execution context. 
run actually takes a State argument and then that state argument is passed into that context call back delegate so that delegate is just an action of object basically just with a different name so you can pass State into it um in fact if I I should be able to browse to the definition here and if I look at context call back all you can see you can see it's just a delegate that takes a state object this was introduced before action and action of object were added so it's a you know a dedicated delegate type if we were doing it again today this type wouldn't exist it would just be action of object right and then just go back to program.cs sometimes for those who may not be familiar when you see something like State show up you see context which is uh uh which might you might think oh that's a that's a variable it is in this case but then State it's a named parameter exactly it can be easy for a 200 level person to kind of go State what state where' that get declared yeah so I can expand that a little bit this is just uh the the argument to this function and so basically this state this this AR object here is then being passed to execution context. run which will invoke this delegate passing that object in as the state and the reason I said this is for efficiency is because this version has what's called a closure and it needs to reference this work item that's defined out here so there's actually multiple objects being allocated here to be able to capture that work item into some object and create a delegate that's been passed in and here I can avoid that in fact I can see that it's being avoided and that there's no closure by using the static keyword in C if I were to do anything in this delegate that tried to use State out here like if I were to try and do this the compiler is going to warn me or give me an error and say you can't do that you're capturing State and you told me via this static keyword to not let that happen so I'm not letting that happen yeah so so control Zs back to Glory just a moment ago and I also want to call out there's two fun things going on here one from C uh 8 which is the they call it the damit operator the that null forgiving which is State and I mean it make that and talk about that for just very briefly and then of course you've got object with a question mark because that no work Happening Here Yeah so the nullable reference type support in C is quite nice it's not perfect there are some apis where you just kind of can't fully Express from an API definition perspective what you want to express and in this case what I really want to be able to say is I want to be able to pass in something here that is null something that's not null uh and I would like that to then impact whether this is nullable or non-null if I pass something here that's non-null I'd kind of like it to be this and if I pass something here that is null I kind of like it to be that because this question means can this be null or not and you can do exactly that with generics this API was introduced before there were generics and so there's only one thing that this can possibly be and it has to be able to work with nullable or non-nullable things that are passed in here and as a result the the only thing that can be is the thing that can possibly be null because something that is non- null or maybe null can both be maybe null um in my case I know it's non-null because I'm only in this code if work item sorry if um if work item is non-null and therefore I say that warning that you would otherwise 
give me by trying to use this thing that might be null I know it's fine I know better I know better than the comp we know better than the compiler hence the dam it operator actually the null suppression operator well null forgiving I think is what we yeah no forgiving that's yeah exactly so that's all right so this is starting to this is starting to take shape here yeah so we've got our threadpool but Q user work item is pretty low level right it we Q work we can Fork but we're not really joining with it and because of that I've got this console read line here to kind of prevent my program from exiting I'd really like to be able to both cue the work and then have some object that represents that work that I can later say wait for this thing join with it right and for that inet we have a class called task so I'm going to implement my task and we're going to implement our you know again a very simple version of task that can then layer on top of my threadpool um so there are a few things that we would want to do with this task task is just and it's core it's just a data structure that sits in memory that you can do a few operations on um one of the things that you can do is check whether it's completed so I'm going to have a little bo is completed property here and I'm just going to kind of scaffold this out and we'll we'll fill it in in a moment um you also need to be walk up to that task and say well you know I can check whether you're completed but I want to be able to mark you as completed basically say that you're you're done and for that I'm going to add two methods I'm going to add a set result set completed whatever you want to call it and also it might it might be representing an operation that has failed so I want to have uh set exception you saw again get help co-pilot there automatically completing the line for me which was quite nice um now in the real. 
net uh these are separated out onto a separate type called task completion Source that's not a functional thing that's purely so that I can give you a task and not be worried that you're going to complete it out from under me so I'm kind of reserving the capability to mark this task as having been completed um for our purposes I'm just putting it right on to to task and then I also want to be able to wait for one of these things so we were just talking about being able to join with it so I want to be able to say you know wait for this task to complete or if I don't want to synchronously block maybe I want a call back maybe I want a notification that the task is completed I want to be able to walk up to it at any point whether it's completed or not and give it a delegate that it will invoke when it completes and for that we're going to write a method called continue with that again we'll just have take an action uh which it will call when it's done so this is the surface area of our task we're going to implement and yeah and this is this is why I just love one of the best and most fun parts and the hardest parts of computer science of course is naming stuff and I'm sure you've probably been in in in meetings with with uh partners and Friends saying let's get a thesis and find the word and when you find the word it's got the right mouth feel you're like okay that's what it is so like a task is an action but it has it has other actions so a task can have actions like has a is uh you know all of those kind of things when when you started writing task I'm thinking to myself well gosh an action's kind of a task but no tasks they have more things they have more context they need to be a little different it's not only sort of you know representing some operation but also then interacting with that operation in in some way yeah um so we can start filling this in and again it kind of writes itself so is completed all right well I need to track whether my task has completed or not so I need uh this completed field and we see that I can set an exception so probably need to be able to store an exception onto here again uh we can we can see that um the the question mark because I may not have an exception I may have an exception so this is nullable uh we can see here you can walk up to this task at any time and give it an action to that it's going to invoke when it completes I need to be able to store that somewhere so we're going to have action uh continuation and then as we just saw with the thread poool not only do I want to store that action but I also want to be able to take that EX execution context that was sort of floating out there and capture it and restore it uh when I invoke this thing so I'm also going to have an execution context and now we can start filling in our method so let's do is completed first this is the easiest one uh I'm going to say get and then I want to return completed now I do need to do a little bit of synchronization here because this task object is sort of U it needs to be implicitly thread safe because something over here is going to be completing it something over here is going to be joining with it the real task in net does a has a whole lot of code to try and make this synchronization as cheap as possible with lock free operations and whatnot I'm going to do a really simple thing that I don't recommend anyone else do in you know a general case and I'm just going to say lock this and just protect all of my operations with everybody get in line behind this guy yeah 
exactly um but with that this this method has been implemented um and again it's what's kind of NE is you kind of look at your state and I love the syntax highlighting in vs because it kind of shows me what I've used and what I still haven't used yet the things that are great out of the things that I haven't kind of used yet and then in that way it's kind of guiding what I do um so both set result and set exception actually need to do the exact same thing so I'm just going to implement them in terms of a single helper uh that optionally takes an exception and so then I can go up here and make this just call complete with null and again GitHub co-pilots knows what I want to do and writes it for me um and now I just need to implement this complete method so this is uh the operation has completed this task is being used to represent that operations so the code needs to come along and mark the task as having been complete so again big honk and lock uh and it doesn't make sense to complete one of these twice so we'll just say if it's already completed throw an exception stop messing up my code and then we can now proceed to actually implement this so I need to mark it as completed that's pretty obvious and I need to store the exception uh that I was given uh and now um I'm almost done but we can again see if we look at our state right I've I've set is completed I've set exception but we said that this continuation was meant to be invoked when the operation completed so now I can say if continuation I can type it if continuation uh is not null and again it tried to write it for me which is pretty to then I want to queue a work item that invokes the continuation now this isn't 100% correct uh and it's a good reminder that while something like GitHub co-pilot can help you write most of the code you still want to check to make sure that it wrote what you wanted it to this is functionally correct except it's missing using this context yeah so I'm just going to go up here and again uh do exactly what we saw before which was if context is null then just invoke the continuation otherwise do that whole execution context execution context run Thing feel like there should be a way to say all of that in one line though yeah maybe we should add an overload of run that you know does the right thing just doesn't yeah um yeah and so uh complete is now now complete um uh so we've implemented is completed we've implemented set result we've implemented set exception uh let's do continue with next this one was also pretty simple uh so we'll just lock around you know ourselves um and now we can say if we're already completed well we can just cue the work item that the user asked us to invoke we don't have to do uh anything anything special um I just do this something else otherwise we're just going to store that for later and then we also need to uh capture the context for use and that's that's continue with right so now we've hooked at this delegate and all we're doing is saying if the task is already done and run this now by queuing it if it's not done store it such that when it is completed this code over here can then launch this in in the in this in this I don't know if the word naive implementation in this simplified implementation is it how bad is it I just want to make sure how bad is it that you're locking on this like is that a reasonable thing to do because we are creating a low-level component like as a general rule application developers should not be locking on this because they don't know who else is 
locking on the thing but is it less of a sin that you're doing it so there's two aspects to your question here one is using locks in general and the other is locking on this I'm speaking specifically about lock this which I was like taught and ingrained to never do don't do that right yeah so the concern is it like I would if this was actually task you would definitely not want to do this and the reason you don't want to do it is it's this the lock that you that you is basically an implementation detail it's private State and yet this the reference to the object is public so it would be akin to having you know it would be exactly the same as having a uh my lock object but then choosing to make this public rightone else could lock on it and now you're having this weird interaction with code that didn't expect to be touching your private State and so that's really what it's about if you know that no one else is going to be that no one else except you will have a reference to your object you could lock on this I got it so we are a public class my task is a a a a public class but well I say not a public class pardon me it's not no one will ever have a handle to us so there's no way for anyone to ever lock on us is what you're saying right Ah that's good that's good information um y so that I didn't leave any of those lying around did I I don't think so okay um so our last method the only thing we have left to implement is this weight because I want to be able to for at least especially for demo purposes walk up to the task and say let's make the fonts just a smidge bigger sorry I know that you're I want to point out how how how talented you are and your zooming your control scrolling is very good I like watching your brain as you're like I'm getting increased scope and I'm reducing scope and you know you're using the zoom yourself not just as a presenter but also as a way scoping the space that you know that this work will take up I want to be thoughtful for our friends on their phones and on their iPads too apologies to them yeah sorry um so now we want to be able to wait for this test to just walk up to the task and synchronously block and what's fun about this one is we can actually implement this in terms of continue with and this isn't just some novelty that I'm going to do here this is actually how task. weight is implemented it's also implemented in terms of continuations so I need to be able to block and anytime you want to kind of synchronously block waiting for something you need some sort of synchronization primitive in this case I'm going to use a manual reset event uh and I'm going to again lock and I'm going to say if we're not completed I I need to do something and I'm I'm purposfully ignoring what GitHub co-pilot is telling me to do here because it's right but it's also uh making it hard for me to be sort of pedagogical and teach because it's jumping way ahead um so I'm going to wait for this manual reset event but only if I create one and so I'm only going to create one if this task hasn't yet completed if it's already completed I don't have there's nothing for me to wait for so if it hasn't completed then I actually instantiate this and now I need to Signal this manual reset event to become in a signal State th that anyone will waiting on it will wake up when this task completes how do I do that I can say continue with manual reset event. 
set so now I'm implementing weight in terms of continue with by saying when this task completes hook up a delegate that will invoke manual reset event slim. set which will then cause this to wake up and manual reset event slim is literally the slim lighter weight version of manual reset event and because you're not going to be waiting long it would be appropriate to use the the light the the DI Coke version of man it's actually appropriate to use the diet coke version in 99% of situations uh and better to use the diet coke version even though I know you know my wife tells me that I shouldn't be drinking diet coke I know my wife says the same thing um but in this case the manual reset event is just a very thin wrapper around the os's the kernels uh equivalent uh primitive and that means that every time I do any operation on it I'm I'm kind of paying a fair amount of overhead to dive down into the kernel manual reset event slim is a much lighter weight version of it that's all implemented up in user code in networld um basically just in terms of uh monitors which is what lock is also built on top of um the only time it's less appropriate to use it is if you actually need one of those kernel level things which you typically only need if you're doing something more esoteric with weight Handles in in a broader yeah so anyhow totally good here the last thing I need to do though is um we can see and I mentioned using the the grayness of my fields to know whether I was done or not um obviously this one is still grayed out I'm I'm missing something it's this is this greatness is saying that it's I said it but I've never actually read it and that's because you know when I wait on this thing I actually want whatever exception was there to propagate out and I haven't read it yet so now that I know this is done I'll just say if exception is not null and again I'm going to ignore GitHub co-pilot even though it's it's right um I basically want to throw this exception so it propagates out now this isn't ideal either um if You tab an existing exception object that has previously been thrown that exception contains a stack trace it contains some uh what's referred to as the Watson bucket which uh contains sort of aggregatable information about where that exception came from for use in um postmortem debugging and Diagnostics um when I throw exception like I'm doing right there that's going to overwrite all of that information so I kind of don't want to do that um one common way around that and that was the only way around that when task initially hit the scene and doneit framework 4.0 was to wrap it in another exception so you might wrap this in and have like an inner exception exactly and so you can see exception has an inner exception and now throwing this will populate this exception stack trace this exception will still be available as the inner exception and it won't be touched so all of the stack Trace will stay in place and then we're not doing a just a throw because we're not in the middle of an actual active exception that we have just previously previously caught and are rethrowing exactly yeah um now so task basically had to do this while it was doing that it also factored in the fact well a task could represent multiple operations that were sort of all part of the same overall operation like if you have task. 
whenall uh that will you can wait for multiple tasks and that produces a single sort of task result which needs to be able to contain multiple exceptions so task instead of throwing regular exception it throws an aggregate exception uh and you can see from uh Constructors that are available that you can give this any number of exceptions and it can wrap any number of inner exceptions that's what the frams there means um but here I'm only wrapping one now since since task was introduced uh and something that was very useful for a weight is another sort of pretty lowlevel type called exception dispatch info oh wow the name doesn't really matter but what this does is it takes that exception and it throws it but rather than overwriting the current stack trace it appends the current stack trace and so for anyone who's looked at a an exception that's propagated through multiple weights you might be used to seeing a bit of a stack trace and then a little dotted line that says you know uh continued ad or original throw location and then more stack Trace every time this exception is getting rethrown up the call stack up the asynchronous call stack more state is being appended to that stack and that's all handled via this um so we've now implemented task and that's basically what it is so I can go up here and I'll just say I have a list of my task and then oh actually one more thing I want to do first what I was going to say was I was going to have a tasks and then here I was going to say add my task.run and then I realized we haven't actually implemented my task.run yet so let's do that um so oh is this an opportunity for you to go over to that run and hit uh you know control Dot and see if it will a visual studio will generate that run for you I could try what what do you want me to do is like a quick action if you hit the little if you hit the little uh generate method run will it do the right method so it generated the method uh but without implementation y um now co-pilot can actually start filling this in for me but again I kind of want the fun of doing it so I agree I'll let it do the little things uh and uh and it made assumptions as well of course in that case their Visual Studio made some assumptions about scoping and things like that so internal so public static my task run action yep which looks a lot like task.run uh now in all of these little helpers we'll see implemented they all have a similar form we're going to create a task and we're going to return it and then in the middle here we're going to do something that does the operation and completes that task now in the case of run all that's doing is saying my threadpool qer work item uh we're going to have a TR catch block that invokes this action when it is successfully completed we'll say t. set result and if it failed with an exception then we'll say t. 
set exception and we'll bail and now we fully implemented task.run and again other than some minor perf differences this is exactly what task.run actually does Q's a work item that completes the task when the delegate has been invoked right and I think that chunk right there that that that's where it really crystallized for me from 105 to8 right there you've abstracted away that previous use of Q user work item added a lot of value around the things you might want to do with a task check on its completion and things like that set continuations huge amount of value and a small amount of code absolutely and this also speaks to the kind of the ubiquitous nature of task one of the the most important things that task does isn't even the operations on it it's conceptually the fact that it unifies into a single type the ability to join with any arbitrary asynchronous operation Inn net and that was a critical step for async and a weight because you want to be able to use a weight with any asynchronous operation and by having a single type that can represent any of them it makes that a whole lot more convenient so this is the building block this is the beginning of it we've got about 20 minutes to bring it home then so let's understand how task then becomes such a powerful pattern so that's that's great so I want show two aspects of that first we can see now that my Squigly are gone and here I could just say for each uh T in tasks t. weight and I'm going to lower this number because I don't want to wait for thousands of these to complete but now when I run this where's oh it's still building build and let's let's try to zoom in on both our terminal and our code as soon as it finishes here oh is it on another is it um think there you go you're over here so um when this gets to 100 we can see it hasn't exited but the moment it gets to 100 then my application exits because it was waiting for all of those tasks to complete now you mentioned Scott that we start to see kind of this being a building block and we can build other things on top of it it's kind of unfortunate that I'm having to wait for each of these tasks individually you want to wait for all of them right wouldn't it be nice if I could say my task. 
whenall and just pass in all of these and then for the purposes of My Demo I'm just going to block waiting for that thing so I'll use your little trick here and I'll say you control dot I think as well control oh I was using the alt control dot generate meth all uh so this we're going to have this return of my task I also want this to be public this is taking a list of tasks so we're going to do the exact same thing we saw before I'll say my task and return T and now we again just need to fill in this intermediate part I'm going to handle one base case which is if if the number of tasks is zero then I'm just going to say all right I'm done nothing else for me to do right otherwise I need to Loop through all of these tasks and hook up a continuation to each of these that will basically count down how many are left and when all of them have completed then it will set that task so out here I'm just going to create a little continuation that I'm going to reuse uh for all of these tasks I'm going to have a little counter how many are left tasks do count and here I'll say if after decrementing remaining I end up with zero then I'm going to complete the task now I should also be doing some stuff with exceptions here not going to bother with that right now it's kind of not the point but now I can take this continuation I can uh put it here oh I have something else named T Sor right uh task in tasks um and now I've implemented when all so if I go up here I've Got My Little My squiggly is gone I can run this again bring this window over and again now when I get to 100 uh we should see this all right and then jump back to the implementation of that very briefly for me sir so I want to call out you're using interlock decrement instead of just saying remaining minus minus because because uh I don't have no idea what these tasks are doing they might all be completing at the same time or not and if they were to both complete at approximately the same time this continuation two different threads might be trying to decrement this value and if they each tried to do it without any synchronization their operations might sort of stomp on each other and we might lose some of the decrements which would be a big problem because we wouldn't know when we actually hit zero so I'm using this lightweight synchronization mechanism to ensure that all of the de are tracked and that only the one that is actually the last one to complete performs this work because as we saw if I dive into this if multiple of them think that they're the last one and they try and both complete it it's going to fail right and you said lightweight synchronization method as opposed to trying to do some locking around that which I suppose you could have done totally could have u i could have had it taken a lock here but this is one place where it's really simple and straightforward to use basically the lowest level syn ionization primitive that I have available to me which is uh a lock free interlocked operation very cool um as long as I'm implementing other helpers I can Implement some more and we'll see they all follow the same pattern one of the most useful that people find with tasks is delay so let's also Implement that so I can say delay and we'll have some timeout here and this is again going to follow the exact same pattern we've seen before so we'll say new task and we'll return that task and then here I just need to do something that after this timeout has happened will complete the task uh I can use a timer for that so I'll say new 
timer when this timer completes it's just going to set result and then I'm going to schedule the timer to complete in this number of Mill why is that more appropriate than what a what what someone who may be trying to use do this exercise themselves might naively say oh thread dot sleep that's a great question thread. sleep takes the thread and puts it to sleep um for the specified amount of time so if I had 12 threads in my thread pool and someone wrote thread. sleep 1,000 as part of their work item now all of the threads in my thread pool are unable to do anything else and that means if someone comes in and cues something that's actually important they're going to have to wait for all those threads to become available wouldn't it be nice if we could instead sort of still have my logical flow of control pause its logical flow of control for this period of time but allow that thread to do something else while that's happening and that's the beauty of await task. delay so we can we can see that sort of in practice now that we have our delay I'm just going to go and delete or I'll comment out all this up here and I'll just do something simple like console WR uh write hello and then I'll say my task. delay um let's say 2,000 and then after that delay I'm going to use our new our continue with method to now print out console. WR world right and uh again I'll have our console. readline here to make sure our program doesn't exit because we're spawning this asynchronous work but we're we're not currently joining with it uh and so now when I run this uh my window pops up we get hello and then two seconds later we get World um but I would kind of like to be able to just say wait here rather than having that console do uh console. readline but we you can see we're getting a squiggle here saying continue with returns void we can fix that exactly the same way that we've seen in our other methods I'm just going to say my task new task down here I'll return it uh and I just need to do something slightly different than queuing this action rather than queuing that action I want to have uh a different uh let's just call this call back and what this call back is going to do is is invoke action and then call t. 
set result now we're going to do the same dance here just to be good citizens that we saw before we we'll catch exception uh we'll set the exception onto this task uh and so if this action were to fail um we will still end up completing this task and now I can just take this call back and use it instead of uh use it instead of the original action that was passed in uh and now when I go up here we'll see that I I no longer have a squiggly and I'm going to make this a little bit longer just because I keep having to move my I can't figure out how to get my window to start over here so uh we got our hello and then once the world appears um that's when the program hello pause for effect World exactly um kind of like an llm right you're spitting out these little tokens as you're waiting for each one yeah um now it would be nice if I could sort of not just do one thing after a delay but it'd be kind of cool if I could just take this and say after another two seconds I want to print out like and like the llms chain them and just have them go hello hello hello exactly chain these things together and then maybe I want to do that again in here uh I'll say uh how are you right but we can see I'm getting a squiggly because I'm trying to return a task out of something that was just accepting an action you were we were talking about the delegate earlier that the action delegate is just void returning um moreover even if that worked I want this wait to not only wait for this work that has completed but also for any task that it's sort of returning out of its body um so I need a slightly different version of continu with that's able to sort of unwrap that inner task uh continue with here I'm going to just copy and paste this whole thing and create a slightly different version of it we already talked about action uh I'm just going to take another version of it that not only invokes this action but this action is then going to return another task and we don't want to return the task that's completed from here until this task has completed so I'm just going to store that next take this result out of here because we don't want to complete when this outer one has completed only when the inner one has and then I'm just going to hook up a continuation to this so I'll say uh when this task completes kind of a linked list of actions here sorry say that again oh just kind of a linked list of actions just like what is the next one in the in the in the tree so here I'll just say set exception with that exception otherwise we'll say set result and I don't need to change anything else now my squiggles have gone away and with any luck when I run this we'll see Hello World and Scott how are you and we don't exit until that whole chain has completed nice um this is a pretty unfortunate way to have to write yeah zoom zoom in a little bit there it looks a little weird I mean like it it's kind of like you know what they called it Arrow code in the old days where exactly and and we can fix that to some extent we could go in here and we could delete this and then here I could say continue with because of that because I've already implemented yeah that's an aesthetic at this point like it's right so I could run this and it would do the right but I have this very linear continue with continue with continue with if I wanted to instead do something like for I equals zero I is less than or forget even that part just this and I wanted to print out forever yeah for the current eye but I still wanted to have that my Tas without delay 
in here that nice delay I I don't what do I do right I I this won't work because I'm not going to be waiting at all I don't want to use thread. sleep this is where if I had something called the weight I kind of want it to to kick in here but I don't interestingly there is something that almost serves that exact purpose that we've had since C 2.0 and that is async iterators so if I were to instead have code that did this uh if I had uh I inumerable of int uh call count just got uh for I equals z i is less than count i++ here I can yield yeah out of here and somehow I'm able to magically come back in So if out here I were say four each into I in count count 10 oops right and that yield is just kind of like hey here's the next one keep returning keep returning it's just yields of not used enough and not well keyword and one of the great ways to understand that if I just debug into this and I start stepping through this call count I didn't actually step into count yet yeah until I move next and then when I step we can see I end up back in this method yeah and it's restoring the state each time I step right it's remembering that state from this previous operation wouldn't it be nice if I could do that exact same thing except instead of yields returning out I ah okay this case is rehydrating the state in this case the state is simply I you're going to rehydrate the entire execution context so what I'd kind of like to do is to have this code but have this in a method let's call it uh forget what this is actually returning for a second let's just call it print async and I want to sort of yields return out this task from this and rather than kind of manually pushing it forward calling move next on that I numerator what I want to have happen is when this task is yielded and this task completes I want its completion to call move next I want it to sort of drive itself so we can Implement that and actually we can implement it pretty easily let's go down to where I was writing all these helpers and we're going to write one very last helper I'm going to call this iterate this is going to take an enumerable of task we're going to do the exact same thing we saw before return that out and uh if you're familiar with enumerators the main thing on an enumerator that moves It Forward is a move next method so we want to move next method here I also want to invoke it to sort of kick things off uh I need to get the enumerator of my task out from here so we'll say tasks. getet enumerator and now we just need to implement this little bit of code that says move the state machine forward move next get the task that was returned and when it completes move it forward again so I'll say if e. 
move next if we were able to get another one and we'll fill that in in just a moment if I wasn't able to get another one well I'm done there's nothing nothing more for me to do and again for good measure we can wrap this with a catch block that will set the exception and now all I have to do is this little bit of code here what is this going to do well we're going to say what is the next task it's whatever was yielded and we're going to take that and say continue with move next and now when I call iterate with this lazily produced iterator of tasks we're going to start it off we're going to enter the method calling move next which will push the iterator forward which will start running my the code in my iterator eventually it will yield return a task we'll get that out we'll hook up a continuation and we'll exit when that continuation runs it will call move next it'll push it forward and so on and so on eventually there won't be anything else to yield my iterator will have reached its end and will call set results so if I go up here and now I can just say my task. iterate print a sync and if I run this we'll see we're getting we're running weing these tests yeah and we're getting that delay and we've been able to do it with just this little helper and believe it or not that little helper is basically what the compiler generates for asyn O8 we've effectively implemented async O8 here in fact in the comp in the C compiler the logic to support implementing iterators and the logic to support implementing async methods it's like 90% the same there are a few differences here and there but for the most part it's implementing a state machine that allows it to be uh exited and re-entered and rehydrated and come back to where you were and the real thing that differs is who is calling move next is it the the developer code with for each calling enumerator Dove next or is it the completion of the the awaited task or the in this case the yield returned task calling continue with with the move next that will that we'll feed back into it um we can take this you know further I can show kind of Full Circle how this uh we can actually um you know replace this with a weight by implementing a little a waiter on this and how we can replace this with async my task by implementing an async method Builder but at the end of the day it's just some syntactic sugar uh that's allowing the C compiler to use our custom task we really have implemented a single weight from scratch I love that word syntactic sugar because I think people don't realize that like each little additional layer of abstraction is is indistinguishable from Magic and if we accept those little abstractions as being black boxes then we are uh you're going to struggle but if you realize that like when you went and made that iterate function go back down there there's you just buried it you hit it but it's so clean and small and now you have a nice helper function and but you can go and look at that you can go and see that you can see what the compiler generates you're not helpless exactly and I know you have to run but just for to exemplify that yeah um I'm just gonna for now we're going to pretend that task exists again and I'm going to make this await await is just the compiler saying like hey I want to hook up a continue with a continuation tell me how to hook up a continuation to your thing it knows how to do it for task it doesn't know how to do for my task but in just a few lines of code we can make that keyword work for our task so I can just 
write a little struct here called the waiter that's going to accept uh a task uh here I'm using a primary Constructors I just have to implement a little bit of code I say I notify completion uh and Implement that interface we're not going to you know what I'll just have the let's just let the get copilot write it all since I know you're stressed for time um so we're just going to uh fix up a few things here and we can now with one more line get a waiter do this and if uh oh this needs to be public uh public um you notice this squiggly has gone away we're now able to await our custom task as part of uh this Loop and again we see the exact same thing but using the actual a yeah yeah yeah so you just swap so just like you swapped uh from task to my Tas then you can swap from a waiter to a weight and you're really showcasing that the the core functionality is the same exactly and if we had a few more minutes we could do the exact same thing and I could make this say my task but I won't keep you from your son I appreciate that uh this has been incredibly helpful I hope that the other folks uh who are watching have enjoyed this as much as I have that's a that's basically 70 minutes just a little bit over an hour to understand that fundamental concept behind a weight and async how it works why it works and then a good reminder to us all that you can see that you can dig in if you choose to I want to encourage folks though who may be application developers who might think like why do I need to know this a reminder that I tell myself is I pick the layer that I understand truly and I go one layer below to get a little bit uncomfortable I don't think Stephen you're telling us that we all need to drive stick shift we all need to have a kit car in the garage that we built from scratch if you want to build a toaster you don't have to smelt your own Iron but it's fun to just look underneath the hood and go huh I use it every day now I know and and by doing so you build a a better sense for how it works and then you can use it better even if you never have to write that code ever exactly fantastic all right well I think this has been super fun I'd love to have you and some of your engineering friends on uh to chat with us again sometime so we'll do that soon always happy to chat [Applause] oh
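Code Sketches

The sketches below reconstruct, in rough order, the code written on screen during the session. They are illustrative approximations rather than the speaker's exact code: type and member names that are mentioned in the video (MyThreadPool, QueueUserWorkItem, MyTask, ContinueWith, Run, WhenAll, Delay, Iterate, the awaiter, and so on) are kept, while private field names, locals, and some of the error handling are assumptions made here.

First, the closure-capture bug from the opening demo: every queued work item captures the same loop variable, so by the time the pool threads run they all observe its final value. A minimal sketch of the bug and the fix, using the real ThreadPool as in the video:

```csharp
using System;
using System.Threading;

class ClosureCaptureDemo
{
    static void Main()
    {
        for (int i = 0; i < 1000; i++)
        {
            // Bug from the demo: capturing 'i' directly means every work item
            // shares the same variable, so most of them print 1000:
            //     ThreadPool.QueueUserWorkItem(_ => { Console.WriteLine(i); Thread.Sleep(1000); });

            // Fix: copy the loop variable into a local scoped to this iteration,
            // so each closure captures its own value.
            int capturedI = i;
            ThreadPool.QueueUserWorkItem(_ =>
            {
                Console.WriteLine(capturedI);
                Thread.Sleep(1000);
            });
        }

        Console.ReadLine(); // keep the process alive while the pool threads work
    }
}
```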
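Next, a sketch of the MyThreadPool built in the talk: a BlockingCollection of work items, a fixed number of background threads started from a static constructor, and ExecutionContext.Capture/Run so ambient state (such as AsyncLocal<T> values) flows from the thread that queues the work to the thread that runs it. The tuple layout and the field name are assumptions made here.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

public static class MyThreadPool
{
    // Each work item carries the ExecutionContext captured when it was queued.
    private static readonly BlockingCollection<(Action Work, ExecutionContext? Context)> s_workItems = new();

    public static void QueueUserWorkItem(Action action) =>
        s_workItems.Add((action, ExecutionContext.Capture()));

    static MyThreadPool()
    {
        for (int i = 0; i < Environment.ProcessorCount; i++)
        {
            new Thread(() =>
            {
                while (true)
                {
                    // Block until a work item is available, then run it.
                    (Action work, ExecutionContext? context) = s_workItems.Take();
                    if (context is null)
                    {
                        work();
                    }
                    else
                    {
                        // Restore the captured context around the invocation; the
                        // static lambda plus the state argument avoids a closure.
                        ExecutionContext.Run(context, static state => ((Action)state!)(), work);
                    }
                }
            })
            { IsBackground = true }.Start();
        }
    }
}
```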
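The AsyncLocal<T> example from the middle of the session, shown here against the real ThreadPool (where ExecutionContext flows automatically); with the MyThreadPool sketch above it only behaves this way because of the Capture/Run calls. A small illustrative sketch:

```csharp
using System;
using System.Threading;

class AsyncLocalDemo
{
    static readonly AsyncLocal<int> s_myValue = new();

    static void Main()
    {
        for (int i = 0; i < 100; i++)
        {
            s_myValue.Value = i; // stored via the current ExecutionContext
            ThreadPool.QueueUserWorkItem(_ =>
            {
                // The value set at queue time flows to the pool thread because the
                // captured ExecutionContext is restored around this callback.
                Console.WriteLine(s_myValue.Value);
                Thread.Sleep(1000);
            });
        }

        Console.ReadLine();
    }
}
```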
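A sketch of the MyTask type as built in the talk: IsCompleted, SetResult and SetException funneling into a single Complete helper, ContinueWith capturing the ExecutionContext, and Wait implemented in terms of ContinueWith plus a ManualResetEventSlim, with ExceptionDispatchInfo used to rethrow a stored exception without overwriting its stack trace. Continuations are queued to the MyThreadPool sketch above; the private field names are invented here, and locking on `this` is tolerable only because, as discussed in the video, nothing outside the class holds a reference it could lock on.

```csharp
using System;
using System.Runtime.ExceptionServices;
using System.Threading;

public class MyTask
{
    private bool _completed;
    private Exception? _exception;
    private Action? _continuation;
    private ExecutionContext? _context;

    public bool IsCompleted { get { lock (this) return _completed; } }

    public void SetResult() => Complete(null);
    public void SetException(Exception exception) => Complete(exception);

    private void Complete(Exception? exception)
    {
        lock (this)
        {
            if (_completed) throw new InvalidOperationException("Task already completed");

            _completed = true;
            _exception = exception;

            if (_continuation is not null)
            {
                Action continuation = _continuation;
                ExecutionContext? context = _context;
                MyThreadPool.QueueUserWorkItem(() =>
                {
                    if (context is null)
                        continuation();
                    else
                        ExecutionContext.Run(context, static s => ((Action)s!)(), continuation);
                });
            }
        }
    }

    public void ContinueWith(Action action)
    {
        lock (this)
        {
            if (_completed)
            {
                // Already done: just queue the callback to run.
                MyThreadPool.QueueUserWorkItem(action);
            }
            else
            {
                // Not done yet: store it, along with the current ExecutionContext,
                // for Complete to invoke later.
                _continuation = action;
                _context = ExecutionContext.Capture();
            }
        }
    }

    public void Wait()
    {
        ManualResetEventSlim? mres = null;

        lock (this)
        {
            if (!_completed)
            {
                mres = new ManualResetEventSlim();
                ContinueWith(mres.Set); // signal the event when the task completes
            }
        }

        mres?.Wait(); // block only if the task wasn't already done

        if (_exception is not null)
        {
            // Rethrow while preserving (appending to) the original stack trace.
            ExceptionDispatchInfo.Capture(_exception).Throw();
        }
    }
}
```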
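MyTask.Run as shown in the talk: create a task, queue the delegate, and complete the task with SetResult or SetException depending on how the delegate finishes. This and the helpers below are static members to add to the MyTask sketch above.

```csharp
public static MyTask Run(Action action)
{
    var t = new MyTask();

    MyThreadPool.QueueUserWorkItem(() =>
    {
        try
        {
            action();
            t.SetResult();
        }
        catch (Exception e)
        {
            t.SetException(e);
        }
    });

    return t;
}
```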
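MyTask.WhenAll from the talk: a countdown over the supplied tasks, decremented with Interlocked.Decrement so concurrent completions can't lose a count, completing the returned task only when the count reaches zero. Exception aggregation is skipped, as it was in the demo. (Add `using System.Collections.Generic;` for List<T>.)

```csharp
public static MyTask WhenAll(List<MyTask> tasks)
{
    var t = new MyTask();

    if (tasks.Count == 0)
    {
        t.SetResult(); // nothing to wait for
    }
    else
    {
        int remaining = tasks.Count;

        Action continuation = () =>
        {
            // Only the last task to complete sets the result; Interlocked keeps
            // concurrent decrements from stomping on each other.
            if (Interlocked.Decrement(ref remaining) == 0)
            {
                t.SetResult();
            }
        };

        foreach (MyTask task in tasks)
        {
            task.ContinueWith(continuation);
        }
    }

    return t;
}
```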
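MyTask.Delay as in the demo: a Timer completes the task after the timeout, so no pool thread is tied up the way Thread.Sleep would tie one up.

```csharp
public static MyTask Delay(int timeoutMs)
{
    var t = new MyTask();

    // Fire once after timeoutMs and complete the task. (Production code would
    // keep a reference to the Timer and dispose it; omitted here as in the demo.)
    new Timer(_ => t.SetResult()).Change(timeoutMs, Timeout.Infinite);

    return t;
}
```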
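The two ContinueWith variants added later in the talk, again as members of MyTask. The first replaces the void ContinueWith from the earlier sketch and returns a task representing the continuation itself; the second takes a Func<MyTask> and completes only when the task returned by the delegate completes, so chained delays compose correctly. The exception plumbing here is a rough approximation of what was shown.

```csharp
// Replaces the void ContinueWith: returns a task representing the continuation.
public MyTask ContinueWith(Action action)
{
    var t = new MyTask();

    Action callback = () =>
    {
        try { action(); t.SetResult(); }
        catch (Exception e) { t.SetException(e); }
    };

    lock (this)
    {
        if (_completed)
        {
            MyThreadPool.QueueUserWorkItem(callback);
        }
        else
        {
            _continuation = callback;
            _context = ExecutionContext.Capture();
        }
    }

    return t;
}

// Continuations that themselves return a MyTask: the returned task completes
// only when the inner task does.
public MyTask ContinueWith(Func<MyTask> action)
{
    var t = new MyTask();

    ContinueWith(() => // reuse the Action overload for the scheduling
    {
        try
        {
            MyTask inner = action();
            inner.ContinueWith(() =>
            {
                if (inner._exception is not null) t.SetException(inner._exception);
                else t.SetResult();
            });
        }
        catch (Exception e)
        {
            t.SetException(e);
        }
    });

    return t;
}
```

With these in place, the hello/world chaining from the demo can be written roughly as `MyTask.Delay(2000).ContinueWith(() => { Console.Write("World"); return MyTask.Delay(2000); }).ContinueWith(() => Console.WriteLine("!")).Wait();`.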
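The Iterate helper that ties it together: it drives an IEnumerable<MyTask> forward by hooking each yielded task's completion up to the iterator's MoveNext, which is essentially what a compiler-generated async state machine does. A PrintAsync-style iterator follows as usage, mirroring the demo (bounded here rather than infinite). Add `using System.Collections.Generic;` for the enumerable types.

```csharp
public static MyTask Iterate(IEnumerable<MyTask> tasks)
{
    var t = new MyTask();

    IEnumerator<MyTask> e = tasks.GetEnumerator();

    void MoveNext()
    {
        try
        {
            if (e.MoveNext())
            {
                // A task was yielded: when it completes, advance the iterator again.
                MyTask next = e.Current;
                next.ContinueWith(MoveNext);
                return;
            }
        }
        catch (Exception ex)
        {
            t.SetException(ex);
            return;
        }

        t.SetResult(); // the iterator ran to completion
    }

    MoveNext(); // kick things off
    return t;
}

// Usage, mirroring the demo: a "manually async" method written as an iterator.
static IEnumerable<MyTask> PrintAsync()
{
    for (int i = 0; i < 5; i++)          // the demo loops forever; bounded here
    {
        yield return MyTask.Delay(1000); // the stand-in for 'await'
        Console.WriteLine(i);
    }
}

// MyTask.Iterate(PrintAsync()).Wait();
```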
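Finally, the step sketched at the end of the session: making the C# await keyword work with MyTask by exposing GetAwaiter() returning a struct that implements INotifyCompletion. The compiler only needs IsCompleted, OnCompleted, and GetResult, and OnCompleted is wired up to ContinueWith. (Declaring methods as `async MyTask` would additionally require an async method builder, which the talk only mentions.) Members to add to the MyTask sketch above; requires `using System.Runtime.CompilerServices;`.

```csharp
public Awaiter GetAwaiter() => new Awaiter(this);

public struct Awaiter(MyTask task) : INotifyCompletion // C# 12 primary constructor, as in the demo
{
    public bool IsCompleted => task.IsCompleted;

    // Called by compiler-generated code to register the rest of the async
    // method as a continuation of this task.
    public void OnCompleted(Action continuation) => task.ContinueWith(continuation);

    // Called when the await resumes; Wait() also rethrows any stored exception.
    public void GetResult() => task.Wait();
}
```

With this in place, `await someMyTask;` compiles inside any async method, because the compiler only looks for the GetAwaiter/IsCompleted/OnCompleted/GetResult pattern rather than for the Task type specifically.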
Info
Channel: dotnet
Views: 91,592
Keywords: .NET
Id: R-z2Hv-7nxk
Length: 66min 1sec (3961 seconds)
Published: Thu Mar 28 2024