import asyncio: Learn Python's AsyncIO #4 - Coroutines Under The Hood

Captions
[Music] Hi, this is Łukasz from EdgeDB, and I badly need a haircut, it's been a while. You're watching the fourth episode of "import asyncio", an introduction to the Python framework enabling asynchronous, single-threaded, concurrent code using coroutines. This series is intended as a beginner's course, so if you haven't used asyncio yet, that's perfect; just be sure to start at episode 1 if you haven't seen it. And if you have seen the first three episodes: yes, it's been a while. What we've been doing at EdgeDB is working on our Alpha 3 release, so I highly recommend you go and check that out.

It's been a while since the last episode, so we'll make up for that by explaining how coroutines are implemented under the hood. We'll see how tasks and futures build on them, and we'll spend quite a bit of time on future objects in particular. To build our understanding of coroutines, we first need to talk about what futures are; without knowing this important building block it's hard to connect all the dots, and we didn't have time to play with them in the last episode, so we're going to do it now. We'll learn about them from the user's perspective, and then we'll move on to explain their implementation. Equipped with this understanding, we'll briefly go back in time to Python 3.4 to see how asyncio was initially implemented, and what coroutines looked like at a time before async and await were added to the language. Then we can see how it all connects together in the Task class. But fast forward to 2020, and the implementation is no longer so straightforward, so we'll look at some of the gnarly C code that is used to implement coroutines natively in CPython today. Then we'll briefly look at the other coroutine-like types that you can create yourself, which can be awaited on. And finally, I'll mention the most common pitfall that you as viewers have been complaining and asking about, and how we can deal with it; I wanted to address this in this very video.

Sorry, I had to. What is a future? It is an object serving as a container for a result which we don't have yet, but hopefully will in the future. It's a standardized way for the producer of the return value to set the result, or an exception if a result couldn't be computed. In turn, the consumer of the value also now has a standardized way to check whether a result is available already, and what it is. Both parties can now communicate using an established protocol, and they can even cancel the future, essentially saying that they don't need, or cannot accept, the result anymore.

So let's see how that works in practice. A future object can report whether it's done or cancelled. If we create one, we can ask it: are you done? False. Are you cancelled? False. It's got a method to get its result, but if we don't have the result set yet, it will raise an exception. So let's set a result: "result is set". Cool. Running that same set of methods, we'll see that the future is in fact done, it's not cancelled, and the result is now available. This is a pretty nice API that allows us to communicate future results between different parts of our program. Hopefully this API isn't really surprising; it is actually kind of standardized across languages. It's a very popular construct: sometimes it's called the promise, in other languages or frameworks it can be called the deferred, but it is pretty much replicated across the board. It is very fundamental to what we're going to see next: the cool thing about futures in asyncio is that they can be awaited on, just like coroutines.
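Here is a minimal sketch of that demo as a script. The variable names and the use of new_event_loop() are mine; the video does this in an interactive session:

```python
import asyncio

loop = asyncio.new_event_loop()
f = loop.create_future()

print(f.done())        # False: no result yet
print(f.cancelled())   # False: nobody cancelled it
try:
    f.result()         # asking for a result that isn't there...
except asyncio.InvalidStateError:
    print("no result yet")   # ...raises InvalidStateError

f.set_result("result is set")    # the producer side fulfils the future
print(f.done(), f.cancelled())   # True False
print(f.result())                # 'result is set'
```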
Here we create an async function, get_result, which simply awaits on the awaitable that we give it as its sole argument. We assume that the returned value will be a string; let's just go with that assumption, that's fine. We have some basic error handling, so if there is any exception during the await, we will return some dummy value instead, and we even have some print-based logging. That's it for our function. Now, creating the future object, we can tell the event loop to set the result on the future after 10 seconds; that allows us to see what's going on. When we use run_until_complete with our get_result coroutine, passing it the future, we'll see it does in fact return the result as we set it using the loop's call_later scheduled callback. Works as intended. Awesome.

But if you look closely at our async function, it doesn't actually say Future anywhere in the type annotation. That's intentional. Thanks to the fact that futures can be awaited on seamlessly, we can compose them in powerful ways with other coroutines. To see this in practice, look: we can now wrap get_result multiple times and see if it still does what we want. Let's wait for a bit longer, 20 seconds this time, and instead of just running get_result, let's wrap three of those together. Having this, we're going to have to wait a rather longer while; I picked 20 seconds just so you can see that there are in fact some waits involved here. And yeah, there's our result; we still got it, even through this matryoshka of calls. Pretty cool; it composes very well. We could see that the inner await was blocked waiting for the future's result to be set.

Now, if we were naughty and the future doesn't get a result set, but has an exception set instead, our get_result coroutine will handle it, right? Remember, there was this try/except block that was supposed to handle our exceptions. So instead of set_result, let's use set_exception now, and see what is going to happen with our run_until_complete call. We can use just a single get_result this time. Waiting, we'll see that our error handling did in fact kick in: "oops, problem encountered", "no result". That's the fallback result that we set in the implementation. Pretty cool.

But wait, wasn't cancellation an exception as well? Let's see if that gets swallowed by our basic exception handling. So, with call_later, in 10 seconds we are only going to cancel the future instead. Running this, let's see if we swallowed it. We have not: the CancelledError managed to propagate, so cancellation works correctly. Why? Because starting with Python 3.8, cancellation errors are base exceptions, similar to KeyboardInterrupt. This works around Pokémon exception handling breaking asyncio cancellation, which used to be a rather popular problem. "Pokémon exception handling" is a kind of meme term for every time you decide to catch all exceptions ("gotta catch 'em all") without caring about their type. That is rather flimsy as an error handling technique altogether, but thanks to making CancelledError a BaseException, we are now more resilient against this kind of flimsy programming. The framework is ever-evolving.

Another thing I want to show you is that a single future instance can be awaited on by multiple coroutines. Well, if we set a value, why not let multiple coroutines read it? And indeed, that's fine. You've already seen nested get_result calls.
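A sketch reconstructing those demos in one script; the delays are shortened from the video's 10 and 20 seconds, and the exact messages are my own:

```python
import asyncio
from typing import Awaitable

async def get_result(awaitable: Awaitable) -> str:
    try:
        result = await awaitable
    except Exception as e:               # note: catches Exception only
        print("oops, problem encountered:", e)
        return "no result"
    print("got the result:", result)
    return result

loop = asyncio.new_event_loop()

# Happy path: the result is set from "outside" via a scheduled callback,
# and get_result composes: futures and coroutines nest freely.
f = loop.create_future()
loop.call_later(2, f.set_result, "spam")
print(loop.run_until_complete(get_result(get_result(get_result(f)))))

# Error path: the exception is swallowed by our broad except block.
f = loop.create_future()
loop.call_later(2, f.set_exception, ValueError("boom"))
print(loop.run_until_complete(get_result(f)))   # 'no result'

# Cancellation: CancelledError is a BaseException since Python 3.8,
# so our `except Exception` does NOT swallow it; it propagates.
f = loop.create_future()
loop.call_later(2, f.cancel)
loop.run_until_complete(get_result(f))          # raises CancelledError
```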
But what if we ran multiple get_result calls concurrently? We know asyncio.gather, so let's use that to run three of them at the same time. And in fact, it turns out this is fine too. Futures allow for a surprisingly broad array of use cases.

Finally, you can add callbacks directly to future objects, and they will be called when the future is done. Done doesn't necessarily mean that the result was set; there's also the error case, but either way it means it's over. There's a detail about how this works that is surprising to some users, so let's see it in practice. We are adding, with add_done_callback, the callback that we defined above, but I'm also adding another callback; in asyncio you can schedule multiple callbacks on a single future, and in this example the second one just stops the loop. It's kind of peculiar; well, let's see why. When we set the result, the callbacks should be running, right? But they're not. Why not? Now we run the loop forever, and suddenly we see: oh, now the callbacks have been run. It is a bit surprising: callbacks run on the event loop, not directly on the future when the result is set. But that's actually a good thing: they are now treated like any other asyncio callback, as if you had registered them with call_soon or call_later. As soon as we ran the event loop, the callbacks registered on the future were invoked as expected.

But wait, how did the event loop know about our future and about the callbacks we registered on it? Let's look under the hood. The implementation of futures is easily found in asyncio/futures.py. The Future class holds most of the functionality. You can see in __init__ that it takes an optional loop argument, and if not given, it will take the current one instead. Why does it need it? If you look at add_done_callback, it schedules the callback on the event loop that was set during __init__. That's interesting, but what is even more interesting is that the act of setting the result has some logic in it, and then schedules the callbacks to be run. Here again the future is invoking the loop's call_soon; in line 243 we can see this happening. Why is it done like this? The reason is fair scheduling: it doesn't matter how you schedule callbacks on the event loop, whether you're using call_soon, call_later and friends, or callbacks on futures; there's fair first-in, first-out scheduling. That way we are less likely to starve tasks of execution when there is a chain of futures with callbacks on them. That's a pretty interesting detail that people sometimes miss, and then there's confusion when they don't expect this kind of behavior, but all in all I find it really elegant.
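A sketch of the callback demo just described (the callback bodies are my guesses at what's on screen):

```python
import asyncio

def callback(fut: asyncio.Future) -> None:
    print("future is done:", fut.result())

loop = asyncio.new_event_loop()
f = loop.create_future()
f.add_done_callback(callback)
f.add_done_callback(lambda fut: loop.stop())  # a second callback: stop the loop

f.set_result(42)
print("result is set, but no callback has run yet...")

# set_result() only *scheduled* the callbacks with loop.call_soon();
# they run, FIFO with everything else, once the loop actually runs.
loop.run_forever()
```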
By this point you are already familiar with coroutines, futures, and tasks as users. Pretty cool. We've just seen how the future is implemented, but what about the implementation of the coroutine itself? Sorry, I had to. So let's get back to 2014, when Python 3.4 got released: that's the version that first introduced asyncio, as part of PEP 3156. That initial asyncio iteration didn't include async and await as keywords, so using asyncio was even more awkward back then. I built a Python 3.4 just to show you that in practice. From the user's perspective, instead of saying "async def" as we do right now, you had to use a decorator. So if we had a regular function doing some long-running computation, the kind we always like to simulate with time.sleep, the equivalent in async terms would be an asyncio coroutine that has a yield from asyncio.sleep() instead. Hmm, that's peculiar: why are we using yield again? Well, we'll get to that, don't worry. For now, let's check if this even works. Let's start by running the regular function, just to see that, as expected, it's slow, but it executes; cool, it returns zero. Now, there was no asyncio.run back then, so we have to grab the event loop and call run_until_complete on the async function. We do just that; as expected, we also have to wait, and then we get zero as the result. But at least now we can use gather, or whatever else, to run many of those at the same time and get all the values back without having to wait three times as long. Seeing it in practice, that is in fact what happens. Cool. Boom.

If you look closely, the difference between the regular function and the async function isn't just the @coroutine decorator, important as that is. The fundamental difference is that the async function here is a generator: it uses the yield keyword in its body, so Python recognizes it as a generator function and compiles it differently. Both the regular function and async function objects are functions; however, calling the async function reveals that it returns nothing more than, let's see, a generator instance. Cool. So let's review what a generator is, then; that will be important real soon.

You might know this already: a generator is a special kind of function that yields values one by one. This can be used in many ways, the most basic of which is iteration, or directly calling the next() builtin. Here we created a very simple generator that uses a counter variable and just yields a bunch of values until it's done. Creating a generator by calling the generator function, we see that it has some interesting machinery inside: it stores its frame even though it's not running yet. There is a frame for its variables; it's empty right now, but as soon as we call the first next(), we have our locals in the frame. Running it a few more times, we see that the code updates the variable; it behaves just like any other Python program. But it's interesting that execution stays suspended until we run it to the very end: once we call next() when the counter is greater than 9, it raises StopIteration, meaning "I don't have anything else for you, I'm done". And now gi_running says False. Interesting.

If you immediately think "this is very similar to async functions with awaits", you're right. But you probably also noticed that asyncio in 3.4 was using yield from, and not yield. What's that? Generators can access the full power of Python in their bodies, including iterating over more generators. Kids in my day would just quote the Xzibit meme and say: I heard you like generators, so I put a generator in your generator, so you can iterate while you iterate. In fact, we're doing this in the outer iterator right here, yielding from the generator we defined above. See, that works just fine when we run it in a for loop.
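To tie the decorator demo back to generators, here is a sketch in the old style. Note that @asyncio.coroutine was deprecated in Python 3.8 and removed in 3.11, so this only runs on older interpreters:

```python
import asyncio

@asyncio.coroutine              # Python 3.4 style: no async/await keywords
def async_function():
    yield from asyncio.sleep(3)
    return 0

print(type(async_function))    # <class 'function'>: a regular function...
gen = async_function()
print(type(gen))               # <class 'generator'>: ...returning a generator
print(gen.gi_frame.f_locals)   # {}: the suspended frame exists before any step

loop = asyncio.get_event_loop()
print(loop.run_until_complete(gen))   # 0, after ~3 seconds
# Three at once still takes ~3 seconds, not 9:
print(loop.run_until_complete(asyncio.gather(
    async_function(), async_function(), async_function())))
```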
But your question would probably be: hey, couldn't I just create another for loop inside that outer generator and be done with it much quicker, without having to learn a new construct called yield from? So why bother? It turns out the humble yield from expression is the equivalent of almost forty lines of code, if you want to correctly account for error handling and generator .send(). For details about why this is necessary, I invite you to go read PEP 380; I already mentioned that one in the first episode of the video series. But if you haven't heard of being able to send values into generators, this is where stuff gets interesting. The details are all in PEP 342, but for now, let's see an example. We redefine our generator with an inner counter that allows external users to change its value. See, the yield expression can return a value too. Most of the time it won't, so it will be None; but if it is not None, that means the user, the side that iterates, sent us a new value, and we're going to use that new value for iteration. In practice, as you can see right here, the generator behaves exactly like the previous one; but if you use the send() method, which is a special form of next() in that it also advances the generator one step, you can change the generator's behavior, including causing an early exit: once we move past the bound in the while condition, the generator exits and no longer yields results. The interesting part is that we can now communicate with a generator both ways. So what else is there? There is in fact a special method provided just to be able to cause an early exit, called close(). There's also a more generic method to raise arbitrary exceptions, not just StopIteration but anything, called throw(). With all those building blocks, a clever Python framework could use the fact that generators keep their frames around until they're exhausted, and treat those generators as cooperative functions. Cooperative routines. Coroutines.
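A sketch of such a two-way generator, loosely following the demo (the bounds and prints are mine):

```python
def gen():
    counter = 0
    while counter < 10:
        sent = yield counter       # `yield` is an expression: it can receive a value
        if sent is not None:
            counter = sent         # the consuming side changed our state
        else:
            counter += 1

g = gen()
print(next(g))        # 0: plain iteration
print(next(g))        # 1
print(g.send(8))      # 8: send() replaced the counter and advanced one step
try:
    g.send(100)       # pushes the counter past the bound...
except StopIteration:
    print("early exit")   # ...so the generator finishes

g2 = gen()
next(g2)
g2.close()            # raises GeneratorExit inside g2: another early exit
# g2.throw(ValueError) would instead raise an arbitrary exception inside it
```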
So after all this long introduction about generators, how did coroutines work in asyncio in 2014? Let's start here. This is asyncio/coroutines.py, and we are looking at the body of the decorator used to mark coroutines. It probably has some meat in it that changes a regular generator into something really special, right? The font is a little small, but if you look closely, you'll see that the happy case is just six lines of code. Is this a generator? Yes: assign it to coro. Are we in debug mode? If so, wrap it in CoroWrapper. Then put a magic _is_coroutine attribute on the wrapper, and return it. Done. That's disappointing, especially looking at the rest of the code. All the other lines are needed for special cases, like: what if somebody used the @coroutine decorator on a function that never yields, so a plain old function, not a generator function? We deal with that in lines 139 to 144. And in debug mode we wrap the generator in an object that lets us tell which generator we're dealing with, with some logging and an error message on unused coroutines. That's pretty much it. That wasn't very interesting: it seems that in Python 3.4, coroutines really are regular generators, just marked with a special attribute, probably for some type safety reasons.

So where is the magic that turns generators into pieces of cooperative multitasking? We've looked at coroutines.py; we've looked at futures.py before that; no dice. Let's try tasks.py next. Ha, "Support for coroutines", you say? That looks promising. And indeed, the docstring for the Task class says: a coroutine wrapped in a Future. The class inherits from Future, and its __init__ method uses a loop attribute as well. And what's that, right in the __init__ method? We call call_soon on the loop, with _step. Interesting. Let's see what that's doing. There's quite a bit of code here, but the most important piece is this: we're calling next on the coro generator. So, just a single step. There are also implementations of some exceptional conditions, like StopIteration, like cancellation, like other exceptions that can happen if there is an actual problem trying to compute the result. But in the end, you can see the meat is in next(coro). When there is no exception, there's a lot of handling, and the most important piece is another call_soon call with _step: the task is scheduling itself back on the event loop. That is pretty interesting; it almost sounds like something we've seen before.

You've also seen that image before, in the episode about the event loop. Just now we saw that the way coroutines work is indeed by just running callbacks: they run a single step of their execution as a callback on the event loop, and if there's anything left, they use the trampoline pattern that you might remember from previous episodes to schedule themselves back on the event loop. So really, what we will see is something like this. If there are many tasks, so many coroutines executing in the system, they will each have their steps executed one by one. In this particular example, the first execution of t1's _step scheduled itself back on the event loop, but before that happened, t2's _step was already scheduled, so it runs first. Fair scheduling. Then we have another t1 _step, and a new t3 _step. What happens later, who knows: maybe t1 and t2 are done, so only t3 is left, and later more tasks; or maybe they all continue, scheduling themselves back on the loop one by one. The most important thing here is to realize that we have just a single thread of execution, so just a single callback being executed at any given time; but thanks to slicing all this code into small pieces, we achieve concurrency: many things happen at the same time.
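To make the trampoline concrete, here is a toy sketch of the 2014 design, with my own made-up MiniLoop and MiniTask; the real Task also routes results through futures and handles cancellation and exceptions:

```python
import collections

class MiniLoop:
    """A toy event loop: just a FIFO queue of callbacks."""
    def __init__(self):
        self._ready = collections.deque()

    def call_soon(self, callback):
        self._ready.append(callback)

    def run(self):
        while self._ready:
            self._ready.popleft()()      # run callbacks first-in, first-out

class MiniTask:
    """Run one step of a generator-based coroutine per callback,
    then schedule ourselves right back: the trampoline pattern."""
    def __init__(self, coro, loop):
        self.coro = coro
        self.loop = loop
        loop.call_soon(self._step)       # like Task.__init__ in asyncio

    def _step(self):
        try:
            self.coro.send(None)         # advance to the next yield
        except StopIteration:
            return                       # finished: do not reschedule
        self.loop.call_soon(self._step)  # back on the loop for the next step

def ticker(name, n):
    for i in range(n):
        print(name, "step", i)
        yield                            # pretend we're waiting for I/O

loop = MiniLoop()
MiniTask(ticker("t1", 2), loop)
MiniTask(ticker("t2", 2), loop)
loop.run()   # t1 step 0, t2 step 0, t1 step 1, t2 step 1: fair interleaving
```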
Okay, I know, the joke is ruined, but I needed this backdrop to let you know that we are moving back to Python 3.8.3, and we'll look at some modern asyncio code now. First, some important changes in asyncio the framework. Note that the Task class is actually implemented in _asyncio. What's that? Oh, a .so, _asyncio: those are typically hints that what we are dealing with is a C extension module. But there's still a tasks.py file, we just looked at it; it links to the C implementation, but it also provides a pure Python implementation. So let's look at this code for a bit. The Python Task class in 2020 inherits from the Python Future implementation. Not much has changed, in fact, since we looked at futures.py in Python 3.4. There is some new handling for things like context variables, and some APIs that evolved over the years, but we still use the same trick of taking the loop and scheduling ourselves on the event loop with a step. Only now it's called __step, with a double underscore; I guess this is to protect it from being clobbered by subclassing, which is now also supported. Cool. It works exactly as before, but coroutines, as constructs in the language, now have their own type. They don't have __iter__ and __next__, but we use their send method to signify that stuff is happening. The exception handling is exactly the same, and the trampoline pattern is still in place: we put ourselves back on the event loop after we are done with a single iteration.

That is interesting, but this is not the implementation that asyncio runs by default. As we can see, it imports _asyncio and sets that as Task. So where does that live? It lives in Modules/_asynciomodule.c. See, it's almost 4,000 lines of C, so if you're brave enough... nah, I'm kidding. You can actually see for yourself that most of the code is a direct parallel of the Python version. In fact, the task step here is just a small wrapper around the task step implementation; if we click through, looking past the C boilerplate, we'll notice that we do have a send, and there is a throw right beneath it, and so on. It's still the same behavior we just observed in the Python version. And amid more boilerplate, we can also find our trampoline: task_call_step_soon. Pretty awesome, right? Looking at this, you can pretty much see that all the implementations should stay in sync, but now we are using one that is way more efficient than in Python 3.4 times.

Now, a recap from previous videos: async functions are regular functions, and when we call them, what we get back is a coroutine. Instances of the coroutine type hold their state in special cr_* attributes. We've looked at the gi_* attributes of generators before, and we can see a few very similar behaviors here: there's a cr_frame, which is very similar to the gi_frame we've seen before, plus send and throw. As we learned from the comment in tasks.py, there is no longer a __next__ on our coroutine objects, but we can achieve the same functionality using send alone. This coroutine API looks decent; let's see how it's implemented. You might be a little surprised that the native coroutine implementation still lives in the same file where generators are implemented. You'll find plenty of definitions: see the cr_* attributes on the object, with the methods below them. And if you look closely, you'll notice that the three methods we're using on native coroutine objects are, in fact, the native generator methods. We don't even have to customize those; the default ones provide all the functionality we need.
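A quick inspection you can run yourself (the function name is mine):

```python
import asyncio

async def example():
    await asyncio.sleep(1)

coro = example()          # calling an async function only creates a coroutine
print(type(coro))         # <class 'coroutine'>: its own type since Python 3.5
print(coro.cr_frame)      # the suspended frame, analogous to gi_frame
print(coro.cr_running)    # False: nothing has driven it yet
print(coro.send, coro.throw)  # generator machinery; note there is no __next__
coro.close()              # tidy up so Python doesn't warn "never awaited"
```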
Now I really want to drive the point home about how tasks run. The coroutines here are split into steps, but how much does a single step actually run? To see this, let's build an example async function that has a few awaits here and there, so we can test this in practice. We start with a simple example async function: it takes a count argument and, let's say, returns a string. And it's mandatory, it's in the law: I have to use asyncio.sleep, so let's use that right from the get-go. If count is 0, we return early; but if not, we have a loop, and that loop runs another asyncio.sleep per iteration and then returns some value. That value, to be interesting, awaits another example call, just with a smaller count. This will let us test some recursion, or, as you can call them, await chains, or "yield from chains" as they're sometimes still called in the documentation. To really see what's going on, we have to subclass the Python implementation of the Task. This lets us wrap around the double-underscore step to add some logging before the step is invoked and after it returns. You'll have to excuse me for hacking around the name mangling Python does to protect private methods, but you gotta do what you gotta do. As you can see, we have a preamble that says: we have a step on a task of a given name, is it done? We run that step, and after we've run it we print again that the step is done, what name it has, and whether the task is done. To actually use this on an asyncio event loop, you have to use set_task_factory. It happens to have a different API than the Task __init__ method, which is a bit unfortunate, so we're using a helper lambda for this. Now, to really see what's going on inside that async function, it would be helpful to have more prints sprinkled before and after the awaits. I know print debugging isn't the thing, but in this particular example it gives us nice output to analyze. So I edited the function; it's the same as before, just obfuscated by all those prints: "before first await", "after first await", "returning result", "before await inside loop iteration", and so on. The point is, it lets us see how far a given step actually got before being done.

Let's start with the simplest case. If we pass 0 as the count argument, example returns early through the "if count == 0" guard. Doing this, we see our nice logging, and there are already two steps. Interesting. We never recurse, but we can see that the first step runs until the first await, and then there's a second step invocation, which runs from that await until the return. By the return, we can see that the task is done: done equals True in the closing step log line.

Things get significantly more interesting when we increase the count argument. If we just pass 1 instead, we already have twice the number of steps. Hmm. The start looks almost the same, but because count is not zero, we are not returning early; we enter the for loop and stop at the await that's there. That's steps one and two. The interesting thing happens in step three: we go past that await, and since the for loop was only one iteration, we report "after await inside" and then "before await on example". But wait: step three also logs a line saying "before first await", indented a bit more. Why? It turns out Python already treated part of the awaited example() call as part of that same single step: in that same step, we ran the recursive example until its first await. Finally, the last step keeps running the recursive example coroutine until it, like the first case we saw, hits the "if count == 0" guard and returns. Let's look at this closely for a while, because this is something profound: even if we thought we understood what was going on before, this is really showing us everything, step by step. For simplicity you can say that a single step runs code until the next await expression, but there's more to it: as you can see in step three, asyncio recurses into the next example coroutine and runs it until the next suspending await is found. Similarly, the final fourth step both finishes running the recursed example coroutine and returns from our outer example coroutine, completing the task. Again, the completed task says done equals True on the last log line.
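A sketch reconstructing the instrumented demo. It leans on the private pure-Python Task class (asyncio.tasks._PyTask) and its name-mangled _Task__step, which is exactly the hack being described, so treat it as illustrative rather than stable API; the print strings and sleep durations are mine:

```python
import asyncio
from asyncio import tasks

async def example(count: int) -> str:
    print("  before first await")
    await asyncio.sleep(0.1)
    print("  after first await")
    if count == 0:
        print("  returning result")
        return "result"
    for i in range(count):
        print("  before await inside loop, iteration", i)
        await asyncio.sleep(0.1)
        print("  after await inside loop, iteration", i)
    print("  before await on example")
    return await example(count - 1)      # the await chain

class LoggingTask(tasks._PyTask):
    # Task's private `__step` is name-mangled to `_Task__step`,
    # so that's the name we have to override (and call via super()).
    def _Task__step(self, exc=None):
        print(f"step begin: {self.get_name()} done={self.done()}")
        super()._Task__step(exc)
        print(f"step end:   {self.get_name()} done={self.done()}")

loop = asyncio.new_event_loop()
# set_task_factory wants a (loop, coro) callable, unlike Task's __init__.
loop.set_task_factory(lambda loop, coro: LoggingTask(coro, loop=loop))
print(loop.run_until_complete(example(1)))
```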
So this is interesting: it seems that steps really only stop at actual I/O boundaries. That's something we're not going to go into in much more detail right now, but it already shows how Python slices our computation step by step. Now, there's one more thing here that might be a little harder to notice: note that there is only one task the entire time. The log lines all say task 1, task 1, task 1, but we have two example coroutines. Why is that? It's because once we wrapped the outer coroutine in a Task object, we already created this mechanism, with our trampoline, that follows the steps as they go; execution inside it can just follow regular generator-based computation. This is why PEP 380 is titled "delegating to a subgenerator": from asyncio's perspective, and from the user's perspective, the inner coroutine's execution is just part of the outer coroutine's execution. We are pulling at it one await at a time, looking at the innermost await, stopping only at await boundaries. You can think of it like this: a Task object is the gateway to concurrent computation. It is where the trampoline pattern is used, and where your computation is split into multiple discrete steps. Computation within a coroutine most often won't have its own wrapping task, unless you create one using asyncio APIs; other than that, it's just coroutines all the way down. Finally, in real-world asyncio programs there will always be more than a single task. In a web server, for example, you will have many of them, handling different users' requests, and that all happens at the same time. Each one runs only a short amount of code before coming across an await, and then it schedules itself back on the event loop and yields execution back to it, so the event loop can choose to execute more callbacks, or react to I/O events, and so on. This is how concurrency is achieved.

For our final subject, let's go back to the beginning and look at the modern pure-Python implementation of the Future class. One interesting thing is that it now supports an __await__ method; in fact, __iter__ is now simply an alias to __await__, so users can, for now, still use the legacy yield from. This is already a hint at how you can create your own awaitable types. Custom awaitable types are not very common in user code; I want to show you this capability to complete your understanding of the four fundamental types in asyncio, but I'll be surprised if you implement your own type in your first big project. Probably not. Looking at the data model documentation, which you probably should browse through anyway, since it's Python documentation that explains things really well: the __await__ method must return an iterator, and that's pretty much it for the documentation here. So let's think about why this is a requirement. Coroutines are their own type now, that's true, but they are still generators internally in a lot of their behavior; they share a lot of code. In fact, the implementation of coroutines lives in genobject.c. So let's think of them as generators for a second, and let's think of yield from. What does a generator do? It generates things. Very often in asyncio code you don't have just one yield from; you have a chain of them. We already saw a chain of awaits in our silly example, with example awaiting example, so it's actually kind of nested. That is typical of asyncio applications. But what does this generator generate? When you call iter() or next() on a generator, you expect to get an item, if not a StopIteration; you actually need something. So at the very end of the chain, it cannot be just yield from all the way down: there's got to be a yield, there's got to be some termination of this chain, so that at some point we actually return a thing. Aha: this is exactly why, when implementing __await__, you are supposed to return an iterator that will yield a value. We've seen this in practice: steps in asyncio being suspended on some actual I/O, steps being kind of arbitrarily broken into; this is why that happens. The break happens at the terminating yield. With custom awaitable objects, you need to return an iterator which terminates the chain like this. The Future already does this, but it returns itself; it's kind of a special case.
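Here is a minimal custom awaitable as a sketch; the class name is made up, and the bare yield in it is exactly the terminating yield just described:

```python
import asyncio

class SkipABeat:
    """__await__ only has to return an iterator; writing it as a
    generator is the easiest way to satisfy that requirement."""
    def __await__(self):
        # The terminating yield at the end of the await chain. Yielding
        # a bare None suspends us once: the task's current step ends here
        # and the trampoline schedules the next one (like asyncio.sleep(0)).
        yield
        return "resumed"

async def main() -> None:
    print(await SkipABeat())   # 'resumed', after one trip through the loop

asyncio.run(main())
```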
That's it. At this point you have a thorough understanding of the building blocks behind asyncio. Today you learned how coroutines are implemented in CPython, how futures and tasks build on this functionality to power asyncio, and how the step functionality uses the trampoline pattern to enable concurrent execution.

Now for the requested thing. This was something some of you mentioned in the comments on previous episodes: you'd like to know how to deal with this annoying problem that asyncio programmers encounter when they first learn the framework. We're going to do a full episode on error handling, debugging, and testing at the end of the series, but since this is such a common question, let's address the top gotcha right now. Let's look at the text editor with our example coroutine again. I stripped the obfuscating prints from the coroutine, and I also stripped one of the awaits. If we just run it: "coroutine 'sleep' was never awaited". That's something Python will already tell us about, but it doesn't say where we made the mistake. I've said you should always set PYTHONASYNCIODEBUG=1 when stuff is going wrong; if you also set PYTHONTRACEMALLOC=1 at the same time, it will tell you where you forgot your await. That's one particular case that is often a source of grief and confusion, but there's an even worse one: when you actually use the return value, and that return value happens to not be awaited on. Python ran your code perfectly fine, but it returned a coroutine object instead of your result. But what's this? mypy, through the type annotations on our async def, told me: hey, you are doing something fishy here; the return value type is incompatible with what you said you were going to return. So if you thought the type annotations I've been putting in all this time are just decoration, think again. Every now and again there is an await that you're going to forget, and when that happens, at best Python will raise a warning; if you're less lucky, it will execute your code until type errors happen a long way away from where your problem was actually introduced. To fix that, use type checking: it will tell you exactly where you made your mistake.
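A sketch of the gotcha (the function names are mine; the mypy complaint is paraphrased from what it prints for this code):

```python
import asyncio

async def compute() -> int:
    await asyncio.sleep(0.1)
    return 42

async def main() -> int:
    result = compute()   # bug: the `await` is missing
    return result        # mypy: Incompatible return value type
                         # (got "Coroutine[Any, Any, int]", expected "int")

print(asyncio.run(main()))
# Prints a coroutine object instead of 42, plus:
#   RuntimeWarning: coroutine 'compute' was never awaited
# Running with PYTHONASYNCIODEBUG=1 PYTHONTRACEMALLOC=1 additionally
# points at the line where the never-awaited coroutine was created.
```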
Finally, just as an excuse to show you my favorite illustration of the series once more, a comment about asyncio's performance. What the event loop is doing with all our tasks is splitting them into tiny steps, and there are very many of those. The event loop itself is already some computation; we've seen how it works a few weeks back, in the first episode. There is a selector or a proactor that handles I/O events, and there are collections the event loop keeps internally for its own purposes, for the callbacks, and so on. None of this is free: all of this bookkeeping has a price compared to regular synchronous code. So how come this model is worthwhile and provides better performance in the end? The reason is that as soon as there is networking involved, the actual network latency between the server and the client is typically orders of magnitude higher than the computational cost of maintaining an event loop and splitting your coroutines into small steps. At that point it is much more efficient to use this green threading, or cooperative multitasking, to make sure you use your single thread to its max capacity. You could just use regular threads, but they are not very easy or efficient in Python, in many ways. asyncio at least provides you with a clearer deal. That clearer deal, admittedly, sometimes has thorns on it: for example, if any of those steps runs for a long time, it holds up the entire event loop. This is something you need to plan for, and you need to be able to discover when it happens. Blocking calls should not happen within async defs; sometimes they will, and you'll discover them and fix them. Whenever you see big latency variation in your servers, there is typically some blocking computation happening, or there might just be regular Python computation that is never split by even a single await for a long time; there's a small sketch of this failure mode below. But for regular web servers, for regular networking systems, because it's not just the web obviously, asyncio provides you with very nice capabilities to ensure that you maximize a single thread; then you can run multiple copies, up to the number of CPU cores you have on your machine. Now, the operating system also has a scheduler, so couldn't you skip all this and just run synchronous processes, maybe tens of them, letting the operating system schedule them while they wait for I/O? Why is that not better? It's not better because each of those processes requires memory. Especially with the garbage collection and reference counting we have in Python, processes are not very copy-on-write friendly; they're going to use quite a lot of memory. We're not particularly more hungry than any other runtime-based programming language, but what I'm trying to say is that you can realistically handle thousands of clients in a single asyncio process at the same time, while you're going to have issues running thousands of processes on even powerful boxes. In that scenario you're simply getting a better bang for your buck using asynchronous programming, even though it's sometimes annoying because you're forgetting your awaits.
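A small sketch of that failure mode, with my own names and timings:

```python
import asyncio
import time

async def naughty() -> None:
    time.sleep(1)               # blocking call: freezes the entire event loop
    # await asyncio.sleep(1)    # the cooperative version would yield instead

async def main() -> None:
    asyncio.create_task(naughty())
    for _ in range(3):
        t0 = time.monotonic()
        await asyncio.sleep(0.1)            # should take about 0.1s...
        print(f"observed latency: {time.monotonic() - t0:.2f}s")

asyncio.run(main())
# The first iteration reports ~1.1s: one blocked step held everyone up.
```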
Sure enough, having all this knowledge from the episode so far, we can start building applications with more exciting examples; I hope we won't have to come back to asyncio.sleep, and believe me, I'm as tired of it as you are. So what's next? Yet again, this particular episode turned out to be longer than I thought it would; there's more information in it than I planned to put in a single episode. But that's nothing compared to what I'm planning for the next one. The ambitious goal for episode 5 is to introduce you to async context managers, async iterators, and async generators. Now that coroutines have their separate type in Python, asyncio can differentiate them from regular generators, so we can have asynchronous generators. They're actually very powerful, so we're going to be talking about them. More importantly, we're also going to see some of the best of what asyncio has to offer as a library: we'll be creating connections and servers, watching file descriptors, spawning subprocesses, which is one of my favorite APIs in Python, period, and using all sorts of synchronization primitives. You actually already know one: the Future can be used as a synchronization primitive. If you are now puzzled how, treat this as your homework: try to figure out how you would use a Future as a synchronization primitive in asyncio. In fact, what I would like to know from you is: what is your favorite, and your least favorite, thing about async programming with asyncio, especially if you're new to this? This entire series exists to teach you to program with it, to become comfortable with it, so if you put your answers in the comments, they might inform our future episodes, just as this particular gotcha we covered today did. We can even make more drastic changes to our plan depending on what you find most valuable in these videos. So subscribe to make sure you get notified when the next episode is out, and I promise you won't have to wait a month for it. Thanks for watching, see you next time. [Music]
Info
Channel: EdgeDB
Views: 8,397
Rating: 4.9742765 out of 5
Keywords: python, asyncio, edgedb, async, await, coroutine
Id: 1LTHbmed3D4
Length: 58min 58sec (3538 seconds)
Published: Mon Jun 15 2020