Implement Rust Async Future Trait with Tokio Executor πŸ¦€ Rust Programming Tutorial for Developers

Video Statistics and Information

Captions
hey guys, my name is Trevor Sullivan and welcome back to my video channel. Thank you so much for joining me for another video in our Rust programming tutorial series. In our last video we introduced the concept of async operations in Rust, and we saw how async lets us build applications that feel more responsive, because tasks can interoperate with each other and pause execution temporarily while other tasks run, even on a single-core processor. Async is not the same as multi-threading, where we actually do work in parallel; async is what we call concurrent programming, where many tasks can be in an actively running state but only one of them gets processor time at any given moment.

Now there's a crate called Tokio, and it's probably the most popular asynchronous runtime for Rust. As we discussed in the last video, we need a runtime executor to actually execute async tasks; otherwise there's nothing to push them forward, nothing to call the poll method on a future. So in this video we're going to look at something a little different from smol, which we used previously: we're going to look at Tokio instead. Tokio is a much more robust library. It provides asynchronous APIs for things like file system operations, network socket operations, and so on, and it's a really popular library, so there's a lot of momentum and support behind it. If you're building a production-grade application, Tokio is most likely the crate you'll want to reach for to do async work, and you can do multi-threading with it very easily as well.

So we're going to take a look at how to use Tokio, but we're also going to add a second really important concept to this video: how to implement your own custom future. Back in the video where we introduced smol and futures, we discussed how putting the async keyword in front of a function makes that function asynchronous, but you can also implement your own custom data structures that track their own internal state and implement the Future trait from the standard library, and then use an async runtime executor like Tokio or smol to drive those custom types to completion, because they implement Future too. We're going to be covering a lot in this video, but for starters I just want to introduce Tokio, go over some of the fundamentals of the Tokio APIs so you have a better idea of where it's applicable, and look at how to do single-threaded versus multi-threaded operations in Tokio.

Let's start with the documentation for Tokio. If you go to the Tokio website at tokio.rs, and that's T-O-K-I-O for anybody who's listening and not watching, that's the landing page for Tokio. They have some decent documentation there, so if you're brand new to Tokio and looking for a beginner's guide to its capabilities, it's a great place to look. But if you're a little more experienced with Rust, if you've been following along so far, you've probably gotten the idea that the authoritative documentation lives on docs.rs, the main documentation site for Rust libraries, and that's really the best place to go to find out how the various APIs work.
One thing I do want to point out is that the Tokio crate is broken down into lots of feature flags, so there are many features you can enable or disable. If you do a plain installation of Tokio, some essential features that you might expect to be available aren't enabled by default, so we'll need to make sure we tack on some features. As you can see in the documentation, they actually recommend going with the full feature set if you're new to Tokio, so you don't run into any roadblocks. Personally, I kind of like running into those roadblocks, because you get quick feedback from the rust-analyzer extension for VS Code indicating that a particular feature is missing; you just go back to your command line, do a cargo add specifying the missing feature, and pick up where you left off. That helps keep your binary size a little smaller and your compilation times a little faster. The full feature set is a nice crutch to have as a fallback, but I'd encourage you to practice installing a base feature set and then adding the specific features you need.

Now if we look at the attribute macros in the documentation, this is really the main way I think most people are going to use Tokio: a special macro called main. The main macro is applied to the main function of your Rust application to designate that you want Tokio to wrap that main function up as an async function that becomes the entry point into your application. When we apply the main attribute from the Tokio crate to our Rust main function, it allows us to define main as an async function, whereas normally, without an async executor, you can't have an async entry point into your application. So we're typically going to want this macro, and what I want you to notice is the little blue-background text right after the name of the attribute macro: it tells us which crate features we need to enable to get access to it. rt, which is a feature of the Tokio crate, and macros, another feature, are not part of Tokio's default feature set, so if you just do a cargo add tokio in your Rust project, the rt and macros features aren't enabled and you won't have access to this main attribute macro, which means you can't create an async main function. All we need to do is install either the rt or the macros feature for Tokio, and that makes the attribute macro available. The same idea applies to the other modules of the Tokio crate as well: one of the nice things about Tokio is its asynchronous file system operations, all wrapped up in their own module, and you'll see in that same blue-background text that there's a crate feature called fs that lets you include the async file system operations in your Rust project.
If you don't plan on doing any async file I/O, you don't have to include that feature at all. Same thing for process management: if you're not planning to asynchronously launch any external commands, you don't need to include that feature in your project, and that helps keep your resulting binary a little smaller and your compilation times shorter. There's also a signal crate feature that lets you handle operating-system process signals like SIGHUP, SIGKILL, terminal window resizes, things of that nature. That's a really cool capability that lets you respond to operating-system-level signals on both Windows and Unix, so it's compatible with both. There are also features for working with time operations. Any time you see one of these crate features listed, it's indicating what you need to enable to get access to those particular APIs. Once we drill into the main feature of Tokio, the async executor, what they call the Tokio runtime, you'll notice we need the rt feature to get access to the types exported by that module. So depending on which macros or modules you want to use in your application, you might need to enable certain features, and we'll take a look at how to do that.

Now in addition to using the main attribute macro, there's a second syntax we can use to build an async executor runtime with Tokio. Using the attribute is pretty straightforward: we just add the tokio::main attribute to our asynchronous main function. But inside the runtime module there's also a thing called a Builder, and that lets us build out a Tokio runtime ourselves instead of using the main attribute macro on our entry point. In that case we'd have a synchronous main function, but inside that synchronous main function we create an async runtime and then call certain functions on it to perform async operations in a blocking fashion. If you remember the earlier video in this Rust playlist that introduced async operations using the smol executor, we used a function in smol called block_on. If we search for block_on here, Tokio has a similar function on the runtime that lets us pass in a future and block the main thread until that future has completed, so our main thread won't exit and cause the future to be aborted before it even has a chance to run.
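As a rough sketch of that second style, assuming only the rt feature of Tokio is enabled, a hand-built runtime looks something like this:

```rust
use tokio::runtime::Builder;

// requires the "rt" feature, e.g. tokio = { version = "1", features = ["rt"] }
fn main() {
    // build a single-threaded runtime by hand instead of using #[tokio::main]
    let rt = Builder::new_current_thread()
        .build()
        .expect("failed to build Tokio runtime");

    // block the synchronous main thread until the future completes
    rt.block_on(async {
        println!("hello from a hand-built Tokio runtime");
    });
}
```

Builder::new_multi_thread(), behind the rt-multi-thread feature, builds the multi-threaded flavor the same way.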
So let's take a look at how to include Tokio in a project and get started completely from scratch. We're not doing any copy-pasta type of things in this video, because I really want you to be able to implement this on your own, in your own project, from scratch. We're going to jump over into VS Code, create a brand new project, include Tokio, look at the attribute macro called main that denotes our async main function as the entry point to our asynchronous program, and also look at how to enable multi-threading and how to use that Builder pattern in Tokio. If we don't have time in this video to get to implementing a custom future, we'll do that a little later, maybe in a different video, but I'm hoping to package all of this up together in the same one.

Before we get into the actual content of our hands-on demo, I wanted to invite you to subscribe to my channel. I'm an independent content creator, so any support you lend to this channel helps me bring you content on Rust and other open source tools. Head over to my channel on YouTube, Trevor Sullivan, and check out my Rust programming tutorial playlist. If you learned something from the video, please leave a thumbs up or a like to let me know, leave a comment down below with the kinds of things you're interested in seeing and your thoughts on this video, and check out the pinned comment; those affiliate links help me fund the development of future content for this channel, so please use them if you're going to make a purchase.

In any case, let's spin up a project. I'm on my Rust server, a remote Linux VM, as I am in most of my videos; I think there was one video where I used Windows locally just to simplify things, but in general I develop against a remote Linux VM. I'll make a project folder called async-with-tokio, just to make sure there are no naming conflicts with crates or anything like that, cd into that directory, do a cargo init to create a Rust binary, and then do Ctrl+K Ctrl+O to open it up inside our editor, which causes the VS Code window to refresh. You should also have the rust-analyzer extension installed, because that's how we get things like auto-completion, and I cover the whole dev environment setup process for Rust on a headless Ubuntu Linux server in the very first video in this Rust playlist, so make sure you check that out along with the other videos in the series.

Inside our project, the first thing we want to do is bring in Tokio as a dependency. We've got our default main.rs with hello world, but we're going to get rid of that and turn this into an async main function. Normally, if you just put the async keyword on main, you'll see that Rust doesn't allow the entry point of an application to be async, because again there's no executor, nothing to hold the future and drive it to completion, and that's why we need Tokio. So down in the terminal we do a basic cargo add tokio, which installs the default feature set. As you can see, there are a lot of features, barely fitting on the screen, that are disabled by default. Over in the Tokio documentation we saw that the async file system operations are not included by default; we've got things like macros, which enables the main attribute macro we can use to annotate our main function as the entry point, and we've also got rt and rt-multi-thread, so if we want to build a custom Tokio runtime using the Builder type, or if we want to do multi-threading, we also need to enable those runtime features.
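Just to illustrate the point about the entry point: this is what cargo init generates, and what happens if you naively make main async without a runtime attribute (the compiler error wording is paraphrased from memory):

```rust
// src/main.rs as generated by `cargo init`
fn main() {
    println!("Hello, world!");
}

// Changing the signature to `async fn main()` without an executor attribute
// fails to compile, with an error along the lines of:
// "`main` function is not allowed to be `async`"
```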
So for starters, let's try putting the tokio::main attribute on our main function. We get an error saying it couldn't find main in tokio, and that's because we haven't enabled the necessary feature. Back at the root of the crate documentation, if we scroll down to the attribute macros, it tells us right there: enable either rt or macros and you'll be able to use this main macro in your project. So once again we do a cargo add tokio with the macros feature, and now the Tokio macros are available and we should be able to use that annotation. It may take a moment for rust-analyzer to analyze all the metadata, but once the feature is enabled this becomes a valid reference; we can even type use tokio and see main show up in the completion list.

Now we're getting another error, telling us that to use this main macro we have to bring in the runtime or the multi-threaded runtime for Tokio. So once again we go back to the feature list and say, well, I need the Tokio async runtime, so I need to enable the rt feature. We do another cargo add, and we'll add the standard single-threaded runtime rather than the multi-threaded one. Now the attribute complains that the default runtime flavor is multi-threaded, but we haven't enabled the multi-threaded feature. If we look at the documentation for the main attribute macro, we see the syntax for specifying the threading model: the default, if not specified, is multi-threaded, but we can pass a flavor parameter to the attribute macro and specify current_thread instead. Unfortunately auto-completion doesn't work for that value, but if I type something invalid like asdf, rust-analyzer shows an error saying there is no flavor called asdf; the only supported flavors are current_thread and multi_thread. So even though completion doesn't help here, the error messages still tell you what works and what doesn't; if I accidentally add an s and write current_threads, that doesn't match the expected value either, so it throws an error. That's how we switch from the default multi-threaded flavor to the single-threaded model.

Now that we've installed the necessary features and switched to the current_thread flavor, the application compiles and runs just fine, but it isn't actually doing anything yet. So how do we do something inside this function? Because we now have an async main entry point, we can call other async functions. If I write an async function called test_something that does a println of hello from Tokio, and then call test_something from main, remember that async functions return futures, so for it to actually execute we need to .await that future, and that tells the Tokio executor to drive the future to completion before we proceed with the rest of main.
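As a rough sketch, assuming the macros and rt features have been added, the entry point at this stage looks something like this:

```rust
// requires tokio features "macros" and "rt"
// (e.g. tokio = { version = "1", features = ["macros", "rt"] })

async fn test_something() {
    println!("hello from Tokio");
}

#[tokio::main(flavor = "current_thread")]
async fn main() {
    // .await drives the future returned by test_something() to completion
    test_something().await;
}
```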
So now we get hello from Tokio in our output. At the moment we're not doing anything that would necessitate multiple threads, but I want to show you how we can prove how many threads are being spun up. I'll put a little sleep in here: std::thread::sleep, which takes a Duration as input, so std::time::Duration::from_millis, and we'll wait for 5000 milliseconds, which is five seconds. So it will sleep for five seconds and then print our statement.

Before we run this, I'll bring up a separate terminal; I could do this inside VS Code but it would get in the way, so I'll use a separate window and a utility called btop, which is a process monitoring tool. It needs a minimum terminal size, and hitting the minus key decreases the refresh interval, which defaults to 2000 milliseconds; it wasn't responding at first, but once it was I brought the refresh down to 500 milliseconds. Then we hit F to filter the process list. Right now nothing is running under cargo, so nothing shows up, but if we do cargo run and switch back, the process sometimes shows up under cargo and sometimes under the target binary name, so we try both filters. I'll also collapse the memory and network I/O panes so the process window expands, and now we can see a column called threads that was hidden because of the size of my window. If we do a cargo run, you can see, just for a split second, that the thread count is one, and that's because we changed from multi-threading to single-threading.

But let's say we want to change it back to multi-threading so we can run multiple futures in parallel and let the Tokio runtime figure out which futures to execute on which threads and CPUs. We change the flavor back to multi_thread; we could either remove the flavor argument entirely, since multi-threaded is the default, or set it to multi_thread explicitly. Now we get that error again saying we don't have the multi-threaded runtime installed, so we need another cargo add with the rt-multi-thread feature. With that feature included, everything looks fine in the editor, and if we do cargo run we should be able to see multiple threads spin up.
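A minimal sketch of that experiment, assuming rt-multi-thread has been added alongside macros and rt; the exact print statement doesn't matter, the point is keeping the process alive long enough to inspect it:

```rust
// requires tokio features "macros", "rt", and "rt-multi-thread"
#[tokio::main(flavor = "multi_thread")]
async fn main() {
    // sleep long enough to inspect the process and its thread count in btop;
    // std::thread::sleep blocks the current thread, which is fine for this experiment
    std::thread::sleep(std::time::Duration::from_millis(5000));
    println!("hello from Tokio");
}
```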
Let's come back and do cargo run again, and now we can see that four threads are spun up. The number of threads is determined by the number of CPU cores you have: on this system I have three CPU cores, zero, one, and two, so there are three worker threads, one per core, plus the main thread, which totals four. You can optionally specify the number of worker threads as well; there's another parameter on the main attribute macro called worker_threads, so we can say worker_threads = 10 and that spins up more threads. If we do a cargo run and switch back to btop, now we have eleven: ten worker threads plus the main thread. So Tokio not only plays the executor role of pushing futures forward, it also spins up multiple threads, and when you spawn async futures the Tokio runtime assigns each of those futures to a thread.

Now what I want to do is look at how to implement a custom future by implementing the Future trait that's built into the Rust standard library. This lets us create our own custom types that are futures, and then we'll use the multi-threaded Tokio runtime to actually execute them. I'll comment out the test function and the call to it, and for this example I'm going to use racing: an F1 racer. If you think about an F1 racer, you've got those fast cars going around some kind of track doing a number of laps, maybe five or ten. I'm going to represent one of these racers as a data structure, implement Future on it, create multiple instances of these racers, and try to discover the best lap time for each racer; that's the result we ultimately want back from the future. But every time a racer completes a lap, we want to hand control back so the runtime can execute other work: one racer does a lap, then somebody else does a lap, and so on. We want to be fair with our scheduling, so we carve up the entire race into laps, and each lap is a separate unit of work that the executor can delegate to a particular thread. It might sound a little confusing, and I wouldn't blame you if you felt confused by this stuff, but we're going to implement it and hopefully it makes more sense as we go. We can also dynamically determine which thread we're executing on, so we can see in the output the different racers, and even the individual laps each racer completes, getting assigned to different threads.

We'll start by defining a simple data structure representing the object we want to create: I'll call it F1Racer. An F1Racer is going to have a name, which is a String, and a completed_laps field to keep track of how many laps the racer has completed.
We'll also keep track of the number of laps we want the racer to complete, as an unsigned 8-bit integer as well, so we have a name, completed laps, and a total number of laps to complete. We also want a best time, so we'll add best_lap_time, measured in seconds for now, as a u8, assuming a lap can be done in under 255 seconds; of course you could use a different unit of measure or a wider type like u16.

Now we want to make it easy to construct new racers, so we'll do impl F1Racer and create a new function that instantiates a racer. Inside it we return an F1Racer constructed with some default values, and of course the function's return type is F1Racer. We'll give it a name, I'll use Max Verstappen, he's an F1 racer, set the number of laps we want to complete to five, and set completed_laps to zero, because when you start a new race you haven't done any laps yet. For best_lap_time I'm going to use the maximum value of an unsigned 8-bit integer, 255, because the first lap we complete should be some number under 255, maybe 80 or 85 seconds. We start with the assumption that the worst possible lap time is the default, and once the first lap completes, say at 85 seconds, that value gets assigned as the new best lap time. We can always override these values after construction by making the binding mutable; the new function just helps us instantiate objects with sensible defaults.

The other thing I'm going to do, and this might seem a little weird from a data-structure standpoint, is store the lap times for each individual racer as a Vec<u8>. That Vec contains their lap times, so when we poll the future we know what the current lap time is, and we can customize it each time we construct a new racer. For lap_times I'll use 87, 64, 126, 95, and 76, making sure the number of lap times matches the total number of laps to complete. The way I'm going to implement this, I'll be popping values off the end of the Vec, so 76 is lap number one, 95 is next, then 126, then 64, and finally the fifth lap will be 87.
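Putting that together, here's a sketch of the data structure and constructor as described; the exact field names are my reconstruction of what's typed in the video:

```rust
struct F1Racer {
    name: String,
    laps: u8,            // total laps to complete
    completed_laps: u8,  // laps completed so far
    best_lap_time: u8,   // best lap time in seconds
    lap_times: Vec<u8>,  // per-lap times; popped off the end, one per lap
}

impl F1Racer {
    fn new() -> F1Racer {
        F1Racer {
            name: "Max Verstappen".to_string(),
            laps: 5,
            completed_laps: 0,
            // start from the worst possible time; the first real lap will replace it
            best_lap_time: u8::MAX,
            lap_times: vec![87, 64, 126, 95, 76],
        }
    }
}
```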
We can always override those values any time we create a new racer. Now let's implement the Future trait, and this is where things get a little more interesting. We want to implement std::future::Future for the F1Racer type. This trait defines two key things we need to provide. First is the type of output the future is going to produce: when the future completes and is done computing, we want to return some value, and a few minutes ago I mentioned that we want to return the best lap time for a particular racer. We don't care about how many laps they've completed or the total number of laps; all we want back from the future is the best lap time for that racer, so the Output type is just a u8 representing the best lap that racer had.

The meaty part of the Future trait is the poll function. poll is what the executor runtime, in this case Tokio, or smol in my other video, calls repeatedly, and that's what ultimately drives the future to completion. The poll function returns std::task::Poll, and once the future is complete that resolves to the Output type we specified above. At final completion, poll needs to return a Ready value wrapping the final value we want to hand back to the caller. Essentially we're going to return whatever self.best_lap_time is, because that's the value we're after, but we can't return the u8 by itself: we return std::task::Poll, task being a child module, and the Poll enum has two variants, Pending and Ready. Ready is what wraps the final value, so self.best_lap_time is the value that the Ready variant wraps as the final result.

That satisfies the return type of the function, but at the moment, as soon as a new racer is created, the very first call to poll would immediately return the best lap time, and what is the best lap time? We initialize it to 255, so we'd just get 255 back and never see any of the individual lap times. So we have to implement some logic for the racer to actually race those laps. First, we need to increment completed_laps until it reaches the total number of laps, and we also need to iterate over each of the lap times, in reverse, remember, and check whether each one is better than the current best; if it is, we assign it to best_lap_time. So the first time poll runs, 76 should overwrite the value of best_lap_time; then 95 is slower than 76, so we discard it; 126 is not as good as 76 either, so we discard that too; then we get to 64, which is a better lap time than 76, so best_lap_time gets updated to 64.
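Here's a sketch of the shape of the implementation at this stage, before any lap logic is added; returning Poll::Ready immediately like this would just hand back the 255 default:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

impl Future for F1Racer {
    // the value the caller gets back when the future completes: the best lap time
    type Output = u8;

    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Self::Output> {
        // resolving immediately means the u8::MAX default is returned untouched
        Poll::Ready(self.best_lap_time)
    }
}
```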
Then on the final lap of 87, that's not as good as 64, so we discard it as well. So we should get a final result of 64 as an unsigned 8-bit integer from this future, and we can test things by customizing those lap-time values, running the future, and making sure it truly returns the best lap time. I know that probably sounds a little confusing, but hopefully as we implement the functionality inside this poll function it starts to make more sense.

In the poll function, the first thing we want to do is grab one of the lap times, but only if we still have lap times available in the Vec, so we inspect the Vec and then either update best_lap_time or not depending on the value. We say let lap_time equal a value popped off the end of the Vec; the Vec is called lap_times, so that's self.lap_times.pop(), which returns the last value to us and shrinks the Vec by one. You'll also notice that self isn't mutable here, so we need a mutable reference: we can say self.get_mut(), and that lets us pop a value off the end, assign the result, and then write an if statement to decide whether to update the best_lap_time field. Before we attempt to pop a value off the end, though, I want to check that the number of completed laps is still less than the number of laps we want to complete: if self.completed_laps < self.laps, only then do we grab a new time; otherwise we've finished the race.

I'm also going to move this logic, grabbing the current lap time and updating everything, into the F1Racer type itself, in a function called do_lap that takes a mutable reference to self, so we can call do_lap on each instance of F1Racer. The new function above doesn't take self as input, because it's associated with the type itself rather than an instance, but do_lap takes &mut self because it affects a specific instance. Inside do_lap we say let lap_time = self.lap_times.pop(); we don't need get_mut here because we already have a mutable reference, and pop gives us an Option wrapping a u8. Now, if the current lap time is less than self.best_lap_time, we want to update best_lap_time with the new value, and because this is an Option, we first check lap_time.is_some() to make sure it actually holds a value and isn't None.
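A sketch of that helper as described, including the lap counter that gets added in just a moment:

```rust
impl F1Racer {
    // complete one lap: consume one lap time, update the best time if it improved,
    // and count the lap
    fn do_lap(&mut self) {
        let lap_time = self.lap_times.pop();
        if lap_time.is_some() && lap_time.unwrap() < self.best_lap_time {
            self.best_lap_time = lap_time.unwrap();
        }
        self.completed_laps += 1;
    }
}
```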
Then we unwrap the actual value and compare it to best_lap_time, which is also an unsigned 8-bit integer, so we call unwrap to get the number out of the Option, and if it's better we assign self.best_lap_time = lap_time.unwrap(). Now every time we run do_lap, if the current lap time is better than the current best, it gets updated. The other thing we want do_lap to do is increment the number of completed laps, so self.completed_laps += 1, which counts up until we eventually reach five laps.

So we're incrementing the completed laps and updating the best lap time. Back in poll, we call self.do_lap() to actually run a lap, and if we've done a lap but still have more laps to complete, we return std::task::Poll::Pending instead of Ready. What this lets us do is complete one lap and then return control back to the executor. We're being polite: hey, I did one little unit of work, so I'm handing control back to you, and if you want to run anything else on the thread I was using, feel free, but I still need to be woken up because I have more laps to do. It's not until we return std::task::Poll::Ready that the future is completed and the output value goes back to the caller. To fix the error that says we can't mutate self, we call .get_mut() to get a mutable reference to self and then call do_lap on that instance.

Now, at the moment this future would execute poll once and then hang forever; poll would never be called again. The reason is that when you implement a custom future, you have to make sure the future gets woken up: you have to indicate to the executor, in this case Tokio, that this future is ready to do more work. The Context that the executor passes into poll gives us access to a Waker through its waker method, and we can call wake_by_ref on that, which tells the executor it's okay to call poll again because this future needs to be woken up. We've already done the work we need to in do_lap, but we need the executor to come back and poll us again.

So let's test this code. We're updating the best lap time, popping a time off the end, and incrementing the completed laps, so we can try instantiating a racer and executing it. We'll say let racer01 = F1Racer::new(), which gives us the default configuration based on the values in new, and then we need to drive it to completion; because we're inside an async main function, all we have to do is racer01.await. I also want to print some debug output, like a println saying that a racer is doing a new lap.
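Replacing the placeholder poll sketched earlier, here's roughly what the implementation builds up to at this point, give or take the exact wording of the print statements; it relies on the do_lap helper shown above:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

impl Future for F1Racer {
    type Output = u8;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        // F1Racer is Unpin, so get_mut gives us a plain &mut F1Racer
        let this = self.get_mut();
        if this.completed_laps < this.laps {
            println!("{} is doing a new lap", this.name);
            this.do_lap();
            // ask the executor to poll us again; without this the future hangs forever
            cx.waker().wake_by_ref();
            return Poll::Pending;
        }
        println!("{} has completed all laps", this.name);
        Poll::Ready(this.best_lap_time)
    }
}
```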
We'll put the racer's name in there too, so the message reads that this person is doing a new lap, passing in self.name so we can see it happening, and we might add another debug statement that says the racer has completed all laps, again passing in self.name. Remember that after we await this racer, the final result, based on the Output type we declared, is returned to the caller, so we can capture it: let best_lap_time equal the result of the await, and that's the unsigned 8-bit integer containing the final result from that racer. Then we println best lap time was X, passing in best_lap_time. Let's give this a run: as you can see, it says Max Verstappen is doing a new lap, one, two, three, four, five, and then finally Max Verstappen has completed all laps.

Now remember we talked about multi-threading, and how each unit of work, each time poll is called on a future and a bit more work is done, can be scheduled on a particular thread in the thread pool spun up by the runtime. Earlier in this video we set up multi-threading for the Tokio runtime, which is the default flavor; by default it creates one worker thread per CPU core plus the main thread, but in this case we overrode that and asked for ten worker threads. So how do we see whether multi-threading is actually being used? All we have to do is go into the poll function and add a println that says which thread is assigned. I've got other videos in this series that cover threading and scoped threads, and you can do threading without an async executor at all, just with the standard library, but to get the thread ID here we reach into the standard library: std::thread, the current function to get the current thread, and then the id method on that thread object. So every time poll is called we print the ID of the thread it ran on, and we use the {:?} debug formatter so println knows how to format a ThreadId.

Now we do a cargo run, and as you can see, the thread assigned for every iteration is ThreadId(1). I wasn't sure at first why it chooses the same thread every single time, but to make things more interesting we can have two different racers: racer01.await and then racer02.await, printing the best lap time for racer number one, then assigning best_lap_time = racer02.await, making best_lap_time a mutable variable so we can overwrite its value, and copying the print statement down below. So now we await racer01, assign the best lap time, print it, await racer02, and print the best lap time again. The only thing I'm going to do differently is customize the lap times for racer number two.
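The standard-library call being described, shown standalone here; in the video the same line goes at the top of poll:

```rust
fn main() {
    // std::thread::current() returns a handle to the calling thread;
    // .id() yields an opaque ThreadId that only implements Debug, hence {:?}
    println!("thread assigned: {:?}", std::thread::current().id());
}
```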
We'll go into racer02.lap_times, the Vec that contains the lap times, and pop a value off the end; we need to make racer02 mutable as well, since we're modifying the Vec's values. Then I'll do racer02.lap_times.push and push a lower value, say 57, which should be less than all of the other default lap times: previously 64 was the best, but we're overriding that with a lower lap time of 57, so racer number two should end up with a best lap time of 57. Now we've got two different racers and we're executing both of them. But for some reason we're still getting the same thread ID for all of these poll calls, even though we're multi-threaded with ten worker threads. Let's eliminate the worker_threads specification; I don't think that will change anything, but let me see what's going on here.

All right, I see what's happening: I totally forgot that I need to actually spawn these as separate tasks, so let me show you that API in the Tokio documentation. If we search for spawn, inside the Tokio runtime, so inside that async main function, we can call this spawn function and pass in a future, and it gives us back a handle. We can spawn multiple futures at the same time, and Tokio will take those and delegate them out to different threads. So we need to change this code a little, because right now we're just awaiting on the main task and no delegation is happening. Instead, we call into tokio, drill into the task module, and call the spawn function, passing in racer01, and again for racer02, removing the awaits. Each call returns a handle, so let's hold on to those: let handle01 equal the result of the first spawn, and let handle02 equal the result of the second. These join handles let us monitor the completion of the two futures, and you could spawn a hundred futures or a thousand, it's really up to you, but this spawn function is what does the delegation across multiple threads on the Tokio runtime.

Now we want to look at handle01 and handle02, which have these is_finished functions we can use to check the status of the futures: is the future still pending, or is it finished? If it's finished, we can resolve the final value and use that data in our println calls. So we'll create a loop and be polite by sleeping on the main thread, std::thread::sleep with std::time::Duration::from_millis, checking every 300 milliseconds or so. Inside the loop, if handle01.is_finished() && handle02.is_finished(), then we know both racers are finished.
The join handle wraps the final value we ultimately want access to. So inside that if block we'll print that all racers have finished and then break out of the loop, since all of our racers are complete. Now we want to get at the actual values, so down in the println statements we'll print results for racer one and racer two, using two differently named variables for the best lap time of racer01 and racer02, and then we should be able to populate those from the handles. Now, I thought the join handle had a way to get the underlying value out of it, but I can't seem to find one at the moment, and because we had racer01 and racer02 in the main function scope, my first instinct was to just write racer01.best_lap_time and racer02.best_lap_time down there. That doesn't work, though, because racer01 and racer02 get moved into the spawn calls, so we don't have access to them later on. What I'll do instead is move those statements into the poll function: after a racer has completed all laps, we print that the best lap time for that person was X, referencing self.name and self.best_lap_time, right before poll finishes on its last iteration.

So now if we do a cargo run, you can see that the thread ID assigned to each of these units of work is actually different, and every run schedules the work onto different threads; you can see it choosing workers two, three, or four. We can go back and set the number of worker threads to something like 10 again, and now when we run it, it's any thread from roughly two through twelve; I'm not exactly sure where those thread numbers start and end, but as you can see we're getting scheduled on different threads. If I change the second racer's name as well, that gives us clearer output, so we'll set racer02.name to Sergio Perez with .to_string(). Now when we run it, you can see Sergio Perez is running laps and Max Verstappen is running laps, and each of those laps can get assigned to different threads: thread number nine doing some work, thread number two doing some work, the runtime just dispersing these tasks across the different threads in our Tokio thread pool.

So anyway, I just wanted to show you Tokio as well as how to implement the Future trait. It does take a little time and practice to understand how this trait works, but really the bulk of the work happens inside the poll function, making a little bit of progress on whatever data structure you're working with; in this case we have racers and individual laps, and those laps are the individual units of work we're scheduling.
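For reference, here's a sketch of roughly where the main function ends up, with the caveat that the exact print wording and variable names are my reconstruction:

```rust
// requires tokio features "macros", "rt", and "rt-multi-thread"
#[tokio::main(flavor = "multi_thread", worker_threads = 10)]
async fn main() {
    let racer01 = F1Racer::new();
    let mut racer02 = F1Racer::new();
    racer02.name = "Sergio Perez".to_string();
    racer02.lap_times.pop();     // drop the lap time that would be popped first...
    racer02.lap_times.push(57);  // ...and replace it with a faster 57-second lap

    // spawn each racer as its own task so Tokio can schedule them across worker threads
    let handle01 = tokio::task::spawn(racer01);
    let handle02 = tokio::task::spawn(racer02);

    // politely check for completion from the main task
    loop {
        std::thread::sleep(std::time::Duration::from_millis(300));
        if handle01.is_finished() && handle02.is_finished() {
            println!("all racers have finished");
            break;
        }
    }
}
```

For what it's worth, JoinHandle does expose the underlying value: it implements Future itself, so awaiting it (handle01.await) yields a Result wrapping the task's output, which would be another way to read the best lap times back in main. Also note that std::thread::sleep blocks the worker the main task runs on; it mirrors what's typed in the video and works here because the racers run on other workers, but Tokio's tokio::time::sleep (behind the time feature) would be the non-blocking way to wait.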
It just depends on the structure of the application you're building; once you give control back to the executor, the executor can use that processing capacity for some other future that may be scheduled on the same thread. Anyway, I hope this gave you some insight into Tokio as well as how to implement the Future trait from the standard library. I guess we didn't have time to get to the other Tokio syntax, where you build the runtime with the Builder pattern, so I'll save that for another video. In any case, please support this channel, since I'm an independent creator: leave a like on the video, leave a comment, check out the affiliate links in the pinned comment down below, and share a link to this playlist and this video with your friends, family, and anybody else you think is interested in Rust programming. Thanks again for watching, and we'll see you in the next video. Take care.
Info
Channel: Trevor Sullivan
Views: 11,046
Keywords: rust, rustlang, rust developer, rust programming, rust software, software, open source software, systems programming, data structures, rust structs, rust enums, rust coding, rust development, rustlang tutorial, rust videos, rust programming tutorial, getting started with rust, beginner with rust programming, rust concepts
Id: PabDPIrt9fk
Length: 55min 40sec (3340 seconds)
Published: Mon Sep 25 2023