Rust Linz, July 2021 - Stefan Baumgartner - Serverless Rust

Captions
I want to talk today about serverless Rust. I'm Stefan, for the folks who don't know me; I have the worst Twitter handle in the entire world, but I'm also one of the co-founders and co-organizers of Rust Linz. Over the last one and a half to two years I've done a lot with serverless: serverless platforms and cloud providers with serverless offerings are what I deal with on a day-to-day basis. That's why I want to talk to you about serverless. I promise there will be some Rust, but there will also be a lot of cloud internals and things that you might not know if you're using one of those serverless offerings.

Who out here has not heard about serverless? You can raise your hands. Okay, so everybody has heard about serverless, that's great. Usually this would be the point where I do this particular joke that serverless is not actually serverless, there are still servers. You know, that's old, everybody does that. The thing that I think nobody talks about is that there are multiple flavors of serverless, and depending on who you talk to, everybody understands something different.

One thing that serverless is, is the whole auto-scaling part. That's the infrastructure serverless, where you care about servers *less*, which means some cloud provider, whichever that is, makes sure that all your infrastructure problems are handled. And usually, and this is one of the most important points, this comes with auto-scaling and consumption-based pricing. This is, I guess, the most important distinction from all the other things: you just pay for what you're actually using, be that an entire server that is provisioned, or just a little bit of CPU, a little bit of memory, and there you go.

The other part is on the application developer side, and this is what is usually referred to as functions as a service. Instead of writing your entire HTTP server or whatnot, you don't write the entire server logic; you just write this little piece of logic that actually does something. How it gets called, how the payloads are handled, how the response is handled is not your part; you just focus on this teeny tiny bit that actually does something. For some cloud platforms this is often seen as a sort of glue code. Especially if you think about AWS: AWS has 270 services or something in that ballpark, and you could basically run your entire application on AWS services, but you need a little bit of glue code to connect the bits and pieces and to make sure all those services work together.

For the auto-scaling stuff there are lots of examples. One that I found out about recently is Google Cloud Run, which is really "here's my Dockerfile, do whatever you need to do with it", and AWS Fargate. On the functions-as-a-service side, the two most well-known offerings are AWS Lambda and Azure Functions, and we are going to look at the internals of AWS Lambda and Azure Functions today.

The thing is, consumption-based pricing and writing just the business logic, just the functions, can be seen as totally separate things, but usually they work really well together.
Azure Functions has offerings where you don't have serverless scaling: just give me 100 servers, please, and I'll write functions, but they are running all the time, they're all provisioned the entire time. That's possible, but usually you use them with consumption-based pricing and say: I'm just going to pay for what I need, and AWS or Azure makes sure that just what I need is provisioned.

Which brings me to billing, if you use the two of them together. Usually, most of the serverless providers say: we charge you for the time it takes to execute your stuff and the amount of memory that you need. That's very much simplified, but that's usually the way to calculate it. Calculating serverless pricing is hell; there are so many factors involved, and I'm going to get to that at the end of the presentation. There are companies specializing in calculating this stuff, so we are not going down that road. We are going down the route of why we want to use Rust with serverless. If you think about the entire pricing model of serverless, you pay for the amount of time and for the RAM that you use, and you have to say that Rust is really good at memory and speed. You have really fast applications, and memory is everything that Rust is about: using just the right amount of memory, not having a garbage collector running, not over-provisioning memory, just having what you actually use and what you actually need. That's why I think Rust is a really good fit for serverless.

Okay, let's look at two functions-as-a-service providers, two serverless providers. This one, what's that? Okay, for the folks in the stream: AWS Lambda? Well, no, it's the Half-Life logo, and not everyone is aware of that. If you see AWS Lambda blog posts out there, they are usually using the Half-Life logo; even AWS sometimes uses the Half-Life logo in their blog posts. This is the actual AWS Lambda logo, so get this into your brain: if you see that, that's AWS Lambda. The rest is Half-Life.

And yes, of course we want to talk about AWS Lambda. I couldn't say if it was the first one, but it's definitely a pioneer of functions as a service, the poster child, one of the services everybody has in mind when talking about serverless. I know it from making web APIs, but the more time I spent with AWS Lambda, the more I found out that web APIs are just a side effect. I guess this wasn't planned; it just works as well. What it actually should be is this little bit of glue code between AWS services: you have something like the API Gateway that gives you HTTP endpoints you can call and make requests to, and you can attach a Lambda behind that. That just works, but I guess it's not the main use case; Lambda was not designed exclusively for that.

Lambda is very much unaware of triggers. It just takes an event, which can be an HTTP trigger event, but what it does is take an event, process some workload, and produce a result. It's like the classical function: you have some input, you have some output; in the best case it's stateless, which is not entirely true, but it's a good mental model. And Lambda runs in very, very lightweight micro-VMs: Firecracker micro virtual machines. Firecracker is nice because it's written in Rust, and it's a fantastic piece of technology.
It's open source, so you can check it out; there are some very clever things in there. And when you think about AWS Lambda, you shouldn't think about servers that are provisioned for you where you just write the glue code. They are actually workers, and this is a very important detail that we're coming to in a minute.

If you're doing AWS Lambda — and I have to make my notes a little bit bigger so I can see what I've written there, otherwise this becomes some sort of karaoke — the execution life cycle comes in a couple of steps. What you see here in blue are all the bits and pieces that always need to run, and all the pink parts are those pieces that we call the cold start. What happens when you call a Lambda, when you invoke a Lambda for the very first time? AWS does a couple of things. First it checks if you're even allowed to do that, and if you are within the range of concurrent workers that Amazon can provide for you. If that's all okay, AWS Lambda requests a worker: you can think of a fleet of micro-VMs, and you request one of those workers to process your workload. If you get one of those workers, and the worker is what we call cold, which means the worker doesn't know what you want to process, the worker creates a sandbox, downloads your code, the stuff that you've written, and bootstraps your application. If you think about Node.js, this means downloading your JavaScript files, downloading Node.js and the node modules, unpacking them into a folder, and then starting Node with the application and booting it up. Only then are your workloads processed.

So this is the typical AWS Lambda execution life cycle. A couple of things always need to be done, but they're very fast, except those bits and pieces in the middle, the pink column; this can take some time, and this is the cold start time. Once the payload is processed, the workload is processed, and you get another request, only the blue parts run, which means AWS checks if you're allowed to do that, and if there's already a warm virtual machine with your code, well, Lambda processes the code.

If you look at an invocation diagram, this is how an invocation could look. You have a trigger, let's say an HTTP trigger, and AWS needs to spawn a new worker and initialize it for the first time. This is exactly what you're going to see here on the screen now. Yeah, question? I'm going to repeat the question for the chat: how does cold-starting micro-VMs work if there are many processes running, can I put it like that? Yeah, this is what we're going to see right now: what happens when one of those workers is provisioned, and what happens if you're already processing a workload and another one comes in.

So let's say you have a trigger; this can be an HTTP trigger or whatever, and this is the very first trigger that you have, like an HTTP request that needs to process something from a backend. AWS does its cold start thing, which means requesting a worker, downloading the application, and so on. Once your runtime is bootstrapped, the runtime makes an HTTP call to the AWS runtime REST API. This is the /next endpoint that you can see here on the screen: it does a call to the REST API asking for the payload. It asks: which event should I process, can you give me all the information that I need to process my event? Once it has fetched this information, it processes it. This is the code that you are writing, whatever that is; this is the one thing that you are doing. Once the result is there, it makes another HTTP request to the AWS REST API, a POST to the response endpoint with the result.
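As a rough illustration of that loop, here is a minimal sketch. The endpoint paths are the documented Lambda Runtime API endpoints; everything else — using reqwest as the HTTP client, the trivial echo "handler" — is my assumption, and the real runtime crates handle errors, headers, and error reporting far more carefully:

```rust
// Sketch of the polling loop a custom Lambda runtime performs.
// AWS_LAMBDA_RUNTIME_API is set by Lambda to "host:port" of the runtime API.
use std::env;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api = env::var("AWS_LAMBDA_RUNTIME_API")?;
    let client = reqwest::Client::new();
    loop {
        // 1. Ask for the next event to process (blocks until one is available).
        let resp = client
            .get(format!("http://{}/2018-06-01/runtime/invocation/next", api))
            .send()
            .await?;
        // The request id identifies this invocation when posting the result.
        let request_id = resp
            .headers()
            .get("lambda-runtime-aws-request-id")
            .and_then(|v| v.to_str().ok())
            .unwrap_or_default()
            .to_owned();
        let event = resp.text().await?;

        // 2. Process the workload -- this is the part you actually write.
        let result = format!("{{\"echo\":{}}}", event);

        // 3. Post the result back, then loop around and ask for the next event.
        client
            .post(format!(
                "http://{}/2018-06-01/runtime/invocation/{}/response",
                api, request_id
            ))
            .body(result)
            .send()
            .await?;
    }
}
```

Note that the loop never exits and never handles two events at once — that matches the one-worker-per-workload model described here.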
During that time, this worker is exclusively working on this particular workload, nothing else. Once the response has been sent, once the result is finished, the worker asks for another payload from the trigger, another event to process. If there's one, it grabs it from the queue, processes it, pushes the result back; if there's none, it goes into hibernation, which means Firecracker frees up all the resources. It hits the pause button, resources are freed, and another worker can get those resources. There's a certain hibernation period of a couple of minutes; AWS isn't open about the exact amount of time that needs to pass. But if a request happens within that period, the worker thaws again: you freeze it, then it thaws, which means it's not a cold start. It's still a warm container; it gets all the resources back, it gets CPU back, it gets RAM back, and then it can process another workload.

Now what happens if you're processing a workload and another request comes in? In this case AWS Lambda spins up a new micro-VM, the entire thing: doing a cold start, grabbing that payload from the queue, processing the workload, giving back the result, and then going into hibernation again. And this is actually the one big difference between AWS Lambda and all the other serverless offerings out there: for every workload that you're processing, you have a dedicated worker doing that. No parallel processes within the worker or anything; not the thing that Node.js is really good at, doing I/O and being able to handle lots of requests because it doesn't actually compute much, it just has open connections. Nope: you have a dedicated worker for just this particular process, and after that it works on the next one. This also means that if you process another request and it panics for whatever reason — stack overflow, heap overflow, any panic that can happen — it just kills that particular worker, or the particular runtime within the worker, bootstraps the worker again, and then it can run the next payload.

So this is how AWS Lambda scales out various workloads across its system. With the typical consumption plan you get about 1,000 workers that you can use; if you need more, either call AWS, or the events queue up, which means, yeah, they need to wait. Does this answer your question? Cool, great. I already said that if those processes hibernate for a certain amount of time, they get disposed and the resources are freed. So this is how AWS Lambda works.

A couple of things that are good to know, and that got more interesting to me the longer I worked with AWS Lambda: you are paying for RAM, so the amount of dollars that you pay scales with the amount of RAM that you use. If you're using 128 megabytes of RAM, you pay something like 0.00021 dollars; if you have twice as much RAM, you're going to pay twice as many dollars. But, and this is interesting, you're also getting twice as much CPU, which means that for most of the workloads that you're running, it just costs the same. If you don't have workloads that need a particular amount of RAM, that can work with a low amount of RAM, what happens is: if you get twice the amount of RAM, you get twice the amount of CPU; it costs twice as much, but it's also twice as fast, which levels out to the same amount of money. If speed is a problem for your application and not RAM, just go for the full gigabyte, or ten gigabytes, or whatever; it doesn't matter, it's the same amount of money that you're paying, because a smaller VM with less RAM just takes twice as long. I couldn't say that this holds for every workload, but it happens for a lot of programs that we have been running. So if RAM is not an issue, you can control the speed of the execution with that.
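The arithmetic behind that leveling claim, as a small sketch. Lambda-style billing is duration times memory; the per-GB-second rate below is in the ballpark of published Lambda pricing at the time, but treat it as an illustrative placeholder, not a price sheet:

```rust
// Illustrative only: Lambda-style billing is duration (s) * memory (GB) * rate.
// The rate is a placeholder; check the provider's current price sheet.
const RATE_PER_GB_SECOND: f64 = 0.0000166667;

fn cost(memory_gb: f64, duration_s: f64) -> f64 {
    memory_gb * duration_s * RATE_PER_GB_SECOND
}

fn main() {
    // For a CPU-bound workload, doubling the memory also doubles the CPU,
    // so the duration roughly halves -- and the cost stays the same.
    let small = cost(0.128, 2.0); // 128 MB for 2 seconds
    let large = cost(0.256, 1.0); // 256 MB for 1 second
    println!("128 MB for 2 s: ${:.7}", small);
    println!("256 MB for 1 s: ${:.7}", large);
    assert!((small - large).abs() < f64::EPSILON);
}
```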
Alright, how are we going to write an AWS Lambda in Rust? Of course: we bootstrap the Firecracker VM, then we run a Node process, because AWS Lambda is Node everywhere, and Node is going to load a WASM file, a Rust program that is compiled to WASM. It loads the WASM file and starts the WASM virtual machine, and... no. Please don't do that. That's way too many virtual machines, that's way too many turtles. Sadly, that's what you see in blog posts on the internet: if you search for using Rust with AWS Lambda, you get stuff like that. Not cool.

There's a more direct way to do it, which is: just compile a binary and let it run on AWS Lambda, so you don't need any other abstraction. And it's actually quite easy. AWS provides a crate with a predefined AWS Lambda runtime, which does exactly what I described: it grabs something from the queue, then you write your handler function, you can see that here, and once you're done, it sends a JSON result back to the AWS Lambda queue. This bootstrap part that you can see in this async main function — it's async, which means you need an async runtime; Tokio would be my go-to runtime for that, and it's also the one AWS recommends — this bootstrap code can always be the same. The handler function is the actual function that you are going to implement: the actual logic, the actual code that you're running.

Then you compile it to x86_64-unknown-linux-gnu. The documentation says that you need to compile to musl Linux; doesn't matter, gnu works as well. It's just that with some glibc versions you run into issues because the Amazon Linux version doesn't support a couple of symbols; that's why the documentation says use musl, because it's the least painful Linux target to compile to. It's also the slowest one, so go for the gnu target if it works for you. You might need to install a linker if you're on macOS or something, but usually it works; I even have some GitHub Actions that compile that stuff for you, and I'm going to show the link to the repo where all that stuff is later on, so you can check it out. Name the binary bootstrap, that's the convention, put it in a zip file, upload it to AWS Lambda, and there you go.
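A minimal sketch of what such a function looks like with the lambda_runtime crate. This follows the pattern documented in the aws-lambda-rust-runtime repository around the 0.3/0.4 releases; the crate's API has changed between versions, so treat the exact names as a snapshot:

```rust
use lambda_runtime::{handler_fn, Context, Error};
use serde_json::{json, Value};

#[tokio::main]
async fn main() -> Result<(), Error> {
    // The bootstrap part: always the same. It drives the poll loop
    // against the runtime API shown earlier.
    lambda_runtime::run(handler_fn(handler)).await?;
    Ok(())
}

// The handler: the only piece that is actually yours. It receives the
// event payload as JSON and returns a JSON result.
async fn handler(event: Value, _ctx: Context) -> Result<Value, Error> {
    let name = event["name"].as_str().unwrap_or("world");
    Ok(json!({ "message": format!("Hello, {}!", name) }))
}
```

Build and package steps under the same assumptions (the crate name my-lambda is hypothetical; the binary name bootstrap is the part Lambda actually requires):

```
rustup target add x86_64-unknown-linux-musl
cargo build --release --target x86_64-unknown-linux-musl
cp target/x86_64-unknown-linux-musl/release/my-lambda bootstrap
zip lambda.zip bootstrap
```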
Alright, let's look at some results, because, you know, Rust is really good at speed and memory. If I have a Lambda with 128 megabytes and I run a typical hello world — which is just: take the Node runtime from Lambda, print out hello world, and give the result back — a cold start takes about 200 milliseconds on a 128 MB Lambda VM. A re-run, so if the Lambda is hot, if the runtime is hot, takes about two milliseconds. In Rust, cold starts take less than 20 milliseconds. I couldn't tell you if this is for the entire process that I've shown you or just for the one particular piece that you have in control; doesn't matter, this is what you're billed for, so this is the number that counts, and it's less than 20 milliseconds. I had some cold starts in the ballpark of 60 milliseconds with the hello world; that's great. And re-runs are usually less than a millisecond, which is fun: I had a couple of re-runs where Lambda told me 0.5 milliseconds. It still billed me one millisecond, because that's the smallest amount that Lambda can bill.

But this is hello world; this is not fun, this is not in any way interesting. That's why I created a little benchmark program called palindrome products. You take a range of numbers, let's say from 100 to 999, you create all the combinations that exist — multiply 100 with 101, multiply 100 with 102, and so on — and then see if the product is the same forwards and backwards. 1001, for example, would be a palindrome product, because it reads the same from both sides. The nice thing about it is that this takes a lot of CPU if you're working with really big numbers, so you can benchmark really well how long it takes: the bigger the range, the longer it takes.
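A sketch of what that benchmark might look like — my reconstruction from the description here; the actual code from the talk is in the repository linked at the end:

```rust
/// Returns true if `n` reads the same forwards and backwards, e.g. 1001.
fn is_palindrome(n: u64) -> bool {
    let s = n.to_string();
    s.chars().rev().collect::<String>() == s
}

/// Collects all palindrome products of two factors in `range`,
/// e.g. 11 * 91 = 1001 for the range 10..=99.
fn palindrome_products(range: std::ops::RangeInclusive<u64>) -> Vec<u64> {
    let mut products = Vec::new();
    for a in range.clone() {
        // Start at `a` so each pair is only multiplied once.
        for b in a..=*range.end() {
            let p = a * b;
            if is_palindrome(p) {
                products.push(p);
            }
        }
    }
    products
}

fn main() {
    // Deliberately CPU-heavy for large ranges -- the point of the benchmark.
    let found = palindrome_products(100..=999);
    println!("{} palindrome products found", found.len());
}
```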
If I'm calculating palindrome products from 10 to 99 — and I'm leaving out the cold starts, because they're usually in the same ballpark as the hello world; it's a very small program, so re-runs are what's interesting — a re-run in Node takes two milliseconds, and in Rust, of course, it takes less than one millisecond. Surprise: small numbers are fast, and surprise: Rust is fast. That's not what you're interested in. What's more interesting is if you crank up the numbers: from 100 to 999, Node takes about 500 milliseconds, and Rust just takes 45 milliseconds. This is really, really fast; this is really, really nice. And if you're working with really big numbers, calculating from 1,000 to 9,999, Node takes about 70 seconds — this is where you hit a timeout very fast and have to crank up those timeout limits, also with Rust, by the way — because Rust takes about eight seconds. This is impressive. And of course you can control it: give it twice as much memory, the CPU gets twice as fast, and those times go down, but hey, the same goes for the row down below, the same goes for Rust.

So those are the benefits of Rust on Lambda. The first and actually one of the biggest benefits is that you have very, very small binaries. Instead of deploying the entirety of Node plus your scripts and maybe a couple of node modules, you just deploy a three or four megabyte binary. You can unzip it fast, and you don't have any overhead in bootstrapping it. This is actually the biggest benefit; this is how you get those nice cold starts, because you don't have any overhead from anything else.

It also works great on low vCPU: for the 128 megabyte VM, you get provisioned something between a twelfth and a thirteenth of a vCPU, which is not that much, and Rust, since it's natively compiled, works really well with that kind of CPU budget. You have low RAM usage: what we see is that in Rust you are just allocating the RAM that you use, great, while Node always tries to be ahead of what you allocate. If I use, I don't know, 20 megs of RAM in Rust, Node allocates 40 megs of RAM; that's how V8 works. And also, what I found out: you have less variation in execution times. If you're running benchmarks, which means, I don't know, 10,000 requests per second or something, to see how Lambda behaves, there are a couple of spikes: you see a VM getting freed or something like that, and even if your payload takes 45 milliseconds to run on Node, sometimes you get 600 millisecond or one second runs, because Lambda can't keep up well when your execution times swing between very slow and very fast. If your executions are consistently fast, Lambda is better able to handle that, so you have less variation in execution. Plus, it's super fun: I love writing Rust, and writing serverless functions with Rust is lovely as well.

Alrighty, cool, let's go to the next provider, which is Azure Functions. Azure Functions is very different in basically everything compared to AWS Lambda; there are not that many overlaps, even if it looks like it. When you're writing, say, a Node application for AWS Lambda or Azure Functions, yeah, well, you're just writing this handler body and you're done with it, but there's so much nuance in basically everything that happens underneath.

First of all, the triggers are part of the function definition: you define what triggers the function, you define the input and output bindings, so you define the input, the trigger, and the sink, the target, while you're writing the function. In AWS Lambda, you just provide the function and then configure the inputs. This might end up the same in the end, okay, of course, but it also means that in Lambda you can use the same function for various input and output sources. Not so much with Azure Functions: you have to define the trigger together with the function. You can have multiple output targets, yes, but they are very much tied to what your function looks like.

It builds on Azure WebJobs; if you've ever run across Azure WebJobs and thought "hey, that sounds like serverless", that's what runs underneath. The most important difference, I think, is that the unit of deployment is not just a single function but an entire function app. You roll out the function app, and you can put as many functions in there as you like. Let's say you put ten functions in there, that's great: the function app gets deployed and scaled out together, which means if Azure Functions requests another server — and here it's a server, not a worker — it bootstraps the entire app. If there are ten functions, all ten functions are bootstrapped. This is a big, big difference. It results in big cold start times, but it also results in very low cold start times for all the other functions in the app, because the entire app is booted up.
Also, those are real servers, which means they can take more than just one request at a time; you don't have them exclusively. It might be just like with any other server, a Node server or whatnot: you may already need to process multiple requests in parallel. And it needs Azure Storage to store the functions, so you can't deploy a function without Azure Storage. Hidden costs; just so you see it.

How does the Azure Functions execution life cycle look? The scale controller checks if any events come in, and if events come in and there's no server already working on them, it allocates a new server. The server is unspecialized: you can think of Azure having tons of Azure Functions servers in their cloud, but they're all unspecialized, which means each runs the function host, but the function host doesn't know what to do yet; it doesn't know which functions to run, it's just a dedicated function server. Once it gets specialized, your files are mounted into this particular server's virtual file system — the stuff from your Azure Storage — and the app settings are applied, which means telling it which version to use, feature flags, what the runtime is, and all those kinds of things. Then — and I'm sorry, this is very small — the function host restarts, the function runtime reads function.json (function.json defines the input and output bindings; we're going to see an example later on), loads extensions if necessary, then the function runtime — you're going to see all of that in a second — loads the functions into memory, and then your code executes. You see there are lots and lots of pink boxes, which means lots of things contributing to cold start times, but you also see there's not much to do when the container is already warm, when the server is already warm: the scale controller says "is there an event?" and the function server says "I'm going to execute it". So re-runs are very, very fast.

This is how the Azure function host works. You get the trigger, the trigger is matched against the input binding, the function host takes the input, and then one of three things happens. Either it calls an in-process task written in C# or F#, because the Azure function host is written in C# on .NET, which also means that if you're writing functions in C# or F#, they just get linked together — I don't know if that's the correct word, because I haven't written any C# in over 15 years, but since it's the same technology, it can just run C# and F# stuff in-process. Or, if you're using something else that Azure provides for you, they have dedicated runtimes for that — you can see all those runtimes on GitHub, they're open source — and the host calls this out-of-process runtime, which is Node, Python, or whatever, and creates a gRPC connection with it: it sends events over gRPC to the function runtime, the runtime processes those events in a gRPC-capable server and gives the results back. And there's a third option — these are not running in parallel, even though the diagram might suggest it; it's an either/or, you can either have in-process function apps or out-of-process function apps — the third option is custom handler functions, which means: I give you an HTTP connection, do whatever you need to do. This is where you can run whatever you like, be it a Deno process, with its Node.js-like runtime for JavaScript, or Rust.
So this is where we are going to implement our Rust Azure function runtime. The results are sent back to the function host, and it sends them to whatever output binding you need.

Stuff that's good to know: RAM scales to your needs; this is very important. In AWS Lambda, you pre-provision a certain amount of RAM that you pay for; an Azure function takes as much RAM as it needs, not more, and you pay for the amount of RAM that it takes. The vCPU also scales to your needs, up to one entire virtual CPU, because the Azure function host can do that. It's not tied to the amount of RAM that you allocate or provision; you get just as much as you need, and with the consumption-based plan it scales up to one vCPU. If you have premium plans, you can have up to four or five or something; it's hidden in a number called the Azure Compute Unit, and what I found out by googling a lot is that 100 ACUs are roughly equivalent to one virtual CPU. Cold starts can be slow, but then again, since you have as much CPU as you need, re-runs can be very, very fast — and cold starts affect the entire function app.

Alright, let's define the function host. This is how it would look if we define a function host for a Rust application. There are two things that are important. I'm not saying "please run a Node.js application" or "please run a C# application" or whatever; I'm saying: I have a handler executable — this is an actual executable, like handler, or handler.exe if you're on Windows — and I enable forwarding HTTP requests, which means I want to communicate with my application via HTTP. And this is a function binding where I say: for the endpoint "palindromes" — I'm talking about HTTP endpoints here because they're the easiest to demo — please create this binding where I accept GET and POST requests as input, and I want to see the result on the output over HTTP. Those are the two bindings you do there. This is actually something that I really, really like about Azure Functions, because you can, for example, write services that send an email but also give a response back to the user that calls them. If you have, I don't know, a "send email" endpoint with a body, you do an HTTP request, it processes your request, and then you send the result to two different targets: an HTTP out, so the user in the browser knows what happened, but you're also sending it to SendGrid, sending the email.
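Spelled out, that host definition and binding look roughly like this. This is a sketch based on the documented Azure Functions custom handler configuration; the handler executable name and the palindromes folder mirror the example from the talk, and the authLevel value is my assumption. host.json tells the Functions host to start our executable and forward HTTP requests to it:

```json
{
  "version": "2.0",
  "customHandler": {
    "description": {
      "defaultExecutablePath": "handler"
    },
    "enableForwardingHttpRequest": true
  }
}
```

And palindromes/function.json defines the two bindings — GET/POST in, HTTP out:

```json
{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": ["get", "post"],
      "authLevel": "anonymous"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}
```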
Alrighty, and now Rust again — hey, this is a Rust meetup. The nice thing is: any server will do. If you're using Rocket, if you're using what I have here, or whatever — if you have a server framework in Rust, or you're writing your own server altogether, it will do. That's perfect: any server runs. Funny that it's called serverless when you're going to write the server, but you know: infrastructure serverless, not application serverless.

Two things are interesting. First of all, I need to have a path mapping that's equivalent to what Azure Functions provides to me. If I have, I don't know, a palindromes.js in Node.js, that gets mounted at /api/palindromes; in Rust, I need to provide that mapping in my app myself. The host is just forwarding the HTTP request with its route and path, so what comes in at my server is a call to /api/palindromes, and I have to take care of that mapping. That's a little bit of extra work, because you have to create those folders with the function.json files, but you also have to create the same mapping again in your server, so there's room for failures, for errors — you're going to have, I don't know, a typo somewhere, because it's all strings, you all know that. The other part that's actually quite interesting is that you listen on a particular port, the functions custom handler port. This is the port Azure Functions needs to spin up your server; it's an environment variable (FUNCTIONS_CUSTOMHANDLER_PORT) that gets forwarded to your application, and you listen on that port. Which is also nice, because if you want to try it out without Azure Functions, you just let it listen on any other port — 3000 in this example. I can develop the server in its entirety on its own, and then wire up those three lines, and perfect: it runs serverless, it runs on Azure Functions. This is sweet, this is wonderful.
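Here's a minimal sketch of that pattern. I'm using warp purely as an example framework — the talk doesn't hinge on a specific one — and the port fallback mirrors the local-development trick just described:

```rust
use std::env;
use warp::Filter;

#[tokio::main]
async fn main() {
    // Azure Functions passes the port it expects us to listen on via this
    // environment variable; outside the Functions host we fall back to 3000
    // so the server can be developed and tested entirely on its own.
    let port: u16 = env::var("FUNCTIONS_CUSTOMHANDLER_PORT")
        .ok()
        .and_then(|p| p.parse().ok())
        .unwrap_or(3000);

    // The route has to mirror the function folder name by hand:
    // a `palindromes` function comes in as a call to /api/palindromes.
    let palindromes = warp::path!("api" / "palindromes")
        .map(|| warp::reply::json(&vec![1001u64, 8008])); // dummy payload

    warp::serve(palindromes).run(([0, 0, 0, 0], port)).await;
}
```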
Alright, a couple of results. Cold starts for a Node hello world on Azure Functions: around 700 milliseconds, give or take — it can be 500 milliseconds, it can be 1.5 seconds, but usually it's about 700 milliseconds. Re-runs, though: one millisecond, because then again it's super, super fast. And this is now great: in Rust, those cold starts take less than 100 milliseconds. This is fantastic, because there's just so little for Azure Functions to do. I had cold starts in the ballpark between 30 and 70 milliseconds, but 100 milliseconds is a good estimate, a good measure, for the cold start of my Rust application. Re-runs: less than one millisecond. Palindrome products again: for very small numbers it's already great, Node takes about nine milliseconds and Rust takes less than five milliseconds. The bigger the numbers get: I have about 80 milliseconds for Node — this was 500 milliseconds on the small AWS Lambda VM — and about 50 milliseconds in Rust — this was 45 milliseconds on AWS Lambda. The great thing is, if you have big numbers, Node takes about 10 seconds and Rust less than one second. This is fantastic, because, you know, the CPU scales with it. But what you can see here, as with AWS Lambda: if you're doing stuff in Node versus in Rust, a factor of 10 is a good estimate, so 10x fast.

What benefits do you have from Rust in Azure Functions? (And we have a little lag here, alrighty.) First of all, significantly lower cold starts. Cold starts were always a very big problem in Azure Functions, and they put a huge amount of effort into getting that right; I can remember presentations where I showed Azure Functions and it took me one to two minutes to get a cold start of a Node.js application. Now, with 700 milliseconds, it's really, really great, especially since re-runs are so fast. But you can cut those cold starts down tremendously if you're doing Rust. And, what I find great: it's just a server, so any server will do. It doesn't matter which server you use, even servers that you already have; they are immediately serverless — infrastructure serverless — because you just tell them this port, this path, and you are done. It's just configuration for you, and it runs in a serverless offering where you're just paying for what you need. That's great. Cold starts in Node.js or other runtime environments depend a lot on the number of functions that you have, because all of them need to be compiled on a cold start; not a problem with Rust, so this is also something to keep in mind. And again, it's a huge amount of fun: I love writing Rust, and it's equally fun with Azure Functions. This is actually my preferred way of doing servers right now: just having Azure Functions scale out my Rust servers, because, hey, it just works.

Alrighty. That's it for the stats, for the milliseconds, for my tests. Summary: first of all, Rust should definitely be considered if you want to write serverless functions; it's a great tool if you like writing Rust. For some cases you can benefit a lot, especially if you have processes that maybe need to run in the background, that are CPU-heavy, that really need to compute something. Then it might be your number one choice, because it works great on low vCPU, it's fast, and it needs just a small amount of memory. Rust can in all cases help significantly with cold start times, so if cold starts are your problem, Rust might be the choice for you. These tests focused a lot on execution times; keep in mind that we are not talking about hidden costs. We are not talking about Azure Storage costs, we are not talking about AWS API Gateway costs — you get the HTTP bindings "for free", big air quotes, in Azure Functions, but in AWS Lambda you have to activate the AWS API Gateway, and there you're paying per traffic again. So calculating the entire thing is a nightmare, seriously. I couldn't give you an estimate of which one is cheaper; I honestly don't know, they are just too different, and I've been running on free plans since the beginning, so I can't tell you which one is more expensive for the factors that you have. Doesn't matter: both get cheaper with Rust. So those are the benefits of Rust.

If you want to read up on that: first of all, I can highly recommend the entirety of the Azure Functions documentation on docs.microsoft.com. Excellently written; it's amazing how much information you get about everything in there. And if you don't find the information in the Azure Functions docs, you'll find it on GitHub, because most of the things are open source: you can read up on the entire function host, how all the runtimes work, and there are also lots of examples for other programming languages — if Go is your thing, do it in Go, there's a tutorial for that. This is great. One of my colleagues at Dynatrace has written a blog post about what happens behind the scenes when AWS Lambda cold starts; this is where I grabbed all the graphics from, it's a great read, check it out. And also check out the blog post from AWS on the Rust runtime for AWS Lambda; this is where you're going to see a nice Half-Life logo, but also some great content about the AWS Lambda runtime. Last but not least, this GitHub link has all the examples from today, so you can check them out — like, literally check them out: check them out and try them. And that's about it. Thank you very much!
Info
Channel: Rust
Views: 3,981
Keywords: rust-lang, rust, rustlang
Id: EXqqsCss8Gk
Length: 43min 56sec (2636 seconds)
Published: Mon Sep 06 2021