AWS Lambda Layers, the Runtime API, and Nested Applications

Captions
Welcome! For those of you on Twitch, we are now broadcasting live again from the AWS Loft in San Francisco, and for those of you here in the room, we're just coming back from lunch on the last day of the recap week here at the San Francisco Loft. If you've been watching the Twitch channel, you just saw the previous session on Sumerian, and now we're back here in the Loft talking about serverless topics for the rest of the day. We're into the second half of the day, and we've got a bunch of fun stuff to talk about.

Recapping the first part of the day: we had an introduction to Lambda and serverless applications, we talked about Lambda and some of the products around it, and we spent time on our tooling with AWS SAM and the SAM CLI. You saw how easy it was to take that CLI tool, start basically at zero, and build up to a running application. Now we're going to dive a little deeper into one of the biggest announcements we had at re:Invent. For any of you who were at re:Invent or watched the live stream, Amazon CTO Werner Vogels announced this live on stage. On the product side we were really excited; we'd been working on this for a long time, and it's pretty cool stuff, so that's what we're going to spend the next hour on. After this we're going to talk a lot about API Gateway and dive really deep into that, and then we'll finish out the day talking about AWS Step Functions.

So let's get into it. For those of you just joining us live, or here at the Loft who weren't here before lunch: my name is Chris Munns, currently principal and lead developer advocate for serverless at AWS, based out of our New York City office. I've been at AWS for a little over six and a half years across a couple of different roles, and before that I came from a more traditional startup DevOps and operations background, managing infrastructure both physically and in the
cloud. So, this week at the Loft and on Twitch we've been doing a recap of announcements from re:Invent and a catch-up on where we are in some of these product portfolio spaces; serverless, again, is the topic today.

At the center of the serverless world for us at AWS is AWS Lambda. We talked earlier about Lambda being a compute service, its various capabilities and attributes, and the concept of a serverless application: you have some sort of event source, you have a function, which is code you've written that responds to an event you've configured, and then your code can do whatever it needs to do in terms of talking to databases, data stores, other APIs and internet services, and so on. We also talked a little bit about the anatomy of a Lambda function. We saw an example early in the day of a really basic Lambda function, just a couple of lines of code, where we have the handler, called here lambda_handler. This is the injection point that the Lambda compute service uses to start executing your code. It gets passed what's called an event object; depending on your event source, that could be data about, say, an API call, or about an S3 bucket, or from an SQS queue, and that's typically what you're going to interface with in terms of the customer action. Then we have the context object, which represents information about the underlying configuration of the platform, both things you control and things that are just made available to you. Now, imagine this function again: it was really basic; all it did was spit out "hello world." Imagine we were to expand upon it and build an API-based application.
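As a refresher, a minimal handler along those lines might look like the following sketch; the event keys here are illustrative rather than tied to any particular event source:

```python
import json

def lambda_handler(event, context):
    # `event` carries data from the trigger (API Gateway, S3, SQS, ...);
    # `context` carries platform details such as the request ID.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello {name}"}),
    }
```

Invoked with an empty event, this returns a 200 response whose body says "hello world."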
The first thing we're typically going to do, and this is a general best practice for serverless functions in AWS, is keep the handler thin and put the more complicated business logic elsewhere in the function. What follows is pseudocode, so don't expect it to look exactly like any one specific language, but imagine that, based on the event that comes in, we're going to do one of two different things with the data that's passed in. Maybe what we want to do is break out two different sub-functions that represent some piece of business logic. This is a pretty common model: the handler passes things off to a sub-function that's a little more purposeful.

Typically this gets expanded further. In most modern languages there are libraries you'll want to import: third-party open-source libraries, things from inside your organization, and so on. There's also a concept inside Lambda functions of pre-handler code. Let me explain what happens with pre-handler code, which you see up above the handler. When a Lambda function invocation comes in, we take your code, put it onto the underlying compute resources we have, and do the first bootstrap of that environment. One thing you might want to do is run some code before execution to handle database connections, import libraries, or talk to things like secrets managers or password stores, and then have those resources, secrets, and imported libraries pre-exist so that subsequent invocations can reuse them. Basically, this allows you to reuse active connections, active libraries, and active bits of information inside your function. We will only call this code once, during the initial bootstrap of the environment.
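In Python, that pre-handler plus sub-function layout might be sketched like this; the route names and the config key are made up for illustration:

```python
import os

# Pre-handler code: runs once per cold start and is reused on
# warm invocations (DB connections, secrets fetches, etc. go here).
CONFIG = {"table": os.environ.get("TABLE_NAME", "demo-table")}

def get_orders(event):
    # Business-logic sub-function for the orders call
    return {"statusCode": 200, "body": f"orders from {CONFIG['table']}"}

def search_forums(event):
    # Business-logic sub-function for the forums search call
    return {"statusCode": 200, "body": "forum search results"}

def lambda_handler(event, context):
    # Thin handler: dispatch to a purposeful sub-function by route
    route = event.get("path", "")
    if route == "/orders":
        return get_orders(event)
    if route == "/forums/search":
        return search_forums(event)
    return {"statusCode": 404, "body": "not found"}
```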
After that initial bootstrap, as subsequent invocations come in, we look for what's considered a pre-warmed environment and attempt to reuse it for you; as you scale, you get more of these environments, we fire more of them up, and they each make those initialization calls. So it's really common to have a Lambda function file where you've got some pre-handler code (dependencies, configuration information, helpers), you may have those helper functions defined, then your handler, and then sub business-logic functions in that same file. This could be Node.js or Python or Java or whatever; it's the general structure you're going to have.

Now let's assume I have an API-based service: I have an API Gateway, which we'll go into in more depth later today, and a number of different API calls I can make off of my API. I can call the orders API, the forums search call, and so on, and based on that call I'm probably going to talk to something: my code may need to talk to databases or data stores, or get configuration or secrets information from another service. If we put all this together, you can imagine I have a lot of these functions, and it is a best practice to have a function per API call, or per different capability. So for each of those bits of code I'm going to have pre-handler code: imports, configuration information, helper functions, and probably some sort of business-logic code or function. As we see in this diagram, all of those smaller blue boxes and orange boxes could effectively be, to some degree, duplicated code, so we could have a lot of duplication. We're just showing a couple of these, but if I had an API with 60, 70, 80 calls in it, there could be a lot of duplication. And so the reality of it is, we want
our business logic and our handlers joined together, and any of this helper stuff, any of this duplicated stuff, we would like to just be able to glob on with the rest of our function. Traditionally you've been able to do this with dependency management for a lot of languages: native packages, things like pip packages, or gems in Ruby, or npm packages for Node.js. There are all sorts of ways of reusing code, but in the context of Lambda some of this might be more specific, and it might be larger than just a single package. This was a common pain point we had seen with Lambda customers, and it led to us launching, just at the end of November, something called Lambda Layers.

What Lambda Layers allows you to do is upload some blob of data (this could be those dependencies we talked about, that pre-handler code, or reused business logic) and treat it as a separate artifact from your actual application code. What we then do at execution time is merge all of these things together for you and execute your application. This allows for a number of really interesting benefits. One thing we see today, and that has happened in the industry, is that a lot of you are probably programming in languages that have open-source packages you want to include in your codebase. There have been issues with security vulnerabilities and performance problems in those, so you might work for an organization that wants you to use only open-source software under certain licenses; you might also want to keep people on your team from deploying all sorts of different versions of these, and peg yourself to a certain version. Layers can offer a separation of responsibilities that says, basically: here's a layer that represents the approved versions of software X or Y, or this is the
locked version that we're going to support, the one security has looked through, or, hey, as a team we're going to standardize on version such-and-such, and here is the package that contains all of that. This can also be used for things like the pre-handler logic we saw. Let's say you have code that needs to get secrets out of AWS Secrets Manager. That code is pretty straightforward, and depending on what secrets you're getting, it could be twenty or thirty lines of code. There's no reason to have that in every single Lambda function you have: you could create a layer that has the code to talk to Secrets Manager, the Secrets Manager SDK or the AWS SDK, and maybe something like AWS X-Ray to do traces on those calls; make that a layer, and then share that layer across your organization. So there's a lot you can do with this, and layers are meant to be shared: inside an account, inside an organization, or publicly, among whoever might want to consume one.

The way layers work is, just like with a Lambda function, you create a zip file and upload it to the Lambda service, and then in your Lambda function, or in its configuration, you reference the layer as being part of it. What we end up doing is taking the layer and your application code and putting them into the same execution environment for you. Layers can effectively be stacked on top of each other (more on that in a moment), and today we have a limit of five layers per function. They are versioned, and they are immutable. What this means is that when you upload a Lambda layer and then upload a new version of it, you can have both versions there, or you can tear down the old one, but you can never replace a version. Once you have version 3, you can never replace version 3; you can only add versions 4, 5, 6, and so on, and you can delete version 3 if you want to, and
then you tell your Lambda functions specifically which version to use. Now, ordering is important with layers. You define them in a certain order, either in your function configuration or, if you're using SAM (which we'll show later), in your template, and all of them get put into a specific directory in the runtime. At the end of the day, Lambda runs inside a Linux environment; you can actually go poke around inside your Lambda function and make shell calls to that underlying environment. What we do is put all of this code into the /opt directory. Ordering matters because we basically squash these layers down on top of each other. If I had one layer with /foo.txt and a second layer with /bar.txt, then in my Lambda function I could see both foo.txt and bar.txt in the same filesystem; but if I then had a third layer that also had a foo.txt file, that foo.txt would take precedence over the first one, basically overwriting what was there. This can be important: you can use it to push out specific updates to layers where you can't, or don't want to, update an entire layer; you just push out an update to a single file or a single dependency. So there are reasons to think about the benefits of this, but you also want to be cautious about how layers can potentially overwrite each other.

Again, this provides flexibility and allows you to have separation of responsibilities. One thing we see in the serverless space is a really awesome, growing ecosystem of security vendors, profiling and troubleshooting vendors, and monitoring vendors. You can imagine a layer from one of our security partners, for example a company called PureSec, which provides a tool that can do introspection on the events and on what's going on inside your function.
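That squashing behavior can be modeled as a simple last-writer-wins merge. This little sketch is not Lambda code, just an illustration of the foo.txt/bar.txt scenario:

```python
def merge_layers(*layers):
    """Simulate layer extraction into /opt: later layers win on conflicts."""
    merged = {}
    for layer in layers:      # applied in the configured order
        merged.update(layer)  # a file at the same path overwrites earlier ones
    return merged

layer1 = {"/opt/foo.txt": "from layer 1"}
layer2 = {"/opt/bar.txt": "from layer 2"}
layer3 = {"/opt/foo.txt": "from layer 3"}

result = merge_layers(layer1, layer2, layer3)
# foo.txt now comes from layer 3; bar.txt still comes from layer 2
```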
You could have another layer from a company called IOpipe, which does profiling of the performance of your functions. Each of those could be a layer, and then you could have a layer for your own code, put those all together, and they will all exist in the underlying application runtime. One thing we didn't talk about earlier: for Lambda, per region, we have a storage limit for your functions and therefore your code. The default for that is 75 gigabytes. It's a soft limit, but one benefit of layers is that instead of shipping all of those dependencies with your application every time you update your code, you can just have a layer that holds them, so you're not duplicating all of that code in the storage system either. So there are a couple of different benefits here.

Now, depending on the language runtime you have, inside that /opt directory we've made it so that you can reference the code and dependencies there as if they were just in your codebase. Basically, when you go to import a dependency, we've extended the path of the environment so it knows to look in these directories for you; I'll show how this is beneficial in a moment. So you don't have to say "import from a certain directory"; you can just call import, and it will be resolved for you. I mentioned that layers can be shared within an account and between accounts: you can say "I only want to share this layer with this one other AWS account," or share it across the broader developer community. I'm going to share a link later to someone who has published a GitHub repo that's become kind of an unofficial list of shared layers and runtimes that people have open sourced, so you can get some examples of what's out there. When we launched layers, we also put out a layer for two really popular Python scientific libraries, NumPy and SciPy. They were two of the most heavily requested
pieces of software in the Lambda execution environment, and now you have them as a layer. We're thinking about more layers we can provide for common utilities; very common requests are things like headless Chrome, other scientific libraries, and stuff like that.

So let's actually pause here. This is kind of an abstract thing if you're new to Lambda, so let's dive into how it all works. Let's assume I have a serverless application. Much like most cooking shows, I already have the semi-baked casserole in the top oven, and I'm going to put the raw stuff in the bottom one. This is some code from a webinar I recorded recently; it's really basic, four files, and I'm just going to clone it. So I come back to my Cloud9 environment, which I used in some previous demos, do a git clone, and run it. This is a really basic serverless application: it has an app file, a SAM template which is also pretty basic, and a requirements.txt with a couple of third-party packages I want to include. What this code does is reach out to a service we have here in AWS called Parameter Store which, much as the name might sound, allows you to store parameters: basically key-value pairs in a hierarchical structure. There are a bunch of cool things you can do with it; it's good for configuration information, for feature flags, and for all sorts of environment information. What's nice is that it's a centralized service, so all of my Lambda functions, EC2 instances, and ECS containers can get the same information from it; it can become a centralized repository of information of any sort. Let me show you what I have in Parameter Store; you'll find it under the Systems Manager console.
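The demo's read from Parameter Store can be sketched roughly like this. It takes a boto3 SSM client (e.g. `boto3.client("ssm")`) as an argument, and the `/messages` path is just this demo's naming:

```python
def get_messages(ssm, path="/messages"):
    # Fetch every parameter under the hierarchy and strip the path
    # prefix, turning /messages/hello-lofts into "hello-lofts": <value>
    resp = ssm.get_parameters_by_path(Path=path, Recursive=True)
    return {
        p["Name"].rsplit("/", 1)[-1]: p["Value"]
        for p in resp["Parameters"]
    }
```

In the real function, a dict like this is what comes back as the "messages" JSON blob you'll see in a moment.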
Right now I have a couple of different parameters saved: one called demo, one called hello-lofts, one called iam, and one called webinar. Let's go to hello-lofts, change it to say "hello everyone at the SF lofts," and save it. OK, so we have four of these here today. Let me go back to my code and say I'm ready to test this application. Here in the console I can open a terminal window, go into the directory for this, and call sam build to pull in all of my dependencies. I showed you before that I had a requirements.txt with just three packages I cared about in it, but what's interesting is that those three actually pulled in a total of about 38 different dependencies. This is a very common thing people are finding in modern languages today: you have dependencies that have dependencies that have dependencies in the open-source world.

There's a lot of risk that can be introduced by this. Just about two months ago there was an open-source Node.js package that had malicious code added to it and was stealing people's Bitcoin wallet information from their local systems. It was a package that was not very popular by its own name and identifier, but it was included in a lot of very popular Node.js packages, which is why I always do development now on a dedicated host: I don't want anyone stealing my information like that. Again, I required three things, but it expanded out to all this other stuff; it's a common thing you want to be careful of, and it's one reason layers can be beneficial to you. So I have my built directory: sam build pulled all those in. I can go into the .aws-sam directory, and under build I have my template and my code directory. I can go ahead and test this API locally; it's fired up, and let me check something: it lives just at the raw
path for this, so let me do a curl with /hello at the end. Cool. If you can make this out, it's a JSON blob: I have a structure called messages, and then four key-value pairs, hello-lofts, demo, webinar, and iam, corresponding to the data in the service. OK, super straightforward. Let's take this code and deploy it. I'm going to call my sam package command, which zips up this code, pushes it into S3, and creates an updated SAM template that references it, and then I'm going to deploy this as a stack. I happen to have a stack already existing for this, so it's just going to update it; the stack is called get-messages, and it's going to update this serverless application. Nothing here is new or specific to layers: I'm just taking a serverless application and pushing it up into prod, updating it with all of these packages in there. It takes a moment or two to finish updating.

Cool, my stack updated successfully. Let me go back to the Lambda console and go to Applications; I see get-messages was just updated. I can go into it, expand out my API, go to the API endpoint, and I see the same messages I just saw locally in my IDE. I can also take a look at the Lambda function for this. In the Lambda console for the function, I see that I have my function defined here, I've got some permissions on the right side for CloudWatch Logs and Parameter Store, and I've got my API Gateway. But when I click on the Lambda function in the console and scroll down, I see a message saying this package is too large for the inline code editor. We pulled in something like 60 dependencies from Python, so that's not too surprising; the editor can only handle more basic stuff. Now, imagine that for this application these dependencies are super, super standard: boto is the AWS SDK for Python; X-Ray is a tool we have for
tracing and troubleshooting calls; and setuptools is another Python library that helps with the management of all this stuff. There's a good chance, as a Python developer writing Lambda functions, that I'm going to include these in almost every function I ever create: anything that's going to deal with an AWS service is going to have these three libraries. So I could potentially be shipping not just these three libraries but the 60 dependencies they brought in, in every single Python-based Lambda function I ever create. If I were to upload those as part of every zip, it would be really wasteful: it would be a lot of data, I'd have to manage the versions and think about that, and it could become problematic. So what could be better? Well, this is where layers come in.

I'm going to come back here and delete this built version of my code. I'm also going to delete the requirements.txt and say there are going to be no requirements for this function: just app code and a SAM template. Again, going back to cooking-show style, I come back to my repositories, and I already have a repository that represents a layer with all those same dependencies in it, so let me clone it real quick. Great, I have another directory here; in it I have a README, a requirements.txt, and a template.yaml. I mentioned before that in SAM we have a new resource called LayerVersion. What LayerVersion does is basically allow me to create a new layer. It references the layer name, a description, where the code lives in my directory structure, compatible runtimes (which depend on what language I'm using), whether it's shareable, license information, and then a retention policy, which has to do with whether, if I delete my layer, I still want to keep a record of it internally.
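A SAM LayerVersion resource along those lines might look like this sketch; the names, runtime, and license here are illustrative, not copied from the demo:

```yaml
Resources:
  BotoXrayLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: lambda-layer-boto-xray
      Description: Shared boto3, X-Ray SDK, and setuptools dependencies
      ContentUri: ./layer/          # directory whose contents become the layer
      CompatibleRuntimes:
        - python3.6
      LicenseInfo: MIT-0
      RetentionPolicy: Retain       # keep a record of deleted versions
```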
So, let me go back to my command line and go to this directory. In the README I've provided instructions, so I'm just going to run pip install; pip is the package manager for Python. The pip install is now going to pull in all those 60 or so various dependencies that I have, and I can see them now inside the python subfolder. Why the python subfolder? As I showed before on a slide, depending on the language you have, there's a directory structure you need so that your code can find this without having to import based on a path in the operating system; for Python, that means putting it in a python subfolder. So all of these are here, the exact same stuff that was in my application. This is technically a serverless application, right? It has a SAM template. So I'm going to do a sam package again, which zips this up and updates a template file, and then I'm actually going to deploy this as a layer. There we go: sam deploy, with a stack name of lambda-layer-boto-xray. It's just taking all these dependencies and uploading them as a layer. It takes a moment or two; I need some Jeopardy theme music I can play for this.

Cool, it's done: it has created/updated the layer lambda-layer-boto-xray. I can come back now to the Lambda console and go to Layers, and as you might expect I see my layer defined there. I can find all sorts of information about it: it was just created 25 seconds ago, with the license I referenced. You can see I actually have a couple of versions of this; I'm now up to version 5, and all my previous versions are here. From here I can select different versions. Let me go to version 1: version 1 is old, so I'm going to delete it. That basically means no one can launch a new function referencing that version, but it doesn't break anything.
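The build step I just ran boils down to something like the following; the commands are a hedged sketch of the demo's README, and the package names stand in for whatever requirements.txt pulls down:

```shell
# Python layer code must sit in a top-level python/ folder, which Lambda
# mounts at /opt/python and adds to the import path. The demo's build is:
#
#   pip install -r requirements.txt -t python/
#
# which produces a tree shaped like this (simulated here):
mkdir -p python/boto3 python/aws_xray_sdk
find python -maxdepth 1 -type d
```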
Existing functions are unaffected, but version 1 goes away. So I've got version 5, and now what I want to do is have my running Lambda function reference this layer. I copy this thing here called a version ARN; an ARN is basically a resource identifier in AWS terminology. I go back to my IDE, back to the template file for my application, and now I want to add a layer capability to it. In SAM I add a Layers property and substitute in the value; I can't just type an actual name, I have to use the ARN for this, and I save it. I go back into a terminal window, package this again pretty quickly, and then deploy. This updates the Lambda function configured in that whole application, and in this case I'm uploading it without any of those dependencies in my application artifact; I'm now referencing my created layer as a layer that should be used. Again, some hold music for a moment.

Cool, that's done. So now, the real test of the demo gods: let's go refresh this. And it worked; it pulled up the same information. Let's actually create another parameter: in the /messages namespace, I'll call it post-layer, with the value "test after layer." Cool; go back up, refresh, and somewhere in here is post-layer. If I come back to the Lambda console, find that serverless application, and find get-messages, I'll again see the information about my Lambda function. Let me go into the function console for this, and I'll see something a little different: it now shows that there is a layer attached to this function, and I can scroll down and find the version, the merge order, and other information about it. If I click back on the Lambda function and scroll down, I can now actually see the code in the code editor, because my application artifact is under the size restriction.
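In the application's SAM template, attaching the layer looks roughly like this; the handler name, runtime, and the region, account ID, and version inside the ARN are placeholders:

```yaml
Resources:
  GetMessagesFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.lambda_handler
      Runtime: python3.6
      Layers:
        # Ordering matters when more than one layer is listed
        - arn:aws:lambda:us-east-1:123456789012:layer:lambda-layer-boto-xray:5
```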
With my application artifact now under the size restriction for the console, I can see that I just have my app file and my template file, which references my layer, and that's all good. Now, all I need to do going forward, for any of the other functions or serverless applications I have, is reference that layer. You can imagine that this layer, which for me represents talking to any AWS service plus X-Ray tracing, could be joined by layers for third-party tools for monitoring, logging, or security; layers that represent certain code your organization needs; or things like SSL certificates or configuration files, all sorts of things you want to standardize across your applications. So that's Lambda Layers, and again a really brief, quick example: I took an application that had a lot of dependencies and uploaded it; then I said, you know what, I don't want my application to carry all these dependencies, pulled them out, created a layer, configured the function to use the layer, and pushed it back up. I didn't touch my app code: nothing in my application code changed, I didn't have to change my import structure or my calls. Nothing about my actual application changed to use layers, so it's really powerful and easy to get started with.

Let's transition to the next part of this session. Lambda has been around now for just about four years, and one of the tenets we hold really close on the Lambda team is that we want to make it really easy for developers to get started. I was actually just in Seattle for the last two days, meeting with various Lambda team members and talking about new features and things they're looking to build this year, and we get legitimately heated with each other about making sure we don't impact the ease of working with Lambda. We want to keep that bar really, really low, and
so one thing we've done to keep it low, since we first launched the product, is have managed runtimes for you: languages where we handle the patching and the updating, give you new versions of them, and expand them over time. To date, what we have is a number of different languages; I've actually removed from this slide the ones that are now deprecated, because we do follow the lifecycle of the ecosystem for these languages. You'll see we've got different versions of Node.js, Python, .NET Core, and other things out there, and the reality is that the languages you see here represent a really large majority of what most of you are probably writing applications in today. A huge chunk of the industry uses just those languages and has been pretty happy, but we know that's not everyone, and there's a really big long tail of languages out there.

Stack Overflow does a yearly developer survey in which they highlight the most popular languages developers are using. It's maybe a little hard to see here, but some of the very top ones are things like JavaScript, HTML, CSS, and SQL, which aren't really related to what we're doing on the back-end side. You also see Java, PowerShell, Python, C#, and things like PHP; it's a long list, and there are a lot of things we don't have managed in the Lambda environment. Similarly, GitHub does their State of the Octoverse, which tracks top repositories by language as well as the growth of languages inside of the GitHub "Octoverse," as they call it, and they show things like Rust and Groovy and Kotlin growing really rapidly. These are really exciting languages; people are finding them really useful for certain use cases. So we know you care about these languages, and we want to help you use Lambda for event-driven workloads with them. This led to us launching custom runtimes
for Lambda. As you can imagine, this basically allows you to bring any of those languages that we didn't support previously to Lambda. You want Rust? You can bring Rust. You want Groovy? You can bring Groovy. You want Kotlin? Bring Kotlin. You want to bring COBOL? You actually can bring COBOL; I don't know why you would, but you could. So there are all sorts of things that you can do with this, and it's powered by something called our Runtime API, which we're going to go into a little bit here. Basically, what it does is provide a standardized interface for running any language you want. Now, we built this for two reasons. One was to enable all of you to be able to run whatever code you want inside of Lambda. The second is that we actually needed, on our side of things, a better internal standard for this: the different languages we originally had had different ways of working in our platform, and so we actually now use the Runtime API to power new language support. We announced Ruby as a managed language, but behind the scenes it uses the Runtime API, and this will allow us to have mutually beneficial new capabilities for the platform that benefit both the managed languages and the languages that you run, over time. Runtimes can be distributed as a layer; we saw before how you create a layer when you have code you want reused, and so you could have a layer that represents a runtime you use across your organization, and there you go. When you create a function with the Runtime API, instead of choosing one of the managed languages, you choose "provided" as the language; that's what it looks like. And then inside of either your layer or your application artifact there has to be an executable file called bootstrap. As the name might imply, this is what we use to bootstrap, or configure and set up, the environment so that it can execute whatever code you want it to execute. So basically you have just an executable called bootstrap, which can be built in a number of different ways, and
then it will execute the code on your behalf. Now, the bootstrap does a couple of different things. It basically acts as a bridge between this thing called the Runtime API and your code, so imagine it as kind of the gateway, or the shim, that sits between those two things. It has to be able to react and talk to the Runtime API behind the scenes, which handles things like actually passing events into your code and explaining to the bootstrap how your code is configured, like where the handler might be, and there's other information that it has to handle as part of that. So it basically has to listen on a local interface in the execution environment, and then it will interface with a local API, the Runtime API, in order to get information. The API for this is actually really, really basic; it's just a couple of calls that are available, not some massive, expansive thing. So we see here that when you go through the initial bootstrap, if things don't work out, you have to be able to post an initialization error. Otherwise, you call "invocation next", as in "give me the next event", and this will pull down the request information that you want to pass in as the event; you would then post either a response or an error message, and that's it. You bootstrap up the executable, you get an event, you execute that with the handler code, and then you post back a response or a failure, and you can see here how that gets pulled out of an HTTP endpoint down below. Inside of it, when you're actually doing the processing of the event, you have to, again, get the event, you have to propagate something called a tracing ID, and you have to create the context object: that object we talked about that gets passed into a function and has certain capabilities and information about the environment that your code might interface with. Then you have to invoke the function handler, so you actually call the execution of that, and handle the response or errors.
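The loop just described can be sketched in a few dozen lines of Python. This is a hedged illustration, not any vendor's actual bootstrap: the endpoint paths, the AWS_LAMBDA_RUNTIME_API variable, and the Lambda-Runtime-Aws-Request-Id / Lambda-Runtime-Trace-Id headers are the documented Runtime API interface, while the handle function here is a stand-in for whatever code a real bootstrap would load from the _HANDLER and LAMBDA_TASK_ROOT environment variables:

```python
import json
import os
import urllib.request

# The Runtime API is served on a local endpoint that Lambda exposes
# through the AWS_LAMBDA_RUNTIME_API environment variable.
API_VERSION = "2018-06-01"

def next_invocation_url(api_host):
    # long-polling GET: blocks until Lambda has an event to hand us
    return f"http://{api_host}/{API_VERSION}/runtime/invocation/next"

def response_url(api_host, request_id):
    # POST the handler's result for a specific invocation here
    return f"http://{api_host}/{API_VERSION}/runtime/invocation/{request_id}/response"

def error_url(api_host, request_id):
    # POST a failure for a specific invocation here
    return f"http://{api_host}/{API_VERSION}/runtime/invocation/{request_id}/error"

def handle(event, context):
    # stand-in handler: a real bootstrap would locate the user's code via
    # the _HANDLER and LAMBDA_TASK_ROOT environment variables
    return {"echo": event}

def run_loop():
    api = os.environ["AWS_LAMBDA_RUNTIME_API"]
    while True:
        # 1. get the next event
        with urllib.request.urlopen(next_invocation_url(api)) as resp:
            request_id = resp.headers["Lambda-Runtime-Aws-Request-Id"]
            # 2. propagate the tracing ID so X-Ray segments line up
            os.environ["_X_AMZN_TRACE_ID"] = resp.headers.get("Lambda-Runtime-Trace-Id", "")
            event = json.loads(resp.read())
        # 3. invoke the handler, then 4. post back a response or an error
        try:
            body = json.dumps(handle(event, None)).encode()
            url = response_url(api, request_id)
        except Exception as exc:
            body = json.dumps({"errorMessage": str(exc)}).encode()
            url = error_url(api, request_id)
        urllib.request.urlopen(urllib.request.Request(url, data=body, method="POST"))

# only loop when actually running inside a Lambda execution environment
if __name__ == "__main__" and "AWS_LAMBDA_RUNTIME_API" in os.environ:
    run_loop()
```

That's the whole protocol: one GET to fetch work, one POST to report the outcome, repeated forever.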
Then there's any cleanup that might need to happen afterward. Another part of this is that we pass in environment variables. These environment variables tell the bootstrap information about, again, the execution environment, the handler, and the task root, so where the code lives, whether it's in a layer or someplace else. With this you can execute pretty much anything. We have people running essentially shell scripts that execute other binaries. There's an open source functions-as-a-service platform that runs on Kubernetes called Knative, and some folks have gotten creative and gotten it to run in Lambda via this, so you can have portability for your functions in and out of different clouds if you wanted to do something like that. So there are a lot of different things you can do with this. Really key, though: most of you in this room will never touch this. You will never touch the Runtime API; you will use the managed languages, which we will continue to grow and update over time. That's where we want to see people be. We really don't think that you should have to use the Runtime API unless you're going to use one of these unique languages, so it's useful to know about, but you're probably not going to end up using it too much yourself. Now, when we launched this, we at AWS open sourced a couple of runtimes, as well as worked with a number of partners on a bunch of other runtimes. So we announced C++ and Rust runtimes that are built by us, and then we have partners: Alert Logic announced runtimes for Erlang and Elixir, two languages that they use really heavily internally. They had been looking for support in Lambda for those for a really long time, so they kind of built it for themselves and the wider community. A company called Blu Age did launch a COBOL custom runtime; they work with a lot of companies that have old COBOL code, and you can take that COBOL code and run it in Lambda now. A company called NodeSource has what they consider to be kind of a high-performance,
high-grade-quality Node.js runtime that's available now. And then the folks at Stackery, who have a tool for managing serverless applications, like PHP, and so via their runtime you can run PHP in the cloud. You'll see at the bottom here there's a URL; you could also just Google "github awesome layers", and the person there has a collection of links to all of the custom layers and custom runtimes that have been shared out there among the community. If you create one that you want to share, you just go and issue a pull request against this and say, "here's my layer", and people can go and find it. So this thing continues to grow, like, every day; there's a lot of cool stuff being shared by companies. We have a lot of AWS partners, again in the tooling space, who are launching layers: Datadog announced a layer for their monitoring tech to be put into Lambda, and so again, if you have a company that provides software that other people might consume, creating and publishing a Lambda layer could be a way for you to distribute your product, just as an example. So let's take a little bit of a look at this. I started doing PHP back in, like, the year 2000, and so I was really excited when I saw that Stackery announced this. Let's actually go ahead and take a little look at what a custom layer looks like. We'll start in the directory that they have in their repository, I should say. They have a really basic repository: a whole bunch of shell scripts that they use for management of the layer, things that publish the layer, pull it down and update it, things like that; they've got a Makefile for compilation, a license, a readme, and then they have a bootstrap. Really, the thing of value here is the bootstrap. If we come to take a look at this bootstrap, the first thing that you notice is that it is a PHP script itself. So this is basically going to assume that our layer contains the PHP binaries (I'll show you how they do that here in a second), and we're going to
execute PHP to run PHP, pretty straightforward. They have down in here a function called "start webserver", and basically what this does is run the process that is going to talk to the Runtime API: it's going to init itself, it's going to look for the next event, and then it handles responses or errors back from that. It pulls in a bunch of the environment variables, so it gets the handler and a couple of other things inside of here, and then it executes the actual code that needs to run. There are a bunch of flags on this, but at the end of it, where it says handler file name, it's basically saying, OK, we're going to execute this code inside of here. They also have some failure handlers, and that's actually a huge chunk of the rest of this, where they basically provide the support for handling the request and for failures and stuff like that. But this is 238 lines of code, which really is not a lot of code for what it does; it's really straightforward, a very, very vanilla bunch of system calls and curls to API interfaces, getting environment variables and handling responses. There's a significant amount of whitespace and such, so it's probably 150 or so lines of actual true code, in this case PHP. We go back to the main directory here and we go to the build shell script. What they do, and when you're building runtimes we encourage you to do this, is use an AMI or a Docker container that matches the Lambda execution environment; we publish what those are in our documentation. So you would run basically an install of PHP, then you create some directories, and you basically package this up as a layer, and then it's deployed. They have here in this directory all the instructions you need for getting started with their PHP layer, and really, if we scroll down through here, it's pretty much that you create a SAM template that references their public layer. Now, you could fork this repo and create your own custom layer for PHP.
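If you do fork it and build your own, the packaging step boils down to zipping the bootstrap (plus any binaries you built on the matching image) so that the files land under /opt in the execution environment, with the executable bit preserved. Here's a minimal sketch under those assumptions; the function name and file layout are hypothetical, and the publish call at the bottom is shown only as a comment because it needs AWS credentials:

```python
import zipfile

def package_layer(bootstrap_path, out_zip="runtime-layer.zip", extra_files=()):
    """Zip a bootstrap (plus optional (path, arcname) extras) for a layer.

    "bootstrap" must sit at the zip root so Lambda can find it at /opt/bootstrap.
    """
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for path, arcname in [(bootstrap_path, "bootstrap"), *extra_files]:
            info = zipfile.ZipInfo(arcname)
            # regular file, rwxr-xr-x: Lambda needs the executable bit
            info.external_attr = 0o100755 << 16
            with open(path, "rb") as f:
                zf.writestr(info, f.read())
    return out_zip

# With credentials configured, you would then publish the layer, e.g.:
#   import boto3
#   boto3.client("lambda").publish_layer_version(
#       LayerName="php-runtime",
#       Content={"ZipFile": open("runtime-layer.zip", "rb").read()},
#   )
```

The build scripts in these runtime repos do essentially this with shell and the AWS CLI instead.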
You could own it and track it and all of that, but they are providing this as a managed layer, and we can actually see here that they are up to version 7 of their layer. Then you just write some PHP and you publish it. So if I go here to the Lambda console, and I go to functions, and I go down to my PHP test, we can see, if we look at this, that I have some really basic code: I just say "I can't believe this is PHP", and then I actually call some PHP. phpinfo() is a native call inside of PHP that prints out some interesting stuff here; I'll show you in a sec. I then have an API Gateway in front of it, so let's go ahead and pull this up. Cool. How many of you have ever developed in PHP before? Yeah, so you've fought with this before, right? So this is phpinfo() running in a Lambda function, responding back through an API Gateway, which is generating this HTML, and phpinfo() provides lots of information about the underlying operating system and runtime. Again, this is the stuff running in Lambda, so the Lambda OS version that we use and so on is shown here; I can come down here and see Lambda-specific environment variables and versions and keys and other stuff that are here. Again, nothing necessarily too exciting about this beyond the fact that we have PHP running in the Lambda environment, and all I had to do for this was have a SAM template that references a layer that Stackery has shared (and again, this could be one that I've created too); there's a SAM template for this that manages all of this for me, and I can go and look at that SAM template. So again, in a perfect world, you're going to be consuming a layer that's shared by a community or by a company or by a group, or you're taking one of these repositories, forking it, creating the layer yourself, or the runtime yourself as a layer, and referencing it, and then, again, running any code that you could think of. If you want to figure out how to build your own, there are, again, examples of these bootstrap files from all of the companies that have published runtimes.
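A SAM template along those lines might look roughly like this. This is a sketch, not the actual Stackery template: the layer ARN, code path, and handler name are placeholders you would swap for the published layer version you want to consume:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  PhpFunction:
    Type: AWS::Serverless::Function
    Properties:
      # "provided" means no managed language; the layer's bootstrap runs the show
      Runtime: provided
      Handler: index.php          # the file the PHP bootstrap will execute
      CodeUri: src/
      Layers:
        # placeholder ARN: reference the shared (or forked) runtime layer here
        - arn:aws:lambda:us-east-1:123456789012:layer:php-runtime:7
      Events:
        Api:
          Type: Api
          Properties:
            Path: /
            Method: get
```

Swapping the Layers entry for your own fork's ARN is the only change needed to own the runtime yourself.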
It's really, really easy to do; it's effectively just a glorified shell script that makes some API calls for you. Cool. So we've talked about two new kinds of concepts here in serverless applications with Lambda, and both of them are here to really help simplify and improve the development experience for you. The first one, layers, we saw how you can simplify the reuse of code, of objects, of dependencies, of packages, of anything you want to throw into a zip file that needs to be shared across functions; and again, in the example of "I have three requirements that expand out to 60", you don't want to have to schlep that stuff around every time you upload and push an artifact. The second, the custom runtime environment, is how we get at that long tail of languages that you might want to use. So at this point you can now build event-driven applications that execute only when they need to, tied into an event source, in any language that you could think of, with a small asterisk: there are some things that don't run on Linux today. But there are a lot of cool things out there, a lot of things that are possible, and again, these are two new capabilities; they are just a couple of weeks old, so get your hands on them and give us feedback on them. We'd love to know more about how you feel about them or what we can do to make them better. As I mentioned earlier, for those of you who may be just joining or haven't been here yet today, go to aws.amazon.com/serverless to find out more about all of this stuff. We've got links to pretty much everything we talked about earlier: documentation, developer tools, resources like getting-started guides, webinars, and stuff like that, as well as AWS partners. Again, my name is Chris Munns. I am Principal Developer Advocate for Serverless at AWS; you can find me at munns@amazon.com or @chrismunns on Twitter. We're going to take a quick break here and get started in the next session
at the top of the hour, where we're going to be talking a bit in depth about API Gateway, building and managing APIs, so that's up next. But thank you all for joining, thank you for coming to see these sessions, and those of you on Twitch, stick around; we're going to get started on the next topic in just a couple of minutes.
Info
Channel: Amazon Web Services
Views: 4,751
Keywords: AWS, Amazon Web Services, Cloud, cloud computing, AWS Cloud
Id: fDv_RKygOXU
Length: 48min 52sec (2932 seconds)
Published: Fri Feb 01 2019