Live from the London Loft | AWS Batch: Simplifying Batch Computing in the Cloud

Video Statistics and Information

Captions
Hello everyone, and good morning London, good morning Twitch. Great to see everyone here this morning; hopefully you didn't have too rough a night last night. We had a couple of beers, but I still managed to get enough sleep, so this morning should be alright.

Today we're going to talk about something quite fun: AWS Batch. It's a service that was launched quite recently, and there's a lot you can do with it; it's been very successful. Adrian Hornsby, that's my name, Technical Evangelist with AWS. I've been playing with AWS as a customer for roughly ten years, a bit more. I joined AWS almost two years ago as a Solutions Architect and joined the Technical Evangelist team earlier this year. I love climbing and I love ginger shots, so if you're into those two things I'm happy to connect. If you have questions on AWS, feel free to message me on Twitter as well; I'm happy to connect you with the right folks.

What can you expect today? We're going to review the AWS Batch service, or actually what batch computing is first, and then walk through the service a little bit: what it is, what it does, a look at the API and how to use it. Then I'll go through some demos. We have the fetch-and-run demo, which uses a lot of cool technology, Docker and Bash, and of course I added some DynamoDB into it, so we can see a little bit of what can be done with it. We'll show the code right away, and then look at some usage patterns, how people actually use it on AWS, so maybe that gives you some ideas of what you can do with it.

But first, what is batch computing? If you ask many different people, you'll probably get a lot of different answers; we all have our own understanding of what batch is. If you think about it, though, it simply means that you can run jobs asynchronously and automatically across a bunch of different computers. Some jobs may have dependencies, sometimes very complicated dependencies, so it can get quite complicated. The simple version is: I have a job, I give it to a bunch of computers, and the computers run it. It sounds simple like that, but it's actually quite complicated under the hood if you want to build a system like this, because every job has different requirements for CPU, memory, network, or whatever system it needs to use. That means that if you want an efficient batch infrastructure, you need to be able to handle all these different requirements. Also, if you want your jobs to run fast, you need infrastructure that can scale up really fast, and sometimes also scale down, because you don't want to pay for the environment from the beginning of the year for something you might only use in November. So yes, it's quite complicated to handle.

The idea of batch is actually quite old; it goes back to the 19th century, when the tabulating machine was first used for the United States Census to physically count how many people were living in the country. That was 1890. After that, a bunch of machines and systems were created to do batch. The first systems used punch cards that you would feed into a machine; each card had its own fields, and the machine would do some calculation based on them. That's the simplest form of batch: you put the job on a card and you ask the machine to compute it.

Then, if you're familiar with Linux and the command line, you have tools like the at command. You can pass it a job; here I'm just passing it a request to compile a C file, and I tell that job to run at 11:45 on January 31st. That's a very easy way to have a small job executed at a particular time.
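The exact commands from the slide aren't reproduced in the captions, but the at workflow described here looks roughly like this (a minimal sketch; the file names are placeholders):

# Schedule a compile job for a specific time with at
echo "cc -o myprog myprog.c" | at 11:45 Jan 31
# at answers with something like: job 1 at Wed Jan 31 11:45:00 2018

# List the pending jobs and their IDs
atq

# Remove job 1 from the queue
atrm 1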
That gives you an answer like this: it assigns the job a number, job 1, and tells you when it's going to be executed. There are different ways to instantiate that job; that's just another version of it. The atq command in Linux lists all those pending jobs, and atrm can remove any of them if you give it the job ID. So this is already quite common, especially for sysadmins or DevOps people who use Linux for batch processing; it's very commonly used for routine work. For example, every morning at eight you enable, let's say, authorization for the users on your network, and on Friday at eight in the evening you shut down SSH access to all the computers on the network. These are typical sysadmin tasks you would do in batch.

Today, if you want to do this, very often people run it on on-premises infrastructure, and quite often it's do-it-all infrastructure: general-purpose machines, not optimized for CPU, not optimized for memory, not optimized for anything in particular, just able to run any type of job. If you think about it, that's not really efficient, because you have to buy it one day and then it just sits there; you don't benefit from the on-demand access to technology that the cloud gives you. It's really trying to fit a square into a circle, so it's not really efficient.

So what is Batch on AWS? In a nutshell, you have all the primitives of batch computing in the cloud: you can execute shell scripts, you can execute Linux executable files or Docker images on instances, and you can also download a zip file and execute it, all those kinds of things. What AWS does is take care of the entire infrastructure and let you benefit from on-demand: you specify a job with its requirements, what kind of job it is and what the CPU requirements are, and AWS sets up the infrastructure for you. That's pretty much it. Overall, it really reduces the complexity of running the infrastructure, it reduces your cost because all of a sudden the infrastructure is on demand, and it saves time because you can focus on your business needs; you don't have to become great at managing batch infrastructure, which is not easy in itself, because you need to make sure the infrastructure is secure and all that.

AWS Batch has a bunch of components. (I'm going to have a drink, wondering if the beer is coming back. I'm kidding, mum, if you're watching live, I don't drink beer.) The components of AWS Batch are these: you have jobs, quite obviously, and every job has a job definition where, as I said, you put the requirements: what kind of job it is, what the dependencies of that job are, and what the infrastructure requirements are. You have queues, and you can have different types of queues with different priorities, of course. You have the scheduler, and then the compute environments. And these are the relations between them.
In AWS Batch you have jobs, and you can have dependencies between jobs. As I said, each job has a job definition, which can have its own container properties; you can share the same container properties between jobs if you want, or you can have one for each of your jobs. Then you have a queue for those jobs. Queues support priorities, which means that if you have jobs that need to be executed before the others, you can push them into a high-priority queue if you wish; the scheduler will take care of making sure the high-priority queues are dealt with before the lower-priority ones. Once a job is in a queue, the compute environment takes care of executing it. You can add several compute environments; it's really up to you. We'll go into a bit more detail now.

So what's a job? It's really a unit of work. As I said, in AWS Batch we use containerized applications to execute your jobs. That means you first create a Docker container which has the tools you need to run the job, then you push it to Docker Hub or to ECR in AWS, and then you use that Docker container to execute your job. A job really references an image, a command, and parameters.

It looks something like this; hopefully you can see it. The definition is JSON, something you can pass on the command line, and this is the command line to submit my JSON file as a job. You have a name, you have a queue, that's quite simple, and then you have the container section. You can see there is a containerOverrides block: it means that at job submission you can override some parameters that were defined in the container. This is very good for reusability: you make a general-purpose container, and then you can pass some extra arguments at submission. What's important to notice is that you can also override the CPU and memory requirements, and you can override the environment variables.

Something else that's important: you have retry strategies, which allow you to define how many times you want your job to be retried before it actually fails. And here is the same kind of submission, still another job; the only difference is that I have a dependency. When you submit a job, you get back a job ID. If that job is very long, for example it takes a few hours, you can still push other jobs, and if those jobs depend on the first one being executed successfully, you can add a dependsOn field with the job ID that needs to succeed before this one is executed. The scheduler will go through the queue, figure out which jobs have dependencies, and execute them when they are ready. So it's very simple to do job dependencies. Keep in mind that these JSON documents are usually created by machines, not typed by humans; you don't have to copy and paste your job IDs. When you submit your job and you know your requirements and your dependencies, you inject them at submission.
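The JSON from the slide isn't reproduced in the captions; a submission that combines the overrides, a retry strategy and a dependency might look roughly like this (a sketch only; the names, the queue and the job ID are placeholders):

aws batch submit-job --cli-input-json file://submit-job.json

Where submit-job.json contains something like:

{
  "jobName": "process-logs",
  "jobQueue": "high-priority",
  "jobDefinition": "process-logs-jobdef",
  "dependsOn": [
    { "jobId": "876da822-4198-45f2-a252-6cea32512ea8" }
  ],
  "containerOverrides": {
    "vcpus": 2,
    "memory": 2048,
    "command": ["process.sh", "60"],
    "environment": [
      { "name": "INPUT_BUCKET", "value": "my-bucket" }
    ]
  },
  "retryStrategy": { "attempts": 3 }
}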
A job has states, obviously. When you push a job, first you submit it, so it has a state called SUBMITTED, pretty obvious. Then there is PENDING, which means the scheduler is looking at the dependencies, checking whether there are jobs that need to be executed before this one. Once that's evaluated, it goes into the RUNNABLE state, which means the job is actually ready to be run. Then you have STARTING; the difference between RUNNABLE and STARTING is simply the infrastructure. If your infrastructure isn't there, ready to run the job, the scheduler needs to provision it, so you stay in the RUNNABLE state; once the infrastructure is provisioned it goes to STARTING, and RUNNING is when the actual job is running. Whether you end up SUCCEEDED or FAILED depends on the exit code: zero is SUCCEEDED, and a non-zero exit code is FAILED. The job is also marked FAILED if it's cancelled (you can cancel your jobs) or if it's terminated, which means the instance running the job has been terminated.

Now you might ask: why would my job be terminated? Well, you can use Spot Instances in your compute environment to run the jobs. For example, companies that don't have strict time requirements can use the Spot market to run jobs. The Spot market is a bidding market for unused capacity in AWS, and you can get it at a much better price: you define a price that you're willing to pay for your compute environment, at a much lower rate than On-Demand. The only difference is that if someone bids a higher price, you lose that instance within a couple of minutes, so if your job was running, you lose it. It's nonetheless very doable, as long as you push the state of your jobs somewhere else and you don't have very long dependencies in your jobs. So it's something you can play with to run those jobs cheaply.

To run jobs you need job definitions, and this is also the place where you define attributes like the role the job will have: what the policies of that role are and what it can access. Let me quickly show you one of my policies. This is a policy for accessing DynamoDB: you see I have a statement that allows GetItem and PutItem on one particular table in DynamoDB. This is a very good way to say that my job can only access that particular table, and can only do put and get, nothing else; it cannot read other tables, it cannot delete the table, it cannot do anything else. Another possibility, if you want to use S3, is to allow Get and List on one particular S3 bucket. This is a very good way of specifying really strict security requirements for your job, so that if your job starts to run loose, or you made a mistake, it cannot delete unwanted data. So, very important: when you set up your job and define the roles, don't use the asterisk for the resources; really define the resources that you want your job to target, nothing else, so that you don't have accidents.
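The policy itself isn't shown in the captions; a scoped-down job role policy of the kind described might look roughly like this (a sketch; the region, account ID, table and bucket names are placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:PutItem"],
      "Resource": "arn:aws:dynamodb:eu-west-1:123456789012:table/batch-results"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-job-scripts",
        "arn:aws:s3:::my-job-scripts/*"
      ]
    }
  ]
}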
Other attributes you can specify include the mount points of the container, the container properties, environment variables and the retry strategy; those are also attributes that you can override at job submission. This is how it looks at the API level: at the top there is the CLI (command line interface) command I use to register the job definition, where I pass a JSON file that contains my job definition, and you can see all the container properties. These are the initial properties of the container, which can be overridden later if you want. As I said, something very important here is the jobRoleArn, which points to my policy: the policy that defines what the job is allowed to do or not, exactly the one I showed you a moment ago.

You then have a job queue. As I said, every job goes through a queue so that the scheduler knows when to start the job. Every queue can have its own priority, and the information in the queue is persisted for 24 hours; after that it's lost. You can have multiple priorities and multiple queues if you wish. This is how you would create a queue using the CLI; again it's very simple, you have a JSON file. You have to define the state of your queue, ENABLED or DISABLED, so you can create queues that are not immediately usable. Then you have an order, which simply defines the relative priority of the queue: if I had one, two, three, four queues, each would get its own priority.

The job scheduler is the magic that does everything for you, and it's really the complicated piece of technology if you wanted to build this yourself. It evaluates all the jobs that are in the queues. It might be easy to evaluate three, four, five jobs, but when you have tens of thousands of jobs, or even more, it becomes very complicated to evaluate each queue, with the jobs' priorities and dependencies, and figure out when to run those jobs. To give you a rough idea, it behaves like a FIFO (first in, first out) queue: if you submit a job, it will roughly be processed in the order it was submitted, unless you have dependencies, of course. If you submit a lot of jobs and you have a lot of infrastructure to compute them, those jobs will eventually be processed concurrently, but they are still taken from the queues in a FIFO manner, unless you have dependencies.

And finally you have the compute environment. There are two types: managed and unmanaged. Managed means you give all the bells and whistles to AWS to do the magic for you; that's the easy way, you have nothing to do. AWS will start your compute environment and scale it depending on the minimum and maximum vCPUs you set in your requirements. You can also set the instance type to "optimal", which means AWS will evaluate the job, look at what kind of job it is (is it CPU bound, is it memory bound, or a combination of different requirements) and select the right instance for the job to be executed on.

There are already questions online: are these jobs a single application or a workflow? It really depends; you can have single jobs (it's actually a very good question) or you can support very complicated workflows. It really depends on how you want to do it.

This is how you would create a compute environment using the CLI. Again it's defined as a JSON file, and it's a bit more involved, there's a bit more data to enter. You need to define an instance role; if you select managed, AWS will create that role for you, so you don't have to do it. The instance type here is "optimal", so I don't have to define exactly which instance I want; I let AWS figure out the best instance for me. The same goes for the image ID: when you put "optimal", AWS will select an AMI of the latest generation, with the latest software and security updates, which is something you need to be aware of as well.
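The JSON files themselves aren't in the captions; put together, the two CLI calls might look roughly like this (a sketch; the role ARNs, subnet, security group and names are placeholders):

aws batch create-compute-environment --cli-input-json file://compute-env.json

Where compute-env.json contains something like:

{
  "computeEnvironmentName": "demo-managed-ce",
  "type": "MANAGED",
  "state": "ENABLED",
  "computeResources": {
    "type": "EC2",
    "minvCpus": 0,
    "maxvCpus": 256,
    "desiredvCpus": 0,
    "instanceTypes": ["optimal"],
    "subnets": ["subnet-0abc1234"],
    "securityGroupIds": ["sg-0abc1234"],
    "instanceRole": "arn:aws:iam::123456789012:instance-profile/ecsInstanceRole"
  },
  "serviceRole": "arn:aws:iam::123456789012:role/AWSBatchServiceRole"
}

aws batch create-job-queue --cli-input-json file://job-queue.json

Where job-queue.json contains something like:

{
  "jobQueueName": "demo-queue",
  "state": "ENABLED",
  "priority": 1,
  "computeEnvironmentOrder": [
    { "order": 1, "computeEnvironment": "demo-managed-ce" }
  ]
}

The compute environment comes first because the queue has to reference a valid compute environment when it is created.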
Then, in the compute resources, you can select the subnets the environment will attach to. For example, if you have EC2 instances or your own infrastructure running in a particular subnet and you want your jobs to only access that, you can define it; it's a good way of setting the security boundaries of your jobs. And of course you have the service role there as well; as I said, if you let Amazon manage the environment, that role gets created automatically.

Something else you can do: you can provide your own AMI if you don't want Amazon to select one for you and give you the latest version of the Amazon Linux image. You can define a custom AMI in which you have installed, for example, particular software that you want to use. A very classic example is GPU-heavy workloads: if you want to do machine learning, for example using MXNet, you typically need an AMI tailored to running GPU-intensive workloads, so you would create your own AMI with the right drivers and all the parameters for your job to be executed. The only requirements are that it has to be Linux based, it has to support HVM, the latest networking capability, and it has to have the ECS agent installed on the instance, because under the hood Batch uses ECS to containerize your application.

There are some limits on what you can launch. There are soft limits, which you can increase with a support ticket, and there are hard limits as well. The upper ones on the slide are the default limits that you can increase with a ticket to support: by default you can only have 10 compute environments, which is already pretty good. The maximum number of jobs, the hard limit, is 1 million; unfortunately I still haven't managed to reach that limit by hammering my keyboard, so it should be alright, but those limits are worth knowing. This is important: if you have jobs that require different default limits, don't run away, do come talk to us and we'll connect you with the service teams and try to figure out how we can help. Of course, we always want to hear from customers what you are doing and how we can improve the service.

Worth noting, there have been a couple of updates. Batch is now fully supported by CloudFormation, since August 11th this year, so you can integrate it into your CI/CD very nicely. I'm a huge fan of infrastructure as code, so before that I was really missing it. Now you can define all your jobs and your Batch environment in CloudFormation, which means you can replicate the exact same environment in one region or another without having to go into the console or mess around too much.
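No template is shown in the talk; a minimal CloudFormation sketch for a job definition and a queue might look like the JSON below (the image URI, role and compute environment ARNs are placeholders; check the AWS::Batch::* resource documentation for the full property lists):

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "FetchAndRunJobDefinition": {
      "Type": "AWS::Batch::JobDefinition",
      "Properties": {
        "JobDefinitionName": "fetch-and-run",
        "Type": "container",
        "ContainerProperties": {
          "Image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/fetch_and_run:latest",
          "Vcpus": 1,
          "Memory": 500,
          "JobRoleArn": "arn:aws:iam::123456789012:role/batchJobRole"
        },
        "RetryStrategy": { "Attempts": 1 }
      }
    },
    "DemoJobQueue": {
      "Type": "AWS::Batch::JobQueue",
      "Properties": {
        "JobQueueName": "demo-queue",
        "State": "ENABLED",
        "Priority": 1,
        "ComputeEnvironmentOrder": [
          {
            "Order": 1,
            "ComputeEnvironment": "arn:aws:batch:eu-west-1:123456789012:compute-environment/demo-managed-ce"
          }
        ]
      }
    }
  }
}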
Now, the big thing: yesterday something crazy happened, I don't know if you're aware of it, but AWS has moved to per-second billing for EC2, and Batch is part of that. You're no longer billed per hour on EC2, you're billed per second. That means you can benefit from running a job for 15 seconds if you want, and you won't be charged for the whole hour. This is very important, it's big news, it's pretty awesome news, and it also applies to other services like EMR. The only thing I need to mention is that there is still a one-minute minimum charge, so when I said 15 seconds, you would actually be charged for one minute, not 15 seconds; just so I don't say anything stupid on the live stream, especially with my boss in the room.

Let's do some demos now. We've seen a little bit of how things work, so now that you know the theory of Batch, let's look at how you would create your Docker container. Actually, let me first show you what we're going to do; I just got excited. This is me pretending to be a developer, and I have jobs to run. My job is to go and fetch a script from S3 (S3 is our storage service), execute that script, write the output of the job into DynamoDB, and then exit gracefully, hopefully. I'm executing this with just one queue and one compute environment; I'll show you their definitions now.

It's set up like this because, typically, in the real world, which is the use case we're thinking of with AWS Batch, it's not a human submitting all of the one million jobs one by one; you would automate it. A very classic way of automating this is that you have data pushed into S3, log files or zip files or whatever it is, and putting an object into the storage creates an event that you can catch with an AWS Lambda function, which then does the job submission. So as you pour data into S3, the events trigger a Lambda function that submits the jobs. That's how things typically look; however, for the sake of the demo, I will act as the Lambda function.

So, let me start with the environment I will be running in. Here you see I'm defining, and this is something you need to understand, the max vCPUs to be used, 256. That's a big value, and it means my jobs can scale out to a lot of different instances running concurrently at the same time. My min is zero, which means that when there's no job for a long time, AWS might just stop the instances, because I have no requirement to keep any running. That also means that when you submit a job, it might need to start an instance, so it might take a little bit longer; again, it really depends on your requirements and how you want to do it. This is my queue, nothing more complicated; I could increase the priority if I wanted. And this is my job definition: you see I have a Docker container with my application in it, and the application just fetches a script and executes it, nothing miraculous. I do have some properties: I say my container requires one vCPU, which means that on my 256-vCPU compute environment I could possibly run 256 concurrent jobs; these are my memory requirements; and this is the job role I have, the fetch-and-run role I showed you. If I show it in IAM, it's this one: the role that allows me to access S3, and DynamoDB for PutItem and GetItem only. Really nothing fancy. I don't put any environment variables in the job definition; however, at job submission I do pass a bunch of environment variables, simply to play with and to demonstrate the override. You can also pass a command: here the command passes 60 as a parameter to my job script, and in the environment variables I pass the URL of the script I want it to run. This is important, because it means I can reuse that particular job definition and give it as many script URLs as I want, which makes it very reusable and gives me flexibility, instead of a hard-coded job where I would have to resubmit everything from scratch just to change a number.
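The submission JSON isn't reproduced in the captions; based on the description, it might look roughly like this (a sketch; the script name myjob.sh, the bucket, the queue, and the variable names BATCH_FILE_TYPE and BATCH_FILE_S3_URL follow AWS's published fetch-and-run example and are assumptions here):

aws batch submit-job --cli-input-json file://fibonacci-job.json

Where fibonacci-job.json contains something like:

{
  "jobName": "fibonacci-60",
  "jobQueue": "demo-queue",
  "jobDefinition": "fetch_and_run",
  "containerOverrides": {
    "command": ["myjob.sh", "60"],
    "environment": [
      { "name": "BATCH_FILE_TYPE", "value": "script" },
      { "name": "BATCH_FILE_S3_URL", "value": "s3://my-job-scripts/myjob.sh" }
    ]
  }
}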
This is my Bash script, the kind of script I will push into S3 to be retrieved and executed, and these values are important because they are used inside the script. You can see that if there is no Batch job ID set, I give it one by default; that's simply so I can test locally. I'll show you how to test your container locally; it's important, so that you don't have to submit, wait, find out it doesn't work, and try again. You really want to try it locally first. Then the script takes a bunch of arguments: the 60 that was passed in the command at submission, and the environment variables I passed. What the script does is simply compute the Fibonacci sequence for the argument you pass (everyone loves Fibonacci, so I picked Fibonacci today), and then it takes the result and stores it into DynamoDB. You can see my request to DynamoDB: I pass an item, and it's this particular item here. Now you might wonder about the double escaping of the JSON; that's because of the beautiful way JSON handles strings, so I need to escape the double quotes to make sure the item is passed as valid JSON. It's nothing fancy; if you used a library, the correct way, you wouldn't have to do dirty hacks like that. It's just a lot of double escaping.

A couple more things I want to show you. This is the fetch-and-run script that is executed in my container. It's a bit fancier: basically it defines the path where my script will go and fetches the script from S3. Whether it's a script or a zip, it does different things, because as I said you can pass a script or a zip file that needs to be unzipped. Then it checks the different environment variables and assigns them, and it checks whether the dependencies are there: the AWS CLI, for example, is pretty important, and the unzip tool is pretty important if you need to unzip. It checks whether those applications are installed in the container, and if they are not, it exits with an error. Then it does some cleanup. The fun part is really here: I fetch the script, copy it from the S3 bucket to the local disk, make it executable, and then, depending on which case it is, I just execute it. That's it. So in this case, if I pass this script, the container will download it, execute the Fibonacci computation, store the result into DynamoDB, and get out.

Now, this is how I build my Docker container, and it's very simple. I just have some dependencies: as I said, I want three applications installed in the container for sure. One is which, so the script can check that the commands it needs exist; the unzip application will take a zip and unzip it; and the AWS CLI, so I can actually store things in DynamoDB. Then, when I build the container, it copies the fetch-and-run script from my local machine into the container at /usr/local/bin; that's simply a copy from my local environment into the container at build time. And then I set the working directory and the entry point, which is important: it means that when Docker runs the container, it directly executes that script. So when my container starts, it fetches the script, runs it, and gets out. That's it, very simple.
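Neither the script nor the Dockerfile appears in the captions. A minimal sketch of that kind of entrypoint script and image, modeled loosely on the fetch-and-run example AWS has published (the names, paths and environment variable are placeholders), might look like this:

#!/bin/bash
# fetch_and_run.sh -- minimal sketch: download a script from S3 and execute it.
# BATCH_FILE_S3_URL is assumed to hold the s3:// URL of the script to run.
set -e
command -v aws >/dev/null || { echo "the AWS CLI is required" >&2; exit 1; }
TMPFILE=$(mktemp)
aws s3 cp "${BATCH_FILE_S3_URL}" "${TMPFILE}"   # copy the script locally
chmod +x "${TMPFILE}"                           # make it executable
exec "${TMPFILE}" "$@"                          # run it, passing the container command through

And the image:

# Dockerfile -- a sketch of the kind of image described above
FROM amazonlinux:latest
RUN yum -y install which unzip aws-cli
COPY fetch_and_run.sh /usr/local/bin/fetch_and_run.sh
WORKDIR /tmp
ENTRYPOINT ["/usr/local/bin/fetch_and_run.sh"]

Because the entrypoint only needs Bash and the AWS CLI, you can also test the container locally with docker run, passing the same environment variables and your AWS credentials, before submitting anything to Batch.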
The only thing you need to do then is run something like docker build, passing it your directory so that it can copy the files in. I won't do it here because it takes quite some time, since there are a lot of dependencies, so I've done it already: my Docker container, called fetch_and_run, with its latest version, is stored in ECR, which is my store for containers inside Amazon, so that Batch can load it when it needs it. You can see it referenced here in the job definition.

So let's do some fun stuff. Here I'm in the directory where I have all the job and environment definitions, and I want to submit a job. This is very simple: it's the command line to submit the job, and it returns me a job ID. Now I want to show you a little of what it does live, so this is my console, the Batch dashboard. If I refresh, you see I already have a job that is runnable; it went quite fast because my environment was up and running. I submitted it, it goes from RUNNABLE to STARTING, my script starts, and if I go back now, it has actually already succeeded. You can go into the different jobs; this is the job ID that was submitted, you can verify it here, ending in e0, the last job ID that was submitted, and I can look at the logs that were created. You can go into CloudWatch, and you'll see it did compute the Fibonacci sequence for 10.

Now, just for the sake of the demo, I'll change the job to compute Fibonacci of 100, which is a bit more fun, and then I'll just resubmit my job. Ah sorry, this won't work, because I first need to upload my script to S3; so here I upload my local script back into S3, and then I resubmit the job. Cool, so now I should have a different job here being executed; on the dashboard it's already runnable.

I also want to show you a couple of things. This morning I played around a little and I have some failed jobs. For example, here my permission to access S3 was denied; I didn't have the S3 permission, so the job exited with a status different from zero, and you see it's pretty straightforward, it tells me what the problem was. So it's actually pretty simple to debug if you have issues. Right, let's check the jobs that succeeded; that should be my last job. I'm doing a lot of clicking around in the console here, but eventually you wouldn't necessarily want to use the console.
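From the command line, the same checks might look roughly like this (the queue name, job ID and log stream name are placeholders; AWS Batch sends container logs to the /aws/batch/job CloudWatch Logs group):

# List the most recent jobs in a queue by state
aws batch list-jobs --job-queue demo-queue --job-status SUCCEEDED

# Inspect a specific job: status, exit code, and so on
aws batch describe-jobs --jobs 77383a2d-0000-0000-0000-000000000000

# Read the job's output from CloudWatch Logs
aws logs get-log-events --log-group-name /aws/batch/job \
    --log-stream-name fetch_and_run/default/77383a2d-0000-0000-0000-000000000000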
This is the Fibonacci sequence of 100, if you're interested. My job also stores the result into DynamoDB, so let's have a quick look. I have a job number, 77383, and if I go to my table in DynamoDB, you can see my job is already there. In fact, this pattern of executing a job, putting the result in DynamoDB, and then having another job take that data and compute on it, is very similar to a MapReduce job, or any job where you have dependencies, so it's quite a common use case. I could run a dependent job and it would be very similar.

So what are the typical use cases? Just to finish quickly so we can move on to questions: anything with very large numbers of jobs. Financial services, obviously: fraud detection is one thing that is very important, and post-trade analysis, to understand how the market is doing. You would typically collect the entire market day into S3 and then, during the night, on the Spot market, very cheaply, you would run all the jobs to analyze the data and figure out whether there was fraud. Another one is life sciences: DNA sequencing, which is very important and which customers are doing already, and drug screening, where you put your molecules into S3 and your jobs take all of that and compute drug candidates. It's absolutely not my field, so I'll stop there. And then digital media: it's very common to do rendering and transcoding of data, all these very heavy computational jobs that you don't necessarily want to do in real time. You just want the job done eventually, as cheaply as possible, so you would typically use AWS Batch on the Spot market during the night. I've added a bunch of diagrams to the slides, which you can get access to; they are very high-level architectures describing some of those use cases. I'd rather take questions than just describe what's in the diagrams, so I'll end here and you can look at those in the slides. Thank you very much.

There were a couple of questions online. The first one, whether the jobs in this talk were a single application or a workflow, I already answered: those were single applications, there was no workflow, but you could define workflows if you wish. Are there questions here? I'm happy to take a bunch of questions. Yes?

[Question from the audience.] Right, so typically you would manage this at the submission level. You would try to define the common denominator for the job, create a Docker container for it, and then create the jobs with the different environment variables you need for each one. Splitting the work is something we don't do currently; we just give you the infrastructure and run the different jobs, so you would need to create some sort of job splitter yourself. You can't just give us a bucket and wait for us to do magic with it, because you might want to process the jobs in a myriad of different ways. What we do take care of is the scheduling: if you have a very complicated workflow with dependencies, that is something we handle. Another question? Yes.

Right, so the question was about the slide on the compute environment, which has a service role and an instance role: what is the difference? The instance role is really about what the instances, and effectively the containers, can access; those roles are for the jobs. The service role is what allows the service itself to run on your behalf.
It's a bit like Lambda functions: when you want to execute a Lambda function, you need to give it a role that allows the execution of Lambda. It's very similar here; you just give AWS Batch a role that allows it to operate, and a couple of dependencies are dealt with at that level. Yes, another question?

On which slide? Right, on that one; you mean my submit-job JSON that has the dependency. There is a job ID in there, and the question is whether I need to know that job ID in advance. You can't know it in advance; in fact, the job ID is given to you when you submit the job. So typically, in your workflow, you would submit the high-level jobs, take those IDs, and inject them as dependencies when you submit the dependent jobs. Here I literally submitted the job, took the ID and put it in the file, so I played the role of the script, but typically you wouldn't want to do that by hand. All right, last question, and then I'm told to cut.

Right, so the question is whether my compute environment is shut down right away after my last job is done. The answer is no, we'll keep it there for a while, because we don't want to shut everything down when you might just be taking a short break. The scheduler will try to make a best guess at the pattern of your job submissions, figure out whether more are coming, and reduce the number of instances over time. To give you an example, last night I submitted about 200 jobs for fun; it created, I think, 18 different instances, and this morning I had just three left.

On pricing: Batch itself doesn't cost anything; it's the infrastructure you use under the hood that you pay for. I can't give you a price, because it really depends on which instances you are using. If you want your jobs to be executed very fast, you would want faster instances, which cost a bit more per second; it really depends, and since you're now billed by the second, the trade-off changes. And for that last question, I don't know; I'd rather have a Solutions Architect answer it. I need to cut, we are out of time now. I'm sorry; I'll take questions afterwards if you want. All right, thank you very much. [Applause]
Info
Channel: Amazon Web Services
Views: 19,504
Rating: 4.8526316 out of 5
Keywords: AWS, Amazon Web Services, Cloud, cloud computing, AWS Cloud, Twitch, London Loft, Batch Computing, fetch & run job, AWS Batch, AWS Pop-up Loft
Id: H8bmHU_z8Ac
Length: 52min 43sec (3163 seconds)
Published: Fri Sep 29 2017