ElixirConf 2016 - Selling Food With Elixir by Chris Bell

Captions
Hi everyone, my name is Chris, and I'm here today to talk to you all about selling food with Elixir. That's a bit of a weird title, I know, but it will make sense in a bit once we get into the talk and you can see the meat of what we're talking about here.

So, I'm Chris. I work at a company called Made by Many; we're based in New York, and we have an office in London as well. But I'm not actually here to talk to you about Made by Many. I'm here to talk about a project we've done for a client of ours called Cava Grill. Cava Grill is a fast-casual restaurant chain based out of DC in Washington; sorry, Washington, DC, no one says it the other way around, it's a Britishism. "Fast-casual restaurant chain" is a lot of words, but you can think of it basically as a Chipotle for Mediterranean food. If anyone here lives in DC, I hope you've been there and I hope you're into it. Did I hear a woo? There's a woo; okay, now we're rolling.

So this is the application that we ended up building (sorry, I've just messed that up), which is basically a food ordering app. You can go ahead and place a food order, build up these bowls, add to the cart, and check out. The whole point of the app is that you're placing an order for lunchtime and doing a pickup at the store. You can see it's a really great-looking app; there's obviously a lot of JavaScript, but don't worry, I'm not going to talk to you about JavaScript today. I'm sure everyone's happy about that.

Basically, what the system we built for them really needed to do was process and send these orders to a point-of-sale system. And those point-of-sale systems, you have to remember, are based in physical stores at locations across the US.
The second thing we had to do was throttle the volume of orders going to a store. There's nothing kitchens hate more than having more orders than they can process in a given window, so a lot of our requirements for this system were about throttling those orders and making sure we're not delivering too many at one time. The third thing we had to do was be resilient against these stores being down. Because the point-of-sale system lives in a store, and those stores have really flaky internet access (we're talking DSL from ten years ago), something we had to bear in mind is that you're still going to want your lunch. You don't really care how the order got through, but we have to do a good job of making sure it gets to the store.

So today I'm going to cover three points. We'll run through some of the application design we used to design this system, we'll talk about some stories from production, and then I'll wrap up by touching on what we'd do differently if we did it again.

The first thing to talk about is the application design. Before I get into the Elixir parts (and I'm sure there are a lot of post-Rails or current Rails developers in the room), I want to walk through how we might have approached this in Rails previously. You're all probably familiar with a step-by-step approach like this: a bit of `rails new`, bundle-install Sidekiq because we want to send some things in the background, then we start generating our models and we just start coding. That's great, and we've all built great systems like that, I'm sure. But if we build a system like that, what we're really doing is putting all of our state in the database, and we're thinking in terms where we say all the state of this application exists in this database, and we're always using that to read and write.
In an application like that, all roads lead to an ActiveRecord object. And you know what, we could do this again in Phoenix, right? We could do exactly the same thing: `mix phoenix.new cava_grill`, install a background-job library (there isn't a nice mix command we can run here, because obviously you have to add it to your deps), and start coding again. That's okay, and I'm sure some of us would have success doing that. But what we'd really be doing is making the mistake of appropriating the shape of the previous technology as the content of the new technology. We'd be taking all of our old baggage with us into this new world, when actually we can approach this in a very different way, and I guess that's what I want to talk to you about today: some Elixir design principles we used to design this system. There's a wealth of information in the community, and really I'm tapping into a lot of what's already out there to come up with these principles.

The first one: Phoenix is not your application. I think Chris McCord (yeah, we've got another woo, here we go) did a fantastic job today of reinforcing this point, and the changes coming in Phoenix 1.3 really mean you're separating out this idea. Phoenix is just a web interface, right? Phoenix is not your system; your system is the thing it needs to do. In our case, it needs to send food orders to a store, track those orders, and things like that. If we start thinking about Phoenix being the entire application, we're constraining ourselves to the world of M, V, and C. That's an okay world to be in, but I think we can do a lot more in Elixir.
The second principle is embracing state outside of the database. As I alluded to earlier, in Rails we think all of this state goes through the database. But in Elixir we have mechanisms to store state in processes, to keep it in memory, and to become a bit more stateful in our services. State doesn't have to live only in a database; we have many other places to put it.

The third principle: if it's concurrent, extract it into an OTP application. There's an inverse of this as well, which is don't go overboard extracting things into OTP applications, at least not at the beginning. What I mean by the concurrent aspect is that the things you'd probably use workers for in your Rails app are really good candidates for being extracted into applications of their own.

The fourth principle: don't just "let it crash". If you've come into this world, you've probably heard the term "let it crash" bandied around, and it's fantastic for system design and all of the guarantees we get from supervision trees. But for us especially, we had to think about failure and what happens when these things don't work. We had to think about the expected failure cases and handle them, and that's really some of what I'm going to get into as well.

Given all of that, the system design we ended up with looks a bit like this. We have four main components in the application. We have a scheduler, whose job is to schedule and send orders to the point-of-sale system. We have our order tracker; you can probably guess what that one does: it tracks the state of the orders coming back from the store.
We have our store availability managers, which keep track of capacity so we can limit the number of orders going to a store. And then we have our web part as well. There's a bit more to the system; I'm simplifying here for the sake of brevity in these slides.

The first thing I want to dig into is the order scheduler. What we do with the order scheduler is just-in-time delivery of an order to a store: the entire job of this application is to take an order and try to send it to a store. The point-of-sale provider we're using doesn't actually have a means to queue up those orders, so we effectively built our own queuing mechanism. We batch up orders 15 minutes ahead of time and then send them to the store. So if your order is for 12 o'clock, we're going to try to send it 15 minutes before, so that the store has enough time to build it. You saw from the video earlier how complex it is to build one of these bowls; now imagine that in the real world, where they have to go along a conveyor belt. They need a bit of time to make these orders. And this system has to deal with stores being down, and with orders being delayed on their way to one store, without having an impact on any of the other stores. We want to do these things concurrently, but we want to isolate their failures.

The actual supervision tree structure looks a bit like this: the rounded boxes represent supervisors, and the circles represent the workers and GenServers in the system. It's actually not too complex; it's quite a simple supervision tree, really. But you'll notice we have two trees going on here, and the reason is that we're setting boundaries around these stores.
So we're saying failures can happen, but they're going to happen on a per-store basis; we're creating nice boundaries around these different stores. Say store one represents a store in DC and store two is somewhere in LA: if we stop sending orders to store one in DC, we can still be sending orders to that store in LA.

So how does this work? How do we actually schedule these orders to get sent to the point of sale? Well, we don't use any cron jobs or anything like that in this system; we use the building blocks that Erlang and Elixir give us. Elixir has this great mechanism for sending yourself a message after a given time, and that's the `Process.send_after` call you can see here. In that call we're saying: send myself a `:process` message after waiting a certain amount of time. And that time is in milliseconds, not seconds, just in case you mess that one up like I did at first. Then we process our orders and enqueue ourselves to do it again, so this is effectively a recursive function that keeps calling itself.

What that actually looks like: we have our store supervision tree, and we have a store manager, which is just a GenServer that sends itself a message like you just saw. We then request some data from the database and get orders back, and what we're doing with those orders is creating, effectively, one worker per order that we're sending out. You can see here this is us creating all these different workers, and each one sends to the store independently. The idea is that each one of these workers can fail without having an impact on any other worker in the system, and at the same time the whole store could fail, all of those orders could stop sending, and we still wouldn't have an impact on any orders going out via another store's supervision tree.
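As a rough sketch of that self-scheduling loop (the module, message, and state shape here are my own, not from the talk's codebase), a store manager that polls on a timer might look like this:

```elixir
defmodule StoreManager do
  use GenServer

  # The interval is in milliseconds, not seconds!
  def start_link(interval \\ 15 * 60 * 1000) do
    GenServer.start_link(__MODULE__, interval)
  end

  @impl true
  def init(interval) do
    schedule_work(interval)
    {:ok, %{interval: interval, ticks: 0}}
  end

  @impl true
  def handle_info(:process, state) do
    # In the real system: fetch pending orders and start one worker per order.
    schedule_work(state.interval) # enqueue ourselves again: the loop is recursive
    {:noreply, %{state | ticks: state.ticks + 1}}
  end

  defp schedule_work(interval) do
    Process.send_after(self(), :process, interval)
  end

  # Small helper so we can observe the loop from outside.
  def ticks(pid), do: GenServer.call(pid, :ticks)

  @impl true
  def handle_call(:ticks, _from, state), do: {:reply, state.ticks, state}
end
```

Because the timer is re-armed inside `handle_info/2`, a slow batch simply delays the next tick rather than letting ticks pile up.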
So, talking about failure: what happens when failure occurs in the system? Like I said, we're expecting failure and designing for it. We want to make sure you get your lunch, right? That's important; you're going to be pretty annoyed if you turn up at the store and there's no lunch waiting for you. We use a library called GenRetry, by Pete Gamache and the folks at Appcues, that handles a lot of this retry logic for us. GenRetry will do things like exponential back-off, it handles jitter with your retries as well (so you don't retry everything at the same moment in time), and you can set limits on how many times you want to retry sending something. You might think that all sounds really complicated, but in actual fact the code is literally like this: we have a function that may or may not blow up, it might succeed or it might throw, and we just pass GenRetry that function along with some options. You can see we're saying we want to delay by 3 seconds each time, and that will be exponential, so it starts with a delay of 3 seconds and then applies an exponential curve to the back-off. Then we apply a bit of jitter, 0.2 of jitter, to the retrying. I want to say: we've been running this in production and it's been fantastic, so if you have similar retry needs, definitely check out GenRetry. Very much recommended.

Now, that approach is great, but the problem is that it's actually quite hard to think about exponential back-off in that world. First of all we used a library called delayed_otp, which effectively has a supervisor that will allow a child to "die a slow death", which sounds really brutal, but basically it adds this exponential back-off.
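To make the back-off behaviour concrete, here is a minimal hand-rolled sketch of retry-with-exponential-back-off-and-jitter, the pattern GenRetry packages up for you. The function and option names here are mine, not GenRetry's API:

```elixir
defmodule Retry do
  @doc """
  Run fun; if it raises, sleep delay * 2^attempt (plus jitter) and try again,
  up to retries extra attempts. Returns {:ok, result} or {:error, last_error}.
  """
  def with_backoff(fun, opts \\ []) do
    retries = Keyword.get(opts, :retries, 5)
    delay   = Keyword.get(opts, :delay, 3_000)  # base delay in milliseconds
    jitter  = Keyword.get(opts, :jitter, 0.2)   # random extra, as a fraction
    attempt(fun, 0, retries, delay, jitter)
  end

  defp attempt(fun, n, retries, delay, jitter) do
    {:ok, fun.()}
  rescue
    error ->
      if n >= retries do
        {:error, error}
      else
        backoff = delay * :math.pow(2, n)
        # Jitter spreads retries out so they don't all fire at the same moment.
        sleep = round(backoff + backoff * jitter * :rand.uniform())
        Process.sleep(sleep)
        attempt(fun, n + 1, retries, delay, jitter)
      end
  end
end
```

GenRetry adds supervision and more options on top of this idea, which is why the library is worth reaching for instead of rolling your own.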
GenRetry wraps all of that up: under the hood it's actually a supervision tree with its own supervisor and its own workers. I'll answer more questions at the end, though, if you have them.

The next part of the system I really want to dig into is the order tracker. You can kind of guess, like I said earlier, what this order tracker does: every time we send an order to the store, we want to update you once that order has actually been processed, and send you push notifications and things like that. So again, the order tracker supervision tree is very similar to what we had before. You'll see this idea used time and time again in this system, where we're divvying things up by store. In this case we have a task supervisor (I'll get into why in a second), and then again some worker processes under that.

Each one of these store managers fetches a feed of order changes from a store. That feed of order changes basically says what's happened to each order and at what time it happened. It's basically one big feed: we have to give it a point in time to start from (the last time we fetched the feed), and it can have multiple pages. The store managers ingest that feed, read all the pages, and then process each one of those events in turn. To do that, we basically map over all of the events and use a `Task.Supervisor` to start a child for each one. The reason we use a task supervisor is that if we just used `Task.async` here and a task blew up, it would take down the calling process, and we wouldn't be able to process anything else. By putting a `Task.Supervisor` in our supervision tree, we make sure that if one of these things dies, it's a supervised child, and it's not going to take down the calling process.
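A small sketch of that isolation property (the event names are invented): a crashing child started via `Task.Supervisor.start_child/2` is logged and dies alone, while its siblings and the caller carry on.

```elixir
# In the real system the Task.Supervisor sits in the supervision tree;
# here we start one inline for demonstration.
{:ok, sup} = Task.Supervisor.start_link()
parent = self()

events = [:created, :boom, :confirmed]

Enum.each(events, fn event ->
  Task.Supervisor.start_child(sup, fn ->
    # Simulate one event handler blowing up; it is isolated from the caller.
    if event == :boom, do: raise("could not process event")
    send(parent, {:processed, event})
  end)
end)

Process.sleep(100)
# The calling process is still alive, and the other two events were processed.
```

Had these been plain `Task.async/1` tasks, the `:boom` crash would have propagated through the link and killed the calling store manager too.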
We keep track of the last successful time we were able to fetch that feed in the state of the process. It's just a GenServer, right? We basically have a timestamp that says this was the last time we could fetch it. But the problem with this is: what happens when this process goes down? We can't guarantee that it won't, and what happens on restart? We want to restart from a last known good state, so we want to persist that timestamp somewhere with more permanent storage, outside of the process's state. If you haven't seen this before, GenServers have a really great `terminate` callback you get access to. What `terminate` does is allow you to perform any cleanup before the process goes away, so we can use it here: we get access to the last state and persist something to the database to keep track of it. It's a great place to do any cleanup you might have before the process goes away, and you can have reasonable guarantees it's going to be called, because of OTP, effectively.

The last part of the system I want to dig into is probably the most complex part of what we've built, the most complex application in this tree: the store availability system. As I said earlier, we have a certain number of time slots you can pick from to place your order, and the store availability system keeps track of those time slots and how much capacity the store has. What we mean by capacity is basically: how many orders can I process in a given window of time? For us that window is 15 minutes, so we're asking how many orders we can process in a 15-minute window. Again, you can see there's very much a recurring theme here: our supervision tree is set up by store, each store has its own tree, and we have this funky thing called an ETS table manager, which I'll talk about in a minute; we're backed by an ETS table here.
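Back on the order tracker for a second, that `terminate/2` checkpoint idea can be sketched like this (the persistence functions are stand-ins, not the talk's real code; note that `terminate/2` is only invoked on a graceful shutdown, for example when the process traps exits, so treat it as a best-effort checkpoint rather than an absolute guarantee):

```elixir
defmodule FeedTracker do
  use GenServer

  def start_link(store_id) do
    GenServer.start_link(__MODULE__, store_id)
  end

  @impl true
  def init(store_id) do
    # Trap exits so terminate/2 runs when our supervisor shuts us down.
    Process.flag(:trap_exit, true)
    {:ok, %{store_id: store_id, last_fetched_at: load_checkpoint(store_id)}}
  end

  @impl true
  def terminate(_reason, state) do
    # Persist the last known good timestamp before we go away.
    save_checkpoint(state.store_id, state.last_fetched_at)
    :ok
  end

  # Stand-ins for the database reads/writes in the real system.
  defp load_checkpoint(_store_id), do: nil
  defp save_checkpoint(_store_id, _timestamp), do: :ok
end
```

On restart, `init/1` reads the checkpoint back, so the tracker resumes from its last known good fetch time instead of from scratch.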
So let me explain why we do that. There's really high demand for these time slots during the lunch rush; that's because everyone clearly goes crazy for Cava food, they really want it, and everyone's pretty frantic trying to get that time slot, you know, the elusive 1 p.m. to pick up your lunch. We could use the database to get all this information, but we'd actually have to do quite a lot of queries to aggregate it, and making that call is fairly expensive. At the end of the day, all we're really doing is keeping track of some integers: there's a time slot, this is how many orders have happened, and this is how many are left. So what we can do is use ETS to store that capacity. If you haven't seen ETS before, it's basically like a Redis for Erlang: it's built into Erlang itself, and it stands for Erlang Term Storage. It's a key-value store you can use with no extra dependencies in your application, so for us ETS was a really good candidate for storing this data.

The data looks like this: it's just a tuple, and in ETS you can have any Erlang term as the key for your data. For us that key is the time slot, so we say that at 11:45 we have a capacity of 15: we can process 15 orders at 11:45. (This is made-up data, by the way; don't try to hack our system or anything.) Then we have the pending and confirmed order counts: the number of orders that have been confirmed in the system, and the ones that are pending (I'll explain that pending state in a moment). So this is all very well and good; we have lots of these data structures representing capacity, but how do we retrieve them back out? In that case we use a function in ETS called `tab2list`.
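A self-contained sketch of that shape (the table name and `{slot, capacity, pending, confirmed}` tuple layout are illustrative), showing both the `:ets.tab2list/1` read and the atomic `:ets.update_counter/3` increment that comes up next:

```elixir
# An ETS table of {time_slot, capacity, pending_count, confirmed_count} tuples.
# :ordered_set keeps entries sorted by key, so a dump comes back in slot order.
table = :ets.new(:availability, [:ordered_set, :public])

:ets.insert(table, {"11:45", 15, 0, 0})
:ets.insert(table, {"12:00", 15, 2, 3})

# Dump the whole availability matrix for the store in one call.
matrix = :ets.tab2list(table)

# Atomically bump the confirmed count (element 4 of the tuple) for 12:00;
# update_counter reads and writes in one step, so counters can't race.
new_confirmed = :ets.update_counter(table, "12:00", {4, 1})
```

`update_counter/3` returns the new value of the counter, which is handy for replying to the client without a second lookup.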
What `tab2list` does is basically dump out the state of your table. We're doing this in a GenServer callback here, saying: just give me the entire availability matrix for the store. That's all of the data you just saw, and it's keyed by the date/time, so we can order by that as well. This becomes a really, really fast way to say "give me all this data", and then we can reply back to the client with it. Now, when we want to actually update the availability of a time slot (you saw we effectively have a counter), ETS has a great built-in for atomic counter updates, so we can make sure our reads and writes don't race. The syntax is a little weird, I know, it's Erlang syntax and it takes you a while to get used to, but basically what we're saying is: for a given date-time, update the fourth element in the tuple and increment that value by one. And then we just reply, in this case we just reply `:ok`, with that.

Okay, so I talked a bit about pending orders earlier, and about the high demand for these time slots during the lunch rush. But how do we not let that impact the user? Everyone's going for that one o'clock time slot, and it's basically first-come, first-served. We need to make sure that if you've selected a time slot but you've spent ages putting in your card details, you're not going to miss out just because someone else got to that checkout button faster than you did. Just as a refresher, this is what our checkout looks like: you have all these time slots along here, and then you put in your payment details and click checkout. The way we model that is, well, you've probably all used
Ticketmaster or something like that before, where you have a countdown, effectively. Once you click on one of those time slots, we basically start a countdown that says you've got seven minutes to confirm that order, and we hold that time slot for you. So when we report capacity, we're going to include those pending time slots as well.

The code to do that looks a little something like this. We basically say: hold that time slot for a given store, and we give it an order ID just as a reference back. What happens in there is we actually model all of this in a process, right? Processes are great for storing state, but we can also be actor-based here: we can say that one process represents one held time slot in the system. For your held time slot, what we do is create a process, and then we monitor that held-time-slot process. By monitoring it, what we get back is a reference to that held time slot, and from that point we can store that reference in the store manager's GenServer. In the held time slot itself, we basically start a timer, just using `Process.send_after`, that says: after X amount of time (let's say it's five minutes), we're just going to kill that process. Then we listen for the result of that termination back on the store manager. So we're modeling this whole idea of you placing an order and holding the time slot in just this one process. Eventually that process is going to terminate, after some five minutes or so, and then we can listen for the `:DOWN` event on that store manager: because we're monitoring the process, we actually get notifications about when it dies. That callback looks a bit like this; it's implemented on our store manager.
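A compressed sketch of those hold-and-expire mechanics (all names invented; in the talk's real system the hold reference lives in the store manager's state, and the `:DOWN` handler decrements the pending counter in ETS):

```elixir
defmodule HeldTimeSlot do
  # One process per held slot: it arms a timer and kills itself when the hold expires.
  def start(hold_ms) do
    spawn(fn ->
      Process.send_after(self(), :expire, hold_ms)

      receive do
        :expire -> exit(:hold_expired)
      end
    end)
  end
end

# In the store manager: create the hold, monitor it, and react when it expires.
hold_ms = 50 # five minutes in the real system
pid = Process.monitor |> then(fn _ -> nil end) # (placeholder removed below)
pid = HeldTimeSlot.start(hold_ms)
ref = Process.monitor(pid)

result =
  receive do
    {:DOWN, ^ref, :process, ^pid, :hold_expired} ->
      # Here the real system releases the hold and decrements the pending count.
      :hold_released
  after
    1_000 -> :timeout
  end
```

Because monitors (unlike links) never crash the watcher, the store manager just receives a `:DOWN` message it can pattern-match on, whatever reason the held slot died with.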
Every time we're monitoring a process, we can receive these `:DOWN` events, and then we can basically say: hey, that time slot you held, we can now remove the hold on it and decrement that pending-order count as well.

Now, you might be thinking: hey, this is all well and good, but you're storing all of this state in memory, right? It goes away when the process dies, so if our server restarts, we're going to lose that whole availability schedule we had in memory at the given time. What we do to get around that is read the state from the database and recreate all of that state (which orders were held, and the capacity of the store) on application start. This is literally ripped straight from the codebase: when we start one of these systems, we read in all of this state and then start the store manager with that availability matrix already pre-compiled. It's going to do a bunch of database lookups and stuff, but really what we're doing here is using the database as a bootstrap mechanism to get that data into this process.

One other pro tip, which you'll already know if you've worked with ETS before: like I said earlier, if the process that owns an ETS table goes down, you're going to kill that ETS table as well, and you'll lose everything in it. What you can do here is use this library called Immortal, which has an ETS table manager. What this does is, when you start up your application, the ETS table manager creates the ETS table for you and then hands it off to your process, and the manager listens for `:DOWN` events on that other process that was interfacing with the table. When that process goes down, the table manager takes the ETS table back,
and then when the new process comes back online (because it's in a supervision tree, everything magically works again), it gives the table back to it. Honestly, that sounded really complicated, but it's really, really simple to get up and running, so definitely check it out if you're going to be using ETS.

Okay, so that walks through a lot of our application design. What I want to share with you next are some stories from production, because application design is all well and good, but when you run these things in the real world, things happen, right?

The first thing I really wanted to talk about was turning down the concurrency. This is an actual email I got from our API provider: hey, you're making too many requests. In Elixir you think, oh, this is great, this is awesome, but there are always going to be limits to the concurrent requests you can make, and usually it's going to be someone else who's the bottleneck, or that bottleneck might be something in your system. Basically, to resolve this, we just had to stop making so many requests.

The next one I wanted to talk about is sending orders twice. I talked about how kitchens get really annoyed when they've got too many orders; well, they get even more annoyed when you send the same order multiple times. When we first launched the app, we were seeing all these duplicate orders come in, and I was like: we can't do that, that can't happen, we built this amazing system, there's no way we could be sending these duplicates. I probably had a bit too much hubris about it, but what we ended up finding out was that there were actually multiple versions of our order scheduler running per node. And this is actually a really, really trivial thing to fix: you just name your processes.
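Naming the GenServer makes a second start attempt fail instead of silently running a duplicate scheduler. A minimal sketch (module name invented):

```elixir
defmodule OrderScheduler do
  use GenServer

  def start_link(store_id) do
    # Registering under a name guarantees uniqueness on this node;
    # for cluster-wide uniqueness you'd register via {:global, ...} instead.
    GenServer.start_link(__MODULE__, store_id, name: __MODULE__)
  end

  @impl true
  def init(store_id), do: {:ok, store_id}
end

{:ok, pid} = OrderScheduler.start_link("dc-1")

# A second start does not create a duplicate scheduler:
{:error, {:already_started, ^pid}} = OrderScheduler.start_link("dc-1")
```

Whatever was accidentally starting the second scheduler now gets `{:error, {:already_started, pid}}` back rather than a second copy sending duplicate orders.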
I thought one of those store managers was already unique per node, but something else was starting another one, which led to all these duplicate orders being sent, and they'd get sent at the exact same moment in time as well, because concurrency is awesome. All you have to do here is name your process: if you name your process, it's guaranteed to be unique on that node, and if you want global uniqueness you can use something like the `:global` module, which people have been talking about earlier as well. That's literally the fix we made: we just gave it a name and passed that name to it on start.

The second thing here is that Erlang has all these really, really fantastic introspection tools, but unfortunately we deploy on Heroku, and Heroku doesn't give you access to those introspection tools at runtime, because every time you run `heroku run` with a bash session or something, you're actually spinning up a new dyno. So there's no way to introspect the state of the system at runtime on Heroku, which made debugging this really, really hard. I basically had to do it locally, but we got there in the end.

The third one I want to talk about is a really annoying issue with our point-of-sale API provider. Basically, we were sending orders to the store, and they were saying: nope, that order didn't go through. But what we were actually seeing was that the orders still arrived at the kitchen. This was really confusing. We had this wonderful system of retries that I showed you earlier, where if their API says an order didn't go through, GenRetry will retry and send that order again. But what ended up happening was that we had to completely turn that system off, because our API provider wasn't actually atomic in the way they processed orders, so we couldn't guarantee whether an order was or wasn't there. It just wasn't a great API, basically.
So I think the lesson learned here is that your failure model is only as good as the API you're calling or the system you're integrating with. We can design these great systems, but we have to think about the limitations of the third parties and the other things we're calling in that system too.

The fourth and final lesson is this `error request timeout`. We were seeing this when making requests to our loyalty provider, which we used to take payment, so this is quite a big deal: you don't want to mess up the payment part. Don't do that. We were seeing it intermittently, and it was really difficult to debug, because I'd see it in the logs, try to recreate it locally, and I couldn't recreate it there. So I was assuming it was the third-party API provider having the issues with these request timeouts. But in actual fact, we use an HTTP library called HTTPotion (I'm sure a bunch of you in here are also using a library like that, or HTTPoison), and HTTPotion specifically, under the hood, uses an Erlang library called ibrowse. What ibrowse does is pool connections per host, where the hostname is just the string address of the API you're calling, and by default it sets a max of 10 sessions with a queue size of 10 per connection. So you effectively have those connections, each with a queue of 10 behind it, and if you have a lot of slow-running requests, what's going to end up happening is you're going to see these request timeouts. The fix is kind of trivial: you just say, hey, for this host, set a much bigger `max_sessions`. I want to say: remember this if you're using this library and you're designing APIs around it. But really, the better solve here is thinking about pools, pools of workers that you can use.
Hackney, which HTTPoison relies on under the hood (it's really confusing that there are two very similarly named libraries, but it's HTTPoison, not HTTPotion, that uses Hackney), has a great way to say: for this host, create a pool of HTTP connections. What we could do with that is say we're only going to use that pool within the very defined boundaries we have in our system. I think the lesson learned here is really understanding the process design and the bottlenecks of the libraries you're using, not just the process design of your own system. We're all probably making use of a lot of those great Elixir and Erlang libraries under the hood, so take a minute to look at what the supervision tree looks like for each of them, and make sure you're not introducing some huge bottleneck into your system that you didn't know about ahead of time.

So lastly: doing it again. I think there are three things here. The first is "feed work, don't read work". You saw earlier that we have lots of things connecting to the database and pulling from it. That's fine, but what we've done there is introduce a dependency on the database in that application. What that meant for us was that we literally had to extract the database into its own OTP application and use it as a dependency of the other applications in our umbrella app. Another way we could model this would be to use something like GenStage to feed work to those systems, which then process it.

The second is: start with an umbrella app. We didn't actually do this; we literally ran mix phoenix.new and threw everything in lib, and that was totally fine.
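The "feed work, don't read work" idea could be sketched with GenStage roughly like this. This assumes a `{:gen_stage, "~> 1.0"}` dependency, and every module and function name below is hypothetical, not from the talk:

```elixir
defmodule OrderProducer do
  # Work is *pushed* in as orders arrive, instead of workers polling the DB.
  use GenStage

  def start_link(_opts), do: GenStage.start_link(__MODULE__, :ok, name: __MODULE__)

  # Called by whatever accepts orders (a Phoenix controller, say).
  def notify(order), do: GenStage.cast(__MODULE__, {:order, order})

  @impl true
  def init(:ok), do: {:producer, {:queue.new(), 0}}

  @impl true
  def handle_cast({:order, order}, {queue, pending_demand}) do
    dispatch(:queue.in(order, queue), pending_demand, [])
  end

  @impl true
  def handle_demand(demand, {queue, pending_demand}) do
    dispatch(queue, pending_demand + demand, [])
  end

  # Emit queued orders while consumers still have demand.
  defp dispatch(queue, 0, events), do: {:noreply, Enum.reverse(events), {queue, 0}}

  defp dispatch(queue, demand, events) do
    case :queue.out(queue) do
      {{:value, order}, rest} -> dispatch(rest, demand - 1, [order | events])
      {:empty, queue} -> {:noreply, Enum.reverse(events), {queue, demand}}
    end
  end
end

defmodule OrderSender do
  use GenStage

  def start_link(_opts), do: GenStage.start_link(__MODULE__, :ok)

  @impl true
  def init(:ok), do: {:consumer, :ok, subscribe_to: [{OrderProducer, max_demand: 5}]}

  @impl true
  def handle_events(orders, _from, state) do
    # Stand-in for pushing each order to the point-of-sale API.
    Enum.each(orders, &IO.inspect/1)
    {:noreply, [], state}
  end
end
```

The key property is that demand flows upstream: the consumer asks for at most five orders at a time, so the producer never floods it, and nothing has to poll the database.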
But I think next time, especially seeing what Phoenix 1.3 has coming, with the ability to structure things as an umbrella app from day one, that's awesome, and we could definitely have made use of it when we built this.

The third one might be controversial: maybe don't use Heroku. Heroku is fantastic and lets you get up and running really fast, but you don't have the ability to do OTP releases, which means you miss out on those great introspection tools, and you can't do anything multi-node. Heroku doesn't allow access to EPMD, the Erlang port mapper daemon, so you basically can't do any of the really cool node connection stuff; you have to use something like Redis as a middleman instead. Obviously there's a lot of extra complexity if you're not on Heroku, but approaching this again we'd think about using something like EC2, and we're actually looking at Docker quite a lot as well.

In conclusion: I would definitely use Elixir again. Elixir was really well suited to this job of dealing with failures and lots of concurrent work, and honestly, the programming model was really simple. We ramped up members of our team to write Elixir as well (some of them are here today), and it's been absolutely rock solid in production; we've had very, very few issues with it so far. The performance is basically dreamy. Everyone talks about it, but it's so awesome not to have to put caching in front of absolutely everything to get below a 200-millisecond response time. Honestly, working with it every day has been fantastic. I've spent the last six months of my life building this thing, writing Elixir (and some JavaScript, but let's not talk about that), and it's been awesome. So yeah, thank you so much
for coming, and if you have questions, I'd love to take them. Oh, and also, sorry, just an awkward plug: Cava Grill are actually hiring Elixir engineers in DC, so if anyone's interested in doing Elixir full time, get in touch with me, come grab me, or check out cavagrill.com. Thank you. Does anyone have any questions?

Q: When you're using ETS tables, they exist per process, and I'm assuming you're running multi-node, multi-dyno, for capacity and persistence and all that. How do you handle keeping the ETS tables in sync between nodes?

A: It's a great question, and I can reveal the dirty secret: this is on one node right now. I was literally waiting for that to come up. We basically don't need multi-node yet; it's barely making any use of a 2X dyno on Heroku, which has about 1 GB of RAM, and it uses maybe a hundred megs of RAM and barely any CPU. When we do need it, what I'd think about doing is probably using something like Phoenix Presence and sharing that state between the nodes, rather than using ETS like that.

Q: One of the quotes that really jumped out at me is "your failure model is only as good as the API you're calling". What would you have done differently if you'd known in advance how much your API sucked?

A: That's a really good question. Probably consider a different provider, is the honest answer. But also maybe some even more robust failure handling, and probably some more manual processes to actually deal with it.

Q: Hi. You mentioned that on a cold start you could pull all the data for the ETS table out of the database, but you didn't show how you put it into the database. I'm curious whether you had some means of just getting it in there, or did you have more of a relational schema?
A: It's more of a relational schema. Those orders effectively have a time slot, so we're always persisting the order to the database, and then we can use that to recreate the state again. It's the very classic order and line-items model that we've probably all built a thousand times. How long do we have? I don't know, until someone kicks us out, I guess.

Q: Any reason why you didn't use Mnesia?

A: That's a good question. I actually only looked at it very briefly, I'm sorry. Mnesia is a distributed Erlang database, effectively. I've heard horror stories about it as well, but that's complete grapevine kind of stuff, so it's definitely one to consider.

Q: I guess this will be the last one, because I think we're over time. You mentioned that you have to limit how hard you hit the API. Is that limit only set in the ibrowse line of code that you showed us?

A: We could do it a couple of different ways. We could have a pool of workers, so we know we're only ever making a certain number of concurrent requests at once, or we could just set that limit in ibrowse. Honestly, I think the preferred approach is probably a pool, so you know you've only ever got, say, 10 things hitting that API at one time. But we also use Process.send_after to schedule API hits, and we just increase the send_after interval to make sure we weren't hitting it too often.

Q: There's a host parameter, so does that mean the limit only applies to that host?

A: Exactly. Under the hood, ibrowse uses an ETS table that keeps the host, the port, and the max number of connections you can have to that host, so you always want to set it per host, effectively.

Cool. Thank you, everyone!
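The pool approach preferred in that last answer can be set up with Hackney's named pools, which HTTPoison exposes through its `hackney:` option. The pool name and limits below are illustrative, not values from the talk:

```elixir
# Start a dedicated pool under your own supervision tree. With
# max_connections: 10, at most 10 requests can hit this API at once;
# everything else queues inside the pool.
children = [
  :hackney_pool.child_spec(:pos_api_pool, timeout: 15_000, max_connections: 10)
]

{:ok, _sup} = Supervisor.start_link(children, strategy: :one_for_one)

# Then route every call to the third-party host through that pool:
HTTPoison.get("https://api.example-pos.com/orders", [],
  hackney: [pool: :pos_api_pool]
)
```

Because the pool is named, the cap only applies to calls that opt into it, which matches the talk's point about confining the pool to well-defined boundaries in the system.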
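And the Process.send_after scheduling mentioned in the same answer can be sketched as a GenServer that paces its own API hits. The module name, the 1-second interval, and the injected hit function are all illustrative:

```elixir
defmodule ApiPacer do
  @moduledoc """
  Sketch: pacing outbound API hits with Process.send_after/3.
  Each tick schedules the next one, so hits stay spaced out no matter
  how slowly the remote API responds.
  """
  use GenServer

  @interval_ms 1_000

  def start_link(hit_fun), do: GenServer.start_link(__MODULE__, hit_fun)

  @impl true
  def init(hit_fun) do
    # Schedule the first tick.
    Process.send_after(self(), :tick, @interval_ms)
    {:ok, %{hit_fun: hit_fun}}
  end

  @impl true
  def handle_info(:tick, state) do
    state.hit_fun.()
    # To back off when the API struggles, pass a larger interval here.
    Process.send_after(self(), :tick, @interval_ms)
    {:noreply, state}
  end
end
```

Keeping the interval in state instead of a module attribute would let the server stretch the delay when timeouts start appearing, which is effectively what the talk describes.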
Info
Channel: Confreaks
Views: 9,618
Rating: 4.966527 out of 5
Id: fkDhU-2NWJ8
Length: 45min 13sec (2713 seconds)
Published: Mon Sep 26 2016