Devon Estes - Comparing common concurrency patterns in Elixir and Erlang - Lambda Days 2020

Afternoon, everybody. Before we dig in, I just want to get a quick read of the room. Who here doesn't have any experience with Elixir or Erlang? Okay. Who here doesn't have any experience with some sort of actor-based language, where actors are the concurrency or parallelism paradigm? Okay. So, just to get a little terminology out of the way: I'm going to be using Erlang and Elixir terminology, BEAM terminology, when talking about actors. In Elixir and Erlang these actors are called processes, not to be confused with OS processes; they're internal to the BEAM, the VM these languages run on, and these processes are how the BEAM implements the actor pattern. These actors communicate by sending messages to one another. Each process, each actor, has a mailbox, and messages are stored in that mailbox; the mailbox is ordered, and a process can selectively receive messages from it. That is hopefully enough to get you all up to speed. So when I say a "process", I'm talking about an actor in Elixir and Erlang terms, and when I say we're "spawning" a process, that's again Elixir and Erlang talk for creating a new actor. I'm also going to be talking about parent and child processes. This isn't a formal definition, well, it kind of is, but when I talk about parent and child processes today, the parent is a process that spawns other, child, processes, because a big part of what we're going to talk about today is the communication patterns between these actors and how those relate to certain patterns in parallel and concurrent computation. If you're anything like me, you may have come to functional programming, whatever language you're using, because you were interested in concurrent or parallel computation, and that's awesome. But the thing is, concurrent and parallel computation, while easier in some ways in certain languages
than in others, still needs to be implemented, and to implement these things it's helpful to talk about certain patterns, because there are patterns to be found in implementations of certain types of parallel or concurrent computation. Luckily, people have been thinking about this a lot. Much of what I'm going to talk about today is taken from a paper published back in 1999, quite a long time ago; it's a twenty-year-old paper. The thing I loved about this paper was that the authors were looking to give names to some of these patterns of parallel and concurrent computation. One of the things I learned early on, I first had experience in object-oriented programming, and in object orientation there are design patterns that are common and shared across quite a bit of that world. People can talk about these patterns, and you know what another programmer means when they use them. It's really helpful to have these shared, higher-level abstractions, because it's easier for us to communicate when we're talking about patterns or problems in our code, and communication is really one of the hard parts of doing our job. By having patterns we can talk about, by having this shared vocabulary, or what they call a pattern language, one of the really hard parts of computing, communication within a team or across teams, becomes a little bit easier. The authors later turned that paper into a book, published in 2004 by Addison-Wesley. It's a great book, and it has a great acronym, PPP, or 3P. I was sort of hoping the title of this talk would have a similarly great acronym, but then I realized that acronym was kind of already taken, at least in my mind. Maybe Millennials don't know what that is, but I always think of it, so it probably won't be referred to that way. In that paper, one of the things that really grabbed my
attention was this chart. The chart is not very high-res, because, again, it's a twenty-year-old paper, so I redrew it to make it hopefully easier to see. What the chart does is this: you start at the top, answer a couple of questions about the particular problem you have at hand, and it guides you through those questions to a pattern that might be a good fit for the problem you're trying to solve. One of the things I saw pretty quickly in this chart is that some of the patterns they recommend really didn't apply to me as an Elixir or Erlang programmer, and as I continued reading the paper I realized these people were not solving the same kinds of problems I'm solving. Twenty years ago, the people doing this research were thinking about these problems in a completely different context: consumer multi-core processors hadn't even been released yet. The first consumer multi-core processors weren't released until around 2005 or 2006, and this paper predates that. What they had in mind when writing this paper were programs running on supercomputers, doing massively parallel computations: modeling equations for theoretical astrophysics to simulate the beginnings of the universe, or running weather models to simulate the next forty years of possible weather conditions. And clearly they're not running Erlang on those machines; they use highly specialized, very low-level languages, because they're trying to eke as much computing power as possible out of those supercomputers. But even though I was a little dejected, thinking this wasn't going to be applicable to me at all, that I wasn't going to get anything out of it, it turned out there are still some good things in there, and that's why I think that even though some
parts of that paper aren't directly applicable to me, someone who writes mainly consumer applications for people to use on the internet, there's still stuff I can get out of it that will make it easier to communicate with my co-workers and other people in the community. So even though there are some things we're not going to cover, we're going to cover several of them. That pattern down there, shared memory: Erlang and Elixir don't really have it. Kind of; there's a way to do it, but we're not going to cover it today, because I don't think it's particularly applicable to the kind of work I'm interested in, and that I'm assuming many people in this room are working on as well. This "group by data" branch, again, is for when you have data structures of almost unfathomable size that you need to process in some way, doing that sort of highly scientific computing where you need to parallelize computation over the data structure. That's also not a thing people typically do when building web applications, so we're not going to cover it either. This chunk here does show up every now and then: it's when you have a task that can be viewed as recursively parallel. A great example is the parallel compiler in Elixir. When you compile an application, it will attempt to compile each file in parallel, but because compilation can be viewed as a tree, you need to compile from the bottom up. You have a recursive data structure, a tree, that can be recursively compiled: compiling this file requires compiling those, and that can all be done in parallel. But even though those two patterns do exist in Elixir and Erlang, they're not as common, so in the interest of brevity and fitting the time slot we're going to skip them today. But these four
remaining ones are awesome, and you will find them a lot. I think you'll find an example of at least three of these, maybe all four, in just about any consumer application, and that's why we're going to cover them today. Being able to identify these four patterns and talk about them is going to make communication within a team and within a community a whole lot better and easier. We're going to look at examples of open source code, so that even though the examples I'm showing today are simplified to get at the important bits, all of the actual code is available out there on the internet if you want to take a deeper dive into the really interesting parts of the implementations. The first thing we're going to look at is Benchee, a benchmarking library for Elixir and Erlang, which uses the reduction pattern. Then we're going to look at ExUnit, the test runner and test framework for Elixir, which uses asynchronous decomposition. And then we're going to look at GenStage, a pipeline-processing library for Elixir, which uses pipeline processing. Now, you may have noticed that I said there were four patterns and there are only three examples up there. That's because the fourth one is frankly so simple that it doesn't need much code; you can find it just about anywhere. It also has the best name of them all: embarrassingly parallel. I just love that name. The canonical example of an embarrassingly parallel problem that I like to give is sending emails. You have twenty emails, or the same email that needs to be sent to twenty different people, and you just send the emails. It's a very simple problem; you can send all of them in parallel. One implementation might look like this, where we're mapping over some list, some
collection of emails, and sending each one with that send_email function. Doing it this way does it in series: we send one email, then the next, then the next, and there's no benefit from parallelization. But in Elixir and Erlang, to make that parallel, that's it, that's the only difference: we spawn a process to handle the sending of each email. The parent process continues on once that spawn call has finished, and the child processes we've spawned go and send the emails, do their thing, and then go away. That's it; that's the whole pattern: you spawn some child processes to do something, and they go away. Now, if you're anything like me, pictures sometimes help, and I like drawing pictures, especially to explain communication patterns, so we're going to look at some pictures today as well. Hopefully that will help everybody see these patterns, because that's what we're trying to do: identify and name each type of pattern. Here we have our parent process, which has three emails it needs to send, and to do so it spawns three children. Each of those children gets one email, sends it, and then goes away; it exits, and that's it, we're done. It's a really simple pattern, but it's super common, and there are a lot of problems that can be seen as embarrassingly parallel. Luckily, in many actor-based languages, or languages with actor-based parallelism or concurrency frameworks, you can do this really, really simply. I also want to give a little overview of each of these patterns, because each is associated with a unique type of problem, so that when you see a problem that meets the criteria you can say: oh, that looks like a problem we can solve with the embarrassingly parallel pattern, or, this looks like an embarrassingly parallel problem.
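The two versions described above can be sketched like this. This is a minimal sketch, not anyone's production code; `send_email/1` is a stand-in for real email delivery:

```elixir
defmodule Mailer do
  # Stand-in for real email delivery; here it just prints.
  def send_email(address), do: IO.puts("sending to #{address}")

  # Serial version: one email after another.
  def send_all(emails), do: Enum.each(emails, &send_email/1)

  # Embarrassingly parallel version: spawn one child per email.
  # The children never message the parent or each other; they
  # just do their work and exit.
  def send_all_parallel(emails) do
    Enum.each(emails, fn email ->
      spawn(fn -> send_email(email) end)
    end)
  end
end

Mailer.send_all_parallel(["a@example.com", "b@example.com", "c@example.com"])
```

The only difference between the two is the `spawn`, which is the point of the pattern.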
For something to be a good fit for this pattern, we want an eagerly evaluated, linear input. We also need no communication from the child processes back to the parent: the parent spawns the child, and the child doesn't send any messages back. There's also no communication between the siblings; as we saw in our example, the child processes weren't sending messages to each other, they just did their own thing, totally independent. And by totally independent I also mean there are no shared dependencies between any of those child processes in doing their work. If there are shared dependencies, you're pretty quickly going to run into a race condition of some sort, and that's going to be a bummer. So if your child processes share dependencies in some fashion, this is not going to be a good fit. Okay, the next pattern is very similar, but with one major difference. You may know this pattern by a couple of other names, actually, because it's a very common one; I'd say it's maybe even the most common pattern. It's frequently called async/await or map-reduce. What we're doing is work in parallel, but then there's an additional step: a reduction on top of that. What we're going to look at here is code that is basically a parallel map. This is real code from Benchee, because Benchee can run benchmarks in parallel, and it has a little parallel-map function. We again map over our inputs, but we use a function in the Elixir standard library called Task.async, which executes a task asynchronously and returns a reference (we'll learn a little more about that in a second). Then it takes that list of references and waits for each of them. The await is synchronous, blocking, but the map step is asynchronous.
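The shape of that parallel map, as a simplified sketch using the standard library's `Task.async/1` and `Task.await/1` (this is not Benchee's actual code):

```elixir
defmodule PMap do
  # Map in parallel, collect in series: Task.async starts each
  # computation in its own process and returns a handle; Task.await
  # then blocks on each result, in the original order.
  def pmap(enumerable, fun) do
    enumerable
    |> Enum.map(&Task.async(fn -> fun.(&1) end))
    |> Enum.map(&Task.await/1)
  end
end

PMap.pmap([1, 2, 3, 4], fn n -> n * n end)
# => [1, 4, 9, 16]
```

Note that the results come back in input order regardless of which child finishes first, because the awaits happen in series.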
The reduce, though, is in series. Because I'm pretty sure most of you don't know what's going on in that function, we're going to dig a little deeper into the Elixir standard library and see a very simplified version of what's happening there. In the async function, the first thing we do is use a built-in function in the VM called make_ref. A reference is a native data type in Erlang and Elixir, and you can think of it as a unique value; it's pretty much guaranteed to be unique, so each of these tasks gets its own unique value. Then we capture a reference to the parent process, so that the child processes know where to send their message when they're done. Then we spawn a new child process and say: hey, child process, please send me (that's the send command, "send me a message") a tuple with that reference, that unique value I gave you, and the output of executing this function I'm giving you. And then we return that reference. So that function starts a child process that knows what to do when it's done, send its return value back to the parent, and we return the reference because we're going to use it in the await function. There we start a receive loop. This is where the parent process, or any process, sits and waits for a message in its mailbox that matches a pattern; like I said, it can selectively receive messages from its mailbox. When it gets a message that includes the unique reference we're waiting for, it receives that message, that's it, and returns the value that came with it, the value that was calculated asynchronously, in parallel, by our child process. And because a receive loop is blocking, the process will sit and wait and won't do anything else. You can set a timeout if you like.
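Putting those pieces together, a simplified version of what async and await do under the hood, as just described: a unique `make_ref`, a child that sends `{ref, result}` back to the captured parent pid, and a selective `receive` with an optional timeout. This is a sketch of the idea, not the real `Task` module:

```elixir
defmodule MiniTask do
  def async(fun) do
    ref = make_ref()    # a pretty-much-guaranteed-unique value
    parent = self()     # captured so the child knows where to reply
    spawn(fn -> send(parent, {ref, fun.()}) end)
    ref
  end

  def await(ref, timeout \\ 5_000) do
    receive do
      {^ref, value} -> value  # selectively receive only our tuple
    after
      timeout -> exit(:timeout)  # give up if the child takes too long
    end
  end
end

ref = MiniTask.async(fn -> 21 * 2 end)
MiniTask.await(ref)
# => 42
```

The pin (`^ref`) is what makes the receive selective: any other message sits in the mailbox untouched.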
If you're saying, you know, after five seconds that thing should have finished, and if it didn't, something might have gone wrong, so let's just move on and not wait any longer, you can set a timeout there. So, as we see, the map is done in parallel, but the reduce, the reduction part, is done in series, and that's basically what makes this pattern. I just want to give two quick examples to show this. Here's one: we have a function whose job is to wait some number of seconds, to simulate expensive work of varying duration, and when it's done it sends the current second, just what second of the clock it is, back to the parent process. We start with the current time at 48 seconds, and then we parallel-map over some numbers: the first child waits one second, the second waits two, the third waits three, and the fourth waits one, and each sends the time it finished back to the parent. The result is, as we expect, 49, 50, 51, 49. It's not 49, 49, 50, 51, because the reduction is done in series; that's the important bit. And of course we're done at 51 seconds; that's when the parallel map finishes, after it has received the messages from all the child processes it spawned. In pictures, if that helps (it always helps me): we have our parent process, and it spawns our four child processes to do their work, their calculation. When they're done, they send their messages back to the parent and then go away; they exit and get cleaned up by the garbage collector. And that's it. It's a pretty simple pattern, but a really, really powerful one, because it allows you to do work in parallel and then get some reduced value of all that parallel work back in the parent process, whereas with embarrassingly parallel you didn't get anything
back in the parent process; you just did the work and didn't care about the result. Here we care about the result, and that's a big difference. Now, this next thing I'm going to show is technically unrelated, but it's so cool I always love showing it. This is our basic async and await, and with it I can chew up every CPU cycle on whatever computer I'm using to execute whatever parallel computation I want; as long as I have it set up that way, I can make sure the BEAM, the Erlang VM, is using 100% of all CPU cores. But what if I wanted to use 100% of all CPU cores on ten machines? What if I wanted to distribute this problem? Well, that's it; that's the only difference. I love showing this because it's such a clear example of the saying that when you have parallelism with actors, you get distribution for free. I mean, clearly there's a lot more to it: distributed programming is hard, there are plenty of edge cases you'd want to handle, and actually setting up your cluster is a thing. But when it comes to actually distributing the work, that's pretty much the only difference, and that's so cool. Back to the important bits: what sort of problems fit the reduction pattern well? With reduction we still have an eagerly evaluated, linear input, but the big difference is that now the children can send messages back to the parent. We're still not sending messages between the siblings, between those child processes, and there are still no shared dependencies between them; if you have some sort of shared dependency, things are not going to go well. But what about when we have an input that isn't eagerly evaluated? What if we have some lazily evaluated input? How do we handle that?
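To make that distributed aside concrete: a sketch of the same async/await idea where the child is spawned on a (possibly remote) node via `Node.spawn/2`. This is an illustration under assumptions, not the code from the talk: it presumes the module is loaded on every connected node, and with no cluster at all `Node.list/0` is empty, so it falls back to the local node and still runs:

```elixir
defmodule DistMap do
  # Like a plain async, but the child is spawned on the given node.
  def async_on(node, fun) do
    ref = make_ref()
    parent = self()
    Node.spawn(node, fn -> send(parent, {ref, fun.()}) end)
    ref
  end

  # Round-robin the work over the local node plus any connected nodes.
  def pmap(enumerable, fun) do
    nodes = [Node.self() | Node.list()]

    enumerable
    |> Enum.zip(Stream.cycle(nodes))
    |> Enum.map(fn {x, node} -> async_on(node, fn -> fun.(x) end) end)
    |> Enum.map(fn ref ->
      receive do
        {^ref, value} -> value
      end
    end)
  end
end

DistMap.pmap([1, 2, 3], &(&1 + 1))
# => [2, 3, 4]
```

The only structural difference from the single-machine version is `Node.spawn/2` in place of `spawn/1`, which is the "distribution for free" point.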
Now, on the BEAM, with Erlang and Elixir, and I'm assuming in most other functional languages as well, you're probably not manually managing memory. This is one advantage where we have it way easier than the academics using those very low-level languages: efficiently processing an input of unknown size while manually managing memory is a genuinely difficult problem. It's a lot easier for us, luckily, but I think it's still important to go over this pattern and give it its own name, because it is a different thing. What they've called it is asynchronous decomposition: we have something we're decomposing, but we're doing it asynchronously; we don't have the full thing we're processing in parallel before we start processing it. What we're going to look at is the test runner in ExUnit. With ExUnit, in Elixir, you can run some of your test modules in parallel if you like, to speed up the execution of your tests, which is great. You don't have to do it for all modules; you can tag certain modules as able to run asynchronously and leave other modules to run synchronously. The basics of the runner look like this: we have a run function which, as you'd expect, runs our tests; then we have a loop, an async loop and a sync loop; then we have a function to actually run a module, and a function to run each test in the module. But let's dig a little deeper, and we'll start seeing something pretty interesting. The first thing we see is an event manager; we have that event-manager start function, and what it does is start an event manager. The event manager in this case is a process, another actor, and its job is basically to receive a whole bunch of events. Now, technically we don't need this event manager for this pattern; it's just the way ExUnit does it, for a lot of reasons. We can think of this event manager as basically a parent process. It's there as an
optimization, to separate the event manager from the actual parent that's running our tests, but it's not needed. When we look at the pictures, we'll see that it sits at the same level as the parent process, and when we talk about the communication patterns you can think of it as a parent. So we start that event manager; then we tell it we've started running our test suite; then we enter a loop to run things. When we're all done, we execute another function that sends another message to the event manager saying the test suite has finished running, because it's keeping track of timings and such. Then we get all the results, because it's also been keeping track of the results of all the tests as they've been running, and we return those results after telling the event manager: thank you very much, you've done a great job, you can shut down now. But the real fun part is this; this is where we have our asynchronous decomposition. In Elixir, the test modules themselves are not compiled at compile time. They're .exs files, which means they're functionally interpreted; they're scripts, compiled after the VM has started running. So they're compiled asynchronously, and to make things a little faster, all that compilation happens in parallel. The parent process doesn't know how many of these modules need to be run; it's just sitting there, waiting for one of them to be ready. It doesn't have all the work it will do in parallel before it starts doing it; it's asynchronously decomposing some structure, some list, some group of things. It sits and waits for a module to be ready, and when it receives a message that a module is ready to be run, and it's an asynchronous module, it spawns a process to go run that test module, again asynchronously. The minute that process is spawned, it goes back to waiting
for another module to be ready, and then it spawns another one. Eventually it goes back into the loop, does its thing after spawning that process, and at some point it gets a message saying: okay, it looks like we've got all of our asynchronous modules. So it enters the synchronous part of the loop, which is pretty much the same and still uses asynchronous decomposition; these modules are still compiled and loaded into the BEAM's code server asynchronously, and when a module is ready, the runner gets a message saying: okay, you've got another module you can run now. But the big difference here is that these are run in the parent process, because they cannot be run in parallel. Sometimes you have tests with a shared dependency between them, and if you run those in parallel you'll have flaky tests and problems, and you don't want that, so you can run your tests in series if you want. Here the runner isn't spawning a process; it runs the module in the parent process, and after it runs one module it goes back into the loop, waiting for another module to be ready. Then, again, eventually it gets a message saying: okay, we've done everything; we've finished asynchronously decomposing all of the tests, and all the tests that are going to be run have been run. So it exits, and it just finishes with the :ok atom. And just to give you a high-level idea of what's going on when it's running those tests: it's pretty simple. It's again sending messages to that event manager, from one of the child processes or from the parent process, saying we've started running a module, because the manager is keeping track of timings. Then it just maps over all of the tests in that module, "give me all the tests in this module", runs each of them one by one, and when it's done it sends a message back to the event manager saying: okay, we're done. And running a single test is as simple as you'd think: it again sends
messages to the event manager to say "I've started running this test", because it's keeping track of timings, and then it sends the result of running the test, whether it passed, failed, errored, or was skipped, and then it's done. So that's what's going on, and hopefully it makes a little more sense in pictures. We have our parent process, and the first thing it does is spawn that event manager, which sits and listens for events. While the event manager is listening, the parent process gets a module that's ready to be run, so it spawns a child process to execute the tests in that module and goes back to waiting, while that child process does its own thing, running those tests the whole time. Then the parent gets another message saying: here's another module ready to be run. So it spawns another child process to run that module, and eventually the parent goes back to waiting. But now one of the child processes is done; it has something to send to the event manager, so it sends its messages and then goes away when it's all done. Then maybe another module is ready to be run, and the children keep sending messages up to the event manager whenever they have results or timing information to send. At the very end, our event manager gives the results back to the parent process, gets shut down, and goes away. But that's the idea: you have those two levels, and the children can again communicate with the parent. So it's very similar to reduction, but the big change with asynchronous decomposition is that first criterion: we now have a lazily evaluated input of functionally unknown size. We don't know how big it is, but communication with the parent is still allowed, and in this case that parent could be the event manager at that level, or it could just be the parent process itself.
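The shape of that loop, boiled down to a toy: a parent that doesn't know up front how many modules are coming, spawns a runner per ready message, and stops on a done message. This is a sketch of the pattern only, not ExUnit's code, and the `{:module_ready, module}` and `:done` message names are made up for the example:

```elixir
defmodule ToyRunner do
  # The input is decomposed asynchronously: each :module_ready
  # message is handled as it arrives, and we have no idea how many
  # there will be until :done shows up.
  def loop(results_pid) do
    receive do
      {:module_ready, module} ->
        # "async module": run it in its own child process
        spawn(fn -> send(results_pid, {:ran, module}) end)
        loop(results_pid)

      :done ->
        :ok
    end
  end
end

# Hypothetical usage: something else (a compiler, say) would send
# module names to the runner as they become ready.
parent = self()
runner = spawn(fn -> ToyRunner.loop(parent) end)
send(runner, {:module_ready, :test_module_a})
send(runner, {:module_ready, :test_module_b})
send(runner, :done)
```

A synchronous branch would do the work inline in `loop/1` instead of spawning, which is exactly the async/sync split described above.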
ExUnit happens to use an event manager in that case, but it functions as a parent; it's at that same level. There's still no communication between the siblings, as we saw: the work those child processes are doing is still independent, they don't need to coordinate, and there's still no communication between them. And there are still no shared dependencies; if you try to run tests with shared dependencies in parallel, it will be an unfortunate situation, you'll get flaky tests, and it's no fun. Now, could we simplify this? Could we use reduction to solve this problem? Yes, we could. We could eagerly evaluate the compilation of all the test modules, and once all of them have been evaluated, separate the asynchronous ones from the synchronous ones and do it as a reduction, just fine, using that parallel map function we saw earlier. But what we'd lose is a little bit of efficiency. We'd need to compile everything before we start running our tests, whereas with asynchronous decomposition, if some test module takes a really long time to compile, you don't have to wait for it; you can start running all the other test modules, and that's a nice little speed-up, a nice little benefit. So that's the trade-off in this case, but it's also one of the reasons I really like having asynchronous decomposition as its own named pattern, with the big difference being that the input is lazily evaluated. And it's a nice lead-in to that fourth pattern we're going to talk about: pipeline processing. This is what it sounds like. Pipeline processing is a fairly common thing in many applications, and it's even seen at a larger level: if you have many services working together, there may be pipelines where data travels through one service and then another, using queues and such. But it's also really helpful within a given
application, within a single running application, to have some sort of pipeline processing for data. What pipeline processing does is let us deal with the thing we haven't been allowed to deal with yet: shared dependencies between the child processes, and communication between the child processes. The other caveat, though, is that this can get very complicated to do well. Doing it in a very simple way isn't too hard, but there are a lot more edge cases, because this is a much more complicated and difficult problem to solve. So if you're going to reach for this pattern, if you're going to use it to solve a problem you have, I really, really highly recommend not writing your own code for it. In many mature ecosystems you'll find a really well-designed library that will do this for you, avoiding a lot of the edge cases and handling a lot of the common problems, so please, I encourage you, reach for one of those. With that caveat in mind, I'm actually not going to show the code, because it is kind of complicated. I think we might be able to understand what's going on, but luckily this is open source, so if you do want to go and read the code, you can: the library is called GenStage, it's on GitHub, and you can see how it implements this pattern. But it's a lot of code; it's a complicated problem, and a complicated problem to solve well. What I do still want to show is the communication pattern, because that communication pattern is different from what we previously had. We again have our parent process, but this time we're labeling it with an S, for supervisor. A supervisor is a thing in Elixir and Erlang; they have a framework called OTP which allows for supervision of processes. Basically, I wanted to change the label because we have something else the letter P is going to get applied to, and I didn't want to confuse folks. So now we have our parent, and that's a supervisor,
and what it does is spawn four child processes, and each of those processes represents a stage in a computation, hence the name: GenStage stands for "generic stage," and that sort of naming is something you see in Elixir and Erlang very frequently. Now, that first one way over on the left, the P, that is a producer; it produces data. The fourth one way over on the right, the C, that is a consumer; it consumes data. And the two in the middle, the PCs, those are producer-consumers; they both produce and consume data. So when you're designing your pipeline, you have to think about the stages in which you are transforming or modifying or working with your data, and decide where you might have shared dependencies, what can be done in parallel, and what must be done in series. You're looking at a much more complicated problem and dividing it up into these stages, and then those stages communicate in order to do their job. Now, because GenStage solves one of the more common gotchas in this sort of thing, which is back pressure, the communication pattern amongst the children actually starts way over at the consumer. It starts with the consumer sending a message to the producer it has subscribed to, saying, "OK, I'm ready for some data now, I can do some work, please send me some data." That producer is also a consumer, so it sends another message back down the chain, and back down the chain, until we get to a producer, which starts our chain and says, "OK, I will get some data for you." Maybe it already has data ready to be given, so it sends some data, some event, to the second stage, and when that stage is done, it still remembers that the next one wanted data when it was ready, so it sends it along the chain, and eventually back to the consumer. So it uses back pressure to avoid a really common problem, which is the ability to overflow or overwhelm one stage in the pipeline, and really bad things can happen if that happens. That's why I say please use a really well-designed library for this type of thing, because it will help you avoid those common pitfalls. But that's the idea with pipeline processing: you have this sort of pipeline, and you have communication between the stages to send data when it's ready to be processed, in steps, in stages. And the pipelines can be far more complicated than this. You can have multiple children at a given stage, so you can design for whatever you need: if one of your stages on average takes far longer than the others, you can have multiple workers in that stage. So this is a really, really powerful paradigm, a really flexible pattern, but it's complicated, and that's the trade-off we're making. What we get is a pattern that helps us deal with lazily evaluated inputs of functionally unknown or potentially infinite size. Communication with the parent is still allowed, although in this case it is rare; it's more often seen in supervision strategies, where the children can send messages to the parent if something crashes and needs to be restarted, things like that. So with pipeline processing, that communication back up to the parent is a little more rare, but it's still allowed and it still does happen. There is a lot of communication between the siblings now, though, and because we have that communication between siblings, and we can design our pipeline in such a way as to avoid problems with shared dependencies, we can solve problems where there are shared dependencies between the children. We just need to remember to design our pipeline to deal with those dependencies. So, in summary, because this is the biggie: we have these four different aspects of a problem, and we need to know, if I have a problem with this sort of shape, with these attributes, what is a good pattern to help solve it? For our embarrassingly parallel pattern, we have eagerly evaluated input, but no communication with the parent or between siblings, and no shared dependencies between the siblings. If we need communication with the parent, that's where reduction comes into play: we still have eagerly evaluated input, but now our children can send messages back up to the parent, though we still don't have communication between the children and we still don't have shared dependencies. With asynchronous decomposition, we're now dealing with lazily evaluated inputs of functionally unknown or potentially infinite size, but it still looks a whole lot like reduction: we can still send messages back to the parent, but we still don't send messages between the child processes, between our children actors, and we still don't have shared dependencies. And if we want to solve that very difficult problem, that's where we can reach for pipeline processing: we now have a lazily evaluated input of functionally infinite size, but we can communicate with the parent, the children can communicate amongst each other, and we can deal with shared dependencies in the work through that communication and through the structure and design of the pipeline. So those are the four big patterns that I wanted to cover today, patterns I think are seen pretty frequently in a lot of concurrent applications wherever there's an actor-based concurrency or parallelism model. These are all things that I use all the time in Elixir and in Erlang, which I have used quite a lot for the last four or five years, and I love the idea of being able to just talk with another developer on my team and say, "oh, that problem looks like something we can use reduction for," and we know what we're talking about when we say that. We know what that pattern looks like, we know how to implement it, and we don't need to spend twenty minutes getting on the same page about what that means. Having this shared vocabulary is really super helpful.
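As a reminder of the simpler end of that spectrum, here is one way the reduction pattern can be written in plain Elixir. This is a sketch, not the talk's actual code: the parent spawns one child per element, each child sends its result back to the parent's mailbox, and the parent collects the replies into an ordered list.

```elixir
defmodule Reduction do
  # Parallel map as a reduction: eager input, children report back to the
  # parent, no sibling communication, no shared dependencies.
  # Simplification: no monitors or timeouts, so a crashed child would
  # block the parent's receive forever in real use.
  def parallel_map(enum, fun) do
    parent = self()

    enum
    # Spawn a child per element; each sends {ref, result} to the parent.
    |> Enum.map(fn item ->
      ref = make_ref()
      spawn(fn -> send(parent, {ref, fun.(item)}) end)
      ref
    end)
    # Selectively receive each ref in spawn order, so the results line up
    # with the input order regardless of which child finishes first.
    |> Enum.map(fn ref ->
      receive do
        {^ref, result} -> result
      end
    end)
  end
end

IO.inspect(Reduction.parallel_map([1, 2, 3], &(&1 * 10)))
# => [10, 20, 30]
```

The selective receive on the pinned reference is what keeps the output ordered even though the children run concurrently.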
So I hope that all of you, going back to work next week, can maybe talk to your co-workers, or if you're not at work yet, if you're still in university, you can maybe talk to folks there. I'd also love to see more research about this. I know there are a lot of people doing research in this world, but a lot of that research is still focused on the really, really hard problems, and not on the kind of scientifically easy ones. The hard problem, I think, is always communication, so the more we have better ways of communicating about these sorts of patterns and these sorts of problems, the better our applications are going to be. I want to say thank you to Sketch, my employer, for supporting me in speaking today. We are hiring for Elixir positions, so if that's something you're interested in, please come talk to me and we can chat about that, or if you just want to talk to me about other stuff, totally, I'm around and I love talking to folks, so come say hi. Thank you all. I think we might have some time for questions, but yeah, thank you, that's it.
Info
Channel: Code Sync
Views: 2,325
Rating: 4.9166665 out of 5
Keywords: Elixir, Erlang, Devon Estes, Lambda Days
Id: GVSRi9Ki8d8
Length: 38min 24sec (2304 seconds)
Published: Tue Feb 25 2020