Phoenix LiveView in Production: Andrew Morin, ClusterTruck // Elixir Wizards Conference 2021

Captions
Hello everyone, my name is Joel Meador. I work at SmartLogic as a developer. We have a couple of short talks today before closing remarks, and our first speaker is Andrew Morin, who works at a company called ClusterTruck, a food delivery service in Indianapolis and a few other markets (which I've had before; it's very good). So I'm going to let Andrew introduce his company and talk about Phoenix LiveView in production. Take it away, Andrew.

All right, thank you. So yeah, this talk is called Phoenix LiveView in Production. I'm Andrew Morin, a senior software engineer at ClusterTruck. I have over 20 years of experience in software development in a variety of languages, but I've only been spending the majority of my time with Elixir for the last year or so, and most of that specifically with Phoenix LiveView.

ClusterTruck is a food delivery platform. We are based in Indianapolis, but we operate in several markets across the United States. We are a fully vertically integrated platform: we own and operate our own kitchens, and we control the process from ordering through cooking and delivery. This allows us to provide a superior customer experience compared to some of the third-party delivery services out there, because we can ensure that you receive your food quickly and as fresh as possible from the kitchen.

On the technical side, we operate a variety of services, including a Ruby on Rails API and a React web front end, but several of our services are built on Elixir and Phoenix, and that's a primary focus of ours as we go forward with new development.

At ClusterTruck we use LiveView in a few different applications. Our largest use of it is an internal system for inventory management. This is used on site in our kitchens by employees, and it serves both some basic CRUD forms and some significantly more complex inventory and invoice entry forms. There are a lot of line items and a lot of moving parts to those, which poses some unique challenges, and that's the sort of thing we're going to get into in just a minute. That system also provides some reporting functionality.

We also use LiveView for some dashboard services we've written. A really good example is what we call the Cluster 500. You may have heard of the Indianapolis 500, which is held here in Indy at the end of May every year, but in 2020, due to the COVID-19 pandemic, the race was postponed until August. So we had the idea to display a leaderboard of the 33 drivers performing the most deliveries on the day the race would have been run, as a fun marketing promotion. Using LiveView, we were able to spin up and deploy the dashboard in basically a single hack day; it was a single-day project. We brought it back for 2021 because it was fun to do, so we ran it during race day, and you can see here the final results from race day this year at the end of May. Ken has been one of our long-time stalwart couriers, and he completed the most distance across the Indianapolis market. Like I said, this was something we spun up in just one day using LiveView as a fun thing, and it turned out really great.

As for deployment, our entire stack is deployed into a Kubernetes cluster, and LiveView is as easy to deploy as any other Elixir application via Kubernetes. We use Docker builds to create our images, and as far as building Elixir clusters, it's very easy to do using Kubernetes services and DNS names to support cluster discovery. It works great.
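The talk doesn't name a clustering library, but one common way to get DNS-based cluster discovery on Kubernetes is libcluster's Kubernetes DNS strategy pointed at a headless Service. A minimal sketch, assuming the libcluster dependency and hypothetical service and application names:

    # config/runtime.exs (sketch); "myapp-headless" and "myapp" are placeholders.
    import Config

    config :libcluster,
      topologies: [
        k8s_dns: [
          # Resolves the headless Service's DNS record and connects to each pod IP.
          strategy: Cluster.Strategy.Kubernetes.DNS,
          config: [
            service: "myapp-headless",
            application_name: "myapp"
          ]
        ]
      ]

The Cluster.Supervisor child, started with these topologies in the application's supervision tree, then handles connecting the nodes as pods come and go.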
So you might be thinking that LiveView sounds like a great idea, but how well does it actually work in a production environment? For starters, it works exceedingly well for things like dashboards, as we've seen, as well as for reporting. LiveView controllers can easily interface with Phoenix PubSub messaging and let you render data to a user in real time as your background processes (GenServers completing reports, or other services sending data through message queues) finish their work. That's how the Cluster 500 was built, and I can jump to an editor and show you a very simple reporting implementation as an example of how you might do that.

Here we have an example LiveView controller. On mount, the first thing it does is subscribe to a PubSub topic for report results. Then it spins up a new GenServer instance and sets its results field in the assigns to nil. We have a template down here that interprets nil results as meaning the report is still generating, and if it actually has any rows it renders those rows; just a very simple dummy implementation. The GenServer over here delays briefly, mostly for demo purposes, and then executes. In this case it just builds a list of 50 random numbers, but you could put whatever report logic you needed in here; it could be some long-running process the GenServer handles. Then it broadcasts back to that PubSub topic. I've also written it to rerun itself after two minutes, just as a fun little exercise.

Upon completion, it broadcasts back to the PubSub topic, and you can see we have a handle_info definition within our LiveView controller. Note that a LiveView at its heart is just another GenServer implementation, which is really handy because it can easily receive any sort of message, including a PubSub broadcast. All it does is update its internal state, in this case updating an assign on the socket with the new results. Each client that connects to this LiveView spins up another process, so they all receive this broadcast and they all re-render accordingly.

If we jump back to the browser, we can see an example of this (it helps if you actually start the process). This initially shows "generating results," and after 10 seconds we should see our results simply pop in. As you can tell, there's very little code involved in writing something like this, so it's a great example of something that can be spun up very rapidly using LiveView.
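A minimal sketch of the pattern just described, with hypothetical module, topic, and field names (the actual demo code is in the GitHub repo linked at the end of the talk):

    # All module names, the PubSub name, and the topic below are placeholders.
    defmodule MyAppWeb.ReportLive do
      use Phoenix.LiveView

      @topic "report:results"

      def mount(_params, _session, socket) do
        if connected?(socket) do
          # Listen for results and kick off a worker that will broadcast them.
          Phoenix.PubSub.subscribe(MyApp.PubSub, @topic)
          {:ok, _pid} = MyApp.ReportWorker.start_link(topic: @topic)
        end

        {:ok, assign(socket, results: nil)}
      end

      # The LiveView is itself a process, so the PubSub broadcast lands here.
      def handle_info({:report_results, rows}, socket) do
        {:noreply, assign(socket, results: rows)}
      end

      def render(assigns) do
        ~H"""
        <%= if @results == nil do %>
          <p>Generating results...</p>
        <% else %>
          <ul><%= for row <- @results do %><li><%= row %></li><% end %></ul>
        <% end %>
        """
      end
    end

    defmodule MyApp.ReportWorker do
      use GenServer

      def start_link(opts), do: GenServer.start_link(__MODULE__, opts)

      def init(opts) do
        # Defer the (stand-in) long-running work so init returns immediately.
        Process.send_after(self(), :run, 10_000)
        {:ok, Keyword.fetch!(opts, :topic)}
      end

      def handle_info(:run, topic) do
        rows = for _ <- 1..50, do: :rand.uniform(1_000)
        Phoenix.PubSub.broadcast(MyApp.PubSub, topic, {:report_results, rows})
        {:noreply, topic}
      end
    end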
LiveView is also great for just a basic CRUD application. If you're using Phoenix, you're probably already making heavy use of Ecto changesets, and LiveView makes it very easy to create forms tied to changesets and execute changeset validation on each user input modification. So it's very simple to implement a page that validates user input on entry and gives the user immediate, real-time feedback.

It's more challenging, however, especially in a production environment, if you have form inputs that don't fit neatly into individual Ecto changesets. This is certainly the case with the aforementioned inventory management system we built: we render some large, complex input forms that need to handle multiple user interaction events and don't necessarily tie those back immediately to a single Ecto model. Additionally, if your clients are not on a very solid internet connection, which may be the case in a production environment, some of these more complex pages can have extremely poor performance and a bad user experience. As the number and size of these back-and-forth websocket messages increase, the impact of a bad connection increases as well. This is an essential thing to consider in a production environment, and we'll get more into it on the next slide.

Finally, the LiveView model doesn't fit easily into request preconditions you need to check, such as authorization, since LiveView, as we've seen, just has a mount function. One option would be to implement whatever checks you need within mount, set an assign based on the result, and update your template to conditionally render different UI if the check has failed. But that's not a great pattern, and it doesn't scale well as you build out more and more pages. What we've done is implement an authorization plug that fits into the Phoenix connection pipeline and performs any necessary authorization checks for the current request, halting the chain and immediately responding with a 403 if the check fails. This works pretty well for us. I'm not going to show you that code right now, but the code I showed a minute ago is all included in an example demo application, which I'll put up on GitHub as well.
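The plug itself isn't shown in the talk; a sketch of the general shape, with a hypothetical module name and a placeholder check, might look like this. It runs in the router pipeline, so the connection is halted with a 403 before a LiveView ever mounts:

    defmodule MyAppWeb.Plugs.Authorize do
      # Hypothetical module; a real check would inspect the current user/session.
      import Plug.Conn

      def init(opts), do: opts

      def call(conn, _opts) do
        if authorized?(conn) do
          conn
        else
          # Stop the plug pipeline and respond immediately.
          conn
          |> send_resp(403, "Forbidden")
          |> halt()
        end
      end

      # Placeholder check; assumes fetch_session has already run in the pipeline.
      defp authorized?(conn), do: get_session(conn, :role) == "admin"
    end

In the router, this plug would sit in a pipeline ahead of the live routes it protects.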
What's really key when considering LiveView in production is knowing how to fully optimize the websocket data and rendering to overcome the challenges I mentioned. My experience with LiveView is that it's very easy to fall victim to a "works on my machine" mentality when the production user experience is in fact very different. Problems like generating too large an HTML diff lead to large volumes of traffic through the websocket. That doesn't tend to be immediately obvious when you're working locally, but it becomes very apparent when you build an application like our inventory system, which is used by staff in a kitchen, walking around with an iPad, potentially moving in and out of solid Wi-Fi signal. Additionally, I think as developers we often think about the differences between browsers and chalk them up to subtle rendering quirks we need to account for with different markup or CSS, but in my testing I've found there can be significant differences in the rendering performance of different browsers, and that really becomes apparent when you're continually sending diffs of HTML to be re-rendered in the browser.

The number one thing to do is to watch the traffic coming back from the server via the websocket and find events or pages that need further optimization. The LiveSocket JavaScript client has an enableDebug method that some find useful, but I find it just as effective to simply watch the websocket messages being passed and look for patterns. You can learn to use things like input field debounce and live components to reduce the events being sent and the size of the HTML diffs being returned.

We're going to walk through a quick example here. I have a LiveView with a mount that creates a thousand changesets for an Ecto model that we have and assigns those back to the assigns. The rendered view iterates over all of those assigns in a for loop and builds a form with a hidden input for the id and a text input for the value. One thing to note is that there's a phx-change validate event on the form; this is a pretty common and desired pattern in LiveView forms. The validate event handler in our controller, which we can see over here, simply updates the changeset with the new values and updates the assign. This is a common pattern used to return validation errors back to the end user. I'll note that this is doing a lot of list access by index, which is obviously not the most efficient, but it's functional for demo purposes.

If we go to this page in the browser (and I'm going to reload it, which seems to not be cooperating nicely with my screen share... there we go), I can look at the websocket connection and the messages coming back. As I type, every keystroke produces a lot of events coming back, and they're very large as well: I'm getting about 340 KB back with every keystroke. As I enter data, that's a lot of data to be transmitting, and furthermore it's a lot of DOM to expect the browser to update and re-render every single time. Some browsers do this better than others, but in a production environment this is not what you want to see.

A very easy first optimization is to add a debounce to our input. You could do this globally, say in a helper function in your view where you replace the text input with your own implementation, but I'm just going to add it directly right here and set it to half a second. You also have the option to set this to something like blur, which is obviously going to be even more efficient if that makes sense in your application. In our use case we're expecting the end user to see constant updates as they're entering values, because we give them different user input hints, so we need some sort of regular update; but this will certainly streamline the process a bit. As I type, I'm still getting large chunks of data back, but it's happening less frequently, so that's at least some benefit.

Where we're really going to see a huge benefit is in optimizing this for loop, because this is the real problem. What's happening is that LiveView sees the entire contents of this for loop as a diff every single time, so it sends back that payload every single time. We can convert this to a live component and really speed things up. I've already implemented a live component here called LoopItem. The template for this component matches the existing template we had inside the for loop, rendering a form with our inputs. So I'm going to replace my template with something that just calls out to the live component. Note especially that we're passing a minimal number of assigns to the component itself, just the individual item and changeset. We're not passing a collection of all of the items and changesets, because then we wouldn't see any additional gains. We really want to pay attention to that when we're optimizing this.
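A condensed sketch of the optimized shape just described: a per-item live component that receives only its own item and changeset, with a debounced input. Module and field names are hypothetical; the actual demo code is in the linked repo:

    defmodule MyAppWeb.LoopItemComponent do
      use Phoenix.LiveComponent

      # In the parent LiveView's template, each component gets only its own data:
      #
      #   <%= for {item, changeset} <- Enum.zip(@items, @changesets) do %>
      #     <.live_component module={MyAppWeb.LoopItemComponent} id={item.id}
      #                      item={item} changeset={changeset} />
      #   <% end %>
      #
      # A change to one item then produces a small diff scoped to that component
      # instead of re-sending the whole for loop's output.

      def render(assigns) do
        ~H"""
        <form id={"item-form-#{@item.id}"} phx-change="validate" phx-target={@myself}>
          <input type="hidden" name="item[id]" value={@item.id} />
          <input type="text" name="item[value]" phx-debounce="500"
                 value={Ecto.Changeset.get_field(@changeset, :value)} />
        </form>
        """
      end

      def handle_event("validate", %{"item" => params}, socket) do
        # MyApp.Item.changeset/2 stands in for the real schema's changeset function.
        changeset = MyApp.Item.changeset(socket.assigns.item, params)
        {:noreply, assign(socket, changeset: changeset)}
      end
    end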
Now if I go back and look at my websocket traffic again, my initial render is still the same size, as we would expect, but as I type I'm now down to only about 6 KB coming back. This is a huge optimization, both in the amount of data coming back and, as I mentioned, in the experience end users on different connections and different browsers will see, because the browser has a lot less work to do to re-render.

Another optimization to pay attention to, a small one but a big benefit, is to make sure you understand when to use live_redirect or push_redirect when navigating between pages. These navigate the user to a new page by unmounting the current LiveView and mounting a new one, rather than performing full page reloads.

Another really big one, which has bitten us before, is to make sure to minimize your CSS. We saw this exact effect in our production environment. We use a Tailwind library, and we were failing to properly remove unused classes in our production build. This resulted in a large performance hit, especially under Chrome. We found that with each chunk of HTML returned over the socket that updated the DOM, the browser had to parse through an enormous amount of CSS to figure out whether any of the new DOM, or even existing siblings, now matched new CSS classes. Removing all the unused classes from that Tailwind library gave a huge boost in rendering performance.

And finally, this is more Elixir-wide than LiveView-specific, but make sure you optimize your Docker builds as well if you're deploying to a containerized environment like we are. I have an example Dockerfile in this project. It isn't fully functional (if you tried a docker build it would probably choke), but it gives you the general idea. We use two different FROM statements: we initially start from the Elixir version we're building against, do all of our compilation (our deps.get, our compile), and then run mix release to build our release. Then we pull from a minimal OS image and copy just our built files from the build image. That produces as small an image as possible, so we can rapidly deploy when we need to make updates.

As I mentioned, all the code in this demo is available on GitHub, and there's a link there. These slides have been made public, and there's a link in the README on GitHub if you want to see them again. So that is the entirety of the presentation, unless there are any questions.
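The example Dockerfile lives in the repo; the two-FROM shape described above looks roughly like this sketch (image tags, the app name, and the exact build steps are hypothetical and project-specific):

    # Build stage: fetch deps, compile, and assemble a release on an Elixir image.
    FROM elixir:1.12-alpine AS build
    WORKDIR /app
    ENV MIX_ENV=prod
    RUN mix local.hex --force && mix local.rebar --force
    COPY mix.exs mix.lock ./
    RUN mix deps.get --only prod && mix deps.compile
    COPY config config
    COPY lib lib
    COPY priv priv
    RUN mix compile && mix release

    # Runtime stage: copy only the built release onto a minimal OS image.
    FROM alpine:3.14
    RUN apk add --no-cache libstdc++ openssl ncurses-libs
    WORKDIR /app
    COPY --from=build /app/_build/prod/rel/my_app ./
    CMD ["bin/my_app", "start"]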
Thank you, Andrew. I had one question, and it looks like someone else does too. (Sorry, we don't want to see that; it's screen inception there, infinite people.) Did you separate your drivers from your cyclists when you did the Indy 500?

We did not. That is a good question, but we did not. I didn't get too much into the details of our business model, but we keep relatively small delivery zones around our kitchens, and that's part of the drive toward making sure food is as fresh as possible. As the distance gets shorter, the speed difference between cyclists and drivers condenses, obviously, so we didn't feel any need to separate them out, but they may find that unfair. That's a good question.

Okay, we have a possibly better question, from Adam Schwartz: if you are deploying an Alpine image, is that what you develop with as well?

Yeah, that's a very good question. I actually just develop on an OS X machine. In order to at least attempt to keep development and production as close as possible, we use asdf to make sure that all of our tooling is the same, but I do not actually develop on Alpine. I'm curious whether I would see any differences there, given the big gap between production and development; I don't really know. But yeah, I develop on OS X and just use asdf to make sure I'm using the same Elixir version and Node version and all of that as we run in production.

Okay, we don't have any other questions, and I think the next session is ready to start, so thank you very much for your time, Andrew.
Info
Channel: SmartLogicTV
Views: 784
Id: yLb6nHLAMpY
Length: 17min 9sec (1029 seconds)
Published: Fri Jul 16 2021