Erlang Factory SF 2016 Keynote: Phoenix and Elm – Making the Web Functional

Reddit Comments

This is a fantastic introduction to Elm. I had heard about it, but didn't know anything. This short talk gave me enough of an understanding that I feel like "Oh right, of course that's how it should be done."

👍 9 · u/ergasia · Mar 12 2016 · replies

I'm fascinated by this match up. I've started using Elm and I love it. I am super curious to get into Elixir but haven't found the time yet.

But I love Elm because of the type system, which feels like it is missing from Elixir entirely. Obviously Elixir has different and remarkable strengths from its Erlang grounding, but it feels like people who would love Elm would resent Elixir for the lack of typing. Is that not the case? Or is it just that the benefits of Elixir outweigh any concerns about the type system? I'd love to understand this better.

👍 4 · u/woberto · Mar 12 2016 · replies
Captions
Let's get started. We're talking about making the web functional with Phoenix and Elm. These are different tools and technologies, but we're actually seeing a lot of cross-pollination between the two communities, a shocking amount actually. I was at another Elixir conference just last week where we had two talks that incorporated Elm in some way, and the same thing happened at ElixirConf last year. So I'm really excited to see how we can bridge the gap between Phoenix and Elm, and we have some exciting things in the pipeline that I think the community is really going to love. Evan can tell you about the Elm side of things, but first we're going to talk about what makes the web dysfunctional today on the server.

Obviously I don't have to tell everyone here that there's no more free lunch: clock-speed bumps are no longer making our programs faster every year, and the multi-core age is changing the game. At the same time, the world is moving from mostly stateless request/response architectures to increasingly stateful ones, and that is really the game changer. It's what brought me into the Erlang ecosystem and ultimately got me to create Phoenix. I think stateful architecture is really at odds with most languages' concurrency models, or lack of concurrency models. You used to have easy horizontal scalability: a request comes in, you reach for some state from a database, fulfill the request, and throw that state away. But now that we have stateful connections, an ongoing conversation, vertical scalability becomes really important, and I think most languages without strong concurrency models are falling over in this area. I come from a Ruby background, and trying to handle a stateful conversation with the server using object orientation was incredibly complex; it's what pushed me over to the
functional side of things, and ultimately to Elixir and Phoenix.

We also have a kind of dysfunctional web on the client. I think no one would argue that JavaScript is solving everything extremely well. There is a lot of good tooling in JavaScript, but the biggest source of new issues on the Phoenix issue tracker is JavaScript related, and I did not think I'd spend a huge amount of my time debugging JavaScript when I created an Elixir web framework. We integrated with an existing build tool, and we don't even couple to it, but just having it ship by default with Phoenix has caused problems for people trying to get repeatable builds; the Node ecosystem nominally follows SemVer, but everything breaks when new packages are released. So we're seeing a lot of fragile tooling, and we have a lot of competing async models on the client, whether that's continuations, callbacks, promises (which are really hot now), or competing implementations of promises. All these competing models to accomplish different things are leading, I think in large part, to the framework churn, the framework fatigue, that we have. The JavaScript community is seeking an ideal architecture for its applications and hasn't quite found it yet, so a new framework comes along to try to solve it, but ultimately we're dealing with what I consider an inferior language. My hope is that Elm comes in and fills this gap, and for me personally it's going to be my ejection out of the JavaScript community into something far greater. We'll hear all about that from Evan in a little while.

First, though, I'd like to talk about the Phoenix milestones we hit over the last six months. We wrapped up the Programming Phoenix book; we actually just finished it, so it should be out in print probably within about a month, and it's available as an ebook now. I wrote it with Bruce Tate and José Valim. It was a lot of work, but I'm really happy
with how it came out, so check that out. I think what will really help Phoenix adoption in general is having first-class educational resources available. The other big part of Phoenix development over the last six months is DockYard's support of the project. I joined DockYard about three or four months ago, and since then I've been working almost entirely full-time on Phoenix; DockYard hired me with the explicit role of spending the majority of my time on open-source development. That's been what I needed to have a sane life while trying to manage a large open-source project. DockYard has made a big bet on the framework, so check them out and give them a big thank you.

But the biggest story of the last few months is our PubSub and channel optimizations. We were able to get two million channel clients on a single server, which is something I didn't think we'd ever achieve. You hear about the WhatsApp scale of two million clients per server; that's part of what brought me into the Erlang ecosystem, and the fact that we were able to recreate that (it's not quite the same, since they're doing a lot more than handling mostly idle connections) is still extremely exciting. And these aren't just idle WebSocket connections: they are actual channel clients, multiplexed on a single WebSocket connection as active PubSub subscribers, so there's quite a bit going on behind the scenes beyond raw Cowboy WebSocket connections. If you've ever tried to benchmark or load test anything built on Erlang or Elixir, you know that's actually incredibly hard, so this was a couple of weeks of full-time work just to orchestrate the load testing. We had to provision 45 Rackspace instances running the Tsung load test client, so we had this fleet of Tsung clients, each opening about 60,000 WebSocket connections to a single Phoenix server, and
just setting all this up was a lot of work. We had a pretty beefy server: the machine that served those two million users was a 40-core, 128-gigabyte instance, which is substantial, but it's only about thirteen hundred dollars a month on Rackspace. If you can support two million active users for thirteen hundred dollars a month, I'm really excited about the barriers that's going to break down; I think it opens up use cases that were previously unreachable, especially for the average developer. It used about 83 gigabytes of memory, so it did use quite a bit, but remember we had not only the raw WebSocket connections but also a PubSub system built on top of them, plus individual channel processes multiplexed on top of each connection, so there's a lot going on. And we didn't even max this machine out. We were only capped at two million clients because I had set the ulimit on the server to two million file descriptors, as some crazy high number I assumed we'd never hit, and we ended up actually hitting it. I didn't have time to go back and rerun the results, because when you're paying for 45 servers by the minute, these things add up.

One of the most fun parts of this is that broadcasting to the two million subscribers actually worked. We had load tested a simple chat application to stress our PubSub system; you would never want two million users in one chat room, but once we had those two million users connected, we thought, hey, let's try to actually use the app, assuming it would just kill the server. This is an actual video I took: when I hit enter on the left-hand side and the message reaches both browsers, it has gone out to two million active connections, which blew us away. José was on Skype with me,
and I think José's response was just: "Erlang." To give you an idea of what's happening here: a message is sent to the server, which broadcasts to two million subscribers. That means sending two million messages to two million PIDs, serializing the payload to JSON for each one, and pushing it down the wire, and it all happened in about two seconds. I'm still blown away by it.

But I want to talk about the optimization story, because telling it is extremely fulfilling for me. You come into a new ecosystem or framework and you buy into the hype; you drink the Kool-Aid. I think that's unavoidable, and there's nothing wrong with it, Kool-Aid is delicious. We have mantras that we spout: functional programming leads to more maintainable code, the Erlang VM is super scalable, Phoenix is super productive without sacrificing performance. We preach these things, but having the hype live up to reality is incredibly fulfilling, so I want to tell the story of how we optimized this. Initially, to get Phoenix 1.0 out, at least for the channel layer, we took the approach of "make it work, then make it fast". We had a stable API, we figured that since we were following OTP principles it should be scalable, we released 1.0, and then we went to optimize. Our first benchmark run reached only 30,000 clients before the server was maxed out, which is way too low, and I started having crushing self-doubt: is this even scalable? But we quickly doubled performance by removing code, which was encouraging: we added 14 lines and removed 16, simplifying the code and doubling throughput. Then we fired up Observer, found a bottleneck almost immediately, and fixing it simplified the code base again: with
five lines of code added and 38 removed, we increased performance by an order of magnitude over where we started. At this point I was feeling really good: these were trivial changes, minor tweaks that actually removed code, and we were at hundreds of thousands of connections. Then I thought, okay, the low-hanging fruit must be gone; I'm going to have to refactor all this code to reach that special-sauce scale of millions. But it turned out our biggest performance improvement was a single-line change, which increased our arrival rate (the number of connections per second we could open) by ten times and also added a hundred thousand subscribers. We were only limited to 450,000 connections at that point because we had started with a smaller Rackspace instance. These optimizations you see here are what took a system supporting 30,000 subscribers to one supporting two million, largely by removing code, and I'm still blown away by the process, because we didn't have to do any complex profiling. I just used Observer to find bottlenecks and made minor code changes.

For the second optimization, I fired up a remote Observer shell on the server during an active benchmark and noticed that the timer server had hundreds of messages in its mailbox. I thought, what the heck, why is the timer server involved? Then I realized I had incorrectly used :timer.send_interval where I should have used Process.send_after or :erlang.start_timer. :timer.send_interval uses a central timer server process to send you a message periodically, so we were bottlenecking on the timer server when we didn't even need it. Just by removing that, we increased performance fivefold. And this is how we optimized: we examined our ETS tables and monitored message queue sizes, and that's all we had to do.
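The timer fix described above can be sketched in a few lines. This is a hedged illustration, not the actual Phoenix diff; the module name and the 30-second interval are invented for the example:

```elixir
defmodule Heartbeat do
  use GenServer

  @interval :timer.seconds(30)  # arbitrary example interval

  def start_link(opts \\ []) do
    GenServer.start_link(__MODULE__, :ok, opts)
  end

  def init(:ok) do
    # The bottleneck: :timer.send_interval(@interval, :tick) would
    # funnel every periodic message through the single central
    # :timer server process. Instead, schedule directly in the VM:
    schedule_tick()
    {:ok, %{}}
  end

  def handle_info(:tick, state) do
    # ... do the periodic work, then re-arm the timer ...
    schedule_tick()
    {:noreply, state}
  end

  defp schedule_tick do
    Process.send_after(self(), :tick, @interval)
  end
end
```

Process.send_after/3 (like :erlang.start_timer) uses the runtime's built-in timer machinery with no intermediary process, so hundreds of thousands of processes can each run their own tick loop without a shared mailbox backing up.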
That took us from 30,000 connections to two million. I'm blown away that I never had to dip down into BEAM internals or run profiling tools; it was almost too easy. In fact, Adam Kilson, who's here today: while I was putting my slide deck together I saw his tweet saying he was playing around with Observer and it took him 30 seconds to see that he was leaking monitor refs. I thought, this is perfect for right here in my slide deck. Little did I know he was trolling me, because five minutes after I added it, a memory leak issue was raised on Phoenix PubSub by Adam. He was actually diagnosing a Phoenix issue that I still have to fix, so he was trolling me. It should affect very few people, but the moral of the story is that someone who didn't write Phoenix and had no experience with its code base was able, within 30 seconds, to identify an issue and find the exact place in the code causing it. To me that's a testament to the tooling we have available, which is really cool, aside from me having to fix the bug.

Then there's the 10x increase in arrival rate, and the lesson there is: know your ETS table types. We were using a bag table for subscriptions, and we changed it to a duplicate_bag, and that is what increased our rate of new connections tenfold. Insert time into a bag grows linearly with the number of entries under a key, while a duplicate_bag allows duplicate entries, which keeps insertion time constant. Since every subscriber is unique anyway, we could use a duplicate_bag, and that gave us the 10x increase in arrival rate, which is pretty incredible.

That was the top end of our optimizations: we could support two million users, but at that point we saw some subscriber timeouts. Sometimes one of the load test clients would run out of memory and crash, which would drop 60,000 connections at once and flood our single local server.
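The table-type difference is easy to try directly in IEx. A hedged sketch (the table and topic names are illustrative, not the ones Phoenix actually uses):

```elixir
# In a :bag table, inserting under an existing key must compare the
# new object against every object already stored under that key (to
# reject exact duplicates), so a hot key such as a popular topic
# gets slower to insert into as it grows:
slow_subs = :ets.new(:subscribers_bag, [:bag, :public])

# A :duplicate_bag skips that duplicate check, making inserts under
# the same key effectively constant time. Since each {topic, pid}
# subscription is unique anyway, the semantics are unchanged:
subs = :ets.new(:subscribers, [:duplicate_bag, :public])

:ets.insert(subs, {"room:lobby", self()})
:ets.lookup(subs, "room:lobby")  # every subscriber pid for the topic
```

The one-line change is just the table type atom passed to :ets.new/2, which is why the arrival-rate fix was a single line of code.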
That led to timeouts on new subscribers. We also noticed that broadcasting to a million users took a couple of seconds, but broadcasting to two million took five to eight seconds, and we thought, wouldn't it be great to bring that back down to a couple of seconds? We realized we could shard our subscriptions, which would give us both pooling of PubSub servers and parallelized broadcasts; any time you have timeouts against a single server, the solution is pooling, right? So we went from a setup where each node ran a single PubSub local server to one where we sharded the ETS tables that hold our subscribers, with a local server managing each ETS shard, while still using pg2 to broker multi-node connections. Doing this was remarkably simple, about 150 lines of code.

You might wonder what we sharded by. We don't necessarily have users, but we do have a unique PID. This is the code lifted out of Phoenix: we convert the PID to a binary (look up the binary representation of a PID in the Erlang docs) and pluck out the process ID number. For example, if we have a PID and we want the number 57 out of it, we can pattern match that value out, and then to pick the shard we take the remainder of dividing by the shard count. So every PID knew which PubSub local server to subscribe through and which ETS table to write to. This is what fixed all those timeouts at the top end of our two million subscribers. We were also able to parallelize the broadcasts themselves: when you broadcast, we read from the ETS shards in parallel and broadcast to those users in parallel, and this is what kept broadcasts down to two seconds for a couple of million users. So that's our PubSub optimization story. We started with something that supported 30,000 users, which is not great, but it was an initial best-effort approach.
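The shard-picking idea can be sketched like this. Rather than pattern matching on the external binary term format as described in the talk, this sketch parses the printable form of the PID; the shard count of 8 is an arbitrary example:

```elixir
defmodule Shard do
  @shards 8  # example value; Phoenix's actual pool size differs

  # The printable form of a pid is <node.id.serial>, e.g. <0.57.0>.
  # We pull out the middle number and take it modulo the shard count
  # to decide which local PubSub server / ETS table this pid uses.
  def shard_for(pid) when is_pid(pid) do
    [_node, id, _serial] =
      pid
      |> :erlang.pid_to_list()   # ~c"<0.57.0>"
      |> to_string()
      |> String.trim_leading("<")
      |> String.trim_trailing(">")
      |> String.split(".")

    rem(String.to_integer(id), @shards)
  end
end
```

Because every channel process has a unique PID, this spreads subscriptions evenly across the shards, and a broadcast can read each shard's ETS table in parallel.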
Then, with minimal code changes, by actually simplifying the code, we ended up with a world-class solution.

From there I've been focusing on Phoenix Presence, which I talked about last year at ElixirConf, and it's really exciting to me because we're putting cutting-edge CS research into practice. I like to say we're applying some really interesting CS ideas in something users can rely on day-to-day without having to think about it. The easiest example of presence is "who's online right now" in your application, and it seems deceptively simple; most people wonder why you'd need cutting-edge CS research to show which users are online. We have two users in two browser tabs, we just show them, right? But let's walk through the problem. Most newcomers in the community will implement it like this: they'll start a presence server that monitors channel processes and broadcasts events as users join and leave, with code that looks like this (I've seen code exactly like this). In the channel join they call a presence add function, a GenServer call that sets up a monitor, broadcasts that a user has joined, and stores that state locally. Then, any time the server receives a DOWN message, it broadcasts that the user has left and deletes them from its state. Problem solved? Great, let's walk through it. There are a few problems. First, if a user opens three browser tabs, we end up with duplicate users in the list. You could have the client de-duplicate, but then what happens when user one closes a single browser tab? The server broadcasts a leave and we remove the user from the online list, even though they're still there in two other tabs. A user hasn't left until all of their presences have left, so there's an extra layer here: unique presences for the same user.
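The naive single-node approach being described might look like the following. This is a sketch of the flawed pattern, not real Phoenix code; MyApp.Endpoint and the topic name are placeholders:

```elixir
defmodule NaivePresence do
  use GenServer

  # Called from a channel's join/3 with the channel pid.
  def add(pid, user_id), do: GenServer.call(__MODULE__, {:add, pid, user_id})
  def list, do: GenServer.call(__MODULE__, :list)

  def start_link(_), do: GenServer.start_link(__MODULE__, %{}, name: __MODULE__)

  def init(state), do: {:ok, state}

  def handle_call({:add, pid, user_id}, _from, state) do
    Process.monitor(pid)
    # Flaw 1: a second browser tab for the same user produces a
    # duplicate "user_joined", and nothing de-duplicates presences.
    MyApp.Endpoint.broadcast("room:lobby", "user_joined", %{user: user_id})
    {:reply, :ok, Map.put(state, pid, user_id)}
  end

  def handle_call(:list, _from, state) do
    {:reply, Map.values(state), state}
  end

  def handle_info({:DOWN, _ref, :process, pid, _reason}, state) do
    {user_id, state} = Map.pop(state, pid)
    # Flaw 2: broadcasts "user_left" even if other tabs are open.
    # Flaw 3: this state lives on one node only; other nodes in a
    # cluster never see it.
    MyApp.Endpoint.broadcast("room:lobby", "user_left", %{user: user_id})
    {:noreply, state}
  end
end
```

It works on a laptop, which is exactly why the pattern is so common; the problems only surface with multiple tabs and multiple nodes.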
So that's one issue, but there's worse. A lot of newcomers to both Erlang and Elixir will build an application on their laptop, put state in a GenServer, and think, aha, I can deploy this. The problem is that as soon as we deploy multi-node, we have a data synchronization problem: if we list our GenServer state, it differs between nodes. So there's a synchronization problem to solve, and at this point most languages, libraries, and frameworks will say: aha, I know how to synchronize data between two computers, we'll just put it in a database. We'll deploy Redis and the problem is solved. I don't want to crap on Redis, because I've used it many times in the past and I think it's really neat, but when a language doesn't have a good distribution story, it pushes you towards bad solutions. Presence is all ephemeral state: either the user is there, or they're gone, and when they're gone that state is gone. There's no reason to put ephemeral state in some storage engine; we already have a place for it, and that's processes. But let's walk through it. The best case, where most people end up, is a presence server that reads and writes to some storage engine like Redis: problem solved. Except it isn't completely solved, and here's what most people ignore: what happens if node two catches fire? Boom, node two is gone, and the process responsible for cleaning up its users is gone with it, so user two will be online in Redis forever, and that's not good. At this point you can reach for convoluted fixes, maybe a storage engine with expiration times, but then you have to manage the expiration of users' presence entries, or have every other node periodically sweep for orphaned users. And if you have dozens of nodes,
you now have a ton of nodes doing the same work, and it becomes a mess. So we have two problems to solve with presence. The local-node concern is that we have to account for unique presences for the same user. The multi-node concerns are that we have to handle node-down events (for any user that was on a downed node, we broadcast locally that, as far as we're aware, those users are gone) and we have to replicate data across the cluster. Our ideal solution is something with no single source of truth, and therefore no single point of failure: a more scalable, more fault-tolerant system that doesn't rely on a central database like Redis, because if Redis goes down the entire application goes down, and in a netsplit the unfortunate users on the other side of Redis are out of luck. We can solve this with a CRDT plus a heartbeat and gossip protocol, and that's what we're doing for Phoenix Presence. I'm not going to go into CRDTs too deeply; Alexander Songe gave a great talk about them last year at ElixirConf, and he's the one writing our CRDT. If you're familiar with CRDTs, we've been implementing an ORSWOT, an observe-remove set without tombstones; look it up, because I'm not going to try to explain it when I can hardly comprehend it myself. The key thing CRDTs give us is replication without remote synchronization. We don't need a distributed consensus algorithm or a global lock; we can just replicate data across the cluster. If data arrives out of order, or arrives multiple times, it doesn't matter, because within the CRDT problem space conflicts are mathematically impossible. Conflicts either commute or cannot occur, so as long as all nodes eventually receive all messages, we'll have
strong eventual consistency. Then we have a simple heartbeat protocol to actually replicate state. We're looking into gossip protocols; if you're familiar with SWIM, it implements an infection-style gossip protocol, but we're starting simple, because heartbeats are easy to implement and reason about. The heartbeats in this case piggyback our CRDT deltas: any time our presence information has a join or a leave, we piggyback it on our heartbeat to the other nodes. Heartbeats also detect node-downs: if a node misses some preconfigured number of heartbeat windows, we declare it down, clean up its state locally, and broadcast that to our connected users. So it's pretty simple to reason about how the heartbeats work, though once you get to hundreds of nodes it starts scaling poorly; the first person who actually hits the limit of our heartbeat setup can sponsor Phoenix to implement a gossip protocol, because at that point you're doing pretty well.

We can also do some other neat things. If you're familiar with vector clocks, or version vectors in this case: the vector clock here is really just an integer that tracks a node's state changes. Any time a node's internal state changes, it bumps that integer, and every heartbeat sent across the cluster includes the vector clocks of all the nodes we're aware of. Imagine node one has recovered from a netsplit and sees that node three's vector clock went from 1 to 2; that means node three has updates we haven't seen. We also see that node four's vector clock went from 2 to 3, so node four has updates for us too. At this point we could ask both nodes to catch us up with the deltas we missed, but if we have dozens of nodes, that means sending dozens of messages. There's an easy way to optimize this: node four said that it saw node three at vector clock
2, so we are guaranteed that asking node four alone will catch us up, because the deltas node four holds contain node three's updates. We can optimize netsplit recovery and new nodes joining just by collapsing vector clocks and asking the minimal number of nodes for all the data we need.

Those are the internal implementation details, and at the end of the day, as an end user, you don't care, right? A lot of people deploy Redis just because it's easy. What this really accomplishes is that we do this cutting-edge stuff underneath while the user-facing API stays trivial; it makes your life easier by removing the overhead of deploying Redis. This is all you have to write on the server: after a client joins a channel, you call Presence.track, which registers the presence and starts replicating that data across the cluster, and to list presences you call Presence.list, which keeps track of unique presences even for the same user. On the client, we included a Presence object in our JavaScript library that acts almost like a CRDT on the client. When I get an initial list of users, I call Presence.syncState, which also handles the case where I reconnect and resolves conflicts, and I call Presence.syncDiff when I get the special presence_diff event; any time data is replicated across the cluster, I get a diff of maybe hundreds of users who have joined and left, and I handle it in one place. Then to list users I call Presence.list, passing an optional callback to detect whether a user is online for the first time, online from a new device, logged off entirely, or logged off from just one device: you get metadata about the devices users are connected from. I'm going to show a quick demo of this.
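The server side of this boils down to a few lines in a channel. A sketch in the shape the Phoenix.Presence API eventually shipped; MyApp.Presence is assumed to be a module defined with `use Phoenix.Presence`, and the topic, assign, and metadata keys are illustrative:

```elixir
defmodule MyApp.RoomChannel do
  use Phoenix.Channel
  alias MyApp.Presence  # a module that does `use Phoenix.Presence`

  def join("room:lobby", _params, socket) do
    # Defer presence work until after the join reply is sent.
    send(self(), :after_join)
    {:ok, socket}
  end

  def handle_info(:after_join, socket) do
    # Register this channel process as one presence for the user
    # and start replicating it across the cluster.
    {:ok, _ref} =
      Presence.track(socket, socket.assigns.user_id, %{
        online_at: System.system_time(:second)
      })

    # Push the current state: unique users, with one metadata entry
    # per device/tab they are connected from.
    push(socket, "presence_state", Presence.list(socket))
    {:noreply, socket}
  end
end
```

On the client, the phoenix JavaScript library's Presence.syncState and Presence.syncDiff consume the presence_state and presence_diff events this produces, so join/leave bookkeeping happens in one place.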
Just to show you that we've solved these tricky problems: I have two users in two tabs here. You probably can't read it, but port 4001 is node one and port 4002 is node two, so we have two nodes running, clustered together, and I can pass a name and URL as a user. Pretty simple; it shows the users. Now I'll open a third tab and add a new user to node two, named Evan, and we can see it replicated to node one over here almost immediately. Now I'll go back to the two original tabs and close Evan's tab; we should see Evan immediately disappear from the right-hand side, and within a second he should disappear from the left as the change is replicated. Gone, and gone from the left-hand side. So we're handling the basic case of replication, users coming and going.

But what about that tricky case no one seems to solve? Let's have a node-down event: I'm going to kill node two. Node one, running up top, stops receiving heartbeats and detects a node-down from node two, and if we go back to the browser we can see the node two user is gone. So that case that almost no one handles, or gets right, is just taken care of for you. The recovery case is pretty fun to watch as well. I'll restart node two, and we can see node one up top detect a node-up. If we wait a second, the coolest part is that they exchange information: nodes one and two each send the other a transfer request saying, hey, I haven't seen you before, you're a new incarnation of this node, give me any data I've missed from you. That's how node two got updated information, by asking node one to catch it up. And if we go back to our browser tabs, even the clients have built-in recovery: our lists of users are back and in sync. The client does exponential-backoff recovery while the server handles the netsplit and nodes coming up and going down.
So that's presence. At the user level it's very simple, but there were tricky problems to solve, and I'm super excited about where this goes in the future. I think people can implement service discovery on top of presence; it's not just applicable to chat lists. There are some really neat things we can build on top of this, and it's implemented in a way that lets you do interesting things outside of a web browser too.

The whole point of this, and what I've learned throughout this process, especially optimizing PubSub, is that good platforms drive you towards optimal solutions. When we started PubSub we wrote a best-effort solution, and with minimal tweaks, by actually simplifying the code, we went from best-effort to world-class, all because we trusted that following OTP principles was the secret sauce. That's the Kool-Aid we drank going in, and at the end of the day it turned out to be true. I think it's a testament to the Erlang ecosystem that we had this platform, followed the tried-and-true tools in place, and arrived at a world-class solution. It also lived up to the hype in another way: fast code does not have to mean dense code. I didn't have to take that initial happy-path PubSub system and make it convoluted to make it fast. And on the flip side, productive code doesn't have to be slow code; there's no trade-off where being productive means my code must be slower. I can use OTP principles and the Erlang VM,
and I get the best of both worlds. Ultimately, what good platforms do is let us focus on what matters: our application. At the end of the day you don't care whether we're using CRDTs or gossip or heartbeats internally; what you care about is building your application, and if you don't have to worry about deploying extra infrastructure, or about netsplits and how to recover and handle conflicts, because that's taken care of for you, you can focus on providing value to your customers or your business. What Phoenix is all about, and what Elixir and Erlang are all about, is something I stole from the Elm website: writing great code should be easy. Now it is. That's really what these platforms enable: they let people come in and write world-class solutions without it being incredibly hard or convoluted. I think we've solved that on the server side, building on top of the great innovation Erlang has given us, and I'm really excited to see what Elm does on the client side. For that, I'll hand it off to Evan.

So I designed this programming language called Elm. How many of you are familiar with it, have heard of this thing before? How many people have used it? Okay. How many people have built something pretty decent-sized with it? Okay. So this project really got started because I was frustrated doing JavaScript and HTML work. Essentially, I wanted to put a logo in the middle of a box, and I couldn't, and I just thought, how can this be? At the time I was at a company where you'd expect things to be nice, working with technologies that were 20 years old, and all I wanted was for the logo to be not just horizontally centered but also vertically centered. The answer was: there are six solutions, they have different trade-offs, you need to choose carefully. I just thought: how is
this reality? And another thing that came up: eventually I decided that I didn't actually want it vertically centered, I was like, eh, that's not a good design, and did something else just to avoid the problem. And I had this sidebar that I then wanted to reuse on all the pages, and so I was like, oh, how do you reuse the visual components? And the answer is like, oh, you don't, you don't do that, JavaScript's not for that. So now you need to use a templating language on your server that's going to generate this HTML sidebar, and I just was like, how can it be like 2010, 2011, and the answer to how do you reuse this code is, oh, we didn't think about that yet? So that was really how I got started with Elm. I'd come from a background of typed functional programming and felt we could take the lessons from that and apply them to this kind of problem that was very frustrating to me. So over the past couple of years I've been developing this language. These days I'm at a company called NoRedInk, which is the biggest Elm user in the world; not massive, but we have about five people who are full-time Elm programmers writing their whole front end in Elm, and there are a bunch of companies sort of in this range of like three to five engineers using Elm for stuff. And also NoRedInk is hiring. So what's been interesting recently is I've been starting to meet Erlang people just in the course of running Elm meetups and doing Elm sort of stuff. First of all, that's not someone I expect to meet when I'm doing more JavaScript-focused stuff, but people show up at the meetups and are like, oh, I really get this. And we've seen this at NoRedInk as well, so Erlang and Elixir folks are getting excited about Elm, and Elm people are getting excited about Erlang and Elixir. Our back-end stuff at NoRedInk is in Ruby, and they're like, hmm, what's next, how do we improve here? So
you're starting to see these two communities come together, and so for the past couple of months I've been focusing on how we can make these work really nicely together. I was like, I'm gonna demo how Phoenix channels and Elm are gonna be amazing together. I didn't finish in time, I didn't finish in time. So in like a month it's gonna be amazing; right now it's just really good. So what I'd like to do is show the fundamentals of Elm. I'd like you to come out of this feeling like, oh, I think I know Elm, I think things are fine. So yeah, I want to just start out showing this core question I started out with, which was: I want to be able to reuse parts of my UI. So, you know, we have these HTML bindings. One second, can people read the code? Okay, okay. So hey, we made our first Elm program. The idea is that we treat HTML as a value, just like numbers or strings, and once we have that as part of the language you can start to build up quite cool stuff. So if we want to get fancier, we can have a div with no attributes and then multiple things, like maybe a "how are you". So yeah, you can see now, in typical HTML fashion, you put two things in and it's like, oh, it's going to lay out silly, but you can start to build these things up in a pretty pleasant way. And what's neat is these are just Elm values: this is a list, and div is a function that takes two arguments. So if you want to get a little bit fancier, you can say I want an image where the source is this and it's got no stuff in it. Let's see. Oh, fake-out, okay. So we can build up fancier things, right? At this point we basically have a user profile; it's just CSS the rest of the way. So let's make this reusable, and now our main can be a div whose entries are Yogi, Steve, Alice, and their photos aren't correct in every case, but the idea is that
you can start to make these small reusable parts and build up full UIs. So this is nice, but it's not very interactive, right? This is probably not going to make your company a lot of money, to have just a static page that no one can interact with. So the crucial part of Elm is figuring out how we can start to interact with this in a way where we still keep the underlying principles: I want to have a language that's typed, I want to have a language where everything is immutable, I want to have a language where all effects are managed so you can't be doing arbitrary stuff. So Elm has this thing called StartApp which hides some of these details. What we're going to do now is go from just having some visual stuff to having a counter where we can increment and decrement a number. Okay, not the most exciting application, but these are the fundamentals; once you can do this, you kind of can do whatever you need to do. Okay, so the core way you set up an Elm program is: first you start by modeling your problem, then you say how to view that model, and then you say how to update it. Actually, let's do it in a different order: you say how to update it and then we'll say how to view it. So we want to make a counter, so to model the complexity and all the details, the essence of our counter application, we just need a number. So our model is going to be an integer, and we'll say initialModel equals zero, so nothing crazy. And then, when you start to update things, you create a type that we can call Action, and it can either be an Increment action or a Decrement action. These are the only two things that can happen in our application; we're going to represent that explicitly, and that actually gives us some cool stuff that we'll see in a little while. And so our update function is pretty easy: I take an action and a model, and in the case that it's an Increment I want my model to be higher, and in the case that it's a Decrement I
want my model to be lower. Okay, so very straightforward. And finally our view: we take our model and an address; just pretend that you understand that part. Part of the next release of Elm is to get rid of some of these details that are truly non-essential, and this is one of them. So let's say we have a div, and I have a button, and on click I'm going to say Increment, and it's going to have a "+"; for Decrement it's gonna say "-"; and then we'll have a div in here that says text (toString model). Okay, so we're just saying, whatever is going on, this is what our application should look like. And finally let's hook it up. Our main is now going to use StartApp, so, oops, model equals initialModel, view equals view, update equals update. This is kind of the boilerplate that will go in any program just to start things up, which is why it's called StartApp: you're starting your app. I try to go with a literal-names policy, so if you want to use HTML in Elm, it's elm-html; if you want to get a package for Elm, it's elm-package. Okay, so let's see if this works. I don't know. Oh cool, okay. So we can start to see the error messages and stuff; one of the things that I am quite proud of is the quality of the error messages, but we'll come back to that in a bit. Okay, so we have our counter example, so I can increment and I can decrement. Now, what's cool to notice here is that when you set up your view, you just say, this is what it looks like when I show my model, and all the details of messing with the DOM, poking around at it, are taken care of. When you're normally writing JavaScript, you have, here's how we make the initial version, and then things are going to change and you have to manually manage that, so you have a two-phase process to do something. And I think there's an analogy here to managing failure, in that you say, here's how I'm going to start my system, and if it goes down, just
like do it again; don't try to micromanage all those details yourself. So yeah, these are kind of the essential pieces of Elm, and once you have this foundation you can start building on it. So let's say, you know, our customers are really happy with the counter that we've created for them, and they say it's very accurate, it updates very quickly, we like the UI, it's very minimal; I think we could update the style to be more flat and modern, but otherwise very pleased. But we want to be able to know what was the maximum number that anyone ever achieved. And whenever you have a new requirement in your application, you always just start out by saying, what needs to be added to my model? In this case we still need to have the current value, which is an integer, but we want to keep track of the maximum, which is also an integer. So we update our model, and then we just follow that through our application. So we can say current equals zero, maximum equals zero. Our update gets a little bit trickier, because now our model is a record, so we can say, for Increment I want a new record where current is updated to model.current plus one, and for Decrement I want a new record where current is model.current minus one, and let's keep the rest the same. So this is wrong, right, but we can see everything going on, and we just need to add, okay, so in addition, when we increment, we need to check to see if we should update the maximum, so we can say maximum equals the max of model.maximum and this new value. We could make a variable so we're not repeating ourselves, but I won't get into that at the moment. Dun dun dun. Oh, it's ambiguous, man. Okay, okay. So now as we move around we keep track of the maximum. So this is what it looks like to add a feature: what does it mean to be this application, what is the essential data, and how do I pipe that through? Another thing we might want to add is the ability to reset our counter
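The counter with maximum tracking walked through above might look roughly like this, written in the Elm 0.16-era StartApp.Simple style the talk uses. This is a sketch reconstructed from the description, not the speaker's exact code; the names Model, Action, initialModel, update, and view follow the demo, while the button labels and layout are assumptions.

```elm
module Main where

import Html exposing (Html, button, div, text)
import Html.Events exposing (onClick)
import StartApp.Simple exposing (start)

-- the essence of the app: the current value plus the maximum ever reached
type alias Model =
  { current : Int, maximum : Int }

initialModel : Model
initialModel =
  { current = 0, maximum = 0 }

-- the only things that can happen, modeled explicitly
type Action = Increment | Decrement

update : Action -> Model -> Model
update action model =
  case action of
    Increment ->
      let new = model.current + 1
      in { model | current = new, maximum = max model.maximum new }

    Decrement ->
      { model | current = model.current - 1 }

-- the view just describes what the model looks like; the address is the
-- non-essential plumbing the talk says a later release will remove
view : Signal.Address Action -> Model -> Html
view address model =
  div []
    [ button [ onClick address Decrement ] [ text "-" ]
    , button [ onClick address Increment ] [ text "+" ]
    , div [] [ text (toString model.current) ]
    , div [] [ text ("max: " ++ toString model.maximum) ]
    ]

main : Signal Html
main =
  start { model = initialModel, view = view, update = update }
```

Adding the reset feature described next is one more constructor on Action (say, Reset) with a case in update that returns initialModel, plus a third button in the view.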
so in that case we're explicitly modeling the things that can happen, and we have a new scenario, which is Reset, and in that case we'll just say initialModel, and we'll add a button for that. Okay, so there you go. So yeah, at this point you know how Elm works. This is it, this is what you do. And we didn't show CSS, but it works the same; you can put a style here and do some stuff with it. And we actually have an architecture tutorial, one route we could go, though we won't go this way, but essentially it takes these basic building blocks and shows you how to nest them. So in the end, in the same way that Erlang says, here's how you architect this so that your life doesn't suck, Elm has the same kind of thing. We call it the Elm Architecture, and we're starting to see this in a bunch of JavaScript libraries; React is sort of moving in this direction. So yeah. One other thing I wanted to show was our error messages. So, can we see this at all? Is this a little better? Okay. One of the big complaints about typed functional languages is that the error messages suck. So you'll be making something, and on the one hand people say, if your program compiles, it will work, but getting to that point is often frustrating enough that people are like, whatever, I'm going back to JavaScript; it will at least work, you know? So culturally, the thing that people normally say is, well, you get used to these error messages at some point, and I didn't want that to be the answer. So I spent a good two months focusing on how we can make these really, really excellent. As part of that we made an error message catalog, which is: whenever someone in the community finds an error message they're unsatisfied with, they report it, and we try to coherently get better results. So right now we're looking at a repo of programs
that don't work, and the point is, if we can have them all in one place, we can make changes to the compiler and see that they're all improving, that nothing is getting worse as we try to make improvements here. So this program I'm showing is going to take a boolean and turn it into the other one, and we gave it a number. Okay, so one of the first things to notice is that, oops, the way we display the code is literally what the person wrote in their source file. So if we want to be maniacs... oh no, it trims, it trims, yeah, hold on, now we'll see it. No. Wow, okay, that's not what I expected. It still demonstrates, though, that you're seeing it actually how it was; when it's a sub-region it'll cut to the particular expression. I'm surprised by this. The point is, though, we give a very particular thing: the argument to function "not" is causing a mismatch, it's expecting the argument to be a Bool, but it's a number. That's just literally the problem, explained for humans to read, in a way where you can look at this and it's literally what's in your code, so it's optimized for a really pleasant debugging experience. So let's find a trickier one. Okay, this one's kind of nice. In this case we have an if: if things are true we're going to give a string, else we're going to give a float, and so again we have "the branches produce different types of values": the then branch is a string, the else branch is a float. We also have a hint system, because a lot of times you'll get an error message and it'll just be like, hey, these things don't match, and you're like, okay, but I don't know this language, so what should they be? So here we say, yeah, they have to match, so whatever branch we take, we're going to end up with the same type and things can proceed. Basically, whenever you do weird stuff, the compiler is going to be there to help you out. So I think this one will be fun. In this case we have
Alice and Bob, and they both have facts about each other, like, you wrote them and you're like, whatever, and you want to see if Alice and Bob are equal to each other, so I'll make the types of the records mismatch. Okay: Bob is causing a mismatch. It's expecting something where the position field, and the y field of that, is a float, but in fact the position's y is a string. And when we have these type mismatches, we actually show the whole thing, trim out any details you don't need, and highlight the actual differences. So again, just by virtue of thinking about how people's minds work, we can get you really quickly to knowing what's going wrong here. So let's say we fix this, but we spell it "positon". Yeah, okay, this is, I think, my favorite error message. If you just saw this alone it would be kind of confusing; it's like, I'm expecting the argument to be "position", but the argument is in fact "positon", and the way brains work, you're just going to read those the same. But we actually find typos, and so we'll say, oh, did you mean one of these? So we can really easily suggest, here's how to fix your code. And I think the end goal here, the goal with Elm, is: how much can tooling really recover from mistakes that in other languages would just kind of be a disaster? Like, when you see people getting started with languages that don't have this kind of stuff, this might be the deal breaker. They're just looking at their program that seems fine, there's nothing obviously wrong here, and if your error message is just, no, you can't do that, and this is your first day with Elm, you might be like, well, with JavaScript I can do that, so this language doesn't work. So yeah, this focus on error messages I think has been really important to helping Elm get traction in the front-end world, and also to breaking down this barrier, this distrust
of types. So we often don't talk about typed versus untyped; we talk about things like, hey, what if your error messages, instead of "undefined is not a function" or some sort of undefined craziness eventually in JavaScript, actually say, hey, here's how you fix your code? That's a more powerful message. Additionally, instead of saying types guarantee safety, we can say things like, if you use Elm, you don't get runtime errors, and if you're a JavaScript programmer you're like, are you a liar? That can't be reality. And then, you know, the response is, I mean, you try it out and see. So you build this sort of, not mystery, but this sort of challenge for people, and naturally programmers want to prove you wrong when you make bold claims, and then they try it, and then they can't, and they're like, oh, this is pretty cool. So yeah, I'm running out of time, but hopefully that gave you a basis in how Elm works and what it might look like to start using it to write applications. One thing I didn't mention was how you start integrating; that's a little bit more complex and is the focus of what I'm doing right now. The next release of Elm is really going to be about how WebSockets can be not just nice, but maybe the best client-side WebSockets we've seen. [In response to a question:] Yeah, yeah, that would be a nicer hint; I'll add this to the error message catalog and try to get it in the next release. And yeah, there are lots of trade-offs along these lines. So, well, thanks for taking a look; hopefully this seems somewhat interesting, and just come talk to me afterwards if you have questions. I'm happy to talk about stuff, so thank you
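For reference, the kinds of programs in the error message catalog demos above can be sketched like this in Elm 0.16-era syntax. Each commented-out definition is intentionally ill-typed, and the messages alongside are paraphrases of what the talk shows, not exact compiler output; the names (toggle, alice, bob) are assumptions standing in for the demo programs.

```elm
module Main where

import Html exposing (Html, text)

-- a function that takes a boolean and turns it into the other one
toggle : Bool -> Bool
toggle b = not b

-- broken1 = toggle 1
--   "The argument to function toggle is causing a mismatch":
--   it is expecting the argument to be a Bool, but it is a number

-- broken2 = if True then "hello" else 3.14
--   the branches produce different types of values: the then branch is
--   a String, the else branch is a Float, with the hint that whichever
--   branch is taken must end up with the same type

alice = { name = "Alice", position = { x = 1.0, y = 2.0 } }
bob   = { name = "Bob",   position = { x = "3", y = "4" } }

-- broken3 = alice == bob
--   a record mismatch: alice's position holds Floats where bob's holds
--   Strings, and only the differing fields get highlighted

-- broken4 = alice.positon
--   typo detection: "Did you mean position?"

main : Html
main = text "ok"
```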
Info
Channel: Erlang Solutions
Id: XJ9ckqCMiKk
Length: 55min 28sec (3328 seconds)
Published: Thu Mar 10 2016