Phoenix a Web Framework for the New Web • José Valim • GOTO 2016

Captions
So, to make things a little bit fair, I will talk about Phoenix the web framework, but I'm also going to explore the foundation that Phoenix is built on top of. Just to get a picture: show of hands, who here has heard of Phoenix before? All right. And who has not heard of Phoenix but has at least heard about Elixir? OK, so you know that Elixir is a thing, a programming language. Awesome. So there are three things we need to know, three words, and I'm going to explore all of them: Phoenix is a web framework that is written in the Elixir programming language and runs on top of the Erlang virtual machine.

I will start with the history behind Phoenix, and it all starts with the Erlang virtual machine. For those not familiar with the Erlang virtual machine, and not familiar with Elixir yet: Erlang, the language and the runtime, was created by Ericsson, a telecommunication company. One of the things they were doing at the time, about three decades ago, was building telephone switches, and one of the things a telephone switch needs to do is connect person A to person B. What is really interesting about Erlang is that when they were building the language and the runtime, they had all these requirements in mind that come from telephone switches. For example, you don't want a telephone switch that can only connect one person to another; you want as many people as possible talking to each other, so you need to handle many connections, many conversations at the same time. There are other things too: sometimes you want to call someone, but that person is already talking to someone else on another switch, so the switches need to exchange information, so one can tell the other "you cannot reach this person right now because they are busy."

And the challenges they are solving with Erlang today are even more interesting, because now we have mobility: our phone is no longer installed in our home, it's in our pocket. One of the things they solve inside Ericsson's communication systems today is this: you are talking to someone on the phone while in a car (don't talk and drive, but imagine you are in the passenger seat), you are connected to one switch, to one antenna, and they need to hand you off to another antenna because you are getting closer to it.

So they built this language and this virtual machine to solve that family of use cases, and for a long period of time it was mostly restricted to telecommunications and used at Ericsson, until people started to realize that this case we have with telephone switches written in Erlang, with a bunch of connections, is very similar to the web. Instead of a switch you have a server, the server receives requests from a bunch of different clients, and it needs to talk to internal endpoints. People started to ask themselves: hey, if Erlang was good for telecommunications, it's very likely going to be good for this too. And many companies started doing that: Amazon and Facebook have used Erlang, telecommunication companies like Ericsson and Motorola are still using Erlang today, and we have companies like Heroku.
If you ever deploy an application to Heroku, every request to your application passes through a routing layer inside Heroku that is written in Erlang; and there is Riak, a database written in Erlang.

And there is one use case that got a lot of attention a couple of years ago, which is WhatsApp. One of the reasons they got so much attention is that they were acquired by Facebook for 19 billion dollars. WhatsApp is an application you install on your phone so you can exchange messages with your friends, no matter which device they are using. What is really nice about WhatsApp is that they use Erlang, and they would go to Erlang events and give talks about their infrastructure and how they handle the traffic in their system. Just so you have an idea, today WhatsApp handles more messages per day than the whole global SMS system; that's how big their traffic is. You can go and find talks and blog posts, and there is one, a little bit old at this point, from January of 2012, where they said they had two million connections on a single machine. They had two million devices connected to one node, one machine in production, sending and receiving messages: two million connections on a single machine. And when you look at the machine, it's a good machine, 24 cores and about 100 gigabytes of RAM, but even with all that capacity, with those two million connections they were using only around 40% of the machine's CPU resources. In later talks they said they were able to get to 3 million and 3.5 million connections, just so you have an idea.

So that's a little bit about Erlang, and I like to talk about WhatsApp because WhatsApp was exactly what led to Phoenix being built. Chris McCord is the creator of Phoenix, and he was working on an application that required some real-time components: he needed to send information to clients quickly, broadcast to a bunch of different clients, and collect information back. When he heard about this he said: wouldn't it be amazing if I could use this technology to solve the problem I'm having right now? The tools he was using at the time had really poor performance and were hard to work with; the programming model for thinking about his problem was really hard. So he looked at this and said, I want to use this technology to solve this problem I have right now, and Phoenix got started with something called Phoenix channels.

So what is the idea behind Phoenix channels? A Phoenix channel is an open communication channel between a client and a server, where they are sending and receiving information all the time. Let's get an idea of how Phoenix channels work, starting with some JavaScript code, because if we have communication between a client and a server, we need to write a little bit of code on the client and a little bit of code on the server so this information can come and go, and one of the most common clients today is the browser, so we need to write JavaScript. If you want to integrate with Phoenix channels in the browser, there is some simple code that you write. The first thing we do, on the first line at the top, is create a socket, and then we ask the socket to connect; at the moment we do this, we are connecting the client to the server.
Now we have a bunch of different channels that we can subscribe to. Here we are choosing one channel: if we have a chat application, this channel is going to be the lobby, the place where all the users are, where they can send messages to each other or start a private conversation. So I can say: for the chat lobby channel, every time a new user joins, I want to print on the screen "hey, this new user joined", and every time there is a new message, I want to print that new message. We also want to do the opposite: we listen to some input and say, every time the user presses Enter, push this message to everyone so everyone receives it. And when you have specified your rules, at the very bottom you call channel join. So that's the client.

On the server we do two steps; we can see we have two things here, the socket and the channel. On the server we are going to define the socket, and here is some Elixir code. We usually break our code into modules, so we are saying: hey, I have this user socket, which is going to be a Phoenix socket, and every time someone tries to connect to this chat lobby channel, I want to handle that logic in this chat lobby channel module; and every time someone wants to send a message or enter a room, I want to handle that logic in that other channel module. After we specify the socket, we can implement our channel. It follows the same structure: we define another module, which instead of saying "use Phoenix socket" says "use Phoenix channel", and now we define our rules. For example, we define what happens when a user joins: when a user joins the chat lobby, we want to broadcast, to tell everyone "hey, this user just joined", so we broadcast on the socket that this user joined, along with the username. And every time someone sends a message from the client, we want to handle that and broadcast the message to everyone as well, and that's what the second function below does. So we have two functions: one that handles the join part and another that handles the messages that come in. Don't worry too much about the Elixir code if you are not familiar with it; just get an idea of how we are working with these concerns. And that's it: with those short lines of code on the client and on the server, we have information coming and going between client and server.
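To make that walkthrough concrete, here is a minimal sketch of what the server side described above could look like in Elixir. The module, topic and event names (Demo.UserSocket, Demo.LobbyChannel, "chat:lobby", "new_msg", "user_joined") are assumptions for illustration, not the exact code shown on the slides.

```elixir
defmodule Demo.UserSocket do
  use Phoenix.Socket

  # Route topics to the modules that handle them.
  channel "chat:lobby", Demo.LobbyChannel
  channel "rooms:*", Demo.RoomChannel

  # Transport and authentication configuration also lives here; omitted for brevity.
  def connect(_params, socket), do: {:ok, socket}
  def id(_socket), do: nil
end

defmodule Demo.LobbyChannel do
  use Phoenix.Channel

  # Called when a client joins the lobby: accept, then announce the user.
  # (Broadcasting is deferred until after the join has completed.)
  def join("chat:lobby", %{"username" => username}, socket) do
    send(self(), {:after_join, username})
    {:ok, socket}
  end

  def handle_info({:after_join, username}, socket) do
    broadcast!(socket, "user_joined", %{username: username})
    {:noreply, socket}
  end

  # Called for every message pushed by a client: fan it out to everyone.
  def handle_in("new_msg", %{"body" => body}, socket) do
    broadcast!(socket, "new_msg", %{body: body})
    {:noreply, socket}
  end
end
```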
Instead of exploring the code aspects much further, I want to take the external view of this: first understand how things work from an external perspective, then how they work internally, and why it matters for us developers. Because at this point in the talk, when I show those slides, there is nothing really new, right? You have probably used or heard of solutions in other languages or frameworks that provide similar abstractions. So you may be wondering: what is the big deal with Phoenix, then? Let's answer those questions.

The first thing is this: we have the server on the right, we have a browser, we wrote the client code and the server code, and now we can connect the browser to the server. The first difference with Phoenix channels is that the communication between the client and the server is transport agnostic. Here we are using a browser, but if you are using a browser like an old Internet Explorer that doesn't support WebSockets, you need another way of communicating with the server. You may have a native mobile application with custom needs, where you say "I don't want to use WebSockets here because I have a more efficient protocol for this kind of thing"; you can do that as well. Or you may have embedded devices, where you sometimes have particular needs or particular protocols, like CoAP, that you want the client to use when talking to the server.

One of the nice things about this is that we now have a bunch of different clients that could be running on different platforms or using different ways to talk to the server. And imagine that at the beginning you only have a single server, and then you start to receive more clients. We kind of expect this to be efficient, because we heard about the WhatsApp case, where they were able to handle a huge number of connections. But most of the time you don't want to handle everything on a single server, because if something goes wrong and that server goes down for some unexpected reason, everyone is disconnected. What you can do with Phoenix, and this works transparently, is add another server: those servers start talking to each other and handle everything for you, and you don't need to worry about it. Imagine that a browser disconnects, because it's going under a bridge or the user has a Wi-Fi problem, and when it comes back it connects to the other server: everything keeps working just fine. Every time the browser needs to send a message, it sends it to the server it is connected to, which talks to the other server, which broadcasts it to everyone. Phoenix handles this really well, and it scales both horizontally, meaning that as we add more machines we can keep handling the traffic, and vertically, because as we saw in the WhatsApp case, if we have a powerful machine with 24 or 40 cores, it will keep handling connections just fine as we add more of them.

And the reason for that is that if you look at this picture again, browsers and a bunch of different devices connected to servers, it is exactly the case I was talking about at the beginning of the talk, where you have telephone switches with a bunch of different phones connected. That's one of the things that is very exciting about Phoenix: we are using possibly the only platform, the only virtual machine, that was designed to handle cases exactly like this and that is widely used in production, and Phoenix is leveraging that. So that's the outside view.
We have an idea of how clients connect to the server and how we expect it to handle scalability, both on the horizontal and the vertical axis. But things get even more interesting with the inside view, because it's great that it scales, but if it's a poor abstraction for us developers and you have a hard time writing code for it, it's not worth it, right? We also need to consider that aspect. So let's talk about how things work internally.

We are writing software that runs on the Erlang virtual machine, and when you write Erlang programs, all the code runs inside processes. From now on, for the rest of the talk, every time I say process I do not mean an operating system process; I mean a very cheap, very lightweight thread of execution. To give you an idea, in the WhatsApp case they had two million connections, so they had at least two million processes. They are very cheap, you can create a bunch of them, and Phoenix does exactly that. When the client, here on the left, connects to the server, the server starts a process that handles the communication between client and server, and that process is responsible for the transport-agnostic part. That's what happens when you call socket connect. Then, every time you join a channel, you create different processes: the chat lobby is going to be one of those processes, and each room that a particular user joins is going to be another process.

The reason we do that is that processes give us isolation and concurrency. This is nice because, for example, imagine there is a bug in your code and something goes wrong in the chat lobby room: you don't want the other functionality of your application to stop working; you want that issue to be contained to the particular channel where it happened. When we are talking about embedded devices, for example, sometimes establishing a connection is expensive, so you don't want a bug in a channel to crash the transport, and processes give us the perfect isolation for writing this code. They also give us concurrency: imagine that for some reason you are processing an image inside one of those channel processes, because someone sent an image to a particular room; that work is not going to block the other channels, the other rooms and the lobby keep working just fine, messages keep coming and going. So it's really nice: when we are writing code for a particular channel we really get isolation and concurrency, and it takes a lot of concerns away from developers; we just need to think about a particular channel, not about how it interacts with the whole system.

So, going back to the code, we have this idea: we have the transport, and when you connect to the transport you start a bunch of channels, and each channel runs in a separate process that is isolated and concurrent. We also have a third entity here, which is the pub/sub system, and that is what makes sure that every time we send a message to one particular machine, it gets to all the other machines; the pub/sub takes care of that. By default it uses distributed Erlang, which is why it's plug and play: as you add Phoenix machines to your cluster, they start talking to each other, because Erlang has this idea of distribution built in.
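As a rough illustration of that third entity, here is a sketch of broadcasting through the endpoint from anywhere in the application, for example from an IEx session or a background job; the module names (Demo.Endpoint, Demo.PubSub) and the topic are assumptions following Phoenix conventions, not code from the talk.

```elixir
# Any process on any node in the cluster can publish to a topic; every
# channel process subscribed to "chat:lobby", on every machine, receives
# the event and pushes it down to its client.
Demo.Endpoint.broadcast("chat:lobby", "new_msg", %{body: "server restarting in 5 minutes"})

# Channels subscribe implicitly when a client joins, but a plain process
# can also listen on a topic via the underlying pub/sub server:
Phoenix.PubSub.subscribe(Demo.PubSub, "chat:lobby")
```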
But if you cannot run distributed Erlang, or you have other constraints, you can bring your own pub/sub adapter.

All right, that was a short introduction to Phoenix and the history behind it; you can learn more about Phoenix on the website. In a way I could stop the talk here and you would have a good idea of how Phoenix came to be, but it would not be representative of what Phoenix is today. Phoenix grew well beyond the channel aspect, and that is because we, the team defining it, realized we were in a very unique position. In our experience up to that point, most of the tooling for the web made you choose. Either you can be productive: they give you a framework with a bunch of tools that take a bunch of concerns off your mind, but it's not going to be fast. Or you can use an option that is very performant, as long as you write things using weird callbacks, in one particular style of programming that does not feel comfortable. We realized that with Phoenix we don't need to make you choose: you can have both. You can have a framework that is performant and scalable, and that you are productive with, both in the short term and in the long term. That's what I want to explore for the bulk of the talk: first a little bit about performance, and then productivity, both short term and long term.

I start with performance because we were just talking about scalability, so let's get this out of the way. The first thing you should be curious about is channel performance. You may be wondering: well, you told me WhatsApp built their thing and had two million connections on a single machine, but does that also apply to Phoenix? We were curious about the answer too, so we decided to benchmark, and here are some graphs we got. On the horizontal axis we have time: we set up a cluster of machines, I think around 40 of them, opening new connections to a single server, so we were getting between 10,000 and 20,000 new connections per second. On the vertical axis we have the accumulated number of clients, and you can see that as we push load, as all those client machines open connections to the server, we keep up the pace right up to two million connections. When we got to two million clients we continued pushing load, but the line flattens out at two million; the reason is that when we configured the machine, we configured it to accept at most two million connections. If we had configured it for five million, it would have kept going, which is really nice. So we were able to get two million connections on a single machine and reproduce that case. Here is what the machine looked like: a good machine, 40 cores, I don't remember if it was 96 or 120 gigabytes of RAM, and you can see that with two million clients connected and nothing happening, no information being sent, the cores are just waiting for something to happen.
Then what we did was take Wikipedia articles and broadcast them to those two million clients, and we could see a spike in I/O and the Wikipedia article being delivered to all two million clients in three to five seconds, which is really amazing. Sweet, so we have good performance, and later in the talk, when I talk about productivity, I'm going to show exactly how we were able to get to those numbers; it wasn't magic, we had to do a little bit of work, and I'll tell that story soon.

But performance does not only apply to channels, because yes, the real-time web, being able to exchange messages between client and server, is an important aspect, but the, let's say, traditional dynamic web we have today, where we have only request and response, is still a huge part of our traffic. So you may be wondering: if I'm doing regular HTTP requests, is that going to be performant too? And the answer is yes. But wait, let me step back, because otherwise people get angry at me: I'm going to show some benchmarks between different languages and technologies, and I have to start with the disclaimer: don't blindly trust benchmarks. I'm going to post the links showing exactly how these things were measured and the repositories being measured, but ideally you should build a prototype and benchmark it yourself, to make sure the results apply to your case.

With that said, let's see some numbers. You can see here different languages and different technologies. Phoenix is built on top of something called Plug, which is a very small abstraction around a web server, and you can see that Plug is the fastest: on this machine, which I think had ten cores (there is more information at the URL), it was able to handle 200,000 requests per second. Then Phoenix comes in second, really close to a solution in Go, with around 180,000 requests per second. What is really nice about this is that we go from a very small abstraction around a web server to a full web framework while losing only about ten percent of the throughput, which means that as you add your own logic you are not going to hurt performance much. At the top of the table we have solutions in Elixir, Go and Scala, and at the bottom of the table we have solutions coming from Go, Ruby and Node. It's also important to highlight that, from this table, the ones that call themselves actual web frameworks are Phoenix, Play in Scala, and Rails. The reason I point that out is that they do more work for you out of the box, they worry about security and that kind of concern, whereas the other tools are more of a "build your own toolchain" kind of thing where you have to set up a lot yourself.
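Since Plug comes up here as the thin layer Phoenix builds on, a minimal, generic sketch of a module plug may help; this is not code from the talk, and the module name is made up.

```elixir
defmodule HelloPlug do
  # A plug is just a module with init/1 and call/2 that transforms a
  # Plug.Conn; a web server such as Cowboy drives it for every request.
  import Plug.Conn

  def init(opts), do: opts

  def call(conn, _opts) do
    conn
    |> put_resp_content_type("text/plain")
    |> send_resp(200, "Hello from Plug")
  end
end

# Running it directly on top of Cowboy (Plug-1.x-era adapter API;
# newer releases use the separate Plug.Cowboy package):
#
#     Plug.Adapters.Cowboy.http(HelloPlug, [], port: 4000)
```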
Before we finish the performance section: when we talked about channels we had that inside view, so we could understand how things work internally and how that affects the developer. Now let's do the same for regular HTTP requests, just to get an idea of how things work inside the virtual machine when we are using Phoenix and receiving a bunch of requests. So let's take the inside view again. On the left side we have the client, on the right side we have the server, and every time there is a new request, we start a new process, that very cheap, very lightweight thread of execution. As we get a bunch of different requests, we just start new processes to handle them, and as before, those processes are isolated and concurrent. Let's see what that means in the context of regular web requests.

The first thing, as we saw, is that crashes are isolated. We want that: if something goes wrong in one request, we don't want it to affect all the other requests; it should be contained to that request. But it also means that data is isolated, and this is really good because it means we don't have a stop-the-world garbage collector. Depending on the platform you use to deploy applications today, you can have a very good response time on average; you look at the average and say, hey, I'm responding in 50 milliseconds, in 100 milliseconds. But when you go to the end of the curve, to the 95th or 99th percentile, you start to see requests taking one and a half seconds, two seconds, and those are the unfortunate requests that triggered the garbage collector, which stops everything to clean up. With Elixir the data is isolated, processes don't share data with each other, so there is no stop-the-world garbage collection, which lets us reason about latency; we say we have predictable latency, because it's easy to predict how the application is going to behave. It also means that for fast requests we may not even waste cycles doing garbage collection: a request comes in, you do everything you need, you render the response, you send it to the client, and the request is done. You generated some garbage, but there's no need to run a collector to reclaim it; when the process exits, its memory is simply handed back, no mark and sweep, no generations, nothing of the sort. So that's really nice.

Processes are also concurrent, which means the virtual machine load-balances both I/O and CPU. You don't need to write callbacks or anything of that sort: if one request needs to talk to another API, you just talk to that API; you can send a request to an external service and it's not going to block anything else. You don't need to worry about it.
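As a small illustration of that last point, here is a hedged sketch of doing work concurrently inside a single request, still without callbacks; the module and helper names (Demo.Profile, fetch_profile/1, fetch_recommendations/1) are invented for the example.

```elixir
defmodule Demo.Profile do
  # Invented helpers standing in for slow external service calls.
  defp fetch_profile(id), do: {:profile, id}
  defp fetch_recommendations(id), do: {:recs, id}

  # Each request already runs in its own lightweight process, so a blocking
  # call only blocks that one request. To run two external calls at the same
  # time within a request, spawn tasks and await them; no callbacks needed.
  def load(id) do
    profile_task = Task.async(fn -> fetch_profile(id) end)
    recs_task    = Task.async(fn -> fetch_recommendations(id) end)

    {Task.await(profile_task), Task.await(recs_task)}
  end
end
```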
Cool, that was performance, so now let's talk about productivity. As I mentioned, I like to break productivity into two parts. The first is short-term productivity: you are really excited about Phoenix and want to try it out, so how productive are you going to be? It's a new framework for those not familiar with Phoenix yet, but it's also a new language and a new runtime, a new virtual machine, so you have to learn some of those things; how quickly can you get something up and running, so you stay motivated and keep learning? But there is also long-term productivity, because a lot of frameworks out there only worry about short-term productivity: it's really fast to get started, but as you add complexity and run the thing in production, it starts to really slow you down.

So, short-term productivity. How can we help there? One of the things is very good documentation and very good guides. In Elixir we have a saying that we like documentation to be first-class. I have reserved some time today to do some live coding, so maybe we can explore that, but documentation is first-class because it should be easy to write and easy to read: you should be able to access documentation in your terminal, in your browser, in your editor, and the tooling really allows you to do that. We have guides on the Phoenix site that help you get started; there is also a lot regarding workflow and generators, which I'll talk about soon. And there is long-term productivity, and what I mean there is: if I'm running Phoenix in production, we need to talk about introspection, about how I can see what the code I put in production is actually doing and understand it, and about maintaining the code base, because we'll be adding and improving features that are already deployed; how do we do that?

So, short-term productivity. If you go to the Phoenix web page I showed earlier, you'll see a whole section at the top with guides and documentation, and we have a guide that takes you from installing everything you need all the way to getting started with Phoenix. We also have a book that came out; I'm one of the authors, alongside Chris McCord, the creator of Phoenix, and Bruce Tate. We actually have the book in the booth, in the place outside where we have lunch and talk to each other, where they are selling books; Programming Phoenix is there, so if you would like to grab a copy, or if you'd like to talk about it, I'll be around as well. Books help a lot with the getting-started experience, the short-term productivity.

I mentioned workflows and generators. One thing that happens is that you start building an application and say: what I actually need is to build a couple of forms, a couple of resources where I get information from the database, show it to the user and allow the user to change it. If those are the cases you care about, you can run the first command, mix phoenix.gen.html. Mix is the build tool we have in Elixir, and everything you do, you do with mix: you create a new Phoenix application with mix, you compile your code and test your code with mix, and you can also use it to generate code, to get those workflows that show you how to build things. So if you are interested in the HTML side of things, you run that first command; if you are interested in building an API that talks JSON with different clients, you run the second command, mix phoenix.gen.json, and we generate some code and show you a bit about how it works and how to wire everything together. And, progressing to the channel aspect I was talking about earlier, you can run another command, mix phoenix.gen.channel, for creating channels. So that's short-term productivity, and again, all of these things you have seen elsewhere; other frameworks and toolings have features like this, letting you generate some code, get started and be productive early on, and all of those things you have seen in other frameworks you are going to find in Phoenix.
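To give a flavor of what "first-class documentation" means in practice, here is a generic Elixir sketch, not from the talk: docs are written as module and function attributes, the examples inside them can run as doctests, and the same docs are what IEx's h helper shows in the terminal.

```elixir
defmodule Greeter do
  @moduledoc "Small example of first-class documentation."

  @doc """
  Greets the given name.

      iex> Greeter.hello("GOTO")
      "Hello, GOTO!"
  """
  def hello(name), do: "Hello, #{name}!"
end

# In a test module, the example above becomes an executable test:
#     doctest Greeter
#
# And in IEx, `h Greeter.hello` prints the same documentation.
```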
I was talking about HTML, and one of the things you need to do is get data from the database, show it to the user, let the user change it and send it back, so we have form builders that take care of that. If you are building HTML applications in particular, there is no way to run from this: you need to write some JavaScript and some CSS, or something that compiles to JavaScript or CSS, and Phoenix helps you take care of that too. But we also have some really nice features of our own. For example, one of the features that comes with Phoenix is live reloading: if you are working on an application and you change a CSS file or a template, as soon as you save it we use the Phoenix channels we were just talking about to automatically reload the page, and that gives you a very productive workflow. Sometimes you are working on something particular, where you have a form, you need to click in three places, a modal opens up and you need to customize that modal; it's really sweet, because you change the CSS file and see the change reflected on the page, and without that you'd need to reload the page, go through the whole flow on the form again, and find out what is wrong. So live reloading is a really convenient feature. And if something goes wrong, we have really nice debug pages that show you where the error is happening so you can act on it.

Not only that, I love this feature: first-class concurrent test tools. When you write an application, we hope you are writing tests, and those tests sometimes need to talk to the database, so we have a mechanism that lets you write tests that talk to the database and run concurrently. A lot of these ideas, like first-class documentation, first-class concurrency, first-class testing, come from the Erlang virtual machine and from Elixir. I like to say that it's 2016: everything you do on your machines today should be using all the cores. Last month Apple announced the Apple Watch Series 2, so even your wristwatch can have two cores; everything we do on a machine should be using all the resources available, and in Elixir we do that, including in your tests. That matters a lot. We also have a growing community, and we have something called Hex, which is a package manager, so if someone has already solved the problem you are working on, you can use their package and integrate it into your application. So that's short-term productivity: a lot of the things you have seen elsewhere are just there in Phoenix, plus a couple more.
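As an illustration of the concurrent database tests mentioned above, here is a rough sketch of the pattern Phoenix projects of that era used with Ecto's SQL sandbox; the app and module names (Demo.Repo, Demo.User) are placeholders, not code from the talk.

```elixir
defmodule Demo.UserTest do
  # async: true lets this module run in parallel with other test modules.
  use ExUnit.Case, async: true

  setup do
    # Each test checks out its own database connection, wrapped in a
    # transaction that is rolled back afterwards, so concurrent tests
    # cannot see each other's data.
    :ok = Ecto.Adapters.SQL.Sandbox.checkout(Demo.Repo)
  end

  test "inserts and reads back a user" do
    user = Demo.Repo.insert!(%Demo.User{name: "josé"})
    assert Demo.Repo.get!(Demo.User, user.id).name == "josé"
  end
end
```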
Now I want to talk about the other part I mentioned, long-term productivity, and I had separated this part into two sections. The first section was about how Elixir is a functional programming language and how functional programming helps you write more maintainable applications, but I'm going to skip that part for now; we can talk about it at the end of the talk, because I want to focus, and do some live coding, on the other aspect of long-term productivity, which is something we call applications. This is going to be the last section of the talk, and I want to go really deep here, because, as promised, this is the language track and I want to explore how everything works inside the language and inside the virtual machine.

So, long-term productivity: applications. In order to understand what applications are and how we get long-term productivity from them, I want to go back to that inside view we were talking about. We have the client on the left and the server on the right, and we said that every time a client connects to the server we get a new process, and since multiple clients connect at the same time, we end up with a bunch of different processes. You can imagine that above all those processes there is a process at the top that is kind of handling those connections for us. I also mentioned during the talk that we have a process responsible for pub/sub: every time we send a message to one machine, we somehow need to get that message to the rest of the cluster, so the other machines can broadcast it to the clients connected to them; we have a pub/sub process for that too.

Now, once we start thinking in terms of the Erlang VM, we have to think about those processes, which are very cheap, very lightweight, running concurrently and isolated, and we have all these entities in our code: the connection processes, this pub/sub process. And we start asking ourselves: OK, if those processes are independent and isolated, what happens if the pub/sub system goes wrong? If the thing responsible for spreading messages throughout the cluster has something wrong in its code and that particular process crashes? We start to ask what happens in my application when something goes wrong. Our answer to this question is that we define supervisors. Supervisors allow us to say: hey, you watch those processes, you supervise them, and if something goes wrong I want you to act on it. So if something goes wrong with the pub/sub system, the supervisor notices and starts a new pub/sub system in its place.

Why is this idea important, why is it relevant? Sometimes we are using our machines and something starts to misbehave, there's a glitch in the corner of the screen, and you restart the machine and the thing disappears and never comes back; when you restarted the machine you fixed the problem. That is exactly the idea we are applying to this part of the code: we have a supervisor watching the pub/sub system, because if something goes wrong it's fine to let the pub/sub entity crash; the supervisor will start another one in its place. And that's the idea we start to explore: sometimes a process has a parent that is a supervisor, which has another supervisor above it, and so on, so this ends up being a supervision tree. And when we have a supervision tree, we package everything inside an application. That's what an application is: a tree of processes working on different functions, packaged together. So what do applications give us at the end of the day? They give us a mechanism to package and run our code: every Phoenix application is actually an Elixir application, where you package everything; it has all the code, and applications can be started and stopped as a unit. Applications also provide unified configuration.
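Here is a minimal, generic sketch of the supervisor idea in current Elixir syntax, not the exact tree Phoenix builds: a supervisor starts its children and restarts any child that crashes, according to its strategy. The child module name is invented.

```elixir
defmodule Demo.PubSubWorker do
  use GenServer

  def start_link(opts \\ []), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)
  def init(opts), do: {:ok, opts}
end

defmodule Demo.Supervisor do
  use Supervisor

  def start_link(opts \\ []), do: Supervisor.start_link(__MODULE__, opts, name: __MODULE__)

  def init(_opts) do
    children = [
      # If this worker crashes, the supervisor notices the exit and
      # starts a fresh one in its place.
      Demo.PubSubWorker
    ]

    Supervisor.init(children, strategy: :one_for_one)
  end
end
```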
Unified configuration is nice because if you know how one application works, how it is structured, how it is started and how to configure it, you know how to find out how all applications are started and configured. And remember, when I was showing the slide with the supervision tree: the application contains all the processes, everything our code is part of. So let's do this: I could go on and on about what an application is, about processes and state and supervision trees, but I think I can drive those points home much more strongly with an actual demo.

All right. I've created this Phoenix project before, so I'm going to start it, and here I have an interactive Elixir shell with my Phoenix application running. Here I can type any Elixir code I want, but one of the things I'm going to do is start this tool called Observer, which lets us see what is happening inside our system. After I run that command, it starts this beauty here, and Observer gives us all the information we need about that particular runtime: which system version I'm running, how many cores I have on my machine, for how long it has been running, how many processes it is running right now, as well as memory usage and where the memory is allocated, and that kind of thing. We also have load charts, but I just started this application and nobody is actually sending requests to it, so nothing interesting will happen there. And there are a bunch of other panes at the top you could explore; I want to talk about two in particular.

The first one is the Processes tab. I was saying that every time a client connects to the server we start a new process, that very cheap, very lightweight thread of execution, and all the code that we write in Elixir, anything that you do in Phoenix, is always running inside a process. The Processes tab lists all the processes running in this particular system, everything is in here, and I can double-click any of them and try to understand what each individual process is doing. We can do a lot of wonderful things with this. At the top you can see, for example, how much memory each process is consuming, so if you are working on an application and there is a memory leak, memory keeps growing and something wrong is happening, one way to chase it is to open up Observer, order the processes by memory consumption, and watch which process keeps growing; it will probably stay at the top, and that's the one you investigate: you double-click that process and see exactly what it is doing, which part of the code it is running, what its state is, and so on. So, wonderful.
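If you want to reproduce this part of the demo locally, the steps are roughly the ones below; the project is a placeholder, and mix phoenix.server is the Phoenix 1.2-era task (newer versions use mix phx.server).

```elixir
# In a terminal, start the app inside an interactive Elixir shell:
#     iex -S mix phoenix.server
#
# Then, from the IEx prompt, launch the GUI introspection tool that
# ships with Erlang/OTP:
:observer.start()
```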
Remember I told you we were able to reproduce the WhatsApp case with Phoenix and get two million connections on a single machine? The truth is that when we started benchmarking, the first benchmark we ran, with all those client machines pointed at the server, we were able to handle only about 30,000 clients. And we said: we cannot push more load, there is something wrong. You know how we solved that problem? We connected to the remote machine, opened up Observer, came to this Processes tab, and ordered by the message queue column. What that column shows is this: processes talk to each other by sending messages, and if a process cannot act on its messages fast enough, its message queue starts to grow. So we ordered by message queue and we could see there was one process that a lot of other processes were trying to talk to but that could not keep up; it was literally a bottleneck. We said, hey, I know where the problem is, we went there, removed the bottleneck, and we were able to get more clients. Then we benchmarked again, found the next bottleneck exactly the same way, got more clients, and after doing this one or two more times we were able to get to two million connections. Because all the code runs inside those processes, we get a lot of introspection into what is going on. So the Processes tab is really, really nice.

But my favorite is the Applications tab, and we were just talking about applications. Remember, the idea behind an application is that we package our code and all of its processes, and they are contained; and when we run things on the VM we don't have only one application running, we have many applications running side by side, and we want to think about them as components. Here you can see all the applications that are running when we start Phoenix: we have Cowboy, which is the web server; we have things such as Elixir, which is itself an application; Phoenix is another application; and so on. You can see their supervision trees: each of those applications has its own supervision tree, and you can go and explore them if you want to. But the one I want to talk about is the supervision tree that belongs to your application itself; this application is called demo, and here we can see its supervision tree. Everything that is happening in your application, you can navigate through this tree.

One of the things we can see here: if you have a web application, you are very likely talking to a database, and the way most languages talk to a database is with a connection pool. When your application starts, it opens, say, ten connections to the database, and every time you want to run a query or write something, you get a connection from the pool and use it. We can see that in our supervision tree: this is the connection pool to the database, to our repository, and you can see all the connections to the database, each represented by one of those processes. The numbers between the angle brackets are the process identifiers, the numbers that identify each process.
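The message-queue hunting he describes can also be done programmatically from IEx; a rough sketch using only standard Process functions is below.

```elixir
# List every process in the VM together with its message-queue length,
# and show the five busiest ones; a queue that only grows is a bottleneck.
Process.list()
|> Enum.map(fn pid -> {pid, Process.info(pid, :message_queue_len)} end)
|> Enum.filter(fn {_pid, info} -> info != nil end)
|> Enum.map(fn {pid, {:message_queue_len, len}} -> {pid, len} end)
|> Enum.sort_by(fn {_pid, len} -> -len end)
|> Enum.take(5)
```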
Now that we can see those things, we can try to reason about what happens when things go wrong, because that was one of the reasons we added supervisors and started talking about supervision trees in the first place: you have all those processes and you want to understand what happens when something goes wrong in your system, with the pub/sub process, or while talking to the database. So here is what we can do: imagine that in one of those processes talking to the database something goes wrong, the database shuts down the connection, for example, or something else happens. We can double-click the process, say we want to kill it, and send it a kill signal. Because we are talking about supervisors, what we expect is that a supervisor notices that one of the database connections is no longer there and starts another one in its place, and we can see that's exactly what happened: at the top we now have a new process with a new identifier. We can be even more radical and ask: what if there is a bug in the connection pool itself, what if the whole process responsible for handling those connections fails? We can double-click it, shut it down, and see whether the system reacts to that failure, whether it can heal itself. And that's exactly what happens: the pool went away, and because the pool went away all the connections were terminated, which is good, we don't have lingering connections, and then new connections to the database were started.

So, going back to the talk: we have now seen what an application does; it packages our code and has a supervision tree that we can explore. But at the end of the day what matters for us is which guarantees we get from that, and the guarantees are that we have a lot of introspection and monitoring. If our system is running in production, you can plug into it and understand everything it is doing. From Observer you get an idea of all the metrics you can pull out of the virtual machine; if you want, you can take those metrics and push them to an external system where you can follow them, see the rate of process failures and all the other important information, know how much pressure there is on the system and how memory is being allocated. You don't need to rely on Observer; Observer only uses information that is already in the VM, and you can export it to whatever system you want. It gives us visibility of the whole application state, as we saw: we could walk the tree, double-click things and interact with them. It also makes it really easy to break the system into components, and that's something we explore in the book: imagine you are working on your Phoenix project and keep adding more processes to the tree, and at some point you say, well, this thing is doing too much, we need to break it apart. One way to break it apart is to look at the supervision tree and ask: what happens if I take this part of the tree and move it elsewhere? You can take a branch, a subtree, move it elsewhere and split your application apart. And it also gives us a way to reason about things when they go wrong.
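For the metrics-export point, here is a small generic sketch of the kind of data the VM exposes; these are standard Erlang/Elixir introspection calls, not a Phoenix API, and how you ship them to a monitoring system is up to you.

```elixir
# A few of the runtime metrics you could ship to an external monitoring
# system on a timer: memory usage by category, number of processes, and
# the scheduler run-queue length (a rough measure of CPU pressure).
%{
  memory: :erlang.memory(),                 # [total: ..., processes: ..., ets: ...]
  process_count: length(Process.list()),
  run_queue: :erlang.statistics(:run_queue)
}
```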
So, to sum up: this is a talk about Phoenix, and I tried to do a mix and not only talk about Phoenix. Phoenix is a web framework that is productive, reliable and fast, and it comes with channels, where I can have multiple connections between client and server with information coming and going all the time. But not only that: it is on par with whatever you have used so far for doing web, in terms of building APIs and building HTML applications, and in some aspects it is even better, because we can integrate new features with Phoenix channels. So that's one aspect of the talk: how we can use Phoenix both for the new web, this highly connected aspect with channels, and also for, let's say, the traditional web that has been here for a while. I also tried to explore the foundation, the battle-proven technology that Phoenix is built on top of, and how it is really leveraging a foundation that has been there for three decades, built with the Erlang virtual machine and with Elixir on top.

If you want to learn more about Phoenix, visit the page and click that huge button; you can also try the Phoenix book, as I said they are selling it in the booth, so if you want to grab your copy please do, and if you want to chat about it just ping me, I'll be around. If you want to learn more about the other aspects I explored, about applications and supervisors, or about Elixir in general, you can go to the Elixir website, elixir-lang.org, where we have guides that let you explore the language, not only the foundational aspects but the more advanced ones too. There are two guides in particular: one is the getting-started guide per se, and the other is called Mix and OTP, which walks through actually building an application, a distributed application. We also have plenty of books for Elixir; on the website there is a learning section in the menu with books, screencasts and other materials, and there is an initiative called Elixir School that teaches Elixir in a bunch of different languages; I don't know if they have a translation for the local language here, but if not, someone could get started on that and that would be wonderful. Finally, I want to thank my company, Plataformatec: we are the ones who built and designed Elixir, and we are also contributing actively to Phoenix, so if you're interested in getting started with Elixir and want some coaching, or you are already building applications and need some kind of design review or architecture review, just get in touch. And that's it, that's the talk about Phoenix, thank you.

Do we have questions? I forgot to say that you could send questions throughout the talk, that was my bad.

Q: How does Phoenix compare to other real-time web frameworks that use, for example, purely JavaScript? An example could be Meteor.
A: That's a very wide question; it depends on which aspect you compare. In terms of performance, you are going to get much better performance with Phoenix. But if you care about some concepts that are first-party features of Meteor, like sharing code between client and server, that's something you're not going to get with Phoenix. On the other hand, we have positive trade-offs in other areas, such as more flexibility: Meteor is more rigid, in the sense that if you don't want to do things the default way you don't have many options, while Phoenix gives you more flexibility in that respect.
Q: What happens if the supervisor crashes? It's the famous "who supervises the supervisors."
A: That's a good question. Usually we have a supervision tree, which means the supervisors themselves often have supervisors, so you may think: at some point I get to the top of the tree, and what happens if that thing fails? There are two things to say here. The first is that the supervisor code has been running in production for about two decades, and the maintainers are really conservative with changes to it, so the supervisor, by design, is supposed to be very, very reliable. To give you an idea, something like fifty or fifty-five percent of the mobile traffic in Europe goes through Erlang switches, so this is code that is really well tested. And if you put a supervision tree inside your application, there is this thing called the application controller, which is what restarts the top of the supervision tree, and that is also very battle-tested. So the idea is that there is some code there that should not fail, that has been tested very hard not to fail, and you rely on that. It's like the example I gave with the machine: when something goes wrong, we restart the machine and it goes back to work, and sometimes the problem never comes back. The reason that works is that whoever builds the machine really tests the boot instructions, because if the boot instructions are wrong it's game over, there is nothing you can do. It's a similar concept being applied here.

Q: This question is more specific to someone already using Phoenix: what is coming in the next version of Phoenix? Any plans to build authentication into the framework?
A: That's a good question. There is something I didn't mention here: the latest Phoenix release, which is quite recent, it came out in June, is Phoenix 1.2, and it introduced Phoenix Presence. Phoenix Presence is a way for you to know, if you have a chat room or a channel or a gaming channel or whatever, who is connected to that channel. It is really nice because it uses the whole distribution story we were talking about: if someone joins on one machine, we spread that information so everyone on the other machines knows they joined. You can track who is present on each channel, and it works completely decentralized, just by exchanging information between machines; you don't need to rely on the database, you don't need to add Redis or anything like that. So that's something we just launched.
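A minimal sketch of the Presence feature he mentions, roughly as it looked in the Phoenix 1.2 API; module names, topic, and metadata are placeholders, not code from the talk.

```elixir
defmodule Demo.Presence do
  # Demo.Presence also needs to be started in the application's supervision tree.
  use Phoenix.Presence, otp_app: :demo, pubsub_server: Demo.PubSub
end

defmodule Demo.RoomChannel do
  use Phoenix.Channel
  alias Demo.Presence

  def join("rooms:lobby", %{"username" => username}, socket) do
    send(self(), {:after_join, username})
    {:ok, socket}
  end

  def handle_info({:after_join, username}, socket) do
    # Track this user on the topic; Presence replicates the information
    # across the cluster without a central store such as Redis.
    {:ok, _ref} = Presence.track(socket, username, %{online_at: System.system_time(:second)})

    # Send the current list of everyone present to the client that just joined.
    push(socket, "presence_state", Presence.list(socket))
    {:noreply, socket}
  end
end
```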
And what we are planning for Phoenix 1.3 is this. There is one thing we inherited from other frameworks like Rails, and I'll try to show it very quickly, I have three minutes. If you go to your Phoenix application today, let me increase the font, you can see there is this web directory here on the left side, we have the directory structure, and inside it there is something called models. Every time we need something that interacts with the database, like hey, I have a user and I want to get the first name, the last name and the age from the database into this thing, we put it in models, and the things we have in models today essentially map directly to database tables. And that's a very bad idea. Why? Imagine you have just been hired into a new company and you are going to work on their application: when you open the web/models directory, you are going to see all these files, these structures that are supposed to match the database, and that tells you nothing about what the application does. The only thing it tells you is which things you have in the database and how they map to it; it gives you an idea of your database structure, but if I wanted to know my database structure I would go and check the database, not my application.

So one of the things we are doing is pushing developers to think more about contexts. There is another big problem with this: who here has worked on an application that had models with 50 fields, 100 fields? Can someone identify with that? Right. And why is that? It's because we don't think about context: we have this user thing and we go, oh, I guess this is related to the user, and we shove everything into one place. We want you to think about context: a user doesn't exist in a vacuum; which part of the application does it relate to? Is it about the accounts system, is it about the authentication system? We want you to think about those contexts and break things into them, so when you come to an application you can say: this part is about accounts, this is about purchases, this is about payments. And sometimes the user ends up spread across different contexts, because the columns about authentication should be in the authentication or authorization part, and everything related to user payments should be elsewhere; it should not all be coupled into one place. Those are features we are exploring for Phoenix 1.3, and the good news is that those changes will mostly be to the generators. So if you start using Phoenix today, will all your code break in the next version? No. It's just that generators, as we said, are learning tools, so we want to push people in the proper direction so they can reason better about their applications.

All right, we had a bunch of other questions but I cannot answer all of them, so I'll be around, just come ask. Also, if you want stickers, I don't have Phoenix stickers but I do have Elixir stickers, so come around, come talk, I'll be glad to answer anything. Thank you.
Info
Channel: GOTO Conferences
Views: 38,410
Rating: 4.926024 out of 5
Keywords: GOTO, GOTOcon, GOTO Conference, GOTO (Software Conference), Videos for Developers, Computer Science, JoséValim, José Valim, Tech, Software, IT, Phoenix, Elixir, Plataformatecis
Id: bk3icU8iIto
Length: 59min 50sec (3590 seconds)
Published: Thu Nov 03 2016