Andrew T Baker: 5 ways to deploy your Python web app in 2017 (PyCon 2017)

Captions
Good, cool. And once we get that GIF back up there: thank you, Drake, and thank you, folks. Hopefully the wait was worth it, because that is my favorite GIF of all time. My name is Andrew Baker and I'm here to talk to you today about deploying your Python web apps in 2017. So we'll put Drake aside for the moment; thank you for your service.

A little bit about me: my name is Andrew Baker, and I'm a Python web developer; that's what I've been doing for most of my career. These days I work at Twilio. If you haven't heard of us before, Twilio is a company that makes it easy for developers like you to put communications in your apps. By communications I mean things like phone calls, text messages, and video chat. I work on the documentation team at Twilio, and I know that's enough about Twilio; if you want to know more, stop by the booth and I'd be happy to talk about it. But today we're here to talk about deployments.

What we're going to try to do in the next 30 minutes is cover five different ways you can take code you've got running locally on your machine and get it up and running in the cloud. And rather than tell you how to do this, I figured it would be more fun to try to show you. So in the next 30 minutes we're going to take the same sample code and deploy it to the web five times over. Sound good? All right, let's give it a shot.

Our sample app today is a Flask application. I imagine most people in the room are familiar with Flask already, but if you're not, it's a microframework that makes it really, really easy to bootstrap a Python web application. We've got the little hello-world code right here, and indeed that looks very similar to the sample code we're going to be using in our app today. We've just got one route, the root route, and it's just going to spit out an h1 tag with a "Hello World" in it.
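The sample app described here is, roughly, the canonical Flask hello world. A minimal sketch, assuming the module is named hello.py as mentioned later in the talk:

```python
# hello.py - minimal Flask app used throughout the talk (reconstructed sketch)
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # The root route spits out an <h1> tag with "Hello World"
    return "<h1>Hello World</h1>"
```

You would run it locally with Flask's development server (e.g. `FLASK_APP=hello flask run`), which serves it on localhost:5000.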
The first technique we're going to talk about today is ngrok. You could even call this technique number zero, because it's not really deploying so much as a really, really handy tool that I think should be in every developer's toolbelt. If you check out ngrok's website, it's "secure tunnels to localhost," and the quote says it all: I want to expose my local server, behind a NAT or firewall, to the internet. If you click that download button you go over to the ngrok download page and download that file, which I've done already, and then you really just need to do two things. First I'm going to fire up that Flask app we had before and make sure it's working on localhost:5000. Hello World, digging it. Then I'm just going to go over to another window here and do `ngrok http 5000`, to tell ngrok that I want it to route to port 5000 on my localhost. ngrok is going to fire up here and give me this weird, kind of gobbledygook URL. I'm just going to paste that into my browser, and: Hello World. That is it; your app is now live for the world to see. ngrok is a really, really great tool for just getting something up in a pinch.

One thing that even folks who've used ngrok before might not be aware of is that it also has this awesome request inspector. If you go to localhost:4040 while you've got ngrok running, you can click around, see the requests that were coming in, and see the responses your app was sending back. If you go over to the status page, you can get some performance information on how your app is doing locally. So ngrok is the simplest way to get your app running in the cloud.

After each of these techniques we're going to do a brief breakdown of the pros and cons. The pros for ngrok: it is fast and easy; there's no way that's faster or easier. It's really handy for demos: if you're just going to a meeting with some code you've been working on, and you haven't really shown it to anyone else yet but you want some of the people in that meeting to be able to play with it, ngrok is a great tool for that. And it's also really great for hacking on webhooks.
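The two-window ngrok workflow described here can be sketched as follows (assuming ngrok is downloaded and on your PATH; the forwarding URL shown is made up, since ngrok generates a random one each run):

```shell
# Window 1: run the Flask dev server on localhost:5000
FLASK_APP=hello flask run --port 5000

# Window 2: tunnel local port 5000 to a public URL
ngrok http 5000
#   Forwarding  https://a1b2c3d4.ngrok.io -> localhost:5000

# While ngrok is running, its request inspector lives at:
#   http://localhost:4040
```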
If you haven't heard of webhooks before, a great example is with Twilio: when your Twilio phone number has an incoming phone call from somewhere in the outside world, Twilio will send your server an HTTP request asking how you want to handle the call. That's great when your server is live and running in prod, but when you're just working locally you don't actually have a URL you can point Twilio to to get those instructions. ngrok is the way we recommend doing it, and that's why it's an indispensable part of my toolkit.

Now the cons. Obviously it stops when you close your laptop, so as soon as your screen goes black, that URL is gone. And every time you fire it up you're going to get a different random domain. You can upgrade to a paid version, of course, and it's a great product, so I support paying developers who do cool stuff, but if you're sticking with the free one you're going to get a random domain each time. And it definitely doesn't scale: even on the free tier, if you have maybe 12 people in a meeting and you're trying to point them all at your ngrok URL at the same time, your requests may get throttled. So be careful how many people you send that URL out to at once. Awesome, all right, moving on.

Technique number two is Heroku. Heroku is a platform as a service, and in my opinion it's still the easiest way today to get your code running in the cloud 24/7. If you pop over to Heroku's website they've got lots of interesting language which explains what it does, but for me Heroku isn't so much easy to explain as it is easy to show. So we'll pop back over to our Flask app, still got that running there; I'm going to close down ngrok and stop my local server. We need two things to get an app running on Heroku. The first thing we need to do is create this file called a Procfile; a Procfile tells Heroku how it should run our app in production. And then we need to actually create a Heroku app,
which is going to give us the URL where the app is going to live. So we'll start with the Procfile. Procfiles have a pretty simple syntax: you just write `web:` and then the command you want Heroku to run when it starts your service. Now, if you look at the Flask development docs, they will tell you: do not run the development server in production. Never, ever run the development server in production. So for the rest of this talk we're going to use a Python HTTP server called gunicorn to actually run our app when it's out in the world for everyone to see. The command to do that is `gunicorn` and then the Python path to where our app is defined: that's inside the hello.py module, and we have a variable in there called `app`. Then there are a couple of other options we like to pass for Heroku; this one just tells gunicorn, hey, don't store your logs locally on the server, instead pass them back to Heroku so Heroku can give them to me.

So, once we've got our Procfile: when you create your Heroku account, they will tell you to download this thing called the Heroku Toolbelt, which is basically their command-line interface. After you log in to that, all you have to do is type `heroku create`, and it's going to give us a unique URL right on the spot. It's also going to add a new remote to our repository that's tied to that unique URL. So where, if I were just pushing my changes up to GitHub, I would do `git push origin master`, to push my changes up to Heroku I'm going to do `git push heroku master`. What's happening now is that Heroku has accepted our source code; it is looking at the requirements inside our requirements.txt file, and it's going to install all those requirements. It's really, really fast, because Heroku's servers are pretty close to where PyPI's servers are, at least a lot closer than our laptop is. Then it's going to bundle it all up and shoot it out on that URL, and it's launching. Cool.
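Putting those pieces together, the Procfile from this demo looks roughly like this (a sketch: `hello:app` is the module:variable path of the sample app, and `--log-file=-` sends gunicorn's logs to stdout so Heroku can collect them):

```
# Procfile -- tells Heroku to run the app with gunicorn
web: gunicorn hello:app --log-file=-
```

With this file committed, `heroku create` allocates the URL and git remote, and `git push heroku master` builds and launches the app.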
So if we do `heroku open`, which is a nice handy little shortcut they give us, we'll hit that URL; it's making all the pipes line up for us, and: Hello World. Cool, so now we are up and live with Heroku.

So what exactly does this mean? My favorite part about Heroku is that it's the easiest way to get an app running in the cloud 24/7 for free. Heroku's free tier is a little more complicated than it used to be, but the last time I did the math, you can basically have one app per account running all the time without any serious consequences. There's zero server management: we didn't have to access any machines or open any terminals on any remote servers, anything like that. And Heroku has a really interesting add-ons ecosystem, where they partner with other companies to make it easy for you to add things like logging, monitoring, databases, and caches to your application.

The cons: the scaling is really, really easy, but it can also get pricey. If you just have, like, a big event for your organization and you need to pop on some extra dynos for just one day or an afternoon, Heroku is probably still a great choice; if you need to be running your app at more than one server on Heroku for a sustained period of time, you might want to start looking at other options. Server customization is harder: if you need some sort of OS library to make your application work, there's a way to get it in there with Heroku, but you have to do a little more legwork. And some of those add-ons are better than others: some are maintained by Heroku itself, some by third-party vendors, and those third-party vendors can vary in their reliability and in the quality of their documentation.

All right, number three: "serverless," in quotes. It's pretty much the only way you can talk about serverless, in quotes. This is one of the newest, hottest techniques to get your app out there in the world. The idea is that instead of, like, Heroku, where Heroku managed all the server stuff
for us but we still have our process running in Heroku's cloud all the time, with serverless the idea is that our code is only going to be running when someone actually needs to use it. Most of the time it's going to be sleeping, and as soon as someone sends a request to our website, the serverless provider is just going to flip a switch and get our process running again, quickly enough to respond. Today I'm going to show you how to use AWS Lambda, but all the big cloud providers have their own serverless feature: Azure has one, Google Cloud has one; I'm just showing you AWS for example's sake. When you look at Lambda you can see "pay for only the compute time you consume," and we'll talk more about the pricing on the pros-and-cons list. When I've used Lambda before, you can log into your AWS console, copy and paste some code, and get things set up and working pretty well, but I find it a little cumbersome. So I like to use one of the third-party frameworks that have sprung up around the serverless movement, and the one I'm going to show you today is called Zappa. Zappa is basically just a wrapper around AWS Lambda that makes it easier to take my existing Flask application and fit it inside Lambda.

To get Zappa working, I already did a `pip install zappa` before I got up here. So we start with `zappa init`, and Zappa is just going to ask me a few questions about how I want my app to run inside Lambda. First it asks what we want to call this environment; I'm just going to say production. Which AWS credentials do we want to use? I'll say personal. Zappa also creates an S3 bucket, which is where it's going to store your source code before it gets deployed; I'm just going to go with the default name there, don't really care. It found that hello.app is the right path to get our application started, so I'm going to stick with the default there. Do we want to deploy globally? It would be the best way to say
hello to the world, but it also costs a little more, so I'm going to say no. Everything look okay? Yes it does. Thank you, Zappa. And then to get things rolling we just say `zappa deploy production`. So right now Zappa is making a whole bunch of API calls to Amazon Web Services behind the scenes. It's giving me a warning here because my virtualenv and my Zappa project have the same name; that's probably something we should all fix next time.

The interesting thing about Zappa and Lambda is that all of it is mostly a recombination of other Amazon Web Services products, and because all of those are products you can use in their own right, you can access them and poke around in your AWS console after you've set everything up. So after you deploy a Zappa project, you should log into your AWS console and kind of poke around and see all the things it has made for you. You're going to want to look at this thing called API Gateway: that's basically the way you tell Amazon Web Services that you want to accept traffic from the outside world, and it's also where you go to set up your own custom domains and SSL and all sorts of things like that. Like I mentioned before, there's also a tie-in with the S3 bucket; that's what's happening right now: Zappa just zipped up our source code and is dropping it in that S3 bucket. This is a little different from Heroku, because when we pushed our code up to Heroku, Heroku just looked at our source code, took a peek inside our requirements.txt, and then pulled all of our dependencies onto Heroku's servers. With Lambda, you basically have to bundle up your dependencies locally and then upload them all to Amazon Web Services. A small distinction, but if you see something wonky going on, that could be part of your trouble.

All right, deployment complete. Got a weird, ugly URL; you know what that means: Hello World. Awesome, cool. So let's talk a little bit more about serverless and Lambda.
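The whole Zappa flow just demonstrated compresses to a couple of commands plus a small settings file. A sketch, where the bucket name and region are assumptions, since `zappa init` generates them interactively:

```shell
pip install zappa
zappa init                # asks the questions above, writes zappa_settings.json
zappa deploy production   # zips app + dependencies, uploads them to S3,
                          # wires up Lambda and API Gateway, prints the URL

# zappa_settings.json ends up looking something like:
# {
#     "production": {
#         "app_function": "hello.app",
#         "profile_name": "personal",
#         "aws_region": "us-east-1",
#         "s3_bucket": "zappa-abc123xyz"
#     }
# }
```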
It's pretty economical for small to medium loads: if you don't need something that's actually available 24/7, but you just need it to be quickly available at any time of day, this is a really great choice. It's also good for spiky traffic: if you have unexpected bursts of traffic to your service and you don't know when they're going to come, Lambda is a good choice, because Amazon is basically going to take care of all the scaling for you. And, as you saw, absolutely zero server configuration, even less than Heroku.

The cons: this is a relatively new technique, probably the newest one we're going to talk about today. Not only is it a fast-moving ecosystem where you're not going to find that much to read compared to the other techniques, but the best practices are also still settling in, so you're going to be a little more on the bleeding edge. In my opinion, and this is just your Andrew Baker opinion, it's a little less fun when you have to work directly with Amazon Web Services' or the other cloud providers' interfaces; I prefer to use these third-party frameworks like Zappa, or the one called Serverless, but your mileage may vary. The other thing is that it can be a little tricky to troubleshoot. When something goes wrong with your Lambda deployment, because, like I said, it's just a combination of other Amazon Web Services products behind the scenes, that kind of means you have the ability to go spelunking on your own to figure out where things went wrong, and you're probably going to have to spelunk inside products you didn't even know existed.

All right, technique number four: virtual machines. This is where we get to the workhorse of the internet; this is the way most big organizations run their code in the cloud. Today we're going to be taking a look at Google Compute Engine virtual machines, but all the big cloud service providers have their own VM service; for Amazon, for example, it's EC2.
With this one you are pretty much just getting your own tiny corner of the cloud and setting it up exactly the same way you would locally. So I'm in my Google Cloud Platform account; I just hit "create a new instance," and we'll call this one pycon-2017. I'll give it one CPU, that's how much horsepower I want on it. Right now I'm going to stick with Ubuntu, because that's what I know best, and we're going to make sure we allow HTTP traffic. So right now Google is going to get started spinning up a new virtual machine for me inside my Compute Engine account.

Virtual machines, if you haven't heard of the concept before: the basic idea is that we're taking the software power of Google's cloud and basically using it to create what looks like fake hardware, and then we're installing another operating system on top of it. The pro is that you get full isolation between, say, my virtual machine running on Google Cloud and your virtual machine running on Google Cloud. The downside is that it's not quite as efficient as if you were just running a process without that overhead of virtualization. We'll talk a little more about that in a second.

Now that my virtual machine is up, I'm going to use this little shortcut here to copy the command to SSH into it; we'll see if the box is actually ready to accept our SSH connection. All right, cool. If we start poking around this instance, we'll see it looks pretty much like what a stock Ubuntu server would look like right out of the gate. I'm going to activate sudo mode, because we're about to run a whole bunch of sudo commands, and we basically need to do a few things to get set up here. One, we actually need to install pip first, so I'm going to get that started right now: install python-pip; yes, 192 megabytes, let's do it. After we install pip, we're going to have to make a virtualenv, and then after we make the virtualenv we're going to need to clone
our git repository to pull our source code onto this server, then install our requirements inside that virtualenv, and then we'll finally be ready to run our app. This one is definitely the most legwork. So now that we've got pip, I can do `pip install virtualenv`, and then I'm going to do `virtualenv -p python3`, since luckily the box comes with Python 3 already. I'm going to activate that virtualenv just like we would locally, and then it's time to grab our repo: clone it, pop in there, and install our requirements, just like we would locally. The last thing we need to get it running is that same gunicorn command, but we actually need to make one small tweak to it this time. By default, gunicorn is only going to listen on port 8000, and only to requests coming in from localhost. So we need to pass it one more option that tells it: hey, listen to requests from the internet at large, and do it on port 80 instead. If we pop back over to Compute Engine and click this little icon here, we've got our Hello World running there in Google Cloud.

So, pros and cons of virtual machines. Pros: full control; you get to do literally anything you want on this thing and set it up exactly the way you like. It scales as much as your wallet, so that's for you to consider, but it can still be economical if you're careful; you can get a lot of value out of it if you put in the time to set things up the right way. The cons: it's undoubtedly more work for you, the most work out of any of the options we've talked about today, and there's also a lot more to learn. We set up this virtual machine today using just manual commands on the box; if you really decide to run your organization on this in production, you're probably going to need to learn about things like configuration management and monitoring, and you're going to want an alerting system for when things go down or weird network flips happen. You are going to be in it if you go this route.
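The manual VM setup just described amounts to something like this (a sketch: the repo URL is hypothetical, and binding port 80 requires root, hence the sudo on the last line):

```shell
sudo apt-get update
sudo apt-get install -y python-pip git
pip install virtualenv

virtualenv -p python3 venv        # the box already ships with Python 3
source venv/bin/activate

git clone https://github.com/example/five-ways.git   # hypothetical repo URL
cd five-ways
pip install -r requirements.txt

# gunicorn defaults to 127.0.0.1:8000; bind to all interfaces on port 80
sudo venv/bin/gunicorn hello:app --bind 0.0.0.0:80
```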
And the last thing is that it's ultimately harder to predict the costs, especially if you add things like load balancers to your stack, where you have multiple virtual machines running at once and you want the cloud provider to balance traffic across them evenly; those prices can come back and bite you on your bill if you're not careful. So, ultimately, with virtual machines: the most control, and the most work. But if you go this route you will be in good company, because it is the way a lot of people in the world run their software.

The last piece we're going to talk about today is Docker. Docker is kind of a newcomer on the scene, from maybe a couple of years ago. If most of the techniques we've talked about today go from the least effort and least control, like with Heroku, to the most effort and the most control, with virtual machines, you can see Docker as a way of trying to split the difference. We're going to set up our app much like we would in a virtual machine; when we run it, the app is going to think it's in its own personal virtual environment, but the Docker containers we use are going to be a lot more lightweight than a full virtual machine and a little easier to manipulate.

Usually I find that with Docker it's easier to show than tell, so we'll pop back over to our app here. We need two things to get our Docker container running in the cloud. First we need to create this thing called a Dockerfile, which is going to tell Docker how it should actually assemble our project. Oh, I'm not on localhost; get out of there, Google Cloud, now is not the time. Cool. Dockerfiles have their own weird syntax; those of you in the audience who know me know that I know this one all too well. We're going to start off our Dockerfile by pulling from the Python base image; we'll do `3.5-onbuild`. Then we're going to tell it which port we want to
expose in production; this time I'm going to do 5000. Then we tell it what command it should use to actually start things up; I'm going to go back and grab that same one from our Procfile and then add that bind, just like we had before, 0.0.0.0:5000 this time, because that's the one we're telling Docker to pay attention to.

Before we can actually run our project inside the Docker container, we need to build it. So I'm going to do a `docker build`, and I'm going to tag it atbaker/five-ways. Docker is going to look at our code, take a look at our requirements file, install the requirements, and then add some metadata about how we want to run the container. Then, to run it, we say `docker run`; we need to tell it that we care about port 5000, so I'm going to say take port 5000 from our container and expose it on port 5000 on our host, and run that image, atbaker/five-ways. We can see gunicorn running inside the Docker container now, and if we go check out localhost:5000, we've got our Hello World. Awesome.

The next piece, to actually get our Docker image running in the cloud, is that we first need to push it up to Docker Hub, which is basically like the GitHub of Docker. So we do a `docker push` of five-ways. You'll see a lot of these "layer already exists" messages coming up here; only that first one is one it actually had to push up on its own. That's because Docker is smart enough to realize, hey, most of the stuff inside this image is being pulled from the base Python image, which I already know about.

Docker comes with this extra tool called Docker Machine, which lets you spin up virtual machines really easily and then not just SSH into them, but also manipulate them as if they were your localhost. I've already got one running here called five-ways, so I'll run the command that applies that Docker virtual machine instance to my local environment: `docker-machine env five-ways`.
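The Dockerfile from this demo looks roughly like the following. A sketch: the `python:3.5-onbuild` base image copies in requirements.txt and your source and installs the dependencies at build time, though onbuild images are deprecated in newer Python releases:

```
# Dockerfile
FROM python:3.5-onbuild
EXPOSE 5000
CMD gunicorn hello:app --log-file=- --bind 0.0.0.0:5000
```

Then `docker build -t atbaker/five-ways .` builds the image, `docker run -p 5000:5000 atbaker/five-ways` serves it on localhost:5000, and `docker push atbaker/five-ways` publishes it to Docker Hub; `eval $(docker-machine env five-ways)` points those same commands at the cloud VM instead.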
Cool. So I'm going to kill the one we had running locally, and now, to get things working on our machine in the cloud, I'm going to pull down our five-ways image and start it up just like we did locally: 5000 to 5000, atbaker/five-ways. Then I'm just going to pop up another window here, because there's a handy little command we can run to see what IP address our Docker machine is running on. Grab this guy, pop over to port 5000: Hello World. Cool.

So that one may have seemed a little bit like dark magic; it's definitely the most advanced option we've talked about here, but it has its own pros and cons. The pro is that it helps a lot with dev/prod parity: once you're running your app in production, you're going to find that a lot of your biggest bugs happen because something set up in your local development environment is not the same as the way it's actually run in production, and Docker is great for helping with that. It's nice for microservices, if that's a thing you're looking for. It's also a great way to impress your friends; I can speak to this from personal experience. The cons: it's one of the newest techniques out there, and the best practices are still getting settled. It's probably less new than serverless at this point, when you look at the documentation and materials out there, but still pretty new. It works best when you and your whole team go all-in on Docker, and it definitely has its own learning curve, besides all the tools you're actually putting inside these containers to run.

So that's all I've got, folks. The five techniques we covered are ngrok, Heroku, serverless, virtual machines, and Docker. My name is Andrew Baker; I'll be hanging out at the Twilio booth in the expo hall all day tomorrow if you want to ask some questions. Thank you. [Applause] Catch him at the booth with questions. Yeah, thank you. Five demos, all successful, not a single error; you and I are both surprised, I think. Thank you. Very good, thank you. Yeah, it was fun. Yeah, please do, I would love to. Thanks. Thanks, man.
Info
Channel: PyCon 2017
Views: 45,409
Id: vGphzPLemZE
Length: 27min 57sec (1677 seconds)
Published: Sat May 20 2017