Docker + Python by Tim Butler

Video Statistics and Information

Captions
So up first we have Tim Butler from Conetix, and he'll be talking about Docker and Python. Please welcome him.

Okay, thank you everyone for turning up to my talk. Just to start things off, I wanted to get a rough idea of who's heard of Docker. Okay, pretty much everybody. And who's actually used Docker before? Okay, probably about 50%. And the magic question: who's actually used Docker in production? Just a couple, okay.

For those that don't know me, my name's Tim Butler and I'm currently the Enterprise Manager for Conetix. Conetix is a web hosting and application hosting company, and we've been specialising in container-based hosting for about 10 years now. I've been working with Linux containers for 10 years as well, and been playing around with Python for about eight years. I've now got around 18 months of Docker experience, and I've been focusing on it quite heavily over the last six months. At Conetix we're actually developing a new product at the moment: we're combining Linux containers and Docker, running Docker within those Linux containers, and we're working on a new feature and a new hosting product for the Australian market with that.

I've aimed this talk at newcomers to the technology: just enough to get you started, get up and running, see what Docker's all about and see if it's a good fit for you. I'll cover what Docker is, what containers are, go through a few practical examples with Python, go through the downsides of Docker and some of the extra services you may use, and look at where Docker is heading in the future.

So what is Docker? The official definition is "an open platform for distributed applications for developers and sysadmins". Essentially their motto is "build, ship, run", and this is why I like to think of Docker as application containers. So what are containers? Containers are essentially an isolated virtualisation layer. It's software-level virtualisation, so it uses a common kernel, in this case the Linux kernel, and provides a set of tools
to help manage them. Like a VM, you can still stop and start containers, you can set up networking on containers, and you can set resource limitations and constraints on them as well. Essentially Docker is yet another layer of abstraction on top of standard containers. Docker focuses very heavily on the application layer: the goal of Docker is essentially to reduce things down to a one-process, one-application container. The advantage is that, because it's very tightly defined, you get a nice, easily repeatable deployment from it. As Tom talked about in his presentation yesterday at DjangoCon AU, you want production to be boring; you want it to be safe.

So what about containers? Are they the latest fad, the shiny new technology? They were in fact introduced into Linux 15 years ago. There's been quite a lot of work going on in Linux kernel development, and it started back in the 2.4 kernel era; for those that remember, that's back in the Pentium II and Pentium III days. One stat that I always think stands out is that Google now launches over two billion containers each week. They spin up and spin down containers for all their services, and they're launching two billion each week. So from a production-ready point of view, containers are certainly there, and Docker is 99% there as well.

So why Docker? What advantages does it offer? It's fast. Docker runs at native speed: because it's software-level virtualisation you're running at raw server speed, so you're not taking any hypervisor performance hit. It launches in sub-second time: Docker can literally create and launch a container within 50 milliseconds. It's lightweight: you can have a full Python environment in less than 200 megabytes of storage on disk. But it gets better than that. Docker uses a layered approach to its filesystem: it uses a copy-on-write overlay filesystem by default, which means that spinning up more instances doesn't necessarily take up more
disk space. There was a company called Hypriot that presented at DockerCon 2015, and one of the challenges they put up was 250 running Docker containers, each with an HTTP daemon inside, on a Raspberry Pi. And if you think that's impressive, their aim is a thousand.

It also gives you a great level of isolation. Obviously if you're working with microservices, you want to be able to contain each of those within its own environment. You can limit CPU, you can limit memory, you can limit what that system can do, and because each container is contained in its own namespace, it can't access the processes of other containers that are running. And Docker makes things easy: most of the commands are a single command line, it's got a nice REST API that you can also call, and there are quite a number of tools that already integrate with it.

So is it all rainbows and unicorns? Not quite. What are the downsides of Docker, or as I like to refer to them, the things you have to be aware of about how Docker actually works? Most of these are both positives and negatives. The first one is that it's a rapidly changing platform. The good thing is they're adding new features all the time and improving the services it provides; the downside is that you can't just learn Docker once and have that knowledge forever. In turn, that means best practices can go out of date very quickly: something that was written 18 months ago as the best practice for Docker may not necessarily be relevant today, so you've got to do your homework to make sure you're actually following current best practices. Docker themselves have been working very heavily on their documentation and recently relaunched their whole documentation suite, so they're making good inroads, but they still have a way to go there as well.

Docker containers are immutable. Once you spin one up you can make changes, but as soon as you shut it down you lose all of those
changes. So it takes a bit of thinking sometimes to work out how your services should work. Thankfully they've covered that as well with the concept of data containers, where your data can sit in a separate container from the actual running instance. For example, with a database server you can have your database running in one container and have a data container for the actual database files themselves.

Multi-host deployment: at the moment Docker is really good at focusing on one VM or one host node and spinning up containers there. The tools still aren't quite mature yet for running it in multi-host environments, where you've got multiple different VMs or multiple different cloud hosting platforms, but they're working on it.

Security: this is one that a lot of people get confused about. Because Docker runs in a different way, they see this as an issue: essentially you fire up your container and it becomes immutable. So with, for example, the OpenSSL bug, there was panic: "I've got OpenSSL installed in my container, how do I update it?" The answer, mostly, with Docker, is that you stop and start the instance. You pull down the latest image, stop and start the container, and you're a few hundred milliseconds at most from restarting that service. With the OpenSSL bug you had to restart your nginx or Apache daemon anyway to pull in the latest changes, so it's not really that much different; you've just got to think about it in a different manner.

So what are the alternatives to Docker in the Python world? Obviously there's virtualenv. virtualenv is a great little tool for running multiple different versions of libraries without polluting your dev or production environment with conflicting versions; in fact, one Pythonista I've seen described Docker as "virtualenv on crack". The downside to virtualenv is that it's still heavily reliant on the underlying system: it still needs Python installed on the system, and it's Python-only. So when it comes to binary or external
modules, like when you've got MySQL and you need some sort of external library to compile against, you can run into some issues there, so you can still face deployment issues.

Existing virtualisation: obviously everybody these days is running virtualised systems, which is great. Docker isn't a direct replacement for this; in fact it can augment your VM deployment, because now you can quite easily and quite efficiently spin up multiple containers within your virtualised environment or virtual machine. You can make full use of your hardware, and you still get all the isolation and the ability to restart.

Existing orchestration systems: obviously the more tightly coupled Docker is, the more it gives you in terms of configuration management, repeatability and orchestration. Again, Docker isn't a direct replacement for these: it's now integrated into most of the modern configuration management systems, so you can use Docker with your existing systems as well.

And there are plain Linux containers themselves, like LXC and Virtuozzo (a Parallels product): again, Docker isn't a direct replacement for these. Docker focuses just on the application side of things, so in fact you can run Docker within an existing container as well, and have a multi-layered system with high levels of efficiency.

So how do we use it with Python, and what uses are there for Python? Microservices: obviously everybody would love to be able to decouple the dependency libraries for each service into their own little environment. That's great if you've got lots of VMs you can spin up and manage, but Docker makes it really trivial to spin up containers and isolate each of those services. If you're using the twelve-factor app philosophy, or following its guidance, Docker is an extremely good fit for that as well.

Then there's the configuration management side of things: the troubles you have between production and development environments. Because
Docker is very tightly coupled with your service, you get some configuration management benefits out of it as well. If you've ever had a "works for me" or "it worked in dev" type problem, and didn't realise you had one little library version different between your development and production environments, Docker eliminates that for you. Which leads on to repeatability: deploying Docker anywhere should give you the same experience. Whether the underlying system is Ubuntu or CentOS, running on Azure or AWS, it shouldn't matter.

One of the benefits that I've certainly used heavily is essentially use-once containers. You can spin up a container, spin up (as I'll show you shortly) a Python environment, and you can even have it destroy itself as you shut it down, so the overhead of spinning up these different environments is extremely low. It's also great for testing: you can run multiple different test environments, spin up the containers, run your tests and destroy them afterwards.

So how do you get it running? One of the good things about Docker is that it's not like OpenStack; it's not a complex install to get started. For Ubuntu it's just a very simple one-liner, and the same again with CentOS and Fedora. If you're running OS X or Windows, you can use a product called boot2docker, which again is a one-line script that sets up a virtual environment for you using VirtualBox on your machine. It takes less than 15 seconds to boot on a modern system and you can get started right away. Docker also recently bought the company that produced Kitematic, which gives you a nice GUI interface for managing your containers.

So how do you use Docker? It's got a very simple command structure. Just as a few examples: to run a container, you go "docker run" and give it a name; if you want to restart an instance, you go "docker restart" and the container name; to list containers, "docker ps"; and to view the log files from your container, "docker logs" and then the container ID.
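The basic commands just mentioned could be sketched as a session like the following. This is an illustrative sketch only: the container name "web" is a placeholder, and it assumes a local Docker daemon is installed and running.

```shell
docker run --name web -d python:2.7   # create and start a container named "web" from the python:2.7 image
docker restart web                    # stop and restart that existing container
docker ps                             # list running containers (add -a to include stopped ones)
docker logs web                       # view the stdout/stderr captured from the container
```

Each of these is a single command line against the local daemon; the same operations are also exposed through Docker's REST API.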
So it's a very simple system to get started with. Let's create a basic little container. Obviously we're at PyCon, so I've got a nice Python one here. For 2.7 we go: docker run -t -i python:2.7, and for 3.4 we simply specify that version instead. How this actually works: -t essentially means allocate a terminal, and -i makes it interactive, meaning we can type into it and use it. (And I must apologise for some of these: I've had to take videos of the demos I was going to give, because my original MacBook decided to die two days before this event.) Lastly we specify the image and a tag; generally the tags are used for versioning. If we run a quick little demo of docker run -t -i python:2.7, you can see we're straight into a Python shell, and you can see it started straight away. And again, to run a 3.4 shell, this pulls the image and creates a container in the background with Python in it.

So let's look at a simple little app, two lines of Python, because Python's nice and simple. Basically all we're going to do is say "hello PyCon Australia" and print out the version number. To turn my Python file into a Docker container there are two steps. The first is to create a Dockerfile. A Dockerfile is a very simple text-driven configuration file where you specify everything that needs to happen to create the image and start the container. For this one, because Docker gives us the layered ability, I just start from the existing python:2.7 image, I copy my hello PyCon file into the root directory, and finally I run the command to execute it with Python.

Now that we've created the Dockerfile, the next step is to build the image. Doing it locally, we just specify a tag with -t, so we have an image name to reference, and we specify the build directory; literally all that was in that directory was the Dockerfile and the hello PyCon file. Once it's built, all we need to do is docker run pyconau/hello, and there should
have been the example of it actually running.

To give a quick Flask example, this is more like how it'll be used in the real world, in a production environment. Again it's a simple command structure, except this time we're going to set it to run as a daemon, and we're going to use -p to map port 80 to port 5000: essentially port 80 will be on the host, and it will map that to port 5000 in the container. We're using Docker's training web app for this. Just to see it run: for this one I didn't pull in the images beforehand, just so you could see the process run from scratch. Unfortunately I recorded it at home on my Optus cable connection and managed to pick their peak time to do it. It was unable to find a copy of the training app locally, and as you can see, it goes and automatically downloads all the layers it needs to run this application. This particular one is an Ubuntu-based image, so it's downloading the multiple different layers for that as well. The good thing about Docker is that once you've pulled these in, if I had all of those layers except for the last one, all it would need to download is that last layer. If we skip through it, it quickly extracts everything (because it's a copy-on-write overlay filesystem, it takes care of all that work in the background for you) and starts up the application. Of course we can then go to a web browser and see our magical hello world application. We can also have a look at the log files: again, if you're used to running Flask or similar systems, you can see the log output from that container.

The Dockerfile for this one was a little bit more complex than my simple example, but to run through it quickly: they use an Ubuntu image, they install Python, copy the requirements file over, pip install the requirements, expose port 5000 (this allows the host to map a connection through to the container), and run the application. And just for
completeness, that was the Flask app that it ran. You can see port 5000, and this is where it comes into it: the port that your application starts up on is what you need to expose to have the ability to access it from the host or from an external source.

The good thing about Docker is that it comes with everything, including the kitchen sink, once you start to get into more complex Docker scenarios. When you're running multiple containers, they have Docker Compose. Docker Compose gives you controls for multi-container deployments; it's a very simple YAML-based format, and again, like everything Docker, it reduces everything down to a single command line to run. Docker Swarm is their clustering component: this is where they're trying to work on and simplify host-level deployment. It's currently still in beta, so there's quite a bit of flux in how it works and the feature set it has. Docker Machine: again, Docker wants to make everything easy for you, so Docker Machine will create the host for you. It can directly interact with AWS, as well as a number of other cloud providers, and actually spin up the instances for you.

So where's Docker going in the future? Obviously it's a rapidly growing technology, but there are two of the next bits they're pushing on. One is a volume plugin, which has now made it into the mainline code and gives you pluggable storage: it can interface with systems like (currently they've got demos for) NFS, EMC filers, GlusterFS and similar, so you can have your data stored on an external source and referenced from within the container. They're also working on a new network system. Obviously the current one works well for simple host environments, but it doesn't work well when you get to bigger scale, so they're using VXLAN essentially to bridge between the multiple different hosts, and they're turning it into a services-based model: rather than specifying ports,
you'll be able to specify a network and specify services.

So in conclusion: go try it. No, really, go try Docker. It's quite simple to get started, it allows you to embrace the microservices philosophy, and it makes pushes to production enjoyable, because your dev environment running Docker containers is exactly the same as your production environment; you eliminate all those hassles in between. If you want further reading, I've got my slides up, and I've also linked a number of tutorials that I've written. There was also a really good presentation at PyCon this year that gave a good overview of Docker, and of course there's the official Docker documentation itself. And with that, have we got any questions?

[Audience] You said Docker locally caches container images. Is there a way of caching images on your network for multiple hosts?

Not that I know of. I haven't actually delved into that, so there may be a tool available for it, but otherwise it's just an easy docker pull to pull the data down per container, or per host, sorry.

[Audience] Hi, thanks for the talk. I briefly played with Docker about a year ago and haven't actually touched it since. Two questions. One: at my work we're using SaltStack for our configuration management, or you might be using Chef, Puppet or Ansible. How do the two play together? Because I looked at the Dockerfiles and thought, oh, now we're going back to a bash script. Any tips on how you get those two working well together? And my second question: I'm using Vagrant for our dev builds. If I start using Docker, do I just throw that away and spin up a Docker image, or whatever the terminology is?

Good question. SaltStack has actually integrated a Docker module, so SaltStack itself can natively talk to your Docker instances. I haven't played with that, I've just seen it's available, so I'm not sure exactly how that component works. In terms of Vagrant, I was using Vagrant and now I'm just using Docker containers, so
essentially I've got my setup using Docker Compose, and I spin up the multiple different containers with everything running, ready to develop. Absolutely, yeah; if I've got all the layers there, I can spin up my development environment within a few seconds.

[Audience] Hi, great talk, thank you. Is Docker viable as a sandbox for running untrusted code? What do you have to do to it, or is that the wrong approach completely?

Absolutely, and in fact a couple of people have demoed a number of instances of this. You know all the one-liner install scripts you see out on the web, where you pipe a curl command into bash and run it? Anybody that's security-conscious of course wants to figure out what exactly that script is doing. Docker actually has a docker diff command, so you can take a standard container, run that curl script inside it, and it'll give you an output of what files have changed and where they've changed. So you can use it for that sandbox testing as well, absolutely.

And Docker kindly sent me some shirts and bits and pieces, so I'll put some of them out on the table, but come see me and get some bits of swag.
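The docker diff workflow described in that last answer could be sketched like this. It's an illustrative sketch only: the image tag, the script URL (example.com is a placeholder, not a real script) and the container name are all assumptions, and it requires a local Docker daemon.

```shell
# Run the untrusted install script inside a throwaway container
# built from a known-clean base image.
docker run --name sandbox ubuntu:14.04 \
    bash -c "curl -sSL https://example.com/install.sh | bash"

# List every file the script Added (A), Changed (C) or Deleted (D)
# relative to the base image.
docker diff sandbox

# Remove the throwaway container once you've inspected the changes.
docker rm sandbox
```

Because the container is disposable, nothing the script did survives the docker rm, which is what makes this a cheap way to audit one-liner installers.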
Info
Channel: PyCon AU
Views: 25,838
Rating: 4.7372265 out of 5
Keywords: pyconau, pycon-au 2015, Python, PyCon
Id: Fxsq3BciYdo
Channel Id: undefined
Length: 26min 41sec (1601 seconds)
Published: Mon Aug 03 2015