Docker & Drupal for Local Development

Captions
Hello! Today we're going to be talking about Docker and Drupal for doing local Drupal development. Hi, I'm David Snopek. I'm a maintainer or co-maintainer of 20-plus projects on drupal.org, I'm a member of the Drupal Security Team, and I'm the co-organizer of the Drupal414 meetup group in Milwaukee, Wisconsin. I actually gave this same presentation there last week Thursday. If you're ever in the area, definitely stop by; we have the best food of any meetup — the other co-organizer, Steve, makes home-cooked food for every meetup. It's amazing. I'm also co-founder of MyDropWizard, a Drupal support and maintenance company. We don't build sites; we just support and maintain sites that are live, and we do it at a fixed monthly fee as opposed to billing for hours. And my most important role in life: I am dad to my two beautiful daughters, Sasha and Ava.

So today we're going to be talking about development environments, by which we mean the way that you run a web server, PHP, and the database to do Drupal development locally. Some people do development remotely, where they have another copy of the site on the live server, or maybe they're pushing directly to the Pantheon dev environment to do development. But today we're talking specifically about doing local development — that's development on your laptop or desktop computer. In a video presentation this is sort of difficult to do, but when we did this live we did kind of a survey: raising hands, people shouting out what they used for their local Drupal development environment. If you'd like, please leave a comment below this video about what you use personally, or what you've used in the past, and your experience with those things. Some common options are LAMP, Acquia Dev Desktop, Vagrant, or Docker.

All right, so: Docker. Why should I care? To answer that question, first we're going to talk about a little bit of history with regard to local development environments. The classic local development environment is running Apache, MySQL, and PHP on your local machine,
on your normal OS — just installing those packages and running them. Like we said, some examples are LAMP, MAMP, WAMP, or Acquia Dev Desktop. But why do classic development environments kind of suck? Well, you're stuck with whatever your OS supports. Let's say you're doing a project that's PHP 7.1, but there is no PHP 7.1 package for the version of your OS: you can't use that. Or the other way around: maybe there's some really old package — because you're working on a really old production environment — that's no longer possible to get running on your desktop OS. And you can generally only have one of each type of thing without lots of additional work. For example, what if you have two projects, and one uses MySQL and one uses MariaDB, and for whatever reason, in the context of those two projects, that difference is important? Or nginx and Apache, or several different PHP versions for different projects. Frequently Linux will, depending on your distro, give you maybe two PHP versions — like PHP 5.5 and 5.6, or 5.6 and 7.1 or something — but if you need to go beyond that, it can get quite difficult. And while you're doing this setup of your local development environment, of course, you're messing with stuff on your host OS, the OS that you depend on to read email and do life, and you can break stuff there. And this development environment is not transferable to other people or computers. Let's say you have a team and you've just set up the perfect development environment: you can't really give it to your teammate. They would have to sit down and do the setup on their computer, which maybe is a different OS, or a different version of the OS, so they can't do it exactly the same. Or if you get a new laptop, you might spend a day, or two, or three, getting your development environment set back up.

So virtual machines then became a common way to do development environments. I still use virtual machines for a number of projects, and
they solve all of those problems. You get a whole separate OS that you can do anything you want with: install whatever software you need, and you don't have to worry about breaking anything. You can have as many VMs as you need — you could have one per project, so you could have a VM for that MariaDB project and a different VM for that MySQL project. And you can share or transfer them to other computers. It's kind of a pain, because the files are so big and they take a lot of time to export or import, but you can export a VM as a .ova file, put it on a USB stick, give it to your friend, and they can import it and it works, right? It can also allow simulating the production environment: let's say the production environment is a single VPS with whatever software installed; you can set up your VM with all the same software installed and configured the same way. Some examples: there's Drupal VM, a project by Jeff Geerling, which is actually based on Vagrant, for setting up a VM to do Drupal development. Or just using raw Vagrant, or Parallels, or VirtualBox, or VMware, or whatever.

So why do virtual machines suck? First of all, they take whole indivisible blocks of disk and memory. You say this VM gets two gigs of RAM and 20 gigs of disk, and even if, inside the VM, it's only using 500 MB of RAM and 1 gig of disk, all of that is reserved for the VM. Unless you have 64 gigs of RAM like I do, you can really only run one or two VMs at a time. And VMs take a long time to shut down and start up — if you're constantly shutting down and starting up VMs, which can take five minutes or ten minutes or something, that's definitely a pain. It's nice to be able to run some long-running migrate process for one project while you're hacking on something else for another project. And maintaining a VM is just like maintaining a real server, with all the pain that comes with that. So then enter tools like Ansible or Salt or Puppet or Chef, to try and make maintaining
real servers easier, and you end up using them on your virtual machine. While those tools are great, and definitely help with maintaining real servers or VMs, they're complicated and end up being a whole project in and of themselves. And running things in VMs is generally slower than running them directly on your host system. So you've got this great new laptop, but you're only able to use a fraction of its performance, because you have all that extra overhead of using a VM.

So: containers, like Docker, to the rescue. Containers run on your host machine within a separate namespace. Processes, network interfaces, routes — a bunch of different resources can be namespaced, and a process in a container can't see resources in other namespaces; it thinks that the things in its namespace are the only things that exist. A container runs in its own root filesystem. This is some directory on your computer which contains all the files that that container sees as its root filesystem, so the container can have a totally different Linux OS than your host system. Maybe your host system is Red Hat and you have a container running Debian, something like that. But there is only one kernel, and no virtualization or emulation.

So check out this fun graph that I made. Or — not graph: chart, drawing, something, I don't know. With a virtual machine, you have the host kernel on the bottom, and on top of it run normal processes; that's just business as usual. But there's one special process, the hypervisor — that's your Parallels or VirtualBox or VMware — which can boot another kernel and translate any requests that that kernel might make to the normal hardware into something that makes sense to the host kernel. It's like emulating a separate computer. And then on top of that kernel run your VM processes. So you have all these extra layers between the actual PHP that's running your Drupal site and the host's kernel. And of course there are fun performance issues with files too, because it's not just
accessing files directly on your host system: it's making a fake block device, which it's then creating a whole separate filesystem inside of, and that's definitely slow. With containers, on the other hand, you have the host kernel, which runs normal processes, and container processes, which are just normal processes with a separate namespace — it's just a separate field on the process in the kernel that says this process is in a separate namespace. So those processes will compete with normal processes for RAM or CPU, just like any other normal processes would. And they access files on the host system directly: you just say, here's a directory, you mount it into the container, and it accesses it directly. There's no emulation or virtualization or anything in between.

So, like we just discussed, containers run on your host machine within a separate namespace, and they use only the RAM and disk that they actually use — there's no block reserved for them. But limits can be applied, so you can say this process should only ever use up to two gigs of RAM, or set the time slices on the CPU that it should get. They are super, super fast because of those things, and because they are fast, you can run lots and lots and lots of them. And they are easier to manage than a real server. We'll talk about this more later, but creating an image for Docker via a Dockerfile is a whole lot easier than creating a configuration to manage a server with Salt or Puppet or whatever.

So why do containers suck? Because everything sucks just a little bit. There are new concepts — fewer people have a deep understanding of containers than of VMs and normal server management. It's new technology: Docker iterates super-duper fast, so there are frequent bugs, breakages, and new best practices that you need to learn. Something that was best practice three months ago might be deprecated, and there's a whole new thing you need to learn how to do, just because the technology is moving that
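As a sketch of what applying those limits looks like with the Docker CLI (the flags below are real `docker run` options; the image and command are just examples):

```shell
# Cap a container at 2 GB of RAM and 1.5 CPUs' worth of time slices.
# Unlike a VM's reserved blocks, these are only ceilings -- the container
# uses just the memory and CPU it actually needs, up to the limit.
docker run --rm --memory=2g --cpus=1.5 debian:jessie free -m
```

Without those flags, the container competes for resources like any other process on the host.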
quickly. And Docker is big. I'm not going to go into this too deeply, but Docker is a big package of basically everything you need to have a container solution that's useful. It's kind of the antithesis of the Unix philosophy, where you have lots of small things that work together to do a big job; it's a big monolithic thing that includes a lot of stuff. It has image building, it has running containers, it has a daemon that manages all the containers running on a system, it has the ability to run many of those daemons across several systems and have a swarm of containers, and all this stuff. There are alternatives: rkt is a slimmer alternative that just runs containers and doesn't do all the other things, with the idea that, per the Unix philosophy, all of the other things that need to be done to create something Docker-like would be separate utilities that join together. Anyway, just a sort of point of interest.

So next we're going to talk about some Docker concepts. First of all, each container runs a single process. In a Drupal context, that would mean, for example, you'd have a container for Apache or nginx or whatever your web server is, a container for PHP, and a container for MySQL or MariaDB or whatever your database is, and then you would wire them together to make a complete application. There are several advantages to this one-process-per-container model. Each container can potentially run a different distro: you can have your PHP container running Red Hat and your nginx container running CentOS, or whatever the person who created the container thinks is the easiest distro with which to create that container. It really doesn't matter that you have an application composed of several different distros. In practice, my favorite distro is Debian, so any containers I create are going to be built on Debian, but I don't really have any qualms about using someone else's container built on CentOS or whatever, if it's a useful container. It's kind of its
own self-contained thing for just that process, and I can use it even though they're using different distros. You can update them separately, so you can just replace the PHP piece. And actually, because they are separate containers, you could do some clever rolling-update stuff: you could put the new PHP container in and connect it to the load balancer or the web server before removing the other one, and then, once they're both in there working, remove the old one. So you can do updates with zero downtime, and clever stuff like that. Docker will gather logs and set a restart policy for each container independently, and Docker can manage networks, links, ports, and all that stuff. For each of these containers, you get these extra management tools from Docker over each of these processes. And it drastically simplifies the creation of images. We'll talk about this more in a little bit, but this is one of the reasons that creating a Dockerfile is easier than managing a whole server or doing Ansible or Puppet or Chef or any of that kind of stuff.

So, what's an image? A container is started from an image, and an image is basically a binary package containing the root filesystem for that container, plus some metadata. Here are some example images. These are links to the Docker Hub, where people can share their Docker images, and there are a couple of official Docker images that are maintained either by the Docker team or by the maintainer of the project itself. These are links to two official images, and all images have these two parts. This is debian:jessie — that's saying the Debian image at the "jessie" variant. There are a couple of different versions of Debian that you can use, and this is requesting specifically Debian Jessie. Those are actually called tags. And then here's mysql:latest — that's MySQL at the "latest" tag. All images have a latest tag, and potentially some version-based ones, so you can grab specific versions or variants of that image.
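The image-name-plus-tag idea can be seen directly from the CLI (which tags actually exist depends on the registry at the time you run this):

```shell
docker pull debian:jessie   # the "debian" image at the "jessie" tag (variant)
docker pull mysql:latest    # the "mysql" image at the "latest" tag
docker images               # list the images now stored locally, with their tags
```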
The metadata for the image includes things like what process to run, and as what user, and also where the volumes are on that image. Volumes are directories that store persistent data. Images and containers are essentially read-only. Containers aren't actually read-only unless you tell them to be — you can write to them in directories that aren't volumes — but if you ever want to update the image that the container is based on, you'll lose everything that isn't in a volume. So you say this particular directory is a volume, and then that directory will outlive the life of the container. For example, with MySQL, you put the MySQL data into a volume, so that when you update that container, or tear it down or whatever, you don't lose all of your database data. And there's a bunch of other stuff in the metadata that we're not going to talk about right now.

So, it's demo time! Let's start a container from an image. We're going to run this command — first let's dissect it a little bit. It's docker run, then this --rm, which means as soon as this container exits, delete it. So we're just going to run this container one time, and then it'll self-destruct. This -i is for interactive, and this -t is for terminal, so -it is what you put on there when you want an interactive terminal for the process — for the container — that you're running. And we're going to run the debian:jessie container. Let me switch over to my terminal. So: docker run --rm -it debian:jessie, and bam, this new command prompt comes up, and we are now inside a completely fresh installation of Debian Jessie. We can start doing whatever sort of things you would want to do to a Debian Jessie container, such as installing something — which, I picked a weird example, because this one depends on the internet running slow, uh-huh — but let's say we want to install cowsay. So, we've got our fresh Debian Jessie installation; we install some
packages and start doing some things, such as having a giant ANSI-art cow saying things. Hopefully it'll be ready in a second. Okay: cowsay hello. And that did not work, for some reason — a win for live demos everywhere! Did I not install cowsay? How could such a simple demo go wrong? Oh, I bet it's because we are root and it got installed to /usr/games. No? Let's just pretend that that worked. That's a horrible way to do a live demo, to say "pretend that my random choice of thing to do worked." But, you know, we could install — let's install PHP; that'll take a little bit longer, and it's maybe a more realistic example — PHP 5. I guess the main thing to point out is that starting a fresh container based on this debian:jessie image took, like, two seconds. There wasn't an install process, because the image is already an installed Debian Jessie. There's a whole bunch of really clever optimizations going on that Docker does: it doesn't actually make a new copy of the image; it's using a layered filesystem, so all the images based on debian:jessie just sort of add layers on top of a base that never gets modified. So you're able to start containers like that lightning fast, and destroy them also super, super fast. So here, now we have PHP; we can run some PHP code and say hello from PHP. And that worked — that demo worked! So we're going to exit this terminal, and now that container has been completely destroyed; it no longer exists. All that stuff we installed went into a separate layer, and when we exited, it deleted the container, deleted that layer, and the system is completely clean. And all of that was super, super fast.

So, what's next? We're going to do a more complicated one: we are going to run a container with MySQL in it. Let's look at the command really quick. We do docker run, and then -d, which says to run it as a daemon — so it's kind of the opposite of the -it we did before; it's going to run in the background. This -p is mapping ports, so we're going to say we want to
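Reconstructed, the interactive demo session looks roughly like this (package names are my assumption for Debian Jessie; in the talk the cowsay step failed because /usr/games isn't in root's PATH):

```shell
# Start a throwaway interactive container: --rm deletes it on exit,
# -it attaches an interactive terminal.
docker run --rm -it debian:jessie

# Inside the container -- a completely fresh Debian Jessie:
apt-get update && apt-get install -y cowsay php5-cli
/usr/games/cowsay "hello"            # full path, since /usr/games isn't in PATH
php -r 'echo "hello from PHP\n";'
exit                                 # container and its writable layer are destroyed
```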
map port 9999 on our local system to 3306, the standard MySQL port, inside of the container. We're going to give this container a name, so that it's easier for us to manage later — I call it mysql-drupal414, because I gave this presentation at Drupal414. And we're going to do -e to set an environment variable. The meaning of this environment variable is defined by the image: the main way that you configure an image, or tell it something, is through environment variables. This MySQL image has a script that runs, which ultimately starts MySQL, but before it starts MySQL, it looks at these environment variables and does some stuff. We're setting this MYSQL_ALLOW_EMPTY_PASSWORD environment variable to 1, and what that means — specific to this image; I found it in the docs for the image — is that we want to create the root user with an empty password. Which is something you should never do in production, but it makes the demo much easier. And then at the end here we have mysql:latest, so we're asking for the latest version of MySQL.

So let's run that. I'm going to copy and paste it rather than try to type that whole thing. Why do you keep scrolling? The whole terminal — fresh start. All right, so we're going to run that command, and what comes out is this hash that represents the container; that's the container's ID. And we do docker ps, so we can see what containers are running. I apparently left some stuff running that I intended to stop before doing this presentation, so that this screen would be completely clean. I'll just quickly stop them right now — you get to learn an extra Docker command because I made this mistake. Okay, so we do docker
ps, and you can see here's the first bunch of characters of our container ID — you can abbreviate container IDs as long as there are no collisions, kind of like with git commit hashes. It's the mysql:latest image. There's the container's command, which we don't really need to worry about. Here's that port mapping that we set up, from port 9999 to 3306, and here's the name of the container. We're going to ask Docker for the logs for our mysql-drupal414 container, and here is the most recent output from the logs, and we can see down at the bottom that it has successfully started MySQL. It takes about 10 seconds for the MySQL container to actually start up to a point where you can connect to it. And then we are going to connect to it with the MySQL client — I'll just copy and paste that command. So yeah, we're connecting as root to the local host, to port 9999. You'll see that I don't give a password; that's because the root user has no password. And we connect. We can see what databases there are — it'll just be the defaults — and we can create a new database, start using that database, create some tables, whatever you want to do.

And for completeness, because I always like to clean up after myself, we are going to stop and then remove the container. I always feel like with Docker, if I don't remove things right away, I'll forget that I had created that container, because they're so easy to create, and then at some point I'll run out of disk space and be like, oh, what are all these huge leftover containers and images? So we're going to run docker stop with the name of our container, and docker rm with the name of our container, and there we go: it has been stopped and removed, like it never existed.

Let's get back to our presentation. So yeah, I hope, if you've never played with Docker, you were just a little bit amazed. It's fast — it's crazy fast. We had a fresh Debian Jessie installation to play with in two seconds; we had a completely fresh MySQL
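Pieced together, the whole MySQL demo is this sequence of commands (container name and port as in the talk; MYSQL_ALLOW_EMPTY_PASSWORD comes from the official mysql image's documentation):

```shell
# Run MySQL as a daemon (-d), mapping host port 9999 to the container's 3306,
# with a name so it's easy to manage later.
docker run -d -p 9999:3306 --name mysql-drupal414 \
    -e MYSQL_ALLOW_EMPTY_PASSWORD=1 mysql:latest

docker ps                      # list running containers
docker logs mysql-drupal414    # check that mysqld finished starting up (~10 s)

# Connect from the host with the normal MySQL client (root has no password):
mysql -u root -h 127.0.0.1 -P 9999

# Clean up when done:
docker stop mysql-drupal414
docker rm mysql-drupal414
```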
database running in ten seconds. You could very quickly create a whole bunch of new containers: if you wanted to run, like, ten MySQL servers at ten totally random versions, you could do it in about 10 seconds, right? And management of the containers — starting them, stopping them, looking at the list, seeing what the logs are — is relatively easy. So we were able to do a bunch of pretty sophisticated sysadmin-type stuff really fast and easily.

So, one last thing while we're on the Docker concepts topic: the Dockerfile. Images are built from a Dockerfile, and a Dockerfile is essentially a sequential list of commands to run. Images inherit from other images. So, for example, the php:5.6-fpm image — that's to run PHP 5.6 as a FastCGI server, if you're building a web app — inherits from debian:jessie. So it's basically like all of the steps to build debian:jessie run, and then all the steps to build the php:5.6-fpm container run. But each command in a Dockerfile adds a new layer to the image, which is a really clever optimization: if a previous layer doesn't change, it's just taken from cache, because all of the layers exist and they all have hash-based IDs. So iterating on your Dockerfile — making some changes to some of the lines at the end — is relatively quick. You can mess around and change things and experiment with your new image quite speedily; it's very developer-friendly.

So let's do an example. We are going to make a Dockerfile customizing the PHP image. The first line of every Dockerfile is this FROM line, and we're going to say our new image is FROM the php:5.6-fpm image. Then we're going to say RUN, which is really the most common Dockerfile command, and we're going to run this docker-php-ext-install script. This is a script that comes from the PHP image; it's a helper script for installing extra extensions. Basically, the PHP image installs PHP from source, and it does it
with the minimum set of extensions installed. So then, when you're making a real PHP app, you extend that image and run this docker-php-ext-install script with a list of the extensions that you want installed for your image. That way you end up with only the extensions that you actually need. In this example, we're installing the pdo and pdo_mysql extensions, so that we can connect to a MySQL database, and the zip extension, convenient for opening zip files for some reason. Then we're going to COPY our custom php.ini file, which would be stored in the same directory as the Dockerfile, to /usr/local/etc/php inside of the container. That location, and this script, and all of that stuff are things that you would see in the documentation for this PHP image, with instructions on how to customize it. So I think that's all the lines, yeah. Basically, that's all it takes to customize an image — it's super, super easy. If you can run commands in the shell to do stuff, you can make a Dockerfile. So Docker-
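Written out, the Dockerfile just described would look roughly like this (docker-php-ext-install and the /usr/local/etc/php path come from the official php image's documentation; the php.ini filename is just an example):

```dockerfile
# Customize the official PHP-FPM image.
FROM php:5.6-fpm

# Compile and enable only the extensions we actually need.
RUN docker-php-ext-install pdo pdo_mysql zip

# Overlay our custom configuration (php.ini sits next to this Dockerfile).
COPY php.ini /usr/local/etc/php/
```

Each of these three lines becomes its own cached layer, so editing the COPY line and rebuilding doesn't re-run the extension compile step.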
files rock for a number of reasons. It's way easier than doing Salt, Ansible, Chef, whatever, and one of the main reasons for that is that the state-and-update problem doesn't exist in a Dockerfile. What I mean by that is that Puppet or Chef or whatever looks at the current state of your system, looks at the desired state that's in your Ansible playbook or Puppet configuration or whatever, and attempts to get you there, right? And how it chooses to get you there, and whether it's going to work or not, depends a lot on that current state. There are things that could happen to the system — on purpose, or by accident, or whatever — that make running that Ansible playbook do something totally unexpected, because you're trying to get from an unknown, untested state to the desired state, and that's a really complex problem. In a Dockerfile, that problem doesn't exist, because Dockerfiles are always built sequentially: the previous state is always known. You're never really updating an image or updating a container; you're replacing the image that the container is using. So there isn't really an update ever occurring — there's a replace. Which, yeah, I guess is what I was just talking about: images and containers, except for volumes, are essentially read-only; you update a container by rebuilding it on a new image; and images are always constructed sequentially and only ever concern a single process. The other thing that makes Salt and Ansible and Puppet and Chef and whatever really complicated is that they are setting up the whole system, with lots of different processes and all the dependencies between those processes. In a Dockerfile, you're just saying: how can I get my PHP process going? It's laser-focused, so it's super easy to set up the configuration for just that one thing. Simple.

So, how do we do Docker in Drupal? Or how do we do Drupal in Docker? I don't know who's inside of whom, but: how do we Drupal our Docker? Next we're going to talk about Docker Compose. So earlier
we talked about how a Docker app is composed of multiple containers wired together. Docker Compose is a tool to help you do that. It allows you to write a YAML file that describes all the services that make up your application and how they're connected, and then manage that whole thing as a unit. Generally you put the docker-compose.yml at the root of your project. For Drupal, that could be the Drupal root, or maybe, if you keep your Drupal code in, like, a docroot or something, it could be the directory above your docroot. But whatever the top of your git repo is, that's generally where you put the docker-compose.yml file.

We're going to look at an example docker-compose.yml using the images from Wodby's Docker4Drupal, which is one of the — at this point — many sets of Docker images created specifically for doing Drupal. We're going to cover this in three pages, looking at each of the services inside of the docker-compose file. But first: every docker-compose file starts with a version; that's the version of the compose file format. My example uses version 2; version 3 is out, it exists. This just tells Docker Compose what syntax this docker-compose.yml is using — we're doing version 2. Then the next section, which is the most important section, is all of the services: the containers that make up your application.

The first one we're going to look at is this mariadb service, and by putting mariadb here, that's the name of the container. We want it to come from this image, which would be wodby/mariadb:10.1-2.1.0. They use kind of long version numbers on the Wodby images, but the 10.1 is the version of MariaDB, and then this 2.1.0 is, like, the version of the Wodby image — they version all of their images — in case you were wondering what all those random numbers were. So then we set some environment variables: we set the MySQL root password to the very
secure password. We set what initial database we want created: a drupal database, with a user called drupal who has access to it, and the password for that user will be drupal. So that's the whole declaration for the database.

The next service we're going to look at is the PHP container. We use this image, wodby/drupal-php. Actually, something I forgot to mention earlier: image names for the official images don't have this author-slash-image-name form; they're just a straight image name, like php. But because this isn't an official image — it's not created by the PHP project or the Docker team — it starts with wodby, the author: wodby/drupal-php. And this is PHP 7.1, in the 2.1.0 version of the Wodby image. We say that the PHP service depends on the mariadb service. This does two things: it says the mariadb container has to start before the PHP container starts, and it also sets up a link from the PHP container to the mariadb container, so it can communicate with it over this private network that Docker will create for our app.

Then we're going to declare some volumes. This says the ./ directory — we're imagining that we're putting our docker-compose.yml file at the root of our Drupal codebase, so, like, we download Drupal 8 and we put a docker-compose.yml right in the top-level directory. We're going to say we're going to mount the current directory (that's what this dot is) to be the /var/www/html directory inside of the container. And this is doing a bind mount; it's not syncing the files or anything clever like that. It's actually making it so the current directory is bind-mounted inside the container, and accessing those files is exactly the same as accessing any local files — there's no syncing, no delay, no emulation, none of that junk. So that's how we're telling PHP how to get access to our Drupal files. Yeah, so the next
container — or, the next service; let me talk slowly enough to get my terminology correct — is the nginx container, which is based on this wodby/drupal-nginx image. It depends on the PHP container, because it's going to be proxying FastCGI to PHP to actually execute the Drupal code. We're going to map some ports: we're going to say port 8080 on our local machine is mapped to port 80 inside the nginx container, so we'll be accessing our site locally on this port 8080. And we're going to set some environment variables. Again, these environment variables are defined by the image. I don't really know what these all mean, because I didn't read the docs for the nginx image; I just copied this stuff from the Wodby documentation. So I don't know what this one does — I guess we're setting a log level to debug. This one, I do know what it does; it is very important. We're saying that we want nginx to use the PHP container as the backend host, so this is telling it what to connect to over FastCGI. And we can say it's php, because within this Docker Compose project, any containers that are linked can access each other by name — Docker will add something to the hosts file so they can do that. And we're going to say the server root is /var/www/html, same as we did for the PHP container, and then we're going to mount the Drupal files the same way, so that nginx can access them too. So that is our whole docker-
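Assembled into one file, the compose configuration just walked through would look roughly like this (a sketch: the MariaDB and PHP tags are as described in the talk, but the nginx tag, the exact environment variable names, and the root password value are assumptions — check the Docker4Drupal documentation for the real ones):

```yaml
version: '2'

services:
  mariadb:
    image: wodby/mariadb:10.1-2.1.0
    environment:
      MYSQL_ROOT_PASSWORD: password     # the "very secure" demo password
      MYSQL_DATABASE: drupal
      MYSQL_USER: drupal
      MYSQL_PASSWORD: drupal

  php:
    image: wodby/drupal-php:7.1-2.1.0
    depends_on:
      - mariadb                         # start order + network link
    volumes:
      - ./:/var/www/html                # bind mount the Drupal codebase, no syncing

  nginx:
    image: wodby/drupal-nginx:2.1.0     # tag assumed
    depends_on:
      - php
    ports:
      - "8080:80"                       # site served on localhost:8080
    environment:
      NGINX_ERROR_LOG_LEVEL: debug      # variable names assumed
      NGINX_BACKEND_HOST: php           # FastCGI backend, reachable by service name
      NGINX_SERVER_ROOT: /var/www/html
    volumes:
      - ./:/var/www/html
```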
Now let's run it. I have over here (as soon as it decides to get big... cool) a Drupal 8 codebase with a docker-compose.yml in it, which is the exact content that we just walked through on my slides. It almost fits on one screen; we're like one line short of the whole thing fitting on one screen. It maybe seemed like a lot when we were going through the slides, but it's actually a really simple docker-compose file. And now we are going to execute it. We're going to run docker-compose up -d. That says to bring everything up, running as daemons in the background, and it does its work. All of our containers have been created. Now let's try to access it in the web browser; that would be localhost:8080, and here we go, here's the Drupal 8.3.2 installer. We're going to go through the installer manually. In normal development I would probably do the installation via drush, but we're going to do it manually so that we get to see all of the parts. Of course, I have to copy the settings file, go back to the installer, try again... cool. So then we put in the database credentials: our database name is "drupal", the database user is "drupal", the database password is "drupal", and we go to Advanced Options to set what the host is, and the host is "mariadb" because, like I said earlier, Docker
will set up an entry in the hosts file for each of the linked containers. This PHP container is linked to the mariadb container, and so it has an entry in the hosts file for the name of that container. Save and continue, and now it's installing Drupal... okay, you know, sure... and there we go, we have a fresh Drupal 8 installation inside of Docker. I guess more important than the fresh Drupal 8 installation is that it's a fresh MariaDB container, a completely fresh MySQL server, a completely fresh nginx container, a completely fresh PHP 7.1 container, that we spun up super quickly and installed the Drupal 8 site inside of. What else did I want to show... oh, I have the presentation open twice. Oh, docker-compose ps. So earlier we did docker ps; if you do docker-compose ps it shows basically the same information, but only for the containers within your docker-compose project, so you can see our mariadb, nginx, and PHP containers, their state, and the port mappings. Yes, we went and looked at the site. You can stop it with docker-compose stop, which just stops those particular services. And if we want to destroy it, because we're totally done with the development environment, I'm going to add a couple of extra arguments here.
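For reference, the whole lifecycle from this demo, collected in one place (these commands assume you are in the directory containing the docker-compose.yml and that a Docker daemon is running):

```sh
docker-compose up -d                      # create and start all services in the background
docker-compose ps                         # list only this project's containers and port mappings
docker-compose stop                       # stop the services without deleting anything
docker-compose down -v --remove-orphans   # delete containers, plus volumes and orphaned containers
```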
docker-compose down will delete those containers, the -v will additionally delete the volumes, and --remove-orphans... I don't want to explain it fully right now, but it's possible to end up with containers that belong to your docker-compose project but no longer have an entry in the docker-compose.yml, and --remove-orphans just says to destroy those too. I'm just used to typing this command when I want to get rid of everything. So yeah, that's how you would set up a Drupal app with docker-compose. So how do we actually do it at myDropWizard? At myDropWizard we do support and maintenance for an enormous number of sites, so our problems are maybe a little different than someone who's working on just a single Drupal site, but I'll show you a little bit about how we do Docker for our Drupal work. So all that stuff was super easy, but things start to get hard. One of the main things is managing that docker-compose.yml file; that can really become a pain. Take a look at the full wodby docker4drupal template. Basically, the way wodby's docker4drupal works is they have (and it will load eventually... there we go) this template for a docker-compose.yml file, and you're supposed to put it in your Drupal project and then modify it to your needs, and they have a whole ton of documentation on how to do that. If you want to learn more about how Docker works, wodby is a great set of tools and documentation for doing that. But look at how complex this is. This is basically the mariadb thing we saw earlier; for PHP they show you a whole bunch of different PHP images to choose from depending on your needs; here's some commented-out stuff to set up Xdebug; here's a commented-out thing that makes mounting directories faster on macOS; the nginx configuration is mostly the same, but then here's some stuff for getting nginx to connect to the reverse proxy, because let's
say you want to run multiple docker-compose projects at once; they can't all take port 80, right? So you need to have a reverse proxy. Here's some commented-out stuff to do Varnish, Redis, phpMyAdmin, Solr, MailHog (which is a really cool app: it basically pretends to be a mail server, so your app will send mail to it and it captures it, and you can go look at it; first of all, you don't randomly send emails to actual users from your dev site, but you also get to capture those emails and look at them for testing), Node if that's part of your Drupal app for some reason, Memcached, and here's the actual reverse proxy setup. So if you want to start doing advanced Drupal things, which I think are sort of normal Drupal things, you end up spending a lot of time crafting this docker-compose.yml file. That could be totally fine if you're only working on one project, but at myDropWizard we have hundreds of sites that we do support and maintenance for, so we can't specially craft our docker-compose files; we want to have some standards that we just set up and use for everything. (Where is my presentation... it's back here. Yeah.) So, adding Solr and Memcached and Varnish and so on is complex. Also, running different local tools against your site, now that it's containerized, is kind of a hard problem. If you want to run drush against that Drupal 8 site that we set up, you have to write a sort of long magical incantation with docker-compose run to actually run drush against the site inside that container. Or if you want to run the MySQL client against your now-containerized site: in the earlier example we just exposed a port, but you're not going to expose a port for every single one of your sites, because you'd have to have a big table somewhere saying port 49734 is for this site and port 49735 is for that other site, and generally you wouldn't do that in real life.
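As an illustration of that "magical incantation": running drush through docker-compose looks something like this. This is a sketch, not a quote from the talk; it assumes the service names and /var/www/html root from the compose file walked through earlier, and that the PHP image ships with drush and the mysql client:

```sh
# Run drush in a one-off container of the php service (--rm cleans it up afterwards).
docker-compose run --rm php drush --root=/var/www/html status

# The MySQL client, pointed at the linked database container by its service name.
docker-compose run --rm php mysql -h mariadb -u drupal -pdrupal drupal
```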
Figuring out how to run your different utilities and tools against your site can also be kind of complex. And, as we talked about momentarily, there's the reverse proxy that wodby uses; setting that up to run multiple apps at once is kind of required, at least for us, and for anyone who's doing more than one project at a time. There certainly are some people who don't use a reverse proxy and just stop one Docker app and start another Docker app, which is a totally valid thing to do, because they start up and shut down very quickly, unlike VMs. But for us, when we want to run a long-running audit process or something against one site while we're also hacking on another one, or pulling down the latest copy of the production files, we need to work on multiple apps at once. And setting up that reverse proxy isn't super hard, but it's a pain. So we created a tool that we use internally called wizdev. It's a Symfony Console app whose job is to write the docker-compose.yml and run common Drupal tasks, such as running drush and running MySQL, plus add-on stuff; we have some special internal tools, but it also runs things like Behat and so on. So here's some example commands. wizdev install just installs, not via the manual installation like we did earlier, but via the drush site-install command. With wizdev drush, everything after the -- gets passed directly to drush. A sort of cool thing is that you can run wizdev from any directory inside of your project, and it'll find the root of it, where the docker-compose.yml is, and figure out how to run the commands; you don't always have to go to the top level to run your commands. We have this cool db-import that will dump your old database and import the new database; it basically just combines two drush commands. But if you had to write the docker-compose run magic incantation to run those drush commands, that would
be insane. These are all little convenience functions that we add for doing our local development. Oh, and if there's anything that you need to do that wizdev doesn't do, you can add a docker-compose.override.yml, and that will get merged into the docker-compose.yml. This isn't a wizdev thing; this override file is a docker-compose thing. It's super common for projects to commit their docker-compose.yml, but then maybe a developer wants to change some setting for running it on their machine, or to mess with stuff, so they can create a docker-compose.override.yml, not commit that to the repo, and that allows them to do their overrides. So I'm going to do a really short demo. We are going to download Drupal 7 (and jeez, that got small) with this drush command. This is just a normal drush command, nothing to do with our tools; it's just how you would download Drupal 7 and name it something special; we renamed it wizdev-demo. There we go, downloaded. We're going to go into wizdev-demo and run wizdev init, which is how you say "I want to create a new wizdev environment for this particular Drupal site". It asks you some questions; this is sort of new functionality. The question that it's asking is stupid: it's asking what type of site this is, basically what Drupal version you're using, which is something it could figure out on its own but doesn't (it will in the future). Anyway, this is a Drupal 7 site, so I just answered 1. And then we're going to do wizdev install, and this will generate our docker-compose.yml, start up all of our containers, and then run drush site-install to install the site. That will take a moment, mostly waiting for the MySQL container and then waiting for the Drupal install, which it just got to... installation complete: username admin, password admin. On a development site you don't need to be super paranoid about passwords.
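Circling back to that docker-compose.override.yml mechanism for a moment, here's a sketch of what a developer-local, uncommitted override might look like. The service name matches the compose file from earlier; the host port is an arbitrary illustrative choice:

```yaml
# docker-compose.override.yml, kept out of version control.
# Compose merges this with docker-compose.yml automatically on `up`.
version: "2"

services:
  mariadb:
    ports:
      # Expose MySQL on the host so a local GUI client can connect to this one project.
      - "33061:3306"
```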
And then we're going to go find the site at wizdev-demo.local... come on, why is this taking so long to load... okay, there we go, and now we have a fresh Drupal 7 installation with a couple fewer steps than if you were doing it manually. And then, like I said, we can run wizdev drush -- sql-cli. That's a fun command: it starts the MySQL client. And like I said before, we can go any number of directories deep and run the sql-cli command or whatever, and it will still work. I'll show you a little bit under the hood: when we ran wizdev init, it created this wizdev.yml file, which is basically where wizdev's configuration lives, and all it has for this project is that it's a Drupal 7 site. And then it generated this docker-compose.yml, which is a little bit bigger but quite similar to the docker-compose we looked at using the wodby images. Minor differences: this one uses a slightly different compose file version, and we use our own images rather than the wodby images, because we have some customizations. What else did I want to show you... I guess I didn't have anything super sophisticated left to show, so I'll come here to stop it, and we can do wizdev destroy, which just destroys everything. So that was a super simple demo, but the long-term vision for wizdev, I think, is going to be a lot more interesting. Something we don't have currently, but are working on adding, is a plugin system to add commands or hooks or tools. Since it's a Symfony Console app, any plugin that you would add would be something that you would write in PHP, Symfony-type code. We're thinking about actually taking the Drupal plugin component, so that creating them would be just like creating a Drupal plugin, which would be super cool. One idea we have is something like wizdev add-on add solr, which would add the entry to the wizdev.yml that says "okay, we want to use the Solr add-on", and then it would generate a docker-compose.yml that
includes Solr, and we could maybe have some extra commands that become available once you enable that add-on, to manage your Solr instance. Right now, what we're actually doing for Solr is we have a magic snippet that we copy and paste into our docker-compose.override.yml, but that's exactly the type of thing that we hate doing, so we're going to automate that away. Here's a command that actually exists but is useless for anyone but us: wizdev pull. We have a tool called Siteon that manages backups and deployments for all of our customer sites. It allows us to deal with our customers the same way no matter how they're hosted: whether it's Pantheon, or some custom server that works in its own custom crazy way, we can deploy to them and back up from them without having to think about it. We just run this Siteon tool: siteon backup, siteon pull, siteon push. And so we have this companion command in wizdev where we do wizdev pull and then the Siteon name of the site, and it pulls down the latest backups of the site's code so that we can super quickly create a development environment based on one of our customer sites; it's just super useful for us. I envision, in the future, being able to make that pull command pluggable: it would be really easy to have pull work for any Pantheon site, for example, but maybe we could have a plugin system where people could add their own pull logic or something. And then, yeah, we don't have a Behat command yet; we have a magic incantation, again, that we copy to run Behat, but this could be another interesting add-on, to automate that magic incantation away so we don't have to keep those docs anymore. And then there's this idea of hooks, which we also haven't implemented yet, where, say, after you do wizdev db-import, it would run a bunch of drush commands or shell commands or something to get the site ready for
local development, in case we need to disable something or things like that. A self-managed reverse proxy: we're in the process of refactoring wizdev from its current version, the one we use day-to-day, to this version 2, which is what I was showing you in the terminal, and we don't have a self-managed proxy in version 2 yet; we haven't written one, but we have some really cool ideas for how it could be configured automatically, so that you don't actually have to set it up: just by running wizdev on a new site for the first time, it would automatically set up and configure the reverse proxy. Also we have this idea, not currently implemented, that it could be plugins all the way down, kind of how Drupal is sort of modules all the way down: you have the Node module, which provides functionality that's super core to what Drupal is, but it's itself a module. So we could have all of the core wizdev commands also come from plugins. Anyway, we are planning to open source wizdev once our version 2.0 refactor is complete; it may be complete right now by the time you're watching this video, I don't know when you're watching this. We needed this tool for ourselves. There are actually a lot of existing tools now for doing Drupal development with Docker (the one that's probably most similar to wizdev is a tool called Docksal), but a lot of the tools didn't work quite the way we needed, so we ended up creating our own thing anyway, and I'm going to share it with the world in case someone else finds it useful in the now-crowded space of Drupal Docker development environments. Anyway, I encourage you to check it out when it's available. Thank you, that is all. If you have any questions or comments, please leave them in the comments area below the video; I look forward to hearing from you.
Info
Channel: myDropWizard
Views: 8,725
Rating: 4.9310346 out of 5
Keywords: Drupal, Docker
Id: e-Bhjh7P5bg
Channel Id: undefined
Length: 59min 58sec (3598 seconds)
Published: Tue May 23 2017