(upbeat music) >> Hello everyone, my
name is Michael Irwin and I'm excited to be
here for DockerCon 2020 from the comfort of my own basement. Hope everybody's doing
well and I'm calling in from Blacksburg, Virginia. And today we're going to be
talking about simplifying all the things with Docker compose. So just a little bit about myself. Like I said, my name is Michael Irwin and I work at Virginia Tech, and I've been here since 2011. And I'm currently leading efforts to build what we're calling the
common application platform. I won't get too much into it, but it's a lot of
Kubernetes and Prometheus, and just the CNCF stack. And we're really excited for this. I also have the opportunity to teach with the computer science department. I'm an adjunct faculty instructor there, and just this past semester taught a class on containers. So I'm really excited to be able to share some of these learnings and
expertise with the students here at Virginia Tech. I'm also pretty heavily
involved with the community. I am a Docker captain and
also run the community meetup. So anybody coming from the
Blacksburg meetup, hello. Great to have you here. Outside of work, I'm a family guy, a husband and dad to four kids. And in fact, today, the day that I'm recording this, is my anniversary. And we've got a fifth kiddo on the way. So are we going to break the girl streak or not? Time will tell, and you'll have to follow me on Twitter if you're interested in finding out as well. Alright, so let's get into our talk. Our talk is going to be
focusing on the efforts of a dream team that we're just
calling Alice and Bob here. Alice and Bob have been working on an application, and they're still pretty new to containers. And so what they're most excited about is the ability to have
the same environment locally as they do in production. So if they build a container
and it works on their machine, they've got pretty good assurance it will work in production. Now the application
that they're working on will probably look familiar
to at least some of you, as it's the new Getting Started tutorial that ships with Docker Desktop. And so whether you use it for a grocery list, or a to-do list, or whatever, who cares. It's just a simple app. Now, Alice and Bob, as they were getting started with containers, followed a workflow that looks something like this. And while some of you may
laugh at this workflow, it is a workflow that I
see quite often with teams that are getting started with containers. So as they're writing code, they build their app using docker build; in this case, they're just tagging it with my-app. Then they run their container, and they expose the port, mapping 3000 onto their host; it's a Node-based app in this case. And then they open up their browser and check it out. And they recognize, oops! Some things aren't right. So after making code changes: let's build our new image, let's stop the previous container, let's remove the previous container that's running. (That's why they named it, so that it's a little easier to stop and remove; they don't have to look up the container IDs.) And then let's start the new one, check out the changes, open up the browser, find out that it's not quite working, and just keep going through the cycle.
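For reference, that cycle looks roughly like this on the command line (a sketch using the names from the talk, not verbatim from the repo):

    docker build -t my-app .
    docker run -d -p 3000:3000 --name my-app my-app
    # ...spot a problem, change some code, then tear down and repeat...
    docker stop my-app
    docker rm my-app
    docker build -t my-app .
    docker run -d -p 3000:3000 --name my-app my-app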
Now, the advantage here is that, again, if it works locally, they know it will work in production. But man, that is a big feedback loop. And so the first best practice we're going to talk about is: use Docker Compose to replace the docker run commands that you're calling over and over and over again. And the reason we can do this is because of the way Docker Compose is structured and the way that it works. So what actually is Docker Compose? I'm a big believer that
before you really start to use a tool, you need
to understand a little bit of how it works, its motivations, and how it fits into the ecosystem. In order to do that, we need
to understand Docker itself a little bit better. So the Docker architecture again, we're not going to get super deep in this, but it's important to recognize
that the Docker daemon is at the base of this. And the Docker daemon is
what is actually starting the containers, it manages
the volumes and the networks, and DNS and lots of other
pieces that are going on here. Now, the Docker daemon
again, is pretty low level. And so what Docker did is basically wrap the daemon with a REST API. And that REST API is available at /var/run/docker.sock. And the Docker CLI is actually
just making requests to that. So let's take a little sidebar here. I'm going to jump over to a terminal and run a curl command asking what containers are running on my machine. With the curl command, I've added the Unix socket flag to say that instead of curling against a remote endpoint, I'm going against a local socket. And when I run this, I see in the response that I'm getting an answer from the Docker daemon, and, well, I've got no containers running right now. That'll change soon. I can also clear this, go back, and change the endpoint to version, still piping it to jq, and we can see the full details of the Docker engine that's running.
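For reference, those two requests look roughly like this, assuming the default socket path and that jq is installed:

    # list running containers straight from the Docker Engine API
    curl --unix-socket /var/run/docker.sock http://localhost/containers/json | jq
    # ask the daemon for its version details
    curl --unix-socket /var/run/docker.sock http://localhost/version | jq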
Again, I can see, through the API, all the different things that are available for what's running on the daemon. If I switch back over to Chrome: Docker documents all of this. So if you wanted to create your own custom plugin or custom tool that talks to the Docker API directly, well, here's the API reference to do it. Now, most people aren't going to be interacting with that directly, and that's okay. But /var/run/docker.sock is where all of that is happening. In fact, that's what the Docker CLI is using: when you do a docker ps, it's making a REST query against that socket and then just formatting the result in a pretty way in the CLI. And that's how it's working. So why do I mention that? Because, well, we can
swap out the Docker CLI for something else, like Docker Compose in this case. And I hope you enjoyed my graphical skills: scratch out the CLI, write in Docker Compose. Docker Compose is a higher-level abstraction tool that lets us manage multiple containers, and we can define our environments using YAML. With it being a YAML file, we can version control it, and now I can share a YAML definition with somebody else, and they can just use Docker Compose to spin up their environment and go. The last thing I want to mention here is that Docker Compose
is an open specification. Meaning that Docker has
made the Compose spec open to anybody, and there are a lot of industry partners working to extend that specification to do who knows what. And I'm excited to see where
it might go in the future. Some of the basic Docker compose commands that we'll be using
throughout this session, the most frequently used, are: docker-compose up, to spin everything up; docker-compose down, to tear it all down; and docker-compose logs, where we can see the log output from all of our containers in a single stream.
So now, going back to Alice and Bob: now that we've seen the tools, how can Docker Compose help them where they are right now? Previously, they were doing a docker build and a docker run. So let's make a compose file that makes this a little bit easier. We'll make a compose file, named docker-compose.yml, or .yaml, depending on your organization and your preference. And the first thing usually
in these compose files is a version. Now the thing I want to
mention here is that the version is the schema version for the rest of the document: what flags, what options, what config is allowed to be specified, based on that version. And for the most part, I always recommend people use the latest version. There's not really much reason
to use an older version. Compose has always been very good with backwards compatibility. So in this case, we're
going to use version 3.8. And the next thing we do
is we define the services. The services, think of them as: what containers are we going to run? In this case, we want to run a service that we're just going to call app; we just picked that name, it doesn't really matter. Now, when we specify a service, we have to give it an image: what image is it going to use when it runs? In this case, our image isn't built yet, so we can use the build directive to tell Compose to build an image and then use the output of that build as the image for this service. And so we specify build: ./, and the dot-slash just means the current directory; that's where the Dockerfile is found. So with this compose file, all we need to do is add the port mapping, and we're good to go. Now, one thing I'll mention about the port mapping is that this is, again, the short syntax that
came from the CLI. Compose does have a longer syntax for many of these things, including volume mappings, etc. So if your team keeps getting confused about which side is the host port and which side is the container port, you might want to consider switching to the long syntax. Totally up to you, though. And with this, we have a compose file, and we can spin it up.
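As a sketch, the compose file described here has roughly this shape (the exact file lives in the talk's repo; this is an approximation):

    version: "3.8"
    services:
      app:
        build: ./        # build from the Dockerfile in this directory
        ports:
          - 3000:3000    # short syntax: host port on the left, container port on the right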
Their updated workflow now looks something like this: every time they make changes, all they have to do is run docker-compose up, and we add --build to tell Compose to rebuild the image every time. If we don't do that, it's just going to build the image once and then reuse it every time going forward. Now, one of the things you
might recognize is, well, this is still kind of a bad workflow. The feedback loop is still pretty long, and you're right, it is. So one of the things I highly encourage people to do is, well, shorten that feedback loop. Use tools that make sense. Build a dev-focused container that watches for file changes and then responds to them; we can mount our source code into that container and make things much faster. In order to do this, you need to explore what tooling and what capabilities are available for your framework. In our example here, we're using Node, so we'll be able to use tools like nodemon to respond to file changes and restart the Node server, etc. But every framework might be a little bit different, so do some exploring there.
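As a sketch of the Node case, a hypothetical package.json script wired up to nodemon (the script name and entry point are illustrative):

    {
      "scripts": {
        "dev": "nodemon src/index.js"
      }
    }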
So what does this look like? We're going to update our compose file from the one on the left to the one on the right, and we're swapping things around a little bit. We're now specifying a specific image: node, the long-term support tag. We're running a command, and the reason I'm invoking a shell here is to allow me to do multiple things: I can do both an npm install and then a run dev. That means anyone can clone this repo and spin it up without any kind of first-time setup; it'll all just work at once. (In case you're not familiar with Node, npm install is just installing all the dependencies for my application.) I set my working directory, I mount the code in, and things are good to go. One thing I'll mention here is I've got a volume for node_modules, so that all my modules are stored in a volume on the same file system as the VM that's used for Docker Desktop. This is mostly a file system performance benefit, and there are a lot of other good things in the works with Docker in that space.
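Putting that together, a dev-focused service might look roughly like this (paths and the dev script name are assumptions, not verbatim from the repo):

    version: "3.8"
    services:
      app:
        image: node:lts
        # a shell lets us chain two commands: install deps, then start the watcher
        command: sh -c "npm install && npm run dev"
        working_dir: /app
        ports:
          - 3000:3000
        volumes:
          - ./:/app                          # mount the source code in
          - node_modules:/app/node_modules   # keep modules in a named volume for performance
    volumes:
      node_modules: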
So I'm going to switch over, and we're going to actually take a look at this. In case you're following along in my git repo, which I'll post a link to in chat and on Twitter and whatnot, I'm at the compose tag right now. So let's jump over to Visual Studio Code, and we see the Docker Compose file we were just looking at. And with this, I can run
docker-compose up --build, and this should run fairly quickly. And now if I go to my browser, at localhost:3000, we see the app start up, and I can add milk and eggs to my grocery list. Hooray! It works. And now if I go back
to Visual Studio Code, and if we actually look at one of the files here, app.js, and make the 'Add one above' message more exciting, you'll see that it automatically restarted my application. And so if I jump back to Chrome, delete these two items, and refresh the page, we'll see that it is more exciting; I need to 'add one above' to actually get the full use of my list here. Again, pretty simple, but it allows me to quickly
now see changes being applied, and now my feedback loop is much tighter. So let's jump back to our slides. Now the thing that Bob
recognizes is, well, we've got our environment described in two different locations. We've got the compose file that's doing some stuff, and now we've got our Dockerfile, which has some other stuff. What if the images, the base images, change, etc.? Great observation, Bob. So what we can do here
is use multi-stage images to solve this problem. And typically, multi-stage images are used to separate build time
and runtime dependencies. So what happens is, in our Dockerfile, we can specify basically multiple FROMs to do different things. Okay, and this isn't going to be a deep-dive course on it, so you'll definitely have to look things up. But, for example, in the Java world, if I were doing a Java app, I need to compile my Java code, so I need a JDK. But when I'm actually going to run in production, I don't need a full JDK; why don't I just use a JRE? Multi-stage builds allow me to do that: I can have one stage that does the compiling and another stage that has just the runtime dependencies.
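As a hypothetical sketch of that Java example (image tags and paths are illustrative):

    # build stage: full JDK and build tooling
    FROM maven:3-jdk-11 AS build
    WORKDIR /app
    COPY . .
    RUN mvn package

    # runtime stage: just a JRE, copying in the built artifact
    FROM openjdk:11-jre
    COPY --from=build /app/target/app.jar /app.jar
    CMD ["java", "-jar", "/app.jar"]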
But we can also do the same thing for our local dev environments. And then in Compose, we can specify a target, a specific stage, for my dev environment. What does that look like in my Dockerfile? Let's take a look. So this was the Dockerfile we had before: simply copying our stuff in, doing our yarn install setup, and setting the default command. And now what we're
doing is we're splitting things up a little bit. So our final production
image, which is at the bottom, if you actually were to look at it, it has all the same steps. But one of the things
that we did was we added yet another stage, that's our dev stage, and this is installing
all of our dependencies, not just the production dependencies, and then it has a different default command. What that allows us to do is go back to the compose file, and now we can pull out the image, the command, the working directory, all those things, and just swap them out for another build directive. Now, the difference between this build directive and what we had earlier is that we need to define the target now. We want that dev target, and the way we do that is by adding a little bit of extra config: our context still says we want to use the Dockerfile in the current directory, but now we want to target the dev stage.
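A rough sketch of the shape this takes (the stage name comes from the talk; the commands are approximations of the Getting Started app, not verbatim):

    # dev stage: all dependencies plus a file-watching default command
    FROM node:lts AS dev
    WORKDIR /app
    CMD ["npm", "run", "dev"]

    # production stage: copy code in, install only production deps
    FROM node:lts
    WORKDIR /app
    COPY . .
    RUN yarn install --production
    CMD ["node", "src/index.js"]

And the compose side then points at the dev stage:

    services:
      app:
        build:
          context: ./
          target: dev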
So let's jump back over to Visual Studio Code and take a look at this. I'm going to tear down the stack I had from before with docker-compose down, clean my environment, and git checkout the compose multi-stage tag. Okay, we'll close that. And... I didn't actually close it. If we look at my compose file now, I see the build is targeting that dev stage, and my Dockerfile has the multi-stage build. Now, if I do my docker-compose up --build, we'll see that it tries to do a build. I ran these builds earlier,
so it's already cached. And you might notice this output looks a little different; that's because I'm using BuildKit for my builder. But my-app is up, and it's running. And everything's defined in
this single Dockerfile now. I don't have the environment split between two different areas. We're not going to open up the browser, because it's working the
same way at this point. But now, again, it's all
defined in one location. Bob feels better now. And you're welcome. So now Alice comes and says, well, we want to actually add a
database to our application. Right now, all the state is just stored in memory in the backend. And we're at the point now that we want to actually scale it up and add a database to it. So how can we do that? Well with compose, we can actually just add another service. But one of the things I
want to highly emphasize is how you start introducing other services, how you configure those other services, and how you configure your app to talk to those services: to know where to find them and to... credentialize? Credentialize, that's a new word. Anyway, how it knows what credentials it should use. The twelve-factor app methodology that came from Heroku is a fantastic resource; it's at the website 12factor.net. One of the core tenets of twelve-factor is about configuration, and it says that configuration should come from the environment to the application. And so I really encourage you to think about how you can configure your app. So once you build your image, whether it's for local dev, or
a pre-prod staging environment, or your production environment, you can use the same image everywhere and just reconfigure it. And there are really two main ways to do this: using environment variables for configuration, and using files. I highly, highly encourage people to use environment variables when there's nothing secret involved, but use files when
it's more sensitive: credentials, API keys, private keys, that kind of stuff. Now, a lot of times you'll see environment variables and files being used together. For example, you set an environment variable that tells your app where to find the credential file for your MySQL database. Using them together gives you a lot of flexibility, and your app isn't tightly coupled to a specific environment.
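A tiny sketch of that pattern (the variable names here are hypothetical):

    environment:
      MYSQL_HOST: mysql                         # plain, non-secret config
      MYSQL_PASSWORD_FILE: /config/db-password  # points at a mounted file holding the secret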
What I then encourage you to do is the same thing for your local dev environment: practice that level of indirection, because then you'll know it works. And what you can do in your local dev environment is just plug in dummy credentials. In many ways, that also helps simplify a lot of things, which we'll see here in just a second. Okay. So for our application here,
our team wants to use MySQL, and great, that's fine. The compose file is wrapping across two different windows here, but the window on the right defines the MySQL service. We specify the image, and we're mounting in config that's just static files: what's the username, what's the password, etc. The MySQL image documentation says we can set certain environment variables to tell it what the user, the password, the database, etc. should be. And then we also mount that same config into our app, and our app reads those environment variables and uses them to configure itself.
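Roughly, that part of the compose file has this shape (the official MySQL image does support these *_FILE variants of its environment variables; the paths and file names here are illustrative, not verbatim from the repo):

    services:
      mysql:
        image: mysql:8
        volumes:
          - ./config:/config              # static files holding the credentials
        environment:
          MYSQL_ROOT_PASSWORD_FILE: /config/mysql-root-password
          MYSQL_USER_FILE: /config/mysql-user
          MYSQL_PASSWORD_FILE: /config/mysql-password
          MYSQL_DATABASE_FILE: /config/mysql-database
      app:
        volumes:
          - ./config:/config              # the same config, mounted into the app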
Okay, so let's take a look at our application. We'll switch over to Visual Studio Code, tear this down, and spin it up: sorry, git checkout mysql first, then docker-compose up. And if we look at our compose file, we see MySQL. And if we look at our config directory, we just see files for the user, the password, the database, etc. And if I switch over to
Chrome real quick and refresh, we'd see milk and eggs already here. That's because I put them
into my database earlier. And if we switch back to
Visual Studio Code here, I can exec into that MySQL container: I'm going to log in as the todo user with a password of secret, use the todo database, and select * from todo_items. And we see those items here. So our app is using the credentials here. And if I go over to Chrome (I just won't switch my view here) and delete eggs, then re-run the select * query, I see that it's been deleted. So my app is using
the same config files to configure both the database as well as the application itself. I encourage you to figure
out some way to do that to really test out how
your app gets configured for different environments, and then get your dev environment to support that methodology. Yeah. Great, now everyone can run it locally without worrying too much. Okay. You're welcome. That was Alice again. So our manager comes
to the team and recognizes: hey, our app isn't really a standard React app. If you actually take a look in the code base, it's, for the most part, just a single JavaScript file that has the full React app. It's not doing the full webpack setup and all that kind of stuff, which is fine for a little demo. But since we're using Compose, again, it's just another service to spin up. A lot of teams, when they're
trying to add other services, are trying to figure out: how do I mash the back end and the front end into a single container? But you've got different tool chains and different services, lots of things going on, so we'll keep them separate. Who cares if they're on separate versions of things? That's the whole point of containers: let things stay separated. And then what we can do is put a smart proxy in front of them to figure out what should
handle each request. I mentioned earlier that I love Traefik, and this is where Traefik gets plugged in. Traefik is a great reverse proxy that uses the environment around itself to configure itself. In most situations, there's no static file that says a request with this host name gets handled by that container; it's learning it from the environment. In fact, we'll see this in just a second: it uses the Docker socket, remember that API we were talking about earlier, to query for running containers and find out what's coming and going, and we'll see how it does it. But there's a hint on this slide: it uses labels on the containers to provide that configuration.
So if we look at our compose file, we actually see that I'm plugging in Traefik and telling it to run with its Docker provider, so it's going to watch the Docker socket. And then I just add a bunch of labels to my different services. In this case, this is my back end, and I'm saying any request that has the host name localhost and a path prefix of /items, which all the API endpoints for this simple app have, this container is going to handle. Then I spin up my React service and let it handle all the other requests.
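A sketch of that label-based setup (Traefik v2 syntax; the router names and exact image version are illustrative):

    services:
      proxy:
        image: traefik:v2.2
        command: --providers.docker        # configure routing from the Docker socket
        ports:
          - 80:80
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
      app:
        labels:
          - "traefik.http.routers.api.rule=Host(`localhost`) && PathPrefix(`/items`)"
      frontend:
        labels:
          - "traefik.http.routers.web.rule=Host(`localhost`)"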
So let's jump over to that. I'm going to quit out of MySQL here, and let's git checkout the split frontend tag. And... hope I didn't tear everything down. I've made a little alias on my machine that stops all the running containers, so I'm going to run that real fast, then docker-compose up, and you'll see a new service has started. While that's starting: the new service is the front end, and I updated my Dockerfile yet again with another stage for my front end dev environment. And for its routing rules, it's going to handle our requests to localhost. But you'll notice it's the same for the back end as well, just a little more specific. So the back end is going to get all requests that are /items, and then anything that doesn't match /items will get passed on to my front end. Now, if I open up my Chrome browser, I'm switching back to normal ports now, so just localhost. And if I go to /items, for example, I see there's my item. So that's actually being
handled by the API. Pretty cool. And Traefik lets me do all this. Now I have a real React app I can make changes to; it's doing webpack and live reload and all that kind of stuff. And all my devs have to do is simply docker-compose up, and it's working for them. Great, this will help us build better apps and hire qualified front end developers. I certainly think so, because
now it's an actual front end development environment at this point, and they'll be able to help out. So now our manager asks: can we add some end-to-end tests to this? And that's a great question. Docker Compose can really help us do this, because at the end of the day,
it's just another service. And so what we're going to
do is identify the tools; maybe our organization already has favorites for testing, whether it's Selenium or Cucumber or Jest. Anything that we can use to do end-to-end tests, we can support. Selenium actually already has several container images with Firefox and Chrome bundled into them, which should make it pretty easy to do these kinds of tests. The cool thing about Docker Compose is we can also specify a service to watch. Compose will spin up our entire stack and watch a single service, and when that service exits, it will automatically tear everything down. And then whatever exit code came from that service will be the exit code from Compose. This works really well for CI builds: if our test container fails, that means Compose fails, which means our CI pipeline fails.
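The CI invocation is roughly (the file and service names here come from the demo later in this talk):

    docker-compose -f docker-compose-test.yml -p test up --exit-code-from test
    echo $?   # non-zero if the test service failed, which fails the CI job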
And we use this all the time, and it's magical. Some sample architectures of what this might look like: if you have just a simple app and a database, imagine an API or something like that, and you just want API tests, by all means, do so. Some of our applications
are a lot more sophisticated, in which we might have a container that's got Selenium code in it, and it's talking to a Selenium server that's using one of their images with a Chrome or Firefox browser in it. And that's actually going against the proxy and opening up the website. We also use a mock CAS service; in higher ed, the CAS authentication protocol is used quite a bit, and so we've got a mock service that allows us to mock authentication and do persona-based testing. And then we can put our
credentials in our code base and not worry about them being compromised or anything like that. So I'm going to jump over
to the API test tag here. We're going to tear everything down first, and then git checkout the API tests tag. Well, that's okay. And what we'll see here is there's no change to the original compose file, but I added a different compose file: there's a docker-compose-test.yml. The reason I did this is
typically these tests will run as part of a CI pipeline in
which I want to build an image, or I want to run an image that was built as part of my pipelines. And so I actually have an
image named dc2020 here. Now, obviously, that's not a name that will be in Docker Hub. But you can swap that out
with whatever image name your CI pipelines might be using. And then we've got a
test code base down here, which is actually building
the API test project here. And it's just going to be
running a bunch of API tests. So I can run this by doing docker-compose -f to specify this other file instead of the default. I'm going to give it a project name of test, and use --exit-code-from with the service name test. This will spin up; it'll
just take a couple seconds. There's not a lot of tests
here, it's pretty simple. And I see that all my tests pass, my test container exits with a code of zero, and if I echo the last exit code, I also get a zero. So hurray! It works. My CI pipeline would work, and we'd be happy. You rock. Alright, so switching back to slides now. To wrap up: Docker
compose can really be used to simplify our dev environments. And again, the goal here
should be that your developers can git clone, docker-compose up, and start writing code. And you want to really focus
efficient as possible, while at the same time using environments that are going to be the same when deploying to production, or at least as similar as possible. With Compose, as new services or new needs arise, it's really easy to just plug them in; they're just another service. It's really flexible, letting you change the way you think about things or the way you develop certain applications. I mean, that's kind of the whole point. Compose can then also be
used in automated testing, and there's going to be a talk later today from Andrew Crumpler, who's
going to be talking about integration testing
with Docker Compose,
check out his talk as well. So with that, I'm going to
wrap up and say thank you and open up the questions. There's a chat that's
been going on throughout, and it will go on for several days after the meetup. Otherwise, feel free
to hit me up on Twitter @mikesir87, and say hello. And with that, I thank you. And I look forward to
talking to you later. (upbeat music)