A deep dive into using Tailscale with Docker

Captions
About 10 years ago you were pretty unusual if you were running a container in production; these days you're unusual if you're not. In today's video we're going to talk about Tailscale and Docker: how to add a container to your tailnet, why you might want to run Tailscale in a container in the first place, and how to use Tailscale Serve and Funnel to expose those applications from your tailnet directly to the public internet. Today's video will be quite a long one, so I've put chapter markers down there for you to skip around and find the bit you need. With that, let's get started.

The first question that probably comes to mind is: why would I even want to put Tailscale in a container in the first place? By putting a container directly onto your tailnet (that's our term for a Tailscale network, by the way), you can not only control access via ACLs but also replace reverse proxies. Yes, for those of you who have been putting off learning reverse proxies, I've got good news: you can skip that step entirely. You can also access any other service on your tailnet from these containers. So you can have something running in your basement at home connected to something in the cloud. Take, for example, a GPU workload, some AI job running in AWS or GCP: you don't want to pay their GPU prices, but you've got a GPU sitting in your gaming computer right here and you want a way to connect the two together. Using Tailscale you can do just that; think of the possibilities. And it all happens through Tailscale's encrypted, WireGuard-based tunnels, so you don't have to mess around with port forwarding, complex firewall rules, or dynamic DNS. That all becomes a thing of the past.

Tailscale provides an official Docker image that you can find on Docker Hub as well as the GitHub Container Registry. It exposes several parameters through environment variables; there'll be a full list of the variables exposed by the container down below. There are two primary methods for adding a container to your tailnet. Well, three, if we include logging into the container manually, running tailscale up, copying the resulting printout, logging in through the browser, and redoing all of that every single time you bring up a container. The two methods we're covering today are both programmatic: ones you can repeat without any manual intervention. The first option is an auth key and the second is an OAuth client. As for which one is right for you, it kind of depends, I'm afraid; it's one of those classic situations where it really does depend on what you're going to be doing with it. So let's dig into the differences. Both methods support most of the same things; there are just some nuances to be aware of when choosing between them.

First, API access. An auth key grants full API-level access to any client that authenticates with it, whereas an OAuth client limits API access via scoping. That means, for example, that if I have an OAuth client secret scoped to only allow modifying DNS, or only reading and writing devices on my tailnet, or only reading audit logs, that's all it can do, and you can combine those scopes however you like.
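As a rough illustration of that scoping difference (this exchange isn't shown in the video; the endpoint and fields below follow Tailscale's public API documentation, so treat it as a sketch), an OAuth client secret is first traded for a short-lived access token that only carries the scopes it was created with, and that token is what makes the API calls:

    # exchange the OAuth client credentials for a scoped access token
    curl -s -d "client_id=${TS_CLIENT_ID}" -d "client_secret=${TS_CLIENT_SECRET}" \
      https://api.tailscale.com/api/v2/oauth/token

    # use the access_token from the response; a devices:read scope allows this call and nothing more
    curl -s -H "Authorization: Bearer ${ACCESS_TOKEN}" \
      https://api.tailscale.com/api/v2/tailnet/-/devices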
Whereas with an auth key, because it grants full API access to anybody holding the secret, they can do whatever they want. So if auditors are part of your regular vernacular, it's likely that this alone rules out auth keys in favour of OAuth clients.

The next thing to look at is expiry and lifespan. An auth key has a maximum lifespan of 90 days. A lot of people think this means anything authenticated with that auth key also automatically expires after 90 days; that isn't the case. Auth keys are separate from node keys: under the hood, when you authenticate a client to your tailnet, node keys are generated. That's the WireGuard key-exchange magic underneath that makes Tailscale work, and it follows a completely separate expiry process from the authentication token you used to kick it off. A lot of people use OAuth clients simply because they never expire. In practice that means you'll need some way to regularly rotate those secrets so that whatever security posture or risk profile you have is still met; but if you want a set-it-and-forget-it solution, an OAuth client might be right for you. On the other hand, there are plenty of situations where you want to give someone temporary access to add something to your tailnet and then have that auth key automatically revoked.

The next important difference is tagging. When you add a node to your tailnet it must have an owner, whether that's a user or a tag. When you use an auth key, by default that identity is assumed to be the person who clicked the "Generate auth key" button in the UI, and the node is added to your tailnet as that user; you can add tags to an auth key if you'd like. With an OAuth client, the secret doesn't itself add the node to the tailnet; it has to assume the identity of a tag owner instead, so the node is owned by the tag assigned at secret-creation time. This is why in your ACLs you'll see tags listed under tag owners: the OAuth client assumes ownership of a specific resource on your tailnet via the tag assigned to the client secret when it was created.
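For reference, the relevant part of the tailnet policy file is just a tagOwners entry; a minimal sketch assuming the tag:container tag and autogroup:admin owner that come up later in the video:

    {
      "tagOwners": {
        "tag:container": ["autogroup:admin"]
      }
    }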
So let's walk through adding a container to our tailnet using an auth key. I'm using the Visual Studio Code Tailscale extension here to access an Ubuntu virtual machine with Docker pre-installed and Docker Compose ready to go, and I have a sample Docker Compose file. There'll be a link to a GitHub repo down below with all of these resources, as well as the blog post that accompanies this video explaining all the details behind this file.

Let's walk through the Compose file. First of all, on line five we have the Tailscale image; this is the official Tailscale Docker image from Docker Hub. Next up we have the container name, which is the name Docker gives the container on the Ubuntu host. Then we have the hostname. This one is important, because it's the name Tailscale will use when it adds this container to the tailnet, so whenever we want to use MagicDNS to refer to this container we'll use authkey-test. You can make this value whatever you like, by the way; it doesn't have to match the container name or the service name. Next we specify our environment variables. Here we're setting our auth key, which is basically a password, remember, so treat it with care. Then we have the state directory, which is really important and is coupled with the volume line below it: /var/lib/tailscale is the directory the container uses to persist all of its state, things like when it logged in and authenticated to the tailnet and the WireGuard key exchange underneath. The reason it matters is that if the container restarts or gets recreated, or any other major event happens in the container's lifespan, the state is persisted in this directory, so the container doesn't have to re-authenticate to the tailnet every single time or get out of sync with what's going on. Next up we have the TUN device, which is a networking necessity, and below it a couple of capabilities we've added to the container, NET_ADMIN and SYS_MODULE. These few lines mean we don't need to give this container privileged status in the kernel, which is a much more secure way of doing it, because we're being very explicit about the permissions we're granting. And then restart: unless-stopped, which is totally optional; you don't need it unless you want it.
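Put together, the Tailscale side of the Compose file looks roughly like this sketch; the image tag, auth key value, and host path for the state volume are placeholders you'd substitute yourself:

    services:
      ts-authkey-test:
        image: tailscale/tailscale:latest        # official image from Docker Hub
        container_name: ts-authkey-test
        hostname: authkey-test                   # the MagicDNS name on the tailnet
        environment:
          - TS_AUTHKEY=tskey-auth-xxxxxxxxxxxx   # treat this like a password
          - TS_STATE_DIR=/var/lib/tailscale      # where tailscaled persists its login/node state
        volumes:
          - ./ts-authkey-test/state:/var/lib/tailscale
        devices:
          - /dev/net/tun:/dev/net/tun            # TUN device for the WireGuard tunnel
        cap_add:
          - NET_ADMIN
          - SYS_MODULE                           # explicit capabilities instead of privileged mode
        restart: unless-stopped                  # optional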
Next up we have the actual web service we're trying to run. This is just a very basic nginx container; it hasn't even got a real website in it, we're just going to load the nginx test page on port 80. And then finally we have network_mode, which is the magic sauce that connects everything together and makes this whole thing work: network_mode: service:ts-authkey-test. This is a Docker Compose-specific incantation, and ts-authkey-test here refers to the name of the service up on line four. If you're doing this with a docker run command for whatever reason, you can replace service with container followed by the name of the container, so if the container were called ts-123 you'd write container:ts-123, and those two things would have to match. I'm going to use service because I'm in the Docker Compose ecosystem, so I'm quite happy doing it like this.
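Continuing the sketch above, the web half of the Compose file looks something like this (the depends_on line is an extra nicety not spelled out in the video):

    nginx-test:
      image: nginx:latest
      network_mode: service:ts-authkey-test   # share the Tailscale container's network namespace
      depends_on:
        - ts-authkey-test
    # rough docker run equivalent: docker run --network container:ts-authkey-test nginx:latest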
Now, before I do my compose up, I should probably generate my auth key, shouldn't I? Let's jump across to the Tailscale admin console, go up to Settings, and under Settings jump into the Keys section. Click "Generate auth key" and give it a name; the name can be whatever you like, so I'm going to call this docker-authkey-test. It's going to be a single-use key, so we're not doing any reusability. Reusable keys are fine, but they're quite risky: if you don't treat them with care and you have a key that lasts for 90 days, you've effectively given anybody who finds that string of characters root access to your tailnet. They can add and remove devices, and remember there's no scoping for an auth key, so anybody with the key can do whatever they like to your tailnet programmatically. Just bear that in mind and treat it with care.

Next up is the expiration, which I'm going to set to 90 days. There's a bit of a misconception around expiration: a lot of folks seem to think that if you authenticated a container with an auth key that expires after 90 days, the container will drop off your tailnet and need a new auth key bang on 90 days. That's not quite the case. The keys we use to authenticate the node to the tailnet are different from the WireGuard keys that get exchanged behind the scenes later on; those keys are good for about four months or so, and you can disable key expiry in the Tailscale admin console as well. So theoretically you could use an auth key with a one-day expiry, set key expiry to disabled, and that container would stay on your tailnet until the end of time. Another setting worth exploring is Ephemeral. If you're doing something with CI/CD, continuous integration, maybe a GitHub Action that needs to talk to something off-site that isn't part of the GitHub ecosystem, those jobs are typically short-lived, so you don't necessarily want those containers on your tailnet forever; an ephemeral node will automatically remove itself after it goes offline. I don't want that in this case, but it can be very useful in certain situations. And now we come to tags. We talked about tags a little earlier; this is where you would set a tag on a specific auth key, and you can set multiple tags if you want to. I don't want any tags on this node, I want it to be authenticated as my user, so I'm just going to leave this empty, click "Generate key", and copy the result to my clipboard.

Back in my Docker Compose file I paste in the value from my clipboard and click save. Now I can do my docker compose up, which creates a couple of containers; you can see a whole bunch of stuff scrolling by, and the container has appeared in my tailnet, which you can see in the VS Code dashboard just there. If I go back to my admin console, notice that my single-use key has automatically invalidated itself; remember, it was a single-use key, so it puts itself in the "Recently invalidated auth keys" section down here. And on my Machines page I can see authkey-test is now in there. So if I go to http://authkey-test, which is the MagicDNS name we set with hostname, remember, we can refer to this container by its hostname and resolve the URL that's serving the web page. nginx is running on port 80 underneath, so it's just a plain HTTP request, and we're actually connecting to port 80 of the nginx container, which has attached itself to the network interface of the Tailscale container; we'll dig into the specifics a little later on. But that's how you get started with an auth key.

Let's now look at OAuth clients, starting with the differences between an OAuth client and an auth key. I've got a second Docker Compose YAML file here which you'll notice is pretty similar to the one on the left: on the left we have the auth key version and on the right the OAuth client version. Notice that we're even using the same environment variable; we've just added an extra one, TS_EXTRA_ARGS, which is where we specify the tag the OAuth client uses when it brings up the container. When the container starts it runs something called containerboot, which does a tailscale up under the covers. It recognises that we've provided an OAuth client secret and uses that secret to generate an auth key behind the scenes with the specific tag we've associated here, because, remember, when you add a node to a tailnet it has to be owned by somebody, and tags and users are all part of that ownership model. So when we add the node with tag:container, we're authenticating that node as belonging to that tag owner.

Generating an OAuth client secret is quite straightforward. Again, jump back to the Tailscale admin console, go to Settings, then OAuth clients over on the left, and click "Generate OAuth client". I'm going to select Devices: Write, and notice that the read permission is set automatically. I'll just write test123 for the description, which is totally optional, and under Tags choose tag:container. I'll show you real quick where I set that up in my ACL: I just have a tag:container tag which is owned by autogroup:admin. Remember, there'll be a link down below to a Git repository with my ACL file so you can copy and paste if you want to, as well as the linked blog post. Back in OAuth clients we've got tag:container selected, and then we just click "Generate client". The client ID isn't particularly important to us here, but the client secret, treat it like a password once again, same as with an auth key.
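In other words, the OAuth variant's environment block looks roughly like this sketch, with the secret as a placeholder; the OAuth client secret goes into the same TS_AUTHKEY variable:

    environment:
      - TS_AUTHKEY=tskey-client-xxxxxxxxxxxx          # OAuth client secret instead of an auth key
      - TS_EXTRA_ARGS=--advertise-tags=tag:container  # the tag the node will be owned by
      - TS_STATE_DIR=/var/lib/tailscale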
I'm going to go back to VS Code, paste in my value here, click save, change into the correct directory, and do a docker compose up. This adds a second container to my tailnet. It's not quite the ts-oauth-test name; the hostname on line seven is just oauth-test, and if we go over to my Machines page we can see that oauth-test is there. Also note that expiry is disabled: nodes added to your tailnet using an OAuth client do not expire, which is a big difference between an auth-key-authenticated node and an OAuth-client-authenticated node. By default, nodes are also added as ephemeral. To make a node non-ephemeral we'll have to stop the container and remove the state directory; that's the ts-oauth-test directory here, remember, where the state gets stored when the node authenticates to the tailnet. We'll remove that directory as sudo, then append ephemeral=false onto the end of our auth key value just here. You also need to make sure the node is deleted from your tailnet in the admin console. Then save the Compose file and bring the node back up again with docker compose up. Back in the admin console we can see the OAuth node appear: expiry is still disabled, but the node is no longer ephemeral. It's a really minor difference, but being able to add ephemeral=false on the end means the nodes have a lot more permanence on your tailnet, so depending on your use case that may or may not be useful for you. And just to verify what's going on, instead of using the web browser this time I want to do it in a terminal session, so I'm going to curl http://oauth-test. You can see the web request flow by in the terminal underneath, and we get the nginx landing page as well. So there you go: we've now added a node to our tailnet using an auth key and also an OAuth client.
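Before moving on, here is that re-registration dance as a short command sequence; the paths and names match the illustrative ones above, so adjust them to your own layout:

    docker compose down
    sudo rm -rf ./ts-oauth-test/state        # wipe the persisted Tailscale state
    # in the Compose file, append ?ephemeral=false to the OAuth secret, e.g.
    #   - TS_AUTHKEY=tskey-client-xxxxxxxxxxxx?ephemeral=false
    # delete the old node from the admin console, then bring it back up:
    docker compose up -d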
Now, I promised you a deeper dive into how the Docker networking actually connects everything together under the hood, so we're going to dive a little bit into Linux kernel namespacing. I promise it's not as intimidating as it sounds. When we create a container, what we're effectively doing is creating a new set of namespaces in the Linux kernel that control where different resources live, and one of those is a networking namespace, so each container has its own networking context. When we add network_mode to a specific container, we can actually group two containers together under the same namespace. Let me show you that in action. What I have over here is a Docker Compose file that's basically the same as the one we had before: it's got an nginx web server in it as well as the Tailscale container, but notice down here on line 22 that I've commented out network_mode. So when we bring these containers up, under the hood Docker is going to create two networking namespaces in the Linux kernel, and we can check that both containers are running with a docker ps -a.

Let's suppose I want to find which ports this nginx container is listening on internally. The first thing I might reach for is docker exec, which spawns a process, a shell, inside the container and lets us attach to it, assuming that binary exists within the container. So I do a docker exec with the name of the container in question, nginx-pid-test-1, and then netstat -tonalp, and we run into our first issue. Images like nginx are designed to be web servers, not diagnostic tools, so a lot of containers strip out unnecessary binaries. But using namespaces we can hop into the context of the container's networking namespace with a tool called nsenter. This tool is really pretty cool: we can give it a bunch of different contexts, and here we're going to use -n for the network context. We also need the PID, the process ID, of the Docker container we want to inspect, so we use docker inspect and look for the PID of nginx-pid-test-1, which in this case is 53166. Then I run nsenter as sudo, because I need root privileges to change into a different namespace, give it a target of 53166, the process that owns that network namespace, and run the command in its networking context: netstat -tonalp, which you'll recall is exactly the same command I tried with docker exec that didn't work. What we can see is that within our nginx container there are processes listening on IPv4 port 80 as well as IPv6 port 80.

Let's just stop and think about what happened for a second. I'm on my Linux host, I changed into a different namespace using nsenter (namespace enter), and then I ran an arbitrary command as if I were in the context of the container. When we talk about containers, that's really what we're talking about: different contexts and different processes sliced up inside the Linux kernel and its memory space. Where things start to get really interesting is when we start stacking containers together using network_mode. Over in my Compose file, network_mode is currently commented out, which means we're creating two different network namespaces, one per container. So let's do a docker compose up and then use a couple of commands to examine the namespaces underneath. I've got an all-in-one command here which uses nsenter once more, grabbing the PID with docker inspect as we did before, this time for ts-nginx-test, which is the Tailscale container, not the nginx web server. You can see that the network namespace for this specific container only contains things pertaining to tailscaled, so that must be the Tailscale container. If I do the same thing again but investigate nginx-pid-test-1 instead, we can see nginx listening inside its own namespace in its own container. So these two containers currently have completely separate network namespaces within the Linux kernel.
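That namespace inspection boils down to two commands. A sketch using the container name from this example; the Go-template form of docker inspect is a convenience here, whereas the video simply reads the PID out of the full inspect output:

    # grab the PID of the container's main process
    PID=$(docker inspect -f '{{.State.Pid}}' nginx-pid-test-1)

    # enter only its network namespace and list listening sockets;
    # netstat here is the host's binary, which is why this works even though
    # the nginx image ships without it
    sudo nsenter --target "$PID" --net netstat -tonalp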
Another way we can check this is to use Docker's built-in network tools. docker network ls shows us all of the Docker networks that have been created, and by default Docker Compose creates a new network per stack; what we have here is a stack network called 03-nginx-example_default. If I do a docker network inspect of that network, piping it through jq so it looks pretty, we can see there are two containers currently on that Docker network. They've both got their own IP address from the Docker bridge, one ending .3 and the other .2, and you can see the different names of the containers as well. What this shows us is that these two containers are currently operating, as we'd expect, as isolated units.

Now let's go ahead and uncomment network_mode, recreate the containers, and watch what happens. I run the same docker network inspect command again, and suddenly only one container shows up: ts-nginx-test. This is exactly what we'd expect, because we've given the nginx-pid-test-1 container access to the network namespace of the ts-nginx-test container, so what's actually happened is we've merged them together. We can prove this by looking inside the containers' networking namespaces: first examine the nginx-pid-test-1 namespace and see what's going on, and then examine the actual Tailscale container upstream of it. What we've done is merge these two namespaces together in the kernel, and you can see the same processes showing up in both places. So we've taken those two separate namespaces and merged them together by using network_mode in the Docker Compose file.
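For reference, the before-and-after check with Docker's network tooling looks roughly like this; the network name follows Compose's <project>_default convention and is assumed here, and jq is only there for readability:

    docker network ls
    docker network inspect 03-nginx-example_default | jq '.[0].Containers'
    # with network_mode commented out: two containers, each with its own bridge IP
    # with network_mode enabled: only ts-nginx-test appears, because the nginx
    # container now lives inside its network namespace instead of the bridge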
So let's put all of this new knowledge together and put a self-hosted recipe manager onto our tailnet. I've got an app here called Mealie and another Docker Compose file which, again, looks suspiciously similar to the ones we've been looking at throughout this video. We've got our client auth key here and our tags, there's a new environment variable called TS_SERVE_CONFIG which we'll come to throughout this section, and we've mounted a couple of extra volumes: the config directory, which is where our Tailscale Serve configuration lives, and the application itself, with Mealie using Docker volumes to persist its data. Let's bring this container up. It takes a couple of seconds to start because it uses Gunicorn under the hood, and we can wait for it to come up on port 9000. This is important, because what we were doing before with nginx was on port 80, so web requests just worked without any extra effort; this one is on port 9000. If I go across to mealie on port 9000 it works just fine, but it's plain HTTP, there's no TLS certificate or anything like that, and I don't really want to have to remember port numbers. This is where Tailscale Serve, and in a minute Tailscale Funnel, come into the equation.

Tailscale Serve lets us provide a configuration file that, a bit like a reverse proxy, tells web requests where to go. Let's take a look at one. I've got a sample JSON file over here. The first few lines at the top refer to proxying TCP/HTTPS traffic; we want this, it's a good thing. Next up we've got the Web section, which is what acts like the reverse proxy: when I receive a request for mealie.velociraptor-noodlefish.ts.net on port 443, proxy that request under the hood to localhost port 9000. Remember, this is actually running inside the Tailscale container, but because we've used network_mode and added the other container into the Tailscale container's namespace in the Linux kernel, the app really is listening on port 9000 inside the Tailscale container; that's how the localhost part works. Finally, I've got AllowFunnel set to false for now. Funnel would let me expose this application across the internet with no further configuration beyond changing this value to true. So if you had friends and family in another country who weren't on your tailnet for some reason, changing this value from false to true would let them access this application as if they were on your tailnet. Remember, though, that also means it's exposed to anybody on the internet, so use this option with care.

The upshot of this configuration file is that I can now go to mealie.velociraptor-noodlefish.ts.net and not worry about typing in port numbers or anything like that. What you might notice in the terminal window at the bottom is that when I make this request, it actually requests a certificate from Let's Encrypt. This website is exposed only on my tailnet right now, it's not on the public internet, but it's backed by a proper TLS certificate from Let's Encrypt; if I look at the certificate you can see it's a Let's Encrypt certificate. The whole purpose of TLS, really, is to verify that you are actually talking to the web server you think you're talking to; it verifies ownership of the domain. So let's get logged in and have a little poke around. I'll just type in my password here, and oh yes, I feel like I'm in the mood to bake some bread, don't you? The easiest loaf you'll ever bake. Fantastic: we've got a self-hosted recipe app on our tailnet.

But if you were keen-eyed, you'll have noticed that hard-coding things into a configuration file is never going to be fun to manage in the long term. So what we've actually done is implement a way to substitute values in this file using the TS_CERT_DOMAIN variable. If you put this into your configuration file, then when Tailscale starts up it substitutes the variable with your container's tailnet domain. We can verify this by hopping onto the container host itself: docker ps -a, then a docker exec into the container whose ID starts 03b1, which gives me a shell inside the Tailscale container itself. From there, tailscale serve status shows that whenever we receive a request on the root of the domain (the slash is the root), it's proxied to localhost port 9000. And if you're not a fan of typing out configuration files yourself, you can add -json on the end and it will print out the whole file for you in exactly the right format to feed into the TS_SERVE_CONFIG environment variable I showed you earlier. It's really important for this configuration file that you mount a directory and not the specific file itself: if you mount the file directly, it breaks the way fsnotify detects that the file has changed. We haven't done that here, we've just mounted a directory, so we're okay. Just watch what happens when I change false to true, for example: it creates a whole bunch of configuration on the Tailscale side to expose this application using Tailscale Funnel, so anybody on the public internet could now access it.
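Pulling the Serve pieces together, the configuration file described above looks roughly like this sketch; the structure follows the format of the file passed via TS_SERVE_CONFIG as shown in the video, with ${TS_CERT_DOMAIN} substituted at startup and Funnel left off:

    {
      "TCP": {
        "443": { "HTTPS": true }
      },
      "Web": {
        "${TS_CERT_DOMAIN}:443": {
          "Handlers": {
            "/": { "Proxy": "http://127.0.0.1:9000" }
          }
        }
      },
      "AllowFunnel": {
        "${TS_CERT_DOMAIN}:443": false
      }
    }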
Et voilà: we have a self-hosted recipe app running natively on our tailnet with a valid HTTPS certificate, available internally to tailnet devices only and, if you turn it on, externally to other devices via Tailscale Funnel. This approach scales to multiple containers and different operating systems as well. Not only that, but just think about it for a second: you could run one container in your basement, another one in DigitalOcean, for example, and another at your friend's house, and have all of these devices talking to each other natively, encrypted over Tailscale. If you're already a regular user of Tailscale on Docker, let us know down in the comments how you're using it so we can make it even better in the future. And don't forget you can find all of the companion resources for this video in the description box down below: the blog post, the Git repo, and various knowledge-base articles are all down there for you to use. Until next time, thank you so much for watching. I've been Alex from Tailscale.
Info
Channel: Tailscale
Views: 24,074
Keywords: tailscale setup, tailscale tutorial, tailscale wireguard, tailscale vpn, wireguard vpn, how to setup tailscale, tailscale docker setup
Id: tqvvZhGrciQ
Length: 31min 58sec (1918 seconds)
Published: Wed Feb 07 2024