Practical Design Patterns in Docker Networking

Captions
Welcome to the Using Docker track. If it's your first time here, I'm Elton, I work for Docker and I'm just doing the MC bit, so I'll be gone in a second. If it's not your first time here, then you know that, because I say it every time. We've got a fantastic session coming up next: we've got Dan, who also works for Docker and who used to be a Docker captain, so he's gone from that level of community expertise to now joining the Docker family. He's going to talk to you about networking, and this is going to be awesome. So a nice big welcome for Dan.

All right, thank you very much for attending this session. It is quite blinding up here, but I imagine that's par for the course for being up here. Yes, I was a Docker captain — I was a Docker captain for all of three months before I jumped ship, as it were, and became part of Docker. So I've been at Docker now for around three months or so, but I've been contributing to various open source projects and various bits of the Moby project for quite some time, so I've kind of been around the houses, as it were, with that.

So why this topic? We were given the opportunity to put forward things that we thought people would be interested to know about, so why did I decide that people might want to hear about this? Well, it doesn't take much to look around on the internet and see that there are still quite a few people continuously searching "how do I do this with networking", "how do I do that" — what steps are required in order to just connect things together with containers. So it's quite clear that people want to know about this as a subject. And then, moving forward, as people are getting their applications together, they want to build services and connect them together — how do they do those steps? What's the next step there?

So this is a talk in two parts. The first part is going to be an introduction, or an overview, of the various Docker networking technologies, and the second part is going to be a look at an application — a kind of legacy network application. We're going to break it down into its various tiers and then essentially re-platform it using various Docker networking technologies.

This is the agenda for the talk. We're going to go through the evolving application networking architecture: how applications have changed, how they've typically been deployed and what they typically look like. We're then going to go into Docker networking, so we'll look at the various technologies you can make use of when deploying applications in containers and connecting them together. Then we're going to look at some infrastructure design patterns — when deploying Docker, what steps can you take when configuring the platform the Docker engine is running on? Some of the key design patterns there can help you take advantage of things like security and segregation. Then we're going to look at the design patterns for modernizing an application: we're going to take an application, break it down into tiers, and look at which networking technology makes sense for each particular part of that application.

You will notice there is a section on here that says "redacted". I'm hoping you were all awake in the keynote yesterday — or, if you've been on the internet in the last 36 hours or so, you will be aware that there were some announcements made — so I'm going to cover some topics around that.
That's going to be towards the end, and then a summary of all the bits I've covered today, and hopefully some time for questions at the end as well.

So let's kick it off — let's look at the evolving architecture of application networking. Rewind maybe nine or ten years and a lot of applications were physically hosted. What that means is typically one server, one application, one operating system: a very one-to-one design. From a networking perspective there were a lot of things to be considered there — every application required an IP address, for instance — but everything was pretty flat, and networks typically used physical separation through things like ports and VLANs to segregate multiple applications that shared infrastructure. High availability for these applications typically meant expensive clustering software, where applications would be monitored by these third-party bits of software, plus DNS and load balancing between sites.

So this is the first of many networking diagrams for today — just a quick overview of what a physically deployed application would look like. To the left we've got tier 1 and tier 2, or to the right if you're looking from over here. Basically a number of servers would be deployed, you'd have a number of applications on there, and there would be a load balancer shared between all of those applications running on those physical servers. And then for things that required high availability — things like databases — you'd typically have an abundance of hardware, where the application or the database would only run on one of those servers, and the expensive clustering software would monitor the state of those things. If the server fails, or the application falls over for whatever reason, the application will then be redeployed on the secondary bit of infrastructure you've deployed over there.

So that's a quick overview of some of the more traditional, physically hosted applications: either spread one-to-one across multiple bits of tin, or looked after in a high-availability scenario with an abundance of hardware sitting there, waiting to restart that process in the event that its original host goes down for whatever reason.

Moving forward, as technology improved, there was a huge explosion around a decade ago when physical servers were getting to the point that they were so large it made sense to come up with a way of compartmentalizing your applications into virtual machines on those servers. This was fantastic for breaking up applications and making things much easier; however, a lot of the design for those applications typically caused an explosion of resources across your network. You move away from one physical device having one physical IP — each of those physical devices still has that address, but then every virtual machine you're running on there, and there could be tens or hundreds of virtual machines, all become IP addresses on the network as well. You need ways of segregating all of those; there's a lot to take into account there. So this is a very small example of virtual infrastructure.
You can see that, to segregate various applications or various tiers, we make use of VLANs, and all of those need to be presented down to those virtual hosts. Then all of those virtual machines require an address on the network, and they all require connecting to the correct network as well. There are a lot of things to consider there, and if applications want to move around, or you need to scale out your services and servers, the networking team needs to be involved as well. Just adding in more capacity here means going to the network administrator and getting that person to ensure that the networks that are required are also presented there. So: an abundance of network IP addresses, an abundance of overcomplicated networks just to provide that segregation, and all of those additional tasks handed down to your network administrator to ensure your infrastructure is all configured correctly as well.

That was kind of the case around four years ago, when a technology company appeared that championed containerization. So, moving forward, how can we move away from that a little bit, to simplify things and make them a lot easier? We're going to look at the various Docker networking technologies that can make the network administrator's life easier, as well as the application administrator's life.

So, welcome — this is my terminal for today. I'm kind of cheating here, because I don't need to do everything live; I've got it all here. This is my DockerCon terminal, and if I'm using Docker networks I can quite easily see those networks just by doing a docker network ls. We can see here the bridge network, the host network, the overlay network, and also the macvlan network. There is one more mentioned on here, which is the null network. I'm not going to go into a great amount of detail about that, but essentially, if you create a container and connect it to that null network, I imagine you can guess where that traffic is going to go. Yes, that's right: you've black-holed your container. There are reasons for this — things like services that are essentially just doing processing and don't require any network connectivity. You'd give them some workload and connect them to the null network, for instance.

So we'll start to drill down into these networking technologies. The first two we're going to look at are host and bridge networking; these are the simplest networks, the ones that come out of the box. I've got three hosts here. On the first host I'm going to create a container, and I'm going to use host networking. Host networking is quite simple: essentially, when you create a container and connect it to the host's network, you are connecting it to the host's network. What that means is that when the process comes up in a container, the ports it exposes are attached straight to the host's network — it shares the host's TCP/IP stack and the host's namespace. The only real difference you're getting here is the segregation you get from having it run in a container.
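As a rough sketch, the commands behind that part of the demo look something like this (the nginx image is just for illustration; the network names are the ones the engine creates for you):

    # list the networks the engine ships with
    docker network ls

    # run a container directly on the host's network stack;
    # nginx binds straight to the host's port 80, no port mapping involved
    docker run -d --network host nginx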
Moving forward into bridge networking: if I just run containers without connecting or exposing anything, by default they will connect to the bridge network that comes as part of Docker. What that means is that when you install Docker and start the daemon up, inside your Docker host there is a bridge network created by default, and it is just sat there waiting for containers to connect to it. When you start a container, it is given an internal IP address — you can see on the second host the gateway address, which is just underneath the bridge NAT device — and all of the containers that are created will start up and be given an IP address in that range. Now that's great: all of the containers connected to that bridge can speak to one another, but nothing can speak to them, and they can't speak to anything outside of that bridge network, so it is an isolated network that lives inside that one particular host.

So that's great, everything can speak to one another internally, but we've got services that we want to share with the outside world — we want to expose them. Just doing a docker run and using the -p flag, we can say which port on the host we would like to advertise, followed by the port after the colon that is being shared by the container. In this example we have an nginx container that we've started on the third host, which was already sharing port 80; what we've asked for is for the host to have port 80 shared and have that connected back to that container, at which point you've essentially created a traffic link — you're exposing that port to the outside world. You don't necessarily have to use identical ports: I could do -p 8080:80, and that will map port 8080 on the host to port 80 on the container itself. So there's a degree of flexibility there in how you want to expose your services. The key concept, really, and kind of a best practice: only expose the services or the ports that you should be exposing. There is no point exposing everything if there's no need to — you're just opening yourself up, and for security reasons that doesn't really make much sense.
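A minimal sketch of those bridge examples (again, the image choice is illustrative):

    # default bridge: the container gets an internal address on the
    # docker0 bridge and is only reachable from that host
    docker run -d nginx

    # publish host port 80 through to the container's port 80
    docker run -d -p 80:80 nginx

    # or map a different host port onto the same container port
    docker run -d -p 8080:80 nginx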
So, bridge networking and host networking: both of those are single-host only. Containers created there can speak to one another on the bridge, and you can expose ports to the outside world, but if you have a number of containers on host one and a number of containers on host two, how do they speak to one another? The technology behind that is overlay networking. This comes as part of swarm: whenever you create a cluster, and basically do a swarm join, every node becomes part of that overlay network. There are a number of ways we can actually do this. We can do a docker network create -d (for driver) overlay, and then a name that we'd like to use. Alternatively, if we just do a service create, that will automate the process of creating that overlay network and then create the containers on top of that network as well — a very simple way of doing the full deployment of an overlay network.

The technology, or how this works: it makes use of VXLAN, and essentially that is a tunnel network that lives on top of the underlay network. It's quite simple; it's all part of the kernel, all part of the Linux operating system, so we're just making use of features that already exist. Any container that is part of an overlay network can speak to any other, so traffic can span between containers regardless of what host they're actually on, as long as they're connected to that overlay network. By default the overlay network is encrypted, and the key is rotated automatically every 12 hours as well, so you get additional security and you get segregation.

Also, when you create a service and expose a port on it — as I've shown before, with -p and the external port and the internal port that you'd like to expose — traffic can be routed to a task regardless of where it is. What that means is that, for instance, on the middle host my task, my container, isn't actually running; however, that's the host that the load balancer has given me. I've connected to its external port, and the overlay network takes care of routing that request to one of the hosts inside the swarm cluster where that container is actually running. So we get the capability of having infrastructure that is bigger than the service we're running. We also get load balancing: every time there is a request, the overlay network and swarm will take care of moving to the next particular task, working its way through them. So you get load balancing, and routing to where those tasks actually exist.

Now, a little bit more internal detail of how that actually works. There are two IP addresses given to each container when it's created on an overlay network. There is an internal one — the 10.0 address here; those are IP addresses that exist on the overlay network, and they're the addresses a container will use when it wants to speak to another container. Internal-only addressing. The second address is the VXLAN VTEP (tunnel endpoint) address, and it is this address which allows traffic — even though it thinks it's on the same network — to drop onto the external network and be routed to another VTEP, through the VXLAN tunnel, where that task is running. So if the container at the top wants to speak to the container at the bottom, it'll be using the external address: that traffic will be pushed out through that box and routed to 10.0.20.3, where it will go through the tunnel, back out, and the traffic will then be presented to that container. Essentially, it's just a simple overlay that sits on top of the physical underlay.
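A sketch of the two routes just described for getting an overlay network (the network and service names are made up). One editorial note of caution: on current engines it is the swarm control-plane traffic that is encrypted by default, while VXLAN data-plane encryption is opt-in via --opt encrypted, as below:

    # create the overlay explicitly, opting in to data-plane encryption...
    docker network create -d overlay --opt encrypted my-overlay

    # ...and run a service on it; -p publishes the port through the
    # routing mesh on every node in the swarm, wherever the tasks land
    docker service create --name web --network my-overlay --replicas 3 -p 80:80 nginx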
So that's a quick overview of swarm networking and the overlay driver. One of the newer features, added a few releases ago, is macvlan. Macvlan, as the name kind of suggests, is more of a hardware concept — a hardware way of creating a container and giving it things like a MAC address. A MAC address is essentially a hardware address that is usually printed into the BIOS or the firmware of each network card that you would buy, and it's a unique address which identifies a physical device on a network. What this means is that we can now have a container that is directly connected to the underlying network.

So why would you do this? We can expose everything we need over overlay networks; we can expose simple services through bridges. Well, if you need to connect things to a direct network for things like VLAN access, or they're required to be looked after by IPAM software or things such as that, then direct access to the underlying network is what macvlan is going to get you. The containers essentially become first-class citizens on the network.

This is quite simple to set up. Essentially, within a docker network create, I'm using the macvlan driver and we're specifying the subnet. Some applications, for instance, could be coded with static IP addresses in them; they could have requirements where they need to connect to devices that already have ranges in them — where we can't change the firewall, for instance, or the endpoint has a requirement for a static connection. What this allows us to do is move an application that we can't modify into a container and make it look exactly like it did before. So with this we create a network, we tell it which range we're going to use, and we also tell it which Ethernet adapter to connect to. Now, this is a little bit important, because promiscuous mode is required on that Ethernet adapter. The reason for that is that you're going to have what was a single network device suddenly looking like multiple network devices on the network. You may have issues with some switches if they're not allowed to have a connection that shares multiple MAC addresses; however, if you've been running things like virtualization, typically your switch will already be allowing multiple MAC addresses on the same connection.

So we've created that network and told it which Ethernet adapter we want to use — this is the adapter that we will bind new MAC addresses to. If we want to make use of VLANs, as shown in the example up above, we would make use of a sub-adapter which is connected to that VLAN. And then, finally, once we have that network created, it's just a case of, when you do a docker run, telling it the network we want it to use — in this case the mac_net network which I created in the previous example — and we can tell it to use a physical IP address in the range that we defined before. This means that this nginx container that I've just spun up here will be connected to the underlying network; anything that is monitoring the network will suddenly become aware of a new device connected to that network. If it was a different application in this container, it could look exactly like the previous application did, because it's connected in the same way to the network.

There are a few issues with this as a design pattern. You don't want every container to have its own IP address — there is no benefit to doing that — and given that containers are tiny and you can spin many of them up very quickly, you could effectively do a bit of damage to your network if you're not careful. So it's great that this gives us a way to take applications that typically require direct access to the network, put them in containers and make them look like first-class citizens on the network, but do be aware that this brings back a lot of the overhead we've tried to get rid of from having multiple virtual machines everywhere. With this you can have a lot of IP addresses all over the network, and anything that monitors the network could start screaming — and your network administrator might not be too happy with you either.
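Roughly what those macvlan commands look like — the subnet, adapter name and fixed IP are illustrative; mac_net is the network name used in the demo:

    # macvlan network bound to a physical adapter; the parent interface
    # usually needs promiscuous mode enabled on the host/switch side
    docker network create -d macvlan \
        --subnet=192.168.0.0/24 --gateway=192.168.0.1 \
        -o parent=eth0 mac_net

    # give the container a fixed, first-class address on the underlay
    docker run -d --network mac_net --ip 192.168.0.128 nginx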
And then, finally, networking plugins. All of the drivers that I've mentioned today — the bridge, the host, the null if you're so inclined to not connect to anything, the overlay, macvlan — are all part of the Docker engine. However, one of the key features of the Docker engine is that we've tried to make it as plug-and-play as possible, and with that there is a capability — and we do have a number of plugins that third-party vendors have created — which allows a number of different functionalities. Some of the key ones do their own IP address allocation management, so they take charge of giving IP addresses to containers. Some of the other drivers allow you to do clever things where, when you spin up a container, the driver can speak to pre-existing networking equipment and do things such as QoS (quality of service) between containers and between hosts, or create its own overlay tunnels — all manner of things like that. So what this allows you to do is tie networking configuration to the lifecycle of a container — the starting and the stopping of it: rules can be pushed up to physical networking devices when that container is created, and when that container is gone those rules are removed as well. You get the automation and the lifecycle not just from a container perspective but from a networking configuration perspective too.

So, on to infrastructure design patterns. This is one of the newer features, added in Docker 17.03, or 1.13: essentially the capability of segregating your control plane and your data plane. For security reasons this makes perfect sense. When you have a host that has multiple network adapters, you have the capability of binding the control plane — the swarm commands, the bit that you connect to when you want to do docker runs and tell the Docker engine what you would like it to do — to one set of interfaces, while the data plane — communication between containers — is forced out of a different set of network interfaces. As mentioned, this provides logical and physical separation.

It's pretty straightforward. The only real difference here is that when you do a docker swarm init, we say which adapter we want to advertise — that will be your control plane — and then we say which adapter is going to be the data plane, i.e. which adapter is going to carry all of the traffic that our application makes use of. Once we've done that from the swarm manager's perspective, it's much the same for adding all of your additional workers: when you do a swarm join, it's just a case of making sure you add the two additional flags which say this adapter is carrying this particular type of traffic, and this adapter is carrying the other type. Once you've done that, for any containers you create and add to that overlay network, all of their traffic will only go through those data-plane interfaces. Complete physical and logical separation.
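A sketch of that split, assuming an engine version that supports the --data-path-addr flag (the adapter names and placeholders are illustrative):

    # manager: control plane advertised on eth0, container/VXLAN traffic on eth1
    docker swarm init --advertise-addr eth0 --data-path-addr eth1

    # each worker joins with the same split
    docker swarm join --token <worker-token> \
        --advertise-addr eth0 --data-path-addr eth1 <manager-ip>:2377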
So those are some of the design patterns for infrastructure. We're now going to look at some of the design patterns for when you take an existing application and migrate it to make use of Docker networking. Just quickly: because we're taking an existing application, we're going to be using Docker Enterprise Edition. There are some key reasons why — mainly support, but we're also going to make use of things like Universal Control Plane. What this allows us to do is either run all the commands that I've done manually today in my super-fake terminal, or actually use things like services — full service definitions — and compose the full stack, as it were. And then we're also going to make use of Docker Trusted Registry: on there, all of our containers will be scanned, and we can only deploy them when we know they've been certified and have gone through all of those policies. So this is the platform we're going to use to deploy our application; we're going to take an existing application and move it onto here.

This is my application. It's pretty straightforward — a bit of an amalgamation of the first two architectures that I pointed out. Quite simply, we have a number of services that have been migrated to virtual machines, and then the database is still living on some physical infrastructure. You'll tend to find this is a fairly common architecture in a lot of places, where they still haven't migrated their database to virtualization, for whatever reason. It makes use of a number of VLANs to do the segregation between the tiers, and also, in the database tier, we have these two virtual machines that require some deep access to the network — things like ensuring that, before data goes into that database, we remove credit card numbers, or scanning the network for security reasons, watching for obscure traffic and things like that. So we've got a full complement of various tiers and various networking requirements.

Whilst I was showing you the architecture, behind the scenes the developers and our application maintainers have been busy at work, and they've repackaged all of the applications that we can into containers. Great job, everyone. So let's start breaking this down, beginning with the first two tiers: the front end and the app tier. As I mentioned, you've got a lot of virtual machines and a lot of VLANs doing all that segregation. The key concept here is that we're going to use some Docker networking technology to simplify that architecture: we're going to simplify the networking configuration and get security through isolation. The services that we need, we can isolate through overlays — through VXLAN tunnels — and then we get the security that we need through the encryption on that overlay, and we can start to reduce some of the networking configuration overhead for the physical devices as well, moving away from having VLAN sprawl throughout our infrastructure.

Looking at the front end of this application, we're going to make use of a function that is part of Docker EE. I mentioned previously the swarm overlay and how it does routing; well, with Docker EE and Universal Control Plane we get additional functionality with a bit of technology called the HTTP routing mesh. What this allows us to do — for instance, if we have multiple front ends, or multiple websites that we're exposing — is have those as separate services, and as traffic comes in to those exposed ports, it will be routed to the correct service depending on the hostname being requested. This allows us to share networking ports between different services, and once we've been routed to the correct service, we also get all of the load balancing and routing the overlay network provides. So we're essentially getting functionality on top of functionality, and scaling, and moving away from VLAN sprawl and all that overhead. We've taken our repackaged applications and deployed them using overlays and the HTTP routing mesh, which gives us multiple bits of additional functionality for our applications.
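As a sketch of what the front-end and app tiers might look like as swarm services (the network names, service names and images are all hypothetical; the HTTP routing mesh itself is switched on and configured through UCP, so it isn't shown here):

    # one encrypted overlay per tier
    docker network create -d overlay --opt encrypted front-tier
    docker network create -d overlay --opt encrypted app-tier

    # scalable front end, published through the routing mesh
    docker service create --name web --replicas 3 \
        --network front-tier -p 80:80 my-org/web

    # app tier, reachable from the front end by service name only;
    # attaching a service to two networks needs a reasonably recent engine
    docker service create --name api --replicas 3 \
        --network front-tier --network app-tier my-org/api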
For the mid-tier, we can just make use of a simple swarm overlay. We take those scalable applications and we get scalable services, because we don't need to worry about having to do all that configuration on every host. All we need to do is ensure that the hosts can communicate with one another directly; as long as it's part of a swarm cluster, we can take that repackaged application, make it into a service, and scale it up and deploy it as we see fit. Completely scalable services — and we also get the benefit of being able to load balance between all those services as well.

If we move on to the back end and some of the more physical services, we're looking at the database and these existing virtual machines that have some low-level functionality. Now, this database is huge; we're going to leave it as is. However, we need access to it, so how are we going to go about that? Well, we're going to look at how we can preserve those existing integrations. We have some virtual machines that were essentially sitting on the network, scanning HTTP traffic — or just all the traffic moving around. Using macvlan, we can take that application, attach it directly to the network and give it the same functionality, except now it's all managed by Docker as well. We don't need those virtual machines everywhere, we don't need all of the extra steps there, and we can connect it directly to the network, or to the VLAN, where it needs to see that traffic. And then some existing in-house applications that were built with a static IP address requirement — or that just need to sit on that same network — we can essentially create an IP address for, put them back onto the same network in a container, and they will have access to all of the networking resources they required before.

So we've essentially taken the front end and taken advantage of the HTTP routing mesh and of overlays to separate that application out into separate web services; we've taken the app tier in its original form and moved it into a service, exposing it through overlays and through exposed ports that are mapped back up to the front end; and the deep services that we require to look after the database tier, we've managed all that by making use of macvlan.

So what are the design patterns for all of this? Well, where possible, there is a lot of opportunity here: you can start to remove a lot of the complexity that exists in your networking infrastructure while keeping the same functionality you had before. Make use of VXLAN to give you that segregation between your applications and between your application tiers — and we get AES encryption on that VXLAN for free as well, so not only is it segregated, it's also encrypted. And in the cases where we have that hard pinning, or a requirement for VLANs, we can make use of macvlan to put those applications in containers but still present them to the network in a way that lets them operate the way they were designed to in the first place. There's a lot more of this that you can get some hands-on experience with: if you have time, a lot of the labs will run you through building up services, deploying them through Compose and connecting them to networks. So if you do have time, that's there for you.
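For the database-tier scenario just described, the macvlan network would hang off an 802.1q sub-adapter of the host interface, so the container lands on the right VLAN — a sketch, with the VLAN ID, subnet, address and image all made up for illustration:

    # macvlan network attached to VLAN 20 via a sub-interface of eth0
    docker network create -d macvlan \
        --subnet=10.20.0.0/24 --gateway=10.20.0.1 \
        -o parent=eth0.20 db_net

    # rehome the traffic-scanning appliance with the static address it had before
    docker run -d --network db_net --ip 10.20.0.50 my-org/traffic-scanner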
So, this is the redacted bit. There's a Kubernetes logo on this slide, which probably gives you a clue as to why it was redacted to begin with. A disclaimer: we will have time for some questions, I believe; however, this is a very high-level overview of the design patterns that you may come across when working with Kubernetes and swarm together, and beyond that level I'm not really going to be able to answer a lot of questions around it, so if you can forgive me for that — the rest of the topic I can quite easily cover questions on. That's my get-out-of-jail-free card for that particular topic; I'm sorry about that.

So what does it look like at the moment? We have UCP, and we deploy services through either the UI or through the CLI, via the CLI bundle. When we push services to UCP through swarm, through the various hosts connected to it, those services will just be spun up on overlay networks, simple as that. When we deploy services through Kubernetes, it's the same thing, except the Kubernetes managers will speak to the kubelets that run on each of the hosts and push the config down to them — and, depending on the networking driver that you want to make use of, you could end up with overlays or you could end up with bridge networking on each host; that's entirely down to you. However, the concept is pretty much the same: through UCP, push your services, and have them spin up either on a swarm cluster or on a Kubernetes cluster. Great — but now the question is: I have some swarm services over here, and I have some Kubernetes services over here; how do I get them to speak to one another?

The key design behind this, at the moment, is that we're going to make use of a layer-seven ingress controller. Essentially, like the HTTP routing mesh that I showed you earlier as part of UCP, this will provide a way for services to speak to one another, depending on which orchestrator or platform they live on. This is going to be the way that, when you hit a front end, the traffic is routed to the correct service. You can see quite clearly: if I hit swarm.dockercon.com, I'll hit that ingress controller and my traffic will be routed down to the service currently running on my swarm cluster. Same with Kubernetes: if I hit kube.dockercon.com, traffic is routed down to the services running on my Kubernetes cluster. And speaking between the two: essentially, your service running on swarm will just need to be told, when it needs to speak to that service, to speak to kube.dockercon.com, and that layer-seven ingress controller will route the traffic to it. So that's basically how it's going to work — it routes the traffic to the place where the service is actually running. That's a quick overview of UCP and Kubernetes and how you would have the two speak to one another.
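The UCP ingress feature described here was still unreleased at the time, but the host-based layer-seven routing idea can be pictured with a plain Kubernetes Ingress of that era (API version as of 2017; the hostname and service name are hypothetical):

    # route kube.dockercon.com to the "web" service on the Kubernetes side
    kubectl apply -f - <<'EOF'
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: web-ingress
    spec:
      rules:
      - host: kube.dockercon.com
        http:
          paths:
          - backend:
              serviceName: web
              servicePort: 80
    EOF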
A summary of my session. For applications that can be rehomed — applications that we can make changes to, or move to different networks — you can make use of a myriad of Docker networking technologies and features that are going to make deployment a lot easier, and their scaling a lot easier as well. The overlay networks pretty much give you the same functionality that you're going to get from things like VLANs, but with additional functionality from having encryption by default, so you can have all of the segregation between applications and between tiers done through that overlay network. And then services that are tied down or hard-coded and can't be changed particularly much — you know, with their specific network requirements — well, they can still be deployed in containers, because they can make use of the macvlan driver.

So, with that slide I was told to put in: a lot of today applies to MTA — modernizing traditional applications. When migrating those applications, you need to be aware of the networking technologies that are going to help you deploy them. If you want to know more, the Docker booth is 20 metres in that direction, and if you want to play with Docker EE — have a look at the HTTP routing mesh, or deploy some test services in there — you can get a trial as well; that URL will get you a demo to play with. And with that: that was practical design patterns in Docker networking, and thank you very much. [Music]

Well, I told you it was going to be fantastic, and it certainly was — thank you very much, Dan, that was awesome. So we've got a few minutes for questions; if you have a question, raise your hand and we'll get up to one of the mics.

Q: Hello — is this thing on? It is. All right. So, I've looked at a number of other solutions, and I was wondering if Docker UCP provides solutions for SSL multiplexing, termination and load balancing of connections at the front end?

A: That's a very good question. There are a number of things you can do there. The thing you may end up with is load balancing on top of load balancing, but when you create a service you can pass all the SSL certificates to that service and have that as your endpoint, and that will be a level of termination there — or you can terminate much higher up, if that's your network.

Q: I was wondering, does UCP provide that facility? If you look at things like OpenShift, for example, there's native SSL multiplexing and termination in there — are you able to offer equivalent built-ins?

A: With HRM I'm not actually sure, because it's just a software proxy — a proxy sitting in a service that runs alongside swarm and the rest of it. We do the initial configuration of saying "this is the DNS name, route it to this service", but then you can go and configure it yourself. So I'm not sure if we do the SSL stuff out of the box, but it's just a service — you can extend it.

Q (middle of the room): If I have containers on an overlay network communicating only with each other using SSL today, should I just remove the SSL layer, because the overlay network already adds encryption?

A: That's a very good question. You do have encryption on top of encryption there — I don't know if there's a limit on how much encryption one should have — but essentially it comes down to whether there's overhead from doing that sort of thing. You can disable the encryption on the overlay and still have your containers speaking to one another through SSL, or disable it in your application. The idea behind a lot of the program is that we don't want to change people's applications, so it's entirely up to you — you have the option of doing it either way — but overhead is really the main reason why you would want to turn one of them off.

(Elton): I think possibly your SSL inside the container is probably more expensive than the encrypted overlay network in compute terms, but yeah, I think you have to test that out.

Q (over here): You mentioned that, using the macvlan driver, containers become first-class citizens of the network. Does that mean I'm not limited to TCP anymore — is that right, as I understood it?

A: Yes, the question makes sense, but the thing you have there is that Docker has taken care of configuring that for you.
Docker is always going to attach an IP address to it, and it's going to be TCP/IP as your communication method.

Q: I was thinking about UDP — that's still IP.

A: Yes — you can send UDP traffic over macvlan, so if you've got a DNS server or something that's serving out addresses, for instance, that works with macvlan.

Q: Thank you.

(Elton): Cool — and I think we're out of time, actually. Yeah, we can do one more.

Q: One question: session stickiness, with the HTTP ingress controller. A lot of applications need stateful sessions — stateful applications — so how are you going to deal with that, without having load balancer after load balancer? Because that's a little bit of a problem, and, well, a mesh is a mesh.

A: So, the way that you deal with that — I had a feeling somebody was going to ask me that — the key way you do this at the moment is that, when you configure a service in UCP, you can essentially set cookies in the header that you would like to use, and UCP will use those to ensure that the same route is taken to the same task. So you will get that stickiness, based upon a session ID or cookie ID that you have as part of that HTTP request.

Q: It's interesting, because the ingress problem comes up every day, and it's nice technology — so thank you. Are you going to open source it? There's no ingress controller out there for this; you have to build it on your own, like we do, because you have stateful applications where you have to route their sessions to the correct container. So you have to do it on your own at the moment. It's interesting as a project — it's a good reason for a talk, so thank you very much.

(Elton): So, a big hand for Dan, please — he's around afterwards if you've got any questions. [Applause]
Info
Channel: Docker
Views: 49,454
Rating: 4.9127727 out of 5
Keywords: Using Docker
Id: PpyPa92r44s
Length: 42min 30sec (2550 seconds)
Published: Fri Nov 03 2017