Docker for Java Developers

Captions
So, my name is Arun Gupta, and this is Fabiane Nardon. Today we will talk about Docker for Java developers specifically. How many of you are Java developers in your day job? Well, that makes sense, pretty much the entire room, so I guess we're in front of the right audience.

I'm a Java Champion. I'm from Brazil, actually, and I work for a company that does data science in Java using Docker, Hadoop, and that kind of stuff, so hopefully I can share a little bit of my experience here.

My name is Arun Gupta. I am a Docker Captain, and I'm also a Java Champion. I like to run; I'm getting ready for my first ultramarathon in June, so I'm pretty excited about that. In addition, I'm always looking for new tips and tricks, for what people can tell me; often when I give this talk, I end up learning a lot more from the attendees in the Q&A.

So, what we plan to cover: this is our broader agenda. We'll talk about what base image you should use when you're building and packaging your Java applications. As a Java developer you use Maven or Gradle, one of those build tools, so we'll talk about exactly how you use those tools to package your Java application and run your containers. We'll talk about multi-container applications: your application typically consists of multiple containers, a database container, a web container, an application container, a caching layer, and multiple instances of those, so we'll cover how to run a multi-container application on a single host for development and on multiple hosts for production. We'll talk about scaling apps on AWS and Azure. We're also going to talk about memory management for Java applications running in Docker, which is a little tricky, and we're going to show how to debug your Java applications, how to monitor them in Docker, and how to do integration testing. So, pretty much everything that you care about from a Java application perspective, but with a coat of Docker paint on it, so that you understand how your life changes as a Java developer when you're building your applications with Docker.

Let's get started and jump right in. When you build a Docker image, you need a base image. What are my options for a Java base image? Well, the first one, the one people have been using for a very long time, is java: a very logical, intuitive name. How many of you are still using java as a base image? Very good, I'm glad nobody is, because that image is deprecated as of December 31st, 2016. If you are using it, please don't, and tell your friends not to use it as a base image either. It is still getting updates, but at some point they'll pull the plug on it and nobody will be updating that image; so think about JDK 8 builds, JDK 9 builds: all of that good stuff that you care about will not be available in that base image.

So what do you do then? You use openjdk as the base image. That is the base image you want to use; it is basically the java image renamed, and it is getting all the latest updates. This is a snapshot from Docker Hub, so you can look at the tags: the latest tags, the 8 tags, the 9 tags, all the different versions are there. You can pick a Debian-based image or an Alpine-based image, and you can pick the JDK image or the JRE image. Now, if you saw the keynote this morning, you saw a demo of how you can use a multi-stage build to be more effective.
So think about using the JDK image for building your application, and then a second stage where you use a JRE to package it. That excludes from the build your compilation files, your JDK, your Maven artifacts, everything else you needed only to compile; you use only the jar files, with the JRE as the base image. That keeps it simple and cuts your image size down further.

If you can deal with the Alpine image: what is the difference? If you compare Debian and Alpine for a second, Debian is almost three times the size. So what is it that is not available in the Alpine image? You've got to be aware of that. This slide shows a snapshot of what you are missing out on in the Alpine image; essentially it's non-essential stuff. I mean, I hope only a handful of people are probably still using CORBA; that's the kind of utility they have cut out. If you're using JavaFX, then you may care about this. Essentially, it gives you a snapshot of what you're not getting if you use an Alpine-based JDK image, and it's important to know that.

What is the third option? "I use Oracle JDK; how do I use that as a base image?" Well, first of all, there is no Oracle JDK base image, and you can't create an image, put Oracle JDK in it, and push it to Docker Hub. That's a license violation from Oracle's perspective, because it's a supported product; you accept the license when you download the JDK. So you can't push that image to Docker Hub. What is your option? The Oracle Container Registry. If you go to container-registry.oracle.com, you can sign up for an account, and that's where you can download the official Oracle images.

"Well, I don't want to go that route. I have my custom JDK where I've added some tweaks, some tuning, all that kind of stuff. Where do I host my specific images? I still want to use my JDK; I have my support contract and so on." In that case, you do create your custom image, but you put it in a private repository, for example ECR, the Amazon EC2 Container Registry. You create your own image and push it there. It's a managed service, so you don't worry about how the registry is run, but your image is sitting there and you control the access; it's not publicly available, hopefully.

And last but not least, the fifth option I would recommend: look at Zulu. "You know what, I don't have a support contract. All I want is a supported, commercial JDK image that is available on Docker Hub. I don't have time to set up ECR or my own registry; I just want a commercially supported image." Companies like Azul, and similarly IBM, have supported versions of the JDK available on Docker Hub, so that's one of your options.

The key part I want you to think about when you are choosing the base image: do you really want the JDK? Is there any compilation happening at runtime? Because that's going to bloat your application right away; your image size gets big. So I highly encourage you to use the multi-stage build process, where in the first stage you use the JDK to compile your application, and in the second stage you just pick up the jar and use the JRE as the base image (a sketch follows below). That's something to keep in mind.
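A minimal sketch of the multi-stage Dockerfile just described; the tags, paths, and jar name are illustrative, and it assumes a Maven wrapper is checked into the repository:

    # Stage 1: build with the full JDK
    FROM openjdk:8-jdk AS build
    COPY . /app
    WORKDIR /app
    RUN ./mvnw package

    # Stage 2: run with the smaller JRE; only the jar is carried over
    FROM openjdk:8-jre-alpine
    COPY --from=build /app/target/hello.jar /opt/hello.jar
    CMD ["java", "-jar", "/opt/hello.jar"]

Only the final stage becomes the shipped image, so the JDK, sources, and Maven caches never reach production.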
All right. Now we understand what the base images look like. As a Java developer, I am building my applications using either Maven or Gradle, possibly, and maybe a shell script, or maybe hand compilation, hopefully not; but Maven and Gradle are the two popular build systems, at least from the Java developer's perspective. So let's take a look: when I'm using those build systems, what tools and plugins are available to me for a more seamless integration?

First of all, in terms of Maven plugins, there are a few varieties available: there is one by Spotify, one by fabric8. My preference is the one by the fabric8 team, first of all because Red Hat is very much behind it, and open source is in their DNA. It is very actively maintained, it's the plugin they use for their OpenShift platform as well, the quality and the feature set are pretty rich, the community is very responsive, and I know the maintainers one on one as well. Essentially, you add this as a plugin in your pom.xml, and then you have a bunch of goals available to you: you can start a container, stop a container, build an image, push an image to Docker Hub. So your typical Docker lifecycle commands are available as part of your Maven lifecycle itself.

How does my pom.xml change? Here is my Maven configuration file; let me walk you through it a little bit. Lines 64 through 66 show the plugin definition, and lines 67 through 85 are the plugin configuration. The configuration essentially has an images part, which says which images need to be built, and within images you can define each image. Within an image, I have three parts: the image name, which is essentially my repository name; a build section, which is how I'm going to build my image; and a run section, which is the run part of it. So if you have multiple images to build, you can specify all of them there. The one key line I want to highlight is line 74, where I'm saying "artifact". What that means is that as part of mvn package I possibly generated a jar or a war file, and all I'm telling Maven is: go ahead, pick up that artifact, and add it to my base image. Now, what about the base image? In this case, on line 72, I'm specifying OpenJDK as the base image, but I could use whatever base image I want. And in this case I'm specifying the Dockerfile configuration in the pom.xml itself; sometimes what you may want instead is to say "here is my Dockerfile, use that Dockerfile", and that's perfectly fine. All those configurations are entirely possible.

Then, finally, in your pom.xml itself, you can say how to associate the goals that are available to you with different Maven phases. For example, in this case I'm saying: every time I say mvn package, which is on line 89, do the build. So you just say mvn package, possibly in a profile like a docker profile that you create in Maven, and the moment you say mvn package it's going to build the image for you; and when you say mvn install, it's going to run the container for you and spit out the logs as well. Now, this is completely up to you. The point being: you have a Docker Maven plugin by the fabric8 team, it has all these goals available, and you can certainly extend the goals and attach them to different phases of Maven; a sketch follows below. And by the way, if you look at the bottom of the screen, there is a docker-java sample in my GitHub repo that has this entire Maven file, a fully functional sample that you're welcome to use.
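A minimal sketch of the fabric8 docker-maven-plugin configuration described above. The plugin version, image name, and port are illustrative, and the line numbers mentioned in the talk refer to the slide, not to this sketch:

    <plugin>
      <groupId>io.fabric8</groupId>
      <artifactId>docker-maven-plugin</artifactId>
      <version>0.20.1</version>
      <configuration>
        <images>
          <image>
            <name>hellojavaee/webapp</name>              <!-- repository name -->
            <build>
              <from>openjdk:8-jre</from>                 <!-- base image -->
              <assembly>
                <descriptorRef>artifact</descriptorRef>  <!-- add the packaged jar/war -->
              </assembly>
            </build>
            <run>
              <ports>
                <port>8080:8080</port>
              </ports>
            </run>
          </image>
        </images>
      </configuration>
      <executions>
        <execution>
          <id>build-image</id>
          <phase>package</phase>                         <!-- mvn package builds the image -->
          <goals><goal>build</goal></goals>
        </execution>
        <execution>
          <id>run-container</id>
          <phase>install</phase>                         <!-- mvn install starts the container -->
          <goals><goal>start</goal></goals>
        </execution>
      </executions>
    </plugin>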
Similar things are available in the Gradle world. Just like Maven, Gradle is a build tool, and there is a Gradle Docker plugin for it. Again, just like with Maven, there are a few varieties of Gradle plugins, but my preference here is the bmuschko Gradle Docker plugin; 3.0.6 was the latest release, I think from about three weeks back, when I built these slides. You can find the details at the GitHub URL shown at the bottom here; the documentation is pretty thorough. It was started by Benjamin Muschko, but now there are other maintainers of the plugin.

Essentially, the plugin comes in two flavors. There is a general-purpose Docker Remote API flavor, where whatever you want to do with Docker is available to you; it's like invoking the Docker Remote API, the full-blown API, however you want to invoke it. In that case it gives you image commands: I want to build an image, push an image, remove an image; so your image lifecycle commands are available. Similarly for containers: I want to start a container, stop a container, kill a container; all those lifecycle commands are available to you. Or, and for Java this is what I prefer, there is an opinionated Java application plugin: "You know what, that's far too much detail for me to figure out. I'm just going to use the Java application plugin, where all I'm saying is: here is my base image, here is my image tag, and here's the port I want to expose; just go ahead and make the image for me."

Okay, let's take a look at a sample. Here I'm showing you a build.gradle file. I am, of course, adding my dependency, saying this is my bmuschko Gradle Docker plugin. On line 12, I'm applying the application plugin, and on line 13 I'm saying I want to use not the Docker Remote API plugin but the Docker Java application plugin; that's the opinionated plugin I was talking about earlier. I'm importing some classes here, and then the block starting at line 30 is where the magic is happening. What does it mean to be opinionated? All you're saying is, essentially: I'm going to build a Java application; make a Docker image for me; my base image is OpenJDK, and the tag is hello-java. That's it, as simple as that. In addition, I've also added a couple of custom tasks, like createContainer and startContainer, and the startContainer task, which is on line 42, depends on createContainer. So I can just say startContainer and it will start the container, but before that it will create the container. Very fine-grained control is available to you from the Gradle perspective; a sketch follows below. And by the way, the URL that I showed earlier has both the Maven sample and the Gradle sample, which you're welcome to look at.
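A sketch of the opinionated build.gradle described above. The main class and tag are illustrative, and the property and task names follow the plugin's 3.x DSL as best as can be reconstructed here, so treat them as assumptions rather than exact API:

    import com.bmuschko.gradle.docker.tasks.container.DockerCreateContainer
    import com.bmuschko.gradle.docker.tasks.container.DockerStartContainer

    plugins {
        id 'application'
        id 'com.bmuschko.docker-java-application' version '3.0.6'
    }

    mainClassName = 'org.example.HelloJava'   // hypothetical main class

    docker {
        javaApplication {
            baseImage = 'openjdk:8-jre'       // base image for the generated Dockerfile
            port = 8080                       // port to expose
            tag = 'hello-java'                // image tag, as in the talk
        }
    }

    // custom tasks for finer-grained lifecycle control
    task createContainer(type: DockerCreateContainer) {
        dependsOn dockerBuildImage            // task added by the plugin
        targetImageId { dockerBuildImage.getImageId() }
    }

    task startContainer(type: DockerStartContainer) {
        dependsOn createContainer             // starting implies creating first
        targetContainerId { createContainer.getContainerId() }
    }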
Let's take a look at multi-container applications. As I said earlier, when you're building your application, you need an application server like WildFly, a database server like MySQL or Couchbase, whatever database comes to your mind, then a web server, and then a caching layer. And typically, you don't want a single instance of an application server, a single point of failure; you want multiple instances of it, and similarly with the others: the database is set up in a cluster. So pretty much from the start, the application you're building is a multi-container application. Let's see what Docker offers us in that space.

The core here is Docker Compose. What Docker Compose gives us is the ability to define and run multi-container applications. Your configuration can be defined in one or more files; the default file name is docker-compose.yml, and that's where you define what your Docker Compose version is and what your services look like, and I'll show you the syntax for that in a moment. In addition, you can also have a docker-compose.override.yml, which overrides the services defined in docker-compose.yml. This is very useful, particularly in cases where you want to move between production, dev, or staging. And the fun part is that you can use -f to specify as many Docker Compose YAML files as you want, and what that brings you is the ability to say: I'm going to use a combination of these files, define whether a particular Compose file is targeted at dev or staging or production, and then accordingly spin up and shut down my environments. It's a single command to manage all the services, and it's very good for reducing the impedance mismatch between dev, staging, and CI. "But I think you found something useful as part of this?" "Yeah; can you go to the next slide?" Oh well, let's talk about this a little bit first.

This is the syntax here: multi-container on a single host. The beauty of Docker is that whatever works on a single host will scale to multiple hosts as well. What I'm showing here is a docker-compose file in which I have a web service, which is on line 10, and a db service, which is on line 3 (a sketch of such a file follows below). All I'm saying is: bring up this web service, which is a Java EE application deployed in a WildFly container, and make it talk to a Couchbase instance, which is defined by the image on line 4. Now, how do these containers really talk to each other? All I'm doing is defining, in my web service, a COUCHBASE_URI environment variable that points to my db service. That's the logical connection; it's late binding, and when the late binding happens, it figures out the IP address. That is very important. You don't want to hard-code an IP address, because if the container dies and it's running as a Swarm service, it's going to come up on a different host and possibly be assigned a different IP address; or you may scale the service, so there may be multiple containers running behind the scenes. So that logical connection to the service is fundamentally important.
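A minimal sketch of the Compose file being walked through; the service names (web, db) and the environment-variable approach follow the talk, while the web image name is illustrative:

    version: "3"
    services:
      db:
        image: couchbase                  # development database image
      web:
        image: example/javaee-wildfly     # illustrative: Java EE app deployed on WildFly
        environment:
          - COUCHBASE_URI=db              # late-bound logical service name, not an IP
        ports:
          - "8080:8080"                   # development port mapping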
Now, you can run multiple Compose files. For example, I have this file, docker-compose.db.yml, and in it I'm saying that instead of mapping port 8080 on the host to 8080 in the container, I'm mapping port 80 on the host to 8080 in the container. In addition, the image I'm using is not the couchbase image, which is for development, but couchbase:prod, and what that gives me is the ability to say: by the way, this is the production-certified image, possibly from my private repository. So let's take a look. I have a docker-compose.yml sitting somewhere, and if I say docker-compose -f docker-compose.yml -f docker-compose.db.yml up, then instead of my application coming up on port 8080, it will come up on port 80 automatically. I can look at the list of services by giving the same command with ps, and then similarly I can shut the services down as well. This is a very useful feature, and it's essentially what we see customers using a lot for cutting down the impedance mismatch between dev, staging, and production, literally using a combination of files. It's a very powerful feature of Docker Compose. There are other capabilities, like extends: you can define, for example, your AWS secret and access keys in a configuration file, and then in your Compose file say "extend from that file", so that you don't have to repeat those access keys and secrets in all the files. That makes it very powerful.

So let me show you how this can really change your life as a Java developer. We have been using Docker for quite some time, and we've built several images for everything we needed: all the databases, Hadoop, Spark. It's a very complex application, so when we started building our images, what we did was a README file, like probably most early beginners, with all the instructions on how to start each of the images. So we were actually going to the README file and running these scripts one by one, and changing the paths, because each developer who created a new image would put in their own paths. I know this looks really stupid, but it did work, right? It just took a lot of time. With Docker Compose, this is the new version: you can have exactly the same thing you had in the README file, but in a docker-compose file, and then you can just say docker-compose up, and all your services are going to start, with the right volume mappings, ports, and everything. So what happened? Before, we would take at least four hours to have the whole environment set up, if you were lucky. Now, in two minutes, we have everything set up, even if you're not lucky. I guess Docker Compose makes everybody lucky in that sense.
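A sketch of the override file and the multi-file commands described above; couchbase:prod is the tag named in the talk, the rest is illustrative:

    # docker-compose.db.yml
    version: "3"
    services:
      db:
        image: couchbase:prod             # production-certified image
      web:
        ports:
          - "80:8080"                     # host port 80 instead of 8080

With both files given, later files override earlier ones (note that Compose concatenates multi-value options such as ports, so per-environment port mappings are often kept only in the override files):

    docker-compose -f docker-compose.yml -f docker-compose.db.yml up -d
    docker-compose -f docker-compose.yml -f docker-compose.db.yml ps
    docker-compose -f docker-compose.yml -f docker-compose.db.yml down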
All right, that was the part about running multi-container applications on a single host. How do we get to multi-host applications? Because a single host is, again, a single point of failure. How do I extend that? Well, that's where Swarm mode comes in. Swarm mode is a feature that was introduced in Docker 1.12. If you're starting with a new Docker, you don't need to worry about it; it's just a feature baked into Docker itself. Essentially, it's native clustering in Docker, and it's an optional feature. Once you enable Swarm mode, you can use your Docker CLI to create your applications, deploy your apps, and manage your swarm, all of that capability. The beauty of this is that there is no single point of failure, because it's multi-host Docker: your applications are scattered across different Docker hosts.

It's a declarative state model, in the sense that you say: I want to create a service, and for that service I want to run three replicas. And that's it; that's all you care about. If a container goes down, bringing up a new container is not your problem; Docker owns that responsibility: whatever your desired state is, it will take care of fulfilling it. It is very self-organizing and self-healing in that sense, because it makes sure that your state is maintained, and if a container does go down, it will bring it up on a different host. Now, go back to what I was saying earlier about why it's important to use the logical service name: if a container goes down and comes up on a different host with a different IP address, you don't want your application to break. It's all about application resilience here. Swarm mode also has capabilities for service discovery, load balancing, and scaling, so if you scale your service from three tasks to, say, six tasks, you're still using the logical name, and it will automatically do the load balancing behind the scenes for you. It also has the capability for rolling updates. So everything and anything that you would typically expect from a deployment architecture or an orchestration framework is feasible out of the box here.

So I created a six-node cluster here. The way Swarm mode works is that there is at least one manager, which is the yellow box in the center, and I have five nodes that are worker nodes, in orange here. Now, in that cluster, I give a command, pointing to the manager: docker service create (sketched below). One of the things you want to understand as of Docker 1.13: up until 1.13, the CLI was pretty messed up and the commands were very non-intuitive; in 1.13, all the commands have been nicely organized, so I can say a docker command plus a sub-command. For example, I can say docker image ls, or docker container ls; you have a high-level command and a second-level sub-command. So in this case, I'm saying docker service create, I want three replicas, give the service a name, web in this case, and the image, and Swarm automatically does the scheduling for you. That is pretty cool, and that's a replicated service. There is another mode in Docker Swarm by which you can create a global service: say I want a single instance, for example a Prometheus container, running on each node, and only a single instance; you can run it as a global service as well, in which case you just specify the mode as global.
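A sketch of the Swarm-mode commands being described; the images are illustrative:

    # one-time setup: turn this engine into a Swarm manager
    docker swarm init

    # replicated service: three tasks of "web", scheduled across the cluster
    docker service create --replicas 3 --name web jboss/wildfly

    # global service: exactly one task on every node
    docker service create --mode global --name metrics prom/prometheus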
So, multi-container on multiple hosts: how would you do that? Well, this morning you saw the example of docker stack deploy. You can use a stack YAML file, but you can also use the original docker-compose.yml that we had, which we can just use right here. In this case, all I'm saying is docker stack deploy: take my exact same Compose files from one host to multiple hosts, and the logical service name is still very relevant. That's the importance, and that's the beauty, of how the whole thing works from a single host to multiple hosts. And it doesn't matter where your multiple hosts are: if your hosts are configured using Docker Machine, sure, it will work there; if your hosts are on AWS, that will work too. I want to scale my service? I can just say docker service scale, give the service name and the number of replicas, and that's about it; both commands are sketched below.

All right, so far I've done development on my machine. How am I going to scale these services up on AWS and Azure? If you think about Docker from a high-level perspective, Docker comes as CE, which is the Community Edition, and EE, which is the Enterprise Edition. CE is for development purposes and comes on, say, laptops, which are either Windows or Mac; on servers, which are Linux; or on cloud, AWS and Azure primarily. Similarly, there is Docker EE on servers, which is the commercial offering with a 30-day evaluation version, on Linux-flavored servers, or, once again, on AWS and Azure in the cloud. Docker for AWS and Azure was for the longest time only in development and beta mode, but now it's ready for production. Essentially, what you can do is use Docker for AWS or Azure, which on AWS is nothing but a CloudFormation template. Docker has created the CloudFormation template, so you just fire it up, and as part of that, you specify how many managers and how many workers need to be created. They are already in an auto-scaling group, so from your perspective, if a node goes down, the Amazon infrastructure will bring it back up; if a node gets kicked out or bounced, that will automatically be taken care of. Your services can be connected to an ELB, the Elastic Load Balancer, and be load-balanced there, and your persistent volumes can be stored on EBS. So it's very well integrated with your Amazon infrastructure, if that's what you care about. The same thing on Azure as well: it's very well integrated with VM scale sets, so you have auto-scaling, the Azure load balancer, and Azure storage. In that sense, what you're getting is a native cloud experience for the cloud platform, as opposed to making you learn a new technology here. And as I said earlier, it is available in both Docker CE and Docker EE.
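A sketch of the stack commands mentioned above; the stack and service names are illustrative:

    # deploy the same Compose file as a stack across the Swarm
    docker stack deploy --compose-file docker-compose.yml webapp

    # scale one service of that stack to six replicas
    docker service scale webapp_web=6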
Okay, memory management for Java applications. I have a few examples for you here of how this can be tricky. I set up my Docker here with one gigabyte of memory, and you see that I'm running the free command; there is no Java in it, it's just plain free from Linux, and I passed the --memory switch over there. That's a Docker switch, and you can see that it gave this container 500 megabytes, right? But why can I see here that free is saying I have one gigabyte of memory? That's because Docker relies on control groups to do its magic, and free is a tool that was created before control groups. free is not aware of control groups, so it thinks that the whole memory available is actually the memory I gave to Docker. It doesn't matter whether I put the --memory switch over there or not.

So, guess what other tool was created before control groups and does the same thing? The Java VM, right. So I created a demo that just allocates memory in Java and prints how much memory is free and used, and you can see that I'm running this application with a --memory switch of 100 megabytes. So my whole Docker environment has one gigabyte, but I'm giving just a hundred megabytes to this container, and this container is running Java. You can see here what's happening: I'm running the application, it's allocating memory, and in the end, my application is killed by Docker. So what's happening here? First thing: when you pass the --memory switch, what you are saying is that Docker should kill the application if it goes over the memory you are allocating. So when the Java application allocates more than a hundred megabytes, it should be killed by Docker, and it is killed by Docker. Now, why don't I get an OutOfMemoryError here? That's because the Java VM thinks that it has the whole of Docker's memory (both demonstrations are sketched below).
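A sketch of the two demonstrations just described, assuming a Docker host configured with 1 GB of memory; the ubuntu image and the demo image name are illustrative:

    # free inside the container still reports the full 1 GB,
    # even though cgroups cap this container at 500 MB
    docker run -it --memory 500m ubuntu free -m

    # a Java process capped at 100 MB keeps allocating until Docker's
    # OOM killer terminates it; no OutOfMemoryError is ever thrown
    docker run --memory 100m example/java-memory-demo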
You can see that the total memory here is 253 megabytes. That's because when you don't set the heap size for Java, the default is to use 25% of the whole memory; I have 1 GB for Docker, so the Java VM thinks it can use about 253 megabytes. It doesn't matter what --memory switch I put over there. Another interesting thing, as you can see in the output printed over there, is that the Java VM is actually capable of allocating more than 100 megabytes. That's because when you set the --memory switch for Docker, it means RAM plus swap, so it can actually allocate more memory before it's killed. But the problem with Java applications is that if your applications start getting killed without OutOfMemoryErrors, you are going to chase that bug forever and never know what's happening, right?

So what can you do? Besides setting the RAM and swap, you can do this. This is my Maven build, and you can see that on line 62 I create a JAVA_OPTIONS variable over there. If I do this, then when I start the application, I can pass the --memory switch to Docker, but I can also pass my JAVA_OPTIONS and set my heap size. So now, when I execute the application, if it runs out of memory, it's going to give me an OutOfMemoryError before Docker actually kills the Java application. That's the right way to do it. By default, the container is going to use as much memory and swap as is available; it doesn't matter what --memory switch you are using. You can restrict the memory using these Docker switches: --memory is how much memory can be allocated before Docker kills the application; --memory-reservation is more like a soft limit; and --memory-swap is how much swap can be used by the container. Right now, the JDK is completely unaware of these limited resources, for memory and also for CPUs, but for JDK 9 there is an experimental feature being worked on, so that the JDK will support control groups. That should solve this problem, but right now, the only thing you can do is set the memory for the JDK to be compatible with the memory you're giving to the container.

Now, debugging Java applications. This is a feature that is very important for us, because when you are debugging an application that is inside Docker, you are debugging an application that's very close to the production environment, right? Usually, when you debug code on your own machine, you can't reproduce things like network connections, database connection problems, and things like that; but if you have your application running inside that Docker environment, you can simulate something that's very close to production. So, the way to debug applications running inside Docker: you remember that I had the JAVA_OPTIONS variable in my pom.xml over there. If I have that, I can start my Docker application exposing port 5005 and using the -e switch to start it in debug mode (both patterns are sketched below). When I start it, what is going to happen is that the application is going to stop, waiting for an IDE or a debugger to connect to it, and then you can debug your application. This is how you do it in NetBeans: you just attach the IDE to that port over there, and then you can debug your Java application inside Docker and catch all those bugs with database connections, network, and everything that's really hard to catch when you're working outside the production environment. Well, in this case we are using NetBeans; by no means is this restricted to NetBeans, even though it is our favorite IDE, which is another thing. Yesterday we had somebody in the workshop using Eclipse for debugging as well. Eclipse, IntelliJ IDEA, any IDE you want: as long as it's able to connect to this port to do remote debugging, it's okay.
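A sketch of the two JAVA_OPTIONS patterns described above. It assumes, as in the talk's Maven build, that the image's start script passes a JAVA_OPTIONS environment variable through to the JVM; the image names and the 80 MB heap value are illustrative:

    # cap the heap below the container limit, so the JVM throws an
    # OutOfMemoryError instead of being silently OOM-killed by Docker
    docker run --memory 100m \
        -e JAVA_OPTIONS="-Xmx80m" \
        example/java-memory-demo

    # expose port 5005 and start the JVM suspended, waiting for a debugger
    docker run -p 5005:5005 \
        -e JAVA_OPTIONS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005" \
        example/javaee-wildfly

Attaching NetBeans (or Eclipse, or IntelliJ IDEA) to localhost:5005 then resumes the application under the debugger.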
So, to monitor your Java application: Docker has several command-line tools that can help you see what's happening inside your application and inside your container. The first one is docker stats, which is going to give you the CPU and memory usage and all the basic monitoring data that you need. Another one is the Docker Remote API, with which you can actually connect to your services and see what's happening over there; it's going to give you a JSON response as a result. Yesterday in the workshop, there were people thinking of using this API to do several other, more advanced things. You can, for example, check whether a service is running inside Docker and then use that to decide what to do: have your application wait for a database that's coming up, or something like that.

I think the key part here is that the Docker CLI is just a dumb client. Essentially, whatever command you give, docker image build or docker container run, underneath the layer it is actually issuing a REST API call to the server that is listening on the host side. And essentially, if you look in this case, we're saying http://localhost, assuming port 80 in this case, /containers/<id>/stats; that's the API that is being issued behind the scenes for you, and here we're just directly tapping into the API itself. The other part you want to be a little bit aware of here is how the API can be accessed on a Mac; and if you're using Windows, there are a lot of flavors of Windows that you can use. I blogged about this a couple of days back, so if you want to access the Docker Remote API, go to my blog and read about how much fun you can have when you're using Docker on Windows. You should actually check Arun's blog frequently, because he's blogging a lot of cool stuff about Docker and Java. Thank you; I read it every time.

If you're running Swarm, this is another command you can use: docker service logs. You can see what's happening with your services, and this is very powerful as well, because if you're scaling your service, you don't know where those containers are going to be running, on which hosts. Well, you can find out, but it's difficult. docker service logs is an aggregated command, and as you're seeing here, it lists the actual instance that each log line is coming from. That's a very handy aggregated service log. These commands are sketched below.

There are third-party tools you can use as well, for example Prometheus; this is new in version 1.13, where you can connect to several endpoints and get information there. There is a tool called cAdvisor that you can actually run using Docker, with the command over there, and it's going to give you a nice graphical interface showing what's going on inside your container. One of the tricky parts of cAdvisor is that it only gives you data for 60 seconds, so you typically have to back it with InfluxDB, where you store your cAdvisor data, and then front it with a Grafana dashboard. There's plenty of material available on that; I will just say google it and you will find the right information.
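A sketch of the monitoring commands described; the service name is illustrative, and the curl example assumes access to the daemon's Unix socket (on Mac and Windows the socket is reached differently, as the blog post mentioned above discusses):

    # live CPU / memory / network usage for running containers
    docker stats

    # the same data straight from the Remote API, as a JSON stream
    curl --unix-socket /var/run/docker.sock http://localhost/containers/web/stats

    # aggregated logs across all tasks of a Swarm service
    docker service logs web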
So, integration testing. Integration testing, I think, is one of the most important uses of Docker for Java developers, because, well, I don't know if you do integration testing, but it's really hard. When you do integration testing, you want to be as close to production as possible. So you don't want to start an embedded database, for example, because it's going to give a different result from when you go to production. Usually, you want to simulate several nodes running a Hadoop cluster, for example, or several nodes for your database, or several databases talking to each other, and this is very hard. With Docker, what you can do is create, for example, a docker-compose file that is going to be used just for testing.

Why would I do this? Because there are a few differences between when I'm running tests and when I'm running production. First, usually I don't want to have volumes mapped, so I can run several builds in parallel, and when I stop a build or stop a test, all my data is gone; it's clean for the next test. The other thing is that I don't want to publish any ports, because if you start publishing ports, I can't run two builds or two test passes in parallel: I'm going to have port conflicts. So usually what I do is create a docker-compose file that's just like the docker-compose files we're using for production, but without the volumes mapped and without the ports exposed. Then I run the application, and usually what happens is that you have to run a script that loads data into the database, then maybe run the application multiple times to simulate several situations. After you do that, you run the integration tests, which usually means checking log files or checking the results in the database, to see if the application ran as you expected. Then you stop the services using the same docker-compose file, and you have a completely clean environment for the next test, because nothing was started with volumes mapped, so the database is going to be empty the next time you run the integration tests. It's a lot easier than doing this by hand, so if you're not using Docker for other things, doing just integration testing is worth the investment.

And there is a trick, because if you want to run multiple tests in parallel, usually when you have Jenkins, for example, and you have several builds going on, if someone starts a build and another one starts a similar build using the same services, what happens is that you're going to have conflicts, because Docker is going to start the services on the same network, unless you use the -p switch over there. So what am I saying over there? I'm saying that Docker Compose should start all the services in their own network, and I'm naming that project using the Jenkins BUILD_NUMBER variable, but it could be anything else you want. As soon as I do that, I have all the services isolated, and it allows me to run several builds in parallel (see the sketch after this transcript). So that's a nice trick.

All right, well, I hope you had fun. If you liked the talk, we would really request you to go to the DockerCon app and give us a rating that you feel is appropriate, which hopefully brings us back again. Thank you; we will be around for taking questions. [Applause]
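To recap the parallel-builds trick from the end of the talk, a sketch assuming Jenkins sets BUILD_NUMBER and that docker-compose.test.yml is the test-only Compose file (no volumes, no published ports):

    # each build gets its own Compose project name, and therefore its own
    # network and container names, so parallel builds don't collide
    docker-compose -p build${BUILD_NUMBER} -f docker-compose.test.yml up -d

    # ...load test data, exercise the application, check the results...

    # tear everything down; nothing was volume-mapped, so the next run starts clean
    docker-compose -p build${BUILD_NUMBER} -f docker-compose.test.yml down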
Info
Channel: Docker
Views: 27,768
Rating: 4.8746438 out of 5
Keywords: docker, containers, Docker Captain, PODA, WORA, JAva, dev, test, Run, package, Docker Hub, Maven, Engine, Swarm mode, Pipeline, Multi-container, Using Docker
Id: yHLAaA4gPxw
Length: 43min 12sec (2592 seconds)
Published: Mon May 08 2017