Transparent Container Solution for DPDK Applications

Captions
Hello everyone. Before we start, I would like to calm everybody down: although I'm dressed like it, I'm not a lawyer, and I'm not working for marketing or sales, although I do want to sell you an idea in a few seconds. My name is Shahar, and with me you can find Tanya; we both work for Toga Networks. Today we want to present a new approach that we have developed for deploying DPDK applications in containerized environments. This approach gives the DPDK application the same look and feel running in a container as it had running on a VM or a host.

But first I would like to ask you a few questions, with a show of hands. How many of you have used containers at least once? Great. How many of you are using containers on a daily basis? A lot less. And last but not least, how many of you have deployed a DPDK application in a container? Great. So let's start.

On the agenda, we're going to go through some background to establish the same baseline for everyone; you may already know some of it. Then we're going to get technical and talk about our solution in detail.

If we look at the telco evolution, we can see that over the last years the industry has been continuously innovating, changing the way peers communicate with each other and the way users are serviced. Due to the problems with the old-fashioned approach, NFV and SDN were introduced to the market, bringing virtualization to the telecommunication industry. This allowed the industry to deploy any network appliance on a generic server, without relying on specialized hardware.

If we look at the application evolution, we can see a big movement in recent years from the monolithic approach to a distributed approach, what we today call microservices. This trend has brought new challenges and new requirements, two of them being low latency and high-bandwidth networking.

So what is a microservice? The main idea behind a microservice is to break a feature down into separate parts that can be developed by different teams and can even be written in different languages. We could develop a microservice and run it on a VM, but I'm not sure that's the best match. Think about it: every time we want to release a new feature or a new application, we need to create a new VM image. We have a very large footprint, and we have a very slow spin-up time, because it takes a lot of time to boot a VM. We also know that the telco industry itself has started moving towards microservices in order to prepare for the 5G era. So it seems we may need a new packaging unit for deploying microservices.

What do we have today? If we look deeper into the VM, we all know that a VM has its own kernel, its own libraries, and its own applications. In general, most applications don't use all the resources that a VM gives them, and because each VM runs its own kernel, the footprint is big in memory, storage, and of course networking. So is a VM the best way to deploy a microservice, or do we have an alternative? The answer is yes, and it's containers.
A container is a standard unit of software that packages up the code and all of its dependencies. The container runs in user space and shares the OS kernel of the VM or host it runs on, which means we have a very low footprint: a container can weigh just a few megabytes and takes less than a second to spin up, while, as we said about VMs, an image can be gigabytes and take minutes to boot. The fact that a container is OS-independent allows us to run and move containers between different hosts, and even into the cloud, without overhead. And because we are not running our own kernel, we get higher density.

I've been researching containers for the last year, and I've been hearing a lot that containers are a passing trend, or that it will take five to seven years for containers to become mainstream. So let's look at what the market is saying. In research conducted last year, the container market is continuing to grow and is projected to hit 4.3 billion dollars by 2022. In research conducted in 2016, 125 companies were identified as using containers; when the same research was released last year, 184 companies were identified as using them, around 50% growth. Looking at organizational adoption, around 50 percent are either using containers right now or planning to use them within one or two years at most.

So what's going on with the cloud? The cloud is shifting towards containers: Containers-as-a-Service is the second fastest growing cloud service today, and Microsoft is planning to re-architect Microsoft Azure to make it container-based.

So what was our target? Our target was to give the DPDK application the same look and feel, and the same performance, running in a container as it had running on a VM or a host. How did we do that? With something we call Container Direct. And now it's time to get technical, so I'll hand over to Tanya.

Okay, thanks Shahar. Hello everybody, very excited to be here today. We chose to implement Container Direct using Docker, which is a platform for developers to deploy applications within containers. Why did we choose Docker? Several reasons, actually. First of all, it is flexible: you can take any application, any operating system, and easily deploy it inside a container. Because Docker offers a decentralized network, your applications can grow and scale independently of each other. Docker can be used easily across platforms. But if I'm completely honest with you, the reason we chose it is because it's extremely user friendly.

As we already mentioned, a container is a standalone package of software that includes everything you need for running your application. So how do these standalone containers communicate with each other? Before we dive into that, let's quickly go over the Docker workflow. Say I'm a developer and I want to deploy my application inside a container; what do I do? First I write a Dockerfile, in which I specify all the dependencies and everything else my application will require. From the Dockerfile I generate a Docker image, which is more or less my executable, because when I run the Docker image a Docker container is created from it. The image I created is uploaded to Docker Hub, from which different teams can pull it and start their own independent containers.
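A minimal sketch of the workflow just described, driven from Go via the Docker CLI (Go is also the language of the libraries mentioned later in the talk). The image name and build directory are placeholders, not anything from the talk; a plain shell session or Dockerfile would do the same job.

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

// docker shells out to the Docker CLI and fails loudly on error.
func docker(args ...string) {
	cmd := exec.Command("docker", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("docker %v: %v", args, err)
	}
}

func main() {
	docker("build", "-t", "myteam/myapp:1.0", ".") // Dockerfile -> image
	docker("push", "myteam/myapp:1.0")             // image -> registry (Docker Hub)
	docker("run", "-d", "myteam/myapp:1.0")        // image -> running container
}
```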
Now these containers need to communicate between themselves somehow, right? This is exactly what Docker networking is: basically a pipe for containers to communicate over. The Container Networking Model (CNM) is what formalizes the steps needed to enable Docker networking, and it is based on three main components. First we have the sandbox, which includes all of the container's networking configuration: routing tables, the IP stack, everything. Next we have the endpoint, which actually connects the container to the network; for example, it could be an internal port of an Open vSwitch. And a network is a group of endpoints that can communicate with each other.

Docker implements the CNM with libnetwork, an open-source library that supports several types of networks. Let's dive a little deeper into that. In libnetwork we have the network controller, which is our entry point into the library: it exposes an API for the user, in our case the Docker engine, to easily manipulate and configure network drivers. Next we have the network driver, which is actually the most interesting part, because this is where all the logic is implemented; this is what defines how communication between containers takes place. The network is just an abstract object that defines a group of endpoints that can communicate with each other; a network can be local to one host, meaning all your containers are located on the same host, or it can span multiple hosts. An endpoint, of course, is just an entry point for a container to connect to the network, and as we said, the sandbox packages up all the container's networking configuration.

As a quick summary of libnetwork, let's go over the CNM lifecycle. Say we have a new network driver. First, it registers with the network controller. When we want to create a network, the network controller locates the appropriate driver and binds it to the network. Now we have a network and we want to use it for containers, so when we connect a container to this network, the driver creates an endpoint for us and packages it up in the container's sandbox. When the container finishes its task, the sandbox is destroyed and the endpoint is released.

We said that libnetwork supports multiple network drivers, and there are already a few available for us to use. The first, and actually the default one, is the bridge, which provides connectivity between containers running on the same host. Next we have the host network, which removes the isolation between the container and the host it's running on and lets the container use the host's networking interfaces directly. The overlay network spans multiple hosts using some sort of network overlay encapsulation, for example VXLAN. And macvlan enables you to assign a MAC address to your container. But what interests us in libnetwork is the fact that it supports remote drivers, allowing us to implement a new network driver for our Container Direct solution.
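Under the hood, a libnetwork remote driver is just a process answering libnetwork's documented JSON-over-HTTP calls on a unix socket. The sketch below shows that protocol with only the Go standard library; the socket name follows the plugin name as rendered in this transcript (hi-sriov), and the handlers are stubs, not Toga's implementation.

```go
package main

import (
	"encoding/json"
	"log"
	"net"
	"net/http"
)

func reply(w http.ResponseWriter, v interface{}) {
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(v)
}

func main() {
	mux := http.NewServeMux()

	// Docker calls this when it discovers the plugin socket.
	mux.HandleFunc("/Plugin.Activate", func(w http.ResponseWriter, r *http.Request) {
		reply(w, map[string][]string{"Implements": {"NetworkDriver"}})
	})

	// "local" scope: networks from this driver live on a single host.
	mux.HandleFunc("/NetworkDriver.GetCapabilities", func(w http.ResponseWriter, r *http.Request) {
		reply(w, map[string]string{"Scope": "local"})
	})

	// Invoked on `docker network create -d hi-sriov ...`; a real driver
	// would enable SR-IOV and provision VFs here.
	mux.HandleFunc("/NetworkDriver.CreateNetwork", func(w http.ResponseWriter, r *http.Request) {
		var req struct {
			NetworkID string
			Options   map[string]interface{}
		}
		json.NewDecoder(r.Body).Decode(&req)
		log.Printf("CreateNetwork %s options=%v", req.NetworkID, req.Options)
		reply(w, struct{}{}) // empty JSON object signals success
	})

	// CreateEndpoint, Join, Leave, DeleteNetwork, etc. are elided here.

	// libnetwork discovers plugins via sockets under /run/docker/plugins/.
	l, err := net.Listen("unix", "/run/docker/plugins/hi-sriov.sock")
	if err != nil {
		log.Fatal(err)
	}
	log.Fatal(http.Serve(l, mux))
}
```

The go-plugins-helpers library mentioned in the talk wraps exactly this protocol behind a Go interface, so a driver only implements methods such as CreateNetwork and CreateEndpoint rather than raw HTTP handlers.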
Now that we've finished the technical introduction, I'm finally going to talk about DPDK connectivity. Of course, we weren't the first ones to tackle this issue: if today you want to take a DPDK application and deploy it inside a container, you can do that, there are solutions. But all of the existing solutions are based on paravirtualization using some sort of vSwitch, which introduces both performance overhead, because you have additional software layers, and poor resource utilization, because, for example, vhost consumes a lot of CPU cycles. We said that our goal is to eliminate any additional overhead and enable our applications to run inside a container as if they were running directly on the host. So how do we do that? By enabling container networking over SR-IOV.

In this networking model we have a VF assigned per container. We have a new SR-IOV driver, hi-sriov, that manages, configures, and assigns the VFs for the containers, and we plug it into libnetwork. As you can see in this example, we have several containers running side by side on the same host, and the isolation between them is done by the hardware, so we don't have any additional software layers and no redundant data copies. And because the containers have direct access to the hardware resources, we are able to get the same performance as if we were running directly on the host.

Let's zoom in on our hi-sriov driver and take a look at its software components. At the top we have the Docker framework, which is our entry point for all the user commands. We use the network plugin helper, a Go plugin-helper library that allows us to plug our new driver into libnetwork, and our new driver is called the docker hi-sriov plugin. It uses several other libraries: netlink is the library that exposes the kernel API for different network manipulations, and the sriovnet library implements all the required SR-IOV configuration. We also implemented hi-logger, an easy and nice-to-have logging library with different logging levels that we use in all of our modules.

So I want to create an SR-IOV-based network; how do I do that? With the docker network create command, specifying that we want the driver managing this network to be our hi-sriov driver. In this example we request the network to be configured with five VFs, we want them created on top of the PF called eth2, and we want the network to be called mynet so we can later connect to it. What happens behind the scenes when I run this command? First of all, the Docker engine forwards the request to libnetwork; libnetwork locates the requested driver, hi-sriov, and passes the actual network-creation command to it. The hi-sriov driver then enables SR-IOV, configures the five virtual functions we requested, and saves the network configuration.
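At the kernel level, "enables SR-IOV and configures the virtual functions" boils down to a write to the PF's standard sriov_numvfs sysfs attribute. A minimal sketch, reusing the eth2/five-VFs example from the command above; the actual hi-sriov driver code is not public, so this only illustrates the mechanism.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// setNumVFs creates n virtual functions on the given physical function by
// writing to /sys/class/net/<pf>/device/sriov_numvfs.
func setNumVFs(pf string, n int) error {
	attr := filepath.Join("/sys/class/net", pf, "device", "sriov_numvfs")
	// The kernel rejects changing a non-zero VF count directly, so reset first.
	if err := os.WriteFile(attr, []byte("0"), 0200); err != nil {
		return err
	}
	return os.WriteFile(attr, []byte(fmt.Sprint(n)), 0200)
}

func main() {
	// Matches the example: five VFs on top of the PF eth2 (requires root).
	if err := setNumVFs("eth2", 5); err != nil {
		fmt.Fprintln(os.Stderr, "enabling SR-IOV failed:", err)
		os.Exit(1)
	}
}
```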
Okay, so we have a network ready; now we want to start DPDK containers and connect them to this network. How do we do that? With a new user-space utility called docker-rundpdk. Why did we implement a new user-space utility? Because we want to handle all the setup and teardown configuration that DPDK requires, completely transparently for the user. If we zoom in on the software components of docker-rundpdk, we see two new libraries we haven't encountered before. First we have the Cobra library, which is just a nice API for creating new executables, and this is exactly how we create docker-rundpdk. docker-rundpdk uses the sriovnet and hi-logger libraries we saw earlier, but it also uses another library called dpdk-map that implements all the operations DPDK requires.

So now we want to create a DPDK container. We do it by issuing the docker-rundpdk run command, specifying that we want this container to connect to the mynet network we created earlier, that we want the container to be called dpdkc, and that it should have two virtual functions assigned to it. The container itself will run CentOS with the bash command. What happens behind the scenes when I run this command? First, the driver allocates the two VFs I requested. Once it has the VFs, docker-rundpdk binds them to DPDK, generates a new docker run command exposing the relevant UIO devices, saves the container details in an internal database, and starts the container. Why do I need to save the container details in an internal database? Because I want to be able to gracefully shut the container down when its job is completed, and I can do that by running the docker-rundpdk stop command, using the container name as the key into my internal database. So what happens there? I load the container details, I know which interfaces were assigned to it, I unbind them from DPDK, and I stop the container.
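The two steps docker-rundpdk is described as automating can be sketched as follows: rebind a VF's PCI device to a userspace I/O driver, then generate a docker run command exposing the needed device nodes. The PCI address is illustrative, and while the talk mentions UIO devices, this sketch uses the mainline vfio-pci driver via the generic driver_override sysfs mechanism; treat it as an assumption about the approach, not the tool's actual code.

```go
package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

// bindToUserspace detaches a PCI device (e.g. an SR-IOV VF) from its kernel
// driver and attaches it to a DPDK-usable driver such as vfio-pci.
func bindToUserspace(pciAddr, driver string) error {
	dev := filepath.Join("/sys/bus/pci/devices", pciAddr)
	// Unbind may fail if no kernel driver is attached; that is fine.
	os.WriteFile(filepath.Join(dev, "driver/unbind"), []byte(pciAddr), 0200)
	if err := os.WriteFile(filepath.Join(dev, "driver_override"), []byte(driver), 0200); err != nil {
		return err
	}
	return os.WriteFile(filepath.Join("/sys/bus/pci/drivers", driver, "bind"), []byte(pciAddr), 0200)
}

func main() {
	// Illustrative VF address; a real tool would track this in its database.
	if err := bindToUserspace("0000:03:10.0", "vfio-pci"); err != nil {
		panic(err)
	}
	// Generate the docker run command, exposing the userspace I/O devices and
	// hugepages that the DPDK application inside the container needs.
	// (With vfio, the VF's IOMMU group device would also be passed through.)
	cmd := exec.Command("docker", "run", "--name", "dpdkc",
		"--device", "/dev/vfio/vfio",
		"-v", "/dev/hugepages:/dev/hugepages",
		"centos", "bash")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```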
So we're done with the technical part, but we're not done yet, because as you know every great presentation has to end with some cool numbers and really beautiful graphs, right? But that brings us back to marketing, so, if you please: numbers.

Okay, so let's see some numbers. We set up the l2fwd sample application from DPDK, once in containers and then again on the host, using Ixia as the link partner and packet generator. We normalized the results so the differences are visible. As you can see, with small packet sizes we got less than 2% bandwidth degradation when running in the container; once we got to larger packet sizes, we saw no impact at all compared to running on the host. What about latency? Again we executed the same test, once in the container and then on the host itself, and we had only a 3 percent overhead when running the application in the container versus running it on the host.

So let's wrap up. The container trend is here, and DPDK has to adapt fast. We as a community need to shift towards the container environment and deliver container solutions, so that the new applications and the new telco industry solutions can meet their requirements. Our Container Direct solution breaks the performance barrier we previously had with the classical networking interfaces, so we no longer have to compromise on performance when we move from a VM to a container. Among telco providers we are already seeing a big shift from VNFs to CNFs; the cloud needs this solution, especially for containers, and the time is now. Thank you. Questions?

Q: When you use SR-IOV, surely its scalability does not fit container scale; one may want to run 8K and more containers on one host. So I imagine your solution will also support more scalable I/O solutions?

A: Exactly, you already answered the question, but we'd divide the answer into two parts. First, today's solution for deploying a lot of VMs also uses and leverages VFs, so the building block stays the same, but we now get higher density: we use less memory and less storage, so we can actually have a faster solution for telco. When you want to remove a container and then bring up a new gateway, for example because there is a burst of packets, you want to bring it up fast; you don't want to wait the few minutes it takes a VM to boot and load all its libraries. And the next phase is going to be Scalable IOV, where we'll have a much higher number of ADIs, which will allow us to use this solution and leverage that.

Q: And the second question: how do you manage the different privileges that may be required by different containers? For example, the DPDK application will probably want to control the MTU and be in promiscuous mode, while other containers will just want to run some native traffic and be less privileged.

A: Docker has different privileges and capabilities that you can specify when you start your container, so those are just input parameters to the docker-rundpdk command as well, because, as we said, after all the setup it basically generates a new docker run command. So you are able to control it per device; it's basically already handled for us by Docker.

Q: Thanks for the nice presentation; I have two questions too. The first is related to the topologies we can create using VMs: we can connect multiple VMs using a bridge, transfer data between them, and build an actual network topology. How do you plan to achieve that with containers running on the same host?

A: Well, if you as a sysadmin are starting the containers, you know where they are going to run, right? If you deploy them on different hosts, they communicate over SR-IOV as we discussed. If multiple containers are running on the same host and you don't want to go over the PCI and the VFs, you have different alternatives: you can use the bridge network, for example; remember we mentioned that libnetwork supports it, and it enables two containers running on the same host to talk with each other. So there are alternative solutions depending on what you want. You can also expose both types of interface to the same container; you don't have to use Multus the way you do in Kubernetes to expose different network types. In Docker it's built in already: you can connect the same container to two networks, one for the outbound traffic and one for communication within the same host, and that way you can do chaining within the same host using the other interface.

Q: And the numbers that you showed, were they with just one container, or did you spawn multiple containers?

A: It was just one container, and we were actually able to saturate the bandwidth.

Q: Awesome presentation, guys, learned a lot. One quick question: one of the challenges with SR-IOV, as I recollect, is workload migration. If a container has to go from one server to another, with the VF mapping done from the SR-IOV VF to the container, would that slot be available on the migration destination or not? Did you run into that kind of problem?

A: So, VF migration: migration in general comes in many different types. You have the same problem when running a container on a VF, but because of the plugin you can actually track all the different configuration inside your database: the filters, the MAC address, whatever was assigned to that container. And once you deploy that container in another place, when you want to migrate it, you can move all the rules configured on that VF, either in software or in hardware, and then rerun the container.
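Re-applying a tracked VF attribute such as the MAC address on the destination host is a single netlink call. A sketch using the widely used vishvananda/netlink Go package (an assumption: the talk names "netlink" as its kernel-API library without identifying the exact package); PF name, VF index, and MAC are illustrative.

```go
package main

import (
	"log"
	"net"

	"github.com/vishvananda/netlink"
)

func main() {
	// Look up the physical function on the destination host.
	pf, err := netlink.LinkByName("eth2")
	if err != nil {
		log.Fatal(err)
	}
	// The MAC the container's VF had on the source host; in a real system
	// this would be loaded from the plugin's database.
	mac, err := net.ParseMAC("02:00:00:00:00:01")
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent to `ip link set eth2 vf 0 mac 02:00:00:00:00:01`.
	if err := netlink.LinkSetVfHardwareAddr(pf, 0, mac); err != nil {
		log.Fatal(err)
	}
}
```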
So you actually have an additional solution there, because you track all of the configuration; this is why we created the framework, so that all the configuration and control comes from the same place. And if you remember, I mentioned in the CNM lifecycle that when the container finishes its task, the sandbox is destroyed but the endpoint is only released: the endpoint's configuration is not deleted, exactly for this use case, in case you want to start the same container again and preserve its configuration. It doesn't mean you have to start that container on the same host, because the endpoint is a virtual object; it just holds the configuration of how your container connects to the network.

Q: So from a migration perspective, does the orchestrator, in this case maybe a Kubernetes controller or Docker Swarm, have to ensure that the destination host the container is migrating to has the required SR-IOV VF slots available, and only then migrate?

A: It has to support SR-IOV, but if SR-IOV was not enabled for some reason, that will be handled by our framework. And to continue Tanya's answer: you could actually imitate that in software, because, as we said, we control everything from our plugin. We haven't handled this case yet, meaning if the destination doesn't have an available VF, we probably won't deploy there. But we could extend the solution so that if that site doesn't have a VF, you get a lower SLA, meaning a software instance of the networking, and once a VF becomes available, you get a better SLA. We can do that; it's not developed yet, but it's a good point.

Great, thank you. If there are any other questions, you can take them offline during the break, so we'd like to thank our speakers.

Thank you very much. [Applause]
Info
Channel: DPDK Project
Views: 940
Rating: 5 out of 5
Id: 5yGCB22CEys
Length: 28min 31sec (1711 seconds)
Published: Thu Nov 21 2019