Need for Speed - Using DPDK and SR-IOV

Captions
All right, good evening everyone, and thanks for making it to the last session of the day; hopefully we will make it interesting and informative. Let's start with the introductions. I am Ganesh Nagarajan, working in the AT&T Integrated Cloud design team. My colleagues, please. Hi, this is Manish, I work for AT&T Cloud in development. My name is Trevor McCaslin and I am an upstream developer at AT&T. Thank you.

We have a power-packed agenda today. We wanted to start with why there is a need for speed to begin with, and then how we are going to achieve it with the help of technologies like SR-IOV and DPDK. Within AT&T we have written a new service called VFD, which stands for virtual function daemon; we are going to talk about its architecture, and then give an end-to-end view of how this whole orchestration can happen with the help of ONAP and OpenStack. After that, my colleague will talk about some of the internal details that need to happen on the compute node. We also want to show a demo where we were able to achieve speeds close to line rate, and we will talk about some of the limitations and the recovery strategy. Then we will go through the implementation details, and finally how we are working with the upstream community to contribute this work.

So, the need for speed. We all have Wi-Fi routers at home, and typically when the speed slows down, at least in my house, the first person to get impatient is my five-year-old kid; he has figured out how to switch to the data plan, thereby giving me bigger bills. But if this is what happens in a house, imagine how critical the speed and latency requirements are for business customers. Traditionally, the service providers built all their infrastructure with big monolithic black-box devices; we call those the physical network functions. If you are a customer premise and you want, say, a router, we had to build a specific box, put the vendor software in it, and ship it all the way to the customer premise. As everybody knows, that is time consuming, very costly, and not scalable. So thanks to the open source community, with the help of the Linux Foundation and, more importantly, OpenStack and ONAP, we were able to move from the Domain 1.0 physical world to the Domain 2.0 virtual world: we virtualized our physical network functions into virtual network functions, aka VNFs. The real need for speed is for those VNFs that are going to run on our cloud, the AT&T Integrated Cloud. We have transformed ourselves with the help of software-defined networking, and we have built the infrastructure with the help of OpenStack.

If you look at the requirements for a VNF, they can be broadly categorized into three areas. On the network side they need high bandwidth, high PPS, quality of service, service chaining, port mirroring, you name it. On the storage side they typically need IOPS and potentially some locally attached storage with SSDs or SAS. On the compute side they need CPU pinning; why CPU pinning? They want to avoid the context switching that would happen when VM processes move from one core to a different core.
They also want huge pages, to reduce the number of page-table lookups and avoid filling up the TLB (translation lookaside buffer). And of course they want NUMA; NUMA is non-uniform memory access, and the whole idea is that they want to access memory that is in proximity to the core, avoiding cross-NUMA-boundary access, which incurs some overhead. They might need affinity, for example to run a couple of virtual machines on the same host, and they also need migration, offline as well as live migration. To satisfy all these requirements we had to extend the default flavors that come out of OpenStack and define a more meaningful flavor series for our customers' needs. We identified three categories, which we call the network-optimized flavor series: first, a network-optimized SR-IOV series, so anybody who needs SR-IOV can create instances from that series; second, a network-optimized DPDK series; and third, the regular plain kernel vRouter, or in some cases OVS. These are the three different types of networking a VNF typically needs, and it can get them using these flavors.
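As a concrete illustration of the compute-side requirements above, here is a hedged sketch of how CPU pinning, huge pages and NUMA topology can be expressed as Nova flavor extra specs. The flavor name and sizing are made up for the example; the hw:* keys themselves are standard OpenStack ones.

```sh
# Hypothetical member of a "network optimized" flavor series; name and sizing are illustrative
openstack flavor create ns.xlarge --vcpus 8 --ram 16384 --disk 40

# Pin vCPUs to dedicated host cores (avoids context switching across cores)
openstack flavor set ns.xlarge --property hw:cpu_policy=dedicated

# Back guest memory with 1 GB huge pages (fewer TLB misses)
openstack flavor set ns.xlarge --property hw:mem_page_size=1GB

# Keep vCPUs and memory within a single NUMA node (avoids cross-NUMA access)
openstack flavor set ns.xlarge --property hw:numa_nodes=1
```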
In this slide I want to show you how a packet traverses the host for each of these three network types. On the left, the packet goes via the kernel vRouter, or it could be OVS, before it is sent out of the network card. With the kernel path you get a lot of overhead: there is an interrupt at the host or hypervisor when the packet needs to be copied from kernel space to user space, and then another interrupt at the VM level to read the packet. With all of that you can only reach a speed of perhaps 1 Gbps on a 10 GbE card, so there is definitely room to optimize. The second type is the DPDK vRouter: we run the entire vRouter, or OVS, in user space and move away from interrupts entirely, using poll mode drivers to pick up the packets. The host interrupts are solved, but there is still an interrupt on the VM side to read the packets. The third type is SR-IOV, which we are going to deep dive into in this session. The idea is that you take a network card, carve out multiple virtual functions from it, and attach a VF device directly to the virtual machine; there is no interrupt and no overhead, and that is how we were able to achieve closer to 9 or 10 Gbps over a 10 GbE card. We are also going to show you a demo of how we achieved it.

In this next slide I want to show you how these bare-metal servers need to be configured. If you look at the picture, all three servers are the same: each has 24 cores (48 if you enable hyper-threading) and three NICs, each NIC with two ports. On the left you see an SR-IOV host profile, where NIC 1 and NIC 3 are used for the workloads, the SR-IOV workloads, and NIC 2 is used for the regular vRouter, for example to attach an operations and management (OAM) interface to the VM for its operational needs, while the other interfaces purely carry the workload traffic. In the middle you see the DPDK host profile, where NIC 1 and NIC 3 are bonded and the middle NIC can be used for PXE, storage and other OAM traffic. On the right is a regular vRouter or OVS based host, where everything runs in the kernel. You might ask why you should be bothered with this. The point is that it is important to define these host profiles so that you can do complete automation; we wanted to avoid any manual intervention. Once the profiles are defined, your automation framework can take them and deploy them in your data center.

All right, so this is an interesting slide: we are achieving SR-IOV with, in fact, DPDK. What does that mean? The idea is that you virtualize the physical NICs into virtual functions, and you can set many filters and parameters on them. We have written a completely new daemon, an open-sourced service called the virtual function daemon (VFD), which you see in the middle of this picture. It is a DPDK-based application that configures all these VFs to provide an SR-IOV network. A research team within AT&T developed it and they have also open-sourced the project; I encourage you to go and check it out on GitHub. There is a lot you can configure on a virtual function with VFD: you can set quality of service, you can enable anti-spoofing checks for the VLANs and MACs. Since this is SR-IOV, it is all layer 2, all VLANs, so you should be able to set VLAN filters; you should be able to support QinQ tags, strip the outer tag or insert an inner tag; and you should be able to control all types of broadcast, unknown unicast and multicast (BUM) traffic. All of that is handled by VFD. And just like any other OpenStack service, there should be a command-line interface talking to this VFD service. In today's Linux you have the ip link command to configure your physical NICs, but what we are talking about here is a DPDK-based application, so we created a new command-line interface called iplex that talks to VFD to configure all these parameters.

This is the high-level architecture of VFD. Typically all these parameters are sent from the Heat template, and those parameters are carried all the way to nova-compute. Nova-compute puts the port configuration information into a config JSON file and then invokes iplex; the iplex add command notifies VFD that there is a port to configure. VFD picks up the config JSON, uses the DPDK APIs to configure the virtual functions, and sends the feedback back through iplex to Nova. If you look at it, we have leveraged an architecture very similar to what already exists in OpenStack: when you create a virtual machine through Nova, Nova spits out a libvirt XML and hands it to your KVM hypervisor to go and create the virtual machine. That is exactly what is happening here; we spit out the port information to VFD, and VFD goes and configures it.
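To make that flow concrete, here is a rough sketch of the kind of per-port JSON that nova-compute would write out before calling iplex. The exact field names and layout belong to the VFD project on GitHub and are not reproduced from it here, so treat every key below as illustrative rather than authoritative.

```json
{
  "name": "b8f2c9d0-port0",
  "pciid": "0000:05:02.1",
  "vlans": [100, 200],
  "macs": ["fa:16:3e:aa:bb:cc"],
  "strip_stag": true,
  "insert_stag": false,
  "allow_bcast": true,
  "allow_mcast": true,
  "allow_un_ucast": false,
  "rate": 0
}
```

Here "pciid" would be the VF handed to the VM, "vlans" and "macs" the filter and anti-spoofing lists, the stag fields the QinQ handling, the allow_* booleans the BUM controls, and "rate" a QoS limit. Nova would write one such file per port and run something like iplex add with the port name; VFD then reads the file and programs the VF through the DPDK APIs.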
All right, in this slide I want to show you how everything fits together. As I said, we are leveraging ONAP, which stands for Open Network Automation Platform. It is a topic of its own, and a number of my AT&T colleagues have given good presentations about it at this summit; I encourage you to go and watch those, but I will give you a quick introduction. ONAP helps you with all of this VNF orchestration. It helps you design the VNF: for example, if you want to run a VNF you need to know what flavor to use, what image to use, how many interfaces the virtual machine needs, and what VLAN parameter settings need to be applied, and all of that is done at the design level. The next step is orchestration: ONAP has an orchestrator, called MSO, which sends this information via a Heat template to the OpenStack region. The third piece is configuration; it is not all about creating virtual machines, you also need to go and configure the VMs, and there are SDN controllers within ONAP that help you do that. Of course you also need to inventory all these VNFs; you need a global view of how many services are running in your different data centers, and there is a component called A&AI, which stands for Active and Available Inventory, that does all the inventory work. Finally there is DCAE, which stands for data collection and analytics, so you can look out for events, recover in case of failures, and scale the VNF if that is needed. All of this ONAP machinery runs in a global, centralized region and manages multiple OpenStack regions. In the middle of the picture is what we call a location control plane, which runs all your OpenStack services, Nova, Neutron and so on, and the third tier is your compute servers, which is where the virtual machine actually gets created. I hope that was informative; I will now hand over the next section to Manish.

Thanks, Ganesh. I will walk you through the details of the implementation. This is a first implementation, and it is our attempt to show the advantages the NIC vendors are providing by exposing these features. Some of the NIC vendors provide a mailbox on the NIC card, and there is a switch on the NIC that you can program with these features: you can send a message to the PF via the mailbox, set those parameters on the NIC, and offload your MAC filtering; you can control your BUM traffic, saying whether you want to allow multicast or broadcast or not. All of that can be offloaded to the NIC, so you have more CPU cycles for processing packets at the application level.

Now for the setup of the compute. First, you have to have the VFD daemon that Ganesh talked about on the compute. To set it up you need to enable the IOMMU, you need VT-d enabled in the BIOS, and you have to configure huge pages; in the VNF world we use 1 GB huge pages so that we can reduce the misses. You also need the DPDK driver, the igb_uio module, loaded in the kernel so that the application can talk to the NIC.
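A minimal sketch of that host preparation, assuming an Intel NIC whose PF sits at PCI address 0000:05:00.0 (the address, hugepage count and VF count are placeholders):

```sh
# Kernel command line (grub), then reboot: enable the IOMMU (VT-d) and reserve 1 GB huge pages
#   intel_iommu=on iommu=pt default_hugepagesz=1G hugepagesz=1G hugepages=32

# Mount hugetlbfs and load the DPDK userspace I/O driver
mount -t hugetlbfs nodev /dev/hugepages
modprobe uio
insmod ./igb_uio.ko                      # built from the DPDK tree

# Hand the PF over to igb_uio so a DPDK application such as VFD can drive it
dpdk-devbind.py --bind=igb_uio 0000:05:00.0

# Create virtual functions on the PF. The exact knob depends on which driver
# owns the PF: kernel drivers expose sriov_numvfs, while igb_uio exposes max_vfs.
echo 16 > /sys/bus/pci/devices/0000:05:00.0/max_vfs
```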
You can also configure how many VFs you need on the host; your team can decide how many queues per VF you need and tune that parameter. In the end, all you need on the OpenStack side is the PCI whitelist, where you specify which PCI devices map to which physical network. The physical network is the same parameter you use today with provider networking; you specify that field when creating the Neutron network. Ganesh walked you through the hardware part, NUMA and all of that, so I will take you through how OpenStack sees it, how it is set up, and which part the administrator does versus which part the tenant does. As I said, the PCI devices are mapped to physical networks; the admin can then define the provider networks using those physnets, and once the networks are there, in the tenant space the tenant can start creating the SR-IOV ports. It is the same concept as how you create a direct port in OpenStack today, but with some customization: the key change we made is that we put more fields into the binding profile of the Neutron port, and I will show you that. Once you have the ports, you call the Nova API to instantiate your VNF. When your VNF is instantiated with those Neutron ports, Nova will call plug (everybody who knows Nova knows plug), so it calls plug for the interface, and that call invokes the iplex interface of VFD: it generates the configuration file and calls iplex to add this configuration to the NIC. That is the flow.

Going to the next slide: we have modified not only Neutron and Nova, we have also exposed all these parameters to Heat, so that any kind of orchestrator can pick up the Heat definition and orchestrate the workloads. We have the VLAN filters, which are applied to the incoming traffic on the NIC; the filtering is done on the NIC, and the NIC decides which VF to hand the packet to based on the VLAN. You can do anti-spoofing using the MAC filters, right on the NIC. You can also do QinQ, where you have an advantage as a service provider: you can strip the service tag and pass the customer VLANs, the whole trunk of traffic, back to the VM, and the VM will process it. Then you have the BUM traffic booleans that you can set based on what your VNF is looking for, whether it supports multicast or not, and what you want sent back to the VNF; your VNF does not have to process all of this, it is all done on the card, and that is the advantage right there. There are some more booleans as well. This is a first attempt; are there any NIC vendors here? The NIC vendors know that new features are coming in the new NICs, so this list is going to grow. And as we said, this integration was done against the first version of VFD; VFD is in active development, we have already enhanced it to do QoS, and metering is under trial, which is another requirement for most of the telcos, so all of that will be offloaded to the NIC. That is pretty impressive for all these services: you do not have to worry about mirroring loads and so on on the system.
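Below is a hedged sketch of how such a port might be requested from a Heat template. The OS::Neutron::Port resource and the binding:vnic_type property are standard; the keys inside the binding profile (vlan_filter, insert_stag, spoof_check and the BUM booleans) are illustrative stand-ins for AT&T's custom extensions and will not match the exact names in their modified Neutron.

```yaml
# Illustrative only: assumes a provider network "net-sriov" on physnet1 and a
# PCI whitelist already in place on the compute nodes.
resources:
  vnf_port_1:
    type: OS::Neutron::Port
    properties:
      network: net-sriov
      binding:vnic_type: direct        # SR-IOV style port, as with today's direct ports
      value_specs:
        binding:profile:
          vlan_filter: [100, 200]      # VLANs accepted on the VF (hypothetical key)
          insert_stag: false           # QinQ outer-tag handling (hypothetical key)
          spoof_check: true            # MAC anti-spoofing on the NIC (hypothetical key)
          allow_bcast: true            # BUM traffic booleans (hypothetical keys)
          allow_mcast: true
          allow_un_ucast: false
```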
Next slide: this is my demo setup. I have a recorded demo, and I will show you how we spin up the VNF and generate traffic from one location; we can assume the other location is a cloud provider or something. The traffic is duplex, bidirectional, both ways. I will now switch to the recording. Sorry about that; okay, let it spin up, okay, let's go. So I have this VNF that I showed you; it has two SR-IOV ports, and the rest of it is taken care of by another Heat template where we have the Cinder boot volume. This is where the customization is: if you look at the binding profile field on your screen (I will pause it here), the binding profile for this Neutron port that I am spinning up for this VNF has the VLAN filter, I can specify whether I want to insert a tag on the transmit traffic as it exits, and I can specify whether I want to do spoof check. This snapshot is from after the VM is created, so I know which PCI device is allocated to my port and which physnet it is mapped to; it gives the operator a view of which physical port this port is on for the VNF. Going a little further (yes, press play), this is the next port, and as you see there are two PCI devices, one from physnet1 and one from physnet2, so inside the VM you can bond them for resiliency; that is another requirement for the telcos.

This is the VM definition we are using; we modified the Nova-generated definition of the VM. This is the libvirt config for the VM: instead of interfaces, when you use DPDK drivers you have to provide the host device, so your VM is actually accessing the PCI address that is highlighted on the screen. These are the basic changes we need to make to achieve this implementation. And we have iplex: this is VFD running on the host, and this is iplex, the interface that we call from Nova plug when the VM is instantiated, and we call iplex delete when the instance is deleted. The integration is pretty easy and seamless; you can even do updates, so when you want to change the QoS you can call iplex update and it will update the configuration on the card at runtime.

Going a little further, I will show you the traffic. As an operator you will be interested in knowing how the traffic is doing, and this view gives the operator those details. In this picture I have a couple of VNFs spun up with traffic passing through; you can see transmit and receive, and if there are spoofed packets you can see those too. You can make a judgment on whether the link is up or down, so this is very useful for the operator. We did some performance tests during our demo recording and we were able to get close to line speed; this is an Ixia sending bidirectional traffic on both streams, one in and one out. So the results are that we are able to do line rate and high PPS (Mpps) with this traffic profile. The profile we used for this traffic is IMIX, with frames ranging from 64 bytes to 5,000 bytes, and we are getting close to 9.7 Gbps, which is effectively about 9.9 on the NIC.
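For reference, the host-device passthrough mentioned above looks roughly like this in the libvirt domain XML that Nova generates: a standard PCI hostdev element pointing at the VF's address instead of a normal interface element (the address values here are placeholders).

```xml
<!-- VF 0000:05:02.1 handed straight to the guest instead of a normal <interface> -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x05' slot='0x02' function='0x1'/>
  </source>
</hostdev>
```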
So let's talk about the limitations of this solution. Number one: with regular SR-IOV you do not have any security features. With this SR-IOV you can at least control the BUM traffic, you can control what reaches your VNF, and you have the VLAN filters and the MAC filters, so this is better than regular SR-IOV; that is point one. Talking about live migration, that is something that will evolve in the coming days. Right now there are limitations in libvirt, and there are limitations in that we cannot migrate something that has a host device attached. We are working with the vendors; if we can, we will take a snapshot of the memory and a snapshot of the registers as this evolves, and pass them to the destination host during the migration. That will take some time, but in the meantime we have ONAP: as Ganesh said, it is a closed loop, it has analytics, it is monitoring your VNFs, so you can build resiliency into your application design, and you can also re-spin your VM if need be. That is our recovery strategy for now. We are gaining a lot of performance with this, and there is always a trade-off in your first shot, so we are not there yet, but we will be. With that I will pass it on to Trevor.

Okay, so I am going to explain the implementation details of the demo that you just saw. At the beginning of it, you have to create a port with these API flags for the NIC, and those are put into the binding profile field. Then everything goes pretty much as normal, and the only thing that is different with VFD is at step 7, where you have to generate the virtual machine configuration and add the hostdev devices; with the DPDK driver you can include the PCI virtual functions as well. Then iplex, together with the virtual machine configuration, binds the port and passes it to VFD to continue the rest of the operations.

When I was trying to upstream this to the community, this was my first proposition, and it does not look very pretty, because you just stuff a whole bunch of flags into the binding profile field, so I wanted to offer some alternatives. I also proposed using the vif-details field, but that is not much of an improvement, so I tried to think of other things and came up with another proposition that mostly amounts to a dedicated, structured API, which would also make it easier to extend for features to come. This is roughly what the database and API would look like: you create the entity with an ID and associate it with a port using the synthetic fields that are already in place in Neutron. But when I took this to the Neutron drivers meeting, they said that many of these sound like they can be derived from existing APIs, and after some research that was mostly true, but not all of them actually support SR-IOV. I was able to map some of them, but the ones that were not implemented were the broadcast, unicast and multicast allow flags, which would be enforced through security groups and the firewall. So when I came back to propose another API, I wanted to make it more abstract, so that more community members and operators can use the feature to its fullest ability. Whenever the port is created, the underlying implementation could be Linux bridge, or OVS, or SR-IOV, and it would not matter, because each one delegates to its controller and enforces it with iptables, the OVS firewall, or, in our case, the offloading on the NIC for the firewall. Here is roughly what that would look like if you added it to security groups: it is simply adding those three fields at the bottom, as true or false, and you can use that to pass the setting down to the NIC.
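As a purely illustrative sketch of that proposal (these fields do not exist in Neutron today, and the names are made up to show the shape of the idea), a security group might grow three booleans like this:

```json
{
  "security_group": {
    "name": "vnf-sg",
    "description": "existing fields elided",
    "allow_broadcast": true,
    "allow_multicast": true,
    "allow_unknown_unicast": false
  }
}
```

Each backend would then enforce them in its own way: iptables for Linux bridge, the OVS firewall for OVS, and the NIC offload via VFD for SR-IOV.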
VFD itself is already upstream: it is part of the DPDK tree, and this is part of the DPDK documentation and the use cases it describes. You can see here that you have two different services running: you have latency-sensitive services, and you can also use the DPDK application to offload your computation-intensive services. Not all the DPDK flags are going to be required for this implementation; we are only pushing the ones that come from our customer requirements for our VNFs. This first use case is fast host-based packet processing; you can find this documentation, which is relatively new and came out this year, on the DPDK site. Here is another use case, about inter-VM communication: it allows very fast communication between VMs on the same host, because the NIC has a switch built into it that makes that possible. This example shows what the MAC address lookup table would look like; if you wanted to update the MACs you would have to go through this flow, and the steps are described there in the documentation.

Coming to future work, we already have a few patches proposed for enhancing Nova and Neutron capabilities. Right now there is actually no QinQ network type in Neutron, so I have a patch for that, and since it is VLAN based I also refactored the VLAN type so the two work nicely together. We are going to have to add some kind of OVO (versioned object) to pass these parameters and make a proper negotiation for the port binding between Nova and Neutron. Also, whenever these NICs are released, Nova scheduling will have to know what capabilities each NIC has; there is a patch that has already been merged for enabling SR-IOV NIC feature discovery. After that, you have to integrate VFD with Neutron. Now that Nova and Neutron have the prerequisites met, we can start implementing the rest of VFD. First of all you have to test it somehow, so we will have to support an SR-IOV third-party CI with VFD installed. Then we need to implement the iplex interface in Neutron (it is like ip link, but with a bit more extended capability), add options to the client, which is pretty standard, and add and modify the database and API models for VFD support, which I have been showing through this presentation, and there might be even more to come. When it comes down to the agent implementation, you could come up with your own agent, you could make an agent extension for SR-IOV, or you could modify the existing SR-IOV NIC agent to just use the new iplex tool rather than ip link. Lastly, depending on where the API lands (because, as I discussed, you would want to make it abstract across Neutron), there might be more modifications needed; for example, if one of the changes were in security groups, you would have to modify the OVS agent and the firewall as well, because SR-IOV is currently assigned the no-op firewall driver.
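To illustrate the difference: today's SR-IOV NIC agent drives VFs with standard ip link commands like the one below, whereas an iplex-based agent would hand a richer config to VFD instead. The ip link syntax is real; the iplex invocation is sketched from the talk and the VFD project, so treat its exact form as an assumption.

```sh
# What the existing SR-IOV NIC agent does today: per-VF settings via ip link
ip link set ens1f0 vf 3 vlan 100 spoofchk on

# What an iplex-based agent would do instead: point VFD at a per-port JSON config
# (VLAN filters, QinQ, BUM booleans, QoS) and let it program the NIC through DPDK
iplex add b8f2c9d0-port0
```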
And that is all, so thank you for listening to the presentation. If you have any questions, please come up to the microphone; I can answer the upstream questions, and Manish and Ganesh can answer everything else.

Q: Hi, thank you for the presentation. What I understood is that what we saw in the demo is a bit of a hack, but future-wise, integrating VFD with Neutron properly, with the agent, is the holistic way to go, right? Making sure we are not overloading Nova with the networking requirements?

A: Correct, yes. For every implementation of a pretty new technology you need to prove yourself first by showing the community some working code, and now we are working with the community. Trevor has already submitted the proposed changes, and along with that Intel also submitted a new change where they expose the NIC features to the Nova scheduler, so in future Nova will know which NIC is capable of doing what, and your VM will land on a particular node based on that. You may have X capability from one vendor and Y from another, and the Nova scheduler will handle where to send your VM based on what you need. Thank you.

Comment: Trevor, can you go back to the side-by-side slide early in Ganesh's deck, the simpler one. There is a bit of messaging here, and part of the brilliance of this, that I hope folks can see: as you go from conference to conference and session to session, the messaging about coexistence gets lost as you hear one person's experiences and another vendor's preferences. One of the brilliant things these guys have demonstrated here is coexistence, using the exact same compute nodes and the exact same software, of three different forms of networking, all using Neutron, with of course the additions they are working on upstreaming. All of these options work for different workloads in production, simultaneously and in parallel; you do not have to pick just one type of network virtualization. Same stuff, same OpenStack compute node. I applaud you guys for that; the messaging is very clear.

A: Thank you, Jeff.

Q: Also a question about these slides, the part on the right: from the VNF perspective, what drivers do you need to take advantage of this implementation? Is SR-IOV sufficient, or do you need DPDK, or is DPDK hidden in the infrastructure from the VNF's perspective?

A: VFD is built with its own static DPDK library, so the VNF does not need to be aware of it. On the other side, the VNF could also be a DPDK-enabled application so that it reads directly from the VF. So it does not matter whether your VNF supports something like the ixgbevf driver or we hook it to the DPDK driver; we have VNFs that work both ways, and we have certified those. I think that addresses your question.

Q: My question is, once you configure the number of VFs and allocate bandwidth to each VF, are you able to change it dynamically, or is that future work?

A: We can saturate the line card with just a very few VFs, but the advantage we get from spinning up more VFs is that you can spread the load, since you may not be at peak all the time.
You can spread your workloads across the NICs on the host and take advantage of more VFs. You can optimize based on the queues on the NIC, but right now it is pretty much a static allocation, and I think that space could evolve with more demand.

Q: We are beginning to see a need for changing it; not instantaneously, not on a millisecond boundary, but say every few days you want to repurpose a running compute node for different VNFs, move things around, spin up new VNFs. We are beginning to see the need for that. Would the VF agent that you are proposing do that work?

A: It should be able to do it, as long as you put in the right parameters for what to configure. Okay, thanks.

Comment: To add to that last question: one thing that iplex gives you, which in years past was a real pain, is this: you put, say, a vendor's virtual router or a vendor's firewall or load balancer on a link, and with SR-IOV you saturate that thing fast; that table view of all the VF state and all of those values is golden. It is wonderful from an operations standpoint, because before you had to scrape through all the individual VF outputs and compile them somewhere. Having that visibility when you have a bunch of DPDK-native VNFs, routers and firewalls, is huge.

A: And as I mentioned, VFD is very actively developed, and we are working on multi-NIC support, so you can go to the GitHub repo and have a look. In fact there was a question in another session about DPDK: how would we go and debug all this traffic, can I use ping and tcpdump? Of course that is not going to work, because those run in the kernel. The idea is you can either run it inside the VM, or you can always do port mirroring and take the traffic to probes to analyze it.

Comment: I just want to add something here: if we have any upstream developers in the room, feel free to join in on the effort and get in contact with me.

Q: For the inter-VM traffic, does it go out to the switch, or do you need to do something special?

A: You need VEB for that, virtual Ethernet bridging; the upstream switch that the compute is connected to should be VEB-capable. Thank you.

Q: A question about projects that are coming up, like Project Calico: are you working with that project as well to make sure everything works together, Kuryr and all that stuff?

A: Calico is for the container network interface. Right now all our OpenStack services run as virtual machines; we are planning to containerize our OpenStack services into Kubernetes pods, and we are also thinking about leveraging Calico as the container network interface for that communication. But containers with SR-IOV, I think that is still a long way off; we can think about it in the future. There is a plug-in effort going on for an SR-IOV plug-in for CNI, and if you go to dpdk.org they have already started on DPDK for containerized workloads, so that is just starting and we will see how it goes.
Q: I had two questions. Could you comment a little on your approach to resilience, and in particular, have you contemplated architectures where you have redundant NICs that both maintain the same state and keep processing without failure if something drops or breaks?

A: That is an interesting question. It is all about migration capabilities with SR-IOV, whether we can use something like macvtap and those sorts of things, and we are definitely considering that option. But right now our strategy is to at least seamlessly recreate, or do an offline migration, when we detect a failure. There are a lot of issues to solve before we even talk about live migration: in my slides I went over CPU pinning and huge pages, and if you enable huge pages there are a lot of dirty pages to handle; we are talking about line-card speeds here, and moving that VM while it is active is going to be a pain. So either the VNF itself has to be cloud native (for example, some of our VNFs run as primary and backup), in which case we do not really have to address the live migration issue right now, but we are actively working on it, and Manish pointed at some of the approaches, like snapshotting the registers; we are researching that area as well.

A (Manish): To add to that, it is not only the host's responsibility to provide resiliency. We are working with a vendor where the VNF has an A side and a B side, and the two sides talk to each other on another interface, so we have the state on both; if you add a route, that route is known on the other side too, so if this side goes down, the service stays up.

Q: So you are pushing it up to the application level?

A: Yes, but as I said, if one side goes down I will re-spin it, because my service is not impacted by that.

Q: I think you mentioned that you are pushing some of the VF functionality all the way down to the NIC, is that right? Are you actually programming the NIC to do certain kinds of functions?

A: Yes, the modern NICs expose certain APIs, so we program them to apply the filters and all those parameters I have shown you.

Q: Are you using something like P4, or how do you do that?

A: We use the DPDK APIs to directly manipulate the NICs, but that does not interfere with the traffic path; the traffic flows between the virtual function and the VNF directly, and the DPDK library does not get in the way of that.

Q: Thanks very much.

Q: Last question. I have a question regarding the mirror interface that you mentioned briefly; is that mirror interface also implemented in the same way, utilizing a VF?

A: Yes, it has to be a VF, and it has to be on the same physical network card, because that is how we do the port mirroring.

All right, I think we are at the top of the hour. Thank you all for attending the session. Thank you. [Applause]
Info
Channel: Open Infrastructure Foundation
Views: 21,871
Rating: 4.7647057 out of 5
Keywords: OpenInfra, Open Infrastrucure, Open Source, OpenStack
Id: AULt3BuwMnY
Length: 45min 51sec (2751 seconds)
Published: Wed May 10 2017