Building NFV Solutions with OpenStack and Cisco ACI

Captions
Good afternoon, everybody. Oh, sorry, I've interrupted, obviously, so please, carry on. Thank you for coming. This is our fourth and final session of the day: we've covered multi-cloud networking, we've covered our Cisco VIM product, and now we're going to get into building NFV applications with OpenStack and Cisco ACI. We've got Domenico Dastoli and Iftikhar Rathore down here, two of our engineers, and away we go. We are going to build in time for Q&A at the end, and for those of you who are always very busy trying to take pictures of the screen: video of the session will be up on the OpenStack Foundation YouTube channel later tonight, so save the space on your camera for pretty pictures of Berlin. Away we go.

Thank you very much. Thank you, Gary, and welcome, everybody, to this session. As Gary said, it's about OpenStack and ACI: how you can build your NFV solution based on OpenStack and ACI. My name is Domenico Dastoli, I'm a technical marketing engineer from the business unit which in Cisco is responsible for ACI. I'm going to present the first part of this session today, and then my colleague Ifti will take care of the rest of the presentation. Thanks a lot for being here; by the way, I heard that someone is playing against us, there are beers, maybe I shouldn't tell you, but I really appreciate the fact that you are here instead of drinking beers in the other session. So let's get started.

The agenda today is going to be split into different parts. Since we're talking about Cisco ACI and OpenStack, I guess if you're here you know what OpenStack is, but you may not know what ACI is, so we're going to have a brief introduction to Cisco ACI and what it is. Then we're going to understand why you may want to run the Cisco ACI solution together with OpenStack, and what the benefits are. And then we're going to move towards the NFV kind of challenges that our customers typically share with us, and how, with Cisco ACI, you can better solve those challenges. So: Cisco ACI and OpenStack, better together, if you will.

What is Cisco ACI? Cisco ACI is Cisco's software-defined networking solution, and it is based on three main components. First, there is the underlay, or the switching layer if you like, which is based on the Nexus 9000 switching family, arranged in a leaf-and-spine topology (a bipartite graph, as we call it). The second part of ACI is the APIC controller, or rather the APIC controller cluster, which is the brain of the solution; as we'll see later, the Cisco APIC controller is the single point of management, configuration, troubleshooting, and visibility for the entire ACI infrastructure. The third component is the software that runs on the Nexus 9000 switches and the APIC controller, which is indeed called ACI: Application Centric Infrastructure.

Now, the reason we call it Application Centric Infrastructure is that ACI introduces a sort of network policy framework which is shared among different kinds of compute architectures. You can build your application framework so that it allows connectivity from bare-metal servers as well as virtual machines, different kinds of virtual machine managers, and eventually also containers. All of these architectures, or computes, can be part of the same network and security policy framework offered by the ACI solution.
The beauty of this is that you can move from one kind of architecture to another, or have multiple compute architectures communicating with each other, being routed between, and eventually also having some service insertion to allow communication among all of them.

How does it work underneath? The ACI solution works with a protocol called OpFlex. OpFlex is an open-source declarative model which handles the communication between the APIC controller, our ACI controller, and the rest of the nodes of the fabric. What this means is that the APIC controller instructs all the nodes about the policies that should be defined and configured on them, in terms of what is basically the intent of the user. If I can give an example: a declarative model is a way of giving an instruction without spelling out each and every step needed to reach the final configuration I would like to have. If I'm thirsty, for example, in a declarative model I just tell my colleague Ifti, "Ifti, I am thirsty," and he figures out the steps to pour some water into a glass and bring the glass to me. In an imperative model, which is the old or legacy fashion of configuring things, I would instead have to tell Ifti exactly how to pour the water into my glass, walk towards me, go through the stairs, and eventually hand me the glass. So with the OpFlex protocol we really want to abstract things so that the controller doesn't have to know the exact configuration that must be applied on each and every device, but only the intent of the user. Obviously, the nodes or devices to which we are pushing the configuration must be smart devices, if you like; that's why those devices run some sort of OpFlex agent, an agent capable of understanding this declarative model. The interesting thing is that this OpFlex agent does not only run in the ACI fabric, but can also run in the hypervisor, so it can run in the compute nodes that you have. It can run in the compute nodes from an OpenStack perspective, and we'll see later exactly how that works, but this can also be extended to agents running on VMware, if you have some VMware compute in your data center, or on Microsoft SCVMM, as well as on certain types of routers or switches in your data center.
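The OpFlex exchange between the APIC and the switches is not something you drive by hand, but the same declarative idea is visible on the APIC's northbound REST API, which is what the OpenStack integration uses later in this talk. A minimal sketch, assuming placeholder hostname and credentials: you post the object you want, and the APIC and the OpFlex agents work out how to render it on each node.

```
# Authenticate to the APIC (returns a session token as a cookie).
curl -sk -c cookie.txt -X POST https://<apic>/api/aaaLogin.json \
  -d '{"aaaUser": {"attributes": {"name": "admin", "pwd": "<password>"}}}'

# Declare the desired end state -- here, a tenant -- rather than per-device steps.
curl -sk -b cookie.txt -X POST https://<apic>/api/node/mo/uni.json \
  -d '{"fvTenant": {"attributes": {"name": "demo-tenant"}}}'
```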
This is a final slide before we jump into the OpenStack way of integrating with ACI, and it tries to summarize very briefly how ACI works across several kinds of domains. I think it conveys the "ACI Anywhere" vision that we have, because ACI is an SDN solution that wants to be a solution not only for your on-premises data center. When we talk about on-premises, we talk about the possibility of extending your data center across multiple pods or multiple sites around the world, with VXLAN end-to-end policy enforcement that is completely shared across locations. We also work on remote locations, or branch offices, where you can again extend your VXLAN as well as your network policy enforcement: small sites where you can deploy just a pair of leaf switches, the Nexus 9000 switches, or even have ACI extended through what we call the virtual pod (vPod), virtualized leaf and spine components that can run with a very minimal footprint in your extended branch offices.

At the same time, we also work on the public cloud on the other side. Who's not talking about public clouds? Some of you may have attended the previous session from our Cisco colleagues about the multi-cloud solution. ACI is capable of extending towards different clouds: we're working now with AWS, and Google Cloud Platform and Microsoft Azure are in the pipeline. The idea is that you can extend your data center network policy, through VPN or things like Direct Connect, to the cloud or to multiple clouds, and you can decide, based on cost or on different kinds of business analysis, how you extend, where, and what exactly you extend from your data center. Again, that's a thousand-foot view of ACI, a very brief introduction just so we're on the same page when we talk about what ACI is.

Now, why are we here? We are here because, talking with several customers every week, we understand that OpenStack has a number of challenges, especially from a networking standpoint. In particular, many customers find that distributed Layer 3 services are typically a challenge with OpenStack distributions. That's not always true, obviously, because there are distributed virtual routing functions, but in general, if you move to more advanced features like NAT or floating IPs, or other service insertion, then having distributed network services in OpenStack is not trivial. At the same time, performance: you're here probably because you're interested in NFV, and performance in the NFV world is very, very key, so performance is also sometimes an OpenStack challenge, together with visibility and the complexity of troubleshooting. Why? Because you typically focus on the overlay, the OpenStack architecture, how you spawn a virtual machine, networks, and so on, but you have very minimal visibility, or at least you don't have a merged view of your overlay together with your underlay, meaning the switching layer.

With ACI we are trying to solve these problems, and we have, possibly or hopefully, a response for each and every one of them. First of all, we completely replace the data path from a Neutron perspective, and with this we distribute the routing function, starting from NAT but also floating IPs, as well as DHCP and the metadata optimization, all of which are completely distributed into each and every compute node that you have; we'll see this in more detail in a few slides. The second thing is that we also support hardware acceleration: with the ACI plugin you have the possibility to run SR-IOV or OVS-DPDK, and in the future also VPP; you may have heard about VPP in our previous presentation as well. At the same time, we sometimes talk with customers who are interested in running either VLAN or VXLAN, but VXLAN can be a challenge depending on what kind of NIC interfaces you have in your servers. ACI gives you VXLAN in the backend, so between leaf and spine we always run VXLAN, and you can decide whether you run VLAN or VXLAN between the compute node and the top-of-rack switch. It doesn't really matter from a scalability standpoint, because
ACI is capable of per-port VLAN significance, so you can indeed run the same VLAN ID on many leaf ports, but that VLAN may signify something different from an ACI standpoint.

Again, on the integrated overlay and underlay: we'll see it later, but ACI completely automates the configuration that is pushed from a Neutron perspective, and therefore you're going to have a one-to-one mapping between the Neutron components and the ACI components. That means you have much better visibility of both your overlay and underlay: you'll have information like which hypervisor your VM runs on, what kind of encapsulation it is using, and which leaf, which top-of-rack switch, that specific compute node is connected to. So you get a really end-to-end understanding of how a packet flows within the ACI architecture from one endpoint to another. And finally, troubleshooting is significantly improved by the health score and telemetry system that is part of the ACI architecture: a health score which, as a percentage, tells you how healthy your system is, and if there are any problems it will warn you with alerts and faults.

The ACI solution works with multiple OpenStack distributions, specifically with our Cisco VIM solution, but also with Red Hat OSP Director and Canonical Juju charms. What I mean by that is that the installation part of Cisco VIM, or OSP Director, or Juju charms, takes care of installing the ACI plugin for you; we'll see in a slide what the ACI plugin components are, but basically there is minimal effort from your standpoint. Whether you are running the ACI plugin or not is completely transparent from an effort perspective, I would say: you'll see mostly the benefits, and rather no pain, in terms of installation of the overall architecture.

So what are the main components of the ACI plugin? There are mainly three. There is obviously the ML2 plugin provided by Cisco for ACI; you may know better than me what an ML2 plugin is: it's a framework provided by the OpenStack distribution to configure your underlay switching layer, if you will, and Cisco provides one for ACI. The second component, which is key to the environment, is the ACI Integration Module, AIM: this is the one making the RESTful API calls to create ACI objects in your ACI architecture. And finally there is the OpFlex agent, the one I presented before, which is deployed in each and every compute node in your OpenStack architecture.

This is the overall architecture picture, where you can see how the entire flow works. And talking about the flow, you can see here how you operate your network from an OpenStack administrator perspective, and you will notice it doesn't change much from your normal use of OpenStack. The OpenStack tenant still interacts with Neutron, with Nova, with all the OpenStack projects on the OpenStack controller, and when you create the network architecture, the Neutron router, the Neutron networks, subnets, and so on, these in turn drive the ACI Integration Module to make RESTful API calls to ACI, to the APIC controller specifically, and create the ACI objects.
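As a concrete sketch of that flow: a plain OpenStack workflow like the one below is all the operator runs, and the comments note the ACI objects that, per the mapping described in this talk, AIM creates behind the scenes. All names are illustrative, and the router-to-contract note is our gloss rather than the slide's wording.

```
openstack project create demo-project             # -> one ACI tenant
openstack network create web-net                  # -> ACI endpoint group + bridge domain
openstack subnet create --network web-net \
    --subnet-range 10.10.10.0/24 web-subnet       # -> ACI subnet (distributed default gateway)
openstack router create web-router                # -> routing via ACI contracts (plugin mapping)
openstack router add subnet web-router web-subnet
openstack server create --image cirros --flavor m1.tiny \
    --network web-net web-vm1                     # VM appears as an ACI endpoint, with its
                                                  # host, encapsulation, and leaf visible in APIC
```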
So we're going to have a one-to-one mapping of the Neutron network into what in ACI we call endpoint groups. When you then attach a virtual machine through Nova to your Neutron network, the APIC controller takes care of configuring the whole fabric infrastructure, so that your pervasive gateway is configured, and all the network policy is configured, in your Nexus 9000 switching environment. This is the one-to-one mapping we have; I'll go through it very quickly, but the idea is that for each and every Neutron object you have, you get the corresponding ACI object, so you have full visibility of things there.

This is a screenshot of the ACI GUI, and I just wanted to highlight that when you create each object, for example here an OpenStack project, we get in turn one ACI tenant; when you create an OpenStack Neutron network, you see objects created automatically in ACI, like an ACI endpoint group and a bridge domain with a corresponding subnet attached, which represents your default gateway, distributed into each and every node. And finally you get visibility of the Nova virtual machines that you create and attach to the Neutron networks: information like the VM name, the network and the encapsulation the VM is attached to, and the compute node where that virtual machine is running. So you really have the full visibility I was telling you about just a few minutes back.

Now I want to switch to the NFV kind of challenges, and shortly I will pass the word to my colleague Ifti. In general, when talking with customers about challenges in NFV architectures, what we see in the NFV world is that you may want to very rapidly create, and eventually create and destroy, VNFs in your environment. The challenge is that you may have VNFs distributed everywhere in your data center, attached to different top-of-rack switches, possibly even across multiple data centers, and the scale of your architecture in terms of VNFs is going to be much wider, so you need a way of doing equal-cost multipath towards all the VNFs you are creating. At the same time, optimal performance of the VNFs is also a challenge. So the idea of ACI is to work in the NFV field and support customers around the capabilities that are lacking there.

Ifti is going to talk shortly about a couple of features; I'm just going to briefly introduce them here. ACI supports Neutron trunk ports. We also support Neutron SVI, the dynamic creation of BGP-enabled SVI networks on the top-of-rack switches towards the VNF components (again, Ifti will talk about it shortly), as well as the Neutron service function chaining, so you can create service function chains in your environment and be fully supported by Cisco ACI. And we support DPDK and SR-IOV, and as I said before, VPP is also a roadmap item we have with Cisco ACI in the short term.
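As a side note, the Neutron trunk port support mentioned above uses the standard trunk extension; here is a minimal sketch, with all names illustrative. A VNF attached to the parent port can then tag traffic itself with each subport's VLAN.

```
# Parent port plus a VLAN-tagged subport on a second network.
openstack port create --network mgmt-net vnf-parent
openstack port create --network service-net vnf-sub1
openstack network trunk create --parent-port vnf-parent \
    --subport port=vnf-sub1,segmentation-type=vlan,segmentation-id=101 vnf-trunk

# Boot the VNF on the trunk's parent port; VLAN 101 is delivered tagged.
openstack server create --image vnf-image --flavor vnf.large \
    --nic port-id=vnf-parent vnf1
```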
With that, I will pass the word to Ifti, who is going to talk more about the Neutron SVI and the SFC kind of architecture.

Hello, hello, can anybody hear me? Thanks, Domenico, for doing the hard part, it was great. I'm going to start with the SVI slides: basically, what do we have for the Neutron SVI feature, and why do we need it? First thing: as my colleague told you, the SVI is used for a couple of different reasons. One is, of course, that you have VNFs adding services dynamically in your data center, in the data path of traffic that is maybe going from your branch office to the Internet. As you add more services, as you add new networks, you should be able to peer with the external world and advertise the routes, using BGP, OSPF, or whatever protocol you like. The second thing is ECMP: you are deploying services that expose endpoints, and those endpoints can actually be the same IP addresses. If you advertise the same IP address from multiple points in the ACI infrastructure, ACI will automatically load-balance the traffic to all of those endpoints depending on their location: it first gets load-balanced to the Nexus 9K switches, and the 9K switch then load-balances to the closest and least-used VNF. So those are the main things: we need to be able to peer with the rest of the world to advertise our routes, and we can use it for 64-way ECMP, which can span across multiple sites, multi-pod. You can support six different pairs of switches, and, as I mentioned, you can distribute your load even further: it's like Layer 3 load balancing coming all the way to your load balancers, and then the load balancers do an extra layer of load balancing, Layer 4 to 7, down to your application, which gives you far more scalability and efficiency.

This slide basically shows that (I cannot move because of this mic, but let me see if I can use that): we are advertising the same VIP and letting ACI decide which instance gets the load. Traffic goes from the external world to the leaf switch, automatically, because the ACI policy allows it, and then the leaf switch load-balances it; if there are multiple VNFs deployed under the same leaf switch, it load-balances across those. We have this demo running at our booth, if you want to come and see it: we have deployed three different networks, where one network represents the external world, the second is our load balancer tier, which has multiple instances but advertises the same VIP through BGP, which allows the external world to come in and be load-balanced, and then this load balancer in turn load-balances to a real server farm. So it's basically Layer 3 load balancing here, and Layer 4-7 from here to here.
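As a sketch of what driving this from Neutron can look like: as described later in this talk, the ACI plugin exposes SVI as a Neutron network extension. The extension attribute names below (apic:svi, apic:bgp_enable, apic:bgp_asn) are assumptions for illustration only; check the plugin documentation for your release.

```
# Create an SVI-type network on which VNFs can BGP-peer with the leaf switches.
# The apic:* attribute names are illustrative assumptions, not confirmed flags.
neutron net-create transit-net --provider:network_type vlan \
    --provider:physical_network physnet1 --provider:segmentation_id 500 \
    --apic:svi True --apic:bgp_enable True --apic:bgp_asn 65001
neutron subnet-create transit-net 172.16.0.0/24 --name transit-subnet
# VNFs attached here can advertise the same prefix from multiple points;
# ACI then spreads flows across them with ECMP, per the talk.
```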
I should have practiced the animation, but it's flow-based, right: we're advertising 10.10.10.10, and when the first flow comes in, it comes to ACI, and ACI round-robins it, one by one, to the load balancers; the second flow goes to the second one, and so on. From this, we can load-balance the traffic using any load balancer; there are a lot of distributed load balancers available that will give you this scale, so this is an application you can build yourself with the commercial distributed load balancers from companies that are here today. It load-balances to the load balancer tier, and Layer 4-7 load-balances to the server farm; that part is outside ACI, ACI provides the switching and networking for it, but the main component (sorry, my thumb is too big) is the ECMP we provide for the external load balancing. So that was the part about the scalability of your application.

Now, SFC. This is the new standard Neutron service function chaining API. What it allows you to do is create port pairs: you deploy your VNF and describe it using an ingress port and an egress port, then define some kind of flow classifier that lets you start sending traffic to your VNF, or whatever you're deploying, and then handle the return path. On the ACI side this is done using what we call multi-node PBR. PBR in ACI means policy-based redirect: a policy-based redirect is done with a contract, what we call a service graph, that gets applied to the traffic of a router, for example, and for anything flowing through it we can say, using this classifier, "redirect this traffic to this port pair and then bring it back." Multi-node means you can have multiple bumps in the wire, so you can take the traffic from VNF1 to VNF2 to VNF3 and then back. You can define all of this without having any domain knowledge; you do not need to know anything about ACI. You basically use the Neutron calls to create port pairs and port pair groups, create a flow classifier, and then create a service chain. As soon as you create the service chain, the traffic automatically gets redirected, thanks to the ACI service graph functionality, and you can dynamically add and remove nodes to that particular service chain by updating it. It works out of the box, without having to do anything on the ACI side.

This is the basic architecture: we have the port chain API, which is managed by Neutron, the driver manager passes it down, and then AIM, which as my colleague mentioned is the main module pushing all the Neutron constructs to ACI, pushes it to the ACI fabric. On the Neutron side, we create the port chain the same way: we take port pairs, and we can take multiple port pairs and make them a port pair group. If you take multiple port pairs and create a port pair group, you're deploying those VNFs "vertically," and ACI takes the responsibility of load balancing across them; that load balancing across the VNFs is provided automatically. And if you have multiple port pair groups in the service chain, they are deployed "horizontally," which means that if you have three port pair groups and you apply them in a service chain, your traffic goes to the first port pair group, the first VNF, then the second VNF, and then the third VNF. If you add another port pair group and update the service chain, that service is automatically inserted into the traffic path.
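A quick sketch of the "vertical" case just described, assuming two instances of the same VNF already have their Neutron ports (all names illustrative); ACI then load-balances chained traffic across the pair of instances.

```
# Two instances of the same VNF, each described by an ingress/egress port pair.
neutron port-pair-create --ingress vnf1a-in --egress vnf1a-out pp1a
neutron port-pair-create --ingress vnf1b-in --egress vnf1b-out pp1b

# One port pair group holding both pairs = vertical scale-out of that hop.
neutron port-pair-group-create --port-pair pp1a --port-pair pp1b fw-ppg
```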
What we support is the API and the functionality. The CLI for creating these, the neutron port-pair-create command, port-pair-group-create, the classifier create, all of those commands are part of the Neutron CLI, and that CLI extension has to be provided by the vendor; but we fully support the API, and it works out of the box. Of course, you can always get the Neutron SFC CLI yourself, it's just the Python networking-sfc package, and that will let you create these, but the vendor giving you the OpenStack distribution is the one who has to support that part.

We support that path too: OpenStack creates everything. When you're creating the SVI, OpenStack creates the SVI; it's an extension to Neutron, so you say "I want to create a Neutron network of type SVI" and it automatically pushes the configuration. So the Neutron side creates the SVI, it manages the lifecycle of the VNF, and it manages the port chain; those three things are declaratively done from the Neutron side, they are pushed to ACI, and ACI does the data path orchestration, which means the traffic automatically starts flowing, or, if you're peering, it makes sure you have peering capabilities with the external world.

Once we create it, it's implemented, as I was saying, as a multi-node service graph. You basically have the client, then you have a server, you have two networks, and within that you're inserting this graph. To create this, we create the networks, which means you create two networks, and you create ports, an ingress and an egress port; after that, you insert them into a service chain, and that VNF starts receiving the traffic. Inside ACI we have different semantics, the "consumer" and the "provider" instead of the ingress and the egress, but it means the same thing; that knowledge is not required at all to use it. All you need is a sniffer to confirm that, yes, my traffic is coming to this VNF and going out again, or whatever monitoring you have will show it automatically.

This is just a simple example of creating it. We create two networks, we create a flow classifier, then we create the ports: the ingress port, the egress port; we use Nova to start the VNF using the ingress and egress ports. We then take those two ports and put them in a port pair, using just the neutron port-pair-create command; then we take that port pair and create a port pair group (it's just a single instance here, but if you have multiple port pairs and create the port pair group from them, it gets deployed vertically); and then, from the port pair group and the classifier we created on the last page, we create the Neutron port chain using neutron port-chain-create. It's a completely standard Neutron workflow, there is absolutely nothing non-standard about it, and, magically, you start seeing that the traffic you're classifying automatically reaches the ingress port of the VNF, and the traffic the VNF sends out goes back into the chain. That's the basic VNF part; as I said, you can add and dynamically remove VNFs just by using the port chain update command.
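Put together, the walkthrough just described maps roughly to the following networking-sfc CLI sequence, including the update that adds a second bump in the wire; all names are illustrative, and the UUID placeholders stand for the created port IDs.

```
# Networks and a classifier for the traffic to steer.
neutron net-create sfc-left
neutron net-create sfc-right
neutron flow-classifier-create --ethertype IPv4 \
    --source-ip-prefix 10.0.0.0/24 fc1

# Ingress/egress ports for the first VNF, booted via Nova.
neutron port-create sfc-left --name vnf1-in
neutron port-create sfc-right --name vnf1-out
nova boot --image vnf-image --flavor vnf.medium \
    --nic port-id=<vnf1-in-uuid> --nic port-id=<vnf1-out-uuid> vnf1

# Pair, group, and chain: traffic matching fc1 is now redirected through vnf1.
neutron port-pair-create --ingress vnf1-in --egress vnf1-out pp1
neutron port-pair-group-create --port-pair pp1 ppg1
neutron port-chain-create --port-pair-group ppg1 --flow-classifier fc1 pc1

# Later: after creating vnf2's ports, pair, and group the same way,
# update the chain; the update replaces the list, so both groups are given.
neutron port-chain-update pc1 --port-pair-group ppg1 --port-pair-group ppg2
```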
So here we are basically creating two more networks (I think the clicker is running out of battery or something; sorry, I went back), more bumps in the wire: we create another network, we create two more ports, we create another port pair group, and then we use those two port pair groups. Initially, when we created the chain, we only had one port pair group; now we update it and add the two port pair groups, cluster 1 and cluster 2, and it automatically adds the second pair, which means the second VNF also starts receiving the traffic that is leaving the first VNF. And that is what a multi-node PBR is: you now see multiple VNFs. This is the ACI side; as I said, ACI knowledge is not really required, but just so you see it, you can get full visibility on the ACI side into what is being inserted into your traffic.

This is a very typical use case for all of these things together. We have these VMs, which are basically BRAS VNFs, we have an external VM, and this is just the customer's data center; they're all running on OpenStack. From here we created an SVI, so we can say the route to the data center is through here, and all the traffic starts coming here; and here we can advertise to the outside and say the route to the data center is there. That's the peering part. We are also serving different types of applications here, so you can use the ECMP to scale as well, and this SVI will have all the customer traffic flow through this transit network and go outside. Now, what we can do is create these networks and start dynamically inserting applications into the path of that traffic. Another thing is that we can have multiple copies, and we can use segmentation to separate different traffic and treat it very differently: this is going to be a trunk port here and a trunk port here, with multiple VLANs passing through, and then you can define how each one is treated. So this is a very simple use case where you can put together a lot of resources to run all of these things.

Thank you, Ifti. I'll sum up very quickly, since we're running out of time, I guess. We highlighted a number of benefits of running ACI and OpenStack, so ACI and OpenStack, better together; I hope that is clear by the end of this session. If you want to know more, we've got some links where you can find more information. Also, we're going to be at the booth, just on the other side of the building, so you can find us there. And that would be all from our side, so if you've got questions, we'll be around, we'll be at the booth, you can find us anywhere you want. Thank you all for coming.

[Applause]
Info
Channel: Open Infrastructure Foundation
Views: 689
Keywords: OpenStack Berlin Summit Session, Iftikhar Rathore, Domenico Dastoli
Id: quKZd8DDksA
Length: 40min 17sec (2417 seconds)
Published: Sat Dec 01 2018