Orchestration and Deployment of ONAP Operations Manager on OpenStack with TOSCA and Kubernetes

Video Statistics and Information

Captions
Hello everyone, I'm Chyna from the Cloudify team, and I'm going to present our journey in ONAP. I will use Cloudify and ONAP to first bring up ONAP and then orchestrate some very nice use cases. We started with one ONAP cluster, but now we see demand from different customers to go multi-cloud, federated ONAP, as well as to the edge. As you know, ONAP now extends into Akraino, and there are many nice edge use cases; if I have time, I will also talk about orchestration models for the edge, autonomous orchestration, and so on.

So let's start with ONAP. I will briefly go over what ONAP is, then I will explain TOSCA, which we use as an intent-based model for orchestration, and then I will explain how we can deploy ONAP in one or more clusters on top of Kubernetes and show some nice use cases using the ONAP SDC (the service design and modeling), the SO (the Service Orchestrator), and Cloudify. I also have another use case for video streaming, and then I will move to the edge and show how we can orchestrate an edge and a master deployment.

What you see here, the vertical box, is the OOM, the component that installs ONAP. But first, what is ONAP? ONAP is the Open Network Automation Platform. It has some non-real-time components, like the design time: the SDC, Service Design and Creation, where you create the design and the artifacts, which you then push into the SO, the Service Orchestrator. The Service Orchestrator in turn calls the application controllers: the SDNC for creating the networks, and the APPC for creating the application workloads. And it has something called A&AI, the Active and Available Inventory. Basically, all the infrastructure components should be registered there; if you have an edge component, it should also be registered there. It's not for registering subscribers or other things that come from higher-level components, but it should hold all the active infrastructure components. So this is ONAP in a nutshell, and the vertical box on the right side here is the OOM, the ONAP Operations Manager, which installs and manages ONAP.

In Cloudify, we are integrated in three main places. One is in the OOM, to install ONAP, on top of course of a Kubernetes cluster; the Kubernetes cluster can run on OpenStack, or on bare metal if needed. Second, we are integrated into the SO: from the Beijing release, SO can call Cloudify, and Cloudify can take a TOSCA blueprint and deploy and execute that blueprint. Third, we are part of DCAE, the telemetry part, so we are the controllers there at the DCAE level. Another way to look at it: Cloudify, on the left side, installs ONAP on top of a Kubernetes cluster; it is part of DCAE; and you can look at it as part of the controllers, so you have SDNC, you have APPC, and if you want to execute a complex, distributed TOSCA topology, you use Cloudify for that. There are already several telcos that use ONAP together with Cloudify; I'm not going to get into that.

So what happens here, in a nutshell: you create something in SDC; it gets all the TOSCA types and creates the artifacts; actually, it creates a CSAR file, which in turn is pushed to SO. SO has multiple ways to orchestrate things. One is the BPMN engine, which is more of a process-oriented flow: you say, I want to do step A, and from step A I go to step B, then step C, like an automaton. It also has TOSCA, using Cloudify, which is intent-based orchestration: you say what you want to orchestrate, not how. For example, you define the high-level abstractions and you don't care about the underlay. Let's say I want a connection point between A and B: I define this connection point, but I don't care what kind of underlay it runs on.
I also don't care what kind of routing it uses; it's the intent, how to use it, that matters. Or, for example, I want to define a firewall between two points: I don't care if it's a firewall from one vendor or another, or even a router with ACLs. In the plugins I implement the "how," but TOSCA defines the "what," not the "how."

If we look at the OOM architecture: basically there is a TOSCA blueprint (I will show it to you later) that is responsible for installing ONAP. The installation is based on two steps. First, you provision a Kubernetes cluster. Second, which is an independent step, you install the ONAP components, which are many containers, services, and pods, on top of Kubernetes; if you already have a Kubernetes cluster, you can just use the second blueprint. So this is one cluster: you see at the top the Kubernetes master; we talk to the Kubernetes master and we can provision workloads and all the resources on top of Kubernetes, meaning the services, the pods, the networking, everything, and it can run on OpenStack or on bare metal. At the bottom, we can deploy another ONAP cluster. The reasons for having multiple clusters are high availability and redundancy; load balancing, because sometimes you want to distribute the load; and proximity. So you can have multiple ONAP clusters and load-balance between them. These are the ONAP services: you have the message bus, you have the SDC, and all the containers; we actually have more than 100 containers in ONAP.

Before I continue further, I first want to talk about TOSCA: what it is and how we use it. As I said, TOSCA is intent-based; it defines what we do, not how we do it. In Cloudify, we take the TOSCA blueprint and push it into the Cloudify orchestrator, which parses this blueprint. It has a core component that knows how to take the blueprint, separate it into its smaller components, parse it, and create a plan. Then we have plugins that can interface with different systems: any cloud, so we can provision workloads, networks, and applications on AWS, OpenStack, VMware, and so on; any configuration management tool, like Chef, Puppet, or Ansible; any networking tool, like NSX and others; and of course Kubernetes and containers, which I will cover in more detail. It can provision workloads on top of Kubernetes, but it does more with Kubernetes. It's model-driven automation with governance: you can control who can execute blueprints and who can view them, you can have different roles of admin and user, all within one tenant; it supports multi-tenancy; and it supports a federated model of a master orchestrator and edges. When you come to the edge, it becomes more complicated, because sometimes the edge has no connectivity to the master orchestrator. So what do you do? The edge should work autonomously, so you need copies of your orchestration both at the master and at the edge; when there is no connectivity (think about ships or airplanes), you still need to orchestrate the lifecycle operations of the workloads there. And we are open source, so basically it's like building a Boeing, where many vendors each build one of the parts. But what's different here is that our parts are very dynamic: each vendor creates a part and that part is constant, it doesn't change, but in our world everything moves so fast, and we need to make sure we can orchestrate at scale in very dynamic environments.

This I took from the TOSCA definition, but let's make it simple. Here I have an example of what TOSCA is. TOSCA defines a graph of components; think about the graph in memory. Each component has relationships.
One relationship type is "contained in," and in this example you can see that I brought up a VM, which is a node; we have a JBoss container contained in that VM, and a CRM application contained in that JBoss container. The VM is of type compute, the container is of type JBoss, and the CRM is of type application. So, like object-oriented programming, I can write my own types in TOSCA and extend them from a base root type. Another relationship type is "connected to," and you can see that the CRM application is connected to an Oracle database that is running on a different VM. On the relationship I can define lifecycle operations: for example, I can take runtime attributes from Oracle, like the port Oracle listens on or other information, and send them at runtime over the connection, the relationship, to the CRM application. Similarly, think about edges: you can create a topology of an edge and a master and things between the two; you can create a VPN as a relationship, and so on.

I'm not going to go deeper into TOSCA, just show you the overall picture. You create a TOSCA blueprint; the blueprint is the input, the domain model, to the orchestrator. The orchestrator parses the TOSCA model and creates the different nodes and the connections between them. There is an embedded install workflow, the default workflow, that knows how to run on each one of the components, instantiate them, and provision them. You can also define your own workflows on the graph, and you can update the graph in real time: for example, you can add a node, remove a node, or change the properties of a given node, and the orchestrator is responsible for executing that. Then there is the plugins concept, which extends the core by interfacing with many different systems. Let's say tomorrow I want to interface with authentication, with AAA: I can write a plugin to LDAP.
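The graph just described could be sketched in TOSCA-style YAML roughly as follows. This is only an illustration of the modeling idea; the custom type and relationship names (my.types.*, my.relationships.*) are made up for this sketch, not taken from a real blueprint:

```yaml
# Illustrative TOSCA-style sketch of the example graph:
# a CRM app contained in a JBoss container on one VM,
# connected to an Oracle database hosted on a second VM.
node_templates:

  app_vm:
    type: tosca.nodes.Compute          # base compute type

  jboss_container:
    type: my.types.JBoss               # custom type extended from a base root
    requirements:
      - host: app_vm                   # "contained in" relationship

  crm_app:
    type: my.types.CRMApplication
    requirements:
      - host: jboss_container          # contained in the JBoss container
      - database:                      # "connected to" relationship
          node: oracle_db
          relationship: my.relationships.AppConnectsToDB

  db_vm:
    type: tosca.nodes.Compute

  oracle_db:
    type: my.types.OracleDatabase
    requirements:
      - host: db_vm
```

On a relationship like AppConnectsToDB, you could then define the lifecycle operations mentioned above, reading runtime attributes such as the database port from oracle_db and passing them to the CRM application at install time.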
So there are many plugins: to the different clouds, to different orchestration systems like Kubernetes as a domain orchestrator, to different configuration management tools, and so on. Basically, to create a service you create one or more blueprints, and after you create a blueprint, the same blueprint can be instantiated and executed in different locations using different inputs. The trick here is that you can use the same blueprint but give it different inputs, so it creates different things based on the same core blueprint, and eventually it creates the graph I mentioned. For example, if you have multiple domains that are similar but take different inputs, each one can have different management IP addresses or different administrators. Think about edges: you can group edges, and the edges in the same group are the same but in different locations, where they can differ; so you can push to those edges with the same blueprint, just giving different inputs.

If we look now at this TOSCA example, we have a VM here; we can create a group of the VM and all its components, like the IP address and so on. We connect the VM, as I mentioned, to a Tomcat container and then to a database. And here things become more interesting: we can create a composite service. Let's say we have multiple blueprints; you can think of each blueprint as a microservice, microservice 1 and 2. Each one gets its own inputs, has its own lifecycle operations, and sends output back to the master blueprint, and you can do this on the fly. So you can logically separate your application: you can have each VNF in a separate blueprint, have a master blueprint, and orchestrate those blueprints in a top-down approach. You can create a service chain and change it on the fly. For example, say you have a router and a firewall, and now you want to add a DPI device: in this example I run a deployment update and manipulate the graph in memory, so I add the additional VNF and connect it into the service chain using this deployment pattern. What we wanted to show here is that TOSCA is very fluid and flexible; it can manage complex topologies, and you can create a topology in a very modular way, like Lego blocks: each component can run in its own separate blueprint, you can have a master blueprint that ties everything together, and you can change this on the fly, adding and removing components, like we see here at the bottom.

Now I want to talk about how ONAP is actually installed on top of Kubernetes. Before I get into that, I want to say what we do at the Kubernetes level. We can provision workloads and resources on top of Kubernetes, but before that you need to bring up Kubernetes itself, so there is a TOSCA blueprint that brings up Kubernetes. The challenge we encountered when we installed ONAP was that the pods were scheduled all over the Kubernetes worker minions, but they need to access a shared file system; so we created an NFS share in the blueprint and combined it all together, so that every container can access the data. Cloudify also implements the provider interface in Kubernetes: if Kubernetes wants to scale, Kubernetes asks Cloudify, "I need another VM"; it gets another VM, and it is added as a node to Kubernetes. We also implement the service broker: what if you need to access external services but want to refer to them as internal, cloud-native Kubernetes services? With the service broker interface you can implement a catalog of services, and Kubernetes will access each of them as a Kubernetes service.
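The "same blueprint, different inputs per site" pattern described above might be sketched like this. The input names and properties here are illustrative placeholders, not a real blueprint:

```yaml
# Illustrative sketch: one blueprint instantiated per site or edge,
# differing only in the inputs supplied at deployment time.
inputs:
  management_network:
    type: string
    description: Site-specific management network name
  admin_user:
    type: string
    default: admin
  image_id:
    type: string

node_templates:
  edge_vm:
    type: tosca.nodes.Compute
    properties:
      image: { get_input: image_id }
      network: { get_input: management_network }
      admin_user: { get_input: admin_user }
```

Deploying this same blueprint once with a Boston-edge inputs file and once with a New-York-edge inputs file would yield two deployments that differ only in their site-specific values, which is exactly the reuse the talk describes.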
But the services could be on Amazon, or could be an external database that is not part of the Kubernetes cluster, and so on. So these are the main things we do around Kubernetes: we provide an interface to add more infrastructure components to Kubernetes (VMs, networks, etc.), we deploy application workloads on top of Kubernetes, and there is the Kubernetes provider that I mentioned before. So basically we are like a sandwich around Kubernetes. The provider is implemented as a native Go application, so it can access every cloud and create resources on the cloud, and we have TOSCA blueprints that can provision workloads on multiple stacks. Let's say you use Helm charts, and you have one Kubernetes cluster on Amazon and other Kubernetes clusters on-prem: you can use the same TOSCA blueprint to run the workloads multi-cloud.

In this example, we see how this looks. Look first at Kubernetes: you can see the visualization of the Kubernetes cluster, with the master, the different networks, the security groups, and the Kubernetes nodes. We can also collect KPIs as part of the TOSCA blueprint: you can define a monitoring component in the blueprint and say, I want to collect CPU or memory (that's an infrastructure metric), but you can also collect KPIs from the application itself, for example the number of connections or whatever interests you, and visualize them on the dashboard. This is the ONAP cluster that we provisioned using Cloudify, and you can see it on OpenStack. In this area I want to emphasize that you can take all the ONAP components and provision them on Kubernetes on top of OpenStack or bare metal, but you can also have a hybrid installation: you can define that some of the components run as containers, as pods in Kubernetes, while some run as VMs, so you can have a mixed environment. And this is true not just for ONAP, which is just an example: you can have a hybrid deployment model where some components run on bare metal, some run on VMs, some run in pods, and I even have an example where some components run as functions, function as a service, which I'll show in the orchestration use cases.

Just to summarize this part before I show ONAP itself: we have a TOSCA blueprint that defines the infrastructure. It defines Kubernetes; it starts a Kubernetes master with multiple nodes (this is configurable); and it also installs the Tiller server, which the Helm client uses (we have a Helm plugin, a Helm integration). You define in the TOSCA blueprint all the applications as TOSCA node instances, and it uses the Helm integration to provision them on top of Kubernetes. Helm has values, global and local, and you can override those values through the Helm integration: you can use the inputs and say, for example, I want to define global inputs, cluster-wide or across multiple clusters, or I want to override those values locally or globally, and everything here is defined with TOSCA. And of course we have the service layer, so higher-level components can interact with the REST API and call Cloudify, for example to create a blueprint, deploy it, execute it, and even create a workflow. Take scaling as an example: you measure the KPIs and you see that the scaling KPI has crossed its threshold, so you send that to the OSS or to another system, and that system can trigger a workflow and tell Cloudify, for this blueprint I need to scale out.

Okay, so this is the ONAP portal after you bring up ONAP. You can see the topology of ONAP with all the components: the portal, the SDC, the robot console, the message bus, the APPC, the A&AI, and so on.
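A node that installs an ONAP component via Helm, overriding global and chart-local values from blueprint inputs, might be sketched as follows. The node type and property names here are hypothetical stand-ins for the Helm-plugin API, chosen only to illustrate the idea:

```yaml
# Illustrative sketch: deploying a Helm chart from a TOSCA node,
# overriding a global Helm value and a chart-local value from inputs.
inputs:
  docker_registry:
    type: string              # global value shared by all charts
  sdc_replicas:
    type: integer             # local value for one component
    default: 1

node_templates:
  onap_sdc:
    type: my.types.HelmChart            # hypothetical Helm-plugin node type
    properties:
      chart: onap/sdc
      namespace: onap
      values:
        global:
          repository: { get_input: docker_registry }   # cluster-wide override
        replicaCount: { get_input: sdc_replicas }      # local override
```

The point of the pattern is that the same TOSCA inputs mechanism shown earlier now drives Helm's global/local values, so one blueprint can configure a component differently per cluster.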
If you run "kubectl get pods," you can see all the running pods, and the same for the ONAP services. If we want to look at the blueprint, it's in Gerrit; you have the TOSCA blueprint here. I don't have time to go into the blueprint in detail, but it defines the TOSCA DSL version and different imports (basically definitions of the TOSCA types that you import and make part of the blueprint); it has the input section for the images and the Helm version; then it creates the NFS connectivity that I mentioned for all the nodes, and it defines the Kubernetes master, the Kubernetes nodes, the security groups, and so on. Now that I have a Kubernetes cluster running (and I can have multiple Kubernetes clusters), I can go and provision the ONAP components on top of it. Basically, I point to the Tiller server in this example, or today I point to the Kubernetes master, and I provision ONAP on top of this Kubernetes. I can take different ONAP components and provision them on different clusters, or I can create multiple ONAP clusters and define what I want to provision where. You can see that each TOSCA type here is very similar; each ONAP component type defines the same things except for the application itself: A&AI, APPC, CLAMP, and so on. You can go to Gerrit (this is the link) and look at it.

Now let's look at some interesting use cases. One use case a telco did almost by itself, with an integrator, using Cloudify for two things. One was a catalog; it was for streaming video. They had different domain and network controllers and different media environments, so they wrote an abstraction layer on top, and they used a TOSCA blueprint first for defining the catalog of services: what the user can do with this, what services the user can consume. After that, they used ONAP and integrated Cloudify as part of ONAP.
They pushed the TOSCA blueprint into the ONAP SO, the Service Orchestrator, and the Service Orchestrator called Cloudify to orchestrate the different domain controllers. At the top they used the TM Forum API layer to integrate with their OSS/BSS systems, and that API layer translated requests from the OSS/BSS into Cloudify.

Another use case goes more to the edge. We also did this with a big telco: we created an ONAP cluster with three edges, where each edge was a Kubernetes cluster. We defined the services in SDC and pushed them into SO, and SO called Cloudify. In this case it was a connected-car example. Think about how many times you use Waze or Google Maps and you run into a traffic jam, and then it tells you to turn right, but you're already stuck in the traffic jam. We calculated a rectangle around each one of the edges, so we know the density, the number of cars there, and basically we can tell you to take another route. Before continuing, I'll show you what we did and then explain the architecture. In this architecture we used function as a service deployed on top of Kubernetes, and you can see here that we have different car types, like Ford and Toyota, and we used an IoT gateway for that.

Let me show you what happened; I will go quickly. We visualized everything on a Google map. We send information from the cars, so you can see the density information, and we also send a prediction back to each car to tell it where to go. If you're familiar with Boston, you know you often have traffic jams here, so we created a simulated traffic jam here. Each dot represents a different car type, like Mazda, Toyota, Ford. Now I'll fast-forward: you see the cars getting into the traffic jam, and I will send some of the cars to the destination point using another route. Basically, you see that the cars going on this route get to the destination much faster than the other cars.

Okay, let's continue. Basically we used Akraino there, with Kubernetes on top of it, and Kubeless as the function-as-a-service platform. We used an IoT gateway to take the car requests; it's a simulation of car requests, but we want to do it in a real-life experiment as well. We had the FaaS engine automatically define the TOSCA type for each car type, like Ford or Toyota; we kept all the car locations in a MongoDB database; Cloudify, using the TOSCA blueprint, orchestrated everything; and we visualized everything in Grafana and Prometheus. If I go one slide back, you can see ONAP: ONAP orchestrated everything; we had different edges, and ONAP ran the blueprints and the workloads on all these edges. The challenges we encountered were about defining things dynamically. Let's say I now discover a new car type: I want to add it to the model dynamically, with the capabilities I mentioned before. We can manipulate the graph in real time and add different car types to the model itself without tearing down the deployment. And of course, when you add a new car type, you need to put the function in place, connect it correctly to the IoT gateway, get its requests, and have everything work as it was initially defined.

Okay, let me go quickly. This is the model we created with SDC: you can see the IoT gateway in SDC, the Kubernetes master, the Kubernetes IoT gateway service, and the definitions for the functions (Mazda, Toyota, and Ford) for each of the car types; and this is Kubernetes itself and its metadata. This model was pushed to SDC as a TOSCA blueprint, pushed on to SO in ONAP, and SO called Cloudify to orchestrate these things.

Now, some things we learned about Kubernetes.
With function as a service, we can scale Kubernetes at three different levels. One is in Kubeless, or in any other FaaS engine: we have a way to say, if there are more cars of one type, Toyota for example, create more Toyota functions to absorb the load. I can also scale the native way in Kubernetes, so you can define scaling in Kubernetes itself, and that's easy. And also, as I mentioned before, we implement the provider interface, so if Kubernetes needs another infrastructure node, to add workers, it can call the infrastructure provider interface, which calls Cloudify, and Cloudify adds the node to the running cluster. All these scaling mechanisms are dynamic, and it is easy to scale at each of the levels. [Music]

Okay, so now let's talk about multiple ONAP clusters. You need multiple ONAP clusters for load balancing and high availability, so you can create multiple clusters and define what runs where. Let's think about federated Kubernetes first. What is federated Kubernetes? Actually, Kubernetes is federated by itself, because it does node federation: it runs different workloads on different nodes. Kubernetes in a nutshell has an API server, the gateway, the API for Kubernetes; it has etcd for configuration; and it has controllers that bring the workloads, the pods, to the desired state as you define it in the YAML files. Now go one layer above and think about the federated API: you can say, for example, I want to run object foo on multiple clusters, and it will go to each one of the clusters and execute it. In the same way, you have etcd for federated configuration, and you have controllers to drive toward the desired state, so you can run workloads on Kubernetes across multiple clusters. But sometimes the world is not only Kubernetes, and you need to tie it to different things outside Kubernetes; sometimes things need to run on VMs.
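The "native way" of scaling mentioned above is standard Kubernetes. For example, a HorizontalPodAutoscaler like the one below grows a deployment when average CPU utilization crosses a threshold; the deployment name, namespace, and numbers are illustrative, not from the actual demo:

```yaml
# Illustrative example of native Kubernetes scaling:
# a HorizontalPodAutoscaler that grows the (hypothetical)
# iot-gateway deployment when average CPU passes 80%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: iot-gateway-hpa
  namespace: onap
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: iot-gateway
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```

This level complements the other two: the FaaS engine scales individual functions, and the provider interface scales the cluster's infrastructure nodes themselves.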
Sometimes you need to create VPNs, you need to create network connectivity, so for that you need a glue layer on top, and here we use Cloudify, first to define all this federation and then to define workloads on top of it. I will fast-forward. There are many use cases for federation, and many use cases for edge cloud. Think about having OpenStack on each one of your ships; there are so many ships that people need it, for gaming or for whatever else; you can have OpenStack on an airplane. So there are many edge use cases, IoT and so on. And as Michael Dell said, the edge will be much bigger than the cloud itself. Think about it: tomorrow the central cloud is going to break into smaller clouds, the edge clouds. The reasons are twofold. One is latency: you cannot send everything to the main cloud; if you need low latency, you need an edge. The other is that you don't want to send an enormous number of data points (I call it the tsunami of data points) to the master cloud, because it's a lot, you're going to overwhelm the master cloud, and you cannot keep all that information. So the main two reasons are latency and the volume of data points from IoT, AI, machine-to-machine, smart cities, and so on.

So how are you going to manage this? I already touched on several of the challenges at the edge, but they are enormous. You need to define a complex model, and you need to do service composition across a master and multiple edges. What do you do when there is no network between the edge and the master? The edge should work autonomously. The edge also has limited resources, resource constraints. What do you do about security: who is allowed to talk from the edge outward, or into the edge? What do you do about security and tenancy inside the edge itself? And many other things; think about a satellite environment where bandwidth is scarce. How do you manage all this? Even from an operational point of view, we are going to have more edges than people, so how can we manage this?

Just to finish, since I don't have enough time: there are many models for orchestrating an edge. One model is to have a master orchestrator that uses a control component at the edge, but the master orchestrator manages everything and just sends operations to the edge. A more federated, distributed way is to have a local edge orchestrator that is autonomous: it runs the lifecycle operations (configure, provision, install, manage, even healing and scaling) at the edge itself, and there is a master orchestrator that connects to the local orchestrator. When there is no connectivity, the edge works on its own, and when connectivity returns, it sends the data it has collected to the master orchestrator. I think that, not far from today, we are going to have lots of edges, with connected cars, augmented reality, and IoT, so the master cloud will serve as a learning point: you learn something at one of the edges and send the data to the master orchestrator, which can distribute it to the other edges, or do AI at the master cloud; but the edge will need to work autonomously and be smart enough to manage all the components that are connected to it.

And just to close (I don't have time to go into the federated model, managers of managers, and cross-edge workflows), I want to finish with some examples. Basically, we want service composition using TOSCA, like Lego blocks: you can combine different components together, create a master service, create multiple master services, have a catalog, and everyone can consume each of the services. This can be used for smart cities, transportation, branch offices; we have vCPE and SD-WAN solutions today, smart homes and cities, military and defense, energy, and so on. There are lots of use cases, and the challenge is to make it simple: to define the topology and what you want to do intent-based, without getting into the complexity.

I see that I am right on time, so let's take questions.

[Question] I have several questions, but I'll try to formulate them as a single one. Is the TOSCA you're using truly standardized, or do you have your own flavor?

[Answer] We are part of the TOSCA committee at OASIS, and we support the TOSCA standard. We often run ahead of TOSCA, so we need to define our own types and push them back to the TOSCA committee, and try to convince everyone that they are needed, based on real use cases.

[Question] And accordingly, could the blueprints be used by other vendors as well? How deeply do I lock my infrastructure into Cloudify if I'm using it? I mean, are the blueprints I create usable only by Cloudify?

[Answer] You can define the TOSCA types and use another parser to go and parse them.

Okay, thank you. [Applause]
Info
Channel: Sniper Network
Views: 707
Rating: 5 out of 5
Keywords: Open Source, RDMA, LPC2018, LPC, Linux, Free Software, Technology, eBPF, Open vSwitch, OVS, SDN, Software Defined Networking, Virtual Box, What Is Open vSwitch?, Why Open vSwitch?, Open vSwitch on Linux, FreeBSD and NetBSD, Open vSwitch on NetBSD, Open vSwitch on Windows, Open vSwitch on Citrix XenServer, Open vSwitch with DPDK, Installation, OVS Faucet Tutorial, Open vSwitch Advanced Features, OVN Sandbox, OVN OpenStack Tutorial, OVS Conntrack Tutorial, OVS IPsec Tutorial
Id: T0q-SZj02jE
Length: 41min 51sec (2511 seconds)
Published: Tue Aug 27 2019