VES ONAP Demo

Captions
This is a quick demo developed by the OPNFV VNF Event Stream (VES) project, running on the OPNFV Danube release. You can learn more about OPNFV at the main site, opnfv.org, and the OPNFV wiki at wiki.opnfv.org. The demo is named "Hello ONAP" since it provides a basic intro into the goals of the VES project and its role in ONAP, the recently launched Open Network Automation Platform open source project.

The focus of the VES project is to promote adoption of a unified data model for streaming telemetry from physical or virtual network infrastructure and applications. This unified model, and the agent libraries that enable it to be integrated into platforms and applications, are intended to dramatically reduce the cost of integrating new virtual network functions (VNFs) with service assurance systems in NFV service environments. You can read more about the rationale for the VES concept in the presentations available on the OPNFV wiki at the VES project pages.

This demo illustrates the integration of two implementation approaches to VES agents, which are the software processes that observe and report telemetry data to the VES collector. Included is a collectd plugin written in Python and two VES agents written in C. All the code you'll see operating here is open source and is available through OPNFV git repos or referenced open source projects such as ONAP. We're in the process of migrating VES development to the ONAP project, with integration and test efforts continuing in OPNFV as we integrate the ONAP platform with OPNFV reference platforms through tests and demos such as this. You can look for future development of VES under the Data Collection, Analytics and Events (DCAE) project in ONAP; VES fits into the DCAE architecture as the system for generic analytics data collection from network functions.

For this demo I'm using an OPNFV Apex reference platform installed using OpenStack TripleO in a two-node, non-HA virtual configuration with an OpenStack controller node and an OpenStack compute node. I'm using a collectd agent developed by Intel as part of the OPNFV Barometer project, and two C agents developed by AT&T in the OPNFV VES project and in the ONAP project. The collectd plugin runs on the single bare-metal compute host and reports a wide variety of compute host and guest VM statistics such as CPU, memory, and network utilization. The C agents run on each of the VMs and report on application statistics and VM resources.

Four VMs provide the monitored VNF component functions in this demo, including an iptables-based firewall, an iptables-based load balancer, and two VMs running a web server based upon nginx. A fifth VM provides the VES monitor, which exposes a RESTful interface over which it receives the VES event data. The monitor saves the data into an InfluxDB database, and the data is displayed on a set of Grafana dashboards. We'll update this demo soon to include firewall and load balancer VNFs developed by AT&T, based upon the FD.io project's VPP framework and contributed to ONAP as demo VMs.

The collectd agent reports a wide variety of host and guest VM performance data in the VES JSON format. Here you see an example of the measurementsForVfScaling event, which includes very detailed statistics on server load, memory, CPU, all the network interfaces internal to the compute host, disks, and aggregated data on disk usage and vNICs. The VES C agent reports CPU, network, and application statistics, such as transactions per second at the web server instances, and delivers the reports in the same JSON format. Here you see an example of a heartbeat event from the webserver agent, which informs the collector on a periodic basis that the agent is still alive; a scaling event indicating that the webserver has a certain request rate; and a fault event indicating that the webserver had stopped. Similarly, the vFirewall agent reports on CPU usage and network interface usage.

The Hello ONAP demo is deployed through the Tacker VNF manager using a TOSCA blueprint for the set of demo VNFs. You can see the resulting topology through the OpenStack Horizon dashboard, including the four VMs and the internal and external networks to which they're attached. When the deployment is complete, you'll see a terminal that remains open and presents the monitor process output, which shows VES events as they're received in JSON format, plus information about what the monitor has collected and saved into InfluxDB, and summary information from the web servers such as server state and traffic. The events in this demo include regular heartbeat event messages from each agent, virtual function scaling related messages which provide detailed telemetry data, and fault reports.

By browsing and logging into the Grafana dashboard, you'll see the collected telemetry being displayed in the VES dashboard. At this time you can see three key sets of monitored data: CPU usage from the bare-metal host and from the vFirewall and vLoadBalancer VMs; network usage from the bare-metal host and the vFirewall and vLoadBalancer VMs; and webserver traffic from the webserver VMs. There are many other parameters being received, but most are expected to be processed by closed-loop control systems and are not displayed in this graphical UI.

I have two test scenarios that illustrate changes in the agent-reported data as conditions in the VNFs change. The first is the invocation of traffic to the firewall, load balancer, and web servers. When I start the traffic, you should soon see traffic ramping up on each of the two web servers in the VES webserver traffic dashboard. You can see the bits are coming in already, with transactions being reported; they should show up soon on the dashboard. You can see web server traffic increasing, network usage increasing on the load balancer and on the firewall, network traffic increasing on the virtual network interfaces to the VMs provided by the compute host, and increases in CPU usage.

The second test scenario is to cause a fault that's reported and later reported as cleared. In this case I'm actually pausing the Docker container in which one of the web servers is running. You'll see that the state changes to "stopped" and, about a minute later, changes back to "started". In the process you'd normally expect the TPS to go to zero on just that server; in this case the iptables-based load balancer isn't quite configured to support transferring the load from one web server to the other, so you simply see TPS go to zero and then come back up when the Docker container is restarted. You can see traffic has now dropped and should go to zero; you see the virtual network interface traffic also dropping off, and CPU usage dropping off as well. Pretty soon the traffic will be started again, and you'll see transactions coming back in, being reported, and being picked up through InfluxDB into the Grafana dashboards.

Hope this demo was useful to you as an introduction to the purpose and implementation of the OPNFV VES project. It's just getting off the ground and there are many opportunities for collaboration on important aspects, as described on the OPNFV wiki. Feel free to reach out to us at OPNFV through the opnfv-tech-discuss mailing list, which you can subscribe to at lists.opnfv.org. Thanks for watching!
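To make the event flow concrete, here is a minimal Python sketch of a VES-style heartbeat event being built and delivered to the collector's RESTful interface. The field names follow the VES common event header, but the exact fields and the collector path (e.g. `/eventListener/v5`) vary by VES version, and the URL shown is hypothetical, not the demo's actual endpoint.

```python
import json
import time
import urllib.request

def make_heartbeat(source_name, sequence):
    """Build a minimal VES-style heartbeat event (illustrative field set)."""
    now_us = int(time.time() * 1e6)
    return {
        "event": {
            "commonEventHeader": {
                "domain": "heartbeat",
                "eventId": f"heartbeat-{sequence}",
                "sequence": sequence,
                "priority": "Normal",
                "reportingEntityName": source_name,
                "sourceName": source_name,
                "startEpochMicrosec": now_us,
                "lastEpochMicrosec": now_us,
            }
        }
    }

def post_event(collector_url, event):
    """POST a JSON event to the VES monitor's RESTful interface."""
    req = urllib.request.Request(
        collector_url,
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    hb = make_heartbeat("webserver-vm-1", sequence=1)
    print(json.dumps(hb, indent=2))
    # post_event("http://monitor:30000/eventListener/v5", hb)  # hypothetical URL
```

An agent would send one such event per reporting interval, incrementing `sequence` so the collector can detect missed heartbeats.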
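On the storage side, the monitor's saving of events into InfluxDB can be sketched as a flattening of VES measurement fields into InfluxDB line protocol, which is what Grafana then queries. The measurement name, tags, and fields below are illustrative, not the demo's actual schema.

```python
import time

def to_influx_line(measurement, tags, fields, ts_ns=None):
    """Render one InfluxDB line-protocol point: measurement,tags fields timestamp."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    ts_ns = ts_ns if ts_ns is not None else time.time_ns()
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

# Hypothetical webserver TPS sample as the monitor might store it:
line = to_influx_line(
    "webserver_traffic",
    tags={"source": "webserver-vm-1"},
    fields={"tps": 42},
    ts_ns=1497484800000000000,
)
print(line)
# webserver_traffic,source=webserver-vm-1 tps=42 1497484800000000000
```

Each incoming VES event would yield one or more such lines, giving the dashboards a per-source time series to plot.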
Info
Channel: Open Platform for NFV (OPNFV)
Views: 4,078
Rating: 5 out of 5
Keywords: ONAP, Linux Foundation, Open Source
Id: Zoxcj4mwUwU
Length: 11min 1sec (661 seconds)
Published: Thu Jun 15 2017