The Pensando Distributed Services Platform

Captions
The platform that we have built is called the Pensando Distributed Services Platform, and it is this platform that is going to enable us to bring the cloud to our customers' data centers. For enterprise environments, what this means is the ability for Pensando to deliver cloud-like agility, security, and a simplification of the architecture within the enterprise environment. At the same time, for our cloud customers, this will enable them to build a next-generation architecture that allows them to leapfrog the likes of AWS, who have come to rely on internally developed technologies.

If I were to draw an analogy for the technology that Pensando is building and enabling for our cloud customers, it is consistent with the journey that NVIDIA went on as they built an accelerator for graphics, which eventually became the dominant platform for artificial intelligence and the next set of applications. If I were to look at something even closer to home, it is consistent with what AWS set out to do as they were thinking about building the cloud, when in 2016 they acquired a company called Annapurna. Annapurna became the foundation on top of which AWS started to deliver its enhanced capabilities as services to its customers, whether that was enhanced networking functions or the ability to run their own hypervisors within that environment. They continued to drive the pace of innovation by integrating many of their value-added service functions directly into this technology, which became the foundation deployed across the bulk of their compute portfolio. This allowed them to build a variety of services, and to continue to drive the next set of services, enabling enterprise customers to take advantage of the cloud and its full potential. We are now deemed by our customers to be at least a generation, if not over two generations, ahead of this technology.

Now, when you take a look at this accelerator and this technology that we have built, and look at what we are beginning to deploy starting this calendar year across our customers' environments, the foundation consists of the distributed services cards that run at 25 and 100 gig and enable a variety of software services that can be turned on in parallel to one another. We call it the power of AND: a variety of distributed software-defined networking services riding alongside storage services and security capabilities, while telemetry runs in parallel. It is an architecture that we continue to innovate on every eighteen months to two years, allowing us to take advantage of the current state-of-the-art silicon technologies; today's product is based on 16-nanometer technology, and the next one, available over the next 18 months, moves to 7 nanometer. All of the software services that we will be talking about today will also be fully forward compatible with these new platforms as they become available.

The power of this architecture is that it is fully software-defined, it is accelerated at wire rate, and it provides full visibility and security, because those are attributes that are built into this platform and not an afterthought. Everything in this architecture and this platform is fully programmable, making it a future-proof architecture.
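As a purely conceptual aside, the wire-rate, programmable data path described above can be pictured as a match-action pipeline. The toy Python below only illustrates that idea; the table names, fields, and actions are invented for the example and are not Pensando's actual pipeline, which is implemented in hardware.

```python
# Conceptual sketch only: a toy match-action table in the style of a
# programmable packet pipeline. Names and fields are illustrative and are
# NOT Pensando's actual data path, which runs in silicon at wire rate.

from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    dst_port: int

# A "table" maps a match key to an action; the pipeline applies tables in order.
firewall_table = {
    # (dst_ip, dst_port) -> action
    ("10.0.0.5", 443): "allow",
    ("10.0.0.5", 23):  "drop",
}

def apply_firewall(pkt: Packet) -> str:
    """Look up the packet in the match-action table; default-deny otherwise."""
    return firewall_table.get((pkt.dst_ip, pkt.dst_port), "drop")

if __name__ == "__main__":
    print(apply_firewall(Packet("192.168.1.10", "10.0.0.5", 443)))  # allow
    print(apply_firewall(Packet("192.168.1.10", "10.0.0.5", 23)))   # drop
```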
When we talk about software-defined elements, those consist of things including network and network services, security services, and storage services, all of which can be turned on while being accelerated at the edge. Being accelerated at the edge, in addition to being fully programmable, also implies the ability to consume it at low power, and we will be looking at how some of the key elements of innovation allow us to do that. At the same time, while you want to be programmable and low power, scale becomes a very important element, whether you are trying to scale for large enterprises, for cloud customers, or for service providers at the edge. This architecture is unique in that it treats all workloads, whether virtualized, bare metal, or containers, as first-class citizens, and at the same time it has security built into the architecture while allowing for always-on telemetry. After all, if you don't have visibility and observability, security becomes a very tough thing to tackle. We have also built in the ability to be centrally managed with a policy model that is consistent with that of the cloud providers.

A lot of the use cases and the power of this technology are being driven through our partnerships with end-user customers, strategic cloud customers, and a number of service providers with whom we are in the midst of trials. The goal is to continue to deliver production-worthy products, and when I talk about the use cases at the end, we will go into more detail on that.

At the foundational level, this technology is powered by our P4 programmable processor; Francis, in his section, is going to talk about this in more detail. When this technology is embedded on a PCI Express card, it can be deployed across all scale-out servers, and all of the services, tied to end-to-end policy models, can be enabled across these distributed services cards. As I touched upon earlier, one of the key attributes of this technology is that customers do not need to rely on complex service chaining architectures; service chaining is baked into this platform, and all functions can be present everywhere, at scale, simultaneously.

Managing all of these assets is possible either through a RESTful API, where customers that have their own controllers can, through those RESTful APIs, manage the lifecycle of the distributed services cards, or, for enterprise customers and some of our cloud customers, through our own Policy Services Manager, which we also deliver and which is based on the Kubernetes architecture. When Vipin goes through the demonstration of this technology, you will truly get to appreciate all the attributes of the Policy Services Manager that allow you to deal with the lifecycle of the distributed services cards, the full and centralized capabilities for managing policies and automation, and the availability of the RESTful APIs that give us the opportunity to work with a broad range of ecosystem partners across the IT operations and orchestration stacks, enabling our enterprise customers to integrate it within their current operational model.

In summary, this distributed services platform allows our customers to build a fully software-defined architecture, one that can be accelerated at the edge, one that is always secure and visible, and one that breaks down the silos that exist today within enterprise environments built around big, complex appliances that are no longer required. Sorry, sorry, I have a question. Yes?
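To make the RESTful lifecycle-management idea concrete, here is a minimal sketch, assuming a hypothetical endpoint layout, PSM address, and token; the real Policy Services Manager API paths and payloads are not reproduced here.

```python
# Minimal sketch, assuming hypothetical REST endpoints; the actual Policy
# Services Manager API is not reproduced here.
import requests

PSM = "https://psm.example.local"              # hypothetical PSM address
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credentials

def list_cards():
    """Fetch the inventory of distributed services cards (illustrative path)."""
    resp = requests.get(f"{PSM}/api/v1/dsc", headers=HEADERS, verify=False)
    resp.raise_for_status()
    return resp.json()

def admit_card(card_id: str):
    """Admit a newly discovered card into management (illustrative path)."""
    resp = requests.post(f"{PSM}/api/v1/dsc/{card_id}/admit",
                         headers=HEADERS, verify=False)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for card in list_cards():
        print(card.get("id"), card.get("status"))
```

A customer with its own controller would drive the same kind of calls from that controller instead of from the Policy Services Manager.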
Enrico: You are talking about your customers as large enterprises, service providers, and telcos, and I totally get it, but do they actually have to build their software stack on top using the API that you provide, or do you provide the entire stack, so the storage infrastructure on top of it, the network functionality, is ready to go? I mean, it's not that the firewall comes from some other software vendor with all of these things already built in, right?

It's a very good question. For our enterprise customers we are essentially delivering the entire stack. Even some of our cloud customers are not only enabling the functions we deliver to them but also integrating them into their current environments. So, as an example, an enterprise customer would deploy the distributed services cards across their enterprise-class servers; we are working alongside HPE at a strategic level and also going through certification working closely with Dell. Enterprise customers would be able to procure and deploy the distributed services cards from their server vendors, with the ability to turn on functions available from Pensando, including stateful firewalls and microsegmentation features. They would also be able to procure and deploy the Policy Services Manager, either on a virtual machine or as a container, to manage the lifecycle of those distributed services cards. Cloud customers, on the other hand, would have the opportunity to embrace the distributed services card across the compute infrastructure of their choice, and at the same time, working closely with us, they would deliver a set of services tied to the software that Pensando has provided to them. We will also deliver tools that allow them to compile their own business logic that can coexist with the Pensando software, and through those RESTful APIs we tie into their controllers and their operational models. So we have a very flexible way of getting deployed across this broad set of customers. Did I answer your question, Enrico?

Yeah, totally. I have another question, though. I totally get it if you have an environment that is already full of your cards, I mean every server with your cards, but what happens in the moment when you start to deploy the new cards, so that there is a period where some servers have the traditional network interfaces and some have the new ones? How do you manage that transition?

The way enterprise customers are managing that is at a pod level, with the deployment of a self-standing pod. Now, what is very key to those customers is the ability for this newly formed pod to coexist and interoperate with the existing infrastructure they have in place, which is a very important aspect of how we went about developing the RESTful APIs and enabling this product to be integrated into the customer's existing ecosystem of tools. Whether they happen to be using Splunk or ServiceNow, we have, through the RESTful APIs, the ability to integrate into their existing environments. Whereas if you are a cloud customer starting to build your services on top of Pensando, the key element is the ability to offer like-for-like functions to what the cloud provider is delivering today, and the ability to help them migrate towards a fully software-defined architecture going forward.
For their customers that are looking for acceleration of these functions, whether at 25 gig or 100 gig speeds, at a baseline our services meet the requirements of what the cloud customer is already delivering, and on top of that, when we start introducing new services, they can roll those out at higher speeds and at greater scale than what their current suppliers enable, and we enable them to build a truly software-defined distributed architecture.

And the customers that would be interested in this are probably running something like NSX, or ACI, or VXLAN with EVPN today. Are you integrating seamlessly with any one of those?

Enrico, our customers are the ones that define who our ecosystem partners are. All of the partners you have named we clearly have working relationships with, and we continue to work on a better-together, more tightly integrated stack, because ultimately the customer is looking for a single pane of management and orchestration, and they want to make sure that, architecturally, what they are building is future-proof while still allowing them to work with their current vendors of choice.

A quick question from me as well. Say, for example, a vendor or an organization starts to move to, or buys into, the technology: how easy would it be for them to transition to it? Is it just a matter of adding a card and it detects whatever it needs to handle? Is it an automated process, or do you have to manually transfer all the services it can handle?

It's a very good question. What we have noticed is that as large enterprises look to deploy this technology, the first step for them is to make sure that the certification with the server vendors is already in place, something we have already worked towards, and to make sure that their server vendor can make this technology available shipping out of the server vendor's factory. The second thing the customer ensures, when automating this technology across their environments, is that they go through a 60-, 90-, or 120-day process to verify that their automation tools and their ability to bring this technology into their environments have been completely tested. So the barrier to deployment has been overcome by ensuring that, whatever tools they have in place, we can be seamlessly integrated and we run no differently than what they are currently deploying. When the time comes to turn on some of these services, you can do that through the centralized policy manager, and through that policy manager you can deal with all elements of the lifecycle of the technology: whether it is deploying it on day one and running just basic functions like telemetry, visibility, and observability, or later upgrading it to turn on more sophisticated functions, whether those are policy-driven models you want to enforce, or whether you want to build on the telemetry and observability functions and have data available to you on a flow-by-flow basis. So the technology gives you the opportunity to embrace it with all of these being opt-in models, and depending on your sophistication as a customer and your appetite to embrace all the capabilities, it allows you, the customer, to decide at what stage you want to turn on what level of functions, as shown in the sketch below.
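The day-one to opt-in progression described above could be pictured roughly like this; the phase names, feature flags, and the push_policy helper are hypothetical stand-ins for what would actually be driven through the centralized policy manager or its RESTful APIs.

```python
# Illustrative sketch of the opt-in, phased enablement described above.
# Feature names and the push_policy() helper are hypothetical.

DAY_ONE = {
    "telemetry": True,           # visibility and observability only
    "stateful_firewall": False,
    "microsegmentation": False,
}

LATER_PHASE = {
    "telemetry": True,
    "stateful_firewall": True,   # turned on once the policy model is ready
    "microsegmentation": True,
}

def push_policy(pod: str, features: dict) -> None:
    """Placeholder for pushing a feature/policy set to every card in a pod."""
    enabled = [name for name, on in features.items() if on]
    print(f"pod {pod}: enabling {', '.join(enabled)}")

if __name__ == "__main__":
    push_policy("pod-1", DAY_ONE)      # day one: telemetry only
    push_policy("pod-1", LATER_PHASE)  # later: opt in to security functions
```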
On day one we ensure that we can operate and give customers value that is incremental to what they get today, using their existing operational model and their existing tools. Thank you. Thank you.

So, at the foundation of our technology is this P4 programmable processor, and I started this talk by telling you how our customers consider us to be at least one generation, if not multiple generations, ahead of the AWS Nitro technology. We then created benchmarks and test suites that can be easily replicated by our customers, to demonstrate in a quantifiable manner what it means when we say we are at least a generation, if not two generations, ahead of the AWS Nitro technology. We conducted benchmarks on multiple metrics. Latency and jitter, as we said earlier, are very important, particularly for applications at the edge as 5G and IoT become very relevant for service providers and cloud providers delivering differentiated services. Having a latency that is a nine-times improvement over AWS Nitro is a key attribute that lets us be optimized for the edge, and when you combine that with our ability to handle up to eight times the number of packets per second, it gives us the power of AND: the ability to process a very high packet rate at very low latency and very predictably low jitter, giving our cloud and enterprise customers the opportunity to maximize the performance and the SLAs of their applications, and giving cloud providers the opportunity to serve many more customers and increase revenue per rack.

At the same time, the power utilization of our product, we believe, is one third that of AWS today. AWS, by their own presentations, requires five Nitro devices to offer an equivalent level of services compared to the distributed services card. Today we have measured performance of sub-30 watts at 100 gig with all of these services turned on, whereas we believe the AWS implementation would need to draw about a hundred watts, assuming 20 watts per Nitro, giving us the opportunity to deliver far more density per rack because we draw one third the power. At the end of it, it is really the power of this platform, combined with the power of our software, that will allow us to continue to deliver best-in-class cost, scale, and performance, and by staying on the 18-to-24-month cadence for delivering our next-generation product and platform, which continues to be on track, we will continue to maintain this lead over the competition.
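For reference, the power comparison quoted above works out as follows; the figures are the speaker's own claims (five Nitro devices at an assumed 20 W each versus a sub-30 W card), not independent measurements.

```python
# Reproducing the power comparison quoted above, using the speaker's figures.

nitro_devices = 5
watts_per_nitro = 20
nitro_total_w = nitro_devices * watts_per_nitro   # 100 W

dsc_total_w = 30                                  # "sub-30 watts" upper bound

print(f"AWS Nitro estimate: {nitro_total_w} W")
print(f"DSC measurement:    <{dsc_total_w} W")
print(f"Ratio:              ~{nitro_total_w / dsc_total_w:.1f}x "
      "(roughly one third the power)")
```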
Info
Channel: Tech Field Day
Views: 2,117
Rating: 5 out of 5
Id: gUgITjFmHOA
Length: 19min 58sec (1198 seconds)
Published: Sun Apr 26 2020