Cisco HyperFlex with Jonathan Gorlin

Captions
So thanks for the introduction, everyone. My name is Jonathan Gorlin, and I'm a product manager for the Cisco HyperFlex product platform. We've got a lot to cover and only 30 minutes, so I'm going to use as much of the time as we can — but please ask questions, let's make it interactive. I first want to start at a high level with what's new: we just announced HyperFlex 3.0 late last week, and there's a lot of great excitement and a lot of great innovation there. We'll start at that level, then do a quick business update — a year in review of all the work we've done over the last 12 months — then dive specifically into the new HyperFlex platform along with the 3.0 innovations, and talk about some of those features in more technical depth. Somehow we'll get all that done in 30 minutes.

First of all, if we look at what we just announced: HyperFlex is ready to take on any application, in any cloud, and at any scale. If you look at the features we added in our 3.0 release to hit these key elements, it starts with the application. We're extending our application support beyond the VMware ESX hypervisor we've always had by adding Microsoft Hyper-V support. This lets us address different applications, giving customers a choice of hypervisor and new application stacks, so we can take the greatness of the HyperFlex platform and bring it to Hyper-V. We're also adding container support — native support for persistent volumes inside of Docker containers. We're building a FlexVolume driver, and I'll talk about what that means and why we made that choice. And in traditional Cisco fashion, we've always had Cisco Validated Designs — these reference white papers that customers absolutely love — and we're continuing to push on that front, adding more and more support for enterprise applications. Today we already have support for SAP, SQL, Oracle, Oracle RAC, and the whole Microsoft suite, and we're continuing to push as customers put more and more of their workloads on HyperFlex.

On the any-cloud message, we're bringing in some of the other assets Cisco has in-house so we can tell a really great cloud story as well. It starts with being able to monitor how the application is performing, which we can do with AppDynamics. We can also do intelligent placement, making sure our workloads are balanced, with the Workload Optimization Manager to help us fine-tune the environment, and then cloud mobility and a private cloud catalog come with Cisco CloudCenter. We'll talk about what all of that means, but in essence you can use HyperFlex as your on-prem private cloud and, with all of these other assets, create a really great multicloud story.

And with any scale, we're pushing the scale limits — going all the way up to 64 nodes, and doing so in a way that doesn't sacrifice resiliency; it actually increases resiliency as we continue to scale. At the same time, as we push into new mission-critical workloads, we wanted to address stretch clustering — a big requirement for some of those very high-end applications. And then, as Jeff described in the last session, we're changing the game in how we do deployment at scale.
Think about fifty or a hundred thousand sites: how do you deploy hyperconverged into such a dense, distributed environment where you have all these different sites? Well, with Intersight we can now do it all centrally — that's really a game changer. There are a lot of exciting things we're going to dive deeper into, but before we do I want to talk about what we accomplished in 2017. It's been an incredible year; if you've been following along with us, we've had so many major releases it's hard to keep up, and everything on this top line we introduced in the last 12 months.

When we first came to market, we started with hybrid appliances. We saw very quickly that customers wanted all flash, but we haven't stopped with hybrid — we'll continue to offer both platforms. Customers need choice; there are use cases for both all-flash and hybrid appliances, and we're pushing forward on both fronts. We brought in the 40-gig networking innovations that were in UCS, and we've seen a big performance increase, especially on the all-flash systems at 40 gig — that's a native part of HyperFlex, since we engineer the network together. We added self-encrypting drives: customers wanted data-at-rest encryption, so that was something we absolutely had to do, and we have it now on all platforms, both hybrid and all flash. On enterprise workloads, we're continuing to push out more and more CVDs to give our customers confidence that we can run these applications, and there's a lot of Cisco-on-Cisco there as well — we have UC applications, we have TRCs.

[Audience] You spoke before about the workloads and the fact that you have validated designs. Have you been working with some, let's say, major vendors like Oracle or the database providers to validate certain use cases?

Yes, we do have those conversations. Cisco works really well with a great partner ecosystem; sometimes the vendors want to work with us and sometimes they don't, so it really depends on the application stack. But if you take a look at the CVDs, you'll see some of the biggest names in there, and a lot of it is working with the technical marketing engineers at our partners. I don't have the specifics on which ones have definitive partnerships, but we can certainly talk to our technical marketing team and find out.

We also introduced new tools that help you size the environment and make sure you get the right HyperFlex config — so you're not overbuying or, in the worst case, underbuying. Then customers last year were telling us: look, HyperFlex works great in the data center, but how do I get that same experience out at the edge? I want the same core file system, I want the same management. So we introduced the Edge offering, which pushes that same platform out to remote office/branch office environments. We introduced the HyperFlex Connect UI, a native HTML5-based management tool, and this coexists alongside our vSphere web client plug-in. On the replication side, we added asynchronous replication done at the file system layer, which lets you replicate between HyperFlex clusters — and of course in 3.0 we're taking that a step further with synchronous replication with stretch clusters.

[Audience] So you have synchronous or asynchronous replication?

We'll talk about that with the new innovations — we're doing synchronous for stretch clustering. What we have today, prior to 3.0, is asynchronous.
And then with the M5 nodes, we were one of the first hyperconverged vendors to stay up to date with the Intel roadmap. What's more important is that we also made sure our customers weren't stuck — if I have an M4 cluster, do I have to start all over again? We allow mixing of nodes, so you can bring new M5 nodes into an existing cluster and continue to scale out. As you can see, that's a lot covered in one year, at a high level.

Now, customers and partners are constantly telling us they put our stack up against other hyperconverged stacks, and they're seeing the performance of HyperFlex just outshine everybody else — not just in terms of raw performance but also in terms of consistency, making sure every VM gets the performance it needs. We've also had this independently validated by ESG, so check out the report. Performance and consistency have really been HyperFlex's claim to fame.

The momentum has been tremendous. 2,000 customers is the last public figure we gave; it's significantly larger now, and it's continuing to grow double digits year on year.

[Audience] Last year at Cisco Live Europe '17 you announced that you had a thousand customers. So are these 2,000 more customers — do you have 3,000 now — or is that an increase of a thousand over the last 12 months?

This is a total customer count. But as soon as we finish our financial reporting we'll update this number, and it's definitely significantly higher.

[Audience] And are those already existing Cisco customers, or do you have net-new customers who were never Cisco customers before?

It's actually a really great mix, and one of the great things about HyperFlex is that it's a growth engine for Cisco: more than a third of those customers are brand-new to UCS — they've never bought a server from Cisco before. So we're definitely tapping into our existing install base, which is humongous, but we're also seeing a lot of brand-new customers. And then finally, we announced last year the acquisition of Springpath, which is where I came from. We worked really well together as separate teams; the companies are fully integrated now, and being under one umbrella means we can continue innovating faster. I'm hoping that when we're up here next year in 2018 and show this slide, you'll see even more of that fast-paced innovation.

OK, so let's talk about some of the key themes HyperFlex has always been known for. We've always been adaptive, meaning we provide flexibility for the customer, and there are a lot of areas where we're flexible. One is that we allow you to reuse existing investment: if you have UCS compute-only nodes, you can bring those in; if you have existing architectures — a Fibre Channel SAN, a network-based SAN — you can bring those in too. We're not a rip-and-replace; this is a solution designed to fit into an existing environment, and customers are responding really well to that message. On multicloud, as we talked about, Cisco CloudCenter — announced last year — lets you build blueprints that you can deploy on-prem and in multiple clouds. And simplicity has always been a key message of HyperFlex, and of hyperconverged in general, but we're talking about end-to-end simplicity: it's not just about how easy it is to install and upgrade, it's the entire lifecycle.
How easy is it to size my environment? How easy is it to purchase? Cisco makes it easy with bundles to procure. To stand it up, we have one of the best installers out there, making deployment very simple. We make day-2 operations simple as well, with both the vSphere web client plug-in and HX Connect. And when it comes to troubleshooting, with TAC — which we know is the best in the industry — we have Connected TAC and proactive Smart Call Home, and you've seen some of the innovations in Intersight; we're trying to push the edge of what proactive customer support means. So when you look at HyperFlex, it's a solution that's simple end to end.

OK, so now we're taking it to the next level. Any application: we talked about some of the CVDs we have — Splunk and others — and we're continuing to push out more. If a workload is virtualized, it's a good candidate to run on HyperFlex, and bringing in Hyper-V support and container support means we can use HyperFlex in more ways, more use cases, and more applications. Any cloud: this is a platform that lets you start with a private cloud and use it as the on-ramp into various public clouds with all the tooling we have. And any scale is all about the fact that you can start small — a small cluster in a remote site — and use the same platform for your core data center to run your mission-critical workloads. HyperFlex has a solution from the smallest to the largest scale.

If you look at the platform, this is what we had prior to 3.0 — this is what's shipping right now. The bottom infrastructure layer is all built on our own custom log-structured file system; we've talked about that again and again. We do wide striping, meaning the way we do data placement leverages all of the nodes simultaneously, and this gives us the performance and consistency we're famous for (there's a toy sketch of the idea below). All of these are foundational elements that make up our platform.
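To make the wide-striping idea concrete, here is a minimal, hypothetical Python sketch. The node names, the hash choice, and the replication factor are illustrative assumptions, not HyperFlex's actual placement logic — the point is only that every block of every VM is spread across all nodes rather than pinned to the node the VM runs on:

```python
# Toy illustration of wide striping: each block is hashed across ALL
# nodes in the cluster, so no single node becomes a hotspot.
import hashlib

NODES = ["node1", "node2", "node3", "node4"]   # assumed 4-node cluster
REPLICATION_FACTOR = 3                          # assumed three copies per block

def place_block(vm_id: str, block_offset: int) -> list:
    """Pick the nodes that hold the copies of one block."""
    key = f"{vm_id}:{block_offset}".encode()
    start = int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % len(NODES)
    # Copies land on consecutive distinct nodes after the hashed start point.
    return [NODES[(start + i) % len(NODES)] for i in range(REPLICATION_FACTOR)]

# Blocks of a single VM spread across the whole cluster:
for offset in range(0, 4 * 1024 * 1024, 1024 * 1024):
    print(offset, place_block("vm-42", offset))
```

Because placement is a pure function of the block identity, every node serves a roughly equal share of reads and writes — which is the consistency argument the talk keeps returning to.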
We're on the VMware ESX hypervisor — the world's best and most popular hypervisor, of course — and we have great application support. You can see we started in the ROBO/VDI/VSI space, and that's very quickly been overtaken by databases, mission-critical workloads, even medical health records. You name the application stack — we're seeing customers put everything under the sun on top of HyperFlex, and we're delighted to see them ramping up their use of HX. Then we have all the cloud enablement at the top layer; of course this is surrounded by end-to-end security with all the Cisco assets, and on the left-hand side we have the new innovations with the Cisco Intersight platform.

So this is where we are right now, and with 3.0 this is what our picture changes to — you can see how we're starting to fill in and provide more value and capabilities for our customers. Down at the platform layer we're now expanding up to 64-node scale, and we're doing that with availability zones that give us more resiliency. We're bringing in stretch clustering and higher density — new drives and new form factors enable that higher-density play — and the FlexVolume driver is an elemental piece that allows us to build for Kubernetes-based applications. Moving up the stack, we now have Hyper-V support, so customers have a choice between VMware and Hyper-V — and please feel free to interrupt with questions. With Hyper-V support we bring in customer choice; with containers, of course, we get cloud native. You can see we're building this out in a very nice way. At the top layer, AppDynamics and CWOM are again helping with the story, and all the way on the left, at the top of the screen, you can see the HX cloud deployment is a big game changer — we just released that to GA a few weeks back.

[Audience] Getting back to the Hyper-V support: seeing that in the multicloud services you have Azure, are you planning to support some kind of Azure on-premises or something like that?

Not yet — are you referring to something like Azure Stack on-prem? Cisco does have an Azure Stack solution; it's orthogonal to HX. You can choose depending on the customer requirements — we wanted to make sure UCS can solve multiple different customer challenges.

OK, so let's dive deeper into each of these areas — I'm sure you're excited to get into the details on some of these new features. I want to start from an architecture point of view. This is our pre-3.0 stack: very custom-built, purpose-built. It works great on VMware — we have our core file system and data services layer, we speak to the VMware APIs, vCenter works very well, and we're integrated with the vSphere web client. It was purpose-built for that need. But as we move into this multi-hypervisor world — into Hyper-V and containers — we need to disaggregate it. So we're moving to a layered, more modular approach: we take all the greatness of those underlying layers, separate them out, and can now present new presentation layers. In the example of Hyper-V, we need to speak the SMB protocol; for containers, maybe we need to speak iSCSI. This gives us a more modular approach, and as we bring in more management entities, we can add them in as customers ask for more. As you can see, this new architecture is modular, so as customers ask for more features we'll be able to add onto the framework, and as we bring in new hardware innovations at the bottom, we want those innovations to be leveraged by all layers in the stack. That's what our HyperFlex 3.0 high-level architecture looks like, giving us the ability to innovate very rapidly (a conceptual sketch of the layering follows).
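As a rough mental model of that disaggregation — and only that, since this is not Cisco source code — here is a small Python sketch with one shared data-services core and thin protocol "heads" plugged in on top; the class names are invented for illustration:

```python
# Conceptual sketch of the 3.0 layering: one core data-services engine,
# with protocol presentation layers (NFS for ESX, SMB3 for Hyper-V,
# block/iSCSI-style for containers) sharing it.
from abc import ABC, abstractmethod

class CoreDataServices:
    """Stands in for the shared log-structured file system layer:
    replication, caching, and snapshots would live here once."""
    def write(self, obj_id: str, data: bytes) -> None: ...
    def read(self, obj_id: str) -> bytes: ...

class PresentationLayer(ABC):
    def __init__(self, core: CoreDataServices):
        self.core = core  # every protocol head shares the same core

    @abstractmethod
    def serve(self) -> str: ...

class NfsHead(PresentationLayer):    # what ESX talks to
    def serve(self) -> str: return "NFS datastore"

class Smb3Head(PresentationLayer):   # what Hyper-V talks to
    def serve(self) -> str: return "SMB3 file share"

class BlockHead(PresentationLayer):  # e.g. iSCSI-style block for containers
    def serve(self) -> str: return "block volume"

core = CoreDataServices()
for head in (NfsHead(core), Smb3Head(core), BlockHead(core)):
    print(head.serve())
```

The design point is that adding a new hypervisor or runtime only means adding a new head, while hardware innovations at the bottom benefit every head at once.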
So, with Hyper-V specifically: again, it starts with that same core foundational element — the same file system that's tried and true, that over 2,000 customers have been using. There's no data locality; it's fully distributed, so all of the performance benefits are there. What we're adding is new scalable SMB3 file protocol support so we can speak inside of a Hyper-V world. We layer Windows Server 2016 Datacenter Core on top, and we'll support this in both hybrid and all-flash models — as I said, we're continuing with both. And we'll support all the native Hyper-V features: native failover clustering, production checkpoints, all the things you've come to expect from Hyper-V — HX is going to work seamlessly with them. That's important when we talk to customers: we're not just selling an SMB file share, shared storage; we're selling a solution, and the solution includes HyperFlex plus the Windows feature set in Hyper-V 2016. On the management layer, we'll have support for SCVMM if the customer licenses System Center; if they prefer not to, they can still use Hyper-V Manager — that's perfectly OK — and we'll support the PowerShell cmdlets that Hyper-V has today. From the HX side, we'll use the REST-based API and our HX Connect UI to manage the HX functions.

Let's look real quickly at what the architecture will look like with Hyper-V. We start with a familiar three-node cluster with Windows Server 2016 installed. It will be a very similar user experience: an installer goes through, provisions the controller VMs, sets up the network stack, configures the cluster, and pushes the service profiles down — all the things you've come to know with HyperFlex and UCS are the same in the Hyper-V world. The controller VMs come up together and form a shared HX datastore, with a separate pool for capacity and a separate pool for cache. On that HX datastore you can then create an SMB file share — there's a mapping between the two — and that SMB file share is just like regular shared storage: all the servers have access to it, just as if it were running off any filer. Of course it's all about the VMs, so you bring in your application VMs, and their VHDX files live on that SMB file share. And just like in the ESX world, where we have the IOVisor module that reroutes the I/O, we do the same thing here: inside the IOVisor module we have a hashing algorithm that makes sure that, based on the type of I/O, we're leveraging all the controller VMs simultaneously. All the controller VMs process I/O for all the VMs, so we get very well-balanced utilization with no hotspots (a toy model of the routing idea is below). It works well, tried and true, on the VMware stack, and we wanted to replicate it in the Hyper-V world. Hyper-V on HX is in EAP — our early access program — so customers are playing with it, giving us early feedback, and we're tuning some of the documentation. It's here, it's ready, and it's coming with 3.0. Any questions on the Hyper-V piece?
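For intuition only, here is a tiny Python sketch of the routing idea just described — a host-side module hashing each I/O request to one of the controller VMs. The controller names and the hash are assumptions for illustration, not the actual IOVisor implementation:

```python
# Toy model of IOVisor-style routing: intercept guest I/O and hash each
# request to a controller VM, so all controllers share the work.
import hashlib

CONTROLLER_VMS = ["ctlvm-a", "ctlvm-b", "ctlvm-c"]  # assumed 3-node cluster

def route_io(file_path: str, offset: int) -> str:
    """Choose which controller VM services this read/write."""
    key = f"{file_path}@{offset}".encode()
    h = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return CONTROLLER_VMS[h % len(CONTROLLER_VMS)]

# I/O for a single VHDX fans out across every controller VM:
for off in (0, 4096, 8192, 12288):
    print(f"offset {off} -> {route_io('share/vm01.vhdx', off)}")
```

Because the choice depends only on which block is touched, no controller VM owns a whole VM's traffic — which is why utilization stays balanced with no hotspots.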
Let's jump to the next big anchor feature, which is our container support. The problem HX is trying to solve: developers are going to the cloud because they need access to native tooling to do their work, so what IT needs is a platform that lets them develop on-prem while giving them the same toolset they have in the cloud. What we're providing is a FlexVolume driver, which basically means that, using a Kubernetes stack, a developer can put in a pod request and ask for persistent storage. The whole problem we're solving is that containers require state — stateful containers need storage — and wouldn't it be great if we could use the HyperFlex platform, which is very fast and resilient? How do we plumb that in for our containers? With the FlexVolume driver, a developer can come in, make a pod request, and inside that pod request, using our driver, do an on-demand request, create, and mount — all of that work happens automatically behind the scenes to give the containers a persistent volume. We didn't think it was sufficient to just ensure that HX could be the plumbing for container persistent storage; we wanted to take it the entire length of the story and make sure a developer can actually use this runtime just like they would on AWS or GKE. They go in and ask for storage; they don't care how it gets plumbed. The developer doesn't want to have to call the HyperFlex administrator and say "carve me out a LUN — what's my IQN?" — none of that. The FlexVolume driver abstracts it all away and gives us support for persistent volumes inside of containers (a hedged sketch of what such a pod request could look like follows).
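To illustrate the developer-side workflow, here is a sketch of a pod spec requesting a FlexVolume-backed volume, submitted with the standard Kubernetes Python client. The driver name "cisco/hx" and the option keys are placeholders — the talk does not name the actual HX driver spec — but the surrounding FlexVolume structure is standard Kubernetes:

```python
# Hedged sketch: a pod asks Kubernetes for a FlexVolume-backed volume;
# the create/mount happens behind the scenes on the storage platform.
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig pointing at the cluster

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "stateful-app"},
    "spec": {
        "containers": [{
            "name": "db",
            "image": "postgres:10",
            "volumeMounts": [{"name": "data",
                              "mountPath": "/var/lib/postgresql/data"}],
        }],
        "volumes": [{
            "name": "data",
            # FlexVolume plugs a vendor driver into the kubelet volume path.
            # "cisco/hx" and these options are illustrative placeholders.
            "flexVolume": {
                "driver": "cisco/hx",
                "fsType": "ext4",
                "options": {"volumeName": "pgdata", "sizeGb": "100"},
            },
        }],
    },
}

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

The point of the abstraction is exactly what the talk describes: the developer declares storage in the pod request and never talks to the storage administrator.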
OK, the next big anchor feature is stretch clustering. As we move into more mission-critical applications, customers are asking for better resiliency and more uptime. The way we do that is simple: we take a cluster and distribute it across two sites. It's still one logical storage cluster, and we do synchronous replication across the two — we're mirroring all of our writes across both sites. This ensures we can tolerate all different types of failure scenarios: we can lose an entire site, we can lose local disks or local nodes, we can have a network partition. No matter what you throw at a stretch cluster, it's designed so you can quickly resume operations on the surviving nodes at the surviving site.

[Audience] Just a quick question regarding this: what kind of latency do you support between the two sites?

Good question. For right now, with 3.0, we're requiring a 10-gig link between the two sites and a maximum of 5 milliseconds RTT, because as you know, as latency goes up, performance drops off quickly. Those are the numbers we're starting with; they're not necessarily set in stone — we'll just have to see.

This is a zero-RPO solution, because it's synchronous: every write is committed on both sites before it's acknowledged back to the application (see the sketch below). And it's a near-zero RTO, because we're using native VMware HA features to restart the VMs if we have a site outage — you don't need a run book, you don't need SRM. That's the whole beauty of this HA solution. We've also optimized it so the read path is local: we have copies on both sites, all reads can happen locally, and only the writes have to be replicated across the link. This is also in EAP — we have a couple of partners in Europe running it and giving us feedback — and it will be shipping.
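Here is a minimal Python sketch of why synchronous mirroring gives zero RPO — the write is only acknowledged once both sites have committed it, while reads stay local. This is a conceptual model under assumed names, not the HX data path:

```python
# Minimal sketch of a stretched, synchronous write path.
import concurrent.futures

class Site:
    def __init__(self, name: str):
        self.name, self.log = name, []

    def commit(self, block: bytes) -> bool:
        self.log.append(block)   # stand-in for a durable local write
        return True

def stretched_write(local: Site, remote: Site, block: bytes) -> bool:
    """Mirror the write to both sites; acknowledge only when both commit."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(s.commit, block) for s in (local, remote)]
        return all(f.result() for f in futures)  # both must succeed

def stretched_read(local: Site) -> bytes:
    return local.log[-1]  # read path stays local: full copy on each site

site_a, site_b = Site("A"), Site("B")
assert stretched_write(site_a, site_b, b"page-1")  # ack after both commit
assert stretched_read(site_a) == b"page-1"
```

This also shows why the RTT cap matters: every acknowledged write pays at least one round trip to the remote site, so link latency directly bounds write latency.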
[Audience] Do you have any specific requirements, RPO-wise, on that solution, on the stretch cluster — latency or something like that?

Similar to the earlier question: there will be network requirements between the two sites — that's the biggest hurdle when it comes to stretch clusters — and we'll also have a witness appliance as a third-party arbitrator; most architectures use a witness. So there will be a latency requirement between the two sites and to the witness.

[Audience] It'll be a virtual appliance?

Yep, you can put it on anything that can run it: it could be in the cloud, it could be in a closet, it could be on your laptop — it just needs to be running.

Got a question from the audience: do the cluster nodes have to be on the same subnet, in the same layer-2 network, or is it just pure IP routing? Today we're asking for stretched layer 2; there's no inherent limitation — it's all IP unicast.

[Audience] And do you have any requirements regarding MTU, the packet size?

We do use jumbo frames within our clusters — for traditional HyperFlex we use 9216-byte jumbos — and we'd want jumbo frames to be able to cross this link as well; otherwise you'd be artificially hampering your cluster. We're not going to require it: if you can't do jumbo frames across that link, you can still deploy a stretched HyperFlex cluster, but you're not getting the most bang for your buck because of the additional packet overhead.

[Audience] Jumbo frames are in most cases only available over dark fibre.

Yes — and that's why we'll at least give you the choice: if you can get it, great, opportunistically; if you can't, we'll still support you. It's going to unlock a lot of use cases for us and our customers.

The last major anchor area for HyperFlex 3.0 is how we're scaling the platform, and we're scaling along three vectors. First, we want to keep offering a cost-effective way to scale — lowering the dollar-per-gigabyte and getting denser in our platforms. There are two ways we're doing this. We've qualified — and this is currently shipping — a new 1.8 TB 10K SAS drive for the hybrid platforms, which gives you 50% more storage over the 1.2 TB drives we were using; it's a very nice option and customers are adopting it quickly. The other is a new chassis: an HX240 large-form-factor model with 3.5-inch LFF drives — 6 and 8 terabyte drives will be qualified initially — which lets us go very dense on storage. When all is said and done, HyperFlex will have multiple tiers to choose from: at the high end, all flash with either Optane or NVMe caching; then all flash with regular SSD caching; then small-form-factor hybrid for performance- but cost-sensitive customers; and large form factor for those who really need deep, capacity-optimized storage. You'll be able to use HyperFlex in whatever use case you need, in all those flavors.

The second vector is the node count itself: we're going all the way up to 64-node clusters, which is the vSphere cluster limit, so hopefully that puts to bed any competitive "how far can they scale" questions. This will be supported across hybrid and all flash, M4 and M5, and mixed clusters, so existing customers can continue to seamlessly expand all the way up. A key element is that we wanted to scale in a way that gives us more resiliency — we don't want to scale and increase our failure domain, we want to decrease it as we scale — and that's the feature called Logical Availability Zones. On the 64-node cluster specifically, it's going to be 32 hyperconverged HX nodes and up to 32 compute nodes, and those compute nodes can be pretty much any C-Series or B-Series server managed under UCS Manager today. We started with a specific set of SKUs; now, as long as it's a UCS server of a supported generation, most of them work, so customers can reuse them — and there's no licensing fee to do so, which is very compelling.

[Audience] Can you elaborate a bit on the availability zones? Is that something you can customize completely, or is it based on the rack, or...?

Good question. Automated availability zones basically take our cluster and logically segregate it. There's no management overhead for this feature — it's all automated. The administrator just turns it on; he can optionally specify zones, but we prefer to leave it in automatic mode. Turn it on and it segregates the nodes into groups, and the data placement policy changes to say: I'm only going to put one copy of data in each zone. By doing this we decrease our failure domain, so we can tolerate more disk failures and more node failures (a toy sketch of zone-aware placement is below).

[Audience] In the case of a cluster where we have several blades on B-Series, is it able to query the chassis information to create, let's say, different zones? Imagine the unlikely event of losing not a blade but an entire chassis, for whatever reason — is there a way to make sure that the workloads, or rather the data, are replicated across the blades and across the enclosures, the chassis, so you still have that kind of failure domain segregated?

It's a good question. For HyperFlex specifically, the nodes that become part of the availability groups are the hyperconverged nodes, so they'll be rack-mount. But your question about location awareness is an important one. This is all a logical construct — there's no physical input of what the rack awareness is — but you can imagine this is a foundation where, in a future release, all we have to do is add administrator controls to say: instead of automating this, let the administrator define it, or maybe through intelligent APIs we can query what the stack looks like. That will be an evolution of this feature, but we wanted to give the administrator the ability to flip a switch and instantly get better resiliency without having to worry about the placement.
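As a toy illustration of the one-copy-per-zone policy just described — with invented zone and node names, not the actual LAZ algorithm — consider this Python sketch:

```python
# Toy sketch of Logical Availability Zones: nodes are grouped into zones
# and placement keeps at most one copy of each block per zone, so losing
# a whole zone costs at most one replica.
import hashlib
import itertools

ZONES = {
    "zone-1": ["hx01", "hx02", "hx03", "hx04"],
    "zone-2": ["hx05", "hx06", "hx07", "hx08"],
    "zone-3": ["hx09", "hx10", "hx11", "hx12"],
}
RF = 3  # assumed three copies, one per zone

def place(block_key: str) -> list:
    copies = []
    for zone, nodes in itertools.islice(ZONES.items(), RF):
        h = int.from_bytes(
            hashlib.sha256(f"{zone}:{block_key}".encode()).digest()[:4], "big")
        copies.append(nodes[h % len(nodes)])  # exactly one node per zone
    return copies

print(place("vm-7:block-0"))  # e.g. ['hx03', 'hx05', 'hx10'] - one per zone
```

With plain striping, losing several nodes can take out every copy of some block; with zoned placement, any number of failures confined to one zone leaves two replicas intact — that's the "decrease the failure domain while scaling" claim.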
OK, that's the last major anchor feature. I just want to hit on the multicloud aspect one more time: it's all about the application at the center — that's what we want to care for and feed. At the top layer we want visibility into the application's performance, and we can use AppDynamics for that. We also want to make sure the infrastructure supporting that application has the resources it needs — maybe it's undersized, or worse, maybe it's oversized and we can reclaim or rebalance it — and the Workload Optimization Manager gives us the tools to make those decisions. Cisco CloudCenter gives us the ability to blueprint our applications for deployment in any cloud, along with the private cloud self-service catalog and IaaS capabilities that come native with it. And then hopefully HyperFlex will be your on-prem instantiation of a private cloud — a design that works seamlessly within this ecosystem.

Just to hit on this one more time — Jeff in the previous session was talking about Intersight, and I want to make it really clear how this affects HyperFlex. HyperFlex is a first-class entity inside of Intersight. We have some really great use cases for monitoring and management, but the cloud deployment piece is the real game changer, and there's no one else in the hyperconverged space that's even close to providing this level of functionality to customers.

Think about the example where I have a thousand remote sites and I need to deploy hyperconverged to all of them. How am I going to do it? Without Intersight you've got a huge logistical problem: typically customers will leverage a staging site, maybe leverage a partner, take the gear out, check everything, maybe partially configure it, box it up again, ship it to the end site, and then an IT guy flies out and stands it up — rinse and repeat. This staging process is complex, time-consuming, and costs money; flying people out is really inefficient. So how do we do this better? We want to do it Meraki-style: I want to just ship my access point and configure it all remotely. With Intersight, you can now ship your HyperFlex servers directly to the end site; all anyone there has to do is rack it, connect power, and connect the network. It pulls a DHCP address, automatically phones home to the Intersight cloud, and you can then claim it securely using our two-factor claiming mechanism. The servers are now available in Intersight, and the administrator — from anywhere, far from the remote office — can kick off a deployment. We've added tools inside of Intersight: an HX cluster profile lets you create all of the configuration in advance, you can clone those profiles, and you can use policies to keep everything consistent — maybe I want the same DNS and NTP servers at all my sites; we can easily do that in Intersight — and then deploy them all in parallel. You can deploy 5, 10, 100 sites in parallel, all from the cloud. It's very compelling what this does to the operational model, really giving our customers the power to deploy HyperFlex at scale.

[Audience] Just a quick question on how this, let's say, zero-touch deployment works: when the server arrives, is it somehow pre-configured so that it reports to the right tenant, the right customer? How does that work?

Everything comes pre-loaded from the factory — the BMC version and the firmware are already at a version that works with Intersight. All you have to do is make sure you have outbound connectivity: outbound 443 to our service URL. You can optionally use a proxy server as well if you don't have direct access. From there it sits in an unclaimed state, and you as the administrator come in and claim it. To claim it, you need the device serial number and the rolling claim code — every couple of minutes there's a code that changes in the CIMC — and you can get that remotely: as soon as the system is online through in-band management, you can pull that information and securely claim it in Intersight (a hedged sketch of that flow is below).

[Audience] Wow, OK — so you do it with the serial number; you need that information so it reports to your Intersight tenant?

Yep, that's how it works today — which is still much better than where we were. Ultimately we'll get better; maybe when you order, you can actually have your servers pre-claimed, so they show up already yours.

[Audience] They still print serial numbers on the box outside, though.

Yeah — we're trying to find a way to do it in a secure manner; that's the reason we haven't done it yet, and there are some ideas we're talking about.
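For orientation only, here is a Python sketch of the claim step as described in the talk — serial number plus rolling claim code, over outbound HTTPS. The URL, endpoint path, and payload field names are illustrative assumptions, not the documented Intersight API:

```python
# Hedged sketch of a cloud claim flow: the admin, sitting anywhere,
# claims an unclaimed device using its serial number and the rolling
# claim code read remotely from the device's management controller.
import requests

INTERSIGHT = "https://intersight.example.com/api/v1"  # placeholder URL

def claim_device(serial_number: str, claim_code: str,
                 session: requests.Session) -> None:
    """Claim an unclaimed device into this account (hypothetical endpoint)."""
    resp = session.post(
        f"{INTERSIGHT}/device-claims",          # assumed endpoint shape
        json={"SerialNumber": serial_number,    # printed on the box
              "SecurityToken": claim_code},     # rotates every few minutes
        timeout=30,
    )
    resp.raise_for_status()

# Usage (with an authenticated session for your account):
#   claim_device("FCH1234ABCD", "12AB34CD", session)
# The device side needs only outbound 443 to the service URL, optionally
# via a proxy - no inbound connectivity to the branch site.
```

The two-factor property comes from needing both something static (the serial) and something time-bound read from the live device (the rolling code).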
Meraki does this — they have a way to know the server is still fresh from the factory, to make sure nobody tampers with it in handling — so there are all these different methods we're looking at.

I just want to wrap up real quick. As you can see, with HyperFlex 3.0 we've got a lot here — we're innovating faster than we ever have, and we can now do any application, on any cloud, at any scale. So thank you all. We'd love for you to attend some of our breakout sessions — we have UCS and HX sessions here — and we also have the World of Solutions; come by and see some of the demos, we have videos, and if you want some one-on-one time, feel free — I'll be around, and some of my colleagues as well. OK, any questions?
Info
Channel: Tech Field Day
Views: 5,574
Rating: 5 out of 5
Keywords: Tech Field Day, TFD, Cisco Live, Cisco Live Europe, Cisco Live Europe 2018, CLEUR, CLEUR18, Cisco, Jonathan Gorlin, Hyperflex
Id: aDibIvTSusg
Length: 34min 28sec (2068 seconds)
Published: Wed Feb 07 2018