Cohesity Interface, Data Protection, and DevOps Demo

Video Statistics and Information

Captions
All right, guys, my name is Travis. Ask any questions you have; I know I'm a little bit catty-corner to you guys, so throw something if you need me. Not allowed? Sorry, I'm not allowed to throw things anymore; this is why we cannot have nice things. All right, guys, let's go through the Cohesity interface itself. We're going to walk through a couple of different workflows: the overall infrastructure setup; data protection, both as an integrated data protection appliance and as a target for a different application; a recovery operation; and some of the DevOps workflows. We've got a couple of things to go through, so by all means ask any questions you'd like as I keep walking through, and we'll go from there.

First thing I'll point out, and I love this fact: it's all HTML5. No Java, no Flash, no version headaches, nothing else; you don't have to deal with that ever again. "I spilled Java on my laptop." Hey, I know, right? All right, let's talk through a couple of things here. Right on the main screen I've got the ability to drive through a lot of these different components, so let me just refresh this. On the main screen, the top section is all based on my data protection cycles, so I can see my data protection right there. Right underneath that is my infrastructure level, which is the systems themselves: I've got three nodes in this particular system and no alerts, which is always a good thing, green thumbs up. Then there's storage; this is 15 terabytes. They don't give me the expensive big drives, they give me these small drives for my little mini lab. Down here you can even see the data reduction ratios: right on the main screen I can see that logically I'm storing 116 terabytes of data on 614 gigabytes of storage. Now, full disclosure, this is a very sterile lab environment with a lot of backups running over and over again; I do not expect you to get 194-to-1 data reduction. This is a lab, just prefacing that. There is great dedupe because of the way we do things, and I'll walk through why this number is as goofy as it is. "I've seen analyst white papers explaining how you can get that." Yeah, you do a lot of backups and they're all logical backups, right? "There's a lot of smoke and mirrors in dedupe; you can make it look as big as you want, even full VDI clones." There you go. "More vendors have counted thin provisioning in their data reduction, and I've countered that by showing them a one-gig USB stick and saying, but I'm going to lie and tell my system it's actually 100 terabytes." Yeah, you can do some fun stuff.

So let's walk through some of the basics. Let's say I'm setting this up brand new in your environment today. I've got the cluster stood up, which I'll tell you takes about 15 minutes; it actually takes me longer to take it out of the box and put it in the rack than it does to set up the logical stuff, putting IPs on it and that sort of thing. The first thing I'm going to do is go to Sources and set up, say, VMware. I go to Add a Source, and this is the step, not plural, this is the step. It's very, very simple: all I need to do is take the information from my vCenter instance and give it a username and password, with a service account as recommended here, and that's it. I don't need to do anything else. There are no proxies to deploy and nothing that needs to be installed at the software layer; I just give it the credentials and the path for my system, and from there we import everything.
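A minimal sketch, assuming a REST-style API, of what this add-a-source step might look like if scripted rather than clicked through; the endpoint path, field names, and token handling are illustrative assumptions, not the documented Cohesity API:

```python
import requests

CLUSTER = "https://cohesity-cluster.example.com"   # hypothetical cluster address
API_TOKEN = "<session-token>"                      # assume a token was obtained beforehand

# Hypothetical payload: these field names and the endpoint below are illustrative,
# not the documented Cohesity API schema.
source = {
    "environment": "kVMware",
    "endpoint": "vcenter01.example.com",           # the vCenter to register
    "username": "svc_cohesity@vsphere.local",      # dedicated service account, as recommended
    "password": "********",
}

resp = requests.post(
    f"{CLUSTER}/public/protectionSources/register",  # assumed endpoint path
    json=source,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    verify=False,                                    # lab clusters often use self-signed certs
)
resp.raise_for_status()
print("Registered source:", resp.json())
```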
"And what do you support from a virtualization perspective?" VMware today. There are more things that are obviously on the roadmap, which I'll defer to others, but for right now it's VMware, which makes up a big chunk of the data center workloads. From here you can now see exactly what's in the infrastructure: I can see what's protected and what's unprotected, which, if you remember back on the main screen, is about the same as the vulnerabilities list there, so I can see right off the bat whether I've got gaps. By going into my Sources I can drill a little deeper into what's in there. You can see that I added this particular vCenter instance into the cluster back in February, but you'll also notice that we're refreshing it on a regular basis, so we can take action on what's actually happening in the infrastructure. If you add something in and a policy covers it (which I'll show you in just a moment), that takes the question of "are you protecting everything?" off the table.

Let's walk through that. First, I'm going to go to Protection, and I'll have the option of protecting either virtual machines or a view, say if I have a file server on here, and it's all policy-driven, so you'll see the same workflow whether I'm protecting a bunch of VMs or a file server. The workflow is really three components. One: what am I protecting? I've imported everything from VMware and now have visibility into the entire infrastructure, so I can see everything that's in here. As you would expect, I can pick and choose individual virtual machines to grab; that's kind of a pain to manage, but you can do it. I can also select higher-level objects: maybe I want everything on that particular network, or the entire cluster, or the entire data center. I can select and protect one VM just as easily as I can protect a hundred VMs. Once you've selected what you want to protect, especially with the high-level objects, I have the option to select automatic protection. That's active monitoring of what the job should cover: if I'm protecting the entire data center, everything that gets added there will be imported into this job and will take on the policy I'm setting up.

Once I've selected what I want to protect, I need to select how I want it protected, so now we select a protection policy. To me this is an SLA: I can set an SLA and apply it to multiple different jobs inside the system. Out of the box we ship Gold, Silver, and Bronze, which are the standard stuff, hourly protection, every-six-hours protection, that sort of thing, but I've also set up a couple of others in here. Let's see, I've got a "VM Windows" policy: it snaps every hour, I'm retaining that for sixty days, and I want to be alerted if the job takes longer than I expect, which is a good thing. I've also got more extended retention rules set up: maybe I want to keep the first backup of every day for a longer period of time, and then keep the first of every month for maybe a year. I'm setting this at the SLA level, selecting it at the policy level, so I can apply this policy to anything I need going forward. It's very easy to set your company standard of Gold, Silver, and Bronze and have that apply to everything going forward, while still being able to organize your jobs and schedules; you can create groups and apply policies very easily to large groups.
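A rough sketch of how a policy like the one just described (hourly snapshots, sixty-day retention, extended daily and monthly retention, an SLA alert) might be expressed through an API. The endpoint and field names are illustrative assumptions, and the extended-retention durations are example values, since the talk only says "longer" and "maybe a year":

```python
import requests

CLUSTER = "https://cohesity-cluster.example.com"
API_TOKEN = "<session-token>"

# Hypothetical representation of the "VM Windows" policy described above:
# hourly snapshots kept 60 days, the first run of each day kept longer (90 days
# here as an example), the first run of each month kept a year, and an alert
# if a run exceeds its SLA.  Field names are illustrative.
policy = {
    "name": "VM-Windows-Hourly",
    "incrementalSchedule": {"periodicity": "kHourly", "everyNHours": 1},
    "retentionDays": 60,
    "extendedRetention": [
        {"granularity": "kDay", "keepDays": 90},
        {"granularity": "kMonth", "keepDays": 365},
    ],
    "slaMinutes": 60,   # alert if a run takes longer than this
}

resp = requests.post(
    f"{CLUSTER}/public/protectionPolicies",   # assumed endpoint path
    json=policy,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    verify=False,
)
resp.raise_for_status()
print("Created policy:", resp.json()["id"])
```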
So: step one, what I want to protect; step two, how I want it protected; step three, I need to give it a name. From there, everything else is taken care of. App consistency is included, as is the ability to index everything inside the system; like Noah talked about, we're actively indexing everything on the system itself. By default, enough customers have told us they don't need the Recycle Bin indexed that we exclude some of that, but if for whatever reason you feel like indexing the Recycle Bin, all you need to do is remove it from the exclusions on this particular job and it will be indexed. It's very easy to drive all of this. Does that make sense to everybody? I see no nodding, but no shaking either, so all right, good deal.
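Putting the three steps together before moving on, a hedged sketch of what the resulting job definition might look like under the same assumed API; the object IDs, endpoint, and field names are hypothetical:

```python
import requests

CLUSTER = "https://cohesity-cluster.example.com"
API_TOKEN = "<session-token>"

# Hypothetical job tying the three steps together:
#   step 1 - what to protect (a datacenter object plus one individual VM here),
#   step 2 - how to protect it (the policy created earlier),
#   step 3 - a name.
job = {
    "name": "demo-backup",
    "policyId": "<policy-id-from-previous-step>",
    "parentSourceId": 1,          # the registered vCenter
    "sourceIds": [42, 97],        # e.g. a datacenter object and an individual VM
    "autoProtect": True,          # pick up anything added under those objects later
    "indexingExcludePaths": ["/$Recycle.Bin"],   # indexed by default, minus a few exclusions
}

resp = requests.post(
    f"{CLUSTER}/public/protectionJobs",   # assumed endpoint path
    json=job,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    verify=False,
)
resp.raise_for_status()
print("Created job:", resp.json()["id"])
```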
Let's move on to a job that's already been created and has been running for a while, so you can see what some actual workflows look like. By default we give you a 24-hour view of what's going on with the system. I can see I've had 355 runs, 354 of which were successful; there are no cancellations and no errors, but one run actually violated the SLA, meaning it took longer than I expected. We don't show that as a failure, but it did violate an SLA, so we're notified about it, and that extra level of notification is something you can configure. For me, if I'm looking for granularity, I want it there: if I've got a job that's supposed to take an hour and it's taking 58 minutes, I probably need to take some action or change some settings if that happens regularly.

If I drill into any of these jobs, let's just look at this one, "demo backup," it gives me some details about what's happened inside. This is four VMs, about 136 gig all in, and on the hourly interval for this particular job I'm only getting handed what looks like about 70 or 80 meg from these systems. Again, this is a sterile lab environment, but if you extrapolate this to your standard data center, not a lot of change happens on most VMs; maybe some logs, and your big sequential workloads, your databases, may be a bit different, but for the most part there isn't a ton of change. That means I can add a lot of granularity and really change my SLAs in an infrastructure: my RPO can be much more granular than it would traditionally be with a regular backup application. And even with that 70 or 80 meg handed over to us, by the time we actually write it down to disk after deduplication we're in the 15-to-20 meg range, which means I get very fast backups.

Backups are part of it, right? We do backups for a reason: one, to check a box, make our bosses happy, and keep the auditors away from us, but also so we can actually run a recovery. So let's take a look at a recovery use case. If you go to Recovery here, I've got some options. I can recover files or folders, where we've exposed the Elasticsearch capability, so I can basically start typing a name and it will help me find the file I'm looking for whether I know where it is or not. But I can also recover whole VMs, and that's going to be a very similar workflow, again a one-two-three-step operation. Let me walk through it. One of my customers actually had this happen, which was unfortunate for them but works out well for the story. They had a VMware environment where they were running a proof of concept for us at the time, and they got hit with a crypto virus: a bunch of their VMs all went offline at exactly the same time and they couldn't do anything about it. Their traditional backup software runs once a day, so they would have lost somewhere in the neighborhood of 18 hours' worth of information, assuming the most recent backup was good, plus however long the restore was going to take. With Cohesity they already had protection set up to run once every hour, so we had been kicking off jobs once an hour on those particular systems.

Let's emulate the same thing here. I'll use one machine, but it could be one machine or ten machines; it doesn't really make any difference. Take that Windows 7-4 VM and delete it from disk; that initiates the worst-case scenario, disaster, whatever you want to call it, and we'll walk through the recovery interface. First thing to point out: we're searching by VM or protection job name, so I can do a lot of really cool stuff here. If I type in something like "windows," and if you remember everything here is called windows, I'm going to get a lot of results: the individual VMs that have windows in the name, but also any job that has windows in it. So that demo backup job I showed you, with four VMs in it, I can simply select that box, add it to the cart, and I'm recovering four VMs just like I'm recovering one. It's exactly the same workflow, and we handle all the orchestration under the covers, so whether I'm doing this with one VM, ten VMs, or a hundred VMs, it's exactly the same workflow.

For this, I know I wanted Windows 7-4, so let's get a little more granular, because I've got a couple of different jobs that protect it; again, they don't give me expensive gear, so I have to share a lot of VMs across jobs. Let's see: we've got a run at 3:02 and one at 2:33. That seems good, let's grab this one. Now I'll walk through one thing that probably isn't standard for recovery, but we actually give you some really slick options here. In this particular instance I'm restoring everything in place, back to the original datastore, and I don't need to make any changes. But let's say, for example, the datastore is what got hosed and I don't have that datastore anymore; hitting the default operation of recovering to the same datastore isn't going to do me any good, and I need to put it somewhere else. My recovery options give me the ability to do some slick stuff: I can rename it, adding a prefix or a suffix, say if I wanted to test something and make sure it's the right thing, and I can send it to a different location, whether that's a different vCenter server, a different resource pool, or a different datastore. I have the ability to set all of those options right inside here.
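A minimal sketch of those recovery options expressed as an API request, again assuming hypothetical endpoint and field names rather than the documented interface:

```python
import requests

CLUSTER = "https://cohesity-cluster.example.com"
API_TOKEN = "<session-token>"

# Hypothetical recover-VM task.  By default everything goes back in place;
# the optional fields show the kinds of overrides described in the demo.
recover_task = {
    "name": "recover-windows7-4",
    "objects": [
        {"jobId": 7, "vmName": "windows7-4", "snapshotTime": "2016-03-20T14:33:00Z"},  # the 2:33 run
    ],
    "renameSuffix": "-recovered",     # optional rename
    "targetVcenterId": None,          # None = original vCenter
    "targetResourcePoolId": None,     # None = original resource pool
    "targetDatastoreId": 12,          # e.g. when the original datastore is gone
    "powerOn": True,
}

resp = requests.post(
    f"{CLUSTER}/public/restore/recover",   # assumed endpoint path
    json=recover_task,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    verify=False,
)
resp.raise_for_status()
print("Recovery task started:", resp.json()["id"])
```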
Now, where else might this be useful? It's also very useful for DevOps. If I want to take a copy of those ten VMs that are all integrated together, I walk through the same workflow; the recovery workflow and the orchestration for cloning are identical. The only difference is the last step, the Storage vMotion: with a recovery operation we vMotion it back, and with a cloning operation we leave it running on the Cohesity datastore in perpetuity. Because we know all of this is going to be the same, I'm going to go back to my standard options and hit Finish. In this particular instance it's 3:04 by the little clock here, and I picked a recovery point from 2:43, so the snapshot run at 2:43 is now merging together as one full backup, elevating up into the SSD tier, presenting itself via NFS over to my VMware infrastructure, and initiating a Storage vMotion after powering on that VM, all done in just a matter of seconds. I can do this for one VM or ten VMs, it doesn't make any difference. I can see the datastore being created, I can see the folders created, and all of this is done without any extra steps in the infrastructure. Does everybody get this? Does it make sense? Well, you guys are quiet today. All right, so that's protection: that's showing a backup and that's showing a recovery operation.

Now, not everybody uses our integrated data protection; some people still have their own things for whatever reason. Maybe it's a DBA who wants to do their own thing and has their own tool for Oracle; great, we can simply support that as well. That's part of our platform: the platform gives us the ability to expose different ways of seeing that data through views. On here I have the ability to create a view, in the same bucket I'm sending all my backup data to, and expose it out over either NFS or SMB so that other applications can write to us. You don't have to use only our system; you can still leverage other applications to send data to us. So we let you use this not only for a file-services use case: the same technology that lets us run file services on here also lets us be a great target, one that can scale out infinitely, for any application to simply write to. There's huge power in the platform in and of itself, even if you're not leveraging our application for everything.
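A hedged sketch of creating such a view over NFS and SMB so an external application can write to the cluster; the endpoint, field names, and view name are assumptions for illustration:

```python
import requests

CLUSTER = "https://cohesity-cluster.example.com"
API_TOKEN = "<session-token>"

# Hypothetical view exposed over NFS and SMB so an external backup application
# (or plain file-services clients) can write to the cluster directly.
view = {
    "name": "oracle-backup-target",
    "protocols": ["kNFS", "kSMB"],
    "qosPrincipal": "Backup Target High",   # one of the QoS principals discussed later
    "storageDomain": "DefaultStorageDomain",
}

resp = requests.post(
    f"{CLUSTER}/public/views",   # assumed endpoint path
    json=view,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    verify=False,
)
resp.raise_for_status()
print("View created; NFS mount path:", resp.json().get("nfsMountPath"))
```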
"Let me ask you a question. I think the answer is no, but are they silos within themselves? I have the NFS or SMB share and I put stuff in there, and that's one world, and then I have my backup stuff and that's another world. Or can I take the stuff that I backed up and somehow view the files within my VM as an NFS or SMB share?" I think I understand your question; let me try to say it back to you. If you back something up on Cohesity, you have the ability to run individual granular recoveries of those files, but it doesn't expose all the files inside the VM via SMB. "Okay. That would probably violate all sorts of security stuff, but there are products that do it. But you could run a test in that environment: spin up that VM and then export the file system within that VM through SMB, and then that becomes an SMB server." Right, we won't expose it directly; we just see the VM's files. But we do crack open that VM and index that stuff, so we do have that ability; we just don't expose it over NFS or SMB. "And that VM you said I can run a test of, where is that VM actually running?" The compute is in whatever your infrastructure is. Today we don't run the compute inside the box for the VM; tomorrow we may even run it within the box, and in fact we probably will. "Then you might actually qualify for the term you've been holding off on." Today we're calling it hyperconverged secondary storage; we may qualify for that. All right, good deal. Anybody else? Cool.

So let's keep rolling through here. We've gone through data protection, and really, truly, in just a couple of minutes we showed what it takes to add VMware, which is maybe 30 seconds of setup, and we've set up data protection jobs that run very quickly, which is always a good thing inside your infrastructure. We've also shown that we can leverage the platform to be extensible, so other applications can write to us. Real quick, I'll walk through one other thing: cloning operations. You'll note this is exactly the same setup we saw before. Whether I'm doing this for one VM or ten VMs, I can take those same four VMs and add them to my little shopping cart; again, I love the shopping cart mentality, because I can add as much as I want in here and clone it out in one piece. Think of it as: I want to test one really complicated patch, but I don't necessarily want to take up all that space on my production infrastructure and do all the cloning inside VMware. I can just take a copy of the backup and make it available through this workflow. I can choose to rename it, adding "test1" to the front of the name; I can send it over to a different resource pool so I don't have applications running into each other inside my infrastructure; I can choose how I want that data to be visible on the system; and maybe I even put it on a completely different network inside my infrastructure so it's fenced off while I run my test. I have the ability to choose where that data goes and whether I even want to power it on.

As I showed you, this is exactly the same workflow we use for recovery; the difference is the last step in the process. The "relocate virtual machine" step that we do inside a recovery operation, putting it back on a datastore, doesn't happen for cloning: for anything you clone through our interface, we just leverage Cohesity as the datastore. It mounts via NFS and is presented as available, and whenever you're done with it, all you need to do is tear down the clone. We'll actually show that here. "Is that process instantaneous? Are you presenting a virtual view, or are you actually copying data?" We're not copying anything. We take an instant clone on the system itself and present that logical backup, which is really just the change-block-tracking bits that were merged together. I create that, select it, and now you get everything inside a folder here, and I have the ability to view what's in that particular workflow.
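Since the clone workflow is described as identical to recovery except for the last step, a clone request under the same assumed API would differ mainly in leaving the VMs on the Cohesity-presented NFS datastore; everything below remains illustrative:

```python
import requests

CLUSTER = "https://cohesity-cluster.example.com"
API_TOKEN = "<session-token>"

# Hypothetical clone task: same selection and placement options as a recovery,
# but the VMs stay on the Cohesity-backed NFS datastore (no Storage vMotion).
clone_task = {
    "name": "clone-patch-test",
    "objects": [{"jobId": 7, "vmName": name}
                for name in ("windows7-1", "windows7-2", "windows7-3", "windows7-4")],
    "renamePrefix": "test1-",        # keep clones distinguishable from production
    "targetResourcePoolId": 20,      # isolate the test from production workloads
    "targetNetworkId": 5,            # or drop them on a fenced-off test network
    "powerOn": True,
}

resp = requests.post(
    f"{CLUSTER}/public/restore/clone",   # assumed endpoint path
    json=clone_task,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    verify=False,
)
resp.raise_for_status()
print("Clone task started:", resp.json()["id"])
```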
Then, to tear all of that down, all I need to do is go back to my clone tasks, choose any of the clones I'm working with, and tear it down; you can see that takes a little bit of time. It goes through the clone task, deletes the VMs, deletes the datastore if nothing else is on it and it doesn't need to be there anymore, and will even power things off for you. It does all the orchestration to stand everything up and all the orchestration to tear it all down, with just a couple of clicks. "And what about leveraging that from a restore perspective? Instead of doing a full restore, just spinning up the VM?" By default it does that. We actually present the VM right on Cohesity instantly, so one VM, ten VMs, whatever: it boots up right on Cohesity storage as an NFS datastore, and while that's happening we initiate the last step and Storage vMotion it. So instead of having to wait for all that data to get transferred over, the VM boots right off Cohesity. And think about this: if I'm leveraging Cohesity for DevOps, that means the bits and bytes that boot most operating systems are in my SSD layer already, so when I do my restore operation, everything I need to boot is probably already in SSD, which means I can boot very quickly for that recovery. Then it's simply a matter of migrating the data back to whatever datastore; by default it goes back to the same one, but if that's the thing you had a problem with, it will redirect wherever you want based on what you set up. The same applies if you look at this from a disaster recovery angle: I want it to go to a different vCenter, or to a different datastore, and you can set all of that up right in the workflow.

Let me see here, I think we're good as far as the workflows on all of this. "We didn't really see any QoS stuff." Sure, I've got a moment, so very quickly: from the data protection perspective we don't expose anything for QoS, we handle all of that under the covers, because it's data coming in. The view is where you're actually going to manage your QoS. You can see I've got a couple of different things in here, and if I dive in and edit one I can show you what the QoS policies are. I've got Test and Dev High and Test and Dev Low, which let me set priority on what I'm doing for my DevOps workflows, and I also have Backup Target High and Backup Target Low, so I can prioritize how I want that SSD tier to work and how I want the I/O handled. All of that is tunable in here; really, all we're doing is giving you some knobs you can turn to prioritize things internally. With Test and Dev High versus Test and Dev Low it's largely a matter of how much of that data we want to send through the SSD layer, and with the backup targets it's whether we want to buffer any of it or send the data straight down to disk. By default, even without any of these set up, we still monitor the data coming through, and with a pattern recognizer we can determine whether it's sequential in nature, where you can drop it straight down to disk at about the same speed, or more random in nature, where the SSD is really going to get you that benefit.
I just want to add to that. These are what we call principals in the system; QoS is all about principals. Each principal has a priority and a fair share, and those are some predefined principals. What our customers can do is define custom principals if they're not happy with the predefined ones, attach priorities and fair shares to those, and then use them to say, okay, this particular view, or maybe this particular backup, gets associated to that principal, and all through the system we're going to observe the QoS policy for it. That's the power of the system. Good deal.
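To make the "priority plus fair share" idea concrete, here is a purely conceptual sketch of how principals could translate into a weighted split of an I/O budget; this only illustrates the idea and is not Cohesity's actual scheduler:

```python
from dataclasses import dataclass

@dataclass
class Principal:
    name: str
    priority: int    # higher priority is served first under contention
    fair_share: int  # relative weight among principals at the same priority

# The predefined principals mentioned in the talk, with made-up weights.
principals = [
    Principal("Test and Dev High",  priority=2, fair_share=4),
    Principal("Test and Dev Low",   priority=2, fair_share=1),
    Principal("Backup Target High", priority=1, fair_share=3),
    Principal("Backup Target Low",  priority=1, fair_share=1),
]

def split_iops(total_iops, group):
    """Divide an I/O budget among principals of one priority tier by fair share."""
    weight_sum = sum(p.fair_share for p in group)
    return {p.name: total_iops * p.fair_share // weight_sum for p in group}

# Example: 10,000 IOPS available to the test/dev priority tier.
tier = [p for p in principals if p.priority == 2]
print(split_iops(10_000, tier))   # {'Test and Dev High': 8000, 'Test and Dev Low': 2000}
```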
All right, I think I'm good; I think we're going to pass it over now to Nick to talk a little bit about cloud. "Can I just ask one question before you finish up? You're obviously gathering a lot of really interesting information and surfacing it so customers can analyze what they're doing on their systems. How much of that are you sending home, from a dial-home perspective, so you can then utilize it for further development of the systems and so forth?" I know we send out some of the base-level statistics more than anything else; Sunil, I think you can answer that. We gather a lot of the information and pass it out to a centralized repository at home, which we then use to monitor customer environments, so we can pre-alert them to support issues before they happen, as well as use that information to build better products going forward. Does that answer the question? "Yeah, I mean, what areas specifically? We see the hardware faults and events and things like that, but is there anything performance-related?" It's all across. For example, we gather information on how long a backup job takes and when it runs, and if it fails, we can tell you that this backup job running regularly at 9:00 p.m. on Tuesday is struggling because your primary environment is constrained at that time, so it could be better to move it to, say, 8:00. We don't work that proactively today, but we have the information and can turn that on going forward. We can also tell you how you should set up or modify your environment, or which disk subsystems are being hit the most. If we believe a disk is going to crash, we can raise an alert, and if a storage subsystem is filling up, we can alert you before it fills; with all of that information, we often find out about problems before our customers do. "And this dial-home functionality can be turned on or off by the customer?" The customers can turn it on on demand when they need it, so it's very flexible.

"If a customer later on wants to upgrade the system to your latest code release, how do they go about doing that?" We will tell them that we have a release, we'll agree on a timeline for when to do it, and we don't even have to go there; it's just done remotely. "You guys do it for them?" We can do it for them. It's very automated, and it's a non-disruptive upgrade, so we don't have to bring the system down. Even though it's a distributed system, and even though it can be a large system, nodes are upgraded incrementally and remotely, and it's very automated: effectively you just point the system at the new software and the system does it by itself. "How long does it typically take before you get, say, an 80 percent adoption rate on new releases?" We're a young company. These days we have a new release every two or three months, and it comes with a host of new features. As the company matures we will tend to keep most of our customers on an older release, put the customers who really want the features we just released on the new release, and once they're happy with it, upgrade everyone. That timespan will expand as we get more and more mature, but these days it's quicker because customers are hungry for more features. "Is that typically weeks, or a couple of months?" A couple of months. By the way, that was Sunil; he's a senior director of product management.

"Could we do a question from Twitter before we continue? Someone is asking whether there is any way to back up a physical machine." Yes, we are building that functionality, and we're doing it in two ways. One is a project we call Puppeteer, where we're going to orchestrate calling an external piece of software that can back you up, and that's actually going to come very soon. We're also building support for backing up native Windows and Linux platforms; we're working on that, but we don't have it today. "Thank you." I'll even point out that if you have an application that can send data to us today, you can back up those environments and simply send that data to us; if you've got product A, you can send that data to us and we can be a central repository for everything. "Yeah, but product A sends you a tarball, and now I can't use your analytics." Correct, so we would say to that backup software: don't send us a compressed tarball, let us do the dedupe ourselves. "But I can't tell NetBackup not to send you a tarball, so that would be a limitation." So just send us whatever it writes.
"Then it won't dedupe as well or whatever, and the user loses out anyway, because it's sending you a tape image. And that is an interesting point. You listed CommVault and Veritas, and some backup software does have the ability to be told not to do that, but not Veritas; NetBackup can't send those things to us that way. With Veeam or CommVault you can turn off deduplication and compression, but it's not the compression and deduplication that's the problem, it's the creation of aggregate files. When Veeam backs up a VM it sends you a VMDK image; they don't send you files. NetBackup sends you a tarball." Yes. "So if you're going to do analytics on the files, then for every backup application you need to un-tarball them." Well, the first question is whether you can store them efficiently, and the answer is that even if they send a tarball, we can potentially deduplicate it. "Right, I can turn the dedupe off in NetBackup, so I can send you a tarball and you'll store it efficiently, but that's not what makes you cool." Once you have a tarball, every backup software has the ability to restore, so what you can do is restore onto us and then kick off the analytics; that's the extra step you'd have to do with those pieces. "That would take forever. Calling that an extra step... you've already stored the data once, and now it has to come across the network to the media server and then back to you as an SMB target, file by file." That's where we can do it through our analytics workbench, so you can run that software on us: restore onto us and then run the analytics. "But you have to restore the data. If I've backed up to you with NetBackup and there's a tarball on you, then in order to restore to you, the data path is that the NetBackup media server reads that tarball and writes the individual files back to you as an SMB target." Yes. "That's going to take forever." I agree with you; unless we understand the format of the backup software we won't be able to do it, and that's the point. "And you're not Data Domain." Correct. "All Data Domain needed to do to understand that backup was to figure out how they put the timestamps and other crazy stuff in the middle that screws up the dedupe. You guys would have to have a deeper understanding of each backup application's tarball format." Only in order to do the analytics. "To do the analytics, yes. But if I can't do the analytics, then it's a scale-out Data Domain, which is a valuable thing, but 20 percent of the real value." Yeah, you get less value unless you can use it fully. "I see the backup-software route as a stopgap; as soon as you can do Windows and Linux agents, I don't care." Right, and it allows us to peel things off: say we don't support Windows and Linux today, so you can use your existing backup software for that and our backup software for the rest, and as we support those native machines we can take on the rest. But your point is well taken.

"Mohit, one question from the Twitter feed. Somebody's asking: can customers upgrade their Cohesity environment without contacting support?"
The answer is yes. We would still recommend they inform support, so we can hand-hold them just in case something happens, but the whole way this system was built was to make everything automated; it's not like the support person is entering tons of commands, he's just looking at the system. So the answer is yes: they just need to get our new software from somewhere and do the necessary steps themselves, which is literally just pointing the system at that software, and that's it. It's really just a matter of going to admin, choosing upgrade, pointing it at the software, and you're good. "So it is possible, but it's not ideal." Yeah. Edgar, who heads our support, is saying that if you go to the Support Portal, customers can get access to all the pieces of software that we've released, so all they have to do is find it, and they can self-upgrade if they want to.

"On that point, then, can you downgrade your software?" We do not support downgrades today; based on feedback we might. Actually, the right statement is that we don't recommend it, but as long as we have not changed the format in any way, you can. It's just two pieces of software, two versions, and you can go from X to Y or Y to X. Now, some particular versions may not support going both ways: say we do a data format change, we may have changes that say you can go from format A to format B but we don't support going back, so some pairs of releases may have that. In general, though, you can; in fact, we do it all the time. "Is there anything built into the platform that does the equivalent of a snapshot of the current metadata and so forth before the upgrade happens, just in case it fails halfway through or whatever, to make it easy to roll back?" There are things put in there; we do intent logging and things that keep an upgrade from being left half-done, so there are ways to prevent things from going sideways. But I think this discussion is too detailed; it's out of scope for this video, and we can take it offline.

"Okay, so let me just clarify something Howard said. The only way you have to back up regular hosts today is VMware; you said you don't have Windows agents, so we need to use agents and external backup software, not your backup software." That's the right answer. We will be adding physical Windows, SQL Server, and Oracle adapters in the next three to five months; SQL Server is coming very, very soon, and the other ones will be coming before summer.
Info
Channel: Tech Field Day
Views: 9,073
Rating: 4.5 out of 5
Keywords: Tech Field Day, Storage Field Day, Storage Field Day 9, Cohesity
Id: 2NGFQQ5K0cQ
Length: 35min 28sec (2128 seconds)
Published: Sun Mar 20 2016