NetApp ONTAP FlexGroup Volumes (Free Webinar)

Captions
Thank you for joining the webinar on NetApp ONTAP FlexGroup volumes. With me I have Mark, who will be presenting the webinar as our expert today. Mark has over 30 years of experience in the IT industry, and for the past 14 years he has been teaching all sorts of NetApp classes. Today he will be teaching us about FlexGroup volumes. Over to you, Mark.

Thank you very much. Welcome to a special presentation on FlexGroup volumes; glad you're joining us here at QuickStart. For this presentation we have a lot of different instructors who work at QuickStart and will help you with whatever your particular interests happen to be. As far as I'm concerned, I primarily do the NetApp material, so I'm looking after the different NetApp classes and presenting those classes, and if you're interested in certification exams I can give you information about that too. I've been teaching NetApp classes since 2006, so it's been quite a while, and I've been involved with NetApp equipment since the late 90s, so I've seen the evolution over the years and I have a history that puts things into context. One of the things I really want to do today is look at these new types of volumes in the context of how they developed and what particular problems they were trying to solve, because that will help you match them to use cases you might have. When you're done with this class you should be able to understand the architecture of a FlexGroup volume, what its purpose is, and how it works, and that will let you see how it can improve performance, particularly for certain types of workloads; it's not necessarily for everything yet. It also tends to simplify things, because as we go through this you're going to see that the nature of a FlexGroup volume is a single gigantic pool of space, and that makes managing the storage easier than if it were subdivided into smaller, fixed-size buckets of space. We will see how all of this works together.

Now, when I first started to work with ONTAP, this was before ONTAP 7, back in the ONTAP 6 days, and volumes in those days consisted of physical disks. If I were to create a volume, I would have done a volume create and specified how many disks were to be included in that volume, and that's how it was for most of the early history of ONTAP. In ONTAP 7 this changed significantly: the physical and the logical layers were separated. The physical layer became what we now call an aggregate, made up of disks organized into RAID groups, and then we would create flexible volumes (FlexVols) that got their space from the aggregate. Separating the physical and logical layers gave us a lot of flexibility, and that's why they were called flexible volumes: we could both grow and shrink them, and that could be done dynamically while they were actually in use. So that was the next step.

That separation also became part of the transition when we started to move from the old physical, node-based systems of ONTAP 6 and 7, through the 7-Mode releases of ONTAP 8, into clustered ONTAP. Flexible volumes made that transition into clustered ONTAP, but now they were part of something that we call a storage virtual machine (SVM).
Everybody today is used to the idea of virtual machines, and a storage virtual machine is just what it sounds like: a virtual machine hosted in an ONTAP cluster, which in this sense can be thought of as something like a hypervisor. The ONTAP cluster hosts these storage virtual machines, which expose flexible volumes over NFS or CIFS, or LUNs if we're doing iSCSI or Fibre Channel.

Now, one of the things that has been happening historically is that the amount of space we need to manage, or that applications are consuming, is increasing very rapidly. With the flexible volume architecture we certainly had a lot more flexibility than we had with the old traditional volumes, but there were still limits to how large they could grow, and they were still organized into fixed units or buckets of space. The next generation took us to Infinite Volumes. Infinite Volumes aren't really infinite, but they can grow to be extremely large: 20 petabytes is the maximum, versus a hundred terabytes for a FlexVol. That's a huge increase in available space, and that space is organized as a single bucket, so I don't need to worry about carving up directory structures or any of the rest of that; it gives me a lot of flexibility. This was the first effort to solve the problem of dealing with very large pools of space. It was very successful for certain kinds of workloads but problematic for others, and so the next step in the evolution was something that we now call a FlexGroup volume. That's where we're going to end up on our journey as we see how these various architectures evolved.

So, traditional volumes reflected the physical storage. Traditional volumes were made up of disks organized into RAID groups, and if I needed more space after I had created a traditional volume I could add disks to it, but it was very problematic, almost impossible, to reduce the space. We could grow these volumes, but we couldn't shrink them. You can picture the architecture: a physical storage system, a controller with shelves of disks attached, and those disks formed into a volume. Underneath, the disks are organized into RAID groups: some of the disks are data disks for storing our data, and some are parity drives to protect that data. A volume could have multiple RAID groups in it depending on its size, and once you created the volume you could grow it but you couldn't shrink it. And of course you needed some spare drives on the system to replace a failed disk immediately and start the rebuild if a drive were to fail, which inevitably they do. This created a lot of issues, and the problems got worse and worse as disk sizes got larger.

So in ONTAP 7, NetApp introduced the next stage in the evolution of volumes, which was the flexible volume. Now the physical and the logical space are separated: the logical space is called a flexible volume, and the physical space is an aggregate. I create an aggregate made up of disks organized into RAID groups, and then I can place flexible volumes inside those aggregates, and those I can grow and shrink. So we end up with a physical layer down below, the aggregate made up of disks in RAID groups, and the purpose of the aggregate is to support the flexible volumes that live inside it.
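To make that concrete, here is a minimal sketch of the aggregate-plus-FlexVol workflow in the clustered ONTAP CLI. The names cluster1-01, aggr1, svm1 and vol1 are placeholders, and details such as disk counts and sizes depend on your hardware and ONTAP release.

    # Build the physical layer: an aggregate of disks organized into RAID groups
    storage aggregate create -aggregate aggr1 -node cluster1-01 -diskcount 24 -raidtype raid_dp

    # Carve a logical FlexVol out of that aggregate and mount it into the namespace
    volume create -vserver svm1 -volume vol1 -aggregate aggr1 -size 10TB -junction-path /vol1

    # Grow (or shrink) the FlexVol later, non-disruptively, by changing its size
    volume modify -vserver svm1 -volume vol1 -size 20TB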
Now, in ONTAP 7 we were still focused on physical nodes, so I would have had an aggregate made up of disks that belonged to a specific node, and then I would create volumes on the aggregates in that node. All the volumes on a particular node were mounted under a kind of virtual mount point called /vol, and so the set of volumes on that node was, in a sense, its file system; you could think of it as the namespace of that node. The path to every volume was /vol followed by the volume name, and that's what we had for a very long time. That was the 7-Mode architecture. When clustered ONTAP came out, when we moved into the 8.x series, you had a choice: you could boot up into what was called 7-Mode, in which case it looked pretty much like Data ONTAP 7, or you could boot up into what was called Cluster-Mode.

In the 7-Mode world there is one namespace per controller, and I can grow and shrink my volumes inside the aggregates that belong to that particular controller. Volumes, though, are limited by the aggregate: they get their space and their performance from the single aggregate they live in, and they have a maximum size. How large an individual volume could grow depended on which version of ONTAP you had and how much memory your controller had, but toward the end it generally topped out at about a hundred terabytes. If I needed more space than that I would create additional volumes and separate my data onto those individual volumes, each of which could not exceed a hundred terabytes, which was pretty big in those days, so not nearly as big a problem then. ONTAP 7 came out around 2005, so that gives you a timeframe; clustered ONTAP came, hard for me to remember exactly, I would say somewhere around 2008.

With clustered ONTAP, things change substantially. Now, instead of focusing on individual nodes, I have a cluster; in this case picture a cluster with four nodes, and within that cluster I have aggregates made up of disks belonging to those nodes. Now I create a storage virtual machine, and that SVM owns volumes which can be placed on any node in the cluster, on any aggregate in the cluster. So now I have a namespace, or file system, within my storage virtual machine that can span the entire cluster if I want it to; it's all about where I want to put the volumes. I could put them all on one node or spread them across the entire cluster. Every storage virtual machine has a root volume living in an aggregate on one of the nodes, and it will also have additional data volumes, which could potentially be placed on any aggregate on any node in the cluster.

But even though this gives us a lot of flexibility, even though my SVM can span the entire cluster, the individual volumes are still basically the same as what we had in 7-Mode: fixed-size, although I can grow and shrink them, up to that maximum of about a hundred terabytes. I could be writing data into a directory under /vol1 and potentially fill up that volume while I still have lots of space elsewhere, so as an administrator I have to be very aware of where my applications are running and provide the necessary volumes in that part of the namespace to support my space requirements.
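A quick way to keep an eye on which of those fixed-size buckets is filling up is the volume show command; a rough sketch, where svm1 is a placeholder and the exact field names can vary slightly by release:

    # List every volume in the SVM, the aggregate it lives in, and how full it is
    volume show -vserver svm1 -fields aggregate,size,used,percent-used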
We do have a lot more flexibility here: if my aggregate gets full I can move a volume to another aggregate, which is a big improvement over where we were. But still, at this stage, volumes live inside an aggregate and get their performance and their space from that particular aggregate, and our maximum volume size remains the same: we cannot have a single bucket of space larger than about a hundred terabytes.

The next iteration came out, if memory serves me right, around 8.2, maybe 8.3, but right in there, and it was called an Infinite Volume. This is a radically new architecture: my infinite volume is made up of multiple flexible volumes that are woven together so that they look like a single pool of space. The infinite volume can span aggregates, so we have a giant pool of space that is scalable. It looks like this: say I have an SVM, and that SVM has an infinite volume. The volume had to be dedicated to that SVM, so there was one SVM with one dedicated infinite volume, and it had its various data constituents, sometimes called member volumes, which live in aggregates, but the whole thing looks like one giant pool of space that can span multiple aggregates. What happens is that as the application writes, individual files are dropped into different locations; the client doesn't really know where a file physically is. Up at the top there is a kind of master volume which holds a directory that knows where everything is, so as we write a file into a particular member volume, or data constituent volume, that directory is updated to point to it.

For certain kinds of workloads this worked really well. It would scale up, and if you were just reading files from these member volumes it was quite fast. The problem was that everything had to go through that one central point, and that point becomes a bottleneck. If I have a very metadata-intensive application, and there are applications that create millions of little files, creating directories for those files very rapidly and dynamically, I don't really know how to fit an additional directory structure underneath that; what I need is a big pool of space. But if the workload is metadata-intensive, manipulating all those directories and inodes, then that central point is going to become a bottleneck.

So the solution was to move on from the Infinite Volume, which was introduced in 8.2 or 8.3 and was supported all the way through 9.5. That architecture allowed us to have volumes that could span multiple aggregates on different nodes inside the cluster, and it could scale out to 20 petabytes. The way that works is that the data constituent volumes can each be up to a hundred terabytes in size, because they're made up of flexible volumes, and I can have up to 200 of them; multiply 100 terabytes by 200 and you get 20 petabytes, which is a huge amount of space. But another issue created by this architecture is that, because all of my directory structure was located in that one namespace volume, the maximum number of files supported was limited to what a single volume could handle. That put us at basically a two billion file limit, and especially if you had one of
these volumes with lots of really small files, that became another issue with the architecture.

So that architecture is replaced with what we today call a FlexGroup volume. It is also made up of flexible volumes that are woven together, and it is scalable from a space perspective, but now my metadata operations are distributed across all the member volumes, so it scales from a metadata perspective as well. We no longer have that single control point, that bottleneck; the metadata is distributed across all the data constituent volumes, and there's something called the remote access layer which takes files as they come in and decides where to distribute them across the data constituents. The algorithms it uses to find the best place for the data have of course evolved with the various versions of ONTAP; problems have been identified and fixed, and as you go through one release after another you see adjustments that make this more and more flexible and more and more powerful. Just as it was with Infinite Volumes, a specific file lives in a specific data constituent; files do not span the data constituents. You have this giant pool of space that looks like a single volume, but it's made up of components called member volumes, or data constituent volumes, and files live inside those. Now, because of what's been done with the metadata, metadata operations can be done in parallel, and they scale as the volume grows across multiple aggregates and multiple nodes. The file count scales too: we can now have up to 400 billion files in a single FlexGroup volume, which can span aggregates and has that same 20 petabyte maximum size across 200 constituents. So that's where we are now: the ingest, or remote access, layer takes those requests and decides where to put the data.

One of the things that happens when something like this is introduced into the ONTAP ecosystem is that the ecosystem is complex; there are lots of different features and all kinds of interactions between those features, so a new volume type generally starts out supporting only a subset of the other capabilities that are part of ONTAP. FlexGroup volumes were first introduced in ONTAP 9.0, and this wasn't even a GA introduction; it wasn't generally available, but certain customers were allowed to start experimenting with FlexGroup volumes. The features supported in 9.0 included the NFS version 3 protocol; a lot of times that is the first protocol to be supported, partly because it's a simpler protocol. We could also take snapshots. When you take a snapshot of a flexible volume, you freeze the state of that file system at that moment in time; when you take a snapshot of a FlexGroup volume, it has to simultaneously take an image of all the member volumes at that moment in time, and to be successful it has to succeed across all of the member volumes. For most of the history of ONTAP the maximum number of snapshots you could have on a particular volume has been 255; in ONTAP 9.4 that was increased to 1,023, but FlexGroup volumes are still limited to 255. We can do SnapRestore, rolling back a FlexGroup, and it supports hybrid aggregates.
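A minimal sketch of what that looks like from the CLI; svm1 and fg1 are placeholder names, and the same commands apply to FlexVols, the difference being that for a FlexGroup ONTAP coordinates the snapshot across every member volume:

    # Take a point-in-time snapshot of the whole FlexGroup
    volume snapshot create -vserver svm1 -volume fg1 -snapshot pre_change

    # List the snapshots that exist on the volume
    volume snapshot show -vserver svm1 -volume fg1

    # Roll the FlexGroup back to that snapshot (SnapRestore)
    volume snapshot restore -vserver svm1 -volume fg1 -snapshot pre_change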
When this rolled out there were a lot of Flash Pool aggregates out there, and these are hybrids with spinning drives and flash drives in the same aggregate, one of the things we talk about in the admin class; these were supported with FlexGroup volumes in 9.0. You could also move the individual data constituents around: if I needed to move DC5 over to another aggregate, I was able to do that, so I can physically rearrange these components. It also supported the latest RAID type in this timeframe, which was RAID-TEC. And whereas with Infinite Volumes I always had to dedicate my storage virtual machine, and the only space it had was that infinite volume, with FlexGroup volumes I could add one of these to an SVM that already had classic FlexVols, so I could mix the different volume types in one SVM.

Now, the first release of FlexGroups that was generally available to the NetApp customer base was 9.1, and you can see that a number of features were added. This implementation added support for Windows shares, so SMB versions 2.1 and 3.x were supported as of 9.1 release candidate 2. It was now integrated into Unified Manager, and a lot of the storage efficiency features we particularly associate with flash drives, as they were moving into our environments, are supported here too: adaptive inline compression, inline deduplication, thin provisioning, all-flash support; all of those were available in the 9.1 implementation of FlexGroups. And I could replicate a FlexGroup volume with SnapMirror, so now we could protect this space.

The rollout continues in 9.2: aggregate-level (cross-volume) inline deduplication support is added for FlexGroup volumes, and we also add support for volume-level encryption. By NetApp standards this is a fairly light update, but one by one, features that are available on flexible volumes are being added to the FlexGroup line. 9.3 is a big release: support for SnapVault was added for FlexGroup volumes; we can create a special kind of directory called a qtree (qtrees did not exist originally in FlexGroups, so they come in with 9.3); we can set up automated deduplication schedules; SnapMirror is now integrated using the version-independent SnapMirror technology, so we're replicating logical blocks instead of physical blocks; and you can see some of the other features added here, like antivirus scanning, which is very important in the Windows world. Just a lot happening here, and performance enhancements come in as well as you go through these releases. Generally speaking it's a good idea to try to stay up to date with releases, because there are big improvements from a functionality perspective, and sometimes from a performance perspective, that you probably want to take advantage of.

This goes on into 9.4, where the features are expanded once more: they introduce FPolicy and file auditing, very important features for the Windows world especially, and big performance improvements come in 9.4 as well. Just by doing an OS upgrade, no new hardware, a pure software upgrade, you can get a pretty big jump in performance on the same physical hardware. 9.5 was the first version of ONTAP that allowed us to put FlexGroup data constituents on a FabricPool aggregate.
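Picking up the SnapMirror support mentioned above, a rough sketch of protecting a FlexGroup with an asynchronous mirror; svm1, svm2, fg1, fg1_dr and the aggregate names are placeholders, and the destination geometry and policy details depend on your ONTAP version:

    # Create a data-protection FlexGroup on the destination SVM
    volume create -vserver svm2 -volume fg1_dr -aggr-list aggr3,aggr4 -aggr-list-multiplier 4 -size 400TB -type DP

    # Create and initialize the mirror relationship
    snapmirror create -source-path svm1:fg1 -destination-path svm2:fg1_dr -type XDP -policy MirrorAllSnapshots
    snapmirror initialize -destination-path svm2:fg1_dr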
So now I could have old, infrequently accessed data automatically start to move off to an external object store, something we had been doing with flexible volumes, but here it first becomes available for FlexGroups with FabricPool in 9.5, along with some other features and more performance enhancements. 9.6 is a pretty big release: with 9.6 we get support for continuously available shares with SMB version 3, for Hyper-V and SQL Server instances, for the first time. Now I can put a FlexGroup volume into a MetroCluster and replicate it synchronously across two different locations. We have support for volume shrink and for aggregate-level encryption. And now we can start integrating into the cloud, so I can create a FlexGroup volume that's running in Azure or in AWS. Again, the underlying architecture stays the same, but we keep building on it, refining it, adding new features and capabilities with each version of ONTAP.

And finally 9.7, which is our current release, added the ability to create clones of a FlexGroup volume. Now I can make a writable copy of this gigantic FlexGroup volume without actually having to physically copy the space; it's a virtual copy. Particularly in development environments this can be a really useful feature. We've had it with flexible volumes for a long time, going all the way back to ONTAP 7, but now in 9.7 we have the ability to do this with FlexGroup volumes. We also added support for NFS versions 4 and 4.1, and a really cool feature is the ability to convert a flexible volume into a FlexGroup volume with a single command. I can do this in place, without actually having to copy anything, which is really nice. Say you have a volume that's getting close to the maximum size and you need more space: now I can convert it into a FlexGroup volume and add constituents to that FlexGroup, and I don't have to copy the data. A lot of the workloads typically used with FlexGroups have lots and lots of tiny files, and making copies of those is really inefficient and can take a long time, so this is a fairly big deal among these features. You can also see, with this feature, that they're setting the stage to use a FlexGroup volume as a VMware datastore; it's currently not recommended, but you can see them preparing for that.

The nature of the architecture means it is best used with particular kinds of workloads, so it's important to understand what kind of workload you are going to be supporting before you choose whether or not to use a FlexGroup volume. Some of the most ideal workloads are the very metadata-intensive ones: microprocessor simulations, EDA, artificial intelligence, machine learning, huge software builds where you're trying to rebuild something like the Linux kernel and you've got tens of thousands of files. I used to work in the energy industry and I'm somewhat familiar with seismic data processing, which is a great use case, as are big archives of data, big data, and unstructured NAS data; all of those are good applications for FlexGroup volumes. One of the big advantages of FlexGroups is that by having all of these data constituent volumes we can parallelize our workloads, so more operations can be in flight at a given time, and that is going to help performance.
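A minimal sketch of the 9.7 FlexClone capability mentioned above, making a writable, space-efficient copy of a FlexGroup; svm1, fg1 and the clone name are placeholders:

    # Create a writable clone of the FlexGroup without physically copying the data
    volume clone create -vserver svm1 -flexclone fg1_clone -parent-volume fg1 -junction-path /fg1_clone

    # Later, if the clone needs to become independent of its parent, split it off
    volume clone split start -vserver svm1 -flexclone fg1_clone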
For these kinds of workloads NetApp has published a lot of benchmarks, and I pulled this one from the FlexGroup technical report; if you go onto Google and search for NetApp TR-4557 there's a whole bunch of benchmarks there. Here you see an example doing a Linux kernel compile. All of those files would easily fit into a flexible volume, so we really don't need the architecture from a space perspective, but you can see how long it takes, because a lot of the accesses into that individual volume may be single threaded. Once we put it into a FlexGroup you can see how the times shrink; we get a lot more performance because we're parallelizing that work, sometimes even on the same physical hardware you see really significant differences in performance.

Now, it's not necessarily ready for every kind of workload that's out there. Generally speaking, the requirement is that the files we're dealing with need to be small relative to the size of the data constituent volumes. The ingest engine that's writing all that data into the data constituents is trying to fill them out in parallel; the idea is that it streams the data across all those data constituents so that, ideally, they all fill up at about the same moment. When we create a file we don't necessarily know how large that file is going to be. Suppose I create a log file and we start writing into it; that file is going to just grow and grow, and if that file were to fill up one of the data constituents, and remember files do not span data constituents, then the whole FlexGroup volume would report full at that moment, which would be pretty bad. With each version of ONTAP they find new ways to optimize the algorithms that decide where to put the data. Originally the decision was primarily focused on how much space was available, so if one data constituent had more free space than another, more writes would be sent in that direction. Then they started to take into consideration the number of inodes, which limits how many files you can have, so the inode count is tracked as well, because if you ran out of inodes you couldn't create a file in that constituent any more, and it would appear that the whole FlexGroup had filled up. And then they came up with something called elastic sizing, which is really interesting: if a data constituent fills up, ONTAP will pause the writes that are coming into it, look at some of the other data constituents, steal some of their space, add it to the constituent that filled up, and then release those writes so they complete into that same data constituent, now using space taken from the other members. So we're getting more and more flexible as to the types of workloads we can handle, but again, this is not suited for everything.

One way we can solve the problem of file size relative to data constituent size is by making our data constituents bigger. If I'm dealing with larger files it is better to have fewer data constituents that are larger than more data constituents that are smaller, so there's a balancing act that goes on there. No matter what, though, individual files do not span data constituents.
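Because even filling of the constituents matters so much, it helps to be able to see them individually; a rough sketch of checking the member volumes behind a FlexGroup, where svm1 and fg1 are placeholders and the constituent names follow the usual fg1__0001-style convention:

    # Show each data constituent, which aggregate it lives in, and how full it is
    volume show -vserver svm1 -volume fg1* -is-constituent true -fields aggregate,size,used,percent-used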
So an application that wants to try to stripe files would probably not be appropriate here, nor would one that needs to place the data in a specific location; remember, it's not until the ingest engine looks at the utilization that it decides where a write will go, so the application doesn't actually get physical control over where the write lands. Also, not every feature that ONTAP has is necessarily supported on a FlexGroup, so if you want to use a feature that's not supported, a FlexGroup is not going to be the answer. These are typically associated with Windows shares: remote Volume Shadow Copy Service and some of the other share features that we do support on normal flexible volumes are not yet supported with FlexGroups.

As ONTAP has evolved, one of the things that has been really apparent to me, looking across all the years I've been working with it, is how they have evolved methods of parallelizing the workloads. I remember when I was first working with ONTAP in the 6 and 7 days, there were different processes that could consume a CPU, and once that CPU was completely consumed, even though there were other CPUs available, it really couldn't go any faster because it couldn't spread those threads across the others. That's a problem they've been addressing with each new version of ONTAP. Part of that is a quality NetApp calls waffinity: there's a limit to how many parallel threads can be applied to certain objects, and that changes depending on which version of ONTAP you have. On modern high-end systems like an A700 there are sixteen affinities per node and eight per aggregate, so the optimal configuration to get the most throughput on a node would be to have two aggregates and to put eight data constituents on each one. That underlying capability, the number of affinities per object, is something that can be taken into consideration when we are building these FlexGroups. You can see what those values are with an advanced-mode command: you can do a node run, get the stats, and see how many affinities are available on that particular hardware and how many on each aggregate, and then when I lay out my FlexGroup volume I can use that information to optimize the layout. So if I had an A700 running 9.4, I would like to have two aggregates on each node, and when I create my FlexGroup volume I would prefer to put eight data constituents on each of those aggregates; that maximizes the parallelization that is possible.

When you're looking at applying this, it's important to understand the workload you want to run. NetApp has a very useful tool called XCP. It is a wonderful tool if you have to copy large numbers of small files, but it also has analysis capabilities: it can look at a particular volume and see, for example, what the average file sizes are, the maximums, the different kinds of I/O, random versus sequential, and build a profile of the kind of workload running on a particular volume. The best practice is to understand that, and then use that information when it comes to sizing the data constituents and the rest: the bigger my average file size, the larger my data constituent volumes should be.
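A rough sketch of laying out a FlexGroup to match that kind of hardware, two aggregates per node with eight constituents per aggregate; the names, the 800TB size and the exact option availability are assumptions that depend on your platform and ONTAP release:

    # Manual layout: 2 aggregates x 8 members each = 16 data constituents on this node
    volume create -vserver svm1 -volume fg1 -aggr-list node1_aggr1,node1_aggr2 -aggr-list-multiplier 8 -size 800TB -junction-path /fg1 -space-guarantee none

    # Or let ONTAP pick the layout automatically based on the cluster it sees
    volume create -vserver svm1 -volume fg_auto -auto-provision-as flexgroup -size 400TB -junction-path /fg_auto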
Each version of ONTAP has significant enhancements, particularly on the feature side, and performance and stability enhancements as well. That cool function where ONTAP can steal space from a different data constituent was put in place when they came across workloads that actually ran out of space on a particular constituent; it's a way to keep the FlexGroup volume online and running. It's a pretty unusual, pretty rare situation, but now there's a solution for it.

We want to lay the FlexGroup out appropriately for the hardware we have, and generally there are tools that will do the auto layout. Although you can manually try to optimize this, the best practice would usually be to use the auto tools, which are aware of the performance capabilities and considerations; let them lay it out for you, and they're probably going to make the best choice. There is tooling for creating a FlexGroup volume both from the command line and in the GUI that will automatically choose the layout; you can do it manually if you want. It's better to have larger rather than smaller data constituent FlexVol sizes, as that gives the most efficient utilization; ideally, the largest file in a data constituent should be no more than about one to five percent of the data constituent size. We can build all of this performance into the cluster, but we also need the network pipes to be large enough to actually make use of it, so NetApp recommends at least 10-gigabit Ethernet ports on the cluster nodes used to access the FlexGroup.

Another interesting thing is that surprising issues pop up as we scale to sizes beyond what people ever considered, and one of these is file IDs. With NFS, and in Unix and Linux in general, a file has a name, but underneath that name is a file ID number, traditionally a 32-bit value, so the number of unique IDs tops out in the low billions; but remember, a FlexGroup volume can support up to 400 billion files. That's a problem, because if two files end up with the same file ID we're going to have a collision, and it's not going to go well. So if you are using FlexGroup volumes, then for NFS version 3 and version 4 you want to enable 64-bit file IDs, and when you do that you need to unmount and remount the export on the host so it picks up the new 64-bit capability. They also recommend thin provisioning if you are running on a FlexGroup. The big advantages come from being able to distribute the data across all of the member volumes, or data constituent volumes; the two terms are used interchangeably.

Now, let's suppose you decide that you would like to move a workload that's currently on flexible volumes into a FlexGroup. How are you going to get it there? Replication like SnapMirror, which would have been one of the ways we historically approached something like this, doesn't work for this kind of migration, because SnapMirror functions at the block level, not at the file level, and the whole point of the FlexGroup volume is to distribute the individual files across the member constituents. So SnapMirror is not an option here; we need something that does the copy at the file level. NetApp has a product called XCP for that purpose, and whether you are moving from competitor storage into ONTAP, or from an ONTAP system using flexible volumes into a FlexGroup volume, XCP is probably the preferred way to do the migration.
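A minimal sketch of turning on those 64-bit file IDs for an NFS-mounted FlexGroup; svm1, the export path and the mount point are placeholders, and the NFS server options are advanced-privilege settings:

    # On the cluster: enable 64-bit file identifiers for NFSv3 and NFSv4
    set -privilege advanced
    vserver nfs modify -vserver svm1 -v3-64bit-identifiers enabled -v4-64bit-identifiers enabled
    set -privilege admin

    # On the Linux host: remount the export so the client picks up the new IDs
    umount /mnt/fg1
    mount -t nfs svm1-data:/fg1 /mnt/fg1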
If you haven't worked with large numbers of files, particularly little files, it's surprisingly difficult to appreciate how hard it is to move that kind of workload. It takes a long time; individual files, especially if they're processed sequentially, carry a lot of overhead, and so XCP has a lot of code in it to parallelize that work.

In 9.7 I have the ability to convert a flexible volume, in place, into a one-member FlexGroup, and this can be done live. Imagine you have a flexible volume that's getting close to the hundred terabyte limit, and it might have a billion files in it; that would be a pain to move. In 9.7 we can convert it to a single-member FlexGroup volume, and it does this unbelievably fast: it's not changing any data inside, it's updating some metadata inside the volume, so regardless of how many files you have in the flexible volume it's able to do the conversion in less than 40 seconds, and I understand that's pretty conservative; it's usually much faster than that. Now I have a FlexGroup volume with one data constituent, so I would add more data constituents, and then as my workloads write into that FlexGroup volume, all the new data will tend to be directed into the new data constituents, and over time things will level out.

So it sounds kind of complex, but the goal of FlexGroups is really simplification. Instead of having to manage all of these individual small buckets, I can just pour my data into it, and I don't have to worry about the distribution that's happening behind the scenes. It looks like a single giant pool of space, but the file system, the files, are being distributed across all the data constituents, so we get better performance because we can parallelize more operations, and I don't have to manage those distinct little buckets of space.

If this is something you enjoyed, there are more NetApp classes where we dive into these different features that are possible with ONTAP, and if you have questions please go to the QuickStart portal; we're monitoring the questions there, so we can get back to you with an answer fairly quickly.

All right, thank you, Mark, for the great session, and thank you everyone else for joining. Feel free to send in your queries to QuickStart at wix.com, and you can also visit our social pages if you want to ask any questions; we'll be more than happy to assist you. Thank you.
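To close the loop on the in-place conversion and expansion workflow described above, a rough sketch; svm1, vol1 and the aggregate names are placeholders, and the exact options depend on the 9.7-or-later release you are running:

    # First do a dry run to see whether the FlexVol is eligible for conversion
    volume conversion start -vserver svm1 -volume vol1 -check-only true

    # Convert the FlexVol in place into a single-member FlexGroup
    volume conversion start -vserver svm1 -volume vol1

    # Then add more data constituents so new writes spread across the new members
    volume expand -vserver svm1 -volume vol1 -aggr-list aggr1,aggr2 -aggr-list-multiplier 4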
Info
Channel: QuickStart
Views: 473
Rating: 4.6 out of 5
Keywords: netapp, ontap, flexgroups, training, certification, course, education, online
Id: kCC_lpCHGes
Length: 55min 57sec (3357 seconds)
Published: Fri Jun 05 2020