vSAN 6.5 DEMO Install and Configuration

Captions
Welcome, everybody. My name is Jim Sandy, and I'm a VMware technical partner manager and systems engineer. In this enablement session I'm going to show you a demo of enabling and configuring Virtual SAN (vSAN) version 6.5. Keep in mind that vSAN is built into the vSphere hypervisor itself, so by proxy, when you enable vSAN on vSphere 6.5 you are on vSAN 6.5 as well. vSAN is not a separate product and not a virtual appliance that needs to be installed; it is baked into the hypervisor. To take advantage of vSAN, it's simply a matter of buying a license, getting the key, loading the key, and then enabling and configuring it. Again, there's no separate install of an appliance or another product required.

I'm going to show you this in the Hands-on Labs. If you're not familiar with them, you can go to labs.hol.vmware.com, where we have labs for all of our different solutions, including several for vSAN that we've recently updated to the 6.5 version. So I'll be using the Hands-on Labs environment to enable vSAN, do the initial configuration, and then walk through some of the different areas within the vSphere Web Client associated with vSAN and the settings available to you. Without further ado, let's get started with the demo.

OK, here you can see I'm in the vSphere Web Client. My hosts are vSphere 6.5 and vCenter is 6.5 as well. Because vSAN lives in the hypervisor, in the vSphere host itself, there's no appliance to deploy; it's already built in. So really, all it takes to enable vSAN, besides having compatible hardware, is the product key you get when you purchase your vSAN licenses, and with that you can enable vSAN on a cluster. It's that simple, because it is baked into the hypervisor bits already.

So again, I'm going to show you how to enable vSAN 6.5. As you see here, we have a cluster, RegionA01-COMP-GR1, and in it we have three hosts. Before we can enable the vSAN cluster, one of the few configurations we do have to perform is on the hosts: click on a host, go to the Configure tab, and go down to the VMkernel adapters. You need to make sure you create an adapter for vSAN, specifically for the traffic that communicates vSAN information and data between the hosts. As you see here, if I scroll over, vSAN is listed under the enabled services for this adapter. Going down each of these hosts, you'll see they already have an adapter set up for the vSAN service. If you haven't done that, it's the very first thing you have to do before you can enable vSAN; other than that, and entering the license key, there's really no other configuration required beforehand. Since we have that done, let's go back up to the cluster level.
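If you prefer to script that per-host check rather than click through each host, here is a minimal pyVmomi sketch that verifies each host in the cluster has a VMkernel adapter tagged for vSAN traffic. The vCenter hostname, credentials, and cluster name are lab-style placeholders of my own, not values confirmed by the video; treat the whole thing as a sketch. Later sketches in this write-up reuse the `si`, `find_obj`, and `cluster` objects defined here.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip certificate checks
si = SmartConnect(host="vcsa-01a.corp.local",          # hypothetical vCenter
                  user="administrator@vsphere.local",  # hypothetical login
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

def find_obj(vimtype, name):
    """Look up a managed object by name via a container view."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.Destroy()

cluster = find_obj(vim.ClusterComputeResource, "RegionA01-COMP-GR1")
for host in cluster.host:
    cfg = host.configManager.virtualNicManager.QueryNetConfig("vsan")
    tagged = [v.device for v in cfg.candidateVnic if v.key in cfg.selectedVnic]
    print(host.name, "vSAN vmk:", tagged or "NONE -- tag one first")
    # To tag an adapter (e.g. vmk3) for vSAN traffic:
    # host.configManager.virtualNicManager.SelectVnicForNicType("vsan", "vmk3")
```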
On the Configure tab, under Virtual SAN, you can see General. Just so you know, adding the vSAN license is done here under Licensing: you simply click Licensing, assign the license, and put in the key; that's all there is to it, so I'm not going to bother with that. Going back up to General under Virtual SAN, we're going to go ahead and enable vSAN, so I'll click the Configure button.

Starting off, keep in mind our hosts have disks in them, and we'll see those in a minute. You can have vSAN automatically grab any available disks during the initial enablement and configuration, but I prefer not to do that. I like to set it to manual so I can make sure I'm selecting the right disks, just in case a host has multiple disks and one of them isn't meant for vSAN. Plus, this way I can show you the creation of the disk groups and the selection of the individual disks, so we're going to select manual; again, you can use automatic.

Now, with all-flash setups of vSAN, where you have flash devices for both the caching and capacity tiers, you can turn on the space-efficiency features, namely deduplication and compression. Since this environment has flash disks, I'm going to go ahead and enable deduplication and compression. I'm also going to allow reduced redundancy. In this particular demo I'm not going to show any stretched clusters, and we're not going to configure fault domains (we'll talk about those in more detail later), so I'm going to go ahead and click Next.
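The same enable step can be driven through the API. Below is a hedged sketch, reusing the connection helper from the first block, that turns vSAN on for the cluster with manual (non-automatic) disk claiming. Note that in 6.5 the deduplication-and-compression switch lives in the separate vSAN management API rather than this base vSphere API, so the sketch covers only the enable-with-manual-claiming part.

```python
from pyVmomi import vim  # reuses si/find_obj/cluster from the first sketch

spec = vim.cluster.ConfigSpecEx(
    vsanConfig=vim.vsan.cluster.ConfigInfo(
        enabled=True,
        defaultConfig=vim.vsan.cluster.ConfigInfo.HostDefaultInfo(
            autoClaimStorage=False)))  # "manual" disk claiming, as in the demo

# modify=True merges this change into the existing cluster configuration.
task = cluster.ReconfigureComputeResource_Task(spec, True)
```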
Remember, we went and looked at the VMkernel adapters to make sure there is a VMkernel adapter specifically for vSAN services and communication. One thing that's new in 6.5 is that, as part of enabling vSAN, there is now a check in the wizard to make sure you already have those configured. We manually verified them already, but here, as part of the actual enable-and-configure step, there's a validation process, and as you see, we get green check marks saying yes, they are configured. So I'll go ahead and click Next.

Now here we can look at the disks. You're going to see in this environment that each host has a 5 GB SSD meant for caching and a 40 GB SSD for capacity. This is a demo environment, which is why the disks are very small and there's only one capacity disk per host. During this part of the configuration you can group the view either by disk or by host. Looking at it by disk, if I expand these out across all three hosts, you see each of the 40 GB disks on each of the vSAN hosts, and then the 5 GB caching disks, one per host. We can also look at it by host, which shows each host with both its 5 GB caching disk and its 40 GB capacity disk.

Because this is a lab environment, the disks have already been claimed for the caching and capacity tiers. If they weren't, I would click on all the capacity disks and then click right here to claim them for the capacity tier, and do the same for the caching tier; normally you would have to do this yourself in a manual configuration. So I'll go ahead and click Next, since it's going to grab all those disks, and here's just a summary screen; if everything looks good after you've checked it, click Finish.

Now we're going to watch the recent tasks, because it does take a few minutes to bring in all those hosts, claim the disks, and create a disk group (which I'll show you shortly) for each of the vSAN nodes. As you see, it's happening pretty quickly, and it has completed. Now it's creating the vSAN datastore, which I'll show you, and formatting the disks with on-disk format version 3, which is the format version for vSAN 6.5, so we'll wait for that to finish as well; I'll go ahead and minimize that.

While that's finishing, I just want to show you that we're still on Virtual SAN > General at the cluster level. As you can see, we enabled vSAN, turned it on in manual mode, and enabled dedup and compression. It's about ready to finish here. Remember, we have a total of six disks across all three hosts, two per host; right now the disk format line says three disks on version 3, but shortly it will show six. Sometimes you do need to hit refresh to let it update.
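If you want to see programmatically which disks vSAN considers claimable on each host, a small sketch (again reusing the earlier connection helper) can ask each host's vSAN system for disk eligibility; the 5 GB/40 GB sizes are just what this lab presents.

```python
# Reuses si/find_obj/cluster from the first sketch.
for host in cluster.host:
    for result in host.configManager.vsanSystem.QueryDisksForVsan():
        disk = result.disk
        size_gb = disk.capacity.block * disk.capacity.blockSize / 1024**3
        kind = "SSD" if disk.ssd else "HDD"
        # state is e.g. "eligible", "inUse" (already claimed), or "ineligible"
        print(f"{host.name}: {disk.canonicalName} {size_gb:.0f} GB {kind} "
              f"state={result.state}")
```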
Now you're going to see a setting here for internet connectivity. If you have internet connectivity, which I don't in this lab environment, you can go ahead and click Enable to turn that on, and if you have a proxy server you can enter the associated proxy information, the port, username, and password, and click OK. What this does, and you'll see it later in the demo when we look at the health checks, is check the hardware: the controller models, driver versions, firmware versions, and so on. If you enable internet connectivity, it will go out and automatically update the Hardware Compatibility List (HCL) database with all the different driver and firmware versions for the supported hardware, run those checks, and tell you whether anything is out of date relative to the drivers that need to be used for vSAN. By the way, as you see here, the disk format job finally finished: all six disks succeeded on version 3 with a green check mark, which is a good sign.

Now, one of the new features of 6.5 is that vSAN can be used as an iSCSI target service. At this point in time, with 6.5, it's only meant for physical servers to point to a vSAN datastore using iSCSI. An example would be physical Microsoft Cluster Service servers that you want to use the vSAN datastore as their data location: you can point them at it over iSCSI. You can go in here and create the target: enable the service, select the iSCSI VMkernel adapter you plan on using (typically, just like we did for the vSAN service, you'd have a VMkernel adapter dedicated to iSCSI traffic to keep it separate), it uses port 3260 by default, and you can use no authentication, CHAP, or mutual CHAP. vSAN deploys a default vSAN storage policy, as you see here, and if you create other policies you can associate the target with one of those. I'm going to click Cancel, because I'm not going to show the iSCSI target portion; this is just the initial enablement and configuration of vSAN. So those are the general settings.

Now let's go to Disk Management. We claimed all six of those disks, three 5 GB flash disks for cache and three 40 GB for capacity, and it automatically created a disk group for each host. As you see for hosts esx-01, esx-02, and esx-03, each has a disk group, and in each disk group you have one caching disk, the 5 GB, and one capacity disk, the 40 GB.

For vSAN, that is the minimum requirement for a disk group: you must have one caching disk and at least one capacity disk; there's no size requirement beyond having at least one of each. You cannot have more than one caching disk in a disk group, but you can have more than one disk group, and you can have multiple capacity disks in a disk group behind its one caching disk. The best practice is to have more than one disk group; that helps, especially when we talk about resyncs later, to speed up resyncing across multiple disk groups. That way, if you lose a disk in a disk group, you're normally taking out only that one disk group on the host and not the entire host. If you only have one disk group and you lose its caching disk, you lose everything in that host, and essentially that host is now unavailable, so that's something to keep in mind.
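To confirm the resulting disk-group layout from a script instead of the Disk Management page, this sketch (same connection assumptions as before) walks each host's vSAN storage info:

```python
# Reuses si/find_obj/cluster from the first sketch.
for host in cluster.host:
    storage = host.configManager.vsanSystem.config.storageInfo
    for n, group in enumerate(storage.diskMapping or [], start=1):
        caps = [d.canonicalName for d in group.nonSsd]
        print(f"{host.name} disk group {n}: "
              f"cache={group.ssd.canonicalName} capacity={caps}")
```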
Now, here is the setting for fault domains and stretched cluster. A stretched cluster, again, is for stretching the vSAN datastore across multiple physical locations, say a main headquarters and a secondary site. So what are fault domains for? Here's an example: a company has a data center with, say, 100 vSAN nodes spread across six physical racks. In typical clustering fashion, when you deploy a VM, you have your primary copy of the data, a second mirror copy, and a witness for quorum. The thing is, if you've got multiple physical racks and you deploy a VM, you don't want the primary copy, the mirror copy, and the witness to land on hosts all within the same physical rack, because if that rack goes down, you've lost everything.

So we give you the capability of defining fault domains. In the scenario I just gave you, with six physical racks, you would create a fault domain for each physical rack, containing all the hosts that physically sit in that rack. Then, when vSAN deploys a VM, it ensures that it writes the objects onto hosts in different fault domains, spreading them across three racks rather than potentially putting them all in the same physical rack. This way, with the objects (the primary copy, the mirror copy, and the witness) spread across three different physical racks and three different nodes in those racks, if one rack loses power, you're not losing your data. That's the idea of fault domains. If you don't have a need for fault domains, as you can see, it lists every host in a kind of non-fault-domain group, which essentially treats each host as its own single-host fault domain. We're just going to leave it at that; we're not going to create fault domains here. Again, fault domains are for the situation I described, and stretched clusters are for multiple sites, so for this particular demo I'm not going to show that.
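The placement rule behind this is easy to state in code: with the default mirroring method, tolerating FTT failures needs 2·FTT+1 independent hosts or fault domains (FTT+1 data copies plus witness components, spread so any FTT failures still leave a quorum). A tiny illustrative calculation:

```python
def min_fault_domains(ftt: int) -> int:
    """Hosts/fault domains needed for RAID-1 mirroring with a given FTT.

    vSAN keeps ftt+1 data copies plus witness components, each in a
    separate fault domain, so quorum survives any ftt failures.
    """
    return 2 * ftt + 1

for ftt in (1, 2, 3):
    print(f"FTT={ftt}: needs >= {min_fault_domains(ftt)} fault domains")
# FTT=1 -> 3, which is why three hosts (or three racks) is the minimum.
```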
Going down to Health and Performance: the health service is enabled automatically in version 6.5, and the check interval is every 60 minutes. You can change this to whatever you desire, less or more frequent, whatever you prefer. I mentioned the Hardware Compatibility List database before: you can either download the update file from vmware.com and point to the file, updating the database manually that way, or you can get the latest version from online. I'm not going to bother clicking that, because again, this is a test environment and I don't have an internet connection, so it won't do me any good; but in most cases you probably can reach the internet, so you can click this, and you'll see the update date associated with the HCL database change. When you look at the health checks, which I'll show you shortly, they run against this database, so it's a good idea to keep it up to date. There's also Support Assistant: if you need to open a ticket for problems with vSAN, you can upload a support bundle and attach it to a service request that you already have open, which makes that very easy to do; again, it requires an internet connection.

Now, the performance service is currently turned off. You can click Edit and turn it on, and if you have multiple storage policies you can associate it with one, but we're just going to use the vSAN default storage policy and click OK. Keep in mind that when we turn this on, it will take a few moments to make the appropriate changes. There we go: from a performance-service perspective, against the vSAN default policy, we are currently compliant and healthy, as you see by the green check marks.

Again, if I were going to show you iSCSI targets, I would come in here, click Edit, enable the service, select the VMkernel adapter for iSCSI traffic, keep the standard default port 3260, choose no authentication or CHAP or mutual CHAP, and set the storage policy, which defaults to the vSAN default policy. You can also set up iSCSI initiator groups; I'm not going to bother showing that. The licensing section here is where you'd put in the vSAN license.

Now I'm going to go back to the summary screen. As you see, the cluster shows some red, because since we just enabled vSAN, some of the built-in alarms were initially triggered; we're just going to reset those to green.

OK, let's go over to our datastores so I can show you the vSAN datastore. Under the data center object, as you see here, is the datastore with the default name "vsanDatastore"; that's how it gets named. The associated storage policy assigned to it is the vSAN default policy, the type is vSAN, and of course there's the capacity: keep in mind we have those three 40 GB disks for capacity, and as you see, total capacity is 113 GB with free space of 112 GB. Under datastore capabilities, right now, because of the configuration, Storage I/O Control and hardware acceleration are not supported.
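Reading those same datastore numbers from the API is a one-liner once you have a connection; this sketch assumes the default `vsanDatastore` name shown in the demo:

```python
# Reuses si/find_obj from the first sketch.
from pyVmomi import vim

ds = find_obj(vim.Datastore, "vsanDatastore")  # default vSAN datastore name
s = ds.summary
print(f"{s.name}: type={s.type} "
      f"capacity={s.capacity / 1024**3:.0f} GB "
      f"free={s.freeSpace / 1024**3:.0f} GB")
```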
Going back to Hosts and Clusters, we're on the cluster level, and I went to the Monitor tab to show you the vSAN health checks. Here are the basic checks; because the interval is set to every 60 minutes, I'm going to go ahead and do a retest, which is pretty quick. As you see, once I retested, now that vSAN is configured correctly, we pass. But look at all the checks it does: cluster checks, a bunch of different object checks, and here the hardware compatibility ones: controller driver, controller release support, vSAN HCL, all that hardware-related stuff, as well as network and physical disk checks. This is what I was talking about with keeping the HCL database up to date, especially the hardware compatibility for your storage controller and its firmware and driver versions, because there are certain ones you have to use for vSAN; you can't necessarily just update to the latest one.

Going down to Capacity: the capacity overview shows your total, your used total, your dedup and compression overhead, and then your free space. Obviously I have no VMs on this datastore yet, so there are no real objects in there, and you're not going to see any real savings; if you look over here at the deduplication and compression overview, we're only getting 7 to 12 MB of savings, a 1.46x ratio. As we start adding VMs and putting data onto the vSAN datastore, you'll see these numbers go up.

The used-capacity breakdown shows what that used capacity consists of. As you see here, performance management objects are the green percentage, file system overhead is at 0 right now, dedup and compression overhead is 74% (of the used data, keep in mind), and checksum overhead is 17%; that's the breakdown by object type. You can also break it down by data type: primary VM data versus vSAN overhead such as witnesses and replica components. Like I said, as you start to add more VMs and components to the vSAN datastore, you're going to see these numbers change drastically.

Now, under resyncing components: normally you're not going to see any data here, because most of the time you're not going to be resyncing components. Say we lose a drive or a host, or we add a drive or a host, or we make changes that affect the size or placement of the data; vSAN then has to resync those components across the available hosts and disks. So normally speaking, you shouldn't see resyncing happening unless one of those things has occurred: you've added capacity, either hosts or disks, or you've had a drive or host failure. One thing to watch out for: if you are resyncing and there really hasn't been a reason that you know of, no failure and no changes, you might want to look into why. You also want to look at the bytes left to resync, which gives you an idea of how much is left; you don't want to go making a lot of changes while a resync is going on, especially deploying VMs or moving them around.

Then we can go to Virtual Objects. It's going to be empty right now because we don't have any VMs and so no associated VMDK files or anything like that; I'll come back and show you that later. Physical Disks looks like what you saw before under the configuration tab: the disk groups and each disk under each host. iSCSI Targets won't show anything because, again, we don't have that enabled.

Then we have some proactive tests. One of the first things to do after you set up vSAN and enable it, the very first thing before you even think about deploying or building a VM on there, is the VM creation test. Run this; it's pretty quick. Against the cluster, it deploys basically an empty VM on each host, just to make sure that you can actually create a VM on the vSAN datastore. As you see, it creates a couple of different ones and then removes them, and we've passed that test, which means I can deploy something to the vSAN datastore. Then we have the multicast performance test: up to and including version 6.5, you have to have multicast working between your hosts for the inter-node communication. I'm not going to wait for the other tests, because those take longer.

What I want to do now is show you how you can add capacity to a vSAN cluster; that could be adding additional hosts, or adding additional disks and associated disk groups if you're adding SSD and capacity disks.
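Adding a host can also be scripted. The sketch below (same connection assumptions, with a hypothetical host name in this lab's style) is roughly equivalent to the drag-and-drop I'm about to do, plus taking the host out of maintenance mode afterwards:

```python
# Reuses si/find_obj/cluster from the first sketch.
from pyVmomi import vim

new_host = find_obj(vim.HostSystem, "esx-04a.corp.local")  # hypothetical name
cluster.MoveInto_Task(host=[new_host])  # scripted form of the drag-and-drop
# ...once the move task completes, bring the host back into service:
new_host.ExitMaintenanceMode_Task(timeout=0)
```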
So as you see here, we have this host esx-04. Just to check real quick, looking at the VMkernel adapters, we do already have a VMkernel adapter with vSAN enabled on it, so make sure you have that. Now what we're going to do, while it's in maintenance mode, is drag and drop it into the vSAN cluster. We'll watch the recent tasks, because it is going to have to configure that host and reconfigure the cluster, so it will take a few minutes. OK, it's done adding; you can see the yellow exclamation points are no longer on the other vSAN hosts. Now I can take this host out of maintenance mode, and we'll make sure that's done; it is.

Next, I'm going to go back up to the cluster level and into the configuration, and let's go back to Disk Management. Not only did we add a host, but remember, we originally set vSAN up in manual mode, to claim disks manually, not automatically, so it's not going to grab this host's disks for us; we need to do that ourselves. Here we have the esx-04 host, and from here there are two ways to do it: from the disk level or from the disk-group level. I'm just going to create a new disk group. First we select the disk for the cache tier, which again is the 5 GB flash disk, and then for the capacity tier the 40 GB flash disk, and we click OK. Now it's going to take a minute: it's creating the disk group for esx-04 using those two disks, and you may have to refresh if you want to see it a little quicker; you can watch the recent tasks here as well. OK, it's finished adding the disks and creating the disk group for esx-04. As you see, here's the disk group, and in that disk group are the two disks, as we expected.

Now let me show you the vSAN datastore again. We go to Storage and then to the vSAN datastore; again, this is the default name, we have the vSAN default storage policy assigned to it, the type is vSAN, and look at the capacity. If you remember, before we had around 113 GB; now, since we've added the additional host and its disks, we have 151 GB total, with 149 GB free. I just wanted to show you that, by adding that host, we've very simply added capacity as well as compute resources.
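Creating that disk group from a script goes through the host's vSAN system. This sketch assumes, as in this lab, that the smallest eligible flash device is the intended cache disk and everything else eligible becomes capacity; on real hardware you would select the devices explicitly.

```python
# Reuses si/find_obj from the first sketch; new_host is the esx-04 host object.
from pyVmomi import vim

vsan_sys = new_host.configManager.vsanSystem
eligible = [r.disk for r in vsan_sys.QueryDisksForVsan()
            if r.state == "eligible"]
# Lab assumption: smallest flash disk (5 GB) is cache, the rest are capacity.
eligible.sort(key=lambda d: d.capacity.block * d.capacity.blockSize)
mapping = vim.vsan.host.DiskMapping(ssd=eligible[0], nonSsd=eligible[1:])
vsan_sys.InitializeDisks_Task([mapping])  # builds the disk group
```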
Going back to Hosts and Clusters, we can see all of our disk groups over each one of our hosts. Now what I'm going to do is move one of these VMs onto the vSAN datastore. I'm going to change both compute and storage, click Next, select the vSAN cluster, get the green check mark saying it's compatible, and click Next. For now I'm going to go with the default vSAN storage policy; as you see, the only compatible datastore for that policy is the vSAN datastore. Click Next, keep the default network settings, review to make sure all the settings are correct, and click Finish. That's going to take a minute to move that VM over. OK, as you see, we've successfully moved the base Linux VM into the vSAN cluster and it's completed, so I'll close that.

Now what I'd like to show you, back on the Monitor tab under Virtual Objects (let me refresh that), is that now that we've moved a VM onto the vSAN datastore, we see the base Linux VM on there, with a VM Home folder object and the hard disk, which is the VMDK. Let me scroll up a little so you can see more at the bottom. For the VM Home folder we can actually look at the physical disk placement, where the components are. Again, remember the concept of the cluster: we have one copy of the VM's data, we have a secondary copy as a backup that is constantly kept in sync with the primary, and then we have a witness. As you see here, our two data components are on esx-04 and esx-02, and the witness is on esx-03, so we can see exactly where our components are sitting within the vSAN datastore and on which particular hosts. We can also look at the actual hard disk, the VMDK object: as you can see, it's in a RAID 1, with components again on esx-04 and esx-02 and the witness on esx-03. Because we can see which hosts these individual objects and VMs are residing on, we can check, before bringing down a particular host, which components of which VMs are sitting on it, and maybe preemptively migrate VMs accordingly before doing maintenance on that host.

Now, there's a proper way to do that: by best practice, you put a vSAN node into maintenance mode, and when you do, it asks whether you want to make sure that all of the objects on this particular host remain accessible. There are a couple of different settings you can select, and if you say yes, keep them available, it will automatically move those objects off that host and onto the other active hosts, to make sure you are still able to access those particular VMs and their associated files.
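That maintenance-mode question maps to a "decommission mode" in the API. Here is a minimal sketch, under the same connection assumptions, of entering maintenance mode while keeping objects accessible; the `objectAction` value could instead be "evacuateAllData" or "noAction".

```python
# Reuses si/find_obj from the first sketch.
from pyVmomi import vim

host = find_obj(vim.HostSystem, "esx-04a.corp.local")  # hypothetical name
spec = vim.host.MaintenanceSpec(
    vsanMode=vim.vsan.host.DecommissionMode(
        objectAction="ensureObjectAccessibility"))  # keep objects reachable
host.EnterMaintenanceMode_Task(timeout=0,
                               evacuatePoweredOffVms=True,
                               maintenanceSpec=spec)
```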
Now I want to go to the home screen, because another thing I want to show you is the default storage policy. Here under VM Storage Policies, as you see, we have the vSAN default storage policy. Let's take a quick look at some of the settings in it; keep in mind there are multiple settings we can configure. Best practice is not to modify the default policy; it's best to clone it, rename it, and make your setting changes to that copy.

Starting off, we have "Number of failures to tolerate" (FTT). For vSAN, again, we have a minimum of three nodes by default, and the reason is the standard cluster format: you have the primary copy of the data, the secondary backup copy, and a witness, which requires three hosts, so if any one of those hosts goes down, you still have access to the VM and its data. So we have an FTT of one, which is the default.

We also have "Number of disk stripes per object," and by the way, you can click on any of these little info icons and it will give you a description of the setting. For the number of disk stripes per object: this is the number of capacity disks across which each replica of a storage object is striped. A value higher than one may result in better performance, for example when flash read cache misses need to be serviced from the hard disk drives, but it also results in higher use of system resources. The default value is one and the maximum is twelve. So again, you can get better performance by striping across more disks; it's the basic RAID striping concept, although the way vSAN works it's not a true traditional RAID 5, 6, or 1, it's slightly different, but very similar conceptually.

Also, "Force provisioning" defaults to No. This means that if we're in a degraded state, say we've lost one of our hosts, and you go to deploy a VM, are you going to want to deploy it anyway in a degraded state? Probably not, so that's why it's typically, and by default, set to No; just something to think about. With "Object space reservation" you can set a percentage of the logical size of the storage object that will be reserved, basically thick-provisioned, when the VM is provisioned; the rest of it will be thin-provisioned. Then we have "Flash read cache reservation": flash capacity reserved as read cache for the storage object, specified as a percentage of the logical size of the object, to be used only for addressing read performance issues. Reserved flash capacity cannot be used by other objects, while unreserved flash is shared fairly among all objects. So if you have something with issues from a read perspective, you can assign a certain percentage of the cache disk directly to a certain set of VMs, because remember, these policies are assigned per VM: you can create a policy and assign VMs to it, or vice versa. You can also add some additional rules, as you see here: failure tolerance method, IOPS limit for object, and disable object checksum. So that's the default policy and its settings.

Like I said, the best practice is to clone this, and I'm going to call the clone "New vSAN policy." Clicking on Rule-Sets: you can use common rules in the VM storage policy (you can read the description here, but we're not going into that too much), and the rule set is where we set the rules we were just looking at regarding stripes and number of failures to tolerate. Keep in mind that increasing the number of failures to tolerate means you need more hosts. Right now, with only four hosts, if I try to set the number of failures to tolerate to, say, four, it's going to give me an error ("incorrect or missing values"), because I don't have enough hosts to handle an FTT of four. For the number of disk stripes, let's say for this policy I want two stripes; it will validate that too. Keep in mind that by doing this you're going to change the amount of storage consumed. We can also set force provisioning and the reservations for object space and flash read cache, and we can add the additional rule for failure tolerance method: RAID-1 for mirroring, or RAID-5/6 erasure coding for capacity efficiency. Let's say that's going to be our policy; so there's my new vSAN policy.
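The trade-off between those failure tolerance methods is easy to quantify. Here is a small illustrative calculation using the standard vSAN ratios: mirroring consumes FTT+1 full copies, RAID-5 (FTT=1) is three data plus one parity component at roughly 1.33x, and RAID-6 (FTT=2) is four data plus two parity at 1.5x; witness overhead is ignored for simplicity.

```python
def raw_needed(logical_gb: float, method: str, ftt: int) -> float:
    """Approximate raw capacity consumed for one object, excluding witnesses."""
    if method == "mirror":                 # RAID-1: ftt+1 full copies
        return logical_gb * (ftt + 1)
    if method == "erasure" and ftt == 1:   # RAID-5: 3 data + 1 parity
        return logical_gb * 4 / 3
    if method == "erasure" and ftt == 2:   # RAID-6: 4 data + 2 parity
        return logical_gb * 3 / 2
    raise ValueError("unsupported combination")

for method, ftt in [("mirror", 1), ("erasure", 1), ("mirror", 2), ("erasure", 2)]:
    print(f"{method} FTT={ftt}: 100 GB disk -> "
          f"{raw_needed(100, method, ftt):.0f} GB raw")
```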
OK, I'm going to go back to Hosts and Clusters and click on that base Linux VM. Remember, the policy assigned to it was the default at the time I moved it, and it shows as compliant. Having just moved it, I can do a check compliance to make sure it's reflecting correctly, and as you see, it is. Now what I'm going to do is take this photon-01a VM and migrate it to the vSAN datastore as well, changing both compute and storage. Click Next, select the vSAN cluster, but this time I'm going to try the new vSAN storage policy that I configured. And as you see, the datastore does not match the current policy: it requires additional physical disks, because remember, I increased the number of stripes and changed the type of redundancy to RAID-5/6. I just wanted to show you that: depending on what the settings in your storage policy are, your cluster may not be able to meet them. If I put this back to the vSAN default storage policy, as you can see, it meets the requirement. I'm not going to finish this migration, there's really no need, but I did want to show you that depending on your settings you may not be able to meet the requirements; click Cancel.

Now that I have that base Linux VM on there, I want to go back to the datastore. As you can see, the total capacity is 151 GB, with provisioned space of 16 GB and free space of 146 GB; now that we have that VM on there, it's not showing as much space available as before, which of course makes sense.

Going back to the vSAN cluster, to Capacity under the Monitor tab in the vSAN section, you're going to see that the numbers have changed, especially the used-capacity breakdown. The dedup and compression savings have gone up a little, to 4.3 GB, because now we have a VM on there with some duplicate files, and for both data types and object types the numbers are a little different. As you see, we now have this orange area, which is deduplication and compression overhead, now at 45%, which is a lot more than before; the green, which is virtual disks, is at 37%, and the VM home objects are at 4%. The colors and the percentages have changed since I added that VM, which of course is what you'd expect.
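If you want to see from a script which VMs are consuming the vSAN datastore and how much each has committed, a short sketch under the same connection assumptions:

```python
# Reuses si/find_obj from the first sketch.
from pyVmomi import vim

ds = find_obj(vim.Datastore, "vsanDatastore")
for vm in ds.vm:                      # every VM with files on this datastore
    committed_gb = vm.summary.storage.committed / 1024**3
    print(f"{vm.name}: {committed_gb:.1f} GB committed")
print(f"free: {ds.summary.freeSpace / 1024**3:.0f} GB")
```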
Just to add a note, going back to the proactive tests: I did run that multicast performance test, and because this is a virtual environment with virtually nested ESXi hosts, the performance check fails. If this were a real, live physical environment, it would most likely pass, and the storage performance test, if you ran it, would likely pass as well; just so you know, in that regard. Going back to Health, we can retest the health as well to make sure we're still good, and all the checks, as you see, are good.

Now, just to show you real quick with that esx-04 host: remember, when we looked at the virtual objects, we saw that there were some components on it. Let's say we're going to put this host into maintenance mode, so I select Enter Maintenance Mode. As you see here, it recognizes that the host is part of a vSAN cluster, so it automatically checks the box to move powered-off and suspended virtual machines to other hosts in the cluster, and it presents the vSAN data migration options. This is what I was talking about before with regard to doing maintenance: if we leave this set to "Ensure accessibility," it's going to move the necessary objects to the other three hosts in the vSAN cluster, hosts one, two, and three, to make sure those objects remain available on hosts that are still running, so there's no issue. I'm going to go ahead and do that, click OK, and we'll watch the recent tasks. As you see, it successfully put the host into maintenance mode.

Now I want to go back to the cluster (you can do this by the VM or by the cluster) and look at the virtual objects. If we look at that base Linux VM, the VM Home and the hard disk are still compliant, but it does let you know that we have "reduced availability with no rebuild delay." It moved components accordingly so that they are still available and can still be served, but it noticed that we put esx-04 into maintenance mode, so it's reporting the components that were on esx-04 as absent. Then, if we go back, exit maintenance mode on the host, and go up to the cluster level, as you see, the Linux VM is now healthy and everything is active; you can see all the components.

So that is pretty much it for enabling and configuring vSAN; that completes my demo of how to enable and configure vSAN 6.5. Again, just a reminder: vSAN is not a separate product, it's not a separate appliance that you have to install; it is built into the hypervisor itself, so it matches the hypervisor version you have installed, such as vSphere 6.5, and once you have a valid product key assigned in vCenter Server, you can enable vSAN, configure it, and start using it.

As you saw, vSAN is very simple to enable and configure. With most traditional storage appliances, such as a NAS or SAN device, you typically need a storage administrator, because it takes somebody with that kind of storage administration experience to install, configure, and manage that type of solution from start to finish. But as you can see with vSAN, a vSphere administrator who is familiar with working in the vSphere Web Client, without being a storage expert, can very easily enable, configure, and continue to manage a vSAN environment. I showed you how simple it was: it took adding the product key, adding some VMkernel adapters enabled for the vSAN service, enabling vSAN, and doing some minor configuration. You can configure policies, such as the number of stripes or the number of failures to tolerate; again, those can be limited by the number of vSAN hosts you have, so keep that in mind as well.

vSAN is a workhorse; the performance is outstanding, and it's an awesome product. We have over 7,000 production customers using vSAN in their production environments, and vSAN has only been out a couple of years and was a new technology for us, so you can see that it's being widely adopted. And of course, as you know, hyper-converged infrastructure is becoming very popular, like our VxRail appliance, which is a single physical appliance that usually has four physical nodes in it with vSAN on top, as well as some of our other solutions. Hopefully you see how easy it is to enable, configure, and manage vSAN, even if you're not a storage expert. With that, this completes this particular demonstration. I hope this information was valuable to you, and I look forward to seeing you in my future sessions. Thank you, and have a wonderful day.
Info
Channel: Tech Data Technical Services
Views: 48,255
Rating: 4.8733029 out of 5
Keywords: vsan, 6.5, demo, install, configuration
Id: SBdaTypBTcQ
Length: 43min 32sec (2612 seconds)
Published: Wed Mar 29 2017