Clustered Data ONTAP 8.2 Intro Demo

Video Statistics and Information

Captions
Hi, my name is Joe Whelan. I'm a systems engineer working at NetApp, and today we're going to do an intro demonstration of clustered Data ONTAP 8.2. First we'll talk about all the capabilities we're going to demonstrate today. We're going to create a two-node cluster within clustered Data ONTAP; create an aggregate, which is a pool of storage; create a storage virtual machine, which is the backbone of clustered Data ONTAP and a virtual representation of a physical controller; create volumes for both CIFS and NFS access; and then mount those CIFS and NFS volumes from a Windows client and a Linux client respectively. Lastly, we're going to create two LUNs: one for Windows, which we'll mount from a Windows client, and one for Linux, which we'll mount from a Linux server running Red Hat Enterprise Linux.

Here's a diagram of the lab that we'll be doing our demonstration from today. The jump host in the top left-hand corner is the Windows 2008 server we'll be remote-desktopping into; that's where we'll do the majority of our work in terms of accessing CIFS shares and presenting a Windows LUN. Down at the bottom you can see a pair of servers running Red Hat Enterprise Linux. We also have an OnCommand Unified Manager server and a virtual domain controller, all connected to the network, along with a pair of Data ONTAP FAS simulator nodes; these are the systems we'll actually join together during our demonstration today. The pair is connected by a dedicated 10-gig backbone, which is the cluster network on the far right-hand side.

We're going to start by opening a PuTTY session that SSHes into our first unjoined node, and we log in to that node. We begin by running the cluster wizard, so we run cluster setup. The first option is to actually create a cluster, because one does not exist right now. We say no, we don't intend to use this as a single-node cluster; this is going to be a pair. We accept the defaults for the e0a and e0b interfaces, which sit on the dedicated 10-gig backbone. Now it's asking for the cluster base license, so we do some copying and pasting of license keys, both for the base license and for some of the supporting protocols such as CIFS, NFS, iSCSI, SnapRestore, and the SnapManager bundle. Just to make things easy, we paste each key in as appropriate. These are just demonstration licenses; each physical FAS system you purchase would have its own unique key. With all the keys entered, we accept e0c as the cluster management interface and enter the IP information. These are addresses that have been handed out to me by our lab folks, so we go with .101, a 24-bit subnet mask, and a .1 gateway. Our domain name is demo.netapp.com, and we enter the IP address of the DNS server, which is .253. You can also enter where the controller is physically located. For the node management interface, we again keep e0c. There you go: the first node is up, all the IP information is in there, and that's really all that's involved in setting up that first node.
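For reference, here is roughly what that first-node session looks like at the CLI. This is a sketch from memory rather than a verbatim capture: the prompt wording varies a little between ONTAP releases, and the addresses are the lab's.

    login: admin
    ::> cluster setup
    Do you want to create a new cluster or join an existing cluster? {create, join}: create
    Do you intend for this node to be used as a single node cluster? {yes, no} [no]: no
    # accept the e0a/e0b defaults for the dedicated 10-gig cluster interconnect,
    # then paste in the cluster base license and the CIFS/NFS/iSCSI/SnapRestore/
    # SnapManager keys when prompted
    Enter the cluster management interface port [e0c]: e0c
    Enter the cluster management interface IP address: 192.168.0.101
    Enter the cluster management interface netmask: 255.255.255.0
    Enter the cluster management interface default gateway: 192.168.0.1
    Enter the DNS domain names: demo.netapp.com
    Enter the name server IP addresses: 192.168.0.253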
So we're just going to do a cluster show here, and you can see that unjoined-1 is now in a healthy state. One thing we'll want to do is rename it, since we don't want to keep it named unjoined-1, so we run a node rename command: node rename, -node with the current name, and -newname with the new name, which is going to be cluster1-01. A job is now queued; once it completes, just to make sure it worked, we do a cluster show and you can see the new name is updated.

Now we're going to take the second unjoined node and join it to the cluster we just created. The main difference here is that when we log in to the second node, instead of doing a create we do a join. Again we go into the cluster wizard by typing cluster setup, and here we type join. We accept the defaults again for e0a and e0b on that dedicated 10-gig backbone network, and we join the cluster1 cluster, so we accept that default. It sets up the network for us, checks to make sure the cluster is healthy, and then starts up the cluster. Again we accept e0c as the management interface, and again this IP information was given to us by our lab folks, so we know we're all set there. And that's it. We do a cluster show again, and we should see both nodes now in a healthy state, which we do. We'll want to rename unjoined-2 to something a little more logical as well, so we call it cluster1-02: the same node rename command, the old name, then the new name. The job is queued up and completes, and again we do a cluster show just to make sure everything is healthy.

One last thing we'll do from the command line, just to show you how, is an aggregate show. You can see that the aggregate name for the second node is kind of an odd name, so we're going to rename it. Very similar to how we renamed the cluster node, we run an aggregate rename command and call it something a little more logical. In the next section we'll rename the first node's aggregate via the GUI, but we wanted to show you how to do it via the command line here. Oops, had a typo there; there we go. The job is queued up and succeeds, and then we do an aggregate show and you can see the aggregate has been renamed. We'll show what that looks like in System Manager in just a second.
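In command form, the rename steps come down to something like the following. The old names here are plausible stand-ins for whatever the setup wizard generated, and the parameter spellings are as I recall them for this release, so verify with tab completion.

    ::> cluster show
    ::> system node rename -node unjoined-1 -newname cluster1-01
    ::> system node rename -node unjoined-2 -newname cluster1-02
    ::> storage aggregate show
    ::> storage aggregate rename -aggregate aggr0_unjoined_2_0 -newname aggr0_cluster1_02_0
    ::> storage aggregate show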
So now we're going to sign in to OnCommand System Manager. Pretty much from here on out we can do everything through a GUI instead of the command line; it's really just that initial setup on the nodes that we had to do at the command line. We put in the management IP of the cluster, or we can just use the cluster name since it resolves, and we pop in the admin credentials. We check "save my credentials" so it remembers them next time. Going into the Aggregates section under the cluster, we can see the renamed aggregate that we did via the command line. What we're going to do is go to aggr0 and edit its name, just by clicking on it and hitting the Edit button. We highlight the name and type a new one to match the convention of the one we renamed via the command line, aggr0_cluster1_01_0, then hit Save and Close, and you'll see the name get updated once the dialog closes. There you go.

Now, in System Manager, we're going to create two new aggregates, one for each node. Under the cluster window we highlight Aggregates and hit Create, which opens the create-aggregate wizard. We name the new aggregates to match the naming convention we previously used, except this time it's aggr1 instead of aggr0, and again we do that for each node, cluster1_01 and cluster1_02. Then we select disks: each node has 13 virtual disks available, and we choose 6 for our demonstration today. We hit Save and Close, hit Create, and that aggregate is created. Now we do the same for the other node. Each node is going to have a root aggregate, where the operating system files reside, and each will also have a data aggregate, which is aggr1 for our lab today. Again you select disks, this time choosing the second node, and reduce the disk count from 13 down to 6. It creates that second aggregate, and once it's finished you can see in the aggregate view all the aggregates we created: aggr0 and aggr1 for both nodes.
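The same aggregate creation can be done from the clustershell. A minimal sketch, with the aggregate names following the lab's convention (the exact names are illustrative):

    ::> storage aggregate create -aggregate aggr1_cluster1_01 -node cluster1-01 -diskcount 6
    ::> storage aggregate create -aggregate aggr1_cluster1_02 -node cluster1-02 -diskcount 6
    ::> storage aggregate show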
So now we're going to create a Vserver, the storage virtual machine, for our SAN... I'm sorry, our NAS protocols, CIFS and NFS. We go to the Vserver tab on the left-hand side and hit the Create button. We're going to call our Vserver vs1, and you're only going to see the protocols that you're licensed for, which for our lab today are CIFS, NFS, and iSCSI. Since we're just doing NAS protocols on this Vserver, we check only CIFS and NFS. We choose NTFS security style, we see the domain and the DNS server, and we accept those defaults and continue. We enter the IP address provided for our lab today, which is 192.168.0.131, again with a 24-bit subnet mask and a gateway of .1. The home port we're configuring here is going to be on cluster1-01 as the home node, using e0c as the home port. Next it joins the Active Directory domain, just like any server would: we type in demo.netapp.com, we go with the default Computers OU, the server name is vs1, and we give it administrative credentials. At this point it's creating the CIFS server and joining the domain, provided we gave it the proper credentials, which we have. We're also going to create a vsadmin account; this is an administrative account for this Vserver only, nothing outside the Vserver's world. We submit that and continue, hit OK, and you can see our Vserver now has both the NFS and CIFS protocols licensed and configured.

Now we go to Configuration and look at our network interfaces, and you can see that right now we have one LIF, one logical interface, configured, and it's on cluster1-01. What we want is a second LIF configured on the other node in the cluster. So we create an interface, select it for both data and management, and use the same naming convention we used for our first LIF. It's always good to name your LIFs something logical; if a LIF is for CIFS and NFS, incorporate that into the naming convention. Because our Vserver is only set up for NAS protocols, those are the only options we have to check this time. We browse for our home port, this time selecting the second node, and again we use the same e0c port. This time we use the IP address 192.168.0.132, again with a 24-bit subnet mask and a gateway of .1, and hit Next when we're done. Once this completes, our Vserver has two LIFs, one on each node, for redundancy and resiliency, and we can see both LIFs listed there.

Let's take a quick look at the CIFS and NFS configuration. We go into the Protocols section under Configuration and click on CIFS: we can see it's started, vs1 is the system name, and it's joined to the Active Directory domain demo.netapp.com. We click on NFS and can see the server status is enabled, which means all the services are properly started; currently we have NFSv3 enabled. Now we go into Policies and click on Export Policies, which is for NFS. You can see the default policy is set up, and what we're going to do is add a rule to it. This rule is a single access entry that grants read-write and root access to any host on the network, without regard to which protocol it's using. Like I said, this is just for demonstration purposes; it's a wide-open policy, not something you would typically want in your environment.

Now we go into Storage and then Shares. You can see the two default admin shares that exist, and we're going to create a new one. Right now the only folder we have available to share is the root volume itself, so we click on that, hit OK, create a share called share1, and hit Create. Then we hit Edit on that share, and you can see we have the ability right from here to edit the ACLs on the share itself; right now it's open to Everyone with Full Control. We also have various options we can change. One I always like to enable is showing snapshots, which exposes the read-only ~snapshot directory and gives users the ability to do their own restores. Some administrators prefer not to have that option checked, so you can turn it on or off on a share-by-share basis.

Another thing we're going to look at here is local users and groups. This is where you can map UNIX users to Windows users and vice versa, so we'll show what that looks like. We add a Windows-to-UNIX rule, which means the demo\administrator account will have access equivalent to the root user, and then we set up the reverse, a UNIX-to-Windows rule that does the same thing: if you're logged in as root, you have the same access that demo\administrator would. Typically you want to map accounts that have the same level of access. There you go; now you've mapped the local root user to the domain administrator user.
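Again, the GUI steps have clustershell equivalents. A hedged sketch of the main ones; the LIF name is illustrative, not taken from the video:

    ::> network interface create -vserver vs1 -lif vs1_nas_lif2 -role data -data-protocol nfs,cifs -home-node cluster1-02 -home-port e0c -address 192.168.0.132 -netmask 255.255.255.0
    ::> vserver export-policy rule create -vserver vs1 -policyname default -clientmatch 0.0.0.0/0 -rorule any -rwrule any -superuser any
    ::> vserver cifs share create -vserver vs1 -share-name share1 -path /
    ::> vserver name-mapping create -vserver vs1 -direction win-unix -position 1 -pattern demo\\administrator -replacement root
    ::> vserver name-mapping create -vserver vs1 -direction unix-win -position 1 -pattern root -replacement demo\\administrator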
So now we're going to create a couple of volumes, vol1 and vol2, and demonstrate the whole concept of the namespace and junction points within clustered Data ONTAP. We create a new volume within the first data aggregate on node 1: a 1 GB thin-provisioned volume called vol1. If you look under the Namespace tab, you can see it immediately becomes a child of the root mount point. Then we create a second volume, called vol2, the same 1 GB in size, thin-provisioned as well, and also tied to aggr1 on node 1. Again what you'll see is that it automatically mounts itself under the root volume.

What we're going to do here is play around with the concept of unmounting and remounting volumes at different junction points, at different levels, with different names. You can see we just unmounted vol2. Now we choose a new junction point for vol2: we're actually going to call it "projects" instead of vol2, and instead of mounting it under the root volume, we mount it as a child directory of vol1. This gives us the capability to build our namespace structure however we see fit.

Next we're going to play around with CIFS shares and access our data via the CIFS protocol. We map a drive to the share1 share we created earlier: \\vs1\share1, where vs1 is our Vserver name, mapped to the Z: drive. Once that mounts, because the share is on the root volume, you see vol1 as a folder; but we know from the work we just did that it's actually a volume in NetApp. It's not even deletable; it's a read-only directory, and all these junction points simply present themselves as folders under the shares. I have the capability of creating a file, just a quick little write test, to show that I can create a file and save it to that directory structure. I'm able to save my work.

Now we'll try the same thing in the Linux world. We open a PuTTY session to one of our Red Hat servers, choosing the first one, and log in with root credentials. We create a directory called vs1 and enter the commands to mount the vs1 share on it as an NFS mount point. Now if I do a df, I see vs1. I change directory right into it, and if I do an ls I see the CIFS text file that I created, and I see vol1, which is the directory that is a mount point under the root volume. So even though we're coming at it through a different protocol, NFS, we're seeing the same thing from Linux that we did in the Windows world. If I change directory down a level into vol1 and do an ls, I see that "projects" mount point, which is actually vol2 with its junction renamed to projects; from a directory perspective, all I see is the word projects as a folder. We go back up one level to vs1, and I'll show that I can read the CIFS text file. Then I write my own file into the same directory from the Linux host over NFS, and we cat it just to make sure we can read it. There you go.
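The volume and junction operations, plus the Linux-side mount, look roughly like this at the command line. The volume names and export address are the lab's; the exact flags are a sketch:

    ::> volume create -vserver vs1 -volume vol1 -aggregate aggr1_cluster1_01 -size 1g -space-guarantee none -junction-path /vol1
    ::> volume create -vserver vs1 -volume vol2 -aggregate aggr1_cluster1_01 -size 1g -space-guarantee none -junction-path /vol2
    ::> volume unmount -vserver vs1 -volume vol2
    ::> volume mount -vserver vs1 -volume vol2 -junction-path /vol1/projects

    # and on the Red Hat host, mounting the root of the namespace over NFS:
    mkdir /vs1
    mount -t nfs 192.168.0.131:/ /vs1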
What we're going to do now is create another Vserver, and this time we're going to create it for block-level access. In our lab we don't have Fibre Channel, so this will just be for iSCSI. We're going to call it vs_luns, and we check the iSCSI protocol only. Again we accept the defaults here, and we choose the first node of the two nodes in the cluster. We accept the DNS information that's in there and hit Save and Continue, then OK. What we're doing here is specifying how many LIFs per node we want to designate, and we give it a starting IP address. According to our lab sheet, the IP address range we have to work with starts at 192.168.0.133, with a 24-bit subnet mask and a gateway of .1. Submit and continue. Again we create a vsadmin account, which is an administrative account for this Vserver only; it would not have administrative rights over the vs1 Vserver we created earlier, just over the vs_luns Vserver. We also create a management LIF for this Vserver only, 192.168.0.137, and again we use e0c for that purpose. We see the summary information, hit OK, and you can see this Vserver is set up for iSCSI only.

Now we're going to create the LUN to attach to the Windows host. One thing we want to do first is go into the iSCSI Initiator properties on the jump host that we're going to present the LUN to, and grab the initiator name; we just copy that for later. OK, so now we go into our vs_luns Vserver and then into the LUNs section, and we say create a LUN; at the same time, we're going to create a volume as we go through this wizard. We call the LUN something logical for a Windows LUN, so we'll call it windows_lun. We specify that it's a Windows 2008 or later operating system, which is just for block alignment. We're going to create a 200 MB LUN, very small, and we're even thin-provisioning it, then we hit Next. We get the ability here to create a volume while creating the LUN, so we pop in and again choose the first data aggregate on node 1. Now we need to add an initiator group, so again we call it something somewhat logical for a Windows initiator group, and we choose the port set: we accept the default, which has all the iSCSI LIFs created during the setup process we went through when we set up the Vserver, so we see all four LIFs. We give the initiator group a name, then go to the Initiators tab, and this is where we enter the initiator name we took from the jump host. We hit Add; we should be able to copy and paste, so delete that, highlight it, and Ctrl-V. There we go. We hit OK, then hit Create, and we've successfully created the initiator group, so we hit OK. We check the box to automatically map it, because it has not actually mapped the LUN yet using that igroup; it has only created the igroup. We hit Next, Next, and this is where it actually does the work and maps the LUN to the host, and then we hit Finish. At this point the LUN is actually mapped to the jump host, and you can see the LUN right there; we'll do something with it in the next section.
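For comparison, the CLI version of LUN creation and mapping would look something like this. The volume name, igroup name, and initiator IQN below are placeholders; the real IQN is whatever you copied from the jump host's iSCSI Initiator properties:

    ::> lun create -vserver vs_luns -path /vol/win_vol/windows_lun -size 200m -ostype windows_2008 -space-reserve disabled
    ::> lun igroup create -vserver vs_luns -igroup win_ig -protocol iscsi -ostype windows -initiator iqn.1991-05.com.microsoft:jumphost
    ::> lun map -vserver vs_luns -path /vol/win_vol/windows_lun -igroup win_ig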
So now we're on the jump host again, and what we're going to do is set up the LUN; we also want to make sure all the multipath information is set up correctly. We can go into the MPIO properties: if this option were not grayed out, that would mean multipathing is not set up properly, but because it is grayed out, we already have the drivers installed. If they weren't installed, we would install them and then have to reboot the host.

We go into the iSCSI Initiator properties, go to the Discovery tab, and click Discover Portal. We enter the IP address of the first LIF that we created, and we know there are three more to come. Back on the Targets tab, we highlight the inactive session and hit Connect. We check the Enable multi-path checkbox, go into the Advanced Settings, and for the target portal IP address we choose the drop-down box and make sure .133 is selected. We hit OK, then OK out of that, and you can see the status has now changed from Inactive to Connected. So we have one path set up, and we want to set up the other three. We highlight the target and go to Properties; you can see the one session that's active, and we're going to add three more. Again we go to Add Session, enable multi-path, go to Advanced, and for the target portal IP we do the same thing, except this time we choose .134. We hit OK, hit OK again, and you can see a second session in there. Then we do the same for .135 and .136 as well, and what you'll see is that we now have four sessions, two sessions per node, which is the minimum we want per node just for resiliency and redundancy. We hit OK and close out of this.

What we do next is go into Server Manager, go to the Storage section, and go into Disk Management. What we should see now is the LUN being presented, and it immediately wants us to initialize that disk, so we initialize it using MBR. Then we right-click on that unallocated disk, the one showing 203 MB right now, and do a New Simple Volume, which opens up the wizard. We assign it the E: drive and just do a quick format; again, we can give it a label that's a little more logical. We hit Finish, and now you have an E: drive; like I said, the data is fully accessible at this point.

One thing we want to check is that all the multipathing is set up properly, so we right-click on that disk, go into Properties, and go into Hardware, and you'll see the NetApp C-Mode multipath disk. We click on Properties there and go to the MPIO tab, and you can see all the active/optimized paths and non-optimized paths. These are all the ways that data can be accessed from the host all the way to your storage; you just want to make sure you see all the paths you have defined, and again, for us there are four paths there.
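You can sanity-check the same thing from the storage side. A hedged sketch; I'm recalling these command paths from this ONTAP generation, so confirm with tab completion:

    ::> vserver iscsi session show -vserver vs_luns
    ::> lun igroup show -vserver vs_luns
    ::> lun show -vserver vs_luns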
So now, from System Manager again, we're going to create a LUN manually, and this time we're going to do it for the Linux hosts that are out there. It's going to be pretty much the same process as when we created the LUN for the Windows host, except we make sure we choose Linux instead of a Windows OS type; again, that's for proper block alignment, so you don't end up with misaligned LUNs. Same thing otherwise: we do a 200 MB LUN that's thin-provisioned, and we create a new volume, again on node 1's data aggregate 1. We add an initiator group, and the port set again will be all four LIFs that we have defined for that Vserver, two LIFs per node for resiliency and redundancy. We go to the Initiators tab; this time I already copied the initiator name, so we should be able to paste it right in there. There we go. We create that, and the igroup is successfully created, so we hit OK. Again we have to check the map box, because it has not mapped the LUN yet; we've just created the igroup, and once we hit Next, that is where it actually does the mapping. At this point the LUN should be fully mapped to our Linux host, so we hit Finish, and we verify that the LUN is created. Now we're going to hop on the Linux host and verify.

OK, so now we PuTTY back into the first Red Hat server; we actually had a session open from our previous work, so we'll just use that. We're going to check some of the packages that are already installed on this Linux host, and we change back to the root directory. Keep in mind, for all this work we're doing in both the Windows and Linux environments for block-level access, there is software we sell called SnapDrive, and it automates a lot of the things we're doing here as manual steps. From the host side, with the proper credentials, you can do everything right through a GUI without having to first create the LUN on the storage array and then manually do the formatting at the host layer. There's SnapDrive for Windows and SnapDrive for UNIX. For today's demonstration we're going to show you the basic way of doing it, creating on the storage array and then formatting at the host layer, but understand there is additional software to make things a little easier and get everything up and running more quickly.

So we're checking some of the iSCSI session settings here to make sure we're set up properly for iSCSI connectivity from our Red Hat host, and we're checking that we have all the correct packages installed on the Linux host. This is just a way of checking your multipathing settings for the Linux host, and you can see it's set up with ALUA. Now we check the iSCSI service, and you can see it's stopped, so we start the service up. You can see it started, and we check the status one more time: it's up, and it's saying that currently there are no active sessions. Now we scan to find any iSCSI LUNs that are out there, and we see the LUNs that are available to us; the reason we're seeing them is that we did the appropriate LUN masking in our last section. Just had a typo there. Those are all the sessions that now exist. sanlun is actually one of our tool sets: it will show you, from a NetApp perspective, the paths to what you're mapped to, and it's a really helpful tool for figuring out how to access your data from a Linux host. You can see the multipathing daemon has not started, so we start it now, and when we do a status you can see the process is running. It's displaying the multipath device, and we're going to need that when we actually go to mount the LUN. So now we're set to actually map the LUN and get everything configured for our final stage here.
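On a Red Hat host of that era (RHEL 5/6), the iSCSI and multipath steps being described are approximately these; the portal address is the first iSCSI LIF from our lab:

    service iscsi start                                    # start the iSCSI initiator service
    iscsiadm -m discovery -t sendtargets -p 192.168.0.133  # discover the target portals
    iscsiadm -m node --login                               # log in to create the sessions
    service multipathd start                               # start the multipath daemon
    multipath -ll                                          # list multipath devices and ALUA path states
    sanlun lun show                                        # NetApp Host Utilities: map host devices to LUN paths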
OK, so now we run our mkfs command, using ext4 as our file system format, and it formats the file system. Then we do a mkdir to create a mount point for the Linux LUN, we mount it, and we should be able to do an ls on the Linux LUN. There you go. Just as a test, we write a quick little file containing "hello" from the command line, and then we do a cat: we should see that file in there, and we should see it with the text "hello". Oops. There you go. So you see, we've created the LUN, presented it, formatted it with ext4, created a file, filled it with text, and saved it, and we can do an ls -l on that LUN and see the file.

That's going to be it for our intro demo of clustered Data ONTAP today. Please check out our advanced demonstration, which gets a little deeper into the more advanced functionality of cDOT. Thanks for your time, and I hope this was helpful.
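For reference, those last host-side steps in shell form; the multipath device name, mount point, and file name are illustrative (use whatever device multipath -ll reported):

    mkfs.ext4 /dev/mapper/mpathb       # format the multipath device with ext4
    mkdir /linuxlun                    # create a mount point
    mount /dev/mapper/mpathb /linuxlun
    echo hello > /linuxlun/linux.txt   # quick write test
    cat /linuxlun/linux.txt            # should print "hello"
    ls -l /linuxlun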
Info
Channel: Joe Whelan
Views: 52,579
Rating: 4.8865247 out of 5
Keywords: NetApp, CDOT, Clustered ONTAP, Storage, Demo
Id: b_r0lyUzXzM
Length: 50min 21sec (3021 seconds)
Published: Tue Jan 21 2014