Creating a Gluster Cluster on Fedora

Video Statistics and Information

Captions
So I've been playing around with Gluster lately, and I wanted to record a quick screencast on how you can get up and running with a two-node Gluster cluster and a client in about ten minutes on Fedora 25. It turns out it's not that difficult, and this will get you started so you can figure out how to use Gluster in your own environment.

The first thing I'm going to do: I always start from scratch, because I don't want to prepare any of the environment ahead of time and skip steps, which is always annoying when you're trying to go back and do it yourself. So I'm going to create all of these machines from scratch so that there's no funny business. I'll create the two Gluster nodes first, gluster-node-1 and gluster-node-2. This is just what you need to get started. This is obviously OpenStack, but these could be bare metal machines, and the client can be the same as a server; you can mount Gluster volumes on the servers too. There's no limitation here; this is just one way of doing it. So we go in here and we've got all of these starting up.

Another thing: Gluster expects what it calls bricks. Each volume is made up of bricks on different hosts, and a brick is basically just a subdirectory somewhere on the host, on a filesystem that supports extended attributes. Gluster does prefer that a brick be on a mount point other than your root filesystem. That's because Gluster tries to balance usage across the different nodes in the cluster, and if something outside Gluster is eating up disk space, it can't make educated choices about that. So it's better to have the brick storage on a different filesystem. The way I'm going to do that here: these volumes are left over from a previous run, so we'll just delete those and recreate them. I'm going to create Cinder volumes that will be attached at /dev/vdb on the Gluster nodes: one for node 1 at 20 gigabytes, and another for node 2 at 20 gigabytes. All these concepts map to other cloud providers, so you can just use whatever they call these things; what's a Cinder volume here is, for example, an EBS volume on AWS. Then I'm going to attach those volumes to the nodes, and the last thing I'm going to do is assign floating IP addresses to these so that I can get to them from where I am.

Okay, so all that infrastructure build-up is done. Now you can see that these are three from-scratch Fedora 25 machines, so we're going to log into these nodes. I'm going to go through the process slowly on one node and then do it quickly on the other one.

The first thing to do here: it's easier to just become root to issue all these commands, since almost all of them require root privileges. I'm going to turn off SELinux. This isn't necessarily required; the default policies don't know where you're going to put your Gluster bricks, but you can create SELinux rules rather than just globally turning it off like this. Turning it off isn't really a good idea protection-wise, so I wouldn't recommend this in production, but for this demo it makes things simpler, so I'm just going to go with that. Then I'm going to install the Gluster server software. I'll go ahead and catch the other node up at the same time, because this can take a while and I don't want to waste any more time than is required, but I do want to show every step. I'll also go ahead and start the installation on the Gluster client: on the client you'll need a package called glusterfs-fuse, which is the userspace filesystem driver for Gluster.

All right, so now node 1 is done installing, and I'm going to use systemctl enable --now, which enables and starts the Gluster server service. The next thing I'm going to do is create the partition that the brick is going to be on; this is on that Cinder volume I attached to this instance. So I'm just going to create a new partition: one partition, yep, all the space, and then write that. Next I'll create an XFS filesystem on that partition and add it to my fstab so it gets mounted at boot. For this demo I'm not going to reboot these nodes, so it doesn't really matter, but I mounted it at /srv; you can put your bricks wherever you want, so change this for your use case. Then I mount the brick storage and make a directory for the brick. The Gluster volume is going to be called myvol, so I call the brick directories myvol too. Now this node is ready to go.

I'm going to catch up on node 2 and do the same thing: enable the Gluster service, create the brick partition, create the XFS filesystem on that partition, add it to the fstab, mount the brick storage, and then create the brick directory.

The next step you only have to do on one node or the other; I'm going to do it on node 1. There's really no notion of master and slave; they're all peers. But you do have to create the volume on one of the nodes, and in this case that's node 1. So I tell node 1 to probe its peer node, which is node 2 in this case, and then we create the Gluster volume. That's gluster volume create, the name of the volume, and the replica count. In my case it's replica 2, since I've only got two nodes; basically it means that every file written to the volume is going to be replicated onto both of the nodes. Then come the node name and the path to the brick storage on each node. Once you do that, it creates the volume, and then you have to issue a volume start with the volume name, and it says volume start: success.

All right, so now your Gluster cluster is ready, with a volume ready to be mounted by a client. I'm going to jump over to the Gluster client, and the only thing you've got to do is mount it. The filesystem type is glusterfs, which tells mount to use the userspace FUSE driver to mount the Gluster volume to a mount point. The mount source is just one of the nodes (this could be node 2 as well), then a slash and the name of the Gluster volume: not the path to the bricks on the nodes, but the actual volume name, which is the same cluster-wide. Then comes the mount point you want it locally mounted to. So if we go in here, echo "testing" into a file a, and cat a, then look at the brick storage on each of the nodes, you can see that we have a file named a on both: because it's replica 2, it's replicating across both nodes. It's here, and it's also over here.

And there we go. That is a basic Gluster cluster setup on Fedora 25; it didn't take that long, and you can expand greatly from here. Hope that was helpful, and catch you in the next one.
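The package and SELinux steps from the screencast can be sketched roughly as below. This is a sketch, not a verbatim capture of the video's commands; the dnf package names match the Fedora 25 era and may differ on newer releases.

```shell
# On both Gluster nodes, as root.
# Demo only: stop SELinux enforcing for this session. In production,
# write SELinux policy for your brick paths instead of doing this.
setenforce 0

# Install the Gluster server daemon and tools.
dnf install -y glusterfs-server

# On the client machine, only the userspace FUSE driver is needed.
dnf install -y glusterfs-fuse
```

Note that setenforce 0 only lasts until reboot; since the demo never reboots the nodes, that is enough here.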
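The brick preparation on each node (partition, XFS, fstab, mount, brick directory, start the service) might look like this sketch. The device path /dev/vdb matches the attached Cinder volume in the video; sgdisk is an assumption standing in for the interactive fdisk session shown on screen.

```shell
# On both Gluster nodes, as root. Assumes the Cinder volume is /dev/vdb.

# One partition spanning the whole disk (non-interactive alternative
# to the interactive partitioning done in the video).
sgdisk -n 1:0:0 /dev/vdb

# XFS supports the extended attributes Gluster needs on its bricks.
mkfs.xfs /dev/vdb1

# Persist the mount across reboots, then mount the brick storage.
echo '/dev/vdb1 /srv xfs defaults 0 0' >> /etc/fstab
mount /srv

# Brick directory, named after the volume it will belong to.
mkdir -p /srv/myvol

# Enable and start the Gluster management daemon.
systemctl enable --now glusterd
```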
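The peering and volume-creation step, run from node 1 only, can be sketched as follows. The hostnames gluster-node-1 and gluster-node-2 are assumptions; use whatever names or IPs your nodes resolve by.

```shell
# From node 1 only; all nodes are peers, so any one of them can
# issue these commands.
gluster peer probe gluster-node-2

# replica 2: every file written to the volume lands on both bricks.
gluster volume create myvol replica 2 \
    gluster-node-1:/srv/myvol \
    gluster-node-2:/srv/myvol

# Volumes must be started before clients can mount them.
gluster volume start myvol
```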
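Finally, the client-side mount and the replication check look roughly like this; /mnt/myvol is an arbitrary local mount point chosen for the sketch.

```shell
# On the client. Mount by the cluster-wide volume name (/myvol),
# not the brick path; either node name works as the mount source.
mkdir -p /mnt/myvol
mount -t glusterfs gluster-node-1:/myvol /mnt/myvol

# Quick replication check: write a file through the mount, then
# look for it under /srv/myvol on both nodes.
echo testing > /mnt/myvol/a
cat /mnt/myvol/a
```

With replica 2, the file a should appear in the brick directory on both nodes, as shown at the end of the video.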
Info
Channel: Seth Jennings
Views: 2,092
Rating: 4.8571429 out of 5
Keywords: gluster, fedora, linux, distributed, filesystem, centos, rhel, storage, containers
Id: CsoRK9AafZU
Length: 10min 7sec (607 seconds)
Published: Wed Dec 07 2016