Proxmox 8 Cluster with Ceph Storage Configuration

Video Statistics and Information

Captions
Running a server cluster in your home lab is an awesome way to learn about enterprise applications and to run things such as virtual machines and containers in a way that is highly available. It's, well, just cool. In today's video I'm going to show you guys what I've been playing around with this past week or so in the home lab: I've been doing a lot with Proxmox, as well as experimenting with shared storage, a.k.a. Ceph. So if you've been wanting to create your own Proxmox cluster with Ceph shared storage for high availability, please do stick around for this video.

As you guys know, I run a VMware vSphere environment in the home lab, primarily because that is what I work with a lot in the enterprise. However, I love experimenting with other hypervisors, and quite frankly there are many open-source hypervisors, such as Proxmox, that make a strong case for going open source even in the enterprise. I will leave that topic there for now.

When you are thinking about HA, or high availability, you not only have to think about the compute and memory of your workloads, i.e. which servers are actually running those VMs or containers; you also have to think about storage. Shared storage is a requirement for most hypervisors I know of that offer true high availability, and the reason is that if a node fails and its storage is attached only to that failed node, the remaining nodes cannot access the data that was stored there. With shared storage, all of your nodes can access the same storage, so when you have a failure, the other nodes are able to pick up where the failed node left off. I'm going to step you guys through how to create a cluster using Proxmox 8, and after the cluster is created we're going to create Ceph shared storage between those Proxmox nodes. So, step one: let's create our Proxmox 8 cluster.

Okay, the first thing we're going to do is configure a Proxmox cluster. To pre-stage this exercise, I have three Proxmox servers stood up and configured with IP addresses, just the basic normal configuration, so I can log into each of them: pmox01, pmox02, and pmox03. I'm going to go back to pmox01, click the Datacenter node, and then click the Cluster node. As you'll note, under the Cluster Information screen we have the ability to Create Cluster or Join Cluster. Since I do not currently have a Proxmox cluster, I'm going to choose Create Cluster. I'm going to call this cluster pmox-cluster01, and I'm going to use the existing IP address information in the drop-down, which is the IP address of this Proxmox node. I'm going to click Create, and this kicks off the process to create a Proxmox cluster. As you can see, we've got "TASK OK" at the bottom, so I'm going to close out of this screen.

Now, under Cluster Nodes we currently have pmox01, as expected, and there is no longer an option to create a cluster; instead we can click the Join Information button, and that's what we want. This essentially gives us an encrypted string that we use on our other Proxmox nodes to join the existing cluster. I'm going to simply copy this encrypted string, go over to pmox02, click the Datacenter node, go to Cluster once again, click Join Cluster, paste in that encrypted string, and type in the peer's root password, which it also asks for.
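If you prefer the command line, the same cluster creation and join can be done with the pvecm tool. A minimal sketch, assuming the three node names from this lab and an example IP of 192.168.1.101 for pmox01 (substitute your own addresses):

    # On the first node (pmox01): create the cluster
    pvecm create pmox-cluster01

    # On each additional node (pmox02, pmox03): join via the first node's IP;
    # you will be prompted for the peer's root password and to confirm the fingerprint
    pvecm add 192.168.1.101

    # Verify membership and quorum from any node
    pvecm status
    pvecm nodes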
As we can see, we've got the fingerprint and the peer's link address, and now we simply click Join; the button includes the cluster name, so it reads "Join pmox-cluster01". The task has started, and as you can see, it stops services and joins the node to the new Proxmox cluster. One thing to note: you need join information for each unique node that is going to be in your Proxmox cluster. So I'm going to refresh the screen, go back to my cluster configuration, and click Join Information again; this produces a new, unique encrypted join string. Now, on pmox03, I'm going to repeat my steps: go to Datacenter > Cluster, click the Join Cluster option, paste in the encrypted string, type in the root password, and finally click the "Join pmox-cluster01" button.

If I go back to pmox02, that configuration has already applied. I'm going to refresh the browser session, as you need to do after joining the cluster, and now any of the nodes that we log into should show all of our nodes. We get the login box, log back in, and we should see all three Proxmox nodes, which we do. If we go back to pmox01, we correctly see all of our nodes, and we can do that from any of the Proxmox nodes. So now we have our new Proxmox cluster configured.

Ceph, like VMware vSAN and many other object-based storage solutions, takes advantage of locally attached storage on the server itself. Now, you may wonder how locally attached storage can be shared with other Proxmox nodes. The key is a distributed file system: each server in a Proxmox cluster running Ceph contributes local storage to a logical storage volume, and replicas of your data are created within that Ceph storage pool. When there is a failure, you still have access to the other replicas of your data.

Okay, now that we have our Proxmox cluster created successfully, we want to install the Ceph component on each of our Proxmox nodes. That is easily accomplished by navigating, on each Proxmox node that you're logged into, to the node's Ceph menu option. It will prompt you to install Ceph, as it does here; notice it says Ceph is not installed on this node and asks whether you would like to install it now, so I'm going to click Install. One thing to note on this screen: if you don't have a subscription, you want to change your repository to the no-subscription option. Obviously, in production environments you want that repository set to Enterprise, but I'm just setting mine to no-subscription. I'm going to click the Start Installation button and confirm that I want to install the Ceph component by typing "y" at the prompt. As you'll note, it gives the message at the very end that Ceph Quincy was installed successfully.

So I'm going to click the Next button. Here I'm simply selecting the IP subnet that is available for the public network's IP CIDR range, and doing the same for the cluster network. Now, in a production cluster you're going to want dedicated networks for your cluster network and your public network, so that you have proper segmentation, multiple uplinks, and so on; that's certainly best practice. If you click the Advanced checkbox, you can see the number of replicas and minimum replicas you can set here; the minimum values are three and two, so you have three replicas and a minimum of two. I'm going to click Next, and finally we get to the "installation successful" screen.
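For reference, the same Ceph installation and initial network configuration can be done from the shell with pveceph. A minimal sketch, assuming Proxmox VE 8 and an example 192.168.1.0/24 network used for both the public and cluster networks (substitute your own CIDR ranges, ideally dedicated ones in production):

    # On each node: install Ceph from the no-subscription repository
    pveceph install --repository no-subscription

    # On the first node only: initialize the Ceph configuration,
    # setting the public and cluster networks
    pveceph init --network 192.168.1.0/24 --cluster-network 192.168.1.0/24

    # On the first node: create the first monitor and manager
    # (the GUI wizard does this step for you)
    pveceph mon create
    pveceph mgr create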
Back at the "installation successful" screen: as I already noted, this gives a basic checklist of what we need to do next. We need to install Ceph on all the other nodes, create additional Ceph monitors, create OSDs, and create Ceph pools. We're going to step through all of those steps, so let's click Finish, and I'm going to quickly run through that installation process on nodes two and three.

One of the first things that we need to create is a Ceph OSD, or Object Storage Daemon. This is responsible for storing objects on a local file system and providing access to them over the network, so it's a critical component of your Ceph configuration that we need to initialize in Proxmox. Now we want to configure our OSD, and this is where we actually designate the disk that we want to use for our Ceph storage. I only have one available; this is actually a nested virtual machine that I'm running the Proxmox cluster in, so I've simply added an additional disk of around 50 GB. As you'll note, that's the one available for my Ceph storage, so I'm going to make sure that disk is selected, leave every other option at the defaults, and click Create. This process runs and designates the disk as part of the Ceph storage. I'm going to now flip over to my pmox02 node, scroll down to Ceph, click OSD, then Create: OSD; again we select our disk, accept the defaults, and click Create. The OSD has been created; click the Reload button, and now we have osd.0 and osd.1. Finally, let's navigate to pmox03: click the pmox03 node, go to Ceph > OSD, click Create: OSD, and click Create. Once this process runs, we can reload, and now we see the OSDs on all the nodes are correctly up. If I click back on Datacenter and go to the Ceph node, where before it showed a warning because we didn't have our OSDs properly configured, we now have a health status of OK.

Now let's create our Ceph pool. I'm going to go down to Pools, click Create, and we're going to call this pmox-pool01. For the size values, again, you want a size of three and a min size of two. We're leaving everything else at the defaults: the crush rule and autoscale mode are on their defaults, and everything else looks good, so we're going to click Create. After the pool creates, we notice that all of the nodes properly have this pool available, which they do, and that's awesome: we see pmox-pool01 on pmox01, the same pool on pmox02, and finally the same pool on pmox03.

Another critical component of your Ceph configuration is the Ceph monitor. The Ceph monitor is a special construct that maps the cluster state, including the monitor map, manager map, OSD map, MDS map, and the Ceph CRUSH map. You will see CRUSH listed in the Ceph configuration; not to overcomplicate this, but CRUSH is the construct that allows Ceph clients to communicate with OSDs directly rather than through a centralized server or broker. We want to create the additional monitors, and as you notice, I already have the first node running as a monitor. I'm going to click Create and select my second node; as we note, it popped right in. Then we create a final time, add the third node, and allow this to finish, so that all three Proxmox hosts are functioning as monitor nodes. Okay, all three are displaying under the Monitor section.
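Again for reference, the OSDs, pool, and additional monitors can also be created from the shell with pveceph. A minimal sketch, assuming the extra disk shows up as /dev/sdb on each node (check with lsblk first, and substitute your own device name):

    # On each node: create an OSD on the unused disk
    pveceph osd create /dev/sdb

    # On the first node: create a replicated pool with 3 replicas, min 2
    pveceph pool create pmox-pool01 --size 3 --min_size 2

    # On the second and third nodes: add the additional monitors and managers
    pveceph mon create
    pveceph mgr create

    # Check overall cluster health and the OSD layout
    ceph -s
    ceph osd tree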
All right guys, now for the exciting part that I have been waiting for: how well does our Ceph storage pool hold up when we perform a real-time operation such as migrating a virtual machine from one Proxmox host to another? I've prepared for this demo by creating a Windows Server 2022 virtual machine that is just a basic installation; it has an IP address and everything configured, with the VirtIO drivers installed. What we're going to do is start a continuous ping to the gateway of this particular home lab environment. With the continuous ping running, I'm going to initiate a migration operation from pmox01 to pmox02 and see if the VM correctly moves from pmox01 to pmox02. If you notice, I do have this virtual machine on pmox-pool01, which is our Ceph storage. So let's kick this off: we're going to select pmox02 as the target and monitor this along with the virtual machine.

As we can see, it's starting the tunnel, and it only needs to copy the memory state of this virtual machine, so there's not a huge amount of data being copied. We just briefly lose contact with the console, but as you can see, we've still got continuous pings running, and that is super, super cool. It tells us that our Proxmox Ceph pool, that distributed shared storage between those Proxmox nodes, is doing exactly what it needs to do: all of the hosts correctly have access to it. If that were not the case, we would see far more copied than just this four gigs of memory, which, granted, is a bit anemic for Windows Server 2022, but for this test VM that's what I've got allocated. What's also really cool to me is that this is a nested environment inside of VMware vSphere, so I actually have extra overhead on this virtual machine compared to bare metal, and to see it fail over that seamlessly is awesome.

Well guys, I really hope you've enjoyed this deep dive into Proxmox clustering, specifically with the newly released Proxmox 8, and just how easy it is to install Ceph on our Proxmox 8 nodes in the cluster and allocate a shared storage pool between each Proxmox node. Please do like the video and subscribe to the channel; it lets me know you guys enjoy the content and that I'm going in the right direction, and it also helps to support the channel. Once again, I'm Brandon Lee. Please keep on home labbing, guys. Take care out there, and I will see you on the next video.
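For completeness, the same live migration can be triggered from the command line. A minimal sketch, assuming the test VM has ID 100 (substitute your own VMID and target node):

    # Live-migrate VM 100 from its current node to pmox02;
    # with shared Ceph storage, only the memory state is transferred
    qm migrate 100 pmox02 --online

    # Watch the VM's status during and after the move
    qm status 100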
Info
Channel: VirtualizationHowto
Views: 3,704
Id: -qk_P9SKYK4
Length: 16min 38sec (998 seconds)
Published: Fri Jun 30 2023