Creating a Supercomputer with a Raspberry Pi 5 Cluster and Docker Swarm!

Video Statistics and Information

Captions
So, do you want to learn how to build your own supercomputer cluster using a bunch of Raspberry Pi 5s? Then this is the show for you, so let's dive straight in. I'm Kevin — come with me as we build robots and other things, bring them to life with code, and have a whole load of fun along the way.

Okay, let's get over to a very quick keynote and then we'll get right into the live demo, I promise. So yes, this is all about a Pi 5 cluster. I've been running a cluster — you can just see it behind me there — for quite a number of years now, and that's actually Raspberry Pi 4s. Now that the Raspberry Pi 5 is out, I bought four Raspberry Pi 5s and we're going to look at how to build them into a cluster. We're going to learn how to use Docker Swarm, which is the clustering technology I've chosen, mostly for its simplicity. We'll have a quick look at what Swarm is all about and how we use it, look at some of its key features, and talk about why you'd use it in a small-scale environment — particularly if you feel like learning something, it's perfect for that; you don't need the complexity of learning something like Kubernetes. I've also built a course for this. I've actually just unplugged that Raspberry Pi cluster because I was having some bandwidth issues just before the show and I wanted to rule out that it was stealing all my bandwidth — but there is a course on kevsrobots.com, Learn Docker Swarm, so you'll be able to go over everything we're covering today on that course.

Before we get into it, a message from today's sponsor, PCBWay. This video is sponsored by PCBWay, your ultimate destination for all things PCB manufacturing and assembly. Whether you're a hobbyist, a startup or a seasoned professional, PCBWay has got you covered. PCBWay offers an impressive range of services: they provide high-quality, custom-designed printed circuit boards for any application you can imagine. From single-layer to multi-layer, flexible and even rigid-flex PCBs, they have the expertise to bring your designs to life. PCBWay ensures fast turnaround times and affordable prices without compromising on quality; with their state-of-the-art facilities and advanced manufacturing techniques, they can handle anything from small prototype orders to large-scale production runs with equal precision and efficiency. PCBWay offers additional value-added services such as PCB assembly, component sourcing and even functional testing, so you can trust them to deliver fully assembled and tested boards ready for integration into your projects. One of the best parts of PCBWay is their user-friendly online platform: it lets you easily upload your designs, get instant quotes and track the progress of your orders in real time, and their dedicated customer support team is ready to assist with any questions or concerns. So whether you're working on an innovative Internet of Things device, a robotics project or anything in between, PCBWay is your go-to partner for reliable and affordable PCB manufacturing and assembly. Head over to pcbway.com today and turn your ideas into reality. Thanks again to PCBWay for sponsoring the channel.

Okay, so what is Docker Swarm? You might have heard of Docker before, but have you heard of Docker Swarm? It's Docker's clustering technology.
It's been around quite a while. There has been talk of deprecating it at some point, so some people might be surprised to learn that it hasn't been deprecated — it's still completely supported and actively used by a lot of people. It's technology for building clusters, also known as a container orchestrator, which is a fancy way of saying it schedules tasks and so on. What a cluster can do is take your Docker containers — programs running in the background, which we covered in a couple of shows over the last few weeks — and host them on different nodes in your network. A node here is just another Raspberry Pi: I've got four Raspberry Pis, named 1, 2, 3 and 4 for simplicity, and I don't care which node a container runs on. I simply want to be able to hit an IP address, send traffic to it, and have it do whatever it needs to do. Docker Swarm takes care of all of that — we'll look at it in detail in a minute — and it means several computers can act as a single unit. It pools all that memory and processing capacity so you're effectively dealing with one big machine, and you can add extra worker nodes or take them away, scaling up or down however you wish. It's very simple to do.

So we're going to be using the Pi 5. I've got another one, bought from Pimoroni this time, along with their new NVMe Base, which provides storage: that's the NVMe Base with a 500 GB NVMe drive in it. You attach it underneath the Raspberry Pi 5, connect the little cable, format the drive, install Raspberry Pi OS and away you go. I've got four Raspberry Pis already, in the corner behind me — they're live but not doing anything at the moment, with Raspberry Pi OS installed.

Why choose Raspberry Pi 5s for this cluster, rather than just a desktop PC? One reason is single point of failure: if that one PC dies — yes, it might be a bit faster than a Raspberry Pi 5 — but with a whole bunch of Pis we don't care too much if one dies, because another can take its place immediately. They're very fast, and they can do pretty much anything you'd need to host, particularly in a learning environment. They're very small, so the form factor means we can fit a whole bunch of them in a space much smaller than your average desktop PC. Storage-wise we've now got NVMe drives, so each of these Raspberry Pis has a 1 TB NVMe drive: fast, with plenty of room for as many containers as we wish. And they're affordable. Buying a cluster of multiple computers normally gets very expensive — with desktop machines you're probably talking a couple of thousand pounds, whereas here it's a couple of hundred, it's much smaller, and you can start with just two Raspberry Pis. They don't even have to be Raspberry Pi 5s, but in this case that's what we're using for this particular project.
So what you'll need, if you want to play along, is two or more Raspberry Pi 5s. I'd say you definitely need the 27-watt power supply — the new one Raspberry Pi produced — because the Pi 5 uses more power than the Raspberry Pi 4, and a mobile phone charger probably isn't going to cut it for the kind of work we're going to throw at these, particularly things like large language models. You'll also need a bunch of short network cables: in the picture on the right there's a rack (more on that in a second) with four Raspberry Pis, each with a little patch lead connecting to the small desktop switch underneath them, so you'll need that desktop switch and the cables too — the cables are pretty cheap. I've gone for NVMe drives for fast storage, so you'll need the NVMe Base or a HAT — the Pineberry Pi HAT, for example — plus an NVMe drive. I've gone that way because it gives more storage and it's more reliable than an SD card; I've got a little graveyard down here of SD cards that have died, and I think that one in particular has a burn mark just underneath the "32", where something terrible clearly happened.

I've also got this nice cluster rack. I think this particular model isn't for sale anymore, but I bought it from The Pi Hut and they've got a nice selection there. Software-wise, I've installed Docker already — there's a really nice script on kevsrobots.com, in the Docker course, that you can download and it will install Docker and set it up for you — and I've set up SSH so I can connect to these over a terminal. So it is a bit techy today; this is probably an intermediate-level course rather than one for beginners per se.

This is the cluster rack I chose. I bought two of them: the one just behind me, and another in the corner that's running the new Pi 5 cluster. I bought these quite a while ago, so there's a newer version from The Pi Hut that replaces this one — essentially they've added a bit of extra space for the NVMe drives. They're around £50 each, which is a bit expensive for a piece of metal, but they do the job nicely: they keep everything cool, separated and secured, and there are two big fans in the back for extra cooling. They're powder-coated with a nice black finish, and you can even add SSD drives externally — I think there's a slot for that.
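Before the live demo, it's worth recapping what each node needs: Docker installed and SSH access, plus a fixed IP address. Kevin's own setup script lives in the Docker course on kevsrobots.com; as a rough stand-in (this is not his script, just Docker's generic convenience installer plus a quick sanity check), the per-node preparation might look something like this:

```bash
# Run on each Raspberry Pi (assumes Raspberry Pi OS with network access)
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Let the current user run docker without sudo (log out and back in afterwards)
sudo usermod -aG docker $USER

# Sanity checks before clustering
docker --version      # confirm Docker is installed
hostname -I           # note the node's fixed IP (e.g. 192.168.2.1 ... 192.168.2.4)
```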
Right, let's get over to the real meat of today's show. What I've got here are four terminals, connected to my Raspberry Pi 5s. If I type pinout on one of them, it tells me the spec: this is a Raspberry Pi 5 Model B with 8 GB of RAM, and it's got the NVMe drive plugged in, so there's plenty of storage. Interestingly, it still lists microSD, but if I run df -h (the -h is for human-readable) we can see 828 GB free — that's our NVMe drive.

So what we want to do is initialise our cluster. I'll bring up my notes on the other screen, because there are quite a few commands to have ready. We type docker swarm init to initialise the swarm, and because the Raspberry Pis have two network connections — Wi-Fi and Ethernet — I need to tell it which IP address to advertise on, so I add --advertise-addr followed by the address, which is 192.168.2.1. That's a nice tidy number: each node's IP address ends in its node number, because I've set fixed IP addresses. I'll maybe show that later, but it's something you need to do for each Pi. Sometimes you can do it on your home router, or whatever provides DHCP on your network, so it always hands out the same address; in this case I've just forced a fixed IP on the Pi itself.

I hit return and it says the swarm has been initialised, the current node is the one with this long ID (don't worry about that), and then it says "to add a worker node to this swarm, run the following command" — so I copy that command to my clipboard. If I type docker node ls, it lists all the nodes in our swarm: currently just dev01, which is ready, active and the leader. Down in the second terminal, connected to my second Raspberry Pi, I paste in that docker swarm join command and it's added to the cluster as a worker node. Back on the first Pi, docker node ls shows two nodes, dev01 and dev02, both active and ready, but only one is the leader — the first one — which means it carries the responsibility for remembering which nodes are connected, what services are running and so on. We'll connect dev03 the same way: paste in the same command, and docker node ls on node one shows node 3 has been added. I'm going to leave number four off for now, because I want to show you how to scale things out a bit later.

If I type docker stats on each node, it shows whether anything is running on that particular node, so when I push something out using the cluster you'll see them automatically pick it up and start running it — that's what that's for.
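As a recap of that sequence, here is a minimal sketch of the commands used — the addresses match the ones in the video, and the join token shown in the comment is illustrative, not a real one:

```bash
# On the first Pi (dev01, 192.168.2.1): create the swarm and advertise on the wired interface
docker swarm init --advertise-addr 192.168.2.1

# The init output prints a join command; run it on each worker node, e.g.:
#   docker swarm join --token SWMTKN-1-<token> 192.168.2.1:2377

# Back on the manager: list the nodes and check their status
docker node ls

# On any node: watch the containers it is currently running
docker stats
```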
Right, back to the slides for a moment. The next thing to talk about is the registry: we need somewhere to store container images, so we're going to create a registry on the cluster. I've created a whole bunch of Docker Compose files, which are essentially the ingredients list for how to build and run a container. I'll show you what one looks like inside, but if you want to know more about Docker Compose files, I did a show on those a while back.

So I go into clustered-pi/stacks and list the stacks — you can see there's a whole bunch of them. A stack is essentially what Docker Swarm uses to host a service: you might have, say, a website and a database in separate containers, but you run them together as a stack, with one relying on the other — the website relies on the database being up, for example. One of these is called registry, so let's go into that folder; an ls shows a Docker Compose file, and here's what's in it. There's not a lot: version 3.9, then services — just one service here, called registry (that's only a name; it doesn't matter what it's called). The image is registry:2, and Docker knows what that is and where to get it from that single line, because by default there's a repository up in the cloud, hub.docker.com, where all these images live — registry is one of them. The container name is registry, and it uses port 5000. The reason 5000 appears twice is that there's an internal port inside the container and an external port we can reach, a bit like a firewall: the first is the one we access, the second is the one inside the container, and I've made them the same so it just works. The restart policy means always restart it if there are any issues. Then there's the deploy section, which is unique to Swarm: limit this service to 10% of a CPU, don't use more than 50 MB of memory, the mode is replicated and we only want one replica — the cluster can have as many nodes as it likes, but there will only ever be one replica of this registry.

To bring it up, we run docker stack deploy, pass the configuration file with -c and the Docker Compose file, and give it a name such as registry. That's now created on our swarm: docker stack ls shows that registry exists, and docker ps shows what's running on this particular node, so we can see the registry container itself is running right here — it's been up 15 seconds.

We can check whether there's anything in the registry, and that it's working, using curl. curl is like a URL grabber — a bit like a command-line browser — that shows you what comes back when you hit a particular address. If I try http://192.168.2.1/v2/ it says it can't connect — do I need port 5000 in there? Yes, of course — so I add :5000, and now it returns two empty curly brackets, meaning there's nothing in it. You can even add /_catalog on the end (once I've typed "catalog" without too many cats in it), and it confirms the repository has nothing in it yet — we haven't added anything. That's what this registry does: it holds a collection of containers that haven't been started up yet, which are called images. A bit of terminology there.
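A rough reconstruction of that registry stack, based on the settings described above (the file name, restart policy mapping and stack name are my assumptions, not the exact file from the video):

```bash
# docker-compose.yml for the registry stack (reconstructed from the description)
cat > docker-compose.yml <<'EOF'
version: "3.9"
services:
  registry:
    image: registry:2
    ports:
      - "5000:5000"          # host port : container port
    deploy:
      mode: replicated
      replicas: 1
      resources:
        limits:
          cpus: "0.10"       # at most 10% of one CPU
          memory: 50M
      restart_policy:
        condition: any       # always restart if it stops
EOF

# Deploy it as a stack called 'registry', then check it
docker stack deploy -c docker-compose.yml registry
docker stack ls
docker ps
curl http://192.168.2.1:5000/v2/_catalog   # returns {"repositories":[]} while empty
```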
Okay, on to the next piece. One of the really nice pieces of software for managing a Docker swarm is Portainer, so I'm going to install Portainer onto this swarm — it'll be interesting to see which of the three available nodes it lands on — and then I'll show you how to visualise all the different containers running on it.

Back over here, I back out of that folder and go into my portainer folder. All of these Docker Compose files were created to run the little cluster flashing away behind me, the one that runs kevsrobots.com. So there's another Docker Compose file here; let's have a look inside. The service is called portainer, and it grabs the Community Edition image of Portainer from Portainer's own registry. It also creates a volume — we talked about this a couple of weeks back when we looked at Docker: a volume is a way of storing an entire file system in a single file on the Docker node, which gives us some permanence, because without a volume the container forgets everything it learned in that session when it stops. What's actually happening here is that one of the volumes this container connects to is a special one: /var/run/docker.sock. That's a socket, which is a way for processes to talk to each other — one process over here, another over there, communicating through the socket — and it essentially punches a hole out of the container into the host environment and talks to the Docker socket, which is how Docker itself works internally. The restart policy is unless-stopped, so it keeps running; it runs in privileged mode, which essentially means running as root (sudo); and it exposes two ports, 8000 and 9443 — we'll use 9443 in a second to access Portainer. Then the deploy section, again specific to the swarm, has a placement constraint: node.hostname == dev01, so I've told it that it must run on dev01, this particular host of the four.

Let's bring it up: docker stack deploy -c with the Docker Compose file, and let's call this one portainer. Once it's running, we go to the web browser and open the address on port 9443. We have to click Advanced and Proceed because the browser thinks it's unsafe — it's using a self-signed certificate — then type in a password, and the same password again (which obviously I didn't manage the second time), and create the user. It wants to add an environment, so I click Home and then this Local environment, and it recognises that this is a swarm. If you've used Portainer before, you normally get about five or six options, but now we also have things like Secrets and this Swarm section. Clicking on that shows the list of all our nodes: one manager and two workers, all with four CPU cores and 8.4 GB of RAM. They're running different engine versions, but I think that's just the manager and workers reporting differently, which is normal, and the status is ready and active. We can visualise this much more nicely: clicking through and scrolling up, we can see that node number one has two containers running on it — the Portainer container, the very thing running this software, and the registry, which shows up as registry_registry. The reason it gets names like that is that the first part is the stack name we gave on the command line when we deployed it, and the second part is the service name from the Docker Compose file.
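A minimal sketch of a Portainer stack file along those lines — reconstructed from the description, not the exact file from the video; the image tag and volume name are assumptions — could be:

```bash
cat > docker-compose.yml <<'EOF'
version: "3.9"
services:
  portainer:
    image: portainer/portainer-ce:latest
    ports:
      - "8000:8000"
      - "9443:9443"
    volumes:
      - portainer_data:/data                       # persistent Portainer data
      - /var/run/docker.sock:/var/run/docker.sock  # talk to the Docker engine on the host
    deploy:
      placement:
        constraints:
          - node.hostname == dev01                 # pin Portainer to the first node
volumes:
  portainer_data:
EOF

docker stack deploy -c docker-compose.yml portainer
# Then browse to https://192.168.2.1:9443 and create the admin user
```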
Now, notice how I went to 192.168.2.1 — that's where this particular container is running. What would happen if I connected to a different node? If I connect to 192.168.2.3:9443, you'd expect a blank page, but we don't get one: it's as if we've connected to that very first node. Why does that happen? I can type in admin and the password, and we haven't spun up an additional instance, so something interesting is going on. Even though I've connected to a different IP address, the swarm has routed all that traffic, magically behind the scenes, to the container actually running that service. That's one of the magical things about Docker Swarm: I didn't set up any load balancing or fancy IP routing, it just knew what to do, even though Portainer is only running on 192.168.2.1, the dev01 machine. That's pretty clever, and a bit surprising if you've not seen it before.

Okay, back to the slides for a second — that's how we use Portainer to visualise the swarm, and we'll come back to it a little later. So what's going on here? We've talked about the manager node: it does things like scheduling and storing the distributed state. All the worker nodes work together, but the manager keeps a record of who's running what, and if one of those nodes becomes unavailable — say I pull the power, or there's a network problem — the manager immediately fires up a replacement container on a different node that is still active, so behind the scenes it keeps the service running with very little downtime. It also handles service discovery and, as I said, scheduling tasks. You can see in the little diagram that different containers run on different nodes, and we don't really care where they are — unless, as I did, we pin a particular container to a specific node using a constraint. We're going to add one more node and a whole bunch of different containers to show how this really comes to life.

So let's deploy some more apps and spread the workload around. First, we're going to run something on all of our nodes: a piece of software called dashdot — however you pronounce it — which is a dashboard showing how much CPU, memory and so on is available. Back over here, I go into the dash folder, where there's another Docker Compose file; let's have a quick look. This one pulls mauricenino/dashdot:latest, so it gets the latest version of dashdot. It restarts unless stopped, and again it runs in privileged mode because it wants that system-level information. It makes port 3001 available, both internally and externally, and it mounts the host filesystem because it needs information about disk space — read-only, which is what the :ro on the end means. Then there's the cluster-specific stuff in the deploy section: mode is global. Before, we had services that could be replicated and scaled; if we say the mode is global, every node gets the software. The restart policy there is on-failure, which slightly contradicts the restart: unless-stopped above, but this one sits in the deploy section, so it's the one the swarm uses. Once I deploy this, you'll see dashdot appear on both node two and node three up here — it's quite a small piece of software, so it should deploy pretty quickly.
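Here is a sketch of a global-mode dashdot stack along the lines just described — a reconstruction, with the read-only host mount following dashdot's usual pattern rather than the exact file from the video:

```bash
cat > docker-compose.yml <<'EOF'
version: "3.9"
services:
  dash:
    image: mauricenino/dashdot:latest
    ports:
      - "3001:3001"
    volumes:
      - /:/mnt/host:ro        # read-only view of the host, so it can report disk usage
    deploy:
      mode: global            # run exactly one copy on every node in the swarm
      restart_policy:
        condition: on-failure
EOF

docker stack deploy -c docker-compose.yml dash
# Each node then serves its own dashboard, e.g. http://192.168.2.2:3001
```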
So let's do the usual: docker stack deploy -c, the Docker Compose file, and let's call this one dash — and away we go. Watch how quickly these start appearing: almost instantly they show up, and you can see dash_dash on each node. Again, that's because we named the service dash and we also called the stack dash, so you get that double-dash name. We can access it too: back in the browser, instead of Portainer, let's go to port 3001 and give it a second to fire up — I think it's http rather than https — and it usually takes a moment to gather all the information and put it on screen. We can access it on every node, because it's running on all of them. While I'm here I'll go back to Portainer — I've not logged in on this node before, which is why it's asking for my username and password, which I've typed wrong because I'm typing too fast — and if we visualise the cluster again, we can see the Portainer container, the registry, and now three dashdot containers, one running on each node. Pretty cool, isn't it? Very quick to get up and running.

Okay, let's try something a bit more interesting: building a custom app. What I mean is I've got a piece of Python code, and we're going to get it into a container and run it. By the way, if you like what I do and want me to make more of these kinds of videos, please give this video a like and drop me a comment — let me know if you've used Raspberry Pis in the past, or plan to, to make your own cluster, and whether you'd choose Docker Swarm or something like Kubernetes; I'm very interested to know. And if you've not subscribed to the channel already, please subscribe — it means an awful lot to me and helps the channel grow. I go live every Sunday at 7 o'clock, as long as there are no internet issues like there were today, so you can catch me live and have a bit of a chat after the main show.

So what we're going to do next is containerize — dockerize — a Python app. I wrote an app a while ago for kevsrobots.com that provides a search API: you can type any search string into the little search box on kevsrobots.com and it returns the results, and they actually come from this Python program. I'm not going to go through the program itself, just show you how to dockerize it, but essentially it indexes every web page on kevsrobots.com, builds a full-text search, exports that into a little SQLite database, and then we can query that database for whatever people search on and return the URL, a thumbnail image and some text — the author, the title and the description. That's what we're dockerizing. We need to create a Dockerfile, which contains the instructions for building the image; we need a Docker Compose file, which tells Docker how to deploy the app; and then we need to build it.
Then we need to tag it. Tagging is the part that uses the registry: we essentially tell the registry there's a new image and where it's stored. Once we've built it, we push it to the registry — that's partly what the tag is for — and then we can deploy it with docker stack deploy. So let's go and give that a go.

I'll make this terminal a bit bigger — actually, let me shrink it back down so we can see what's going on. I back right out of clustered-pi and go into kevsrobots.com and then into my search app. The search app is the actual Python program: if I cat app you can see it — it's not a very big program, but it builds the FastAPI API for the search. I've created a couple of files here. One is the Dockerfile; let's have a look at what's in it. Again, it isn't a massive file: it pulls down the python slim image, creates a working directory at /usr/src/app, copies the files from the source folder into the container (that's what COPY . . does), then runs pip install -r requirements.txt — I use FastAPI, so it needs to pull FastAPI into the container. We use port 8000 to access this service, and then it runs uvicorn, which is a way of hosting this FastAPI program — that's what the uvicorn app:app command is — listening on 0.0.0.0, meaning all interfaces, on port 8000.

That's the Dockerfile, which is how to build it. The Docker Compose file isn't very complicated either: we build something called fastapi-app; build: . just means use the Dockerfile in this folder, with all the build instructions in there; we use port 8000 to punch through from the container to the real world; and we bind-mount the current folder to /usr/src/app. The reason for that is the search database: I generate it when I compile kevsrobots.com — a Python program regenerates that database — and I want to pull it into this container, so as long as I've cloned kevsrobots.com the app can reach that database. And we restart if there are any problems.

I've also got a file called deploy, a shell script that simply runs docker build, tags the image with 192.168.2.1:5000 — our registry — calling it search, latest version (the dot just means build the Dockerfile in the current folder), and then runs docker push, which uploads it to our registry using that tag. So let's run deploy: it builds the app — you can see it's only a couple of megabytes — and once it's done, it pushes it up to our registry.
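Here's a hedged sketch of those pieces — Dockerfile, build/tag and push — reconstructed from the description above; the registry address and image name follow the video, but the exact file contents (and a requirements.txt listing fastapi and uvicorn) are assumptions:

```bash
# Dockerfile for the FastAPI search app (reconstruction)
cat > Dockerfile <<'EOF'
FROM python:3.11-slim
WORKDIR /usr/src/app
COPY . .
RUN pip install --no-cache-dir -r requirements.txt
EXPOSE 8000
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
EOF

# Build, tag against the swarm's private registry, and push
docker build -t 192.168.2.1:5000/search:latest .
docker push 192.168.2.1:5000/search:latest

# Note: for a plain-HTTP registry like this, each node may need it listed under
# "insecure-registries" in /etc/docker/daemon.json before push/pull will work.
```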
If you remember, earlier I ran curl against http://192.168.2.1:5000/v2/_catalog — and once I've put my forward slashes in the right places, you can now see the repository has something in it: something called search, the image we've just built. Now that we've built search, we want to deploy it to our cluster: docker stack deploy, just like before, with -c and the Compose file, and a name such as search. Oh — there's an issue; let me see what I've done wrong. Actually, I created two Compose files, and the one I should be using is the stack file — the one whose deploy section says mode: global, meaning run this on all the nodes. So let's run that last command again, but with the docker-stack YAML instead. There we go — and if I shrink this down you can see another container has popped up on each of our nodes: search is now running there.

Back in the web browser, it's on port 8000, and the way this API works is you call the search endpoint with the query as a parameter — I'm going to search for anything to do with Python. Look how quick that was: the API immediately searched the database and came back with all the pages. You can see the results: the URL of the Python intro page, the cover image URL under assets, the page title "Introduction to Python Programming", and the description — understand the fundamentals of Python, and so on. And we can do this on any of the nodes, because the swarm routes it. If I use the IP address of a node that isn't in our cluster yet — node four — it doesn't respond, which is expected; but if I change the address to node three, node two or node one, it works on all of them, and it's also load balancing behind the scenes — if we search for "swarm" we get a different set of results back. So that's the API running live now.
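For example, hitting the search service from any node — the /search path and the query parameter name here are illustrative, since the video only describes "the search endpoint with the query as a parameter":

```bash
# Any node in the swarm answers, thanks to the swarm's ingress routing mesh
curl "http://192.168.2.1:8000/search?query=python"
curl "http://192.168.2.3:8000/search?query=swarm"
```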
If I go to search.kevsrobots.com and plug that little cluster behind me back in, we'll get the live search — let me actually do that. There we go, it's come back with results; let's try "python", once I've typed it correctly, and it pulls back the live results. It probably just takes a second for all the machines to sort out the right IP addresses and whatnot, but that's a real live search going on there. Okay, let me stop playing with that.

So what we've done is containerize a Python program and deploy it across all the nodes in our cluster, and we can check that with docker stats — why is the top one not letting me type? Strange; for some reason that one's become a bit unresponsive. I have seen this before — I think there might be an issue with the Pineberry Pi HAT on this particular node, because all the other nodes are fine, but when this one gets under load it starts to behave a little oddly. I'll log in with another terminal window down here, just in case it's that particular session that's locked up… and it has come back, so that's fine; I think it was just busy with something. Let's replace that session with this one. Okay, docker stats — oh, interestingly, I think there's an issue with my network, because when I plugged that cable in everything seemed to stop working, and I'm not sure what that was; I've also gone out of focus. Let's wait a second for those to come back up and reconnect these sessions — this is live, so sometimes things happen. There we go, it's back, and that one's working fine too, so the momentary glitch has gone away.

Back to the stats, so we can see what's running: on the main one — the manager node — we've got the search program, dashdot, Portainer and the registry, and then the other two nodes have theirs as well. What we want to do next is add a new worker node: we've been working away here and node number four hasn't been doing anything. If I type docker swarm on its own it gives you the help, and it's join-token I'm interested in: docker swarm join-token worker, because I want this node to be a worker. I grab that string and copy it, and as long as node four is behaving itself — let's reconnect to 2.4; something is clearly being a bit glitchy on my home network, and it was working fine before the show. While that's waiting to connect: just before the live stream I had some issues with my broadband, and it was being very, very slow.
It was actually the switch in my loft — similar to this one, which will replace it shortly. It's got a whole bunch of network sockets on it; this one here is a nicer one, and the one up in the loft is a Cisco, I think, and all the ports say "fault" on them, but it's been working fine so I've kept using it. So what I'm going to do is kick that switch by turning it off and on again — it's just at the back of the studio — and you'll be able to see how quickly node four comes back up on the Raspberry Pi 5. Let's terminate that session, and once it's rebooted I can usually see the network light pop to life — which it hasn't quite done yet — there we go, it's come back, so it should connect any second; it's now asking for my password.

Now we can paste in that docker swarm join command, which connects to the manager node — I keep calling it the master; it's the manager node — and everything should work, as long as it's got a good solid connection, which it should have now. Things have been a bit quirky and I'm not sure what was going on. docker node ls now shows four nodes; this one is currently showing as down, I think because it's still coming up, so we'll give it a bit of grace, and once it's ready we'll be able to deploy things to it as well.

Over in Portainer, if I hit refresh we can see the fourth node. One of the reasons it was being a bit slow is that as soon as it came up, the swarm said: right, you've got loads of work to do — you need to be loading dashdot, you need to be loading the search app — and it's got multiple copies of those tasks because I think some failed the first time, but it's working now. You can see it says it was already part of a swarm, which is interesting — I think it did originally receive the join command, which is why it came back and reported an issue. If I do docker stats we can see it now has those dash and search instances running. Just by running the docker swarm join command, because those services were globally deployed, the swarm immediately deployed the containers to the new node, which is why Portainer shows all these things running. We can also filter to show only running tasks, so the failed ones don't clutter the display. Great — so that's how to add an additional Docker node.
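As a quick sketch of that add-a-node flow (the token in the comment is a placeholder, not a real one):

```bash
# On the manager: print the join command for a new worker
docker swarm join-token worker

# On the new node (dev04): paste the command that was printed, e.g.:
#   docker swarm join --token SWMTKN-1-<token> 192.168.2.1:2377

# Back on the manager: the new node appears, and globally-deployed
# services are scheduled onto it automatically
docker node ls
```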
So what would happen if we wanted to restart one of these nodes? Say node four needed a software patch, or you wanted to upgrade the drive, and we want to take it out of our set of running nodes. We need to tell the manager node to stop sending anything to it and to drain all the containers from it — redeploying them elsewhere if necessary — and then we can reboot that node. Let's actually do that; I'll make this a bit larger again so you can see what's going on. I run docker node update --availability drain (making sure I spell "availability" right) followed by the node to drain: dev04.

Watch what happens on node four: the search app has been stopped, the dash container is still running but in a second it will stop too as everything drains — there you go, nothing is now running on number four, and in the background the cluster visualiser shows nothing on it either. docker node ls now lists active, active, active, drain. So let's go to that node and reboot it: sudo init 6 reboots the machine. If I run docker node ls straight away it still says ready, but it's actually down, and a second or two later it changes to down — the manager now knows there's a problem on that node — and the visualiser shows it red or amber, not currently online. Once the node has rebooted — we can tell because we get a terminal back — let's run the stats again, and docker node ls shows it's ready: it's rebooted, we've done our patches or whatever we needed, and we bring it back by setting its availability from drain back to active. Now docker node ls shows everything active, and it's restarted all of those containers that were ready to go on it; the visualiser shows it ready and running the two containers we need. It's really responsive how quickly we can change these things.

Now let's say we actually want to remove this node and use it for a different project — it's luxurious to have four Raspberry Pis all sat there doing the same thing, and maybe we haven't got as much demand as we thought. We can run that drain command again to drain down whatever is running on it, wait until everything has stopped, and then simply leave the swarm: on the node itself we type docker swarm leave — it's as simple as that — and it reports "node left the swarm". Over on the manager it will still show as ready, or possibly down, in the visualiser, because as far as the manager is concerned it's still a node it cares about; it just thinks it's down. To remove it properly we run docker node rm dev04 on the manager, and now docker node ls shows just three nodes, and the visualiser shows the new three-node view.

There was a question in the chat: what happens if the manager fails? We do need the manager to be up and running — typically you wouldn't also use it as a worker node, you'd just have it doing the orchestration — but we can have more than one manager, by promoting another node.
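The maintenance workflow just described, as a short sketch (node name as in the video):

```bash
# Take dev04 out of service: the manager reschedules its containers elsewhere
docker node update --availability drain dev04
docker node ls                      # dev04 now shows AVAILABILITY = Drain

# ...reboot or patch dev04, then put it back to work
docker node update --availability active dev04

# To remove it from the cluster entirely:
#   on dev04:        docker swarm leave
#   on the manager:  docker node rm dev04
```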
So let's add that node back in: docker swarm join-token worker gives us the command again; grab it and run it on the node — oops, I typed the word docker twice — and it's joined the swarm as a worker. Then, to make it a manager as well, we use docker node promote — and we have to run that from the manager — so: docker node promote dev04. If I now do docker node ls to list them all, we've got the four nodes again, dev01 to dev04, all active; dev01 is still the Leader, but dev04 shows as Reachable, which means that if our main manager goes offline we've got a backup. In theory they could all be managers, but that rather defeats the point of having a separation of duties. So, to answer the question there — Coco, or should I call you Adrian — that's what you'd do if you want a bit of resiliency.

Let's see what else we want to do — let's remove some services. Say we don't want that dashdot one any more: it's quite a cute little dashboard, but it's not really designed to work the way I've got it running here. Removing it is really simple. Because we deployed it as a stack, docker stack ls lists the stacks — there's dash — and docker stack rm dash removes it from every node currently running it. If I watch docker stats, in a moment dash simply disappears from all of the running nodes — there it goes — and in the Portainer view we're left with the registry, Portainer, and the search app, which runs on everything.

If we wanted something beefier we could run kevsrobots.com itself, a much bigger application — that's really what I use this cluster for, on the machines behind me. I'd go to the manager node, into clustered-pi/stacks/kevsrobots, and run the deploy script, which is similar to the dockerisation we just did with the Python search program: it builds the image — about five minutes in total, because it's a big site — pushes it, and then the swarm can fire that image out to all the nodes so they can host it. It's like magic: all that complexity handled by a few very simple Docker commands.

One more thing: say we want to take node four out again. docker swarm leave — ah, it complains because this node is now a manager, so we need to demote it first: docker node demote dev04, and now we can leave the swarm.
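In command form, the resiliency and clean-up steps above look roughly like this:

```bash
# Give the swarm a second manager for resiliency (run on the current manager)
docker node promote dev04
docker node ls            # dev01 = Leader, dev04 = Reachable

# Remove a whole stack from every node that runs it
docker stack ls
docker stack rm dash

# Before a manager node can leave the swarm, demote it back to a worker
docker node demote dev04
```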
Now, Docker caches all those images, containers, networks and so on, and sometimes we want to get rid of them because they're taking up space. So we type docker system prune, and it warns: this will remove all stopped containers, all networks not used by at least one container, all dangling images and all dangling build cache — are you sure? If I say yes, we get back about 30 MB, because it wasn't taking up much. You can also run docker system prune -f to force it, so it skips the yes/no question. That's the way to clear out and get disk space back. And if we brought that node back into the fold and made it a swarm node again, it would simply redeploy those applications and start caching them again — it's very quick to tear up and tear down any of these instances on your swarm.

Okay, back to the keynote to see if there's anything else to cover. Merch! We do have merch on the kevsrobots.com website: go to kevsrobots.com/merch and you can get one of these amazing robot maker hats to show you're a true robot maker, and there's the mug as well — coffee tastes so much better in a kevsrobots.com branded mug. We also have a Discord server; if you've not joined, you're missing out on the community when we're not streaming — that's where you'll find everyone — so head over to kevsrobots.com/discord for a free sign-up link. If you want to follow me on social media, I'm all over it: Threads, TikTok, Instagram, X, Mastodon and Bluesky, and I post all kinds of behind-the-scenes stuff there if you're interested.

If you want to help support the show, you can do that in a number of ways and get your name in the credits at the end. Go to kevsrobots.com/coffee and you'll be able to get your name in lights there. There's also Super Thanks and Super Chat: if you're in the chat now you can do a Super Chat by pressing the little dollar symbol — if I switch on the Stream Elements overlay they'll pop up on screen too. I always get those two mixed up: Super Thanks is for when you're watching on replay, Super Chat is for when you're watching live. There's also the YouTube membership programme — the Join button at the bottom — which is simply the price of a coffee per month and helps support the show; it keeps me in Raspberry Pis and robot stuff.

Okay, it's time to thank our supporters who have generously given to the channel. Thanks Nicholas and Wayne for buying coffees recently; we also had Steve, and someone who preferred to stay nameless, who provided a coffee last week. On the Buy Me a Coffee membership programme we've got Alvaro Diaz, Mary-Louise May, Jeff Johnson, Dean, Cy Malin, Brad, Tom Shmy and Steve Phillips — hey to all those people. And on the YouTube membership side we've got Dale from Hybrid Robotics, Bill Hoy, Warren Steel, Steven Cross, John Lamu, Jonathan, AR Orad 39, Vince, John Paul, Jolly, Alistair, Cassie, Tinkering Rocks, JDM, Johnny Bytes, Hans from CheerLights, Michael, and of course Tom as well.
I think that's pretty much everything I've got for you on the main show today. Hopefully YouTube is now showing you, just over here, a recommended video it thinks you'll watch — probably one of the Docker videos or something to do with Raspberry Pi 5s, I suspect — so check that one out; binging videos on my channel also helps it grow, just because you're spending more time with my content. This is the point in the video where, if you're watching on replay, I'll say thank you so much for watching, and I shall see you next time.
Info
Channel: Kevin McAleer
Views: 20,734
Keywords: Raspberry Pi 5 Cluster, raspberry pi 5 projects, raspberry pi, rpi 5, cluster, docker, docker swarm, pi5 docker, Raspberry Pi 5 Docker Swarm, raspberry pi docker cluster, raspberry pi 5 cluster, pi, pi 5, portainer, raspberry pi server, raspberry pi cluster, raspberry pi projects, raspberry pi 4 projects, raspberry pi tutorial, docker swarm portainer, docker swarm tutorial, docker swarm explained
Id: tDENgLiJSh0
Length: 56min 51sec (3411 seconds)
Published: Mon Feb 05 2024