Automate with Ansible - Clustered-Pi Part 3

Captions
Hey robot makers, hope you're having a good day so far. Do you want to know how to use Ansible to manage your remote cluster of Raspberry Pis — to install Docker, Portainer and Node-RED? Then this is the show for you, so let's dive straight in. My name's Kevin; come with me as we build robots, bring them to life with code, and have a whole load of fun along the way.

Okay, so somebody's immediately asking whether we have the dogs on the stream today — you can see that Alex isn't in the robot lab as we speak; she's back in Sheffield for this particular episode. Anyway, let's get back to the keynote and dive straight in.

So yes, this is all about automating with Ansible — I'll get into what Ansible is and all that good stuff. The goals of this session, then: we're going to install Docker on our Raspberry Pi cluster. My Raspberry Pi cluster is just sat there, and at the moment it's only got three nodes on it. I did actually have four Raspberry Pi 4 nodes — I'm using Raspberry Pi 4s because they work a lot quicker than Raspberry Pi Zeros, just while I experiment and figure out all the scripts. Unfortunately one node broke; we'll get into that a bit more later on.

So: we're going to install Ansible, we're going to look at what Ansible playbooks are, we're going to install Docker using an Ansible playbook, we're going to install something called Portainer — which is amazing for managing Docker nodes — and we'll use that to manage some remote Docker nodes as well. I'll also show you something I found called Fing, which helps you manage devices on your network. That's really useful if you've got a bunch of Raspberry Pis and can never remember the IP addresses — you can sort of visualize your network. Okay, so what is Ansible? Let's get into that.
Ansible is owned by Red Hat — Red Hat produce the popular Linux distribution. "Red Hat Ansible Automation Platform is a foundation for building and operating automation across the organization. The platform includes all the tools needed to implement enterprise-wide automation." Very business-speak, that, isn't it? So what do they mean by automation? Let's have a look at that through the show today.

We can use Ansible to install software on each node of our Raspberry Pi cluster, and when we start scaling and having more and more nodes — the Clustered-Pi could potentially have 12 nodes on it — that's a lot of work. If you had to log into 12 Raspberry Pis individually, you'd waste a lot of time installing things and getting them all the same. Scripting is the way to go for that, and Ansible lets us do it in a really nice way.

The video I put out midweek last week was about how to log into Raspberry Pis without using a password. That doesn't mean you don't have a password — it means you use SSH keys instead, which is much easier and still keeps that security. So if you've not seen it, make sure you check out that video as well. And as it says on that bottom line there: saving this as code makes it repeatable, reliable and rebuildable. One of the things I love about Ansible is that you can create a script, save that script as code, and essentially that code is making your infrastructure exist. If everything got wiped out and I had to start again with new memory cards, it's very, very easy to build in a new node and get it up and running with the software we want.

Ansible playbooks, then — let's have a look at what these are. A playbook is a script to automate an action, or a set of actions, for a computer or a group of computers. The group of computers is defined in a file called an inventory. We can just use Ansible's default file for that — its hosts file, which is different from the hosts file on your Linux machine — and you can use a different file if you want different sets of inventories. The inventory file can be in INI format or YAML format; I've played with both and they work just as well. Playbooks are defined in YAML, which is a simple format to create, read and manage — YAML originally stood for "Yet Another Markup Language", though officially it now stands for "YAML Ain't Markup Language". So the two files we need to know about with Ansible are inventories and playbooks.

A tip for installing Ansible: it's a Python library at the end of the day, so you can just do pip install ansible. However, it's probably best not to do that on your default machine — it's better to create a virtual environment. The way we do that is python3 -m venv venv, which creates a virtual environment called venv, and then we activate it with source venv/bin/activate. Once that's done we can create a git repository in that folder, do pip install ansible, and then pip freeze will store all the dependencies that Ansible requires into requirements.txt. Using git add will add all those files into the repository, and git commit -m "initial commit" — the -m means message — saves all those files, like the requirements file and anything else in that folder, into the git repository. Then you can push that up to GitHub, or keep a copy locally. It's just a way of controlling the code, because when you're developing these playbooks you'll probably find there's quite a lot of backwards and forwards as you tweak things.
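Those installation steps, condensed into a sketch — folder and environment names are illustrative, and the network-dependent lines are shown commented out:

```shell
# Create a virtual environment called venv and activate it
python3 -m venv venv
. venv/bin/activate

# Put the project folder under version control
git init .

# These need network access, so they're commented out here:
#   pip install ansible              # install Ansible into the venv
#   pip freeze > requirements.txt    # record the dependencies
#   git add .
#   git commit -m "initial commit"   # -m is the commit message
```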
Once you've created your playbooks you can save them to git as well, just as the slide says, using git add and then git commit -m "a useful message about the update" — and it's -m with no space in between; I don't know why there's a typo just there, but it should look like that kind of message. So that's how we install Ansible: very simple, it's a Python library, pip install ansible will get you there, and I'd recommend the virtual environment route.

Okay, so here's an example inventory file — this is what they look like inside. This one is a YAML file; the INI format, which we'll look at in the demo, is a lot flatter and doesn't have this sort of hierarchy. "all" covers all the hosts in our inventory, and we've got master and node underneath — they're actually groups. master has one host, with the IP address ending 1.10, and then we've got a bunch of nodes. We did have four; unfortunately node one is dead — that one there, 1.7. So .3, .4, .6 and .7 just happen to be the IP addresses of the four hosts I've got: a master host and four cluster nodes. The group names master and node sit above the hosts listed underneath them, so it's hierarchical, and it's very easy to add extra nodes just by banging in another IP address. That inventory file can then be used in the playbooks later on to define who we want to send particular packages out to. Instead of just master and node we could define groups like node-red or sql-server, or whatever we want to install on different machines, and define how many machines receive each package — the goal is that we send the same package out to many different machines.

We've looked at Docker in some other videos, so let's do a really quick recap of Docker.
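As a concrete sketch, the YAML inventory just described might look like this as a complete file — the group names and addresses follow what was said above, and the filename is illustrative:

```yaml
# hosts.yaml — illustrative inventory for the cluster
all:
  children:
    master:
      hosts:
        192.168.1.10:
    node:
      hosts:
        192.168.1.3:
        192.168.1.4:
        192.168.1.6:
        192.168.1.7:   # node one — currently dead
```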
Docker is software containers: think about a big container ship with all these containers on it, and inside those containers is everything you need to run a particular application — all the libraries, all the dependencies. "Docker is a platform for developing, shipping and running applications. Docker enables you to separate your applications from your infrastructure so you can quickly deliver software. With Docker you can manage your infrastructure in the same way you manage your applications" — you can use code to manage it. "By taking advantage of Docker's methodologies for shipping, testing and deploying code, you can significantly reduce the delay between writing code and running it in production." What I like about Docker is that it gives us a platform to deploy applications very quickly that run really reliably with very little interaction from me — I can just say "run Node-RED" and it will just run. We'll look at that exact example very shortly.

This is what I want my cluster Docker blueprint to look like — by blueprint I just mean the target picture. I've got a master node that's going to be running Docker, plus nodes 1, 2, 3 and 4 in my prototype; eventually the whole Clustered-Pi could have 12 in there, all running Docker. Then I'm going to run Portainer, probably on each one — it's quite a small application, and that makes it easier to manage individual nodes if you need to; we'll look at what Portainer is shortly. I'm going to run Node-RED; MongoDB, which is a database system; Mosquitto, which is an MQTT broker; MariaDB, another database — it's the community fork that MySQL has now become; nginx on a couple of servers; and Jekyll, which will take some markdown files and make them into a website that I can then host with nginx. The idea is that Clustered-Pi will be a website — if you go to clustered-pi.com, that's where the site will be, and it'll be hosted on that cluster there. So that's what I'm aiming for.

Now, think about containers: they have a few attributes we need to know about. They need a name — if we don't specify one, Docker will just put a random grouping of words together to make a unique name; they're quite funny when you look at some of the names it comes up with. The image is the code, or the app, that's going to run in the container, and on hub.docker.com there's a whole load of pre-built images ready for you to use right away. What I tend to do is go over to Docker Hub and look at what's available that runs on the Arm platform, and then write the script — sometimes it even provides you the docker run command you need to type, with all the different parameters. Then we've got volumes. Volumes are storage — a bit like a hard drive for that particular container — and the idea is that you're separating the code from the storage: if you get a new version of your code you can very quickly swap it out without actually changing the data. By separating the two you get more reliability, because the code isn't hard-linked to the container; the image can be replaced with a new version while the data is kept. Ports are a way of protecting the container from the real world and only exposing what you need, port-wise — port 80 is HTTP, port 443 is secure HTTP, so you'd have to expose those from the container to the real world. We do that using -p and then the ports from and to — from the container's world to the real world. And it might be that you decide to change it
to a non-standard port. Port 80 is the well-known port for HTTP; if we wanted to run a site and not have people hack at it, we could move it to a different port — but then a regular browser wouldn't be able to browse it, because it expects it on port 80. It depends on your use case. Finally, we have networks: all containers run on a hidden internal network, and we need to create other networks if we want to join a container to the real world, other than just exposing a port through the IP address of the node it's running on. If that's a bit confusing, it'll make more sense when we get into actually running a few of the nodes shortly.

Okay — if you like what I do and want me to keep making these videos, please like, comment and subscribe, whether you're on YouTube, Facebook, Twitch or Twitter. I'd prefer people to watch on YouTube if possible — I know if you're watching on Facebook that might be where you found it and how you prefer to view it, and the comments do come through the live-stream software whatever platform you're on, but it's a bit of a richer environment over on YouTube. I do a live video every single Sunday at seven o'clock GMT, so if you're watching this on replay and want to catch me live, that's the time and the location on YouTube. If you want to follow me on social media I'm all over the place: Twitter, Facebook, YouTube obviously, on the web at smarsfan.com, and via email — go to smarsfan.com and join the mailing list; all the details are in the link in bio. I'm on Instagram as well. The details for each of those are on this slide here.
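Before moving on — the container attributes just covered (name, image, volumes, ports and networks) can be tied together in a hypothetical docker-compose file; every name and path in this sketch is made up for illustration:

```yaml
# Illustrative docker-compose sketch of the five container attributes
services:
  web:
    container_name: web    # name — otherwise Docker invents a random one
    image: nginx           # image — the app code, pulled from hub.docker.com
    ports:
      - "8080:80"          # ports — real-world port 8080 -> container port 80
    volumes:
      - webdata:/usr/share/nginx/html   # volume — data kept separate from code
    networks:
      - webnet             # network — a named network instead of the default
volumes:
  webdata:
networks:
  webnet:
```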
Small Robots is the Facebook group, @kevmcaleer is the Instagram, kevsmac is on Twitter, and smarsfan.com is where to catch me on the web; on Facebook itself I think it's facebook.com/kevmcaleer.

There's something I wanted to call out about Buy Me a Coffee. If you're an existing subscriber you might get a message that PayPal is discontinuing that payment method — they're switching to Stripe. I've already set Stripe up, so it should be quite easy to change payment method just by going to Buy Me a Coffee and changing it. If you receive an email saying it's changing, it is a legitimate email: apparently Buy Me a Coffee have fallen out with PayPal. It's to do with PayPal randomly cancelling payments to their members — if creators aren't, what's the right word, advertiser-friendly, PayPal have been cancelling their memberships — so Buy Me a Coffee decided to terminate the relationship and use Stripe instead. Quite a few people use Stripe already, so don't let that stop you from joining up; it certainly helps support the show and I'm really grateful to all the people who've subscribed to date. Alex is putting a link in the live chat below.

There's also a message from Buy Me a Coffee about how to change your payment method: if you've got a subscription you can change the payment method and then, I think, just resubscribe. Sounds a bit complicated, but apparently Buy Me a Coffee were only just notified themselves — it's really short notice — so it is what it is; I'm sure they're about to figure that out. Okay, let's get back over to
the keynote and carry on. So yes — buymeacoffee.com; I really appreciate it when people do that, and I get a message from you when you buy a coffee. There are two ways you can do it: Buy Me a Coffee, or just go to YouTube itself and join the membership there — I think that's quite cheap as well. Okay, enough of that.

One of the concepts I've come across whilst building these clusters is "cattle, not pets" — and I certainly had the pet view of the world when it came to building servers. The scenario laid out by Randy Bias is this. In the old way of doing things, we treat our servers like pets — for example, Bob the mail server. If Bob goes down, it's all hands on deck; the CEO can't get his email and it's the end of the world. In the new way, servers are numbered like cattle in a herd — for example, web001, web002 — and when one of them goes down, it's taken out the back, shot, and replaced on the line by a new one. In the old world I used to have servers with names — I'd name one Neo, or Dionysus; they always had fancy names, and Zeus, I think, was another. Now I simply do the cattle thing: node01, node02. It doesn't matter then — you're not emotionally attached to it, and you've not manually built it either; you've deployed everything to that node with code, with a script, so it can be very quickly rebuilt and replaced. You're not manually installing everything and making it really difficult to reproduce. In the old world, if there was some kind of problem and a server went down, you'd have to think hard about how to rebuild it; now my entire cluster — all the code for it — is saved in GitHub, and I can deploy a cluster like that onto a whole new set of machines with blank memory cards. So
"cattle, not pets" is the takeaway from this session. We're going to run an Ansible playbook to install Docker on all our nodes — we'll do that in a second; I've been testing it this afternoon and it's working nicely now. We'll use the Ansible playbook to install Docker on each node: we'll start from the master node and get it to connect to each of the four nodes — well, each of the three nodes, because one of them is dead — and install on those three. Once that's installed we can start deploying applications to each of them: we'll install the apps on the Docker nodes, install Portainer on each of the nodes, and then I'll create a playbook in YAML for each of the different applications we need. We'll look at some of those files shortly too.

The playbook for Portainer looks like this. It's a YAML file, so it starts with three dashes; the next line down is a dash, then name: and we give it a name — "installing Portainer nodes". The next line is hosts: node, which means: deploy this playbook to the hosts in the group called node — if you remember, there were four different hosts under the node group. Then come the tasks for this particular play, in little groups. Each has a name, which appears when you're running it, so it'll say "remove previous version - stop the container": if there was a previous Docker container called Portainer it stops it, then removes it with docker rm, and then it installs Portainer. To install it we use docker run -itd, then --name with a capital P for Portainer, then -p, the ports to expose: we expose the container's port 8000 to external port 8000 — the first is the inside port from the container, the second is the outside port in the real world, on that node. We map another port the same way, 9443 to 9443. Then we use the image cr.portainer.io/portainer/portainer-ce — ce is Community Edition — at version 2.9.3; that last bit is called the tag. The -v mounts /var/run/docker.sock to /var/run/docker.sock within the container: -v is a volume, and mounting is a bit like when you plug in a disk and it appears on your desktop — it's the same idea within Docker, done with the -v flag. And that last name is the name of the image we're going to run, as it appears on Docker Hub. It looks like quite a lot of stuff — that's why we put it in the playbook; otherwise we'd have to run these commands from the command line individually on each of the nodes. So that's the installation playbook.

What Portainer is — let me start again — is "a centralised service delivery platform for containerised apps", which is a whole mouthful. The way I look at it: it's a really nice user interface for playing with Docker. You can see all the Docker images, the networks and the environments on a particular node, and it makes everything much easier to manage when you want to play about and get a feel for what's going on.

We're also going to have a look at Fing, which is a really nice piece of software — in fact, let's look at it now. Fing allows me to browse all the different devices on my network, and there are quite a few things on there. It gives them categories: there's a TV, there are some Raspberry Pis, there's a Meross smart lighting strip — the lights that go around this room — some Apple TVs, and some Google devices.
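To tie that walkthrough together, here's a sketch of what the Portainer playbook described above might look like as a single file. The task names, ports and image tag follow the description in the video; the ignore_errors lines are my addition, so the stop/remove steps don't fail on a fresh node, and the real file may differ:

```yaml
---
- name: Installing Portainer nodes
  hosts: node
  tasks:
    - name: Remove previous version - stop the container
      command: docker stop Portainer
      ignore_errors: true

    - name: Remove previous version - remove the container
      command: docker rm Portainer
      ignore_errors: true

    - name: Install Portainer
      command: >
        docker run -itd --name=Portainer
        -p 8000:8000 -p 9443:9443
        -v /var/run/docker.sock:/var/run/docker.sock
        cr.portainer.io/portainer/portainer-ce:2.9.3
```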
We've also got a touchscreen — which I think is behind me, just there — and so on: cameras, some ESP devices, Sonoffs (that's the fan, I think, and the mirror). You get to see all your different devices and you can click into them. I want to look at the nodes in my Raspberry Pi cluster, so if I go to the very top and sort by name, and scroll down to "node", we should see node one, two, three, four. Let's look at node two: we get some more details — the MAC address, when it was last seen on the network, and so on. It thinks it's running Chrome OS, which isn't right, and there are some other details there too. It's just a nice bit of software for seeing your network and checking everything about it is nice and healthy; it gives you various reports on that as well, and you can do deep scans and whatnot. So that's Fing — I just wanted to show you that. I think that's the last slide before the demo. Yes — okay, let's get over to the demo.

First of all, I've got four terminal screens open. Let me log into that one there and clear that one as well — that's to make sure Docker isn't installed on there. Oops, let's try that again — actually it doesn't matter if it is; let's see if the install script deals with that. Over here is the Raspberry Pi 400 — the one that looks like a keyboard — which is my main machine, the one I'm deploying all the scripts from. If I do ls, I'm currently above the cluster-pi folder, so let's cd into cluster-pi, and I'm going to activate the virtual environment with the activate command. Okay, so I've now got the
virtual environment activated, which means I can now run Ansible. If I type ansible, it'll just ask what I want to do and give me a friendly, helpful message; and there's another command, ansible-playbook, which I can use to run the various playbooks that we have. In this cluster-pi code I have a folder called playbooks, and inside we've got a whole bunch of different playbooks. I'm going to use dockerc.yaml, so let's have a quick look at what's in there.

Let me scroll up. It says hosts: all, so it's going to install this on every host — every node in the cluster. become means become the sudo superuser, so it runs all these things in superuser mode. Then it has a variables section: the packages-to-install list basically covers what a sudo apt install would — libffi, libssl, python3 and python3-pip.

Adam's asking: what are playbooks? Playbooks are essentially the script you're going to run on many different machines — a set of tasks you want to be able to repeat, potentially many times, as you add and remove nodes on your network. What we're looking at now is a playbook: a YAML-formatted file, and these indentations and colons are all part of the YAML format. The tasks underneath are how we run the various parts of our script. The first one is named "install a list of packages"; it uses apt, the Linux package manager. For the package name there are two sets of curly braces with a variable name in between — that's Jinja templating; you'll sometimes see that style called "mustaches", because the braces look like little moustaches — and it means: replace this with whatever is in packages-to-install, the variable up here. It's a bit like a Python dictionary: it goes through that list, installs each of them, and updates the apt cache as well.

Once it's finished those, it removes any python-configparser package — another apt task, with state absent, to make sure that's gone. The next thing it does is fetch the Docker convenience script: if you go over to get.docker.com there's a whole script there that runs all the install commands in one go, and it's quite a complicated script. We use curl — essentially a command-line download tool — to grab the contents; -o means output to a file, and we output it to get-docker.sh, a shell script, so the task creates the file /home/pi/get-docker.sh. Next it installs Docker: we say shell and run that get-docker script, which creates /usr/bin/docker. Then we add the pi user to the docker group — usermod with the add-group option grants the user pi membership of the docker group. We unmask Docker as well — sometimes you get this weird thing where the service says it's masked, so we unmask it to fix that problem. Then we fix the permissions on Docker's socket, /var/run/docker.sock, changing the mode to 666, which means things like Portainer can run too. It then installs Docker Compose — an extra component of Docker that lets you compose your own containers — which creates /usr/local/bin/docker-compose. Then we start Docker: sudo systemctl start docker.
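Assembled into one file, the playbook just walked through might look roughly like this. It's a reconstruction from the description above, not the actual file from the repo — package names, paths and the Docker Compose step in particular are my best guesses:

```yaml
---
- hosts: all
  become: true
  vars:
    packages_to_install:
      - libffi-dev
      - libssl-dev
      - python3
      - python3-pip
  tasks:
    - name: Install a list of packages
      apt:
        name: "{{ packages_to_install }}"
        update_cache: true

    - name: Remove python-configparser
      apt:
        name: python-configparser
        state: absent

    - name: Get the Docker convenience script
      shell: curl -fsSL https://get.docker.com -o /home/pi/get-docker.sh
      args:
        creates: /home/pi/get-docker.sh

    - name: Install Docker
      shell: sh /home/pi/get-docker.sh
      args:
        creates: /usr/bin/docker

    - name: Add the pi user to the docker group
      user:
        name: pi
        groups: docker
        append: true

    - name: Unmask the docker service
      systemd:
        name: docker
        masked: false

    - name: Fix docker.sock permissions
      file:
        path: /var/run/docker.sock
        mode: "0666"

    - name: Install Docker Compose (exact step not shown in the video; pip is one common route)
      shell: pip3 install docker-compose
      args:
        creates: /usr/local/bin/docker-compose

    - name: Start Docker
      systemd:
        name: docker
        state: started

    - name: Reboot the node
      reboot:
```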
Finally, just to make sure everything works properly, it reboots the Raspberry Pi — there's a reboot command at the end.

I'm going to deploy this to all the nodes. Let me back up a folder and look at my inventory. I've played with the YAML format, but I think I prefer this flat file: INI files just use square brackets. master is the master node; all four nodes are listed under node; and then I've got some other groups as well. There's a node-red group, which is just on 1.3; there's grafana — we'll look at Grafana, which is graphing software, as well; mongodb, which is on 1.10; and then k3s, the lightweight Kubernetes I'm also looking at, which has a group that's just running on the master at the moment — but we're not looking at that today. So this is a way of grouping different IP addresses together, so you can install your software onto different Pis using this inventory file.

The way we run this: ansible-playbook, then -i for the inventory file, which is inventory.ini, then the name of the playbook, which is in the playbooks folder — this one is playbooks/dockerc.yaml — and the last thing is the user, pi; I think that's the right way. Let me see if that runs. It should fail on one of the nodes, because we've still got the dead node listed — and I can see it's immediately failed, because I've not got the SSH key agent running on here. If I try to log into one of the Pis — let's try 192.168.1.3 — it'll ask me for a password. It shouldn't, because I already set up that key last week; what I'm not running at the moment is the agent. If I just do eval and then ssh-agent — is that what it is? Why is it not...? Let me have a quick look at my notes for that.
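For reference, a flat INI inventory along the lines just described might look like this — the grafana address isn't stated in the video, so that line is a guess:

```ini
# inventory.ini — illustrative flat-file inventory
[master]
192.168.1.10

[node]
192.168.1.3
192.168.1.4
192.168.1.6
192.168.1.7

[node-red]
192.168.1.3

[grafana]
192.168.1.10

[mongodb]
192.168.1.10

[k3s]
192.168.1.10
```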
Ah — eval ssh-agent, sorry. There we go, it's now started the agent. So if I do ssh pi@... it should log me in without asking — but it asked anyway. Let's go back to the Docker script and try again... it's failing again, isn't it? I'll try changing that to --user=pi; I did have all this set up before. While it's running I'll tell you about the Pi that failed — no, it's still failing. Let me open up Visual Studio Code and quickly look at the notes I've got in the cluster-pi code, because I keep a reminder there of how to set up the SSH key bit. Hey Tom, how's it going? We're looking at Ansible on Raspberry Pis, deploying software to our cluster. Is it to do with where I'm running this from? No, that looks good. The agent bit — let me find that... ssh-agent... that should work. Let's try again: eval ssh-agent — so that is running. Okay.

The reason it was failing is that it can't authenticate automatically to those nodes, so I'm going to very quickly generate a new key. ssh-keygen will generate me a brand new SSH key; I'll give it a name — let's call this one "live" — and give it a passphrase. Oops, typed that too fast. Then I add that key locally: ssh-add, then live, the name of the file, and type in the passphrase — so that's now been added to my local keychain. Then I just need to do ssh-copy-id with the name of the public key file, live.pub, and the IP address of the machine I want to connect to automatically — let's do 1.3 — and type the password of the machine I'm logging into just one more time. So now, if I do ssh pi@1.3, it
will automatically log me in and that's what's required for ansible so i just need to repeat that last line the copying of the thing to all of the nodes in the network so let's just do that and let's do that as well to is it number six i think let's try that if i've not got it right i don't think i type that password in right then okay great so now let's try running that script one more time the ansible one and let's see if we get any errors connecting to the nodes we should only get the error connecting to the node that's actually died which is this node four so yes gathering facts that's better so um it says no route to 1.7 and that's the ip address of this one so it's connected to number six number three and before which are the three nodes which are on our network and we'll know that it's actually install it because i'll all actually disconnect from the command line so currently if i just do like h top on there h top on there we'll see that it it'll crash that in a second and we can also see that it's running um various different scripts and things on each of them so i'll tell you about what happened to this raspberry pi so that on the very back of this raspberry pi where you put the memory card in when i took this out of the uh the case like that the the this came away from the from the board so the memory card couldn't make contact a little edge connectors couldn't make contact with the board and because it's spring-loaded um there was nothing i could do to make that stick there so i did something really stupid i used super glue to try and super glue that in while the key while the memory card is in there so that's now welded in there and um and it doesn't work so this is actually dead i might be able to run this from a usb drive and get it to boot from like a usb memory stick i've not tried that that's the only thing i can perhaps get this running but i think this is a four or eight gig version so i'm quite loved to write this off as absolutely dead 
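The key setup above, gathered into one hedged sketch. The key name "live", the pi user and the 192.168.1.x addresses come from the video; the empty passphrase here (the video used one) just keeps the sketch non-interactive, and the steps that need a live agent and a reachable node are shown as comments.

```shell
# Generate a fresh keypair named "live" (a real passphrase is a good
# idea in practice -- empty here so the sketch runs unattended)
ssh-keygen -t ed25519 -f ./live -N "" -q

# With a real cluster you would then load the key and push it to
# each node so Ansible can log in without a password prompt:
#   eval "$(ssh-agent)" && ssh-add ./live
#   ssh-copy-id -i ./live.pub pi@192.168.1.3   # repeat per node

# Show the generated private and public key files
ls -l ./live ./live.pub
```

Once the public key is on every node, `ssh pi@<node>` logs in unattended, which is the precondition for the Ansible runs in this session.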
Obviously I'm quite sad that it's not working — that was our fourth node, and these things are not exactly cheap. Right, let's look at what's happening on screen. We can see the install of packages running on three of the nodes: it's removing configparser, fetching the Docker convenience script, and then installing Docker on each node. 1.4 is the one at the top; let's see if we can spot it in htop — it shouldn't be taking up too much RAM — it's probably that one that keeps popping up. There we go: it's changed the permissions, installed Docker Compose, and two of the machines have rebooted. The third is still running, which suggests something's not quite right on that node, so I'll go in manually and reboot it with sudo init 6, and then ping them so we know when they've come back up: 192.168.1.6... and this one is .3, so let's ping that, and ping .4 on the other. They're all coming back up now — I'm just pinging to see when they've rebooted — and .4 has probably done it already. Yes, they're all back up.

Now I can log back into them — only to show you what's going on on the individual nodes; the whole point is that we do all of this remotely, so we don't really need to log in at all. That one was saying "connection refused" on number three, so I'll keep pinging it and see what's going on. This one lets me log in, so I can now type docker, because Docker is the thing we just installed, and we should see the command-line help come up. Let's do the same on that one — docker — yes, that's looking good. If I do docker ps and it comes up with the container header line, that means the Docker daemon, the service, is running. That one isn't: it's in a bit of a failed state, so I'll probably need to wipe it and start again — I've been experimenting a lot with this, installing and uninstalling, and it's got itself into a state. And for some reason this one needs a bit of a kick as well; that's node three, so I'll just give it a manual reboot.

Let me point at the right ones: that's node three, node four and node two, and node one is the one that died, which sat just there. What you can see here is a dedicated network switch — a little TP-Link one; I'll move this so you can see it better — and a power supply with a wire running to each of the devices, so they've each got a nice little lead. The network cables are nice short patch leads with silver ends — I've got a whole multicoloured bag of them. This is just the prototype: when we move to the Raspberry Pi Zeros it's all going to be wireless, so I want to experiment on some faster nodes before we get to the slightly slower Zeros — slower in that they've got less memory, so it's a bit more of a challenge. Okay, let me see if I can connect to that one now — what's going on there? Let's try logging into it — it's back up now — and see if we've got Docker installed on that one as well: docker
ps. So Docker is running properly on at least two of the nodes. Let me reboot that other one again — we'll leave it in its failed state and come back to it, because this is part of why we run Ansible: sometimes things like this happen, and because everything is in a script we can just run it again and it will deploy everything we need.

I'm now going to deploy Portainer onto those Docker nodes, using exactly the same method as before, but instead of the Docker YAML playbook I'll use the Portainer playbook. If I run that, it will try to connect to each of those nodes and install Portainer. We might get some text saying it couldn't connect to node four, because that one's currently offline while it reboots — actually, it looks like we can connect to it now, so let's see whether Docker is properly installed on there. No, it's still reporting an error. Looking at the output, it's ignoring some of the "remove previous versions of Portainer" steps and then reporting problems connecting to Docker on that node, but the other ones have connected and they're fine. If I now do docker ps on this one, you can see an extra container running on those two — nodes three and two are running Portainer correctly, and those are IP addresses 1.3 and 1.6.

Right, so what I'll do now is open a web browser and connect to Portainer running on those nodes — 1.3, and I think it was port 9443. The browser says the connection is not private, but I'm just going to accept that, and here's Portainer. That's how quick it was to deploy to that Docker node; it's immediately there. I'm just going to type in a password and then we can have a play with Portainer.

If we click the home button, we can see we have a local Docker installation with one container; let's click into it and look around. There are stacks, images, networks, containers and volumes — all the things we called out earlier in the presentation. If I click on Images, there's a Portainer image and also a Node-RED image, probably from a previous installation. I can get rid of that just by selecting it and clicking Remove, which saves us a bit of storage on the Raspberry Pi — I clicked it twice there; it had already removed it but hadn't refreshed in time. So now there's just the Portainer image. If we go back to the dashboard and look at Containers, there's just the Portainer instance, which is what we're looking at right now. You can see it's got an internal IP address — not something visible on my network; it's internal to the Docker configuration — and we've published some ports, 8000 and 9443. 9443 is the one I'm connected on, and that's why the browser complains it's not secure: it's running over HTTPS but without a valid certificate. We're looking at Portainer looking at itself, a bit inception-like. We can click in for some details — how big the image is, that it's running on Linux on an ARM platform — and I can back out, or even edit this if I wanted to. If I go back to the Portainer container, we can do things like view the log file: it couldn't find an admin account because this was the very first run, and it was unable to communicate with various environments — there are no problems there,
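Stepping back for a moment: the Portainer playbook that deployed this container might look roughly like the sketch below. This is a hedged reconstruction, not the video's actual file — the module names assume the community.docker collection, and the ports and volumes match what the container details in Portainer show (8000 and 9443 published, the Docker socket and a data volume mounted).

```yaml
# Sketch of a Portainer playbook (assumed, not the video's file)
- hosts: docker
  become: true
  tasks:
    - name: Remove any previous Portainer container
      community.docker.docker_container:
        name: portainer
        state: absent
      ignore_errors: true

    - name: Run Portainer CE, publishing the ports seen in the video
      community.docker.docker_container:
        name: portainer
        image: portainer/portainer-ce
        ports:
          - "8000:8000"
          - "9443:9443"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
          - portainer_data:/data
        restart_policy: always
```

Mounting `/var/run/docker.sock` is what lets Portainer manage the host's own Docker daemon, which is exactly the "inception-like" view described here.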
these are just information and warnings. So those are the log files. I can jump back and Inspect the container, which gives a lot more detail, like where it's mounted — the various volumes and mount points. The reason it connects to /var/run/docker.sock from outside is that we want Portainer to manage Docker's images, volumes and all the rest of it, so we need to expose that from outside the container to the inside. There's also a local data storage volume, which has a random name, mounted into the /data area inside the container. Going back, we can click on the Console: we could connect to it if there were something running, but it stops immediately because there isn't actually anything in there to connect to. We can also Attach to it — if there were any output going to the screen, standard-out log-type stuff, that's where we'd see it. And if we wanted to, we can edit it: change the ports, add extra volumes — remember this isn't a virtual machine, so you can add and remove volumes immediately without rebooting or doing anything else. Networks is about whether it uses Docker's internal network or whether you want to expose it externally; there are lots of options — host, none, bridge — and you can create something called a macvlan, which lets you expose the container to the outside world. You can create environment variables and secrets, and you can set a restart policy, which controls what happens if the container should crash: do you want it to restart automatically, or just stay dead? I usually set the restart policy to Always — and in fact I've just clicked Always; I don't even have to save anything, it's immediately applied that change to the container.

We can duplicate the container if we wanted to, or edit it, change a setting and restart it — there are all kinds of things like that we can do. We can even kill it, though that would stop the very software we're using. What I want to do now is deploy another piece of software onto node three, to show how quickly we can do that with Ansible, and then we'll see that Docker container appear inside Portainer too. Back over here, I'll cancel that, go into the playbooks folder and open the Node-RED playbook in nano. This playbook installs on any host in the node-red group: it runs a set of commands, stopping anything that's already installed, then installs Node-RED and exposes port 1880, which is Node-RED's port. Just to prove this is actually going to work: if I browse to 192.168.1.3:1880, nothing is found — nothing is running on there yet. So let's install Node-RED. If I jump back and edit the inventory file, you can see it will install on anything in the node-red group, which contains 1.3 — this node over here — so that's where it will deploy Node-RED. The way we install it is simply to change the playbook name in the command from docker to node-red and run it. That's now deploying the playbook to this machine: it's installing Node-RED, gathering the facts, and checking whether it was previously installed,
which it isn't, so it ignores that — then it removes and installs Node-RED by grabbing it from hub.docker.com and installing it locally on this node. Over here, if I do docker ps (it's a long line, which is why it wraps around), we can currently see only one container running; in a second there will be two, and the new one's name will be node-red. I'll just keep running that until it's installed.

I'm hoping I can resurrect node four so we've got at least four nodes in there, because like I said, that Raspberry Pi 4 must have cost at least 50 pounds, and I'm a bit loath to buy another one to replace it — I might just jump straight to the Raspberry Pi Zero version of Clustered-Pi now that I've got this working. One of the things about this project is that sometimes I'm doing 3D design — designing the physical structure for the cluster — and other times I'm testing out how to get scripts like this working. I'd never played with Ansible before this week, so I had to learn it quickly and then create a show around it to share with you as well.

Okay, I think Node-RED is nearly installed — it's not a massive image. Adam's saying I need to get out the acetone and soldering iron — I know, that's exactly what I need to do; I just want to get rid of the superglue. The thing is, I tried pulling the memory card out with some pliers and it just destroyed the card. I have got some new memory cards, though. I don't know what cards you'd recommend, but I always go for the SanDisk Extreme — I find these are quite good quality. They also do a different variety — Endurance, I think it's called — where the idea is it can withstand lots of heavy writes to the card, because that's usually what kills these cards: reading and writing too many times. And what I have found with these Integral ones, I think they're called — let me show you this little card; this one's dead, it had been running for maybe about a year and no longer works.

Right, Node-RED has installed, so let's do docker ps on there, and we can now see two containers, one of which is named node-red. If we go back to that web page and reload it, we should find that Node-RED has magically started working on that node — and that's how easy it is to install things using Docker; very, very straightforward. We can prove it works: I'll get a debug node, grab an mqtt-in node, connect it to the debug node, and very quickly set up a connection to my MQTT server — I've got one at 1.152. I'll leave it at protocol version 3.1 (you can use 5), and I don't have any authentication on mine, because I'm really bad. The topic is just going to be the default one, and when I deploy, we should see a whole load of messages start coming through in the debug panel from all the different sensor nodes. You can see them there: loft temperature, hall temperature, bedroom temperature. The loft is 14 degrees — it's quite cold outside, so that's probably about right — and the humidity is 60, which I take to be percent. The bedroom, which is this room actually (I should rename that), is 21 degrees, and that's probably right: the thermostat is set to 20 in there, and these sensors are accurate to within a degree or two. The hall, inside the house, always reports high, so I think that's probably a faulty sensor — but you can see all the data coming through, which means everything works. We've got our Node-RED instance working; we could create a dashboard with it, and all kinds of fancy
code, and it's all running in a Docker container on our Raspberry Pi cluster. Now, if we go back to Portainer and refresh the containers page by clicking over here, we should see Node-RED running on here too — there it is. Again it's internal, but it's exposed on our external IP address, 1.3, on port 1880. We can look a bit further inside if we want to: there's the image, there are the ports, there's a volume, node-red data, exposed inside the container as /data, and it's using the bridge network, which just means it's internal to Docker — we're not exposing an IP address externally, just the traffic to that port. Clicking on Images, we can now see the Portainer image and the Node-RED image; on Networks we can see bridge, host and none, and we can create new networks too — like a macvlan, which lets us create connections to the outside world with new IP addresses, and that's what I've actually done on another one of these hosts. We can look at Volumes (there's the Node-RED volume), at various Events, and Host gives you a bit of information about that particular Raspberry Pi — node three only has a gig of memory.

Okay, so that's one of the nodes. We've also got this running on node two, which is IP address 1.6, so if I go to 1.6:9443... Portainer doesn't look like it's running on that one. Let's try port 8000 — I think it should be running — and 1.6:9443 again; sometimes the browser doesn't behave properly, but no, that doesn't seem to be running at all. Let's check whether it's actually running: docker ps — no, Portainer has stopped running on that node. We could try firing it up with docker run portainer -d... "repository name must be lowercase"? I'm not sure what's going on there; it's really having a moment. That's fine — we could just get Ansible to redeploy it using the playbook, but I'm going to crack on and run another couple of playbooks.

Let's run the Grafana one. If I do ls in the playbooks folder you can see there's a Grafana playbook, so instead of node-red I'll delete that, press g and tab to auto-complete, and it's now going to deploy Grafana. Hmm — "permission denied"? Node three is fine; docker ps there shows Node-RED, Portainer and Grafana. I've got user=pi, and I'm sure that's correct — I can never remember whether it's -u pi or --user=pi — but it says it cannot connect to that Pi, permission denied... oh, it thinks it's running on 1.10! Right, I know what's happening: remember we did the ssh-copy-id earlier? We just need to do that for number 10 as well, which is the Raspberry Pi 400.
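As an aside on the "-u pi or --user=pi" question above: ansible-playbook accepts both `-u pi` and `--user pi` on the command line, and you can sidestep the flag entirely by pinning the remote user in the inventory. A hedged sketch — the group names and 192.168.1.x hosts are inferred from the video, not taken from its actual file:

```ini
# inventory (sketch) -- ansible_user per host means no -u flag needed
[grafana]
192.168.1.10 ansible_user=pi
192.168.1.3  ansible_user=pi

[node-red]
192.168.1.3 ansible_user=pi
```

With this in place, `ansible-playbook -i inventory grafana.yml` connects as pi without any per-run user flag, which removes one common source of the "permission denied" errors seen here.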
Now if I run that again, it should install correctly this time. You've got to get used to some of these errors and be able to troubleshoot them — it's usually that SSH setup; otherwise it will tell you which line of the script is at fault. The purple text is fine: it's saying "platform Linux on host 1.10, using the discovered Python", so it's running Python rather than Python 3, and apparently that behaviour is going to change in future. So now 1.10 is running Grafana. If I get my browser and go to 1.10 — I think Grafana is on port 3000, is it? What port does Grafana run on? — well, we can go to 1.10:9443 and look at the Portainer on there. Did I use admin/admin? Nope — my other password, then. Yes. So on here we've got a whole bunch of containers, and Grafana is running on port 3000. Back over to the browser, connect to 1.10 on port 3000 — making sure we're using HTTP — and there we go: Grafana running on the master node.

Grafana is graphing software. If we very quickly set up a data source — I think there's a test data source, right at the bottom, that just generates random data — and then create a new dashboard and add a new panel, you can build dashboards. I've not used it a lot myself yet, but it's good for time-series data: temperatures, for example, from a couple of temperature-sensor nodes. I will do a video about how to build those in MicroPython, because it's quite a nice easy project — there's not a lot going on in a temperature-sensor node; it's simply an ESP8266 or ESP32 connected to a temperature sensor. This sensor here does exactly that: it has an ESP with a cable connected to the temperature sensor, and if I powered it up we'd actually see its data coming through.

They're very simple to set up and run. Unfortunately the Raspberry Pi Pico doesn't have Wi-Fi, so we can't easily do that with a Pico — and if you're going to wire a Wemos D1 Mini to it, you might as well just run MicroPython on the Wemos D1 Mini itself. You can then get the data sending to an MQTT broker, and use Grafana to visualize that data. Node-RED does have some functionality for dashboards, but Grafana is really nice software for this. So look how easy and quick it was to get that up and running as well. This is running on a Docker node; we've got Node-RED running on our Docker nodes, Portainer running on them — all these different containers. Remember what I said about Docker creating weird names for things — amusing_archimedes, flamboyant_yonath, distracted_joliot, admiring_williamson, all very strange names — unless you pass --name and give a container a specific name, it will just make one up. Cattle, not pets: it doesn't matter, we just need the thing running, that's all that's important. And if we've got the restart policy on these set to Always, then if a container crashes it will just fire back up and start again, so you can guarantee it's going to keep running as long as that Raspberry Pi has power and nothing else happens to the Docker environment.

So that's how we can very quickly get up and running. Let me just check my notes and see if there's anything else. We've installed Docker, installed Portainer, got Node-RED running, connected Node-RED to MQTT and started pulling some data through, installed Grafana, and edited the inventory as well. We could add extra nodes if we wanted Grafana running on them too — that would be very trivial for us to do. If I go back there, make sure I'm in the right folder, and do nano inventory, then where we've got that grafana group, I just go
down and add in 192.168.1.3 and save it. I can then run that Grafana playbook again, and because of the way we've written the script it will probably ignore the node it's already installed on, and the node that hasn't got it should actually get it installed — so number three should get Grafana shortly. If I do docker ps on there, we'll see that as well as Node-RED and Portainer we've also got Grafana running; it will just take a moment or two to download the Grafana image and deploy it.

So, the next step for this cluster. At the moment we've got Docker running on each node individually, and we're using Ansible to deploy containers to those Docker nodes, but the nodes are not aware of each other in any kind of clustered sense. The next step is to use something like Kubernetes to manage the containers across the cluster. In Kubernetes you can say "make sure Node-RED is running on at least one node" — you define how many replicas of any instance should run. You might say you want the nginx container running the website I'm going to develop to be on at least two nodes for load balancing, so that as traffic comes in it hits the load balancer and goes to one of the two servers that's available. So Kubernetes is the next step. One of the challenges I'm looking at at the moment is whether we can get Kubernetes to run on a Raspberry Pi Zero, which typically has half a gig of RAM, because Kubernetes typically requires about two gigs to run properly. K3s, which is like a really scaled-back version of Kubernetes, apparently does run in really low amounts of RAM. And there is also a serverless option — is it called FaaS? I was looking at it this afternoon — a serverless Docker, because Docker is an open-source container format: you don't actually have to run Docker itself to manage containers; there are other platforms out there that are completely compatible with it.

Let's just go back for a second to our terminal: docker ps shows we've now got Grafana running on 1.3, so if I open up a new browser tab and go to 192.168.1.3:3000, we get the "welcome to Grafana" interface — I think the default is admin with no password, or admin/admin. There we go: new password, let's type a really secure one in, and away we go — we've now got Grafana running on two nodes.

Okay, let's have a look at the chat. This is the end of the video, so if you're watching this on the replay on YouTube, thank you for watching — be sure to check out smarsfan.com, and look in the description for a link to the Clustered-Pi code. I shall see you next time.
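Before moving on — the Kubernetes idea described above can be sketched as a pair of manifests. This is illustrative, not from the video: it uses the stock nginx image as a stand-in for the website container, with a Deployment that keeps two replicas running and a Service that load-balances between them.

```yaml
# Sketch: "nginx on at least two nodes, behind a load balancer"
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2              # Kubernetes keeps two copies running
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service               # spreads traffic across the replicas
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
    - port: 80
```

If a node dies, the Deployment controller reschedules its replica elsewhere — the cluster-wide version of the per-container "restart: always" policy used earlier.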
Info
Channel: Kevin McAleer
Views: 183
Keywords: Raspberry pi, Raspberrypi, Cluster, Docker, Ansible, Node-red, Nodered, Portainer, Install, Clustered-pi, Small robots, Kevin McAleer, ansible tutorial, ansible tutorial for beginners linux, ansible tutorial linux, raspberry pi cluster, raspberry pi ssh, raspberry pi headless, node-red dashboard, node-red mqtt, raspberry pi 4, raspberry pi 400, ansible playbook tutorial, ansible playbook tutorial for beginners, raspberry pi 4 projects, node red docker, raspberry pi server, ansible 101
Id: M088p-xqzzw
Length: 63min 35sec (3815 seconds)
Published: Mon Dec 06 2021