Monitoring Plant Floor Data Using HiveMQ Cloud, InfluxDB, Grafana, & Schneider IIoT Gateway

Captions
In this video I'm going to show you how to monitor plant floor data in real time using MQTT, the InfluxDB time series database, the Grafana visualization platform, a Schneider IIoT gateway, and HiveMQ Cloud as the centralized MQTT broker.

For the demonstration I'm going to use this Schneider IIoT gateway to collect tank level telemetry data, simulated by the potentiometer on this legacy controller, via Modbus TCP. I'll then publish the telemetry data as MQTT messages to a centralized HiveMQ Cloud MQTT broker, which will forward that data to an InfluxDB time series database running on my Raspberry Pi. I'll then visualize the information from the InfluxDB database using the Grafana visualization platform, which we'll also install on this Raspberry Pi.

The reality is that in most factories today, the majority of production equipment still uses legacy communication protocols, such as Modbus in our case. So for us to be able to collect data from this controller for real-time monitoring in Grafana, we're going to use the Schneider IIoT gateway as a protocol translator, reading data via the Modbus protocol on one end and converting it into an IoT protocol, in our case MQTT, on the other end. Before we visualize the data we'll need to store it in a time series database, and for that we're going to use InfluxDB. Both the time series database and the visualization platform need to be hosted somewhere, either in the cloud or on premises; in this demo we're going to use a Raspberry Pi as our server for hosting InfluxDB and Grafana.

Now, how do we program the IIoT gateway to perform the Modbus TCP protocol conversion and publish the MQTT data? The simplest way for us is to use Node-RED, which is a low-code, drag-and-drop platform. Next, we're going to need a centralized MQTT broker for receiving MQTT messages from our IIoT gateway and forwarding them to the Raspberry Pi acting as our server, and this is where the HiveMQ Cloud MQTT broker comes in.

Now here's the thing: our MQTT messages are not going to be sent automatically from our HiveMQ Cloud broker into InfluxDB storage. What we need is a bridge for catching these MQTT messages and persisting them to the database, and for that we're going to use Telegraf, an InfluxDB server agent that can subscribe to MQTT messages from the HiveMQ Cloud broker as its input and persist them to the InfluxDB database as its output.

The first thing we need to do is make sure that our broker is up and running, and to do that I need to access my HiveMQ Cloud portal. I'll go to the HiveMQ website, and under Cloud I'll select HiveMQ Cloud; when I do that I'm redirected to this page, so I'll scroll down. Here you'll see that there's a free basic version which allows us to connect up to 100 clients, so I'll select Sign Up Now, and I'm taken to the HiveMQ Cloud portal homepage. Here I'll enter my login information, because I already have an account; if you don't have one, you can sign up by entering your email and a password. When you sign up for the first time you need to follow a few simple steps to set up your cluster, and once your cluster is set up you can configure or change your MQTT client credentials under Access Management, as you'll see in a minute.

So with those few simple steps I have my MQTT broker up and running. Now I'll need to copy the MQTT broker details that are used for publishing MQTT data from my Schneider IIoT gateway.
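Before wiring up the gateway, it can be worth a quick sanity check that the cluster accepts TLS connections with your client credentials. Here's a minimal sketch using the mqtt.js Node.js client; the hostname, username, password, and topic are placeholders for your own cluster's values:

```js
// Quick HiveMQ Cloud connectivity check (hostname and credentials are placeholders)
const mqtt = require("mqtt"); // npm install mqtt

const client = mqtt.connect("mqtts://YOUR-CLUSTER.hivemq.cloud:8883", {
  username: "your-mqtt-username",
  password: "your-mqtt-password",
});

client.on("connect", () => {
  console.log("Connected over TLS");
  client.subscribe("plant-sensors"); // topic used later in this demo
});

client.on("message", (topic, payload) => console.log(`${topic}: ${payload}`));
client.on("error", (err) => console.error("Connection error:", err.message));
```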
So I'll select Manage Cluster, and here under Manage Cluster I'll copy the hostname, or broker URL address. You'll notice that our MQTT broker is using TLS encryption on port 8883. Then I'll select Access Management: this is where you configure or update your MQTT client credentials. For example, I can remove the username and password that I created when setting up this cluster and recreate them, and then I'll copy the username and password to be used later on in our IIoT gateway. So that's all for our MQTT broker setup on the HiveMQ Cloud portal.

The next step is to start publishing our plant floor telemetry data from the Schneider IIoT gateway. The best part about our IoT gateway is that it comes pre-installed with Node-RED, so all we have to do is access its interface via a web browser. I'll open a new tab in my browser and type in the IP address of my IIoT gateway, which is 192.168.0.128, followed by port 1880 for Node-RED.

I've already put together a Node-RED flow for reading the tank level value simulated by the potentiometer on our legacy controller via Modbus TCP, so let's go through each of the nodes. Let's begin with our inject node: here you'll notice that we're reading data every five seconds. Next I've got a function node for preparing our Modbus message. If I double-click into the node, you'll see that I'm specifying the function code for reading holding registers, which is 3, my controller's Modbus unit ID, and the Modbus register that holds our potentiometer value; and lastly, because our potentiometer value spans two 16-bit registers, we're going to set the quantity to 2.

Then we move on to our next node, which is the node responsible for sending the Modbus request to our controller. This is where we configure our Modbus communication parameters: as you can see, we're using Modbus TCP, with my legacy controller's IP address and the Modbus port number, which is 502. Now, as mentioned earlier, the response from our Modbus controller consists of two 16-bit integer values, so we need to convert that to a float value, and for that we're using another function node. Here you can see the few lines of JavaScript code that perform the conversion.

Before we publish our tank level value as MQTT messages to our HiveMQ Cloud broker, let's deploy this to see if we're getting the right data. I'll click on Deploy, go to the debug window, and clear it first. There you can see we're reading a value of 77.89 every five seconds, and if I turn down my potentiometer, you can see we're now reading a value of 58.6.

Now that we're getting the right data from our controller, we can publish the tank level value as MQTT messages to our broker. Since this data will ultimately be consumed by InfluxDB, we first need to understand how InfluxDB consumes data so that we can structure it accordingly. While InfluxDB is capable of handling data in JSON format and primitive types such as floats, it also has its own textual format, called the line protocol, which is specified as measurement, comma, tag set, space, value. In this demo we're going to use the InfluxDB textual format, and in our case the text will be tank_level,location=plant_a value=58.6. What this means is that we have to concatenate our tank level value into this string format and publish it to our MQTT broker.
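The function node code is only visible on screen in the video, so here are minimal sketches of the function nodes in this flow (the third one is added in the next step), assuming the Modbus-Flex-Getter node from node-red-contrib-modbus, big-endian register order, and placeholder unit ID and register address:

```js
// Function node 1: build the read request consumed by a Modbus-Flex-Getter node.
msg.payload = {
    fc: 3,        // function code 3: read holding registers
    unitid: 1,    // Modbus unit ID of the legacy controller (placeholder)
    address: 0,   // holding register with the potentiometer value (placeholder)
    quantity: 2   // the value spans two 16-bit registers
};
return msg;
```

```js
// Function node 2: combine the two 16-bit registers into an IEEE 754 float.
// Assumes big-endian word order; swap the two writes if your controller differs.
const regs = msg.payload;      // register words from the Modbus response
const buf = Buffer.alloc(4);
buf.writeUInt16BE(regs[0], 0);
buf.writeUInt16BE(regs[1], 2);
msg.payload = buf.readFloatBE(0);
return msg;
```

```js
// Function node 3: wrap the float in InfluxDB line protocol,
// e.g. tank_level,location=plant_a value=58.6
msg.payload = "tank_level,location=plant_a value=" + msg.payload;
return msg;
```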
So back on my Node-RED platform, I'll add a function node for creating the InfluxDB textual format, and inside this function node I'll paste the code for converting our payload to the InfluxDB textual format. Then I'll check that it works, and as you can see, we now have our Influx format.

Now we can go ahead and publish this to our HiveMQ broker. To do that I'll drag in the MQTT publish node, add a new broker, go back to my HiveMQ Cloud portal, copy the hostname of my broker, and paste it here. I'll set the port to 8883, enable SSL, and under Security I'll enter my MQTT client username and password as set on the portal, then click on Add. I'll publish to a topic called plant-sensors, click on Done, and then deploy. We should now be publishing to our HiveMQ Cloud MQTT broker, but to test that we are indeed publishing, I'll open up my MQTT.fx client, which is already connected to my broker, and subscribe to the topic plant-sensors. As you can see, we are indeed receiving our MQTT messages.

Now that we're publishing our telemetry data to our MQTT broker, the next step is to set up our InfluxDB time series database to store this data. To install InfluxDB on my Raspberry Pi I'll need to open up its terminal; I'm using PuTTY to access the terminal from my PC, so I'll go ahead and log in. We're going to install the InfluxDB time series database from the official InfluxDB repository, so the first thing I need to do is fetch the InfluxDB repository key using this command, then add the InfluxDB repository using this command, and then make sure that our list of packages is up to date. Now we can perform the actual installation of InfluxDB on our Raspberry Pi. Once InfluxDB has been installed, we need to set it to start automatically each time the Raspberry Pi boots up, and when that's done I'll go ahead and start the InfluxDB service.

Now that our InfluxDB server is running, we can start creating users and databases on it, and to do that we need to run the Influx client. So first I'll install the InfluxDB client, and then I'll use the command influx to run it. With the client running, we can create a user with admin rights, and to do that we use this command: here we're creating a user called admin, with the password adminpassword, with all privileges. Now that we've created our database user, we can create the database that will hold our telemetry data; we'll call it telemetry. With the database created, all that's left is to start storing our MQTT messages in it, so I'll exit the client and clear the screen.

Now let's install Telegraf to start collecting the telemetry data coming from our HiveMQ Cloud broker and storing it in our InfluxDB database. To install Telegraf we first download the package using this command, then install it, then enable it to start automatically when the Pi reboots, and then we can go ahead and start Telegraf.
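The narration refers to "this command" several times without the commands appearing in the captions, so here's a rough sketch of the server-side setup, assuming InfluxDB 1.x on a Debian-based Raspberry Pi OS (the release codename and the password are placeholders; the video installs Telegraf from a downloaded package, but the repository added below also provides it):

```sh
# Add the InfluxData repository key and repository (codename is a placeholder)
wget -qO- https://repos.influxdata.com/influxdb.key | sudo apt-key add -
echo "deb https://repos.influxdata.com/debian buster stable" | \
  sudo tee /etc/apt/sources.list.d/influxdb.list

# Install InfluxDB plus its CLI client, enable it on boot, and start it
sudo apt update
sudo apt install influxdb influxdb-client
sudo systemctl enable influxdb
sudo systemctl start influxdb

# Create the admin user and the telemetry database
influx -execute "CREATE USER admin WITH PASSWORD 'adminpassword' WITH ALL PRIVILEGES"
influx -execute "CREATE DATABASE telemetry"

# Install, enable, and start Telegraf
sudo apt install telegraf
sudo systemctl enable telegraf
sudo systemctl start telegraf
```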
Now let's open up the Telegraf config file, to tell Telegraf where to pull our telemetry data from and where to store it; to do that we use this command. We're going to leave most of the configuration at its defaults. The only settings we want to alter are the Telegraf InfluxDB output plugin, to point it to our telemetry database on the local instance of InfluxDB, and the Telegraf MQTT input plugin, to point it to our HiveMQ Cloud MQTT broker (both edited sections are sketched after this step).

I'll scroll down until I get to the InfluxDB output. As you can see, this is the section for the output plugins, so here I'm going to uncomment the localhost URL at port 8086, then uncomment the database setting and enter the name of my InfluxDB database, which is telemetry. Scrolling down some more, I'll uncomment the username and set it to admin, and uncomment the password and enter adminpassword, the password of my InfluxDB user. That finishes the Telegraf output configuration, so now let's configure the MQTT input plugin. This is the service input plugins section, and here we've got our MQTT consumer plugin. I'll uncomment it, then uncomment the servers entry, copy my MQTT broker hostname, and paste it in, changing tcp to ssl and the port to 8883. For topics I'll put plant-sensors, and scrolling down, under username I'll put my MQTT client username, and I'll also put my MQTT client password. Lastly, I'll set the data format to influx; if I were using JSON, this is where I would have set it to json instead. That's all we need to do in the config file, so I'll save using Ctrl+X to exit, choosing to save. After configuration I'll need to reload Telegraf, so I'll go ahead and do that.

Since we're already publishing MQTT messages to our HiveMQ broker, we should now see some telemetry data in our Influx time series database, so let's query the database to find out. I'll run the InfluxDB client using the influx command, select the telemetry database, and select everything under tank_level using this query. As you can see, we're successfully logging our telemetry data into our InfluxDB time series database.

The next and final step is to visualize our data using Grafana, so I'll exit out of my InfluxDB client. Let's install Grafana on our Raspberry Pi: I'll begin by downloading the Grafana package, and when the download is complete I'll install it using this command. Next I'll enable Grafana to run automatically each time my Raspberry Pi boots up, and after that I'll start the Grafana server.

Grafana has now been successfully installed, so let's access it using a web browser. Here I'll enter the IP address of my Raspberry Pi followed by port 3000, which is the port Grafana runs on. That confirms Grafana was installed successfully, so let's log in: the default user is admin and the default password is admin, and we're then prompted to set a new password, so I'll go ahead and do that.

Now that we've logged into the Grafana visualization platform, the next thing I need to do is add InfluxDB as my data source so that we can start visualizing our plant floor data. To do that I'll click on Add Data Source, select InfluxDB, fill in the URL address of my InfluxDB server, and then enter the database name, the username, and the database user's password. Then I'll select an HTTP method and click on Save & Test; my data source is working.
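Pulling the edits described above together, the two touched sections of /etc/telegraf/telegraf.conf would look roughly like this (the broker hostname and all credentials are placeholders for your own values):

```toml
# Output: write to the local InfluxDB instance, into the "telemetry" database
[[outputs.influxdb]]
  urls = ["http://127.0.0.1:8086"]
  database = "telemetry"
  username = "admin"
  password = "adminpassword"

# Input: consume MQTT messages from HiveMQ Cloud over TLS on port 8883
[[inputs.mqtt_consumer]]
  servers = ["ssl://YOUR-CLUSTER.hivemq.cloud:8883"]
  topics = ["plant-sensors"]
  username = "your-mqtt-username"
  password = "your-mqtt-password"
  data_format = "influx"   # the payload is already InfluxDB line protocol
```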
After configuring the data source, I'll continue by creating a new dashboard. I'll click on New Dashboard, then Add an Empty Panel, and start configuring my panel using the data from my InfluxDB database. Here I'll select the measurement, which is tank_level, leave the field as value, group by time interval, use linear fill, order by time ascending, and format it as a time series (the query this builder produces is sketched below). I'll call this panel Plant Telemetry and click on Apply. You can zoom into your trend, choose to see the data for the last five minutes or the last 15 minutes, and set auto-refresh to 5 seconds. So now, if I turn my potentiometer up, you can see that we start getting a value of 87.5.

In conclusion, you could also publish your MQTT messages as Sparkplug payloads for plug-and-play interoperability, using the HiveMQ Sparkplug extension, which works on the enterprise or self-hosted HiveMQ broker. I hope you enjoyed this video about real-world MQTT for Industry 4.0. Please check out the HiveMQ YouTube channel for more videos like this.
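For reference, the InfluxQL that Grafana's query builder generates for a panel configured as above would look roughly like this ($timeFilter and $__interval are Grafana template variables filled in from the selected time range):

```sql
SELECT mean("value")
FROM "tank_level"
WHERE $timeFilter
GROUP BY time($__interval) fill(linear)
ORDER BY time ASC
```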
Info
Channel: HiveMQ
Views: 1,269
Keywords: #MQTT, #HiveMQCloud, #InfluxDB, #InfluxData, #Grafana, #RaspberryPi, #SchneiderIIoTGateway, #InfluxDataTelegraph, #IIoT, #IoT
Id: UVWatCq77B0
Length: 23min 29sec (1409 seconds)
Published: Tue Jun 29 2021