Reolink IP camera integration in Node-Red

Captions
Welcome to a new video in my home automation series. As I mentioned in my recent Reolink videos, I received two Reolink cameras, the RLC-522 and the E1 Zoom, to cover how these cameras can be integrated into third-party applications, and in this third-party application series I will be covering Node-RED. I have covered some of these scenarios in the past, because I had a video on how you can integrate IP cameras with the Google Home device, so you can show the picture from an IP camera on the Google Home. I'm going to go through those scenarios again and take you through a few more, but basically everything revolves around how you can take a still image from Reolink cameras, how you can capture the moving picture as an MP4 recording, and how you can use some of the ONVIF functions that the cameras support. In particular, we are going to see that you can get the camera image over ONVIF as well, but on certain devices you can also listen to events coming from the camera. I was able to use this to capture the motion detection on the camera, which then sends an event to Node-RED, and I think that is quite useful. I was also able to control the camera from Node-RED, so the PTZ controls: pan and tilt the camera and use the zoom functionality. These are the things we are going to look at today.

Just like in some of my recent flows, I leave quite a lot of comment nodes in my flow, and if you double-click on these comment nodes you will usually find some additional information there, such as links to articles that Reolink has provided or to other websites.

Before I dive into the details, a reminder that all the functions I'm going to talk about now, capturing the still image, capturing the video and the ONVIF functions, are only supported by a subset of Reolink cameras. You can read the articles I linked in the comment nodes, but the easiest way to remember it is that anything which is battery operated does not support these functions, for example the Argus, the Argus 2, the older E1 or even the E1 Pro, or the Lumus. You really need one of the traditional cameras, the WiFi, Ethernet or Power over Ethernet versions; those are the ones you can use for these functions. Unfortunately none of the others support these features; maybe they will in the future, but not at the moment.

Let's start with the image capture, or snapshot capture. Again, you can read the article linked here that I used to work out how it needs to be done. For some of these functions I'm going to save my images to a special folder on my Pi, which is set up as the static folder in settings.js. You need to make this change, and then anything you copy into that folder on the Pi can be viewed or accessed via the Node-RED web server.
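As a minimal sketch of that settings.js change (the folder path here is just an example, not necessarily the one used in the video), the relevant property is httpStatic:

// settings.js (in the Node-RED user directory, typically ~/.node-red)
module.exports = {
    // ... other settings ...

    // Serve the contents of this folder at the root of the Node-RED web server,
    // so a file saved as /home/pi/node-red-static/snapshot.jpg becomes reachable
    // at http://<pi-address>:1880/snapshot.jpg
    httpStatic: '/home/pi/node-red-static/'
};

Node-RED has to be restarted after editing settings.js for this to take effect.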
So first, let's capture an image from the camera. I'm going to use the RLC-522, which is looking at my backyard, at the side of the house and some leftover construction material, so not an ideal view. If I click on this inject node, it requests a new image from the Reolink camera, and you can see it appear here. Let's look at this flow in detail. The inject node is just a simple injector, there is nothing special in it. Then there is a simple HTTP request node, and in this node we have to provide a URL; how the URL has to look is listed in the linked article. For this you need to know the IP address of your camera, which you can get from the Reolink app, but because you need to specify the IP address here, it is ideal if you can set a fixed IP address for the camera. The only other things you need are your user ID and password, which you also provide on the URL, and then you leave everything else at the defaults, except that in the return setting you specify that it needs to return a binary buffer.

First of all, I have this image preview node. I left its name here: it is node-red-contrib-image-output, and you can install it from the palette manager. If you provide a payload which is an image, it shows it right in the editor, and sometimes I like to do this so I can check the content of the image without having to go into the UI, so it is absolutely optional. Apart from previewing the image in the editor, the easiest way to display it is to convert the binary buffer coming out of the HTTP request node into a base64 string (this base64 node is not a standard component either, so I noted its name here as well) and pass it to a template node, where you specify the width and height the image should be displayed at, and the image source is the base64 data coming in on the payload. That template is then displayed on the dashboard, and you can see the image here, scaled down of course; sometimes I don't want a huge image on my screen. What I really like about this particular flow is that it passes the image directly to the dashboard and doesn't save it to the drive, so if you want to do this often and you are conscious about wearing out your SD card, you can use this flow because it doesn't create a file.

But what if you do want to create a file? For that I have the same inject node, which just injects the timestamp, and the same HTTP request node with the output again set to binary buffer. Its output goes into a function node whose sole purpose is to generate a specific file name: as you can see, it creates a file name called snapshot_ followed by the year, month, day, hour, minute and second, with a .jpg extension, and puts it into the /home/pi folder. This goes into a file out node, which simply writes the content into the file. If I trigger it, you can see it created the snapshot file, and if I go to my dashboard and use the file browser that I covered in a previous video, we can see the snapshot file here, and if I look at it, that is indeed the current image. You can use this flow if, for some reason, you want to create images automatically, maybe every hour or every day; Node-RED will create these files with separate file names and put them onto your drive.
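A rough sketch of such a filename-generating function node (my own simplified version, not copied from the downloadable flow; the /home/pi path and the snapshot_ prefix follow the description above):

// Function node: keep the image buffer in msg.payload untouched and
// set msg.filename so a File node with an empty filename field writes to it.
const d = new Date();
const pad = n => ('0' + n).slice(-2);
const stamp = d.getFullYear() + pad(d.getMonth() + 1) + pad(d.getDate()) +
              pad(d.getHours()) + pad(d.getMinutes()) + pad(d.getSeconds());
msg.filename = '/home/pi/snapshot_' + stamp + '.jpg';
return msg;

The downstream File node then needs its filename field left empty so that it picks up the generated path from msg.filename.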
I actually did that for a couple of months with one of my earlier IP cameras, and I used those images to create a very fast time-lapse video which shows how the seasons change and the leaves come and go and that sort of thing. That was a gimmick, but you will probably find a better use case for this type of application.

If you have seen some of my earlier videos, you know that I use Telegram quite a lot. One of my favourite use cases is to grab the image from the IP camera which is looking at my front yard, so when somebody rings my bell I get a snapshot on my Telegram account on my phone and I can tell whether it's the postman or somebody else. This flow starts exactly the same as the one above, with the inject node and the HTTP request node, and this time we save the image to a file; for Telegram we actually have to save it to a specific file, so I'm saving it to this one, although technically it could be anything else. Then I create the Telegram message in a function node. The Telegram message is a special payload which contains a chat ID (here you have to put in your own chat ID, the one for your account), the type, which is photo, the content, which is the path of the image, and you can also add a caption. This is how it looks on the phone: of course I'm getting some extra messages as well, but this is what a picture message looks like, and you can tap on the picture to see the image in full screen. I think this is a fairly useful use case, especially if you are already using Telegram in your home automation.
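As a sketch of that Telegram function node (the chat ID and file path are placeholders, and depending on which Telegram contrib node and version you use, the caption may need to go under payload.options instead, so check the node's documentation):

// Function node: build a photo message for a Telegram sender node.
msg.payload = {
    chatId: 123456789,                 // placeholder: your own chat ID
    type: 'photo',
    content: '/home/pi/snapshot.jpg',  // the file written by the flow above
    caption: 'Front yard camera'       // may belong under payload.options on some versions
};
return msg;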
And if you are not using Telegram, the other most common way to send notifications is probably email. Again I have an inject node and the very same HTTP request node, and then it goes into a change node which sets a few things on the message: msg.to, which is the email address you want to send to, the payload, which is the email body, the topic, which is the email subject, and, the part I almost forgot, an attachments array, where the attachment's content is the payload (the image buffer) and its filename is snapshot.jpg, which is the file name that will appear in your email client. This is how you pass a file attachment to the email node. All of this goes into a simple email node, which I have configured so that it sends the email from my Gmail account to my Hotmail account for testing. If I click on the inject node, it requests the image and sends the email out, and if I look at the email, this is how it looks: you can see the subject, the body, and the attachment called snapshot.jpg.

But what if you want to send an email which contains multiple attachments? Here I used one single camera image, but let's say you detect motion or your alarm goes off, so something happens and you want to send a snapshot from your camera, followed by a second and a third snapshot, maybe 10 and 20 seconds later. That is what this flow does: you click on the timestamp, it fires off the first request to get the first image, after 10 seconds it requests the second image, and after 20 seconds it requests the third. This function node keeps collecting these images, and once the third one is received it sends out the email. Just like before, you use a change node to set up the email: the subject, the email body, and the To address you want to send it to. You don't really have to understand how the function node works; the only thing to remember is that on the first line you can set how many images it should wait for before it sends the email out. If you want five images, change it to five and make sure your flow generates five images; and of course they don't have to be five images from the same camera, from this inject node you could fire five separate HTTP request nodes against five different cameras, and all five images would be collected into a single email. If you look at the email, this is how it looks: snapshot one, snapshot two, snapshot three. I think this is also a fairly good use case for these cameras and the email functionality, and if movement is detected, you probably want more than one image anyway, in case the first snapshot misses the subject.
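A minimal sketch of such a collector function node (my own simplified version, not the exact code from the downloadable flow; it assumes each incoming msg.payload is a JPEG buffer from an HTTP request node):

// Function node: collect N image buffers in node context, then emit one
// message carrying an attachments array for the email node.
const WANTED = 3;                                  // first line: how many images to wait for
const images = context.get('images') || [];
images.push(msg.payload);                          // msg.payload is the binary JPEG buffer

if (images.length < WANTED) {
    context.set('images', images);
    return null;                                   // keep waiting, send nothing yet
}

context.set('images', []);                         // reset for the next round
msg.attachments = images.map((buf, i) => ({
    filename: 'snapshot' + (i + 1) + '.jpg',
    content: buf
}));
msg.payload = 'Motion detected - see attached snapshots';  // email body
return msg;

The change node in front of the email node still has to set msg.to and msg.topic for the address and the subject, just as described above.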
With this, I think we have covered the image processing capabilities, so next we can look at how we can process the video stream. For this I also have a few links to recommend. First of all, there is an article from Reolink on how you can get the RTSP URL for the Reolink cameras; again, the caveat is that it is not supported by all of the cameras at the moment. Basically this is the URL for the main stream: you need the IP address of the camera, the admin user ID (which is simply admin) and the admin password in order to access the RTSP stream. For these examples I'm going to use the main stream, but you can also use the sub stream, which is the lower resolution stream. To convert the RTSP feed (the Real Time Streaming Protocol stream) into an MP4 you need FFmpeg. Usually you can install FFmpeg on a Raspberry Pi like this, but I'm using Raspbian Jessie (this Raspberry Pi has been running for a very long time), and on Jessie FFmpeg is not available from the repository, so you can use this link to a discussion, where post 7 gives you a way to install FFmpeg on Raspbian Jessie.

The flow itself is very simple. The heart of the logic is in this exec node, which calls FFmpeg to convert the stream into a video. I copied the full command to a notepad so we can see it: it starts with ffmpeg and its flags, then after the -i flag, which is the input, you specify the RTSP stream, so the RTSP URL for the camera with the user ID, password and IP address from the article, and then all the additional parameters. Probably the only one worth noting is -t 60, which says that the conversion should run for 60 seconds, so it converts one minute of footage from the live stream into an MP4. The only other thing we need to add at the end of this command is the file name that the MP4 should be saved as, and the way I do it in my flow is with a simple change node which puts the path for the MP4 into the payload, and that gets appended to the end of the command in the exec node. At the moment it is triggered manually, and if I trigger it, FFmpeg starts and runs for a minute, and when it completes we can see the file in the dashboard, so I can click on this open video link and download the video; we just wait until this indicator disappears.

Besides that, I also wanted to show you this stream example, which I downloaded and which should be doing the very same thing as the video capture, but for some reason it doesn't really work with Reolink cameras. You are supposed to click on start streaming (and maybe I shouldn't do that while the conversion is still running) and you would see the live feed here in the image preview, but probably because of how Reolink creates the RTSP stream, you always get this invalid source message. To be honest, I don't see a lot of value in watching your live feed in the editor; I left it in so you can play around with it if you want, but I think the main use case is saving the recording to an MP4 anyway. Now you can click here; oh, sorry, this is my other camera, the one in the kitchen, which for some reason is pointed at my water jug, but you can see the time passing, so it really is doing a one-minute recording from this camera, and at least now you can see that it works on the E1 Zoom as well, not only on the RLC-522. When it completes, you have an MP4 file on your Raspberry Pi, so again you can put it into an email or a Telegram message; it works exactly the same as the JPEG examples we have seen, the only difference is that instead of a JPEG you link the MP4 file. Of course, keep in mind that this file could be quite large, especially if you are converting the main stream, so maybe you can use the sub stream, or reduce the -t 60 parameter to, say, -t 20, so it only makes a 20-second recording.
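As a rough sketch of how the pieces fit together (the FFmpeg flags and the RTSP path shown in the comment are assumptions based on Reolink's RTSP article and common FFmpeg usage, not a copy of the command from the video), the change node in front of the exec node could be replaced by a function node that only supplies the output file name:

// Function node: generate a timestamped output path; the exec node holds the fixed
// part of the command with "append msg.payload" enabled, something along the lines of:
//   ffmpeg -rtsp_transport tcp -i rtsp://admin:PASSWORD@CAMERA_IP:554/h264Preview_01_main -t 60 -c copy
// (replace PASSWORD and CAMERA_IP; the _sub variant would give the lower-resolution stream)
const stamp = new Date().toISOString().replace(/[-:T]/g, '').slice(0, 14);
msg.payload = '/home/pi/recording_' + stamp + '.mp4';
return msg;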
With this we have reached the end of the still image and video examples, so now I'm going to go into the ONVIF support. In case you don't know (I also only had vague memories of what ONVIF is), it is an interface specification that has been adopted by many camera makers, so if your application supports ONVIF, you can talk to many different cameras from many different vendors, because they all implement the same functions: discover the cameras, ask for the picture, ask for the different URLs for the video stream, listen to events, use the PTZ functions, that sort of thing. Here is the article from Reolink on which cameras support ONVIF and which profiles they support, because there is a lot of ONVIF functionality that these cameras don't support, and I'm going to come back to that in a few minutes.

In order to use these functions there is a set of nodes being developed by Bart at the moment. These nodes are available on GitHub but they are not published on npm, so you won't be able to install them from the palette manager. I recommend you go to this GitHub page; there is a ton of information and a lot of example flows in the readme. As it says there, this is an experimental node contribution at the moment, which is why you can't install it the easy way; you have to install it manually in your Node-RED folder and then restart Node-RED, but once you have done that you can use all these functions. To be honest, I have implemented most of them in my flow, and if I zoom out a little and scroll to the right you can see all the examples here, but I have actually only used a few of them, and I'm only going to share a few of them, because as I said not everything is supported by Reolink unfortunately, so I was trying to focus on the ones that are actually supported and worth investigating.

First things first, and this is something I haven't included in my flow: all these cameras support a discovery function, so you can send out a request on your network and the cameras will report back, saying "I'm here and I support these functions". If I do this on my network and start the search (this example flow comes from the GitHub page, it's this one), you can see that I got a response from four cameras, and since I only have Reolink cameras, these are all Reolink cameras. Of course I'm not getting a response from the Lumus or the Argus 2, because those are the ones that don't support ONVIF. So if you are unsure which of your cameras are supported, you can use this discovery function, and whatever comes up in the discovery is what you can use in the rest of the flow.

Let's look at the first ONVIF function I highlighted, which is the snapshot image, and this does exactly the same thing we did above: you have an inject node, but instead of the HTTP request you use this ONVIF media node, and the output you get is an image. You can decide whether you want to use the HTTP method or this method, they are perfectly interchangeable, but if you only need this functionality you can just use the HTTP request and you don't have to install these additional ONVIF nodes.
So that is the image capture, and as I said it works fine. The next one I wanted to highlight is the device events node. Device events can be used for a lot of different things, and some cameras even send out statistics as device events, like CPU usage, memory, voltage and so on. It looks like Reolink doesn't do that, but one thing I discovered is that some Reolink cameras can send you motion detection information. Out of the four cameras that support ONVIF, I could only get it working on the fixed cameras, so the RLC-420, the RLC-420W and the RLC-522; it doesn't work on the E1 Zoom for some reason. The way it works is that you set the node to start listening, and whenever an event is received it sends back a fairly complicated object. This object identifies that the event is a motion detection event, and it also has an attribute which says whether motion is detected, which can be either true or false. What I found with the RLC-522 is that it sends an event when motion is detected, and it also sends an event when the motion is no longer present, for example when I move outside the view of the camera. I also noticed that the motion event gets sent multiple times: if I keep moving in front of the camera it sends the motion event something like twice a second. So what I do is, when there is motion, I limit the number of messages to one per hour, because I don't want all of those messages, I take a snapshot and put it on the dashboard, and once the motion is no longer true, so the motion is gone, I take another picture as well and also update the motion status on the UI. Hopefully what happens is that I get a snapshot when somebody is detected and also a snapshot when that person has moved out of the frame.

So now I'm outside, going to the back of the house to the RLC-522, and hopefully motion will get detected; I'm also looking at the live feed on my phone. I'm moving into the frame, there is a lot of backlight here, hopefully that's not going to fool the camera... come on... oh, motion is detected, okay. Let's go back and look at what happened in Node-RED. Hopefully you can see that this was saying motion at some point, and I got an image when the motion was detected, and I also got an image as I was walking out of the frame, so it was definitely working, which is a good sign because I hadn't rehearsed this much. These are the messages we received from the RLC-522: you can see it has a topic, a time and some properties, but here in the data it says that the data item is motion and the value is true, so this is when motion was detected, and the last message should be showing motion as false. The other interesting thing is that this functionality worked slightly differently between the RLC-522 and, I think, the RLC-410 or the RLC-410W: the RLC-522 sends me an event when motion is no longer detected, but some of the other cameras don't do that, so just be aware of that, test your cameras and see how they behave. And of course, if you don't want these events any more, you can just stop listening to them.
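A sketch of a function node that could sit behind the event node to turn those event objects into a simple true/false motion message (the exact location of the motion flag inside the event object varies by camera and firmware, so the property path below is an assumption; wire up a debug node first and adjust it to what your camera actually sends):

// Function node: extract a boolean motion state from an ONVIF event message.
// ASSUMPTION: the event carries a data item whose name contains "Motion"
// and whose value is "true"/"false"; verify the real structure with a debug node.
const evt = msg.payload || {};
const items = (evt.data && evt.data.simpleItem) || [];      // hypothetical path
const list = Array.isArray(items) ? items : [items];
const motionItem = list.find(i => i && /motion/i.test(i.name || ''));

if (!motionItem) return null;                                // not a motion event, ignore

msg.payload = String(motionItem.value) === 'true';           // true = motion started
msg.topic = 'motion';
return msg;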
I'm a little bit disappointed that I'm not getting more events, but to be honest motion detection is probably the most important one anyway. You can use the start and stop listening actions to limit when you receive these messages: let's say you turn on your home security alarm and then start the event listening on your cameras, collect all the motion triggers, and maybe save the images locally or send them over to Telegram, so you don't have to pay attention to the Reolink app. It works nicely and it seems to work very reliably; again, I tested it with the other cameras as well, except the E1 Zoom, where it wasn't working, but maybe with a future update this feature will work on the E1 Zoom too.

Finally, let me talk about the PTZ functions. You can control your cameras from Node-RED as well, meaning pan and tilt them and use the zoom. I'm quickly going to demonstrate it on the RLC-522. I have some inject nodes here in the editor, and I also have some buttons in the UI, and in the change rules we are just setting an action, which is the continuous move, and also a speed, which controls how fast the movement is; although I don't think the speed of the movement is actually implemented on the Reolink cameras, so you can put in any value. Of course, positive values mean zooming in and negative values mean zooming out, and the same applies to the motions as well, so positive is up and negative is down, and positive is left and negative is right, or vice versa; you can look at the code and see how it works. So let's look at the RLC-522: I have the controls here, and you can see the live feed from my phone. If I click on plus it zooms in, which we can see on the live feed, and if I click on minus it zooms out, and because it's a continuous zoom you can use the stop button to stop zooming at any point, so you can't tell it to zoom 10 percent; you have to send a zoom command and then a stop command, unfortunately. Let's look at the E1: I can make it pan left and then use the stop button to stop, I can make it pan up and stop, maybe pan down now and then stop, and I can zoom in on this one as well.

The one thing I really regret is that ONVIF supports a lot of different movements. For example, many cameras support a home function: in the E1 Zoom's case, you could set up a spot, a particular location and zoom level, as the home position, and then just tell the camera to move to it, and it would automatically pan, tilt and zoom to that same location. That is not supported on these cameras. ONVIF also has an option for presets, so you could move, for example, this E1 to various locations, save them as presets, and then just call up those presets to move to those locations automatically; that is not supported either. There are also more movement options: at the moment I'm using the continuous move action, but there are also absolute move and relative move actions, which I think would be quite useful, because then I could, say, home the camera in the upper right corner and then use absolute or relative moves to move to a certain area of the scene, which is how I could work around the fact that it doesn't support presets, but that is not supported either.
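As a sketch of the continuous-move-then-stop pattern described above (the action and speed properties mirror what the change nodes in the video set, but the exact message format the ONVIF PTZ node expects is defined in its readme, so treat the field names here as placeholders and check the examples on the GitHub page):

// Function node: start a continuous zoom-in, then stop it after about two seconds.
// The field names below are placeholders, not a confirmed API of the PTZ node.
const start = { action: 'continuousMove', speed: { pan: 0, tilt: 0, zoom: 0.5 } };
const stop = { action: 'stop' };

node.send(start);                                        // begin moving
setTimeout(function () { node.send(stop); }, 2000);      // stop roughly two seconds later
return null;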
So the only option you have is to start the camera moving in one direction and then send the stop command. It certainly works, but it is probably not good enough for automatically moving to certain parts of the scene. If you are running Node-RED unattended, you can pan or tilt the camera to its different extents, and maybe with delays you can make it follow a particular pattern, but I don't think it's going to be very accurate. So these are the functions you can use, and based on my experiments and trying all of these out, it looks like none of the others are supported at the moment; maybe that will come in the future as well.

These are all the use cases I managed to cover for the Reolink cameras, or at least the use cases that I think actually make sense and that you can apply in certain scenarios: image capture, video capture, event monitoring and the PTZ functions. As always, if you are interested in this flow, you will find the download link in the video description, and I want to stress again: don't forget to look into the comment nodes, because that is where I put most of the information for the various sub-flows. That would be all for now, thanks for watching, and hopefully see you in the next video.
Info
Channel: Csongor Varga
Views: 20,564
Keywords: reolink, integration, ip camera, rtsp, snapshot, onvif, node-red
Id: MpwXad56gPQ
Length: 30min 56sec (1856 seconds)
Published: Tue Sep 22 2020