Edge Computing and next-gen of IoT Sensors - Alex Raimondi (Miromico) - The Things Conference 2019

Captions
[Applause] Thank you, and welcome to my presentation about edge computing and the next generation of IoT sensors. About myself: I studied at ETH, with a master's degree in electrical engineering, and I have been working for over 20 years in embedded systems, electronics development, and firmware development. I'm also co-founder of a company where we make Bluetooth-enabled golf balls that allow people to find their golf balls.

Gartner once presented this hype cycle for new emerging technologies. Every technology has an innovation trigger where it starts to grow, people pay attention to it, it gets published, and its publicity grows. At some point the hype breaks and the technology goes down into the trough of disillusionment, because it didn't fulfill all the requirements or expectations people had of it. After some time, the technology evolves into something that is usable and deployable in the field. With AI and IoT especially, I think we are shortly before the peak, or just getting into it. But even if the technology does not meet all the expectations people have, a few things will still make it into our future products.

What is done nowadays with computing on IoT edge sensors? We have all the conventional sensors, like microphones and MEMS sensors for acceleration and gyros, and we run standard algorithms which we develop on the PC, optimizing everything until the final result is what we expect; then we put them on a sensor and deploy it into the field. The algorithm has to reduce the data in a way that makes it transferable, or that generates the information we want, over a low-power network like LoRaWAN or Bluetooth. But the important thing is that everything has to be developed on a computer in the lab, with standardized data we've collected from the system. With AI we can
slowly start to change the way we approach this kind of computing.

This is the typical computing performance we require for different signals. We can start with simple sensors like temperature and humidity, where we have a few measurements per second and need less than a few million operations. If we go to multi-dimensional signals like acceleration, where we have three or even up to nine axes of data, we need more processing power and stronger microcontrollers. If we go into audio processing, with much higher data frequencies, and we want to try to do simple speech recognition or distinguish between different signals, we will need even more processing power and even stronger controllers. But what happens when we go for image recognition, or more advanced audio recognition like speech? We will need giga-operations per second and very strong microcontrollers, which also come with higher current consumption.

Machine learning is typically used in the kinds of applications where we expect image recognition, meaning recognizing faces, recognizing specific people, people counting, or specific feature detection, like distinguishing between cars, bicycles, and trucks on a road. In speech recognition, we want to recognize spoken language, translate it into written text, and do all the other complex things the human brain is capable of. I think those two applications are the most important, because they relate to the senses that matter most to human beings. The problem with those applications is that we normally need high-performance computing systems, like GPUs or something similar, to be able to process and run those kinds of algorithms. Training is very complex, it requires large datasets, and the models generated from it are very complex and large in size.

But what can we do with AI on standard low-cost hardware nowadays? There is a way: partnering up with a company, Octonion. They created an AI system for
microcontrollers in the class of the Cortex-M4. They call it Brainium, and they provide AI models that can run on standard microcontrollers, with low memory requirements and low power requirements. The system consists of a sensor in the middle, which is collecting data, and it is supplied with a model which can be downloaded, simply over an app, directly to the sensor. Either the sensor uses a pre-trained model which has been trained with data in the lab, or, in addition to what we did with conventional data processing, it can take the pre-trained model, apply it on the device where it has to collect data, and improve the training over time with real-world data, in the real environment where the sensor is going to work. For the training, there is the need for a high-speed interface, for example Bluetooth or USB, to collect the data and transmit it to the cloud for processing. But once the training has been completed, the sensor can switch to a low-power interface like LoRaWAN and just transmit the decision which comes out of the machine learning algorithm.

This is the device. It has multiple sensors, like accelerometer, gyro, pressure sensor, temperature, and microphone, and it is in a waterproof housing with a battery. It's a device that can already be used in proof-of-concept systems and small-scale deployments. It's very interesting for applications like predictive maintenance, detecting machine failure before it actually happens, based on sound, vibration, or other strange movements. It can be used for gesture detection: for example, as the picture shows, if you have a sensor on a package, it can distinguish whether you lift the package the right way or the wrong way, in terms of the bending of your back. And it is also capable of sound classification, which is distinguishing between cars, the blender example that my colleague showed before, and sounds like that.

So what's the goal of bringing AI, or this kind of computing, to
the edge? Compare it to just using sensors for data collection: to do all this kind of monitoring, we would need about five kilobytes of data per second, transmit it to some processing cloud, and make the decision in a high-performance computing system. That kind of data transmission might not even be possible with technologies like LoRaWAN, and the battery would die after about one day. The next step would be to do some data compression and sensor fusion on the device, but still offload the decision making into the cloud. That would mean, if we want to decide every minute whether the machine is still running well or not, we would have to transmit about 10 bytes every minute into the cloud. With that kind of setup, a battery of reasonable size would probably live for about a month. But if we can offload the full decision making and computing onto the sensor node, we can reduce the communication to only the situations where action has to be taken, or to just reporting status updates, like once an hour. Then we can achieve battery lifetimes in the range of years.

So where is Miromico stepping in? As a provider of standard LoRaWAN sensors, LoRaWAN modules, and gateways, we will be stepping in by building Cortex-M4-based LoRaWAN modules which are already Brainium-enabled. They will allow you to simply download the AI models onto the modules and use the modules in your system, with your own sensor platform, or with the sensor on a baseboard, so you can build low-cost, scalable products which are already AI-enabled. Everything will be available through our logistics and distribution partner, Avnet.

So what else can be done with AI? AI is kind of a broad word for everything that mimics human behavior. We had a look at simple machine learning algorithms for sensors, for a microphone and a temperature sensor. Now, if we want to go for the more interesting use cases, like speech and image recognition, we are heading towards deep
learning. Deep learning is massively more complex than those small machine learning algorithms, and if you compare it over the years, the complexity has been increasing significantly in high-performance computing. But how can we bring those algorithms into embedded systems? There is a project developed by engineers at ETH Zurich and the University of Bologna, where they built a low-power IoT processor targeted at neural network processing and AI on the edge. It's called the GAP8 processor. The eight stands for the eight RISC-V cores which are in there, and it also provides a neural-network-oriented hardware processing unit to speed up, and reduce the power consumption of, the typical operations you need for AI or neural network computation. It can do up to eight giga-operations per second, and it can provide 300 mega-operations per second at about one milliwatt. The device is now commercialized by GreenWaves Technologies, which is a French startup, and there is also already existing hardware, the GAPuino; I think the name is familiar to you.

What can be done with a board like that? We currently have a master's thesis running with ETH Zurich, where we are working on camera-based image recognition on a low-power system. The picture here shows how we can detect a number based just on a camera image. The idea is to offload image recognition into low-power sensors as well, and in the end communicate the result out over LoRaWAN.

How about energy efficiency in these next-generation IoT systems? Let's take a sample application with a convolutional neural network for people counting. Our target is to develop a camera system which is battery-operated and standalone, which means no wire attached, no power supply; you just stick it to a ceiling for people counting, for example in a meeting room, or in a train station to detect overcrowding of a location. In a project, we implemented that kind of application on a Cortex-M4 with a very
small camera, like 64 by 64 pixels, and it took about one minute per frame for processing. So if you do one detection every 10 minutes, with that processing rate of one frame per minute, you will get a battery lifetime of less than two months with a reasonably sized battery. On the GAP8 processor, we can run those algorithms at about 30 frames per second, so if we duty-cycle everything with one detection every 10 minutes, we will get a battery lifetime of more than 10 years. Of course, everything is still in research and development, so it's not ready for production now, and there are a lot of problems to be solved with the new hardware, but still, it shows us where we are going with the next generation of IoT.

As a conclusion: machine learning is now slowly making its way into embedded systems. It will provide a vast amount of new, interesting use cases, and it will bring a lot of new opportunities and intelligence for new low-power systems. And we at Miromico are investing in and inventing new, intelligent sensors for future applications. So thank you for your attention.

[Host] You were fast, so we still have time before we go into the break in this room. Maybe one last question from the room. This is exciting stuff, this is where we're going, AI and IoT together. Where are we heading? If you're back here in three years, where will we be?

[Alex] Hard to say, but I think we will have these kinds of devices ready, and there are a lot of use cases, for example facility management, meeting room occupancy, train stations. I mean, all those devices are on the market already; you can buy camera-based people counting, but it's all wired, and the total cost of ownership of a wired system is still too high, so it will not deploy or scale in high volumes. We need battery-operated devices, just stick and play.

[Host] So for the next few years, you see this taking a high
flight?

[Alex] Yes, and being in production.

[Host] This room is going to have a break now. One last applause for Alex Raimondi, thank you very much. [Applause]
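The battery-lifetime comparisons in the talk (streaming raw data vs. compressing on-device vs. deciding on-device, and the Cortex-M4 vs. GAP8 duty-cycling for people counting) can be reproduced as a back-of-the-envelope calculation. The battery capacity and the average/active/sleep power figures below are illustrative assumptions chosen to land in the same ranges the speaker quotes, not numbers from the talk or measured values:

```python
# Back-of-the-envelope battery-lifetime estimates for the edge-computing
# scenarios from the talk. All power figures are assumptions for illustration.

BATTERY_WH = 3.7 * 2.0  # assumed 2000 mAh cell at 3.7 V -> 7.4 Wh


def lifetime_days(avg_power_mw: float) -> float:
    """Battery lifetime in days at a constant average power draw (mW)."""
    return BATTERY_WH * 1000.0 / avg_power_mw / 24.0


def duty_cycled_power(active_mw: float, sleep_mw: float,
                      active_s: float, period_s: float) -> float:
    """Average power (mW) when the node is active for active_s out of period_s."""
    duty = active_s / period_s
    return active_mw * duty + sleep_mw * (1.0 - duty)


# Scenario 1: stream ~5 kB/s of raw data; assume ~300 mW average radio + MCU.
# Scenario 2: compress on-device, send ~10 B/min; assume ~10 mW average.
# Scenario 3: decide on-device, report ~hourly; assume ~0.3 mW average.
for label, power_mw in [("raw streaming", 300.0),
                        ("compress on-device", 10.0),
                        ("decide on-device", 0.3)]:
    print(f"{label}: {lifetime_days(power_mw):.1f} days")

# People counting, one detection every 10 minutes (600 s):
# Cortex-M4 needs ~60 s per frame; GAP8 needs ~1/30 s per frame.
# Assume 50 mW while processing and 0.01 mW asleep (both guesses).
m4 = duty_cycled_power(50.0, 0.01, 60.0, 600.0)
gap8 = duty_cycled_power(50.0, 0.01, 1.0 / 30.0, 600.0)
print(f"Cortex-M4: {lifetime_days(m4) / 30:.1f} months")
print(f"GAP8: {lifetime_days(gap8) / 365:.0f} years")
```

Under these assumptions the three scenarios come out near one day, one month, and roughly three years, and the duty-cycled people counter near two months on the Cortex-M4 versus decades on the GAP8 (in practice battery self-discharge would cap that well below the computed figure, consistent with the "more than 10 years" claim). The point the arithmetic makes is that average power is dominated by how long the radio and processor stay awake, which is exactly what on-device decision making minimizes.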
Info
Channel: The Things Network
Views: 1,872
Keywords: iot, internet, of, things, lora, lorawan, ttn, the, network, technologie, future
Id: -MJurPAJyG8
Length: 16min 15sec (975 seconds)
Published: Mon Feb 11 2019