ElixirConf 2021 - Mike Waud - Sink: A protocol for distributed, fault-tolerant, BW sensitive systems

Captions
Thanks for joining, everybody. I'm going to talk about something I've been working on for the last year and a half or so: a project called Sink. The description is that it's a new protocol for distributed, fault-tolerant, bandwidth-sensitive IoT systems. For people watching the video, and for you in the audience, I tried to put some buzzwords up here. Please laugh again at the jokes; we had a little mishap at the beginning. So: IoT (Internet of Things), machine-to-machine (M2M) communication, edge computing or fog computing (which you heard about earlier), event-based systems, and eventual consistency. A little more specifically, we're going to talk about Nerves, and we're going to talk about MQTT. Can I get a show of hands of people who work with or have heard of MQTT? Okay, now I see why people are here. Great, I'm definitely going to talk about that, so you came to the right place. Also something called CoAP, a little bit of Kafka, Phoenix Channels, and then TCP. So let's dive in. But first I want to outline some of the takeaways I hope you get, which are: what are some of the challenges of IoT device communication; what are some of the options you have for communicating with devices; third, what are some of the custom approaches we at SparkMeter have explored with our solution, Sink; and lastly, this is ElixirConf, so how does this all fit in? So hello, my name is Mike Waud. I work for a company called SparkMeter, which probably very few of you have heard of, but SparkMeter offers grid management solutions that enable utilities in emerging markets to run financially sustainable, efficient, and reliable systems (got to make sure I stay on brand here). So what does that mean? Well, we have three lines of business. We design and sell smart meters for electricity metering, so these will measure electricity used and some information about how stable a grid is, and
they will also receive signals to turn on or off, to limit the load, to update firmware; there's a lot going on in these things. We also sell grid management software, kind of a SaaS to manage your grid, so we can handle the software part and let our operator partners focus on generating and distributing electricity and interacting with their customers. And lastly there are some digital solutions I won't really talk about. We operate in emerging markets, typically Africa or Southeast Asia, which makes it really interesting. The project was actually started in Haiti, and the idea was that through automation and more efficient grid operation we can lower the cost of delivering electricity, and increase the number of people in the world who have access to electricity, and good access to electricity, which is also important. We've been around for a while; we sell to more than 25 countries. This meters-sold number is old, I apologize, it's definitely higher than that now, but it says more than 120,000, so still accurate. And just earlier this year we got an award from Fast Company as one of the most innovative companies in energy, which was pretty cool. So this is the project I work on, which is our next-generation base station: it's this blue box in the middle here. The way this thing works is we have these smart meters on the left, which operate in a mesh (harkening back to our keynote, which was interesting to hear). They all talk to each other, and then eventually to this base station, wirelessly. So the base station at a site will have tens, hundreds, thousands, sometimes tens of thousands of meters it talks to. The base station collects that data and does some processing (we actually run billing on the base station), and it'll have all these messages it wants to send up to the cloud, and then the cloud will have things it wants to send down to the base station. All kinds of
communication going back and forth. The base station runs Elixir and Nerves, which we've been super happy with, plus some Rust for performance-sensitive stuff. On the servers, for the communication, we run Elixir; we have some separate services that run other stuff like Python, things you would expect, but mostly everything here is Elixir. In this talk you'll hear me talk about "the ground", by which I usually mean the base station, and "the cloud", which is the cloud. And the question mark is very relevant: I really enjoyed the keynote, because that's a lot of where I work, just "what's in between here?", and that's going to be the subject of this talk. So that's a little bit about who I am and what we do. Let me get into the problem a little more, which is that messages need to go in both directions. We need to talk from the ground to the cloud, to send meter readings up and ship them out to wherever; and from the cloud, when someone makes a payment, we need to send that down to the ground. Network and power losses are frequent, and often they look like the same thing, which is even worse; that's a big challenge we have to deal with. We also want visibility into device connectivity: we want to know if a device is offline so we can be proactive and tell our operator partners, "hey, we haven't heard from this device in two hours, why don't you go check on it." And, as hinted in the talk description, data is expensive. These devices talk over cellular, so data can be very expensive; let me give you some hypothetical numbers of what that might actually mean. Let's say we're paying 40 cents per megabyte, and let's say we have 100,000 meters, and those meters consume one megabyte per meter per month. At 40 cents per megabyte, that can be $40,000 per month. And now if you have a protocol that is really verbose or
chatty, you can imagine doubling that, or 10x-ing it; and if you have a protocol that's good, you can cut it by some factor. That's why this is important. Just to give you an idea of how frequently we lose network or power: these are two different sites, both in rural Kenya. Green means connected, and red means disconnected. Up top, that site is very good; below it is much worse, although it's been online for more than a day solid, which is good. We need to deal with both the best case and the worst case here. Given that background, let's evaluate some of the solutions that are out there. I'm going to rank them by these criteria. Industry support: if I go to Stack Overflow, or a blog, or just the documentation, can I read about this? If we hire someone, do they have some idea of what it is? Bad would be none of that. Bidirectional messaging: can we send messages both ways, or do we have to build something on top to do that? Device visibility: same thing, can we see that the device is connected, or do we have to build some sort of heartbeat? Delivery confirmation: if we send a message to the device, how do we know the device actually got it? Do we have to handle this ourselves, or will the solution handle it for us? Data efficiency. And then supporting infrastructure: what do we have to stand up in production to let this thing run, how much work does it take to keep running, and if we have developers who want to run this locally, which we do, how much work is it for them? Because if there's a lot of friction they're probably not going to do it, and it's going to be harder to catch bugs and debug stuff. So with that, we looked at REST, something called CoAP, MQTT, Kafka, and lastly WebSockets and Phoenix Channels. Start with REST; it needs no introduction, representational state transfer. There's a client that makes an HTTP request over TCP to a web server and
gets an HTTP response. So: industry support, very good. Bidirectional messaging, bad for our use case, because it's easy to go from the base station to the cloud, but to go from the cloud back, well, there are ways of doing it, but we basically can't make that request. Device visibility is also bad, because you might know the last time you got a request or response, but you have to build some sort of heartbeat on top to know the device is still there; not that big a challenge, but one more thing you have to do. Delivery confirmation, though, is very good, because it's essentially synchronous. Supporting infrastructure is also good, assuming you already have the web server running. And data efficiency: big sad face, no surprise, because you're setting up this TCP connection, tearing it down, and carrying all this extra HTTP data in the frame that you don't really need. Then something called CoAP, the Constrained Application Protocol, which is really interesting. Has anybody heard of this? All right. It's pretty neat, worth reading about just because they have some interesting ideas. It's kind of like REST for constrained devices, and it operates over UDP. The spec is very well written. It didn't work for us, for a lot of the same reasons as REST, and also, although the spec is well written, there isn't a ton of support for it in libraries and such. Bidirectional messaging: sort of the same story, although there's something called bidirectional CoAP; it's not clear if that's supported by the spec or by a library, I didn't look into it too much. Device visibility: same problem as REST. Delivery confirmation is a little better, because CoAP has this interesting thing where, when you make a request, you can set a flag saying whether the request needs a response, and when the server responds it can say "here's my response: pass, fail, error for this reason", or it can say "my response is that I'm going
to give you a response, and here's a token for that future response", and it matches them up later. Kind of interesting. Supporting infrastructure is also pretty good, because it doesn't need anything in between to stand up, and data efficiency is also pretty good. Okay, MQTT, the 800-pound gorilla. For those of you who aren't familiar, the idea is that you have this MQTT broker in the middle, and all these clients connect and talk to it over the MQTT protocol, using TCP. The clients subscribe to topics, and also publish messages on those topics to the broker, which are then propagated through pub/sub to any currently listening client; key distinction there. So industry support is good, of course: AWS, Google, and Microsoft all have brokers you can run. It's been a while since I looked at this, but they were supporting 3.1, though not everything there; MQTT 5 has more interesting stuff, but it didn't look like the big players supported it, although there are some options. And of course MQTT 4... well, we don't talk about MQTT 4; it's complicated. Bidirectional messaging, though, is very good: easy to send messages to the broker and get them back, and Tortoise is a great library for this. Device visibility, though, is bad, particularly for our use case, and this is something that kind of annoyed me. There's this last-will-and-testament message in MQTT, which is helpful, but we ended up having to build essentially session tracking on top of it. You might know, as a client, that you're connected to the MQTT broker, but for us that doesn't say whether the base station is connected: the base station has to broadcast a message that says "hey, I'm connected", and we have to build tracking on top of that. That was just an extra tedious step for us. A similar issue is delivery confirmation, where the client will get a confirmation from the MQTT broker that it got the message, but
just because the broker got the message doesn't mean the device on the other end got it. The classic example is when no one is subscribed to the topic: it's like a tree falling in the woods with no one to hear it. As the cloud, we might send a message to the broker, but the device isn't actually listening; it just wasn't there. So we had to build a whole tracking system on top of it, and I'd be curious to hear afterwards whether people have had the same problem; it seems to be a common request/response problem that people solve on top of MQTT themselves. Supporting infrastructure: kind of a frowny face, because you have to stand up this broker, and if you want to run something locally you run Mosquitto, an open-source broker, versus whatever is running in the cloud, and they're not the same thing. But data efficiency is good, so that's good. Kafka: I'm not going to explain this too much. I found a really good article from Confluent that talks about some of the cons of Kafka for IoT, and they specifically say it requires a stable network connection and solid infrastructure, which we don't have. They also say it lacks IoT-specific features, and then they go on to basically tell you to use MQTT. If we did look at it, it's mostly pretty good; they don't throw away a message if no one's listening, which I liked, but standing up a Kafka instance doesn't seem very fun or easy, so I gave it a really big sad face. Lastly, the one that's relevant to us: Phoenix Channels, or WebSockets. This is nice because there's nothing in between: a client connects over WebSockets to a web server, and supposedly one web server can handle something like two million connections, I hear. Industry support is very good; of course it enables bidirectional messaging; good device visibility. Delivery confirmation is bad, because you still have to build something on top if the connection goes
away; not the end of the world, but one more thing to do. Infrastructure is good because you don't have anything in the middle, and data efficiency is also pretty good because you can send binary messages. I also pulled the Google Trends for these, just out of curiosity, and it's pretty much what you'd expect: REST is more popular than Kafka and WebSockets, and MQTT and CoAP are trailing behind. It's actually interesting that it's been pretty steady, although for some reason it went down; I don't know if Google changed their algorithm, who knows. So given all that, what is Sink? Why did we do this thing ourselves? What are we doing, and how are we solving things? Well, first of all, we have this cool logo, which I was very happy with; everybody needs a logo. It's really pretty simple: we run a TCP server, we have a custom binary protocol, and clients connect, send messages, and get messages back. It uses Ranch, which was really nice because we're already running web servers on both the devices and in the cloud. In the cloud, which runs the TCP server, we didn't need to add anything: we already had Ranch, because we have Cowboy, because we have Phoenix. So that right there saved us a bunch of code, and we know Ranch works great. We're also able to use mutual TLS authentication: the devices have their own SSL certs, the cloud has its own cert, and when they connect we know each one is who they say they are. We don't have to deal with usernames and passwords, we don't have to deal with bearer tokens; all that is basically done for us, and Elixir is great for this. Then we have hooks into the app for the additional peer authentication we do, and we can send messages when the TCP connection goes up or down. We also have keep-alive messages, kind of like WebSockets, to tell us if the client's still there. And really, most of what we use this for is publishing a message, much like MQTT,
and then we have built in, and this is important, batch publishing. The protocol itself, if you give it a bunch of messages, will take them and figure out the most efficient way to encode and send them; I'll talk more about that in a few minutes. There's probably more coming, but so far that's all we've needed. This slide, I find it very boring, but it's important, and I'll explain why in a second. A Sink message has a key, much like a key in a key-value store, or a primary key; it's just some binary. It has an event type ID, which is an integer, and if you combine the key and the event type ID you essentially get a topic, but it's much less verbose than something like MQTT or Kafka. We also built in schema versions: as we add data to the system and evolve our schemas, we want to be explicit about which version we're sending, so that's built into the protocol. We also have a timestamp built in, as a Unix timestamp. Anything that is not any of that is considered the payload, which is just some binary: it could be Thrift, it could be Protocol Buffers; we use something called Avro; you could just use strings, you could use JSON, you do you. The really neat thing is that because the protocol knows all this extra data, it can pull it out: you give Sink essentially row-level data, and Sink turns it into column-oriented data, and that column-oriented data compresses much better than the row-oriented data. And once you publish a message, it's built in that you either get an ack, which means "hey, I got this and it's okay", or a nack, "I got this and it's bad, for these reasons". So given our criteria from before, how do we do? Well, industry support: big sad face. You're basically looking at it, me and a few other people. But hey, everything else we did great, so I'll take it. You guys want to see some benchmarks? Yes.
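The message anatomy and the row-to-column trick described above can be sketched roughly as follows. This is an illustrative Python sketch, not Sink's actual wire format: the field widths, field ordering, and the use of zlib here are all assumptions made for the example.

```python
import struct
import zlib

def encode_message(key: bytes, event_type_id: int, schema_version: int,
                   timestamp: int, payload: bytes) -> bytes:
    """Frame one Sink-style message.

    Hypothetical layout: key length (u8) + key, event type id (u16),
    schema version (u8), unix timestamp (u32), payload length (u16) + payload.
    """
    return (struct.pack("!B", len(key)) + key
            + struct.pack("!HBI", event_type_id, schema_version, timestamp)
            + struct.pack("!H", len(payload)) + payload)

def decode_message(frame: bytes):
    """Inverse of encode_message: recover the five fields from one frame."""
    key_len = frame[0]
    key = frame[1:1 + key_len]
    event_type_id, schema_version, timestamp = struct.unpack_from(
        "!HBI", frame, 1 + key_len)
    off = 1 + key_len + 7
    (payload_len,) = struct.unpack_from("!H", frame, off)
    payload = frame[off + 2:off + 2 + payload_len]
    return key, event_type_id, schema_version, timestamp, payload

def encode_batch_columnar(messages) -> bytes:
    """Pivot row-oriented messages into columns before compressing.

    Grouping like fields together (all keys, then all metadata, then all
    payloads) gives the compressor long runs of similar bytes, which is
    the intuition behind Sink's batch publishing.
    """
    keys = b"".join(struct.pack("!B", len(m[0])) + m[0] for m in messages)
    meta = b"".join(struct.pack("!HBI", m[1], m[2], m[3]) for m in messages)
    payloads = b"".join(struct.pack("!H", len(m[4])) + m[4] for m in messages)
    return zlib.compress(struct.pack("!H", len(messages)) + keys + meta + payloads)
```

With 100 near-identical events, the columnar batch compresses to a small fraction of the sum of the individual frames, which mirrors the batching benchmark discussed next.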
So I pulled these together Sunday. I took one event, which is 60 bytes, about the most we send, trying to give worst-case data here. With REST it took roughly 600 bytes, or 10x the amount of data we actually want to send, using JSON; binary was only a little better. MQTT was 4x, and that's because we're sending a message out and back. And with Sink we were able to do almost 2x, just a little better, so that felt really good. But then we wanted to kick it up a notch, of course: let's talk about batches. I took a batch of 100 events, that same 60 bytes each, and I really tried to randomize the data so we couldn't cheat with compression. With MQTT you're losing a lot of the protocol overhead at that point, so it's almost 1x the amount you're sending. I also put together some custom batching of how you might lump these together, and with gzip, my randomization was good, so we didn't actually save much; that was a good confirmation while I was doing benchmarks. And then with Sink we were able to do better than the original 60 bytes, just by doing that column-based encoding: if something is the same the whole time, we can send it just once. And if we send less random data, which compresses well because we structure it better, again we can out-compete something like MQTT. So that felt really good. So, that is the end. Let's check the time. Cool: it's the end of the easy stuff. We're about to get to the tough stuff, and I'll warn you, it's going to get a little crazy. The real problem is that we have a distributed system. Writing a binary protocol and encoding messages is a ton of fun, and you can really feel like you're doing something, but at the end of the day the problem is much more challenging than that. We have these base stations on the ground that talk to the cloud, and each one is kind of its own special case. An example of this: think about an
IoT thermostat where you can change the temperature from the cloud or from the ground, but the system is disconnected. Someone on the ground changes it from 65 degrees (65 Fahrenheit, just to be specific) to 60, and then someone in the cloud changes it from 65 to 70. In the cloud, you probably want to know that the state on the ground was 65 degrees when you tried to change it to 70. So you have to keep track of where the system has diverged, and then when it comes back in sync you have to decide: what do I do? Do I change it to 60? Do I change it to 70? How do I even think about this, and how do I make sure messages still go out while we figure it out? All that to say: we need to know the state of the other system, and we need some way of doing conflict-free replication. For us, additionally, we want to be able to prioritize certain messages, such that if a base station has been offline for a couple of days, it's going to have a ton of data to send, but we probably have more important things to send first. We really mess with message ordering here, and that can cause unexpected things to surface in the UI: how do we deal with that? Schema evolution: this is the case where, on your devices, you might be running multiple firmware versions across your fleet, so you have to figure out which version to send to this device and which to send to that device. Additionally, you want to be able to deploy to the cloud and have your messages be understood by the firmware even if you haven't sent a firmware update. That's a big challenge we're trying to solve with Sink. And lastly, there's this idea of poison messages: these are the things that get into your system (Justin was talking about this a little today). I always think of it like an emoji: someone put an emoji in your
system and you just can't deal with it; someone sends a fire emoji or something, and it crashes your system. If it crashes once, that's fine, because we have these supervision trees, but if that message keeps getting sent over and over and over, it's going to keep crashing and potentially bring down your whole system. So in the context of exchanging data between the cloud and the ground: can we build dead-letter queues into the protocol, so that if one of these systems writes a bad message, it doesn't break the connection? All this to say, I drew this diagram really early in the project. We have the base station firmware with its own internal queue, and the cloud application with its own internal queue, and what I really wanted was just someone in the middle whose problem I could make it. I would just say, "here's the event; you figure out how to send it, and how to do it efficiently." I don't want to worry about this meter over here, or about exposing this API; I don't want to have to deal with it. So the way I think about Sink is: if MQTT and Kafka had a baby. I don't know what it would look like, maybe like this, who knows. We're taking the fine-grained topics of MQTT, which can be really specific (something Kafka doesn't like), and the persisted event log of Kafka, and combining them, while keeping the idea of subscriptions from both and allowing for synchronous and asynchronous operation. If you've worked with Kafka or other event-based systems, this should feel familiar. Sink uses an event log where, say, meter readings come in: the first meter reading would be offset one, and each subsequent one gets essentially an auto-incrementing ID. Similarly, if you have changes to a user, the first change is one, you change their name and that's two, and so on. That's useful because then you can build
off of that with event subscriptions. Where before you had a queue going out to each base station, taking every event or message and putting it into each queue, you can now essentially have pointers, and you know where your system is in terms of replication. All of that, of course, is persisted in the data store. One of the other things that annoys me about MQTT is that you have these subscriptions, and then you also have to track them somewhere else and make sure that stays in sync; I didn't want that again with this solution. Additionally, there's configurable priority: we can send meter readings last, because that's typically the most data and the least needed for online-transaction-processing kinds of things. You can configure transmission behavior: say something has changed a lot and a base station just came online; we can send only the most recent value, rather than old data. And we have retention parameters, where we can say "keep the last 100 events for each meter" or something like that; as the system chugs along, that's really good for managing the amount of disk space used. So, given all that, the retrospective: was this a good idea? Let's start with the reasons against, so we can end on a happy note. The big one is that it's custom: it's something we built and we maintain, and there's certainly a bus-factor risk. We're also still figuring out some of the right trade-offs and how to scale this. And if you've worked with immutable data, you probably know it's a bit of a blessing and a curse. You know that, for the most part, no one has changed it; but if someone commits a plain-text password, or there's some bug that changes how something is calculated, you have to figure out: do I build something on top of this to correct for that? Do I go in and
perform surgery on these records? It's a bit of a challenge. Also, with these event logs, it's kind of like Cassandra, where you don't have foreign keys or secondary indices, so if you have to query into the log you might have to do essentially a table scan. There's also the question of how we deal with people who depend on this: if the system has been offline for a while and comes back online, it has a lot of data to send, and if you look at the system at that point it's going to look weird. How do we deal with that? It's the classic thing about error handling: if you handle this error, your system is in a weird state, and then how do you actually express that to people? It's easy to think up front, "oh, we'll just handle this error", but what that means for users is something we often think about too late. Additionally, if you've worked with event-based systems, you know that event ordering between topics is complicated: if something in one topic depends on something in another, you have to figure out, do I wait for it? How do I know when it actually came in? And I think about this a lot: at what point am I just re-implementing Kafka? Am I really delivering something of value? Should I just switch? How much is the novelty worth? But for the pros: one of the things I keep coming back to is that we would have had to do most of this anyway. This complexity is already in the system, and by pulling it into Sink, the abstraction is relatively easy to understand and reason about, and it mostly keeps the complexity contained. We were using MQTT for a while, until I had this crazy idea to do an event-based system, and when I was pulling out the old code and putting in the new stuff, it just felt so good to delete the parts of the system I had kind of held my nose at, the parts that really annoy
you. That was a really good feeling. Having this event log double as an audit log is also super handy: we were able to delete a lot of audit-log tables, and sometimes you think "I don't know if I need to keep track of this data", but once it's in an event log, you have it if you need it later. Also, being able to grow the system by adding new events has been great: we don't have to make a new endpoint or anything; all that infrastructure is basically there for us. And lastly, sorry, I'm not going to talk about it, but if you work with CRDTs, this stuff is really cool; an event-based system makes that a lot easier. So where did Elixir help? Well, Nerves has been great, big fans; it really helped us get this project up fast, and everybody who's worked on it has really enjoyed it. VintageNet has also been super helpful for getting the networking working, and even when there wasn't something we needed in VintageNet, it was very easy to plug into its abstractions to get our stuff working. We also use NervesHub to deploy to our Nerves devices, and the remote console access is just super helpful; probably my favorite feature of NervesHub, tough to say. Another thing I didn't appreciate until I was putting together this talk: being able to share code between the ground and the cloud has also helped us move really fast. I don't know if we would have done this without that, because we would have had to write this binary encoding and decoding twice: once in C or something like that, and then again in, I don't know, Python or whatever runs in the cloud. Having one library that runs in both has been a big win. Next, supervision trees: helpful for thinking about error handling, but also for system bring-up: what parts of the system start when, and how do we handle things that don't start. And lastly, working with binaries and TCP connections: I think this would have
really intimidated me two years ago, but Elixir is pretty much the best language you can use to solve this problem. I can't imagine having to do this in some other language; if you have to do it and you're in Elixir, you save yourself a lot of headaches. So this is the actual end. I want to say thank you to the audience and the community; thank you to SparkMeter for sponsoring me and for my awesome colleagues; thanks to the Nerves team for an awesome product; and thank you to my co-worker Benjamin Milde, whom you might know as LostKobrakai; he and I have worked on a lot of this together. And if you haven't read Martin Kleppmann's book, Designing Data-Intensive Applications, I highly recommend it; it's very popular on Amazon, so probably a lot of you have read it, but if you haven't, it's very good. So with that, I'll take some questions. Sure. So the first question is whether subscriptions are mainly tied to a key, and whether there has been pressure to design a more complicated subscription model. Yeah, that's a good question. The way we think about it is that the complicated logic determines whether something has a subscription or not, and then, once the subscription exists, the system handles what goes where. So the firmware or the cloud determines who gets a subscription to what, and then Sink says "oh, I have a subscription; I'm going to make sure they're up to date." A subscription will actually look like this key and event type; it'll have a consumer offset, which is the last thing it knows it transmitted, and a producer offset, which is the most current record. So if you're 20 records behind, your producer offset minus your consumer offset is going to be 20.
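That offset bookkeeping can be modeled in a few lines. This is a toy sketch for illustration only; the names (`Subscription`, `record_produced`, `ack`, `lag`) are invented here and are not Sink's actual API.

```python
from dataclasses import dataclass

@dataclass
class Subscription:
    """Toy model of a Sink-style subscription: a pointer into an event log."""
    key: bytes
    event_type_id: int
    consumer_offset: int = 0  # last offset the subscriber acknowledged
    producer_offset: int = 0  # most recent offset written for this topic

    def record_produced(self) -> int:
        """A new event was appended for this key/event type."""
        self.producer_offset += 1
        return self.producer_offset

    def ack(self, offset: int) -> None:
        """The subscriber confirmed receipt up to this offset."""
        self.consumer_offset = max(self.consumer_offset, offset)

    def lag(self) -> int:
        """How many events the subscriber still needs to receive."""
        return self.producer_offset - self.consumer_offset
```

The appeal of this design, as described in the talk, is that per-destination queues collapse into two integers per subscription, both of which persist naturally in the data store.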
That's the way we do it. It's worked so far; we're evaluating some other options for a couple of reasons, but that's the gist of it. John? Yeah. So basically we keep track, on the connection, of the last time we received a message, and if it goes over a certain amount of time we send a ping; I think the server sends it. Basically, if we haven't had any data, we just make sure they're still there. If we haven't heard from them, it's called a ghost TCP connection or something like that, there's some name for it; we make sure they're still there, and if not, we close the channel. But we only send the ping because we don't want to send more data than we have to. [Question:] So the event log is part of Sink itself; what I'm curious about is why, or whether, you have separated that out as a helper library or something, so you could just use the protocol; and also whether you had given thought to using something for the distributed consensus part, something like... Yeah, I might not answer your question completely, but I'll mostly get to it. We've been able to code the event log and the protocol so they're pretty separate. We're considering splitting them, so you could use just the protocol and not persist anything, or just persist stuff and not use the protocol, or use both together. With the event log it's also interesting because you get into this sort of two-phase-commit territory: if you're using Kafka and keeping the events in Kafka, you have to deal with "okay, when did the event actually get committed?" So we use this outbox pattern, where we store the event log in the database, so we have the transactional guarantees of the database that the event is actually there, which I think is another nice advantage. [Question:] I'm curious what your data store is
[Mike] On the ground? Yes, we use SQLite. SQLite, SQLite... potato, potato. I've learned a lot about it, and I'd be very curious to talk with other people who've worked with it a lot, because working with it makes you realize how much you take for granted with something like Postgres, and it puts you much more in touch with your data patterns and with what the database is actually doing.

[Audience] (Question about how conflicts between the cloud and the ground are resolved.)

[Mike] Sure, I can talk about that. Basically we have these event logs, and the idea is that any given event should only flow in one direction: much as meter readings only go from the ground to the cloud, user information typically only goes from the cloud to the ground. If something needs to reconcile differences between the two sides, with data moving in both directions, you make a separate event for each direction, then look at the two and do the conflict resolution there. That way you never have a single resource being updated from both the cloud and the ground, so there's no shared resource and no shared conflict. John?

[Audience] Do you track flash failure rates, say writes that failed, or how hard you're using the eMMC?

[Mike] I would love to. It's definitely been a question and a concern, but the short answer is no. We have to deal with power outages, so our assumption is that power could go out at any time, and unfortunately that means we write to disk a lot. So there is a question about the flash's lifespan; fortunately our disks are fairly large, which I think helps. But it is a real question.

[Audience] Do you write immediately?

[Mike] Right. For example, say something comes in that causes us to change the state of a meter. We want to know that we changed that state; we don't want to keep it only in memory with a chance that we lose it. So we
err on the side of writing too much. But writes also slow the system down, so that's a concern as well.

[Audience] Are you generating Elixir code for that?

[Mike] Yeah, there's a library called erlavro that we've had pretty good luck with. Avro is great; I like it a lot. It's nice to have guarantees from the schema: we define a schema for the events, and sometimes something malformed slips through elsewhere, but then the schema gives us an error. It's one more nice check on our data. Yes sir?

[Audience] (Partly inaudible question about the load limiting mentioned earlier.)

[Mike] That's a great question. Some of the things we do: if a site has solar panels and a diesel generator, they'll schedule usage, so electricity costs more during the parts of the day when the generator is running and less when the solar panels can be used. That's part of the software. I don't think we do anything dynamic with that yet, though I'm sure it's in the pipeline. We also have the ability to set the load limit dynamically, how much power a customer can draw from the meter, because you might not have enough power to serve everybody on the same site. If it's a microgrid, one that isn't connected to the main grid, the site might have, say, twenty solar panels, and the amount of energy it can give anyone at a given time is capped. And no, the meters aren't passive; a lot happens both on the device and in the cloud, which makes my job very interesting.

[Audience] Can I buy your company's system and run it inside my own house?

[Mike] That's an interesting question. I'm sure our marketing team would have a lot of questions for you about that, but the short answer is it wouldn't be worth it for you. A lot of this is basically keeping track of
usage per customer, figuring out whom to bill, and keeping operations going. But it's an interesting idea. If you want to do anything with monitoring, though, Nerves is great; that's how I'd spin it.

I also know we've got to get to lunch. I'm happy to take questions forever, but don't feel bad if you want to head out; we're probably at time anyway. Thanks!
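As a recap of the conflict-resolution answer above (separate cloud-to-ground and ground-to-cloud event streams, reconciled explicitly instead of mutating one shared resource from both sides), a toy merge might look like this. The event shape and the last-writer-wins policy are illustrative assumptions, not Sink's actual rules:

```elixir
defmodule Reconcile do
  @moduledoc """
  Toy reconciliation of two unidirectional event logs. Each event is a
  map %{key: ..., value: ..., ts: ...}. When both directions produced an
  event for the same key, we resolve the conflict with an explicit policy
  (here: last writer wins) rather than letting either side silently
  overwrite a shared resource.
  """

  def merge(cloud_events, ground_events) do
    cloud = latest_per_key(cloud_events)
    ground = latest_per_key(ground_events)

    Map.merge(cloud, ground, fn _key, c, g ->
      # Both sides touched this key: pick the newer event.
      if g.ts >= c.ts, do: g, else: c
    end)
  end

  # Collapse a log to the most recent event for each key.
  defp latest_per_key(events) do
    events
    |> Enum.group_by(& &1.key)
    |> Map.new(fn {key, evs} -> {key, Enum.max_by(evs, & &1.ts)} end)
  end
end
```

Usage sketch: merging a cloud log that set `:tariff` at ts 1 with a ground log that set it at ts 2 yields the ground value, while keys touched by only one side pass through untouched.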
Info
Channel: ElixirConf
Views: 876
Keywords: elixir
Id: DJRL86mO4ks
Length: 42min 5sec (2525 seconds)
Published: Fri Oct 22 2021