Andrew Thompson - Erlang logging for the 21st OTP | Code BEAM SF 19

Captions
Thank you. Yes, so today I'm going to be talking about the fascinating subject of logging, which is really a subject that should be boring — it's boring until it's important. Still today, logs are one of the few ways you have to go back historically and say: what the heck happened, how did I get here? If you weren't running DTrace or inspecting the process as it was failing, logs are usually one of your very few ways of recreating what happened. And if you ship a product to customers, logs can be one of the only ways your customers communicate with you — if you write a product that the customer runs on site, your logs are you talking to the customer, so the logs need to be approachable in some way. Logging is one of those things where, as long as it's working fine, nobody cares about it and nobody pays attention to it, but when it breaks it's like plumbing: you need to call the plumber, and the plumber needs to be there in fifteen minutes, because your ceiling is spouting water.

Historically, Erlang has had a very dodgy story on logging. The logger in OTP has a lot of weaknesses — I think anybody who's used it is intimately familiar with them, and we'll cover them briefly — but a ray of hope has appeared: in OTP 21 we finally have something that is not error_logger to look at and consider as part of an Erlang or BEAM deployment.

So why am I up here? Well, I have a long history with Erlang logging, going back about a decade. I tried to use Erlang's error_logger in 2008 and it was so bad I wrote my own. Then I joined Basho in 2011, and I told our customer support team about the logger I had written previously, and they were so excited that they made me write a new one for Riak, because one of the biggest pain points in customer support was logging — we'll cover the real pain points here. Then I gave a talk about lager at Erlang Factory 2013, maybe 2012, forgive me a year either way, and I've been maintaining lager ever since. So, a wonderful eight-year history of dealing with this. I was also consulted on, and participated a little bit in, the design of the new OTP logger in OTP 21 — I emailed with Siri Hansen at Ericsson quite a lot.

So what is the problem with what we had before? The big one is that error_logger had a completely asynchronous pipeline. A process could send logs all day long for free and not pay any cost — it could just kick the can down the road, and there was no back pressure, nothing; it was just a completely asynchronous pipeline. Log messages were not size-truncated at all: if you had a 200 megabyte binary, or even a one megabyte binary, in your gen_server state and you crashed, that one megabyte binary was going to be put in the log file, and Erlang is not good at printing a one megabyte binary. The logs that you get out of error_logger are extremely confusing for anybody who doesn't know what Erlang is, or doesn't want to know what Erlang is. It uses very dramatic terms like "crash report" — it scares people; they go "oh my god, my system is falling over", and it's like, no, a socket got closed, it's fine. But: crash report. And the log rotation in error_logger — I think it's using disk_log — any sysadmin who looks at that log rotation has never seen anything like it before, and it just makes them mad.
The way log rotation works in error_logger is that there are N log files, and then there's an index file that — in decimal, not ASCII — holds the index of the active log file, and it just bumps through them. So you either have to look at the timestamp on the log file, or you have to look at a decimal byte inside a file, to figure out which one is active. It's nuts. No non-Erlang person is ever going to put up with this; they despise it. My 2013 talk has more complaints about error_logger, if you want to go back in time and see more complaints.

So what did lager try to bring to the table? One of the big problems we had at Basho was that error_logger would be the cause of node failures for no real reason — a big process crashed, and that really shouldn't take the node down. So lager's number one goal was: never, ever be the reason the node crashes. Everything else is secondary to that; we don't want to be the cause of failure. We wanted to make it look more like traditional application logs — we were shipping Riak to customers, and customers were saying "these logs are terrifying me, I don't know how to do anything with them" — so we wanted it to feel more like running Apache, or nginx, or postfix. We wanted to log things that were not giant tuples of doom, which is kind of what error_logger logs are; sure, there's a lot of good information in there, but if you're not an Erlang aficionado that's not an easy thing to work with. I wanted to fix the log file rotation, because I'm a sysadmin and it just made me really mad. I wanted to make it easy to use: it ships with a reasonable default configuration, I tried to keep the API as simple as I could, and I wanted it to basically do the right thing by default. I also wanted to add metadata to the logging process — effectively I wanted to be able to say "this log event is related to request ID 47" or "this request pertains to user 32", and tag the message with that kind of information as it flowed through the system, so we knew something about how the log message was generated. I also wanted to decouple formatting the message from storing the message, so that you could format it in one place, send that through, and at some later point write it out to disk, send it over the network, write it to the console, whatever. And I just wanted to make Erlang more palatable to ops people, so that they don't have to drink the Erlang kool-aid to run an Erlang service. I feel like that's a big deal — we all kind of have Stockholm syndrome about Erlang being a weird thing that doesn't look like a lot of other stuff, but if you want somebody to run your Erlang service, you really shouldn't also have to make them an Erlang fanboy, I guess. (A short sketch of the lager API follows below.)
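A hedged sketch of the lager API being described — request_id and user are illustrative names; per-process metadata goes through lager:md/1, and per-event metadata is the first argument to the log call. The lager_transform parse transform (discussed just below) is what makes the lager:info call work:

-module(lager_example).
-export([handle/2]).

-compile([{parse_transform, lager_transform}]).

handle(RequestId, User) ->
    lager:md([{request_id, RequestId}]),        %% tag this whole process
    lager:info([{user, User}], "handling request ~p for user ~p",
               [RequestId, User]).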
So, hindsight is 20/20: I made some mistakes. Number one — which I still don't think is a mistake, but parse transforms are a hated thing, I guess. Back in 2011 we didn't have all these fancy new OTP features; we didn't even have the ?FUNCTION_NAME macro, so you couldn't figure out what function you were logging from — it just wasn't available. That was why I originally added the parse transform, and then the parse transform was like, well, I can do all this other cool stuff, I can do all these neat hacks, and it just kept getting bigger. It does a lot of cool stuff, but it's still a parse transform, and people see the red warning in the OTP documentation that says "never use a parse transform" and go: why are you using a parse transform, I don't want to use a parse transform. And I never got around to adding the logging macros for a long time — I think Tristan finally did recently; there was a reason they didn't exist, I just thought they weren't as good, so I never bothered. I never got the OTP team on board — I feel like they always thought lager was kind of a terrible thing, and from some of the stuff they've said over the years I don't think they really understood how it worked or what it was trying to do. When Elixir came out, I missed the boat on Elixir — I talked to José briefly about it at, I think, the 2013 Erlang Factory, but I never really integrated it, and so they did their own Logger, which I believe has since been replaced by the new OTP logger. Basically, I never really supported anything that wasn't Erlang. And the Erlang community never standardized on a logging API that wasn't error_logger — we still had a whole bunch floating around; lager became probably the most prevalent one, but there was no standard way to log in Erlang, and mixing and matching libraries that supported different APIs is just a nightmare. I never really got structured logging in there — there was lots of metadata, you could kind of do it, but it never got first-class support. And I was probably too opinionated on some things — you've got to make some calls sometimes, and maybe I made a few that were a little more conservative than some people might have liked.

So what about the new logger? We have a new one in OTP 21. It's backwards compatible with our old friend error_logger, so if you're still using error_logger it's going to suck a lot less — that's good. It supports multiple formatters and handlers, and they're separate things now; we'll cover how the pipelines look in a minute. It supports log event metadata, which is really cool — they copied basically what lager does. It supports report messages, which are effectively structured reports: there's no format string involved, it's all just a piece of data saying "this thing happened". It supports lots of types of overload protection — the message is too big, there are too many log messages, the mailbox is too large; it turns on synchronous messaging once the asynchronous pipeline fills up, which is pretty good. It also does log event formatting in the calling process, which makes the callers pay for their own excessive logging, which is good. And finally it has filters, so you can control how and when messages make it through the pipeline to different endpoints. I can't overstate that this is a huge advancement — everybody on the BEAM is winning here, because we finally have an alternative to the only standard-library logging system we've ever known. Thank you, Ericsson, for finally doing something about this.
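A minimal sketch (not from the talk) of what logging with the OTP 21 logger API looks like; the module name and the user_id/what keys are illustrative. The macros come from kernel's logger.hrl and fill in file/line/mfa metadata automatically:

-module(logger_example).
-export([handle_request/2]).

-include_lib("kernel/include/logger.hrl").

handle_request(UserId, Req) ->
    %% classic format-string logging, with per-event metadata attached
    ?LOG_INFO("handling request ~p", [Req], #{user_id => UserId}),
    %% structured "report" logging: just data, no format string
    ?LOG_WARNING(#{what => slow_request, user_id => UserId, took_ms => 1234}),
    ok.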
So — the OTP logger really is a lot like mine, and this is going to get hard, because they're near-homophones, they sound the same. I'm going to try to call the OTP logger "the OTP logger", but I'm going to screw up, and hopefully the context will make it clear.

They both support the seven syslog levels — back in error_logger you really got two, or maybe three, log levels depending on how you configured it. As I said, the OTP logger has all of the overload protection that lager does, both at the event manager and at the caller. They both have per-process and per-event metadata: in both systems you can install metadata for a pid, saying "anything that comes from this pid, always include this metadata", and then at each event you can also say "here's some extra metadata". So you can do all kinds of interesting things — you could put a worker ID on a pid and then you always know it's worker ID seven, without having to repeat it at every call site, which is really useful — and you can steer the log message based on the metadata (in the OTP logger you can actually steer it based on the log message itself as well). There's a sketch of this below.

There are some differences, though. The OTP logger does the entire formatting at the call site; lager only does the format-string arguments, and then passes the log message, with all the metadata, the timestamp, a bunch of stuff, through the pipeline — so it has a richer structure in the message that it passes. For each handler in the OTP logger you do a separate format step, which means that if you have ten handlers, you're going to run formatting ten times; lager only does it once and then passes the result to each backend independently. Each handler in the OTP logger now runs as a separate process; lager used a gen_event for several backends, just like error_logger did. And the filters are a little bit different — they're not the same as how you could trace events in lager.
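Going back to the per-process metadata described above, a hedged sketch using the OTP logger calls (worker_id, request_id, and phase are illustrative names; lager's lager:md/1, shown earlier, plays the same role):

-module(metadata_example).
-export([init_worker/1, handle/2]).

-include_lib("kernel/include/logger.hrl").

init_worker(WorkerId) ->
    %% every event this process logs from now on carries worker_id
    logger:set_process_metadata(#{worker_id => WorkerId}).

handle(RequestId, Req) ->
    %% merge in more keys without clobbering what is already there
    logger:update_process_metadata(#{request_id => RequestId}),
    %% per-event metadata is merged on top of the process metadata
    ?LOG_INFO("handling ~p", [Req], #{phase => start}).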
So here is my chicken-scrawl of how I understand these different pipelines to work; we'll cover them, and hopefully you can read this reasonably well. We'll start with our old favorite, error_logger. In the calling process it literally just sends everything as an asynchronous message to the gen_event, which has some number of backends installed; each backend does the formatting and then writes it to wherever it's going. So you only have one message per event, but you have N formats per event, and the message size is unbounded — you're potentially sending a very large message and then doing unbounded formatting in the memory space of the gen_event handler. If that takes a long time, the mailbox grows, because everything else is backing up behind it. You can see how that would fail under load.

Then lager: the formatting is done in the calling process, once; then we send a single asynchronous or synchronous message — depending on the overload state — along with the metadata to the gen_event. Each gen_event backend does a metadata-formatting step, where it takes the main log message plus the metadata, formats that however it wants, and then writes it. So you get one message per event, one format per event, and a bounded message size — by default the lager formatter does not allow you to make a message larger than one kilobyte; that limit is baked in at compile time. The only O(n) operation left is the metadata formatting, which is relatively cheap — there's probably enough rope to hang yourself there if you want to, but it's not as easy.

And now we have the new OTP logger, which looks quite different, as you can see. We format in the calling process, but we do it a variable number of times, and then for each handler we send a synchronous or asynchronous message — again depending on the overload state — to the backend; the backend then simply writes to wherever it's going. So we have N formats per event, we have N messages per event, and we have an unbounded message size by default. You might see there's some concern here — a lot more O(n) than lager had. We'll see some of that in a little bit.

Structured logging. This is great — this is my favorite feature of the new OTP logger, by far. Basically you can log a map (or key/value list) of data; it's called a report. Each report has a default formatting function attached, so, unless somebody overrides it, that is how it gets turned into a textual piece of data — and you can override it, which is great. A huge improvement here is that all OTP messages are now specified as reports: they are not format/argument strings anymore. You don't have to parse a format string to figure out what kind of OTP error you're getting; you can now look at it as data and say "hey, that's a gen_server crash". Big, big improvement. Structured logging just gives you a lot more options: you can look at the data, slice it, dice it, transform it a lot more easily, because now it really is data and not some kind of format string you're intercepting.

Here's a quick example of a gen_server crashing. You see it's a map: the label is gen_server terminate, then the name of the process, the last message the process received, the state of the process (that's the one that might blow you up), the reason it crashed, and the client info — which I think is the client that was talking to the gen_server when it died; I actually don't remember exactly how that works. And then, crucially, we have metadata like the OTP domain, so we know this message originated from OTP, and the default report callback, which is gen_server's format_log. Then there's a bit I really don't know why it's there — it feels redundant, but it's something about error_logger; okay, backwards compatibility, fair enough, that's why it's there, you don't really have to pay attention to it.
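A hedged sketch of logging a report with a custom report callback, as described above; the report shape and the module name are made up for illustration:

-module(report_example).
-export([disk_full/2, format_disk_report/1]).

-include_lib("kernel/include/logger.hrl").

disk_full(Mount, UsedPct) ->
    Report = #{what => disk_full, mount => Mount, used_pct => UsedPct},
    %% report_cb tells formatters how to render this report as text;
    %% anything that wants the raw data can still take it as a map
    ?LOG_ERROR(Report, #{report_cb => fun ?MODULE:format_disk_report/1}).

format_disk_report(#{what := disk_full, mount := Mount, used_pct := Pct}) ->
    {"disk ~s is ~p% full", [Mount, Pct]}.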
So now let's quickly talk about filters. Filters really remind me of writing firewall rules for ipfw or pf. A filter can ignore a message, which in almost all cases just passes it along; it can drop the message, saying "I really don't want that"; or it can actually alter the message — transform it, do something to it, and return the new message. And you chain the filters together: it runs down the list of filters you've got, and if the event makes it to the end of the chain it gets passed on to the handler — unless every filter in the chain returned ignore, at which point a configuration variable (filter_default) decides whether ignore means drop or pass; I think by default it passes. There are actually several filter chains: you have a primary one, which is global to all handlers, and then each handler can have its own chain, and it runs the primary one first, I think, and then the per-handler filters — I'm sure I'll be corrected if I've made a mistake here. A filter is a fun plus an argument: you effectively pass in some state with it, but that state is static — the filter has no ability to update its own state. You could probably put an ETS table in there as the data to pass, but be aware that the filter might be running in multiple processes at the same time, so you might have a data race. Traditionally you would just have a fixed value there; you can probably do some weird stuff, but be careful. And there are several nice default filters included in the standard library, in a module called logger_filters (logger_filters.erl).
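A hedged sketch of the filter behaviors just described: one built-in filter from logger_filters, and one hand-written filter that either drops or alters an event. The filter ids, the noisy_mod module, and the tag are illustrative:

-module(filter_example).
-export([install/0]).

install() ->
    %% built-in filter: stop anything whose domain is under [otp, sasl]
    ok = logger:add_primary_filter(drop_otp_noise,
             {fun logger_filters:domain/2, {stop, sub, [otp, sasl]}}),
    %% hand-written filter: drop debug chatter from one module, tag the rest
    ok = logger:add_primary_filter(tag_or_drop,
             {fun(#{level := debug, meta := #{mfa := {noisy_mod, _, _}}}, _) ->
                      stop;                                  %% drop the event
                 (LogEvent = #{meta := Meta}, Tag) ->
                      LogEvent#{meta := Meta#{tagged_by => Tag}}  %% alter it
              end, my_tag}).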
So, unfortunately, as we've already mentioned, backwards compatibility is a thing. By default the new logger looks a lot like error_logger: it does not have oversized-message protection by default, so all my complaints about scaring users are still there. It has two bundled handlers: the standard handler, which is basically the console handler — although you can actually configure the standard handler to write a single log file that it will recreate if you delete it, which is good, because you can do external rotation on it and it should mostly work — and then my old friend disk_log is still kicking, and I've already complained about that. I would say: if you are going to use the new logger, please read the documentation and please configure it not to be in legacy mode, because you get better formatting and better behavior if you don't care about being compatible with error_logger. Don't just blindly turn this thing on; look at how to use it and configure it correctly. Backwards compatibility is a thing, but as long as you're aware of it, you can disable it — kind of like setting nocompatible in vim, so you can use an editor that's newer than 1974.

So what's next for lager? Basically, this new OTP logger has largely removed the reason for lager to exist, which is good. However, there are still a lot of lager users out there, so we have to figure out how to get them off it. And lager has eight years of development ahead of the new OTP logger, so it has all kinds of features — if you look at the lager README it's extremely long now, because every time we added a feature we added it to the README, so it's probably 20 pages long. Lager is ahead on features; there are some really nice ones that definitely don't exist yet in the OTP logger. So today I'm announcing — it's not out yet, but lager 4.0 will be coming out when I get enough time to finish it. I have a branch; it actually works, but I'm still working on it. We're going to deprecate lager's event core in favor of the OTP logger. The infamous parse transform will have a mode to rewrite all of lager's calls into the OTP logger call format, and it will rewrite the metadata and put it in the right format — it all works, I already have this working. I also have a report callback for formatting the OTP structured logging in the lager style, so it rewrites the big scary messages into slightly less scary messages. I have a bridge, so if you have an existing lager formatter you can use it with the OTP logger with a little shim. And I'll probably try to provide some tooling around migrating, and maybe sanity checking — it's a little tricky, but I'd like to make it so there's a smooth, easy transition. If you have any ideas, please help me, because it's kind of complicated to change from one logging library to another.

So this is kind of the cheat sheet, I guess: if you have an existing lager project and you want to switch to the OTP logger, these are really the handful of things you need to do if you want it to look exactly the same. You effectively change your default formatter to — and this is not going to be easy to say, obviously — lager's "logger formatter", sorry about the name. Then, if you want to override the report callback so the crash logs are formatted to be slightly more palatable, there is a report callback in that formatter which will fix all of the OTP reports to look the way lager prints them. Then, in the logger stanza of your sys.config, you tell logger to use lager — I know it sounds ridiculous, but what am I going to do? And then in your rebar.config — this is for rebar3; don't ask me about erlang.mk or mix, I have no idea, I don't even know about rebar 2 anymore, just use rebar3 — you put in the overrides shown on the slide, and all of your dependencies will also be recompiled to use the new OTP logger APIs: if they used lager, they will now use logger, just like the flag says, it will just upgrade. I've done this and it works great. (A sys.config sketch of the non-legacy configuration follows below.)

So why are we not keeping this thing alive? Well, we have a new one — it's fine. I don't want to split the community, I don't want to duplicate effort; I'd rather put missing features into OTP than keep maintaining my own thing. Everybody using the BEAM can take advantage of the new logger; if you're using my logger it's not so easy, it's kind of a weaker story. Also, it's been eight, nine years — I'm fine with being done. Somebody else, like Ericsson, can have fun maintaining the logging library again; they've maintained error_logger all this time, so this should be fun.

So, community involvement. If you have stuff using lager — if you've written your own backend or formatter — the formatter you can just use with the bridge, but the backend, because the pipeline is different and you get a different thing, is hard to wrap, so you should probably port your stuff. If you maintain a library that's using lager, please consider porting it to use the new OTP logger, but be aware that you will break anybody on OTP 20 or earlier. If you're using lager, try my branch; see what works and what doesn't. And if you try the new OTP logger, please let Ericsson know if it doesn't work or doesn't address your concerns — I've talked to Siri and they seem very interested in knowing what users expect; they have a very enterprise, Ericsson view of the world, and I think they're open to other perspectives. Also, if you like to do blogs or podcasts or whatever, talk about this stuff, write documentation — the documentation is a little sparse right now and there aren't a lot of good examples. Fred wrote a good blog post about doing structured logging, which I think is really good. There is documentation, it's just a little thin — filters especially are not well documented right now — so pitch in if you can. It's going to be tricky to upgrade, but I think we should shoot for OTP 23: that'll be three major releases with the new OTP logger available, so by next summer I really think we should expect our libraries to be using the OTP logger APIs.
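Related to the "don't just run it in legacy mode" advice and the sys.config stanza mentioned above, a hedged sys.config sketch for OTP 21 with one file handler and size-limited, single-line formatting; the path and the limits are illustrative, not recommendations from the talk:

[{kernel,
  [{logger_level, info},
   {logger,
    [{handler, default, logger_std_h,
      #{config => #{type => {file, "log/erlang.log"}},
        formatter => {logger_formatter,
                      #{single_line => true,
                        max_size => 8192,     %% cap the final message size
                        depth => 50,          %% limit term nesting
                        chars_limit => 4096}}}}]}]}].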
After OTP 23 I would like to cut lager 5.0, which would not even have an event core — it would contain whatever bits we haven't managed to get into OTP by then, and that's all it would really be. Just be aware that Erlang users like to stay on old versions, so they might get mad if you break their code by changing the logging API. Obviously they have to come along at some point, but they will complain if you're not careful.

So, I wanted to wrap up by doing some benchmarking. This might be a little controversial — I've tried to be fair, my benchmarks may not be completely accurate, benchmarking is really hard — but I had some old benchmarks lying around from 2011 that I dusted off to see what things look like now. The two benchmarks: cascading_failures is an application that Fred wrote when he was at AdGear, emulating the failure mode that was killing nodes for them. Effectively, a process opens an ETS table and then spawns a whole bunch of children that all reference that ETS table; after a little while it kills the ETS table, all the children try to read from it, they all blow up, the entire supervisor tree crashes, it all gets restarted, and it happens over and over again. Crucially, it happens slowly enough that the supervisor never hits the max restart intensity, so it just pummels the node until it runs out of memory and crashes. And logbench is something I wrote that has different message sizes and different log backends, and just sends a whole bunch of one kind of message through to a backend from a variable number of workers.

For cascading_failures I just wanted to look at the memory usage over five minutes of this endless crashing. This test did kill error_logger back in 2011; I don't think it dies now, because of some of the depth limiting that's been added, but to be honest I didn't even bother re-benchmarking it. So this is lager 4.0 alpha-whatever. The units are going to change between graphs because I couldn't figure out how to configure RRDtool, but basically you see that we stay below about 120 megabytes over the five minutes, and you can see each death spiral begin, over and over again. One thing to note is that lager in this mode is writing to three different log files and to the console — it has four handlers installed. The next graph is console-only, and you see we use very slightly less memory, but it looks very similar; the graph does its own thing, but look at the axis, not the height.

Then, if we look at the default config for OTP 21.2, it's not quite as pretty a story. It spikes up much higher — to around 600 megabytes — and then I think it realizes it's in some kind of terrible mode and starts trying to control things, but we still get pretty high spikes comparatively: we were at 120 megabytes with lager, and here we spike far above that very frequently. Then I tried turning on max_size and chars_limit, and this looks great — except we get almost no logs. I don't know why; I think it goes into drop mode, something is too slow or using too much memory, and we get maybe a tenth of the logs we would expect. This is with the console as well. Then I tried to write my own formatter, and I don't know what happened: it said it had flushed 5,000 log events, the default handler switched to drop mode, and then all my memory went away, all my CPU went away, and my computer hard-locked.
So I don't know what's going on there; it's probably something I need to look into more, but it's a little hard when you can't actually debug it effectively. This might be my fault, but be aware that strange things can happen with this test if you write your own formatter — I don't know why, we'll look into that.

So, on to logbench. Logbench just sends a bunch of messages of different sizes to a backend and sees what happens. This is with one log worker, just for completeness, and we have three different message sizes: "simple", which is a very small message; "small", which is a bit bigger; and "large". The number here is effectively messages cleared through the pipeline per second — not necessarily messages logged, because these different loggers do different things under overload, some drop and some don't; it's really "at what point does the pipeline clear". Think of it as a pipe: how many can we flow through the pipe per second? Some of them may go into the sewer because it's overflowing, but the point is how fast we can clear them out of the system. So we see that lager itself is pretty fast here with one worker. The configurations are: error_logger file, error_logger console, lager console, lager file, OTP logger console, OTP logger file; then a weird hybrid that uses the OTP logger pipeline with the lager formatter — the one that hard-limits to one kilobyte — one for console and one for file; and then a configuration where I've turned on the built-in OTP logger message size limits. And you see that under "large" — and they're not all that large — error_logger and the OTP logger in the default configuration grind almost to a halt; not quite, they do 22 messages a second, which is pretty slow, but they get pretty overloaded. You'll see that my weird hybrid is actually the best under load here, and the regular logger file handler does all kinds of tricks to be asynchronous and clever, so it's actually very fast.

When we have four processes logging as fast as they can, the OTP logger gets a big speed bump because it's doing the formatting in the caller. error_logger is still slogging along, because it still has that broken pipeline, but the other things have caught up. Lager is not that much slower with four workers — actually faster; everything's faster, basically. But for the large messages we've gone from 24-ish to 90-ish per second for the OTP logger; with the built-in size limiter we've gone to about 2,000 per backend; lager is at about 6,500; and my weird hybrid is kicking ass over here for some reason — it's completely crushing everybody else. The trend continues at 100 log workers: lager is basically at about the same level the whole time, it doesn't really change a huge amount, and the others get faster again. But look at the large messages: you only get about 115 through for the OTP logger in its default configuration; with the size limiting installed it's only about 2,600; versus roughly ten times faster than that with my size limiter, which is interesting.
Then I tried to send really, really big messages — "huge" and "giant"; we'll cover how big those are in a little bit. I could only benchmark these two configurations; the others do not actually survive — it takes so long that I gave up. I had to run these benchmarks a lot and they took hours, and I only had so many hours. So we're clearing about 5,000 messages a second with the hybrid, and lager is doing about 1,500 to 2,000; that's with one worker. When we go to four, weird things start to happen: you see the red getting tall, which means we're shedding — we're not writing every log out anymore, we're starting to shed load. And at a hundred workers we go into panic mode and just start throwing stuff out the window, which is fine — that's good for the system. You can't necessarily expect to log everything, so you hit the overload condition and very few of these logs actually get written, but the point is that they're getting flushed out of the system: they're not eating your RAM, they're not eating your CPU, the logging is not consuming all the resources on the BEAM, which in my view is important.

Back to the large messages: I graphed the memory usage of the OTP logger in console mode during the test, and you see that it goes up to about 900 megabytes and just sits there for the duration of the test; similarly for the file handler. This is the default configuration, so be aware that you may use a lot of memory with the default OTP 21 configuration. The others were much lower — I didn't include them because they weren't super interesting; these were the outliers. Just to cover the sizes: the simple messages are 5 bytes, the small are 56 bytes, the large are about 96 kilobytes, the huge are 4 megabytes, and the giant are 16 megabytes. And as I said, logbench is measuring the flow of messages through the system, not necessarily that they made it to the endpoint — overload protection may have kicked in. I could have turned off overload protection, but I was interested in what these things do by default.

So one thing we learned here is that the built-in OTP logger message size limiter is really quite slow. I've talked to the OTP team about this; they're aware of it and they're working on it — 21.2.6 is faster, and there's a branch that's even faster. It's still not as fast as the thing I hybridized together, that's still the winner, but they're working on it, they've seen this benchmark, they're aware. And without message size limiting you're going to get yourself into trouble — it doesn't take much for things to get into a bad feedback loop. The good thing is that the OTP logger does ship with overload protection by default, so it will start to shed load at some point, but you may only get 20 messages a second out of the thing, so just be aware that there are some speed concerns. And as I said, the weird hybrid is the fastest thing — which makes sense, I guess, because my size limiting is much faster, because I spent a lot of time on it — but that's an interesting takeaway.

So here's the wrap-up. The OTP logger is still pretty new; it still needs work, and it's on users to give feedback to Ericsson, to say "this is something I need" or "this isn't working well".
It's also missing features compared to lager, so if you switch you may be missing things — just be aware. I don't categorize the default configuration as safe; I know it's backwards compatible, but I urge you to look at that config and change it, because it will bite you at some point. The other thing is that if you have a lot of handlers installed in the OTP logger, they all incur a cost, whereas you can have a lot more handlers installed in lager and they're relatively cheap. So if you're currently using lager, be cautious about switching just yet: take your time and characterize what the performance is going to be like. If you're using the OTP logger already, just check your configuration, beat on it a little, and see what it does. If you're still using error_logger — anything is better, please upgrade, please. And my real takeaway from this talk is: let's try to get everybody working on the OTP logger, let's put all the effort behind that thing, let's make it better and stop having so many out-of-tree log libraries. Let's just make the standard library logger better, and everybody wins. That's my talk — I don't know if we have time for questions.

Yeah, four minutes for questions. All right, yes: the question was about systemd and Kubernetes — if you're not logging to a discrete device like the console or a file, can you maybe just avoid the bottleneck? And the answer is yes. I don't like systemd and I don't think it's good, but if you want to do it that way, you can push your logs out over UDP, which is not a blocking interface, and you can do that all day long. In fact, what you can do is use a filter: the filter never actually forwards to a handler, it just sends UDP itself — you pass the socket in as a filter argument and you never leave the calling process, so the calling process is just sending UDP. Of course, if your UDP stuff breaks or does something weird you might have trouble, and you don't always have that luxury; at some point there's always an unbounded queue somewhere, or a queue that's going to fill up, and it's going to get you someday, whether it's DNS, or UDP, or the switch getting clogged with log packets. But sure — don't write to disk if you don't have to; if you have a bit more infrastructure, ship it off somewhere else, absolutely.
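A hedged sketch of that "send it over UDP from a filter" idea: the filter formats the event, fires it over UDP from the calling process, and returns stop so it never reaches a handler. The module name, host, and port are illustrative, and error handling is omitted:

-module(udp_log_filter).
-export([install/2, filter/2]).

install(Host, Port) ->
    {ok, Socket} = gen_udp:open(0),
    logger:add_primary_filter(udp_ship,
        {fun ?MODULE:filter/2, {Socket, Host, Port}}).

filter(LogEvent, {Socket, Host, Port}) ->
    Data = logger_formatter:format(LogEvent, #{single_line => true}),
    ok = gen_udp:send(Socket, Host, Port, Data),
    stop.   %% never forward to any handler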
Sure — the question was whether the filters and all the formatting for the handlers are done in the calling process, and the handoff to the handler is something you implement. So, just for the record, I have a mistake in my slides: there is not always a message passed to the handler — I think that's part of the overload protection mechanism. You can write handlers that do not use that behavior, and then they will not make a message send at all, which is good. No, I ran out of time — and also it came out after I was done, and I was sick of benchmarking. The question was: have I run any of this on the OTP 22 release candidate? I will probably put all this together in a blog post and try to be a little more scientific about some of it; I ran out of time, but I will benchmark OTP 22 as well. Well, I've got the red card. The question is: does the Elixir Logger ship with the OTP logger in legacy mode? I don't know — I would encourage you to find out, because... yeah. Oh, they do? So the answer, sir, is undecided, it sounds like. Okay, thank you. [Applause]
Info
Channel: Code Sync
Views: 1,397
Rating: 4.8518519 out of 5
Keywords: Erlang, Logging, Code BEAM SF, Andrew Thompson
Id: zqpmSav8rBY
Length: 45min 12sec (2712 seconds)
Published: Tue Apr 30 2019