Why I love Logstash and you should too

Video Statistics and Information

Captions
Welcome, my name is João. I was born, live, and work in Portugal. I'm a software engineer at Elastic; I joined in September of last year and I work on the Logstash team. Most of my career has been around developing with Ruby, and I've always had a healthy interest in all things event-driven architectures.

So why are we here? We're here for this guy: this is Logstash, and this is our mascot. It has a name, because it's a log and it has a mustache, so it's quite clear. Quick show of hands: who knows about Logstash? Good. And of those, who has it in production? Cool, awesome. This talk is quite introductory, so it explains what Logstash is, what it is for, the latest developments, what is new in the current version that was released about a month ago, and also the next steps: where the future of Logstash lies.

So what is Logstash? Well, it's an event processing pipeline that is plugin based, written in Ruby, and open source. No, that's it, that's my talk... no, no. So what does "plugin-based event processing pipeline" mean? It's something that looks like this: a three-stage process that allows you, in each phase, to for example get something from a log file, anonymize that data or some part of it, and then send it to Elasticsearch, or send a notification to PagerDuty. The combinations here can be, well, I'm going to say endless, because there are a lot of plugins you can use.

To give you a bit more of an example of what Logstash can do and where it is used: a very common use case is a web application that, when it's starting out, has just one node, one machine with a web server and a database, and you want to understand what's happening in your infrastructure. You want to look at the logs of those applications and get a big picture of what's happening. This is where Logstash comes into place, by allowing you to get data from the log files of those applications, plus information about the operating system, and send it somewhere else; very commonly that is Elasticsearch, which you can then analyze and visualize using Kibana, a visualization tool that reads from Elasticsearch. As you scale and start breaking your components out onto different machines, you do the same thing: you put a Logstash next to each component, get the data out, and send it to something like Elasticsearch.

Something I also did a lot in my previous job at a Portuguese telecom: you have a lot of very old legacy systems, the people who developed them are no longer there, and you still want to get data from one system and send it to another. The problem is that one system might speak one language over one protocol and the other might not, which creates an awkward situation where each component thinks the other one is rude. So this is a problem of multiple protocols, multiple languages, multiple ways of doing things. You can have multiple ways of representing an event in your log files: a single line or multiple lines (think of a stack trace), as plain text or JSON or XML, and you can send or store that information using log4j, TCP, UDP, and so on. There are a lot of ways to communicate data, and not all applications are ready for all the
combinations of that. So, going back to the example, that's where Logstash comes in: it lets you get data from one side, do some transformations, and then send it to the other side using something that side can speak, without worrying about languages. As a more concrete example, think of an application that can send data through a TCP socket in the form of CSV, while the other side only has a web API that accepts JSON. That's what Logstash can do for you, and that makes everyone happy.

So how does Logstash do this? Logstash is an open source project, you can get it at that address, and as I said before, it's written in Ruby. Nowadays it's more geared towards JRuby, and that has a reason, actually multiple reasons. The first one is that it leverages the JVM and all its goodies: you get a very interesting garbage collector, a lot of mature libraries for everything you will ever need, and it opens up the ability to have a JRuby code base that still communicates in the same JVM with Scala, Clojure, even Java, obviously. Also, the debugging and profiling tools are quite awesome; I love VisualVM, for example. JRuby is also a very active project and a very active implementation of the Ruby language. The next version coming out soon is 9.0.0.0; don't ask me why they decided on that number. The last version is 1.7.20, and there's actually a reason for it: the Ruby language itself is on 2.2, so they wanted to steer away from a similar number, so people wouldn't ask whether JRuby is behind or ahead of the reference implementation of the language. Just to get out of that version comparison, they decided to use 9.0.0.0.

Also, if you're interested in this kind of stuff, there is a project started at Oracle Labs, as an internship, to reimplement the Ruby language on top of Graal (I don't know exactly how it's pronounced), which is a dynamic compiler for the JVM. It gives you a lot of interesting possibilities because it exposes the compiler through a Java API, so your language implementation can actually influence compilation. Combined with Truffle, an abstract syntax tree interpreter, it allows you to have simpler code going from your source to actual compilation, to influence the compiler at runtime, and to keep tightening the abstract syntax tree as execution goes along, because Ruby is a very dynamic language. It's a very interesting project; if you want to know more about it there's a URL that explains the approach and how to test it, and you can just download that version of Ruby and you're done.

So, going back to Logstash. The core concepts you have to be aware of are, obviously, events, and events are generated by input plugins. As an example, think of the file input plugin, which reads from a file and generates an event for each line, tagging it with some metadata, like the current timestamp for when that particular line was seen. Then you have filters. Filters are not just for dropping events and deciding which events you want to keep, but also for transforming those events, and even going from one-to-many or many-to-one mappings. To give you two examples: imagine
that you have a payload that is a CSV and you want to treat each element of that CSV as an individual event. You can use the split filter plugin, which breaks that message down on a terminator and generates multiple events from just one. On the other hand, going back to the stack trace: a log file will generate one event per line even though the stack trace spans multiple lines. You can use a filter called multiline and say: I know how a stack trace begins, and if I don't find that initial pattern, the line belongs to the previous event. That way you collapse multiple lines into just one and can deal with that block of text as the stack trace. Does that make sense?

So how do you do this? Logstash is driven by a configuration file, where you describe how your pipeline should work for your requirements. You describe the pipeline and then start Logstash with that configuration file; that's it. For now, every time you need to change the configuration file you have to stop and start Logstash, but that will change in the future. The configuration file looks something like this: there is a section for the inputs, a section for the filters, and a section for the outputs, and in each section you specify which plugins you want to use and how to configure them (a sketch of such a file is shown below). For example, you can read anything written to a certain directory that ends in .log, and also get data from a UDP socket. Simple. Then you take each event you receive, whether from the file or the UDP socket, and generate a checksum for the message, for some reason you might need it, based on the payload and the timestamp Logstash tagged it with when it came in. Then you send it somewhere else: in this case to Elasticsearch, and you also print the event back to the command line. Easy.
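A minimal configuration along those lines might look like the sketch below. This is a rough illustration recalled from the 1.x syntax, not taken from the talk's slides; in particular the checksum filter's option names are assumptions, so check the plugin documentation before relying on them:

```
input {
  file { path => "/var/log/myapp/*.log" }   # one event per line appended to matching files
  udp  { port => 5000 }                     # each UDP datagram becomes an event
}

filter {
  # fingerprint each event from selected fields (option names are assumptions)
  checksum {
    algorithm => "sha256"
    keys      => ["message", "@timestamp"]
  }
}

output {
  elasticsearch { host => "localhost" }     # index the events for analysis in Kibana
  stdout        { codec => rubydebug }      # also echo each event back to the console
}
```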
So there are a lot of plugins. For inputs, the most common ones are file, getting data from RabbitMQ, getting data from Redis, SNMP traps (very useful in network environments), syslog, and plain TCP and UDP sockets, since a lot of systems only know how to communicate that way; and then more esoteric things like Twitter streams, where each tweet becomes an event you can manipulate. I'll say more about filters in a moment, but the idea is that, as I said, they let you manipulate the events you receive from the inputs. Outputs are, obviously, kind of the opposite of inputs, and for almost every input plugin you will most likely find a similar output plugin: Logstash might read from a file and also write to a file at the end.

To explain a bit more about the job of the filters, think about MySQL. How many of you use MySQL? Interesting, okay. MySQL has a log file that registers the queries that are slow, the slow log, which makes sense, but for each occurrence it writes a block of text: at the beginning you have the time, then some information about the query, a timestamp, and the actual query. The problem is that if you're reading from that file, you get each line as an individual event, and you lose the ability to correlate, for example, the timestamp with the actual query; you need all of it in a single block. Since that's not what you get, you can, as I said before, use the multiline filter. You tell it: here is a pattern, a line that begins with the time; if a line does not match that pattern, it belongs to the previous event. So the filter looks for the pattern, and every line where it doesn't find it gets appended to the current event, so the whole block is compacted into a single event, and that's what you get. It also adds a small tag saying this is a multiline event, something generated by the multiline filter.

The problem is that this is still just a block of text that you have to understand and break down, so you need to do something after the multiline step, and for that there is a plugin called grok. Grok is just a way of breaking a block of text down into meaningful data. You could obviously do this with a regular expression, but the problem with regular expressions is, well, this. What grok gives you is a huge list of patterns that encapsulate those regular expressions under names, and then you describe the block of text using the patterns instead of the raw regular expressions. There are a lot of patterns, from something as simple as an IP or a host, to composed grok patterns like IPORHOST, which says: if I find either an IP or a host, that's what I want. So to describe this log line you can do something like this (a sketch appears below): I have a fixed block of text that I don't want to capture, then something I want to put in a field called time, and a greedy-data pattern that runs from there until it finds the next fixed piece, so here it stops at the newline. Then you do the same for the query time: a fixed block, then I capture a number, which is a grok pattern, assign it to a field called query_time, and also convert it to a float. Why do I want that? Imagine you're sending this data to Elasticsearch or Graphite or something like that: you want it as a number, not a string. Then you do the same for the address, the lock time, the rows sent and so on, describe the timestamp and the final greedy data, and you take the expression you built and add a grok filter after the multiline filter, because filters are evaluated in sequence: first I collapse all the lines into one event, then I match the grok pattern against the message field, which is usually the payload. This way you get everything broken down: not just the message payload, but all the individual elements, which you can then use in the systems further ahead.
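To make the slow-log example concrete, here is a hedged sketch of the multiline-plus-grok chain described above. The exact slow-log layout varies between MySQL versions, so the pattern and field names below are illustrative assumptions rather than a drop-in configuration:

```
filter {
  # 1. Any line that does not start with the "# Time:" header belongs to the
  #    previous event, so a whole slow-log entry collapses into one event.
  multiline {
    pattern => "^# Time:"
    negate  => true
    what    => "previous"
  }

  # 2. Break the collapsed block into named fields, converting numbers as we go.
  grok {
    match => {
      "message" => "# Time: %{GREEDYDATA:time}\n# User@Host: %{GREEDYDATA:user}\n# Query_time: %{NUMBER:query_time:float} +Lock_time: %{NUMBER:lock_time:float} +Rows_sent: %{NUMBER:rows_sent:int} +Rows_examined: %{NUMBER:rows_examined:int}\n%{GREEDYDATA:query}"
    }
  }
}
```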
As I said, there are a lot of plugins; right now there are this many: 53 inputs, 47 filters, 64 outputs, and the list keeps growing and growing. One of the reasons for that is the new release of Logstash that happened a month ago, 1.5, and what this release brought into play was the separation between the core of Logstash and the plugins. Before, in 1.4, everything was in the same git repository: github.com/elastic/logstash had everything in it, both the core and all the plugins. The problem with that is that if you found a bug in the file input plugin and you wanted to send that fix to the community, you had to release a new version of Logstash just for that, or wait for a release cycle that could be one month, or two, or three. So 1.5, as I said, brought the separation between the core and the plugins, and now, if you know about Ruby, each plugin is a Ruby gem, which is kind of like a jar, I guess, for the Java world. That way you can have separate release cycles: if you find a bug in the Elasticsearch output, you can just fix it and release a new version of that plugin. And there is a tool inside Logstash that allows you to manage the plugins in your installation: you can install, uninstall, update, and so on. You can get the plugins from rubygems.org, which is the central, public repository for all the gems that exist; you can also install a .gem file locally, or even git clone a plugin repository and say "I want to use the plugin that is in this directory", and then you're done. This also allows us to git clone the repository of a single plugin and run its tests individually; before, you had to download the whole of Logstash and execute that plugin's tests in the context of the whole software. All the plugins now live under one organization, one user, on GitHub, logstash-plugins, and you can just go there and search whether there's a plugin suitable for you. Actually, the Kafka input and output were released right at the time of 1.5, so if you have Kafka in your architecture you can now use it.

So what does writing a plugin look like? Creating a plugin has been made much easier now. To understand what a plugin is and how one is developed, we have three repositories under that GitHub organization: an input example, an output example and a filter example. Those are just skeletons with a very simple example that you can clone, rename, and change to do what you need, and then you have a plugin. What does the API look like for each stage of the pipeline? For an input plugin it's quite simple: you inherit from a class, obviously, this is all Ruby; you declare which configuration parameters your plugin needs, for example a parameter called interval that has to be a number and defaults to five; you have a method for doing some bootstrapping, some setup; and then you have the main method, run, which you don't call yourself, it is called for you. Inside that method you do whatever eventually gives you some block of data, create a Logstash event with it, and push it onto a queue, and it will be shipped off to the filter stage of the pipeline; you don't need to think about anything else. For the TCP input plugin, for example, you just open the socket, and when you get a new connection you read data off it, generate a new Logstash event, and ship it off. Filters are even simpler: you have a method called filter, you get one event, you do something with it, and if you decide that your filter matched that event you call filter_matched and you're done. Outputs are simpler still: you get a method called receive and you do something with the event; for an HTTP output, you would just take that event and execute an HTTP POST with its payload. It's quite simple (a sketch of this API is shown below).
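As a rough sketch of the plugin API described above, a 1.5-era input plugin looks roughly like this in Ruby. It is modeled on the example skeleton repositories from memory, so the class and option names here are assumptions rather than copied code:

```ruby
# encoding: utf-8
require "logstash/inputs/base"
require "logstash/namespace"

# A toy input that emits one event every `interval` seconds.
class LogStash::Inputs::Example < LogStash::Inputs::Base
  config_name "example"

  # Declare a configuration parameter: must be a number, defaults to 5.
  config :interval, :validate => :number, :default => 5

  def register
    # one-time setup / bootstrapping goes here
  end

  # Called by Logstash; push events onto `queue` to hand them to the filter stage.
  def run(queue)
    loop do
      event = LogStash::Event.new("message" => "hello from the example input")
      decorate(event)        # apply any add_field / tags the user configured
      queue << event
      sleep @interval
    end
  end
end
```

A filter plugin, by contrast, inherits from LogStash::Filters::Base and implements filter(event), calling filter_matched(event) once it has processed the event, and an output inherits from LogStash::Outputs::Base and implements receive(event).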
Once you get that and you have developed your plugin, you just have to build the gem using this command (you always have that file there as well), and it outputs a .gem file. Then you go to your Logstash installation and run the plugin install command on that gem. This bin/plugin tool is actually something we had to develop for 1.5, to be able to manage the plugins in your installation, and it has the basic operations you need. You can install a plugin: if you don't specify a local file, a local gem, it will get it from rubygems.org, install it, verify what it needs, download dependencies and so on. You can uninstall, obviously. You can list the plugins in your installation, and if you specify just part of a name it will match against what you have, in this case the TCP input and output. And you can update all the plugins you have: it will go to rubygems, check whether there are newer versions, and apply them. The last thing I mentioned was running tests: if you just want to run the tests for one plugin, you clone that repository, do a bundle install, which installs all the dependencies for that plugin, and then run bundle exec rspec, which runs all the specifications for that plugin and tells you whether they passed or not (the sketch below shows roughly what this workflow looks like).
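Put together, that workflow looks roughly like the commands below. The plugin and file names are placeholders, and on 1.5 the tool lives at bin/plugin (it was renamed bin/logstash-plugin in later releases):

```
# build the gem from a plugin checkout (gemspec name is illustrative)
gem build logstash-filter-example.gemspec

# manage plugins from inside a Logstash installation
bin/plugin install /path/to/logstash-filter-example-0.1.0.gem   # install a local .gem
bin/plugin install logstash-input-kafka                         # fetch from rubygems.org
bin/plugin list tcp                                              # list installed plugins matching "tcp"
bin/plugin uninstall logstash-filter-example
bin/plugin update                                                # update all installed plugins

# run a single plugin's test suite from its own repository
bundle install
bundle exec rspec
```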
The good and very nice things about this plugin system: it leverages rubygems.org, so when you publish a new plugin everyone can access it immediately, because it's out there; it also leverages Bundler, which is the Ruby tool for dependency management, so you don't have to write any of that yourself, you just say "install this plugin", the plugin declares its dependencies, and Bundler pulls in everything it needs; and it's the Ruby way of doing things. The bad things: we found some difficulties doing this, because depending on the situation in which you're running Logstash you need different sets of gems, different sets of dependencies. With a clean checkout of the Logstash repository you don't have anything, just the code for the core, so you need to download all the dependencies and JRuby itself; if you're unpacking a Logstash release, you already have all of those and don't need to download them; if you're running tests you need the development dependencies, and if you're just running in production you don't. All these different sets made our lives hard with Bundler, because it's not that used to dealing with so many different gem sets, and the fix in several situations was simply to change the way Bundler behaved. The other problem was Bundler itself: the tool was initially written to be used from the command line, you do bundle install and it downloads everything, bundle update and it updates your dependencies, and although it offers an API so you can call it from your Ruby code, it's not really meant for that. For example, once you do a bundle install with the development dependencies, it remembers that it did so; if afterwards you only want the runtime dependencies, it still remembers that it installed the development ones, and you couldn't reset that. There is actually a method there called reset, but it didn't reset, so we had to patch the way Bundler does that reset so that it actually resets.

Another problem we had is that a lot of plugins now use jars, and obviously when you want to manage jars you want Maven or some other tool to take care of the jar dependencies, so that when you pull in a single jar everything else is taken care of for you. JRuby has a gem that will call Maven and the necessary tooling to handle those dependencies for you, but the version that shipped with JRuby was very, very bad, so for our plugins that use jars we ended up just downloading the jars, including them in the gem, and pushing that to RubyGems. This is not an ideal solution: it makes the gems a lot bigger and it makes us take care of the jar dependencies ourselves; we will see whether we can change this in the future. Other problems: it makes things harder to test. The plugin tool and Bundler are very disk intensive, they download gems, unpack them, and manipulate files to say "you now have this set of gems", so it's very hard to do unit testing on that. The solution was to create acceptance tests that simply assume that if we install something, then uninstall it, and then list the plugins, it will no longer be there, and we assume everything underneath works. So that was Logstash 1.5.

For the next version of Logstash, the things we are considering and taking care of are, first, resilience. Why am I talking about resilience? If you think about the pipeline I mentioned, you have inputs, filters and outputs, and the way this is implemented is that between each of the stages there is a queue. As has been mentioned already in some of the talks, queues are not meant to be unbounded; unbounded queues are a bad thing, and that is actually something Logstash does not do: the queues are very small, they can only hold 20 events each, 20 here and 20 here. It's just a matter of having a little breathing room in the pipeline, so that you don't block a stage just because one plugin is taking a bit more time; you can accumulate a few events if one of the stages backs up. The problem is that these queues are, for now, in memory, so if something comes from the outside into an input, the input pushes it onto the queue to be read by the filters, and there's a crash, you lose that data. That's a bad thing. So what we're implementing for the next version is changing these queues from an object in memory to a disk-based queue, done through memory-mapped files. There is already a pull request with a lot of the code for this, and it should be released in the next version. That way, when you get a crash and restart Logstash, the event is still there, as if nothing had happened.

The other problem with Logstash, something all systems have to be aware of, is how to deal with failure. Imagine you're sending an event out to some system and the system says: that's an invalid event, it's malformed, or I can't handle it right now, I just don't want it. Then you normally have two options: you can retry, because it might be something transient and trying again might work, or you can discard the event. But it would be nice to have a third option, which would be to call it a lost cause: this event is not going to be
handled, so I'm just going to re-inject it back into the pipeline, and that way I can, for example, log it to a file, saying: this was a lost event I couldn't handle, but it's not gone, I have it in the file, and later you can take that file, pick up those events, do some other transformation and re-inject them back into your systems. This is what is called dead letter queueing.

Another interesting thing is clustering. As I said before, everything you do with Logstash right now goes through the configuration file, and if you have to change that configuration you have to stop Logstash. So on the path towards a clustered Logstash we have to put a lot of tooling and features in place, and one of them is being able to configure Logstash through an API. If you think about Elasticsearch, the other project from Elastic, practically everything can be configured through an API, and that is also the aim for Logstash: being able to tell Logstash "here is a new configuration, reload yourself", "go fetch the configuration from this place", change parts of the configuration at runtime, and eventually even coordinate a specific pipeline across multiple nodes. If you want to know more about this, there is an issue that exists mostly for discussing this kind of problem and for following how the development of this feature is going.

And obviously, introspection. In the talk before, Greg was saying that if you have queues, you might want to know, or rather you need to know, how those queues are behaving. Introspection is something Logstash is not very good at for now; it's kind of a black box. If you have the API in place, you can also use it to understand how full those queues between the stages of the pipeline are, how many events each plugin is processing, and what the latency is between sending something to a plugin and getting it back. All of this obviously has a lot of implementation problems: when you add introspection, when you add debugging and things like that, it slows the system down, so it has to be managed in a way that doesn't get in the way of performance too much. This is likewise being discussed in an issue on the Logstash project.

What else? There are a lot of concerns right now as the list of plugins grows. We're close to 200 plugins, so how do we know whether a certain plugin among those is a good plugin, a good thing to use in production? We can't maintain all the plugins ourselves, we are not that big a team, so we expect a lot of contributions from the community. But on the other hand, how do we tell contributors that their contribution should have a test, or that the contribution that adds this very interesting feature is 200 lines of code all in a single method? How do you communicate the need for decent code without alienating a contributor who is very willing to help the project? Other problems are about improving integration testing. Filters are very easy to unit test: you have an input event, and the output will be one or more events, so you can unit test filters very well. Inputs and outputs, however, are inherently drawn towards integration testing: if you have an
Elasticsearch output, it's very nice to be able to actually test it against an Elasticsearch instance. You can't just rely on "if I have an HTTP output and I use an HTTP client from Java, then as long as I make the right calls on that library everything is solved and I won't have crashes anywhere"; obviously every library we use has bugs, just like Logstash has bugs, so it's nice to have integration tests. We're doing a lot of experimentation with containers, Docker obviously, to have proper integration tests. And then there are more abstract things: we want better performance, predictable behavior, clear expectations, and so on.

So, what now? Go play with it. If you don't know about Logstash, it's a great tool; it saved me a lot of time in my old job. File an issue if you have a problem: if you have some expectation when using Logstash and that expectation isn't met, we consider that a bug, so do file an issue if you see something that does not meet your expectations when you use this tool. Write a test, that's always welcome. And experiment, not just with Logstash but with the whole ELK stack: get data from your log files, structure it using Logstash, send it to Elasticsearch, and then visualize it and build dashboards with Kibana. You can also complain on IRC, it's an interesting way of communicating with us; I'm jsvd on Freenode, in the #logstash channel, so go there and talk with us. And yeah, that's it. Any questions about this?

Yes? Okay, so the question was how the Elasticsearch output works. Especially before 1.4 there were two ways of running the Elasticsearch output: one was having an actual Elasticsearch instance running inside Logstash, and the other was simply communicating with a cluster on the outside. More and more we know that having an Elasticsearch instance running inside Logstash is a bad idea, because you're in the same JVM doing completely different things. So the way you communicate with Elasticsearch now is through one of three ways. One of them is the REST API for Elasticsearch, so just HTTP. The other two use the transport protocol that the nodes use to communicate among themselves, a binary, Java protocol, and there are benefits and downsides to each. Of those two binary options, one is "transport", which is essentially a client for the cluster; the other is "node", where Logstash actually becomes part of the cluster, starting itself as a node and joining the cluster. That has some problems and some benefits as well. One of the problems is that if you have a very tight network you have to be able to communicate bi-directionally with the cluster, and you have to configure that on all your edge servers: you might have a cluster of ten nodes, which is your Elasticsearch cluster, but then a hundred nodes sending data to Elasticsearch suddenly become part of that cluster and of the gossip protocol as well, so it gets kind of twisted. So either the transport protocol or HTTP is a good way to go (a configuration sketch is shown below).
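For reference, the protocol choice described in that answer was a setting on the 1.5-era elasticsearch output. The option names below are recalled from that version and may differ in later releases, so verify them against the plugin documentation:

```
output {
  elasticsearch {
    protocol => "http"            # or "transport", or "node" (Logstash joins the cluster)
    host     => "es01.example.org"
  }
}
```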
There's another question? Yeah, okay, so the question was how to forward logs from your edge servers into Elasticsearch or some other system, or into Logstash, for example. There is a project called logstash-forwarder, which is written in Go, and the job of that software component is just reading from a file and sending the data over a protocol we call lumberjack; that's all it does, so it's very light, being in Go. Then you have an input plugin in Logstash that can read that lumberjack protocol. So the architecture would be a lot of logstash-forwarders, then a Logstash doing the actual breakdown and transformation of the messages, and then sending that to Elasticsearch, for example. That is one way of doing it. You can also do it with just Logstash on all of the nodes; the downside is that if you have very restrictive hardware, you might not want the memory penalty of a JVM on all of those resource-tight servers. The benefit of having Logstash on both sides is that you can do some pre-processing, some transformation, before sending it to the central Logstash that does the heavy lifting. So there are benefits and downsides; also, you're kind of forced to use encryption if you're using the lumberjack protocol. It kind of depends on your requirements, I guess.

Yes? So your question is more about what troubles people run into when using this stuff, or the use cases? Yeah, there are a lot of use cases for Logstash, and that's the funniest thing: once you are able to create your own plugins, you can do whatever you want. For example, for a meetup a month or two ago in Paris, just for that event I created three plugins: an input plugin that captures webcam snapshots from time to time, a filter plugin that calculated the difference between JPEGs using an algorithm, and an output plugin that executed the "say" command, which translates text into speech. With that I created a burglar alert system: I started it, left my laptop sitting there, and when I came in front of it, it would detect the difference and say "you're an intruder, get out". That's a very extreme case, obviously. But, for example, when I was at my last company, the Portuguese telecom, I didn't use it for taking stuff from Apache and sending it to Elasticsearch. I had a lot of core routers and network equipment, I got data from the core routers using the syslog input plugin, did some transformation and structuring of the messages, detected particular customers who were having issues, and then triggered a notification to a system that could do remote resets of the customers' IPTV service and equipment, and with a restart, as with everything in our ecosystem, the problem would usually solve itself. And yeah, actually, something that is a problem for me as an employee of this company is that I'm still learning a lot about Elasticsearch, because I didn't know much about it; my knowledge was all Logstash, I used it a lot, and I had never even used the Elasticsearch output plugin in my life until I joined the company and started developing on it. So yes, there are a lot of use cases. Good question. Anything else?

Yes. The question is whether it's a good idea to just write directly into Elasticsearch instead of putting Logstash in the middle, right? Yeah. So why do we need Logstash? I used to say this a lot: as long as applications keep writing log files
in formats that aren't meant for machines, you need something like Logstash. If applications didn't write things like the MySQL slow log, a multi-line format that's completely out of shape, you wouldn't need something to break that message down, structure it, and send it somewhere else. If your information is already nicely structured, already JSON or XML or something like that, then you might not need it.

Yes? Awesome question; I have no idea. I don't think so, but do contribute it, I think we don't have one right now. Another question? Yeah. So that's the problem of having two servers sending the same kind of thing through a multiline plugin: if both sides send line A and line B and they arrive interleaved, you're kind of stuck. The way to deal with it is to partition those events using some criterion that you know; for example, if you attach the hostname to the data from each machine you're sending from, then in the configuration file you can use something I didn't mention, which is conditionals: you can say, if the host is this, use one filter; if the host is that, use another filter (a sketch appears below). There is also a plugin right now, I don't know how advanced it is, whose idea is to have a stream identity, so you first separate the streams using some criterion, saying "use this field to create different streams". Right now I don't think that fully solves the problem, but it's something we have had complaints about, and something we have to aim for: breaking down streams and having multiple streams inside each plugin, for example. It could be; I don't think we have anything actually written down for that, and I don't want to sound like a jerk, but please do open an issue about it, because we read and work with GitHub a lot to decide what to do, so that's actually very valid input.
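The conditional syntax alluded to in that answer looks roughly like this; the hostnames and the filters chosen per branch are made-up placeholders for illustration:

```
filter {
  if [host] == "app-01.example.com" {
    multiline { pattern => "^# Time:" negate => true what => "previous" }
  } else if [host] == "app-02.example.com" {
    multiline { pattern => "^\[" negate => true what => "previous" }
  }
}
```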
Okay, so the next question is about the relative performance characteristics of grok, and how that compares with just writing structured information to the logs in the first place. Grok has performance similar to what Ruby's regular expression engine gives you, which means it's not the most performant thing in the world. The big problem we see with clients is often not that grok itself is slow, but that the patterns and regular expressions they build don't perform well; when we go through their configuration, look at the expressions they built, and just do them differently, the performance problem is no longer there. But obviously, as was already asked, if you can send data in a structured way, that's always much better.

The last question is about the common pattern: do you run Logstash on the same machine as your application, or outside of it? Obviously, depending on what kind of configuration of RAM, CPU and so on you have on the machine, it might just be better to have it off the server, and you're also constrained by how the application logs: if it logs to a file, you have to have Logstash on that machine. In the situations I dealt with at the telecom, all the network equipment would send data off the box, so I had a machine just dedicated to that. So it's largely determined by the way the applications log. Anything else? No? Thank you for coming.
Info
Channel: Parleys
Views: 31,188
Rating: 4.0454545 out of 5
Keywords: Devoxx Poland 2015, tutorial, training, course
Id: mfb0R7azKZc
Length: 51min 26sec (3086 seconds)
Published: Mon Jan 04 2016