A Visual Introduction to the Inner Workings of the JVM - Douglas Hawkins

Captions
Hello everyone, thank you all very much for coming. My name is Doug Hawkins. I'm a lead JIT developer at Azul Systems; we make a Java virtual machine called Zing aimed at high performance, as well as an OpenJDK distribution called Zulu that you may have heard of. A lot of my job ends up being digging into weird performance anomalies, and this talk is me showing you what I do when I want to understand these things.

Now, obviously, since I've worked on a VM for five years, I think JVMs are pretty fascinating, but I also feel the details can be a bit abstract and can feel a bit far removed from what you're doing every day. On some level that's good: you shouldn't have to know this; that's kind of my job. But when things go wrong, it is good to have a little better idea of what's happening under the hood, and that can be difficult, because the JVM is honestly an amazing piece of technology. It's also an amazingly complicated piece of technology: it has dynamic instrumentation and profiling, it's doing just-in-time compilation, it has a garbage collector, and all these moving parts are constantly trying to adapt to your application. As one of my colleagues who used to work at Sun in the early days put it: it's alive. It's constantly trying to react to your program, and that can be really hard to understand. Much of my job is investigating the performance anomalies that come out of these problems, which basically means I spend inordinate amounts of time staring at log files. That isn't always the most fun thing in the world, to be honest, but it is important when you're dealing with big, high-performance applications.

And I'll give a lot of credit to Brendan Gregg for popularizing the idea that we should use visualizations to understand what's going on. Brendan Gregg is the performance architect at Netflix; he deals with things at a ridiculous scale, and he has popularized flame graphs as a nice way to look at where the hot points in our code are. It's not that we didn't have tools to visualize hot points before, we did, but the right visual can make a huge difference when you're dealing with millions of lines of log, or tens of millions of lines of log, because you just can't read all that. So today we're going to use some tools, some that are free and some that I've hacked together myself in the course of my job, to see a little bit of what's going on inside the VM.

As usual, I'm going to start small, with a very simple program where it's easy to see the performance characteristics. This is pretty much the whole program. I've got a main; the core part is that blue line in the middle, where we're just summing up a bunch of numbers, in this case 20,000 numbers that I produced randomly, and I measure how long that takes in this outer loop. To make sure the VM doesn't get too clever, I have to store the result of the sum into a public variable, so it doesn't get smart and optimize the summation away, and I also mutate the array a little bit on every trip through the outer loop, because otherwise it might store the result from the first iteration of the loop and never recompute it again. Those would be perfectly valid optimizations, and if you're not really careful, those sorts of things can happen and you're not measuring what you think you're measuring. But that's the gist of the program: sum up 20,000 numbers, see how long it takes, and do that 200 times. And just to make sure I'm not perturbing things, I create a side structure ahead of time to store the iteration results; I just update it as I go, and I don't print out the results until the end, because I don't want any I/O interleaving with my summation operations.
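(He's describing a slide rather than showing source here; a minimal sketch of a program in that shape, with illustrative names and plain System.nanoTime() for the per-iteration timing, might look like this:)

```java
import java.util.Random;

public class SumBenchmark {
    // Public so the JIT cannot prove the result is unused and delete the loop.
    public static long sideEffect;

    static long sum(int[] values) {
        long total = 0;
        for (int value : values) {   // the hot loop the JIT will compile
            total += value;
        }
        return total;
    }

    public static void main(String[] args) {
        int[] values = new int[20_000];
        Random random = new Random(42);
        for (int i = 0; i < values.length; i++) {
            values[i] = random.nextInt(100);
        }

        // Side structure allocated ahead of time, so recording results
        // does no allocation or I/O during the measured iterations.
        long[] durations = new long[200];

        for (int iteration = 0; iteration < durations.length; iteration++) {
            long start = System.nanoTime();
            sideEffect = sum(values);
            durations[iteration] = System.nanoTime() - start;

            // Mutate the array slightly so the JIT cannot reuse the
            // first iteration's result and skip the work entirely.
            values[iteration % values.length]++;
        }

        // All I/O happens after measurement is finished.
        for (int i = 0; i < durations.length; i++) {
            System.out.println(i + "," + durations[i]);
        }
    }
}
```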
So I'll just run that program, with JDK 8, and just to make sure no GC is happening, I also turned on PrintGCDetails, so we'd see any GCs interleaving with the output or with the measurement phase, which is at the top before the table. And this is what I get. Now, I did actually make it slightly more complicated than I indicated: I measure sub-millisecond times by using a combination of currentTimeMillis and nanoTime. But you get the basic idea.

So we get this table. At the very beginning we're running in the interpreter, I think you all know that, and our iterations aren't terribly fast: a few hundred thousand nanoseconds to sum up these numbers. Then, since we're using Java 7 or later, we have tiered compilation on by default, so we go to the baseline JIT, C1, and things get a bit faster: we get to about 50,000 nanoseconds, so we've improved by almost an order of magnitude. And as we keep going we get to the second JIT, C2, and we get to around 7,000 nanoseconds, so we've improved by almost two orders of magnitude.

This is the process your program is going through. It's for an individual method, and since the program is really simple, it's basically just the summation method that we're interested in. If we plot it, we get a graph like this, on a log scale: the Y axis is durations, the X axis is iterations, and we can see the three distinct phases: interpreter, then the baseline JIT, C1, then the optimizing JIT, C2. But I think we're already not off to a great start here. If you think that X axis is time, you've got it wrong, because each iteration is not the same length, right? The interpreter is slower than C1, which is slower than C2. So let's plot it in terms of time. Here I've marked every 10 iterations with a dashed vertical line, and you can see a little better what's happening. Now, you might think I'm cheating by using time on both the X and Y axes, and that might be fair, but at least now we can actually kind of see what's occurring. The first 10 iterations take about three and a quarter milliseconds; the interpreter's not super fast. Then C1 kicks in, and we do about 50 iterations in about three and a half milliseconds. And at the very end we do the last 80 iterations in about half a millisecond. But what happened between iterations 60 and 70? My goodness, there's actually a single iteration that takes vastly longer, I think because of switching costs between C1 and C2. You couldn't really see that in the previous graph, could you? So the way we look at the data does actually make a big difference.

And there's actually more to tiered compilation than just C1 and C2. The basic flow we're going through right now is the normal flow: we start off in the interpreter, then we go to C1 in what's called tier 3, where it's doing instrumentation that feeds extra profiling information to C2, so C2 can do cool speculative optimizations. That's the normal flow, but there are a couple of other flows, and we'll come back to those a little bit later. And you can see this if you want to. If you've watched any of my prior talks, you've seen I can turn on PrintCompilation, and I can see some of this happening: in blue here I can see sum being compiled at tier 3, both a loop compilation, which has that "@ 14" out to the side to tell me where the loop is, and the full method, which doesn't have the "@ 14". And then I see the tier 4 compilations happen.
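(Not captured from this run: an illustrative reconstruction of what -XX:+PrintCompilation output of that shape looks like. The columns are a timestamp in milliseconds, the compile ID, a % flag marking OSR/loop compilations, the tier, and the method; the annotations after the <- markers are added and the numbers are made up:)

```
$ java -XX:+PrintCompilation SumBenchmark
  ...
  112   24       3       SumBenchmark::sum (28 bytes)          <- tier 3: C1 with profiling, full method
  113   25 %     3       SumBenchmark::sum @ 14 (28 bytes)     <- tier 3 loop (OSR) compilation
  160   31 %     4       SumBenchmark::sum @ 14 (28 bytes)     <- tier 4: C2 loop compilation
  161   25 %     3       SumBenchmark::sum @ 14 (28 bytes)   made not entrant
  178   33       4       SumBenchmark::sum (28 bytes)          <- tier 4 full compilation
  ...
```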
The first tier 4 compilation is also a loop compilation, and when the tier 4 code gets created, the VM will actually discard the tier 3 version; that's why we see that tier 3 compilation being made not entrant. Then later we see the full compilation at tier 4, and that's when we reach our peak performance.

But it's a little hard to see all of this relative to our performance, and these logs get large, really, really large. Most real programs will have at least a few thousand JIT compilations, and of those, maybe 10 to 50 will actually matter to the performance of your application. So you've got to find them if you actually care about the performance, and that's a bit of a challenge. And there are actually hundreds of deoptimizations as well: we'll throw away compilations not just because we're moving to a higher tier, but because we made some speculation and it didn't work out. Again, the only ones that matter are the ones that affect your hot methods, so it might be one deoptimization out of hundreds that's important.

Now, there is a tool that I very much like for just poking around, called JITWatch, that will let you do some of this, so I'll use JITWatch with the program we just ran. Here's JITWatch; I apologize that the font is not bigger. In JITWatch you can open a log. I produced this log as I was doing the run earlier: you give it the XML file that the VM produces, which contains all sorts of stuff, then you tell it to start processing the log, and it can produce a nice visual timeline of the compilation activity. Okay, that's pretty nice. At the top it actually tells me there were 198 total compilations. That's pretty small, because this is a small program: 175 of those were C1, 15 of those were C2, 2 of those were OSRs. All right, that's something, but I really don't know how that related to the performance of my program. Still, I think this is a good start, and it's certainly better than reading that XML file.

The way you get that XML file is to run with -XX:+UnlockDiagnosticVMOptions -XX:+LogCompilation, and the file is pretty awful, if I'm being honest. I've written a parser for it; it wasn't fun. But it's very rich in information, and you can use JITWatch to see that XML just as it pertains to your method: I can go into JITWatch, go back to the methods view, find my sample program, show the TriView (it's open in the background, I think, yeah), and go to the journal, and it'll show me just the part of the file that pertains to my method. Of course, that presumes I know which method I should be looking at, but I do have these tools. And I think JITWatch is absolutely wonderful for writing toy programs and poking around. It will give you a triple view of your Java source code next to the bytecode next to the machine code, if you get it all set up properly. I've done a video series about using JITWatch, and the guy who created it, Chris Newland, has done a better video series about it; it's absolutely worth your time to go check it out.

There are other tools. There's Mission Control, which Oracle has recently open sourced, and it will give you a similar view: you can go into code, compiled methods, after you've done a recording, and you can see this graph over time. It's showing it in terms of bytes of code produced, but it's largely the same picture. But as I said, I want to know more. I want to know how this relates to the performance of this simple program, so I plotted this same line against our simple program's performance (yep, getting a bit ahead of myself). And I also want to know the exact tier breakdown; I don't want to just know C1 and C2, because there are these other behaviors: there's tier 1, tier 2, and tier 3, and they're for different things, so I would like to see those too.
So the first thing I do, and this is actually something JITWatch will give you as well (you can get tables of breakdowns by compilation type), is break things down by tier. I have an API that I like to use that just lets me query these log files. I say: read in this log file (the same file that JITWatch used), select all the compiles, do a mapping by tier, so I now have a stream of tier numbers, and process that with a counter, which just makes this table. And I think this is a really powerful way to explore what's happening. So here's that program; right now I'm going to group by tier, and there you go: 136 tier 3s, 38 tier 1s, 15 tier 4s, 1 tier 2. Now I can get a little more resolution on my data, and if I want to break it down by tier and OSR, well, I can do that too. There we go. It's not the prettiest format, I'll give you, but it's a pretty quick way to get answers to questions, and this is really useful in my day-to-day work of trying to figure out what's going on in somebody's application when they've just had a performance problem, because I really don't know what I'm looking for. I'm starting with a guess, and a canned visualization might give me what I want, and that's an excellent place to start, but it has its limits.

Then I can of course take this and plot compilations over time by tier, and I think this is pretty interesting, because one of the cool parts about Java's tiering system is that it does have these different modes. It has tier 1 compiles, which are only used for trivial methods, and we expect most trivial methods to be identified pretty early: we'll load the class, we'll try to compile something, we'll find out that method's trivial, and we'll tier-1 compile it almost immediately. We can see that's pretty much the case looking at this graph, which is in terms of time: almost all of the tier 1, which is in green, is discovered at the very beginning, and then we discover a little more at the end. (This is a slightly different benchmark, because I wanted something bigger, but you get the idea.) Tier 2 is actually pretty much only used during phase transitions in the application, at the beginning, when tier 4's queue gets backed up. Most of the time we go straight to tier 3 and skip tier 2, but if tier 4 is too busy, we'll go 2, then 3, then 4, and we can see that happening in the yellow peaks at the beginning; then we get rid of some of the tier 2s, move them to tier 3, and then we move the 3s to 4, and we can see that happening in this graph. We can even see the 4s gradually replacing the 3s: here the numbers actually go down as we make the tier 3s and tier 2s not entrant and they get replaced by a higher tier, so you can see the red line getting thicker over time. And at the very end, the application switches its behavior and we do some more compilation, and we can see that too.

But where I think it really starts to get fun is breaking things down by package. I can actually just write a program like this: find all the compiles, map them by package name, and count them.
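(His querying API isn't public; a hypothetical equivalent of these two queries over the -XX:+LogCompilation XML, using plain java.util.stream, might look like the sketch below. The Compile record, the attr() helper, the hotspot.log path, and the attribute names are all assumptions about the log format, and the attribute spellings may differ by JDK version:)

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class CompileLogQueries {

    // One compilation event pulled out of the LogCompilation XML.
    record Compile(int tier, boolean osr, String className) {
        String pkg() {
            int i = className.lastIndexOf('/');
            return i < 0 ? "(default)" : className.substring(0, i).replace('/', '.');
        }
    }

    static String attr(String tag, String name) {
        Matcher m = Pattern.compile(name + "='([^']*)'").matcher(tag);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) throws Exception {
        String xml = Files.readString(Path.of("hotspot.log"));

        // Assumption about the format: each installed compilation appears as an
        // <nmethod .../> element carrying level='N' (the tier), compile_kind='osr'
        // for loop compilations, and method='pkg/Class name (signature)'.
        List<Compile> compiles = Pattern.compile("<nmethod [^>]*>").matcher(xml).results()
                .map(r -> r.group())
                .filter(tag -> attr(tag, "level") != null && attr(tag, "method") != null)
                .map(tag -> new Compile(
                        Integer.parseInt(attr(tag, "level")),
                        "osr".equals(attr(tag, "compile_kind")),
                        attr(tag, "method").split(" ")[0]))
                .toList();

        // Breakdown by tier, e.g. {1=38, 2=1, 3=136, 4=15} for the simple program.
        Map<Integer, Long> byTier = compiles.stream()
                .collect(Collectors.groupingBy(Compile::tier, TreeMap::new, Collectors.counting()));
        System.out.println(byTier);

        // Breakdown by package: where the JIT is spending its effort.
        Map<String, Long> byPackage = compiles.stream()
                .collect(Collectors.groupingBy(Compile::pkg, TreeMap::new, Collectors.counting()));
        byPackage.forEach((p, n) -> System.out.println(p + ": " + n));
    }
}
```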
And for the simple program from earlier: there were 30 compilations in java.lang. That kind of makes sense; I think all of you end up using java.lang. java.util.regex? I honestly don't know. I'm outputting some stuff to a file, and I'm using formatting to output to the console, so I guess maybe that uses something from the regex package. That's kind of interesting; I'd like to investigate it. java.util, okay, collections, everybody uses collections. java.nio, java.io, yeah, that's for outputting the table at the end. But of all the compiles, for our example only four of them are actually the hot code, out of nearly 200. So you've got to find that one in fifty that matters.

So now let's see whether compilation actually did affect the performance of our application. Let's do what I said I was going to do earlier: put our performance graph next to our compilation graph, and see how helpful that is. Uh-huh. That is the performance graph lined up against the compilation graph by time, and that tiny little blue squiggle at the top is the actual run of our program as we saw it. We really are looking for a needle in a haystack here, in terms of which compilation mattered; in fact, the flattest period in the compilation graph is the point where our program is running, because that's when we're kind of doing one thing and we're not compiling new code.

But this graph did tell us something interesting: it told us that our application has multiple phases. Our application first has to start up the VM, not really a surprise. I anchored everything in time relative to the first measurement point, so there were actually 30 milliseconds before that where we were JITting code, and another 90 milliseconds before that where we were just starting the VM, but we were still compiling some stuff: loading classes from the bootstrap classpath, like String and SecurityManager and things like that. You actually load about 50 classes before you even start running main, and some of that gets JITted. And then after we finish running our benchmark, we're printing out our results, that's when all our I/O is happening, and I'm also writing out a CSV so I can make these pretty graphs. So we're doing extra work: we're writing out results, we're running new code to print out the table that you saw, as well as to write out the CSV file. The same thing was happening with the benchmark you saw earlier on the tiering graph: it was generating graphs and tables and so forth to give you the results of the benchmark.

And this isn't atypical; a lot of our programs go through this. Start up a Cassandra node, and it might read a bunch of data in, and then you switch to querying it, and then you shut it down. Right? You're going to have multiple phases; it's pretty normal. At the very least you're going to have the VM starting-up phase and then the running phase; even in something really small, like a function-as-a-service or a microservice, you're probably going to have that sort of phasing. So we can see that most of these JIT compilations just aren't important to performance. And of course we all knew this, or at least I hope you knew this. This is just the classic Knuth quote: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%." You might have heard "you spend 90% of the time in 10% of your code"; that is overstating things by at least a factor of 3 or 4. It's more like 2 or 3% of your code. It's vanishingly small, and you've got to find it if you want to know what's really affecting performance.

Now, for this, I'm going to do something we usually should not do: I'm just going to guess. I'm going to guess that sum is important to the performance of our benchmark. I know, I'm going out on a limb here, but that's what I'm going to do: I'm just going to mark those compilations on the performance graph.
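(Mechanically, with the hypothetical query sketch from before, pulling out those marks is one more pipeline. This fragment continues inside main() of that sketch; the extended record fields and the method-name filter are invented for illustration:)

```java
// Suppose the Compile record is extended with the method name and the
// timestamp (milliseconds) at which the compilation was logged, both
// also pulled from the XML attributes:
//   record Compile(int tier, boolean osr, String className,
//                  String methodName, long stampMillis) {}
List<Long> sumMarks = compiles.stream()
        .filter(c -> c.className().endsWith("SumBenchmark"))
        .filter(c -> c.methodName().equals("sum"))
        .map(Compile::stampMillis)
        .sorted()
        .toList();
// Each value becomes a vertical line overlaid on the performance-vs-time plot.
System.out.println(sumMarks);
```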
Now, hopefully we find that those compilations overlap the running of our benchmark, or something is really, really wrong here. Okay, that's where they occur. That first green line is where both of the tier 3 C1 compilations happen, the full compilation and the compilation of the loop, and yes, that is about where we transition into the slightly faster performance. The second green line is where we do the loop compilation for tier 4, C2, and that's right before we see that weird performance anomaly. I can't say that's definitely the cause, but maybe. And then at the last green line we have the full tier 4 C2 compilation, and that is right before we drop into our best performance. That's what we'd like to see. So this all sort of makes sense. Placing these lines is a little fuzzy, to be honest; measuring at sub-millisecond time frames is tricky, but it does actually work out quite closely, and I've run this thing like 50 times.

Now, you can do this with JITWatch as well: I can go and pick my method, and then I can go into the timeline view, and it'll highlight where the compilation got queued and installed. But unless I know that method is hot and I can match it up against my performance graph, it's still kind of limited where I can take this. At least there are timestamps, though, and I can start to figure these things out.

So, I've asserted that everything is multi-phase. Can we actually see the phases? That would be cool. I think we can, if we look at compilations by package over time, something like this. Now, it's a little small, so I'm just going to switch to it in a browser. So this is a heat map of the program running; let me make it a little bigger. It's composed of three colors: the blue is our C1 compilations, the green is our C2 compilations (I didn't break them down by tier), and if we throw any compilations away, there'll be some red; this program doesn't have any red. The rows are our packages, and I've highlighted the example package, which is the benchmark we're actually running. The columns are buckets in time; in this case I did five-millisecond buckets, and I count the compilations within them. At the very beginning we can see, yep, we're compiling java.lang and java.util; we're just kind of getting the VM warmed up. Then, in this run, about 120 milliseconds in, we started actually compiling our code, the benchmark code: first some C1 compilations during the 125-millisecond bucket (that's both the full compilation and the loop), and then we did the C2 loop compilation, and then the full compilation, and that's where we reached peak performance, and we can sort of see that. Later on we start writing out our table of results, and these bottom two rows are java.io and java.util.regex. Right now it's filtering to show only columns that had C2 compilations; there is a bit more going on here, but we can see that there's a whole bunch of stuff we didn't even bother to compile, because it just wasn't hot at all until after we finished running the benchmark. It would be very hard to see that unless you had this multi-dimensional view.

So let's add some deoptimization to the picture. I'm going to do it in a somewhat brute-force fashion: we're going to run a simple program that I like to call UninitializedForever. In this program, I have a main that's going to try to instantiate an object 25 million times. I said "try", because the class it's trying to instantiate has a problem in its static initializer. You might see the problem: it always throws a runtime exception, so the class can never be initialized.
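(A minimal sketch of a program in that shape; the names are mine. Note the if (true) trick: a static initializer that unconditionally throws cannot "complete normally" and is rejected by javac, which may be the very compile error he fixes in the demo a moment later.)

```java
public class UninitializedForever {

    static class Doomed {
        static {
            // Always throws, so class initialization can never complete.
            // The first attempt surfaces as ExceptionInInitializerError;
            // every later attempt surfaces as NoClassDefFoundError.
            if (true) throw new RuntimeException("never initializes");
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 25_000_000; i++) {
            try {
                new Doomed();          // hot loop the JIT keeps trying to compile
            } catch (Throwable t) {
                // swallow everything so the loop keeps running and stays hot
            }
        }
    }
}
```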
Which means you can never create an instance of this class. Oh, sorry, right now we have a compilation error in our static initializer; let me fix that. There we go, that's better. All right, we're going to run this program and see what happens. I want to make sure my program is stable, and you know I'm a good Java programmer, so I caught Throwable, and that keeps it running.

Okay, so what do you think happens? We're running a hot loop. What does HotSpot want to do when it sees a hot loop? It wants to JIT it. But HotSpot doesn't like uninitialized classes, so it's going to see that and throw us back to the interpreter. But then the loop is still hot, so it wants to compile it, but it doesn't like uninitialized classes, so it goes back to the interpreter. But the loop is still hot... you see how this is going to go. So we're going to run it. There you go. Uh-huh. Yeah, this is not a happy system, and I'm just going to leave it running in the background.

Now, if we look at this compilation timeline in JITWatch, we might get a sense that something's wrong, because the graph kind of keeps doing the same thing: it keeps going up a little bit, and up a little bit, and up a little bit, and we really expected it to flatten off if this is a long-running program. (Well, eventually it does.) I could also go into JITWatch's top lists, and I could see that UninitializedForever.main has deoptimized, in a shorter run, seven times, and maybe that means something, but lots of things deoptimize at least a few times, and if it isn't a hot method that's not necessarily a problem. Or I could go into Oracle Mission Control, and if I go to the code errors tab, I can see that I had a whole bunch of NoClassDefFoundErrors. I hope that looks like a bad sign to you, but I've actually seen this happen in real programs three times in the last five years. It does happen.

Now, eventually it stops, after about 90 seconds. Let's see, has it been about 90 seconds? Ah yes, it has stopped. Why did it stop? Well, fortunately the VM does have some safeguards: if something just goes horribly wrong and we compile the same thing more than about 40 times, we'll stop doing that, because we clearly made a mistake. It's also good that it's stopping evil people like me from doing horrible things to your VM.

Of course, I would like to look at these things a little deeper, so I take out my trusty querying API. I query my log, and I know what I'm looking for: I'm looking for uncommon traps. I just make a table by method and reason, and I find out that main did this seven times because the class was uninitialized. And that's actually not a bad place to start doing this sort of exploration. It's just the start of the exploration, but it's a start.
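(In the same spirit as the earlier query sketch, and again with invented names: this fragment continues inside main() of that sketch, adding import java.util.Objects. The assumption about the log format is that each deoptimization is recorded as its own <uncommon_trap ... reason='...'/> element in the same XML:)

```java
// Tabulate deoptimization events by reason, reusing the attr() helper.
Map<String, Long> trapsByReason = Pattern.compile("<uncommon_trap [^>]*>")
        .matcher(xml).results()
        .map(java.util.regex.MatchResult::group)
        .map(tag -> attr(tag, "reason"))
        .filter(Objects::nonNull)
        .collect(Collectors.groupingBy(reason -> reason,
                TreeMap::new, Collectors.counting()));
System.out.println(trapsByReason);   // e.g. {uninitialized=7} for this program
```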
The heat map can make it a little clearer; we can at least see that the corrective mechanism actually did its job. So here is a heat map for UninitializedForever, with just two packages, java.lang and the example, and you can see that, like every 400 milliseconds (I'm using 200-millisecond time buckets), there's a C1 compile, a C2 compile, a deopt, a C1 compile, a C2 compile, a deopt, a C1 compile... you get the idea, and it just kind of goes on like this. This is not a healthy system. The rainbow colors are really pretty, but you want to see this stop at some point; otherwise the system hasn't stabilized. And then indeed, way out here at around 90 seconds, it does in fact stop. I guess we'll call that healthy, but the fact that we did this for 90 seconds is actually indicative of a problem.

And just in case you think that these deoptimization things don't actually matter, they do, if you start caring about small enough slices of time. I'm often working with low-latency customers, banks, exchanges, and so on, and a couple hundred milliseconds where the system doesn't respond might be considered an outage by some of them. So I plotted a benchmark, and I did a heat map just by compiler and not-entrant events, and you can see one very noticeable dip. That's a 20% drop in performance caused by a single deoptimization; it just had to hit the right method, and this one hit the method that was actually running the benchmark. Surprise, surprise, that negatively impacts the benchmark. But it was just one: there were actually a few hundred deoptimizations in this run, and only that one mattered. This sort of visualization, putting things side by side, can help us understand these behaviors, which are really hard to understand otherwise, but you've got to have very precise timing information to line it all up.

Now, the reason this slows down is that when we hit a deoptimization, we discard the tiered compile. In our case we'll at least go back to C1; in OpenJDK's case you're going back to the interpreter; and I believe Graal will actually let you go back to the middle tier if you're using the AOT tiered mode. But even so, that lower tier, whether it's C1 or the interpreter, is slower than the higher tier, and that will show up in your performance if you're looking for it. It might only be a moment, it might only be a few hundred milliseconds, I've seen one that was actually a full second, but it is there.

Let's take one more example of this kind of view: a bigger benchmark, which has more distinct phases to it. So here we go. This is the benchmark: java.lang, java.util, the usual culprits. This is going to be true of pretty much all your applications; if you're running Scala there's going to be the core Scala library, and if you're running Clojure there's going to be the core Clojure library, in addition to these. There are certain things that are just used by every part of the program, in slightly different parts of each package, and they just kind of show up as cosmic background radiation in this sort of view. The line I've highlighted here is the actual Xerces packages; I've folded a bunch of them together, because there are like ten internal packages to Xerces. This is an XML benchmark, so it makes sense that that's the hot code, and you can see it goes through some churn: C1 compiles, C2 compiles, deoptimizations, C1 compiles, C2 compiles, deoptimizations, before things finally start to settle down out here after about 20 seconds. And then, much like my simple program, this benchmark harness writes out charts at the very end. I did a 5-minute run, and there you go: at the very end it ends up using a lot more of java.util, Java collections, java.lang, but it also uses some packages it's never used before. In this case it draws charts, so it uses JFreeChart. And we can see these phases happening in a way that we just couldn't before.

Now, this does mean that you need to think, if you care about warm-up, that warm-up isn't a single moment in time. Warm-up happens in each phase of your application: every time you load some new code, that code is going to get JITted. It's not that you warm up once and you're done; that's not what happens. New code, new phase, new JIT cycle, new warm-up cycle. You also need to keep in mind that warm-up isn't necessarily something you can distill down to a single statistic.
One of the things I work on is something that tries to remove a lot of the warm-up by doing the work ahead of time, loading cached code from disk, and lots of other things. That's the green line here, and the blue line is our VM running without it. If we were just going by warm-up time, they look the same: they reach their peak, or their minimum, since this graph goes down, at the same point. But I think we can agree that the green line is better than the blue line, even though the time to warm up, if you're just talking about that one statistic, looks kind of the same. And of course, when I did this, I made the same sorts of pretty plots. With the blue line you can see our system doing lots of compilations, shown at the top: we've got a lot of C1 compiles happening as new code is discovered, we've got a lot of C2, or in our case we have another JIT called Falcon, happening as the benchmark warms up, and we're seeing a lot of the orange, a lot of deoptimization occurring. And when I add the technology I was talking about, this changes: we do a lot of compilations up front, thousands of C1 compiles, and even hundreds of tier 2 compiles, before we even start running main, and that's how we get this better warm-up curve.

The other thing you have to keep in mind is that each of these phases is going to have different hot methods. You can't just take a profiler and say "I found the one hot method"; if you've got a multi-phase application, you need to profile each of those phases. And most of us can: there are plenty of great profiling tools. I decided not to talk about profiling tools today, because I think there are a lot of excellent talks out there already, but you can use Mission Control's table view, and in this XML benchmark I was talking about, it tells me that XMLSchemaValidator's handling of start elements is the hot method. Not really a surprise, but it's always good to check. I tried to profile that last phase, the JFreeChart one, and unfortunately the VM kept shutting down before I could get a full recording, so I eventually kind of gave up. But one nice thing about Mission Control is that it will let you narrow in on a period of time, by taking that slider graph at the top and narrowing the time window, so that's pretty cool.

And of course my favorite view is flame graphs. You can find all sorts of things that have been hooked up to flame graphs: Netflix uses perf; there are people who have used the data from Flight Recorder, which is what Mission Control consumes, to create flame graphs; and you can find things that use the sampling profiling built into the VM to produce flame graphs. There are lots of wonderful tools for producing this visualization, so I'd encourage you to check those out, but I feel like that's been covered elsewhere.

I did want to cover at least one program that's not completely a benchmark, so I decided I'd try Spring Boot. I just took Spring Boot (Spring is supposed to be light, right?) and started it. I didn't really configure much in it; I just wanted to see how much JIT compilation it does. So I produced that same XML file and used it to produce a heat map, and I was really pleasantly surprised in some respects. Now, there are a lot of packages, and I didn't work so hard to fold them this time, but you can see it actually does no tier 2 compilation: it has a very light startup. It just runs a little bit of stuff, does some wiring, and we're good to go. Not a lot happening.
But that's kind of a good thing and a bad thing, because that means you've got a server that's up and running, but is it warmed up? No, definitely not. When will it be warmed up? I don't know; when are you going to send traffic to it? It's not warmed up just because you started it; that's not how it works. It warms up in response to some stimulus. We start running methods when we start sending it requests, and when we run methods is when we increment the counters, which is what triggers the JIT. So just starting it doesn't do much. If the load shows up immediately, great, it'll warm up quickly; if it doesn't show up for a while, it'll warm up later. So I just let it sit there for like 40 seconds, and then I started refreshing the page, and then I could start to see the compilation activity kick in. I was just refreshing this manually, I didn't use a machine, so we see it's still just sort of lukewarm. If I really started to hammer it with something like Gatling or JMeter, then I'd see it get hot. But it doesn't actually get hot until somebody starts to stimulate the thing, and for my customers that's a real problem, because they have messaging applications and they want to turn the server on and have peak performance right away, which is why we have this whole approach of pre-warming the VM based off a recording that I talked about yesterday. So warm-up isn't just one moment in time when we start the VM, and there isn't a single phase; our applications are multi-phase, and it depends on how much traffic we're getting. We have to keep all of that in mind as we move to these smaller deployments, so I just want you to think about that.

Now, back to our anomaly from the beginning: what did cause that weirdly long iteration? I honestly don't know for OpenJDK. What I do know is that a couple of years ago I noticed a similar anomaly in our own VM, and it came from switching from the C1 to the C2 loop compilation, and I went and fixed it. Part of how I fixed it was by doing some visual processing of traces and looking for these sorts of anomalies. I know that's not what you're going to do, and that's okay. I just want you to have a little bit better idea of what's going on inside the VM, and maybe give you some ideas of how to go look, hopefully with the off-the-shelf tools, but just so you know what's happening.

There are a lot of great tools. I've stuck to showing you the free tools today, because I wanted to give you things you could go and look at, but there are plenty of wonderful commercial tools: YourKit, Dynatrace. I'm not entirely familiar with all their capabilities, but I would encourage you to go explore the tools so you can better understand the performance of your applications. I think using visualizations can be very powerful, because these logs are gigantic, even for tiny programs; I'm looking through like a million lines of log file. Does anyone want to look at a million lines of log file? No. No. So we're trying to find these tiny events, the three percent, among these giant files, and I think we need something to help us do that. And for that reason I'm actually kind of happy to announce that Azul has given me the go-ahead to open source some of what you've seen today. The tools are a little rough around the edges, and they're not out in public yet, but we are going to put them out there. I'd like the community to play with them, see what you think, maybe add to them; maybe in another couple of years we'll have something even better to play with.

All right, that's all I have for you today, and I'd like to thank you all very much for coming.
If you'd like to learn more about some of these tools: go watch Chris Newland's videos on JITWatch. He also has a new book on Java performance, done with a couple of other authors for O'Reilly, that just came out; I haven't read it yet, but I'm sure it's absolutely worth reading. If you'd like to know more about Mission Control, there's a YouTube channel; their videos tend to be a little WebLogic-focused, but it's definitely a good tool and has lots of capabilities. Azul has a GitHub; that's for our OpenJDK distribution, and we've also contributed things like jHiccup and HdrHistogram, which are low-latency measurement tools, to the community, and some of those have been used by Netflix and in things like Hystrix, so you may already be using some of our open source stuff. And basically, just go read anything that Brendan Gregg has ever written. He is absolutely the expert on this sort of stuff, and not just Java performance: Node performance, Linux performance down to the I/O level, syscalls, you name it. Absolutely a wonderful resource. So I hope that was enough to whet your appetite, and thank you all very much for coming. [Applause]
Info
Channel: Devoxx
Views: 21,219
Keywords: DVXPL18
Id: JLFjY6Ixct8
Length: 48min 10sec (2890 seconds)
Published: Sun Jul 15 2018