Grafana Loki: Like Prometheus, But for logs. - Tom Wilkie, Grafana Labs

Captions
All right, how's everybody doing? We're going to go ahead and get started. My name is Tim Hendricks, I'm from Styra, the company behind Open Policy Agent, and I'm your track host. Just like always, we're going to save at least five minutes for Q&A at the end, so please stay; don't get up early, it's pretty disruptive when you leave. Also remember to rate the talk at the end; you just log into Sched to do that. So without further ado, I'll give you Tom Wilkie, one of the maintainers of Loki and Prometheus. He's going to be discussing Loki, a log aggregation system inspired by Prometheus.

Thank you very much. Hello, everybody. This is quite a big room, quite daunting. My name's Tom, I work for Grafana Labs. I'm also a Prometheus maintainer, and I work on a project called Cortex as well; I gave a couple of talks about it at KubeCon over the past few days. The new thing I'm working on is Loki. When I do get spare time, which is very infrequent, I like to brew my own beer; I thought I'd include that for some reason. I'm actually going off script here: I'm in a Twitter war with my wife. She's a journalist, so she's getting way more followers than me, and I'm massively promoting myself here by asking: I've never spoken to an audience this big before, so if you could get me past 2,000 followers on Twitter, it would make me so happy, and my wife so sad. Make of that what you will.

This is pretty much the last talk I'm giving at this KubeCon, so this is what you're going to get. We're going to do some audience participation. Who here is already using Prometheus? Hands up. Oh wow, that's so cool. Who has already tried Loki? OK, mostly people who are using Prometheus. Great. I gave a very similar talk to this at FOSDEM in February, and we had a lot of feedback that people liked it, but some people here have already seen it, so who's seen the FOSDEM talk? Oh, OK. I've summarized the FOSDEM talk, so I might just
give the FOSDEM talk again for those of you who haven't seen it; I'll go through the summary and see what happens. And then finally, who here was at the keynote? OK, that's pretty good. Having the opportunity to speak to that many people was so awesome.

So, Loki. What is Loki? Loki is a horizontally scalable, highly available, multi-tenant, you name it (it basically does everything) log aggregation system inspired by Prometheus. We started the project quite a long time ago, actually, but it went through a long period where we didn't really have enough time to work on it, and we only really got to working on it and pushing out the first release on the way to KubeCon US. If you go and look at the git history, most of the code was written on the plane on the way to KubeCon US. So it's six months in; it's six months since we open sourced it, and the response has been astounding. We spent the first 12 hours after we announced it at the top of Hacker News. We're at over 6,000 GitHub stars now, and we've had so many people coming up to me saying they love it and they use it; this is the first time I've ever experienced that, so it's awesome. We've spent the last six months listening to everyone's feedback. We've hired a bunch of people at Grafana Labs to work on it, and we've got a load of open source contributors. I wanted to put together a load of stats, but you can all go and see them; there are 80 or 90 contributors to Loki now, which is just amazing for such a new project. And we've added some really cool features. This will make a lot of sense later when I demo it, but Loki takes a different approach to log aggregation, and we've added some of the things that people want, which brings it a little bit closer and allows you to walk the trade-off a bit better. Goutham (I'm not sure if he's here; I think he's in the other room) has been working on query performance, and query performance has come on leaps
and bounds in the last six months. There's a little work still to do, but we're getting there. Then we started developing our own query language as well. And finally, today I want to show you that we've added the ability to see context, and we've added the ability to do live tailing, which was one of our most requested features. The doc up there, the Google Doc, is the design doc I wrote over a year ago now. I like to start a project with a design doc, and it's public, so go and read it if you want the real in-depth version.

So what are we doing today? Well, that was the intro. We're going to do a summary of the FOSDEM talk (it was so much fun I thought I'd do it again), then we're going to talk about what's new and where we're going, and hopefully do a demo and some questions later. For the FOSDEM talk, I just recommend you go and watch it if you want a bit more depth. I think it was midday; if anyone's ever been to FOSDEM, I may have had a few beers by that point. It's a good conference, and it was such a great opportunity. It was lecture style, because it's run in a university, so everyone was right up in front of me; it was so cool. Anyway, the FOSDEM talk basically focused on three things about Loki. It focused on how we've tried to deliver on the simple and cost-effective to operate promise that we made (all of these should start from zero, by the way). The second thing we talked about was how we've integrated it with existing observability tools; as I alluded to in the keynote, it's all about that joined-up workflow and joining the pillars together. And then airplane mode: as I said, I built Loki on a plane, since I've had to do a lot of traveling recently, so having an airplane mode for your software, as well as making it cloud native, I think is kind of neat and makes Loki cool.

So first, what do I mean by simple to scale? Well, most log aggregation
systems will index the contents of your logs. They'll go through, take every line in your logs, tokenize it, and stick it in a massive inverted index, and your index is your logs. And this is really cool, right? It makes it super powerful; it makes it really easy to ask questions like "how many log lines had the word error in them?", which a lot of people want to do with their log aggregation system. This is also kind of how Google search works; it's how they index the internet, it's the same technology. The challenge here, for me, is that these inverted indexes are hard to scale. This is a bit contentious, for sure; other people would say they're not. This is my experience, and here's a bit of why I think they're hard to scale. If you think about an inverted index, you take a log line, you split it up into, say, 10-15 tokens, and then you write those tokens to your index. Now, if you're going to optimize your index for writing, you're probably going to want to shard that index out over all your machines, and what that means is that every log line is going to be write-amplified by the 10-15 or however many tokens are in that log line. Similarly, if you want to optimize it for reading, you kind of want to localize your reads. So you've got these two opposing trade-offs when scaling an inverted index.

In Loki we do it completely differently. We still do some amount of indexing; as I alluded to in the keynote, there are degrees of indexing you can do. Something like OK Log, by Peter Bourgon, eschewed all forms of indexing; it didn't want to do anything and wanted to be as simple as possible. The challenge there is that the only thing you can really filter by is time, and you effectively have to brute-force every single query. So in Loki, and this is what I'm trying to indicate here, we try to give you a little bit of indexing, just a taste of it. We want to
index metadata and labels about where the logs came from; in Prometheus terms we'd say these are the target labels. We give you this little bit of index that points to, basically, a stream of logs, and the stream of logs internally isn't indexed. This allows you, in the workflow that we'll show you later, to filter what job you want to see, maybe what log level you want to see, what host, what server, etc., and get the logs for that; but the rest of it you have to brute-force. Because the index is so much smaller, it basically fits in memory. I mean, we run this thing at scale now, and the index is tiny, so you don't really have a challenge scaling it. The other thing is that these streams compress really well; all the logs from a single host have a lot of locality in the stream, so you get great compression out of them. Again, I kind of ran out of time to get real numbers here, but I'll do a blog post in the next week or two and give you actual facts then.

The second argument is integrating with existing systems. I work for a company called Grafana Labs; I don't actually work on Grafana, I work on back-end stuff, so I work on Prometheus, on Cortex, on Loki. A long time ago, 18 months or more, maybe two years ago, I built this workflow (I built this slide, in fact) that kind of showed what my day-to-day activity was as a DevOps engineer. I'd get an alert. Not in Slack, but I needed a picture: I'd get an alert on my phone, I'd follow the link, and it'd take me to a dashboard. I'd click around, maybe use the dashboard, maybe it's a RED-style dashboard, to hierarchically go through my services and find the one that's causing the errors or causing the high latency. Then I'd basically click on the dashboard, click Edit, and copy the PromQL expression out into the Prometheus expression browser. Then I'd fiddle with it; I'd fiddle with it to get more
information, I'd fiddle with it to find maybe the exact path of the request that's failing, or the exact instance, or the exact host, whatever it was; I'd end up fiddling with the expression. And that copy-and-paste and that fiddling was something we really wanted to streamline in Grafana when I joined Grafana. So, I think it was David who added the Explore mode, which allows you to click on a graph and go into basically a recreation of the Prometheus expression browser inside Grafana, and lets you do that fiddling in a nice way. I don't know why I'm talking so much about Explore mode, it's just cool, but Explore now has kind of cool tab completion, and it gives you suggestions of ways you can improve your query, and really helps you at that first step. But I guess the reason we really wanted Explore is the second step: once we've isolated maybe the instance or the job or the host that's causing the high latency or the error rate, we wanted to be able to, with one click, show you the logs for that host. And that's the whole motivation for Loki. And then distributed tracing is obviously something we'll do pretty soon, I'm pretty sure.

So how do we do this? Well, we deploy an agent to all of your hosts. We needed a log collection agent, so we built our own; it's called promtail. We didn't want to build our own; no one wants to build one. There are a few things in the world you should never do: you should never build a database (this would be the third one I've built), and you should never build a log collection agent. Luckily we managed to hire someone who did a really good job of making this one better. Anyway, we built promtail. What promtail does is embed the Prometheus service discovery libraries, and it uses the Prometheus integrations to collect metadata about your jobs: the pod name, the labels associated with the pod, maybe the
image the pod is running, that kind of stuff. Then it associates that metadata with the stream of logs it's following on disk. So it'll tail the files that are in /var/log or wherever Docker puts them, associate the labels with them, and send that to Loki. The reason we did it this way: I mean, you could do this with fluentd, but you'd have to tell fluentd what labelling rules you're using in Prometheus, and you'd have to translate between the two. We found that was error-prone, and you had to put time and effort into making sure those two sets of rules were consistent. By using exactly the same libraries as Prometheus, we've systematically made sure the labels will be consistent with Prometheus. And this is key: you don't have to do anything to make sure you get the same labels as Prometheus, you just have to use the same config as Prometheus. That's why we've called it promtail: it's really heavily Prometheus-inspired. That's how we keep the labels consistent, and by keeping the labels consistent we can enable that experience of switching between the two seamlessly.

Right, so airplane mode is the next one. I realize I started off with zero, then went one, one, three; that's embarrassing when you do so many slides. As I said, I needed to be able to run this on my laptop; I needed to be able to develop quickly on my laptop. But also, I worked on another system called Cortex, and when we built Cortex we wanted to do it the cloud native way: microservices, Dockerized, cloud dependencies, Bigtable, S3, you name it, we wanted to use all the cool stuff. The problem we then found is that it was really hard to run. You had to have a Kubernetes cluster running in one of these clouds to run it, you had to provision everything and configure everything correctly, and there were just a lot of moving parts in Cortex. So when I wrote Loki, I wanted to
avoid that experience. I wanted to give people a really nice out-of-the-box experience: one command runs Loki and it just works. And, kind of selfishly, I also wanted more people to use Loki than ever ended up using Cortex, and more people to contribute to Loki than ever contributed to Cortex. So we tried this: Loki uses a lot of the same code as Cortex to achieve all of the DHTs and eventual consistency; all of the algorithms are just Cortex algorithms, but we package it up as a monolith. And I know, someone at a cloud native conference talking up monoliths; I'm going to get booed off stage soon, I think. So we packaged it up as a monolith, and we made it so that with one command you can run it on your laptop. We also added stubs to allow you to run with local databases, like BoltDB, an embedded database, so you can store the logs on disk. This way we gave you an experience where, yes, I can run it on an airplane, but you can also run it on your laptop, or on your on-prem servers, or in your Kubernetes cluster that isn't running in the cloud. Or even if it is running in the cloud, maybe you don't want to spin up a Bigtable instance, because Bigtable can be expensive, and stuff like that.

So we built this monolith, but of course we're trendy and we're only supposed to be building microservices, so we added a flag to the monolith to say: by the way, we also want you to behave like a microservice. So you can optionally choose to run the complicated Cortex-style architecture. This is how we run Loki in production: we deploy, I think, nine or ten services that together make up Loki. You don't have to; you can run one, but we run ten, and if you want to run ten as well, you're welcome to; it's just a flag. The cool thing about this architecture, and the reason we did it with Cortex and the reason microservices are a good idea, is that we get to isolate the query path from the write path, for instance;
we get to scale them independently. If you get a query of death, if someone sends a query that decides to load a terabyte into a 16-gig pod, it will kill that pod, but it won't kill the ingestion path. So you can separate concerns about reads and writes, and separate the reliability of the two. We have this nice little model where we like to move quickly; we like to merge stuff, we deploy off master, we continuously deploy this. But when you touch the ingesters, which are the stateful bit and a bit sensitive, we like to do a bit of extra code review; whereas the queriers and the query front-ends and so on are all stateless, and if they break we just roll them back, so we like to move a bit quicker and LGTM stuff a bit quicker there. So yeah, Loki is optionally microservice-oriented if you want.

So that's the really brief summary of the FOSDEM talk. The FOSDEM talk was, I think, 40 minutes, so if you want more details I just recommend you go and watch that, and I think we will do a write-up of it as well. So: simple and cost-effective to operate (I've used number two here when I should have started from zero), integrated with existing observability tools, and an airplane mode while also being cloud native.

So, halfway through: what's new? That was kind of six months ago, right? That was where we started. We've had a huge amount of feedback; it's been overwhelming. I've never been involved in anything this successful. So we've started to respond to some of it. We had to hire a few more people (I can't write all this code myself), and we've got to the position now where I think we're doing pretty well; we're this close to being ready for our first release. I was really hoping to stand here and announce Loki 0.1, but we're just a little bit off. So let me tell you what we've done. The four things I want to show today that we've added to Loki are, first, log filter chaining.
This is our first step towards our own query language for Loki. The most common thing people want to do, to fulfil that "just give me grep" promise, is filter their logs: filter them in a negative sense, filter them in a positive sense. Not everyone likes writing regular expressions, so sometimes you just want equality. So we've added log filter chaining, and you can have as many of these filters as you want.

Second, extracting labels from logs; I mentioned that earlier. Internally (I hope the demo works, I really do) we extract the log level from the logs, so we now include the level in our index, and every stream that comes in gets split into four streams. This is kind of cool because it gives you a little bit more power. The use case I really want to show in the next few weeks is the ability to add an annotation to your pod that says: I just want to record warnings, errors, and info in Loki; I don't want to record debug. Then, if you see an error, you can go along and, just by changing the annotation on your pod, switch to full debug logging back to Loki. You don't have to change your code; it will all be done through Prometheus relabelling rules. That's for next time, but we can now extract labels from logs.

Third, one of the biggest things we were asked for is live tailing. People love kubectl logs -f; they really like that workflow, and they wanted to see it in Grafana, so we've got that, and hopefully we can show you it.

And fourth, context: again modelled after grep. People like to build pipelines of greps, and grep has that -C flag for context: I want to see three lines before a match and four lines after a match. So we've added that to Loki. And there's much more.

So let me see how good the Wi-Fi is. One second, display arrangement... and one tip: you should always close your Slack first. So yeah, let's
just try using the live environment. So we're just going to go to our instance (the Wi-Fi seems good), our instance of Loki that's running in a dedicated cluster inside Grafana Labs. We run, I don't know the final count, something like fifteen to thirty Kubernetes clusters distributed around the world. We've got ones dedicated to specific applications, ones dedicated to specific customers and regions, and so on, and we like to get a global view of all our logs in one place, so we run an ops-tools cluster where we do that. I know you didn't see a login; that's because it's behind Google's Identity-Aware Proxy, so good luck.

So let's do the workflow. We've got this demo app called the TNS demo; let's actually do a port-forward (this was me trying to run it locally earlier; it didn't work): kubectl config use-context ops-tools-1, then a kubectl port-forward in the tns-demo namespace... come on, come on... there we go, the service on port 8080. OK. We built this demo because whenever people show you demos it's a fake thing, it doesn't have a UI, and I really want my demos to have real things. So I built a Hacker News clone. These are all fake stories, they're not real stories, they're supposed to be a joke; don't take them too seriously, and please don't sue me. You can upvote, and the links work, just to show you there is something going on. But sometimes it fails. Why is that? So we can go to our demo app and see that sometimes it fails; we've seen that the database is periodically failing. So I can go and load up the Explore view, and this shows me: yes, there's definitely something going on. You can see that I can break it out, maybe by instance (OK, there's only one instance, that's not helpful), and I can also say: well, I'm interested in only those 500s, so you can click there. This is really designed for people who maybe aren't
that confident with PromQL; we can just help them build the query and find what they want. And when you're getting paged at 8 a.m., just before you're about to go on the keynote stage, this is really useful. That actually happened, by the way. So now we can zoom in on one of the particular spikes, because we can see there was a spike of five QPS of 500s at that point, and we go up here and say: I want to see the logs for that. And there you go, there are the logs; you can see various errors in there. That's pretty cool; those are the logs just for that selection. We can see it's all for the same label, so that doesn't do anything, but there's a lot of stuff in here, right? So maybe I just want to see errors, and I can filter down just by "error". (I tried to demo this on someone else's keyboard, but it was a Norwegian keyboard and I couldn't find the pipe operator.) So here we go: now we've filtered by "error", and we can even do pattern matching, and we can do multiple of these, something like that. Yeah, it works.

And then one final thing we can do is some kind of ad hoc analysis of the thousand results we got, in the browser. This is done browser-side right now; I'll show you what we want to do instead in a second. But one of the things that is kind of cool: you see we're in this view and we've filtered by "error", so we're only seeing errors, but maybe that error is preceded by a stack trace that doesn't include the term "error". So I want to see the context that might include that stack trace, and we can click on "show context". This goes back to Loki, does a couple of queries, and gives you your grep -C kind of workflow. I just think this is so cool. I should actually include stack traces in the demo app so it works even better.

What was the other thing I was going to show you? Live tailing, yes. So let's go to the
last 15 minutes, and let's get rid of that. Now, I've never actually done this before, because I was told by David that it's working, and I trust him that much. So here we go: we click here, and we click Live, and then what happens? It takes about 10 seconds, because for various reasons you have to leave a buffer, you have to have a delay, to get things in order... and... oh well. This is how you can tell it's a real demo; we were really trying to get this to work. Is David in the audience? He normally is. Is he hiding? He's down there. What have I done wrong? Yeah, it didn't work, did it? Well, there'll be a better demo online, I promise. The other thing I wanted to show you was the extracting of labels from logs: have we got a log level in our thing? Yeah, we do. OK, well, two out of four; that's not bad for a demo. So let's carry on.

So that was the demo. On to part four: what's next? As I said, the biggest question we get about Loki is: when can I use it in production? We use it in production already; we've been using it in production for six months. It occasionally lost some data, but now it's pretty stable. [Applause] I wasn't expecting you to laugh at that. But now it's really stable, actually; we're really happy with it. We're going to cut the first beta, a 0.1 version, really soon, but don't let any of this stop you: go and use the master branch, it's good, it's stable, it mostly works, as you've seen.

As I alluded to, we want to do that kind of aggregation view that you saw, showing you what percentage of errors had which message, but we want to do it server-side, so we can give you bigger pictures over longer time periods. So we're designing what we're calling LogQL, this new query language. We want it to look as much like Prometheus as possible. This is just some ideas; it might not look like this. There's a design doc that's linked in the top right; I'll upload the
slides afterwards and you can go and check out the design doc. I think Cyril (I've got about 20 unread messages from Cyril right now) is implementing this as we speak, and he's probably also telling me why my live tailing isn't working. So yeah, we want to implement some basic aggregations, so you can start using Loki to plot graphs, so you can start using Loki to do numeric analysis. Loki will never be for business analytics; that's not what we're trying to build. We're trying to help developers debug and troubleshoot their applications, so we're never going to be able to execute these queries as fast as the fully indexed log aggregation systems, but that doesn't mean we can't try. We've got plans for a fully parallelized streaming query engine that we're building, which I think is going to be really cool; I'm really excited about that.

Once we've got that kind of numeric aggregation in LogQL, we want you to be able to use the Prometheus machinery to build alerts off of it. I'm pretty opinionated about this stuff; I don't think you should be using logs to do your alerting. But not everyone lives in a panacea: sometimes your application will only log error messages, and sometimes it won't expose metrics, so we want to support those kinds of use cases. Actually, somebody came up to me earlier and gave me a really good example: kernel drivers. I doubt we'll ever get the Prometheus instrumentation libraries into the kernel, and a lot of drivers only ever write error messages to dmesg, so we want Loki to support the use case where we can count those error messages happening in dmesg and allow you to alert on them.

Then there was a really good idea from someone who came to the booth in the hall and said: I really want to use Loki for auditing, but I want you to add signature chaining, whatever that means,
so we can verify that the logs have not been tampered with. I don't know how to do this; I'm going to have to go away and do some research, but if any of you know how to do this, PRs are welcome. We want to do this; I think it's a cool use case. We talked about airplane mode and BoltDB, but in production you'll still probably be using something like Bigtable or GCS or DynamoDB, and we want to remove that dependency; we want you to be able to run in production on BoltDB. I think that would be really cool. And then we want to launch the first beta as soon as possible. And with that, I'm done. Thank you very much. [Applause] [Music]

First question: Thanks for the talk and thanks for the tool, I think it's very awesome. My question would be: you mentioned that you don't like alerting off of logs; unfortunately, some of my colleagues are extracting metrics from logs. Do you think Loki can help me with this? I don't like it, but I have to support it.

Yeah, good question: can Loki help you extract metrics from logs? Yes, that's exactly what we want to do. We already have some basic ability to count pattern matches in the exporter, and we want to implement kind of what mtail and the grok exporter are doing. (I'm standing here because I can't hear you otherwise; there's a speaker just here, that's why I'm standing here.)

Next question: I was playing around yesterday with Loki a little bit, and I was wondering if there is the possibility to also digest the logs that you take in and put them into some sort of structured data format, so that querying the data gets a lot easier and you're able to run analysis on them as well.

Yeah, so: can we take logs and give them a bit more structure? Sure. In the agent, promtail, we already have the ability to ingest logfmt logs or JSON logs, use a JSONPath expression to take a field out of the logs, and promote it into the index. That's how we're doing log level internally, which I couldn't show you. So yes,
the short answer is we can take some things now. The longer answer is you don't want to do that too much, because you don't want to put everything in the index; you might as well run Elastic in that case.

OK, so the processing should happen at the promtail level? Pretty much, yeah. You can use the same library on the Loki side as well if you're using fluentd, but we do it in promtail because then it runs on the user's machines and not on our service.

Next question (there's one over here; oh, you've got a mic): Hello, thanks. We are using Elastic for logging because we have strong security dependencies, for policy, RBAC. Do you have any plan to cover RBAC in Loki in the future?

Did you say RBAC? Yes, role-based access control. Loki is a multi-tenant system already. I didn't cover it in the talk, but when you push data into Loki you can specify a tenant ID, and that will keep it isolated from other tenants; and when you send queries, you specify the tenant ID as an HTTP header. So we already do multi-tenancy. Now, role-based access control isn't the same as multi-tenancy, obviously; you need to be able to put a gateway in front of Loki, and really the expectation is you would put a gateway in front to enforce authentication and authorization. One of the things we've been toying with is putting whitelist and blacklist filters into Loki, to say that this tenant can only access logs that match this set of matchers, because I think the matcher syntax is really nice for specifying these things. But yeah, ideas are welcome. We're trying to build the building blocks for that kind of thing, but we're not there yet.

Right, next question: can Loki deal with the horrible, horrible thing that is multi-line logs, and being able to search across those?
Yeah, we don't. It's really hard. That's why we did context, so that you can see, for that stream, what came before and after it. I don't know enough about how other people deal with it, but I've not seen a nice solution to that. No, we don't; that's what context is for.

Hi, first time hearing about Loki, fantastic, really nice, thank you very much. We're using Elastic for log-based monitoring, and exporters plus Grafana for metric-based monitoring. My question is: besides this kind of querying and seeing logs and metrics side by side in Grafana, would Loki have a feature to generate an automatic URL that points back to Kibana, so you could search on similar filters there as well to see the logs?

So I think the question was: could we add URLs into Grafana so you can do the same workflow with Elastic? Basically, for the same time frame where you have a spike, for example, you select it in the graph and you can follow the URL. I don't know the exact answer to that; I don't work on the front end. I know we are working on adding an Elastic data source for the Explore view, so maybe it's not via URLs, but maybe you can build the same workflow with Elastic and other monitoring systems. But the point of Loki is that that workflow is automatic: it just works, whereas in other systems you have to go and set them up and configure them so that the labels make sense in both. And I know we're going to add an Elastic workflow to Explore; I know people are working on it right now. Any more questions? One at the front.

Hi, well, thanks for the nice presentation. I'm wondering, after you drill down to the logs in the example you did, what would be the next step to zoom in, into the deeper context of what happened?

A good question: what's the next step
after exploring logs? What I really want to do: the whole of the Loki stack and the tracing stack embeds OpenTracing, and now we'll have to go and embed OpenTelemetry as well, and they emit trace IDs in the log lines. So that workflow I alluded to in the keynote, clicking on a trace ID and going to your Zipkin or Jaeger or whatever, I think that's the next step for us. Exemplars are another great example; we don't have that yet, but I really want to add that support in Grafana. Basically, I want Grafana to do trace visualization, or at least link to other tracing systems. That's probably where we're going in the next six to twelve months, but suggestions are welcome.

Great demo. I have a question about what you were saying about generating alerts off of logs. Is it even feasible to generate synthetic metrics off of logs?

You can do it now with Loki and Prometheus, yes. You can set filters in Promtail, and Promtail will export the number of log lines that match a certain regular expression, for instance. Then you can scrape that in Prometheus, take the rate of it, and say: if the rate of errors is greater than X, fire an alert. But I want to make it better; I want to be able to do it server-side as well. A problem with doing it like this is that you end up not having history when you add new expressions and filters.

One more, one more. Hello. First of all, thank you for Loki, it's really great. And the second question: you were cheating a little bit on the first slide. You showed that your workflow means you have an alert, but in Loki for now you don't have any alerts. What do you think about this, and how should it be shown? Maybe using the USE method or the RED method for this scheme?

Very interesting. Did you say I was cheating? A little bit; you noticed, but it was nice. Well, we have alerts from Prometheus, right?
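(Editor's note: the client-side metric extraction just described, where Promtail counts matching log lines and Prometheus alerts on the rate, might be configured roughly like this. This is a sketch: the job name, labels, selector, and metric name are made up, and the exact stage schema should be checked against the Promtail pipeline documentation. Promtail exposes custom metrics with a `promtail_custom_` prefix on its own `/metrics` endpoint.)

```yaml
# Promtail side: count log lines containing "error" per scraped stream.
scrape_configs:
  - job_name: app                       # hypothetical job
    static_configs:
      - targets: [localhost]
        labels:
          app: checkout                 # hypothetical label
          __path__: /var/log/app/*.log
    pipeline_stages:
      - match:
          selector: '{app="checkout"} |~ ".*error.*"'
          stages:
            - metrics:
                error_lines_total:
                  type: Counter
                  description: "log lines containing 'error'"
                  config:
                    match_all: true
                    action: inc

# Prometheus side: the alerting rule Tom describes, firing when the
# rate of matching lines exceeds a threshold.
groups:
  - name: log-alerts
    rules:
      - alert: HighErrorLogRate
        expr: rate(promtail_custom_error_lines_total[5m]) > 10
        for: 5m
```

The drawback Tom mentions falls out of this design: the counter only starts incrementing once the pipeline stage is deployed, so a newly added expression has no history to alert against.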
We use them extensively, and so the first step is really: hey, we get an alert, then we go and look at the dashboard. That's the kind of workflow I was talking about. But no, you're right, we don't have Loki alerts yet, and it's something we're working on. Fair point. Okay, thank you very much. [Applause]
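(Editor's note: for reference, the "matcher syntax" praised in the RBAC answer is LogQL's stream selector, followed by optional line filters. The label names here are hypothetical; `|=` is an exact substring filter and `|~` a regex filter.)

```logql
# Select streams by label matchers, then grep within them.
{app="checkout", env="prod"} |= "error"

# Regex filter variant, the same form used in Promtail match selectors.
{app="checkout"} |~ "timeout|connection refused"
```

A per-tenant whitelist of such selectors is what the speaker suggests as a possible building block for access control in front of Loki.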
Info
Channel: CNCF [Cloud Native Computing Foundation]
Views: 68,472
Id: CQiawXlgabQ
Length: 36min 41sec (2201 seconds)
Published: Fri May 24 2019