OpenTelemetry Tutorial: Intro to OpenTelemetry and basic deployment

Captions
Oh, we are live. All right, we are live now. Hey everybody. We're broadcasting on a few platforms with the help of our friends from StreamYard: Twitch, YouTube, LinkedIn. I hope everybody can hear us well; if not, just leave a comment and we'll see it and address it.

So again, welcome. I'm Andrew Biener from Aspecto, and Michael is here with me, my partner and co-founder. You're probably here because you're interested in OpenTelemetry to some extent, and that's the purpose of this series: we're going to go from the basics to the advanced parts of OpenTelemetry. For the past two years we've been working very extensively on OpenTelemetry as part of our startup, Aspecto, and in the last couple of months we've seen an increased interest in the industry. We meet more and more folks who would like to start using OpenTelemetry and Jaeger to get some tracing capabilities into their systems, and they don't have a lot of materials or help to get started. So I asked Michael, our OpenTelemetry expert, to do this series. We're starting today with the first episode. Even though we're broadcasting it, feel free to comment and ask questions; we'd like to keep this as open a discussion as we can. So please be my guest, and without further ado, Michael, over to you.

Okay, cool. So hello everybody, and welcome to the OpenTelemetry bootcamp. Let me start sharing my screen and we'll go through what we have in this bootcamp, what the idea and motivation behind it was, and what content you can expect. Before jumping into the bootcamp itself, a word about myself, so you'll know why I'm the one speaking to you about these topics. I've been working with microservices and distributed applications for about five years now. I started as an independent consultant, helping companies migrate from a monolith to microservices and helping companies with too many services manage them, both as an independent consultant and as an employee. In the past two years I founded a company around microservices and worked mostly with OpenTelemetry, so I gained some experience with it, and I saw the difficulty people have getting started. That's where the idea for the bootcamp came from.

The bootcamp is free and live, mostly a guide to getting started with OpenTelemetry. It spans six sessions, and it has nothing to do with Aspecto's product. We're not here to sell you a product; we're here to teach you OpenTelemetry. We're definitely going to use the knowledge we gained while building Aspecto, but not to talk about the product itself.

So what can you expect from this bootcamp? These are the six main topics we're going to cover, and I think they're the ones that will get you confident enough to deploy and manage OpenTelemetry. Everything starts with the fact that, as developers and DevOps engineers, we're responsible for the system we build, and OpenTelemetry integrates into our system, so we need to understand how it works, what the risks are, and how things operate internally. We'll start by understanding how it works and the motivation behind the way it's implemented. Once we understand that, we can integrate it into an application: we'll make the necessary code changes so that OpenTelemetry collects data and then ships it somewhere.
That will be the second phase. Once we've started collecting data, that data needs to be shipped somewhere, so we're going to deploy all kinds of backend components that receive the data, process it, ingest it, analyze it and visualize it; I'll refer to that as the OpenTelemetry stack. Once that's deployed, it goes to a production environment, and this is where the fun begins: we get a lot of traffic and a lot of data, and we need to learn how to manage it at high scale. We'll learn how to do that both in terms of infrastructure, auto scaling and those kinds of things, and, most importantly from the OpenTelemetry perspective, how to be efficient with the data we're collecting, how to collect the right amount of data and not too much. The fifth part: your application may be similar to other applications, but it's not the same, and you'll definitely need to do some customization to OpenTelemetry, usually very little, but you need to understand how to do it. And the last thing: we did a lot of work to have OpenTelemetry collect this data, so let's make sure we're squeezing the lemon and getting the most out of it.

Those are the topics we're going to cover, and this is the process I want you to go through with us. I hope you find it interesting, and if you feel something is missing, please reach out and let us know. We'll have three main sections: the first will be mostly around coding and making changes to the application, the second mostly about deployment and configuring it for high scale, and the last about customizing OpenTelemetry.

We decided to do this bootcamp with live coding and live demos; it's going to be quite a hands-on thing. The first session today is going to be a bit more talking and less coding, but overall it's live code and live demos. We want it to be hands-on, we want you to learn from experience. That means that when you do live code and live demos, things may not work, but that's a great chance to understand why they're not working and to see first hand what the debugging and troubleshooting process looks like. We do have a few resources available for you: a GitHub repo where you can find the code, the presentation and more resources to read, so check it out if you want, and we also have a Slack channel under the CNCF workspace where you can communicate with us. Again, it's live, use it, ask questions. I think that's about it for what the OpenTelemetry bootcamp is, so we can dive right into the first episode, where we'll talk about the architecture and the motivation of OpenTelemetry and see the first project we're going to work on.

Before I give you the typical lecture on what OpenTelemetry is and why you should use it, let's take it from an example, a real example we can all relate to, and try to figure out how we would work through it without OpenTelemetry. My example is something very common: you get an alert. You get an alert that some service, service A, is not able to write data to DB1. That's a scary alert: it means some service is not doing what it's supposed to do, it can't write data, and maybe we're even in a data-loss situation.
This is something we need to put our minds to; we need to fix it, and we need to act fast. Downtimes, bugs and critical issues are bound to happen; the question is how fast you can fix them and how accurate your fix is. So try to think for a second: you got an alert from your logging solution saying service A can't write to DB1. What are your first instincts? What's the first thing you're going to do, which tools are you going to open, and which questions are going to run through your head while you fix it? I can think of a few. I know service A is throwing some exceptions, so I can locate the exception in my code and track the flow in my code that led to it; maybe that gives me some hints. Maybe this service was recently deployed and that's the issue. And maybe there are some database metrics I want to check out; maybe DB1 isn't available at all. I would start with those; I guess you may have other ideas.

Let's say the logs show an exception, you follow the code, and you don't find anything too interesting. When you look at the metrics of DB1, you do see high CPU, which is interesting; it's something new that has changed. Then I'd ask a fairly simple question: maybe we're just getting more traffic and it's not really a problem? Well, it is a problem, but more traffic is probably a good thing. So maybe we need to look at traffic, and then ask: which HTTP route is causing service A to write to DB1? That's a fair question, and I may not know the answer off the top of my head; maybe I need to check. And maybe my assumption that it's HTTP is wrong: maybe service A consumed some Kafka messages and wrote to the database in response to those. This is the thought process I would go through.

If I look at a diagram of the overall service architecture, I have service A, and service A writes to DB1. If I need to know who is calling service A, whether it's the front end, some mobile device or other services, that's hard for me to know, and with only metrics and logs it's even harder. There are tools that may help you, and some that can't, but let's say you know off the top of your head that both service B and service C call service A. So now we have two consumers of service A, and the question is: which of them is calling service A in a way that actually produces the error we're experiencing? Is it service C, is it service B, or maybe both? Again, with only logs and metrics it's quite hard to answer, and this is where we unveil the third and important pillar of OpenTelemetry: the trace.

Let's start with logs: we all know what logs are, basically a message telling you something happened or something doesn't work. We know metrics: a number representing some value in the system. And then we have the trace, and the trace tells us the context: the path that a specific API call, a specific backend interaction, took, and which services it visited before it hit that error. It's roughly the same way you'd look at a call stack and see how a specific invocation jumps from function to function; here I want to know the jumps from service to service.
Once I understand that and can visualize it, I can see that maybe all of the issues I'm having are related to service B. That's a very important piece of information for resolving the issue, and resolving it fast, because now I know I need to check whether the increased traffic is related to service B, or, if I know it's specifically service B, maybe a deployment in service B is what caused the issues in service A. Basically, once we get more data points describing the whole story, things start to get easier. So to resolve this issue fast I needed all three: logs, metrics and traces.

I've said "traces" quite a few times, so let me give you a visualization of one. Here you can see a UI that displays a trace, specifically the Jaeger UI. You can see that we have an order service; it got an API call to /purchase/order, and the whole thing took a bit over two seconds. The first thing the order service did was communicate with the user service to verify the user. Once that was done, the order service communicated with the stock service and asked it to update the stock, and eventually the stock service ran a Mongoose query (Mongoose is a MongoDB library) to update many of the stock items. Only when that was done did we publish a new order Kafka message. So basically I told you a story: the story of how a particular API call unrolled through the different services. I was able to tell you not only which services it visited, but also whether things happened in sequence or in parallel. And it's a kind of data tree, a tree structure where we can see what the root was and what happened because of that root. So now we're a bit smarter about what traces are.

We're talking about these three things together. The logs: the application itself telling us a story, outputting messages we can read as humans to understand what went on in that particular application. The metrics: statistical data about our system, what the CPU was, how many API calls we got. And the traces: the context, the story, the path that each event took. These three are usually referred to as the three pillars of observability. Observability is a newer term, a trend, basically trying to say how we should monitor distributed, cloud-native applications. Using only logs, metrics and an APM with a distributed application is usually not enough, and this is why we're seeing the rise of observability, which is essentially those three things together: logs, metrics and traces.

So what is OpenTelemetry? I said this is the OpenTelemetry bootcamp, but I haven't mentioned OpenTelemetry in the past few minutes. OpenTelemetry, in some sense, is the three pillars: the ability to collect logs, metrics and traces. You could argue and say, hey, I'm collecting logs today, right now as we speak I have logs and metrics, I just need traces, so I'll use OpenTelemetry only to collect traces. But the magic really happens when they are all correlated together, when the traces are attached to the logs and the logs are attached to the traces, and everything is glued together, so that when you need to resolve some issue,
moving from logs to traces and from traces to metrics is fast and very efficient. So OpenTelemetry, for me, is the glue. OpenTelemetry is basically the ability to collect all three pillars under one unified SDK. The SDK is a big part of it; there are a few more parts that we'll see in a second. OpenTelemetry is a project under the CNCF, the Cloud Native Computing Foundation, which is also responsible for Kubernetes, so OpenTelemetry is in good hands. If you look at the project structure, at least the main part of it, there is one specification for how data should be collected and sent, and then there are implementations in different programming languages, because it's an SDK and we need to support all kinds of languages. The modern ones are supported, of course: Node, Java, Python, Go, Ruby; any modern language is probably supported. So, in a very short explanation, OpenTelemetry is the three pillars glued together as an SDK, under the CNCF, with one specification so the implementations can communicate with one another.

Okay, cool. Now that we have a bit of a sense of what OpenTelemetry is, we need to talk about how it works, and I'm going to do that in three parts. The first describes the OpenTelemetry stack: what you need to deploy in order to have OpenTelemetry up and running. Then we'll take a deeper dive into how the OpenTelemetry SDK, the portion you implement in your code, actually works. And then we'll do a live demo and show you a real application collecting logs, metrics and traces, to give you a sense of how it looks.

So we said OpenTelemetry is an SDK that collects all three pillars. That's great, but that collection happens in memory, in process, and it needs to be shipped somewhere. So we're going to have some backend that ingests the traces, logs and metrics we just collected and stores them in a database. And once we have something in the database, we need to query it and visualize it, and that's our last layer. So we're mostly talking about three layers. From a diagram perspective it looks like this: at the top we have three microservices, all with OpenTelemetry installed, collecting logs, events and metrics. As those are collected they're exported to an OpenTelemetry Collector. The Collector is a backend that can ingest this telemetry data and process it, doing all kinds of things that we'll deep dive into later in this course. Once it receives the telemetry data it ships it somewhere, usually a database, maybe a message broker and then a database, but at the end of the day it's stored in a database. And once everything is stored we need to visualize the data; in this course we'll use open source tools, specifically Jaeger, though there's an endless number of vendors that know how to work with this data as well. So if you start with OpenTelemetry tomorrow, you'll start by implementing it in your code, then sending the data somewhere that stores it and visualizes it.
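To make that first hop, from the SDK in your code to a collector, a bit more concrete, here is a minimal sketch in Node.js/TypeScript. The collector address and the exact package names are assumptions (the JS SDK packages have been renamed over time), and the demo later in this session actually exports straight to Jaeger rather than through a collector:

import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';
import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';

// Spans finished by the SDK are batched and shipped to a collector that is
// assumed to be listening on the default OTLP/HTTP port.
const provider = new NodeTracerProvider();
provider.addSpanProcessor(
  new BatchSpanProcessor(
    new OTLPTraceExporter({ url: 'http://localhost:4318/v1/traces' })
  )
);
provider.register(); // also installs the default W3C context propagator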
Okay, that's cool. We can proceed to the next phase: now that we know what the stack is, we can deep dive into the SDK itself and understand how it works. I mentioned four layers, that is, the SDK in your application plus those three backend layers. I'll say that as we progress in this course we'll see we can remove some layers or add some, to simplify things or make them more complex, but in a nutshell, at least at this phase of the course, let's treat it as four layers.

Okay, so you take the first step: you pick some victim service in your application where you're going to try to implement OpenTelemetry. Let's say this is an example architecture with a front end communicating with service A, which communicates with service B, which eventually writes data to a database, and both service A and service B have OpenTelemetry installed. Now, when we think about how a trace looks (remember the order service communicating with the user service), it's a kind of tree, and in order to have a tree we need a parent-and-child relation. That means any data sent from service B needs to be under the context, as a child, of what happened in service A. For that reason, when service A sends an API call to service B, it injects some headers into the HTTP headers, and those help service B understand who the parent is, what the cause was, what led up to this API call. So we know that service B is the child of service A in this particular case. Then OpenTelemetry sends data to our collector, saying: hey collector, you should know that I sent an API call to service B, and this is all done under trace id number one, because we want to be able to find this trace later on. Then service B says: hey, I got an API call from service A, so now I do have a parent, and the parent is service A. From that point, any data sent from service B has that parent. Then of course service B sends data about the DB query, which again has service B (and service A above it) as its ancestors, and so it goes on; I could keep the same kind of visualization and just keep adding more data to it.

Every interaction we had, like service A sending an API call, is represented by a span, a single event within the trace. A span, as the name suggests, spans over time: it has a start point and a duration, or a start and an end time. You can see that the first span has no parent, because it's the first one, the root, and its span id is 55. When we look at the span whose id is 66, it does have a parent (even though I made a mistake on the slide), and the parent is 55. This is how we're able to construct the structure of the trace.
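To make that parent-and-child mechanism concrete: with the HTTP and Express auto-instrumentation used later in the demo, this header injection happens for you, but doing it by hand with the OpenTelemetry API makes it visible. A small sketch, assuming a tracer provider with the default W3C trace-context propagator has already been registered:

import { context, propagation, trace } from '@opentelemetry/api';

const tracer = trace.getTracer('service-a');

tracer.startActiveSpan('call service B', (span) => {
  const headers: Record<string, string> = {};
  // Writes a W3C `traceparent` header into the carrier, roughly
  // "00-<trace id>-<span id of this active span>-01".
  propagation.inject(context.active(), headers);
  console.log(headers.traceparent);
  // An HTTP client would send `headers` along with the request; service B's SDK
  // extracts them, so the span it creates becomes a child of this one and
  // carries the same trace id.
  span.end();
});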
Okay, so I've talked quite a lot of theory; now let's take a deep dive into real code and see how things look in real life. Let me jump to our code and spin up two services: one called items and another called users, started with yarn. I'm going to send two types of API calls: /data, which returns some data, and... you know what, let's leave it at that. So I sent one API call, and now I want to visualize the traces, so let's search for it, and we can see our very beautiful trace. That's what I'd experience if I were having an issue with this API call and trying to debug it; that's how a trace looks. To give you a bit of context, let's jump to the code so you can see what it looks like. I'll go briefly over the application-layer code, then show you a bit of Jaeger and Prometheus (Prometheus is what I'll use to show how many API calls we got), and then I'll look at how the OpenTelemetry implementation works. I will say one thing: in this first session we are not going to deep dive into OpenTelemetry a lot; that happens in the second session.

Okay, let's look at the item service. This is, by the way, Node.js with TypeScript; you don't need to understand Node.js or TypeScript to understand OpenTelemetry, it's just the programming language I use. Let me enlarge the picture so it's easier to see. The first thing we do is initialize OpenTelemetry: all you need to do to have OpenTelemetry, at least the basics of it, is import some file and initialize it. The file itself is not that long, about 42 lines of code including blank lines, so it's a very simple implementation, and we'll dive into it in a second. Then we create an Express application, which is a RESTful service, and we listen on /data as you saw earlier in the browser. Basically, if we didn't get a fail flag in the query string, we call the user service, and if that's successful we just return the user service's data; if it's not successful we write some error logs with all the data we need. Also, for every API call we add one to our counter, the HTTP calls counter, so you can see here a very quick view of how we work with metrics.

The user service is also a very simple service. Same thing: import the tracer, initialize some Express stuff so we have a REST service, and when we get a request to /user we use a mock API that I set up, get some mocked user data, randomize an index, save what we randomized, and respond. So if I go back to the browser, you can see we sent an API call to the item service, the item service communicated with the user service, and everybody is happy.

To give you an example of how it really looks: if I look at the trace itself, I can see that the item service got an API call to /data, I can see everything right here, I can also see that it sent an API call to /users as we saw in the code, and I can see that we communicated with the mock API and got the data we wanted. We can see how long everything took; nothing runs in parallel, everything is in sequence, and it's quite fast. By the way, if I asked you a question about this particular trace: if I want to improve this trace in terms of performance, what do I need to improve, what is the thing that makes the customer wait the most? Usually, with traces, you need to work your way up: this span takes a long time, it's the call to our mock API, and it's the thing that blocks most of my I/O.
Basically, if I were able to improve that, the whole trace would improve, again from a performance and latency perspective. Okay, so we've got a bit of a sense of how this thing looks. I do want to show you what happens when I add an error. I have a fail flag, and with fail=1 I now get an internal server error. If I go and search again in Jaeger to find the trace that failed, I can see that there was no call to the user service, I have an error=true tag, and I can also see the error details right here. And if I go to Prometheus, I can see that the count of API calls keeps increasing, because we're counting API calls and we're getting more and more of them.

Okay, so now that we have a bit of an understanding of what the application does and how our monitoring tools work, let's dive one step further into how this thing works and take a look at the tracer file. Again, just to emphasize: this is not a deep dive into OpenTelemetry. There is a lot of setup and a lot of moving components in the SDK, different names you may see, like provider and exporter, that we really need a deep dive to understand. But I do want to give you a sense of how it works, so next time we can dive right in: next session I'll assume you've already seen this code, and we'll extend it and build more understanding of what a provider is, an exporter, a processor, and all those kinds of things.

The first thing happening here is that we set up our metric configuration. When you say metrics, you need to send them somewhere, and I'm going to use the Prometheus metric exporter, which means our metrics are shipped to Prometheus. I'm defining a metric port here: if you look at the user service, I provide one port for scraping, and the item service has a different port for scraping the metrics. In case you don't know, the way Prometheus works is that Prometheus scrapes the data from the service, so you need to expose some API endpoint that Prometheus can fetch the data from. To show you how it looks: the URL was port 8081 and /metrics, and that's how Prometheus works, it sends an API call to /metrics and that's how it gets the metrics. By the way, if I look in Jaeger, you can see our error here, but we also have a bunch of very small, very short traces, and those are the traces for /metrics: the API calls that Prometheus sends in order to fetch the metrics are visible right here in Jaeger as well. Once the exporter is up and running we write the scraper's address to the logger, so you can see the scraper and its address, and then we ask it to collect and export the metrics every second; in production I'd probably use something less frequent, but this is a demo and I want things fresh and up to date. After that we define where the traces are exported, we specify the name of the service, user-service or item-service, and then we register instrumentation for HTTP and Express, so that we collect the data coming into Express via the REST API as well as the outgoing API calls, the ones sent from the item service to the user service and from the user service to the mock API.
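Roughly, an init file along those lines could look like the sketch below. This is not the exact file from the repo: package names and constructor options have shifted between JS SDK versions (this assumes roughly the 2021-era packages), and the port and interval are just the demo values:

// tracing.ts: initialize OpenTelemetry before the rest of the app loads
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';
import { SimpleSpanProcessor } from '@opentelemetry/sdk-trace-base';
import { JaegerExporter } from '@opentelemetry/exporter-jaeger';
import { PrometheusExporter } from '@opentelemetry/exporter-prometheus';
import { MeterProvider } from '@opentelemetry/sdk-metrics-base';
import { Resource } from '@opentelemetry/resources';
import { SemanticResourceAttributes } from '@opentelemetry/semantic-conventions';
import { registerInstrumentations } from '@opentelemetry/instrumentation';
import { HttpInstrumentation } from '@opentelemetry/instrumentation-http';
import { ExpressInstrumentation } from '@opentelemetry/instrumentation-express';

export const init = (serviceName: string, metricsPort: number) => {
  // Metrics: expose a /metrics endpoint on `metricsPort` for Prometheus to scrape
  // (8081 for the item service, a different port for the user service).
  const metricExporter = new PrometheusExporter({ port: metricsPort }, () => {
    console.log(`Prometheus scrape endpoint: http://localhost:${metricsPort}/metrics`);
  });
  const meterProvider = new MeterProvider({ exporter: metricExporter, interval: 1000 });

  // Traces: tag every span with the service name and ship finished spans to Jaeger.
  const provider = new NodeTracerProvider({
    resource: new Resource({ [SemanticResourceAttributes.SERVICE_NAME]: serviceName }),
  });
  provider.addSpanProcessor(new SimpleSpanProcessor(new JaegerExporter()));
  provider.register();

  // Auto-instrument incoming Express requests and outgoing HTTP calls.
  registerInstrumentations({
    instrumentations: [new HttpInstrumentation(), new ExpressInstrumentation()],
  });

  return meterProvider.getMeter(serviceName);
};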
Cool. Looking at our item service, there isn't a lot of OpenTelemetry API usage here; there are really two things we do with OpenTelemetry. If everything works correctly and we don't have any failure, looking at the code being executed, we're in the GET /data handler, it isn't failing, we send a response, and that's it: you barely see any OpenTelemetry code here, because everything works fine and we don't need to do anything special, and the trace would look as you saw. But when things fail and we have an exception, this is where things get a bit more involved. I wanted to be able to correlate between my logs and my traces. To do that, when I have an exception I write a console.error saying I have some failure, and you can see this failure right here. If I just printed "critical error" from my application, I wouldn't be able to know which trace it belongs to, so for every log I write, I attach the trace id, and that's what you can see right here. Basically I go to the OpenTelemetry API and ask it to give me the current span, the active span; remember, every line here is a span, and this is happening in the item service and in the user service, so we have spans that are currently active. Once I have the active span, I can write the current trace id into my log. So imagine the next time you have a production issue: you have an exception in your logs, and in that log you have the relevant trace id, so you can grab this trace id, search by it, and you've found the trace that's causing all of your issues. The ability to jump between a trace and a log, and between a log and a trace, is very simple and very fast.

I also have an implementation of metrics here, which I kind of showed you already. We'll talk about metrics in much more detail later, but essentially you create a meter, then you create a counter, you give it a name, and you can increase or decrease it. Every time we have an API call this runs again, it's a kind of middleware, and it eventually ends up in Prometheus, where you can see the number of calls we're getting and track it from there. When the API call is sent to the user service, which happens right here, I didn't need to do anything special with OpenTelemetry, I didn't need to write any code to be able to see it in both the user service and the item service; that happens automatically, and it's of course something we'll learn how it's done.
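A sketch of what that item-service handler could look like, reusing the hypothetical init() from the tracing sketch above; the route, ports, counter name and user-service URL mirror the demo but are assumptions rather than the exact repo code:

// items.ts: the tracing init runs first, before express/http are loaded,
// so the auto-instrumentation can patch them (assumes CommonJS-style loading).
import { init } from './tracing';
const meter = init('item-service', 8081); // service name + Prometheus scrape port

import express from 'express';
import axios from 'axios';
import { trace, context } from '@opentelemetry/api';

const httpCallsCounter = meter.createCounter('http_calls');
const app = express();

app.get('/data', async (req, res) => {
  httpCallsCounter.add(1); // one more data point for Prometheus on every request
  try {
    if (req.query.fail) throw new Error('really bad error');
    const user = await axios.get('http://localhost:8090/user'); // user-service URL is assumed
    res.json(user.data);
  } catch (err) {
    // Correlate log and trace: grab the active span and attach its trace id to the
    // log line, so the id can be pasted straight into the Jaeger search box.
    const activeSpan = trace.getSpan(context.active());
    console.error('critical error', { traceId: activeSpan?.spanContext().traceId });
    res.sendStatus(500);
  }
});

app.listen(8080, () => console.log('item service listening on 8080'));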
In the user service I did something which I think is interesting and important. If everything works and I refresh here, you can see I get Harry Potter and then different names each time; it's random, because I take a random number from zero to the length of the array. If I look at the trace, let's find the latest one, I have no idea why the user service chose to return this particular user; it's completely random. I'm not sure you can see it, but in the right corner of the screen there's a small icon indicating that there was some log: I was able to write a log within my trace, and this log says, hey, a number was randomized, and the random number that was generated is one. So if I'm a developer and I need to figure out why this user, which seems so random, was returned, now I have my answer, because I know that index number one was generated. Again, I get an API call, I grab the active span, and on the active span I add a new event. An event can have a name (well, it must have a name), and it can also carry as much data as you want, so you can treat it as just more info attached to the span and the trace, or you can treat it as the real exception, the real data that we got. This is extremely helpful, because it gives you the opportunity to put everything you want into the story.

Michael, we actually have a question from the audience, related to what you touched on earlier regarding metrics: is there a way to filter out the metric calls from the traces list?

Yeah, definitely, it's a great question, because you don't really care about the /metrics calls, even though it's cool that they're there. It's up to you to decide what's collected and what's not, and this is something we'll learn exactly how to do when we work more in depth with OpenTelemetry. Basically, the processor phase is where you can decide to omit some traces because they're /metrics calls; other things you'd probably want to omit are health checks, which can be very noisy, or bots hammering your application. Cool, okay, thank you.

One last thing I wanted to show you before closing this session. You can see that when we have an error, I take the active span, I write the trace id to our log, and I also record the exception. The reason is that when we have a failure (let me make it crash), I'm able to see that I had an error right here; that's great, that's cool, but I want to know exactly what the error was. So I can see here that the event type is exception, the event message is "a really bad error", and here I can see the stack trace, so I'll be able to debug it and fix it.
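The two span APIs used in those last examples, addEvent and recordException, look roughly like this; the event name, the user list and the error message are illustrative stand-ins for the demo's mocked data:

import { trace, context } from '@opentelemetry/api';

// User service: record which random index was picked as an event on the active span,
// so the trace itself explains why this particular user was returned.
const users = ['Harry Potter', 'Hermione Granger', 'Ron Weasley']; // stand-in for the mocked data
const index = Math.floor(Math.random() * users.length);
trace.getSpan(context.active())?.addEvent('user index randomized', { index });

// Item service: on failure, record the exception on the span (it shows up in Jaeger as an
// "exception" event with type, message and stack trace) and log the trace id alongside it.
try {
  throw new Error('a really bad error'); // stand-in for the failing call
} catch (err) {
  const span = trace.getSpan(context.active());
  span?.recordException(err as Error);
  console.error((err as Error).message, { traceId: span?.spanContext().traceId });
}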
Okay, so our goal for this live session was to send traces to Jaeger and view some logs, and we got that done; to send metrics to Prometheus, and we got that done; and to correlate traces and logs, and we have that working. If you're interested in the code, you have everything in our repository: just go to aspecto.io and find the OpenTelemetry bootcamp page, where you'll have a link to the video recording that's being made right now, the slides, the Slack channel, and of course the source code itself. In the next session we're going to implement everything we did here, but in more detail, and I think the question we got about removing the /metrics traces is exactly the kind of thing we're going to do, because we're looking at the real-life issues we face when working with OpenTelemetry.

So we already have a first question, a general question about OpenTelemetry: how does OpenTelemetry compare to regular application monitoring, to APMs like AppDynamics and Dynatrace? What is the real difference? Is it just about how logs and metrics are correlated, or is there anything else to it?

Yeah, great question. OpenTelemetry by itself is the method for collecting data and distributing it; it's not about what you do with that data. You do have a UI to visualize the data, but that's just the tip of the iceberg: you can see the trace. Companies like Dynatrace and AppDynamics are basically trying to solve the problem that OpenTelemetry collects data in order to solve, which is fixing issues faster. These companies used not to work with OpenTelemetry and are now migrating to it; any APM tool you know out there is making some degree of effort to migrate to OpenTelemetry. I think the question, from the consumer side, is: I want to be part of the community, I want to be part of the standard, and I want the flexibility that comes with working with a standard. That means you'll be able to collect OpenTelemetry data and then ship it wherever you want: you'll be able to ship it to your favorite APM tool, but you may also persist part of it yourself for other purposes, or, say, ship the traces to one vendor because it's very good with traces and send the metrics and logs to another one because it's good with those. It gives you that flexibility, and in a sense OpenTelemetry is going to change a bit how we use APMs as we know them today.

Right, so in the past each APM had its own proprietary capabilities for collecting data, proprietary agents and ways to deploy, so to some extent OpenTelemetry is commoditizing the data layer, and now it's more a question of what you do with this data, and each vendor probably has its own ways of doing that. Maybe another interesting thing that we'll touch on later in these sessions, probably in the last ones, is what else you can do with this data, because it's not just about an alert in production; it's actually a very powerful data layer that can be used for different purposes.

Okay, cool. Just a few last things before we close: please join our channel in the CNCF Slack, the OpenTelemetry bootcamp channel, and if you run into any questions, feel free to ask them there and I'll be happy to answer. There are a few more resources you can find, mostly about how to run Jaeger locally and how to run Jaeger on AWS; we run on AWS, which is why we chose to write about it, so if that's interesting to you, definitely check it out. As I said, the next episode will be mostly about integrating OpenTelemetry into our code.

All right, cool. Michael, thank you for that, it was very interesting. What is it, 53 minutes? Wow, that went really fast. Thank you very much, and thank you to everybody who joined us. As mentioned, this is going to be available online, so you can access it whenever you want. The Slack channel is there if you have questions outside of these sessions, or if there's anything in particular you'd like us to cover in future sessions, so be my guest and reach out. We're here to promote OpenTelemetry and the community. Thank you again, we'll see you at our next session, have a good rest of the day, bye bye everybody.
Info
Channel: Aspecto
Views: 671
Id: UEwkn0iHDzA
Length: 54min 19sec (3259 seconds)
Published: Wed Sep 22 2021