Distributed .NET Core (DShop) - Episode 9 [Vault secrets, Seq logging, Jaeger distributed tracing]

Captions
Hey, welcome to the ninth episode of Distributed .NET Core. It's Piotr and Darek, and today we'll talk about a few useful tools. First we'll cover storing your credentials, connection strings and passwords in a secure and centralized way, and later on we'll go through the logging and distributed tracing topics.

So I think we should start with a very basic question about storing credentials: why should we care about this at all? Why can't we just put the credentials, like connection strings or passwords, simply in appsettings like we have here? Well, because it's certainly not secure. Even if you are using an internal, private repository, for example some internal GitLab, for storing your source code, if you keep production passwords or connection strings to your production databases there, you certainly don't want every developer working on this code to see those passwords and have potential access to your databases, even if there is some firewall in between.

What you could do instead is keep these settings on the server side, so whenever you deploy your app there are environment variables on the server, which is more secure. But then comes the problem, especially in the microservices world: since you will be running tens, hundreds or even thousands of instances of your applications, a small change to a connection string or a private key forces you to update all of those instances. Of course you could do some scripting here, but it can be cumbersome. I've also seen people use their build server together with a deployment tool like Octopus Deploy, and the deployment tool is responsible for injecting the correct credentials, passwords and private settings into those environment variables. That's pretty good, but again, if you simply change a password, instead of just restarting your application or having it call some service to fetch the updated password, you have to go through the whole redeployment process once more, just because you changed a single value in your settings.

Yeah, I agree, and of course there are lots of strategies when it comes to storing your secrets, or let's call it sensitive data, and it depends on a lot of factors. As Piotr said, you can dynamically inject environment variables using your build server, you can use an external tool like Vault, which we are going to present, or if you're running your microservices on Kubernetes you can use its native secrets, so it really depends on the situation in your current project. And if you host your services in some cloud like Microsoft Azure or AWS, you can usually find tools there that solve this issue, for example Azure Key Vault (you may want to switch the page language first, the browser translates it by default). But in this whole course we'll stick to
cloud-agnostic tools, mostly free and open-source ones if possible, because we don't want to tie ourselves to a single provider. So today we'll present Vault. Vault is developed by HashiCorp, the same company that created Consul and other tools, and you can use it for free.

Let's see how to get started with Vault. If you go to our docker-compose directory (which was updated, by the way: RabbitMQ and Redis are our core infrastructure, and here are Consul and Fabio), you can see that the consul-fabio-vault compose file runs this vault image. Let me quickly show how it looks: you just pull the Docker image called vault and run it, and we run it in development mode, so here is our secret token. Bear in mind that this development mode is not really for production use. If you want to deploy Vault to production, you can look into the docker-images file that we created; there you can find the vault section, with a vault server and a few steps for a proper production deployment. By default Vault integrates with a lot of different tools, cloud providers and databases, and in this scenario it can use Consul as a backend, so you can use the service registry as the backend for storing Vault data in production.

One remark here: things have changed a little since we recorded the episode about Consul, because there is now a new version of Consul, the service discovery provided by HashiCorp, and they have turned it into a full service mesh; I think it was version 1.4 or so. Vault is now part of this Consul service mesh and is responsible, for example, for certificate authorization and management. So we were a bit unlucky to record this before they introduced the service mesh, but just so you know: if you want to stick to the HashiCorp stack, it will probably look a bit different for you, and you'll get some of this out of the box.

Okay, so the Vault UI looks pretty similar to the Consul UI, it's the same style of user interface, and I've just logged in using the token method with this secret, which is of course the setting defined here. Keep in mind that the token login should be used only by the root administrator and probably shouldn't be used by your applications; for the applications you should use another authentication type. Let me log out. For your apps you could simply define a username and password and create a separate account for each service, instead of using the token, which is really
available only to the administrator. Once you log in, you can see there are different types of secret engines; by default we'll use this secret engine, but you can create a new one, and as you can see there are quite a few integrations already in place. You can also go to Access and Policies. Under Access you could enable a new authentication method, for example username and password, or other authentication types; there are even integrations like Kubernetes, LDAP and so on. Under Policies you can define different kinds of policies, for example that a user with this username, or belonging to this group, can only read these secret settings, so you can define pretty much everything here, and it supports hierarchies as well. I think that, at least for starters, you should create a username and password for each of your applications, add those credentials per service, and give those users read-only access to their particular settings; that would be a pretty good start.

Okay, so let's get into our secret engine. I can simply click here, and as you can see there is nothing for now; this is just a path, because just like with Consul you can use Vault either through the CLI, which under the hood uses the REST API, or with plain HTTP calls. I'll click "create secret", and there are two important things here. The first one is the path for the secret, and the path is simply the REST endpoint, so I can type, let's say, discounts, because I want a secret for our beloved Discounts service. In the Discounts service we have appsettings with a section called "app", and if I take a look at the HomeController, I'm not using it there yet, but I will, so you can see how to actually use Vault. We don't have AppOptions injected here, but that's no problem, because I should be able to simply inject IOptions<AppOptions>; let me just add the references. (Are you sure you didn't register this as a single instance? Without the IOptions wrapper I could probably inject it directly, but let's stick with the default behavior.) So I've injected my app options, and instead of returning the hard-coded value I'll return the app options Name, which should be the Discounts service name coming from appsettings, roughly like the sketch below.
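A minimal sketch of the controller change described above, assuming an AppOptions class with a Name property bound from the "app" section (the actual DShop types may differ slightly):

```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Options;

public class AppOptions
{
    // Bound from the "app" section, e.g. services.Configure<AppOptions>(Configuration.GetSection("app"))
    public string Name { get; set; }
}

[Route("")]
public class HomeController : Controller
{
    private readonly AppOptions _appOptions;

    // IOptions<T> is populated by the configuration system at startup.
    public HomeController(IOptions<AppOptions> appOptions)
    {
        _appOptions = appOptions.Value;
    }

    [HttpGet]
    public IActionResult Get()
        // Returns the service name loaded from configuration instead of a hard-coded string.
        => Content(_appOptions.Name);
}
```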
So let me start the Discounts service and open the base endpoint, localhost:5009. It should load in a minute... okay, we got an error about activating a particular registration. If we had a logger right now it would be easier, but we'll get to the logging part later in this episode. It's something with OpenTracing; we'll get to that part in maybe 25 minutes, but let me quickly add the packages now, just to check whether that's what caused the issue: one package here, and one more coming from NuGet that we'll talk about later, so dotnet add package, and AddOpenTracing in the startup. I didn't predict this would happen. Nobody did. But you see, this is exactly why you need monitoring, logging and the other cool features we'll show you, because these things do happen. Okay, one more time, let's start it up. Cool, so it's localhost:5010 now and it should return the "discount service" name coming from appsettings, and it did, which is quite cool.

Now let's say I want to change this value, and let's imagine it's some secret password that we don't want to keep in our repository. I don't want to store it here, and I don't want to put it in environment variables either, because if I update the setting I'd have to redeploy the application to all of the instances, all the virtual machines. So let's do it with Vault. The first thing we need is the secret path; it can be something like discounts, or discounts-service/settings, or whatever, it's just an endpoint. Then we can provide either key-value pairs or a JSON document, which is cool: here is the JSON editor, and because this works out of the box with ASP.NET Core you can override a single section, the whole file, or just a few parts of it. Let me paste the app section here and change the name so it's clear it came from Vault. I save my settings, and if I go back, here is my path, the endpoint for the settings; if I wanted to update it I could create a new version, because by default Vault stores versions of your secrets.

Now let's see how to actually consume it, so I can keep my options secure, and it's quite simple to get up and running. We go to Program.cs and add UseVault(). If you look at what it does, it basically reads the appsettings section called "vault", which you can later override with environment variables, because what's the point of keeping the Vault credentials in appsettings when Vault is supposed to be our credential storage and we don't want to compromise it? So instead of keeping these values in appsettings, which we do here only for the demo, you should keep them in environment variables and assign the Vault settings during deployment. Under this vault section you can find the variables you can use: the URL, the secret key, the authentication type, whatever you need. Under the hood we just use an already available Vault library written for .NET Core and wrap it, like we did for most of the tools, in the DShop.Common package. The second part is that we use the JSON parser borrowed from the ASP.NET Core GitHub repository (thanks for that): UseVault asks the Vault store for the settings under our key, which is this endpoint, parses the returned JSON, and adds it to an in-memory configuration source returned by the builder, so it gets loaded together with appsettings and ASP.NET Core can bind these settings once the application starts. Conceptually it's something like the sketch below.
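A rough sketch of the idea behind UseVault(), not the actual DShop.Common code. IVaultSecretProvider here is a hypothetical stand-in for the Vault client library; the real implementation also handles userpass authentication and uses the JSON parser mentioned above.

```csharp
using System.Collections.Generic;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;

public interface IVaultSecretProvider
{
    // Returns the secret stored under the given key as a flat
    // "section:subsection:key" -> value dictionary.
    IDictionary<string, string> GetSecret(string url, string token, string key);
}

public static class VaultExtensions
{
    public static IWebHostBuilder UseVault(this IWebHostBuilder builder,
        IVaultSecretProvider provider)
        => builder.ConfigureAppConfiguration((context, configBuilder) =>
        {
            // Read the "vault" section from appsettings / environment variables.
            var config = configBuilder.Build();
            if (!config.GetValue<bool>("vault:enabled"))
            {
                return;
            }
            var url = config.GetValue<string>("vault:url");
            var key = config.GetValue<string>("vault:key");
            var token = config.GetValue<string>("vault:token");

            // The fetched secret is merged in via an in-memory configuration source,
            // so it overrides the matching appsettings.json sections at startup.
            var secret = provider.GetSecret(url, token, key);
            configBuilder.AddInMemoryCollection(secret);
        });
}
```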
All right, so the last thing needed to actually use Vault is to add this section, so if I scroll down in any of the services I can again copy and paste it (most of our episodes are like this, copy and pasting things, and that's a good thing) and place it here. I set vault enabled to true, here is the URL to my Vault, then the key, which will be discounts-service/settings, so this is my Vault endpoint, and the authentication type; for the demo I'll use token, and the token will be "secret". We also added userpass, so you can switch to username and password, and you can plug in more authentication providers on your own; for now we simply added token and userpass, so feel free to refactor this code and make it even better.

Let's give it a try, I'll start the Discounts service once again... at this point it should be asking Vault for the secret... hmm, maybe the problem is in the app name: it's discounts-service, not discount-service, so let me quickly fix this, update the app name, create the secret once again under discounts-service/settings and paste the JSON. So at this point, what it does is take the data from appsettings.json; in this case we specify that we want to authenticate using the token (again, for development purposes only), it looks up the settings using this key, asks Vault for the secret, and if it doesn't find the secret it fails, as you can see. Now I have it loaded correctly, and if I refresh the app you can see my Vault settings are here. So that's an easy and very secure way to store secrets, and like I said, instead of keeping the Vault credentials here you should use environment variables on the server side; for example you could specify these variables in a Dockerfile and then just add the correct key on your server or during the deployment process, so instead of keeping all of your settings you only keep the Vault key, or a username and password, on the server.

And that's pretty much it about Vault. If you were wondering what this IPC_LOCK thing is: it prevents Vault from swapping the data it keeps (securely) in memory out to your disk drive, where it could be compromised, so Vault has a lot of security built in. You could also click here in the UI, or use the CLI (here it is, by the way), and seal the Vault; then no one will be able to fetch any data from it or update it, and only the root administrator will be able to unseal it by providing the parts of the master key.
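For reference, the call that UseVault (or a future "poll Vault for changes" extension) ultimately makes boils down to a plain HTTP request authenticated with the token. A minimal sketch with HttpClient; the path below assumes a KV v1 secret engine mounted at "secret/", while for KV v2 (the default in recent dev-mode Vault) the path would be "v1/secret/data/{key}" with the values nested one level deeper:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

public class VaultHttpClient
{
    private readonly HttpClient _client;

    public VaultHttpClient(string vaultUrl, string token)
    {
        _client = new HttpClient { BaseAddress = new Uri(vaultUrl) };
        // Every request is authenticated with the Vault token header.
        _client.DefaultRequestHeaders.Add("X-Vault-Token", token);
    }

    public async Task<string> GetSecretJsonAsync(string key)
    {
        // e.g. key = "discounts-service/settings"
        var response = await _client.GetAsync($"v1/secret/{key}");
        response.EnsureSuccessStatusCode();

        return await response.Content.ReadAsStringAsync();
    }
}
```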
That's pretty cool, and it works everywhere. One more thing, because I'm not sure we spelled this out while explaining the whole process: you specify all these settings inside Vault under some key, and to be clear, the settings are taken from Vault and this section is injected into, or rather overrides, the appsettings section, and this happens while the application is bootstrapping. Exactly, that's how it's done for now, but you could do it even better: your app might simply ask Vault over HTTP (a call like the one sketched above) whether there are new settings, maybe every five seconds, and then you would have dynamic updates, so you wouldn't even have to restart the app, it would grab the updated settings from Vault automatically. The current implementation does not update dynamically; once you change the secret, the app keeps the version that was loaded at startup. But you can easily extend it, so feel free, just fork the Common package. And I think that's all about Vault.

Now let's get into another useful tool, this time for logging; we'll talk about the whole idea behind logging, metrics, tracing and so on partly in this episode and partly in the next one. If you want to start with logging and you don't feel like you need the whole Elasticsearch stack, and you don't want one of the many solutions like Datadog, Raygun and a few others hosted somewhere in the cloud, but you'd like something simple that is just for logging, a pretty cool tool is Seq. The website is getseq.net. Seq is not entirely free, there is a one-year license you have to buy, and it doesn't cost that much, but you can use it for free for individual use as a single developer; they have a free license for that as well. You can start Seq from Docker, and we'll show you how to use it: it's a pretty nice logging system where you send your logs to Seq and it gives you a dashboard to filter and browse them.

To get started with Seq, take a look at our docker-images file once again; under the seq section you can run it like this, and if you want to attach a volume you can pass the -v flag here, just make sure you have a Docker volume with this name created. So I have my Seq started and it's available on this port, so let me open 5341, and before we do any logging let me quickly guide you through the UI. Oh, I have some old logs here already that I didn't clean up from a previous session. So here are some old logs, and here is the dashboard, where I can look at graphs like all events or all exceptions during some period and so on. I can also go to the settings, which is quite useful: here I can define API keys (you can imagine each of your services having a unique API key for sending its logs), you can set up backups, and you can set the retention, for example I could add a policy that automatically removes my logs every seven days, so I
don't have to care about removing them myself, which of course I could still do. What is also cool, they even have an Apps section, so you can add plugins to Seq and extend it. All right, but let's see how we can actually start logging.

Let's say we want to add logging to our service (you do have the Docker container up and running? yeah, okay). I will just call UseLogging(), and again, UseLogging is our common extension method. By default it sets up Serilog, because Serilog is pretty cool: it's quite performant, some benchmarks say it beats NLog by more than two times in certain scenarios, and it's all about structured logging, which is a nice topic on its own. Serilog, just like NLog or the .NET logging extensions, can write to different and multiple targets, like databases, files, the console and so on; for example here we could log to Elasticsearch, here to the console, and here to Seq, and for simple Seq logging I just pass the URL and the API key.

One more thing about Seq: if you look at the ports (docker ps, or even better docker inspect), Seq by default exposes two ports. Port 80 is really the UI, and you can also push logs to it, but it additionally exposes port 5341, which acts purely as a log ingestion endpoint, so your app could use only that port; you don't have to use the same port as the UI, you could close access to the UI entirely and only expose the port that consumes the logs your app posts to Seq.

So here is UseLogging, which is Serilog plus these extensions; we also add enrichment, so that for example the environment and the application name, if provided, are attached to every log event. Once again I'll go to appsettings and simply copy these two sections, the serilog section and the seq section, and this is something you could store in your Vault, because for Seq you have credentials like the API key, so it's a good candidate (and this value has to be removed from the repo). For Serilog I simply set whether I want the console sink enabled or not and the minimum level, and for Seq I set the URL (like I said, if I expose that private port in Docker, it acts purely as the log consumer) and my API key, given that I configured the API key in Seq. And I think that's pretty much it; the whole setup boils down to something like the sketch below.
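A sketch of what a UseLogging() extension along these lines could do; the option names mirror the serilog/seq sections shown in appsettings.json, but the exact DShop.Common implementation may differ.

```csharp
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Serilog;

public static class LoggingExtensions
{
    public static IWebHostBuilder UseLogging(this IWebHostBuilder builder,
        string applicationName, string environment)
        => builder.UseSerilog((context, loggerConfiguration) =>
        {
            var seqUrl = context.Configuration.GetValue<string>("seq:url");
            var seqApiKey = context.Configuration.GetValue<string>("seq:apiKey");
            var consoleEnabled = context.Configuration.GetValue<bool>("serilog:consoleEnabled");

            loggerConfiguration
                .MinimumLevel.Information()
                // Enrichers add these properties to every structured log event,
                // which is what shows up as ApplicationName/Environment in Seq.
                .Enrich.FromLogContext()
                .Enrich.WithProperty("ApplicationName", applicationName)
                .Enrich.WithProperty("Environment", environment);

            if (consoleEnabled)
            {
                loggerConfiguration.WriteTo.Console();
            }

            // Port 5341 is the Seq ingestion endpoint; the API key identifies the service.
            loggerConfiguration.WriteTo.Seq(seqUrl, apiKey: seqApiKey);
        });
}
```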
So let's restart our Discounts service, and before I do, let me throw an exception somewhere, just so we can see how to browse exceptions. Maybe here in this GET method I'll just throw new Exception("oops"), a very informative message coming from our system, and restart Discounts. It should start logging to Seq, and we should also see slightly different logs in the console window: as you can see the templates are a bit different now that Serilog is used, and we can also see the logs from RabbitMQ, for example which messages we are subscribed to and so on.

Let me refresh the page (you can also enable auto-refresh here), and here are the current logs. I can click on a log and see the details, and my enriched properties are visible here, ApplicationName and Environment, which is quite cool, along with additional logs, for example from RabbitMQ or from other methods and classes. Now let's hit the endpoint that throws an exception: let's try to get some random GUID, I think we have one in the .rest file, let me just copy it; of course it isn't a valid discount ID, so it should throw, and on the server side we see the error (I enabled the developer exception page here, but that's something I would hide in a production environment). If I go to Seq once again and refresh, you can see this red dot; I can expand it and here are my exception details, and here is the event ID, which as you can see is an object, because this is all about structured logging. The cool thing is that I can filter my logs by these properties, for example ApplicationName == 'discounts-service', and Seq will filter the logs accordingly; you can write some pretty nice queries here, and you can also define signals and queries for different logging levels. You can imagine that in a real deployment you would run many instances of the same service, each with the same name but a different instance ID, like we did for Consul, and all of these services would send their logs to Seq, which would act as a common log aggregator; then you could browse the logs of all your services in one place and filter them by service type, service ID, correlation ID or whatever, using these queries, or define your own signals on this side. So that's pretty much it for logging: a very good tool, free to use for a single developer, it has a Docker image to try out, and even for the enterprise scenario the one-year subscription doesn't cost that much.
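Before moving on, a tiny example of what makes those property filters possible: the named placeholders in a Serilog/Microsoft.Extensions.Logging message template become first-class properties on the event in Seq. The property and class names here are illustrative, not from the DShop codebase.

```csharp
using System;
using Microsoft.Extensions.Logging;

public class DiscountsService
{
    private readonly ILogger<DiscountsService> _logger;

    public DiscountsService(ILogger<DiscountsService> logger)
        => _logger = logger;

    public void Created(Guid discountId, Guid productId)
        // {DiscountId} and {ProductId} are captured as structured properties,
        // so in Seq you can query e.g. ProductId = '...' or
        // ApplicationName = 'discounts-service'.
        => _logger.LogInformation("Created discount {DiscountId} for product {ProductId}.",
            discountId, productId);
}
```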
So we move on to the third part, the final part, and this is the fun part. Basically, one thing you can add to your application is of course logging, and in a lot of scenarios this is great, because especially with structured logging it lets you very quickly see a lot of details coming from your applications. But there are a few issues with logging, and the first one is that, especially with microservices, imagine you create some command on the API, it gets put on the queue, and later you see some exception; or you simply try to get some data from the API gateway, which then forwards the call to some service you don't even know about, because it's hidden behind the gateway, and you see a stack trace. It can be pretty hard to determine how this exception happened, what the sequence of calls was that actually caused it; tracking one exception through a distributed system can be pretty painful. That's the first thing. I would also say that at some point logs simply aren't that easy to search: if you're new to a project based on a microservices architecture and you try to understand it, it can be pretty hard, especially as a newcomer, to analyze the logs and understand how the whole application is put together and how the particular services work with each other. And that's why in some cases you might also use distributed tracing.

There is a difference, maybe not a small one, between logging and tracing, especially distributed tracing. The logs, as you just saw, come from a particular application or microservice, and you analyze them as a whole: you can see that you have this bunch of logs from the last five minutes and some of them contain an exception thrown from the domain or from the infrastructure, whatever, but you cannot easily connect particular events together to see that this event triggered another event, and another one, during whose processing an exception was thrown. That's where distributed tracing comes into play, because it's something different when it comes to presentation, and it uses a technique similar to what we've already done in RabbitMQ with the correlation context. As you saw, we have these two extensions, and one of them is Jaeger. Jaeger is a tracer developed by Uber; Uber, as you'd expect, has a lot of microservices (over two thousand, or a little more), and they wanted to easily track the requests that go through the system, because for them especially, logging alone would be a pain: it would be almost impossible to see the dependencies between logs from different services. So they came up with this tracing system called Jaeger, and it integrates with OpenTracing. Can you open the OpenTracing site? I think it's opentracing.io... of course, it's .io. So OpenTracing is, as you can see, a vendor-neutral tracing API that currently supports nine or ten languages, one of them luckily being C#. Basically OpenTracing and Jaeger will allow us to add distributed tracing and see how requests flow through the whole application.

If you go to Startup.cs you will see these two extensions: services.AddJaeger() and AddOpenTracing(). One thing worth mentioning is that OpenTracing needs to be installed explicitly in each of your projects, as you saw; we couldn't put it into the common package, and the reason for this, as
the contributors explained in the GitHub issue about it, is that for now this instrumentation does not support .NET Standard 2.0, it needs netcoreapp2.0, and since our common package targets .NET Standard we couldn't put it there. So the Jaeger client itself comes with the common package, but if you want the default tracing for ASP.NET Core you need to install OpenTracing.Contrib.NetCore (if I'm correct, the library you saw at the beginning) in each project, unfortunately.

So let's run something and then we can go into these methods. We already have the API gateway, identity and products running; we'll skip Discounts for now and just use something that has Jaeger enabled. If we open its appsettings, it has this jaeger section, so we have some options here: first, whether you want to enable Jaeger or not; then the service name, which will let us pick in the UI which service we want to trace, or which service was involved in a particular flow. I think you should stick to the same naming convention whether you're using Consul, Fabio, Jaeger or whatever, because otherwise you will lose your mind, so you could probably even keep a single app name somewhere and reuse it everywhere. Then we have two things, the UDP host and the port. You can communicate with Jaeger using either UDP or HTTP, but I'd suggest UDP, simply because with HTTP there is an issue: you basically get a circular dependency, because when you send the sampled traces to Jaeger over HTTP, the HTTP call itself gets detected and traced again, so there's some weird stuff going on; of course you can filter out the traffic to Jaeger so it isn't sampled, but for simplicity's sake we use UDP. And by the way, why would you even need HTTP, which works on top of TCP, here? With UDP you just send packets and you don't get a response from Jaeger about whether they were received; if you're familiar with the difference between UDP and TCP you know there is no acknowledgment involved, no session and so on, UDP just sends the data and doesn't care whether it arrives. (There was a stupid joke about this... we'll save it for the next episode.) So that's pretty much the configuration.
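For reference, an illustrative shape of the jaeger settings section discussed above and how it might be bound; the property names follow the transcript but are not guaranteed to match DShop.Common exactly.

```csharp
using Microsoft.Extensions.Configuration;

public class JaegerOptions
{
    public bool Enabled { get; set; }
    public string ServiceName { get; set; }      // what you pick in the Jaeger UI
    public string UdpHost { get; set; }          // agent host, e.g. "localhost"
    public int UdpPort { get; set; }             // agent compact-thrift port, e.g. 6831
    public int MaxPacketSize { get; set; }
    public string Sampler { get; set; }          // "const", "rate" or "probabilistic"
    public double MaxTracesPerSecond { get; set; }
    public double SamplingRate { get; set; }
}

public static class JaegerOptionsExtensions
{
    // Bind the "jaeger" section the same way other DShop options are loaded.
    public static JaegerOptions GetJaegerOptions(this IConfiguration configuration)
    {
        var options = new JaegerOptions();
        configuration.GetSection("jaeger").Bind(options);

        return options;
    }
}
```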
Once you add AddJaeger() to your Startup, if you go into its internals you will see what it does: it takes the jaeger section from the settings, binds it into options, and builds a tracer. To build the tracer we need two important things. The first is the reporter: the reporter is the component responsible for sending the data to Jaeger, and there are plenty of different reporters built into this library; you could use the one that reports to the console, but as you see we use the UdpSender, where we specify the host and the port, so we send the data to Jaeger over UDP. If you don't specify a reporter explicitly, Jaeger will simply report to localhost. (One question: is it possible to add some authentication for sending this data? I think if you go to the Jaeger C# client on GitHub there is a section about reporters... okay, I'm not quite sure, I'll take a look; the client is still quite young.) So that's pretty much the reporter.

Now we get to the sampler. The sampler, well, samples the data, because you don't want to send everything for tracing. For development purposes that would be fine, it's no big deal for your application to send every request or span to Jaeger, but imagine you have thousands or millions of requests: it would be painful to report every single one. So you sample the data, and you can pick from, I think, six different samplers; for now we support just three. The first is the const sampler, which simply makes a constant decision for every trace; then there is the rate limiting sampler, where, as you see, you specify the maximum number of traces per second; and the probabilistic sampler, where you can say you want to trace, let's say, just 5% of your total requests. There are a couple more, one of them more advanced because it combines the rate limiting and the probabilistic samplers: guaranteed throughput, plus the per-operation and remote-controlled samplers. Once you have these two things, the reporter and the sampler, you can create the tracer, and the tracer will later be used to create new spans (we'll get to spans in a moment).
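A condensed sketch of what an AddJaeger() extension along these lines does with those options: build a tracer from a reporter (where spans go) and a sampler (which spans are kept), then register it as the OpenTracing ITracer. This reuses the hypothetical JaegerOptions shape from the previous sketch and the Jaeger C# client API of that era (namespaces such as Jaeger.Senders may differ between client versions); the real DShop.Common code also handles the Enabled flag and more sampler types.

```csharp
using Jaeger;
using Jaeger.Reporters;
using Jaeger.Samplers;
using Jaeger.Senders;
using Microsoft.Extensions.DependencyInjection;
using OpenTracing;
using OpenTracing.Util;

public static class JaegerExtensions
{
    public static IServiceCollection AddJaeger(this IServiceCollection services,
        JaegerOptions options)
    {
        // Reporter: send finished spans to the Jaeger agent over UDP.
        var reporter = new RemoteReporter.Builder()
            .WithSender(new UdpSender(options.UdpHost, options.UdpPort, options.MaxPacketSize))
            .Build();

        // Sampler: decide which traces actually get reported.
        ISampler sampler;
        switch (options.Sampler)
        {
            case "rate":
                sampler = new RateLimitingSampler(options.MaxTracesPerSecond);
                break;
            case "probabilistic":
                sampler = new ProbabilisticSampler(options.SamplingRate);
                break;
            default:
                sampler = new ConstSampler(true);
                break;
        }

        var tracer = new Tracer.Builder(options.ServiceName)
            .WithReporter(reporter)
            .WithSampler(sampler)
            .Build();

        // Make the tracer available both globally and through DI.
        GlobalTracer.Register(tracer);
        services.AddSingleton<ITracer>(tracer);

        return services;
    }
}
```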
So maybe let's see some traces; the API should already be sending some data. Let's start with Docker: who would have guessed, in our docker-images file you can find the jaeger section right at the bottom, and here is this long command, let me just uncomment it. It opens a lot of ports, UDP listeners for different protocols, and on this port you will be able to open the frontend. One more thing: can you go to the Jaeger page, the official one, not the GitHub repo but the site itself, then the docs and getting started? As you saw, that's the command we actually used, and by default you can use this all-in-one Docker image, which creates everything needed to run Jaeger locally. But Jaeger itself consists of a few parts: you have the agent, the query service and the collector, and if you go to the architecture docs this becomes much clearer. As you see, this is pretty much how it's structured: there is your application and the Jaeger client (in our case the C# client), which reports to the Jaeger agent and collector (written in Go, as you see), and the collector then stores the data in some data store; as far as I remember Jaeger supports Cassandra and also Elasticsearch. If you run the all-in-one image, the one you'll find in our Docker section, it comes with a simple in-memory store, so once you restart the container you lose all your data. You can also run the agent, the collector, the UI and the query service separately, and if you want to use Elasticsearch you can run the Jaeger collector against Elasticsearch instead of the all-in-one setup, so keep in mind that you can choose between different stores. (I think Cassandra would make my laptop explode. Probably.)

All right, so let's see some traces. We have Jaeger enabled for the identity API and the product service; Discounts doesn't have it yet, so here we have three parts. Did you make some call already? Let's see, port 5000... we already saw there was something, but let's make one more call to my products, which returns the empty products collection, and let's go to the Jaeger UI. Maybe we should start with these graphs; we can start with the API. So let's select the API here; our services are grouped by their names, which is the appsettings key for Jaeger. There are different filters, like tags, which can be quite useful, and filtering by operation type is pretty cool: as you can see there is, for example, the general HTTP GET or POST, or the operations for a particular controller. Let's just click on all and find traces. Okay, so we have something, and I think we should start with the plain API home controller, just to see... I think it's this one, maybe the one above it, yeah.

Okay, so this is how the tracing looks. As you can see we have these rectangles, something between blue and green (I'm not great with colors, so let's call them green), these bars, and each of them represents a span. The span is the crucial unit of tracing; in a nutshell, a span represents a unit of work, so imagine that
you want to get some data, like we just did: that was the request Piotr made. We can have plenty of different units of work, and the relation between them can be either that one follows from the other, or that one span is the child of the other. In this case we have a child hierarchy, and the very first span is the HTTP GET; if you go to the top... yes, this is the alternative view, which shows the first span, but it doesn't look as good when it comes to the hierarchy. If you click on the first one here, this is the whole span that comes from the OpenTracing instrumentation, which gives you the basic tracing for ASP.NET Core, so you can see pretty useful data, like the fact that we called localhost:5000, who called it, the port and the host name. That's the basic unit of work: this span represents the lifetime of our request, from the very beginning of the ASP.NET Core request pipeline to its end. Besides that we also have children and parents; in this case you can see a child span which refers only to the ASP.NET Core controller, so it doesn't cover the whole lifecycle, and that's pretty cool, because the instrumentation plugs into the whole pipeline: you can see plenty of different action filters, before the action is executed, after, and so forth, but this span sticks to this particular part, only the MVC piece. Then the last child is returning the result, which is a separate span and took a bit over seven milliseconds. So you can visually follow the whole request, and in some cases this is much easier for understanding how your application lives, how it's structured and how particular services talk to each other, than analyzing logs, because, as you see, one span can have multiple logs inside it, which is another cool feature: a span can be tagged, so we can filter on it, and you can also attach logs to a particular span. Imagine you had twenty logs corresponding to different parts of the lifecycle and then some OkObjectResult; it would be kind of hard to see how it all fits together.

And here is one more crucial thing, the main advantage of tracing over logging. With logging, things are easy in development, because there isn't that much data, but imagine thousands of requests in production and some issue you want to investigate: within just a few seconds you'd have an enormous amount of logs to analyze, and of course you can filter them, but the data keeps changing. Things are quite different with tracing, because each trace has its own ID, so in a trace you only see the things that were actually related to this particular action, nothing more, nothing less. Even if you had five thousand requests per second right now, it would still be pretty easy to find this particular trace and analyze it, because there is no unnecessary data around it.
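A small example of working with spans directly through the OpenTracing API, to make the "unit of work" idea concrete: BuildSpan creates a span, tags make it filterable in the Jaeger UI, and logs are attached to that span rather than floating free. The operation and tag names here are illustrative.

```csharp
using OpenTracing;

public class ProductsReader
{
    private readonly ITracer _tracer;

    public ProductsReader(ITracer tracer)
        => _tracer = tracer;

    public void Browse()
    {
        // StartActive(true) finishes the span automatically when the scope is disposed,
        // and the new span becomes a child of the currently active one
        // (e.g. the ASP.NET Core request span).
        using (var scope = _tracer.BuildSpan("browsing products")
            .StartActive(finishSpanOnDispose: true))
        {
            scope.Span.SetTag("component", "products-service");
            scope.Span.Log("Fetching products from the store.");

            // ... the actual work goes here ...
        }
    }
}
```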
So that was the first demo: we simply queried the home controller. Now it's getting more interesting, and it shows another thing: what Piotr did was call the API gateway, and the API gateway then called the product service for some products (you can even see the whole query string here). So you can inspect what data was sent to this particular microservice from the API gateway, but you also see the connection between your services. I think we can show this on a graph, because I hope we can show the dependencies... this view is always so far away, I don't know why, it's difficult to scroll, it's somehow buggy, probably some JavaScript issue, let me just refresh... okay, where are my dependencies... this one is better. So this view basically shows you the dependencies in your system based on HTTP calls, and you can see that since we started tracing, the API called products three times (we opened these endpoints twice before and once now, so three times); it counts the total number of traces collected by Jaeger. I would say this feature alone is awesome, because of course this was a very simple example where the API calls products, but imagine you had multiple calls to different services: simply by clicking on the dependencies, by following these spans and the whole tracing, you get the dependency graph basically for free, with not much trouble. So if you're new to a project, assuming it has Jaeger, this helps a lot in understanding how data flows through the system.

Okay, can we get back to the search? Which one should we choose, products? If we switch to products, this will show only the traces in which the product service was involved. In this case, if you go to this GET, you can see the call: there is the GET that started from the path on the API, then we called the product controller, and then we did another HTTP GET which this time involved the products service underneath, and the same structure repeats for products: a whole span for products, then the products controller result, an OkObjectResult, and then another OkObjectResult for the API. So it's super easy to understand how the data flows through your system, and of course this also lets you spot bottlenecks: imagine you had some latencies, or you had a span wrapping your connection to the database; then you would see in this UI that, okay, my whole request took one second, but for some bizarre reason we waited half a second for the results from MongoDB or Redis. So you can spot bottlenecks easily, which is not that easy when it comes to logging.
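One way to surface that "half a second waiting for MongoDB" case is to wrap the repository call in its own span, so the database time shows up as a separate bar inside the trace. A sketch under assumptions: the repository interface and operation name below are hypothetical, only the OpenTracing calls are real.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using OpenTracing;

public interface IProductsRepository
{
    Task<IEnumerable<string>> BrowseAsync();
}

public class TracedProductsRepository
{
    private readonly IProductsRepository _repository;
    private readonly ITracer _tracer;

    public TracedProductsRepository(IProductsRepository repository, ITracer tracer)
    {
        _repository = repository;
        _tracer = tracer;
    }

    public async Task<IEnumerable<string>> BrowseAsync()
    {
        // The span is a child of the current request span, so a slow query is
        // immediately visible as a long bar inside the HTTP GET trace.
        using (_tracer.BuildSpan("mongo: browse products")
            .StartActive(finishSpanOnDispose: true))
        {
            return await _repository.BrowseAsync();
        }
    }
}
```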
So this is another great thing, and that's basically how you search through traces. I would say that if you only want to trace your data over HTTP, there is not much to change in the default C# client, because the tracing is pretty much built around HTTP. But since we wanted to make it even cooler, we integrated it with RabbitMQ as well. So let's make two requests; the first one will succeed and create a new product. Let me just log in as administrator here (do you have a token? I think I clicked sign-in)... the first request is always the longest given that we are recording, so it took 12 seconds to get the access token... and let's post to products, which should create a new product, okay. Now I'll post it once again, so we try to create a product with the same name, which is not unique, so it will fail; of course we won't see the failure in the response. Let's go back to Jaeger and find traces for products once again.

I would say this is another advantage, actually, because keep in mind that we return 202 Accepted even though... yeah, because that's what the response means: we accepted the message, but that doesn't mean it was actually processed successfully. But in this case you will see it. Was it the first one? I don't think so... oh no, this one is the call pushing logs to Seq, we should probably filter that out... and this one? no, the one below. Ah right, I forgot about the API; that's cool, because the API sent the original request, which makes sense. So as you see, this looks very similar, but it does not come out of the box: as I said, the tracing is based on HTTP, and since we push the data to RabbitMQ, without any additional work you would simply see the trace end at the third span; we would return Accepted and that would be it. To propagate this data through the queue we did some small amount of work, and you can see we log something in the middleware: that we got a correlation context and processed some command, in this case CreateProduct. So I think we can go to the code and see how it's done. Our middleware? No, I think we should start with the API. Right, that makes sense: let's go into the base controller of the API, which now accepts this ITracer. As you remember, the ITracer was created by the AddJaeger extension, so the tracer uses the sampler and the reporter for the tracing. And now one thing: as I said, since the Jaeger tracing is based on HTTP, once we return Accepted, that would end the trace, because how would Jaeger know that it now needs to start tracing this particular microservice? We simply put the message on the queue, and let's say it gets processed by the microservice three minutes later; how on earth would Jaeger know
that it should trace this as one request, as one data flow? For this we use our correlation context. As you remember, the correlation context is metadata attached to the message in RabbitMQ, and now it has an additional field called SpanContext. What we do is take the active span from the tracer; the span in this case is the one within this controller. As I said, there were three spans: one was the whole request, then the nested span which was just the ASP.NET Core MVC part, and the third one was the OK result; this ActiveSpan refers to that ASP.NET Core unit of work, let's say. We take its context, and the context contains all the data Jaeger needs to know that we later want to attach something to the current trace. We have a small helper for this; I wouldn't quite call it serialization, but the context has ToString() overloaded, so we simply turn the context into a string, keep it as a string in the correlation context, and then it gets published to the queue with the message.

Then, if you go to AddJaeger, you see we have this UseJaeger (I think I should make this internal... now I see, okay, I'll change it). Basically there is a Jaeger staged middleware, and this is a middleware for RawRabbit, the RabbitMQ library we use. RawRabbit has a structure very similar to ASP.NET Core: it uses a pipeline of middlewares for processing your messages, and there are different steps, so as you see there is a stage marker which says this middleware will be called once the message is deserialized; if you're familiar with middlewares in ASP.NET Core, this is pretty much the same thing, we call the next step with InvokeAsync and so forth, and "next" refers to the next step in the whole pipeline. So what we do now: we get the correlation context using GetMessageContext, then we take the span context, which can be reconstructed using SpanContext.ContextFromString, and this gives us the connection between the API and the particular microservice; it refers to that ASP.NET Core span from the API gateway. Then we create a scope, which gives us another span: we say, okay, I want to create a new span, it will be called "processing" plus some message name, we give it a tag (the message type will be either command or event, so if the message implements ICommand this will be a command), and then we add a reference. As you see, it's a FollowsFrom reference, so we say: this is another span, another unit of work, and we want it to refer to the span context that we just deserialized from the correlation context, so Jaeger will know that this is the next step of processing this data, something that should be attached to the trace from the API gateway. Then we activate it; StartActive means that once we leave the using block, the span is disposed and finished, and inside this using we simply call InvokeAsync, which invokes the next step: it calls our bus subscriber, and inside that invoke is everything you're already familiar with, the bus subscriber calling the particular command or event handler.
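A sketch of the propagation trick described above, showing both sides of the queue. The correlation-context shape and the wiring are simplified stand-ins for the DShop/RawRabbit code; only the OpenTracing/Jaeger calls (ActiveSpan, SpanContext.ContextFromString, the FollowsFrom reference) reflect the actual APIs mentioned in the episode.

```csharp
using Jaeger;
using OpenTracing;

public class CorrelationContext
{
    public string SpanContext { get; set; }
    // ... plus the id, user id, resource and other metadata DShop already sends ...
}

public class MessageTracing
{
    private readonly ITracer _tracer;

    public MessageTracing(ITracer tracer)
        => _tracer = tracer;

    // Publish side (API gateway / base controller): capture the active ASP.NET Core
    // span and ship its serialized context together with the message.
    public CorrelationContext BuildContext()
        => new CorrelationContext
        {
            SpanContext = _tracer.ActiveSpan is null
                ? string.Empty
                : _tracer.ActiveSpan.Context.ToString()
        };

    // Consume side (RawRabbit middleware): rebuild the context and start a new span
    // that "follows from" the span created in the gateway, so Jaeger stitches both
    // services into one trace.
    public IScope StartProcessingSpan(CorrelationContext context, string messageName,
        bool isCommand)
    {
        var parentContext = SpanContext.ContextFromString(context.SpanContext);

        return _tracer.BuildSpan($"processing: {messageName}")
            .AddReference(References.FollowsFrom, parentContext)
            .WithTag("message-type", isCommand ? "command" : "event")
            .StartActive(finishSpanOnDispose: true);
    }
}
```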
That InvokeAsync basically calls our bus subscriber, and inside that invocation we put everything you're already familiar with: the bus subscriber and the call to the particular command or event handler. So that was the general span for this microservice, but we can go even deeper. We are now in the bus subscriber, which is called after the middleware has created its span, and here we are building another span - but this time it's a child of our current span. So we create a new scope, a new span, this one is the child, and it wraps the whole execution of HandleAsync. If you look inside you'll see that we log to the logger and we can also log to the span, then we await Handle, which actually calls the command or event handler, and then we simply finish the span.

One more thing, for exceptions: if we get an exception we can also log that something happened, and we can set a tag. Jaeger comes with some predefined tags - sorry, it's not an enum, they are constants - so if you set the error tag to true (you can add tags to a span dynamically), it will be displayed in the UI as a span that actually failed. So when we hit the catch clause you will see that it failed, which lets us analyze this even more efficiently. And one last thing: if the error was of type DShopException, it means the error was a domain issue; if it was not a DShopException, it means it came from the infrastructure - well, it doesn't have to be exactly infrastructure, it could also be some sloppy validation logic, whatever it is.

Okay, so let's see how this looks now. Here is the success, and now let's get back to the error - we have this nice icon with the exclamation mark. As you see, the first 'processing create product' span was the parent created in the middleware, and its child was created in the bus subscriber, and since we hit the catch, the filtering said: okay, that was a DShopException, so we mark it as an error with the domain error type, and we can attach some data saying that the product already exists.

The last thing I think we could show is the retry. Right, the retry. So let's throw an exception here, just like we did before - it doesn't really make much sense, just throw new ArgumentException - oops, and let's restart... it was supposed to be the products service, okay. Now let's try to create a new product once again, and we should get an error after three retries, because remember that the whole handling of the message is wrapped in a Polly policy - in this case we want to retry a few times, only for errors. Great, so as you can see we have the retries: the infrastructural error type with the custom tag, we're trying to create a product, then we have our event, then an exception, another retry, and so on. What's interesting is that we see retry one, three and two - like it was not ordered somehow, which is interesting; I don't know why that is.
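(A rough sketch of that subscriber-side wrapping, including the Polly retry mentioned above, might look like the following. TryHandleAsync, the tag names and DShopException are assumptions standing in for the actual DShop types, and the retry count and backoff are illustrative only.)

```csharp
using System;
using System.Threading.Tasks;
using OpenTracing;
using OpenTracing.Tag;
using Polly;

// Assumed base type for domain exceptions, as discussed above.
public class DShopException : Exception
{
    public DShopException(string message) : base(message) { }
}

public class BusSubscriber
{
    private readonly ITracer _tracer;

    public BusSubscriber(ITracer tracer) => _tracer = tracer;

    // Hypothetical wrapper around a command/event handler; the real subscriber
    // has more plumbing (logging, publishing rejected events, and so on).
    public Task TryHandleAsync<TMessage>(TMessage message, Func<Task> handle)
    {
        // Retry on any exception - stands in for the Polly policy from the talk.
        var retryPolicy = Policy
            .Handle<Exception>()
            .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(attempt));

        var messageName = typeof(TMessage).Name;

        return retryPolicy.ExecuteAsync(async () =>
        {
            // Child of the span started by the staged middleware.
            using (var scope = _tracer
                .BuildSpan($"handling {messageName}")
                .AsChildOf(_tracer.ActiveSpan)
                .StartActive(finishSpanOnDispose: true))
            {
                var span = scope.Span;
                try
                {
                    span.Log($"Handling: {messageName}");
                    await handle();
                    span.Log($"Handled: {messageName}");
                }
                catch (Exception ex)
                {
                    // Mark the span as failed so Jaeger shows the error icon,
                    // and distinguish domain errors from everything else.
                    span.Log(ex.Message);
                    Tags.Error.Set(span, true);
                    span.SetTag("error-type",
                        ex is DShopException ? "domain" : "infrastructure");
                    throw;
                }
            }
        });
    }
}
```

With a retry count of three, Polly makes four attempts in total, which is consistent with the four error-marked spans discussed next.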
Maybe the graph view will show it - just a minute. Okay, so basically, as you can see, we had this one 'processing create product' span, which was the span created artificially in the middleware, and then inside the bus subscriber we created more spans. Each of them tried to execute the handler, but since we failed and got the exception, Polly tried again - it made four attempts in total and then gave up - and that's why you can see four spans marked as errors. So it is also immediately clear that something went wrong in your app. And I think that's pretty much it when it comes to tracing - you should now get the difference between logging and tracing. Do we have anything else? No, I think it's more than enough, but we'll keep exploring these monitoring topics in the next episode, so this is not the end of this stuff.

In the next episode we will talk about Prometheus with Grafana, and maybe something else, we'll see. You will see how you can collect metrics - custom metrics or built-in ones - and how you can display pretty nice dashboards with a lot of graphs, panels and statistics updated in real time. So you'll be able to build a really awesome admin/DevOps panel, put it on some big TV screen and see how busy your microservices are. It's pretty handy stuff.

I think the general point is that you should start thinking about logging, monitoring, tracing and all these things before you start deploying your microservices at a big scale. You can have the best domain and the best services in the world, but you shouldn't ask whether something will break - you can be certain that it will: your service or services will go down and things will start breaking. And if you don't have these tools in place, it's like seeing the smoke but having no idea where the fire actually is. I would say this is pretty serious, because people tend to think, especially when they have really poor logging: okay, I have some logging, I write to files or to a database - and of course the database is a bit better because you can query it - but logging to plain text files? Good luck with that, especially when you have thousands of requests per second; then we'll see how easy it is to spot an issue in your infrastructure. So, as Peter said, you should not ask whether this will happen - it will.

One more thing: we recently saw a very good presentation about this kind of approach - I'm not sure how the name translates, but it was about chaos engineering. It was pioneered by Netflix, who created a whole stack of tools for breaking their own infrastructure in production. Let's say you have a Kubernetes cluster that runs multiple instances of your microservices, and you simply enable a tool that they call
Chaos Monkey - it's part of Netflix's Simian Army, and the chaos is random: you simply enable it and you don't know when it will shut down half of your infrastructure. Using this approach Netflix developed great resiliency in their microservices, because they are prepared for different situations and different kinds of failures, whether they come from the application or from the infrastructure. So if you're interested in this stuff, you should definitely check it out - maybe we will link it in the comments or something, we'll see. So that's the Simian Army, and that's the whole chaos engineering topic. Okay, I think we are good for today - it's been one and a half hours already, again - so see you soon in the next episode, which will be episode number ten. Awesome. Thank you!
Info
Channel: DevMentors
Views: 8,055
Keywords: .net, .net core, devmentors, distributednetcore, dshop, aspnetcore, microservices, vault, seq, jaeger, logging, secrets, tracing, distributed tracing, open tracing
Id: 8bDHf7yiNKM
Length: 89min 47sec (5387 seconds)
Published: Sun Feb 03 2019