.NET Microservices – Full Course

Captions
So I have just one word for you... or maybe it's two words, I don't know, doesn't matter. Anyway, that word is: microservices.

[Music]

Well hello, wherever you are, whenever you are. Where am I? Melbourne, Australia, as usual. And when is it? It's August 2021. I hope wherever and whenever I find you, I find you happy, safe and well.

So yes, this is the big one. This is microservices, the course I have been wanting to make for such a long time, and now it's finally here, free on YouTube. It was a lot of hard work, it was a lot of fun, but I'm glad it's done. You will see from the timestamp that it's a long video, so I wouldn't expect anybody to go through it in one sitting. With that being said, I have put the time codes in the description below, so you can jump straight to the sections you want, or come back and pick up where you left off. That said, I do suggest you go through it start to finish for the best experience. I have tried to put in the right balance of content for an introduction to microservices, but if you have any feedback on how I can improve, please put that in the comments section below; I always love getting your feedback.

One last thing: just a big thank you. Thank you for joining me here on YouTube, and if you do like the video, please think about giving it a like; that just helps me out. If you've not done so already, maybe think about subscribing and ringing the little bell, and that way you'll get notified when I make new content. And finally, to my Patreon supporters, who go that little bit extra and support me on Patreon: thank you so much; I cannot tell you how much I really do appreciate it. As usual, as a way of saying thank you, your names are coming up at the end of the video. That's enough from me; we have plenty to get through, so I don't think we should delay any further, and I think we should get started.

All right, so yes, welcome to this step-by-step course on .NET microservices, and we will also be
using Kubernetes, RabbitMQ and gRPC. Up front, before we go through the course outline, I just want to be very clear about the approach I've taken in this course, which may or may not suit some people, before you invest too much time going forward.

This course will focus on the technical aspects of building .NET microservices, which shouldn't be too much of a surprise; hopefully that's why most people are here. We'll cover the theory as we go. I wanted it to be as practical and as technical as possible, and while we do do theory, I didn't want so much theory up front that it becomes a bit tedious, so we cover it as we go, although there is a bit of a theory block up front as well. I take a pragmatic, balanced approach, always trying to follow best practices, but you do have to balance that: the real world of development is not a highly theoretical place, and you sometimes have to make tough, pragmatic decisions on how best to go forward. I've taken that approach in this course as well.

It's fully step by step. The code is available on GitHub, but I don't just point you at a repository and tell you to download the code. As part of the video we actually go through every single bit of code step by step, line by line, and that goes for the Kubernetes stuff as well. That might not suit some people, I'm just calling it out, but that's the way I've approached it here, and that's why it's pretty long. I've tried to be as practical and as interesting as possible, but I also acknowledge that microservices are hard and they don't solve every problem in the world, so we go into the course acknowledging that.

There are some things this course doesn't do. I don't focus on things like domain-driven design or bounded contexts, the more highly theoretical material about how you would design your business-focused microservices. We touch on it slightly, and I make the point that services should be bounded, for sure, but I don't go into lots and lots of slides or theory about that. We also don't build out much redundant business logic in our example. Our example appears incredibly simple: there are only two services, and you might think that's not enough to learn anything. Believe me, it is more than enough to learn the basics and the introductory material for microservices. I felt that if I introduced more services and tried to build out something like an e-commerce app, it would detract from what I was really trying to get across, so we don't build carts and category pages and all that kind of stuff. I don't demonize the monolith or bandwagon-jump: there are reasons and scenarios where you would definitely use a monolith, and reasons where you would definitely not use microservices, which is probably the same thing, and I call that out with some examples. What I'm trying to say is that microservices don't cure everything; they're not the best approach in all circumstances, definitely not. And we don't cover everything: there is a balance to be struck, and of course you're never going to cover everything, and I certainly haven't here. But what I do feel I've done is given you enough material for a really good introductory foundation to building microservices with .NET technologies and all the associated stuff around them.

OK, let's go through the course overview. Just FYI, there are
nine sections, nine parts to the course, so we're just going to go through those now.

Part one is our introduction and theory. We go through a course overview, which we're doing now, some prerequisites, which aren't mandatory but nice to have, and the ingredients and tooling you'll need if you want to follow along. We then move on to some theory: what microservices are and why you would choose to use them; we also look at monoliths and why you would and would not use them. We then look at the services we're going to build, do an overview of our solution architecture, and then look at our service architecture, so how we're actually going to build our services out.

Part two: we start the practical building; the rest of the course is fully practical from this point on, so it's all coding and config. We start the platform service, our first service: a bit of an overview of what it is, then we scaffold it up, build out the data layer, and work on our controllers and actions.

Part three: we look at Docker and Kubernetes. We review Docker, then containerize our platform service, building it as an image and pushing it up to Docker Hub. We then go into Kubernetes, what it is and why you would use it, look at the architectural overview of our application in Kubernetes (that's the Kubernetes architecture section), deploy our platform service into our Kubernetes cluster, and finally set up some external networking so we can actually use our platform service inside Kubernetes.

Part four: we start on our commands service, but really this section is focused on synchronous messaging. We scaffold up our commands service, set up a controller and action, and then look at messaging, both synchronous and asynchronous. As part of the synchronous messaging we set up HTTP communication between our two services, the platforms and commands services, so we add an HTTP client to our platforms service. We obviously have to deploy our commands service into Kubernetes, we do some internal networking, and we also stand up an API gateway, so it's a really big section covering an awful lot of material.

Part five: we move on to SQL Server. Every service has its own database, and this is the SQL Server for our platform service. As part of setting it up we talk about something called persistent volume claims, we look at Kubernetes secrets, we deploy SQL Server to Kubernetes, and we revisit our platform service. A quick call-out: if you don't know what some of this stuff is, that's fine, we cover it all in the course. If you don't know what a persistent volume claim is, don't worry about it.

Part six: we revisit our commands service. We talk about multi-resource REST-based APIs, review all the endpoints for our commands service, build out our data layer, and finish off our controllers and actions. That looks like a short section, but it's actually quite a long one.

Part seven: we look at our message bus, which is RabbitMQ in this case. We review our solution architecture, give an overview of what RabbitMQ is, deploy RabbitMQ to Kubernetes, and test it.

Part eight: we start to really build out our asynchronous messaging using RabbitMQ. That involves adding a message bus publisher to our platform service, looking at event processing, and then adding an event listener to the commands service.

And then to round out the course, part nine: gRPC. An overview of what gRPC is and why you would use it, some final networking configuration inside Kubernetes, adding a gRPC server to our platform service, creating a proto (or protobuf) file, and adding a gRPC client to the commands service. To round it off, we deploy all of that and test everything end to end.

All right, let's move on to the course prerequisites. Again, these are just suggestions; we do
everything, as I say, fully step by step, but some of this stuff is quite complex, and if you've never touched it before you might find some of the concepts a little heavy going. I have courses for pretty much all of this on the channel, so if you're a bit rusty, maybe you want to brush up on the following. Experience building .NET Core REST APIs: I have a full course on my channel that covers that in detail; we do it again here step by step, but I spend more time on bits and pieces in that video. An understanding of Docker and associated concepts: if you've never touched Docker before and don't even know what it is, we do cover it here, but a more detailed course might help you. Dependency injection in C# would be useful; we cover it here, but some of my other courses go into the more basic theoretical concepts in a lot more detail. And if you've used async and await in C# before, that would definitely be useful, though it's probably not super critical. All of this would be useful, and we do cover it all step by step, but if you've never built an API with .NET before, you might find some of this a bit challenging.

All right, these are the ingredients you're going to need if you want to follow along practically, which I think most of you will want to do. I'm using the VS Code text editor. You can use Visual Studio if you want, but I'm going with VS Code; I just find it lighter and more adaptable than Visual Studio. .NET 5 is sort of mandatory: there are some concepts in here where, if you use an earlier version of the framework, you may run into some little niggly problems, so I highly recommend you run with .NET 5. You will need Docker Desktop running Kubernetes, which is free, and an account on Docker Hub, also free; when we push images up to Docker Hub you obviously need to have an
account to do that. And you'll need either Insomnia or Postman, which are basically API test clients that we'll use throughout the course to test our services.

In this video I'm running everything on my local hardware. I had a decision to make on whether to go with this approach, running everything locally, or to go with a cloud provider like Google Cloud or Azure. I decided this course would use just your local machine, so you don't have to sign up for cloud services, which I know doesn't suit everybody. The downside is that we will be running up quite a number of services, so you do need a decent level of local hardware. I'll show you my PC spec just out of interest, though it's probably a higher spec than you need. We'll be spinning up a lot of containers in Kubernetes, which obviously has an impact on your local hardware, most especially memory. I've got an Intel i7 and 32 gig of memory, which is definitely way more than you need; it's just what I happen to be running. 16 gig will be absolutely fine. Eight is really the absolute bare minimum, and even with eight gig you're probably going to struggle a bit, so between 8 and 16, I'd recommend 16 to be honest with you. The hard drive is sort of irrelevant; I just happen to have a one-terabyte solid-state drive. It's really the memory that matters, so yes, with 16 gig you should be fine.

The other thing I want to call out is that we are doing a lot of Kubernetes work in this course, so I have produced a free cheat sheet that you can download. It's a nice one-pager that you can refer to on screen or print off, with all the Docker and Kubernetes commands we're going to use in this course, so it's a nice little one-page reference guide. It also has our Kubernetes application architecture reference on there, so you can tick off what we complete as we move through the course, and a Kubernetes object glossary; if you're new to Kubernetes, we cover it all in the course, but everything you see on screen here is on a nice little one-page summary for you to download. All you have to do is go to dotnetplaybook.com, my blog website, and register for our mailing list; we'll subscribe you and email you a download link. I do want to call out that we never, ever pass on your email address to anybody else. In fact, I hardly use the mailing list at all; I probably send out an email maybe once a year, and I definitely do not pass addresses on, so be assured, if you do give us your address, it's totally secure, and we don't ever pass it on to marketeers or anything like that.

Other course resources: the code is available on GitHub for you to download and follow along. There is a lot of code here, and a lot of scope to make errors; I've made a few errors while making the video, which you will see if you follow along. Don't worry, I do correct them, but it's always nice to have the backup of a code repository you can refer back to and compare with what you've done. In the description of the video I have put timestamps to all the main sections in the course; it's a very long video, and I don't expect anybody to go through it in one sitting, so via the course outline in the description you can jump to each section and come back to it at your leisure. And finally, the course slides: there are quite a few slides we're going to go through. I'm not making those available publicly, but they are available to my Patreon supporters; the five-dollar supporters and above can download the course
slides through my private GitHub repository, so if you're interested in that, maybe check out my Patreon site at the following address.

All right, let's come on to talking about microservices; we've done enough introductory stuff. So what are they, and, rather facetiously, how are they saving the world? I'm not being totally serious when I say that. Let's look at this idea of the single responsibility principle, coined by Robert C. Martin, which says: gather together those things that change for the same reason, and separate out those things that change for different reasons. So what do we mean by that? Well, it's a nice way to think about microservices at a high level.

So what are they? They are small. They are built, let's say, by a team that would consume two pizzas; that's a way we often measure team size. Imagine at the end of the sprint, or the end of the week, you want to treat your team to pizza: you only need to buy two pizzas to satisfy this team. So a small team. Or, if you want to think about how long it would take to build a typical microservice, you're looking at maybe two weeks. That's what we mean by small, although it is quite a subjective term.

They are responsible for doing one thing well. You have this idea of a microservices application, which will include many different services, and each of those services will do one thing and do it really well. Again, go back to the single responsibility principle: if something does one thing and does it really well, it doesn't need to rely too much on the other parts of the system. Although, paradoxically, in order for it to be a microservices application, services typically will talk to each other, because they need to share data between themselves. But you don't want so much sharing that they're chatting to each other all the time, and that's why breaking down your
services into doing one thing, and doing it well, is really important.

They're organizationally aligned. Think of a large organization: parts of that organization will typically be responsible for running and managing a set of different services, over which they have almost exclusive control, so they can change them when they want to, make additions to them, and take them down when they need to. Other parts of the organization will have their own set of microservices, aligned to their part of the business, and again, most usually, these services talk to each other. This is really important: we will talk a lot about decoupling and the fact that you don't want services talking huge amounts to each other, but at the same time, paradoxically, for it to be a full microservices solution, the services typically form part of a distributed whole. One service doing one thing by itself, never talking to anything else: is that a microservice? Possibly not. It really is a true microservice when it forms part of this larger distributed system. And, as we've already touched on, they're self-contained and autonomous: responsible for doing one thing, and able to stand on their own without being too reliant on other things.

All right, to help position what microservices are, let's look at the contrasting side of things: monoliths. You might be wondering what a monolith is, and probably the best way to describe one is by reference to an actual system. The system I'm going to describe here is actually an amalgamation of a number of systems I've worked with, or been across, at various points in my career, but one thing is for sure: if you work in a large organization, there's a very strong chance you have a system like this in your organization. So this is an example of something that I have come across on
my travels: a large monolithic CRM system servicing millions of customers, so a really super important system, over many years. It's not something stood up within the last six months; it's been around for many years, in an organization that is itself many years old. It's built on a single proprietary technology stack: obviously there will be different technologies making up the monolith, but for the most part it's built on one platform, one tech stack. And in this case, as in most cases, it's managed by an outsourced partner, so it isn't even managed by the organization itself directly; a lot of the time, systems like this are managed by third parties.

So what's wrong with that? Well, the system is very difficult to change. It's a huge code base, and when you change even a small part of it, you have to make sure the change hasn't impacted any other part of the code base. Change cycles are months in duration, and big migration projects, say migrating from one monolith to another, can be years in duration. For this particular system, the change cycle was at least three months, with lots of testing, frequently manual testing, to make sure nothing had broken. So: difficult to change, and therefore very slow to change.

It's also difficult to scale. What I mean by that is, the CRM platform is a huge system, and let's say you expect increased orders at certain points of the year, your holiday periods, say, or Black Friday sales. You may want to scale just the part of the system that deals with ordering, while the other parts don't need to scale as much. With a monolith you can't do that: you have to scale the entire system. So it's quite difficult to scale specific bits of the system; you can scale it, but you have to scale absolutely the whole thing, which makes scaling quite inefficient.

And finally, you're locked in. You've got one big platform that's locked into its technology stack, which is very difficult to change. And if you're using an outsourcing partner, you're locked in from an intellectual property perspective as well: this one party knows everything about the platform, which is fine, but you can't then easily choose another party to manage the system. While you could, it would be very difficult, so you're effectively locked into that vendor. It's not a great situation.

This brings us to the benefits of microservices, when you contrast them with that example monolith. They're definitely easier to change and deploy, basically because they're smaller and more decoupled. They can be built using different technologies, so you're not locked into one technology stack: typically different teams build different microservices, and it's up to them to choose what technology to use; as long as the services can talk to each other, it doesn't really matter what they're built with. You have increased organizational ownership and alignment. With the CRM system, many different parts of the business rely on that one system, and if one part wants to change it, another part may not, or may have their own changes they want prioritized, so it becomes a headache of managing priorities and managing change. With microservices, typically each part of the business owns their own services, so they can change them and prioritize their own changes to their hearts' content.

Resilience: we will see this with an example in a minute. One service can break, and that's fine; the other services will continue to run, whereas with a
monolith, if the monolith goes down, typically the whole thing goes down. With microservices you end up in a situation where parts of the system may be down, but other parts continue to run, and I've got an example coming up.

And they're scalable. Coming back to this issue of scalability: with a monolith you can scale, but you have to scale the entire thing, which is rather inefficient. With microservices you can scale out just the services you need; you don't need to scale everything, so it's much more adaptive to different demand patterns. And they're built to be highly replaceable and swappable. That's the whole point: if you're not happy with a particular service, you can replace it with another service, built by the same team or possibly a different team. They can be swapped out; you're not locked in.

All right, let's come on to an example of a relatively simple microservices application that speaks to some of the concepts we've just mentioned. Let's say we have an airline that has decided to go with a microservices architecture. You could split your application up as follows. You would, or could, have a flight catalog service: the service containing all the flights people can book. Let's say in this example it's written in .NET with a SQL Server database, we have one running instance of it, and it's built by a team based in Scotland. We then have a second service, the booking service. Let's say it's built using Ruby on Rails, it has a different type of database, a PostgreSQL database, there's one running instance, and it's built by a team in the USA. What you'll see here, which is quite obvious, is that each service has its own data store: the flight catalog uses its own data store, and the booking app has its own data store. Now, these services will more than likely share information, and that's really what this whole course is about: how do these services share information that the other relies upon? But one takeaway is that they do not share the same database. They may share data; they will not share the same database. And it comes back to the point that one of these instances can go down and the other will continue to run, pretty much uninterrupted, possibly with some limitations.

Then, to round out our airline example, we have a check-in service. Somebody uses the flight catalog service to browse flights, uses the booking service to book a flight, and then finally checks in at the airport, or online. So they all have different domains, and they may be run by different parts of the business. Our check-in service in this case is written in Node, we have one instance, and it uses MongoDB, so, to labor the point, it has its own data store, and it's written by a team in India.

The takeaway from this slide: separate services, written by different teams, each with their own database, and somewhat reliant on each other. For example, you won't be able to check in unless you have access to booking details, and again, the whole point of this course is to understand how we can move data between services. But they are independent. If the booking service goes down, people can still browse the flight catalog: they can still go online and see what flights are available; they just can't book. Likewise with check-in: people can still check in at the airport or online; they might not be able to book a new flight, but they can still check in. Or say the check-in service goes down: people can still browse and book flights; they just can't check in. The point is that even though one service may go down, the others remain up, and certain parts of the operations they are responsible for continue to run, unlike a monolith, where when the whole thing goes down, everything goes down. And of course, what we're going to explore in this course is that even if a service does go down, we'll be able to recover it much more quickly.

The other thing to talk about is scaling. Let's say this example airline has a sale on, and lots of people are coming to the website to look at flights and book flights. You can scale out additional instances of those services, while leaving the check-in service untouched, because the number of check-ins probably isn't going to change unless you increase the number of flights. Say the number of flights stays the same, but there's a run on the cheap flights the airline is offering: you can scale your instances up and down very easily.

That brings us, finally, to ask: does that mean everything about microservices is awesome and everything about monoliths is completely terrible? Absolutely not, and on this slide we have some disadvantages of microservices, or certainly some things to consider when weighing microservices against a monolith. Microservices are difficult to implement; there's no getting around that. Writing an individual service might be somewhat OK, but designing and building an entire microservices architecture is difficult. When you're starting out, especially on a greenfield site, microservices may not be the best approach, because if you don't understand the problem domain, you may spend most of your time just analyzing the best way to break down and compose your services, and you may end up in analysis paralysis, never building anything and going around in circles trying to determine what microservices you should build. So yes, you need strong domain knowledge, and on a greenfield site that may be difficult.
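The selective scaling described above is exactly the kind of thing Kubernetes makes easy. As a purely illustrative sketch (the service and image names here are hypothetical, not from the course), each service gets its own Deployment, and during the sale you bump the replica count on only the service that's under load:

```yaml
# Hypothetical Deployment for the airline's booking service.
# Only this service is scaled up during the sale; the check-in
# service would keep its own Deployment with replicas: 1.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: booking-depl
spec:
  replicas: 3          # scale just this service out to 3 instances
  selector:
    matchLabels:
      app: booking
  template:
    metadata:
      labels:
        app: booking
    spec:
      containers:
        - name: booking
          image: example/booking:latest   # hypothetical image name
```

You could apply this with `kubectl apply -f booking-depl.yaml`, or scale an already-deployed service imperatively with `kubectl scale deployment booking-depl --replicas=3`. The monolith equivalent would mean replicating the entire system just to handle extra bookings.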
microservices are distributed so that's something that your people tend to forget about and when you're running you know in a development environment you're not really taking into account the fact that these things necessarily are running over a network so you can have services running in different geographies that rely on the network and of course the network can fail or it can slow down and so that's something you really need to take into consideration when building microservices and paradoxically you know we keep going on about this concept of you know micro services are totally decoupled well you know as a paradox they still need to be coupled to something a micro service that doesn't talk to anything else is probably not actually a micro service it's maybe a mini monolith or something like that so yes you want to reduce lots of chatty messaging between services but ultimately in order for them to function as a micro service architecture they need to be coupled to something even if it's just a message bus or the database or more than likely other services so the good things about monoliths simpler to implement no doubt simpler to implement at least initially you still can use ci cd daily deploys you can do small changes so the example i gave of monolith it was a huge beast of a system and you were unable just given the history of that application to have a ci cd type pipeline do daily deploys all that kind of stuff but you can absolutely do that with a monolithic system assuming that you build it in the right way to begin with so one of the advantages of micro services is you know you can use an agile approach to deliver them and that's absolutely true but it's also true of monoliths so just because it's a monolith doesn't mean to say that you cannot deploy changes daily for a monolith you absolutely can but you have to make some considerations and build that application in a particular way but it is still absolutely possible building a monolith is also a great 
exercise in familiarizing yourself with the domain so again especially if you're like a startup or something like that and you don't really know your domain that well and you don't even know if your organization is going to be around next year spending thousands and thousands of dollars building a micro services application where you don't really know the domain may be an absolutely terrible idea and you may end up spending half your time just trying to analyze the problem whereas if you just go ahead and start building a monolith number one it's a bit simpler to implement at least initially and it also is a great exercise in familiarizing yourself with the problem domain and so if you're one of the lucky startups that actually does start making money then that's fine you can then possibly move to a microservices architecture at that point but at least if you've taken this approach and gone with a monolith to begin with you're probably going to get up and running much much quicker and also you're going to start to understand your domain so when you do come to actually implementing microservices you'll probably compose them in a much more efficient way and you can still have two or three big services this is kind of like a hybrid type approach i suppose you might not have micro services where you've got tens or even hundreds of services you may end up just having two or three big applications that each don't do just one thing but kind of take care of a bunch of stuff that's kind of similar so it's a kind of halfway house approach and that may work well for some people and generally speaking monoliths won't be as reliant on the network as a micro services application they're typically not considered distributed applications so you do not have the same considerations around network latency or even the network being unreliable as you would with a microservices application
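to make the point about unreliable networks concrete, here is a minimal sketch (my illustration, not code from the course) of one way a microservice might wrap a cross-service http call with a naive retry, since in a distributed system the call can simply fail; in a real system you would more likely use a resilience library such as Polly, and the class and method names here are assumptions:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

// Hypothetical helper - illustrates treating network failure as normal,
// not exceptional, when one service calls another over the network.
public static class ResilientClient
{
    public static async Task<string> GetWithRetryAsync(
        HttpClient client, string url, int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                var response = await client.GetAsync(url);
                response.EnsureSuccessStatusCode();
                return await response.Content.ReadAsStringAsync();
            }
            catch (HttpRequestException) when (attempt < maxAttempts)
            {
                // back off briefly before retrying - the network may recover
                await Task.Delay(TimeSpan.FromMilliseconds(200 * attempt));
            }
        }
    }
}
```

a monolith making an in-process method call never needs this kind of code, which is exactly the trade-off being described above.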
all right so let's come on and take a look at the two services that we're going to spend our time building as part of this course so let's move on to the first one and this is our platform service and that's what we will refer to it as throughout the rest of the course now what is the platform service well the idea is that it's basically going to function as a sort of asset register for our fictional company so it's going to contain all the systems all the platforms that our fictional company is running so yes as it says it's used to track all the platforms and systems in our company now we're not building out a full asset register system just to be upfront with you but we are going to build our platform service to operate in that way and it's built in this case by one team the infrastructure team so it may not be called the infrastructure team in your organization maybe you might call it the operational team or the ops team or even the devops team so this particular team they have a strong interest in maintaining a running asset register of all the systems in the organization so that's the team that builds it that's the team that owns it and they'll more than likely prioritize their changes against it but that's not to say that they're the only team that will use it yes of course they will use it but it may be used by the technical support team so again when faults are raised you may want to know what system you are raising a fault against and you know the technical support team will more than likely
consume data from this service the engineering team may want access to it for similar reasons the accounting team they may need access to it just to understand the assets that we have in the organization and what we need to write down each financial year all that kind of horrible stuff and even procurement the procurement team may be interested in data that this platform service provides when it comes to renewing licensing agreements or signing new contracts all that kind of stuff so yes owned by the one team built by the one team but will be used potentially by many other teams so our second service is the commands service and what is this the command service basically is acting as a repository of command line arguments for given platforms so the idea is it's going to aid in the automation of support processes so imagine the scenario where you may need to restart systems in a particular way or you may need to understand how to back up databases in a particular way rather than memorizing reams and reams of different command line snippets for different types of platform this service is going to contain a repository of all those command line snippets and so built by the technical support team so this is the team that's going to build this repository they're going to run it and support it and they are going to be the principal users of it so used by the technical support team it may also be used by the infrastructure team may be used by the engineering or development team they may have an interest in getting access to this as well now one thing that's going to kind of recur through the course is the fact that i have the word platforms on this slide and that's no accident the command service is somewhat reliant on the platform service and getting that platform data into the command service is one of the things we're going to focus on as part of this course but the takeaway is two separate services can theoretically or should be able to run
completely in isolation of each other even though there is somewhat of a dependency between the two of them all right so let's come on and take a look at our overall solution architecture so this is kind of the 50 000 foot view of what we're going to build now this does not reference kubernetes or any kubernetes type objects just yet when we come on to doing kubernetes i overlay all the kubernetes concepts on top of this view here so this one is abstracted away from that and it's really just the components that we're going to build or use so with that being said the first service we're going to have is our platform service it will have its own sql server database and together that kind of forms the entire microservice it's basically going to be a rest based api so the public interface the interface that our consumers are going to use is a rest-based api and yes this service sits within what we're calling our internal microservices domain now we can theoretically access that api directly from externally into our internal microservices domain and we will do that just while we're developing but ultimately we're going to use this concept of an api gateway which is really going to front up all our services and so any external requests coming in are going to then get routed internally to our relevant services and so in this case you can see our api gateway is routing through to our platform service using its rest api then our command service it's going to have its own unique database just for it just for quickness we're not going to stand up another sql server we're just going to use the in-memory database but the important point to note is that it's absolutely got its own database it does not share a database with the platform service or any other services for that matter and it too is a rest based api and it too will make use of the api gateway so the api gateway will route through to our command service so taking a look at this now it's all great and
everything but is this really a micro services architecture to be honest with you i would argue and say that at this point looking at the screen at the moment this isn't really a micro services architecture mainly because the platform service and the command service are completely and utterly unaware of each other which you might say is a good thing but there's no data sharing between the two in any way at all they're really just totally separate apps that happen to go through an api gateway so again there is a paradox here you do want to have services decoupled you do want to have them independent but at the same time you don't want them so independent that they don't talk to anything else that is the paradox so in order for it to be a micro services application in my view anyway there needs to be some degree of data sharing between the two and that brings us on very nicely to looking a bit more at our command service so what you can see here is a snippet of what the database may look like and you can see that we have a list of command line snippets how to do a particular thing such as stop a service or restart services or generate migrations and you'll see that we have those command line snippets but most importantly you will see that our command service does rely on the fact that it needs platform information so for the majority of this course what we're really going to be concerned about is trying to understand the best way of getting platform data from the platform service down into our command service now as we've said before the command service cannot reach into the platform service database absolutely cannot do that one possible approach would be that we couple quite tightly the platform service to the command service and what i mean by that is once we create a platform in the platform service we could theoretically get the platform service to post using http to the api of the command service and say here's a new platform that i
have created this does create quite a tight coupling between these services though because the platform service absolutely has to know where the command service sits and that does create this concept of a coupling so really what we're going to focus most of our attention on in this course is the idea of using asynchronous messaging via an event bus or a message bus and so what we're going to do similar scenario when we create a new platform in the platform service we're going to publish an event onto our message bus now all the platform service is aware of at this point is the fact that there is a message bus it is not aware of the command service nor does it care that there is one and in this case what we're then going to do is we're going to get the command service to subscribe to the message bus and receive those platform published events and then it will add those platforms to its own database so this idea of creating data in one service in this case a platform and it subsequently making its way down to other services is something that we call eventual consistency again services don't share the same database but they do need to share the same data and so the main problem that we're really looking at here is how do we make sure that all these services have all the data that they need and data that may come from other services so in this case we're going to use a publisher subscriber model asynchronous messaging to get that information down into our command service and again to labor the point the platform service does not even know that the command service exists and again we're only going to do two services as part of this video but as you can see here you may have other services that publish other events onto the event bus and subscribe to other events or similar events and you can see again that the xyz service has its own database the abc service has its own database so on and so forth so you have this decoupled architecture event driven
architecture which is really what microservices are about so to round off our solution architecture we are going to make use of grpc and grpc is used quite a lot in microservices so for that reason i've included it and what we're going to use grpc to do is allow our command service to reach out to our platform service and pull down any platforms that it doesn't already have but the point to note about grpc is that it is synchronous it's not event driven and so we are coupling our command service to some extent to our platform service okay so let's move on to taking a look at our service architecture so again we'll look at the main components that make up both our services so we'll start with our models our internal data representations we'll have a db context which mediates those models down to our persistence layer in this case sql server we're going to make quite extensive use of data transfer objects dtos which are external representations of our model data and they're mapped as you can see there to our models and we're also going to make use of a repository pattern which just abstracts away our db context implementation we're going to have our api rest controller which i'm calling synchronous in so http requests coming in externally reach into a repository pull back any data and then send back the http response using a dto so that's really our external contract going out to our external consumers we're also going to have a http client which i'm calling here synchronous out and it's going to make http requests to in this case our command service and receive back a response now i'm including this just for completeness just as an example we're not making extensive use of this but we are going to build it out just to demonstrate that particular approach then coming on to the main event which is to use a message publisher which is asynchronous out and so our platform service yes is going to publish events onto our event bus onto our message bus and then
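to sketch what that asynchronous publish side might look like in code, here is an illustrative fragment; the interface name, the dto shape, and the event string are my assumptions for demonstration, not necessarily what the course builds later:

```csharp
// Illustrative sketch only - names are assumptions, not the course's final code.
// When a platform is created, the platform service publishes an event onto
// the message bus; it knows only about the bus, never about the command service.
public class PlatformPublishedDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Event { get; set; }   // e.g. "Platform_Published"
}

public interface IMessageBusClient
{
    void PublishNewPlatform(PlatformPublishedDto platformPublishedDto);
}

// In the controller, after saving the platform to this service's own database,
// something along these lines would run:
//
// _messageBusClient.PublishNewPlatform(new PlatformPublishedDto
// {
//     Id = platform.Id,
//     Name = platform.Name,
//     Event = "Platform_Published"
// });
```

the key design point is that the publisher's only dependency is the bus abstraction, which is what keeps the platform service unaware of its subscribers.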
finally we're going to use grpc within our platform service as a synchronous in service and so clients in this case our commands service can use grpc to request platforms back from our platform service and we're using grpc to do that now an important point to note the only external interface here is our controller our http web api controller everything else that is not denoted by a green line is internal within our microservices domain and as you can see from the darker coloring on the publisher it's asynchronous the other ones are synchronous so command service architecture i'll just go through these quickly it's exactly the same again we're going to have a controller a synchronous in responding to http requests and giving http responses we're going to have a message subscriber so this is asynchronous in so it's going to receive messages from the message bus and we're going to have a grpc client synchronous out so this command service is going to make requests to the grpc service in our platform service and again the only external interface here really is our web api controller our rest api controller and with that i think we start coding all right so i'm going to come over and we're going to start to build our platform service so i'm just going to start a new visual studio code window and i think it's probably started on my other screen indeed it has let me just pull that over rather annoying now i've got my screen running at 150 magnification so hopefully that's large enough for everybody to read clearly what's happening on the screen but at the same time balanced so that it's not too cramped for writing code so i might change it if it's a bit too cramped but i think this is going to be okay so i'm going to bring up the integrated command terminal by doing control and back quote and then change into my working directory now this chances are will be something different for you but for me it's on my d drive and it's season four
and then it's episode three do a quick directory listing and it's empty now we're going to create a few projects in here as our main solution folder but for the moment let's just work with individual projects so the first one we are going to set up now actually before we do that let's just do a dotnet dash dash version just to check the version of dot net that we have installed and it's version five i would strongly suggest you use version five and not three one which probably will work but you might have to make some tweaks here and there to get it working so go with version five all right so i just did a cls to clear the screen and we're going to do a dotnet new web api is the template type that we want and then we give it a name of platform service and i'll just make sure i've spelt platform and service correctly looks like i have so hit enter and that will go away and scaffold that up for us so a quick directory listing will confirm that that is the case so i'm just going to do a code open on this folder i'm going to open this folder up so platform service that will open our project up for us in visual studio code now i'm assuming that you are somewhat familiar with web apis and dot net as i say if you're not then i would suggest you check out one of my other videos that i've put links to in the description below we'll just click yes to this and this will just create a vs code folder just so it remembers our settings and so on and so forth so i'm going to delete the weather forecast class it's just a class that's used by the templated project and i'm also going to come into controllers and i'm just going to delete the weather forecast controller fantastic and then you should be familiar with the regular anatomy of a web api project and we've really only got a controllers folder now empty but don't worry we will be populating that relatively soon the first thing i want to do is really just start to add in our
dependencies or package dependencies that this service will require so again i opened up my start menu there so control and back tick to get the integrated terminal up and running and i like to leave the platform service cs proj file open so as we add packages you'll see them being added to this file just to make sure that we've not made any typographical mistakes so the first thing i want to add is dotnet add package and the first one is auto mapper and i'm actually going to copy the package name from my notes over here because it's a little bit long-winded there you go so it's automapper.extensions.microsoft.dependencyinjection i'll hit enter i'm sure you don't want to see me typing that in and you can see it's been added as a package reference to our cs proj file and then the next one dotnet add package and again i will save you from my typing because i'm sure you don't want to see me type out some rather long package names this one's a bit shorter though microsoft entity framework core which of course our database context class will be using to communicate with initially our in-memory database and then subsequently our sql server database but we do need a few more packages in that space so i just did up key so dotnet add package microsoft entity framework core design you need that one as well and that's just at design time this does not get included at run time and then the last one in the actually you know it's not the last one we've got two more to go so microsoft entity framework core we are initially going to use an in-memory database because it's just a lot simpler than connecting into sql server but we will of course use sql server eventually so yeah we do need microsoft entity framework core dot in memory and then the last one in this kind of entity framework core space is of course sql server so microsoft dot entity framework core dot sql server and i think that's us good for now so you can see that there was one other package in there the
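for reference, the scaffolding and package steps just walked through boil down to the following commands (assuming the .NET 5 era SDK is installed; exact package versions are omitted, and later package versions may behave differently):

```shell
# check the SDK version - the course targets .NET 5
dotnet --version

# scaffold the web api project
dotnet new webapi -n PlatformService
cd PlatformService

# add the packages discussed above
dotnet add package AutoMapper.Extensions.Microsoft.DependencyInjection
dotnet add package Microsoft.EntityFrameworkCore
dotnet add package Microsoft.EntityFrameworkCore.Design
dotnet add package Microsoft.EntityFrameworkCore.InMemory
dotnet add package Microsoft.EntityFrameworkCore.SqlServer
```

each `dotnet add package` line shows up as a PackageReference entry in PlatformService.csproj, which is why keeping that file open is a handy sanity check.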
swashbuckle that just gives us an open api spec out of the box which is quite nice we might take a look at that later if i remember but for now we're going to leave it at that now we are going to add other packages but i'm not going to put those in just yet i'm going to wait until we come to the relevant section and then we'll add those in i'll just have to make sure that i do remember to do that so for now that's good our application is scaffolded up we've included our packages so now we are going to move on to adding in our model all right so back over in our project we're going to create our first model well in fact our only model for this service so in the root of our project i'm just going to right click and do a new folder and i'm going to call the folder models although we're only going to have one of them it doesn't matter we'll use that naming convention anyway so right click on the models folder and new file and the model is going to be called platform dot cs so it's just a plain old c sharp class and this is really our internal representation our application's internal representation of our data why am i saying that well we're going to use data transfer objects in a bit which are basically our external view of our data going out so models internal dtos external all right so the namespace should be platform service and then we'll do models all right and then the name of our class is just a public class platform this is probably going to attempt to autocorrect no it doesn't that's it so it's public class and then we're going to add some properties to it so to shortcut writing properties if you type prop and then tab you can tab through the property that is created so yes in this instance we do want an integer so tab over capital i d to give us our model id and that will eventually become our primary key down in our database next prop so again prop tab this time it's a string and it's just going to be called name i'm only going to do a
few properties here the next property will be another string and i'm going to call this publisher okay and then the final property and i'm not really going to use this but i just want to labor the point i'm going to create another property i'll make it a string for simplicity but i'm going to call it cost now that's a bit of an ambiguous property name the point here is that for this service you might want lots of properties that describe the platform so cost might be how much it costs or it might be its annual subscription or it might be a support cost or it might be the cost to run and you could add all these in as well as lots and lots of other properties very specific to this domain to this micro service it's a platform service it's an asset register you might have loads of attributes in your platform model that you might want to have there spoiler alert we're going to have platform representations elsewhere in our microservices ecosystem in our commands service specifically and it's just not going to be interested in that type of data so the point i'm making is yes you're going to have this concept of a platform that permeates through our system but different services will have different views on what that actually means for them in terms of the context of what they're trying to achieve okay so a bit of a long-winded way to say that you could have loads of stuff in here that other services just may not be interested in when they come to modeling or representing this type of data cool so let's just save that off and then the other thing we want to do just to help entity framework along a little bit is just decorate these with some annotations now the first one i'm going to use is the key annotation just to say that this id is going to be a key in our database so make sure your cursor is in the offending article it's got the little red squiggle under it make sure your cursor's somewhere in there and control and period to get visual studio code to
give you some code suggestions and i'm going to select the first one here using system component model data annotations and it'll add that in at the top for us now we're going to be doing that control period a lot throughout this course not a million times but a lot i'm not going to call it out every time although i'll probably mention it again but that's how you get those code suggestions up just a word of interest by default entity framework core would interpret this property as a key anyway so you don't really technically need to specify that it is a key but i just like to do it just to be a bit more complete and it also acts as a kind of in-code documentation although it's not really documentation similarly this next one is not actually required because a primary key is of course mandatory so you don't really need to have the required attribute but i just like to put that in because well i'm maybe a bit weird and then i'm going to make all these other properties required as well so they cannot be null all right so it's a very simple class and you might be thinking that's a bit too simple believe you me you don't want it to be any more complex because there's a whole lot of other complexity coming down the line so we want to keep it relatively straightforward and relatively easy to understand cool so that's our model created we'll move on to the next section which is looking at our db context all right so the next thing we want to do is create our db context so again just a quick visit to our architecture so we've created our model just a platform model we're now creating our db context which effectively maps our models down to our persistent data store now to begin with we're just using an in-memory database but we will move to sql server eventually so this is the next thing that we're going to tackle so back in our project new folder in the root and i'm just going to call this data i'll put a few data related things in here and right
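before going on, pulling together the model steps just described, Platform.cs ends up looking something like this (a sketch assembled from the walkthrough above, with cost deliberately kept as a string for simplicity as discussed):

```csharp
using System.ComponentModel.DataAnnotations;

namespace PlatformService.Models
{
    // Internal representation of our data - DTOs will be the external view
    public class Platform
    {
        [Key]        // EF Core would infer this anyway; kept as in-code documentation
        [Required]   // strictly redundant on a primary key, as noted above
        public int Id { get; set; }

        [Required]
        public string Name { get; set; }

        [Required]
        public string Publisher { get; set; }

        [Required]
        public string Cost { get; set; }   // a string for simplicity in this course
    }
}
```

other services in the ecosystem will carry their own leaner representation of a platform, which is the bounded-context point being made above.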
click a new file and we'll call this appdbcontext.cs and then the namespace is just platform service dot data and it's just a public class called app db context and we're going to inherit from the base db context class and it's probably going to try and auto correct here but let's see how we go yeah it did so let's go back and take that back out and just say we're inheriting from db context so we want to bring in that namespace so control period using microsoft entity framework core fantastic and then we just want to create a constructor so that we can pass in additional options if and when we need them so constructor actually the short code for a constructor is ctor tab and you can just tab through and we're going to pass in one parameter which is db context options of type whatever our class happens to be so app db context and we'll just call that opt and we pass that to base okay and we don't need to put anything in there just yet but that's the constructor we'll use for this and then most importantly well it's as important we need to define a database set our db set which relates to our model so that's really what's telling our db context to mediate or mirror our internal model down to the database so we just specify a property prop tab this time of type db set and the type of db set is platform and it'll complain so control period to bring in the namespace for models and we just want to call that platforms plural okay so that's our db context class set up then what we need to do is we need to register it over in startup so let's do that now and in our configure services method we want to add that in here i'm just going to take these comments out because the code looks busy enough at this level of magnification without additional redundant commentary so yeah we're going to put that in configure services which is where we register all our services no surprises there so i'm just going to add that here services dot add db context what
type of db context are we going to add just the class that we've added in the last section and it probably won't find that so we need to bring that namespace in control period using platform service data fantastic and then we'll just specify some options and we're going to tell it what type of database we are wanting to use we're wanting to use an in memory database and i don't think it's finding this we've probably got to bring back yeah using microsoft entity framework core let's bring that back and let me take a new line on this just so you can see that a bit clearer and then you just give the database a name it can be anything it doesn't really matter i'm just going to call it in mem so we are using our db context with an in-memory database and again we will move to sql server and we'll make some determination in here a bit of a check to see whether we're using a development environment where we'll still use the in-memory database or when we move to production to kubernetes then we will use sql server but we can leave that for the moment we just want to get up and running with our db context and that's it for our db context so in the next section what we're going to come on to doing is we are actually going to work with a repository so we're going to start to set that up in the next section and really what i want to try and do is get all this blue stuff kind of set up this data stuff set up before we move on to the more interesting stuff not that this isn't interesting but we get all this wired up and then we can move forward so repository next all right so back over in our project what i'm going to do is i'm just going to get rid of this terminal for the moment gives us a bit more space on our screen and yeah we're going to create a repository now and we're going to use the interface concrete class pattern because we will inject our repository through dependency injection in our startup class but if you don't quite follow that don't worry
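as a checkpoint, the db context and its registration as described so far come out looking roughly like this (a sketch from the walkthrough; the in-memory database name "InMem" is the one mentioned above):

```csharp
// Data/AppDbContext.cs
using Microsoft.EntityFrameworkCore;
using PlatformService.Models;

namespace PlatformService.Data
{
    public class AppDbContext : DbContext
    {
        // Options are injected so the database provider can be swapped later
        public AppDbContext(DbContextOptions<AppDbContext> opt) : base(opt)
        {
        }

        // Tells the context to mediate our Platform model down to the database
        public DbSet<Platform> Platforms { get; set; }
    }
}

// And in Startup.ConfigureServices:
// services.AddDbContext<AppDbContext>(opt =>
//     opt.UseInMemoryDatabase("InMem"));
```

later the course swaps `UseInMemoryDatabase` for `UseSqlServer` behind an environment check, which is why the options are passed in through the constructor rather than hard-coded.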
we'll do it step by step anyway so the first thing we want to do in our data folder yet again is create an interface and an interface basically just specifies the method signatures effectively that our repository will support and then any concrete class can come along and implement those interfaces so yes over in our data folder right click new file and because we're defining our interface first we will specify capital i and i'm just going to call this platform there we go this is eventually going to be platform repo for repository dot cs and i've mistyped that it sounds like it's latin and did it again repo all right i platform repo dot cs okay and the namespace is platform service data and it's just a public interface called i platform repo and what we'll be doing here is specifying the method signatures effectively so the first one and if you've done any entity framework core stuff before you'll realize what this one is for but basically we need to save any changes that we make so whenever we add stuff to our db through our db context adding it or deleting it is not enough you actually have to save those changes so we'll need to put something like that in here which is what that is and you'll see how that works when we come on to doing the concrete class and then the first kind of proper signature that i want to define is getting all our platforms we want to be able to use our api use our service to retrieve all platforms so specify an i enumerable which we don't have the namespace for so i enumerable of platform our platform class from platform service and we'll just call that get all platforms i'm not going to pass anything in so quite a simple operation so bring in using system collections generic for the i enumerable and then we'll bring in let's try that one more time using platform service models there we go okay so this interface should allow us to get all platforms as an i enumerable the next one we want to do is just return
an individual platform so not an enumerable this time just returning a platform and we'll call this get platform by id and this time we'll pass something in which will be the identifier of the platform the id of the platform the unique id and then the last one is we want to be able to create a platform so i'll just make that void call it create platform and we will pass in a platform object into that method and then we'll have to call save as well when we come on to doing that but that's basically our interface and these are the three or really four but these are the three things that this interface should provide to somebody that's choosing to use this interface and now what we need to do is we need to build out an implementation class of this interface so we'll do that now so again in data right click new file and we'll just call this platform repo dot cs so over here we create our namespace platform service data fantastic and then we just specify it's a public class called platform repo and it's going to implement i platform repo okay and straight away it will complain because we've not actually implemented any of these methods here so again our best friend put your cursor in there control and period and click implement interface now it's not clever enough to actually fully implement it all it does is create these placeholder method definitions and then you can just see it's just throwing a not implemented exception so yeah it doesn't fully implement it but it gives us the placeholders that we need which is good enough and actually you wouldn't want it to implement it anyway because generated code is not that great but this is fine all right now before we come on to doing that though we want to actually create a class constructor and this is really our first proper use of seeing how dependency injection works so again ctor to create a shell constructor and what it's going to expect to be
injected in is an instance of app dd context and we'll just call that context okay so whenever we use uh up whenever we use platform repo vr interface when it gets constructed it's going to expect an app db context to get passed in and because we have registered our app db context here in our configure services method and that will occur okay so i'm not going to go too much more about that i would expect you to be somewhat familiar with dependency injection so i'm not going to explain it further than that but we will use it so much throughout the rest of this course that it will it will definitely become a second nature to you and then to finalize we actually need to make use of this context that's injected in so we do that via a private uh private field and the con the convention is to typically start these with an underscore but like i do anyway you'll see it used a lot like that so basically whatever's injected in here will get assigned to this and again this doesn't exist so control period and what you want is this one here generate read-only field and i'll put it up here for his private read-only context and you'll see this pattern used time and time time and time and time again throughout the rest of this course and that's really dependency injection in a nutshell so we can come on to working with our methods down here so i'm going to start with the simplest one and we'll do save changes so let's just get rid of that for the moment and all we want to do is return and we're going to make use of our db context and we're going to say save changes on that context and then this little bit here is really just saying if the result of save changes is greater than zero then we basically return uh true back so we had something that changed so that's all that is and we will call this after any create update or delete operation any unsafe operation that we have on our data source we will have to call that to make sure those changes are flushed down to our database all 
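pulling the last few minutes together, the interface and the repository it describes might look roughly like this — a sketch only, shown as one listing for brevity (in the video they are two separate files in the data folder), and the Id property on Platform is assumed from context:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using PlatformService.Models;

namespace PlatformService.Data
{
    public interface IPlatformRepo
    {
        bool SaveChanges();
        IEnumerable<Platform> GetAllPlatforms();
        Platform GetPlatformById(int id);
        void CreatePlatform(Platform plat);
    }

    public class PlatformRepo : IPlatformRepo
    {
        // supplied via constructor dependency injection
        private readonly AppDbContext _context;

        public PlatformRepo(AppDbContext context)
        {
            _context = context;
        }

        // true if at least one change was flushed to the database
        public bool SaveChanges() => _context.SaveChanges() > 0;

        public IEnumerable<Platform> GetAllPlatforms() => _context.Platforms.ToList();

        // may return null when no platform matches the id
        public Platform GetPlatformById(int id) =>
            _context.Platforms.FirstOrDefault(p => p.Id == id);

        public void CreatePlatform(Platform plat)
        {
            if (plat == null)
            {
                throw new ArgumentNullException(nameof(plat));
            }
            // nothing is persisted until SaveChanges is called
            _context.Platforms.Add(plat);
        }
    }
}
```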
all right so let's start with the next one which is getting all platforms and it's really quite simple we return again making use of our injected context and on this you can see that we have our platforms dbset available that's just this property here on the db context we're just referring to that so we're going to return platforms to list and i think we'll probably have to bring in the linq namespace here yep so control period using system linq it puts that in at the top for us fantastic and that's all we need to do for that next we'll do get platform by id and it's slightly more complex but not much so return context platforms but this time we're going to use this first or default method to say give us the first instance of the platform or the default value and then we're going to specify some criteria using a lambda expression so we'll say p goes to p dot id where it's equal to the id that we pass in so quite straightforward we're basically saying return the first or default item from platforms where the platform id equals the id passed in it's conceivable that we get nothing so you can see we could get a null return there but nonetheless that's okay and then the final one is creating a platform so first thing we want to do is check that we've had some sort of valid object passed in so we'll just do a fairly simple check if plat equals null then we don't want to do anything actually i probably shouldn't have deleted that throw statement but we'll just type it in again and we'll use a slightly different exception an argument null exception with name of plat okay it just gives us a more descriptive exception and we'll bring in that namespace using system so if the platform is null if it's not been defined and we're trying to create one then we'll just throw this exception otherwise quite straightforward context platforms add and we just add the platform object that we've been passed fantastic now when we come to use this as i've said before we'll then need to call save changes to flush that down to the database and that's it for our repository and we're not going to change this now that's our finalized repository and our finalized app db context so the next thing that i want to do just before we do our dtos is create a little class that's going to prepare our database so when our app starts up i want to put some mock data into our database just so that we have some there to work with and when we come on to using sql server i want to also run what's called migrations to actually create the data structure in the database with an in-memory database you don't actually need to run migrations which is kind of good but kind of bad the other thing i will say about the in-memory database that we're using now is that it really is only used for test purposes it's not a production type database and of course because it's in memory if the service restarts you'll lose all your data so i really can't stress this enough i probably should have mentioned it when we were setting up our db context it's really not even used much in development environments it's really just used for testing but we're making use of it here because it's going to get us up and running much quicker than setting up databases we will eventually move to sql server though so yes anyway what we want to do next is prepare our database and put some data into it so we're going to create another little class to do that and then we'll come on and finish off with our dtos before we move on to writing our controller all right now actually just before we go on with creating our database preparation class
one thing i forgot to do having created our repository interface and our concrete repository class is to actually come over and register that for dependency injection so that when somebody asks for the repository interface we give them the concrete class we specify so the way we do that is over in configure services we'll make use of services again and we're going to add a scoped service and we will just say if somebody asks for i platform repo we're going to give them platform repo cool so yes obviously very important when it comes to using dependency injection we register our interface and the concrete implementation of it in configure services somebody asks for this we give them this fantastic so let's just save that off the one thing we've not done so far is build so let's just bring up our command line and do a dotnet build there's not much to run yet so we'll just do a build it should be okay and we'll run it once we've come to the end of this section of data stuff so that's all good i just want to make sure we're okay to go so yes next we're going to move on to creating a class to prepare a database with some example data okay so just back over on our project i'll close down our startup class to give us a bit more space and we're going to create our database preparation class so over in data right click new file and we're going to call this prep db dot cs and then we'll begin with a namespace as usual platform service dot data and this is a public static class with the name of prep db it's static because we don't need to create instances of it and this class is going to have two methods one public and one private and the public method is going to set up a database context for us that we can use now this isn't a constructor this is just a public method so you might be going wait a minute i've got a couple of questions here why aren't we using constructor dependency injection great question well we can't because it's a static class and the second question you might have is why are we using a db context why don't we use the repository that we just created and that's another great question so the reason i'm using a db context in this case is that eventually i'm wanting to use this class to generate migrations in our sql server database and it's just much easier to do that directly on a db context all right and there's a couple of other methods that we're going to use here that i don't really want to build into my repository anyway this class is really more for testing this is not something i would push into production it's really just for us and for this video to help us set things up in our database environment so that's the two reasons we can't use constructor dependency injection and i'm using a db context mainly because i want to use it eventually to create some migrations all right so with that being said let's create the signature for our first public method so it's public static void and i'm going to call this prep population now before we define what we're going to pass in if we come over to our startup class we're going to call this prep population method from startup so once configure services has run and configure starts to get run we're going to call it in and around here and what we're going to pass in in order for us to be able to use a db context is this i application builder instance here we're actually going to pass this through into it and i'm going to use it to create something called a service scope which we can then use to create a db context so it's a little bit heavy at this stage in the course and it's slightly off topic but we have to do it and i'm just trying to explain it as best i can so hopefully you'll follow the code it's not that bad but let's go on with it so we need to pass an instance of this to this method over here so let me just code that up now so i application builder and we won't have the namespace for this obviously and we'll just call that app i'll bring in the namespace for that and it looks like i spelled it wrong there we go using microsoft asp net core builder so that's a good tip if you're doing the control period thing and you don't see what you want chances are you've spelled something wrong okay fantastic now just before we continue with this i'm going to pop over to our startup class and call this method just so i don't forget because we've got everything startup needs in order to know about it so prep db we can access the class directly because it's a static class then prep population and we're just going to pass over app all right and that's all we need to do in our startup class so i'm going to save that off so back over in prep db we will carry on so as i say we need to create a service scope in order to derive a db context so i'm going to do that via a using statement so i'm going to create first of all a variable called service scope and make that equal to using app application services create scope which i don't think will be available to me just yet so i'll have to bring in that namespace so control period that's the one we want microsoft extensions dependency injection fantastic now in here we're actually going to call our next method our private method which is the method that's going to do all the seeding of our data and eventually run our migrations so let's just create the method signature for that it's a private static void method called seed data and this is
going to expect an app db context and we'll just call that context okay fantastic so where is seed data going to get an app db context from typically if this was a normal class it would be via constructor dependency injection but we can't do that as i've said so we're going to have to call this private method with an instance of app db context from within our first method so let's do that now so it's called seed data and then let's give it the db context it so desperately desires and we do that by making reference to the service scope we've just created upstairs using service provider and then get service and any ideas what service we want to get of course app db context is what we want to push in there and don't forget the round brackets at the end and we can close that off with a semicolon so this public method is basically complete so now for the remainder of this little section we're just going to fill out the seed data method as we're still using the in-memory database we don't need to worry about migrations we just need to push in some data we will come back and update this though when we move to sql server all right fantastic so let's move into here make sure i'm indented in the right place so the first thing i want to check is whether we have any data in our database relating to platforms at all so we'll make use of our context and we will just look at our platforms dbset and if you come back over to app db context we're just referencing this property here basically and then we can use this any syntax we're getting autocorrect issues because we don't have linq referenced so let's bring that in now using system linq yep so if the context platforms doesn't have anything and make sure you put in the exclamation mark the not operator so if we don't have anything then we want to push data in okay otherwise we do nothing now i'm just putting this else statement in really to help us with a bit of debugging output so i'm going to do a console write line and i'll bring in the system namespace so control period using system and i'm just going to say we already have data and that will come in useful when we actually use sql server and data is persisted at the moment because we're using in memory we're always going to be seeding data at startup once we shut something down and restart all the data's gone anyway so for the moment we're never going to see this message we'll only see it when we move to sql server but we'll put it in for now and then likewise up here i'm just going to put in a console write line and we'll say something like seeding data dot dot dot just so we know that it's working all right so now we'll come on to actually putting some data in and there's another nice method we can call on our platforms object on our context so context platforms and then we can use this method called add range it's quite nice because it allows you to add a number of objects at once all right so we're going to create a few new platform objects to pass in and remember the db context works off of the model the platform model nothing to do with dtos this is all internal to our platform service so a new platform i'm just going to bring in the namespace yeah it used the fully qualified path which i don't like i'd rather bring the namespace in at the top so we're going to create a new platform and then we simply specify name equals for the first one let's do dot net why not we'll specify the publisher which in this case would be microsoft and for the rather ambiguous cost attribute let's say it's free all right now i want to create a few more and rather than watching me type those out i'll just go over to my notes and copy in a couple more and you can of course put in as many as you like here you don't need to stick to the three i'm doing there we go in fact let me get rid of this thing at the side so ctrl b to take away the panel gives us a bit more space so nothing too controversial we're just creating a couple more platforms sql server express and kubernetes don't forget the commas at the end of each line except for the last one all right so that's cool we've added our products to the database effectively but do not forget and i always forget this do not forget to call context save changes because yes that add range will add stuff to the context but unless you save the changes it's not going to go into the database so that's it for the moment prep db is done it's going to start up give us a database context and seed our database so why don't we give that a go now we did add the call to this method in startup so it should be good to go so control backslash to get a command line up let's do dotnet build just to build it first although when you run it it builds i like to do the build before the run i don't know why dotnet run now so what we should see is this seeding data message there we go i think it flashed up on screen quite quick though yeah seeding data so cool it realized we're without any data in our in-memory database and seeded the data now we can't do anything with it yet we can't access it that's when we come on to our controller fantastic but what we're going to move on to now just to round out the data section is dtos we're going to create some dtos i'll do that next all right so back over in our application i'm just going to kill our running server with control c and get rid of this terminal so yeah we're going to come on now to dtos now
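putting the pieces of the preparation class together, prep db might look like this — a sketch under stated assumptions: the seed values and the kubernetes publisher are placeholders taken loosely from the narration, and cost is assumed to be a string property:

```csharp
using System;
using System.Linq;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using PlatformService.Models;

namespace PlatformService.Data
{
    public static class PrepDb
    {
        public static void PrepPopulation(IApplicationBuilder app)
        {
            // a static class can't use constructor injection, so we create
            // a service scope and resolve the db context from it instead
            using (var serviceScope = app.ApplicationServices.CreateScope())
            {
                SeedData(serviceScope.ServiceProvider.GetService<AppDbContext>());
            }
        }

        private static void SeedData(AppDbContext context)
        {
            if (!context.Platforms.Any())
            {
                Console.WriteLine("--> Seeding data...");

                // example seed values only – use whatever test data you like
                context.Platforms.AddRange(
                    new Platform { Name = "Dot Net", Publisher = "Microsoft", Cost = "Free" },
                    new Platform { Name = "SQL Server Express", Publisher = "Microsoft", Cost = "Free" },
                    new Platform { Name = "Kubernetes", Publisher = "CNCF", Cost = "Free" }
                );

                // AddRange only stages the entities – SaveChanges persists them
                context.SaveChanges();
            }
            else
            {
                Console.WriteLine("--> We already have data");
            }
        }
    }
}
```

and it gets called from the end of the configure method in startup with `PrepDb.PrepPopulation(app);`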
before we start coding just a little sense check by reviewing our architecture so we've created our platform model we created our db context we have an in-memory database we will move to sql server at some point and we spent some time creating a repository which our other classes are going to make heavy use of but just to round out this section we're going to now move on to dtos data transfer objects and dtos are basically external representations of our internal models and you might be going why do you want to do that well you don't want to expose your internal models to external consumers why not well number one there might be data in your internal models that you don't want to expose the other thing which is possibly just as important is that as soon as you expose your internal models to external consumers via an api for example straight away you're creating a contract with them and if you then want to start changing your internal models which is entirely conceivable and you should be able to do that you may break the contract with your consumers you're basically tying your internal implementation to an external contract which you absolutely do not want to do so you need to abstract your internal data away from your external data and that's all dtos are okay so they're very valuable and that's what we're going to do next all right so back over in our application make sure you've closed down any running servers and i'm going to do ctrl b to bring back up our directory explorer and i'm just going to close down these windows here to make it a bit nicer and we are going to create yet another folder so right click new folder and we're going to call this dtos then our first dto new file and i'm going to call this platform read dto dot cs now you might question this and go why don't we just create one dto you could but you're not really going to get the maximum benefit from them we're going to create two dtos in this section so the use case is consumers are going to be reading platform data from our service so this read dto is going to be a representation of any data that we want to provide in that context when anybody's reading from us the other scenario is going to be when somebody wants to create a platform using our service we're going to provide a view of the data that we expect to be given in that scenario and it's going to be very slightly different okay and that's where the real power of dtos comes in anyway let's go on with this one first so namespace platform service dtos and it's just a public class called platform read dto a very very simple class and again i'm just double checking i'm spelling platform correctly that looks good now what i'm going to do is come over to our models folder into platform and i'm just going to copy all of our properties and paste them in here okay i don't mind copying and pasting in this instance because i've already typed this code out and i'm sure you don't want to see me writing it out again so let's take a look at these properties and think to ourselves when somebody's reading data from us is it conceivable that we would want to give them the id of our platform object yes we absolutely would would we want to give them the name yes we would would we want to give them the publisher this is where it starts to depend on what the intention of our api is in this case we're going to give it to them and then cost the reason i put this property in is really to make the point that this could be an internal bit of data that we don't ever want to expose so if you didn't want to expose that you would take it out but for now i'm just going to leave it in we may take it out later we'll see how we go so for the moment our read dto is going to be exactly the same as our platform model and that's fine but what it does mean is that you can change your internal platform model later and this could remain exactly the same fantastic all right so that's all looking good so we're going to create another dto now a little bit more interesting new file and i'm going to call this platform create dto dot cs all right great so start the namespace as usual platform service dtos and it's a public class called platform create dto sometimes i type too fast for my own capability all right fantastic now i'm going to come back over here and copy this again but this time let's think about the data that we've got here this is specifying the data we want when somebody's going to create a platform with us the first property here the id do we want them to give us the id no in this case we don't because our database takes care of that by creating a primary key for us and that's where the unique key comes from so when someone's creating a platform with us we do not want them to supply the id as part of the body payload so we take that out okay as for the other stuff we'll leave that in because we do want it now let me just pop back over to the platform read dto i actually took out the data annotations i should have mentioned that because it's a read dto i don't really feel that we need the data annotations in this case so i'm just taking them out for the platform create dto though i'm going to leave them in because out of the box dotnet does some nice data validation stuff with our controller actions so if somebody doesn't supply one of these properties when they call our controller dotnet will come back with a nice pre-formatted error condition for us which is really nice so i'm going to leave them in in this case so in order to use
them control period and we'll bring in the data annotations namespace all right and that should clear up our errors we'll save that off so yeah our platform read dto i'm going to take out this double spacing it's kind of annoying me has got all the properties that we had in our platform model and we're not using data annotations but that's okay and then our platform create dto has a slightly different shape to it we don't require the id but we are using data annotations fantastic so that's our two dtos done for this point in time now at this stage we've got a platform model and we have our two dtos but they're just separate classes they don't know about each other there's no way we can map between them at this point but that's what we're going to do next and we're going to use a tool called automapper to allow us to map from one to the other so we'll do that next all right so moving on to automapper now let's just clear down our development environment and there's a few bits of setup required for automapper the first thing we should have already done so over in your csproj file just double check that you have the package reference included at the top for automapper and if not please add it otherwise the next bit's not going to work the next thing we need to do is register automapper for dependency injection so over in startup in our configure services method we're going to do that here making use of our services collection once again dot add automapper and that should be becoming a relatively familiar pattern you can see it's complaining here because we do actually need to provide a little bit of extra detail inside the parentheses so app domain current domain and then get assemblies and don't forget the parentheses at the end and we'll save that off so that's automapper now registered in our dependency injection container and we can use it throughout the rest of our application now the final bit of this section is to create what's called a profile so again we've got our dtos we've got our models they are not in any way mapped together and we do that via a profile so in the root of our project right click new folder we're going to create a folder called profiles and into that we're going to create a single class so new file platforms profile dot cs okay and then the usual setup the namespace of platform service and this time it's profiles plural and curly brackets and then it's just a regular old class so public class platforms profile and we're going to inherit from an automapper class called profile okay now this has tripped me up a few times our folder and namespace is profiles plural the class we inherit from is profile singular it's caused me a few issues so just make sure you get that correct and bring in the right namespace using automapper fantastic so that's the shell of our class set up now where we create our mappings is actually in the class constructor so just ctor to do the shortcut for platforms profile and tab across and we take out the parameters we don't need any in there so this is our class constructor and it's in here that we create our mappings so let's create our first map and i'll talk you through exactly what is happening and before i do that let me just do a little annotation source to target all right so we use something called create map and we specify the source so think about a scenario where somebody is using our service to read data from us ultimately that data the source of it is in our database it becomes a model what is a model a platform model that's the source object and we're going to then map that model to a platform read dto and give the consumer the dto so the source is the platform model and the target is the read dto that's what the mapping is now
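since both dtos are now in play, here they are side by side for reference — a sketch, one listing for brevity (in the video they are two files), with required shown as the assumed data annotation on the create dto:

```csharp
using System.ComponentModel.DataAnnotations;

namespace PlatformService.Dtos
{
    // external representation for read scenarios – no annotations needed
    public class PlatformReadDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Publisher { get; set; }
        public string Cost { get; set; }
    }

    // shape of the payload we expect on create – no Id, annotations kept
    // so the framework can return pre-formatted validation errors
    public class PlatformCreateDto
    {
        [Required]
        public string Name { get; set; }

        [Required]
        public string Publisher { get; set; }

        [Required]
        public string Cost { get; set; }
    }
}
```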
again if this is a little bit abstract it will become a lot more obvious when we come on to writing our controller and actually start putting it into practice but for now let's create our mapping so platform so our model is the source and let me just bring in the namespace for that before autocorrect starts driving me mad so using platform service models and then our target is platform read dto close off the angle brackets and end with a parenthesis and a semicolon and we'll bring in that namespace so again just to reiterate this is servicing our reading scenario the source of our data is our platform model and the target is our platform read dto now a couple of important points here because our platform class and our platform read dto have properties that are named exactly the same and that's the way we want to map them we don't need to tell automapper anything else hence the name automapper it does it for us which is brilliant as we move through the course we're going to be creating other classes and other entities where that's not so straightforward and we'll have to explicitly tell automapper how to map or not to map between properties but we don't need to do that yet the second point is that we've created a mapping with a source of a platform model to a target of a platform read dto brilliant if you then had another scenario where you needed the source to be a platform read dto and the target to be a platform you'd have to explicitly add that here just because we've created a mapping one way between those two objects doesn't mean there's an implicit mapping the other way you've got to explicitly add that okay those two points are pretty important so just before we finish off we'll create our last mapping and this one is for the creation scenario so again in this instance the source of the data is a platform create dto because that is what our consumers are giving to us and the target for that ultimately is going to be the creation of a platform model down in our database so the source is a platform create dto and the target is a platform model okay so create map once again and our source this time is a platform create dto and the target is a platform model make sure we get that correct right and that's it for now we will add more mappings in here with some rules as well actually but for now that's all we need to service our current scenarios so save that off and the last thing i want to do just before we move on to working with controllers is do a bit of a dotnet run just to make sure everything's building and running okay and if it is we can move on to the next section and it looks good so let's move on to controllers all right so moving on to our controller now i'm just going to stop our server kill the running terminal and as usual clear down our previous work just to make it look a little bit cleaner all right so we already have a controllers folder from when we scaffolded up the application so we obviously don't need to create that but right click on that new file and we're going to create a file called platforms controller dot cs fantastic and then as usual we define the namespace of platform service controllers and then it's just a public class called platforms controller and we're actually going to inherit from something called controller base which is really a base controller class that's used for apis as opposed to full-blown mvc web applications so controller base we'll have to bring in the namespace so control period microsoft asp net core mvc controller base oh i selected the wrong option let me do that again and get it right this time using here we go my eyes were a bit crossed there we go so we want the using statement at the top otherwise it's going to get way too busy all right
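before moving into the controller internals, the finished profile from the previous section in one sketch — note the one-way mappings, and the registration line is included as a comment:

```csharp
using AutoMapper;
using PlatformService.Dtos;
using PlatformService.Models;

namespace PlatformService.Profiles
{
    // folder and namespace are Profiles (plural);
    // the AutoMapper base class is Profile (singular)
    public class PlatformsProfile : Profile
    {
        public PlatformsProfile()
        {
            // source -> target
            CreateMap<Platform, PlatformReadDto>();   // read scenario
            CreateMap<PlatformCreateDto, Platform>(); // create scenario
        }
    }
}

// picked up at startup by the registration in ConfigureServices:
// services.AddAutoMapper(AppDomain.CurrentDomain.GetAssemblies());
```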
so we're going to create a constructor so ctor tab and into this we are going to work with our repository and we're going to eventually work with auto mapper so let's get going with that so the first thing we're going to inject in is our platform repository so what are we asking for we're asking for an i platform repo i'll just check that's what we called it and we just created it not so long ago but i just want to make sure i'm giving it the right name so we're creating one of these and that will be injected in and we'll just call that repository and then we're also going to request an instance of mapper i mapper which is our auto mapper stuff and we'll just call that mapper i'll need to bring in the name spaces for both of these so using auto mapper and using i think you might as yep there we are spelt it wrong platform not platform there we go so try that again that's better okay now as usual what we want to do is we're bringing these are these two parameters are being injected we want to assign them to private fields usually denoted by an underscore so assign repository to underscore to underscore repository there we go and likewise for mapper and we'll use these private fields inside our class of course they don't exist yet so control period and we want to generate a read-only field for each of them and you can see they are added at the top here now this is the kind of very very standard the constructor dependency injection pattern that you're going to see lots and lots throughout this course constructor for a class we'll pass in a number of parameters and then we'll assign them to private fields private read-only fields that we then use within that class fantastic now the other thing we want to do is we want to actually decorate our class with a couple of attributes the first one is api controller and that gives us a lot of out the box uh functionality relating to api controllers and i'll call those out as and when we come on to them but it's mainly around 
passing through data via our actions and stuff like that and default behaviors that most web apis will want to adhere to so it saves us a bit of work and then we want to also define our route okay and this is basically where we can find this api and then subsequent actions so square brackets route open round brackets and we're going to specify api forward slash and then you can either specify explicitly the name of your controller so for example platforms or you can use this kind of wildcard approach where you can just put in square brackets controller and that will take this portion of the controller's name everything aside from the word controller and basically put it in here so you get the same result so the route to this controller will be api forward slash platforms okay up to you which way you want to do it i've seen it used both ways either using this way or just actually hard coding it up to you all right fantastic so now we want to come on to our first action and in doing this first action we can then test to see whether everything we've done so far has actually worked so we're going to do an action that just gets all the platforms that have been added to our backend database so public action result of type ienumerable and we're going to have to bring in that namespace and it's an enumeration of platform read dtos okay and let me just finish this method signature and i'll take you through exactly what we're doing here just in case it doesn't make sense and we'll call it get platforms okay so we're not passing in an id or anything like that and that looks okay and we're going to decorate it with a verb http get there we go so let's save that off now i'll just put in this closing bracket and then i'll probably get some complaints just finish that off i'm surprised it's not complaining there we
go it's not complaining though it's complaining now thankfully about the fact that we didn't have the ienumerable or platform read dto namespaces at the top so let's bring those in so control period system collections generic and then platform read dto there we go so basically what this is saying is when you call this action and it's going to be a get action at this route api platforms we're going to return an enumeration of our platform read dtos not a platform entity not a platform model but our platform read dto because we've said previously whenever we're sending data out we want to send over the dto the read dto representation of our data so that's why we're doing that and then from there it's actually relatively simple and we're going to get to a point where we're almost ready to see that we've not made any mistakes throughout this whole build process to date so just to help me debug if we have any issues i'm just going to put in a console writeline i'll have to bring in the system namespace so control period using system and i'm just going to say getting platforms hopefully we get some platforms but we'll see all right so we want to make use of our repository to actually go into the repository and pull back the platforms that we want so i'm going to create a variable that will contain the returned result set from our repository method so platform items and this is going to be an enumeration of platform models that our repository returns back so we'll make that equal to our repository and we want to get all platforms so that will go into our repository class let's come over here and it's going to use this get all platforms signature here and then in our concrete implementation it reaches in to our db context and returns a list of platform objects okay so we have platform objects in here but we want to return platform read dtos so this is
where auto mapper comes in we're going to map our models to our read dtos and return that back and that might sound rather complex but it's not really thanks to auto mapper so we're just going to do a return ok so a http 200 result and then we're going to make use of our injected mapper object so it's mapper map and what are we going to map to we're going to map to an ienumerable of platform read dtos and what are we mapping from we're going to map from this collection here of models all right and that should be enough to basically get us the result that we want so if we just come back to our profiles and we have a look in here basically that is going to be making use of this mapping construct here now even though we're using an enumeration it doesn't matter automapper is smart enough to know whether it's an enumeration or single objects it will still work okay so again automapper very very cool thing so let's save that off let's do our control backtick and let's do a dotnet run and hopefully everything should be wired up correctly i don't think i need to do anything with the platforms controller before we test this if we come back over to the startup class our controllers have already been added into configure services anyway so everything should be wired up correctly and our endpoints are all wired up down here in configure you can see here we have our controllers mapped through so we should be okay so the way i'm going to test it i'm using a tool called insomnia if you want to use a web browser or you want to use postman postman is obviously very popular go for it doesn't really matter i like insomnia because the interface is a little bit cleaner it's just nicer for making videos and i've got an empty environment here ready to go
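pulling the pieces from this section together, a sketch of the controller and its first action might look roughly like this — the exact namespaces and the i platform repo method names are assumptions based on what's described in the video:

```csharp
using System.Collections.Generic;
using AutoMapper;
using Microsoft.AspNetCore.Mvc;
using PlatformService.Data;
using PlatformService.Dtos;
using PlatformService.Models;

namespace PlatformService.Controllers
{
    [Route("api/[controller]")] // resolves to api/platforms
    [ApiController]
    public class PlatformsController : ControllerBase
    {
        private readonly IPlatformRepo _repository;
        private readonly IMapper _mapper;

        // standard constructor dependency injection: injected services
        // are assigned to private read-only fields
        public PlatformsController(IPlatformRepo repository, IMapper mapper)
        {
            _repository = repository;
            _mapper = mapper;
        }

        [HttpGet]
        public ActionResult<IEnumerable<PlatformReadDto>> GetPlatforms()
        {
            var platformItems = _repository.GetAllPlatforms();
            // map the platform models to read dtos before sending anything out
            return Ok(_mapper.Map<IEnumerable<PlatformReadDto>>(platformItems));
        }
    }
}
```

with this in place a get request to api/platforms should return the seeded platforms as json.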
you can download postman free off the internet i'll put a link down below so in my empty environment what i'm going to do because we're going to do a lot of testing believe me we're going to do a lot of testing across a lot of different environments is i'm going to create a new folder i'm just going to call it local dev so it's a local development environment i'm going to put all my requests for local development in here and then i'm going to create another folder i'm going to call it platform service because we're going to be testing our platform service and a few other things as well so local dev platform service then we're going to create a new request i'm just going to call it get all platforms or you can obviously call this anything you like it's just a label really and it's a get request so create that and where are we going to get our stuff from well we're going to come back over to our running application and if you scroll up a little bit you'll see that our application is listening on localhost 5001 for https and 5000 for http now in this video i'm actually not going to work a lot with https which is not something i would usually do but for various reasons more especially when you come to kubernetes it becomes very very complex and it's going to detract us from the core of what we're focusing on so for the most part i'm going to be hitting the http endpoints rather than https now that's something i might tidy up in future videos but for the moment i don't really feel it's too much of an inhibitor especially not when we come to running things in kubernetes but it does come back to haunt us a little bit later on in the video when it comes to using grpc but we'll cover that off when we come to it so just wanted to call that out so we're running on localhost 5000 and 5001 and for the most part in this video we're going to just be working with http to keep it simple otherwise we start going down some rabbit holes so let me just copy that in fact and
i'll just paste that in and put that in over here and then we want to put our route so it's api platforms okay so get request here's our base url and then the route to our action it's api platforms which should look familiar to you and if we come back over here it's basically what we specified at our class level where can i find it no that's wrong i'm looking at the startup in our platforms controller we come over here to our class level this is our class level route api platforms and then we've specified a get and so the method signature that we're calling from insomnia will match this method signature here there's no attributes being passed in or anything like that so we should get a result back so let's hit send and indeed we do so there we go we got a 200 response back and we get our three platforms returned which should look quite familiar these are the platforms that were injected into our in-memory database so that's awesome we're on the right track all we've got to do now is just finish off our last two controller actions and then we're ready to move on to some of the more interesting stuff all right cool so let's come on to finish off our controller we'll finish it off for this section we are going to add a lot more into it going through the course but for now we're going to just finish off the basics and get our other two action results in place so if like me your server's still running just ctrl c to take that down and get rid of the terminal we don't need startup for the moment and we're just going to work exclusively within our platforms controller class now so what i'm going to do is i'm going to get rid of the sidebar i just pressed control and b to get rid of that and you can toggle that on and off we're not going to be moving around any of our files for this last section we're just going to be working completely in here so we can give ourselves a bit more breathing room and see exactly what's going on so the
next action i'm going to add in is going to be very similar to this but this time we're going to request just one platform using an id so let's come on and do that so again public action result what are we returning rather than an ienumerable we're just returning a single platform read dto that's all and i'm just going to call that get platform by id and then we are going to expect a parameter this time of type integer and we're going to pass in an id now when i was talking about this api controller attribute giving us some out the box behaviors this is kind of what i'm talking about here by defining this id as part of an input parameter to this action result basically that will be interpreted automatically when we come over to testing it and we pass in forward slash two say to relate to one of the ids of the platforms that route will basically be automatically interpreted by this and mapped to that okay it's kind of nice you can explicitly tell it what you're mapping through from your request but in this instance it's all done out the box but of course we do have to specify that it's a http get okay and then it's actually very similar to this here so the first thing we're going to do is declare a variable to hold the result i'm going to call it platform item singular this time because we're just looking for an individual item and again we'll make use of our repository class and we did create a method in here called get platform by id fantastic and what do we want to pass into it we want to pass in the value that's been passed in through our url okay so we're going to attempt to get whatever was passed in now as per sort of rest best practices if you try to locate an individual resource as we're trying to do here and it's not found you should return a 404 not found http result back okay so that's what we're going to do next or that's the kind of check we're going to make so if platform item does not equal
null then we have something that's good and we're going to return it back so we'll return it back with an ok and we're going to do something very very similar as we did up here in fact almost identical we just don't need to specify that it's an enumeration but let's type it out anyway because it's good practice to get across this stuff because we're going to use automapper quite a lot so mapper map what are we mapping to instead of an enumeration of platform read dtos we're just mapping to a singular platform read dto and what's our source it is platform item and that's it nice and straightforward so if the platform item that returns is not null okay we do that otherwise we're going to return a not found result and that will return a http 404 for us fantastic so let's save that off bring up our command line let's do a dotnet run and remember doing a dotnet run also does a build for us as you can see there but i don't know why i sometimes still like to run a separate build and then do a run i don't know why it's just my weird way of thinking so let's run this one again let's take out the two and just make sure it's still working there's no reason why it wouldn't be well there it is it isn't working what have we done wrong we've done something wrong here let's find out what we've done wrong and we'll just scroll down a little bit and the request matched multiple endpoints i know what i've done wrong so this is actually a really really good example of something i forgot to put in and you can see here we've got two http get requests and it wasn't intelligent enough to tell them apart i forgot a really important thing we actually have to specify an additional route in here because otherwise what that error is saying is we've got two http get requests which it interprets as being completely identical it doesn't even get down to the method signature here even though the method signatures are different so a
bit of an omission there so let me just rectify that so within our http get declaration let me just do open brackets and what we're basically doing is we're specifying effectively part of the route that we're looking for so the way you do that is string double quotes curly brackets id so we're saying we are expecting an id as part of this get request and as part of that it is then mapped into this parameter yeah that's my bad i really forgot to mention that so as part of this route here if we do forward slash 23 for example it's going to match this route here and if we take this out then it's going to match this route here and it was getting confused because we had no point of differentiation and where api controller comes in is it automatically maps this value into here there you go it's early in the morning for me here not quite woken up yet now the other thing we're actually going to do while we're here is give this route a name and you'll see how this comes into play in the next short while i'm just going to give it exactly the same name as the method signature name okay and we actually make use of this a bit later on in our next action result okay cool there we go so that should be okay now so let me run that up again control c and run up again and let's see how we're going so let's come back to insomnia and let's just run our old request there we go back to normal that's cool and then we're going to create a new request explicitly for getting an individual platform okay there you go and what i might do is just copy this and paste that in here and let's look at the result set over here so you can see the in-memory database has automatically added ids for us fantastic so let's see if we can pull back kubernetes since we're going to be working with that next so put in three and indeed it does
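with the route template and route name added, a sketch of the second action under the same naming assumptions as before (the repository method name get platform by id is inferred from the description):

```csharp
// the "{id}" template differentiates this from the plain [HttpGet] action,
// and the Name lets CreatedAtRoute refer back to this route later on
[HttpGet("{id}", Name = "GetPlatformById")]
public ActionResult<PlatformReadDto> GetPlatformById(int id)
{
    var platformItem = _repository.GetPlatformById(id);
    if (platformItem != null)
    {
        return Ok(_mapper.Map<PlatformReadDto>(platformItem));
    }
    // per rest conventions an unknown resource id gives back a 404
    return NotFound();
}
```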
it pulls back the individual platform for us which is all good now if we put in some id that doesn't exist we'll get a 404 not found which is exactly what we want and that adheres to our standard rest practices fantastic okay good so i'm actually quite glad that i had that error honestly i am because it kind of helps you understand how things work and that's a really really good point when you make mistakes and when things don't work that is actually for me personally when i learn stuff if everything just goes totally smoothly all the way through you're going to learn stuff but not as much as when things go wrong okay so this is a long course i'm just going to call this out here this is a long video you're going to make mistakes i guarantee it i guarantee you're going to make typos here you're going to forget to put things in there and things are not going to work don't stress out about that the code is available on github and if you want to you can just kind of overlay that with what you're doing but what i would suggest you do is try and fix it because in trying to fix it you're going to learn a lot more than just listening to me talking and going through the video all right so rant over we're going to move on to our last action result which is creating a platform using our controller and then we're going to move on to docker and then kubernetes all right so let's come on to creating our final our third and final controller action in our platforms controller and it's this action that we're going to use to create platforms in our backend database and it's this action that's going to be the one that we return to and do a lot more work in in terms of triggering messaging to other services so we're not done with this one fully yet but we're going to get it into a state where we're creating platforms and then we're going to move on to something else but we will return so before we do that let's just do a control c to kill the
running server we'll get rid of this and then we'll move down and we'll create our final action so again public action result and what are we going to return back we're actually going to return back a platform read dto so when you create a platform using the service part of the rest spec is that you should pass back that resource once it's been created and again i sound a bit like a cracked record here but any time you're returning data back we're going to use the read dto so there we go platform read dto is what we're returning back we're going to call this method create platform and what are we going to expect as an input any ideas we're going to expect a platform create dto there we go and we'll call it platform create dto i'll tab that through so just to go over that again we're going to return back a platform read dto assuming we create one and what we're passing in is a platform create dto and that's the dto we created explicitly for creating platforms and the only difference really between these two at this point in time is the create dto doesn't have an id because our database creates that for us all right fantastic so let me just do our curly brackets and let's make sure this time we decorate this with the correct route so it's going to be a http post this time and that is all we need to do actually on this occasion because when we are creating or posting a new platform we don't actually have to specify an id if we were doing an update or something we would have to do something very similar like we did up here and actually specify that we're expecting a resource id in here but for creating we're just saying it's a post request and looking across our entire controller all of our method signatures are now unique so this is just a http get this is a http get with an id and this is a http post so the first thing i want to do is actually make use of auto mapper so we're getting passed in a
platform create dto but our repository works with models it doesn't work with dtos so we're going to have to map our create dto to a model object that we will then use to pass into our repository so let's do that first so new variable i'm going to call it platform model and that's going to be equal to mapper map and what are we mapping to we're mapping to a platform object a platform class really okay i'll bring in the namespace for that because i don't think we have that yet using platform service models okay and what are we mapping from what's the source of our data well it's the platform create dto that was passed in so we'll just put that in here okay and we'll round that off with a semicolon and again coming back to our api controller decoration here you're probably going to ask where is this coming from well ultimately when we come to testing it we're going to place it in the body of our request and by using this api controller attribute here this action will look initially to the body of the request that comes in and will just put whatever object it has into here by default now you can override that with more explicit behaviors but again the default behavior is fine for our purposes okay so we've got a platform model now hopefully if our mapper has worked correctly and if i just do a control b to bring up my directory listing here and come back into our profiles you'll see that this is the mapping that's kicking into play we're taking a platform create dto as the source which makes sense and we're mapping it to a platform all right so we're making use of that second auto mapper mapping here and then it's quite simple actually we then make use of our repository and we will do a create platform and what does that expect it expects a model and so we're going to pass in platform model like so fantastic now one thing that we obviously always need to do whenever we make a change
or anything like that is go back to our repository and save changes as well if you don't do that if you leave that out you will have nothing created in the database so now what we want to do is pass back our successful result if we have one in this case a platform read dto as per rest kind of best practices whenever you create a resource you should return a http 201 along with the resource that was created and also a uri to that resource a location for that resource as well so we're going to do all that in the next few lines of code but the first thing we need to do is somehow get a platform read dto and so what we're going to do is use auto mapper once again and map from the model that we've just created back to a read dto okay so we'll do var platform read dto equals underscore mapper dot map what are we mapping to we're mapping to a platform read dto make sure you get these right i've got these wrong on occasion and mapped to the wrong objects and it's caused me some problems down the line but there you go that's all part of learning so that's what we're mapping to what are we mapping from we're going to map from platform model okay just get rid of that space there okay so a platform create dto came in we mapped it to our model we put our model into our database and now we're going to return back the platform read dto version of that now this might all seem a little bit long-winded to you i've had some feedback from people saying they don't like this kind of decoupling to this extent well that's okay personally i still think this is absolutely the right way you should be doing this okay you shouldn't be passing back models directly over the wire i definitely don't think you should be doing that so what we want to do next then is we've got a platform read dto we want to return it back with a http 201 to our requester now this line of
code coming up is probably the one that i get the most comments about going this doesn't make sense and it's a little bit convoluted and the microsoft documentation isn't that great so i'm going to type it out and then i'm going to go through and try and explain what it's actually doing it's not really that complicated but it's a little bit convoluted and it's not really that intuitive to be honest with you anyway let's get going so we're going to return and we're going to use something called created at route and what this effectively does is return a http 201 with a route as part of that return so the first parameter that we're going to pass in is actually the name of the route where we can get to our resource so name of open brackets and where is this resource residing how do we get to that resource well that is why up here we gave this action a name because this basically is the resource uri whenever we create this resource you can then get to it using this action result so we're going to supply this name okay so this first bit is saying this is the route where you can then get this created resource okay and then we're going to basically return back the resource itself and again i'll do control b to get rid of that and give us a bit more room again this is a little bit not that nice really but let me just type it out anyway so new curly brackets id equals platform read dto dot id so this is getting our id and then we're actually returning back the platform read dto as well and i think i've spelled that right there we go all right so basically we're going to return a 201 we're going to give the route which is basically this thing here and then we're effectively generating a new id which is basically the resource id and then we're passing back the body payload in the response as well all right so let's save that off
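putting the whole create action together, a sketch under the same naming assumptions as the earlier actions:

```csharp
[HttpPost]
public ActionResult<PlatformReadDto> CreatePlatform(PlatformCreateDto platformCreateDto)
{
    // map the incoming create dto to a model, persist it, then map back out
    var platformModel = _mapper.Map<Platform>(platformCreateDto);
    _repository.CreatePlatform(platformModel);
    _repository.SaveChanges(); // without this nothing is written to the database

    var platformReadDto = _mapper.Map<PlatformReadDto>(platformModel);

    // 201 created, with a location header built from the named GetPlatformById route
    return CreatedAtRoute(nameof(GetPlatformById),
        new { Id = platformReadDto.Id }, platformReadDto);
}
```

the three arguments to created at route are the route name to build the location uri from, the route values it needs (here just the new id), and the body payload to return.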
i think that's all looking okay so let's do a control backtick let's do a dotnet run to run that up and let's wait that looks okay and let's come back over to insomnia and i'm going to create another request a new request and this time it's a post request and we're going to be expecting some json as the body we're going to pass in a create dto json payload and we'll call that create platform fantastic now the uri is actually going to be exactly the same as this a different verb but the same uri so let's just put that in here but we are going to supply a json body now what i'm going to do is come over to let's pick this one here because i want to get an example payload and i'm just going to copy one object okay just so we don't have to bother typing it out and we have the correct format so i'm copying one object curly brace to curly brace and i'm going to paste that in here now before i do anything with that let's come back and do a control b can i do a control b if we've got the thing running here we can and we come back to our dtos and let's just take a quick look at our platform create dto so what we're expecting to get passed in here is a name a publisher and a cost okay no id so come back over here and back into insomnia we'll take this out okay we don't need that as part of our create payload there's no id in here obviously because we don't have one yet but we do need these three attributes here so let's create a new platform let's create docker because we're going to be moving on to that next its publisher is docker i'm just going to put docker i think that's probably right might not be and the cost is free so let's send that over and there we go we get 201 created and we get our resource passed back and you can see we've got a new id of four and if we look over at our headers most importantly you can see we have a location header and that actually is the uri that we can use to retrieve that resource
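the exchange being described might look roughly like this — the property casing and the generated id are illustrative, based on what appears in the video:

```
POST http://localhost:5000/api/platforms
{ "name": "Docker", "publisher": "Docker", "cost": "Free" }

HTTP/1.1 201 Created
Location: http://localhost:5000/api/platforms/4

{ "id": 4, "name": "Docker", "publisher": "Docker", "cost": "Free" }
```

note there is no id in the request body — the in-memory database generates it — while the response carries both the id and the location of the new resource.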
and again that's a really important part of the rest specification and that was achieved by using let me come back over to our controller this created at route method here okay that's what that really gives you although it's a bit of a not particularly nice construct that's what that is basically doing so that is all looking pretty good the last thing i'm going to do is actually copy this uri just to check it and i'm going to paste it into here into our individual platform request just to check that it works there we go 200 okay and it's returned the right thing and finally if we just retrieve all our platforms now you'll see that we have docker in there fantastic so that's basically just a fairly standard straightforward dot net core web api i have to stop saying core it's just dot net web api now but it's going to form the basis of our platform service going forward and we're going to add a lot more interesting stuff into it cool so we'll park that there for now and the next thing we're going to move on to is a bit of a review of docker we're going to dockerize this and run it in a container before then moving on to possibly the main event which is getting all this into kubernetes all right so time for a quick review of docker now we cover everything that you need to know about docker in this video but if you want a bit more detail maybe check out one of my other videos but let's take a look and do a quick review so what is docker well as it says on screen docker is a containerization platform meaning that it enables you to package your applications in our case our platform service into something called an image and then run those images as containers on any platform that can run docker so what does that mean well docker allows you to do many things but one of the problems that it helps you solve is this idea that you might be running something you develop on your machine and it works brilliantly but then when you come to deploy it onto some other environment it doesn't work and there's missing
dependencies and all that kind of stuff so what docker does is it kind of removes that problem by allowing you to package something using docker into an image once you have that image you can be fairly sure that you can take that image and place onto anything else that's running docker and run that image up as a container and you can be fairly sure it's going to work okay so it allows for this kind of transferability and this easy deployability of applications okay and it's really referred to or really could be thought of as application virtualization which is a nice segue on to the next slide so what i want to do here is just call out the differences between running a plain old server or pc running an operating system and running applications versus virtual machines versus containers all right so in the first instance you've got a pc or a server it's running an operating system it's a physical machine and you just run your applications on that operating system fairly straightforward we then have a virtual machine set up which is basically operating system virtualization and in this case you'll have your host operating system you'll then have some kind of hypervisor which will then allow you to run entire virtual machines on that hypervisor which then have an entire operating system all of their own so in days gone by i've run things like vmware or virtualbox to run a linux virtual machine on top of my windows host system okay and that's a perfectly acceptable use case and virtual machines are still very much used today where docker comes into play and again this is a very simplified diagram there's actually more going on here there are actually virtual machines involved but i've taken that away because it was going to complicate the point i'm trying to make with docker or with any containerization platform for that matter you have your host operating system you have in this case docker and then you have your containers running on top of docker as applications 
there's no full operating system involved each of these containers has their own view of the host operating system they don't run on an entire dedicated os of their own so they're much more lightweight and much more efficient but you still get a lot of the benefits of virtual machines in that one of your containers can crash and die or do something terrible and your other containers are unaffected okay so containers are a very nice concept and platform and very much one that lends itself well to microservices i'm sure you're probably starting to think okay if you're running up lots of services each service is probably analogous in some way to a running container and then you can do all sorts of things like scale out your containers horizontally so you can have multiple instances of the same application running as multiple containers to allow you to deal with extra demand and all that kind of stuff and we'll do this a bit later but for now we're just focusing on plain old docker so that's all great and everything but how do we get our platform service running in docker well let me take you through that so the next thing we're actually going to do is write a docker file and that's really just a set of instructions that tells docker how to take our application in this case our platform service and turn it into an image and again remember an image is one of those things you can distribute to anything that's running docker and run it as a container so we take a docker file along with an application we run it through the docker engine and we end up with an image okay so we're going to do that next we're going to write a docker file we're going to build and create an image we're going to run up our image as a container and then finally we're going to push our image up to docker hub because we're going to be making use of docker hub a bit later on but we're going to do all that next
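that write, build, run, push workflow boils down to a handful of docker cli commands, roughly like this — the image name and docker hub account here are placeholders, not the ones used in the video:

```shell
# turn the dockerfile plus the app into an image
docker build -t <your-dockerhub-id>/platformservice .

# run the image as a container, publishing host port 8080 to the container's port 80
docker run -p 8080:80 -d <your-dockerhub-id>/platformservice

# push the image up to docker hub (requires a docker login first)
docker push <your-dockerhub-id>/platformservice
```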
Dockerfile. Come back over into the project, and if you have anything running in your terminal, make sure you shut it down before we get going. In the root of the project, right-click, New File, and we'll call it "Dockerfile", no extension needed, and you can see by the blue whale icon that VS Code recognises what it is. Building up the file is actually quite simple. If you come over to Google and type in something like "dotnet docker", this first article here, "Dockerize an ASP.NET Core application", is basically what we're following; there's a Dockerfile down the page which, by and large, we're really just going to follow, to be honest with you, making some slight changes. The first thing we specify is the image we want to pull down from Docker Hub to start our build: mcr.microsoft.com/dotnet/sdk. We want the SDK image, which is a larger image than the plain runtime image (which we'll also be making use of), and we alias this build stage AS build-env. If I copy the image name and paste it into Google, you'll see where it's hosted on Docker Hub: the .NET 5 SDK image, with over 1 billion downloads, so it's been used by quite a few people. If I come back over to the main landing page for my area on Docker Hub, you'll see I've got a few images myself that I've pushed up and hosted there. We're going to be pushing up our platform service here, and our command service as well, as part of the whole workflow we'll be following. So if you haven't done so already, please sign up for a Docker Hub account. I won't tell you how to do
that; it's fairly straightforward, but you will definitely need one if you want to continue to follow along. For now, let's get back to our Dockerfile. I'm going to set my working directory: WORKDIR /app. Then we're going to copy over the csproj file, so that we know what dependencies we actually need in order to build our image: COPY the csproj into our working directory. Then we want to copy everything else over and build it, COPY from source to destination, and then we just want to RUN dotnet publish -c Release. Then we want to build a runtime image. What we're doing here is a multi-stage build: we pulled down the .NET SDK image to build the main part of our application, but to make the final image a bit smaller we then pull down the runtime image and use that to package up our application, because we don't need all the SDK stuff in our final image. It's a very similar syntax: FROM mcr.microsoft.com/dotnet/aspnet:5.0. We specify the working directory again, and then we copy the output from our build stage with COPY --from=build-env, copying app/out, and don't forget the trailing period. Then we set the entry point: when our image is run, what actually kicks off? That's what we specify here, so ENTRYPOINT with dotnet as what we want to run, followed by the name of the DLL, in this case PlatformService.dll. And that should be it. Let me just double check this over... I think that looks right. So again, just to reiterate: first we pull down the SDK image, we create a working directory, we copy over the cs
proj file... ah, I've forgotten a line in here: we want to RUN dotnet restore, to pull down the packages referenced in the csproj that we need to build our image, so this is really important. So yes: copy the csproj over, run dotnet restore (I don't know why I missed that), then copy the remaining files over and run dotnet publish. Then we build the full runtime version without all the SDK gubbins: we pull down the runtime image, create a working directory, copy the published contents over, and then set the entry point for our image, so that when we run the image, that's what gets kicked off. If you come over to the bin directory you'll see that we have a PlatformService.dll in there; it's just this thing here that gets run. All right, that should be it for our Dockerfile. I've kind of skipped through it quite quickly, so if you want more information watch one of my other videos; we don't really need to revisit this. We're going to write a very similar Dockerfile for the command service later, and it's literally going to look identical aside from the name of the DLL that we run, so we'll probably just copy and paste it when we come to that point. Next we're going to build our image, and you'll see how that works; we're going to run our image, stop the running container, and push the image up to Docker Hub; we'll do all that in the next sections. All right, so we're going to build an image. Before we build it, it's very important that you actually have Docker running, so bring up a command line, clear the screen, and type in docker --version; you should see that you have some version of Docker running. Now, I'm running Docker Desktop, so the app is over here; if you just bring
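Putting the steps above together, the finished Dockerfile might look like the following sketch. The file names (PlatformService.csproj, PlatformService.dll) assume the project name used earlier in the course; adjust them to match your own project:

```dockerfile
# Build stage: use the full SDK image to restore and publish
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build-env
WORKDIR /app

# Copy the csproj first and restore, so restored packages are cached as a layer
COPY *.csproj ./
RUN dotnet restore

# Copy everything else and create a release build
COPY . ./
RUN dotnet publish -c Release -o out

# Runtime stage: a much smaller image without the SDK tooling
FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "PlatformService.dll"]
```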
this over onto the screen, let me make that a bit more manageable so you can see it... there we go, that's better. This is Docker Desktop on Windows; you can see down in the bottom left-hand corner that Docker is running and that Kubernetes is running. If you go into Settings via the cog up here, you'll see a number of options, one for the Docker engine and so on, but make sure Kubernetes is ticked; that will make sure Kubernetes runs by default. From memory (it's a long time since I downloaded Docker Desktop) Kubernetes is not enabled by default, so make sure you enable it and that you see both the little whale and the little ship's steering wheel for Kubernetes. So yes, that's all running. Make sure you get that same response whenever you type docker at the command line. We'll cover Kubernetes a little bit later; we're just focusing on Docker for now. What we want to do now is build our platform image, and as I said at the start of the video, I do have a cheat sheet that accompanies this video; you can go to my blog and register there to get a copy, and all these commands are in it. So, to build an image: it's docker build, and we want to tag the image with a name. The name you give it is your Docker Hub username, so mine is binarythistle, followed by the name of your service, which I'm going to call platformservice. From memory, if you put capitalisation in here it won't like it, so make sure everything is lowercase. You can specify a version if you want; we're not going to, and if you don't, it just tags it as latest, which is what we
want. The most important thing, which you'll probably always forget, is to put the build context at the end, which is just the dot at the end of the command. You'll probably always forget to do that; I do, many many times. So again: docker build, tagging the image with a name, in this case platformservice, using the docker-hub-username/image-name syntax. The reason for that is that when we push up to Docker Hub, it needs to know which account to push the image into, so your name here is not going to be binarythistle, that's my name; you're going to have to put your own in. All right, that looks good; hit enter and you'll see what happens. Docker just runs through the Dockerfile and starts executing all the steps, as you can kind of see here. I think I already have the Microsoft SDK images locally on my machine; if I didn't, Docker would go to Docker Hub and pull down a copy, so it might take a bit longer in your case if you've not done this before. One thing it will check is that it has the latest version of those images, so even in my case it might download a new copy of the SDK image if the one I have is out of date. But that all looked pretty good; it built fine. One thing I will draw your attention to, which I use all the time, is the Docker extension for VS Code. Go to your extensions and type in docker; it's probably going to be the first one that comes up, yes, this one here, I've got it installed. You can see it's pretty well used and gets high ratings. If you're using VS Code I strongly recommend you install it, because you can look at your images and containers and so on. So if you come over here and look at Images, and I'll just
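The build command described above, in one place. The tag here uses my binarythistle account name; swap in your own Docker Hub username, and note the trailing dot, which is the build context:

```shell
# Build the image, tagging it <dockerhub-username>/<service-name>
# No version tag means Docker applies :latest automatically
docker build -t binarythistle/platformservice .
```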
maybe refresh those... there we go, you can see I have this binarythistle/platformservice image with the version tag of latest. So that's all looking good; that's building an image, pretty straightforward. What we're going to do next is run this image up and see what we get. All right, so we've built an image, we have it available as you can see here again, and we want to run our image as a container. Let me close down our Dockerfile and clear the screen as well. To run up our image as a container, the command you're after is docker run. Now, you don't strictly need to supply this next bit, but I'm going to, because if you don't there's no way for us to access the server running inside the container; if you run a container without what's called a port mapping, you can't get into it. So what we're doing is specifying an external port, a port that we're going to use in our Insomnia session to get into our container (it can be anything, as long as nothing else on your machine is using it), and then the internal port that we're mapping into. There's an awful lot of this port-mapping type stuff that we're going to do, not just in Docker but more especially in Kubernetes; there's quite a lot of networking in here, which might surprise you but probably shouldn't when you think about the nature of a distributed application. Anyway, I digress. So we're running up our image as a container: docker run, -p for the port mapping, 8080 in my case for the external port, mapped to 80 internally, which you will need to use. Then we're going to run it in what's called detached mode. You don't have to specify the -d flag, but if you don't, the container output will be displayed in this terminal, and you won't be able to use the terminal to type any more commands. It's actually quite
useful sometimes not to run with the detached flag, because you can see messages flying through, such as errors or startup output, but we're going to run detached. Then it's simply the name of your image, and this can be any valid image name; obviously in our case it's the platform service, and in your case it won't be binarythistle, it'll be whatever your Docker ID is, so in my case it's binarythistle/platformservice. And that's it. Again, if you were wanting to pull down, say, a SQL Server image, you could specify that image here and run it up; you'd probably need a slightly different port mapping, but nonetheless it's the same idea. Docker just goes to Docker Hub and pulls down that image, although as we have this one locally, Docker won't need to in this case. Hit enter, and straight away, if you're using this extension, you can see up here that we now have a container running, as denoted by this little green play icon, and we get an ID for the container. A couple of other useful commands for you: docker ps will show you the running containers. We basically already have one running here, but if you're not using the extension you can use docker ps to see what running containers you have, along with the container ID, which is this thing here. Copy that and issue docker stop with the container ID, and you'll see up here that yes, it's been stopped; and if we do docker ps again, we have no running containers. Okay, so that's stopping and starting a container. Now, if we do docker run one more time, what do you think is going to happen? Do you think the same container is going to run up? Let's give it a go... no, that doesn't happen. What happens is you're telling Docker to run this image, so it goes to the image again and runs up another container; it's not restarting the same container
that you had last time. So just be aware of that: every time you issue docker run, it runs up a new container. Again, docker ps to get our running containers; it's got a different ID this time, so let's copy that, do a docker stop, paste it in, and you'll see that that too has stopped. So what if we just want to restart the same container? That is docker start, followed by the name or ID of the container we want to restart, and if we do that you'll see that the same container restarted; we didn't get another one. Okay, so: docker run to take an image and run it up (if you keep running that command, it will run up new containers every time), docker stop to stop a container, docker ps to get a list of running containers, and docker start with the container ID to restart an existing container. Other than one other command, which we'll cover in a bit, those are the only commands you really need to work with; or you can just use this interface here, where you can right-click and stop a container, for example. So, I did mention there is one other command, and it's one we're going to be using an awful lot more than docker run and docker stop: we're going to be using docker push, along with docker build, a heck of a lot throughout the rest of this course. So let's go and do a docker push, and then we're just about ready to move on to Kubernetes. All right, docker push. To be honest with you, the command is incredibly simple; there's not really much to it. But first, let me bring over my Docker Hub page again. So this is me logged into Docker Hub as binarythistle, and you can see I have a few images up here already for other projects, but there is nothing in here that says platformservice; it does not exist here at the moment. Okay, so pull that back over, and back over in our
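To recap the container lifecycle commands covered above in one place (again, the image name assumes my binarythistle Docker Hub account; substitute your own, and use the container ID that docker ps reports for you):

```shell
# Run the image detached, mapping host port 8080 to container port 80
docker run -p 8080:80 -d binarythistle/platformservice

docker ps                    # list running containers (and their IDs)
docker stop <container-id>   # stop a running container
docker start <container-id>  # restart that same container, not a new one
```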
command line, and let me just clear the screen. Any guesses what the command's going to be? That's right: docker push, followed by the name of our image, binarythistle/platformservice, and again, in your case it's going to be your own Docker ID. Off it goes, and you can see one of the things it's saying: it's using the default tag, latest, which is exactly what we want, and then it just starts pushing the image up to Docker Hub. How long it takes depends on your upload speed; mine used to be really fast but for some reason it's got quite bad now, so it takes a while, and I'll probably cut the video because I'm sure you don't want to watch me uploading an image. But that's it; it just uploads the image. One thing worth knowing: Docker is quite clever, in that images are built up in what are called layers. This push is going to take a while because we've not uploaded this image before, but if I upload it again having only made some minor changes, you'll usually see the upload is a lot quicker, because it's not going to upload everything again, only the affected layers, which is a lot more efficient. As this is the first time, though, it's going to take a while, so what I might do is chop the video here, not touch anything else, and come back once it's uploaded. ... Fantastic, so that looks like it's pushed up. If we come back over to Docker Hub and refresh the page, we should see binarythistle/platformservice up there, and if we click on it, yes, you can see that we have the latest build; I've not really versioned it. Cool, so that's a push complete. We're going to be using that an awful lot as we move through the rest of the video, but next,
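The push itself really is a one-liner, as described above; Docker Hub works out the target account from the username prefix on the image tag (binarythistle here is my account, so use yours):

```shell
# Push the image to Docker Hub; with no tag specified, :latest is pushed
docker push binarythistle/platformservice
```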
there was one thing I actually forgot to do when we ran our container, which was to take a look at it running and make sure that we can get to it using that port mapping. So we'll do that now, just before we move on to Kubernetes, to show you that it's all working. All right, so back over here, let's start one of the containers that we already had. Fantastic. If you do a docker ps (ps, not psa), you can see it's running, and you can see we have this port mapping. So let's come over to Insomnia. We've got our local development environment and platform service folders here; I'm going to create another top-level folder, which is technically also a local environment, but I'm going to call it the Docker environment, and create another platform service folder inside it. We're probably not going to make too much use of this, but we'll keep it segregated because we might flip back and forward between them. Then I'll create a new request called "get all platforms", just a GET request. Actually, what I might do is come over here and duplicate the existing one, delete the original, click to confirm, and move the duplicate into here. Obviously, if we click on this it's using localhost:5000, which we don't want; we want to use port 8080. All right, let's see what we get... there we go. That's how we get to our Dockerized image, rather than hitting the service directly as when we were running it via dotnet run. So those are the two differences between the two, and actually, as we move through the video we're going to have other ways of accessing things, because eventually this container is going to be running in Kubernetes, so we won't access it in this way either; we're going to access it in two different ways, actually. So it all gets, not complex, but yes, it's quite
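If you prefer the command line over Insomnia, the same smoke test can be sketched with curl; note that the /api/platforms route is an assumption based on the controller built earlier in the course, so adjust it if your route differs:

```shell
# Assumes the container is running with the -p 8080:80 mapping from earlier
curl http://localhost:8080/api/platforms
```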
complex; it can get quite confusing, and that's why I'm being really careful to segregate which environments I'm running where, because it can get a little bit convoluted. But anyway, that's it; the container's running and we can get to it that way. Just before we move on, I'll stop this container with a docker stop; I think that's the same one, isn't it? There we go, stopped. All right, great. So that's a bit of a review of Docker, a bit of a crash course: a Dockerfile and some core commands. I think that's enough to get us going; we'll be using docker build a lot and docker push a lot, but now it's time to move on and take a look at what Kubernetes is. All right, so let's move on to talking about Kubernetes. The reason I've got an onion on screen there is not because Kubernetes makes you cry, although it can get quite frustrating from time to time; it's because the way I think about it is very much like the layers of an onion. There are lots of layers involved in Kubernetes, and when I was first learning it, that's kind of how I envisaged it. It's a massive subject area, but what is it in a nutshell? As you saw in the last section, we were running up multiple containers using Docker Desktop. That's obviously fine for development purposes, but when you're moving into a microservices architecture and you want to make sure your containers continue to run, get restarted if they crash, and can be scaled out, clearly you're not going to be running around a command line doing that. That's where something like Kubernetes comes in. Kubernetes is a container orchestrator, and it's a good name; I think the analogy fits really well. Kubernetes is like the conductor of an orchestra: it makes sure everything is running as it should, keeping everything in time, if you want to put it like that, and yes, it's responsible for making sure things
run in the right way, scale out in the right way, all that kind of stuff. We're going to be working a lot with Kubernetes in this section and the sections going forward, and we're going to tell it how we want things to run: we give it an end state that we want it to get to, and Kubernetes figures out a way of doing that for us. It's pretty cool, actually. It's a huge, huge subject area, and one of the things I was thinking about when making this video was whether to use Kubernetes at all, because it can be quite tough, quite a difficult subject area. I did think about using something like Docker Compose, which I kind of like to think of as a middle ground between Docker and Kubernetes; it almost sits in the middle. It allows you to run up multiple containers and network them together, and it's actually a very nice option in development-type environments, but production-wise, Kubernetes is really the option you would be going for. If you're interested in Docker Compose, I've done a couple of videos on it and how it all works, but Kubernetes is the main event, and that's what we're going to be using in today's video. So, a little bit more about Kubernetes. I think the best way of understanding it is to actually start working with it, but I'll give you a bit of background, and then we'll go through the architecture of the application we're going to build in Kubernetes, which will introduce you to some of the terms and concepts; but again, actually working with this stuff is definitely the best way to learn it. It was originally built by Google; it's now open source and maintained by the Cloud Native Computing Foundation. It's often referred to as K8s, simply because there are eight letters between the leading K and the trailing S, and I may
write Kubernetes as K8s, though I'll probably still say Kubernetes. It's a container orchestrator and a huge subject area, so in this video we're going to learn enough to be dangerous: enough, as developers, to get started and run stuff in Kubernetes. It will be a fully fledged microservices-type deployment, but it is a massive area and we're really only scratching the surface of it. I like to think there are two broad user profiles; some people may disagree with this assessment. The first is developers using Kubernetes, and for the most part that's the audience I'm focused on in this video: as developers, we want to run stuff on Kubernetes, and personally I don't care too much about what sits beneath it and how it's set up. That's really for the second category of user, the DevOps engineer or administrator or whatever you want to call them, because there is a lot of setup required, and there are people whose entire role is just to run Kubernetes and keep it running. I'm not too focused on that, and I'm certainly not going to go into it in today's video, because number one, I'm not really that interested, and number two, I don't know that much about it, to be honest with you. I just use Kubernetes as a platform, and that's what I'm focused on today. All right, so let's go through the architecture a bit. When I talk about Kubernetes architecture, I'm not talking about the internal architecture of how the Kubernetes system itself is composed; again, I don't really care, to be honest. We'll cover little bits of it as we move through the video, but I really don't care how it runs underneath; it's not that interesting to me. I'm talking about how things run on top of it: that's what I'm referring to as the architecture we're going to build out through the rest of this video. That being said, I have provided a very, very simplified view of how it kind of runs on top of my local machine
in this instance, and how it would theoretically run in something more complex like a cloud environment, just to give you that context. All right, so let's get started there. In my case, and I'm sure in your case as well, you're running this on a PC or a laptop or something like that. I have my bare-metal PC, on which I'm running Windows 10 Pro, and, just of interest, I'm also running the Windows Subsystem for Linux, which is basically Linux running on a Windows machine. There are a couple of reasons I'm running that, but the main one is that when I run Docker, Docker actually uses the Windows Subsystem for Linux to run containers. It doesn't have to: if you're running Windows 10 Pro you can use the Windows containerisation platform instead (I can't remember what it's called), but that's not available in Windows 10 Home, where you have to go down the WSL 2 route. I've done that anyway, because I think it's actually the nicer option, to be honest with you. So again: you've got your hardware, in our case a PC; your operating system, in my case Windows 10; your container runtime, in our case Docker; and then, not strictly correct but close enough, Kubernetes runs on top of that. Kubernetes is this platform on which we can build stuff, and again, I'm not going to delve into how Kubernetes is architected internally; we'll touch on some concepts, as I said. Now, this is really where we come to the architecture of the application we're going to build, and one point I'll make here is that there's a lot of information on the screen; if this is the first time you've used Kubernetes, I do not expect you to memorise this now, going into the next section. That's not the reason I'm giving you this information. The reasons are, number one,
to start familiarising you with some of the concepts (I'm not asking you to memorise them), and number two, to give you an end state. One of the things I found quite overwhelming when I was learning Kubernetes for the first time was that all these concepts just kept getting thrown at me; just when I thought I had a handle on it, more stuff kept arriving, and I thought, my god, there's never an end to this. So the reason I'm giving you the end-state architecture is so that you have it in view, so you know that we're working towards an end goal and can tick off our progress as we move through the video. We'll keep referring to this architecture as we build things up, so we can see what progress we're making; that way, I think you'll understand both the concepts and the fact that you are making progress and there is an end in sight. The first high-level object, let's call it that, is this term "cluster", which is Kubernetes terminology. Docker Desktop is a single-cluster system. Multi-cluster systems obviously do exist, but they're quite heavy going, full-on enterprise-type setups; we are definitely not going to touch that. For our purposes we're running on a single cluster, and that's Docker Desktop. A little more digestible is this term "node"; again, Docker Desktop is single cluster, single node, and those two terms, cluster and node, almost get used synonymously (I might even do it myself), which is not technically correct. Within one cluster you can have multiple nodes running; for example, I did think about developing this video on something like Google Cloud or Azure, where you'd probably be running on a single cluster but, certainly in the case of Google Cloud, in a kind of multi-node environment. But again, that's not really
that relevant to the video today. So: Docker Desktop, single cluster, single node, though you can of course have multiple nodes in other setups. Now we're going to start to delve into what we're really building out. We don't have to set up a node, and we don't have to set up a cluster; they're just there. The fact that we have Kubernetes running means we have a cluster and a node, and we don't have to do anything with them, so to be honest with you, going forward I'm not really going to refer to them much after this; you can, not quite ignore them, but put them to the back of your mind for the moment. But what's inside the node is really what we're going to be standing up and running. The first concept you'll have to get your head around, and it's not that complex, to be honest with you, is this idea of a pod. What's a pod? A pod runs containers: just as in the last section, where we were running up containers at the command line, a pod is used to host and run containers. A single pod can run multiple containers; in this video we're not doing that, it's one pod, one container. Now, a container and a pod are different things, but for the purposes of this video you could almost use those terms interchangeably; not technically correct, but I'll forgive you if you do it, and I might do it myself. So: pod and container, a one-to-one mapping in our case, though in the real world you can have multiple containers running in a pod; we're not doing that today. So the first thing we're going to do is take our platform service image and run it in Kubernetes in a pod, and we're going to do that using something called a deployment, but that's for later. The pod is the first-level object we'll start working with. Now, in the same way as with Docker, we can't actually access that running container; it's not accessible from outside Kubernetes, so what we're going to do next is set up
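To make the pod-and-deployment idea concrete, a deployment manifest might look like the following sketch. All the names here (platforms-depl, the app label) are illustrative placeholders, not something the course has defined at this point, and the image reference assumes my binarythistle Docker Hub account:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: platforms-depl
spec:
  replicas: 1                  # one pod; scale out by raising this number
  selector:
    matchLabels:
      app: platformservice
  template:
    metadata:
      labels:
        app: platformservice
    spec:
      containers:
        - name: platformservice
          image: binarythistle/platformservice:latest  # the image we pushed to Docker Hub
```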
something called a node port. A node port is what's termed a Kubernetes service, and there are a number of services we're going to work with through the rest of the video; the node port is the first. Node ports are really only used for development purposes; you wouldn't use a node port in a production-type environment. It's used just to allow us to test and get access to our pods, and our containers as well. Kubernetes will give us an external port that we'll use: it will begin with a 3, and it's usually a five-digit port number, semi-randomly assigned; think of it almost like the port 8080 that we used in the Docker example. We also have the internal port mapping, in this case port 80, so external traffic will come in and hit port 80 of the container running in our pod directly, which means we can test that it's all up and running in Kubernetes and that we can access it. So that's the first thing we're going to do. Next, we're going to set up another pod with our command service running; obviously we're going to have to write the command service first, since it doesn't exist yet, then build it as an image, push it up to Docker Hub, and put it into a pod running in Kubernetes. In order for these two pods to talk to each other, they each need what's called a cluster IP service; you can see here that each pod that needs to communicate gets a cluster IP service, and again there's this concept of a port mapping, which will allow our platform service to talk to our command service and vice versa. The next thing we're going to do: the node port, as I said, is really only used in a development context, and in production you would not use one, so we're going to set up yet another pod with something called an ingress
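A node port service like the one just described can be sketched as the manifest below. Again, the names (platformnpservice-srv, the app label) are illustrative assumptions; the selector just has to match the labels on the platform service pod, and Kubernetes assigns the external five-digit port in the 30000+ range itself:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: platformnpservice-srv
spec:
  type: NodePort               # development-only external access
  selector:
    app: platformservice       # must match the pod's label
  ports:
    - protocol: TCP
      port: 80                 # service port
      targetPort: 80           # container port inside the pod
```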
nginx container but it's really an ingress nginx controller running as a container and what that basically is it's almost like an api type gateway so i have done a video previously on envoy as an api gateway nginx is a competing kind of platform that does a similar thing and we're going to use that in this case to basically act as a gateway for our two services it's going to run in its own pod it's going to work in conjunction with something called an ingress nginx load balancer okay and that's another service and so external traffic will come in via the load balancer it will talk to our ingress nginx controller and the ingress nginx controller will route directly to our two services okay it won't actually make use of the cluster ip services they are really just used generally for container to container communication the ingress nginx controller in this case is going to reach out directly to our services next thing we're going to do is set up a database a sql server database because at the moment we're running internal in-memory databases for both our services so we're going to stand up a separate sql server just for use by our platform service and it's going to have a cluster ip as well so it can talk to the other services the other thing it's going to have to have is access to persistent storage space because our platform and command services they can reboot they don't need to store data they're stateless that's fine sql server it's a database obviously it can't be stateless it needs to store data somewhere so we're going to use something called a persistent volume claim to claim some storage on our physical disk now that area is starting to go into the domain of the devops type guys the administrator type guys it's not strictly uh developer um area persistent volume claims that's fine that's a developer realm but when you start going further down the food chain and you start talking about persistent volumes and storage classes
and then setting up storage that's starting to go into the realms of the devops type guys fortunately for us we can just set up a persistent volume claim and that will do what we need it to do and everything kind of reaches down onto a physical disk the final pod we're going to set up is our rabbitmq message bus and again a cluster ip and it will allow all the pods to talk together within our node now one thing you'll notice here you're probably aware of is the sql server for our command service in this video just for the purposes of saving some time the command service is going to have its own database but we're just going to keep it as the in-memory database it still has its own database i can't stress that enough it's not using the same database as the platform service it's using its own database we're just going to keep it in memory and maybe there's a bit of homework once you get started and get familiar you can spin up a second sql server just for it but again main takeaway it's not a sql server but it still has its own data store very very important and then these cluster ips maybe not the best diagram it's basically not saying that the platform sql server just talks to rabbitmq blah blah blah it basically means all these four things can talk together okay they're all chained together they all can talk together maybe not the best diagram but yes this is our end state this is what we're going to set up this is what we're going to build and as we move through the rest of the video we're going to refer back to this time and time again to make sure that we are making progress so with that i think that's enough theory we're now going to move on to creating our first pod in the next section all right so back over in my application just make sure you don't have anything running and what we're actually going to do is we're going to move out of just working within the platform service project and we're going to basically work within a kind of more
i suppose you'd call it a solution type context rather than just a project context so in your command line if you just come out of the platform service and we'll do a directory listing you'll see that we have the platform service there i'm just going to come up one more level and do a listing now all my solution is going to be within this e3 folder here so what i'm going to do is type code -r e3 and that's going to reopen visual studio code and we still have our platform service here you can see everything in here but we're now going to create another kind of project level folder i'll just say yes to that all right and that's going to contain our kubernetes assets mainly our deployment files so when you're working with kubernetes there's a number of ways you can set things up you can run things at the command line to deploy pods and do all that kind of stuff or you can write files deployment files and that's the approach we're going to take because it's much more reusable than doing stuff at the command line so you know excuse me in our solution context now we're going to create a new folder and i'm just going to call it k8s now as i said in the previous section k8s is really just related to the kubernetes stuff that we're going to be uh writing so it's not a dot net project or anything like that it's just going to be a bunch of deployment files so the first thing as per our diagram and maybe we'll just pay a little visit again we come back over here and i'll just bring it back up on screen i'll need to get a more static version of this yeah the first thing we're going to do actually yeah that's actually working okay is deploy a pod with our platform service and that's what we're going to do next and then we'll come on and do the node port but for now that's what we're going to do so within our k8s folder right click new file and we're going to create a deployment file for our platform service so we're going to call this platforms-depl.yaml okay so
it's a yaml file as the name suggests and depl obviously is just short for deployment all right so i think probably the best way to approach this is just to start typing it out and trying as best i can to explain it as i go through it's a bit long-winded um and you'll probably think there's a bit of repetition in this file but everything is necessary unfortunately and yaml files yes are whitespace sensitive so just be careful as you work through them but if you follow along you should be okay and remember the code is on github so you can pull down this file if you do run into problems following along yourself so the first thing we specify is an api version so colon and it's apps forward slash v1 and that's just specifying yeah the api version that this deployment file is going to use under the covers kubernetes actually works with um basically a rest api essentially to create and destroy its own resources so that's basically what we're specifying there what kind is it well it's going to be a deployment again we're deploying something into kubernetes and as we work through the course you will have different kinds of these files i don't want to uh populate it with all that stuff next thing is metadata and we give it a name and this is really just the name of our deployment okay so we're going to call it platforms-depl for our deployment basically and then we come on to our specification and this is really specifying our pod effectively so uh tab in and the first thing i'm going to specify is the number of replicas now as i said when we were talking about kubernetes you can horizontally scale out the number of services that run so we could have three instances of our platform service running if we wanted and that's what's referred to as a replica here we're just going to have a single replica we're just starting with one at least to begin with now there's going to be two bits that follow let me just type them out first and then let me
describe them because they're a bit confusing to be honest with you but um yeah again they're mandatory so the first thing is a selector and we come down and indent once again and then we type in match labels and let's actually get some help with it and then app and we're going to call it platformservice now we're going to back out again we're going to stay within our spec but we're going to be directly under our replicas and our selector tag now and we're going to define a template we're going to give it some metadata then colon tab in and again you've got to be really careful with the whitespace here and we're going to give it a label or labels and we're just going to give it a label of app and again platformservice and just select that from there basically now these two things the selector and the template kind of work in combination with each other and really within this template it's within here that we're really defining the pod and the container that we're going to use and then this selector is basically selecting the template that we're creating okay so it's a wee bit kind of convoluted but this template stuff is where we're defining what we're deploying and then the selector is basically selecting it for us as part of the deployment then under there we're going to back out to underneath metadata so we're still within our template section but directly under metadata now and this is where we're going to define our spec and under spec tab in and we're going to specify the containers that we want to run and we're almost at the end don't worry we're almost getting there and anything within yaml that has this kind of minus sign that's an array and you can even see by the name containers it's plural you could have multiple containers that we wanted to deploy but in our case we're just deploying one we're going to give it a name of platformservice what i want to do is just select this and make sure that we
have these three things called exactly the same thing so the match label the template app label and the name of our container are all platformservice and then finally under here just tab in under the name we want to define the image and this is just the docker image for our service so image colon and we just specify binarythistle forward slash platformservice and obviously make sure this name is the same name as the service that we have defined or you have defined and again remember this will be your uh this will be your docker hub login id and we do actually want to specify latest here okay now this complaining is just complaining that we're not putting in a couple of other things to do with resource limits i'm not going to bother doing that so i'm going to ignore that kind of warning now that's it that's basically um a deployment for a container into a pod okay so if you save that off we'll find out if we have made any mistakes but it looks okay for the moment so again probably for me the most confusing thing is this selector thing but really all it's doing is it's selecting this thing here that we've defined as part of our template that's really all it's doing now we're going to come back in here a bit later and we're going to define our cluster ip service in here as well if you remember from the previous architecture diagram we're not going to put it in yet we're going to do this all step by step one step at a time that's cool so remember to save off and yeah now having defined our deployment file we're going to actually effectively run it and deploy it into our kubernetes cluster all right so back in our application we are going to bring up a command prompt fantastic now before we do anything else regarding our platforms deployment what i want to do is just make sure that kubernetes is running appropriately now i did i believe i did anyway bring over docker desktop and i said to check that you have kubernetes enabled
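pulled together the deployment file dictated above looks something like this as a sketch where binarythistle is the instructor's docker hub id so you'd swap in your own and the platformservice image name needs to match what you pushed to docker hub earlier

```yaml
# platforms-depl.yaml - a sketch of the deployment described above
# (binarythistle is the instructor's docker hub id - swap in your own)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: platforms-depl
spec:
  replicas: 1
  # the selector picks out the pod template defined below -
  # the three platformservice names must all match exactly
  selector:
    matchLabels:
      app: platformservice
  template:
    metadata:
      labels:
        app: platformservice
    spec:
      containers:
        - name: platformservice
          image: binarythistle/platformservice:latest
```

note how the matchLabels value the template label and the container name are all platformservice exactly as called out in the walkthrough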
and that it's got the little running instance down here and that's looking all good but one of the other things that we rely on as part of kubernetes and this is starting to go into the architecture of kubernetes itself is basically the command line that we are going to use to drive all of this stuff and the command line is kubectl okay and so if you type kubectl version this is the command line tool we're going to basically use to issue all our commands you'll see hopefully if it's running correctly you'll see a client version and a server version and all that kind of stuff it doesn't really matter what the versioning is just so long as you get something back that means kubernetes is up and running and the kubectl command line is all good to go so let's clear the screen and again all these commands or most of these commands are in the cheat sheet that i talked about at the start of the video so let's just do a directory listing to see where we are for goodness sake where are we oh my goodness uh we can see that we've got a platform service and we can see we've got a k8s folder so we're going to change into our k8s folder and do a directory listing to make sure that you can see this platforms deployment file and we're now going to apply that file and so it's kubectl apply -f and then the name of the file so this is platforms-depl.yaml and if there's any issues with it you'll get an error at this point in time but that looks okay so we've got a command that comes back saying uh deployment.apps platforms-depl created okay what does that mean so if we do kubectl get deployments because basically we've created the deployment i typed kubectl wrong there we go you can see here that we have something called platforms-depl zero uh containers ready zero pods ready and one is up to date so let's keep an eye on it um but the other thing you can do is see your pods running so kubectl get pods now you're probably starting to see a bit of a pattern
emerging here with this command you can use kubectl get whatever so you can get deployments you can get pods you can get services you can see here that we've got one pod within our cluster now and it's ready and it's running so that looks good let's just get deployments again and you can see now that we've gone from zero being ready to one being ready it's all up and running that looks really good okay so i think if we come over to our docker plugin you can actually see here that we have a running container running in docker but isn't it supposed to be running in kubernetes well yes obviously it is kubernetes is orchestrating it but it still has to run in docker now what i'm going to do is i'm actually going to delete these two um containers that we had created in the previous section because it's just going to get way too busy all right now the other thing i want to do then is come over to our docker desktop and if you come over here it's uh making itself rather big and we just go into general we'll cancel out of this and we come over into the kind of main window actually to be honest with you though i actually find the docker desktop interface can be a wee bit confusing sometimes if you go into settings this is where our settings are you cancel to come out of that this is kind of showing you running stuff now the first thing you'll see is wait a minute there's actually two things running here how is that possible well this is a pattern that you'll see time and time again this is actually the running container and this thing here as you can see from the name is actually the pod that's kind of running so you can click on that the actual container one and you can look inside and you can see basically output from our service running in kubernetes and it looks very familiar seeding data because we're using an in-memory database and you can see here it's listening on port 80 which is basically what we saw when we ran it in docker as well all right so that's all
looking really good so give yourself a round of applause you've created your first deployment and you've deployed a service into kubernetes and it's running and waiting for us to start using it fantastic so in the next section um what we're going to do we're going to do a little bit more work here just to show you some of the things that kubernetes can do and how it actually orchestrates services and then we'll eventually come on to working with our node port all right so one of the things i want to show you and it's one of the things that kind of confused me a little bit about the whole kubernetes setup was okay what happens and maybe this is my destructive nature here's our running container here what happens if i delete this okay so let's stop it and let's delete it just remove it what happens then anything so if we come back to our command line let's do up key kubectl get pods you can see that we've got our pod here and it's saying zero of one is ready do it again it's now saying one of one is ready and you can see now that a container is running again and if we come back over to docker desktop you can see that the container has come back we've stopped it we've destroyed it but it's just bounced back and come back up how is that well that is one of the whole points around kubernetes it auto-recovers your containers for you so if a container crashes for example something bad happens to it because we've told kubernetes as part of our deployment come back over to our file here and look in our deployment file because we've told it we want a deployment we want one replica kubernetes is going to do everything in its power to ensure that there's one replica of our service running so even though we explicitly destroyed it there it still came back and uh runs again so you can change that behavior if you want but the whole point of kubernetes is that you've told it to do something you've given it the end
state it will make sure at all times that the end state that you've specified is being adhered to so that's one of the very very cool things about it now that's not to say that you cannot destroy things that's kind of what i thought originally you can so again if we come back to our command line you can do something like kubectl actually let's get our deployments first so kubectl get deployments and we have a list of deployments and then we can go kubectl delete deployment and we'll take its name and we will delete the deployment okay then up key up key get deployments there are no resources found if we come back over to our docker desktop kubernetes instance you can see basically it has destroyed everything we've told it to delete the deployment and there you go it's taken everything down all right so just one other thing just to show you how easy it is to tell kubernetes to run three instances of our service we can just change the number of replicas to three save that off up key up key and we'll go back to our apply command so kubectl apply -f platforms-depl.yaml and we will hit enter and clear the screen a bit here and if we do kubectl get pods you'll see that there are three pods they're not started yet but kubernetes now is attempting to start three containers for us and if we keep refreshing this it will eventually unless there's an error in our image it will eventually reflect the fact that we have three running containers as part of our cluster okay so you saw how easy that was so again the idea is if you're seeing high traffic on one particular service you can quite easily spawn up three four or many services and you know you can deal with that increased demand of course there are design considerations with the way you've built your service you know there may be other things you have to consider but in our case it would literally be that easy so you can
see here we've got two containers up and running and this other one is in this container creating state so it's still spinning up and i'm sure eventually you will see all three will have started so fortunately for us there they are all three containers they're all up and running and if we come back over to our docker desktop view you can see that we have our three services running and then we have three associated pods one for each and that's a pattern that you're going to see time and time again you're going to see these pods paired with a service all right so now you know how we spun that up i'm going to delete it because i don't want three services running so up key up key and we'll delete the deployment platforms-depl and then we'll do get deployments kubectl get deployments we have no resources found which is correct and again we'll just double check yes it's in the process of deleting that's cool so we're going to come back here before we move on change that back to one save off and then for one last time not the last time overall but the last time in this section we're going to reapply our deployment and get our one service up and running so that's all good we have a service up and running but we have no way of accessing it yet so we're now going to create a node port that will actually give us access to see if we can query this service running in kubernetes so we'll do that next all right so we want to create a node port now and a node port is one of a few different types of service that we can create but just before we do that just run a kubectl get pods just to make sure that the pod that we kind of recreated that the deployment that we ran in the last section with our one pod is now up and running so we have one pod running that's cool so just make sure you do that before you go any further so yeah we want to create a node port now so if we maybe just come back to our diagram it's probably worth just taking a quick look
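the scale out and back that was just demonstrated is literally a one line change in the deployment file followed by a re-apply as a sketch

```yaml
# in platforms-depl.yaml - scaling out is just bumping the replica count
# then re-running: kubectl apply -f platforms-depl.yaml
spec:
  replicas: 3   # was 1 - kubernetes will now keep three pods running at all times
```

and setting it back to 1 plus another apply scales it back down again which is exactly what was done above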
so we've done this now well done we're now going to create a node port and that's going to allow us external access into our service now you'll see on here we are going to create a node port we're going to specify this port here which is just referred to as or you'll see in a minute in the file the port and this one here this 80 is the target port we don't specify this 30000 port kubernetes does that for us so we specify two ports this one here and this one here all right so back over in our k8s folder if you right click and a new file i'm going to call this platforms np srv just to keep it kind of short so dot yaml don't forget the yaml so platforms-np-srv.yaml all right so i'll just pull this down a little bit and over in our file sorry we're going to specify an api version again this time it's just v1 as we're dealing with services the kind this time is going to be a service i don't really want it to auto complete it for us so service again our deployment was a deployment type this type is a service type and we're going to specify some metadata and this is really just the name that this service will be known as when we kind of work with it at the command line so i'm just going to call it platformnpservice-srv okay some of these names can get a bit unwieldy so you have to be really careful that you type them in correctly and then we specify a spec section tab in and the type in this case is node port and then we're going to work with our selector i'm going to specify the app now this is where we pop back over to our platforms deployment file and we pick this value here platformservice okay so it's basically another selector and this is what we are selecting the same thing okay this node port is going to be working with this platform service it needs to know where to reach out to and then we come back and then we specify the ports so we're gonna specify a name from memory the name thing is not
mandatory um but i'm gonna put it in anyway and i'm gonna give it the same name as this i think you need it if you're specifying more than one array of port types you can see here this is an array that we're defining or array element should i say so protocol is just tcp port is 80 and again that's the port on the node port and then target port is the target it's the port of the service that we want to talk to and again yeah if we were defining another you know set of ports then you actually do need the name in that case if you're just defining one i don't think you need the name but we'll leave it in anyway it makes for more descriptive reading cool so we'll save that off now in a very similar way that we did with our deployment all we need to do is do kubectl apply -f and then the name of our file now actually maybe before we do that well i can see we're in the folder but just do a directory listing to make sure that you're there and uh yeah let's go for it kubectl apply -f and then it's just the name of the file that you want to apply great and again we can see we've got service created if you don't see this you will obviously see an error again the errors i think i've said this before maybe i've not if i haven't the errors are quite good they're quite descriptive they usually get you to the root of the problem if you have made a mistake so although it said created how do we know for sure that it's created well you can use kubectl again get services and you can see that we have two services here now this one here was actually here by default it's there already so we can kind of leave that one for the moment don't need to worry too much about that but this is the one that we've just created so here's the name as we specified up here in the metadata and then you can see that it's a node port and you can see port 80 is the internal port and this is the port that we need to use to
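putting the pieces of the node port file together it might look like this as a sketch the service name and label match the ones dictated above and remember the external 3xxxx port is assigned by kubernetes not by us

```yaml
# platforms-np-srv.yaml - a sketch of the node port service described above
apiVersion: v1
kind: Service
metadata:
  name: platformnpservice-srv
spec:
  type: NodePort
  selector:
    app: platformservice   # must match the app label in platforms-depl.yaml
  ports:
    - name: platformservice
      protocol: TCP
      port: 80         # the port on the node port service
      targetPort: 80   # the port our container is listening on
      # nodePort (the external 3xxxx port) is left out - kubernetes assigns it
```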
actually access the service externally so let's move over to insomnia again and i'm going to create another top level folder this time for our kubernetes environment so new folder and i'm just going to call it k8s okay so we've got one for local development one for docker and one for k8s now and then we're going to create another folder here for our platform service let's just keep it consistent so rename probably a little bit pedantic but i want to keep it consistent otherwise it can get a bit confusing so platform service and then we're just going to duplicate this one here okay and we'll move it under here but we don't want to use 8080 we want to use that other port that kubernetes has given us or generated for us so it's here so if we just copy that try that again so it's 30470 let me come over here and put that in and yeah it looks right run the query there we go we have access to our service running in kubernetes which i think is very very cool so give yourself a round of applause for that we're making very very good progress so basically that's well that's not it obviously we still have a long way to go but let's refer back to our diagram to kind of check on progress so we've now done this that's fantastic so the next thing we want to do in the next section we're going to go back into writing dot net c sharp code and we're going to construct our command service and then we're going to do a very similar thing and deploy it in here and then we're going to get them talking to each other and then it starts to become very much more interesting because then we're really going into microservices concepts we talk about how these services can talk to each other synchronously or asynchronously or both so yes the next section we're going to move on to is to construct our command service all right so now what we're going to come on to do is create our command service in our kind of solution area so
back over in our it's not really our project anymore back over in our solution just make sure you stop any running services and let's close this stuff down as well all right so make sure that you are in the top level of the solution in fact we're going to need a command line so in the command line just do a directory listing to make sure that you're in the yeah top level solution area you can see the platform service and k8s folders and we're going to scaffold up a new service our commands service so dotnet new webapi and the name is CommandsService hit enter that should go away and scaffold up our commands service fantastic and it appears in our directory listing so next thing we want to do i'm just going to get rid of this uh is bring up your csproj file so you can see what package dependencies or package references we have in here and we're going to add some more in a very similar way as we did with our other service our platform service so dotnet add package and the first one again i'm going to just copy the first one that'll probably copy most of them over save you watching me typing it's AutoMapper we're of course going to use automapper here as well so i think i must have done something wrong oh yes what i haven't done uh is changed into the project that we want to be working with so cd into CommandsService there we go so what it was complaining about there was i was trying to add a package reference and there was no project file basically so there we go rookie mistake and there we go so we've got automapper added into our csproj file now fantastic so we're going to add a few package references here i'm not going to add all the packages that we need i'll add the ones that we're going to need in the kind of near future the ones that we'll work with a bit later down the line i'll pull in a bit later so the next one is add package microsoft entity framework core and just make sure it's added into the csproj file
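the dotnet add package commands being run here end up as PackageReference entries in the csproj file something like this sketch the version numbers are illustrative placeholders not from the video so expect yours to differ

```xml
<!-- CommandsService.csproj (fragment) - the packages added at the command line
     via e.g. dotnet add package AutoMapper (versions here are illustrative) -->
<ItemGroup>
  <PackageReference Include="AutoMapper" Version="10.1.1" />
  <PackageReference Include="Microsoft.EntityFrameworkCore" Version="5.0.8" />
  <PackageReference Include="Microsoft.EntityFrameworkCore.Design" Version="5.0.8" />
  <PackageReference Include="Microsoft.EntityFrameworkCore.InMemory" Version="5.0.8" />
</ItemGroup>
```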
i'll do up key and then we need exactly the same but this time design and again this is mostly the same in fact it's exactly the same as the platform service as well now i did debate whether to just you know not skip the section but not go through all the setup again but i think it's good practice for you to go through the setup again anyway and it kind of embeds what we need to do and it is only one other service so i think it's probably the right approach and then we're going to add in memory now as i said previously i'm not going to work with a sql server database for this service simply just to kind of reduce time a little bit working with an in-memory database is fine for these things for the video um if you are moving into a production environment yes you would use sql server but for the moment i'm just going to use in memory and that's it for the moment um we have the swashbuckle package reference in there again that's for open api if i remember we'll take a quick look at that um but that's it for now so our package references are done our project is scaffolded up what i'll probably just do is just remove the stuff that we don't want so the weather forecast model we don't want that and in our controllers we'll take the weather forecast controller out as well whoops and we'll delete that now the other thing we'll actually do is over in properties in launch settings json if we spin this up and we will actually spin this up locally it's using the same ports as our platform service 5000 and 5001 and so if we're doing some testing before we deploy to kubernetes or even package up in docker i'm going to run them both up locally and have them running at the same time just using the dotnet command line and so obviously we'll get an error if they're both trying to launch on the same port so i'm just going to change the command service local ports to 6000 all right and what we might do just to make sure
it's all wired up correctly let's just do a dotnet run because you never know there might be some little gremlin in the system that we didn't think about and it all looks fine and it's running on port 6000 it's not going to do much at the moment because we've taken out our controller and we don't have any models or anything all right so we'll close that down now in the next section what we're going to do what i really want to focus in on is we're going to create a controller first we're not going to create models or anything i'm just going to create a very simple controller with a very simple post action endpoint in it and the point of that is i want to just get our two services talking to each other that's really the next part of the video we will eventually come back and we'll have models and we'll add all that kind of stuff in here but for the moment all i'm really wanting to do is get our two services talking synchronously to each other using http we'll do that locally and then we'll do it in kubernetes okay so that's really the path we're taking next so in the next section we're going to create a controller with one action in it that's going to receive a post request from somewhere else and then we're going to get our platform service to talk to it in a synchronous way all right so we'll do that next all right so we're going to create a controller in our commands service simply at this point just to receive a post request from something else in this case it'll be our platform service just sending something over to us really just to test synchronous http communications between the services locally and then we'll do it in kubernetes all right so back over here i'm just going to close this down close the command line and close down launch settings json and over in our command service if you right-click the controllers folder new file we're going to create a controller now this service is going to end up with two
controllers it's going to end up with a commands controller as you'd probably expect but it's also going to have a platforms controller why basically because this service is going to work with two models it's going to work with commands and it's going to work with platforms and the way i've architected this service basically the platform is really the parent resource of the command resource so if you think about it logically you're going to have a list of commands like you know dotnet run dotnet build there'll be loads of those and they will relate just to one other resource the platform okay so when we are using rest as we are here to create resources such as platforms and commands what we should really do is split those out into two different controllers so we will have a controller to deal with our platform stuff very similar to our platform service and we'll have a controller to deal with our command creation stuff you'll see how that unfolds a bit later and i've got a whole section on multi-model rest apis but for the moment again i just want to focus on getting some comms between the two services so it might sound a bit counter-intuitive but the first controller we're going to put in our command service is our platforms controller okay so in controllers create a file platforms controller dot cs all right fantastic now let's keep going but i just want to call your attention to the fact that we are running vs code with these three projects in here now and that it can sometimes behave a little bit strangely but let's see how we go um so namespace this time is going to be commands yeah i think it's doing it already commands service yeah and so we're not really getting auto completion at the moment if you sort of see what's happening usually if i started typing command service i would get prompted with the full name of the you know of the service for our namespace um it's very evident how reliant i am on it
because i'm so used to that pattern um let's keep going command service dot controllers okay and we'll just bring that onto the next line okay and it's a public class so public class platforms controller and we're going to inherit from controller base and we don't have that namespace so control period now this is what i'm talking about here i'm doing control period to bring in my list of code suggestions and nothing is happening okay and the reason for that i believe is because we're running vs code and we're now having it hosting multiple projects so the kind of intellisense stuff appears not to really like that so what we're probably going to have to do is close down our vs code session running all these projects and reopen it just with command service and i'm fairly sure then we'll get our code suggestions so let me just save that off for the moment kill this actually i won't kill it i'll just bring up a command prompt and then we'll come up a level do a directory listing so we can see all our projects and i'm just going to open code recursively and i'm just going to open command service itself let's do that okay fantastic and if we come back over to our controller there we go it should start complaining and i'm fairly sure if i put my cursor in there and do a control period i will get my code suggestions again so it's just something i've found i'm not sure if there's any other way around it there probably is but for the moment i'm fine with just running an individual project in our vs code session all right so public class platforms controller inheriting from controller base the next thing we want to do is decorate our class with api controller as before it gives us some nice default behaviors and we're going to create a route again okay so from memory from last time api and then you can put in not curly brackets but square brackets you can put in controller okay now you might
see a problem straight away here because we're going to have a route to something called api forward slash platforms we're also going to have that in our other controller as well you may go well that's not a problem because the rest of the url is going to be completely different the ip address or the domain name is going to be different correct that's fine so we could leave this like this for the moment however when we come on to using our api gateway things get a little bit more complex so i'm just going to call that out now i'm going to leave it like that at the moment this will still work at the moment but going forward we are going to have to change this slightly to reflect that this is a slightly different platforms controller from the platforms controller in our other service in fact i'm going to put it in now okay because if i forget when we run into issues it might become a bit problematic so what i'm going to do and again when we move through the rest of the video this is all kind of hidden once we hide all the stuff behind an api gateway so you're not going to have these horrible urls that you're going to have to deal with but just for future proofing i'm going to put this in so all i'm doing here is the route is api and then i'm going to specify a c which really is for our commands this could be anything by the way i'm just putting c in so api forward slash c to denote our command service and then the name of the controller it will become clearer later on when we move on to the gateway stuff so bear with me on that so that's our route fantastic i'm going to create a shell constructor just so it's there but we're not really going to put anything into it yet now we will eventually like we did with the platform service injecting auto mapper and a repository but we're not doing that yet again in this section all we're interested in doing is getting an action up and running all right so leave a constructor at that for the moment
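as a rough sketch what we've built up so far looks something like this and i'll stress this is just a sketch based on what's been typed on screen so the exact formatting may differ slightly and the namespace name is assumed

```csharp
using Microsoft.AspNetCore.Mvc;

namespace CommandsService.Controllers
{
    // the extra "c" segment distinguishes this route from the
    // platforms controller in the platform service once both
    // services end up behind the api gateway
    [Route("api/c/[controller]")]
    [ApiController]
    public class PlatformsController : ControllerBase
    {
        // shell constructor for now - automapper and a repository
        // will get injected here later on
        public PlatformsController()
        {
        }
    }
}
```

the [controller] token resolves to the class name minus the controller suffix so this class answers on api/c/platforms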
and then we're going to create our first test action so it's public action result and i'm just going to call it test inbound connection okay and it's not going to accept anything really and all we're going to do is just do a console writeline keep it really simple bring in the namespace for console using system console dot writeline and i'm just going to put something like inbound post at commands service okay that's all we're doing so when we hit this action we're going to get this printed out to the screen and i just need to decorate it with http post and then finally you do need to return something back because it's an action result so we'll just return okay and i'll put something in here like inbound test okay from platforms controller quite a few spelling mistakes in there all right fantastic so that probably looks a little bit convoluted put the trailing semicolon in there and maybe some of the stuff in here you don't quite follow that's okay don't worry it will become a lot clearer later on so what i want to do now is just test this okay i'm going to test it from insomnia just to make sure everything's wired up correctly and it's working okay so into the terminal make sure you're in command service we'll do dotnet run i don't think i'm forgetting anything but we'll find out and it should be running again on port 6000 fantastic so let's bring up insomnia and we're going to close all that down and we're going to come over to our local development environment because that's what we're running in we're going to create a folder under here new folder i'm going to call it command service okay i'm going to create a new request it's going to be a post request we'll select json we might pass some json over eventually and we're going to call this test inbound connection okay create fantastic and what i might do is just come in here and i'm just going to copy this make sure i copy it all and i'm going to paste it in here make that a little bit
bigger so you can see it now a couple of things here the port is going to be different so it's going to be port 6000 and as per our route it's going to be api forward slash c forward slash platforms okay so back over in our controller you'll remember this is the route to this controller okay so api c and then it'll be platforms because we're in our platforms controller all right so again the screen's running at quite high magnification so i'm just going to click send we get a 200 okay and we get the response back inbound test from our platforms controller which looks good and if we come back over to our running service you can see that we got this message printed on screen cool so very simple but really just setting this up to establish that we have an endpoint in our command service what we need to do now is create a method of calling this from our platform service so we can establish some communication service to service we'll run that both locally and then we'll deploy it into kubernetes so we'll move back over now to our platform service and write up what we're going to call a synchronous client effectively a synchronous http client so we'll do that next all right now actually before we move on to doing any coding i do appreciate i've been talking about synchronous messaging and asynchronous messaging and you might be going what's this guy talking about so yes i think it's a good time to talk about this now i was going to leave it till a bit later when we came on to talking about message buses but i think since we're starting to do some messaging now i think it's a good place to actually put these slides in so yeah let's talk about synchronous and asynchronous messaging so what is synchronous messaging in short it's basically talking about a request response cycle type of setup okay so as a client i'm going to make a request to our http endpoint in this case and i'm going to sit there and wait until it responds back okay so the requester will wait
for a response this is probably going into maybe a bit too much detail at this stage but generally speaking and it's probably quite a controversial statement and i'm probably going to get lots of hate mail for this one but generally speaking services such as the you know command service and our platform service when they're servicing requests from external clients such as a web application or insomnia those type of external facing services are synchronous okay that's a very common pattern when you're talking about service to service communication i.e. within our cluster for example where it's just services talking to each other you can use synchronous communication for sure and we're going to do that but for the most part we in this video anyway are going to be using asynchronous communication and i'll come on to what that is in a bit that might be a bit of a controversial statement but yes for the most part externally facing services are definitely synchronous they don't tend to be asynchronous that's less controversial all right um synchronous services usually need to know about each other so as a requester i need to know explicitly that i need to request that service for some reason and i need to know where it is now i put usually in brackets because there are things like service discovery and service meshes and all that kind of stuff where you can maybe have that kind of automatic service discovery we're not covering that here today so for the most part what i'm saying is if you're making a synchronous request to an endpoint you need to know where that endpoint is so why is there an issue with that well it becomes a bit of a management overhead if your service is calling on 10 other services it needs to know exactly where all those 10 other services are in order to call them it becomes a bit cumbersome we're going to be using two forms of synchronous messaging today http which we'll come on to doing next and grpc all right
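just to make that requester waits idea concrete here's a minimal standalone sketch of one service making a synchronous http call to another note this is not the code we'll write in the course the localhost port 6000 address and the empty json payload are just placeholders matching our local test setup

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class SyncMessagingDemo
{
    static async Task Main()
    {
        // the requester has to know exactly where the other service
        // lives - this hard coded address is the "services need to
        // know about each other" problem described above
        var client = new HttpClient();

        try
        {
            // from a messaging perspective this is synchronous:
            // this request/response cycle does not complete until
            // the command service answers (or the call fails)
            var response = await client.PostAsync(
                "http://localhost:6000/api/c/platforms",
                new StringContent("{}", Encoding.UTF8, "application/json"));

            Console.WriteLine($"--> command service replied: {(int)response.StatusCode}");
        }
        catch (HttpRequestException)
        {
            // if the endpoint is down the requester is left holding
            // the failure - a taste of the dependency chain issue
            Console.WriteLine("--> could not reach command service");
        }
    }
}
```

the key bit is that the caller cannot carry on with this piece of work until the other service answers or the call fails which is exactly the coupling the slides are describing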
now this bit i really wanted to put in because it can get a bit confusing so i've just said http is a synchronous messaging construct cool and it is right so i'm not retracting that a very good question and a question i had initially as well was hang on wait a minute if i've written a controller which we have done and i've got some action results in there i can mark those action results as async doesn't that make it an asynchronous messaging construct then the answer to that is no and i'll tell you why so from a messaging perspective as you can see on screen say we did tag one of our action results as asynchronous from a messaging perspective the method is still synchronous any client making a request to that endpoint still has to wait for a response okay the async stuff is really all to do with the way that a service manages its threads internally from a client perspective from a messaging perspective anybody requesting that service still has to wait for a response so it's still synchronous messaging just to clarify async in this context the c sharp language context means that the action will not wait for a long running operation so say that action result is doing some lengthy database step or maybe even calling another service if it's long running what that async keyword means is it's not going to maintain hold of its thread while that long running operation executes it's going to return the thread back to the thread pool okay and so yes as it says here it'll hand its thread back to the thread pool so it can be reused and when the long running operation finishes it will re-acquire a thread and complete and then respond back to the requester so all this time the requester in this scenario has still been sat there waiting for the service to respond back this whole async construct is purely about thread management internally for that service from a messaging perspective it's still synchronous though so yes as it says async in this context is really about thread
exhaustion the requester still has to wait the call is still synchronous okay so async in this context is about c sharp language thread management nothing to do with messaging http is by and large still a synchronous messaging construct all right so synchronous messaging between services can and does occur and we're going to implement this so as i said this is all coming back to this question do services talk synchronously to each other we are going to do this however it does tend to pair our services it does tend to couple them a little bit creating a dependency now again in the microservices world one of the kind of tenets is that you want to try and decouple where possible you don't want lots of chatty communication between services they're going to have to talk to each other so there's a bit of a paradox here but for the most part you don't want lots of inter-service communication and you certainly don't want it in a synchronous fashion additionally it could lead to something called a dependency chain so let's look at this diagram here service a makes a synchronous call to service b service b then has to make calls to services c and d and service d then has to make a synchronous call to service e so you can imagine if there's delays between any of those request response cycles or if one even fails what's going to happen not a fantastic result and service a is probably going to be sitting there waiting for a long time you also get this horrible kind of yeah well the name says it all dependency chain where each service needs to know exactly what other services to call it starts to get a bit cumbersome you might hear the term spaghetti programming maybe not fully applicable in this case but you get the idea it becomes a bit of a mess okay but nonetheless we are going to implement it to begin with and there are scenarios where it's probably the better pattern to use as opposed to using an asynchronous event based pattern and we're
going to implement that with grpc a bit later so i'm not saying you don't use synchronous communication i'm just saying that as a developer you've got to make that decision about which is the best to use from a service to service perspective all right so asynchronous messaging there is no request response cycle the requester does not wait sounds good doesn't it it's more of an event model for example a publish subscribe type model typically used between services again might be a controversial statement some people out there will probably disagree with that but this is my video and this is what i'm saying so you know go with it an event bus is often used and again we're going to be using rabbitmq much later down the path we'll introduce rabbitmq and services don't really need to know about each other they just need to know about the bus so the idea is as a requester i just need to talk to the message bus and put something onto the message bus and go here you go here's something that other services need to know about or here's some information that i potentially need and i just put that onto the message bus and the message bus sends out these events to any other services that are subscribed to those events on the message bus those other services will receive an event go oh okay i've got this new event and then they'll respond back onto the event bus in that kind of fashion so it's event driven the event bus kind of sits as an intermediary between services and it's a publish subscribe type model an event driven model and we're going to go into that quite a bit in this video it does though introduce its own range of complexities and it is actually much more complex than a synchronous request response cycle it does introduce a lot of other things that make it more complex it's not a magic bullet by any means so then the final slide in this section and it's a question i wanted to surface here is isn't the message bus a monolith you know it's this big thing
that all these other services are dependent on to some extent yes and it's the same kind of conversation you'll have about api gateways isn't an api gateway basically a monolith again to some extent yes but focusing in on the message bus is it a monolith yes to some extent internal comms wise if services are relying on the message bus internal comms between services would stop working if the message bus went down however if you've designed your microservices appropriately those services should be autonomous enough that they can still deal with external traffic they can still deal with requests to them and be able to deal with them to some extent even if they cannot talk to other services internally so yes services will typically still be able to operate and work externally with some caveats of course the message bus therefore because it is so important should be treated as a first-class citizen similar to the network physical storage power any of that stuff and so typically when you're setting up a message bus in a production environment you'd set it up in a clustered fashion you would have fault tolerance you'd have multiple instances all talking to each other so if one goes down it's not an issue and again that's how rabbitmq themselves recommend you set up rabbitmq in kubernetes using this cluster type approach we're not going to do that here it's a bit too complex for this video it's just an introduction video so we're going to set up a single instance of rabbitmq for our purposes but generally speaking in a production environment you would have this cluster the other thing i put there is you'd also typically have a degree of persistence so if your cluster dies and then it restarts again it's not lost all the messages it had in the queue it should have some ability to retrieve the messages it had in its queue from some persistence layer that being said services should also implement some kind of retry policy so say the message
bus goes down you don't want to just fire messages onto it and then if the message bus is gone you've lost that event the services should be intelligent enough to retry keep those events queued up and retry when the message bus comes back up so something you'll hear about often in the microservices world is you aim for smart endpoints your services should be intelligent and the pipes should be relatively simple or dumb as i've put there okay so they're really just there as a transport to get an event from one service to the other all right so with that yeah again i just wanted to introduce you to this concept of synchronous messaging which we're going to be talking about a lot and asynchronous messaging which we're going to be talking about a lot when we come on to asynchronous messaging and rabbitmq we'll do a full section on that a bit later but for now we're going to move back over to our platform service and implement a synchronous client that will allow us to talk to our command service so we'll do that next all right so back over in our command service we're not actually going to work in our command service now we've got everything we need in here for the moment so i'm just going to close down the running service now i'm going to leave our command service open in this instance of vs code and i'm going to bring up a separate instance a new window for our platform service because we're going to be working in there now so in our new instance of vs code i'm going to move into my working directory cd season 4 and change into episode 3 and i'll do a quick directory listing to make sure i can see platform service and i'm just going to open that in this instance cool all right so we have our platform service up and running now so yeah what we're going to do is create a http client effectively that's going to issue a post message to our command service now this is just a test really we're not really going to implement too much functionality here but the idea would be that
if we created a new platform in the platform service then we would want to let our command service know about it because our command service is somewhat reliant on platform information and one way we could do that is by sending a http post payload over to our command service and it can then take that and do with it what it wants we're going to properly implement that using a message bus type construct using an event publish subscribe type model so here we're not going to fully implement it all i really want to do here is just make sure that we've got comms working between the two endpoints more especially when we come to putting it into kubernetes this is actually more of a kubernetes thing really just setting up the networking between our services and i'm doing this just to help test that nonetheless it's still a valid use case and you know we want to make sure that it's all working so back in our platform service we're going to create a new folder for our synchronous data services so new folder and we're going to call it sync data services and we're eventually going to end up with two types of synchronous data service http that we're doing now and grpc but we're going to just focus on http for the moment so new folder http now make sure i capitalize the h all right now we're going to make use of something called a http client to make the request and moreover we're going to make use of something called the http client factory to basically give us a http client and i've done videos on this before the main reason we're using http client factory is if you're making multiple requests using http clients you should be using a factory to do that for you because it just manages connection safety and all that sort of stuff and you don't end up with connection exhaustion and all that sort of stuff so we're going to be using a http client factory but if you follow along with the code it should be
relatively straightforward so the first thing we're going to create is an interface because we are going to use constructor dependency injection and we're going to inject this client into various places namely our controller so we want to allow constructor dependency injection from that perspective so new file and we're going to call it i command data client dot cs and then we're going to define namespace platform service sync data services dot http fantastic and it's just a public interface called i command data client and we're just going to have one method signature on it it's going to be asynchronous and you'll see why in a moment so we'll have to bring in the namespace for task in a second and we're just going to call this send platform to command now we're not really going to do anything with this but i thought just to make a bit more of a realistic example if we come over to our dtos the idea is we want to send over a platform that we have created and we have two dtos here we have our platform read dto with all this information in it and we have a platform create dto with this information in it now i'm thinking in this case when we send the platform over to our command service we probably actually want to include our id because we probably want to give it a unique reference to the platform that we've created so i'm going to use the platform read dto as the message that we pass over okay so in here we're going to say platform read dto and we'll just call that plat so we're going to bring in a few namespaces here so control period to bring in platform service dtos and we need to bring in threading tasks for this okay great so that's our interface created so i'm just going to come on now and create the concrete class so again within the http folder new file and i'm just going to call this http command data client dot cs okay specify the namespace platform service sync
data services and as we've typed this in before you can see it's remembered it which is cool and then it's just a public class called http and i realize i've just spelt that wrong so let me just rename that command data client yep so http command data client and we're going to implement the i command data client so we'll do a control period implement interface fantastic now before we actually come on and implement this method here what i want to do is make use of this http client factory so in order to do that we use once again constructor dependency injection here so we're going to create a constructor for our class and into it we're going to inject a http client and you'll see how this all ties together when we register this service in our startup class and we're just going to call that http client so into here we're going to get this http client injected but we do need to bring in the namespace using system dot net dot http fantastic and again we need to create a private instance so i'm going to call it underscore http client is equal to http client and in here control period to generate a read-only field that we can use throughout the rest of our service okay let's just save that for the moment and now we're going to come down and actually implement our send platform to command synchronous call so the first thing i want to do is create a payload effectively that will contain our platform read dto information so i'm going to do a var call it http content and then new up a string content object open parenthesis and i'll just put in the semicolon and the first thing i'm going to do is we're going to serialize this object this platform read dto so i'm going to use json serializer make sure i spell that correctly and there we go using system text json now you can see you get two choices here you could use the older newtonsoft json library but i'm using the microsoft library moving forward okay and we're just
going to serialize the platform object so we can send it over the wire basically all right and then the encoding bring in the namespace for that it should be system text there we go it's going to be utf-8 and then finally we just need to specify the media type which is going to be application json okay i'm going to bring that back up onto this line here okay so this string content object is something we're going to send over with our post request okay and it's still complaining because we're not really returning anything as yet then we're going to basically make a post async request with our client so i'm going to do var response equals await use our http client post async and i'm just going to do a control b to get rid of our sidebar so you can see what's happening and here we do have a bit of a problem well we will solve it of course but we need to know where and again this is talking about this kind of pairing of services to one another we need to know where to send this post request to now we could just hard code the uri of our service in here okay but that's not a particularly elegant solution so what i want to do is move this type of stuff into config okay but for the moment let's just hard code it and we will eventually come back and put it into config so that it's not hardcoded which is never a great idea so basically what we want to do let's come back over to insomnia and when we tested our inbound connection in our local environment this is the uri or url that we want to send this to so we'll just copy this in for the moment again i just want to get this up and running and working but we'll circle back and put this into config okay so we'll paste that in there and then we need to supply the http content along with this as well and it's complaining at the await okay so we need to put an async up here there we go and that should resolve that so basically it was complaining that we couldn't call an await method when this method signature wasn't
asynchronous itself okay so we're going to post to our endpoint and we're going to provide that http content cool and then all we want to do is just check that we get a successful or otherwise response back so if response dot is success status code then we'll do a console dot writeline bring in the namespace using system writeline and we'll just say sync post to command service was okay otherwise console dot writeline and i might just copy this message here and we'll say it was not okay all right fantastic so let's save that off okay so we've still got a couple of things to do we want to bring in configuration rather than hard-coded strings so what we can do actually is up here in our constructor where we're injecting a http client we can also inject i configuration and we'll just call that configuration okay we'll bring in the namespace for that now configuration is one of those lovely objects that gets injected everywhere basically or can be injected everywhere so we don't need to do too much setup there the other thing to be really careful of and i made this mistake is to bring in the right namespace for i configuration because auto mapper also has that too so make sure it's the microsoft extensions configuration one that you bring in at the top okay and then we'll set up our private field so call it underscore configuration and make it equal to configuration and then control period generate a read-only field there we go and we can use this configuration now to inject this in from a config file so i'm going to copy this and i'm going to do a control b and we're going to come over to our configuration file so we have our app settings development json configuration file which seems like a good place in which to put our config for our endpoint from a development perspective so what i'm going to do is i'm going to come over to here and into our app settings dot development dot json i'm going to create a new parameter in here and i'm just going to call it
command service and the value i'm actually just going to make the kind of base url for our command service so we know where to get to it so we'll save that off in here and we'll come back to our command data client i'll do a control b to bring this down and what i'm going to do is i'm going to take this out and i'm going to make use of string interpolation to inject that part of our uri into our endpoint here now you could put the whole thing in to be honest with you i'm just putting the server uri but you could put the whole path in if you wanted which is entirely up to you and i'm just going to make use of configuration so basically we use what we get injected in here configuration and then just reference the setting we put in here which is command service copy that and put that in here and make sure you get the forward slash so basically this thing here all the way up to here is basically going to be http localhost port 6000 and then the rest of the uri now again you could replace the whole thing if you wanted in here with that entirely up to you in fact you know what let's do that because it doesn't really make sense does it to just have that little bit so let's just put that in here like that that makes more sense doesn't it yeah okay so basically we're just gonna put the whole thing in all right so that should work now actually no there's one thing that we still need to do and i'll just take this out i don't know how that got in there so let's save that off now because we're using this http client factory ultimately or we're getting a http client injected in we need to make use of a http client factory in order to do that so this is the last bit of this section it has been a bit longer than i had intended so i do apologize but control b and back over in our startup class what we want to do in our configure services method just under our repository registration is we're going to make use of our
This is where the HttpClientFactory comes in: on the services collection we call AddHttpClient, specifying the interface and implementation, so that when we ask for an ICommandDataClient we're given an HttpCommandDataClient. AddHttpClient registers a typed client whose underlying HttpClient is managed by a client factory. Let me bring in the namespaces so both types resolve — they're in the same namespace, so that's sorted. So now, back in our data client, when the class constructor gets called we'll get an HttpClient injected that's managed by an HttpClientFactory. That was quite a long section, so we'll test it in the next video — go make your cup of tea, then come back and we'll test it locally and make sure that when we call this service we can actually send a message down to our command service. All right, so, still in our platform service: we've created our synchronous HTTP client, and now we want to actually use it somewhere. Where does it make sense to use it? Probably from our platforms controller: when somebody does an HTTP POST and we create a platform in our platform service, at that point we want to make a synchronous call to our command service to make it aware — hey, we've just created a new platform. So we're going to put that code in and then test it. Control-B, just to give us a bit more breathing space. The first thing we need to do is inject an instance of our command data client into our platforms controller.
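The registration just described is a one-liner in ConfigureServices — a sketch, assuming the interface and class names above:

```csharp
// Startup.ConfigureServices — registers a typed client: whenever DI is
// asked for an ICommandDataClient it supplies an HttpCommandDataClient,
// whose HttpClient is created and managed by IHttpClientFactory.
public void ConfigureServices(IServiceCollection services)
{
    services.AddHttpClient<ICommandDataClient, HttpCommandDataClient>();
}
```

Using a typed client this way means you never new up an HttpClient yourself, and the factory handles pooling and handler lifetimes behind the scenes.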
So we can use it from the controller, I'm going to put each injected parameter on its own line, because the list is getting long: we still keep the platform repo and the IMapper, but now we also inject an ICommandDataClient — bring in the namespace, using PlatformService.SyncDataServices.Http — and call the parameter commandDataClient. As usual, we create a private field: _commandDataClient = commandDataClient. Make sure the parameter is the one without the underscore and the field is the one with it, then generate a read-only field — this pattern should be very, very familiar to you by now. And because we registered the client in the Startup class, it's available to us here. Now come down to our POST endpoint: just before we return with our CreatedAtRoute statement, I'm going to make the call to the command service. Because things could conceivably go wrong, I'll use a try/catch block — we don't want some horrific unhandled exception if that endpoint is down, for example. So: await on our command data client, calling SendPlatformToCommand and passing in the platform read DTO we've got here. That should be sufficient. It's going to complain, because this is not yet an asynchronous action — and again I'll stress that async in this context has nothing to do with asynchronous messaging — but before we fix that, we'll catch any exception as ex and do a Console.WriteLine using string interpolation (put a dollar sign before the double quotes): "Could not send synchronously", plus the message from the exception. (I may not have spelt "synchronously" correctly — honestly it doesn't much matter, but for those of you who hate spelling mistakes it's probably quite annoying.) So hopefully this makes sense: we try to use our data client to send a platform read DTO over to the command service, and if it fails we catch the exception. The compiler is still complaining that there's no await operator, so we mark the action async and wrap the return type in a Task, bringing in using System.Threading.Tasks. This makes the action asynchronous, because this call could potentially be long-running — but, to labor the point, from the calling perspective it's still synchronous messaging. Save that off, bring up the command line, dotnet build — looking good — then dotnet run. Just to be super pedantic: it says the hosting environment is Development, which means it will use our appsettings.Development.json file and pull in the right config for the command service endpoint. Actually, let me stop it for a second: up in our Startup class, in ConfigureServices, I might add a Console.WriteLine so we can see the endpoint printed to the screen.
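Putting the pieces of that POST action together, it looks roughly like this — a sketch assuming the repository, mapper, DTO, and route names used elsewhere in the course:

```csharp
[HttpPost]
public async Task<ActionResult<PlatformReadDto>> CreatePlatform(PlatformCreateDto platformCreateDto)
{
    var platformModel = _mapper.Map<Platform>(platformCreateDto);
    _repository.CreatePlatform(platformModel);
    _repository.SaveChanges();

    var platformReadDto = _mapper.Map<PlatformReadDto>(platformModel);

    // Wrap the synchronous call so a downed command service can't
    // bring this endpoint down with it.
    try
    {
        await _commandDataClient.SendPlatformToCommand(platformReadDto);
    }
    catch (Exception ex)
    {
        Console.WriteLine($"--> Could not send synchronously: {ex.Message}");
    }

    return CreatedAtRoute(nameof(GetPlatformById),
        new { Id = platformReadDto.Id }, platformReadDto);
}
```

Note that even though the action is now async/await, the caller still blocks until this returns — it's asynchronous C#, not asynchronous messaging.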
We'll make use of the configuration object — called Configuration in this class — and print out the CommandService setting. Because we'll be moving between different environments quite a lot, it's worth printing to the console what the service thinks the endpoint is. Start it again — there we go, you can see the command service endpoint printed, and that is indeed exactly what we should be using; we see the data, all good. So that's our platform service up and running; let's move back to our command service and make sure that's running too — fantastic. Now we'll make a platform create request to our platform service, which should then call our command service. Moment of truth: bring up Insomnia once again — we're in our local development environment — and create a platform in our platform service. Send that over... looks like it created it, and over in the command service output you can see an inbound POST arriving — fantastic. Back in the platform service we have a message saying the synchronous POST to our command service was OK. So that's all working rather nicely. But, just to illustrate a point, I'm going to come back to our command service and kill it, so that endpoint is no longer there, then create the same platform again and see what happens. Click send... you can see it's taking time... it still comes back OK, because the platform was created — but it took two or three seconds. Why? Well, back in the platform service output you can see "Could not send synchronously — no connection could be made", which totally makes sense — but you saw how, as a consumer, we were left waiting for that response to come back. Imagine a chain of calls like that: not a great experience. Nonetheless, we've achieved what we wanted. To recap this quite long section: we created a test endpoint in our command service, and a synchronous HTTP data client in our platform service that calls that test endpoint synchronously. We've established that it works; what we need to do now is get our command service into Kubernetes and wire the two services up using ClusterIP services. So, a quick sense check of where we're at: we have our platform service running in Kubernetes with a NodePort set up; we do not yet have our command service running in Kubernetes, but we have it running locally, and we've established connectivity between the two. In this section we'll get the command service up and running in Kubernetes, then create ClusterIPs for each of our pods so they can talk to each other. The first step of getting the command service into Kubernetes is packaging it as a Docker image and pushing it up to Docker Hub. Let's make sure we're in the right place — we're in CommandsService, so that's correct. Clear the screen, and in the root of the CommandsService project, right-click, New File, and create a Dockerfile, as we did before for the platform service; all I'm going to do is copy the config across from the platform service Dockerfile.
The command service Dockerfile is effectively exactly the same bar one small change, and that's the entry point: rather than PlatformService.dll, it's CommandsService.dll. Double-check in the bin folder that the DLL name matches exactly — I actually got this name wrong in one of the other recordings of this video and it caused all sorts of issues, so make sure you have the right entry point. Save that off. Then, as before, we do a docker build, tagging it with your Docker Hub ID and the name of the service, commandservice — don't forget the build context (the dot) at the end. The build should be relatively quick — fantastic. Then we push it up to Docker Hub: drop the build context and the -t flag and type push instead. This pushes the image up so that when we create our Kubernetes deployment for the command service, it has an image to go away and pull down. Interestingly, you can see it saying "layer already exists": because Docker Hub is aware of layers from our other image, it doesn't have to push the whole thing up again. Now, I probably should have tested the image before I pushed it, so let's do that now with a docker run, specifying an external-to-internal port mapping, which you should be familiar with by now. I'm not going to run it in detached mode, because I actually want to see the output — so it's just docker run, the port mapping, and the name of the image.
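For reference, the Dockerfile and the commands just described look roughly like this — the base image tags are an assumption (.NET 5-era, matching the course's August 2021 date), and `<dockerhubid>` is a placeholder for your own Docker Hub ID:

```dockerfile
# Multi-stage build: publish with the SDK image, run on the slim runtime image.
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build-env
WORKDIR /app
COPY *.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Release -o out

FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
COPY --from=build-env /app/out .
# The ONE line that differs from the platform service's Dockerfile —
# this must match the DLL name in your bin/publish output exactly.
ENTRYPOINT ["dotnet", "CommandsService.dll"]
```

```shell
docker build -t <dockerhubid>/commandservice .
docker push <dockerhubid>/commandservice
docker run -p 8080:80 <dockerhubid>/commandservice   # quick local smoke test
```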
So: binarythistle/commandservice. Run that up — cool, it's started on port 80 and we're not getting any errors, so that all looks pretty good to me. I can see the command service container running in Docker, so I'll just stop it from there and get our command prompt back. So, in the next section, we're going to update our platforms deployment file to include a ClusterIP service, and we're going to create an entirely new commands deployment file that also includes one. All right, let's come over to our K8S project. Here we have our existing platforms deployment; that's not actually going to change, but underneath it I want to add a definition for our ClusterIP service. Now, you could put this into a separate file if you want — I like to keep it in the same file in this instance, as the two things are really very closely related. We specify an apiVersion — this time it's just v1 — and the kind is Service (this will look very similar to the NodePort stuff we did a bit earlier), and under metadata we give the service its name: platforms-clusterip-srv. These names get quite long, so you've really got to check your typing — I guarantee you will make a typo at some point throughout this course, so be really, really meticulous with your spacing and with the naming of your services. They have to be absolutely exact; if you're one character out, as has happened to me many times, nothing will work, so you've got to double, double, double check — and if you do run into trouble, just download the config file from GitHub and you should be okay. Next we specify what type it is: type ClusterIP, capitalized. Then the selector: app, whose value is the app label from our template — platformservice in this case. I'll copy it from above, making sure I've not copied any trailing space, and paste it in; again, you've got to get this stuff absolutely correct. Back out a level and define the ports — it's an array — giving the port a name (platformservice again), protocol TCP, port 80, and targetPort 80. That's it for now; we'll add another port onto this when we come to doing gRPC, but for the moment we're just using plain old HTTP, so this should suffice. So our actual Deployment hasn't changed, and we're adding a new ClusterIP service alongside it; save that off — we're not going to apply it yet. Then we create a new deployment for our commands service: right-click, New File, and call it commands-depl.yaml. I'm going to copy the whole of the platforms deployment file into it and do a bit of a find-and-replace for the command service, working from top to bottom: the name of the deployment becomes commands-depl rather than platforms-depl; the name of the service becomes commandservice — let's come back over to Docker Hub and double-check the name: it's commandservice, not plural — and I'll copy that and replace it everywhere it appears: the labels, the container name, and the image, binarythistle/commandservice:latest. Then we come down to the ClusterIP service section of this file.
For the commands file we change the service name from platforms to commands-clusterip-srv, and change the selectors to match the commandservice label. If I highlight commandservice, we should see the same value highlighted consistently throughout the rest of the file — save that off, that all looks good. So, to reiterate what we have done: we built our command service as a Docker image and pushed it up to Docker Hub, because our deployment needs it; we updated our platforms deployment to include a ClusterIP service, which has not yet been deployed; and we created an entirely new commands deployment that contains both the Deployment for the service and its ClusterIP service. In the next section we're going to run both of these deployment files to get everything up and running — but just before we do, there is one little bit of config we need to pop back into our platform service.
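The finished commands-depl.yaml ends up along these lines — a sketch following the course's naming conventions, with `<dockerhubid>` a placeholder; the ClusterIP service added to the platforms file has exactly the same shape, with platforms-clusterip-srv as the name and platformservice as the selector:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: commands-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: commandservice
  template:
    metadata:
      labels:
        app: commandservice
    spec:
      containers:
        - name: commandservice
          image: <dockerhubid>/commandservice:latest
---
apiVersion: v1
kind: Service
metadata:
  name: commands-clusterip-srv
spec:
  type: ClusterIP
  selector:
    app: commandservice        # must match the pod template's label exactly
  ports:
    - name: commandservice
      protocol: TCP
      port: 80
      targetPort: 80
```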
As I was saying, we need to make a bit of a change to our platform service's config. Back over in the platform service: we need to know where our command service resides — this is one of the problems with synchronous data services done this way; you need to know where all the endpoints are. We created a config entry in our appsettings.Development.json file that tells us where to find the command service in our development environment, and that's all cool — but when we deploy into Kubernetes it's an entirely different environment; it actually runs as Production. So that bit of config, number one, is not going to get picked up, and even if it did, it would be incorrect anyway, because the command service is not going to be running at localhost — it's going to be running somewhere else. So the first thing we need to do is create another settings file just for our production settings. I'll clear some of this down, because it's getting a bit busy, come back over to the root of the project, and create a new file: appsettings.Production.json. These settings will only get picked up in a Production-type environment. I'll copy the CommandService entry from the development file, come over here, create some curly brackets, and add it as a top-level item. That's point number one — but where is our command service going to reside in Kubernetes? Well, back in our K8S folder, inside our commands deployment, we created that ClusterIP service for our command service. Come back to the architecture diagram, because I think it's worth revisiting: what we are attempting to do is get our two services to talk to each other via their ClusterIP services, so from the platform service's perspective, the endpoint it needs to reach out to is really the ClusterIP service attached to the command service pod — the name of that service effectively is the endpoint. So I copy the service name, and in our appsettings.Production.json I replace the localhost part with the name of the commands ClusterIP service, plus the port — service name, colon, port — with everything else remaining exactly the same. A little bit convoluted, I know — there's a lot of config involved in setting all this up, and you're probably going to make mistakes along the way; don't stress if you do, I've made plenty.
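So appsettings.Production.json overrides the development value with the in-cluster address — the host name here must match the metadata.name of the commands ClusterIP service character for character, and the route path is assumed from earlier in the course:

```json
{
  "CommandService": "http://commands-clusterip-srv:80/api/c/platforms/"
}
```

Kubernetes' internal DNS resolves the service name to the ClusterIP, which is why a plain service name works as a host inside the cluster.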
The other thing to note here is that because I've effectively changed our image — created some new settings in a new appsettings file — I'm going to have to rebuild our Docker image for the platform service and re-push it to Docker Hub. So let's do that now: build it and push it up (up-arrow, up-arrow: docker push binarythistle/platformservice). It shouldn't take that long, as we've only made a minor change. Now, going forward you would probably want to externalize settings so you don't have to redeploy images just to change this type of thing — you can see this probably isn't a particularly efficient way of changing your settings — but I've chosen to do it this way in the interest of time. Once that's pushed up, we're going to apply our two deployment files, and we'll run into another interesting concept when we come to doing that. There we go, that's pushed up; I'm going to pause here for a minute, and when we come back we'll revisit the two files we've created — as yet we've not reapplied them — and then do our final test to make sure we get our services talking inside Kubernetes.
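The rebuild-and-push is just the same two commands as before, now run from the platform service project — `<dockerhubid>` again being your own ID:

```shell
docker build -t <dockerhubid>/platformservice .
docker push <dockerhubid>/platformservice
```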
them. The last thing we'll check: kubectl get services — here we have the default kubernetes ClusterIP, which is there out of the box, and the NodePort we created a few sections ago. So the first thing I want to do is apply our platforms deployment again. A quick directory listing to make sure we can see the file, then kubectl apply -f platforms-depl.yaml. You can see a couple of things here: it says the deployment is unchanged — fair enough, because the Deployment section of the file hasn't changed at all — but it has created our new ClusterIP service, which all sounds great, and kubectl get services confirms platforms-clusterip-srv has been created. But coming back over to our Kubernetes session: our pod and platform service are running, yet something hasn't happened by re-running that deployment file — Kubernetes hasn't gone up to Docker Hub and pulled down the latest version of our image, the one with the new configuration pointing to our command service. If we look at the pod, it's not restarted; it's done nothing whatsoever. So we need to force Kubernetes to refresh the image. kubectl get deployments to get the list again (I spelt it wrong the first time — there we go): there's our platforms deployment, and we want to restart it. As with all of these command lines, this is on the cheat sheet: kubectl rollout restart deployment, and then we just specify the name of the deployment. It reports "deployment.apps/platforms-depl restarted", and back in the Kubernetes session we should see things happening. Alternatively, kubectl get pods shows the existing container running alongside a new container that's now creating — the new container with our new image and our new config — and once the new one has started, the old one will terminate, leaving us with just one running pod, which is good. Over in Kubernetes you can see one pod has died, and in the platform service logs the command service endpoint is printed — it's picked up the endpoint we told it to pick up. That's what it will reach out to when we hit the controller — but that won't work at the moment, because we don't actually have our command service deployed. Let's do that now; it's relatively straightforward. Back over in K8S, clear the screen, kubectl get deployments — we still only have one deployment — and now we apply the commands deployment: kubectl apply -f commands-depl.yaml. This time there was no existing commands deployment at all, so it's gone off and created one, pulled down the image, and created its ClusterIP service as well — fantastic. kubectl get services shows our two ClusterIP services, one for commands and one for platforms, which is good; kubectl get deployments shows both up and running; and kubectl get pods shows two healthy pods, a commands pod and a platforms pod.
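As a recap, here is the whole kubectl sequence from this section in order — file and deployment names assumed to match the ones used in this course:

```shell
# Re-apply the platforms file: the Deployment is unchanged,
# but the new ClusterIP service gets created.
kubectl apply -f platforms-depl.yaml

# apply alone won't re-pull the image — force a rolling restart so the
# pods come back with the latest image from Docker Hub.
kubectl rollout restart deployment platforms-depl

# Deploy the command service (Deployment + ClusterIP in one file).
kubectl apply -f commands-depl.yaml

# Check everything came up.
kubectl get deployments
kubectl get services
kubectl get pods
```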
So that's all looking quite promising — fantastic. Over in our Kubernetes instance — let's make this bigger — you can see a pod for platforms, a pod for commands, a platform service and a command service, and clicking into each, it all looks like it's running OK. That's looking pretty exciting. The final step is to test that it's working: we're going to make a POST into our platform service and see if it talks to our command service — fingers crossed it's all wired up correctly. To test, we'll use Insomnia, working purely within our Kubernetes environment. All we have so far is the ability to get platforms from our platform service; let's run that to make sure it's still working — it is, a good start. Next we create a platform in our service running in Kubernetes: come down to the local dev folder, duplicate the create-platform call, and move it into the Kubernetes folder. At the moment we are using a NodePort, and we're reliant on that port number in order for this to work, so copy the NodePort endpoint and paste it in — hopefully you remember this from when we were looking at NodePorts. We've only got three platforms in our platform service, so hopefully we'll create another one. What we expect to see is, number one, that a platform gets created in our platform service — fantastic — but more importantly, in the logs of our command service, that it received the POST notification. If it did, then we know everything is wired up correctly, and we'll have broken the back of a lot of the Kubernetes networking stuff, which I appreciate is quite voluminous and, frankly, quite boring. Fingers crossed — click send. That looked pretty good; it came back quickly, so it looks like we've created a platform, and re-running the GET confirms we have. The last bit to test: come over to our Kubernetes session and take a quick look at the platform service logs first — we've done a couple of calls, one getting a platform, and a bit further up, "sync POST to command service was OK" — so it called the command service and didn't get an error, which is fantastic. Then in our command service logs you can see the inbound POST from our platform service. So it looks like everything is working very well. The only thing you will notice is this warning around HTTPS redirection — nothing really to worry about, but if we come into either of our services (let's pick the command service) and into its Startup file, you'll see down in Configure the UseHttpsRedirection call that's causing it. One of the things I decided to leave out of this course — I think I've said this already — is introducing HTTPS into our internal Kubernetes cluster; we're not going to use it. So, just to get rid of those warnings, I might comment that line out — and of course, as soon as we comment it out, it means we've got to rebuild and redeploy our images back up to Docker Hub. I'll leave you to do that in your own time; I'm not going to do it in the video, as this is relatively benign at this point. I might do it between now and the next video and push my images back up to Docker Hub off-camera.
If you choose not to do it, it's not a big issue. So I'm going to comment UseHttpsRedirection out of our command service's Startup, and out of our platform service's as well, and between now and the next video I'll build these images and push them back up — and of course you'd have to do the rollout-restart thing to get Kubernetes to pull down the latest version. I'll leave you to do that at your leisure; if you choose not to, it's no big deal — it just means you'll still get those HTTPS redirection warnings. Fantastic — that's a big milestone we've got over. What we're moving on to next is moving away from the NodePort towards a more productionized mechanism for getting into our services, and that's when we start talking about the nginx ingress controller. All right, so let's just do a quick architecture review to see how we're doing with our Kubernetes build — let me just find the right slide. We've basically got to the stage where we have a NodePort, our two services running in pods, and our ClusterIP services wired up so they can talk to each other, which is fantastic. Now we want to stand up what's called an ingress-nginx controller, and as a result of standing that up we'll also get a load balancer service that allows us to get external traffic in. Now, this goes into more detail about how Kubernetes works; we'll step through it, but I'm not going to dwell on it too much, because we're actually going to rely somewhat on an external YAML file that sets all this up for us — really the only thing we'll end up writing ourselves is the routing config for this gateway, this ingress-nginx controller. The rest of it is taken care of by another GitHub project.
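The change itself is just one commented line in Configure, in both services — a sketch of a typical ASP.NET Core Configure method; your exact middleware order may differ slightly:

```csharp
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    // HTTPS redirection is disabled because this course doesn't set up
    // HTTPS inside the cluster; leaving it on only produces warnings.
    // app.UseHttpsRedirection();

    app.UseRouting();
    app.UseAuthorization();
    app.UseEndpoints(endpoints => endpoints.MapControllers());
}
```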
another github project actually which we'll take a look at now so actually that's probably a good segue into that so let me just come over to google and let's pull it onto my screen and if you search for ingress in genex be careful because as usual there's a few um very similar repositories but this is the one we want here so it's github kubernetes forward slash ingress engine x now don't worry we don't need to really do too much with this if you scroll down to the getting started document this is really what we want and this is telling you how we can install this in ingress engine x controller and we have all these different providers so if you if you were running on google cloud or something then you would pick this one or aws you would go with this one you're using docker desktop so this is all we need okay so we're going to copy this here now you can see the cube ctl apply f command that you've been very been very comfortable using for you know the last few sections of the video at this time we're going to specify a url to where the the yaml file exists so what we might actually do is let's just copy the url i can do that properly copy it and i'll put it into another browser window and you can see and you can probably suggest you take a quick look at the contents of this but you can see it's pretty big it's pretty long i'm glad i don't have to type out and i'm sure you're pretty glad you don't have to do it either but it's still worth going through and having a quick look to make sure you have a sort of understanding of what it's trying to do but by and large yes it sets up this this basically an api gateway that we can you know route into so enough of that i'm going to copy the whole thing now the whole cube ctl apply command in fact we can probably just take this little thing here that's good enough and we'll come back over to our command prompt let me clear this down i'll paste this in and there you go it's kind of gone through and basically applied 
everything. So let's take a look and see if we can understand what it's done. If I bring over my Kubernetes session and go back to the list of running pods, you can actually start to see there's an ingress nginx pod here - a couple of pods, actually. These will churn for a bit until the deployment has settled down; some of them will create, recreate, and then destroy themselves. Let's come back to our command line, do a cls, and run kubectl get deployments. Now, something kind of strange: we can only see the deployments we created ourselves - our commands deployment and our platforms deployment. What if we do get pods, any joy there? Not really: we see the two pods we created, our commands pod and our platforms pod. So where's all this ingress nginx stuff? Again, this is a slightly more detailed Kubernetes concept: Kubernetes works on the concept of namespaces. In the same way we've got namespaces for our classes and so on, Kubernetes works along similar lines. We're working within the default namespace for our project - I'm not going to cover namespaces in much more detail - but the stuff we've created for ingress nginx is in a completely different namespace to our application. Going back to the cheat sheet, you can list all the namespaces with kubectl get namespace, and you can see we have default, which is where we've been working, the kube- namespaces, which I'm not really interested in, and then this new ingress-nginx namespace that was created as part of what we've just run. We can look at the pods in that namespace with kubectl get pods - again, this is on the cheat sheet - using --namespace= followed by the name of the namespace, which in this case is ingress-nginx. I'll do that, but let me make sure I
put two dashes in there. There we go - it's two dashes, not one; the cheat sheet was correct, I was just misreading it. You can see we have basically one running pod, which is now our ingress nginx controller, and these two other pods have completed - they're no longer running, which is what I was talking about with pods that create themselves and then destroy themselves. That all looks relatively good. Coming back to the Kubernetes UI, you can see a couple of ghosted pods here; I might just remove them - we don't need them, and we've got enough going on on screen without the extra clutter. Okay, cool. So we've got a pod for the ingress nginx controller, and the controller itself, and if you go and have a look it's basically running nginx, which is similar to Envoy - both get used a lot as gateway-type setups. Fantastic, that looks good, but at this point all we've done is set the gateway up. We might also take a quick look at the networking: kubectl get services shows the services we've set up in our namespace, but we can get the services for the ingress-nginx namespace too - I dread to think what it's going to show us, but let's take a quick look. You can see a LoadBalancer was created, which is another service type, and there's another ClusterIP service being used there as well. All right, cool. We don't need to worry too much about that; it's all been set up for us automatically. What we do need to come on to now is creating the routing file that this controller will use to route to our two services, so we'll do that next. All right, back over in our K8S project - make sure you have your K8S project selected. I'm just going to clear the terminal screen - actually I'll get rid of it
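The namespace checks above boil down to a handful of kubectl commands (shown here as a sketch; the exact pod names and counts will differ on your cluster):

```shell
# List all namespaces - expect default, the kube-* ones, and ingress-nginx
kubectl get namespaces

# Pods in the default namespace: only our own platforms and commands pods
kubectl get pods

# Pods created by the ingress-nginx install - note the double dash
kubectl get pods --namespace=ingress-nginx

# Services in that namespace: a LoadBalancer plus a ClusterIP
kubectl get services --namespace=ingress-nginx
```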
for the moment, we don't need it just yet, and I'll close down these other two files. We're going to create yet another YAML file in here: right-click, New File, and I'm going to call it ingress-srv.yaml. Again, this is the routing file that the ingress nginx controller will use to determine how to route to our services, so it's really quite important. We're going to type this all out, and I'm going to go slowly - I don't want to make too many mistakes, otherwise it gets very confusing and difficult to debug. We start by specifying an apiVersion, and this time it's slightly different: networking.k8s.io/v1. Let me just double-check that - networking.k8s.io/v1 - perfect. Then a kind, and this time it's Ingress. I don't want all that autocomplete stuff, so I'll just type it in - making sure I spell Ingress correctly - as opposed to Deployment or whatever. Fantastic. Then we specify some metadata: a name, which I'm going to call ingress-srv, and then some annotations. There are two, and I'm actually going to copy these in from my notes, because they're very subject to typographical errors. They're just a couple of helpers we're telling our controller to use: one relates to regular expressions, and the other sets the ingress class to nginx, which is the gateway we're using. Again, the code is on GitHub, so if you don't fancy typing these out - and I don't want to type them on video in case I make a mistake and then have a whole nightmare of fault-finding - you can go to GitHub and copy them in like I did. Fantastic. So we're going to come
back under - well, under the main root indentation level, so underneath metadata, kind, and apiVersion - and specify the main spec, and then our routing rules, effectively. So indent, rules, and indent again: this is an array, and the first thing we specify is our host. Now, like a lot of these gateways, it requires a host name: usually they don't like localhost, and usually they don't like IP addresses. I did a whole video on the Envoy controller that had the same kind of constraint. So you have to specify a proper domain name, even though we really just want to route to our loopback or localhost address - I know they're slightly different, but basically our local machine. I'm just going to call it acme.com, and we'll have to update our hosts file so that acme.com actually resolves to our local address - we'll come back and do that in a bit, so bear with me for a moment. You can put anything you like in here; just remember what you put. If you're following along, I suggest for simplicity you use acme.com like I have. Under host we specify that these are HTTP-based routing rules, so http goes directly under host, and then we specify an array of paths. Now, this config doesn't get much bigger - maybe another ten lines - so don't freak out too much, but it is quite a deep config file, and you have to be - I keep saying it - really careful about your indentation, because it can catch you out. So, specifying paths - and I think I've actually made a mistake here myself; it's paths, like that, you can see it was complaining. Paths is indented under http, and under that we start
to specify the individual paths that we want, so singular path. The first thing we want to route to is our platform service. The way we do that is by specifying the route on top of the base URL - anything on top of acme.com, which in the case of platforms would be /api/platforms. Then we specify a couple of attributes: pathType as Prefix, and then something called the backend, and you can probably guess what that's getting at - which service are we trying to attach to here. Under backend we specify the service, and we specify its name, and the name of the service is - if we come back over to our platforms deployment file - basically the name of our ClusterIP service. So we copy that: in the platforms deployment file, the ClusterIP service name is what we put in here, and that's where we want to direct this backend to. And then port, number: 80. Okay, so that puts in a single route to our platform service at this path on port 80.
All right, that all looks correct, so I'm going to create one more path, this time for our commands service. I'm actually going to type this one out, because sometimes cutting and pasting can be more trouble than it's worth in a YAML file like this - and it'll get us used to doing this kind of thing anyway. Make sure you're specifying your next path as part of this paths array, under the minus sign here. This time the route is /api/c - and this is the whole reason I put that "c" identifier in the route to our command service: it needs to be different from the platforms route, otherwise we'd run into issues. Again, pathType is Prefix, and we specify the backend in a very similar way: tab in, specify the service, give it the name - this time it's the commands ClusterIP service, so we come back over to our commands deployment file, come down, and grab the name of its ClusterIP service. You can see this stuff is quite horrible - and to be honest, the video I did on Envoy, I didn't really enjoy it, and I'm not particularly enjoying this either. Configuring API gateways is not my favorite thing; it's so subject to error. But it's a necessary evil in this case. And again, we're specifying port 80.
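Putting the pieces above together, the finished routing file looks roughly like this - a sketch, since the exact annotation set and the ClusterIP service names depend on what you named them in your own deployment files:

```yaml
# ingress-srv.yaml - routing rules consumed by the ingress-nginx controller
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-srv
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
    - host: acme.com            # must match the entry added to your hosts file
      http:
        paths:
          - path: /api/platforms
            pathType: Prefix
            backend:
              service:
                name: platforms-clusterip-srv   # name from the platforms deployment file
                port:
                  number: 80
          - path: /api/c
            pathType: Prefix
            backend:
              service:
                name: commands-clusterip-srv    # name from the commands deployment file
                port:
                  number: 80
```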
So I think that looks okay - there are no warnings in the file itself, by the looks of it - so that's all good, and I think we're ready to go. I'm going to save that off, and then all we need to do is run the kubectl apply command, which we'll do next. But before that, I'm going to go into my hosts file and make sure we have an entry for acme.com, which I'll do in the next video. All right, so we want to update our hosts file. Obviously your hosts file is in a different location depending on whether you're running Mac, Windows, or Linux. I'm running on Windows, so I'm going to browse to that location; I'll put a link to where you can find your hosts file on the other two operating systems, but you can probably just google that yourself - the result is basically the same. I'm running VS Code, so I'm going to open my hosts file from within my K8S project: File, Open. Now, on Windows at least, you have to open it as an administrator, otherwise it won't let you save the file - or you can save it as administrator, which is what I'm going to do here. So, open the file first. The location on Windows is on the C drive: Windows, System32, drivers, etc. There are what look like a few different hosts files, but it's the first one, the one that just says hosts - I don't believe there's an extension. We open that up and can see there are a few entries in here already, including some stuff put in by Docker Desktop to assist with networking. What I want to do is put in an entry for acme.com - or whatever you put in your ingress service file - and direct it to 127.0.0.1. So I'll put that in here: acme.com. The effect this has is that whenever we type acme.com into our web browser, it's actually going to
look here first, before it goes and does any DNS lookup elsewhere - so we're kind of hijacking acme.com. Just remember, of course, that you've done this in your hosts file, and unless there's a site called acme.com that you visit frequently, maybe pick something else. I'm going to save that, and I'll probably get the warning - here we go: failed to save, insufficient permissions. So I'll Retry as Admin; I just got a popup on my other screen asking whether I want to do that, and I said yes. Cool, so that should be good, but because I'm paranoid I'm going to open it up again just to make sure the save took - I don't want to get all the way to deploying our changes and then find out something went wrong. Windows, System32, drivers, etc - I'll open my hosts file again, and yes, the change took, so that's fine. All right, cool. Then all we need to do, as we've done many times before, is apply this ingress-srv file using very familiar commands. Make sure you're in your K8S project folder, do a listing to check you can see your ingress-srv.yaml file, and again it's kubectl apply -f. There we go: we've got "ingress.networking.k8s.io created", which looks good. So, the moment of truth: we want to test this now, in Insomnia. Coming back to our architecture diagram, let me show where we've hopefully got to - we want to try this piece here now: external traffic coming in through this nginx load balancer, hitting the ingress nginx controller, and then, depending on the route you put in, being routed off to one of our services. Fingers crossed - this is a big moment for us. Now, since we're actually running in the K8S environment,
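For reference, the hosts-file change described above is a single line (the Windows path is the one shown; on Linux and macOS the equivalent file is /etc/hosts):

```
# C:\Windows\System32\drivers\etc\hosts   (or /etc/hosts on Linux/macOS)
# Resolve acme.com to the local machine so the ingress host rule matches
127.0.0.1    acme.com
```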
I might want to organize things differently here. Because these existing requests use the NodePort, I want to differentiate them, so I'll rename this folder "platform service node port" so I know those are the NodePort requests - which you may want to keep around for fault-finding purposes. I'll close that down, move these requests in there - that's better - and create another folder, New Folder, called "platform service nginx", which is really our gateway proper. I'm going to duplicate the "get all platforms" request and move it up here, and instead of routing to localhost:30407, we just put in acme.com - making sure we spell it correctly - and it should route off to our platform service and pull back all our platforms. So, drumroll... there we go, fantastic! We've now successfully set up an ingress nginx controller. We can't quite test our command service yet, because the only controller action we have is that stupid post command - well, I suppose we could test that, but I'm not going to do it here; we'll check it when we put in some GET actions that we can route through to our command service as well. But I'm fairly confident at this stage - famous last words - that this is looking pretty good. Fantastic. So that's probably most of our networking-type work done. We'll do a few bits of repeated setup of cluster services and so on, but by and large, in terms of new concepts around Kubernetes networking, we're done - which is a good place to be. That was a lot of the more interesting stuff; in the next section we're going to set up our SQL Server for our platform service, along with some persistent storage, get that all hooked up, and beyond that we'll move
into using RabbitMQ and all that really interesting, exciting stuff. So yes, we're done in this section; the next section is setting up SQL Server. We're going to create our SQL Server now, so let's check where that fits into our Kubernetes architecture. We've got to the stage where we've just created our ingress nginx controller in the last video, and now we're going to set up a dedicated SQL Server just for our platform service, and along with that we'll create a ClusterIP service. Something new that we've not encountered yet is the persistent volume claim. All this is, is something we'll put into the deployment file for our SQL Server that, as the name suggests, stakes a claim to some physical storage - in this case on my machine. Staying on that theme for a moment, if we do a kubectl get storageclass - spelled correctly - we have one storage class. So what's a storage class? There are really three basic concepts you need to understand when you're talking about storage. The first, and really as a developer the main one, is the persistent volume claim, which we're going to create. Then you have something called a persistent volume, and then this thing called the storage class. Thankfully, all we need to create is the persistent volume claim; that will use our system defaults and ultimately use this storage class, which in our case, for Docker Desktop, is effectively just our local file system. If you're using a more enterprise deployment of Kubernetes - in Google Cloud, say - you'd have to set up all of these things independently, and that's when you start to move into more of an administration-type role rather than a developer role. As a developer, all I'm really interested in is making a claim to some storage; beyond that, I don't really care, to be honest. So I'll leave it at that, and we're going
to move on and create our persistent volume claim in our K8S project. So in the K8S project, right-click anywhere, New File, and I'm going to call it local-pvc.yaml - "local" as it's going to be on our local disk, PVC for persistent volume claim. Let me get rid of the terminal for the moment. The apiVersion is v1 - make sure the colon is in the right place - and the kind this time is a new kind: PersistentVolumeClaim. I'll let it complete "Persistent", but I don't want all the default scaffolding - I don't like too much auto-generated stuff, because it gets a bit lost on everybody - so let me take that out. So: apiVersion v1, kind PersistentVolumeClaim. We'll have metadata with a name, and I'm going to name it mssql-claim - Microsoft SQL Server claim; if I can spell "claim" correctly, that would be useful. Then the spec, where we specify how this persistent volume claim is made up. The first thing is something called accessModes - it's an array - and I'm going to specify ReadWriteMany: we want it to read, and we want it to write. The next thing is our resources, and this is really where we stake our claim to the storage resources we want to use: resources, requests, and then storage - my typing is really starting to suffer now - and I just want 200 megabytes; I don't want a huge disk size, and that should be fine for us. Now, this looks rather sparse, and indeed it is - you can obviously specify more in here - but if we leave it at that, Kubernetes is smart enough to use the defaults, which will give us access to our local file system. So save that off, and bring up your command
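The finished claim file described above is only a few lines - a sketch, where 200Mi and the mssql-claim name are just the values chosen in this walkthrough:

```yaml
# local-pvc.yaml - claim 200Mi of storage from the default storage class
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
```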
line. We'll clear it and do a listing just to make sure we can see the local-pvc file, and as usual it's kubectl apply -f and the name of the file. You can see our persistent volume claim has been created, and if we do a kubectl get pvc you'll see the persistent volume claim's status is Bound: we've got our access mode, which is read/write, our capacity, and our storage class, which is hostpath - and if you remember from when we ran get storageclass, hostpath is our default storage class. So that's really it for the persistent volume claim. The next thing we need to set up, just before we go on and create our SQL Server, is this: SQL Server, when we set it up, will require a system administrator password. Now, a lot of the config in this tutorial is just stored in config files, which is not best practice, but in the interest of time that's what I've done. Here, though, just to show you how you can use something called Secrets, I'm going to create the SQL Server password as a Secret in Kubernetes and pull it in via the config file. That removes the problem of storing sensitive information in config files, which you generally don't want to do. We'll do that once, and then move on to actually setting up SQL Server. All right, let's clear the screen. I'm going to issue this command directly at the command line - I'm not going to create a file, because it would defeat the whole purpose of having a secret if you put it into a file; you'd just have the same circular problem of bad security. So we'll issue it at the command line and not store it anywhere. It's a bit like user secrets - if you've used user secrets in .NET
development, it's a very similar concept. So at the command line we use kubectl: we create a secret of type generic, we call it mssql - we'll make use of that mssql name in a bit, when we come to creating our SQL Server - then --from-literal, and then we specify a key/value pair. The key in this case is SA_PASSWORD, all capitals, and the value is your password - you can use whatever you like; I'm going to use this one. Let me just check everything is spelled correctly: kubectl create secret generic mssql --from-literal, SA_PASSWORD... that looks good. Okay, our secret is created, fantastic. Just make sure you remember two things: the name, mssql, and the key that we defined, SA_PASSWORD. We'll use both when we create the deployment for our SQL Server, along with our persistent volume claim, and then stand up a new instance of SQL Server. All right, here's the main event of this section: a deployment for our SQL Server. I'm going to get rid of the terminal, because believe me, we're going to need the room - this is by a mile the longest deployment file we're going to write. In our K8S project again, right-click, New File, and I'm going to call it mssql-plat-depl.yaml - "plat" as in our platform service, since this server is dedicated to it, and "depl" for deployment. We'll specify an apiVersion of apps/v1 - and I need to be really careful what I type here, because there's a lot going on, so I'll double-check every line. The kind, which we've come across before, is Deployment - I don't want the autocomplete scaffolding, so I'm just going to type
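The secret-creation command described above fits on one line. The password shown here is a placeholder - substitute your own:

```shell
# Create a generic secret named "mssql" holding the SA password as a
# key/value pair. Nothing is written to a file, so the password stays
# out of source control.
kubectl create secret generic mssql --from-literal=SA_PASSWORD="pa55w0rd!"
```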
it out: Deployment. In the metadata we supply a name, which is going to be mssql - that's the name of our deployment, so when you do a kubectl get deployments, this is the name that will come up, and if you wanted to destroy this deployment, this is the name you'd refer to. Nothing we've not seen before. Again, we come on and define the spec: tab in, define the number of replicas - one - and then the usual selector stuff you've seen quite a few times before: selector, matchLabels, app: mssql. My goodness, autocomplete is quite annoying sometimes. Bring your cursor back under the selector and define the template, specifying some metadata. You can probably tell I'm not a huge fan of YAML files; the one thing I do like about them is that you can put comments in, which you typically can't do with JSON - that's the one saving grace. Anyway: template, metadata, labels, app: mssql again - you've seen this selector/template pairing before. New line, bring your cursor underneath the metadata, underneath template - and be really, super careful how you type this in, I can't stress it enough, because one mistake and it's game over. We specify the containers we want - again an array, and in this case we're just creating one - with a name of mssql. Then, under the name, we specify our image, and this is where we specify the Docker image for SQL Server: mcr.microsoft.com/mssql/server, colon, and I'm going to use the 2017 version of SQL Server, via its latest tag. Then I specify some ports, and we use the default SQL Server port, which some of you may know is 1433. Bring your cursor back underneath ports, and
then we specify some environment variables, and this is where we inject our secret, along with a few others. Minus sign, and we give the first one a name of MSSQL_PID, tab in, and a value of Express - basically saying we want the Express edition of Microsoft SQL Server, as opposed to the Developer edition or whatever, and this edition is of course free. We create another environment variable named ACCEPT_EULA with a value of yes, which just accepts the license agreement, as per Microsoft's usual way of doing things. Then we specify one more, and this is where our system administrator, or SA, password comes into play: the name is SA_PASSWORD. Just as a matter of interest, if you were running SQL Server in Docker without Kubernetes - and I've done a few videos on this - you'd have to pass in exactly the same environment variables; if you google "sql server docker" you'll see a very similar setup required just to run SQL Server as a container. We're doing the same thing here, just within a Kubernetes deployment. Now, we're not going to specify a literal value, because that would negate the whole point of setting up a secret. Instead we use something called valueFrom, and where are we getting the value from? A secretKeyRef. Take a new line, tab in again: the name is mssql - if I bring up my command line, it should still be there; it's this thing here, the name we gave the secret - and then the other piece is the key, SA_PASSWORD. So that's what's being pulled in, and the key we have to specify
here is SA_PASSWORD. Okay, so the name of the environment variable is SA_PASSWORD, and the name of the key we gave the secret is also SA_PASSWORD - let me just highlight that they're the same. Cool. That's it for our environment variables. We take a new line and bring our cursor underneath env, and here we specify volumeMounts. We give it the mountPath - and note this image is SQL Server running on Linux, so the mount path is a Linux-style path - which tells us where the data for our database will reside: /var/opt/mssql/data. And we give it a name of mssqldb - db, d-b, b for bravo. All right, almost there - well, almost there for this part; we've still got the networking stuff to go. We bring our cursor out underneath containers - Visual Studio Code is quite nice here, it gives you the indentation guide lines to help you not go cross-eyed - and I'm just noticing that under volumeMounts I need to bring this back, and this back as well; that's better, volumeMounts sits underneath env, which is correct. Yes, this stuff is pretty painful, I won't kid you; I don't particularly love it. Okay, so down here, mssqldb is the name of this volume, and this is really where we set up our persistent volume claim: persistentVolumeClaim, claimName. If I come back over to our persistent volume claim file, it's this name here that we want as the claimName, so we come back and put that in. You'll notice this pattern when doing these deployments: you reference other objects that you've created elsewhere, so you just have to make sure that
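Assembled, the deployment half of the file described above looks roughly like this - a sketch, where the image tag and the secret and claim names must match what you created earlier:

```yaml
# mssql-plat-depl.yaml (deployment portion)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      containers:
        - name: mssql
          image: mcr.microsoft.com/mssql/server:2017-latest
          ports:
            - containerPort: 1433        # default SQL Server port
          env:
            - name: MSSQL_PID
              value: "Express"           # free Express edition
            - name: ACCEPT_EULA
              value: "Y"                 # accept the license agreement
            - name: SA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mssql            # secret created via kubectl create secret
                  key: SA_PASSWORD
          volumeMounts:
            - mountPath: /var/opt/mssql/data
              name: mssqldb
      volumes:
        - name: mssqldb
          persistentVolumeClaim:
            claimName: mssql-claim       # name from local-pvc.yaml
```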
you're referencing the right thing. I think that all looks correct - obviously when we run it we'll find out whether it is or not, but I'm checking it against my notes. It's done some indentation I've not used in my notes, but I think it will be okay; we'll find out if not. So that's it for defining our SQL Server, you'll be very pleased to hear, but we also want to create our ClusterIP service so it can talk to everything else within the cluster. Down here I'll take a new line, and as I don't particularly want to do much more typing, I'm going to copy the ClusterIP spec from one of our other deployment files - the commands deployment, where we specified its ClusterIP service - and change the relevant settings; that's preferable to typing it all out again. So: apiVersion v1, kind Service; the name we'll change to mssql-clusterip-srv. In the spec, the type is ClusterIP; in the selector - and this is where we select the thing we want to network to, which is the mssql app - we paste mssql in, in both places: take this out, paste that in, take this out, paste that in. The protocol is TCP, but the port and targetPort are both going to be 1433, which again is the default SQL Server port. All right, that's good. Now, we could leave it at that, but I actually want to be able to access the SQL Server from my desktop - to drill into it directly. We can't do that through our ingress nginx controller; we could create a NodePort, which I think would probably work, but there's another service type called the LoadBalancer, which we've kind of already seen as part of the
We've kind of already sort of seen it as part of the ingress-nginx stuff, so we're going to create a LoadBalancer service, which does similar stuff to a NodePort: it will allow us to directly access SQL Server. So let's go through and do that now. I think I can probably copy this ClusterIP spec and just change the relevant bits and pieces as we need to. So again, apiVersion v1, it is a Service, and we'll give it a name of mssql-loadbalancer. Now, I don't have this on our architecture diagram, so you might ask, "where is this on the architecture diagram?" It's not on there; I deliberately left it off because I didn't want the diagram to become too confusing. And when we create a RabbitMQ server, we're going to do something very similar with it as well: we'll create a ClusterIP and also a LoadBalancer that we can use to access it, but again, I didn't want to put that on our diagram because it would just get a bit too confusing. Now, you don't have to do this, it's somewhat optional, but I think it's probably going to help us out, to be honest with you. Anyway, back to this: in our spec we're specifying that it's of type LoadBalancer, the selector is the same, mssql, and then ports. Note that ports is an array; if we were specifying more than one port we would need to give each a name, but we don't in this case, so we'll take the name out. Protocol is TCP, and again we'll leave the ports exactly the same, 1433; that should work for us, I think. All right, so let's just do a bit of a review: this section is the deployment of our SQL Server, hopefully we've typed it in correctly, and we'll find out in a minute whether or not it's correct; here's a ClusterIP, which you've seen before; and here's this new, well, semi-new concept of a LoadBalancer. So I'm going to pause here for a minute, and then we're going to come back, apply this deployment, and see if we can access our SQL Server. All right, the moment of truth then: make sure you save your file, and we'll bring up our command line again.
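The LoadBalancer spec we end up with is almost identical to the ClusterIP one; here's a sketch, with the service name being my reconstruction of what's said above, so name yours however you like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mssql-loadbalancer
spec:
  type: LoadBalancer   # exposes SQL Server outside the cluster, e.g. to your desktop
  selector:
    app: mssql         # same selector as the ClusterIP service
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
```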
We'll clear the screen, and I'm going to move this up a little bit; do a directory listing to make sure you can see your mssql platform deployment file. And again, you should be getting used to this now: kubectl apply -f, and I'm just going to copy the file name in. This is the moment of truth; if we made a mistake, we'll get an error here. Now, that all looks good, so the YAML we've created is at least syntactically correct, and you can see here, again you should be getting used to this now, a deployment's been created, a ClusterIP service has been created, and we also have a LoadBalancer created. So let's do a quick clear of the screen and run kubectl get services, and we should see our SQL Server load balancer; it's available to us on localhost, and we should be able to access our SQL Server using one of these ports here. And then we have our ClusterIP service, internal to our cluster, available on port 1433, fantastic. Then if we do a kubectl get pods, we should see that our SQL Server pod is still creating; it could well be that I don't have the SQL Server image locally and it's getting pulled down from Docker Hub, so we'll just have to keep an eye on that and hopefully we don't get any errors and it runs up correctly. Now, I've found from my own experience that if anything is wrong with the persistent volume claim, or the container can't actually mount to the mount point, that's where I've had errors before, and it's caused me a few headaches, but I'm hoping this gets created correctly. So maybe if I bring up the Kubernetes user interface, we can have a quick look and see what's going on here. So that's the pod running; I'm just waiting to see whether we have SQL Server running. Not as yet, not as yet; it's still at ContainerCreating, so I think it must be doing some stuff in the background, and we're just going to have to wait for this, unfortunately. Let's have a look at that now. Okay, cool, so we have a SQL Server pod up and running, that's looking pretty good, and I'm just looking on my other screen here, I've got my Kubernetes user interface up and running, and I can see here that we have SQL Server up and running; that all looks pretty good to me. All right, so I want to do a quick test: I want to fire up SQL Server Management Studio and see if I can connect in. I'm just going to pull that onto the screen here, I'll just resize it, and I'll connect, and what I want to select is localhost,1433, and we're going to put in the sa password that we specified; let me make sure I type it correctly. And that's looking pretty good, actually: we've connected in on localhost, port 1433, and we don't have any databases on there yet, which is expected; it's looking pretty good. So what I might do, just to test that the persistent volume claim is working, although the fact that it's up and running suggests to me it's working fine, is quickly create a test database; it doesn't matter what it is, we'll just create that. Okay, I'm going to kill that, and then what I'm going to do is come back over to Kubernetes here, find our pod, find our SQL Server container, this one here, and as I've shown you before, I'm going to kill it, just delete it, remove it. It's gone. Now, again, there it goes, pinging straight back, because as I've said before, with Kubernetes, we've told it via our deployment that we want one running SQL Server, so it's going to do everything in its power to make sure that's happening. Looks like we've killed it a few times; I don't think I can kill it any more. And what I'm wanting to see is that when that SQL Server bounces back up, it's using the same persistent volume claim again. Right, let me open this up, and there we go: our test database is still there. So you saw our container being destroyed and rebooted and all that kind of stuff, and we still have the database that we created. So we can delete that test database now.
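Condensed, the apply-and-verify loop above looks something like this at the command line; the manifest file name and pod name are placeholders for whatever yours are called:

```shell
kubectl apply -f mssql-plat-depl.yaml  # creates the deployment, ClusterIP and LoadBalancer
kubectl get services                   # the LoadBalancer exposes 1433 on localhost
kubectl get pods                       # wait for the mssql pod to reach Running
kubectl delete pod <mssql-pod-name>    # kill it; the deployment spins a replacement back up
```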
We don't actually need that test database any more; it was just a test. Fantastic, so that's good: we've got our SQL Server up and running, we can connect into it externally using this LoadBalancer concept, and it should be networked internally. So now what we want to do is update our platform service to actually use SQL Server; we'll do that in the next section. All right, so with our SQL Server now up and running and accessible, which is cool, we're going to turn our attention to our platform service, and we're going to set it up so that it connects to that SQL Server in production, in Kubernetes. Now, as I mentioned before, when I run this service at the command line with dotnet run, I still want to use the in-memory database, so we're going to have to pick between production and development and then decide which database, or database provider, we're going to use. Now, some of you might think that's a bit convoluted; there are various reasons why I made that decision. It's up to you: if you actually want to use SQL Server in development as well, you can absolutely do that, and the way we've set this instance of SQL Server up, as you saw, it's accessible locally as well. But I'm choosing not to do that, and I'll leave it over to you if you want to go that way, though I suggest if you're following along you go with what I'm doing here. Anyway, with that being said, the first thing we want to do is create a connection string for production. So we're going to come over to our appsettings.Production.json file, and we're going to add in a connection string that will allow us to connect to our SQL Server running in Kubernetes. So, comma at the end of this attribute here, and then we're going to specify another section called ConnectionStrings, okay, and then take a new line. You could specify multiple connection strings here; I'm just going to specify the one that we want, and I'm going to call it PlatformsConn. The name doesn't really matter, to be honest with you, as long as it makes sense. So, PlatformsConn, and then we specify the connection string itself. Now, this is a connection string for SQL Server; if you're using some other database, say Postgres, it would be a little bit different. Another thing I'll say here: be really, really, really careful that you type everything in absolutely correctly, because this is the type of thing that will cause all sorts of issues if you don't get it absolutely precisely right; I can't stress that enough. Anyway, I'll do a Ctrl+D just to give us a bit more room, and I'll start typing out my connection string. So, Server equals, what? And that's the first stumbling block: what is the server we're connecting to? Well, if we come back to our K8S project, okay, there we go, I'm getting confused with too many projects open; over in our SQL Server deployment file, here's the deployment of our service, but here is our ClusterIP, and again, just like we've done before, we want the name of our ClusterIP service; that is basically what we are connecting into. So come back over here and we'll specify that as our server name, then a comma and the port, 1433, then a semicolon; so this is basically our server configuration. We're then going to specify the Initial Catalog, and that's basically the name of our database; I'm just going to call that platformsdb, semicolon again. We're then going to specify the User ID, and I'm going to make that equal to the sa account. Now, typically you wouldn't use sa in production, in fact you wouldn't ever use it at all; again, just for quickness, I'm deciding to use it. And likewise with Password: you wouldn't typically put a password in here, but again, just for quickness, I am going to do that, while calling out that it's not a great idea. We produced a secret for our SQL Server password and read that in from the deployment file; you could do something very similar here and actually inject environment variables into the service.
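For reference, the appsettings.Production.json ends up looking something like this; the connection string name, server name, and database name follow what's typed in above, and the password is a placeholder for your own:

```json
{
  "ConnectionStrings": {
    "PlatformsConn": "Server=mssql-clusterip-srv,1433;Initial Catalog=platformsdb;User ID=sa;Password=<your-sa-password>;"
  }
}
```

Note the server is the ClusterIP service name, not localhost; this string only resolves from inside the Kubernetes cluster.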
Injecting the password via environment variables would take us away from the core of what we're looking at, though, so I'm just hard-coding it here; just be aware it's probably not a fantastic idea to do so. And I'm just going to put a semicolon at the end of the connection string; I don't think it matters too much, but I'll do it anyway. So that's our production connection string available. The next thing we want to do is come into our Startup class and configure it so that it will work with either the in-memory database or, if it's in production, SQL Server using this connection string; so we'll do that next. So, still within our platform service project, I'm just going to do a Ctrl+B to bring up our directory listing, come over to our Startup class, and Ctrl+B again to bring the sidebar back down. Now, as you'll remember, within our ConfigureServices method we already set up a DbContext that uses the in-memory database, which we want to retain, but we also want to add one in now for SQL Server, and not use them both at the same time, as that would be rather confusing; we want to switch between them depending on what environment we're running in. Now, in order to determine what environment we're running in, I need to inject something called a web host environment into our Startup class up here and get access to the environment that way. So I'm going to do that first: in our Startup class constructor I'm going to inject an IWebHostEnvironment, and I'm just going to call it env. Now, this is available to us; there are a few things, like IConfiguration and IWebHostEnvironment, that are available to be injected into classes at any point in time. We don't have to register them within ConfigureServices, obviously, because that would be a bit of a circular reference; they're just available to be injected, so you don't need to worry about any other setup from that perspective. Then over here we'll just declare, as usual, a private field that we will assign our injected instance to; so again, Ctrl+period to generate the read-only field, and it puts it down here. Now, I'm just going to move these two fields up above our class constructor; don't ask me why, I just think that's where they should sit, that's just my own preference, and you of course don't need to do that. Fantastic, so that gives us access to this env object, which we can now use to determine whether we're running in production or not. So, coming down into ConfigureServices, I'm going to create an if statement making use of our env object, making sure it's _env with the underscore, and I'm going to determine: is it production? If so, we want to set up SQL Server; if not, then we will just use our in-memory database as a bit of a catch-all. All right, fantastic. So all we want to do now, in here, is set up our DbContext to use SQL Server. Now, what I might also do in here is a Console.WriteLine, just so that when things are starting up we can see which provider is being picked up; this will come in very, very useful if we start to hit any problems. So I'm just going to say "Using InMem Db", and likewise I'll copy that up here and say "Using SqlServer Db", cool. And then all we need to do is just set up the DbContext: using our services collection again, we're going to call AddDbContext, and it's going to be exactly the same context class, AppDbContext, fantastic. And likewise we're going to use some options; you can see it looks incredibly similar to what we've already done for the in-memory setup. I'm going to tab that in and say opt, but instead of UseInMemoryDatabase I'm going to use UseSqlServer, and then I'm going to pass in the connection string. If UseSqlServer isn't recognised, you might have to add a namespace by doing Ctrl+period and bringing it in, if you don't have it already. And then, in order to get the connection string, all we have to do in here is make use of our Configuration object and its GetConnectionString method.
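Put together, the environment switch in ConfigureServices looks roughly like this; it's a sketch, assuming the injected IWebHostEnvironment field is called _env and the connection string is named PlatformsConn as above:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    if (_env.IsProduction())
    {
        Console.WriteLine("--> Using SqlServer Db");
        services.AddDbContext<AppDbContext>(opt =>
            opt.UseSqlServer(Configuration.GetConnectionString("PlatformsConn")));
    }
    else
    {
        Console.WriteLine("--> Using InMem Db");
        services.AddDbContext<AppDbContext>(opt =>
            opt.UseInMemoryDatabase("InMem"));
    }
    // ...remaining service registrations unchanged
}
```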
And I'd say this is why you actually wanted to call that config section ConnectionStrings and give it that exact name: that's what allows us to use this GetConnectionString method, and all we need to do is supply the name of the connection string we want to use, so we'll come back over here and copy that in. All right, fantastic, save that off. So if we're in production we'll use SQL Server; if we're not, then we'll use the in-memory database. What I'm going to do, just as a quick test, is run this now, and it shouldn't make any difference: it should still be using the in-memory database, and we shouldn't get any errors. That looks good; again, we're using the in-memory database, we're connecting, we're seeding data, all looks good; it shouldn't really have made any difference at this point in time, which is looking great. So the only other thing that I want to be able to do is apply migrations when our database starts up, if they've not already been run, and I want to do that from within our PrepDb class. Now, we're going to have to actually create some migrations, which we will do at the command line, we have to do that anyway; and then the application of those migrations we can either do at the command line, or we can set up our class to apply them. I'm going to set up our class to do it, which makes things a little bit more complex, but I think it's probably a slightly better approach. So, before we generate any new migrations, what I want to do is come back over to our PrepDb class and get it to apply migrations, but only, only if it's running in production. If you try to apply migrations when something is running in development, and therefore using the in-memory database, it's not going to like that, so we have to again make this kind of determination: when we apply migrations, we're only doing it when we're running in a production environment. So we'll do that next. Okay, let's do a Ctrl+B, go into our Data folder, select PrepDb, and Ctrl+B again to
bring the sidebar back down. So, where I want to actually apply our migrations is where we come to seeding our data; I want to put it just above this section here where we seed data, and I want to be able to determine whether we're in production, in which case we apply migrations, or we don't. So the first thing I want to do is determine "if something", and we don't know what that something is yet, so maybe put a placeholder in there; if that's true, then I want to use our context, access its Database property, and run Migrate, and that will apply our migrations. We've not yet created those, we still need to actually generate the migrations, but assuming we have them, that will run them. So here I want to basically understand whether we are running in production or not. How do we do that? Well, the quick way I'm going to do it is by coming back to our Startup class: when we call this PrepPopulation method, I'm additionally going to pass over, since we have access to this env object here, env.IsProduction(). I'll save that off; it's erroring out, obviously, because that's not what the method signature currently is. So if we come back over here, up to our method signature, which is PrepPopulation, we're going to expect it to take in a boolean; maybe we call it isProd, that's slightly better. If it's true, then we want to percolate that down, if that's the right expression to use, into our SeedData method, so we're going to expect that to also take a bool called isProd; and then finally, up here, along with passing over our DbContext, we're going to pass over our isProd value, and that should hopefully stop everything from erroring out. And then finally, down here, we'll just say if isProd: so if we're in production we'll migrate, otherwise we won't do anything at all. Now, in here I'm going to put a Console.WriteLine, just to let us know that we are applying migrations, so I'll say something like "Attempting to apply migrations...". And we probably should put this in a try-catch block, because this is potentially something that could go not quite right, so we'll do that: we'll catch an exception, and we'll then write out a Console.WriteLine using string interpolation, saying "Could not run migrations", and we'll give the reason, so curly brackets, ex.Message. All right, so I think that's us covered. Now, again, I'm going to save that off and come back to our Startup class; this is still going to get called, but it will be running in development, so let's run it again and make sure we've not broken anything. We shouldn't see "applying migrations" or anything like that; it should all be totally benign. So we come over here, and yeah, we just see "Seeding data" and "Using InMem Db", so that's looking good; hopefully when we do come to deploy into Kubernetes, the production path will kick in and it will apply the migrations, fantastic. However, there is one thing we still need to do, and I actually forgot to do this and had to come back and re-record; I don't know why I forgot it, but I did, and the thing I forgot was to actually generate the migrations. If you've worked with Entity Framework before, you will understand that you have to generate a migration, and that generates a folder within our structure called Migrations, which contains the instructions that will be run against, in this case, SQL Server. I didn't do that, forgot completely about it, and ended up in all sorts of trouble, so we need to do that first, we need to do that now; we're going to do that in the next section. A little bit convoluted, but we'll get it up and running, and then we're ready to push this all up to Docker Hub and redeploy our platform service, and fingers crossed everything should be working. But let's generate our migrations next.
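Before moving on, here's a rough sketch of where PrepDb ends up; the method and parameter names follow what's described above, and the surrounding class shape (a static class called from Startup) is an assumption that may differ slightly from your own code:

```csharp
public static class PrepDb
{
    // Called at startup, passing env.IsProduction() through as isProd
    public static void PrepPopulation(IApplicationBuilder app, bool isProd)
    {
        using (var serviceScope = app.ApplicationServices.CreateScope())
        {
            SeedData(serviceScope.ServiceProvider.GetService<AppDbContext>(), isProd);
        }
    }

    private static void SeedData(AppDbContext context, bool isProd)
    {
        if (isProd)
        {
            Console.WriteLine("--> Attempting to apply migrations...");
            try
            {
                context.Database.Migrate();   // applies any pending migrations
            }
            catch (Exception ex)
            {
                Console.WriteLine($"--> Could not run migrations: {ex.Message}");
            }
        }
        // ...seeding logic as before
    }
}
```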
All right, so we want to generate our migrations now, and the way we'll do that is to come back to our project, do a Ctrl+backtick, and run dotnet ef, don't forget the ef, migrations add, and then you give your migration a name, so we'll call this "initialmigration", and see what we get. The build starts, the build succeeds, and then we get some horrible-looking error. Without going into too much detail, I was expecting that to happen, and the reason it's happening is that Entity Framework Core thinks we're using an in-memory database, and we are in fact using one, and it just doesn't support migrations. So we need to basically trick, if you like, our command line into thinking that we're actually using a SQL Server database, or some database provider that supports migrations. So there are a few things we need to do here, and it's a little bit convoluted, and this is where you might come to the conclusion that you want to use SQL Server in both cases, in both production and development, because that would actually make this a lot simpler: all we'd need to do is change our config file, which we're going to have to do anyway as well. So again, this might be an inflection point where you go, "I don't like what I've done here, I want to use SQL Server everywhere"; that's entirely up to you. But I'm going to keep going with this, because to be honest with you, it's good practice, it's a good bit of learning, to understand what we need to do to fake this a little bit. So all I'm going to do is take out, or comment out, this if statement, so we're not even going to determine whether we're running in production or not; and again, this is only to get our migrations generated, we'll uncomment all of this again, don't worry. So at the moment, with that commented out, we're going to be using SQL Server. And we'll come down here to PrepDb, and we're going to comment this call out too; I don't need this to run, I don't want this to run just yet, so I'm going to save that off as well. Now, the only other thing is that it still thinks we are running in a development environment, so it's going to come to our development settings and look for a connection string, and in fact we've only got one in Production, and that will not get read in at all. So we're going to have to come over here, copy this connection string, and put it into our development settings as well. The only thing, though, is that this server name will not work; it's only for use when we are within the Kubernetes cluster, so we're going to change that to, any ideas? Yes, localhost. So that should be enough; I'm just going to come back to our Startup class and double-check: we've commented this out, we don't want that to run yet, we don't want to make that determination, we just want to use SQL Server. All right, so Ctrl+backtick, the up key, and we'll see what we get now. Fantastic, that all looked good: we're using the SQL Server db because we've forced it to, and what you should see over here is that this Migrations folder has been created for us, and inside here you can see that we actually have our migrations, which are basically just a list of instructions telling SQL Server how to create our tables and all that kind of stuff. So that's cool, we've got that now. So what we can do now is roll back the changes that we, I'm going to use the term, "hacked" in there, and we can attempt to package this up, push it up to Docker Hub, and then run it in Kubernetes, and we'll see what we get. So again, we're going to go back to this setup here: it will use in-memory in development, and it will use SQL Server in production. When it does that, it will come down to PrepPopulation and actually attempt to run these migrations we've now created and migrate our database, and if it's successful, then we will see the creation of a database in SQL Server. So it's getting quite exciting now. I'm going to pause here for a minute, and then we're going to build this image and push it back up to Docker Hub.
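Condensed, the generate-migrations detour above boils down to this at the command line; it assumes the dotnet-ef tool is installed, and the migration name is just an example:

```shell
# 1. Temporarily comment out the environment check in Startup so the SQL Server
#    provider is always used, comment out the PrepDb call, and copy the connection
#    string into development settings with the server changed to localhost. Then:
dotnet ef migrations add initialmigration
# 2. A Migrations folder appears in the project; now revert the temporary changes.
```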
Then we'll redeploy our platform service deployment, and we should see everything come together and working as we're expecting, so we'll do that next. All right, so back over in our project, let's start our command line; we are going to build our platform service again. So just do a directory listing to make sure you can see the Dockerfile and that you're in the right place, then, as usual: docker build -t, your Docker username, the name of the service, and then the build context, which is just where we are, so a period. So, platformservice; and again, I can't say it more times, be really, really careful every step of the way that you spell everything absolutely correctly. One small mistake and things won't work, and as I've said before, I absolutely guarantee you will make mistakes; if you do get problems, it's more than likely going to be some stupid typo somewhere. I've done it plenty of times, so you will not be alone if that happens to you. So we'll just let this build; cool, that looks like it's built okay, and then we're just going to push it up to Docker Hub: take out the build context, take out the -t, and just put in push, and we'll push it up to Docker Hub. And then the proof of the pudding is going to be when we do that rollout restart of our deployment, because as far as Kubernetes is concerned, the actual deployment file for our platform service hasn't changed, we've not changed that; we're just going to do a rollout restart so it pulls down the latest image, and we'll see what we get; we'll find out at that point whether we've made a mistake. All right, so that's pushed up. Now what I'm going to do is just come over to Docker Hub and check that it was updated. The reason I like to do this is just in case I maybe misspelled the service name, spelt "platform" wrong, and we had the correct image name as well as some misspelled duplicate, or something silly like that, which I have done before.
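The build-and-push loop we keep repeating can be condensed to something like this; here `<dockerid>` is a placeholder for your own Docker Hub username, and the deployment name is an assumption based on my notes:

```shell
docker build -t <dockerid>/platformservice .      # build from the service's Dockerfile
docker push <dockerid>/platformservice            # push the new image to Docker Hub
kubectl rollout restart deployment platforms-depl # only if the deployment already exists
kubectl get pods                                  # watch the new container come up
```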
If that happens, when you do a rolling restart you're not actually pulling the new image down. And then of course you can come over here and have a look and see when it was last updated, and it did say a few seconds ago, and we don't have any weird duplicates or anything like that, which all looks good. All right, so with that being done, I think it's time to move on and actually do a rollout restart and see if our migrations are applied to our SQL Server running in Kubernetes; if we get that working, it's another big, massive tick, so we'll do that next. So I'll just minimize this; we can do this from any command line, actually, we don't need to do it in our K8S project or anything like that. So let's just clear the screen here and do a kubectl get deployments, and you can see here we have our three deployments, which is looking good, and we want to basically refresh, rollout restart, this one here. Okay, so as before, we've done this before, and again, just to labour the point, the reason we're not going into our K8S project and doing a kubectl apply is because from Kubernetes' perspective the deployment hasn't changed; that deployment file stayed exactly the same, so we have to do this rollout restart. So: kubectl rollout restart deployment, and then we just copy in the name of the deployment we want to restart, and it's been restarted. So, clear the screen, kubectl get pods, and you can see here we've got this ContainerCreating for platforms, so the new container is being created, and we'll just keep an eye on that and hopefully we don't get any errors. Give it a couple of minutes to find its feet and, unfortunately, we get an error, so that isn't looking too good. So let's come back over here to Kubernetes and see what we've got here. Okay: "Could not run migrations: Login failed for user 'sa'". Okay, that's interesting, so maybe I've done something silly and misspelled the password for our sa account. So let's come back over to our settings in Production so we can see what we have over here.
And indeed, I have spelt the password incorrectly, as you will see here, which is really, really annoying. Now, again, I could have edited this out and pretended it didn't happen, but it's probably going to happen to you; it certainly happened to me. There should be an exclamation mark on the end of that, oh my goodness, how frustrating. So we'll save that off; that should be right now. We could test this locally, actually, by doing a similar thing to what we did when we generated the migrations, and make sure that we can connect in locally; it probably would be a good idea to do that, but I'm going to be brave, not do that, and be a bit foolhardy. So what we'll need to do, and what I'm going to do, is kill that deployment, because it's a bad deployment, it's not working, so let's do a bit of error management here. Yeah, we've already seen our pods; let's do kubectl get deployments, and we can see we have this deployment, and it's going to be in a bit of distress, so we'll do kubectl delete deployment, and I'm just going to delete this. So it's deleted now; if we get our deployments, it's gone, and if we get our pods, we have a terminating platforms deployment pod, so that's all good; that's going the way of the dodo. So, make sure we're still in our platform service, we'll do a directory listing to make sure we have our Dockerfile, and we're going to have to do that whole deployment step again. Now, you can automate this, but again, I think when you're learning, it's good to go through this pain at least once or twice, and you will be very, very sure not to make these mistakes again. So we'll do a build: docker build, tag, and then it's binarythistle in my case, forward slash, platformservice. Okay, I'm just going to double-check that, and make sure to put the build context in; it's platformservice, cool, let's hit enter and build that. And that's built, and then we'll just push up to Docker Hub once again: take out this -t and change this to push, all right, push up to Docker Hub again. Now, because we don't have a deployment, I've destroyed it, we don't have a platforms deployment, we're going to have to go back into our K8S project, and I'm actually going to have to reapply our deployment, because there's not one there, so you'll see it being created. So again, there are two distinctions there: if you have a deployment and it's running, you do the rollout restart; if you don't have a deployment, you have to do the application of the deployment. So we'll do that, fingers crossed; this time we'll see what we get. All right, so that's been pushed up; again, I'm going to double-check Docker Hub to make sure that it has been pushed up, so let's just refresh that, fantastic, and we'll come in here and we will see it was updated a few seconds ago, and you can also see the count has been incremented. All right, so we do now need to actually go over to our K8S project, so let me just find that, it's this one here, do a Ctrl+backtick and a directory listing, and we want to, not redeploy, we basically want to deploy as if we were doing it for the first time, this deployment here. So again, just to labour the point: kubectl get deployments, we don't have a platforms deployment, so kubectl apply -f, and we'll copy, should I say, this file here, and paste that in. And what you'll see is that our ClusterIP service is unchanged, which is correct, that was already there, and our platform service deployment wasn't there, so it's being created, great. So, kubectl get deployments, or better still, get pods, because that gives us a bit more detail, and we'll see ContainerCreating, and hopefully we don't see that dreaded error. There's nothing worse, actually, when you see that; your heart sinks a little bit and you think, "oh my goodness, what have I done wrong?", but it's all part of learning, it's just how things end up being. So we'll check it again.
and it's running this time that is looking good let's do it one more time because i find sometimes it can run maybe i spoke too soon and then it's still running that's good so sometimes it can be a bit cheeky you can do it get pods and it's running and you're all happy and you just do it again and it's gone into an error state because it's actually failed but that looks that looks pretty good okay fantastic so what we'll do is we'll come over to kubernetes now that's that's not the wrong one that's the it's a docker hub it looks a bit similar they're using the same color scheme so that's why i'm getting a bit confused with it so let me bring that onto the screen here and we'll do a quick look at the terminal window to see what we get so here's our platform service deployment and we can take a quick look at some of the output that we're getting here we can see let's scroll up yep we're using sql server database we're attempting to apply migrations all looking good and then we're seeding data into our database so looking pretty good i think so the proof of the pudding really is to come back over to sql server and refresh it refresh this there we go we've got platforms db and we should have platforms table if you're doing your query we should have our seeded data in there as well so just make sure if you're using this tool that you change it to platforms platforms db up here and we'll do a select everything from platforms great indeed we do that is looking pretty good and then if we come over to insomnia and i'll bring this up make sure you're using the k8s version um we can do get all platforms and that's all looking good as well now what we might want to do if we want to be really really brave is we want to do a create platform but this time we'll just duplicate this and we'll move this into here and we will just copy this uri into here and we will create docker in our database and that will be the full proof that everything is working so that's looking good we've 
got a 201 Created. We'll do a Get All Platforms from here — it's looking pretty good — and then we'll just come back to SQL Server Management Studio and execute that, and it's there. Quite a long section, but a very worthwhile one: we've got SQL Server up and running in our cluster, using a persistent volume claim, awesome, and we have changed our platform service to make use of SQL Server when it's running in Kubernetes, but to still use an in-memory database when it's running in development. So a little bit of a nuanced setup, fantastic. What we're going to come on to next is revisiting our commands service, because it's very, very bare bones at the moment — it basically just has that one test endpoint in it — so we're going to build it out into a more fully featured service. You'll probably be quite pleased to hear we're not going to go down the route of provisioning its own SQL Server; we're just going to continue using the in-memory database for that, because we've covered that already. Maybe as a challenge for yourself, or a bit of homework, you can do that — you have all the information now to be able to stand up a SQL Server just for the command service — and I'll leave that with you. But yes, in the next section we're going to revisit our command service and get it up to speed as a proper service, and beyond that we'll start looking at more interesting messaging between the two services, in a proper microservices style. So we'll do that next. Okay, so before we begin coding, let's take a quick look at the endpoints we're going to stand up in our commands service. Now, this conversation is really more a REST conversation as opposed to a microservices conversation — there are some crossover touch points, but for the most part this isn't really anything to do with microservices; it's really to do with the fact that our commands service is going to be
dealing with two models: a command model, as you would expect, which will store the command line snippets, but it's also going to have to deal with a platform model. Now, where the microservices conversation comes into this: obviously we're going to spend a lot of time figuring out how to get platforms from our platform service down into our command service, using this concept of eventual consistency, where we have something created in one service — the platform service — and it's somehow replicated, or transmitted, or event driven, down to other services, in this case our command service. So that's the microservices problem. Setting that aside for the moment, and looking at this purely through the lens of the command service: the command service is going to eventually have platform data — okay, we will solve that problem — so let's say the command service is going to have platforms, and it's also going to have commands. What we're really talking about here is how we enable our command service to work with multiple models. Now, when we talk about platforms, you could almost think of a platform as the parent resource in this case — and again, our command service is not responsible for creating platforms or anything like that — but what we will allow it to do is let someone retrieve the platforms that it is aware of. So that's what we're going to do next, pretty much, and we'll put that into our existing platforms controller; you'll remember we did create a platforms controller with a test endpoint just to test connectivity. So that's fairly straightforward, but the whole point of this section is really to say that a platform is effectively the parent of a command — putting it another way, a command cannot exist without a platform. As you can see here, these next three endpoints are all going to be part of, or reside within, the commands controller, which we will set up, and
probably the most important thing you'll note here is that the route for all our actions has to contain the resource id for a valid platform. So, for example, looking at how we're going to create a command: we cannot create a command without a platform, hence we have to provide a resource id as part of the route where we're creating a command. The same rules apply when we're retrieving commands, or even retrieving a specific command — we have to do it in reference to a platform. So that's it, really, for this section; it's just to get you up to speed on the fact that this is how we're going to have to create our commands controller, and it's going to make reference to valid platforms as part of the route. Hopefully that made sense; if not, we're coming on to coding next, where hopefully it will become much clearer — so why don't we get started? All right, so we're coming back into some familiar territory now, back into a more coding-type structure, which I personally prefer, and we're going to make sure we're in our commands service project. Now, you'll remember — it's been a while — it's fairly basic; in fact we didn't really do too much with it at all. All we had was this platforms controller, where we had a test POST endpoint just to make sure we could communicate between our services, and that was all we did with it. We don't really have anything else in here. We did, of course, add a Dockerfile, which we need, and that's not going to change, but other than that: very, very basic. Even over in our Startup class, I don't believe there was too much here; in fact I didn't even take out the default comments, so I'm just going to do that, which I usually like to do. All right, fantastic. So the first thing we're going to do is create our models. Over in the root of our project, right click, New Folder, Models, and the first model we're going to create is our platform model, so New File, Platform.cs. So
we'll create a namespace of CommandService.Models, fantastic, and we're going to declare a public class called Platform, and then we're going to define some properties. So prop, int, tab across, and we'll call this Id, fantastic. Then another prop — we'll give it a type of string and call it Name — and we'll leave it there for a moment. Now, what I might actually do is put another property in here, an int, and we'll call it, I don't know, ExternalId. That's going to be the primary key of the platform from our platform service, because we might want to keep track of what that is. We'll stick a pin in that for a moment, but we'll use it — this is really a reference to whatever unique id we get given by the platform service; we may want to track that, and of course we have our own unique database Id which we'll use internally. That's it for now. The other model we want to create is our command model, so right click on Models, New File, and we'll call this Command.cs. So: namespace CommandService.Models again, and it's a public class called Command, and we'll define a couple of properties on it. First one: int, called Id, all good. The next one we'll make a prop of type string, and we'll call it HowTo — how to do a particular activity, such as build a .NET project. The next property will also be a string, and we'll call that CommandLine. Then we'll have one more property, an int, and this one is our PlatformId — this is the foreign key reference back to our platform parent, if you like. Then finally we'll create a public Platform of type Platform — I didn't quite use the prop snippet there, did I, so I'll fill it out manually. Now, this is what's called a navigation property: it allows us to navigate between commands and platforms by providing a reference to a Platform. Now, while I remember, let's jump
back over to our Platform class, and we also want to make a reference in here to the fact that we have — or can have — a collection, so we'll use an ICollection of Command objects. Again, this is navigating the other way. We'll call that Commands, get and set, and we'll make it equal to a new List of Command objects. All right, so we need to bring in a couple of namespaces and correct our spelling of collection: using System.Collections.Generic — I think that should be enough, yeah, there we go. The other thing we need to do, which I haven't done yet, is annotate these attributes as well. I want to make sure the Name is required, we'll make sure the ExternalId is also required, and we'll make sure the Id is required and also specify that it's a [Key], and we'll bring in the namespace for data annotations. That looks good, so I'll save that off. Then back over in our Command class, likewise, we'll make the Id required and also make it the [Key]; HowTo will be required, as will our CommandLine — it wouldn't be a terribly interesting command if we didn't have a command line snippet in there — and likewise our PlatformId needs to be required. Okay, we'll bring in the namespace for that, and that should resolve, fantastic. I'll save that off, Ctrl+backtick, and we'll just do a dotnet build to make sure we didn't introduce any strange errors in here — that all looks pretty good. So, in the next section: again, with this service we're just going to use an in-memory database, but we still need to add a database context, and within our DbContext we actually want to explicitly specify a relationship between our Platform model and our Command model. Entity Framework does a lot of that for us, but if you want to be really explicit about it, to make sure you get the exact behavior you want, then you should set up those behaviors, and we do that within our DbContext class.
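Putting the steps above together, the two finished model classes look roughly like this — a sketch matching what's typed on screen (they live in separate files under Models/, shown as one block here for brevity):

```csharp
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

namespace CommandService.Models
{
    public class Platform
    {
        [Key]
        [Required]
        public int Id { get; set; }          // our own internal database id

        [Required]
        public string Name { get; set; }

        [Required]
        public int ExternalId { get; set; }  // the id this platform has over in the Platform service

        // Navigation property: a platform has many commands
        public ICollection<Command> Commands { get; set; } = new List<Command>();
    }

    public class Command
    {
        [Key]
        [Required]
        public int Id { get; set; }

        [Required]
        public string HowTo { get; set; }        // e.g. "Build a .NET project"

        [Required]
        public string CommandLine { get; set; }  // the actual command line snippet

        [Required]
        public int PlatformId { get; set; }      // foreign key back to the parent Platform

        // Navigation property back to the parent platform
        public Platform Platform { get; set; }
    }
}
```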
So we'll do that next. All right, we want to create our DbContext now, so let's just get rid of this, and back over in our command service, like we did with the platform service, we're just going to right click, New Folder, and call it Data. In here we're going to create a new file called AppDbContext.cs, and the first bit is exactly the same as we've already done for our platform service: namespace CommandService.Data, and we're going to declare a public class called AppDbContext, and it's going to inherit from DbContext — we'll bring in that namespace with Ctrl+period — and then curly brackets. We then want to declare a constructor, so ctor, tab, tab into this, and we want to declare some DbContextOptions of type AppDbContext, and we're going to specify some options — again, exactly as we've already done for our platform service, all good. Then likewise we're going to do a prop, a DbSet — it's going to be a DbSet of type Platform; let's bring that namespace in, using CommandService.Models — and we'll call that Platforms. Then a second prop, again a DbSet, this time of type Command, and we're going to call that Commands, fantastic. Now, as I said just a little while ago, that should be sufficient — Entity Framework Core does a pretty good job of wiring up the relationships using its defaults — but just to be on the safe side I'm going to explicitly declare the relationships between these two entities, because the naming conventions can be a little bit weird when you leave it to Entity Framework Core. So I'm going to explicitly declare that here, and the way you do that is, yes, within your DbContext, but we just need to override a method: protected override — what are we overriding? — void, first of all, and then we're overriding OnModelCreating. We pass into that a ModelBuilder, and just call that modelBuilder, and using this model builder we
then specify the relationships we want to declare on our two entities, our two models. So Entity — the first one we're using is Platform — and then we just declare the relationship. We'll say HasMany, then we'll use a lambda expression, so p goes to p.Commands — a platform has many commands — WithOne, again using a lambda, p goes to p.Platform, put the exclamation mark in, and then finally HasForeignKey, with the expression again, PlatformId, and we'll finish that off with a semicolon. That looks pretty good. Then we do the reverse, going the other way, for our Command: modelBuilder again, specify the entity we're working with, so type Command, fantastic, and then we specify it HasOne, using a lambda expression again, c goes to c.Platform, WithMany — it's the reverse relationship definition, effectively — Commands, and finally HasForeignKey on c.PlatformId, fantastic. So we can save that off, and I think that all looks pretty good. Again, we'll just clear the screen here and do a dotnet build to make sure that's all humming along nicely — and it is, fantastic. So we're going to use the repository pattern here again, and in the next section we're going to add a repository, so we'll do that next. All right, so let's close this down, get rid of our DbContext, and still within our Data folder we're going to create our repository interface: right click Data, New File, and we're going to call this ICommandRepo.cs. So: namespace CommandService.Data, and then we just declare the interface as a public interface called ICommandRepo. Again, because we'll be using a database, we want to specify SaveChanges, which we always technically have to have — so SaveChanges — and then I'm going to create a number of method signatures here. Most of them will make sense, and most of them will service the endpoints that I showed you on the slide, and there's going to be one that wasn't there.
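Before building out the interface, here's the completed AppDbContext assembled from the steps above — a sketch matching what's typed on screen, with the two relationships declared explicitly:

```csharp
using CommandService.Models;
using Microsoft.EntityFrameworkCore;

namespace CommandService.Data
{
    public class AppDbContext : DbContext
    {
        public AppDbContext(DbContextOptions<AppDbContext> opt) : base(opt)
        {
        }

        public DbSet<Platform> Platforms { get; set; }
        public DbSet<Command> Commands { get; set; }

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            // A Platform has many Commands...
            modelBuilder
                .Entity<Platform>()
                .HasMany(p => p.Commands)
                .WithOne(p => p.Platform!)
                .HasForeignKey(p => p.PlatformId);

            // ...and, going the other way, a Command has one Platform.
            modelBuilder
                .Entity<Command>()
                .HasOne(c => c.Platform!)
                .WithMany(p => p.Commands)
                .HasForeignKey(c => c.PlatformId);
        }
    }
}
```

The `!` (null-forgiving operator) is the "exclamation mark" mentioned in the walkthrough; it just tells the compiler the navigation property won't be null at that point.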
It's not actually an endpoint we're going to expose via our HTTP controllers, but it's something we're going to have to be able to do when we come on to the microservice asynchronous messaging, and that is adding platforms into our database — platforms that we're going to get, somehow, from our platform service. That's when it really starts to get exciting. So anyway, let's begin. The first method signature I'm going to declare is an IEnumerable of Platform — a few namespaces to bring in here — and I'm just going to call it GetAllPlatforms, okay, so that allows us to get all the platforms from our command service, the ones it knows about anyway. Let's bring in these namespaces: System.Collections.Generic and our Models namespace, fantastic. The next one is going to be a void, and it's going to be called CreatePlatform — again, we're eventually going to need to create platforms in our database, and we'll do that generally by getting that data from elsewhere, from our platform service — so yep, it will take a Platform object ultimately. Then we need to create this method as well: PlatformExists — and we'll see how that comes into play a little bit later — and into that we're expecting to pass a platform id. Basically this is just checking whether the given platform id exists within our collection of platforms, and we'll use this quite a lot throughout the rest of the course. So this stuff is really all about the platforms — I might just put a comment in; you can read the code, I suppose, but it might just say "platforms": this is platform-related stuff, even though it's in our command service. This next range of methods is really about our commands, and these ones mostly relate to the endpoints I showed you on the last slide. So the first one is going to be an IEnumerable of type Command — it's going to return a list of
commands — and I'm going to call it GetCommandsForPlatform, and it's going to expect a platform id. So we give it a platform id, and it gives us back all the commands that relate to that platform id. The next one is going to return a single Command, and we'll call it GetCommand, and into that we're going to pass a platform id but also a command id — that gives us an individual command, if you remember from the routes we had on the slide. And then the last one — possibly the most important, well, equally important — is void CreateCommand; into that we're going to pass the platform id of the platform we want to create the command on, and then a Command object, called command. That's the full range of method signatures this interface needs to provide for us. If you come back over to this slide here, you'll see we have most things covered: we've got something that will give us all our platforms, and then these are our three command-based operations — getting all commands for a platform, getting a single command for a platform, and creating a command. The only one we didn't have on there is this one here, creating a platform, which we do by other, more mysterious, more interesting methods. Fantastic, so that's our interface done, and we'll move on now to creating an implementation class for our ICommandRepo interface. All right, so we're going to create our concrete implementation class for our ICommandRepo interface: make sure you've selected Data, right click Data, New File, and we'll call this CommandRepo.cs. So: namespace CommandService.Data, and it's just a public class called CommandRepo, and we're going to implement our ICommandRepo interface, fantastic. So just click in here, Ctrl+period, Implement Interface — actually, just before we fully implement it (we've only got placeholder stuff here), I'm going to first create a constructor.
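Before implementing it, here's the complete ICommandRepo from the steps above, in one place — a sketch following the names used in the walkthrough:

```csharp
using System.Collections.Generic;
using CommandService.Models;

namespace CommandService.Data
{
    public interface ICommandRepo
    {
        bool SaveChanges();

        // Platforms
        IEnumerable<Platform> GetAllPlatforms();
        void CreatePlatform(Platform plat);
        bool PlatformExists(int platformId);

        // Commands
        IEnumerable<Command> GetCommandsForPlatform(int platformId);
        Command GetCommand(int platformId, int commandId);
        void CreateCommand(int platformId, Command command);
    }
}
```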
So, ctor, and I'm going to pass in an AppDbContext, we'll call that context, and we'll assign it to a private read-only field called _context, fantastic — and as usual we need to create that, of course, so Ctrl+period and generate read-only field, perfect. So let's move down to the bottom and work our way up, starting with the simplest ones first. SaveChanges: again, we need to call SaveChanges any time we add or delete anything via the DbContext, so it actually gets pushed down to the persistence layer. So _context.SaveChanges(), and we'll return true if the value of SaveChanges — how many things have been affected, or impacted — is greater than or equal to zero. The next one is whether we have a platform existing in our collection, so we'll pass in a platform id and return either true or false based on whether something exists. So return _context, and this time we're going to look at our Platforms collection, and we're going to see if we have anything at all in there matching the following criteria — Ctrl+period to bring in LINQ — so a quick lambda expression: p goes to p.Id, and see whether it's equal to the platformId that's passed in. If so it returns true; if it doesn't find anything, it returns false. Next, we want to get all the commands for a particular platform, so we're going to pass in a platform id, and return any commands that we have, if any — we could get nothing back; that is a valid response. So return _context, this time on our Commands, and we'll do a Where, with another lambda expression: c goes to c.PlatformId, to see whether it's equal to the platformId passed in. And then — we don't need to do this, but I'm going to — we're going to order the results by the platform name, just so we have them kind of grouped together: c goes to c.Platform.Name. Not mandatory, but it just makes our
result set a little bit nicer, fantastic. Next one: we're going to get an individual command, so we pass in both the platform id and the command id, and we're going to return _context.Commands with a Where clause that's a little bit more complex this time: c goes to c, and the first thing we check is the PlatformId, to see if it's equal to the platformId passed in — but also, with an "and", we check whether the command's Id is equal to the commandId passed in. Now, let's do a Ctrl+B to get rid of our file explorer, and what we want here is FirstOrDefault — we don't want a full result set, we just want a single command returned back. The next one, GetAllPlatforms, is pretty easy: we just return, using the context again, Platforms.ToList(), simple as that. Then the slightly more interesting ones — we've got a couple of create methods here. The first is creating a platform, and in order to do that we first want to check the Platform object that was passed in, to make sure it's not null; we don't want to try to add a null value to our collection. If it is null, we will throw a new ArgumentNullException — I'll just bring in the System namespace, assuming I spelt that correctly, and amazingly I did — and we'll say nameof(plat), okay, so that identifies which argument we're throwing the exception on. Assuming it's not null, we use our context again, go to our Platforms collection, and Add the platform we've been given. Then finally, probably the most complex method of all — although it's not really that complicated — we're going to add a command. We're going to get passed the platform id, and we're expecting a Command object passed in as well. Now, we're already getting the platform id from the URI, so if we come back to our list of methods — let me just find the right slide, I just want to show you what I'm talking about, here we go — so for Create Command, this is going to be our route:
we're going to go platforms, platform id, commands — we're going to create a command at that route — so we're already getting the platform id here. So when we actually pass in the command, although the command does need a platform id, we don't need to specify it in the body of our POST request. That'll become a bit clearer when we create the DTOs for this, but I'm going to do a little bit of coding here where you might go, "why are you doing that?" — and that's basically why. So, similarly to creating a platform, if the command passed in is null, we don't want that, so I'm just going to copy this rather than typing it, and instead of plat it's going to be command. Assuming it's not null, what we then want to do, on the Command object that's been passed in, is set its PlatformId equal to the platform id we get in the URI, from here. That should be — excuse me — that should be down here; that's why I was getting that warning, it was being flagged as unreachable code. So yes, just to replay that again: we get a command passed in, but it's not going to have a platform id at that point — we're already getting it in the URI — so I'm going to assign the PlatformId to whatever is in our route. Then finally, _context, on our Commands collection, we simply Add that command, fantastic. And I think that's all done — that's the concrete class done, all methods implemented, looks that way anyway. So I'm just going to do a Ctrl+backtick and a dotnet build to make sure we've not broken anything — that all looks good, fantastic. So that's basically our repository done. What we're going to move on to next is creating our DTOs, because after that we're going to move on to fully implementing our controllers — but first we need to get our DTOs in place, and then we also need to do our mappings. All right, so we're coming on now to creating our DTOs.
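Collected together, the repository implementation described above looks roughly like this — a sketch following the walkthrough (the `>= 0` check and the optional OrderBy are as described on screen):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using CommandService.Models;

namespace CommandService.Data
{
    public class CommandRepo : ICommandRepo
    {
        private readonly AppDbContext _context;

        public CommandRepo(AppDbContext context)
        {
            _context = context;
        }

        public bool SaveChanges()
        {
            // True when the save succeeded (zero or more rows affected)
            return _context.SaveChanges() >= 0;
        }

        public bool PlatformExists(int platformId)
        {
            return _context.Platforms.Any(p => p.Id == platformId);
        }

        public IEnumerable<Platform> GetAllPlatforms()
        {
            return _context.Platforms.ToList();
        }

        public void CreatePlatform(Platform plat)
        {
            if (plat == null)
            {
                throw new ArgumentNullException(nameof(plat));
            }
            _context.Platforms.Add(plat);
        }

        public IEnumerable<Command> GetCommandsForPlatform(int platformId)
        {
            return _context.Commands
                .Where(c => c.PlatformId == platformId)
                .OrderBy(c => c.Platform.Name); // optional, keeps results grouped
        }

        public Command GetCommand(int platformId, int commandId)
        {
            return _context.Commands
                .Where(c => c.PlatformId == platformId && c.Id == commandId)
                .FirstOrDefault();
        }

        public void CreateCommand(int platformId, Command command)
        {
            if (command == null)
            {
                throw new ArgumentNullException(nameof(command));
            }
            // The platform id comes from the URI, not the request body
            command.PlatformId = platformId;
            _context.Commands.Add(command);
        }
    }
}
```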
So, Ctrl+B to bring back our file explorer, and I'm just going to — well, not get rid of these, we'll just clear them down. So in the root of our commands service project, right click, New Folder, and we're going to create a folder called Dtos. Now, we're not going to create all the DTOs we need right now; we're going to create just the ones that map to the controller actions we're going to implement. There are some other DTOs we'll create as we move forward, more related to eventing and all that kind of stuff, but for now we're just creating the ones we need to service our controllers. The first DTO we're going to create — I'm not going to go through what DTOs are again, because we've already covered that — is a platform read DTO, so New File, PlatformReadDto.cs. All right, so: namespace CommandService.Dtos, and it's super simple — a public class called PlatformReadDto, and we have two properties. Prop — the first one is the Id; we want to return back the id of any platforms that we have, as an int, so that's all nice and easy. Then likewise for our Name — assuming I can spell prop correctly — but this time it's a string, and it's called Name. Just going back to our models to review: here's our Platform model, and this is pretty much exactly the same as that — not quite, but it's got our Id and our Name. We're not going to pass back our ExternalId, we're just passing back our Id and the Name; we can of course change that if we so choose a bit later on, but for the moment that's all we want. Quickly looking at our Command model before we move on any further: we have an Id, a HowTo, a CommandLine, and a PlatformId, and we actually want to return all of these back via our command read DTO. So back over in Dtos, right click, New File, and we'll call this CommandReadDto.cs, then namespace CommandService
.Dtos, and it's just a public class called CommandReadDto. Let me come back to our Command model; I'm just going to copy these properties over here, and I'm going to take out the annotations — because it's a read DTO, I don't really feel we need those — take these out, and that's fine. So when we're returning a Command object back as part of our response, this is what we'll be passing back, and we'll save that off. The only other one we're going to implement next, for what we're doing on our controller, is for when we want to create a command. So right click Dtos, New File, and it's CommandCreateDto.cs, then namespace CommandService.Dtos. Now, I appreciate a lot of this stuff is quite boring — we've done it before — but again, it's good practice. We've still not really come on to the really interesting stuff yet, which is when we start using eventing and all that kind of stuff; this is still pretty much just a web API, to be really honest with you, but we're almost getting there. So again, it's just going to be a public class called CommandCreateDto. Now, thinking about when we want to create a command: what are the things we're going to pass over, what do we actually want to create with? Coming back to our Command class: we have an Id, which, as you know, we do not want to pass over — like with our platform service, we're not going to supply the id, so we don't need to include that. And, as I mentioned already — and again, this is maybe still a little bit confusing — we do need the PlatformId, but we're not going to expect it as part of our body payload, because we already get given it in our URI. So it's surplus to requirements there, although it's still needed; if we also took it in the body payload we'd be duplicating the platform id, and conceivably the route id and the id passed here could be different — it could be confusing — so we're just not going to allow it to be passed over at all. So
really, all our CommandCreateDto is going to expect is a HowTo and a CommandLine; the platform id is passed in the URI. So I'm just going to copy this — hopefully that made sense, but it will when we come on to actually doing it. And because it's a create DTO, I'm going to leave these as [Required], and we'll bring in the data annotations namespace, and I think we're good — from memory, no, that's right, we didn't put any other annotations on it, so that's all good, and we'll save that off. I think that's good; let's just do a quick build again to make sure nothing's broken — I don't think that would break anything — cool. So again, at this point all we're trying to do is get our commands service up and running, almost just from a web API perspective; we'll do a quick test if we can, and then we'll move on to the more interesting stuff, which is actually getting these services to properly talk to each other using an event bus — and that's coming up in the section that follows. We're almost there, but next we have to do our controller, so let's move on to that now. Actually, just before we move on, I'm forgetting a very important thing, and that is AutoMapper. We've not included AutoMapper yet — remember, AutoMapper is the glue that allows us to map between our models and our DTOs — and we've not put that in yet. The first thing I want to check is that we included it in our csproj file, and we did: over here, AutoMapper.Extensions.Microsoft.DependencyInjection — we need that in here. You'll also remember it has to go into our Startup class, and I don't believe we've done that yet — no, we haven't. So first things first, we want to add AutoMapper into our ConfigureServices method: under controllers, we're going to access our services collection and AddAutoMapper — and there it is — and again, you'll remember from before, we need to pass in AppDomain.CurrentDomain.GetAssemblies(), and we'll finish that off.
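For reference, here are the three DTOs from this section in one place — a sketch matching what's typed on screen (they live in separate files under Dtos/):

```csharp
using System.ComponentModel.DataAnnotations;

namespace CommandService.Dtos
{
    // What we return when reading platforms: Id and Name only, no ExternalId
    public class PlatformReadDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // What we return when reading commands: everything on the model
    public class CommandReadDto
    {
        public int Id { get; set; }
        public string HowTo { get; set; }
        public string CommandLine { get; set; }
        public int PlatformId { get; set; }
    }

    // What we accept when creating a command.
    // No Id, and no PlatformId — the platform id arrives via the URI.
    public class CommandCreateDto
    {
        [Required]
        public string HowTo { get; set; }

        [Required]
        public string CommandLine { get; set; }
    }
}
```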
So that allows us to inject AutoMapper into our service, fantastic. Now, this is getting a bit busy, so I'm going to close all this down. All that really remains for us to do is actually create the mappings, so again, you'll remember we created a folder for those mappings — or for those profiles, more correctly — so New Folder, and we'll call it Profiles. In our Profiles folder, New File, and we're going to call our profile CommandsProfile.cs. So: namespace CommandService.Profiles — again, be careful with the pluralization of the namespace, Profiles, versus the AutoMapper class, Profile, that we inherit from — so public class CommandsProfile, and we're going to inherit from an AutoMapper class called Profile, and I'll bring in the AutoMapper namespace. Hopefully you remember this from last time; we then just create a constructor, called CommandsProfile, which doesn't accept any parameters, and it's in here that we do our mappings. Just a bit of a refresher: we specify our source and then our target. Now, the mappings we're doing at the moment are very simple — they just map straight on top of each other, as you'll remember from last time. This does get a little bit more complex, though, so we will come back in here and update our mappings at some point, but for now, simply CreateMap, and the first one we want to create is from our Platform object to our PlatformReadDto — that's for when we're giving back platforms from our controller; the source ultimately is a Platform model, and the target is a PlatformReadDto. I'll just bring in CommandService.Models, and I'll bring in the Dtos namespace as well, fantastic — that's our first mapping done, and it should make sense. Then the next one: CreateMap, and this is going to be a CommandCreateDto, and that's ultimately going to go to a Command, fantastic. And then the third one: CreateMap, should I say, and the source is going to be a Command,
and we're going to map back to a CommandReadDto. So again, likewise, when we're retrieving our Command objects, the source is a Command and we'll be passing back a CommandReadDto — and that's our three mappings for the moment. And I think this is — yeah, I don't know why that's wrong — there we go, that should be much better; I don't know how that got to be a small p. Anyway, there we go, that's our constructor done, matching our profile, all good, yep. Ah, maybe it got it from here — that's interesting: you can see here that the name of the file starts with a small p, and that's maybe where it's getting the name of the constructor from. I didn't realize that; I thought it would have got it from the class. Interesting — something I learned, so a good pickup there. Anyway, we'll save that off; that's our commands profile done, so now we are truly ready to move on to our controllers. All right, so moving on to controllers now, and if we come back over here you can see we had a platforms controller already, that we created just to test that inbound connection. You can see it's pretty basic — there's enough here to get us going, but we obviously need to expand upon it. We did create a constructor, but it's not doing very much at the moment, so as usual, what we want to do is inject in a repository, so we can use it, and a mapper object. The first thing we're going to inject in is an ICommandRepo, and we'll call that repository — no comma, don't know why I put that in — and we also want to inject an IMapper, for AutoMapper, and we'll call that mapper, and bring those namespaces in: using CommandService.Data and using AutoMapper. As usual, we'll assign both of those to private read-only fields — _repository equals repository, and _mapper equals mapper — and we'll generate the read-only field, and the same for this one.
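As a checkpoint, the finished CommandsProfile with its three mappings looks like this — a sketch matching what's typed on screen:

```csharp
using AutoMapper;
using CommandService.Dtos;
using CommandService.Models;

namespace CommandService.Profiles
{
    public class CommandsProfile : Profile
    {
        public CommandsProfile()
        {
            // Source -> Target
            CreateMap<Platform, PlatformReadDto>();   // returning platforms
            CreateMap<CommandCreateDto, Command>();   // creating a command
            CreateMap<Command, CommandReadDto>();     // returning commands
        }
    }
}
```

These are straight one-to-one mappings for now; as the walkthrough notes, they get updated later when eventing is introduced.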
this will actually fail if we try and run up this class so if we run up the application and attempt to hit one of our action endpoints we're going to get some horrible errors here and if anybody can tell why then you're obviously following along very well indeed the main reason is that we've not registered our icommandrepo in our startup class in our configure services method and actually nor have we configured our database context we've not registered that either we weren't using any data services from in here when we were testing before so we set none of that stuff up in our startup class so if we come over here you can see our configure services method is looking pretty bare we just have our controllers which was there anyway and then we did add auto mapper but we haven't yet registered our db context which we need to do and we haven't yet registered a repository so before we go any further we need to do that so up here i'll make use of our services collection again and we're going to add db context and it's going to be an app db context here we go and i'm going to provide some options opt so just a lambda expression if i can type it correctly opt goes to use in memory database i suspect it'll need me to bring in the namespace for that so let's do that control period to bring in yep entity framework core and then as before we just need to give the in-memory database a name you can call it anything it doesn't really matter and again we're not going to use a sql server here we're not going to distinguish between environments we're just using the in-memory database in the interest of time but again i've said it before the key point to note is that this service the command service does have its own database all right cool and then finally that's our db context registered we now need to register our repository so let me just put it in here actually so services add scoped so when anybody
requests an icommandrepo we're going to give them if i spell it correctly we're going to give them a concrete class which is command repo and that's it so yeah i think that's all and the namespace is there i think that's right so let me just do a build save it first dotnet build that's all good all right perfect so that's all our dependency injection stuff wired up for our data stuff anyway so we can close that down and we can come back to our controller so this stuff will now work so when we ask for one of these it will get injected in so that's all good so we're in our platforms controller at the moment and let's come back to our slide deck and we'll just have a look at the action that we're going to implement if i can find the right slide bear with me a second here we go so again we're going to have two controllers within the command service we're going to have our platforms controller and that's just going to allow us to get all our platforms so let's set about doing that first so it's going to be a http get request no routes or anything it's just going to use the existing route which as you can see here is api forward slash c and then the name of the controller which in this case will be platforms fantastic and then it's just a public action result passing back an ienumerable of platform read dtos and we'll call that get platforms it will require a couple of namespaces so system collections generic and for platform read dto the command service dtos namespace fantastic so i'm just going to do a console.writeline just to give us a wee bit of debugging because i find that quite useful especially when things go wrong as you've seen recently so i'll just say getting platforms from oh goodness command service more correctly all right fantastic so that'll just give us a bit of a warning that this controller action has actually been hit when we call it so we then want to get a collection if there are any a collection of platforms back using our repository
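the registrations just added to configure services come out roughly like this — a sketch, where the in-memory database name "InMem" is an arbitrary choice and the class names follow the course's conventions:

```csharp
// Startup.cs (Command Service) -- ConfigureServices
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    // each service owns its own database; in-memory only, in the interest of time
    services.AddDbContext<AppDbContext>(opt =>
        opt.UseInMemoryDatabase("InMem"));

    // anyone asking for ICommandRepo gets the concrete CommandRepo
    services.AddScoped<ICommandRepo, CommandRepo>();

    services.AddControllers();
    services.AddAutoMapper(AppDomain.CurrentDomain.GetAssemblies());
}
```

addscoped gives one repository instance per http request, which is the usual lifetime for anything that sits on top of an ef core dbcontext.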
so i'll create a variable called platform items equals make use of our repository and let's just get all platforms we don't need to pass anything into that so it's conceivable this could be null that's okay and in fact it is going to be null for very obvious reasons which we'll cover in a second and then we're just going to return back an okay and we're going to make use of auto mapper so we're going to map from our collection of models to a collection of dtos so mapper map what do we want to map to we want to map to an ienumerable of platform read dtos and what's our source source is our platform items and again that's just making use of one of our mappings in our profile so that all looks good fantastic so let's give this a go let's run this up with a dotnet run and let's come over to insomnia and test this endpoint now it's not going to return anything it's just going to return an empty set but i just want to make sure that we've wired everything up correctly and we're not getting any weird issues so we are running in our local environment so we've just done a dotnet run um so command service here's the previous test we did on the inbound connection in fact we could probably still test that and that's uh working okay if we come back over here you can see yep we've got our inbound post so again we'd expect that to continue to work but at least we've not introduced any new errors and so the new thing we want to introduce here is basically getting all platforms so new request all commands sorry get all platforms put a space in here there we go get request we'll create that and i'm just going to copy the uri from here it's going to be exactly the same the only difference is it's a get request not a post request send that over great so we got 200 okay and just an empty array of nothing because there's nothing there we've not seeded any data we did create a prepdb class from memory haven't we so we just come over here ctrl c that to get rid
of this bring up our um data folder no we don't i forgot we didn't actually even create a prepdb class like we did with our platform service where we see the data and all that kind of stuff we don't even have one yet we will in a bit but for now we don't so there's nothing there we just have an empty database with nothing in it so that's cool for the moment we'll circle back to this problem uh with the fact that we have no data for platforms but for the next part what we're going to do is we're going to finish off the other action action results that we need to put into our commands controller so we'll do that in the next section all right so what we want to do now is create our second controller our commands controller that will allow us to service requests for commands so before we do anything let's just take a quick review of the endpoints we're going to stand up so we've done this first one again and that was on our platforms controller should actually pluralize these but in our platforms controller we allowed the ability to pull back all platforms for the remaining three that's where our new controller comes in our commands controller and you'll see in all three instances the route is exactly the same with the exception of the second one which is an additional parameter the command id but taking that aside all three of these routes are exactly the same so this is our kind of controller level routes let's just take a quick look api forward slash c platforms we're then passing in a platform id in all cases it's a mandatory requirement and then our commands resource and in this case we're passing through a command id so just try and remember that this is the kind of class level route that we need to put in so back over in our project right click on controllers new file and we're going to call this controller commands controller dot cs so namespace as usual command service controllers it's just a public class called commands controller and we're going to inherit 
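pulling the last few minutes together, the platforms controller as it stands amounts to something like this — a sketch using the repository and dto names from the narration:

```csharp
using System;
using System.Collections.Generic;
using AutoMapper;
using CommandService.Data;
using CommandService.Dtos;
using Microsoft.AspNetCore.Mvc;

namespace CommandService.Controllers
{
    [Route("api/c/[controller]")]
    [ApiController]
    public class PlatformsController : ControllerBase
    {
        private readonly ICommandRepo _repository;
        private readonly IMapper _mapper;

        public PlatformsController(ICommandRepo repository, IMapper mapper)
        {
            _repository = repository;
            _mapper = mapper;
        }

        // GET api/c/platforms
        [HttpGet]
        public ActionResult<IEnumerable<PlatformReadDto>> GetPlatforms()
        {
            // test-style logging, not something you'd keep in production
            Console.WriteLine("--> Getting Platforms from CommandService");

            var platformItems = _repository.GetAllPlatforms();
            return Ok(_mapper.Map<IEnumerable<PlatformReadDto>>(platformItems));
        }
    }
}
```

with no data seeded the endpoint returns a 200 with an empty json array, which is exactly what the insomnia test above showed.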
from controller base i want to bring that namespace in fantastic okay cool now before we come on to doing constructors or anything like that we want to actually decorate our class with the fact that it is an api controller fantastic and we also want to specify the route so if we come back to our platforms controller you can see the route that we specified at the class level or the controller level was api forward slash c and then the name of our controller which in this case was platforms over here we want to do something similar but again just to refresh your memory although it wasn't that long ago we want to have our route as api forward slash c platforms now bear in mind our controller is our commands controller so we have to actually write platforms as our string we need to figure out a way to provide an attribute or parameter platform id which we have done before so in fact actually showing you the syntax here it shouldn't be too controversial and then finally inheriting the name of our controller so back over here we want to specify the route so it's api forward slash c forward slash platforms forward slash and in here we inject in our platform id parameter which will be the resource id of our given platform and then finally the name of our controller for which we will use the kind of wildcard syntax square brackets controller okay so that just picks up this commands portion and puts it in here so that gives us our class level or our controller level route which is pretty much the same for all our actions apart from one where we have to add on an additional parameter cool so before we come on to the action results let's do our constructor so ctor and as usual let me just close the stuff at the side here ctrl b as usual we're going to pass in an icommandrepo called repository and we're going to pass in a mapper object as well called mapper and then private read only fields so you should definitely i
did tell you we'd be doing a lot of this stuff and i know it's probably getting a bit laborious but again all good practice and we'll bring in the namespaces for each of these control period ah not namespaces my apologies we're going to generate a read-only field i'm just so used to saying bringing in the namespace so generate a read-only field for both and we do actually need to bring in the namespace for these so let's do that command service data and using automapper fantastic so namespaces for both these things and private read-only variables for that fantastic so let's come on to our first action result which is to get back all the commands for a given platform so it's going to be a http get i'm not going to bring up the slide again i think you've seen that enough let me just move this up a little bit and it's going to be a public action result and what we're going to pass back we're going to pass back an ienumerable of command read dtos and we'll call this method get commands for platform and then we're going to pass in the platform id fantastic so we will get our platform id from our route up here so whatever resource or platform id is passed in we'll get that down here so bring in the namespaces for ienumerable system collections generic and also for our dtos there we go now the first thing we want to do what i want to do anyway is do a console writeline bring in the namespace using system just to say that we've hit this controller again this is not something you'd typically do in production but while we're doing some test code i think it's useful so using string interpolation it will just say hit get commands for platform and we'll just put the id in to make sure that's all flowing through correctly platform id fantastic all right now we don't want to attempt to get commands for a platform if that platform doesn't exist within our database so the first thing we're going to do and i'll do a control
b and we come back to our repository we did put this method in here platform exists which takes that platform id and that exists really just to check do we have a relevant platform in our collection if not then there's no point in trying to do anything else and we return back a false or a not found or whatever so we'll make use of that now so we'll say if and we make use of our repository if repository platform exists i'll pass in the platform id and actually i want to make this a not so exclamation mark so if we get a false back from this ie it doesn't exist then we will return a not found http 404 and that's it we won't do anything else there assuming that our platform is valid and it does exist we want to pull back a variable of our commands so var commands and this is coming from our repository and we want to get commands for platform and that expects a platform id so we just pass that through and we can be fairly sure that we have at least got a valid platform having done this check up here and then we just return an okay 200 response and we have to map back so mapper map what are we mapping to an ienumerable of command read dtos and what's our source our commands fantastic okay so that looks good now of course we can try and test this but we're just going to get a not found so let's try it and again we're going to get a not found because we have no platforms but again that's the problem that you're hopefully starting to see we have no platforms in our entire service how are we going to get them well that's really where we come on to using some nice asynchronous messaging to get some platforms into our command service but nonetheless we can still test this endpoint to some extent to make sure we get a 404 or is it 401 no 404 is not found 401 is not authorized that's right so um bring up our command line we'll make sure everything's saved we'll do a
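so far the commands controller looks roughly like this — class level route, constructor injection and the first action with the 404 check just described (again a sketch using the names from the narration):

```csharp
[Route("api/c/platforms/{platformId}/[controller]")]
[ApiController]
public class CommandsController : ControllerBase
{
    private readonly ICommandRepo _repository;
    private readonly IMapper _mapper;

    public CommandsController(ICommandRepo repository, IMapper mapper)
    {
        _repository = repository;
        _mapper = mapper;
    }

    // GET api/c/platforms/{platformId}/commands
    [HttpGet]
    public ActionResult<IEnumerable<CommandReadDto>> GetCommandsForPlatform(int platformId)
    {
        Console.WriteLine($"--> Hit GetCommandsForPlatform: {platformId}");

        // no point going any further if the platform isn't in our collection
        if (!_repository.PlatformExists(platformId))
            return NotFound(); // 404

        var commands = _repository.GetCommandsForPlatform(platformId);
        return Ok(_mapper.Map<IEnumerable<CommandReadDto>>(commands));
    }
}
```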
dotnet run and again it's always useful to run this stuff intermittently just to make sure that we're not introducing anything weird so i'm going to just duplicate this one i'm going to say get all commands for platform okay and in here we're going to amend this it's the same api forward slash c forward slash platforms we'll put in some arbitrary resource id for our non-existent platform and then we'll have to append commands onto the end okay so yes we get a 404 which is what we expect as we don't have any platform ids but that looks okay looks like that's being hit and if we actually come back over here you can see here again exactly the reason why i put this kind of stuff in we did hit our controller correctly with the platform id of one and we just didn't find anything so that's all looking pretty good so i'll take a quick pause here for the moment and we'll come on and finish off our other two action results for this commands controller and then we'll start to look at what we're going to do about this problem of not having any platforms in this service how are we going to get them in there so we'll do that next all right so let's just kill that and yeah we're going to come on and finish off the final two action results or endpoints for our commands controller so the next one we want to write up is basically to pull back an individual command for a given platform so http get i'm not going to bring up the routes again because i'm sure you've seen them enough already but you'll remember from this one we need to additionally pass in a command id as well so the controller level route takes us so far but we need to pass in another parameter which is our command id so http get open parenthesis double quotes and then we just need to specify in curly brackets command id as an additional route parameter that's all right and then i'm also going to specify the name of this action result because we'll make
reference to this a bit later when we come on to create a command and this should start to be familiar from when we created platforms in the last service that we built so i'm going to call it get command for platform and that's all good and then it's just a public action result of type command read detail just bring back a single command if we find one and the name i'm just going to copy this to make sure it's exactly correct exactly the same i don't want to introduce any weird errors into this at this stage and then i'm going to expect two parameters so the first one is the platform id and then the second one is an integer as well and it's the command id select that there okay so platform id is coming from our controller level route here and the command id is coming from our action result level route here fantastic so the first thing we want to do actually is do a very simple very similar exactly identical check here to make sure that the platform id exists i'm going to also copy this right line as well so again platform doesn't exist there's no point in doing anything else so i'll just change name of this so get command for platform i'll paste that in here and we'll pass through the platform id and we'll also pass through the command id just to help us with a bit of debugging if that if it comes to that command id fantastic so again you you're starting to see the problem that we're going to hit here we have no wave at this point in getting platforms into this particular service so all these action results all three of them in fact are going to just return a 404 not found because we have no platforms so the problem and start to think about is how are we getting platforms how should we get platforms into this service should we just create another controller action and just allow us to add platforms um we could do that but we're not going to because the whole point is we want to actually get our platforms from our platform service in some nice elegant way some 
event-driven asynchronous way anyway so yeah let's just finish off this action result before we delve into that too much further so yeah we are doing a platform check to see if it exists if it does exist that's all well and good and what we want to then do is return the individual command so r command equals repository and we're going to call our get command method this time and into that we're going to pass our platform id and also our command id now again at this point we may or may not have a command it's entirely valid use case that there is no command there that's okay so we need to do we need to check that first of all so we'll say if command is null then we go no further and we will just return a not find as before and make sure we put in the double double equal signs for equivalency we're not signing it that's cool and then assuming that's okay then we just return okay and we do of course have to use mapper so what we mapping mapper map what are we mapping to we're mapping to whatever we are returning which is a command read detail and what's our source just the command object that we have uh up here which is non-null fantastic so again that's all looking good we'll save that off we'll quickly run it up now again we're just going to hit this first not found condition here but we can't at least see that we get that number one and it's working correctly and that we're getting both our ids coming through correctly so dotnet run and then back over in insomnia we will come into our command service under a local dev and i'm going to duplicate this i'm going to call it get command for platform okay and it's going to look exactly the same except we're going to hit on or put on an additional parameter for our command still get a 404 which is fine but what i do want to check to see in our output is that we are hit the controller and we pass through number one for the platform id and number three for the command id so that's about as far as we can take this at this 
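the second action just walked through, with its named route and the two separate not-found checks, sketches out as follows (names assumed from the narration):

```csharp
// GET api/c/platforms/{platformId}/commands/{commandId}
// named so that CreatedAtRoute can reference it later
[HttpGet("{commandId}", Name = "GetCommandForPlatform")]
public ActionResult<CommandReadDto> GetCommandForPlatform(int platformId, int commandId)
{
    Console.WriteLine($"--> Hit GetCommandForPlatform: {platformId} / {commandId}");

    if (!_repository.PlatformExists(platformId))
        return NotFound();

    var command = _repository.GetCommand(platformId, commandId);
    if (command == null) // entirely valid for a platform to have no such command
        return NotFound();

    return Ok(_mapper.Map<CommandReadDto>(command));
}
```

note the `== null` check uses equivalence, not assignment — exactly the double-equals point made above.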
point in time cool so that just leaves the final action result which is our create command action result so we'll do that next all right so just stop any running services if you have any and we're back in our command controller and we're going to write up our third action result so back down here click a new line i'm just going to pull this up so this is a http post this time and we don't need to specify any additional routes because our controller level route is exactly the one that we want and again to label the point we have a http get here it's a different verb obviously from http post so they're hitting the same route different verbs it's still unique within the controller so that's all good so let's go on to the method signature so public action results what are we going to return back we're going to return back a command read dto we're attempting to create a command here when if we do successfully then we'll pass back the read version of that dto so i'll call this create command for platform and what we expecting and we're expecting again the platform id and also a command create dto i'm just going to call that command pto all right so again in our route we're expecting the platform id which we are going to get from here and again we'll need to perform this same check and we're also going to expect a command read sorry a command create detail in our body payload so again very similar check i'm just going to copy this one in fact console.writeline and our platform check we'll pop that in here all i want to do is just change the name of the action result so hit create command for platform will pass through the platform id perform the check to see if the platform exists if it doesn't exist then we return a 404 not filmed if it does exist then we keep going on so next thing we want to do is create our variable that's going in our car we have a variable that holds our command model so var command equals and we're going to use mapper to map support type to a 
platform on the platform a command only to bring in that namespace using command service models so yeah we want to create a command object that's going to hold a command model and what is the source the source is going to be our command the command create detail here paste that in here like so fantastic now just a little shout out here the control b you'll remember our command create dto we did specify some required parameters fantastic so one of the nice things about using this api controller decoration is that when we hit this action result and we pass in a command create detail there's a bit of a validation check using those annotations to make sure that we have those two attributes you can probably pass in other stuff as well but this action result will go back with an error if we don't do that so that's why number one it's very nice to annotate create type details and it's also nice to put in our api controller stuff at the top because you get all that out of the box very nice so we can kind of the point of making is we can kind of be sure at this point when we're doing this mapping that we've got a valid object you could additionally if you wanted to put this in a try catch block if you wanted to be super careful but i think we'll be okay for now i want to keep the code a bit light so we've got our valid command we now want to make use of a repository and call create command on it and we pass forward the platform id and then the command object itself and then we always always always have to call save changes otherwise that will not be pushed down to our persistence layer in this case or in memory database and then the last thing we need to do as i've said here we want to pass back a command read detail in the body payload but we also want to pass back the uri the resource uri to this resource as well in our headers collection so that's where the this created that route stuff comes in but before we do that we want to actually create a read detail so var i'm 
going to call this command read detail is equal we'll use mapper again you can see how useful automapper is we're using it all the time we're going to use it even more going forward so we want to map to a command read detail this time and the source is a model which is our command that's been created now because we've hit save changes and our model has been created in our database this command object now has an id as well so we have that at this point so this command re-detail has a valid id i'm just going to capitalize the r just to keep things consistent all right it's fantastic and then finally we use our created at route command which again is a little bit convoluted but nonetheless you've seen it working before so created at routes the first thing we're going to pass in is the name of the end point that will host our resource so in that case it's name of and the whole reason we gave this method a name was so we can reference it so we'll copy this and we'll paste that into our name of method here fantastic and i'm just going to take a new line because this is going to get a little bit busy tab in and then we're going to do new platform id equals platform id command id equals command read detail id and then finally we'll pass back our command view dto as well okay so as part of the resource we need to pass back both ids for both our platform and our command all right i think that should be good that returns a 201 obviously now again we can't test this right now we can test it to some extent to see if we hit this so let's see if we can do that at least so save off and then we'll bring up our command line and we'll do a dot net run fantastic and then we'll bring up insomnia come over here again and yeah within our command service on our local dev we're going to create a new request so new request and we're going to call it create command for platform great and let's see post request and the body will be json so we'll create that off i'm going to copy the uri from 
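the create action assembled above, including the save changes call and the created at route response, comes out something like this — a sketch assuming the repository methods named in the narration:

```csharp
// POST api/c/platforms/{platformId}/commands
[HttpPost]
public ActionResult<CommandReadDto> CreateCommandForPlatform(
    int platformId, CommandCreateDto commandDto)
{
    Console.WriteLine($"--> Hit CreateCommandForPlatform: {platformId}");

    if (!_repository.PlatformExists(platformId))
        return NotFound();

    var command = _mapper.Map<Command>(commandDto);
    _repository.CreateCommand(platformId, command);
    _repository.SaveChanges(); // without this, nothing reaches the persistence layer

    // after SaveChanges the model has an id, so the read dto does too
    var commandReadDto = _mapper.Map<CommandReadDto>(command);

    // 201 Created, with the resource uri in the Location header
    return CreatedAtRoute(nameof(GetCommandForPlatform),
        new { platformId = platformId, commandId = commandReadDto.Id },
        commandReadDto);
}
```

because the class carries the api controller attribute, the required annotations on the create dto are validated automatically and a bad payload is rejected with a 400 before this method body even runs.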
this one here this request here because it's going to be exactly the same uri just a different verb fantastic and then we're going to pass in a command create dto so how to let's see build a dotnet project and then the other one is command line and we're quite familiar with this now dotnet build and assuming you know platform 1 was dotnet now just check that the payload i've got here is correct so back over in our project let's do a control b look at our command create dto and yeah we've got two parameters both required how to and command line so that should be okay let's send this we'll get the 404 not found which is fine but we'll just come back over here and just check yeah that we hit our create command for platform endpoint fantastic so that kind of brings us to the end of this section on building up our command service and to be honest with you it does pretty much most of the things that i wanted it to do in terms of acting as an api acting as a service but the one big missing gap is the fact that there are no platforms in this service as i said we could add another action result in here to a platforms controller and allow us to randomly create platforms that's totally possible but that'd be kind of pointless because our two services would then be completely and totally independent of each other and yeah what's the point in that so the whole point that we've been building to is that we want to somehow when someone creates a platform in our platform service notify our command service that a new platform has been created and add it in to our command service that way so that is really what we've been building to we want to use this event driven messaging architecture to allow us to do that and allow the services to talk to each other in a kind of decoupled way but nonetheless there is still a dependency between them the command service still relies upon the platform service to give it its platforms which is the right thing because the platform service is the
source of truth so we're now moving into more exciting territory so in the next section we're going to do a quick review of synchronous messaging we're going to talk a bit about rabbit mq and then we're going to stand up rabbit mq in kubernetes and then we're going to add some code into both our services that are going to allow for asynchronous event-driven messaging so we've been building to this last few hours we're almost there so in the next section quick review of messaging and then on to rabbit enqueue all right so let's just do a bit of a checkpoint it's been a while since we've done this over overall solution so we've been looking a lot at the kubernetes domain that's cool and it kind of maps onto this overall solution architecture to some extent but this is kind of in my view the next level up so let's just do a quick sense check of where we're at so we've got our platform service up and running to some extent it's got a sql server database fantastic we've developed our rest api and that's pretty much yeah that's complete now the rest based api is finished our internal micro services domain you could almost map that on to our kubernetes cluster if you wanted to do that we have our api gateway which is our ingress engine x controller we can externally get in and route to these apis and again here's our command service in memory database rest api which is basically complete we're not doing any more on that so this is basically where we're at and the problem that we were encountering in the last section when we were trying to test our command service was that we have no platforms in there so what could we do well as we've already kind of done we in a few sections ago we created a http client in our platform service that posted or could post data to our command service so one option would be to create another action result in our command service that would allow us to create platforms using http that's a that is a totally possible solution and in fact what 
we've already done by posting a test message is basically exactly that mechanism just we haven't gone that one step further and you added a platform to our internal database in our command service but we could do that i'm choosing not to do that because again we've talked about synchronous messaging we've talked about dependency chains and all that kind of stuff and the platform service needing to know about the command service so we're not going to do that instead as we've discussed we're going to introduce a message bus and this means that any service can drop things onto that message bus and other people can other services can subscribe to that message bus and so all the services really need to know about is the message bus they don't need to know about or be aware of any other services so when we create a platform in the platform service is going to publish a message onto the message bus and it doesn't care who's listening for those messages it's just putting the information out there it might be one service it might be 100 it might be none doesn't matter it doesn't care and then on the command server side yes we are going to subscribe for those events so the sections coming up we're going to look at rabbitmq what it is go into a bit more detail about that and then we're going to stand up a rabbit mq message bus in kubernetes and that will be the end of the kubernetes stuff and then finally we're going to come back to our both our services and implement the publisher and the subscriber functionality so there's still quite a lot to do and then just to kind of round off actually we'll leave the grpc stuff for the moment we'll come back to that when we finish this next section so that's it for now um now we're going to move on to a quick overview of what mq is and then we're going to move over to kubernetes and stand up rabbit and queue in there so we'll do that next all right so let's go on to talk about rabbitmq which is going to be our message broker as part of 
our solution or message bus so what is it well it's a message broker or message bus it accepts and forwards messages to put it in its simplest form messages are sent by producers or publishers and messages are received by consumers or subscribers messages can be stored in a queue so it has some degree of persistence we're not going to be using that in our solution so if our message bus crashes then we lose all our messages but in production type environments you wouldn't do that you would allow messages to persist in the event of any failures but yeah it's one of the benefits of it so the idea is messages are published onto the queue and if your services are overwhelmed and can't actually service those requests the message broker acts as a buffer for those messages and then as and when you bring more services online they kind of chew through the messages on the queue um exchanges can be used to add a degree of routing functionality now we're going to be using an exchange but we're not going to be routing okay so i've got a bit on exchanges in the next slide actually so just to clarify what we mean by that probably not that relevant to this video but if you're interested it uses something called amqp advanced message queuing protocol as well as some others now as well but amqp is kind of the traditional message queuing protocol all right so we'll come on to talking about this concept of an exchange which again is definitely a rabbitmq concept so there are four types of exchange that it provides direct exchange fan out exchange which is the one we are going to be using a topic exchange and a header exchange now i'm really going to talk mainly about the first two i'll briefly mention the third one the topic exchange and i'm not even going to talk about the header exchange it's way too obscure for my liking so it's not even worth taking up your time with it so the direct exchange so as it says in the bullet points it delivers messages
to queues based on a routing key so it's direct for ideal for direct or unicast messaging so you've got your message broker you have your publisher so that could be our platform service if a consumer that's our command service you have this direct exchange you have the queue the publisher will publish a message so i've created a new platform it will also attach this routing key which will place it onto the particular queue and the consumer will then consume off that queue so we're not going to use this method this is the one we are going to use so rabbitmq is the message broker our publisher which is our platform service there can be more many queues we don't really care i think there's only going to be one in our example but you could have a number of queues just don't care we've got our fan out exchange the publisher will publish its message onto the exchange in this case i've created a new platform and the exchange will then fan out to every queue that's binned to that exchange and then any consumer any consumers reading from my particular queue will get that message okay and so that is what we want okay and if there's any routing keys attached doesn't matter they get ignored so it's ideal for broadcast type messages and this is the method that we're going to use the publisher doesn't care who's listening it's just throwing out there now again depending on how you build out your micro services architecture you might want to use different types of exchange if you do want to maybe do more direct type messaging that's entirely up to you and maybe this fan out exchange isn't maybe the most appropriate use case for something you're building but for what we're doing it's the simplest one it's the one that we want to use it keeps our pipe simple stupid doesn't do anything it just basically delivers messages that's really what we want and we build the intelligence into our endpoints and then finally we come on to this idea of a topic exchange and it's kind of almost 
like a a cross between the last two exchanges basically the publisher routes messages to one or more queues based on a routing key so again you're using this concept of a routing key used for multicast type messaging and yeah it's basically again it implements different types of publisher subscriber type patterns all right so again i don't want to use routing or anything like that i want to just use a kind of you know broadcast type message and that's it so i think all we really need to be aware of is this fanout exchange that's what we're going to be using and so next we're going to stand up our rabbit mq server in kubernetes and then after that we're going to start working back with our services so let's move on to doing that all right so we're going to come on now and deploy our rabbitmq instance to kubernetes now maybe before we come on to writing up or deployment file let's just check in on our architecture kubernetes architecture and we've come pretty far we're almost there in fact yeah this is the last thing we need to do you'll be quite pleased to hear regarding kubernetes because it's been quite a long road to travel so yes we're going to set up a rabbitmq container with our cluster ip and not shown on here we are going to set up a load balancer that will allow us to access the instance externally now we're going to use that to look at the actual container itself but i'm also going to use it to allow our services to connect to it locally from you know doing a net run we're going to connect to it that way from our development environment rather than waiting until we deploy into kubernetes the services that is and seeing if they connect in and stuff we're going to do a bit more testing before we do that unlike what i did with sql server it'll probably save us some time in the long run now the other thing i wanted to say here is we're going to take a quite a simple approach to deploying rabbitmq and kubernetes we're going to set up one instance with no 
persistence now if you go to the rabbitmq website and you look at how they recommend to deploy rabbitmq in kubernetes they take a very different approach now it's all very well documented but it's quite a long detailed process and it's very much aimed at a production type deployment and again for me it's coming into this realm of as a developer i'm probably not going to be wasting too much time waste is probably the wrong word spending too much time on creating clusters of rabbitmq message buses and making sure they have persistence that for me is outside the realm of my personal remit as a developer great if you get time to do that stuff i generally don't so all i really care about is is there a rabbitmq server there yes there is in our case we will have one just be aware this is not a production quality or production class deployment but what we are doing is totally fine we're just calling that out so back over in our k8s project let's get that up we're going to create a new deployment for our rabbitmq instance so right click new file rabbitmq dash depl dot yaml okay and let's get started the api version is apps forward slash v1 and double check everything that you are typing in here because one mistake and it will go critically wrong you know what i mean so kind is deployment and we'll probably put a lot of stuff in there so i want to take that deployment suggestion that looks right specify the metadata which is effectively the name of our deployment and i'm going to call it rabbitmq-depl and when you do kubectl get deployments this is the name that appears all right so back to the top level we're going to specify our spec and tab in and we're going to specify our number of replicas here we're going to specify one then go on to the selector stuff and then tab in for match labels and app and we're going to call this rabbitmq that's all take that off new line bring your cursor underneath the selector i'm going to
specify our template now and more metadata tab in labels and then app and it's going to be exactly the same as what we specified here so let's copy that in like so new line and i'm going to specify a container spec so put the spec underneath the metadata and it's auto generated containers yes which is correct name which is correct so i'm going to call this rabbitmq as well so again these three elements here should all be exactly the same and then we're specifying the image and this time it's rabbitmq colon 3 dash management and now i'm just going to check that we have spelt management correctly because it's a word i frequently spell incorrectly now this is just the image that kubernetes is going to pull down from docker hub the management part relates to a management interface an optional management interface that we can have so we're going to pull that image down and we'll use the management interface just to make sure number one that we can access it and it's working and we'll also see messages dropping onto the message bus as well and then finally for the specification of our image we need to specify two ports this time so ports tab in and the first one we want to specify we'll specify the container port get this right it's one five six seven two that's the management port that we're going to connect in on and we need to give it a name as we have two ports we definitely need to give it a name so i'm going to call it rbmq dash mgmt dash port come out again i'm going to specify our second port and it's container port again and this time it's just five six seven two so same as the management port except it doesn't have the leading one and we need to give it a name as well so i'm going to call it rbmq dash msg dash port for messaging port and this is the port that our clients our services should i say are going to connect in on the other is just for us to access the web interface so that's why we have two ports there that looks good so let's save that off now again we need
to specify a cluster ip for this so new line three dashes to separate the config looking good so what i'm going to do is i'm going to come over to our sql server deployment i'm going to copy the cluster ip definition now obviously we need to change it for rabbitmq but to save me typing all this again we're just going to go through it line by line and change the things that we need to change so api version v1 it's a service we're going to give it a name so we do need to change this so i'm going to change it to rabbitmq cluster ip service coming down to the spec section the type is cluster ip the selector we need to change so we're going to change it to match this value here and copy that down and put it in here fantastic and then we come on to specifying our ports all right so i'm going to rename our first port and this will be our management port so i'm going to give it the same name as this here it doesn't need to be but i think just for clarity i'm going to give it the same name as that the protocol is tcp the port is not 1433 it's going to be one five six seven two and the target port is also going to be one five six seven two all right fantastic so again just make sure you get these numbers correct it's very easy to type numbers in wrong and get them the wrong way round i've done that many times before especially with sql server i've often specified the port as one four four three as opposed to one four three three that's caused me no end of problems in my life so try and avoid stuff like that and then we do need to specify the second port so again we're going to give it a name and this is for our messaging so i'm going to copy that as well paste that in here i'm going to specify the protocol again likewise it's tcp and then we specify the port this time it's 5672 and then the target port is also five six seven two so again i'm just going to highlight this and make sure i get highlights in all the relevant sections that looks pretty good to me i think that all looks fine
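for reference, the deployment and cluster ip built up to this point come out roughly like the following sketch. the resource names and port names here (rabbitmq-depl, rabbitmq-clusterip-srv, rbmq-mgmt-port, rbmq-msg-port) are assumptions based on what was typed in the walkthrough, so adjust them to match whatever you actually named things:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq-depl          # name shown by "kubectl get deployments"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq          # must match the selector above
    spec:
      containers:
        - name: rabbitmq
          image: rabbitmq:3-management   # image with the optional management ui
          ports:
            - containerPort: 15672       # management interface
              name: rbmq-mgmt-port
            - containerPort: 5672        # amqp messaging port our services use
              name: rbmq-msg-port
---
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-clusterip-srv  # acts as the server name inside the cluster
spec:
  type: ClusterIP
  selector:
    app: rabbitmq
  ports:
    - name: rbmq-mgmt-port
      protocol: TCP
      port: 15672
      targetPort: 15672
    - name: rbmq-msg-port
      protocol: TCP
      port: 5672
      targetPort: 5672
```

note that each port entry needs a name here because the service exposes more than one port; with a single port (as in the earlier sql server service) the name can be omitted.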
fantastic so save that off and then the final thing we want to do is create a load balancer so again i'm going to come back over to our sql server instance i'm going to copy the config and we'll go through again line by line just saves me typing it in again saves you typing it in again so again api version v1 it's a service in the metadata we'll change the name to rabbitmq load balancer the type is a load balancer the selector again we want to change to rabbitmq and then we need to specify our ports in the same way so you can see here we didn't actually name our port for sql server simply because we only had one but this time we do want to give it a name because we have two ports so i'm going to specify that first name and i'm just going to give it the same name as this management port up here specify the management port first and copy now again you can see this kubernetes definition stuff can get really quite long-winded and quite repetitive and all that kind of stuff but there's not really any getting around it so this is the last stuff we have to do so take comfort in that unless you enjoy this kind of stuff some people like this type of stuff i personally don't it's so subject to error anyway so for the load balancer we're specifying our ports and that looks good so we've got a name protocol port and a target port and then finally we just need to specify the actual messaging port so we'll give it a name same name as this copy the protocol is tcp the port is 5672 and the target port is five six seven two fantastic so again i'll just highlight this and make sure we get the highlights in all the relevant areas that looks pretty good to me all right so let me take a quick pause there we'll save the file and in the next section we will run our deployment see what we get and then we'll try and connect in and make sure our instance is running so we'll do that next all right now for the moment of truth so back in our k8s project let's
bring up a command line let's just do a quick directory listing so we can see our rabbitmq deployment file let's just do a quick check it's been a while so kubectl get deployments just to make sure that what was going through my head there kubectl get deployments okay everything's running let's get pods just to make sure everything's okay everything's running everything's looking good that's all right fantastic so clear the screen and the usual kubectl apply oh actually i've done that again let's do a listing so i can see the file yep okay so kubectl apply dash f and then the name of our rabbitmq deployment file and this is where we find out if we've done anything wrong all right so that looks good so syntactically it's all passed so we've got a new deployment created we've got two new services created so let's do kubectl get services and we should see that looks horrible on the screen let's just pull this over that's better actually do a control b that's better you can see here we have our two rabbitmq services so we've got a load balancer and we also have our cluster ip which is our internal stuff fantastic that looks good so let's do kubectl get deployments again and we'll see how our deployment's going so it's saying zero of one is ready so let's check our pods so it's doing a container creating so we'll just sit back and have to wait for that so i presume kubernetes is pulling that image down from docker hub so we'll just have to wait and see what happens there and i'm hoping and assuming and expecting that it should run and start okay so it's running okay that's cool but again i have fallen into the mistake of assuming everything's okay and then you do another get pods and you see it's errored out but that looks okay from my perspective so let's just come over to kubernetes and have a quick look pull this over and take a look and we can see that where is it there we go hiding down here that we've got a rabbitmq container it looks like it's
started up quite successfully fantastic so now what we can do is using our load balancer we can check to see whether we can access the management interface so let's just bring over a web browser i'm using firefox going back to firefox actually i like firefox so localhost and we want to go in on our management port so one five six seven two excellent so we have got a management interface now to log in it's guest and guest so we just click login and you can see here you have logged in it's all looking pretty good so that's our rabbitmq service up and running in kubernetes what we're now going to come on to do is revisit our platform service and write a publisher service to publish messages onto the message bus so we'll do that next all right so with our rabbitmq server up and running in kubernetes so available to us in our production kubernetes environment but also via our load balancer available to us via localhost we can now move on to updating our platform service now just so we're clear on what we're doing in the next section or this section i'll just bring this up again so we've got all this up and running we've just stood up a rabbitmq message bus and now we're coming on to working with the publisher component of our platform service the asynchronous client to publish messages onto the message bus so over in our platform service make sure you have the right project opened up the first thing we need to do is add the necessary rabbitmq client packages into our project because we don't have those yet so it's dotnet add package and it's rabbitmq dot client so we'll add that in assuming we've typed it in correctly i don't think i have typed it in correctly dotnet add package rabbitmq dot client there you go again that's why it's always good to have this up because you can actually see it being added and sometimes i don't necessarily pay attention to the output down here it didn't really look like there was an error so yes we've definitely
added the rabbitmq client and excellent so that's the first thing we need to do and then the next thing we need to do is we're going to set up some config so when our platform service starts up it's going to connect to our rabbitmq message bus now it needs to know where to connect to so it's going to be one thing in development and it's going to be another thing in production so we're going to update both our app settings json files here so let's do development first let's put a comma at the end of this and the first attribute we're going to set up is the rabbitmq host you can call this anything you like i'm just going to call it rabbitmqhost and in our development environment that is going to be localhost exactly the same as what we did here and then the second bit of config you want to provide is the rabbitmq port and the port in this case is five six seven two that's the message bus port so five six seven two and we'll save that off and i'll do an alt shift and f to format our file save that again fantastic so i'm going to copy this now and put it into our production file as well now of course this is not what we will be using in production that will not be correct so in order to get what we need to use let's come back to our k8s project into our rabbitmq deployment file and as before we have our cluster ip service here it is the name of our cluster ip service acts as the name of the server effectively so we'll copy that and you've seen that before with how we've used sql server so again making sure you're in the right config file in production you'll paste that in like so so that's our config setup and we'll read that in obviously when the service starts up and we've got two different settings for both production and development which is exactly what we want so next what we're going to move on to is we're going to create a new dto for when a platform gets created in our platform service we then want to create a dto for publishing onto the
message bus and you'll see why in a minute why we have a separate dto for that scenario so we'll move on to that next all right so let's just review the dtos that we have we're still in our platform service and come back to our dtos we have two we have a platform read dto which is the dto we expect back from the service and again it's got some kind of domain specific fields like cost and so on and so forth and also our platform create dto which is the one that we expect when somebody's going to create a platform with us but we're going to create another one and we're going to call it platform publish dto and the whole point of this dto is that it's the dto that we're going to put onto the message bus when a new platform gets created as far as our service is concerned our platform service we're going to publish that information onto the message bus but we're going to publish it in a very specific format we're not going to include the cost for example and we're going to include an additional attribute so it's exactly what dtos are meant for specific scenarios in relation to how we're moving data around our application so all will become clear when we come to maybe starting to use it but as a foundational piece it's the next thing that we need to do so in our dtos folder making sure you're in the platform project right click new file and i'm going to call this platform published dto dot cs and then again we're going to specify the namespace matching our platform service within our dtos folder and it's just a public class called platform published dto so what do we want to publish what information do we want to publish now just make sure i've spelt this correctly plat form okay that's definitely correct platform yeah cool that would have caused me some issues later on so yeah platform published dto what information do we want to publish onto the wire so the first property is going to be our id which is an integer so this will be the
id as far as the platform service is concerned we're going to publish the name of the service and we're also going to publish another string which i'm going to call the event the specific type of event and you'll see how that gets used a bit later on although i'm sure you can probably guess how that's going to be used now with our dto created we now need to come back into auto mapper into our auto mapper profile and create another mapping so the mapping is going to be as follows create map the source this time is going to be a platform read dto and we're going to map it to a platform published dto and the reason for that is ultimately the trigger for our event is going to be in our controller in our platform create action so we create a new platform it gets created we're going to ultimately create a model in our database we're also going to pass back the platform read dto to our consumers but we're also now going to put a platform published dto onto the message bus so we're going to have already a platform read dto we just want to map that onto a platform published dto so we can push it out over the wire onto our message bus so we have that mapping there all right fantastic so i think that's the dto side of things done we're now going to move on to actually creating our asynchronous message bus client so we'll do that in the next section all right so back over in our platform service let's just close down all this stuff and we're going to do a little bit of setup well we've done our setup but we're going to create our interface now so we're basically going to create a message bus client but the way i want to build it is again we want to use dependency injection so we're going to have an interface that specifies the methods that our message bus client can perform and then we're going to build a concrete class to service that interface so back over in our project we're going to create a new top level folder so in the root of the project
new folder and we're going to call this async data services so you'll remember we had our synchronous data services with http for our http command we're now going to create some asynchronous data services for our rabbitmq message bus so in here we're going to create a new file and i'm going to call it i message bus client dot cs and we'll just make sure we spell message bus correctly i message bus client that's cool all right so again this is just our interface so actually it's going to look very simple so namespace and this interface is actually agnostic of rabbitmq it doesn't rely on rabbitmq namespaces or anything like that so the namespace is platform service and it's now a new namespace so async data services it's just a public interface called i message bus client we're going to define a new method this is going to be called publish new platform and it's going to expect as an input the platform published dto fantastic so all you need to do really is bring in the namespace for our dtos so again that's it that is our interface so we're going to expect to use this message bus client and this method is saying that we're going to publish any new platforms onto our message bus fantastic so save that off and then next we're going to come on and we're going to implement a concrete implementation using rabbitmq now fair warning there's a lot of stuff in this class this next bit is going to be quite long and again it's possibly prone to some error so we'll need to take it quite slow quite detailed and we'll go through and build it up step by step so yes we'll move on to our concrete message bus implementation class next all right so back over in async data services right click new file and we're going to call this message bus client so let's start building this monster up it's actually not that big but it's probably yeah it's probably the biggest kind of class we've got in our application so namespace platform service and we're in async data
services fantastic and it's a public class called message bus client and we want to implement i message bus client to implement our interface so control period to implement our interface and of course it's not fully implemented it's just a placeholder but that's okay and i'm going to do a control b just to give us a bit more room now before we actually come on to implementing this method i want to implement a constructor a class constructor and it's within the constructor that we're actually going to set up the connection to our message bus so ctor to generate a constructor for message bus client and into this i want to pass an i configuration object called configuration because i want to get access to our config elements that we just specified in one of the previous sections so let's bring in the namespace for this and again be really careful we don't want auto mapper configuration we want microsoft extensions configuration so make sure you bring in the right namespace that has caught me out before now again i configuration is one of those things that's just available to us all the time throughout our application so we don't need to specifically register it in our startup class or anything like that we just have access to that as a matter of course so we want to then specify a private read-only field to store or to access our configuration so underscore configuration equals configuration we don't need to bring in a namespace we will generate a read-only field again you've seen this stuff a hundred times already now now this is complaining and i'm just going to do a control b so i just want to check to see what the issue is here i think it's because this has got a capital l let's see if that makes any difference that's better small things like that can drive you crazy so i thought maybe the file was named incorrectly but as you could see the constructor name didn't match the class name because of the mismatched capitalized l so just be careful things like that can drive you kind of
nutty all right fantastic so that's the first part of our constructor setup but again we're only a small part of the way there so the next thing we want to do within our constructor again as i say this is where we're going to set up our connection to our message bus we want to set up what's called a connection factory now this is a rabbitmq concept not dissimilar to a http client factory similar sort of idea it allows us to create connections basically to our rabbitmq instance so i'm going to create a variable i'm going to call it factory going to make it equal to a new connection factory and we're going to have to bring in a namespace control period using rabbitmq client so it's a rabbitmq concept all right and then we're going to define what our configuration elements are what our connection stuff is so the hostname is going to be equal to we're going to access our configuration instance and then we're going to specify the name of our configuration element so control b and we're going to come down to either one of our files because we've named them the same thing that's a really important point these have got to be named the same thing so rabbitmqhost and rabbitmqhost the values are of course different so we want to inject in our host value into here so control b and we'll paste that in here that's the first part okay and then we want to specify the port now the port value is an integer now we have stored our configuration element as a string not a problem so let's build it up step by step so we're going to access our configuration and we're going to access the port element we come back over here we'll copy this bring it back here and paste that in here i'm just going to move to the end of this it's getting a bit long so maybe i take a new line maybe that's what we do i'm just going to put a semicolon at the end there now this is not going to work because we are expecting an integer so what we're going to do is we're going to say int dot parse and then we're going
to wrap that in a pair of parentheses and that should stop it erroring out fantastic so let's just save that off and do a quick build just to make sure we're on the right track because as we see we've got a few steps to go in this class and i want to make sure we're not making errors as we move along so that's all looking pretty good at the moment fantastic so we've got this factory connection factory object now that we can actually build connections off of so this next set of stuff we're going to put into a try catch block so try because obviously stuff can go quite wrong here catch and we catch the exception call it ex and then in here console writeline and we'll write out something we'll use string interpolation so dollar sign double quotes put the semicolon at the end and we'll just say could not connect to the message bus and i'll put the exception in here so we know kind of what issue we have and it's more than likely going to be the service isn't available the mq bus isn't available something of that nature all right so we've got a little bit of setup to do now in our try block so the first thing we actually want to set up is a connection so what we're going to do is we're going to create a private read-only field called connection and i'm going to make that equal to factory and i'm going to call the create connection method on that so of course control period we want to generate a private read-only field we put that up there and that's added up at the top so this is our connection now the next thing we want to do and again this is rabbitmq specific is create something called a channel so underscore channel and that's equal to our connection and we just call create model so again the way rabbitmq works is kind of again it's a bit like kubernetes it's multi-layered there's multiple things we have to set up we have to set up a connection we have to set up a channel and then we have to set up an exchange so again just the
way it's architected so again we'll create another private read-only field for our channel so generate read-only field and again make sure it's being added up here the next thing we want to do off of our channel is declare an exchange so we're talking about fanout exchanges direct exchanges this is where we declare our exchange so exchange declare and then in here we'll use exchange we'll give it a name i'm just going to call it trigger as we're triggering messages and then we specify a type this is where we actually put in our exchange type and you can see the different types here we specify it's a fanout exchange okay so we've got a connection we have to create a channel off of our connection and we're now creating or declaring an exchange off of our channel and at this point really we should be pretty much connected to our rabbitmq instance but what i'm also going to do because we might use it at some point on our connection we're going to subscribe to the connection shutdown event because we maybe want to do some stuff in there so we'll subscribe to that event and i'm just going to call the handler rabbit mq underscore connection shutdown now this doesn't exist yet so we'll just come outside of our constructor and down here we will just declare that private void copy the name and we'll get the object sender and event args and again if you've not worked a lot with events before this is basically an event that will be triggered when the connection shuts down and we can put anything in this method here basically when that event triggers i'm not actually going to put too much in it in fact i'm going to put nothing in it for the moment but we may come back to it at some point well i'll tell you what we will do we will put in a console writeline just to see that our connection has been shut down just so we're aware that that may have happened and it helps us debug doesn't it so writeline and we'll just say rabbitmq connection shutdown okay
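pulling together the constructor steps just described, the class at this stage looks roughly like the following sketch. it assumes the rabbitmq dot client package added earlier, the rabbitmqhost and rabbitmqport config keys from the app settings section, and the i message bus client interface and platform published dto created previously; the publish method is still just a placeholder at this point:

```csharp
using System;
using Microsoft.Extensions.Configuration;
using PlatformService.Dtos;          // PlatformPublishedDto created earlier
using RabbitMQ.Client;

namespace PlatformService.AsyncDataServices
{
    public class MessageBusClient : IMessageBusClient
    {
        private readonly IConfiguration _configuration;
        private readonly IConnection _connection;
        private readonly IModel _channel;

        public MessageBusClient(IConfiguration configuration)
        {
            _configuration = configuration;

            // host and port come from appsettings: "RabbitMQHost" / "RabbitMQPort"
            // (the port is stored as a string, hence the int.Parse)
            var factory = new ConnectionFactory()
            {
                HostName = _configuration["RabbitMQHost"],
                Port = int.Parse(_configuration["RabbitMQPort"])
            };

            try
            {
                // connection -> channel -> exchange, in that order
                _connection = factory.CreateConnection();
                _channel = _connection.CreateModel();

                // declare the fanout exchange we'll publish onto
                _channel.ExchangeDeclare(exchange: "trigger", type: ExchangeType.Fanout);

                // subscribe to the shutdown event so we can see if the connection drops
                _connection.ConnectionShutdown += RabbitMQ_ConnectionShutdown;

                Console.WriteLine("--> Connected to Message Bus");
            }
            catch (Exception ex)
            {
                Console.WriteLine($"--> Could not connect to the Message Bus: {ex.Message}");
            }
        }

        public void PublishNewPlatform(PlatformPublishedDto platformPublishedDto)
        {
            // still the generated placeholder at this point in the walkthrough
            throw new NotImplementedException();
        }

        private void RabbitMQ_ConnectionShutdown(object sender, ShutdownEventArgs e)
        {
            Console.WriteLine("--> RabbitMQ Connection Shutdown");
        }
    }
}
```

note the layering here mirrors the explanation above: a connection factory builds the connection, the connection gives us a channel, and the channel is where the exchange gets declared.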
so if you don't want to put this in you don't really have to i'm just putting it in just to be a bit more kind of yeah it gives us a bit more detail if something does shut down but at this point we should be connected to our rabbitmq instance so again i'm going to put console brightline and we'll say connected to message bus and we'll save that off fantastic okay so that's our constructor done that's basically our connection done so the next thing we want to move on to which we will do in a second is basically create our publish new platform messages we'll do that next all right so on to publishing our platform actually implementing our main method here so let's get started so take out the exception that we've got there the first thing i want to do is create a message object and that message object is basically going to be serialized version or platform publish dto so we're going to use json serializer [Applause] i want to bring in a namespace for that so again use system text.json and we're going to call the serialize method and what we're going to see realize we're going to serialize our platform publish dto because we're going to send over the wire and ultimately we're going to convert it to a byte array all right so got that as a string now basically a serialized string and then all this method is going to do is going to check to see if our connection is open and if it's open then we're going to actually send it now i'm going to put the send functionality into a separate method so we'll leave that blank for the moment we'll tell you what we'll put our console.writeline in just to help us with debugging so console.line and we will say rabbit mq connection open sending message dot dot fantastic and if the rabbit mu connection is not open and we'll do a console.writeline saying rabbitmq connection is closed not sending now we could start getting into retries and all that kind of stuff i'm not going to implement that today it's just going to take too long but we'll 
just write out a message for now to say that we were unable to do that. So we will come back here and put a bit of a to-do in: we still have to actually send the message. Now we could put it in here, but I just feel it's a bit cleaner to pull it into a separate method, so I'm going to make a private void method, I'm going to call it SendMessage, and it's just going to expect a string, message. Now the other reason I'm doing it this way is that we are exposing this PublishNewPlatform as a method on our interface, and it's conceivable we're going to have a number of different methods, so publish platform, publish something else, whatever it happens to be. So I want this SendMessage method to be generic, reusable by any other methods, and we may have a range of other methods exposed to do other specific things. That's the other reason I'm pulling it out into its own method: SendMessage is just totally generic. So I want to now create the body of the message that we're going to send, and it's going to be a byte array effectively. So var body equals Encoding, and bring in the namespace System.Text, then we'll say UTF8.GetBytes and we'll pass in our message. Now, I'm going to come back up here and actually call SendMessage from in here, and we're going to pass over the message that we constructed up here, our serialized message, just in case I forget, which I probably will. So that's this method finished, and now we're just going to complete our SendMessage method. All we actually have to do now is, on our channel, publish our message. So we'll call the BasicPublish method, here we are, and we have to specify the exchange, which we named when we declared it up here. What did we call it? Trigger. So this is possibly something you'd want to pull into config as well, we don't really like to have magic strings in our code, but anyway, we'll brush over that for the moment. Let me get rid
of this. The next thing we need to specify, which we are going to ignore, is a routing key. Remember, fanout exchanges ignore the routing key anyway, so it's just going to be an empty string. Okay, these tooltips can be useful but they can also be quite annoying; I find sometimes the tooltips appear when you don't want them and don't appear when you do need them. The next thing is basic properties, and that's just going to be null. Our body is going to be the body that we've declared here, and that is basically it. Okay, so BasicPublish: which exchange we're publishing on, routing keys are irrelevant anyway, basic properties we just set to null, and the body is what we've encoded here. And then at that point we can say Console.WriteLine, and we'll use string interpolation, and we'll say "We have sent" and we'll put in our message here. Fantastic. So that is pretty much it. The only other thing I want to put in is a Dispose method, so when our class dies we clean up anything that's left behind. So public void Dispose, and I'm just going to do a Console.WriteLine because I like to understand when these things are happening, and we will say "MessageBus Disposed". We'll just check to see if the channel is open, and if it is then we will close the channel off, and then we'll close our connection off as well. Fantastic. So I think that's us done, so let's just do a quick dotnet build, and we're all good, fantastic. Now before we move on to actually using this, one last thing I want to do, because it's the thing I tend to forget at the moment, is come over to our Startup class and register for dependency injection our IMessageBusClient interface and the concrete class. So back down in Startup, let's get rid of that, come back up to ConfigureServices, and let me just get rid of that so we get a bit more room, and after our HTTP client let's add a registration for our asynchronous client. This time we're going to add a singleton; the
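Pulling the narration above together, here is a rough sketch of how the publish path, the generic SendMessage method, and the Dispose cleanup might look. Names like PlatformPublishedDto, the "trigger" exchange, and the _connection/_channel fields follow the narration; treat this as an illustration rather than the exact course code.

```csharp
using System;
using System.Text;
using System.Text.Json;
using RabbitMQ.Client;

public void PublishNewPlatform(PlatformPublishedDto platformPublishedDto)
{
    // Serialize the DTO so it can be sent over the wire as bytes
    var message = JsonSerializer.Serialize(platformPublishedDto);

    if (_connection.IsOpen)
    {
        Console.WriteLine("--> RabbitMQ connection open, sending message...");
        SendMessage(message);
    }
    else
    {
        // No retries here; we just report and move on
        Console.WriteLine("--> RabbitMQ connection is closed, not sending");
    }
}

private void SendMessage(string message)
{
    var body = Encoding.UTF8.GetBytes(message);

    // Fanout exchange, so the routing key is ignored (empty string)
    _channel.BasicPublish(exchange: "trigger",
                          routingKey: "",
                          basicProperties: null,
                          body: body);
    Console.WriteLine($"--> We have sent {message}");
}

public void Dispose()
{
    Console.WriteLine("MessageBus Disposed");
    if (_channel.IsOpen)
    {
        _channel.Close();
        _connection.Close();
    }
}
```

The exchange name "trigger" is a magic string here, as noted in the narration; in a fuller implementation you would likely pull it into configuration.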
idea of this is that we're going to use the same connection throughout the rest of our application, so we add a singleton lifetime, and that's basically why we're doing that. We'll add an IMessageBusClient of type MessageBusClient. All right, let's just bring in the namespace, oh, my butter fingers there, so Control period, and yes, using PlatformService.AsyncDataServices, and that should resolve them both, and in fact it does. Fantastic. So that should be our asynchronous data client set up, which is rather exciting. Now what we want to do is use it from within our controller. So when we create a platform, not only are we going to still send our synchronous message to our command service, which we're still going to do, we're now going to drop a message onto our message bus. So we're getting towards asynchronous messaging between our services, and we'll do that in the next section. All right, so we're almost at the point, we are actually at the point, where we can use our message bus client. So we're going to do a Control B, make sure I'm in the right place, and we're going to come back to our controller. Let's do a quick review of our PlatformsController, so Control B again, and let's come up to the top. We have our constructor and we're passing in a repository, fine, our mapper, fine, and also our synchronous data client, our HTTP data client. Down in our POST action result, where we're creating a platform, we're doing all our usual create platform stuff, but you'll remember down here we then actually attempt to send a message to our command service using HTTP. All right, so we're now going to put in another component here: we're going to send a message asynchronously. So back up to our constructor, we're going to add to the things that we're passing in via our constructor using dependency injection. This time we're going to pass an IMessageBusClient, and it's going to be called messageBusClient, and we
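The registration described above is a one-liner in ConfigureServices; a sketch, assuming the interface and class names used in the narration:

```csharp
// In Startup.ConfigureServices, after the HTTP client registration.
// Singleton lifetime so the same RabbitMQ connection is reused for the
// whole lifetime of the application.
services.AddSingleton<IMessageBusClient, MessageBusClient>();
```

Using a singleton here matters: a scoped or transient registration would create (and tear down) a new RabbitMQ connection per request, which is exactly what we want to avoid.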
need to put a comma there, and we should bring in the namespace, so we get here using AsyncDataServices. Fantastic. And then down here we're going to create a private readonly instance of that, so _messageBusClient equals messageBusClient, we've seen this a thousand times before, and we'll generate a private readonly field. Super. And that's why, of course, I'm just laboring the point, we had to make sure we registered this here; that's something I usually forget, but we've done it this time. So back down into our CreatePlatform method. What I might do, although you can obviously read it in the code, is add a little bit of documentation and say "Send Sync Message", and then under here we're going to "Send Async Message". All right, so again we'll put it in a try catch block, because it's something that could conceivably go wrong, and we'll deal with our catch first. We'll catch an Exception ex, and I'll do a Console.WriteLine, I might just copy this here, saves me typing, and we will just say "Could not send asynchronously". There we go. So the first thing I want to do in here is create a published DTO. We created our PlatformPublishedDto, and we want to map into it, we want to create one basically. We'll call it platformPublishedDto, and that's equal to _mapper.Map. What do we want to map to? We want to map to a PlatformPublishedDto. And what is our source? Our source is going to be our platform read DTO that we have up here, hence the mapping that we created. You could create other mappings: rather than mapping from a platform read DTO you could conceivably map from the model, that's a possibility as well, up to you how you want to do it. Now you'll remember, if we just do a Control B and have a look at the DTOs, in our PlatformPublishedDto we had all this stuff here, fantastic, but we also had this property called Event. Now, Event is not available as part of our PlatformReadDto, so we
need to explicitly add that in, and that's just to help us identify at the other end what type of event has been sent out. So on platformPublishedDto we'll find the Event property and we will make that equal to "Platform Published". Now, you would probably want to document the events that this service emits; in fact, within your entire microservices architecture or application you would want some kind of documented library of the events that can be expected to be sent and received, along with the payloads and all that kind of stuff. I'm trying where possible to keep the payloads as standardized as possible. All right, and then finally, moment of truth, we use our message bus client to publish a new platform, and we just send that over the wire. Fantastic. So we are almost at the point where we are going to test this. I'm just going to save that off, and in the next section we are going to test it locally: we'll spin up our command service locally, we'll spin up our platform service locally, we should see it connecting, and we should see it send a message once, and only once, we create platforms using the controller. So we'll do that next and see how we go. All right, so let's run up our platform service, so dotnet run. Okay, let's just take a quick look at the output, shouldn't be anything different from what we've already seen before. So where is it? Yep, it's using the in-memory database, which is correct, our command service endpoint is set, though the command service isn't running yet, and we've seeded data. Now, a point to really note here is that we're going to run up our command service locally. For the asynchronous stuff we've just done, it doesn't actually need to be there, because all that needs to be there is the message bus. The reason we're running it up is so that the synchronous part of our messaging will still work, and we'll actually shut down our
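The async send described above might look roughly like this inside the POST action, after the synchronous HTTP call. The variable names (platformReadDto, _mapper, _messageBusClient) and the exact event string are assumptions based on the narration:

```csharp
// Send Async Message
try
{
    // Map the read DTO into the published DTO, then stamp on the event
    // type so the consumer can identify the message at the other end
    var platformPublishedDto = _mapper.Map<PlatformPublishedDto>(platformReadDto);
    platformPublishedDto.Event = "Platform_Published";
    _messageBusClient.PublishNewPlatform(platformPublishedDto);
}
catch (Exception ex)
{
    // Failure here doesn't fail the request; we just log it
    Console.WriteLine($"--> Could not send asynchronously: {ex.Message}");
}
```

Note the design point made in the narration: the event string set here is part of the service's contract, so in a real system you would document every emitted event and its payload somewhere central.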
command service as well, to see what happens when it's not there and how the synchronous stuff behaves. So we've got our platform service up and running; let's come back to our command service over here, here we are, and let's do a dotnet run. Fantastic. So again it's running on localhost 6000, which is good. Then we'll bring up Insomnia, and we will come back to our local development environment, back to our platform service, and we're going to create a platform. Now I might just move this, because we're slightly running out of screen real estate, but let me just do it in Insomnia and we'll see what we get. So let's send this over. Cool, we got our 201 Created, fantastic. Move back over here, so this is our command service, and you can see we've still got the inbound POST, which is all that we're doing, fantastic. But more interestingly, if you come to our platform service, you can see that we connected to our message bus, which is good, our sync POST was okay, obviously, because we just saw it was received by our other service, and then you can see that the RabbitMQ connection was open and we were sending the message, and then finally we sent the message, we dropped it onto the message bus. So that's all looking really rather good, and you can see here, boom, in our RabbitMQ management screen, yes indeed there was a message placed onto the bus. Now what I'm going to do, you can hear me clicking, is rapidly fire some events in here, and we'll wait for this to catch up, and you'll see how that should manifest. So I've just basically blasted a load of commands onto the message bus. We need to give this a moment to update, maybe I'll do a few more, and you'll see how that compares to what we just blasted on. Holding our breath... there we go, so there's a few more. So that's all looking really rather good. So again, we've dropped stuff asynchronously onto our
message bus, while our command service is only getting synchronous messages via HTTP. We still have to build in the subscriber, we've not done that yet. But let's come back to our command service, you can see here all the inbound POST commands, and I'm going to stop this now, okay, with a Control C. So our synchronous endpoint is now down. Now I'm going to click Send, and watch what happens: we get a spinner, we're waiting, we're waiting. That is because our platform service is attempting to send a command to a synchronous endpoint, and it has to wait for the response to come back, either successful or unsuccessful, and there's, I'm sure, a default timeout window that we will wait for. Again, just to demonstrate, it's waiting. Now you can imagine this whole idea of a dependency chain, where you have potentially hundreds of services all doing that, and how that could stack up and cause massive delays just because that one endpoint is not there. What you will also realize, though, as we come back to our message bus, is that once the synchronous messaging component failed, the asynchronous stuff still went ahead and dropped our asynchronous messages onto the message bus, because that's still there and that's still running. So you can start to see the difference between synchronous messaging, which is problematic if the other end is not there, and just dropping async messages onto the message bus. All right, so that is all looking pretty good. So next we can do one of two things: we could deploy this into our Kubernetes cluster, which I'm probably not going to do; instead, because this was successful, I think what we do next is build up the subscriber in our command service so it can actually receive the events from the message bus. So we'll move on to that in the next section. All right, so let's make sure we are over in our command service, and I'm just going to, before I do
anything else, make sure everything else is shut down, let's just shut that down as well. Okay, so back over in our command service. Now I'm going to give you a bit of a warning here: this is actually by far and away the most complicated part of the whole video, and that is basically allowing us to, number one, subscribe to the message bus, then determine the event on the message bus, and then do something with that event that we've determined. So there are quite a few things that we need to build up here. We're going to have to add two new DTOs to our command service. We're going to have to add the RabbitMQ config to our config file, and we'll do that next, that's quite straightforward, you've seen that already. We're then going to define what's called the event processor, and the event processor is basically going to determine what type of event we got. So again, the RabbitMQ pipe is dumb, we just get events, so we need a way of determining what this event was: was it a platform published event, was it a shipping event, there could be a million different events. So the event processor class is really about determining what this event was and then doing something with it. And then we're going to write the class that actually connects to the message bus and receives events; because it's dependent on this event processor, you have to build them up in the order in which they're used. Okay, so there's a lot here, and conceptually it seems quite simple, but in the implementation there are a couple of really gnarly things to do with the way dependency injection works, service scopes, service lifetimes, and we're going to be doing some interesting stuff around getting access to our repository. There's a lot going on here, so you do need to pay attention. All right, so in this section we're going to do just some simple stuff to begin with: we're going to check that we've got the RabbitMQ client library, simple enough, and we're going to
create our config, and then we'll move on to the next section. So, in our command service, the Commands .csproj file, let me just get rid of this for a second, make sure I'm in here, and yes, we don't have the RabbitMQ package. So dotnet add package, and it's RabbitMQ.Client. All right, simple enough, and there we go, that's added in, very good. And then, like we did with our platform service, we want to add configuration elements to both our production and development appsettings JSON files. So Control B, and what I might do is select our appsettings.Development.json, then come over to our platform service, and I'm going to close all this down, and we'll come to its Development json, and I'm just going to copy these, because it's connecting into exactly the same message bus. Okay, so copy these over, and I'll paste those in. It doesn't like that, I need to paste it inside here, paste it in here, that's better. I'll tidy that up by doing an Alt Shift F to format the file, and save that off. Fantastic. And then we'll select our... ah, we've only got an appsettings.json file here, that's interesting, we don't have a production settings file yet. So let's do that: in the root of our project, right click, New File, and we'll create an appsettings.Production.json file. Then we come back to our platform service, get rid of that, go into its production settings, and we will just copy this out, we only need the two elements, so copy that. Back over in here we need to put in a pair of curly brackets, and this is our config for our command service in production. Okay. All right, so for our command service we've got our development configuration and we have our production configuration, remembering that it's the RabbitMQ ClusterIP service name that we want for production. So that is our
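For reference, the two copied elements would look something like the following. The key names and the Kubernetes service name are assumptions carried over from the earlier platform service config, so adjust to match whatever you used there; the development file points at localhost, while the production file points at the ClusterIP service inside the cluster.

```json
{
  "RabbitMQHost": "localhost",
  "RabbitMQPort": "5672"
}
```

In appsettings.Production.json the host value becomes the RabbitMQ ClusterIP service name (for example "rabbitmq-clusterip-srv") instead of "localhost", with the port unchanged.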
sort of basic config setup. Next we want to move on to our DTOs. All right, so we want to create some DTOs now to support receiving platform event messages in our command service. So let's just close this down, and we are back in our Dtos folder. The first DTO we want to create is PlatformPublishedDto.cs. So again, we're dropping this DTO onto our message bus from our platform service; we want to take it at the command service end, use it, and ultimately add it to our database. So the namespace is CommandsService.Dtos, and it's just a public class called PlatformPublishedDto. And what I might do, just to save a bit of time, is come back to our platform service and copy these properties over into here, as they should be exactly the same. Fantastic. Now, ultimately what we're going to do is receive this, understand that we've got it, and then create a model in our command service database. If we take a quick look at our models, it's the Platform model that we want to create. Obviously the Id is taken care of by our database, so we don't need to worry about that. The Name we are getting here. We're passing over this additional Event property, which our Platform model doesn't care about, we're going to use that for other purposes. But we have specified that we do require this ExternalID property. What that is ultimately going to be is the Id of the platform in our platform service; we're going to use that as a kind of unique global identifier. So we're going to have to map between PlatformPublishedDto and our Platform model. Now, unlike previous times where, in the case of Name, it's just a straight-through exact mapping that AutoMapper can do for us without any additional config, in this instance we are going to have to tell AutoMapper that we want to map Id to ExternalID, and this is where the AutoMapper stuff starts to get a little bit more interesting. So before we do that, though,
let's just create one more DTO, a very small one. New File, and we're going to call it GenericEventDto.cs, and we'll just create a namespace, CommandsService.Dtos, and it's just a public class called GenericEventDto, and it's got one property, a string, just called Event. Now, I'm not going to go into the details of why we have this just yet, we'll come on to that a bit later, so stick a pin in it for the moment. We don't need to use AutoMapper to map this either; it's just something we use a little bit later. But we do need to come back to creating a profile mapping that maps our PlatformPublishedDto to our Platform model, so we'll do that in the next section. All right, so let's come down to our Profiles folder, and in CommandsProfile we want to add a new mapping. We've seen this before: CreateMap, and what is the source? The source is our PlatformPublishedDto, that's what we're going to get over the wire, and we want to map it to our Platform object, which all sounds very reasonable, and we'll close off with a parenthesis. But as I said in the last section, we want to explicitly tell AutoMapper to map the Id of our PlatformPublishedDto to the ExternalID of our destination. Let's do that. The way you do that is, don't put a semicolon here, come on to the next line (if you want to make it look a bit neater you can have it on the same line as well, obviously), and we do a ForMember, and then we specify a lambda expression, dest, so this is where we want to map to. So we say the destination, which is ultimately our Platform object, and it knows that our destination is a Platform object, so we select the ExternalID property. We then select some options, opt goes to opt, and I'm just going to do a Control B to give us a bit more room so we can see what's going on here, and then we want to specify
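The two DTOs described above are small enough to show in full. A sketch, assuming the property set matches the platform service's published DTO (Id and Name are assumptions beyond the Event property explicitly discussed):

```csharp
namespace CommandsService.Dtos
{
    // Mirror of the DTO the platform service drops onto the bus
    public class PlatformPublishedDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Event { get; set; }
    }

    // Used only to peek at the Event field of any incoming message,
    // regardless of what the rest of the payload contains
    public class GenericEventDto
    {
        public string Event { get; set; }
    }
}
```

The GenericEventDto is the key trick: deserializing any message into it discards everything except Event, which is all the event processor needs to classify the message.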
on this option MapFrom, and what are we mapping from? We're mapping from our source, which is our PlatformPublishedDto, and we're mapping from Id. Okay, and we'll terminate that with a semicolon, and everything else should map through okay. But that is our explicit rule saying that at our destination, which is our Platform model, we want the ExternalID to be mapped from the Id of the source. All right, and we'll save that off, so that's our mapping set up. Now, I'm going to preempt something that I want to do a bit later: when we receive this platform published object, after we've done a lot of stuff, we want to put this object into our database, and we want to make sure that we're not duplicating the same platform. So we're going to add another repository method to say, given an external id, do we already have that external id in our list of models, and if so, then we don't add it. It's very similar to the PlatformExists method, just slightly different in that we're using the external id this time. So I'm just going to add that to our repository ahead of time, and then we'll use it a bit later in our event processor class. So we'll do that next. All right, so let's just do a Control B and close this stuff down, and we're going to come back to our repository. We're going to use this a bit later, and I'd rather get it done ahead of time than do it at the point where we need it. So back over in ICommandRepo, we already have this PlatformExists, and that's checking to see whether we actually have a platform with that id in our database. We're going to create a very, very similar method, and we'll call it ExternalPlatformExists; this is just going to be used to check that we're not adding additional platforms into our database when we already have them there. Okay, and so we'll say int
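The ForMember call described above ends up as a single fluent expression in the profile. A sketch, matching the narration:

```csharp
// In CommandsProfile's constructor: map the incoming DTO's Id onto the
// Platform model's ExternalID; members with matching names (e.g. Name)
// map through automatically with no extra config.
CreateMap<PlatformPublishedDto, Platform>()
    .ForMember(dest => dest.ExternalID,
               opt => opt.MapFrom(src => src.Id));
```

Everything AutoMapper can match by name still flows through; ForMember only overrides the one member whose names differ between source and destination.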
externalPlatformId. Fantastic. And we'll get an error now on our concrete class because we've not implemented this. So, ExternalPlatformExists, that looks okay, fine. Then we'll come over here, come up to here, and do a Control period to implement the interface, and that will put it down here somewhere. Where has it gone? There it is, ExternalPlatformExists. Now what I might do is find our PlatformExists method and just copy it, because it's going to be very, very similar. So in here we'll paste in: return _context.Platforms.Any, p goes to p, and this time it's not on the Id, it's the ExternalID equal to the externalPlatformId. All right, so again, similar concept: we're just checking whether we already have it, because we do not want to keep adding more and more platforms if we've already got them, and we're doing that based on the external id. Fantastic, so just a quick change, and now we're ready to move on to our event processor class. All right, so on to event processing. Just to give you a quick explanation of what this is: we've got two main bits still to do, the event listener, which we're going to do later, and this one, the event processor. Upon getting an event, we need to determine what this event is: is it a platform published event, is it something else? Upon determining what the event is, we then want to do something with it, and what we're going to do is add that platform to our database, assuming we don't already have it in there, hence the last section. So that's ultimately what the event processor does. Now, I will call out that we only really have one event that we're interested in. If you had lots more events then, the way I've built this class, you would probably want to decompose it a bit further, but for our purposes I think it's fine. So back in our command
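Spelled out, the repository change above is just an interface member plus a one-line LINQ implementation; a sketch, assuming the _context field and ExternalID property names used throughout:

```csharp
// In ICommandRepo:
bool ExternalPlatformExists(int externalPlatformId);

// In the concrete CommandRepo, alongside PlatformExists:
public bool ExternalPlatformExists(int externalPlatformId)
{
    // True if any stored platform already carries this external id,
    // so the event processor can skip duplicate adds
    return _context.Platforms.Any(p => p.ExternalID == externalPlatformId);
}
```

The only difference from PlatformExists is the column being checked: the externally assigned id from the platform service rather than the command service's own primary key.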
service, let's close everything down, and we're going to create another top level folder called EventProcessing. Now, this next bit actually has quite a lot in it, with a few interesting concepts, most especially around dependency injection, and it can get a little bit gnarly. Anyway, we're going to create an interface first, because as always we're going to use dependency injection. So New File, and this is called IEventProcessor.cs. All right, now this is actually going to look very, very simple, but behind the scenes a lot is going to be going on. So namespace CommandsService.EventProcessing, and then it's just a public interface called IEventProcessor, and it's going to have one method called ProcessEvent, which seems benign enough, and it's going to take a string called message, very, very generic. Ultimately we're going to get this message string from our event listener service, which we've yet to build, and it's just going to be our event as a string. So we're going to take that string and understand what the heck it is, you know, deserialize it, figure out what event it is, and then do something with it. So it's a benign-looking interface, but there's going to be a lot going on behind the scenes. Let's not mess around any more, let's create it. So New File, still in EventProcessing, and we'll call it EventProcessor.cs, and let's do the easy stuff first. So namespace CommandsService.EventProcessing, and it's just a public class called EventProcessor, and it's going to implement IEventProcessor, so as usual let's implement the interface, we'll implement the placeholder, but we're not going to come on to that just yet. Now, the first thing I'm actually going to do is step outside this class, and in here I'm just going to create a very simple enum, and we'll call it EventType, and this is just listing out our, well, event types. So we're going to have PlatformPublished and Undetermined. Okay, and
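Put together, the interface and the enum just described might look like this (a sketch; the enum sits in the same file as EventProcessor but outside the class, per the narration):

```csharp
namespace CommandsService.EventProcessing
{
    public interface IEventProcessor
    {
        // Receives the raw message string from the (yet to be built)
        // event listener and decides what to do with it
        void ProcessEvent(string message);
    }

    // Declared outside the EventProcessor class: the known event kinds,
    // so we avoid switching on magic strings elsewhere
    enum EventType
    {
        PlatformPublished,
        Undetermined
    }
}
```

The enum gives the rest of the class a typed vocabulary for events: the string on the wire is translated into an EventType exactly once, and everything downstream switches on the enum.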
you know, those are the only ones we're dealing with. The reason I'm creating an enum is that I don't really like working with magic strings, so this just gives us a bit more structure around the types of events that we can have, and you'll see how this gets used in a second. Just make sure you create that outside the class. All right, so we're coming on now to our class constructor, and again, coming back to the intention of this class: it's to take a message, determine what it is, and then, if it's a platform published event, add a platform to our database. So immediately you're going to think, well that's cool, that's just constructor dependency injection, that's just injecting a repository into this class. Unfortunately, in this case, no, we cannot do that, and the reason is rather torturous dependency injection reasons. I did kind of brush over this a bit earlier in the course, but when you register your services in ConfigureServices they have what's called a service lifetime: scoped, transient, or singleton. Singleton registered services exist for the lifetime of the application. Scoped services exist, roughly, for every session or request scope, and transient ones are created for every request for them, so they have different lifetimes. Singleton ones last for the entire length of the application. Now, when we come on to creating our listening service, that is going to be registered as a singleton service; it's going to be there for the lifetime of our application, and that service is going to call this service. In order for that to happen, in order for us to inject this event processor service into our listening service, it too has to have a lifetime the same as or greater than the service it's being injected into. So this service is going to have to be a singleton service as well. Fine. If we then try to inject in a repository, and ultimately, further down the food chain, a DbContext, those
services have been registered with a shorter lifetime, and you can't do that, it freaks out. Okay, it's not good practice. So, in a long and roundabout way, we cannot inject a repository into the constructor here; we're going to have to get a reference to it another way, and that's what we're going to do. This will probably become a bit more understandable when we actually go through it. So, the class constructor: instead of injecting a repository, we're going to inject something called an IServiceScopeFactory, and we'll call that scopeFactory. From the use of the term factory you might start to have some idea of what this may allow you to do, or maybe you don't, I don't blame you if you don't. The good news is we can still inject our IMapper in the usual way, that's fine, and we'll bring in the namespaces for this, Microsoft.Extensions.DependencyInjection and AutoMapper. All right, let me actually just take this back up onto one line, I don't like using multiple lines unless I really have to. So again we're going to create some private fields: _scopeFactory equals scopeFactory, and _mapper equals mapper, and then we'll create private readonly fields for these, correct that, and I'll create a readonly field for that one as well. Fantastic. We'll come on to using the scope factory a little bit later, when we actually do have to use a repository, but stick a pin in that for the moment. The next method I'm going to create in here is in support of our publicly exposed ProcessEvent method. It's a private method, and basically it's the method that's going to determine what event we got. Ultimately, what it's going to use, if we come back, do a Control B, and look at our PlatformPublishedDto, we
are passing through this Event attribute here; we're going to look at that and see what it is. And you will remember, maybe you don't, from our platform service, over in our controller: when we generated our asynchronous message we set that Event property to this string here. So we're basically going to look for this string to understand whether it's a platform published message. Back over in our command service, let's get rid of that and come back to our EventProcessor. So this is a private method, it's going to pass back an EventType, the enum we defined here, and we're going to call it DetermineEvent. Into that we're passing the string that gets passed through, ultimately, from our event listener, and we'll call it string notificationMessage. Fantastic. Okay, then I'll do a Console.WriteLine, bring in the namespace for System, and we'll just say something along the lines of "Determining Event". Fantastic. So the first thing we want to do is deserialize this string into an object. I'm going to create a var, call it eventType, and we're going to use our JsonSerializer, and we'll bring the namespace in first before we start to use it, System.Text.Json, and we're going to Deserialize this time. Now, what are we going to deserialize to? Well, to bring this back up, the reason we created this GenericEventDto is that all we really want to pull out from any message, anywhere in our microservices platform, is this Event property here. So when we deserialize, we want to deserialize to that. Now, in days gone by, if you were using the Newtonsoft Json library, you could use what's called dynamic deserialization, which means it kind of does it on the hoof. I didn't like that anyway, because you end up with a non-typed message, which I don't
I don't like that: this approach is typed, using dynamic is non-typed, and dynamic leads you into all sorts of crazy madness, which I don't think is best practice. Long and short of it, we're going to deserialize to our GenericEventDto, and the source is our notificationMessage; this is really just to enable us to get at that Event attribute. We'll bring in the namespace for it (and fix my spelling of "generic" while we're at it; that's much better). So we have our generic event here, and then all we want to do is switch on it: switch on eventType.Event, and then some case statements. I'll come back to our platform service, have a look at what we passed over, copy that event string, and paste it in here as the case. If our Event matches that, we can be confident we have a platform published event, so I'll do a Console.WriteLine saying "Platform Published Event Detected", and we simply return EventType.PlatformPublished. Fantastic, and that's the only switching we're going to do, so we'll round off with a default: Console.WriteLine "Could not determine the event type" (using string interpolation here as well, because I've used it all throughout the application; let's keep it consistent), and we return an EventType, which will be Undetermined rather than "NotPlatformPublished"; that would get very confusing. There we go, fantastic. So again, the whole point of this private method is to ensure we can understand what event came down the wire; ProcessEvent will then look at what we got back and action it. We'll take a quick pause here, because this video has been going on quite long.
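The DetermineEvent logic just described can be sketched like this; the enum member names and the "Platform_Published" event string are assumptions standing in for whatever string the publisher actually set:

```csharp
using System;
using System.Text.Json;

namespace CommandsService.EventProcessing
{
    public enum EventType
    {
        PlatformPublished,
        Undetermined
    }

    // Generic DTO: the only property we care about on ANY inbound
    // message is its Event field.
    public class GenericEventDto
    {
        public string Event { get; set; }
    }

    public static class EventTypeDetector
    {
        public static EventType DetermineEvent(string notificationMessage)
        {
            Console.WriteLine("--> Determining Event");

            // Deserialize just enough of the message to read its Event field.
            var eventType = JsonSerializer.Deserialize<GenericEventDto>(notificationMessage);

            switch (eventType.Event)
            {
                case "Platform_Published": // assumed to match the publisher's string
                    Console.WriteLine("--> Platform Published Event Detected");
                    return EventType.PlatformPublished;
                default:
                    Console.WriteLine("--> Could not determine the event type");
                    return EventType.Undetermined;
            }
        }
    }
}
```

Because the DTO only declares the Event property, any extra fields on the message are simply ignored during deserialization, which is exactly what we want here.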
We'll come back and finish off ProcessEvent and one other method of this class. All right, let's come back to ProcessEvent, and don't forget to save your work, just in case something crazy happens. ProcessEvent is our main publicly exposed method, and all we want to do is create a variable, eventType, equal to DetermineEvent, passing over the message we get here. We've kind of subcontracted the determination of the event to that method, so we now have the result in eventType, and then you simply switch on it: case EventType.PlatformPublished, we want to do something here, then break; and then we'll just put a default in. We don't really need another Console.WriteLine in the default because we already kind of had that from DetermineEvent; it's probably a bit redundant, but we'll leave it like that. So what do we do with a platform published event? We want to add it to our database; that is ultimately what we want to do. Now, going forward I would probably move the classes we use to action our events out of the EventProcessor rather than wrapping them into it, but I have for this video, and I feel in this instance it's totally okay. So, still within our EventProcessor class, we're going to do one more private method: private void AddPlatform, and we're just going to expect in a string. This is effectively our PlatformPublishedDto, but still in string form, so we'll call it platformPublishedMessage. Okay, now this is where we start using our scope factory to actually get a reference to our repository: we start with a using statement, declaring a var called scope, equal to _scopeFactory.CreateScope(). Then, within that scope, we create a variable to hold our repository.
To contain our repository we'll just call it repo, and that's going to be equal to scope.ServiceProvider.GetRequiredService, and the type of service we want to get is ICommandRepo. So we're getting a reference to the repository in this way, as opposed to injecting it through constructor dependency injection, and again it all comes back to the service lifetimes of our repository and of this class. A bit gnarly, but definitely useful; it's getting into the more detailed use cases around dependency injection. The next thing we want to do is create a variable and essentially deserialize our platformPublishedMessage into a proper PlatformPublishedDto: var platformPublishedDto = JsonSerializer.Deserialize<PlatformPublishedDto>(platformPublishedMessage). Again, this is very similar in shape to the mapping syntax we use with AutoMapper, and the System.Text.Json namespace is already there, so that's cool. This will then be our platform published DTO, and finally we want to try to add it to our database, so I'm going to put this in a try/catch block, doing the catch statement first: a Console.WriteLine saying "Could not add Platform to DB" along with the exception message, ex.Message (don't forget the semicolon, which I'm sure you want at this point; we've been doing quite a lot of coding here). Within the try block, what we ultimately want is a Platform model, so there are multiple stages of converting objects: var plat = _mapper.Map<Platform>(platformPublishedDto); bring in the namespace for Platform, and again the semicolon, great. So we've got a model now, and all we need to do is determine whether we actually want to add this platform.
To decide whether to add the platform to our database, that is where our new repository method comes in: if (!repo.ExternalPlatformExists(plat.ExternalID)), and ExternalID should be mapped through, then we will create it; otherwise we'll write a message, since we're going down the path of Console.WriteLines anyway: "Platform already exists...", so we won't bother adding it. Fantastic. Then, in the if branch, we do repo.CreatePlatform(plat), passing in our plat object, and finally repo.SaveChanges(), otherwise it will not get added. And that is that: that is our EventProcessor class. Logically it doesn't sound that complicated, but you saw there were some fairly interesting things going on in there. So let's do a dotnet build; that's all looking okay, but don't jump the gun. The one thing not to forget (Ctrl+B, over in our Startup class) is to register our IEventProcessor interface and our EventProcessor class. After controllers, using our services collection, we are going to AddSingleton this time, and again the reason we have to do that is because we're going to inject this class into our MessageBusSubscriber class: services.AddSingleton<IEventProcessor, EventProcessor>(), bringing in the EventProcessing namespace. All right, fantastic; we'll save that off. What I might just do is run this up, because stuff will often build fine and then fail at runtime if you've done something wrong with dependency injection; doing a dotnet run really proves it. That's all looking good and still running up perfectly for the moment, so that is looking fantastic. So we're almost there.
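The AddPlatform flow can be sketched end to end. This is a self-contained stand-in: a tiny in-memory repo replaces the EF-backed ICommandRepo that the real code resolves via scope.ServiceProvider.GetRequiredService<ICommandRepo>(), and a hand-rolled map replaces the AutoMapper call, so the duplicate check can actually run on its own:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text.Json;

public class PlatformPublishedDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Event { get; set; }
}

public class Platform
{
    public int ExternalID { get; set; }
    public string Name { get; set; }
}

public interface ICommandRepo
{
    bool ExternalPlatformExists(int externalPlatformId);
    void CreatePlatform(Platform plat);
    void SaveChanges();
}

// In-memory stand-in for the EF Core repository.
public class InMemoryCommandRepo : ICommandRepo
{
    public List<Platform> Platforms { get; } = new List<Platform>();

    public bool ExternalPlatformExists(int externalPlatformId) =>
        Platforms.Any(p => p.ExternalID == externalPlatformId);

    public void CreatePlatform(Platform plat) => Platforms.Add(plat);

    public void SaveChanges() { /* no-op for the in-memory stand-in */ }
}

public static class PlatformAdder
{
    public static void AddPlatform(ICommandRepo repo, string platformPublishedMessage)
    {
        var dto = JsonSerializer.Deserialize<PlatformPublishedDto>(platformPublishedMessage);
        try
        {
            // Stand-in for _mapper.Map<Platform>(dto): the source Id
            // becomes the command service's ExternalID.
            var plat = new Platform { ExternalID = dto.Id, Name = dto.Name };

            if (!repo.ExternalPlatformExists(plat.ExternalID))
            {
                repo.CreatePlatform(plat);
                repo.SaveChanges();
                Console.WriteLine("--> Platform added!");
            }
            else
            {
                Console.WriteLine("--> Platform already exists...");
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine($"--> Could not add Platform to DB {ex.Message}");
        }
    }
}
```

Feeding the same message twice exercises the ExternalPlatformExists guard: the second call logs "already exists" rather than inserting a duplicate.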
We're almost there: the next thing we're going to do is create an asynchronous data service in our command service to listen on the message bus; it will then use the event processor to add our platform to the database. We'll do that in the next section. All right, we're getting towards the end of our message bus work, and I do appreciate it's been a long haul. The commentary I'd make here is that this is probably the simplest microservices application you could build, if you're standing it up in something like Kubernetes and using event messaging, and you can see how much work has been involved in getting it just to this stage, and we're still not finished. So: microservices are really hard. They are not an easy thing, and they are not a magic bullet. Now, once you have all this stuff set up and you're humming along, you're just responsible for building one service, you've got all your libraries and packages, and you know how to subscribe, then yes, microservices are quite easy; but the entire setup and the learning curve are big. Not trivial stuff at all. So let's clear all this down, make sure we don't have anything running, and create what I was going to say is the final folder within our command service (it's not; we have one more after that). For now, we're going to create an asynchronous messaging folder for our subscriber: AsyncDataServices. Into that we're just going to create one file, called MessageBusSubscriber.cs. Now you're probably thinking: we've been creating interfaces and concrete classes all along, so why are we going straight for an implementation class? That's a great question.
The reason is this: this service has to run in the background. We're going to start it when our service starts, and it will be continuously listening on our message bus for events. Because of that we have to build it in a slightly different way; we don't use the interface-plus-concrete-class pattern. Again, I did try building it that way, but you get into some very torturous tail chasing around service lifetimes and all that kind of stuff, so this is the simplest way of doing it, and you'll see how it unfolds as we start building it. We're basically building something called a background service, and I'll put some links below that explain a bit more about background services, but really the whole point is that it's a long-running process that runs in the background, ideal for listening for events on a message bus. Now, a lot of this stuff is actually very similar to the client we built in our platform service: this is a subscriber, and much of it mirrors the publisher we've already built, so it should look quite familiar; I'll go through that stuff quite quickly and call out the more interesting bits in detail. So let's get started: namespace CommandsService.AsyncDataServices (we have to type this in because we've not used it before), and it's a public class called MessageBusSubscriber, and it's going to inherit from something called BackgroundService. We'll bring in the namespace for that first: Microsoft.Extensions.Hosting is the one we want. Now you can see straight away it's erroring out; why? If we do a Ctrl+. it offers to implement the abstract class. This is the thing I was talking about, the long-running task: it's called ExecuteAsync, and it's within here that we're going to do the listening.
We're going to listen for events here, ultimately; that's the whole point of one of these background services. But before we come on to that, I'm going to create a constructor as usual: ctor, MessageBusSubscriber. We inject IConfiguration, as we want access to the configuration elements to know where our RabbitMQ instance is, and we'll call that configuration; the other thing we inject is our IEventProcessor, which we took great pains creating in the last video. All right, we'll bring in the namespaces for these: EventProcessing (assuming I've spelt it correctly) and Microsoft.Extensions.Configuration (again, please don't bring in the AutoMapper configuration by mistake). I might just put this on a new line as well; that looks a little bit neater, doesn't it? Okay, so this is our class constructor, and as usual we're going to assign these to private read-only fields, _configuration and _eventProcessor, creating them with Ctrl+. . Now, again, just to labour the point about the event processor: we are injecting it into this class, and this class is also going to run with a singleton lifetime, so it's okay for us to inject our event processor, which is a singleton as well; we could not have injected a repository. All right, so I'm going to create another method. We could put all our setup and connection stuff in the constructor, but I'm going to separate it out into its own method, just to keep things a little bit more decomposed. This is basically using the ConnectionFactory you've seen before: private void InitializeRabbitMQ. I'm very conflicted about using zeds, but I'm going to go with the Americanized spelling even though I don't agree with it.
It's an international audience, so it's probably more appropriate. So, we're just going to make our connection into RabbitMQ from here; you've seen this before, to be honest with you. The first thing we create is a factory, a new ConnectionFactory, bringing in the RabbitMQ.Client namespace so we know what we're dealing with, and then we just pass in the attributes we want. We specify HostName, equal to the host value from _configuration; I'll just go back to our config file (either one, because these should have exactly the same names) and paste the key in as before, and while we're at it I'll copy the port key in preparation for pasting that in as well (and get rid of this so we have a bit more room). Okay, so that's our HostName set, and then we set Port equal to the corresponding _configuration value. Again it will complain, because it's expecting an integer, so as before we wrap it in an int.Parse, and that should be good. So that's our factory. We're then going to create a connection, equal to factory.CreateConnection(); we don't have this field as yet (and I think I've put three n's in "connection"; I'm getting overly excited because we're almost at the end), so Ctrl+. and generate a field, and you can see it's added private IConnection up at the top. All good. Then _channel just equals _connection.CreateModel(), and again we generate that field, which is added up at the top as well. Then we declare our exchange off of our channel: ExchangeDeclare, specifying the exchange name. In fact, we should definitely move this name into config so we're not hard-coding it in here, but we'll leave it for the moment.
It's the same exchange as the one we're publishing to, and the exchange type is Fanout. Then we specify something called the queue name, which is slightly different from what we've already seen: _queueName equals the QueueName from a QueueDeclare call (and we actually have to create that field first). Then we want to bind to that queue: on our channel we do a QueueBind to the queue with _queueName, so it's somewhat dynamic, and I'll just take a new line because there are a few parameters coming up here. We again specify the exchange, which is "trigger", and we do have to specify a routing key, which as I said before is effectively ignored for a fanout exchange (and I'll put it in the right place; that would be good). At this point we are listening on our message bus, so a Console.WriteLine, bringing in the System namespace first: "Listening on the Message Bus..." (bus, not bud). And I'm going to do this anyway, because I did it in the last one and I want to keep it consistent: we subscribe to the ConnectionShutdown event, _connection.ConnectionShutdown += RabbitMQ_ConnectionShutdown, and we'll specify that handler down here: private void RabbitMQ_ConnectionShutdown(object sender, ShutdownEventArgs e), with a Console.WriteLine just to make us aware if the connection has been shut down. Okay, fantastic, and while we're at it, some other housekeeping-type stuff: a public override void Dispose, where I've got a check to see if the channel is open, and if so we close the channel and close the connection as well. Fantastic. Then, coming back up to our constructor, I just want to call this InitializeRabbitMQ method from within it, just before I forget, because that would be quite annoying. All that leaves us to do, really, is put in the code in our ExecuteAsync method.
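Pulling the constructor, InitializeRabbitMQ, shutdown handler, and Dispose together, the subscriber so far might look like this. This is a sketch, not the definitive implementation: it assumes the RabbitMQ.Client and Microsoft.Extensions.Hosting packages are referenced, that the config keys are named "RabbitMQHost" and "RabbitMQPort" (whatever the publisher used), and it includes a one-line stand-in for the IEventProcessor interface built in the previous section:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;
using RabbitMQ.Client;

namespace CommandsService.AsyncDataServices
{
    // Stand-in for the interface defined in the EventProcessing section.
    public interface IEventProcessor { void ProcessEvent(string message); }

    public class MessageBusSubscriber : BackgroundService
    {
        private readonly IConfiguration _configuration;
        private readonly IEventProcessor _eventProcessor;
        private IConnection _connection;
        private IModel _channel;
        private string _queueName;

        public MessageBusSubscriber(IConfiguration configuration, IEventProcessor eventProcessor)
        {
            _configuration = configuration;
            _eventProcessor = eventProcessor;
            InitializeRabbitMQ();
        }

        private void InitializeRabbitMQ()
        {
            var factory = new ConnectionFactory()
            {
                HostName = _configuration["RabbitMQHost"],          // assumed key name
                Port = int.Parse(_configuration["RabbitMQPort"])    // assumed key name
            };
            _connection = factory.CreateConnection();
            _channel = _connection.CreateModel();

            // Fanout exchange: every bound queue receives a copy of each message.
            _channel.ExchangeDeclare(exchange: "trigger", type: ExchangeType.Fanout);

            // Server-named queue bound to the exchange; the routing key is
            // ignored for fanout exchanges.
            _queueName = _channel.QueueDeclare().QueueName;
            _channel.QueueBind(queue: _queueName, exchange: "trigger", routingKey: "");

            Console.WriteLine("--> Listening on the Message Bus...");
            _connection.ConnectionShutdown += RabbitMQ_ConnectionShutdown;
        }

        private void RabbitMQ_ConnectionShutdown(object sender, ShutdownEventArgs e)
        {
            Console.WriteLine("--> Connection Shutdown");
        }

        protected override Task ExecuteAsync(CancellationToken stoppingToken)
        {
            // Filled in in the next section.
            return Task.CompletedTask;
        }

        public override void Dispose()
        {
            if (_channel.IsOpen)
            {
                _channel.Close();
                _connection.Close();
            }
            base.Dispose();
        }
    }
}
```

Note the server-named queue from QueueDeclare(): each subscriber instance gets its own queue, which is what makes the fanout publish reach every service that binds.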
I'm going to take a quick pause, since this video has been a little bit long, and then come back and finish this off; then we'll be ready to test asynchronous messaging end to end, locally at least. We do then have to deploy up to Kubernetes, but we'll do local testing first. So, next section, we'll finish off the ExecuteAsync method. All right, let's come on to finishing off our ExecuteAsync method, and let's just get rid of this for a moment. There's some kind of interesting stuff in here, all really based around this background service concept. The first thing is the stopping token: we call ThrowIfCancellationRequested on it, so we can request that this method stops by passing in that token. Next, some RabbitMQ stuff: we want to declare a consumer. Just to reiterate, we've kind of created a connection, but now we want to actually start consuming events off our message bus. So: var consumer = new EventingBasicConsumer, bringing in the namespace, which is RabbitMQ.Client.Events, and we pass in the channel that we declared as part of our connection. Then here's the magic part that you've really been waiting for: consumer.Received, and we subscribe to that event with a lambda function, passing in (ModuleHandle, ea); you'll see how these are used in a minute. I'm going to do a Console.WriteLine saying "Event Received!"; basically, if this triggers, it means we have an event on the message bus that we've been listening on. Fantastic. Then I declare another variable called body, which is the body of the message, equal to ea.Body, from the argument we got here. All right, and then we want to convert this, effectively, to a string.
It's a byte array at the moment, so I'm going to call the string notificationMessage and set it equal to Encoding.UTF8.GetString, bringing in the System.Text namespace: GetString this time, not GetBytes, so we're going the other way; we pass in the body, calling the ToArray method on it. Then we make use of our event processor from the previous section and ask it to ProcessEvent for us, passing in the notificationMessage we've just converted. That will determine what the event is, and if it's a platform published event it will add it to our database. And don't forget to put a semicolon here; easily, easily done. Then, since we want to keep on consuming, on the channel we say BasicConsume, specifying the queue, which is _queueName, then autoAck: true (this isn't really something I covered, but you can also acknowledge messages manually; we'll stay with true, it doesn't really have any impact on what we're doing), and then consumer: consumer. Finally we need to return Task.CompletedTask; this is again really to do with the long-running task execution, so we return that at the end and save it off. Now, just checking through this, I think that is everything we needed to do: InitializeRabbitMQ creates our connection, effectively; ExecuteAsync is our long-running task, just waiting and listening for events; a shutdown handler is there if we want it; and our Dispose method is there as well. That is all looking good, so let's do a dotnet build. Perfect. There's one last thing we need to do: register this for dependency injection. So let's come back (Ctrl+B) to our Startup class, and I'll just get rid of this for the moment. After our controllers we're going to add it, and the way you add it is where it gets quite interesting.
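The finished ExecuteAsync might look like this, shown as a fragment of the subscriber class (it relies on the _channel, _queueName, and _eventProcessor fields set up earlier; the ModuleHandle parameter name is just what gets typed in the course, it's an ordinary lambda parameter for the event sender):

```csharp
// Fragment of MessageBusSubscriber; assumes:
// using System; using System.Text;
// using System.Threading; using System.Threading.Tasks;
// using RabbitMQ.Client; using RabbitMQ.Client.Events;
protected override Task ExecuteAsync(CancellationToken stoppingToken)
{
    // Allow the host to request that this long-running task stop.
    stoppingToken.ThrowIfCancellationRequested();

    var consumer = new EventingBasicConsumer(_channel);

    consumer.Received += (ModuleHandle, ea) =>
    {
        Console.WriteLine("--> Event Received!");

        // The payload arrives as bytes; convert it back to a string
        // before handing it to the event processor.
        var body = ea.Body;
        var notificationMessage = Encoding.UTF8.GetString(body.ToArray());

        _eventProcessor.ProcessEvent(notificationMessage);
    };

    // autoAck: true means messages are acknowledged automatically on delivery.
    _channel.BasicConsume(queue: _queueName, autoAck: true, consumer: consumer);

    return Task.CompletedTask;
}
```

Returning Task.CompletedTask is fine here because the real long-running work is event-driven: the consumer callback keeps firing on the channel's own thread for as long as the connection stays open.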
On services, we add what's called a hosted service, and we just pass in the name of our asynchronous listener: services.AddHostedService<MessageBusSubscriber>(), bringing that namespace in and rounding it out with a pair of parentheses, and that should be good. All right, very, very exciting: unless I've missed something, that should be us ready to test locally. We'll run up our platform service and this service, and we should see it connect to what it thinks is a local version of RabbitMQ and start listening; then we'll issue a platform create on our platform service, and we should see the event trickle down and platforms actually being added to our command service, hence completing the chain of pretty much everything we've been working towards this entire video. I'm getting quite excited; we'll do that in the next video though. Okay, so, the moment of truth, and I can tell you I'm genuinely, genuinely nervous, because there's so much riding on this. First of all, over on our platform service, let's clear the screen and do a dotnet build, then a dotnet run. And just double-check (although we've been doing this throughout this course) that the platform service is running on port 5000, or anything you like really; just make sure it's running on a different port from our command service. The platform service we already knew was working, so no surprises there; it's really about the command service now. Fingers crossed: does it build? I will be so disappointed if this doesn't work. It's building, so let's see what happens when we run it. Straight away I'm almost crying with joy: this is the thing I was really wanting to see, "Listening on the Message Bus...". By registering our background service in ConfigureServices we are starting that long-running task, and it's now listening for events. That's really good news.
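The two Startup registrations from this section, side by side; a minimal sketch of the relevant part of ConfigureServices:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();

    // Singleton, because the (also singleton) MessageBusSubscriber injects it.
    services.AddSingleton<IEventProcessor, EventProcessor>();

    // Registers the BackgroundService so its ExecuteAsync is started
    // (and stopped) along with the host.
    services.AddHostedService<MessageBusSubscriber>();
}
```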
Now, what I want to do is create a platform on the platform service using our friend Insomnia. I'm going to do this off screen, because I want to capture the moment and see what we get down here in our command service. So, in three, two, one: I'm going to create a platform in our platform service, and we should get the event down here on our command service... there we go, fantastic. The first thing we get is our inbound POST, which we were getting anyway; that's our synchronous message. The next thing we get is "Event Received!", so straight away that's cool: events are coming down. Then we're determining what event we got, and we determined that our platform published event was detected. Now, I can't remember what other logging we put in our event processing class, so let me just come over to EventProcessing and check: we determine the event, which it's doing here; "Platform Published Event Detected", which, yes, we did get, that's cool; and we returned that; and then... ah, there we go: we forgot to put in the call to AddPlatform. That's a classic one. Maybe that's a good thing, because everything else has worked correctly; we just haven't wired in our AddPlatform function. So let's kill this; I'm happy so far, but I also want to put in a Console.WriteLine for when we add a platform, saying "Platform added!". Okay, so let's try that again: a dotnet run, and in three, two, one, I'm going to create another event... and don't tell me. I didn't put the AddPlatform call in again, did I? I forgot to do that again as well. Oh my god, I'm losing my mind. You must be watching this and going, "what is this guy on?" It's been a very long video, so give me a break. So, into AddPlatform we pass over the message.
We pass the message over to AddPlatform; oh my goodness. All right, third time lucky, as they say: dotnet run, looking good, and in three, two, one... that's what we want to see: platform published event detected, platform added. That's looking pretty good, so I'm quite happy now. Let's come over to our Insomnia session, back to our command service, and see if we can get any platforms on that service; and indeed we do, we get Docker, fantastic. Docker's got an ID of 6, so let's copy that and see if we can create a command for the platform: put 6 in here (that's Docker), and for the command let's see... "Push the docker container to hub", and the command line will just be "docker push" plus the name of the container. I've spelt it wrong; I don't care at this point. Let's send that over, and we get a 201 Created, so we have... oh, this is good, I'm so happy at this point in time; it's been a long, long video. We have created a command resource with a platform ID of 6, and if we do a "get commands for platform" request and send that over, there we go: we're requesting all the commands we have for our platform number 6, and it looks like that is working. So, if you've been following along this far and you've got the same result as me, congratulations. If you didn't get the same result, then again, we've been doing this step by step, so you should have been checking as you've been going along, and I'm sure you'll be able to find out what the issue is; the things I would personally check first are the config settings, and then just check your code against the code I've got up on GitHub. So that is really good; I'm very relieved. The last part of this section is going to be packaging up both our services and pushing them up to Kubernetes, and then we'll do this test in Kubernetes, with a bit more exhaustive testing to make sure everything's working.
At this stage it's looking pretty good: we're now in a position where something added to platforms is event-driven down to the command service, which now holds the platform from that event. But you'll be saying: wait a minute, our platform service already had three platforms in it, and all we got was the new platform we added as part of our event-driven model. That's true, and we're going to come on and solve that in the last section, once we've deployed to Kubernetes; to do that, we're going to use gRPC. All right, so just before we move on to that last section on gRPC, we do want to rebuild both our services, rebuild their images, and push them up to Docker Hub. I'm going to begin with the platform service: make sure you're in the right folder (a quick directory listing to make sure you can see the Dockerfile), and then as usual it's docker build -t, then your Docker Hub ID (binarythistle in my case), then the name of the service, which in this case is platformservice; don't forget the build context. We will build that up, and then push it up to Docker Hub: I'm just going to take out the -t and the build and put push instead, with the same image name. Okay, brilliant. So while that's pushing up, let's come over to our command service: a quick directory listing to make sure you're in the command service folder, which we are, and we'll do a docker build -t with the name commandservice (and just note, spelling it correctly this time, that this is singular: the name of our project is CommandsService, but our image name here is just commandservice). We'll rebuild that, which looks good, and then we will push that up as well. Pushing will take a few minutes, so we'll give it a couple of minutes and come back when that's happened, and then we'll restart both services. Okay, so our command service has been rebuilt, and I'll just pop over to platform.
Both images are now pushed, so a quick check on Docker Hub just to make sure that they have been updated: I'll just bring that over and refresh, and both of these were updated, yes, a few seconds ago and one minute ago, so that is looking pretty good. Fantastic. What we want to do next is restart both our deployments. Clear the screen and do a kubectl get deployments; we already have the deployments, which is why we're not doing a kubectl apply but a rollout restart. And again, if we did a kubectl apply with the deployment file, nothing would happen, because the deployment file hasn't changed (I've said that a few times already). So let's just redeploy the platform service first and do a test on that: kubectl rollout restart deployment, naming our platforms deployment; copy that, paste it in, and it says it's restarted. Clear the screen and do a kubectl get pods just to see: we've got a pod in the ContainerCreating state, which is fine, so we'll just have to wait for that to create itself. Then what we'll do is create a platform using Insomnia and see if it drops an event onto the message bus, just to make sure that part of the equation is working, and then we'll redeploy our command service to check that it receives events successfully. I just want to do one leg at a time rather than trying to do everything at the same time. Cool, so that's all running now; that's looking pretty good. If we come over to the Kubernetes logs and do a sort by started time, we should see our platform service at the top here: using SQL Server, applying migrations, we already have data; that's all looking fine. Good. So we'll then pull over the RabbitMQ management console so we can take a look at messages going onto the message bus, hopefully, and then the final thing I'll do is just use Insomnia to create a new platform.
To create a platform on our platform service, let me just bring Insomnia onto the screen; the screen's getting a little crowded, but that's okay. So here's our K8s environment, here's our platform service, and I want to just see what platforms we have in here. Now, I did a few while I've been testing offline, so we've got a few more platforms in here; that's okay, I want to create a new one. Let's create one called Consul (I think that's how it's pronounced): it's a service mesh application from the same people who make Terraform. That's all good, so I'm going to click Send, and we've got a 201 Created; let me just get all platforms to make sure that was created. Yep, that was created, good. And over here we were successful in sending a synchronous POST to our command service; we then move on to the asynchronous stuff: the RabbitMQ connection is open, we're sending the message, and we actually did send a message with our attributes onto the message bus. We can also see down on the message bus itself that, yes indeed, a message was sent down, so that's excellent: that's our platform service working in production. So I think all we need to do now is just redeploy our command service and see if it receives events; let's do that now. Now, you can do a restart in any terminal, it doesn't have to be within any particular project, so I'm just going to clear the screen, press the up arrow to get my deployments command back, and we want to restart the commands deployment now: up arrow again, replace "platforms" with "commands", and that should restart our commands deployment. Okay, and we'll do a get pods, and we'll see we've got our command service in the ContainerCreating state, so we'll just wait for that to create. Then we'll run a very similar test, and hopefully we should see not only events going onto the event bus but the event being received in production, and then we can also query the service.
service as well in production so i'm just going to do a quick check to see how our pods are getting on yep the other one is terminating which is good there we go so got all pods all four pods running that looks good so what i might do is come back over into kubernetes let's bring that onto the screen there we go and the problem here is i have two screens and they're running at different sort of magnification so the rendering can be a little bit weird anyway let's order by start time so again command service is the newest one so that's really good news you can see it's listening on the message bus which is excellent so it started up and it's listening so what i might do is just come back over to insomnia and we'll create another platform so let's create another one of the hashicorp applications um they do something called vagrant [Applause] as well so let me just move this down a bit click send there we go looking pretty good so we got an inbound post from our synchronous service which should be working anyway we also received an event on the event bus we determined the event we established it was a platform published event and we added a platform to our in-memory database so that all looks like it's working exactly as we would expect and you can see here that we had that event placed onto the message bus so looking pretty good so the only thing that we should really do is add um add some api calls for our command server so we don't actually have any yet so in our k8 folder i'm going to create another folder and i'm going to say commands service nginx and we want to let's see what commands we have let's get platforms that's the one we really want to check so let's do actually we can probably copy it from down here so uh get all platforms let's duplicate that and we'll move it into a bit further up move it into here and all we need to do is change the uri and again we want to make sure that we put the correct route in or copy the wrong thing i think this must be very 
painful to watch let's copy that let's paste it back in here okay and it's api forward slash c get platforms there we go so we have our one new platform created in our command service and of course we don't have the other commands yet because we're only working off an event-based uh model at the moment but we will come on we will rectify that with grpc we'll use that to pull down any other commands that way but that looks pretty good so maybe what we should do is add a couple of other endpoints in here let's create a command so i will come down here and duplicate this create command for platform and we'll just move up to here and it will be the same i don't know actually won't be the same it'll be slightly different let's copy this let's paste this in here and let's just check what this platform was eight so if we come over here we want to put in eight and then commands and that should create a command for that particular and we'll just leave the the same body and it doesn't really matter but we have created a command there for that particular platform good and we'll just duplicate one more request let's get all commands for the platform let's just duplicate that just to make sure that's working and we'll move that up to here as well and move it into the right place so again we'll come over here we'll copy this url and we'll paste it into here and i think that's a get request so we're getting all commands for this particular platform we go and we've got our one command that we just created so that is all looking pretty good um i think we can draw a line under kubernetes stuff really well yeah we can draw a line under the kubernetes stuff we don't have to create anything else in there we can draw a line under event driven stuff we are pretty much done with that now obviously we have just started using an event-driven model we now have the fabric set up to send events both way well you'd have to kind of put the client in and the subscriber in each class if you wanted 
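For reference, the three Insomnia requests we just set up against the command service look roughly like this. The host placeholder stands in for whatever address your cluster ingress answers on (an assumption — it isn't named in this part of the video), and platform id 8 is just the id from my demo data:

```
GET   http://<ingress-host>/api/c/platforms              -> all platforms the command service knows about
POST  http://<ingress-host>/api/c/platforms/8/commands   -> create a command for platform 8
GET   http://<ingress-host>/api/c/platforms/8/commands   -> all commands for platform 8
```

The /c/ segment in the route is what distinguishes the command service's endpoints from the platform service's behind the single NGINX entry point.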
Well, strictly speaking you'd have to put both a client and a subscriber into each service if you wanted true two-way communication, but we've got basically everything we need to work with event-driven models, so that's great.

The last thing we want to come on to is moving back to synchronous messaging, to address one final problem: yes, every time a new platform is created we push it down to our command service, but we still have the issue that the command service doesn't have all the platforms. We're going to solve that in the next section with gRPC.

All right, so we move on to the final section of the video, and we're going to be talking about gRPC — and not just talking about it, we're going to develop some gRPC code. But before we do, as it says on the screen, let's discuss what it is and why you would use it. First of all, gRPC is a remote procedure call framework created by Google, but like Kubernetes it's now open source. It uses the HTTP/2 protocol to transport binary messages — and that's quite important: it is binary messaging — and it's designed to be very lightweight and highly performant. I've also put in brackets here, because it's something we'll discuss when we come to implement it, that it usually requires the use of HTTPS — TLS — as part of the connection. Now, we are not using HTTPS/TLS internally within our Kubernetes cluster, so we'll have to do a little bit of a workaround to get gRPC working, because it usually relies on that.

gRPC is focused on high performance, and it relies on something called protocol buffers, or protobuf, which effectively defines the contract between the endpoints. You write a protobuf file that defines the service and its inputs and outputs, and this file exists at both endpoints — both the client and the server — and it really forms the contract that both parties have with each other. As you'd expect, there's multi-language support: we're obviously using C#, but if you have a C# client and a Ruby service they can talk together using gRPC, and there's support for most of the main languages out there. It's quite frequently used as a method of service-to-service communication — you wouldn't typically have a client web browser using gRPC to talk to a service, just given the complexity of HTTP/2 and gRPC itself — but note that it is still a synchronous messaging methodology.

Here's the diagram. We're not implementing this exact setup, but as an example we have a C# client on the left with the proto file, and a Ruby service on the right with the same proto file definition. They talk via HTTP/2; the client has a gRPC stub on the left, and the service runs a gRPC server on the right. One thing about gRPC, when we come to use it in our applications, is that there is a lot of generated code: we write a proto file, which is actually relatively simple and very small, and as part of the build process the gRPC library generates a lot of code in the background that allows it all to work; we then work with that generated code to make calls and receive requests. We'll see how that works in the next section.

Just before we move on to the code, I want to come back to our solution architecture and round it out, since we've been checking in with it as we've moved along. We're basically at this stage now: both services are deployed, we have HTTP communication between the two, and we have publisher/subscriber event-driven messaging between the two, both in development and in production. Now, finally, we want to introduce gRPC, and our use case is this: when the command service starts, it uses gRPC to reach out to the platform service and retrieve all the platforms the platform service has. That gets around the issue that, at the moment, the command service is only aware of the platforms it received via events. With this last gRPC piece, the command service reaches out at startup, pulls down all the platforms, and loads the ones it doesn't already have. That's our use case, and that's what we're going to implement in the next section.

All right, so we'll move on to implementing our gRPC stuff, and I want to be clear — because maybe I've not been totally clear — about the server and client model here. In our scenario, the command service is the client: it reaches out to the platform service, which in this scenario is considered the gRPC server. At startup, the command service reaches out and pulls down all the platforms the platform service has. So the server is the platform service, and the client is the command service.

Now, I also realize I lied to you in the last section when I said we'd finished all our Kubernetes stuff. That was in fact a lie — not an intentional one, an unintentional one: I simply forgot there was one small piece of config left to do. It's very, very small — really just an addition to some existing config — and for those of you keen of eye, you can see on screen the addition we still have to make: our ClusterIP service needs to provide a port, which I've numbered 666. What is that port? I'm just using the number 666 — it can be absolutely anything you like; I'm using it as a bit of a dark-humored joke, mainly because it took me a while to get this stuff working.

What do we need it for? I mentioned that gRPC uses HTTP/2 with TLS — effectively HTTPS — and we don't have HTTPS running in our cluster. It was one of the things I decided to leave out, because, number one, there's quite a lot involved in getting it up and running in a cluster, and number two, there are plenty of opinions that you don't really need HTTPS inside the cluster, since it's effectively an internal domain: you terminate TLS at the NGINX ingress gateway. So from your web browsers up to the gateway — yes, use HTTPS, absolutely, 100%. Inside the cluster there are differing opinions, some even suggesting you don't need it at all, which is why I decided to leave it out; there's a lot of work involved to set it up, even generating development certificates.

With that said, that leaves us with a scenario where we're not using HTTPS, and therefore not using TLS, so gRPC needs a little bit of extra config in order for us to tell it to use the HTTP/2 protocol without TLS. To do that, we configure the Kestrel web server running on the platform service — our server — and then we tell our client endpoint to connect on the specific port we allocate (666 in this case) that runs HTTP/2 without TLS. When a client connects to a gRPC server there's a negotiation, and the two sides will use the highest level of security available on the connection — so if we don't explicitly say there is no TLS, it will attempt to use TLS as the connectivity method, and it will fail, because we just don't have it set up. It's a little bit of a workaround, but in some ways it isn't: if you don't want HTTPS in your cluster, you'd have to do something like this anyway. So: a very small bit of config in Kubernetes, which we'll do next, and then we'll configure the Kestrel web server
via config, and then we're really ready to move on properly to the gRPC stuff. This is a bit of networking that caught me out, so let's get it done and out of the way.

Back over in our K8s project, in our platforms deployment YAML file, all we need to do is add another port to our ClusterIP service — the actual deployment section does not change. Underneath ports, add another array element: give it a name (you do need to give it a name), which I'll call platformgrpc; give it a protocol, which is just TCP; and give it a port. Again, this port number can be anything you like — if you don't feel comfortable using 666, for various reasons, you don't need to; use anything else. I'm just using it given my sense of humor. The targetPort is also 666. That's it — save that off.

Then we need to apply this deployment file. It's not a rolling restart this time, because the actual file has changed, so we apply it again — I keep laboring that point of difference. A quick kubectl get services shows our platforms ClusterIP service is currently just running on port 80, and we want to additionally add port 666. Do a directory listing to get the name of the file, then kubectl apply -f with that file name — make sure you pick the right one. You'll see that the deployment itself is unchanged, which is correct, since we didn't change any of that, but the ClusterIP service shows a "configured" status. Run kubectl get services one more time and you'll see the platforms ClusterIP service now has the additional port it's listening on. Fantastic.
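As a sketch, the Service section of the deployment YAML ends up looking like the following. The service name, selector, and the existing port-80 entry are assumptions based on how the course set them up earlier — only the second ports entry is new:

```yaml
# platforms-depl.yaml (Service section only -- the Deployment above it is unchanged)
apiVersion: v1
kind: Service
metadata:
  name: platforms-clusterip-srv
spec:
  type: ClusterIP
  selector:
    app: platformservice
  ports:
    - name: platformservice   # existing HTTP port
      protocol: TCP
      port: 80
      targetPort: 80
    - name: platformgrpc      # new: gRPC over HTTP/2 without TLS
      protocol: TCP
      port: 666
      targetPort: 666
```

Because the file itself changed, this is re-applied with kubectl apply -f rather than a rollout restart.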
Next we'll move over into our platform service project and do our Kestrel configuration, and then we're ready to move on to actually using gRPC.

So, back in our platform service — and we only need to do this in the platform service, the server side; we don't need to do it in the command service, the client. Another thing worth mentioning: this configuration is for when we're running in Kubernetes. When we're running locally with dotnet run, HTTP is supported, so you don't have to worry about this at all — it will just work.

I'll close down all these files, since they're taking up room, and come into the appsettings.Production.json file — again, make sure you're in the platform service's appsettings.Production.json. We want to add some configuration under our RabbitMQ settings, so add a comma and nest some attributes — as you can see, VS Code kindly provides some defaults. It's the Kestrel web server we want to work with, so bring that onto a new line, then specify Endpoints (it's actually giving us some help here). Within Endpoints we specify a label, effectively: the first one is going to be "Grpc", then some more curly brackets, which I'll bring onto new lines — my eyes cross slightly when things aren't indented properly. Under Grpc we specify Protocols, and for this one it's Http2: that's what allows us to use HTTP/2 with no TLS. Then we provide a Url — and it's a production URL, basically the address the server is going to be running on. It's going to be http, not https, so make sure it's http. And what is the server we're running on? Come back into the K8s project and find the name of our ClusterIP service: in production, that is the address the server runs on. Copy it back to the platform service project, paste it in, and put :666 on the end as the port.

Just to go through that again: we're configuring the Kestrel web server with a number of endpoints; we're calling this endpoint Grpc (the label can be anything), and inside its config we specify the HTTP/2 protocol and the specific URL we want. Now we also need to explicitly specify the web API endpoint — the HTTP endpoint for our HTTP services, which has been running on port 80 thus far. That's all we're doing: specifying that it runs on port 80. So again Protocols, this time Http1, and then the Url as before — copy the same address up to the colon, paste it in, and it's port 80 this time.

That all looks good. I'll tidy up the commas to make it consistent — in fact, Alt+Shift+F auto-formats it for us, instead of me clicking about. So we've got two endpoints: one is basically what we already had, and the other is the new one Kestrel will listen on, where we tell it to use HTTP/2 without TLS. Save that off — and remember, this only applies in production. I do just want to check launchSettings.json: yes, we're still using https on port 5001 in local development mode with dotnet run, which is fine. And while I remember, let's double-check the command service's Properties/launchSettings.json as well — again, we're running https there with the local development certificate, so that is all fine.
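Putting that together, the Kestrel block added to appsettings.Production.json looks roughly like this. The service hostname is an assumption — it must match whatever your ClusterIP service is actually named (platforms-clusterip-srv here) — and the endpoint labels are arbitrary:

```json
{
  "Kestrel": {
    "Endpoints": {
      "Grpc": {
        "Protocols": "Http2",
        "Url": "http://platforms-clusterip-srv:666"
      },
      "WebApi": {
        "Protocols": "Http1",
        "Url": "http://platforms-clusterip-srv:80"
      }
    }
  }
}
```

The Grpc endpoint is the one that lets clients speak HTTP/2 without TLS; the WebApi endpoint just makes the existing port-80 HTTP behavior explicit, since defining any endpoint overrides Kestrel's defaults.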
All right — a little bit of faffing around there; that's the networking that caught me out when I started deploying this to production, but we should be good to go now. In the next section we'll move on to the more interesting, proper gRPC stuff: we'll specify our proto file, which basically defines what the service is and what the shape of the data is, and then a build step will generate a lot of code in the background — code we don't alter ourselves, auto-generated from that proto file.

Before we even get started with coding, though, we need to add some package references to both services. Let's come back to our platform service first, as it's slightly simpler, believe it or not — it is the server in this scenario, but we only have to add one package. Bring up the command line (and Ctrl+B to hide the sidebar) and run dotnet add package Grpc.AspNetCore — that's the only package our server needs.

Then, while we remember — otherwise I will probably forget — let's move over to our command service project, which is acting as the client. Perhaps surprisingly, we need quite a few more package references here to support gRPC from the client perspective, so we'll add them one by one: dotnet add package Grpc.Tools, then dotnet add package Grpc.Net.Client, and lastly the Google package, Google.Protobuf — which is protocol buffers. So those are the three packages you need in the client: Google.Protobuf, Grpc.Net.Client, and Grpc.Tools. Fantastic — we've got everything we need from a package perspective; now let's move on to constructing our proto file.

So, we want to create our proto file in the platform service project. Close this stuff down, and create another top-level folder called Protos; inside it, create a file called platforms.proto. Get rid of the terminal and Ctrl+B for a bit more space. It's in here that we define our contract, effectively — what gets passed between server and client. There's some syntax setup first; it's a little like specifying API versions in Kubernetes, but here we specify the proto version: syntax = "proto3"; with a semicolon. Then we specify the C# namespace — let me double-check by picking any class at random: it's the top-level namespace, in this case PlatformService — so we set the csharp_namespace option to that. That's just general setup: the proto file maps onto this namespace, which is used later when we build and code gets generated.

The proto file consists of the messages — the data that gets passed back and forth between our services — and the remote procedure call services themselves, which we also define in here. I'll define the messages first, using the message keyword. The first thing to specify is our input. To reiterate: this service is going to allow our client, the command service, to retrieve all platforms — so what input does it supply? It doesn't actually need to supply anything, but we still have to specify an input, so I'll call it GetAllRequest, and it's just an empty message.
And what do we give back? A load of platforms, assuming some exist — which hopefully they do. So the next thing to specify is effectively the platform model: the structure of what comes back. We'll call it GrpcPlatformModel, and you can think of it as almost analogous to a C# model definition — really almost the same idea. Curly brackets, and now we specify the properties of the model — though in a protobuf file you define them in a slightly different way. You specify the data type, and the first property I'll define is the platform id, which I'm calling platformId in this instance, and I'm making it equal to 1. Your first question, if you've not done this before, is: why make it equal to 1? The 1 is not a value of platformId — it's a position number. For every property defined, you have to define its position; that's just how protobuf files work. Bear in mind the messaging between services is entirely binary, so this is all part of that encoding process, which I'm personally not going to get too involved in. The next property is the name — a string at position 2, so you can see the pattern emerging — and finally another string, the publisher, and no guesses as to its position: 3.

To be honest with you, this still isn't quite the data we pass back — we potentially want to pass back multiple platforms. To specify that, we need one more message, which we'll call PlatformResponse, and this is what actually gets passed back. We want many of those models, and the way you express that is with the repeated keyword, then the name of the model being repeated, then a field name — I'll call it platform — and again a position number within the message payload. If you've not done this before it may appear a little strange — or not strange, exactly, just unfamiliar — but the most important takeaway is that GrpcPlatformModel is, in effect, our main platform model object, and we're eventually going to do some AutoMapper work with it.

Something to really call out, which I find a bit counterintuitive: when we build this project, the gRPC package will generate code for us, including an actual C# GrpcPlatformModel class that doesn't exist yet. It exists here in proto format, but not as a referenceable C# model — it will, once we do our build, but there are a few steps before we get there.

We're not quite finished with the proto file, though: we need to specify the actual service itself — the service our client is going to call to get back these platforms. We define that with the service keyword; I'll call it GrpcPlatform, then curly brackets, and the remote procedure call itself is GetAllPlatforms. Then we supply the input — this is the procedure call, the "method" if you want to call it that — and what do we input? The GetAllRequest, which is basically just an empty object — nothing. And what do we return? The PlatformResponse, which is a repeated collection of models; it goes in parentheses after the returns keyword. That's our platforms proto file done — save it off. Again, this is the contract between our server and our client, and we'll eventually copy this file over into our command service as well: it will have exactly the same copy, so it knows how to talk to our server.

But we're not quite done with setup. We need to add something to our csproj file so that the build process generates code for us; then we need to define our actual synchronous data service that makes use of gRPC; and then we need to update our Startup class to pull all of this together. So there's still quite a bit to do just setting up our server — and then we'll effectively repeat the same thing in the client. Next, though, let's update the platform service's csproj file, and then we'll move on to defining our service.

Over in the csproj file, we specify a new ItemGroup. Within the item group we define something called Protobuf, which basically tells the project where it can find a gRPC protobuf file: Protobuf Include equals the path to our file, Protos/platforms.proto. The other thing we specify in here is whether our platform service is running as a server or a client — and as you know, we're running as a server in this instance, so we say GrpcServices equals Server, and terminate the element. Save that off.
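Pulling the pieces together, the completed proto file we just built up can be sketched as follows (the int32 type for platformId is my assumption — the video names the field and its position but not its type):

```proto
// Protos/platforms.proto -- the contract shared by server and client
syntax = "proto3";

option csharp_namespace = "PlatformService";

// Empty input: the client supplies nothing to ask for all platforms
message GetAllRequest {
}

// Analogous to a C# model; the numbers are field positions, not values
message GrpcPlatformModel {
  int32 platformId = 1;
  string name = 2;
  string publisher = 3;
}

// What is actually returned: a repeated collection of models
message PlatformResponse {
  repeated GrpcPlatformModel platform = 1;
}

// The remote procedure call itself
service GrpcPlatform {
  rpc GetAllPlatforms (GetAllRequest) returns (PlatformResponse);
}
```

The same file will later be copied into the command service, where only the GrpcServices setting in the csproj differs.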
Again, this is really just saying where the platforms proto file can be found, and that when we do a dotnet build we'll get some automatic code generation going on — which can take a bit of getting used to. I don't personally love code being generated on my behalf, but that's how this works, and I understand why it has to. So let's give it a go: Ctrl+backtick and dotnet build. Once that builds, come into the obj folder, under Debug, and you'll see a Protos folder containing the generated code — and as you can tell from the scrollbar, it's quite voluminous. There's a lot going on in there, and it gets regenerated every time you build the project. You don't touch that code — it's done for you, and fair play: all you really need to specify is the protobuf file, and the rest gets taken care of. Just be aware that the auto-generation is happening: if you haven't done a build and you try to reference one of these generated classes, you'll have trouble — but having built, it's there.

Fantastic — that's the protobuf setup done. What we need to do now is create a synchronous data service that will act as the server for our platform service. Back in the platform service, clear this down and come into SyncDataServices — you'll remember we defined some HTTP sync data services there. We now create another folder under SyncDataServices called Grpc (capitalize the G), and in there we create our gRPC server: right-click, new file, GrpcPlatformService.cs.
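For reference, the item group added to the platform service's csproj above is a one-liner; as a sketch:

```xml
<!-- PlatformService.csproj: tells the build where the proto file lives
     and that this project generates the *server* side of the contract -->
<ItemGroup>
  <Protobuf Include="Protos\platforms.proto" GrpcServices="Server" />
</ItemGroup>
```

In the command service, the equivalent entry will use GrpcServices="Client" so that client stub code is generated instead.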
The namespace, as usual, follows the folder: PlatformService.SyncDataServices.Grpc, with the Grpc part added on since the folder is newly created. It's a public class called GrpcPlatformService, and it inherits from GrpcPlatform.GrpcPlatformBase — come back into the proto file and you'll see GrpcPlatform looks familiar: that's the name of our service, and we're inheriting from the base class generated for it. We got autocomplete there because we'd already built the project; if you hadn't built before writing this, you probably wouldn't get that — not unless you have automatic builds going on — so again, just be aware of that.

Ultimately we're going to make use of our repository: we want to go into the repository, get all the platforms, and pass back that protobuf PlatformResponse class. You're probably starting to think: okay, we go into the repository, pull out Platform models, and pass back this gRPC model type — we need to map between them. And indeed we do. So just before we complete the service, come back over to the Profiles folder, into our AutoMapper profiles, and create one final profile: CreateMap. What's the source? Our Platform model, since we're pulling Platform models from the repository. What's the destination? GrpcPlatformModel — back in the proto file, that's the model we're mapping to. So now we map the attributes. Really, the only one we need to map is the id, because I called it platformId in the proto file, whereas in our model it comes back as just Id. Note that the code generator has converted platformId into standard C# capitalization — capital P, capital I, PlatformId — even though it was cased differently in the proto file; we're now working with the generated C# class. So, as we've seen before: ForMember, the destination being PlatformId on the GrpcPlatformModel, and the source — opt goes to opt, MapFrom, src goes to src (it's a bit long-winded) — being the Id of our Platform. That's all we should have to specify, because the other names match, but just to be totally complete I'll add the other mappings too, just in case we get caught out: destination Name maps from Name, and likewise Publisher. These last two shouldn't be necessary — the first is the only one we really need — and don't forget the semicolon. That's the last AutoMapper mapping we'll have to do: our Platform to our gRPC model.

Going back to the GrpcPlatformService class, let's finish it off. I'll add a constructor and pass in our repository, IPlatformRepo, which I'll call repository, and an IMapper called mapper — bringing in the namespaces for those, our data service namespace and AutoMapper — and then assign them to fields.
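The mapping profile described above can be sketched as follows. The profile class name, the Platform model, and the namespaces are assumptions based on the project built earlier in the course; GrpcPlatformModel is the class generated from the proto file at build time:

```csharp
// PlatformsProfile.cs (sketch) -- gRPC mapping added to the existing AutoMapper profile
using AutoMapper;
using PlatformService.Models;

namespace PlatformService.Profiles
{
    public class PlatformsProfile : Profile
    {
        public PlatformsProfile()
        {
            // Source -> Target: repository Platform model to generated gRPC model
            CreateMap<Platform, GrpcPlatformModel>()
                // The only mapping we genuinely need: the source property is Id,
                // while the generated class exposes it as PlatformId
                .ForMember(dest => dest.PlatformId, opt => opt.MapFrom(src => src.Id))
                // These two match by name and are included only for completeness
                .ForMember(dest => dest.Name, opt => opt.MapFrom(src => src.Name))
                .ForMember(dest => dest.Publisher, opt => opt.MapFrom(src => src.Publisher));
        }
    }
}
```

Only the PlatformId line is strictly required; AutoMapper resolves same-named properties like Name and Publisher automatically.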
mapper we'll generate read only fields for those as usual so it means we now have access to our mapper and also our repository so we've seen this a thousand times before and then we come on to actually exposing the method that is going to allow us to return our platforms back via grpc so public override so task and what are we returning back we're returning back platform response so again in here we are returning this back okay that is what we are returning back and again our platform response is an array of these okay so it's not the most intuitive thing to be really honest with you and you're maybe scratching your head a little bit i really don't blame you but bear with it it does all work so we're gonna call this method get all platforms and then we're going to specify get all request request again that is this thing here then finally we have to specify this other thing here called server call context context and we'll just bring in the namespace for that grpc dot core that's a grpc specific thing that we have to specify in this method all right so task i think we need to bring in the namespace for threading tasks yep that's all good so this is just complaining because we've not returned anything back yet so i'm going to create a new platform response object i'm going to call it response equals new platform response i'm going to use our repository to populate another variable platforms with a bunch of platform models if we have any so we've seen this before using our repository get all platforms all right and then we want to basically add each of our platforms to this platform response object so i'm going to do that here with a for each loop so for each var platform in platforms so for each model in our platforms collection on our response object which is this platform response object here platform dot add and then we're going to use mapper increasingly becoming my favorite tool and what are we mapping to we're mapping to a grpc platform
model and what is our source our source is platform all right and finally we return task from result here response and that should be our service good to go so we'll save that off so yeah basically we're making use of our repository and making use of our mapper through constructor dependency injection and we are returning back a platform response object which again comes from here and we're basically doing that by using auto mapper to populate that collection using a for each loop all right so that's our last use of auto mapper in this service anyway it has served us very well um and yeah we're basically almost at the end of defining the server related stuff for grpc all that we need to do now in this next bit is come back into our startup class and actually register this service we also need to specify it as an endpoint not just dependency injection we do need to register grpc more globally for dependency injection and we specify this as an endpoint in our configure method so we'll do that next all right so almost there with our server so let's bring up our directory browser and i'm going to come into startup let me just close this other stuff down and in configure services we do need to register grpc for dependency injection so services add grpc that's it we don't take those out we don't need to do anything else let's save that off all right what we do need to do which is a bit more involved is specify the service that we've just defined the server that we've just defined as an endpoint which kind of makes sense i think so under our controllers mapping we need to specify endpoints again and this time we're going to map grpc service and what service are we going to map well the service that we just created so let me just bring that up and we are referring to the service here our grpc platform service so i'm just going to copy that and paste that into here so i'm going to
bring in the namespace for that so using platform service sync data services grpc fantastic now this next bit is optional and i'm going to put it in just for completeness now this is what this allows us to do this allows us to serve up the proto file to a client and they can then use it and understand what the contract is so you don't need to do this bit but i'm just going to do it for completeness so endpoints map get and we're going to specify a path and i'm going to say protos platforms dot proto and we'll say async context pair of curly brackets let's put in our semicolon and in here we're going to await context response not request response write async bring that in and then you will do system io i might actually take this out and just bring in the namespace using system io that's better then file read all text and i'm going to specify what we're wanting to read which is protos folder platforms dot proto so again you don't need to do that we're just adding another endpoint to serve up our contract which is probably good practice all right so let's do a dotnet build all good let's do a dotnet run all good so again not really doing much at the moment um we do need to move on to doing our client to make requests to this to see if this all actually works but the fact it's not crashing or doing anything horrible i think we're probably in quite a good place so now we're going to move on to writing our client in our commands service do that next all right so we have just completed the grpc stuff from the perspective of our platform service so we'll just stop that run if you had it running and now it's time to move over to our command service which is our client so we're going to do a lot of the same stuff but there are going to be some differences as well now just make sure before you continue on that you added these three packages to the csproj file in the commands service all right so the next thing we want to do is we want to add
some config so when the commands service starts up we want to tell it where to go to retrieve our list of platforms so we need a destination basically so we'll need one for development and we'll need one for production so it's really just the uri of the endpoint of our platform service so over in our command service i'll close all this down and we'll go into app settings development json first so along with all our other development endpoint config we will create one last piece of config we'll call it grpc platform and we'll give it a value of our platform service endpoint so https now note that i'm using https in our local development environment that is supported so localhost port 5001 and that means we don't need any additional config that will all just work out the box fantastic so i'm going to copy this and then come over to production settings and this is where we want to target our production endpoint now as we have already gone through we cannot use https so we just need to find the endpoint of our production server and so as before let's just come back to our k8s project come back to our platforms deployment file and you'll see again this is the name of our server so we'll copy that cluster service ip and we'll bring it back over here and we'll put that in here and then the only other thing we have to put in is the grpc port which in my case is 666.
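just to make that a bit more concrete here's a rough sketch of what those two config files and the code that reads them might look like — note the cluster ip service name platforms-clusterip-srv here is an assumption on my part so use whatever name is actually in your deployment file and the same goes for the port:

```csharp
// appsettings.Development.json (sketch):
//   { "GrpcPlatform": "https://localhost:5001" }
//
// appsettings.Production.json (sketch -- service name and port are assumptions,
// take them from your own kubernetes deployment file):
//   { "GrpcPlatform": "http://platforms-clusterip-srv:666" }

using Microsoft.Extensions.Configuration;

public class PlatformDataClient
{
    private readonly IConfiguration _configuration;

    public PlatformDataClient(IConfiguration configuration)
    {
        // the right file is picked up automatically based on ASPNETCORE_ENVIRONMENT
        _configuration = configuration;
    }

    public string GetPlatformEndpoint()
    {
        // the indexer returns null if the key is missing from config
        return _configuration["GrpcPlatform"];
    }
}
```

the nice thing about doing it this way is the code never changes between environments only the json does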
so when this starts in production this is the endpoint we'll try and target and when it starts in development we'll just access our localhost on https and note that this production one is http make sure that you put that as http fantastic so that's it for the configuration setup for our client next thing we'll do is we'll just copy the proto file over from our platform service into our client service so we'll do that next all right so let's just close this down and we will create a folder in the root of our command service project so right click new folder and we'll call it protos as before and then rather than well what we'll do is we'll create a file and then we'll just copy the contents over so new file and we'll call it platforms dot proto and then we'll come back to our platform service get rid of that and find our protos folder platforms dot proto and we'll just copy all of this it's exactly the same and we'll bring it back over and paste it in here fantastic so we'll save that off now like with our platforms project you will remember we had to put a line in here to define where grpc can find any proto files so i'm going to copy that as well out of our platform service csproj gonna copy that and back over in platforms sorry back over in commands i'm gonna paste it in here now the one thing that we will need to change is this it's not running as a server it is running as a client which shouldn't really surprise you but everything else remains the same the relative path to our proto file is exactly the same so let's save that off all right let's do a dotnet build let's go back to our console that's our terminal should i say clear that and do a dotnet build and everything looks okay that's cool all right so with that done there's a few more things we need to do but the next thing we're going to do is actually build our client service that's going to be used to call our server endpoint so we'll do that next all right so
back over in our command service we're going to create our client our synchronous client to connect in and make a call to our remote procedure call over on our server so back in our project we're going to create one more root folder called sync data services because again grpc is a synchronous data service now we're not creating any others other than grpc but just to keep it consistent i'm going to do a new folder and create grpc just in case you wanted to add a http client here as well to make other calls we'll just structure it in a consistent way so we're going to create an interface first so in the grpc folder right click new file and we're going to call this i platform data client yes namespace command service sync data services grpc and it's just a public interface called i platform data client and what are we going to get it to return we're just going to get it to return an ienumerable of platform objects and we'll call the method return all platforms so we'll ultimately run this at startup so let's just bring in our few namespaces we'll bring in i think i've spelt that yeah it's better it didn't look quite right to the eye so here we go and we'll bring in our models as well fantastic so you're probably starting to think okay we're going to be using a grpc client to ultimately return a list of platform models so you're probably starting to think we need to use auto mapper we do but we'll come on to that in a bit but for now this is sufficient for the moment so we now want to create a concrete class so right click the grpc folder new file and we're going to call this platform data client dot cs i'll get rid of this and i'll give us a bit more space here so namespace command service sync data services grpc and it's a public class called platform data client at least i think that's what we called it let me just double check platform data client yes that's fine and that's going to inherit from our interface so cursor in there implement interface but
before we do that we're going to create a constructor and into that we're going to pass iconfiguration as we'll read in the endpoint from our configuration file and we'll call that configuration and we're also going to need a mapper object as usual so let's bring in those namespaces using automapper and make sure you use microsoft extensions configuration and then we'll assign those to private read-only fields so underscore configuration and underscore mapper then we'll just generate those read-only fields fantastic okay and so really all we now need to do is come on and flesh out this method here but before we do that i think it's probably worth doing the auto mapper stuff now so let's just come back over to our profiles folder and we're going to put a new profile in here right so under here we're going to create another map and we're going to define the source now what is the source in this case the source in this case is a grpc platform model now you'll remember if we come back over to our protos file in here this is basically the source because we're going to be getting these it'll be an enumeration we're going to be getting these from our endpoint and so we need to map it to our platform model as we specified here platform so let me just bring in the namespace for that using platform service and we need to do some mappings so let's just come back again to here we're going to be bringing over our platform id our name and our publisher so i'm probably not going to bother with the publisher because hmm i don't think we're even storing that in the service so it's irrelevant the name will map over but that should map automatically but this here the platform id we want to map it to the external id of our platform model so we need to set up some mappings so don't forget the parenthesis and we'll come onto a new line and we'll have to do this for member type stuff so same thing as we've done before specifying the destination so the
destination is on our platform object so here external id so it's actually very similar to what we've already done up here and put the comma in the right place so opt goes to opt and what we're mapping from mapping from our source and oh i've got that completely wrong so let's put that in the right way what we're mapping from we're mapping from our source it's platform id now again like with the other mappings we did in the platform service we probably don't need to do this next mapping but i'm going to do it anyway so for member i'm gonna copy this and we'll just change it now i'm gonna map over the name value but that should auto map so we're saying the name destination on our platform model we're gonna map to the name from our source the only reason i'm doing this i'm double checking is because we're using grpc and i just want to be yeah doubly confident that we're mapping everything through that we want to map through now the other thing i want to do is i want to do an ignore so for our destination member commands i just want to ignore that i think it would be ignored anyway but again i just want to be explicit in the fact that we are ignoring any mapping of command objects or a command collection so i'm just going to go ignore so auto mapper doesn't even attempt to do anything with that so i think to be honest with you this is the only thing we really need to do by way of mapping this other stuff is probably not necessary but i'm doing it just to be doubly sure so we'll save that off that's our auto mapper stuff done and i think that is all our auto mapper stuff for the whole project now finished we're really coming to the end of things now um so back over to our platform data client we've got our mappings in place now so we can now finish off this method here so let's get rid of this and i'm just going to do a console bring in the namespace using system writeline and we're just going to say calling grpc service and what we
might do we'll actually name the service just so we can kind of see what it is we think we're calling so curly brackets and we'll use configuration and then we'll bring back the config item so let me just go and pull that out keep doing control v instead of control b so let's come back to one of our config files this is the value we want here so let's copy that into here all right and we'll just end that like that so i think that should be okay cool again just helps us with debugging a little bit so we now have to set a few things up in order to talk to our endpoints the first thing we need to set up is a channel and again this is a different channel concept to the one from rabbitmq so we'll declare a grpc channel bring in the namespace for that using grpc net client for address and then we specify our endpoint and it's actually exactly the same as this we're just pulling in the endpoint so i'll paste that in there and i don't think we need that that looks better fantastic and then we need to declare a client and that's equal to new grpc platform bring in the namespace for that platform service so again all this stuff will only work if you've built the solution and it's built the code behind for the proto file so grpc platform dot grpc platform client and then we pass in the channel like that then we want to declare our request which is basically an empty object really so that's equal to new get all request which again is basically we uh come back over here is just this thing here okay so it's not nothing but it's yeah sort of nothing almost all right so then we're ready to actually make our call to get all platforms we'll put that in a try catch block i'll move this up a bit and we'll do the catch first as that's usually easier to declare so same as usual exception ex we'll do a console write line with a dollar sign in front let's say could not call grpc server and we'll put the exception in here all right and semicolon then finally within our
try block we want to get a reply equal to client get all platforms and pass in the dummy request now just fix that reply okay and then finally we want to return mapper map what do we want to map to an ienumerable of platforms and what is the source the source is reply dot platform and that should be the client done let me actually this is complaining we're not returning something for everything so we'll just return null in here and that should resolve the errors there that's it so the client's relatively simple we're just yeah putting in our configuration object and mapper object set up a channel set up a client dummy request and then yeah we just make the request as so so it's not that complicated all right so that's it almost at a point where we're done all that remains to be done is calling this at startup so what we're going to do is we're going to create a prep db class a static prep db class like we did with our platform service we don't have one here yet we'll quickly knock that up just to keep it consistent and then we'll call this service from within there and we'll seed in the platform data that we get from this method and then we're done well and then we have to deploy to production but if this all works locally then i'm fairly confident given what we've done so far it should work okay in production so we'll come on to doing the last real bit of coding which is the prep db class to retrieve this information from our server all right so still within our command service in fact before we come on to doing the prep db stuff let's just do a dotnet build just to make sure we are all still working okay great and actually before we even come on to doing prep db the one thing i did forget and i keep forgetting we need to register for dependency injection our interface and our concrete class because we're going to inject or get a reference to those anyway within our prep db class so let's
just do that now so let's close all this down let's come into our startup class let me get rid of my terminal and let's come into configure services and we'll just put that after this so services we're going to add a scoped lifetime and we're going to say that an i platform data client will give us a platform data client let's just pull in the namespace i think i probably yeah platform data service it looks okay must be something wrong platform data client not service been a long video getting confused at the end of it so yes it's our client isn't it of course so let's try that again and yeah these both should resolve now great so let's just do a dotnet build again let's do a dotnet run in fact just to make sure we're all looking okay yep that's all looking fine fantastic so yes now last class of the whole project is our prep db class that is going to call our grpc client code so let's just close that down let's minimize that and then in our data folder right click new file prep db dot cs all right so namespace command service data and it's a public static class called prep db fantastic then we're going to create a private public method private public method that makes absolutely no sense a public static method public static void and we're going to call it prep population and we're going to inject our application builder into that application builder we'll pull in that namespace and we'll use that ultimately to get a service scope again because again we can't use constructor dependency injection because it is a static class so we'll create a service scope application builder application services create scope and we need to bring in the namespace for create scope there we go so we want to get a client our grpc client so grpc client equals service scope service provider get service what do we want to get we want to get i platform data client that's what we just created in the last section so let's
bring in the namespace fantastic and then we want to basically get a collection of platforms so create a platforms collection and make that equal to grpc client and then we will call return all platforms and then like our prep db class in our platform service we're going to create a private static void method called seed data into that we're going to expect an i command repo we'll call that repo and we're going to expect an ienumerable of platform objects which is basically our result set from the prep population method we'll call that platforms all right so i just want to make sure that we have that namespace so system collections generic and then bring in our platform actually you want to do that using my using statement there we go and then we're going to call this method from in here before we finish seed data we're going to call seed data use our service scope again service provider get service and we're going to get an i command repo and then we're going to pass in our platforms from here okay and i forgot to put the parenthesis on the end there okay that should resolve fantastic so again yeah in our seed data method we're passing in a repository we're just getting that from our service scope again and we'll pass through our platforms which we retrieved from here and then finally we want to push the data into our database so console dot writeline bring in the namespace and we'll just say seeding new platforms fingers are getting tired after all the typing so almost there the last for each loop so we're going to say for each platform in our collection we're going to do a check so we're going to say if not underscore repo open the parenthesis so used to saying underscore repo external platform exists plat external id so if we don't have it then we will add the platform to our database so we'll say console dot writeline actually don't bother with the console writeline in here we'll just use repo create platform and pass in the platform object so
this just means we're not duplicating platforms and ones that we already have using the platform service id as the kind of global key and then irrespective we'll just save changes so we may or may not add anything doesn't matter we'll just call save changes fantastic all right so yeah all that then remains to be done is we just need to come over to our startup class and we need to call our prep population method off our prep db class so prep db prep population and we just need to pass through our app instance as so so that is it assuming we haven't made any errors so dotnet build the build looks good so a good quick pause here we've done quite a bit of coding here so in the next section we're going to run both of these up locally and we're going to test to see if our grpc stuff actually works and then we'll take it from there all right so on to testing our grpc stuff so let us come back over to our platform service here we are and we'll clear the screen and do a dotnet run so we're using an in memory database everything's looking pretty good so we're not really going to do anything with this now let's just check to see it should have data in its repository in its database so into our platform service we'll do get all platforms send and we have all our platforms they are the three platforms now what we want to do is run up our commands service and it should start up it should use grpc to pull those same platforms into its repository and if that does work then we know it has worked correctly so let's do a dotnet run so looks pretty good so it's listening on the event bus which we wanted it to do it's calling the grpc service at https port 5001 and it looks like it's seeding that data into its database so if we come over here to our command service and we get all platforms we get the three platforms that we had in our platform service so it's using grpc to get that data which is very very good news and then just to kind of labor the point if we
come back to our platform service and we create a new platform and then we come back to our command service and get all platforms we have that platform as well delivered via our event bus so the two services are now completely in sync and we're using synchronous messaging and asynchronous messaging to achieve that and really although this was a very simple example you see how much work went into creating it but fundamentally those techniques that we have used here are then usable in much much more complex setups so it's a good foundation on which to move forward so very pleased with that glad that all worked all we have to do now and it could be a little bit of a sticking point is we need to repackage or rebuild both our images push them up to docker hub and then restart our deployments in kubernetes now with that unique config stuff that we put in because we're not using https within kubernetes that might cause a sticking point it shouldn't do it should work okay but that's the last thing that we need to do so let's just finish this off and get that done do a quick test and then we can all go home so we'll do that next all right the moment of truth let's just stop all our local running services just to give us a clean slate and let's start with our platform service so yes what we want to do is we want to rebuild the image so do a directory listing make sure you're in the platform service folder make sure you can see your docker file now we'll do a docker build platform service don't forget the build context so let's just do that now yeah i forgot to tag it after all this time there you go it's the first time i made that mistake it didn't tag it so docker build dash t and then the name of the service don't want to make any stupid mistakes at this point in the game all right it should build relatively quickly but because we've added some newer code it might take a bit longer to build it builds the image in layers and we've added quite a bit
of code so it's probably updating a few more layers than it would have done if we'd made only some minor changes so it's probably pulling in all the grpc stuff actually that's probably why it's taking a bit longer so let's just let that do its thing all right that's built so up key and we'll just push this up to docker hub push all right we'll just let that push up and in the meantime let's come over i just want to wait yeah in the meantime let's go to our command service and we'll do a directory listing make sure you're in the commands service and we'll do a docker build don't forget the dash t this time binarythistle then it's command service and don't forget the build context let's just check that's correct binarythistle command service fantastic let's rebuild that and up key and we'll just push up to docker hub and then as usual once both of these things have pushed we will check docker hub to make sure that both our images were actually pushed up and we hadn't made any mistakes and that we have the latest images up there so let me just wait for that to happen and we'll come back and double check docker hub all right so that's our platform service pushed up let's see how our command service is doing it's still running along so we'll just wait for that to finish as well all right so that's our command service finished our platform service should have finished as well indeed it has fantastic so let's just come over to docker hub we'll do a quick refresh and hopefully we should see a few seconds ago a minute ago that makes sense to me fine so they look like they have been pushed up successfully and we haven't got any misspelled images or anything that's good so last thing we're going to do is a kubectl rollout restart and then we'll see actually almost immediately when the command service starts up whether the grpc stuff was successful so we'll do that next all right so let's come back to our platform
service i think we're already there yeah we're already there let's clear the screen so here we want to let's get our deployments so kubectl get deployments and we basically just want to restart our platforms deployment so kubectl rollout restart deployment and we want to restart this deployment here so that's restarted so we'll do kubectl get pods and we'll see that the container is creating i've opened a beer at this stage so i'm fairly confident it's going to work let's do another get pods still creating again and you can see that the old pod is terminating so that shouldn't take too long to terminate so let's bring kubernetes back over we'll order by start time so our platform server should be the first one that started i just want to make sure that everything's looking sort of okay yep it's looking all right there's nothing too concerning there and actually one thing we can see is that it's listening on port 666 as well which is good news so we're on port 80 for our http traffic and it is listening on 666 for grpc so that's looking pretty good now maybe before we do anything else with our other service i just want to make sure that we can still call our endpoint so let's just say not our command service but our platform service get all platforms yeah we have all our platforms there it's all still working fine fantastic so moment of truth real moment of truth let's clear the screen get our deployments again and we want to restart our commands deployment so up key rollout restart and this time it's commands and we'll restart that we'll do get pods and we should see container creating and again shouldn't take too long and at this point when it starts up it should use grpc to pull down all the platforms that we have running in our platform service so fingers crossed um see how it goes let's just check our pods again okay so our old pod is terminating it's gone that's looking good let's just do okay it's all running up so that's a really good sign
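so just to recap the whole rebuild and restart sequence we've been through here's a rough sketch of the commands — the docker hub id binarythistle and the deployment names platforms-depl and commands-depl are assumptions based on my setup so swap in your own values:

```shell
# rebuild and push both images (run each build from that service's folder,
# where its dockerfile lives -- don't forget the trailing dot build context)
docker build -t binarythistle/platformservice .
docker push binarythistle/platformservice

docker build -t binarythistle/commandservice .
docker push binarythistle/commandservice

# check what deployments exist then restart them so they pull the new images
kubectl get deployments
kubectl rollout restart deployment platforms-depl
kubectl rollout restart deployment commands-depl

# watch the old pods terminate and the new ones spin up
kubectl get pods
```

a rollout restart does a rolling replacement so the old pod keeps serving until the new one is up which is why you see both in the pods list for a little while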
we're not seeing it erroring out which is obviously a good sign so let's sort by start time again and it's our command service at the top and we should see here calling the grpc service seeding new platforms i think that looks good so let's come back to our production queries and get all platforms from our commands service indeed it did it pulled down all of our platforms fantastic so with that incredibly long video we have finished and we've deployed everything to production running in kubernetes so if you've followed this far thank you for following this far it's been a very very long video now i do have some final thoughts and a bit of a review of what we've done just go back through reviewing what we have done and there are some final thoughts on things that we could do to move this forward even more the things i left out that i would maybe include the next time so quick pause i'll finish my beer and then we'll come back and we'll talk about those things before wrapping up all right so here's just some final thoughts from myself on how i would move forward if i was to revisit this course or add some more material into it as i'm sure you're very aware it was a very long course a very long video but we didn't cover everything i mean you're probably never going to cover everything but there's some things that i would probably introduce if i was going to extend it going forward and this is just my thoughts at this point in time so the first thing i would like to look at is https tls whatever you want to call it i don't like ignoring that at any point in time so i would definitely want to at least revisit it in more detail and make it more of a considered assessment of whether we would want to use it within our kubernetes cluster or not we didn't obviously and that may still be the case but i'd probably want to focus in on that a little more and at least provide some options for introducing https into a kubernetes cluster i'd probably want to revisit the event
processor it was just one big kind of massive class and so i think that could be a bit more elegant and i'd probably think about re-architecting that but for our purposes it was fine possibly a more elaborate use case our use case was very very simple which i think for this video was the right approach it didn't distract us from what we really needed to focus on but going forward i think a more elaborate business focused use case would probably be beneficial because you then have to start really thinking about how eventing works and this kind of feeds into the point above really so that might drive a more elegant eventing system and then the last one service discovery so that means that services can discover each other without having to have hard-coded endpoints which i think is a really nice concept and it might also address some of the issues that i called out with synchronous messaging some of them anyway not all of them so something i possibly would look at introducing into another course going forward i'm sure there's other things but those were my top four um other than that thank you for watching if you liked the video give it a like if you haven't done so already please think about subscribing but until i make my next video stay safe and i'll see you next time
Info
Channel: Les Jackson
Views: 369,833
Rating: 4.9807744 out of 5
Keywords: .net, c#, microservices, rest, api, rabbitmq, kubernetes, k8s
Id: DgVjEo3OGBI
Length: 665min 57sec (39957 seconds)
Published: Sun Aug 29 2021