.NET 5 REST API Tutorial - Build From Scratch With C#

You are about to learn how to use .NET 5 to create a REST API. The creator of this course is a Senior Software Engineer at Microsoft, and he is also an excellent teacher. A REST API allows your app or system to expose its functionality to multiple types of clients, both inside and outside of your network, including clients across the internet. This is what you would use if you wanted to write a program to collect data from, say, Twitter, Yahoo Finance, or even NASA. If you're looking into building your own REST API, and you're considering the .NET platform for it, please stay with me as I show you how to do this end to end using the latest innovations provided by .NET 5. I hope you enjoy it. In the first part of this tutorial, you're going to learn the scenario to be used across the tutorial, how to create a .NET 5 Web API project from scratch, how to use Visual Studio Code for building and debugging the project, how to trust the development certificate installed by .NET 5 (which is needed for HTTPS access), and how to use Swagger UI to interact with the API. To follow the tutorial, you're going to need a few things: the .NET 5 SDK, Visual Studio Code, and some basic understanding of the C# language. Now let's talk about the scenario that we're going to be using as a domain for this tutorial. Let's imagine that we have some sort of catalog system, and we have a bunch of items available in it. I'm a video gamer, so I like to think of these items as items that I would use within a video game: potions, swords, shields, and stuff like that. So that's the system we have in place; it has a catalog. And of course, we're going to have users that would like to manage this catalog via their browser: they have a browser, and they want to manage these catalog items somehow.
So what would they like to do? Well, how are we going to create items in the catalog? How can we retrieve the list of items currently available in this catalog? How can we update properties of the items? And how can we delete items in this catalog? As it is today, we do have the catalog available, but we lack a way to expose this catalog to the internet so that people can go ahead and manage it from the browser. So that's where we're going to introduce our REST API for this catalog, and during this tutorial we're going to see how to build this REST API from scratch using .NET 5. So here we are in Visual Studio Code, and the first thing we're going to do is open a brand new terminal. In this terminal, we're just going to switch to the directory where we're going to create our project. To create the project, we're going to be using the .NET CLI. To do that, we're just going to say dotnet new and the type of project that we want to create; for our REST API, that's going to be webapi. The name of the project is going to be Catalog. Hit Enter, and that creates all the files based on the Web API template. So now I'm going to open the folder that got generated, Catalog. As usual, when you open a .NET project in Visual Studio Code, it will prompt you to add a few additional files for building and debugging the project. I'm going to say yes, and those files get generated under .vscode. On the left side, you can see all the generated files, so let's take a quick look at each of these files. The first file we're going to look at is the .csproj file. This is called the project file, and it is used to declare how we're going to build this project.
In this case, we're saying that we're going to be using the .NET Web SDK to build the project, which includes a bunch of tools and tasks specifically designed for web projects. The next interesting thing is the target framework, which in our case is net5.0. The target framework defines which APIs are going to be available to your project; in this case net5.0 is perfectly good for us. Next there's a bunch of NuGet packages that we're not going to dive into right now; those are just dependencies that we've already acquired for this project. Close that, and next we're going to take a look at Program.cs. Program.cs is what we call the entry point of the application. What it does is pretty much stand up the host; that is, the process is going to become the host of our program. It also declares a bunch of defaults, and it sets up what we call the Startup class for our project. So let's actually go to Startup and see what's going on there. The main things in Startup are, first, this property called Configuration that we receive as part of the Startup constructor. You can use it any time you need to read configuration information from multiple sources, like environment variables, different kinds of files, or a bunch of other places — any configuration that you don't want to hard-code into your service. The next interesting method is ConfigureServices. This is the place where you register all the services that you're going to be using across your application; we'll talk about this later on in this video. And the last interesting piece is the Configure method. This is where we configure what we call the request pipeline in ASP.NET.
It defines a bunch of typical middlewares, which are additional components that execute before your controller or your code actually executes. Each of these can execute as a request comes into the ASP.NET process, and from there all the way to when your code executes. But we're not going to be exploring this part in this video. A couple of other things: we have WeatherForecast. This is a model that gets generated for the sample application; it just has a few very simple properties. Alongside this model, there's a controller. A controller in ASP.NET is pretty much the class that handles the routes that your service exposes, but we're not going to be using this one in this video, so let's not dive too much into it. A few other files: appsettings.json is where you can declare configuration that you don't want to hard-code into your source code. Right now, it just has some configuration for logging and the hosts allowed in the app. There's also a variant of appsettings.json, which is appsettings.Development.json. The fact that it says appsettings.Development.json means that when we're running in the development environment, these settings will take precedence on top of appsettings.json. You could have a bunch of these appsettings files, one for each of your environments, like production, test, integration, and so on. And while we're talking about environments, it may be a good time to take a look at the files under .vscode, which are tasks.json and launch.json. tasks.json is just a file that declares tasks; with it, you can declare tasks, and a task can be a bunch of things.
In our case, the most interesting task is the one that runs the dotnet build command, which is going to be used for building our code. As for launch.json, this is the file that controls what is going to be launched or executed when we do an F5, or when we start debugging the code; in this case it's already pointing to the right DLL to start debugging. Lastly, we also have launchSettings.json. Really, the only interesting part here that I'd like to take a look at is applicationUrl. Here we are defining the URLs for our application. In this case, we're saying we're going to be serving our server at localhost on port 5000, and the HTTPS version, which is going to be the default, on port 5001. We're also declaring the ASPNETCORE_ENVIRONMENT environment variable, and we're setting it to Development. Note that in Visual Studio Code, this is not the file that's being honored; the one that is honored is launch.json, as you see over here — that's the one that takes precedence in VS Code. So that's good. What we want to do now is actually test this project, just to make sure that everything is running as expected. So I'm going to switch to the debug tab — let me expand this a little bit — and just click and start debugging. Let's see what happens. A browser pops up, and if you're getting this page here, it's pretty normal: it means that the self-signed certificate that comes with .NET has not been trusted. There's a very simple way to get around this and properly trust the certificate that comes with .NET. So let me just close this and stop, and let's switch back to the terminal. Actually, let me open up a new terminal here. Yeah, there it is.
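For reference, the relevant part of the launchSettings.json we just looked at is roughly the following — a trimmed sketch based on the Web API template defaults; the exact profile name and ports in your generated file may differ:

```json
{
  "profiles": {
    "Catalog": {
      "commandName": "Project",
      "applicationUrl": "https://localhost:5001;http://localhost:5000",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}
```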
In order to trust the certificate that comes bundled with the .NET SDK, what you have to do is just type dotnet dev-certs https --trust. When you run this, you're going to get a pop-up asking you to confirm that you actually want to trust that certificate. I'm going to say yes, and that should do it. So I'm going to run the project again; let's see what we get. All right — we're still not getting much of anything here, but we are not seeing any trust issue anymore. Now, if we want to see something meaningful — although it doesn't matter too much for us at this point — what you can do is just go to /swagger, and you're going to get this nice UI. This is what we call Swagger UI, based on the OpenAPI specification. This component is now bundled with .NET 5, so you don't have to do anything to make it available; as you saw, we did nothing, and it's just there at the /swagger URL. What it does is let you easily describe all the operations, all the actions, all the routes that are available in your API, and it also lets you interact with them easily. For example, if I just go to the GET route, click Try it out, and click Execute, it will go ahead and run it, and we can already see some results for that route. One thing that I like to do as I work on these projects is to not open the browser every single time I run the project. So let's change VS Code's behavior a little bit so that we just keep this window open, and any time we hit F5 or Run, it will not open more windows. To do that, I'm going to stop here, go to launch.json, and close this. The only thing you have to do is remove the serverReadyAction section here, and that's pretty much it. If I run again — and by the way, you can do this by just pressing F5, which I'll do right now —
that starts the host, as you can see, but it did not open any more windows, which is fine, because we have the browser ready to go right there. The last thing that I like to do in terms of setup is to simplify how we build our project in Visual Studio Code. For that, let me minimize this, go back to Visual Studio Code, and stop this. What I'm going to do is go to tasks.json, and the only thing I'm going to add is this little section under the build task, which is the group: its kind is going to be "build", with "isDefault": true. What this does is allow us to more easily build the project. So now — let me just save this — I can go to Terminal, Run Build Task, and it immediately builds. Without that, it would pop up yet another menu to do the build. I can also now do Ctrl+Shift+B and it will do the same thing. So that just speeds things up. In the second part of this .NET 5 REST API tutorial, we introduce the foundations of our API, which includes the core entity used to represent the items in the catalog, the repository class responsible for all item storage operations, and the controller that will handle all the requests sent to our REST API. Here you will learn how to model an entity via C# record types, how to implement an in-memory repository of resources, and how to implement a controller with GET routes to retrieve resources. All right, so now that we are comfortable with the initial setup of our project, it is time to start setting up the entities and the repository that are going to be used to store and retrieve the items used across the service. The first thing we're going to do is close these files and the terminal, and get rid of a few of these classes that really make no sense for our project. So I'll just delete WeatherForecast here,
and I'll delete the WeatherForecastController, so we can start cleaner. Now let's introduce our entity, the Item entity. For that, let's add a folder here called Entities. Some people would say Domain, and a lot of people may say Models; in our case Entities should be good enough. Let's create a file here; let's call it Item.cs. Let's declare the right namespace: it should be Catalog — and since it's within the Entities folder — .Entities. What you would usually do at this point is declare a class, so you would say public class Item. However, since .NET 5 and C# 9, there's a new option here for you, which is what we call record types. Record types are pretty much classes, but they have better support for immutable data models, which means that once you create an instance of this object, it is not possible to modify it. That's pretty convenient, especially for objects that you receive over the wire: all of these are coming from the web, and you usually want to take them and do something with them, but you don't want to modify them. Record types also have this thing called with-expressions, which you're going to see later on in this tutorial, and they provide the ability to compare based on values — value-based equality is what they call it. This means that when you compare two instances of an Item, those instances will be equal only if all of the properties of those instances are the same, as opposed to just the identity of the object itself, which would be the case with classes. So record types are pretty handy; I think they're a great option for the objects that we're going to be using here. So let's switch class for record. There it is. Time to introduce the properties for this record. Let's add just a few properties here; let's use a Guid for our Id.
And let me import the System namespace that was missing there. Before adding more properties, let me actually make a small change here: instead of using get; set;, let's switch to get; init;. Why init? This is another addition in C# 9 and .NET 5, which is a great fit for properties where we only want to allow setting a value during initialization. In the past, you could say get; set;, which means that after you create the object, you can still modify — in this case — the Id at any time. That's not really desirable; we want an immutable property. For immutability, we would have had to make the setter private. That does make it immutable, but then it becomes pretty challenging to construct instances of these objects: we'd have to introduce a constructor, and our consumers wouldn't have a really nice way to create our objects. To get to a good middle ground, they introduced the init accessor. This means that you can use an object initializer to construct this Item object, just as you would with a set, but after creation you can no longer modify the property. It's a very nice balance between the two worlds, and we'll see how this plays out later on. So at this point, let's just define our properties. Let's add, of course, Name — it's going to be a string Name, also with init. Let's add a Price for our item — Price, also with init. And finally, let's add a DateTimeOffset CreatedDate; this is going to be the date and time when the item got created in the system. Let's also not forget to change this one to init. Now it is time to introduce our repository, which is the class that is going to be in charge of storing the items in the system. To keep things simple for now, we will only use an in-memory repository.
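Assembled from the steps above, the entity might look like this — a minimal sketch following the names described in the tutorial:

```csharp
using System;

namespace Catalog.Entities
{
    // A record type: value-based equality, with-expression support,
    // and init-only properties that can be set only during initialization.
    public record Item
    {
        public Guid Id { get; init; }
        public string Name { get; init; }
        public decimal Price { get; init; }
        public DateTimeOffset CreatedDate { get; init; }
    }
}
```

With init-only setters you can still write `new Item { Name = "Potion", Price = 9 }`, but assigning `item.Price = 10` after creation is a compile-time error; `item with { Price = 10 }` produces a modified copy instead of mutating the original.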
A few episodes later in the tutorial, we will bring in a proper database. So let's create a new folder here called Repositories, and here we will add a new file; let's call it InMemItemsRepository.cs. The right namespace here — let's not forget — is going to be Catalog.Repositories. Now let's create the class: public class InMemItemsRepository; I'll just grab the name from the file name. Like I said, this is in-memory, so we're just going to define a very simple list of items that is going to be the initial data we'll be working with. So let's declare a list here: private, and make it readonly, because it should not change — I mean, the instance of the list should not change after we construct this repository object. It's going to be a List of Item; let's call the field items. We need to import a couple of namespaces: System.Collections.Generic for the list, and Catalog.Entities for the Item. And we want to say this is new and declare the list. Now, here's another addition in C# 9: in the past, you may have needed to say, okay, this is new List<Item>, but that's a bit redundant, since we already know clearly this is a List<Item>. So why so much ceremony? Let's just remove the type name and keep new() — that's all you need to do at this point. A nice addition in C# 9. So let's add a few items here. Let's say new Item, and let's do the initialization. Let's do a random Guid — Guid.NewGuid() — for the Id; again, let's import the missing namespace. Now for the name: like I mentioned in the introduction, I like to think of these as video game items, so the first item I'm going to choose here is a Potion — super classic in these video games. Let's say the price for that one is going to be just 9. And for CreatedDate, I'll just say DateTimeOffset.UtcNow.
That means right now in terms of UTC — not the local time, just the UTC time. And just as we did with this one, let's add a couple more. Same Guid.NewGuid() — I mean, a new Guid for each of these. Let's call this one Iron Sword, and let's say it's going to be more expensive; make it 20. And for the last one, let's do a Bronze Shield — going to be a bit cheaper, let's say. All right, there it is: we have our initial list of items ready to go. Now, in this repository, we're going to have to deal with a bunch of things: how do we get an item, how do we get a collection of items, how do we create an item, update it, delete it — all these things. To keep it simple, let's start with the gets. We're going to do two get methods here. The first one is going to be GetItems, for getting all the items; it's going to return an IEnumerable<Item>. IEnumerable is a very basic interface that you can use to return a collection of items. That one is going to be as simple as saying, well, let's just return whatever we have in items right now — that's our GetItems method. The next one is going to be similar, but this one is going to return just one item; it's going to be named GetItem, and it's going to need to know the ID of the item to return. In order to retrieve the correct item, what we're going to do is return, from the items collection — and I'm going to say Where, which requires importing yet another namespace, System.Linq — okay, so from that list, where the item Id equals the id that we got in the parameter. That's going to return a collection, but we don't want a collection; we want just the one item, so we'll finish with SingleOrDefault. So if it finds the item, it will return it.
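Pausing here, the repository assembled so far might look like the following — a sketch where the Item record is inlined so the example is self-contained (in the project it lives in Entities/Item.cs), and the Bronze Shield price of 18 is a sample value I'm assuming:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

namespace Catalog.Repositories
{
    // Inlined here for a self-contained example.
    public record Item
    {
        public Guid Id { get; init; }
        public string Name { get; init; }
        public decimal Price { get; init; }
        public DateTimeOffset CreatedDate { get; init; }
    }

    public class InMemItemsRepository
    {
        // readonly: the list reference never changes after construction.
        // Target-typed new() — the List<Item> type is inferred (C# 9).
        private readonly List<Item> items = new()
        {
            new Item { Id = Guid.NewGuid(), Name = "Potion", Price = 9, CreatedDate = DateTimeOffset.UtcNow },
            new Item { Id = Guid.NewGuid(), Name = "Iron Sword", Price = 20, CreatedDate = DateTimeOffset.UtcNow },
            new Item { Id = Guid.NewGuid(), Name = "Bronze Shield", Price = 18, CreatedDate = DateTimeOffset.UtcNow }
        };

        public IEnumerable<Item> GetItems()
        {
            return items;
        }

        // Returns the matching item, or null when the id is unknown.
        public Item GetItem(Guid id)
        {
            return items.Where(item => item.Id == id).SingleOrDefault();
        }
    }
}
```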
If it doesn't find it, it will return null. All right, so that makes up our repository. The next thing to bring in is the controller. Like I said, the controller is going to be the class that receives a request sent by the client and handles it properly. So let's add our controller class: a new file under the Controllers folder, and since the resource we're dealing with here is items, the convention will be to just name it ItemsController.cs. Again, declare the right namespace — Catalog, and in this case .Controllers. All right, so this controller is going to be public class ItemsController. The interesting thing about controller classes is that you always want to inherit from ControllerBase — let's also import that namespace here — which effectively turns this into a controller class. So always inherit from ControllerBase; that's the first thing. The next thing is to mark this class with the [ApiController] attribute. This brings in a bunch of additional default behaviors for your controller class that just make our life easier, so don't forget to add [ApiController] there. And by the way, there's tons of documentation on each of these attributes. The next thing is to declare the route. The route defines which HTTP route this controller is going to respond to. By default, what you would put here is the name of the controller: if you do that, it means that whatever the name of the controller is, that's going to be the route. In this case, for instance, for a GET, it would be GET /items — that's going to be part of the URL we'll be using. So you can do it either that way, or you can explicitly declare the route that you want to use; I could say "items", which is perfectly valid.
And in fact, let's stick with that and move forward. Now, here in this controller, for any action or operation we're going to do, we're going to need that repository that we worked on a moment ago. So let's bring in an instance of that. We're going to declare it private readonly — again, because it's not going to be modified after construction — of type InMemItemsRepository, named repository. And yes, the namespace is already there. Now, before moving forward, let me just tell you that we'll be making a few not-so-ideal choices as we go, just to explain a bunch of concepts. The fact that I'm introducing an explicit dependency on InMemItemsRepository is not ideal, but I just want to keep things simple and then improve as we move forward — so don't take this as the final word on how you should do this. Now, we're going to construct this in the constructor of this controller. So let's add an ItemsController constructor, and then let's say repository = new InMemItemsRepository(). Boom — we have a repository right there, ready to be used. So let's define our first route. How do we define a route to retrieve all the items in this storage? Let's declare a method: IEnumerable<Item> GetItems(). This is a method that's going to return an IEnumerable of items, just like the repository. Let's import the right namespaces we're missing here. But then, in order for this method to actually become a route and react to some HTTP verb, you have to declare the right attribute — in this case, [HttpGet] is what we want to declare. By doing this, it means that when somebody performs a GET against /items, this is the method that's going to react to that.
You'll see how, in a few cases, we pretty much have the same route, but it is the verb that makes the distinction as to which method is going to be invoked. Then, what do we do in this HTTP GET? Fairly straightforward: we're going to say var items = repository.GetItems() — remember, we created this GetItems method, so we just invoke it, we have the items, and then we return the items. That's all it is. Okay, with that, we should be able to test this out and see how it goes. I'm going to hit F5 here, go back to Swagger, and refresh the page that we had open from the previous part. As you can see, everything got refreshed: there's no more weather controller; now we have an ItemsController, and we have our first route here — a GET for /items. Not only that, we also have a schema that describes an Item and how it will be laid out. If we click on /items, Try it out, let's see what we get. Right here, as you can see, Swagger will show the route we just executed — like we said, it's a GET on /items; this here is our host and the port, so against localhost:5001/items. And here's the result: we have our three items that we declared as the initial values for our items collection — the potion, the sword, and the shield. This is working great. Let's stop this and add our second route, which is a route to return one specific item. This is going to be public Item GetItem, and it receives a Guid id. Are we missing a namespace? Again, let's see — yes, a namespace was missing; there it is. And then, just like before, we say var item = repository.GetItem(id) — here's the other method that we added to the repository; we just pass the ID, and we return that item. Now, again, we need to mark it with the right attribute here: it is going to be an HTTP GET.
But in this case, there's a little additional detail, which is the template. We have to provide a template where we specify how we're going to treat the other piece of the route. In this case, the route is not going to be just GET /items; it's going to be GET /items/ plus the ID of the item — the ID is that piece of the template, so I'm just going to put it like that: "{id}". So when you request /items/ followed by a specific item ID, this method is going to get executed. Let's see how that goes; I'll do F5 again. Okay, this is running; back to Swagger, just refresh the page, and as you can see, now we have our second route available. Like I said, this is /items/{id}. In order to execute that, let's actually execute the first one, /items, to get a list of all the items that we know about, and let's grab the first ID. Ideally, we should be able to get that item via the other route. So let's try it out and paste the item ID here — this is how, in the OpenAPI/Swagger UI, you can introduce values — and now execute and see what happens. Here's the executed route: you can see /items/ and then the actual ID. And then, interestingly, we are getting a 204. This means that something didn't go quite as expected. What I'll do is go back to Visual Studio Code and put a breakpoint over here to see what we're getting. Go back to Swagger UI, and execute. Here we hit the breakpoint, and right — we're getting null; that's what's happening. The item has not been found. Why would that be? Well, let's see: here's our items list, and here's our GetItem method. What's really happening here is that any time we make a request to our controller — to our service — we are actually creating, as you can see over here, a new instance of the InMemItemsRepository.
That means a brand new list is created, with a random set of new Guids, and of course, when we try to use our previous Guid, it will not find it, because now we have a brand new list. So I'm going to hit F5 to let it go. And this is actually a good thing, because it makes us realize that we need to deal with that situation properly in the controller: in this GetItem method, we should not just be returning null. So how do we handle this? Let's stop it. What we probably want to do is return the proper HTTP status code in the case that we can't find the ID. So let's say: if item is null, let's return something different — let's return NotFound(). That's the way to ask .NET to create the proper status code for 404 Not Found, so we don't have to figure out how to set the status code ourselves. And then, if it is found, we just go ahead and return the item. Now, we do have a problem here, because in one branch we're trying to return one type, NotFoundResult, and in the other branch we're trying to return just an Item. So what is this method going to return? We declared that we're always returning an Item. The way we deal with this is by use of the ActionResult type. If we use ActionResult<Item>, that actually allows us to return more than one type from this method. As you see, there are no more errors, because now this is saying: if you want to return NotFound, return NotFound; or if you want to return the type that's in the generic type argument right there, you're also able to do that. So now we can handle both cases and return the proper status code for each of these actions, to be properly RESTful. Let's run this again and see what we get. So let's refresh this.
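Put together, the controller at this point might look like the following sketch. It assumes ASP.NET Core's ControllerBase from Microsoft.AspNetCore.Mvc; the entity and repository are condensed into the same file so the example is self-contained (in the project they live in their own folders), and — as noted above — new-ing up the repository in the constructor is deliberately not ideal yet:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNetCore.Mvc;

namespace Catalog
{
    // Entity and repository condensed here for a self-contained sketch.
    public record Item
    {
        public Guid Id { get; init; }
        public string Name { get; init; }
        public decimal Price { get; init; }
        public DateTimeOffset CreatedDate { get; init; }
    }

    public class InMemItemsRepository
    {
        private readonly List<Item> items = new()
        {
            new Item { Id = Guid.NewGuid(), Name = "Potion", Price = 9, CreatedDate = DateTimeOffset.UtcNow },
            new Item { Id = Guid.NewGuid(), Name = "Iron Sword", Price = 20, CreatedDate = DateTimeOffset.UtcNow },
            new Item { Id = Guid.NewGuid(), Name = "Bronze Shield", Price = 18, CreatedDate = DateTimeOffset.UtcNow }
        };

        public IEnumerable<Item> GetItems() => items;

        public Item GetItem(Guid id) => items.Where(item => item.Id == id).SingleOrDefault();
    }

    [ApiController]
    [Route("items")]        // requests to /items land on this controller
    public class ItemsController : ControllerBase
    {
        private readonly InMemItemsRepository repository;

        public ItemsController()
        {
            // Deliberately not ideal: a fresh repository (with fresh Guids)
            // per controller instance; dependency injection fixes this later.
            repository = new InMemItemsRepository();
        }

        [HttpGet]           // GET /items
        public IEnumerable<Item> GetItems() => repository.GetItems();

        [HttpGet("{id}")]   // GET /items/{id}
        public ActionResult<Item> GetItem(Guid id)
        {
            var item = repository.GetItem(id);

            if (item is null)
            {
                return NotFound();  // 404, instead of a 204 with an empty body
            }

            return item;
        }
    }
}
```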
Let's again get one of these items: Try it out, Execute. Good. Now let's grab a Guid we know it's not going to find, open up here, Try it out with that ID, and Execute. This time, we do get a 404, which is the correct status code for Not Found, as you can see here. In this third part of the .NET 5 REST API tutorial, we learn about the dependency injection technique and how to leverage it to properly inject a repository instance into the ItemsController. We also introduce the concept of data transfer objects and how to use them to establish a clear contract with our API consumers. Today you will learn what dependency injection is, how to register and inject dependencies in .NET 5, how to implement data transfer objects, also known as DTOs, and how to map entities to DTOs. In the previous video, we were able to create our entities, repositories, and even our controller, to be able to get all items and one specific item. However, we found an issue: when we try to retrieve an item, we can't retrieve it, because — as we found, and let's just go back to the code — any time we receive a request in our ItemsController, we are creating a new instance of the repository, and that brings in a bunch of new items, in such a way that we are never able to find the one we asked for. So how can we get around this? How can we actually fix this the right way? For this, there's a pretty important concept that we need to learn here, which is called dependency injection. So let's talk about that. What is dependency injection? Let's think about our class. We have a class which wants to make use of some other class. Where we have this kind of relationship, we say that this other class is a dependency of our class. In more concrete terms, in our case we have the ItemsController, whose constructor is creating a new instance of the repository, the InMemItemsRepository.
Now, what we really want to do in terms of dependency injection is flip things a little bit. Instead of having the items controller construct that instance, we will receive the repository in the constructor and keep that reference in the class. At that point, we are injecting the repository dependency into the items controller class. This also brings in something very important, which is the dependency inversion principle. Again, we have a class, and we have some dependency; let's call it dependency A. Our class depends on dependency A, but what we want to do is not take that dependency directly, and instead have our class depend on an abstraction, which in C# is an interface. So the class no longer depends on dependency A; it depends on some interface that dependency A will implement. We have inverted the dependency: the class only depends on the interface, and dependency A implements that interface. In the same way, we could bring in dependency B or any other dependency that also implements the interface. So you can imagine that now the repository the items controller receives is just an interface. The class has no idea which concrete dependency it is working with: it could be A, B, or any other dependency, and as long as it implements the contract, which in this case is the interface, our class is very happy to work with it. Okay, so that is the dependency inversion principle. Now, why do we want to do this? By having our code depend upon abstractions, we're decoupling implementations from each other, which gives us much more freedom to move these dependencies around without ever having to touch our class.
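The inversion described above can be sketched like this (a minimal illustration; the interface members and field names are assumptions):

```csharp
// The class depends only on an abstraction...
public interface IItemsRepository
{
    Item GetItem(Guid id);
}

// ...and any concrete dependency ("dependency A") simply
// implements that abstraction.
public class InMemItemsRepository : IItemsRepository
{
    public Item GetItem(Guid id)
    {
        // look up and return the item from the in-memory list
        return null;
    }
}

public class ItemsController
{
    private readonly IItemsRepository repository;

    // The dependency is injected; the controller never constructs it
    // and has no idea which implementation it received.
    public ItemsController(IItemsRepository repository)
    {
        this.repository = repository;
    }
}
```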
This makes the code cleaner, easier to modify, much easier to reuse and, very importantly, much easier to test. But then, if we're going to do this, how are we going to construct these dependencies? Because now we're just receiving them in the constructor. If we have all these dependencies, A, B, and C, who is going to invoke the constructor? Our class expects to receive them already built; somebody has to inject them there. This is where the service container comes into play, which in .NET 5 is the IServiceProvider. What happens is that during application startup, each of these dependencies gets registered into the service container. Then, eventually, when the class gets instantiated, the service container takes care of resolving any of the dependencies needed by the class; it has a map of all the dependencies needed by each of our classes. It resolves the dependencies, constructs them if needed — only the first time, depending on the lifetime that has been configured for those dependencies — otherwise it reuses them, and then it injects the dependencies into the class. This is, in fact, what's going to help us with the little problem we have right now in our project: we don't want an explicit dependency on the repository, and we don't want an explicit construction of it every time the controller gets created. We just want to receive an instance that has already been constructed the first time it was needed, all sorted out by the service container. Let's see now how we can use dependency injection to our advantage. I'm going to go back to the code, stop debugging, and close the terminal.
So let's fix the situation where we have this explicit dependency on InMemItemsRepository. The first thing we're going to need is an interface, so that the items controller does not operate on concrete instances of the repository. Let's go to InMemItemsRepository, right-click on the class name, open the light bulb, and choose Extract Interface. That creates an interface for us and makes the class implement it. Now, we probably want to take the interface out into its own file, so let's do that: new file, and let's call it IItemsRepository, which should be a better name for this. Namespace Catalog.Repositories, and here's where we bring in our interface, so I'll just cut and paste it. We need a couple of namespaces: Catalog.Entities for the entities, and System.Collections.Generic. That should do it. So we have our interface, and here's the repository: InMemItemsRepository implements IItemsRepository. Now that we have the interface, let's go back to our controller and switch the field's type into IItemsRepository. We have to do the right naming here. And now we receive it in the constructor: IItemsRepository repository. Then, to not confuse things, let's say this.repository = repository. So now we've got dependency injection working here, and this class no longer has any idea which repository it is going to use behind the scenes. The only thing left to do is the registration of our repository, so let's go to Startup, ConfigureServices.
This is the place where you register all the services that are going to be used across your service, and the service that we need now is our repository. So let's do services.AddSingleton. There's a bunch of ways to register your services here; I'm going to be using a singleton. A singleton is nothing else other than having one copy of an instance of a type across the entire lifetime of our service. Only one will be created, and it will be reused wherever and whenever it is needed, which is going to help us resolve the problem we have today. To add the singleton, first we specify the interface, IItemsRepository — we may need to add its namespace — and then the concrete type, InMemItemsRepository. That's it; that's how you register your dependency. At this point we should be ready to go, so I'll do F5 and switch to Swagger. We have the same API as before; I'll refresh anyway. Now let's try an exercise. I'll try out our items endpoint and get one of our items, and I should be able to find it now. So, items/{id}, try it out, I'll paste the ID here and execute. We have a breakpoint here; let's see. Yes, this time it resolves. I'll remove that breakpoint and go back to Swagger, and here it is: we got a 200 response code for the request for that item, and here's the body that was returned. As expected, now we only have one copy of the repository hanging around, which is injected into the controller, and that allows us to actually find the data we're looking for. Now, there's one more thing to notice that we should fix right away, which is the fact that the routes we have enabled right now are exposing our Item entity directly to the outside.
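The registration described here would look something like this in Startup.cs (a sketch; the rest of ConfigureServices is omitted):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // One InMemItemsRepository instance is created for the lifetime
    // of the service and reused by every request that asks for an
    // IItemsRepository.
    services.AddSingleton<IItemsRepository, InMemItemsRepository>();

    services.AddControllers();
}
```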
We have to understand that as we build a REST API, we're also establishing a contract with any of the clients that are going to be using it, and that's a contract we should not be breaking easily. The problem we have right now is that since we're exposing Item, which is the entity we're using for dealing with persistence, any time we want to add, modify, or remove any of the fields used in our internal storage, we can potentially break our clients — break that contract — which is really no good as we build these REST services. So how can we avoid exposing the Item contract there? Let's take a look; let's go back to the project and find that entity. We have Item here, and as we said, we are returning it both in GetItems and GetItem. What we're going to do now is introduce what we call a DTO, a data transfer object. A data transfer object is nothing else other than the actual contract that's going to be established between the client and our service. To do that, we're going to introduce a new folder here; let's call it Dtos. And let's add a new file for our ItemDto. Again, namespace Catalog, in this case Catalog.Dtos. The ItemDto is going to be fairly similar to our Item, actually, so let's just copy the item's properties and add the missing namespaces here and there. In this case, it happens that the item we want to return in our methods is pretty much the same as the item we store in and retrieve from the repository. It seems a bit redundant right now, but the benefits become evident as you move forward and start modifying your database.
You don't have to be touching this contract; you can be very careful with the contract so you're not breaking your clients at any time. This gives you a lot of flexibility as you evolve your data store. Now, let's rename this to ItemDto — sorry for that. And now that we have it, it is time to start using it, so let's go back to ItemsController. At this point, what we need to do is convert the items we're getting from the repository from Item into ItemDto. One way to do this would be a simple projection with LINQ: we do a Select — we may be missing the System.Linq namespace here, so let's add it — and project each item into a new ItemDto, adding its namespace as well. Here we bring in the properties: Id equals item.Id, Name equals item.Name, and the same thing for Price. So now our items collection is a collection of ItemDtos; I think we're missing a parenthesis here. And we return those items: the return type is no longer Item, it has to be ItemDto, setting up the contract. Okay, so that should do it. And as you may guess, we have to do pretty much the same thing over here in GetItem, but at this point it would be a bit redundant: why would we want to write this transformation twice with exactly the same properties? One way we can overcome this is by adding an extension method. Let me show you what I mean by that. I'm going to add a new file here; we're going to call it Extensions.cs. What an extension method does is extend the definition of a type by adding a method that can be executed on that type.
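Put together, the DTO and the LINQ projection described above might look like this (the property set mirrors the tutorial's Item entity; exact declarations are an assumption):

```csharp
// Dtos/ItemDto.cs — the contract exposed to API consumers,
// declared as a C# 9 record type with init-only properties.
public record ItemDto
{
    public Guid Id { get; init; }
    public string Name { get; init; }
    public decimal Price { get; init; }
    public DateTimeOffset CreatedDate { get; init; }
}

// In ItemsController: project each stored Item into an ItemDto
// so the persistence entity is never exposed directly.
[HttpGet]
public IEnumerable<ItemDto> GetItems()
{
    var items = repository.GetItems().Select(item => new ItemDto
    {
        Id = item.Id,
        Name = item.Name,
        Price = item.Price,
        CreatedDate = item.CreatedDate
    });

    return items;
}
```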
In this case, we're going to add a static class, because for extension methods you have to use a static class; let's call it Extensions. Then we declare one method: public static, returning ItemDto, and we're going to call it AsDto. It's going to operate on the current item — that's what `this Item item` means. Again, let's add some namespaces here. Because this method receives an item via `this`, any Item can now have a method called AsDto that returns its ItemDto version. At this point, we can take advantage of what we did in the controller; this is what we used to create the ItemDto. So we can say: return a new ItemDto built out of the item that we received. And with that, we have an extension method ready to be used. Now, back in the items controller, instead of all of this, we can say: each item is projected into item.AsDto(). That's all it is; let's collapse this a bit. We can also use the same method over here, so when we get a single item, we return item.AsDto(); of course, we need to change the return type into our DTO contract. Actually, I'll do it this way: let's get the item first, check if it is null, and if it's not, then we return the DTO. Okay, now that we have done that, let's see how it goes. I'll do F5 again, go back to Swagger, and refresh. Let's see if this still works: items, try it out, execute. And yes, just as before, we get the list of items. But this time, if you scroll down, you will see in the schemas that the contract we're exposing to our consumers is no longer Item but ItemDto, with the properties listed right here. In this fourth part of the .NET 5 REST API tutorial, we introduce additional controller actions for creating, updating, and deleting items.
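The extension method described here can be sketched as follows:

```csharp
// Extensions.cs — extension methods must live in a static class.
public static class Extensions
{
    // "this Item item" means AsDto can be invoked on any Item
    // instance, e.g. item.AsDto().
    public static ItemDto AsDto(this Item item)
    {
        return new ItemDto
        {
            Id = item.Id,
            Name = item.Name,
            Price = item.Price,
            CreatedDate = item.CreatedDate
        };
    }
}
```

With it in place, the controller's projection collapses to `repository.GetItems().Select(item => item.AsDto())`, and the single-item action can simply `return item.AsDto();` after the null check.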
We will also learn how to validate the incoming DTOs to prevent invalid data from landing in the service. Today, you will learn how to create resources with POST, how to validate the values of DTO properties, how to update resources with PUT, and how to delete resources with DELETE. It is time now to introduce the rest of our routes: a POST route to create an item, a PUT route to update an item, and a DELETE route to be able to delete an item. Let's start with POST. Before doing anything else, we have to update our repository to support that route for creating an item, so let's do that. I'm going to start again by going to the interface; that's the first thing to do. Let's declare void CreateItem(Item item). This new method returns nothing; it just receives the item that needs to be created in the repository. Now let's switch back to the concrete InMemItemsRepository, and I'm going to say Implement Interface, which brings in the new method right there. Since InMemItemsRepository is in-memory, this is as straightforward as just saying items.Add(item). That's all it is. Now we want to expose this in the controller, so it's time to add a route to the controller. But before doing that, we have to realize that the client is going to be sending these items, and we have to establish another contract for receiving them. It cannot be ItemDto, because we don't need as many properties as in ItemDto for the creation of an item. Let's see what I mean by that. I'm going to go to our Dtos folder and create a new file; let's call it CreateItemDto, with namespace Catalog.Dtos. Just like before, we're going to be using a record type for this, which is pretty convenient for DTOs.
Now let's look back at ItemDto and see what makes sense to be sent by the client when creating an item. Normally, the ID is auto-generated on the server side, so we don't need to be passing in that ID. We do need a name and a price. And likely the created date is also going to be generated on the server, so we're only going to include these two: Name and Price. That will be it for our CreateItemDto. Now let's see how we can use it in the controller. Back in the controller, let's see how we declare this POST route. It's going to be public ActionResult, because again, we could return more than one thing from this method. The common convention for a POST, or create, method is to create the item and return the item that got created, so we should be fine returning ItemDto. The conventions here will vary — some people create their own response object here — but ItemDto happens to work fine for us in this case. The method is going to be called CreateItem, and it receives a CreateItemDto, which is our input contract; let's call the parameter itemDto. Then let's qualify this with the right verb, which is [HttpPost]. And just for documentation: this is going to be invoked when somebody does a POST into the items route, with the correct body, of course. So how do we create an item? Very straightforward: we're going to say Item item = new() — we don't have to repeat the type, thanks to C# 9 — and then: Id = Guid.NewGuid(), which is where we actually generate the ID for the item; Name = itemDto.Name; the same for Price; and then the created date, as you would expect, DateTimeOffset.UtcNow. That's the item.
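The creation contract described above only carries the fields the client actually provides (a sketch):

```csharp
// Dtos/CreateItemDto.cs — Id and CreatedDate are generated on the
// server, so the client only sends a name and a price.
public record CreateItemDto
{
    public string Name { get; init; }
    public decimal Price { get; init; }
}
```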
And now we take advantage of the repository method we just created: CreateItem, and the item goes in there. Once the item has been created, the convention is to return the item that was created, and also a header that specifies where you can go to get information about that created item. To do that, we can use CreatedAtAction; you could also use CreatedAtRoute, that's another way to do it, but CreatedAtAction will work fine for us. What we specify here is the action that reflects the route for getting information about the item, and that's going to be our GetItem action right here. So we say nameof(GetItem), and then, to specify the ID that's going to be passed to that route, let's create a simple anonymous type with id set to our generated ID. And finally, the actual object that's going to be returned, which is the item; again, let's take advantage of our AsDto extension method, so we take the item we created and convert it. And that's it. Let's do F5 and see if this works. Back to Swagger; let's refresh. Now we have our POST route over here, as you can see, and our CreateItemDto is also showing up as a new contract that we're exposing to our clients. Let's go to POST, try it out, and as you can see, now we only need to provide a name and a price. For the name, let's bring in something like another type of sword — a "Deadly Sword" sounds like it would be a bit expensive, let's say 35. Let's execute and see what we get. As you can see, here's the request URL, the same as the GET, but in this case it is a POST. We got a 201, meaning Created. So it created a new sword.
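The complete POST action described across these steps might look roughly like this:

```csharp
// POST /items
[HttpPost]
public ActionResult<ItemDto> CreateItem(CreateItemDto itemDto)
{
    // Target-typed new (C# 9): the type is inferred from the declaration.
    Item item = new()
    {
        Id = Guid.NewGuid(),              // ID generated server-side
        Name = itemDto.Name,
        Price = itemDto.Price,
        CreatedDate = DateTimeOffset.UtcNow
    };

    repository.CreateItem(item);

    // 201 Created, plus a Location header pointing at GET /items/{id}.
    return CreatedAtAction(nameof(GetItem), new { id = item.Id }, item.AsDto());
}
```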
And as you can see here, we have a Location header that specifies where we can find that item. If we take this ID, go to our GET route, try it out, paste it here, and execute, you can see this is the route that was provided in the Location header from the POST, and it actually is able to find the Deadly Sword we just created. In fact, let's also get the full list of all the items: try it out, execute. Now we don't have three, we have four items, including the Deadly Sword. But let's also try one more thing: what happens if I try to create something without a name? Does that make sense? Well, let's try it out. There it is: that request was accepted, and now we have an item with a null name, which is totally unacceptable. In fact, if we go to the GET route and execute, we can see we have an item with null, which is pretty bad. How can we protect ourselves against that situation? There's this thing called data annotations, which is something we can add to our DTO to prevent that situation. Back in CreateItemDto, what we can do is declare that this field is required — and I'm going to add the data annotations namespace there. So the name has to be provided, and the price has to be provided. For price, let's do one more thing: let's say there's a range of possible values, because we should probably not accept a negative value, or even zero, here. Let's say we're only going to accept values from, say, 1 to 1000; that should be a valid range for us. Just by doing this, we are protecting the values that are going to be coming into the controller. So I'm going to do F5 again and see how that goes. Back in Swagger, I'm going to collapse this route, go back to POST, and just try to do the same thing again: execute without a name.
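With the annotations applied, CreateItemDto would look like this (a sketch using the System.ComponentModel.DataAnnotations attributes):

```csharp
using System.ComponentModel.DataAnnotations;

public record CreateItemDto
{
    [Required]           // null or missing name => 400 Bad Request
    public string Name { get; init; }

    [Required]
    [Range(1, 1000)]     // rejects zero, negatives, and anything above 1000
    public decimal Price { get; init; }
}
```

ASP.NET Core evaluates these attributes automatically during model binding, so invalid bodies never reach the action method.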
Now we get a 400, a Bad Request error, and it clearly says here that the Name field is required. So now data validations are coming into place, and we must provide a name. Let's actually play with the price here: let's try a negative number and see what happens. There it is: the price must be between 1 and 1000. So data annotations are pretty useful for validating our DTOs. Now it's time to implement our update route, so let's go back to the source code and close the terminal. Just like before, let's go to the repository interface and add the relevant method. This is going to be void UpdateItem, and it receives the item to be updated, very similar to CreateItem. Now, back in the concrete implementation, I'll say Implement Interface again, which brings in the UpdateItem method. So how do we update this item? Since it is an in-memory list, really the only thing we have to do is find the relevant item and replace it with the incoming item. To do that, first find the index of the relevant item: I'm going to say items.FindIndex, where the existing item's Id matches item.Id. That just finds the index of the item we're looking for. Once we've found it, we can do items[index] = item. That's all it is; we update the item in the right location. Now it's time to go back to the controller — well, almost, because as you may realize, at this point we do need some DTO to receive the input for the update route. Even though it's going to be pretty much the same as CreateItemDto, it is good practice to have another DTO for this case, because even if right now it's the same thing, eventually an update could mean something different than a create.
So let's create UpdateItemDto, which is a copy of CreateItemDto: it has pretty much the same properties, and the same Required and Range annotations. Now let's go back to the items controller and create the update route. This is going to be, again, public ActionResult. In this case, the convention for PUT is to not return anything — what we call NoContent — so there's going to be no type here other than ActionResult. It's going to be called UpdateItem, and we receive two things: the first is the Guid id of the item, and then the UpdateItemDto; let's call it itemDto. Let's not forget to add the correct verb, which is [HttpPut]. And just as documentation: the route is going to be a PUT into /items/, and then the piece that we're actually missing here: we need to also specify the template, which in this case is "{id}". That means that when we do a PUT, we have to specify the ID, like this: PUT /items/{id}, and it will hit this method. So, how do we do an update? The first thing is to find the item: existingItem, using the repository — we already have a method for this, GetItem, passing the ID. Then it would be great to verify that this item actually exists: if existingItem is null, then we couldn't find it, so we just return NotFound(), and that's the end of the story for that branch. If we found it, what we're going to do is create a new item, which is the updated item in our system. In this case we say: Item updatedItem equals, essentially, our existing item, but with a couple of differences: we need to use the Name from the provided DTO
and the Price from the provided DTO as well. Now, here I'm using one of the niceties of the record types that I mentioned earlier, which is the with expression. What's happening here is we're saying: take this existing item and create a copy of it with the following two properties modified to new values. That's a very nice addition to records: it lets me use what is really an immutable type, but still modify some properties on initialization of the copy. So updatedItem is just a copy of existingItem with a couple of properties changed. Now that we have this updated item, we can go ahead and call repository.UpdateItem, the method that we just created, and send in the updated item. And like I said, the convention is to return NoContent(): nothing to report, just return. Let's try this out: F5, and back to Swagger, refresh. Here it is, our PUT route. Before doing a PUT, let's actually get one of our items: try it out, execute. Let's say we're going to modify our Potion. I'm going to collapse these and open PUT. For PUT, we have to provide an ID and the updated values here. This is a Potion; let's rename it to, say, Super Potion, and let's bump the price to, I don't know, let's say 29. Execute. And just as expected, we get a 204, which is No Content, and you can see the route that was executed here. Then, if this succeeded, we should be able to get the updated version, so I'll just get the full list of all the items we have now and see what happens. And there it is: no longer Potion but Super Potion, with the updated price. Also notice down here that we have this UpdateItemDto available now as a new contract. So that's update; really, the last thing to add is our delete route. So let's go back to the project.
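The PUT action and the `with` expression described above can be sketched like this:

```csharp
// PUT /items/{id}
[HttpPut("{id}")]
public ActionResult UpdateItem(Guid id, UpdateItemDto itemDto)
{
    var existingItem = repository.GetItem(id);

    if (existingItem is null)
    {
        return NotFound();
    }

    // Records are immutable; "with" creates a copy of existingItem
    // in which only Name and Price take new values.
    Item updatedItem = existingItem with
    {
        Name = itemDto.Name,
        Price = itemDto.Price
    };

    repository.UpdateItem(updatedItem);

    return NoContent(); // 204, the usual convention for PUT
}
```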
Just like before, back in the repository interface, let's declare void DeleteItem; all we need for deleting an item is to know its ID. Back in the repository, let's implement the interface for DeleteItem. Deleting the item is going to be very similar to updating it: the first thing we have to do is find the index of the item, and then we can just say items.RemoveAt at that index. That's all it is. Now we can move on to the controller. In this case, we're not going to need another DTO, because the only thing that's needed here is a simple ID. So let's implement the controller action: public ActionResult DeleteItem, and just like with update, we're going to return NoContent, so ActionResult DeleteItem(Guid id). Let's add the verb: this is going to be [HttpDelete]. Again, as documentation: this is going to handle DELETE /items/, and then — let's not forget our template, "{id}" — so DELETE /items/{id}. To perform the deletion, it's very similar to update: let's try to find the item first (I'll just copy that piece there), return NotFound if we can't find it, and then we go ahead and call repository.DeleteItem and, just like we did before, return NoContent(). That's all it is, so F5 and back to Swagger, refresh again. Now you can see we have our DELETE action available. Let's copy the ID of one of our items and see if we can delete it. Let's try to delete that potion: copy, collapse, expand, try it out, paste the ID, and execute. Here's the route; we got a 204, as expected, No Content. If I try to get the items again, let's see what happens: we're down to two, so no more potion over here. And that will be the end of our routes. In this episode of the .NET 5 REST API tutorial, we will see how to store our entities in a persistent store.
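The delete pieces described here might look roughly like this (the list field name in the repository is an assumption):

```csharp
// In InMemItemsRepository: find the index by ID and remove in place.
public void DeleteItem(Guid id)
{
    var index = items.FindIndex(existingItem => existingItem.Id == id);
    items.RemoveAt(index);
}

// DELETE /items/{id}
[HttpDelete("{id}")]
public ActionResult DeleteItem(Guid id)
{
    var existingItem = repository.GetItem(id);

    if (existingItem is null)
    {
        return NotFound();
    }

    repository.DeleteItem(id);

    return NoContent();
}
```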
Specifically, in a MongoDB database: we will implement a simple MongoDB repository that can replace our existing in-memory repository with minimal changes to our service. Today, you will learn how to implement a simple MongoDB repository, how to run MongoDB as a Docker container, and how to use Postman to interact with the REST API. Starting with this episode, you will need a couple of other things to follow the video step by step: Docker, which we will use to run a local MongoDB instance, and Postman, which we'll use to interact with the REST APIs. Now let's think about the scenario as it is right now. We have our user who is trying to manage his items via the browser, and he will do that by reaching out to our REST API — more specifically, by reaching out to the items controller, which is where all of our routes live at this point. Now, the items controller interacts with the InMemItemsRepository to manage these items, and the items are actually stored inside the in-memory repository as a simple items collection. But what happens if the REST API for any reason stops, or is restarted? It could happen either explicitly or unintentionally, but it is a really common scenario that a service would need to be restarted. If this happens, of course, our items collection is going to go away, because it's just a collection in memory. This is not desired; we need to figure out a way to keep these items alive beyond the lifetime of the REST API. For this, we have a few options I can think of. The first, very basic option would be to use files: you can think of having one file for each of the items in the repository. But really, the most common option these days would be to use a database. In terms of databases, we have thousands of options, but you can categorize them into relational and NoSQL databases. In this tutorial, we will go for a NoSQL database.
And the reason for this is the benefits that it offers. Beyond the fact that NoSQL databases are one of the most popular options these days, there's the fact that you won't need a schema or SQL to interact with the database. You don't have to learn SQL — yet another language here — you can just stick to your object-oriented programming, in our case C# APIs. You also get low latency and high performance, because there's no need for the strong consistency you would have in a relational database, and these databases are highly scalable. In our case, we'll go specifically for MongoDB, which is a NoSQL database, and specifically a document store type of database: it stores the entities as documents, specifically as JSON documents, inside the database. So this is what we're going to use for this tutorial. You can think of it this way: now we will have this database living outside of our REST API, and when the controller receives a request, it will hand it over to a new repository called MongoDbItemsRepository; that's what we're going to create here. Even if our service is stopped and restarted, the database will not be restarted at all; our data will be safe in there, and that way we can keep our items beyond the lifetime of the REST API. Before we start implementing our MongoDB repository, I wanted to show you one more tool that you will probably find useful as you interact with your APIs, and that tool is Postman; you can get it from the Postman downloads page, if you haven't done so already. And what would you use Postman for? If you remember, so far we've been using this page, the Swagger UI, for interacting with all of our APIs. However, the problem is that eventually you may not want to be opening web pages to interact with each of your APIs, and these APIs may not even be on your host; they may be somewhere else.
And wherever else they are, there may not be a Swagger UI, so you do need a way to interact with them. Plus, you may need some additional capabilities that are just not available on that page. That's where Postman can help you. So in this case, let me open Postman right here. How can you interact with your APIs in Postman? It's as simple as clicking the plus sign here. The first thing you have to do is pick a verb to use; in our case, let's just try a GET. Then you're going to need the request URL, and all of those URLs are going to start with your host. So let's see what our host is; let's do an F5 in VS Code. As you may recall, our host is in one of the last lines here: for the HTTPS endpoint it is going to be localhost:5001. I'm going to copy this, go back to Postman, and paste it. Then, if you remember our route, going back to VS Code, our routes start with items, right here. So that will be the route we want to use in Postman; just type items here, and that should be enough to perform a GET. So I'm just going to click Send. Now, if you get this issue that says "SSL Error: Unable to verify the first certificate", it is because of the SSL verification that Postman performs. That verification will just not work with the self-signed certificate that comes with .NET 5. In this case, what you want to do is just disable SSL verification, and that should be fine. So you can just click here, and that will run the request again. As you can see, it ran the request, and we have the results over here: the same results that we had in Swagger UI before, but now we are interacting more directly with the API. The same way we did a GET, you can do POST, PUT, and a bunch of other verbs here; we will do so as we move forward with this episode of the tutorial. So now let's go back to VS Code.
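As an aside, the same request can also be issued from a plain terminal with curl, which is not mentioned in the video but illustrates the same point about not needing Swagger UI; the `-k` flag plays the role of Postman's disabled SSL verification for the self-signed dev certificate:

```shell
# GET all items from the API described above;
# -k skips TLS verification for the .NET 5 self-signed dev certificate
curl -k https://localhost:5001/items
```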
And I'm going to stop the host here and close the terminal. So the first thing that we want to do is implement a new repository that is able to interact with a MongoDB database. For that, I'm going to create a new file; we're going to call it MongoDbItemsRepository.cs. The namespace is Catalog.Repositories again, and it's going to be public class MongoDbItemsRepository. And just as with our InMemItemsRepository, we are going to implement IItemsRepository. We may need to import a namespace here, and then I will implement the interface; just by doing that, VS Code has scaffolded all the methods that need to be implemented to comply with this interface. Now, in order to interact with MongoDB, we're going to need what is called a MongoDB client. The client is a component provided by the creators of MongoDB; it will be kind of the adapter that we use to interact with MongoDB. So we'll need to inject that, as with everything else; we need to inject that dependency into our repository so that we can interact with it. The first thing I'm going to introduce here is our constructor: public MongoDbItemsRepository. And here is where we need to receive an instance of our MongoDB client. Where do we get this MongoDB client from? For that, we'll have to add a NuGet package. To add the NuGet package, what I'm going to do is open up a new terminal, and here I'm just going to type dotnet add package MongoDB.Driver and hit Enter; that's going to go to NuGet and grab the MongoDB.Driver NuGet package. And if you go to Catalog.csproj, you're going to see that now we have the dependency right here. Going back to the repository, we are able to start doing the injection.
So let's see: what we're going to receive here in the constructor is what we call an IMongoClient; let's import the correct namespace here, MongoDB.Driver, and we will call it mongoClient. Now, what is it that we want to store? What we want to store here is not really the client, but what we call a collection. The collection is the way that MongoDB associates all of these entities together. So I'm going to declare a variable here; I'll make it read-only, because we only modify it in the constructor: a read-only IMongoCollection. And you have to specify the type of the items, the type of the entities, or documents actually, in this collection; our type is going to be our Item entity. Let's call it itemsCollection. However, before we can get a collection, we need a couple of other details, which are the database name and the collection name. Usually, all of your documents are going to be grouped into collections, and you can have one or more collections in a database. So the first thing we have to add here is the name of our database; let's just add a constant here, call it databaseName, and probably a good database name here would be just catalog. Now let's add a collection name: private const string collectionName, and the collection is going to be called items. Now that we have these available, let's go back to the constructor, and let's actually close the terminal for now. What we can do is the following. First, let's create a reference to the database: I'm going to say IMongoDatabase database equals mongoClient.GetDatabase(databaseName). That will get us a reference to that database. Now we need a reference to the collection: itemsCollection, the variable that we declared before, equals database.GetCollection, with the type of the item and then the name of the collection.
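Assembled from the steps above, the class so far looks roughly like this. This is a sketch: the Catalog.Entities namespace for the Item entity, and the exact shape of the scaffolded stubs, are assumptions based on earlier parts of the tutorial; each stub gets a real implementation as the video proceeds.

```csharp
using System;
using System.Collections.Generic;
using Catalog.Entities;  // assumed location of the Item entity
using MongoDB.Driver;

namespace Catalog.Repositories
{
    public class MongoDbItemsRepository : IItemsRepository
    {
        private const string databaseName = "catalog";
        private const string collectionName = "items";
        private readonly IMongoCollection<Item> itemsCollection;

        public MongoDbItemsRepository(IMongoClient mongoClient)
        {
            // GetDatabase/GetCollection are lazy: the database and collection
            // are created on first use if they don't exist yet
            IMongoDatabase database = mongoClient.GetDatabase(databaseName);
            itemsCollection = database.GetCollection<Item>(collectionName);
        }

        // Members scaffolded by VS Code; implemented one by one below
        public void CreateItem(Item item) => throw new NotImplementedException();
        public Item GetItem(Guid id) => throw new NotImplementedException();
        public IEnumerable<Item> GetItems() => throw new NotImplementedException();
        public void UpdateItem(Item item) => throw new NotImplementedException();
        public void DeleteItem(Guid id) => throw new NotImplementedException();
    }
}
```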
And the good thing about this is that both the database and the collection will be created the first time they are needed. So it doesn't matter which API we use to interact with the database and collection; MongoDB, or rather the driver, will detect if we don't have them, and they will be created automatically for us, so we don't have to worry about it. So we have a few methods to implement. To get started, we'll go for the CreateItem method, and we'll implement them one by one, exercising each one as we move forward in this video. To implement CreateItem, what you want to do is just make use of that items collection. So you can say itemsCollection.InsertOne, and then you just pass the reference to the item; in this case, it will be just item. At this point, you may be wondering: where is this MongoDB database that we're going to interact with? Because, yes, we have the code here ready to create an item, but we don't have a database. There are a couple of ways to get a MongoDB database onto your box: you can either install the database via the MongoDB installer, or you can run the database as part of a Docker container. We're actually going to go for the second one. The first concept to understand on that side is a Docker image. We're not going to go deep into Docker concepts here; we're actually going to talk about that in a future video. But for now, you can think of a Docker image as a standalone package of software that includes everything needed to run an application; this application, in our case, is MongoDB. Everything is packaged in this Docker image. Then, when we run or execute this Docker image, it becomes what we call a Docker container: a running instance of a Docker image. That Docker container is going to run in the Docker engine. So how do you get this Docker engine onto your box?
You just go to the Docker download page, which I can show you now, right here. You go to this page, you pick your platform, and then you can go ahead and download and install Docker on your box. Then you have a Docker engine, and you're able to run any of the Docker images available publicly, like MongoDB, or perhaps some private Docker images that you may be storing in your own container registry. In this case, we're going to go for the MongoDB public Docker image; let's see how we can acquire it and run it. First thing we're going to do is go to the terminal, say New Terminal, and type the following: docker run, and then we're going to use the -d and --rm modifiers. -d is so that we don't have to attach to the process; we just let it go. And --rm is so that the container is destroyed after we close the process. Then we're going to give it a name, mongo, so that we can easily recognize which container this is. And then we're going to open a port; that port is going to be 27017:27017. This syntax means that we want to open, let's say, kind of a window or a view into the Docker container. MongoDB usually listens on port 27017, so what we have to do is open some port on the local machine that can be mapped to the MongoDB port inside the Docker container. This is the way you would do it; you could actually assign any other port externally on the left side, but on the right side you have to point to the MongoDB port. Finally, we're going to specify a volume, and the purpose of this is so that you don't lose the data that has been stored in MongoDB when you stop the Docker container. If you don't do this, then you will lose all that data as soon as you stop the container. So let's declare this volume: mongodbdata is going to be the name, and it is going to be mapped to /data/db.
Here, /data/db is the usual location where MongoDB stores its data inside the container, and we're just saying that we're going to map this location called mongodbdata on our local machine to the /data/db directory inside the Docker container. Finally, we have to specify the name of the image, in this case mongo, and then I'm going to hit Enter; perhaps I can expand this terminal a bit. The very first time you run a Docker image, it's going to be pulled down, in this case from Docker Hub, onto the machine, so that may take a while depending on your internet connection. As you can see, there are multiple lines here; each of them represents what we call a layer, so each of them holds some piece of this Docker image, including all the dependencies. Like I said, this is just the first time; next time it's going to be blazing fast, as long as you have those pulled layers already on your box. So now we have the Docker image pulled onto the box, and even a Docker container running. If I actually do docker ps, I can see that I have the container up and running and listening on port 27017. I'm going to close the terminal. What we need to do now is to be able to point to that Docker container. To do that, we need to write a little bit of configuration, so I'm going to open up appsettings.json. The two basic pieces of information that we're going to need in order to talk to the database are the host and the port. For that, I want to introduce a couple of new settings here; let's call these MongoDbSettings. Like I said, we're going to need a host and we're going to need a port. In the case of a MongoDB instance that's running on your own machine, for the host you can just use localhost, and as we said, the port that we opened in the Docker container was 27017. Those are the details that we need to talk to MongoDB.
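The full command described across the last two paragraphs, assembled in one place (the container name mongo and the volume name mongodbdata are simply the labels chosen in this walkthrough):

```shell
# -d: run detached; --rm: delete the container when it stops
# -p 27017:27017 maps local port 27017 to MongoDB's default port in the container
# -v mongodbdata:/data/db persists the data in a named volume across restarts
docker run -d --rm --name mongo -p 27017:27017 -v mongodbdata:/data/db mongo

# Verify the container is up and listening
docker ps
```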
Now, in order to read these settings into our service, there are a bunch of ways to do it, but I think the best way is to declare a class that represents the settings, so that we can easily interact with the multiple settings from our C# code. To do that, we're going to introduce a folder here; let's call it Settings. You could also call it Configuration, or Options; there are a bunch of ways, but I'll go for Settings. So let's add a file here and call it MongoDbSettings.cs. Again, the namespace is Catalog.Settings, and in this case let's declare a class called MongoDbSettings. Here, we're going to declare those settings that we saw in appsettings.json; let's declare them as properties. Our first property is going to be public string Host, and the next one public int Port, since this is an integer. And then, let's actually take advantage of this class to calculate the connection string that's going to be needed in order to talk to MongoDB. What we can do here is actually a read-only property. So let's do this: string, let's call it ConnectionString. We're going to remove the set, since we don't need any setters in there, and in the get we're going to open braces, so that it becomes a read-only property; here we can just return the calculated connection string. We need to do a little bit of string interpolation here. Normally, a MongoDB connection string looks like this: mongodb://, then comes the host, and then comes the port. Those we're going to get from the properties Host and Port, the properties that we declared over there. With this, we have an easy way to grab the connection string, as long as we have populated Host and Port. Now that we have this, it is time to perform the registration of that MongoDB client, the IMongoClient that we injected into the MongoDbItemsRepository. This client has to be registered somewhere, right?
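The settings class just described can be sketched like this; the property names have to match the keys under the MongoDbSettings section of appsettings.json for the configuration binding to work:

```csharp
namespace Catalog.Settings
{
    public class MongoDbSettings
    {
        // Map to "Host": "localhost" and "Port": 27017 in appsettings.json
        public string Host { get; set; }
        public int Port { get; set; }

        // Read-only property: composes the connection string from Host and Port
        public string ConnectionString => $"mongodb://{Host}:{Port}";
    }
}
```

With Host set to localhost and Port set to 27017, ConnectionString yields mongodb://localhost:27017.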
And as of right now, what we know is that we do that kind of thing in Startup. So let's open up Startup, and here let's go to ConfigureServices; that's where you register all your services. We're going to do services.AddSingleton, because we only want one copy of the IMongoClient for the entire service: AddSingleton of IMongoClient. Let me import the namespace here. Yep. And then here, instead of just declaring the explicit type of the dependency to inject, as we did over here in this other line, what we're going to do is actually construct that type explicitly, so that it is injected with some additional configuration. We do this because we have to specify a connection string that the client is going to need. So what we're going to do is say: okay, we're going to receive a service provider, and then we open up braces and add an arrow there. And then, over here, we can do things like, first, let's actually grab the settings, an instance of those settings that we have populated in appsettings.json; let's grab them via our MongoDbSettings class. How do we do that? Back in Startup, it's var settings. You can use your Configuration property, the one that we have over here that has been populated by the runtime; you can use that one to say GetSection. Now you need one of the sections in appsettings.json, and we named it MongoDbSettings, which is the same name as our MongoDbSettings class. Therefore, what we can say is just nameof(MongoDbSettings); let me type this properly. Then we need to import that namespace. All right, so that will get us a section, and then let's actually turn that object, which is returned as an IConfigurationSection, into a proper MongoDbSettings, like this. So now we have a settings object.
And now we can actually construct our MongoClient instance with return new MongoClient, passing in settings.ConnectionString, the property that we calculated in that class. With that, we should be ready to register and inject the client into the repository. Now that we did that, it is time to actually flip our service to start using our new MongoDB repository. That we can do in this line: you remember, we previously registered the InMemItemsRepository here, so switching to this other repository is as easy as saying MongoDbItemsRepository. That's all it is. Then there's one more thing we're going to do here, just to make our lives a bit easier, which is to tell the MongoDB client, the driver, how to serialize a couple of types; in this case, the Guids and the DateTimeOffsets. If you remember, our Item entity has both a Guid and a DateTimeOffset, and the thing with MongoDB is that if you don't tell it exactly how you want to reflect these types in the database, they may end up with a representation that's not very friendly, at least not for our learning purposes. So what I'm going to do is say BsonSerializer (let's see if I can get the right namespace), RegisterSerializer, and then we're going to say new GuidSerializer, and it's going to be BsonType (let's get this namespace) .String. That tells the serializer that any time it sees a Guid in any of our entities, it should actually serialize it as a string in the database. We're going to do something very similar with our DateTimeOffsets, so I'll just copy that line, and I'm going to say new DateTimeOffsetSerializer, BsonType.String. We'll see how the data for these properties actually looks in the database in a moment, but for now, we should be ready to start testing this out. So I'm going to hit F5 here, and I'm switching again to Postman.
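Putting the ConfigureServices changes together gives roughly the following sketch; it assumes the usual usings for MongoDB.Bson, MongoDB.Bson.Serialization, MongoDB.Bson.Serialization.Serializers, and MongoDB.Driver are in place, and that the rest of the registrations from earlier episodes are unchanged:

```csharp
// Inside Startup.ConfigureServices
public void ConfigureServices(IServiceCollection services)
{
    // Store Guids and DateTimeOffsets as readable strings in MongoDB
    BsonSerializer.RegisterSerializer(new GuidSerializer(BsonType.String));
    BsonSerializer.RegisterSerializer(new DateTimeOffsetSerializer(BsonType.String));

    // One IMongoClient for the whole service, built from the configured settings
    services.AddSingleton<IMongoClient>(serviceProvider =>
    {
        var settings = Configuration.GetSection(nameof(MongoDbSettings))
                                    .Get<MongoDbSettings>();
        return new MongoClient(settings.ConnectionString);
    });

    // Swap the in-memory repository for the MongoDB-backed one
    services.AddSingleton<IItemsRepository, MongoDbItemsRepository>();

    services.AddControllers();
    // ...Swagger and any other registrations stay as they were
}
```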
Because we have not implemented the GET yet, we can start with a POST. To do a POST, I just opened a new tab, and I'm going to switch the verb to POST. We need the request URL, so we can grab that from our previous request right here; it should be that same URL, localhost:5001/items. But what we're going to need in this case is a body, right, the payload that we're going to send to our API. So we'll switch to raw here, and then we're going to switch here to JSON. Let me minimize this a bit. Here, we will just type the JSON that represents the entity that we want to create; that JSON is going to be composed of, if you remember, just a name and a price. So let's pick a name and a price for our first item in the database. Let's say we go for a Great Axe, and the price is going to be, let's say, 22. That's all we need for a POST, and then we're going to hit Send. As you can see, the item got created: 201 Created. And we can see the response that we got from the API: we got an ID, generated in MongoDB, and a created date, created via our controller. We can also check out the headers that we received here, and since we returned a CreatedAtAction result, you can see how the Location header has been populated with the proper URL to retrieve the details of that item. But how does this item actually look in the database, if you're curious? How can we tell? Let's go back to VS Code, stop this, and close the terminal. What we can do is install a MongoDB extension for Visual Studio Code. So I'm going to open the Extensions hub over here, and I'm going to type MongoDB; in this case, it's the first entry over there, so just click Install. And with that, we have a way to talk to our MongoDB instance. I'm going to close this one.
And if you look, there's a new item on the left side, MongoDB. When you click it, there's a connection already defined there for localhost:27017. This may fail the first time you try to connect, so if it does fail, what you can do is what I'm going to do: remove this connection and add it again. Add Connection; localhost and 27017 are the defaults, so keep those; and we're not using authentication at this point, so let's say no; and then hit Connect. You're going to see on the left side that you have a connection to your local instance of MongoDB. I'm going to collapse these. As you can see, there are a few databases here, and some of them are really default databases for MongoDB; the one we care about, once I close the others, is the catalog database, which I'm going to open. As you can see, we already have both the catalog database and the items collection over there. Let's expand these, and you can see that we have one document. Let's open that up, and this GUID here should match the GUID of the item we created. So let's see: fc27... Open up Postman; it's right here, in the body of the response: fc27. So that's the item. Back to VS Code, and if you click over there, you can see the actual data that's stored in there. As you can see, this is expected, as this is a document database: it is storing the data as JSON directly in the database; we have an ID, the name, the price, and the created date. If we had not added those two serializer lines earlier, the data you would see for both the ID and the created date would be in a rather different format, one that is not very human friendly. But you may want to play with that decision a little, depending on your requirements. I'm going to close this, and I think it's time to implement the remaining routes.
Or rather, the other methods in the repository; let's actually go for the get items route first. To implement GetItems, the only thing you have to do is this: return, against our collection, itemsCollection, and we're going to say Find. Here, since we want all the elements in that collection, we're going to just say new BsonDocument (let me import the namespace), and that will find all the documents. Then you're going to say ToList, and that will give us a list of all the items in the collection. So that's a straightforward way to request all the items; it's just one of the ways to do it, there are a few others, but this will get all the items in there. With that, we can try to get items, so I'm going to hit F5 again and go back to Postman. As it happens, we already have the GET tab open here, so it should be as easy as running it again; I'm just going to click Send. And there it is: we're getting our collection of items. Since we only have one right now, why not create a few other ones, just to have a small list here, as we had before? So I'm back in the POST tab, and I'm going to add a couple of other items. Let's say, how about an Antidote; let's make it 15. Hit Send: 201 Created. And then one more; let's call this one Golden Sword. It's going to be more pricey, say 40. Hit Send: Created. And then, if we go back to GET and hit Send, we now have those three items available. Okay, now what if we want to get just one item? For that, we need to implement our GetItem method here. But before we can implement this, there's one thing we're going to need, which is what MongoDB calls a filter definition builder. That's a way that you can filter the items that you want to return as you find them in the collection.
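The GetItems implementation just described reads like this in the repository; an empty BsonDocument acts as a match-everything filter:

```csharp
public IEnumerable<Item> GetItems()
{
    // new BsonDocument() is an empty filter, so Find matches every document
    return itemsCollection.Find(new BsonDocument()).ToList();
}
```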
Since this is a pretty common object that we're going to use across multiple methods here, I'm going to actually just declare it up here as its own class variable. So I'm going to say private readonly FilterDefinitionBuilder, then we have to specify the type, which in this case is going to be Item, and let's call it filterBuilder. We're going to use the Builders object from MongoDB, of type Item again, dot Filter, so that we have a reference to this filter object; you're going to see how we use it now, for GetItem. For GetItem, we're going to do this: first, let's build the filter. We're going to say var filter equals filterBuilder.Eq, as in "equals", where item => item.Id, so the ID of the item, has to match the ID we received as a parameter. That's the filter. Then we just have to do something similar to before: itemsCollection.Find, we pass the filter, and then, since we don't want all the items, just the one item that it should find, we're going to say SingleOrDefault. That's all it is. With that, I'm going to hit F5 again and go back to Postman. This time, I'm going to open up another tab and paste the route for GET, but now I have to specify one of the items. From our previous exercise, let's say that we want to look up information about the last item we inserted, so I'm going to copy its ID and paste it in the route: items/ followed by the item ID, and I'm going to hit Send. And there it is: we were able to query for one specific item, as opposed to all the items. Back in VS Code, it's time to implement our update method, UpdateItem. Similarly to GetItem, we need to introduce a filter so that we can tell which item to update. Now, we have to refine this slightly, because we don't want to have too many variables named item, to avoid confusion; so the existing item is going to be named existingItem.
existingItem.Id should match the ID of the item that we received, which is the item to update. And then, what we do when we find it is itemsCollection.ReplaceOne(filter, item), and that will go ahead and replace that item in the MongoDB database. Let's hit F5 and try it out. Back in Postman, I'm going to copy the GET route, open another tab here, and use the PUT verb. Paste it there, and then we have to switch to Body, raw, and again JSON. Then we bring this down, and we have to put the body here. If we do this, we're going to be updating our Golden Sword, right? The format of this PUT request body I'm going to grab from the POST tab. So let's see, what can we say about this Golden Sword? Let's actually call it Platinum Sword, and let's say that the price is actually much higher, say 75. So this Golden Sword should turn into a Platinum Sword with this change. Let's see: I'm going to click Send, and we get a 204 No Content, as expected, and then the item should have been modified. If you go now back to the GET route for that item and run it, as you can see, it has changed: it is now a Platinum Sword with a price of 75. If we get all the items and hit Send, we see that we have the Platinum Sword as the last item, which is great. So finally, it is time to implement our delete method. Back to VS Code; let's see, DeleteItem. The filter is going to be, again, very similar to our GetItem filter, so I'll just copy that here. In this case, it's as simple as saying itemsCollection.DeleteOne, and here's the filter. And that will do it. So I'll hit F5 once again, go back to Postman, and then again I'll copy the route, open a new tab, and switch from GET to DELETE. Paste the route; we don't need a body, because this is a DELETE; and I'll hit Send. We got our 204 No Content, as expected.
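Gathered in one place, the three ID-based methods from this stretch of the walkthrough look roughly like this, together with the shared class-level filter builder declared earlier:

```csharp
// Shared helper for building filters over Item documents
private readonly FilterDefinitionBuilder<Item> filterBuilder = Builders<Item>.Filter;

public Item GetItem(Guid id)
{
    var filter = filterBuilder.Eq(item => item.Id, id);
    // SingleOrDefault: one match expected; returns null if none is found
    return itemsCollection.Find(filter).SingleOrDefault();
}

public void UpdateItem(Item item)
{
    // Lambda parameter renamed to existingItem to avoid clashing with the argument
    var filter = filterBuilder.Eq(existingItem => existingItem.Id, item.Id);
    itemsCollection.ReplaceOne(filter, item);
}

public void DeleteItem(Guid id)
{
    var filter = filterBuilder.Eq(item => item.Id, id);
    itemsCollection.DeleteOne(filter);
}
```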
And if you go to the GET route once again and hit Send, we no longer have that Platinum Sword; you see there are only two items, and the other one has disappeared. If you wanted to delete yet another one, say the Great Axe, copy its ID into the DELETE route and hit Send: No Content. Back to GET, get all the items, and there is no Great Axe; so yes, it looks like it worked. Then I'll just go back to VS Code, stop this, and close that. What I want you to realize is that we did not have to touch our items controller at all. The only thing we did here, besides adding a bit of configuration and registering the MongoDB client, was create this new items repository that was plugged into the service and that is able, by itself, to do all the logic of interacting with MongoDB; the rest of the service has not changed at all. And that's the great benefit we get from dependency injection and, in this case, the repository pattern. In this episode of the .NET 5 REST API tutorial, we will talk about the asynchronous programming model in .NET 5, why you should care about it, and how to implement it in your REST API by using tasks, async, and await. Today you will learn what the asynchronous programming model is and how to use tasks, async, and await to add asynchronous programming to your REST API. To understand the concept of asynchronous programming, you can think of a common scenario, which is preparing breakfast. When you prepare breakfast, you're going to do a bunch of tasks. For instance, you're going to prepare your pan and heat it; after that, you will go ahead and fry your eggs. That will be followed by perhaps toasting some bread, and after the bread is toasted, you may want to add some peanut butter, or butter, or jelly, whichever you prefer. And finally, perhaps you also want to prepare a glass of juice.
So, a bunch of tasks that, in this case, you have executed sequentially. Done this way, it could take, let's say, 30 minutes to complete. But is that the way you would usually do this? What about something like this instead: you go ahead and heat the pan, but instead of waiting for the pan to heat up, you immediately start toasting the bread. Right? You don't have to wait for the heating to complete. And not only that: after putting the bread in to toast, you could also start preparing your glass of juice; that's totally something you can do at that point. And then, yes, eventually the pan will be heated and you can go ahead and fry your eggs. Also, while that's happening, if the bread has been toasted, you can go ahead and put peanut butter and jelly, or whatever you're going to put, on that bread. So a bunch of things are happening kind of in parallel, as opposed to sequentially. With this kind of sequence of events, you can significantly reduce the time it takes to prepare your breakfast, let's say all the way down to 15 minutes. These are the two models: the first one is what we call the synchronous programming model, making an analogy to our programming models, and the other one would be the asynchronous model. In an asynchronous model, you are not waiting for every single task to complete before starting another one; you start it when you can, and you go back to the previous task when it's time to do so. So then, thinking back to our current scenario: we do have a database, a MongoDB database, and we do have a repository class that's interacting with that database.
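The analogy maps naturally onto tasks: start each long-running step, then await it only when you actually need the result. A toy sketch, where the method names and delays are invented purely for illustration:

```csharp
using System;
using System.Threading.Tasks;

public static class Breakfast
{
    public static async Task MakeAsync()
    {
        // Kick off the slow steps without waiting on each one
        Task panTask = Task.Delay(TimeSpan.FromSeconds(3));   // heat the pan
        Task toastTask = Task.Delay(TimeSpan.FromSeconds(2)); // toast the bread

        PourJuice(); // quick synchronous step while the others run

        await toastTask; // bread is ready: spread peanut butter and jelly
        await panTask;   // pan is hot: fry the eggs
    }

    private static void PourJuice() => Console.WriteLine("Juice poured");
}
```

Run sequentially, the three slow steps would take the sum of their durations; started together, the total is closer to the longest single step.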
Now, interacting with the database is an expensive operation, because you have to perform input/output: you have to go over the wire and talk to that database, and the database may take time to give you back results, depending on where it is and how much load it has at that point in time. So it may take time, and you don't want to be waiting for that database to finish the work. What you're going to do is, instead of making a synchronous call to the database, you're going to make an asynchronous call: you start the work, and while it's finishing, you just go ahead and do something else, if you have something else to do. In the same way, we will have our controller talking to the repository, but now we're going to turn that into an async call: the controller will talk to the repository, but it will not wait for it to finish whatever work it has to do with the database; it will just keep going, doing anything else it can do, and then eventually it goes back to that task to complete the work. And in the same way, whoever calls our controller should also be able to call it in an asynchronous way, so that they don't have to wait for our controller to finish whatever work it is performing in order to continue doing other work. This is what we call being async all the way: your whole call chain is done in an asynchronous way, and that adds a lot of performance and efficiency to the execution of your code. In order to introduce the asynchronous programming model into our REST API, we will need to make a few changes to our repositories and our controllers. Let's start by making the necessary changes to our items repository interface, so let's open up Repositories, IItemsRepository. Here, there are two things we need to do. The first is going to be to make sure that each of these methods returns Task, as opposed to Item, IEnumerable, or void.
The other is to rename each of these methods to have an Async suffix, because that's the convention when you create an API or interface like this: if you have an asynchronous method, you should suffix it with Async, which tells the consumer that the method is an async method. We will actually start with the renaming, because we're going to take advantage of the refactoring capabilities of VS Code, so that the rename happens across the board, as opposed to having to go to each of the files and make the changes. Let me show you what I mean. I'll right-click on GetItem and do Rename Symbol, and then just type the new name here: GetItemAsync. I'll hit Enter, and as you can see, not only did the name change here in the interface; ItemsController has also been modified, and if we look at where we're calling the repository, GetItemAsync is the new name it is using already, without us doing anything else. InMemItemsRepository has also had the method renamed to GetItemAsync, and the same for MongoDbItemsRepository. So the refactoring actually goes all the way. We're going to do the same thing for all of the methods, and to make things even faster, instead of a right-click we'll use F2, the shortcut for this. So you hit F2 and then just type Async (I'm going to copy this suffix here), Enter; F2, Async, Enter; F2, Async, Enter; F2, Async. With that, all four of our methods have been renamed. Now, like I said, the other thing we have to do is make sure that each of these methods returns Task, because that is the way to signal that this is not going to be a synchronous method anymore, but an asynchronous method. So what you do is Task&lt;Item&gt; in this case, and let me import the namespace System.Threading.Tasks.
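As a quick sketch, here is roughly what the renamed interface ends up looking like. The Item shape is a simplified stand-in assumed from earlier episodes, not a verbatim copy of the project file:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Simplified stand-in for the tutorial's Item entity (shape assumed).
public record Item
{
    public Guid Id { get; init; }
    public string Name { get; init; }
    public decimal Price { get; init; }
}

// The repository interface after the refactoring: every method name gets the
// Async suffix, and every return type is wrapped in (or replaced by) Task.
public interface IItemsRepository
{
    Task<Item> GetItemAsync(Guid id);
    Task<IEnumerable<Item>> GetItemsAsync();
    Task CreateItemAsync(Item item);
    Task UpdateItemAsync(Item item);
    Task DeleteItemAsync(Guid id);
}
```

Any class that implements this interface now hands back tasks instead of finished values, which is what lets callers await the work instead of blocking on it.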
And so, like I said, this is saying that when you get an item from this method, you're not going to get the item right away; it's not a synchronous method anymore. You're going to get a task that represents an asynchronous operation, which eventually returns an item, whenever we have finished retrieving that item, in this case from the database. So that turns the method into an async method. We're going to do the same thing for the other methods here: Task&lt;IEnumerable&lt;Item&gt;&gt;, and for the void cases we just turn them into Task. And that is it for this interface; the interface is ready to operate as an asynchronous interface. Next, let's open up MongoDbItemsRepository. Okay, let's go one by one. The first one in this list is CreateItemAsync, so again I'm going to turn void into Task, and then import the namespace. At this point, what we have to do is start invoking the asynchronous versions of the methods that, in this case, the MongoDB items collection offers. This is a common pattern: in this case I'm actually going to open up IntelliSense here, and you're going to see that for InsertOne there's an alternative InsertOneAsync method, and for InsertMany there's InsertManyAsync. This is going to be a common situation for many of these libraries that have to reach out to some external service. Since those operations can be expensive, and they represent input/output operations, you want to offer the capability of executing the operation in an asynchronous way, as opposed to a synchronous way. When you do a synchronous operation, like we had before with just InsertOne, you're actually making a blocking call: the method just stops there, and nothing else can happen until the call comes back into the method, in this case from the database.
And that's exactly what you don't want to do. So just by switching to the asynchronous method, the code will not have to wait for that call to finish, and that will make your entire application way more efficient. That is one piece of what we have to do here to make the asynchronous call. However, we still have a little problem here: we're missing one thing, which is the async and await keywords. By putting async here, just next to Task, and then await here where we make the call, we're adding a little bit of syntactic sugar around the whole method to tell the .NET 5 compiler that this is going to be an asynchronous call, so that we don't have to write even more code dealing with the Task and how to unwrap it. So async and await really help us a lot in defining that these methods are going to turn into async methods. And that will be it for CreateItemAsync. Now let's keep going with the other methods. Let's go with DeleteItemAsync: same thing, we'll do async Task, and here's the line where we call the database; in this case we'll do await itemsCollection.DeleteOneAsync. Yep, and that's it for delete. Next one, GetItemAsync: in this case we're going to do async Task&lt;Item&gt;, because we have to return that item, but as part of a task, and then we do return await itemsCollection.Find(filter).SingleOrDefaultAsync(). Next one, GetItemsAsync: we'll do async Task&lt;IEnumerable&lt;Item&gt;&gt;, and then return await, and here ToListAsync. I think the last one is UpdateItemAsync: once again async Task, and then we just await here: ReplaceOneAsync. And that's all you have to do.
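Putting those steps together, the MongoDB repository ends up roughly like the sketch below. The database and collection names, field names, and the Item shape are assumptions based on earlier episodes, and running it requires the MongoDB.Driver package plus a reachable server, so treat it as a sketch rather than the exact project file:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using MongoDB.Driver;

// Simplified stand-in for the tutorial's Item entity (shape assumed).
public record Item
{
    public Guid Id { get; init; }
    public string Name { get; init; }
    public decimal Price { get; init; }
}

// Implements the project's IItemsRepository interface; every database call now
// uses the driver's async variant instead of the blocking one.
public class MongoDbItemsRepository
{
    private readonly IMongoCollection<Item> itemsCollection;
    private readonly FilterDefinitionBuilder<Item> filterBuilder = Builders<Item>.Filter;

    public MongoDbItemsRepository(IMongoClient mongoClient)
    {
        // "catalog"/"items" are the names assumed from earlier episodes.
        var database = mongoClient.GetDatabase("catalog");
        itemsCollection = database.GetCollection<Item>("items");
    }

    public async Task CreateItemAsync(Item item)
    {
        // InsertOneAsync replaces the blocking InsertOne call.
        await itemsCollection.InsertOneAsync(item);
    }

    public async Task<Item> GetItemAsync(Guid id)
    {
        var filter = filterBuilder.Eq(existing => existing.Id, id);
        return await itemsCollection.Find(filter).SingleOrDefaultAsync();
    }

    public async Task<IEnumerable<Item>> GetItemsAsync()
    {
        return await itemsCollection.Find(filterBuilder.Empty).ToListAsync();
    }

    public async Task UpdateItemAsync(Item item)
    {
        var filter = filterBuilder.Eq(existing => existing.Id, item.Id);
        await itemsCollection.ReplaceOneAsync(filter, item);
    }

    public async Task DeleteItemAsync(Guid id)
    {
        var filter = filterBuilder.Eq(existing => existing.Id, id);
        await itemsCollection.DeleteOneAsync(filter);
    }
}
```

Each method is marked async and awaits exactly one driver call, which is the "async all the way" pattern applied at the data-access layer.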
Like I said, by doing this you're making sure that every time you talk to the database you're not making a blocking call anymore; you're giving the framework the chance to keep doing work while we wait for the database operation to complete. And like I said, that gives a lot more efficiency and performance back to your app. Now that we did this, as you can see, we still have a problem with InMemItemsRepository, which is the original repository we used in the first videos. We could choose to just delete this repository at this point, because we're not going to be using it anymore, but as a learning exercise we can actually turn this class into an async class too, even though we're not going to be calling anything external. In this case we're just dealing with an in-memory list of items, so there's not really that much need for async here, but we need to honor the interface that we're implementing. So let me show you how we can do this. Let's start with GetItemsAsync. Same thing as before: this has to be async Task&lt;IEnumerable&lt;Item&gt;&gt;, and then we import the namespace. What we do here is say return await, but in this case, since we don't have anything asynchronous close by to call, what we say is: hey, I just want to do Task.FromResult, and we pass in items. What this means is that we want to create a task that has already completed, and we want to put the value of this items collection into that task. It's kind of the equivalent of saying: go ahead and execute this other method, wait for it to complete, and then return the result. But since we don't have anything to call, you say: just return a completed task with the items in it.
And that's how you can handle the situation where you don't have something else to call. Let's see what we can do about GetItemAsync. We'll do async Task&lt;Item&gt;, and in this case we capture the item that was found, the one whose id matches, and then do something similar to before: await Task.FromResult(item). Yep, same thing: we return a completed task with the item that we found. For CreateItemAsync, again we're going to do async Task, and in this case we don't really have anything to return, so the only thing we have to return here is some sort of task. So what we say is await Task.CompletedTask. This means: just create a task that has already completed and return it, without anything inside it, because there's nothing to return. Now let's move to UpdateItemAsync: async Task, and then once again copy this await Task.CompletedTask. Finally, we'll do the delete: async Task and await Task.CompletedTask. Yep, it's as simple as that. Now, just to let you know, it is not necessary to use async and await in every single case; you could avoid this combination in some cases. However, I would consider that a bit of an advanced concept that I would not like to dive into right now, just because it has some pitfalls that you have to be aware of, and it is not trivial to know them beforehand. So for now, I would recommend you stick to async/await any time you have to invoke an asynchronous operation and define your own method as an asynchronous operation. And now let's go to our ItemsController and do the final set of modifications that we need to introduce here. So let's start with GetItems.
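The finished in-memory repository can be sketched like this. The Item shape and the seed-free list are assumptions; the point is the Task.FromResult and Task.CompletedTask pattern for methods with no real I/O:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

// Simplified stand-in for the tutorial's Item entity (shape assumed).
public record Item
{
    public Guid Id { get; init; }
    public string Name { get; init; }
    public decimal Price { get; init; }
}

// In-memory repository made async only to honor the interface: there is no
// real I/O here, so we hand back already-completed tasks.
public class InMemItemsRepository
{
    private readonly List<Item> items = new();

    public async Task<IEnumerable<Item>> GetItemsAsync()
    {
        // Task.FromResult wraps an existing value in a completed task.
        return await Task.FromResult(items);
    }

    public async Task<Item> GetItemAsync(Guid id)
    {
        var item = items.Where(existing => existing.Id == id).SingleOrDefault();
        return await Task.FromResult(item);
    }

    public async Task CreateItemAsync(Item item)
    {
        items.Add(item);
        // Nothing to return: just hand back a completed task.
        await Task.CompletedTask;
    }

    public async Task UpdateItemAsync(Item item)
    {
        var index = items.FindIndex(existing => existing.Id == item.Id);
        items[index] = item;
        await Task.CompletedTask;
    }

    public async Task DeleteItemAsync(Guid id)
    {
        var index = items.FindIndex(existing => existing.Id == id);
        items.RemoveAt(index);
        await Task.CompletedTask;
    }
}
```

Callers see the exact same Task-based surface as the MongoDB version, so the two repositories stay interchangeable behind the interface.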
Let's turn this async all the way, right, so we have to turn everything async now: async Task&lt;IEnumerable&lt;ItemDto&gt;&gt; for GetItems. Again, import the namespace. And let's not forget that this method should now be suffixed with Async, because it is an asynchronous method. Now, here what we have to do is await the GetItemsAsync call. But it's a bit more problematic here, because the await is separate from the actual method call, and we're trying to chain a Select right away; that's not going to work. We have to wrap this in parentheses to say: first go ahead and do this, and when that's completed, then go ahead and do the Select. That is just to comply with the syntax that the compiler is expecting from us. I'm actually going to put this on a second line so it reads better. So first we await GetItemsAsync, and when that's done, we select the items that we got, turn them into DTOs, and then we just return them back to the caller. When we move on to GetItem, we're going to do async Task&lt;ActionResult&lt;ItemDto&gt;&gt;, and it should be GetItemAsync; and here, when we call GetItemAsync, we add await. That will do it. For the next one, the POST, we will do async Task, CreateItemAsync, and here is where we create the item, so we do await CreateItemAsync. And remember that the method is now actually named GetItemAsync here, so we have to do the proper rename. Next one, UpdateItem: so async Task&lt;ActionResult&gt;, UpdateItemAsync, and let's see, here's where we call it, so we say await GetItemAsync, and over here we also have to await the call to UpdateItemAsync. And finally, let's go to our delete method: it's going to be, again, async Task&lt;ActionResult&gt;, DeleteItemAsync, and then over here, await repository.DeleteItemAsync.
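The parentheses detail around await is easy to get wrong, so here is a tiny standalone illustration of the same syntax rule, with all names invented for the example:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

public static class AwaitChaining
{
    // Hypothetical async source, standing in for repository.GetItemsAsync().
    public static Task<IEnumerable<int>> GetNumbersAsync()
    {
        return Task.FromResult<IEnumerable<int>>(new[] { 1, 2, 3 });
    }

    public static async Task<IEnumerable<int>> GetDoubledAsync()
    {
        // "await GetNumbersAsync().Select(...)" would not compile, because the
        // Select would be applied to the Task itself, not to its result.
        // Wrapping the await in parentheses runs the Select on the awaited
        // result instead, exactly like the controller's Select over DTOs.
        var doubled = (await GetNumbersAsync())
                      .Select(n => n * 2);
        return doubled;
    }
}
```

The same shape appears in the controller: `(await repository.GetItemsAsync()).Select(item => item.AsDto())`.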
So now our controller is all async and is calling methods that are all async, and the repository is likewise calling methods that are all async, so we're doing async all the way, basically. Let's see how this goes. I'm just going to hit F5 now, and I'm going to open Postman. Let's start trying out the APIs. The first one we had is GET items, so let's try to get the full list of items. I'll hit Send, and as you see, we do have the antidote from the previous video still hanging around; that's working just fine. Now let's try to create a brand new item. What should we call it? Let's say this is going to be a high potion, a potion that provides increased strength to the player, and the price is going to be, let's say, 30. I'm going to hit Send and see if that works. So we do have an issue here: we're getting a 500 Internal Server Error, and "No route matches the supplied values". This is actually an unexpected situation, caused by a breaking change in ASP.NET Core 3.0. Let me show you why this happens. Let me go back to ItemsController; I'll stop this and close that. If you remember, what we just did in the CreateItemAsync method is update the CreatedAtAction call here to use an action name of GetItemAsync, as opposed to GetItem, because we just renamed that method over here: GetItemAsync (it was GetItem before). Now, the breaking change introduced in ASP.NET Core 3.0 is that at runtime the framework removes the Async suffix from the method name, so at runtime this actually looks like just GetItem, as opposed to GetItemAsync. Given that, when we try to do the CreatedAtAction call here, it is not able to find the route represented by that action, and so that breaks things. So there's a couple of ways to fix this.
What I'm going to do is actually tell .NET that I don't want that behavior; I just want to keep using the Async suffix. To do that, go to Startup, find your call to AddControllers, and there you just have to specify one option: options, open curly braces, and then you do options.SuppressAsyncSuffixInActionNames = false. With that, it will not remove the Async suffix from any method at runtime anymore. So let's see how that goes: F5, back to Postman. Now, one thing that you want to keep in mind is that the creation earlier actually succeeded; it just happens that we were not able to invoke the CreatedAtAction call. Just to confirm that, if you go back and do a GET here, you will see that we do have the high potion created. So it is there: we were able to call the create action successfully, we just were not able to complete the CreatedAtAction response. So let's actually create something else here, to not confuse things with that high potion. Let's call this one mega potion, and the mega potion is going to be more expensive, let's say 45. Now I'll hit Send, and this time it was actually created successfully, no issues there. Now let's use the id of this item to try the GET route, so just a GET for that item; let's see if that works. That works just fine. Now let's try our PUT route for that item. We're going to use the same name, mega potion, but let's put another price. Let's see what price we have: here we have 45; let's say it's even more expensive, 50, for the mega potion. I'll hit Send, and that should have updated the mega potion. Let's go back to the GET to make sure that the price changed. Hit Send again, and as you can see, the price is now 50.
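The Startup change can be sketched as below. This is a configuration fragment, not a runnable program, and it assumes the usual ASP.NET Core Startup shape from the rest of the tutorial:

```csharp
using Microsoft.Extensions.DependencyInjection;

public partial class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers(options =>
        {
            // Keep the Async suffix in action names at runtime, so that
            // CreatedAtAction can still resolve routes by names like
            // GetItemAsync (ASP.NET Core 3.0+ strips the suffix by default).
            options.SuppressAsyncSuffixInActionNames = false;
        });

        // ...the rest of the service registrations stay as they were...
    }
}
```

The alternative fix would be to pass the suffix-less name to CreatedAtAction explicitly, but flipping this one option keeps the naming convention consistent everywhere.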
Now let's not delete that mega potion; we're going to try the DELETE route, but let's delete something else. Let's say we want to delete this high potion; it's kind of our failed experiment. So I'll copy that id and try to delete it, and I'll hit Send: 204 No Content back from the DELETE route. Let's see what we get now, and as you can see, the high potion is gone; we only have the antidote and the mega potion. If you want to confirm that things are actually getting written into the database, it's as easy as going back to VS Code, closing this, and opening up again our MongoDB extension here: catalog, Items. Let me refresh this: we do have two documents, one for the antidote and one for the mega potion. And yeah, now you have a fully asynchronous REST API, which is going to give you great performance and great efficiency. In this episode of the .NET 5 REST API tutorial, we will talk about secret management and health checks. We will see how to securely store secrets during development, which your REST API can use as easily as any other piece of configuration. We will also learn about health checks and how they are a great way to report the health of our API. Today, you will learn how to store and use secrets via the .NET Secret Manager, and how to use health checks to report the health of the REST API and its dependencies. So let's talk about secret management. As you know, at this point we have a REST API that is able to talk to a MongoDB database, and in order to talk to it, we have defined a configuration section where we have specified details like host and port. The configuration source that we've been using so far is our appsettings.json file; right there we have host localhost and port 27017. But now we are going to enable authentication on the MongoDB database.
So you're going to need a username and a password in order to be able to connect to it, and we have to tell our REST API how to use this information. For the user, we're going to add another setting in appsettings.json, with a username like mongoadmin. And then we also have to specify a password. So, should we just specify the password directly in the appsettings.json file? Well, the answer is no. You don't want to add any sort of secrets into appsettings.json, or into any of the files that are part of your service. That's a basic good practice in terms of security: never put secrets in there. But if we cannot do this, then how are we going to pass that information into the service? It turns out that appsettings.json is just one of the possible configuration sources that can feed into your REST API. There are other options: you could use command-line arguments, you could use environment variables, or you could even use a number of cloud-based sources that can provide configuration information to your REST API. In our case, we're going to use one that's called the Secret Manager. This is just one more configuration source that's built into .NET, and it is already pre-configured for you in any brand new web API. In that secret store we can securely keep our password, without having to put it inside our REST API. So it's not going to be in any of the files that compose our REST API; it will be in some place on our machine, securely stored, and yet the REST API is not going to have any trouble reading it, because, just like anything else, it arrives as one more piece of information from our configuration sources, which the REST API can easily consume. We are now going to enable authenticated access to our Mongo database.
There are a couple of ways to do this, but to keep things simple, and since this is just a development database for learning purposes, we will delete the volume where the Mongo container is currently storing all the data. This lets us start a new container, with a brand new Docker volume, in which authentication will be enabled. The first thing you may want to do is verify whether you're running the container already. We'll open a new terminal, and I'm just going to do docker ps; that will tell you if you have the container running already, and in this case, I do. So the first thing I have to do is stop this container: docker stop mongo. That stops the container. Now what I want to know is which volumes I have available here: docker volume ls. That gives us the one volume that we care about right now, mongodbdata. Now I'm just going to delete it: docker volume rm mongodbdata. So now the volume is gone, and we are free to restart the container with a brand new volume. What I'm going to do now is grab the initial command we used to run the container, just as we did before. As you can see, the same volume name is here; it's going to be a new volume, since we deleted it already. What I'd like to do is add a couple of environment variables that represent the username and password that are going to be used within the database. So I do -e, and the name of the variable in this case is MONGO_INITDB_ROOT_USERNAME, and here we specify the name for our user; you could use any name here, but I'm going to use mongoadmin. That goes for the username. Now the password: for the password you have to use MONGO_INITDB_ROOT_PASSWORD=, and then you get to pick a password. I'll pick something here, not super strong, but still something reasonable.
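Put together, the terminal steps look roughly like this. The exact docker run flags are assumed from the earlier episode, and Pass#word1 is just the sample password used in this tutorial; pick your own:

```shell
# Stop the old container and remove its data volume (destroys the dev data).
docker stop mongo
docker volume ls
docker volume rm mongodbdata

# Re-run Mongo with a fresh volume and root credentials enabled.
docker run -d --rm --name mongo -p 27017:27017 \
  -v mongodbdata:/data/db \
  -e MONGO_INITDB_ROOT_USERNAME=mongoadmin \
  -e MONGO_INITDB_ROOT_PASSWORD=Pass#word1 \
  mongo
```

With the MONGO_INITDB_ROOT_* variables set on a brand new volume, the container initializes with authentication required.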
And finally, the name of the Docker image that we want to use, in this case mongo. With this, I hit Enter, and now, doing docker ps, we do have a database that has authentication enabled, with a user of mongoadmin and a password of Pass#word1. So at this point our database requires authentication, but our service does not know about it yet. What happens if we try to query data at this point? I'm going to hit F5 to start the service, then go into Postman, and I'll just try to query for a few items and see what happens. So this is our items query; hit Send. And yes: "Command find failed: command find requires authentication". So that's good: authentication is working; it's just that our service does not know the correct credentials to talk to the database. So how can we make the service aware of the user and password it needs to use? Let's go back to VS Code, stop, and close this. First, I'm going to declare a configuration setting for the user. I'm opening appsettings.json, and in the section where we have MongoDbSettings, I'm just going to add one more entry here, for the user; let's call it User, and we know that the user we specified for the database is mongoadmin. Now, we also need a password, and of course I could specify the password right here, but that's not a good idea. You should never put a secret or confidential information in your appsettings.json file; that's a security hole, so you should not do that. Instead, we're going to take advantage of the .NET Secret Manager to store the password securely, and still we'll be able to pull that secret into the REST API without any trouble. To do that, I'm going to open a terminal again, and here's what I'm going to do. First, let's initialize the Secret Manager for our project.
And that we can do via dotnet user-secrets init. If you take a quick peek into Catalog.csproj, you're going to see that there's a new entry there: UserSecretsId. This represents an identifier for the secret configuration of this project. From here on, we can start actually adding secrets for this project, and to add a secret what you do is dotnet user-secrets set, and here you have to specify the name of the secret. For the name, what we want to do is follow the convention of the settings class that we have defined already, and add to it in the format expected by .NET. In our case, that means specifying MongoDbSettings, then a colon, and then the actual property that will represent the password; in this case it's going to be Password. After that, we specify the actual value, in this case the password, and we know that the password is Pass#word1. Okay. So, just like I did here, what I'm saying is that the name of the secret is going to start with MongoDbSettings, which matches the MongoDbSettings section right here, and Password is one of the properties that will sit alongside the other properties that we already have in the MongoDbSettings class; the actual value is Pass#word1. Hit Enter, and the secret has been added. What we need now is a way to read both the user and the password into our service. To do that, we're going to go into our settings class, MongoDbSettings. Let's close this and add a couple of new properties here: string User, and string Password. These are the two properties that will be populated at runtime by .NET into our app. The other thing that we want to do is define a modified connection string, because now it is not going to be enough to provide just host and port; we have to specify user and password.
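The Secret Manager commands from above, collected in one place (the password value is this tutorial's sample):

```shell
# One-time setup: adds a UserSecretsId entry to the .csproj.
# Run from the project folder.
dotnet user-secrets init

# Store the password under a Section:Property key that matches the
# MongoDbSettings configuration class, so binding picks it up automatically.
dotnet user-secrets set MongoDbSettings:Password Pass#word1
```

The secret lives in a JSON file under your user profile, outside the repository, yet at runtime it is merged into configuration just like any appsettings.json value.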
And the way that you do that for a MongoDB connection string is by saying user colon password, and then at host colon port. That's the syntax that MongoDB is expecting from us. So, just by doing that, let's try querying again and see what happens. I'll hit F5 and go to Postman. Let's see if we can query for information: Send. And yes, we don't get anything back, because remember, this is a brand new database since we recreated the volume, but we are getting a 200 OK, so things seem to be running just fine. Just to be completely sure, what we're going to do is a POST. We'll switch into the POST tab here and see if we can recreate the mega potion with price 45. And not just that: let me actually put a breakpoint over here, to see how these values look at runtime. And since this is a singleton, I'll have to stop and restart our service, so let's hit F5 again. The first time the connection is needed, we should hit this breakpoint. Go to Postman, and I'll do Send here. As I suspected, we hit the breakpoint, and as you can see, the user has been read from our appsettings.json file, and the password has been read from the Secret Manager. So the password is not at all read from appsettings.json; it's coming from the Secret Manager. And really, the magic that makes this work is driven by Program.cs: when we do CreateDefaultBuilder, that piece takes care of injecting the Secret Manager as one more configuration source for our service, so you don't have to do anything special for that to happen. With that done, we just let it keep going, go back to Postman, and in fact, yeah, the mega potion has been created; if we query for it, we'll see that it is right there. Moving on to the next topic: let's now start talking about health checks and how to enable them. But first, let's learn what health checks are.
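The settings class with the credential-aware connection string can be sketched like this; property names follow the MongoDbSettings section discussed above, though the exact project file may differ slightly:

```csharp
// Settings class bound from configuration at runtime: Host, Port, and User
// come from appsettings.json, while Password comes from the Secret Manager.
public class MongoDbSettings
{
    public string Host { get; set; }
    public int Port { get; set; }
    public string User { get; set; }
    public string Password { get; set; }

    // The connection string now carries credentials: user:password@host:port.
    public string ConnectionString
    {
        get
        {
            return $"mongodb://{User}:{Password}@{Host}:{Port}";
        }
    }
}
```

Because the property key MongoDbSettings:Password matches this class, the configuration binder fills Password in transparently, regardless of which source supplied it.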
So, as we know, we have a REST API at this point that is talking to a MongoDB database. However, it is not uncommon to face issues as time goes by. A REST API could go down for a variety of reasons, or it could even be intentional: we may be redeploying this REST API to our cloud service or to our server, wherever it is running, so even if only temporarily, the REST API could go down. There could also be issues on the other side, talking to our database: for any reason we may lose the connection to the database, either temporarily or for a long time, or something could be going on where we can't really talk to that database. Communication issues can happen. With this, we may start getting questions like: is our REST API alive? Can we actually talk to this REST API? You may start wondering this, or you may start wondering: well, can we reach the database? Is it there? Is that connection in a good state? Really, what you're asking is the broader question: is it healthy? Is our REST API healthy? Is it ready to receive requests? Is it ready to do the job the right way? To answer that question, the right way to do it is to enable what we call a health check endpoint, so you don't have to guess. You will have an additional endpoint that is part of your REST API, and you can call it and it should be able to tell you whether the service is healthy or not. And, of course, there will be a person or people interested in that information: we, as the developers or engineers that created the service, may want to query that health endpoint and ask whether the service is healthy. But really the most important and useful scenario is when you have an orchestrator system that is in charge of knowing when your service is ready to receive requests.
We will talk more about this in a few episodes, but having a health check endpoint is a key piece of any REST API that you should think about right away. Now that we've learned about health checks, let's see how we can enable them for our REST API. The first thing we have to do is add the services for health checks, and that we can do within Startup.cs. Before that, let's stop debugging, close the terminal, and open up Startup.cs. I'm going to head into the ConfigureServices method, go all the way down, and add just one line here: services.AddHealthChecks(). That's the services side; now we need to add the middleware for it, and that we have to do inside the UseEndpoints call, inside Configure. Here, we're going to say endpoints.MapHealthChecks, and then you get to pick the route that you want to use for your health endpoint. In my case, I'm going to go for /health, but feel free to choose something else; you could do something like /hc or /healthcheck, it's really up to you. I'll go for /health. Now I'm going to hit F5, and let's try it out. That should give us a very basic health check endpoint for the service. Back in Postman, the way you query this health check is really very similar to what we did for the items route. We just open up a new tab in Postman for the GET verb, and I'll go for localhost:5001, and then you just add the route; in our case that's /health. I'll hit Send, and here we get a result that says Healthy. So with that, you have a way to almost ping your service to see if it's in a healthy state. This means that, yes, the service is up and running, and everything should be fine.
However, this is actually not super useful, because even if the service is up and running, that means nothing if the database is down or if we cannot reach it; our service is not really that healthy then. So how can we tell if a dependent service, like a database, is in an unhealthy state, and how can we take advantage of health checks for this? Let me show you what we can do. Going back here, stop, close this. What we're going to do is add a NuGet package called AspNetCore.HealthChecks.MongoDb, which will let us add a health check specifically designed to verify whether MongoDB is running properly. I'm just going to open a new terminal here and say: dotnet add package AspNetCore.HealthChecks.MongoDb. This is an open source project; it's not part of ASP.NET Core or .NET 5, but it is a very handy one. With this package, let me show you what we can do. Close the terminal again. What we want to do is add some options to this call to AddHealthChecks, but before we can do that, we need to pull out our MongoDbSettings in such a way that we can reuse them later on. To do that, I'm just going to grab this line here, move it outside, let's say here, and we will call this mongoDbSettings. Now those settings can be used over here for our MongoClient singleton, but also, for our purposes right now, after saying AddHealthChecks we're going to say .AddMongoDb, and here we will use mongoDbSettings.ConnectionString to specify the connection string that needs to be used to connect to the MongoDB database; our health check is going to be based on whether we can reach this database or not. We will also add a couple of things here, like a name used to identify this specific health check (we will just say "mongodb"), and one more thing: a timeout.
The timeout is there because we don't want this health check to take a long while to tell us the database is down. In this case we'll say TimeSpan.FromSeconds(3): if after three seconds we cannot connect to the database, we'll consider that the check has failed. Now that we've done that, let's try it out. I'll press F5 — and let me actually remove this breakpoint — and go back into Postman. Here I'll query /health again and hit Send. It's healthy, as I suspected, because our database is up and running. But if I go back into the VS Code terminal and stop our MongoDB container — docker stop mongo — that stops the Docker container, which is equivalent to completely stopping our database, the entire database service going down. Let's see what happens as I go back to Postman and try to check the health: it takes about three seconds, and then, yes, right there — the health check is now reporting Unhealthy. Which is great, because now any time the database is down, we can easily tell via the health check endpoint we have right here. Now, there's one more thing we can do to improve this scenario: we may want more than one endpoint, to verify not just whether the service is up, but also whether it's ready to receive requests. The typical pattern here is to expose both a "ready" endpoint and a "live" endpoint. The ready endpoint tells us whether the service is ready to receive incoming requests — which in our case really means: is the database up, running, and usable? The live endpoint just tells us whether the service itself is up — is it alive or not? To build these two endpoints, let's go back to VS Code. I'll stop this again and go back to Startup.cs.
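The registration described above looks roughly like this (a sketch — the MongoDbSettings type and configuration section follow what was set up in earlier episodes of this tutorial):

```csharp
// In ConfigureServices — the settings are read once so they can be reused
// both by the MongoClient singleton and by the health check.
var mongoDbSettings = Configuration
    .GetSection(nameof(MongoDbSettings))
    .Get<MongoDbSettings>();

services.AddHealthChecks()
    .AddMongoDb(
        mongoDbSettings.ConnectionString,  // database the check will try to reach
        name: "mongodb",                   // identifies this specific check
        timeout: TimeSpan.FromSeconds(3)); // fail fast if the database is down
```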
The first thing we have to do is go back to our health check configuration and assign a tag to our health check — let me show you what I mean. I'll actually break these calls across lines so we can do this more easily. I'll open up one more line here and say tags — I'll create a little array here, and in it I'll say "ready". So here I'm attaching a tag that I'm calling "ready", which will help me group every health check that should be included in the ready endpoint — the endpoint that says whether the service is ready to start receiving requests. How do we use this? Let's go back to our MapHealthChecks section and add a line here, because we now need to specify our two endpoints, one for ready and one for live. Let's do the ready one first. Instead of just "/health", I'm going to say "/health/ready" — that's going to be our ready endpoint — and now I'll specify HealthCheckOptions, adding the using for its namespace right there. Okay, open this up. At this point we have to specify what's called the Predicate. The Predicate is the way you filter which health checks you want to include in this endpoint. Remember that right now we only have one check, the MongoDB one, and it has been tagged with "ready"; in the case of the ready endpoint, we want to include it. To do that, you write: Predicate = check => check.Tags.Contains("ready"). That way the ready endpoint will only include the health checks that have been tagged with "ready". Then we have to define the other endpoint, so I'm going to copy this one — this one is going to be "/health/live". For the live case we don't have to do much; in fact, we don't want to include any health check at all — we just want a kind of ping response, right?
In this case, the predicate just returns false. By doing that, we're excluding every single health check, including the MongoDB one, so the endpoint will report healthy as long as the REST API — the service — is alive. That's the way you do the live endpoint. So: ready makes sure the database is ready to serve requests; live just makes sure our service is up and running. With this in place, let's run again. I'll press F5 and go back into Postman. This time, let's query /health/live and see what we get. It says Healthy, as expected, because the service is up and running. Now I'll try /health/ready: after about three seconds, we get Unhealthy, and this is because our MongoDB Docker container is still down from before. If I go back to my terminal — let's say this one — and restart the Docker container with the docker run command from before — yep, there it goes, the container starts again — then back in Postman I hit Send, and I get Healthy, because the database server and the database are up and running. Usually that's good enough, and that's all you need to do to enable your health checks. But if you happen to want a little more information about the health checks you've configured, you can actually customize the message you're getting here. Right now it gives us the basic "Healthy" string, but you can get more if you want, so let me show you how to do that. Back in VS Code, stop and close that, and look for our MapHealthChecks calls — the ready one in particular is the interesting one. What we can do is take advantage of what's called the ResponseWriter. This is a writer you can use to specify how to render the message as the results of the health checks are collected. So I'm going to say: async (context, report) =>.
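Here is a sketch of the two endpoints as described (it assumes the MongoDB check was registered with tags: new[] { "ready" }, and that a using for Microsoft.AspNetCore.Diagnostics.HealthChecks has been added for HealthCheckOptions):

```csharp
app.UseEndpoints(endpoints =>
{
    endpoints.MapControllers();

    // Ready: only run checks tagged "ready" (our MongoDB check).
    endpoints.MapHealthChecks("/health/ready", new HealthCheckOptions
    {
        Predicate = check => check.Tags.Contains("ready")
    });

    // Live: run no checks at all — a healthy response just confirms
    // the API process is up and responding.
    endpoints.MapHealthChecks("/health/live", new HealthCheckOptions
    {
        Predicate = _ => false
    });
});
```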
Then open curly braces, and in here I'm going to collect the result of the check. For that, I'll say var result = and use the JsonSerializer — the one that comes with .NET — to serialize an anonymous type, so I'll just say new and then give it a shape. This is the shape of what we're going to return to the caller in Postman. The first thing I want to show is the status, which we can get from report.Status.ToString(). Then we want the array of checks, which should include our MongoDB check. For that I'll say report.Entries.Select(), and project each entry into a new anonymous type. In that type I want to show, first, the name, which comes out of entry.Key; then the status of this specific check, which comes from entry.Value.Status.ToString(); and then there could be an exception coming from the database, so let's capture that too. The exception comes from entry.Value.Exception, but we may or may not have one, depending on the status of the check, so we'll say: if the exception is not null, we take entry.Value.Exception.Message, but if it is null, we'll just say "none". Finally, one last detail that could be interesting is the duration, which tells us how long this health check took: entry.Value.Duration.ToString(). Okay, so that's the shape. Now, one more thing we can do here is format the output: I'll say context.Response.ContentType = MediaTypeNames.Application.Json — adding the using for that namespace — which will let us render a nice JSON string back in Postman. And finally, I'll actually write this information out: await context.Response.WriteAsync(result) — adding one more using. And that should do it.
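Assembled, the customized ready endpoint looks roughly like this (a sketch assuming usings for System.Linq, System.Text.Json, System.Net.Mime, and Microsoft.AspNetCore.Diagnostics.HealthChecks):

```csharp
endpoints.MapHealthChecks("/health/ready", new HealthCheckOptions
{
    Predicate = check => check.Tags.Contains("ready"),
    ResponseWriter = async (context, report) =>
    {
        // Serialize the overall status plus one object per registered check.
        var result = JsonSerializer.Serialize(new
        {
            status = report.Status.ToString(),
            checks = report.Entries.Select(entry => new
            {
                name = entry.Key,
                status = entry.Value.Status.ToString(),
                // The exception is only present when the check failed.
                exception = entry.Value.Exception != null
                    ? entry.Value.Exception.Message
                    : "none",
                duration = entry.Value.Duration.ToString()
            })
        });

        context.Response.ContentType = MediaTypeNames.Application.Json;
        await context.Response.WriteAsync(result);
    }
});
```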
So now we've customized how that message should be rendered. I'll press F5 once again and go back into Postman. Let's see what happens when I try my ready endpoint now — hit Send — and as you can see, the result is a bit nicer. We have a status of Healthy, and then we can see the array of objects: in this case mongodb, with a Healthy status, no exception, and a duration in there. And if you want to see a sample with an actual exception, you can just stop your local Docker container again: docker stop mongo — container stopped. I'll try again back in Postman, hit Send, and after three seconds we should get — yes. As you can see here, the entire health check is Unhealthy, and specifically mongodb is Unhealthy: "The operation was canceled", which is what we're getting from trying to talk to MongoDB. Now, one last thing I wanted to show you is that there's actually a whole set of health checks already available for you. Just like we added the MongoDB one — let me stop and close this; if you remember, we did that right here — there's a bunch of other ones already available to try out, depending on the services you're using. Let me point you to this page here, which is the page for the open source GitHub project I mentioned, AspNetCore.Diagnostics.HealthChecks, that hosts these health checks. It will tell you everything about how to use them, but I wanted to show you that there are already health checks built for a bunch of providers: SQL Server, MySQL, Cosmos DB, SendGrid, a few Azure services, Amazon services, Google services — and here's the one we used, MongoDB. So there's a bunch of providers already available, each with a NuGet package you can use. There's even support for a UI — let me show you — though I don't usually use that one.
But if you want to show a nice UI with all your health checks, with a breakdown for each check, you can enable this health check UI, and it will stand up another endpoint in your service that you can go to and get that nice UI rendered. So that's something you may want to try out. I usually find there are other ways to measure the health of my services, as we're going to see as we move forward with these videos, but this is another option you can try. In this episode of the .NET 5 REST API tutorial, we will start our path towards getting our API deployed to our production environment. We will learn about the challenges involved in getting the API bits deployed outside of our developer box, how Docker can help us address those challenges, and how to turn our existing API into a Docker container. Today, you will learn: the challenges of deployment; how Docker works and why you should use it; how to turn the REST API into a Docker image; and how to run your REST API as a Docker container. The way things are at this point, we have a REST API up and running on our local box, and there's also a MongoDB database running next to it as a Docker container. But now we need to figure out how we're going to share this REST API with the world — either with people inside our local intranet, or with the world at large, perhaps on the public internet. The place where we're going to share things is what we'll call the production environment. The production environment could be anything from a server running in your building or your house, to a server running somewhere on the internet.
In all cases, this is the production environment: the place where people will be able to access the REST API without having to get access to your local developer box. One of the first things to think about when we want to pick this server for production is the operating system. We need to find the right server with the correct version of the operating system that our REST API is able to support. In the case of our REST API it's not really that big a deal, because .NET 5 is cross-platform, so what we've built so far is able to run on a whole range of operating systems. But still, we need to install an operating system: if we say we're going to run this on Linux, then we need to go ahead and make sure the correct distribution and version of Linux is available in that production environment. Then, we also know we need the .NET 5 runtime to be able to run the REST API. The REST API is built on .NET 5, so we need the .NET 5 runtime, and all of its corresponding files, placed in the right location on that production box. On top of that, our app may have a bunch of dependencies — like the MongoDB driver we've been using to talk to MongoDB — and those also need to be placed on the production box, so that finally our REST API can sit alongside all of these other components and run happily. It's at that point that we can say we have all the files needed for the REST API to run in production. But then we also have to consider our database requirements: we have a database that we need to place somewhere in this production environment.
And the database will have its own requirements in terms of operating system and dependencies — the MongoDB engine, all the things needed to run the database in production — so that by itself presents another set of requirements. As we think about everything that needs to happen to get production up and running, we need to think about a few challenges. The first is preparing the box. What we really want is to make sure that whatever box we use for production has everything we've had so far on our developer box. But we also have to decide: are we going to pick a physical machine or a virtual machine? (A virtual machine is really just a virtualized operating system running on top of some other physical machine.) And regardless of whether it's a physical or virtual machine, we need to figure out: where are we going to get this machine from? Where are we going to host it? Who is going to take care of it? There are a bunch of questions just regarding the machine itself. We also have to figure out: is it going to be Linux or Windows? Who's going to put the right version of the OS on it? Who's going to make sure it has the right set of patches, and who's going to maintain all of this? Really, somebody has to take care of these things. And then: how are we going to get the files to the production machine? We have all these files on our box, and somehow they need to land in production. How are we going to do this? Are we going to use something like FTP to transfer the files to the production machine? Are we going to put the files on a USB drive and then just copy them over to the production machine?
Are we going to send the files to some person in operations, perhaps by email, who will then place them there? One way or another, we have to make it happen. And then: in this case we're thinking of placing the database server right next to the REST API. If we're going to do that, how do we make sure that all the dependencies of the database and all the dependencies of the REST API have the right versions, given that we're placing everything on the same machine? There could be a version mismatch between what the MongoDB database needs and what the REST API needs — starting with the operating system itself. How do we make sure all the dependencies are the right versions for both of them? And if they can't coexist, we have to figure out whether we need to split the REST API onto one machine and the database server onto another, which would not be that uncommon. We have to think about all of these things. Also: what if we eventually decide to move to a new version of .NET? Say .NET 6 is out and we want to move to it. What does that mean? Do we have to bring in a brand new set of servers, physical or virtual, that are already enabled for .NET 6? Or do we update the version of .NET on the existing server, and somehow make sure the app doesn't break because of that change? How are we going to make that happen? Also: how do we start the REST API on the machine? It's not just about copying the files there — somehow we need to start the .NET app on the server; somebody has to do it. So we have to bring in some sort of automation, some sort of scripting; something needs to happen on that machine to start the API — and not just start it, but start it fast.
We want to make sure that as soon as we put the bits there, the application starts quickly so it can start serving our users. And finally: what if one instance of the app is not enough? What if we start getting so many users that one web server for our REST API can't keep up, and now we need to bring in another virtual machine for the REST API, and then another, and then another? Who's going to take care of provisioning all these VMs for us? And what about the database — what if we want to run multiple copies of the database server to handle the load? How is all of this going to happen? Do we really need to deal with all of these challenges ourselves, or is there a better way? Luckily, here's where Docker can help us. Let's go back to our local box. We have our REST API and the database there, and we need to get to production. But now, instead of starting to worry about how to copy everything into production, or how to make sure production already has the right things in place, we can start using this thing called a Dockerfile. A Dockerfile is kind of a template of all the things that are needed by your app — in this case, the REST API — to get it deployed into production. In the Dockerfile you declare things like the operating system you're going to need. You say: "I run on this specific version of this specific distribution of Linux" (or this version of Windows). It's declared in that file, so you're saying: as long as that is available on the production machine, I'm going to run just fine. And not only that — you can say: I actually need version 5.0 of the .NET (or ASP.NET Core) runtime available there.
By doing this, as long as all the dependencies of the .NET runtime are present, the REST API is able to run — you declare the runtime you're going to be running on. You can also declare, or prepare, all the dependencies needed by your app — the MongoDB driver and any other DLLs or dependencies that need to be present. You can specify which files to place in that production environment, and exactly where to put them. And you can also tell it exactly how to start the REST API — the scripting needed to start it can also live in the Dockerfile. So just by using a Dockerfile, you're already handling a lot of the challenges we talked about before, because the Dockerfile clearly declares exactly how the environment needs to be built for the REST API to run properly. But it's not enough to just have the Dockerfile. Once you have it, you use this thing called the Docker engine to prepare what we call a Docker image. What happens is that the Docker engine — which is just a process running on your box — takes that Dockerfile, tags it, and builds what we call a Docker image. Tagging is really just a synonym for giving your image a version: you set a version on the image, and then you build it. Building a Docker image means reading that Dockerfile line by line, executing all of its instructions to prepare the environment where the REST API is going to run — all the way from ensuring the exact dependencies are in place, to putting the files in the right location, to starting the app. All of that is encapsulated in the Docker image.
But once you have the Docker image, it's not enough for it to exist on your machine — what you really want is to make it available in production. So how do you take it from your box to production? Enter the container registry. A container registry is a place — anywhere from some server on your intranet to somewhere in the cloud — where you can push your Docker image so that it eventually becomes available to your production environment. With a container registry, you don't necessarily have to be the one pushing the Docker images there; there may be other images already available in the registry. For instance, we've been using the MongoDB Docker image since a few episodes back — we didn't create that image, it was already there, available in a specific container registry called Docker Hub. Docker Hub is a public registry where many vendors place their Docker images for public consumption. And just like Docker Hub, there are other, private container registries: Azure Container Registry, Google Container Registry, Amazon ECR, and these days GitHub also has a registry. There are a bunch of these available, but all of them operate on your Docker images in essentially the same way. Once you push your Docker image to the container registry, your production box — physical machine, virtual machine, whatever it is — is able to pull your Docker image down. And by pulling and executing that image, it turns into what we call a Docker container: the living, running, executable version of your Docker image.
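The push/pull flow described above boils down to a handful of Docker CLI commands. Here's a sketch — the image name, tag, and registry account below are placeholders, so substitute your own:

```shell
# On the developer box: build the image from the Dockerfile and tag it.
docker build -t catalog:v1 .

# Re-tag it for the target registry (hypothetical Docker Hub account "myaccount").
docker tag catalog:v1 myaccount/catalog:v1

# Push it to the registry so production can reach it.
docker push myaccount/catalog:v1

# On the production machine: pull the image and run it as a container,
# mapping host port 8080 to the container's port 80.
docker pull myaccount/catalog:v1
docker run -d -p 8080:80 myaccount/catalog:v1
```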
The Docker container has all the files and all the dependencies that were declared in the Dockerfile, and it executes the REST API the way you declared it there. And just like we stand up a Docker container for the REST API, we can stand up a Docker container for our MongoDB Docker image — and of course, these containers can talk to each other. All of this works because the same Docker engine you have on your box is also available in the production environment. As long as you have the Docker engine available on whichever machine you want to run your Docker image on, your image is guaranteed to run as a Docker container in that environment — the only thing you need is the Docker engine. And that brings a lot of benefits along the way. Another thing is that you don't have to run just one instance of your Docker container — you can run multiple instances. As you need to scale out more and more, perhaps because you have too many users, you can start spinning up more and more copies of that Docker image as Docker containers in production, without incurring a lot of hassle to provision more and more environments. So: lots of benefits to Docker here, starting with efficient resource usage. As opposed to standing up new virtual machines or new physical servers, you don't have to commit a lot of new resources — spinning up a new Docker container doesn't take a lot of RAM or disk space, because there's a lot of caching happening via this thing called layers in Docker.
With all that caching, memory usage only grows by what's actually needed by your image — by your service, your REST API. So you can fit many more instances of your REST API, and of your database, into the same production box than you could before, when that box might fit just one instance of the REST API or one MongoDB database. By using containers, you can fit much more. There's also faster startup: because the Docker engine caches all of these layers, only the very first time it pulls down the Docker image does it fetch all the layers; from then on, it only pulls the layers that have changed, which allows containers to start very fast. You also don't have to boot an entire operating system just to start your app — the operating system is already in place, you just need to start your app, and Docker is able to do that very, very fast. There's also isolation: each of these containers runs in a completely isolated way, so it doesn't matter what's going on in the production host machine, or in any other containers running on the box. Each container runs in isolation, and from its point of view, it's the only thing running in that environment at that point in time — which gives you a lot of benefits on that side. And then you can also think of these containers as being able to run anywhere, because, like I said, as long as there's a Docker engine running on your production machine, you're guaranteed to be able to run your Docker container there. So: lots of portability.
And finally, scalability: like I mentioned, in the same space where before you could run just one instance of your REST API — as in the virtual machine case — you can now run multiple instances of your Docker container using a much, much smaller amount of resources. You can really scale significantly by using Docker containers. Let's now see what we need to do to containerize — or "dockerize" — our REST API. To create the Docker image for the REST API, the first thing we need to do is create the corresponding Dockerfile. However, before doing that, and to keep things simple, we will update our REST API so that it no longer performs HTTPS redirection and allows the use of the HTTP address only. The use of HTTPS from here on is a topic out of the scope of this tutorial. So what does this mean? If you remember, we have two URLs configured for the REST API, in our launchSettings.json file under Properties: the applicationUrl has both https://localhost:5001 and http://localhost:5000. The way things are configured right now, anybody who tries to access http://localhost:5000 will get redirected to https://localhost:5001. If you want to test this, just launch the app — I'll press F5 — and open Postman. Normally we would go to https://localhost:5001, but I'll change this to http://localhost:5000, and in this case I'll also open the Postman console down here to see what's happening behind the scenes. I'll hit Send to query for items. The query succeeded, but if you look, there are two calls here: the first one, to http://localhost:5000/items, returned a 307 status code, which means a redirect, and it was followed by a call to https://localhost:5001/items. That redirect, as configured right now, is what we want to change — for the Docker case, at least. So how do we change it?
Going back to VS Code, I'll stop the app and close this for now. This is configured in Startup.cs, in the Configure method — we have this line here: app.UseHttpsRedirection(). It turns out that when you run inside a Docker container, the ASP.NET Core environment switches from Development to Production, and this is what we can use to put a conditional around this line. We can say: if env.IsDevelopment(), then we allow the HTTPS redirection; otherwise, we don't. That's the only change we're going to make here — we'll see how this works when we have the Docker container ready to go. At this point, I'll just close this, and now we actually want to create the Dockerfile. There are two ways to create a Dockerfile: you can write it manually, or you can generate it. For this tutorial, to speed things up, we will generate it. What I'd recommend, if you're in VS Code, is to use the Docker extension for Visual Studio Code, which you can find by going to the Extensions view and typing "Docker" — go ahead and install the extension. Now close this, and open the Command Palette (View > Command Palette); type "Docker" and pick the first option, "Docker: Add Docker Files to Workspace". Now you get to pick the platform of the Docker image we're going to create — in our case it will be .NET: ASP.NET Core. Then you pick the operating system of the container — in our case Linux, just because it's the most popular option most of the time. Then you pick the port the app will listen on within the container — in our case, port 80. And finally, it asks whether you want to generate a Docker Compose file; we won't use one in this tutorial, so hit No.
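The conditional described above is a small change in Configure — roughly (a sketch; the rest of the pipeline from earlier episodes is elided):

```csharp
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();

        // Only redirect HTTP -> HTTPS on the developer box. Inside the
        // Docker container the environment is Production, so the app
        // serves plain HTTP there and no redirect happens.
        app.UseHttpsRedirection();
    }

    // ...rest of the middleware pipeline (routing, endpoints, etc.)...
}
```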
Now, if you go to the Explorer, you'll see a couple of new files: the Dockerfile and the .dockerignore file. Let's start by looking at the Dockerfile. Each line you see here represents one instruction that gets applied as the Docker image is built, and each of these lines also generates what we call a layer, which represents the changes from one line to the next. That's what helps a lot with caching across subsequent builds of this Docker image: the first time we build it, it will take time, but subsequent builds are going to be much faster. Now let's go line by line to understand what's going on. The first thing that's happening is we're saying we'll build our image based on the dotnet/aspnet image, specifically version 5.0. That's where you start: by specifying dotnet/aspnet, you're guaranteed to be building your image on a working ASP.NET environment — in this case a 5.0 environment that has all the dependencies needed to run an ASP.NET app. And not just that: the aspnet image has itself been built on the correct OS for wherever you're going to run your image — in our case, since we're going to run it on a Linux machine, it includes all the dependencies needed to run an ASP.NET Core (.NET 5) app on Linux. We're also saying "5.0 AS base", which means this is going to be the first stage of building the container. This is a good segue into the concept of multi-stage builds, which is what's enabled here: there will be more than one stage in the build process.
And in each stage, you can specify a different set of instructions that may have nothing to do with the instructions executed in some other stage. So for instance, here we're saying: this is the first stage, and we'll call it "base". In this stage we go from dotnet/aspnet, which is the runtime image for ASP.NET 5. WORKDIR /app means everything that happens after this will happen in the /app directory. Then we EXPOSE port 80; this line actually doesn't mean much, it's just kind of a documentation field. The way you actually expose a port is a bit different, but this is a convention to specify the port the app is listening on. That temporarily finishes the first stage, and in the next line, line 5, we go into our next stage, which we're calling "build". The interesting part about this stage is that it comes from another base image: if you notice, this is coming from dotnet/sdk, the .NET 5.0 SDK image, as opposed to dotnet/aspnet. The SDK image has all the build tools and all the libraries, everything that's needed to build a .NET 5 app, which is more than you need to just run a .NET 5 app. So whatever is coming in this SDK image is potentially much bigger; there are many more files, compilers, and tools in there that are needed for building the container but not for just running it. That means your final image, which is going to start from the base image, is potentially going to be much smaller than the image we're going to use to build your container. So we start this stage; in this case the WORKDIR is /src, so that's where we're going to place any files from here on. And that's exactly what we're doing: in the next line we're saying, okay, copy Catalog.csproj, the file that defines our project, into the root of the current location.
Then we run a dotnet restore on that project, which brings in all the NuGet packages that are needed. After that we say: okay, we've restored all the packages, now copy every other file and folder across. That includes all the files you can see on the left side, except for a few exclusions; most of the files are included here, and we'll talk about that in a moment. Then there's this line here that's actually not really needed, because as you can see it's pretty much the same as line 6, so we will delete it in a moment. Then we go into the actual build process, where we say: go ahead and perform a dotnet build on Catalog.csproj. We're modifying the configuration to no longer be Debug; we want a Release version of the app, optimized for the production environment, and the results of that build should go into the /app/build directory. That finishes our build stage, this section here. Then we switch to yet another stage that we're calling "publish". In the publish stage, we're going from the build stage; notice that now we're going FROM build, the stage we just created, naming the new one "publish", and just executing the dotnet publish command with a similar set of parameters as the previous one, only changing the output directory. What dotnet publish does is create a new folder, in this case called publish, with all the files needed, in the right shape, to just execute the app. Now at this point I'd like to point out that I find a few of these lines a little bit redundant, so we can simplify things here. I don't think we need both a separate build and a publish stage, so I'm going to make a couple of changes. First, I'm going to remove this WORKDIR /src line, because we already have it in line 6.
So I'm going to remove that. I'm also going to remove the dotnet build line, because dotnet publish performs a dotnet build for us, so there's no need for two separate lines. And I'm going to completely remove the publish stage, because the build stage should be good enough for what we need here. So now we have just one second stage, called build, that will end up running dotnet publish, and all the files we need should end up in the publish folder. Now in line 12 we go back to our initial base stage, and we're calling this final stage "final". So now we're pretty much finishing what's going on here: we switch again to the /app directory, and then what we're saying is, okay, copy from the publish stage — which in our case is no longer there, since we opted for build, so I'm going to change this into build. So: from the build stage, go ahead and copy whatever is in the /app/publish directory, the one that holds all the published files, into the current directory, which will be /app. And finally we define the entry point for our app. This is how we define how to start our REST API; in our case that will be just executing the dotnet command with the Catalog.dll file. That's all that's needed to start our REST API. With this Dockerfile, we're pretty much ready to build our image. But before doing that, I wanted to show you the .dockerignore file. What this file does is define a series of files and directories that you want to exclude from the Docker image. There may be one or more files that you don't want to include in the Docker image because it makes no sense to include them; for instance, the stuff we have under .vscode is really only useful for development purposes.
It makes no sense to include that in the Docker image. So for instance, this line here says: go ahead and exclude everything under .vscode, and the same thing for a bunch of other files and directories. It's a good file to keep in mind; otherwise your image will end up bigger than it needs to be. With that, we should be ready to start building the image. What I'm going to do is open a new terminal, and to build the image you use the Docker CLI. What we're going to say is docker build; then you have to specify a tag, meaning the name for the image and a tag for it, which you do with the -t flag. We'll give it the name catalog, and then a tag, which in our case is going to be v1. Finally you specify the directory from which you're going to execute the command; in our case that's the current directory, which is specified by the dot. I'll hit Enter, and that's going to go ahead and build the image. The first thing it needs to do is download the base images. In our case, that's the dotnet/aspnet runtime image, and then, for the second build stage, it needs to pull in the dotnet/sdk 5.0 image, which may take a little longer because it's bigger. Then it proceeds to perform the actual build process within the image — dotnet restore, dotnet publish, and all the other steps that end up creating the image. So at this point we have an image ready. And like I said, going back to the Dockerfile, each of the lines here represents one layer that maps to each of the steps you see here: step 1 of 13, 2 of 13, 3 of 13. Each of these corresponds to one of the lines you can see in the Dockerfile.
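For reference, the simplified multi-stage Dockerfile we ended up with looks roughly like this. This is a sketch based on the generated file with the simplifications described above applied; the project name Catalog.csproj matches this tutorial, but paths may differ in your project:

```dockerfile
# Stage 1 ("base"): small runtime-only image used for the final result
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 80

# Stage 2 ("build"): full SDK image used to restore and publish the app
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["Catalog.csproj", "./"]
RUN dotnet restore "Catalog.csproj"
COPY . .
RUN dotnet publish "Catalog.csproj" -c Release -o /app/publish

# Final stage: copy only the published output into the small runtime image
FROM base AS final
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "Catalog.dll"]
```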
So these are the layers that will be cached from here on, so that you don't have to run them again. In our case, if we don't make any change and run the same docker build -t command again and hit Enter, essentially nothing happens, because everything is cached. Only if you make some changes will some of these layers need to be executed again. Now, one thing to remember is that this is not the only container we need for our REST API. We also have the MongoDB container, which is the one that handles data access for us — the one that has our database. So we need to make sure that our new container for the REST API is able to talk to the MongoDB container, and the way to do that, at least on the local box, is by setting up what we call a Docker network and having both containers join that network. To create a network, the only thing you have to do is say docker network create and give it a name; we'll call it net5tutorial, and that creates the network. If you want to see the existing networks, you can do docker network ls, and you'll see we have our net5tutorial network already created. Now when we run our containers, we should take advantage of that network. So what I'm going to do first is make sure I'm not running the MongoDB container yet. I'll do docker ps — the MongoDB container is running right now — so I'm going to stop it with docker stop mongo. With that stopped, I'm going to run once again the same command we've been using so far to run the container; let me paste that here. Same command, but I'm going to add one modifier that says --network= followed by the network name we just defined, in this case net5tutorial. That makes this container join that network. Then mongo, the name of the Docker image, and hit Enter.
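Put together, the network setup and the Mongo run command look roughly like this. This is a sketch: the volume and port flags reflect how MongoDB was run earlier in the tutorial, and your names may differ.

```shell
# Create a user-defined network so the two containers can find each other
# by container name (Docker provides DNS on user-defined networks)
docker network create net5tutorial

# List networks to confirm it exists
docker network ls

# Run MongoDB attached to that network; the container name "mongo" becomes
# the hostname the REST API will use to reach it
docker run -d --rm --name mongo -p 27017:27017 \
  -v mongodbdata:/data/db \
  --network=net5tutorial \
  mongo
```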
And then the MongoDB container is running in that network. After doing that, we're ready to actually start running our catalog container. By the way, if you do docker images, you should be able to see our created image: as you can see, we do have catalog v1 ready to be executed, as well as the Docker images for the dotnet/sdk, the dotnet/aspnet runtime, and mongo. To run our Docker image, we're going to do something very similar to what we did with MongoDB. We'll start by saying docker run, but instead of -d for detached we'll do -it for interactive, which allows us to keep our terminal connected to the docker run process, just to see whatever logs come out of that container. Then we'll add --rm, so that whenever we stop the container it gets deleted and doesn't keep hanging around. Now we specify the port, similar to what we did with MongoDB: we have to specify the port on our local machine that's going to be mapped into the container. In our case, let's pick 8080 — it doesn't have to be that, you can use any other available port — and that is mapped to the internal port in the Docker image. Now, for ASP.NET Core .NET 5 images, the base runtime image itself defines, or overrides, the port where the app executes, at least for HTTP, and that port is 80. So regardless of the fact that we've been using port 5000 for local development, when we base our Docker image on the ASP.NET Core runtime image, the port is overridden to 80, so you have to specify 80. If you don't want to use 80, there are ways to specify the port you want in your Dockerfile, but in our case port 80 will be just fine. Now we have to remember that in order for our app to connect to MongoDB, we have to specify a series of settings.
So if you remember, when we go to appsettings.json we have this MongoDbSettings section, where we specify host, port, and user. And not only that: going back to MongoDbSettings, remember we also specify a password, which is coming from Secret Manager. In this case, since we're trying to talk to MongoDB from within the container, we can no longer use just localhost to reach it. Remember that each of the apps running inside a container — both the REST API and MongoDB — is running in an isolated way, and they can no longer resolve each other via localhost. For our REST API to talk to the MongoDB container, it has to do it via the name of the MongoDB container; in our case, remember that we gave it the name mongo. So we have to override the host name. To talk to mongo, we don't have to update the JSON file for this: what we can do is take advantage of the .NET 5 configuration system to override the settings. This MongoDbSettings section can be totally overridden by environment variables, and that's what we're going to do now; as long as we follow the convention, we should be able to make that happen. So I'm going to say -e, which is how you define an environment variable in Docker, and then MongoDbSettings — just copy that — then a colon, then Host, and that host is going to be mongo, the name of the Mongo container. That's how you can override the localhost specified here. In the same way, we need to specify a password, because the password we've been using so far is stored in Secret Manager, and Secret Manager is only used for development purposes: it's only available in the development environment, not in production, and the Docker container is really going to be running in a production environment.
So how do we feed in that password? Same thing we just did: we'll say MongoDbSettings — let's not forget to also add the -e — then MongoDbSettings, colon, Password, and then we specify the password, the same one we stored in Secret Manager. And that's it. Then let's not forget that we also need to join this container into the same network as the previous container, so that's going to be --network=net5tutorial; make sure this is exactly the same network you used before. And finally the name of the image and the tag, so that will be catalog:v1. I think I may have a couple of mistakes here. The first one: yes, it should be --rm as opposed to -rm. And the next one: this one should be an equals sign as opposed to a colon. Then I'll hit Enter. That has started our Docker container, and as you can see, our hosting environment has now changed from Development to Production. Production is the environment that will be reflected when you run the Docker image, as opposed to Development. Now let's see if things are working properly: we should be able to reach our REST API at port 8080. Let's open up Postman, switch from port 5000 to port 8080, and see what happens. Send — and, truth be told, the API keeps working. But now, as you can see, we're hitting port 8080, so things are happening within the Docker container. Also notice how we did not get a redirect anymore: instead of getting a 307 like we did before, we got a 200 right away. There's no more redirect happening there; we're just staying at the HTTP endpoint. Now, to keep things interesting, let's actually post one more item via our POST tab over here. I'll do the same thing: switch from HTTPS to HTTP, and then use port 8080. So let's say we bring back that antidote item.
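The full run command we assembled, with both configuration overrides and the corrected flags, looks roughly like this. This is a sketch: replace the password placeholder with the value you stored in Secret Manager, and adjust names if yours differ.

```shell
# Run the catalog API in the foreground (-it), auto-remove on stop (--rm),
# map local port 8080 to the container's port 80, override the MongoDB
# host and password via environment variables, and join the shared network
docker run -it --rm -p 8080:80 \
  -e MongoDbSettings:Host=mongo \
  -e MongoDbSettings:Password=<your-password> \
  --network=net5tutorial \
  catalog:v1
```

One note on the convention: ASP.NET Core also accepts a double underscore (`MongoDbSettings__Host`) as the hierarchy separator in environment variable names, which is the form recommended for Linux shells where a colon can be problematic.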
Sometime in the past, and then a price of 15, and I'll hit Send. As you can see — I'll just close the console — the item has been created. Now, the really interesting part about Docker and Docker images is not just being able to create them, but also being able to share them, so that they can be used by other people or other systems. So how could we share this Docker image, which is in our box right now, with somebody else? For the sake of this tutorial, we'll do something very simple to see how this can happen: we're going to take advantage of a service called Docker Hub, which you can find — let me open up the browser here — at hub.docker.com. This is the place the creators of Docker offer so that people can publicly share their Docker images. Creating an account here is totally free, so feel free to try it out. You have to come up with a Docker ID — some kind of username — and a password, and then you have your own account in Docker Hub. So how can I get my image published into Docker Hub? Let's go back to Visual Studio Code, and I'll stop my Docker container now with Ctrl+C. The first thing I'm going to do is log in to Docker Hub: once you have an account there, you should be able to do a simple docker login to start pushing images. So I'll say docker login; it's going to ask for my username — my Docker ID — and then my password, and now I'm logged in. Now, in order to get our existing image into Docker Hub, we just need to do a little bit of retagging, to tell it where we want to place this image. So let's list our images once again with docker images. Here's the image, catalog:v1. What we want to do is retag it, so let's do docker tag catalog:v1 — that's the current tag —
and then our target tag is going to start with my username, then a slash, then the name of the image — which in Docker Hub is going to be called a repository. In our case that's going to be catalog; it doesn't have to be, you can choose, but I'd like to keep the same name we had before. So you'd type something like <username>/catalog:v1 and hit Enter. Like I said, this is just a tag, so if I do docker images again, we'll find that we now have both catalog:v1 and <username>/catalog:v1. But if you look closely at the image ID, you'll see that the ID of both of these images is exactly the same, so you can really think of these tags as just pointers to one and the same image. With this retag, we are now able to actually push the image into Docker Hub. So what I'm going to do now is say docker push with our newly tagged image, <username>/catalog:v1, and hit Enter. This starts the process of uploading not just our REST API Docker image, but all the layers that compose it, up to Docker Hub. Remember that our Docker image is based on the ASP.NET Core runtime image, and that one, in turn, is based on some distribution of Linux in this case. All of those things need to go into Docker Hub, so that anybody else can, in the future, just pull that image and start using it. This may take a while, because it's an upload task. All right, the image finished uploading. Now, if I go back to Docker Hub and sign in with my account here, as you can see, I do have my image, uploaded just a few seconds ago: <username>/catalog, and here's the one tag that we have right now, v1. So now let's play the role of somebody who does not have this image in their system right now and wants to use it.
In that case, what they can do is either a docker pull, or just run the image directly. Let me show you what I mean. The image is now available in Docker Hub, so I'll go back to VS Code, and what I'm going to do is completely remove the image we currently have in our system. Let's do docker images once again; we're going to remove these two images so they're completely out of the system. To do that, I'll do docker rmi <username>/catalog:v1, and then docker rmi catalog:v1. Now there should be no catalog image anymore in my system; I only have these three other images right now. Next, the only thing I'm going to do is a docker logout, to simulate somebody who has no access to my Docker registry; but since the image is public, they should still be able to pull it. In order to pull the image, you can either do an explicit docker pull operation, or you can just run it right away. We can run it the same way we ran it before: it's going to be pretty much the same command line, which I'll copy and paste here, but now, instead of saying catalog:v1, we say <username>/catalog:v1. Just keep in mind: at this point, I'm playing somebody who has never had access to the catalog REST API service. I have no access to the Dockerfile or anything about how to build this image; I just want to run it for the very first time. So I run this command line and hit Enter. It says it cannot find the image locally, so it goes ahead and actually pulls it from Docker Hub, and then runs it, and as you can see, it is already running. If I go back to Postman and query for items again, I get my items from this Docker image. So that's how you can publish, or push, your Docker image into what we call a container registry — in this case Docker Hub — and then pull it back, potentially on some other machine.
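The whole share-and-reuse round trip above can be summarized as follows. This is a sketch: `<username>` stands for your Docker ID, and the run flags match the ones used earlier in this tutorial.

```shell
# Publisher side: log in, retag the local image under your Docker ID, push it
docker login
docker tag catalog:v1 <username>/catalog:v1
docker push <username>/catalog:v1

# Consumer side: no local image or Dockerfile needed -- docker run pulls
# the public image from Docker Hub on demand and starts it
docker run -it --rm -p 8080:80 \
  -e MongoDbSettings:Host=mongo \
  -e MongoDbSettings:Password=<your-password> \
  --network=net5tutorial \
  <username>/catalog:v1
```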
That's how you can share it. We may end up using some other form of Docker registry in future videos, but for now I just wanted to show you how you can share Docker images across systems. In this episode of the .NET 5 REST API tutorial, we continue our path toward deploying the REST API to a production environment by using Kubernetes. We will talk about the implications of running containers outside of the dev box with no downtime, and how Kubernetes is a perfect fit for these and the many other challenges of running distributed systems resiliently. Today you will learn: why a container orchestrator is needed; what Kubernetes is and what its basic components are; how Kubernetes enables resilient distributed systems; how to stand up a basic Kubernetes cluster on your box; how to deploy your REST API and MongoDB to that cluster; and how to scale a Kubernetes deployment. If you remember from the previous video, we talked about this orange box, which so far we've been calling the production environment. In this case, let's just call it a node: this is either the physical or the virtual machine where we're going to be running our Docker engine. Thanks to the Docker engine, we're able to run a bunch of containers in this node: these could be our REST API containers, or our MongoDB container, or many other containers we want to run in this node. However, a few interesting questions start arising as we move forward with this approach. For instance: who takes care of pulling and running containers? You can imagine that without any sort of automation, somebody would have to come to this box and actually do docker run for each of the containers we want to run in it. So that's an interesting question: either some automation or some person has to take care of how to run the containers.
And where do we specify all these different environment variables, secrets, and command arguments that we have to feed into docker run, to be able to run the containers exactly the right way in each case? And what if we don't have enough container instances? What if we need more? Something or somebody has to start spinning up more and more instances as needed, both for our REST API containers and for any other containers we may have in this box. And then, we can only fit so many containers in one box, so at some point we may need to introduce more nodes. Then somebody has to decide which node the new containers are going to go into — node 1, node 2, node 3, up to node n. So who decides this distribution? Do we have a person looking at the stats for each of these nodes and deciding where to do docker run on each of these machines? How does that work? Also: are the containers healthy? What happens if one of these containers crashes? What do we do? Who is on point to make sure we bring back a failed container, so that we keep having as many as we wanted to start with? Also, where do we store all the secrets, or the storage for the database files of the MongoDB database? Where do you put all of these? How do we enable containers to talk to each other? Because we know that our REST API containers need to talk to our MongoDB container; so far we've been using a Docker network to make them communicate, but as we have more boxes and more containers, how do we make sure they can all talk to each other? And one more thing: how do we reach these containers from the outside? All of these containers are running in the box, each with some port opened locally.
But what if somebody comes from the outside? What would be the public IP that somebody from the outside would use to talk to these containers? And if they do talk to them, which of all the instances would serve the request? Because we have so many instances in there — who decides which is the right instance for the request that's coming in? So, all of these questions: how can we solve all of these challenges? Well, this is exactly why we want to introduce an orchestrator tool like Kubernetes. So, enter Kubernetes. As we describe what Kubernetes can do, let's think again about these nodes. Let's say we have these three nodes, and we need to start placing containers in them, and we have to do a bunch of things to get those containers up and running in the right way. Instead of having an individual person, or some sort of script, that needs to run to allocate containers and make sure everything is going properly across these nodes, we're going to introduce this component — really a series of components — that we call the control plane in Kubernetes. The control plane has several components that take care of all of this; it's kind of the brain of Kubernetes. It decides how to schedule containers onto the different nodes, it decides what to do if one of these containers is destroyed — how to bring back one more — how to let them communicate with each other, and a bunch of other decisions. So for instance, when we want to get one new container deployed to one of these nodes, what we can do, via the control plane, is create a deployment. A deployment, by the way, is one of the resources inside Kubernetes.
What the deployment can do is go ahead, find the image we need in the container registry, and then allocate what we call a pod for the container it will be pulling in. Pods are the smallest deployable unit of computing that you can create and manage inside Kubernetes. The pod — it has a similarity to a pod of whales, which I guess is the symbol for Docker — is a group of one or more containers that share storage and network resources, and the pod also declares how to run the containers inside it. So you will always be working with pods rather than with containers: you don't deal with the containers directly, you only work with pods. For instance, in this pod we'll have, let's say, one container, which is the very common case of one container per pod. But you don't have to have just one: as you stand up your deployment object, you can stand up more than one container. Let's say this first pod is for the catalog REST API; but in this first node we don't have to have only the catalog API. We could have some other pod, for some other service, that in this case has not one but two containers inside it. More than that, we will also have a pod for our Mongo container, which we also need to pull in; that one has access to the database, and the database needs to be granted access to some storage to be able to store the database files. But then, the thing is that these catalog pods don't have to live only in node 1: when we run out of space, we may want to take advantage of other nodes, like node 2 here. Again, the control plane will take care of deciding where to put these pods across the entire set of nodes that we have.
So in this case we have just three nodes, but you can imagine this could be an entire farm of dozens and dozens of nodes, and the control plane will decide which node is the perfect fit for the pods that need to be allocated. If the deployment declares that it needs three pods, this may be one way the pods get distributed; if the deployment says, no, I need four pods, then we may want to deploy pod 4 into node 3, just because node 3 is free at this point. Same thing with some other pod: it may need to be allocated somewhere else — there's no space in node 1, so let's just put it in node 3, say, because node 3 has so much memory available. The other thing is: let's say pod 3 wants to talk to the database — really, all of our catalog pods want to talk to the database. How do they do that? There's this object called a Service in Kubernetes, and a Service allows us to reach the other components available elsewhere in the cluster. So in this case I'm saying: I want to reach the Mongo service, and via that Mongo service I am actually able to reach the database. In a similar way, if we have an external client that wants to talk to our catalog REST API, how can it reach it? Again, we stand up a catalog service, which can be reached — in our case, for now, just via localhost, but eventually it could be a public IP — and then this client can reach that service, and via that service it can reach not just one but all of the pods running behind it; it's a way to route to those pods. So all of these components together is what we call a Kubernetes cluster: the suite of all these components. And what I'm showing here is really just the tip of the iceberg in terms of all the possible resources.
And the series of configurations and capabilities that can be used within Kubernetes — there's really a lot that can be done with Kubernetes, but here we'll explore just a few of them to understand how it works. In terms of the benefits of using Kubernetes, one of the main things is its ability to turn desired state into actual state. Like I said, we create a deployment object where we say: hey, I want three copies of the catalog service. That is the desired state. Kubernetes, via the control plane, has to figure out how to make those three instances of the catalog container available as pods somewhere in the set of nodes we have available: if you have three nodes, it finds space in those three nodes; if you just have one node, it figures out how to allocate them in that one node; or we could have hundreds of nodes. Turning desired state into actual state is a key feature of Kubernetes. Then, like I said, it selects the nodes to run the pods: the control plane has knowledge of the stats of each node, so depending on the amount of CPU or the amount of RAM available in each of the nodes, it is able to allocate the pod in the right place. It also allows for self-healing: if it detects that one of the pods, let's say pod 3, has just been destroyed for any reason, it should be able to automatically bring back another pod. Or let's say that node 2 — the entire node — just goes down. If that happens, the control plane will know that it is missing one pod and that it needs to allocate that pod somewhere else, either node 1 or node 3 in this case. This is the self-healing capability of Kubernetes, which is super handy. It is also able to store configuration and secrets.
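The "desired state" idea maps directly onto the deployment object just described. A minimal sketch of such a manifest follows; the names, labels, and image reference here are illustrative, not the exact files used later in this tutorial:

```yaml
# A Deployment declaring the desired state: three replicas of the catalog pod.
# The control plane continuously works to make the actual state match this.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: catalog-deployment
spec:
  replicas: 3                 # desired number of catalog pods
  selector:
    matchLabels:
      app: catalog
  template:
    metadata:
      labels:
        app: catalog
    spec:
      containers:
        - name: catalog
          image: <username>/catalog:v1   # the image we pushed to Docker Hub
          ports:
            - containerPort: 80
```

If a pod (or a whole node) dies, the control plane notices that only two replicas remain and schedules a replacement somewhere with capacity — that is the self-healing behavior described above.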
So it has a place to store all the configuration that you need for your services, and also to safely store sensitive information, like secrets, somewhere where they cannot be compromised. Like I said, it provides service discovery and load balancing. So when the client calls into the catalog service, it is directed to one of these pods, following some algorithm to load balance across them. It also ensures no downtime. Let's say that we want to deploy a new version of our catalog Docker image across all of these pods. It will not just destroy all the pods at once and bring back a bunch of pods with the new image; it will slowly roll out the new version of the image in new pods as it starts destroying the older pods. Only when it confirms that the new instances of the pods are available will it start getting rid of the older ones, so that way it makes sure there's no downtime for clients. It can also auto scale. So it can be configured to say, hey, if there's so much CPU being used across all of our nodes, just spin up even more pods in any of the available nodes to satisfy the demand. Within this tutorial we will be doing the scaling exercise manually, but Kubernetes can also be configured to do the scaling automatically for you. It is also able to provision storage automatically, on demand, so you can easily declare that you want so much storage for your pod, without having to know exactly where that storage is going to be. You just have to say, hey Kubernetes, I need storage, and Kubernetes will be in charge of figuring out where to find the storage and making it available for the pod. And finally, it provides what we call gradual rollout and rollback. Just like I said, it will gradually start bringing in new pods as needed.
And also, if there are some issues with some pods, it is able to roll them back to a healthy version. So now that we know what Kubernetes is, let's see how to get our catalog REST API and the MongoDB database deployed into a local Kubernetes cluster. To get started with Kubernetes, the first thing that we're going to need is a Kubernetes cluster, locally. If you're already using Docker in your box, you already have an easy way to stand up a simple local cluster for development purposes. Let me show you how. If you go to your Windows taskbar, you should be able to find the Docker icon. You can right click this icon and select Settings, and this opens Docker settings in your box; I'll maximize this. Here, what you're looking for is the Kubernetes section on the left side, and then Enable Kubernetes over here, so I'll just click on it, then click Apply & Restart, and then click Install. Now, this is going to take a while if this is the first time that you're enabling Kubernetes, because this is going to set up a local cluster, a simple one with just one node. But still, it is a local cluster, and Docker needs to download a bunch of Docker images for all the components of Kubernetes into your box. So that may take a while. In my case, I already have all these images downloaded, so it was a bit faster. Once it's complete, what we're looking for is this message here that says Kubernetes is running; it should say running and it should be green, and then you know that you're good to go. So I'll just close this window now. With the Kubernetes cluster up and running, one of the first things that I like to do is make sure that I am connected to the correct Kubernetes cluster. That we can do via the kubectl command line tool, which is the command line tool to interact with Kubernetes. This tool should already be installed for you with your Docker installation, so you should be able to start using it right away. I'll open up a brand new terminal.
And what I'm going to do is just type kubectl config current-context and hit Enter. That gives me back docker-desktop. So indeed, docker-desktop is the name of the cluster installed by Docker once we enabled the Kubernetes cluster, so we are good to go on that side. Now it is time to start declaring how we want to deploy the components into Kubernetes for our REST API and database. To do that, we will need to write a few YAML files. To make this simpler, what I'd like to do is take advantage of the Kubernetes VS Code extension. So to do that, I'll first just close this terminal, and I'll go to the Extensions hub and search for Kubernetes. This one here should be the first hit; I'll just say Install. It should take a few seconds, and now it's ready, so I'll close this. What I'll do now is go back to our file explorer, and I'm going to create a folder to start storing all the files that we need to deploy our resources to Kubernetes. So this is going to be our kubernetes folder; it can really be any name as you please. And the first file we're going to create here is going to be named catalog.yaml. I'm naming it this way because this will declare all the resources that we need to deploy our catalog REST API to Kubernetes. Also, to keep things simple and speed up the process, I'm going to take advantage of the Kubernetes extension that we just installed to generate a little bit of code here. So I'll just type deploy, and that pops up a little bit of IntelliSense, as you can see, and then we just have to click it. That will give us a basic shape for our deployment resource. Now, the Deployment resource is what you would use to declare the desired state of the containers, or specifically the pods, that you want to get deployed into Kubernetes, in this case for a REST API. So let's understand what this file means.
So if you see the first line, it declares what we call the API version. This is something that all the Kubernetes resources will have, and it allows you to specify which API surface you want to take advantage of in Kubernetes. Depending on the version you pick here, you will have access to more or fewer features of the resource that you're configuring, in this case a Deployment resource. The second line declares the kind, which as we know is a Deployment object. Then we go to the metadata name. The metadata name defines the name for this deployment; in our case, we'll name it catalog-deployment. Then we go to the spec section. The first part of the spec section is what we call the selector, and the selector is what we use to specify how this deployment is going to select the pods that it is going to manage. So in this case it is saying, well, I'm going to manage all the pods that have the following labels. And in this case we just have one label; it is called app, and the value that we're going to assign here is going to be catalog. So all the pods that have an app label with the value catalog will be managed by this deployment. Then we keep going into the template section. Here is where we're going to declare all of the containers that are going to be included in this deployment. One of the first things to declare here is the label, or the labels, for the pods. In this case, we're going to label this pod as catalog; that's going to be the label. That's the way that we can identify all the pods that are related to catalog in this case, and this one has to match exactly the label that we just typed a moment ago, so that the deployment can actually manage these pods. Then we move forward to the most interesting section, which is the spec of the template, and the first thing that we're going to find here is containers.
So here, we declare the list of containers that are going to be included in this deployment. The first thing that we have to do for this very first container is give it a name; again, to keep things simple, we'll just name it catalog. Then we have to declare which image is going to be deployed. In our case, we're just going to use the image that we created in the previous video and pushed to Docker Hub; in my case, that was my Docker Hub user's catalog image, tagged v1. So we are saying we are going to pull that catalog v1 image down into Kubernetes, but you could specify any image that you want here. Next, we go into the resources section. Here is where you declare what resources, specifically in terms of memory and CPU in this case, are needed from the Kubernetes node in order to run your REST API. In this case I'm just saying, well, I'm going to need 128Mi, that is, 128 mebibytes, which is roughly equivalent to 134 megabytes. That is the full value here; you could change this according to your needs. And then on the other side we have the CPU, where we're saying that we want to use 500 millicpu, written as 500m. This is an interesting notation, and what it really means is, this is similar to saying, I'll just type it here, 0.5 CPU, so roughly half a CPU, just to understand it better. So basically we're saying we are good to go with half a CPU for our REST API, which should definitely be enough for us. Next, let me scroll down a bit; we have the ports. Here is where we declare the port that our container is exposing, which we can communicate with in order to access the REST API. In our case, we're going to do just as we did in the previous video, where we mapped into port 80, because we said that the base ASP.NET Core image configures our app to listen on port 80.
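Put together, the deployment described so far would look roughly like this sketch; the image name is a placeholder for whatever you pushed to Docker Hub in the previous part, and the exact indentation may differ slightly from what the VS Code extension scaffolds:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: catalog-deployment
spec:
  selector:
    matchLabels:
      app: catalog          # the deployment manages pods carrying this label
  template:
    metadata:
      labels:
        app: catalog        # must match the selector above
    spec:
      containers:
        - name: catalog
          image: <your-dockerhub-user>/catalog:v1   # placeholder for your own image
          resources:
            limits:
              memory: "128Mi"   # 128 mebibytes, roughly 134 MB
              cpu: "500m"       # 500 millicpu, i.e. half a CPU
          ports:
            - containerPort: 80 # the port the ASP.NET Core app listens on
```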
So we will be using that same port, port 80; that's the port inside the container where we can reach the REST API. The next thing that we're going to need in this deployment is a few environment variables. Because, if you remember, when we ran our container we had to specify both the host, for it to be able to talk to our MongoDB container, and also the password for the user that connects to that database. So how do we specify environment variables in this case? It's fairly straightforward: we can use the env section, so you just type env, and then you have to declare key name/value pairs for each of the environment variables. The first one we're going to declare is the one for the host. If you remember, in appsettings.json, which I'm going to open right now quickly, we have this hierarchy of settings, right: we have MongoDbSettings, and inside it we said we have Host, User and Password. So I'll go back to catalog.yaml, and we have to follow that same hierarchy. So I'll type MongoDbSettings, and then the way to separate, to go deeper into the hierarchy, in this YAML file is a double underscore. Remember, when we ran the container and passed variables into it, we used a colon; in this YAML file, the convention we're going to use is a double underscore. So that's how you address these nested settings. Then we'll give it a value. The value for the host in Kubernetes is going to be mongodb-service, and we'll see in a moment how we define this mongodb-service. But for now, yes, this is how we can address the MongoDB container that we're going to declare later on. Next, we need an environment variable for the password. But before we can declare it, we actually have to create this password in Kubernetes, and luckily Kubernetes has built-in support for secrets, for sensitive information like that. So what we're going to do is just open my terminal.
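As a sketch, and assuming the settings section is named MongoDbSettings as in the earlier parts of the tutorial, the env entry for the host inside the container spec would look like this:

```yaml
          env:
            # double underscore maps to the ':' hierarchy of appsettings.json,
            # so this is read as MongoDbSettings:Host by the .NET configuration system
            - name: MongoDbSettings__Host
              value: mongodb-service   # the MongoDB service we declare later on
```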
And I'm going to actually use a shortcut here, Ctrl+J, that opens up the last terminal we used, and scroll down a little bit. Then what we're going to type here is the kubectl option to create a secret. That would be kubectl create secret, and then I'll type generic. Generic is the type of secret that we can use in this case; there are a few other options that you can also use here, but generic is fine for us. We'll name it catalog-secrets, and I'll use the plural just because you can actually put multiple secrets in one of these secret objects, but we're only going to be using one. And in order to type it directly in the command line, what you do is say --from-literal, and then you have to specify the name and the value for the secret. So the name is going to be mongodb-password, and the value is going to be, just like we did before, Pass#word1. Then I'll hit Enter. So that created the secret. Now that we have created the secret, we can actually go ahead and declare it and feed it into an environment variable. To do that, we'll do something similar. So we'll say name, and again we'll follow the hierarchy, right: MongoDbSettings, double underscore, and in this case Password. And then in this case we don't want to use value; we want to read it from the secret. To read it from the secret, what we can say is valueFrom, then secretKeyRef, so we're referencing a secret. The name is going to be catalog-secrets, which is the name that we gave the secret, and the actual key within catalog-secrets is going to be mongodb-password. So that's how you can address a secret from the declaration of an environment variable in Kubernetes. That's looking good. I'll close this terminal again to have more space here. And the next thing that we want to specify here is our health probes.
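So the secret is created once from the command line and then referenced from the YAML. A sketch, assuming the names used in this part (catalog-secrets, mongodb-password) and the sample password from the earlier videos:

```yaml
# secret created beforehand with:
#   kubectl create secret generic catalog-secrets --from-literal=mongodb-password='Pass#word1'
            - name: MongoDbSettings__Password
              valueFrom:
                secretKeyRef:
                  name: catalog-secrets    # the secret object
                  key: mongodb-password    # the key inside that secret
```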
So the health probes are a mechanism that enables Kubernetes to constantly monitor the health of our REST API, by using the health checks that we already defined. If you remember, a few episodes ago we declared some health checks in the REST API, so now we can actually use them to let us know if our containers are in good shape or not. And remember, we had both a liveness and a readiness probe, so both of them we're going to be using here. Let's start with the liveness probe; to declare it you can use livenessProbe, and then the kind of probe. There are a few kinds of probes as well; what I'm going to be using here is the httpGet type of probe, which is just going to call some HTTP path on the API with the GET verb. And the path that it needs to use is, if you remember, /health/live, and it needs to use port 80 for that one. And that's it; that will be the liveness probe. I'll just copy this to define our readiness probe. The readiness probe is going to be very similar, but it ends in /ready. If you remember, we use the liveness probe to tell if our REST API is up and running, and the readiness probe to tell if the dependencies of the REST API are responding correctly and are ready, so that basically the REST API is ready to serve requests. So with that, we have properly declared a deployment for our REST API. But it is not enough. This will go ahead and create the pods, pull the containers, set the environment variables and all that, but still there's no way for anybody to reach into this container from the outside. To enable that, we need to bring in another component that's called a service. That service we could either create in another YAML file, or we can declare it right here, which is what I'm going to do. And the way that you separate these two different resources in the YAML file is by using three dashes. Then you can go ahead and start declaring your resource.
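Assuming the health check endpoints registered in the earlier episode were /health/live and /health/ready, the two probes on the container would look roughly like this:

```yaml
          livenessProbe:
            httpGet:
              path: /health/live   # is the API process up at all?
              port: 80
          readinessProbe:
            httpGet:
              path: /health/ready  # are the API's dependencies (MongoDB) reachable?
              port: 80
```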
So I'll use some IntelliSense again here; I'll just type service, and that brings up the Kubernetes service IntelliSense, so I'll click on it. And that will scaffold a few pieces in there that we can take advantage of. The service is the type of resource that you use to be able to address resources within Kubernetes, either internally or externally, which is what we need at this point. We also need to give the service a name, just like we did with the deployment, so we'll name it catalog-service. Then one piece that's not listed here, but that is important for us, is the type. There are a few types of services, but in our case we want to use the type named LoadBalancer. LoadBalancer is the one that allows Kubernetes to request an external IP, or not exactly an IP in this case, because it's localhost, but it requests a way to open kind of a window to the outside, so that people can actually reach our REST API from the outside. We'll see how that name resolves when we run this, but without it we cannot reach the service from the outside. For the selector, we have to specify the label for the pods that we declared before. If you remember, in the template up here we said that the label for all the pods that are going to be managed by the deployment is going to be catalog. That same label is the one that we have to use when we declare the service selector, so we're going to say, well, this service is going to be able to target any of the pods that have the app label with the value catalog. So that's how you connect the service with the pods. And then you also have to specify, okay, which is going to be the port that, from the outside, people can use to reach into our API, and that has to be mapped into a port in the container.
So that port in this case is going to be 80, just because it's the default port for HTTP. That will allow our clients to not even have to specify a port; they will be able to call directly into the API, and they will go into port 80. And the target port has to be the port that we specified for the container; remember, over here in ports we declared containerPort 80, so containerPort 80 is the target port that we want to use here. So it's really mapping 80 to 80. But it doesn't have to be like that; we could have said, let's say, 8080 maps to 80, and that would be totally fine, just like we did with the container. But it is more usual to use port 80. So now we should be ready to deploy these Kubernetes resources, both the deployment and the service. So how do you deploy this into Kubernetes? We have to go back to our kubectl tool to do that. I'll open up my terminal again, Ctrl+J, and I'll switch to my kubernetes directory. The command to apply the resources declared in this YAML file is kubectl apply -f, and then the name of the file; hit Enter. You should get a couple of messages stating that the deployment has been created and the service has been created. Now, if you want to see the state of the deployment that should have been created, what you can do is say kubectl get deployments, and that will give you a list of all the created deployments. In this case it is saying we have one deployment named catalog-deployment, and currently it is saying that zero of one of the pods is ready to go. So it means that it is not really ready yet. So let's dive in a little bit more and get details about the actual pod that got created. What we can do is say kubectl get pods. And as you can see, for the one pod that we have declared here, the name starts with the name of the deployment.
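The service section appended after the three dashes in catalog.yaml would look roughly like this sketch:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: catalog-service
spec:
  type: LoadBalancer     # opens a way in from outside the cluster (localhost here)
  selector:
    app: catalog         # routes to any pod labeled app: catalog
  ports:
    - port: 80           # port exposed to the outside; 80 is the HTTP default
      targetPort: 80     # the containerPort declared in the deployment
```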
And then it gets a name for what we call the replica set, which we will not talk about in this video, and then finally some identifier for the actual pod. And indeed, the pod is not ready; this is saying zero of one pods are ready to be used, right, to be reached from the outside. But still, it says running. So what this really means is that our liveness probe, you remember we have a liveness probe, is reporting a healthy status, but our readiness probe is not reporting a healthy status. To find out a little bit more about what's going on here, and I'll expand this a little bit more, let's actually get some logs from that catalog pod. So I'll say kubectl logs and then the name of the pod, and hit Enter. And then we're seeing something interesting here; let me scroll up a little bit. Yes, so here, as you can see, our MongoDB readiness check is failing with status Unhealthy. And as you can expect, well, this is not unexpected, because we have not really deployed any database yet. But this is great; this confirms that both of our health checks are actually working properly, and we just need to make sure that we fix this problem with the database by bringing it up, and then we should have a REST API up and running. Let's close the terminal, and let's actually declare what we need for our database. So I'll go back to my kubernetes directory, and I'll say New File, and this file is going to be called mongodb.yaml, because everything we're going to declare here is just for the deployment of that MongoDB database. Now, the type of resource that we're going to create for MongoDB is actually called a stateful set. Sadly, we don't have a way to generate the skeleton for a stateful set, so we're going to use the deployment template for this; let's type deployment and select the Kubernetes deployment snippet.
And then what I'm going to do is just switch from Deployment to StatefulSet. A stateful set has similarities with the deployment, but it is actually meant for stateful applications. So a stateful set provides guarantees about the ordering and uniqueness of pods. When you see the pods that are created by a stateful set, they will not have random names; they will have very specific, ordered names. So for instance, in the case of this one, they may be called mongodb-0, mongodb-1, mongodb-2. And more than that, if one of these pods dies, let's say mongodb-1 dies, when it comes back, it will come back as mongodb-1 once again. This is very important in the case of MongoDB, because we will attach a persistent volume to it, which will have the database data files, and we want to make sure that those files don't just get lost as the pod is reconstructed, in the case that it needs to be, for any reason, right? We want to keep them around. So the right type of resource to use for a persistent service like MongoDB would be a stateful set. Just like we did with the deployment, one of the first things to do here is to assign a name to the stateful set; in this case, we will name it mongodb-statefulset. Then one important thing that's not being generated here is what we call the service name. The service name is used to give some identity both to the stateful set and to the pods that are going to be managed by it, which is not actually needed for deployments, but for a stateful set it's a requirement. The name that we're going to give here is mongodb-service; we'll see how this ties into the MongoDB service later on, and back to the catalog later on. Then, also just like we did with the deployment, we need to define the labels that the stateful set is going to be using to select the pods that it is going to manage.
So in this case, let's say that our pods are going to have the mongodb value in their app label. And then we have to make sure that in the pod template down here, in labels, app, we are using the same value. So all our pods are going to have an app label with a value of mongodb, and the stateful set is being configured to manage all those pods with the mongodb value in the app label. Now we keep going down into the containers section inside the spec, and let's give the container a name; let's name it, again, mongodb. Then for the image, we'll use the same image that we've been using so far, which is just mongo, the latest version of Mongo. Then we keep going down into the resources section, and we will leave these resources as they are; they should work just fine for us, but it's up to you to modify how much memory and CPU you expect to be using for your MongoDB server and the MongoDB database. Moving on, we have to declare the port where the MongoDB container is listening, and by now we know that this port is 27017. Right, so that's the port that we have to use to connect to that MongoDB service inside the container. And once again, like we did with catalog.yaml, we have to define a couple of environment variables to be able to start up our MongoDB container, and those are the username and the password for the user configured for MongoDB. So let's declare an env section, and here let's start declaring. The name of this environment variable is MONGO_INITDB_ROOT_USERNAME, and the value that we're going to assign to it is the one that we've been using so far, mongoadmin. And then for the password, we're going to be using the same secret that we used for the catalog. So I'm switching to catalog.yaml here, and I'll just copy our password section into mongodb.yaml and fix the indentation.
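The top of the stateful set would look roughly like this sketch, assuming the mongoadmin username carried over from the earlier parts of the tutorial:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb-statefulset
spec:
  serviceName: mongodb-service   # required for stateful sets; ties pods to the service
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb             # must match the selector above
    spec:
      containers:
        - name: mongodb
          image: mongo           # latest official Mongo image
          ports:
            - containerPort: 27017   # MongoDB's default port
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              value: mongoadmin      # the username used in earlier parts
```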
And then, as you see, we're reading from the same location, but the name of the environment variable is just a little bit different; it is going to be MONGO_INITDB_ROOT_PASSWORD. Right, so those are the username and password environment variables that the MongoDB container is expecting. The next thing that we need to define, in the case of the MongoDB container, is what we call the persistent volume. So we need to declare some storage, a way to ask Kubernetes for some storage space to place the data files for our database. Otherwise, I mean, yes, the database will exist within the MongoDB container, but when the container, or the pod that contains it, is killed for any reason, the data will just disappear, right? We don't want that to happen. So to do that, we create what we call a persistent volume, and to create a persistent volume we're going to use what we call a volume claim template. So let's do that, and let me make sure I pick the right place for this; I should be here. So let's say volumeClaimTemplates. And really, the volume claim template is a way to declare, or to ask Kubernetes for, some storage space in the node where the pod is going to be executed; so, I need some space, and we have to declare some details about the space that we're going to claim. First, we're going to set up some metadata, and we pretty much just need the name here. So, what's the name that we're going to give to this volume? That's going to be data. Then we have to specify the spec, the specifications for the volume that we're going to ask for; that goes into the spec section. And then here, we have to specify an access mode. The access mode declares in which way this persistent volume is going to be mounted into the node; and actually, this should go into a previous line, so let me fix it like this.
And for our case, the mode ReadWriteOnce should be good enough. Really, what it means is that the volume that's going to be created and mounted is going to be mounted as read-write, and for a single node, meaning that just one node can read and write to this volume at a time. If you wanted more than one node to be able to write to this volume, you would have to use a different kind of access mode. Then, finally, we have to define the resources that we need here. So in this case, we'll declare resources, and then requests, and finally storage. And here is where we declare how much space we want for this volume. For our case, let's go for 1Gi; the term would be gibibyte, similar to one gigabyte. That's going to be as much storage as we're going to claim for this volume. I'll save this. So with this, we have claimed space in the node, right, for this volume for MongoDB. However, we still need to tell our container that such a space exists, and we have to map that space in the host machine to space inside the container. To do that, we're going to declare a volume mount. The volume mount needs to be declared inside the container spec, so I'll just type here volumeMounts. And here, we're going to specify a name for it, and this name will have to match exactly the name that we specified in the volume claim template, right. So we said it's going to be data, so data is the name that we should use here. And then the mount path is the path where, traditionally, MongoDB stores its data files inside the container. So that path is going to be /data/db.
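So the storage pieces of the stateful set, the mount inside the container plus the claim template alongside the pod template, would look roughly like this:

```yaml
          volumeMounts:
            - name: data            # must match the claim template name below
              mountPath: /data/db   # where MongoDB writes its data files
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]  # read-write, mounted by a single node at a time
        resources:
          requests:
            storage: 1Gi            # claim one gibibyte for the database files
```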
So what this really means is that when the MongoDB container writes into its /data/db directory, the data files for the database are going to end up in the persistent volume that has been declared on this side here. So the data gets written outside of the container and into the persistent storage somewhere in the host. This way, if the container goes down, or the pod goes down for any reason, the data is not lost, and it will come back as the pod comes back with the same identity in the stateful set. The next thing that we need to do, just like we did with catalog.yaml, is declare a service, right, a service that allows us to address, or to get to, the MongoDB pods. So I'll add the three dashes, and then I'll type service for the Kubernetes service snippet, to generate it just like we did before. And just like before, we have to specify a name for the service; let's call it mongodb-service. Then one important missing piece here is this thing called the cluster IP. The cluster IP is what you would use to specify the IP address for your service, right. By default, any service will get a cluster-local IP that can be used to talk to other services within the cluster. But in our case, we want to create what we call a headless service, so we don't want to assign an IP to the service, because in a stateful set we want to address each of the pods individually. So in this case we're going to say None, so we don't want an IP in this case, and that turns it into a headless service. Now, for the selector, once again we have to specify how this service is going to be mapped to the pods that exist for MongoDB. Right, so for that, we have to go back, once again, into the pod template: in the pod template metadata labels, the label that we're looking for is app, and the value is mongodb. So we go back and copy that into this selector: app, mongodb.
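The headless service appended after the three dashes in mongodb.yaml would look roughly like this:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  clusterIP: None    # headless service: no cluster IP, pods are addressed individually
  selector:
    app: mongodb     # routes to the stateful set's pods
  ports:
    - port: 27017
      targetPort: 27017   # the containerPort MongoDB listens on
```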
So any time a request comes into this service, it will find the pods that have been tagged with the mongodb value in the app label; that's how it's going to find them. Finally, we need to specify the ports. And so, just like we did with the container, we're just going to do a simple mapping of 27017 to 27017: any request that comes in on 27017 should be redirected to the port where the container is listening, and that port, as you can see here, is again 27017; I'll just copy that over here. Now that we have done this, we should be ready to get started with this MongoDB container. So I'm going to open up my terminal again, Ctrl+J opens the terminal, and I'll scroll down a little bit. Then I'm going to switch to the kubernetes directory, and here I'll just do kubectl apply -f and then our YAML file, mongodb.yaml, and I'll hit Enter. So both the stateful set and the service have been created. And so at this point, let's see what the stateful set looks like, right; let's do kubectl get statefulsets. That gives us the one stateful set that has been created, mongodb-statefulset, which seems to be ready; it's healthy, it's in a good state, and it's ready to be used. Now that we did that, let's take a look at all the pods that we have right now. So once again, kubectl get pods. And as you can see, now we have our catalog pod, and it is actually ready; it's no longer reporting 0/1, it is reporting one out of one. And that is because the readiness probe that was trying to reach into our MongoDB database is now able to actually do it, and that's because we now do have the mongodb-service available. If you remember, when we were in catalog.yaml, we declared that the MongoDbSettings host was going to be mongodb-service; it was not available yet.
But now that we have declared it and we have deployed it, we do have the MongoDB service that the pod can reach, and that allows our health check to pass, so we can connect to the database successfully. Just like we have that pod, we also have the mongodb-statefulset-0 pod. As you can see, it does not have a random ID as in the case of a deployment; it's a very specific, numbered, ordered index for these stateful sets. So if it dies, it needs to come back with that same number. Let's now see if our REST API is working properly, now that it is running within Kubernetes. So let's open up Postman. The last time we did a GET against our REST API, we did it over port 8080, because that's the port we exposed when running it as part of Docker. But now we have switched to port 80, so we can either query the API with the port, or we can just remove :80, because it's the default HTTP port. So this should be good enough to do a GET against the API; let's see what we get. And sure enough, we don't get any results, because remember, this is a brand new database that's now hosted inside Kubernetes, so it effectively doesn't have any data. But we can go ahead and create something in there. So I'm switching to the POST tab here, and I'm also going to change the URL to just http://localhost/items. Then for the body, let's come up with something quickly: let's say we're going to create a potion again, price 12, and hit Send. And sure enough, the potion has been created. We can create just one more thing, let's say an antidote, with price, let's say, 17; hit Send, and it has been created. Go back to the GET and hit Send: we are getting our two items now. So the REST API is working properly, and everything is now running within Kubernetes. Now, we talked about this capability of Kubernetes to self-heal, right? It should be able to always enforce that desired state, regardless of what happens to the pods.
So let's see exactly how that works. I'm back in VS Code now, and what I'm going to do, just to get a little bit more space here to visualize things better, is move this left side panel all the way to the left. Then I'll spin up another terminal with the split terminal button, so that now we can have two terminals side by side, and I'll move things around a little bit, perhaps right there. On the left side, I'm going to be watching the pods with kubectl get pods -w; that lets me see any changes that happen across the currently existing pods. And on the right side, I'm going to simulate killing one of these pods. So let's imagine that the pod had some bug that caused it to crash; what happens then? Let's do kubectl delete pod with our catalog pod, our only catalog pod, and I'll go ahead and delete it. I'll hit Enter, and see on the left side how, right away, as soon as it starts terminating that pod, it immediately starts spinning up a brand new one. That's the capability of Kubernetes to always detect this discrepancy between the desired state and the actual state; as soon as it detects the situation, it does whatever needs to be done to bring things back to consistency. So in this case, the new pod is up and running, and then I'll just do Ctrl+C and run get pods again without the -w, and you can see that we again have our two pods. But now we have a catalog pod with a different identity; you can see it's not the same identity as before. And that's because the pods created by deployments are ephemeral in nature; they are just stateless. Now, that's not the same case for our stateful sets, which, as we said, should persist across the lifetime of a pod, right?
So we have this mongodb-statefulset-0 here; what happens if we delete it? More than that, what happens to the data that is being held by this MongoDB pod, because we already have data in there? Would the data survive and still be there when the pod comes back? Let's try it out. I'll do a watch again for the pods, and I'll copy the name of our stateful set pod. Then I'll do kubectl delete pod mongodb-statefulset-0; let's see what happens. And see on the left side, the pod is indeed getting terminated, but right away Kubernetes detects that it needs to bring it back, and it needs to bring it back with the same identity, mongodb-statefulset-0, because this is a stateful set, and we don't want to lose the data that's being held by this pod. In fact, if I just Ctrl+C this and do get pods, we have our two pods, and now we should be able to verify that the data is not gone; it should still be there. So if I go back to Postman and query for the data once again, we can see that the data is still there; it has not gone anywhere. So that means that indeed our persistent volume got created, and the data is being stored outside of the container on the host, and that's enabled by our stateful set. I'll go back to VS Code now. We also talked about this capability of Kubernetes to easily scale the pods and the containers inside them without much trouble. So if at some point our service is getting many more requests, and we need to have not just one pod for catalog but three, what would we do, and how can Kubernetes help us with that? So once again, let's monitor what's happening on the left side with get pods. Then what I'm going to do here is just ask Kubernetes to scale the deployment, which you can do with kubectl scale deployment, and it will be catalog-deployment.
And then you have to say how many replicas you want: right now we have one, and now we want to have three. So hit Enter, and notice on the left side that it immediately starts provisioning a bunch of new pods to enforce this new desired state, and this happens blazing fast. If I just do Ctrl+C now on the left side and run get pods again, you can see that we already have three pods, and we only had one just seconds ago. So here you can tell the power of the combination of Docker containers and Kubernetes: we can bring in many more instances of our containers, of our REST API in this case, without much trouble; just by running one line, we now have three copies of the container running. Now, the whole point of having these replicas of the pod is so that we can do some good load balancing between them: when a request comes in, it should land in one or another of these pods. And how can we verify that? Well, unfortunately, we don't have a good way to do that right now, but I think we can easily introduce a little bit of logging into the pods so that we can tell which pod each request is landing in. To do that, I'll first close the terminals for now, then go to our Explorer and open our items controller, and here I'm just going to add a little bit of logging. To add logging, what we can do is bring in the standard ILogger interface, so I'll declare an ILogger<ItemsController> and call it logger. ILogger is a standard interface available in .NET 5, so pretty much in any .NET 5 application. And what you do here is just dependency injection, as you do with any other built-in service of .NET 5, and I'll assign it to the logger field right here. Now that we have that, what we can do is pick one of our APIs and do some logging. I'll go for the easiest one, which is going to be our get items API.
So in this case, I'll just add one line here, and I'm just going to log a little message that says how many items we found. To do that, what I'll do is logger.LogInformation, with a little bit of string interpolation. The first thing I'm going to put here is the current time, and I'll do that via the DateTime class: I'll choose the UTC time, and with ToString I'll show just the time in the format of hours, minutes and seconds. And then I'll just say Retrieved, the item count, items. Okay, so this is just for us to verify that requests are landing in different pods. So I'll save this, and that's the only change. What we have to do now is create a new version of our Docker image. I'll open up our terminal now, and I'll delete one of these, so that we only have one. And here I'll do what we did in the last video: I'll run our Docker build command, and I'll be publishing this into Docker Hub, so I'll keep using the same format as before, the username first and then the name of the image. The one thing that I'm going to bump here is the tag, because this is a new version of the image, so we should be bumping it, let's say to v2. This is necessary so that Kubernetes can later tell that this is a new image and that it needs to pull it down from Docker Hub; otherwise, it would not be able to do so. Then I'll add the dot and hit Enter. Ah, and I just missed one parameter here, which is the -t for the tag; hit Enter again, and this is going to build the image: it's going to reuse some of the cached layers and rebuild just the piece that is needed. Okay, so the image is built. Now I'm going to need to log in to Docker Hub with docker login, and now I should be able to just push the image, so I'll do docker push, the username, catalog:v2, and hit Enter.
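Put together, the logging change described above might look like this; the repository field name, the DTO mapping, and the exact message text are assumptions based on the transcript, not the tutorial's verbatim code:

```csharp
// Hypothetical sketch of the ItemsController logging change described above.
private readonly IItemsRepository repository;
private readonly ILogger<ItemsController> logger;

public ItemsController(IItemsRepository repository, ILogger<ItemsController> logger)
{
    this.repository = repository;
    this.logger = logger; // injected by the built-in DI container
}

[HttpGet]
public async Task<IEnumerable<ItemDto>> GetItemsAsync()
{
    var items = (await repository.GetItemsAsync())
                .Select(item => item.AsDto());

    // Log the UTC time (hh:mm:ss) and how many items were returned,
    // so we can tell which pod served each request when tailing logs.
    logger.LogInformation($"{DateTime.UtcNow.ToString("hh:mm:ss")}: Retrieved {items.Count()} items");

    return items;
}
```

Since each pod writes its own log stream, tailing the logs of all replicas reveals which pod handled each request.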
And then again, some of the layers already exist in Docker Hub, so only the layer that's missing, which contains my little one-line change, is the one that needs to be pushed to Docker Hub. Okay, so with the image in Docker Hub, we should be able to tell Kubernetes that we want to start using it. For that, I'll go back to catalog.yaml and say: hey Kubernetes, I don't want to keep using version one, I want to use version two. Save that, and then back in the terminal, I'll switch to our Kubernetes folder, and here I'll just apply this file once again: kubectl apply -f catalog.yaml. And now, if we do kubectl get pods with the watch flag, we can see what's happening. As you can see, the old pods are getting destroyed, and new pods are immediately getting stood up for the new image, the new version that we need to be using. This should take us a few seconds, so I'll Ctrl+C now and check the actual status of the pods. Let's see what we got. So yeah, we have three new copies of our catalog REST API, the replicas of the pod, and so now we should be able to tell if we're load balancing across them. To do that, what I'm going to do is, again, make some more space here: I'll expand this terminal even more and split the terminal in three, so that we can tail the logs of the three pods. So first, I'll go back one directory and do kubectl get pods so we get the names of all the pods, and then, one by one, I'm going to do kubectl logs for this one, with -f so that we can tail the logs; then I'll do the same here for our second pod, which is going to be this one here, -f; and then the third one is this one here, for our third terminal, kubectl logs -f. Okay, so now we're tailing the three pods.
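The image bump in catalog.yaml described above amounts to a one-line change in the deployment's container spec; "username" here stands in for the Docker Hub account used in the transcript:

```yaml
# Fragment of the catalog deployment in catalog.yaml (sketch).
spec:
  template:
    spec:
      containers:
        - name: catalog
          # Bumping the tag is what tells Kubernetes a new image exists
          # and triggers a rolling replacement of the pods on apply.
          image: username/catalog:v2
```

Re-running kubectl apply -f catalog.yaml with the new tag is what causes the rolling update seen in the pod watch.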
And I'm going to go back to Postman, and perhaps we can accommodate it here, on top of the other window, to see what's going on. Yep. And one more thing we actually have to change in Postman is a header, so let me maximize this for a moment. The default behavior of Postman is that it's going to send the Connection header with a value of keep-alive, and what that's going to do is set up a persistent TCP connection between Postman and our pod. That would actually prevent our little exercise from allowing subsequent connections to go to different pods. So just for testing purposes, and to see how things work, I'm going to disable that header here and see how things go. And, yeah, with that, I'll minimize this, and let's send one request and see where it lands. You see, it landed in the first pod on the left side: retrieved two items at 7:04:52. Let's send again: now we landed in our second pod. Let's send again: again in the second pod. At this point it's going to be a little bit random; I mean, the algorithm that's supposed to be used is round robin, but really, requests can land in any pod; that is the point. So there you go: load balancing in Kubernetes, fairly straightforward, without you having to do much work on it. And if you scaled to thousands of pods, then all of them would be serving your requests appropriately. In this episode of the .NET 5 REST API tutorial, we talk about unit testing, test-driven development, and how to implement them to raise the quality of our REST API. Today, you will learn what unit testing is and why it is so important.
What is test-driven development, also known as TDD, and why you may want to consider it in your project; how to unit test a REST API controller via the xUnit testing framework; how to mock dependencies via the Moq framework; how to write better assertions via the FluentAssertions library; and how to implement TDD in practice. So what is unit testing? This is a topic that I'm very passionate about, and to understand it, let me introduce a quick analogy. Imagine that you're a member of the team in charge of testing the SpaceX rocket for the first time. The engineers have used dozens, maybe hundreds, of different parts and systems, some not even produced by SpaceX, to assemble this awesome vehicle. Everybody brought their top game to build the rocket, and now we would like to see if everything works as expected. Launch day is here, and then, yeah, that didn't go as expected. It has to be said: you can't just assemble a bunch of parts and test them all together the very first time the rocket is launched. Fortunately, this is not how they usually test a rocket before launching. Without getting too technical about rockets, because I'm definitely not an expert in the area, I just want to show here a simplified diagram of the parts of a rocket that I got from the NASA website. All of the different components and systems, like the payload system or the oxidizer, are individual units of this entire vehicle that all need to operate properly before the rocket can lift off. And the engineers don't just build these units and send them to the assembly team to put them on a rocket, and only after everything is put together figure out whether all the parts work or not. Each part of the rocket is tested in isolation, likely several times, before being sent for final assembly into the rocket. This gives certainty to the teams behind each part that, as long as it is used according to specifications, that unit will work as expected. And the same goes for the team assembling the entire thing.
They know that they can connect all the parts according to specification, and the rocket should fly. This saves time and money for everyone, and likely saves a few lives along the way. So in terms of software engineering, we can define unit testing as a way to test every piece of code in isolation, without external dependencies. Now, coming back to our catalog REST API: even with a simple design at this point, we do have a few components that talk to each other, like the items controller, the items repository class, the Mongo client instance, and finally the MongoDB database. Each of these components is made of a bunch of methods that represent the behavior that we can get out of them. For instance, the items controller has functions like get item, create item, update item, and delete item, and we will certainly keep adding more in the future. These are the different granular units that must individually work correctly to ensure that the whole service provides the expected functionality. Therefore, for each of them, we really need to write a series of unit tests that exercise every aspect and every corner of each of these methods, to give us enough confidence in their quality way before trying out the whole service from Postman or from any other client. Beyond this, unit testing has a bunch of benefits that you definitely don't want to miss. With a unit test, you can quickly verify your code without having to worry about dependencies. For instance, you can make sure that your class can retrieve items from the database without having to stand up or talk to a database server at all. And such a test can give you results in milliseconds, as opposed to seconds or minutes. You can also make changes without worrying about introducing regressions: after you have unit tests in place, you can refactor your code as much as needed without concern about breaking the service.
Because you know that the unit tests will provide you with that safety net. Unit tests will catch bugs at the point where it is easiest and cheapest to fix them, which is before merging your code into the code base, and way before it gets deployed to production. Fixing something that is already impacting customers in prod can have an enormous cost, both in human hours and, of course, in the amount of money lost by anybody that uses the service. And unit tests, if done well, can be the best documentation of your REST API, since every use case should eventually turn into a unit test, and those tests can't lie: they must represent the way that the system works. Now that we know what unit tests are, let's also talk about test-driven development, or TDD. So what is TDD? Simply put, TDD is a software development approach where you write a test before you write just enough production code to make the failing test pass. This translates into a cycle made of three phases. A red phase, where you write a test that represents your software requirement; this test fails because you have not implemented any production code yet, and in fact, the test doesn't even compile at this point. A green phase, where you write just enough production code to make the test pass; you don't need to implement anything beyond what's needed to pass the test, and inelegant or ugly code is allowed at this point. Finally, if needed, you refactor the code you just wrote while you keep running the tests to make sure that they stay green. It is at this point that you eventually arrive at code optimized for readability and maintainability. You keep repeating this cycle for any new piece of functionality. This is the basic cycle of TDD. Why would you want to embrace TDD? Well, there are a lot of benefits to embracing this practice, but there are three aspects I like most about it. You start by focusing on the requirement, not on the implementation.
This gives you a lot of freedom in terms of properly addressing the requirements, because you're not constrained by an already implemented piece of code. When you implement the code first, you end up writing tests that verify only as much of the implemented code as you have time or patience for, because you already invested a lot of time and effort in that code. Second, you can increase test coverage, because by definition you would not have written any more production code than needed to pass the tests. Again, when you don't do things the TDD way, you might end up with multiple corner cases that you might forget or might not have time to test properly, reducing the test coverage. Finally, clean design is enforced from the start. As you write a test, you will naturally start designing the pieces of production code in such a way that leads to a passing test. The classes and methods emerge from the tests, and you'll naturally avoid the pitfalls of writing code too coupled to be tested. There are three main unit testing frameworks in the .NET ecosystem these days: NUnit, MSTest, and xUnit. They all fulfill the same purpose of allowing you to write and run your unit tests in an automated way. However, for any new project, I strongly recommend you choose xUnit. This framework comes from one of the original authors of the popular NUnit framework, but it was meant to be more closely in line with the .NET platform and to help write clearer tests. It is also more intuitive than MSTest, which requires more attributes in test classes, some of them not straightforward to use properly, especially for developers new to the platform. Let's see now how to implement unit testing and TDD in practice. It's time to add a new test project for our unit tests. But before we can do that, I think we should restructure things a bit, to give our REST API a more specific directory that will live side by side with our upcoming test project.
So I'll start by going to our Explorer view on the left side, and I'll just look for an empty section over here, right-click, and say New Folder; the new folder is going to be Catalog.Api. And now let's move most of the directories and files over there, except for .vscode, bin, and obj. So let's grab everything else into Catalog.Api and move it, and now that folder has all the files for the catalog REST API. Now, let's close this, and then let's delete the old bin and obj directories; we don't need them, so those are gone. Now, just to match the folder name, let's rename the project file to Catalog.Api.csproj. Then we'll have to make a bunch of renames in a bunch of files, just to match this new project file name. So I'll copy this name here, close this, and then I'll go to Search and Replace. We're going to replace Catalog.csproj with Catalog.Api.csproj, so let's just Replace All. We'll do the same thing with the DLL: Catalog.dll is going to be renamed to Catalog.Api.dll; replace this in all the files. Now let's look at the namespace. Today we have this namespace Catalog, and that should turn into Catalog.Api. And then everywhere classes are using that namespace, let's make sure they use the new one: using Catalog should turn into using Catalog.Api, so do that replacement. Now let's go to our .vscode tasks.json file, where we have this workspaceFolder/Catalog.csproj; we need to change that to point to Catalog.Api/Catalog.Api.csproj, replacing all the entries in this file. It's all done. And now let's go to launch.json and do a very similar replacement, so the paths point into the Catalog.Api folder; Replace All in this file. And I think that's all we have to do, so I'll just close these two.
And then I'll do a Ctrl+Shift+B to make sure everything is building properly. Yeah, looks like it is; I'll close these. You'll see that NuGet packages are restored, and everything looks fine, so I'll close the terminal. Now, among the things that we modified (I'll collapse this one) is the Dockerfile. The Dockerfile is now pointing to Catalog.Api.csproj, so I just want to make sure that it is still building properly, and I'll rebuild our Docker image now. So I'll open my terminal, open a brand new PowerShell terminal, and given this new directory structure, I'll have to go into Catalog.Api. And here I can run my Docker build command again: docker build -t, and it's going to be username/catalog:v3. And v3 because we are creating a new version of this Docker image, which was v2 last time; hit Enter. Okay, so the new image is created. I'll do docker images, and there it is, the image was created, so everything looks just fine. This is great; I'll close the terminal and close this file. So now we can actually create our test project. I'll collapse this for a moment. We want to create the new test project at the same level, so go back to the terminal, and we're going to go up one directory. To create this project, you do it very similarly to how you created the Web API project: you just use the .NET CLI, so dotnet new, and like I said before, the preferred test framework for this project is xUnit, so I'll go for xunit. Now let's name this project Catalog.UnitTests, and hit Enter. And our unit test project has been created; as you can see over here, it has a project file and an initial test class. Now, one interesting thing that we have here is that we need to build not one but two projects, right?
And so anytime we want to make sure that everything is building properly, we need both; however, our VS Code environment has not been configured for that yet. If you look at tasks.json, it is configured to build only Catalog.Api.csproj. So how can we make it so that anytime we build, it builds both projects? There's a handy way to do this that I'll show you. What I'll do is create a new file at the root here, and it's going to be named Build.proj. Okay, and then I'll collapse this for a moment. This file is going to allow us to build all the projects in one shot. So how do we do that? Let's declare the following: a Project element with an Sdk attribute, and here we're going to be using the build Traversal SDK. To do that, you just type Microsoft.Build.Traversal, and you also have to specify the version, because it's actually going to pull a NuGet package down to your machine. The latest version that I found last time was 3.0.3, so I'll use that. Okay, let's also go ahead and close the element. And inside this section, you have to define an ItemGroup, so ItemGroup, close that group. Here you have to reference all the project files that you want to compile. For that, you type ProjectReference Include equals, and then we'll just use this expression: star star, slash, star dot csproj, so any files that end in csproj are going to be compiled via this file. Okay, so with this file ready, let's go back to Explorer, and let's actually go into tasks.json and ask it to no longer build just Catalog.Api.csproj, but instead to build Build.proj. All right. So with that done, I'll do Ctrl+Shift+B. Yep. And let's close this. You can see that both (let me just show this for a moment) Catalog.Api.dll and Catalog.UnitTests.dll have been built by this one command. So notice that you don't need a Visual Studio solution for this at all.
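The traversal project just described would look roughly like this; the 3.0.3 version number is the one mentioned in the transcript, and newer versions of the SDK may exist:

```xml
<!-- Build.proj: builds every .csproj under this folder in one shot -->
<Project Sdk="Microsoft.Build.Traversal/3.0.3">
  <ItemGroup>
    <!-- The glob picks up both Catalog.Api and Catalog.UnitTests -->
    <ProjectReference Include="**\*.csproj" />
  </ItemGroup>
</Project>
```

Pointing the VS Code build task at this file makes one Ctrl+Shift+B compile every project in the repo.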
This is my preferred way of building all the projects in a solution: a Build.proj that includes all the csproj files, and that will do the trick. So I'll close these two now. Then our test project will need, of course, a reference to our API project, because we're going to be testing the API controller, so let's make sure we have that reference. I'll open the terminal again and switch to my PowerShell terminal. Then, let's see, I'll go into Catalog.UnitTests, and then I'll do dotnet add reference, pointing to Catalog.Api/Catalog.Api.csproj. Right, so that adds the reference: if we look at the Catalog.UnitTests project file, close these, it will have that reference right here. Okay, so now the test project can use and reference any of the files in the API project. With that done, we're going to need a couple of additional NuGet packages in this test project, so let's actually open the terminal again. First, we're going to need the NuGet package for the logging abstractions, so let's do dotnet add package Microsoft.Extensions.Logging.Abstractions. This is because we are going to be testing our controller class, which, if you look at it quickly, does receive a logger in the constructor over here, so we're going to need to use this ILogger interface, and for that we need the NuGet package that we just added. The other package is called Moq. This is the little framework that can help you mock your classes, the classes you're using in your controller, so that you can test only the pieces that you care about in the controller, without having to worry about how to create the external dependencies of that class, or how they work. And we're going to go into those details in a moment.
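After the dotnet add reference and dotnet add package commands above, the test project file would contain entries along these lines; the package version numbers here are illustrative, not taken from the transcript:

```xml
<!-- Fragment of Catalog.UnitTests.csproj (sketch; versions are examples) -->
<ItemGroup>
  <PackageReference Include="Microsoft.Extensions.Logging.Abstractions" Version="5.0.0" />
  <PackageReference Include="Moq" Version="4.16.1" />
</ItemGroup>

<ItemGroup>
  <ProjectReference Include="..\Catalog.Api\Catalog.Api.csproj" />
</ItemGroup>
```

The project reference is what lets the test class see ItemsController and IItemsRepository from the API project.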
But yeah, those are the two NuGet packages that we need for now, so I'll close the terminal again. Then let's start focusing on our test class, in this case this one. Let's rename this class to a more appropriate name, so rename it to ItemsControllerTests. The convention that we're following here is that if the controller is named ItemsController, we use that class name with the suffix Tests for the test class; so ItemsControllerTests is the name we have here, and that's the one we're going to use for the class and the file. I'm going to collapse this navigation pane now, and do a Ctrl+Shift+B to make sure everything is building properly. And then, if you happen to be getting any of these red squiggles here (they should not be there, but if it happens), let me just close these: what you can do is Ctrl+Shift+P and run OmniSharp: Restart OmniSharp (you can type that there), and that should take care of it. Now notice this Fact attribute that was added to the auto-generated method here. Fact is the attribute that you use to declare that one of the methods in this class is actually a test method; that's the only way for the test runner to recognize that it has to execute these tests, so don't forget to add Fact to each of your test methods. There are other ways to declare or decorate your methods as tests, but for this tutorial we'll stick with Fact. Now let's go ahead and start writing our first unit test. But before that, let's look again at the ItemsController class and see what should be unit tested. We will write unit tests for all of our methods here, but let's start with GetItemAsync, just because there are a couple of simple and nice cases that we can cover with this method.
In GetItemAsync, as you remember, after receiving an id we go to the repository and try to find the item. If we cannot find it, if it is null, we return NotFound; otherwise, we go ahead and convert the item into a DTO. So let's write a couple of unit tests to verify this method. Going back to our test class, one thing that I'd like to encourage you to use is a good naming convention for the unit tests. This is very important because it really helps to identify what the test is supposed to do, what it is really verifying, without having to go into all the details of the unit test. One good naming convention that I like is the following: you want to use three parts here, which are going to be unit of work, state under test, and expected behavior. First part, unit of work: what is it that you're testing, what is the function that is going to be tested by this unit test? Then state under test: under which conditions are you testing this method? And then finally, the expected behavior: what do we expect from this unit after we execute the action part of the test? So now, translating that into our GetItemAsync method (let me just copy the name here): the unit of work is, in fact, the name of this method, GetItemAsync. And the first case that I'm going to test is what happens when we cannot find the item, so the item is null, this section here. For that case, I'm going to say WithUnexistingItem, so the item does not exist. And what will be the expected behavior? It returns NotFound. Okay, so that should be a good name for what this first unit test verifies. Now, within the body of the unit test, there's also another good convention that encourages us to write the test in this way: there are going to be three sections here.
The first one is called Arrange, the second one is called Act, and the last one is called Assert; these are also known as AAA: Arrange, Act, Assert. The idea is that you first have an Arrange section where you pretty much set up everything needed to execute the test; this includes any sort of mocks, variables, and inputs that you're going to need. Then we go to the Act section, where we actually execute the test, where we perform the one action that we are testing here. And finally the Assert section, where we verify whatever needs to be verified about the execution of the action of the unit. Okay, so how does that translate into a unit test? We want to test the controller, and for that we'll of course have to create an instance of ItemsController. However, remember that we need two parameters here: a repository and a logger. Now, one thing to remember is that for the purpose of testing this unit, this GetItemAsync method, we don't really care about the internals of how the repository behaves. Normally, when you call GetItemAsync, as you know, the repository will go ahead and talk to the database, retrieve the item, and all those things. But we don't want to test that; we don't want to test the repository, we just want to test the GetItemAsync method, this is the one unit that we want to test. So you have to make sure that you find a way to exclude those dependencies and the behavior of the dependencies from these tests, and just focus on testing what is right here. To do that, of course, we cannot use a real repository or a real logger, and here's where we have to introduce what we call stubs. A stub is going to be a kind of fake instance, a fake version of these classes, that is used just for the purposes of the test.
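With the framework details stripped away, the three AAA sections look like this in plain C#. The Calculator type below is purely a hypothetical stand-in used to illustrate the layout; it is not part of the tutorial's catalog code:

```csharp
using System;

// Hypothetical unit under test, used only to illustrate the AAA layout.
public class Calculator
{
    public int Add(int a, int b) => a + b;
}

public static class CalculatorTests
{
    // Naming convention: UnitOfWork_StateUnderTest_ExpectedBehavior
    public static bool Add_WithTwoPositiveNumbers_ReturnsSum()
    {
        // Arrange: set up everything needed to execute the test
        var calculator = new Calculator();

        // Act: perform the one action being tested
        int result = calculator.Add(2, 3);

        // Assert: verify the outcome of that action
        return result == 5;
    }
}
```

In a real test project the method would be marked with an attribute like xUnit's [Fact] and use the framework's Assert helpers, but the three-section shape stays the same.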
So let me show you how that looks. For this case, we need a repository, so I'll say var repositoryStub = new Mock, where Mock is a type from the Moq library that we installed a moment ago, so we add using Moq. This is a library that allows us to mock or stub any of the different dependencies of our class. Luckily, as we looked at in the previous videos, we made our controller in such a way that we can do dependency injection there. So the controller really doesn't know what kind of repository is coming in here, or what kind of logger comes here; it's just looking at interfaces, as you can see. That gives us the flexibility to fake both dependencies from the unit test. So I'll go ahead and say this is going to be a new Mock of IItemsRepository. Okay, let's see if I'm missing any namespace. Notice that I'm naming this a stub and not a mock, and that's a slight distinction that I like to make: when you use stubs, you will not verify anything on the object itself, but when you use mocks, in the Assert section you would go ahead and actually verify something that happened to the mock during the test. It's a slight difference, but it's good to do the proper naming here so that you understand the purpose of the variable that you're using. Okay, so now the expectation is that when the controller calls GetItemAsync, it will return null, and when it returns null, the controller should return NotFound. So we need to set that up. How do we set up this method in the mock so that it returns null? Going back here, what you can do is repositoryStub.Setup, and then you're going to say repo => repo.GetItemAsync, and then comes the parameter.
Now, at this point, it really doesn't matter what id we pass in here. It's irrelevant, because what we really care about is what happens with the return value. So what you can do is just say It.IsAny of Guid, which is a helper provided by Moq; it means whichever value comes in, it doesn't matter, Moq will take care of providing some value there. And then we say it should return a value of null, with ReturnsAsync; that's the expected behavior. But just so that the compiler doesn't get confused, we have to cast the null into the Item entity, and let me add the proper namespace right there. Okay, so this is setting up the scenario for our test: we're saying that whenever the controller invokes GetItemAsync with any Guid, which Moq is going to provide, return a null value. That's what we need for this use case. Then similarly we'll do the same for the logger, so we'll say var loggerStub = new Mock of ILogger of ItemsController. Okay, let's see if we're missing something again. Yep, add the remaining namespace. So now we have two stubs ready to go, and we need to create the actual controller. So now we can say controller equals new ItemsController, and then we pass it the stubs. For repositoryStub, what we actually need to pass is the Object property, because that's the real object that gets passed in, not the Mock wrapper itself, and then likewise loggerStub.Object. So that covers the Arrange phase of this unit test. For the Act phase, here's where we execute the action, and this is normally just one line where you execute what you're going to be testing. So I'll just say result equals await, and now that I'm using await, let me remember that we have to use a proper return type for this unit test, since we're going to be calling an async function.
The method signature should actually switch to async Task. So now we can do await controller.GetItemAsync, and like I said, it doesn't matter what Guid we provide here, so we'll say Guid.NewGuid. And that's the action. Then finally we'll go ahead and do the assert. With xUnit you have a bunch of Assert methods available, so let's just use Assert. What we want to verify here is that what we got was indeed a NotFound, and NotFoundResult is the class that represents it. So we can say Assert.IsType of NotFoundResult, and let's see if we're missing something; yeah, add that namespace. And what we pass here is result.Result. The result variable represents the actual result that we received, and the Result property inside it represents the actual type of action result that we got, which in our case should be NotFoundResult. Now that we have that, we can go ahead and actually execute the test. There's a bunch of ways to run this test in VS Code. One of the ways that I use frequently is via CodeLens: as you can see, over here there's this set of annotations, which is introduced by the VS Code CodeLens feature, and all you have to do is click on Run Test, and that should go ahead and run the test. Let's see what happens; it's running the test, and the test has passed, as you can see, which is good. Now, there's another way to run the tests, by using the .NET CLI directly. If you go to the terminal, I'll switch over here, make sure that you are in the Catalog.UnitTests directory, and here what you can do is just run dotnet test. That will go ahead and run not just one test, but all the tests that you have across the test project, which becomes more handy as you add more and more tests to the project.
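Moq generates the stub object for us at run time. To see what that Setup/ReturnsAsync arrangement is doing conceptually, here is a hand-rolled equivalent in plain C#. The Item record and IItemsRepository interface below are trimmed-down stand-ins for the tutorial's types, not the real ones:

```csharp
using System;
using System.Threading.Tasks;

// Trimmed-down stand-ins for the tutorial's entity and repository interface.
public record Item(Guid Id, string Name, decimal Price);

public interface IItemsRepository
{
    Task<Item> GetItemAsync(Guid id);
}

// A hand-rolled stub: a fake repository that always reports "not found",
// which is what repositoryStub.Setup(...).ReturnsAsync((Item)null) arranges via Moq.
// Note it ignores the id entirely, mirroring It.IsAny<Guid>().
public class NotFoundItemsRepositoryStub : IItemsRepository
{
    public Task<Item> GetItemAsync(Guid id) => Task.FromResult<Item>(null);
}
```

The advantage of Moq is that you get this behavior in one line per test, instead of writing a new fake class for every scenario.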
And as you can see, it passed: zero failed, one passed. Now, as the number of tests increases, you may want to have a better way to visualize the overall status of your test suite: which cases are passing, which ones are failing, and so on. To do that, there's actually a nice Visual Studio Code extension that you can install to provide that visualization. So let's go to our Extensions hub, and let's expand this a bit, and let's look for ".NET Core Test Explorer", just the first one here; I'll collapse this for a moment. The .NET Core Test Explorer by Jun Han is the best one that I've found so far for these kinds of tests, so go ahead and install it. What you need to do with this extension is to tell it where to find the test project. If we go to the Explorer for a moment, you'll remember that we have our tests in this Catalog.UnitTests folder, in Catalog.UnitTests.csproj, so we need to provide that location to the extension. To do that, you can click this gear icon over here, then click on Extension Settings. And if by any chance the settings that you see over here are not showing up for you, what you can try is just closing VS Code or closing the current folder and reopening it, and then these settings should show up; it may happen the very first time, but after that it should be just fine. Then what you want to do is go all the way down to where it says Test Explorer: Test Project Path; this is where you specify the path to the test project. Also, you want to go not into the User section but into the Workspace section. By doing that, what you type in here is going to live alongside your project, as opposed to someplace in your user profile. That's good, so that you can keep everything together. So here, what we want to specify is just a simple expression.
Let me type that in; it's a glob pattern, **/*Tests.csproj, where we're saying: search all over the place inside our catalog directory, in all subdirectories, and look for any *Tests.csproj. Okay, and after doing that, we can close this. And if we go to these three dots here, there's now a Test section, and as you can see, our one test is already showing up over there. To run it, you can either click the play icon here, or you can just click the play icon at the top, and it will go ahead and run all the tests that are available there. The result is green, and you can see the green bar on top of the test too; if there was any error, you would get red squiggles at the location where the test failed. So we'll be using this extension across this video to see the status of our tests as we keep adding and executing them. Okay, so now that we have that in place, and we have a test checking the unexisting item case, let's add another test for the existing item case: what happens when the item actually exists. Let me actually collapse this for a moment; I'll hide the sidebar, and let's add a second test here. Perhaps I'll just copy the header of the previous test, so I'll copy that over here. Here's our new test method, and this one is going to be called GetItemAsync_WithExistingItem_ReturnsExpectedItem. Okay, now, just like before, we will lay out the AAA sections: Arrange, Act, and Assert. Now, for these tests it's very likely that we're going to need our repository and logger stubs again, both stubs. So instead of copying the instantiations over here, why not just declare a couple of class-level fields? That way we can reuse them in this test and in any future test. So let me do that. I'll go here and declare private readonly Mock of IItemsRepository repositoryStub, and just copy the instantiation here.
And then private readonly Mock of ILogger of ItemsController for the loggerStub, initialized the same way. And with that, we can go to our initial test case and simplify it a little bit by not having to declare the stubs there, because we have repositoryStub and loggerStub as fields now. Okay, so going back to the new test case: for this one we will actually need an item that we can use across the test. Let's go back to the controller quickly, to ItemsController. If you remember GetItemAsync, in the case that we want to hit, which is the one that returns the DTO, we need the repository to return an item, and then we need to convert it to a DTO and return it. So to do that, we'll have to set up that item beforehand in our test so that it can be used over here. I'll go back to the test, and instead of just creating the item on the fly for this test case, I think we should have some sort of helper function that we can use not just in this case, but in future tests, to create some random item very quickly. So I'm going to go ahead and create a private function that returns an Item; let's call it CreateRandomItem, and it's just going to return a new Item. Let's specify all the properties for the new item. The Id will be Guid.NewGuid, because we don't really care what id it uses, the same way we don't really care what name this random item uses; it should work with any name, so Guid.NewGuid().ToString() for the Name too. For the Price, what we probably want to do is generate some random number so that we don't get fixed to any specific price. So what I'll do is actually create a Random variable, and let's put it at the top over here: I'm going to declare private readonly Random rand = new(). This will be with us in a couple of places. So now we can go back here, and we can say rand.Next.
And I think we said that the price should be between 1 and 1000, so let's just say a maximum value of 1000; that should be enough. And CreatedDate is going to be DateTimeOffset.UtcNow. Okay, so now that we have this handy method, let's go back to our test case over here. In terms of the Arrange section, what we have to do first is prepare the item that the repository is going to return, so we're going to say var expectedItem = CreateRandomItem(). Then we have to do the setup for the repository, and that's going to be a little bit similar to what we did in the previous test, so I'm going to copy that first setup line from the previous test. When we call GetItemAsync, again with any Guid, what it should do now is return that item, so ReturnsAsync(expectedItem). Okay, and then we'll go ahead and do pretty much the same thing that we did in the previous test, so I'll copy these couple of lines over here: we declare a controller with the two stubs, and then we get a result by invoking GetItemAsync with any Guid, it doesn't matter which. And then it is time to assert what we got. So what would we like to assert here? We probably want to first make sure that we got an ItemDto, and not some other result like NotFound or BadRequest or something like that. So I'll do Assert.IsType of ItemDto, and let's add the missing namespace. And then we're looking at result.Value, because the Value property is the one that actually holds the DTO in this case. Okay. And once we have asserted that, what we should also do is verify that all the properties of the returned DTO match the expected item. For that, let's first take out that DTO; we can do that by doing some casting here, result as ActionResult of ItemDto, then Value. Okay, then we can do Assert.Equal of expectedItem.Id and dto.Id, and just like that we would need to go through every single property.
The expectedItem.Name should equal dto.Name, and we'd keep going and going with the other properties. But at this point, imagine that you don't have just a couple of properties but dozens of them, as objects can get complex. This is going to become very cumbersome, just asserting and asserting and asserting. And in fact, it's not a very good practice to be asserting too many things; you should try to assert as close to just one thing in each test as you can, that's kind of the best practice. So to do that, instead of doing all this, we can switch to a very handy assertion library called FluentAssertions, which will allow us to do this in a much more straightforward way. So let's bring in the terminal, Ctrl+J, and I'll go to PowerShell. Make sure that you are in the Catalog.UnitTests directory, and then you can do dotnet add package FluentAssertions. Okay, and with that, I'll just close the terminal. So now we can do something a little bit different; let me show you. What we can do now is say result.Value and then Should, after importing the FluentAssertions namespace, and then BeEquivalentTo with the expectedItem. Okay, and then I'll remove the old assertions. What this is expected to do is compare the values of the properties of the resulting DTO with the properties of the expected item, the item that we created over here. That way, with this very handy method, we don't have to go property by property; it will just go ahead and compare the entire thing for us. The only issue here is that the expected item is actually a record type; as you remember, the Item is a record type.
Record types override the Equals method of object, and that will make it so that this comparison doesn't behave very well, because it believes that it has to compare the DTO to the entity directly, as opposed to comparing the properties, which is what we care about. So to address this, we'll specify an additional option here, which is going to be options => options.ComparingByMembers, for the Item class. With that we're saying: hey, don't compare the DTO directly to the Item; just focus on the properties that each of them has, and as long as a property has the same name, compare the values of those properties. That way, it will go ahead and tell us whether the objects are the same. Yep. And so with that, let's go ahead and run this test case and see what we get. So we'll go ahead and click Run Test. Yep, and this case is passing. And just to keep things consistent, let's also modify the previous test to also use FluentAssertions. In that case, it's going to be result.Result should BeOfType NotFoundResult; that replaces the previous line. Okay, so let's just verify that all these cases are passing; I'll go ahead and run these in the Test Explorer here, and yes, it's all looking good. And so now let's move ahead to our next unit test. Let's go back to the controller briefly and let's see what else we've got there. Back in the controller, what we want to test now is our first method here, the one that returns all the items available in the repository. So remember, GetItemsAsync will just go ahead, retrieve all the items, transform them into DTOs, and then return them; that's all it does.
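The reason ComparingByMembers is needed comes down to record equality semantics. A quick plain-C# illustration, where both record types are minimal stand-ins for the tutorial's Item entity and ItemDto:

```csharp
using System;

// Stand-ins: a record entity and a record DTO with identical members.
public record Item(Guid Id, string Name);
public record ItemDto(Guid Id, string Name);

public static class RecordEqualityDemo
{
    public static (bool sameTypeEqual, bool crossTypeEqual) Run()
    {
        var id = Guid.NewGuid();
        var entity = new Item(id, "Potion");
        var sameEntity = new Item(id, "Potion");
        var dto = new ItemDto(id, "Potion");

        // Records of the SAME type with equal members compare equal...
        bool sameTypeEqual = entity.Equals(sameEntity);   // true

        // ...but an Item is never Equals to an ItemDto, even with identical
        // members, because record equality checks the runtime type first.
        // That's why BeEquivalentTo needs ComparingByMembers<Item>() here.
        bool crossTypeEqual = entity.Equals((object)dto); // false

        return (sameTypeEqual, crossTypeEqual);
    }
}
```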
So what we want to verify is that whatever items we set up the repository to return have to come back as DTOs, and they should match exactly the items that we obtain through the repository. So let's go ahead and write a unit test for that. Back in the test class, again I'm going to copy the header of the previous case, and I'm going to scroll down here and paste it. Okay, so here, let's do the proper naming: this one is going to be named GetItemsAsync_WithExistingItems_ReturnsAllItems. And again, let's bring in our Arrange, Act, and Assert sections. In this case, what we need is of course a series of items for the repository to return, so what we have to do first is declare such a set of items; for that we'll create a simple array. So what I'll do is say var expectedItems equals new, and this is going to be an array, and we're going to be using our CreateRandomItem method here a few times; perhaps let's bring in three items. Yep, that will do it; very handy method. And now we can go ahead and set up the repository to return those items. So I'll do repositoryStub.Setup, where repo => repo.GetItemsAsync, so that when GetItemsAsync is invoked, it should return the expected items. Then we have to construct the controller, and I'll do it very similar to what we did in the previous test case; I'll just go ahead and copy that line over here. And then we will go ahead and do var actualItems = await controller.GetItemsAsync(). That should retrieve the items, and now we have to do the comparison. So once again, we can use that very handy FluentAssertions method to do an equivalence comparison. We can say actualItems should BeEquivalentTo expectedItems. And once again, we'll have to provide the options, because again we're dealing with record types.
Otherwise things will not work right; so options => options.ComparingByMembers of Item. Okay, so with that, let's go ahead and run this test and see what we get. Yep, it's passing. Awesome, close that. Okay, let's move forward, back to ItemsController: it's time to test our CreateItemAsync method. This is the one that goes ahead and creates the item in the repository. So in this case, we have to provide a CreateItemDto as an input, and then what we can verify is that we should be receiving as a return an ItemDto with the created item; that's what this method is supposed to do. And in fact, the returned item should include the Id, because it is not provided in the CreateItemDto; the returned item will have an Id, and it will also have a CreatedDate. So let's write a test that can verify all these things. Back in ItemsControllerTests, let me again grab the header of the last test; perhaps I'll grab the entire thing, just copy it, and then remove this piece and remove that piece. Okay, now this method will be named, starting with the name of the method under test, CreateItemAsync. And then comes the current state; let's name it WithItemToCreate, because we're going to be providing the item that should be created. And the expectation is that it ReturnsCreatedItem. Right. Now, in this case, the Arrange section involves preparing the CreateItemDto. It would be tricky to try to use CreateRandomItem here, as we've been doing so far, so we'll be explicit in this case. We will say var itemToCreate equals new CreateItemDto, and here we'll provide the elements of the CreateItemDto: we provide a Name, which is just going to be Guid.NewGuid().ToString(), and we provide a Price, which again is going to be rand.Next with 1000. And I think that will be it.
Let me go quickly to CreateItemDto with F12 to make sure I've covered the properties we need. So it has a Name and a Price. Yep, back here. And then, in this case, I will actually not need to set up anything in the repository, because it is not interesting, for the purposes of this test case, to see what happens when the repository is invoked; it's a really simple act of creating the item. If you look back at ItemsController, what's really going to happen here is that CreateItemAsync will go ahead and call CreateItemAsync on the repository, and that will go ahead and create the item. But I would encourage you to be a bit careful about what you're going to be testing here. Because you could also decide: no, I want to make sure that CreateItemAsync is actually called on the repository, right, so that, yeah, the item should get created. But that's going a little bit too much into the details of the implementation, and you would be making your test case very prone to needing changes whenever the implementation of this method changes. Ideally, you want to treat each of your test cases in such a way that they only provide some inputs to the method, and then eventually they validate the outputs of the method, but they don't try to make assumptions about what is going to happen inside. So in this case, we are not going to be verifying that repository call at all; in fact, we're not going to be setting it up. We'll go back to the test case and go ahead and invoke the action. So we will say var result equals await controller.CreateItemAsync, and we provide the itemToCreate. And then we can go ahead and do the assert. The first thing we're going to do here is try to retrieve the created ItemDto. So what I'll do is: the createdItem is result.Result as CreatedAtActionResult.
Okay, so the type of result we're going to receive here is a CreatedAtActionResult. From that one, we want to get the Value, and that Value we turn into an ItemDto; it's a little bit convoluted for this case. And now we can do the same equivalence check that we've been doing so far: itemToCreate should BeEquivalentTo createdItem, okay, and then we provide options, like we said, options => options.ComparingByMembers, and in this case the type we're going to use here is actually ItemDto, which is the type of the object that we received back from the action. But additionally, we want to do one more thing here, because both of these DTOs don't have the same members. If you remember, if we press F12 to go to ItemDto, this one has four properties, as you can see; but if we go into CreateItemDto, this one has only two properties over here. So that means that if you try to compare things like this, it's going to fail, because it's going to say: hey, one of you has more properties than the other, this doesn't make sense. So in these cases, what you can do is say: only look at the properties that are common between the two objects, and that will simplify things in this case. So, in the spirit of FluentAssertions, we can append ExcludingMissingMembers, and that should do it for that one assertion. Okay, so we're comparing that the two objects are equivalent, but we will not pay attention to any members that are missing from either of them. However, we may want to actually check those additional members, so for those we will be a bit more explicit. So createdItem.Id should NotBeEmpty, because the method should generate an Id for the created item. And the created item also has a CreatedDate; we don't know exactly what date we're going to get in there.
But we know that it should be close to the current time, right? Because this test just takes milliseconds to execute. So we will say createdItem.CreatedDate should BeCloseTo DateTimeOffset.UtcNow. And then, just to be safe, we're going to give it a range of precision, because we don't know exactly how long this takes; it should be super fast, but we're going to give it 1000 milliseconds for the difference between the time the item is created and the time we're checking here. Okay, so that should be it for this test. Let's go ahead and run it from the test and see what we get. The test is passing; it's looking good. Okay, so the next test case we're going to look at is the update case, UpdateItemAsync. As you remember, this method gets the id of the item to update, it will get the item from the database, and if it doesn't find it, it will say NotFound. Otherwise, it will create a copy of the item to become the updated item, with the updated properties, and then it will go ahead and update the item in the database, and finally return NoContent. Okay, just so we don't spend too much time here, we're only going to be covering one case, which is the case where the item actually exists. But you can imagine how to test for the NotFound case, which is a bit simpler. And then again, don't get into the trap of verifying that the item is actually being sent to the UpdateItemAsync method of the repository. We will provide an input, which is these two parameters here, and we expect the output of NoContent; that's everything that we need to verify here, nothing else. We don't need to worry about the implementation of the method. So let's go back to ItemsControllerTests, and let me again just make a copy of our last test over here. And so this is going to be UpdateItemAsync_WithExistingItem_ReturnsNoContent, okay.
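Before building out the update test, a quick aside on the assertion we just wrote: what BeCloseTo verifies can be written out by hand, which makes the 1000 millisecond tolerance explicit. This is a plain-C# sketch of the idea, not FluentAssertions' actual implementation:

```csharp
using System;

public static class CloseToNowCheck
{
    // Returns true when the timestamp is within toleranceMs of the current time,
    // mirroring what createdItem.CreatedDate.Should().BeCloseTo(DateTimeOffset.UtcNow, 1000)
    // asserts. Duration() takes the absolute value, so it works in both directions.
    public static bool IsCloseToNow(DateTimeOffset timestamp, int toleranceMs = 1000)
    {
        TimeSpan difference = (DateTimeOffset.UtcNow - timestamp).Duration();
        return difference <= TimeSpan.FromMilliseconds(toleranceMs);
    }
}
```

The tolerance exists because some wall-clock time passes between the controller stamping CreatedDate and the assert line running; 1000 ms is generous for a test that normally finishes in a few milliseconds.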
Then we will need the controller, and we'll remove these extra pieces at the end. For the Arrange phase, as you remember from the controller, we need GetItemAsync to return an item so that we can move forward into this case. I think we covered that setup already somewhere else, so let me go back up a little bit: this one here, from the GetItemAsync existing item test case; let me grab that. That will allow us to have an item returned by the repository stub, and we will name this one existingItem, the item that exists in there. And now we need to declare the actual item that we're going to provide to the method, the updated item. So let's first grab the item id as a variable: var itemId is existingItem.Id. We'll grab that there. And then itemToUpdate is a new UpdateItemDto, and we're going to provide a Name, which again is going to be Guid.NewGuid().ToString(), and perhaps we drop down a line. And then for the Price, what we can do is just take the price of the existing item and increase it by something, let's say by 3. Okay, so that becomes the updated payload for this item: we're pretty much changing the name, because we're generating a new Guid, and we're changing the price by adding 3 to it. Then we create the controller, and it's time to do the action. So we'll say result equals await controller.UpdateItemAsync, and here's where we provide the itemId, and then we provide the itemToUpdate. And then finally we go for the assertion, and the assertion is going to be very simple, because like I said, we only want to verify that we get NoContent: result should BeOfType NoContentResult. That's all it is. So again: we set up the repository to return the item, we prepare the item to update, we modify the properties, and we invoke UpdateItemAsync.
And at the end, if everything went well, we should be getting the NoContentResult. So let's go ahead and run this test. It passes. I think we're missing just one method at this point; let's go to ItemsController. That's DeleteItemAsync, right? It's going to be very simple to verify, and very similar to update: we get an id, and it will have to find the item. Again, we're not going to check all the cases, just the case where the item exists. So we'll make it so that the repository returns the item, and then the method should return NoContent. And again, we don't care what happens with the repository or any of the internals here; we just care about the fact that it should return NoContent. So back in ItemsControllerTests, I'm going to copy our last test case once again. This one is going to be named DeleteItemAsync_WithExistingItem_ReturnsNoContent. Yep, that's an appropriate name. The setup will work just fine for this case, but we don't have to prepare any item to update, so let's remove that. We create the controller with those stubs, and then we'll do the DeleteItemAsync call with just the existingItem.Id, and then the assert is exactly the same as before: result should BeOfType NoContentResult. That's all it is. So I'll go ahead and run this test now, and yep, it is passing. And in fact, if we go to our Test Explorer now, to the Test section here, and run everything, as you can see, we have a full suite of tests passing at this point. Okay, so this is great: we have a bunch of test cases covering pretty much everything in our controller, and what this gives us now is actually a lot of confidence for making any future changes across the REST API. So what I'd like to do now is this: there have been a few changes that I've wanted to make across the board, but I didn't have a way to make sure I didn't break anything; now we do. So I'm going to make two critical changes here.
One of them is that I'm going to switch our entity to not be a record type anymore. Remember, here in Entities we have Item, and the fact that it's a record type is actually making things a bit inflexible. Because of this, we cannot just update existing items in the update operation in the controller; we have to create a copy that's not really needed. So this entity should actually be mutable, and we don't need the init accessors here, really. We'll switch it to be a normal class, and we will also add a Description property, so we can provide some description for the items. So let's make those two changes, and let's see if our test cases can help us prevent breaking anything. Like I said, Item is going to change from record to class, and then we will not be using init here; we will switch it to just set — set on all of these. And now this is a normal, standard class. But also, like I said, we're going to be adding a Description property to the entity. With that done, the other thing that I'd like to do is simplify the way that we declare DTOs. Remember that we have these three files for the three DTOs, but it turns out that there's a much nicer way to declare DTOs, as record types, which actually provides even more benefits than what we're doing today. So what I'll do is just get rid of all these files — let me delete the Dtos folder — and, collapsing things a bit, what I'm going to do is create a new file at the root of Catalog.Api. Let's name it Dtos.cs. Let's bring in a namespace — namespace Catalog.Api.Dtos — and we will declare all the DTOs here, inline, in a much nicer way. Let me hide the sidebar. The first DTO is going to be the item DTO, so we will say public record ItemDto, and then we will declare the properties as if we were creating a constructor for the type.
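After those two changes, the entity would look roughly like this — a sketch assuming the property set from earlier in the tutorial, now with plain setters and the new Description property:

```csharp
// The Item entity after switching from a record with init-only setters
// to a plain mutable class, and adding Description.
public class Item
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    public decimal Price { get; set; }
    public DateTimeOffset CreatedDate { get; set; }
}
```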
That is, positional parameters: we will say Guid Id — and let's see, we're missing using System — and then Name, the new Description property, Price, and a DateTimeOffset CreatedDate. Okay, so that's all you need to do to declare a record type with this other syntax; as you can see, it's much, much more simplified. Now let's declare the next one: public record CreateItemDto. This is going to have no Id, but the Name, then the new Description property, and then the Price. Let's not forget that we had set up some attributes here to make sure that we get valid inputs: in the case of Name, we want it to be Required, and in the case of Price, what we want is a Range — yeah, it was from 1 to 1000. Okay, we will not require Description, and we will not add the Required attribute to Price, because Price is a value type, so it can never be null; you will get some value there anyway, and we will just verify that the range is correct. Lastly, let's add the last DTO that we're missing. I'll just copy that one, and this is going to be UpdateItemDto. Yep, so that's all we need. So now, in one file, we're declaring all the DTOs that we use across the REST API, as opposed to three files. Let's see what else we need to modify now that we've made those changes. Let's look at Extensions; let's see what's going on here. Yeah, I'll hide the sidebar. Now that we changed the way that we declare our DTOs, as record types, they have become immutable. We cannot go through object initialization anymore; we must create the DTO via the constructor. It's the only way to create instances of them from here on, and after construction, nobody can change the properties of the DTO. So we will say new ItemDto, and then we will provide item.Id, item.Name, right? And item.Description,
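Put together, the single Dtos.cs file described above could look like this — a sketch; the namespace name is inferred from the project layout, and ASP.NET Core 5 supports validation attributes on the positional parameters of records bound from the request body:

```csharp
// Dtos.cs — all three DTOs declared as positional records in one file.
using System;
using System.ComponentModel.DataAnnotations;

namespace Catalog.Api.Dtos
{
    public record ItemDto(Guid Id, string Name, string Description, decimal Price, DateTimeOffset CreatedDate);

    // Description is optional; Price needs no [Required] because a decimal
    // can never be null, so only the range is validated.
    public record CreateItemDto([Required] string Name, string Description, [Range(1, 1000)] decimal Price);

    public record UpdateItemDto([Required] string Name, string Description, [Range(1, 1000)] decimal Price);
}
```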
and then item.Price and item.CreatedDate, right? That's all of it. Yep — we could inline all of this over here, but it may be too much to read, so we'll just leave it like this. Okay, let's see what else we have to fix. If we go back, there's something going on in the controller; let's see what it is. Yep, there's an issue with UpdateItemAsync. We cannot use the with syntax anymore, because the item is no longer a record type; now it is a standard class. And in fact, this is good — this is actually exactly what we wanted to fix here, because there's no need to create that updated copy; we can just modify the existing item. So we will say existingItem.Name = itemDto.Name, and existingItem.Price = itemDto.Price. Yep, so we don't need this additional object here; we'll leave it like that. And then for the update, we'll invoke UpdateItemAsync with existingItem. Okay, so there are no more breaks around here. Let's see what else. In the Catalog.Api project everything looks good; in the unit tests, let's see what's going on. Okay, let's hide the sidebar. Let's see. Yep — CreateItemDto has to be updated to define the properties inside the constructor. So we'll have to provide a value for the name, and we'll probably do the same thing for the description, and for the price a random number, perhaps. So I'll delete this and then do it like this, so that it's easier to see. There. Okay, so let's see what else we have to fix here. The same thing for UpdateItemDto: let me copy this line here, so we provide the name, then we provide a description, and then we provide the price, which is going to be the existing item's price plus three. Okay, remove this. Yep, and with that, I think we don't have any more breaks anywhere, as far as I can see. So let's go ahead now, run our tests, and see what we get. So, go ahead and run the tests. Yep — and, interestingly, we do have something failing here.
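Both fixes just described can be sketched like this — the extension method now calls the record's constructor, and the update action mutates the entity directly (surrounding attributes and repository names are assumptions based on earlier parts of the tutorial):

```csharp
// The AsDto extension after the record conversion: construction is now
// the only way to create an ItemDto.
public static ItemDto AsDto(this Item item)
{
    return new ItemDto(item.Id, item.Name, item.Description, item.Price, item.CreatedDate);
}

// Inside ItemsController: no more `with` expression, no extra copy —
// the existing entity is modified in place and saved.
[HttpPut("{id}")]
public async Task<ActionResult> UpdateItemAsync(Guid id, UpdateItemDto itemDto)
{
    var existingItem = await repository.GetItemAsync(id);
    if (existingItem is null)
    {
        return NotFound();
    }

    existingItem.Name = itemDto.Name;
    existingItem.Price = itemDto.Price;

    await repository.UpdateItemAsync(existingItem);

    return NoContent();
}
```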
Let's see what's going on. So CreateItemAsync is the one that's failing, and as you can see, we are getting the X over here signaling that there's a problem; we get the squiggles here signaling that there's something wrong in there. And if we look at the message over there, you can see it says that the member Description did not have the expected value. Yep. So what could have happened? Let's actually go back to our controller — I'll do F12 from the test — and let's go to the Create method over here. Let's hide the sidebar. And yeah, what happens here, if you can spot it, is that we are not assigning a value for the description. So let me close these. Back to the test: when we declared the item to create, we did add a description, but when the controller created the item, it did not set any value for Description. That is because we have not specified that anywhere — that's something we have to fix. And that's the nice thing about using BeEquivalentTo: as you add more properties, you don't have to keep remembering to do the proper checks in the test cases. Otherwise, any time you add a new property here, you'd have to go back to the test and add the property there; this way, the test case is actually covering you, making sure that you don't forget to add those properties in the place where you're implementing the create method. For this case, I'll do Description = itemDto.Description. With that, let's go back to our test list and run them again. It's all green now. Then one minor improvement that you can also do here is to change the way that we're doing some comparisons across a few test cases. So we go back to the GetItemsAsync test case, where we created a few random items.
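The create action after the one-line fix might look like this — a sketch; everything except the Description line is assumed from earlier parts of the tutorial:

```csharp
// The Create action with the missing Description mapping added.
[HttpPost]
public async Task<ActionResult<ItemDto>> CreateItemAsync(CreateItemDto itemDto)
{
    Item item = new()
    {
        Id = Guid.NewGuid(),
        Name = itemDto.Name,
        Description = itemDto.Description, // the line that was missing
        Price = itemDto.Price,
        CreatedDate = DateTimeOffset.UtcNow
    };

    await repository.CreateItemAsync(item);

    return CreatedAtAction(nameof(GetItemAsync), new { id = item.Id }, item.AsDto());
}
```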
If you see how we're doing BeEquivalentTo here, these options — we needed them because the Item entity class was actually a record type before, and that was causing issues. Now that it's not a record type, we can stick to the default behavior. So now we can just remove the options and do BeEquivalentTo this way, and FluentAssertions will know what to do, because this is just a standard class. The same way, we can go to the next test case and change this one too, and that should be enough. And this is only, like I said, because what we're comparing here are classes that are not record types anymore; for record types, you'd still have to do it the other way. So let's just make sure that the test cases are still passing. I'll go ahead and run all the tests, and everything is still all green. Okay, so that's working pretty well. And now I'd like to switch gears into test-driven development, TDD. So, we've talked about TDD: it has some very nice benefits, because it allows you to start from the tests — from the requirements, really — and then move forward to the implementation later on. So let's see how it works in practice. What we're going to do: if we go back to ItemsController — and I'll hide this for a moment again — let's go back to our first method here, GetItemsAsync. As you see, this is the method that returns all the items. But a new requirement for our REST API is that we should be able to filter the items by name. So if somebody specifies a name to such a method, it should return only the items whose name contains the specified parameter, right? So if you have items that have, for instance, the word "Potion" in them — like Potion and Hi-Potion — it's going to return all the items that include "Potion" in the name, and none of the rest. It's a way of filtering things. So let's see how we can go ahead and implement such a method by using TDD.
So I'll go back to ItemsControllerTests. Let's grab this method, GetItemsAsync_WithExistingItems, and I'll just copy it right under it, since it's a similar method. But in this case we rename it: GetItemsAsync_WithMatchingItems_ReturnsMatchingItems. Okay, so in this case, we're going to go ahead and not use the random items anymore, because I actually want to specify a name for our items — and in fact, the name is the only thing that we care about in this test case, so let's make that explicit here. What I want to do is rearrange this a little bit, so that instead of these, what we have is — let's name this allItems — allItems equals new, and then I'll just move these to the next line; it's going to be a bit more verbose, like this, perhaps. And then, in each of these lines, instead of the random items, we're going to create new items, so we will say new Item, and we will provide a Name. Let's use something that we can use for these tests; like I said, let's go for the Potion case. So we will have a "Potion" there, and then let's add two more: the second one is going to be named something completely different, like "Antidote", and the last one is going to be "Hi-Potion". So in this case, we have two items that share the same term, "Potion", and another one that doesn't have it. And then we'll declare a variable here that we're going to call nameToMatch, equal to "Potion". Now, notice that we are already writing the test for this new method, but the method just does not exist at all; it has not been implemented. And that's the right way of doing TDD: we start with a test case that will actually fail, because we don't have the method, and then we move forward to the implementation later. Okay, so now, in the repository mock's GetItemsAsync, we will return all the items that we have prepared here.
Remember, this is the call to the repository; it is not the call to the controller's GetItemsAsync. So then we go ahead and prepare our controller, and now it's time to make the call in the act. Here we're going to be a bit more explicit: instead of var on the left side — because we want to signal to C# and VS Code what we're going to receive from this new method, the method to be created — we'll declare that what we're going to receive is an IEnumerable of ItemDto, and these are going to be foundItems. Are we missing something? Yeah, using System.Collections.Generic was missing. And then these are going to come from calling await controller.GetItemsAsync — but not with that signature: we need a new signature, where GetItemsAsync can receive the name to match. So we're going to pass nameToMatch here. Okay, so yeah, that should be it. And then, as our assert, what we have to verify is that we only got items whose name matches the nameToMatch, in this case "Potion". To verify that, we will say foundItems.Should().OnlyContain, and then we can say item, where item.Name should be equal to allItems[0].Name, because that's the first one — let's scroll up a little bit, so we've got zero, one, and two, so it should be that one — or, and then just copy this, item.Name should be equal to allItems[2].Name. Okay, so that's a way that you can use FluentAssertions to verify that the items in the collection match some condition: whatever we're getting, the found items should match either "Potion" or "Hi-Potion", in this case. Okay. So if we go ahead and build this — I'll do Ctrl+Shift+B — of course it fails, because we don't have such a GetItemsAsync that receives an argument. So that's the red phase of TDD.
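The whole TDD test described across the last two sections might be sketched like this — assuming the same stubs and helpers as the earlier tests, and the not-yet-existing controller overload:

```csharp
// Red phase: this test is written against a GetItemsAsync(name) overload
// that does not exist yet, so the build fails first.
[Fact]
public async Task GetItemsAsync_WithMatchingItems_ReturnsMatchingItems()
{
    // Arrange: three named items, two of which contain "Potion"
    var allItems = new[]
    {
        new Item() { Name = "Potion" },
        new Item() { Name = "Antidote" },
        new Item() { Name = "Hi-Potion" }
    };

    var nameToMatch = "Potion";

    repositoryStub.Setup(repo => repo.GetItemsAsync())
        .ReturnsAsync(allItems);

    var controller = new ItemsController(repositoryStub.Object, loggerStub.Object);

    // Act: the explicit type (instead of var) signals the signature we want
    IEnumerable<ItemDto> foundItems = await controller.GetItemsAsync(nameToMatch);

    // Assert: only the two "Potion" items should come back
    foundItems.Should().OnlyContain(
        item => item.Name == allItems[0].Name || item.Name == allItems[2].Name);
}
```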
And now, to start moving into the green phase, we need to move forward and implement this method. To do that, what we can do is just click here and do a Ctrl+., and that presents some options; what we can do is use "Generate method 'ItemsController.GetItemsAsync'". So I'll do that, and then, if I do F12 here, the method now exists. I'll move it up, just next to the overload that we already have. Okay, and then we could go ahead and try to implement this, but as you notice, these methods are exactly the same thing, except that one of them receives a parameter and the other one does not. So I think it's better to just put everything into this one method, as opposed to trying to implement the second one. I'm going to just take this parameter out of here and into there. Okay, and then, we are not expecting to always receive the name — it depends on what the caller wants — so let's make it so that this can be null; it is allowed to receive null. Okay, and so with that, let's go ahead and run the test case once again, and let's see what we get. I mean, at this point it should build just fine; let's verify that it builds — yeah, it does build. Now let's run the test. And, as expected, it is failing. It is failing because it is getting more items than expected: as a result, foundItems is just getting all the items, not only the ones that match the name that's been specified here. So let's see how we can fix this — let's try to get to green. What we're going to do is something very simple: after retrieving the list of items, we're going to apply some filtering, if we have to apply a filter. So: if string.IsNullOrWhiteSpace — name; and let's just rename this to name, as opposed to nameToMatch, that should be enough. If name is not null or whitespace, then we will apply a filter on the list of items we have already received.
So items is going to be items.Where, item, such that item.Name contains the name. And just to make sure that we don't worry about casing here, let's pass a StringComparison of OrdinalIgnoreCase. In our case, casing doesn't matter: if you're looking for potions with a capital P or with a lowercase p, it shouldn't matter; we don't care about that. So, as long as a name has been provided, we're able to use it to filter the items, and we get a filtered list of items. And that should be enough to satisfy the condition. Let's go back to the test, let's run it — and this time it is passing, so we are green. Let's actually confirm that this is true by running the entire test suite now, and making sure that we have not broken anything. Yeah, as you can see, everything is green now, so everything looks great. So that's how you can use TDD. We don't really need to do more refactoring here at this point, but if you needed to, feel free to go ahead and do more refactoring. We have gone through the red phase and the green phase, and refactoring is just not needed in this case. What we can do now is verify that this new functionality actually works in real life. So we will start our host, and we will see how to use this from Postman. Okay, so I'll go ahead and hit F5, and then I'm going to Postman over here. So let's see — to start with, we'll see what we currently have in our database. I have here the URL to get all the items in the database, so I'll hit Send, and at this point we have a Potion and an Antidote. And we are already verifying that things are not broken, because we already invoked this method that has been modified — it can receive a parameter now — but it seems to be working just fine. So, to properly verify that the new functionality is working, I'm going to actually add yet another item here via the POST action.
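The merged, green-phase method could look roughly like this — a sketch assuming the repository and AsDto extension from earlier in the tutorial:

```csharp
// GetItemsAsync after merging the two overloads: the optional name
// parameter filters items case-insensitively when provided.
[HttpGet]
public async Task<IEnumerable<ItemDto>> GetItemsAsync(string name = null)
{
    var items = (await repository.GetItemsAsync())
                .Select(item => item.AsDto());

    if (!string.IsNullOrWhiteSpace(name))
    {
        items = items.Where(
            item => item.Name.Contains(name, StringComparison.OrdinalIgnoreCase));
    }

    return items;
}
```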
So I'll just copy the URL, open a new tab, switch this to POST, paste the URL, and in the body pick raw, and I'll do JSON. Then I'll just copy the body from somewhere else here — copy this. Notice that the previous items don't have a description, and that's fine; the new item will have one, so let's make sure of that. So let's do this. And then, to keep things simple, I'll just name this one "Hi-Potion", the description is going to be "Restores a small amount of HP", and the price is going to be, let's say, seven. So let's post this. Okay, it's there. And if we go back to our GET operation and hit Send, here we can see that we have the three items now. Great — now is where we can see if the filtering is working. So now I can say ?name= — and we're looking for "potion", right? So "potion": it should only give me the Hi-Potion and the Potion. So I'll go ahead here and hit Send, and indeed, we are only getting those two items. Notice that it didn't matter that I used a lowercase p here, and not the capital P that we had earlier; it was still able to find the items. So things are working as expected. So yeah, that's how you can use TDD to drive your process of adding new functionality to the REST API. As always, I hope this was useful, and if you're looking to dive deeper into what I have covered in this tutorial series, please check out the link to my full online course in the video description. Thanks for watching, and I'll see you next time.
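If you prefer the command line to Postman, the same checks can be sketched with curl — the localhost port and the `/items` route are assumptions based on the default dotnet template, and `-k` skips validation of the local development certificate:

```shell
# Create the new item (previous items have no description; this one does)
curl -k -X POST https://localhost:5001/items \
  -H "Content-Type: application/json" \
  -d '{"name": "Hi-Potion", "description": "Restores a small amount of HP", "price": 7}'

# List everything, then filter by name (matching is case-insensitive)
curl -k https://localhost:5001/items
curl -k "https://localhost:5001/items?name=potion"
```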
Info
Channel: freeCodeCamp.org
Views: 273,066
Rating: 4.9708695 out of 5
Id: ZXdFisA_hOY
Length: 384min 9sec (23049 seconds)
Published: Wed Jun 23 2021