Have you come across a situation where you need to change something in production to implement a new feature that will affect thousands of users, say, change the data type of a column in a live database? After getting through a complicated approval process, implementing the new feature, and many regression tests later, you're still nervous and sleepless before rolling it out. And in the end, if it didn't go well, add many more nights of stress. Or take another instance, where you need to wait on the entire project's release cadence just to patch a bug in your project. If you know what I'm talking about and have been through that kind of pain, this video is for you. Today we hear of companies that roll out features to production multiple times a day, and probably a hundred times a week. Does that mean the software developers don't sleep? Jokes apart, I was comparing the pros and cons of the monolithic architecture and the microservices architecture. There is no old or new about these architecture styles; it is all about choosing the right architecture style that fits your team and your project. My name is Nish, and I'm a PM on the .NET community team, focusing on helping developers build production-ready apps with .NET. Today, I'm going to talk about building your first microservice in .NET. This is the first video in the series, and we'll only scratch the surface of the topic. However, there will be videos that go in-depth into microservices later. Do you know what is cool about this? Every video will give you links to hands-on tutorials that you can try yourself for free. Also included are links to free eBooks and reference samples that will take you from zero to expert in just a few days. All right, let's get started with microservices.
Microservices, also known as the microservice architecture, is an architectural style that structures an application as a collection of smaller services. They are highly maintainable and testable: when the services are small, the code base is much easier to maintain and test. They are loosely coupled, and this is an important point: services connect with each other on a service endpoint, so no two services know any implementation details about the other or have any direct references.
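To make that concrete, here's a minimal sketch of endpoint-based communication. It's a hypothetical example, not code from this video: the service, the endpoint path, and the CatalogItem contract are all made up for illustration.

```csharp
// Hypothetical sketch: an ordering service calling a separate catalog
// service over HTTP. The caller depends only on the endpoint and the
// JSON contract, never on the catalog service's types or assemblies.
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

public record CatalogItem(int Id, string Name, decimal Price);

public class CatalogClient
{
    private readonly HttpClient _http;

    // The base address (the catalog's service endpoint) would come from
    // configuration or service discovery, not a project reference.
    public CatalogClient(HttpClient http) => _http = http;

    public Task<List<CatalogItem>?> GetItemsAsync() =>
        _http.GetFromJsonAsync<List<CatalogItem>>("/api/catalog/items");
}
```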
Microservices are organized around business capabilities. What that means is, instead of splitting a large application into smaller services based on technical layers, you decouple them based on the business domain. Often, patterns like domain-driven design are used to identify the business problems and separate them out, each with its own domain logic, and each owned by a small team. This is why people often talk about implementing microservices needing a cultural change in the organization, too: every microservice is usually managed by an independent team that has the autonomy to decide when to release a feature or fix a bug, without waiting on the entire team's release cadence. Microservices are independently deployable. Having a service model based on the business domain, autonomous teams managing them, and loose coupling means they are packaged and deployed independently. To understand this, let's look at how microservices compare to monolithic applications.
In the monolithic architecture, even though the app is decomposed into multiple modules and separated into layers (there's a user interface layer, a cache layer, a service layer, a data access layer, and the database itself), when the app is ready for deployment, it is packaged as a single deployment unit, meaning the modules are all deployed at once, with specific configurations and scripts, to a known infrastructure. The critical thing to note about the monolithic architecture is that the modules are tightly coupled. Hence, feature roll-outs must follow a specific release cadence that all teams must work together to prioritize for every release, increasing friction. Also, in a monolithic architecture, you have a common database for all your modules, which is often the bottleneck for performance and for implementing change. And scaling a monolithic application often means scaling VMs or the infrastructure itself.
In the microservice architecture, since the app is decomposed into multiple independent smaller services, each focusing on a specific business functionality, they can be scaled independently. Every microservice is responsible for processing its data, or external state, in its respective database. Individual services can now decide if they want to use a SQL or NoSQL database, or any database of choice, based on the business need. Unlike the monolithic architecture, microservices do not share databases, and that is a crucial benefit, because the teams can now independently deploy applications into production multiple times a day without breaking other services. Microservices communicate with each other by using well-defined APIs; hence, the internal implementation details of each service are hidden from the other services. It also supports polyglot programming.
need to share the same technology stack libraries or frameworks. So you can have a dot-net application
for one service and a Java application for the other, and they all can
run within the same infrastructure or entirely on their own. As you rightly guessed, this kind
of implementation is complex. Since the services are loosely
coupled, the data will be depleted. EPA calls needs to be resilient. Maintenance will be complicated and
you will need to invest in technologies that give you a holistic picture for
logging and debugging and fixing box. And that's why monolithic
applications are still great for smaller teams and projects. You don't need to start writing
micro-services for every single project out there. Remember most applications that
referenced themselves as successful microservice implementing. Started as monolithic as a team size
group, they migrated to microservices. Okay. Back to microservices. Let me tell you about how
Let me tell you how microservices are packaged and deployed. Even though Docker containers and Kubernetes are not a requirement for microservices-based applications, most discussions around microservices involve talking about them. The reason for that is the significant benefit you get with these cloud-native technologies, which help teams ship features to production faster. So what role do containers play? Containerization is an approach to software development in which an application or a service, its dependencies, and its configuration are packaged together as a container image. Just as shipping containers allow goods to be transported by ship, train, or truck regardless of the cargo inside, software containers act as a standard unit of software deployment that can contain different code and dependencies. Although the concept of containerization is more than a decade old, the technology has gained popularity since Docker's open-source implementation.
Docker is an open-source project for automating applications as portable, self-sufficient containers that can run on the cloud or on-premises. Docker is also a company that promotes and evolves this technology, working in collaboration with cloud, Linux, and Windows vendors, including Microsoft. When using Docker, a developer creates an app or service and packages it and its dependencies into a container image. An image is a static representation of the app or service and its configuration and dependencies. It's the image that, when run, becomes the container. Or, in other words, the container is the in-memory instance of an image. In a typical scenario using Docker, you package your app as an image and store it somewhere, that is, in a container registry. When you want to run your app, you pull that image from the container registry, deploy it to the host operating system, and run it as a container instance.
For scalability, you can scale out quickly by creating new containers, and that is a crucial benefit for cloud-native apps. Another benefit is that a container image is immutable: once you have built an image, the image cannot be changed. The only way to change an image is to create a new image and replace it. This feature is our guarantee that the image we use in production is the same one used in development and QA as well.
So how does this compare to virtual machines? In a traditional virtualized environment, you have the hardware, the host operating system, the hypervisor, and the isolated virtual machines with the apps. Most often, there is low utilization of resources, and if you're hosting this on the cloud, chances are you're paying for some underutilized resources. So whenever there is under-utilization of resources, you can migrate containers to underutilized VMs for improved density, isolation, and cost savings. Containers are also lightweight: you can spin up a container within seconds, as compared to VMs, which take longer to boot up. Whether you're spinning up multiple containers to scale out, or migrating containers to underutilized VMs to lower costs, you will need to do it in an automated fashion. And that's where the orchestrator comes into the picture. What is an orchestrator?
Using an orchestrator for your applications is essential: if your application is decomposed into multiple smaller services running in containers, in a microservice-based approach, each microservice owns its model and data so that it is autonomous from a development and deployment point of view. These systems are complex to scale out and manage, and you need an orchestrator to have a production-ready and scalable multi-container application. Kubernetes has become the de facto standard for orchestrators. It's open source, supported by the community, widely used, and it is vendor neutral. You can run Kubernetes on a local machine or in the cloud, on Azure. Now, there's a lot more that I want to share about Kubernetes, but I'll keep that for another video. All right, enough of me talking. Let's go write some code.
We will look at a simple demo of creating a microservice endpoint, containerizing it with Docker, and running it locally. That's it. I could start showing you everything from scratch, but I thought it was best to show you a free interactive hands-on training course that is part of the Microsoft Learn path. The link for that is added to the video's description, or you can find it somewhere on the screen. This is a totally free course. Either you can start this module with me, pausing the video periodically to execute the steps along the way, or just watch this video first; later, you can run it at your own pace. The important thing is that you complete the module, answer the questions (pretty easy ones, actually), and earn your badge. All right, let's get into it. Go over to aka.ms/aspnet-microservices. That will take you directly to the learning path, with so many modules covering cloud-native technologies, starting with building your first microservice in .NET, to more advanced ones covering Docker, Kubernetes, Helm, the Linkerd service mesh, best practices for logging and monitoring, resiliency, DevOps, and more. Hit the start button and let's get straight to the first module. Prerequisites include installing the .NET SDK and Docker Desktop locally, and of course, having basic knowledge of executing commands in your favorite terminal. If you know how to copy a snippet from the portal and paste it into the terminal, yeah, you're good.
I'll go straight to unit three so I can start executing the hands-on labs, but you should check out unit two later, because it has a good amount of content explaining what microservices are and what role containers play in building microservices. Copy the code and paste it into the terminal. I want to make sure the directory is empty. All right, clone the code; the repo has partially completed files for the exercise. Once it is done, navigate to the directory.
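The setup steps look roughly like this; the clone URL is a placeholder here, so take the actual one from the module:

```bash
# Illustrative setup; copy the real repository URL from the Learn module.
mkdir microservice-demo && cd microservice-demo   # start from an empty directory
git clone <repo-url-from-module> .                # partially completed exercise files
```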
Go ahead and open the source code in your favorite editor. There are many files in here, but the important thing to notice is that there are two main projects that we will work on: one is the backend, and the other is the frontend. Typically in microservices, you'll be working with multiple smaller applications; you may have to run hundreds of services to get a holistic view of the application itself. However, for this module, there are just two. And as the names state, there is a backend that provides the data, and a frontend, or UI, that connects to the backend and displays the data to the user.
The backend project is an ASP.NET Core Web API project. It has a controller called PizzaInfoController, which is nothing but a REST endpoint that other services and clients can talk to. ASP.NET Core lets you define routes and verbs inline with your code, using attributes. Data from the request path, query string, and request body is automatically bound to method parameters. In this case, we have a list of pizzas, with ingredients, cost, and availability, stored in a local variable that is returned when an HTTP GET request is made to the API. Notice how we are just returning the populated variable in the Get method: ASP.NET Core automatically serializes the classes to properly formatted JSON out of the box.
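For readers following along in text, here's a minimal sketch of what such a controller looks like. The controller name and overall shape follow the narration; the Pizza fields and sample data are my assumptions, not the module's actual code:

```csharp
// A minimal sketch of the controller described above; illustrative only.
using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

public record Pizza(string Name, string[] Ingredients, decimal Cost, bool Available);

[ApiController]
[Route("[controller]")]
public class PizzaInfoController : ControllerBase
{
    // Handles HTTP GET /pizzainfo. Returning the populated list is enough;
    // ASP.NET Core serializes it to JSON automatically.
    [HttpGet]
    public IEnumerable<Pizza> Get() => new List<Pizza>
    {
        new("Margherita", new[] { "tomato sauce", "mozzarella", "basil" }, 7.50m, true),
        new("Pepperoni",  new[] { "tomato sauce", "mozzarella", "pepperoni" }, 9.00m, true),
    };
}
```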
If you're new to ASP.NET Core Web API, go to the .NET website, scroll down and hit the Web link, then click on APIs and hit Get Started. Alternatively, you can also check out some fantastic videos put together there by my good friend Cecil. All right, back to the terminal. Let's go to the backend directory and build the project. This is great. Now execute the dotnet run command.
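In terminal form, assuming the project folder is named backend, that's roughly:

```bash
cd backend      # the backend project folder
dotnet build    # restore packages and compile the project
dotnet run      # start the web API on its local Kestrel port
```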
The web API is now listening on port 5001. I'll go ahead and click the link. Since there is no homepage mapped, I'll go straight to the Swagger UI, which, by the way, comes included in the latest ASP.NET Core project templates. As you can see, there is a PizzaInfo endpoint with a GET method. Let's try it out. Oh, cool, that is the pizza data that we can consume in our frontend project. Okay, that was easy.
It feels good when the code you cloned from the internet runs successfully on the first attempt. Well, isn't that what Docker also promises? Once you package your app and its dependencies into an image, it should work everywhere the Docker host runs. Next up, we're going to create a Docker image. What you need is a Dockerfile. The project already has an empty Dockerfile, which we will fill in with instructions from the module. Notice that there is no extension for this file; it is just Dockerfile. Go ahead, copy the instructions and paste them into the file. So what is a Dockerfile? A Dockerfile is a text file that has instructions for Docker to build your image. You need to create an image out of an existing image, or start from scratch. Pun intended: scratch is a minimal image that you can use as a starting point for base images, especially for building an operating system image. For almost all scenarios, it is recommended that you create an image out of an existing popular image. In this example, the backend project is based on Microsoft's official .NET SDK image. Using the official images will ensure that you have the correct operating system and dependencies included in order to run your .NET app.
Each instruction within the Dockerfile corresponds to a read-only layer that is created while building. The WORKDIR command changes the current directory inside the container to /src. The COPY command tells Docker to copy the specified files on your computer to a folder in the container. In this example, the backend's .csproj file is copied to the current working directory. Then the dotnet restore command is run, and then the other files are copied into the image. Finally, dotnet publish builds and packages the DLLs into your app folder. You may be wondering why we copied the .csproj file separately and then ran dotnet restore, when the dotnet publish command would have done it all. This is to optimize our image-building process. Docker creates a separate layer for each instruction in the Dockerfile. The layers are stacked, and each one is a delta of the changes from the previous layer. Docker also caches these layers, so it doesn't have to rebuild a layer when it is unchanged. That's why we copied the .csproj file and ran dotnet restore first: all the packages are cached within that layer, and they don't have to be restored when we rebuild the image, thereby optimizing the Docker build process. If your build contains several layers, Docker recommends ordering the instructions from the less frequently changed to the more frequently changed.
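Assembled from those instructions, the Dockerfile at this stage looks roughly like the following sketch; the base-image tag and file names approximate the module's, so defer to the module's actual text. The stage is named build, which will matter for the multi-stage step coming up:

```dockerfile
# Build stage, based on Microsoft's official .NET SDK image
# (tag approximated; use the one from the module).
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src

# Copy only the project file first, so the restore layer stays cached
# until the package references actually change.
COPY backend.csproj .
RUN dotnet restore

# Now copy the rest of the sources and publish the app.
COPY . .
RUN dotnet publish -c Release -o /app
```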
Go ahead and build the image using the docker build command, and tag it as pizzabackend. Ah, that's an error. If you see this error, there could be two reasons: one, your Docker Desktop is not running, or two, it is not configured correctly. I'll go ahead and start the Docker process; once it turns green, I'm good to go. The simplest way to check that Docker is configured correctly is to run the docker run command with the hello-world image. It's not going to find the image locally, so it will go and pull the image from Docker Hub and then run it. Notice the "Hello from Docker!" message; if you see this message, you're good.
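In terminal form, the build and the sanity check described here are roughly:

```bash
# Build the image from the Dockerfile and tag it (name approximated).
docker build -t pizzabackend .

# Quick check that Docker is installed and configured correctly.
docker run hello-world
```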
I'll go ahead and rerun the docker build. The first time, it is going to take a little bit of time, because Docker needs to pull the base images and then create the layers and cache them. Notice how it is running each command from the Dockerfile. If you want to see all the images in your system, run the docker images command. There is the pizzabackend image, and the hello-world image that we ran recently. I'll go ahead and run the image with docker run -it, and I'll also add --rm to remove the container when it stops, followed by the image name. And now we are inside the container. If I run ls inside the src folder, you can see all the sources that we copied in order to build the image. Let's go one step up and navigate to the app folder, and there are the files that dotnet publish actually copied, too. So to recap, we included the .NET SDK in our image by basing it on the official SDK image, copied our source code into it, and built it right inside it.
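Those exploration steps, roughly (the folder paths follow the Dockerfile sketch above):

```bash
# Run the image interactively; --rm removes the container when it stops.
# Assuming no ENTRYPOINT is set yet, this drops into the base image's shell.
docker run -it --rm pizzabackend

# Inside the container:
ls /src   # the sources we copied in during the build
ls /app   # the output that dotnet publish produced
```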
But if you noticed the size of the image, it is about 600 MB. And that is because we have the SDK in the image to build the app, and even the source code is copied right into it, both of which are unnecessary for the final image. The final image can be much more compact if you include just the runtime and the packaged DLLs. Luckily, Docker provides multi-stage builds, which allow you to reduce the size of your final image drastically. In a multi-stage build, I can base my final image on another image, the ASP.NET runtime image, which is substantially smaller, and use the COPY command to copy only the artifacts from the previous stage, which is named build. Notice the --from keyword there. This will ensure the final build will only have the artifacts and runtime required to run our app.
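Putting the two stages together, the finished Dockerfile looks something like this sketch (tags, paths, and the DLL name are approximations; use the module's version):

```dockerfile
# Stage 1: build with the full SDK image; this stage is named "build".
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY backend.csproj .
RUN dotnet restore
COPY . .
RUN dotnet publish -c Release -o /app

# Stage 2: final image based on the much smaller ASP.NET runtime image.
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
# --from=build copies only the published output; the SDK and the sources
# from stage 1 are left out of the final image.
COPY --from=build /app .
ENTRYPOINT ["dotnet", "backend.dll"]   # DLL name assumed for illustration
```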
Now, if you run the build again and then check the size of the final image, you'll notice that we have drastically reduced it, to 200 MB from 600. Multi-stage builds are also super helpful in your CI pipelines, not just for running continuous builds: when you're testing your applications, you may require other tools and dependencies, and since they are Docker images, you can easily pull them into your build process and ignore them as part of the final build. You can include your unit tests, integration tests, or any other checks as multi-stage steps; since the final image will ignore all the other stages, you can ensure that the final image is small and QA-checked.
Now, run the image, this time with the -p flag and 5200:80, which maps host port 5200 to the internal port 80 of the container. Run the curl command against the pizza endpoint on port 5200, and you'll get the same pizza JSON response, but this time the app is running inside the Docker container. Well, this is great.
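That is, roughly (the endpoint path is approximated from the controller name):

```bash
# Map host port 5200 to the container's internal port 80 and run it.
docker run -it --rm -p 5200:80 pizzabackend

# From another terminal, call the endpoint.
curl http://localhost:5200/pizzainfo
```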
But let's go ahead and run our frontend, too, to see this data nicely visualized in a UI. To run multiple containers locally, you can use the Docker Compose tool that comes with Docker Desktop. You will need a YAML file to configure your services, and these are called Docker Compose files. Go ahead and copy the instructions from the module and paste them right inside the docker-compose.yml file in the root directory. Since this configuration explicitly defines the name of each service, the image name, the ports it is going to expose, and the context of the Dockerfile, you can build and run all your services with a single command, instead of a docker build or docker run for every container. By default, it also sets up a single network for your app. Each container joins the default network and is reachable by other containers on that network. That's why you can call the other services directly with http:// followed by the name of the service. Notice how we pass in the backend URL as an environment variable to the frontend container. Since this name only resolves within the Docker network, it will not be accessible from the host; if you need to expose the services to your host machine, you need to specify the ports separately, in this case, 5902 for the frontend. Back in the module, the environment variable will be picked up by the .NET app when you use the Configuration.GetValue method.
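Here's a sketch of what such a docker-compose.yml looks like; the service names, ports, and environment-variable name follow the description above but are approximations, so copy the real file from the module:

```yaml
version: "3.4"
services:
  backend:
    image: pizzabackend
    build: ./backend           # folder containing the backend Dockerfile
    ports:
      - "5200:80"              # host:container
  frontend:
    image: pizzafrontend
    build: ./frontend
    environment:
      # The backend is reachable by service name on the default network.
      - backendUrl=http://backend
    ports:
      - "5902:80"
    depends_on:
      - backend
```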
All right, now that we have the Docker Compose file configured correctly, run the docker-compose build command to build both images together. It's going to be faster this time, because Docker is going to use the cache for most of the layers. Done. I will once again verify the images using the docker images command, and you'll notice that we now have the pizzafrontend image also built. Let's go ahead and run them together, using the docker-compose up command.
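The two commands, for reference:

```bash
docker-compose build   # builds both images, reusing cached layers
docker-compose up      # starts both containers on a shared default network
```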
This command will set up the default network and use the ports exposed to the host. I can navigate to port 5902 to see the pizza info JSON data represented visually in a beautiful UI. And that's it, we're done.
If you like this video, do the usual thing: like this video, share it with your friends, and subscribe to this channel. But most importantly, run the hands-on module that is linked in the description of this video, answer some simple questions, and earn your badge. Thank you for watching. My name is Nish, and I'll see you in the next one.