Java and Spring Boot Microservices | 10 Hour Full Course

Captions
What's going on guys, assalamu alaikum, welcome to Amigoscode. In this 16-hour course — yes, 16-hour course — I'm going to teach you everything you need to know about building microservices from the ground up. This is a course that a lot of you have been requesting for some time now; I took my time and planned everything, and I'm giving you the technologies that you should know these days. If you are looking for a job, this course will definitely help you secure that high-paying job, and if you're working for a company and want to start building microservices, this course will help you do that. You're going to learn how to build microservices from the ground up using the latest technology. We're going to cover Spring Boot for microservices and Maven for packaging your applications using Jib — so we're going to package into a jar, from that jar into a Docker image, and from that Docker image we're going to run it within a Kubernetes cluster. You'll also learn about service discovery, which is very important, distributed tracing, message queues — both RabbitMQ and Kafka — and a bunch more. If you're new to my channel, go ahead and subscribe, and give me a thumbs up — literally just take one second and smash that like button so I can keep on recording courses like this. This course is currently on sale, so hurry up: there is a 30% discount code on the website. You haven't got anything to lose — instead you have lots to gain and can change your career. Without further ado, let's kick off this video.

Hey, what's going on. I just want to tell you that if you are not part of the Discord community, make sure to join, because for this course — or actually all courses on this platform — there is a channel where you can post your questions. If you are stuck, post your questions and a lot of people will jump in and try to help you. Also be part of it: if you see other people struggling or they have similar questions, jump in, because teaching others is the best way for you to improve your own skills. I would love you to be part of the community, so go ahead and join, and if you have any questions about this course, post them on Discord. Catch you later.

Let's begin this course by building our very first microservice. This microservice will be responsible for handling customer registration, and we're going to name it customer. Some people like to call their microservices customer-service, but I'm just going to drop the "service" part completely and use customer. We'll build customer first, then add an API, connect it to its own database, and then we'll proceed to the next microservice and make those two microservices talk to each other. By the end you should have a complete understanding of how to pull all of this together.

Now, how exactly are we going to achieve this? If you've been following my courses on Spring Boot, then you should know by now that Spring Boot makes it super easy to build Java applications. On the official page of spring.io you can see what you can do with Spring: microservices, reactive programming with asynchronous cloud web apps, serverless, event-driven, batch, and so on and so forth. You can go and see how to get an application up and running super easily with Spring Boot, but in this course you'll learn all of this. Because we are learning about microservices, let's see what they have to offer. Here it says: microservice architecture is the new normal — building small, self-contained, ready-to-run applications that can bring great flexibility and added resilience to your code. You can read more about this to get an overview, but in a nutshell, the way we build microservices in the Spring world is using Spring Cloud. The diagram on their page corresponds to the architecture diagram I've put together for this course — I want to build a fully fledged system of services that communicate with each other, each with its own database, and so on — and my diagram is much easier for you to understand. By the end of this course you'll have an understanding of all of it.

If you click on Spring Cloud, they give you a definition: developing distributed systems can be challenging, and this is what the Spring Cloud project gives you. Under Projects you can see Spring Boot the framework, Spring Cloud, Spring Data, and so on, as well as Spring Security, which I have a course on. And right here — again the exact same architecture — you can see service discovery, API gateway, cloud configuration, circuit breakers, tracing, testing, and so on. If I click on Projects and go to Spring Cloud, have a look: these are all the sub-projects within Spring Cloud. Obviously we're not going to use everything here, because as I said, you should only take what is necessary to write your own microservices, and as we move towards Kubernetes you'll see that some of these things are not needed. Go ahead and do some reading on how to get all of this up and running, but later you'll see how we're going to install Spring Cloud — I'll make it super easy to understand with Maven multi-modules. So hopefully now you know about Spring Cloud: Spring Boot is what allows us to write the microservices, Spring Cloud brings all these microservices together, and you've also got Spring Data for databases and Spring Security for securing your microservices. If you have any questions drop me a message; otherwise let's start developing microservices.

For this project we're going to use Maven as our build tool, and on this page — the link is in the description of this video — you can find the installation guide for your operating system and how to create a simple project. Because I'm on a Mac I'm going to use Brew, so within my terminal I'm going to say brew install maven. This should take a while... and there we go, now we have Maven. If you want to make sure that Maven was successfully installed, just clear the screen and type mvn -version, and there we go — this is the version which was installed on my machine. Now that we have Maven installed, this guide gives us the command to create a project: mvn archetype:generate, with a couple of flags. The groupId is basically our organization and the artifactId is the application name, and then we pass some other arguments. Copy this command and change those values according to your organization or your name. I've already done that, so I'm just going to paste it: you can see that the groupId is com.amigoscode and the artifactId is amigosservices. In fact, before I run this, let me cd into Desktop and then paste the command again. Give it a second... and there we go. Now on our Desktop — /Users/amigoscode/Desktop — we have this new folder. I'll cd into amigosservices and say ls, and this is the Maven file structure. Let me see if I've got tree... no, so I'll say brew install tree. There we go — now I can type tree, and check this out: it shows the folder structure in a very nice way. You can see the root, the pom.xml, src/main/java/com/amigoscode/App.java, and the same for test, and we are good to go. Next, let's open up this folder with IntelliJ.
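Before moving on, here are the terminal steps above in one place — a sketch, so swap in your own groupId and artifactId; the archetype flags follow Maven's "in five minutes" guide, and brew is macOS-specific:

```shell
brew install maven          # macOS; use your OS package manager otherwise
mvn -version                # confirm the install worked

cd ~/Desktop
mvn archetype:generate \
  -DgroupId=com.amigoscode \
  -DartifactId=amigosservices \
  -DarchetypeArtifactId=maven-archetype-quickstart \
  -DarchetypeVersion=1.4 \
  -DinteractiveMode=false

cd amigosservices
brew install tree
tree                        # inspect the generated folder structure
```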
I have the IntelliJ Ultimate Edition, and if you want to grab it I recommend getting it through JetBrains Toolbox, because that's the easiest way to manage all of your IDEs and perform upgrades and so forth. Here I've got Ultimate, and this is what you see. Let me open up this folder with IntelliJ: inside my Desktop I have amigosservices, so just click on the pom.xml or the folder — either one should work — then Open as Project, and give it a second while it downloads whatever it has to download. There we go, that's done. Let me put this full screen. If I go to File and then Project Structure, I just want to show you that I'm using Java 17. If you want to use Java 17 as well, so that everything works with no issues, just click on Edit, then the plus button, and you can download the JDK, version 17. You can see I could download the other versions too — 16, 15, 13, 11 and 1.8 — but 17 is the one with long-term support as I speak, so just use 17 and that's it.

Next, let's bring in the dependencies we need in order to start building our microservices with Spring Boot. This folder is our parent project: we're going to use Maven multi-modules, so that the parent declares dependencies, all the sub-modules choose which dependencies they need to import, and we can also enforce dependencies across all microservices. The src folder we're not going to need at this point, because this is the parent module, so delete it, and open up the pom.xml. We have a couple of things in here — go through it, and change the url according to yours if needed — and leave the properties as is. Inside dependencies, let's get rid of the generated dependency — we're going to have our own in a second — and under build and pluginManagement you can see a bunch of plugins, so let's get rid of everything and leave an empty plugins tag; we'll fill it in shortly.

Now, the dependency we need here is the following. I'm going to add a dependencyManagement section, with dependencies inside, and the dependency I want is spring-boot-dependencies, which comes from org.springframework.boot. Then choose a version: in my case 2.5.7. I think this is the latest version as I speak — make sure you choose the exact same version so that you have no issues. I also need to set the scope to import. This uses the BOM (bill of materials): inside that artifact there's a whole set of dependencies we can use. We need this because we're not using Spring Boot as the parent project — this is our own parent module, and the sub-modules can pick from these dependencies. If one of them needs, for example, spring-boot-starter-web, it can just include it. That's the beauty of dependencyManagement.

By contrast, if we add a plain dependencies section, every single sub-module gets those dependencies. Let's say we want every sub-module to have Lombok: here we just say lombok, which comes from org.projectlombok. Let's also make sure all sub-modules have the testing artifact from Spring Boot, spring-boot-starter-test, from org.springframework.boot. There we go; let me put this full screen.

Finally, because we are using Spring Boot, let's add the plugin for building the artifacts: the groupId will be org.springframework.boot and the artifactId spring-boot-maven-plugin. Let's also specify the version via a property, spring.boot.maven.plugin.version: grab that, scroll up, and paste it inside the properties. I was about to type the version there, but I just realized it's the exact same version as the dependencies, so let's also define a spring.boot.dependencies.version property, reference it where we have the BOM, and set the value to 2.5.7. If you want, you can control these independently, but here the spring-boot-maven-plugin has the exact same version. You can see this is now taking shape — this url should be https, and this should be Java 17. Let's reload the changes... and it looks like we have an error — oh no, that went away. But if I open up the Maven panel, this is still red, and if I run clean you can see the process terminated. Let's have a look: I'm missing the type inside the dependencyManagement entry — it needs to be type pom, so that we can use all these dependencies in our sub-projects. Reload again — there we go, all good. The dependencies have been resolved: if I open up Dependencies we have Lombok and starter-test, which is what we included for our sub-projects, and we also have the plugins. Now let's run clean — I just want to make sure this runs — and there we go. Let's run validate as well,
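Putting the walkthrough above together, the parent pom ends up looking roughly like this — a sketch, so adjust the groupId, artifactId and versions to your own project (the test scope on starter-test is my addition, as it's the idiomatic choice):

```xml
<!-- parent pom.xml (sketch) -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.amigoscode</groupId>
  <artifactId>amigosservices</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>pom</packaging>

  <properties>
    <spring.boot.dependencies.version>2.5.7</spring.boot.dependencies.version>
    <spring.boot.maven.plugin.version>2.5.7</spring.boot.maven.plugin.version>
  </properties>

  <!-- BOM import: sub-modules pick dependencies from here without stating versions -->
  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-dependencies</artifactId>
        <version>${spring.boot.dependencies.version}</version>
        <type>pom</type>
        <scope>import</scope>
      </dependency>
    </dependencies>
  </dependencyManagement>

  <!-- every sub-module inherits these two automatically -->
  <dependencies>
    <dependency>
      <groupId>org.projectlombok</groupId>
      <artifactId>lombok</artifactId>
    </dependency>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-test</artifactId>
      <scope>test</scope>
    </dependency>
  </dependencies>

  <build>
    <!-- pluginManagement: only modules that declare the plugin actually use it -->
    <pluginManagement>
      <plugins>
        <plugin>
          <groupId>org.springframework.boot</groupId>
          <artifactId>spring-boot-maven-plugin</artifactId>
          <version>${spring.boot.maven.plugin.version}</version>
        </plugin>
      </plugins>
    </pluginManagement>
  </build>
</project>
```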
and it works too — we haven't got anything to compile yet, but you can see that this is working. Obviously, if you want to grab this file, you can find it in the description of this video so you can follow along; all you have to do is change the url, the name, the groupId and the artifactId, and off you go. Just to recap: we have some properties; then dependencyManagement, which lets our sub-projects pick whatever dependency they want — you'll see this in a second; then dependencies, so that all sub-modules have those two dependencies by default without explicitly importing them in their pom.xml; and the exact same idea for the build with pluginManagement — not every module will need the spring-boot-maven-plugin, which is why it sits inside pluginManagement. If you have any questions drop me a message; otherwise let's move on.

Now that we have this parent pom, let's create our very first microservice as a sub-module. Right click the root folder — amigosservices, or whatever you have named it — then New, Module. This will be a Maven project, the project SDK will be Java 17, don't tick anything, just say Next. Here you name the microservice: in my case customer, since this microservice will deal with customers. If I expand the artifact coordinates, the groupId will be com.amigoscode, the artifactId customer, and leave the version as is. Finish — and there we go, we have this new folder, customer. If I open up the parent pom, what I want you to notice is that we now have a new section by default: modules, which is a list of modules, and so far the only module is customer.

If I open up customer, it has the exact same folder structure as any Maven project, with its own pom.xml. Inside this pom.xml we have some properties, and have a look at the parent: you can see this icon, which navigates to the amigosservices artifact. I usually like to flip the parent and modelVersion elements around, and then there's the artifactId for this project, customer. If I open up src/main, everything is basically empty; we're going to add a few things in a second. For now, open up the pom.xml for our customer microservice, and inside let's add a dependencies section — now we can bring in any dependency we want. For now all I want is a dependency with artifactId spring-boot-starter-web, which comes from org.springframework.boot. Have a look: if I click through on this dependency, it comes from the spring-boot-dependencies pom, 2.5.7 — which is what we defined in the parent's dependencyManagement. We brought in that whole set of dependencies, and now we let each microservice choose which ones it wants to use. For this microservice we want to write a RESTful API, hence this dependency, and at this point that's all we need. Let me close this pom and the parent pom — oh, I actually closed the wrong one, so let me reopen the pom for customer. Next we'll work inside customer/src/main/java.
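For reference, the customer module's pom described above looks roughly like this — a sketch; the parent coordinates match this walkthrough, so adjust them to yours:

```xml
<!-- customer/pom.xml (sketch) -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <parent>
    <groupId>com.amigoscode</groupId>
    <artifactId>amigosservices</artifactId>
    <version>1.0-SNAPSHOT</version>
  </parent>
  <modelVersion>4.0.0</modelVersion>
  <artifactId>customer</artifactId>

  <dependencies>
    <!-- no version needed: it comes from spring-boot-dependencies (2.5.7) in the parent -->
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
  </dependencies>
</project>
```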
Let's create a new package and name it com.amigoscode.customer. Inside it I'm going to create a new Java class called CustomerApplication, which will have a main method, and we'll annotate the class with @SpringBootApplication. In the main method I say SpringApplication.run, where the primary source is CustomerApplication.class, and we also need to pass the arguments from the command line. That's pretty much all we need.

We also need an application.yaml inside of resources, so: New File, application.yaml — or .properties if you prefer, it's completely up to you. Inside I'm going to have server and then port, because I want control over the port; the default is 8080, so let's set it to 8080 explicitly. We also want to name this application, so spring.application.name will be customer. Next I'm going to create a new file called banner.txt: open up Google, search for "create spring boot banner", click the very first link, type "customer" — you can change the font if you want, but I'll leave the default — copy everything, go back, and paste it in.

Last but not least, open up the pom.xml. You should see the Maven reload button, or you can press Shift-Cmd-O to load the Maven changes. We've added the spring-boot-starter-web dependency, so let's make sure it's reflected in this microservice. If I open up the Maven panel, under customer the only dependencies are Lombok and starter-test — and that's because, if you recall, inside the parent's dependencies section we said we want all sub-modules to have those two — while starter-web was included specifically for this module.
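The configuration dictated above, as one file — a short sketch of the customer service's application.yaml:

```yaml
# customer/src/main/resources/application.yaml (sketch)
server:
  port: 8080

spring:
  application:
    name: customer
```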
But it's not showing up yet because I need to reload — you can reload from here or from the Maven panel, either one — and now you have the dependency; it just takes a while to index. Now let's open up CustomerApplication — this is our main method — right click, Run, and there we go. Let me show you the logs: we've got our custom banner, "customer", everything started correctly, and Tomcat started on port 8080. So there we have it: our first microservice. Currently it doesn't do anything — it doesn't talk to any database and so forth — but we'll add that in a second. You can see how easy it was to bootstrap a microservice with Spring Boot.

Within the com.amigoscode.customer package, let's create the model for this microservice: a class named Customer. I'm going to use Lombok — every sub-module has Lombok in it — so I'll annotate it with @Data, and I also need @Builder. We'll come back to this class in a second, but for now that's all we need. Let's add a couple of fields: the id as an Integer, the first name and last name as Strings, and finally the email. Now that we've got the model — and we're basically following a layered architecture here; I've got a bunch of videos covering how to properly organize your applications — let's start with the controller: CustomerController, annotated with @RestController, plus @RequestMapping mapped to api/v1/customers, and I also want @Slf4j so I can log a couple of things. And I've just realized: instead of a class, let's make the controller a record, because we're going to pass a couple of things into it in a second — so just leave it like that.

Now let's have a public method that takes a customer from the request body and registers them: public void registerCustomer, taking a CustomerRegistrationRequest — we'll create that in a second — which comes from the request body, so annotate the parameter with @RequestBody. Let's log the request: log.info with "new customer registration" and the request passed in. We also need to annotate the method with @PostMapping so that we can fire POST requests against this endpoint. Now let's create that class quickly: New, Java class, make it a record named CustomerRegistrationRequest, and what we pass in is the first name, the last name, and finally the email. The reason I'm using a record and not a class is that I get immutability, toString, equals and all that stuff for free — whereas for Customer I'm actually going to use JPA in a second, so it stays a class. Back in my CustomerController, the parameter type should be CustomerRegistrationRequest — delete the old one and log that instead. Job done.

Now let's create the service that will handle this request: a customerService field of type CustomerService. Let's create that class, also change it to a record, and pass nothing inside for now. Back in the controller I'll say customerService.registerCustomer, pass the CustomerRegistrationRequest, and create this method in the service. And voilà, I've got the request.
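Two pieces from this step, written out in plain Java: the registration payload as a record, and a hand-rolled sketch of roughly what Lombok's @Builder generates on the Customer model. The builder below is hypothetical — Lombok writes the equivalent for you, so you never type this yourself:

```java
// The request payload: a record gives us immutability, accessors,
// equals/hashCode and toString for free.
record CustomerRegistrationRequest(String firstName, String lastName, String email) { }

// A hand-written sketch of what Lombok's @Builder generates on Customer.
class Customer {
    private final Integer id;
    private final String firstName;
    private final String lastName;
    private final String email;

    private Customer(Integer id, String firstName, String lastName, String email) {
        this.id = id;
        this.firstName = firstName;
        this.lastName = lastName;
        this.email = email;
    }

    static CustomerBuilder builder() {
        return new CustomerBuilder();
    }

    String getFirstName() { return firstName; }
    String getLastName()  { return lastName; }
    String getEmail()     { return email; }

    static class CustomerBuilder {
        private Integer id;
        private String firstName;
        private String lastName;
        private String email;

        // each setter returns the builder itself, so calls chain until build()
        CustomerBuilder id(Integer id)              { this.id = id; return this; }
        CustomerBuilder firstName(String firstName) { this.firstName = firstName; return this; }
        CustomerBuilder lastName(String lastName)   { this.lastName = lastName; return this; }
        CustomerBuilder email(String email)         { this.email = email; return this; }

        Customer build() { return new Customer(id, firstName, lastName, email); }
    }
}
```

This is why the service can later turn the incoming record into an entity with a chain like Customer.builder().firstName(request.firstName())...build().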
Now let's turn this request into a Customer. I'll say Customer customer = Customer.builder(), pass the first name from the request, then the last name, and finally the email, and then just say build() — this is the builder pattern. Let me put a TODO here, which we'll handle in a second. So there you have it: you can see how the application is taking shape. What we have to do next is get the database up and running, then configure our application so that we can store the customer in the database. Finally, we need to annotate this class with @Service so that Spring initializes it as a bean for us and we can inject it into our controller.

Let's get our database up and running so that we can connect our microservice to it. Under the root folder of our microservices — amigosservices — go ahead and say New, File, and name it docker-compose.yaml. Inside I'm going to paste some YAML configuration, which you can find in the description of this video. If you want to learn more about Docker, check my website — I've got a course teaching you everything you need to know about it. In a nutshell, starting from the top: we've got services; postgres is my container, the image is postgres, I'm exposing the ports, and this is the network. Then I've got pgadmin, the graphical user interface client: I give it a name, pass some environment variables used to connect to it, and expose port 5050 mapped to 80 inside the container. Then there are the networks, so the two containers can talk to each other, and some volumes to store data. Once you have this file, make sure it's named docker-compose.yaml.
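A sketch of the compose file just described — the canonical version is the one linked in the video description; the exact image tags, credentials and volume paths here are assumptions:

```yaml
# docker-compose.yaml (sketch)
services:
  postgres:
    container_name: postgres
    image: postgres
    environment:
      POSTGRES_USER: amigoscode
      POSTGRES_PASSWORD: password
    ports:
      - "5432:5432"
    networks:
      - postgres          # shared network so pgadmin can reach postgres by name
    volumes:
      - postgres:/data/postgres
    restart: unless-stopped

  pgadmin:
    container_name: pgadmin
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: pgadmin4@pgadmin.org
      PGADMIN_DEFAULT_PASSWORD: admin
    ports:
      - "5050:80"         # browse the UI at localhost:5050
    networks:
      - postgres
    restart: unless-stopped

networks:
  postgres:
    driver: bridge

volumes:
  postgres:
```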
You can actually run this straight from IntelliJ if you have the Ultimate Edition; if not, I'll show you how to run it through the terminal. Open up your terminal or command line and make sure you are within the project — I'll type ls and you can see the docker-compose.yaml. To get things up and running, type docker compose up -d (the -d is for detached) and press Enter. You can see that it's creating the network and now the containers — pgadmin as well as the database — and there we go, that was super quick. You can type docker compose ps and see that we have pgadmin listening on port 5050 and Postgres listening on port 5432.

Now that we have this database, let's connect to it. Open up your web browser, type localhost:5050, press Enter, and give it a second — oops, the font is too big. First we need to set a master password; I'm just going to say "password". Now let's add a new server: I'll name it amigoscode, and for the connection the host will be postgres. That's because we are connecting from one container to another: if I show you inside services, have a look — networks: postgres. This container uses that same network, postgres, and pgadmin uses it too; that's how we define the network so these two containers can talk to each other. If pgadmin wasn't running inside Docker, you would need to say localhost instead. So: host postgres, port 5432, leave the maintenance database as is, the username is amigoscode and the password is password. I can save the password, and to be honest that's it — click Save, and you can see that we managed to connect to our database.
I can click on it, and there's not much information here, but if I expand it you can see we have Databases, and by default we have amigoscode and postgres. So now we have a database we can work with; next let's configure our microservice to connect to it. Within IntelliJ, let me close all tabs and start fresh. Inside our customer microservice, open up application.yaml, and I'm going to paste this configuration, which you can find in the description of this video. If you want to learn more about Spring Data JPA, check my website, where I've got a very in-depth course teaching you everything about connecting to databases, joins, how to model your tables, queries, and so on — you should take it.

What I have here is the datasource key: within it the username, amigoscode, and the url, which points at postgresql on localhost. The reason it's localhost is that our application, when we start it, is not a container; if it were a container, we would need to connect via the network. You can also see the port and then customer — I'll come back to that in a second. We've got the password, which is password, and then some configuration to set the dialect, format the SQL, update the schema when we update our entities, and show the SQL. Now, this customer at the end of the url is the name of the database we have to connect to, so back in pgadmin: Databases, Create Database, the owner is amigoscode, and the name is customer — Save, and now we have this database to connect to.

The last thing we have to do is open up the pom.xml. We brought in starter-web for RESTful APIs, but we also want the dependency that allows us to perform queries and interact with our database.
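The datasource configuration described above, as a sketch appended to the customer service's application.yaml — the exact keys below are my reconstruction from the walkthrough; the file linked in the video description is canonical:

```yaml
# appended to customer/src/main/resources/application.yaml (sketch)
spring:
  datasource:
    username: amigoscode
    password: password
    url: jdbc:postgresql://localhost:5432/customer   # "customer" = database name
  jpa:
    hibernate:
      ddl-auto: update          # update the schema when entities change
    properties:
      hibernate:
        dialect: org.hibernate.dialect.PostgreSQLDialect
        format_sql: 'true'
    show-sql: 'true'            # log generated SQL
```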
Let me put this full screen: the dependency is spring-boot-starter-data-jpa. We also need to bring in the Postgres driver: a dependency with artifactId postgresql, from groupId org.postgresql — you can see this is also coming from the parent pom — and the scope for this one is runtime. Now go ahead and reload so the changes are picked up; if I open up the Maven panel, you should see that we now have data-jpa as well as the postgresql driver. All good.

Now let's open up the Customer class. We need a couple of things: one, the @Entity annotation, and also @AllArgsConstructor and @NoArgsConstructor. We're not done yet: we need to annotate the id with @Id, and the id will be based on a sequence, so let's add @SequenceGenerator — import it — and finally @GeneratedValue, importing that as well, along with GenerationType (at this point the IDE is trying to use a star import, so let me import the type explicitly). And we are good to go.

With that in place, let's create a new interface: CustomerRepository, which extends JpaRepository, where the entity is Customer and the data type of the id is Integer. Now open up the CustomerService, and inside the record we inject the CustomerRepository so that we can save our customer. For the TODO — store customer in db — let's say customerRepository.save(customer), and delete the TODO. Obviously there are more checks that we should do, but for now this will save customers to our database. Now let's start the application, and hopefully this works.
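For reference, the two dependencies added above, in pom.xml form — a sketch; both versions resolve from the parent's BOM:

```xml
<!-- added to customer/pom.xml (sketch) -->
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
  <groupId>org.postgresql</groupId>
  <artifactId>postgresql</artifactId>
  <scope>runtime</scope>  <!-- only needed when the app actually runs -->
</dependency>
```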
logging, and here: create table — id, email, first_name, last_name — and the primary key is the id; we also have the sequence. So this is good stuff. Now what I want you to do is open up pgAdmin, and within amigoscode > Databases > customer, open up Schemas. We have one schema, the public schema; open that up, and we should have one table inside. In fact, let me just refresh — there we go. Now, if I open up Tables, have a look: we have the customer table in here, and also the sequence, which is right here — customer_id_sequence. This is really cool. If you want, you can count the number of rows — this will give you zero, so there is nothing inside — and you can even run queries against this. You could say, for example, select * from customer, then run this; it gives nothing back yet, but you can see that we are actually connected to our database and we have a table called customer, based off our entity. Now what I want to do is send a POST request to my API and see whether we can save a customer. I'm going to use Postman as my REST client, and you can use any other. So within Postman, new request: this will be a POST, and the URL will be localhost:8080/api/v1/customers. Select body, raw, change text to JSON, and in the body let's have this JSON blob: firstName, lastName, email — Jamila, Ahmed, jahmed. This JSON object right here corresponds to — if I go back in here and open up CustomerRegistrationRequest — it maps to this record: firstName, lastName and email. So let's go back, try to send the request, and see whether it works. Send — there we go, 200 status code, which means that we most likely have saved this customer to our database. Here I'm just going to rerun the exact same query, and have a look: now we have Jamila Ahmed in here. So this is beautiful, and you can see how we now
have our microservice connected to its own database. So we've managed to build customer; now let's build a second microservice. The idea with this microservice is that, in order for a person to become a customer in our system, we're going to perform a fraud check. So we'll have a second microservice, and this microservice's responsibility is just to perform fraud checks, i.e. whether the customer is a fraudster. Obviously we're not going to use an external provider for checking whether someone is a fraudster; instead we're going to mock things, pretty much just returning a fixed result. But you'll see how we hand the responsibility over to a second microservice which is only focused on the fraud knowledge — so if the fraud logic grows, it's independent from the customer microservice. You can have one team focusing on the customer side of things, and then you've got another team focusing on the fraud aspects of your application. So let's kick off this section. Within IntelliJ, let's create a new module: this will be the fraud module. It will be a Maven project, then Next, and this will be fraud right here — it will represent the fraud microservice. Leave everything as is, then Finish, and let me just say Don't Ask — I'll be pushing this to Git in a second, so Cancel — and here we have our fraud module. Now let's add the dependencies within our pom. Inside of our pom, what we need is a dependencies section, so here: dependencies, and inside, for now, let's just have spring-boot-starter-web. Let me just close this — there we go. And by the way, within src you can see that this is already configured: the parent is amigoservices right here, and you can see that we can navigate to the parent pom; within the parent pom we have this module inside of the modules section, already added for us. So let me go back to this pom — this is now looking good. We're going to add the
database dependencies in a second, but now, within main/java, let's create a new package: this will be com.amigoscode.fraud. There we go. And let's have the FraudApplication — this will be our main class. There we go. Now let's annotate this with @SpringBootApplication, and within it we have to have the main method. Now let's just say SpringApplication.run, passing FraudApplication.class and also the arguments. And to be honest, this is it. Let's just make sure that this is configured correctly — always, when you add something to your pom, reload so that you can pick up the dependencies. So if I click on Maven, we should see fraud in here; if I expand its dependencies, you can see that starter web is not in here yet. Now, you can reload from here or from here — either one will work — so reload, and you should see that we have the starter web. If I close this, within FraudApplication I can pretty much just run the application, and there we go: you can see that it is running. Now, one thing that we have to configure is the port, since Tomcat started on port 8080.
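Putting the above together, the main class is tiny — a sketch, assuming the package name com.amigoscode.fraud used in the walkthrough and Spring Boot on the classpath:

```java
package com.amigoscode.fraud;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Minimal entry point for the fraud microservice, as described above.
@SpringBootApplication
public class FraudApplication {

    public static void main(String[] args) {
        SpringApplication.run(FraudApplication.class, args);
    }
}
```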
Now, if you have the customer microservice up and running, this will fail, because customer is using this port. So let's just configure that. Here, within resources, we need the application.yaml, so new file: application.yaml. Then let me go to customer, and within src/main/resources take its application.yaml and copy all of it. I'm going to paste that in here, use 8081, and for the application name this will be fraud. There we go. We also need the banner.txt — you can go to that website to generate a banner like this — so this now says FRAUD, and we are good to go. Now, if I restart the application — there we go, you can see that it started correctly; it's running on port 8081. Let me also start customer, just to make sure that customer starts correctly: within the customer package, open up CustomerApplication and then run — and there we go, you can see that we have customer up and running as well. Now let me just stop customer, because we're not going to develop anything on it for now, but you see that we successfully managed to have a second microservice up and running. Next, let's add the database to this microservice. In a typical microservice architecture you want to have one database per microservice. Now, because I'm running low on resources — I'm using the Mac M1, and if I have a bunch of containers for different database servers or instances, then I'm going to run out of space very soon — what I'm going to do is, within our main server, amigoscode, have yet another database inside. So we have customer; let's also create the fraud database, and Save. Basically, what I'm saying is: within our docker-compose in here, you can see that we have services, and this is postgres; ideally we would need another service for fraud — you'd have, for example, postgres-fraud.
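For reference, the fraud service's application.yaml described above would look roughly like this — a sketch, since your keys may be laid out slightly differently (the datasource section is added in a later step):

```yaml
server:
  port: 8081

spring:
  application:
    name: fraud
```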
But because I'm going to run out of resources on my computer, I'm just going to reuse the exact same database instance and have a database inside it — but ideally, again, in a microservice architecture you want to have the database server independent. Now let's add the configuration. I'm going to copy from the customer microservice in here: let's just copy the datasource — all of this will be the exact same thing — so into application.yaml, paste that in, and I need to indent this a little bit so it's valid configuration. Now, instead of customer, this will be fraud, because now we have the fraud database in here; the username will be amigoscode and the password will also be password. But as I said, this usually would be a different database server — in our case we're just reusing the exact same instance, and within it we have the fraud database. So now let's open up the pom.xml for fraud: we need to add spring-data-jpa as well as the Postgres driver in here — make sure that you have these two dependencies, the same way that we've done for customer. And now, within our main package, let's create a new Java class, and we're going to name it FraudCheckHistory. Within it we need @Data, @Builder, @AllArgsConstructor and @NoArgsConstructor — make sure to import all of these — and we also need @Entity, for JPA. Within it, let's have the id: the id will be done the same way as with customer, but we're going to have the sequence for fraud instead. So @Id, then the @SequenceGenerator — make sure that this is for fraud, so the name is fraud_id_sequence, and fraud_id_sequence for the sequence name. Let's also have @GeneratedValue — import this, the same for GenerationType. And last but not least, we need the fields: private Integer id; let's also have the customerId, just like that. I also want a Boolean, just so we store the result of whether the customer is a fraudster or not — so isFraudster.
Finally, let's record when this check took place: a LocalDateTime field, createdAt. And to be honest, this is it. Now what we have to do is restart the fraud microservice and just make sure that everything works — there we go, you can see that we have create table, and we also have the sequence. If I open up pgAdmin, refresh, and then go to Schemas > public, we should have one sequence, fraud_id_sequence, as well as one table with all four columns. Next, let's go ahead and build the controller, service, as well as the repository. So we have the entity set up; let's have the repository for it. This will be an interface in here, and we're going to name it FraudCheckHistoryRepository — press enter, and let me put this full screen — and it extends JpaRepository<FraudCheckHistory, Integer>, just like that. And now let's have the service — FraudCheckService. In here let's just have a class, and this time, let's not use records, because I don't want to confuse you guys; I'm going to change customer as well to use a normal class, because I think it's easier for anyone to understand, whether or not you're using the latest version of Java. In here, what we want to do is just perform a check for a particular customer. So here I'm going to say public boolean isFraudulentCustomer, and we're going to take the customer id, so Integer customerId, just like that. For now, we're going to say that nobody is a fraudster, so let's just return false. But obviously, if you were doing this for real, maybe you would use a third-party system, or you could have your own system that checks whether a customer is a fraudster.
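Pulling the pieces above together, the entity and repository look roughly like this — a sketch assuming Lombok and the javax.persistence API (newer Spring Boot versions use jakarta.persistence instead):

```java
import java.time.LocalDateTime;
import javax.persistence.*;
import lombok.*;
import org.springframework.data.jpa.repository.JpaRepository;

// Entity recording that a fraud check took place for a customer.
@Data
@Builder
@AllArgsConstructor
@NoArgsConstructor
@Entity
public class FraudCheckHistory {

    @Id
    @SequenceGenerator(name = "fraud_id_sequence", sequenceName = "fraud_id_sequence")
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "fraud_id_sequence")
    private Integer id;

    private Integer customerId;
    private Boolean isFraudster;
    private LocalDateTime createdAt;
}

// Spring Data JPA generates the implementation at runtime.
interface FraudCheckHistoryRepository extends JpaRepository<FraudCheckHistory, Integer> {
}
```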
Maybe checking their email, and so on and so forth — all kinds of things you could do — but in our case we're just going to return false in here. We also want to store the fact that a check took place, so: private final FraudCheckHistoryRepository, just like that, and add it to a constructor parameter. There we go. Now, obviously, I want to store the fact that this took place: .save, and inside I'm going to say FraudCheckHistory.builder — and before I build, I want to set a couple of things. I want to set isFraudster — I'm going to say false in here — and let's also set the customerId, and finally set when this took place, so createdAt will be LocalDateTime.now(), just like that. Then we build, and we store this in our database — we are storing the fact that a check took place. Now let's annotate this with @Service. There we go. Let's also have the controller: a class named FraudController, with @RestController and @RequestMapping("api/v1/fraud-check"). There we go. This will be a @PostMapping, and here we're going to say public, with a return type of FraudCheckResponse: so, FraudCheckResponse isFraudster. Let's just create this — it will be a record, so create record, and inside I'm going to say isFraudster, of type Boolean. This is all I need; if you want to include more things in it, you can. Now, for this mapping, all I need is to receive the customer id: this will be a path variable in the path, and I'm going to name it customerId, just like that. And if I put this full screen, we say this will be a path
variable: @PathVariable("customerId"), with the type Integer customerId — and let me just put it like that. Now I can have the service: private FraudCheckService fraudCheckService. There we go — make sure that this is final, and add it to the constructor. Now all I need to do is say fraudCheckService.isFraudulentCustomer, pass the customerId, and extract this to a variable, isFraudulentCustomer — put this on a new line so you can see everything properly. Then I'm going to say return new FraudCheckResponse, passing isFraudulentCustomer. And I've just realized that this must be a @GetMapping instead of a @PostMapping — import that. Also, instead of us doing all of these constructor shenanigans, what we're going to do is have private final fields and let Lombok generate the constructor — I kind of forgot that we have Lombok. So if I go back for a second, we just have to delete the constructor, and in here we can say @AllArgsConstructor, and you can see the error goes away. Let's just do this for basically everything: for FraudCheckService, we'll delete the constructor and add the @AllArgsConstructor annotation. Let's do the same for customer, so CustomerService: we'll change this from the record to a regular class, take the private final CustomerRepository from here, and stick the annotation in there. Beautiful stuff. Let's also do it for the controller: annotation in there, take this guy, change it to a class instead, then private, paste that in, and make it final. So records give us a bunch of things that we don't need for our services, and I think this is much better. Okey-dokey. Now that we have this FraudController, next let's understand how the communication between microservices will work. We have customer, which is running on port 8080, and we have fraud, which is running on port 8081.
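The service, controller, and response record built above come together roughly like this — a sketch assuming Lombok's @AllArgsConstructor and @Slf4j as discussed (the log line is added a bit later in the walkthrough):

```java
import java.time.LocalDateTime;
import lombok.AllArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.*;

record FraudCheckResponse(Boolean isFraudster) {
}

@Service
@AllArgsConstructor
class FraudCheckService {

    private final FraudCheckHistoryRepository fraudCheckHistoryRepository;

    public boolean isFraudulentCustomer(Integer customerId) {
        // Record that a check took place, then return the (mocked) verdict.
        fraudCheckHistoryRepository.save(
                FraudCheckHistory.builder()
                        .customerId(customerId)
                        .isFraudster(false)
                        .createdAt(LocalDateTime.now())
                        .build());
        return false;
    }
}

@Slf4j
@RestController
@RequestMapping("api/v1/fraud-check")
@AllArgsConstructor
class FraudController {

    private final FraudCheckService fraudCheckService;

    @GetMapping(path = "{customerId}")
    public FraudCheckResponse isFraudster(@PathVariable("customerId") Integer customerId) {
        boolean isFraudulentCustomer = fraudCheckService.isFraudulentCustomer(customerId);
        log.info("fraud check request for customer {}", customerId);
        return new FraudCheckResponse(isFraudulentCustomer);
    }
}
```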
Now we want this microservice, customer, to send a request to this other microservice, fraud, to check whether a particular customer being registered is a fraudster. There are multiple ways that we can achieve this: one is to use RestTemplate, which I'm going to show you first; then I'm going to show you how to add service discovery, so that we can eliminate the use of the ports; and then I'm going to show you how to use OpenFeign, which is my favorite way. Now let's go back to IntelliJ, and within our customer microservice open up CustomerService. In here, remember we had this method, registerCustomer, with a couple of to-dos: basically what we want is this to-do — check if fraudster — and also, another to-do here: I want to send a notification. Now, obviously, we now have the ability to check whether a customer is a fraudster, so in order for us to send an HTTP request to the fraud microservice, let's have a class in here called CustomerConfig. Annotate this with @Configuration; inside, let's have a public method that returns a RestTemplate, and in here I'm going to return new RestTemplate(). Obviously we could configure the RestTemplate, but at this point we're not going to, because I'm going to show you a better way of doing this. We do need to annotate this method with @Bean, and off we go. Now, within our service, we can inject this by just saying private final RestTemplate restTemplate, and there we go. Now, inside, on line 22, we can complete this to-do: here we're going to say restTemplate.getForObject, and we're going to pass a couple of things. One is the URL: this will be http://localhost:8081 — the port the fraud microservice is running on — then /api/v1/, and if you don't remember, inside of FraudController we have fraud-check, so /fraud-check, and also we
have to pass /{customerId}, with curly brackets, just like that. Then here I'm going to say comma, and we need to pass the response type. At this point, let's just copy FraudCheckResponse — I'll show you a better way of doing this later — so copy this class over, and then within our service we can say that this will be FraudCheckResponse.class, comma, and now we need to pass the id. Basically, you can see in here that we registered the customer afterwards, so let's change this for a second: what we want to do is save the customer at this point, and I'm going to say saveAndFlush, so that we have access to the customer id on the entity — if I don't saveAndFlush, the id will be null. So here we are saving and then flushing; if you want to learn more about the entity lifecycle, go ahead and check my Spring Data JPA course. So now: customer.getId(), and there we go. Now I'm going to extract this to a variable — this is the response right here — and I'm going to name it fraudCheckResponse. Then: if fraudCheckResponse.isFraudster() — so if it is a fraudster — we basically want to throw maybe a new IllegalStateException. Maybe you want to have your own exceptions, but here I'm just going to say IllegalStateException for now, with the message "fraudster" — you could have a better message, just like that. We have to check for NullPointerException as well, so you can assert in here. At this point we saveAndFlush the customer, and perhaps we could have some other logic just to make sure that the same customer isn't trying to register with the same email, and so on and so forth — but you can see why we have these checks. Also, I can see that I'm missing the two forward slashes: it's http://localhost:8081.
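The registration flow described above ends up looking roughly like this sketch inside CustomerService — assuming restTemplate and customerRepository are injected as discussed, and that Customer has a Lombok builder (your setup may differ; the exception handling is deliberately minimal):

```java
// Sketch of CustomerService.registerCustomer as walked through above.
public void registerCustomer(CustomerRegistrationRequest request) {
    Customer customer = Customer.builder()
            .firstName(request.firstName())
            .lastName(request.lastName())
            .email(request.email())
            .build();

    // saveAndFlush so the generated id is available on the entity immediately
    customerRepository.saveAndFlush(customer);

    // check if fraudster — URI template variable filled in by getForObject
    FraudCheckResponse fraudCheckResponse = restTemplate.getForObject(
            "http://localhost:8081/api/v1/fraud-check/{customerId}",
            FraudCheckResponse.class,
            customer.getId());

    if (fraudCheckResponse != null && fraudCheckResponse.isFraudster()) {
        throw new IllegalStateException("fraudster");
    }

    // todo: send notification
}
```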
So there we have it — next, let's go ahead and test these changes. Okie dokie, now let's start the fraud microservice, and let's also start the customer microservice. So I'm going to start both: fraud is up and running; now customer — give it a second — and there we go, they are both up and running. Now let me open up Postman again; I still have the Jamila payload, and I'm just going to send this request. Again, this is a POST to customer, localhost:8080/api/v1/customers. Send — and check this out: we have a 200 status code. At this point, let me just open up pgAdmin, and let's check the customer database as well as fraud, just to make sure that the request did work. Let me just refresh everything, and open up customer: right click, open the Query Tool, and say select * from customer. If you run this query, have a look — we have Jamila Ahmed in here, so this is cool. And by the way, if you have duplicates, that's fine, because I've deleted the previous record. Now let me open up the fraud database: right click, Query Tool on that as well, and here I'm going to say select * from — the table name was, within Schemas > Tables, fraud_check_history — so fraud_check_history. And if I run this query, check this out: it indeed worked. This was the timestamp, this is the customer id — you can see that this is 1, which corresponds to this one, which is Jamila right here — and you can see that Jamila is not a fraudster. So there you have it: we've managed to have our microservices talk to each other. Now, one thing that I want to do is just add some logging. Inside of FraudController I'm going to add @Slf4j, and then I'm going to say log.info, with the message "fraud check request for customer", and in here
I'm going to pass the customer id. So there we go — I just want some logging so that I don't have to check the databases that often. And let's make sure that we have one on customer as well: CustomerController — and indeed we have the logging in here. So far you've seen that customer talks to the fraud microservice on port 8081 via HTTP. Now, in here we only have one instance of customer and one instance of fraud — but what about if fraud gets too busy and we have to bring up a second instance? You can see that customer can talk to fraud via this port, 8081, but also via this other port, which is 8085. In order for this to be possible, customer would need to know all the existing ports for fraud. You can see that this is a problem: if fraud scales to, let's say, 10 instances, then customer needs to know about all of those 10 instances, i.e. all of their ports and where they are running. And this is where service registry comes to the rescue. In this course we're going to kick off with Eureka service discovery, but later, as we move to Kubernetes, you will see that we won't need Eureka server at all — still, it's good practice for you to fully understand how it works. Service discovery is the process of automatically detecting devices and services on a network. So on this side we have our microservices — because we are using Eureka, we're going to refer to them as the Eureka clients — and then we also have the server itself. The server is where the clients will register themselves; when they register, the server will know the exact information about where each service is running, i.e. the host as well as the port. Then the microservices, whenever they need to talk to another microservice, will look up the server as well, and this is how they can connect with each other. So the server knows all the client applications running, on each port and IP address. Now let me demonstrate to you how it works if we were to have two instances of fraud.
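To make the register/look-up flow concrete, here is a tiny plain-Java simulation of the idea — not Eureka itself, just the concept: instances register their host and port under a service name, and callers look a service up by name instead of hard-coding ports.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A toy in-memory service registry illustrating the Eureka idea:
// instances register under a service name; callers look up by name
// and get one instance back (here: simple round-robin rotation).
class ServiceRegistry {

    private final Map<String, List<String>> instances = new HashMap<>();
    private final Map<String, Integer> cursor = new HashMap<>();

    public void register(String serviceName, String hostAndPort) {
        instances.computeIfAbsent(serviceName, k -> new ArrayList<>()).add(hostAndPort);
    }

    public String lookup(String serviceName) {
        List<String> addresses = instances.get(serviceName);
        if (addresses == null || addresses.isEmpty()) {
            throw new IllegalStateException("no instances for " + serviceName);
        }
        int i = cursor.merge(serviceName, 1, Integer::sum) - 1;
        return addresses.get(i % addresses.size());
    }
}

public class RegistryDemo {
    public static void main(String[] args) {
        ServiceRegistry registry = new ServiceRegistry();
        registry.register("fraud", "localhost:8081");
        registry.register("fraud", "localhost:8085");

        // customer no longer hard-codes a port; it asks the registry instead
        System.out.println(registry.lookup("fraud"));
        System.out.println(registry.lookup("fraud"));
    }
}
```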
Before, I said that customer would need to know all the ports for fraud, i.e. 8081 as well as 8082 or 8085. Now, the first thing customer does is register itself as a client with the Eureka server; the same with fraud — this instance of fraud as well as this other instance. So now the server right here has information about the IP addresses as well as the ports for all microservices. Then, if customer wants to talk to fraud, let's say through HTTP, the first thing it does is send a service-discovery request — i.e. where does fraud live? The server in here will return the address for one of these instances — this one right here, or this one running on port 8082 — and then off goes the request to, for example, this instance of fraud. So this is how it works. Now, you can see that in here we have a single point of failure: this Eureka server — if for whatever reason it goes down, then none of these microservices can discover each other. This is why it's very important that we do our best to keep this server up and running at all costs — but later, as you'll see when we move to Kubernetes, we won't need this Eureka server, which will be one less pet for us to worry about. Next, let's go ahead and install the Spring Cloud dependency in order for us to set all of this up. Okie dokie — in this section let's learn about Spring Cloud Sleuth. Sleuth provides Spring Boot auto-configuration for distributed tracing: it configures everything you need to get started. This includes where trace data (spans) is reported, how many traces to keep (sampling), whether remote fields are sent, and which libraries are traced. It pretty much just adds trace and span ids to the SLF4J MDC — and MDC stands for Mapped Diagnostic Context; it provides a way for us to enrich the log data so that we can correlate it across multiple services. In here you can see: instruments common ingress and egress points from Spring applications — servlet filters, rest
template, scheduled actions, message channels, feign clients, and others. And if Zipkin is available, then it sends the traces via HTTP on this port right here, which Zipkin should be listening on — and we'll configure this in a second. Obviously you can have a look at the configuration, which I'm going to show you in a second, but to be honest this is a very simple example. What I want to do is: if you click on Learn in here, go to the reference docs, and you can pretty much just click on any of these links right here to learn more about it. If I click on Getting Started, what I want to show you is — have a look — the following image shows how span and trace look in a system. If I zoom in, there we go: in here we have a request, and at this point there is no trace nor span id. The span is the basic unit of work, and then we have the trace id — the id which is shared across the different services. So have a look: a request is sent, and right here there is no trace id nor span id; as it goes through service 1, have a look, the trace id becomes X — and throughout the entire system the trace id will stay exactly the same. The request goes from service 1 to 2 to 3 to 4 and all the way back, and if you look at the trace ids, they're all the exact same thing — what changes is the span id. As the request lands on service 1, the span id is A; then, as it transitions to service 2, it becomes B; then you can have a custom span id, C; then, as it goes through service 3, it becomes D in here; and you can see that you can also generate a custom span id — it goes all the way to F — and then on the way back, you can see that each span matches up from request to response. So let's go ahead together and add this library: let me close this and go back, and let's add Spring Cloud Sleuth to our microservices.
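The picture above — one trace id for the whole request, a new span id per hop — can be mimicked in a few lines of plain Java. This is just the concept, not Sleuth's actual API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

// Toy illustration of distributed tracing: the trace id is created once,
// at the edge, and propagated unchanged; each service adds its own span id.
public class TraceDemo {

    record Span(String traceId, String spanId, String service) {
    }

    static Span enter(String traceId, String service) {
        // every hop gets a fresh span id, but keeps the incoming trace id
        return new Span(traceId, UUID.randomUUID().toString().substring(0, 8), service);
    }

    public static void main(String[] args) {
        String traceId = UUID.randomUUID().toString(); // created on first ingress

        List<Span> spans = new ArrayList<>();
        for (String service : List.of("customer", "fraud", "notification")) {
            spans.add(enter(traceId, service));
        }

        // same trace id everywhere, different span id per service
        spans.forEach(s -> System.out.printf("trace=%s service=%s span=%s%n",
                s.traceId(), s.service(), s.spanId()));
    }
}
```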
All applications have been restarted; now let's open up Postman, and what I want to do is send a request to the customer microservice. There we go — status code 200. Now let's open up the Zipkin dashboard: in here, you can see that it says "find a trace", so pretty much just click Run Query, and have a look — we have a bunch of requests. One, a few seconds ago, is from the Eureka server — that's because we did install it — but the one which I want to show you is this one here: customer, right here, the one which has five spans. The duration was — this was actually quite slow; I think it's because on startup things take a long time. I'm going to send yet another request so we see some improvements, but in here, what we can do is expand this: you can see that this is the trace id, and these are all the microservices involved in this trace — we have customer, notification and fraud. So let's go ahead and click on Show, and check this out — this is kind of cool. You can see that the request started on customer, and this was the path — POST api/v1/customers — and this is the time that it took right here: 2.798, almost 3 seconds in total. But I think things are just a bit slow on startup. Then, from customer, we made a request to fraud — have a look: at this point, somewhere in customer, we made a request to fraud, and this was api/v1/fraud-check, passing the customer id; this was really quick. And then we sent a notification in here — so after the fraud check, we then sent a notification. We can click on an individual call in here and see some more information — have a look: the controller class, the MVC method, the client address, the same for the POST in here. And if I click on that, you can click on Show All Annotations, and then you have more information: the server start time and the finish time. So you can see that this is really cool; you can also download the JSON
in here if you need to. But have a look: in here we have the duration, three services involved, a depth of two, and a total of three spans with one trace id. Now what I want to do is send yet another request — I'm just going to keep things as is, Send — and this time you can see that it was much quicker. In here, let me click Find a Trace, Run Query — it was this one right here — and now it didn't take two seconds; this time it was super fast, almost 400 milliseconds. You can click on Show and see all the information. Now, you can see that the Eureka server is actually generating a lot of noise — let's click on Show, for example — and it depends whether you want this kind of information; if you need it, which I kind of doubt. One thing to note: for this Zipkin server, we're not storing the results to a database. Before, you saw that we could store the results, so that if this server crashes, we have the data intact — currently we're not doing that. Finally, if you click on Dependencies in here, let me just click on Search, and have a look: this is currently the flow for our microservices. Let me see if I can zoom in — no, zooming doesn't really help in here — but basically, there we go: you can see the arrow which goes all the way across to fraud, and basically customer interacts with fraud as well as notification. Now, if you have a bunch of microservices, you'll see all of these different arrows going in whatever direction they have to go. So this is pretty much distributed tracing — Zipkin and Spring Cloud Sleuth in action. Now, one thing which is kind of cool: if I click on Find a Trace in here — you can see that this is actually generating a lot of noise — if I go to IntelliJ and open up the logs, I basically want to grab this trace id, go back, search by trace id in here, paste that in, and in here you see that we don't have the time it took to perform the database query, right,
because we do write to a database from customer, fraud, as well as notification — we just didn't configure the instrumentation for database calls. But this is something that you can do, so that you have a better view of what happens in your backend. You can see that if, for example, this service right here, fraud, is taking, let's say, two seconds, then you know that there is something wrong in that microservice, and then you have to diagnose it. The same for notification: if notifications are taking a long time to be delivered, then you can basically use this to debug your applications. And finally — I forgot that you can actually search: in here, if I click on Find a Trace, I can click on plus, and have a look, service name — we've got four services currently — so I can click on, let's say, fraud, and you can even combine these; if I run the query, these are the results. So this is pretty much all for this section — if you have any questions, please do let me know; otherwise, let's move on. In this section, let's learn about load balancers. I'm going to show you how to configure one, how it works in detail, and also the best way of taking advantage of load balancers as you deploy your applications. So in a typical application you have the clients, this is the internet, and this right here — let's say that this is a VM somewhere on the cloud with this particular public IP address. Now, so far we've been creating microservices, and we have one which is customer — this handles customer registration. Now, this setup right here — where you've got one client, or two clients, or maybe three clients using one particular instance to register — is absolutely fine if you are working on a personal project, because the setup is really easy: all you have to do is spin up an instance somewhere on the cloud with a public IP address, and then you can access this instance. So let's say that your application is
becoming popular day by day and it's actually receiving more traffic so let's say that you went from one or three clients to 10 000 clients using your application the problem that you're gonna have is that one you either need one really big machine i.e one vm with 32 or 64 cores lots of ram and other high specs right so this might solve the problem temporarily but still doesn't scale if you go from 10 000 clients to 100 000 clients then basically you kind of have to do something else so usually what you do is you have multiple replicas of your application so in this case customer so here you can see that now i do have three instances so this is one instance another instance and then yet another instance now the problem here is that these clients right here that are connecting to your backend system don't know which vm to connect to so if they all connect to this one then this one will run out of resources leaving you with these other instances receiving no traffic or maybe you could actually implement some logic in here that has a list of all the ip addresses for all of your instances so on and so forth but no one does that so how are you meant to access your applications in this scenario well this is what load balancers solve so to fully understand this let's take the architecture that we currently have and let's introduce load balancers so you see how it works so in here this right here is my private network and what i have is an external load balancer so this external load balancer is the main entry point for any client that wants to connect to your applications so instead of clients going directly into customer or fraud or notification you say the main entry point is through this external load balancer any traffic that goes from the internet directly into these instances inside of the private network will be blocked by the firewall so a request comes in through the load balancer and the load balancer is actually quite smart so there's a
lot of things that you can configure within it but let's say that you want to send a request to customer so this request right here is sent to this other load balancer now this is an internal load balancer and the reason is because you might have multiple instances of customer or fraud or notification so the same problem that you are solving for the outside world connecting to your services is going to apply within the private network because these communications right here so this is an internal communication and this is a communication from service to service so you also need an internal load balancer now the configuration between this load balancer and this load balancer will be a little bit different but i'll explain more in detail in a second and to be honest this is load balancers now there are a lot of things that we have to take into account such as tls certificate management where do you authenticate the requests do you authenticate the requests within every single microservice or within this external load balancer do we terminate tls in here or at these services in here how do you make your load balancer highly available meaning that it's always up and running no matter what is it across regions if you're running your application multi-region or not logging caching path based routing so in here you see that instead of having multiple external load balancers for each of our applications we can just have one and then based on the path we can redirect the request to the appropriate internal load balancer now you see that this actually is quite a lot of work to do and most often when you run applications in production you should choose a managed load balancer instead of managing your own because this right here is the main entry point for your application so if this load balancer dies right so let's say you are managing the load balancer if it dies then you can see that this is a sev one incident immediately right so we actually let cloud providers manage
this for us so that we can just again focus on our microservices and our own business logic so let's take a look at the cloud load balancing provided by google cloud so in here you can see that they give you some information about what it is but in here let's just take a look have a look over 1 million queries per second so you can see that if we were to manage our own or set up our own then it can be quite challenging for us to reach these numbers seamless autoscaling so it can scale as your users and traffic grow including easily huge unexpected and instantaneous spikes so maybe there's an advert going on so you can expect you know a bunch of traffic coming your way so this scales out for you without you having to do anything internal load balancing so as i was saying for internal services so you can see support for cutting edge protocols http http 2 grpc https as well so you can see right here cloud logging tcp ssl ssl offload affinity cdn and the list goes on and on and on the same with aws so they provide either an alb or a gateway load balancer or a network load balancer and they give you some information how it works in here so the same thing as the one provided by google cloud and if i click on features for example have a look so these are all the features that it provides so redirects in here path based routing iam health checks ssl offloading logging so on and so forth http header based path routing so this is really important and one that we want and you can see all the protocols listeners the same with the targets whether you want to target an ip an instance or a lambda so this is a really cool feature for serverless applications so you can see the benefits of a managed load balancer now if we were to um oh actually i didn't even show you so ssl so certificate management so all of this is actually integrated now if we were to manage all of this ourselves you can see that it's quite challenging so also
nginx so this actually provides you a load balancer as well with some of the features that you've seen and you can read more about how to configure one with nginx if you want to learn how all of this stuff works but basically this is why you really should use a fully managed load balancer when running your applications in production next let's go ahead and learn how the load balancer works so far we have this implementation in here where we have the load balancer customer fraud and also notification now in here we have got no asynchronous communication within these microservices so customer to fraud and customer to notification now let's say that notification takes let's say 10 seconds to send a notification to our customers now this means that from the time that clients are using our application there will be a delay of 10 seconds in total plus some milliseconds and whatnot but in general 10 seconds now this is a very bad user experience so maybe twilio is having an incident or firebase is having an incident and they can't really deliver messages for now and it's taking them 10 seconds now you can see that we are depending on twilio and or firebase because this call right here from customer to notification doesn't have to be immediate so it can have a delay right so whether the customer receives the notification or sms after one second two seconds three seconds one minute it doesn't really matter what matters is from the point when the customer registers into our system in here customer talks to fraud so this call right here cannot be asynchronous because we need to check whether it's a fraudster or not if it's not then this call right here indeed can be asynchronous because we don't depend on any other checks to be performed currently this is a problem that we need to solve even more let's say that customer right here is sending lots of traffic and we only have one notification instance now what happens if notification gets too busy well the
notification won't be able to handle the requests coming from customer nor from fraud it's just going to be too much well you might be wondering okay maybe we can add a second instance so just like that well it is fine but again still a problem because you don't know whether this will be enough and the same if let's say that you kind of want to have 10 servers for notification now this is not using resources correctly because one it doesn't scale correctly and also things are not asynchronous and even worse let's say that we kind of need to perform an upgrade on notification so we found a bug and in the meantime a bunch of requests are coming through now if we get rid of notification then it means that the entire call from here to customer fraud and in here notification with zero instances will fail and at this point we already registered the customer and the only thing that we need to do is just to send the notification but if notification is not up and running then the customer will be created and the response that the client will get is a 500 and you can see that this is not feasible right so this is pretty much what a message queue allows us to deal with next let me show you the example where we slow down notification and see what happens within intellij i do have all of the microservices up and running so if i click on run you can see that i do have customer fraud as well as notification now let's send a request to the api gateway and before the request reaches notification let's stick a breakpoint in here so notification is running in debug mode and i've got this breakpoint in here so let's open up postman and in here we're going to send this payload to the api gateway which is listening on port 8083 and from here you can see that this is very quick so it takes about 941 milliseconds give or take right so this is when the request goes from one end to the other now if i send this request so this actually took me to this
breakpoint and now let me go back to postman and have a look this request right here is hanging now what i'm trying to simulate is the case where the request reached notification and by the way this is without the message queue right here so the request reached notification and now let's say that twilio is taking forever or firebase is taking forever to process our notification now you can see that postman is actually hanging in here so it's sending the request and it's waiting waiting and waiting forever now if i oops i was just going to basically remove the breakpoint but i can see that this probably was a timeout so before i send the request let me just go to notification and in here let me just resume this and eventually it sends the notification but let me go back yet again and let me send so let me just wait for a second and you'll see that so let's just say that twilio was having a bad day and they took some time to process the notification now it worked let's just go back to postman and have a look 18 seconds to process our request which is insane so you can see that we have a bottleneck in here now within zipkin you can see that in here i do have the calls so this one was the first one that took some time and then this one was the one that took 18 seconds so if i click on show in here now i want to check this out so the request from the api gateway took 18 seconds customer made a request to fraud and have a look fraud you can see this line right here which is barely visible so this was super quick so if i try and click on that and have a look fraud controller and if i show all annotations you can see more information but this was really quick so this call right here was very quick and the problem really lies within notification which is taking 18 seconds this line actually goes all the way through to the end so this is the bottleneck so you can see also why having zipkin is really important because we can visualize all of this stuff
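that 18 second wait is exactly the kind of blocking a message queue removes, and the idea can be sketched in plain java with a BlockingQueue so you can see the shape of it before we bring in rabbitmq or kafka (the class and method names here are made up for illustration, this is not the course code):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// toy sketch of the decoupling idea: the producer enqueues the notification
// and gets an acknowledgement straight away, while a consumer drains the
// queue at its own pace, so a slow twilio or firebase call no longer holds
// up the client's http request
class NotificationQueueDemo {
    static final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    // called by the customer service: constant time, returns an ack at once
    static String publish(String message) {
        queue.offer(message);
        return "ACCEPTED";
    }

    // runs inside the notification service: blocks until a message arrives
    static String consume() {
        try {
            return queue.take();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException(e);
        }
    }
}
```

with a real broker the queue lives outside both services of course, which is what lets notification be restarted or scaled independently while messages wait safely in the broker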
now this is when the message queue comes to the rescue so at this point if we stick the request inside of the queue it returns an acknowledgement back to the sender in this case customer and then if notification is taking a while then basically we don't have to wait for 18 seconds as you've seen right because we know that eventually the notification will be processed so this is why we need to use a message queue if you have any questions please do let me know otherwise let's move on so when it comes to decoupling microservices and providing asynchronicity between them as well as resilience there are a couple of big players in this field one of them is apache kafka now apache kafka is open source and i believe it was initially developed by the guys at linkedin and it's basically used by a lot of companies so it provides high throughput it's scalable one cool feature here is that it has permanent storage so you can store streams of data safely in a distributed durable fault tolerant cluster in here so something that rabbitmq for example does not provide it is also highly available so you can stretch clusters efficiently over availability zones or connect separate clusters across geographic regions now kafka on its own is a big big beast i have to admit but still i'm going to basically show you how you're going to use kafka with spring boot at a later stage then what we have is rabbitmq so rabbitmq is the most widely deployed open source message broker for this course we're going to begin with rabbitmq due to its simplicity of getting started understanding its concepts and the integration with spring boot will be really straightforward as you'll see we also have amazon sqs so amazon sqs stands for simple queue service it's a fully managed message queuing service that enables you to decouple and scale microservices distributed systems and serverless applications so if you are deploying your applications to aws then this is a great solution because it's fully managed now one
disadvantage of using sqs is that let's say that you want to port your microservice architecture to google cloud so recently there were a couple of regions of aws that were down for example now in this case if you are deploying to multiple clouds i.e you're deploying to aws as well as azure and maybe google cloud you know let's say that you've managed to deploy to all those three at once right so it's a very difficult job i'm not gonna lie but let's just say that you've managed to do it now if you are tied to sqs then you can't really move across to any other cloud provider because sqs is aws specific whereas maybe if you are running your own kafka cluster or rabbitmq then you have more flexibility in here so usually you'll see that you have teams that are dedicated to making sure that your cluster i.e you know either using kafka or rabbitmq that these are always up and running because again if they go down so in here so if you lose your message queue then you can see that again so if this goes then you have a big big problem but usually it shouldn't be a problem because when running kafka or rabbitmq you want to run in multi-az for example right multi availability zones so if one availability zone is down then no matter what you know things are still up and running so you'd never have one single message queue for all of your microservices when running in production and before i forget i'm going to leave a link where you can learn more about the differences between apache kafka rabbitmq and sqs so that you have a better understanding of these tools if you have any questions drop me a message otherwise let's move on so in order for you to understand what kafka is i would highly recommend you to go and enroll in this course right here where i teach you everything about microservices in terms of setting up everything from scratch microservice communication connecting to databases logging and a bunch more so basically everything in here that we are doing and
previously we actually did connect to rabbitmq in here so if you've missed that video as well i would highly recommend you to go and check it out now because we are learning about kafka in here we just have to replace this part right here because kafka can be also used as a message queue so let's dive into kafka to understand what it is kafka is a distributed streaming platform that allows producers to send streams of events right here to the broker and then consumers can actually receive those events and the way it works is through topics and i'm going to explain topics in a second but kafka is a distributed streaming platform now you might be wondering why distributed well you can imagine that you want your application to be resilient i.e you don't want just one node to act as your kafka broker and instead you can have two three four or thousands of nodes so these are brokers that basically run your kafka process so these could be cross region different data centers so on and so forth and with kafka you can build real-time event-driven applications such as for example location tracking so maybe if you order food from ubereats then when you see the driver updating their location on the app that is done through kafka kafka is fault tolerant so you run a bunch of these nodes across regions it can scale so if you basically need more brokers or if traffic increases kafka is highly scalable and as i said kafka should always run as a cluster now one difference here that you might be asking about okay so what's the difference between kafka and rabbitmq for example well with kafka when messages are sent to the topic they don't disappear so they can stay there for a couple of minutes a couple of hours a couple of days or even forever whereas in a traditional message queue the messages are gone as soon as they are consumed so we have producers and we have consumers on this side producers these could be microservices or web applications or anything really that wants to produce
messages or streams of events and then you have consumers so again this could be microservice to microservice communication or maybe you want to send logs as well or you want to build monitoring systems including dashboards or even do work with hadoop also right here on the side you can see that kafka has something called kafka connect so these are mainly for data integration you've got kafka connect source so this pulls data from your existing data sources into kafka and then you have kafka connect sink which takes data off kafka topics and then puts it into your data stores so the connect right here so these are java applications that run independently from the main process there's also something called kafka streams so this is a java api and it's mainly used for data processing and transformation you can enrich data you can transform data you can perform filtering grouping aggregation and a bunch more now here you can see that the arrows go both ways so as data comes to kafka basically streams performs all of this data processing and then once you're done you can actually put it back to a topic so that any other consumer that wants to consume that enriched data can do so now let's look at where these streams of events are put inside of kafka so with kafka we have this concept of a topic and if you've seen rabbitmq you've seen topics and different types of topics but in kafka it's a little bit different so a topic is a collection of events which is replicated and partitioned so if you can imagine so if you have messages going to your topic then if you have a bunch of nodes these are replicated and also partitioned now partitioned here means that if you don't have a node which is big enough to hold this topic right here the topic itself is actually partitioned across multiple nodes as i said it's durable so you can store for hours days years or even forever and there is nothing that says how big or small these topics can be so there's no limit so you have to decide what
you want to store and you can basically have really really big topics or tiny topics now in contrast to a message queue such as rabbitmq as i've mentioned a topic in here you can think of it as a log so if you've done any java using log4j storing logs to a log file this is the exact same concept right so data going into the log can be kept for hours days years or even forever and once data is read off a topic it's still there whereas with a message queue if for example this is read then it's gone but with kafka it's not the case meaning that you can then have a bunch of other consumers reading off the same topic so we have producers that stream records so these are the records in here and then we have consumers on the other side so in our example we are working with microservices so we'll have a microservice that produces and another one that consumes these records now let me quickly touch on partitioning so as i said there is nothing that says whether a topic can be small or big so there's no size limit for them really now you could have one node that is big enough to hold one topic right so if you have a massive topic you can have one giant node to hold that or instead what you could do is so these days as you know we can scale up or down because of the cloud and resources are cheap and we can have a node which is kind of a medium size right so in this case here let's say that from a giant node we go to three or four small nodes now because we have this topic and because it doesn't fit on these smaller nodes we have to partition it so here we have partition one partition two so on and so forth this is the main idea of kafka next let's go ahead and build a project with kafka so you see the integration and we're going to create the broker which will run locally we'll also configure a topic a producer and a consumer and then we'll have everything working together by sending and
receiving streams of events so we have everything configured and set up now let's take the jar and package it into a docker image from that docker image we can then run containers and you'll see the benefit of docker if it works on my machine then it surely works on your machine docker is a technology which has changed the game in terms of the way that we package software and later you'll see with kubernetes it's the perfect match if you want to learn more about docker i've got a complete course on docker explaining everything in detail but in this section don't worry because i want to simplify the process teach you the core concepts of docker and make sure that you have the required knowledge to take your jar and package it up into a docker image using jib so we're going to use jib and then from that we can run containers and the containers that we're going to run are within a kubernetes cluster so i'm super excited about this section let's kick off let's begin by understanding what exactly docker is docker is a platform for building running and shipping applications i think this bullet point really tells what docker is in one single sentence now bearing in mind that docker is a platform for building running and shipping applications developers can easily build and deploy applications running in containers now more in detail on containers later but you can think of a container as a running instance of our application packaged up and running within a container the cool thing with docker is that local development is the same across any environment i've had this scenario where it kind of works on your machine but it doesn't work on the development environment or staging environment demo environment production environment because of hardware issues installation issues configuration so on and so forth now docker removes all of that away if it works on your machine it will work in dev staging demo pre-prod prod and basically any environment docker is also used a lot for ci cd workflows
so continuous integration and delivery workflows so we're going to touch more on ci and cd workflows later in this course but just bear in mind that docker is used quite a lot in the devops space so let me teach you about docker images and containers and basically how everything fits together so how do we go from some code that we have to a running container now usually when building software you have your source code so this could be pretty much any code written in any programming language now what you as a developer do is you take that code and then you build a docker image now from this docker image you can run a container so from the docker image you can run a container the docker image you can think of it as the template for running your application now the cool thing is from one docker image you can actually run multiple containers so here for example you can see that the container is a java application or nginx or postgres or pretty much anything that you want literally any programming language any technology that you're used to you can pretty much just run it as a docker container so a docker image is basically a file used to execute code in a docker container so it's a set of instructions to build a docker container so you've seen that from our code we can build the docker image and from an image we can run containers right so this is the purpose of the docker image from it we can run multiple containers right so it contains the application code libraries tools and everything needed to run our applications you can think of a docker image as the blueprint and from the blueprint we can run multiple instances of the actual thing that we want which are containers so a container is an isolated environment for running applications so that's literally what it is so this is a container now this might not tell you exactly what the full picture is but let me just go inside and tell you what the container is so within a container it contains everything that your
application needs it contains the operating system it contains the tools and also most importantly it contains the software so here let's say that this is a spring boot application or it could be a javascript application a node application a golang a python application it could be literally any application that you have so it contains the operating system tools and binaries and software so when we ran the command docker run and then amigoscode forward slash 2048 what that gave us is the application right here so on the left hand side is the container and then on the right hand side is the actual application that the user interacts with in this section i want to focus on kafka and i want to build a little project using kafka so you see how it works but then towards the end i'll basically let you go off and explore how you're going to change the existing application which is using rabbitmq so that you can integrate with kafka now the reason why i want to teach you kafka is because you should know both and you'll see that because we are using spring boot then the integration with kafka is also really straightforward without further ado let's kick off this section let's go ahead and learn about kubernetes kubernetes originated from google where they've been deploying and working with containers for many years now and internally they have this tool called borg which was a very successful internal system that allowed google to deploy billions of containers every week from borg they then developed omega and from omega kubernetes was born now just to note that kubernetes is actually written from scratch so it shares the same dna as borg and omega and the difference here is that it's open source so everything that developers at google have learned over the years developing borg and omega they took all of that knowledge and mistakes along the way and they came up with kubernetes so kubernetes is open source which means that both you and i can contribute to this awesome
project fun fact kubernetes is actually written in golang which is an amazing language and if you want to learn about golang go ahead and check my website where i've got a course on golang now the easiest way for you to picture kubernetes is by looking at this picture so here you can see that we have this cargo ship and kubernetes is a greek word that means helmsman or pilot you can see that the ship has a bunch of containers and then kubernetes is actually managing all of these containers now if you know about containers and docker then this should be somewhat familiar to you and if you need to learn about docker and containers go ahead and check my website where i've got a course waiting for you on docker so let's dive a little bit deeper and understand kubernetes so kubernetes aka k8s where the 8 stands for the eight letters in between k and s is an application orchestrator so in this picture you can see that we have kubernetes and kubernetes orchestrates all of these applications when we are talking about applications we mainly refer to containers so kubernetes deploys and manages our containers it also scales up and down according to demand it performs zero downtime deployments rollbacks and much more so knowing kubernetes will set you apart from many software engineers because right now the demand for people that understand kubernetes is quite high and the salary as well is super high so knowing kubernetes will take you a long step in your career now when we talk about kubernetes we need to first understand what a cluster is so a cluster is a set of nodes where a node can be a vm or a virtual machine and these can be running on the cloud such as aws azure or google cloud or even on premises now when we talk about the kubernetes cluster we have to understand the difference between the master node and the worker node so the master node is simply the brains of the cluster so this is where all of the decisions are made and then the worker nodes this is
where all the heavy lifting work happens such as running your applications and both the master and the worker nodes communicate with each other via the kubelet which i'm going to teach you about in a second now within a cluster you often have more than one worker node and for this particular cluster we have four nodes in total one master node and three worker nodes now that you know what the kubernetes cluster consists of next let me go ahead and dive deep into both the master and worker nodes so in kubernetes a pod is the smallest deployable unit and not containers now to visualize what a pod is i've got this really nice diagram so this is a pod so within a pod you will always have one main container and the main container represents your application whether it's written in node.js javascript or golang whatever language that you want then you may or may not have init containers and more on init containers later but they are containers which are executed before the main container and then we might or might not have some sidecar containers so you could have one or two and also here whatever language that you want now sidecar containers are containers that support your main container so for example you might have a container which acts as a proxy to your main container also within pods we can have volumes and this is how containers share data between them now the way that these containers communicate with each other within a pod is using localhost and then whatever port they expose the pod itself has a unique ip address which means that if another pod wants to talk to this pod it uses the unique ip address so a pod is a group of one or more containers it represents a running process it shares the same network and volumes and one thing to bear in mind here is that you should never create pods on their own we should use controllers instead such as for example a deployment and we're going to learn about deployments jobs replica sets throughout this course but it's good for you to
understand the smallest deployable unit of kubernetes now the reason why i'm saying that you should never create pods on their own is because pods are ephemeral and disposable ephemeral means that they're not long lived therefore they are disposable and remember i said that kubernetes is an orchestrator and we have a bunch of controllers that monitor the state of our cluster so if a pod dies and you created that pod on its own then there is no way that kubernetes knows how to manage it and bring up another pod for you more on this later but remember i said the smallest deployable unit for docker is containers and for kubernetes it's pods next let's go ahead and learn how to create pods so to deploy customer within kubernetes let me go ahead and close this file and this file and within the minikube folder let's have a new folder and here i'm going to name this as services and within services let's have customer so this will be yet another folder and basically for each microservice we want to have a different folder with their yamls now in order for us to deploy customer we need the following so here i'm going to have a new file and let's start with the service so i'm going to say service dot and then yaml here i'm going to use auto completion okay and this is a service there we go and here the name will be customer and then the port will be 80 and we're going to say the target port is 8080 so again 80 here is so that we remove the need of specifying ports from our applications so if i open up clients have a look clients-kube.properties here i'm just saying customer so customer refers to the name of this service and the port is 80 if i had 8080 in here then this would need to be 8080 but we don't want ports so let's just get rid of that and leave this as 80.
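as a sketch the service described so far looks roughly like this (the selector label is an assumption on my part since the deployment's labels haven't been shown yet, in a real cluster it must match the pod labels the deployment creates):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: customer        # other services resolve this name, e.g. http://customer
spec:
  selector:
    app: customer       # assumed pod label; must match the deployment's pod template labels
  ports:
    - port: 80          # port the service exposes inside the cluster
      targetPort: 8080  # port the spring boot container actually listens on
```

mapping port 80 to targetPort 8080 is what lets clients call plain http://customer with no port in the url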
Now, for the type I want this to be a LoadBalancer, and I'll explain load balancers in a second, but effectively in here we are configuring the Service and I'm saying that this will be of type LoadBalancer. Really what we want is to have an Ingress in here, but we're not quite there yet, so for now let's just say LoadBalancer. If you're running Kubernetes in, for example, EKS, that will spin up an application load balancer for you. Again, we don't have this Ingress yet, but I'll touch on Ingresses in a second. All right, and this is pretty much everything we have to do for the Service.

Now let's create a new file in here, and I'm going to say deployment.yaml. Here I'm going to type "k" and choose Kubernetes Deployment. The name for this will be customer. Now, make sure that replicas is 1 for now; I'm going to explain why in a second, but for now just leave it as 1. Then for the image we're going to say amigoscode/customer, colon, and then latest; if you don't specify this tag, it will basically pull latest anyway. Now, for imagePullPolicy you can change this to Always or Never (actually, you don't want Never), so maybe you always want to pull the latest image, or IfNotPresent. In fact, let's just say Always; because we are testing things, it doesn't really make a difference.

Next we need to define the ports, and the container port for customer is 8080. There we go. Last but not least, I want to specify the environment. Remember, within our Docker Compose we have SPRING_PROFILES_ACTIVE equals docker; we want the same thing, but the value will be kube. So customer in here has this application-kube.yaml (oops, not there, but in here), so we want to select this file. To be honest, this is the configuration that we need. Obviously we can also configure resource limits and requests, as well as the liveness and readiness probes, but this is the bare minimum configuration that you need in order to have a Deployment with one replica up and running.

Now, the reason why I said replicas 1 in here is because if I open up application-kube.yaml, have a look: the Hibernate ddl-auto is create-drop. It means that if a second pod comes along, it will basically destroy the database. So this setting you only have for testing purposes; you could change it to, for example, update, so that it doesn't delete data. In my case I really don't care about the data, so I'm going to leave create-drop, and I'm going to leave replicas at 1 as well, because I don't think I have many resources: I've only got 16 gig, and I've got a lot of things already up and running on my machine. Usually you want at least three replicas so that your application stays up if one fails, but that's when you go to production; for testing you can leave it as 1, and this should be okay. And there we go, this is customer. Next, let's apply this YAML.

Okie dokie, I'm glad that you've made it this far, so go ahead and enroll in the entire course; the link will be in the description of this video. A 16-hour course packed with awesome content. If you're not part of the Amigoscode community, both Discord and the Facebook group, I would like to see you there. And that's all for now; I'll catch you in the next one.
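For reference, the deployment.yaml walked through above might look roughly like this (the selector/label names are assumptions; the image name, replicas, ports and environment variable come from the transcript):

```yaml
# services/customer/deployment.yaml — minimal sketch of the Deployment described above.
# Label names are assumed; they must match the Service's selector.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customer
spec:
  replicas: 1                        # keep at 1 while Hibernate ddl-auto is create-drop
  selector:
    matchLabels:
      app: customer
  template:
    metadata:
      labels:
        app: customer                # matched by the Service selector
    spec:
      containers:
        - name: customer
          image: amigoscode/customer:latest
          imagePullPolicy: Always    # fine for testing; re-pulls the image on every start
          ports:
            - containerPort: 8080
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: kube            # selects application-kube.yaml in the Spring Boot app
```

Both files can then be applied in one go with `kubectl apply -f services/customer/`, which creates the Service and the Deployment together.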
Info
Channel: Amigoscode
Views: 192,359
Keywords: amigoscode, java tutorial, microservices, kubernetes tutorial, docker tutorial, spring boot tutorial, service discovery in microservices, kafka tutorial, rabbitmq tutorial, rabbitmq vs kafka, spring data jpa tutorial, maven tutorial for beginners, intellij idea tutorial java, learn java programming for beginners, microservices architecture
Id: 1aWhYEynZQw
Length: 141min 36sec (8496 seconds)
Published: Thu Feb 17 2022