How To Structure Your Go App - Full Course [ Hex Arch + Tests ]

Captions
hey what's up everybody, and welcome to this video on applying the hexagonal (or ports and adapters) architecture to a gRPC API in Go. In this video we'll build a very bare-bones gRPC API application that applies the hex arch principles. We'll keep the complexity of this application to a minimum to help you focus on the architecture and gRPC, as opposed to details unrelated to the target concepts. Another reason for the reduced complexity is that I'd like to store the resulting application in a repo for you and others like you to use as a skeleton application to aid in getting started with hex arch, gRPC, and Go; this video should serve as a sort of tutorialized documentation for those who make use of the resulting application in their own projects. To get started with this tutorial, you should have a relatively decent grasp of the Go programming language. If at any point you feel my explanation of a concept is lacking, please leave a comment in the comment section below and I'll attempt to improve upon it in either another video or a response to your comment. In this video I'll be using the terms hex arch, hexagonal architecture, and ports and adapters architecture interchangeably, because they're basically the same thing; I'll try to limit myself to just one of them, but I can't make any promises, because I often find myself using whatever comes to mind. We'll start by going over what hex arch is and what the overall structure of an application that applies this architecture looks like. We'll then go over the principles we want to pay attention to when applying this pattern to our application, and then, with an understanding of what we're setting out to achieve, the structure of our application and its implementation of the architectural pattern. Finally we'll get into coding our project, and after coding we'll add both unit tests and end-to-end tests
to demonstrate the usefulness of this architectural pattern in practice. By the end of this tutorial you'll have an understanding of hex arch and how its loosely coupled components make adding tests, swapping out databases, or adding additional entry points like HTTP trivial. You should also come away with a general understanding of gRPC, Docker, and Docker Compose. There's a lot to learn in this video, so be sure to get some coffee, and without further ado, let's get started. So what is the hexagonal architecture? Well, let's start with a visualization using a diagram. The hexagonal architecture is an architectural pattern we can use when designing our application that aims to decouple application components, so that components can be easily connected to our application as well as swapped out for different components. In our diagram, connecting these components to our application would look something like this: for example, we could plug in an HTTP component to drive our application via HTTP, or we could plug in a gRPC component to drive the application through gRPC; we could even add a CLI component and drive the application that way. The point is that the components are decoupled such that the application depends neither on the components that drive it nor on the components that it drives or makes calls to, such as some form of storage like a database. Communication between these components and our application is achieved through ports and adapters. So what do I mean by this? In this diagram our application is the hexagon, and the components are the items labeled gRPC, HTTP, and DB. As you can see, our hexagon representation of the application actually has multiple layers as well, but we'll get to that in a second. The gRPC, HTTP, and DB components are what we'd actually call our adapters, so let's pull up another diagram to visualize what I mean. In this diagram our adapters are represented by these plug-like components, and these outlet-like
sections we would call our ports. Although the previous diagram doesn't show it visually, the same idea applies: each side of our hexagon represents a potential port to our application, and our components, our adapters, can connect to these ports to either drive or be driven by our application. And what do I mean by "adapters can either drive or be driven by our application"? This part's very important, so pay close attention. Our application can either be driven (for example, an HTTP adapter receives calls to our application, causing our application to do something), or our application can be what's doing the driving (for example, our application queries a database, which means our application is driving the database to do something). This means our adapters can be separated into two categories: driving adapters and driven adapters. These two categories are also sometimes called primary adapters and secondary adapters respectively, but they mean the same thing, so just pick a naming convention that you like and go with it. Now, when representing these adapters on a diagram, the driving adapters are usually placed on the left side of the hexagon and the driven adapters on the right side. This is why our DB adapter is on the right side and our HTTP adapter is on the left side of our diagram: the HTTP adapter drives the application, and the DB adapter is driven by the application. Now, as you can see, the hexagon representation of our application has three layers. The domain layer is the core of our application and will contain our domain logic, or business logic. The application layer will orchestrate the use of our domain code, as well as adapt requests from the framework layer to the domain layer by sitting between the two. And lastly, the framework layer will provide the logic for outside components, such as database or gRPC adapters, to interact with our application. And yes, these layers too are loosely coupled and communicate through ports and
adapters as well, which brings me to the first thing we want to pay attention to when applying this architecture to our application: the outside layers depend on the inside layers and not the other way around, or in other words, dependencies point inward. That means our domain layer cannot depend on our application layer for anything, and the application layer cannot depend on our framework layer. That brings me to the second thing we want to pay attention to when applying this architectural pattern: to communicate with the database from the application layer, we need to use dependency injection. This just means that instead of calling our framework layer to create an instance of our database from the application-layer code, we invert control of database instantiation to the program at startup and have the program inject that database instance into the application layer. I know that's a mouthful; this is probably the most confusing part of this tutorial, and we'll spend some more time going over it when we get to that part of writing our code. Now let's go over how the structure of our application will look with this pattern applied. Our project directory is going to look something like this. Now, don't get discouraged; you'll become quite familiar with this project structure throughout this tutorial. For now, you only need to focus on these key points. This adapters subdirectory will contain the code for all of our layers, and each layer has its own subdirectory: the application code will go here in this app directory, the core code in this core directory, and the framework code in this framework directory. Now remember, on our diagram we place driving adapters on the left and driven adapters on the right, so that is also reflected in the framework directory's subdirectories, left and right. So left contains our gRPC code as well as some HTTP code, but the HTTP code is really just there to emphasize that the left-side adapters are
driving adapters and that multiple adapters of different types can drive our application. And as you can see, right contains our db code. Then we have this ports directory, which will contain all of our ports organized by layer: any port we add to the left side of our framework layer will go in framework-left.go, any port we add to the right of our framework layer will go in framework-right.go, and the ports that connect our app's inner layers will go in their corresponding files; for example, the ports we add to the app layer will go in app.go, and so on and so forth. And lastly we have our main.go file. The main file is basically going to be the file that orchestrates the startup of the application: it's where we write the code to connect all of the ports and adapters and, most importantly, where we write the code to inject the dependencies into the layers that need them. For example, it'll contain the code that injects the database dependency into the application layer. So now that we know what the hexagonal architecture is, the rules we need to follow to implement it, and how our application structure will look once we have implemented it, let's get into writing some code. Okay, so to start we'll want to install Go and set up our Go workspace. You can install Go by going to golang.org and clicking this Download Go, and from here you can select your operating system; I'm using a Mac, so I would select this package, then download the package and follow the instructions for the installer. It's pretty self-explanatory. Once you've finished the installation, head into your terminal; if everything was installed correctly, you should be able to type go env and pipe the output into grep GOPATH, so we're piping the output from the go env command into grep and searching for GOPATH, and you should see your GOPATH here. For you it's not going to have youtube in it; it's just going to be your home
directory and then go. So for Mac it would say /Users/, your name, and then go, and this GOPATH is where we're going to want to set up our workspace. Although Go says your GOPATH is your home directory plus the go folder, that doesn't mean the Go installation actually created that directory structure for you, so you're going to need to create it on your own. You can do that by just running this command, and what this command does is make the directory structure for you: within your home directory it creates a go folder, and within that go folder it creates the subfolders bin, src, and pkg. Once you've created your Go workspace, your directory structure should look like this: within your go folder you should have three subfolders. We're going to change directory into src, because that's where we're going to be writing our code, and within src we're going to create another directory called hex, which is where our project is going to be stored. So we'll change directory into hex, and lastly we'll just run go mod init, which will create a new module for our current directory called hex. Once you've run go mod init, you should have a go.mod file here. Now let's create the rest of our project directory structure: we're going to want to make a folder called cmd and one called internal, so your directory should now have a cmd folder, an internal folder, and a go.mod file. The first file we're going to make is our main file in our cmd directory, so: vim cmd/main.go. Now, vim automatically sets the package and imports (the vim-go plugin does this for me), so it basically sets up the initial structure of the file, but you're going to have to type this out if you're not using a similar vim setup to mine. Let's go ahead and change this; we don't need that. As mentioned in our hex arch explanation, this main file
is going to be the file that's responsible for orchestrating the startup of our application, doing the dependency injections, and so on, but we don't have anything to set up here currently, so let's just save this file and leave it as is. We're going to start with making the port for our domain code. Just like in the project structure shown in the hex arch explanation, we're going to create a ports folder inside of our internal folder, and within that ports folder we're going to create core.go, which is where our core port will be. Again, vim is doing this automatically for me, but we're going to need to change the package to ports; we're not going to need an import for this, and we're also not going to need this function. Here we're just going to create the interface that our core adapter needs to implement. This application is simply going to do basic arithmetic, so the core logic is just going to return the results of adding, subtracting, and multiplying numbers; I'm trying to keep the functionality of this application as simple as possible so we can focus on the core concepts. We're going to create an interface called ArithmeticPort, and we'll have Addition, which takes in a, an int32, and b, also an int32, and returns an int32 and an error; and we're going to define the same signature for Subtraction, Multiplication, and Division. So basically what this interface is saying is that in order for our adapter to be eligible to connect to this arithmetic port, it needs to implement Addition, which takes in a and b, both int32s, and returns an int32 (the result of adding the two) and an error, and it needs to do the same for Subtraction, Multiplication, and Division. So that's going to be our arithmetic port, so we can go ahead and
save that. Once we've finished creating our arithmetic port, the ports subdirectory should look like this: within our internal folder we should have a ports folder containing core.go, which contains the ArithmeticPort we just created. So the core functionality of this application is going to be to perform basic arithmetic, and now we want to create the adapter that is compatible with our arithmetic port. Let's change directory into internal, and within our internal directory we're going to create another directory called adapters, so: mkdir adapters. Now we should have an adapters directory, and we can cd into adapters. Within adapters we want to create a core directory, and within that core directory we want to create an arithmetic directory; we can just run mkdir and pass the -p flag so it creates the intermediate directories and we don't need to create each directory individually: we write core, then our subdirectory within core, arithmetic, and hit enter. Then we can change directory into core/arithmetic, and within this arithmetic directory we want to create our arithmetic.go file. Okay, here's how we're going to create the adapters for all of our ports, so pay close attention. First let's just set our package, which is going to be called arithmetic, and we're going to create our own type called Adapter, which is going to be a struct. Basically, this Adapter type is part of our arithmetic package, so our arithmetic package will have its own Adapter type, and it's going to be the same for all of our adapters: they're all going to have this Adapter type. In Go, types can implement interfaces by implementing the methods of those interfaces, so if we go and have a look at our ports, we see that our arithmetic port is an interface that defines the method signatures for Addition, Subtraction, Multiplication, and Division. So in order for a type to implement the
arithmetic port interface, it needs to implement the methods defined by the interface. We want our arithmetic Adapter to implement the methods Addition, Subtraction, Multiplication, and Division, and to do that in Go we just create a new method and tie it to a type like this: func (arith Adapter) Addition(a int32, b int32) (int32, error), and the implementation is just going to be return a + b, nil. Here, Adapter is referencing the type we just created, and arith is just how we reference the type within the method; say, for example, this Adapter struct had a val field of type int, we could access that within this method by doing arith.val. So that's all this arith is, and this is referencing the Adapter type, so let's remove that. As you can see, our ArithmeticPort interface defines the method signatures for Addition, Subtraction, Multiplication, and Division, so when we implement these methods on our Adapter type, we need to make sure we follow the method signature: each method needs to take in two int32s and return an int32 and an error. But currently our Adapter type only implements Addition, so it's still not valid for implementing this interface; we also need to do Subtraction, Multiplication, and Division. Here we can just copy these (I think you get the idea), change the names to Subtraction, Multiplication, and Division, and of course change the bodies to subtract, multiply, and divide. So now our Adapter type is implementing the ArithmeticPort interface, and that's how we're going to make our ports and adapters. Now we need to create a function, NewAdapter, that creates one of these adapters; it's going to return a pointer to Adapter, and we're just going to return a pointer to an Adapter struct literal.
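Collapsed into one runnable file for illustration (in the project, the interface lives in internal/ports/core.go under package ports and the adapter in its own arithmetic package), the port and adapter described above look roughly like this; the division-by-zero guard is our own defensive addition, not shown in the video:

```go
package main

import (
	"errors"
	"fmt"
)

// ArithmeticPort is the core port: any type implementing these four
// methods is eligible to be plugged in as the core adapter.
type ArithmeticPort interface {
	Addition(a int32, b int32) (int32, error)
	Subtraction(a int32, b int32) (int32, error)
	Multiplication(a int32, b int32) (int32, error)
	Division(a int32, b int32) (int32, error)
}

// Adapter is the core adapter type from arithmetic.go.
type Adapter struct{}

// NewAdapter returns an instance of the adapter; the methods below
// come attached to it.
func NewAdapter() *Adapter {
	return &Adapter{}
}

func (arith Adapter) Addition(a int32, b int32) (int32, error) {
	return a + b, nil
}

func (arith Adapter) Subtraction(a int32, b int32) (int32, error) {
	return a - b, nil
}

func (arith Adapter) Multiplication(a int32, b int32) (int32, error) {
	return a * b, nil
}

func (arith Adapter) Division(a int32, b int32) (int32, error) {
	// Guarding against division by zero is our addition to the sketch.
	if b == 0 {
		return 0, errors.New("division by zero")
	}
	return a / b, nil
}

func main() {
	// Assigning the adapter to an interface-typed variable is how the
	// compiler enforces that *Adapter satisfies the port.
	var core ArithmeticPort = NewAdapter()
	answer, err := core.Addition(1, 3)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(answer) // 4
}
```

Because the methods use a value receiver, both Adapter and *Adapter satisfy the interface, so returning a pointer from NewAdapter works fine.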
Basically, this here is an instance of our Adapter struct, if you're thinking in object-oriented terms, and since all of these methods are tied to our Adapter struct type, when we return this instance of our Adapter, it's returned with all of those methods attached to it. In that case, we can now take that adapter and do something with it in our main.go file. So if we go back into main.go and import this arithmetic package (sorry, this should have internal in the path), then in main we set arithAdapter := arithmetic.NewAdapter(). This arithAdapter will have access to all of the methods, so we could call Addition and pass in one and three. Let's go ahead and put the result of this somewhere, and let's close this one so you can see; then we'll just handle the error: we'll import fmt and do fmt.Println on the error if we get one, and then fmt.Println on the result. If we save that (and we need to save this one as well), we can change directory back, and then we can just do go run cmd/main.go, and we'll get the result of the addition. Now, in this main file we're going to have a section called ports, and say, for example, we want our core to be the arithmetic package: we can say that the type for core is going to be ports.ArithmeticPort (we'll need to import ports). So that's how we'll have our port, and then we can plug into the port by doing core = arithmetic.NewAdapter(), a new adapter from our arithmetic package. That's basically how ports and adapters are going to work within this application. So let's go ahead and remove this stuff, this was just so you could see how everything's working, and we'll leave this here but comment it out so we don't get any errors, and we can save this again. So now that we've created our arithmetic port and arithmetic adapter, we are done with the core layer of our application and can move on to the application layer.
Okay, so now that we've created the port and adapter for our domain layer, or core layer, we can go ahead and move on to creating the port and adapter for our application layer. Within our ports directory we're just going to create an app.go file that will contain the ports for our application layer. So if we go into internal/ports, within this ports directory we can just create a new file, call it app.go, and enter the file. This file will also be part of the same package, ports, and once again we're going to create an interface, APIPort. This interface is going to define the method signature for GetAddition, which takes in a and b as parameters, both int32, and returns an int32 and an error, the same as our core method for Addition; and we'll create the same signatures for GetSubtraction, GetMultiplication, and GetDivision. Now, if you're used to or familiar with APIs within the context of REST implementations and you're confused about why we're calling this application port API: API just stands for application programming interface, and it's just an intermediary that allows two applications to communicate or interact with each other, so it provides an interface for another application to interact with the application that implements the interface. API is actually a pretty general term; most of the time when you hear about APIs, you hear about them within the context of REST, but APIs in a very general sense are just interfaces that allow outside applications to interact with the application that implements the interface. That's why we're calling this an API, because that's what it is. I hope that makes sense. So we can go ahead and save this, and we now have the port for our application layer. So now we need to create the adapter for the application layer.
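As a sketch, the application-layer port looks like this, shown as a runnable file with a throwaway stub type purely to demonstrate that something can satisfy it (in the project the interface lives in internal/ports/app.go under package ports, and the real adapter comes next; the stubAPI type is ours, not from the video):

```go
package main

import (
	"errors"
	"fmt"
)

// APIPort is the application-layer port: the interface through which
// driving adapters (gRPC, HTTP, CLI) will interact with the app.
type APIPort interface {
	GetAddition(a int32, b int32) (int32, error)
	GetSubtraction(a int32, b int32) (int32, error)
	GetMultiplication(a int32, b int32) (int32, error)
	GetDivision(a int32, b int32) (int32, error)
}

// stubAPI is a placeholder implementation used only to show a type
// satisfying the port; the real adapter delegates to the core layer.
type stubAPI struct{}

func (stubAPI) GetAddition(a, b int32) (int32, error)       { return a + b, nil }
func (stubAPI) GetSubtraction(a, b int32) (int32, error)    { return a - b, nil }
func (stubAPI) GetMultiplication(a, b int32) (int32, error) { return a * b, nil }
func (stubAPI) GetDivision(a, b int32) (int32, error) {
	if b == 0 {
		return 0, errors.New("division by zero")
	}
	return a / b, nil
}

func main() {
	// Any type satisfying APIPort can stand behind the port.
	var api APIPort = stubAPI{}
	res, _ := api.GetAddition(2, 2)
	fmt.Println(res) // 4
}
```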
What we want to do is once again go into our adapters directory, and just like we created the core directory for the core layer, we're going to create an app directory for the application layer. So let's just delete that and do app, and within that app directory we're going to have an api directory, which is going to contain api.go. So now we have this app directory with a subdirectory api containing api.go, and we'll enter this file. This package is just going to be called api, and in this package we're actually going to need to import our ports. The reason we need to import our ports is that the adapter for this particular layer is going to take in our core. And what do I mean by "it's going to take in our core"? Our api Adapter is going to have this arith field, which is going to contain our core adapter, and that's how our api layer is going to access the core of our application. This is going to be done through dependency injection, which we will go into more detail about later. The same as with our core adapter, we have to implement all the methods defined within our APIPort interface, so let's go back into our ports and open this (we can just close that), and as you can see, up here we have our method signatures for the APIPort. Back in our api.go file we need to create the methods that implement these method signatures. So we'll do func (apiA Adapter) GetAddition, with apiA short for "api adapter"; GetAddition takes in the parameters a and b, which are int32s, and returns an int32 and an error, just like the method signature. All this api method is going to do is call our arithmetic Addition method, passing in a and b. Now let me close this so you can see more clearly, because we need to go over what's happening here. All right, so our api adapter can access
this Addition method that's part of our arithmetic port because we have access to our arithmetic adapter here, and that access is going to happen via dependency injection. We're also going to need to create this function, NewAdapter, which takes in arith as a parameter of type ports.ArithmeticPort and returns a pointer to Adapter, this Adapter here, and we return a pointer to an Adapter struct literal containing arith, the arith that we inject into this new adapter. That means the adapter that gets returned, when we call this GetAddition method that's linked to it, will have access to that arith through this apiA variable we have here; from apiA we can access arith, and from arith we can access Addition. That's how our application layer is going to access the core layer's code. And the benefit of this is that if we make any changes to the core code, we don't have to worry about impacting anything in the application layer: as long as our core implements the methods defined within our ArithmeticPort, it doesn't matter how those methods are implemented, because the application layer doesn't care how they're implemented. That means the application layer and the core layer are completely decoupled. Then we would need to handle this error: if err is not nil, we return 0 and the error, but if everything's fine, we return the answer and nil. So we've implemented the GetAddition method, and now we need to do the same thing for division, multiplication, and subtraction: this one would be GetMultiplication, calling the Multiplication method instead of Addition, and we'll do the same for GetSubtraction, changing the call to Subtraction. Actually, let's keep the same ordering, so we'll move GetSubtraction up here.
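Here is a minimal runnable sketch of that dependency injection, trimmed to just the Addition path so the wiring stays visible; the coreAdapter type stands in for the arithmetic package's adapter, and the port interfaces are inlined rather than imported from a ports package:

```go
package main

import "fmt"

// ArithmeticPort is the core port (in the project it lives in
// internal/ports/core.go; trimmed here to the Addition method).
type ArithmeticPort interface {
	Addition(a int32, b int32) (int32, error)
}

// Adapter is the application-layer adapter; the arith field holds
// whatever core implementation gets injected.
type Adapter struct {
	arith ArithmeticPort
}

// NewAdapter receives the core adapter as a dependency: the caller
// (main.go at startup) decides which implementation to inject.
func NewAdapter(arith ArithmeticPort) *Adapter {
	return &Adapter{arith: arith}
}

// GetAddition delegates to whichever core was injected.
func (apiA Adapter) GetAddition(a int32, b int32) (int32, error) {
	answer, err := apiA.arith.Addition(a, b)
	if err != nil {
		return 0, err
	}
	return answer, nil
}

// coreAdapter is a stand-in for the arithmetic package's adapter.
type coreAdapter struct{}

func (coreAdapter) Addition(a int32, b int32) (int32, error) {
	return a + b, nil
}

func main() {
	// Inversion of control: main builds the core and injects it.
	api := NewAdapter(coreAdapter{})
	answer, _ := api.GetAddition(1, 3)
	fmt.Println(answer) // 4
}
```

Because NewAdapter accepts the interface rather than a concrete type, main.go can inject any core implementation, a mock in unit tests for instance, without the api package changing at all.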
Then GetMultiplication, then GetDivision, changing the call to Division. So now our api Adapter implements the APIPort interface, and we can go ahead and save this and quit. Let's clear this and see what our project directory is looking like: within our adapters directory we now have the subdirectory app and the subdirectory core. This is for the application layer, this is for the core layer, so now we need to create our framework layer, and for the framework layer we'll start with the database. Let's do the port first. The database is going to be a driven adapter, so it will connect on the framework right side, and we can just go here and create framework-right.go. Within framework-right.go it'll be part of the same package, ports, and it'll be an interface, DBPort. The methods are just going to be CloseDbConnection, which returns nothing, and AddToHistory, whose parameters are answer, an int32, and operation, a string, and which returns an error. All we're going to do with the database is, whenever we perform an arithmetic operation, store it in a history table within our database, and we're only going to store the answer and the operation (multiplication, subtraction, division, and so on). So it's just going to be a history table that has two columns, answer and operation; we might add a date column as well, but we're not going to need to pass the date into this method, we'll just add the date on the database end. That's going to be it for this port, so we can go ahead and save it. Now we need to create our database adapter. Within internal/adapters we need to create a framework folder, so: mkdir adapters/framework. Within that framework folder we're going to create a right folder; we'll also have a left folder, but we'll get to that folder when we add
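A sketch of the DBPort just described, with a throwaway in-memory implementation to show that anything satisfying the port can be plugged in, which is exactly what makes swapping databases trivial later (the method names follow the video; the memoryDB type is our own stand-in, not the MySQL adapter):

```go
package main

import "fmt"

// DBPort is the right-side (driven) framework port from
// internal/ports/framework-right.go.
type DBPort interface {
	CloseDbConnection()
	AddToHistory(answer int32, operation string) error
}

// memoryDB is a throwaway in-memory implementation of the port; a
// MySQL-backed adapter would satisfy the same interface.
type memoryDB struct {
	history []string
}

// CloseDbConnection is a no-op here; there is no real connection.
func (m *memoryDB) CloseDbConnection() {}

// AddToHistory records the operation and its answer.
func (m *memoryDB) AddToHistory(answer int32, operation string) error {
	m.history = append(m.history, fmt.Sprintf("%s=%d", operation, answer))
	return nil
}

func main() {
	var db DBPort = &memoryDB{}
	if err := db.AddToHistory(4, "addition"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(db.(*memoryDB).history) // [addition=4]
}
```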
gRPC. But for now we're just going to create the right folder, which is going to contain a db folder, and within that db folder we'll have our db.go file. So we have framework/right/db and db.go, and here we're going to need to do package db and make a couple of imports: database/sql, log, and time, and we'll also import github.com/Masterminds/squirrel, aliased as sq because squirrel's too long. We're going to use MySQL, so we'll just import the mysql driver as well. Then, just like with the other ones, we're going to create type Adapter struct, and this one's going to have a field db that points to an sql.DB. We're going to go over what all of this is doing in a second; let's just write it out first. Now, from the start, let's do our NewAdapter as well: it's going to take driverName and dataSourceName, both strings, and it will return a pointer to Adapter, and we're going to need to return an error in this one as well, because there's a chance that when we're building this adapter we get an error, and you'll see what I mean in a second. The first thing we're going to do when creating this new adapter is connect, so let's just add a comment there; to connect we just need to call sql.Open and pass in the driver name and the data source name. If we go to the definition of sql.Open, it returns a database handle, and in order to return that database it needs a driver name and a data source name. You'll see what the data source name and driver name actually are when we go back to our main.go file, but for now you just need to know that they're connection parameters to the database: it's basically going to have mysql as the engine, and then things like the login credentials and the host used to connect to the database. And we need to check
this error, and this is why we have to return an error from this NewAdapter function. If we get an error here, we're going to log.Fatalf, which will log the error and then call os.Exit, closing the application, because we need the database. We'll log "db connection failure" plus the error, so if we get an error when trying to open a database connection, we'll just log that and exit. We'll also test the connection once we've successfully made it: we can just do err = db.Ping() (this db is coming from here), and Ping just verifies that the connection to the database is still alive. Of course, if there's a problem with the ping we'll get an error, so if err is not nil, log.Fatalf "db ping failure" with the error as well. Then, if all goes well, we'll just return a pointer to an Adapter struct literal containing db, and we'll return nil as well. So first we'll attempt to connect to the database; if the database connection is successful, we'll ping the database; and if pinging the database is successful, we'll return the adapter containing our database connection. So now let's start to implement the methods defined by the DBPort interface. We can go have a look at what that is again: we need CloseDbConnection and AddToHistory. CloseDbConnection doesn't take in any parameters and doesn't return anything either, and AddToHistory takes answer and operation as parameters and returns a potential error. So we can close that, close that, and let's go ahead and do func (da Adapter) CloseDbConnection(), with da for "database adapter", tied to the Adapter type. We're just going to do err := da.db.Close(), and if we do get an error, we're going to log the error and close the application: log.Fatalf "db close failure" plus the error. The next method that we need to implement is AddToHistory.
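A runnable sketch of that connection logic follows. Unlike the video's version, it returns the error instead of calling log.Fatalf, so the failure path can be exercised without killing the process; the DSN shown is a placeholder, and since no MySQL driver is imported here, sql.Open will fail with an "unknown driver" error, which at least demonstrates the error handling:

```go
package main

import (
	"database/sql"
	"fmt"
)

// Adapter wraps the live database handle.
type Adapter struct {
	db *sql.DB
}

// NewAdapter opens and verifies a database connection. Returning the
// error (rather than log.Fatalf, as in the video) keeps this testable.
func NewAdapter(driverName, dataSourceName string) (*Adapter, error) {
	db, err := sql.Open(driverName, dataSourceName)
	if err != nil {
		return nil, fmt.Errorf("db connection failure: %v", err)
	}
	// Ping verifies the connection to the database is still alive.
	if err = db.Ping(); err != nil {
		return nil, fmt.Errorf("db ping failure: %v", err)
	}
	return &Adapter{db: db}, nil
}

// CloseDbConnection closes the underlying handle.
func (da Adapter) CloseDbConnection() {
	if err := da.db.Close(); err != nil {
		fmt.Printf("db close failure: %v\n", err)
	}
}

func main() {
	// With no MySQL driver registered in this sketch, sql.Open fails
	// with "sql: unknown driver", exercising the error path.
	if _, err := NewAdapter("mysql", "user:pass@tcp(localhost:3306)/hexdb"); err != nil {
		fmt.Println(err)
	}
}
```

In the real project, the blank import of the mysql driver package registers the "mysql" driver with database/sql, which is why the driver import appears even though nothing calls it directly.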
func da which is adapter again add to history and the parameters are going to be answer which is going to be int32 and operation which will be a string and we'll just return an error and here's where we're going to create a database query so basically to create the database query we're going to use this squirrel module which is just going to simplify writing the query string so let me show you what that'll look like we'll just do query string and then sq because we are importing this squirrel as sq and then it's going to be an insert and we're going to insert into a table that we're going to create called arith history we haven't created this table yet but we're going to and then the columns that we want to insert into are date answer and operation and it's getting quite long so let's just go to the next line and then we're going to do values which will be the values that we're going to pass into these columns and it's going to be in the same order as the columns that we just wrote out above so for the date we're going to use time dot now and if you're wondering where this time is coming from we just imported it up here then answer is going to be answer which will be passed to the method and then operation as well will be passed to the method and we also need to do to sql and if we get an error we're going to return the error now what this is doing is it's basically just creating a query string out of these parameters that we pass so this tosql builds the query into an sql string so we're going to get a query string here passed into this variable and then we're going to get the args passed into this variable and the args are just going to be the values and with that query string and args we're going to execute a database query so we'll do da.db dot exec we're going to pass in our query string and our args then if err not equal nil we'll return the error for that as well and if everything goes as expected we'll just return nil so as you can see this add to history method
has access to this db and this db is part of this adapter type and we're accessing this db by using this variable da and the reason we're using this underscore here as the variable is because if we go into this exec method we see that it returns a result and an error and if we go to this result here a result summarizes an executed sql command and we don't need to do anything with the summary or the result so we're just not going to assign it to a variable and if everything goes well we just want to return nil as our error meaning we don't have an error and let's see what this is saying so yeah it says squirrel is not in our go mod file because we didn't add it to our mod file yet so let's go ahead and save this and we are in our hex directory and we can just do go mod tidy and what go mod tidy is going to do is it's going to check our mod file and compare it to the imports that we're making within our application and if there are any packages in our mod file that are no longer in our application it'll remove them and if there are any packages in our application that aren't in our mod file it will add them so as you can see here it says finding module for this squirrel package and it also says finding module for mysql because none of these were in our mod file so it finds them and it adds them to our mod file so if we did go mod tidy again it wouldn't do anything because all the packages that we're importing within our application are in fact in our mod file as well so now that we've set up our database adapter we want to go ahead and give our application layer access to it so we're going to have to go back into our application layer's adapter file so we can go back into app and in api.go and just like we're using this arith field to access the core logic within our application layer we're going to do the same thing for db so we'll do db and it'll be ports.db port which means that whenever we create a new adapter for our api we're going to inject db into it
which is ports.db port and we'll also need to return it and now within our methods for our application layer after we perform the arithmetic operations we want to add the results of those operations to the database so now here we can just do error equals apia dot db dot add to history and we're going to pass in the answer and the name of the operation and this operation is addition and if error is not equal nil then we'll return zero and the error so just like we discussed in our hex arc explanation the application layer is going to be able to access both the core and the framework layer but since dependencies can only point inwards we're going to have to give the application layer access to our database through dependency injection and we're giving our application layer access to our core through dependency injection as well which is not necessarily a requirement we could import the core logic directly into our application layer but we're using dependency injection here because it works better with the flow of our application and we'll go into more details about how this all is working and how everything is orchestrated in our main.go file a little bit later but just know that we're just going to inject the database into our application at startup which is what we do here when we create a new adapter we're passing a database to the adapter so now whenever we make calls to our api and we call the get addition method it's going to do the addition operation and get the result and then it's going to add it to the history table within our database and then it's going to return the result so let's go ahead and add this portion of the code to the rest of our methods so here we'll do the same thing but instead of addition this one's going to be subtraction and we'll do the same thing for multiplication and we'll do the same thing for division and we can just save that and now we can start adding the folders for our grpc so we're going to go within framework and our grpc is
going to be a driving adapter so it's going to go in framework forward slash left as opposed to right so here we're just going to add left and within left we'll have grpc and within that grpc folder we're going to create a file called server.go and we'll also create another file and we'll call this one rpc.go and we'll create a subdirectory as well and we'll call it proto and let's go back to our ports and we can go ahead and add framework left.go which is going to contain our left side framework ports which our driving adapters will connect to so we can go into framework left and it's going to be package ports still and we're going to import a couple of things here so we'll import context and we're going to import more later but we'll get to that soon so now we'll do type grpc port interface and the method signatures are going to be run and then get addition and for now we're not going to add the return values and the parameters to these method signatures yet because we still need to implement another part of the application before we can add them so we'll just do get subtraction get multiplication and get division and let's just comment this out for now and we can save that and clear okay so we've arrived at the point in our application where we'll be making use of grpc so let's go ahead and take some time to understand what grpc actually is so what is grpc in order to understand what grpc is we need to understand what comprises grpc let's first start by understanding what rpc is rpc stands for remote procedure call and it basically just allows for an external computer to make calls to execute procedures in a different address space or on another computer these calls are coded as if it were a local procedure call without the programmer explicitly coding the details for the remote interaction so basically rpc allows you to execute the procedures of a program on another computer as if you are executing local procedures from within your own program now the
second thing that comprises grpc is its use of the http 2 protocol now details about the benefits of http 2 are out of the scope of this tutorial but for our purposes you only need to know that http 2 is binary instead of textual like http 1 this makes the transfer and parsing of data over http 2 inherently more machine friendly which makes it faster more efficient and less error prone the last thing that comprises grpc is its use of protocol buffers protocol buffers is a way to serialize structured data so what do i mean by structured data just think json or xml we need to structure our data so that it makes sense for our applications that structured data will at times need to be communicated between programs or stored in some form of storage protocol buffers or protobuf involves using a language to describe the structure of our data and a program that generates source code from that description the source code generated from that description is used to write and read that serialized data so how does all of this come together to create grpc well the protobuf method of serializing structured data results in the serialization of our structured data into binary which in turn allows for payloads containing the serialized data to be faster and more efficient this coupled with the http 2 protocol makes for a very fast and efficient way to transfer data and the request response protocol used by grpc is of course rpc or remote procedure call so with all of these things considered grpc is a faster and more efficient form of rpc now if you're feeling like you aren't understanding this completely don't get discouraged once we start writing our grpc code you'll start to understand how all of this is coming together and after we write that code i suggest you come back to this explanation and watch it again to get a full understanding of grpc okay so in order to get started with grpc we can make use of a quick start documentation page on the grpc.io website so at this location 
you can access this quick start documentation for grpc and this guide gets you started with grpc and go with a simple working example so as you can see here they have guides for various other languages as well but obviously since we're using go we're going to go with this quick start guide for go and basically we're going to need a couple of things to fulfill the prerequisites to be able to add grpc to our application so the first one we've already fulfilled we need a version of go which is any of the three latest major releases of go and the next thing that we're going to need is the protocol buffer compiler so let's start by installing this protocol buffer compiler by going to this link here and as you see here it says that the protocol buffer compiler protoc is used to compile dot proto files and we're going to create these dot proto files that are going to contain our service and our structured data and then we're going to use this protoc binary to generate the code that will allow us to read and write our serialized data and as you can see here we can choose to either install this binary by using a package manager or we can install the pre-compiled binaries and since this is a tutorial and i can't assume that you guys are using a mac and have homebrew installed or if you're using linux we're going to install the pre-compiled binary so to install the latest release of the protocol compiler we just need to follow these instructions so first we manually download the zip file corresponding to our operating system so we can go ahead and access this link and we need to find the zip file for our operating system and this is going to be different depending on your own operating system but since i'm on a mac and if you're on a mac you'll be using this one here the os x one so we can go ahead and just download this and i'm just going to download it into our project directory for now and once you've downloaded the zip file you should be able to ls the directory that you
downloaded it into and then you can just unzip the file by using this unzip command and then we can go ahead and unzip it and as you can see when we unzipped this file it created this include directory and it also created this bin directory so if we go ahead and ls we'll see the contents that were extracted from the zip file it's this readme this include directory and this bin directory and all we're interested in really is this bin directory but just so you can understand why let's go ahead and vim into the readme.txt so it says here that the package contains the precompiled binary version of the protocol buffer compiler and it says down here if you intend to use the included well-known types then don't forget to copy the contents of the include directory somewhere as well but we're not going to use these well-known types and also it says to install simply place this binary somewhere in your path so that's what we're going to need to do we're going to move the binary to somewhere in our path so that when we run the protoc command the binary can be found within our path and we're just going to remove this include directory so we'll just quit this we can remove readme.txt because we don't need it and then we can remove rf include and let's ls bin and we have our protoc binary there so if you're unfamiliar with how the path variable works you can just echo path and it's going to show you all of the directories that are within your path separated by colons so this is a directory that's in my path and it's separated by a colon and then after the colon is another directory that's in my path so if we move the binary to any one of these directories that's in your path then it'll be accessible when you just write the command into your terminal but if the binary is not accessible within one of your path directories then the command won't be accessible so it's pretty common to just use user local bin and most likely if you're on a mac this user local bin is going to be a directory
that's already within your path but if it's not you can always add another directory to your path but unfortunately that's out of the scope of this video you can easily just google how to add directories to your path so i'm going to move this protoc binary to user local bin and this user local bin is already in my path so it'll be accessible the command will be accessible to me if i move this protoc binary there oh i'm sorry about that protoc is actually in this bin folder and i already have protoc installed and i already have the binary in my path so it's asking me if i want to override the original binary and i'll just go ahead and override it maybe it'll update so once you move the protoc binary to a directory that's within your path you should be able to do the protoc command and hit help and my mac's security features aren't going to allow me to open this yet until i configure some settings so just give me one second so now i should be able to run this and i'll just hit open anyway and as you can see we're able to use this protoc command on our command line and let's just ls and we can rm this bin rm rf bin and we also don't need this protoc zip anymore so we now have protoc installed so now let's move on to the next step in our quick start guide and as you can see here it's telling you how to update your environment's path variable to include the path to the protoc executable but even after you append a new path to this path variable if you exit your shell and you re-enter your shell this path variable is going to need to be exported every time so what you're going to want to do is add this to something like your bash profile but this is all depending on what shell you're using i use z shell so i would add it to a file called zshrc so like i said it shouldn't be too difficult for you to figure out how to update your path so once we have protoc installed we can go back to our prerequisites and we're also going to need to install the go plug-ins for the
protoc compiler and what these plugins are going to allow us to do is they're going to allow us to generate go code from our dot proto files which is going to be the files that we use to define our structured data so if we go ahead and just copy this and then just go to the link we'll see that the protoc-gen-go binary is a protoc plugin to generate go code for both proto2 and proto3 versions of the protocol buffer language so if you remember from our explanation we talked about protobuf having a language to define structured data and this protocol buffer language is the language that we're going to use to describe our structured data and when we describe that structured data we're going to write it in a file with the extension dot proto so it would be like structured data dot proto and the file with this extension is going to contain our structured data written in the protobuf language and protoc is going to generate go code based on this file so we need this plugin because protoc doesn't only work for go it works for other languages too so we need the go plugin for it to generate go code and this one is the same thing it's going to generate go code but this one is specific to services and you'll see what i mean by that soon so we can go ahead and just take this and copy it and we can just paste it in and run it and then this is again saying that we need to update our path but this is just telling us to add our go path the binary folder within our go path to our path which we should already have in our path from our go installation so you can echo path and you can see that i have here users giorgio go bin and this users giorgio go is my go path although i'm using a different go path for this tutorial in particular i'm using this go path but this is just because i'm trying to keep everything from this tutorial isolated within a separate environment but for you it's going to be your go path and then bin you're going to need to add that to your path and you can see your go path by
doing go env and then you can just grep go path like we did when we did the installation and you'll see what your go path is when you do that and i'm really sorry if this is getting confusing the whole go path thing is kind of annoying to be honest so the rest of this is actually going into working with a real world example which we don't need to do we just needed it to install the prerequisites because within our own application we're going to make use of the protocol buffer compiler and the go plugins anyway so let's clear this and i feel like i may have confused you all with the whole go path situation with our go path for this project being this but my actual go path being this so let me just clarify really quick for you the project that you're following along with right now should be in this go path so if you do go env grep go path the project that you're currently working on should be in the path that gets printed to the screen here so you should be currently working in your go path and then a source directory and then a hex directory but for me my actual go path is this but for this project i exported a new go path and it is this but you don't need to worry about that because that's just going to confuse you just focus on your go path the project that you're working on right now should be in this go path in these sub directories so you should currently be working in go path forward slash source forward slash hex that's where you should currently be working in so now that all that confusion is done let's get into writing our proto files okay so if you remember in our grpc directory we have a folder called proto and within this proto folder is where we're going to create our dot proto files and we'll start with creating our structured data and we'll call this file number message.proto and you'll see why it's called number message soon so we'll hit enter and we'll go into this number message file so within this file we're no longer
writing go code at this point we're using the protobuf language to create the structure of our data and at the top of this protobuf file we need to put which protobuf language we're using so if you remember there are currently two versions of the protobuf language and we're going to use this one here proto3 and we need to explicitly say that we're going to use the proto3 version so we would just type in syntax equals proto3 and we have to have a semicolon and our package name is going to be pb for protobuf and for go we need to add this option go package and we need to set it equal to the folder that we want our generated code to go into so what do i mean by that so when we use protoc to generate the go code based on this file that go code that gets generated is going to get generated into a go package within our application and that package is going to be called pb so we can import that generated code into different parts of our application so that package is going to be called pb and this is where we need to specify the name of the directory that this package should be placed into and this go package is actually going to be relative to the output location that we set when we run the protoc command so for example when we run protoc the command in our terminal we're going to pass a flag called go out and the flag will have plugins grpc and then it will have a directory and this here is going to be relative to this directory and don't worry if this is confusing right now because you're going to see what i mean when we actually generate the go code so let's just remove that and go on to creating the rest of our structured data so our application performs basic arithmetic operations right so our structured data is going to be very simple we're going to create a message and it's going to be called answer and this answer is going to be an int32 and it's going to just have a value key and it's going to be at position 1.
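putting the pieces just described together, the start of the number message proto file should look roughly like this sketch (the syntax line, the pb package, the go package option and the answer message with its field at position 1 are all as described above):

```proto
// number_message.proto — structured data for the arithmetic service
syntax = "proto3";

package pb;

// relative to the output path passed to protoc
option go_package = "pb";

// the answer to an arithmetic operation
message Answer {
  int32 value = 1;
}
```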
so this answer is going to be the answer to our arithmetic operation so this is the structure of the answer to our arithmetic operation and this one here is used to identify your fields in the message binary format and should not be changed once your message type is in use so basically this means that when our structured data is converted to its binary format each field within the structured data is identified by these unique numbers so that means that we can't have two fields with the number one as its field number and if you're interested in deep diving into the protocol buffer encoding you can go ahead and access this link here and then access this link here to read up more about it but for our purposes you only need to know that these numbers are used to identify the fields in their binary format because when we're using protobuf we're working with structured data that's converted to and from binary so this is how we are creating our structured data so this is our answer and we'll create another message and it's going to be operation parameters and we'll have an int32 it's going to be a and we'll have one as its field number and then int32 b and we'll use two as its field number now operation parameters is just going to be the two numbers that we're performing the operation on so if we pass operation parameters to an add method it's going to add a and b and that's going to be it for our number message.proto so now we need to create the proto file for our actual service so within our proto folder we'll create a file called arithmetic service dot proto and we'll go within this file as well and we need to set the syntax equal to proto3 and it's going to be part of the same package pb and within this package we're going to import our number message.proto because we're going to be performing the arithmetic operations on the structured data within our number message proto file and again we're still going to have to do this go package equals pb and now instead of
creating a message we're going to create a service it's going to be called arithmetic service now if you remember we had to install two plugins for our protoc binary to make use of to generate go code and one of them was this one the one that i said is generating code specific to services so we're defining our structured data within a dot proto file and our structured data is our answer and our operation parameters messages so this plugin is used to generate the code for our structured data and then we're also defining a service which is our grpc service and this plugin is responsible for generating go code for our services so it's good to know that this import here numbermessage.proto the messages are structured data and this is a service it's not a message so it's not structured data it's actually a service it's an rpc service more specifically so what we're going to do is we're going to create rpc endpoints get addition and get addition is going to take in operation parameters as its parameter and it's going to return an answer makes sense right and this service is going to have an rpc endpoint for subtraction multiplication and division as well and actually this should be capital a and this is how we are going to define our service and protoc is going to generate go code for this grpc service and that's going to be it for that one so we can just save so let's clear this and let's ls just to make sure we're in the right directory so now we're going to actually generate our go code based on those proto files using our protoc binary and this command is going to be quite long so i'll try to explain what's happening here in just a second so we're going to do protoc and we're going to do go out equals plugins equals grpc internal adapters framework left grpc so that's the first flag and then we'll do proto path equals internal adapters framework left grpc proto so before we go any further let's go over what both of these flags are doing so this first one here is
going to be where the generated code is outputted to so this is going to be the location of the resulting package and when we added that go package option within our dot proto files that directory is going to be relative to this path so actually let's just go ahead and go back in there and have a look at what i'm talking about so this here this option go package equals this folder here this folder is going to be created relative to that path in that first flag so this forward slash pb is going to be appended to this path so our resulting generated code is going to be in internal adapters framework left grpc and it's going to be a folder that's created in there called pb and that's going to be our package which is a result of our generated code from our proto files and this one here this proto path is the path of our proto files for imports so if we go back in you can see in this service here we're importing number message.proto but as you can see we have no path here we just put the file name right so how does protobuf know where to import this file from it knows because it's expected to be within this proto path and as you can see internal adapters framework left grpc our proto folder is where we put that file right so it can import that file without us specifying explicitly what the path is because it assumes that it's in this path this proto path so let's write that out again so we're going to run protoc with those two flags and we're going to run it on our dot proto files because as you can see here when i pressed enter before finishing the command it says missing input file the input file to this command is our dot proto files so we're going to do internal adapters framework left grpc proto and we're going to do all the dot proto files within this directory then we'll run it so we actually got an error so it says go out protoc-gen-go plugins are not supported use protoc-gen-go-grpc to generate grpc so maybe they changed something so let's see let's just try and
use what they said to use actually let's do protoc go grpc out okay sorry about that so apparently the plugins that we're trying to use are no longer supported so what we actually need to do is we don't need to install anything additional we're just going to do two separate protoc commands for both our message file and our service file so the first command is this one and this one's going to be specifically for our message file we're going to use this go out flag and the code generated from our message file our structured data is going to be output to the pb folder in this grpc directory as you can see we're only doing numbermessage.proto here so let's go ahead and hit enter there and now we need to generate the code for our service so for that one we're going to use a different command we're going to use this command and as you can see in this command we're using go grpc out to specify that it's a grpc service and it's going to be output to the same directory in the same folder because they're part of the same package and as you can see i'm specifically generating code for this arithmetic service.proto file here so to make that more clear before i run this command let's just run tree on our framework directory and you can see the pb folder has been created from running that first command on our number message proto file and the code that was generated from this number message proto file is this number message pb.go file so now we're going to run the second command on this file and it's going to generate another file within this pb folder for our service so let's go ahead and run this command and we can tree that directory again and now you see that within this pb folder we have this arithmetic service grpc pb.go file and let's just go ahead and have a look at those so we have this newly created pb directory and let's go into the service one and as you can see in this comment it says code generated by protoc and we're getting this error here because we haven't yet
downloaded this grpc package so it's not in our go mod file but regardless you can see that we have the interface for our arithmetic service client which is going to be the most important part of this generated code for our grpc api because our grpc adapter is going to need to implement this interface which means that our grpc adapter is going to have to implement these methods and you can go through this code if you want but you don't necessarily need to know what any of this stuff is really doing aside from managing our service and let's also have a look at the code generated for our structured data so as you can see here we have the struct that was created for our answer and it contains a value which is an int32 as a field and let's search for operation parameters and you see we have the struct for our operation parameters as well with the two fields a and b as int32s now the code generated for these structs is actually a bit more important to understand because we're actually going to be using these structs within our code to write our adapter for our grpc so we're going to be importing this pb package and we're going to be using the structs that were generated within this file but for the most part for the service we only need to worry about implementing the interface in the code for our grpc adapter okay so we can now go back into our port framework left and now we can start finishing up our method signatures here so run is going to stay as is but we're going to need to import our pb package forgot to put left here and now that we've imported our pb package we can make use of the structs that were generated from our structured data in our dot proto files so in get addition we're going to pass in context and we're going to pass in the request and the request is going to be pb.operation parameters and if you remember operation parameters are just values a and b the values that will be operated on and then we're going to return pb.answer and
error and this is going to be the method signature for get addition and the signatures are actually going to be the same for the rest of these so i'm just going to go ahead and remove these and just do it that way so the signatures are the same for all of them pretty much aside from the name of the method we're going to return pb.answer and error and we're going to pass in as parameters context and the request which is pb.operation parameters and of course the answer is just going to be one value which is the answer of the arithmetic operation so our grpc port is finished so we can just save that and we can go back into our grpc directory and we're going to create our adapter for our grpc port and we'll start with our server.go file so we'll start with package and the package is just going to be called rpc then we're going to do some imports here we'll do log net and let's import our pb package and we're also going to import our ports and lastly we're going to import google's grpc package and really quick let's go ahead and save this and then just do go mod tidy to update our go mod file and let's go back into our adapter and now we can create our type adapter and our grpc adapter is going to take in an api port and we're going to return an adapter a pointer to an adapter and here we'll do an adapter struct literal and within this server file we're only going to implement the run method from our port and then in another file called rpc we're going to add all of the grpc endpoints and the run method is going to be what starts our grpc service so we're going to do func grpca for grpc adapter and that's going to be of type adapter and then run and we'll start by just having an error variable and then we need to listen using net.listen and this net is imported up here and it's going to be what enables us to listen to tcp connections on a specific port so net.listen if we go to the definition it takes in a network and an address and we're going to pass in the string tcp for
our network and the port 9000 for our address and if error not equal nil we're going to log.fatal failed to listen on port 9000 and then we'll throw the error in there as well and then we'll do arithmetic service server equals grpca and if you remember grpca is of type adapter and type adapter is going to implement all of the methods of our adapter port and the methods of our adapter port are the same methods that need to be passed into our grpc service as the server so you'll see what i mean right now so if we do grpc server equals grpc.new server and we're going to do pb register arithmetic service server and this is coming from our pb file this is part of the generated code and as you can see here this register arithmetic service server it needs to take in an arithmetic service server and if we go to definition of arithmetic service server it's going to have all of the methods that are defined for our grpc port as well so if we pass a grpc adapter that implements our grpc port to this function this function here it will be eligible to be passed this function as arithmetic service server because it implements all of the methods that are necessary for it to be considered this arithmetic service server so we're going to pass in our grpc server which is this here and this is from this function here that's part of this grpc package so new server creates a grpc server which has no service registered and has not started to accept requests yet so this is basically just an empty grpc server and this register arithmetic service server we pass in an empty grpc server and then we pass in our arithmetic service and it registers this arithmetic service to this empty grpc server and once it's registered we can do if air equals and we're going to take our grpc server and then we're going to use this serve method and if we go to this serve method we see that serv accepts incoming connections on the listener list creating a new server transport and service go routine for each the 
service go routines read grpc requests and then call the registered handlers to reply to them and the registered handlers are the methods that we implemented on this arithmetic service server so we're going to serve this on our listen that we created up here so we're going to serve this on port 9000 so here we're registering our service to an empty grpc server and then here we're serving that grpc server here where we created this listener i hope that makes sense and then error not equal nil blog.fatal failed to serve ger pc server over port 9000 then we'll throw the error in there as well and if you're not familiar with this syntax here this serve only returns an error so we're setting this error variable equal to this serve and then in the same line we're checking the result of that error and if that error is not nil then we're going to log fatal f but if the error is nil then the server is being served and let's see what's happening here we need to put the comma and same with here and we're actually getting an error here missing method git addition that's strange so let's go oh actually that's not strange because we haven't implemented the method signatures defined by our port for this adapter yet so let's go ahead and do that now so we're going to save this server file and we're going to go into the rpc file which is going to be part of the same package but just a different file just to separate the server startup code from the actual endpoints now here we're going to do package rpc again and we're going to do some imports here as well do pv and we're going to do context and actually let's move context up here and then let's do google dot bowling forward slash grpc codes and we're going to do the same thing but status and now we can just start adding our methods we'll do funk grpca adapter git edition and we'll take in context and request db to operation parameters and we'll return a pointer to pb.answer and error we'll start with creating this error variable 
Then we set answer equal to a pointer to a pb.Answer struct literal, and next we need to validate our request by checking that values A and B are not equal to zero. A quick note about this arithmetic application: it's not going to do arithmetic on zero values. We'll use this GetA method; if we go to the definition, you can see that within our generated code there's a GetA method that returns the value of field A, and if the receiver is nil it actually returns zero, which is why we're not going to do operations on zero values, because I don't feel like trying to figure out a way to validate zero values with this generated code. So, back in: if GetA() equals zero, or if req.GetB() equals zero, then we return an empty answer along with status.Error(codes.InvalidArgument, ...) and "missing required" as the message. This codes package is what we imported up here, and the same with status; they're basically packages for handling responses in gRPC. We don't need to do anything else with our errors; we just need to return what type of error it is, in this case an InvalidArgument, though we could also return the other codes they have available, like AlreadyExists, DataLoss, OutOfRange, PermissionDenied. Other than that, we just return the empty answer (the empty struct literal), whatever type of code it is, and a string. Then we do answer, err = grpca.api.GetAddition with req.A and req.B. So now we're calling our application layer, and as you can see, our gRPC adapter doesn't need to do anything with our core, and it doesn't need to do anything with the database; all it needs to do is call the application layer, and everything else is handled. It doesn't have to know about anything else. If err is not nil, then again we return an empty answer with status.Error, and this one will be codes.Internal with "unexpected error" as the message; the reason this is an internal error is that if we get an error here, something failed when we called this GetAddition method. Then lastly we do answer = a pb.Answer struct literal again, but this time we actually set the Value field, and the value is going to be answer. If everything is okay, we return answer and nil. That's how each method is going to be implemented for our gRPC adapter, so for the rest of them we can just copy this one and change GetAddition to GetSubtraction in both places, and do the same for GetMultiplication and GetDivision. So now we've implemented all of the methods for our port: GetAddition, GetSubtraction, GetMultiplication, and GetDivision. We can save that, clear this, and really quickly go back into our server file to see if we still get that error. Now it's saying "missing method... must embed UnimplementedArithmeticServiceServer", and this is because of a recent change to gRPC. So let's save this, tree our framework directory, and what we're going to do is remove this pb folder and regenerate the code using our protoc command, but this time with an additional option to remove that requirement: rm -rf internal/adapters/framework/left/grpc/pb, then the same protoc command as before with our --go-grpc_out output path for the package, but now with require_unimplemented_servers=false added on to it.
We still have our output path to where we want the package generated, and we still have our --proto_path flag, and we run this on our arithmetic service proto file because this is the --go-grpc_out command. We hit enter on that, and the protoc command for our structured data is going to be the same: for numbermessage.proto we use the --go_out flag with the same directory our service's generated code went to, and hit enter again. Then we can tree, and we see we have our generated code once again. Now we can go back into our server file, and you see we no longer get that error. We can quit that, clear, and from there move on to writing tests so we can test our gRPC method functionality and see if everything's working as expected. But before we start adding tests, let's take some time to bring everything together in our main.go file. Within this file we're now also going to need to import our application layer; we'll also want to bring in the database portion of our framework layer, and lastly we'll bring in our gRPC adapter as grpc (oh, I put these in the wrong order). Within main we declare a variable for err, create the variables for our ports, and next we start plugging the adapters into those ports. We start with the database adapter, because the database adapter needs to be injected into the application layer, and the application layer is going to be injected into our gRPC adapter as well. So we set our dbaseAdapter, and err as well, equal to db.NewAdapter; if we go back to the NewAdapter definition, you'll see our adapter needs to be passed a driver name and a data source name, and that's because when we use this Open method, we need to pass it a driver name and a data source name. So let's quickly go over what a data source name is.
If we go to the MySQL driver page and search for DSN, it takes us to the data source name portion of the documentation, which says the data source name has a common format; it's basically similar to a URL connection string. In its fullest form it'll look something like username:password@protocol(host)/dbname plus parameters: the username for the database, the password for the database, the protocol (in our case tcp), then the host (we're going to be using a test database, and that's going to be the host we add here), then the database name, plus any additional parameters. The driver name is going to be mysql, because of course we're using the MySQL driver. If we were to hard-code this, we'd just put in "mysql" as a string and then a connection string here, like admin, then password, and so on and so forth. But we're not going to hard-code it, because we want to be able to swap out our database drivers without needing to make changes to the code. So what we're going to do is have the arguments we pass to NewAdapter come from our environment. Back up here we import os, then back down here we create a variable for dbDriver and set it equal to os.Getenv; what os.Getenv does is get the value of an environment variable whose name we pass as an argument ("Getenv retrieves the value of the environment variable named by the key"). So we pass in the key as an argument; say for example we set an environment variable for the database driver and just call it DRIVER. Actually, let's be more explicit: DB_DRIVER, and that's going to be our dbDriver. Then we set another environment variable for the data source name, which we'll just call DS_NAME.
We'll pass dbDriver and dSourceName into our new db adapter like so. Our database adapter's NewAdapter function returns an error, so we need to check that it's nil, and if it's not, we log.Fatalf "failed to initiate dbase connection" and throw in the error as well (we need to import log). Once we have a database connection, we want to defer the closing of that connection, and if you remember, we created a function within our database adapter for this; all it does is execute db.Close. Then we plug in our arithmetic adapter with arithmetic.NewAdapter; our arithmetic adapter doesn't take in anything, because this is the core of our application and the logic within this layer isn't going to depend on anything else. So we have our arithmetic adapter, and now we can plug in our app adapter: api.NewAdapter. If you remember, the api adapter takes in both the database adapter and the arithmetic adapter, because from the application layer we're going to have access to both the core of our application and the right side of our framework layer; this is all happening through dependency injection, which I'll elaborate on a bit more in just a second. So we pass the dbaseAdapter we just plugged in and the arithAdapter we just plugged in. It's saying "undeclared name"; let's see... oh, it's because I called it core here, so let's change this to core, and we'll change this to core as well. Then we have our app adapter, and lastly we plug in our gRPC adapter, setting it equal to grpc.NewAdapter; our gRPC adapter takes in only our app adapter as an argument, and as you can see, the gRPC adapter has this api field in its struct so it can access the application layer and make calls to our application. Then lastly we do grpcAdapter.Run(), the method we implemented that starts up our gRPC server; as you can see, our gRPC port has both this Run method and our methods that execute the arithmetic operations. Now, we're not going to be able to run this just yet, because we need to set up a containerized environment for our tests, and within that containerized environment we'll have these variables exported for our database. In this tutorial our database is going to be built from a MySQL Docker image, and I'll get into what that means in just a second, but first let's go over dependency injection really quickly (and we don't need to import fmt). We've been talking about dependency injection and inversion of control: inverting the control from our code making calls to instantiate a database, to the program being what starts up and allocates dependencies to our application's layers. This func main is the start of our application; when we run the application, it starts from func main, and within this function is where we initiate all of our dependencies. As you can see, from an imperative standpoint we just declare the variables for our ports, get the necessary environment variables, and plug our adapters into those ports, as you've already seen here; then the necessary dependencies get injected into the adapters that need them. For example, the app adapter has its database adapter and the core injected into it, and the same with our gRPC adapter, which has the app adapter injected into it. So let's take some time to visualize what that actually looks like and how it relates to the hexagonal representation of our application shown during the explanation of hex arch. Okay, so with all things considered, our project's inner layers should not depend on outer layers.
Our layers should be decoupled by making use of interfaces to abstract the implementation details of each individual layer. If we look at how we've achieved this and apply it to our hex arch diagram, we get something like this: here we have our core port (we can just label it "core port"), here we can add our gRPC port, here our application port ("app port"), and here our db port. If we look up here at our actual code, we see that our application layer actually takes in a database adapter and the core, and those parameters are of type db port and arithmetic port. So our application layer here has our core plugged in via dependency injection through this port, and it also has a port here for our database, and both of these are plugged into our application adapter via dependency injection, because we don't want our inner layers to depend on the outer layers. Although our application layer can depend on the core layer, it cannot depend on our framework layer, so to circumvent that we just use dependency injection for pretty much everything, both here and here. Basically, our core is our core adapter, and we're plugging that core adapter into our application layer via this port here; our db here, which is a MySQL db, is also being injected into our application layer via this port here; and the application layer accesses both the core and our database via the injected dependencies. Our application layer is only aware of the method signatures defined in the interfaces that our ports define, which means the application layer doesn't care about the implementation of the methods defined within these ports. That means our application layer is decoupled from our domain layer, and also decoupled from our framework layer and our MySQL adapter. Then, as you see here, our gRPC adapter, which goes here, is passed the application adapter that we built; that means our application layer plugs into our gRPC via this port, and our gRPC adapter just plugs into the application via this line here. We define a gRPC port here and plug the gRPC adapter in here, so that's what this port is, and this other port is the application layer being plugged into the gRPC adapter. Technically we could just depend on the application layer directly from the framework layer here, but instead of doing that, we inject the application layer into our gRPC adapter and access it that way. With this setup, the domain layer, our core layer, has no outside dependencies: it doesn't depend on application, and it doesn't depend on framework, and as you can see, we're not even injecting anything into our core layer; it's just our core code. This is how we've made our application's dependencies point inward, and with the use of interfaces we were able to abstract the implementation details of each individual layer, resulting in a valid implementation of the hexagonal architecture. So now that we've seen how to make all of our dependencies point inward, in turn following the guidelines for implementing the hexagonal architecture, we can move into creating our containerized environment to run some tests; let's go ahead and save this. To test our application, we're going to spin up two Docker containers, containing both our MySQL database and the application we just built, and we're going to orchestrate the startup of these containers using Docker Compose. So let's take some time for a brief introduction to what a Docker container is and what Docker Compose is. What is a Docker container? A Docker container can
be thought of as an environment isolated from your host system, with its own isolated resources. They're similar to virtual machines in that regard, but the similarities pretty much end there; Docker containers function very differently from virtual machines, which in turn makes them far more portable, lightweight, and efficient. Docker containers are not virtual machines; they're just regular processes running on the host system, and all running containers make use of the same Linux kernel on the host. The isolation magic of each individual container is possible because, under the hood, Docker uses cgroups, a Linux kernel feature for isolating the resource usage of a collection of processes, and namespaces, a Linux kernel feature that partitions kernel resources such that one set of processes can only see one set of resources. In other words, cgroups are used to limit the resources a container can use, and namespaces are used to limit what a container can see. You're probably wondering: if Docker containers all use the host system's Linux kernel, how does Docker work on non-Linux systems like Mac and Windows? Well, when you install Docker on one of these systems, it generally installs a Linux VM on which to run the containers, and the containers then share the Linux kernel of that Linux VM. Another question you might have is: how does a container emulate having a separate OS, a separate operating system? Simply put, all Linux distributions use the same Linux kernel; the difference comes from the applications installed and the tools and dependencies that ship with each individual distribution. At its core it is still just a Linux kernel, which means these distros can still use your host kernel, and they do this by using a filesystem snapshot of the specified distro. So a Docker container using the Alpine Linux distro is just using the host system's Linux kernel with a snapshot of an Alpine Linux filesystem. So how do we run one of these Docker containers? Docker containers are basically just the running process of an executed Docker container image, and a container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries, and settings. We can make use of a Docker image by creating what is called a Dockerfile; Docker can build images by reading the instructions of the aforementioned Dockerfile, which is just a text document containing all the commands used to assemble an image. Once we've created our Dockerfile, we can use it to build our own container image, and once the image is built, we can start up a container by running the image. This is exactly what we're going to do: we'll first write a Dockerfile that includes all of the commands to build the image for our application, then we'll use Docker Compose both to build our Docker image from that Dockerfile and to pull the MySQL image from an online repository. Docker Compose will then run both our application's container, using the image built from our Dockerfile, and the MySQL container, using the image we pulled from the online repository. So what exactly is Docker Compose? Docker Compose allows us to compose all of our commands and configuration for starting up our containers in one structure. Without it, we would run individual commands on the command line to start up individual containers, passing in the necessary configuration via flags to each individual command, which makes it difficult to start our containers and maintain consistency in our configuration. By using a docker-compose file, we store the orchestration of running our containers, and whenever we want to run them, we just run our docker-compose file. If you're having a hard time understanding all of this, I encourage you to continue with the tutorial, and after you've finished the part where we configure Docker and Docker
Compose, come back to this part of the video and watch it again. So, if we access this URL here, we're brought to the Get Docker page, which gives us instructions for installing Docker Desktop for either Mac or Windows; as explained, since Docker makes use of the Linux kernel, these desktop versions essentially install a Linux VM as an additional layer of the installation. If you're using a Linux system already, the installation process becomes a bit simpler. I'm using a Mac, so I would click Docker Desktop for Mac and select one of these buttons depending on whether you have a Mac with an Intel chip or a Mac with an Apple chip; from there you just download this .dmg file and install using it. I'm not really sure about Windows, but I'm sure it's the same sort of thing: you'd install Docker Desktop for Windows and go through the Windows installer. And if you're using Linux, depending on the Linux platform you're using, you'd select one of these and be provided with detailed documentation on installing Docker on your particular system. Okay, so now that we have a general understanding of what a Docker container is and how we're going to orchestrate things using Docker Compose, we first need to create a Dockerfile. This Dockerfile is just a set of instructions to create an image, and the image is what's used to generate our containerized environment; the containerized environment is built based on the image. So we'll vim into a new Dockerfile. We want our base image to be an image that's already been created for Go, with the rest of our instructions built on top of it, so to get our base image we do FROM golang:1.15.3-alpine3.12, and we want to EXPOSE port 9000, because that's the port our server runs on. This base image is a lightweight image without a lot of unnecessary things installed, so we're actually going to need to install a MySQL client onto this bare-bones Alpine Linux image. We RUN apk update (Alpine Linux uses apk as its package manager, so we use apk to install our MySQL client), then after updating we do apk add, which is how we install, with --no-cache because we want to keep this image as lightweight as possible; we install mysql-client and build-base, and here we should add --no-cache as well. Then we set an instruction to run another command: RUN mkdir to make the /app directory. Then WORKDIR changes our working directory to this /app directory that we will have made. Then we COPY our go.mod into our current directory, which is /app; this copies go.mod from our host system into the /app directory within our containerized system, and the same for go.sum. Then we RUN go mod download, which downloads all of the dependencies from our go.mod file. After that, we COPY everything from our host system's current working directory, the directory where our Dockerfile is, into the /app directory of the containerized system. Then we also COPY another file we're going to create, the gRPC entrypoint script (I'll explain what this file is when we create it), into /usr/local/bin. Then we RUN /bin/chmod; chmod changes the permissions of a file, and in our case we do chmod +x on the entrypoint file in /usr/local/bin. Running chmod with +x on the entrypoint file makes it executable, which means we'll be able to create a process that runs it as an executable, because it's going to be a shell script. The next thing we want to do is RUN go build on main.go; we're building our application on this line here, all the code for our hex arch application, into its own binary. If you remember, a binary is just a program that can be executed, like the protoc binary we installed and put in, I think it was, the /usr/local/bin directory; once it's in a directory that's in our PATH (for example, this chmod command is in /bin), running the command runs the program. So go build on main.go creates a binary for our application, and the binary it builds is going to be called main, because our file is called main.go; we then move that binary, which is our application, to /usr/local/bin. Last but not least, we do the CMD instruction with main; this is the command that gets run to start the primary process of our container, so it essentially runs our application, and that container's primary process is going to be running our application. We also add an ENTRYPOINT, which is our gRPC entrypoint script, also in /usr/local/bin; the entrypoint is a little more complicated, and I'll have to explain it once we write the script. That's going to be it for our Dockerfile, so we can go ahead and save it. Now we can create our gRPC entrypoint script file, which I'll just make accessible on GitHub.
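Putting those steps together, the Dockerfile might look roughly like this. This is a sketch reconstructed from the description above; the entrypoint filename spelling (grpc_entrypoint.sh) is an assumption, since the name is only spoken aloud:

```dockerfile
# Base image with Go preinstalled, as described above
FROM golang:1.15.3-alpine3.12

# Our gRPC server listens on this port
EXPOSE 9000

# Install the MySQL client and build tools onto the bare-bones image
RUN apk update && \
    apk add --no-cache mysql-client build-base

RUN mkdir /app
WORKDIR /app

# Copy module files first and download dependencies
COPY go.mod .
COPY go.sum .
RUN go mod download

# Copy the rest of the project, plus the entrypoint script
COPY . /app
COPY grpc_entrypoint.sh /usr/local/bin/grpc_entrypoint.sh
RUN /bin/chmod +x /usr/local/bin/grpc_entrypoint.sh

# Build the application binary and put it on the PATH
RUN go build main.go && mv main /usr/local/bin

ENTRYPOINT ["grpc_entrypoint.sh"]
CMD ["main"]
```

Copying go.mod and go.sum before the rest of the source is a common layer-caching trick: dependency downloads are only re-run when the module files change.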
Just copy this file into your project, because I don't want to make this a bash tutorial when this is supposed to be a Go tutorial; I'll copy and paste the shell script into this file and then explain to you what it's doing. So we copy and paste, go to the top, and explain what's happening here. I've actually decided not to use bash, because then we would need to install bash when building our image, and we can just use regular sh. So this is just our shell, and these are all just shell commands. This set -e command makes sure our script immediately exits if there's an error when running any part of it, and our command is this $@, which is just whatever command we pass to the script when we run it; you'll see what I mean by that later. The reason for this script is that when we're building our two Docker containers, the MySQL container takes a while to start up, but our application container starts up pretty much immediately, and if the application container tries to access the database before it's available, it's just going to crash the application. So all this script is doing is waiting for the MySQL database to start up gracefully; once the MySQL database is available, our application will be able to connect to it and start up our actual application. The script has a maximum of 10 tries, and it basically just tries to connect to the database: while max tries is greater than zero, it tries to connect using the mysql connection command, and every time it fails, it subtracts 1 from our maximum tries and sleeps for three seconds (which just means wait three seconds before trying again). Then it tries to connect again, and if it can't, it does the same thing, subtracting one from our maximum tries, and it keeps doing that until we either get a successful connection to the database or run out of max tries. If we run out of max tries, we just print to the screen "unable to connect to mysql after 10 tries"; but if we successfully connect, we execute the command passed to the script, which is the CMD we set in our Dockerfile: main, our application binary. This file is our entrypoint, so you can think of it as passing our command to our entrypoint, and our entrypoint can do the necessary startup procedures and suchlike before executing the command. So we can just save this, and the next thing we need to create is our docker-compose file: vim docker-compose.yaml. A .yaml file is just a way to write configuration files; YAML is a data serialization language, which just means it puts data in an architecture-independent format, basically a bunch of key-value pairs, essentially; you'll see what I mean in just a second. For version we need to set the version of our docker-compose format, then we do services, and we're going to have two services like I explained before: our db service and our grpc (main application) service. The db service is going to use an image, because we're creating a Docker container for each service, and it's going to use a MySQL image. This command key we just use to set --default-authentication-plugin=mysql_native_password, we set restart to always, and here's the important part: we set our environment, and our environment variables are going to be MYSQL_ROOT_PASSWORD, which we'll just set to admin123 (obviously not a secure password), and MYSQL_DATABASE, which is going to be hextest.
Then we add volumes: we mount a directory called testdb from our current working directory to /docker-entrypoint-initdb.d, and in a second I'll go over what all of that means. Then ports, mapping 3307 to 3306, and that's the configuration for our test database service. About that volume: we're mounting the testdb directory, which we'll create in our current working directory on the host, into the /docker-entrypoint-initdb.d directory of the containerized MySQL system. The reason is on the Docker Hub page for this mysql image, under "Initializing a fresh instance": when a container is started for the first time, a new database with the specified name is created and initialized with the provided configuration variables, and furthermore the image executes files with the extensions .sh (a shell script), .sql (an SQL script), and .sql.gz (a gzipped SQL script). Any file with one of those extensions found in /docker-entrypoint-initdb.d inside the container is executed in alphabetical order when the container starts. So we can write an SQL script, keep it in this directory on our local system, have it mounted into the MySQL container, and have it run at startup, so the container begins by creating whatever our SQL script defines. We'll write the script to create the tables for our database, and whenever this MySQL container starts it will initialize the database with those tables.
I hope that makes sense. If you have any questions about the other options we're adding here, the Docker Compose file reference page covers them, and Docker actually has really good documentation, so I suggest you look through it anyway, but especially if you want to dig deeper; for the most part we're following the example YAML file there, at least for this db configuration. That's it for the db service, and now we could start configuring our application service, our gRPC service, but actually it's probably best to go over these db options in a bit more detail first, so let's do that. As I already explained, image is the image used to run the container, and this mysql image is pulled from the online registry at hub.docker.com, which shows how to pull it, describes the image and how to use it, covers all the other information we went over, and gives an example of using it in a Compose file. Next is the command key, which works like the command in our Dockerfile, except that when we build through Docker Compose this command overrides the Dockerfile's. Here we pass a MySQL configuration for default-authentication-plugin as the command to this container, because newer versions of MySQL no longer use mysql_native_password as the default authentication plugin, but for our purposes and for this container we want it. restart describes the restart behavior of the container.
On that Compose file reference page I told you about, you can search for any of these keys; for the restart section it says that "no" is the default policy and never restarts the container under any circumstances, while "always" means the container always restarts, so we set ours to always; there's also "on-failure", which restarts only when the exit code indicates a failure. environment is very important because it sets the environment variables inside the container, here MYSQL_ROOT_PASSWORD and MYSQL_DATABASE. volumes, as I already explained, mounts a volume from the host into the container. ports is the port mapping: the first port is the host port, mapped to the second port inside the container, so to reach port 3306 inside the container (MySQL's default port) from outside, on the host system, we use port 3307, because 3307 on the host is mapped to 3306 in the container. Now let's start adding the configuration for our gRPC service. Instead of pulling an image like we did above, we build an image from the Dockerfile in our current working directory, where the docker-compose file lives, so we use build with a dot for the current directory rather than image. We also need an environment section, because inside this container we have to set the environment variables for our database connection.
If we go back to our main.go file, remember that we read the environment variables DB_DRIVER and DS_NAME for the database driver and the data source name, so we want to create those variables in this container; our application will then pick them up and use them to connect to the database. Back in the Compose file, add DB_DRIVER and DS_NAME, both empty strings for now; we'll fill them in shortly. The next key is depends_on, where we declare that this container depends on db, our database service. Last but not least, we override the container's default command, because when we run Docker Compose we just want to run our tests, so we set command, in bracket form, to go test -v ./..., with -v for verbose output and ./... to traverse the directory tree looking for test files. Now let's fill in the driver: it's simply mysql, since we're using a MySQL database. For DS_NAME the user is root and the password is admin123, because that's the root password we set in the db service's environment, followed by @tcp(...), where the parentheses hold the host and port. With Docker Compose both services are built on the same network, so we can reference a host by its service name: db is the name of our MySQL database service, so we can just use db as the host instead of a full hostname with IP addresses and such, and the port is 3306, MySQL's default.
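Put together, the Compose file built up to this point looks roughly like the following sketch; the version number and the grpc service name are assumptions, and the DS_NAME value anticipates the database name from the db service's MYSQL_DATABASE:

```yaml
version: "3"

services:
  db:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: admin123
      MYSQL_DATABASE: hextest
    volumes:
      - ./testdb:/docker-entrypoint-initdb.d   # init scripts run at startup
    ports:
      - "3307:3306"                            # host 3307 -> container 3306

  grpc:
    build: .
    environment:
      DB_DRIVER: mysql
      DS_NAME: "root:admin123@tcp(db:3306)/hextest"
    depends_on:
      - db
    command: ["go", "test", "-v", "./..."]
```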
Our database name comes after a forward slash, and looking up at the db service we set MYSQL_DATABASE to hextest, so here we put hextest as well, and save. Now we need to make the testdb folder that gets mounted into the MySQL container, so mkdir testdb, then vim testdb/init.sql to create a file called init.sql inside it. All we do in this file is create one table, arith_history, with a date column of type DATE that is not nullable, an answer column of type INT that is not nullable either, and an operation column, a VARCHAR with a maximum length of 255, also not nullable. This is the table that gets created when our MySQL Docker container starts up, so save that.
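As described, the init script boils down to a single CREATE TABLE statement; a sketch:

```sql
-- testdb/init.sql: run automatically by the mysql image on first startup
CREATE TABLE arith_history (
    date      DATE         NOT NULL,
    answer    INT          NOT NULL,
    operation VARCHAR(255) NOT NULL
);
```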
Now we can start writing our tests. We'll write both unit tests and end-to-end tests, starting with the unit tests. The unit tests cover the core functionality of our application, so we'll create one test for each method in arithmetic.go, each an individual unit test. An end-to-end test exercises the end-to-end functionality of the application, which in our case means testing the gRPC server; for that we'll need a package that lets us stand up a sort of dummy server from our gRPC code, and you'll see what I mean when we get there. So, starting with the unit tests, vim into internal/adapters/core, and within this core directory create a file called arithmetic_test.go. That's how tests work in Go: you take the name of the file you want to test and suffix it with an underscore and the word test, so it looks like arithmetic_test.go, and when you run the go test command it finds files named this way and runs them against their corresponding Go files. Inside, the package is still arithmetic, and we import github.com/stretchr/testify/require along with testing. We write our first test with func; the function name has to start with Test, followed by the name of the method we're testing, and it takes a pointer to testing.T. This function is our unit test. We create a new adapter, our arithmetic adapter (remember, we're still in the arithmetic package), then call its Addition method with one and one, capturing the answer and the error. If the error is not nil, we call t.Fatalf with "expected: %v, got: %v", saying we expected nil and got an error. Then require.Equal: we want to require the values to be equal, so we pass in t (looking at the method's signature, it needs the testing.T), then the answer, then int32(2); we have to convert because a bare 2 is just an int, and since we're doing addition and one plus one is two, we expect our answer, an int32 that is two, to equal this int32 that is two. That's it for addition, and all the other tests are pretty much exactly the same, so let's just copy this one.
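Boiled down, every one of these unit tests follows the same three steps: construct the adapter, call the method, and check the (answer, error) pair. Here is a stdlib-only sketch of that logic; the Adapter below is a minimal stand-in for the one in arithmetic.go (its method names and signatures are assumptions), and in the real file each block lives in a func TestXxx(t *testing.T) using t.Fatalf and testify's require.Equal:

```go
package main

import (
	"fmt"
	"log"
)

// Adapter is a minimal stand-in for the core arithmetic adapter.
type Adapter struct{}

func (Adapter) Addition(a, b int32) (int32, error)    { return a + b, nil }
func (Adapter) Subtraction(a, b int32) (int32, error) { return a - b, nil }

func main() {
	arith := Adapter{}

	answer, err := arith.Addition(1, 1)
	if err != nil {
		log.Fatalf("expected: %v, got: %v", nil, err) // t.Fatalf in the real test
	}
	if answer != int32(2) {
		log.Fatalf("expected: %v, got: %v", int32(2), answer) // require.Equal in the real test
	}
	fmt.Println("addition ok")

	answer, err = arith.Subtraction(1, 1)
	if err != nil || answer != int32(0) {
		log.Fatalf("expected: %v, got: %v", int32(0), answer)
	}
	fmt.Println("subtraction ok")
}
```

The multiplication and division cases follow the identical shape with their own expected values.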
Then do subtraction, the same shape, but this time one minus one, so the expected answer is zero; then multiplication, where one times one is one; and last but not least division, where one divided by one is also one. Now, in a real-world application your unit tests should definitely have more coverage: all of these are valid test cases, and you'd also want to test multiple variations of invalid cases, depending on your application. This tutorial isn't about writing exhaustive tests, though, just about the overall process of fitting tests into the architecture, so feel free to expand on these if you're inclined. Save that and clear. Our unit tests are done, and now we need our end-to-end tests, which are a little more complicated. vim into internal/adapters/framework/left/grpc: we want to create these tests inside the grpc directory, because we'll be testing the gRPC endpoints defined in rpc.go, so we create rpc_test.go. It's still part of the same rpc package, and we'll add imports in a moment, but first let's go over what we'll use to emulate server behavior. If we head to the docs we arrive at the bufconn package, which says that package bufconn provides a net.Conn implemented by a buffer, plus related dialing and listening functionality. Basically it emulates connection functionality; we don't need to know too many details about how it works, and there's really not much in the documentation, but you'll see what's going on when we write the code. So let's import it: google.golang.org/grpc/test/bufconn.
We also need grpc itself, and require again, so github.com/stretchr/testify/require. The first thing to define is the buffer size we pass to the bufconn listener: const bufSize = 1024 * 1024, and then var lis as a pointer to bufconn.Listener. Next we create a func init(); this init function runs before any of our tests, so it's where we put the code that orchestrates starting the application and wires it to this dummy network connection. We start with an err variable, then create the listener with bufconn.Listen, passing in bufSize. Then we create our gRPC server with grpc.NewServer(); if you remember from before, NewServer creates an empty gRPC server with no service registered, so we still have to register our service on it. Then, just like in main.go, we create our ports; for this part we can split the window to main.go and copy the ports over, along with the imports we need, so split again, go to the top, and paste those in: the database import, the arithmetic import, the api import, and the ports. As you can see, most of what happens in this init function is what happens in our main function; main starts the application for deployment, while this init only starts it to test the end-to-end functionality of our gRPC endpoints. So we can copy the rest over as well, aside from the gRPC adapter's Run call. We'll need to import os for these lines here.
We also need to import log, so add that too, and remove the leftover line. For our gRPC adapter, since we're already inside the rpc package, we can just call NewAdapter directly to create the adapter for our RPC service. The reason we don't need the Run method in this init function is that we register the service here ourselves: pb.RegisterArithmeticServiceServer, passing our empty gRPC server and our gRPC adapter. We also have to import pb, so take an existing import path and change it to framework/left/grpc/pb. Once the service is registered, we serve in a goroutine: go func, and inside it, if grpcServer.Serve(lis) returns an error, we log "test server start error", making sure to actually invoke the goroutine. Then we create another function called bufDialer, which takes a context.Context and a string and returns a network connection and an error; it returns lis.Dial(), and we need to import context and net. Next we create a function called getGRPCConnection, which takes a context and a testing.T and returns a pointer to grpc.ClientConn (import testing as well). Inside it we do conn, err := grpc.DialContext, passing our context, the string "bufnet", grpc.WithContextDialer with the bufDialer we just created above, and grpc.WithInsecure(); if err is not nil we call t.Fatalf with "failed to dial bufnet" and the error, and if all is good we return the connection. Let's go over what's happening here: this is the function our test functions will call to get a bufconn-backed connection, and the docs for DialContext say it creates a client connection to the given target.
DialContext's parameters are a context and then a target string, and the target we pass is "bufnet". We also pass WithContextDialer, which returns a DialOption that sets a dialer used to create connections; it expects a function that returns a network connection, and if we go to the definition of our bufDialer, it returns lis.Dial, where lis is our bufconn listener. So we're dialing the bufconn listener, and as the docs say, Dial creates an in-memory full duplex network connection, which essentially means a local, in-memory connection that emulates a real network connection. That's what the bufDialer function is for; it gets passed to WithContextDialer, which is used to obtain our connection. We pass WithInsecure because it disables transport security; as mentioned in the docs, transport security is required unless WithInsecure is set, and since we're only running tests here, we disable it. Going back up to our init function, we're basically doing the same thing we do in main.go, aside from a couple of minor differences: we create the bufconn listener, create an empty gRPC server, create our ports, set the environment variables for the database connection, plug the database adapter into the db port and likewise for the core adapter and the gRPC adapter, then register our service and serve the gRPC server. Now we can start making use of all of this in our tests.
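To make "in-memory full duplex network connection" concrete: the standard library's net.Pipe hands back the same kind of in-memory net.Conn pair that bufconn's Dial produces, just without the listener plumbing, so the idea can be sketched without any gRPC dependencies at all (this is an analogue, not the bufconn code itself):

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Two ends of one in-memory, full-duplex connection: what one end
	// writes, the other end reads, with no real network involved.
	client, server := net.Pipe()

	go func() {
		buf := make([]byte, 5)
		n, _ := server.Read(buf) // receive the client's bytes
		server.Write(buf[:n])    // echo them straight back
		server.Close()
	}()

	client.Write([]byte("hello"))
	reply := make([]byte, 5)
	n, _ := client.Read(reply)
	fmt.Println(string(reply[:n]))
	client.Close()
}
```

bufconn works the same way, except its connections come from a Listener, which is exactly what lets a real grpc.Server accept them.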
Our first end-to-end test is TestGetAddition, taking the usual pointer to testing.T. We create a context with context.Background(); the docs say Background returns a non-nil, empty context, and empty is fine, but we need one because we have to pass a context to getGRPCConnection. So conn := getGRPCConnection with our context and t, then defer conn.Close(), which closes the connection after the function completes. Then client := pb.NewArithmeticServiceClient, passing our connection; NewArithmeticServiceClient comes from our generated code and returns an ArithmeticServiceClient, which essentially lets us call our gRPC service methods through this client variable, for example client.GetAddition to call our addition method. Then we create our params, a pointer to a pb.OperationParameters struct literal containing A, which is one, and B, which is one, since we just want one plus one. Then answer, err := client.GetAddition, passing the context and params, and if err is not nil, t.Fatalf with "expected: %v, got: %v", saying we expected nil and got an error. I'm not sure if I mentioned this before, but calling Fatalf on the t variable marks the test as failed and stops it immediately, which is the behavior we want: if one of these checks fails, the test fails. Lastly, require.Equal, passing in t, then answer.Value, then int32(2).
We want the value from our answer to be an int32 of 2, because one plus one is two, and that's TestGetAddition. Now we do the exact same thing for subtraction, changing the call to GetSubtraction and the expected result to zero (one minus one is zero), then for multiplication (one times one is one), and lastly for division (one divided by one is one). Save that. At this point we've written both our end-to-end tests and our unit tests, so we can finally get into running our docker-compose file to spin up the two containers and run the tests using both the MySQL container and our application container. To do that, in the same directory as the docker-compose file, run docker-compose up with the --build flag. And it seems we have an error, "additional property service is not allowed"; looking in the docker-compose file, ah, this should be services, not service. Let's see if that works: docker-compose up --build again, and as you can see the containers are now being built, with all the steps from our Dockerfile executing, the database service starting up, and our gRPC service waiting for the database to become available, which comes from our entrypoint file. But we're getting "cannot connect" errors. These errors are coming from our gRPC service, which means they're happening in our entrypoint file, and it's unable to connect after 10 tries: the entrypoint tried to reach MySQL 10 times and was unsuccessful. That's because the entrypoint connects to the MySQL database using the environment variables MYSQL_HOST, MYSQL_PORT, MYSQL_USER, and MYSQL_PASSWORD, and we're not setting those variables on the gRPC service in our docker-compose file.
So let's add them. Back in the docker-compose file, the only environment variables on our gRPC service are the two used inside the application itself; we also need the ones used by the shell script in our entrypoint to test the MySQL connection. So add MYSQL_HOST set to db, MYSQL_PASSWORD set to admin123, MYSQL_DB set to hextest, MYSQL_USER set to root, and lastly MYSQL_PORT set to 3306. Save, run docker-compose down to remove the containers, then build again with docker-compose up --build. Oh, and there's a typo in the database name, which should be hextest, so stop the containers, remove the extra letter, save, and run docker-compose down and docker-compose up --build once more. This time we connect successfully and all of our tests run and pass. Let's scroll up and look at what happened: at this line you can see the MySQL container running the scripts in /docker-entrypoint-initdb.d, the directory our init.sql file was mounted into, and throughout the MySQL container's startup you can see our entrypoint waiting for the db to become available, trying to connect, getting "can't connect to MySQL server on db", trying again while MySQL is still starting up, and continuing until it finally connects and runs the SHOW TABLES command from our gRPC entrypoint script, which shows the arith_history table created in our database by init.sql. Then, finally, our tests run via the command in our docker-compose file.
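The fix amounts to extending the gRPC service's environment block with the variables the entrypoint script reads; a sketch (the service name is an assumption):

```yaml
services:
  grpc:
    environment:
      DB_DRIVER: mysql
      DS_NAME: "root:admin123@tcp(db:3306)/hextest"
      MYSQL_HOST: db          # the Compose service name doubles as the hostname
      MYSQL_PORT: 3306
      MYSQL_USER: root
      MYSQL_PASSWORD: admin123
      MYSQL_DB: hextest
```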
These top tests are our unit tests: each individual test is listed, TestAddition passes, TestSubtraction passes, and so on, and then the arithmetic package as a whole is shown as passing, with its directory listed. Below those are the end-to-end tests: TestGetAddition passes, and so do the rest, so the whole rpc package passes as well, again with its directory shown. Those are our tests, so Ctrl+C out of the containers, clear the screen, and run tree on the directory to see the whole structure. What we have now is an application structured using the hexagonal architecture, plus a docker-compose file orchestrating the two containers needed to test it. At this point you have a pretty bare-bones hex arch gRPC application that can be extended or expanded upon to meet your needs, and the Docker Compose setup we used for testing can also easily be applied in a CI/CD pipeline as a testing stage for a deployment. The code is also modular enough that you should be able to swap in different storage options or add additional entry points, such as a command-line interface or a REST API, and I encourage you to try adding adapters and improving upon or changing this application where you see fit. That's going to be it for this tutorial; congratulations for making it to the end. If you enjoyed this video, be sure to like and subscribe. I hope you've all learned something, it's been a pleasure, and I'll see you in the next one.
Info
Channel: selikapro
Views: 5,069
Keywords: software development, gophercon, golang, programming, go programming language, google programming language, how to code, rest api, go rest api, learn software engineering, software engineer, getting started with go, cpop, go app, hexagonal architecture, go clean architecture, go tutorial, golang tutorial, microservices in go, robert martin golang, robert martin go, uncle bob architecture, uncle bob go programming language, kat zien example, hexarch, domain driven design
Id: MpFog2kZsHk
Length: 165min 5sec (9905 seconds)
Published: Sun May 02 2021