Hey, in this video, we're going to create
a GitHub code analysis agency that will analyze your code according to your standard operating
procedures and will leave analysis comments on your pull requests. This is an example of a non-standard AI agent
automation that runs only on the backend and allows you to significantly increase the efficiency
of your operations. For example, in our own agency, we recently released a new feature on our SaaS platform that allows you to create custom GPT apps, white-label them, add multiple agents, and control user access. But the problem is that as our code base continues to grow and we keep adding more and more features, it becomes increasingly hard to scale. This is where SOPs, or Standard Operating Procedures,
come in. If all your developers follow the same code
quality standards, it becomes much easier for your team to collaborate and add new features. Secondly, I have personally received more than five requests for this exact AI solution from various businesses around the world. So it seems like there is a real need for
this solution across large corporations. Besides, it is a great use case to get started
with because you can build upon it and even add additional capabilities. For example, you can make your agents actually fix the problematic code and then push the fix to GitHub as well. So let's see how this works. Okay, so this is my agency here on the left
that I created in this video, and it consists of three agents: the CEO, the Code Analyzer, and the Report Generator agent. Inside the Code Analyzer agent's instructions,
I'm providing the code quality standards that this agent must adhere to. So let's try to break them. Here's a function that creates a chat name automatically after a user sends the first message. Let's modify it: say, for example, I make an error and, instead of calling the DB class, I update the document in Firestore directly with the new chat name.
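To make the violation concrete, here's a minimal Python sketch of the two approaches. The DBCollections class and method names here are hypothetical stand-ins for our actual interface, not the real code:

```python
from google.cloud import firestore

# Hypothetical stand-in for the DB collections interface that
# encapsulates all Firestore access in our codebase.
class DBCollections:
    def __init__(self, client: firestore.Client):
        self._client = client

    def update_chat(self, chat_id: str, fields: dict) -> None:
        self._client.collection("chats").document(chat_id).update(fields)

def rename_chat_compliant(db: DBCollections, chat_id: str, name: str) -> None:
    # Compliant: the Firestore write goes through the DB class.
    db.update_chat(chat_id, {"name": name})

def rename_chat_violation(client: firestore.Client, chat_id: str, name: str) -> None:
    # The deliberate violation: updating the document in Firestore
    # directly, bypassing the encapsulation layer.
    client.collection("chats").document(chat_id).update({"name": name})
```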
Now I'm going to push these changes to GitHub. After I've pushed these changes, I'm going to create a new pull request. Awesome, so now as you can see, we've triggered our agency on GitHub Actions automatically, which means that it is currently running and analyzing our code base for any code quality issues. Perfect, so then as you can see, we get the
code quality report on our pull request that identifies the exact issue I have just created
with the direct Firestore call. It even specifies the file path and details, such as that the DB Collections interface was not properly used to ensure encapsulation and maintainability. It also provides a recommendation: to adhere to our code quality standards, all Firestore interactions should be refactored to use the DB Collections interface. This will ensure that our codebase remains
clean, maintainable, and in compliance with our encapsulation policies. So now let's see how you can make this agency
yourself. For everyone who's new, please make sure to
install Agency Swarm with pip install agency swarm and then to get started much faster
you can run agency swarm genesis command. This is how I personally start all of my projects
because the Genesis agency will create all the agent folders and tools for you. Of course, it does not get everything right on the first attempt just yet, but it does speed up the process significantly. When chatting with the Genesis agency, make
sure to clarify what your mission is, what your goals are, what tools or APIs you want
to utilize, and how many agents you want to create. Also include the communication flows between
your agents. In my prompt, I'm just going to say that I need a code analysis agency with three agents: the CEO, the Code Analyzer, and the Report Generator agent. It does take some time for your agents to
be created, so we're going to skip this part and I'll see you again when we are ready to
fine-tune this agency. We will have three agents: the CEO, who will initiate the communication; the Code Analyzer agent, which will fetch all of the changes from GitHub; and the Report Generator agent, which will create comments on our pull requests and highlight any problematic lines of code.
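For context, here's a minimal sketch of how an agency chart like this is typically wired up in Agency Swarm. The import paths and class names are hypothetical, since the Genesis agency generates the actual folders for you:

```python
from agency_swarm import Agency

# Hypothetical imports; the Genesis agency generates the real agent folders.
from ceo import CEO
from code_analyzer import CodeAnalyzer
from report_generator import ReportGenerator

ceo = CEO()
analyzer = CodeAnalyzer()
reporter = ReportGenerator()

# The first entry is the entry-point agent; each inner pair
# defines a one-way communication flow between two agents.
agency = Agency([
    ceo,
    [ceo, analyzer],       # the CEO can talk to the Code Analyzer
    [analyzer, reporter],  # the Code Analyzer can talk to the Report Generator
])
```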
Okay, as you can see, we now got the GitHub pull request fetcher tool, which uses the GitHub API to get all the changes from the pull request. This is already super helpful, because it saved me a lot of time browsing the GitHub API documentation and writing all of this boilerplate code. In the meantime, I'll start working on the
agency instructions, because this is where you can really tailor the agency to your company's specific process. Okay, awesome. So now the agency creation process is complete
and we can see all the agent files here on the left with the tools defined accordingly. The best part is how close the tools are to
what I actually wanted to implement myself. So now all we have to do is just adjust these
tools accordingly. To test your tool, you can simply add an if __name__ == "__main__" block at the end of the tool file, hard-code the parameters when initializing the tool, and then execute the run method inside a print statement. The pull request ID I'm also going to hard-code for now, just for testing, but in the production environment we will use the GitHub event payload to get it dynamically.
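As a sketch of that pattern, here's roughly what the comment-posting tool with its test block might look like. The class name, fields, and the OWNER/REPO path are placeholders for illustration:

```python
import os

import requests
from pydantic import Field
from agency_swarm.tools import BaseTool

class PullRequestCommenter(BaseTool):
    """Posts a comment on a pull request via the GitHub API."""
    pr_number: int = Field(..., description="The pull request number to comment on.")
    body: str = Field(..., description="The comment text to post.")

    def run(self):
        # Pull request comments are created through the issues endpoint.
        # "OWNER/REPO" is a placeholder for your repository path.
        url = f"https://api.github.com/repos/OWNER/REPO/issues/{self.pr_number}/comments"
        headers = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}
        response = requests.post(url, headers=headers, json={"body": self.body})
        return response.json()

if __name__ == "__main__":
    # Hard-code the parameters just for testing.
    tool = PullRequestCommenter(pr_number=1, body="Test comment from the agency.")
    print(tool.run())
```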
Okay, so after I have adjusted this tool, we are finally ready to test it. Simply run this file, and you should get a response from the GitHub API right here. Awesome, now as you can see, on the pull request we get a comment which was created by me via the API. So then we also need to adjust and test the GitHub pull request fetcher tool.
Okay, it seems like our agent wanted to keep only the TypeScript files, because this is the way I defined my prompt when I was chatting with the Genesis agency. But for this tutorial, I do want it to analyze all of the files and see all of the file changes, which is why I'm getting each file name, listing all of its changes accordingly, and then returning them from this tool.
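For illustration, the adjusted part of the tool might look something like this, assuming it receives the JSON list returned by the pull request files endpoint (the function and variable names are my own):

```python
def format_file_changes(files: list[dict]) -> str:
    """Turn the GitHub API response into a readable list of changes."""
    changes = []
    for f in files:
        # "filename" and "patch" are standard fields in the
        # /pulls/{number}/files response; "patch" holds the changed lines.
        changes.append(f"File: {f['filename']}\nChanges:\n{f.get('patch', '')}")
    return "\n\n".join(changes)
```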
So if we run this tool, as you can see, we now get a list of all of the file changes that includes the file name and all of the lines that were changed in this pull request. So we are now almost ready to test this whole
agency. Literally, I just adjusted two tools and we
are almost there. It's incredible how much of the work was done
by the Genesis agency for me. So what I'm going to do next is adjust all
of the instructions and define our code quality standards, just so our agent knows what to look for when analyzing those files. So I already have this standard operating
procedure for how to work on our project backends right here. And basically, I'm just going to copy and
paste some of the checklist items into the instructions for the Code Analyzer agent. Because this is an integration that will be running solely on the backend, we actually don't need the demo_gradio or run_demo commands, because our agency will essentially be triggered by a pull request on GitHub. So what we can do instead is simply use the agency.get_completion method with a simple prompt like "Please analyze the code and generate a report."
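As a minimal sketch, the backend entry point could be as simple as this; the agency import path is a placeholder for however your generated module is laid out:

```python
# main.py - the script our GitHub Actions workflow will execute.
# "agency" here stands in for your generated agency module.
from agency import agency

result = agency.get_completion("Please analyze the code and generate a report.")
print(result)
```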
So let's test it out. Okay, cool. So after we launched our agency, you can see all the threads appear in the terminal. You can click on them and check out the conversations between your agents. For example, here we can see how the Code Analyzer agent just used the pull request fetcher tool and got all of the file changes. Awesome. So then the agency.get_completion method returns
that the analysis of the TypeScript codebase has been completed, and it even includes a link to the issue comment, which we can check out like this. Now the only thing left to do is just deploy
this agency in production and make sure it runs on every pull request automatically. However, to run this agency live, we need to replace the hard-coded pull request number with the number of the pull request that triggered the action. To do so, I basically chatted with ChatGPT and figured out that GitHub uses the GITHUB_EVENT_PATH environment variable to store the path to a file with all of the information about the event that triggered this action. So what we can do is simply parse this file, extract the pull request ID just like that, and then use it inside our URL.
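Here's a short sketch of that parsing step, assuming the workflow is triggered by a pull_request event:

```python
import json
import os

# GITHUB_EVENT_PATH points to a JSON file describing the triggering event.
with open(os.environ["GITHUB_EVENT_PATH"]) as f:
    event = json.load(f)

# For pull_request events, the payload contains the PR number.
pr_number = event["pull_request"]["number"]
```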
I'm going to do the same in the report generator tool. So now that our agency is ready for deployment,
what we can do is simply copy the whole agency folder and then drop it into our OpenAI widget project repository. Add all of the files to GitHub, and don't forget to also add an environment file with your OpenAI key and the GitHub token. Then I'm going to create a new workflow which will essentially just trigger this agency and run it on the backend. To do so, I'm simply going to tell ChatGPT to create a basic GitHub workflow file that executes a Python file. Then I'm going to copy this file and paste it inside our repository.
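For reference, a basic workflow along those lines might look like this; the file name, script path, and Python version are placeholders to adjust for your repository:

```yaml
# .github/workflows/code-analysis.yml (hypothetical path)
name: Code Analysis Agency

on:
  pull_request:

jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install agency-swarm
      # "agency/main.py" is a placeholder for your entry-point script.
      - run: python agency/main.py
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```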
Awesome, so now that our agency and the workflow files are done, we can simply push this code to GitHub. After that, I'm going to intentionally break
one of the files, just so we can test this workflow. So now that I've pushed this code, I'm going to create a pull request to trigger the agency. Perfect. So now as you can see, our action has been
executed on the backend, and we should see the analysis report right here. And it indeed states that a Firestore direct access violation was detected and that the call to the Firestore database was not done through the DB Collections class, as I have specified in my standard operating procedures. It also provides some recommendations on how
to fix this. Honestly, this is incredibly helpful. I'm probably going to run this on almost every single project from now on, so we can always keep track of our code quality standards and ensure that there are no issues before we merge any pull requests. Like always, all of the code will be on my
new Agency Swarm Lab repository, where you can directly copy this agency and use it for your own code base with your own standard operating procedures, or even tailor it for a different process. For example, you can even make it generate code to fix those issues and then commit the fixes on GitHub, or leave comments on the specific lines of code where the problems were encountered using the GitHub API. Thank you for watching, and don't forget to
subscribe.