This video is aimed at giving you a short but comprehensive overview of the core DevOps tools that you need to build DevOps processes. So let's get to it right away! At the very core of DevOps we have a release pipeline, commonly known as a CI/CD pipeline. So a CI/CD tool is the most essential part of a DevOps engineer's toolkit. The most popular and still most widely used one is Jenkins. There are alternatives too, like GitLab CI, which is becoming really good, as well as GitHub Actions, CircleCI and many more.
So these tools are about creating automated release pipelines, which run tests, build the application, do different types of application scanning and deploy to the end environment. And that involves integrations with Git, a Docker registry, cloud platforms, pipeline as code with a Jenkinsfile and so on.
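Just to make that concrete, here is a minimal sketch of what a declarative Jenkinsfile can look like. The application, the Maven wrapper, the registry address and the deploy script are hypothetical placeholders, not a specific real setup:

```groovy
// Minimal declarative Jenkinsfile sketch (hypothetical app, registry and deploy script).
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                sh './mvnw test'    // run the automated tests
            }
        }
        stage('Build Image') {
            steps {
                sh 'docker build -t my-registry.example.com/my-app:${BUILD_NUMBER} .'
            }
        }
        stage('Push Image') {
            steps {
                sh 'docker push my-registry.example.com/my-app:${BUILD_NUMBER}'
            }
        }
        stage('Deploy') {
            steps {
                sh './deploy.sh'    // placeholder for the actual deploy step
            }
        }
    }
}
```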
Okay, we're testing, releasing and deploying the application, but where are we deploying it to? We need a deployment environment, and that's where cloud platforms like AWS come in. So this is about the AWS services: the virtual instances, security groups around the servers, access to the application running on the server, configuring the server and so on.
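For example, allowing access to an application running on an EC2 instance typically means opening its port in the instance's security group. A small sketch with the AWS CLI, where the security group ID and port are made-up placeholders:

```bash
# Allow inbound HTTPS traffic to the application (hypothetical security group ID).
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 443 \
  --cidr 0.0.0.0/0
```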
Okay cool, we are releasing and deploying applications to AWS virtual machines, let's say, but what exactly are we releasing? And in which form? You need to understand how the application is packaged and how it runs on the end environment. The new standard way of packaging and running applications is Docker. Docker packages software into standardized units called "containers" that have everything the software needs to run, including libraries, system tools, code and runtime. And this improves the development and deployment process: you can quickly deploy and scale applications into any environment and know your code will run. Again, there are similar tools, but Docker wins here as well. So we would create Docker images in the CI/CD pipeline and run the application as a Docker container on an AWS server, for example.
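To give a feel for that packaging, here is a minimal Dockerfile sketch for a hypothetical Node.js application; the base image, port and file names are assumptions for illustration:

```dockerfile
# Minimal Dockerfile sketch for a hypothetical Node.js app.
FROM node:20-alpine            # base image with the runtime
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev          # install only production dependencies
COPY . .
EXPOSE 3000                    # port the app listens on
CMD ["node", "server.js"]      # start the application
```

In the pipeline you would then build and push this image, for example with `docker build -t my-app:1.0 .` followed by `docker push`.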
Now, Docker made it easy to create and run applications, so engineers went wild and scaled applications up, because that's easy to do with Docker. But that made the lives of the application operations teams harder again. With DevOps we are saying there should be no separate Dev and Ops, we want to unify them, so how do we make running dockerized microservices applications easier? Docker is lightweight and cool, but ephemeral and stateless. So how do we restart applications when they fail? How do we scale and replicate applications or microservices when they are getting a lot of requests? How do we run distributed applications like database clusters? How do we make sure the application is always available, even if some parts of it fail? And how do we manage a network of hundreds of containers running on multiple servers?
So Kubernetes, which is a container orchestration platform, comes to the rescue with all these solutions, and with even more complexity. Kubernetes has an auto-healing feature and a network layer that makes thousands of containers seem like part of one server. It has auto-scheduling and much more. Scaling applications up and down as needed is super easy: you just specify the replica count in a Kubernetes Deployment, as in the sketch below. And you can also scale the cluster itself up and down by adding or removing worker nodes or control plane nodes. Now, I know I spent like half of those 10 minutes on Kubernetes alone, but I'm sure you will understand if you know my channel and my passion for Kubernetes.
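Here is a minimal sketch of such a Deployment manifest; the app name, image and replica count are just made-up examples:

```yaml
# Minimal Kubernetes Deployment sketch (hypothetical app name and image).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                  # Kubernetes keeps 3 pods running and replaces failed ones
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-registry.example.com/my-app:1.0
          ports:
            - containerPort: 3000
```

Scaling up is then just changing `replicas` and re-applying the manifest, or running `kubectl scale deployment my-app --replicas=5`.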
Cool, we have thousands or even tens of thousands of containers, and Kubernetes manages a lot of the operations automatically. That's great, but what if things go wrong in the cluster? Let's say we have applications equipped with great logging, so we have all the information, but we can't possibly look into the logs and metrics of thousands of applications manually and see what's going on. Maybe someone is trying to hack into our application and our application is logging and screaming about it, but we don't know. What about third-party applications? Maybe a database is under heavy load, or the servers are under attack, somebody is trying to SSH into them or doing a port scan to see which ports are open, and so on. With so much workload, we need automatic monitoring and alerting in place that uses the data we have in the logs and metrics and alerts us when something is out of the ordinary, whether that's a security attack or maybe a harmless misconfiguration in a Kubernetes manifest file that has created a mess in the cluster. So monitoring and alerting is essential on all levels: infrastructure, runtime and the application itself. For Kubernetes specifically, a popular monitoring tool is Prometheus, which comes with a whole stack for monitoring, alerting and visualizing the metrics data.
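As a small taste of what that looks like in practice, here is a minimal sketch of a Prometheus alerting rule; the metric, threshold and labels are made-up examples, not from the video:

```yaml
# Minimal Prometheus alerting rule sketch (hypothetical metric and threshold).
groups:
  - name: app-alerts
    rules:
      - alert: HighErrorRate
        expr: rate(http_requests_total{status=~"5.."}[5m]) > 0.05
        for: 10m                      # only fire if the condition holds for 10 minutes
        labels:
          severity: warning
        annotations:
          summary: "High 5xx error rate on {{ $labels.instance }}"
```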
Talking about issues in the cluster: they may make the cluster crash and get into a state that we can't recover from. Imagine we configure the cluster on AWS, we have thousands of servers with tens of thousands of containers running on them, we have configured monitoring and a hundred other services in the cluster, and now it's all gone because of a misconfiguration, a hacking attack or whatever. How can we possibly recover all that? How can we recreate this state again? That's where Infrastructure as Code helps, because doing it manually is really difficult, sometimes impossible, or it would take weeks or months. With Infrastructure as Code we actually script this entire setup: spinning up the AWS resources and the Kubernetes cluster, installing all the services. And if something happens, we just run the scripts again and they recreate everything. Terraform is the most popular tool for infrastructure as code.
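To make that a bit more tangible, here is a minimal Terraform sketch that spins up a single AWS virtual machine; the region, AMI variable and instance type are assumptions for illustration:

```hcl
# Minimal Terraform sketch: one EC2 instance (hypothetical region, AMI and size).
provider "aws" {
  region = "eu-central-1"
}

variable "app_ami" {
  description = "AMI ID for the application server (placeholder)"
  type        = string
}

resource "aws_instance" "app_server" {
  ami           = var.app_ami
  instance_type = "t3.micro"

  tags = {
    Name = "app-server"
  }
}
```

Running `terraform apply` then creates, or recreates, this infrastructure from the code.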
Now, sometimes we're working directly on the operating system, like installing packages or doing security patches, for example on the Kubernetes worker nodes, and that's where configuration management tools like Ansible may be helpful. Again, at the scale of Kubernetes we may have hundreds or thousands of worker nodes, and if you need to apply a security patch on those or upgrade to the latest container runtime, you don't want to log in to each server manually and execute the scripts. With Ansible, you write a playbook once, provide it with a list of servers as targets, and it will automatically push out and execute the tasks on those targets and give you a nice summary of the resulting state.
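For example, here is a minimal Ansible playbook sketch for patching a group of worker nodes; the inventory group name and the assumption that the nodes run an apt-based Linux are mine, not from the video:

```yaml
# Minimal Ansible playbook sketch: apply OS updates on all worker nodes
# (assumes an inventory group called "workers" and apt-based systems).
- name: Patch Kubernetes worker nodes
  hosts: workers
  become: yes
  tasks:
    - name: Update package cache and apply all upgrades
      ansible.builtin.apt:
        update_cache: yes
        upgrade: dist
```

You would run it with something like `ansible-playbook -i inventory patch-workers.yml` against your server inventory.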
Now, infrastructure as code is code, and configuration as code is also code. Again, if you're writing a Jenkinsfile, that's also code, and the same goes for Dockerfiles or Kubernetes manifest files. So we need to write all of these in a code editor such as Visual Studio Code, which provides a bunch of plugins and features for specific languages and tools that actually help you write those scripts, with integrated auto-completion, error checking and so on. It's a simple tool, but it is definitely a needed one in DevOps.
Now, obviously you aren't working alone. Well, hopefully not! You work in a team with other engineers, and as a DevOps engineer you aren't coding the application features themselves, but you are writing pipeline code, Dockerfiles, Helm charts and so on, so basically code which is part of the application, or you are writing infrastructure code scripts, which live in a separate project. Well, you need to make that code available and transparent for your team and other engineers, ideally with a history of changes and ideally with its own release pipeline, so that infrastructure changes are applied the same way as application changes. That's where you need knowledge of Git, to do all of that with your infrastructure code as well as to simply collaborate with other engineers on code changes.
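That day-to-day collaboration is just the usual Git workflow; a minimal sketch, with a made-up repository and branch name:

```bash
# Typical Git workflow for an infrastructure code change (hypothetical repo and branch names).
git clone git@gitlab.example.com:team/infrastructure.git
cd infrastructure
git checkout -b update-worker-node-size      # work on a feature branch
# ... edit Terraform / Kubernetes / pipeline files ...
git add .
git commit -m "Increase worker node instance type"
git push -u origin update-worker-node-size   # then open a merge/pull request from this branch
```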
Now, this is an obvious one, but you can't do much if you don't know Linux and the Linux command line. A Docker container is like a lightweight virtual computer, mostly based on Linux, and the worker nodes in Kubernetes are servers that mostly run a Linux operating system. So even with Infrastructure as Code and all the automation, you will still be working a lot with Linux and with the command line interface. So that's kind of a must here.
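Just as an illustration of the kind of commands that come up constantly on a server; the host address and service name here are hypothetical:

```bash
# Everyday Linux commands on a server (hypothetical host and service name).
ssh admin@10.0.1.15                   # connect to the server
systemctl status kubelet              # check whether a service is running
journalctl -u kubelet --since today   # read its logs
df -h                                 # check disk usage
top                                   # check CPU and memory usage
```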
Now, as you see, when building DevOps processes these tools need to be combined and used together. So even if you know them individually, you need to learn how to integrate them: for example, deploying from Jenkins to a Kubernetes environment which is running on AWS and has AWS service integrations, with all of that written in Terraform. And again, for that Terraform code living in a Git repository you may build its own CI/CD pipeline. And all of this is containerized, even the Jenkins instance may be running as a container.
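Purely as an illustration of such an integration, here is a rough sketch of a deploy stage you could add to a Jenkinsfile like the one sketched earlier, rolling out a new image to Kubernetes; the credentials ID, image name and Deployment name are hypothetical:

```groovy
// Sketch of a Jenkins deploy stage updating a Kubernetes Deployment
// (hypothetical kubeconfig credentials ID, image and deployment names).
stage('Deploy to Kubernetes') {
    steps {
        withCredentials([file(credentialsId: 'kubeconfig', variable: 'KUBECONFIG')]) {
            sh 'kubectl set image deployment/my-app my-app=my-registry.example.com/my-app:${BUILD_NUMBER}'
            sh 'kubectl rollout status deployment/my-app'
        }
    }
}
```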
And learning these tools in isolation is already challenging, but learning to combine them in a secure, properly configured way, following industry best practices, is way more challenging. That's exactly why we created the DevOps Bootcamp and are now working on a DevSecOps course to teach exactly that: building complete DevOps and DevSecOps processes with all these tools and more. And, more importantly, teaching the underlying concepts for each step so that you can easily replace and swap out the tools when you need to, because when you understand what you are doing and why on a conceptual level, tools just become means to an end and are easily replaceable. For us that was an extremely important part of creating those courses. If
you want to learn all that or get more details, you can check out the video description for more
information on our courses and programs. Now I hope I was able to give you some valuable, quick
information in this video. Feel free to share the video with others who want to get a short overview of DevOps tools, and also let us know in the comments what interesting or exciting DevOps tools you work with besides the ones I mentioned here. And with that, thank you
for watching and see you in the next video! :)