When you deploy an application to Kubernetes, you usually create the following three
components: a Deployment, which is a blueprint for how to run your application and how
many replicas you want; a Service, which is an internal load balancer that routes
traffic to your Pods; and an Ingress, which describes how to route traffic
from outside the cluster to your Service. This is the most basic setup. In other words,
your applications are exposed through two layers of load balancers: the internal one is
the Service, and the external one is the Ingress. Also, keep in mind that Pods are not
deployed directly; instead, they are managed by a higher-level abstraction, the Deployment.
Let's say you want to deploy a simple "Hello World" application to Kubernetes. The YAML
would include three objects: the Deployment, the Service, and the Ingress. If you have
never used Kubernetes before, this may look a bit complex, especially how those components
relate to each other. For example, when should you use port 7070 and when port 8080?
Should you use a different port number for each Service so that they don't clash?
What about labels? Should you use the same labels everywhere?
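As a sketch, a minimal set of manifests might look like this (the app name, image, and port numbers are hypothetical; what matters is how the fields line up):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-world        # must match the Pod template labels below
  template:
    metadata:
      labels:
        app: hello-world      # the Service selects Pods by this label
    spec:
      containers:
        - name: app
          image: ghcr.io/example/hello-world:1.0   # hypothetical image
          ports:
            - containerPort: 8080                  # the port the app listens on
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  selector:
    app: hello-world          # matches the Pod labels above
  ports:
    - port: 7070              # the Service's own port
      targetPort: 8080        # must match containerPort
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-world   # must match the Service name
                port:
                  number: 7070      # must match the Service's port
```

Notice the chain: the Ingress points at service.name and service.port, the Service forwards its port to targetPort, and targetPort must match the containerPort; separately, the Service selector must match the Pod labels. Most broken setups come down to a mismatch in one of these links.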
Before focusing on debugging Kubernetes, let’s refresh how to link all those components,
and we’ll start with Deployment and Service. At this point, your pods should
be running and in a ready state, and the service can distribute traffic to
the pods. However, if you still can't see a response from your app, it most likely
means that the Ingress is misconfigured. Since the Ingress controller is a
third-party component in the cluster, there are different debugging techniques depending
on the type of Ingress controller. There are also cloud-native ingress controllers
that use a managed Layer 7 load balancer, such as the AWS Application Load Balancer, but
for this video, we'll focus on common issues. Now, before diving into Ingress-specific tools,
there's something straightforward that you could check. The Ingress uses the service.name and
service.port fields to connect to the Service, so first check that these are configured
correctly. You can use `kubectl describe ingress <ingress-name>`. If the Backends column
is empty, there must be an error in the configuration. If you can see the endpoints in the Backends
column but still can't access the application, it may be an infrastructure problem. Check
whether the underlying load balancer that the Ingress uses is properly exposed to the
internet, and how your cluster as a whole is exposed. To pinpoint the problem, you can isolate infrastructure issues from
Ingress by connecting directly to the Ingress Pod. First, retrieve the Pod for
your Ingress controller; in my case, it lives in the ingress-nginx namespace
(`kubectl get pods -n ingress-nginx`). You can describe the Pod to find its exposed ports
(`kubectl describe pod <pod-name> -n ingress-nginx`) and finally connect to it with
`kubectl port-forward`, for example forwarding local port 3000 to port 80 on the Pod.
At this point, every time you visit port 3000 on your computer,
the request is forwarded to port 80 on the Pod. If it works now and you can access
the application, the issue is in the infrastructure, and you should investigate how
traffic is routed to your cluster. If it doesn't work, the problem lies with the Ingress
controller, and you should debug the Ingress itself. There are many different ingress controllers
such as Nginx, HAProxy, Traefik, and others. So, you should check the documentation of your Ingress
controller to find a troubleshooting guide. You should always remember to approach
the problem from the bottom up: start with the Pods and move up the stack
to the Service and Ingress. You can also apply the same debugging techniques to Jobs,
CronJobs, StatefulSets, and DaemonSets. Soon, I’ll create a dedicated tutorial on how to
debug the Ingress with TLS and cert-manager. If you want to learn more about Kubernetes,
I have a playlist, and you may also be interested in my videos on Kafka and benchmarks. Thank you for
watching, and I’ll see you in the next video.
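As a recap, the bottom-up flow from this video can be sketched as a short checklist of kubectl commands (the resource names, label, and namespace are placeholders for your own setup):

```shell
# 1. Pods: are they Running and Ready?
kubectl get pods -l app=hello-world

# 2. Service: does it have endpoints, i.e. does the selector match the Pods?
kubectl get endpoints hello-world

# 3. Ingress: does it resolve the backend Service and port?
kubectl describe ingress hello-world

# 4. Bypass the infrastructure: talk to the Ingress controller Pod directly.
kubectl get pods -n ingress-nginx
kubectl port-forward -n ingress-nginx <ingress-pod-name> 3000:80
# Then open http://localhost:3000 and compare with the public URL.
```

If a step fails, fix it before moving up the stack; each layer depends on the one below it.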