This is the first in a series of articles that covers Kubernetes from scratch, start to finish. The reason I'm creating this series is that once upon a time I was in your shoes. Suddenly, I found myself working in an infrastructure that uses Kubernetes for virtually everything. I was a newbie back then. I knew that Kubernetes was a container orchestration system like Docker Swarm, but nothing more. I had to search through the official documentation, read many books and articles, and watch videos. The problem was that they were either too theoretical, with very few hands-on labs, or so practical that you never learned why something was done a certain way or what effect an action had on the cluster. So, in this series, I share my knowledge with you in an easy-to-follow way. I spare no theory when it's needed, and I also make sure that every bit of information mentioned has its own practical use case. Ready? Let's jump into the first topic: pods.

What is a Kubernetes Pod?

Assume that you are in a company that has several teams for different types of tasks: marketing, sales, finance, IT, and logistics. Each team consists of one or more employees. To achieve maximum efficiency, each employee is assigned one and only one task, the one that he/she does best. If Kubernetes is the company in our analogy, then containers are the employees and Pods are the teams. A Pod can host one or more containers. Collectively, the containers inside a pod are tasked with the same job, but each container does a different part of it. Back to our analogy, the marketing team may assign one person to collect phone numbers and emails of potential clients, and another to make calls and send promotional emails to the lists created by her colleague. Similarly, a pod may host a container for serving web pages (for example, Nginx) while a second container (often referred to as a sidecar) handles processing the logs that the first container produces. You may be thinking: why not just hand both tasks to one container? Because, as we mentioned earlier, each container (employee) is assigned the one task that it knows how to do best.

A Kubernetes cluster works as a company with teams of employees

Why does Kubernetes use Pods and not just containers?

Because if Kubernetes used just containers, it would be like a company that hires employees but does not assign them to teams, since it - simply - does not have any. Can you imagine that? You have two or three people doing the HR stuff, but each of them is working on his/her own. They don't coordinate their work with each other, they don't share a common workspace, and - more importantly - they don't have a team leader to organize their work and prioritize their tasks. The team leader in our analogy is the Pod Controller. More on that later.

OK, so why don't I just put all my containers in one big pod?

If you were the CEO of our hypothetical company, would you put all your employees (logistics, administrative, finance, sales, customer success, etc.) in just one team? Of course not. Only if your company were really small, with just you and two other people on staff, could you get away with this. Similarly, if your application consists of only a web server that serves static files, then a single pod is all you need. The moment your application gets more complex, you should consider using a pod for every container (or group of containers) that shares the same single responsibility.

Create your first Kubernetes Pod

This article assumes that you already have access to a running Kubernetes cluster. If you don't, you can simply use minikube if you're running Linux. Alternatively, you can use Docker Desktop if you're on a Windows or macOS box. You should also have the kubectl tool installed and configured to access the cluster.

Kubernetes works through the concepts of resources and controllers. A resource contains the necessary instructions that a relevant controller translates into actions. This is referred to as declarative programming, in which you just inform the program about what you want to accomplish and leave the implementation details (the how) to the system to decide. It is the opposite of imperative programming, where you have to explicitly tell the system what to do, step by step.
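To make the contrast concrete, here's a quick sketch of both approaches. The imperative kubectl run command creates a pod by direct instruction, while the declarative kubectl apply hands a definition file over to Kubernetes and lets it work out the steps (mypod.yaml is the file we create below):

# Imperative: tell Kubernetes exactly what to do, step by step
kubectl run webserver --image=nginx

# Declarative: describe the desired state in a file and let
# Kubernetes reconcile the cluster to match it
kubectl apply -f mypod.yaml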

The following is the definition file for a simple Pod that hosts an Nginx-based container. Create a file and call it mypod.yaml (the name is of no significance here). Add the following lines to it:

apiVersion: v1
kind: Pod
metadata:
  name: webserver
spec:
  containers:
  - name: webserver
    image: nginx:latest
    ports:
    - containerPort: 80

Let's have a look at this file before we apply it to our cluster:

Line 1: Specifies the API version under which this resource is defined. Every request that you send to Kubernetes is actually routed as an HTTP request to the API server.

Line 2: The kind of this resource is Pod. Notice that resource kinds always start with a capital letter.

Line 4: The name of the Pod. You can later refer to this pod by its name to modify or destroy it.

Lines 5 and 6: The specification of this Pod and what it will actually do. A Pod can host one or more containers, hence the containers array.

Line 7: The first container is named webserver.

Line 8: We're still referring to the same container since we are at the same indentation level. The image that this container uses is nginx. In this lab, we don't care much about which version of Nginx we'll be using. However, in real-world scenarios, you should always specify the version of the image you're using (see the snippet after this list).

Lines 9 and 10: This container listens on port 80. Notice that the presence or absence of this line does NOT affect whether the pod is accessible on port 80; in all cases, pods listen on the ports that their applications expose on the internal cluster network.
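As an example of the versioning advice above, pinning the image is as simple as replacing the latest tag with a specific one (the version number below is purely illustrative):

  - name: webserver
    image: nginx:1.25.3   # a specific, reproducible version instead of latest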

The mypod.yaml file represents our wishes, but it says nothing about how Kubernetes will act upon it: how it will create the pod, pull the image, expose the port, and so on. All we need to do is send this file over to the API server and watch the results. Although we could use any HTTP client like curl or Postman, since this is just a regular HTTP POST request, using the kubectl tool is highly recommended as it abstracts a lot of the heavy lifting for us (like sending the necessary authentication headers). Let's apply this definition now:

kubectl apply -f mypod.yaml

You should have the following output:

pod/webserver created
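Out of curiosity, here's roughly what that looks like as a raw HTTP request. This is just an illustrative sketch, not something you'd normally do: it uses kubectl proxy to take care of authentication locally before POSTing the file to the API server (I'm assuming the default namespace and the default proxy port here):

# Start a local proxy that handles authentication to the API server
kubectl proxy --port=8001 &

# POST the pod definition directly to the API server
curl -X POST http://localhost:8001/api/v1/namespaces/default/pods \
  -H "Content-Type: application/yaml" \
  --data-binary @mypod.yaml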

But now we need to access our container and ensure that our Nginx web server is running.

Access your Kubernetes Pod

When Pods are created in Kubernetes, they can be reached on the internal network only. So, to access our new pod in this lab, we need a way into that internal network. Using kubectl, this is as easy as running the following command:

kubectl port-forward pod/webserver 8080:80

port-forward is one of the most powerful subcommands of kubectl. It forwards traffic arriving at the local interface (localhost) on a port that you specify (8080 in our case) over the cluster network to a specific resource (pod/webserver) on the specified target port (80). Now, any traffic arriving at localhost:8080 will be routed directly to our pod. Once the pod receives the traffic, it routes it to the container listening on the specified port. Notice that if you have more than one container running in the same pod, they cannot listen on the same port, as all of them share the same pod network interface.

Now, open your browser and navigate to http://localhost:8080. You should see the default Nginx welcome page.
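If you prefer the terminal, you can check the same thing with curl (in a second terminal window, while port-forward is still running). The output should start with something like this:

$ curl -s http://localhost:8080 | head -n 4
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>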

Managing the pod

We can do several things to our pod once it is running. Let's start by monitoring it.

Is our pod running?

The following command is used by Kubernetes novices as well as the most experienced Kubernetes gurus:

kubectl get pods

This will get you all the pods in the current namespace. You should have an output similar to the following:

NAME        READY   STATUS    RESTARTS   AGE
webserver   1/1     Running   0          18m

The output is self-explanatory. Sometimes, however, the STATUS is not Running. That's when you have to dig deeper. When a pod that should be running isn't, there are basically two possibilities:

The pod won't start because of a pod error

There is a problem with starting the pod itself (the image name was incorrect, the image was not found, you need authentication to pull it, there isn't a suitable node to run the pod, etc.). In this case, we use the following command to investigate:

kubectl describe pods pod_name

Let's have a practical example. I will intentionally change the image name to something that does not exist and use the above command to investigate:

$ kubectl get pods
NAME        READY   STATUS         RESTARTS   AGE
webserver   0/1     ErrImagePull   0          34m
$ kubectl edit pods webserver 
$ kubectl describe pods webserver
Name:         webserver
Namespace:    default
------------------------ OUTPUT TRIMMED FOR BREVITY --------------------------------------------
  Normal   BackOff    27s                kubelet, gke-test-lab-default-pool-46f98c95-qsdj  Back-off pulling image "ngin"
  Warning  Failed     27s                kubelet, gke-test-lab-default-pool-46f98c95-qsdj  Error: ImagePullBackOff
  Normal   Pulling    13s (x2 over 28s)  kubelet, gke-test-lab-default-pool-46f98c95-qsdj  pulling image "ngin"
  Warning  Failed     13s (x2 over 28s)  kubelet, gke-test-lab-default-pool-46f98c95-qsdj  Failed to pull image "ngin": rpc error: code = Unknown desc = Error response from daemon: pull access denied for ngin, repository does not exist or may require 'docker login'
  Warning  Failed     13s (x2 over 28s)  kubelet, gke-security-lab-default-pool-46f98c95-qsdj  Error: ErrImagePull

First, we used kubectl edit pods webserver to modify the pod definition on the fly. The command opens a copy of the definition file that was used to create the pod (with some additions that Kubernetes adds). You can make changes to the definition and save the file to apply them. In our case, I changed the image name to ngin instead of nginx to cause an error.

So, from the above output, it's clear that the problem is that the image name is incorrect (ngin). Go ahead and use kubectl edit pods webserver, change the name back to the correct one, and save. You should see that the pod is back in the Running state.
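If you'd rather skip the interactive editor, a non-interactive alternative is kubectl set image, which patches the image of a named container in place (webserver is the container name from our definition):

# Point the webserver container back at the correct image
kubectl set image pod/webserver webserver=nginx:latest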

The pod won't start because of a container error

The second reason why a pod won't start is that the container it hosts has issues starting. In this scenario, the image was successfully pulled and the pod was scheduled to a node, but the container is throwing an error and cannot start. In this case, we use a combination of the previous command and the following one:

kubectl logs pod_name

To see this case in action, modify the mypod.yaml file to look like this:

apiVersion: v1
kind: Pod
metadata:
  name: webserver
spec:
  containers:
  - name: webserver
    image: nginx:latest
    ports:
    - containerPort: 80
    command:
    - thiswontwork

Notice the extra part that we added to the end of the file. The command stanza allows you to override the default command that the container uses to start (in our case, this should be ["nginx", "-g", "daemon off;"]) and replace it with your own command. In this example, we intentionally wrote a gibberish string that cannot be executed. Now, let's apply this:

$ kubectl delete pods webserver
pod "webserver" deleted
$ kubectl apply -f mypod.yaml                                                                                                                            
pod/webserver created
$ kubectl get pods                                                                                                                                        
NAME        READY   STATUS              RESTARTS   AGE
webserver   0/1     RunContainerError   0          5s

First, we delete the running pod. This is required because you cannot change the startup command of a container that has already started. Then, we apply our faulty definition file. Finally, we monitor the pod status and - unsurprisingly - the pod STATUS reports RunContainerError. Assuming that we don't know what the problem is, let's debug:

$ kubectl logs webserver
$ kubectl describe pod webserver
Name:         webserver
Namespace:    default
Priority:     0
---------------------- OUTPUT TRIMMED FOR BREVITY ---------------------------
  Warning  Failed     7m59s (x5 over 9m23s)   kubelet, gke-test-lab-default-pool-46f98c95-qsdj  Error: failed to start container "webserver": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"thiswontwork\": executable file not found in $PATH": unknown
  Warning  BackOff    4m12s (x22 over 8m59s)  kubelet, gke-test-lab-default-pool-46f98c95-qsdj  Back-off restarting failed container

First, we tried using kubectl logs, but the container was never able to start in the first place, so it had no errors to report and we got empty output. Hence, we reverted to the second command in our toolset, kubectl describe pods, and we were able to spot the error: "thiswontwork" is not a command and was not found in the $PATH variable. However, if the container had started but was throwing errors before crashing and restarting again (the pod restarts the container when it crashes), effectively entering what is called a crash loop (reported as CrashLoopBackOff in the pod status), we could have used kubectl logs to see what's happening.
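In a crash loop, the current container instance may have just restarted, so its log is empty or uninteresting. The --previous flag tells kubectl to fetch the logs of the last terminated instance of the container instead:

# Show the logs of the previous (crashed) instance of the container
kubectl logs webserver --previous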

A namespace?

In Kubernetes terminology, a namespace is just a logical grouping of resources. Back to our analogy, assume that our hypothetical company has grown so large that it now has multiple branches in a number of countries. Every branch has its own set of teams (marketing, sales, finance, etc.). So, to avoid ambiguity, each team suffixes its name with the name of the country its branch operates in. For example, finance.spain has hired a new employee, or sales.france has made its target for this quarter. Back to Kubernetes, you may have a pod that lives with other resources in a db namespace, while others live in middleware, application, or frontend namespaces. If you don't specify a namespace when using the kubectl command, a namespace called default is used.
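In practice, working with namespaces looks like this (the db namespace below is just an example name):

# Create a namespace and list the pods inside it
kubectl create namespace db
kubectl get pods --namespace db

# Without --namespace (or its shorthand -n), kubectl uses the default namespace
kubectl get pods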

The Pod Controller: The Team Leader

Let's finish the article with a bit of necessary theory. Throughout the labs we did, we only wrote a definition file, posted it to the API server, and that's it. What created the Pods and the containers inside them? When we changed the image on the fly to an incorrect one, how was that immediately interpreted so that the container was restarted with the faulty image name that we specified? The answer is the Pod Controller. At the start of this article, we mentioned that Kubernetes works by using resources and controllers. The Pod is a resource that can be defined using a YAML or JSON file. Once this file is posted to the API server, the Pod Controller, which is continuously watching the API server for changes, detects that a new resource of type Pod was created (remember the kind: Pod part of the definition?). Once that happens, it parses the definition file for the instructions it needs: the name of the pod, how many containers it will host, what their images are, and so on. Once the pod is running, the Pod Controller is responsible for maintaining the state of the containers running inside it so that they're always in the running state. If a container crashes, it restarts it.

You can think of the Pod Controller as the team leader in our company analogy. The logistics team (pod) needs a leader (controller) to assign work to each member (pull images), find a suitable office (node) for them to work in, and make sure that if a team member (container) needs assistance (crashes), the team leader helps him/her (restarts it).

Deleting the pod

Once we're finished with our pod, we need to delete it. If a pod is deleted, the Pod Controller does nothing about it; it does not restart it again (otherwise, we'd never be able to delete a pod!). Delete our webserver pod using the following command:

kubectl delete pods webserver
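You can confirm that the pod is gone (and that it was not brought back) by listing the pods one more time; the exact wording of the message may vary with your kubectl version:

$ kubectl get pods
No resources found in default namespace.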