
SadServers: medium task - Kubernetes Pod Crashing
SadServers is a platform that provides problem-solving tasks: you receive a server from them and simply solve the problem. It’s an intriguing platform for learning debugging.
I encourage you to first attempt to solve the task on your own, then observe how I approached it. If you find yourself stuck, you can always refer back here.
Website: https://sadservers.com/
Task: “Buenos Aires”: Kubernetes Pod Crashing
Solution:
I begin by reading the task description attentively. Description: There are two brother pods, logger and logshipper, residing in the default namespace. Unfortunately, logshipper is experiencing an issue (crashlooping) and is unable to see what logger is trying to communicate. Could you assist in fixing Logshipper? You can check the status of the pods with ‘kubectl get pods’.
As the task type is ‘Fix’, we need to resolve the problem in the Kubernetes cluster.
Okay, now that we have the description, let’s check the status of the pods:
sudo kubectl get pods
Result:
admin@i-0b93d760028a401a4:~$ sudo kubectl get pod
NAME                         READY   STATUS    RESTARTS       AGE
logger-574bd75fd9-wpg4w      1/1     Running   5 (75s ago)    56d
logshipper-7d97c8659-npscd   1/1     Running   20 (56d ago)   56d
admin@i-0b93d760028a401a4:~$
Now I want to check what is happening in logger, so:
sudo kubectl logs logger-574bd75fd9-wpg4w
Result:
Hi, My brother logshipper cannot see what I am saying. Can you fix him
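Before jumping to permissions, it can also help to look at logshipper’s own logs. A quick, hedged check (the --previous flag shows the output of the last terminated container, which is useful when a pod keeps restarting):
sudo kubectl logs logshipper-7d97c8659-npscd
# If the container has restarted, inspect the previous instance's output:
sudo kubectl logs logshipper-7d97c8659-npscd --previous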
Okay, let’s now focus on logshipper and check which service account is attached to the pod:
sudo kubectl get pod logshipper-7d97c8659-npscd -o yaml | grep serviceAccount
Result:
serviceAccount: logshipper-sa
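The service account by itself does not grant anything; its permissions come from a Role or ClusterRole attached through a (Cluster)RoleBinding. A minimal sketch of how to trace from logshipper-sa to its ClusterRole (the grep patterns are only illustrative), which is how we end up at logshipper-cluster-role below:
sudo kubectl get clusterrolebindings -o wide | grep logshipper
# Or search the bindings for the service account as a subject:
sudo kubectl get clusterrolebindings -o yaml | grep -B10 'name: logshipper-sa'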
Okay, so we have a service account. Let’s check what is happening with the ClusterRole behind it.
sudo kubectl get clusterrole logshipper-cluster-role -o yaml
Result:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    descripton: Think about what verbs you need to add.
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{"descripton":"Think about what verbs you need to add."},"name":"logshipper-cluster-role"},"rules":[{"apiGroups":[""],"resources":["namespaces","pods","pods/log"],"verbs":["list"]}]}
  creationTimestamp: "2023-10-23T02:40:43Z"
  name: logshipper-cluster-role
  resourceVersion: "1286"
  uid: eb5541f8-870c-40e0-ac57-17f7c4d24fd5
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  - pods
  - pods/log
  verbs:
  - list
Here we need to add the get verb, and also watch, since logshipper needs to read (follow) the logs.
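There are a couple of ways to apply the change. A minimal sketch using a JSON patch (the verb set ["get", "list", "watch"] is my assumption based on the hint in the annotation; you can just as well kubectl edit the ClusterRole and add the verbs by hand):
sudo kubectl patch clusterrole logshipper-cluster-role --type='json' \
  -p='[{"op": "replace", "path": "/rules/0/verbs", "value": ["get", "list", "watch"]}]'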
Then you need to restart the deployment via a rollout restart:
sudo kubectl rollout restart deployment logshipper
Result:
deployment.apps/logshipper restarted
Now we can check the pods once again, and everything should be OK.
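A quick way to verify (the pod name changes after the rollout, so targeting the deployment for the logs avoids hunting for the new name):
sudo kubectl get pods
sudo kubectl logs deployment/logshipper --tail=20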
Summary
In this article, I showed you how to solve a medium-level problem from SadServers. It’s a good exercise in Kubernetes troubleshooting. I hope you enjoyed it, and see you next time.