A small collection of rants and observations about Kubernetes :)
> Connascence (/kəˈneɪsəns/) is a software quality metric invented by Meilir Page-Jones to allow reasoning about the complexity caused by dependency relationships in object-oriented design much like coupling did for structured design. In software engineering, two components are connascent if a change in one would require the other to be modified in order to maintain the overall correctness of the system. In addition to allowing categorization of dependency relationships, connascence also provides a system for comparing different types of dependency. Such comparisons between potential designs can often hint at ways to improve the quality of the software.
>
> (source: Wikipedia)
The resources described in k8s often have strong relationships with each other, but those relationships are only loosely expressed in the YAML code. A classic example is a Service matching on Pod labels, or cert-manager provisioning a Secret that the pod then has to mount under exactly the same name.
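A minimal sketch of the first example (all names are made up): the Service's `selector` must match the Pod template's `labels`, but nothing in either manifest declares that dependency.

```yaml
# deployment.yaml (hypothetical)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: web          # must match the template labels below...
  template:
    metadata:
      labels:
        app: web        # ...and the Service selector
    spec:
      containers:
      - name: web
        image: nginx:1.25
---
# service.yaml (hypothetical)
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web            # connascent with the Pod labels above
  ports:
  - port: 80
```

Rename `app: web` in one file but not the other and both manifests still apply cleanly; the Service just silently stops routing traffic.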
You have a collection of resources that are connected to each other, but their relationship is not obvious. The side-effect is that a change to one resource can silently break another, with no warning at deploy time.
Kubernetes pretends that it abstracts away the platform. This is a lie.
Pretty much every deployment will rely on metadata annotations: little snippets of extra, platform-specific config.
It’s even in the official documentation:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
```
Fundamentally, I like to compose things together. But whenever a resource needs a special metadata annotation to work, that's inherently inheritance, not composition: the resource has to know which platform it will run on.
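To make that concrete: annotations are free-form key/value strings, so the API server validates neither the key nor whether any controller will ever read it. A hedged sketch, reusing the annotation from the Ingress example above:

```yaml
metadata:
  annotations:
    # Read only by ingress-nginx; Traefik, HAProxy, etc. each have their
    # own incompatible annotation vocabulary for the same feature.
    nginx.ingress.kubernetes.io/rewrite-target: /
    # A typo produces no error at all, just an unknown key nothing reads:
    # nginx.ingress.kubernetes.io/rewrite-tagret: /
```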
A platform truly fails when it is not able to curtail complexity.
If you deploy k8s, pretty soon your DevOps people will want to deploy Istio, Knative, monitoring in a second cluster, … The platform has reached a critical mass of complexity that generates its own complexity.
Everybody who starts with Kubernetes is hit with this issue: a pod didn't start. How do I track down the problem? The answer should be: connect to the cluster and read the logs.
But the pod might fail because of an ImagePullBackOff, which only appears in the events, not in the pod logs.
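In practice you end up needing both places (the pod name here is hypothetical, and these commands of course assume a reachable cluster):

```shell
# Logs: only useful once the container has actually started
kubectl logs my-pod

# Events such as ImagePullBackOff live here instead
kubectl describe pod my-pod   # see the Events: section at the bottom
kubectl get events --field-selector involvedObject.name=my-pod
```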
```
 _____
< EOF >
 -----
    \   (\/)
     \  (_o |
        /  |
        \  \______
         \        )o
          /|----- |
          \|     /|
```