Tips and Tricks for Setting Up and Configuring Your Kubernetes Cluster to Orchestrate Containers
This note outlines some tips and tricks to be aware of when embarking on the installation and configuration of a Kubernetes cluster. Such an endeavor is worthwhile only when an enterprise-grade container orchestration solution is genuinely required.
Recommendations
Kubernetes is an excellent solution for those who need to scale workloads quickly, efficiently, and with high availability, whether in one cloud or across clouds. However, a prerequisite of this process is to containerize all the applications, services, data, and workloads that need to be scaled in the cloud. These components also need to be developed or modified to operate asynchronously and to cope with the ephemeral nature of storage in the container environment.
Administration and Upgrades
- When a Kubernetes feature is deprecated, plan to replace that functionality before it is removed in a future release. Otherwise you risk a loss of functionality or, worse, a complete loss of container orchestration.
- Windows Server 2019 is the only Windows variant supported for worker nodes. Kubernetes was built to work most effectively on Linux platforms such as Ubuntu.
- Kubernetes releases often introduce changes that break older versions. Use `apt-mark hold kubelet kubeadm kubectl` to stop package updates from pulling the latest versions down before you are ready to upgrade.
- You can use `kubectl auth can-i` in a script to test whether the user has the required permissions before executing commands. This is especially useful in automated scripts.
- The `kubeadm init` command creates a bootstrap token used to join worker nodes to the control plane; by default it expires after 24 hours. Once it expires, generate a new one with `kubeadm token create` before connecting additional nodes.
- You can deploy an ephemeral container into a pod just for troubleshooting purposes. It can carry a troubleshooting toolchain and can access the filesystem and processes of the other containers in that pod.
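The package-pinning, permission-check, and token tips above can be sketched as shell commands. This assumes a Debian/Ubuntu node managed with kubeadm; the namespace and pod name are hypothetical:

```shell
# Pin the Kubernetes packages so a routine apt upgrade cannot
# pull in a newer version that breaks the cluster.
sudo apt-mark hold kubelet kubeadm kubectl

# In an automated script, verify permissions before acting.
if kubectl auth can-i delete pods --namespace staging; then
  kubectl delete pod old-worker-pod --namespace staging
fi

# The bootstrap token from 'kubeadm init' expires (24 hours by
# default). Generate a fresh token, and the full join command,
# on the control plane before adding more worker nodes.
kubeadm token create --print-join-command
```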
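The ephemeral-container tip can be exercised with `kubectl debug`; the pod name, target container, and choice of `busybox` as the tool image are illustrative:

```shell
# Attach an ephemeral debugging container to a running pod,
# targeting an existing container so its process namespace
# (and via /proc, its filesystem) can be inspected.
kubectl debug -it my-pod --image=busybox --target=my-app-container
```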
Configuration
Implementing Specific Patterns
- With the right network connectivity in place, you could run a single Kubernetes cluster across multiple clouds, which enables placing workloads wherever they run most cost-effectively (FinOps).
- It is good practice to set resource requests and limits on containers; these are in turn passed to the container runtime.
- Scripts need to handle HTTP 409 Conflict errors, which occur when two users issue conflicting commands against the same resources at the same time.
- Use CronJobs to scale to zero overnight and warm-start with a set number of pods in the morning; the Horizontal Pod Autoscaler can take it from there.
- One could configure the cluster to run a minimum number of small or extra-small worker nodes at night when resource needs are low, then scale to larger nodes during the day when resource needs are higher.
- If you need persistence beyond a single container, use a pod-level volume. The data will persist for the life of the pod, not the container.
- Use a traffic policy of Local to prioritize endpoints on the same node, if possible, to reduce latency (this requires that a local endpoint exists on that node). To guarantee one, use a DaemonSet to ensure there is one pod of that type on each node during scheduling.
- Use podAntiAffinity to spread replicas of a pod across nodes for higher availability.
- Use podAffinity to co-locate pods that are particularly chatty with each other.
- You can use taints on nodes, together with matching tolerations on pods, to control which applications can run on which nodes. You might use this to reserve a GPU-enabled node, for example.
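The requests-and-limits practice above can be sketched in a pod spec; the names, image, and values are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx:1.25
      resources:
        requests:          # what the scheduler reserves for the container
          cpu: "250m"
          memory: "128Mi"
        limits:            # what the runtime enforces at the ceiling
          cpu: "500m"
          memory: "256Mi"
```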
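One way to implement the overnight scale-to-zero tip is a CronJob that runs `kubectl scale` inside the cluster. This is a sketch: the deployment name, schedule, `bitnami/kubectl` image, and `scaler` ServiceAccount (which would need RBAC permission to scale deployments) are all assumptions:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: scale-down-overnight
spec:
  schedule: "0 22 * * *"            # 22:00 every day
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: scaler   # needs RBAC rights to scale deployments
          restartPolicy: OnFailure
          containers:
            - name: scale
              image: bitnami/kubectl:latest
              command: ["kubectl", "scale", "deployment/web", "--replicas=0"]
```

A matching morning CronJob with `--replicas=3` (or whatever warm-start count suits the workload) hands control back to the Horizontal Pod Autoscaler.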
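Pod-level persistence, as described above, can be sketched with an `emptyDir` volume; pod and mount names are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cache-pod
spec:
  containers:
    - name: app
      image: nginx:1.25
      volumeMounts:
        - name: scratch
          mountPath: /cache
  volumes:
    - name: scratch
      emptyDir: {}    # lives as long as the pod; survives container restarts
```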
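The node-local traffic tip can be sketched as a Service with `internalTrafficPolicy: Local`; the service name and selector are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: local-svc
spec:
  selector:
    app: node-agent      # e.g. pods managed by a DaemonSet
  ports:
    - port: 80
  internalTrafficPolicy: Local   # only route to endpoints on the calling node
```

Pairing this with a DaemonSet ensures every node has a local endpoint, so traffic is never dropped for lack of one.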
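The podAntiAffinity and podAffinity tips can be sketched together in a Deployment's pod template; the `app: web` and `app: cache` labels are hypothetical:

```yaml
# Inside a Deployment's spec.template.spec:
affinity:
  podAntiAffinity:               # spread replicas of this app across nodes
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: web
        topologyKey: kubernetes.io/hostname
  podAffinity:                   # prefer landing near the chatty cache it calls
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: cache
          topologyKey: kubernetes.io/hostname
```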
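The taint tip can be sketched in two halves: taint the node, then give only the GPU workload a matching toleration. The node name and key/value are illustrative:

```shell
# Repel ordinary pods from the GPU node.
kubectl taint nodes gpu-node-1 gpu=true:NoSchedule
```

```yaml
# In the GPU workload's pod spec, tolerate that taint:
tolerations:
  - key: "gpu"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
```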
Extending Kubernetes
Bottom Line
While Kubernetes is an excellent solution for container orchestration, one should understand the underlying architecture to take advantage of the full power of the solution. As there is no complete architecture diagram in the Kubernetes documentation from the Linux Foundation, here is a good meta model with most of the main elements defined:
Want to Know More?
Please book a call with an analyst to talk about this in more detail.