In-place Pod Vertical Scaling in Kubernetes 1.27
In-place Pod Vertical Scaling is one of those obviously wanted features, and it finally appeared in the latest Kubernetes release, 1.27, in the alpha stage. It was never clear to me why I had to restart a pod every time I modified its resources, especially in break-glass scenarios where a single change triggers a lengthy rolling update. The same applies to the Vertical Pod Autoscaler, which becomes much more usable with this feature.
This feature has triggered a pretty extensive discussion; the user documentation is available here.
Let’s walk through it in a live demo using a kind cluster:
$ cat kind.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
featureGates:
"InPlacePodVerticalScaling": true
nodes:
- role: control-plane
image: kindest/node:v1.27.0
- role: worker
image: kindest/node:v1.27.0
$ kind create cluster --config kind.yaml
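# quick sanity check that the control-plane and worker nodes came up
$ kubectl get nodes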
$ cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: "nginx"
namespace: default
labels:
app: "nginx"
spec:
containers:
- name: nginx
image: "nginx:latest"
resizePolicy:
- resourceName: "cpu"
restartPolicy: "NotRequired"
- resourceName: "memory"
restartPolicy: "NotRequired"
resources:
limits:
cpu: 200m
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
restartPolicy: Always
$ kubectl apply -f pod.yaml
# let's find out what the values look like in the Linux cgroup v2 hierarchy on the worker node
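# to inspect these files we first need a shell on the kind worker node (the node container name kind-worker is the kind default; adjust if yours differs)
$ docker exec -it kind-worker bash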
root@kind-worker:/sys/fs/cgroup/kubelet.slice/kubelet-kubepods.slice/kubelet-kubepods-burstable.slice/kubelet-kubepods-burstable-pod1099aaae_3a28_4f78_86c2_bdccc54f22f4.slice/cri-containerd-bc261e8d2a6eb2d5bf454d9fe9a14cc5efea22b475f21cd0bfe2f32e6e3fc03a.scope# cat cpu.max
20000 100000
root@kind-worker:/sys/fs/cgroup/kubelet.slice/kubelet-kubepods.slice/kubelet-kubepods-burstable.slice/kubelet-kubepods-burstable-pod1099aaae_3a28_4f78_86c2_bdccc54f22f4.slice/cri-containerd-bc261e8d2a6eb2d5bf454d9fe9a14cc5efea22b475f21cd0bfe2f32e6e3fc03a.scope# cat memory.max
209715200
# as expected: 20000/100000 µs corresponds to the 200m CPU limit and 209715200 bytes is exactly 200Mi
# now change the cpu/memory limits to 500m/800Mi and the cpu/memory requests to 400m/500Mi
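# the resize itself is applied from the host by patching the pod spec in place; one way is a strategic merge patch (this exact command is my own sketch, not from the original demo)
$ kubectl patch pod nginx --patch '{"spec":{"containers":[{"name":"nginx","resources":{"limits":{"cpu":"500m","memory":"800Mi"},"requests":{"cpu":"400m","memory":"500Mi"}}}]}}'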
root@kind-worker:/sys/fs/cgroup/kubelet.slice/kubelet-kubepods.slice/kubelet-kubepods-burstable.slice/kubelet-kubepods-burstable-pod1099aaae_3a28_4f78_86c2_bdccc54f22f4.slice/cri-containerd-bc261e8d2a6eb2d5bf454d9fe9a14cc5efea22b475f21cd0bfe2f32e6e3fc03a.scope# cat cpu.max
50000 100000
root@kind-worker:/sys/fs/cgroup/kubelet.slice/kubelet-kubepods.slice/kubelet-kubepods-burstable.slice/kubelet-kubepods-burstable-pod1099aaae_3a28_4f78_86c2_bdccc54f22f4.slice/cri-containerd-bc261e8d2a6eb2d5bf454d9fe9a14cc5efea22b475f21cd0bfe2f32e6e3fc03a.scope# cat memory.max
838860800
# looks OK: 50000/100000 µs is the 500m CPU limit and 838860800 bytes is 800Mi; the pod's .status.containerStatuses also reflects the change
$ kubectl get pod nginx -o yaml
...
containerStatuses:
- allocatedResources:
    cpu: 400m
    memory: 500Mi
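# a quicker way to pull just that field (the jsonpath expression is my own; adjust the container index if needed)
$ kubectl get pod nginx -o jsonpath='{.status.containerStatuses[0].allocatedResources}'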
# when I ask for more resources than this node can provide, the kubelet marks the resize as infeasible
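# for illustration only: the request value below is my own example, not from the original demo
$ kubectl patch pod nginx --patch '{"spec":{"containers":[{"name":"nginx","resources":{"requests":{"cpu":"100"},"limits":{"cpu":"100"}}}]}}'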
$ kubectl get pod nginx -o yaml
...
resize: Infeasible
Nice, let’s hope it graduates to stable soon.