Manage Cluster Daemons
1 - Perform a Rolling Update on a DaemonSet
This page shows how to perform a rolling update on a DaemonSet.
Before you begin
You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube or you can use one of the Kubernetes playgrounds.
DaemonSet Update Strategy
DaemonSet has two update strategy types:
OnDelete
: With OnDelete update strategy, after you update a DaemonSet template, new DaemonSet pods will only be created when you manually delete old DaemonSet pods. This is the same behavior of DaemonSet in Kubernetes version 1.5 or before.

RollingUpdate
: This is the default update strategy. With RollingUpdate update strategy, after you update a DaemonSet template, old DaemonSet pods will be killed, and new DaemonSet pods will be created automatically, in a controlled fashion. At most one pod of the DaemonSet will be running on each node during the whole update process.
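For reference, switching to OnDelete is a one-line change in the DaemonSet spec. A minimal sketch, showing only the relevant stanza:

spec:
  updateStrategy:
    type: OnDelete   # pods are replaced only when you delete them yourself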
Performing a Rolling Update
To enable the rolling update feature of a DaemonSet, you must set its .spec.updateStrategy.type to RollingUpdate.
You may want to set .spec.updateStrategy.rollingUpdate.maxUnavailable (defaults to 1), .spec.minReadySeconds (defaults to 0) and .spec.updateStrategy.rollingUpdate.maxSurge (a beta feature that defaults to 0) as well.
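For illustration, these fields might be combined in a DaemonSet spec like this (the values shown are examples, not recommendations):

spec:
  minReadySeconds: 5          # a new pod must stay Ready this long before it counts as available
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1       # at most one node's pod may be unavailable at a time
      maxSurge: 0             # don't start a replacement pod before the old one is removed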
Creating a DaemonSet with RollingUpdate update strategy
This YAML file specifies a DaemonSet with the RollingUpdate update strategy:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      # these tolerations are to have the daemonset runnable on control plane nodes
      # remove them if your control plane nodes should not run pods
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
After verifying the update strategy of the DaemonSet manifest, create the DaemonSet:
kubectl create -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml
Alternatively, use kubectl apply to create the same DaemonSet if you plan to update the DaemonSet with kubectl apply.
kubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml
Checking DaemonSet RollingUpdate update strategy
Check the update strategy of your DaemonSet, and make sure it's set to RollingUpdate:
kubectl get ds/fluentd-elasticsearch -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}' -n kube-system
If you haven't created the DaemonSet in the system, check your DaemonSet manifest with the following command instead:
kubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml --dry-run=client -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}'
The output from both commands should be:
RollingUpdate
If the output isn't RollingUpdate, go back and modify the DaemonSet object or manifest accordingly.
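One way to change the strategy on a live DaemonSet, besides editing the manifest, is a patch. For example, a sketch using a JSON merge patch:

kubectl patch ds/fluentd-elasticsearch -n kube-system --type merge -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}'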
Updating a DaemonSet template
Any updates to a RollingUpdate DaemonSet .spec.template will trigger a rolling update. Let's update the DaemonSet by applying a new YAML file. This can be done with several different kubectl commands.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      # these tolerations are to have the daemonset runnable on control plane nodes
      # remove them if your control plane nodes should not run pods
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
Declarative commands
If you update DaemonSets using configuration files, use kubectl apply:
kubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset-update.yaml
Imperative commands
If you update DaemonSets using imperative commands, use kubectl edit:
kubectl edit ds/fluentd-elasticsearch -n kube-system
Updating only the container image
If you only need to update the container image in the DaemonSet template, i.e. .spec.template.spec.containers[*].image, use kubectl set image:
kubectl set image ds/fluentd-elasticsearch fluentd-elasticsearch=quay.io/fluentd_elasticsearch/fluentd:v2.6.0 -n kube-system
Watching the rolling update status
Finally, watch the rollout status of the latest DaemonSet rolling update:
kubectl rollout status ds/fluentd-elasticsearch -n kube-system
When the rollout is complete, the output is similar to this:
daemonset "fluentd-elasticsearch" successfully rolled out
Troubleshooting
DaemonSet rolling update is stuck
Sometimes, a DaemonSet rolling update may be stuck. Here are some possible causes:
Some nodes run out of resources
The rollout is stuck because new DaemonSet pods can't be scheduled on at least one node. This can happen when the node is running out of resources.
When this happens, find the nodes that don't have the DaemonSet pods scheduled by comparing the output of kubectl get nodes and the output of:
kubectl get pods -l name=fluentd-elasticsearch -o wide -n kube-system
Once you've found those nodes, delete some non-DaemonSet pods from the node to make room for new DaemonSet pods.
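If you use bash, a one-liner along these lines can surface the nodes that are missing a DaemonSet pod (this helper is illustrative and relies on process substitution):

# print node names that appear in the node list but not in the pod list
comm -23 \
  <(kubectl get nodes -o name | sed 's|node/||' | sort) \
  <(kubectl get pods -l name=fluentd-elasticsearch -n kube-system -o jsonpath='{range .items[*]}{.spec.nodeName}{"\n"}{end}' | sort -u)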
Broken rollout
If the recent DaemonSet template update is broken, for example, the container is crash looping, or the container image doesn't exist (often due to a typo), the DaemonSet rollout won't progress.
To fix this, update the DaemonSet template again. The new rollout won't be blocked by previous unhealthy rollouts.
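For example, if the image tag in the earlier kubectl set image step had been a typo, you could point the DaemonSet back at the image that was known to work:

kubectl set image ds/fluentd-elasticsearch fluentd-elasticsearch=quay.io/fluentd_elasticsearch/fluentd:v2.5.2 -n kube-system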
Clock skew
If .spec.minReadySeconds is specified in the DaemonSet, clock skew between the control plane and the nodes will make the DaemonSet unable to detect the right rollout progress.
Clean up
Delete the DaemonSet from a namespace:
kubectl delete ds fluentd-elasticsearch -n kube-system
2 - Perform a Rollback on a DaemonSet
This page shows how to perform a rollback on a DaemonSet.
Before you begin
You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube or you can use one of the Kubernetes playgrounds.
Your Kubernetes server must be at or later than version 1.7. To check the version, enter kubectl version.
You should already know how to perform a rolling update on a DaemonSet.
Performing a rollback on a DaemonSet
Step 1: Find the DaemonSet revision you want to roll back to
You can skip this step if you only want to roll back to the last revision.
List all revisions of a DaemonSet:
kubectl rollout history daemonset <daemonset-name>
This returns a list of DaemonSet revisions:
daemonsets "<daemonset-name>"
REVISION CHANGE-CAUSE
1 ...
2 ...
...
- Change cause is copied from the DaemonSet annotation kubernetes.io/change-cause to its revisions upon creation. You may specify --record=true in kubectl to record the command executed in the change-cause annotation.
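Alternatively, you can set the annotation yourself before or after an update; for example (the message here is illustrative):

kubectl annotate ds/<daemonset-name> kubernetes.io/change-cause="update fluentd image to v2.6.0" --overwrite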
To see the details of a specific revision:
kubectl rollout history daemonset <daemonset-name> --revision=1
This returns the details of that revision:
daemonsets "<daemonset-name>" with revision #1
Pod Template:
Labels: foo=bar
Containers:
app:
Image: ...
Port: ...
Environment: ...
Mounts: ...
Volumes: ...
Step 2: Roll back to a specific revision
# Specify the revision number you get from Step 1 in --to-revision
kubectl rollout undo daemonset <daemonset-name> --to-revision=<revision>
If it succeeds, the command returns:
daemonset "<daemonset-name>" rolled back
If the --to-revision flag is not specified, kubectl picks the most recent revision.
Step 3: Watch the progress of the DaemonSet rollback
kubectl rollout undo daemonset tells the server to start rolling back the DaemonSet. The real rollback is done asynchronously inside the cluster control plane.
To watch the progress of the rollback:
kubectl rollout status ds/<daemonset-name>
When the rollback is complete, the output is similar to:
daemonset "<daemonset-name>" successfully rolled out
Understanding DaemonSet revisions
In the previous kubectl rollout history step, you got a list of DaemonSet revisions. Each revision is stored in a resource named ControllerRevision.
To see what is stored in each revision, find the DaemonSet revision raw resources:
kubectl get controllerrevision -l <daemonset-selector-key>=<daemonset-selector-value>
This returns a list of ControllerRevisions:
NAME CONTROLLER REVISION AGE
<daemonset-name>-<revision-hash> DaemonSet/<daemonset-name> 1 1h
<daemonset-name>-<revision-hash> DaemonSet/<daemonset-name> 2 1h
Each ControllerRevision stores the annotations and template of a DaemonSet revision.
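To inspect what one revision actually stores, you can dump the ControllerRevision as YAML, using a name from the list above:

kubectl get controllerrevision <daemonset-name>-<revision-hash> -o yaml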
kubectl rollout undo takes a specific ControllerRevision and replaces the DaemonSet template with the template stored in the ControllerRevision. kubectl rollout undo is equivalent to updating the DaemonSet template to a previous revision through other commands, such as kubectl edit or kubectl apply.
Note that DaemonSet revisions only roll forward. That is to say, after a rollback completes, the revision number (.revision field) of the ControllerRevision being rolled back to will advance. For example, if you have revision 1 and 2 in the system, and roll back from revision 2 to revision 1, the ControllerRevision with .revision: 1 will become .revision: 3.