A workload is an application running on Kubernetes. Whether your workload is a single component or several that work together, on Kubernetes you run it inside a set of pods. In Kubernetes, a Pod represents a set of running containers on your cluster.
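For illustration, a minimal Pod manifest might look like the following sketch; the name and container image are placeholders, not values from this page:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # hypothetical name for illustration
spec:
  containers:
  - name: app
    image: nginx:1.25      # placeholder image; any container image works here
    ports:
    - containerPort: 80
```

In practice you rarely create bare Pods like this; as the rest of this page explains, you normally let a workload resource create and manage them for you.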
Kubernetes pods have a defined lifecycle. For example, once a Pod is running in your cluster, a critical fault on the node where that Pod is running means that all the Pods on that node fail. Kubernetes treats that level of failure as final: you would need to create a new Pod to recover, even if the node later becomes healthy.
However, to make life considerably easier, you don't need to manage each Pod directly. Instead, you can use workload resources that manage a set of Pods on your behalf. These resources configure controllers that make sure the right number of the right kind of Pod are running, to match the state you specified.
Kubernetes provides several built-in workload resources:

- Deployment and ReplicaSet (replacing the legacy resource ReplicationController). Deployment is a good fit for managing a stateless application workload on your cluster, where any Pod in the Deployment is interchangeable and can be replaced if needed (a minimal manifest is sketched after this list).
- StatefulSet lets you run one or more related Pods that do track state somehow. For example, if your workload records data persistently, you can run a StatefulSet that matches each Pod with a PersistentVolume. Your code, running in the Pods for that StatefulSet, can replicate data to other Pods in the same StatefulSet to improve overall resilience (see the StatefulSet sketch after this list).
- DaemonSet defines Pods that provide node-local facilities. These might be fundamental to the operation of your cluster, such as a networking helper tool, or be part of an add-on. Every time you add a node to your cluster that matches the specification in a DaemonSet, the control plane schedules a Pod for that DaemonSet onto the new node.
- Job and CronJob define tasks that run to completion and then stop. Jobs represent one-off tasks, whereas CronJobs recur according to a schedule (a CronJob sketch also follows this list).

In the wider Kubernetes ecosystem, you can find third-party workload resources that provide additional behaviors. Using a custom resource definition, you can add in a third-party workload resource if you want a specific behavior that's not part of Kubernetes' core. For example, if you wanted to run a group of Pods for your application but stop work unless all the Pods are available (perhaps for some high-throughput distributed task), then you can implement or install an extension that does provide that feature.
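As a hedged sketch of the Deployment item above, the manifest below runs three interchangeable replicas of a stateless application; the name, labels, and image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend          # hypothetical name
spec:
  replicas: 3                 # the controller keeps three interchangeable Pods running
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: app
        image: nginx:1.25     # placeholder image for a stateless workload
```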
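Similarly, a minimal StatefulSet sketch is shown below. The volumeClaimTemplates section is what matches each Pod with its own PersistentVolume; the names, image, and storage size are assumptions for illustration:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: datastore             # hypothetical name
spec:
  serviceName: datastore      # headless Service that gives each Pod a stable network identity
  replicas: 3
  selector:
    matchLabels:
      app: datastore
  template:
    metadata:
      labels:
        app: datastore
    spec:
      containers:
      - name: store
        image: registry.example/datastore:1.0   # placeholder image
        volumeMounts:
        - name: data
          mountPath: /var/lib/data
  volumeClaimTemplates:       # each Pod gets its own PersistentVolumeClaim, and so its own PersistentVolume
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi        # assumed size for illustration
```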
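Finally, a CronJob sketch: the schedule, name, image, and command below are placeholders. The controller creates a Job from jobTemplate at each scheduled time:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report        # hypothetical name
spec:
  schedule: "0 2 * * *"       # every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: report
            image: registry.example/report:1.0   # placeholder image
            command: ["/bin/generate-report"]    # placeholder command
```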
As well as reading about each resource, you can learn about specific tasks that relate to them:

- Run a stateless application using a Deployment
- Run automated tasks with a CronJob
To learn about Kubernetes' mechanisms for separating code from configuration, visit Configuration.
There are two supporting concepts that provide background about how Kubernetes manages Pods for applications:

- Garbage collection tidies up objects from your cluster after their owning resource has been removed.
- The time-to-live after finished controller removes Jobs once a defined time has passed since they completed.
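As one hedged illustration of the second concept, a Job can opt in to automatic cleanup by setting ttlSecondsAfterFinished; the name, image, and command below are placeholders:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-task          # hypothetical name
spec:
  ttlSecondsAfterFinished: 300   # the TTL-after-finished controller deletes the Job 5 minutes after it completes
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: task
        image: busybox:1.36
        command: ["sh", "-c", "echo done"]   # placeholder one-off task
```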
Once your application is running, you might want to make it available on the internet as a Service or, for web applications only, using an Ingress.
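For example, a Service sketch like the one below could expose the hypothetical web-frontend Deployment from earlier; the type and port values are assumptions that depend on how you want to expose the application:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-frontend          # hypothetical name, matching the Deployment sketched earlier
spec:
  type: LoadBalancer          # or ClusterIP/NodePort, depending on how you expose the application
  selector:
    app: web-frontend         # routes traffic to Pods carrying this label
  ports:
  - port: 80
    targetPort: 80
```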