Do you want to ensure that multiple Pods of the same service are not scheduled on the same node, because you are concerned about node failure? Kubernetes has a pod scheduling policy to enforce exactly this.
Inter-pod affinity and anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on labels on pods that are already running on the node, rather than based on labels on nodes. The rules are of the form “this pod should (or, in the case of anti-affinity, should not) run in an X if that X is already running one or more pods that meet rule Y”.
Inter-pod affinity and anti-affinity can be even more useful when used with higher-level collections such as ReplicaSets, StatefulSets, and Deployments. One can easily configure that a set of workloads should be co-located in the same defined topology, e.g., on the same node.
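For instance, co-location is expressed with podAffinity. The sketch below is a hypothetical pod template fragment (imagine a web front-end Deployment, not part of the Redis example that follows) that asks the scheduler to place each replica on a node already running a pod labelled app=store:

      # Hypothetical pod template fragment of a front-end Deployment:
      # schedule each replica onto a node that already runs an app=store pod.
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - store
            topologyKey: "kubernetes.io/hostname"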
Below is a YAML snippet of a simple Redis Deployment with three replicas and the selector label app=store. The Deployment has podAntiAffinity configured to ensure the scheduler does not co-locate replicas on a single node.
Pods are not co-located on the same node
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-cache
spec:
  selector:
    matchLabels:
      app: store
  replicas: 3
  template:
    metadata:
      labels:
        app: store
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - store
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: redis-server
        image: redis:3.2-alpine
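Assuming the manifest above is saved as redis-cache.yaml (an arbitrary file name chosen here), it can be applied with:

root@ubuntu-232:~/deepak/test# kubectl apply -f redis-cache.yaml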
In the output below, notice that only one of the three replicas becomes available; the other two stay Pending because there is only one worker node and the master node is tainted, so the required anti-affinity rule cannot be satisfied for them.
Output on a single-worker-node cluster
root@ubuntu-232:~/deepak/test# kubectl get deployments | grep redis
redis-cache   3         3         3         1         2m
root@ubuntu-232:~/deepak/test# kubectl get nodes
NAME         STATUS    ROLES     AGE       VERSION
ubuntu-231   Ready     <none>    242d      v1.10.0
ubuntu-232   Ready     master    242d      v1.10.0
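To check where the replicas actually landed (and that the remaining ones stay Pending), the node assignment can be listed with the app=store label selector and wide output:

root@ubuntu-232:~/deepak/test# kubectl get pods -l app=store -o wide

If all replicas should still run even on a small cluster, the soft variant preferredDuringSchedulingIgnoredDuringExecution can be used instead of the hard requirement; the scheduler then tries to spread the pods across nodes but will still co-locate them when no other node is available. A minimal sketch of the corresponding podAntiAffinity block (same labels and topology key as above):

      affinity:
        podAntiAffinity:
          # Soft rule: prefer spreading, but do not block scheduling.
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - store
              topologyKey: "kubernetes.io/hostname"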
Reference
https://kubernetes.io/docs/concepts/configuration/assign-pod-node/