If you are using Kubernetes, you know that hundreds of nodes can be managed in a single cluster. You may want to group nodes that reside close to each other. For example, if you are using a load balancer for north-south traffic, you may want it to forward traffic to Pods that are closer to it. This reduces response time and improves the user experience.
You can constrain a Pod to only be able to run on particular Node(s), or to prefer to run on particular nodes.
Generally such constraints are unnecessary, as the scheduler will automatically do a reasonable placement (e.g. spread your Pods across nodes, and avoid placing a Pod on a node with insufficient free resources). But there are circumstances where you may want more control over the node where a Pod lands, e.g. to ensure that a Pod ends up on a machine with an SSD attached to it, or to co-locate Pods from two different services that communicate a lot into the same availability zone.
nodeSelector is the simplest recommended form of node selection constraint. However, it imposes a hard requirement: the Pod will not be scheduled at all if no node matches. It takes two steps:
Step One: Attach a label to the node
Step Two: Add a nodeSelector field to your Pod configuration so that the Pod will be scheduled only on nodes carrying that label
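The two steps above can be sketched as follows; the node name `node1` and the label `disktype=ssd` are only illustrative.

```yaml
# Step One: attach the label to a node (run with kubectl):
#   kubectl label nodes node1 disktype=ssd

# Step Two: reference the label in the Pod spec
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    disktype: ssd    # Pod is scheduled only on nodes with this label
```

If no node has the `disktype=ssd` label, the Pod stays Pending, which is exactly the hard-requirement behaviour described above.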
Node affinity is conceptually similar to nodeSelector: it allows you to constrain which nodes your Pod is eligible to be scheduled on, based on labels on the node. However, it adds support for both hard and soft requirements. Soft requirements are honoured when possible but not mandated. These requirements are specified via the requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution fields.
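A minimal sketch combining one hard and one soft rule; the zone values and the `disktype` label are assumptions for illustration.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      # Hard requirement: only nodes in one of these zones are eligible
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - zone-a
            - zone-b
      # Soft requirement: prefer SSD-backed nodes, but schedule elsewhere
      # if none is available
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
  containers:
  - name: nginx
    image: nginx
```

The "IgnoredDuringExecution" suffix means the rules are checked only at scheduling time; a Pod already running is not evicted if the node's labels later change.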
Inter-pod affinity and anti-affinity allow you to constrain which nodes your Pod is eligible to be scheduled on based on labels on Pods that are already running on the node, rather than based on labels on the node itself.
The rules are of the form “this pod should (or, in the case of anti-affinity, should not) run in an X if that X is already running one or more pods that meet rule Y”.
Using this capability, one can easily ensure that a set of workloads is co-located in the same defined topology, e.g. on the same node.
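The rule shape can be sketched as below: co-locate this Pod with Pods labelled `app=store`, while keeping replicas of itself apart. The label values are assumptions for illustration.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-server
  labels:
    app: web
spec:
  affinity:
    podAffinity:
      # "Run on a node (X) that is already running a Pod matching rule Y"
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - store
        # topologyKey defines the "X" in the rule: here, the same node
        topologyKey: kubernetes.io/hostname
    podAntiAffinity:
      # "Do NOT run on a node already running another app=web Pod"
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - web
        topologyKey: kubernetes.io/hostname
  containers:
  - name: nginx
    image: nginx
```

Setting topologyKey to a zone label (e.g. topology.kubernetes.io/zone) instead would apply the same rules at the availability-zone level rather than per node.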
Reference
https://kubernetes.io/docs/concepts/configuration/assign-pod-node/