Kubernetes (k8s) seamlessly integrates multiple machines/VMs, so a container on one host can communicate with a container on another host. Imagine having to configure every host yourself so that it forwards traffic to the other hosts: it would be hard as well as error-prone. Kubernetes takes care of this automatically, and this document describes how.
Basics of networking in K8s
Each of the networking tools described below works by creating an overlay network on top of the existing infrastructure. Containers that attach to these networks are assigned a unique IP address, which they can use to reach other containers directly. This eliminates the need to set up complex networking rules on the hosts, since all cross-container traffic passes through the overlay network.
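As a quick sanity check of this behaviour, you can list pod IPs and ping across nodes. The pod name and IP below are illustrative, and the target image must ship a ping binary:

kubectl get pods -o wide                      # note each pod's IP and the node it is scheduled on
kubectl exec pod-a -- ping -c 3 10.244.2.15   # IP of a pod running on a different node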
Flannel
Each host in a Flannel cluster runs an agent called flanneld. Flanneld assigns each host a subnet, which acts as the IP address pool for the containers running on that host. Containers can then contact containers on other hosts directly using their IP addresses. Flannel uses VXLAN tunnelling for inter-host communication.
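For illustration, on a node in a typical Flannel setup the per-host subnet lease and the VXLAN device can be inspected as follows (the path, device name, and addresses may differ in your cluster):

cat /run/flannel/subnet.env   # e.g. FLANNEL_NETWORK=10.244.0.0/16, FLANNEL_SUBNET=10.244.1.1/24
ip -d link show flannel.1     # flanneld's VXLAN tunnel device used for inter-host traffic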
Below is the packet flow for the Flannel CNI:
Calico
Calico is best known for its strong network policy management and access control lists (ACLs). Using Calico, you can configure inbound and outbound rules by port, direction, protocol, and other attributes. Like Flannel, Calico runs an agent on each host.
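Calico enforces standard Kubernetes NetworkPolicy objects as well as its own richer policy CRDs. Below is a minimal sketch (labels and port are hypothetical) of an ingress rule allowing only TCP/6379 into app=db pods from app=api pods:

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-db
spec:
  podSelector:
    matchLabels:
      app: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: api
    ports:
    - protocol: TCP
      port: 6379
EOF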
Unlike Flannel, Calico connects hosts using the Border Gateway Protocol (BGP). Each host runs a BGP client that tracks routes and distributes them to the other hosts. Not only does this reduce the overhead of encapsulating packets, it also lets you scale and distribute clusters more easily.
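Assuming calicoctl is installed on a node, you can see the BGP peerings and the resulting kernel routes (peer addresses and route entries will differ per cluster):

sudo calicoctl node status   # lists the node's BGP peers and their state
ip route | grep bird         # pod routes programmed via Calico's BGP client (BIRD)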
Weave Net
Weave Net uses a feature called fast datapath to encapsulate and route VXLAN packets in the kernel instead of user space, which saves CPU overhead and reduces latency.
Internally, Weave Net uses a DNS service called weaveDNS. WeaveDNS provides name resolution, automated service discovery, load balancing, and fault tolerance. In addition, Weave Net includes a built-in encryption system for securing all traffic between hosts.
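With the weave CLI script installed on a host, you can check whether connections are using fast datapath and whether they are encrypted (output details vary by version):

weave status               # overall router, weaveDNS, and IPAM status
weave status connections   # each peer connection is marked fastdp or sleeve, and encrypted if enabled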
Canal
Canal combines two of the projects in this list, Flannel and Calico, to create a unified networking solution. More specifically, it combines Flannel's network architecture with Calico's policy management API. Canal is more of a deployment tool for installing and configuring both Flannel and Calico, as well as integrating them into your orchestration engine.
Multus
Typically, each Kubernetes pod has only one network interface (apart from the loopback); with Multus you can create a multi-homed pod that has multiple interfaces.
Multus is a CNI proxy and arbiter of other CNI plugins: it invokes other CNI plugins to create a pod's network interfaces. When Multus is used, a master plugin (Flannel, Calico, Weave Net) is identified to manage the primary network interface (eth0) for the pod, and that interface is returned to Kubernetes. Other CNI "minion" plugins (SR-IOV, vHost CNI, etc.) can create additional pod interfaces (net0, net1, etc.) during their normal instantiation process.
The kubelet is responsible for establishing the network interfaces for each pod; it does this by invoking its configured CNI plugin. When Multus is invoked, it retrieves the pod's Multus-related annotations and uses them to look up a Kubernetes custom resource definition (CRD), an object that specifies which plugins to invoke and the configuration to be passed to them. The order of plugin invocation is important, as is the identity of the master plugin.
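A minimal sketch of this flow, with hypothetical names and a macvlan/host-local configuration in the style of the Multus quick-start: the secondary network is defined as a NetworkAttachmentDefinition, and a pod references it through an annotation, so Multus has the master plugin create eth0 and the macvlan plugin create net1.

# Define the secondary network as a NetworkAttachmentDefinition (the CRD Multus looks up):
cat <<EOF | kubectl apply -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": { "type": "host-local", "subnet": "192.168.1.0/24" }
    }'
EOF
# Reference it from a pod annotation; the pod comes up with eth0 (master plugin) plus net1 (macvlan):
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: multi-homed-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
EOF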
Useful commands for debugging K8s networking
brctl show
Shows the bridges on a given host and the ports/interfaces attached to each bridge.
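For example (the bridge name depends on the CNI in use, e.g. cni0 or docker0):

brctl show    # bridges and the veth ports attached to each
bridge link   # iproute2 equivalent if bridge-utils is not installed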
ip netns list
Shows network namespaces. If the container runtime is Docker, you will not be able to list namespaces this way, because Docker does not symlink its namespaces under /var/run/netns. You can enable ip netns functionality by creating the necessary symlink yourself: ln -s /var/run/docker/netns /var/run/netns
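A typical sequence on a Docker host (the namespace id is whatever the list command prints):

ln -s /var/run/docker/netns /var/run/netns   # expose Docker's namespaces to ip netns
ip netns list                                # list the pod/container namespaces
ip netns exec <namespace-id> ip addr         # run a command inside a pod's namespace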
ip link list
Helps identify the veth pair endpoints that are used to send traffic in and out of a given pod.
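For example, host-side veth names carry an @ifN suffix, where N is the interface index of the peer inside the pod; you can cross-check from inside the namespace:

ip link list                                                  # host side: vethXXXX@ifN entries
ip netns exec <namespace-id> cat /sys/class/net/eth0/iflink   # prints the host-side peer's ifindex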
arp
Shows the ARP cache, which is useful when debugging L2 issues.
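For example:

arp -n     # numeric ARP cache; an incomplete entry for a pod or gateway IP suggests an L2 problem
ip neigh   # iproute2 equivalent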
ethtool
Shows interface-related parameters, including offload features.
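For example (interface names are illustrative):

ethtool -k eth0       # show offload settings (checksum, TSO, GRO, ...)
ethtool -i eth0       # driver and firmware information
ethtool -S vethXXXX   # interface statistics; for a veth this includes peer_ifindex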
route
Shows the routing table configuration.
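For example:

route -n   # numeric routing table
ip route   # iproute2 equivalent; shows which routes were added by the CNI or BGP agent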
tcpdump/wireshark
Captures traffic for packet-level inspection of what is actually on the wire.
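For example (interface names and the VXLAN port are illustrative; Flannel's Linux VXLAN default is UDP 8472, so adjust to your configuration):

tcpdump -i cni0 -nn icmp                               # watch pod traffic on the node's bridge
tcpdump -i eth0 -nn udp port 8472 -w /tmp/vxlan.pcap   # capture inter-host VXLAN traffic for Wireshark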
References
https://www.praqma.com/stories/debugging-kubernetes-networking/
https://blog.aquasec.com/popular-docker-networking-and-kubernetes-networking-tools
https://docs.projectcalico.org/reference/architecture/data-path
https://www.weave.works/docs/net/latest/concepts/fastdp-how-it-works/
https://intel.github.io/multus-cni/
https://builders.intel.com/docs/networkbuilders/multiple-network-interfaces-support-in-kubernetes-feature-brief.pdf