from an article
Master server
This server acts as a gateway and brain for the cluster by exposing an API for users and clients, health checking other servers, deciding how best to split up and assign work (known as "scheduling"), and orchestrating communication between other components. The master server acts as the primary point of contact with the cluster and is responsible for most of the centralized logic k8s provides.
Nodes
Servers responsible for accepting and running workloads. A container runtime must be available. Nodes receive work instructions from the master and create/destroy containers on the fly.
k8s objects
Pod
A pod is a group of containerized components co-located on the same host machine that can share resources. The pod is the basic scheduling unit in k8s.
Each pod is assigned a unique Pod IP address. Containers within a pod can reference each other directly. To reach a container in another pod, the Pod IP address would have to be used, which is not recommended because Pod IP addresses change. Instead, the caller should use a reference to a Service, which keeps track of the target pods.
Why does Kubernetes allow more than one container in a Pod?
Containers in a Pod run on a "logical host": they use the same network namespace (same IP address and port space) and IPC namespace and, optionally, they can use shared volumes. Therefore, these containers can communicate efficiently, ensuring data locality. Also, Pods allow managing several tightly coupled application containers as a single unit.
So, if an application needs several containers running on the same host, why not make a single container with everything from those containers? First, putting many things into one container will probably violate the "one process per container" principle. Second, splitting an application across several containers is simpler to operate, more transparent, and allows decoupling software dependencies. Also, more granular containers can be reused between teams.
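A minimal sketch of such a multi-container pod (the names web and log-sidecar, the images, and the paths are illustrative, not from the article): both containers share the pod's network namespace and an emptyDir volume.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web                     # hypothetical name
  labels:
    app: web                    # label used by the Service sketch later
spec:
  containers:
  - name: web                   # main application container
    image: nginx:1.25           # example image
    ports:
    - containerPort: 80
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
  - name: log-sidecar           # tightly coupled helper container
    image: busybox:1.36
    command: ["sh", "-c", "tail -F /logs/access.log"]
    volumeMounts:
    - name: shared-logs
      mountPath: /logs
  volumes:
  - name: shared-logs           # ephemeral volume shared by both containers
    emptyDir: {}
```
Because they share the network namespace, the two containers can also reach each other over localhost.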
Service
A service is a set of pods that work together. The set of pods that constitutes a service is defined by a label selector.
A service is discovered through environment variables or the cluster DNS. It can be exposed both inside the cluster and outside the cluster.
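A sketch of a Service selecting the pods labeled app: web from the earlier pod example (the name and ports are made up); changing type to NodePort or LoadBalancer exposes it outside the cluster.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc                 # hypothetical name
spec:
  type: ClusterIP               # default; reachable only inside the cluster
  selector:
    app: web                    # label selector defining the pod set
  ports:
  - port: 80                    # port exposed by the Service
    targetPort: 80              # port on the selected pods
```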
Volume
The file system in a k8s container provides ephemeral storage, which means it is temporary and can be wiped at any time.
A volume is mounted at a specific mount point within a container to provide storage that persists beyond individual container restarts.
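A sketch of a pod mounting a volume backed by a PersistentVolumeClaim (the names are illustrative, and a claim called data-pvc is assumed to exist already):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: db                      # hypothetical name
spec:
  containers:
  - name: postgres
    image: postgres:16          # example image
    volumeMounts:
    - name: data                # mount the volume into the container
      mountPath: /var/lib/postgresql/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-pvc       # assumes a PVC named data-pvc already exists
```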
Replica set
A ReplicaSet declares the number of instances of a pod that should be running.
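A sketch of a ReplicaSet keeping three replicas of the web pod running (name, labels, and image are illustrative):
```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs                  # hypothetical name
spec:
  replicas: 3                   # desired number of pod instances
  selector:
    matchLabels:
      app: web                  # pods managed by this ReplicaSet
  template:                     # pod template used to create new replicas
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25       # example image
```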
Namespace
A mechanism to partition resources into non-overlapping sets. Each set is called a namespace.
e.g. it can be used to divide resources between different teams, projects, or environments (dev, test, prod).
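A sketch of creating a namespace and placing an object inside it (the names team-a and web are made up):
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a                  # hypothetical namespace name
---
apiVersion: v1
kind: Pod
metadata:
  name: web
  namespace: team-a             # the pod is created inside the team-a namespace
spec:
  containers:
  - name: web
    image: nginx:1.25
```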
ConfigMaps & Secrets
A way to store configuration data: ConfigMaps hold non-sensitive configuration, while Secrets are intended for sensitive information such as passwords and tokens. The biggest difference between a Secret and a ConfigMap is that data in a Secret is base64-encoded (which is an encoding, not encryption).
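A sketch of the two objects side by side (names, keys, and values are made up); the Secret value is simply the base64 encoding of "s3cr3t":
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config              # hypothetical name
data:
  LOG_LEVEL: "info"             # plain-text configuration value
---
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials          # hypothetical name
type: Opaque
data:
  password: czNjcjN0            # base64("s3cr3t"), encoded but not encrypted
```
Both can be consumed by containers as environment variables or mounted as files.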
Managing k8s objects
Label and selector
One can attach labels to any API object in the system, such as pods and nodes. A selector is a query against those labels that matches a set of objects.
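Equality-based selection is a simple key/value map (as in the Service sketch above); workloads such as ReplicaSets also accept set-based expressions. A fragment (labels and values are made up) as it would appear under spec.selector:
```yaml
selector:
  matchLabels:
    app: web                    # equality-based: the label must equal this value
  matchExpressions:
  - key: tier                   # set-based: the label's value must be in the list
    operator: In
    values: ["frontend", "canary"]
```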
Replication controller
A Replication Controller manages the system so that the number of healthy running pods matches the number of replicas declared in the ReplicaSet.
Cluster API
The functions to create, configure, and manage k8s clusters. There is a core API and cloud-provider-specific APIs.
k8s architecture
The control plane (primary) is the main controlling unit of the cluster, managing workloads and communication across the whole cluster.
The control plane has the following components; each can run as a process on a single primary node or be replicated across multiple primary nodes.
etcd
This is a distributed key-value store, originally developed by CoreOS (the makers of Container Linux), that provides a dynamic configuration registry, allowing various config data to be easily and reliably shared between cluster members.
The key-value pair data stored within etcd is automatically distributed and replicated across the entire cluster.
API server
It provides both the internal and external interface to k8s using JSON over HTTP.
The API server processes REST requests and updates state of the API objects in etcd, allowing configuration of workload and containers across worker nodes.
Scheduler
It selects which node an unscheduled pod runs on based on resource availability.
The scheduler tracks resource use on each node to ensure that workload is not scheduled in excess of available resources. For this purpose it must know the resource requirements, availability and other constraints, data locality, etc. In essence, the scheduler's role is to match resource 'supply' to workload 'demand'.
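The 'demand' side can be sketched in a pod spec: resource requests and limits, plus placement constraints such as a nodeSelector (the disktype: ssd label is made up and would have to exist on some node):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web                     # hypothetical name
spec:
  nodeSelector:
    disktype: ssd               # only schedule onto nodes carrying this label
  containers:
  - name: web
    image: nginx:1.25
    resources:
      requests:                 # what the scheduler reserves on the chosen node
        cpu: "250m"
        memory: "256Mi"
      limits:                   # upper bound enforced at runtime
        cpu: "500m"
        memory: "512Mi"
```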
Controller & Controller manager
A controller is a reconciliation loop that drives the actual cluster state toward the desired cluster state, communicating with the API server to manage resources (pods, service endpoints, etc.). Examples are the replication controller, the DaemonSet controller, and the Job controller.
The controller manager is a process that manages a set of core K8s controllers.
A Node is also called a worker / minion / slave. It's a machine where containers (workloads) are deployed. Every node in the cluster must run a container runtime such as Docker, as well as the following components, which handle communication with the control plane and network configuration.
Kubelet
Kubelet is responsible for the running state of each node, ensuring all containers on the node are healthy. It takes care of starting, stopping, and maintaining containers organized into pods as directed by the control plane. It monitors the state of a pod, and if a pod is not in the desired state, it is re-deployed to the same node. Node status is relayed every few seconds via heartbeat messages to the control plane.
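One concrete way the kubelet keeps containers healthy is by running probes declared in the pod spec; a sketch with an assumed /healthz endpoint (names and image are illustrative), where the kubelet restarts the container when the liveness probe fails:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web                     # hypothetical name
spec:
  containers:
  - name: web
    image: nginx:1.25
    livenessProbe:              # kubelet restarts the container if this check fails
      httpGet:
        path: /healthz          # assumed health endpoint
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
```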
Kube-proxy
It's an implementation of a network proxy and a load balancer (sounds similar to nginx). It routes traffic to the appropriate container based on IP and port number of the incoming request.
Container runtime
A container is the lowest level of a micro-service; it holds the running application and its libraries. Containers can be exposed to the world through an external IP.
k8s supports container runtimes such as Docker and rkt.
StatefulSets
k8s is primarily designed for stateless applications, which are easy to scale by adding more pods.
Stateful workloads are harder because state needs to be preserved if a pod is restarted, and state may need to be redistributed if the application is scaled up or down.
A database is an example of a stateful workload. When run in high-availability mode with multiple instances, it needs to indicate which instance is primary and which are secondary, i.e. the ordering of instances is important. Other applications like Kafka distribute data amongst brokers, so one broker is not the same as another, i.e. uniqueness is important.
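A StatefulSet addresses this by giving each pod a stable, ordered identity (web-0, web-1, ...) and, via volumeClaimTemplates, its own persistent volume. A sketch (the names, image, and the headless Service web-hs it refers to are made up, and that Service is assumed to exist):
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                     # hypothetical name
spec:
  serviceName: web-hs           # assumes a headless Service with this name exists
  replicas: 3                   # pods are created as web-0, web-1, web-2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25       # example image
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:         # each replica gets its own PersistentVolumeClaim
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```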
Add-ons
They operate just like any other applications running within the cluster; they differ only in that they implement features of k8s, such as DNS, the Web UI, container resource monitoring, logging, etc.