Instructions are for all nodes unless otherwise stated.
# Install Docker CE
## Set up the repository
### Install required packages.
```
yum install yum-utils device-mapper-persistent-data lvm2
```

### Add the Docker repository.
```
yum-config-manager \
  --add-repo \
  https://download.docker.com/linux/centos/docker-ce.repo
```

### Install Docker CE.
```
yum update && yum install docker-ce-18.06.2.ce
```

### Create the /etc/docker directory.
```
mkdir /etc/docker
```

### Set up the daemon.
```
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d
```

### Restart Docker
```
systemctl daemon-reload
systemctl restart docker
```

Ensure all servers have the correct time. Not having the correct time can cause problems when joining nodes to the cluster.
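One way to keep clocks in sync on CentOS 7 is chrony, the distribution's default NTP client. This is only a sketch; any NTP setup you already have is fine:

```
# Install and start chrony, then confirm the clock is synchronised
# (assumes the default pool servers are reachable from your network)
yum install -y chrony
systemctl enable --now chronyd
chronyc tracking
timedatectl
```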
# Install kubeadm, kubelet and kubectl

### Add the Kubernetes repository.

```
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
```

### Set SELinux in permissive mode (effectively disabling it).
```
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
```

### Install packages
```
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
```

Note:
Setting SELinux in permissive mode by running setenforce 0 and sed ... effectively disables it. This is required to allow containers to access the host filesystem, which is needed by pod networks for example. You have to do this until SELinux support is improved in the kubelet.
Some users on RHEL/CentOS 7 have reported issues with traffic being routed incorrectly due to iptables being bypassed. You should ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config, e.g.
Make sure that the br_netfilter module is loaded before this step. This can be done by running lsmod | grep br_netfilter. To load it explicitly call modprobe br_netfilter.
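As a sketch, the check and a persistent load could look like this (the /etc/modules-load.d filename is arbitrary):

```
# Check whether the module is already loaded
lsmod | grep br_netfilter

# Load it now, and have systemd load it again on every boot
modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/k8s.conf
```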
```
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system
```

### Disable swap and remove it from fstab
```
swapoff -a
vim /etc/fstab
```

In /etc/fstab, comment out (or remove) the swap entry, for example:

```
# /dev/mapper/centos-swap swap swap defaults 0 0
```
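If you prefer a non-interactive edit over vim, a sed one-liner along these lines can comment out the swap entries; this is only a sketch, so keep the backup it creates:

```
# Comment out every non-comment fstab line that references swap (writes /etc/fstab.bak as a backup)
sed -r -i.bak 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /etc/fstab
```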
### Configure the firewall

```
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="10.244.0.0/16" accept'
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.56.0/24" port port=8285 protocol="udp" accept'
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.56.0/24" port port=8472 protocol="udp" accept'
firewall-cmd --reload
```

Ensure all hosts can resolve each other, either in DNS or in their hosts file.
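If you are not using DNS, /etc/hosts entries on every node are enough. The hostnames and worker addresses below are purely illustrative (only 192.168.56.2, the master, appears elsewhere in this guide):

```
# /etc/hosts (example values; adjust names and addresses to your environment)
192.168.56.2   kube-master
192.168.56.3   kube-worker1
192.168.56.4   kube-worker2
```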
### On the Kube Master
Allow access to port 6443 (the Kubernetes API server).
```
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.56.0/24" port port=6443 protocol="tcp" accept'
firewall-cmd --reload
```
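As a quick sanity check, you can list the rich rules on any node and confirm the ones added above are present:

```
# Should print the rich rules added in the previous steps
firewall-cmd --list-rich-rules
```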
### Reboot ALL Servers

```
systemctl reboot
```

### This is on the Kube Master
# Bootstrap the cluster.
```
kubeadm init --pod-network-cidr 10.244.0.0/16 --apiserver-advertise-address 192.168.56.2
```

Output (abbreviated):

```
[init] Using Kubernetes version: v1.15.2
[preflight] Running pre-flight checks
...
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.56.2:6443 --token ss3zw0.ftg4itdcf7d4j7uv \
    --discovery-token-ca-cert-hash sha256:805e0ba0624a7d3d8159259eb2508717df40f75b95fe32c2b41955edf124ca45
```
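The token shown above is specific to this run and expires (by default after 24 hours). If you need the join command again later, it can be regenerated on the master:

```
# Prints a fresh "kubeadm join ..." command with a new token and the CA cert hash
kubeadm token create --print-join-command
```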
echo "net.bridge.bridge-nf-call-iptables=1" | sudo tee -a /etc/sysctl.confsudo sysctl -pRun as your not root user
### Run as your non-root user

To start using your cluster, run the following as a regular user:

```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
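At this point kubectl should be able to reach the API server. The master will typically show as NotReady until a pod network add-on is deployed:

```
# The master node should be listed; STATUS is usually NotReady until the CNI add-on is installed
kubectl get nodes
```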
### Add the Flannel add-on. Again as your non-root user.

```
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml
```

On each worker node, you can now join it to the cluster by running the following as root:
```
kubeadm join 192.168.56.2:6443 --token ss3zw0.ftg4itdcf7d4j7uv \
    --discovery-token-ca-cert-hash sha256:805e0ba0624a7d3d8159259eb2508717df40f75b95fe32c2b41955edf124ca45
```
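Back on the master, as your non-root user, you can confirm that the workers joined and that the Flannel pods came up; node names will of course differ in your environment, and with this manifest version the Flannel pods should appear in the kube-system namespace:

```
# All nodes should eventually report STATUS Ready
kubectl get nodes

# The kube-flannel-ds-* pods (one per node) should be Running
kubectl get pods -n kube-system -o wide
```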