Instructions are for all nodes unless otherwise stated.
# Install Docker CE
## Set up the repository
### Install required packages.
yum install yum-utils device-mapper-persistent-data lvm2
Add Docker repository.
yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
Install Docker CE.
yum update && yum install docker-ce-18.06.2.ce
Create /etc/docker directory.
mkdir /etc/docker
Setup daemon.
cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"storage-opts": [
"overlay2.override_kernel_check=true"
]
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
Restart Docker.
systemctl daemon-reload
systemctl restart docker
Ensure all servers have the correct time. Not having the correct time can cause problems when joining nodes to the cluster.
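One way to keep clocks in sync is with chrony (a sketch, not the only option; assumes the chrony package is available in your distribution's repositories):

```shell
# Install and enable chrony for NTP time synchronization
# (assumption: chrony is the time-sync daemon used on these hosts)
yum install -y chrony
systemctl enable --now chronyd
# Verify that the clock is synchronized
chronyc tracking
```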
Add the Kubernetes repository.
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Set SELinux in permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
Install the Kubernetes packages.
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
Note:
Setting SELinux to permissive mode with setenforce 0 and the sed command above effectively disables it. This is required to allow containers to access the host filesystem, which is needed by pod networks, for example. You have to do this until SELinux support is improved in the kubelet.
Some users on RHEL/CentOS 7 have reported issues with traffic being routed incorrectly because iptables is bypassed. You should ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config. Make sure the br_netfilter module is loaded before this step; you can check with lsmod | grep br_netfilter, and load it explicitly with modprobe br_netfilter.
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
Disable swap and remove it from /etc/fstab.
swapoff -a
vim /etc/fstab
Comment out the swap entry so swap stays off after a reboot:
# /dev/mapper/centos-swap swap swap defaults 0 0
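If you prefer a non-interactive approach, the swap entry can be commented out with sed instead of editing the file by hand (a sketch; assumes the swap line's filesystem type field is "swap"):

```shell
# Disable swap now and keep a backup before editing fstab
swapoff -a
cp /etc/fstab /etc/fstab.bak
# Comment out any uncommented fstab line whose fields include "swap"
sed -i '/\sswap\s/ s/^\([^#]\)/#\1/' /etc/fstab
```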
Configure the firewall
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="10.244.0.0/16" accept'
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.56.0/24" port port=8285 protocol="udp" accept'
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.56.0/24" port port=8472 protocol="udp" accept'
firewall-cmd --reload
Ensure all hosts can resolve in DNS/host file
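If you have no DNS server for the cluster, resolution can be handled with /etc/hosts entries on every server. A sketch, assuming the 192.168.56.0/24 addressing used elsewhere in this guide; the hostnames and the node IPs other than the master are hypothetical:

```shell
# Append cluster hosts to /etc/hosts on every node
# (hostnames and non-master IPs below are assumptions for illustration)
cat <<EOF >> /etc/hosts
192.168.56.2 kube-master
192.168.56.3 kube-node1
192.168.56.4 kube-node2
EOF
```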
### On the Kube Master
Allow port 6443 access
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.56.0/24" port port=6443 protocol="tcp" accept'
firewall-cmd --reload
Reboot ALL Servers
systemctl reboot
### On the Kube Master
# Bootstrap the cluster.
kubeadm init --pod-network-cidr 10.244.0.0/16 --apiserver-advertise-address 192.168.56.2
[init] Using Kubernetes version: v1.15.2
[preflight] Running pre-flight checks
...
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.56.2:6443 --token ss3zw0.ftg4itdcf7d4j7uv \
--discovery-token-ca-cert-hash sha256:805e0ba0624a7d3d8159259eb2508717df40f75b95fe32c2b41955edf124ca45
Turn on iptables bridge calls
echo "net.bridge.bridge-nf-call-iptables=1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
To start using your cluster, run the following as your regular (non-root) user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Add the Flannel add-on, again as your non-root user.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml
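Once Flannel is applied, you can check that the control plane is healthy before joining workers (a sketch; pod names and timing will differ on your cluster):

```shell
# The master should eventually report Ready once the pod network is up
kubectl get nodes
# Flannel and CoreDNS pods should reach Running status
kubectl get pods --all-namespaces
```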
You can now join any number of worker nodes to the cluster by running the following on each node as root:
kubeadm join 192.168.56.2:6443 --token ss3zw0.ftg4itdcf7d4j7uv \
--discovery-token-ca-cert-hash sha256:805e0ba0624a7d3d8159259eb2508717df40f75b95fe32c2b41955edf124ca45
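The bootstrap token in the join command expires after 24 hours by default. If it has expired, a fresh join command can be generated on the master:

```shell
# Print a new, complete kubeadm join command with a fresh token
kubeadm token create --print-join-command
```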