In a KVM environment on CentOS 8, I bring up two Ubuntu 20 nodes and play with Kubernetes.
Pull the latest packages from this repository, then set the cluster up with kubeadm.
kanda@ubuntu20:~$ cat /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
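For the record, this repository is usually registered like so before installing the tools (a sketch; the exact steps were not captured in this log):
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update && sudo apt-get install -y kubelet kubeadm kubectl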
kanda@ubuntu20:~$ kubectl get node
NAME STATUS ROLES AGE VERSION
kubeworker NotReady <none> 19h v1.18.6
ubuntu20 Ready master 21h v1.18.6
kanda@ubuntu20:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e44d03265688 67da37a9a360 "/coredns -conf /etc…" 5 seconds ago Up 4 seconds k8s_coredns_coredns-66bff467f8-7xv2j_kube-system_8c8fa435-2577-4756-8245-56c0499cf1b2_4
b1767990faa9 k8s.gcr.io/pause:3.2 "/pause" 7 seconds ago Up 5 seconds k8s_POD_coredns-66bff467f8-7xv2j_kube-system_8c8fa435-2577-4756-8245-56c0499cf1b2_6
cc24832e43ed 67da37a9a360 "/coredns -conf /etc…" 9 seconds ago Up 8 seconds k8s_coredns_coredns-66bff467f8-tvvv5_kube-system_1625a384-045f-4637-b1e4-4f5a8b33d41f_4
1db445274e1e k8s.gcr.io/pause:3.2 "/pause" 10 seconds ago Up 9 seconds k8s_POD_coredns-66bff467f8-tvvv5_kube-system_1625a384-045f-4637-b1e4-4f5a8b33d41f_9
497fa028b05b 15f795b449d2 "start_runit" 20 seconds ago Up 19 seconds k8s_calico-node_calico-node-b6s9x_kube-system_b2211f31-8a64-47d5-bb6c-7d1946cbdcb2_61
df30bf91cce1 c3d62d6fe412 "/usr/local/bin/kube…" 38 seconds ago Up 36 seconds k8s_kube-proxy_kube-proxy-8dgfl_kube-system_3b933e50-9342-4837-8832-fb52dec387e5_4
507a82d696a7 k8s.gcr.io/pause:3.2 "/pause" 38 seconds ago Up 35 seconds k8s_POD_calico-node-b6s9x_kube-system_b2211f31-8a64-47d5-bb6c-7d1946cbdcb2_4
0903643ba735 k8s.gcr.io/pause:3.2 "/pause" 39 seconds ago Up 37 seconds k8s_POD_kube-proxy-8dgfl_kube-system_3b933e50-9342-4837-8832-fb52dec387e5_4
8199b079fad4 303ce5db0e90 "etcd --advertise-cl…" About a minute ago Up About a minute k8s_etcd_etcd-ubuntu20_kube-system_628bb24559b0a3733400c564e8ff778f_4
130818376b58 0e0972b2b5d1 "kube-scheduler --au…" About a minute ago Up About a minute k8s_kube-scheduler_kube-scheduler-ubuntu20_kube-system_3dd66788a2c7782d910d05ea37b91678_5
665ff92884fc ffce5e64d915 "kube-controller-man…" About a minute ago Up About a minute k8s_kube-controller-manager_kube-controller-manager-ubuntu20_kube-system_104dcd565cfde7e10818df003b3b889f_5
7644b819fabc 56acd67ea15a "kube-apiserver --ad…" About a minute ago Up About a minute k8s_kube-apiserver_kube-apiserver-ubuntu20_kube-system_62ef3bfd2dbd1d709309bdda20755a2f_4
5871af6dd62c k8s.gcr.io/pause:3.2 "/pause" About a minute ago Up About a minute k8s_POD_kube-controller-manager-ubuntu20_kube-system_104dcd565cfde7e10818df003b3b889f_4
bfca29fd9796 k8s.gcr.io/pause:3.2 "/pause" About a minute ago Up About a minute k8s_POD_kube-apiserver-ubuntu20_kube-system_62ef3bfd2dbd1d709309bdda20755a2f_4
3880da7f4271 k8s.gcr.io/pause:3.2 "/pause" About a minute ago Up About a minute k8s_POD_kube-scheduler-ubuntu20_kube-system_3dd66788a2c7782d910d05ea37b91678_4
ec10099fa8f9 k8s.gcr.io/pause:3.2 "/pause" About a minute ago Up About a minute k8s_POD_etcd-ubuntu20_kube-system_628bb24559b0a3733400c564e8ff778f_4
kanda@ubuntu20:~$
kanda@ubuntu20:~$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx latest 8cf1bfb43ff5 2 days ago 132MB
k8s.gcr.io/kube-proxy v1.18.6 c3d62d6fe412 9 days ago 117MB
k8s.gcr.io/kube-apiserver v1.18.6 56acd67ea15a 9 days ago 173MB
k8s.gcr.io/kube-controller-manager v1.18.6 ffce5e64d915 9 days ago 162MB
k8s.gcr.io/kube-scheduler v1.18.6 0e0972b2b5d1 9 days ago 95.3MB
calico/node v3.9.6 15f795b449d2 8 weeks ago 195MB
calico/pod2daemon-flexvol v3.9.6 63fbf227cf10 8 weeks ago 9.78MB
calico/cni v3.9.6 0ce7550069ed 8 weeks ago 167MB
calico/kube-controllers v3.9.6 081a5bf738ad 8 weeks ago 56MB
k8s.gcr.io/pause 3.2 80d28bedfe5d 5 months ago 683kB
k8s.gcr.io/coredns 1.6.7 67da37a9a360 5 months ago 43.8MB
k8s.gcr.io/etcd 3.4.3-0 303ce5db0e90 9 months ago 288MB
kanda@ubuntu20:~$ ip link
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 52:54:00:9f:38:8c brd ff:ff:ff:ff:ff:ff
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
link/ether 02:42:7a:e7:ec:bc brd ff:ff:ff:ff:ff:ff
4: calib6e6e6b9abc@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP mode DEFAULT group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
5: calie84dc3508a1@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP mode DEFAULT group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
6: cali35747116f24@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP mode DEFAULT group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 2
kanda@ubuntu20:~$ brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.02427ae7ecbc no
This CrashLoopBackOff is somehow odd.
kanda@ubuntu20:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-5fbfc9dfb6-4c8r9 1/1 Running 4 21h
kube-system calico-node-4fjlw 0/1 CrashLoopBackOff 21 19h
kube-system calico-node-b6s9x 0/1 CrashLoopBackOff 66 21h
kube-system coredns-66bff467f8-7xv2j 1/1 Running 4 21h
kube-system coredns-66bff467f8-tvvv5 1/1 Running 4 21h
kube-system etcd-ubuntu20 1/1 Running 4 21h
kube-system kube-apiserver-ubuntu20 1/1 Running 4 21h
kube-system kube-controller-manager-ubuntu20 1/1 Running 5 21h
kube-system kube-proxy-8dgfl 1/1 Running 4 21h
kube-system kube-proxy-k845c 1/1 Running 3 19h
kube-system kube-scheduler-ubuntu20 1/1 Running 5 21h
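For a CrashLoopBackOff, the usual first stops (not captured in this log) are the pod events and the container log:
kubectl -n kube-system describe pod calico-node-b6s9x
kubectl -n kube-system logs calico-node-b6s9x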
About seven minutes after bringing up the worker node, both the master and the worker stop answering ssh and ping. Console access still works fine. The network seems to be dead.
The cause: CALICO_IPV4POOL_CIDR overlapped with the host network (the default pool 192.168.0.0/16 contains the VMs' 192.168.122.0/24).
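The fix is a pool that does not contain 192.168.122.0/24. The relevant stanza in calico.yaml looks like this (the value below is only an example; what was actually chosen is not recorded here):
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"
The same range then has to be given to kubeadm init as --pod-network-cidr.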
kanda@ubuntu20:~$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
kanda@ubuntu20:~$ kubectl describe deployment nginx
Name: nginx
Namespace: default
CreationTimestamp: Sat, 25 Jul 2020 01:12:18 +0000
Labels: app=nginx
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=nginx
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=nginx
Containers:
nginx:
Image: nginx
Port: <none>
Host Port: <none>
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: nginx-f89759699 (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 31s deployment-controller Scaled up replica set nginx-f89759699 to 1
kanda@ubuntu20:~$ docker image inspect 8cf1bfb43ff5
[
{
"Id": "sha256:8cf1bfb43ff5d9b05af9b6b63983440f137c6a08320fa7592197c1474ef30241",
"RepoTags": [
"nginx:latest"
],
"RepoDigests": [
"nginx@sha256:0e188877aa60537d1a1c6484b8c3929cfe09988145327ee47e8e91ddf6f76f5c"
],
"Parent": "",
"Comment": "",
"Created": "2020-07-22T03:23:26.691138735Z",
"Container": "072f077fe42b39ffb92ca368fa5d79be975faa97b33bffb5b5383eb6abfb4838",
"ContainerConfig": {
"Hostname": "072f077fe42b",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"80/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"NGINX_VERSION=1.19.1",
"NJS_VERSION=0.4.2",
"PKG_RELEASE=1~buster"
],
"Cmd": [
"/bin/sh",
"-c",
"#(nop) ",
"CMD [\"nginx\" \"-g\" \"daemon off;\"]"
],
"ArgsEscaped": true,
"Image": "sha256:c36a062005a6c2badac7fd50caa402b8566fc044e47c9901ba0a6704af943d66",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/docker-entrypoint.sh"
],
"OnBuild": null,
"Labels": {
"maintainer": "NGINX Docker Maintainers <docker-maint@nginx.com>"
},
"StopSignal": "SIGTERM"
},
"DockerVersion": "18.09.7",
"Author": "",
"Config": {
"Hostname": "",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"80/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"NGINX_VERSION=1.19.1",
"NJS_VERSION=0.4.2",
"PKG_RELEASE=1~buster"
],
"Cmd": [
"nginx",
"-g",
"daemon off;"
],
"ArgsEscaped": true,
"Image": "sha256:c36a062005a6c2badac7fd50caa402b8566fc044e47c9901ba0a6704af943d66",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/docker-entrypoint.sh"
],
"OnBuild": null,
"Labels": {
"maintainer": "NGINX Docker Maintainers <docker-maint@nginx.com>"
},
"StopSignal": "SIGTERM"
},
"Architecture": "amd64",
"Os": "linux",
"Size": 132484488,
"VirtualSize": 132484488,
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/34be9a75323a114ffe4ac89438c2743170153c70627454b9fe81b5109cd63aae/diff:/var/lib/docker/overlay2/e476525867671c775ce3f8e19e746e56a944173044ba5aa89e1185b3f4185dcf/diff:/var/lib/docker/overlay2/bab49a76b9e7646ad7bca402733267dbeeb49e04445a22e508a1768e9e6e3d4f/diff:/var/lib/docker/overlay2/ebbb06e584c7bec0ab011690f3b6d5771cdf3371b7a0e7b13249fc5e40b633a4/diff",
"MergedDir": "/var/lib/docker/overlay2/a1b3e9b180503ad21a6df41c76c2a81914a6499e3a595b0fedbfb906b72329ae/merged",
"UpperDir": "/var/lib/docker/overlay2/a1b3e9b180503ad21a6df41c76c2a81914a6499e3a595b0fedbfb906b72329ae/diff",
"WorkDir": "/var/lib/docker/overlay2/a1b3e9b180503ad21a6df41c76c2a81914a6499e3a595b0fedbfb906b72329ae/work"
},
"Name": "overlay2"
},
"RootFS": {
"Type": "layers",
"Layers": [
"sha256:95ef25a3204339de1edf47feaa00f60b5ac157a498964790c58c921494ce7ffd",
"sha256:d899691659b0a023be369ea77d6dedcf959aa50fcfdfd97603983b5f94296c20",
"sha256:227442bb48dc0fb87876f4e4b767718f45c88f888f15ad22cca38193f99559f7",
"sha256:1698c1b7e3e687db0079af289f1685cd2526c21d80d5b13b3ba0e9b191b9fb6f",
"sha256:98b4c818e603e05abe4963d66b975deaeab44997972f7f69533af9be1293de78"
]
},
"Metadata": {
"LastTagTime": "0001-01-01T00:00:00Z"
}
}
]
kanda@ubuntu20:~$ kubectl get deployments,pods
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx 1/1 1 1 8m17s
NAME READY STATUS RESTARTS AGE
pod/nginx-d46f5678b-xc4gr 1/1 Running 0 2m2s
kanda@ubuntu20:~$ kubectl expose deployment/nginx
service/nginx exposed
kanda@ubuntu20:~$ kubectl get svc nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx ClusterIP 10.108.248.175 <none> 80/TCP 40s
kanda@ubuntu20:~$ kubectl get ep nginx
NAME ENDPOINTS AGE
nginx 192.168.219.217:80 48s
Who assigns this address, how is it chosen, and what forwards traffic to it?
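(In short: Calico's IPAM picks it out of the pod pool, and kube-proxy's nat-table rules forward the ClusterIP to it. Both get poked at below.)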
kanda@ubuntu20:~$ ip addr
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:9f:38:8c brd ff:ff:ff:ff:ff:ff
inet 192.168.122.156/24 brd 192.168.122.255 scope global dynamic enp1s0
valid_lft 2059sec preferred_lft 2059sec
inet6 fe80::5054:ff:fe9f:388c/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:7a:e7:ec:bc brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
4: calib6e6e6b9abc@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
5: calie84dc3508a1@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
6: cali35747116f24@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
8: calif65742e9417@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 4
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
kanda@ubuntu20:~$ docker exec -it d0265b164d07 /bin/bash
root@nginx-d46f5678b-xc4gr:/# which nginx
/usr/sbin/nginx
root@nginx-d46f5678b-xc4gr:/# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 20G 8.7G 12G 44% /
tmpfs 64M 0 64M 0% /dev
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/vda2 20G 8.7G 12G 44% /etc/hosts
shm 64M 0 64M 0% /dev/shm
tmpfs 2.0G 12K 2.0G 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 2.0G 0 2.0G 0% /proc/acpi
tmpfs 2.0G 0 2.0G 0% /proc/scsi
tmpfs 2.0G 0 2.0G 0% /sys/firmware
root@nginx-d46f5678b-xc4gr:/# ip addr
bash: ip: command not found
root@nginx-d46f5678b-xc4gr:/# mount
overlay on / type overlay (rw,relatime,lowerdir=/var/lib/docker/overlay2/l/YDSEXNDQWKMETUFRZHY2UCEVCI:/var/lib/docker/overlay2/l/SU6H657APV5NFIO2ZMSKZ43END:/var/lib/docker/overlay2/l/33DFTZZ5KYG54GBTFCGOXUHXRT:/var/lib/docker/overlay2/l/D7SLWK7VILMYHHKA5QRVDAPZZI:/var/lib/docker/overlay2/l/7LMVO7EYHNEPE2TOD3EILWB5VL:/var/lib/docker/overlay2/l/CGWMVFRHXHUZ5N7UY3NKBEDOLE,upperdir=/var/lib/docker/overlay2/f4e12d2e066f2e84739f16961a252dcd7e7098b5f2c78c7ab732feed22d1a4b8/diff,workdir=/var/lib/docker/overlay2/f4e12d2e066f2e84739f16961a252dcd7e7098b5f2c78c7ab732feed22d1a4b8/work,xino=off)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev type tmpfs (rw,nosuid,size=65536k,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666)
sysfs on /sys type sysfs (ro,nosuid,nodev,noexec,relatime)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,relatime,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (ro,nosuid,nodev,noexec,relatime,xattr,name=systemd)
cgroup on /sys/fs/cgroup/freezer type cgroup (ro,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (ro,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/perf_event type cgroup (ro,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/rdma type cgroup (ro,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/pids type cgroup (ro,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (ro,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/devices type cgroup (ro,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/cpuset type cgroup (ro,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (ro,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/blkio type cgroup (ro,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/memory type cgroup (ro,nosuid,nodev,noexec,relatime,memory)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
/dev/vda2 on /dev/termination-log type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/vda2 on /etc/resolv.conf type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/vda2 on /etc/hostname type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/vda2 on /etc/hosts type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)
tmpfs on /run/secrets/kubernetes.io/serviceaccount type tmpfs (ro,relatime)
proc on /proc/asound type proc (ro,relatime)
proc on /proc/bus type proc (ro,relatime)
proc on /proc/fs type proc (ro,relatime)
proc on /proc/irq type proc (ro,relatime)
proc on /proc/sys type proc (ro,relatime)
proc on /proc/sysrq-trigger type proc (ro,relatime)
tmpfs on /proc/acpi type tmpfs (ro,relatime)
tmpfs on /proc/kcore type tmpfs (rw,nosuid,size=65536k,mode=755)
tmpfs on /proc/keys type tmpfs (rw,nosuid,size=65536k,mode=755)
tmpfs on /proc/timer_list type tmpfs (rw,nosuid,size=65536k,mode=755)
tmpfs on /proc/sched_debug type tmpfs (rw,nosuid,size=65536k,mode=755)
tmpfs on /proc/scsi type tmpfs (ro,relatime)
tmpfs on /sys/firmware type tmpfs (ro,relatime)
root@nginx-d46f5678b-xc4gr:/#
root@nginx-d46f5678b-xc4gr:/# printenv
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_PORT=443
HOSTNAME=nginx-d46f5678b-xc4gr
PWD=/
PKG_RELEASE=1~buster
HOME=/root
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
NJS_VERSION=0.4.2
TERM=xterm
SHLVL=1
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PORT=443
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
NGINX_VERSION=1.19.1
_=/usr/bin/printenv
Can I get a look at the etcd database?
8199b079fad4 303ce5db0e90 "etcd --advertise-cl…" 41 minutes ago Up 41 minutes k8s_etcd_etcd-ubuntu20_kube-system_628bb24559b0a3733400c564e8ff778f_4
kanda@ubuntu20:~$ docker exec -it 8199b079fad4 /bin/bash
OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"/bin/bash\": stat /bin/bash: no such file or directory": unknown
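(The etcd image ships no bash; /bin/sh works, as the kubectl exec near the end of this note shows.)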
kanda@ubuntu20:~$ docker exec -it 8199b079fad4 etcdctl ls
Error: unknown command "ls" for "etcdctl"
Run 'etcdctl --help' for usage.
Error: unknown command "ls" for "etcdctl"
kanda@ubuntu20:~$ docker exec -it 8199b079fad4 etcdctl help
NAME:
etcdctl - A simple command line client for etcd3.
USAGE:
etcdctl [flags]
VERSION:
3.4.3
API VERSION:
3.4
kanda@ubuntu20:~$ docker exec -it 8199b079fad4 etcdctl get
Error: get command needs one argument as key and an optional argument as range_end
kanda@ubuntu20:~$ docker exec -it 8199b079fad4 etcdctl member list
{"level":"warn","ts":"2020-07-25T01:43:33.589Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"endpoint://client-8600eabd-205a-4fe1-9453-ef0d4b2951e0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest connection error: connection closed"}
Error: context deadline exceeded
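This times out because etcd here talks TLS only; etcdctl needs the CA, cert, and key from /etc/kubernetes/pki/etcd. The working invocation is near the end of this note.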
kanda@ubuntu20:~$ sudo iptables --list
Chain INPUT (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes service portals */
KUBE-EXTERNAL-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes externally-visible service portals */
KUBE-FIREWALL all -- anywhere anywhere
Chain FORWARD (policy DROP)
target prot opt source destination
KUBE-FORWARD all -- anywhere anywhere /* kubernetes forwarding rules */
KUBE-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes service portals */
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes service portals */
KUBE-FIREWALL all -- anywhere anywhere
Chain DOCKER (1 references)
target prot opt source destination
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target prot opt source destination
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
Chain KUBE-EXTERNAL-SERVICES (1 references)
target prot opt source destination
Chain KUBE-FIREWALL (2 references)
target prot opt source destination
DROP all -- anywhere anywhere /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
DROP all -- !localhost/8 localhost/8 /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT
Chain KUBE-FORWARD (1 references)
target prot opt source destination
DROP all -- anywhere anywhere ctstate INVALID
ACCEPT all -- anywhere anywhere /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT all -- anywhere anywhere /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED
Chain KUBE-KUBELET-CANARY (0 references)
target prot opt source destination
Chain KUBE-PROXY-CANARY (0 references)
target prot opt source destination
Chain KUBE-SERVICES (3 references)
target prot opt source destination
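Note that KUBE-SERVICES is empty above because this lists the filter table; the ClusterIP-to-pod DNAT rules that answer the earlier question live in the nat table (not captured here):
sudo iptables -t nat -L KUBE-SERVICES -n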
kanda@ubuntu20:~$ kubectl run -i -t busybox --image=busybox --restart=Never
If you don't see a command prompt, try pressing enter.
/ # ip addr
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
4: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1440 qdisc noqueue
link/ether 3a:c2:dc:43:80:bf brd ff:ff:ff:ff:ff:ff
inet 192.168.219.253/32 scope global eth0
valid_lft forever preferred_lft forever
# ip route
default via 169.254.1.1 dev eth0
169.254.1.1 dev eth0 scope link
/ # traceroute 192.168.122.156
traceroute to 192.168.122.156 (192.168.122.156), 30 hops max, 46 byte packets
1 192-168-122-156.kubernetes.default.svc.cluster.local (192.168.122.156) 0.003 ms 0.018 ms 0.005 ms
/ # traceroute 1.2.3.4
traceroute to 1.2.3.4 (1.2.3.4), 30 hops max, 46 byte packets
1 192-168-122-156.kubernetes.default.svc.cluster.local (192.168.122.156) 0.006 ms 0.002 ms 0.001 ms
2 192.168.122.1 (192.168.122.1) 0.206 ms 0.139 ms 0.112 ms
3 192.168.0.1 (192.168.0.1) 3.048 ms 3.054 ms 1.158 ms
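Calico hands each pod a /32 with a default route via the dummy gateway 169.254.1.1, and the host side of the veth answers ARP for that address (proxy ARP), so every packet from the pod lands on the host and is routed from there. The knob can be checked on the host (a check, not from this session):
cat /proc/sys/net/ipv4/conf/califb3eb82ef50/proxy_arp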
kanda@ubuntu20:~$ kubectl --v=10 get pods
..
I0725 04:19:21.697368 187988 round_trippers.go:423] curl -k -v -XGET -H "Accept: application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json" -H "User-Agent: kubectl/v1.18.6 (linux/amd64) kubernetes/dff82dc" 'https://k8smaster:6443/api/v1/namespaces/default/pods?limit=500'
I0725 04:19:21.702899 187988 round_trippers.go:443] GET https://k8smaster:6443/api/v1/namespaces/default/pods?limit=500 200 OK in 5 milliseconds
I0725 04:19:21.702933 187988 round_trippers.go:449] Response Headers:
I0725 04:19:21.702940 187988 round_trippers.go:452] Content-Type: application/json
I0725 04:19:21.702945 187988 round_trippers.go:452] Date: Sat, 25 Jul 2020 04:19:21 GMT
I0725 04:19:21.702950 187988 round_trippers.go:452] Cache-Control: no-cache, private
I0725 04:19:21.703085 187988 request.go:1068] Response Body: {"kind":"Table","apiVersion":"meta.k8s.io/v1","
kanda@ubuntu20:~$ curl -k -v -XGET https://k8smaster:6443/api/v1/namespaces/default/pods
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying 192.168.122.156:6443...
* TCP_NODELAY set
* Connected to k8smaster (192.168.122.156) port 6443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Request CERT (13):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Certificate (11):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use h2
* Server certificate:
* subject: CN=kube-apiserver
* start date: Jul 24 03:57:11 2020 GMT
* expire date: Jul 24 03:57:11 2021 GMT
* issuer: CN=kubernetes
* SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x555ec0b8ddb0)
> GET /api/v1/namespaces/default/pods HTTP/2
> Host: k8smaster:6443
> user-agent: curl/7.68.0
> accept: */*
>
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* Connection state changed (MAX_CONCURRENT_STREAMS == 250)!
< HTTP/2 403
< cache-control: no-cache, private
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 310
< date: Sat, 25 Jul 2020 04:21:57 GMT
<
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "pods is forbidden: User \"system:anonymous\" cannot list resource \"pods\" in API group \"\" in the namespace \"default\"",
"reason": "Forbidden",
"details": {
"kind": "pods"
},
"code": 403
* Connection #0 to host k8smaster left intact
The keys are here, inside ~/.kube/config.
kanda@ubuntu20:~$ export client=$(grep client-cert ~/.kube/config |cut -d" " -f 6)
kanda@ubuntu20:~$ export key=$(grep client-key-data ~/.kube/config |cut -d " " -f 6)
kanda@ubuntu20:~$ export auth=$(grep certificate-authority-data ~/.kube/config |cut -d " " -f 6)
kanda@ubuntu20:~$ echo $client | base64 -d - > ./client.pem
kanda@ubuntu20:~$ echo $key | base64 -d - > ./client-key.pem
kanda@ubuntu20:~$ echo $auth | base64 -d - > ./ca.pem
kanda@ubuntu20:~$ curl --cert ./client.pem --key ./client-key.pem --cacert ./ca.pem https://k8smaster:6443/api/v1/pods
{
"kind": "PodList",
"apiVersion": "v1",
"metadata": {
"selfLink": "/api/v1/pods",
"resourceVersion": "56661"
},
"items": [
{
"metadata": {
"name": "nginx-d46f5678b-xc4gr",
"generateName": "nginx-d46f5678b-",
Token-based authentication works too.
kanda@ubuntu20:~$ export token=$(kubectl describe secret default-token-zzgrf | grep ^token | cut -f7 -d ' ')
kanda@ubuntu20:~$ curl https://localhost:6443/apis --header "Authorization: Bearer $token" -k
{
"kind": "APIGroupList",
"apiVersion": "v1",
"groups": [
Inside a pod, the token shows up here.
root@nginx-d46f5678b-xc4gr:/var/run/secrets/kubernetes.io/serviceaccount# ls -l
total 0
lrwxrwxrwx 1 root root 13 Jul 25 01:18 ca.crt -> ..data/ca.crt
lrwxrwxrwx 1 root root 16 Jul 25 01:18 namespace -> ..data/namespace
lrwxrwxrwx 1 root root 12 Jul 25 01:18 token -> ..data/token
root@nginx-d46f5678b-xc4gr:/var/run/secrets/kubernetes.io/serviceaccount# head token
eyJhbGciOiJSUzI1NiIsImt
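Those three mounted files are everything a pod needs to call the API server from inside; a sketch, not run in this session:
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" https://kubernetes.default.svc/api/v1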
kanda@ubuntu20:~$ kubectl proxy --api-prefix=/ &
[1] 287404
kanda@ubuntu20:~$ Starting to serve on 127.0.0.1:8001
kanda@ubuntu20:~$ curl https://127.0.0.1:8001/api/v1 -k
curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number
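kubectl proxy serves plain HTTP on 127.0.0.1:8001 and does the TLS toward the API server itself, hence the wrong-version-number error. Plain http works:
curl http://127.0.0.1:8001/api/v1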
kanda@ubuntu20:$ kubectl create -f PVol.yaml
persistentvolume/pvvol-1 created
kanda@ubuntu20:$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvvol-1 1Gi RWX Retain Available 11s
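PVol.yaml itself is not shown above; a minimal sketch consistent with the listing, assuming an NFS backing (server and path are hypothetical):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvvol-1
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.122.1   # hypothetical NFS server
    path: /srv/nfs          # hypothetical export path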
$ kubectl create -f pvc.yaml
persistentvolumeclaim/pvc-one created
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-one Bound pvvol-1 1Gi RWX 6s
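pvc.yaml, likewise a sketch matching the Bound result:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-one
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi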
$ kubectl create -f nfs-pod.yaml
deployment.apps/nginx-nfs created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-d46f5678b-xc4gr 1/1 Running 3 13h
nginx-nfs-5f6d8b9f77-lttpp 1/1 Running 0 10s
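nfs-pod.yaml would be a Deployment mounting that claim; a sketch (mount path hypothetical):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-nfs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-nfs
  template:
    metadata:
      labels:
        app: nginx-nfs
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: nfs-vol
          mountPath: /usr/share/nginx/html   # hypothetical
      volumes:
      - name: nfs-vol
        persistentVolumeClaim:
          claimName: pvc-one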
https://github.com/container-storage-interface
What is Calico:
https://techblog.yahoo.co.jp/infrastructure/kubernetes_calico_networking/
https://foobaron.hatenablog.com/entry/k8s-pod-networking-calico
After these messages appear, both the master and the worker lose network connectivity.
Jul 26 04:55:23 kubeworker kernel: ipip: IPv4 and MPLS over IPv4 tunneling driver
Jul 26 04:55:23 kubeworker systemd-udevd[4008]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Jul 26 04:55:23 kubeworker networkd-dispatcher[711]: WARNING:Unknown index 4 seen, reloading interface list
Jul 26 04:55:23 kubeworker networkd-dispatcher[711]: WARNING:Unknown index 5 seen, reloading interface list
Jul 26 04:55:23 kubeworker systemd-udevd[4009]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Jul 26 04:55:23 kubeworker systemd-udevd[4009]: Using default interface naming scheme 'v245'.
Jul 26 04:55:23 kubeworker systemd-udevd[4009]: calico_tmp_B: Could not generate persistent MAC: No data available
Jul 26 04:55:23 kubeworker systemd-udevd[4008]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Jul 26 04:55:23 kubeworker systemd-udevd[4008]: Using default interface naming scheme 'v245'.
Jul 26 04:55:23 kubeworker systemd-udevd[4008]: calico_tmp_A: Could not generate persistent MAC: No data available
Jul 26 04:55:24 kubeworker systemd-networkd[676]: tunl0: Link UP
Jul 26 04:55:24 kubeworker systemd-networkd[676]: tunl0: Gained carrier
top - 04:55:24 up 6 min, 3 users, load average: 0.28, 0.06, 0.02
Tasks: 186 total, 1 running, 185 sleeping, 0 stopped, 0 zombie
%Cpu(s): 6.0 us, 4.5 sy, 0.0 ni, 87.3 id, 2.2 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 3936.1 total, 2824.0 free, 329.5 used, 782.6 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 3378.5 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4157 root 20 0 1189844 41816 32064 S 4.7 1.0 0:00.14 calico-+
3303 root 20 0 1322412 82128 61412 S 1.3 2.0 0:00.38 kubelet
739 root 20 0 1320320 88472 46316 S 1.0 2.2 0:00.98 dockerd
4158 root 20 0 821184 39040 31428 S 1.0 1.0 0:00.03 calico-+
4154 root 20 0 747452 37076 29104 S 0.7 0.9 0:00.02 calico-+
1 root 20 0 102012 11672 8496 S 0.3 0.3 0:00.98 systemd
kanda@ubuntu20:~$ kubectl run ubuntu --image=ubuntu -- sleep 3600
pod/ubuntu created
kanda@ubuntu20:~$ kubectl exec ubuntu -it -- /bin/bash
root@ubuntu:/# ip route
bash: ip: command not found
root@ubuntu:~# cat /etc/issue
Ubuntu 20.04 LTS \n \l
root@ubuntu:~# apt-get install iproute2
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package iproute2
root@ubuntu:~# apt-get update
Err:1 http://security.ubuntu.com/ubuntu focal-security InRelease
Temporary failure resolving 'security.ubuntu.com'
root@ubuntu20:/var/log# vi syslog
Jul 26 11:45:12 ubuntu20 kubelet[10265]: 2020-07-26 11:45:12.644 [INFO][10265] ipam.go 570: Auto-assigned 1 out of 1 IPv4s: [192.168.219.209/26] handle="k8s-pod-network.b94974b9fbdc6d6cb093b96e891dbec7b043c112e05de176713bbf0743ba90db" host="ubuntu20"
Jul 26 11:45:12 ubuntu20 kubelet[10265]: 2020-07-26 11:45:12.644 [INFO][10265] ipam_plugin.go 235: Calico CNI IPAM assigned addresses IPv4=[192.168.219.209/26] IPv6=[] ContainerID="b94974b9fbdc6d6cb093b96e891dbec7b043c112e05de176713bbf0743ba90db" HandleID="k8s-pod-network.b94974b9fbdc6d6cb093b96e891dbec7b043c112e05de176713bbf0743ba90db" Workload="ubuntu20-k8s-ubuntu-eth0"
Jul 26 11:45:12 ubuntu20 kubelet[10265]: 2020-07-26 11:45:12.644 [INFO][10265] ipam_plugin.go 261: IPAM Result ContainerID="b94974b9fbdc6d6cb093b96e891dbec7b043c112e05de176713bbf0743ba90db" HandleID="k8s-pod-network.b94974b9fbdc6d6cb093b96e891dbec7b043c112e05de176713bbf0743ba90db" Workload="ubuntu20-k8s-ubuntu-eth0" result.IPs=[]*current.IPConfig{(*current.IPConfig)(0xc0003e4600)}
Jul 26 11:45:12 ubuntu20 kubelet[10257]: 2020-07-26 11:45:12.645 [INFO][10257] k8s.go 359: Populated endpoint ContainerID="b94974b9fbdc6d6cb093b96e891dbec7b043c112e05de176713bbf0743ba90db" Namespace="default" Pod="ubuntu" WorkloadEndpoint="ubuntu20-k8s-ubuntu-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ubuntu20-k8s-ubuntu-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"f36973f1-499c-4733-a30a-53e0d76eed4a", ResourceVersion:"115458", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63731360710, loc:(*time.Location)(0x26ee2a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default", "run":"ubuntu"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ubuntu20", ContainerID:"", Pod:"ubuntu", Endpoint:"eth0", IPNetworks:[]string{"192.168.219.209/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"califb3eb82ef50", MAC:"", Ports:[]v3.EndpointPort(nil)}}
Jul 26 11:45:12 ubuntu20 kubelet[10257]: 2020-07-26 11:45:12.645 [INFO][10257] k8s.go 360: Calico CNI using IPs: [192.168.219.209/32] ContainerID="b94974b9fbdc6d6cb093b96e891dbec7b043c112e05de176713bbf0743ba90db" Namespace="default" Pod="ubuntu" WorkloadEndpoint="ubuntu20-k8s-ubuntu-eth0"
Jul 26 11:45:12 ubuntu20 kubelet[10257]: 2020-07-26 11:45:12.645 [INFO][10257] linux_dataplane.go 61: Setting the host side veth name to califb3eb82ef50 ContainerID="b94974b9fbdc6d6cb093b96e891dbec7b043c112e05de176713bbf0743ba90db" Namespace="default" Pod="ubuntu" WorkloadEndpoint="ubuntu20-k8s-ubuntu-eth0"
Jul 26 11:45:12 ubuntu20 kubelet[10257]: 2020-07-26 11:45:12.646 [INFO][10257] linux_dataplane.go 385: Disabling IPv4 forwarding ContainerID="b94974b9fbdc6d6cb093b96e891dbec7b043c112e05de176713bbf0743ba90db" Namespace="default" Pod="ubuntu" WorkloadEndpoint="ubuntu20-k8s-ubuntu-eth0"
Jul 26 11:45:12 ubuntu20 systemd-networkd[684]: califb3eb82ef50: Link UP
Jul 26 11:45:12 ubuntu20 systemd-networkd[684]: califb3eb82ef50: Gained carrier
Jul 26 11:45:12 ubuntu20 kubelet[10257]: 2020-07-26 11:45:12.664 [INFO][10257] k8s.go 391: Added Mac, interface name, and active container ID to endpoint ContainerID="b94974b9fbdc6d6cb093b96e891dbec7b043c112e05de176713bbf0743ba90db" Namespace="default" Pod="ubuntu" WorkloadEndpoint="ubuntu20-k8s-ubuntu-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ubuntu20-k8s-ubuntu-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"f36973f1-499c-4733-a30a-53e0d76eed4a", ResourceVersion:"115458", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63731360710, loc:(*time.Location)(0x26ee2a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default", "run":"ubuntu"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ubuntu20", ContainerID:"b94974b9fbdc6d6cb093b96e891dbec7b043c112e05de176713bbf0743ba90db", Pod:"ubuntu", Endpoint:"eth0", IPNetworks:[]string{"192.168.219.209/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"califb3eb82ef50", MAC:"b6:9e:c1:42:74:a4", Ports:[]v3.EndpointPort(nil)}}
root@ubuntu20:~# modinfo veth
filename: /lib/modules/5.4.0-42-generic/kernel/drivers/net/veth.ko
alias: rtnl-link-veth
license: GPL v2
description: Virtual Ethernet Tunnel
root@ubuntu20:~# ip addr
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:a6:13:4a:03 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
4: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
inet 192.168.219.192/32 brd 192.168.219.192 scope global tunl0
valid_lft forever preferred_lft forever
7: calib6e6e6b9abc@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
8: cali7b89557dd0b@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
9: calif65742e9417@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
10: calie84dc3508a1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 3
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
12: califb3eb82ef50@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 4
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
root@ubuntu20:~# ip route
default via 192.168.122.1 dev enp1s0 proto dhcp src 192.168.122.156 metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.122.0/24 dev enp1s0 proto kernel scope link src 192.168.122.156
192.168.122.1 dev enp1s0 proto dhcp scope link src 192.168.122.156 metric 100
blackhole 192.168.219.192/26 proto bird
192.168.219.199 dev calib6e6e6b9abc scope link
192.168.219.200 dev cali7b89557dd0b scope link
192.168.219.205 dev calif65742e9417 scope link
192.168.219.206 dev calie84dc3508a1 scope link
192.168.219.209 dev califb3eb82ef50 scope link
blackhole 192.168.220.0/26 proto bird
root@ubuntu20:~# iptables --list
Chain INPUT (policy ACCEPT)
target prot opt source destination
cali-INPUT all -- anywhere anywhere /* cali:Cz_u1IQiXIMmKD4c */
KUBE-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes service portals */
KUBE-EXTERNAL-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes externally-visible service portals */
KUBE-FIREWALL all -- anywhere anywhere
Chain FORWARD (policy DROP)
target prot opt source destination
cali-FORWARD all -- anywhere anywhere /* cali:wUHhoiAYhphO9Mso */
KUBE-FORWARD all -- anywhere anywhere /* kubernetes forwarding rules */
KUBE-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes service portals */
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
cali-OUTPUT all -- anywhere anywhere /* cali:tVnHkvAo15HuiPy0 */
KUBE-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes service portals */
KUBE-FIREWALL all -- anywhere anywhere
Chain DOCKER (1 references)
target prot opt source destination
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target prot opt source destination
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
Chain KUBE-EXTERNAL-SERVICES (1 references)
target prot opt source destination
Chain KUBE-FIREWALL (2 references)
target prot opt source destination
DROP all -- anywhere anywhere /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
DROP all -- !localhost/8 localhost/8 /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT
Chain KUBE-FORWARD (1 references)
target prot opt source destination
DROP all -- anywhere anywhere ctstate INVALID
ACCEPT all -- anywhere anywhere /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT all -- anywhere anywhere /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED
Chain KUBE-KUBELET-CANARY (0 references)
target prot opt source destination
Chain KUBE-PROXY-CANARY (0 references)
target prot opt source destination
Chain KUBE-SERVICES (3 references)
target prot opt source destination
Chain cali-FORWARD (1 references)
target prot opt source destination
MARK all -- anywhere anywhere /* cali:vjrMJCRpqwy5oRoX */ MARK and 0xfff1ffff
cali-from-hep-forward all -- anywhere anywhere /* cali:A_sPAO0mcxbT9mOV */ mark match 0x0/0x10000
cali-from-wl-dispatch all -- anywhere anywhere /* cali:8ZoYfO5HKXWbB3pk */
cali-to-wl-dispatch all -- anywhere anywhere /* cali:jdEuaPBe14V2hutn */
cali-to-hep-forward all -- anywhere anywhere /* cali:12bc6HljsMKsmfr- */
ACCEPT all -- anywhere anywhere /* cali:MH9kMp5aNICL-Olv */ /* Policy explicitly accepted packet. */ mark match 0x10000/0x10000
Chain cali-INPUT (1 references)
target prot opt source destination
ACCEPT ipencap-- anywhere anywhere /* cali:PajejrV4aFdkZojI */ /* Allow IPIP packets from Calico hosts */ match-set cali40all-hosts-net src ADDRTYPE match dst-type LOCAL
DROP ipencap-- anywhere anywhere /* cali:_wjq-Yrma8Ly1Svo */ /* Drop IPIP packets from non-Calico hosts */
cali-wl-to-host all -- anywhere anywhere [goto] /* cali:8TZGxLWh_Eiz66wc */
ACCEPT all -- anywhere anywhere /* cali:6McIeIDvPdL6PE1T */ mark match 0x10000/0x10000
MARK all -- anywhere anywhere /* cali:YGPbrUms7NId8xVa */ MARK and 0xfff0ffff
cali-from-host-endpoint all -- anywhere anywhere /* cali:2gmY7Bg2i0i84Wk_ */
ACCEPT all -- anywhere anywhere /* cali:q-Vz2ZT9iGE331LL */ /* Host endpoint policy accepted packet. */ mark match 0x10000/0x10000
Chain cali-OUTPUT (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* cali:Mq1_rAdXXH3YkrzW */ mark match 0x10000/0x10000
RETURN all -- anywhere anywhere /* cali:69FkRTJDvD5Vu6Vl */
ACCEPT ipencap-- anywhere anywhere /* cali:AnEsmO6bDZbQntWW */ /* Allow IPIP packets to other Calico hosts */ match-set cali40all-hosts-net dst ADDRTYPE match src-type LOCAL
MARK all -- anywhere anywhere /* cali:9e9Uf3GU5tX--Lxy */ MARK and 0xfff0ffff
cali-to-host-endpoint all -- anywhere anywhere /* cali:OB2pzPrvQn6PC89t */
ACCEPT all -- anywhere anywhere /* cali:tvSSMDBWrme3CUqM */ /* Host endpoint policy accepted packet. */ mark match 0x10000/0x10000
Chain cali-from-hep-forward (1 references)
target prot opt source destination
Chain cali-from-host-endpoint (1 references)
target prot opt source destination
Chain cali-from-wl-dispatch (2 references)
target prot opt source destination
cali-fw-cali7b89557dd0b all -- anywhere anywhere [goto] /* cali:Ixn9RpwtTor9kePb */
cali-fw-calib6e6e6b9abc all -- anywhere anywhere [goto] /* cali:KxBk7ZNEZ0VKBrUU */
cali-fw-calie84dc3508a1 all -- anywhere anywhere [goto] /* cali:NJ-GP7rr69i1hM79 */
cali-from-wl-dispatch-f all -- anywhere anywhere [goto] /* cali:HhYWDHOjNb--fW10 */
DROP all -- anywhere anywhere /* cali:i1rkdPMVD_zU0l6- */ /* Unknown interface */
Chain cali-from-wl-dispatch-f (1 references)
target prot opt source destination
cali-fw-calif65742e9417 all -- anywhere anywhere [goto] /* cali:tuzIZiEgdJcXs7Ug */
cali-fw-califb3eb82ef50 all -- anywhere anywhere [goto] /* cali:ZPY90vN4cqnCmq1P */
DROP all -- anywhere anywhere /* cali:rdPLWg8wHcR5P6in */ /* Unknown interface */
Chain cali-fw-cali7b89557dd0b (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* cali:eCqkZtufeP6bIjWD */ ctstate RELATED,ESTABLISHED
DROP all -- anywhere anywhere /* cali:4jURMILUp9IhA_1h */ ctstate INVALID
MARK all -- anywhere anywhere /* cali:6M4nQa6Trvzs129c */ MARK and 0xfffeffff
DROP udp -- anywhere anywhere /* cali:nJbS_Wdhqb4IpeKZ */ /* Drop VXLAN encapped packets originating in pods */ multiport dports 4789
DROP ipencap-- anywhere anywhere /* cali:zZ631xn6Qxt827YE */ /* Drop IPinIP encapped packets originating in pods */
cali-pro-kns.kube-system all -- anywhere anywhere /* cali:tvpDNLR2rNS7COGD */
RETURN all -- anywhere anywhere /* cali:FELhZZ30zmK4WoZo */ /* Return if profile accepted */ mark match 0x10000/0x10000
cali-pro-_PTRGc0U-L5Kz7V6ERW all -- anywhere anywhere /* cali:kkEOhid0bnqlZsos */
RETURN all -- anywhere anywhere /* cali:AncnndCNFZCPAIY3 */ /* Return if profile accepted */ mark match 0x10000/0x10000
DROP all -- anywhere anywhere /* cali:5xYDSwNAkzE9dPpC */ /* Drop if no profiles matched */
Chain cali-fw-calib6e6e6b9abc (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* cali:dah6bH0nMokKDRH6 */ ctstate RELATED,ESTABLISHED
DROP all -- anywhere anywhere /* cali:YbJjN08vAVSGUEq7 */ ctstate INVALID
MARK all -- anywhere anywhere /* cali:L8l6xxg9xxD1xxyk */ MARK and 0xfffeffff
DROP udp -- anywhere anywhere /* cali:b0n7mjkaWGN8g6pQ */ /* Drop VXLAN encapped packets originating in pods */ multiport dports 4789
DROP ipencap-- anywhere anywhere /* cali:RaISk_P5e90P7wpL */ /* Drop IPinIP encapped packets originating in pods */
cali-pro-kns.kube-system all -- anywhere anywhere /* cali:--GRq9B3AvZqVUjR */
RETURN all -- anywhere anywhere /* cali:QtioEtxVMWI9usWw */ /* Return if profile accepted */ mark match 0x10000/0x10000
cali-pro-_u2Tn2rSoAPffvE7JO6 all -- anywhere anywhere /* cali:_271CsrKZBcjJKhS */
RETURN all -- anywhere anywhere /* cali:xmkMRCzmmR1k-17g */ /* Return if profile accepted */ mark match 0x10000/0x10000
DROP all -- anywhere anywhere /* cali:bnQ38KXhrrqn_c6o */ /* Drop if no profiles matched */
Chain cali-fw-calie84dc3508a1 (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* cali:2Ca1PFdI4RhHWeqD */ ctstate RELATED,ESTABLISHED
DROP all -- anywhere anywhere /* cali:bUKzzrayG4Bk9Ir_ */ ctstate INVALID
MARK all -- anywhere anywhere /* cali:iBlZx3ru8g8t5VeX */ MARK and 0xfffeffff
DROP udp -- anywhere anywhere /* cali:uNbLXSZt-N1Tzm3e */ /* Drop VXLAN encapped packets originating in pods */ multiport dports 4789
DROP ipencap-- anywhere anywhere /* cali:2pr8XtdLhFahgm8d */ /* Drop IPinIP encapped packets originating in pods */
cali-pro-kns.kube-system all -- anywhere anywhere /* cali:ZJJlL0OrJiutk-Ez */
RETURN all -- anywhere anywhere /* cali:JE8ohn2cTksTkimn */ /* Return if profile accepted */ mark match 0x10000/0x10000
cali-pro-_u2Tn2rSoAPffvE7JO6 all -- anywhere anywhere /* cali:c0nqV1LYuiqPkeKU */
RETURN all -- anywhere anywhere /* cali:GDThGEsHXDd1EVwL */ /* Return if profile accepted */ mark match 0x10000/0x10000
DROP all -- anywhere anywhere /* cali:c1rYWPfOtx_l0I9i */ /* Drop if no profiles matched */
Chain cali-fw-calif65742e9417 (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* cali:dx_sdXAmp_2Qk1Ft */ ctstate RELATED,ESTABLISHED
DROP all -- anywhere anywhere /* cali:-a7s1JcdoZffsY_0 */ ctstate INVALID
MARK all -- anywhere anywhere /* cali:g6O34T1cOOOj3vo- */ MARK and 0xfffeffff
DROP udp -- anywhere anywhere /* cali:_VhDv1yfVKbDX3mJ */ /* Drop VXLAN encapped packets originating in pods */ multiport dports 4789
DROP ipencap-- anywhere anywhere /* cali:_r3SRIOEaqDIQ2Tl */ /* Drop IPinIP encapped packets originating in pods */
cali-pro-kns.default all -- anywhere anywhere /* cali:N-lt6MIc72E7reZG */
RETURN all -- anywhere anywhere /* cali:Snxt_mx2gWz_39op */ /* Return if profile accepted */ mark match 0x10000/0x10000
cali-pro-ksa.default.default all -- anywhere anywhere /* cali:v8teqONaqgZcnQeA */
RETURN all -- anywhere anywhere /* cali:ysJqaN0-iQKfj-Ey */ /* Return if profile accepted */ mark match 0x10000/0x10000
DROP all -- anywhere anywhere /* cali:5e5IJszC0cKx-Epg */ /* Drop if no profiles matched */
Chain cali-fw-califb3eb82ef50 (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* cali:5LmjKihAskz6NaV1 */ ctstate RELATED,ESTABLISHED
DROP all -- anywhere anywhere /* cali:P_ppxplgtnI7HY2s */ ctstate INVALID
MARK all -- anywhere anywhere /* cali:Bc3oo44udCSQNPEt */ MARK and 0xfffeffff
DROP udp -- anywhere anywhere /* cali:w25Wn7nc2iKZrUZS */ /* Drop VXLAN encapped packets originating in pods */ multiport dports 4789
DROP ipencap-- anywhere anywhere /* cali:JTgIjcy1gda4fo3q */ /* Drop IPinIP encapped packets originating in pods */
cali-pro-kns.default all -- anywhere anywhere /* cali:w16_7owAJkdiojrO */
RETURN all -- anywhere anywhere /* cali:NNBLuZvYyA79sqtw */ /* Return if profile accepted */ mark match 0x10000/0x10000
cali-pro-ksa.default.default all -- anywhere anywhere /* cali:26dGbPi3qgWD-qUL */
RETURN all -- anywhere anywhere /* cali:DWOvdkmLwkFtA-o2 */ /* Return if profile accepted */ mark match 0x10000/0x10000
DROP all -- anywhere anywhere /* cali:NlRD0IqMU91pES77 */ /* Drop if no profiles matched */
Chain cali-pri-_PTRGc0U-L5Kz7V6ERW (1 references)
target prot opt source destination
Chain cali-pri-_u2Tn2rSoAPffvE7JO6 (2 references)
target prot opt source destination
Chain cali-pri-kns.default (2 references)
target prot opt source destination
MARK all -- anywhere anywhere /* cali:7Fnh7Pv3_98FtLW7 */ MARK or 0x10000
RETURN all -- anywhere anywhere /* cali:ZbV6bJXWSRefjK0u */ mark match 0x10000/0x10000
Chain cali-pri-kns.kube-system (3 references)
target prot opt source destination
MARK all -- anywhere anywhere /* cali:zoH5gU6U55FKZxEo */ MARK or 0x10000
RETURN all -- anywhere anywhere /* cali:bcGRIJcyOS9dgBiB */ mark match 0x10000/0x10000
Chain cali-pri-ksa.default.default (2 references)
target prot opt source destination
Chain cali-pro-_PTRGc0U-L5Kz7V6ERW (1 references)
target prot opt source destination
Chain cali-pro-_u2Tn2rSoAPffvE7JO6 (2 references)
target prot opt source destination
Chain cali-pro-kns.default (2 references)
target prot opt source destination
MARK all -- anywhere anywhere /* cali:oLzzje5WExbgfib5 */ MARK or 0x10000
RETURN all -- anywhere anywhere /* cali:4goskqvxh5xcGw3s */ mark match 0x10000/0x10000
Chain cali-pro-kns.kube-system (3 references)
target prot opt source destination
MARK all -- anywhere anywhere /* cali:-50oJuMfLVO3LkBk */ MARK or 0x10000
RETURN all -- anywhere anywhere /* cali:ztVPKv1UYejNzm1g */ mark match 0x10000/0x10000
Chain cali-pro-ksa.default.default (2 references)
target prot opt source destination
Chain cali-to-hep-forward (1 references)
target prot opt source destination
Chain cali-to-host-endpoint (1 references)
target prot opt source destination
Chain cali-to-wl-dispatch (1 references)
target prot opt source destination
cali-tw-cali7b89557dd0b all -- anywhere anywhere [goto] /* cali:BRvhQki2G8gGfZdL */
cali-tw-calib6e6e6b9abc all -- anywhere anywhere [goto] /* cali:eWRBQUdlVFYx5RLy */
cali-tw-calie84dc3508a1 all -- anywhere anywhere [goto] /* cali:3s1uLCSl3RPdOBXh */
cali-to-wl-dispatch-f all -- anywhere anywhere [goto] /* cali:7q2kpV49aLOCdhEn */
DROP all -- anywhere anywhere /* cali:X-jqDNZAHJuTY9TC */ /* Unknown interface */
Chain cali-to-wl-dispatch-f (1 references)
target prot opt source destination
cali-tw-calif65742e9417 all -- anywhere anywhere [goto] /* cali:2fsh-mtEOna-8BA7 */
cali-tw-califb3eb82ef50 all -- anywhere anywhere [goto] /* cali:Z1ty6BOv3HEts-2O */
DROP all -- anywhere anywhere /* cali:rKIOo2U7KA1scG-Y */ /* Unknown interface */
Chain cali-tw-cali7b89557dd0b (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* cali:czdRr_mhwOtELWWS */ ctstate RELATED,ESTABLISHED
DROP all -- anywhere anywhere /* cali:B-ldmzkzutJfX-TC */ ctstate INVALID
MARK all -- anywhere anywhere /* cali:M2TttchrkXGk7aU4 */ MARK and 0xfffeffff
cali-pri-kns.kube-system all -- anywhere anywhere /* cali:Ire_KK5PSBXOGinf */
RETURN all -- anywhere anywhere /* cali:_xHIvEcDnQ7hr3yI */ /* Return if profile accepted */ mark match 0x10000/0x10000
cali-pri-_PTRGc0U-L5Kz7V6ERW all -- anywhere anywhere /* cali:mYpHeHG_WUYv7E0y */
RETURN all -- anywhere anywhere /* cali:g26RLBBq6JqjowTM */ /* Return if profile accepted */ mark match 0x10000/0x10000
DROP all -- anywhere anywhere /* cali:LV77tsZoYW3aKFvE */ /* Drop if no profiles matched */
Chain cali-tw-calib6e6e6b9abc (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* cali:tbiPUn78HGMSGNNC */ ctstate RELATED,ESTABLISHED
DROP all -- anywhere anywhere /* cali:dbbF2VwIIoPxzWwR */ ctstate INVALID
MARK all -- anywhere anywhere /* cali:RYYWTZ0UPHj_4IsP */ MARK and 0xfffeffff
cali-pri-kns.kube-system all -- anywhere anywhere /* cali:bujdTJzuqbi0abce */
RETURN all -- anywhere anywhere /* cali:RaXu84AGsSOF44wD */ /* Return if profile accepted */ mark match 0x10000/0x10000
cali-pri-_u2Tn2rSoAPffvE7JO6 all -- anywhere anywhere /* cali:c-kbosxue8DwUXIv */
RETURN all -- anywhere anywhere /* cali:IJEl5muabWlFU52q */ /* Return if profile accepted */ mark match 0x10000/0x10000
DROP all -- anywhere anywhere /* cali:1evwM09yZk63W1MH */ /* Drop if no profiles matched */
Chain cali-tw-calie84dc3508a1 (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* cali:xnFtf_Gw92d2TAIp */ ctstate RELATED,ESTABLISHED
DROP all -- anywhere anywhere /* cali:m8V_G5tTOfU5J4Jp */ ctstate INVALID
MARK all -- anywhere anywhere /* cali:wV1_M0cK7j7pmYxw */ MARK and 0xfffeffff
cali-pri-kns.kube-system all -- anywhere anywhere /* cali:FoLGS9qt-6vQN4Yy */
RETURN all -- anywhere anywhere /* cali:YD0UJqdKK0UJxxJf */ /* Return if profile accepted */ mark match 0x10000/0x10000
cali-pri-_u2Tn2rSoAPffvE7JO6 all -- anywhere anywhere /* cali:dUbcflPpDIdygjIl */
RETURN all -- anywhere anywhere /* cali:FNELkK15LL-Om5K- */ /* Return if profile accepted */ mark match 0x10000/0x10000
DROP all -- anywhere anywhere /* cali:VMH8ITJuj-egwUEZ */ /* Drop if no profiles matched */
Chain cali-tw-calif65742e9417 (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* cali:bGAHH4y6P3dF0HMu */ ctstate RELATED,ESTABLISHED
DROP all -- anywhere anywhere /* cali:hWMljdHrnfjgRb0c */ ctstate INVALID
MARK all -- anywhere anywhere /* cali:a2KQ8vITVezqaGSF */ MARK and 0xfffeffff
cali-pri-kns.default all -- anywhere anywhere /* cali:6RuXdxg6wzSRxkfv */
RETURN all -- anywhere anywhere /* cali:V6dD5oTTBM51rXgg */ /* Return if profile accepted */ mark match 0x10000/0x10000
cali-pri-ksa.default.default all -- anywhere anywhere /* cali:b3oHZFXwpa1nlRqO */
RETURN all -- anywhere anywhere /* cali:Y4iJ9LsLeCOw7q24 */ /* Return if profile accepted */ mark match 0x10000/0x10000
DROP all -- anywhere anywhere /* cali:g96JIysAtLAWlcbt */ /* Drop if no profiles matched */
Chain cali-tw-califb3eb82ef50 (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* cali:yWpHlQiBOHalXot5 */ ctstate RELATED,ESTABLISHED
DROP all -- anywhere anywhere /* cali:E89cppgzC5ZqrKrj */ ctstate INVALID
MARK all -- anywhere anywhere /* cali:txwUmC77ypU5E6qo */ MARK and 0xfffeffff
cali-pri-kns.default all -- anywhere anywhere /* cali:6tjttx1UK4GVIiYe */
RETURN all -- anywhere anywhere /* cali:RvhYOjWOi_2NaAYi */ /* Return if profile accepted */ mark match 0x10000/0x10000
cali-pri-ksa.default.default all -- anywhere anywhere /* cali:eqkIsGyTELbyKnpt */
RETURN all -- anywhere anywhere /* cali:diulqWNzX_XchrfX */ /* Return if profile accepted */ mark match 0x10000/0x10000
DROP all -- anywhere anywhere /* cali:LtzVLSImtLoDs2Md */ /* Drop if no profiles matched */
Chain cali-wl-to-host (1 references)
target prot opt source destination
cali-from-wl-dispatch all -- anywhere anywhere /* cali:Ee9Sbo10IpVujdIY */
ACCEPT all -- anywhere anywhere /* cali:nSZbcOoG1xPONxb8 */ /* Configured DefaultEndpointToHostAction */
root@ubuntu20:~#
root@ubuntu:/sys/class/net# ls -l
total 0
lrwxrwxrwx 1 root root 0 Jul 26 12:01 eth0 -> ../../devices/virtual/net/eth0
lrwxrwxrwx 1 root root 0 Jul 26 12:01 lo -> ../../devices/virtual/net/lo
lrwxrwxrwx 1 root root 0 Jul 26 12:01 tunl0 -> ../../devices/virtual/net/tunl0
kanda@ubuntu20:~$ ethtool --driver enp1s0
driver: virtio_net
version: 1.0.0
firmware-version:
expansion-rom-version:
bus-info: 0000:01:00.0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
kanda@ubuntu20:~$ ethtool --driver calib6e6e6b9abc@if4
ethtool: bad command line argument(s)
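The @if4 suffix is only how ip link displays the peer ifindex; the interface name itself is just calib6e6e6b9abc:
ethtool --driver calib6e6e6b9abc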
root@ubuntu20:/var/log# ls pods/
default_nginx-d46f5678b-xc4gr_567115d7-3071-40a9-b536-5c132d42ec0d
lrwxrwxrwx 1 root root 165 Jul 26 05:00 pods/default_nginx-d46f5678b-xc4gr_567115d7-3071-40a9-b536-5c132d42ec0d/nginx/7.log -> /var/lib/docker/containers/a92d25b54a624ceca90502e721eac953d232d665ad16711b80fe2a1443f4367e/a92d25b54a624ceca90502e721eac953d232d665ad16711b80fe2a1443f4367e-json.log
root@ubuntu20:/etc/kubernetes/manifests# kubeadm config view
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: k8smaster:6443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.18.6
networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
root@ubuntu20:~# kubeadm config print init-defaults
W0727 13:00:37.237534 28217 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: ubuntu20
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
kanda@ubuntu20:~$ kubectl -n kube-system exec -it etcd-ubuntu20 -- /bin/sh
#
# ip addr
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:9f:38:8c brd ff:ff:ff:ff:ff:ff
inet 192.168.122.156/24 brd 192.168.122.255 scope global dynamic enp1s0
valid_lft 2803sec preferred_lft 2803sec
inet6 fe80::5054:ff:fe9f:388c/64 scope link
valid_lft forever preferred_lft forever
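Note that ip addr inside the etcd pod shows the node's own enp1s0: the kubeadm static pods (etcd, kube-apiserver, and so on) run with hostNetwork: true, so they share the node's network namespace instead of getting a veth pair.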
# ETCDCTL_API=3 etcdctl -w table --endpoints localhost:2379 --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key endpoint status
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| localhost:2379 | 56d928b4f5d71576 | 3.4.3 | 5.7 MB | true | false | 16 | 152443 | 152443 | |
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
https://stackoverflow.com/questions/47807892/how-to-access-kubernetes-keys-in-etcd
# etcdctl --endpoints localhost:2379 --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key get / --prefix --keys-only
/registry/apiextensions.k8s.io/customresourcedefinitions/bgpconfigurations.crd.projectcalico.org
/registry/apiextensions.k8s.io/customresourcedefinitions/bgppeers.crd.projectcalico.org
/registry/apiextensions.k8s.io/customresourcedefinitions/blockaffinities.crd.projectcalico.org
/registry/apiextensions.k8s.io/customresourcedefinitions/clusterinformations.crd.projectcalico.org
/registry/apiextensions.k8s.io/customresourcedefinitions/felixconfigurations.crd.projectcalico.org
/registry/apiextensions.k8s.io/customresourcedefinitions/globalnetworkpolicies.crd.projectcalico.org
/registry/apiextensions.k8s.io/customresourcedefinitions/globalnetworksets.crd.projectcalico.org
/registry/apiextensions.k8s.io/customresourcedefinitions/hostendpoints.crd.projectcalico.org
/registry/apiextensions.k8s.io/customresourcedefinitions/ipamblocks.crd.projectcalico.org
/registry/apiextensions.k8s.io/customresourcedefinitions/ipamconfigs.crd.projectcalico.org
/registry/apiextensions.k8s.io/customresourcedefinitions/ipamhandles.crd.projectcalico.org
/registry/apiextensions.k8s.io/customresourcedefinitions/ippools.crd.projectcalico.org
/registry/apiextensions.k8s.io/customresourcedefinitions/networkpolicies.crd.projectcalico.org
/registry/apiextensions.k8s.io/customresourcedefinitions/networksets.crd.projectcalico.org
/registry/apiregistration.k8s.io/apiservices/v1.
/registry/apiregistration.k8s.io/apiservices/v1.admissionregistration.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1.apiextensions.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1.apps
/registry/apiregistration.k8s.io/apiservices/v1.authentication.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1.authorization.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1.autoscaling
/registry/apiregistration.k8s.io/apiservices/v1.batch
/registry/apiregistration.k8s.io/apiservices/v1.coordination.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1.crd.projectcalico.org
/registry/apiregistration.k8s.io/apiservices/v1.networking.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1.rbac.authorization.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1.scheduling.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1.storage.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1beta1.admissionregistration.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1beta1.apiextensions.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1beta1.authentication.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1beta1.authorization.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1beta1.batch
/registry/apiregistration.k8s.io/apiservices/v1beta1.certificates.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1beta1.coordination.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1beta1.discovery.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1beta1.events.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1beta1.extensions
/registry/apiregistration.k8s.io/apiservices/v1beta1.networking.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1beta1.node.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1beta1.policy
/registry/apiregistration.k8s.io/apiservices/v1beta1.rbac.authorization.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1beta1.scheduling.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1beta1.storage.k8s.io
/registry/apiregistration.k8s.io/apiservices/v2beta1.autoscaling
/registry/apiregistration.k8s.io/apiservices/v2beta2.autoscaling
/registry/clusterrolebindings/calico-kube-controllers
/registry/clusterrolebindings/calico-node
/registry/clusterrolebindings/cluster-admin
/registry/clusterrolebindings/kubeadm:get-nodes
/registry/clusterrolebindings/kubeadm:kubelet-bootstrap
/registry/clusterrolebindings/kubeadm:node-autoapprove-bootstrap
/registry/clusterrolebindings/kubeadm:node-autoapprove-certificate-rotation
/registry/clusterrolebindings/kubeadm:node-proxier
/registry/clusterrolebindings/system:basic-user
/registry/clusterrolebindings/system:controller:attachdetach-controller
/registry/clusterrolebindings/system:controller:certificate-controller
/registry/clusterrolebindings/system:controller:clusterrole-aggregation-controller
/registry/clusterrolebindings/system:controller:cronjob-controller
/registry/clusterrolebindings/system:controller:daemon-set-controller
/registry/clusterrolebindings/system:controller:deployment-controller
/registry/clusterrolebindings/system:controller:disruption-controller
/registry/clusterrolebindings/system:controller:endpoint-controller
/registry/clusterrolebindings/system:controller:endpointslice-controller
/registry/clusterrolebindings/system:controller:expand-controller
/registry/clusterrolebindings/system:controller:generic-garbage-collector
/registry/clusterrolebindings/system:controller:horizontal-pod-autoscaler
/registry/clusterrolebindings/system:controller:job-controller
/registry/clusterrolebindings/system:controller:namespace-controller
/registry/clusterrolebindings/system:controller:node-controller
/registry/clusterrolebindings/system:controller:persistent-volume-binder
/registry/clusterrolebindings/system:controller:pod-garbage-collector
/registry/clusterrolebindings/system:controller:pv-protection-controller
/registry/clusterrolebindings/system:controller:pvc-protection-controller
/registry/clusterrolebindings/system:controller:replicaset-controller
/registry/clusterrolebindings/system:controller:replication-controller
/registry/clusterrolebindings/system:controller:resourcequota-controller
/registry/clusterrolebindings/system:controller:route-controller
/registry/clusterrolebindings/system:controller:service-account-controller
/registry/clusterrolebindings/system:controller:service-controller
/registry/clusterrolebindings/system:controller:statefulset-controller
/registry/clusterrolebindings/system:controller:ttl-controller
/registry/clusterrolebindings/system:coredns
/registry/clusterrolebindings/system:discovery
/registry/clusterrolebindings/system:kube-controller-manager
/registry/clusterrolebindings/system:kube-dns
/registry/clusterrolebindings/system:kube-scheduler
/registry/clusterrolebindings/system:node
/registry/clusterrolebindings/system:node-proxier
/registry/clusterrolebindings/system:public-info-viewer
/registry/clusterrolebindings/system:volume-scheduler
/registry/clusterroles/admin
/registry/clusterroles/calico-kube-controllers
/registry/clusterroles/calico-node
/registry/clusterroles/cluster-admin
/registry/clusterroles/edit
/registry/clusterroles/kubeadm:get-nodes
/registry/clusterroles/system:aggregate-to-admin
/registry/clusterroles/system:aggregate-to-edit
/registry/clusterroles/system:aggregate-to-view
/registry/clusterroles/system:auth-delegator
/registry/clusterroles/system:basic-user
/registry/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient
/registry/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
/registry/clusterroles/system:certificates.k8s.io:kube-apiserver-client-approver
/registry/clusterroles/system:certificates.k8s.io:kube-apiserver-client-kubelet-approver
/registry/clusterroles/system:certificates.k8s.io:kubelet-serving-approver
/registry/clusterroles/system:certificates.k8s.io:legacy-unknown-approver
/registry/clusterroles/system:controller:attachdetach-controller
/registry/clusterroles/system:controller:certificate-controller
/registry/clusterroles/system:controller:clusterrole-aggregation-controller
/registry/clusterroles/system:controller:cronjob-controller
/registry/clusterroles/system:controller:daemon-set-controller
/registry/clusterroles/system:controller:deployment-controller
/registry/clusterroles/system:controller:disruption-controller
/registry/clusterroles/system:controller:endpoint-controller
/registry/clusterroles/system:controller:endpointslice-controller
/registry/clusterroles/system:controller:expand-controller
/registry/clusterroles/system:controller:generic-garbage-collector
/registry/clusterroles/system:controller:horizontal-pod-autoscaler
/registry/clusterroles/system:controller:job-controller
/registry/clusterroles/system:controller:namespace-controller
/registry/clusterroles/system:controller:node-controller
/registry/clusterroles/system:controller:persistent-volume-binder
/registry/clusterroles/system:controller:pod-garbage-collector
/registry/clusterroles/system:controller:pv-protection-controller
/registry/clusterroles/system:controller:pvc-protection-controller
/registry/clusterroles/system:controller:replicaset-controller
/registry/clusterroles/system:controller:replication-controller
/registry/clusterroles/system:controller:resourcequota-controller
/registry/clusterroles/system:controller:route-controller
/registry/clusterroles/system:controller:service-account-controller
/registry/clusterroles/system:controller:service-controller
/registry/clusterroles/system:controller:statefulset-controller
/registry/clusterroles/system:controller:ttl-controller
/registry/clusterroles/system:coredns
/registry/clusterroles/system:discovery
/registry/clusterroles/system:heapster
/registry/clusterroles/system:kube-aggregator
/registry/clusterroles/system:kube-controller-manager
/registry/clusterroles/system:kube-dns
/registry/clusterroles/system:kube-scheduler
/registry/clusterroles/system:kubelet-api-admin
/registry/clusterroles/system:node
/registry/clusterroles/system:node-bootstrapper
/registry/clusterroles/system:node-problem-detector
/registry/clusterroles/system:node-proxier
/registry/clusterroles/system:persistent-volume-provisioner
/registry/clusterroles/system:public-info-viewer
/registry/clusterroles/system:volume-scheduler
/registry/clusterroles/view
/registry/configmaps/kube-public/cluster-info
/registry/configmaps/kube-system/calico-config
/registry/configmaps/kube-system/coredns
/registry/configmaps/kube-system/extension-apiserver-authentication
/registry/configmaps/kube-system/kube-proxy
/registry/configmaps/kube-system/kubeadm-config
/registry/configmaps/kube-system/kubelet-config-1.18
/registry/controllerrevisions/kube-system/calico-node-647dd4476c
/registry/controllerrevisions/kube-system/calico-node-7fc8db5964
/registry/controllerrevisions/kube-system/kube-proxy-5cbdd7f5df
/registry/crd.projectcalico.org/blockaffinities/kubeworker-192-168-42-0-26
/registry/crd.projectcalico.org/blockaffinities/ubuntu20-192-168-219-192-26
/registry/crd.projectcalico.org/blockaffinities/ubuntu20-192-168-220-0-26
/registry/crd.projectcalico.org/clusterinformations/default
/registry/crd.projectcalico.org/felixconfigurations/default
/registry/crd.projectcalico.org/ipamblocks/192-168-219-192-26
/registry/crd.projectcalico.org/ipamblocks/192-168-42-0-26
/registry/crd.projectcalico.org/ipamhandles/ipip-tunnel-addr-kubeworker
/registry/crd.projectcalico.org/ipamhandles/ipip-tunnel-addr-ubuntu20
/registry/crd.projectcalico.org/ipamhandles/k8s-pod-network.54f307ded01ef9f0d58e07797c0d3c362856c0d7232c5bfdd6a30a2a3c1994f5
/registry/crd.projectcalico.org/ipamhandles/k8s-pod-network.6d0b9aea0d78dec2bef7392fea210bd0b4088e20f9562f0a8c4b4674b0227cb8
/registry/crd.projectcalico.org/ipamhandles/k8s-pod-network.794a3b575bcca892efed20fde2ef420d1ebf5401a182cc0f000537249985c66e
/registry/crd.projectcalico.org/ipamhandles/k8s-pod-network.f2af66adc813b1610ab9e46c07e807036006a927029065ac02607ff64dad1cfa
/registry/crd.projectcalico.org/ipamhandles/k8s-pod-network.f7160fc542cd5a83dae755f5af54f926bc894a4b964ec40230e25517d1b775c1
/registry/crd.projectcalico.org/ippools/default-ipv4-ippool
/registry/csinodes/kubeworker
/registry/csinodes/ubuntu20
/registry/daemonsets/kube-system/calico-node
/registry/daemonsets/kube-system/kube-proxy
/registry/deployments/default/nginx
/registry/deployments/kube-system/calico-kube-controllers
/registry/deployments/kube-system/coredns
/registry/endpointslices/default/kubernetes
/registry/endpointslices/default/nginx-d9kwk
/registry/endpointslices/kube-system/kube-dns-rvt8f
/registry/events/default/kubeworker.16259c4925d067d2
/registry/events/default/nginx-d46f5678b-xc4gr.16259c437f679f4e
/registry/events/default/nginx-d46f5678b-xc4gr.16259c4fbd6a30a2
/registry/events/default/nginx-d46f5678b-xc4gr.16259c50feac35eb
/registry/events/default/nginx-d46f5678b-xc4gr.16259c516d617b6b
/registry/events/default/nginx-d46f5678b-xc4gr.16259c51e63f059a
/registry/events/default/ubuntu.16254ad2e8f58dc1
/registry/events/default/ubuntu.16254ad3bdbdf969
/registry/events/default/ubuntu.16254ad3e074b306
/registry/events/default/ubuntu.16254ad3eeeb469b
/registry/events/default/ubuntu.16259c43b4e2f94b
/registry/events/default/ubuntu.16259c4f75db5fe2
/registry/events/default/ubuntu.16259c503b2d9c00
/registry/events/default/ubuntu.16259c50a625cb6a
/registry/events/default/ubuntu.16259c50cdeb6f40
/registry/events/default/ubuntu20.16259c364cb15add
/registry/events/default/ubuntu20.16259c36596a01e1
/registry/events/default/ubuntu20.16259c36596a1331
/registry/events/default/ubuntu20.16259c36596a1b04
/registry/events/default/ubuntu20.16259c3663bf5b5b
/registry/events/default/ubuntu20.16259c45c3c87e0e
/registry/events/default/ubuntu20.16259c4930eedfaf
/registry/events/kube-system/calico-kube-controllers-578894d4cd-7kp4d.16254a9e12a725e3
/registry/events/kube-system/calico-kube-controllers-578894d4cd-7kp4d.16259c43a8b70571
/registry/events/kube-system/calico-kube-controllers-578894d4cd-7kp4d.16259c502e7bfbee
/registry/events/kube-system/calico-kube-controllers-578894d4cd-7kp4d.16259c50afa884b7
/registry/events/kube-system/calico-kube-controllers-578894d4cd-7kp4d.16259c50e7486b0c
/registry/events/kube-system/calico-kube-controllers-578894d4cd-7kp4d.16259c51ce6ad00d
/registry/events/kube-system/calico-node-t4wb6.16254aac10a33514
/registry/events/kube-system/calico-node-t4wb6.16259c4393d2c590
/registry/events/kube-system/calico-node-t4wb6.16259c442c932f8c
/registry/events/kube-system/calico-node-t4wb6.16259c44ff0474a7
/registry/events/kube-system/calico-node-t4wb6.16259c454b8c2311
/registry/events/kube-system/calico-node-t4wb6.16259c4601e84e18
/registry/events/kube-system/calico-node-t4wb6.16259c49db350c1f
/registry/events/kube-system/calico-node-t4wb6.16259c4a01e2468a
/registry/events/kube-system/calico-node-t4wb6.16259c4a1fcb5a28
/registry/events/kube-system/calico-node-t4wb6.16259c4bc4a02f84
/registry/events/kube-system/calico-node-t4wb6.16259c4be1bc2527
/registry/events/kube-system/calico-node-t4wb6.16259c4bff11dee5
/registry/events/kube-system/calico-node-t4wb6.16259c4c79c07819
/registry/events/kube-system/calico-node-t4wb6.16259c4caade6547
/registry/events/kube-system/calico-node-t4wb6.16259c4d07d8d303
/registry/events/kube-system/calico-node-t4wb6.16259c4e3cbe0be3
/registry/events/kube-system/calico-node-t4wb6.16259c509fe120f3
/registry/events/kube-system/calico-node-t4wb6.16259c52e4f8b54a
/registry/events/kube-system/calico-node-t4wb6.16259c5538eae9bf
/registry/events/kube-system/calico-node-t4wb6.16259c578c6148f3
/registry/events/kube-system/calico-node-t4wb6.16259c59e0bf7a7a
/registry/events/kube-system/calico-node-t4wb6.16259c5c354cd5ef
/registry/events/kube-system/calico-node-t4wb6.16259c5e890c37bd
/registry/events/kube-system/calico-node-t4wb6.16259c633089ddc9
/registry/events/kube-system/coredns-66bff467f8-7xv2j.16259c43769147a9
/registry/events/kube-system/coredns-66bff467f8-7xv2j.16259c4ff164ef34
/registry/events/kube-system/coredns-66bff467f8-7xv2j.16259c5036619e2c
/registry/events/kube-system/coredns-66bff467f8-7xv2j.16259c508a54335f
/registry/events/kube-system/coredns-66bff467f8-tvvv5.16259c43712ca220
/registry/events/kube-system/coredns-66bff467f8-tvvv5.16259c478be0fd4c
/registry/events/kube-system/coredns-66bff467f8-tvvv5.16259c47cf1bbd8f
/registry/events/kube-system/coredns-66bff467f8-tvvv5.16259c47ee74fa64
/registry/events/kube-system/coredns-66bff467f8-tvvv5.16259c4867a7d866
/registry/events/kube-system/etcd-ubuntu20.16259c3746ce3bb1
/registry/events/kube-system/etcd-ubuntu20.16259c393ff29f1c
/registry/events/kube-system/etcd-ubuntu20.16259c39ac0c13a3
/registry/events/kube-system/etcd-ubuntu20.16259c3a2c58a8aa
/registry/events/kube-system/kube-apiserver-ubuntu20.16259c3761eba88f
/registry/events/kube-system/kube-apiserver-ubuntu20.16259c391d844acd
/registry/events/kube-system/kube-apiserver-ubuntu20.16259c39836c815e
/registry/events/kube-system/kube-apiserver-ubuntu20.16259c39f52f6938
/registry/events/kube-system/kube-controller-manager-ubuntu20.16259c3700d6fdb3
/registry/events/kube-system/kube-controller-manager-ubuntu20.16259c38b8a6095e
/registry/events/kube-system/kube-controller-manager-ubuntu20.16259c393fa3b17c
/registry/events/kube-system/kube-controller-manager-ubuntu20.16259c39d3ebbd0a
/registry/events/kube-system/kube-controller-manager.16259c45ee88c56e
/registry/events/kube-system/kube-controller-manager.16259c45ee895625
/registry/events/kube-system/kube-proxy-8dgfl.16259c436914108d
/registry/events/kube-system/kube-proxy-8dgfl.16259c43f6288afe
/registry/events/kube-system/kube-proxy-8dgfl.16259c4427928424
/registry/events/kube-system/kube-proxy-8dgfl.16259c44cd8c09ab
/registry/events/kube-system/kube-scheduler-ubuntu20.16259c36f55d15f0
/registry/events/kube-system/kube-scheduler-ubuntu20.16259c38d6e29971
/registry/events/kube-system/kube-scheduler-ubuntu20.16259c393fa2c5bc
/registry/events/kube-system/kube-scheduler-ubuntu20.16259c39bcded36f
/registry/events/kube-system/kube-scheduler.16259c4998a8b0ed
/registry/events/kube-system/kube-scheduler.16259c4998a9018a
/registry/leases/kube-node-lease/kubeworker
/registry/leases/kube-node-lease/ubuntu20
/registry/leases/kube-system/kube-controller-manager
/registry/leases/kube-system/kube-scheduler
/registry/masterleases/192.168.122.156
/registry/minions/kubeworker
/registry/minions/ubuntu20
/registry/namespaces/default
/registry/namespaces/kube-node-lease
/registry/namespaces/kube-public
/registry/namespaces/kube-system
/registry/persistentvolumeclaims/default/pvc-one
/registry/persistentvolumes/pvvol-1
/registry/pods/default/nginx-d46f5678b-xc4gr
/registry/pods/default/ubuntu
/registry/pods/kube-system/calico-kube-controllers-578894d4cd-7kp4d
/registry/pods/kube-system/calico-node-jm766
/registry/pods/kube-system/calico-node-t4wb6
/registry/pods/kube-system/coredns-66bff467f8-7xv2j
/registry/pods/kube-system/coredns-66bff467f8-tvvv5
/registry/pods/kube-system/etcd-ubuntu20
/registry/pods/kube-system/kube-apiserver-ubuntu20
/registry/pods/kube-system/kube-controller-manager-ubuntu20
/registry/pods/kube-system/kube-proxy-8dgfl
/registry/pods/kube-system/kube-proxy-k845c
/registry/pods/kube-system/kube-scheduler-ubuntu20
/registry/priorityclasses/system-cluster-critical
/registry/priorityclasses/system-node-critical
/registry/ranges/serviceips
/registry/ranges/servicenodeports
/registry/replicasets/default/nginx-d46f5678b
/registry/replicasets/kube-system/calico-kube-controllers-578894d4cd
/registry/replicasets/kube-system/calico-kube-controllers-5fbfc9dfb6
/registry/replicasets/kube-system/coredns-66bff467f8
/registry/rolebindings/kube-public/kubeadm:bootstrap-signer-clusterinfo
/registry/rolebindings/kube-public/system:controller:bootstrap-signer
/registry/rolebindings/kube-system/kube-proxy
/registry/rolebindings/kube-system/kubeadm:kubeadm-certs
/registry/rolebindings/kube-system/kubeadm:kubelet-config-1.18
/registry/rolebindings/kube-system/kubeadm:nodes-kubeadm-config
/registry/rolebindings/kube-system/system::extension-apiserver-authentication-reader
/registry/rolebindings/kube-system/system::leader-locking-kube-controller-manager
/registry/rolebindings/kube-system/system::leader-locking-kube-scheduler
/registry/rolebindings/kube-system/system:controller:bootstrap-signer
/registry/rolebindings/kube-system/system:controller:cloud-provider
/registry/rolebindings/kube-system/system:controller:token-cleaner
/registry/roles/kube-public/kubeadm:bootstrap-signer-clusterinfo
/registry/roles/kube-public/system:controller:bootstrap-signer
/registry/roles/kube-system/extension-apiserver-authentication-reader
/registry/roles/kube-system/kube-proxy
/registry/roles/kube-system/kubeadm:kubeadm-certs
/registry/roles/kube-system/kubeadm:kubelet-config-1.18
/registry/roles/kube-system/kubeadm:nodes-kubeadm-config
/registry/roles/kube-system/system::leader-locking-kube-controller-manager
/registry/roles/kube-system/system::leader-locking-kube-scheduler
/registry/roles/kube-system/system:controller:bootstrap-signer
/registry/roles/kube-system/system:controller:cloud-provider
/registry/roles/kube-system/system:controller:token-cleaner
/registry/secrets/default/default-token-zzgrf
/registry/secrets/kube-node-lease/default-token-5fbs4
/registry/secrets/kube-public/default-token-2mmdq
/registry/secrets/kube-system/attachdetach-controller-token-xlzg2
/registry/secrets/kube-system/bootstrap-signer-token-jgz5m
/registry/secrets/kube-system/calico-kube-controllers-token-5zcqz
/registry/secrets/kube-system/calico-kube-controllers-token-j6mg8
/registry/secrets/kube-system/calico-node-token-krrpj
/registry/secrets/kube-system/calico-node-token-srrxd
/registry/secrets/kube-system/certificate-controller-token-k5s8c
/registry/secrets/kube-system/clusterrole-aggregation-controller-token-xqqsg
/registry/secrets/kube-system/coredns-token-fhtwf
/registry/secrets/kube-system/cronjob-controller-token-zjq5k
/registry/secrets/kube-system/daemon-set-controller-token-chx6t
/registry/secrets/kube-system/default-token-m8wfx
/registry/secrets/kube-system/deployment-controller-token-l2h48
/registry/secrets/kube-system/disruption-controller-token-7j75q
/registry/secrets/kube-system/endpoint-controller-token-s8bsp
/registry/secrets/kube-system/endpointslice-controller-token-zrfwh
/registry/secrets/kube-system/expand-controller-token-pxjrz
/registry/secrets/kube-system/generic-garbage-collector-token-fhgjd
/registry/secrets/kube-system/horizontal-pod-autoscaler-token-fnfd5
/registry/secrets/kube-system/job-controller-token-27dr9
/registry/secrets/kube-system/kube-proxy-token-p24cc
/registry/secrets/kube-system/namespace-controller-token-vxfwt
/registry/secrets/kube-system/node-controller-token-cvgvt
/registry/secrets/kube-system/persistent-volume-binder-token-xzs24
/registry/secrets/kube-system/pod-garbage-collector-token-tft8n
/registry/secrets/kube-system/pv-protection-controller-token-r55sr
/registry/secrets/kube-system/pvc-protection-controller-token-vd44q
/registry/secrets/kube-system/replicaset-controller-token-75mjk
/registry/secrets/kube-system/replication-controller-token-b7qdt
/registry/secrets/kube-system/resourcequota-controller-token-6cmxz
/registry/secrets/kube-system/service-account-controller-token-q6f7l
/registry/secrets/kube-system/service-controller-token-hkg2t
/registry/secrets/kube-system/statefulset-controller-token-cptvq
/registry/secrets/kube-system/token-cleaner-token-6n5n7
/registry/secrets/kube-system/ttl-controller-token-j8s8s
/registry/serviceaccounts/default/default
/registry/serviceaccounts/kube-node-lease/default
/registry/serviceaccounts/kube-public/default
/registry/serviceaccounts/kube-system/attachdetach-controller
/registry/serviceaccounts/kube-system/bootstrap-signer
/registry/serviceaccounts/kube-system/calico-kube-controllers
/registry/serviceaccounts/kube-system/calico-node
/registry/serviceaccounts/kube-system/certificate-controller
/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller
/registry/serviceaccounts/kube-system/coredns
/registry/serviceaccounts/kube-system/cronjob-controller
/registry/serviceaccounts/kube-system/daemon-set-controller
/registry/serviceaccounts/kube-system/default
/registry/serviceaccounts/kube-system/deployment-controller
/registry/serviceaccounts/kube-system/disruption-controller
/registry/serviceaccounts/kube-system/endpoint-controller
/registry/serviceaccounts/kube-system/endpointslice-controller
/registry/serviceaccounts/kube-system/expand-controller
/registry/serviceaccounts/kube-system/generic-garbage-collector
/registry/serviceaccounts/kube-system/horizontal-pod-autoscaler
/registry/serviceaccounts/kube-system/job-controller
/registry/serviceaccounts/kube-system/kube-proxy
/registry/serviceaccounts/kube-system/namespace-controller
/registry/serviceaccounts/kube-system/node-controller
/registry/serviceaccounts/kube-system/persistent-volume-binder
/registry/serviceaccounts/kube-system/pod-garbage-collector
/registry/serviceaccounts/kube-system/pv-protection-controller
/registry/serviceaccounts/kube-system/pvc-protection-controller
/registry/serviceaccounts/kube-system/replicaset-controller
/registry/serviceaccounts/kube-system/replication-controller
/registry/serviceaccounts/kube-system/resourcequota-controller
/registry/serviceaccounts/kube-system/service-account-controller
/registry/serviceaccounts/kube-system/service-controller
/registry/serviceaccounts/kube-system/statefulset-controller
/registry/serviceaccounts/kube-system/token-cleaner
/registry/serviceaccounts/kube-system/ttl-controller
/registry/services/endpoints/default/kubernetes
/registry/services/endpoints/default/nginx
/registry/services/endpoints/kube-system/kube-controller-manager
/registry/services/endpoints/kube-system/kube-dns
/registry/services/endpoints/kube-system/kube-scheduler
/registry/services/specs/default/kubernetes
/registry/services/specs/default/nginx
/registry/services/specs/kube-system/kube-dns
#
# etcdctl --endpoints localhost:2379 --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key -w json get /registry/ranges/serviceips
{"header":{"cluster_id":9081789234443183680,"member_id":6258077914891752822,"revision":137026,"raft_term":16},"kvs":[{"key":"L3JlZ2lzdHJ5L3Jhbmdlcy9zZXJ2aWNlaXBz","create_revision":2,"mod_revision":30696,"version":8,"value":"azhzAAoVCgJ2MRIPUmFuZ2VBbG
kanda@ubuntu20:~$ ip link show type veth
4: calie84dc3508a1@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP mode DEFAULT group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
5: calib6e6e6b9abc@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP mode DEFAULT group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
6: calif65742e9417@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP mode DEFAULT group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 2
7: califb3eb82ef50@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP mode DEFAULT group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 3
8: cali7b89557dd0b@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP mode DEFAULT group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 4
kanda@ubuntu20:~$ ip link show type ipip
9: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
kanda@ubuntu20:~$ ip link show type bridge
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
link/ether 02:42:ee:cb:b2:5b brd ff:ff:ff:ff:ff:ff
root@ubuntu20:/home/kanda# cat /proc/2570/cgroup
12:perf_event:/kubepods/burstable/pod104dcd565cfde7e10818df003b3b889f/bb8dcef3c33989780b6b44428b9b2bab6224d07a5c57865a832dcbffd73c0c70
11:blkio:/kubepods/burstable/pod104dcd565cfde7e10818df003b3b889f/bb8dcef3c33989780b6b44428b9b2bab6224d07a5c57865a832dcbffd73c0c70
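The cgroup path encodes the pod's QoS class and identity: /kubepods/burstable/pod&lt;pod UID&gt;/&lt;container ID&gt;, so PID 2570 belongs to a container in a Burstable-QoS pod.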
A veth pair is easy to create with the ip command:
# ip netns add ns1
# ip netns add ns2
# ip link add veth1 type veth peer name veth2
# ip link set veth1 netns ns1
# ip link set veth2 netns ns2
# ip netns exec ns1 ip link set dev veth1 up
# ip netns exec ns2 ip link set dev veth2 up
# ip netns exec ns1 ifconfig veth1 192.168.1.1 up
# ip netns exec ns2 ifconfig veth2 192.168.1.2 up
# ip netns exec ns1 ping 192.168.1.2
PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
64 bytes from 192.168.1.2: icmp_seq=1 ttl=64 time=0.028 ms
64 bytes from 192.168.1.2: icmp_seq=2 ttl=64 time=0.015 ms
^C
--- 192.168.1.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.015/0.021/0.028/0.006 ms
# ip netns exec ns1 ping 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
^C
--- 192.168.1.1 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 4091ms
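This second ping fails because 192.168.1.1 is ns1's own address, so the reply would travel over loopback, and in a freshly created netns lo starts out DOWN. Bringing it up should fix it:
# ip netns exec ns1 ip link set lo up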
# ip link set veth2 netns ns1
Cannot find device "veth2"
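"Cannot find device" because veth2 was already moved into ns2 and is no longer visible from the root namespace. The move would have to be issued from inside ns2:
# ip netns exec ns2 ip link set veth2 netns ns1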
# ip link add veth3 type veth peer name veth4
# ip link set dev veth3 up
# ip link set dev veth4 up
# ifconfig veth3 192.168.1.3 up
# ifconfig veth4 192.168.1.4 up
# ping 192.168.1.3
PING 192.168.1.3 (192.168.1.3) 56(84) bytes of data.
64 bytes from 192.168.1.3: icmp_seq=1 ttl=64 time=0.044 ms
64 bytes from 192.168.1.3: icmp_seq=2 ttl=64 time=0.030 ms
^C
--- 192.168.1.3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1025ms
rtt min/avg/max/mdev = 0.030/0.037/0.044/0.007 ms
# ping 192.168.1.4
PING 192.168.1.4 (192.168.1.4) 56(84) bytes of data.
64 bytes from 192.168.1.4: icmp_seq=1 ttl=64 time=0.032 ms
^C
--- 192.168.1.4 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms
# traceroute 192.168.1.2
traceroute to 192.168.1.2 (192.168.1.2), 64 hops max
1 * 192.168.1.3 57.932ms !H 0.001ms !H
Capturing on 'veth3'
3 62.795185818 6a:44:cb:03:ed:39 → Broadcast ARP 42 Who has 192.168.1.2? Tell 192.168.1.3
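The traceroute and the capture agree: the root namespace ARPs for 192.168.1.2 out veth3, but that address lives inside ns2, which has no link to the root namespace at all (veth1/veth2 connect only ns1 and ns2), so nothing answers and traceroute reports !H (host unreachable).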
# modinfo aufs
filename: /lib/modules/5.4.0-42-generic/kernel/fs/aufs/aufs.ko
alias: fs-aufs
version: 5.4.3-20200302
description: aufs -- Advanced multi layered unification filesystem
kanda@ubuntu20:~$ docker container top 4998756017ec
UID PID PPID C STIME TTY TIME CMD
root 4893 4877 0 14:00 ? 00:00:00 nginx: master process nginx -g daemon off;
systemd+ 5100 4893 0 14:00 ? 00:00:00 nginx: worker process
kanda@ubuntu20:~$ docker container top 6bd5bd2d9043
UID PID PPID C STIME TTY TIME CMD
root 2566 2548 2 13:58 ? 00:00:25 kube-apiserver --advertise-address=192.168.122.156 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
kanda@ubuntu20:~$ docker logs 4998756017ec
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
kanda@ubuntu20:~$ docker container exec 079b9e34c9ed mount
overlay on / type overlay (rw,relatime,lowerdir=/var/lib/docker/overlay2/l/GGYOM6ROTGH42UBGMRGXHKK77J:/var/lib/docker/overlay2/l/QGDAP7QX6CEB65EOWLQVYFQBYY:/var/lib/docker/overlay2/l/UNWQU63SOFWKA5XS62I62HE3RU:/var/lib/docker/overlay2/l/RE2BJRL3IKOPGNWALF47NLVTES:/var/lib/docker/overlay2/l/2364GOB2MYCSA7EQGUMBDYAKX2,upperdir=/var/lib/docker/overlay2/f6689cbe20be0b05fad51645f5153e45a187c20640b57f02e730b2b5dfa0ccc2/diff,workdir=/var/lib/docker/overlay2/f6689cbe20be0b05fad51645f5153e45a187c20640b57f02e730b2b5dfa0ccc2/work,xino=off)
..
/dev/vda2 on /dev/termination-log type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/vda2 on /etc/resolv.conf type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/vda2 on /etc/hostname type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/vda2 on /etc/hosts type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
kanda@ubuntu20:~$ docker container exec 079b9e34c9ed lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 69M 1 loop
loop1 7:1 0 55M 1 loop
loop2 7:2 0 27.1M 1 loop
loop3 7:3 0 55M 1 loop
loop4 7:4 0 71.3M 1 loop
loop5 7:5 0 29.9M 1 loop
sr0 11:0 1 1024M 0 rom
vda 252:0 0 20G 0 disk
|-vda1 252:1 0 1M 0 part
`-vda2 252:2 0 20G 0 part /etc/hosts
kanda@ubuntu20:~$ docker container export 4c49b8a89f3d > nginx.tar
kanda@ubuntu20:~$ tar tf nginx.tar | head
.dockerenv
bin/
bin/bash
This seems to be the Dockerfile used to build the nginx image.
https://github.com/nginxinc/docker-nginx/blob/master/stable/alpine/Dockerfile
These lines make sense:
FROM alpine:3.11
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
But what is this?
ENTRYPOINT ["/docker-entrypoint.sh"]
Apparently it runs all the helper scripts under /docker-entrypoint.d/ and then invokes the CMD. Roughly this pattern, sketched below.
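A minimal sketch of the entrypoint pattern (not the actual nginx script, which has more checks):

#!/bin/sh
# Run every executable helper under /docker-entrypoint.d/,
# then exec the CMD ("$@" is whatever CMD the image or user supplied).
for f in /docker-entrypoint.d/*.sh; do
    [ -x "$f" ] && "$f"
done
exec "$@"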
[kanda@centos8 helloworld]$ cat helloworld.go
package main

import (
    "fmt"
    "time"
)

func main() {
    i := 0
    for {
        fmt.Printf("Hello, World %d\n", i)
        time.Sleep(10 * time.Second)
        i++
    }
}
kanda@ubuntu20:~/helloworld$ cat Dockerfile
FROM scratch
COPY helloworld /
CMD ["/helloworld"]
kanda@ubuntu20:~/helloworld$ docker build .
Sending build context to Docker daemon 2.028MB
Step 1/3 : FROM scratch
--->
Step 2/3 : COPY helloworld /
---> 98e78e559d8a
Step 3/3 : CMD ["/helloworld"]
---> Running in 579b7ebad998
Removing intermediate container 579b7ebad998
---> 34e122437699
Successfully built 34e122437699
kanda@ubuntu20:~$ docker container run 34e122437699
Hello, World 0
Hello, World 1
Go binaries are statically linked, so they run on their own, even in a scratch image.
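The go build step isn't shown above; to be safe in a scratch image, cgo should be disabled so the binary is fully static. Something like:
kanda@ubuntu20:~/helloworld$ CGO_ENABLED=0 go build -o helloworld helloworld.go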
kanda@ubuntu20:~$ docker network ls
NETWORK ID NAME DRIVER SCOPE
27df43ae2cad bridge bridge local
90630c068534 host host local
804b6614f1c2 none null local
kanda@ubuntu20:~$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
The host-side counterpart is this:
kanda@ubuntu20:~$ ip addr
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:45:f5:4b:f4 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
kanda@ubuntu20:~$ docker container run -d -p 12345:80 nginx
kanda@ubuntu20:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b77924454fce nginx "/docker-entrypoint.…" 18 seconds ago Up 17 seconds 0.0.0.0:12345->80/tcp sweet_robinson
kanda@ubuntu20:~$ curl localhost:12345
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
kanda@ubuntu20:~$ docker exec -it b77924454fce /bin/bash
root@b77924454fce:/# apt-get install iproute2
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package iproute2
Running apt update first fixes it.
root@b77924454fce:/# apt update
root@b77924454fce:/# apt-get install iproute2
Reading package lists... Done
Do you want to continue? [Y/n] y
Get:1 http://deb.debian.org/debian buster/main amd64 libcap2 amd64 1:2.25-2 [17.6 kB]
root@b77924454fce:/# ip route
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.2
root@b77924454fce:/# ip addr
12: eth0@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
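eth0@if13 inside the container is one end of a veth pair, just like the hand-made ones earlier; the peer (ifindex 13) sits in the host namespace, attached to docker0.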
This episode was interesting:
11. dockerネットワーキングとか、kubernetesネットワーキングとか https://fukabori.fm/episode/11
192.168.0.0/16 (the podSubnet above) turned out to be the very range my home Wi-Fi router assigns.
So I redid the installation from scratch.
kanda@ubuntu20:~$ kubectl delete -f calico.yaml
configmap "calico-config" deleted
customresourcedefinition.apiextensions.k8s.io "felixconfigurations.crd.projectcalico.org" deleted
customresourcedefinition.apiextensions.k8s.io "ipamblocks.crd.projectcalico.org" deleted
customresourcedefinition.apiextensions.k8s.io "blockaffinities.crd.projectcalico.org" deleted
customresourcedefinition.apiextensions.k8s.io "ipamhandles.crd.projectcalico.org" deleted
customresourcedefinition.apiextensions.k8s.io "ipamconfigs.crd.projectcalico.org" deleted
customresourcedefinition.apiextensions.k8s.io "bgppeers.crd.projectcalico.org" deleted
customresourcedefinition.apiextensions.k8s.io "bgpconfigurations.crd.projectcalico.org" deleted
customresourcedefinition.apiextensions.k8s.io "ippools.crd.projectcalico.org" deleted
customresourcedefinition.apiextensions.k8s.io "hostendpoints.crd.projectcalico.org" deleted
customresourcedefinition.apiextensions.k8s.io "clusterinformations.crd.projectcalico.org" deleted
customresourcedefinition.apiextensions.k8s.io "globalnetworkpolicies.crd.projectcalico.org" deleted
customresourcedefinition.apiextensions.k8s.io "globalnetworksets.crd.projectcalico.org" deleted
customresourcedefinition.apiextensions.k8s.io "networkpolicies.crd.projectcalico.org" deleted
customresourcedefinition.apiextensions.k8s.io "networksets.crd.projectcalico.org" deleted
clusterrole.rbac.authorization.k8s.io "calico-kube-controllers" deleted
clusterrolebinding.rbac.authorization.k8s.io "calico-kube-controllers" deleted
clusterrole.rbac.authorization.k8s.io "calico-node" deleted
clusterrolebinding.rbac.authorization.k8s.io "calico-node" deleted
daemonset.apps "calico-node" deleted
serviceaccount "calico-node" deleted
deployment.apps "calico-kube-controllers" deleted
serviceaccount "calico-kube-controllers" deleted
kanda@ubuntu20:~$ kubectl delete -f rbac-kdd.yaml
Error from server (NotFound): error when deleting "rbac-kdd.yaml": clusterroles.rbac.authorization.k8s.io "calico-node" not found
Error from server (NotFound): error when deleting "rbac-kdd.yaml": clusterrolebindings.rbac.authorization.k8s.io "calico-node" not found
root@ubuntu20:/home/kanda# kubeadm reset -f
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks
[reset] Removing info for node "ubuntu20" from the ConfigMap "kubeadm-config" in the "kube-system" Namespace
{"level":"warn","ts":"2020-08-06T12:14:21.344Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"endpoint://client-16c41e51-b14a-4749-83bf-79a19826515a/192.168.122.156:2379","attempt":0,"error":"rpc error: code = Unknown desc = etcdserver: re-configuration failed due to not enough started members"}
W0806 12:15:13.707094 12524 removeetcdmember.go:61] [reset] failed to remove etcd member: etcdserver: re-configuration failed due to not enough started members
.Please manually remove this etcd member using etcdctl
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
root@kubeworker:/home/kanda# kubeadm reset -f
[preflight] Running pre-flight checks
W0806 12:38:07.873282 1797 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
root@kubeworker:/home/kanda# kubeadm join k8smaster:6443 --token ggahup.n3gfadam96u10hzv --discovery-token-ca-cert-hash sha256:xxx --control-plane --certificate-key yyy
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubeworker kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local k8smaster] and IPs [10.96.0.1 192.168.122.230]
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubeworker localhost] and IPs [192.168.122.230 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubeworker localhost] and IPs [192.168.122.230 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
W0806 12:41:45.222767 2313 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0806 12:41:45.226938 2313 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0806 12:41:45.227828 2313 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[kubelet-check] Initial timeout of 40s passed.
{"level":"warn","ts":"2020-08-06T12:42:27.169Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://192.168.122.230:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node kubeworker as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kubeworker as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
This node has joined the cluster and a new control plane instance was created:
* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.
To start administering your cluster from this node, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run 'kubectl get nodes' to see this node join the cluster.
root@kubeworker:/home/kanda#
$ cat basic.yaml
apiVersion: v1
kind: Pod
metadata:
  name: basicpod
  labels:
    type: webserver
spec:
  containers:
  - name: webcont
    image: nginx
    ports:
    - containerPort: 80
$ kubectl create -f basic.yaml
pod/basicpod created
$ cat basicservice.yaml
apiVersion: v1
kind: Service
metadata:
  name: basicservice
spec:
  selector:
    type: webserver
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
$ kubectl create -f basicservice.yaml
service/basicservice created
$ curl http://10.101.221.253:80
<!DOCTYPE html>
[kanda@centos8 ~]$ curl http://192.168.122.156:31491
<!DOCTYPE html>
$ sudo iptables --list -t nat | grep 31491
KUBE-MARK-MASQ tcp -- anywhere anywhere /* default/basicservice: */ tcp dpt:31491
KUBE-SVC-JOP6X6VNNQDQJAZO tcp -- anywhere anywhere /* default/basicservice: */ tcp dpt:31491
$ sudo iptables --list -t nat
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
cali-PREROUTING all -- anywhere anywhere /* cali:6gwbT8clXdHdC1b1 */
KUBE-SERVICES all -- anywhere anywhere /* kubernetes service portals */
DOCKER all -- anywhere anywhere ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
cali-OUTPUT all -- anywhere anywhere /* cali:tVnHkvAo15HuiPy0 */
KUBE-SERVICES all -- anywhere anywhere /* kubernetes service portals */
DOCKER all -- anywhere !localhost/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
cali-POSTROUTING all -- anywhere anywhere /* cali:O3lYWMrLQYEMJtB5 */
KUBE-POSTROUTING all -- anywhere anywhere /* kubernetes postrouting rules */
MASQUERADE all -- 172.17.0.0/16 anywhere
Chain DOCKER (2 references)
target prot opt source destination
RETURN all -- anywhere anywhere
Chain KUBE-KUBELET-CANARY (0 references)
target prot opt source destination
Chain KUBE-MARK-DROP (0 references)
target prot opt source destination
MARK all -- anywhere anywhere MARK or 0x8000
Chain KUBE-MARK-MASQ (19 references)
target prot opt source destination
MARK all -- anywhere anywhere MARK or 0x4000
Chain KUBE-NODEPORTS (1 references)
target prot opt source destination
KUBE-MARK-MASQ tcp -- anywhere anywhere /* default/basicservice: */ tcp dpt:31491
KUBE-SVC-JOP6X6VNNQDQJAZO tcp -- anywhere anywhere /* default/basicservice: */ tcp dpt:31491
Chain KUBE-POSTROUTING (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere mark match ! 0x4000/0x4000
MARK all -- anywhere anywhere MARK xor 0x4000
MASQUERADE all -- anywhere anywhere /* kubernetes service traffic requiring SNAT */ random-fully
Chain KUBE-PROXY-CANARY (0 references)
target prot opt source destination
Chain KUBE-SEP-2YHVZQKD4BHSF4YA (1 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 172.16.219.230 anywhere /* kube-system/kube-dns:dns-tcp */
DNAT tcp -- anywhere anywhere /* kube-system/kube-dns:dns-tcp */ tcp to:172.16.219.230:53
Chain KUBE-SEP-43YIGEUU2DN7ZRN5 (1 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 172.16.219.229 anywhere /* kube-system/kube-dns:dns-tcp */
DNAT tcp -- anywhere anywhere /* kube-system/kube-dns:dns-tcp */ tcp to:172.16.219.229:53
Chain KUBE-SEP-EUWYZTPCE5IFJIDE (1 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 172.16.42.21 anywhere /* default/basicservice: */
DNAT tcp -- anywhere anywhere /* default/basicservice: */ tcp to:172.16.42.21:80
Chain KUBE-SEP-GC5TY5IM5U6QPTE7 (1 references)
target prot opt source destination
KUBE-MARK-MASQ all -- kubeworker anywhere /* default/kubernetes:https */
DNAT tcp -- anywhere anywhere /* default/kubernetes:https */ tcp to:192.168.122.230:6443
Chain KUBE-SEP-OAR2FUBB2TVKP7FS (1 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 172.16.42.18 anywhere /* default/nginx:443 */
DNAT tcp -- anywhere anywhere /* default/nginx:443 */ tcp to:172.16.42.18:443
Chain KUBE-SEP-OKPJNMFJDVL45AP5 (1 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 172.16.219.230 anywhere /* kube-system/kube-dns:metrics */
DNAT tcp -- anywhere anywhere /* kube-system/kube-dns:metrics */ tcp to:172.16.219.230:9153
Chain KUBE-SEP-P6TJWKONGB27ALWV (1 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 172.16.219.230 anywhere /* kube-system/kube-dns:dns */
DNAT udp -- anywhere anywhere /* kube-system/kube-dns:dns */ udp to:172.16.219.230:53
Chain KUBE-SEP-TSCCO5IA2KIS5DLL (1 references)
target prot opt source destination
KUBE-MARK-MASQ all -- k8smaster anywhere /* default/kubernetes:https */
DNAT tcp -- anywhere anywhere /* default/kubernetes:https */ tcp to:192.168.122.156:6443
Chain KUBE-SEP-UPEADSX2VJKCY4LM (1 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 172.16.219.229 anywhere /* kube-system/kube-dns:metrics */
DNAT tcp -- anywhere anywhere /* kube-system/kube-dns:metrics */ tcp to:172.16.219.229:9153
Chain KUBE-SEP-XHDLHLGAFZC5DBAG (1 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 172.16.219.229 anywhere /* kube-system/kube-dns:dns */
DNAT udp -- anywhere anywhere /* kube-system/kube-dns:dns */ udp to:172.16.219.229:53
Chain KUBE-SEP-YNBXLYXTPVYWEY5C (1 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 172.16.42.19 anywhere /* default/registry:5000 */
DNAT tcp -- anywhere anywhere /* default/registry:5000 */ tcp to:172.16.42.19:5000
Chain KUBE-SERVICES (2 references)
target prot opt source destination
KUBE-MARK-MASQ tcp -- !172.16.0.0/16 10.101.221.253 /* default/basicservice: cluster IP */ tcp dpt:http
KUBE-SVC-JOP6X6VNNQDQJAZO tcp -- anywhere 10.101.221.253 /* default/basicservice: cluster IP */ tcp dpt:http
KUBE-MARK-MASQ tcp -- !172.16.0.0/16 10.99.187.122 /* default/nginx:443 cluster IP */ tcp dpt:https
KUBE-SVC-2W5KDIUMYNHWAZPV tcp -- anywhere 10.99.187.122 /* default/nginx:443 cluster IP */ tcp dpt:https
KUBE-MARK-MASQ tcp -- !172.16.0.0/16 10.102.188.209 /* default/registry:5000 cluster IP */ tcp dpt:5000
KUBE-SVC-JLIRBZB424R4EFX6 tcp -- anywhere 10.102.188.209 /* default/registry:5000 cluster IP */ tcp dpt:5000
KUBE-MARK-MASQ tcp -- !172.16.0.0/16 10.96.0.10 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:domain
KUBE-SVC-ERIFXISQEP7F7OF4 tcp -- anywhere 10.96.0.10 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:domain
KUBE-MARK-MASQ tcp -- !172.16.0.0/16 10.96.0.10 /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
KUBE-SVC-JD5MR3NA4I4DYORP tcp -- anywhere 10.96.0.10 /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
KUBE-MARK-MASQ udp -- !172.16.0.0/16 10.96.0.10 /* kube-system/kube-dns:dns cluster IP */ udp dpt:domain
KUBE-SVC-TCOU7JCQXEZGVUNU udp -- anywhere 10.96.0.10 /* kube-system/kube-dns:dns cluster IP */ udp dpt:domain
KUBE-MARK-MASQ tcp -- !172.16.0.0/16 10.96.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:https
KUBE-SVC-NPX46M4PTMTKRN6Y tcp -- anywhere 10.96.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:https
KUBE-NODEPORTS all -- anywhere anywhere /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
Chain KUBE-SVC-2W5KDIUMYNHWAZPV (1 references)
target prot opt source destination
KUBE-SEP-OAR2FUBB2TVKP7FS all -- anywhere anywhere /* default/nginx:443 */
Chain KUBE-SVC-ERIFXISQEP7F7OF4 (1 references)
target prot opt source destination
KUBE-SEP-43YIGEUU2DN7ZRN5 all -- anywhere anywhere /* kube-system/kube-dns:dns-tcp */ statistic mode random probability 0.50000000000
KUBE-SEP-2YHVZQKD4BHSF4YA all -- anywhere anywhere /* kube-system/kube-dns:dns-tcp */
Chain KUBE-SVC-JD5MR3NA4I4DYORP (1 references)
target prot opt source destination
KUBE-SEP-UPEADSX2VJKCY4LM all -- anywhere anywhere /* kube-system/kube-dns:metrics */ statistic mode random probability 0.50000000000
KUBE-SEP-OKPJNMFJDVL45AP5 all -- anywhere anywhere /* kube-system/kube-dns:metrics */
Chain KUBE-SVC-JLIRBZB424R4EFX6 (1 references)
target prot opt source destination
KUBE-SEP-YNBXLYXTPVYWEY5C all -- anywhere anywhere /* default/registry:5000 */
Chain KUBE-SVC-JOP6X6VNNQDQJAZO (2 references)
target prot opt source destination
KUBE-SEP-EUWYZTPCE5IFJIDE all -- anywhere anywhere /* default/basicservice: */
Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
target prot opt source destination
KUBE-SEP-TSCCO5IA2KIS5DLL all -- anywhere anywhere /* default/kubernetes:https */ statistic mode random probability 0.50000000000
KUBE-SEP-GC5TY5IM5U6QPTE7 all -- anywhere anywhere /* default/kubernetes:https */
Chain KUBE-SVC-TCOU7JCQXEZGVUNU (1 references)
target prot opt source destination
KUBE-SEP-XHDLHLGAFZC5DBAG all -- anywhere anywhere /* kube-system/kube-dns:dns */ statistic mode random probability 0.50000000000
KUBE-SEP-P6TJWKONGB27ALWV all -- anywhere anywhere /* kube-system/kube-dns:dns */
Chain cali-OUTPUT (1 references)
target prot opt source destination
cali-fip-dnat all -- anywhere anywhere /* cali:GBTAv2p5CwevEyJm */
Chain cali-POSTROUTING (1 references)
target prot opt source destination
cali-fip-snat all -- anywhere anywhere /* cali:Z-c7XtVd2Bq7s_hA */
cali-nat-outgoing all -- anywhere anywhere /* cali:nYKhEzDlr11Jccal */
MASQUERADE all -- anywhere anywhere /* cali:SXWvdsbh4Mw7wOln */ ADDRTYPE match src-type !LOCAL limit-out ADDRTYPE match src-type LOCAL random-fully
Chain cali-PREROUTING (1 references)
target prot opt source destination
cali-fip-dnat all -- anywhere anywhere /* cali:r6XmIziWUJsdOK6Z */
Chain cali-fip-dnat (2 references)
target prot opt source destination
Chain cali-fip-snat (1 references)
target prot opt source destination
Chain cali-nat-outgoing (1 references)
target prot opt source destination
MASQUERADE all -- anywhere anywhere /* cali:flqWnvo8yq4ULQLa */ match-set cali40masq-ipam-pools src ! match-set cali40all-ipam-pools dst random-fully
$
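The dump above is kube-proxy's iptables mode at work. KUBE-SERVICES matches a ClusterIP and port and jumps to a per-service KUBE-SVC-* chain (the suffix is a hash of namespace/name:port); that chain picks a KUBE-SEP-* endpoint chain, which DNATs to a pod IP and port. With two endpoints the first rule fires with "statistic mode random probability 0.5" and the second catches whatever falls through, which is the crude load balancing visible in the kube-dns chains. KUBE-MARK-MASQ marks traffic arriving from outside the pod network (!172.16.0.0/16) so that POSTROUTING masquerades it. One service can be traced by hand like this (the chain names here are the ones from this dump; they differ per cluster):
sudo iptables -t nat -nL KUBE-SERVICES | grep 10.96.0.10      # ClusterIP rules for kube-dns
sudo iptables -t nat -nL KUBE-SVC-TCOU7JCQXEZGVUNU            # per-service chain: endpoint selection
sudo iptables -t nat -nL KUBE-SEP-P6TJWKONGB27ALWV            # per-endpoint chain: DNAT to 172.16.219.230:53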
kanda@ubuntu20:~$ cat PVol.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvvol-1
spec:
  capacity:
    storage: 128Mi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /srv
    server: k8smaster
    readOnly: false
kanda@ubuntu20:~$ kubectl create -f PVol.yaml
persistentvolume/pvvol-1 created
kanda@ubuntu20:~$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvvol-1 128Mi RWX Retain Available 45s
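The PV is only a pointer at the NFS export k8smaster:/srv; nothing is mounted yet, and Kubernetes does not verify the export at create time. A quick sanity check from a node (assuming the nfs-common client tools are installed):
showmount -e k8smaster    # the export list should include /srv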
$ cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-one
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
kanda@ubuntu20:~$ kubectl create -f pvc.yaml
persistentvolumeclaim/pvc-one created
kanda@ubuntu20:~$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-one Bound pvvol-1 128Mi RWX 27s
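The claim bound because pvvol-1 is the one Available PV that satisfies it: the access mode matches (RWX) and 128Mi covers the requested 100Mi, so the claim gets the whole 128Mi, as the CAPACITY column shows. To actually consume it, a pod references the claim by name; a minimal sketch (the pod name and mount path here are made up, not from this session):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pvc-test            # hypothetical name
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: vol
      mountPath: /usr/share/nginx/html
  volumes:
  - name: vol
    persistentVolumeClaim:
      claimName: pvc-one    # the claim bound above
EOF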
CSI NFS driver
$ git clone https://github.com/kubernetes-csi/csi-driver-nfs.git
kanda@ubuntu20:~/csi-driver-nfs$ kubectl -f deploy/kubernetes create
csidriver.storage.k8s.io/nfs.csi.k8s.io created
daemonset.apps/csi-nodeplugin-nfsplugin created
serviceaccount/csi-nodeplugin created
clusterrole.rbac.authorization.k8s.io/csi-nodeplugin created
clusterrolebinding.rbac.authorization.k8s.io/csi-nodeplugin created
kanda@ubuntu20:~$ kubectl get CSIDriver
NAME ATTACHREQUIRED PODINFOONMOUNT MODES AGE
nfs.csi.k8s.io false true Persistent 2m36s
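This CSIDriver object is what the deploy manifests registered with the API server, and both flags matter here: ATTACHREQUIRED=false tells Kubernetes to skip the attach/ControllerPublish step entirely (there is nothing to attach for NFS), and PODINFOONMOUNT=true makes kubelet pass the pod's metadata to the driver on NodePublishVolume. The full spec is visible with:
kubectl get csidriver nfs.csi.k8s.io -o yaml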
kanda@ubuntu20:~/csi-driver-nfs$ kubectl -f examples/kubernetes/nginx.yaml create
persistentvolume/data-nfsplugin created
persistentvolumeclaim/data-nfsplugin created
pod/nginx created
kanda@ubuntu20:~$ kubectl get pod
NAME READY STATUS RESTARTS AGE
csi-nodeplugin-nfsplugin-l77bf 2/2 Running 0 11m
csi-nodeplugin-nfsplugin-qtn29 2/2 Running 0 11m
nginx 0/1 ContainerCreating 0 8m21s XXX something is wrong here
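A pod stuck in ContainerCreating on a CSI volume usually means the NodePublishVolume call is failing. The mount error shows up in the pod's events and in the kubelet log, not in the pod itself:
kubectl describe pod nginx                   # look for FailedMount / MountVolume.SetUp events
sudo journalctl -u kubelet | grep -i mount   # kubelet's side of the same failure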
kanda@ubuntu20:~$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-nfsplugin Bound data-nfsplugin 100Mi RWX 21m
nginx-claim0 Bound task-pv-volumes 200Mi RWO 25h
kanda@ubuntu20:~$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
data-nfsplugin 100Mi RWX Retain Bound default/data-nfsplugin 22m
kanda@ubuntu20:~$ kubectl exec -it csi-nodeplugin-nfsplugin-l77bf /bin/sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
Defaulting container name to node-driver-registrar.
Use 'kubectl describe pod/csi-nodeplugin-nfsplugin-l77bf -n default' to see all of the containers in this pod.
/ # cd /tmp
/tmp # ls
/tmp # ls -la
total 0
drwxrwxrwt 2 root root 6 Dec 20 2018 .
drwxr-xr-x 1 root root 63 Aug 10 03:36 ..
/tmp # ps -e
PID USER TIME COMMAND
1 root 0:00 /node-driver-registrar --v=5 --csi-address=/plugin/csi.sock --kubelet-registration-path=/var/lib/kubelet/plugins/csi-nfsplugin/csi.sock
11 root 0:00 /bin/sh
23 root 0:00 ps -e
/ #
kanda@ubuntu20:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bd5a97446e04 quay.io/k8scsi/nfsplugin "/nfsplugin --nodeid…" 13 minutes ago Up 13 minutes k8s_nfs_csi-nodeplugin-nfsplugin-l77bf_default_2250da2d-686f-4d13-a9b2-9226745cd7db_0
7de35c01a6a9 ffce5e64d915
sh-4.4# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 03:36 ? 00:00:00 /nfsplugin --nodeid=ubuntu20 --endpoint=unix://plugin/csi.sock
Where do these golang logs end up, in which file? The containers log to stderr, and docker's json-file log driver captures that under /var/log/containers on the host:
root@ubuntu20:/var/log/containers# more csi-nodeplugin-nfsplugin-t8tlj_default_nfs-674f3354ab6b0f9eaa16d5caa34c288d28aba763e008d9ca330e581839a137f0.log
{"log":"I0810 04:16:09.715220 1 nfs.go:47] Driver: nfs.csi.k8s.io version: 2.0.0\n","stream":"stderr","time":"2020-08-10T04:16:09.715832024Z"}
{"log":"I0810 04:16:09.715420 1 nfs.go:96] Enabling volume access mode: SINGLE_NODE_WRITER\n","stream":"stderr","time":"2020-08-10T04:16:09.715863803Z"}
{"log":"I0810 04:16:09.715425 1 nfs.go:96] Enabling volume access mode: SINGLE_NODE_READER_ONLY\n","stream":"stderr","time":"2020-08-10T04:16:09.715868139Z"}
{"log":"I0810 04:16:09.715429 1 nfs.go:96] Enabling volume access mode: MULTI_NODE_READER_ONLY\n","stream":"stderr","time":"2020-08-10T04:16:09.715870766Z"}
{"log":"I0810 04:16:09.715432 1 nfs.go:96] Enabling volume access mode: MULTI_NODE_SINGLE_WRITER\n","stream":"stderr","time":"2020-08-10T04:16:09.715873309Z"}
{"log":"I0810 04:16:09.715438 1 nfs.go:96] Enabling volume access mode: MULTI_NODE_MULTI_WRITER\n","stream":"stderr","time":"2020-08-10T04:16:09.715875871Z"}
{"log":"I0810 04:16:09.715489 1 nfs.go:107] Enabling controller service capability: UNKNOWN\n","stream":"stderr","time":"2020-08-10T04:16:09.715878463Z"}
{"log":"I0810 04:16:09.722049 1 server.go:92] Listening for connections on address: \u0026net.UnixAddr{Name:\"/plugin/csi.sock\", Net:\"unix\"}\n","stream":"stderr","time":"2020-08-10T
04:16:09.722172516Z"}
root@ubuntu20:/var/log/containers# cat csi-nodeplugin-nfsplugin-t8tlj_default_node-driver-registrar-d5c035baa44bbe8f53a1af7160fc81eedeaf0e347f62d27a7466902bcd7a1d80.log
{"log":"I0810 04:16:04.730125 1 main.go:108] Version: v1.0.2-rc1-0-g2edd7f10\n","stream":"stderr","time":"2020-08-10T04:16:04.730285865Z"}
{"log":"I0810 04:16:04.730166 1 main.go:115] Attempting to open a gRPC connection with: \"/plugin/csi.sock\"\n","stream":"stderr","time":"2020-08-10T04:16:04.730310991Z"}
{"log":"I0810 04:16:04.730171 1 connection.go:69] Connecting to /plugin/csi.sock\n","stream":"stderr","time":"2020-08-10T04:16:04.730313609Z"}
{"log":"I0810 04:16:04.730352 1 connection.go:96] Still trying, connection is CONNECTING\n","stream":"stderr","time":"2020-08-10T04:16:04.73038998Z"}
{"log":"I0810 04:16:04.730469 1 connection.go:96] Still trying, connection is TRANSIENT_FAILURE\n","stream":"stderr","time":"2020-08-10T04:16:04.730497223Z"}
{"log":"I0810 04:16:05.730567 1 connection.go:96] Still trying, connection is TRANSIENT_FAILURE\n","stream":"stderr","time":"2020-08-10T04:16:05.730641943Z"}
{"log":"I0810 04:16:06.772582 1 connection.go:96] Still trying, connection is TRANSIENT_FAILURE\n","stream":"stderr","time":"2020-08-10T04:16:06.772645835Z"}
{"log":"I0810 04:16:07.948794 1 connection.go:96] Still trying, connection is CONNECTING\n","stream":"stderr","time":"2020-08-10T04:16:07.948882772Z"}
{"log":"I0810 04:16:07.948889 1 connection.go:96] Still trying, connection is TRANSIENT_FAILURE\n","stream":"stderr","time":"2020-08-10T04:16:07.948919477Z"}
{"log":"I0810 04:16:09.014823 1 connection.go:96] Still trying, connection is CONNECTING\n","stream":"stderr","time":"2020-08-10T04:16:09.014900176Z"}
{"log":"I0810 04:16:09.014858 1 connection.go:96] Still trying, connection is TRANSIENT_FAILURE\n","stream":"stderr","time":"2020-08-10T04:16:09.014912204Z"}
{"log":"I0810 04:16:09.990326 1 connection.go:96] Still trying, connection is CONNECTING\n","stream":"stderr","time":"2020-08-10T04:16:09.990636511Z"}
{"log":"I0810 04:16:09.990343 1 connection.go:93] Connected\n","stream":"stderr","time":"2020-08-10T04:16:09.990648624Z"}
{"log":"I0810 04:16:09.990348 1 main.go:123] Calling CSI driver to discover driver name.\n","stream":"stderr","time":"2020-08-10T04:16:09.990666628Z"}
{"log":"I0810 04:16:09.990352 1 connection.go:137] GRPC call: /csi.v1.Identity/GetPluginInfo\n","stream":"stderr","time":"2020-08-10T04:16:09.990668308Z"}
{"log":"I0810 04:16:09.990355 1 connection.go:138] GRPC request: {}\n","stream":"stderr","time":"2020-08-10T04:16:09.990874657Z"}
{"log":"I0810 04:16:09.991442 1 connection.go:140] GRPC response: {\"name\":\"nfs.csi.k8s.io\",\"vendor_version\":\"2.0.0\"}\n","stream":"stderr","time":"2020-08-10T04:16:09.991813042Z"}
{"log":"I0810 04:16:09.991835 1 connection.go:141] GRPC error: \u003cnil\u003e\n","stream":"stderr","time":"2020-08-10T04:16:09.991847175Z"}
{"log":"I0810 04:16:09.991858 1 main.go:131] CSI driver name: \"nfs.csi.k8s.io\"\n","stream":"stderr","time":"2020-08-10T04:16:09.991868513Z"}
{"log":"I0810 04:16:09.991928 1 node_register.go:54] Starting Registration Server at: /registration/nfs.csi.k8s.io-reg.sock\n","stream":"stderr","time":"2020-08-10T04:16:09.99194197Z"}
{"log":"I0810 04:16:09.991991 1 node_register.go:61] Registration Server started at: /registration/nfs.csi.k8s.io-reg.sock\n","stream":"stderr","time":"2020-08-10T04:16:09.992004992Z"}
{"log":"I0810 04:16:22.544126 1 main.go:76] Received GetInfo call: \u0026InfoRequest{}\n","stream":"stderr","time":"2020-08-10T04:16:22.544207778Z"}
{"log":"I0810 04:16:23.026245 1 main.go:86] Received NotifyRegistrationStatus call: \u0026RegistrationStatus{PluginRegistered:true,Error:,}\n","stream":"stderr","time":"2020-08-10T04:16:23.02634376Z"}
$ kubectl describe pod csi-nodeplugin-nfsplugin-t8tlj
Name: csi-nodeplugin-nfsplugin-t8tlj
Namespace: default
Priority: 0
Node: ubuntu20/192.168.122.156
Start Time: Mon, 10 Aug 2020 04:15:47 +0000
Labels: app=csi-nodeplugin-nfsplugin
controller-revision-hash=68d449f88c
pod-template-generation=1
Annotations: cni.projectcalico.org/podIP: 172.16.219.238/32
cni.projectcalico.org/podIPs: 172.16.219.238/32
Status: Running
IP: 172.16.219.238
IPs:
IP: 172.16.219.238
Controlled By: DaemonSet/csi-nodeplugin-nfsplugin
Containers:
node-driver-registrar:
Container ID: docker://d5c035baa44bbe8f53a1af7160fc81eedeaf0e347f62d27a7466902bcd7a1d80
Image: quay.io/k8scsi/csi-node-driver-registrar:v1.0.2
Image ID: docker-pullable://quay.io/k8scsi/csi-node-driver-registrar@sha256:ffecfbe6ae9f446e5102cbf2c73041d63ccf44bcfd72e2f2a62174a3a185eb69
Port: <none>
Host Port: <none>
Args:
--v=5
--csi-address=/plugin/csi.sock
--kubelet-registration-path=/var/lib/kubelet/plugins/csi-nfsplugin/csi.sock
State: Running
Started: Mon, 10 Aug 2020 04:16:04 +0000
Ready: True
Restart Count: 0
Environment:
KUBE_NODE_NAME: (v1:spec.nodeName)
Mounts:
/plugin from plugin-dir (rw)
/registration from registration-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from csi-nodeplugin-token-7dk7h (ro)
nfs:
Container ID: docker://674f3354ab6b0f9eaa16d5caa34c288d28aba763e008d9ca330e581839a137f0
Image: quay.io/k8scsi/nfsplugin:v2.0.0
Image ID: docker-pullable://quay.io/k8scsi/nfsplugin@sha256:63a703f16ce771046fe9927ed42e68a0625b4fcea24e8c0bc1ffae955e51529f
Port: <none>
Host Port: <none>
Args:
--nodeid=$(NODE_ID)
--endpoint=$(CSI_ENDPOINT)
State: Running
Started: Mon, 10 Aug 2020 04:16:09 +0000
Ready: True
Restart Count: 0
Environment:
NODE_ID: (v1:spec.nodeName)
CSI_ENDPOINT: unix://plugin/csi.sock
Mounts:
/plugin from plugin-dir (rw)
/var/lib/kubelet/pods from pods-mount-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from csi-nodeplugin-token-7dk7h (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
plugin-dir:
Type: HostPath (bare host directory volume)
Path: /var/lib/kubelet/plugins/csi-nfsplugin
HostPathType: DirectoryOrCreate
pods-mount-dir:
Type: HostPath (bare host directory volume)
Path: /var/lib/kubelet/pods
HostPathType: Directory
registration-dir:
Type: HostPath (bare host directory volume)
Path: /var/lib/kubelet/plugins_registry
HostPathType: Directory
csi-nodeplugin-token-7dk7h:
Type: Secret (a volume populated by a Secret)
SecretName: csi-nodeplugin-token-7dk7h
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/pid-pressure:NoSchedule
node.kubernetes.io/unreachable:NoExecute
node.kubernetes.io/unschedulable:NoSchedule
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/csi-nodeplugin-nfsplugin-t8tlj to ubuntu20
Normal Pulled 3m41s kubelet, ubuntu20 Container image "quay.io/k8scsi/csi-node-driver-registrar:v1.0.2" already present on machine
Normal Created 3m39s kubelet, ubuntu20 Created container node-driver-registrar
Normal Started 3m39s kubelet, ubuntu20 Started container node-driver-registrar
Normal Pulled 3m39s kubelet, ubuntu20 Container image "quay.io/k8scsi/nfsplugin:v2.0.0" already present on machine
Normal Created 3m34s kubelet, ubuntu20 Created container nfs
Normal Started 3m33s kubelet, ubuntu20 Started container nfs
nodeserver.go
65: err = ns.mounter.Mount(source, targetPath, "nfs", mo)
So what does the nfs csi driver actually do in the end? Pretty much what the question suggests: when a pod using the PVC lands on a node, kubelet calls NodePublishVolume on the driver over the unix socket, and the handler in nodeserver.go (the Mount call above) performs the NFS mount into the pod's volume directory.
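On the node that boils down to an ordinary NFS mount; roughly this (the server and path come from the PV spec, and <pod-uid> and <pv-name> are placeholders, not values from this session):
mount -t nfs <server>:<path> /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~csi/<pv-name>/mount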