Kubernetes POD creation lifecycle
https://youtu.be/BgrQ16r84pM
A sample first POD with ubuntu
root@master:/var/triton/wkg# kubectl get pods
NAME READY STATUS RESTARTS AGE
cpx-mdf28 1/1 Running 0 11d
cpx-zhgpj 1/1 Running 1 11d
exporter-vs5sz 1/1 Running 0 25d
triton 1/1 Running 0 14h
web-frontend-bw79t 1/1 Running 0 1d
root@master:/var/triton/wkg# kubectl create -f ub.yaml
pod "dkub" created
root@master:/var/triton/wkg# kubectl get pods
NAME READY STATUS RESTARTS AGE
cpx-mdf28 1/1 Running 0 11d
cpx-zhgpj 1/1 Running 1 11d
testub 1/1 Running 0 3s
exporter-vs5sz 1/1 Running 0 25d
triton 1/1 Running 0 14h
web-frontend-bw79t 1/1 Running 0 1d
root@master:/var/triton/wkg# cat ub.yaml
apiVersion: v1
kind: Pod
metadata:
  name: testub
  labels:
    app: testub
spec:
  containers:
  - name: testub
    image: ubuntu:14.04
    command: [ "/bin/bash", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]
root@master:/var/triton/wkg#
Useful link: https://stackoverflow.com/questions/31870222/how-can-i-keep-container-running-on-kubernetes
Sample POD yaml revisited (installed curl in POD)
root@master:/var/triton/wkg/tryout# kubectl get pods
NAME READY STATUS RESTARTS AGE
cpx-mdf28 1/1 Running 0 11d
cpx-zhgpj 1/1 Running 1 11d
exporter-vs5sz 1/1 Running 0 25d
triton 1/1 Running 0 15h
web-frontend-bw79t 1/1 Running 0 1d
root@master:/var/triton/wkg/tryout#
root@master:/var/triton/wkg/tryout# cat ub.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dkub
  labels:
    app: dkub
spec:
  hostAliases:
  - ip: "91.189.91.23"
    hostnames:
    - "security.ubuntu.com"
  - ip: "91.189.88.149"
    hostnames:
    - "archive.ubuntu.com"
  containers:
  - name: dkub
    image: ubuntu:16.04
    command: [ "/bin/bash", "-c", "--" ]
    args: [ "apt-get update;apt-get install -y curl;while true; do sleep 30; done;" ]
root@master:/var/triton/wkg/tryout# kubectl get pods
NAME READY STATUS RESTARTS AGE
cpx-mdf28 1/1 Running 0 11d
cpx-zhgpj 1/1 Running 1 11d
exporter-vs5sz 1/1 Running 0 25d
triton 1/1 Running 0 15h
web-frontend-bw79t 1/1 Running 0 1d
root@master:/var/triton/wkg/tryout#
Useful link: https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
Sample Service file
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    app: web
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: nginx
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Useful link: https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/
Master components provide the cluster's control plane. They make global decisions about the cluster (for example, scheduling) and detect and respond to cluster events such as changes in node health.
Master components (api-server, etcd, controller-manager) can be run on any node in the cluster. However, for simplicity, setup scripts typically start all master components on the same VM and do not run user containers on this VM.
All communication paths from the cluster to the master terminate at the apiserver (none of the other master components are designed to expose remote services).
The apiserver communicates with the rest of the cluster over two paths. The first is from the apiserver to the kubelet process which runs on each node in the cluster. The second is from the apiserver to any node, pod, or service through the apiserver's proxy functionality.
A node may be a VM or physical machine, depending on the cluster. Each node has the services necessary to run pods and is managed by the master components. The services on a node include Docker, kubelet and kube-proxy.
The pause container is a container which holds the network namespace for the pod. This means that your 'apache' container can die and come back to life, and all of the network setup will still be there. Normally, if the last process in a network namespace dies, the namespace is destroyed, and creating a new apache container would require creating all new network setup. With pause, you'll always have that one last thing in the namespace.
Useful link: https://groups.google.com/forum/#!topic/kubernetes-users/jVjv0QK4b_o
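One way to observe this on a worker node (a sketch; the exact pause image name and tag vary with the Kubernetes version and registry):
# On a worker node, every running pod has an extra "pause" container holding its namespaces
docker ps | grep pause
# expect one k8s_POD_... entry per pod, using an image such as gcr.io/google_containers/pause-amd64:3.0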
A Pod is the basic building block of Kubernetes. A Pod represents a unit of deployment: a single instance of an application in Kubernetes, which might consist of either a single container or a small number of containers that are tightly coupled and that share resources.
Note: Restarting a container in a Pod should not be confused with restarting the Pod. The Pod itself does not run, but is an environment the containers run in and persists until it is deleted.
Containers within a pod share an IP address and port space, and can find each other via localhost. They can also communicate with each other using standard inter-process communications like SystemV semaphores or POSIX shared memory. Containers in different pods have distinct IP addresses and can not communicate by IPC without special configuration. These containers usually communicate with each other via Pod IP addresses.
Applications within a pod also have access to shared volumes, which are defined as part of a pod and are made available to be mounted into each application’s filesystem.
In general, Pods do not disappear until someone destroys them. This might be a human or a controller.
A Kubernetes object is a "record of intent": once you create the object, the Kubernetes system will constantly work to ensure that the object exists. When you create an object in Kubernetes, you must provide the object spec that describes its desired state, as well as some basic information about the object (such as a name).
Most often, you provide the information to kubectl in a .yaml file. kubectl converts the information to JSON when making the API request.
At any given time, the Kubernetes Control Plane actively manages an object’s actual state to match the desired state you supplied.
Example of kubernetes object yaml
apiVersion: apps/v1beta2 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Useful link: https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/
Each Pod is meant to run a single instance of a given application. If you want to scale your application horizontally (e.g., run multiple instances), you should use multiple Pods, one for each instance. In Kubernetes, this is generally referred to as replication.
Replicated Pods are usually created and managed as a group by an abstraction called a Controller.
A Controller can create and manage multiple Pods for you, handling replication and rollout and providing self-healing capabilities at cluster scope. For example, if a Node fails, the Controller might automatically replace the Pod by scheduling an identical replacement on a different Node.
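A quick way to watch this self-healing (a sketch; web-frontend-bw79t is simply the pod name from the listing above, assumed to be managed by a controller):
# Delete a controller-managed pod and watch a replacement get scheduled
kubectl delete pod web-frontend-bw79t
kubectl get pods -w
# a new web-frontend-XXXXX pod appears with a fresh name and a small AGE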
Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day. Please refer https://kubernetes.io/docs/getting-started-guides/minikube/
All the operations in cluster are performed by kubectl. JSON or YAML format is used for the command input.
Kube components
root@master:/var/triton# kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
etcd-master 1/1 Running 1 11d
kube-apiserver-master 1/1 Running 1 11d
kube-controller-manager-master 1/1 Running 2 11d
kube-dns-545bc4bfd4-g7hlg 3/3 Running 3 11d
kube-flannel-ds-57lbp 1/1 Running 2 11d
kube-flannel-ds-5mpgr 1/1 Running 1 11d
kube-proxy-7xzch 1/1 Running 2 11d
kube-proxy-v2vdc 1/1 Running 1 11d
kube-scheduler-master 1/1 Running 2 11d
Kubernetes DNS schedules a DNS Pod and Service on the cluster, and configures the kubelets to tell individual containers to use the DNS Service’s IP to resolve DNS names.
Every Service defined in the cluster (including the DNS server itself) is assigned a DNS name.
Assume a Service named foo in the Kubernetes namespace bar. A Pod running in namespace bar can look up this service by simply doing a DNS query for foo. A Pod running in namespace quux can look up this service by doing a DNS query for foo.bar.
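For example, from inside a pod the lookups might look like this (a sketch; the foo/bar names and the cluster.local cluster domain are assumptions):
# From a pod in namespace quux
nslookup foo.bar                    # resolves the Service foo in namespace bar
nslookup foo.bar.svc.cluster.local  # the fully qualified form
# From a pod in namespace bar, a plain "nslookup foo" is enough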
kube-apiserver exposes the Kubernetes API. It is the front end for the Kubernetes control plane and the point of contact for every agent.
It listens on port 6443 for HTTPS and port 8080 for HTTP.
Sample output for apiserver
root@ubuntu:~# curl -k http://10.217.212.231:8080
{
"paths": [
"/api",
"/api/v1",
"/apis",
"/apis/apps",
"/apis/apps/v1beta1",
"/apis/authentication.k8s.io",
"/apis/authentication.k8s.io/v1beta1",
"/apis/authorization.k8s.io",
"/apis/authorization.k8s.io/v1beta1",
"/apis/autoscaling",
"/apis/autoscaling/v1",
"/apis/batch",
"/apis/batch/v1",
"/apis/batch/v2alpha1",
"/apis/certificates.k8s.io",
"/apis/certificates.k8s.io/v1alpha1",
"/apis/extensions",
"/apis/extensions/v1beta1",
"/apis/policy",
"/apis/policy/v1beta1",
"/apis/rbac.authorization.k8s.io",
"/apis/rbac.authorization.k8s.io/v1alpha1",
"/apis/storage.k8s.io",
"/apis/storage.k8s.io/v1beta1",
"/healthz",
"/healthz/poststarthook/bootstrap-controller",
"/healthz/poststarthook/extensions/third-party-resources",
"/healthz/poststarthook/rbac/bootstrap-roles",
"/logs",
"/metrics",
"/swaggerapi/",
"/ui/",
"/version"
]
}root@ubuntu:~#
Sample event output from Kube-apiserver is at https://docs.openstack.org/kuryr/0.2.0/devref/k8s_api_watcher_design.html
etcd is used as Kubernetes' backing store. All cluster data is stored here. Only the api-server accesses the etcd store.
https://rafay.co/the-kubernetes-current/etcd-kubernetes-what-you-should-know/ for multi-node etcd
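For the curious, cluster state can be inspected directly in etcd; a minimal sketch assuming a kubeadm-style setup where etcd serves TLS on localhost:2379 with certificates under /etc/kubernetes/pki/etcd/ (older setups may listen without TLS):
# List the keys the apiserver has written under /registry (read-only inspection)
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry --prefix --keys-only | head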
kube-controller-manager runs controllers, which are the background threads that handle routine tasks in the cluster. Logically, each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.
These controllers include:
Node Controller: Responsible for noticing and responding when nodes go down.
Replication Controller: Responsible for maintaining the correct number of pods for every replication controller object in the system.
Endpoints Controller: Populates the Endpoints object (that is, joins Services & Pods).
Service Account & Token Controllers: Create default accounts and API access tokens for new namespaces.
kube-scheduler watches newly created pods that have no node assigned, and selects a node for them to run on.
kubelet is the primary node agent. The kubelet is responsible for maintaining a set of pods, which are composed of one or more containers, on a local system. Within a Kubernetes cluster, the kubelet functions as a local agent that watches for pod specs via the Kubernetes API server. The kubelet is also responsible for registering a node with a Kubernetes cluster, sending events and pod status, and reporting resource utilization.
It watches for pods that have been assigned to its node (either by the apiserver or via a local configuration file) and:
Mounts the pod’s required volumes.
Downloads the pod’s secrets.
Runs the pod’s containers via docker (or, experimentally, rkt).
Periodically executes any requested container liveness probes.
Reports the status of the pod back to the rest of the system, by creating a mirror pod if necessary.
Reports the status of the node back to the rest of the system.
Checks container health and restarts unhealthy containers automatically.
Useful link: https://coreos.com/blog/introducing-the-kubelet-in-coreos.html
kube-proxy enables the Kubernetes Service abstraction by maintaining network rules on the host and performing connection forwarding. It acts as the default load balancer; Citrix NetScaler CPX, HAProxy, NGINX, or Avi Vantage can replace it.
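In the default iptables mode, the rules kube-proxy maintains can be seen on any node (a sketch; KUBE-SERVICES is the chain kube-proxy installs for Service virtual IPs):
# Service virtual IPs are implemented as iptables NAT rules on every node
iptables -t nat -L KUBE-SERVICES -n | head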
A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.
Some typical uses of a DaemonSet are:
running a cluster storage daemon, such as glusterd, ceph, on each node.
running a logs collection daemon on every node, such as fluentd or logstash.
running a node monitoring daemon on every node, such as Prometheus Node Exporter, collectd, Datadog agent, New Relic agent, or Ganglia gmond.
The key to connecting a frontend to a backend is the backend Service. A Service creates a persistent IP address and DNS name entry so that the backend microservice can always be reached. A Service uses selector labels to find the Pods that it routes traffic to.
A ReplicationController ensures that a specified number of pod replicas are running at any one time.
If there are too many pods, the ReplicationController terminates the extra pods. If there are too few, the ReplicationController starts more pods. Unlike manually created pods, the pods maintained by a ReplicationController are automatically replaced if they fail, are deleted, or are terminated. For example, your pods are re-created on a node after disruptive maintenance such as a kernel upgrade. For this reason, you should use a ReplicationController even if your application requires only a single pod. A minimal sketch of that single-pod case is shown below.
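Minimal single-replica ReplicationController sketch (the web-rc name, label, and nginx image are placeholders):
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc
spec:
  replicas: 1      # even a single pod benefits from supervision
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx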
A job is a supervisor for pods carrying out batch processes, that is, a process that runs for a certain time to completion, for example a calculation or a backup operation.
Useful link: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/
http://kubernetesbyexample.com/jobs/
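A minimal Job sketch along the lines of the linked examples (computes pi and exits; restartPolicy must be Never or OnFailure for a Job):
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4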
Typically, services and pods have IPs only routable by the cluster network. All traffic that ends up at an edge router is either dropped or forwarded elsewhere.
An Ingress is a collection of rules that allow inbound connections to reach the cluster services.
 internet
     |
[ Ingress ]
--|-----|--
[ Services ]
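A minimal Ingress sketch matching that picture (the host and service names are assumptions; extensions/v1beta1 matches the cluster version used in these notes):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - host: web.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web
          servicePort: 80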
Use a Job for Pods that are expected to terminate, for example, batch computations. Jobs are appropriate only for Pods with restartPolicy equal to OnFailure or Never.
Use a ReplicationController, ReplicaSet, or Deployment for Pods that are not expected to terminate, for example, web servers. ReplicationControllers are appropriate only for Pods with a restartPolicy of Always.
Use a DaemonSet for Pods that need to run one per machine, because they provide a machine-specific system service.
A service account binds to pods and determines the access capabilities of the processes in the pod. For example, if a process within a pod container wants to access the kube-apiserver, the pod's service account comes into the picture. If the service account lacks the required privileges, access to the kube-apiserver will result in an "Access Denied" error.
When you create a pod, if you do not specify a service account, it is automatically assigned the default service account in the same namespace. If you get the raw json or yaml for a pod you have created (for example, kubectl get pods/podname -o yaml), you can see the spec.serviceAccountName field has been automatically set.
Method to view service account assigned to a POD
root@master:/var/triton/wkg/cpx/cpx# kubectl get pods triton -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: 2018-04-09T12:43:09Z
  labels:
    app: triton
  name: triton
  ....
  .....
spec:
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  ....
  ....
root@master:/var/triton/wkg# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
etcd-master 1/1 Running 3 130d
kube-apiserver-master 1/1 Running 3 130d
kube-controller-manager-master 1/1 Running 5 130d
kube-dns-545bc4bfd4-g7hlg 3/3 Running 9 130d
kube-flannel-ds-57lbp 1/1 Running 5 130d
kube-flannel-ds-5mpgr 1/1 Running 1 130d
kube-proxy-7xzch 1/1 Running 2 130d
kube-proxy-v2vdc 1/1 Running 4 130d
kube-scheduler-master 1/1 Running 4 130d
root@master:/var/triton/wkg#
root@master:/var/triton/wkg# kubectl get pods kube-flannel-ds-57lbp -n kube-system -o yaml
apiVersion: v1
kind: Pod
metadata:
  ....
spec:
  ....
  ....
  securityContext: {}
  serviceAccount: flannel
  serviceAccountName: flannel
  terminationGracePeriodSeconds: 30
====
From within POD container
====
root@nsingresscontroller:/# ls /run/secrets/kubernetes.io/serviceaccount/token/
ls: cannot access '/run/secrets/kubernetes.io/serviceaccount/token/': Not a directory
root@nsingresscontroller:/# ls /run/secrets/kubernetes.io/serviceaccount/
ca.crt namespace token
root@nsingresscontroller:/# ls /run/secrets/kubernetes.io/serviceaccount/namespace
/run/secrets/kubernetes.io/serviceaccount/namespace
root@nsingresscontroller:/# cat /run/secrets/kubernetes.io/serviceaccount/namespace
defaultroot@nsingresscontroller:/#
root@nsingresscontroller:/# KUBE_TOKEN=$(</var/run/secrets/kubernetes.io/serviceaccount/token)
root@nsingresscontroller:/# curl -sSk -H "Authorization: Bearer $KUBE_TOKEN" https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1/namespaces/default/pods/$HOSTNAME > op
root@nsingresscontroller:/# head op
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "nsingresscontroller",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/pods/nsingresscontroller",
"uid": "ef69947e-88c6-11e8-b4a7-eabc5652ca9b",
"resourceVersion": "1781477",
"creationTimestamp": "2018-07-16T07:07:43Z",
root@nsingresscontroller:/#
Useful link: https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml
https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
Method to view service accounts
root@master:/var/triton/wkg# kubectl get serviceaccounts
NAME SECRETS AGE
default 1 130d
root@master:/var/triton/wkg# kubectl get serviceaccounts -n kube-system
NAME SECRETS AGE
attachdetach-controller 1 130d
bootstrap-signer 1 130d
certificate-controller 1 130d
cronjob-controller 1 130d
daemon-set-controller 1 130d
default 1 130d
deployment-controller 1 130d
disruption-controller 1 130d
endpoint-controller 1 130d
flannel 1 130d
generic-garbage-collector 1 130d
horizontal-pod-autoscaler 1 130d
job-controller 1 130d
kube-dns 1 130d
kube-proxy 1 130d
namespace-controller 1 130d
node-controller 1 130d
persistent-volume-binder 1 130d
pod-garbage-collector 1 130d
replicaset-controller 1 130d
replication-controller 1 130d
resourcequota-controller 1 130d
service-account-controller 1 130d
service-controller 1 130d
statefulset-controller 1 130d
....
root@master:/var/triton/wkg#
A Pod can have multiple containers running apps within it, but it can also have one or more init containers, which are run before the app containers are started.
Init container example
root@ubuntu-232:~/deepak# cat init.yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox
    command:
    - wget
    - "-O"
    - "/work-dir/index.html"
    - http://kubernetes.io
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
root@ubuntu-232:~/deepak# kubectl get pods
NAME READY STATUS RESTARTS AGE
cpx-ingress-ptm4v 2/2 Running 0 18m
cpx-nvz4s 1/1 Running 0 9d
cpx-xgh9v 1/1 Running 0 9d
dapi-test-pod 1/1 Running 0 11d
init-demo 0/1 Init:0/1 0 6s
tritoncpx 1/1 Running 0 5d
root@ubuntu-232:~/deepak# kubectl describe pod init-demo
Name: init-demo
Namespace: default
Node: ubuntu-231/10.106.73.231
Start Time: Mon, 21 May 2018 09:53:16 +0530
Labels: <none>
Annotations: <none>
Status: Running
IP: 10.244.2.90
Init Containers:
install:
Container ID: docker://49c90392fa08f757e3e9e524d2540b240ca0f64629a8cb78b79a4ec2ee99d930
Image: busybox
Image ID: docker-pullable://busybox@sha256:58ac43b2cc92c687a32c8be6278e50a063579655fe3090125dcb2af0ff9e1a64
Port: <none>
Host Port: <none>
Command:
wget
-O
/work-dir/index.html
http://kubernetes.io
State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 21 May 2018 09:53:24 +0530
Finished: Mon, 21 May 2018 09:53:26 +0530
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-fxbc6 (ro)
/work-dir from workdir (rw)
Containers:
nginx:
Container ID: docker://736d58c1a99e1201759b6a992ead9325560596252499570f52da873b8ca27a12
Image: nginx
Image ID: docker-pullable://nginx@sha256:0fb320e2a1b1620b4905facb3447e3d84ad36da0b2c8aa8fe3a5a81d1187b884
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Mon, 21 May 2018 09:53:42 +0530
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/usr/share/nginx/html from workdir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-fxbc6 (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
workdir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
default-token-fxbc6:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-fxbc6
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulMountVolume 28s kubelet, ubuntu-231 MountVolume.SetUp succeeded for volume "workdir"
Normal SuccessfulMountVolume 28s kubelet, ubuntu-231 MountVolume.SetUp succeeded for volume "default-token-fxbc6"
Normal Scheduled 28s default-scheduler Successfully assigned init-demo to ubuntu-231
Normal Pulling 27s kubelet, ubuntu-231 pulling image "busybox"
Normal Pulled 21s kubelet, ubuntu-231 Successfully pulled image "busybox"
Normal Created 21s kubelet, ubuntu-231 Created container
Normal Started 21s kubelet, ubuntu-231 Started container
Normal Pulling 18s kubelet, ubuntu-231 pulling image "nginx"
Normal Pulled 3s kubelet, ubuntu-231 Successfully pulled image "nginx"
Normal Created 3s kubelet, ubuntu-231 Created container
Normal Started 3s kubelet, ubuntu-231 Started container
root@ubuntu-232:~/deepak#
Useful link: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
Useful link: https://kubernetes.io/docs/concepts/cluster-administration/federation/
Annotations help to attach extra information that can be consumed by anyone who wants to know more about the object. For example, for a North-South load balancer, an annotation can provide the virtual IP, an SSL redirect option, and so on.
You can use Kubernetes annotations to attach arbitrary non-identifying metadata to objects.
If you want to annotate your setup with data which will help people or tools, but not Kubernetes itself, it’s better to put it into annotation meta data.
Some known annotations are listed below:
Key-value: ingress.kubernetes.io/ssl-redirect=true
Meaning: SSL redirect from F5
Example: https://clouddocs.f5.com/containers/v1/kubernetes/kctlr-ingress.html
Key-value: nginx.ingress.kubernetes.io/ssl-redirect: "false"
Meaning: SSL redirect from NGINX
Example: https://github.com/kubernetes/ingress-nginx/issues/1567
Useful link: https://vsupalov.com/kubernetes-labels-annotations-difference/
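Annotations live under metadata in the object YAML; a small fragment as a sketch (the ingress name is a placeholder):
metadata:
  name: web-ingress
  annotations:
    ingress.kubernetes.io/ssl-redirect: "true"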
<kubernetes admin specific>
kubeadm init
Example: kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.102.53.237 --skip-preflight-checks --kubernetes-version stable-1.6
Description: Run this in order to set up the Kubernetes master
kubeadm reset
Example: kubeadm reset
Description: Run this to revert any changes made to this host by 'kubeadm init' or 'kubeadm join'.
kubeadm join
Example: kubeadm join --token 22bed6.6a36eb15a03e608a 10.102.53.237:6443
Description: Joining a kubeadm initialized cluster
kubeadm token list
Example: kubeadm token list
Description: Lists the bootstrap tokens used by 'kubeadm join'.
<kubernetes control specific>
kubectl apply
Example: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml
Description: Apply a configuration
kubectl create
Example: kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Description: Create a resource (for example POD)
Example for POD creation
root@ubuntu-232:~/deepak/netadmin# kubectl create clusterrole test --verb=get,list,watch --resource=endpoints --dry-run
clusterrole.rbac.authorization.k8s.io "test" created (dry run)
kubectl taint
Example: kubectl taint nodes --all node-role.kubernetes.io/master-
Description: Modify nodes property
kubectl get
Example: kubectl get all --all-namespaces, kubectl get pods, kubectl get all --namespace=kube-system
kubectl get services
Description: Get resource info (for example, pod)
Example: kubectl get secrets default-token-qgzth -o yaml
Description: Display secret info for default-token-qgzth in yaml form
Example: kubectl get pods --show-labels
Description: Shows all labels
kubectl describe
Example: kubectl describe pods guids-2617315942-nw6l1
Description: Get detailed info of resource (for example, pod)
kubectl run
Example: kubectl run guids --image=alexellis2/guid-service:latest --port 9010
kubectl run -i --tty busybox --image=busybox --restart=Never -- sh
Description: Create and run a particular image
kubectl logs
Example: kubectl logs guids-2617315942-nw6l1 | grep IP
Description: Print the logs for a container in a pod
kubectl scale
Example: kubectl scale deployment/guids --replicas=2
Description: Set a new size for a Deployment, ReplicaSet, Replication Controller, or Job.
kubectl delete
Example: kubectl delete pods ub14
Description: Delete resources
kubectl version: To know the version of the running Kubernetes client and server
kubectl cluster-info
Example: kubectl cluster-info
Description: Provides cluster info
cluster-info
root@master:/var/triton# kubectl cluster-info
Kubernetes master is running at https://10.102.53.236:6443
KubeDNS is running at https://10.102.53.236:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
kubectl explain
Example: kubectl explain pods
Description: Man page for pods
Example: kubectl explain pods.spec.containers
Description: Man page for pods->spec->containers field
kubectl explain services
root@master:/var/triton# kubectl explain services
DESCRIPTION:
Service is a named abstraction of software service (for example, mysql) consisting of local port (for example 3306) that the proxy listens on, and the selector that determines which pods will answer requests sent through the proxy.
FIELDS:
kind <string>
Kind is a string value representing the REST resource this object
represents. Servers may infer this from the endpoint the client submits
requests to. Cannot be updated. In CamelCase. More info:
https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
metadata <Object>
Standard object's metadata. More info:
https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
spec <Object>
Spec defines the behavior of a service. https://git.k8s.io/community/
contributors/devel/api-conventions.md#spec-and-status/
status <Object>
Most recently observed status of the service. Populated by the system.
Read-only. More info: https://git.k8s.io/community/contributors/devel/
api-conventions.md#spec-and-status/
apiVersion <string>
APIVersion defines the versioned schema of this representation of an
object. Servers should convert recognized schemas to the latest internal
value, and may reject unrecognized values. More info:
https://git.k8s.io/community/contributors/devel/api-conventions.md#resources
Explain man page for an entry within yaml
(qwiklabs-gcp-34c77ba3a9ccae55)$ kubectl explain pods.spec.containers | more
RESOURCE: containers <[]Object>
DESCRIPTION:
List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. Cannot be updated.
A single application container that you want to run within a pod.
Method to know flannel subnet
root@master:/var/triton/sslcert# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
The kubectl create secret command is used to create a TLS secret from a certificate/key pair.
Steps:-
Ensure that cert key pair is available
Use kubectl create secret command for creating secret object
Example way to generate self signed certificate and key pair
openssl genrsa -out ingress.key 2048
openssl req -new -sha256 -key ingress.key -out csr.csr
ls
ls csr.csr
openssl req -x509 -sha256 -days 365 -key ingress.key -in csr.csr -out ingress.crt
ls ingress.crt
Creation of kubernetes secret
kubectl create secret tls web-ingress-secret --cert=ingress.crt --key=ingress.key
root@ubuntu-232:~/cfssl# kubectl describe secret web-ingress-secret
Name: web-ingress-secret
Namespace: default
Labels: <none>
Annotations: <none>
Type: kubernetes.io/tls
Data
====
tls.crt: 1367 bytes
tls.key: 1679 bytes
root@ubuntu-232:~/cfssl#
Useful link: https://kubernetes.io/docs/concepts/configuration/secret/
Example YAML to restrict cpu and memory
(qwiklabs-gcp-34c77ba3a9ccae55)$ cat pods/monolith.yaml | more
apiVersion: v1
kind: Pod
metadata:
  name: monolith
  labels:
    app: monolith
spec:
  containers:
  - name: monolith
    image: kelseyhightower/monolith:1.0.0
    args:
    - "-http=0.0.0.0:80"
    - "-health=0.0.0.0:81"
    - "-secret=secret"
    ports:
    - name: http
      containerPort: 80
    - name: health
      containerPort: 81
    resources:
      limits:
        cpu: 0.2
        memory: "10Mi"
(qwiklabs-gcp-34c77ba3a9ccae55)$ kubectl create -f pods/monolith.yaml
pod "monolith" created
google552496_student@cloudshell:~/orchestrate-with-kubernetes/kubernetes (qwiklabs-gcp-34c77ba3a9ccae55)$ kubectl get pods
NAME READY STATUS RESTARTS AGE
monolith 1/1 Running 0 8s
google552496_student@cloudshell:~/orchestrate-with-kubernetes/kubernetes (qwiklabs-gcp-34c77ba3a9ccae55)$ kubectl describe pods monolith
Name: monolith
Namespace: default
Node: gke-bootcamp-default-pool-aa5ecaae-kf4g/10.128.0.5
Start Time: Sun, 17 Jun 2018 05:49:24 +0530
Labels: app=monolith
Annotations: <none>
Status: Running
IP: 10.48.1.8
Containers:
monolith:
Container ID: docker://46e05b992714455d7028b8142191905912fe090039d340150c7a1f7b29d83d7a
Image: kelseyhightower/monolith:1.0.0
Image ID: docker-pullable://kelseyhightower/monolith@sha256:72c3f41b6b01c21d9fdd2f45a89c6e5d59b8299b52d7dd0c9491745e73db3a35
Ports: 80/TCP, 81/TCP
Args:
-http=0.0.0.0:80
-health=0.0.0.0:81
-secret=secret
State: Running
Started: Sun, 17 Jun 2018 05:49:27 +0530
Ready: True
Restart Count: 0
Limits:
cpu: 200m
memory: 10Mi
Requests:
cpu: 200m
memory: 10Mi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-ppndh (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-ppndh:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-ppndh
Optional: false
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 33s default-scheduler Successfully assigned monolith to gke-bootcamp-default-pool-aa5ecaae-kf4g
Normal SuccessfulMountVolume 33s kubelet, gke-bootcamp-default-pool-aa5ecaae-kf4g MountVolume.SetUp succeeded for volume "default-token-ppndh"
Normal Pulling 33s kubelet, gke-bootcamp-default-pool-aa5ecaae-kf4g pulling image "kelseyhightower/monolith:1.0.0"
Normal Pulled 31s kubelet, gke-bootcamp-default-pool-aa5ecaae-kf4g Successfully pulled image "kelseyhightower/monolith:1.0.0"
Normal Created 31s kubelet, gke-bootcamp-default-pool-aa5ecaae-kf4g Created container
Normal Started 30s kubelet, gke-bootcamp-default-pool-aa5ecaae-kf4g Started container
google552496_student@cloudshell:~/orchestrate-with-kubernetes/kubernetes (qwiklabs-gcp-34c77ba3a9ccae55)$
Kubernetes provides an API to edit the labels of a running pod.
google552496_student@cloudshell:~/orchestrate-with-kubernetes/kubernetes (qwiklabs-gcp-34c77ba3a9ccae55)$ kubectl get pods -l "app=monolith"
NAME READY STATUS RESTARTS AGE
healthy-monolith 1/1 Running 0 29m
monolith 1/1 Running 0 1h
secure-monolith 2/2 Running 0 17m
google552496_student@cloudshell:~/orchestrate-with-kubernetes/kubernetes (qwiklabs-gcp-34c77ba3a9ccae55)$ kubectl get pods -l "app=monolith,secure=enabled"
No resources found.
google552496_student@cloudshell:~/orchestrate-with-kubernetes/kubernetes (qwiklabs-gcp-34c77ba3a9ccae55)$ kubectl label pods secure-monolith 'secure=enabled'
pod "secure-monolith" labeled
google552496_student@cloudshell:~/orchestrate-with-kubernetes/kubernetes (qwiklabs-gcp-34c77ba3a9ccae55)$ kubectl get pods -l "app=monolith,secure=enabled"
NAME READY STATUS RESTARTS AGE
secure-monolith 2/2 Running 0 18m
google552496_student@cloudshell:~/orchestrate-with-kubernetes/kubernetes (qwiklabs-gcp-34c77ba3a9ccae55)$
Docker containers can be terminated at any time, due to an auto-scaling policy, pod or deployment deletion, or while rolling out an update. In most such cases, you will probably want to gracefully shut down the application running inside the container.
When a pod should be terminated:
A SIGTERM signal is sent to the main process (PID 1) in each container, and a “grace period” countdown starts (defaults to 30 seconds - see below to change it).
Upon receiving the SIGTERM, each container should start a graceful shutdown of the running application and exit.
If a container doesn’t terminate within the grace period, a SIGKILL signal will be sent and the container violently terminated.
There’re some circumstances where a SIGTERM violently kill the application, vanishing all your efforts to gracefully shutdown it. Nginx, for example, quickly exit on SIGTERM, while you should run /usr/sbin/nginx -s quit to gracefully terminate it.
In such cases, you can use the preStop hook. According to the Kubernetes doc, PreStop works as follow:
The preStop hook is configured at the container level and allows you to run a custom command before the SIGTERM is sent (please note that the termination grace period countdown actually starts before the preStop hook is invoked, not once the SIGTERM signal is sent).
The following example, taken from the Kubernetes doc, shows how to configure a preStop command.
preStop for container
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        lifecycle:
          preStop:
            exec:
              # SIGTERM triggers a quick exit; gracefully terminate instead
              command: ["/usr/sbin/nginx","-s","quit"]
Useful link: https://pracucci.com/graceful-shutdown-of-kubernetes-pods.html
https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/
https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods
Health checks can be performed on each container in a pod. Readiness probes indicate when a pod is "ready" to serve traffic. Liveness probes indicate whether a container is "alive." If a liveness probe fails multiple times, the container is restarted. Liveness probes that continue to fail cause a pod to enter a crash loop. If a readiness check fails, the container is marked as not ready and is removed from any load balancers.
Health check of a container
google552496_student@cloudshell:~/orchestrate-with-kubernetes/kubernetes (qwiklabs-gcp-34c77ba3a9ccae55)$ cat pods/healthy-monolith.yaml | more
apiVersion: v1
kind: Pod
metadata:
  name: "healthy-monolith"
  labels:
    app: monolith
spec:
  containers:
  - name: monolith
    image: kelseyhightower/monolith:1.0.0
    ports:
    - name: http
      containerPort: 80
    - name: health
      containerPort: 81
    resources:
      limits:
        cpu: 0.2
        memory: "10Mi"
    livenessProbe:
      httpGet:
        path: /healthz
        port: 81
        scheme: HTTP
      initialDelaySeconds: 5
      periodSeconds: 15
      timeoutSeconds: 5
    readinessProbe:
      httpGet:
        path: /readiness
        port: 81
        scheme: HTTP
      initialDelaySeconds: 5
      timeoutSeconds: 1
google552496_student@cloudshell:~/orchestrate-with-kubernetes/kubernetes (qwiklabs-gcp-34c77ba3a9ccae55)$ kubectl describe pod healthy-monolith
Name: healthy-monolith
Namespace: default
Node: gke-bootcamp-default-pool-aa5ecaae-6nqb/10.128.0.2
Start Time: Sun, 17 Jun 2018 06:21:01 +0530
Labels: app=monolith
Annotations: <none>
Status: Running
IP: 10.48.3.6
Containers:
monolith:
Container ID: docker://af657dce3331e326d09959d802f8b2072f0b04cec1cf8a74f4d915ca858abd09
Image: kelseyhightower/monolith:1.0.0
Image ID: docker-pullable://kelseyhightower/monolith@sha256:72c3f41b6b01c21d9fdd2f45a89c6e5d59b8299b52d7dd0c9491745e73db3a35
Ports: 80/TCP, 81/TCP
State: Running
Started: Sun, 17 Jun 2018 06:21:04 +0530
Ready: True
Restart Count: 0
Limits:
cpu: 200m
memory: 10Mi
Requests:
cpu: 200m
memory: 10Mi
Liveness: http-get http://:81/healthz delay=5s timeout=5s period=15s #success=1 #failure=3
Readiness: http-get http://:81/readiness delay=5s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-ppndh (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-ppndh:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-ppndh
Optional: false
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 16s default-scheduler Successfully assigned healthy-monolith to gke-bootcamp-default-pool-aa5ecaae-6nqb
Normal SuccessfulMountVolume 16s kubelet, gke-bootcamp-default-pool-aa5ecaae-6nqb MountVolume.SetUp succeeded for volume "default-token-ppndh"
Normal Pulling 15s kubelet, gke-bootcamp-default-pool-aa5ecaae-6nqb pulling image "kelseyhightower/monolith:1.0.0"
Normal Pulled 14s kubelet, gke-bootcamp-default-pool-aa5ecaae-6nqb Successfully pulled image "kelseyhightower/monolith:1.0.0"
Normal Created 13s kubelet, gke-bootcamp-default-pool-aa5ecaae-6nqb Created container
Normal Started 13s kubelet, gke-bootcamp-default-pool-aa5ecaae-6nqb Started container
google552496_student@cloudshell:~/orchestrate-with-kubernetes/kubernetes (qwiklabs-gcp-34c77ba3a9ccae55)$ kubectl get pod healthy-monolith
NAME READY STATUS RESTARTS AGE
healthy-monolith 1/1 Running 0 1m
google552496_student@cloudshell:~/orchestrate-with-kubernetes/kubernetes (qwiklabs-gcp-34c77ba3a9ccae55)$
Configmap creation and apply to yaml
google552496_student@cloudshell:~/orchestrate-with-kubernetes/kubernetes (qwiklabs-gcp-34c77ba3a9ccae55)$ kubectl get configmap
No resources found.
google552496_student@cloudshell:~/orchestrate-with-kubernetes/kubernetes (qwiklabs-gcp-34c77ba3a9ccae55)$ cat nginx/proxy.conf
server {
listen 443;
ssl on;
ssl_certificate /etc/tls/cert.pem;
ssl_certificate_key /etc/tls/key.pem;
location / {
proxy_pass http://127.0.0.1:80;
}
}
google552496_student@cloudshell:~/orchestrate-with-kubernetes/kubernetes (qwiklabs-gcp-34c77ba3a9ccae55)$ kubectl create configmap nginx-proxy-conf --from-file nginx/proxy.conf
configmap "nginx-proxy-conf" created
google552496_student@cloudshell:~/orchestrate-with-kubernetes/kubernetes (qwiklabs-gcp-34c77ba3a9ccae55)$ kubectl get configmap
NAME DATA AGE
nginx-proxy-conf 1 5s
google552496_student@cloudshell:~/orchestrate-with-kubernetes/kubernetes (qwiklabs-gcp-34c77ba3a9ccae55)$ kubectl describe configmap
Name: nginx-proxy-conf
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
proxy.conf:
----
server {
listen 443;
ssl on;
ssl_certificate /etc/tls/cert.pem;
ssl_certificate_key /etc/tls/key.pem;
location / {
proxy_pass http://127.0.0.1:80;
}
}
Events: <none>
google552496_student@cloudshell:~/orchestrate-with-kubernetes/kubernetes (qwiklabs-gcp-34c77ba3a9ccae55)$ cat pods/secure-monolith.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "secure-monolith"
  labels:
    app: monolith
spec:
  containers:
  - name: nginx
    image: "nginx:1.9.14"
    lifecycle:
      preStop:
        exec:
          command: ["/usr/sbin/nginx","-s","quit"]
    volumeMounts:
    - name: "nginx-proxy-conf"
      mountPath: "/etc/nginx/conf.d"
    - name: "tls-certs"
      mountPath: "/etc/tls"
  - name: monolith
    image: "kelseyhightower/monolith:1.0.0"
    ports:
    - name: http
      containerPort: 80
    - name: health
      containerPort: 81
    resources:
      limits:
        cpu: 0.2
        memory: "10Mi"
    livenessProbe:
      httpGet:
        path: /healthz
        port: 81
        scheme: HTTP
      initialDelaySeconds: 5
      periodSeconds: 15
      timeoutSeconds: 5
    readinessProbe:
      httpGet:
        path: /readiness
        port: 81
        scheme: HTTP
      initialDelaySeconds: 5
      timeoutSeconds: 1
  volumes:
  - name: "tls-certs"
    secret:
      secretName: "tls-certs"
  - name: "nginx-proxy-conf"
    configMap:
      name: "nginx-proxy-conf"
      items:
      - key: "proxy.conf"
        path: "proxy.conf"
google552496_student@cloudshell:~/orchestrate-with-kubernetes/kubernetes (qwiklabs-gcp-34c77ba3a9ccae55)$
RBAC rule for configmap access via kubeapiserver
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: configmap-updater
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["my-configmap"]
  verbs: ["update", "get"]
Useful link: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
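A Role grants nothing until it is bound to a subject; a minimal RoleBinding sketch that ties the Role above to the default service account (the binding name is a placeholder):
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: configmap-updater-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: Role
  name: configmap-updater
  apiGroup: rbac.authorization.k8s.io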
Useful links: https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/
https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/
https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
You can use CronJobs to run jobs on a time-based schedule. These automated jobs run like cron tasks on a Linux or UNIX system. The CronJob kind expects the schedule frequency. It also provides a deadline (startingDeadlineSeconds) that sets the time by which Kubernetes must schedule the job; if this deadline is missed, Kubernetes does not try to schedule that run.
If Kubernetes misses the schedule 100 times in a row, it stops scheduling the job and reports an error.
Reference: https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/
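A minimal CronJob sketch in the spirit of the referenced task page (runs every minute; startingDeadlineSeconds implements the deadline mentioned above; names and image are placeholders):
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  startingDeadlineSeconds: 100
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args: ["/bin/sh", "-c", "date; echo Hello from Kubernetes"]
          restartPolicy: OnFailure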
Useful links: https://blog.openshift.com/kubernetes-deep-dive-api-server-part-1/
https://github.com/coreos/coreos-kubernetes/issues/215
Python example to get list of pods
root@ip-10-0-0-47:/etc/kubernetes# cat test.py
from kubernetes import client, config

# Configs can be set in Configuration class directly or using helper utility
config.load_kube_config()

v1 = client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
    print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
root@ip-10-0-0-47:/etc/kubernetes# python test.py
Listing pods with their IPs:
10.0.0.25 default cpx-1
10.244.0.54 default web-backend-fmjf6
10.244.1.97 default web-backend-k9m8r
10.244.1.96 default web-frontend-fklr4
10.244.0.53 default web-frontend-nwclt
10.0.0.47 kube-system etcd-ip-10-0-0-47
10.0.0.47 kube-system kube-apiserver-ip-10-0-0-47
10.0.0.47 kube-system kube-controller-manager-ip-10-0-0-47
10.244.0.5 kube-system kube-dns-6f4fd4bdf-cf4rl
10.0.0.47 kube-system kube-flannel-ds-qztws
10.0.0.25 kube-system kube-flannel-ds-rn9z6
10.0.0.47 kube-system kube-proxy-7xv8g
10.0.0.25 kube-system kube-proxy-b4rbq
10.0.0.47 kube-system kube-scheduler-ip-10-0-0-47
root@ip-10-0-0-47:/etc/kubernetes#
Useful link:https://github.com/kubernetes-client/python
hostport configuration at port 8086
root@ubuntu-205:~# cat hostport.yaml
apiVersion: v1
kind: Pod
metadata:
  name: influxdb
spec:
  containers:
  - name: influxdb
    image: influxdb
    ports:
    - containerPort: 8086
      hostPort: 8086
root@ubuntu-205:~# kubectl describe pods influxdb
Name: influxdb
....
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 48s default-scheduler Successfully assigned influxdb to ubuntu-206
Normal SuccessfulMountVolume 48s kubelet, ubuntu-206 MountVolume.SetUp succeeded for volume "default-token-8zqd2"
Normal Pulling 47s kubelet, ubuntu-206 pulling image "influxdb"
Normal Pulled 43s kubelet, ubuntu-206 Successfully pulled image "influxdb"
Normal Created 43s kubelet, ubuntu-206 Created container
Normal Started 43s kubelet, ubuntu-206 Started container
root@ubuntu-205:~# telnet ubuntu-206 8086
Trying 10.106.73.206...
Connected to ubuntu-206.
Escape character is '^]'.
^]
telnet> ^CConnection closed.
root@ubuntu-205:~#
root@ubuntu-205:~# ssh root@ubuntu-206
The authenticity of host 'ubuntu-206 (10.106.73.206)' can't be established.
ECDSA key fingerprint is SHA256:rcZWBSDwgqKGg3O9HMnwhXaL8oJKSW2jbTQCtr12vlo.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ubuntu-206' (ECDSA) to the list of known hosts.
root@ubuntu-206's password:
Welcome to Ubuntu 16.04 LTS (GNU/Linux 4.4.0-21-generic x86_64)
* Documentation: https://help.ubuntu.com/
141 packages can be updated.
1 update is a security update.
*** System restart required ***
Last login: Tue May 22 11:02:24 2018 from 10.106.73.205
root@ubuntu-206:~# iptables -t nat -L -n | grep 8086
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8086 to:192.168.175.76:8086
MASQUERADE tcp -- 127.0.0.1 192.168.175.76 tcp dpt:8086
The hostPort is opened only on the node where the pod is running. So, in the example below, port 5080 is opened on node3.
Example showing how hostport works
root@ubuntu-232:~/deepak# cat cpx_ingress_test.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: cpx-ingress
spec:
  replicas: 1
  selector:
    app: cpx-ingress-device
  template:
    metadata:
      name: cpx-ingress
      annotations:
        NETSCALER_AS_APP: "True"
      labels:
        app: cpx-ingress-device
    spec:
      containers:
      - name: cpx-ingress
        image: "cpx:12.0-41.16"
        securityContext:
          privileged: true
        env:
        - name: "EULA"
          value: "yes"
        - name: "NS_MGMT_SERVER"
          value: "10.217.212.226"
        - name: "NS_MGMT_FINGER_PRINT"
          value: "74:EA:04:90:2C:FA:BF:7A:31:C9:52:64:D3:9C:BC:D3:08:9F:9A:04"
        - name: "NS_ROUTABLE"
          value: "FALSE"
        - name: "NS_HTTP_PORT"
          value: "5080"
        - name: "NS_HTTPS_PORT"
          value: "5443"
        - name: "NS_LB_ROLE"
          value: "SERVER"
        - name: "KUBERNETES_TASK_ID"
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: "HOST"
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        ports:
        - containerPort: 80
          hostPort: 5080
        - containerPort: 443
          hostPort: 5443
        - containerPort: 5080
          hostPort: 80
        - containerPort: 5443
          hostPort: 443
        imagePullPolicy: Always
      nodeSelector:
        node-role: ingress
root@ubuntu:~/demo_yaml# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
catalogue-7f55d7b89c-4bf9d 1/1 Running 1 21h 10.244.1.57 node1
catalogue-7f55d7b89c-9sg9x 1/1 Running 0 22h 10.244.0.19 ubuntu
catalogue-7f55d7b89c-c7wb7 1/1 Running 1 21h 10.244.1.60 node1
catalogue-7f55d7b89c-fndtb 1/1 Running 0 22h 10.244.0.18 ubuntu
cpx-5mqg2 1/1 Running 0 22h 10.106.76.232 ubuntu
cpx-77vjh 1/1 Running 3 22h 10.106.76.235 node1
cpx-ingress 1/1 Running 0 3m 10.244.2.83 node3
cpx-klzf2 1/1 Running 4 22h 10.106.76.237 node3
cpx-nnc5p 1/1 Running 5 22h 10.106.76.236 node2
frontend-bcc4484f6-4vphb 1/1 Running 1 21h 10.244.1.58 node1
frontend-bcc4484f6-cnth5 1/1 Running 0 22h 10.244.0.20 ubuntu
frontend-bcc4484f6-wgwq5 1/1 Running 1 21h 10.244.1.59 node1
frontend-bcc4484f6-wnr8l 1/1 Running 0 22h 10.244.0.17 ubuntu
nginx-qds56 1/1 Running 0 17h 10.244.2.75 node3
nginx-wtrgm 1/1 Running 0 17h 10.244.1.71 node1
root@ubuntu:~/demo_yaml#
root@ubuntu:~/demo_yaml# kubectl describe nodes node3
Name: node3
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=node3
node-role=ingress
Annotations: flannel.alpha.coreos.com/backend-data={"VtepMAC":"b2:d7:d9:0f:42:f7"}
flannel.alpha.coreos.com/backend-type=vxlan
flannel.alpha.coreos.com/kube-subnet-manager=true
flannel.alpha.coreos.com/public-ip=10.106.76.237
kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp: Tue, 24 Jul 2018 20:14:56 +0530
Taints: <none>
Unschedulable: false
root@node3:~# iptables -t nat --list-rules | grep 5080
-A CNI-DN-ab75111bf1b02b4e69ec3 -p tcp -m tcp --dport 5080 -j DNAT --to-destination 10.244.2.83:80
-A CNI-DN-ab75111bf1b02b4e69ec3 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.244.2.83:5080
-A CNI-SN-ab75111bf1b02b4e69ec3 -s 127.0.0.1/32 -d 10.244.2.83/32 -p tcp -m tcp --dport 5080 -j MASQUERADE
Use curl and provide the client cert/key info, following the steps below.
Collect client cert and key data
Collect client cert info from the kube config file
root@master:/var/triton/wkg/tryout# cat /etc/kubernetes/admin.conf | grep client-key-data:
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBcmdobWlUNHhyTGJrc1ord1BpWWpOL0VqeDM1d29sQnhuRXNXRUlBTFFWaThqWXU3CndLaXRFa0xmWllmZ2w1cE5iQWY1b01CbW1TbHhaVWI1OENqUE50UWNRYllVc3NEQ3ZrRzY5aXVVSDZBeUJtS2QKbzVPWFY4UStTMkI2UTN6R3E1UVg2MGRuQ3QwWUtKaTZWRnorRW94UHd2eVE4b0NTM00xSEhtVTdTaVU2N1dVSApCUVJlSy9PQlJqRnp1RVFUNXNMOEdDTk1lUVp4WmVrc2tnZmczN0YwMGdEV0FQODRRcm9HWGVDdWNrbWUwSUpWCjl5MDdWSDJuVnRpK1Q4ZkI4cG5QaGo2cmR5Y0F0N0k3bm9jZ1JpWFFlc01ZN0hqYWxBbEVESlhRNEIzenpNS3cKTnZiN0ZSRjVUbTBTOFA5WEhJK0VhaHViN2xoNkk0cU5KWEdoZVFJREFRQUJBb0lCQVFDYS9qSkxvbzlkRWRubApjNkhrQjlVdjJsd1NMTUtsWEYyQ1k4RDMyd0dySmh0dk9IWnpaQVlYa0dVaktIdFdxWDZ3YXYzZ005cHNKK09zCjNpUjB4Zk9lRWhSRVZhUmplcGMyR0pZbzdiRFM1Ym9IdzhZL1M0L3JBNFN6WHU0a3NyakJVSGhvKzBPREFsdWsKdERpbUw5ZTdyeWpPTUYvckNhVkNiclFiRnU5UjNSUjgrNmxGT1JmVElhTVV4aks3Nmp3ZFZuNm9ROXZWYnZlSQpmK1JLaVh3Y0VhOE9iWmJiVnA5TFJoL1o0SG1xMHNsMFlOWnRMb3dwcGh2MU9ubnkwL3cxNVNvekJ5alFrbDQ2CkxCWlBZa2tEK24xUEVkVXFDTzA3c095MUpkYXkrRnVMOWNOdUMxZmpHdE1GYmhxSWpyTDhEcXJYUENNbWxFYkwKMFJlUXdpNnhBb0dCQU5UNU5iTExmT0hJZ1N6Zk5OcWVJSEhPQ2ljNjNUMGFEcnBWUmt4YnBXYWFxajUyZHp2Swo0SCtTMzJGcWdoUzkrUEhLdktENWRDM0txYSs1aWNhVUlwWU5XclcxT3IwczdjU2ZwM3JaMHJ3TS9rN2t5Q3poCk04elVTT2dBWUNKUXBOTXQzMFNGQStwNHZVZ21tV0RVQlpzZGorVmZGUXdLRGwxTU0wUGNWT1VOQW9HQkFORXgKTjBuZXFwdnRjQ3U3cmc4MFdzejQxQlBkMGRWOHM1UC9IZFNpdzhRVFU1TytvT1FaVlJNcG9YSUllbVdROGJIbwphazJDM0hrWXJQMS93eDlYakJybWJkZXJkTVJMZHNvUU1zbGloNERGVlpiTWRvTEhscDZsM21KMzhpd3d6TnZxCmxNN1hweUpSekZSWkdXSzlrY0dyRmhta1gvWUd2VzdqOG1vWkpLc2RBb0dBQ1Azb25YN242K1I2UjdtNDBvNGcKa28xL2NqNlMvclJ0OE1JTzhNUmh6RjQxVitQS2p2UzIyOHdJc0dVOXpzQmlsVnJZOGZiMlI4U3B1MmlhLzQ1YgprM2hHM3lzaXFzQU4zZUpid04wWGY2Y1F5YVh4S2F2c2N2WjNpWXdTZ3dCaXBTUG5yRTN0WjJYbm4vYzVQSlJYCkZFQ0FSYy9vNUpROEhRWk5sOHppckxVQ2dZQURDRW1hNG9WcW1UaUZDY1Z1SnY0aDlvRnNnRXlvWVpSZzB0UGwKM3k0alMzeHNxZGkvTmJiTC9sQit6S3lwaUQ1WXE5dk9uOVQrVkdNOWtYcU1tOEpHS1l6eUVXUXg3RDRlazdtSQp2Y3JsRFBjK3Bsd1piVGM3dVgvTndadGJGS0lEbGhUdUlxWWpremY1Q1FtYkt0QlFGR0RQb2xoWndxTkFWa0dQClZDbjU2UUtCZ0FINks5cXNreGRsQnlSSmo0aUFjVy85UGZEaFRsYWVtWVRDNFFHUWVkSFFveGFFSkJLU3ZYTVoKbS9GQWdLUkVEbjhibzdXYjdaZFJVdS9wSmMzRUdkOERsaFZBUzVNdlFGcHB3SkhXM29Ra3oyNzBtUHJ1YjBYRApVRjVGWUNCeEZHend1YVF5RWxxRG5jdFo0MFZXNHdUcFE2TjNLWFZHbUEzT0NndXVQOTBOCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
root@master:/var/triton/wkg/tryout# cat /etc/kubernetes/admin.conf | grep client-certificate-data:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJQktDbStsZ2huM0F3RFFZSktvWklodmNOQVFFT
EJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB4TnpFeE16QXhPVE16TXpkYUZ3MHhPREV4TXpBeE9UTXpNemxhTURReApGekFWQmdOVkJBb1REbk4
1YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQ
XJnaG1pVDR4ckxia3NaK3cKUGlZak4vRWp4MzV3b2xCeG5Fc1dFSUFMUVZpOGpZdTd3S2l0RWtMZlpZZmdsNXBOYkFmNW9NQm1tU2x4WlViNQo4Q2pQTnRRY1FiWVVzc0RDdmtHNjlpdVVINkF5Qm1LZG81T1hWOFErUzJCNlEzekdxNVFYNjBkbkN0MFlLSmk2ClZGeitFb3hQd3Z5UThvQ1MzTTFISG1VN1NpVTY3V1VIQlFSZUsvT0JSakZ6dUVRVDVzTDhHQ05NZVFaeFpla3MKa2dmZzM3RjAwZ0RXQVA4NFFyb0dYZUN1Y2ttZTBJSlY5eTA3VkgyblZ0aStUOGZCOHBuUGhqNnJkeWNBdDdJNwpub2NnUmlYUWVzTVk3SGphbEFsRURKWFE0QjN6ek1Ld052YjdGUkY1VG0wUzhQOVhISStFYWh1YjdsaDZJNHFOCkpYR2hlUUlEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFKenpFOVQvYzdocGc2MkdOTCs3V25adkdqSmd4dFRQM0ptTQpqbmlVTUMzdmlub2w5QWxwb3lLV1Z0bDNFdklQYldoazc0MWlLTThESEFxbVpJQWZla1pnUTc5clh6OGJ2aEZVCkhTNXI4V2F5MnU3Q1hkZVp1cG1pbFNNSHJWbEVhZ3lRNHBnc2dSOXdyZjlwRml6WS9rRUVWamNHMzZXQWd3ODUKMitpTkJRdW1sSTBvaWJ6QldZYStPbnM1M2NjUmFCODU0d3BFcFZWWE96a1lZMm1sRmpMVHV0Q0hDZDlkTGpkbwppZ1hjRTAwM2xkd2Z2TmNZWGRveVdBN1RyNHBQQ0lwZUdvSlBURWVlMEhIbXoxV095aU1SU1VRS3Z1dytSL0trCklKbnB5bHZDcENKV2psK0NYR2duM0M3aHBrN3FMMXVDcW1vTTlwQWVHL2tVdDlRbTVRUT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
Convert cert-data and key-data in the PEM format
Kubernetes secret data is stored in base64 format. Please use https://www.base64decode.org to decode it. Store the certificate output in tls.crt and the key output in tls.key.
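Alternatively, the data can be decoded locally with the base64 utility; a sketch assuming the admin.conf layout shown above:
grep client-certificate-data /etc/kubernetes/admin.conf | awk '{print $2}' | base64 -d > tls.crt
grep client-key-data /etc/kubernetes/admin.conf | awk '{print $2}' | base64 -d > tls.key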
Convert client-certificate-data and client-key-data in PEM format
root@dkub:/etc/secret-volume# cat tls.key
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEArghmiT4xrLbksZ+wPiYjN/Ejx35wolBxnEsWEIALQVi8jYu7
wKitEkLfZYfgl5pNbAf5oMBmmSlxZUb58CjPNtQcQbYUssDCvkG69iuUH6AyBmKd
o5OXV8Q+S2B6Q3zGq5QX60dnCt0YKJi6VFz+EoxPwvyQ8oCS3M1HHmU7SiU67WUH
BQReK/OBRjFzuEQT5sL8GCNMeQZxZekskgfg37F00gDWAP84QroGXeCuckme0IJV
9y07VH2nVti+T8fB8pnPhj6rdycAt7I7nocgRiXQesMY7HjalAlEDJXQ4B3zzMKw
Nvb7FRF5Tm0S8P9XHI+Eahub7lh6I4qNJXGheQIDAQABAoIBAQCa/jJLoo9dEdnl
c6HkB9Uv2lwSLMKlXF2CY8D32wGrJhtvOHZzZAYXkGUjKHtWqX6wav3gM9psJ+Os
3iR0xfOeEhREVaRjepc2GJYo7bDS5boHw8Y/S4/rA4SzXu4ksrjBUHho+0ODAluk
tDimL9e7ryjOMF/rCaVCbrQbFu9R3RR8+6lFORfTIaMUxjK76jwdVn6oQ9vVbveI
f+RKiXwcEa8ObZbbVp9LRh/Z4Hmq0sl0YNZtLowpphv1Onny0/w15SozByjQkl46
LBZPYkkD+n1PEdUqCO07sOy1Jday+FuL9cNuC1fjGtMFbhqIjrL8DqrXPCMmlEbL
0ReQwi6xAoGBANT5NbLLfOHIgSzfNNqeIHHOCic63T0aDrpVRkxbpWaaqj52dzvK
4H+S32FqghS9+PHKvKD5dC3Kqa+5icaUIpYNWrW1Or0s7cSfp3rZ0rwM/k7kyCzh
M8zUSOgAYCJQpNMt30SFA+p4vUgmmWDUBZsdj+VfFQwKDl1MM0PcVOUNAoGBANEx
N0neqpvtcCu7rg80Wsz41BPd0dV8s5P/HdSiw8QTU5O+oOQZVRMpoXIIemWQ8bHo
ak2C3HkYrP1/wx9XjBrmbderdMRLdsoQMslih4DFVZbMdoLHlp6l3mJ38iwwzNvq
lM7XpyJRzFRZGWK9kcGrFhmkX/YGvW7j8moZJKsdAoGACP3onX7n6+R6R7m40o4g
ko1/cj6S/rRt8MIO8MRhzF41V+PKjvS228wIsGU9zsBilVrY8fb2R8Spu2ia/45b
k3hG3ysiqsAN3eJbwN0Xf6cQyaXxKavscvZ3iYwSgwBipSPnrE3tZ2Xnn/c5PJRX
FECARc/o5JQ8HQZNl8zirLUCgYADCEma4oVqmTiFCcVuJv4h9oFsgEyoYZRg0tPl
3y4jS3xsqdi/NbbL/lB+zKypiD5Yq9vOn9T+VGM9kXqMm8JGKYzyEWQx7D4ek7mI
vcrlDPc+plwZbTc7uX/NwZtbFKIDlhTuIqYjkzf5CQmbKtBQFGDPolhZwqNAVkGP
VCn56QKBgAH6K9qskxdlByRJj4iAcW/9PfDhTlaemYTC4QGQedHQoxaEJBKSvXMZ
m/FAgKREDn8bo7Wb7ZdRUu/pJc3EGd8DlhVAS5MvQFppwJHW3oQkz270mPrub0XD
UF5FYCBxFGzwuaQyElqDnctZ40VW4wTpQ6N3KXVGmA3OCguuP90N
-----END RSA PRIVATE KEY-----
root@dkub:/etc/secret-volume# cat tls.crt
-----BEGIN CERTIFICATE-----
MIIC8jCCAdqgAwIBAgIIBKCm+lghn3AwDQYJKoZIhvcNAQELBQAwFTETMBEGA1UE
AxMKa3ViZXJuZXRlczAeFw0xNzExMzAxOTMzMzdaFw0xODExMzAxOTMzMzlaMDQx
FzAVBgNVBAoTDnN5c3RlbTptYXN0ZXJzMRkwFwYDVQQDExBrdWJlcm5ldGVzLWFk
bWluMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArghmiT4xrLbksZ+w
PiYjN/Ejx35wolBxnEsWEIALQVi8jYu7wKitEkLfZYfgl5pNbAf5oMBmmSlxZUb5
8CjPNtQcQbYUssDCvkG69iuUH6AyBmKdo5OXV8Q+S2B6Q3zGq5QX60dnCt0YKJi6
VFz+EoxPwvyQ8oCS3M1HHmU7SiU67WUHBQReK/OBRjFzuEQT5sL8GCNMeQZxZeks
kgfg37F00gDWAP84QroGXeCuckme0IJV9y07VH2nVti+T8fB8pnPhj6rdycAt7I7
nocgRiXQesMY7HjalAlEDJXQ4B3zzMKwNvb7FRF5Tm0S8P9XHI+Eahub7lh6I4qN
JXGheQIDAQABoycwJTAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAwwCgYIKwYBBQUH
AwIwDQYJKoZIhvcNAQELBQADggEBAJzzE9T/c7hpg62GNL+7WnZvGjJgxtTP3JmM
jniUMC3vinol9AlpoyKWVtl3EvIPbWhk741iKM8DHAqmZIAfekZgQ79rXz8bvhFU
HS5r8Way2u7CXdeZupmilSMHrVlEagyQ4pgsgR9wrf9pFizY/kEEVjcG36WAgw85
2+iNBQumlI0oibzBWYa+Ons53ccRaB854wpEpVVXOzkYY2mlFjLTutCHCd9dLjdo
igXcE003ldwfvNcYXdoyWA7Tr4pPCIpeGoJPTEee0HHmz1WOyiMRSUQKvuw+R/Kk
IJnpylvCpCJWjl+CXGgn3C7hpk7qL1uCqmoM9pAeG/kUt9Qm5QQ=
-----END CERTIFICATE-----
root@dkub:/etc/secret-volume#
Use curl
Use curl with client auth info
root@dkub:/etc/secret-volume# curl -k -v --cert ./tls.crt --key ./tls.key https://10.102.53.236:6443/api/v1/services | more
* Trying 10.102.53.236...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Connected to 10.102.53.236 (10.102.53.236) port 6443 (#0)
* found 148 certificates in /etc/ssl/certs/ca-certificates.crt
* found 592 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* SSL connection using TLS1.2 / ECDHE_RSA_AES_128_GCM_SHA256
* server certificate verification SKIPPED
* server certificate status verification SKIPPED
* common name: kube-apiserver (matched)
* server certificate expiration date OK
* server certificate activation date OK
* certificate public key: RSA
* certificate version: #3
* subject: CN=kube-apiserver
* start date: Thu, 30 Nov 2017 19:33:37 GMT
* expire date: Fri, 30 Nov 2018 19:33:37 GMT
* issuer: CN=kubernetes
* compression: NULL
* ALPN, server accepted to use http/1.1
> GET /api/v1/services HTTP/1.1
> Host: 10.102.53.236:6443
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/json
< Date: Wed, 11 Apr 2018 22:56:53 GMT
< Transfer-Encoding: chunked
<
{ [1063 bytes data]
100 {
"kind": "ServiceList",
"apiVersion": "v1",
"metadata": {
"selfLink": "/api/v1/services",
"resourceVersion": "13714233"
},
"items": [
{
"metadata": {
"name": "exporter",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/services/exporter",
"uid": "65f4e379-296b-11e8-860a-42d99b9ef27a",
"resourceVersion": "11021231",
"creationTimestamp": "2018-03-16T22:43:08Z",
....
Useful link: https://stackoverflow.com/questions/31305376/using-client-certificate-in-curl-command
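Before using the client certificate with curl, its subject and validity can be inspected with openssl (a quick sanity check, assuming openssl is available in the pod; output shown is illustrative, reconstructed from the certificate above):
root@dkub:/etc/secret-volume# openssl x509 -in tls.crt -noout -subject -dates
subject= /O=system:masters/CN=kubernetes-admin
notBefore=Nov 30 19:33:37 2017 GMT
notAfter=Nov 30 19:33:39 2018 GMT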
Use kubectl get nodes command
List all nodes in kubernetes cluster
root@master:/var/triton# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 10d v1.8.4
minion1 Ready <none> 10d v1.8.4
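For more detail, the wide output format can be used; it typically adds columns such as EXTERNAL-IP, OS-IMAGE and KERNEL-VERSION (a sketch, output omitted):
root@master:/var/triton# kubectl get nodes -o wide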
Use kubectl get services command
Listing running services
root@master:/var/triton# cat nginx.yaml
apiVersion: v1
kind: Service
metadata:
name: my-nginx
labels:
run: my-nginx
spec:
ports:
- port: 80
protocol: TCP
selector:
run: my-nginx
root@master:/var/triton# kubectl create -f nginx.yaml
service "my-nginx" created
root@master:/var/triton# kubectl get pods
NAME READY STATUS RESTARTS AGE
busybox 1/1 Running 0 37s
cpx-gwjpm 1/1 Running 0 17h
cpx-vz245 1/1 Running 0 17h
root@master:/var/triton# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 11d
my-nginx ClusterIP 10.110.78.90 <none> 80/TCP 9s
root@master:/var/triton#
Useful link: https://stackoverflow.com/questions/33970251/how-to-expose-a-kubernetes-service-externally-using-nodeport
A: If you create a pod, it results in the creation of a cni interface on the minion node
Example:
kubectl create namespace sock-shop
kubectl apply -n sock-shop -f "https://github.com/microservices-demo/microservices-demo/blob/master/deploy/kubernetes/complete-demo.yaml?raw=true"
Useful link: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
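To verify, the bridge and veth interfaces can be inspected directly on the node after the pods are scheduled (a sketch, assuming the flannel CNI plugin, which names its bridge cni0):
root@minion1:~# ip addr show cni0
root@minion1:~# ip link show | grep veth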
A. Use the Kubernetes DaemonSet kind
kubectl create -f daemonset.yaml
Example daemonset.yaml file for a DaemonSet pod
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: cpx
spec:
template:
metadata:
name: cpx
labels:
app: cpx-daemon
annotations:
NETSCALER_AS_APP: "True"
spec:
hostNetwork: true
containers:
- name: cpx
image: "store/citrix/netscalercpx:12.0-53.16"
securityContext:
privileged: true
env:
- name: "EULA"
value: "yes"
- name: "NS_NETMODE"
value: "HOST"
- name: "kubernetes_url"
value: "https://10.102.53.236:6443"
- name: "NS_MGMT_SERVER"
value: "10.102.53.243"
- name: "NS_MGMT_FINGER_PRINT"
value: "DC:A2:F9:52:28:E4:FD:D2:C8:91:E9:E5:07:B0:67:11:BE:8E:EE:94"
- name: "NS_ROUTABLE"
value: "FALSE"
- name: "KUBERNETES_TASK_ID"
valueFrom:
fieldRef:
fieldPath: metadata.name
imagePullPolicy: Never
Useful link: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
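A DaemonSet schedules one pod per eligible node; a quick way to confirm this is to list the DaemonSet and its pods with their node placement:
root@master:/var/triton# kubectl get daemonset cpx
root@master:/var/triton# kubectl get pods -l app=cpx-daemon -o wide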
YAML for creating services with command line arguments to container
Example YAML for creating Services with command line arguments to container
apiVersion: v1
kind: Service
metadata:
name: exporter
labels:
app: exporter
spec:
type: ClusterIP
ports:
- port: 8080
protocol: TCP
name: http
selector:
app: exporter
---
apiVersion: v1
kind: ReplicationController
metadata:
name: exporter
spec:
replicas: 1
template:
metadata:
labels:
app: exporter
spec:
containers:
- name: exporter
image: exporter:1.0
ports:
- containerPort: 8080
args: ["--target-nsip", "10.102.53.236:5080", "--port", "8080"]
Useful link: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/
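Note that in a pod spec, command overrides the image's ENTRYPOINT and args overrides its CMD; above, only args is set, so the image's own entrypoint still runs with these arguments. A minimal sketch of the distinction (image and binary names hypothetical):
containers:
- name: demo
  image: example:1.0
  command: ["/bin/app"]        # replaces the image's ENTRYPOINT
  args: ["--port", "8080"]     # replaces the image's CMD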
A. Use a Service kind (backed here by a ReplicationController)
kubectl create -f web_backend.yaml
apiVersion: v1
kind: Service
metadata:
name: web-backend
labels:
app: web-backend
spec:
type: ClusterIP
ports:
- port: 80
protocol: TCP
name: http
selector:
app: web-backend
---
apiVersion: v1
kind: ReplicationController
metadata:
name: web-backend
spec:
replicas: 15
template:
metadata:
labels:
app: web-backend
spec:
containers:
- name: web-backend
image: 10.217.6.101:5000/web-test:latest
ports:
- containerPort: 80
Useful link: https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/
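After creation, the ReplicationController and the Service's endpoints can be checked to confirm that all replicas are registered behind the ClusterIP:
root@master:/var/triton# kubectl get rc web-backend
root@master:/var/triton# kubectl get endpoints web-backend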
Use the same YAML with the kubectl delete command that was used for creation.
root@master:/var/# kubectl delete -f web-backend.yaml
service "web-backend" deleted
Use 'kubectl describe replicationcontroller/<NAME>'
Method to get info about ReplicationController pods
root@master:/var/triton# cat web-backend.yaml
apiVersion: v1
kind: Service
metadata:
name: web-backend
labels:
app: web-backend
spec:
type: ClusterIP
ports:
- port: 80
protocol: TCP
name: http
selector:
app: web-backend
---
apiVersion: v1
kind: ReplicationController
metadata:
name: web-backend
spec:
replicas: 5
template:
metadata:
labels:
app: web-backend
spec:
containers:
- name: web-backend
image: 10.217.6.101:5000/web-test:latest
ports:
- containerPort: 80
root@master:/var/triton# kubectl describe replicationController/web-backend
Name: web-backend
Namespace: default
Selector: app=web-backend
Labels: app=web-backend
Annotations: <none>
Replicas: 5 current / 5 desired
Pods Status: 5 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app=web-backend
Containers:
web-backend:
Image: 10.217.6.101:5000/web-test:latest
Port: 80/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 8m replication-controller Created pod: web-backend-jzknf
Normal SuccessfulCreate 8m replication-controller Created pod: web-backend-b4wjm
Normal SuccessfulCreate 8m replication-controller Created pod: web-backend-6xtgd
Normal SuccessfulCreate 8m replication-controller Created pod: web-backend-82brj
Normal SuccessfulCreate 8m replication-controller Created pod: web-backend-g8fh5
Useful link: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/
Use 'kubectl describe svc'
Get info for a Service object
root@master:/var/triton# kubectl describe svc web-backend
Name: web-backend
Namespace: default
Labels: app=web-backend
Annotations: <none>
Selector: app=web-backend
Type: ClusterIP
IP: 10.105.246.101
Port: http 80/TCP
TargetPort: 80/TCP
Endpoints: 10.244.0.31:80,10.244.0.32:80,10.244.1.53:80 + 2 more...
Session Affinity: None
Events: <none>
root@master:/var/triton# docker ps | grep web-backend
b41dedc76870 10.217.6.101:5000/web-test "/bin/bash /start.sh" 16 seconds ago Up 15 seconds k8s_web-backend_web-backend-vs7qb_default_753e2dca-dec0-11e7-b198-42d99b9ef27a_0
6ad0b6604bd8 10.217.6.101:5000/web-test "/bin/bash /start.sh" 17 seconds ago Up 16 seconds k8s_web-backend_web-backend-gsr5r_default_75408601-dec0-11e7-b198-42d99b9ef27a_0
9610b6fde41e gcr.io/google_containers/pause-amd64:3.0 "/pause" 19 seconds ago Up 18 seconds k8s_POD_web-backend-gsr5r_default_75408601-dec0-11e7-b198-42d99b9ef27a_0
679330e15697 gcr.io/google_containers/pause-amd64:3.0 "/pause" 19 seconds ago Up 18 seconds k8s_POD_web-backend-vs7qb_default_753e2dca-dec0-11e7-b198-42d99b9ef27a_0
root@master:/var/triton# docker exec -it b41dedc76870 bash
root@web-backend-vs7qb:/# ifconfig
eth0 Link encap:Ethernet HWaddr 0a:58:0a:f4:00:20
inet addr:10.244.0.32 Bcast:0.0.0.0 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:9 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:690 (690.0 B) TX bytes:0 (0.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
root@web-backend-vs7qb:/# exit
root@master:/var/triton# curl -k http://10.105.246.101
<html>
<head>
<title> Front End App - v1 </title>
<style>
.outer-container {
border: 1px dotted black;
width: 100%;
height: 100%;
text-align: center;
background-color: light-grey;
}
.inner-container {
border: 1px solid black;
display: inline-block;
position: relative;
}
.video-overlay-frontend {
position: absolute;
left: 10%;
top: 10%;
margin: 10px;
xpadding: 5px 5px;
font-size: 20px;
font-family: Helvetica;
color: rgb(255,255,51);
You should be able to curl the Service on <CLUSTER-IP>:<PORT> (10.105.246.101:80 in the above case) from any node in your cluster.
Useful link: https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/
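Inside the cluster, the Service is also reachable by its DNS name; a sketch using the busybox pod listed earlier (assumes cluster DNS is healthy):
root@master:/var/triton# kubectl exec busybox -- wget -qO- http://web-backend.default.svc.cluster.local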
Use the kubectl get pods command
To check the kube-dns service status
root@master:/var/triton# kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
NAME READY STATUS RESTARTS AGE
kube-dns-545bc4bfd4-g7hlg 3/3 Running 3 11d
Useful link: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
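With kube-dns running, name resolution can be tested from any pod; a quick check using the busybox pod (the approach from the Kubernetes DNS debugging docs):
root@master:/var/triton# kubectl exec busybox -- nslookup kubernetes.default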
kube-apiserver listens on port 6443, so query this port
To get info from kube-apiserver
root@master:/var# curl --insecure https://127.0.0.1:6443/apis/batch/v1
{
"kind": "APIResourceList",
"apiVersion": "v1",
"groupVersion": "batch/v1",
"resources": [
{
"name": "jobs",
"singularName": "",
"namespaced": true,
"kind": "Job",
"verbs": [
"create",
"delete",
"deletecollection",
"get",
"list",
"patch",
"update",
"watch"
],
"categories": [
"all"
]
},
{
"name": "jobs/status",
"singularName": "",
"namespaced": true,
"kind": "Job",
"verbs": [
"get",
"patch",
"update"
]
}
]
}root@master:/var#
root@master:/var/# netstat -atpun | grep 6443
tcp 0 0 10.102.53.236:59288 10.102.53.236:6443 ESTABLISHED 31697/python
tcp 0 0 10.102.53.236:59286 10.102.53.236:6443 ESTABLISHED 31697/python
tcp 0 0 10.102.53.236:36432 10.102.53.236:6443 ESTABLISHED 31697/python
tcp 0 0 10.102.53.236:34802 10.102.53.236:6443 ESTABLISHED 4723/kube-scheduler
tcp 0 0 10.102.53.236:34078 10.102.53.236:6443 ESTABLISHED 31697/python
tcp 0 0 10.102.53.236:34748 10.102.53.236:6443 ESTABLISHED 24777/kubelet
tcp 0 0 10.102.53.236:34812 10.102.53.236:6443 ESTABLISHED 4654/kube-controlle
tcp 0 0 10.102.53.236:34782 10.102.53.236:6443 ESTABLISHED 4654/kube-controlle
tcp 0 0 10.102.53.236:55448 10.102.53.236:6443 ESTABLISHED 31697/python
tcp 0 0 127.0.0.1:37806 127.0.0.1:6443 ESTABLISHED 3149/kube-apiserver
tcp 0 0 10.102.53.236:34750 10.102.53.236:6443 ESTABLISHED 3189/kube-proxy
tcp6 0 0 :::6443 :::* LISTEN 3149/kube-apiserver
tcp6 0 0 10.102.53.236:6443 10.102.53.236:34782 ESTABLISHED 3149/kube-apiserver
tcp6 0 0 10.102.53.236:6443 10.102.53.236:34802 ESTABLISHED 3149/kube-apiserver
tcp6 0 0 10.102.53.236:6443 10.102.53.237:46922 ESTABLISHED 3149/kube-apiserver
tcp6 0 0 10.102.53.236:6443 10.102.53.236:34078 ESTABLISHED 3149/kube-apiserver
tcp6 0 0 10.102.53.236:6443 10.102.53.237:54164 ESTABLISHED 3149/kube-apiserver
tcp6 0 0 10.102.53.236:6443 10.102.53.237:54188 ESTABLISHED 3149/kube-apiserver
tcp6 0 0 10.102.53.236:6443 10.102.53.243:19777 ESTABLISHED 3149/kube-apiserver
tcp6 0 0 10.102.53.236:6443 10.102.53.237:54166 ESTABLISHED 3149/kube-apiserver
tcp6 0 0 10.102.53.236:6443 10.102.53.243:19733 ESTABLISHED 3149/kube-apiserver
tcp6 0 0 10.102.53.236:6443 10.102.53.236:34750 ESTABLISHED 3149/kube-apiserver
tcp6 0 0 10.102.53.236:6443 10.102.53.236:55448 ESTABLISHED 3149/kube-apiserver
tcp6 0 0 10.102.53.236:6443 10.102.53.237:54186 ESTABLISHED 3149/kube-apiserver
tcp6 0 0 10.102.53.236:6443 10.102.53.237:40674 ESTABLISHED 3149/kube-apiserver
tcp6 0 0 10.102.53.236:6443 10.102.53.237:46920 ESTABLISHED 3149/kube-apiserver
tcp6 0 0 10.102.53.236:6443 10.244.0.4:40394 ESTABLISHED 3149/kube-apiserver
tcp6 0 0 10.102.53.236:6443 10.102.53.236:34812 ESTABLISHED 3149/kube-apiserver
tcp6 0 0 10.102.53.236:6443 10.102.53.236:36432 ESTABLISHED 3149/kube-apiserver
tcp6 0 0 10.102.53.236:6443 10.102.53.237:54184 ESTABLISHED 3149/kube-apiserver
tcp6 0 0 10.102.53.236:6443 10.102.53.236:41534 ESTABLISHED 3149/kube-apiserver
tcp6 0 0 127.0.0.1:6443 127.0.0.1:37806 ESTABLISHED 3149/kube-apiserver
tcp6 0 0 10.102.53.236:6443 10.102.53.236:59288 ESTABLISHED 3149/kube-apiserver
tcp6 0 0 10.102.53.236:6443 10.102.53.243:19778 ESTABLISHED 3149/kube-apiserver
tcp6 0 0 10.102.53.236:6443 10.102.53.243:19779 ESTABLISHED 3149/kube-apiserver
tcp6 0 0 10.102.53.236:6443 10.102.53.236:59286 ESTABLISHED 3149/kube-apiserver
tcp6 0 610 10.102.53.236:6443 10.102.53.243:19776 ESTABLISHED 3149/kube-apiserver
tcp6 0 0 10.102.53.236:6443 10.102.53.236:34748 ESTABLISHED 3149/kube-apiserver
Useful link: https://blog.openshift.com/kubernetes-deep-dive-api-server-part-1/
Use 'kubectl describe pods'
Get detailed info for running PODs
root@master:/var/triton# cat web-backend.yaml
apiVersion: v1
kind: Service
metadata:
name: web-backend
labels:
app: web-backend
spec:
type: ClusterIP
ports:
- port: 80
protocol: TCP
name: http
selector:
app: web-backend
---
apiVersion: v1
kind: ReplicationController
metadata:
name: web-backend
spec:
replicas: 3
template:
metadata:
labels:
app: web-backend
spec:
containers:
- name: web-backend
image: 10.217.6.101:5000/web-test:latest
ports:
- containerPort: 80
root@master:/var/triton# kubectl get pods
NAME READY STATUS RESTARTS AGE
busybox 1/1 Running 624 26d
cpx-gwjpm 1/1 Running 0 26d
cpx-vz245 1/1 Running 0 26d
web-backend-5cfdb 1/1 Running 0 25d
web-backend-cg5pf 1/1 Running 0 25d
web-backend-dlkjh 1/1 Running 0 25d
root@master:/var/triton# kubectl describe pods web-backend
Name: web-backend-5cfdb
Namespace: default
Node: minion1/10.102.53.237
Start Time: Wed, 13 Dec 2017 20:24:16 +0530
Labels: app=web-backend
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"default","name":"web-backend","uid":"7dd6d053-e015-11e7-b198-42d99b9ef...
Status: Running
IP: 10.244.1.62
Created By: ReplicationController/web-backend
Controlled By: ReplicationController/web-backend
Containers:
web-backend:
Container ID: docker://4419c5211de82140b9e4febeb78d77a63e5c6f029344fc97e5cb8c7c499e4942
Image: 10.217.6.101:5000/web-test:latest
Image ID: docker-pullable://10.217.6.101:5000/web-test@sha256:a9866688908482a8adbe25937bfdc5bf1f4cb5830e2843d99d3d0ef068ddcd7b
Port: 80/TCP
State: Running
Started: Wed, 13 Dec 2017 20:24:19 +0530
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-qgzth (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-qgzth:
Type: Secret (a volume populated by a Secret)
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
Name: web-backend-dlkjh
Namespace: default
Node: master/10.102.53.236
Start Time: Wed, 13 Dec 2017 20:24:16 +0530
Labels: app=web-backend
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"default","name":"web-backend","uid":"7dd6d053-e015-11e7-b198-42d99b9ef...
Status: Running
IP: 10.244.0.35
Created By: ReplicationController/web-backend
Controlled By: ReplicationController/web-backend
Containers:
web-backend:
Container ID: docker://ee36b7ac2c2ce2ba64afb844216cb63695c9c5f1371e6c57304e7844218ccabf
Image: 10.217.6.101:5000/web-test:latest
Image ID: docker-pullable://10.217.6.101:5000/web-test@sha256:a9866688908482a8adbe25937bfdc5bf1f4cb5830e2843d99d3d0ef068ddcd7b
Port: 80/TCP
State: Running
Started: Wed, 13 Dec 2017 20:24:19 +0530
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-qgzth (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-qgzth:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-qgzth
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
root@master:/var/triton#
root@master:/var/triton# kubectl describe pods cpx
Name: cpx-gwjpm
Namespace: default
Node: minion1/10.102.53.237
Start Time: Tue, 12 Dec 2017 03:31:00 +0530
Labels: app=cpx-daemon
controller-revision-hash=1259225574
pod-template-generation=1
Annotations: NETSCALER_AS_APP=True
kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"DaemonSet","namespace":"default","name":"cpx","uid":"c5c41790-debe-11e7-b198-42d99b9ef27a","apiVersion":"e...
Status: Running
IP: 10.102.53.237
Created By: DaemonSet/cpx
Controlled By: DaemonSet/cpx
Containers:
cpx:
Container ID: docker://96ccb0a1c8751f7368c930df3f841862554ff6a05e72080ef2335d80201cb336
Image: cpx:12.0-53.16
Image ID: docker-pullable://store/citrix/netscalercpx@sha256:e6540ea99c6ed024a69d9c289165d6e97cec8e8f207db1cd9a43007342ec608f
Port: <none>
State: Running
Started: Tue, 12 Dec 2017 03:31:00 +0530
Ready: True
Restart Count: 0
Environment:
EULA: yes
NS_NETMODE: HOST
kubernetes_url: https://10.102.53.236:6443
NS_MGMT_SERVER: 10.102.53.243
NS_MGMT_FINGER_PRINT: DC:A2:F9:52:28:E4:FD:D2:C8:91:E9:E5:07:B0:67:11:BE:8E:EE:94
NS_ROUTABLE: FALSE
KUBERNETES_TASK_ID: cpx-gwjpm (v1:metadata.name)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-qgzth (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-qgzth:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-qgzth
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute
node.alpha.kubernetes.io/unreachable:NoExecute
node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
Events: <none>
Name: cpx-vz245
Namespace: default
Node: master/10.102.53.236
Start Time: Tue, 12 Dec 2017 03:31:00 +0530
Labels: app=cpx-daemon
controller-revision-hash=1259225574
pod-template-generation=1
Annotations: NETSCALER_AS_APP=True
kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"DaemonSet","namespace":"default","name":"cpx","uid":"c5c41790-debe-11e7-b198-42d99b9ef27a","apiVersion":"e...
Status: Running
IP: 10.102.53.236
Created By: DaemonSet/cpx
Controlled By: DaemonSet/cpx
Containers:
cpx:
Container ID: docker://f2529f27a0425427b1296b2ce0191aca8b52d30e5c107a1a888875ac80766e0c
Image: cpx:12.0-53.16
Image ID: docker-pullable://store/citrix/netscalercpx@sha256:e6540ea99c6ed024a69d9c289165d6e97cec8e8f207db1cd9a43007342ec608f
Port: <none>
State: Running
Started: Tue, 12 Dec 2017 03:31:00 +0530
Ready: True
Restart Count: 0
Environment:
EULA: yes
NS_NETMODE: HOST
kubernetes_url: https://10.102.53.236:6443
NS_MGMT_SERVER: 10.102.53.243
NS_MGMT_FINGER_PRINT: DC:A2:F9:52:28:E4:FD:D2:C8:91:E9:E5:07:B0:67:11:BE:8E:EE:94
NS_ROUTABLE: FALSE
KUBERNETES_TASK_ID: cpx-vz245 (v1:metadata.name)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-qgzth (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-qgzth:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-qgzth
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute
node.alpha.kubernetes.io/unreachable:NoExecute
node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
Events: <none>
root@master:/var/triton# cat cpx_daemonset.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: cpx
spec:
template:
metadata:
name: cpx
labels:
app: cpx-daemon
annotations:
NETSCALER_AS_APP: "True"
spec:
hostNetwork: true
containers:
- name: cpx
image: "cpx:12.0-53.16"
securityContext:
privileged: true
env:
- name: "EULA"
value: "yes"
- name: "NS_NETMODE"
value: "HOST"
- name: "kubernetes_url"
value: "https://10.102.53.236:6443"
- name: "NS_MGMT_SERVER"
value: "10.102.53.243"
- name: "NS_MGMT_FINGER_PRINT"
value: "DC:A2:F9:52:28:E4:FD:D2:C8:91:E9:E5:07:B0:67:11:BE:8E:EE:94"
- name: "NS_ROUTABLE"
value: "FALSE"
- name: "KUBERNETES_TASK_ID"
valueFrom:
fieldRef:
fieldPath: metadata.name
imagePullPolicy: Never
root@master:/var/triton#
Use 'kubectl get secrets' to get the token
Example for accessing the kube api-server using a token
$ APISERVER=$(kubectl config view | grep server | cut -f 2- -d ":" | tr -d " ")
$ TOKEN=$(kubectl describe secret $(kubectl get secrets | grep default | cut -f1 -d ' ') | grep -E '^token' | cut -f2 -d':' | tr -d '\t')
$ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "10.0.1.149:443"
    }
  ]
}
Ensure that the container is created with a valid service account
Get the token from the /var/run/secrets/kubernetes.io/serviceaccount/token file
Then run the curl command
$ APISERVER=$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT_HTTPS
$ TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
root@cpx-cic-deepaktest-d8b94867-swzxb:/# curl -k https://$APISERVER/api/v1/services --header "Authorization: Bearer $TOKEN" | more
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{
"kind": "ServiceList",
"apiVersion": "v1",
"metadata": {
"selfLink": "/api/v1/services",
"resourceVersion": "5879968"
},
"items": [
{
"metadata": {
"name": "catalogue",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/services/catalogue",
"uid": "ec980277-64ec-11e9-9069-a27cf4ea5efa",
"resourceVersion": "5776528",
"creationTimestamp": "2019-04-22T10:53:55Z",
"labels": {
"app": "catalogue"
.....
Accessing any API from the master node via port 8080 (kubectl proxy)
root@master:/var/triton# kubectl proxy --address=0.0.0.0 --port=8080 &
[1] 14858
root@master:/var/triton# Starting to serve on [::]:8080
root@master:/var/triton# curl -k http://127.0.0.1:8080/api/v1/services
{
"kind": "ServiceList",
"apiVersion": "v1",
"metadata": {
"selfLink": "/api/v1/services",
"resourceVersion": "3944721"
},
"items": [
{
"metadata": {
"name": "kubernetes",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/services/kubernetes",
"uid": "80a5b531-d605-11e7-b8ac-42d99b9ef27a",
"resourceVersion": "47",
"creationTimestamp": "2017-11-30T19:34:37Z",
"labels": {
"component": "apiserver",
"provider": "kubernetes"
}
},
"spec": {
"ports": [
{
"name": "https",
"protocol": "TCP",
"port": 443,
"targetPort": 6443
}
],
"clusterIP": "10.96.0.1",
"type": "ClusterIP",
"sessionAffinity": "ClientIP",
"sessionAffinityConfig": {
"clientIP": {
"timeoutSeconds": 10800
}
}
},
"status": {
"loadBalancer": {
}
}
},
{
"metadata": {
"name": "my-service",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/services/my-service",
"uid": "6abfde30-f3d0-11e7-b198-42d99b9ef27a",
"resourceVersion": "3940808",
"creationTimestamp": "2018-01-07T17:30:12Z"
},
"spec": {
"ports": [
{
"protocol": "TCP",
"port": 80,
"targetPort": 9376
}
],
"selector": {
"app": "MyApp"
},
"clusterIP": "10.111.62.19",
"type": "ClusterIP",
"sessionAffinity": "None"
},
"status": {
"loadBalancer": {
}
}
},
{
"metadata": {
"name": "web-backend",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/services/web-backend",
"uid": "7dd4fcf4-e015-11e7-b198-42d99b9ef27a",
"resourceVersion": "1333587",
"creationTimestamp": "2017-12-13T14:54:16Z",
"labels": {
"app": "web-backend"
}
},
"spec": {
"ports": [
{
"name": "http",
"protocol": "TCP",
"port": 80,
"targetPort": 80
}
],
"selector": {
"app": "web-backend"
},
"clusterIP": "10.111.161.63",
"type": "ClusterIP",
"sessionAffinity": "None"
},
"status": {
"loadBalancer": {
}
}
},
{
"metadata": {
"name": "kube-dns",
"namespace": "kube-system",
"selfLink": "/api/v1/namespaces/kube-system/services/kube-dns",
"uid": "829db6b0-d605-11e7-b8ac-42d99b9ef27a",
"resourceVersion": "177",
"creationTimestamp": "2017-11-30T19:34:41Z",
"labels": {
"k8s-app": "kube-dns",
"kubernetes.io/cluster-service": "true",
"kubernetes.io/name": "KubeDNS"
}
},
"spec": {
"ports": [
{
"name": "dns",
"protocol": "UDP",
"port": 53,
"targetPort": 53
},
{
"name": "dns-tcp",
"protocol": "TCP",
"port": 53,
"targetPort": 53
}
],
"selector": {
"k8s-app": "kube-dns"
},
"clusterIP": "10.96.0.10",
"type": "ClusterIP",
"sessionAffinity": "None"
},
"status": {
"loadBalancer": {
}
}
}
]
root@master:/var/triton#
Example: create an Ingress in the default namespace
root@master:/var/triton# netstat -atpun | grep 9080
root@master:/var/triton# kubectl get ingress
No resources found.
root@master:/var/triton# kubectl create -f test_ingress.yaml
ingress "test-ingress" created
root@master:/var/triton# kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
test-ingress * 80 2s
root@master:/var/triton# netstat -atpun | grep 9080
root@master:/var/triton# kubectl describe ingress test-ingress
Name: test-ingress
Namespace: default
Address:
Default backend: testsvc:9080 (<none>)
Rules:
Host Path Backends
---- ---- --------
* * testsvc:9080 (<none>)
Annotations:
Events: <none>
root@master:/var/triton# cat test_ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
spec:
backend:
serviceName: testsvc
servicePort: 9080
root@master:/var/triton#
Useful link: https://kubernetes.io/docs/concepts/services-networking/ingress/
Method to create an Ingress with a Service in a custom namespace
root@ubuntu:~/deepak/deepak# kubectl get ing -n sock-shop
NAME HOSTS ADDRESS PORTS AGE
web-ingress sockshop.cpx-lab.org 80 7h
root@ubuntu:~/deepak/deepak# kubectl describe ing -n sock-shop
Name: web-ingress
Namespace: sock-shop
Address:
Default backend: web-test-404:8080 (<none>)
Rules:
Host Path Backends
---- ---- --------
sockshop.cpx-lab.org
/ front-end:80 (192.169.64.2:8079)
Annotations:
Events: <none>
root@ubuntu:~/yaml_file# cat ingress_mpx.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: web-ingress
namespace: sock-shop
annotations:
NETSCALER_HTTP_PORT: "80"
NETSCALER_VIP: "10.102.169.195"
spec:
backend:
serviceName: web-test-404
servicePort: 8080
rules:
- host: sockshop.cpx-lab.org
http:
paths:
- path: /
backend:
serviceName: front-end
servicePort: 80
Useful link: https://kubernetes.io/docs/concepts/services-networking/ingress/
Source the auto-completion script
Sourcing the kubectl auto-completion script
root@master1:~# kubectl
.bash_history .cache/ docker_startup.sh .lesshst personal/ .screenrc .vim/ .vimrc
.bashrc .distlib/ .kube/ .p4enviro .profile .ssh/ .viminfo
root@master1:~# source <(kubectl completion bash)
root@master1:~# kubectl
annotate autoscale config create edit get patch rolling-update set version
api-versions certificate convert delete exec label port-forward rollout taint
apply cluster-info cordon describe explain logs proxy run top
attach completion cp drain expose options replace scale uncordon
root@master1:~#
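To make the completion persistent across shells, it can be appended to .bashrc:
root@master1:~# echo 'source <(kubectl completion bash)' >> ~/.bashrc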
Nginx with external IP
google184094_student@qwiklabs-gcp-4bbc0f5997bc31b5:~/orchestrate-with-kubernetes/kubernetes$ kubectl run nginx --image=nginx:1.10.0
deployment "nginx" created
google184094_student@qwiklabs-gcp-4bbc0f5997bc31b5:~/orchestrate-with-kubernetes/kubernetes$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-1803751077-brgmw 0/1 ContainerCreating 0 22s
google184094_student@qwiklabs-gcp-4bbc0f5997bc31b5:~/orchestrate-with-kubernetes/kubernetes$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.27.240.1 <none> 443/TCP 18m
google184094_student@qwiklabs-gcp-4bbc0f5997bc31b5:~/orchestrate-with-kubernetes/kubernetes$ kubectl expose deployment nginx --port 80 --type LoadBalancer
service "nginx” exposed
google184094_student@qwiklabs-gcp-4bbc0f5997bc31b5:~/orchestrate-with-kubernetes/kubernetes$ kubectl scale deployment nginx --replicas 3
deployment "nginx" scaled
google184094_student@qwiklabs-gcp-4bbc0f5997bc31b5:~/orchestrate-with-kubernetes/kubernetes$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.27.240.1 <none> 443/TCP 18m
nginx LoadBalancer 10.27.247.169 35.194.109.142 80:30830/TCP 4s
google184094_student@qwiklabs-gcp-4bbc0f5997bc31b5:~/orchestrate-with-kubernetes/kubernetes$ curl -k http://35.194.109.142:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style> body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; }</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
google184094_student@qwiklabs-gcp-4bbc0f5997bc31b5:~/orchestrate-with-kubernetes/kubernetes$ kubectl delete deployment nginx
deployment "nginx” deleted
google184094_student@qwiklabs-gcp-4bbc0f5997bc31b5:~/orchestrate-with-kubernetes/kubernetes$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.27.240.1 <none> 443/TCP 30m
nginx LoadBalancer 10.27.247.169 35.194.109.142 80:30830/TCP 12m
google184094_student@qwiklabs-gcp-4bbc0f5997bc31b5:~/orchestrate-with-kubernetes/kubernetes$ kubectl delete services nginx
service "nginx” deleted
google184094_student@qwiklabs-gcp-4bbc0f5997bc31b5:~/orchestrate-with-kubernetes/kubernetes$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.27.240.1 <none> 443/TCP 31m
google184094_student@qwiklabs-gcp-4bbc0f5997bc31b5:~/orchestrate-with-kubernetes/kubernetes$
Handling kubectl connection failure
root@ip-10-0-0-38:/home/ubuntu# kubectl get pods
The connection to the server localhost:8080 was refused - did you specify the right host or port?
root@ip-10-0-0-38:/home/ubuntu# netstat -atpun | grep kube-apiser | grep LISTEN
tcp6 0 0 :::6443 :::* LISTEN 20590/kube-apiserve
root@ip-10-0-0-38:~# export KUBECONFIG=/etc/kubernetes/kubelet.conf
root@ip-10-0-0-38:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-0-38 Ready master 20h v1.9.2
ip-10-0-0-45 Ready <none> 20h v1.9.2
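Pointing KUBECONFIG at kubelet.conf works but grants only the node's identity; the usual permanent fix on the master (per the kubeadm setup instructions) is to copy admin.conf into the default kubeconfig location:
root@ip-10-0-0-38:~# mkdir -p $HOME/.kube
root@ip-10-0-0-38:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@ip-10-0-0-38:~# chown $(id -u):$(id -g) $HOME/.kube/config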
Handling stale pod entries in Kubernetes
root@master:/var/triton# kubectl delete pods --all
pod "cpx-xjkt5" deleted
pod "cpx2-9kkfn" deleted
pod "web-backend-th7kb" deleted
pod "web-backend2-rgn7z" deleted
pod "web-backend2-vjlhr" deleted
root@master:/var/triton# kubectl get pods
NAME READY STATUS RESTARTS AGE
cpx-xjkt5 0/1 Terminating 0 3h
cpx2-9kkfn 0/1 Terminating 0 3h
web-backend-th7kb 1/1 Terminating 0 27d
web-backend2-rgn7z 1/1 Terminating 0 6d
web-backend2-vjlhr 1/1 Terminating 0 6d
root@master:/var/triton# kubectl delete pods --all --force --grace-period=0
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "cpx-xjkt5" deleted
pod "cpx2-9kkfn" deleted
pod "web-backend-th7kb" deleted
pod "web-backend2-rgn7z" deleted
pod "web-backend2-vjlhr" deleted
root@master:/var/triton# kubectl get pods
No resources found.
Useful link: https://kubernetes.io/docs/tasks/run-application/force-delete-stateful-set-pod/
Use the kubectl scale command
'kubectl scale' command for scaling up pods
root@master:/var/triton# kubectl describe service web-frontend
Name: web-frontend
Namespace: default
Labels: app=web-frontend
Annotations: <none>
Selector: app=web-frontend
Type: LoadBalancer
IP: 10.103.92.57
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 30550/TCP
Endpoints: 10.244.0.149:80,10.244.1.14:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
root@master:/var/triton# kubectl scale --replicas=3 -f web-frontend.yaml
replicationcontroller "web-frontend" scaled
error: no scaler has been implemented for {"" "Service"}
root@master:/var/triton# kubectl describe service web-frontend
Name: web-frontend
Namespace: default
Labels: app=web-frontend
Annotations: <none>
Selector: app=web-frontend
Type: LoadBalancer
IP: 10.103.92.57
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 30550/TCP
Endpoints: 10.244.0.149:80,10.244.1.14:80,10.244.1.15:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
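The "no scaler has been implemented for Service" error appears because the YAML contains both a Service and a ReplicationController; scaling the controller by resource type and name avoids it:
root@master:/var/triton# kubectl scale --replicas=3 replicationcontroller/web-frontend
replicationcontroller "web-frontend" scaled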
Use the volumeMounts and volumes options
Mounting a host directory in the container
root@master:/var/triton/wkg# cat triton.yaml
apiVersion: v1
kind: Pod
metadata:
name: triton
labels:
app: triton
spec:
containers:
- name: triton
image: triton:v8_2
env:
- name: "NS_IP"
value: "10.102.53.236"
- name: "NS_PORT"
value: "5080"
volumeMounts:
- mountPath: /var/kube/admin.conf
name: test-volume
volumes:
- name: test-volume
hostPath:
path: /var/triton/wkg/secret/config
type: File
nodeSelector:
disktype: ssd
root@master:/var# kubectl create -f triton.yaml
pod "triton" created
Useful link: https://kubernetes.io/docs/concepts/storage/volumes/
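To verify the mount, the file can be listed from inside the pod; note that the nodeSelector requires a node carrying the disktype=ssd label before the pod can be scheduled (a sketch, node name illustrative):
root@master:/var# kubectl label nodes minion1 disktype=ssd
root@master:/var# kubectl exec triton -- ls -l /var/kube/admin.conf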
It could be due to the main process exiting. If so, run the while loop as the entry process (PID 1).
Useful link: https://stackoverflow.com/questions/31870222/how-can-i-keep-container-running-on-kubernetes
Use KUBERNETES_SERVICE_HOST for IP address and KUBERNETES_PORT_443_TCP_PORT for port
KUBE env variable for api info
root@triton:/var/triton_ingress/triton# env | grep KUBE
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
root@triton:/var/triton_ingress/triton#
Useful link: https://stackoverflow.com/questions/30690186/how-do-i-access-the-kubernetes-api-from-within-a-pod-container
Use 'kubeadm token create'. You can also use 'kubeadm token list' to see if any token is available.
Recreate a token
root@ip-20-10-1-90:~/deepak/triton_e_w# kubeadm token create --print-join-command
kubeadm join --token 9c6401.002ff027fa39ca1a 20.10.1.90:6443 --discovery-token-ca-cert-hash sha256:3d420b4415f3b4a16551a3cd05bc980f9550b19e0ed3b542abe73646c0a68bec
root@ip-20-10-1-90:~/deepak/triton_e_w#
Useful link: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-token/
https://stackoverflow.com/questions/47126779/join-cluster-after-init-token-expired?rq=1
Use the fieldPath attribute in YAML with status.hostIP
Get host IP info in the pod
root@ubuntu-232:~/deepak# cat test.yaml
apiVersion: v1
kind: Pod
metadata:
name: dapi-test-pod
spec:
containers:
- name: test-container
image: gcr.io/google_containers/busybox
command: [ "/bin/sh", "-c", "sleep 1000000" ]
env:
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: MY_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: MY_NODE_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
restartPolicy: Never
root@ubuntu-232:~/deepak#
root@ubuntu-232:~/deepak# kubectl get pods
NAME READY STATUS RESTARTS AGE
cpx-p4jsp 1/1 Running 0 2h
cpx-rq64d 1/1 Running 0 2h
dapi-test-pod 1/1 Running 0 45m
tritoncpx 1/1 Running 0 2h
web-backend-tjd6h 1/1 Running 0 2h
web-backend-xvrr8 1/1 Running 0 2h
web-frontend-5z9rn 1/1 Running 0 2d
web-frontend-pg55n 1/1 Running 0 2d
root@ubuntu-233:~/deepak# docker ps | grep -a dapi
00cafa3ab4ad gcr.io/google_containers/busybox "/bin/sh -c 'sleep 1?" 25 seconds ago Up 24 seconds k8s_test-container_dapi-test-pod_default_c65adb69-53de-11e8-a1d8-eabc5652ca9b_0
46b66423eb7a k8s.gcr.io/pause-amd64:3.1 "/pause" 27 seconds ago Up 26 seconds k8s_POD_dapi-test-pod_default_c65adb69-53de-11e8-a1d8-eabc5652ca9b_0
root@ubuntu-233:~/deepak# docker exec -it 4e6bcbceda71 sh
Error: No such container: 4e6bcbceda71
root@ubuntu-233:~/deepak# docker exec -it 00cafa3ab4ad sh
/ # env | grep NODE
MY_NODE_IP=10.106.73.233
/ # exit
root@ubuntu-233:~/deepak# ip addr show | grep 233
inet 10.106.73.233/25 brd 10.106.73.255 scope global eth0
Useful link: https://kubernetes-v1-4.github.io/docs/user-guide/downward-api/
https://github.com/kubernetes/kubernetes/issues/24657
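The same check can be done from the master without docker exec, using kubectl exec against the pod (pod name from the YAML above); it prints the four MY_ variables injected by the downward API:
root@ubuntu-232:~/deepak# kubectl exec dapi-test-pod -- env | grep MY_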
Use an api/v1/services?watch=true style URL, for example 'http://10.217.212.231:8080/api/v1/services?watch=true'.
Refer to the example below for output.
Useful link: https://github.com/kubernetes/kubernetes/issues/16429
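A sketch, assuming the kubectl proxy from the earlier section is still serving on port 8080; the connection stays open and streams one JSON event (ADDED/MODIFIED/DELETED) per service change (output illustrative and truncated):
root@master:/var/triton# curl -k 'http://127.0.0.1:8080/api/v1/services?watch=true'
{"type":"ADDED","object":{"kind":"Service","apiVersion":"v1","metadata":{"name":"kubernetes","namespace":"default", ...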
Remove a node from the cluster
====
On Master node
====
root@ubuntu-232:~/deepak# kubectl drain ubuntu-233
node "ubuntu-233" already cordoned
error: unable to drain node "ubuntu-233", aborting command...
There are pending nodes to be drained:
ubuntu-233
error: DaemonSet-managed pods (use --ignore-daemonsets to ignore): kube-flannel-ds-rdlvb
root@ubuntu-232:~/deepak# kubectl drain ubuntu-233 --ignore-daemonsets
node "ubuntu-233" already cordoned
WARNING: Ignoring DaemonSet-managed pods: kube-flannel-ds-rdlvb
node "ubuntu-233" drained
root@ubuntu-232:~/deepak# kubectl delete node ubuntu-233
node "ubuntu-233" deleted
root@ubuntu-232:~/deepak# kubectl get nodes ubuntu-233 -o wide
Error from server (NotFound): nodes "ubuntu-233" not found
====
On worker node which needs to be deleted
====
root@ubuntu-233:~# kubeadm reset
[preflight] Running pre-flight checks.
[reset] Stopping the kubelet service.
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers.
[reset] No etcd manifest found in "/etc/kubernetes/manifests/etcd.yaml". Assuming external etcd.
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
Use the clusterIP: None option in the YAML
Headless service example (clusterIP: None)
root@ubuntu-232:~/deepak/duke# cat web-frontend.yaml
apiVersion: v1
kind: Service
metadata:
name: web-frontend
labels:
app: web-frontend
spec:
clusterIP: None
ports:
- port: 80
protocol: TCP
name: http
selector:
app: web-frontend
---
apiVersion: v1
kind: ReplicationController
metadata:
name: web-frontend
spec:
replicas: 2
template:
metadata:
labels:
app: web-frontend
spec:
containers:
- name: web-frontend
image: in-docker-reg.eng.citrite.net/cpx-dev/web-test:latest
ports:
- containerPort: 80
root@ubuntu-232:~/deepak/duke# kubectl describe svc web-frontend
Name: web-frontend
Namespace: default
Labels: app=web-frontend
Annotations: <none>
Selector: app=web-frontend
Type: ClusterIP
IP: None
Port: http 80/TCP
TargetPort: 80/TCP
Endpoints: 10.244.2.100:80,10.244.2.101:80
Session Affinity: None
Events: <none>
Useful link: https://kubernetes.io/docs/concepts/services-networking/service/
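With a headless service, cluster DNS returns the individual pod IPs instead of a single ClusterIP; this can be observed with an nslookup from any pod (a sketch, assuming a busybox pod is running in this cluster):
root@ubuntu-232:~/deepak/duke# kubectl exec busybox -- nslookup web-frontend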
Use NodePort. It opens a port on each node, including the master, so the user can access the service from any Kubernetes node IP.
Example to expose nodeport
root@ubuntu-232:~/deepak# cat nodeport_pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: dkub
labels:
app: dkub
spec:
containers:
- name: dkub
image: nginx:latest
ports:
- containerPort: 80
root@ubuntu-232:~/deepak#
root@ubuntu-232:~/deepak# cat nodeport_service.yaml
apiVersion: v1
kind: Service
metadata:
name: my-pod-service
labels:
app: my-pod-service
spec:
type: NodePort
ports:
- port: 80
nodePort: 30081
selector:
app: dkub
root@ubuntu-232:~/deepak#
root@ubuntu-232:~/deepak# netstat -atpun | grep 30081
root@ubuntu-232:~/deepak#
root@ubuntu-232:~/deepak# kubectl create -f nodeport_pod.yaml
pod "dkub" created
root@ubuntu-232:~/deepak# kubectl get pods -o wide| grep dkub
dkub 1/1 Running 0 2m 10.244.1.232 ubuntu-231
root@ubuntu-232:~/deepak# curl -k http://10.244.1.232 | more
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 612 100 612 0 0 592k 0 --:--:-- --:--:-- --:--:-- 597k
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
...
root@ubuntu-232:~/deepak# ip addr show | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
inet 10.106.73.232/25 brd 10.106.73.255 scope global eth0
6: veth0ab4b12f@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
root@ubuntu-232:~/deepak# curl -k http://10.106.73.232:30081
curl: (7) Failed to connect to 10.106.73.232 port 30081: Connection refused
root@ubuntu-232:~/deepak# kubectl create -f nodeport_service.yaml
service "my-pod-service" created
root@ubuntu-232:~/deepak# kubectl get svc -o wide | grep my-pod
my-pod-service NodePort 10.101.99.1 <none> 80:30081/TCP 35s app=dkub
root@ubuntu-232:~/deepak# netstat -atpun | grep 30081 | grep LISTEN
tcp6 0 0 :::30081 :::* LISTEN 6077/kube-proxy
root@ubuntu-232:~/deepak# curl -k http://10.106.73.232:30081 | more
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 612 100 612 0 0 491k 0 --:--:-- --:--:-- --:--:-- 597k
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
....
root@ubuntu-232:~/deepak#
Useful link: https://console.bluemix.net/docs/containers/cs_nodeport.html#nodeport
Refer to https://github.com/coredns/coredns/issues/3681
Reference
https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/
https://www.youtube.com/watch?v=o85VR90RGNQ
https://www.youtube.com/watch?v=4ht22ReBjno
https://kubernetes.io/docs/concepts/overview/components/
https://kubernetes.io/docs/concepts/services-networking/ingress/
https://avinetworks.com/docs/17.2/replace-kube-proxy-in-kubernetes-environment-with-avi-vantage/
https://kubernetes.io/docs/concepts/workloads/pods/pod/
https://kubernetes.io/docs/concepts/architecture/nodes/
https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0