Create a namespace called ggckad-s0 in your cluster.
Run the following pods in this namespace.
A pod called pod-a with a single container running the kubegoldenguide/simple-http-server image
A pod called pod-b that has one container running the kubegoldenguide/alpine-spin:1.0.0 image, and one container running nginx:1.7.9
Write down the output of kubectl get pods for the ggckad-s0 namespace.
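A sketch of the two pod manifests (the container names are my own choice; everything else follows the task):

```yaml
# pod-a: single container running the simple HTTP server image
apiVersion: v1
kind: Pod
metadata:
  name: pod-a
  namespace: ggckad-s0
spec:
  containers:
  - name: http-server
    image: kubegoldenguide/simple-http-server
---
# pod-b: one alpine-spin container plus one nginx container
apiVersion: v1
kind: Pod
metadata:
  name: pod-b
  namespace: ggckad-s0
spec:
  containers:
  - name: alpine-spin
    image: kubegoldenguide/alpine-spin:1.0.0
  - name: nginx
    image: nginx:1.7.9
```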
All operations in this question should be performed in the ggckad-s2 namespace.
Create a ConfigMap called app-config that contains the following two entries:
'connection_string' set to 'localhost:4096'
'external_url' set to 'google.com'
Run a pod called question-two-pod with a single container running the kubegoldenguide/alpine-spin:1.0.0 image, and expose these configuration settings as environment variables inside the container.
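One way to sketch this is a ConfigMap plus a pod that imports all its entries via `envFrom` (the container name is an assumption):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: ggckad-s2
data:
  connection_string: localhost:4096
  external_url: google.com
---
apiVersion: v1
kind: Pod
metadata:
  name: question-two-pod
  namespace: ggckad-s2
spec:
  containers:
  - name: alpine-spin
    image: kubegoldenguide/alpine-spin:1.0.0
    envFrom:
    - configMapRef:
        name: app-config   # exposes connection_string and external_url as env vars
```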
All operations in this question should be performed in the ggckad-s2 namespace. Create a pod that has two containers. Both containers should run the kubegoldenguide/alpine-spin:1.0.0 image. The first container should run as user ID 1000, and the second container with user ID 2000. Both containers should use file system group ID 3000.
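A sketch of such a pod: `fsGroup` is set once at the pod level, `runAsUser` per container (the pod and container names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: question-three-pod   # hypothetical name
  namespace: ggckad-s2
spec:
  securityContext:
    fsGroup: 3000            # file system group ID shared by both containers
  containers:
  - name: first
    image: kubegoldenguide/alpine-spin:1.0.0
    securityContext:
      runAsUser: 1000
  - name: second
    image: kubegoldenguide/alpine-spin:1.0.0
    securityContext:
      runAsUser: 2000
```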
All operations in this question should be performed in the ggckad-s4 namespace. This question will require you to create a pod that runs the image kubegoldenguide/question-thirteen. This image is in the main Docker repository at hub.docker.com.
This image is a web server that has a health endpoint served at '/health'. The web server listens on port 8000. (It runs Python’s SimpleHTTPServer.) It returns a 200 status code response when the application is healthy. The application typically takes sixty seconds to start.
Create a pod called question-13-pod to run this application, making sure to define liveness and readiness probes that use this health endpoint.
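A sketch of the pod, with both probes pointed at the health endpoint and an initial delay to cover the sixty-second startup (the container name and probe periods are my own choices):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: question-13-pod
  namespace: ggckad-s4
spec:
  containers:
  - name: web
    image: kubegoldenguide/question-thirteen
    ports:
    - containerPort: 8000
    readinessProbe:
      httpGet:
        path: /health
        port: 8000
      initialDelaySeconds: 60   # the app takes about sixty seconds to start
      periodSeconds: 5
    livenessProbe:
      httpGet:
        path: /health
        port: 8000
      initialDelaySeconds: 60
      periodSeconds: 10
```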
All operations in this question should be performed in the ggckad-s5 namespace. Create a file called question-5.yaml that declares a deployment in the ggckad-s5 namespace, with six replicas running the nginx:1.7.9 image.
Each pod should have the label app=revproxy. The deployment should have the label client=user. Configure the deployment so that when the deployment is updated, the existing pods are killed off before new pods are created to replace them.
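"Existing pods killed off before new pods are created" is the `Recreate` deployment strategy. A sketch of question-5.yaml (the deployment name is hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: question-5-deployment   # hypothetical name
  namespace: ggckad-s5
  labels:
    client: user
spec:
  replicas: 6
  strategy:
    type: Recreate        # kill existing pods before creating replacements
  selector:
    matchLabels:
      app: revproxy
  template:
    metadata:
      labels:
        app: revproxy
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
```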
Create a new namespace k8s-challenge-2-a and ensure that all following operations (unless a different namespace is mentioned) are performed in this namespace.
Create a deployment named nginx-deployment of three pods running the nginx image with a memory limit of 64MB.
Expose this deployment under the name nginx-service inside our cluster on port 4444, pointing service port 4444 to pod port 80.
Spin up a temporary pod named pod1 from the image cosmintitei/bash-curl, exec into it, and request the default nginx page on port 4444 of our nginx-service using curl.
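A sketch of the deployment and service (the `app: nginx` selector label is an assumption):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: k8s-challenge-2-a
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        resources:
          limits:
            memory: 64Mi
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: k8s-challenge-2-a
spec:
  selector:
    app: nginx
  ports:
  - port: 4444       # service port
    targetPort: 80   # pod port
```

The temporary pod can then be started with something like `kubectl -n k8s-challenge-2-a run pod1 --rm -it --image=cosmintitei/bash-curl -- bash`, followed by `curl nginx-service:4444` inside it.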
https://medium.com/faun/kubernetes-ckad-weekly-challenge-3-cronjobs-dbd400526673
Create a static PersistentVolume of 50MB backed by your node's/localhost's /tmp/k8s-challenge-3 directory.
Create a PersistentVolumeClaim for this volume for 40MB.
Create a CronJob which, every minute, runs two instances of a pod that mounts the PersistentVolumeClaim into /tmp/vol and executes the command hostname >> /tmp/vol/storage.
We only need to keep the last 4 successfully executed jobs in the CronJob history.
Check your local filesystem for the hostnames of these pods with tail -f /tmp/k8s-challenge-3/storage.
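A sketch of the three objects; the resource names and the busybox image are my own choices, and on older clusters the CronJob lives under batch/v1beta1 rather than batch/v1:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: challenge-3-pv        # hypothetical name
spec:
  capacity:
    storage: 50Mi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /tmp/k8s-challenge-3
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: challenge-3-pvc       # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 40Mi
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: challenge-3-cron      # hypothetical name
spec:
  schedule: "* * * * *"
  successfulJobsHistoryLimit: 4   # keep only the last 4 successful jobs
  jobTemplate:
    spec:
      completions: 2              # two instances per run
      parallelism: 2
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: writer
            image: busybox        # assumed image
            command: ["sh", "-c", "hostname >> /tmp/vol/storage"]
            volumeMounts:
            - name: vol
              mountPath: /tmp/vol
          volumes:
          - name: vol
            persistentVolumeClaim:
              claimName: challenge-3-pvc
```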
Create a deployment of 15 pods with image nginx:1.14.2 in namespace one.
Confirm that all pods are running that image.
Edit the deployment to change the image of all pods to nginx:1.15.10.
Confirm that all pods are running image nginx:1.15.10.
Edit the deployment to change image of all pods to nginx:1.15.666.
Confirm that all pods are running image nginx:1.15.666 and have no errors. Show the error if there is one.
Whoops! Something went crazy wrong here! Roll back the change, so all pods run nginx:1.15.10 again.
Confirm that all pods are running image nginx:1.15.10.
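A possible command sequence for the whole exercise (the deployment name is an assumption; `--replicas` on `kubectl create deployment` needs a reasonably recent kubectl, otherwise scale in a second step):

```shell
kubectl -n one create deployment nginx-deployment --image=nginx:1.14.2 --replicas=15
kubectl -n one get pods -o jsonpath='{.items[*].spec.containers[*].image}'   # confirm the image
kubectl -n one set image deployment/nginx-deployment nginx=nginx:1.15.10
kubectl -n one rollout status deployment/nginx-deployment
kubectl -n one set image deployment/nginx-deployment nginx=nginx:1.15.666   # this tag does not exist
kubectl -n one get pods            # new pods stuck in ImagePullBackOff / ErrImagePull
kubectl -n one rollout undo deployment/nginx-deployment
kubectl -n one rollout status deployment/nginx-deployment
```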
https://medium.com/faun/kubernetes-ckad-weekly-challenge-6-networkpolicy-6cc1d390f289
Note: this will not work on SGServer bare metal by default; NetworkPolicies are only enforced when a network plugin that supports them is configured.
Make sure all your NetworkPolicies still allow DNS resolution.
Implement a NetworkPolicy for nginx pods to only allow egress to the internal api pods on port 3333. No access to the outside world (but DNS).
Implement a NetworkPolicy for api pods to only allow ingress on port 3333 from the internal nginx pods. To test the negative case: check access from api to api.
Implement a NetworkPolicy for api pods to only allow egress to (IP of google.com) port 443.
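A sketch of the first policy: nginx pods may only reach api pods on port 3333, plus DNS. The `app: nginx` and `app: api` labels are assumptions; an egress rule with only `ports` (no `to`) allows those ports to any destination, which keeps DNS resolution working:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: nginx-egress       # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: nginx
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: api
    ports:
    - port: 3333
  - ports:                 # allow DNS resolution to anywhere
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
```

The other two policies follow the same shape: an `Ingress` policy on the api pods selecting nginx pods on port 3333, and an egress policy with an `ipBlock` entry for the google.com IP on port 443.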
https://codeburst.io/kubernetes-ckad-weekly-challenge-8-user-authorization-rbac-31b6d01a8143
Create a ClusterRole and ClusterRoleBinding so that user secret@test.com can only access and manage secrets. Test it.
Create a ClusterRole and ClusterRoleBinding so that user deploy@test.com can only deploy and manage pods named compute. Test it.
Create an additional ClusterRole and ClusterRoleBinding for deploy@test.com to add read permission to secrets named compute-secret. Test it.
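A sketch for the first task (the role and binding names are my own choices; the later tasks add `resourceNames: ["compute"]` / `["compute-secret"]` to restrict the rules to specific objects):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: secret-manager            # hypothetical name
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: secret-manager-binding    # hypothetical name
subjects:
- kind: User
  name: secret@test.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: secret-manager
  apiGroup: rbac.authorization.k8s.io
```

It can be tested without switching identities via impersonation, e.g. `kubectl auth can-i get secrets --as=secret@test.com` and `kubectl auth can-i get pods --as=secret@test.com`.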
https://codeburst.io/kubernetes-ckad-weekly-challenge-9-logging-sidecar-67b2be91aa93
Add a sidecar container of image bash to the nginx pod.
Mount the pod-scoped volume named logs into the sidecar, the same way the nginx container does.
Our sidecar should pipe the content of the file access.log (which is inside the logs volume, because the nginx container writes it there) to stdout.
Check if you can access the logs of your sidecar using kubectl logs.
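A sketch of the pod; the mount path assumes nginx writes access.log to /var/log/nginx, and the volume is an emptyDir shared by both containers:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx              # assumed pod name
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx       # nginx writes access.log here
  - name: sidecar
    image: bash
    command: ["bash", "-c", "tail -n+1 -f /var/log/nginx/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  volumes:
  - name: logs
    emptyDir: {}
```

The sidecar's output is then visible with `kubectl logs nginx -c sidecar`.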
Create a deployment of image nginx with 3 replicas and check the kubectl get events that this caused.
Manually create a single pod of image nginx and try to smuggle it into the custody of the existing deployment (without changing the deployment configuration).
Did it work? Are there now 4 pods? Check kubectl get events for what happened.
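A sketch of the "smuggled" pod: give it labels matching the deployment's selector so the ReplicaSet considers it one of its own (the `app: nginx` label and pod name are assumptions). The ReplicaSet then sees four matching pods, adopts the manual one, and terminates one pod to get back to three replicas, all of which shows up in the events:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: smuggled-pod     # hypothetical name
  labels:
    app: nginx           # must match the deployment's selector
spec:
  containers:
  - name: nginx
    image: nginx
```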
We have the following file containing environment variables:
CREDENTIAL_001=-bQ(ETLPGE[uT?6C;ed
CREDENTIAL_002=C_;SU@ev7yg.8m6hNqS
CREDENTIAL_003=ZA#$$-Ml6et&4?pKdvy
CREDENTIAL_004=QlIc3$5*+SKsw==9=p{
CREDENTIAL_005=C_2\a{]XD}1#9BpE[k?
CREDENTIAL_006=9*KD8_w<);ozb:ns;JC
CREDENTIAL_007=C[V$Eb5yQ)c~!..{LRT
SETTING_USE_SEC=true
SETTING_ALLOW_ANON=true
SETTING_PREVENT_ADMIN_LOGIN=true
Create a Secret that contains all environment variables from that file
Create a pod of image nginx that makes all Secret entries available as environment variables. For example, usable via echo $CREDENTIAL_001, etc.
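Assuming the variables are saved in a file called credentials.env (the file and Secret names are my own choices), the Secret can be created with `kubectl create secret generic app-secret --from-env-file=credentials.env`, and a pod consuming all of its entries might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod    # hypothetical name
spec:
  containers:
  - name: nginx
    image: nginx
    envFrom:
    - secretRef:
        name: app-secret  # exposes every Secret key as an env var
```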