OKD single-node on AWS (OpenShift)

Installation of OKD 4.15

Following indications in the reference "Installing single-node OpenShift on AWS".

For a highly available cluster, the minimum resource requirements include control plane nodes with 4 vCPUs and 100 GB of storage each. For a single-node OKD cluster, you need a minimum of 8 vCPU cores and 120 GB of storage.


The controlPlane.replicas setting in the install-config.yaml file should be set to 1.


The compute.replicas setting in the install-config.yaml file should be set to 0. This makes the control plane node schedulable.

Supported AWS provider for single-node OKD

CPU architecture: x86_64 and AArch64 
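
As a quick check that the instance type used later in this guide (m5.2xlarge) meets the 8 vCPU requirement, assuming the AWS CLI is already configured:

aws ec2 describe-instance-types --instance-types m5.2xlarge --query 'InstanceTypes[].{vCPUs:VCpuInfo.DefaultVCpus,MemoryMiB:MemoryInfo.SizeInMiB}' --output table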




Configuring an AWS account to host the cluster

> Route 53

Register the domain to be used w/ OpenShift, eg: mydomain.click
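
The installer also needs a public hosted zone for the base domain; registering the domain through Route 53 creates one automatically. It can be verified w/ the AWS CLI (justicia is the AWS profile set up further below):

aws route53 list-hosted-zones-by-name --dns-name mydomain.click --profile justicia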

> Key pair for cluster node SSH access

These were already available on my machine; otherwise they must be created (see the example below):

/home/<myuser>/.ssh/id_ed25519                            --private key

/home/<myuser>/.ssh/id_ed25519.pub                        --public key
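
If the key pair does not exist yet, it can be created with, for example:

ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519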

If the ssh-agent process is not already running for your local user, start it as a background task: 

eval "$(ssh-agent -s)"

Add your SSH private key to the ssh-agent

ssh-add ~/.ssh/id_ed25519
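
To confirm the key is now loaded by the agent:

ssh-add -l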



Obtaining the installation program

You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both are required to delete the cluster.

Download the installer ( openshift-install-linux-4.15.0-0.okd-2024-03-10-010116.tar.gz ) to your directory "okd-install-on-aws" from https://github.com/okd-project/okd/releases :

wget https://github.com/okd-project/okd/releases/download/4.15.0-0.okd-2024-03-10-010116/openshift-install-linux-4.15.0-0.okd-2024-03-10-010116.tar.gz


Extract it w/:

tar -xvf openshift-install-linux-4.15.0-0.okd-2024-03-10-010116.tar.gz
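
A quick sanity check that the binary was extracted correctly:

./openshift-install version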


Pull secret for a private registry

If the cluster needs to pull images from a private registry, use the pull secret for that registry. Otherwise, you can use

{"auths":{"fake":{"auth":"aWQ6cGFzcwo="}}}

as the pull secret when prompted during the installation.
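
The auth value in this fake pull secret is just the base64 encoding of a dummy id:pass string (plus a trailing newline):

echo "id:pass" | base64        # prints aWQ6cGFzcwo=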



Creating the installation configuration file

https://docs.okd.io/latest/installing/installing_aws/ipi/installing-aws-customizations.html#installation-initializing_installing-aws-customizations

If you are already using other AWS accounts, create a new AWS profile for the OKD installation, eg: justicia

/home/jose/.aws/config

[default]

region = eu-west-1

output = json


[profile justicia]

region = eu-north-1

output = json

/home/jose/.aws/credentials

[default]

aws_access_key_id = AKfakedefault

aws_secret_access_key = TIf4k3


[justicia]

aws_access_key_id = AKfakejusticia

aws_secret_access_key = YrF4k3
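
To verify that the new profile resolves to the intended account before running the installer:

aws sts get-caller-identity --profile justicia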


Create the installation configuration file using the desired aws profile

export AWS_PROFILE=justicia && ./openshift-install create install-config --dir install-dir

SSH Public Key: /home/<myuser>/.ssh/id_ed25519.pub

Platform: aws

Region: eu-north-1 (Europe (Stockholm))

Base Domain: mydomain.click

Cluster Name: okdcluster

Pull Secret: {"auths":{"fake":{"auth":"aWQ6cGFzcwo="}}}


Optionally, configure install-dir/install-config.yaml with user tags so the resources created for this installation are easy to identify (the AWS_PROFILE environment variable remains the way to select the account):

Note: the installer does not seem to recognize a platform.aws.profile field (see the warning in the deployment output below), so it is better not to add it.

...

platform:

  aws:

    region: eu-north-1

    propagateUserTags: true

    userTags:

      ci: justicia

      okd: 'Created_automatically_by_OKD'

...


Modify the install-config.yaml file

additionalTrustBundlePolicy: Proxyonly

apiVersion: v1

baseDomain: mydomain.click

compute:

- architecture: amd64

  hyperthreading: Enabled

  name: worker

  platform: {}

  replicas: 0

controlPlane:

  architecture: amd64

  hyperthreading: Enabled

  name: master

  platform:

    aws:

      rootVolume:

        iops: 3000

        size: 350

        type: gp3

      type: m5.2xlarge

  replicas: 1

metadata:

  creationTimestamp: null

  name: myokd

platform:

  aws:

    region: eu-north-1

    propagateUserTags: true

    userTags:

      ci: myci

      okd: 'created_automatically'

publish: External

pullSecret: '{"auths":{"fake":{"auth":"aWQ6cGFzcwo="}}}'

sshKey: |

  ssh-ed25519 AAAAF4k3aC1lZDI1NTE5AAAAIFH4f4k3/f4K31q+wXu79JtBg2DyHvqkqDbQLtMffaK3 my@whatever


Back up the install-config.yaml file so that you can use it to install multiple clusters

IMPORTANT: The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
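
For example (the backup file name is arbitrary):

cp install-dir/install-config.yaml install-config.yaml.bak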



Installing the OpenShift CLI by downloading the binary

https://docs.okd.io/latest/installing/installing_aws/ipi/installing-aws-customizations.html#cli-installing-cli_installing-aws-customizations

You can install the OpenShift CLI (oc) to interact with OKD from a command-line interface.

Download oc.tar.gz for your OS and arch from https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ 

wget https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/linux/oc.tar.gz

tar xvf oc.tar.gz

Place the oc and kubectl binaries in a directory that is on your PATH

mv oc ~/.local/bin/

mv kubectl ~/.local/bin/

The ~/.bashrc should contain something like

# INI .local/bin

export PATH="$HOME/.local/bin:$PATH"

# END .local/bin

Verify:

$ oc version

Client Version: 4.8.11



Generating the Kubernetes manifests

Warning: not sure this step is necessary; the OKD documentation only mentions it when NOT using the default option, in which administrator secrets are stored in the kube-system project.

export AWS_PROFILE=justicia && ./openshift-install create manifests --dir install-dir
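
If this step is run, one thing worth checking is the scheduler manifest generated by the installer: for a single-node install the control plane must remain schedulable, which matches the MastersSchedulable warning printed later during deployment:

grep mastersSchedulable install-dir/manifests/cluster-scheduler-02-config.yml        # expect: mastersSchedulable: true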



Deploying the cluster

export AWS_PROFILE=justicia && ./openshift-install create cluster --dir install-dir --log-level=info

Output:

WARNING failed to parse first occurrence of unknown field: failed to unmarshal install-config.yaml: error unmarshaling JSON: while decoding JSON: json: unknown field "profile"

INFO Attempting to unmarshal while ignoring unknown keys because strict unmarshaling failed

INFO Credentials loaded from the "justicia" profile in file "/home/<myuser>/.aws/credentials"

WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings

INFO Consuming Install Config from target directory

INFO Creating infrastructure resources...

INFO Waiting up to 20m0s (until 12:19PM CEST) for the Kubernetes API at https://api.okdcluster.<mydomain>.click:6443...

INFO API v1.28.2-3598+6e2789bbd58938-dirty up

INFO Waiting up to 30m0s (until 12:34PM CEST) for bootstrapping to complete...

INFO Destroying the bootstrap resources...

INFO Waiting up to 40m0s (until 1:00PM CEST) for the cluster at https://api.okdcluster.<mydomain>.click:6443 to initialize...

INFO Waiting up to 30m0s (until 12:58PM CEST) to ensure each cluster operator has finished progressing...

INFO All cluster operators have completed progressing

INFO Checking to see if there is a route at openshift-console/console...

INFO Install complete!

INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/<myuser>/ib/gencat/okd-install-on-aws/install-dir/auth/kubeconfig'

INFO Access the OpenShift web-console here: https://console-openshift-console.apps.okdcluster.<mydomain>.click

INFO Login to the console with user: "kubeadmin", and password: "<the-password>"

INFO Time elapsed: 35m24s
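
If the terminal is lost or the command times out while the cluster is still converging, the wait can be resumed without redeploying (wait-for is a standard openshift-install subcommand):

./openshift-install wait-for install-complete --dir install-dir --log-level=info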


IMPORTANT: Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.



Logging in to the cluster by using the CLI

https://docs.okd.io/latest/installing/installing_aws/ipi/installing-aws-customizations.html#cli-logging-in-kubeadmin_installing-aws-customizations

Export the kubeadmin credentials:

export KUBECONFIG=~/<comp>/<cust>/okd-install-on-aws/install-dir/auth/kubeconfig

Verify you can run oc commands successfully using the exported configuration:

$ oc whoami

system:admin
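
A couple of quick checks to confirm the single node is Ready and the cluster operators are Available:

oc get nodes

oc get clusteroperators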



Logging in to the cluster by using the web console

List the OKD web console route: 

$ oc get routes -n openshift-console | grep 'console-openshift'

console     console-openshift-console.apps.okdcluster.mydomain.click            console     https   reencrypt/Redirect   None

Navigate to the route detailed in the output of the preceding command in a web browser.

If you get Error code: SEC_ERROR_UNKNOWN_ISSUER, then Accept the Risk and Continue.


ERROR: the redirect to

https://oauth-openshift.apps.okdcluster.mydomain.click/oauth/authorize?client_id=console&redirect_uri=https%3A%2F%2Fconsole-openshift-console.apps.okdcluster.mydomain.click%2Fauth%2Fcallback&response_type=code&scope=user%3Afull&state=b17bf4f8

did not respond.


The same issue, w/ no solution (besides reinstalling OKD), was reported at:

https://stackoverflow.com/questions/76827100/unable-to-access-authentication-in-openshift-installed-on-prem-though-command-li
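
Some generic checks (not a confirmed fix) that may help narrow the problem down to DNS, the ingress routing or the authentication stack:

dig +short oauth-openshift.apps.okdcluster.mydomain.click        # the wildcard *.apps record must resolve

oc get clusteroperators authentication ingress

oc get pods -n openshift-authentication

oc get route oauth-openshift -n openshift-authentication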



TBD: log in to the web console as the kubeadmin user (blocked by the OAuth redirect issue above).



Destroying a cluster that uses installer-provisioned infrastructure

This removes a cluster that uses installer-provisioned infrastructure from your cloud.


Prerequisites: the installation program and the files it created during the installation (see the IMPORTANT note above).

Procedure

Syntax: ./openshift-install destroy cluster --dir <installation_directory> --log-level info

export AWS_PROFILE=justicia && ./openshift-install destroy cluster --dir install-dir --log-level info