Wednesday, February 22, 2023

Rancher Suite K8S Adventure - Chapter 008 - Rancher Cluster First RKE2 Install

A travelogue of converting from OpenStack to Suse's Rancher Suite for K8S including RKE2, Harvester, kubectl, helm.

I chose not to automate this install as RKE2 doesn't support anything more modern than downloading shell scripts from the internet; I suppose the entire point of installing Rancher as a cluster orchestrator is to avoid this kind of weirdness in the future.

The strange-looking RKE2_VERSION value below more or less comes from:

https://update.rke2.io/v1-release/channels

Install process for the first member of the cluster:

# curl -sfL https://get.rke2.io | INSTALL_RKE2_VERSION="v1.24.10+rke2r1" sh -

# systemctl enable rke2-server.service

# systemctl start rke2-server.service
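As an aside, the server can be parameterized before that first start by dropping a config file in place. This is only a sketch; the tls-san hostname is an assumption based on the DNS name I use later in this post, and yours will differ:

```yaml
# /etc/rancher/rke2/config.yaml -- create before 'systemctl start'.
# tls-san adds an externally reachable name to the server certificate,
# which matters later when kubectl connects from a remote machine.
tls-san:
  - rancher1.cedar.mulhollon.com
```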

(Insert an extremely dramatic very long pause here)

Eventually these two commands will settle down and look normal.

systemctl status rke2-server

journalctl -u rke2-server -f
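If you'd rather script the dramatic pause than stare at journalctl, a small polling helper works. A hedged sketch; the `check` argument below stands in for whatever real probe you prefer, e.g. `systemctl is-active --quiet rke2-server`:

```shell
# Retry a command until it succeeds or we run out of tries.
# Usage: wait_for <max_tries> <command...>
wait_for() {
  local tries=$1; shift
  local i=0
  until "$@"; do
    i=$((i + 1))
    [ "$i" -ge "$tries" ] && return 1
    sleep 1
  done
}

# Against the real service this would be something like:
#   wait_for 120 systemctl is-active --quiet rke2-server
```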

Note that a kubectl config file (kubeconfig) will be written to /etc/rancher/rke2/rke2.yaml

and a node join token file will be written to /var/lib/rancher/rke2/server/node-token

scp /var/lib/rancher/rke2/server/node-token root@otherHosts:

(Repeat the above scp for all future additional nodes in your cluster)
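To foreshadow what that token is for: when additional nodes join the cluster, each gets a small config file pointing back at this first server. A hedged sketch only, since the actual joining happens in a later chapter, and the hostname here is mine:

```yaml
# /etc/rancher/rke2/config.yaml on an additional node -- sketch only.
# 'server' points at the first node on RKE2's supervisor port 9345;
# 'token' is the contents of the node-token file scp'd above.
server: https://rancher1.cedar.mulhollon.com:9345
token: <contents of node-token>
```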

scp /etc/rancher/rke2/rke2.yaml vince@ubuntu:

(Or whatever your "local" experimenting system)

Some fun commands to try as root on your first node:

export KUBECONFIG=/etc/rancher/rke2/rke2.yaml

kubectl get node

kubectl get pods --all-namespaces

helm ls --all-namespaces

Log into your experimental machine (for me that's vince@ubuntu) and recall you scp'd over the kubectl config, filename rke2.yaml.  That yaml specifies the server address as 127.0.0.1, which won't do from a remote machine.

First, mkdir ~/.kube then cp rke2.yaml ~/.kube/config

vi ~/.kube/config and change "server: https://127.0.0.1:6443" to something probably reminiscent of "server: https://rancher1.cedar.mulhollon.com:6443"; obviously your DNS name will be different.
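If you'd rather not open vi, the same edit can be done with sed. A hedged sketch; the helper name is mine, and the hostname in the usage comment is my DNS name, not yours:

```shell
# Rewrite the loopback server address in a copied kubeconfig.
# Usage: fix_kubeconfig <kubeconfig path> <reachable DNS name of first node>
fix_kubeconfig() {
  sed -i "s#server: https://127.0.0.1:6443#server: https://$2:6443#" "$1"
}

# Against the real file that would be:
#   fix_kubeconfig ~/.kube/config rancher1.cedar.mulhollon.com
```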

At this point you should be able to run "kubectl get node" from a remote machine, or try "helm ls --all-namespaces".  Cool.

Tomorrow, we add the rest of the nodes to the cluster.
