Monday, February 13, 2023

Rancher Suite K8S Adventure - Chapter 001 - Goals

A travelogue of converting from OpenStack to SUSE's Rancher Suite for K8S, including RKE2, Harvester, kubectl, and Helm.

Goal

The goal of this new project is to convert a small OpenStack cluster to the SUSE 'Rancher Suite'.  AFAIK the integrated set of SUSE K8S cluster projects doesn't have a formal name, so I'm calling it the 'Rancher Suite'.

History

Here's how I see the historical epic of virtualization.  (epic rant incoming)  Around the turn of the century came the rise of VM virtualization: run a hypervisor on the bare metal, which emulates/virtualizes a VM, which runs an OS that usually doesn't entirely realize it's a VM, then run your apps on that OS.  I had admin access on some corporate VMware clusters and, a little later, on some large corporate OpenStack clusters.  Meanwhile, in the mid 00s, a different approach was becoming popular: chroot jails for enhanced security had been a thing since at least the 90s, maybe before, and by the mid 00s the Linux kernel was "threaded", sorta, such that LXC containers could run multiple parallel runtime instances of the OS on top of one running kernel.  This is not theoretical; in the late 00s I was running multiple LXC containers acting as virtual machines on some Linux servers and it worked quite well.  From LXC-type containerization, plus overlay filesystems and, admittedly, a lot of hype, Docker was born.  Docker led to Docker-Swarm (Docker-Swarm is like 'Fight Club' in that the first rule of Docker-Swarm is nobody talks about Docker-Swarm).  Docker-Swarm plus a lot of cool/complicated stuff led to Kubernetes, which everyone abbreviates to K8S.

So, how to square the circle of virtualization?  Do we run VMs on hypervisors and put workload on top of that, or run containers on K8S clusters and put workload on top of that?  The 'Marketing Department' name for trying to mix VMs with containers seems to be 'Hyperconverged Infrastructure' (HCI).

I've tried several HCI strategies in the past.  Running hand-configured VMs that host Docker, Docker-Swarm, and small K8S clusters like the K3S project worked fine on VMware and OpenStack.  Likewise, as you'd expect, there are automation 'solutions' for VMware and OpenStack that eliminate the hand-configuration aspect by being, more or less, very small shell scripts that install preconfigured images for you.  Another approach is OpenStack's Zun project, which provides Docker Engine drivers for OpenStack resources so that containers can natively connect to OpenStack's block storage and network infrastructure; this has worked very well for me.  Going the opposite direction, although I've never personally worked with it, the kubevirt.io project lets you run VMs on top of K8S containers, at least as I understand it.

An interesting-looking HCI system I want to try with this project is the SUSE 'Rancher Suite' of tightly integrated projects.  Rancher is a controller and orchestrator of K8S clusters; Longhorn is a cool distributed block storage for K8S (think vSAN for containers); Harvester is a Linux bare-metal OS designed to host K8S clusters and VMs; RKE2 (and, I suppose, RKE1) is a really nice K8S implementation.  What's noteworthy about this list of projects from SUSE is that they're all very compatible with each other, so I will deploy them as an integrated system and move the entire infrastructure from OpenStack to the Rancher Suite.  What could possibly go wrong?

Why blog this project?

I think the Rancher Suite sounds really cool.  If we're honest with ourselves, most technology selections are based primarily on this criterion, although there's usually a dense wrapper of rationalization surrounding the decision.

The usual self-promotion, of course.  Need a C2C or W2 contract systems engineer?

I'm dissatisfied with the design process seen in YouTube videos and blog posts of similar deployments: just install the 'latest' or 'main' branch of all 15 software components and hope for the best.  It's a moving target.  It's like throwing a net at a flock of birds after they've already scattered.  Why doesn't SUSE sell/promote a package deal named 'SUSE Cluster 2023' where it's all guaranteed to "just work" together?
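The moving-target problem above can at least be blunted by pinning explicit versions instead of taking whatever 'latest' resolves to today.  As a rough sketch, assuming a Helm-based Rancher install (the version number and hostname here are illustrative placeholders, not a tested-together combination):

```shell
# Add the Rancher stable chart repository (check SUSE's docs for the current URL)
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
helm repo update

# Pin an explicit chart version rather than installing whatever 'latest' is.
# 2.7.1 and rancher.example.com are illustrative values only.
helm install rancher rancher-stable/rancher \
  --namespace cattle-system --create-namespace \
  --version 2.7.1 \
  --set hostname=rancher.example.com
```

Recording the pinned versions of every component (RKE2, Harvester, Longhorn, the Rancher chart) is the closest thing available to that hypothetical 'SUSE Cluster 2023' bundle.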

Likewise I'm dissatisfied with the demarcation point seen in other creative expositions.  Too many end with the admin running 'kubectl get nodes' and then concluding the demonstration.  Wait a minute: I have many containers and VMs that need to be imported into the cluster, and there are numerous other day-to-day 'operational concerns', such as logging, monitoring, and backups, that have not been addressed.  I'm reminded of the bad old days of Linux OS reviews in the 90s, when the review would end at the conclusion of the installation process.  Wait a minute, I need to do actual stuff AFTER the install; where's the review of anything more than five minutes after the install?  LOL, some things never change, do they?

Finally, I write these things as informal documentation.  I have IPAM in Netbox, I keep runbooks in Mattermost, and I keep detailed documentation in Redmine just like any other civilized engineer; but sometimes informal prose is what actually jogs my memory.
