Friday, July 1, 2022

Adventures of a Small Time OpenStack Sysadmin Chapter 004 - Ending Conditions

Adventures of a Small Time OpenStack Sysadmin relates the experience of converting a small VMware cluster into two small OpenStack clusters, and the adventures and friends I made along the way.

To end up where you need to be, first you have to figure out where you need to go.

Originally I planned on one big OS cluster, as seen in previous posts.  You probably noticed that the intro to all the posts in this blog series mentions I have two small OS clusters instead of one large cluster.  I'm writing this blog series some months after the "fun", so some of the chronology is out of order, and nothing motivates an OpenStack administrator to use Kolla-Ansible to install OS clusters quite like trying to follow the online "Installation Tutorials".  So my idea is to always have two clusters: I can alternate between one cluster being in a relatively stable "production" status and the other temporarily being in a "dev" or "upgrading" status, and keep my images on the move to either load balance across the clusters or push everything onto one so the other can be redeployed.  It's all automated, so it's not any work, for me anyway, just watts of CPU power.  In summary, the original plan gradually evolved from a single six-host cluster (which was the VMware cluster architecture for many years) to two three-host clusters.  I wanted everything managed in my existing Ansible; it seems I will be multi-Ansible soon, with Kolla-Ansible for the OS clusters and my own internal system for everything else.
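
To make the "keep the images on the move" idea concrete, here is a minimal sketch of what that automation looks like, assuming a clouds.yaml with entries named cluster1 and cluster2 (names hypothetical) and the openstacksdk Python package; my real tooling lives in my own Ansible, so treat this as illustrative rather than the actual thing.

```python
# Minimal sketch: mirror Glance images from one cluster to the other before the
# destination cluster is rotated into "production". Assumes clouds.yaml entries
# named "cluster1" and "cluster2" (hypothetical) and pip-installed openstacksdk.
import openstack

src = openstack.connect(cloud="cluster1")
dst = openstack.connect(cloud="cluster2")

already_there = {img.name for img in dst.image.images()}

for img in src.image.images():
    if img.name in already_there:
        continue
    data = src.image.download_image(img)   # pulls the whole image into memory
    dst.image.create_image(
        name=img.name,
        disk_format=img.disk_format,
        container_format=img.container_format,
        data=data,
    )
    print(f"copied {img.name}")
```

For terabyte-scale images you would want to stream to disk rather than buffer everything in memory, but the shape of the loop is the point.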

My plan for storage was pretty simple: Cinder on LVM on the "bulk" terabyte SSDs on each host, and Swift using the entire M.2 NVMe SSD on each host to store objects, especially backups.  I could also cross-store backups, such that cluster2 keeps a copy of all the backups in cluster1's Swift store; cluster "disaster recovery" certainly becomes simple if each cluster has all the backups all the time.  I still plan to do multi-backend Cinder connecting to the NAS over NFS, or, I suppose, iSCSI.  I have some fileserver images and the like that attach iSCSI storage from the NAS at the image level, and that works well; at least iSCSI works on FreeBSD (LOL, more on that later).
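
As a sketch of the cross-stored backups idea, assuming cinder-backup writes into a Swift container (whatever backup_swift_container says in cinder.conf; the stock default is volumebackups), something along these lines could mirror cluster1's backup objects into cluster2's Swift:

```python
# Sketch: copy Cinder backup objects from cluster1's Swift into cluster2's
# Swift so each cluster holds the other's backups. Cloud names and the
# "volumebackups" container name are assumptions about this particular setup.
import openstack

src = openstack.connect(cloud="cluster1")
dst = openstack.connect(cloud="cluster2")

container = "volumebackups"
dst.object_store.create_container(name=container)   # harmless if it already exists

mirrored = {obj.name for obj in dst.object_store.objects(container)}

for obj in src.object_store.objects(container):
    if obj.name in mirrored:
        continue
    data = src.object_store.download_object(obj, container=container)
    dst.object_store.upload_object(container=container, name=obj.name, data=data)
    print(f"mirrored {obj.name}")
```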

I didn't really understand networking on OpenStack until I extensively experimented with it.  Looking through a lens of "it's like NSX, but it's not NSX" helps.  I originally decided to set up my provider network ports as untagged, non-VLAN, and that worked, although it was a HUGE headache later on, for reasons I will explain later.  The second cluster definitely attaches its provider networks with VLAN tagging, so that the cluster has access to everything... this became VERY important later on (more detail in a later post...).
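
For contrast, this is roughly what the second-cluster approach looks like in openstacksdk terms; the physical network label, VLAN ID, and subnet range below are placeholders, not my actual values:

```python
# Sketch: a VLAN-tagged provider network (second-cluster style) versus the
# untagged setup I started with. Requires admin credentials; "physnet1",
# VLAN 20, and the 192.0.2.0/24 range are illustrative placeholders.
import openstack

conn = openstack.connect(cloud="cluster2")

net = conn.network.create_network(
    name="provider-vlan20",
    provider_network_type="vlan",          # the untagged version would use "flat"
    provider_physical_network="physnet1",
    provider_segmentation_id=20,
    is_shared=True,
)

conn.network.create_subnet(
    network_id=net.id,
    name="provider-vlan20-subnet",
    ip_version=4,
    cidr="192.0.2.0/24",
    gateway_ip="192.0.2.1",
    is_dhcp_enabled=False,                 # let the physical network handle addressing
)
```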

Generally, my goal was to standardize everything on FreeBSD for "real" workloads and Ubuntu for unix-like workloads (basically, Ubuntu only as hosts for Docker containers, at least until I install and figure out OpenStack Zun).  I was pretty successful in this.  It's fun to have images on your VMware cluster of MS-DOS with Samba network access, or Devuan, or GUIX, or old installs of GNU Hurd, but that was mostly for experimenting, and I can experiment on OpenStack AFTER I get the clusters operational.  So, for a little while, I planned to compress everything down to FreeBSD 13.1 or, if necessary, Ubuntu 20.04, and this temporary "compression" was entirely, one hundred percent successful.
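
In practice the "compression" just means two standard images registered in Glance on each cluster; the file names and the os_distro metadata below are illustrative, not gospel:

```python
# Sketch: register the two standardized images. Paths and metadata are
# illustrative; os_distro is a common Glance image property, not a requirement.
import openstack

conn = openstack.connect(cloud="cluster1")

for name, path, distro in (
    ("freebsd-13.1", "freebsd-13.1.qcow2", "freebsd"),
    ("ubuntu-20.04", "ubuntu-20.04.qcow2", "ubuntu"),
):
    conn.image.create_image(
        name=name,
        filename=path,
        disk_format="qcow2",
        container_format="bare",
        meta={"os_distro": distro},
    )
```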

Stay tuned for the next chapter!
