Monday, November 20, 2023

Proxmox VE Cluster - Chapter 013 - Install Proxmox VE on the old OS1 cluster hardware


A voyage of adventure, moving a diverse workload running on OpenStack, Harvester, and RKE2 K8S clusters over to a Proxmox VE cluster.


Some notes on installing Proxmox VE on the old OS1 cluster hardware.  The main reference document:

https://pve.proxmox.com/pve-docs/chapter-pve-installation.html

The plan to work around the networking challenges: first get everything working on a single plain temporary 1G ethernet connection, then use that as a management web interface to get the dual 10G LAG with VLANs up and running, then use the new "20G" ethernet-connected web management interface to reconfigure the dual 1G LAG / VLAN ethernet ports.  At that point everything will be working.

First Steps

Install using the IPMI KVM and a USB key, so find the USB key and plug in the IPMI ethernet.


On boot, hit DEL for setup, alter the boot options to include the USB drive, reboot, hit F11 for the boot menu, then boot off the USB.

Proxmox "OS" install process

  • I have a habit of using the console install environment.
  • The installer wants to default to installing on the M.2 drive, although I am using the SATA drive.
  • Country: United States
  • Timezone: The "timezone" field will not let me enter a timezone, only city names, none of which are nearby.  It's super annoying that I can't just enter a timezone like on a real operating system.  I ended up selecting a city a thousand miles away.  This sucks.  It's a "timezone" setting, not a "name a faraway city that coincidentally is in the same timezone" setting.  I expect better from Proxmox.
  • Keyboard Layout: U.S. English
  • Password: (mind your caps-lock)
  • Administrator email: vince.mulhollon@springcitysolutions.com
  • Management Interface: the first 1G ethernet (eno1, aka the "bottom left corner")
  • Hostname FQDN: as appropriate, as per the sticker on the device
  • IP address (CIDR): as appropriate, per the sticker on the device, with a /16 prefix length
  • Gateway address: 10.10.1.1
  • DNS server address: 10.10.8.221
  • Note you can't set up VLANs in the installer, AFAIK.
  • Hit enter to reboot, yank the USB flash install drive, yank the USB keyboard, watch the monitor... seems to boot properly...
  • Web interface is on port 8006.  Log in as root.  Note I installed from the 8.0-2 ISO, but on first boot the web GUI reports version 8.0.3; it must have auto-updated as part of the install process?  (A shell check of the running version is sketched below.)
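
If you want to confirm the running version from a shell (SSH or the node console), pveversion will print it.  These are standard PVE commands, shown here as a quick sketch:

  # Print the installed Proxmox VE manager version
  pveversion
  # More detail, including individual package versions
  pveversion -v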

Upgrade the new Proxmox VE node

  1. Double check there's no production workload on the server; it's a new install so there shouldn't be anything, but it's a good habit.
  2. Select the "Server View", then the node name, then on the right side, "Updates", "Repositories", and disable both enterprise license repos.  Add the community repos as explained at https://pve.proxmox.com/wiki/Package_Repositories
  3. In summary: click "Add", select "No-Subscription", click "Add", then repeat for the "Ceph Quincy No-Subscription" repo.  (A CLI equivalent is sketched after this list.)
  4. In the right pane, select "Updates" then "Refresh" and watch the update.  Click "Upgrade" and watch the upgrade.
  5. Optimistically get a nice message on the console of "Your system is up-to-date" and a request to reboot.
  6. Reboot and verify operation.
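
For reference, the same repository change and upgrade can be done from a shell.  A sketch, assuming a stock PVE 8 install on Debian bookworm with the standard Proxmox repository file paths:

  # Disable the two enterprise repos (they require a subscription key)
  sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
  sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/ceph.list

  # Add the no-subscription repos
  echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
  echo "deb http://download.proxmox.com/debian/ceph-quincy bookworm no-subscription" > /etc/apt/sources.list.d/ceph-no-subscription.list

  # Equivalent of the GUI "Refresh" and "Upgrade" buttons
  apt update && apt dist-upgrade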

Install hardware in permanent location with temporary ethernet cables

  1. Perform some basic operational testing.
  2. In the web UI "Shutdown" then wait for power down.
  3. Reinstall in permanent location.
  4. Connect eno1 to any untagged "Prod" VLAN 10 access-only ethernet port, temporarily, for remote management via the web interface.
  5. Connect the 10G ethernets eno3 and eno4 to the LAG'd and VLAN'd 10G ethernet switch ports.  (A quick link check is sketched after this list.)
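
Before moving the bridge, it is worth confirming from a shell that the 10G links actually came up; ip and ethtool are the standard tools here:

  # Quick link state summary for all ports
  ip -br link

  # Confirm negotiated speed on the 10G ports
  ethtool eno3 | grep -E 'Speed|Link detected'
  ethtool eno4 | grep -E 'Speed|Link detected'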

Move the Linux Bridge from single 1 gig eno1 to dual 10 gig LAG on eno3 and eno4

All of this is done from the node's network configuration in the web UI (a sketch of the resulting /etc/network/interfaces is after this list):
  1. Modify eno3 and eno4, checkmark "Advanced", change MTU to 9000.
  2. Create a Linux Bond named bond1, checkmark "Advanced", change MTU to 9000, Mode "balance-xor", slaves "eno3 eno4" (note the space in between, not a comma).  Note bond0 will eventually be the 1G LAG, and the old OpenStack used "balance-xor", so I will start with that on Proxmox.
  3. Create a Linux VLAN named bond1.10 with MTU 9000; you can create the other VLANs now if you want.
  4. Edit the vmbr0 Linux bridge to have an MTU of 9000 and Bridge Ports of bond1.10.
  5. Double check everything, then "Apply Configuration", and after about twelve to thirteen heart-stopping seconds it should be up and working.
At some later date I will try some LAG bond modes more interesting than "balance-xor".
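
For reference, the end state the GUI writes to /etc/network/interfaces should look roughly like this.  A sketch only; the 10.10.1.50/16 address is a hypothetical stand-in for whatever is on the device's sticker:

  auto eno3
  iface eno3 inet manual
          mtu 9000

  auto eno4
  iface eno4 inet manual
          mtu 9000

  auto bond1
  iface bond1 inet manual
          bond-slaves eno3 eno4
          bond-miimon 100
          bond-mode balance-xor
          mtu 9000

  auto bond1.10
  iface bond1.10 inet manual
          mtu 9000

  auto vmbr0
  iface vmbr0 inet static
          # hypothetical address; use the one from the sticker
          address 10.10.1.50/16
          gateway 10.10.1.1
          bridge-ports bond1.10
          bridge-stp off
          bridge-fd 0
          mtu 9000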

Note the network interfaces do not have "VLAN aware" checked.  Everything works.  I will research this later in a dedicated advanced networking post.

Convert the single 1 gig eno1 to dual 1 gig LAG on eno1 and eno2

  1. Edit eno1 and eno2 and set MTU to 9000.
  2. Create a Linux Bond named bond0, checkmark "Advanced", change MTU to 9000, Mode "balance-xor", slaves "eno1 eno2" (space in between).
  3. Create VLAN interfaces on bond0 now, or create them later.  (Either way, a shell check of both bonds is sketched below.)
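
Both bonds can be verified from a shell; the kernel bonding driver exposes per-bond status under /proc:

  # Shows bonding mode, MII status, and the slave list for each bond
  cat /proc/net/bonding/bond0
  cat /proc/net/bonding/bond1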

Final Installation Tasks

  1. Join the new node(s) to the existing cluster.  (A pvecm sketch is after this list.)
  2. Verify information in Netbox, including MAC addresses, serial number, and ethernet cabling; the platform should be Proxmox VE; remove the old Netbox device information.
  3. Add new hosts to Zabbix.
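
The join can be done from the web UI (Datacenter, Cluster, Join Cluster) or from a shell on the new node.  A sketch, where 10.10.1.2 is a hypothetical IP of an existing cluster member:

  # On the new node; prompts for the existing member's root password
  pvecm add 10.10.1.2

  # Afterwards, confirm quorum and membership
  pvecm status
  pvecm nodes
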
The next post will be about setting up NTP on the new Proxmox VE cluster.
