Monday, November 13, 2023

Proxmox VE Cluster - Chapter 010 - OS Images on Proxmox and manual VM operations



A voyage of adventure, moving a diverse workload running on OpenStack, Harvester, and RKE2 K8S clusters over to a Proxmox VE cluster.


Today is about manually testing and experimenting with the Proxmox VE cluster.


References for this post:

https://pve.proxmox.com/wiki/Qemu/KVM_Virtual_Machines


Load some ISO images into the shared Proxmox cluster storage:

Download some ISO files, such as Ubuntu 20.04.6 LTS from 

https://ubuntu.com/download/server

Add the ISO install image to Proxmox:

  1. Log into the Proxmox web UI
  2. Select any cluster node
  3. Expand the selected node in the left pane
  4. Select proxmox-isoimages as a storage location
  5. Click "ISO Images" on the right pane
  6. "Upload"
  7. Wait for the pop up window to report "TASK OK" then close the window out.
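Because proxmox-isoimages is file-based NFS storage, the same result can also be had from a shell. This is only a sketch: "pve1" is a placeholder node name, the filename assumes the 20.04.6 live-server ISO, and the download-url API call needs a reasonably recent Proxmox release.

  # Option 1: have a cluster node download the ISO straight into the storage
  pvesh create /nodes/pve1/storage/proxmox-isoimages/download-url \
    --content iso \
    --filename ubuntu-20.04.6-live-server-amd64.iso \
    --url https://releases.ubuntu.com/20.04.6/ubuntu-20.04.6-live-server-amd64.iso

  # Option 2: since the storage is just an NFS export, copy the file into the
  # storage's template/iso directory yourself and Proxmox will pick it up
  cp ubuntu-20.04.6-live-server-amd64.iso /net/freenas/mnt/freenas-pool/proxmox-isoimages/template/iso/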

Verify the ISO image uploaded using the CLI on any Linux box.

  1. Previously, when I set up the NFS shares, I added symlinks from my NFS home directory to the proper automounter location, so ~/proxmox-isoimages is symlinked to /net/freenas/mnt/freenas-pool/proxmox-isoimages.  This means I can log into any Linux system on the LAN (via SSO using Samba and Active Directory) and "cd ~/proxmox-isoimages".
  2. The "~/proxmox-isoimages" directory contains "template"
  3. The "~/proxmox-isoimages/template" directory contains "iso"
  4. The "~/proxmox-isoimages/template/iso" directory contains the Ubuntu Server install ISO uploaded previously.

Verify the ISO image uploaded using the Web UI.

  1. Log into any Proxmox cluster node web UI.
  2. In the left pane select "Folder View" then "Storage" then any of the node's "proxmox-isoimages".
  3. In the right pane select "ISO Images" on the left side, and the right side of the right pane should populate with a list of ISO images identical to the directory contents seen in the previous verification task.
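The same listing is also visible from a shell on any cluster node; a quick sketch using pvesm, assuming the storage is named proxmox-isoimages:

  pvesm list proxmox-isoimages --content iso
  # each ISO shows up as a volume ID like proxmox-isoimages:iso/ubuntu-20.04.6-live-server-amd64.iso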

Create a test VM and experiment with it. (A CLI equivalent using "qm create" is sketched after the steps.)

  1. Select any node. Click "Create VM". 
  2. In the General pane, set Name: Test; you probably want to check "Start at boot" for most VMs.
  3. In the OS pane, set Storage: "proxmox-isoimages" and pick the Ubuntu ISO from the "ISO image" dropdown.
  4. In the System pane, keep the defaults for everything EXCEPT checkmark "Qemu Agent".
  5. In the Disks pane, set Storage: proxmox-diskimages and Disk size: 16 GB.
  6. In the CPU pane, the default single core seems OK?
  7. In the Memory pane, the default 2 GB seems OK?
  8. In the Network pane, the default bridge seems OK; the VM should get a DHCP address.
  9. Select the test VM in the left pane, click "Start" button.
  10. Click on the console and install the operating system.
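The same wizard can be reproduced from a node's shell with qm. This is only a sketch of what the GUI steps above amount to; the VMID (100), bridge name (vmbr0), and ISO filename are assumptions to check against your own cluster.

  qm create 100 \
    --name test \
    --onboot 1 \
    --ostype l26 \
    --agent enabled=1 \
    --cores 1 --memory 2048 \
    --net0 virtio,bridge=vmbr0 \
    --scsihw virtio-scsi-pci \
    --scsi0 proxmox-diskimages:16,format=qcow2 \
    --ide2 proxmox-isoimages:iso/ubuntu-20.04.6-live-server-amd64.iso,media=cdrom \
    --boot 'order=scsi0;ide2'
  qm start 100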

Note that for Ubuntu 20.04, the default LVM config will only use about half the PV; you have to extend the LV to use most of the disk rather than leaving most of it unused.
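Inside the guest, extending the root LV over the rest of the volume group looks roughly like this, assuming the installer's default VG/LV names (ubuntu-vg/ubuntu-lv) and an ext4 root; verify with "sudo vgs" and "sudo lvs" first.

  sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv    # grow the LV into the free PV space
  sudo resize2fs /dev/ubuntu-vg/ubuntu-lv                # grow the ext4 filesystem to match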

After the install I disconnect the media from the virtual cdrom although I leave the virtual cdrom drive in place for possible later use.
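From a node's shell, the equivalent is a qm set on the CD-ROM slot; ide2 and VMID 100 are the wizard defaults assumed here.

  qm set 100 --ide2 none,media=cdrom    # eject the ISO but keep the virtual CD-ROM drive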


Verify disk image in CLI

  1. Previously, when I set up the NFS shares, I added symlinks from my NFS home directory to the proper automounter location, so ~/proxmox-diskimages is symlinked to /net/freenas/mnt/freenas-pool/proxmox-diskimages.  This means I can log into any Linux system on the LAN (via SSO using Samba and Active Directory) and "cd ~/proxmox-diskimages".
  2. The "~/proxmox-diskimages" directory contains "images".
  3. The "~/proxmox-diskimages/images" directory contains a directory with the VM ID number, probably 100 for your first VM.
  4. The "~/proxmox-diskimages/images/100" directory contains the qcow2 disk image, probably named something like "vm-100-disk-0.qcow2".  As near as I can tell it's thin provisioned: I allocated sixteen gigs at VM creation time, but I'm using maybe six or so gigs per VM image.

Verify disk image in Web UI

  1. Log into the Proxmox web UI, any cluster member is fine.
  2. Left pane, "Folder View", "Datacenter", "Storage", select any host's copy of "proxmox-diskimages"
  3. Right pane, "VM Disks", see the list of disks.
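Or, from a shell on any cluster node, something like the following (VMID 100 assumed):

  pvesm list proxmox-diskimages --vmid 100
  # the disk shows up as a volume ID like proxmox-diskimages:100/vm-100-disk-0.qcow2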

Two interesting things to note about VM disk storage in Proxmox, at least in Proxmox version 8:
  1. Configure a VM's disk image as 16 gigs and it thin-provisions on the NFS server using perhaps six or so gigs, but it displays in the Web UI under "Storage" as exactly 17.18 GB, because 16 GiB (16 × 2^30 bytes) works out to about 17.18 decimal GB.
  2. You CAN NOT migrate storage (what VMware would call "Storage vMotion") in the web UI from "Folder View" "Datacenter" "Storage" "Any shared NFS mount".  However, you CAN migrate storage from "Folder View" "Datacenter" "Virtual Machine" "Any VM", right pane "Hardware", "Hard Disk (scsi0)" (which isn't the SCSI controller, it's the virtual drive attached to the virtual SCSI controller), then the "Disk Action" dropdown in the menu bar, then "Move Storage".  Just a UI peculiarity.
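The same "Move Storage" operation is also available from the CLI; in this sketch "proxmox-diskimages2" stands in for whatever second storage you would move the disk to, and newer releases also spell the command "qm disk move".

  qm move-disk 100 scsi0 proxmox-diskimages2 --delete 1   # --delete 1 removes the source copy after a successful move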


Enable the QEMU guest agent

You have to enable the guest agent in Proxmox VE first, then enable it inside the VM, then power cycle the VM for Proxmox to make the internal changes necessary to connect to the agent.  Apparently it connects to the agent over a VirtIO port which cannot be hot-added, or at least not hot-added reliably.

If not enabled when the VM was created, in the config for the Proxmox VM, "Options" "QEMU Guest Agent", "Edit", checkmark "Use QEMU Guest Agent".
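The CLI equivalent, assuming VMID 100:

  qm set 100 --agent enabled=1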

In the console for the VM assuming Ubuntu 20.04 as a test VM:

"sudo apt-get install qemu-guest-agent"

Shutdown the VM "sudo shutdown now", then start the image in the web UI to let Proxmox connect it's guest agent tentacles into the VM.  Just rebooting usually will not let Proxmox initially connect to the VM.

Verify QEMU guest agent operation by looking at the "Summary" tab for the VM.  If the guest agent is connected the "IP" section will list the current VM IP addresses.  Which is handy, because if you installed SSHD on the VM when you initially installed the OS, you can now SSH to the VM at that IP address.  On the other hand if the guest agent is not working the "IP" section will contain a complaint to that effect.
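The agent can also be poked from a node's shell, which is a quick way to script the same check; VMID 100 is assumed.

  qm agent 100 ping                     # exits 0 with no output once the agent is responding
  qm agent 100 network-get-interfaces   # JSON with the same addresses the Summary tab shows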


Some basic cluster operations testing ideas:

  • Run a test Ubuntu server doing "something" in a tmux/screen session (more than just idling, perhaps running "top") for a couple of days.
  • Use live migration (as sketched below) to send a working VM to visit every host in the cluster.  Total downtime to visit every host in the cluster one time added up to around half a second, and no individual migration ever exceeded a tenth of a second of downtime.
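The live migrations themselves can be driven from any node's shell; "pve2" below is a placeholder for one of your node names.

  qm migrate 100 pve2 --online    # live-migrate VM 100 to node pve2
  # repeat with each node name to walk the VM around the cluster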


The next post will be about migrating production workload off OpenStack cluster number 1 into the new, small Proxmox VE cluster.  Later on, the hosts formerly in cluster OS1 will be migrated into the Proxmox VE cluster, providing a lot more capacity.  The other advantage of slowly rolling the workload over is that it doubles as a good test strategy for the Proxmox cluster system.
