Monday, July 18, 2022

Adventures of a Small Time OpenStack Sysadmin Chapter 020 - OpenStack Neutron Network Service

Adventures of a Small Time OpenStack Sysadmin relates the experience of converting a small VMware cluster into two small OpenStack clusters, and the adventures and friends I made along the way.


My references for installing Neutron:

Administration Guide aka OpenStack Networking Guide

https://docs.openstack.org/neutron/yoga/admin/config.html

Configuration and Policy References

https://docs.openstack.org/neutron/yoga/configuration/

Networking service Installation Guide

https://docs.openstack.org/neutron/yoga/install/

Note that, in the usual circular fashion, Nova has to be configured to contact Neutron later on.
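For what it's worth, the piece that has to wait for the Nova install is a [neutron] section in nova.conf pointing Nova at the Neutron endpoint and service credentials. A rough sketch, with the controller hostname, domains, and password as placeholders rather than anything from my actual config:

# /etc/nova/nova.conf -- sketch only; hostname and password are placeholders
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS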

A manual install like this uses LinuxBridge, while Kolla-Ansible uses Open vSwitch, so experience with LinuxBridge will not transfer over to Kolla-Ansible.

I was confused by the local_ip setting in /etc/neutron/plugins/ml2/linuxbridge_agent.ini for VXLAN; the answer, in the end, is that it should be the host's IP address on my overlay network.
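For anyone else staring at that file, the [vxlan] section ended up looking roughly like this; the address below is a placeholder standing in for the host's IP on the overlay network, not a value from my real config:

# /etc/neutron/plugins/ml2/linuxbridge_agent.ini -- sketch only
[vxlan]
enable_vxlan = true
# local_ip is this host's address on the OVERLAY network,
# not the management or provider address.
local_ip = 10.20.30.11
l2_population = true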

I configured my MTU as per:

https://docs.openstack.org/neutron/yoga/admin/config-mtu.html

This was fairly uneventful; just remember to set the MTU in both neutron.conf AND ml2_conf.ini, and don't forget to test using long ping packets.  I did run into a web UI problem with old firmware on an old NetGear managed ethernet switch, where it's possible to configure a longer frame length in the UI than the device can actually pass, which was "funny" although easy to detect and fix.
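Concretely, the two MTU settings look something like the sketch below (9000 matches my jumbo-frame LAN), and the long-ping test is just a don't-fragment ping with an 8972-byte payload, which is 9000 minus 20 bytes of IPv4 header and 8 bytes of ICMP header:

# /etc/neutron/neutron.conf -- sketch only
[DEFAULT]
global_physnet_mtu = 9000

# /etc/neutron/plugins/ml2/ml2_conf.ini -- sketch only
[ml2]
path_mtu = 9000

# Then test end to end with a full-size, don't-fragment ping;
# the target here is just an example host on the jumbo-frame LAN:
ping -M do -s 8972 10.10.200.41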

I followed the instructions here to configure my overlay networks; it was uneventful and just works:

https://docs.openstack.org/neutron/yoga/admin/deploy-lb-selfservice.html
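For the record, the self-service side boils down to a handful of commands along these lines; the names and address range here are made up for illustration, following that guide:

# Overlay (self-service) network, a subnet on it, and a router tying it
# to the provider network.  Names and ranges are examples, not my config.
openstack network create selfservice1
openstack subnet create --subnet-range 192.168.100.0/24 \
  --network selfservice1 --dns-nameserver 10.10.200.41 selfservice1-v4
openstack router create router1
openstack router add subnet router1 selfservice1-v4
# The provider network needs the external flag for the gateway step;
# "openstack network set --external provider1" adds it if missing.
openstack router set --external-gateway provider1 router1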

A pretty good reference for metadata is:

https://docs.openstack.org/nova/yoga/user/metadata.html
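The quickest way to see what the metadata service is actually handing out is to query it from inside an instance:

# From inside a running instance:
curl http://169.254.169.254/openstack/latest/meta_data.json
# The EC2-compatible view lives alongside it:
curl http://169.254.169.254/latest/meta-data/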

I experimented with linking Designate to Neutron, which technically worked, but in the long run I prefer to organize my DNS somewhat differently from my project organization.  Some DNS entries should live under project DNS zones, others are global; it can get complicated.  Automatic DNS entries are a nice idea, though.

For the manual install, I made one flat provider access network.  That turned out to be a big mistake when my hardware firewall crashed and I wanted to spawn a software firewall: I had to do that on the old VMware cluster, which delayed the deployment of OpenStack Cluster 2 until the firewall was replaced.  This is why Plan/Cluster 2.0 has provider networks for all my VLANs, so I could run pfSense as a firewall if I had to.

Configuring my flat provider network looked like this:

openstack network create --share --provider-network-type flat \
  --provider-physical-network provider --mtu 9000 provider1

openstack subnet create --subnet-range 10.10.0.0/16 --gateway 10.10.1.1 \
  --network provider1 --allocation-pool start=10.10.248.2,end=10.10.251.253 \
  --dns-nameserver 10.10.200.41 --no-dhcp provider1-v4

I shudder to imagine what happens if you leave "--no-dhcp" off a provider subnet.  I suspect the result would be very bad indeed if two DHCP servers, the one in OpenStack and whatever existing external solution you have, get into a fight on a LAN.

The Neutron metadata service, as configured by default, will put nonsense in /etc/resolv.conf.  Nothing some hand editing and a chattr +i /etc/resolv.conf can't fix.  Most automation invented to "make /etc/resolv.conf simpler" just makes it harder.
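The "fix" is about as low-tech as it sounds; a sketch, with a placeholder search domain and my DNS server standing in for whatever you actually use:

# Hand-edit sane values in, then make the file immutable so nothing
# "helpful" overwrites it later.  Domain and nameserver are placeholders.
cat > /etc/resolv.conf <<EOF
search example.internal
nameserver 10.10.200.41
EOF
chattr +i /etc/resolv.conf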

I experimented with installing Kuryr because I wanted to run OpenStack containers, and it went very poorly.  I am uncertain whether the docs or I were in error, but somehow I installed local Python packages for iproute2 that were compatible with Kuryr and INCOMPATIBLE with Neutron; in fact, Neutron was impossible to restart until I wiped out my Kuryr install.  So, that was moderately painful and inadvisable.  This was about when I started thinking Plan 2.0 would involve Kolla-Ansible rather than hand installation, as Kuryr and Zun work out of the box on Kolla but are seemingly uninstallable by hand, as I currently understand things.

One of the big problems with cookbook solutions like the installation guide is that they encourage people not to learn how the system works. Then you pile multiple layers of abstraction on top, with minimal or no debugging facilities, and problems develop later.

At the time of cutting and pasting, I was setting up a rather unambitious "flat" provider network using one unbonded ethernet connection and LinuxBridge as my virtual switch, and the cookbook instructions ask me to add a single line to linuxbridge_agent.ini: "physical_interface_mappings = provider:eno1". How nice of OpenStack to let me tag my provider interface as a provider. I never noticed, later on when I cut and pasted the network configuration line, that I was attaching my OpenStack network "provider1" to the label "provider", not to eno1 or anything like that. I have to admit it works great, although it confuses sysadmins.
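To spell out the mapping that tripped me up: the left-hand side of physical_interface_mappings is an arbitrary physical-network label, the right-hand side is the real interface, and the network create command references the label, never the interface. Roughly:

# /etc/neutron/plugins/ml2/linuxbridge_agent.ini -- sketch only
[linux_bridge]
# "provider" is just a label; eno1 is the actual NIC it maps to.
physical_interface_mappings = provider:eno1

# /etc/neutron/plugins/ml2/ml2_conf.ini -- sketch only
[ml2_type_flat]
flat_networks = provider

# The CLI references the label "provider"; "provider1" is merely the
# Neutron network's display name, and eno1 never appears at all.
openstack network create --share --provider-network-type flat \
  --provider-physical-network provider --mtu 9000 provider1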

This vast simplification of how the very complicated OpenStack network system works made life VERY exciting later on, when I was using Kolla-Ansible with its default Open vSwitch, trying to set up multiple VLANs on a bonded pair of ethernet ports, and Kolla-Ansible doesn't use the name "provider1". Oh, and don't forget: this is OpenStack, where the only easily accessible error message you'll get is that instance deploys fail. Centralized Logging is cool when it works, but there's an ElasticSearch version incompatibility in Kolla-Ansible Yoga (or there was; maybe it's fixed by the time you read this?), so your only log access is logging into Docker containers and poking around as best you can. But that "fun" is a story for another day.

Anyway, today, Neutron is working.  Tomorrow is Nova day.

Stay tuned for the next chapter!
