Tuesday, August 23, 2022

Adventures of a Small Time OpenStack Sysadmin Chapter 055 - Kolla-Ansible installation on Cluster 1 aka hosts 1, 2, and 3

Adventures of a Small Time OpenStack Sysadmin relates the experience of converting a small VMware cluster into two small OpenStack clusters, and the adventures and friends I made along the way.


References

https://docs.openstack.org/kolla-ansible/yoga/user/quickstart.html

https://docs.openstack.org/kolla-ansible/yoga/reference/index.html

Pre-Kolla-Ansible Preparation

The OS packages are already installed by Ansible, so begin following along with the web instructions for Kolla-Ansible (linked above) at the venv stage.

The new controller 1 will be on OS3; everything below is done as root.

Python Prep

python3 -m venv /root/kolla-ansible

source /root/kolla-ansible/bin/activate

pip install -U pip

pip install 'ansible>=4,<6'

Kolla-Ansible Installation

This is installing Kolla-Ansible itself, not using Kolla-Ansible to install an OpenStack cluster (that comes later).

pip install git+https://opendev.org/openstack/kolla-ansible@stable/yoga

mkdir /etc/kolla

chown root:root /etc/kolla

cp /root/kolla-ansible/share/kolla-ansible/etc_examples/kolla/* /etc/kolla

cp /root/kolla-ansible/share/kolla-ansible/inventory/* .

kolla-ansible install-deps

Run ssh-keygen to create an SSH key for root, and add it to GitLab so I can access the repos which store my configs, scripts, and templates.

git clone the openstack-scripts repo and the glance-loader repo.  Feel free to adapt these to your own needs; I intentionally make these repos public to be seen and used.

mkdir /etc/kolla/globals.d

cp /root/openstack-scripts/backup/whatever/globals.d/* /etc/kolla/globals.d and edit them if necessary

mkdir /etc/kolla/config

cp -R /root/openstack-scripts/backup/whatever/config/* /etc/kolla/config and edit them if necessary

The online docs recommend setting some Ansible config options; their instructions assume a non-venv install, so I put my config in /root/ansible.cfg instead.
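For reference, the options the quickstart recommends come out looking like this in /root/ansible.cfg (tune forks to taste):

```ini
[defaults]
host_key_checking=False
pipelining=True
forks=100
```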

Edit /root/multinode; note this inventory is for Cluster 1, which uses hosts 1, 2, and 3.
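The top of the edited multinode inventory ends up looking roughly like this (a sketch only; the hostnames are illustrative, substitute your own, and the as-shipped file has more groups below these):

```ini
[control]
os[1:3]

[network]
os[1:3]

[compute]
os[1:3]

[monitoring]
os1

[storage]
os[1:3]
```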

Final Pre-Deployment Config

Run kolla-genpwd, then examine /etc/kolla/passwords.yml and note that some passwords will need changing.

Be sure to edit the keystone_admin_password line in /etc/kolla/passwords.yml, as you're not going to like the admittedly highly secure autogenerated password.

Likewise edit the kibana_password line, as you're not going to like the autogenerated Kibana password either.

Be sure to edit or verify /etc/kolla/config/ml2_conf.ini and set the network_vlan_ranges variable to a reasonable range of VLAN IDs, such as bond12:1:1000 (I'm only using VLANs 10, 20, 30, and so on through 60 on interface bond12).
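The relevant fragment of /etc/kolla/config/ml2_conf.ini comes out something like this (the range matches the example above; adjust to your own VLANs):

```ini
[ml2_type_vlan]
network_vlan_ranges = bond12:1:1000
```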

Run the Swift disk labeler on ALL Swift disks on each host.

apt install docker.io, because the Swift ring generator requires Docker.

Make sure /etc/kolla/config/swift is empty... for now.

Run the ring maker script in openstack-scripts to make the rings.

Configure Individual Kolla-Ansible Product Globals.d Files

For every service in the product reference, there will be a .yml file in /etc/kolla/globals.d, for example neutron.yml to configure the OpenStack Neutron service.  There are no changes to the as-shipped globals.yml file other than one line, for bug-workaround reasons (explained later on).  My backups of these files work for me; you may find my configurations inspirational, or at least amusing, as a starting point for your own cluster.  Some typical starting points for configuration:

kolla_ansible.yml

distro, vip addrs, and keepalived virt router ID

central_logging.yml

enable_central_logging: "yes"

That will eventually provide kibana on port 5601


cinder.yml

Enable LVM backend, use the ssd volume group, and swift as backup driver.

I go back and forth on using Swift or a shared NFS mount for backups; I'm probably 51% better off with Swift plus tools like rclone.
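A sketch of what that /etc/kolla/globals.d/cinder.yml might contain (the volume group name is an assumption based on the "ssd volume group" above; check the Kolla-Ansible reference for your release):

```yaml
enable_cinder: "yes"
enable_cinder_backend_lvm: "yes"
cinder_volume_group: "ssd"
cinder_backup_driver: "swift"
```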

glance.yml

Disable file backend, enable swift backend
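That translates to something like this in /etc/kolla/globals.d/glance.yml (a sketch; variable names per the yoga reference docs):

```yaml
glance_backend_file: "no"
glance_backend_swift: "yes"
```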

heat.yml

Empty initial config

horizon.yml

Empty initial config

keystone.yml

Empty initial config

neutron.yml

network_interface: "whatever the bind interface is for the dual 10G on Prod VLAN"

neutron_external_interface: "whatever the bind interface is for the dual 1G"

kolla_internal_vip_address: "10.10.20.62" aka controller2.cedar.mulhollon.com

keepalived_virtual_router_id: "cluster number, 2 in this case"

Note that Kolla-Ansible uses the openvswitch agent whereas all my experience is with linuxbridge.
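Putting those values together, the neutron.yml sketch might look like this (the interface names are placeholders for my bonded NICs; substitute your own, and note I've guessed bond12 for the external side based on the VLAN range earlier):

```yaml
network_interface: "bond10"
neutron_external_interface: "bond12"
kolla_internal_vip_address: "10.10.20.62"
keepalived_virtual_router_id: "2"
```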

nova.yml

Empty initial config

swift.yml

I have to set up swift rings by hand as per:

https://docs.openstack.org/kolla-ansible/yoga/reference/storage/swift-guide.html

Note that the Kolla-Ansible docs describe an opaque process using Docker to generate rings, with a pointer to the Swift docs as an explanation, whereas the Swift docs use a completely different method.  So that's confusing.

Work around the bootstrapping bug

Because I'm not modifying the globals.yml file and am doing all configuration in individual .yml files in globals.d, I trigger a bug:

https://bugs.launchpad.net/kolla-ansible/+bug/1970638

Instead of setting a dummy variable, I make ONE edit to /etc/kolla/globals.yml to set the base distro to ubuntu.  Now bootstrapping works...
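That single edit to /etc/kolla/globals.yml is just:

```yaml
kolla_base_distro: "ubuntu"
```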

Bootstrap

Make sure the venv is activated (if no (kolla-ansible)  in the prompt, source /root/kolla-ansible/bin/activate)

kolla-ansible -i ./multinode bootstrap-servers

Pre-Deployment Checks

Make sure the venv is activated (if no (kolla-ansible)  in the prompt, source /root/kolla-ansible/bin/activate)

kolla-ansible -i ./multinode prechecks

Deployment

Make sure the venv is activated (if no (kolla-ansible)  in the prompt, source /root/kolla-ansible/bin/activate)

kolla-ansible -i ./multinode deploy

Generate the admin-openrc.sh file:

kolla-ansible post-deploy

. /etc/kolla/admin-openrc.sh

Now copy /etc/kolla/admin-openrc.sh wherever you need it.

Possibly create demonstration data:

/root/kolla-ansible/share/kolla-ansible/init-runonce

Or more likely just use the HEAT templates.

Prepare the CLI

If on OS3, make sure the venv is activated (if no (kolla-ansible) in the prompt, source /root/kolla-ansible/bin/activate)

pip install python-openstackclient -c https://releases.openstack.org/constraints/upper/yoga

Or, more likely:

run the install-cli script from openstack-scripts repo.

Local Configuration

Run network scripts to create provider nets and ip pools

Run the keypair script to upload ssh keys

Run the flavor script to upload the flavors

Run the glance-loader repo scripts to upload some usable install images

Run the heat scripts to set up all the projects

Use the web UI to add myself and the admin user to all the projects, with the admin role for both.

Create the /etc/kolla/admin-project-name openrc scripts for each individual project.  Look at how individual scripts in the projects handle the issue; there are probably more elegant options that avoid individual files.
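One possible way to generate those per-project openrc files is to copy admin-openrc.sh and override OS_PROJECT_NAME. This sketch is self-contained (it fabricates a sample file in a temp dir, and the project names are illustrative); in practice, point it at the real /etc/kolla/admin-openrc.sh and your real project list:

```shell
set -e
dir=$(mktemp -d)

# Stand-in for /etc/kolla/admin-openrc.sh, generated by post-deploy.
cat > "$dir/admin-openrc.sh" <<'EOF'
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
EOF

# Derive one openrc per project, overriding only the project name.
for proj in alpha beta; do
  sed "s/^export OS_PROJECT_NAME=.*/export OS_PROJECT_NAME=$proj/" \
      "$dir/admin-openrc.sh" > "$dir/admin-$proj-openrc.sh"
done

cat "$dir/admin-alpha-openrc.sh"
```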

Run the individual heat project scripts to set up individual projects (the projects themselves were previously created as a group).

Test the backup script in openstack-scripts

Run the heat scripts to set up some test instances.

Conclusion

I completed this step in a long afternoon.  In a matter of hours, Kolla-Ansible got me twice as far as I had gotten configuring OpenStack by hand over the course of about two weeks.

After this monstrous long post, tomorrow will be a short post about Prometheus.

Stay tuned for the next chapter!
