Keeping the nodes in SUSE OpenStack Cloud up-to-date requires an appropriate setup of the update and pool repositories and the deployment of either the Updater barclamp or the SUSE Manager barclamp. For details, see Section 5.2, “Update and Pool Repositories”, Section 9.4.1, “Deploying Node Updates with the Updater Barclamp”, and Section 9.4.2, “Configuring Node Updates with the SUSE Manager Barclamp”.
If one of those barclamps is deployed, patches are installed on the nodes. Installing patches that do not require a reboot of a node does not come with any service interruption. If a patch (for example, a kernel update) requires a reboot after the installation, services running on the machine that is rebooted will not be available within SUSE OpenStack Cloud. Therefore it is strongly recommended to install those patches during a maintenance window.
As of SUSE OpenStack Cloud 6, it is not possible to put SUSE OpenStack Cloud into “Maintenance Mode”.
While the Administration Server is offline, it is not possible to deploy new nodes. However, rebooting the Administration Server has no effect on starting instances or on instances already running.
The consequences of rebooting a Control Node depend on the services running on that node:
Database, Keystone, RabbitMQ, Glance, Nova: No new instances can be started.
Swift: No object storage data is available. If Glance uses Swift, it will not be possible to start new instances.
Cinder, Ceph: No block storage data is available.
Neutron: No new instances can be started. On running instances the network will be unavailable.
Horizon: Horizon will be unavailable. Starting and managing instances can be done with the command line tools.
Whenever a Compute Node is rebooted, all instances running on that particular node will be shut down and must be manually restarted. Therefore it is recommended to “evacuate” the node by migrating instances to another node, before rebooting it.
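As a sketch of such an evacuation with the nova command line client (COMPUTE_NODE and INSTANCE_ID are placeholders; whether live migration is possible depends on your hypervisor and storage setup):
# list the instances running on the node that is going to be rebooted
nova list --all-tenants --host COMPUTE_NODE
# live-migrate an instance; if no target host is given, the scheduler picks one
nova live-migration INSTANCE_ID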
In case you need to restart your complete SUSE OpenStack Cloud (after a complete shut down or a power outage), the services need to be started in the following order:
Control Node/Cluster on which the Database is deployed
Control Node/Cluster on which RabbitMQ is deployed
Control Node/Cluster on which Keystone is deployed
Any remaining Control Node/Cluster. The following additional rules apply:
The Control Node/Cluster on which the neutron-server
role is deployed needs to be started before starting the node/cluster
on which the neutron-l3 role is deployed.
The Control Node/Cluster on which the nova-controller
role is deployed needs to be started before starting the node/cluster
on which Heat is deployed.
Compute Nodes
If multiple roles are deployed on a single Control Node, the services are automatically started in the correct order on that node. If you have more than one node on which multiple roles are installed, make sure they are started so that the order listed above is respected as closely as possible.
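To verify that the services on a node or cluster are up before starting the next one, you can check the cluster status on a cluster node with crm_mon, or query an individual service on a non-clustered Control Node with its start/stop script (openstack-keystone is an example):
crm_mon -1
rcopenstack-keystone status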
If you need to shut down SUSE OpenStack Cloud, the services need to be terminated in the reverse order of start-up:
Compute Nodes
Control Node/Cluster on which Heat is deployed
Control Node/Cluster on which the nova-controller
role is deployed
Control Node/Cluster on which the neutron-l3
role is deployed
All Control Node(s)/Cluster(s) on which none of the following services is deployed: Database, RabbitMQ, and Keystone.
Control Node/Cluster on which Keystone is deployed
Control Node/Cluster on which RabbitMQ is deployed
Control Node/Cluster on which the Database is deployed
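Before powering off the Compute Nodes (the first step above), it is advisable to shut down the instances running on them. A minimal sketch with the nova command line client (INSTANCE_ID is a placeholder):
# gracefully shut down an instance
nova stop INSTANCE_ID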
Upgrading from SUSE OpenStack Cloud 5 to SUSE OpenStack Cloud 6 is done via a Web interface guiding you through the process. The process consists of four phases:
Saving the configuration data of your SUSE OpenStack Cloud 5 installation in a data dump.
Re-installing and setting up the Administration Server with SUSE OpenStack Cloud 6.
Upgrading all nodes to SUSE Linux Enterprise Server 12 SP1 and SUSE OpenStack Cloud 6.
Re-applying the barclamps.
Before you start upgrading SUSE OpenStack Cloud, make sure the following requirements are met:
The Administration Server needs to have the latest SUSE OpenStack Cloud 5 updates installed. One of these updates will add the new upgrade routine to the Crowbar Web interface.
All other nodes need to have the latest SUSE OpenStack Cloud 5 updates and the latest SLES updates. If this is not the case, refer to Section 9.4.1, “Deploying Node Updates with the Updater Barclamp” for instructions.
All allocated nodes need to be turned on.
During the upgrade of the Control Nodes and the Compute Nodes, the instances need to be shut down. However, it is not necessary to do so at the beginning of the upgrade procedure. This step can be postponed until after the Administration Server has been upgraded to SUSE OpenStack Cloud 6 to keep the downtime as short as possible.
As of SUSE OpenStack Cloud 6, Hyper-V Nodes need to be re-installed after the upgrade procedure. This re-installation will overwrite the instances' data, which will therefore be lost. KVM, VMware, and Xen instances are not affected.
It is strongly recommended to create a backup of the Administration Server before starting the upgrade procedure, to be able to restore the server in case the upgrade fails. Refer to the chapter Backing Up and Restoring the Administration Server in the SUSE Cloud 5 documentation for instructions.
To start the upgrade procedure, proceed as follows:
Open a browser and point it to the Crowbar Web interface, for example
http://192.168.124.10/. Log in as user crowbar. The password is
crowbar if you have not changed the default.
Open Utilities › Upgrade.
Follow the instructions in the Web interface to create and save the upgrade data. Part 1 of the upgrade procedure is finished when you have saved the data.
Make sure to save the upgrade data to a location that can be accessed from the Administration Server after having re-installed it. Do not save it on the Administration Server itself, since it might get overwritten when re-installing the machine.
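For example, if the dump ended up on the Administration Server itself, it could be copied to an external host before the re-installation (host, path, and file name below are examples):
scp /tmp/crowbar-upgrade-data.tar.gz user@backuphost.example.com:/srv/backup/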
When the upgrade data has been saved, the Administration Server needs to be re-installed with SUSE OpenStack Cloud 6 on SUSE Linux Enterprise Server 12 SP1:
Check the network configuration of the Administration Server with the command
ifconfig. Note the MAC address and the IP address of
the interface named eth0. Also note the IP addresses
and ranges of all SUSE OpenStack Cloud networks. You can find them either in
/etc/crowbar/network.json or in the Networks
section in YaST Crowbar (see Section 7.2, “Networks” for details).
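A minimal sketch of collecting this information on the command line (the grep pattern assumes the default layout of network.json):
# note the MAC address (HWaddr) and the IP address of eth0
ifconfig eth0
# list subnets, netmasks, and address ranges of the SUSE OpenStack Cloud networks
grep -E '"(subnet|netmask|ranges)"' /etc/crowbar/network.json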
It is not possible to set up a second machine, install SUSE OpenStack Cloud 6, and then switch the old machine with the new one. The MAC addresses of the network interfaces need to be the same before and after the upgrade.
Reboot the Administration Server from a SUSE Linux Enterprise Server 12 SP1 installation source and install the operating system plus SUSE OpenStack Cloud 6 as an add-on product. For details, see Chapter 3, Installing the Administration Server.
SUSE OpenStack Cloud 6 does not use any of the repositories that
were required for SUSE OpenStack Cloud 5. In case you have mirrored
repositories to the Administration Server and /srv resides on
a separate partition, it is safe to format this partition to free space
for the new repositories.
Optional: If you have installed a local SMT server, configure it as described in Section 4.2, “SMT Configuration”. Make sure the repositories are set up and mirrored as described in Section 4.3, “Setting up Repository Mirroring on the SMT Server”.
Make sure all required repositories are made available as described in Chapter 5, Software Repository Setup.
Configure the network of the Administration Server as described in Chapter 6, Service Configuration: Administration Server Network Configuration. Make sure to use the exact same settings as in the previous installation.
Configure SUSE OpenStack Cloud with YaST Crowbar as described in Chapter 7, Crowbar Setup. Make sure to configure the exact same network settings for Crowbar as in the previous installation.
The Administration Server setup is finished as soon as you have finished the configuration with YaST Crowbar. Do not start the regular SUSE OpenStack Cloud Crowbar installation!
When the Administration Server has been set up and configured, return to the upgrade Web interface to upgrade all nodes in SUSE OpenStack Cloud:
Open a browser and point it to the Crowbar Web interface available on the
Administration Server, for example http://192.168.124.10/.
Start the upgrade process by uploading the upgrade data downloaded in Part 1 of the upgrade procedure. Follow the on-screen instructions to finish the upgrade process. Depending on the number of nodes in your installation, this will take up to several hours.
During the upgrade procedure you will be asked to provide login
credentials for the Crowbar Web interface two times. The first time,
provide the default login credentials
(crowbar/crowbar). The second time,
specify the ones you used with SUSE OpenStack Cloud 5. These credentials are also the
ones you need to provide for subsequent logins to the Crowbar Web interface.
When all nodes have been upgraded, the barclamps need to be re-applied:
Go to the Dashboard of the Crowbar Web interface and check whether all nodes have been successfully updated—all nodes should be listed in state Ready, indicated by a green dot.
If nodes have not been upgraded successfully, they are marked with a yellow or gray dot. Log in to those nodes (see How can I log in to a node as root?) and check the log files (see Appendix A, Log Files) to find the reason. Fix the issues and reboot the node to restart the upgrade process. For more information, also refer to What to do if a node is reported to be in the state Problem? and What to do if a node hangs at ….
When all nodes have been upgraded successfully, re-apply the barclamps: go to Barclamps › All Barclamps and apply the barclamps in the given order. For each barclamp, the service configuration and the deployment configuration are the same as on SUSE OpenStack Cloud 5, since they were restored from the data dump.
When all barclamps have been successfully deployed, you can restart the instances on the Compute Nodes.
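Restarting the instances can, for example, be done with the nova command line client (INSTANCE_ID is a placeholder):
# list instances that are shut off
nova list --all-tenants --status SHUTOFF
# start an instance again
nova start INSTANCE_ID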
When making an existing SUSE OpenStack Cloud deployment highly available (by setting up HA clusters and moving roles to these clusters), there are a few issues to pay attention to. To make existing services highly available, proceed as follows. Note that moving to an HA setup cannot be done without SUSE OpenStack Cloud service interruption, because it requires OpenStack services to be restarted.
Teaming network mode is required for an HA setup of SUSE OpenStack Cloud. If you are planning to move your cloud to an HA setup at a later point in time, make sure to deploy SUSE OpenStack Cloud with teaming network mode from the beginning. Otherwise a migration to an HA setup is not supported.
Make sure to have read the sections Section 1.5, “HA Setup” and Section 2.6, “High Availability” of this manual and taken any appropriate action.
Make the HA repositories available on the Administration Server as described in
Section 5.2, “Update and Pool Repositories”. Run the command
chef-client afterwards.
Set up your cluster(s) as described in Section 10.2, “Deploying Pacemaker (Optional, HA Setup Only)”.
To move a particular role from a regular control node to a cluster, you need to stop the associated service(s) before re-deploying the role on a cluster:
Log in to each node on which the role is deployed and stop its associated service(s) (a role can have multiple services). Do so by running the service's start/stop script with the stop argument, for example:
rcopenstack-keystone stop
See Appendix C, Roles and Services in SUSE OpenStack Cloud for a list of roles, services and start/stop scripts.
The following roles need additional treatment:
Stop the database on the node on which the Database barclamp is deployed with the command:
rcpostgresql stop
Copy /var/lib/pgsql to a temporary location
on the node, for example:
cp -ax /var/lib/pgsql /tmp
Redeploy the Database barclamp to the cluster. The original node may also be part of this cluster.
Log in to a cluster node and run the following command to
determine which cluster node runs the
postgresql service:
crm_mon -1
Log in to the cluster node running
postgresql.
Stop the postgresql
service:
crm resource stop postgresql
Copy the data backed up earlier to the cluster node:
rsync -av --delete NODE_WITH_BACKUP:/tmp/pgsql/ /var/lib/pgsql/
Restart the postgresql
service:
crm resource start postgresql
Copy the content of /var/lib/pgsql/data/ from
the original database node to the cluster node with DRBD or shared
storage.
If using Keystone with PKI tokens, the PKI keys on all nodes
need to be re-generated. This can be achieved by removing the
contents of /var/cache/*/keystone-signing/ on
the nodes. Use a command similar to the following on the
Administration Server as root:
# quote the glob so it is expanded on the remote node, not locally
for NODE in NODE1 NODE2 NODE3; do
  ssh $NODE "rm /var/cache/*/keystone-signing/*"
done
Go to the barclamp featuring the role you want to move to the
cluster. From the left side of the Deployment section,
remove the node the role is currently running on. Replace it with a
cluster from the Available Clusters section. Then
apply the proposal and verify that application succeeded via the
Crowbar Web interface. You can also check the cluster status via Hawk
or the crm /
crm_mon CLI tools.
Repeat these steps for all roles you want to move to a cluster. See Section 2.6.2.1, “Control Node(s)—Avoiding Points of Failure” for a list of services with HA support.
Moving to an HA setup also requires creating SSL certificates for nodes in the cluster that run services using SSL. Certificates need to be issued for the generated names (see Important: Proposal Name) and for all public names you have configured in the cluster.
After a role has been deployed on a cluster, its services are managed by the HA software. You must never manually start or stop an HA-managed service or configure it to start on boot. Services may only be started or stopped by using the cluster management tools Hawk or the crm shell. See http://www.suse.com/documentation/sle-ha-12/book_sleha/data/sec_ha_config_basics_resources.html for more information.
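For example, an HA-managed service is restarted through the cluster manager rather than with its start/stop script (the resource name postgresql is an example):
# query the status of the resource
crm resource status postgresql
# restart it under cluster control
crm resource restart postgresql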
Backing Up and Restoring the Administration Server can either be done via the Crowbar
Web interface or on the Administration Server's command line via the crowbarctl
backup command. Both tools provide the same functionality.
To use the Web interface for backing up and restoring the Administration Server, go to
the Crowbar Web interface on the Administration Server, for example
http://192.168.124.10/. Log in as user crowbar. The password is
crowbar if you have not changed the default. Go to Utilities › Backup & Restore.
To create a backup, click the respective button. Provide a descriptive name (allowed characters are letters, numbers, dashes, and underscores) and confirm. Alternatively, you can upload a backup, for example from a previous installation.
Existing backups are listed with name and creation date. For each backup, three actions are available:
Download a copy of the backup file. The TAR archive you receive with this download can be uploaded again later.
Restore the backup.
Delete the backup.
Backing up and restoring the Administration Server from the command line can be done
with the command crowbarctl backup. For general
help, run the command crowbarctl backup --help; help on
a subcommand is available by running crowbarctl backup
SUBCOMMAND --help.
The following commands for creating and managing backups exist:
crowbarctl backup create NAME
Create a new backup named NAME. It will be
stored at /var/lib/crowbar/backup.
crowbarctl backup restore [--yes] NAME
Restore the backup named NAME. You will be
asked for confirmation before any existing proposals are
overwritten. When using the option --yes, confirmations
are turned off and the restore is forced.
crowbarctl backup delete NAME
Delete the backup named NAME.
crowbarctl backup download NAME [FILE]
Download the backup named NAME. If you
specify the optional [FILE], the download is
written to the specified file. Otherwise it is saved to the current
working directory with an automatically generated file name. If
specifying - for [FILE],
the output is written to STDOUT.
crowbarctl backup list
List existing backups. You can optionally specify different
output formats and filters—refer to crowbarctl backup
list --help for details.
crowbarctl backup upload FILE
Upload a backup from FILE.
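A typical command line session based on the commands above could look as follows (the backup name is an example):
# create a backup and verify that it is listed
crowbarctl backup create admin-backup-1
crowbarctl backup list
# download a copy to the current working directory
crowbarctl backup download admin-backup-1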