After the nodes have been installed and configured, you can start deploying the OpenStack services to finalize the installation. The services need to be deployed in a given order because they depend on one another. The service for an HA setup is the only exception to this rule; it can be set up at any time. However, when deploying SUSE OpenStack Cloud from scratch, it is recommended to deploy the proposal(s) first. All services are deployed from the Crowbar Web interface through recipes, so-called “barclamps”.
The services controlling the cloud (including storage management and control services) need to be installed on the Control Node(s) (refer to Section 1.2, “The Control Node(s)” for more information). However, you must not use your Control Node(s) as compute nodes or as storage hosts for Swift or Ceph. Services that must not be installed on the Control Node(s) include all Ceph services; these need to be installed on dedicated nodes.
When deploying an HA setup, the controller nodes are replaced by one or more controller clusters consisting of at least two nodes (three are recommended). Setting up three separate clusters—for data, services, and networking—is recommended. See Section 2.6, “High Availability” for more information on requirements and recommendations for an HA setup.
The OpenStack services need to be deployed in the following order. For general instructions on how to edit and deploy barclamps, refer to Section 10.1, “Barclamps”. Deploying Pacemaker (only needed for an HA setup), Swift, and Ceph is optional; all other services must be deployed.
The OpenStack services are automatically installed on the nodes by using so-called barclamps—a set of recipes, templates, and installation instructions. A barclamp is configured via a so-called proposal. A proposal contains the configuration of the service(s) associated with the barclamp and a list of machines onto which the barclamp should be deployed.
All existing barclamps can be accessed from the Crowbar Web interface by clicking . To create or edit barclamp proposals and deploy them, proceed as follows:
Open a browser and point it to the Crowbar Web interface available on the
Administration Server, for example http://192.168.124.10/. Log in
as user crowbar. The password
is crowbar by default, if you have not changed it.
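As an optional sanity check, you can verify from the command line that the Crowbar Web interface answers before logging in. The IP address and credentials below are the defaults mentioned above; the use of digest authentication is an assumption about your setup.

```shell
# Check that the Crowbar Web interface is reachable (host is the default
# Administration Server address; adjust it to your setup).
check_crowbar() {
  local host=${1:-192.168.124.10}
  curl -sf --digest -u crowbar:crowbar -o /dev/null "http://${host}/" \
    && echo "Crowbar Web interface on ${host} is reachable"
}
# Example: check_crowbar 192.168.124.10
```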
Click to open the menu. Alternatively you may filter the list to or barclamps by choosing the respective option from . The barclamps contain general recipes for setting up and configuring all nodes, while the barclamps are dedicated to OpenStack service deployment and configuration.
You can either a proposal or an existing one.
Most OpenStack barclamps consist of two sections: the section lets you change the configuration, and the section lets you choose onto which nodes to deploy the barclamp.
To edit the section, change the values via the Web form. Alternatively you can directly edit the configuration file by clicking .
If you switch between mode and Web form ( mode), make sure to your changes before switching, otherwise they will be lost.
To assign nodes to a role, use the section of the OpenStack barclamp. It shows the that you can assign to the roles belonging to the barclamp. If the barclamp contains roles that can also be deployed to a cluster and if you have deployed the Pacemaker barclamp, the section of the barclamp will additionally list and of . The latter are clusters that contain both “normal” nodes and Pacemaker remote nodes. See Section 2.6.3, “High Availability of the Compute Node(s)” for the basic details.
One or more nodes are usually automatically pre-selected for available roles. If this pre-selection does not meet your requirements, click the icon next to the role to remove the assignment. Assign a node or cluster of your choice by selecting the respective entry from the list of , , or . Drag it to the desired role and drop it onto the role name. Do not drop a node or cluster onto the text box—this is used to filter the list of available nodes or clusters!
If you try to assign clusters or clusters with remote nodes to roles that can only be assigned to individual nodes, the Crowbar Web interface will refuse to accept and show an error message. If you assign a cluster with remote nodes to a role that can only be applied to “normal” (Corosync) nodes, the role will only be applied to the Corosync nodes of that cluster—not to the remote nodes of the same cluster.
To save and deploy your edits, click . To save your changes without deploying them, click . To remove the complete proposal, click . A proposal that already has been deployed can only be deleted manually, see Section 10.1.1, “Delete a Proposal That Already Has Been Deployed” for details.
If you deploy a proposal onto a node where a previous one is still active, the new proposal will overwrite the old one.
Deploying a proposal might take some time (up to several minutes). It is strongly recommended to always wait until you see the note “Successfully applied the proposal” before proceeding on to the next proposal.
In case the deployment of a barclamp fails, make sure to fix the reason that has caused the failure and deploy the barclamp again. Refer to the respective troubleshooting section at OpenStack Node Deployment for help. A deployment failure may leave your node in an inconsistent state.
To delete a proposal that already has been deployed, you first need to it in the Crowbar Web interface. Deactivating a proposal removes the chef role from the nodes, so the routine that installed and set up the services is not executed anymore. After a proposal has been deactivated, you can it in the Crowbar Web interface to remove the barclamp configuration data from the server.
Deactivating and deleting a barclamp that has already been deployed does
not remove packages installed when the barclamp was
deployed. Nor does it stop any services that were started during the
barclamp deployment. To undo the deployment on the affected node, you need
to stop (systemctl stop
service) the respective services and
disable (systemctl disable
service) them. Uninstalling packages
should not be necessary.
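The stop-and-disable routine above can be sketched as a small helper. The service names passed to it are placeholders; substitute the services the barclamp actually started.

```shell
# Sketch: undo a barclamp deployment on an affected node (run as root).
# Stops and disables each given service; packages are left installed.
undo_barclamp_services() {
  local svc
  for svc in "$@"; do
    systemctl stop "$svc"      # stop the running service
    systemctl disable "$svc"   # prevent it from starting on boot
  done
}
# Example (hypothetical service names):
# undo_barclamp_services openstack-swift-proxy openstack-swift-object
```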
When a proposal is applied to one or more nodes that are not yet available for deployment (for example, because they are rebooting or have not yet been fully installed), the proposal will be put in a queue. A message like
Successfully queued the proposal until the following become ready: d52-54-00-6c-25-44
will be shown when the proposal is applied. A new button will also become available. Use it to cancel the deployment of the proposal by removing it from the queue.
By setting up one or more clusters by deploying Pacemaker, you can make the SUSE OpenStack Cloud controller functions and the Compute Nodes highly available (see Section 2.6, “High Availability” for details). Since it is possible (and recommended) to deploy more than one cluster, a separate proposal needs to be created for each cluster.
Deploying Pacemaker is optional. In case you do not want to deploy it, skip this section and start the node deployment by deploying the database as described in Section 10.3, “Deploying the Database”.
To set up a cluster, at least two nodes are required. If you set up a cluster with replicated storage via DRBD (for example, a cluster for the database and RabbitMQ), exactly two nodes are required. For all other setups an odd number of nodes with a minimum of three nodes is strongly recommended. See Section 2.6.5, “Cluster Requirements and Recommendations” for more information.
To create a proposal, go to › and click for the Pacemaker barclamp. A drop-down box where you can enter a name and a description for the proposal opens. Click to open the configuration screen for the proposal.
The name you enter for the proposal will be used to generate host names for the virtual IPs of HAProxy. The name uses the following scheme:
NAME.cluster-PROPOSAL_NAME.FQDN
When PROPOSAL_NAME is set to data, this results
in, for example,
controller.cluster-data.example.com.
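The scheme can be illustrated with a short shell snippet (all values are the example's):

```shell
# Illustration of the host name scheme NAME.cluster-PROPOSAL_NAME.FQDN
name=controller
proposal=data        # the proposal name entered in the barclamp
fqdn=example.com
echo "${name}.cluster-${proposal}.${fqdn}"   # prints controller.cluster-data.example.com
```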
The following options are configurable in the Pacemaker configuration screen:
Choose a technology used for cluster communication. You can select between , (sending a message to multiple destinations) or (sending a message to a single destination). By default multicast is used.
Whenever communication fails between one or more nodes and the rest of the cluster, a “cluster partition” occurs. The nodes of a cluster are split into partitions but are still active. They can only communicate with nodes in the same partition and are unaware of the separated nodes. The cluster partition with the majority of nodes is defined to have “quorum”.
This configuration option defines what to do with the cluster partition(s) that do not have the quorum. See http://www.suse.com/documentation/sle-ha-12/book_sleha/data/sec_ha_config_basics_global.html, section Option no-quorum-policy for details.
The recommended setting is to choose . However, is enforced for two-node clusters to ensure that the remaining node continues to operate normally in case the other node fails. For clusters using shared resources, choosing may be used to ensure that these resources continue to be available.
“Misbehaving” nodes in a cluster are shut down to prevent them from causing trouble. This mechanism is called STONITH (“Shoot the other node in the head”). STONITH can be configured in a variety of ways; refer to http://www.suse.com/documentation/sle-ha-12/book_sleha/data/cha_ha_fencing.html for details. The following configuration options exist:
STONITH will not be configured when deploying the barclamp. It needs to be configured manually as described in http://www.suse.com/documentation/sle-ha-12/book_sleha/data/cha_ha_fencing.html. For experts only.
Using this option automatically sets up STONITH with data received from the IPMI barclamp. It requires IPMI to be configured for all cluster nodes. This should be done by default when deploying SUSE OpenStack Cloud. To check or change the IPMI deployment, go to › › › . Also make sure the option is set to on this barclamp.
To configure STONITH with the IPMI data, all STONITH devices must support IPMI. Problems with this setup may occur with IPMI implementations that are not strictly standards compliant. In this case it is recommended to set up STONITH with STONITH block devices (SBD).
This option requires you to manually set up shared storage on the cluster nodes before applying the proposal. To do so, proceed as follows:
Prepare the shared storage. It needs to be reachable by all nodes and must not use host-based RAID, cLVM2, or DRBD.
Install the package sbd
on all cluster nodes.
Initialize the SBD device by running the following command. Make sure to replace /dev/SBD with the path to the shared storage device.
sbd -d /dev/SBD create
Refer to http://www.suse.com/documentation/sle-ha-12/book_sleha/data/sec_ha_storage_protect_fencing.html#pro_ha_storage_protect_sbd_create for details.
After the shared storage has been set up, specify the path using
the “by-id” notation
(/dev/disk/by-id/DEVICE).
It is possible to specify multiple paths as a comma-separated list.
Deploying the barclamp will automatically complete the SBD setup on the cluster nodes by starting the SBD daemon and configuring the fencing resource.
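The steps above can be summarized in a helper sketch; the device path is a placeholder for your shared storage:

```shell
# Sketch of the SBD initialization steps (package sbd must be installed
# on all cluster nodes first).
sbd_init() {
  local dev=$1               # e.g. /dev/disk/by-id/scsi-... (example path)
  sbd -d "$dev" create       # write the SBD header; run once, from one node
  sbd -d "$dev" list         # verify that the device was set up
}
# Example: sbd_init /dev/disk/by-id/scsi-EXAMPLE
```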
All nodes will use the exact same configuration. Specify the to use and enter for the agent.
To get a list of STONITH devices supported by the High Availability Extension, run the following command on an already installed cluster node: stonith -L. The list of parameters depends on the respective agent. To view a list of parameters, use the following command:
stonith -t agent -n
All nodes in the cluster use the same , but can be configured with different parameters. This setup is, for example, required when nodes are in different chassis and therefore need different ILO parameters.
To get a list of STONITH devices supported by the High Availability Extension, run the following command on an already installed cluster node: stonith -L. The list of parameters depends on the respective agent. To view a list of parameters, use the following command:
stonith -t agent -n
Use this setting for completely virtualized test installations. This option is not supported.
With STONITH, Pacemaker clusters with two nodes may sometimes hit an issue known as STONITH deathmatch where each node kills the other one, resulting in both nodes rebooting all the time. Another similar issue in Pacemaker clusters is the fencing loop, where a reboot caused by STONITH will not be enough to fix a node and it will be fenced again and again.
This setting can be used to limit these issues. When set to , a node that has not been properly shut down or rebooted will not start the Pacemaker services on boot and will wait for action from the SUSE OpenStack Cloud operator. When set to , the Pacemaker services will always be started on boot. The value automatically picks the most appropriate setting: for two-node clusters (to avoid STONITH deathmatches), and otherwise.
When a node boots but does not start Corosync because of this setting, the node will be displayed with its status set to "Problem" (red bullet) in the . To make this node usable again, the following steps need to be performed:
Connect to the node via SSH from the Administration Server and run either systemctl start pacemaker or rm /var/spool/corosync/block_automatic_start. Waiting for the next periodic chef-client run, or manually running chef-client, is also recommended.
On the Administration Server, run the following command to update the status of the node specified with NODE.
crowbar crowbar transition NODE ready
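A sketch combining both recovery steps, to be run from the Administration Server (the node name is an example; root SSH access to the node is assumed):

```shell
# Bring a blocked cluster node back and update its status in Crowbar.
recover_cluster_node() {
  local node=$1   # e.g. d52-54-00-6c-25-44
  # Start Pacemaker, or remove the blocking marker file if that fails
  ssh "root@$node" 'systemctl start pacemaker || rm -f /var/spool/corosync/block_automatic_start'
  ssh "root@$node" 'chef-client'               # trigger a chef-client run immediately
  crowbar crowbar transition "$node" ready     # update the node status in Crowbar
}
# Example: recover_cluster_node d52-54-00-6c-25-44
```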
Get notified of cluster node failures via e-mail. If set to , you need to specify which to use, a prefix for the mail subject, and sender and recipient addresses. Note that the SMTP server must be accessible by the cluster nodes.
Set up DRBD for replicated storage on the cluster. This option requires a two-node cluster with a spare hard disk for each node. The disks should have a minimum size of 100 GB. Using DRBD is recommended for making the database and RabbitMQ highly available. For other clusters, set this option to .
The public name is the host name that will be used instead of the generated public name (see Important: Proposal Name) for the public virtual IP of HAProxy. (This is the case when registering public endpoints, for example). Any name specified here needs to be resolved by a name server placed outside of the SUSE OpenStack Cloud network.
The Pacemaker service consists of the following roles. Deploying the role is optional:
Deploy this role on all nodes that should become members of the cluster, except for the one where is deployed.
Deploying this role is optional. If deployed, sets up the Hawk
Web interface which lets you monitor the status of the cluster. The
Web interface can be accessed via
http://IP-ADDRESS:7630.
Note that the GUI on SUSE OpenStack Cloud can only be used to monitor the
cluster status and not to change its configuration.
needs to be deployed on at least one cluster node. It is recommended to deploy it on all cluster nodes.
Deploy this role on all nodes that should become members of the Compute Nodes cluster. They will run as Pacemaker remote nodes that are controlled by the cluster, but do not affect quorum. Instead of the complete cluster stack, only the pacemaker-remote service will be installed on these nodes.
After a cluster has been successfully deployed, it is listed under in the section and can be used for role deployment like a regular node.
When using clusters, roles from other barclamps must never be deployed to single nodes that are already part of a cluster. The only exceptions to this rule are the following roles:
cinder-volume
swift-proxy + swift-dispersion
swift-ring-compute
swift-storage
After a role has been deployed on a cluster, its services are managed by the HA software. You must never manually start or stop an HA-managed service (or configure it to start on boot). Services may only be started or stopped by using the cluster management tools Hawk or the crm shell. See http://www.suse.com/documentation/sle-ha-12/book_sleha/data/sec_ha_config_basics_resources.html for more information.
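As a minimal sketch, an HA-managed service would be started or stopped through the crm shell rather than systemctl (the resource name below is an assumption):

```shell
# HA-managed services must be controlled through the cluster, not systemctl.
ha_service() {
  local action=$1 resource=$2   # action: start | stop
  crm resource "$action" "$resource"
}
# Example: ha_service stop postgresql
```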
To check whether all cluster resources are running, either use the Hawk Web interface or run the command crm_mon -1r. If this is not the case, clean up the respective resource with crm resource cleanup RESOURCE, so it gets respawned.
Also make sure that STONITH correctly works before continuing with the SUSE OpenStack Cloud setup. This is especially important when having chosen a STONITH configuration requiring manual setup. To test if STONITH works, log in to a node on the cluster and run the following command:
pkill -9 corosync
In case STONITH is correctly configured, the node will reboot.
Before testing on a production cluster, plan a maintenance window in case issues should arise.
The very first service that needs to be deployed is the . The database service uses PostgreSQL and is used by all other services. It must be installed on a Control Node. The database can be made highly available by deploying it on a cluster.
The only attribute you may change is the maximum number of database connections (). The default value should usually work; only change it for large deployments if the log files show database connection failures.
To make the database highly available, deploy it on a cluster instead of a single Control Node. This also requires shared storage for the cluster that hosts the database data. To achieve this, either set up a cluster with DRBD support (see Section 10.2, “Deploying Pacemaker (Optional, HA Setup Only)”) or use “traditional” shared storage like an NFS share. It is recommended to use a dedicated cluster to deploy the database together with RabbitMQ, since both services require shared storage.
Deploying the database on a cluster makes an additional section available in the section of the proposal. Configure the in this section. There are two options:
This option requires a two-node cluster that has been set up with DRBD. Also specify the . The suggested value of 50 GB should be sufficient.
Use a shared block device or an NFS mount for shared storage.
As with the mount command, you need to specify three attributes: (the mount point), the and the . Refer to man 8 mount for details on file system types and mount options.
If you want to use an NFS share as shared storage for a cluster, export it on the NFS server with the following options:
rw,async,insecure,no_subtree_check,no_root_squash
In case mounting the NFS share on the cluster nodes fails, change the export options and re-apply the proposal. However, before doing so, you need to clean up the respective resources on the cluster nodes as described in http://www.suse.com/documentation/sle-ha-12/book_sleha/data/sec_ha_config_crm.html#sec_ha_manual_config_cleanup.
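A matching /etc/exports entry on the NFS server could look as follows; the export path and client network are examples, the options are the ones listed above:

```
/exports/cloud/db   192.168.124.0/24(rw,async,insecure,no_subtree_check,no_root_squash)
```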
The shared NFS directory that is used for the PostgreSQL database needs to be owned by the same user ID and group ID as the postgres user on the HA database cluster.
To get the IDs, log in to one of the HA database cluster machines and issue the following commands:
id -u postgres
getent group postgres | cut -d: -f3
The first command returns the numeric user ID, the second one the numeric group ID. Now log in to the NFS server and change the ownership of the shared NFS directory, for example:
chown UID.GID /exports/cloud/db
Replace UID and GID by the respective numeric values retrieved above.
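The lookup and ownership change can be combined in a small helper; the directory path is the example used above, and the command is printed so it can be run on the NFS server:

```shell
# Look up the numeric user and group IDs of a user and print the chown
# command to run on the NFS server.
nfs_chown_cmd() {
  local user=$1 dir=$2
  local uid gid
  uid=$(id -u "$user")                           # numeric user ID
  gid=$(getent group "$user" | cut -d: -f3)      # numeric group ID
  printf 'chown %s:%s %s\n' "$uid" "$gid" "$dir"
}
# On a database cluster node: nfs_chown_cmd postgres /exports/cloud/db
```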
When re-deploying SUSE OpenStack Cloud and reusing a shared storage hosting database files from a previous installation, the installation may fail, because the old database will be used. Always delete the old database from the shared storage before re-deploying SUSE OpenStack Cloud.
The RabbitMQ messaging system enables services to communicate with the other nodes via Advanced Message Queue Protocol (AMQP). Deploying it is mandatory. RabbitMQ needs to be installed on a Control Node. RabbitMQ can be made highly available by deploying it on a cluster. It is recommended not to change the default values of the proposal's attributes.
Name of the default virtual host to be created and used by the
RabbitMQ server (default_vhost configuration option
in rabbitmq.config).
Port the RabbitMQ server listens on (tcp_listeners
configuration option in rabbitmq.config).
RabbitMQ default user (default_user configuration
option in rabbitmq.config).
To make RabbitMQ highly available, deploy it on a cluster instead of a single Control Node. This also requires shared storage for the cluster that hosts the RabbitMQ data. To achieve this, either set up a cluster with DRBD support (see Section 10.2, “Deploying Pacemaker (Optional, HA Setup Only)”) or use “traditional” shared storage like an NFS share. It is recommended to use a dedicated cluster to deploy RabbitMQ together with the database, since both services require shared storage.
Deploying RabbitMQ on a cluster makes an additional section available in the section of the proposal. Configure the in this section. There are two options:
This option requires a two-node cluster that has been set up with DRBD. Also specify the . The suggested value of 50 GB should be sufficient.
Use a shared block device or an NFS mount for shared storage. As with the mount command, you need to specify three attributes: (the mount point), the and the .
An NFS share that is to be used as a shared storage for a cluster needs to be exported on the NFS server with the following options:
rw,async,insecure,no_subtree_check,no_root_squash
In case mounting the NFS share on the cluster nodes fails, change the export options and re-apply the proposal. Before doing so, however, you need to clean up the respective resources on the cluster nodes as described in http://www.suse.com/documentation/sle-ha-12/book_sleha/data/sec_ha_config_crm.html#sec_ha_manual_config_cleanup.
is another core component that is used by all other OpenStack services. It provides authentication and authorization services. needs to be installed on a Control Node. Keystone can be made highly available by deploying it on a cluster. You can configure the following parameters of this barclamp:
Set the algorithm used by Keystone to generate the tokens. It is
strongly recommended to use PKI, since it will
reduce network traffic.
Allows you to customize the region name that Crowbar is going to manage.
Tenant for the users. Do not change the default value of
openstack.
User name and password for the regular user and the administrator. Both accounts can be used to log in to the SUSE OpenStack Cloud Dashboard to manage Keystone users and access.
When sticking with the default value , public communication will not be encrypted. Choose to use SSL for encryption. See Section 2.3, “SSL Encryption” for background information and Section 9.4.6, “Enabling SSL” for installation instructions. The following additional configuration options will become available when choosing :
When set to true, self-signed certificates are
automatically generated and copied to the correct locations. This
setting is for testing purposes only and should never be used in
production environments!
Location of the certificate key pair files.
Set this option to true when using self-signed
certificates to disable certificate checks. This setting is for
testing purposes only and should never be used in production
environments!
Set this option to true when using your own
certificate authority (CA) for signing. Having done so, you also
need to specify a path to the . If your certificates are signed by a trusted third
party organization, set
to , since the
“official” certificate authorities (CA) are already
known by the system.
Specify the absolute path to the CA certificate here. This option can
only be changed if was
set to true.
By default Keystone uses an SQL database back-end store for authentication. LDAP can be used in addition to the default or as an alternative. Using LDAP requires the Control Node on which Keystone is installed to be able to contact the LDAP server. See Appendix D, The Network Barclamp Template File for instructions on how to adjust the network setup.
To configure LDAP as an alternative to the SQL database back-end store, you need to open the Keystone barclamp configuration in mode. Search for the section.
Adjust the settings according to your LDAP setup. The default
configuration does not include all attributes that can be
set—a complete list of options is available in the file
/opt/dell/chef/data_bags/crowbar/bc-template-keystone.schema
on the Administration Server (search for ldap). There are
three types of attribute values: strings (for example, the value for
url:"ldap://localhost"), bool
(for example, the value for use_dumb_member:
false) and integer (for example, the value for
page_size: 0). Attribute names
and string values always need to be quoted with double quotes; bool and
integer values must not be quoted.
In a production environment, it is recommended to use LDAP over SSL (ldaps), otherwise passwords will be transferred as plain text.
The Hybrid LDAP back-end allows you to create a mixed LDAP/SQL setup. This is especially useful when an existing LDAP server should be used to authenticate cloud users. The system and service users (administrators and operators) needed to set up and manage SUSE OpenStack Cloud will be managed in the local SQL database. Assignments of users to projects and roles will also be stored in the local database.
In this scenario, the LDAP server can be read-only for the SUSE OpenStack Cloud installation, and no schema modifications are required. Consequently, managing LDAP users from within SUSE OpenStack Cloud is not possible and needs to be done using your established tools for LDAP user management. All users that are created with the Keystone command line client or the Horizon Web UI will be stored in the local SQL database.
To configure hybrid authentication, proceed as follows:
Open the Keystone barclamp configuration in mode (see Figure 10.8, “The Keystone Barclamp: Raw Mode”).
Set the identity and assignment drivers to the hybrid back-end:
"identity": {
"driver": "keystone.identity.backends.hybrid.Identity"
},
"assignment": {
"driver": "keystone.assignment.backends.hybrid.Assignment"
}
Adjust the settings according to your LDAP setup in the section. Since the LDAP back-end is only used to acquire information on users (but not on projects and roles), only the user-related settings matter here. See the following example of settings that may need to be adjusted:
"ldap": {
"url": "ldap://localhost",
"user": "",
"password": "",
"suffix": "cn=example,cn=com",
"user_tree_dn": "cn=example,cn=com",
"query_scope": "one",
"user_id_attribute": "cn",
"user_enabled_emulation_dn": "",
"tls_req_cert": "demand",
"user_attribute_ignore": "tenant_id,tenants",
"user_objectclass": "inetOrgPerson",
"user_mail_attribute": "mail",
"user_filter": "",
"use_tls": false,
"user_allow_create": false,
"user_pass_attribute": "userPassword",
"user_enabled_attribute": "enabled",
"user_enabled_default": "True",
"page_size": 0,
"tls_cacertdir": "",
"tls_cacertfile": "",
"user_enabled_mask": 0,
"user_allow_update": true,
"group_allow_update": true,
"user_enabled_emulation": false,
"user_name_attribute": "cn"
}
To access the LDAP server anonymously, leave the values for and empty.
Making Keystone highly available requires no special configuration—it is sufficient to deploy it on a cluster.
Ceph adds a redundant block storage service to SUSE OpenStack Cloud. It lets you store persistent devices that can be mounted from instances. It offers high data security by storing the data redundantly on a pool of Storage Nodes. Ceph therefore needs to be installed on dedicated nodes: at least four nodes are required in total, at least three of them as dedicated Storage Nodes. If deploying the optional Calamari server for Ceph management and monitoring, an additional node is required. All Ceph nodes need to run SLES 12; starting with SUSE OpenStack Cloud 5, deploying Ceph on SLES 11 SP3 nodes is no longer possible. For detailed information on how to provide the required repositories, refer to Section 5.2, “Update and Pool Repositories”.
For more information on the Ceph project, visit http://ceph.com/.
The Ceph barclamp has the following configuration options:
Choose whether to only use the first available disk or all available
disks. “Available disks” are all disks currently not used
by the system. Note that one disk (usually
/dev/sda) of every block storage node is already
used for the operating system and is not available for Ceph.
For data security, stored objects are not only stored once, but redundantly. Specify the number of copies that should be stored for each object with this setting. The number includes the object itself. If, for example, you want the object plus two copies, specify 3.
Choose whether to encrypt public communication () or not (). If choosing , you need to specify the locations for the certificate key pair files.
Calamari is a Web front-end for managing and analyzing the Ceph cluster. Provide administrator credentials (user name, password, e-mail address) in this section. When Ceph has been deployed, you can log in to Calamari with these credentials. Deploying Calamari is optional; leave these text boxes empty when not deploying Calamari.
The Ceph service consists of the following different roles:
The virtual block storage service. Install this role on all dedicated Ceph Storage Nodes (at least three), but not on any other node.
Cluster monitor daemon for the Ceph distributed file system. needs to be installed on three or five Storage Nodes running .
Sets up the Calamari Web interface which lets you manage the Ceph cluster. Deploying it is optional. The Web interface can be accessed via http://IP-ADDRESS/ (where IP-ADDRESS is the address of the machine on which is deployed). needs to be installed on a dedicated node; it is not possible to install it on a node running other services.
The HTTP REST gateway for Ceph. Install it on a Storage Node running .
Never deploy on a node that runs non-Ceph OpenStack services. The only services that may be deployed together on a Ceph node are , and . All Ceph nodes need to run SLES 12; starting with SUSE OpenStack Cloud 5, deploying Ceph on SLES 11 SP3 nodes is no longer possible.
Ceph is HA-enabled by design, so there is no need for a special HA setup.
Swift adds an object storage service to SUSE OpenStack Cloud that lets you store single files such as images or snapshots. It offers high data security by storing the data redundantly on a pool of Storage Nodes—therefore Swift needs to be installed on at least two dedicated nodes.
To be able to properly configure Swift, it is important to understand how it places the data. Data is always stored redundantly within the hierarchy. The Swift hierarchy in SUSE OpenStack Cloud is formed out of zones, nodes, hard disks, and logical partitions. Zones are physically separated clusters, for example different server rooms, each with its own power supply and network segment. A failure of one zone must not affect another zone. The next level in the hierarchy consists of the individual Swift storage nodes (on which has been deployed), followed by the hard disks. Logical partitions come last.
Swift automatically places three copies of each object on the highest hierarchy level possible. If three zones are available, each copy of the object will be placed in a different zone. In a one-zone setup with more than two nodes, the object copies will each be stored on a different node. In a one-zone setup with two nodes, the copies will be distributed across different hard disks. If no other hierarchy element fits, logical partitions are used.
The following attributes can be set to configure Swift:
Allows you to enable public access to containers if set to true.
If set to true, a copy of the current version is archived each time an object is updated.
Number of zones (see above). If you do not have different independent installations of storage nodes, set the number of zones to 1.
Partition power. The number entered here is used to compute the number of logical partitions to be created in the cluster. The number you enter is used as a power of 2 (2^X).
It is recommended to use a minimum of 100 partitions per disk. To determine the partition power for your setup, do the following: multiply the number of disks from all Swift nodes by 100, then round up to the nearest power of two. Keep in mind that the first disk of each node is not used by Swift, but rather for the operating system.
Example: 10 Swift nodes with 5 hard disks each. Four hard disks on each node are used for Swift, so there is a total of 40 disks. Multiplied by 100 this gives 4000. The nearest power of two, 4096, equals 2^12. So the partition power that needs to be entered is 12.
Changing the number of logical partitions after Swift has been deployed is not supported. Therefore the value for the partition power should be calculated from the maximum number of partitions this cloud installation is likely to need at any point in time.
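The calculation above can be expressed as a short script. The node and disk counts are the example values from the text:

```python
import math

def partition_power(nodes, disks_per_node, partitions_per_disk=100):
    """Compute the Swift partition power: multiply the number of
    Swift-usable disks by the recommended 100 partitions per disk,
    then round up to the next power of two and return its exponent."""
    # The first disk of each node holds the operating system,
    # so it is not available to Swift.
    swift_disks = nodes * (disks_per_node - 1)
    partitions = swift_disks * partitions_per_disk
    return math.ceil(math.log2(partitions))

# The example from the text: 10 nodes with 5 disks each
# -> 40 Swift disks -> 4000 partitions -> next power of two 4096 = 2^12
print(partition_power(10, 5))  # 12
```

Remember to base the input values on the maximum size the cluster is expected to reach, since the partition count cannot be changed later.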
This option sets the number of hours before a logical partition is considered for relocation. 24 is the recommended value.
The number of copies generated for each object. Set this value to 3, the tested and recommended value.
Time (in seconds) after which to start a new replication process.
Shows debugging output in the log files when set to true.
Choose whether to encrypt public communication () or not (). If choosing , you have two choices. You can either or provide the locations for the certificate key pair files. Using self-signed certificates is for testing purposes only and should never be used in production environments!
Apart from the general configuration described above, the Swift barclamp also lets you activate and configure . The features these middlewares provide can be used via the Swift command line client only. The Ratelimit and S3 middlewares provide the most generally useful features; it is recommended to enable the other middlewares only for specific use cases.
Provides an S3 compatible API on top of Swift.
Enables serving container data as a static Web site with an index file and optional file listings. See http://docs.openstack.org/developer/swift/middleware.html#staticweb for details.
This middleware requires to be set to true.
Enables creating URLs that provide time-limited access to objects. See http://docs.openstack.org/developer/swift/middleware.html#tempurl for details.
Enables uploading files to a container via a Web form. See http://docs.openstack.org/developer/swift/middleware.html#formpost for details.
Enables extracting tar files into a Swift account and deleting multiple objects or containers with a single request. See http://docs.openstack.org/developer/swift/middleware.html#module-swift.common.middleware.bulk for details.
Allows interacting with the Swift API via Flash, Java, and Silverlight from an external network. See http://docs.openstack.org/developer/swift/middleware.html#module-swift.common.middleware.crossdomain for details.
Translates container and account parts of a domain to path parameters that the Swift proxy server understands. Can be used to create short URLs that are easy to remember, for example by rewriting home.tux.example.com/$ROOT/exampleuser;/home/myfile to home.tux.example.com/myfile. See http://docs.openstack.org/developer/swift/middleware.html#module-swift.common.middleware.domain_remap for details.
Ratelimit enables you to throttle resources such as requests per minute to provide denial of service protection. See http://docs.openstack.org/developer/swift/middleware.html#module-swift.common.middleware.ratelimit for details.
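The throttling idea behind Ratelimit can be sketched as a simple requests-per-minute check. This is an illustration of the concept only, not Swift's actual middleware implementation:

```python
import time

class RateLimiter:
    """Reject requests once a client exceeds max_per_minute within
    the last 60 seconds (simplified sliding-window illustration)."""
    def __init__(self, max_per_minute):
        self.max = max_per_minute
        self.window = {}

    def allow(self, client, now=None):
        now = now if now is not None else time.time()
        # Keep only the timestamps from the last 60 seconds.
        recent = [t for t in self.window.get(client, []) if now - t < 60]
        if len(recent) >= self.max:
            self.window[client] = recent
            return False  # throttled
        recent.append(now)
        self.window[client] = recent
        return True

limiter = RateLimiter(max_per_minute=2)
print(limiter.allow("client-a", now=0))  # True
print(limiter.allow("client-a", now=1))  # True
print(limiter.allow("client-a", now=2))  # False: limit reached
```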
The Swift service consists of four different roles. Deploying is optional:
The virtual object storage service. Install this role on all dedicated Swift Storage Nodes (at least two), but not on any other node.
Never install the swift-storage service on a node that runs other OpenStack services.
The ring maintains the information about the location of objects, replicas, and devices. It can be compared to an index that is used by various OpenStack services to look up the physical location of objects. must only be installed on a single node; it is recommended to use a Control Node.
The Swift proxy server takes care of routing requests to Swift. Installing a single instance of on a Control Node is recommended. The role can be made highly available by deploying it on a cluster.
Deploying is optional. The Swift dispersion tools can be used to test the health of the cluster. They create a set of dummy objects (using 1% of the total space available). The state of these objects can be queried with the swift-dispersion-report command. needs to be installed on a Control Node.
Swift replicates by design, so there is no need for a special HA setup. Make sure to fulfill the requirements listed in Section 2.6.4.1, “Swift—Avoiding Points of Failure”.
Glance provides discovery, registration, and delivery services for virtual disk images. An image is needed to start an instance—it serves as the instance's pre-installed root partition. All images you want to use in your cloud to boot instances from are provided by Glance. Glance must be deployed onto a Control Node. Glance can be made highly available by deploying it on a cluster.
There are many options to configure Glance. The most important ones are explained below—for a complete reference refer to http://github.com/crowbar/crowbar/wiki/Glance--barclamp.
Choose whether to use Swift or Ceph () to store the images. If you have deployed neither of these services, the images can alternatively be stored in an image file on the Control Node (). If you have deployed Swift or Ceph, it is recommended to use it for Glance as well.
If using VMware as a hypervisor, it is recommended to use it for storing images, too (). This will make starting VMware instances much faster.
Depending on the storage back-end, there are additional configuration options available:
Specify the directory to host the image file. The directory specified here can also be an NFS share. See Section 9.4.3, “Mounting NFS Shares on a Node” for more information.
Set the name of the container to use for the images in Swift.
If using a SUSE OpenStack Cloud internal Ceph setup, the user you specify here is created in case it does not exist. If using an external Ceph cluster, specify the user you have set up for Glance (see Section 9.4.4, “Using an Externally Managed Ceph Cluster” for more information).
If using a SUSE OpenStack Cloud internal Ceph setup, the pool you specify here is created in case it does not exist. If using an external Ceph cluster, specify the pool you have set up for Glance (see Section 9.4.4, “Using an Externally Managed Ceph Cluster” for more information).
Name or IP address of the vCenter server.
vCenter login credentials.
A comma-separated list of datastores specified in the format: DATACENTER_NAME:DATASTORE_NAME
Specify an absolute path here.
Choose whether to encrypt public communication () or not (). If choosing , refer to SSL Support: Protocol for configuration details.
Enable and configure image caching in this section. By default, image caching is disabled. Learn more about Glance's caching feature at http://docs.openstack.org/developer/glance/cache.html.
Shows debugging output in the log files when set to .
Glance can be made highly available by deploying it on a cluster. It is strongly recommended to make the image data highly available, too. The recommended way to achieve this is to use Swift or an external Ceph cluster for the image repository. If using a directory on the node instead (file storage back-end), you should set up shared storage on the cluster for it.
Cinder, the successor of Nova Volume, provides volume block storage. It adds persistent storage to an instance that will persist until deleted (unlike ephemeral volumes, which persist only while the instance is running).
Cinder can provide volume storage by using different back-ends such as local file, one or more local disks, Ceph (RADOS), VMware or network storage solutions from EMC, EqualLogic, Fujitsu or NetApp. Since SUSE OpenStack Cloud 5, Cinder supports using several back-ends simultaneously. It is also possible to deploy the same network storage back-end multiple times and therefore use different installations at the same time.
The attributes that can be set to configure Cinder depend on the back-end. The only general option is (see SSL Support: Protocol for configuration details).
When first opening the Cinder barclamp, the default proposal—— is already available for configuration. To optionally add a back-end, go to the section and choose a from the drop-down box. Optionally, specify the . This is recommended when deploying the same volume type more than once. Existing back-end configurations (including the default one) can be deleted by clicking the trashcan icon if no longer needed. Note that at least one back-end must be configured.
Choose whether to only use the disk or disks. “Available disks” are all disks currently not used by the system. Note that one disk (usually /dev/sda) of every block storage node is already used for the operating system and is not available for Cinder.
Specify a name for the Cinder volume.
IP address and Port of the ECOM server.
For VMAX, the user needs to create an initial setup on the Unisphere for VMAX server first. It must contain an initiator group, a storage group, and a port group, and must be placed in a masking view. This masking view needs to be specified here.
Login credentials for the ECOM server.
Only thin LUNs are supported by the plugin. Thin pools can be created using Unisphere for VMAX and VNX.
For more information on the EMC driver refer to the OpenStack documentation at http://docs.openstack.org/liberty/config-reference/content/emc-vmax-driver.html.
EqualLogic drivers are included as a technology preview and are not supported.
Select the protocol used to connect, either or .
IP address and port of the ETERNUS SMI-S Server.
Login credentials for the ETERNUS SMI-S Server.
Storage pool (RAID group) in which the volumes are created. Make sure to have created that RAID group on the server in advance. If a RAID group that does not exist is specified, the RAID group is created by using unused disk drives. The RAID level is automatically determined by the ETERNUS DX Disk storage system.
SUSE OpenStack Cloud can either use “Data ONTAP” in or in . In vFiler will be configured, in vServer will be configured. The can either be set to or . Choose the driver and the protocol your NetApp is licensed for.
The management IP address for the 7-Mode storage controller or the cluster management IP address for the clustered Data ONTAP.
Transport protocol for communicating with the storage controller or clustered Data ONTAP. Supported protocols are HTTP and HTTPS. Choose the protocol your NetApp is licensed for.
The port to use for communication. Port 80 is usually used for HTTP, 443 for HTTPS.
Login credentials.
The vFiler unit to be used for provisioning of OpenStack volumes. This setting is only available in .
Provide a list of comma-separated volume names to be used for provisioning. This setting is only available when using iSCSI as the storage protocol.
Host name of the Virtual Storage Server. This setting is only available in when using NFS as storage protocol.
Provide a list of NFS Exports from the Virtual Storage Server. Specify one entry per line in the form of host name:/volume/path mount-options. Specifying mount options is optional. This setting is only available when using NFS as storage protocol.
Select if you have deployed Ceph with SUSE OpenStack Cloud. In case you are using an external Ceph cluster (see Section 9.4.4, “Using an Externally Managed Ceph Cluster” for setup instructions), select .
This configuration option is only available if using an external Ceph cluster. Specify the path to the ceph.conf file—the default value (/etc/ceph/ceph.conf) should be correct if you have followed the setup instructions in Section 9.4.4, “Using an Externally Managed Ceph Cluster”.
This configuration option is only available if using an external Ceph cluster. If you have access to the admin keyring file, the path is /etc/ceph/ceph.client.admin.keyring. If you have created your own keyring, use /etc/ceph/ceph.client.cinder.keyring. See Section 9.4.4, “Using an Externally Managed Ceph Cluster” for more information.
Name of the pool used to store the Cinder volumes.
Ceph user name.
Host name or IP address of the vCenter server.
vCenter login credentials.
Provide a comma-separated list of cluster names.
Path to the directory used to store the Cinder volumes.
Absolute path to the vCenter CA certificate.
Default value: false (the CA truststore is used for verification). Set this option to true when using self-signed certificates to disable certificate checks. This setting is for testing purposes only and must not be used in production environments!
Absolute path to the file to be used for block storage.
Maximum size of the volume file. Make sure not to overcommit the size, since doing so will result in data loss.
Specify a name for the Cinder volume.
Using a file for block storage is not recommended for production systems, for performance and data security reasons.
Lets you manually pick and configure a driver. Only use this option for testing purposes; it is not supported.
The Cinder service consists of two different roles:
The Cinder controller provides the scheduler and the API. Installing on a Control Node is recommended.
The virtual block storage service. It can be installed on a Control Node. However, it is recommended to deploy it on one or more dedicated nodes supplied with sufficient networking capacity, since it will generate a lot of network traffic.
While the role can be deployed on a cluster, deploying on a cluster is not supported. Therefore it is generally recommended to deploy on several nodes—this ensures the service continues to be available even when a node fails. Combined with Ceph or a network storage solution, such a setup minimizes the potential downtime.
If using Ceph or a network storage solution is not an option, you need to set up a shared storage directory (for example, with NFS), mount it on all cinder volume nodes, and use the back-end with this shared directory. Using is not an option, since local disks cannot be shared.
Manila provides coordinated access to shared or distributed file systems, similar to what Cinder does for block storage. These file systems can be shared between instances in SUSE OpenStack Cloud.
Manila uses different back-ends. As of SUSE OpenStack Cloud 6 the only back-end that is currently supported is the . Two more back-end options, and are available for testing purposes and are not supported.
When first opening the Manila barclamp, the default proposal is already available for configuration. To replace it, first delete it by clicking the trashcan icon and then choose a different back-end in the section . Select a and—optionally—provide a . Activate the back-end with . Note that at least one back-end must be configured.
The attributes that can be set to configure Manila depend on the back-end:
The generic driver is included as a technology preview and is not supported.
Host name of the Virtual Storage Server.
The name or IP address for the storage controller or the cluster.
The port to use for communication. Port 80 is usually used for HTTP, 443 for HTTPS.
Login credentials.
Transport protocol for communicating with the storage controller or cluster. Supported protocols are HTTP and HTTPS. Choose the protocol your NetApp is licensed for.
Lets you manually pick and configure a driver. Only use this option for testing purposes; it is not supported.
The Manila service consists of two different roles:
The Manila server provides the scheduler and the API. Installing it on a Control Node is recommended.
The shared storage service. It can be installed on a Control Node, but it is recommended to deploy it on one or more dedicated nodes supplied with sufficient disk space and networking capacity, since it will generate a lot of network traffic.
While the role can be deployed on a cluster, deploying on a cluster is not supported. Therefore it is generally recommended to deploy on several nodes—this ensures the service continues to be available even when a node fails.
Neutron provides network connectivity between interface devices managed by other OpenStack services (most likely Nova). The service works by enabling users to create their own networks and then attach interfaces to them.
Neutron must be deployed on a Control Node. You first need to choose a core plug-in— or . Depending on your choice, more configuration options will become available.
The option lets you use an existing VMware NSX installation. Using this plugin is not a prerequisite for the VMware vSphere hypervisor support. However, it is needed if you want security groups to be supported on VMware compute nodes. For all other scenarios, choose .
The only global option that can be configured is . Choose whether to encrypt public communication () or not (). If choosing , refer to SSL Support: Protocol for configuration details.
Select which mechanism driver(s) shall be enabled for the ml2 plugin. It is possible to select more than one driver by holding the Ctrl key while clicking. Choices are:
. Supports GRE, VLAN and VXLAN networks (to be configured via the setting).
. Supports VLANs only. Requires specifying the .
. Enables Neutron to dynamically adjust the VLAN settings of the ports of an existing Cisco Nexus switch when instances are launched. It also requires , which will automatically be selected. With , must be added. This option also requires specifying the . See Appendix H, Using Cisco Nexus Switches with Neutron for details.
With the default setup, all intra-Compute Node traffic flows through the network Control Node. The same is true for all traffic from floating IPs. In large deployments the network Control Node can therefore quickly become a bottleneck. When this option is set to , network agents will be installed on all Compute Nodes. This decentralizes the network traffic, since Compute Nodes will be able to directly “talk” to each other. Distributed Virtual Routers (DVR) require the driver and will not work with the driver. HyperV Compute Nodes are not supported—network traffic for these nodes will be routed via the Control Node on which is deployed. For details on DVR refer to https://wiki.openstack.org/wiki/Neutron/DVR.
This option is only available when having chosen the or the mechanism drivers. Options are , and . It is possible to select more than one driver by holding the Ctrl key while clicking.
HyperV Compute Nodes do not support and . If your environment includes a heterogeneous mix of Compute Nodes including HyperV nodes, make sure to select . This can be done in addition to the other drivers.
When multiple type drivers are enabled, you need to select the that will be used for newly created provider networks. This also includes the nova_fixed network, which will be created when applying the Neutron proposal. When manually creating provider networks with the neutron command, the default can be overwritten with the --provider:network_type type switch. You will also need to set a . It is not possible to change this default when manually creating tenant networks with the neutron command. The non-default type driver will only be used as a fallback.
Depending on your choice of the type driver, more configuration options become available.
. Having chosen , you also need to specify the start and end of the tunnel ID range.
. The option requires you to specify the .
. Having chosen , you also need to specify the start and end of the VNI range.
This plug-in requires configuring access to the VMware NSX service.
Login credentials for the VMware NSX server. The user needs to have administrator permissions on the NSX server.
Enter the IP address and the port number (IP-ADDRESS:PORT) of the controller API endpoint. If the port number is omitted, port 443 will be used. You may also enter multiple API endpoints (comma-separated), provided they all belong to the same controller cluster. When multiple API endpoints are specified, the plugin will load balance requests across them.
The UUIDs for the transport zone and the gateway service can be obtained from the NSX server. They will be used when networks are created.
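The endpoint format described above (IP-ADDRESS:PORT, defaulting to port 443, with multiple comma-separated entries) can be parsed as follows. This is a hypothetical helper for illustration, not part of the NSX plugin:

```python
def parse_api_endpoints(value, default_port=443):
    """Split a comma-separated endpoint list and apply the default
    port (443) to any entry that omits the port number."""
    endpoints = []
    for entry in value.split(","):
        entry = entry.strip()
        if ":" in entry:
            host, port = entry.rsplit(":", 1)
            endpoints.append((host, int(port)))
        else:
            endpoints.append((entry, default_port))
    return endpoints

print(parse_api_endpoints("192.168.1.10, 192.168.1.11:8443"))
# [('192.168.1.10', 443), ('192.168.1.11', 8443)]
```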
The Neutron service consists of two different roles:
provides the scheduler and the API. It needs to be installed on a Control Node.
This service runs the various agents that manage the network traffic of all the cloud instances. It acts as the DHCP and DNS server and as a gateway for all cloud instances. It is recommended to deploy this role on a dedicated node supplied with sufficient network capacity.
Neutron can be made highly available by deploying and on a cluster. While may be deployed on a cluster shared with other services, it is strongly recommended to use a dedicated cluster solely for the role.
Nova provides key services for managing SUSE OpenStack Cloud and sets up the Compute Nodes. SUSE OpenStack Cloud currently supports KVM, Xen, Microsoft Hyper-V, and VMware vSphere. The unsupported QEMU option is included to enable test setups with virtualized nodes. The following attributes can be configured for Nova:
Set the “overcommit ratio” for RAM for instances on the Compute Nodes. A ratio of 1.0 means no overcommitment. Changing this value is not recommended.
Set the “overcommit ratio” for CPUs for instances on the Compute Nodes. A ratio of 1.0 means no overcommitment.
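The effect of an overcommit ratio can be quantified: with a physical capacity of N units and a ratio of R, the scheduler treats the node as having N × R schedulable units. A quick illustration with example values (the node sizes below are hypothetical, not defaults):

```python
def schedulable_capacity(physical, ratio):
    """Capacity the scheduler assumes: physical units times the
    overcommit ratio (1.0 means no overcommitment)."""
    return physical * ratio

# A hypothetical Compute Node with 16 cores and 64 GB RAM:
print(schedulable_capacity(16, 1.0))   # 16.0 vCPUs, no overcommitment
print(schedulable_capacity(16, 4.0))   # 64.0 vCPUs with a 4.0 CPU ratio
print(schedulable_capacity(64, 1.5))   # 96.0 GB schedulable RAM
```

Raising the RAM ratio in particular risks swapping on busy nodes, which is why changing it is not recommended.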
Allows moving KVM and Xen instances to a different Compute Node running the same hypervisor (cross-hypervisor migrations are not supported). Useful when a Compute Node needs to be shut down or rebooted for maintenance or when the load on the Compute Node is very high. Instances can be moved while running (Live Migration).
Enabling the libvirt migration option will open a TCP port on the Compute Nodes that allows access to all instances from all machines in the admin network. Ensure that only authorized machines have access to the admin network when enabling this option.
Sets up a directory /var/lib/nova/instances on the Control Node on which is running. This directory is exported via NFS to all Compute Nodes and will host a copy of the root disk of all Xen instances. This setup is required for live migration of Xen instances (but not for KVM) and is used to provide central handling of instance data. Enabling this option is only recommended if Xen live migration is required—otherwise it should be disabled.
Setting up shared storage in a SUSE OpenStack Cloud where instances are running will result in connection losses to all running instances. It is strongly recommended to set up shared storage when deploying SUSE OpenStack Cloud. If it needs to be done at a later stage, make sure to shut down all instances prior to the change.
Kernel SamePage Merging (KSM) is a Linux Kernel feature which merges identical memory pages from multiple running processes into one memory region. Enabling it optimizes memory usage on the Compute Nodes when using the KVM hypervisor at the cost of slightly increasing CPU usage.
Setting up VMware support is described in a separate section. See Appendix G, VMware vSphere Installation Instructions.
Choose whether to encrypt public communication () or not (). If choosing , refer to SSL Support: Protocol for configuration details.
Change the default VNC keymap for instances. By default, en-us is used. Enter the value in lowercase, either as a two-character code (such as de or jp) or as a five-character code (such as de-ch or en-uk), if applicable.
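The accepted keymap formats (two-character or five-character lowercase codes) can be checked with a simple pattern. This is an illustrative validation sketch, not a check performed by the barclamp itself:

```python
import re

# Two lowercase letters, optionally followed by a dash and two
# more lowercase letters (e.g. "de", "jp", "de-ch", "en-uk").
KEYMAP_RE = re.compile(r"^[a-z]{2}(-[a-z]{2})?$")

for code in ("en-us", "de", "DE", "de_ch"):
    print(code, bool(KEYMAP_RE.match(code)))
# en-us True / de True / DE False (not lowercase) / de_ch False (underscore)
```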
After having started an instance you can display its VNC console in the OpenStack Dashboard (Horizon) via the browser, using the noVNC implementation. By default this connection is not encrypted and can potentially be eavesdropped on.
Enable encrypted communication for noVNC by choosing and providing the locations for the certificate key pair files.
Shows debugging output in the log files when set to .
The Nova service consists of eight different roles:
Distributing and scheduling the instances is managed by the . It also provides networking and messaging services. needs to be installed on a Control Node.
Provides the hypervisors (Docker, Hyper-V, KVM, QEMU, VMware vSphere, Xen, and z/VM) and the tools needed to manage the instances. Only one hypervisor can be deployed on a single Compute Node. To use different hypervisors in your cloud, deploy different hypervisors to different Compute Nodes. A nova-compute-* role needs to be installed on every Compute Node. However, not all hypervisors need to be deployed.
Each image that will be made available in SUSE OpenStack Cloud to start an instance is bound to a hypervisor. Each hypervisor can be deployed on multiple Compute Nodes (except for the VMware vSphere role, see below). In a multi-hypervisor deployment you should make sure to deploy the nova-compute-* roles in a way that enough compute power is available for each hypervisor. Existing nova-compute-* nodes can be changed in a productive SUSE OpenStack Cloud without service interruption. You need to “evacuate” the node, re-assign a new nova-compute role via the Nova barclamp and the change. can only be deployed on a single node.
nova-compute-hyperv can only be deployed to Compute Nodes running either Microsoft Hyper-V Server or Windows Server 2012. Setting up such Compute Nodes requires a netboot environment for Windows. Refer to Appendix F, Setting up a Netboot Environment for Microsoft* Windows for details.
The default password for Hyper-V Compute Nodes will be “crowbar”.
VMware vSphere is not supported “natively” by SUSE OpenStack Cloud—it rather delegates requests to an existing vCenter. It requires preparations at the vCenter and post-install adjustments of the Compute Node. See Appendix G, VMware vSphere Installation Instructions for instructions. can only be deployed on a single Compute Node.
The ability to use Docker is only included as a technology preview and not supported by SUSE. The following features are known to work:
Starting and shutting down an instance.
Resuming a paused instance.
Taking a snapshot of a running instance and starting a new image based on this snapshot.
The following features are known to not work:
Suspend and resume.
Attaching Cinder volumes.
If you assign the nova-compute-docker role to a node, it is recommended to use Btrfs on that node to enhance performance. How to specify a file system for a node is described in Section 9.2, “Node Installation”.
Making highly available requires no special configuration—it is sufficient to deploy it on a cluster.
To enable High Availability for Compute Nodes, deploy the following roles to one or more clusters with remote nodes:
nova-compute-kvm
nova-compute-qemu
nova-compute-xen
The cluster to which you deploy the roles above can be completely independent of the one to which the role nova-controller is deployed.
The last service that needs to be deployed is Horizon, the OpenStack Dashboard. It provides a Web interface for users to start and stop instances and for administrators to manage users, groups, roles, etc. Horizon should be installed on a Control Node. To make Horizon highly available, deploy it on a cluster.
The following attributes can be configured:
Timeout (in minutes) after which a user is logged out automatically. The default value is set to 4 hours (240 minutes).
Specify a regular expression with which to check the password. The default expression (.{8,}) tests for a minimum length of 8 characters. The string you enter is interpreted as a Python regular expression (see http://docs.python.org/2.7/library/re.html#module-re for a reference).
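Since the string is interpreted as a Python regular expression, a candidate expression can be tested locally before entering it. The stricter pattern below is an example of our own, not a shipped default:

```python
import re

default = re.compile(r".{8,}")           # the default: minimum length 8
stricter = re.compile(r"(?=.*\d).{8,}")  # example: additionally require a digit

print(bool(default.match("short")))        # False: too short
print(bool(default.match("longenough")))   # True
print(bool(stricter.match("longenough")))  # False: no digit
print(bool(stricter.match("l0ngenough")))  # True
```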
Error message that will be displayed in case the password validation fails.
Choose whether to encrypt public communication () or not (). If choosing , you have two choices. You can either or provide the locations for the certificate key pair files and—optionally—the certificate chain file. Using self-signed certificates is for testing purposes only and should never be used in production environments!
Making Horizon highly available requires no special configuration—it is sufficient to deploy it on a cluster.
Heat is a template-based orchestration engine that enables you to, for example, start workloads requiring multiple servers or to automatically restart instances if needed. It also brings auto-scaling to SUSE OpenStack Cloud by automatically starting additional instances if certain criteria are met. For more information about Heat refer to the OpenStack documentation at http://docs.openstack.org/developer/heat/.
Heat should be deployed on a Control Node. To make Heat highly available, deploy it on a cluster.
The following attributes can be configured for Heat:
Shows debugging output in the log files when set to .
Making Heat highly available requires no special configuration—it is sufficient to deploy it on a cluster.
Ceilometer collects CPU and networking data from SUSE OpenStack Cloud. This data can be used by a billing system to enable customer billing. Deploying Ceilometer is optional.
For more information about Ceilometer refer to the OpenStack documentation at http://docs.openstack.org/developer/ceilometer/.
As of SUSE OpenStack Cloud 6 data measuring is only supported for KVM, Xen and Windows instances. Other hypervisors and SUSE OpenStack Cloud features such as object or block storage will not be measured.
The following attributes can be configured for Ceilometer:
Specify an interval in seconds after which Ceilometer performs an update of the specified meter.
Set the interval after which to check whether to raise an alarm because a threshold has been exceeded. For performance reasons, do not set a value lower than the default (60s).
Ceilometer collects a huge amount of data, which is written to a database. In a production system it is recommended to use a separate database for Ceilometer rather than the standard database that is also used by the other SUSE OpenStack Cloud services. MongoDB is optimized to write a lot of data. As of SUSE OpenStack Cloud 6, MongoDB is only included as a technology preview and not supported.
Specify how long to keep the data. -1 means that samples are kept in the database forever.
Shows debugging output in the log files when set to .
The Ceilometer service consists of five different roles:
The Ceilometer API server role. This role needs to be deployed on a Control Node. Ceilometer collects approximately 200 bytes of data per hour and instance. Unless you have a very large number of instances, there is no need to install it on a dedicated node.
The polling agent listens to the message bus to collect data. It needs to be deployed on a Control Node. It can be deployed on the same node as .
The compute agents collect data from the compute nodes. They need to be deployed on all KVM and Xen compute nodes in your cloud (other hypervisors—except for Hyper-V— are currently not supported).
These compute agents collect data from the Compute Nodes running Microsoft Windows. They need to be deployed on all Hyper-V Compute Nodes in your cloud.
An agent collecting data from the Swift nodes. This role needs to be deployed on the same node as swift-proxy.
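The figure quoted for the API server role (approximately 200 bytes per hour and instance) can be used for a rough database sizing estimate. The instance count and retention period below are example values:

```python
def ceilometer_data_bytes(instances, days, bytes_per_hour=200):
    """Rough database growth estimate based on the documented
    ~200 bytes per hour and instance (ignores Swift and other meters)."""
    return instances * bytes_per_hour * 24 * days

# Example: 1000 instances retained for 30 days
mb = ceilometer_data_bytes(1000, 30) / (1024 * 1024)
print(f"{mb:.1f} MB")  # ~137.3 MB
```

Such an estimate also helps when choosing the data retention period described below.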
Making Ceilometer highly available requires no special configuration—it is sufficient to deploy the roles and on a cluster. The cluster needs to consist of an odd number of nodes, otherwise the Ceilometer deployment will fail.
Trove is a Database-as-a-Service for SUSE OpenStack Cloud. It provides database instances that can be used by all cloud instances. With Trove deployed, SUSE OpenStack Cloud users no longer need to deploy and maintain their own database applications. For more information about Trove, refer to the OpenStack documentation at http://docs.openstack.org/developer/trove/.
Trove is only included as a technology preview and not supported.
Trove should be deployed on a dedicated Control Node.
The following attributes can be configured for Trove:
When enabled, Trove will use a Cinder volume to store the data.
Increases the amount of information that is written to the log files when set to true.
Shows debugging output in the log files when set to true.
An HA Setup for Trove is currently not supported.
Tempest is an integration test suite for SUSE OpenStack Cloud written in Python. It contains multiple integration tests for validating your SUSE OpenStack Cloud deployment. For more information about Tempest, refer to the OpenStack documentation at http://docs.openstack.org/developer/tempest/.
Tempest is only included as a technology preview and not supported.
Tempest may be used for testing whether the intended setup will run without problems. It should not be used in a production environment.
Tempest should be deployed on a Control Node.
The following attributes can be configured for Tempest:
Credentials for a regular user. If the user does not exist, it will be created.
Tenant to be used by Tempest. If it does not exist, it will be created. It is safe to stick with the default value.
Credentials for an admin user. If the user does not exist, it will be created.
To run tests with Tempest, log in to the Control Node on which Tempest was deployed. Change into the directory /var/lib/openstack-tempest-test. To get an overview of available commands, run:
./run_tempest.sh --help
To serially invoke a subset of all tests (“the gating smoketests”) to help validate the working functionality of your local cloud instance, run the following command. It will save the output to a log file tempest_CURRENT_DATE.log.
./run_tempest.sh --no-virtual-env --serial --smoke 2>&1 \
| tee "tempest_$(date +%Y-%m-%d_%H%M%S).log"
Tempest cannot be made highly available.
With a successful deployment of the OpenStack Dashboard, the SUSE OpenStack Cloud installation is finished. To be able to test your setup by starting an instance one last step remains to be done—uploading an image to the Glance service. Refer to the Supplement to Admin User Guide and End User Guide, chapter Manage images for instructions. Images for SUSE OpenStack Cloud can be built in SUSE Studio. Refer to the Supplement to Admin User Guide and End User Guide, section Building Images with SUSE Studio.
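As an illustration of the image upload step, a disk image can be registered with Glance from the command line. This is a hedged sketch, assuming the python-glanceclient package and admin credentials are available on the Control Node; the image name and file path are examples only:

```shell
# Source admin credentials first (example path)
. ~/.openrc

# Upload a qcow2 image and make it available to all users
glance image-create --name "sles-12" \
  --disk-format qcow2 --container-format bare \
  --is-public True --file /tmp/sles-12.qcow2

# Confirm the image is registered and active
glance image-list
```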
Now you can hand over to the cloud administrator to set up users, roles,
flavors, etc.—refer to the Admin User Guide for details. The
default credentials for the OpenStack Dashboard are user name
admin and password crowbar.
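The default credentials can be verified from the command line, which is also a good opportunity to change the default password. A sketch, assuming the python-openstackclient tools are installed; the Keystone auth URL is an example and must match your deployment:

```shell
# Authenticate with the default credentials (auth URL is an example)
export OS_USERNAME=admin
export OS_PASSWORD=crowbar
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://192.168.124.81:5000/v2.0

# Request a token to confirm the credentials work
openstack token issue

# Change the default password right away
openstack user set --password "new-secret-password" admin
```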