Each OpenStack project provides a command-line client, which enables
you to access the project API through easy-to-use commands. For
example, the Compute service provides a nova command-line client.
You can run the commands from the command line, or include the commands within scripts to automate tasks. If you provide OpenStack credentials, such as your user name and password, you can run these commands on any computer.
Internally, each command uses cURL command-line tools, which embed API requests. OpenStack APIs are RESTful APIs that use the HTTP protocol; they include methods, URIs, media types, and response codes.
The OpenStack command-line clients are open-source Python clients that run on Linux or Mac OS X systems. On some client commands, you can specify a debug parameter to show the underlying API request for the command. This is a good way to become familiar with the OpenStack API calls.
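For example, the following command (a sketch, assuming the nova client is installed and your credentials are sourced) prints the underlying API requests and responses while listing your servers:
$ nova --debug list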
As a cloud end user, you can use the OpenStack dashboard to provision your own resources within the limits set by administrators. You can modify the examples provided in this section to create other types and sizes of server instances.
The following table lists the command-line client for each OpenStack service with its package name and description.
OpenStack services and clients
| Service | Client | Package | Description |
|---|---|---|---|
| Application catalog | murano | python-muranoclient | Creates and manages applications. |
| Block Storage | cinder | python-cinderclient | Creates and manages volumes. |
| Clustering service | senlin | python-senlinclient | Creates and manages clustering services. |
| Compute | nova | python-novaclient | Creates and manages images, instances, and flavors. |
| Containers service | magnum | python-magnumclient | Creates and manages containers. |
| Database service | trove | python-troveclient | Creates and manages databases. |
| Data processing | sahara | python-saharaclient | Creates and manages Hadoop clusters on OpenStack. |
| Deployment service | fuel | python-fuelclient | Plans deployments. |
| Identity | keystone | python-keystoneclient | Creates and manages users, tenants, roles, endpoints, and credentials. |
| Image service | glance | python-glanceclient | Creates and manages images. |
| Key Manager service | barbican | python-barbicanclient | Creates and manages keys. |
| Monitoring | monasca | python-monascaclient | Monitoring solution. |
| Networking | neutron | python-neutronclient | Configures networks for guest servers. |
| Object Storage | swift | python-swiftclient | Gathers statistics, lists items, updates metadata, and uploads, downloads, and deletes files stored by the Object Storage service. Gains access to an Object Storage installation for ad hoc processing. |
| Orchestration | heat | python-heatclient | Launches stacks from templates, views details of running stacks including events and resources, and updates and deletes stacks. |
| Rating service | cloudkitty | python-cloudkittyclient | Rating service. |
| Shared file systems | manila | python-manilaclient | Creates and manages shared file systems. |
| Telemetry | ceilometer | python-ceilometerclient | Creates and collects measurements across OpenStack. |
| Telemetry v3 | gnocchi | python-gnocchiclient | Creates and collects measurements across OpenStack. |
| Workflow service | mistral | python-mistralclient | Workflow service for OpenStack cloud. |
| Common client | openstack | python-openstackclient | Common client for the OpenStack project. |
Install the prerequisite software and the Python package for each OpenStack client.
Most Linux distributions include packaged versions of the command-line clients that you can install directly; see Section 4.2.2.2, “Installing from packages”.
If you need to install the command-line clients from source, the following table lists the software needed to run them and provides installation instructions as needed.
| Prerequisite | Description |
|---|---|
| Python 2.7 or later | Currently, the clients do not support Python 3. |
| setuptools package | Installed by default on Mac OS X. Many Linux distributions provide packages to make setuptools easy to install. Search your package manager for setuptools to find an installation package. If you cannot find one, download the setuptools package directly from https://pypi.python.org/pypi/setuptools. The recommended way to install setuptools on Microsoft Windows is to follow the documentation provided on the setuptools website. Another option is to use the unofficial binary installer maintained by Christoph Gohlke (http://www.lfd.uci.edu/~gohlke/pythonlibs/#setuptools). |
| pip package | To install the clients on a Linux, Mac OS X, or Microsoft Windows system, use pip. It is easy to use, ensures that you get the latest version of the clients from the Python Package Index (https://pypi.python.org/), and lets you update or remove the packages later on. Because the installation process compiles source files, it requires the related Python development package for your operating system and distribution. Install pip through the package manager for your system, as described below. |
MacOS:
# easy_install pip
Microsoft Windows: Ensure that the C:\Python27\Scripts directory is defined in the PATH environment variable, and use the easy_install command from the setuptools package:
C:\>easy_install pip
Another option is to use the unofficial binary installer provided by Christoph Gohlke (http://www.lfd.uci.edu/~gohlke/pythonlibs/#pip).
Ubuntu and Debian:
# apt-get install python-dev python-pip
Note that extra dependencies may be required, per operating system, depending on the package being installed, such as is the case with Tempest.
Red Hat Enterprise Linux, CentOS, or Fedora: A packaged version enables you to use yum to install the package:
# yum install python-devel python-pip
There are also packaged versions of the clients available in RDO (https://www.rdoproject.org/) that enable yum to install the clients as described in Section 4.2.2.2, “Installing from packages”.
SUSE Linux Enterprise Server: A packaged version available in the Open Build Service (https://build.opensuse.org/package/show?package=python-pip&project=Cloud:OpenStack:Master) enables you to use YaST or zypper to install the package. First, add the Open Build Service repository:
# zypper addrepo -f obs://Cloud:OpenStack:Liberty/SLE_12 Liberty
Then install pip and use it to manage client installation:
# zypper install python-devel python-pip
There are also packaged versions of the clients available that enable zypper to install the clients as described in Section 4.2.2.2, “Installing from packages”.
openSUSE: You can install pip and use it to manage client installation:
# zypper install python-devel python-pip
There are also packaged versions of the clients available that enable zypper to install the clients as described in Section 4.2.2.2, “Installing from packages”.
The following example shows the command for installing the OpenStack client
with pip, which supports multiple services.
# pip install python-openstackclient
The following clients, while valid, are de-emphasized in favor of a common
client. Instead of installing and learning all these clients, we recommend
installing and using the OpenStack client. You may need to install an
individual project's client because coverage is not yet sufficient in the
OpenStack client. If you need to install an individual project's client,
replace the <project> name in this pip install command using the
list below.
# pip install python-<project>client
barbican - Key Manager Service API
ceilometer - Telemetry API
cinder - Block Storage API and extensions
cloudkitty - Rating service API
designate - DNS service API
fuel - Deployment service API
glance - Image service API
gnocchi - Telemetry API v3
heat - Orchestration API
magnum - Containers service API
manila - Shared file systems API
mistral - Workflow service API
monasca - Monitoring API
murano - Application catalog API
neutron - Networking API
nova - Compute API and extensions
sahara - Data Processing API
senlin - Clustering service API
swift - Object Storage API
trove - Database service API
openstack - Common OpenStack client supporting multiple services
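For example, using the package names from the table above, you would install the cinder client as follows:
# pip install python-cinderclient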
The following CLIs are deprecated in favor of openstack, the
Common OpenStack client supporting multiple services:
keystone - Identity service API and extensions
While you can install the keystone client for interacting with version 2.0
of the service's API, you should use the openstack client for all Identity
interactions.
Use pip to install the OpenStack clients on a Linux, Mac OS X, or Microsoft Windows system. It is easy to use and ensures that you get the latest version of the client from the Python Package Index (https://pypi.python.org/pypi). Also, pip enables you to update or remove a package.
Install each client separately by using the following command:
For Mac OS X or Linux:
# pip install python-PROJECTclient
For Microsoft Windows:
C:\>pip install python-PROJECTclient
RDO, openSUSE, SUSE Linux Enterprise, Debian, and Ubuntu have client packages
that can be installed without pip.
On Red Hat Enterprise Linux, CentOS, or Fedora, use yum to install
the clients from the packaged versions available in
RDO (https://www.rdoproject.org/):
# yum install python-PROJECTclient
For Ubuntu or Debian, use apt-get to install the clients from the
packaged versions:
# apt-get install python-PROJECTclient
For openSUSE, use zypper to install the clients from the distribution
packages service:
# zypper install python-PROJECTclient
For SUSE Linux Enterprise Server, use zypper to install the clients from
the distribution packages in the Open Build Service. First, add the Open
Build Service repository:
# zypper addrepo -f obs://Cloud:OpenStack:Liberty/SLE_12 Liberty
Then you can install the packages:
# zypper install python-PROJECTclient
To upgrade a client, add the --upgrade option to the
pip install command:
# pip install --upgrade python-PROJECTclient
To remove the client, run the pip uninstall command:
# pip uninstall python-PROJECTclient
Before you can run client commands, you must create and source the
PROJECT-openrc.sh file to set environment variables, as described in
the following section.
Run the following command to discover the version number for a client:
$ PROJECT --version
For example, to see the version number for the nova client, run the
following command:
$ nova --version
2.31.0
To set the required environment variables for the OpenStack command-line
clients, you must create an environment file called an OpenStack rc
file, or openrc.sh file. If your OpenStack installation provides
it, you can download the file from the OpenStack dashboard as an
administrative user or any other user. This project-specific environment
file contains the credentials that all OpenStack services use.
When you source the file, environment variables are set for your current shell. The variables enable the OpenStack client commands to communicate with the OpenStack services that run in the cloud.
Defining environment variables using an environment file is not a common practice on Microsoft Windows. Environment variables are usually defined in the Advanced tab of the System Properties dialog box.
Log in to the OpenStack dashboard and choose the project for which you want to download the OpenStack RC file. On the Project tab, open the Compute tab and click Access & Security.
On the API Access tab, click Download OpenStack RC File and save the file. The filename will be of the form
PROJECT-openrc.sh where PROJECT is the name of the project for
which you downloaded the file.
Copy the PROJECT-openrc.sh file to the computer from which you
want to run OpenStack commands.
For example, copy the file to the computer from which you want to upload
an image with a glance client command.
On any shell from which you want to run OpenStack commands, source the
PROJECT-openrc.sh file for the respective project.
In the following example, the demo-openrc.sh file is sourced for
the demo project:
$ source demo-openrc.sh
When you are prompted for an OpenStack password, enter the password for
the user who downloaded the PROJECT-openrc.sh file.
Alternatively, you can create the PROJECT-openrc.sh file from
scratch, if you cannot download the file from the dashboard.
In a text editor, create a file named PROJECT-openrc.sh and add
the following authentication information:
export OS_USERNAME=username
export OS_PASSWORD=password
export OS_TENANT_NAME=projectName
export OS_AUTH_URL=https://identityHost:portNumber/v2.0
# The following lines can be omitted
export OS_TENANT_ID=tenantIDString
export OS_REGION_NAME=regionName
export OS_CACERT=/path/to/cacertFile
On any shell from which you want to run OpenStack commands, source the
PROJECT-openrc.sh file for the respective project. In this
example, you source the admin-openrc.sh file for the admin
project:
$ source admin-openrc.sh
You are not prompted for the password with this method. The password
is stored in clear text in the PROJECT-openrc.sh file.
Restrict the permissions on this file to avoid security problems.
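For example, the following command restricts the file so that only its owner can read and write it:
$ chmod 600 PROJECT-openrc.sh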
You can also remove the OS_PASSWORD variable from the file, and
use the --password parameter with OpenStack client commands
instead.
You must set the OS_CACERT environment variable when you use the
https protocol in the OS_AUTH_URL environment setting, because the
certificate file that it points to is used to verify the TLS (HTTPS)
server certificate.
When you run OpenStack client commands, you can override some
environment variable settings by using the options that are listed at
the end of the help output of the various client commands. For
example, you can override the OS_PASSWORD setting in the
PROJECT-openrc.sh file by specifying a password on a
openstack command, as follows:
$ openstack --os-password PASSWORD service list
Where PASSWORD is your password.
A user specifies their username and password credentials to interact with OpenStack, using any client command. These credentials can be specified through environment variables or command-line arguments. Neither method is safe for the password.
For example, when you specify your password using the command-line
client with the --os-password argument, anyone with access to your
computer can view it in plain text in the ps output.
To avoid storing the password in plain text, you can prompt for the OpenStack password interactively.
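One common approach (a sketch; the prompt wording is arbitrary) is to omit OS_PASSWORD from the PROJECT-openrc.sh file and prompt for it when the file is sourced:
echo "Please enter your OpenStack Password: "
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT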
As a cloud administrator, you manage projects, users, and
roles. Projects are organizational units in the cloud to which
you can assign users. Projects are also known as tenants or
accounts. Users can be members of one or more projects. Roles
define which actions users can perform. You assign roles to
user-project pairs.
You can define actions for OpenStack service roles in the
/etc/PROJECT/policy.json files. For example, define actions for
Compute service roles in the /etc/nova/policy.json file.
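For example, a fragment of the /etc/nova/policy.json file might look like the following simplified sketch; the specific rules shown here are illustrative, and the actual rules in your deployment will differ:
{
    "context_is_admin": "role:admin",
    "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
    "compute:create": "",
    "compute:stop": "rule:admin_or_owner"
}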
You can manage projects, users, and roles independently from each other.
During cloud set up, the operator defines at least one project, user, and role.
Learn how to add, update, and delete projects and users, assign users to one or more projects, and change or remove the assignment. To enable or temporarily disable a project or user, you update that project or user. You can also change quotas at the project level.
Before you can delete a user account, you must remove the user account from its primary project.
Before you can run client commands, you must download and source an OpenStack RC file. See Download and source the OpenStack RC file (http://docs.openstack.org/user-guide/common/cli_set_environment_variables_using_openstack_rc.html#download-and-source-the-openstack-rc-file).
A project is a group of zero or more users. In Compute, a project owns virtual machines. In Object Storage, a project owns containers. Users can be associated with more than one project. Each project and user pairing can have a role associated with it.
List all projects with their ID, name, and whether they are enabled or disabled:
$ openstack project list
+----------------------------------+--------------------+
| id                               | name               |
+----------------------------------+--------------------+
| f7ac731cc11f40efbc03a9f9e1d1d21f | admin              |
| c150ab41f0d9443f8874e32e725a4cc8 | alt_demo           |
| a9debfe41a6d4d09a677da737b907d5e | demo               |
| 9208739195a34c628c58c95d157917d7 | invisible_to_admin |
| 3943a53dc92a49b2827fae94363851e1 | service            |
| 80cab5e1f02045abad92a2864cfd76cb | test_project       |
+----------------------------------+--------------------+
Create a project named new-project:
$ openstack project create --description 'my new project' new-project
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | my new project                   |
| enabled     | True                             |
| id          | 1a4a0618b306462c9830f876b0bd6af2 |
| name        | new-project                      |
+-------------+----------------------------------+
Specify the project ID to update a project. You can update the name, description, and enabled status of a project.
To temporarily disable a project:
$ openstack project set PROJECT_ID --disable
To enable a disabled project:
$ openstack project set PROJECT_ID --enable
To update the name of a project:
$ openstack project set PROJECT_ID --name project-new
To verify your changes, show information for the updated project:
$ openstack project show PROJECT_ID
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | my new project                   |
| enabled     | True                             |
| id          | 1a4a0618b306462c9830f876b0bd6af2 |
| name        | project-new                      |
+-------------+----------------------------------+
Specify the project ID to delete a project:
$ openstack project delete PROJECT_ID
List all users:
$ openstack user list
+----------------------------------+----------+
| id                               | name     |
+----------------------------------+----------+
| 352b37f5c89144d4ad0534139266d51f | admin    |
| 86c0de739bcb4802b8dc786921355813 | demo     |
| 32ec34aae8ea432e8af560a1cec0e881 | glance   |
| 7047fcb7908e420cb36e13bbd72c972c | nova     |
+----------------------------------+----------+
To create a user, you must specify a name. Optionally, you can specify a tenant ID, password, and email address. It is recommended that you include the tenant ID and password because the user cannot log in to the dashboard without this information.
Create the new-user user:
$ openstack user create --project new-project --password PASSWORD new-user
+----------+----------------------------------+
| Field    | Value                            |
+----------+----------------------------------+
| email    |                                  |
| enabled  | True                             |
| id       | 6e5140962b424cb9814fb172889d3be2 |
| name     | new-user                         |
| tenantId | new-project                      |
+----------+----------------------------------+
You can update the name, email address, and enabled status for a user.
To temporarily disable a user account:
$ openstack user set USER_NAME --disable
If you disable a user account, the user cannot log in to the dashboard. However, data for the user account is maintained, so you can enable the user at any time.
To enable a disabled user account:
$ openstack user set USER_NAME --enable
To change the name and email address for a user account:
$ openstack user set USER_NAME --name user-new --email new-user@example.com
User has been updated.
List the available roles:
$ openstack role list
+----------------------------------+---------------+
| id                               | name          |
+----------------------------------+---------------+
| 71ccc37d41c8491c975ae72676db687f | Member        |
| 149f50a1fe684bfa88dae76a48d26ef7 | ResellerAdmin |
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_      |
| 6ecf391421604da985db2f141e46a7c8 | admin         |
| deb4fffd123c4d02a907c2c74559dccf | anotherrole   |
+----------------------------------+---------------+
Users can be members of multiple projects. To assign users to multiple projects, define a role and assign that role to a user-project pair.
Create the new-role role:
$ openstack role create new-role
+--------+----------------------------------+
| Field  | Value                            |
+--------+----------------------------------+
| id     | bef1f95537914b1295da6aa038ef4de6 |
| name   | new-role                         |
+--------+----------------------------------+
To assign a user to a project, you must assign the role to a user-project pair. To do this, you need the user, role, and project IDs.
List users and note the user ID you want to assign to the role:
$ openstack user list
+----------------------------------+----------+---------+----------------------+
| id                               | name     | enabled | email                |
+----------------------------------+----------+---------+----------------------+
| 352b37f5c89144d4ad0534139266d51f | admin    | True    | admin@example.com    |
| 981422ec906d4842b2fc2a8658a5b534 | alt_demo | True    | alt_demo@example.com |
| 036e22a764ae497992f5fb8e9fd79896 | cinder   | True    | cinder@example.com   |
| 86c0de739bcb4802b8dc786921355813 | demo     | True    | demo@example.com     |
| 32ec34aae8ea432e8af560a1cec0e881 | glance   | True    | glance@example.com   |
| 7047fcb7908e420cb36e13bbd72c972c | nova     | True    | nova@example.com     |
+----------------------------------+----------+---------+----------------------+
List role IDs and note the role ID you want to assign:
$ openstack role list
+----------------------------------+---------------+
| id                               | name          |
+----------------------------------+---------------+
| 71ccc37d41c8491c975ae72676db687f | Member        |
| 149f50a1fe684bfa88dae76a48d26ef7 | ResellerAdmin |
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_      |
| 6ecf391421604da985db2f141e46a7c8 | admin         |
| deb4fffd123c4d02a907c2c74559dccf | anotherrole   |
| bef1f95537914b1295da6aa038ef4de6 | new-role      |
+----------------------------------+---------------+
List projects and note the project ID you want to assign to the role:
$ openstack project list
+----------------------------------+--------------------+---------+
| id                               | name               | enabled |
+----------------------------------+--------------------+---------+
| f7ac731cc11f40efbc03a9f9e1d1d21f | admin              | True    |
| c150ab41f0d9443f8874e32e725a4cc8 | alt_demo           | True    |
| a9debfe41a6d4d09a677da737b907d5e | demo               | True    |
| 9208739195a34c628c58c95d157917d7 | invisible_to_admin | True    |
| caa9b4ce7d5c4225aa25d6ff8b35c31f | new-user           | True    |
| 1a4a0618b306462c9830f876b0bd6af2 | project-new        | True    |
| 3943a53dc92a49b2827fae94363851e1 | service            | True    |
| 80cab5e1f02045abad92a2864cfd76cb | test_project       | True    |
+----------------------------------+--------------------+---------+
Assign a role to a user-project pair. In this example, assign the
new-role role to the demo and test-project pair:
$ openstack role add --user USER_NAME --project TENANT_ID ROLE_NAME
Verify the role assignment:
$ openstack role list --user USER_NAME --project TENANT_ID
+--------------+----------+---------------------------+--------------+
| id           | name     | user_id                   | tenant_id    |
+--------------+----------+---------------------------+--------------+
| bef1f9553... | new-role | 86c0de739bcb4802b21355... | 80cab5e1f... |
+--------------+----------+---------------------------+--------------+
View details for a specified role:
$ openstack role show ROLE_NAME
+----------+----------------------------------+
| Field    | Value                            |
+----------+----------------------------------+
| id       | bef1f95537914b1295da6aa038ef4de6 |
| name     | new-role                         |
+----------+----------------------------------+
Remove a role from a user-project pair:
Run the openstack role remove command:
$ openstack role remove --user USER_NAME --project TENANT_ID ROLE_NAME
Verify the role removal:
$ openstack role list --user USER_NAME --project TENANT_ID
If the role was removed, the command output omits the removed role.
Security groups are sets of IP filter rules that define networking access to an instance and are applied to all instances within a project. Group rules are project specific; project members can edit the default rules for their group and add new rule sets.
All projects have a default security group which is applied to any
instance that has no other defined security group. Unless you change the
default, this security group denies all incoming traffic and allows only
outgoing traffic to your instance.
You can use the allow_same_net_traffic option in the
/etc/nova/nova.conf file to globally control whether the rules apply
to hosts which share a network.
If set to:
True (default), hosts on the same subnet are not filtered and are
allowed to pass all types of traffic between them. On a flat network,
this allows all instances from all projects unfiltered communication.
With VLAN networking, this allows access between instances within the
same project. You can also simulate this setting by configuring the
default security group to allow all traffic from the subnet.
False, security groups are enforced for all connections.
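For example, to enforce security groups for every connection, including between hosts that share a network, you could set the following in /etc/nova/nova.conf (a sketch; restart the Compute services after changing it):
[DEFAULT]
# Enforce security group rules even between hosts on the same subnet
allow_same_net_traffic = False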
Additionally, the number of maximum rules per security group is
controlled by the security_group_rules and the number of allowed
security groups per project is controlled by the security_groups
quota (see the Manage quotas (http://docs.openstack.org/user-guide-admin/cli_set_quotas.html)
section).
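For example, an administrator might raise these quotas for a project with the nova client, as in this sketch (TENANT_ID is a placeholder):
$ nova quota-update --security-groups 20 --security-group-rules 40 TENANT_ID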
From the command line, you can get a list of security groups for the
project by using the nova client:
Ensure your system variables are set for the user and tenant for which you are checking security group rules. For example:
export OS_USERNAME=demo00
export OS_TENANT_NAME=tenant01
Output security groups, as follows:
$ nova secgroup-list
+---------+-------------+
| Name    | Description |
+---------+-------------+
| default | default     |
| open    | all ports   |
+---------+-------------+
View the details of a group, as follows:
$ nova secgroup-list-rules groupName
For example:
$ nova secgroup-list-rules open
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | 255     | 0.0.0.0/0 |              |
| tcp         | 1         | 65535   | 0.0.0.0/0 |              |
| udp         | 1         | 65535   | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
These rules are allow type rules as the default is deny. The first column is the IP protocol (one of icmp, tcp, or udp). The second and third columns specify the affected port range. The fourth column specifies the IP range in CIDR format. This example shows the full port range for all protocols allowed from all IPs.
When adding a new security group, you should pick a descriptive but brief name. This name shows up in brief descriptions of the instances that use it where the longer description field often does not. For example, seeing that an instance is using security group "http" is much easier to understand than "bobs_group" or "secgrp1".
Ensure your system variables are set for the user and tenant for which you are creating security group rules.
Add the new security group, as follows:
$ nova secgroup-create GroupName Description
For example:
$ nova secgroup-create global_http "Allows Web traffic anywhere on the Internet."
+--------------------------------------+-------------+----------------------------------------------+
| Id                                   | Name        | Description                                  |
+--------------------------------------+-------------+----------------------------------------------+
| 1578a08c-5139-4f3e-9012-86bd9dd9f23b | global_http | Allows Web traffic anywhere on the Internet. |
+--------------------------------------+-------------+----------------------------------------------+
Add a new group rule, as follows:
$ nova secgroup-add-rule secGroupName ip-protocol from-port to-port CIDR
The arguments are positional, and the from-port and to-port
arguments specify the local port range connections are allowed to
access, not the source and destination ports of the connection. For
example:
$ nova secgroup-add-rule global_http tcp 80 80 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 80        | 80      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
You can create complex rule sets by creating additional rules. For example, if you want to pass both HTTP and HTTPS traffic, run:
$ nova secgroup-add-rule global_http tcp 443 443 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 443       | 443     | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
Although the command outputs only the newly added rule, this operation is additive: both rules are created and enforced.
View all rules for the new security group, as follows:
$ nova secgroup-list-rules global_http
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 80        | 80      | 0.0.0.0/0 |              |
| tcp         | 443       | 443     | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
Ensure your system variables are set for the user and tenant for which you are deleting a security group.
Delete the new security group, as follows:
$ nova secgroup-delete GroupName
For example:
$ nova secgroup-delete global_http
Source Groups are a special, dynamic way of defining the CIDR of allowed sources. The user specifies a Source Group (security group name), and all of the user's other instances that use the specified Source Group are selected dynamically. This dynamic selection alleviates the need for individual rules to allow each new member of the cluster.
Make sure to set the system variables for the user and tenant for which you are creating a security group rule.
Add a source group, as follows:
$ nova secgroup-add-group-rule secGroupName source-group ip-protocol from-port to-port
For example:
$ nova secgroup-add-group-rule cluster global_http tcp 22 22
The cluster rule allows ssh access from any other instance that
uses the global_http group.
The Identity service enables you to define services, as follows:
Service catalog template. The Identity service acts
as a service catalog of endpoints for other OpenStack
services. The etc/default_catalog.templates
template file defines the endpoints for services (see the
example after this list). When the Identity service uses a
template file back end,
any changes that are made to the endpoints are cached.
These changes do not persist when you restart the
service or reboot the machine.
An SQL back end for the catalog service. When the Identity service is online, you must add the services to the catalog. When you deploy a system for production, use the SQL back end.
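For reference, endpoint definitions in the etc/default_catalog.templates file follow the pattern below; this is a sketch showing only Identity service entries, with localhost and the port variables as placeholders for your deployment:
catalog.RegionOne.identity.publicURL = http://localhost:$(public_port)s/v2.0
catalog.RegionOne.identity.adminURL = http://localhost:$(admin_port)s/v2.0
catalog.RegionOne.identity.internalURL = http://localhost:$(admin_port)s/v2.0
catalog.RegionOne.identity.name = Identity Service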
The auth_token middleware supports the
use of either a shared secret or users for each
service.
To authenticate users against the Identity service, you must create a service user for each OpenStack service. For example, create a service user for the Compute, Block Storage, and Networking services.
To configure the OpenStack services with service users, create a project for all services and create users for each service. Assign the admin role to each service user and project pair. This role enables users to validate tokens and authenticate and authorize other user requests.
List the available services:
$ openstack service list
+----------------------------------+----------+------------+
| ID                               | Name     | Type       |
+----------------------------------+----------+------------+
| 9816f1faaa7c4842b90fb4821cd09223 | cinder   | volume     |
| 1250f64f31e34dcd9a93d35a075ddbe1 | cinderv2 | volumev2   |
| da8cf9f8546b4a428c43d5e032fe4afc | ec2      | ec2        |
| 5f105eeb55924b7290c8675ad7e294ae | glance   | image      |
| dcaa566e912e4c0e900dc86804e3dde0 | keystone | identity   |
| 4a715cfbc3664e9ebf388534ff2be76a | nova     | compute    |
| 1aed4a6cf7274297ba4026cf5d5e96c5 | novav21  | computev21 |
| bed063c790634c979778551f66c8ede9 | neutron  | network    |
| 6feb2e0b98874d88bee221974770e372 | s3       | s3         |
+----------------------------------+----------+------------+
To create a service, run this command:
$ openstack service create --name SERVICE_NAME --description SERVICE_DESCRIPTION SERVICE_TYPE
SERVICE_NAME: the unique name of the new service.
SERVICE_TYPE: the service type, such as identity,
compute, network, image, object-store,
or any other service identifier string.
SERVICE_DESCRIPTION: the description of the service.
For example, to create a swift service of type
object-store, run this command:
$ openstack service create --name swift --description "object store service" object-store
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | object store service             |
| enabled     | True                             |
| id          | 84c23f4b942c44c38b9c42c5e517cd9a |
| name        | swift                            |
| type        | object-store                     |
+-------------+----------------------------------+
To get details for a service, run this command:
$ openstack service show SERVICE_TYPE|SERVICE_NAME|SERVICE_ID
For example:
$ openstack service show object-store
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | object store service             |
| enabled     | True                             |
| id          | 84c23f4b942c44c38b9c42c5e517cd9a |
| name        | swift                            |
| type        | object-store                     |
+-------------+----------------------------------+
Create a project for the service users.
Typically, this project is named service,
but choose any name you like:
$ openstack project create service
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | None                             |
| enabled     | True                             |
| id          | 3e9f3f5399624b2db548d7f871bd5322 |
| name        | service                          |
+-------------+----------------------------------+
Create service users for the relevant services for your deployment.
Assign the admin role to the user-project pair.
$ openstack role add --project service --user SERVICE_USER_NAME admin
+-------+----------------------------------+
| Field | Value                            |
+-------+----------------------------------+
| id    | 233109e756c1465292f31e7662b429b1 |
| name  | admin                            |
+-------+----------------------------------+
To delete a specified service, specify its type, name, or ID.
$ openstack service delete SERVICE_TYPE|SERVICE_NAME|SERVICE_ID
For example:
$ openstack service delete object-store
You can enable and disable Compute services. The following
examples disable and enable the nova-compute service.
List the Compute services:
$ nova service-list
+------------------+----------+----------+---------+-------+----------------------------+-----------------+
| Binary           | Host     | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+------------------+----------+----------+---------+-------+----------------------------+-----------------+
| nova-conductor   | devstack | internal | enabled | up    | 2013-10-16T00:56:08.000000 | None            |
| nova-cert        | devstack | internal | enabled | up    | 2013-10-16T00:56:09.000000 | None            |
| nova-compute     | devstack | nova     | enabled | up    | 2013-10-16T00:56:07.000000 | None            |
| nova-network     | devstack | internal | enabled | up    | 2013-10-16T00:56:06.000000 | None            |
| nova-scheduler   | devstack | internal | enabled | up    | 2013-10-16T00:56:04.000000 | None            |
| nova-consoleauth | devstack | internal | enabled | up    | 2013-10-16T00:56:07.000000 | None            |
+------------------+----------+----------+---------+-------+----------------------------+-----------------+
Disable a nova service:
$ nova service-disable localhost.localdomain nova-compute --reason 'trial log'
+----------+--------------+----------+-----------------+
| Host     | Binary       | Status   | Disabled Reason |
+----------+--------------+----------+-----------------+
| devstack | nova-compute | disabled | Trial log       |
+----------+--------------+----------+-----------------+
Check the service list:
$ nova service-list
+------------------+----------+----------+----------+-------+----------------------------+-----------------+
| Binary           | Host     | Zone     | Status   | State | Updated_at                 | Disabled Reason |
+------------------+----------+----------+----------+-------+----------------------------+-----------------+
| nova-conductor   | devstack | internal | enabled  | up    | 2013-10-16T00:56:48.000000 | None            |
| nova-cert        | devstack | internal | enabled  | up    | 2013-10-16T00:56:49.000000 | None            |
| nova-compute     | devstack | nova     | disabled | up    | 2013-10-16T00:56:47.000000 | Trial log       |
| nova-network     | devstack | internal | enabled  | up    | 2013-10-16T00:56:51.000000 | None            |
| nova-scheduler   | devstack | internal | enabled  | up    | 2013-10-16T00:56:44.000000 | None            |
| nova-consoleauth | devstack | internal | enabled  | up    | 2013-10-16T00:56:47.000000 | None            |
+------------------+----------+----------+----------+-------+----------------------------+-----------------+
Enable the service:
$ nova service-enable localhost.localdomain nova-compute
+----------+--------------+---------+
| Host     | Binary       | Status  |
+----------+--------------+---------+
| devstack | nova-compute | enabled |
+----------+--------------+---------+
Check the service list:
$ nova service-list
+------------------+----------+----------+---------+-------+----------------------------+-----------------+
| Binary           | Host     | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+------------------+----------+----------+---------+-------+----------------------------+-----------------+
| nova-conductor   | devstack | internal | enabled | up    | 2013-10-16T00:57:08.000000 | None            |
| nova-cert        | devstack | internal | enabled | up    | 2013-10-16T00:57:09.000000 | None            |
| nova-compute     | devstack | nova     | enabled | up    | 2013-10-16T00:57:07.000000 | None            |
| nova-network     | devstack | internal | enabled | up    | 2013-10-16T00:57:11.000000 | None            |
| nova-scheduler   | devstack | internal | enabled | up    | 2013-10-16T00:57:14.000000 | None            |
| nova-consoleauth | devstack | internal | enabled | up    | 2013-10-16T00:57:07.000000 | None            |
+------------------+----------+----------+---------+-------+----------------------------+-----------------+
The cloud operator assigns roles to users. Roles determine who can upload and manage images. The operator might restrict image upload and management to only cloud administrators or operators.
You can upload images through the glance client or the Image service
API. You can also use the nova client for image management; it provides
mechanisms to list and delete images, set and delete image metadata,
and create images of a running instance (snapshot and backup types).
After you upload an image, you cannot change it.
For details about image creation, see the Virtual Machine Image Guide (http://docs.openstack.org/image-guide/).
To get a list of images and further details about a single image, use
the glance image-list and glance image-show commands.
$ glance image-list
+----------+---------------------------------+-------------+------------------+----------+--------+
| ID       | Name                            | Disk Format | Container Format | Size     | Status |
+----------+---------------------------------+-------------+------------------+----------+--------+
| 397e7... | cirros-0.3.2-x86_64-uec         | ami         | ami              | 25165824 | active |
| df430... | cirros-0.3.2-x86_64-uec-kernel  | aki         | aki              | 4955792  | active |
| 3cf85... | cirros-0.3.2-x86_64-uec-ramdisk | ari         | ari              | 3714968  | active |
| 7e514... | myCirrosImage                   | ami         | ami              | 14221312 | active |
+----------+---------------------------------+-------------+------------------+----------+--------+
$ glance image-show myCirrosImage
+---------------------------------------+--------------------------------------+
| Property                              | Value                                |
+---------------------------------------+--------------------------------------+
| Property 'base_image_ref'             | 397e713c-b95b-4186-ad46-6126863ea0a9 |
| Property 'image_location'             | snapshot                             |
| Property 'image_state'                | available                            |
| Property 'image_type'                 | snapshot                             |
| Property 'instance_type_ephemeral_gb' | 0                                    |
| Property 'instance_type_flavorid'     | 2                                    |
| Property 'instance_type_id'           | 5                                    |
| Property 'instance_type_memory_mb'    | 2048                                 |
| Property 'instance_type_name'         | m1.small                             |
| Property 'instance_type_root_gb'      | 20                                   |
| Property 'instance_type_rxtx_factor'  | 1                                    |
| Property 'instance_type_swap'         | 0                                    |
| Property 'instance_type_vcpu_weight'  | None                                 |
| Property 'instance_type_vcpus'        | 1                                    |
| Property 'instance_uuid'              | 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5 |
| Property 'kernel_id'                  | df430cc2-3406-4061-b635-a51c16e488ac |
| Property 'owner_id'                   | 66265572db174a7aa66eba661f58eb9e     |
| Property 'ramdisk_id'                 | 3cf852bd-2332-48f4-9ae4-7d926d50945e |
| Property 'user_id'                    | 376744b5910b4b4da7d8e6cb483b06a8     |
| checksum                              | 8e4838effa1969ad591655d6485c7ba8     |
| container_format                      | ami                                  |
| created_at                            | 2013-07-22T19:45:58                  |
| deleted                               | False                                |
| disk_format                           | ami                                  |
| id                                    | 7e5142af-1253-4634-bcc6-89482c5f2e8a |
| is_public                             | False                                |
| min_disk                              | 0                                    |
| min_ram                               | 0                                    |
| name                                  | myCirrosImage                        |
| owner                                 | 66265572db174a7aa66eba661f58eb9e     |
| protected                             | False                                |
| size                                  | 14221312                             |
| status                                | active                               |
| updated_at                            | 2013-07-22T19:46:42                  |
+---------------------------------------+--------------------------------------+
When viewing a list of images, you can also use grep to filter the
list, as follows:
$ glance image-list | grep 'cirros'
| 397e713c-b95b-4186-ad46-612... | cirros-0.3.2-x86_64-uec         | ami | ami | 25165824 | active |
| df430cc2-3406-4061-b635-a51... | cirros-0.3.2-x86_64-uec-kernel  | aki | aki | 4955792  | active |
| 3cf852bd-2332-48f4-9ae4-7d9... | cirros-0.3.2-x86_64-uec-ramdisk | ari | ari | 3714968  | active |
To store location metadata for images, which enables direct file access for a client, update the /etc/glance/glance-api.conf file with the following statements:
show_multiple_locations = True
filesystem_store_metadata_file = filePath, where filePath points to a JSON file that defines the mount point for OpenStack images on your system and a unique ID. For example:
[{
    "id": "2d9bb53f-70ea-4066-a68b-67960eaae673",
    "mountpoint": "/var/lib/glance/images/"
}]

After you restart the Image service, you can use the following syntax to view the image's location information:
$ glance --os-image-api-version 2 image-show imageID
For example, using the image ID shown above, you would issue the command as follows:
$ glance --os-image-api-version 2 image-show 2d9bb53f-70ea-4066-a68b-67960eaae673
To create an image, use glance image-create:
$ glance image-create imageName
To update an image by name or ID, use glance image-update:
$ glance image-update imageName
The following list explains the optional arguments that you can use with
the create and update commands to modify image properties. For
more information, refer to Image service chapter in the OpenStack
Command-Line Interface
Reference (http://docs.openstack.org/cli-reference/index.html).
--name NAME
The name of the image.
--disk-format DISK_FORMAT
The disk format of the image. Acceptable formats are ami, ari, aki, vhd, vmdk, raw, qcow2, vdi, and iso.
--container-format CONTAINER_FORMAT
The container format of the image. Acceptable formats are ami, ari, aki, bare, docker, and ovf.
--owner TENANT_ID
The tenant who should own the image.
--size SIZE
The size of image data, in bytes.
--min-disk DISK_GB
The minimum size of the disk needed to boot the image, in gigabytes.
--min-ram DISK_RAM
The minimum amount of RAM needed to boot the image, in megabytes.
--location IMAGE_URL
The URL where the data for this image resides. For example, if the
image data is stored in swift, you could specify
swift://account:key@example.com/container/obj.
--file FILE
Local file that contains the disk image to be uploaded during the update. Alternatively, you can pass images to the client through stdin.
--checksum CHECKSUM
Hash of image data to use for verification.
--copy-from IMAGE_URL
Similar to --location in usage, but indicates that the image
server should immediately copy the data and store it in its
configured image store.
--is-public [True|False]
Makes an image accessible for all the tenants (admin-only by default).
--is-protected [True|False]
Prevents an image from being deleted.
--property KEY=VALUE
Arbitrary property to associate with image. This option can be used multiple times.
--purge-props
Deletes all image properties that are not explicitly set in the update request. Otherwise, those properties not referenced are preserved.
--human-readable
Prints the image size in a human-friendly format.
The following example shows the command that you would use to upload a CentOS 6.3 image in qcow2 format and configure it for public access:
$ glance image-create --name centos63-image --disk-format qcow2 \ --container-format bare --is-public True --file ./centos63.qcow2
The following example shows how to update an existing image with properties that describe the disk bus, the CD-ROM bus, and the VIF model:
When you use OpenStack with VMware vCenter Server, you need to specify
the vmware_disktype and vmware_adaptertype properties with
glance image-create.
Also, we recommend that you set the hypervisor_type="vmware" property.
For more information, see Images with VMware vSphere (http://docs.openstack.org/liberty/config-reference/content/vmware.html#VMware_images)
in the OpenStack Configuration Reference.
$ glance image-update \
--property hw_disk_bus=scsi \
--property hw_cdrom_bus=ide \
--property hw_vif_model=e1000 \
f16-x86_64-openstack-sda

Currently, the libvirt virtualization tool determines the disk, CD-ROM,
and VIF device models based on the configured hypervisor type
(libvirt_type in /etc/nova/nova.conf file). For the sake of optimal
performance, libvirt defaults to using virtio for both disk and VIF
(NIC) models. The disadvantage of this approach is that it is not
possible to run operating systems that lack virtio drivers, for example,
BSD, Solaris, and older versions of Linux and Windows.
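For illustration, a minimal /etc/nova/nova.conf fragment that selects the hypervisor type described above might look like this (a sketch; with kvm, libvirt defaults to virtio disk and NIC models):
[DEFAULT]
libvirt_type = kvm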
If you specify a disk or CD-ROM bus model that is not supported, see Table 4.2, “Disk and CD-ROM bus model values”. If you specify a VIF model that is not supported, the instance fails to launch. See Table 4.3, “VIF model values”.
The valid model values depend on the libvirt_type setting, as shown
in the following tables.
Disk and CD-ROM bus model values
| libvirt_type setting | Supported model values |
|---|---|
| qemu or kvm | virtio, scsi, ide |
| xen | xen, ide |
VIF model values
| libvirt_type setting | Supported model values |
|---|---|
| qemu or kvm | virtio, ne2k_pci, pcnet, rtl8139, e1000 |
| xen | netfront, ne2k_pci, pcnet, rtl8139, e1000 |
| vmware | VirtualE1000, VirtualPCNet32, VirtualVmxnet |
If you encounter problems in creating an image in the Image service or Compute, the following information may help you troubleshoot the creation process.
Ensure that the version of qemu you are using is 0.14 or later.
Earlier versions of qemu result in an unknown option -s error
message in the nova-compute.log file.
Examine the /var/log/nova-api.log and
/var/log/nova-compute.log log files for error messages.
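For example, you can check the qemu version and scan both log files for errors as follows (a sketch; exact log locations vary by distribution):
$ qemu-img --version
# grep -i error /var/log/nova-api.log /var/log/nova-compute.log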
A volume is a detachable block storage device, similar to a USB hard
drive. You can attach a volume to only one instance. To create and
manage volumes, you use a combination of nova and cinder client
commands.
As an administrator, you can migrate a volume with its data from one location to another in a manner that is transparent to users and workloads. You can migrate only detached volumes with no snapshots.
Possible use cases for data migration include:
Bring down a physical storage device for maintenance without disrupting workloads.
Modify the properties of a volume.
Free up space in a thinly-provisioned back end.
Migrate a volume with the cinder migrate command, as shown in the
following example:
$ cinder migrate volumeID destinationHost --force-host-copy True|False
In this example, --force-host-copy True forces the generic
host-based migration mechanism and bypasses any driver optimizations.
If the volume is in use or has snapshots, the specified host destination cannot accept the volume. If the user is not an administrator, the migration fails.
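For example, assuming a detached volume and a hypothetical destination host named server2, the migration command might look like this:
$ cinder migrate 573e024d-5235-49ce-8332-be1576d323f8 server2 --force-host-copy False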
This example creates a my-new-volume volume based on an image.
List images, and note the ID of the image that you want to use for your volume:
$ nova image-list
+-----------------------+---------------------------------+--------+--------------------------+
| ID                    | Name                            | Status | Server                   |
+-----------------------+---------------------------------+--------+--------------------------+
| 397e713c-b95b-4186... | cirros-0.3.2-x86_64-uec         | ACTIVE |                          |
| df430cc2-3406-4061... | cirros-0.3.2-x86_64-uec-kernel  | ACTIVE |                          |
| 3cf852bd-2332-48f4... | cirros-0.3.2-x86_64-uec-ramdisk | ACTIVE |                          |
| 7e5142af-1253-4634... | myCirrosImage                   | ACTIVE | 84c6e57d-a6b1-44b6-81... |
| 89bcd424-9d15-4723... | mysnapshot                      | ACTIVE | f51ebd07-c33d-4951-87... |
+-----------------------+---------------------------------+--------+--------------------------+
List the availability zones, and note the ID of the availability zone in which you want to create your volume:
$ cinder availability-zone-list
+------+-----------+
| Name | Status    |
+------+-----------+
| nova | available |
+------+-----------+
Create a volume with 8 gibibytes (GiB) of space, and specify the availability zone and image:
$ cinder create 8 --display-name my-new-volume --image-id 397e713c-b95b-4186-ad46-6126863ea0a9 --availability-zone nova
+---------------------+--------------------------------------+
| Property | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| created_at | 2013-07-25T17:02:12.472269 |
| display_description | None |
| display_name | my-new-volume |
| id | 573e024d-5235-49ce-8332-be1576d323f8 |
| image_id | 397e713c-b95b-4186-ad46-6126863ea0a9 |
| metadata | {} |
| size | 8 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| volume_type | None |
+---------------------+--------------------------------------+

To verify that your volume was created successfully, list the available volumes:
$ cinder list
+-----------------+-----------+-----------------+------+-------------+----------+-------------+
| ID              | Status    | Display Name    | Size | Volume Type | Bootable | Attached to |
+-----------------+-----------+-----------------+------+-------------+----------+-------------+
| 573e024d-523... | available | my-new-volume   | 8    | None        | true     |             |
| bd7cf584-45d... | available | my-bootable-vol | 8    | None        | true     |             |
+-----------------+-----------+-----------------+------+-------------+----------+-------------+
If your volume was created successfully, its status is available. If
its status is error, you might have exceeded your quota.
Attach your volume to a server, specifying the server ID and the volume ID:
$ nova volume-attach 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5 573e024d-5235-49ce-8332-be1576d323f8 /dev/vdb
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| serverId | 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5 |
| id       | 573e024d-5235-49ce-8332-be1576d323f8 |
| volumeId | 573e024d-5235-49ce-8332-be1576d323f8 |
+----------+--------------------------------------+
Note the ID of your volume.
Show information for your volume:
$ cinder show 573e024d-5235-49ce-8332-be1576d323f8
The output shows that the volume is attached to the server with ID
84c6e57d-a6b1-44b6-81eb-fcb36afd31b5, is in the nova availability
zone, and is bootable.
+------------------------------+------------------------------------------+
| Property | Value |
+------------------------------+------------------------------------------+
| attachments | [{u'device': u'/dev/vdb', |
| | u'server_id': u'84c6e57d-a |
| | u'id': u'573e024d-... |
| | u'volume_id': u'573e024d... |
| availability_zone | nova |
| bootable | true |
| created_at | 2013-07-25T17:02:12.000000 |
| display_description | None |
| display_name | my-new-volume |
| id | 573e024d-5235-49ce-8332-be1576d323f8 |
| metadata | {} |
| os-vol-host-attr:host | devstack |
| os-vol-tenant-attr:tenant_id | 66265572db174a7aa66eba661f58eb9e |
| size | 8 |
| snapshot_id | None |
| source_volid | None |
| status | in-use |
| volume_image_metadata | {u'kernel_id': u'df430cc2..., |
| | u'image_id': u'397e713c..., |
| | u'ramdisk_id': u'3cf852bd..., |
| |u'image_name': u'cirros-0.3.2-x86_64-uec'}|
| volume_type | None |
+------------------------------+------------------------------------------+

To resize your volume, you must first detach it from the server. To detach the volume from your server, pass the server ID and volume ID to the following command:
$ nova volume-detach 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5 573e024d-5235-49ce-8332-be1576d323f8
The volume-detach command does not return any output.
List volumes:
$ cinder list
+----------------+-----------+-----------------+------+-------------+----------+-------------+
| ID             | Status    | Display Name    | Size | Volume Type | Bootable | Attached to |
+----------------+-----------+-----------------+------+-------------+----------+-------------+
| 573e024d-52... | available | my-new-volume   | 8    | None        | true     |             |
| bd7cf584-45... | available | my-bootable-vol | 8    | None        | true     |             |
+----------------+-----------+-----------------+------+-------------+----------+-------------+
Note that the volume is now available.
Resize the volume by passing the volume ID and the new size (a value greater than the old one) as parameters:
$ cinder extend 573e024d-5235-49ce-8332-be1576d323f8 10
The extend command does not return any output.
To delete your volume, you must first detach it from the server. To detach the volume from your server and check for the list of existing volumes, see steps 1 and 2 in Section 4.9.4, “Resize a volume”.
Delete the volume using either the volume name or ID:
$ cinder delete my-new-volume
The delete command does not return any output.
List the volumes again, and note that the status of your volume is
deleting:
$ cinder list
+-----------------+-----------+-----------------+------+-------------+----------+-------------+
| ID              | Status    | Display Name    | Size | Volume Type | Bootable | Attached to |
+-----------------+-----------+-----------------+------+-------------+----------+-------------+
| 573e024d-523... | deleting  | my-new-volume   | 8    | None        | true     |             |
| bd7cf584-45d... | available | my-bootable-vol | 8    | None        | true     |             |
+-----------------+-----------+-----------------+------+-------------+----------+-------------+
When the volume is fully deleted, it disappears from the list of volumes:
$ cinder list
+-----------------+-----------+-----------------+------+-------------+----------+-------------+
| ID              | Status    | Display Name    | Size | Volume Type | Bootable | Attached to |
+-----------------+-----------+-----------------+------+-------------+----------+-------------+
| bd7cf584-45d... | available | my-bootable-vol | 8    | None        | true     |             |
+-----------------+-----------+-----------------+------+-------------+----------+-------------+
You can transfer a volume from one owner to another by using the
cinder transfer* commands. The volume donor, or original owner,
creates a transfer request and sends the created transfer ID and
authorization key to the volume recipient. The volume recipient, or new
owner, accepts the transfer by using the ID and key.
The procedure for volume transfer is intended for tenants (both the volume donor and recipient) within the same cloud.
Use cases include:
Create a custom bootable volume or a volume with a large data set and transfer it to a customer.
For bulk import of data to the cloud, the data ingress system creates a new Block Storage volume, copies data from the physical device, and transfers device ownership to the end user.
While logged in as the volume donor, list the available volumes:
$ cinder list
+-----------------+-----------+--------------+------+-------------+----------+-------------+
| ID              | Status    | Display Name | Size | Volume Type | Bootable | Attached to |
+-----------------+-----------+--------------+------+-------------+----------+-------------+
| 72bfce9f-cac... | error     | None         | 1    | None        | false    |             |
| a1cdace0-08e... | available | None         | 1    | None        | false    |             |
+-----------------+-----------+--------------+------+-------------+----------+-------------+
As the volume donor, request a volume transfer authorization code for a specific volume:
$ cinder transfer-create volumeID
The volume must be in an available state or the request will be
denied. If the transfer request is valid in the database (that is, it
has not expired or been deleted), the volume is placed in an
awaiting transfer state. For example:
$ cinder transfer-create a1cdace0-08e4-4dc7-b9dc-457e9bcfe25f
The output shows the volume transfer ID in the id row and the
authorization key.
+------------+--------------------------------------+
| Property   | Value                                |
+------------+--------------------------------------+
| auth_key   | b2c8e585cbc68a80                     |
| created_at | 2013-10-14T15:20:10.121458           |
| id         | 6e4e9aa4-bed5-4f94-8f76-df43232f44dc |
| name       | None                                 |
| volume_id  | a1cdace0-08e4-4dc7-b9dc-457e9bcfe25f |
+------------+--------------------------------------+
Optionally, you can specify a name for the transfer by using the
--display-name displayName parameter.
While the auth_key property is visible in the output of
cinder transfer-create VOLUME_ID, it will not be available in
subsequent cinder transfer-show TRANSFER_ID commands.
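Because the key cannot be retrieved later, you may want to capture both values when you create the transfer. A minimal bash sketch, assuming the table output format shown above:
# Create the transfer and keep the output so that auth_key is not lost.
$ transfer=$(cinder transfer-create a1cdace0-08e4-4dc7-b9dc-457e9bcfe25f)
# Extract the transfer ID and the authorization key from the table rows.
$ transfer_id=$(echo "$transfer" | awk '/ id /{print $4}')
$ auth_key=$(echo "$transfer" | awk '/ auth_key /{print $4}')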
Send the volume transfer ID and authorization key to the new owner (for example, by email).
View pending transfers:
$ cinder transfer-list
+--------------------------------------+--------------------------------------+------+
|                  ID                  |               VolumeID               | Name |
+--------------------------------------+--------------------------------------+------+
| 6e4e9aa4-bed5-4f94-8f76-df43232f44dc | a1cdace0-08e4-4dc7-b9dc-457e9bcfe25f | None |
+--------------------------------------+--------------------------------------+------+
After the volume recipient, or new owner, accepts the transfer, you can see that the transfer is no longer available:
$ cinder transfer-list
+----+-----------+------+
| ID | Volume ID | Name |
+----+-----------+------+
+----+-----------+------+
As the volume recipient, you must first obtain the transfer ID and authorization key from the original owner.
Accept the request:
$ cinder transfer-accept transferID authKey
For example:
$ cinder transfer-accept 6e4e9aa4-bed5-4f94-8f76-df43232f44dc b2c8e585cbc68a80
+-----------+--------------------------------------+
|  Property |                Value                 |
+-----------+--------------------------------------+
|     id    | 6e4e9aa4-bed5-4f94-8f76-df43232f44dc |
|    name   |                 None                 |
| volume_id | a1cdace0-08e4-4dc7-b9dc-457e9bcfe25f |
+-----------+--------------------------------------+
If you do not have a sufficient quota for the transfer, the transfer is refused.
List available volumes and their statuses:
$ cinder list
+-------------+-------------------+--------------+------+-------------+----------+-------------+
|      ID     |       Status      | Display Name | Size | Volume Type | Bootable | Attached to |
+-------------+-------------------+--------------+------+-------------+----------+-------------+
| 72bfce9f... |       error       |     None     |  1   |     None    |  false   |             |
| a1cdace0... | awaiting-transfer |     None     |  1   |     None    |  false   |             |
+-------------+-------------------+--------------+------+-------------+----------+-------------+
Find the matching transfer ID:
$ cinder transfer-list
+--------------------------------------+--------------------------------------+------+
|                  ID                  |               VolumeID               | Name |
+--------------------------------------+--------------------------------------+------+
| a6da6888-7cdf-4291-9c08-8c1f22426b8a | a1cdace0-08e4-4dc7-b9dc-457e9bcfe25f | None |
+--------------------------------------+--------------------------------------+------+
Delete the transfer:
$ cinder transfer-delete transferID
For example:
$ cinder transfer-delete a6da6888-7cdf-4291-9c08-8c1f22426b8a
Verify that the transfer list is now empty and that the volume is again available for transfer:
$ cinder transfer-list
+----+-----------+------+
| ID | Volume ID | Name |
+----+-----------+------+
+----+-----------+------+
$ cinder list
+-----------------+-----------+--------------+------+-------------+----------+-------------+
|        ID       |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+-----------------+-----------+--------------+------+-------------+----------+-------------+
|  72bfce9f-ca... |   error   |     None     |  1   |     None    |  false   |             |
|  a1cdace0-08... | available |     None     |  1   |     None    |  false   |             |
+-----------------+-----------+--------------+------+-------------+----------+-------------+
A share is provided by file storage, and you can give instances access to it.
To create and manage shares, you use manila client commands.
As an administrator, you can migrate a share with its data from one location to another in a manner that is transparent to users and workloads.
Possible use cases for data migration include:
Bring down a physical storage device for maintenance without disrupting workloads.
Modify the properties of a share.
Free up space in a thinly-provisioned back end.
Migrate a share with the manila migrate command, as shown in the
following example:
$ manila migrate shareID destinationHost --force-host-copy True|False
In this example, --force-host-copy True forces the generic
host-based migration mechanism and bypasses any driver optimizations.
destinationHost uses the host#pool format, which specifies both the
destination host and its storage pool.
If the user is not an administrator, the migration fails.
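For example, the following command migrates a share to the pool GENERIC on the host ubuntu@generic, forcing the host-based copy mechanism (the destination host and pool names here are illustrative):
$ manila migrate shareID ubuntu@generic#GENERIC --force-host-copy True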
In OpenStack, flavors define the compute, memory, and
storage capacity of nova computing instances. To put it
simply, a flavor is an available hardware configuration for a
server. It defines the size of a virtual server
that can be launched.
Flavors can also determine the compute hosts on which an instance can be launched. For information about customizing flavors, refer to the OpenStack Cloud Administrator Guide (http://docs.openstack.org/admin-guide-cloud/compute-flavors.html).
A flavor consists of the following parameters:
Unique ID (integer or UUID) for the new flavor. If specifying 'auto', a UUID will be automatically generated.
Name for the new flavor.
Number of virtual CPUs to use.
Amount of RAM to use (in megabytes).
Amount of disk space (in gigabytes) to use for the root (/) partition.
Amount of disk space (in gigabytes) to use for the ephemeral partition. If unspecified, the value is 0 by default. Ephemeral disks offer machine local disk storage linked to the lifecycle of a VM instance. When a VM is terminated, all data on the ephemeral disk is lost. Ephemeral disks are not included in any snapshots.
Amount of swap space (in megabytes) to use. If unspecified, the value is 0 by default.
The default flavors are:
| Flavor    | VCPUs | Disk (in GB) | RAM (in MB) |
|-----------|-------|--------------|-------------|
| m1.tiny   | 1     | 1            | 512         |
| m1.small  | 1     | 20           | 2048        |
| m1.medium | 2     | 40           | 4096        |
| m1.large  | 4     | 80           | 8192        |
| m1.xlarge | 8     | 160          | 16384       |
You can create and manage flavors with the nova
flavor-* commands provided by the python-novaclient
package.
List flavors to show the ID and name, the amount of memory, the amount of disk space for the root partition and for the ephemeral partition, the swap, and the number of virtual CPUs for each flavor:
$ nova flavor-list
To create a flavor, specify a name, ID, RAM size, disk size, and the number of VCPUs for the flavor, as follows:
$ nova flavor-create FLAVOR_NAME FLAVOR_ID RAM_IN_MB ROOT_DISK_IN_GB NUMBER_OF_VCPUS
FLAVOR_ID must be a unique ID (integer or UUID) for the new flavor. If you specify auto, a UUID is generated automatically.
The following example includes additional optional parameters. It creates
a public m1.extra_tiny flavor that automatically gets an ID assigned,
with 256 MB of memory, no disk space, and one VCPU. The rxtx-factor
value indicates the slice of bandwidth that instances with this flavor
can use (through the Virtual Interface (vif) creation in the
hypervisor):
$ nova flavor-create --is-public true m1.extra_tiny auto 256 0 1 --rxtx-factor .1
If an individual user or group of users needs a custom flavor that you do not want other tenants to have access to, you can change the flavor's access to make it a private flavor. See Private Flavors in the OpenStack Operations Guide (http://docs.openstack.org/openstack-ops/content/private-flavors.html).
For a list of optional parameters, run this command:
$ nova help flavor-create
After you create a flavor, assign it to a project by specifying the flavor name or ID and the tenant ID:
$ nova flavor-access-add FLAVOR TENANT_ID
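For example, you might create a private flavor and then grant a single tenant access to it (the flavor name and sizes here are illustrative):
$ nova flavor-create --is-public false m1.custom auto 4096 40 2
$ nova flavor-access-add m1.custom TENANT_ID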
In addition, you can set or unset extra_spec values for an existing flavor.
The extra_spec metadata keys can influence the instance directly when
it is launched. For example, if a flavor sets the
extra_spec key/value quota:vif_outbound_peak=65536, the instance's
outbound peak bandwidth I/O should be less than or equal to 512 Mbps.
Extra specs can control several aspects of an instance, including CPU
limits, disk tuning, bandwidth I/O, watchdog behavior, and the
random-number generator.
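For example, to apply the bandwidth limit above to the m1.extra_tiny flavor created earlier, use the nova flavor-key command with the set action; the unset action removes a key again:
$ nova flavor-key m1.extra_tiny set quota:vif_outbound_peak=65536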
For information about supporting metadata keys, see the
OpenStack Cloud Administrator Guide (http://docs.openstack.org/admin-guide-cloud/compute-flavors.html).
For a list of optional parameters, run this command:
$ nova help flavor-key
This section includes tasks specific to the OpenStack environment.
With the appropriate permissions, you can select which host instances are launched on and which roles can boot instances on this host.
To select the host where instances are launched, use
the --availability-zone ZONE:HOST parameter on the
nova boot command.
For example:
$ nova boot --image <uuid> --flavor m1.tiny --key_name test --availability-zone nova:server2
To specify which roles can launch an instance on a
specified host, enable the create:forced_host option in
the policy.json file. By default, this option is
enabled for only the admin role.
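The rule in the policy.json file typically looks like the following (shown with its default value); extend the rule to include any other roles that should be allowed to force a host:
"compute:create:forced_host": "is_admin:True"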
To view the list of valid compute hosts, use the
nova hypervisor-list command.
$ nova hypervisor-list
+----+---------------------+
| ID | Hypervisor hostname |
+----+---------------------+
| 1  | server2             |
| 2  | server3             |
| 3  | server4             |
+----+---------------------+
NUMA topology can exist on both the physical hardware of the host, and the virtual hardware of the instance. OpenStack Compute uses libvirt to tune instances to take advantage of NUMA topologies. The libvirt driver boot process looks at the NUMA topology field of both the instance and the host it is being booted on, and uses that information to generate an appropriate configuration.
If the host is NUMA capable, but the instance has not requested a NUMA topology, Compute attempts to pack the instance into a single cell. If this fails, though, Compute will not continue to try.
If the host is NUMA capable, and the instance has requested a specific NUMA topology, Compute will try to pin the vCPUs of different NUMA cells on the instance to the corresponding NUMA cells on the host. It will also expose the NUMA topology of the instance to the guest OS.
If you want Compute to pin a particular vCPU as part of this process,
set the vcpu_pin_set parameter in the nova.conf configuration
file. For more information about the vcpu_pin_set parameter, see the
Configuration Reference Guide.
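A minimal nova.conf sketch, assuming a host where guests should run only on physical CPUs 4 through 15 but skip CPU 8:
[DEFAULT]
# Restrict guest vCPUs to physical CPUs 4-15, excluding CPU 8
vcpu_pin_set = 4-15,^8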
If a hardware malfunction or other error causes a cloud compute node to fail,
you can evacuate instances to make them available again. You can optionally
include the target host on the evacuate command. If you omit the
host, the scheduler chooses the target host.
To preserve user data on the server disk, configure shared storage on the target host. When you evacuate the instance, Compute detects whether shared storage is available on the target host. Also, you must validate that the current VM host is not operational. Otherwise, the evacuation fails.
To find a host for the evacuated instance, list all hosts:
$ nova host-list
Evacuate the instance. You can use the --password PWD option
to pass the instance password to the command. If you do not specify a
password, the command generates and prints one after it finishes
successfully. The following command evacuates a server from a failed host
to HOST_B.
$ nova evacuate EVACUATED_SERVER_NAME HOST_B
The command rebuilds the instance from the original image or volume and returns a password. The command preserves the original configuration, which includes the instance ID, name, uid, IP address, and so on.
+-----------+--------------+
|  Property |    Value     |
+-----------+--------------+
| adminPass | kRAJpErnT4xZ |
+-----------+--------------+
To preserve the user disk data on the evacuated server, deploy Compute
with a shared file system. To configure your system, see
Configure migrations (http://docs.openstack.org/admin-guide-cloud/compute-configuring-migrations.html)
in the OpenStack Cloud Administrator Guide. The
following example does not change the password.
$ nova evacuate EVACUATED_SERVER_NAME HOST_B --on-shared-storage
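If many instances ran on the failed host, you can evacuate them in a loop. The following is a minimal sketch, assuming shared storage, that your novaclient supports the --host and --minimal options on nova list, and that the host names (illustrative here) are known:
#!/bin/bash
# Evacuate every instance from a failed host to a target host.
FAILED_HOST=server2
TARGET_HOST=server3
# Instance UUIDs are the only second-column values that contain a hyphen.
for vm in $(nova list --host $FAILED_HOST --minimal | awk '$2 ~ /-/ {print $2}'); do
    nova evacuate $vm $TARGET_HOST --on-shared-storage
done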
When you want to move an instance from one compute host to another,
you can use the nova migrate command. The scheduler chooses the
destination compute host based on its settings. This process does
not assume that the instance has shared storage available on the
target host.
To list the VMs you want to migrate, run:
$ nova list
After selecting a VM from the list, run this command, where VM_ID is set to the ID in the list returned in the previous step:
$ nova show VM_ID
Now, use the nova migrate command.
$ nova migrate VM_ID
To migrate an instance and watch the status, use this example script:
#!/bin/bash
# Provide usage
usage() {
echo "Usage: $0 VM_ID"
exit 1
}
[[ $# -eq 0 ]] && usage
# Migrate the VM to an alternate hypervisor
echo -n "Migrating instance to alternate host"
VM_ID=$1
nova migrate $VM_ID
VM_OUTPUT=`nova show $VM_ID`
VM_STATUS=`echo "$VM_OUTPUT" | grep status | awk '{print $4}'`
while [[ "$VM_STATUS" != "VERIFY_RESIZE" ]]; do
echo -n "."
sleep 2
VM_OUTPUT=`nova show $VM_ID`
VM_STATUS=`echo "$VM_OUTPUT" | grep status | awk '{print $4}'`
done
nova resize-confirm $VM_ID
echo " instance migrated and resized."
echo;
# Show the details for the VM
echo "Updated instance details:"
nova show $VM_ID
# Pause to allow users to examine VM details
read -p "Pausing, press <enter> to exit."If you see this error, it means you are either
trying the command with the wrong credentials,
such as a non-admin user, or the policy.json
file prevents migration for your user:
ERROR (Forbidden): Policy doesn't allow compute_extension:admin_actions:migrate
to be performed. (HTTP 403)
The instance is booted from a new host, but preserves its configuration including its ID, name, any metadata, IP address, and other properties.
Each instance has a private, fixed IP address (assigned when launched) and can also have a public, or floating, address. Private IP addresses are used for communication between instances, and public addresses are used for communication with networks outside the cloud, including the Internet.
By default, both administrative and end users can associate floating IP
addresses with projects and instances. You can change user permissions for
managing IP addresses by updating the /etc/nova/policy.json
file. For basic floating-IP procedures, refer to the Manage IP
Addresses section in the OpenStack End User Guide (http://docs.openstack.org/user-guide/).
For details on creating public networks using OpenStack Networking
(neutron), refer to the OpenStack Cloud Administrator Guide (http://docs.openstack.org/admin-guide-cloud/networking_adv-features.html)
. No floating IP addresses are created by default in OpenStack Networking.
As an administrator using legacy networking (nova-network), you
can use the following bulk commands to list, create, and delete ranges
of floating IP addresses. These addresses can then be associated with
instances by end users.
To list all floating IP addresses for all projects, run:
$ nova floating-ip-bulk-list
+------------+---------------+---------------+--------+-----------+
| project_id | address       | instance_uuid | pool   | interface |
+------------+---------------+---------------+--------+-----------+
| None       | 172.24.4.225  | None          | public | eth0      |
| None       | 172.24.4.226  | None          | public | eth0      |
| None       | 172.24.4.227  | None          | public | eth0      |
| None       | 172.24.4.228  | None          | public | eth0      |
| None       | 172.24.4.229  | None          | public | eth0      |
| None       | 172.24.4.230  | None          | public | eth0      |
| None       | 172.24.4.231  | None          | public | eth0      |
| None       | 172.24.4.232  | None          | public | eth0      |
| None       | 172.24.4.233  | None          | public | eth0      |
| None       | 172.24.4.234  | None          | public | eth0      |
| None       | 172.24.4.235  | None          | public | eth0      |
| None       | 172.24.4.236  | None          | public | eth0      |
| None       | 172.24.4.237  | None          | public | eth0      |
| None       | 172.24.4.238  | None          | public | eth0      |
| None       | 192.168.253.1 | None          | test   | eth0      |
| None       | 192.168.253.2 | None          | test   | eth0      |
| None       | 192.168.253.3 | None          | test   | eth0      |
| None       | 192.168.253.4 | None          | test   | eth0      |
| None       | 192.168.253.5 | None          | test   | eth0      |
| None       | 192.168.253.6 | None          | test   | eth0      |
+------------+---------------+---------------+--------+-----------+
To create a range of floating IP addresses, run:
$ nova floating-ip-bulk-create [--pool POOL_NAME] [--interface INTERFACE] RANGE_TO_CREATE
For example:
$ nova floating-ip-bulk-create --pool test 192.168.1.56/29
By default, floating-ip-bulk-create uses the
public pool and eth0 interface values.
You should use a range of free IP addresses that is correct for your network. If you are not sure, at least try to avoid the DHCP address range:
Pick a small range (/29 gives an 8 address range, 6 of which will be usable).
Use nmap to check a range's availability. For example,
192.168.1.56/29 represents a small range of addresses
(192.168.1.56-63, with 57-62 usable), and you could run the
command nmap -sn 192.168.1.56/29 to check whether the entire
range is currently unused.
To delete a range of floating IP addresses, run:
$ nova floating-ip-bulk-delete RANGE_TO_DELETE
For example:
$ nova floating-ip-bulk-delete 192.168.1.56/29
The Orchestration service provides a template-based orchestration engine for the OpenStack cloud, which can be used to create and manage cloud infrastructure resources such as storage, networking, instances, and applications as a repeatable running environment.
Templates are used to create stacks, which are collections of resources. For example, a stack might include instances, floating IPs, volumes, security groups, or users. The Orchestration service offers access to all OpenStack core services via a single modular template, with additional orchestration capabilities such as auto-scaling and basic high availability.
For information about:
basic creation and deletion of Orchestration stacks, refer to the OpenStack End User Guide (http://docs.openstack.org/user-guide/dashboard_stacks.html)
heat CLI commands, see the OpenStack Command Line Interface Reference (http://docs.openstack.org/cli-reference/heat.html)
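For example, a stack can be launched from a template file with the heat stack-create command; the stack name, template file, and parameter shown here are illustrative:
$ heat stack-create mystack -f my-template.yaml -P "key_name=heat_key"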
As an administrator, you can also carry out stack functions on behalf of your users. For example, to resume, suspend, or delete a stack, run:
$ heat action-resume stackID
$ heat action-suspend stackID
$ heat stack-delete stackID
You can show basic statistics on resource usage for hosts and instances.
For more sophisticated monitoring, see the ceilometer (https://launchpad.net/ceilometer) project. You can also use tools, such as Ganglia (http://ganglia.info/) or Graphite (http://graphite.wikidot.com/), to gather more detailed data.
The following examples show the host usage statistics for a host called
devstack.
List the hosts and the nova-related services that run on them:
$ nova host-list
+-----------+-------------+----------+
| host_name | service     | zone     |
+-----------+-------------+----------+
| devstack  | conductor   | internal |
| devstack  | compute     | nova     |
| devstack  | cert        | internal |
| devstack  | network     | internal |
| devstack  | scheduler   | internal |
| devstack  | consoleauth | internal |
+-----------+-------------+----------+
Get a summary of resource usage of all of the instances running on the host:
$ nova host-describe devstack
+----------+----------------------------------+-----+-----------+---------+
| HOST     | PROJECT                          | cpu | memory_mb | disk_gb |
+----------+----------------------------------+-----+-----------+---------+
| devstack | (total)                          | 2   | 4003      | 157     |
| devstack | (used_now)                       | 3   | 5120      | 40      |
| devstack | (used_max)                       | 3   | 4608      | 40      |
| devstack | b70d90d65e464582b6b2161cf3603ced | 1   | 512       | 0       |
| devstack | 66265572db174a7aa66eba661f58eb9e | 2   | 4096      | 40      |
+----------+----------------------------------+-----+-----------+---------+
The cpu column shows the sum of the virtual CPUs for instances
running on the host.
The memory_mb column shows the sum of the memory (in MB)
allocated to the instances that run on the host.
The disk_gb column shows the sum of the root and ephemeral disk
sizes (in GB) of the instances that run on the host.
The row that has the value used_now in the PROJECT column
shows the sum of the resources allocated to the instances that run on
the host, plus the resources allocated to the virtual machine of the
host itself.
The row that has the value used_max in the PROJECT column
shows the sum of the resources allocated to the instances that run on
the host.
These values are computed by using information about the flavors of the instances that run on the hosts. This command does not query the CPU usage, memory usage, or hard disk usage of the physical host.
Get CPU, memory, I/O, and network statistics for an instance.
List instances:
$ nova list
+----------+----------------------+--------+------------+-------------+------------------+
| ID       | Name                 | Status | Task State | Power State | Networks         |
+----------+----------------------+--------+------------+-------------+------------------+
| 84c6e... | myCirrosServer       | ACTIVE | None       | Running     | private=10.0.0.3 |
| 8a995... | myInstanceFromVolume | ACTIVE | None       | Running     | private=10.0.0.4 |
+----------+----------------------+--------+------------+-------------+------------------+
Get diagnostic statistics:
$ nova diagnostics myCirrosServer
+------------------+----------------+
| Property         | Value          |
+------------------+----------------+
| vnet1_rx         | 1210744        |
| cpu0_time        | 19624610000000 |
| vda_read         | 0              |
| vda_write        | 0              |
| vda_write_req    | 0              |
| vnet1_tx         | 863734         |
| vnet1_tx_errors  | 0              |
| vnet1_rx_drop    | 0              |
| vnet1_tx_packets | 3855           |
| vnet1_tx_drop    | 0              |
| vnet1_rx_errors  | 0              |
| memory           | 2097152        |
| vnet1_rx_packets | 5485           |
| vda_read_req     | 0              |
| vda_errors       | -1             |
+------------------+----------------+
Get summary statistics for each tenant:
$ nova usage-list
Usage from 2013-06-25 to 2013-07-24:
+----------------------------------+-----------+--------------+-----------+---------------+
| Tenant ID                        | Instances | RAM MB-Hours | CPU Hours | Disk GB-Hours |
+----------------------------------+-----------+--------------+-----------+---------------+
| b70d90d65e464582b6b2161cf3603ced | 1         | 344064.44    | 672.00    | 0.00          |
| 66265572db174a7aa66eba661f58eb9e | 3         | 671626.76    | 327.94    | 6558.86      |
+----------------------------------+-----------+--------------+-----------+---------------+
To prevent system capacities from being exhausted without notification, you can set up quotas. Quotas are operational limits. For example, the number of gigabytes allowed for each tenant can be controlled so that cloud resources are optimized. Quotas can be enforced at both the tenant (or project) and the tenant-user level.
Using the command-line interface, you can manage quotas for the OpenStack Compute service, the OpenStack Block Storage service, and the OpenStack Networking service.
The cloud operator typically changes default values because a tenant requires more than ten volumes or 1 TB on a compute node.
To view all tenants (projects), run:
$ openstack project list
+----------------------------------+----------+
| ID                               | Name     |
+----------------------------------+----------+
| e66d97ac1b704897853412fc8450f7b9 | admin    |
| bf4a37b885fe46bd86e999e50adad1d3 | services |
| 21bd1c7c95234fd28f589b60903606fa | tenant01 |
| f599c5cd1cba4125ae3d7caed08e288c | tenant02 |
+----------------------------------+----------+
To display all current users for a tenant, run:
$ openstack user list --project PROJECT_NAME
+----------------------------------+--------+
| ID                               | Name   |
+----------------------------------+--------+
| ea30aa434ab24a139b0e85125ec8a217 | demo00 |
| 4f8113c1d838467cad0c2f337b3dfded | demo01 |
+----------------------------------+--------+
As an administrative user, you can use the nova quota-*
commands, which are provided by the python-novaclient
package, to update the Compute service quotas for a specific tenant or
tenant user, as well as update the quota defaults for a new tenant.
Compute quota descriptions
| Quota name | Description |
|---|---|
| cores | Number of instance cores (VCPUs) allowed per tenant. |
| fixed-ips | Number of fixed IP addresses allowed per tenant. This number must be equal to or greater than the number of allowed instances. |
| floating-ips | Number of floating IP addresses allowed per tenant. |
| injected-file-content-bytes | Number of content bytes allowed per injected file. |
| injected-file-path-bytes | Length of injected file path. |
| injected-files | Number of injected files allowed per tenant. |
| instances | Number of instances allowed per tenant. |
| key-pairs | Number of key pairs allowed per user. |
| metadata-items | Number of metadata items allowed per instance. |
| ram | Megabytes of instance RAM allowed per tenant. |
| security-groups | Number of security groups per tenant. |
| security-group-rules | Number of rules per security group. |
List all default quotas for all tenants:
$ nova quota-defaults
For example:
$ nova quota-defaults
+-----------------------------+-------+
| Quota                       | Limit |
+-----------------------------+-------+
| instances                   | 10    |
| cores                       | 20    |
| ram                         | 51200 |
| floating_ips                | 10    |
| fixed_ips                   | -1    |
| metadata_items              | 128   |
| injected_files              | 5     |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes    | 255   |
| key_pairs                   | 100   |
| security_groups             | 10    |
| security_group_rules        | 20    |
+-----------------------------+-------+
Update a default value for a new tenant.
$ nova quota-class-update --KEY VALUE default
For example:
$ nova quota-class-update --instances 15 default
Place the tenant ID in a usable variable.
$ tenant=$(openstack project show -f value -c id TENANT_NAME)
List the currently set quota values for a tenant.
$ nova quota-show --tenant $tenant
For example:
$ nova quota-show --tenant $tenant
+-----------------------------+-------+
| Quota                       | Limit |
+-----------------------------+-------+
| instances                   | 10    |
| cores                       | 20    |
| ram                         | 51200 |
| floating_ips                | 10    |
| fixed_ips                   | -1    |
| metadata_items              | 128   |
| injected_files              | 5     |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes    | 255   |
| key_pairs                   | 100   |
| security_groups             | 10    |
| security_group_rules        | 20    |
+-----------------------------+-------+
Obtain the tenant ID.
$ tenant=$(openstack project show -f value -c id TENANT_NAME)
Update a particular quota value.
$ nova quota-update --QUOTA_NAME QUOTA_VALUE TENANT_ID
For example:
$ nova quota-update --floating-ips 20 $tenant
$ nova quota-show --tenant $tenant
+-----------------------------+-------+
| Quota                       | Limit |
+-----------------------------+-------+
| instances                   | 10    |
| cores                       | 20    |
| ram                         | 51200 |
| floating_ips                | 20    |
| fixed_ips                   | -1    |
| metadata_items              | 128   |
| injected_files              | 5     |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes    | 255   |
| key_pairs                   | 100   |
| security_groups             | 10    |
| security_group_rules        | 20    |
+-----------------------------+-------+
To view a list of options for the quota-update command, run:
$ nova help quota-update
Place the user ID in a usable variable.
$ tenantUser=$(openstack user show -f value -c id USER_NAME)
Place the user's tenant ID in a usable variable, as follows:
$ tenant=$(openstack project show -f value -c id TENANT_NAME)
List the currently set quota values for a tenant user.
$ nova quota-show --user $tenantUser --tenant $tenant
For example:
$ nova quota-show --user $tenantUser --tenant $tenant
+-----------------------------+-------+
| Quota                       | Limit |
+-----------------------------+-------+
| instances                   | 10    |
| cores                       | 20    |
| ram                         | 51200 |
| floating_ips                | 20    |
| fixed_ips                   | -1    |
| metadata_items              | 128   |
| injected_files              | 5     |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes    | 255   |
| key_pairs                   | 100   |
| security_groups             | 10    |
| security_group_rules        | 20    |
+-----------------------------+-------+
Place the user ID in a usable variable.
$ tenantUser=$(openstack user show -f value -c id USER_NAME)
Place the user's tenant ID in a usable variable, as follows:
$ tenant=$(openstack project show -f value -c id TENANT_NAME)
Update a particular quota value, as follows:
$ nova quota-update --user $tenantUser --QUOTA_NAME QUOTA_VALUE $tenant
For example:
$ nova quota-update --user $tenantUser --floating-ips 12 $tenant
$ nova quota-show --user $tenantUser --tenant $tenant
+-----------------------------+-------+
| Quota                       | Limit |
+-----------------------------+-------+
| instances                   | 10    |
| cores                       | 20    |
| ram                         | 51200 |
| floating_ips                | 12    |
| fixed_ips                   | -1    |
| metadata_items              | 128   |
| injected_files              | 5     |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes    | 255   |
| key_pairs                   | 100   |
| security_groups             | 10    |
| security_group_rules        | 20    |
+-----------------------------+-------+
To view a list of options for the quota-update command, run:
$ nova help quota-update
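Because nova quota-update accepts several options in a single call, you can apply a whole quota profile at once. A minimal sketch with illustrative values:
# Raise several Compute quotas for one tenant in a single call.
$ tenant=$(openstack project show -f value -c id TENANT_NAME)
$ nova quota-update --instances 20 --cores 40 --ram 102400 $tenant
# Confirm the new limits.
$ nova quota-show --tenant $tenant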
Use nova absolute-limits to get a list of the
current quota values and the current quota usage:
$ nova absolute-limits --tenant TENANT_NAME
+-------------------------+-------+
| Name                    | Value |
+-------------------------+-------+
| maxServerMeta           | 128   |
| maxPersonality          | 5     |
| maxImageMeta            | 128   |
| maxPersonalitySize      | 10240 |
| maxTotalRAMSize         | 51200 |
| maxSecurityGroupRules   | 20    |
| maxTotalKeypairs        | 100   |
| totalRAMUsed            | 0     |
| maxSecurityGroups       | 10    |
| totalFloatingIpsUsed    | 0     |
| totalInstancesUsed      | 0     |
| totalSecurityGroupsUsed | 0     |
| maxTotalFloatingIps     | 10    |
| maxTotalInstances       | 10    |
| totalCoresUsed          | 0     |
| maxTotalCores           | 20    |
+-------------------------+-------+
As an administrative user, you can update the OpenStack Block Storage service quotas for a project. You can also update the quota defaults for a new project.
Block Storage quotas
| Property name | Defines the number of |
|---|---|
| gigabytes | Volume gigabytes allowed for each project. |
| snapshots | Volume snapshots allowed for each project. |
| volumes | Volumes allowed for each project. |
Administrative users can view Block Storage service quotas.
Obtain the project ID.
For example:
$ project_id=$(openstack project show -f value -c id PROJECT_NAME)
List the default quotas for a project (tenant):
$ cinder quota-defaults PROJECT_ID
For example:
$ cinder quota-defaults $project_id
+-----------+-------+
| Property  | Value |
+-----------+-------+
| gigabytes | 1000  |
| snapshots | 10    |
| volumes   | 10    |
+-----------+-------+
View Block Storage service quotas for a project (tenant):
$ cinder quota-show PROJECT_ID
For example:
$ cinder quota-show $project_id
+-----------+-------+
| Property  | Value |
+-----------+-------+
| gigabytes | 1000  |
| snapshots | 10    |
| volumes   | 10    |
+-----------+-------+
Show the current usage of a per-project quota:
$ cinder quota-usage PROJECT_ID
For example:
$ cinder quota-usage $project_id
+-----------+--------+----------+-------+
| Type      | In_use | Reserved | Limit |
+-----------+--------+----------+-------+
| gigabytes | 0      | 0        | 1000  |
| snapshots | 0      | 0        | 10    |
| volumes   | 0      | 0        | 15    |
+-----------+--------+----------+-------+
Administrative users can edit and update Block Storage service quotas.
To update a default value for a new project, update the
relevant quota property (for example, quota_volumes) in the
[DEFAULT] section of the /etc/cinder/cinder.conf file.
For more information, see the Block Storage
Configuration Reference (http://docs.openstack.org/liberty/config-reference/content/ch_configuring-openstack-block-storage.html).
To update Block Storage service quotas for an existing project (tenant), run:
$ cinder quota-update --QUOTA_NAME QUOTA_VALUE PROJECT_ID
Replace QUOTA_NAME with the name of the quota to update, QUOTA_VALUE with the new value, and PROJECT_ID with the ID of the project.
For example:
$ cinder quota-update --volumes 15 $project_id
$ cinder quota-show $project_id
+-----------+-------+
| Property  | Value |
+-----------+-------+
| gigabytes | 1000  |
| snapshots | 10    |
| volumes   | 15    |
+-----------+-------+
To clear per-project quota limits, run:
$ cinder quota-delete PROJECT_ID
Determine the binary and host of the service you want to remove.
$ cinder service-list
+------------------+----------------------+------+---------+-------+----------------------------+-----------------+
| Binary           | Host                 | Zone | Status  | State | Updated_at                 | Disabled Reason |
+------------------+----------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | devstack             | nova | enabled | up    | 2015-10-13T15:21:48.000000 | -               |
| cinder-volume    | devstack@lvmdriver-1 | nova | enabled | up    | 2015-10-13T15:21:52.000000 | -               |
+------------------+----------------------+------+---------+-------+----------------------------+-----------------+
Disable the service.
$ cinder service-disable HOST_NAME BINARY_NAME
Remove the service from the database.
$ cinder-manage service remove BINARY_NAME HOST_NAME
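For example, to remove the cinder-volume service that runs on the devstack@lvmdriver-1 host from the listing above:
$ cinder service-disable devstack@lvmdriver-1 cinder-volume
$ cinder-manage service remove cinder-volume devstack@lvmdriver-1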
A quota limits the number of available resources. A default quota might be enforced for all tenants. When you try to create more resources than the quota allows, an error occurs:
$ neutron net-create test_net
Quota exceeded for resources: ['network']
Per-tenant quota configuration is also supported by the quota extension API. See Section 4.13.3.2, “Configure per-tenant quotas” for details.
In the Networking default quota mechanism, all tenants have the same quota values, such as the number of resources that a tenant can create.
The quota value is defined in the OpenStack Networking
neutron.conf configuration file. To disable quotas for
a specific resource, such as network, subnet,
or port, remove a corresponding item from quota_items.
This example shows the default quota values:
[quotas]
# resource name(s) that are supported in quota features
quota_items = network,subnet,port
# number of networks allowed per tenant, and minus means unlimited
quota_network = 10
# number of subnets allowed per tenant, and minus means unlimited
quota_subnet = 10
# number of ports allowed per tenant, and minus means unlimited
quota_port = 50
# default driver to use for quota checks
quota_driver = neutron.quota.ConfDriver
OpenStack Networking also supports quotas for L3 resources:
router and floating IP. Add these lines to the
quotas section in the neutron.conf file:
[quotas]
# number of routers allowed per tenant, and minus means unlimited
quota_router = 10
# number of floating IPs allowed per tenant, and minus means unlimited
quota_floatingip = 50
The quota_items option does not affect these quotas.
OpenStack Networking also supports quotas for security group
resources: number of security groups and the number of rules for
each security group. Add these lines to the
quotas section in the neutron.conf file:
[quotas]
# number of security groups per tenant, and minus means unlimited
quota_security_group = 10
# number of security rules allowed per tenant, and minus means unlimited
quota_security_group_rule = 100
The quota_items option does not affect these quotas.
OpenStack Networking also supports per-tenant quota limit by quota extension API.
Use these commands to manage per-tenant quotas:
quota-delete: Deletes defined quotas for a specified tenant.
quota-list: Lists defined quotas for all tenants.
quota-show: Shows quotas for a specified tenant.
quota-update: Updates quotas for a specified tenant.
Only users with the admin role can change a quota value. By default,
the same set of default quotas is enforced for all tenants, so no
quota-create command exists.
Configure Networking to show per-tenant quotas
Set the quota_driver option in the neutron.conf file.
quota_driver = neutron.db.quota_db.DbQuotaDriver
When you set this option, the output for Networking commands shows quotas.
List Networking extensions.
To list the Networking extensions, run this command:
$ neutron ext-list -c alias -c name
The command shows the quotas extension, which provides
per-tenant quota management support.
+-----------------+--------------------------+
| alias           | name                     |
+-----------------+--------------------------+
| agent_scheduler | Agent Schedulers         |
| security-group  | security-group           |
| binding         | Port Binding             |
| quotas          | Quota management support |
| agent           | agent                    |
| provider        | Provider Network         |
| router          | Neutron L3 Router        |
| lbaas           | LoadBalancing service    |
| extraroute      | Neutron Extra Route      |
+-----------------+--------------------------+
Show information for the quotas extension.
To show information for the quotas extension, run this command:
$ neutron ext-show quotas
+-------------+------------------------------------------------------------+
| Field       | Value                                                      |
+-------------+------------------------------------------------------------+
| alias       | quotas                                                     |
| description | Expose functions for quotas management per tenant         |
| links       |                                                            |
| name        | Quota management support                                   |
| namespace   | http://docs.openstack.org/network/ext/quotas-sets/api/v2.0 |
| updated     | 2012-07-29T10:00:00-00:00                                  |
+-------------+------------------------------------------------------------+
Only some plug-ins support per-tenant quotas. Specifically, Open vSwitch, Linux Bridge, and VMware NSX support them, but new versions of other plug-ins might bring additional functionality. See the documentation for each plug-in.
List tenants who have per-tenant quota support.
The quota-list command lists tenants for which the per-tenant
quota is enabled. The command does not list tenants with default
quota support. You must be an administrative user to run this command:
$ neutron quota-list
+------------+---------+------+--------+--------+----------------------------------+
| floatingip | network | port | router | subnet | tenant_id                        |
+------------+---------+------+--------+--------+----------------------------------+
| 20         | 5       | 20   | 10     | 5      | 6f88036c45344d9999a1f971e4882723 |
| 25         | 10      | 30   | 10     | 10     | bff5c9455ee24231b5bc713c1b96d422 |
+------------+---------+------+--------+--------+----------------------------------+
Show per-tenant quota values.
The quota-show command reports the current
set of quota limits for the specified tenant.
Non-administrative users can run this command without the
--tenant_id parameter. If per-tenant quota limits are
not enabled for the tenant, the command shows the default
set of quotas.
$ neutron quota-show --tenant_id 6f88036c45344d9999a1f971e4882723
+------------+-------+
| Field      | Value |
+------------+-------+
| floatingip | 20    |
| network    | 5     |
| port       | 20    |
| router     | 10    |
| subnet     | 5     |
+------------+-------+
The following command shows the command output for a non-administrative user.
$ neutron quota-show
+------------+-------+
| Field      | Value |
+------------+-------+
| floatingip | 20    |
| network    | 5     |
| port       | 20    |
| router     | 10    |
| subnet     | 5     |
+------------+-------+
Update quota values for a specified tenant.
Use the quota-update command to
update a quota for a specified tenant.
$ neutron quota-update --tenant_id 6f88036c45344d9999a1f971e4882723 --network 5
+------------+-------+
| Field      | Value |
+------------+-------+
| floatingip | 50    |
| network    | 5     |
| port       | 50    |
| router     | 10    |
| subnet     | 10    |
+------------+-------+
You can update quotas for multiple resources through one command.
$ neutron quota-update --tenant_id 6f88036c45344d9999a1f971e4882723 --subnet 5 --port 20
+------------+-------+
| Field      | Value |
+------------+-------+
| floatingip | 50    |
| network    | 5     |
| port       | 20    |
| router     | 10    |
| subnet     | 5     |
+------------+-------+
To update the limits for an L3 resource, such as a router
or floating IP, you must define new values for the quotas
after the -- directive.
This example updates the limit of the number of floating IPs for the specified tenant.
$ neutron quota-update --tenant_id 6f88036c45344d9999a1f971e4882723 -- --floatingip 20
+------------+-------+
| Field      | Value |
+------------+-------+
| floatingip | 20    |
| network    | 5     |
| port       | 20    |
| router     | 10    |
| subnet     | 5     |
+------------+-------+
You can update the limits of multiple resources, including both L2 and L3 resources, through one command:
$ neutron quota-update --tenant_id 6f88036c45344d9999a1f971e4882723 --network 3 --subnet 3 --port 3 -- --floatingip 3 --router 3
+------------+-------+
| Field      | Value |
+------------+-------+
| floatingip | 3     |
| network    | 3     |
| port       | 3     |
| router     | 3     |
| subnet     | 3     |
+------------+-------+
Delete per-tenant quota values.
To clear per-tenant quota limits, use the
quota-delete command.
$ neutron quota-delete --tenant_id 6f88036c45344d9999a1f971e4882723
Deleted quota: 6f88036c45344d9999a1f971e4882723
After you run this command, you can see that quota values for the tenant are reset to the default values.
$ neutron quota-show --tenant_id 6f88036c45344d9999a1f971e4882723
+------------+-------+
| Field      | Value |
+------------+-------+
| floatingip | 50    |
| network    | 10    |
| port       | 50    |
| router     | 10    |
| subnet     | 10    |
+------------+-------+
Use the swift command-line client to analyze log files.
The swift client is simple to use, scalable, and flexible.
Use the swift client -o or --output option to get
short answers to questions about logs.
You can use the -o or --output option with a single object
download to redirect the command output to a specific file or to STDOUT
(-). The ability to redirect the output to STDOUT enables you to
pipe (|) data without saving it to disk first.
This example assumes that the logtest directory contains the
following log files.
2010-11-16-21_access.log
2010-11-16-22_access.log
2010-11-15-21_access.log
2010-11-15-22_access.log
Each file uses the following line format.
Nov 15 21:53:52 lucid64 proxy-server - 127.0.0.1 15/Nov/2010/22/53/52 DELETE /v1/AUTH_cd4f57824deb4248a533f2c28bf156d3/2eefc05599d44df38a7f18b0b42ffedd HTTP/1.0 204 - \ - test%3Atester%2CAUTH_tkcdab3c6296e249d7b7e2454ee57266ff - - - txaba5984c-aac7-460e-b04b-afc43f0c6571 - 0.0432
Change into the logtest directory.
$ cd logtest
Upload the log files into the logtest container.
$ swift -A http://swift-auth.com:11000/v1.0 -U test:tester -K testing upload logtest *.log
2010-11-16-21_access.log
2010-11-16-22_access.log
2010-11-15-21_access.log
2010-11-15-22_access.log
Get statistics for the account.
$ swift -A http://swift-auth.com:11000/v1.0 -U test:tester -K testing \
  -q stat
Account: AUTH_cd4f57824deb4248a533f2c28bf156d3
Containers: 1
Objects: 4
Bytes: 5888268
Get statistics for the logtest container.
$ swift -A http://swift-auth.com:11000/v1.0 -U test:tester -K testing \
  stat logtest
Account: AUTH_cd4f57824deb4248a533f2c28bf156d3
Container: logtest
Objects: 4
Bytes: 5864468
Read ACL:
Write ACL:
List all objects in the logtest container.
$ swift -A http://swift-auth.com:11000/v1.0 -U test:tester -K testing \
  list logtest
2010-11-15-21_access.log
2010-11-15-22_access.log
2010-11-16-21_access.log
2010-11-16-22_access.log
This example uses the -o option and a hyphen (-) to get
information about an object.
Use the swift download command to download the object for the hour
2200 on November 16, 2010. On this command, stream the output to awk
to break down the requests by request type and return code.
Using the log line format, find the request type in column 9 and the return code in column 12.
After awk processes the output, it pipes it to sort and uniq
-c to sum up the number of occurrences for each request type and
return code combination.
Download an object.
$ swift -A http://swift-auth.com:11000/v1.0 -U test:tester -K testing \
download -o - logtest 2010-11-16-22_access.log | awk '{ print \
$9"-"$12}' | sort | uniq -c805 DELETE-204 12 DELETE-404 2 DELETE-409 723 GET-200 142 GET-204 74 GET-206 80 GET-304 34 GET-401 5 GET-403 18 GET-404 166 GET-412 2 GET-416 50 HEAD-200 17 HEAD-204 20 HEAD-401 8 HEAD-404 30 POST-202 25 POST-204 22 POST-400 6 POST-404 842 PUT-201 2 PUT-202 32 PUT-400 4 PUT-403 4 PUT-404 2 PUT-411 6 PUT-412 6 PUT-413 2 PUT-422 8 PUT-499
Discover how many PUT requests are in each log file.
Use a bash for loop with awk and swift with the -o or
--output option and a hyphen (-) to discover how many
PUT requests are in each log file.
Run the swift list command to list objects in the logtest
container. Then, for each item in the list, run the
swift download -o - command. Pipe the output into grep to
filter the PUT requests. Finally, pipe into wc -l to count the lines.
$ for f in `swift -A http://swift-auth.com:11000/v1.0 -U test:tester \
  -K testing list logtest` ; \
  do echo -ne "$f - PUTS - " ; swift -A \
  http://swift-auth.com:11000/v1.0 -U test:tester \
  -K testing download -o - logtest $f | grep PUT | wc -l ; \
  done
2010-11-15-21_access.log - PUTS - 402
2010-11-15-22_access.log - PUTS - 1091
2010-11-16-21_access.log - PUTS - 892
2010-11-16-22_access.log - PUTS - 910
List the object names that begin with a specified string.
Run the swift list -p 2010-11-15 command to list objects
in the logtest container that begin with the 2010-11-15 string.
For each item in the list, run the swift download -o - command.
Pipe the output to grep and wc.
Use the echo command to
display the object name.
$ for f in `swift -A http://swift-auth.com:11000/v1.0 -U test:tester \
-K testing list -p 2010-11-15 logtest` ; \
do echo -ne "$f - PUTS - " ; swift -A \
http://127.0.0.1:11000/v1.0 -U test:tester \
-K testing download -o - logtest $f | grep PUT | wc -l ; \
done
2010-11-15-21_access.log - PUTS - 402
2010-11-15-22_access.log - PUTS - 910
As an administrative user, you have some control over which volume back end your volumes reside on. You can specify affinity or anti-affinity between two volumes. Affinity between volumes means that they are stored on the same back end, whereas anti-affinity means that they are stored on different back ends.
For information on how to set up multiple back ends for Cinder, refer to the guide for Configuring multiple-storage back ends (http://docs.openstack.org/admin-guide-cloud/blockstorage_multi_backend.html).
Create a new volume on the same back end as Volume_A:
$ cinder create --hint same_host=Volume_A-UUID SIZE
Create a new volume on a different back end than Volume_A:
$ cinder create --hint different_host=Volume_A-UUID SIZE
Create a new volume on the same back end as Volume_A and Volume_B:
$ cinder create --hint same_host=Volume_A-UUID --hint same_host=Volume_B-UUID SIZE
Or:
$ cinder create --hint same_host="[Volume_A-UUID, Volume_B-UUID]" SIZE
Create a new volume on a different back end than both Volume_A and Volume_B:
$ cinder create --hint different_host=Volume_A-UUID --hint different_host=Volume_B-UUID SIZE
Or:
$ cinder create --hint different_host="[Volume_A-UUID, Volume_B-UUID]" SIZE