Move configuration notes into configuration guide

We have configuration information split between README.md and the
configuration documentation.  Much of it is duplicated and poorly
organised.

This clears README.md of detailed configuration options and
consolidates them into the existing configuration guide.  Someone
first hitting the README doesn't need details on changing the RPC
back-end; more importantly, this makes it clear where we should be
adding or clarifying details.

Firstly, the detailed overview of local.conf is removed from the
README; it was duplicated in the configuration guide, where it remains
as a first-level section.

The configuration notes are divided into generic DevStack things
(logging, database backend, etc.); the rest of the notes, covering
various projects' configuration options, have been moved into a
dedicated sub-section "Projects".

Each project gets its own sub-sub-section.  The duplicated Swift
guides are consolidated into a single "Swift" section.  The Neutron
and multi-node notes, which were duplicated in their more specific
dedicated guides, are removed and replaced with links to those guides.
Other sections are moved directly.

Change-Id: Ib0bac56d82be870fe99c47c53fda674d8668b968
Ian Wienand 2015-08-10 13:39:17 +10:00
parent 4ebfea9d0d
commit 7d5be29920
2 changed files with 255 additions and 441 deletions

README.md

@@ -93,345 +93,14 @@ for example).
# Customizing
You can override environment variables used in `stack.sh` by creating a file
named `local.conf` with a ``localrc`` section as shown below. It is likely
that you will need to do this to tweak your networking configuration should
you need to access your cloud from a different host.
You can override environment variables used in `stack.sh` by creating
a file named `local.conf` with a ``localrc`` section as shown below. It
is likely that you will need to do this to tweak several settings for
your environment.
[[local|localrc]]
VARIABLE=value
See the **Local Configuration** section below for more details.
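For instance, a minimal `local.conf` could look like the following (the password values here are placeholders; substitute your own):

```ini
[[local|localrc]]
# Placeholder credentials -- choose real values for your deployment
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
```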
# Database Backend
Multiple database backends are available. The available databases are defined
in the lib/databases directory.
`mysql` is the default database; choose a different one by putting the
following in the `localrc` section:
disable_service mysql
enable_service postgresql
`mysql` is the default database.
# RPC Backend
Support for a RabbitMQ RPC backend is included. Additional RPC backends may
be available via external plugins. Enabling or disabling RabbitMQ is handled
via the usual service functions and ``ENABLED_SERVICES``.
Example disabling RabbitMQ in ``local.conf``:
disable_service rabbit
# Apache Frontend
Apache web server can be enabled for wsgi services that support being deployed
under HTTPD + mod_wsgi. By default, services that recommend running under
HTTPD + mod_wsgi are deployed under Apache. To use an alternative deployment
strategy (e.g. eventlet) for services that support an alternative to HTTPD +
mod_wsgi set ``ENABLE_HTTPD_MOD_WSGI_SERVICES`` to ``False`` in your
``local.conf``.
Each service that can be run under HTTPD + mod_wsgi also has an override
toggle available that can be set in your ``local.conf``.
Keystone is run under HTTPD + mod_wsgi by default.
Example (Keystone):
KEYSTONE_USE_MOD_WSGI="True"
Example (Nova):
NOVA_USE_MOD_WSGI="True"
Example (Swift):
SWIFT_USE_MOD_WSGI="True"
# Swift
Swift is disabled by default. When enabled, it is configured with
only one replica to avoid being IO/memory intensive on a small
VM. When running with only one replica, the account, container and
object services will run directly in screen. The other services, like
the replicator, updaters and auditor, run in the background.
If you would like to enable Swift you can add this to your `localrc` section:
enable_service s-proxy s-object s-container s-account
If you want a minimal Swift install with only Swift and Keystone you
can have this instead in your `localrc` section:
disable_all_services
enable_service key mysql s-proxy s-object s-container s-account
If you only want to do some testing of a realistic Swift cluster with
multiple replicas, you can do so by customizing the variable
`SWIFT_REPLICAS` in your `localrc` section (usually to 3).
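Putting these together, a `localrc` fragment for a more production-like Swift (a sketch; 3 replicas is a typical choice, not a requirement):

```ini
[[local|localrc]]
enable_service s-proxy s-object s-container s-account
# More replicas better simulates a production deployment
SWIFT_REPLICAS=3
```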
# Swift S3
If you are enabling `swift3` in `ENABLED_SERVICES` DevStack will
install the swift3 middleware emulation. Swift will be configured to
act as an S3 endpoint for Keystone, effectively replacing the
`nova-objectstore`.
Only the Swift proxy server is launched in the screen session; all
other services are started in the background and managed by the
`swift-init` tool.
# Neutron
Basic Setup
In order to enable Neutron in a single node setup, you'll need the
following settings in your `local.conf`:
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service q-metering
Then run `stack.sh` as normal.
DevStack supports setting specific Neutron configuration flags to the
service, ML2 plugin, DHCP and L3 configuration files:
[[post-config|/$Q_PLUGIN_CONF_FILE]]
[ml2]
mechanism_drivers=openvswitch,l2population
[[post-config|$NEUTRON_CONF]]
[DEFAULT]
quota_port=42
[[post-config|$Q_L3_CONF_FILE]]
[DEFAULT]
agent_mode=legacy
[[post-config|$Q_DHCP_CONF_FILE]]
[DEFAULT]
dnsmasq_dns_servers = 8.8.8.8,8.8.4.4
The ML2 plugin can run with the OVS, LinuxBridge, or Hyper-V agents on compute
hosts. This is a simple way to configure the ml2 plugin:
# VLAN configuration
ENABLE_TENANT_VLANS=True
# GRE tunnel configuration
ENABLE_TENANT_TUNNELS=True
# VXLAN tunnel configuration
Q_ML2_TENANT_NETWORK_TYPE=vxlan
The above will default in DevStack to using OVS on each compute host.
To change this, set the `Q_AGENT` variable to the agent you want to run
(e.g. `linuxbridge`).
Variable Name Notes
----------------------------------------------------------------------------
Q_AGENT This specifies which agent to run with the
ML2 Plugin (Typically either `openvswitch`
or `linuxbridge`).
Defaults to `openvswitch`.
Q_ML2_PLUGIN_MECHANISM_DRIVERS The ML2 MechanismDrivers to load. The default
is `openvswitch,linuxbridge`.
Q_ML2_PLUGIN_TYPE_DRIVERS The ML2 TypeDrivers to load. Defaults to
all available TypeDrivers.
Q_ML2_PLUGIN_GRE_TYPE_OPTIONS GRE TypeDriver options. Defaults to
`tunnel_id_ranges=1:1000`.
Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS VXLAN TypeDriver options. Defaults to
`vni_ranges=1001:2000`
Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS VLAN TypeDriver options. Defaults to none.
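Combining the variables above, a hypothetical `localrc` fragment selecting the LinuxBridge agent with VXLAN tenant networks might look like:

```ini
[[local|localrc]]
# Run the LinuxBridge agent instead of the default openvswitch
Q_AGENT=linuxbridge
Q_ML2_PLUGIN_MECHANISM_DRIVERS=linuxbridge
Q_ML2_TENANT_NETWORK_TYPE=vxlan
```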
# Heat
Heat is disabled by default (see `stackrc` file). To enable it explicitly
you'll need the following settings in your `localrc` section:
enable_service heat h-api h-api-cfn h-api-cw h-eng
Heat can also run in standalone mode, and be configured to orchestrate
on an external OpenStack cloud. To launch only Heat in standalone mode
you'll need the following settings in your `localrc` section:
disable_all_services
enable_service rabbit mysql heat h-api h-api-cfn h-api-cw h-eng
HEAT_STANDALONE=True
KEYSTONE_SERVICE_HOST=...
KEYSTONE_AUTH_HOST=...
# Tempest
If tempest has been successfully configured, a basic set of smoke
tests can be run as follows:
$ cd /opt/stack/tempest
$ tox -efull tempest.scenario.test_network_basic_ops
By default tempest is downloaded and the config file is generated, but the
tempest package is not installed in the system's global site-packages (the
package install includes installing dependencies). So tempest won't run
outside of tox. If you would like to install it add the following to your
``localrc`` section:
INSTALL_TEMPEST=True
# DevStack on Xenserver
If you would like to use Xenserver as the hypervisor, please refer
to the instructions in `./tools/xen/README.md`.
# Additional Projects
DevStack has a hook mechanism to call out to a dispatch script at specific
points in the execution of `stack.sh`, `unstack.sh` and `clean.sh`. This
allows upper-layer projects, especially those that the lower layer projects
have no dependency on, to be added to DevStack without modifying the core
scripts. Tempest is built this way as an example of how to structure the
dispatch script, see `extras.d/80-tempest.sh`. See `extras.d/README.md`
for more information.
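The dispatch idea can be sketched as follows. This is a simplified stand-in, not DevStack's actual hook code: real `extras.d` scripts are sourced by `stack.sh` with a phase argument and call DevStack helper functions rather than printing messages.

```shell
# Simplified sketch of an extras.d-style dispatch script (hypothetical;
# see extras.d/80-tempest.sh for the real pattern).
dispatch() {
    local phase=$1
    case $phase in
        stack)   echo "install, configure and start the service" ;;
        unstack) echo "stop the service" ;;
        clean)   echo "remove installed artifacts" ;;
        *)       echo "phase $phase: nothing to do" ;;
    esac
}

dispatch stack
```

Because the core scripts only call the dispatch point, upper-layer projects can hook in without the lower layers knowing about them.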
# Multi-Node Setup
A more interesting setup involves running multiple compute nodes, with Neutron
networks connecting VMs on different compute nodes.
You should run at least one "controller node", which should have a `stackrc`
that includes at least:
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
You likely want to change your `localrc` section to run a scheduler that
will balance VMs across hosts:
SCHEDULER=nova.scheduler.filter_scheduler.FilterScheduler
You can then run many compute nodes, each of which should have a `stackrc`
which includes the following, with the IP address of the above controller node:
ENABLED_SERVICES=n-cpu,rabbit,neutron,q-agt
SERVICE_HOST=[IP of controller node]
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
Q_HOST=$SERVICE_HOST
MATCHMAKER_REDIS_HOST=$SERVICE_HOST
# Multi-Region Setup
We want to set up two DevStacks (RegionOne and RegionTwo) with a shared
Keystone (same users and services) and Horizon.
Keystone and Horizon will be located in RegionOne.
Full spec is available at:
https://wiki.openstack.org/wiki/Heat/Blueprints/Multi_Region_Support_for_Heat.
In RegionOne:
REGION_NAME=RegionOne
In RegionTwo:
disable_service horizon
KEYSTONE_SERVICE_HOST=<KEYSTONE_IP_ADDRESS_FROM_REGION_ONE>
KEYSTONE_AUTH_HOST=<KEYSTONE_IP_ADDRESS_FROM_REGION_ONE>
REGION_NAME=RegionTwo
# Cells
Cells is a new scaling option with a full spec at:
http://wiki.openstack.org/blueprint-nova-compute-cells.
To setup a cells environment add the following to your `localrc` section:
enable_service n-cell
Be aware that there are some features currently missing in cells, one notable
one being security groups. The exercises have been patched to disable
functionality not supported by cells.
# IPv6
By default, most OpenStack services are bound to 0.0.0.0
and service endpoints are registered as IPv4 addresses.
A new variable was created to control this behavior, and to
allow for operation over IPv6 instead of IPv4.
For this, add the following to `local.conf`:
SERVICE_IP_VERSION=6
When set to "6" devstack services will open listen sockets on ::
and service endpoints will be registered using HOST_IPV6 as the
address. The default value for this setting is `4`. Dual-mode
support (for example `4+6`) is not currently supported.
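A hypothetical `local.conf` fragment for an IPv6-only deployment (the address below is a placeholder; `HOST_IPV6` is normally detected automatically):

```ini
[[local|localrc]]
SERVICE_IP_VERSION=6
# Override only if auto-detection picks the wrong interface (placeholder value)
HOST_IPV6=fd12:3456:789a::1
```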
# Local Configuration
Historically DevStack has used ``localrc`` to contain all local configuration
and customizations. More and more of the configuration variables available for
DevStack are passed-through to the individual project configuration files.
The old mechanism for this required specific code for each file and did not
scale well. This is now handled by a master local configuration file.
# local.conf
The new config file ``local.conf`` is an extended-INI format that introduces
a new meta-section header that provides some additional information such
as a phase name and destination config filename:
[[ <phase> | <config-file-name> ]]
where ``<phase>`` is one of a set of phase names defined by ``stack.sh``
and ``<config-file-name>`` is the configuration filename. The filename is
eval'ed in the ``stack.sh`` context so all environment variables are
available and may be used. Using the project config file variables in
the header is strongly suggested (see the ``NOVA_CONF`` example below).
If the path of the config file does not exist it is skipped.
The defined phases are:
* **local** - extracts ``localrc`` from ``local.conf`` before ``stackrc`` is sourced
* **post-config** - runs after the layer 2 services are configured
and before they are started
* **extra** - runs after services are started and before any files
in ``extra.d`` are executed
* **post-extra** - runs after files in ``extra.d`` are executed
The file is processed strictly in sequence; meta-sections may be specified more
than once but if any settings are duplicated the last to appear in the file
will be used.
[[post-config|$NOVA_CONF]]
[DEFAULT]
use_syslog = True
[osapi_v3]
enabled = False
A specific meta-section ``local|localrc`` is used to provide a default
``localrc`` file (actually ``.localrc.auto``). This allows all custom
settings for DevStack to be contained in a single file. If ``localrc``
exists it will be used instead to preserve backward-compatibility.
[[local|localrc]]
FIXED_RANGE=10.254.1.0/24
ADMIN_PASSWORD=speciale
LOGFILE=$DEST/logs/stack.sh.log
Note that ``Q_PLUGIN_CONF_FILE`` is unique in that it is assumed to *NOT*
start with a ``/`` (slash) character. A slash will need to be added:
[[post-config|/$Q_PLUGIN_CONF_FILE]]
Start by reading the [configuration
guide](doc/source/configuration.rst) for details of the many available
options.

doc/source/configuration.rst

@@ -148,6 +148,34 @@ will not be set if there is no IPv6 address on the default Ethernet interface.
Setting it here also makes it available for ``openrc`` to set ``OS_AUTH_URL``.
``HOST_IPV6`` is not set by default.
Examples
========
- Eliminate a Cinder pass-through (``CINDER_PERIODIC_INTERVAL``):
::
[[post-config|$CINDER_CONF]]
[DEFAULT]
periodic_interval = 60
- Sample ``local.conf`` with screen logging enabled:
::
[[local|localrc]]
FIXED_RANGE=10.254.1.0/24
NETWORK_GATEWAY=10.254.1.1
LOGDAYS=1
LOGDIR=$DEST/logs
LOGFILE=$LOGDIR/stack.sh.log
ADMIN_PASSWORD=quiet
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=a682f596-76f3-11e3-b3b2-e716f9080d50
Configuration Notes
===================
@@ -228,6 +256,72 @@ to direct the message stream to the log host. |
SYSLOG_HOST=$HOST_IP
SYSLOG_PORT=516
Database Backend
----------------
Multiple database backends are available. The available databases are defined
in the lib/databases directory.
`mysql` is the default database; choose a different one by putting the
following in the `localrc` section:
::
disable_service mysql
enable_service postgresql
`mysql` is the default database.
RPC Backend
-----------
Support for a RabbitMQ RPC backend is included. Additional RPC
backends may be available via external plugins. Enabling or disabling
RabbitMQ is handled via the usual service functions and
``ENABLED_SERVICES``.
Example disabling RabbitMQ in ``local.conf``:
::
disable_service rabbit
Apache Frontend
---------------
The Apache web server can be enabled for wsgi services that support
being deployed under HTTPD + mod_wsgi. By default, services that
recommend running under HTTPD + mod_wsgi are deployed under Apache. To
use an alternative deployment strategy (e.g. eventlet) for services
that support an alternative to HTTPD + mod_wsgi set
``ENABLE_HTTPD_MOD_WSGI_SERVICES`` to ``False`` in your
``local.conf``.
Each service that can be run under HTTPD + mod_wsgi also has an
override toggle available that can be set in your ``local.conf``.
Keystone is run under Apache with ``mod_wsgi`` by default.
Example (Keystone)
::
KEYSTONE_USE_MOD_WSGI="True"
Example (Nova):
::
NOVA_USE_MOD_WSGI="True"
Example (Swift):
::
SWIFT_USE_MOD_WSGI="True"
Libraries from Git
------------------
@@ -295,48 +389,6 @@ that matches requirements.
PIP_UPGRADE=True
Swift
-----
Swift is now used as the back-end for the S3-like object store. When
enabled Nova's objectstore (``n-obj`` in ``ENABLED_SERVICES``) is
automatically disabled. Enable Swift by adding its services to
``ENABLED_SERVICES``
::
enable_service s-proxy s-object s-container s-account
Setting Swift's hash value is required; you will be prompted for it
if Swift is enabled, so just set it to something in advance:
::
SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5
For development purposes the default number of replicas is set to
``1`` to reduce the overhead required. To better simulate a production
deployment set this to ``3`` or more.
::
SWIFT_REPLICAS=3
The data for Swift is stored in the source tree by default (in
``$DEST/swift/data``) and can be moved by setting
``SWIFT_DATA_DIR``. The specified directory will be created if it does
not exist.
::
SWIFT_DATA_DIR=$DEST/data/swift
*Note*: Previously just enabling ``swift`` was sufficient to start the
Swift services. That does not provide proper service granularity,
particularly in multi-host configurations, and is considered
deprecated. Some service combination tests now check for specific
Swift services and the old blanket acceptance will no longer work
correctly.
Service Catalog Backend
-----------------------
@@ -354,47 +406,6 @@ with ``KEYSTONE_CATALOG_BACKEND``:
DevStack's default configuration in ``sql`` mode is set in
``files/keystone_data.sh``
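For example, switching the catalog backend is a one-line `localrc` change (a sketch, assuming ``templated`` remains the supported alternative to ``sql``):

```ini
[[local|localrc]]
# Assumption: "templated" is still accepted as an alternative to the sql backend
KEYSTONE_CATALOG_BACKEND=templated
```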
Cinder
------
The logical volume group used to hold the Cinder-managed volumes is
set by ``VOLUME_GROUP``, the logical volume name prefix is set with
``VOLUME_NAME_PREFIX`` and the size of the volume backing file is set
with ``VOLUME_BACKING_FILE_SIZE``.
::
VOLUME_GROUP="stack-volumes"
VOLUME_NAME_PREFIX="volume-"
VOLUME_BACKING_FILE_SIZE=10250M
Multi-host DevStack
-------------------
Running DevStack with multiple hosts requires a custom ``local.conf``
section for each host. The master is the same as a single host
installation with ``MULTI_HOST=True``. The slaves have fewer services
enabled and a couple of host variables pointing to the master.
Master
~~~~~~
Set ``MULTI_HOST`` to true
::
MULTI_HOST=True
Slave
~~~~~
Set the following options to point to the master
::
MYSQL_HOST=w.x.y.z
RABBIT_HOST=w.x.y.z
GLANCE_HOSTPORT=w.x.y.z:9292
ENABLED_SERVICES=n-vol,n-cpu,n-net,n-api
IP Version
----------
@@ -447,29 +458,163 @@ optionally be used to alter the default IPv6 address
HOST_IPV6=${some_local_ipv6_address}
Examples
========
Multi-node setup
~~~~~~~~~~~~~~~~
- Eliminate a Cinder pass-through (``CINDER_PERIODIC_INTERVAL``):
See the :doc:`multi-node lab guide<guides/multinode-lab>`
::
Projects
--------
[[post-config|$CINDER_CONF]]
[DEFAULT]
periodic_interval = 60
Neutron
~~~~~~~
- Sample ``local.conf`` with screen logging enabled:
See the :doc:`neutron configuration guide<guides/neutron>` for
details on configuring Neutron.
::
[[local|localrc]]
FIXED_RANGE=10.254.1.0/24
NETWORK_GATEWAY=10.254.1.1
LOGDAYS=1
LOGDIR=$DEST/logs
LOGFILE=$LOGDIR/stack.sh.log
ADMIN_PASSWORD=quiet
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=a682f596-76f3-11e3-b3b2-e716f9080d50
Swift
~~~~~
Swift is disabled by default. When enabled, it is configured with
only one replica to avoid being IO/memory intensive on a small
VM. When running with only one replica, the account, container and
object services will run directly in screen. The other services, like
the replicator, updaters and auditor, run in the background.
If you would like to enable Swift you can add this to your `localrc`
section:
::
enable_service s-proxy s-object s-container s-account
If you want a minimal Swift install with only Swift and Keystone you
can have this instead in your `localrc` section:
::
disable_all_services
enable_service key mysql s-proxy s-object s-container s-account
If you only want to do some testing of a realistic Swift cluster with
multiple replicas, you can do so by customizing the variable
`SWIFT_REPLICAS` in your `localrc` section (usually to 3).
Swift S3
++++++++
If you are enabling `swift3` in `ENABLED_SERVICES` DevStack will
install the swift3 middleware emulation. Swift will be configured to
act as an S3 endpoint for Keystone, effectively replacing the
`nova-objectstore`.
Only the Swift proxy server is launched in the screen session; all
other services are started in the background and managed by the
`swift-init` tool.
Heat
~~~~
Heat is disabled by default (see `stackrc` file). To enable it
explicitly you'll need the following settings in your `localrc`
section
::
enable_service heat h-api h-api-cfn h-api-cw h-eng
Heat can also run in standalone mode, and be configured to orchestrate
on an external OpenStack cloud. To launch only Heat in standalone mode
you'll need the following settings in your `localrc` section
::
disable_all_services
enable_service rabbit mysql heat h-api h-api-cfn h-api-cw h-eng
HEAT_STANDALONE=True
KEYSTONE_SERVICE_HOST=...
KEYSTONE_AUTH_HOST=...
Tempest
~~~~~~~
If tempest has been successfully configured, a basic set of smoke
tests can be run as follows:
::
$ cd /opt/stack/tempest
$ tox -efull tempest.scenario.test_network_basic_ops
By default tempest is downloaded and the config file is generated, but the
tempest package is not installed in the system's global site-packages (the
package install includes installing dependencies). So tempest won't run
outside of tox. If you would like to install it add the following to your
``localrc`` section:
::
INSTALL_TEMPEST=True
Xenserver
~~~~~~~~~
If you would like to use Xenserver as the hypervisor, please refer to
the instructions in `./tools/xen/README.md`.
Cells
~~~~~
`Cells <http://wiki.openstack.org/blueprint-nova-compute-cells>`__ is
an alternative scaling option. To setup a cells environment add the
following to your `localrc` section:
::
enable_service n-cell
Be aware that there are some features currently missing in cells, one
notable one being security groups. The exercises have been patched to
disable functionality not supported by cells.
Cinder
~~~~~~
The logical volume group used to hold the Cinder-managed volumes is
set by ``VOLUME_GROUP``, the logical volume name prefix is set with
``VOLUME_NAME_PREFIX`` and the size of the volume backing file is set
with ``VOLUME_BACKING_FILE_SIZE``.
::
VOLUME_GROUP="stack-volumes"
VOLUME_NAME_PREFIX="volume-"
VOLUME_BACKING_FILE_SIZE=10250M
Keystone
~~~~~~~~
Multi-Region Setup
++++++++++++++++++
We want to set up two DevStacks (RegionOne and RegionTwo) with a shared
Keystone (same users and services) and Horizon. Keystone and Horizon
will be located in RegionOne. Full spec is available at:
`<https://wiki.openstack.org/wiki/Heat/Blueprints/Multi_Region_Support_for_Heat>`__.
In RegionOne:
::
REGION_NAME=RegionOne
In RegionTwo:
::
disable_service horizon
KEYSTONE_SERVICE_HOST=<KEYSTONE_IP_ADDRESS_FROM_REGION_ONE>
KEYSTONE_AUTH_HOST=<KEYSTONE_IP_ADDRESS_FROM_REGION_ONE>
REGION_NAME=RegionTwo