Configuring the inventory
This chapter explains how to configure the OpenStack-Ansible dynamic inventory to suit your needs.
Introduction
Common OpenStack services and their configuration are defined by OpenStack-Ansible in the /etc/openstack_deploy/openstack_user_config.yml settings file.

Additional services should be defined with a YAML file in /etc/openstack_deploy/conf.d, in order to manage file size.
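For instance, a deployer could describe the hosts for an additional service in its own small file. A minimal sketch, assuming a load balancer host group (the file name, group name, and address below are illustrative, not part of a default deployment):

  # /etc/openstack_deploy/conf.d/haproxy.yml
  haproxy_hosts:
    lb1:
      ip: 172.29.236.120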
The /etc/openstack_deploy/env.d directory sources all YAML files into the deployed environment, allowing a deployer to define additional group mappings. This directory is used to extend the environment skeleton, or to modify the defaults defined in the inventory/env.d directory.
To understand how the dynamic inventory works, see inventory-in-depth.
Warning
Never edit or delete the files /etc/openstack_deploy/openstack_inventory.json or /etc/openstack_deploy/openstack_hostnames_ips.yml. Doing so can lead to file corruption and problems with the inventory: hosts and containers could disappear and new ones would appear, breaking your existing deployment.
Configuration constraints
Group memberships
When adding groups, keep the following in mind:
- A group can contain hosts
- A group can contain child groups
However, a group cannot contain both child groups and hosts.
The lxc_hosts Group
When the dynamic inventory script creates a container name, the host on which the container resides is added to the lxc_hosts inventory group. Using this name for a group in the configuration will result in a runtime error.
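For example, a host group definition like the following sketch would collide with the reserved group name and break inventory generation (the host name and address are illustrative):

  # Do not do this: lxc_hosts is reserved by the dynamic inventory
  lxc_hosts:
    infra1:
      ip: 172.29.236.101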
Deploying directly on hosts
To deploy a component directly on the host instead of within a container, set the is_metal property to true for the container group in the container_skel section of the appropriate file.

The use of container_vars and the mapping from container groups to host groups are the same for a service deployed directly onto the host.
Note
The cinder-volume component is deployed directly on the host by default. See the env.d/cinder.yml file for this example.
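The relevant part of that file follows this general shape; this is a paraphrased sketch rather than the verbatim file contents, so check the inventory/env.d/cinder.yml shipped with your release:

  container_skel:
    cinder_volumes_container:
      belongs_to:
        - storage_containers
      contains:
        - cinder_volume
      properties:
        is_metal: true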
Example: Running galera on dedicated hosts
For example, to run Galera directly on dedicated hosts, you would perform the following steps:
1. Modify the container_skel section of the env.d/galera.yml file. For example:

     container_skel:
       galera_container:
         belongs_to:
           - db_containers
         contains:
           - galera
         properties:
           is_metal: true
   Note

   To deploy within containers on these dedicated hosts, omit the is_metal: true property.

2. Assign the db_containers container group (from the preceding step) to a host group by providing a physical_skel section for the host group in a new or existing file, such as env.d/galera.yml. For example:

     physical_skel:
       db_containers:
         belongs_to:
           - all_containers
       db_hosts:
         belongs_to:
           - hosts
3. Define the host group (db_hosts) in a conf.d/ file (such as galera.yml). For example:

     db_hosts:
       db-host1:
         ip: 172.39.123.11
       db-host2:
         ip: 172.39.123.12
       db-host3:
         ip: 172.39.123.13
Note

Each of the custom group names in this example (db_containers and db_hosts) is arbitrary. Choose your own group names, but ensure the references are consistent among all relevant files.
Deploying zero (or more than one) of a component type per host
When OpenStack-Ansible generates its dynamic inventory, the affinity setting determines how many containers of a similar type are deployed on a single physical host.
Using shared-infra_hosts as an example, consider this openstack_user_config.yml configuration:
shared-infra_hosts:
  infra1:
    ip: 172.29.236.101
  infra2:
    ip: 172.29.236.102
  infra3:
    ip: 172.29.236.103
Three hosts are assigned to the shared-infra_hosts group. OpenStack-Ansible ensures that each host runs a single database container, a single Memcached container, and a single RabbitMQ container. Each host has an affinity of 1 by default, which means that each host runs one of each container type.
If you are deploying a stand-alone Object Storage (swift) environment, you can skip the deployment of RabbitMQ. If you use this configuration, your openstack_user_config.yml file would look as follows:
shared-infra_hosts:
  infra1:
    affinity:
      rabbit_mq_container: 0
    ip: 172.29.236.101
  infra2:
    affinity:
      rabbit_mq_container: 0
    ip: 172.29.236.102
  infra3:
    affinity:
      rabbit_mq_container: 0
    ip: 172.29.236.103
This configuration deploys a Memcached container and a database container on each host, but no RabbitMQ containers.
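Conversely, an affinity greater than 1 requests multiple containers of the same type on a host. A minimal sketch of the idea, reusing the galera_container name from the earlier example (illustrative only; running several Galera containers on one host is rarely useful in practice):

  shared-infra_hosts:
    infra1:
      affinity:
        galera_container: 2
      ip: 172.29.236.101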
Omit a service or component from the deployment
To omit a component from a deployment, you can use one of several options:
- Remove the physical_skel link between the container group and the host group by deleting the related file located in the env.d/ directory.
- Do not run the playbook that installs the component. Unless you specify the component to run directly on a host by using the is_metal property, a container is created for this component.
- Adjust the affinity to 0 for the host group. As with the second option, unless you specify the component to run directly on a host by using the is_metal property, a container is created for this component.
Deploying using a different container technology
Note
While nspawn is an available containerization technology, it should be considered experimental at this time. Even though this subsystem is not yet recommended for production, it is stable enough to introduce to the community, and we would like feedback on it as we improve it over the next cycle.
OpenStack-Ansible presently supports two container technologies: LXC and nspawn. The two can be used separately or together within the same cluster, but each host can use only one.
Using shared-infra_hosts as an example, consider this openstack_user_config.yml configuration:
shared-infra_hosts:
  infra1:
    ip: 172.29.236.101
    container_vars:
      container_tech: lxc
  infra2:
    ip: 172.29.236.102
    container_vars:
      container_tech: nspawn
  infra3:
    ip: 172.29.236.103
In this example, the three hosts are assigned to the shared-infra_hosts group and will deploy containerized workloads using lxc on infra1, nspawn on infra2, and lxc on infra3. Notice that infra3 does not define the container_tech option because it is not required. If this option is undefined, the value is automatically set to lxc within the generated inventory. The two supported options for the container_tech configuration variable are lxc and nspawn.