kolla-ansible/ansible/roles/nova-cell/tasks/reload.yml
Doug Szumski 78a828ef42 Support multiple nova cells
This patch adds initial support for deploying multiple Nova cells.

Splitting a nova-cell role out from the Nova role allows a more granular
approach to deploying and configuring Nova services.

A new enable_cells flag has been added that enables the support of
multiple cells via the introduction of a super conductor in addition to
cell-specific conductors. When this flag is not set (the default), nova
is configured in the same manner as before - with a single conductor.
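
For example, the flag might be enabled in globals.yml (an illustrative
sketch; the variable name comes from this patch, but the quoted boolean
style and the file location are assumptions):

  enable_cells: "yes"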

The nova role now deploys the global services:

* nova-api
* nova-scheduler
* nova-super-conductor (if enable_cells is true)

The nova-cell role handles services specific to a cell:

* nova-compute
* nova-compute-ironic
* nova-conductor
* nova-libvirt
* nova-novncproxy
* nova-serialproxy
* nova-spicehtml5proxy
* nova-ssh

This patch does not support using a single cell controller for managing
more than one cell. Support for sharing a cell controller will be added
in a future patch.

This patch should be backwards compatible and is tested by existing CI
jobs. A new CI job has been added that tests a multi-cell environment.

ceph-mon has been removed from the play hosts list as it is not
necessary - delegate_to does not require the host to be in the play.
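
For illustration only (this snippet is not part of the patch), a task can
delegate to a host that is not in the play, for example the first Ceph
monitor:

  - name: Fetch the nova Ceph key from a monitor
    command: ceph auth get-key client.nova
    delegate_to: "{{ groups['ceph-mon'][0] }}"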

Documentation will be added in a separate patch.

Partially Implements: blueprint support-nova-cells
Co-Authored-By: Mark Goddard <mark@stackhpc.com>
Change-Id: I810aad7d49db3f5a7fd9a2f0f746fd912fe03917
2019-10-16 17:42:36 +00:00


---
# NOTE(mgoddard): Currently (just prior to Stein release), sending SIGHUP to
# nova compute services leaves them in a broken state in which they cannot
# start new instances. The following error is seen in the logs:
# "In shutdown, no new events can be scheduled"
# To work around this we restart the nova-compute services.
# Speaking to the nova team, this seems to be an issue in oslo.service,
# with a fix proposed here: https://review.openstack.org/#/c/641907.
# This issue also seems to affect the proxy services, which exit non-zero in
# response to a SIGHUP, so restart those too.
# The issue actually affects all nova services, since they remain with RPC
# version pinned to the previous release:
# https://bugs.launchpad.net/kolla-ansible/+bug/1833069.
# TODO(mgoddard): Use SIGHUP when this bug has been fixed.
# NOTE(mgoddard): We use recreate_or_restart_container to cover the case where
# nova_safety_upgrade is "yes", and we need to recreate all containers.
# FIXME(mgoddard): Need to always do this since nova-compute handlers will not
# generally fire on controllers.
- name: Reload nova cell services to remove RPC version cap
  vars:
    service: "{{ nova_cell_services[item] }}"
  become: true
  kolla_docker:
    action: "recreate_or_restart_container"
    common_options: "{{ docker_common_options }}"
    name: "{{ service.container_name }}"
    image: "{{ service.image }}"
    environment: "{{ service.environment|default(omit) }}"
    pid_mode: "{{ service.pid_mode|default('') }}"
    ipc_mode: "{{ service.ipc_mode|default(omit) }}"
    privileged: "{{ service.privileged | default(False) }}"
    volumes: "{{ service.volumes|reject('equalto', '')|list }}"
    dimensions: "{{ service.dimensions }}"
  when:
    - kolla_action == 'upgrade'
    - inventory_hostname in groups[service.group]
    - service.enabled | bool
  with_items: "{{ nova_cell_services_require_nova_conf }}"
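
For context, the task above assumes that each entry in nova_cell_services
provides the fields it reads: container_name, group, enabled, image,
volumes and dimensions, plus the optional environment, pid_mode, ipc_mode
and privileged keys. A minimal sketch of one entry is shown below; the
names and values are illustrative assumptions, not taken from this patch:

  nova_cell_services:
    nova-conductor:
      container_name: nova_conductor
      group: nova-conductor
      enabled: true
      image: "{{ nova_conductor_image_full }}"
      volumes: "{{ nova_conductor_default_volumes }}"
      dimensions: "{{ nova_conductor_dimensions }}"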