Rafael Weingärtner 22a6223b1b Standardize the configuration of "oslo_messaging" section
After all of the discussions we had on
"https://review.opendev.org/#/c/670626/2", I studied all projects that
have an "oslo_messaging" section. Afterwards, I applied the same method
that is already used in the "oslo_messaging" section of Nova, Cinder,
and others. This guarantees a consistent way to enable/disable
notifications across projects, based on components (e.g. Ceilometer)
being enabled or disabled. Here follows the list of components and the
respective changes I made.
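
For reference, the defaults side of this pattern looks roughly as
follows (a sketch, using the Vitrage role as the example; each role
defines its own topic list and filters it down to the enabled ones):

    vitrage_notification_topics:
      - name: notifications
        enabled: "{{ enable_ceilometer | bool }}"
      - name: vitrage_notifications
        enabled: True
    vitrage_enabled_notification_topics: "{{ vitrage_notification_topics | selectattr('enabled', 'equalto', true) | list }}"

This way, the "notifications" topic is only enabled when a consumer
such as Ceilometer is deployed, so services do not keep publishing to
a queue that nobody drains.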

* Aodh:
The section is declared, but it is not used. Therefore, it will
be removed in an upcoming PR.

* Congress:
The section is declared, but it is not used. Therefore, it will
be removed in an upcoming PR.

* Cinder:
It was already properly configured.

* Octavia:
The section is declared, but it is not used. Therefore, it will
be removed in an upcoming PR.

* Heat:
It was already using a similar scheme; I just modified it slightly
to match what we have in all other components.

* Ceilometer:
Ceilometer publishes some messages to RabbitMQ. However, the
default driver is "messagingv2", not '' (empty) as defined in
oslo.messaging; these configurations are defined in
ceilometer/publisher/messaging.py. Therefore, we do not need to do
anything for the "oslo_messaging_notifications" section in Ceilometer.

* Tacker:
It was already using a similar scheme; I just modified it slightly
to match what we have in all other components.

* Neutron:
It was already properly configured.

* Nova
It was already properly configured. However, we found another issue
with its configuration. Kolla-Ansible does not configure Nova
notifications as it should. If 'searchlight' is not installed
(enabled), the 'notification_format' should be 'unversioned'. The
default is 'both', so Nova also sends notifications to the
'versioned_notifications' queue, but that queue has no consumer when
'searchlight' is disabled. In our case, the queue accumulated 511k
messages, and that huge amount of "stuck" messages made the RabbitMQ
cluster unstable.

https://bugzilla.redhat.com/show_bug.cgi?id=1478274
https://bugs.launchpad.net/ceilometer/+bug/1665449
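
The fix is a conditional in Nova's config template along these lines
(a sketch; 'enable_searchlight' is the Kolla-Ansible toggle for
Searchlight, and 'notification_format' lives in Nova's
"[notifications]" section):

    [notifications]
    {% if not enable_searchlight | bool %}
    notification_format = unversioned
    {% endif %}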

* Nova_hyperv:
I added the same configurations as in the Nova project.

* Vitrage
It was already using a similar scheme; I just modified it slightly
to match what we have in all other components.

* Searchlight
I created a mechanism similar to what we have in AODH, Cinder, Nova,
and others.

* Ironic
I created a mechanism similar to what we have in AODH, Cinder, Nova,
and others.

* Glance
It was already properly configured.

* Trove
It was already using a similar scheme; I just modified it slightly
to match what we have in all other components.

* Blazar
It was already using a similar scheme; I just modified it slightly
to match what we have in all other components.

* Sahara
It was already using a similar scheme; I just modified it slightly
to match what we have in all other components.

* Watcher
I created a mechanism similar to what we have in AODH, Cinder, Nova,
and others.

* Barbican
I created a mechanism similar to what we have in Cinder, Nova,
and others. I also added a configuration to the 'keystone_notifications'
section. Barbican needs its own queue to capture events from Keystone.
Otherwise, it has an impact on Ceilometer and other systems that
consume the default "notifications" queue.
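
The Keystone-events side of barbican.conf then looks roughly like
this (a sketch; option names follow Barbican's 'keystone_notifications'
config group):

    [keystone_notifications]
    enable = true
    topic = barbican_notifications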

* Keystone
Keystone is the system that triggered this work with the discussions
that followed on https://review.opendev.org/#/c/670626/2. After a long
discussion, we agreed to apply the same approach that we have in Nova,
Cinder, and other systems to Keystone, and that is what we did.
Moreover, we introduced a new topic, "barbican_notifications", which is
enabled when Barbican is enabled. We also removed the variable
'enable_cadf_notifications', as it is obsolete; CADF is already the
default notification format in Keystone.
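
Concretely, Keystone's topic list now follows the same defaults
pattern as above, roughly (a sketch; the exact variable names in the
role may differ):

    keystone_notification_topics:
      - name: notifications
        enabled: "{{ enable_ceilometer | bool }}"
      - name: barbican_notifications
        enabled: "{{ enable_barbican | bool }}"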

* Mistral:
The driver was hardcoded to "noop", which does not seem like good
practice. Instead, I applied the same standard sketched above: use the
regular driver and push to the "notifications" topic when Ceilometer
is enabled.

* Cyborg:
I created a mechanism similar to what we have in AODH, Cinder, Nova,
and others.

* Murano
It was already using a similar scheme; I just modified it slightly
to match what we have in all other components.

* Senlin
It was already using a similar scheme; I just modified it slightly
to match what we have in all other components.

* Manila
It was already using a similar scheme; I just modified it slightly
to match what we have in all other components.

* Zun
The section is declared, but it is not used. Therefore, it will
be removed in an upcoming PR.

* Designate
It was already using a similar scheme; I just modified it slightly
to match what we have in all other components.

* Magnum
It was already using a similar scheme; I just modified it slightly
to match what we have in all other components.

Closes-Bug: #1838985

Change-Id: I88bdb004814f37c81c9a9c4e5e491fac69f6f202
Signed-off-by: Rafael Weingärtner <rafael@apache.org>
2019-08-15 13:18:16 -03:00

---
project_name: "vitrage"
vitrage_services:
  vitrage-api:
    container_name: vitrage_api
    group: vitrage-api
    enabled: true
    image: "{{ vitrage_api_image_full }}"
    volumes: "{{ vitrage_api_default_volumes + vitrage_api_extra_volumes }}"
    dimensions: "{{ vitrage_api_dimensions }}"
    haproxy:
      vitrage_api:
        enabled: "{{ enable_vitrage }}"
        mode: "http"
        external: false
        port: "{{ vitrage_api_port }}"
      vitrage_api_external:
        enabled: "{{ enable_vitrage }}"
        mode: "http"
        external: true
        port: "{{ vitrage_api_port }}"
  vitrage-notifier:
    container_name: vitrage_notifier
    group: vitrage-notifier
    enabled: true
    image: "{{ vitrage_notifier_image_full }}"
    volumes: "{{ vitrage_notifier_default_volumes + vitrage_notifier_extra_volumes }}"
    dimensions: "{{ vitrage_notifier_dimensions }}"
  vitrage-graph:
    container_name: vitrage_graph
    group: vitrage-graph
    enabled: true
    image: "{{ vitrage_graph_image_full }}"
    volumes: "{{ vitrage_graph_default_volumes + vitrage_graph_extra_volumes }}"
    dimensions: "{{ vitrage_graph_dimensions }}"
  vitrage-ml:
    container_name: vitrage_ml
    group: vitrage-ml
    enabled: true
    image: "{{ vitrage_ml_image_full }}"
    volumes: "{{ vitrage_ml_default_volumes + vitrage_ml_extra_volumes }}"
    dimensions: "{{ vitrage_ml_dimensions }}"
####################
# Database
####################
vitrage_database_name: "vitrage"
vitrage_database_user: "{% if use_preconfigured_databases | bool and use_common_mariadb_user | bool %}{{ database_user }}{% else %}vitrage{% endif %}"
vitrage_database_address: "{{ database_address }}:{{ database_port }}"
####################
# Docker
####################
vitrage_install_type: "{{ kolla_install_type }}"
vitrage_tag: "{{ openstack_release }}"
vitrage_graph_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ vitrage_install_type }}-vitrage-graph"
vitrage_graph_tag: "{{ vitrage_tag }}"
vitrage_graph_image_full: "{{ vitrage_graph_image }}:{{ vitrage_graph_tag }}"
vitrage_api_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ vitrage_install_type }}-vitrage-api"
vitrage_api_tag: "{{ vitrage_tag }}"
vitrage_api_image_full: "{{ vitrage_api_image }}:{{ vitrage_api_tag }}"
vitrage_notifier_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ vitrage_install_type }}-vitrage-notifier"
vitrage_notifier_tag: "{{ vitrage_tag }}"
vitrage_notifier_image_full: "{{ vitrage_notifier_image }}:{{ vitrage_notifier_tag }}"
vitrage_ml_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ vitrage_install_type }}-vitrage-ml"
vitrage_ml_tag: "{{ vitrage_tag }}"
vitrage_ml_image_full: "{{ vitrage_ml_image }}:{{ vitrage_ml_tag }}"
vitrage_api_dimensions: "{{ default_container_dimensions }}"
vitrage_notifier_dimensions: "{{ default_container_dimensions }}"
vitrage_graph_dimensions: "{{ default_container_dimensions }}"
vitrage_ml_dimensions: "{{ default_container_dimensions }}"
vitrage_api_default_volumes:
  - "{{ node_config_directory }}/vitrage-api/:{{ container_config_directory }}/:ro"
  - "/etc/localtime:/etc/localtime:ro"
  - "{{ kolla_dev_repos_directory ~ '/vitrage/vitrage:/var/lib/kolla/venv/lib/python2.7/site-packages/vitrage' if vitrage_dev_mode | bool else '' }}"
  - "kolla_logs:/var/log/kolla/"
vitrage_notifier_default_volumes:
  - "{{ node_config_directory }}/vitrage-notifier/:{{ container_config_directory }}/:ro"
  - "/etc/localtime:/etc/localtime:ro"
  - "{{ kolla_dev_repos_directory ~ '/vitrage/vitrage:/var/lib/kolla/venv/lib/python2.7/site-packages/vitrage' if vitrage_dev_mode | bool else '' }}"
  - "kolla_logs:/var/log/kolla/"
vitrage_graph_default_volumes:
  - "{{ node_config_directory }}/vitrage-graph/:{{ container_config_directory }}/:ro"
  - "/etc/localtime:/etc/localtime:ro"
  - "{{ kolla_dev_repos_directory ~ '/vitrage/vitrage:/var/lib/kolla/venv/lib/python2.7/site-packages/vitrage' if vitrage_dev_mode | bool else '' }}"
  - "kolla_logs:/var/log/kolla/"
vitrage_ml_default_volumes:
  - "{{ node_config_directory }}/vitrage-ml/:{{ container_config_directory }}/:ro"
  - "/etc/localtime:/etc/localtime:ro"
  - "{{ kolla_dev_repos_directory ~ '/vitrage/vitrage:/var/lib/kolla/venv/lib/python2.7/site-packages/vitrage' if vitrage_dev_mode | bool else '' }}"
  - "kolla_logs:/var/log/kolla/"
vitrage_extra_volumes: "{{ default_extra_volumes }}"
vitrage_api_extra_volumes: "{{ vitrage_extra_volumes }}"
vitrage_notifier_extra_volumes: "{{ vitrage_extra_volumes }}"
vitrage_graph_extra_volumes: "{{ vitrage_extra_volumes }}"
vitrage_ml_extra_volumes: "{{ vitrage_extra_volumes }}"
####################
# OpenStack
####################
vitrage_admin_endpoint: "{{ admin_protocol }}://{{ kolla_internal_fqdn }}:{{ vitrage_api_port }}"
vitrage_internal_endpoint: "{{ internal_protocol }}://{{ kolla_internal_fqdn }}:{{ vitrage_api_port }}"
vitrage_public_endpoint: "{{ public_protocol }}://{{ kolla_external_fqdn }}:{{ vitrage_api_port }}"
vitrage_logging_debug: "{{ openstack_logging_debug }}"
vitrage_keystone_user: "vitrage"
openstack_vitrage_auth: "{{ openstack_auth }}"
#####################
# Datasources
#####################
vitrage_notifier:
  - name: "aodh"
    enabled: "{{ enable_aodh | bool }}"
  - name: "mistral"
    enabled: "{{ enable_mistral | bool }}"
  - name: "nova"
    enabled: "{{ enable_nova | bool }}"
vitrage_notifiers: "{{ vitrage_notifier | selectattr('enabled', 'equalto', true) | list }}"
vitrage_datasource:
  - name: "static"
    enabled: true
  - name: "nova.host,nova.instance,nova.zone"
    enabled: "{{ enable_nova | bool }}"
  - name: "aodh"
    enabled: "{{ enable_aodh | bool }}"
  - name: "collectd"
    enabled: "{{ enable_collectd | bool }}"
  - name: "cinder.volume"
    enabled: "{{ enable_cinder | bool }}"
  - name: "neutron.network,neutron.port"
    enabled: "{{ enable_neutron | bool }}"
  # TODO(egonzalez) Heat cannot be used with default policy.json due to stacks:global_index=rule:deny_everybody.
  # Document process to deploy vitrage+heat.
  - name: "heat.stack"
    enabled: "no"
  - name: "prometheus"
    enabled: "{{ enable_vitrage_prometheus_datasource | bool }}"
vitrage_datasources: "{{ vitrage_datasource | selectattr('enabled', 'equalto', true) | list }}"
####################
# Kolla
####################
vitrage_git_repository: "{{ kolla_dev_repos_git }}/{{ project_name }}"
vitrage_dev_repos_pull: "{{ kolla_dev_repos_pull }}"
vitrage_dev_mode: "{{ kolla_dev_mode }}"
vitrage_source_version: "{{ kolla_source_version }}"
####################
# Notifications
####################
vitrage_notification_topics:
  - name: notifications
    enabled: "{{ enable_ceilometer | bool }}"
  - name: vitrage_notifications
    enabled: True
vitrage_enabled_notification_topics: "{{ vitrage_notification_topics | selectattr('enabled', 'equalto', true) | list }}"
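# NOTE: the filtered list above is meant to be consumed by the service's
# config template, roughly as follows (a sketch; option names follow
# oslo.messaging, and the exact template in this role may differ):
#
#   [oslo_messaging_notifications]
#   {% if vitrage_enabled_notification_topics %}
#   driver = messagingv2
#   topics = {{ vitrage_enabled_notification_topics | map(attribute='name') | join(',') }}
#   {% else %}
#   driver = noop
#   {% endif %}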