bc053c09c1
Introduce kolla_address filter.
Introduce put_address_in_context filter.

Add AF config to vars.

Address contexts:
- raw (default): <ADDR>
- memcache: inet6:[<ADDR>]
- url: [<ADDR>]

Other changes:

globals.yml - mention just IP in comment

prechecks/port_checks (api_intf) - kolla_address handles validation

3x interface conditional (swift configs: replication/storage)

2x interface variable definition with hostname
(haproxy listens; api intf)

1x interface variable definition with hostname with bifrost exclusion
(baremetal pre-install /etc/hosts; api intf)

neutron's ml2 'overlay_ip_version' set to 6 for IPv6 on tunnel network

basic multinode source CI job for IPv6

prechecks for rabbitmq and qdrouterd use proper NSS database now

MariaDB Galera Cluster WSREP SST mariabackup workaround
(socat and IPv6)

Ceph naming workaround in CI
TODO: probably needs documenting

RabbitMQ IPv6-only proto_dist

Ceph ms switch to IPv6 mode

Remove neutron-server ml2_type_vxlan/vxlan_group setting
as it is not used (let's avoid any confusion)
and could break setups without proper multicast routing
if it started working (also IPv4-only)

haproxy upgrade checks for slaves based on ipv6 addresses

TODO: ovs-dpdk grabs ipv4 network address (w/ prefix len / submask)
not supported, invalid by default because neutron_external has no address
No idea whether ovs-dpdk works at all atm.

ml2 for xenapi
Xen is not supported too well.
This would require working with XenAPI facts.

rp_filter setting
This would require meddling with ip6tables (there is no sysctl param).
By default nothing is dropped.
Unlikely we really need it.

ironic dnsmasq is configured IPv4-only
dnsmasq needs DHCPv6 options and testing in vivo.

KNOWN ISSUES (beyond us):

- One cannot use IPv6 address to reference the image for docker
  like we currently do, see: https://github.com/moby/moby/issues/39033
  (docker_registry; docker API 400 - invalid reference format)
  workaround: use hostname/FQDN

- RabbitMQ may fail to bind to IPv6 if hostname resolves also to IPv4.
  This is due to old RabbitMQ versions available in images.
  IPv4 is preferred by default and may fail in the IPv6-only scenario.
  This should be no problem in real life as IPv6-only is indeed IPv6-only.
  Also, when new RabbitMQ (3.7.16/3.8+) makes it into images,
  this will no longer be relevant as we supply all the necessary config.
  See: https://github.com/rabbitmq/rabbitmq-server/pull/1982

- For reliable runs, at least Ansible 2.8 is required (2.8.5 confirmed
  to work well). Older Ansible versions are known to miss IPv6 addresses
  in interface facts. This may affect redeploys, reconfigures and
  upgrades which run after VIP address is assigned.
  See: https://github.com/ansible/ansible/issues/63227

- Bifrost Train does not support IPv6 deployments.
  See: https://storyboard.openstack.org/#!/story/2006689

Change-Id: Ia34e6916ea4f99e9522cd2ddde03a0a4776f7e2c
Implements: blueprint ipv6-control-plane
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
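For illustration, a minimal Python sketch of an address-context filter along these lines, matching the contexts listed above. Only the name put_address_in_context and the three output shapes come from this commit; the IPv4/hostname pass-through behaviour is an assumption, so the actual kolla-ansible implementation may differ in detail:

import ipaddress

def put_address_in_context(address, context='raw'):
    # Sketch only: wrap IPv6 addresses per context; leave IPv4 addresses
    # and hostnames untouched (assumed behaviour, not taken from source).
    try:
        is_ipv6 = ipaddress.ip_address(address).version == 6
    except ValueError:
        is_ipv6 = False  # hostname/FQDN
    if context == 'raw' or not is_ipv6:
        return address
    if context == 'url':
        return '[{}]'.format(address)        # url: [<ADDR>]
    if context == 'memcache':
        return 'inet6:[{}]'.format(address)  # memcache: inet6:[<ADDR>]
    raise ValueError('unknown address context: {}'.format(context))

# put_address_in_context('fd00::10', 'url')       -> '[fd00::10]'
# put_address_in_context('10.0.0.10', 'url')      -> '10.0.0.10'
# put_address_in_context('db.example.org', 'url') -> 'db.example.org'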
171 lines
6.2 KiB
Django/Jinja
#
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP
# Copyright 2017 Fujitsu LIMITED
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

metricSpoutThreads: 2
metricSpoutTasks: 2

statsdConfig:
  host: 127.0.0.1
  port: {{ monasca_agent_statsd_port }}
  debugmetrics: {{ monasca_logging_debug }}
  dimensions: !!map
    service : monitoring
    component : storm
  whitelist: !!seq
    - aggregation-bolt.execute-count.filtering-bolt_alarm-creation-stream
    - aggregation-bolt.execute-count.filtering-bolt_default
    - aggregation-bolt.execute-count.system_tick
    - filtering-bolt.execute-count.event-bolt_metric-alarm-events
    - filtering-bolt.execute-count.metrics-spout_default
    - thresholding-bolt.execute-count.aggregation-bolt_default
    - thresholding-bolt.execute-count.event-bolt_alarm-definition-events
    - system.memory_heap.committedBytes
    - system.memory_nonHeap.committedBytes
    - system.newWorkerEvent
    - system.startTimeSecs
    - system.GC_ConcurrentMarkSweep.timeMs
  metricmap: !!map
    aggregation-bolt.execute-count.filtering-bolt_alarm-creation-stream :
      monasca.threshold.aggregation-bolt.execute-count.filtering-bolt_alarm-creation-stream
    aggregation-bolt.execute-count.filtering-bolt_default :
      monasca.threshold.aggregation-bolt.execute-count.filtering-bolt_default
    aggregation-bolt.execute-count.system_tick :
      monasca.threshold.aggregation-bolt.execute-count.system_tick
    filtering-bolt.execute-count.event-bolt_metric-alarm-events :
      monasca.threshold.filtering-bolt.execute-count.event-bolt_metric-alarm-events
    filtering-bolt.execute-count.metrics-spout_default :
      monasca.threshold.filtering-bolt.execute-count.metrics-spout_default
    thresholding-bolt.execute-count.aggregation-bolt_default :
      monasca.threshold.thresholding-bolt.execute-count.aggregation-bolt_default
    thresholding-bolt.execute-count.event-bolt_alarm-definition-events :
      monasca.threshold.thresholding-bolt.execute-count.event-bolt_alarm-definition-events
    system.memory_heap.committedBytes :
      monasca.threshold.system.memory_heap.committedBytes
    system.memory_nonHeap.committedBytes :
      monasca.threshold.system.memory_nonHeap.committedBytes
    system.newWorkerEvent :
      monasca.threshold.system.newWorkerEvent
    system.startTimeSecs :
      monasca.threshold.system.startTimeSecs
    system.GC_ConcurrentMarkSweep.timeMs :
      monasca.threshold.system.GC_ConcurrentMarkSweep.timeMs

metricSpoutConfig:
  kafkaConsumerConfiguration:
    # See http://kafka.apache.org/documentation.html#api for semantics and defaults.
    topic: "{{ monasca_metrics_topic }}"
    numThreads: 1
    groupId: "thresh-metric"
    zookeeperConnect: "{{ monasca_zookeeper_servers }}"
    consumerId: 1
    socketTimeoutMs: 30000
    socketReceiveBufferBytes: 65536
    fetchMessageMaxBytes: 1048576
    autoCommitEnable: true
    autoCommitIntervalMs: 60000
    queuedMaxMessageChunks: 10
    rebalanceMaxRetries: 4
    fetchMinBytes: 1
    fetchWaitMaxMs: 100
    rebalanceBackoffMs: 2000
    refreshLeaderBackoffMs: 200
    autoOffsetReset: largest
    consumerTimeoutMs: -1
    clientId: 1
    zookeeperSessionTimeoutMs: 60000
    zookeeperConnectionTimeoutMs: 60000
    zookeeperSyncTimeMs: 2000

eventSpoutConfig:
  kafkaConsumerConfiguration:
    # See http://kafka.apache.org/documentation.html#api for semantics and defaults.
    topic: "{{ monasca_events_topic }}"
    numThreads: 1
    groupId: "thresh-event"
    zookeeperConnect: "{{ monasca_zookeeper_servers }}"
    consumerId: 1
    socketTimeoutMs: 30000
    socketReceiveBufferBytes: 65536
    fetchMessageMaxBytes: 1048576
    autoCommitEnable: true
    autoCommitIntervalMs: 60000
    queuedMaxMessageChunks: 10
    rebalanceMaxRetries: 4
    fetchMinBytes: 1
    fetchWaitMaxMs: 100
    rebalanceBackoffMs: 2000
    refreshLeaderBackoffMs: 200
    autoOffsetReset: largest
    consumerTimeoutMs: -1
    clientId: 1
    zookeeperSessionTimeoutMs: 60000
    zookeeperConnectionTimeoutMs: 60000
    zookeeperSyncTimeMs: 2000

kafkaProducerConfig:
  # See http://kafka.apache.org/documentation.html#api for semantics and defaults.
  topic: "{{ monasca_alarm_state_transitions_topic }}"
  metadataBrokerList: "{{ monasca_kafka_servers }}"
  serializerClass: kafka.serializer.StringEncoder
  partitionerClass:
  requestRequiredAcks: 1
  requestTimeoutMs: 10000
  producerType: sync
  keySerializerClass:
  compressionCodec: none
  compressedTopics:
  messageSendMaxRetries: 3
  retryBackoffMs: 100
  topicMetadataRefreshIntervalMs: 600000
  queueBufferingMaxMs: 5000
  queueBufferingMaxMessages: 10000
  queueEnqueueTimeoutMs: -1
  batchNumMessages: 200
  sendBufferBytes: 102400
  clientId: Threshold_Engine

sporadicMetricNamespaces:
  - foo

database:
  driverClass: org.drizzle.jdbc.DrizzleDriver
  url: "jdbc:drizzle://{{ monasca_database_address | put_address_in_context('url') }}:{{ monasca_database_port }}/{{ monasca_database_name }}"
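  # Illustration with hypothetical values: if monasca_database_address were
  # the IPv6 address fd00::10 and the port 3306, the 'url' context would
  # bracket the address, rendering jdbc:drizzle://[fd00::10]:3306/monasca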
  user: "{{ monasca_database_user }}"
  password: "{{ monasca_database_password }}"
  properties:
    ssl: false
  # the maximum amount of time to wait on an empty pool before throwing an exception
  maxWaitForConnection: 1s
  # the SQL query to run when validating a connection's liveness TODO FIXME
  validationQuery: "/* MyService Health Check */ SELECT 1"
  # the minimum number of connections to keep open
  minSize: 8
  # the maximum number of connections to keep open
  maxSize: 41
  hibernateSupport: false
  # hibernate provider class
  providerClass: com.zaxxer.hikari.hibernate.HikariConnectionProvider
  databaseName: "{{ monasca_database_name }}"
  serverName: "{{ monasca_database_address }}"
  portNumber: "{{ monasca_database_port }}"
  # hibernate auto configuration parameter
  autoConfig: validate