Kevin Carter 717462996a Add playbook to ship journals from hosts
The systemd journal can be shipped from physical hosts to a
centralized location. This change introduces
`systemd-journal-remote`, which ships all journals on the physical
host to the log host and stores them under
"/var/log/journal/remote". This change gives deployers greater
visibility into the cloud using the systemd built-ins.

> NOTE: This change is accomplished entirely in a playbook using our
        common roles. While it could be moved into a role of its own,
        that would be a waste of effort given how small the change is.

Given that all services already log to the journal, this change
may allow us to one day deprecate or minimize the usage of our
rsyslog roles. If we removed the requirement for rsyslog to run
everywhere, we could reduce overall internal cluster IO (CPU, network
and block) and remove the requirement for all services to ship log
files from all containers and hosts. This change does NOT modify the
integrated logging architecture. At this time we're simply ensuring
that the journals on the physical hosts are co-located on the logging
machines.
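
The flow described above might be sketched as follows. This is a
hypothetical outline, not the actual playbook: the `log_hosts` group
name and task layout are assumptions; port 19532 is the default
systemd-journal-remote listener port, and received journals land under
/var/log/journal/remote by default.

```yaml
# Sketch: receive journals on the log host, push from the other hosts.
- name: Configure the journal receiver on the log host
  hosts: log_hosts            # hypothetical group name
  tasks:
    - name: Install systemd-journal-remote
      package:
        name: systemd-journal-remote
        state: present
    - name: Enable the journal-remote receiver socket
      systemd:
        name: systemd-journal-remote.socket
        enabled: yes
        state: started

- name: Ship journals from the physical hosts
  hosts: "hosts:!log_hosts"
  tasks:
    - name: Point journal-upload at the log host
      ini_file:
        path: /etc/systemd/journal-upload.conf
        section: Upload
        option: URL
        value: "http://{{ groups['log_hosts'][0] }}:19532"
    - name: Enable the journal uploader
      systemd:
        name: systemd-journal-upload.service
        enabled: yes
        state: started
```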

At this time there's no suitable package available for
systemd-journal-remote on SUSE, so the playbook that installs and sets
up remote journaling is omitted when SUSE is detected. When a
suitable package becomes available, the omission should be removed.
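
The omission might look something like this (a sketch under the
assumption that SUSE is identified via the zypper package manager
fact; the task structure is illustrative):

```yaml
# Sketch: skip the remote-journal setup on SUSE, where no suitable
# systemd-journal-remote package currently exists.
- name: Install and configure systemd-journal-remote
  package:
    name: systemd-journal-remote
    state: present
  when: ansible_pkg_mgr != 'zypper'
```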

Change-Id: I254d52df6303b7cc4d4071b4beaf347922b2616e
Related-Change: https://review.openstack.org/553707
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
2018-04-06 00:12:21 +00:00


---
# Copyright 2016, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
## OpenStack Source Code Release
openstack_release: master
## Verbosity Options
debug: False
## SSH connection wait time
ssh_delay: 5
# Set the package install state for distribution packages
# Options are 'present' and 'latest'.
# NOTE(mhayden): Allowing CentOS 7 and openSUSE to use package_state=present should give
# gate jobs a better chance to finish and expose more issues to fix.
package_state: "{{ (ansible_pkg_mgr in ['dnf', 'yum', 'zypper']) | ternary('present', 'latest') }}"
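# For illustration: with the ternary above, a host whose package
# manager is dnf, yum or zypper renders package_state as 'present',
# while an apt host renders 'latest'.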
# Set "/var/log" to be a bind mount to the physical host.
default_bind_mount_logs: true
# Set distro variable
# NOTE(hwoarang): ansible_distribution may return a string with spaces
# such as "openSUSE Leap" so we need to replace the space with underscore
# in order to create a more sensible repo name for the distro.
os_distro_version: "{{ (ansible_distribution | lower) | replace(' ', '_') }}-{{ ansible_distribution_version.split('.')[:2] | join('.') }}-{{ ansible_architecture | lower }}"
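# Example rendering (hypothetical facts): ansible_distribution
# 'openSUSE Leap', ansible_distribution_version '42.3' and
# ansible_architecture 'x86_64' yield 'opensuse_leap-42.3-x86_64'.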
# Set the systemd prefix based on the base OS.
systemd_utils_distro_prefix:
  apt: "/lib/systemd"
  yum: "/lib/systemd"
  dnf: "/lib/systemd"
  zypper: "/usr/lib/systemd"
systemd_utils_prefix: "{{ systemd_utils_distro_prefix[ansible_pkg_mgr] }}"
# Ensure that the package state matches the global setting
rsyslog_client_package_state: "{{ package_state }}"
## OpenStack source options
openstack_repo_url: "http://{{ internal_lb_vip_address }}:{{ repo_server_port }}"
openstack_repo_git_url: "git://{{ internal_lb_vip_address }}"
# URL for the frozen internal openstack repo.
repo_server_port: 8181
repo_pkg_cache_enabled: true
repo_pkg_cache_port: 3142
repo_pkg_cache_url: "http://{{ internal_lb_vip_address }}:{{ repo_pkg_cache_port }}"
repo_release_path: "{{ openstack_repo_url }}/os-releases/{{ openstack_release }}/{{ os_distro_version }}"
## DNS resolution (resolvconf) options
# Group containing resolvers to configure
resolvconf_resolver_group: unbound
## Enable external SSL handling for general OpenStack services
openstack_external_ssl: true
## OpenStack global Endpoint Protos
openstack_service_publicuri_proto: https
#openstack_service_adminuri_proto: http
#openstack_service_internaluri_proto: http
## Region Name
service_region: RegionOne
## OpenStack Domain
openstack_domain: openstack.local
lxc_container_domain: "{{ container_domain }}"
container_domain: "{{ openstack_domain }}"
## DHCP Domain Name
dhcp_domain: openstacklocal
## LDAP enabled toggle
service_ldap_backend_enabled: "{{ keystone_ldap is defined and keystone_ldap.Default is defined }}"
## Base venv configuration
venv_tag: "{{ openstack_release }}"
venv_base_download_url: "{{ openstack_repo_url }}/venvs/{{ openstack_release }}/{{ os_distro_version }}"
## Gnocchi
# Used in both Gnocchi and Swift roles.
gnocchi_service_project_name: "{{ (gnocchi_storage_driver is defined and gnocchi_storage_driver == 'swift') | ternary('gnocchi_swift', 'service') }}"
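# For illustration: if gnocchi_storage_driver is defined and set to
# 'swift', this renders 'gnocchi_swift'; otherwise it renders 'service'.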
## OpenStack Openrc
openrc_os_auth_url: "{{ keystone_service_internalurl }}"
openrc_os_password: "{{ keystone_auth_admin_password }}"
openrc_os_domain_name: "Default"
openrc_region_name: "{{ service_region }}"
## Host security hardening
# The ansible-hardening role provides security hardening for hosts
# by applying security configurations from the STIG. Hardening is enabled by
# default, but an option to opt out is available by setting the following
# variable to 'false'.
# Docs: https://docs.openstack.org/ansible-hardening/latest/
apply_security_hardening: true
## Ansible ssh configuration
ansible_ssh_extra_args: >
  -o UserKnownHostsFile=/dev/null
  -o StrictHostKeyChecking=no
  -o ServerAliveInterval=64
  -o ServerAliveCountMax=1024
  -o Compression=no
  -o TCPKeepAlive=yes
  -o VerifyHostKeyDNS=no
  -o ForwardX11=no
  -o ForwardAgent=yes
  -T
# Toggle whether the service is deployed in a container or not
is_metal: "{{ properties.is_metal | default(false) }}"