Retire project

Leave README around for those that follow.

http://lists.openstack.org/pipermail/openstack-discuss/2019-February/003186.html
http://lists.openstack.org/pipermail/openstack-discuss/2018-November/000057.html

Change-Id: I459871b2d4ab15807b54b7625a4096293d29e74a
parent b96926fb1a
commit 91d8c4f9f4

.gitignore (vendored): 18 deletions
@@ -1,18 +0,0 @@
bin
.coverage
.tox
.testrepository
tags
*.sw[nop]
*.pyc
trusty/
xenial/
precise/
tests/cirros-*-disk.img
.unit-state.db
.idea
.project
.pydevproject
revision
func-results*
files/id*
@@ -1,8 +0,0 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
             OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
             OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \
             ${PYTHON:-python} -m subunit.run discover -t ./ ./unit_tests $LISTOPT $IDOPTION

test_id_option=--load-list $IDFILE
test_list_option=--list
@@ -1,4 +1,3 @@
 - project:
     templates:
-      - python-charm-jobs
-      - openstack-python35-jobs-nonvoting
+      - noop-jobs
LICENSE: 202 deletions
@@ -1,202 +0,0 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
Makefile: 28 deletions
@@ -1,28 +0,0 @@
#!/usr/bin/make
PYTHON := /usr/bin/env python
CHARM_DIR := $(PWD)
HOOKS_DIR := $(PWD)/hooks
TEST_PREFIX := PYTHONPATH=$(HOOKS_DIR)

clean:
	rm -f .coverage .testrepository
	find . -name '*.pyc' -delete

lint:
	@tox -e pep8

bin/charm_helpers_sync.py:
	@mkdir -p bin
	@curl -o bin/charm_helpers_sync.py https://raw.githubusercontent.com/juju/charm-helpers/master/tools/charm_helpers_sync/charm_helpers_sync.py


sync: bin/charm_helpers_sync.py
	@$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-hooks.yaml

test:
	@echo Starting unit tests...
	@tox -e py27

functional_test:
	@echo Starting amulet tests...
	@tox -e func27
README.md: 55 lines changed
@@ -1,51 +1,6 @@
 # Overview

+This project is no longer maintained.
+
-*This charm is in ALPHA state, currently in active development.*
-
-*Developers can be reached on freenode channel #openstack-charms.*
-
-The nova-compute-proxy charm deploys OpenStack Nova Compute to a
-pre-existing rpm-based Power8 PowerKVM or s390x z/KVM machine,
-where the remainder of the Ubuntu OpenStack control plane and storage
-applications are deployed to machines via MAAS.
-
-# Usage
-
-To deploy a nova-compute-proxy service, have the following prepared in
-advance:
-
-* PowerKVM or z/KVM machine(s) manually provisioned, booted, accessible from
-  the control plane units, with network interfaces and storage ready to use.
-* An ssh key that the charm can use to remotely execute installation and
-  configuration operations.
-* Yum repository/repositories or .iso file(s) which contain the appropriate
-  IBM OpenStack RPMs. If using .iso file(s), they must be loop-mounted
-  on the compute node host.
-* Password-less sudo for the specified user configured on the compute node.
-
-Once you have this set up, configure the charm as follows:
-
-* Apply the following charm config:
-  * remote-user: username used to access and configure the power node.
-  * remote-repos: Yum repository url(s) or file url(s)
-  * remote-hosts: IP address of power node
-  * remote-key: Private key string to use for access
-* Example:
-```
-remote-user: youruser
-remote-repos: file:///tmp/openstack-iso/openstack,file:///tmp/other-iso/repofs
-remote-key: |
-  -----BEGIN DSA PRIVATE KEY-----
-  MIIBugIBAAKBgQD3IG188Q07kQdbRJhlZqknNpoGDB1r9+XGq9+7nmWGKusbOn6L
-  5VdyoHnx0BvgHHJmOAvJ+39sex9KvToEM0Jfav30EfffVzIrjaZZBMZkO/kWkEdd
-  TJrpMoW5nqiyNQRHCJWKkTiT7hNwS7AzUFkH1cR16bkabUfNhx3nWVsfGQIVAM7l
-  FlrJwujvWxOOHIRrihVmnUylAoGBAKGjWAPuj23p2II8NSTfaK/VJ9CyEF1RQ4Pv
-  +wtCRRE/DoN/3jpFnQz8Yjt6dYEewdcWFDG9aJ/PLvm/qX335TSz86pfYBd2Q3dp
-  9/RuaXTnLK6L/gdgkGcDXG8fy2kk0zteNjMjpzbaYpjZmIQ4lu3StUkwTm8EppZz
-  b0KXUNhwAn8bSTxNIZnlfoYzzwT2XPjHMlqeFbYxJMo9Dk5+AY6+tmr4/uR5ySDD
-  A+Txxh7RPhIBQwrIdGlOYOR3Mh03NcYuU+yrUsv4xLP8SeWcfiuAXFctXu0kzvPC
-  uIQ1EfKCrOtbWPcbza2ipo1J8MN/vzLCu69Jdq8af0OqJFoDcY0vAhUAxh2BNdRr
-  HyF1bGCP1t8JdMJVtb0=
-  -----END DSA PRIVATE KEY-----
-remote-hosts: 10.10.10.10 10.10.10.11
-```
+The contents of this repository are still available in the Git
+source code management system. To see the contents of this
+repository before it reached its end of life, please check out the
+previous commit with "git checkout HEAD^1".
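The retired README documents two list-valued config keys with different encodings: remote-hosts is space-delimited ("10.10.10.10 10.10.10.11") while remote-repos is comma-separated. A minimal sketch of that split; parse_remote_config is an illustrative name, not a function from the charm:

```python
def parse_remote_config(remote_hosts, remote_repos):
    """Split the README's remote-hosts (space-delimited) and
    remote-repos (comma-separated) config strings into lists."""
    hosts = remote_hosts.split()
    repos = [r.strip() for r in remote_repos.split(',') if r.strip()]
    return hosts, repos


hosts, repos = parse_remote_config(
    '10.10.10.10 10.10.10.11',
    'file:///tmp/openstack-iso/openstack,file:///tmp/other-iso/repofs')
print(hosts)   # two compute node addresses
print(repos)   # two yum repository file URLs
```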
@@ -1,258 +0,0 @@
#!/usr/bin/python

# Copyright 2014-2015 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Authors:
#   Adam Gandelman <adamg@ubuntu.com>

import logging
import optparse
import os
import subprocess
import shutil
import sys
import tempfile
import yaml
from fnmatch import fnmatch

import six

CHARM_HELPERS_REPO = 'https://github.com/juju/charm-helpers'


def parse_config(conf_file):
    if not os.path.isfile(conf_file):
        logging.error('Invalid config file: %s.' % conf_file)
        return False
    return yaml.load(open(conf_file).read())


def clone_helpers(work_dir, repo):
    dest = os.path.join(work_dir, 'charm-helpers')
    logging.info('Cloning out %s to %s.' % (repo, dest))
    branch = None
    if '@' in repo:
        repo, branch = repo.split('@', 1)
    cmd = ['git', 'clone', '--depth=1']
    if branch is not None:
        cmd += ['--branch', branch]
    cmd += [repo, dest]
    subprocess.check_call(cmd)
    return dest


def _module_path(module):
    return os.path.join(*module.split('.'))


def _src_path(src, module):
    return os.path.join(src, 'charmhelpers', _module_path(module))


def _dest_path(dest, module):
    return os.path.join(dest, _module_path(module))


def _is_pyfile(path):
    return os.path.isfile(path + '.py')


def ensure_init(path):
    '''
    Ensure that directories leading up to path are importable, omitting the
    parent directory, e.g. path='hooks/helpers/foo':
        hooks/
        hooks/helpers/__init__.py
        hooks/helpers/foo/__init__.py
    '''
    for d, dirs, files in os.walk(os.path.join(*path.split('/')[:2])):
        _i = os.path.join(d, '__init__.py')
        if not os.path.exists(_i):
            logging.info('Adding missing __init__.py: %s' % _i)
            open(_i, 'wb').close()


def sync_pyfile(src, dest):
    src = src + '.py'
    src_dir = os.path.dirname(src)
    logging.info('Syncing pyfile: %s -> %s.' % (src, dest))
    if not os.path.exists(dest):
        os.makedirs(dest)
    shutil.copy(src, dest)
    if os.path.isfile(os.path.join(src_dir, '__init__.py')):
        shutil.copy(os.path.join(src_dir, '__init__.py'),
                    dest)
    ensure_init(dest)


def get_filter(opts=None):
    opts = opts or []
    if 'inc=*' in opts:
        # do not filter any files, include everything
        return None

    def _filter(dir, ls):
        incs = [opt.split('=').pop() for opt in opts if 'inc=' in opt]
        _filter = []
        for f in ls:
            _f = os.path.join(dir, f)

            if not os.path.isdir(_f) and not _f.endswith('.py') and incs:
                if True not in [fnmatch(_f, inc) for inc in incs]:
                    logging.debug('Not syncing %s, does not match include '
                                  'filters (%s)' % (_f, incs))
                    _filter.append(f)
                else:
                    logging.debug('Including file, which matches include '
                                  'filters (%s): %s' % (incs, _f))
            elif (os.path.isfile(_f) and not _f.endswith('.py')):
                logging.debug('Not syncing file: %s' % f)
                _filter.append(f)
            elif (os.path.isdir(_f) and not
                  os.path.isfile(os.path.join(_f, '__init__.py'))):
                logging.debug('Not syncing directory: %s' % f)
                _filter.append(f)
        return _filter
    return _filter


def sync_directory(src, dest, opts=None):
    if os.path.exists(dest):
        logging.debug('Removing existing directory: %s' % dest)
        shutil.rmtree(dest)
    logging.info('Syncing directory: %s -> %s.' % (src, dest))

    shutil.copytree(src, dest, ignore=get_filter(opts))
    ensure_init(dest)


def sync(src, dest, module, opts=None):

    # Sync charmhelpers/__init__.py for bootstrap code.
    sync_pyfile(_src_path(src, '__init__'), dest)

    # Sync other __init__.py files in the path leading to module.
    m = []
    steps = module.split('.')[:-1]
    while steps:
        m.append(steps.pop(0))
        init = '.'.join(m + ['__init__'])
        sync_pyfile(_src_path(src, init),
                    os.path.dirname(_dest_path(dest, init)))

    # Sync the module, or maybe a .py file.
    if os.path.isdir(_src_path(src, module)):
        sync_directory(_src_path(src, module), _dest_path(dest, module), opts)
    elif _is_pyfile(_src_path(src, module)):
        sync_pyfile(_src_path(src, module),
                    os.path.dirname(_dest_path(dest, module)))
    else:
        logging.warn('Could not sync: %s. Neither a pyfile nor a directory; '
                     'does it even exist?' % module)


def parse_sync_options(options):
    if not options:
        return []
    return options.split(',')


def extract_options(inc, global_options=None):
    global_options = global_options or []
    if global_options and isinstance(global_options, six.string_types):
        global_options = [global_options]
    if '|' not in inc:
        return (inc, global_options)
    inc, opts = inc.split('|')
    return (inc, parse_sync_options(opts) + global_options)


def sync_helpers(include, src, dest, options=None):
    if not os.path.isdir(dest):
        os.makedirs(dest)

    global_options = parse_sync_options(options)

    for inc in include:
        if isinstance(inc, str):
            inc, opts = extract_options(inc, global_options)
            sync(src, dest, inc, opts)
        elif isinstance(inc, dict):
            # could also do nested dicts here.
            for k, v in six.iteritems(inc):
                if isinstance(v, list):
                    for m in v:
                        inc, opts = extract_options(m, global_options)
                        sync(src, dest, '%s.%s' % (k, inc), opts)


if __name__ == '__main__':
    parser = optparse.OptionParser()
    parser.add_option('-c', '--config', action='store', dest='config',
                      default=None, help='helper config file')
    parser.add_option('-D', '--debug', action='store_true', dest='debug',
                      default=False, help='debug')
    parser.add_option('-r', '--repository', action='store', dest='repo',
                      help='charm-helpers git repository (overrides config)')
    parser.add_option('-d', '--destination', action='store', dest='dest_dir',
                      help='sync destination dir (overrides config)')
    (opts, args) = parser.parse_args()

    if opts.debug:
        logging.basicConfig(level=logging.DEBUG)
    else:
        logging.basicConfig(level=logging.INFO)

    if opts.config:
        logging.info('Loading charm helper config from %s.' % opts.config)
        config = parse_config(opts.config)
        if not config:
            logging.error('Could not parse config from %s.' % opts.config)
            sys.exit(1)
    else:
        config = {}

    if 'repo' not in config:
        config['repo'] = CHARM_HELPERS_REPO
    if opts.repo:
        config['repo'] = opts.repo
    if opts.dest_dir:
        config['destination'] = opts.dest_dir

    if 'destination' not in config:
        logging.error('No destination dir. specified as option or config.')
        sys.exit(1)

    if 'include' not in config:
        if not args:
            logging.error('No modules to sync specified as option or config.')
            sys.exit(1)
        config['include'] = []
        [config['include'].append(a) for a in args]

    sync_options = None
    if 'options' in config:
        sync_options = config['options']
    tmpd = tempfile.mkdtemp()
    try:
        checkout = clone_helpers(tmpd, config['repo'])
        sync_helpers(config['include'], checkout, config['destination'],
                     options=sync_options)
    except Exception as e:
        logging.error("Could not sync: %s" % e)
        raise e
    finally:
        logging.debug('Cleaning up %s' % tmpd)
        shutil.rmtree(tmpd)
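The sync script above resolves include entries such as `contrib.openstack|inc=*` into a module name plus a list of sync options. A slightly simplified standalone copy of that parsing (dropping the six string-normalization branch) for illustration:

```python
def parse_sync_options(options):
    # "opt1,opt2" -> ["opt1", "opt2"]; empty/None -> []
    if not options:
        return []
    return options.split(',')


def extract_options(inc, global_options=None):
    # "module|opt1,opt2" -> ("module", ["opt1", "opt2"] + global options)
    global_options = global_options or []
    if '|' not in inc:
        return (inc, global_options)
    inc, opts = inc.split('|')
    return (inc, parse_sync_options(opts) + global_options)


print(extract_options('contrib.openstack|inc=*'))   # ('contrib.openstack', ['inc=*'])
print(extract_options('core', ['branch=stable']))   # ('core', ['branch=stable'])
```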
@@ -1,2 +0,0 @@
bzr [platform:dpkg]  # for bzr+lp: python requirements format
@@ -1,18 +0,0 @@
repo: https://github.com/juju/charm-helpers
destination: hooks/charmhelpers
include:
  - cli
  - core
  - fetch
  - osplatform
  - contrib.openstack|inc=*
  - contrib.storage
  - contrib.hahelpers:
      - apache
      - cluster
  - contrib.network
  - contrib.python
  - payload
  - contrib.charmsupport
  - contrib.hardening|inc=*
config.yaml: 103 deletions
@@ -1,103 +0,0 @@
options:
  openstack-release:
    type: string
    default: mitaka
    description: OpenStack release to use for configuration of remote compute node.
  remote-user:
    type: string
    default:
    description: Username used to access remote compute nodes.
  remote-key:
    type: string
    default:
    description: SSH key to use to access remote compute nodes.
  remote-repos:
    type: string
    default:
    description: Comma-separated list of RPM repositories of OpenStack packages to deploy to remote compute nodes.
  remote-hosts:
    type: string
    default:
    description: Remote compute node hosts to manage; space delimited.
  remote-password:
    type: string
    default:
    description: sudo password on remote compute node (NOT recommended); use an ssh key instead.
  rabbit-user:
    default: nova
    type: string
    description: Username used to access the rabbitmq queue.
  rabbit-vhost:
    default: openstack
    type: string
    description: Rabbitmq vhost.
  debug:
    type: boolean
    default: false
    description: Enable debug level logging.
  verbose:
    type: boolean
    default: false
    description: Enable verbose level logging.
  use-syslog:
    type: boolean
    default: False
    description: |
      By default, all services will log into their corresponding log files.
      Setting this to True will force all services to log to the syslog.
  instances-path:
    type: string
    default:
    description: Instance path to use; empty means the default of /var/lib/nova/instances.
  config-flags:
    type: string
    default:
    description: |
      Comma-separated list of key=value config flags. These values will be
      placed in the nova.conf [DEFAULT] section. Use with caution.
  data-port:
    type: string
    default:
    description: |
      The data port will be added to br-data and will allow usage of flat or VLAN
      network types with Neutron.
  disable-security-groups:
    type: boolean
    default: false
    description: |
      Disable neutron based security groups - setting this configuration option
      will override any settings configured via the neutron-api charm.
      .
      BE CAREFUL - this option allows you to disable all port level security within
      an OpenStack cloud.
  prevent-arp-spoofing:
    type: boolean
    default: true
    description: |
      Enable suppression of ARP responses that don't match an IP address that belongs
      to the port from which they originate.
      .
      Only supported in OpenStack Liberty or newer, which has the required minimum version
      of Open vSwitch.
  cpu-mode:
    type: string
    default: none
    description: |
      Set to 'host-model' to clone the host CPU feature flags; to
      'host-passthrough' to use the host CPU model exactly; to 'custom' to
      use a named CPU model; to 'none' to not set any CPU model. If
      virt_type='kvm|qemu', it will default to 'host-model', otherwise it will
      default to 'none'. Defaults to 'host-passthrough' for ppc64el, ppc64le
      if no value is set.
  cpu-model:
    type: string
    default:
    description: |
      Set to a named libvirt CPU model (see names listed in
      /usr/share/libvirt/cpu_map.xml). Only has effect if cpu_mode='custom' and
      virt_type='kvm|qemu'.
  reserved-host-memory:
    type: int
    default: 512
    description: |
      Amount of memory in MB to reserve for the host. Defaults to 512 MB.
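config-flags above is documented as comma-separated key=value pairs destined for the nova.conf [DEFAULT] section. A hypothetical sketch of that expansion; render_default_section is an illustrative name, not the charm's actual template code:

```python
def render_default_section(config_flags):
    """Expand a comma-separated "k1=v1,k2=v2" config-flags string into an
    INI-style [DEFAULT] block, as the option's description implies."""
    pairs = [kv.split('=', 1) for kv in config_flags.split(',') if kv.strip()]
    lines = ['[DEFAULT]'] + ['%s = %s' % (k.strip(), v.strip()) for k, v in pairs]
    return '\n'.join(lines)


# Flag names here are only examples of nova.conf settings a deployer might pass.
print(render_default_section('cpu_mode=host-model,reserved_host_memory_mb=512'))
```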
copyright: 16 deletions
@@ -1,16 +0,0 @@
Format: http://dep.debian.net/deps/dep5/

Files: *
Copyright: Copyright 2012, Canonical Ltd., All Rights Reserved.
License: Apache-2.0
 Licensed under the Apache License, Version 2.0 (the "License"); you may
 not use this file except in compliance with the License. You may obtain
 a copy of the License at

     http://www.apache.org/licenses/LICENSE-2.0

 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 License for the specific language governing permissions and limitations
 under the License.
@@ -1,13 +0,0 @@
# Copyright 2016 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
@@ -1 +0,0 @@
nova_compute_hooks.py

@@ -1 +0,0 @@
nova_compute_hooks.py

@@ -1 +0,0 @@
nova_compute_hooks.py

@@ -1 +0,0 @@
nova_compute_hooks.py
@ -1,97 +0,0 @@
# Copyright 2014-2015 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Bootstrap charm-helpers, installing its dependencies if necessary using
# only standard libraries.
from __future__ import print_function
from __future__ import absolute_import

import functools
import inspect
import subprocess
import sys

try:
    import six  # NOQA:F401
except ImportError:
    if sys.version_info.major == 2:
        subprocess.check_call(['apt-get', 'install', '-y', 'python-six'])
    else:
        subprocess.check_call(['apt-get', 'install', '-y', 'python3-six'])
    import six  # NOQA:F401

try:
    import yaml  # NOQA:F401
except ImportError:
    if sys.version_info.major == 2:
        subprocess.check_call(['apt-get', 'install', '-y', 'python-yaml'])
    else:
        subprocess.check_call(['apt-get', 'install', '-y', 'python3-yaml'])
    import yaml  # NOQA:F401


# Holds a mapping of mangled function names that have been deprecated
# using the @deprecate decorator below. This is so that the warning is only
# printed once for each usage of the function.
__deprecated_functions = {}


def deprecate(warning, date=None, log=None):
    """Add a deprecation warning the first time the function is used.

    The date, a string in semi-ISO8660 format, indicates the year-month
    that the function is officially going to be removed.

    usage:

    @deprecate('use core/fetch/add_source() instead', '2017-04')
    def contributed_add_source_thing(...):
        ...

    It then prints to the log ONCE that the function is deprecated.
    The reason for passing the logging function (log) is so that hookenv.log
    can be used for a charm if needed.

    :param warning: String to indicate where the function has moved to.
    :param date: optional string, in YYYY-MM format, to indicate when the
                 function will definitely (probably) be removed.
    :param log: The log function to call to log. If None, logs to stdout.
    """
    def wrap(f):

        @functools.wraps(f)
        def wrapped_f(*args, **kwargs):
            try:
                module = inspect.getmodule(f)
                file = inspect.getsourcefile(f)
                lines = inspect.getsourcelines(f)
                f_name = "{}-{}-{}..{}-{}".format(
                    module.__name__, file, lines[0], lines[-1], f.__name__)
            except (IOError, TypeError):
                # assume it was local, so just use the name of the function
                f_name = f.__name__
            if f_name not in __deprecated_functions:
                __deprecated_functions[f_name] = True
                s = "DEPRECATION WARNING: Function {} is being removed".format(
                    f.__name__)
                if date:
                    s = "{} on/around {}".format(s, date)
                if warning:
                    s = "{} : {}".format(s, warning)
                if log:
                    log(s)
                else:
                    print(s)
            return f(*args, **kwargs)
        return wrapped_f
    return wrap
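The warn-once pattern of the `deprecate` decorator deleted above can be exercised standalone. A minimal sketch (the `messages`/`old_helper` names are illustrative, not from charmhelpers; the mangled-name bookkeeping is simplified to `__name__`):

```python
import functools

_warned = {}  # function name -> True, so each function only warns once


def deprecate(warning, date=None, log=None):
    """Log a deprecation message the first time the wrapped function runs."""
    def wrap(f):
        @functools.wraps(f)
        def wrapped_f(*args, **kwargs):
            if f.__name__ not in _warned:
                _warned[f.__name__] = True
                s = "DEPRECATION WARNING: Function {} is being removed".format(
                    f.__name__)
                if date:
                    s = "{} on/around {}".format(s, date)
                if warning:
                    s = "{} : {}".format(s, warning)
                (log or print)(s)  # fall back to stdout like the original
            return f(*args, **kwargs)
        return wrapped_f
    return wrap


messages = []


@deprecate('use add_source() instead', '2017-04', log=messages.append)
def old_helper(x):
    return x * 2


old_helper(1)
old_helper(2)  # second call does not warn again
```

Passing `log=messages.append` here plays the role of `hookenv.log` in a real charm.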
@ -1,189 +0,0 @@
# Copyright 2014-2015 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import inspect
import argparse
import sys

from six.moves import zip

import charmhelpers.core.unitdata


class OutputFormatter(object):
    def __init__(self, outfile=sys.stdout):
        self.formats = (
            "raw",
            "json",
            "py",
            "yaml",
            "csv",
            "tab",
        )
        self.outfile = outfile

    def add_arguments(self, argument_parser):
        formatgroup = argument_parser.add_mutually_exclusive_group()
        choices = self.supported_formats
        formatgroup.add_argument("--format", metavar='FMT',
                                 help="Select output format for returned data, "
                                      "where FMT is one of: {}".format(choices),
                                 choices=choices, default='raw')
        for fmt in self.formats:
            fmtfunc = getattr(self, fmt)
            formatgroup.add_argument("-{}".format(fmt[0]),
                                     "--{}".format(fmt), action='store_const',
                                     const=fmt, dest='format',
                                     help=fmtfunc.__doc__)

    @property
    def supported_formats(self):
        return self.formats

    def raw(self, output):
        """Output data as raw string (default)"""
        if isinstance(output, (list, tuple)):
            output = '\n'.join(map(str, output))
        self.outfile.write(str(output))

    def py(self, output):
        """Output data as a nicely-formatted python data structure"""
        import pprint
        pprint.pprint(output, stream=self.outfile)

    def json(self, output):
        """Output data in JSON format"""
        import json
        json.dump(output, self.outfile)

    def yaml(self, output):
        """Output data in YAML format"""
        import yaml
        yaml.safe_dump(output, self.outfile)

    def csv(self, output):
        """Output data as excel-compatible CSV"""
        import csv
        csvwriter = csv.writer(self.outfile)
        csvwriter.writerows(output)

    def tab(self, output):
        """Output data in excel-compatible tab-delimited format"""
        import csv
        csvwriter = csv.writer(self.outfile, dialect=csv.excel_tab)
        csvwriter.writerows(output)

    def format_output(self, output, fmt='raw'):
        fmtfunc = getattr(self, fmt)
        fmtfunc(output)


class CommandLine(object):
    argument_parser = None
    subparsers = None
    formatter = None
    exit_code = 0

    def __init__(self):
        if not self.argument_parser:
            self.argument_parser = argparse.ArgumentParser(
                description='Perform common charm tasks')
        if not self.formatter:
            self.formatter = OutputFormatter()
            self.formatter.add_arguments(self.argument_parser)
        if not self.subparsers:
            self.subparsers = self.argument_parser.add_subparsers(help='Commands')

    def subcommand(self, command_name=None):
        """
        Decorate a function as a subcommand. Use its arguments as the
        command-line arguments"""
        def wrapper(decorated):
            cmd_name = command_name or decorated.__name__
            subparser = self.subparsers.add_parser(cmd_name,
                                                   description=decorated.__doc__)
            for args, kwargs in describe_arguments(decorated):
                subparser.add_argument(*args, **kwargs)
            subparser.set_defaults(func=decorated)
            return decorated
        return wrapper

    def test_command(self, decorated):
        """
        Subcommand is a boolean test function, so bool return values should be
        converted to a 0/1 exit code.
        """
        decorated._cli_test_command = True
        return decorated

    def no_output(self, decorated):
        """
        Subcommand is not expected to return a value, so don't print a spurious None.
        """
        decorated._cli_no_output = True
        return decorated

    def subcommand_builder(self, command_name, description=None):
        """
        Decorate a function that builds a subcommand. Builders should accept a
        single argument (the subparser instance) and return the function to be
        run as the command."""
        def wrapper(decorated):
            subparser = self.subparsers.add_parser(command_name)
            func = decorated(subparser)
            subparser.set_defaults(func=func)
            subparser.description = description or func.__doc__
        return wrapper

    def run(self):
        "Run cli, processing arguments and executing subcommands."
        arguments = self.argument_parser.parse_args()
        argspec = inspect.getargspec(arguments.func)
        vargs = []
        for arg in argspec.args:
            vargs.append(getattr(arguments, arg))
        if argspec.varargs:
            vargs.extend(getattr(arguments, argspec.varargs))
        output = arguments.func(*vargs)
        if getattr(arguments.func, '_cli_test_command', False):
            self.exit_code = 0 if output else 1
            output = ''
        if getattr(arguments.func, '_cli_no_output', False):
            output = ''
        self.formatter.format_output(output, arguments.format)
        if charmhelpers.core.unitdata._KV:
            charmhelpers.core.unitdata._KV.flush()


cmdline = CommandLine()


def describe_arguments(func):
    """
    Analyze a function's signature and return a data structure suitable for
    passing in as arguments to an argparse parser's add_argument() method."""

    argspec = inspect.getargspec(func)
    # we should probably raise an exception somewhere if func includes **kwargs
    if argspec.defaults:
        positional_args = argspec.args[:-len(argspec.defaults)]
        keyword_names = argspec.args[-len(argspec.defaults):]
        for arg, default in zip(keyword_names, argspec.defaults):
            yield ('--{}'.format(arg),), {'default': default}
    else:
        positional_args = argspec.args

    for arg in positional_args:
        yield (arg,), {}
    if argspec.varargs:
        yield (argspec.varargs,), {'nargs': '*'}
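The core trick of the deleted CLI module is `describe_arguments`: a function's signature is introspected and mapped onto `argparse.add_argument()` calls, so positional parameters become positional CLI arguments and parameters with defaults become `--flags`. A self-contained sketch of that idea (using `inspect.signature`, since `inspect.getargspec` is gone in recent Pythons; the `greet` example is illustrative):

```python
import argparse
import inspect


def describe_arguments(func):
    """Yield (args, kwargs) pairs for argparse.add_argument(), derived
    from func's signature: no default -> positional, default -> --option."""
    sig = inspect.signature(func)
    for name, param in sig.parameters.items():
        if param.default is inspect.Parameter.empty:
            yield (name,), {}
        else:
            yield ('--' + name,), {'default': param.default}


def greet(name, punctuation='!'):
    """Say hello on the command line."""
    return 'Hello, {}{}'.format(name, punctuation)


parser = argparse.ArgumentParser(description='Perform common tasks')
subparsers = parser.add_subparsers(help='Commands')
greet_parser = subparsers.add_parser('greet', description=greet.__doc__)
for args, kwargs in describe_arguments(greet):
    greet_parser.add_argument(*args, **kwargs)
greet_parser.set_defaults(func=greet)  # run() would dispatch on ns.func

ns = parser.parse_args(['greet', 'world'])
result = ns.func(ns.name, ns.punctuation)
```

`set_defaults(func=...)` is the same dispatch mechanism `CommandLine.run()` relies on: parsing selects the subcommand's function, then the parsed namespace supplies its arguments.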
@ -1,34 +0,0 @@
# Copyright 2014-2015 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from . import cmdline
from charmhelpers.contrib.benchmark import Benchmark


@cmdline.subcommand(command_name='benchmark-start')
def start():
    Benchmark.start()


@cmdline.subcommand(command_name='benchmark-finish')
def finish():
    Benchmark.finish()


@cmdline.subcommand_builder('benchmark-composite', description="Set the benchmark composite score")
def service(subparser):
    subparser.add_argument("value", help="The composite score.")
    subparser.add_argument("units", help="The units the composite score represents, i.e., 'reads/sec'.")
    subparser.add_argument("direction", help="'asc' if a lower score is better, 'desc' if a higher score is better.")
    return Benchmark.set_composite_score
@ -1,30 +0,0 @@
# Copyright 2014-2015 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
This module loads sub-modules into the python runtime so they can be
discovered via the inspect module. In order to prevent flake8 from (rightfully)
telling us these are unused modules, throw a ' # noqa' at the end of each import
so that the warning is suppressed.
"""

from . import CommandLine  # noqa

"""
Import the sub-modules which have decorated subcommands to register with chlp.
"""
from . import host  # noqa
from . import benchmark  # noqa
from . import unitdata  # noqa
from . import hookenv  # noqa
@ -1,21 +0,0 @@
# Copyright 2014-2015 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from . import cmdline
from charmhelpers.core import hookenv


cmdline.subcommand('relation-id')(hookenv.relation_id._wrapped)
cmdline.subcommand('service-name')(hookenv.service_name)
cmdline.subcommand('remote-service-name')(hookenv.remote_service_name._wrapped)
@ -1,29 +0,0 @@
# Copyright 2014-2015 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from . import cmdline
from charmhelpers.core import host


@cmdline.subcommand()
def mounts():
    "List mounts"
    return host.mounts()


@cmdline.subcommand_builder('service', description="Control system services")
def service(subparser):
    subparser.add_argument("action", help="The action to perform (start, stop, etc...)")
    subparser.add_argument("service_name", help="Name of the service to control")
    return host.service
@ -1,37 +0,0 @@
# Copyright 2014-2015 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from . import cmdline
from charmhelpers.core import unitdata


@cmdline.subcommand_builder('unitdata', description="Store and retrieve data")
def unitdata_cmd(subparser):
    nested = subparser.add_subparsers()
    get_cmd = nested.add_parser('get', help='Retrieve data')
    get_cmd.add_argument('key', help='Key to retrieve the value of')
    get_cmd.set_defaults(action='get', value=None)
    set_cmd = nested.add_parser('set', help='Store data')
    set_cmd.add_argument('key', help='Key to set')
    set_cmd.add_argument('value', help='Value to store')
    set_cmd.set_defaults(action='set')

    def _unitdata_cmd(action, key, value):
        if action == 'get':
            return unitdata.kv().get(key)
        elif action == 'set':
            unitdata.kv().set(key, value)
            unitdata.kv().flush()
            return ''
    return _unitdata_cmd
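The `unitdata` subcommand above nests a second level of subparsers (`get`/`set`) under one command and uses `set_defaults(action=..., value=None)` so both branches present a uniform `(action, key, value)` call signature. A runnable sketch of that pattern with an in-memory dict standing in for the SQLite-backed `unitdata.kv()` store (the `run`/`_store` names are illustrative):

```python
import argparse

_store = {}  # stand-in for unitdata.kv(); the real store persists to SQLite


def build_parser():
    parser = argparse.ArgumentParser(prog='unitdata')
    nested = parser.add_subparsers()
    get_cmd = nested.add_parser('get', help='Retrieve data')
    get_cmd.add_argument('key', help='Key to retrieve the value of')
    # 'value=None' pads the get branch to the same (action, key, value) shape
    get_cmd.set_defaults(action='get', value=None)
    set_cmd = nested.add_parser('set', help='Store data')
    set_cmd.add_argument('key', help='Key to set')
    set_cmd.add_argument('value', help='Value to store')
    set_cmd.set_defaults(action='set')
    return parser


def run(argv):
    ns = build_parser().parse_args(argv)
    if ns.action == 'get':
        return _store.get(ns.key)
    _store[ns.key] = ns.value
    return ''


run(['set', 'colour', 'blue'])
value = run(['get', 'colour'])
```

Because both subparsers produce the same namespace attributes, a single dispatch function can serve both verbs, which is exactly what `_unitdata_cmd(action, key, value)` does.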
@ -1,13 +0,0 @@
# Copyright 2014-2015 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
@ -1,13 +0,0 @@
# Copyright 2014-2015 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
@ -1,455 +0,0 @@
# Copyright 2014-2015 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Compatibility with the nrpe-external-master charm"""
# Copyright 2012 Canonical Ltd.
#
# Authors:
#  Matthew Wedgwood <matthew.wedgwood@canonical.com>

import subprocess
import pwd
import grp
import os
import glob
import shutil
import re
import shlex
import yaml

from charmhelpers.core.hookenv import (
    config,
    hook_name,
    local_unit,
    log,
    relation_ids,
    relation_set,
    relations_of_type,
)

from charmhelpers.core.host import service
from charmhelpers.core import host

# This module adds compatibility with the nrpe-external-master and plain nrpe
# subordinate charms. To use it in your charm:
#
#    1. Update metadata.yaml
#
#       provides:
#           (...)
#           nrpe-external-master:
#               interface: nrpe-external-master
#               scope: container
#
#   and/or
#
#       provides:
#           (...)
#           local-monitors:
#               interface: local-monitors
#               scope: container

#
#    2. Add the following to config.yaml
#
#    nagios_context:
#      default: "juju"
#      type: string
#      description: |
#        Used by the nrpe subordinate charms.
#        A string that will be prepended to instance name to set the host name
#        in nagios. So for instance the hostname would be something like:
#            juju-myservice-0
#        If you're running multiple environments with the same services in them
#        this allows you to differentiate between them.
#    nagios_servicegroups:
#      default: ""
#      type: string
#      description: |
#        A comma-separated list of nagios servicegroups.
#        If left empty, the nagios_context will be used as the servicegroup
#
#    3. Add custom checks (Nagios plugins) to files/nrpe-external-master
#
#    4. Update your hooks.py with something like this:
#
#    from charmsupport.nrpe import NRPE
#    (...)
#    def update_nrpe_config():
#        nrpe_compat = NRPE()
#        nrpe_compat.add_check(
#            shortname = "myservice",
#            description = "Check MyService",
#            check_cmd = "check_http -w 2 -c 10 http://localhost"
#            )
#        nrpe_compat.add_check(
#            "myservice_other",
#            "Check for widget failures",
#            check_cmd = "/srv/myapp/scripts/widget_check"
#            )
#        nrpe_compat.write()
#
#    def config_changed():
#        (...)
#        update_nrpe_config()
#
#    def nrpe_external_master_relation_changed():
#        update_nrpe_config()
#
#    def local_monitors_relation_changed():
#        update_nrpe_config()
#
#   4.a If your charm is a subordinate charm set primary=False
#
#   from charmsupport.nrpe import NRPE
#   (...)
#   def update_nrpe_config():
#       nrpe_compat = NRPE(primary=False)
#
#    5. ln -s hooks.py nrpe-external-master-relation-changed
#       ln -s hooks.py local-monitors-relation-changed

class CheckException(Exception):
    pass


class Check(object):
    shortname_re = '[A-Za-z0-9-_.@]+$'
    service_template = ("""
#---------------------------------------------------
# This file is Juju managed
#---------------------------------------------------
define service {{
    use                             active-service
    host_name                       {nagios_hostname}
    service_description             {nagios_hostname}[{shortname}] """
                        """{description}
    check_command                   check_nrpe!{command}
    servicegroups                   {nagios_servicegroup}
}}
""")

    def __init__(self, shortname, description, check_cmd):
        super(Check, self).__init__()
        # XXX: could be better to calculate this from the service name
        if not re.match(self.shortname_re, shortname):
            raise CheckException("shortname must match {}".format(
                Check.shortname_re))
        self.shortname = shortname
        self.command = "check_{}".format(shortname)
        # Note: a set of invalid characters is defined by the
        # Nagios server config
        # The default is: illegal_object_name_chars=`~!$%^&*"|'<>?,()=
        self.description = description
        self.check_cmd = self._locate_cmd(check_cmd)

    def _get_check_filename(self):
        return os.path.join(NRPE.nrpe_confdir, '{}.cfg'.format(self.command))

    def _get_service_filename(self, hostname):
        return os.path.join(NRPE.nagios_exportdir,
                            'service__{}_{}.cfg'.format(hostname, self.command))

    def _locate_cmd(self, check_cmd):
        search_path = (
            '/usr/lib/nagios/plugins',
            '/usr/local/lib/nagios/plugins',
        )
        parts = shlex.split(check_cmd)
        for path in search_path:
            if os.path.exists(os.path.join(path, parts[0])):
                command = os.path.join(path, parts[0])
                if len(parts) > 1:
                    command += " " + " ".join(parts[1:])
                return command
        log('Check command not found: {}'.format(parts[0]))
        return ''

    def _remove_service_files(self):
        if not os.path.exists(NRPE.nagios_exportdir):
            return
        for f in os.listdir(NRPE.nagios_exportdir):
            if f.endswith('_{}.cfg'.format(self.command)):
                os.remove(os.path.join(NRPE.nagios_exportdir, f))

    def remove(self, hostname):
        nrpe_check_file = self._get_check_filename()
        if os.path.exists(nrpe_check_file):
            os.remove(nrpe_check_file)
        self._remove_service_files()

    def write(self, nagios_context, hostname, nagios_servicegroups):
        nrpe_check_file = self._get_check_filename()
        with open(nrpe_check_file, 'w') as nrpe_check_config:
            nrpe_check_config.write("# check {}\n".format(self.shortname))
            if nagios_servicegroups:
                nrpe_check_config.write(
                    "# The following header was added automatically by juju\n")
                nrpe_check_config.write(
                    "# Modifying it will affect nagios monitoring and alerting\n")
                nrpe_check_config.write(
                    "# servicegroups: {}\n".format(nagios_servicegroups))
            nrpe_check_config.write("command[{}]={}\n".format(
                self.command, self.check_cmd))

        if not os.path.exists(NRPE.nagios_exportdir):
            log('Not writing service config as {} is not accessible'.format(
                NRPE.nagios_exportdir))
        else:
            self.write_service_config(nagios_context, hostname,
                                      nagios_servicegroups)

    def write_service_config(self, nagios_context, hostname,
                             nagios_servicegroups):
        self._remove_service_files()

        templ_vars = {
            'nagios_hostname': hostname,
            'nagios_servicegroup': nagios_servicegroups,
            'description': self.description,
            'shortname': self.shortname,
            'command': self.command,
        }
        nrpe_service_text = Check.service_template.format(**templ_vars)
        nrpe_service_file = self._get_service_filename(hostname)
        with open(nrpe_service_file, 'w') as nrpe_service_config:
            nrpe_service_config.write(str(nrpe_service_text))

    def run(self):
        subprocess.call(self.check_cmd)


class NRPE(object):
    nagios_logdir = '/var/log/nagios'
    nagios_exportdir = '/var/lib/nagios/export'
    nrpe_confdir = '/etc/nagios/nrpe.d'
    homedir = '/var/lib/nagios'  # home dir provided by nagios-nrpe-server

    def __init__(self, hostname=None, primary=True):
        super(NRPE, self).__init__()
        self.config = config()
        self.primary = primary
        self.nagios_context = self.config['nagios_context']
        if 'nagios_servicegroups' in self.config and self.config['nagios_servicegroups']:
            self.nagios_servicegroups = self.config['nagios_servicegroups']
        else:
            self.nagios_servicegroups = self.nagios_context
        self.unit_name = local_unit().replace('/', '-')
        if hostname:
            self.hostname = hostname
        else:
            nagios_hostname = get_nagios_hostname()
            if nagios_hostname:
                self.hostname = nagios_hostname
            else:
                self.hostname = "{}-{}".format(self.nagios_context, self.unit_name)
        self.checks = []
        # Iff in an nrpe-external-master relation hook, set primary status
        relation = relation_ids('nrpe-external-master')
        if relation:
            log("Setting charm primary status {}".format(primary))
            for rid in relation_ids('nrpe-external-master'):
                relation_set(relation_id=rid, relation_settings={'primary': self.primary})

    def add_check(self, *args, **kwargs):
        self.checks.append(Check(*args, **kwargs))

    def remove_check(self, *args, **kwargs):
        if kwargs.get('shortname') is None:
            raise ValueError('shortname of check must be specified')

        # Use sensible defaults if they're not specified - these are not
        # actually used during removal, but they're required for constructing
        # the Check object; check_disk is chosen because it's part of the
        # nagios-plugins-basic package.
        if kwargs.get('check_cmd') is None:
            kwargs['check_cmd'] = 'check_disk'
        if kwargs.get('description') is None:
            kwargs['description'] = ''

        check = Check(*args, **kwargs)
        check.remove(self.hostname)

    def write(self):
        try:
            nagios_uid = pwd.getpwnam('nagios').pw_uid
            nagios_gid = grp.getgrnam('nagios').gr_gid
        except Exception:
            log("Nagios user not set up, nrpe checks not updated")
            return

        if not os.path.exists(NRPE.nagios_logdir):
            os.mkdir(NRPE.nagios_logdir)
            os.chown(NRPE.nagios_logdir, nagios_uid, nagios_gid)

        nrpe_monitors = {}
        monitors = {"monitors": {"remote": {"nrpe": nrpe_monitors}}}
        for nrpecheck in self.checks:
            nrpecheck.write(self.nagios_context, self.hostname,
                            self.nagios_servicegroups)
            nrpe_monitors[nrpecheck.shortname] = {
                "command": nrpecheck.command,
            }

        # update-status hooks are configured to fire every 5 minutes by
        # default. When nagios-nrpe-server is restarted, the nagios server
        # reports checks failing, causing unnecessary alerts. Let's not restart
        # on update-status hooks.
        if not hook_name() == 'update-status':
            service('restart', 'nagios-nrpe-server')

        monitor_ids = relation_ids("local-monitors") + \
            relation_ids("nrpe-external-master")
        for rid in monitor_ids:
            relation_set(relation_id=rid, monitors=yaml.dump(monitors))


def get_nagios_hostcontext(relation_name='nrpe-external-master'):
|
||||
"""
|
||||
Query relation with nrpe subordinate, return the nagios_host_context
|
||||
|
||||
:param str relation_name: Name of relation nrpe sub joined to
|
||||
"""
|
||||
for rel in relations_of_type(relation_name):
|
||||
if 'nagios_host_context' in rel:
|
||||
return rel['nagios_host_context']
|
||||
|
||||
|
||||
def get_nagios_hostname(relation_name='nrpe-external-master'):
|
||||
"""
|
||||
Query relation with nrpe subordinate, return the nagios_hostname
|
||||
|
||||
:param str relation_name: Name of relation nrpe sub joined to
|
||||
"""
|
||||
for rel in relations_of_type(relation_name):
|
||||
        if 'nagios_hostname' in rel:
            return rel['nagios_hostname']


def get_nagios_unit_name(relation_name='nrpe-external-master'):
    """
    Return the nagios unit name prepended with host_context if needed

    :param str relation_name: Name of relation nrpe sub joined to
    """
    host_context = get_nagios_hostcontext(relation_name)
    if host_context:
        unit = "%s:%s" % (host_context, local_unit())
    else:
        unit = local_unit()
    return unit


def add_init_service_checks(nrpe, services, unit_name, immediate_check=True):
    """
    Add checks for each service in list

    :param NRPE nrpe: NRPE object to add check to
    :param list services: List of services to check
    :param str unit_name: Unit name to use in check description
    :param bool immediate_check: For sysv init, run the service check immediately
    """
    for svc in services:
        # Don't add a check for these services from neutron-gateway
        if svc in ['ext-port', 'os-charm-phy-nic-mtu']:
            continue

        upstart_init = '/etc/init/%s.conf' % svc
        sysv_init = '/etc/init.d/%s' % svc

        if host.init_is_systemd():
            nrpe.add_check(
                shortname=svc,
                description='process check {%s}' % unit_name,
                check_cmd='check_systemd.py %s' % svc
            )
        elif os.path.exists(upstart_init):
            nrpe.add_check(
                shortname=svc,
                description='process check {%s}' % unit_name,
                check_cmd='check_upstart_job %s' % svc
            )
        elif os.path.exists(sysv_init):
            cronpath = '/etc/cron.d/nagios-service-check-%s' % svc
            checkpath = '%s/service-check-%s.txt' % (nrpe.homedir, svc)
            croncmd = (
                '/usr/local/lib/nagios/plugins/check_exit_status.pl '
                '-e -s /etc/init.d/%s status' % svc
            )
            cron_file = '*/5 * * * * root %s > %s\n' % (croncmd, checkpath)
            f = open(cronpath, 'w')
            f.write(cron_file)
            f.close()
            nrpe.add_check(
                shortname=svc,
                description='service check {%s}' % unit_name,
                check_cmd='check_status_file.py -f %s' % checkpath,
            )
            # if /var/lib/nagios doesn't exist open(checkpath, 'w') will fail
            # (LP: #1670223).
            if immediate_check and os.path.isdir(nrpe.homedir):
                f = open(checkpath, 'w')
                subprocess.call(
                    croncmd.split(),
                    stdout=f,
                    stderr=subprocess.STDOUT
                )
                f.close()
                os.chmod(checkpath, 0o644)

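For the sysv branch of `add_init_service_checks`, it can help to see the cron entry that gets written. A standalone sketch, where the service name `rsyslog` and the homedir are illustrative values, not taken from the diff:

```python
# Rebuild the cron line add_init_service_checks writes for a sysv service.
svc = 'rsyslog'
homedir = '/var/lib/nagios'
checkpath = '%s/service-check-%s.txt' % (homedir, svc)
croncmd = (
    '/usr/local/lib/nagios/plugins/check_exit_status.pl '
    '-e -s /etc/init.d/%s status' % svc
)
# Run every 5 minutes as root, redirecting the result to the status file
# that check_status_file.py later reads.
cron_file = '*/5 * * * * root %s > %s\n' % (croncmd, checkpath)
print(cron_file.strip())
```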
def copy_nrpe_checks(nrpe_files_dir=None):
    """
    Copy the nrpe checks into place
    """
    NAGIOS_PLUGINS = '/usr/local/lib/nagios/plugins'
    if nrpe_files_dir is None:
        # determine if "charmhelpers" is in CHARMDIR or CHARMDIR/hooks
        for segment in ['.', 'hooks']:
            nrpe_files_dir = os.path.abspath(os.path.join(
                os.getenv('CHARM_DIR'),
                segment,
                'charmhelpers',
                'contrib',
                'openstack',
                'files'))
            if os.path.isdir(nrpe_files_dir):
                break
        else:
            raise RuntimeError("Couldn't find charmhelpers directory")
    if not os.path.exists(NAGIOS_PLUGINS):
        os.makedirs(NAGIOS_PLUGINS)
    for fname in glob.glob(os.path.join(nrpe_files_dir, "check_*")):
        if os.path.isfile(fname):
            shutil.copy2(fname,
                         os.path.join(NAGIOS_PLUGINS, os.path.basename(fname)))


def add_haproxy_checks(nrpe, unit_name):
    """
    Add HAProxy checks

    :param NRPE nrpe: NRPE object to add check to
    :param str unit_name: Unit name to use in check description
    """
    nrpe.add_check(
        shortname='haproxy_servers',
        description='Check HAProxy {%s}' % unit_name,
        check_cmd='check_haproxy.sh')
    nrpe.add_check(
        shortname='haproxy_queue',
        description='Check HAProxy queue depth {%s}' % unit_name,
        check_cmd='check_haproxy_queue_depth.sh')
@@ -1,173 +0,0 @@
# Copyright 2014-2015 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

'''
Functions for managing volumes in juju units. One volume is supported per unit.
Subordinates may have their own storage, provided it is on its own partition.

Configuration stanzas::

    volume-ephemeral:
        type: boolean
        default: true
        description: >
            If false, a volume is mounted as specified in "volume-map"
            If true, ephemeral storage will be used, meaning that log data
            will only exist as long as the machine. YOU HAVE BEEN WARNED.
    volume-map:
        type: string
        default: {}
        description: >
            YAML map of units to device names, e.g:
            "{ rsyslog/0: /dev/vdb, rsyslog/1: /dev/vdb }"
            Service units will raise a configure-error if volume-ephemeral
            is 'false' and no volume-map value is set. Use 'juju set' to set a
            value and 'juju resolved' to complete configuration.

Usage::

    from charmsupport.volumes import configure_volume, VolumeConfigurationError
    from charmsupport.hookenv import log, ERROR
    def pre_mount_hook():
        stop_service('myservice')
    def post_mount_hook():
        start_service('myservice')

    if __name__ == '__main__':
        try:
            configure_volume(before_change=pre_mount_hook,
                             after_change=post_mount_hook)
        except VolumeConfigurationError:
            log('Storage could not be configured', ERROR)

'''

# XXX: Known limitations
# - fstab is neither consulted nor updated

import os
from charmhelpers.core import hookenv
from charmhelpers.core import host
import yaml


MOUNT_BASE = '/srv/juju/volumes'


class VolumeConfigurationError(Exception):
    '''Volume configuration data is missing or invalid'''
    pass


def get_config():
    '''Gather and sanity-check volume configuration data'''
    volume_config = {}
    config = hookenv.config()

    errors = False

    if config.get('volume-ephemeral') in (True, 'True', 'true', 'Yes', 'yes'):
        volume_config['ephemeral'] = True
    else:
        volume_config['ephemeral'] = False

    try:
        volume_map = yaml.safe_load(config.get('volume-map', '{}'))
    except yaml.YAMLError as e:
        hookenv.log("Error parsing YAML volume-map: {}".format(e),
                    hookenv.ERROR)
        errors = True
    if volume_map is None:
        # probably an empty string
        volume_map = {}
    elif not isinstance(volume_map, dict):
        hookenv.log("Volume-map should be a dictionary, not {}".format(
            type(volume_map)))
        errors = True

    volume_config['device'] = volume_map.get(os.environ['JUJU_UNIT_NAME'])
    if volume_config['device'] and volume_config['ephemeral']:
        # asked for ephemeral storage but also defined a volume ID
        hookenv.log('A volume is defined for this unit, but ephemeral '
                    'storage was requested', hookenv.ERROR)
        errors = True
    elif not volume_config['device'] and not volume_config['ephemeral']:
        # asked for permanent storage but did not define volume ID
        hookenv.log('Permanent storage was requested, but no volume is '
                    'defined for this unit.', hookenv.ERROR)
        errors = True

    unit_mount_name = hookenv.local_unit().replace('/', '-')
    volume_config['mountpoint'] = os.path.join(MOUNT_BASE, unit_mount_name)

    if errors:
        return None
    return volume_config


def mount_volume(config):
    if os.path.exists(config['mountpoint']):
        if not os.path.isdir(config['mountpoint']):
            hookenv.log('Not a directory: {}'.format(config['mountpoint']))
            raise VolumeConfigurationError()
    else:
        host.mkdir(config['mountpoint'])
    if os.path.ismount(config['mountpoint']):
        unmount_volume(config)
    if not host.mount(config['device'], config['mountpoint'], persist=True):
        raise VolumeConfigurationError()


def unmount_volume(config):
    if os.path.ismount(config['mountpoint']):
        if not host.umount(config['mountpoint'], persist=True):
            raise VolumeConfigurationError()


def managed_mounts():
    '''List of all mounted managed volumes'''
    return filter(lambda mount: mount[0].startswith(MOUNT_BASE), host.mounts())


def configure_volume(before_change=lambda: None, after_change=lambda: None):
    '''Set up storage (or don't) according to the charm's volume configuration.
    Returns the mount point or "ephemeral". before_change and after_change
    are optional functions to be called if the volume configuration changes.
    '''

    config = get_config()
    if not config:
        hookenv.log('Failed to read volume configuration', hookenv.CRITICAL)
        raise VolumeConfigurationError()

    if config['ephemeral']:
        if os.path.ismount(config['mountpoint']):
            before_change()
            unmount_volume(config)
            after_change()
        return 'ephemeral'
    else:
        # persistent storage
        if os.path.ismount(config['mountpoint']):
            mounts = dict(managed_mounts())
            if mounts.get(config['mountpoint']) != config['device']:
                before_change()
                unmount_volume(config)
                mount_volume(config)
                after_change()
        else:
            before_change()
            mount_volume(config)
            after_change()
        return config['mountpoint']
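`get_config` above treats the configuration as valid only when exactly one of ephemeral storage or a mapped device is requested. A minimal standalone sketch of that rule (the helper name `volume_config_ok` is hypothetical, not part of the module):

```python
def volume_config_ok(ephemeral, device):
    # Valid iff exactly one of the two is set: ephemeral storage with no
    # device, or a mapped device without ephemeral storage.
    return bool(ephemeral) ^ bool(device)

print(volume_config_ok(True, None))         # ephemeral only: valid
print(volume_config_ok(False, '/dev/vdb'))  # mapped device only: valid
print(volume_config_ok(True, '/dev/vdb'))   # both set: invalid
print(volume_config_ok(False, None))        # neither set: invalid
```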
@@ -1,13 +0,0 @@
# Copyright 2014-2015 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
@@ -1,86 +0,0 @@
# Copyright 2014-2015 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

#
# Copyright 2012 Canonical Ltd.
#
# This file is sourced from lp:openstack-charm-helpers
#
# Authors:
#  James Page <james.page@ubuntu.com>
#  Adam Gandelman <adamg@ubuntu.com>
#

import os

from charmhelpers.core import host
from charmhelpers.core.hookenv import (
    config as config_get,
    relation_get,
    relation_ids,
    related_units as relation_list,
    log,
    INFO,
)


def get_cert(cn=None):
    # TODO: deal with multiple https endpoints via charm config
    cert = config_get('ssl_cert')
    key = config_get('ssl_key')
    if not (cert and key):
        log("Inspecting identity-service relations for SSL certificate.",
            level=INFO)
        cert = key = None
        if cn:
            ssl_cert_attr = 'ssl_cert_{}'.format(cn)
            ssl_key_attr = 'ssl_key_{}'.format(cn)
        else:
            ssl_cert_attr = 'ssl_cert'
            ssl_key_attr = 'ssl_key'
        for r_id in relation_ids('identity-service'):
            for unit in relation_list(r_id):
                if not cert:
                    cert = relation_get(ssl_cert_attr,
                                        rid=r_id, unit=unit)
                if not key:
                    key = relation_get(ssl_key_attr,
                                       rid=r_id, unit=unit)
    return (cert, key)


def get_ca_cert():
    ca_cert = config_get('ssl_ca')
    if ca_cert is None:
        log("Inspecting identity-service relations for CA SSL certificate.",
            level=INFO)
        for r_id in (relation_ids('identity-service') +
                     relation_ids('identity-credentials')):
            for unit in relation_list(r_id):
                if ca_cert is None:
                    ca_cert = relation_get('ca_cert',
                                           rid=r_id, unit=unit)
    return ca_cert


def retrieve_ca_cert(cert_file):
    cert = None
    if os.path.isfile(cert_file):
        with open(cert_file, 'rb') as crt:
            cert = crt.read()
    return cert


def install_ca_cert(ca_cert):
    host.install_ca_cert(ca_cert, 'keystone_juju_ca_cert')
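`get_cert` looks up CN-suffixed keys on the identity-service relation when a CN is supplied. This small sketch (the helper name `ssl_attr_names` is hypothetical) shows the attribute names it queries:

```python
def ssl_attr_names(cn=None):
    # Mirrors the naming in get_cert: per-CN keys when a CN is given,
    # plain 'ssl_cert'/'ssl_key' otherwise.
    if cn:
        return 'ssl_cert_{}'.format(cn), 'ssl_key_{}'.format(cn)
    return 'ssl_cert', 'ssl_key'

print(ssl_attr_names())
print(ssl_attr_names('keystone.example.com'))
```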
@@ -1,406 +0,0 @@
# Copyright 2014-2015 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

#
# Copyright 2012 Canonical Ltd.
#
# Authors:
#  James Page <james.page@ubuntu.com>
#  Adam Gandelman <adamg@ubuntu.com>
#

"""
Helpers for clustering and determining "cluster leadership" and other
clustering-related helpers.
"""

import subprocess
import os
import time

from socket import gethostname as get_unit_hostname

import six

from charmhelpers.core.hookenv import (
    log,
    relation_ids,
    related_units as relation_list,
    relation_get,
    config as config_get,
    INFO,
    DEBUG,
    WARNING,
    unit_get,
    is_leader as juju_is_leader,
    status_set,
)
from charmhelpers.core.host import (
    modulo_distribution,
)
from charmhelpers.core.decorators import (
    retry_on_exception,
)
from charmhelpers.core.strutils import (
    bool_from_string,
)

DC_RESOURCE_NAME = 'DC'


class HAIncompleteConfig(Exception):
    pass


class HAIncorrectConfig(Exception):
    pass


class CRMResourceNotFound(Exception):
    pass


class CRMDCNotFound(Exception):
    pass


def is_elected_leader(resource):
    """
    Returns True if the charm executing this is the elected cluster leader.

    It relies on the following mechanisms to determine leadership:
    1. If juju is sufficiently new and leadership election is supported,
       the is_leader command will be used.
    2. If the charm is part of a corosync cluster, call corosync to
       determine leadership.
    3. If the charm is not part of a corosync cluster, the leader is
       determined as being "the alive unit with the lowest unit number". In
       other words, the oldest surviving unit.
    """
    try:
        return juju_is_leader()
    except NotImplementedError:
        log('Juju leadership election feature not enabled'
            ', using fallback support',
            level=WARNING)

    if is_clustered():
        if not is_crm_leader(resource):
            log('Deferring action to CRM leader.', level=INFO)
            return False
    else:
        peers = peer_units()
        if peers and not oldest_peer(peers):
            log('Deferring action to oldest service unit.', level=INFO)
            return False
    return True


def is_clustered():
    for r_id in (relation_ids('ha') or []):
        for unit in (relation_list(r_id) or []):
            clustered = relation_get('clustered',
                                     rid=r_id,
                                     unit=unit)
            if clustered:
                return True
    return False


def is_crm_dc():
    """
    Determine leadership by querying the pacemaker Designated Controller
    """
    cmd = ['crm', 'status']
    try:
        status = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
        if not isinstance(status, six.text_type):
            status = six.text_type(status, "utf-8")
    except subprocess.CalledProcessError as ex:
        raise CRMDCNotFound(str(ex))

    current_dc = ''
    for line in status.split('\n'):
        if line.startswith('Current DC'):
            # Current DC: juju-lytrusty-machine-2 (168108163) - partition with quorum
            current_dc = line.split(':')[1].split()[0]
    if current_dc == get_unit_hostname():
        return True
    elif current_dc == 'NONE':
        raise CRMDCNotFound('Current DC: NONE')

    return False

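The "Current DC" parsing in `is_crm_dc` can be exercised standalone; the sample `crm status` output below is illustrative, not captured from a real cluster:

```python
# Same parsing logic as is_crm_dc above, applied to sample output.
status = (
    "Stack: corosync\n"
    "Current DC: juju-machine-2 (168108163) - partition with quorum\n"
)
current_dc = ''
for line in status.split('\n'):
    if line.startswith('Current DC'):
        # Take the first token after the colon: the DC's hostname.
        current_dc = line.split(':')[1].split()[0]
print(current_dc)  # juju-machine-2
```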
@retry_on_exception(5, base_delay=2,
                    exc_type=(CRMResourceNotFound, CRMDCNotFound))
def is_crm_leader(resource, retry=False):
    """
    Returns True if the charm calling this is the elected corosync leader,
    as returned by calling the external "crm" command.

    We allow this operation to be retried to avoid the possibility of getting a
    false negative. See LP #1396246 for more info.
    """
    if resource == DC_RESOURCE_NAME:
        return is_crm_dc()
    cmd = ['crm', 'resource', 'show', resource]
    try:
        status = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
        if not isinstance(status, six.text_type):
            status = six.text_type(status, "utf-8")
    except subprocess.CalledProcessError:
        status = None

    if status and get_unit_hostname() in status:
        return True

    if status and "resource %s is NOT running" % (resource) in status:
        raise CRMResourceNotFound("CRM resource %s not found" % (resource))

    return False


def is_leader(resource):
    log("is_leader is deprecated. Please consider using is_crm_leader "
        "instead.", level=WARNING)
    return is_crm_leader(resource)


def peer_units(peer_relation="cluster"):
    peers = []
    for r_id in (relation_ids(peer_relation) or []):
        for unit in (relation_list(r_id) or []):
            peers.append(unit)
    return peers


def peer_ips(peer_relation='cluster', addr_key='private-address'):
    '''Return a dict of peers and their private-address'''
    peers = {}
    for r_id in relation_ids(peer_relation):
        for unit in relation_list(r_id):
            peers[unit] = relation_get(addr_key, rid=r_id, unit=unit)
    return peers


def oldest_peer(peers):
    """Determines who the oldest peer is by comparing unit numbers."""
    local_unit_no = int(os.getenv('JUJU_UNIT_NAME').split('/')[1])
    for peer in peers:
        remote_unit_no = int(peer.split('/')[1])
        if remote_unit_no < local_unit_no:
            return False
    return True

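`oldest_peer`'s fallback election reduces to an integer comparison of unit numbers. A self-contained version with the local unit passed explicitly instead of read from `JUJU_UNIT_NAME`:

```python
def oldest_peer(peers, local_unit_name):
    # The unit with the lowest number among itself and its peers wins.
    local_unit_no = int(local_unit_name.split('/')[1])
    for peer in peers:
        if int(peer.split('/')[1]) < local_unit_no:
            return False
    return True

print(oldest_peer(['keystone/4', 'keystone/7'], 'keystone/3'))  # True
print(oldest_peer(['keystone/1', 'keystone/7'], 'keystone/3'))  # False
```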
def eligible_leader(resource):
    log("eligible_leader is deprecated. Please consider using "
        "is_elected_leader instead.", level=WARNING)
    return is_elected_leader(resource)


def https():
    '''
    Determines whether enough data has been provided in configuration
    or relation data to configure HTTPS.

    returns: boolean
    '''
    use_https = config_get('use-https')
    if use_https and bool_from_string(use_https):
        return True
    if config_get('ssl_cert') and config_get('ssl_key'):
        return True
    for r_id in relation_ids('certificates'):
        for unit in relation_list(r_id):
            ca = relation_get('ca', rid=r_id, unit=unit)
            if ca:
                return True
    for r_id in relation_ids('identity-service'):
        for unit in relation_list(r_id):
            # TODO - needs fixing for new helper as ssl_cert/key suffixes with CN
            rel_state = [
                relation_get('https_keystone', rid=r_id, unit=unit),
                relation_get('ca_cert', rid=r_id, unit=unit),
            ]
            # NOTE: works around (LP: #1203241)
            if (None not in rel_state) and ('' not in rel_state):
                return True
    return False


def determine_api_port(public_port, singlenode_mode=False):
    '''
    Determine correct API server listening port based on
    existence of HTTPS reverse proxy and/or haproxy.

    public_port: int: standard public port for given service

    singlenode_mode: boolean: Shuffle ports when only a single unit is present

    returns: int: the correct listening port for the API service
    '''
    i = 0
    if singlenode_mode:
        i += 1
    elif len(peer_units()) > 0 or is_clustered():
        i += 1
    if https():
        i += 1
    return public_port - (i * 10)


def determine_apache_port(public_port, singlenode_mode=False):
    '''
    Determine correct apache listening port based on public IP +
    state of the cluster.

    public_port: int: standard public port for given service

    singlenode_mode: boolean: Shuffle ports when only a single unit is present

    returns: int: the correct listening port for the HAProxy service
    '''
    i = 0
    if singlenode_mode:
        i += 1
    elif len(peer_units()) > 0 or is_clustered():
        i += 1
    return public_port - (i * 10)

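The two functions above share the same arithmetic: each active layer in front of the service (haproxy when clustered or in singlenode mode, and an HTTPS reverse proxy for the API port) moves the listener 10 ports below the public port. A standalone rendering of that rule; the example port 9696 is illustrative:

```python
def shifted_port(public_port, behind_haproxy, behind_https):
    # Each fronting layer shifts the backend listener down by 10.
    i = 0
    if behind_haproxy:
        i += 1
    if behind_https:
        i += 1
    return public_port - (i * 10)

print(shifted_port(9696, False, False))  # 9696: service listens directly
print(shifted_port(9696, True, False))   # 9686: haproxy in front
print(shifted_port(9696, True, True))    # 9676: haproxy + TLS reverse proxy
```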
def get_hacluster_config(exclude_keys=None):
    '''
    Obtains all relevant configuration from charm configuration required
    for initiating a relation to hacluster:

        ha-bindiface, ha-mcastport, vip, os-internal-hostname,
        os-admin-hostname, os-public-hostname, os-access-hostname

    param: exclude_keys: list of setting key(s) to be excluded.
    returns: dict: A dict containing settings keyed by setting name.
    raises: HAIncorrectConfig if settings are missing or incorrect.
    '''
    settings = ['ha-bindiface', 'ha-mcastport', 'vip', 'os-internal-hostname',
                'os-admin-hostname', 'os-public-hostname', 'os-access-hostname']
    conf = {}
    for setting in settings:
        if exclude_keys and setting in exclude_keys:
            continue

        conf[setting] = config_get(setting)

    if not valid_hacluster_config():
        raise HAIncorrectConfig('Insufficient or incorrect config data to '
                                'configure hacluster.')
    return conf


def valid_hacluster_config():
    '''
    Check that either vip or dns-ha is set. If dns-ha then one of os-*-hostname
    must be set.

    Note: ha-bindiface and ha-mcastport both have defaults and will always
    be set. We only care that either vip or dns-ha is set.

    :returns: boolean: valid config returns true.
    raises: HAIncorrectConfig if settings conflict.
    raises: HAIncompleteConfig if settings are missing.
    '''
    vip = config_get('vip')
    dns = config_get('dns-ha')
    if not(bool(vip) ^ bool(dns)):
        msg = ('HA: Either vip or dns-ha must be set but not both in order to '
               'use high availability')
        status_set('blocked', msg)
        raise HAIncorrectConfig(msg)

    # If dns-ha then one of os-*-hostname must be set
    if dns:
        dns_settings = ['os-internal-hostname', 'os-admin-hostname',
                        'os-public-hostname', 'os-access-hostname']
        # At this point it is unknown if one or all of the possible
        # network spaces are in HA. Validate at least one is set which is
        # the minimum required.
        for setting in dns_settings:
            if config_get(setting):
                log('DNS HA: At least one hostname is set {}: {}'
                    ''.format(setting, config_get(setting)),
                    level=DEBUG)
                return True

        msg = ('DNS HA: At least one os-*-hostname(s) must be set to use '
               'DNS HA')
        status_set('blocked', msg)
        raise HAIncompleteConfig(msg)

    log('VIP HA: VIP is set {}'.format(vip), level=DEBUG)
    return True


def canonical_url(configs, vip_setting='vip'):
    '''
    Returns the correct HTTP URL to this host given the state of HTTPS
    configuration and hacluster.

    :configs    : OSTemplateRenderer: A config templating object to inspect
                  for a complete https context.

    :vip_setting: str: Setting in charm config that specifies
                  VIP address.
    '''
    scheme = 'http'
    if 'https' in configs.complete_contexts():
        scheme = 'https'
    if is_clustered():
        addr = config_get(vip_setting)
    else:
        addr = unit_get('private-address')
    return '%s://%s' % (scheme, addr)


def distributed_wait(modulo=None, wait=None, operation_name='operation'):
    ''' Distribute operations by waiting based on modulo_distribution

    If modulo and or wait are not set, check config_get for those values.
    If config values are not set, default to modulo=3 and wait=30.

    :param modulo: int The modulo number creates the group distribution
    :param wait: int The constant time wait value
    :param operation_name: string Operation name for status message
                           i.e. 'restart'
    :side effect: Calls config_get()
    :side effect: Calls log()
    :side effect: Calls status_set()
    :side effect: Calls time.sleep()
    '''
    if modulo is None:
        modulo = config_get('modulo-nodes') or 3
    if wait is None:
        wait = config_get('known-wait') or 30
    if juju_is_leader():
        # The leader should never wait
        calculated_wait = 0
    else:
        # non_zero_wait=True guarantees the non-leader who gets modulo 0
        # will still wait
        calculated_wait = modulo_distribution(modulo=modulo, wait=wait,
                                              non_zero_wait=True)
    msg = "Waiting {} seconds for {} ...".format(calculated_wait,
                                                 operation_name)
    log(msg, DEBUG)
    status_set('maintenance', msg)
    time.sleep(calculated_wait)
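`modulo_distribution` itself lives in `charmhelpers.core.host` and is not part of this diff; the sketch below is a standalone approximation that assumes the behaviour described by the comments above (group units by unit number modulo `modulo`, wait `group * wait` seconds, and with `non_zero_wait` give group 0 a full cycle instead of zero):

```python
def calculated_wait(unit_number, modulo=3, wait=30, non_zero_wait=True):
    # Units are grouped by unit_number % modulo; each group waits
    # group * wait seconds. With non_zero_wait, group 0 waits a full
    # cycle (modulo * wait) instead of zero, as distributed_wait
    # requires for non-leaders.
    group = unit_number % modulo
    if group == 0 and non_zero_wait:
        return modulo * wait
    return group * wait

print([calculated_wait(n) for n in range(4)])  # [90, 30, 60, 90]
```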
@@ -1,38 +0,0 @@
# Juju charm-helpers hardening library

## Description

This library provides multiple implementations of system and application
hardening that conform to the standards of http://hardening.io/.

Current implementations include:

 * OS
 * SSH
 * MySQL
 * Apache

## Requirements

 * Juju Charms

## Usage

1. Synchronise this library into your charm and add the harden() decorator
   (from contrib.hardening.harden) to any functions or methods you want to use
   to trigger hardening of your application/system.

2. Add a config option called 'harden' to your charm config.yaml and set it to
   a space-delimited list of hardening modules you want to run e.g. "os ssh"

3. Override any config defaults (contrib.hardening.defaults) by adding a file
   called hardening.yaml to your charm root containing the name(s) of the
   modules whose settings you want to override at root level and then any
   settings with overrides e.g.

       os:
           general:
               desktop_enable: True

4. Now just run your charm as usual and hardening will be applied each time the
   hook runs.
@@ -1,13 +0,0 @@
# Copyright 2016 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
@@ -1,17 +0,0 @@
# Copyright 2016 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from os import path

TEMPLATES_DIR = path.join(path.dirname(__file__), 'templates')
@@ -1,29 +0,0 @@
# Copyright 2016 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from charmhelpers.core.hookenv import (
    log,
    DEBUG,
)
from charmhelpers.contrib.hardening.apache.checks import config


def run_apache_checks():
    log("Starting Apache hardening checks.", level=DEBUG)
    checks = config.get_audits()
    for check in checks:
        log("Running '%s' check" % (check.__class__.__name__), level=DEBUG)
        check.ensure_compliance()

    log("Apache hardening checks complete.", level=DEBUG)
@ -1,104 +0,0 @@
|
||||
# Copyright 2016 Canonical Limited.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import re
import six
import subprocess


from charmhelpers.core.hookenv import (
    log,
    INFO,
)
from charmhelpers.contrib.hardening.audits.file import (
    FilePermissionAudit,
    DirectoryPermissionAudit,
    NoReadWriteForOther,
    TemplatedFile,
    DeletedFile
)
from charmhelpers.contrib.hardening.audits.apache import DisabledModuleAudit
from charmhelpers.contrib.hardening.apache import TEMPLATES_DIR
from charmhelpers.contrib.hardening import utils


def get_audits():
    """Get Apache hardening config audits.

    :returns: list of audits
    """
    if subprocess.call(['which', 'apache2'], stdout=subprocess.PIPE) != 0:
        log("Apache server does not appear to be installed on this node - "
            "skipping apache hardening", level=INFO)
        return []

    context = ApacheConfContext()
    settings = utils.get_settings('apache')
    audits = [
        FilePermissionAudit(paths=os.path.join(
                            settings['common']['apache_dir'], 'apache2.conf'),
                            user='root', group='root', mode=0o0640),

        TemplatedFile(os.path.join(settings['common']['apache_dir'],
                                   'mods-available/alias.conf'),
                      context,
                      TEMPLATES_DIR,
                      mode=0o0640,
                      user='root',
                      service_actions=[{'service': 'apache2',
                                        'actions': ['restart']}]),

        TemplatedFile(os.path.join(settings['common']['apache_dir'],
                                   'conf-enabled/99-hardening.conf'),
                      context,
                      TEMPLATES_DIR,
                      mode=0o0640,
                      user='root',
                      service_actions=[{'service': 'apache2',
                                        'actions': ['restart']}]),

        DirectoryPermissionAudit(settings['common']['apache_dir'],
                                 user='root',
                                 group='root',
                                 mode=0o0750),

        DisabledModuleAudit(settings['hardening']['modules_to_disable']),

        NoReadWriteForOther(settings['common']['apache_dir']),

        DeletedFile(['/var/www/html/index.html'])
    ]

    return audits


class ApacheConfContext(object):
    """Defines the set of key/value pairs to set in an apache config file.

    This context, when called, will return a dictionary containing the
    key/value pairs of settings to specify in the
    /etc/apache/conf-enabled/hardening.conf file.
    """
    def __call__(self):
        settings = utils.get_settings('apache')
        ctxt = settings['hardening']

        out = subprocess.check_output(['apache2', '-v'])
        if six.PY3:
            out = out.decode('utf-8')
        ctxt['apache_version'] = re.search(r'.+version: Apache/(.+?)\s.+',
                                           out).group(1)
        ctxt['apache_icondir'] = '/usr/share/apache2/icons/'
        return ctxt
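The version-detection regex used by the context class can be exercised standalone. A minimal sketch, assuming an illustrative `apache2 -v` banner (the real banner varies by build):

```python
import re

# Illustrative sample of `apache2 -v` output.
sample = "Server version: Apache/2.4.29 (Ubuntu)\nServer built:   2019-01-01"

# The same pattern the context uses to pull out the bare version string:
# match up to "version: Apache/", capture lazily until the next whitespace.
version = re.search(r'.+version: Apache/(.+?)\s.+', sample).group(1)
print(version)  # → 2.4.29
```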
@ -1,32 +0,0 @@
###############################################################################
# WARNING: This configuration file is maintained by Juju. Local changes may
# be overwritten.
###############################################################################

<Location / >
    <LimitExcept {{ allowed_http_methods }} >
        # http://httpd.apache.org/docs/2.4/upgrading.html
        {% if apache_version > '2.2' -%}
        Require all granted
        {% else -%}
        Order Allow,Deny
        Deny from all
        {% endif %}
    </LimitExcept>
</Location>

<Directory />
    Options -Indexes -FollowSymLinks
    AllowOverride None
</Directory>

<Directory /var/www/>
    Options -Indexes -FollowSymLinks
    AllowOverride None
</Directory>

TraceEnable {{ traceenable }}
ServerTokens {{ servertokens }}

SSLHonorCipherOrder {{ honor_cipher_order }}
SSLCipherSuite {{ cipher_suite }}
@ -1,31 +0,0 @@
###############################################################################
# WARNING: This configuration file is maintained by Juju. Local changes may
# be overwritten.
###############################################################################
<IfModule alias_module>
    #
    # Aliases: Add here as many aliases as you need (with no limit). The format
    # is Alias fakename realname
    #
    # Note that if you include a trailing / on fakename then the server will
    # require it to be present in the URL. So "/icons" isn't aliased in this
    # example, only "/icons/". If the fakename is slash-terminated, then the
    # realname must also be slash terminated, and if the fakename omits the
    # trailing slash, the realname must also omit it.
    #
    # We include the /icons/ alias for FancyIndexed directory listings. If
    # you do not use FancyIndexing, you may comment this out.
    #
    Alias /icons/ "{{ apache_icondir }}/"

    <Directory "{{ apache_icondir }}">
        Options -Indexes -MultiViews -FollowSymLinks
        AllowOverride None
        {% if apache_version == '2.4' -%}
        Require all granted
        {% else -%}
        Order allow,deny
        Allow from all
        {% endif %}
    </Directory>
</IfModule>
@ -1,54 +0,0 @@
# Copyright 2016 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


class BaseAudit(object):  # noqa
    """Base class for hardening checks.

    The lifecycle of a hardening check is to first determine whether the
    system is in compliance for the specified check. If it is not in
    compliance, and an action is permitted by _take_action, the check applies
    whatever changes are needed to bring the system into compliance.
    """
    def __init__(self, *args, **kwargs):
        self.unless = kwargs.get('unless', None)
        super(BaseAudit, self).__init__()

    def ensure_compliance(self):
        """Checks to see if the current hardening check is in compliance or
        not.

        If the check that is performed is not in compliance, then an exception
        should be raised.
        """
        pass

    def _take_action(self):
        """Determines whether to perform the action or not.

        Checks whether or not an action should be taken. This is determined by
        the truthy value of the unless parameter. If unless is a callback
        method, it will be invoked with no parameters in order to determine
        whether or not the action should be taken. Otherwise, the truthy value
        of the unless attribute will determine if the action should be
        performed.
        """
        # Do the action if there isn't an unless override.
        if self.unless is None:
            return True

        # Invoke the callback if there is one.
        if hasattr(self.unless, '__call__'):
            return not self.unless()

        return not self.unless
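The `unless` gate semantics above can be sketched in isolation. A minimal standalone sketch (the `Audit` class name is illustrative, not part of charmhelpers):

```python
class Audit:
    """Minimal sketch of BaseAudit's `unless` override gate."""
    def __init__(self, unless=None):
        self.unless = unless

    def take_action(self):
        # No override: always act.
        if self.unless is None:
            return True
        # Callable override: act only when the callback returns falsy.
        if hasattr(self.unless, '__call__'):
            return not self.unless()
        # Plain value: act only when it is falsy.
        return not self.unless

print(Audit().take_action())                      # → True
print(Audit(unless=True).take_action())           # → False
print(Audit(unless=lambda: False).take_action())  # → True
```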
@ -1,100 +0,0 @@
# Copyright 2016 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import re
import subprocess

import six

from charmhelpers.core.hookenv import (
    log,
    INFO,
    ERROR,
)

from charmhelpers.contrib.hardening.audits import BaseAudit


class DisabledModuleAudit(BaseAudit):
    """Audits Apache2 modules.

    Determines if the apache2 modules are enabled. If the modules are enabled
    then they are disabled in the ensure_compliance.
    """
    def __init__(self, modules):
        if modules is None:
            self.modules = []
        elif isinstance(modules, six.string_types):
            self.modules = [modules]
        else:
            self.modules = modules

    def ensure_compliance(self):
        """Ensures that the modules are not loaded."""
        if not self.modules:
            return

        try:
            loaded_modules = self._get_loaded_modules()
            non_compliant_modules = []
            for module in self.modules:
                if module in loaded_modules:
                    log("Module '%s' is enabled but should not be." %
                        (module), level=INFO)
                    non_compliant_modules.append(module)

            if len(non_compliant_modules) == 0:
                return

            for module in non_compliant_modules:
                self._disable_module(module)
            self._restart_apache()
        except subprocess.CalledProcessError as e:
            log('Error occurred auditing apache module compliance. '
                'This may have been already reported. '
                'Output is: %s' % e.output, level=ERROR)

    @staticmethod
    def _get_loaded_modules():
        """Returns the modules which are enabled in Apache."""
        output = subprocess.check_output(['apache2ctl', '-M'])
        if six.PY3:
            output = output.decode('utf-8')
        modules = []
        for line in output.splitlines():
            # Each line of the enabled module output looks like:
            #  module_name (static|shared)
            # Plus a header line at the top of the output which is stripped
            # out by the regex.
            matcher = re.search(r'^ (\S*)_module (\S*)', line)
            if matcher:
                modules.append(matcher.group(1))
        return modules

    @staticmethod
    def _disable_module(module):
        """Disables the specified module in Apache."""
        try:
            subprocess.check_call(['a2dismod', module])
        except subprocess.CalledProcessError as e:
            # Note: catch the error here to allow an attempt at disabling
            # multiple modules in one go rather than failing after the
            # first module fails.
            log('Error occurred disabling module %s. '
                'Output is: %s' % (module, e.output), level=ERROR)

    @staticmethod
    def _restart_apache():
        """Restarts the apache process."""
        subprocess.check_output(['service', 'apache2', 'restart'])
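The module-parsing regex used in `_get_loaded_modules` can be checked against a captured sample. A sketch assuming illustrative `apache2ctl -M` output:

```python
import re

# Illustrative `apache2ctl -M` output; the header line has no leading space.
sample = """Loaded Modules:
 core_module (static)
 alias_module (shared)
 ssl_module (shared)
"""

modules = []
for line in sample.splitlines():
    # Same pattern as the audit: a leading space, then <name>_module.
    m = re.search(r'^ (\S*)_module (\S*)', line)
    if m:
        modules.append(m.group(1))
print(modules)  # → ['core', 'alias', 'ssl']
```

The header line is skipped naturally because it does not start with a space.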
@ -1,103 +0,0 @@
# Copyright 2016 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import absolute_import  # required for external apt import
from apt import apt_pkg
from six import string_types

from charmhelpers.fetch import (
    apt_cache,
    apt_purge
)
from charmhelpers.core.hookenv import (
    log,
    DEBUG,
    WARNING,
)
from charmhelpers.contrib.hardening.audits import BaseAudit


class AptConfig(BaseAudit):

    def __init__(self, config, **kwargs):
        super(AptConfig, self).__init__(**kwargs)
        self.config = config

    def verify_config(self):
        apt_pkg.init()
        for cfg in self.config:
            value = apt_pkg.config.get(cfg['key'], cfg.get('default', ''))
            if value and value != cfg['expected']:
                log("APT config '%s' has unexpected value '%s' "
                    "(expected='%s')" %
                    (cfg['key'], value, cfg['expected']), level=WARNING)

    def ensure_compliance(self):
        self.verify_config()


class RestrictedPackages(BaseAudit):
    """Class used to audit restricted packages on the system."""

    def __init__(self, pkgs, **kwargs):
        super(RestrictedPackages, self).__init__(**kwargs)
        if isinstance(pkgs, string_types) or not hasattr(pkgs, '__iter__'):
            self.pkgs = [pkgs]
        else:
            self.pkgs = pkgs

    def ensure_compliance(self):
        cache = apt_cache()

        for p in self.pkgs:
            if p not in cache:
                continue

            pkg = cache[p]
            if not self.is_virtual_package(pkg):
                if not pkg.current_ver:
                    log("Package '%s' is not installed." % pkg.name,
                        level=DEBUG)
                    continue
                else:
                    log("Restricted package '%s' is installed" % pkg.name,
                        level=WARNING)
                    self.delete_package(cache, pkg)
            else:
                log("Checking restricted virtual package '%s' provides" %
                    pkg.name, level=DEBUG)
                self.delete_package(cache, pkg)

    def delete_package(self, cache, pkg):
        """Deletes the package from the system.

        Deletes the package from the system, properly handling virtual
        packages.

        :param cache: the apt cache
        :param pkg: the package to remove
        """
        if self.is_virtual_package(pkg):
            log("Package '%s' appears to be virtual - purging provides" %
                pkg.name, level=DEBUG)
            for _p in pkg.provides_list:
                self.delete_package(cache, _p[2].parent_pkg)
        elif not pkg.current_ver:
            log("Package '%s' not installed" % pkg.name, level=DEBUG)
            return
        else:
            log("Purging package '%s'" % pkg.name, level=DEBUG)
            apt_purge(pkg.name)

    def is_virtual_package(self, pkg):
        return pkg.has_provides and not pkg.has_versions
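The virtual-package predicate at the end can be demonstrated without python-apt. A sketch using a stand-in object (`Pkg` is illustrative; real apt_pkg records expose the same `has_provides` / `has_versions` attributes):

```python
class Pkg:
    """Stand-in for an apt_pkg package record (illustrative attributes only)."""
    def __init__(self, has_provides, has_versions):
        self.has_provides = has_provides
        self.has_versions = has_versions

def is_virtual_package(pkg):
    # A package is "virtual" when it only provides other packages and
    # ships no installable versions of its own.
    return pkg.has_provides and not pkg.has_versions

print(is_virtual_package(Pkg(True, False)))   # → True
print(is_virtual_package(Pkg(True, True)))    # → False
print(is_virtual_package(Pkg(False, False)))  # → False
```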
@ -1,550 +0,0 @@
# Copyright 2016 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import grp
import os
import pwd
import re

from subprocess import (
    CalledProcessError,
    check_output,
    check_call,
)
from traceback import format_exc
from six import string_types
from stat import (
    S_ISGID,
    S_ISUID
)

from charmhelpers.core.hookenv import (
    log,
    DEBUG,
    INFO,
    WARNING,
    ERROR,
)
from charmhelpers.core import unitdata
from charmhelpers.core.host import file_hash
from charmhelpers.contrib.hardening.audits import BaseAudit
from charmhelpers.contrib.hardening.templating import (
    get_template_path,
    render_and_write,
)
from charmhelpers.contrib.hardening import utils


class BaseFileAudit(BaseAudit):
    """Base class for file audits.

    Provides api stubs for the compliance check flow that must be implemented
    by any class that inherits from this one.
    """

    def __init__(self, paths, always_comply=False, *args, **kwargs):
        """
        :param paths: string path or list of paths of files to which we want
                      to apply compliance criteria.
        :param always_comply: if true compliance criteria is always applied
                              else compliance is skipped for non-existent
                              paths.
        """
        super(BaseFileAudit, self).__init__(*args, **kwargs)
        self.always_comply = always_comply
        if isinstance(paths, string_types) or not hasattr(paths, '__iter__'):
            self.paths = [paths]
        else:
            self.paths = paths

    def ensure_compliance(self):
        """Ensure that all registered files comply with registered criteria.
        """
        for p in self.paths:
            if os.path.exists(p):
                if self.is_compliant(p):
                    continue

                log('File %s is not in compliance.' % p, level=INFO)
            else:
                if not self.always_comply:
                    log("Non-existent path '%s' - skipping compliance check"
                        % (p), level=INFO)
                    continue

            if self._take_action():
                log("Applying compliance criteria to '%s'" % (p), level=INFO)
                self.comply(p)

    def is_compliant(self, path):
        """Audits the path to see if it is in compliance.

        :param path: the path to the file that should be checked.
        """
        raise NotImplementedError

    def comply(self, path):
        """Enforces the compliance of a path.

        :param path: the path to the file that should be enforced.
        """
        raise NotImplementedError

    @classmethod
    def _get_stat(cls, path):
        """Returns the POSIX stat information for the specified file path.

        :param path: the path to get the stat information for.
        :returns: a stat result object for the path; raises OSError if the
                  path doesn't exist.
        """
        return os.stat(path)
class FilePermissionAudit(BaseFileAudit):
    """Implements an audit for file permissions and ownership for a user.

    This class implements functionality that ensures that a specific
    user/group will own the file(s) specified and that the permissions
    specified are applied properly to the file.
    """
    def __init__(self, paths, user, group=None, mode=0o600, **kwargs):
        self.user = user
        self.group = group
        self.mode = mode
        # Only paths and kwargs are forwarded; passing user/group/mode
        # positionally would clobber BaseFileAudit's always_comply parameter.
        super(FilePermissionAudit, self).__init__(paths, **kwargs)

    @property
    def user(self):
        return self._user

    @user.setter
    def user(self, name):
        try:
            user = pwd.getpwnam(name)
        except KeyError:
            log('Unknown user %s' % name, level=ERROR)
            user = None
        self._user = user

    @property
    def group(self):
        return self._group

    @group.setter
    def group(self, name):
        try:
            group = None
            if name:
                group = grp.getgrnam(name)
            else:
                group = grp.getgrgid(self.user.pw_gid)
        except KeyError:
            log('Unknown group %s' % name, level=ERROR)
        self._group = group

    def is_compliant(self, path):
        """Checks if the path is in compliance.

        Used to determine if the path specified meets the necessary
        requirements to be in compliance with the check itself.

        :param path: the file path to check
        :returns: True if the path is compliant, False otherwise.
        """
        stat = self._get_stat(path)
        user = self.user
        group = self.group

        compliant = True
        if stat.st_uid != user.pw_uid or stat.st_gid != group.gr_gid:
            log('File %s is not owned by %s:%s.' % (path, user.pw_name,
                                                    group.gr_name),
                level=INFO)
            compliant = False

        # POSIX refers to the st_mode bits as corresponding to both the
        # file type and file permission bits, where the least significant 12
        # bits (o7777) are the suid (11), sgid (10), sticky bits (9), and the
        # file permission bits (8-0)
        perms = stat.st_mode & 0o7777
        if perms != self.mode:
            log('File %s has incorrect permissions, currently set to %s' %
                (path, oct(stat.st_mode & 0o7777)), level=INFO)
            compliant = False

        return compliant

    def comply(self, path):
        """Issues a chown and chmod to the file paths specified."""
        utils.ensure_permissions(path, self.user.pw_name, self.group.gr_name,
                                 self.mode)
class DirectoryPermissionAudit(FilePermissionAudit):
    """Performs a permission check for the specified directory path."""

    def __init__(self, paths, user, group=None, mode=0o600,
                 recursive=True, **kwargs):
        super(DirectoryPermissionAudit, self).__init__(paths, user, group,
                                                       mode, **kwargs)
        self.recursive = recursive

    def is_compliant(self, path):
        """Checks if the directory is compliant.

        Used to determine if the path specified and all of its children
        directories are in compliance with the check itself.

        :param path: the directory path to check
        :returns: True if the directory tree is compliant, otherwise False.
        """
        if not os.path.isdir(path):
            log('Path specified %s is not a directory.' % path, level=ERROR)
            raise ValueError("%s is not a directory." % path)

        if not self.recursive:
            return super(DirectoryPermissionAudit, self).is_compliant(path)

        compliant = True
        for root, dirs, _ in os.walk(path):
            if len(dirs) > 0:
                continue

            if not super(DirectoryPermissionAudit, self).is_compliant(root):
                compliant = False
                continue

        return compliant

    def comply(self, path):
        for root, dirs, _ in os.walk(path):
            if len(dirs) > 0:
                super(DirectoryPermissionAudit, self).comply(root)
class ReadOnly(BaseFileAudit):
    """Audits that files and folders are read only."""
    def __init__(self, paths, *args, **kwargs):
        super(ReadOnly, self).__init__(paths=paths, *args, **kwargs)

    def is_compliant(self, path):
        try:
            output = check_output(['find', path, '-perm', '-go+w',
                                   '-type', 'f']).strip()

            # The find above will find any files which have permission sets
            # which allow too broad of write access. As such, the path is
            # compliant if there is no output.
            if output:
                return False

            return True
        except CalledProcessError as e:
            log('Error occurred finding writable files for %s. '
                'Error information is: command %s failed with returncode '
                '%d and output %s.\n%s' % (path, e.cmd, e.returncode, e.output,
                                           format_exc(e)), level=ERROR)
            return False

    def comply(self, path):
        try:
            check_output(['chmod', 'go-w', '-R', path])
        except CalledProcessError as e:
            log('Error occurred removing writeable permissions for %s. '
                'Error information is: command %s failed with returncode '
                '%d and output %s.\n%s' % (path, e.cmd, e.returncode, e.output,
                                           format_exc(e)), level=ERROR)
class NoReadWriteForOther(BaseFileAudit):
    """Ensures that the files found under the base path are not readable or
    writable by anyone other than the owner or the group.
    """
    def __init__(self, paths):
        super(NoReadWriteForOther, self).__init__(paths)

    def is_compliant(self, path):
        try:
            cmd = ['find', path, '-perm', '-o+r', '-type', 'f', '-o',
                   '-perm', '-o+w', '-type', 'f']
            output = check_output(cmd).strip()

            # The find above will find any files which have read or write
            # permissions for other, meaning there is too broad of access
            # to read/write the file. As such, the path is compliant if
            # there's no output.
            if output:
                return False

            return True
        except CalledProcessError as e:
            log('Error occurred while finding files which are readable or '
                'writable to the world in %s. '
                'Command output is: %s.' % (path, e.output), level=ERROR)
            return False

    def comply(self, path):
        try:
            check_output(['chmod', '-R', 'o-rw', path])
        except CalledProcessError as e:
            log('Error occurred attempting to change modes of files under '
                'path %s. Output of command is: %s' % (path, e.output),
                level=ERROR)
class NoSUIDSGIDAudit(BaseFileAudit):
    """Audits that specified files do not have SUID/SGID bits set."""
    def __init__(self, paths, *args, **kwargs):
        super(NoSUIDSGIDAudit, self).__init__(paths=paths, *args, **kwargs)

    def is_compliant(self, path):
        stat = self._get_stat(path)
        if (stat.st_mode & (S_ISGID | S_ISUID)) != 0:
            return False

        return True

    def comply(self, path):
        try:
            log('Removing suid/sgid from %s.' % path, level=DEBUG)
            check_output(['chmod', '-s', path])
        except CalledProcessError as e:
            log('Error occurred removing suid/sgid from %s. '
                'Error information is: command %s failed with returncode '
                '%d and output %s.\n%s' % (path, e.cmd, e.returncode, e.output,
                                           format_exc(e)), level=ERROR)
class TemplatedFile(BaseFileAudit):
    """The TemplatedFile audit checks the contents of a templated file.

    This audit renders a file from a template, sets the appropriate file
    permissions, then generates a hashsum with which to detect subsequent
    content changes.
    """
    def __init__(self, path, context, template_dir, mode, user='root',
                 group='root', service_actions=None, **kwargs):
        self.context = context
        self.user = user
        self.group = group
        self.mode = mode
        self.template_dir = template_dir
        self.service_actions = service_actions
        super(TemplatedFile, self).__init__(paths=path, always_comply=True,
                                            **kwargs)

    def is_compliant(self, path):
        """Determines if the templated file is compliant.

        A templated file is only compliant if it has not changed (as
        determined by its sha256 hashsum) AND its file permissions are set
        appropriately.

        :param path: the path to check compliance.
        """
        same_templates = self.templates_match(path)
        same_content = self.contents_match(path)
        same_permissions = self.permissions_match(path)

        if same_content and same_permissions and same_templates:
            return True

        return False

    def run_service_actions(self):
        """Run any actions on services requested."""
        if not self.service_actions:
            return

        for svc_action in self.service_actions:
            name = svc_action['service']
            actions = svc_action['actions']
            log("Running service '%s' actions '%s'" % (name, actions),
                level=DEBUG)
            for action in actions:
                cmd = ['service', name, action]
                try:
                    check_call(cmd)
                except CalledProcessError as exc:
                    log("Service name='%s' action='%s' failed - %s" %
                        (name, action, exc), level=WARNING)

    def comply(self, path):
        """Ensures the contents and the permissions of the file.

        :param path: the path to correct
        """
        dirname = os.path.dirname(path)
        if not os.path.exists(dirname):
            os.makedirs(dirname)

        self.pre_write()
        render_and_write(self.template_dir, path, self.context())
        utils.ensure_permissions(path, self.user, self.group, self.mode)
        self.run_service_actions()
        self.save_checksum(path)
        self.post_write()

    def pre_write(self):
        """Invoked prior to writing the template."""
        pass

    def post_write(self):
        """Invoked after writing the template."""
        pass

    def templates_match(self, path):
        """Determines if the template files are the same.

        Template file equality is determined by the hashsum of the template
        files themselves. If there is no stored hashsum, the content cannot
        be guaranteed to be the same, so treat the template as changed.
        Otherwise, return whether or not the hashsums are the same.

        :param path: the path to check
        :returns: boolean
        """
        template_path = get_template_path(self.template_dir, path)
        key = 'hardening:template:%s' % template_path
        template_checksum = file_hash(template_path)
        kv = unitdata.kv()
        stored_tmplt_checksum = kv.get(key)
        if not stored_tmplt_checksum:
            kv.set(key, template_checksum)
            kv.flush()
            log('Saved template checksum for %s.' % template_path,
                level=DEBUG)
            # Since we don't have a template checksum, then assume it doesn't
            # match and return that the template is different.
            return False
        elif stored_tmplt_checksum != template_checksum:
            kv.set(key, template_checksum)
            kv.flush()
            log('Updated template checksum for %s.' % template_path,
                level=DEBUG)
            return False

        # Here the template hasn't changed based upon the calculated
        # checksum of the template and what was previously stored.
        return True

    def contents_match(self, path):
        """Determines if the file content is the same.

        This is determined by comparing the hashsum of the file contents with
        the saved hashsum. If there is no saved hashsum, the content cannot
        be guaranteed to be the same, so treat it as changed. Otherwise,
        return True if the hashsums are the same, False if they are not.

        :param path: the file to check.
        """
        checksum = file_hash(path)

        kv = unitdata.kv()
        stored_checksum = kv.get('hardening:%s' % path)
        if not stored_checksum:
            # If the checksum hasn't been generated, return False to ensure
            # the file is written and the checksum stored.
            log('Checksum for %s has not been calculated.' % path, level=DEBUG)
            return False
        elif stored_checksum != checksum:
            log('Checksum mismatch for %s.' % path, level=DEBUG)
            return False

        return True

    def permissions_match(self, path):
        """Determines if the file owner and permissions match.

        :param path: the path to check.
        """
        audit = FilePermissionAudit(path, self.user, self.group, self.mode)
        return audit.is_compliant(path)

    def save_checksum(self, path):
        """Calculates and saves the checksum for the path specified.

        :param path: the path of the file to save the checksum for.
        """
        checksum = file_hash(path)
        kv = unitdata.kv()
        kv.set('hardening:%s' % path, checksum)
        kv.flush()
class DeletedFile(BaseFileAudit):
    """Audit to ensure that a file is deleted."""
    def __init__(self, paths):
        super(DeletedFile, self).__init__(paths)

    def is_compliant(self, path):
        return not os.path.exists(path)

    def comply(self, path):
        os.remove(path)
class FileContentAudit(BaseFileAudit):
    """Audit the contents of a file."""
    def __init__(self, paths, cases, **kwargs):
        # Cases we expect to pass
        self.pass_cases = cases.get('pass', [])
        # Cases we expect to fail
        self.fail_cases = cases.get('fail', [])
        super(FileContentAudit, self).__init__(paths, **kwargs)

    def is_compliant(self, path):
        """
        Given a set of content matching cases i.e. tuple(regex, bool) where
        the bool value denotes whether or not the regex is expected to match,
        check that all cases match as expected against the contents of the
        file. Cases can be expected to pass or fail.

        :param path: Path of file to check.
        :returns: Boolean value representing whether or not all cases are
                  found to be compliant.
        """
        log("Auditing contents of file '%s'" % (path), level=DEBUG)
        with open(path, 'r') as fd:
            contents = fd.read()

        matches = 0
        for pattern in self.pass_cases:
            key = re.compile(pattern, flags=re.MULTILINE)
            results = re.search(key, contents)
            if results:
                matches += 1
            else:
                log("Pattern '%s' was expected to pass but instead it failed"
                    % (pattern), level=WARNING)

        for pattern in self.fail_cases:
            key = re.compile(pattern, flags=re.MULTILINE)
            results = re.search(key, contents)
            if not results:
                matches += 1
            else:
                log("Pattern '%s' was expected to fail but instead it passed"
                    % (pattern), level=WARNING)

        total = len(self.pass_cases) + len(self.fail_cases)
        log("Checked %s cases and %s passed" % (total, matches), level=DEBUG)
        return matches == total

    def comply(self, *args, **kwargs):
        """NOOP since we just issue warnings. This is to avoid the
        NotImplementedError.
        """
        log("Not applying any compliance criteria, only checks.", level=INFO)
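The permission-bit arithmetic used throughout the file audits (masking `st_mode` with `0o7777` and testing the suid/sgid bits) can be demonstrated standalone. A minimal sketch against a temporary file:

```python
import os
import stat
import tempfile

# The audits compare the low twelve mode bits (suid, sgid, sticky and the
# three rwx triplets) against an expected mode, as FilePermissionAudit does.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o640)

st = os.stat(path)
perms = st.st_mode & 0o7777
has_suid_sgid = (st.st_mode & (stat.S_ISUID | stat.S_ISGID)) != 0
os.remove(path)

print(oct(perms))     # → 0o640
print(has_suid_sgid)  # → False
```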
@ -1,16 +0,0 @@
# NOTE: this file contains the default configuration for the 'apache' hardening
#       code. If you want to override any settings you must add them to a file
#       called hardening.yaml in the root directory of your charm using the
#       name 'apache' as the root key followed by any of the following with new
#       values.

common:
    apache_dir: '/etc/apache2'

hardening:
    traceenable: 'off'
    allowed_http_methods: "GET POST"
    modules_to_disable: [ cgi, cgid ]
    servertokens: 'Prod'
    honor_cipher_order: 'on'
    cipher_suite: 'ALL:+MEDIUM:+HIGH:!LOW:!MD5:!RC4:!eNULL:!aNULL:!3DES'
@ -1,12 +0,0 @@
# NOTE: this schema must contain all valid keys from its associated defaults
# file. It is used to validate user-provided overrides.
common:
    apache_dir:
    traceenable:

hardening:
    allowed_http_methods:
    modules_to_disable:
    servertokens:
    honor_cipher_order:
    cipher_suite:
@ -1,38 +0,0 @@
# NOTE: this file contains the default configuration for the 'mysql' hardening
# code. If you want to override any settings you must add them to a file
# called hardening.yaml in the root directory of your charm using the
# name 'mysql' as the root key followed by any of the following with new
# values.

hardening:
    mysql-conf: /etc/mysql/my.cnf
    hardening-conf: /etc/mysql/conf.d/hardening.cnf

security:
    # @see http://www.symantec.com/connect/articles/securing-mysql-step-step
    # @see http://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_chroot
    chroot: None

    # @see http://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_safe-user-create
    safe-user-create: 1

    # @see http://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_secure-auth
    secure-auth: 1

    # @see http://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_symbolic-links
    skip-symbolic-links: 1

    # @see http://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_skip-show-database
    skip-show-database: True

    # @see http://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_local_infile
    local-infile: 0

    # @see https://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_allow-suspicious-udfs
    allow-suspicious-udfs: 0

    # @see https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_automatic_sp_privileges
    automatic-sp-privileges: 0

    # @see https://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_secure-file-priv
    secure-file-priv: /tmp
@ -1,15 +0,0 @@
# NOTE: this schema must contain all valid keys from its associated defaults
# file. It is used to validate user-provided overrides.
hardening:
    mysql-conf:
    hardening-conf:
security:
    chroot:
    safe-user-create:
    secure-auth:
    skip-symbolic-links:
    skip-show-database:
    local-infile:
    allow-suspicious-udfs:
    automatic-sp-privileges:
    secure-file-priv:
@ -1,68 +0,0 @@
# NOTE: this file contains the default configuration for the 'os' hardening
# code. If you want to override any settings you must add them to a file
# called hardening.yaml in the root directory of your charm using the
# name 'os' as the root key followed by any of the following with new
# values.

general:
    desktop_enable: False # (type:boolean)

environment:
    extra_user_paths: []
    umask: 027
    root_path: /

auth:
    pw_max_age: 60
    # discourage password cycling
    pw_min_age: 7
    retries: 5
    lockout_time: 600
    timeout: 60
    allow_homeless: False # (type:boolean)
    pam_passwdqc_enable: True # (type:boolean)
    pam_passwdqc_options: 'min=disabled,disabled,16,12,8'
    root_ttys:
        console
        tty1
        tty2
        tty3
        tty4
        tty5
        tty6
    uid_min: 1000
    gid_min: 1000
    sys_uid_min: 100
    sys_uid_max: 999
    sys_gid_min: 100
    sys_gid_max: 999
    chfn_restrict:

security:
    users_allow: []
    suid_sgid_enforce: True # (type:boolean)
    # user-defined blacklist and whitelist
    suid_sgid_blacklist: []
    suid_sgid_whitelist: []
    # if this is True, remove any suid/sgid bits from files that were not in the whitelist
    suid_sgid_dry_run_on_unknown: False # (type:boolean)
    suid_sgid_remove_from_unknown: False # (type:boolean)
    # remove packages with known issues
    packages_clean: True # (type:boolean)
    packages_list:
        xinetd
        inetd
        ypserv
        telnet-server
        rsh-server
        rsync
    kernel_enable_module_loading: True # (type:boolean)
    kernel_enable_core_dump: False # (type:boolean)
    ssh_tmout: 300

sysctl:
    kernel_secure_sysrq: 244 # 4 + 16 + 32 + 64 + 128
    kernel_enable_sysrq: False # (type:boolean)
    forwarding: False # (type:boolean)
    ipv6_enable: False # (type:boolean)
    arp_restricted: True # (type:boolean)
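The `kernel_secure_sysrq: 244` default above is a bitmask whose components the inline comment spells out as `4 + 16 + 32 + 64 + 128`. A quick check of the arithmetic (the per-bit meanings are an assumption drawn from the kernel's documented SysRq semantics, not from this file):

```python
# SysRq bitmask bits enabled by the 244 default (kernel sysrq semantics):
#   4   - keyboard controls (SAK, unraw)
#   16  - sync command
#   32  - remount read-only
#   64  - signalling of processes
#   128 - reboot/poweroff
SYSRQ_BITS = (4, 16, 32, 64, 128)
print(sum(SYSRQ_BITS))  # → 244
```

Notably, bit 8 (debugging dumps) and bit 2 (console log level control) are left out of the default.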
@ -1,43 +0,0 @@
# NOTE: this schema must contain all valid keys from its associated defaults
# file. It is used to validate user-provided overrides.
general:
    desktop_enable:
environment:
    extra_user_paths:
    umask:
    root_path:
auth:
    pw_max_age:
    pw_min_age:
    retries:
    lockout_time:
    timeout:
    allow_homeless:
    pam_passwdqc_enable:
    pam_passwdqc_options:
    root_ttys:
    uid_min:
    gid_min:
    sys_uid_min:
    sys_uid_max:
    sys_gid_min:
    sys_gid_max:
    chfn_restrict:
security:
    users_allow:
    suid_sgid_enforce:
    suid_sgid_blacklist:
    suid_sgid_whitelist:
    suid_sgid_dry_run_on_unknown:
    suid_sgid_remove_from_unknown:
    packages_clean:
    packages_list:
    kernel_enable_module_loading:
    kernel_enable_core_dump:
    ssh_tmout:
sysctl:
    kernel_secure_sysrq:
    kernel_enable_sysrq:
    forwarding:
    ipv6_enable:
    arp_restricted:
@ -1,49 +0,0 @@
# NOTE: this file contains the default configuration for the 'ssh' hardening
# code. If you want to override any settings you must add them to a file
# called hardening.yaml in the root directory of your charm using the
# name 'ssh' as the root key followed by any of the following with new
# values.

common:
    service_name: 'ssh'
    network_ipv6_enable: False # (type:boolean)
    ports: [22]
    remote_hosts: []

client:
    package: 'openssh-client'
    cbc_required: False # (type:boolean)
    weak_hmac: False # (type:boolean)
    weak_kex: False # (type:boolean)
    roaming: False
    password_authentication: 'no'

server:
    host_key_files: ['/etc/ssh/ssh_host_rsa_key', '/etc/ssh/ssh_host_dsa_key',
                     '/etc/ssh/ssh_host_ecdsa_key']
    cbc_required: False # (type:boolean)
    weak_hmac: False # (type:boolean)
    weak_kex: False # (type:boolean)
    allow_root_with_key: False # (type:boolean)
    allow_tcp_forwarding: 'no'
    allow_agent_forwarding: 'no'
    allow_x11_forwarding: 'no'
    use_privilege_separation: 'sandbox'
    listen_to: ['0.0.0.0']
    use_pam: 'no'
    package: 'openssh-server'
    password_authentication: 'no'
    alive_interval: '600'
    alive_count: '3'
    sftp_enable: False # (type:boolean)
    sftp_group: 'sftponly'
    sftp_chroot: '/home/%u'
    deny_users: []
    allow_users: []
    deny_groups: []
    allow_groups: []
    print_motd: 'no'
    print_last_log: 'no'
    use_dns: 'no'
    max_auth_tries: 2
    max_sessions: 10
@ -1,42 +0,0 @@
# NOTE: this schema must contain all valid keys from its associated defaults
# file. It is used to validate user-provided overrides.
common:
    service_name:
    network_ipv6_enable:
    ports:
    remote_hosts:
client:
    package:
    cbc_required:
    weak_hmac:
    weak_kex:
    roaming:
    password_authentication:
server:
    host_key_files:
    cbc_required:
    weak_hmac:
    weak_kex:
    allow_root_with_key:
    allow_tcp_forwarding:
    allow_agent_forwarding:
    allow_x11_forwarding:
    use_privilege_separation:
    listen_to:
    use_pam:
    package:
    password_authentication:
    alive_interval:
    alive_count:
    sftp_enable:
    sftp_group:
    sftp_chroot:
    deny_users:
    allow_users:
    deny_groups:
    allow_groups:
    print_motd:
    print_last_log:
    use_dns:
    max_auth_tries:
    max_sessions:
@ -1,96 +0,0 @@
# Copyright 2016 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import six

from collections import OrderedDict

from charmhelpers.core.hookenv import (
    config,
    log,
    DEBUG,
    WARNING,
)
from charmhelpers.contrib.hardening.host.checks import run_os_checks
from charmhelpers.contrib.hardening.ssh.checks import run_ssh_checks
from charmhelpers.contrib.hardening.mysql.checks import run_mysql_checks
from charmhelpers.contrib.hardening.apache.checks import run_apache_checks

_DISABLE_HARDENING_FOR_UNIT_TEST = False


def harden(overrides=None):
    """Hardening decorator.

    This is the main entry point for running the hardening stack. In order to
    run modules of the stack you must add this decorator to charm hook(s) and
    ensure that your charm config.yaml contains the 'harden' option set to
    one or more of the supported modules. Setting these will cause the
    corresponding hardening code to be run when the hook fires.

    This decorator can and should be applied to more than one hook or function
    such that hardening modules are called multiple times. This is because
    subsequent calls will perform auditing checks that will report any changes
    to resources hardened by the first run (and possibly perform compliance
    actions as a result of any detected infractions).

    :param overrides: Optional list of stack modules used to override those
                      provided with 'harden' config.
    :returns: Returns value returned by decorated function once executed.
    """
    if overrides is None:
        overrides = []

    def _harden_inner1(f):
        # As this has to be py2.7 compat, we can't use nonlocal. Use a trick
        # to capture the dictionary that can then be updated.
        _logged = {'done': False}

        def _harden_inner2(*args, **kwargs):
            # knock out hardening via a config var; normally it won't get
            # disabled.
            if _DISABLE_HARDENING_FOR_UNIT_TEST:
                return f(*args, **kwargs)
            if not _logged['done']:
                log("Hardening function '%s'" % (f.__name__), level=DEBUG)
                _logged['done'] = True
            RUN_CATALOG = OrderedDict([('os', run_os_checks),
                                       ('ssh', run_ssh_checks),
                                       ('mysql', run_mysql_checks),
                                       ('apache', run_apache_checks)])

            enabled = overrides[:] or (config("harden") or "").split()
            if enabled:
                modules_to_run = []
                # modules will always be performed in the following order
                for module, func in six.iteritems(RUN_CATALOG):
                    if module in enabled:
                        enabled.remove(module)
                        modules_to_run.append(func)

                if enabled:
                    log("Unknown hardening modules '%s' - ignoring" %
                        (', '.join(enabled)), level=WARNING)

                for hardener in modules_to_run:
                    log("Executing hardening module '%s'" %
                        (hardener.__name__), level=DEBUG)
                    hardener()
            else:
                log("No hardening applied to '%s'" % (f.__name__), level=DEBUG)

            return f(*args, **kwargs)
        return _harden_inner2

    return _harden_inner1
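The shape of the decorator above (an outer factory, an inner wrapper, and module lookup deferred to call time so that config changes are honoured on every hook invocation) can be sketched stand-alone. Everything below is illustrative: a plain dict stands in for `hookenv.config()` and list-appending lambdas stand in for the hardening modules:

```python
from collections import OrderedDict

calls = []                        # records which hardening stand-ins ran
_config = {'harden': 'os ssh'}    # stand-in for hookenv.config()

CATALOG = OrderedDict([('os', lambda: calls.append('os')),
                       ('ssh', lambda: calls.append('ssh'))])


def harden(overrides=None):
    def _inner1(f):
        def _inner2(*args, **kwargs):
            # resolve enabled modules at call time, like the real decorator
            enabled = (overrides or [])[:] or (_config.get('harden') or '').split()
            for name, func in CATALOG.items():
                if name in enabled:
                    func()                    # run in catalog (fixed) order
            return f(*args, **kwargs)
        return _inner2
    return _inner1


@harden()
def config_changed():
    return 'hook done'


print(config_changed())  # → hook done   (calls is now ['os', 'ssh'])
```

Iterating the `OrderedDict` rather than the user-supplied list is what guarantees the "modules will always be performed in the following order" behaviour noted in the original.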
@ -1,17 +0,0 @@
# Copyright 2016 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from os import path

TEMPLATES_DIR = path.join(path.dirname(__file__), 'templates')
@ -1,48 +0,0 @@
# Copyright 2016 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from charmhelpers.core.hookenv import (
    log,
    DEBUG,
)
from charmhelpers.contrib.hardening.host.checks import (
    apt,
    limits,
    login,
    minimize_access,
    pam,
    profile,
    securetty,
    suid_sgid,
    sysctl
)


def run_os_checks():
    log("Starting OS hardening checks.", level=DEBUG)
    checks = apt.get_audits()
    checks.extend(limits.get_audits())
    checks.extend(login.get_audits())
    checks.extend(minimize_access.get_audits())
    checks.extend(pam.get_audits())
    checks.extend(profile.get_audits())
    checks.extend(securetty.get_audits())
    checks.extend(suid_sgid.get_audits())
    checks.extend(sysctl.get_audits())

    for check in checks:
        log("Running '%s' check" % (check.__class__.__name__), level=DEBUG)
        check.ensure_compliance()

    log("OS hardening checks complete.", level=DEBUG)
@ -1,37 +0,0 @@
# Copyright 2016 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from charmhelpers.contrib.hardening.utils import get_settings
from charmhelpers.contrib.hardening.audits.apt import (
    AptConfig,
    RestrictedPackages,
)


def get_audits():
    """Get OS hardening apt audits.

    :returns: list of audits
    """
    audits = [AptConfig([{'key': 'APT::Get::AllowUnauthenticated',
                          'expected': 'false'}])]

    settings = get_settings('os')
    clean_packages = settings['security']['packages_clean']
    if clean_packages:
        security_packages = settings['security']['packages_list']
        if security_packages:
            audits.append(RestrictedPackages(security_packages))

    return audits
@ -1,53 +0,0 @@
# Copyright 2016 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from charmhelpers.contrib.hardening.audits.file import (
    DirectoryPermissionAudit,
    TemplatedFile,
)
from charmhelpers.contrib.hardening.host import TEMPLATES_DIR
from charmhelpers.contrib.hardening import utils


def get_audits():
    """Get OS hardening security limits audits.

    :returns: list of audits
    """
    audits = []
    settings = utils.get_settings('os')

    # Ensure that the /etc/security/limits.d directory is only writable
    # by the root user, but others can execute and read.
    audits.append(DirectoryPermissionAudit('/etc/security/limits.d',
                                           user='root', group='root',
                                           mode=0o755))

    # If core dumps are not enabled, then don't allow core dumps to be
    # created as they may contain sensitive information.
    if not settings['security']['kernel_enable_core_dump']:
        audits.append(TemplatedFile('/etc/security/limits.d/10.hardcore.conf',
                                    SecurityLimitsContext(),
                                    template_dir=TEMPLATES_DIR,
                                    user='root', group='root', mode=0o0440))
    return audits


class SecurityLimitsContext(object):

    def __call__(self):
        settings = utils.get_settings('os')
        ctxt = {'disable_core_dump':
                not settings['security']['kernel_enable_core_dump']}
        return ctxt
@ -1,65 +0,0 @@
# Copyright 2016 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from six import string_types

from charmhelpers.contrib.hardening.audits.file import TemplatedFile
from charmhelpers.contrib.hardening.host import TEMPLATES_DIR
from charmhelpers.contrib.hardening import utils


def get_audits():
    """Get OS hardening login.defs audits.

    :returns: list of audits
    """
    audits = [TemplatedFile('/etc/login.defs', LoginContext(),
                            template_dir=TEMPLATES_DIR,
                            user='root', group='root', mode=0o0444)]
    return audits


class LoginContext(object):

    def __call__(self):
        settings = utils.get_settings('os')

        # Octal numbers in yaml end up being turned into decimal,
        # so check if the umask is entered as a string (e.g. '027')
        # or as an octal umask as we know it (e.g. 002). If it's not
        # a string assume it to be octal and turn it into an octal
        # string.
        umask = settings['environment']['umask']
        if not isinstance(umask, string_types):
            umask = '%s' % oct(umask)

        ctxt = {
            'additional_user_paths':
            settings['environment']['extra_user_paths'],
            'umask': umask,
            'pwd_max_age': settings['auth']['pw_max_age'],
            'pwd_min_age': settings['auth']['pw_min_age'],
            'uid_min': settings['auth']['uid_min'],
            'sys_uid_min': settings['auth']['sys_uid_min'],
            'sys_uid_max': settings['auth']['sys_uid_max'],
            'gid_min': settings['auth']['gid_min'],
            'sys_gid_min': settings['auth']['sys_gid_min'],
            'sys_gid_max': settings['auth']['sys_gid_max'],
            'login_retries': settings['auth']['retries'],
            'login_timeout': settings['auth']['timeout'],
            'chfn_restrict': settings['auth']['chfn_restrict'],
            'allow_login_without_home': settings['auth']['allow_homeless']
        }

        return ctxt
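The umask normalisation in `LoginContext` guards against YAML 1.1 parsers reading an unquoted leading-zero value such as `027` as an octal integer. A standalone sketch of the same check (using plain `str` instead of `six.string_types`, so this assumes Python 3):

```python
def normalize_umask(umask):
    """Return the umask as a string; ints are rendered in octal notation."""
    if not isinstance(umask, str):
        # value arrived through YAML as an int; render it back as octal
        umask = '%s' % oct(umask)
    return umask


print(normalize_umask('027'))   # → 027
print(normalize_umask(0o27))    # → 0o27
```

Quoting the value in the defaults file would sidestep the conversion entirely, which is why both forms have to be accepted here.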
@ -1,50 +0,0 @@
# Copyright 2016 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from charmhelpers.contrib.hardening.audits.file import (
    FilePermissionAudit,
    ReadOnly,
)
from charmhelpers.contrib.hardening import utils


def get_audits():
    """Get OS hardening access audits.

    :returns: list of audits
    """
    audits = []
    settings = utils.get_settings('os')

    # Remove write permissions from $PATH folders for all regular users.
    # This prevents changing system-wide commands from normal users.
    path_folders = {'/usr/local/sbin',
                    '/usr/local/bin',
                    '/usr/sbin',
                    '/usr/bin',
                    '/bin'}
    extra_user_paths = settings['environment']['extra_user_paths']
    path_folders.update(extra_user_paths)
    audits.append(ReadOnly(path_folders))

    # Only allow the root user to have access to the shadow file.
    audits.append(FilePermissionAudit('/etc/shadow', 'root', 'root', 0o0600))

    if 'change_user' not in settings['security']['users_allow']:
        # su should only be accessible to user and group root, unless it is
        # expressly defined to allow users to change to root via the
        # security_users_allow config option.
        audits.append(FilePermissionAudit('/bin/su', 'root', 'root', 0o750))

    return audits
@ -1,132 +0,0 @@
# Copyright 2016 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from subprocess import (
    check_output,
    CalledProcessError,
)

from charmhelpers.core.hookenv import (
    log,
    DEBUG,
    ERROR,
)
from charmhelpers.fetch import (
    apt_install,
    apt_purge,
    apt_update,
)
from charmhelpers.contrib.hardening.audits.file import (
    TemplatedFile,
    DeletedFile,
)
from charmhelpers.contrib.hardening import utils
from charmhelpers.contrib.hardening.host import TEMPLATES_DIR


def get_audits():
    """Get OS hardening PAM authentication audits.

    :returns: list of audits
    """
    audits = []

    settings = utils.get_settings('os')

    if settings['auth']['pam_passwdqc_enable']:
        audits.append(PasswdqcPAM('/etc/passwdqc.conf'))

    if settings['auth']['retries']:
        audits.append(Tally2PAM('/usr/share/pam-configs/tally2'))
    else:
        audits.append(DeletedFile('/usr/share/pam-configs/tally2'))

    return audits


class PasswdqcPAMContext(object):

    def __call__(self):
        ctxt = {}
        settings = utils.get_settings('os')

        ctxt['auth_pam_passwdqc_options'] = \
            settings['auth']['pam_passwdqc_options']

        return ctxt


class PasswdqcPAM(TemplatedFile):
    """The PAM Audit verifies the linux PAM settings."""
    def __init__(self, path):
        super(PasswdqcPAM, self).__init__(path=path,
                                          template_dir=TEMPLATES_DIR,
                                          context=PasswdqcPAMContext(),
                                          user='root',
                                          group='root',
                                          mode=0o0640)

    def pre_write(self):
        # Always remove?
        for pkg in ['libpam-ccreds', 'libpam-cracklib']:
            log("Purging package '%s'" % pkg, level=DEBUG)
            apt_purge(pkg)

        apt_update(fatal=True)
        for pkg in ['libpam-passwdqc']:
            log("Installing package '%s'" % pkg, level=DEBUG)
            apt_install(pkg)

    def post_write(self):
        """Updates the PAM configuration after the file has been written"""
        try:
            check_output(['pam-auth-update', '--package'])
        except CalledProcessError as e:
            log('Error calling pam-auth-update: %s' % e, level=ERROR)


class Tally2PAMContext(object):

    def __call__(self):
        ctxt = {}
        settings = utils.get_settings('os')

        ctxt['auth_lockout_time'] = settings['auth']['lockout_time']
        ctxt['auth_retries'] = settings['auth']['retries']

        return ctxt


class Tally2PAM(TemplatedFile):
    """The PAM Audit verifies the linux PAM settings."""
    def __init__(self, path):
        super(Tally2PAM, self).__init__(path=path,
                                        template_dir=TEMPLATES_DIR,
                                        context=Tally2PAMContext(),
                                        user='root',
                                        group='root',
                                        mode=0o0640)

    def pre_write(self):
        # Always remove?
        apt_purge('libpam-ccreds')
        apt_update(fatal=True)
        apt_install('libpam-modules')

    def post_write(self):
        """Updates the PAM configuration after the file has been written"""
        try:
            check_output(['pam-auth-update', '--package'])
        except CalledProcessError as e:
            log('Error calling pam-auth-update: %s' % e, level=ERROR)
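Both PAM classes end with the same `post_write` pattern: run `pam-auth-update --package` and downgrade any failure to a log entry rather than letting the hook crash. The pattern in isolation (using harmless POSIX commands for the demo, since `pam-auth-update` needs root and a PAM-configured system):

```python
from subprocess import check_output, CalledProcessError


def run_tolerant(cmd):
    """Run a command; report and swallow a non-zero exit instead of raising."""
    try:
        return check_output(cmd)
    except CalledProcessError as e:
        print('Error calling %s: %s' % (cmd[0], e))
        return None


ok = run_tolerant(['echo', 'ok'])   # succeeds, returns the captured output
bad = run_tolerant(['false'])       # non-zero exit is caught, returns None
```

Swallowing the error is a deliberate choice here: a PAM refresh failure is worth logging, but not worth aborting the whole charm hook over.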
@ -1,49 +0,0 @@
# Copyright 2016 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from charmhelpers.contrib.hardening.audits.file import TemplatedFile
from charmhelpers.contrib.hardening.host import TEMPLATES_DIR
from charmhelpers.contrib.hardening import utils


def get_audits():
    """Get OS hardening profile audits.

    :returns: list of audits
    """
    audits = []

    settings = utils.get_settings('os')
    # If core dumps are not enabled, then don't allow core dumps to be
    # created as they may contain sensitive information.
    if not settings['security']['kernel_enable_core_dump']:
        audits.append(TemplatedFile('/etc/profile.d/pinerolo_profile.sh',
                                    ProfileContext(),
                                    template_dir=TEMPLATES_DIR,
                                    mode=0o0755, user='root', group='root'))
    if settings['security']['ssh_tmout']:
        audits.append(TemplatedFile('/etc/profile.d/99-hardening.sh',
                                    ProfileContext(),
                                    template_dir=TEMPLATES_DIR,
                                    mode=0o0644, user='root', group='root'))
    return audits


class ProfileContext(object):

    def __call__(self):
        settings = utils.get_settings('os')
        ctxt = {'ssh_tmout':
                settings['security']['ssh_tmout']}
        return ctxt
@ -1,37 +0,0 @@
# Copyright 2016 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from charmhelpers.contrib.hardening.audits.file import TemplatedFile
from charmhelpers.contrib.hardening.host import TEMPLATES_DIR
from charmhelpers.contrib.hardening import utils


def get_audits():
    """Get OS hardening Secure TTY audits.

    :returns: list of audits
    """
    audits = []
    audits.append(TemplatedFile('/etc/securetty', SecureTTYContext(),
                                template_dir=TEMPLATES_DIR,
                                mode=0o0400, user='root', group='root'))
    return audits


class SecureTTYContext(object):

    def __call__(self):
        settings = utils.get_settings('os')
        ctxt = {'ttys': settings['auth']['root_ttys']}
        return ctxt
@ -1,129 +0,0 @@
# Copyright 2016 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import subprocess

from charmhelpers.core.hookenv import (
    log,
    INFO,
)
from charmhelpers.contrib.hardening.audits.file import NoSUIDSGIDAudit
from charmhelpers.contrib.hardening import utils


BLACKLIST = ['/usr/bin/rcp', '/usr/bin/rlogin', '/usr/bin/rsh',
             '/usr/libexec/openssh/ssh-keysign',
             '/usr/lib/openssh/ssh-keysign',
             '/sbin/netreport',
             '/usr/sbin/usernetctl',
             '/usr/sbin/userisdnctl',
             '/usr/sbin/pppd',
             '/usr/bin/lockfile',
             '/usr/bin/mail-lock',
             '/usr/bin/mail-unlock',
             '/usr/bin/mail-touchlock',
             '/usr/bin/dotlockfile',
             '/usr/bin/arping',
             '/usr/sbin/uuidd',
             '/usr/bin/mtr',
             '/usr/lib/evolution/camel-lock-helper-1.2',
             '/usr/lib/pt_chown',
             '/usr/lib/eject/dmcrypt-get-device',
             '/usr/lib/mc/cons.saver']

WHITELIST = ['/bin/mount', '/bin/ping', '/bin/su', '/bin/umount',
             '/sbin/pam_timestamp_check', '/sbin/unix_chkpwd', '/usr/bin/at',
             '/usr/bin/gpasswd', '/usr/bin/locate', '/usr/bin/newgrp',
             '/usr/bin/passwd', '/usr/bin/ssh-agent',
             '/usr/libexec/utempter/utempter', '/usr/sbin/lockdev',
             '/usr/sbin/sendmail.sendmail', '/usr/bin/expiry',
             '/bin/ping6', '/usr/bin/traceroute6.iputils',
             '/sbin/mount.nfs', '/sbin/umount.nfs',
             '/sbin/mount.nfs4', '/sbin/umount.nfs4',
             '/usr/bin/crontab',
             '/usr/bin/wall', '/usr/bin/write',
             '/usr/bin/screen',
             '/usr/bin/mlocate',
             '/usr/bin/chage', '/usr/bin/chfn', '/usr/bin/chsh',
             '/bin/fusermount',
             '/usr/bin/pkexec',
             '/usr/bin/sudo', '/usr/bin/sudoedit',
             '/usr/sbin/postdrop', '/usr/sbin/postqueue',
             '/usr/sbin/suexec',
             '/usr/lib/squid/ncsa_auth', '/usr/lib/squid/pam_auth',
             '/usr/kerberos/bin/ksu',
             '/usr/sbin/ccreds_validate',
             '/usr/bin/Xorg',
             '/usr/bin/X',
             '/usr/lib/dbus-1.0/dbus-daemon-launch-helper',
             '/usr/lib/vte/gnome-pty-helper',
             '/usr/lib/libvte9/gnome-pty-helper',
             '/usr/lib/libvte-2.90-9/gnome-pty-helper']


def get_audits():
    """Get OS hardening suid/sgid audits.

    :returns: list of audits
    """
    checks = []
    settings = utils.get_settings('os')
    if not settings['security']['suid_sgid_enforce']:
        log("Skipping suid/sgid hardening", level=INFO)
        return checks

    # Build the blacklist and whitelist of files for suid/sgid checks.
    # There are a total of 4 lists:
    #   1. the system blacklist
    #   2. the system whitelist
    #   3. the user blacklist
    #   4. the user whitelist
    #
    # The blacklist is the set of paths which should NOT have the suid/sgid
    # bit set and the whitelist is the set of paths which MAY have the
    # suid/sgid bit set. The user whitelist/blacklist effectively overrides
    # the system whitelist/blacklist.
    u_b = settings['security']['suid_sgid_blacklist']
    u_w = settings['security']['suid_sgid_whitelist']

    blacklist = set(BLACKLIST) - set(u_w + u_b)
    whitelist = set(WHITELIST) - set(u_b + u_w)

    checks.append(NoSUIDSGIDAudit(blacklist))

    dry_run = settings['security']['suid_sgid_dry_run_on_unknown']

    if settings['security']['suid_sgid_remove_from_unknown'] or dry_run:
        # If the policy is a dry run (i.e. complain only) or to remove
        # unknown suid/sgid bits, find all of the paths which have the
        # suid/sgid bit set and then remove the whitelisted paths.
        root_path = settings['environment']['root_path']
        unknown_paths = find_paths_with_suid_sgid(root_path) - set(whitelist)
        checks.append(NoSUIDSGIDAudit(unknown_paths, unless=dry_run))

    return checks


def find_paths_with_suid_sgid(root_path):
    """Finds all paths/files which have an suid/sgid bit enabled.

    Starting with the root_path, this will recursively find all paths which
    have an suid or sgid bit set.
    """
    cmd = ['find', root_path, '-perm', '-4000', '-o', '-perm', '-2000',
           '-type', 'f', '!', '-path', '/proc/*', '-print']

    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, _ = p.communicate()
    # communicate() returns bytes on Python 3; decode before splitting.
    return set(out.decode('utf-8').split('\n'))
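The four-list merge described in `get_audits` above is plain set arithmetic: user-supplied entries are subtracted from both system lists, so a user decision always wins over the shipped defaults. A small self-contained sketch (with abbreviated stand-in lists):

```python
# Abbreviated stand-ins for the module-level BLACKLIST/WHITELIST.
SYSTEM_BLACKLIST = {'/usr/bin/rcp', '/usr/bin/rlogin', '/usr/bin/rsh'}
SYSTEM_WHITELIST = {'/bin/mount', '/bin/ping', '/bin/su'}


def merge_lists(u_blacklist, u_whitelist):
    # Any path the user mentions (in either list) is removed from both
    # system lists; the user's own entries then take effect via the
    # audits built from them.
    blacklist = SYSTEM_BLACKLIST - set(u_whitelist + u_blacklist)
    whitelist = SYSTEM_WHITELIST - set(u_blacklist + u_whitelist)
    return blacklist, whitelist


# A user who whitelists /usr/bin/rsh removes it from the effective
# blacklist, so its suid/sgid bit is no longer stripped.
b, w = merge_lists([], ['/usr/bin/rsh'])
```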
@ -1,209 +0,0 @@
# Copyright 2016 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import platform
import re
import six
import subprocess

from charmhelpers.core.hookenv import (
    log,
    INFO,
    WARNING,
)
from charmhelpers.contrib.hardening import utils
from charmhelpers.contrib.hardening.audits.file import (
    FilePermissionAudit,
    TemplatedFile,
)
from charmhelpers.contrib.hardening.host import TEMPLATES_DIR


SYSCTL_DEFAULTS = """net.ipv4.ip_forward=%(net_ipv4_ip_forward)s
net.ipv6.conf.all.forwarding=%(net_ipv6_conf_all_forwarding)s
net.ipv4.conf.all.rp_filter=1
net.ipv4.conf.default.rp_filter=1
net.ipv4.icmp_echo_ignore_broadcasts=1
net.ipv4.icmp_ignore_bogus_error_responses=1
net.ipv4.icmp_ratelimit=100
net.ipv4.icmp_ratemask=88089
net.ipv6.conf.all.disable_ipv6=%(net_ipv6_conf_all_disable_ipv6)s
net.ipv4.tcp_timestamps=%(net_ipv4_tcp_timestamps)s
net.ipv4.conf.all.arp_ignore=%(net_ipv4_conf_all_arp_ignore)s
net.ipv4.conf.all.arp_announce=%(net_ipv4_conf_all_arp_announce)s
net.ipv4.tcp_rfc1337=1
net.ipv4.tcp_syncookies=1
net.ipv4.conf.all.shared_media=1
net.ipv4.conf.default.shared_media=1
net.ipv4.conf.all.accept_source_route=0
net.ipv4.conf.default.accept_source_route=0
net.ipv4.conf.all.accept_redirects=0
net.ipv4.conf.default.accept_redirects=0
net.ipv6.conf.all.accept_redirects=0
net.ipv6.conf.default.accept_redirects=0
net.ipv4.conf.all.secure_redirects=0
net.ipv4.conf.default.secure_redirects=0
net.ipv4.conf.all.send_redirects=0
net.ipv4.conf.default.send_redirects=0
net.ipv4.conf.all.log_martians=0
net.ipv6.conf.default.router_solicitations=0
net.ipv6.conf.default.accept_ra_rtr_pref=0
net.ipv6.conf.default.accept_ra_pinfo=0
net.ipv6.conf.default.accept_ra_defrtr=0
net.ipv6.conf.default.autoconf=0
net.ipv6.conf.default.dad_transmits=0
net.ipv6.conf.default.max_addresses=1
net.ipv6.conf.all.accept_ra=0
net.ipv6.conf.default.accept_ra=0
kernel.modules_disabled=%(kernel_modules_disabled)s
kernel.sysrq=%(kernel_sysrq)s
fs.suid_dumpable=%(fs_suid_dumpable)s
kernel.randomize_va_space=2
"""


def get_audits():
    """Get OS hardening sysctl audits.

    :returns: list of audits
    """
    audits = []
    settings = utils.get_settings('os')

    # Apply the sysctl settings which are configured to be applied.
    audits.append(SysctlConf())
    # Make sure that only root has access to the sysctl.conf file, and
    # that it is read-only.
    audits.append(FilePermissionAudit('/etc/sysctl.conf',
                                      user='root',
                                      group='root', mode=0o0440))
    # If module loading is not enabled, then ensure that the modules
    # file has the appropriate permissions and rebuild the initramfs.
    if not settings['security']['kernel_enable_module_loading']:
        audits.append(ModulesTemplate())

    return audits


class ModulesContext(object):

    def __call__(self):
        settings = utils.get_settings('os')
        with open('/proc/cpuinfo', 'r') as fd:
            cpuinfo = fd.readlines()

        vendor = None
        for line in cpuinfo:
            match = re.search(r"^vendor_id\s+:\s+(.+)", line)
            if match:
                vendor = match.group(1)

        if vendor == "GenuineIntel":
            vendor = "intel"
        elif vendor == "AuthenticAMD":
            vendor = "amd"

        ctxt = {'arch': platform.processor(),
                'cpuVendor': vendor,
                'desktop_enable': settings['general']['desktop_enable']}

        return ctxt


class ModulesTemplate(TemplatedFile):

    def __init__(self):
        super(ModulesTemplate, self).__init__('/etc/initramfs-tools/modules',
                                              ModulesContext(),
                                              template_dir=TEMPLATES_DIR,
                                              user='root', group='root',
                                              mode=0o0440)

    def post_write(self):
        subprocess.check_call(['update-initramfs', '-u'])


class SysCtlHardeningContext(object):
    def __call__(self):
        settings = utils.get_settings('os')
        ctxt = {'sysctl': {}}

        log("Applying sysctl settings", level=INFO)
        extras = {'net_ipv4_ip_forward': 0,
                  'net_ipv6_conf_all_forwarding': 0,
                  'net_ipv6_conf_all_disable_ipv6': 1,
                  'net_ipv4_tcp_timestamps': 0,
                  'net_ipv4_conf_all_arp_ignore': 0,
                  'net_ipv4_conf_all_arp_announce': 0,
                  'kernel_sysrq': 0,
                  'fs_suid_dumpable': 0,
                  'kernel_modules_disabled': 1}

        if settings['sysctl']['ipv6_enable']:
            extras['net_ipv6_conf_all_disable_ipv6'] = 0

        if settings['sysctl']['forwarding']:
            extras['net_ipv4_ip_forward'] = 1
            extras['net_ipv6_conf_all_forwarding'] = 1

        if settings['sysctl']['arp_restricted']:
            extras['net_ipv4_conf_all_arp_ignore'] = 1
            extras['net_ipv4_conf_all_arp_announce'] = 2

        if settings['security']['kernel_enable_module_loading']:
            extras['kernel_modules_disabled'] = 0

        if settings['sysctl']['kernel_enable_sysrq']:
            sysrq_val = settings['sysctl']['kernel_secure_sysrq']
            extras['kernel_sysrq'] = sysrq_val

        if settings['security']['kernel_enable_core_dump']:
            extras['fs_suid_dumpable'] = 1

        settings.update(extras)
        for d in (SYSCTL_DEFAULTS % settings).split():
            d = d.strip().partition('=')
            key = d[0].strip()
            path = os.path.join('/proc/sys', key.replace('.', '/'))
            if not os.path.exists(path):
                log("Skipping '%s' since '%s' does not exist" % (key, path),
                    level=WARNING)
                continue

            ctxt['sysctl'][key] = d[2] or None

        # Translate for python3
        return {'sysctl_settings':
                [(k, v) for k, v in six.iteritems(ctxt['sysctl'])]}


class SysctlConf(TemplatedFile):
    """An audit check for sysctl settings."""
    def __init__(self):
        self.conffile = '/etc/sysctl.d/99-juju-hardening.conf'
        super(SysctlConf, self).__init__(self.conffile,
                                         SysCtlHardeningContext(),
                                         template_dir=TEMPLATES_DIR,
                                         user='root', group='root',
                                         mode=0o0440)

    def post_write(self):
        try:
            subprocess.check_call(['sysctl', '-p', self.conffile])
        except subprocess.CalledProcessError as e:
            # NOTE: on some systems if sysctl cannot apply all settings it
            # will return non-zero as well.
            log("sysctl command returned an error (maybe some "
                "keys could not be set) - %s" % (e),
                level=WARNING)
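The core of `SysCtlHardeningContext` is a simple transformation: %-interpolate the `SYSCTL_DEFAULTS` string with the merged settings, split on whitespace, and partition each entry on `=`. A stripped-down sketch of just that parsing step (the `/proc/sys` existence check is omitted here, and the two-key defaults string is an abbreviation):

```python
# Abbreviated stand-in for SYSCTL_DEFAULTS with one interpolated key.
DEFAULTS = """net.ipv4.ip_forward=%(net_ipv4_ip_forward)s
kernel.randomize_va_space=2
"""


def build_sysctl(settings):
    # Interpolate the settings, then split each "key=value" entry.
    ctxt = {}
    for entry in (DEFAULTS % settings).split():
        key, _, value = entry.strip().partition('=')
        ctxt[key.strip()] = value or None
    return ctxt
```

The resulting mapping is what the `99-juju-hardening.conf` template iterates over to emit one `key=value` line per setting.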
@ -1,8 +0,0 @@
###############################################################################
# WARNING: This configuration file is maintained by Juju. Local changes may
# be overwritten.
###############################################################################
{% if disable_core_dump -%}
# Prevent core dumps for all users. These are usually only needed by
# developers and may contain sensitive information.
* hard core 0
{% endif %}
@ -1,5 +0,0 @@
TMOUT={{ tmout }}
readonly TMOUT
export TMOUT

readonly HISTFILE
@ -1,7 +0,0 @@
###############################################################################
# WARNING: This configuration file is maintained by Juju. Local changes may
# be overwritten.
###############################################################################
{% for key, value in sysctl_settings -%}
{{ key }}={{ value }}
{% endfor -%}
@ -1,349 +0,0 @@
###############################################################################
# WARNING: This configuration file is maintained by Juju. Local changes may
# be overwritten.
###############################################################################
#
# /etc/login.defs - Configuration control definitions for the login package.
#
# Three items must be defined: MAIL_DIR, ENV_SUPATH, and ENV_PATH.
# If unspecified, some arbitrary (and possibly incorrect) value will
# be assumed. All other items are optional - if not specified then
# the described action or option will be inhibited.
#
# Comment lines (lines beginning with "#") and blank lines are ignored.
#
# Modified for Linux. --marekm

# REQUIRED for useradd/userdel/usermod
# Directory where mailboxes reside, _or_ name of file, relative to the
# home directory. If you _do_ define MAIL_DIR and MAIL_FILE,
# MAIL_DIR takes precedence.
#
# Essentially:
#    - MAIL_DIR defines the location of users' mail spool files
#      (for mbox use) by appending the username to MAIL_DIR as defined
#      below.
#    - MAIL_FILE defines the location of the users' mail spool files as the
#      fully-qualified filename obtained by prepending the user home
#      directory before $MAIL_FILE
#
# NOTE: This is no longer used for setting up the user's MAIL environment
#       variable which is, starting from shadow 4.0.12-1 in Debian, entirely
#       the job of the pam_mail PAM module
#       See default PAM configuration files provided for
#       login, su, etc.
#
# This is a temporary situation: setting these variables will soon
# move to /etc/default/useradd and the variables will then be
# no longer supported
MAIL_DIR        /var/mail
#MAIL_FILE      .mail

#
# Enable logging and display of /var/log/faillog login failure info.
# This option conflicts with the pam_tally PAM module.
#
FAILLOG_ENAB        yes

#
# Enable display of unknown usernames when login failures are recorded.
#
# WARNING: Unknown usernames may become world readable.
# See #290803 and #298773 for details about how this could become a security
# concern
LOG_UNKFAIL_ENAB    no

#
# Enable logging of successful logins
#
LOG_OK_LOGINS       yes

#
# Enable "syslog" logging of su activity - in addition to sulog file logging.
# SYSLOG_SG_ENAB does the same for newgrp and sg.
#
SYSLOG_SU_ENAB      yes
SYSLOG_SG_ENAB      yes

#
# If defined, all su activity is logged to this file.
#
#SULOG_FILE         /var/log/sulog

#
# If defined, file which maps tty line to TERM environment parameter.
# Each line of the file is in a format something like "vt100 tty01".
#
#TTYTYPE_FILE       /etc/ttytype

#
# If defined, login failures will be logged here in a utmp format.
# last, when invoked as lastb, will read /var/log/btmp, so...
#
FTMP_FILE           /var/log/btmp

#
# If defined, the command name to display when running "su -". For
# example, if this is defined as "su" then a "ps" will display the
# command as "-su". If not defined, then "ps" would display the
# name of the shell actually being run, e.g. something like "-sh".
#
SU_NAME             su

#
# If defined, file which inhibits all the usual chatter during the login
# sequence. If a full pathname, then hushed mode will be enabled if the
# user's name or shell are found in the file. If not a full pathname, then
# hushed mode will be enabled if the file exists in the user's home directory.
#
HUSHLOGIN_FILE      .hushlogin
#HUSHLOGIN_FILE     /etc/hushlogins

#
# *REQUIRED*  The default PATH settings, for superuser and normal users.
#
# (they are minimal, add the rest in the shell startup files)
ENV_SUPATH  PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ENV_PATH    PATH=/usr/local/bin:/usr/bin:/bin{% if additional_user_paths %}{{ additional_user_paths }}{% endif %}

#
# Terminal permissions
#
#    TTYGROUP    Login tty will be assigned this group ownership.
#    TTYPERM     Login tty will be set to this permission.
#
# If you have a "write" program which is "setgid" to a special group
# which owns the terminals, define TTYGROUP to the group number and
# TTYPERM to 0620. Otherwise leave TTYGROUP commented out and assign
# TTYPERM to either 622 or 600.
#
# In Debian /usr/bin/bsd-write or similar programs are setgid tty.
# However, the default and recommended value for TTYPERM is still 0600
# so that nobody can write to anyone else's console or terminal.

# Users can still allow other people to write to them by issuing
# the "mesg y" command.

TTYGROUP    tty
TTYPERM     0600

#
# Login configuration initializations:
#
#    ERASECHAR    Terminal ERASE character ('\010' = backspace).
#    KILLCHAR     Terminal KILL character ('\025' = CTRL/U).
#    UMASK        Default "umask" value.
#
# The ERASECHAR and KILLCHAR are used only on System V machines.
#
# UMASK is the default umask value for pam_umask and is used by
# useradd and newusers to set the mode of the new home directories.
# 022 is the "historical" value in Debian for UMASK.
# 027, or even 077, could be considered better for privacy.
# There is no One True Answer here: each sysadmin must make up his/her
# mind.
#
# If USERGROUPS_ENAB is set to "yes", that will modify this UMASK default value
# for private user groups, i.e. the uid is the same as gid, and username is
# the same as the primary group name: for these, the user permissions will be
# used as group permissions, e.g. 022 will become 002.
#
# Prefix these values with "0" to get octal, "0x" to get hexadecimal.
#
ERASECHAR   0177
KILLCHAR    025
UMASK       {{ umask }}

# Enable setting of the umask group bits to be the same as owner bits
# (examples: 022 -> 002, 077 -> 007) for non-root users, if the uid is the
# same as gid, and username is the same as the primary group name.
# If set to yes, userdel will remove the user's group if it contains no more
# members, and useradd will create by default a group with the name of the
# user.
USERGROUPS_ENAB yes

#
# Password aging controls:
#
#    PASS_MAX_DAYS    Maximum number of days a password may be used.
#    PASS_MIN_DAYS    Minimum number of days allowed between password changes.
#    PASS_WARN_AGE    Number of days warning given before a password expires.
#
PASS_MAX_DAYS   {{ pwd_max_age }}
PASS_MIN_DAYS   {{ pwd_min_age }}
PASS_WARN_AGE   7

#
# Min/max values for automatic uid selection in useradd
#
UID_MIN         {{ uid_min }}
UID_MAX         60000
# System accounts
SYS_UID_MIN     {{ sys_uid_min }}
SYS_UID_MAX     {{ sys_uid_max }}

# Min/max values for automatic gid selection in groupadd
GID_MIN         {{ gid_min }}
GID_MAX         60000
# System accounts
SYS_GID_MIN     {{ sys_gid_min }}
SYS_GID_MAX     {{ sys_gid_max }}

#
# Max number of login retries if password is bad. This will most likely be
# overridden by PAM, since the default pam_unix module has its own built-in
# limit of 3 retries. However, this is a safe fallback in case you are using
# an authentication module that does not enforce PAM_MAXTRIES.
#
LOGIN_RETRIES   {{ login_retries }}

#
# Max time in seconds for login
#
LOGIN_TIMEOUT   {{ login_timeout }}

#
# Which fields may be changed by regular users using chfn - use
# any combination of letters "frwh" (full name, room number, work
# phone, home phone). If not defined, no changes are allowed.
# For backward compatibility, "yes" = "rwh" and "no" = "frwh".
#
{% if chfn_restrict %}
CHFN_RESTRICT   {{ chfn_restrict }}
{% endif %}

#
# Should login be allowed if we can't cd to the home directory?
# Default is no.
#
DEFAULT_HOME    {% if allow_login_without_home %} yes {% else %} no {% endif %}

#
# If defined, this command is run when removing a user.
# It should remove any at/cron/print jobs etc. owned by
# the user to be removed (passed as the first argument).
#
#USERDEL_CMD    /usr/sbin/userdel_local

#
# Enable setting of the umask group bits to be the same as owner bits
# (examples: 022 -> 002, 077 -> 007) for non-root users, if the uid is
# the same as gid, and username is the same as the primary group name.
#
# If set to yes, userdel will remove the user's group if it contains no
# more members, and useradd will create by default a group with the name
# of the user.
#
USERGROUPS_ENAB yes

#
# Instead of the real user shell, the program specified by this parameter
# will be launched, although its visible name (argv[0]) will be the shell's.
# The program may do whatever it wants (logging, additional authentication,
# banner, ...) before running the actual shell.
#
# FAKE_SHELL    /bin/fakeshell

#
# If defined, either full pathname of a file containing device names or
# a ":" delimited list of device names. Root logins will be allowed only
# upon these devices.
#
# This variable is used by login and su.
#
#CONSOLE        /etc/consoles
#CONSOLE        console:tty01:tty02:tty03:tty04

#
# List of groups to add to the user's supplementary group set
# when logging in on the console (as determined by the CONSOLE
# setting). Default is none.
#
# Use with caution - it is possible for users to gain permanent
# access to these groups, even when not logged in on the console.
# How to do it is left as an exercise for the reader...
#
# This variable is used by login and su.
#
#CONSOLE_GROUPS     floppy:audio:cdrom

#
# If set to "yes", new passwords will be encrypted using the MD5-based
# algorithm compatible with the one used by recent releases of FreeBSD.
# It supports passwords of unlimited length and longer salt strings.
# Set to "no" if you need to copy encrypted passwords to other systems
# which don't understand the new algorithm. Default is "no".
#
# This variable is deprecated. You should use ENCRYPT_METHOD.
#
MD5_CRYPT_ENAB  no

#
# If set to MD5, the MD5-based algorithm will be used for encrypting passwords
# If set to SHA256, the SHA256-based algorithm will be used
# If set to SHA512, the SHA512-based algorithm will be used
# If set to DES, the DES-based algorithm will be used (default)
# Overrides the MD5_CRYPT_ENAB option
#
# Note: It is recommended to use a value consistent with
# the PAM modules configuration.
#
ENCRYPT_METHOD  SHA512

#
# Only used if ENCRYPT_METHOD is set to SHA256 or SHA512.
#
# Define the number of SHA rounds.
# With a lot of rounds, it is more difficult to brute-force the password.
# But note also that more CPU resources will be needed to authenticate
# users.
#
# If not specified, the libc will choose the default number of rounds (5000).
# The values must be inside the 1000-999999999 range.
# If only one of the MIN or MAX values is set, then this value will be used.
# If MIN > MAX, the highest value will be used.
#
# SHA_CRYPT_MIN_ROUNDS 5000
# SHA_CRYPT_MAX_ROUNDS 5000

################# OBSOLETED BY PAM ################
#                                                 #
# These options are now handled by PAM. Please    #
# edit the appropriate file in /etc/pam.d/ to     #
# enable the equivalents of them.                 #
#                                                 #
###################################################

#MOTD_FILE
#DIALUPS_CHECK_ENAB
#LASTLOG_ENAB
#MAIL_CHECK_ENAB
#OBSCURE_CHECKS_ENAB
#PORTTIME_CHECKS_ENAB
#SU_WHEEL_ONLY
#CRACKLIB_DICTPATH
#PASS_CHANGE_TRIES
#PASS_ALWAYS_WARN
#ENVIRON_FILE
#NOLOGINS_FILE
#ISSUE_FILE
#PASS_MIN_LEN
#PASS_MAX_LEN
#ULIMIT
#ENV_HZ
#CHFN_AUTH
#CHSH_AUTH
#FAIL_DELAY

################# OBSOLETED #######################
#                                                 #
# These options are no longer handled by shadow.  #
#                                                 #
# Shadow utilities will display a warning if they #
# still appear.                                   #
#                                                 #
###################################################

# CLOSE_SESSIONS
# LOGIN_STRING
# NO_PASSWORD_CONSOLE
# QMAIL_DIR
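The `USERGROUPS_ENAB` behaviour described in the template (022 -> 002, 077 -> 007) copies the owner permission bits of the umask into the group position for private user groups. A small sketch of that bit manipulation, for illustration only:

```python
def usergroups_umask(umask):
    # For private user groups (uid == gid, username == group name),
    # make the group bits of the umask equal to the owner bits, so the
    # member's files are group-accessible exactly as they are to the
    # owner: 0o022 -> 0o002, 0o077 -> 0o007.
    owner = (umask >> 6) & 0o7          # extract the owner bits
    return (umask & ~0o070) | (owner << 3)  # overwrite the group bits
```

This is the effective default Debian behaviour when `USERGROUPS_ENAB` is `yes` and `UMASK` is 022: collaborators added to a user's private group get group write access without loosening permissions for "other".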
@ -1,117 +0,0 @@
###############################################################################
# WARNING: This configuration file is maintained by Juju. Local changes may
# be overwritten.
###############################################################################
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
# Parameters can be specified after the module name.

# Arch
# ----
#
# Modules for certain builds, containing support modules and some
# CPU-specific optimizations.

{% if arch == "x86_64" -%}
# Optimize for x86_64 cryptographic features
twofish-x86_64-3way
twofish-x86_64
aes-x86_64
salsa20-x86_64
blowfish-x86_64
{% endif -%}

{% if cpuVendor == "intel" -%}
# Intel-specific optimizations
ghash-clmulni-intel
aesni-intel
kvm-intel
{% endif -%}

{% if cpuVendor == "amd" -%}
# AMD-specific optimizations
kvm-amd
{% endif -%}

kvm


# Crypto
# ------

# Some core modules which comprise strong cryptography.
blowfish_common
blowfish_generic
ctr
cts
lrw
lzo
rmd160
rmd256
rmd320
serpent
sha512_generic
twofish_common
twofish_generic
xts
zlib


# Drivers
# -------

# Basics
lp
rtc
loop

# Filesystems
ext2
btrfs

{% if desktop_enable -%}
# Desktop
psmouse
snd
snd_ac97_codec
snd_intel8x0
snd_page_alloc
snd_pcm
snd_timer
soundcore
usbhid
{% endif -%}

# Lib
# ---
xz


# Net
# ---

# All modules needed for netfilter rules (i.e. iptables, ebtables).
ip_tables
x_tables
iptable_filter
iptable_nat

# Targets
ipt_LOG
ipt_REJECT

# Modules
xt_connlimit
xt_tcpudp
xt_recent
xt_limit
xt_conntrack
nf_conntrack
nf_conntrack_ipv4
nf_defrag_ipv4
xt_state
nf_nat

# Addons
xt_pknock
@ -1,11 +0,0 @@
###############################################################################
# WARNING: This configuration file is maintained by Juju. Local changes may
# be overwritten.
###############################################################################
Name: passwdqc password strength enforcement
Default: yes
Priority: 1024
Conflicts: cracklib
Password-Type: Primary
Password:
    requisite pam_passwdqc.so {{ auth_pam_passwdqc_options }}
@ -1,8 +0,0 @@
###############################################################################
# WARNING: This configuration file is maintained by Juju. Local changes may
# be overwritten.
###############################################################################
# Disable core dumps via soft limits for all users. Compliance to this setting
# is voluntary and can be modified by users up to a hard limit. This setting is
# a sane default.
ulimit -S -c 0 > /dev/null 2>&1
@ -1,11 +0,0 @@
###############################################################################
# WARNING: This configuration file is maintained by Juju. Local changes may
# be overwritten.
###############################################################################
# A list of TTYs from which root can log in;
# see `man securetty` for reference
{% if ttys -%}
{% for tty in ttys -%}
{{ tty }}
{% endfor -%}
{% endif -%}
@ -1,14 +0,0 @@
###############################################################################
# WARNING: This configuration file is maintained by Juju. Local changes may
# be overwritten.
###############################################################################
Name: tally2 lockout after failed attempts enforcement
Default: yes
Priority: 1024
Conflicts: cracklib
Auth-Type: Primary
Auth-Initial:
    required pam_tally2.so deny={{ auth_retries }} onerr=fail unlock_time={{ auth_lockout_time }}
Account-Type: Primary
Account-Initial:
    required pam_tally2.so
@ -1,17 +0,0 @@
# Copyright 2016 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from os import path

TEMPLATES_DIR = path.join(path.dirname(__file__), 'templates')
@ -1,29 +0,0 @@
# Copyright 2016 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from charmhelpers.core.hookenv import (
    log,
    DEBUG,
)
from charmhelpers.contrib.hardening.mysql.checks import config


def run_mysql_checks():
    log("Starting MySQL hardening checks.", level=DEBUG)
    checks = config.get_audits()
    for check in checks:
        log("Running '%s' check" % (check.__class__.__name__), level=DEBUG)
        check.ensure_compliance()

    log("MySQL hardening checks complete.", level=DEBUG)
@ -1,87 +0,0 @@
# Copyright 2016 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import six
import subprocess

from charmhelpers.core.hookenv import (
    log,
    WARNING,
)
from charmhelpers.contrib.hardening.audits.file import (
    FilePermissionAudit,
    DirectoryPermissionAudit,
    TemplatedFile,
)
from charmhelpers.contrib.hardening.mysql import TEMPLATES_DIR
from charmhelpers.contrib.hardening import utils


def get_audits():
    """Get MySQL hardening config audits.

    :returns: dictionary of audits
    """
    if subprocess.call(['which', 'mysql'], stdout=subprocess.PIPE) != 0:
        log("MySQL does not appear to be installed on this node - "
            "skipping mysql hardening", level=WARNING)
        return []

    settings = utils.get_settings('mysql')
    hardening_settings = settings['hardening']
    my_cnf = hardening_settings['mysql-conf']

    audits = [
        FilePermissionAudit(paths=[my_cnf], user='root',
                            group='root', mode=0o0600),

        TemplatedFile(hardening_settings['hardening-conf'],
                      MySQLConfContext(),
                      TEMPLATES_DIR,
                      mode=0o0750,
                      user='mysql',
                      group='root',
                      service_actions=[{'service': 'mysql',
                                        'actions': ['restart']}]),

        # MySQL and Percona charms do not allow configuration of the
        # data directory, so use the default.
        DirectoryPermissionAudit('/var/lib/mysql',
                                 user='mysql',
                                 group='mysql',
                                 recursive=False,
                                 mode=0o755),

        DirectoryPermissionAudit('/etc/mysql',
                                 user='root',
                                 group='root',
                                 recursive=False,
                                 mode=0o700),
    ]

    return audits


class MySQLConfContext(object):
    """Defines the set of key/value pairs to set in a mysql config file.

    This context, when called, will return a dictionary containing the
    key/value pairs of setting to specify in the
    /etc/mysql/conf.d/hardening.cnf file.
    """
    def __call__(self):
        settings = utils.get_settings('mysql')
        # Translate for python3
        return {'mysql_settings':
                [(k, v) for k, v in six.iteritems(settings['security'])]}
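The `mysql_settings` pairs produced by `MySQLConfContext` are consumed by the charm's `hardening.cnf` Jinja template, which emits a bare flag for a `'True'` value, drops `'None'`/`None` values, and writes `key = value` otherwise. A minimal pure-Python sketch of that rendering rule (the function name is illustrative, not part of charmhelpers):

```python
# Illustrative sketch mirroring the hardening.cnf template's if/elif logic;
# not an actual charmhelpers function.
def render_mysql_settings(mysql_settings):
    lines = []
    for setting, value in mysql_settings:
        if value == 'True':
            lines.append(setting)                       # boolean flag: bare key
        elif value != 'None' and value is not None:
            lines.append('%s = %s' % (setting, value))  # key = value pair
        # 'None' (or None) values are omitted entirely
    return lines

print(render_mysql_settings([('local-infile', '0'),
                             ('skip-symbolic-links', 'True'),
                             ('password', 'None')]))
```

Only the first two settings survive; the `'None'` entry is silently skipped, which is how operators disable a directive from the charm config.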
@ -1,12 +0,0 @@
###############################################################################
# WARNING: This configuration file is maintained by Juju. Local changes may
# be overwritten.
###############################################################################
[mysqld]
{% for setting, value in mysql_settings -%}
{% if value == 'True' -%}
{{ setting }}
{% elif value != 'None' and value != None -%}
{{ setting }} = {{ value }}
{% endif -%}
{% endfor -%}
@ -1,17 +0,0 @@
# Copyright 2016 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from os import path

TEMPLATES_DIR = path.join(path.dirname(__file__), 'templates')
@ -1,29 +0,0 @@
# Copyright 2016 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from charmhelpers.core.hookenv import (
    log,
    DEBUG,
)
from charmhelpers.contrib.hardening.ssh.checks import config


def run_ssh_checks():
    log("Starting SSH hardening checks.", level=DEBUG)
    checks = config.get_audits()
    for check in checks:
        log("Running '%s' check" % (check.__class__.__name__), level=DEBUG)
        check.ensure_compliance()

    log("SSH hardening checks complete.", level=DEBUG)
@ -1,435 +0,0 @@
# Copyright 2016 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os

from charmhelpers.contrib.network.ip import (
    get_address_in_network,
    get_iface_addr,
    is_ip,
)
from charmhelpers.core.hookenv import (
    log,
    DEBUG,
)
from charmhelpers.fetch import (
    apt_install,
    apt_update,
)
from charmhelpers.core.host import (
    lsb_release,
    CompareHostReleases,
)
from charmhelpers.contrib.hardening.audits.file import (
    TemplatedFile,
    FileContentAudit,
)
from charmhelpers.contrib.hardening.ssh import TEMPLATES_DIR
from charmhelpers.contrib.hardening import utils


def get_audits():
    """Get SSH hardening config audits.

    :returns: dictionary of audits
    """
    audits = [SSHConfig(), SSHDConfig(), SSHConfigFileContentAudit(),
              SSHDConfigFileContentAudit()]
    return audits


class SSHConfigContext(object):

    type = 'client'

    def get_macs(self, allow_weak_mac):
        if allow_weak_mac:
            weak_macs = 'weak'
        else:
            weak_macs = 'default'

        default = 'hmac-sha2-512,hmac-sha2-256,hmac-ripemd160'
        macs = {'default': default,
                'weak': default + ',hmac-sha1'}

        default = ('hmac-sha2-512-etm@openssh.com,'
                   'hmac-sha2-256-etm@openssh.com,'
                   'hmac-ripemd160-etm@openssh.com,umac-128-etm@openssh.com,'
                   'hmac-sha2-512,hmac-sha2-256,hmac-ripemd160')
        macs_66 = {'default': default,
                   'weak': default + ',hmac-sha1'}

        # Use newer macs on Ubuntu Trusty and above
        _release = lsb_release()['DISTRIB_CODENAME'].lower()
        if CompareHostReleases(_release) >= 'trusty':
            log("Detected Ubuntu 14.04 or newer, using new macs", level=DEBUG)
            macs = macs_66

        return macs[weak_macs]

    def get_kexs(self, allow_weak_kex):
        if allow_weak_kex:
            weak_kex = 'weak'
        else:
            weak_kex = 'default'

        default = 'diffie-hellman-group-exchange-sha256'
        weak = (default + ',diffie-hellman-group14-sha1,'
                'diffie-hellman-group-exchange-sha1,'
                'diffie-hellman-group1-sha1')
        kex = {'default': default,
               'weak': weak}

        default = ('curve25519-sha256@libssh.org,'
                   'diffie-hellman-group-exchange-sha256')
        weak = (default + ',diffie-hellman-group14-sha1,'
                'diffie-hellman-group-exchange-sha1,'
                'diffie-hellman-group1-sha1')
        kex_66 = {'default': default,
                  'weak': weak}

        # Use newer kex on Ubuntu Trusty and above
        _release = lsb_release()['DISTRIB_CODENAME'].lower()
        if CompareHostReleases(_release) >= 'trusty':
            log('Detected Ubuntu 14.04 or newer, using new key exchange '
                'algorithms', level=DEBUG)
            kex = kex_66

        return kex[weak_kex]

    def get_ciphers(self, cbc_required):
        if cbc_required:
            weak_ciphers = 'weak'
        else:
            weak_ciphers = 'default'

        default = 'aes256-ctr,aes192-ctr,aes128-ctr'
        cipher = {'default': default,
                  'weak': default + ',aes256-cbc,aes192-cbc,aes128-cbc'}

        default = ('chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,'
                   'aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr')
        ciphers_66 = {'default': default,
                      'weak': default + ',aes256-cbc,aes192-cbc,aes128-cbc'}

        # Use newer ciphers on Ubuntu Trusty and above
        _release = lsb_release()['DISTRIB_CODENAME'].lower()
        if CompareHostReleases(_release) >= 'trusty':
            log('Detected Ubuntu 14.04 or newer, using new ciphers',
                level=DEBUG)
            cipher = ciphers_66

        return cipher[weak_ciphers]

    def get_listening(self, listen=['0.0.0.0']):
        """Returns a list of addresses SSH can listen on

        Turns input into a sensible list of IPs SSH can listen on. Input
        must be a python list of interface names, IPs and/or CIDRs.

        :param listen: list of IPs, CIDRs, interface names

        :returns: list of IPs available on the host
        """
        if listen == ['0.0.0.0']:
            return listen

        value = []
        for network in listen:
            try:
                ip = get_address_in_network(network=network, fatal=True)
            except ValueError:
                if is_ip(network):
                    ip = network
                else:
                    try:
                        ip = get_iface_addr(iface=network, fatal=False)[0]
                    except IndexError:
                        continue
            value.append(ip)
        if value == []:
            return ['0.0.0.0']
        return value

    def __call__(self):
        settings = utils.get_settings('ssh')
        if settings['common']['network_ipv6_enable']:
            addr_family = 'any'
        else:
            addr_family = 'inet'

        ctxt = {
            'addr_family': addr_family,
            'remote_hosts': settings['common']['remote_hosts'],
            'password_auth_allowed':
            settings['client']['password_authentication'],
            'ports': settings['common']['ports'],
            'ciphers': self.get_ciphers(settings['client']['cbc_required']),
            'macs': self.get_macs(settings['client']['weak_hmac']),
            'kexs': self.get_kexs(settings['client']['weak_kex']),
            'roaming': settings['client']['roaming'],
        }
        return ctxt


class SSHConfig(TemplatedFile):
    def __init__(self):
        path = '/etc/ssh/ssh_config'
        super(SSHConfig, self).__init__(path=path,
                                        template_dir=TEMPLATES_DIR,
                                        context=SSHConfigContext(),
                                        user='root',
                                        group='root',
                                        mode=0o0644)

    def pre_write(self):
        settings = utils.get_settings('ssh')
        apt_update(fatal=True)
        apt_install(settings['client']['package'])
        if not os.path.exists('/etc/ssh'):
            os.makedirs('/etc/ssh')
            # NOTE: don't recurse
            utils.ensure_permissions('/etc/ssh', 'root', 'root', 0o0755,
                                     maxdepth=0)

    def post_write(self):
        # NOTE: don't recurse
        utils.ensure_permissions('/etc/ssh', 'root', 'root', 0o0755,
                                 maxdepth=0)


class SSHDConfigContext(SSHConfigContext):

    type = 'server'

    def __call__(self):
        settings = utils.get_settings('ssh')
        if settings['common']['network_ipv6_enable']:
            addr_family = 'any'
        else:
            addr_family = 'inet'

        ctxt = {
            'ssh_ip': self.get_listening(settings['server']['listen_to']),
            'password_auth_allowed':
            settings['server']['password_authentication'],
            'ports': settings['common']['ports'],
            'addr_family': addr_family,
            'ciphers': self.get_ciphers(settings['server']['cbc_required']),
            'macs': self.get_macs(settings['server']['weak_hmac']),
            'kexs': self.get_kexs(settings['server']['weak_kex']),
            'host_key_files': settings['server']['host_key_files'],
            'allow_root_with_key': settings['server']['allow_root_with_key'],
            'password_authentication':
            settings['server']['password_authentication'],
            'use_priv_sep': settings['server']['use_privilege_separation'],
            'use_pam': settings['server']['use_pam'],
            'allow_x11_forwarding': settings['server']['allow_x11_forwarding'],
            'print_motd': settings['server']['print_motd'],
            'print_last_log': settings['server']['print_last_log'],
            'client_alive_interval':
            settings['server']['alive_interval'],
            'client_alive_count': settings['server']['alive_count'],
            'allow_tcp_forwarding': settings['server']['allow_tcp_forwarding'],
            'allow_agent_forwarding':
            settings['server']['allow_agent_forwarding'],
            'deny_users': settings['server']['deny_users'],
            'allow_users': settings['server']['allow_users'],
            'deny_groups': settings['server']['deny_groups'],
            'allow_groups': settings['server']['allow_groups'],
            'use_dns': settings['server']['use_dns'],
            'sftp_enable': settings['server']['sftp_enable'],
            'sftp_group': settings['server']['sftp_group'],
            'sftp_chroot': settings['server']['sftp_chroot'],
            'max_auth_tries': settings['server']['max_auth_tries'],
            'max_sessions': settings['server']['max_sessions'],
        }
        return ctxt


class SSHDConfig(TemplatedFile):
    def __init__(self):
        path = '/etc/ssh/sshd_config'
        super(SSHDConfig, self).__init__(path=path,
                                         template_dir=TEMPLATES_DIR,
                                         context=SSHDConfigContext(),
                                         user='root',
                                         group='root',
                                         mode=0o0600,
                                         service_actions=[{'service': 'ssh',
                                                           'actions':
                                                           ['restart']}])

    def pre_write(self):
        settings = utils.get_settings('ssh')
        apt_update(fatal=True)
        apt_install(settings['server']['package'])
        if not os.path.exists('/etc/ssh'):
            os.makedirs('/etc/ssh')
            # NOTE: don't recurse
            utils.ensure_permissions('/etc/ssh', 'root', 'root', 0o0755,
                                     maxdepth=0)

    def post_write(self):
        # NOTE: don't recurse
        utils.ensure_permissions('/etc/ssh', 'root', 'root', 0o0755,
                                 maxdepth=0)


class SSHConfigFileContentAudit(FileContentAudit):
    def __init__(self):
        self.path = '/etc/ssh/ssh_config'
        super(SSHConfigFileContentAudit, self).__init__(self.path, {})

    def is_compliant(self, *args, **kwargs):
        self.pass_cases = []
        self.fail_cases = []
        settings = utils.get_settings('ssh')

        _release = lsb_release()['DISTRIB_CODENAME'].lower()
        if CompareHostReleases(_release) >= 'trusty':
            if not settings['server']['weak_hmac']:
                self.pass_cases.append(r'^MACs.+,hmac-ripemd160$')
            else:
                self.pass_cases.append(r'^MACs.+,hmac-sha1$')

            if settings['server']['weak_kex']:
                self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha256[,\s]?')  # noqa
                self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group14-sha1[,\s]?')  # noqa
                self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha1[,\s]?')  # noqa
                self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group1-sha1[,\s]?')  # noqa
            else:
                self.pass_cases.append(r'^KexAlgorithms.+,diffie-hellman-group-exchange-sha256$')  # noqa
                self.fail_cases.append(r'^KexAlgorithms.*diffie-hellman-group14-sha1[,\s]?')  # noqa

            if settings['server']['cbc_required']:
                self.pass_cases.append(r'^Ciphers\s.*-cbc[,\s]?')
                self.fail_cases.append(r'^Ciphers\s.*aes128-ctr[,\s]?')
                self.fail_cases.append(r'^Ciphers\s.*aes192-ctr[,\s]?')
                self.fail_cases.append(r'^Ciphers\s.*aes256-ctr[,\s]?')
            else:
                self.fail_cases.append(r'^Ciphers\s.*-cbc[,\s]?')
                self.pass_cases.append(r'^Ciphers\schacha20-poly1305@openssh.com,.+')  # noqa
                self.pass_cases.append(r'^Ciphers\s.*aes128-ctr$')
                self.pass_cases.append(r'^Ciphers\s.*aes192-ctr[,\s]?')
                self.pass_cases.append(r'^Ciphers\s.*aes256-ctr[,\s]?')
        else:
            if not settings['client']['weak_hmac']:
                self.fail_cases.append(r'^MACs.+,hmac-sha1$')
            else:
                self.pass_cases.append(r'^MACs.+,hmac-sha1$')

            if settings['client']['weak_kex']:
                self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha256[,\s]?')  # noqa
                self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group14-sha1[,\s]?')  # noqa
                self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha1[,\s]?')  # noqa
                self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group1-sha1[,\s]?')  # noqa
            else:
                self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha256$')  # noqa
                self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group14-sha1[,\s]?')  # noqa
                self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha1[,\s]?')  # noqa
                self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group1-sha1[,\s]?')  # noqa

            if settings['client']['cbc_required']:
                self.pass_cases.append(r'^Ciphers\s.*-cbc[,\s]?')
                self.fail_cases.append(r'^Ciphers\s.*aes128-ctr[,\s]?')
                self.fail_cases.append(r'^Ciphers\s.*aes192-ctr[,\s]?')
                self.fail_cases.append(r'^Ciphers\s.*aes256-ctr[,\s]?')
            else:
                self.fail_cases.append(r'^Ciphers\s.*-cbc[,\s]?')
                self.pass_cases.append(r'^Ciphers\s.*aes128-ctr[,\s]?')
                self.pass_cases.append(r'^Ciphers\s.*aes192-ctr[,\s]?')
                self.pass_cases.append(r'^Ciphers\s.*aes256-ctr[,\s]?')

        if settings['client']['roaming']:
            self.pass_cases.append(r'^UseRoaming yes$')
        else:
            self.fail_cases.append(r'^UseRoaming yes$')

        return super(SSHConfigFileContentAudit, self).is_compliant(*args,
                                                                   **kwargs)


class SSHDConfigFileContentAudit(FileContentAudit):
    def __init__(self):
        self.path = '/etc/ssh/sshd_config'
        super(SSHDConfigFileContentAudit, self).__init__(self.path, {})

    def is_compliant(self, *args, **kwargs):
        self.pass_cases = []
        self.fail_cases = []
        settings = utils.get_settings('ssh')

        _release = lsb_release()['DISTRIB_CODENAME'].lower()
        if CompareHostReleases(_release) >= 'trusty':
            if not settings['server']['weak_hmac']:
                self.pass_cases.append(r'^MACs.+,hmac-ripemd160$')
            else:
                self.pass_cases.append(r'^MACs.+,hmac-sha1$')

            if settings['server']['weak_kex']:
                self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha256[,\s]?')  # noqa
                self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group14-sha1[,\s]?')  # noqa
                self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha1[,\s]?')  # noqa
                self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group1-sha1[,\s]?')  # noqa
            else:
                self.pass_cases.append(r'^KexAlgorithms.+,diffie-hellman-group-exchange-sha256$')  # noqa
                self.fail_cases.append(r'^KexAlgorithms.*diffie-hellman-group14-sha1[,\s]?')  # noqa

            if settings['server']['cbc_required']:
                self.pass_cases.append(r'^Ciphers\s.*-cbc[,\s]?')
                self.fail_cases.append(r'^Ciphers\s.*aes128-ctr[,\s]?')
                self.fail_cases.append(r'^Ciphers\s.*aes192-ctr[,\s]?')
                self.fail_cases.append(r'^Ciphers\s.*aes256-ctr[,\s]?')
            else:
                self.fail_cases.append(r'^Ciphers\s.*-cbc[,\s]?')
                self.pass_cases.append(r'^Ciphers\schacha20-poly1305@openssh.com,.+')  # noqa
                self.pass_cases.append(r'^Ciphers\s.*aes128-ctr$')
                self.pass_cases.append(r'^Ciphers\s.*aes192-ctr[,\s]?')
                self.pass_cases.append(r'^Ciphers\s.*aes256-ctr[,\s]?')
        else:
            if not settings['server']['weak_hmac']:
                self.pass_cases.append(r'^MACs.+,hmac-ripemd160$')
            else:
                self.pass_cases.append(r'^MACs.+,hmac-sha1$')

            if settings['server']['weak_kex']:
                self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha256[,\s]?')  # noqa
                self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group14-sha1[,\s]?')  # noqa
                self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha1[,\s]?')  # noqa
                self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group1-sha1[,\s]?')  # noqa
            else:
                self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha256$')  # noqa
                self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group14-sha1[,\s]?')  # noqa
                self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha1[,\s]?')  # noqa
                self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group1-sha1[,\s]?')  # noqa

            if settings['server']['cbc_required']:
                self.pass_cases.append(r'^Ciphers\s.*-cbc[,\s]?')
                self.fail_cases.append(r'^Ciphers\s.*aes128-ctr[,\s]?')
                self.fail_cases.append(r'^Ciphers\s.*aes192-ctr[,\s]?')
                self.fail_cases.append(r'^Ciphers\s.*aes256-ctr[,\s]?')
            else:
                self.fail_cases.append(r'^Ciphers\s.*-cbc[,\s]?')
                self.pass_cases.append(r'^Ciphers\s.*aes128-ctr[,\s]?')
                self.pass_cases.append(r'^Ciphers\s.*aes192-ctr[,\s]?')
                self.pass_cases.append(r'^Ciphers\s.*aes256-ctr[,\s]?')

        if settings['server']['sftp_enable']:
            self.pass_cases.append(r'^Subsystem\ssftp')
        else:
            self.fail_cases.append(r'^Subsystem\ssftp')

        return super(SSHDConfigFileContentAudit, self).is_compliant(*args,
                                                                    **kwargs)
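The three `get_*` helpers above share one pattern: choose the modern algorithm list on Trusty or newer, and append the weak legacy algorithm only when it is explicitly allowed. A standalone sketch of that selection for MACs (the function name and the boolean release flag are illustrative; the charm derives the flag from `lsb_release()`/`CompareHostReleases`):

```python
# Illustrative sketch of the release-gated MAC selection in get_macs();
# not a charmhelpers function.
def select_macs(on_trusty_or_newer, allow_weak_mac):
    # Pre-trusty OpenSSH lacks the etm/umac variants, so it gets the short list.
    legacy = 'hmac-sha2-512,hmac-sha2-256,hmac-ripemd160'
    modern = ('hmac-sha2-512-etm@openssh.com,'
              'hmac-sha2-256-etm@openssh.com,'
              'hmac-ripemd160-etm@openssh.com,umac-128-etm@openssh.com,'
              'hmac-sha2-512,hmac-sha2-256,hmac-ripemd160')
    base = modern if on_trusty_or_newer else legacy
    # hmac-sha1 is only appended when weak MACs are explicitly permitted.
    return base + ',hmac-sha1' if allow_weak_mac else base

print(select_macs(True, False))
```

`get_kexs()` and `get_ciphers()` follow the same two-tier shape with their own algorithm strings.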
@ -1,70 +0,0 @@
###############################################################################
# WARNING: This configuration file is maintained by Juju. Local changes may
# be overwritten.
###############################################################################
# This is the ssh client system-wide configuration file. See
# ssh_config(5) for more information. This file provides defaults for
# users, and the values can be changed in per-user configuration files
# or on the command line.

# Configuration data is parsed as follows:
#  1. command line options
#  2. user-specific file
#  3. system-wide file
# Any configuration value is only changed the first time it is set.
# Thus, host-specific definitions should be at the beginning of the
# configuration file, and defaults at the end.

# Site-wide defaults for some commonly used options. For a comprehensive
# list of available options, their meanings and defaults, please see the
# ssh_config(5) man page.

# Restrict the following configuration to be limited to this Host.
{% if remote_hosts -%}
Host {{ ' '.join(remote_hosts) }}
{% endif %}
ForwardAgent no
ForwardX11 no
ForwardX11Trusted yes
RhostsRSAAuthentication no
RSAAuthentication yes
PasswordAuthentication {{ password_auth_allowed }}
HostbasedAuthentication no
GSSAPIAuthentication no
GSSAPIDelegateCredentials no
GSSAPIKeyExchange no
GSSAPITrustDNS no
BatchMode no
CheckHostIP yes
AddressFamily {{ addr_family }}
ConnectTimeout 0
StrictHostKeyChecking ask
IdentityFile ~/.ssh/identity
IdentityFile ~/.ssh/id_rsa
IdentityFile ~/.ssh/id_dsa
# The port at the destination should be defined
{% for port in ports -%}
Port {{ port }}
{% endfor %}
Protocol 2
Cipher 3des
{% if ciphers -%}
Ciphers {{ ciphers }}
{%- endif %}
{% if macs -%}
MACs {{ macs }}
{%- endif %}
{% if kexs -%}
KexAlgorithms {{ kexs }}
{%- endif %}
EscapeChar ~
Tunnel no
TunnelDevice any:any
PermitLocalCommand no
VisualHostKey no
RekeyLimit 1G 1h
SendEnv LANG LC_*
HashKnownHosts yes
{% if roaming -%}
UseRoaming {{ roaming }}
{% endif %}
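The content audits above work by collecting `pass_cases` (regexes that must match the rendered file) and `fail_cases` (regexes that must not match). Assuming `FileContentAudit` applies each pattern with a multiline regex search, the core compliance check can be sketched as:

```python
import re

# Sketch of the pass/fail regex check underlying FileContentAudit;
# the function name is illustrative, not the charmhelpers API.
def content_is_compliant(content, pass_cases, fail_cases):
    # Every pass-case regex must match somewhere in the file...
    if not all(re.search(p, content, re.MULTILINE) for p in pass_cases):
        return False
    # ...and no fail-case regex may match anywhere.
    return not any(re.search(f, content, re.MULTILINE) for f in fail_cases)

conf = "Ciphers chacha20-poly1305@openssh.com,aes256-ctr\n"
print(content_is_compliant(conf,
                           [r'^Ciphers\schacha20-poly1305@openssh.com,.+'],
                           [r'^Ciphers\s.*-cbc[,\s]?']))
```

With a CBC cipher in the `Ciphers` line the fail-case regex matches and the check fails, which is exactly how the audits flag a weakened rendered config.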
@ -1,159 +0,0 @@
|
||||
###############################################################################
|
||||
# WARNING: This configuration file is maintained by Juju. Local changes may
|
||||
# be overwritten.
|
||||
###############################################################################
|
||||
# Package generated configuration file
|
||||
# See the sshd_config(5) manpage for details
|
||||
|
||||
# What ports, IPs and protocols we listen for
|
||||
{% for port in ports -%}
|
||||
Port {{ port }}
|
||||
{% endfor -%}
|
||||
AddressFamily {{ addr_family }}
|
||||
# Use these options to restrict which interfaces/protocols sshd will bind to
|
||||
{% if ssh_ip -%}
|
||||
{% for ip in ssh_ip -%}
|
||||
ListenAddress {{ ip }}
|
||||
{% endfor %}
|
||||
{%- else -%}
|
||||
ListenAddress ::
|
||||
ListenAddress 0.0.0.0
|
||||
{% endif -%}
|
||||
Protocol 2
|
||||
{% if ciphers -%}
|
||||
Ciphers {{ ciphers }}
|
||||
{% endif -%}
|
||||
{% if macs -%}
|
||||
MACs {{ macs }}
|
||||
{% endif -%}
|
||||
{% if kexs -%}
|
||||
KexAlgorithms {{ kexs }}
|
||||
{% endif -%}
|
||||
# HostKeys for protocol version 2
|
||||
{% for keyfile in host_key_files -%}
|
||||
HostKey {{ keyfile }}
|
||||
{% endfor -%}
|
||||
|
||||
# Privilege Separation is turned on for security
|
||||
{% if use_priv_sep -%}
|
||||
UsePrivilegeSeparation {{ use_priv_sep }}
|
||||
{% endif -%}
|
||||
|
||||
# Lifetime and size of ephemeral version 1 server key
|
||||
KeyRegenerationInterval 3600
|
||||
ServerKeyBits 1024
|
||||
|
||||
# Logging
|
||||
SyslogFacility AUTH
|
||||
LogLevel VERBOSE
|
||||
|
||||
# Authentication:
|
||||
LoginGraceTime 30s
|
||||
{% if allow_root_with_key -%}
|
||||
PermitRootLogin without-password
|
||||
{% else -%}
|
||||
PermitRootLogin no
|
||||
{% endif %}
|
||||
PermitTunnel no
|
||||
PermitUserEnvironment no
|
||||
StrictModes yes
|
||||
|
||||
RSAAuthentication yes
|
||||
PubkeyAuthentication yes
|
||||
AuthorizedKeysFile %h/.ssh/authorized_keys
|
||||
|
||||
# Don't read the user's ~/.rhosts and ~/.shosts files
|
||||
IgnoreRhosts yes
|
||||
# For this to work you will also need host keys in /etc/ssh_known_hosts
|
||||
RhostsRSAAuthentication no
|
||||
# similar for protocol version 2
|
||||
HostbasedAuthentication no
|
||||
# Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
|
||||
IgnoreUserKnownHosts yes
|
||||
|
||||
# To enable empty passwords, change to yes (NOT RECOMMENDED)
|
||||
PermitEmptyPasswords no
|
||||
|
||||
# Change to yes to enable challenge-response passwords (beware issues with
|
||||
# some PAM modules and threads)
|
||||
ChallengeResponseAuthentication no
|
||||
|
||||
# Change to no to disable tunnelled clear text passwords
|
||||
PasswordAuthentication {{ password_authentication }}
|
||||
|
||||
# Kerberos options
|
||||
KerberosAuthentication no
|
||||
KerberosGetAFSToken no
|
||||
KerberosOrLocalPasswd no
|
||||
KerberosTicketCleanup yes
|
||||
|
||||
# GSSAPI options
|
||||
GSSAPIAuthentication no
|
||||
GSSAPICleanupCredentials yes
|
||||
|
||||
X11Forwarding {{ allow_x11_forwarding }}
|
||||
X11DisplayOffset 10
|
||||
X11UseLocalhost yes
|
||||
GatewayPorts no
|
||||
PrintMotd {{ print_motd }}
|
||||
PrintLastLog {{ print_last_log }}
|
||||
TCPKeepAlive no
|
||||
UseLogin no
|
||||
|
||||
ClientAliveInterval {{ client_alive_interval }}
|
||||
ClientAliveCountMax {{ client_alive_count }}
AllowTcpForwarding {{ allow_tcp_forwarding }}
AllowAgentForwarding {{ allow_agent_forwarding }}

MaxStartups 10:30:100
#Banner /etc/issue.net

# Allow client to pass locale environment variables
AcceptEnv LANG LC_*

# Set this to 'yes' to enable PAM authentication, account processing,
# and session processing. If this is enabled, PAM authentication will
# be allowed through the ChallengeResponseAuthentication and
# PasswordAuthentication. Depending on your PAM configuration,
# PAM authentication via ChallengeResponseAuthentication may bypass
# the setting of "PermitRootLogin without-password".
# If you just want the PAM account and session checks to run without
# PAM authentication, then enable this but set PasswordAuthentication
# and ChallengeResponseAuthentication to 'no'.
UsePAM {{ use_pam }}

{% if deny_users -%}
DenyUsers {{ deny_users }}
{% endif -%}
{% if allow_users -%}
AllowUsers {{ allow_users }}
{% endif -%}
{% if deny_groups -%}
DenyGroups {{ deny_groups }}
{% endif -%}
{% if allow_groups -%}
AllowGroups {{ allow_groups }}
{% endif -%}
UseDNS {{ use_dns }}
MaxAuthTries {{ max_auth_tries }}
MaxSessions {{ max_sessions }}

{% if sftp_enable -%}
# Configuration, in case SFTP is used
## override default of no subsystems
## Subsystem sftp /opt/app/openssh5/libexec/sftp-server
Subsystem sftp internal-sftp -l VERBOSE

## These lines must appear at the *end* of sshd_config
Match Group {{ sftp_group }}
ForceCommand internal-sftp -l VERBOSE
ChrootDirectory {{ sftp_chroot }}
{% else -%}
# Configuration, in case SFTP is used
## override default of no subsystems
## Subsystem sftp /opt/app/openssh5/libexec/sftp-server
## These lines must appear at the *end* of sshd_config
Match Group sftponly
ForceCommand internal-sftp -l VERBOSE
ChrootDirectory /sftpchroot/home/%u
{% endif %}
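The `{{ ... }}` substitutions and `{% if %}` guards in the sshd_config template above are ordinary Jinja2 syntax. A minimal rendering sketch, where the fragment and the context values (`use_pam`, `allow_users`) are invented stand-ins for the real charm template and config:

```python
# Render a tiny stand-in for the sshd_config template above with Jinja2.
# The context keys mirror the charm's template variables; the values are
# invented for illustration.
from jinja2 import Template

fragment = (
    "UsePAM {{ use_pam }}\n"
    "{% if allow_users -%}\n"
    "AllowUsers {{ allow_users }}\n"
    "{% endif -%}\n"
)

rendered = Template(fragment).render(use_pam='yes', allow_users='ubuntu')
print(rendered)
```

The trailing `-%}` markers trim the whitespace after each block tag, so a disabled (falsy) option leaves no blank line behind in the rendered config.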
@ -1,73 +0,0 @@
# Copyright 2016 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os

import six

from charmhelpers.core.hookenv import (
    log,
    DEBUG,
    WARNING,
)

try:
    from jinja2 import FileSystemLoader, Environment
except ImportError:
    from charmhelpers.fetch import apt_install
    from charmhelpers.fetch import apt_update
    apt_update(fatal=True)
    if six.PY2:
        apt_install('python-jinja2', fatal=True)
    else:
        apt_install('python3-jinja2', fatal=True)
    from jinja2 import FileSystemLoader, Environment


# NOTE: function separated from main rendering code to facilitate easier
#       mocking in unit tests.
def write(path, data):
    with open(path, 'wb') as out:
        out.write(data)


def get_template_path(template_dir, path):
    """Returns the template file which would be used to render the path.

    The path to the template file is returned.
    :param template_dir: the directory the templates are located in
    :param path: the file path to be written to.
    :returns: path to the template file
    """
    return os.path.join(template_dir, os.path.basename(path))


def render_and_write(template_dir, path, context):
    """Renders the specified template into the file.

    :param template_dir: the directory to load the template from
    :param path: the path to write the templated contents to
    :param context: the parameters to pass to the rendering engine
    """
    env = Environment(loader=FileSystemLoader(template_dir))
    template_file = os.path.basename(path)
    template = env.get_template(template_file)
    log('Rendering from template: %s' % template.name, level=DEBUG)
    rendered_content = template.render(context)
    if not rendered_content:
        log("Render returned None - skipping '%s'" % path,
            level=WARNING)
        return

    write(path, rendered_content.encode('utf-8').strip())
    log('Wrote template %s' % path, level=DEBUG)
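The key convention in `render_and_write()` above is that the template shares its basename with the destination file (see `get_template_path()`). A self-contained sketch of that flow, using a temporary directory and an invented one-line `sshd_config` template in place of the charm's real paths:

```python
# Standalone sketch of the render_and_write() pattern above: look up a
# template named after the target file, render it with a context dict, and
# write the stripped result. The temp directory and template content are
# stand-ins, not the charm's real files.
import os
import tempfile

from jinja2 import Environment, FileSystemLoader

tmpdir = tempfile.mkdtemp()
template_dir = os.path.join(tmpdir, 'templates')
os.makedirs(template_dir)

# Template basename matches the destination path's basename.
with open(os.path.join(template_dir, 'sshd_config'), 'w') as f:
    f.write('UsePAM {{ use_pam }}\n')

env = Environment(loader=FileSystemLoader(template_dir))
dest = os.path.join(tmpdir, 'sshd_config')
content = env.get_template(os.path.basename(dest)).render({'use_pam': 'yes'})

with open(dest, 'wb') as out:
    out.write(content.encode('utf-8').strip())

print(open(dest).read())
```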
@ -1,155 +0,0 @@
# Copyright 2016 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import glob
import grp
import os
import pwd
import six
import yaml

from charmhelpers.core.hookenv import (
    log,
    DEBUG,
    INFO,
    WARNING,
)


# Global settings cache. Since each hook fire entails a fresh module import it
# is safe to hold this in memory and not risk missing config changes (since
# they will result in a new hook fire and thus re-import).
__SETTINGS__ = {}


def _get_defaults(modules):
    """Load the default config for the provided modules.

    :param modules: stack modules config defaults to lookup.
    :returns: modules default config dictionary.
    """
    default = os.path.join(os.path.dirname(__file__),
                           'defaults/%s.yaml' % (modules))
    return yaml.safe_load(open(default))


def _get_schema(modules):
    """Load the config schema for the provided modules.

    NOTE: this schema is intended to have a 1-1 relationship with the keys
    in the default config and is used as a means to verify valid overrides
    provided by the user.

    :param modules: stack modules config schema to lookup.
    :returns: modules default schema dictionary.
    """
    schema = os.path.join(os.path.dirname(__file__),
                          'defaults/%s.yaml.schema' % (modules))
    return yaml.safe_load(open(schema))


def _get_user_provided_overrides(modules):
    """Load user-provided config overrides.

    :param modules: stack modules to lookup in user overrides yaml file.
    :returns: overrides dictionary.
    """
    overrides = os.path.join(os.environ['JUJU_CHARM_DIR'],
                             'hardening.yaml')
    if os.path.exists(overrides):
        log("Found user-provided config overrides file '%s'" %
            (overrides), level=DEBUG)
        settings = yaml.safe_load(open(overrides))
        if settings and settings.get(modules):
            log("Applying '%s' overrides" % (modules), level=DEBUG)
            return settings.get(modules)

        log("No overrides found for '%s'" % (modules), level=DEBUG)
    else:
        log("No hardening config overrides file '%s' found in charm "
            "root dir" % (overrides), level=DEBUG)

    return {}


def _apply_overrides(settings, overrides, schema):
    """Get overrides config overlayed onto modules defaults.

    :param settings: modules default config.
    :param overrides: user-provided config overrides.
    :param schema: config schema used to validate override keys.
    :returns: dictionary of modules config with user overrides applied.
    """
    if overrides:
        for k, v in six.iteritems(overrides):
            if k in schema:
                if schema[k] is None:
                    settings[k] = v
                elif type(schema[k]) is dict:
                    settings[k] = _apply_overrides(settings[k], overrides[k],
                                                   schema[k])
                else:
                    raise Exception("Unexpected type found in schema '%s'" %
                                    type(schema[k]))
            else:
                log("Unknown override key '%s' - ignoring" % (k), level=INFO)

    return settings


def get_settings(modules):
    global __SETTINGS__
    if modules in __SETTINGS__:
        return __SETTINGS__[modules]

    schema = _get_schema(modules)
    settings = _get_defaults(modules)
    overrides = _get_user_provided_overrides(modules)
    __SETTINGS__[modules] = _apply_overrides(settings, overrides, schema)
    return __SETTINGS__[modules]


def ensure_permissions(path, user, group, permissions, maxdepth=-1):
    """Ensure permissions for path.

    If path is a file, apply to file and return. If path is a directory,
    apply recursively (if required) to directory contents and return.

    :param path: file or directory path.
    :param user: user name
    :param group: group name
    :param permissions: octal permissions
    :param maxdepth: maximum recursion depth. A negative maxdepth allows
                     infinite recursion and maxdepth=0 means no recursion.
    :returns: None
    """
    if not os.path.exists(path):
        log("File '%s' does not exist - cannot set permissions" % (path),
            level=WARNING)
        return

    _user = pwd.getpwnam(user)
    os.chown(path, _user.pw_uid, grp.getgrnam(group).gr_gid)
    os.chmod(path, permissions)

    if maxdepth == 0:
        log("Max recursion depth reached - skipping further recursion",
            level=DEBUG)
        return
    elif maxdepth > 0:
        maxdepth -= 1

    if os.path.isdir(path):
        contents = glob.glob("%s/*" % (path))
        for c in contents:
            ensure_permissions(c, user=user, group=group,
                               permissions=permissions, maxdepth=maxdepth)
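The overlay performed by `_apply_overrides()` above can be shown standalone: override keys are honoured only when the schema knows them, recursing into nested dicts and silently dropping unknown keys. The example data (`ssh`, `use_pam`, `max_auth_tries`) is invented for illustration:

```python
# Standalone sketch of the defaults/overrides overlay performed by
# _apply_overrides() above. Unknown override keys are ignored; nested dicts
# in the schema are merged recursively. All data here is invented.
def apply_overrides(settings, overrides, schema):
    for k, v in (overrides or {}).items():
        if k not in schema:
            # Unknown override key - ignored, as in the charm helper.
            continue
        if isinstance(schema[k], dict):
            settings[k] = apply_overrides(settings[k], overrides[k],
                                          schema[k])
        else:
            settings[k] = v
    return settings


defaults = {'ssh': {'use_pam': 'yes', 'max_auth_tries': 4}}
schema = {'ssh': {'use_pam': None, 'max_auth_tries': None}}
user_overrides = {'ssh': {'max_auth_tries': 6}, 'bogus': 1}

merged = apply_overrides(defaults, user_overrides, schema)
print(merged)
```

As in the charm helper, `bogus` never reaches the merged settings because it is absent from the schema, while the known `max_auth_tries` override replaces the default.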
@ -1,13 +0,0 @@
# Copyright 2014-2015 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
@ -1,602 +0,0 @@
# Copyright 2014-2015 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import glob
import re
import subprocess
import six
import socket

from functools import partial

from charmhelpers.fetch import apt_install, apt_update
from charmhelpers.core.hookenv import (
    config,
    log,
    network_get_primary_address,
    unit_get,
    WARNING,
    NoNetworkBinding,
)

from charmhelpers.core.host import (
    lsb_release,
    CompareHostReleases,
)

try:
    import netifaces
except ImportError:
    apt_update(fatal=True)
    if six.PY2:
        apt_install('python-netifaces', fatal=True)
    else:
        apt_install('python3-netifaces', fatal=True)
    import netifaces

try:
    import netaddr
except ImportError:
    apt_update(fatal=True)
    if six.PY2:
        apt_install('python-netaddr', fatal=True)
    else:
        apt_install('python3-netaddr', fatal=True)
    import netaddr


def _validate_cidr(network):
    try:
        netaddr.IPNetwork(network)
    except (netaddr.core.AddrFormatError, ValueError):
        raise ValueError("Network (%s) is not in CIDR presentation format" %
                         network)


def no_ip_found_error_out(network):
    errmsg = ("No IP address found in network(s): %s" % network)
    raise ValueError(errmsg)


def _get_ipv6_network_from_address(address):
    """Get a netaddr.IPNetwork for the given IPv6 address.

    :param address: a dict as returned by netifaces.ifaddresses
    :returns netaddr.IPNetwork: None if the address is a link local or
        loopback address.
    """
    if address['addr'].startswith('fe80') or address['addr'] == "::1":
        return None

    prefix = address['netmask'].split("/")
    if len(prefix) > 1:
        netmask = prefix[1]
    else:
        netmask = address['netmask']
    return netaddr.IPNetwork("%s/%s" % (address['addr'],
                                        netmask))


def get_address_in_network(network, fallback=None, fatal=False):
    """Get an IPv4 or IPv6 address within the network from the host.

    :param network (str): CIDR presentation format. For example,
        '192.168.1.0/24'. Supports multiple networks as a space-delimited
        list.
    :param fallback (str): If no address is found, return fallback.
    :param fatal (boolean): If no address is found, fallback is not
        set and fatal is True then exit(1).
    """
    if network is None:
        if fallback is not None:
            return fallback

        if fatal:
            no_ip_found_error_out(network)
        else:
            return None

    networks = network.split() or [network]
    for network in networks:
        _validate_cidr(network)
        network = netaddr.IPNetwork(network)
        for iface in netifaces.interfaces():
            try:
                addresses = netifaces.ifaddresses(iface)
            except ValueError:
                # If an instance was deleted between
                # netifaces.interfaces() run and now, its interfaces are gone
                continue
            if network.version == 4 and netifaces.AF_INET in addresses:
                for addr in addresses[netifaces.AF_INET]:
                    cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
                                                        addr['netmask']))
                    if cidr in network:
                        return str(cidr.ip)

            if network.version == 6 and netifaces.AF_INET6 in addresses:
                for addr in addresses[netifaces.AF_INET6]:
                    cidr = _get_ipv6_network_from_address(addr)
                    if cidr and cidr in network:
                        return str(cidr.ip)

    if fallback is not None:
        return fallback

    if fatal:
        no_ip_found_error_out(network)

    return None


def is_ipv6(address):
    """Determine whether provided address is IPv6 or not."""
    try:
        address = netaddr.IPAddress(address)
    except netaddr.AddrFormatError:
        # probably a hostname - so not an address at all!
        return False

    return address.version == 6


def is_address_in_network(network, address):
    """
    Determine whether the provided address is within a network range.

    :param network (str): CIDR presentation format. For example,
        '192.168.1.0/24'.
    :param address: An individual IPv4 or IPv6 address without a net
        mask or subnet prefix. For example, '192.168.1.1'.
    :returns boolean: Flag indicating whether address is in network.
    """
    try:
        network = netaddr.IPNetwork(network)
    except (netaddr.core.AddrFormatError, ValueError):
        raise ValueError("Network (%s) is not in CIDR presentation format" %
                         network)

    try:
        address = netaddr.IPAddress(address)
    except (netaddr.core.AddrFormatError, ValueError):
        raise ValueError("Address (%s) is not in correct presentation format" %
                         address)

    if address in network:
        return True
    else:
        return False


def _get_for_address(address, key):
    """Retrieve an attribute of or the physical interface that
    the IP address provided could be bound to.

    :param address (str): An individual IPv4 or IPv6 address without a net
        mask or subnet prefix. For example, '192.168.1.1'.
    :param key: 'iface' for the physical interface name or an attribute
        of the configured interface, for example 'netmask'.
    :returns str: Requested attribute or None if address is not bindable.
    """
    address = netaddr.IPAddress(address)
    for iface in netifaces.interfaces():
        addresses = netifaces.ifaddresses(iface)
        if address.version == 4 and netifaces.AF_INET in addresses:
            addr = addresses[netifaces.AF_INET][0]['addr']
            netmask = addresses[netifaces.AF_INET][0]['netmask']
            network = netaddr.IPNetwork("%s/%s" % (addr, netmask))
            cidr = network.cidr
            if address in cidr:
                if key == 'iface':
                    return iface
                else:
                    return addresses[netifaces.AF_INET][0][key]

        if address.version == 6 and netifaces.AF_INET6 in addresses:
            for addr in addresses[netifaces.AF_INET6]:
                network = _get_ipv6_network_from_address(addr)
                if not network:
                    continue

                cidr = network.cidr
                if address in cidr:
                    if key == 'iface':
                        return iface
                    elif key == 'netmask' and cidr:
                        return str(cidr).split('/')[1]
                    else:
                        return addr[key]
    return None


get_iface_for_address = partial(_get_for_address, key='iface')

get_netmask_for_address = partial(_get_for_address, key='netmask')


def resolve_network_cidr(ip_address):
    '''
    Resolves the full address cidr of an ip_address based on
    configured network interfaces
    '''
    netmask = get_netmask_for_address(ip_address)
    return str(netaddr.IPNetwork("%s/%s" % (ip_address, netmask)).cidr)


def format_ipv6_addr(address):
    """If address is IPv6, wrap it in '[]' otherwise return None.

    This is required by most configuration files when specifying IPv6
    addresses.
    """
    if is_ipv6(address):
        return "[%s]" % address

    return None


def is_ipv6_disabled():
    try:
        result = subprocess.check_output(
            ['sysctl', 'net.ipv6.conf.all.disable_ipv6'],
            stderr=subprocess.STDOUT,
            universal_newlines=True)
    except subprocess.CalledProcessError:
        return True

    return "net.ipv6.conf.all.disable_ipv6 = 1" in result


def get_iface_addr(iface='eth0', inet_type='AF_INET', inc_aliases=False,
                   fatal=True, exc_list=None):
    """Return the assigned IP address for a given interface, if any.

    :param iface: network interface on which address(es) are expected to
        be found.
    :param inet_type: inet address family
    :param inc_aliases: include alias interfaces in search
    :param fatal: if True, raise exception if address not found
    :param exc_list: list of addresses to ignore
    :return: list of ip addresses
    """
    # Extract nic if passed /dev/ethX
    if '/' in iface:
        iface = iface.split('/')[-1]

    if not exc_list:
        exc_list = []

    try:
        inet_num = getattr(netifaces, inet_type)
    except AttributeError:
        raise Exception("Unknown inet type '%s'" % str(inet_type))

    interfaces = netifaces.interfaces()
    if inc_aliases:
        ifaces = []
        for _iface in interfaces:
            if iface == _iface or _iface.split(':')[0] == iface:
                ifaces.append(_iface)

        if fatal and not ifaces:
            raise Exception("Invalid interface '%s'" % iface)

        ifaces.sort()
    else:
        if iface not in interfaces:
            if fatal:
                raise Exception("Interface '%s' not found " % (iface))
            else:
                return []
        else:
            ifaces = [iface]

    addresses = []
    for netiface in ifaces:
        net_info = netifaces.ifaddresses(netiface)
        if inet_num in net_info:
            for entry in net_info[inet_num]:
                if 'addr' in entry and entry['addr'] not in exc_list:
                    addresses.append(entry['addr'])

    if fatal and not addresses:
        raise Exception("Interface '%s' doesn't have any %s addresses." %
                        (iface, inet_type))

    return sorted(addresses)


get_ipv4_addr = partial(get_iface_addr, inet_type='AF_INET')


def get_iface_from_addr(addr):
    """Work out on which interface the provided address is configured."""
    for iface in netifaces.interfaces():
        addresses = netifaces.ifaddresses(iface)
        for inet_type in addresses:
            for _addr in addresses[inet_type]:
                _addr = _addr['addr']
                # link local
                ll_key = re.compile("(.+)%.*")
                raw = re.match(ll_key, _addr)
                if raw:
                    _addr = raw.group(1)

                if _addr == addr:
                    log("Address '%s' is configured on iface '%s'" %
                        (addr, iface))
                    return iface

    msg = "Unable to infer net iface on which '%s' is configured" % (addr)
    raise Exception(msg)


def sniff_iface(f):
    """Ensure decorated function is called with a value for iface.

    If no iface provided, inject net iface inferred from unit private
    address.
    """
    def iface_sniffer(*args, **kwargs):
        if not kwargs.get('iface', None):
            kwargs['iface'] = get_iface_from_addr(unit_get('private-address'))

        return f(*args, **kwargs)

    return iface_sniffer


@sniff_iface
def get_ipv6_addr(iface=None, inc_aliases=False, fatal=True, exc_list=None,
                  dynamic_only=True):
    """Get assigned IPv6 address for a given interface.

    Returns list of addresses found. If no address found, returns empty
    list.

    If iface is None, we infer the current primary interface by doing a
    reverse lookup on the unit private-address.

    We currently only support scope global IPv6 addresses i.e. non-temporary
    addresses. If no global IPv6 address is found, return the first one
    found in the ipv6 address list.

    :param iface: network interface on which ipv6 address(es) are expected
        to be found.
    :param inc_aliases: include alias interfaces in search
    :param fatal: if True, raise exception if address not found
    :param exc_list: list of addresses to ignore
    :param dynamic_only: only recognise dynamic addresses
    :return: list of ipv6 addresses
    """
    addresses = get_iface_addr(iface=iface, inet_type='AF_INET6',
                               inc_aliases=inc_aliases, fatal=fatal,
                               exc_list=exc_list)

    if addresses:
        global_addrs = []
        for addr in addresses:
            key_scope_link_local = re.compile("^fe80::..(.+)%(.+)")
            m = re.match(key_scope_link_local, addr)
            if m:
                eui_64_mac = m.group(1)
                iface = m.group(2)
            else:
                global_addrs.append(addr)

        if global_addrs:
            # Make sure any found global addresses are not temporary
            cmd = ['ip', 'addr', 'show', iface]
            out = subprocess.check_output(cmd).decode('UTF-8')
            if dynamic_only:
                key = re.compile("inet6 (.+)/[0-9]+ scope global.* dynamic.*")
            else:
                key = re.compile("inet6 (.+)/[0-9]+ scope global.*")

            addrs = []
            for line in out.split('\n'):
                line = line.strip()
                m = re.match(key, line)
                if m and 'temporary' not in line:
                    # Return the first valid address we find
                    for addr in global_addrs:
                        if m.group(1) == addr:
                            if not dynamic_only or \
                                    m.group(1).endswith(eui_64_mac):
                                addrs.append(addr)

            if addrs:
                return addrs

    if fatal:
        raise Exception("Interface '%s' does not have a scope global "
                        "non-temporary ipv6 address." % iface)

    return []


def get_bridges(vnic_dir='/sys/devices/virtual/net'):
    """Return a list of bridges on the system."""
    b_regex = "%s/*/bridge" % vnic_dir
    return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_regex)]


def get_bridge_nics(bridge, vnic_dir='/sys/devices/virtual/net'):
    """Return a list of nics comprising a given bridge on the system."""
    brif_regex = "%s/%s/brif/*" % (vnic_dir, bridge)
    return [x.split('/')[-1] for x in glob.glob(brif_regex)]


def is_bridge_member(nic):
    """Check if a given nic is a member of a bridge."""
    for bridge in get_bridges():
        if nic in get_bridge_nics(bridge):
            return True

    return False


def is_ip(address):
    """
    Returns True if address is a valid IP address.
    """
    try:
        # Test to see if already an IPv4/IPv6 address
        address = netaddr.IPAddress(address)
        return True
    except (netaddr.AddrFormatError, ValueError):
        return False


def ns_query(address):
    try:
        import dns.resolver
    except ImportError:
        if six.PY2:
            apt_install('python-dnspython', fatal=True)
        else:
            apt_install('python3-dnspython', fatal=True)
        import dns.resolver

    if isinstance(address, dns.name.Name):
        rtype = 'PTR'
    elif isinstance(address, six.string_types):
        rtype = 'A'
    else:
        return None

    try:
        answers = dns.resolver.query(address, rtype)
    except dns.resolver.NXDOMAIN:
        return None

    if answers:
        return str(answers[0])
    return None


def get_host_ip(hostname, fallback=None):
    """
    Resolves the IP for a given hostname, or returns
    the input if it is already an IP.
    """
    if is_ip(hostname):
        return hostname

    ip_addr = ns_query(hostname)
    if not ip_addr:
        try:
            ip_addr = socket.gethostbyname(hostname)
        except Exception:
            log("Failed to resolve hostname '%s'" % (hostname),
                level=WARNING)
            return fallback
    return ip_addr


def get_hostname(address, fqdn=True):
    """
    Resolves hostname for given IP, or returns the input
    if it is already a hostname.
    """
    if is_ip(address):
        try:
            import dns.reversename
        except ImportError:
            if six.PY2:
                apt_install("python-dnspython", fatal=True)
            else:
                apt_install("python3-dnspython", fatal=True)
            import dns.reversename

        rev = dns.reversename.from_address(address)
        result = ns_query(rev)

        if not result:
            try:
                result = socket.gethostbyaddr(address)[0]
            except Exception:
                return None
    else:
        result = address

    if fqdn:
        # strip trailing .
        if result.endswith('.'):
            return result[:-1]
        else:
            return result
    else:
        return result.split('.')[0]


def port_has_listener(address, port):
    """
    Returns True if the address:port is open and being listened to,
    else False.

    @param address: an IP address or hostname
    @param port: integer port

    Note calls 'nc' via a subprocess shell
    """
    cmd = ['nc', '-z', address, str(port)]
    result = subprocess.call(cmd)
    return not bool(result)


def assert_charm_supports_ipv6():
    """Check whether we are able to support charms ipv6."""
    release = lsb_release()['DISTRIB_CODENAME'].lower()
    if CompareHostReleases(release) < "trusty":
        raise Exception("IPv6 is not supported in the charms for Ubuntu "
                        "versions less than Trusty 14.04")


def get_relation_ip(interface, cidr_network=None):
    """Return this unit's IP for the given interface.

    Allow for an arbitrary interface to use with network-get to select an
    IP. Handle all address selection options including passed cidr network
    and IPv6.

    Usage: get_relation_ip('amqp', cidr_network='10.0.0.0/8')

    @param interface: string name of the relation.
    @param cidr_network: string CIDR Network to select an address from.
    @raises Exception if prefer-ipv6 is configured but IPv6 unsupported.
    @returns IPv6 or IPv4 address
    """
    # Select the interface address first
    # For possible use as a fallback below with get_address_in_network
    try:
        # Get the interface specific IP
        address = network_get_primary_address(interface)
    except NotImplementedError:
        # If network-get is not available
        address = get_host_ip(unit_get('private-address'))
    except NoNetworkBinding:
        log("No network binding for {}".format(interface), WARNING)
        address = get_host_ip(unit_get('private-address'))

    if config('prefer-ipv6'):
        # Currently IPv6 has priority, eventually we want IPv6 to just be
        # another network space.
        assert_charm_supports_ipv6()
        return get_ipv6_addr()[0]
    elif cidr_network:
        # If a specific CIDR network is passed get the address from that
        # network.
        return get_address_in_network(cidr_network, address)

    # Return the interface address
    return address
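The CIDR membership test at the heart of `is_address_in_network()` above can be sketched with the stdlib `ipaddress` module in place of `netaddr`; the behaviour is analogous for these simple membership checks, though `netaddr` offers more beyond them:

```python
# Sketch of the is_address_in_network() check above using the stdlib
# ipaddress module instead of netaddr.
import ipaddress


def in_network(network, address):
    """Return True if address falls inside the CIDR network."""
    try:
        net = ipaddress.ip_network(network, strict=False)
    except ValueError:
        raise ValueError("Network (%s) is not in CIDR presentation format" %
                         network)
    return ipaddress.ip_address(address) in net


print(in_network('192.168.1.0/24', '192.168.1.1'))
print(in_network('2001:db8::/32', '2001:db8::1'))
```

`strict=False` accepts a network written with host bits set (e.g. `192.168.1.1/24`), mirroring `netaddr.IPNetwork`'s permissiveness.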
@ -1,249 +0,0 @@
|
||||
# Copyright 2014-2015 Canonical Limited.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
''' Helpers for interacting with OpenvSwitch '''
|
||||
import hashlib
|
||||
import subprocess
|
||||
import os
|
||||
import six
|
||||
|
||||
from charmhelpers.fetch import apt_install
|
||||
|
||||
|
||||
from charmhelpers.core.hookenv import (
|
||||
log, WARNING, INFO, DEBUG
|
||||
)
|
||||
from charmhelpers.core.host import (
|
||||
service
|
||||
)
|
||||
|
||||
BRIDGE_TEMPLATE = """\
|
||||
# This veth pair is required when neutron data-port is mapped to an existing linux bridge. lp:1635067
|
||||
|
||||
auto {linuxbridge_port}
|
||||
iface {linuxbridge_port} inet manual
|
||||
pre-up ip link add name {linuxbridge_port} type veth peer name {ovsbridge_port}
|
||||
pre-up ip link set {ovsbridge_port} master {bridge}
|
||||
pre-up ip link set {ovsbridge_port} up
|
||||
up ip link set {linuxbridge_port} up
|
||||
down ip link del {linuxbridge_port}
|
||||
"""
|
||||
|
||||
MAX_KERNEL_INTERFACE_NAME_LEN = 15
|
||||
|
||||
|
||||
def add_bridge(name, datapath_type=None):
|
||||
''' Add the named bridge to openvswitch '''
|
||||
log('Creating bridge {}'.format(name))
|
||||
cmd = ["ovs-vsctl", "--", "--may-exist", "add-br", name]
|
||||
if datapath_type is not None:
|
||||
cmd += ['--', 'set', 'bridge', name,
|
||||
'datapath_type={}'.format(datapath_type)]
|
||||
subprocess.check_call(cmd)
|
||||
|
||||
|
||||
def del_bridge(name):
|
||||
''' Delete the named bridge from openvswitch '''
|
||||
log('Deleting bridge {}'.format(name))
|
||||
subprocess.check_call(["ovs-vsctl", "--", "--if-exists", "del-br", name])
|
||||
|
||||
|
||||
def add_bridge_port(name, port, promisc=False):
|
||||
''' Add a port to the named openvswitch bridge '''
|
||||
log('Adding port {} to bridge {}'.format(port, name))
|
||||
subprocess.check_call(["ovs-vsctl", "--", "--may-exist", "add-port",
|
||||
name, port])
|
||||
subprocess.check_call(["ip", "link", "set", port, "up"])
|
||||
if promisc:
|
||||
subprocess.check_call(["ip", "link", "set", port, "promisc", "on"])
|
||||
else:
|
||||
subprocess.check_call(["ip", "link", "set", port, "promisc", "off"])
|
||||
|
||||
|
||||
def del_bridge_port(name, port):
|
||||
''' Delete a port from the named openvswitch bridge '''
|
||||
log('Deleting port {} from bridge {}'.format(port, name))
|
||||
subprocess.check_call(["ovs-vsctl", "--", "--if-exists", "del-port",
|
||||
name, port])
|
||||
subprocess.check_call(["ip", "link", "set", port, "down"])
|
||||
subprocess.check_call(["ip", "link", "set", port, "promisc", "off"])
|
||||
|
||||
|
||||
def add_ovsbridge_linuxbridge(name, bridge):
    ''' Add linux bridge to the named openvswitch bridge

    :param name: Name of ovs bridge to be added to Linux bridge
    :param bridge: Name of Linux bridge to be added to ovs bridge
    :returns: None
    '''
    try:
        import netifaces
    except ImportError:
        if six.PY2:
            apt_install('python-netifaces', fatal=True)
        else:
            apt_install('python3-netifaces', fatal=True)
        import netifaces

    # NOTE(jamespage):
    # Older code supported addition of a linuxbridge directly
    # to an OVS bridge; ensure we don't break uses on upgrade
    existing_ovs_bridge = port_to_br(bridge)
    if existing_ovs_bridge is not None:
        log('Linuxbridge {} is already directly in use'
            ' by OVS bridge {}'.format(bridge, existing_ovs_bridge),
            level=INFO)
        return

    # NOTE(jamespage):
    # preserve existing naming because interfaces may already exist.
    ovsbridge_port = "veth-" + name
    linuxbridge_port = "veth-" + bridge
    if (len(ovsbridge_port) > MAX_KERNEL_INTERFACE_NAME_LEN or
            len(linuxbridge_port) > MAX_KERNEL_INTERFACE_NAME_LEN):
        # NOTE(jamespage):
        # use parts of hashed bridgename (openstack style) when
        # a bridge name exceeds 15 chars
        hashed_bridge = hashlib.sha256(bridge.encode('UTF-8')).hexdigest()
        base = '{}-{}'.format(hashed_bridge[:8], hashed_bridge[-2:])
        ovsbridge_port = "cvo{}".format(base)
        linuxbridge_port = "cvb{}".format(base)

    interfaces = netifaces.interfaces()
    for interface in interfaces:
        if interface == ovsbridge_port or interface == linuxbridge_port:
            log('Interface {} already exists'.format(interface), level=INFO)
            return

    log('Adding linuxbridge {} to ovsbridge {}'.format(bridge, name),
        level=INFO)

    check_for_eni_source()

    with open('/etc/network/interfaces.d/{}.cfg'.format(
            linuxbridge_port), 'w') as config:
        config.write(BRIDGE_TEMPLATE.format(linuxbridge_port=linuxbridge_port,
                                            ovsbridge_port=ovsbridge_port,
                                            bridge=bridge))

    subprocess.check_call(["ifup", linuxbridge_port])
    add_bridge_port(name, linuxbridge_port)

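The veth naming fallback above is self-contained enough to illustrate on its own. Below is a hypothetical standalone sketch of that derivation (the function name `veth_pair_names` is illustrative, not a charm-helpers API):

```python
import hashlib

# IFNAMSIZ - 1 on Linux: kernel interface names are capped at 15 chars
MAX_KERNEL_INTERFACE_NAME_LEN = 15


def veth_pair_names(name, bridge):
    """Derive the two veth endpoint names, falling back to a hashed
    form (cvo.../cvb...) when either plain name would exceed the
    kernel interface-name limit."""
    ovsbridge_port = "veth-" + name
    linuxbridge_port = "veth-" + bridge
    if (len(ovsbridge_port) > MAX_KERNEL_INTERFACE_NAME_LEN or
            len(linuxbridge_port) > MAX_KERNEL_INTERFACE_NAME_LEN):
        # parts of the hashed bridge name: first 8 and last 2 hex chars
        hashed = hashlib.sha256(bridge.encode('UTF-8')).hexdigest()
        base = '{}-{}'.format(hashed[:8], hashed[-2:])
        ovsbridge_port = "cvo{}".format(base)
        linuxbridge_port = "cvb{}".format(base)
    return ovsbridge_port, linuxbridge_port
```

Because only the Linux bridge name is hashed, the fallback names are stable across calls for the same bridge, which is what lets the charm find pre-existing interfaces on upgrade.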
def is_linuxbridge_interface(port):
    ''' Check if the interface is a linuxbridge bridge
    :param port: Name of an interface to check whether it is a Linux bridge
    :returns: True if port is a Linux bridge'''

    if os.path.exists('/sys/class/net/' + port + '/bridge'):
        log('Interface {} is a Linux bridge'.format(port), level=DEBUG)
        return True
    else:
        log('Interface {} is not a Linux bridge'.format(port), level=DEBUG)
        return False

def set_manager(manager):
    ''' Set the manager for the local openvswitch '''
    log('Setting manager for local ovs to {}'.format(manager))
    subprocess.check_call(['ovs-vsctl', 'set-manager',
                           'ssl:{}'.format(manager)])


def set_Open_vSwitch_column_value(column_value):
    """
    Calls ovs-vsctl and sets the 'column_value' in the Open_vSwitch table.

    :param column_value:
        See http://www.openvswitch.org/ovs-vswitchd.conf.db.5.pdf for
        details of the relevant values.
    :type column_value: str
    :raises subprocess.CalledProcessError: possibly ovsdb-server is not
        running
    """
    log('Setting {} in the Open_vSwitch table'.format(column_value))
    subprocess.check_call(['ovs-vsctl', 'set', 'Open_vSwitch', '.',
                           column_value])

CERT_PATH = '/etc/openvswitch/ovsclient-cert.pem'


def get_certificate():
    ''' Read openvswitch certificate from disk '''
    if os.path.exists(CERT_PATH):
        log('Reading ovs certificate from {}'.format(CERT_PATH))
        with open(CERT_PATH, 'r') as cert:
            full_cert = cert.read()
            begin_marker = "-----BEGIN CERTIFICATE-----"
            end_marker = "-----END CERTIFICATE-----"
            begin_index = full_cert.find(begin_marker)
            end_index = full_cert.rfind(end_marker)
            if end_index == -1 or begin_index == -1:
                raise RuntimeError("Certificate does not contain valid begin"
                                   " and end markers.")
            full_cert = full_cert[begin_index:(end_index + len(end_marker))]
            return full_cert
    else:
        log('Certificate not found', level=WARNING)
        return None

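The marker-trimming logic in `get_certificate()` can be exercised without touching disk. A minimal sketch, with `extract_certificate` as a hypothetical helper name:

```python
BEGIN_MARKER = "-----BEGIN CERTIFICATE-----"
END_MARKER = "-----END CERTIFICATE-----"


def extract_certificate(full_cert):
    """Trim any leading/trailing noise around a PEM certificate body,
    raising when the markers are missing (mirrors the logic above)."""
    begin_index = full_cert.find(BEGIN_MARKER)
    # rfind: if several END markers exist, keep everything up to the last one
    end_index = full_cert.rfind(END_MARKER)
    if end_index == -1 or begin_index == -1:
        raise RuntimeError("Certificate does not contain valid begin"
                           " and end markers.")
    return full_cert[begin_index:end_index + len(END_MARKER)]
```

Using `find` for the begin marker and `rfind` for the end marker means any ovs-pki banner text before the certificate and any trailing whitespace after it are discarded in one slice.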
def check_for_eni_source():
    ''' Juju removes the source line when setting up interfaces,
    replace if missing '''

    with open('/etc/network/interfaces', 'r') as eni:
        for line in eni:
            # strip() so the trailing newline does not defeat the match
            if line.strip() == 'source /etc/network/interfaces.d/*':
                return
    with open('/etc/network/interfaces', 'a') as eni:
        eni.write('\nsource /etc/network/interfaces.d/*')

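The same idempotent append pattern can be shown against a temporary file. A sketch under the assumption that any whitespace-insensitive match of the source line counts (`ensure_source_line` is an illustrative name):

```python
import os
import tempfile


def ensure_source_line(path, line='source /etc/network/interfaces.d/*'):
    """Append `line` to the file at `path` unless an equivalent line is
    already present. Returns True when the file was modified.

    Note the strip(): iterating a file yields lines with their trailing
    newline, so a raw equality test would never match."""
    with open(path, 'r') as eni:
        for existing in eni:
            if existing.strip() == line:
                return False
    with open(path, 'a') as eni:
        eni.write('\n' + line)
    return True
```

The second call on the same file is a no-op, which is exactly the property `check_for_eni_source()` relies on when hooks re-run.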
def full_restart():
    ''' Full restart and reload of openvswitch '''
    if os.path.exists('/etc/init/openvswitch-force-reload-kmod.conf'):
        service('start', 'openvswitch-force-reload-kmod')
    else:
        service('force-reload-kmod', 'openvswitch-switch')

def enable_ipfix(bridge, target):
    '''Enable IPFIX on bridge to target.
    :param bridge: Bridge to monitor
    :param target: IPFIX remote endpoint
    '''
    cmd = ['ovs-vsctl', 'set', 'Bridge', bridge, 'ipfix=@i', '--',
           '--id=@i', 'create', 'IPFIX', 'targets="{}"'.format(target)]
    log('Enabling IPFIX on {}.'.format(bridge))
    subprocess.check_call(cmd)


def disable_ipfix(bridge):
    '''Disable IPFIX on target bridge.
    :param bridge: Bridge to modify
    '''
    cmd = ['ovs-vsctl', 'clear', 'Bridge', bridge, 'ipfix']
    subprocess.check_call(cmd)

def port_to_br(port):
    '''Determine the bridge that contains a port
    :param port: Name of port to check for
    :returns str: OVS bridge containing port or None if not found
    '''
    try:
        return subprocess.check_output(
            ['ovs-vsctl', 'port-to-br', port]
        ).decode('UTF-8').strip()
    except subprocess.CalledProcessError:
        return None
@ -1,339 +0,0 @@
# Copyright 2014-2015 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
|
||||
This module contains helpers to add and remove ufw rules.
|
||||
|
||||
Examples:
|
||||
|
||||
- open SSH port for subnet 10.0.3.0/24:
|
||||
|
||||
>>> from charmhelpers.contrib.network import ufw
|
||||
>>> ufw.enable()
|
||||
>>> ufw.grant_access(src='10.0.3.0/24', dst='any', port='22', proto='tcp')
|
||||
|
||||
- open service by name as defined in /etc/services:
|
||||
|
||||
>>> from charmhelpers.contrib.network import ufw
|
||||
>>> ufw.enable()
|
||||
>>> ufw.service('ssh', 'open')
|
||||
|
||||
- close service by port number:
|
||||
|
||||
>>> from charmhelpers.contrib.network import ufw
|
||||
>>> ufw.enable()
|
||||
>>> ufw.service('4949', 'close') # munin
|
||||
"""
|
||||
import re
|
||||
import os
|
||||
import subprocess
|
||||
|
||||
from charmhelpers.core import hookenv
|
||||
from charmhelpers.core.kernel import modprobe, is_module_loaded
|
||||
|
||||
__author__ = "Felipe Reyes <felipe.reyes@canonical.com>"
|
||||
|
||||
|
||||
class UFWError(Exception):
|
||||
pass
|
||||
|
||||
|
||||
class UFWIPv6Error(UFWError):
|
||||
pass
|
||||
|
||||
|
||||
def is_enabled():
    """
    Check if `ufw` is enabled

    :returns: True if ufw is enabled
    """
    output = subprocess.check_output(['ufw', 'status'],
                                     universal_newlines=True,
                                     env={'LANG': 'en_US',
                                          'PATH': os.environ['PATH']})

    m = re.findall(r'^Status: active\n', output, re.M)

    return len(m) >= 1

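The status check reduces to a small multiline regex over the captured `ufw status` output; it can be tested against canned text. A sketch (`parse_ufw_status` is an illustrative name, not a ufw or charm-helpers API):

```python
import re


def parse_ufw_status(output):
    """Return True when `ufw status` output reports an active firewall,
    using the same anchored multiline regex as is_enabled() above."""
    # re.M makes ^ match at the start of every line, not just the string
    return len(re.findall(r'^Status: active\n', output, re.M)) >= 1
```

Anchoring on `^Status: active\n` avoids false positives from rule text that merely contains the word "active", and also rejects `Status: inactive` because the literal `\n` must follow immediately.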
def is_ipv6_ok(soft_fail=False):
    """
    Check if IPv6 support is present and ip6tables functional

    :param soft_fail: If set to True and IPv6 support is broken, then reports
                      that the host doesn't have IPv6 support, otherwise a
                      UFWIPv6Error exception is raised.
    :returns: True if IPv6 is working, False otherwise
    """

    # do we have IPv6 in the machine?
    if os.path.isdir('/proc/sys/net/ipv6'):
        # is ip6tables kernel module loaded?
        if not is_module_loaded('ip6_tables'):
            # ip6tables support isn't complete, let's try to load it
            try:
                modprobe('ip6_tables')
                # great, we can load the module
                return True
            except subprocess.CalledProcessError as ex:
                hookenv.log("Couldn't load ip6_tables module: %s" % ex.output,
                            level="WARN")
                # we are in a world where ip6tables isn't working
                if soft_fail:
                    # so we inform that the machine doesn't have IPv6
                    return False
                else:
                    raise UFWIPv6Error("IPv6 firewall support broken")
        else:
            # the module is present :)
            return True

    else:
        # the system doesn't have IPv6
        return False

def disable_ipv6():
    """
    Disable ufw IPv6 support in /etc/default/ufw
    """
    exit_code = subprocess.call(['sed', '-i', 's/IPV6=.*/IPV6=no/g',
                                 '/etc/default/ufw'])
    if exit_code == 0:
        hookenv.log('IPv6 support in ufw disabled', level='INFO')
    else:
        hookenv.log("Couldn't disable IPv6 support in ufw", level="ERROR")
        raise UFWError("Couldn't disable IPv6 support in ufw")

def enable(soft_fail=False):
    """
    Enable ufw

    :param soft_fail: If set to True silently disables IPv6 support in ufw,
                      otherwise a UFWIPv6Error exception is raised when IPv6
                      support is broken.
    :returns: True if ufw is successfully enabled
    """
    if is_enabled():
        return True

    if not is_ipv6_ok(soft_fail):
        disable_ipv6()

    output = subprocess.check_output(['ufw', 'enable'],
                                     universal_newlines=True,
                                     env={'LANG': 'en_US',
                                          'PATH': os.environ['PATH']})

    m = re.findall('^Firewall is active and enabled on system startup\n',
                   output, re.M)
    hookenv.log(output, level='DEBUG')

    if len(m) == 0:
        hookenv.log("ufw couldn't be enabled", level='WARN')
        return False
    else:
        hookenv.log("ufw enabled", level='INFO')
        return True

def reload():
    """
    Reload ufw

    :returns: True if ufw is successfully reloaded
    """
    output = subprocess.check_output(['ufw', 'reload'],
                                     universal_newlines=True,
                                     env={'LANG': 'en_US',
                                          'PATH': os.environ['PATH']})

    m = re.findall('^Firewall reloaded\n',
                   output, re.M)
    hookenv.log(output, level='DEBUG')

    if len(m) == 0:
        hookenv.log("ufw couldn't be reloaded", level='WARN')
        return False
    else:
        hookenv.log("ufw reloaded", level='INFO')
        return True

def disable():
    """
    Disable ufw

    :returns: True if ufw is successfully disabled
    """
    if not is_enabled():
        return True

    output = subprocess.check_output(['ufw', 'disable'],
                                     universal_newlines=True,
                                     env={'LANG': 'en_US',
                                          'PATH': os.environ['PATH']})

    m = re.findall(r'^Firewall stopped and disabled on system startup\n',
                   output, re.M)
    hookenv.log(output, level='DEBUG')

    if len(m) == 0:
        hookenv.log("ufw couldn't be disabled", level='WARN')
        return False
    else:
        hookenv.log("ufw disabled", level='INFO')
        return True

def default_policy(policy='deny', direction='incoming'):
    """
    Changes the default policy for traffic `direction`

    :param policy: allow, deny or reject
    :param direction: traffic direction, possible values: incoming, outgoing,
                      routed
    """
    if policy not in ['allow', 'deny', 'reject']:
        raise UFWError(('Unknown policy %s, valid values: '
                        'allow, deny, reject') % policy)

    if direction not in ['incoming', 'outgoing', 'routed']:
        raise UFWError(('Unknown direction %s, valid values: '
                        'incoming, outgoing, routed') % direction)

    output = subprocess.check_output(['ufw', 'default', policy, direction],
                                     universal_newlines=True,
                                     env={'LANG': 'en_US',
                                          'PATH': os.environ['PATH']})
    hookenv.log(output, level='DEBUG')

    m = re.findall("^Default %s policy changed to '%s'\n" % (direction,
                                                             policy),
                   output, re.M)
    if len(m) == 0:
        hookenv.log("ufw couldn't change the default policy to %s for %s"
                    % (policy, direction), level='WARN')
        return False
    else:
        hookenv.log("ufw default policy for %s changed to %s"
                    % (direction, policy), level='INFO')
        return True

def modify_access(src, dst='any', port=None, proto=None, action='allow',
                  index=None):
    """
    Grant or revoke access to an address or subnet

    :param src: address (e.g. 192.168.1.234) or subnet
                (e.g. 192.168.1.0/24).
    :param dst: destination of the connection; if the machine has multiple
                IPs and connections to only one of them should be accepted,
                this field has to be set.
    :param port: destination port
    :param proto: protocol (tcp or udp)
    :param action: `allow` or `delete`
    :param index: if different from None the rule is inserted at the given
                  `index`.
    """
    if not is_enabled():
        hookenv.log('ufw is disabled, skipping modify_access()', level='WARN')
        return

    if action == 'delete':
        cmd = ['ufw', 'delete', 'allow']
    elif index is not None:
        cmd = ['ufw', 'insert', str(index), action]
    else:
        cmd = ['ufw', action]

    if src is not None:
        cmd += ['from', src]

    if dst is not None:
        cmd += ['to', dst]

    if port is not None:
        cmd += ['port', str(port)]

    if proto is not None:
        cmd += ['proto', proto]

    hookenv.log('ufw {}: {}'.format(action, ' '.join(cmd)), level='DEBUG')
    # capture stderr as well, so the error log below has real content
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE)
    (stdout, stderr) = p.communicate()

    hookenv.log(stdout, level='INFO')

    if p.returncode != 0:
        hookenv.log(stderr, level='ERROR')
        hookenv.log('Error running: {}, exit code: {}'.format(' '.join(cmd),
                                                              p.returncode),
                    level='ERROR')

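The argument-list assembly in `modify_access()` is pure logic and easy to verify in isolation. A sketch that builds the command without executing it (`build_ufw_cmd` is a hypothetical helper, not part of this module):

```python
def build_ufw_cmd(src, dst='any', port=None, proto=None, action='allow',
                  index=None):
    """Assemble the ufw argument list the same way modify_access()
    does, returning it instead of running it."""
    if action == 'delete':
        # deletions always target a previously added 'allow' rule
        cmd = ['ufw', 'delete', 'allow']
    elif index is not None:
        cmd = ['ufw', 'insert', str(index), action]
    else:
        cmd = ['ufw', action]
    if src is not None:
        cmd += ['from', src]
    if dst is not None:
        cmd += ['to', dst]
    if port is not None:
        cmd += ['port', str(port)]
    if proto is not None:
        cmd += ['proto', proto]
    return cmd
```

Keeping the command as a list (rather than a joined string) is what lets the caller pass it straight to `subprocess` without shell quoting concerns.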
def grant_access(src, dst='any', port=None, proto=None, index=None):
    """
    Grant access to an address or subnet

    :param src: address (e.g. 192.168.1.234) or subnet
                (e.g. 192.168.1.0/24).
    :param dst: destination of the connection; if the machine has multiple
                IPs and connections to only one of them should be accepted,
                this field has to be set.
    :param port: destination port
    :param proto: protocol (tcp or udp)
    :param index: if different from None the rule is inserted at the given
                  `index`.
    """
    return modify_access(src, dst=dst, port=port, proto=proto, action='allow',
                         index=index)


def revoke_access(src, dst='any', port=None, proto=None):
    """
    Revoke access to an address or subnet

    :param src: address (e.g. 192.168.1.234) or subnet
                (e.g. 192.168.1.0/24).
    :param dst: destination of the connection; if the machine has multiple
                IPs and connections to only one of them should be accepted,
                this field has to be set.
    :param port: destination port
    :param proto: protocol (tcp or udp)
    """
    return modify_access(src, dst=dst, port=port, proto=proto, action='delete')

def service(name, action):
    """
    Open/close access to a service

    :param name: could be a service name defined in `/etc/services` or a port
                 number.
    :param action: `open` or `close`
    """
    if action == 'open':
        subprocess.check_output(['ufw', 'allow', str(name)],
                                universal_newlines=True)
    elif action == 'close':
        subprocess.check_output(['ufw', 'delete', 'allow', str(name)],
                                universal_newlines=True)
    else:
        raise UFWError(("'{}' not supported, use 'open' "
                        "or 'close'").format(action))
@ -1,13 +0,0 @@
# Copyright 2014-2015 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
@ -1,44 +0,0 @@
# Copyright 2014-2015 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

''' Helper for managing alternatives for file conflict resolution '''

import subprocess
import shutil
import os


def install_alternative(name, target, source, priority=50):
    ''' Install alternative configuration '''
    if (os.path.exists(target) and not os.path.islink(target)):
        # Move existing file/directory away before installing
        shutil.move(target, '{}.bak'.format(target))
    cmd = [
        'update-alternatives', '--force', '--install',
        target, name, source, str(priority)
    ]
    subprocess.check_call(cmd)


def remove_alternative(name, source):
    """Remove an installed alternative configuration file

    :param name: string name of the alternative to remove
    :param source: string full path to alternative to remove
    """
    cmd = [
        'update-alternatives', '--remove',
        name, source
    ]
    subprocess.check_call(cmd)
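The pre-step of `install_alternative()` (back up a real file, but leave an existing symlink alone so update-alternatives can manage it) can be demonstrated against a temp directory. A sketch with `backup_existing` as a hypothetical name:

```python
import os
import shutil
import tempfile


def backup_existing(target):
    """Move a real file or directory at `target` to `target`.bak,
    mirroring install_alternative()'s pre-step. Symlinks are left in
    place because update-alternatives owns them. Returns True when a
    backup was made."""
    if os.path.exists(target) and not os.path.islink(target):
        shutil.move(target, '{}.bak'.format(target))
        return True
    return False
```

The `islink` guard is the important part: without it, re-running the hook would clobber the `.bak` copy with the alternatives symlink itself.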
@ -1,13 +0,0 @@
# Copyright 2014-2015 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
@ -1,360 +0,0 @@
# Copyright 2014-2015 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging
import os
import re
import sys
import six
from collections import OrderedDict
from charmhelpers.contrib.amulet.deployment import (
    AmuletDeployment
)
from charmhelpers.contrib.openstack.amulet.utils import (
    OPENSTACK_RELEASES_PAIRS
)

DEBUG = logging.DEBUG
ERROR = logging.ERROR


class OpenStackAmuletDeployment(AmuletDeployment):
    """OpenStack amulet deployment.

    This class inherits from AmuletDeployment and has additional support
    that is specifically for use by OpenStack charms.
    """

    def __init__(self, series=None, openstack=None, source=None,
                 stable=True, log_level=DEBUG):
        """Initialize the deployment environment."""
        super(OpenStackAmuletDeployment, self).__init__(series)
        self.log = self.get_logger(level=log_level)
        self.log.info('OpenStackAmuletDeployment: init')
        self.openstack = openstack
        self.source = source
        self.stable = stable

    def get_logger(self, name="deployment-logger", level=logging.DEBUG):
        """Get a logger object that will log to stdout."""
        log = logging
        logger = log.getLogger(name)
        fmt = log.Formatter("%(asctime)s %(funcName)s "
                            "%(levelname)s: %(message)s")

        handler = log.StreamHandler(stream=sys.stdout)
        handler.setLevel(level)
        handler.setFormatter(fmt)

        logger.addHandler(handler)
        logger.setLevel(level)

        return logger

    def _determine_branch_locations(self, other_services):
        """Determine the branch locations for the other services.

        Determine if the local branch being tested is derived from its
        stable or next (dev) branch, and based on this, use the
        corresponding stable or next branches for the other_services."""

        self.log.info('OpenStackAmuletDeployment: determine branch locations')

        # Charms outside the ~openstack-charmers namespace
        base_charms = {
            'mysql': ['trusty'],
            'mongodb': ['trusty'],
            'nrpe': ['trusty', 'xenial'],
        }

        for svc in other_services:
            # If a location has been explicitly set, use it
            if svc.get('location'):
                continue
            if svc['name'] in base_charms:
                # NOTE: not all charms have support for all series we
                # want/need to test against, so fix to most recent
                # that each base charm supports
                target_series = self.series
                if self.series not in base_charms[svc['name']]:
                    target_series = base_charms[svc['name']][-1]
                svc['location'] = 'cs:{}/{}'.format(target_series,
                                                    svc['name'])
            elif self.stable:
                svc['location'] = 'cs:{}/{}'.format(self.series,
                                                    svc['name'])
            else:
                svc['location'] = 'cs:~openstack-charmers-next/{}/{}'.format(
                    self.series,
                    svc['name']
                )

        return other_services

    def _add_services(self, this_service, other_services, use_source=None,
                      no_origin=None):
        """Add services to the deployment and optionally set
        openstack-origin/source.

        :param this_service dict: Service dictionary describing the service
                                  whose amulet tests are being run
        :param other_services dict: List of service dictionaries describing
                                    the services needed to support the target
                                    service
        :param use_source list: List of services which use the 'source' config
                                option rather than 'openstack-origin'
        :param no_origin list: List of services which do not support setting
                               the Cloud Archive.
        Service Dict:
            {
                'name': str charm-name,
                'units': int number of units,
                'constraints': dict of juju constraints,
                'location': str location of charm,
            }
        eg
        this_service = {
            'name': 'openvswitch-odl',
            'constraints': {'mem': '8G'},
        }
        other_services = [
            {
                'name': 'nova-compute',
                'units': 2,
                'constraints': {'mem': '4G'},
                'location': 'cs:~bob/xenial/nova-compute',
            },
            {
                'name': 'mysql',
                'constraints': {'mem': '2G'},
            },
            {'name': 'neutron-api-odl'}]
        use_source = ['mysql']
        no_origin = ['neutron-api-odl']
        """
        self.log.info('OpenStackAmuletDeployment: adding services')

        other_services = self._determine_branch_locations(other_services)

        super(OpenStackAmuletDeployment, self)._add_services(this_service,
                                                             other_services)

        services = other_services
        services.append(this_service)

        use_source = use_source or []
        no_origin = no_origin or []

        # Charms which should use the source config option
        use_source = list(set(
            use_source + ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
                          'ceph-osd', 'ceph-radosgw', 'ceph-mon',
                          'ceph-proxy', 'percona-cluster', 'lxd']))

        # Charms which cannot use openstack-origin, ie. many subordinates
        no_origin = list(set(
            no_origin + ['cinder-ceph', 'hacluster', 'neutron-openvswitch',
                         'nrpe', 'openvswitch-odl', 'neutron-api-odl',
                         'odl-controller', 'cinder-backup', 'nexentaedge-data',
                         'nexentaedge-iscsi-gw', 'nexentaedge-swift-gw',
                         'cinder-nexentaedge', 'nexentaedge-mgmt',
                         'ceilometer-agent']))

        if self.openstack:
            for svc in services:
                if svc['name'] not in use_source + no_origin:
                    config = {'openstack-origin': self.openstack}
                    self.d.configure(svc['name'], config)

        if self.source:
            for svc in services:
                if svc['name'] in use_source and svc['name'] not in no_origin:
                    config = {'source': self.source}
                    self.d.configure(svc['name'], config)

    def _configure_services(self, configs):
        """Configure all of the services."""
        self.log.info('OpenStackAmuletDeployment: configure services')
        for service, config in six.iteritems(configs):
            self.d.configure(service, config)

    def _auto_wait_for_status(self, message=None, exclude_services=None,
                              include_only=None, timeout=None):
        """Wait for all units to have a specific extended status, except
        for any defined as excluded. Unless specified via message, any
        status containing any case of 'ready' will be considered a match.

        Examples of message usage:

          Wait for all unit status to CONTAIN any case of 'ready' or 'ok':
              message = re.compile('.*ready.*|.*ok.*', re.IGNORECASE)

          Wait for all units to reach this status (exact match):
              message = re.compile('^Unit is ready and clustered$')

          Wait for all units to reach any one of these (exact match):
              message = re.compile('Unit is ready|OK|Ready')

          Wait for at least one unit to reach this status (exact match):
              message = {'ready'}

        See Amulet's sentry.wait_for_messages() for message usage detail.
        https://github.com/juju/amulet/blob/master/amulet/sentry.py

        :param message: Expected status match
        :param exclude_services: List of juju service names to ignore,
            not to be used in conjunction with include_only.
        :param include_only: List of juju service names to exclusively check,
            not to be used in conjunction with exclude_services.
        :param timeout: Maximum time in seconds to wait for status match
        :returns: None. Raises if timeout is hit.
        """
        if not timeout:
            timeout = int(os.environ.get('AMULET_SETUP_TIMEOUT', 1800))
        self.log.info('Waiting for extended status on units for {}s...'
                      ''.format(timeout))

        all_services = self.d.services.keys()

        if exclude_services and include_only:
            raise ValueError('exclude_services can not be used '
                             'with include_only')

        if message:
            if isinstance(message, re._pattern_type):
                match = message.pattern
            else:
                match = message

            self.log.debug('Custom extended status wait match: '
                           '{}'.format(match))
        else:
            self.log.debug('Default extended status wait match: contains '
                           'READY (case-insensitive)')
            message = re.compile('.*ready.*', re.IGNORECASE)

        if exclude_services:
            self.log.debug('Excluding services from extended status match: '
                           '{}'.format(exclude_services))
        else:
            exclude_services = []

        if include_only:
            services = include_only
        else:
            services = list(set(all_services) - set(exclude_services))

        self.log.debug('Waiting up to {}s for extended status on services: '
                       '{}'.format(timeout, services))
        service_messages = {service: message for service in services}

        # Check for idleness
        self.d.sentry.wait(timeout=timeout)
        # Check for error states and bail early
        self.d.sentry.wait_for_status(self.d.juju_env, services,
                                      timeout=timeout)
        # Check for ready messages
        self.d.sentry.wait_for_messages(service_messages, timeout=timeout)

        self.log.info('OK')

    def _get_openstack_release(self):
        """Get openstack release.

        Return an integer representing the enum value of the openstack
        release.
        """
        # Must be ordered by OpenStack release (not by Ubuntu release):
        for i, os_pair in enumerate(OPENSTACK_RELEASES_PAIRS):
            setattr(self, os_pair, i)

        releases = {
            ('trusty', None): self.trusty_icehouse,
            ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo,
            ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
            ('trusty', 'cloud:trusty-mitaka'): self.trusty_mitaka,
            ('xenial', None): self.xenial_mitaka,
            ('xenial', 'cloud:xenial-newton'): self.xenial_newton,
            ('xenial', 'cloud:xenial-ocata'): self.xenial_ocata,
            ('xenial', 'cloud:xenial-pike'): self.xenial_pike,
            ('xenial', 'cloud:xenial-queens'): self.xenial_queens,
            ('yakkety', None): self.yakkety_newton,
            ('zesty', None): self.zesty_ocata,
            ('artful', None): self.artful_pike,
            ('bionic', None): self.bionic_queens,
            ('bionic', 'cloud:bionic-rocky'): self.bionic_rocky,
            ('bionic', 'cloud:bionic-stein'): self.bionic_stein,
            ('cosmic', None): self.cosmic_rocky,
            ('disco', None): self.disco_stein,
        }
        return releases[(self.series, self.openstack)]

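The enum construction at the top of `_get_openstack_release()` can be sketched in isolation: each release-pair name becomes an attribute whose value is its ordinal position, which is what makes range comparisons like `self.trusty_kilo <= release <= self.zesty_ocata` work. The pair list below is illustrative only; the authoritative ordering lives in `OPENSTACK_RELEASES_PAIRS` in charmhelpers:

```python
# Illustrative subset, ordered by OpenStack release (not Ubuntu series);
# the real list is imported from charmhelpers.contrib.openstack.amulet.utils
RELEASES_PAIRS = [
    'trusty_icehouse', 'trusty_kilo', 'trusty_liberty', 'trusty_mitaka',
    'xenial_mitaka', 'xenial_newton', 'xenial_ocata', 'xenial_pike',
    'xenial_queens', 'bionic_queens', 'bionic_rocky', 'bionic_stein',
]


class Releases(object):
    """Bare namespace to hang the release enum attributes on."""
    pass


rel = Releases()
for i, os_pair in enumerate(RELEASES_PAIRS):
    # e.g. rel.trusty_icehouse == 0, rel.bionic_stein == 11
    setattr(rel, os_pair, i)
```

Because the values are list positions, "newer release" is simply "larger integer", so ordering the list correctly is the only invariant that matters.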
||||
def _get_openstack_release_string(self):
|
||||
"""Get openstack release string.
|
||||
|
||||
Return a string representing the openstack release.
|
||||
"""
|
||||
releases = OrderedDict([
|
||||
('trusty', 'icehouse'),
|
||||
('xenial', 'mitaka'),
|
||||
('yakkety', 'newton'),
|
||||
('zesty', 'ocata'),
|
||||
('artful', 'pike'),
|
||||
('bionic', 'queens'),
|
||||
('cosmic', 'rocky'),
|
||||
])
|
||||
if self.openstack:
|
||||
os_origin = self.openstack.split(':')[1]
|
||||
return os_origin.split('%s-' % self.series)[1].split('/')[0]
|
||||
else:
|
||||
return releases[self.series]
|
||||
|
||||
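The origin-parsing branch of `_get_openstack_release_string` can be sketched standalone. `parse_release` here is a hypothetical helper reproducing the same split logic for an openstack-origin value such as `cloud:xenial-pike/proposed`:

```python
def parse_release(series, openstack_origin):
    # Mirrors the split logic above:
    # 'cloud:xenial-pike/proposed' -> 'xenial-pike/proposed' -> 'pike'
    os_origin = openstack_origin.split(':')[1]
    return os_origin.split('%s-' % series)[1].split('/')[0]


print(parse_release('xenial', 'cloud:xenial-pike'))            # pike
print(parse_release('bionic', 'cloud:bionic-rocky/proposed'))  # rocky
```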
    def get_ceph_expected_pools(self, radosgw=False):
        """Return a list of expected ceph pools in a ceph + cinder + glance
        test scenario, based on OpenStack release and whether ceph radosgw
        is flagged as present or not."""

        if self._get_openstack_release() == self.trusty_icehouse:
            # Icehouse
            pools = [
                'data',
                'metadata',
                'rbd',
                'cinder-ceph',
                'glance'
            ]
        elif (self.trusty_kilo <= self._get_openstack_release() <=
                self.zesty_ocata):
            # Kilo through Ocata
            pools = [
                'rbd',
                'cinder-ceph',
                'glance'
            ]
        else:
            # Pike and later
            pools = [
                'cinder-ceph',
                'glance'
            ]

        if radosgw:
            pools.extend([
                '.rgw.root',
                '.rgw.control',
                '.rgw',
                '.rgw.gc',
                '.users.uid'
            ])

        return pools
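The release-dependent pool selection above can be expressed as a pure function. This is a hypothetical standalone sketch: `release`, `icehouse`, `kilo`, and `ocata` stand in for the ordinal enum values that `_get_openstack_release()` assigns:

```python
# Standalone sketch of get_ceph_expected_pools(); release values are
# ordinals in OpenStack release order, as set by _get_openstack_release().
def expected_pools(release, icehouse, kilo, ocata, radosgw=False):
    if release == icehouse:
        pools = ['data', 'metadata', 'rbd', 'cinder-ceph', 'glance']
    elif kilo <= release <= ocata:
        pools = ['rbd', 'cinder-ceph', 'glance']
    else:
        # Pike and later
        pools = ['cinder-ceph', 'glance']
    if radosgw:
        pools.extend(['.rgw.root', '.rgw.control', '.rgw',
                      '.rgw.gc', '.users.uid'])
    return pools


print(expected_pools(0, 0, 1, 5))               # Icehouse pools
print(expected_pools(6, 0, 1, 5, radosgw=True))  # Pike+ with radosgw pools
```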
File diff suppressed because it is too large
@ -1,275 +0,0 @@
# Copyright 2014-2018 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Common python helper functions used for OpenStack charm certificates.

import os
import json

from charmhelpers.contrib.network.ip import (
    get_hostname,
    resolve_network_cidr,
)
from charmhelpers.core.hookenv import (
    local_unit,
    network_get_primary_address,
    config,
    related_units,
    relation_get,
    relation_ids,
    unit_get,
    NoNetworkBinding,
    log,
    WARNING,
)
from charmhelpers.contrib.openstack.ip import (
    ADMIN,
    resolve_address,
    get_vip_in_network,
    INTERNAL,
    PUBLIC,
    ADDRESS_MAP)

from charmhelpers.core.host import (
    mkdir,
    write_file,
)

from charmhelpers.contrib.hahelpers.apache import (
    install_ca_cert
)


class CertRequest(object):

    """Create a request for certificates to be generated
    """

    def __init__(self, json_encode=True):
        self.entries = []
        self.hostname_entry = None
        self.json_encode = json_encode

    def add_entry(self, net_type, cn, addresses):
        """Add a request to the batch

        :param net_type: str network space name request is for
        :param cn: str Canonical Name for certificate
        :param addresses: [] List of addresses to be used as SANs
        """
        self.entries.append({
            'cn': cn,
            'addresses': addresses})

    def add_hostname_cn(self):
        """Add a request for the hostname of the machine"""
        ip = unit_get('private-address')
        addresses = [ip]
        # If a vip is being used without os-hostname config or
        # network spaces then we need to ensure the local unit's
        # cert has the appropriate vip in the SAN list
        vip = get_vip_in_network(resolve_network_cidr(ip))
        if vip:
            addresses.append(vip)
        self.hostname_entry = {
            'cn': get_hostname(ip),
            'addresses': addresses}

    def add_hostname_cn_ip(self, addresses):
        """Add addresses to the SAN list for the hostname request

        :param addresses: [] List of addresses to be added
        """
        for addr in addresses:
            if addr not in self.hostname_entry['addresses']:
                self.hostname_entry['addresses'].append(addr)

    def get_request(self):
        """Generate request from the batched up entries

        """
        if self.hostname_entry:
            self.entries.append(self.hostname_entry)
        request = {}
        for entry in self.entries:
            sans = sorted(list(set(entry['addresses'])))
            request[entry['cn']] = {'sans': sans}
        if self.json_encode:
            return {'cert_requests': json.dumps(request, sort_keys=True)}
        else:
            return {'cert_requests': request}

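The batching and JSON encoding done by `get_request` can be exercised without a Juju environment. This sketch reimplements only that part (no hookenv calls) to show the wire format; the CNs and addresses are placeholder values:

```python
import json

# Entries shaped as CertRequest.add_entry() stores them; CNs and
# addresses here are hypothetical examples.
entries = [
    {'cn': 'keystone.internal',
     'addresses': ['10.0.0.10', '10.0.0.10', '10.0.0.50']},
    {'cn': 'keystone.example.com', 'addresses': ['192.0.2.7']},
]

request = {}
for entry in entries:
    # SANs are de-duplicated and sorted, as in get_request()
    sans = sorted(list(set(entry['addresses'])))
    request[entry['cn']] = {'sans': sans}

payload = {'cert_requests': json.dumps(request, sort_keys=True)}
print(payload['cert_requests'])
```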
def get_certificate_request(json_encode=True):
    """Generate a certificate request based on the network configuration

    """
    req = CertRequest(json_encode=json_encode)
    req.add_hostname_cn()
    # Add os-hostname entries
    for net_type in [INTERNAL, ADMIN, PUBLIC]:
        net_config = config(ADDRESS_MAP[net_type]['override'])
        try:
            net_addr = resolve_address(endpoint_type=net_type)
            ip = network_get_primary_address(
                ADDRESS_MAP[net_type]['binding'])
            addresses = [net_addr, ip]
            vip = get_vip_in_network(resolve_network_cidr(ip))
            if vip:
                addresses.append(vip)
            if net_config:
                req.add_entry(
                    net_type,
                    net_config,
                    addresses)
            else:
                # There is a network address with no corresponding hostname.
                # Add the ip to the hostname cert to allow for this.
                req.add_hostname_cn_ip(addresses)
        except NoNetworkBinding:
            log("Skipping request for certificate for ip in {} space, no "
                "local address found".format(net_type), WARNING)
    return req.get_request()


def create_ip_cert_links(ssl_dir, custom_hostname_link=None):
    """Create symlinks for SAN records

    :param ssl_dir: str Directory to create symlinks in
    :param custom_hostname_link: str Additional link to be created
    """
    hostname = get_hostname(unit_get('private-address'))
    hostname_cert = os.path.join(
        ssl_dir,
        'cert_{}'.format(hostname))
    hostname_key = os.path.join(
        ssl_dir,
        'key_{}'.format(hostname))
    # Add links to hostname cert, used if os-hostname vars not set
    for net_type in [INTERNAL, ADMIN, PUBLIC]:
        try:
            addr = resolve_address(endpoint_type=net_type)
            cert = os.path.join(ssl_dir, 'cert_{}'.format(addr))
            key = os.path.join(ssl_dir, 'key_{}'.format(addr))
            if os.path.isfile(hostname_cert) and not os.path.isfile(cert):
                os.symlink(hostname_cert, cert)
                os.symlink(hostname_key, key)
        except NoNetworkBinding:
            log("Skipping creating cert symlink for ip in {} space, no "
                "local address found".format(net_type), WARNING)
    if custom_hostname_link:
        custom_cert = os.path.join(
            ssl_dir,
            'cert_{}'.format(custom_hostname_link))
        custom_key = os.path.join(
            ssl_dir,
            'key_{}'.format(custom_hostname_link))
        if os.path.isfile(hostname_cert) and not os.path.isfile(custom_cert):
            os.symlink(hostname_cert, custom_cert)
            os.symlink(hostname_key, custom_key)


def install_certs(ssl_dir, certs, chain=None):
    """Install the certs passed into the ssl dir and append the chain if
    provided.

    :param ssl_dir: str Directory to write certs and keys to
    :param certs: {} {'cn': {'cert': 'CERT', 'key': 'KEY'}}
    :param chain: str Chain to be appended to certs
    """
    for cn, bundle in certs.items():
        cert_filename = 'cert_{}'.format(cn)
        key_filename = 'key_{}'.format(cn)
        cert_data = bundle['cert']
        if chain:
            # Append chain file so that clients that trust the root CA will
            # trust certs signed by an intermediate in the chain
            cert_data = cert_data + os.linesep + chain
        write_file(
            path=os.path.join(ssl_dir, cert_filename),
            content=cert_data, perms=0o640)
        write_file(
            path=os.path.join(ssl_dir, key_filename),
            content=bundle['key'], perms=0o640)

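The chain-appending behaviour of `install_certs` can be shown with plain file I/O in place of charmhelpers' `write_file()`. This is a minimal sketch, not the charm's implementation; the cert/key text is placeholder data, not real PEM material:

```python
import os
import tempfile


def install_certs_sketch(ssl_dir, certs, chain=None):
    # Same shape as install_certs(): certs is {'cn': {'cert': ..., 'key': ...}}
    for cn, bundle in certs.items():
        cert_data = bundle['cert']
        if chain:
            # Clients trusting the root CA will then also trust certs
            # signed by an intermediate in the appended chain.
            cert_data = cert_data + os.linesep + chain
        cert_path = os.path.join(ssl_dir, 'cert_{}'.format(cn))
        key_path = os.path.join(ssl_dir, 'key_{}'.format(cn))
        with open(cert_path, 'w') as f:
            f.write(cert_data)
        os.chmod(cert_path, 0o640)  # matches perms=0o640 above
        with open(key_path, 'w') as f:
            f.write(bundle['key'])
        os.chmod(key_path, 0o640)


ssl_dir = tempfile.mkdtemp()
install_certs_sketch(ssl_dir,
                     {'unit0': {'cert': 'CERT', 'key': 'KEY'}},
                     chain='CHAIN')
with open(os.path.join(ssl_dir, 'cert_unit0')) as f:
    print(f.read())  # cert text with the chain appended
```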
def process_certificates(service_name, relation_id, unit,
                         custom_hostname_link=None):
    """Process the certificates supplied down the relation

    :param service_name: str Name of service the certificates are for.
    :param relation_id: str Relation id providing the certs
    :param unit: str Unit providing the certs
    :param custom_hostname_link: str Name of custom link to create
    """
    data = relation_get(rid=relation_id, unit=unit)
    ssl_dir = os.path.join('/etc/apache2/ssl/', service_name)
    mkdir(path=ssl_dir)
    name = local_unit().replace('/', '_')
    certs = data.get('{}.processed_requests'.format(name))
    chain = data.get('chain')
    ca = data.get('ca')
    if certs:
        certs = json.loads(certs)
        install_ca_cert(ca.encode())
        install_certs(ssl_dir, certs, chain)
        create_ip_cert_links(
            ssl_dir,
            custom_hostname_link=custom_hostname_link)


def get_requests_for_local_unit(relation_name=None):
    """Extract any certificates data targeted at this unit down relation_name.

    :param relation_name: str Name of relation to check for data.
    :returns: List of bundles of certificates.
    :rtype: List of dicts
    """
    local_name = local_unit().replace('/', '_')
    raw_certs_key = '{}.processed_requests'.format(local_name)
    relation_name = relation_name or 'certificates'
    bundles = []
    for rid in relation_ids(relation_name):
        for unit in related_units(rid):
            data = relation_get(rid=rid, unit=unit)
            if data.get(raw_certs_key):
                bundles.append({
                    'ca': data['ca'],
                    'chain': data.get('chain'),
                    'certs': json.loads(data[raw_certs_key])})
    return bundles


def get_bundle_for_cn(cn, relation_name=None):
    """Extract certificates for the given cn.

    :param cn: str Canonical Name on certificate.
    :param relation_name: str Relation to check for certificates down.
    :returns: Dictionary of certificate data.
    :rtype: dict.
    """
    entries = get_requests_for_local_unit(relation_name)
    cert_bundle = {}
    for entry in entries:
        for _cn, bundle in entry['certs'].items():
            if _cn == cn:
                cert_bundle = {
                    'cert': bundle['cert'],
                    'key': bundle['key'],
                    'chain': entry['chain'],
                    'ca': entry['ca']}
                break
        if cert_bundle:
            break
    return cert_bundle
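The CN lookup in `get_bundle_for_cn` can be demonstrated on plain dicts shaped like the return value of `get_requests_for_local_unit`. This is a standalone sketch; the CN and certificate strings are placeholders:

```python
# Standalone sketch of the CN search in get_bundle_for_cn(), operating
# on bundles shaped like get_requests_for_local_unit()'s return value.
def bundle_for_cn(cn, entries):
    for entry in entries:
        for _cn, bundle in entry['certs'].items():
            if _cn == cn:
                return {'cert': bundle['cert'], 'key': bundle['key'],
                        'chain': entry['chain'], 'ca': entry['ca']}
    return {}  # no match found, as in the original


entries = [{'ca': 'CA', 'chain': None,
            'certs': {'unit0.example': {'cert': 'C0', 'key': 'K0'}}}]
print(bundle_for_cn('unit0.example', entries))
print(bundle_for_cn('missing', entries))  # {}
```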
File diff suppressed because it is too large
@ -1,21 +0,0 @@
# Copyright 2016 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


class OSContextError(Exception):
    """Raised when an error occurs during context generation.

    This exception is principally used in contrib.openstack.context
    """
    pass
Some files were not shown because too many files have changed in this diff