Retire repo

This repo was created by accident; use deb-python-oslo.concurrency
instead.

Needed-By: I1ac1a06931c8b6dd7c2e73620a0302c29e605f03
Change-Id: I81894aea69b9d09b0977039623c26781093a397a
Andreas Jaeger 2017-04-17 19:34:53 +02:00
parent 2e8d5481b1
commit 95d65706ce
59 changed files with 13 additions and 4313 deletions

@@ -1,8 +0,0 @@
[run]
branch = True
source = oslo_concurrency
omit = oslo_concurrency/tests/*
[report]
ignore_errors = True
precision = 2

.gitignore
@@ -1,55 +0,0 @@
*.py[cod]
# C extensions
*.so
# Packages
*.egg
*.egg-info
dist
build
eggs
parts
bin
var
sdist
develop-eggs
.installed.cfg
lib
lib64
# Installer logs
pip-log.txt
# Unit test / coverage reports
.coverage
cover
.tox
nosetests.xml
.testrepository
# Translations
*.mo
# Mr Developer
.mr.developer.cfg
.project
.pydevproject
# Complexity
output/*.html
output/*/index.html
# Sphinx
doc/build
# pbr generates these
AUTHORS
ChangeLog
# Editors
*~
.*.swp
# reno build
releasenotes/build

@@ -1,4 +0,0 @@
[gerrit]
host=review.openstack.org
port=29418
project=openstack/oslo.concurrency.git

@@ -1,3 +0,0 @@
# Format is:
# <preferred e-mail> <other e-mail 1>
# <preferred e-mail> <other e-mail 2>

@@ -1,7 +0,0 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \
${PYTHON:-python} -m subunit.run discover -t ./ . $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list

@@ -1,16 +0,0 @@
If you would like to contribute to the development of OpenStack,
you must follow the steps on this page:
http://docs.openstack.org/infra/manual/developers.html
Once those steps have been completed, changes to OpenStack
should be submitted for review via the Gerrit tool, following
the workflow documented at:
http://docs.openstack.org/infra/manual/developers.html#development-workflow
Pull requests submitted through GitHub will be ignored.
Bugs should be filed on Launchpad, not GitHub:
https://bugs.launchpad.net/oslo.concurrency

@@ -1,4 +0,0 @@
oslo.concurrency Style Commandments
======================================================
Read the OpenStack Style Commandments http://docs.openstack.org/developer/hacking/

LICENSE
@@ -1,175 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

@@ -1,20 +0,0 @@
==================
oslo.concurrency
==================
.. image:: https://img.shields.io/pypi/v/oslo.concurrency.svg
:target: https://pypi.python.org/pypi/oslo.concurrency/
:alt: Latest Version
.. image:: https://img.shields.io/pypi/dm/oslo.concurrency.svg
:target: https://pypi.python.org/pypi/oslo.concurrency/
:alt: Downloads
The oslo.concurrency library has utilities for safely running multi-thread,
multi-process applications using locking mechanisms and for running
external processes.
* Free software: Apache license
* Documentation: http://docs.openstack.org/developer/oslo.concurrency
* Source: http://git.openstack.org/cgit/openstack/oslo.concurrency
* Bugs: http://bugs.launchpad.net/oslo.concurrency

README.txt
@@ -0,0 +1,13 @@
This project is no longer maintained.
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
Instead, use the project deb-python-oslo.concurrency at
http://git.openstack.org/cgit/openstack/deb-python-oslo.concurrency .
For any further questions, please email
openstack-dev@lists.openstack.org or join #openstack-dev on
Freenode.

@@ -1 +0,0 @@
[python: **.py]

@@ -1,8 +0,0 @@
===========================================
:mod:`oslo_concurrency.fixture.lockutils`
===========================================
.. automodule:: oslo_concurrency.fixture.lockutils
:members:
:undoc-members:
:show-inheritance:

@@ -1,8 +0,0 @@
.. toctree::
:maxdepth: 1
fixture.lockutils
lockutils
opts
processutils
watchdog

@@ -1,8 +0,0 @@
===================================
:mod:`oslo_concurrency.lockutils`
===================================
.. automodule:: oslo_concurrency.lockutils
:members:
:undoc-members:
:show-inheritance:

@@ -1,8 +0,0 @@
==============================
:mod:`oslo_concurrency.opts`
==============================
.. automodule:: oslo_concurrency.opts
:members:
:undoc-members:
:show-inheritance:

@@ -1,8 +0,0 @@
======================================
:mod:`oslo_concurrency.processutils`
======================================
.. automodule:: oslo_concurrency.processutils
:members:
:undoc-members:
:show-inheritance:

@@ -1,8 +0,0 @@
==================================
:mod:`oslo_concurrency.watchdog`
==================================
.. automodule:: oslo_concurrency.watchdog
:members:
:undoc-members:
:show-inheritance:

@@ -1,80 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
sys.path.insert(0, os.path.abspath('../..'))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc',
'oslosphinx',
'oslo_config.sphinxext',
]
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable
# A list of glob-style patterns that should be excluded when looking for source
# files.
exclude_patterns = []
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'oslo.concurrency'
copyright = u'2014, OpenStack Foundation'
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
# html_theme = '_theme'
# html_static_path = ['static']
# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
('index',
'%s.tex' % project,
u'%s Documentation' % project,
u'OpenStack Foundation', 'manual'),
]
# Example configuration for intersphinx: refer to the Python standard library.
# intersphinx_mapping = {'http://docs.python.org/': None}

@@ -1,5 +0,0 @@
==============
Contributing
==============
.. include:: ../../CONTRIBUTING.rst

@@ -1 +0,0 @@
.. include:: ../../ChangeLog

@@ -1,40 +0,0 @@
============================================
Welcome to oslo.concurrency's documentation!
============================================
The `oslo`_ concurrency library has utilities for safely running multi-thread,
multi-process applications using locking mechanisms and for running
external processes.
.. toctree::
:maxdepth: 1
installation
usage
opts
contributing
API Documentation
=================
.. toctree::
:maxdepth: 1
api/index
Release Notes
=============
.. toctree::
:maxdepth: 1
history
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
.. _oslo: https://wiki.openstack.org/wiki/Oslo

@@ -1,12 +0,0 @@
============
Installation
============
At the command line::
$ pip install oslo.concurrency
Or, if you have virtualenvwrapper installed::
$ mkvirtualenv oslo.concurrency
$ pip install oslo.concurrency

@@ -1,8 +0,0 @@
=======================
Configuration Options
=======================
oslo.concurrency uses oslo.config to define and manage configuration options
to allow the deployer to control how an application uses this library.
.. show-options:: oslo.concurrency

@@ -1,74 +0,0 @@
=======
Usage
=======
To use oslo.concurrency in a project, import the relevant module. For
example::
from oslo_concurrency import lockutils
from oslo_concurrency import processutils
.. seealso::
* :doc:`API Documentation <api/index>`
Locking a function (local to a process)
=======================================
To ensure that a function (which is not thread safe) is only used in
a thread-safe manner (ideally such a function should be refactored to
avoid this problem, but when that is not possible the following can help)::
@lockutils.synchronized('not_thread_safe')
def not_thread_safe():
pass
Once decorated, callers of this function can call into it with the
contract that two threads will **not** enter it at the same time. Make
sure that the lock names used are carefully chosen (typically by
namespacing them to your app so that other apps will not choose the
same names).
Locking a function (local to a process as well as across process)
=================================================================
To ensure that a function (which is not thread safe **or** multi-process
safe) is only used in a safe manner (ideally such a function should be
refactored to avoid this problem, but when that is not possible the
following can help)::
@lockutils.synchronized('not_thread_process_safe', external=True)
def not_thread_process_safe():
pass
Once decorated, callers of this function can call into it with the
contract that two threads (or any two processes) will **not** enter it
at the same time. Make sure that the lock names used are carefully
chosen (typically by namespacing them to your app so that other apps
will not choose the same names).
Common ways to prefix/namespace the synchronized decorator
==========================================================
Since it is **highly** recommended to prefix (or namespace) the lock
names passed to ``synchronized``, there are a few helpers that make this
much easier to achieve.
An example is::
myapp_synchronized = lockutils.synchronized_with_prefix("myapp")
Further code would then use this newly created decorator rather than
calling ``lockutils.synchronized`` directly.
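A hedged sketch of how the prefixed decorator might then be applied
(``refresh_cache`` is a hypothetical function name used only for
illustration; external locking additionally requires ``lock_path`` to be
configured)::

    myapp_synchronized = lockutils.synchronized_with_prefix("myapp")

    @myapp_synchronized('cache', external=True)
    def refresh_cache():
        pass  # the on-disk lock file name carries the "myapp" prefix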
Command Line Wrapper
====================
``oslo.concurrency`` includes a command line tool for use in test jobs
that need the environment variable :envvar:`OSLO_LOCK_PATH` set. To
use it, prefix the command to be run with
:command:`lockutils-wrapper`. For example::
$ lockutils-wrapper env | grep OSLO_LOCK_PATH
OSLO_LOCK_PATH=/tmp/tmpbFHK45

@@ -1,32 +0,0 @@
# Copyright 2014 Mirantis Inc.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import oslo_i18n
_translators = oslo_i18n.TranslatorFactory(domain='oslo_concurrency')
# The primary translation function using the well-known name "_"
_ = _translators.primary
# Translators for log levels.
#
# The abbreviated names are meant to reflect the usual use of a short
# name like '_'. The "L" is for "log" and the other letter comes from
# the level.
_LI = _translators.log_info
_LW = _translators.log_warning
_LE = _translators.log_error
_LC = _translators.log_critical

@@ -1,78 +0,0 @@
# Copyright 2011 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import fixtures
from oslo_config import fixture as config
from oslo_concurrency import lockutils
class LockFixture(fixtures.Fixture):
"""External locking fixture.
This fixture is basically an alternative to the synchronized decorator with
the external flag so that tearDowns and addCleanups will be included in
the lock context for locking between tests. The fixture is recommended to
be the first line in a test method, like so::
def test_method(self):
self.useFixture(LockFixture('lock_name'))
...
or the first line in setUp if all the test methods in the class are
required to be serialized. Something like::
class TestCase(testtools.TestCase):
def setUp(self):
self.useFixture(LockFixture('lock_name'))
super(TestCase, self).setUp()
...
This is because addCleanups are put on a LIFO queue that gets run after the
test method exits (either by completing or by raising an exception).
"""
def __init__(self, name, lock_file_prefix=None):
self.mgr = lockutils.lock(name, lock_file_prefix, True)
def setUp(self):
super(LockFixture, self).setUp()
self.addCleanup(self.mgr.__exit__, None, None, None)
self.lock = self.mgr.__enter__()
class ExternalLockFixture(fixtures.Fixture):
"""Configure lock_path so external locks can be used in unit tests.
Creates a temporary directory to hold file locks and sets the oslo.config
lock_path opt to use it. This can be used to enable external locking
on a per-test basis, rather than globally with the OSLO_LOCK_PATH
environment variable.
Example::
def test_method(self):
self.useFixture(ExternalLockFixture())
something_that_needs_external_locks()
Alternatively, the useFixture call could be placed in a test class's
setUp method to provide this functionality to all tests in the class.
.. versionadded:: 0.3
"""
def setUp(self):
super(ExternalLockFixture, self).setUp()
temp_dir = self.useFixture(fixtures.TempDir())
conf = self.useFixture(config.Config(lockutils.CONF)).config
conf(lock_path=temp_dir.path, group='oslo_concurrency')

@@ -1,19 +0,0 @@
# Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: oslo.concurrency 3.9.1.dev2\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-06-04 05:27+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-06-02 07:05+0000\n"
"Last-Translator: Andreas Jaeger <jaegerandi@gmail.com>\n"
"Language-Team: German\n"
"Language: de\n"
"X-Generator: Zanata 3.7.3\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
#, python-format
msgid "Failed to remove file %(file)s"
msgstr "Fehler beim Entfernen der Datei %(file)s"

@@ -1,101 +0,0 @@
# Translations template for oslo.concurrency.
# Copyright (C) 2015 ORGANIZATION
# This file is distributed under the same license as the oslo.concurrency
# project.
#
# Translators:
# Christian Berendt <berendt@b1-systems.de>, 2014
# Ettore Atalan <atalanttore@googlemail.com>, 2014
# Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: oslo.concurrency 3.9.1.dev3\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-06-07 17:48+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-06-08 06:36+0000\n"
"Last-Translator: Andreas Jaeger <jaegerandi@gmail.com>\n"
"Language: de\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
"Generated-By: Babel 2.0\n"
"X-Generator: Zanata 3.7.3\n"
"Language-Team: German\n"
#, python-format
msgid ""
"%(desc)r\n"
"command: %(cmd)r\n"
"exit code: %(code)r\n"
"stdout: %(stdout)r\n"
"stderr: %(stderr)r"
msgstr ""
"%(desc)r\n"
"Kommando: %(cmd)r\n"
"Abschlusscode: %(code)r\n"
"Stdout: %(stdout)r\n"
"Stderr: %(stderr)r"
#, python-format
msgid ""
"%(description)s\n"
"Command: %(cmd)s\n"
"Exit code: %(exit_code)s\n"
"Stdout: %(stdout)r\n"
"Stderr: %(stderr)r"
msgstr ""
"%(description)s\n"
"Befehl: %(cmd)s.\n"
"Beendigungscode: %(exit_code)s.\n"
"Standardausgabe: %(stdout)r\n"
"Standardfehler: %(stderr)r"
#, python-format
msgid "%r failed. Not Retrying."
msgstr "%r fehlgeschlagen. Wird nicht wiederholt."
#, python-format
msgid "%r failed. Retrying."
msgstr "%r fehlgeschlagen. Neuversuch."
msgid ""
"Calling lockutils directly is no longer supported. Please use the lockutils-"
"wrapper console script instead."
msgstr ""
"Ein direkter Aufruf von lockutils wird nicht mehr unterstützt. Verwenden Sie "
"stattdessen das lockutils-wrapper Konsolescript."
msgid "Command requested root, but did not specify a root helper."
msgstr "Kommando braucht root, es wurde aber kein root helper spezifiziert."
msgid "Environment not supported over SSH"
msgstr "Umgebung wird nicht über SSH unterstützt"
#, python-format
msgid ""
"Got an OSError\n"
"command: %(cmd)r\n"
"errno: %(errno)r"
msgstr ""
"OS Fehler aufgetreten:\n"
"Kommando: %(cmd)r\n"
"Fehlernummer: %(errno)r"
#, python-format
msgid "Got invalid arg log_errors: %r"
msgstr "Ungültiges Argument für log_errors: %r"
#, python-format
msgid "Got unknown keyword args: %r"
msgstr "Ungültige Schlüsswelwortargumente: %r"
#, python-format
msgid "Running cmd (subprocess): %s"
msgstr "Führe Kommando (subprocess) aus: %s"
msgid "Unexpected error while running command."
msgstr "Unerwarteter Fehler bei der Ausführung des Kommandos."
msgid "process_input not supported over SSH"
msgstr "process_input wird nicht über SSH unterstützt"

@@ -1,27 +0,0 @@
# Translations template for oslo.concurrency.
# Copyright (C) 2015 ORGANIZATION
# This file is distributed under the same license as the oslo.concurrency
# project.
#
# Translators:
# Andi Chandler <andi@gowling.com>, 2014
# Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: oslo.concurrency 3.6.1.dev10\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-04-19 12:20+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2015-06-10 11:06+0000\n"
"Last-Translator: openstackjenkins <jenkins@openstack.org>\n"
"Language: en-GB\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
"Generated-By: Babel 2.0\n"
"X-Generator: Zanata 3.7.3\n"
"Language-Team: English (United Kingdom)\n"
#, python-format
msgid "Failed to remove file %(file)s"
msgstr "Failed to remove file %(file)s"

@@ -1,100 +0,0 @@
# Translations template for oslo.concurrency.
# Copyright (C) 2015 ORGANIZATION
# This file is distributed under the same license as the oslo.concurrency
# project.
#
# Translators:
# Andi Chandler <andi@gowling.com>, 2014-2015
# Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: oslo.concurrency 3.6.1.dev10\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-04-19 12:20+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2015-06-10 11:06+0000\n"
"Last-Translator: openstackjenkins <jenkins@openstack.org>\n"
"Language: en-GB\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
"Generated-By: Babel 2.0\n"
"X-Generator: Zanata 3.7.3\n"
"Language-Team: English (United Kingdom)\n"
#, python-format
msgid ""
"%(desc)r\n"
"command: %(cmd)r\n"
"exit code: %(code)r\n"
"stdout: %(stdout)r\n"
"stderr: %(stderr)r"
msgstr ""
"%(desc)r\n"
"command: %(cmd)r\n"
"exit code: %(code)r\n"
"stdout: %(stdout)r\n"
"stderr: %(stderr)r"
#, python-format
msgid ""
"%(description)s\n"
"Command: %(cmd)s\n"
"Exit code: %(exit_code)s\n"
"Stdout: %(stdout)r\n"
"Stderr: %(stderr)r"
msgstr ""
"%(description)s\n"
"Command: %(cmd)s\n"
"Exit code: %(exit_code)s\n"
"Stdout: %(stdout)r\n"
"Stderr: %(stderr)r"
#, python-format
msgid "%r failed. Not Retrying."
msgstr "%r failed. Not Retrying."
#, python-format
msgid "%r failed. Retrying."
msgstr "%r failed. Retrying."
msgid ""
"Calling lockutils directly is no longer supported. Please use the lockutils-"
"wrapper console script instead."
msgstr ""
"Calling lockutils directly is no longer supported. Please use the lockutils-"
"wrapper console script instead."
msgid "Command requested root, but did not specify a root helper."
msgstr "Command requested root, but did not specify a root helper."
msgid "Environment not supported over SSH"
msgstr "Environment not supported over SSH"
#, python-format
msgid ""
"Got an OSError\n"
"command: %(cmd)r\n"
"errno: %(errno)r"
msgstr ""
"Got an OSError\n"
"command: %(cmd)r\n"
"errno: %(errno)r"
#, python-format
msgid "Got invalid arg log_errors: %r"
msgstr "Got invalid arg log_errors: %r"
#, python-format
msgid "Got unknown keyword args: %r"
msgstr "Got unknown keyword args: %r"
#, python-format
msgid "Running cmd (subprocess): %s"
msgstr "Running cmd (subprocess): %s"
msgid "Unexpected error while running command."
msgstr "Unexpected error while running command."
msgid "process_input not supported over SSH"
msgstr "process_input not supported over SSH"

@@ -1,27 +0,0 @@
# Translations template for oslo.concurrency.
# Copyright (C) 2015 ORGANIZATION
# This file is distributed under the same license as the oslo.concurrency
# project.
#
# Translators:
# Adriana Chisco Landazábal <achisco94@gmail.com>, 2015
# Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: oslo.concurrency 3.6.1.dev10\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-04-19 12:20+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2015-06-22 09:27+0000\n"
"Last-Translator: Adriana Chisco Landazábal <achisco94@gmail.com>\n"
"Language: es\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
"Generated-By: Babel 2.0\n"
"X-Generator: Zanata 3.7.3\n"
"Language-Team: Spanish\n"
#, python-format
msgid "Failed to remove file %(file)s"
msgstr "No se ha podido eliminar el fichero %(file)s"

@@ -1,100 +0,0 @@
# Translations template for oslo.concurrency.
# Copyright (C) 2015 ORGANIZATION
# This file is distributed under the same license as the oslo.concurrency
# project.
#
# Translators:
# Adriana Chisco Landazábal <achisco94@gmail.com>, 2015
# Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: oslo.concurrency 3.6.1.dev10\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-04-19 12:20+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2015-06-22 09:35+0000\n"
"Last-Translator: Adriana Chisco Landazábal <achisco94@gmail.com>\n"
"Language: es\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
"Generated-By: Babel 2.0\n"
"X-Generator: Zanata 3.7.3\n"
"Language-Team: Spanish\n"
#, python-format
msgid ""
"%(desc)r\n"
"command: %(cmd)r\n"
"exit code: %(code)r\n"
"stdout: %(stdout)r\n"
"stderr: %(stderr)r"
msgstr ""
"%(desc)r\n"
"comando: %(cmd)r\n"
"código de salida: %(code)r\n"
"stdout: %(stdout)r\n"
"stderr: %(stderr)r"
#, python-format
msgid ""
"%(description)s\n"
"Command: %(cmd)s\n"
"Exit code: %(exit_code)s\n"
"Stdout: %(stdout)r\n"
"Stderr: %(stderr)r"
msgstr ""
"%(description)s\n"
"Comando: %(cmd)s\n"
"Código de salida: %(exit_code)s\n"
"Stdout: %(stdout)r\n"
"Stderr: %(stderr)r"
#, python-format
msgid "%r failed. Not Retrying."
msgstr "%r ha fallado. No se está intentando de nuevo."
#, python-format
msgid "%r failed. Retrying."
msgstr "%r ha fallado. Intentando de nuevo."
msgid ""
"Calling lockutils directly is no longer supported. Please use the lockutils-"
"wrapper console script instead."
msgstr ""
"Ya no se soporta llamar LockUtil. Por favor utilice a cambio la consola "
"script lockutils-wrapper."
msgid "Command requested root, but did not specify a root helper."
msgstr "Comando ha solicitado root, pero no especificó un auxiliar root."
msgid "Environment not supported over SSH"
msgstr "Ambiente no soportado a través de SSH"
#, python-format
msgid ""
"Got an OSError\n"
"command: %(cmd)r\n"
"errno: %(errno)r"
msgstr ""
"Se obtuvo error de Sistema Operativo\n"
"comando: %(cmd)r\n"
"errno: %(errno)r"
#, python-format
msgid "Got invalid arg log_errors: %r"
msgstr "Se obtuvo argumento no válido: %r"
#, python-format
msgid "Got unknown keyword args: %r"
msgstr "Se obtuvieron argumentos de palabra clave: %r"
#, python-format
msgid "Running cmd (subprocess): %s"
msgstr "Ejecutando cmd (subproceso): %s"
msgid "Unexpected error while running command."
msgstr "Error inesperado mientras se ejecutaba el comando."
msgid "process_input not supported over SSH"
msgstr "entrada de proceso no soportada a través de SSH"

@@ -1,27 +0,0 @@
# Translations template for oslo.concurrency.
# Copyright (C) 2015 ORGANIZATION
# This file is distributed under the same license as the oslo.concurrency
# project.
#
# Translators:
# Maxime COQUEREL <max.coquerel@gmail.com>, 2015
# Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: oslo.concurrency 3.6.1.dev10\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-04-19 12:20+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2015-06-10 11:06+0000\n"
"Last-Translator: openstackjenkins <jenkins@openstack.org>\n"
"Language: fr\n"
"Plural-Forms: nplurals=2; plural=(n > 1);\n"
"Generated-By: Babel 2.0\n"
"X-Generator: Zanata 3.7.3\n"
"Language-Team: French\n"
#, python-format
msgid "Failed to remove file %(file)s"
msgstr "Échec lors de la suppression du fichier %(file)s"

@@ -1,100 +0,0 @@
# Translations template for oslo.concurrency.
# Copyright (C) 2015 ORGANIZATION
# This file is distributed under the same license as the oslo.concurrency
# project.
#
# Translators:
# Maxime COQUEREL <max.coquerel@gmail.com>, 2015
# Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: oslo.concurrency 3.6.1.dev10\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-04-19 12:20+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2015-06-10 11:06+0000\n"
"Last-Translator: openstackjenkins <jenkins@openstack.org>\n"
"Language: fr\n"
"Plural-Forms: nplurals=2; plural=(n > 1);\n"
"Generated-By: Babel 2.0\n"
"X-Generator: Zanata 3.7.3\n"
"Language-Team: French\n"
#, python-format
msgid ""
"%(desc)r\n"
"command: %(cmd)r\n"
"exit code: %(code)r\n"
"stdout: %(stdout)r\n"
"stderr: %(stderr)r"
msgstr ""
"%(desc)r\n"
"commande: %(cmd)r\n"
"Code de sortie: %(code)r\n"
"stdout: %(stdout)r\n"
"stderr: %(stderr)r"
#, python-format
msgid ""
"%(description)s\n"
"Command: %(cmd)s\n"
"Exit code: %(exit_code)s\n"
"Stdout: %(stdout)r\n"
"Stderr: %(stderr)r"
msgstr ""
"%(description)s\n"
"Commande: %(cmd)s\n"
"Code de sortie: %(exit_code)s\n"
"Stdout: %(stdout)r\n"
"Stderr: %(stderr)r"
#, python-format
msgid "%r failed. Not Retrying."
msgstr "Echec de %r. Nouvelle tentative."
#, python-format
msgid "%r failed. Retrying."
msgstr "Echec de %r. Nouvelle tentative."
msgid ""
"Calling lockutils directly is no longer supported. Please use the lockutils-"
"wrapper console script instead."
msgstr ""
"Lockutils appelant directement n'est plus pris en charge. Merci d'utiliser "
"le script de la console lockutils -wrapper à la place."
msgid "Command requested root, but did not specify a root helper."
msgstr "La commande exigeait root, mais n'indiquait pas comment obtenir root."
msgid "Environment not supported over SSH"
msgstr "Environnement non prise en charge sur SSH"
#, python-format
msgid ""
"Got an OSError\n"
"command: %(cmd)r\n"
"errno: %(errno)r"
msgstr ""
"Erreur du Système\n"
"commande: %(cmd)r\n"
"errno: %(errno)r"
#, python-format
msgid "Got invalid arg log_errors: %r"
msgstr "Argument reçu non valide log_errors: %r"
#, python-format
msgid "Got unknown keyword args: %r"
msgstr "Ags, mot clé inconnu: %r"
#, python-format
msgid "Running cmd (subprocess): %s"
msgstr "Exécution de la commande (sous-processus): %s"
msgid "Unexpected error while running command."
msgstr "Erreur inattendue lors de lexécution de la commande."
msgid "process_input not supported over SSH"
msgstr "process_input non pris en charge sur SSH"

@@ -1,371 +0,0 @@
# Copyright 2011 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import contextlib
import functools
import logging
import os
import shutil
import subprocess
import sys
import tempfile
import threading
import weakref
import fasteners
from oslo_config import cfg
from oslo_utils import reflection
from oslo_utils import timeutils
import six
from oslo_concurrency._i18n import _, _LI
LOG = logging.getLogger(__name__)
_opts = [
cfg.BoolOpt('disable_process_locking', default=False,
help='Enables or disables inter-process locks.',
deprecated_group='DEFAULT'),
cfg.StrOpt('lock_path',
default=os.environ.get("OSLO_LOCK_PATH"),
help='Directory to use for lock files. For security, the '
'specified directory should only be writable by the user '
'running the processes that need locking. '
'Defaults to environment variable OSLO_LOCK_PATH. '
'If external locks are used, a lock path must be set.',
deprecated_group='DEFAULT')
]
def _register_opts(conf):
conf.register_opts(_opts, group='oslo_concurrency')
CONF = cfg.CONF
_register_opts(CONF)
def set_defaults(lock_path):
"""Set value for lock_path.
This can be used by tests to set lock_path to a temporary directory.
"""
cfg.set_defaults(_opts, lock_path=lock_path)
def get_lock_path(conf):
"""Return the path used for external file-based locks.
:param conf: Configuration object
:type conf: oslo_config.cfg.ConfigOpts
.. versionadded:: 1.8
"""
_register_opts(conf)
return conf.oslo_concurrency.lock_path
InterProcessLock = fasteners.InterProcessLock
ReaderWriterLock = fasteners.ReaderWriterLock
"""A reader/writer lock.
.. versionadded:: 0.4
"""
class Semaphores(object):
"""A garbage collected container of semaphores.
This collection internally uses a weak value dictionary so that when a
semaphore is no longer in use (by any threads) it will automatically be
removed from this container by the garbage collector.
.. versionadded:: 0.3
"""
def __init__(self):
self._semaphores = weakref.WeakValueDictionary()
self._lock = threading.Lock()
def get(self, name):
"""Gets (or creates) a semaphore with a given name.
:param name: The semaphore name to get/create (used to associate
previously created names with the same semaphore).
Returns a newly constructed semaphore (or an existing one if it was
already created for the given name).
"""
with self._lock:
try:
return self._semaphores[name]
except KeyError:
sem = threading.Semaphore()
self._semaphores[name] = sem
return sem
def __len__(self):
"""Returns how many semaphores exist at the current time."""
return len(self._semaphores)
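# A minimal usage sketch (an illustrative addition, not part of the
# original module): get() hands back the same semaphore for a given name
# while any thread still references it; once unreferenced, the weak value
# dictionary lets the garbage collector drop the entry.
#
#   sems = Semaphores()
#   with sems.get('db-refresh'):
#       pass  # critical section; other users of 'db-refresh' wait here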
_semaphores = Semaphores()
def _get_lock_path(name, lock_file_prefix, lock_path=None):
# NOTE(mikal): the lock name cannot contain directory
# separators
name = name.replace(os.sep, '_')
if lock_file_prefix:
sep = '' if lock_file_prefix.endswith('-') else '-'
name = '%s%s%s' % (lock_file_prefix, sep, name)
local_lock_path = lock_path or CONF.oslo_concurrency.lock_path
if not local_lock_path:
raise cfg.RequiredOptError('lock_path')
return os.path.join(local_lock_path, name)
def external_lock(name, lock_file_prefix=None, lock_path=None):
lock_file_path = _get_lock_path(name, lock_file_prefix, lock_path)
return InterProcessLock(lock_file_path)
def remove_external_lock_file(name, lock_file_prefix=None, lock_path=None,
semaphores=None):
"""Remove an external lock file when it's not used anymore
This will be helpful when we have a lot of lock files
"""
with internal_lock(name, semaphores=semaphores):
lock_file_path = _get_lock_path(name, lock_file_prefix, lock_path)
try:
os.remove(lock_file_path)
except OSError:
LOG.info(_LI('Failed to remove file %(file)s'),
{'file': lock_file_path})
def internal_lock(name, semaphores=None):
if semaphores is None:
semaphores = _semaphores
return semaphores.get(name)
@contextlib.contextmanager
def lock(name, lock_file_prefix=None, external=False, lock_path=None,
do_log=True, semaphores=None, delay=0.01):
"""Context based lock
This function yields a `threading.Semaphore` instance (if we don't use
eventlet.monkey_patch(), else `semaphore.Semaphore`) unless external is
True, in which case, it'll yield an InterProcessLock instance.
:param lock_file_prefix: The lock_file_prefix argument is used to provide
lock files on disk with a meaningful prefix.
:param external: The external keyword argument denotes whether this lock
should work across multiple processes. This means that if two different
workers both run a method decorated with @synchronized('mylock',
external=True), only one of them will execute at a time.
:param lock_path: The path in which to store external lock files. For
external locking to work properly, this must be the same for all
references to the lock.
:param do_log: Whether to log acquire/release messages. This is primarily
intended to reduce log message duplication when `lock` is used from the
`synchronized` decorator.
:param semaphores: Container that provides semaphores to use when locking.
This ensures that threads inside the same application cannot collide,
because external process locks are unaware of a process's active
threads.
:param delay: Delay between acquisition attempts (in seconds).
.. versionchanged:: 0.2
Added *do_log* optional parameter.
.. versionchanged:: 0.3
Added *delay* and *semaphores* optional parameters.
"""
int_lock = internal_lock(name, semaphores=semaphores)
with int_lock:
if do_log:
LOG.debug('Acquired semaphore "%(lock)s"', {'lock': name})
try:
if external and not CONF.oslo_concurrency.disable_process_locking:
ext_lock = external_lock(name, lock_file_prefix, lock_path)
ext_lock.acquire(delay=delay)
try:
yield ext_lock
finally:
ext_lock.release()
else:
yield int_lock
finally:
if do_log:
LOG.debug('Releasing semaphore "%(lock)s"', {'lock': name})
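# Illustrative usage sketch (not part of the original module). The lock
# path below is hypothetical; external locking requires a configured
# lock_path (or an explicit one, as here):
#
#   with lock('state-file', external=True, lock_path='/tmp/myapp-locks'):
#       pass  # serialized across threads and, with external=True, processes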
def synchronized(name, lock_file_prefix=None, external=False, lock_path=None,
semaphores=None, delay=0.01):
"""Synchronization decorator.
Decorating a method like so::
@synchronized('mylock')
def foo(self, *args):
...
ensures that only one thread will execute the foo method at a time.
Different methods can share the same lock::
@synchronized('mylock')
def foo(self, *args):
...
@synchronized('mylock')
def bar(self, *args):
...
This way, only one of foo or bar can be executing at a time.
.. versionchanged:: 0.3
Added *delay* and *semaphores* optional parameters.
"""
def wrap(f):
@six.wraps(f)
def inner(*args, **kwargs):
t1 = timeutils.now()
t2 = None
try:
with lock(name, lock_file_prefix, external, lock_path,
do_log=False, semaphores=semaphores, delay=delay):
t2 = timeutils.now()
LOG.debug('Lock "%(name)s" acquired by "%(function)s" :: '
'waited %(wait_secs)0.3fs',
{'name': name,
'function': reflection.get_callable_name(f),
'wait_secs': (t2 - t1)})
return f(*args, **kwargs)
finally:
t3 = timeutils.now()
if t2 is None:
held_secs = "N/A"
else:
held_secs = "%0.3fs" % (t3 - t2)
LOG.debug('Lock "%(name)s" released by "%(function)s" :: held '
'%(held_secs)s',
{'name': name,
'function': reflection.get_callable_name(f),
'held_secs': held_secs})
return inner
return wrap
def synchronized_with_prefix(lock_file_prefix):
"""Partial object generator for the synchronization decorator.
Redefine @synchronized in each project like so::
(in nova/utils.py)
from oslo_concurrency import lockutils
synchronized = lockutils.synchronized_with_prefix('nova-')
(in nova/foo.py)
from nova import utils
@utils.synchronized('mylock')
def bar(self, *args):
...
The lock_file_prefix argument is used to provide lock files on disk with a
meaningful prefix.
"""
return functools.partial(synchronized, lock_file_prefix=lock_file_prefix)
def remove_external_lock_file_with_prefix(lock_file_prefix):
"""Partial object generator for the remove lock file function.
Redefine remove_external_lock_file_with_prefix in each project like so::
(in nova/utils.py)
from oslo_concurrency import lockutils
synchronized = lockutils.synchronized_with_prefix('nova-')
synchronized_remove = lockutils.remove_external_lock_file_with_prefix(
'nova-')
(in nova/foo.py)
from nova import utils
@utils.synchronized('mylock')
def bar(self, *args):
...
<eventually call synchronized_remove('mylock') to cleanup>
The lock_file_prefix argument is used to provide lock files on disk with a
meaningful prefix.
"""
return functools.partial(remove_external_lock_file,
lock_file_prefix=lock_file_prefix)
def _lock_wrapper(argv):
"""Create a dir for locks and pass it to command from arguments
This is exposed as a console script entry point named
lockutils-wrapper
If you run this:
lockutils-wrapper python setup.py testr <etc>
a temporary directory will be created for all your locks and passed to all
your tests in an environment variable. The temporary dir will be deleted
afterwards and the return value will be preserved.
"""
lock_dir = tempfile.mkdtemp()
os.environ["OSLO_LOCK_PATH"] = lock_dir
try:
ret_val = subprocess.call(argv[1:])
finally:
shutil.rmtree(lock_dir, ignore_errors=True)
return ret_val
def main():
sys.exit(_lock_wrapper(sys.argv))
if __name__ == '__main__':
raise NotImplementedError(_('Calling lockutils directly is no longer '
'supported. Please use the '
'lockutils-wrapper console script instead.'))

@@ -1,45 +0,0 @@
# Copyright 2014 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
__all__ = [
'list_opts',
]
import copy
from oslo_concurrency import lockutils
def list_opts():
"""Return a list of oslo.config options available in the library.
The returned list includes all oslo.config options which may be registered
at runtime by the library.
Each element of the list is a tuple. The first element is the name of the
group under which the list of elements in the second element will be
registered. A group name of None corresponds to the [DEFAULT] group in
config files.
This function is also discoverable via the 'oslo_concurrency' entry point
under the 'oslo.config.opts' namespace.
The purpose of this is to allow tools like the Oslo sample config file
generator to discover the options exposed to users by this library.
:returns: a list of (group_name, opts) tuples
"""
return [('oslo_concurrency', copy.deepcopy(lockutils._opts))]
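# Illustrative sketch (not part of the original module) of how a consumer
# such as a sample-config generator might walk the returned tuples:
#
#   for group_name, opts in list_opts():
#       for opt in opts:
#           print(group_name, opt.name)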

@@ -1,110 +0,0 @@
# Copyright 2016 Red Hat.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
import argparse
import os
import resource
import sys
USAGE_PROGRAM = ('%s -m oslo_concurrency.prlimit'
% os.path.basename(sys.executable))
RESOURCES = (
# argparse argument => resource
('as', resource.RLIMIT_AS),
('core', resource.RLIMIT_CORE),
('cpu', resource.RLIMIT_CPU),
('data', resource.RLIMIT_DATA),
('fsize', resource.RLIMIT_FSIZE),
('memlock', resource.RLIMIT_MEMLOCK),
('nofile', resource.RLIMIT_NOFILE),
('nproc', resource.RLIMIT_NPROC),
('rss', resource.RLIMIT_RSS),
('stack', resource.RLIMIT_STACK),
)
def parse_args():
parser = argparse.ArgumentParser(description='prlimit', prog=USAGE_PROGRAM)
parser.add_argument('--as', type=int,
help='Address space limit in bytes')
parser.add_argument('--core', type=int,
help='Core file size limit in bytes')
parser.add_argument('--cpu', type=int,
help='CPU time limit in seconds')
parser.add_argument('--data', type=int,
help='Data size limit in bytes')
parser.add_argument('--fsize', type=int,
help='File size limit in bytes')
parser.add_argument('--memlock', type=int,
help='Locked memory limit in bytes')
parser.add_argument('--nofile', type=int,
help='Maximum number of open files')
parser.add_argument('--nproc', type=int,
help='Maximum number of processes')
parser.add_argument('--rss', type=int,
help='Maximum Resident Set Size (RSS) in bytes')
parser.add_argument('--stack', type=int,
help='Stack size limit in bytes')
parser.add_argument('program',
help='Program (absolute path)')
parser.add_argument('program_args', metavar="arg", nargs='...',
help='Program parameters')
args = parser.parse_args()
return args
def main():
args = parse_args()
program = args.program
if not os.path.isabs(program):
# program uses a relative path: try to find the absolute path
# to the executable
if sys.version_info >= (3, 3):
import shutil
program_abs = shutil.which(program)
else:
import distutils.spawn
program_abs = distutils.spawn.find_executable(program)
if program_abs:
program = program_abs
for arg_name, rlimit in RESOURCES:
value = getattr(args, arg_name)
if value is None:
continue
try:
resource.setrlimit(rlimit, (value, value))
except ValueError as exc:
print("%s: failed to set the %s resource limit: %s"
% (USAGE_PROGRAM, arg_name.upper(), exc),
file=sys.stderr)
sys.exit(1)
try:
os.execv(program, [program] + args.program_args)
except Exception as exc:
print("%s: failed to execute %s: %s"
% (USAGE_PROGRAM, program, exc),
file=sys.stderr)
sys.exit(1)
if __name__ == "__main__":
main()
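# Illustrative invocation sketch (the target program is arbitrary): apply
# a 2-second CPU limit and a 1024 open-file limit, then exec the program
# in place:
#
#   python -m oslo_concurrency.prlimit --cpu 2 --nofile 1024 /bin/true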

@@ -1,542 +0,0 @@
# Copyright 2011 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
System-level utilities and helper functions.
"""
import functools
import logging
import multiprocessing
import os
import random
import shlex
import signal
import sys
import time
import enum
from oslo_utils import importutils
from oslo_utils import strutils
from oslo_utils import timeutils
import six
from oslo_concurrency._i18n import _
# NOTE(bnemec): eventlet doesn't monkey patch subprocess, so we need to
# determine the proper subprocess module to use ourselves. I'm using the
# time module as the check because that's a monkey patched module we use
# in combination with subprocess below, so they need to match.
eventlet = importutils.try_import('eventlet')
if eventlet and eventlet.patcher.is_monkey_patched(time):
from eventlet.green import subprocess
else:
import subprocess
LOG = logging.getLogger(__name__)
class InvalidArgumentError(Exception):
def __init__(self, message=None):
super(InvalidArgumentError, self).__init__(message)
class UnknownArgumentError(Exception):
def __init__(self, message=None):
super(UnknownArgumentError, self).__init__(message)
class ProcessExecutionError(Exception):
def __init__(self, stdout=None, stderr=None, exit_code=None, cmd=None,
description=None):
super(ProcessExecutionError, self).__init__(
stdout, stderr, exit_code, cmd, description)
self.exit_code = exit_code
self.stderr = stderr
self.stdout = stdout
self.cmd = cmd
self.description = description
def __str__(self):
description = self.description
if description is None:
description = _("Unexpected error while running command.")
exit_code = self.exit_code
if exit_code is None:
exit_code = '-'
message = _('%(description)s\n'
'Command: %(cmd)s\n'
'Exit code: %(exit_code)s\n'
'Stdout: %(stdout)r\n'
'Stderr: %(stderr)r') % {'description': description,
'cmd': self.cmd,
'exit_code': exit_code,
'stdout': self.stdout,
'stderr': self.stderr}
return message
class NoRootWrapSpecified(Exception):
def __init__(self, message=None):
super(NoRootWrapSpecified, self).__init__(message)
def _subprocess_setup(on_preexec_fn):
# Python installs a SIGPIPE handler by default. This is usually not what
# non-Python subprocesses expect.
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
if on_preexec_fn:
on_preexec_fn()
@enum.unique
class LogErrors(enum.IntEnum):
"""Enumerations that affect if stdout and stderr are logged on error.
.. versionadded:: 2.7
"""
#: No logging on errors.
DEFAULT = 0
#: Log an error on **each** occurrence of an error.
ALL = 1
#: Log an error on the last attempt that errored **only**.
FINAL = 2
# Retain these aliases for a number of releases...
LOG_ALL_ERRORS = LogErrors.ALL
LOG_FINAL_ERROR = LogErrors.FINAL
LOG_DEFAULT_ERROR = LogErrors.DEFAULT
class ProcessLimits(object):
"""Resource limits on a process.
Attributes:
* address_space: Address space limit in bytes
* core_file_size: Core file size limit in bytes
* cpu_time: CPU time limit in seconds
* data_size: Data size limit in bytes
* file_size: File size limit in bytes
* memory_locked: Locked memory limit in bytes
* number_files: Maximum number of open files
* number_processes: Maximum number of processes
* resident_set_size: Maximum Resident Set Size (RSS) in bytes
* stack_size: Stack size limit in bytes
This object can be used for the *prlimit* parameter of :func:`execute`.
"""
_LIMITS = {
"address_space": "--as",
"core_file_size": "--core",
"cpu_time": "--cpu",
"data_size": "--data",
"file_size": "--fsize",
"memory_locked": "--memlock",
"number_files": "--nofile",
"number_processes": "--nproc",
"resident_set_size": "--rss",
"stack_size": "--stack",
}
def __init__(self, **kw):
for limit in self._LIMITS.keys():
setattr(self, limit, kw.pop(limit, None))
if kw:
raise ValueError("invalid limits: %s"
% ', '.join(sorted(kw.keys())))
def prlimit_args(self):
"""Create a list of arguments for the prlimit command line."""
args = []
for limit in self._LIMITS.keys():
val = getattr(self, limit)
if val is not None:
args.append("%s=%s" % (self._LIMITS[limit], val))
return args
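# Example: ProcessLimits maps its attributes onto prlimit(1) options; the
# values below are illustrative and the argument order may vary:
#
#     limits = ProcessLimits(cpu_time=60, number_files=1024)
#     limits.prlimit_args()  # -> ['--cpu=60', '--nofile=1024']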
def execute(*cmd, **kwargs):
"""Helper method to shell out and execute a command through subprocess.
Allows optional retry.
:param cmd: Passed to subprocess.Popen.
:type cmd: string
:param cwd: Set the current working directory
:type cwd: string
:param process_input: Send to opened process.
:type process_input: string
:param env_variables: Environment variables and their values that
will be set for the process.
:type env_variables: dict
:param check_exit_code: Single bool, int, or list of allowed exit
codes. Defaults to [0]. Raise
:class:`ProcessExecutionError` unless
                            program exits with one of these codes.
:type check_exit_code: boolean, int, or [int]
:param delay_on_retry: True | False. Defaults to True. If set to True,
wait a short amount of time before retrying.
:type delay_on_retry: boolean
:param attempts: How many times to retry cmd.
:type attempts: int
:param run_as_root: True | False. Defaults to False. If set to True,
the command is prefixed by the command specified
in the root_helper kwarg.
:type run_as_root: boolean
:param root_helper: command to prefix to commands called with
run_as_root=True
:type root_helper: string
:param shell: whether or not there should be a shell used to
execute this command. Defaults to false.
:type shell: boolean
:param loglevel: log level for execute commands.
:type loglevel: int. (Should be logging.DEBUG or logging.INFO)
:param log_errors: Should stdout and stderr be logged on error?
Possible values are
:py:attr:`~.LogErrors.DEFAULT`,
:py:attr:`~.LogErrors.FINAL`, or
:py:attr:`~.LogErrors.ALL`. Note that the
values :py:attr:`~.LogErrors.FINAL` and
:py:attr:`~.LogErrors.ALL`
are **only** relevant when multiple attempts of
command execution are requested using the
``attempts`` parameter.
:type log_errors: :py:class:`~.LogErrors`
:param binary: On Python 3, return stdout and stderr as bytes if
binary is True, as Unicode otherwise.
:type binary: boolean
:param on_execute: This function will be called upon process creation
                       with the object as an argument. The purpose of this
is to allow the caller of `processutils.execute` to
track process creation asynchronously.
:type on_execute: function(:class:`subprocess.Popen`)
:param on_completion: This function will be called upon process
                          completion with the object as an argument. The
                          purpose of this is to allow the caller of
`processutils.execute` to track process completion
asynchronously.
:type on_completion: function(:class:`subprocess.Popen`)
:param preexec_fn: This function will be called
in the child process just before the child
                       is executed. WARNING: On Windows, we silently
                       drop this preexec_fn as it is not supported by
                       subprocess.Popen on Windows (it raises a
                       ValueError).
:type preexec_fn: function()
:param prlimit: Set resource limits on the child process. See
below for a detailed description.
:type prlimit: :class:`ProcessLimits`
:returns: (stdout, stderr) from process execution
:raises: :class:`UnknownArgumentError` on
receiving unknown arguments
:raises: :class:`ProcessExecutionError`
:raises: :class:`OSError`
The *prlimit* parameter can be used to set resource limits on the child
process. If this parameter is used, the child process will be spawned by a
wrapper process which will set limits before spawning the command.
.. versionchanged:: 3.4
Added *prlimit* optional parameter.
.. versionchanged:: 1.5
Added *cwd* optional parameter.
.. versionchanged:: 1.9
       Added *binary* optional parameter. On Python 3, *stdout* and *stderr*
are now returned as Unicode strings by default, or bytes if *binary* is
true.
.. versionchanged:: 2.1
Added *on_execute* and *on_completion* optional parameters.
.. versionchanged:: 2.3
Added *preexec_fn* optional parameter.
"""
cwd = kwargs.pop('cwd', None)
process_input = kwargs.pop('process_input', None)
env_variables = kwargs.pop('env_variables', None)
check_exit_code = kwargs.pop('check_exit_code', [0])
ignore_exit_code = False
delay_on_retry = kwargs.pop('delay_on_retry', True)
attempts = kwargs.pop('attempts', 1)
run_as_root = kwargs.pop('run_as_root', False)
root_helper = kwargs.pop('root_helper', '')
shell = kwargs.pop('shell', False)
loglevel = kwargs.pop('loglevel', logging.DEBUG)
log_errors = kwargs.pop('log_errors', None)
if log_errors is None:
log_errors = LogErrors.DEFAULT
binary = kwargs.pop('binary', False)
on_execute = kwargs.pop('on_execute', None)
on_completion = kwargs.pop('on_completion', None)
preexec_fn = kwargs.pop('preexec_fn', None)
prlimit = kwargs.pop('prlimit', None)
if isinstance(check_exit_code, bool):
ignore_exit_code = not check_exit_code
check_exit_code = [0]
elif isinstance(check_exit_code, int):
check_exit_code = [check_exit_code]
if kwargs:
raise UnknownArgumentError(_('Got unknown keyword args: %r') % kwargs)
if isinstance(log_errors, six.integer_types):
log_errors = LogErrors(log_errors)
if not isinstance(log_errors, LogErrors):
raise InvalidArgumentError(_('Got invalid arg log_errors: %r') %
log_errors)
if run_as_root and hasattr(os, 'geteuid') and os.geteuid() != 0:
if not root_helper:
raise NoRootWrapSpecified(
message=_('Command requested root, but did not '
'specify a root helper.'))
if shell:
# root helper has to be injected into the command string
cmd = [' '.join((root_helper, cmd[0]))] + list(cmd[1:])
else:
# root helper has to be tokenized into argument list
cmd = shlex.split(root_helper) + list(cmd)
cmd = [str(c) for c in cmd]
if prlimit:
args = [sys.executable, '-m', 'oslo_concurrency.prlimit']
args.extend(prlimit.prlimit_args())
args.append('--')
args.extend(cmd)
cmd = args
sanitized_cmd = strutils.mask_password(' '.join(cmd))
watch = timeutils.StopWatch()
while attempts > 0:
attempts -= 1
watch.restart()
try:
LOG.log(loglevel, _('Running cmd (subprocess): %s'), sanitized_cmd)
_PIPE = subprocess.PIPE # pylint: disable=E1101
if os.name == 'nt':
on_preexec_fn = None
close_fds = False
else:
on_preexec_fn = functools.partial(_subprocess_setup,
preexec_fn)
close_fds = True
obj = subprocess.Popen(cmd,
stdin=_PIPE,
stdout=_PIPE,
stderr=_PIPE,
close_fds=close_fds,
preexec_fn=on_preexec_fn,
shell=shell,
cwd=cwd,
env=env_variables)
if on_execute:
on_execute(obj)
try:
result = obj.communicate(process_input)
obj.stdin.close() # pylint: disable=E1101
_returncode = obj.returncode # pylint: disable=E1101
LOG.log(loglevel, 'CMD "%s" returned: %s in %0.3fs',
sanitized_cmd, _returncode, watch.elapsed())
finally:
if on_completion:
on_completion(obj)
if not ignore_exit_code and _returncode not in check_exit_code:
(stdout, stderr) = result
if six.PY3:
stdout = os.fsdecode(stdout)
stderr = os.fsdecode(stderr)
sanitized_stdout = strutils.mask_password(stdout)
sanitized_stderr = strutils.mask_password(stderr)
raise ProcessExecutionError(exit_code=_returncode,
stdout=sanitized_stdout,
stderr=sanitized_stderr,
cmd=sanitized_cmd)
if six.PY3 and not binary and result is not None:
(stdout, stderr) = result
                # Decode from the locale using the surrogateescape error
# handler (decoding cannot fail)
stdout = os.fsdecode(stdout)
stderr = os.fsdecode(stderr)
return (stdout, stderr)
else:
return result
except (ProcessExecutionError, OSError) as err:
# if we want to always log the errors or if this is
# the final attempt that failed and we want to log that.
if log_errors == LOG_ALL_ERRORS or (
log_errors == LOG_FINAL_ERROR and not attempts):
if isinstance(err, ProcessExecutionError):
format = _('%(desc)r\ncommand: %(cmd)r\n'
'exit code: %(code)r\nstdout: %(stdout)r\n'
'stderr: %(stderr)r')
LOG.log(loglevel, format, {"desc": err.description,
"cmd": err.cmd,
"code": err.exit_code,
"stdout": err.stdout,
"stderr": err.stderr})
else:
format = _('Got an OSError\ncommand: %(cmd)r\n'
'errno: %(errno)r')
LOG.log(loglevel, format, {"cmd": sanitized_cmd,
"errno": err.errno})
if not attempts:
LOG.log(loglevel, _('%r failed. Not Retrying.'),
sanitized_cmd)
raise
else:
LOG.log(loglevel, _('%r failed. Retrying.'),
sanitized_cmd)
if delay_on_retry:
time.sleep(random.randint(20, 200) / 100.0)
finally:
# NOTE(termie): this appears to be necessary to let the subprocess
            #               call clean something up in between calls; without
            #               it, two execute calls in a row hang the second one
# NOTE(bnemec): termie's comment above is probably specific to the
# eventlet subprocess module, but since we still
# have to support that we're leaving the sleep. It
# won't hurt anything in the stdlib case anyway.
time.sleep(0)
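# Illustrative calls to execute(); the commands and paths are examples only:
#
#     out, err = execute('ls', '-l', '/tmp')
#     # Retry a flaky command up to 3 times, accepting exit codes 0 and 2:
#     out, err = execute('flaky-tool', attempts=3, delay_on_retry=True,
#                        check_exit_code=[0, 2])
#     # Run under sudo via the root_helper mechanism:
#     out, err = execute('parted', '--list', run_as_root=True,
#                        root_helper='sudo')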
def trycmd(*args, **kwargs):
"""A wrapper around execute() to more easily handle warnings and errors.
Returns an (out, err) tuple of strings containing the output of
the command's stdout and stderr. If 'err' is not empty then the
command can be considered to have failed.
    :param discard_warnings: True | False. Defaults to False. If set to
                             True, then for succeeding commands, stderr is
                             cleared.
"""
discard_warnings = kwargs.pop('discard_warnings', False)
try:
out, err = execute(*args, **kwargs)
failed = False
except ProcessExecutionError as exn:
out, err = '', six.text_type(exn)
failed = True
if not failed and discard_warnings and err:
# Handle commands that output to stderr but otherwise succeed
err = ''
return out, err
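# Example: trycmd() folds an execution failure into the returned stderr
# string instead of raising (the command here is illustrative):
#
#     out, err = trycmd('some-tool', '--verbose', discard_warnings=True)
#     if err:
#         pass  # command failed; err holds the error text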
def ssh_execute(ssh, cmd, process_input=None,
addl_env=None, check_exit_code=True,
binary=False, timeout=None):
"""Run a command through SSH.
.. versionchanged:: 1.9
Added *binary* optional parameter.
"""
sanitized_cmd = strutils.mask_password(cmd)
LOG.debug('Running cmd (SSH): %s', sanitized_cmd)
if addl_env:
raise InvalidArgumentError(_('Environment not supported over SSH'))
if process_input:
# This is (probably) fixable if we need it...
raise InvalidArgumentError(_('process_input not supported over SSH'))
stdin_stream, stdout_stream, stderr_stream = ssh.exec_command(
cmd, timeout=timeout)
channel = stdout_stream.channel
# NOTE(justinsb): This seems suspicious...
# ...other SSH clients have buffering issues with this approach
stdout = stdout_stream.read()
stderr = stderr_stream.read()
stdin_stream.close()
exit_status = channel.recv_exit_status()
if six.PY3:
        # Decode from the locale using the surrogateescape error handler
# (decoding cannot fail). Decode even if binary is True because
# mask_password() requires Unicode on Python 3
stdout = os.fsdecode(stdout)
stderr = os.fsdecode(stderr)
stdout = strutils.mask_password(stdout)
stderr = strutils.mask_password(stderr)
# exit_status == -1 if no exit code was returned
if exit_status != -1:
        LOG.debug('Result was %s', exit_status)
if check_exit_code and exit_status != 0:
raise ProcessExecutionError(exit_code=exit_status,
stdout=stdout,
stderr=stderr,
cmd=sanitized_cmd)
if binary:
if six.PY2:
# On Python 2, stdout is a bytes string if mask_password() failed
            # to decode it, or a Unicode string otherwise. Encode to the
# default encoding (ASCII) because mask_password() decodes from
# the same encoding.
if isinstance(stdout, unicode):
stdout = stdout.encode()
if isinstance(stderr, unicode):
stderr = stderr.encode()
else:
# fsencode() is the reverse operation of fsdecode()
stdout = os.fsencode(stdout)
stderr = os.fsencode(stderr)
return (stdout, stderr)
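# Example with a paramiko-style client: any object exposing exec_command()
# works; the assumption of a connected paramiko.SSHClient named `client`
# and the command itself are illustrative:
#
#     out, err = ssh_execute(client, 'uname -a', timeout=30)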
def get_worker_count():
"""Utility to get the default worker count.
    :returns: The number of CPUs if that can be determined, else a default
              worker count of 1.
"""
try:
return multiprocessing.cpu_count()
except NotImplementedError:
return 1

View File

@ -1,19 +0,0 @@
# Copyright 2014 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
if os.environ.get('TEST_EVENTLET'):
import eventlet
eventlet.monkey_patch()

View File

@ -1,527 +0,0 @@
# Copyright 2011 Justin Santa Barbara
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import collections
import fcntl
import os
import shutil
import signal
import subprocess
import sys
import tempfile
import threading
import time
from oslo_config import cfg
from oslotest import base as test_base
import six
from oslo_concurrency.fixture import lockutils as fixtures
from oslo_concurrency import lockutils
from oslo_config import fixture as config
class LockTestCase(test_base.BaseTestCase):
def setUp(self):
super(LockTestCase, self).setUp()
self.config = self.useFixture(config.Config(lockutils.CONF)).config
def test_synchronized_wrapped_function_metadata(self):
@lockutils.synchronized('whatever', 'test-')
def foo():
"""Bar."""
pass
self.assertEqual('Bar.', foo.__doc__, "Wrapped function's docstring "
"got lost")
self.assertEqual('foo', foo.__name__, "Wrapped function's name "
"got mangled")
def test_lock_internally_different_collections(self):
s1 = lockutils.Semaphores()
s2 = lockutils.Semaphores()
trigger = threading.Event()
who_ran = collections.deque()
def f(name, semaphores, pull_trigger):
with lockutils.internal_lock('testing', semaphores=semaphores):
if pull_trigger:
trigger.set()
else:
trigger.wait()
who_ran.append(name)
threads = [
threading.Thread(target=f, args=(1, s1, True)),
threading.Thread(target=f, args=(2, s2, False)),
]
for thread in threads:
thread.start()
for thread in threads:
thread.join()
self.assertEqual([1, 2], sorted(who_ran))
def test_lock_internally(self):
"""We can lock across multiple threads."""
saved_sem_num = len(lockutils._semaphores)
seen_threads = list()
def f(_id):
with lockutils.lock('testlock2', 'test-', external=False):
for x in range(10):
seen_threads.append(_id)
threads = []
for i in range(10):
thread = threading.Thread(target=f, args=(i,))
threads.append(thread)
thread.start()
for thread in threads:
thread.join()
self.assertEqual(100, len(seen_threads))
# Looking at the seen threads, split it into chunks of 10, and verify
# that the last 9 match the first in each chunk.
for i in range(10):
for j in range(9):
self.assertEqual(seen_threads[i * 10],
seen_threads[i * 10 + 1 + j])
self.assertEqual(saved_sem_num, len(lockutils._semaphores),
"Semaphore leak detected")
def test_nested_synchronized_external_works(self):
"""We can nest external syncs."""
tempdir = tempfile.mkdtemp()
try:
self.config(lock_path=tempdir, group='oslo_concurrency')
sentinel = object()
@lockutils.synchronized('testlock1', 'test-', external=True)
def outer_lock():
@lockutils.synchronized('testlock2', 'test-', external=True)
def inner_lock():
return sentinel
return inner_lock()
self.assertEqual(sentinel, outer_lock())
finally:
if os.path.exists(tempdir):
shutil.rmtree(tempdir)
def _do_test_lock_externally(self):
"""We can lock across multiple processes."""
def lock_files(handles_dir):
with lockutils.lock('external', 'test-', external=True):
# Open some files we can use for locking
handles = []
for n in range(50):
path = os.path.join(handles_dir, ('file-%s' % n))
handles.append(open(path, 'w'))
# Loop over all the handles and try locking the file
# without blocking, keep a count of how many files we
# were able to lock and then unlock. If the lock fails
# we get an IOError and bail out with bad exit code
count = 0
for handle in handles:
try:
fcntl.flock(handle, fcntl.LOCK_EX | fcntl.LOCK_NB)
count += 1
fcntl.flock(handle, fcntl.LOCK_UN)
except IOError:
os._exit(2)
finally:
handle.close()
                # Check that we were able to lock all files
self.assertEqual(50, count)
handles_dir = tempfile.mkdtemp()
try:
children = []
for n in range(50):
pid = os.fork()
if pid:
children.append(pid)
else:
try:
lock_files(handles_dir)
finally:
os._exit(0)
for child in children:
(pid, status) = os.waitpid(child, 0)
if pid:
self.assertEqual(0, status)
finally:
if os.path.exists(handles_dir):
shutil.rmtree(handles_dir, ignore_errors=True)
def test_lock_externally(self):
lock_dir = tempfile.mkdtemp()
self.config(lock_path=lock_dir, group='oslo_concurrency')
try:
self._do_test_lock_externally()
finally:
if os.path.exists(lock_dir):
shutil.rmtree(lock_dir, ignore_errors=True)
def test_lock_externally_lock_dir_not_exist(self):
lock_dir = tempfile.mkdtemp()
os.rmdir(lock_dir)
self.config(lock_path=lock_dir, group='oslo_concurrency')
try:
self._do_test_lock_externally()
finally:
if os.path.exists(lock_dir):
shutil.rmtree(lock_dir, ignore_errors=True)
def test_synchronized_with_prefix(self):
lock_name = 'mylock'
lock_pfix = 'mypfix-'
foo = lockutils.synchronized_with_prefix(lock_pfix)
@foo(lock_name, external=True)
def bar(dirpath, pfix, name):
return True
lock_dir = tempfile.mkdtemp()
self.config(lock_path=lock_dir, group='oslo_concurrency')
self.assertTrue(bar(lock_dir, lock_pfix, lock_name))
def test_synchronized_without_prefix(self):
lock_dir = tempfile.mkdtemp()
self.config(lock_path=lock_dir, group='oslo_concurrency')
@lockutils.synchronized('lock', external=True)
def test_without_prefix():
# We can't check much
pass
try:
test_without_prefix()
finally:
if os.path.exists(lock_dir):
shutil.rmtree(lock_dir, ignore_errors=True)
    def test_synchronized_prefix_without_hyphen(self):
        lock_dir = tempfile.mkdtemp()
        self.config(lock_path=lock_dir, group='oslo_concurrency')
        @lockutils.synchronized('lock', 'hyphen', True)
        def test_without_hyphen():
            # We can't check much
            pass
        try:
            test_without_hyphen()
finally:
if os.path.exists(lock_dir):
shutil.rmtree(lock_dir, ignore_errors=True)
def test_contextlock(self):
lock_dir = tempfile.mkdtemp()
self.config(lock_path=lock_dir, group='oslo_concurrency')
try:
# Note(flaper87): Lock is not external, which means
# a semaphore will be yielded
with lockutils.lock("test") as sem:
if six.PY2:
self.assertTrue(isinstance(sem, threading._Semaphore))
else:
self.assertTrue(isinstance(sem, threading.Semaphore))
# NOTE(flaper87): Lock is external so an InterProcessLock
# will be yielded.
with lockutils.lock("test2", external=True) as lock:
self.assertTrue(lock.exists())
with lockutils.lock("test1",
external=True) as lock1:
self.assertTrue(isinstance(lock1,
lockutils.InterProcessLock))
finally:
if os.path.exists(lock_dir):
shutil.rmtree(lock_dir, ignore_errors=True)
def test_contextlock_unlocks(self):
lock_dir = tempfile.mkdtemp()
self.config(lock_path=lock_dir, group='oslo_concurrency')
sem = None
try:
with lockutils.lock("test") as sem:
if six.PY2:
self.assertTrue(isinstance(sem, threading._Semaphore))
else:
self.assertTrue(isinstance(sem, threading.Semaphore))
with lockutils.lock("test2", external=True) as lock:
self.assertTrue(lock.exists())
# NOTE(flaper87): Lock should be free
with lockutils.lock("test2", external=True) as lock:
self.assertTrue(lock.exists())
# NOTE(flaper87): Lock should be free
# but semaphore should already exist.
with lockutils.lock("test") as sem2:
self.assertEqual(sem, sem2)
finally:
if os.path.exists(lock_dir):
shutil.rmtree(lock_dir, ignore_errors=True)
def _test_remove_lock_external_file(self, lock_dir, use_external=False):
lock_name = 'mylock'
lock_pfix = 'mypfix-remove-lock-test-'
if use_external:
lock_path = lock_dir
else:
lock_path = None
lockutils.remove_external_lock_file(lock_name, lock_pfix, lock_path)
for ent in os.listdir(lock_dir):
            self.assertFalse(ent.startswith(lock_pfix))
if os.path.exists(lock_dir):
shutil.rmtree(lock_dir, ignore_errors=True)
def test_remove_lock_external_file(self):
lock_dir = tempfile.mkdtemp()
self.config(lock_path=lock_dir, group='oslo_concurrency')
self._test_remove_lock_external_file(lock_dir)
def test_remove_lock_external_file_lock_path(self):
lock_dir = tempfile.mkdtemp()
self._test_remove_lock_external_file(lock_dir,
use_external=True)
def test_no_slash_in_b64(self):
# base64(sha1(foobar)) has a slash in it
with lockutils.lock("foobar"):
pass
def test_deprecated_names(self):
paths = self.create_tempfiles([['fake.conf', '\n'.join([
'[DEFAULT]',
'lock_path=foo',
'disable_process_locking=True'])
]])
conf = cfg.ConfigOpts()
conf(['--config-file', paths[0]])
conf.register_opts(lockutils._opts, 'oslo_concurrency')
self.assertEqual('foo', conf.oslo_concurrency.lock_path)
self.assertTrue(conf.oslo_concurrency.disable_process_locking)
class FileBasedLockingTestCase(test_base.BaseTestCase):
def setUp(self):
super(FileBasedLockingTestCase, self).setUp()
self.lock_dir = tempfile.mkdtemp()
def test_lock_file_exists(self):
lock_file = os.path.join(self.lock_dir, 'lock-file')
@lockutils.synchronized('lock-file', external=True,
lock_path=self.lock_dir)
def foo():
self.assertTrue(os.path.exists(lock_file))
foo()
def test_interprocess_lock(self):
lock_file = os.path.join(self.lock_dir, 'processlock')
pid = os.fork()
if pid:
# Make sure the child grabs the lock first
start = time.time()
while not os.path.exists(lock_file):
if time.time() - start > 5:
self.fail('Timed out waiting for child to grab lock')
time.sleep(0)
lock1 = lockutils.InterProcessLock('foo')
lock1.lockfile = open(lock_file, 'w')
# NOTE(bnemec): There is a brief window between when the lock file
# is created and when it actually becomes locked. If we happen to
# context switch in that window we may succeed in locking the
# file. Keep retrying until we either get the expected exception
# or timeout waiting.
while time.time() - start < 5:
try:
lock1.trylock()
lock1.unlock()
time.sleep(0)
except IOError:
# This is what we expect to happen
break
else:
self.fail('Never caught expected lock exception')
# We don't need to wait for the full sleep in the child here
os.kill(pid, signal.SIGKILL)
else:
try:
lock2 = lockutils.InterProcessLock('foo')
lock2.lockfile = open(lock_file, 'w')
have_lock = False
while not have_lock:
try:
lock2.trylock()
have_lock = True
except IOError:
pass
finally:
# NOTE(bnemec): This is racy, but I don't want to add any
# synchronization primitives that might mask a problem
# with the one we're trying to test here.
time.sleep(.5)
os._exit(0)
def test_interthread_external_lock(self):
call_list = []
@lockutils.synchronized('foo', external=True, lock_path=self.lock_dir)
def foo(param):
"""Simulate a long-running threaded operation."""
call_list.append(param)
# NOTE(bnemec): This is racy, but I don't want to add any
# synchronization primitives that might mask a problem
# with the one we're trying to test here.
time.sleep(.5)
call_list.append(param)
def other(param):
foo(param)
thread = threading.Thread(target=other, args=('other',))
thread.start()
# Make sure the other thread grabs the lock
# NOTE(bnemec): File locks do not actually work between threads, so
# this test is verifying that the local semaphore is still enforcing
# external locks in that case. This means this test does not have
# the same race problem as the process test above because when the
# file is created the semaphore has already been grabbed.
start = time.time()
while not os.path.exists(os.path.join(self.lock_dir, 'foo')):
if time.time() - start > 5:
self.fail('Timed out waiting for thread to grab lock')
time.sleep(0)
thread1 = threading.Thread(target=other, args=('main',))
thread1.start()
thread1.join()
thread.join()
self.assertEqual(['other', 'other', 'main', 'main'], call_list)
def test_non_destructive(self):
lock_file = os.path.join(self.lock_dir, 'not-destroyed')
with open(lock_file, 'w') as f:
f.write('test')
with lockutils.lock('not-destroyed', external=True,
lock_path=self.lock_dir):
with open(lock_file) as f:
self.assertEqual('test', f.read())
class LockutilsModuleTestCase(test_base.BaseTestCase):
def setUp(self):
super(LockutilsModuleTestCase, self).setUp()
self.old_env = os.environ.get('OSLO_LOCK_PATH')
if self.old_env is not None:
del os.environ['OSLO_LOCK_PATH']
def tearDown(self):
if self.old_env is not None:
os.environ['OSLO_LOCK_PATH'] = self.old_env
super(LockutilsModuleTestCase, self).tearDown()
def test_main(self):
script = '\n'.join([
'import os',
'lock_path = os.environ.get("OSLO_LOCK_PATH")',
'assert lock_path is not None',
'assert os.path.isdir(lock_path)',
])
argv = ['', sys.executable, '-c', script]
retval = lockutils._lock_wrapper(argv)
self.assertEqual(0, retval, "Bad OSLO_LOCK_PATH has been set")
def test_return_value_maintained(self):
script = '\n'.join([
'import sys',
'sys.exit(1)',
])
argv = ['', sys.executable, '-c', script]
retval = lockutils._lock_wrapper(argv)
self.assertEqual(1, retval)
def test_direct_call_explodes(self):
cmd = [sys.executable, '-m', 'oslo_concurrency.lockutils']
with open(os.devnull, 'w') as devnull:
retval = subprocess.call(cmd, stderr=devnull)
self.assertEqual(1, retval)
class TestLockFixture(test_base.BaseTestCase):
def setUp(self):
super(TestLockFixture, self).setUp()
self.config = self.useFixture(config.Config(lockutils.CONF)).config
self.tempdir = tempfile.mkdtemp()
def _check_in_lock(self):
self.assertTrue(self.lock.exists())
def tearDown(self):
self._check_in_lock()
super(TestLockFixture, self).tearDown()
def test_lock_fixture(self):
# Setup lock fixture to test that teardown is inside the lock
self.config(lock_path=self.tempdir, group='oslo_concurrency')
fixture = fixtures.LockFixture('test-lock')
self.useFixture(fixture)
self.lock = fixture.lock
class TestGetLockPath(test_base.BaseTestCase):
def setUp(self):
super(TestGetLockPath, self).setUp()
self.conf = self.useFixture(config.Config(lockutils.CONF)).conf
def test_get_default(self):
lockutils.set_defaults(lock_path='/the/path')
self.assertEqual('/the/path', lockutils.get_lock_path(self.conf))
def test_get_override(self):
lockutils._register_opts(self.conf)
self.conf.set_override('lock_path', '/alternate/path',
group='oslo_concurrency')
self.assertEqual('/alternate/path', lockutils.get_lock_path(self.conf))

View File

@ -1,59 +0,0 @@
# Copyright 2011 Justin Santa Barbara
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import shutil
import tempfile
import eventlet
from eventlet import greenpool
from oslotest import base as test_base
from oslo_concurrency import lockutils
class TestFileLocks(test_base.BaseTestCase):
def test_concurrent_green_lock_succeeds(self):
"""Verify spawn_n greenthreads with two locks run concurrently."""
tmpdir = tempfile.mkdtemp()
try:
self.completed = False
def locka(wait):
a = lockutils.InterProcessLock(os.path.join(tmpdir, 'a'))
with a:
wait.wait()
self.completed = True
def lockb(wait):
b = lockutils.InterProcessLock(os.path.join(tmpdir, 'b'))
with b:
wait.wait()
wait1 = eventlet.event.Event()
wait2 = eventlet.event.Event()
pool = greenpool.GreenPool()
pool.spawn_n(locka, wait1)
pool.spawn_n(lockb, wait2)
wait2.send()
eventlet.sleep(0)
wait1.send()
pool.waitall()
self.assertTrue(self.completed)
finally:
if os.path.exists(tmpdir):
shutil.rmtree(tmpdir)

View File

@ -1,897 +0,0 @@
# Copyright 2011 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
import errno
import logging
import multiprocessing
import os
import pickle
import resource
import socket
import stat
import subprocess
import sys
import tempfile
import fixtures
import mock
from oslotest import base as test_base
import six
from oslo_concurrency import processutils
from oslotest import mockpatch
PROCESS_EXECUTION_ERROR_LOGGING_TEST = """#!/bin/bash
exit 41"""
TEST_EXCEPTION_AND_MASKING_SCRIPT = """#!/bin/bash
# This is to test stdout and stderr
# and the command returned in an exception
# when a non-zero exit code is returned
echo onstdout --password='"secret"'
echo onstderr --password='"secret"' 1>&2
exit 38"""
# This byte sequence is undecodable from most encodings
UNDECODABLE_BYTES = b'[a\x80\xe9\xff]'
TRUE_UTILITY = (sys.platform.startswith('darwin') and
'/usr/bin/true' or '/bin/true')
class UtilsTest(test_base.BaseTestCase):
# NOTE(jkoelker) Moar tests from nova need to be ported. But they
# need to be mock'd out. Currently they require actually
# running code.
def test_execute_unknown_kwargs(self):
self.assertRaises(processutils.UnknownArgumentError,
processutils.execute,
hozer=True)
@mock.patch.object(multiprocessing, 'cpu_count', return_value=8)
def test_get_worker_count(self, mock_cpu_count):
self.assertEqual(8, processutils.get_worker_count())
@mock.patch.object(multiprocessing, 'cpu_count',
side_effect=NotImplementedError())
def test_get_worker_count_cpu_count_not_implemented(self,
mock_cpu_count):
self.assertEqual(1, processutils.get_worker_count())
def test_execute_with_callback(self):
on_execute_callback = mock.Mock()
on_completion_callback = mock.Mock()
processutils.execute(TRUE_UTILITY)
self.assertEqual(0, on_execute_callback.call_count)
self.assertEqual(0, on_completion_callback.call_count)
processutils.execute(TRUE_UTILITY, on_execute=on_execute_callback,
on_completion=on_completion_callback)
self.assertEqual(1, on_execute_callback.call_count)
self.assertEqual(1, on_completion_callback.call_count)
@mock.patch.object(subprocess.Popen, "communicate")
def test_execute_with_callback_and_errors(self, mock_comm):
on_execute_callback = mock.Mock()
on_completion_callback = mock.Mock()
def fake_communicate(*args):
raise IOError("Broken pipe")
mock_comm.side_effect = fake_communicate
self.assertRaises(IOError,
processutils.execute,
TRUE_UTILITY,
on_execute=on_execute_callback,
on_completion=on_completion_callback)
self.assertEqual(1, on_execute_callback.call_count)
self.assertEqual(1, on_completion_callback.call_count)
def test_execute_with_preexec_fn(self):
        # NOTE(dims): preexec_fn is set to a callable object; this object
        # will be called in the child process just before the child is
        # executed, so we cannot share variables with it. The simplest
        # approach is to check if a specific exception is thrown, which
        # can be caught here.
def preexec_fn():
raise processutils.InvalidArgumentError()
processutils.execute(TRUE_UTILITY)
expected_exception = (processutils.InvalidArgumentError if six.PY2
else subprocess.SubprocessError)
self.assertRaises(expected_exception,
processutils.execute,
TRUE_UTILITY,
preexec_fn=preexec_fn)
class ProcessExecutionErrorTest(test_base.BaseTestCase):
def test_defaults(self):
err = processutils.ProcessExecutionError()
self.assertTrue('None\n' in six.text_type(err))
self.assertTrue('code: -\n' in six.text_type(err))
def test_with_description(self):
description = 'The Narwhal Bacons at Midnight'
err = processutils.ProcessExecutionError(description=description)
self.assertTrue(description in six.text_type(err))
def test_with_exit_code(self):
exit_code = 0
err = processutils.ProcessExecutionError(exit_code=exit_code)
self.assertTrue(str(exit_code) in six.text_type(err))
def test_with_cmd(self):
cmd = 'telinit'
err = processutils.ProcessExecutionError(cmd=cmd)
self.assertTrue(cmd in six.text_type(err))
def test_with_stdout(self):
stdout = """
Lo, praise of the prowess of people-kings
of spear-armed Danes, in days long sped,
we have heard, and what honot the athelings won!
Oft Scyld the Scefing from squadroned foes,
from many a tribe, the mead-bench tore,
awing the earls. Since erse he lay
friendless, a foundling, fate repaid him:
for he waxed under welkin, in wealth he trove,
till before him the folk, both far and near,
who house by the whale-path, heard his mandate,
gabe him gits: a good king he!
To him an heir was afterward born,
a son in his halls, whom heaven sent
to favor the fol, feeling their woe
that erst they had lacked an earl for leader
so long a while; the Lord endowed him,
the Wielder of Wonder, with world's renown.
""".strip()
err = processutils.ProcessExecutionError(stdout=stdout)
print(six.text_type(err))
self.assertTrue('people-kings' in six.text_type(err))
def test_with_stderr(self):
stderr = 'Cottonian library'
err = processutils.ProcessExecutionError(stderr=stderr)
self.assertTrue(stderr in six.text_type(err))
def test_retry_on_failure(self):
fd, tmpfilename = tempfile.mkstemp()
_, tmpfilename2 = tempfile.mkstemp()
try:
fp = os.fdopen(fd, 'w+')
fp.write('''#!/bin/sh
# If stdin fails to get passed during one of the runs, make a note.
if ! grep -q foo
then
echo 'failure' > "$1"
fi
# If stdin has failed to get passed during this or a previous run, exit early.
if grep failure "$1"
then
exit 1
fi
runs="$(cat $1)"
if [ -z "$runs" ]
then
runs=0
fi
runs=$(($runs + 1))
echo $runs > "$1"
exit 1
''')
fp.close()
os.chmod(tmpfilename, 0o755)
self.assertRaises(processutils.ProcessExecutionError,
processutils.execute,
tmpfilename, tmpfilename2, attempts=10,
process_input=b'foo',
delay_on_retry=False)
fp = open(tmpfilename2, 'r')
runs = fp.read()
fp.close()
            self.assertNotEqual('failure', runs.strip(),
                                'stdin did not always get passed '
                                'correctly')
runs = int(runs.strip())
self.assertEqual(10, runs, 'Ran %d times instead of 10.' % (runs,))
finally:
os.unlink(tmpfilename)
os.unlink(tmpfilename2)
def test_unknown_kwargs_raises_error(self):
self.assertRaises(processutils.UnknownArgumentError,
processutils.execute,
'/usr/bin/env', 'true',
this_is_not_a_valid_kwarg=True)
def test_check_exit_code_boolean(self):
processutils.execute('/usr/bin/env', 'false', check_exit_code=False)
self.assertRaises(processutils.ProcessExecutionError,
processutils.execute,
'/usr/bin/env', 'false', check_exit_code=True)
def test_check_cwd(self):
tmpdir = tempfile.mkdtemp()
out, err = processutils.execute('/usr/bin/env',
'sh', '-c', 'pwd',
cwd=tmpdir)
self.assertIn(tmpdir, out)
def test_check_exit_code_list(self):
processutils.execute('/usr/bin/env', 'sh', '-c', 'exit 101',
check_exit_code=(101, 102))
processutils.execute('/usr/bin/env', 'sh', '-c', 'exit 102',
check_exit_code=(101, 102))
self.assertRaises(processutils.ProcessExecutionError,
processutils.execute,
'/usr/bin/env', 'sh', '-c', 'exit 103',
check_exit_code=(101, 102))
self.assertRaises(processutils.ProcessExecutionError,
processutils.execute,
'/usr/bin/env', 'sh', '-c', 'exit 0',
check_exit_code=(101, 102))
def test_no_retry_on_success(self):
fd, tmpfilename = tempfile.mkstemp()
_, tmpfilename2 = tempfile.mkstemp()
try:
fp = os.fdopen(fd, 'w+')
fp.write("""#!/bin/sh
# If we've already run, bail out.
grep -q foo "$1" && exit 1
# Mark that we've run before.
echo foo > "$1"
# Check that stdin gets passed correctly.
grep foo
""")
fp.close()
os.chmod(tmpfilename, 0o755)
processutils.execute(tmpfilename,
tmpfilename2,
process_input=b'foo',
attempts=2)
finally:
os.unlink(tmpfilename)
os.unlink(tmpfilename2)
    # This test and the one below ensure that when communicate raises
    # an OSError, we do the right thing(s)
def test_exception_on_communicate_error(self):
mock = self.useFixture(mockpatch.Patch(
'subprocess.Popen.communicate',
side_effect=OSError(errno.EAGAIN, 'fake-test')))
self.assertRaises(OSError,
processutils.execute,
'/usr/bin/env',
'false',
check_exit_code=False)
self.assertEqual(1, mock.mock.call_count)
def test_retry_on_communicate_error(self):
mock = self.useFixture(mockpatch.Patch(
'subprocess.Popen.communicate',
side_effect=OSError(errno.EAGAIN, 'fake-test')))
self.assertRaises(OSError,
processutils.execute,
'/usr/bin/env',
'false',
check_exit_code=False,
attempts=5)
self.assertEqual(5, mock.mock.call_count)
def _test_and_check_logging_communicate_errors(self, log_errors=None,
attempts=None):
mock = self.useFixture(mockpatch.Patch(
'subprocess.Popen.communicate',
side_effect=OSError(errno.EAGAIN, 'fake-test')))
fixture = self.useFixture(fixtures.FakeLogger(level=logging.DEBUG))
kwargs = {}
if log_errors:
kwargs.update({"log_errors": log_errors})
if attempts:
kwargs.update({"attempts": attempts})
self.assertRaises(OSError,
processutils.execute,
'/usr/bin/env',
'false',
**kwargs)
self.assertEqual(attempts if attempts else 1, mock.mock.call_count)
self.assertIn('Got an OSError', fixture.output)
self.assertIn('errno: %d' % errno.EAGAIN, fixture.output)
self.assertIn("'/usr/bin/env false'", fixture.output)
def test_logging_on_communicate_error_1(self):
self._test_and_check_logging_communicate_errors(
log_errors=processutils.LOG_FINAL_ERROR,
attempts=None)
def test_logging_on_communicate_error_2(self):
self._test_and_check_logging_communicate_errors(
log_errors=processutils.LOG_FINAL_ERROR,
attempts=1)
def test_logging_on_communicate_error_3(self):
self._test_and_check_logging_communicate_errors(
log_errors=processutils.LOG_FINAL_ERROR,
attempts=5)
def test_logging_on_communicate_error_4(self):
self._test_and_check_logging_communicate_errors(
log_errors=processutils.LOG_ALL_ERRORS,
attempts=None)
def test_logging_on_communicate_error_5(self):
self._test_and_check_logging_communicate_errors(
log_errors=processutils.LOG_ALL_ERRORS,
attempts=1)
def test_logging_on_communicate_error_6(self):
self._test_and_check_logging_communicate_errors(
log_errors=processutils.LOG_ALL_ERRORS,
attempts=5)
def test_with_env_variables(self):
env_vars = {'SUPER_UNIQUE_VAR': 'The answer is 42'}
out, err = processutils.execute('/usr/bin/env', env_variables=env_vars)
self.assertIsInstance(out, str)
self.assertIsInstance(err, str)
self.assertIn('SUPER_UNIQUE_VAR=The answer is 42', out)
def test_binary(self):
env_vars = {'SUPER_UNIQUE_VAR': 'The answer is 42'}
out, err = processutils.execute('/usr/bin/env',
env_variables=env_vars,
binary=True)
self.assertIsInstance(out, bytes)
self.assertIsInstance(err, bytes)
self.assertIn(b'SUPER_UNIQUE_VAR=The answer is 42', out)
def test_exception_and_masking(self):
tmpfilename = self.create_tempfiles(
[["test_exceptions_and_masking",
TEST_EXCEPTION_AND_MASKING_SCRIPT]], ext='bash')[0]
os.chmod(tmpfilename, (stat.S_IRWXU |
stat.S_IRGRP |
stat.S_IXGRP |
stat.S_IROTH |
stat.S_IXOTH))
err = self.assertRaises(processutils.ProcessExecutionError,
processutils.execute,
tmpfilename, 'password="secret"',
'something')
self.assertEqual(38, err.exit_code)
self.assertIsInstance(err.stdout, six.text_type)
self.assertIsInstance(err.stderr, six.text_type)
self.assertIn('onstdout --password="***"', err.stdout)
self.assertIn('onstderr --password="***"', err.stderr)
self.assertEqual(' '.join([tmpfilename,
'password="***"',
'something']),
err.cmd)
self.assertNotIn('secret', str(err))
def execute_undecodable_bytes(self, out_bytes, err_bytes,
exitcode=0, binary=False):
if six.PY3:
code = ';'.join(('import sys',
'sys.stdout.buffer.write(%a)' % out_bytes,
'sys.stdout.flush()',
'sys.stderr.buffer.write(%a)' % err_bytes,
'sys.stderr.flush()',
'sys.exit(%s)' % exitcode))
else:
code = ';'.join(('import sys',
'sys.stdout.write(%r)' % out_bytes,
'sys.stdout.flush()',
'sys.stderr.write(%r)' % err_bytes,
'sys.stderr.flush()',
'sys.exit(%s)' % exitcode))
return processutils.execute(sys.executable, '-c', code, binary=binary)
def check_undecodable_bytes(self, binary):
out_bytes = b'out: ' + UNDECODABLE_BYTES
err_bytes = b'err: ' + UNDECODABLE_BYTES
out, err = self.execute_undecodable_bytes(out_bytes, err_bytes,
binary=binary)
if six.PY3 and not binary:
self.assertEqual(os.fsdecode(out_bytes), out)
self.assertEqual(os.fsdecode(err_bytes), err)
else:
self.assertEqual(out, out_bytes)
self.assertEqual(err, err_bytes)
def test_undecodable_bytes(self):
self.check_undecodable_bytes(False)
def test_binary_undecodable_bytes(self):
self.check_undecodable_bytes(True)
def check_undecodable_bytes_error(self, binary):
out_bytes = b'out: password="secret1" ' + UNDECODABLE_BYTES
err_bytes = b'err: password="secret2" ' + UNDECODABLE_BYTES
exc = self.assertRaises(processutils.ProcessExecutionError,
self.execute_undecodable_bytes,
out_bytes, err_bytes, exitcode=1,
binary=binary)
out = exc.stdout
err = exc.stderr
out_bytes = b'out: password="***" ' + UNDECODABLE_BYTES
err_bytes = b'err: password="***" ' + UNDECODABLE_BYTES
if six.PY3:
# On Python 3, stdout and stderr attributes of
# ProcessExecutionError must always be Unicode
self.assertEqual(os.fsdecode(out_bytes), out)
self.assertEqual(os.fsdecode(err_bytes), err)
else:
# On Python 2, stdout and stderr attributes of
# ProcessExecutionError must always be bytes
self.assertEqual(out_bytes, out)
self.assertEqual(err_bytes, err)
def test_undecodable_bytes_error(self):
self.check_undecodable_bytes_error(False)
def test_binary_undecodable_bytes_error(self):
self.check_undecodable_bytes_error(True)
def test_picklable(self):
exc = processutils.ProcessExecutionError(
stdout='my stdout', stderr='my stderr',
exit_code=42, cmd='my cmd',
description='my description')
exc_message = str(exc)
exc = pickle.loads(pickle.dumps(exc))
self.assertEqual('my stdout', exc.stdout)
self.assertEqual('my stderr', exc.stderr)
self.assertEqual(42, exc.exit_code)
self.assertEqual('my cmd', exc.cmd)
self.assertEqual('my description', exc.description)
self.assertEqual(str(exc), exc_message)
class ProcessExecutionErrorLoggingTest(test_base.BaseTestCase):
def setUp(self):
super(ProcessExecutionErrorLoggingTest, self).setUp()
self.tmpfilename = self.create_tempfiles(
[["process_execution_error_logging_test",
PROCESS_EXECUTION_ERROR_LOGGING_TEST]],
ext='bash')[0]
os.chmod(self.tmpfilename, (stat.S_IRWXU | stat.S_IRGRP |
stat.S_IXGRP | stat.S_IROTH |
stat.S_IXOTH))
def _test_and_check(self, log_errors=None, attempts=None):
fixture = self.useFixture(fixtures.FakeLogger(level=logging.DEBUG))
kwargs = {}
if log_errors:
kwargs.update({"log_errors": log_errors})
if attempts:
kwargs.update({"attempts": attempts})
err = self.assertRaises(processutils.ProcessExecutionError,
processutils.execute,
self.tmpfilename,
**kwargs)
self.assertEqual(41, err.exit_code)
self.assertIn(self.tmpfilename, fixture.output)
def test_with_invalid_log_errors(self):
self.assertRaises(processutils.InvalidArgumentError,
processutils.execute,
self.tmpfilename,
log_errors='invalid')
def test_with_log_errors_NONE(self):
self._test_and_check(log_errors=None, attempts=None)
def test_with_log_errors_final(self):
self._test_and_check(log_errors=processutils.LOG_FINAL_ERROR,
attempts=None)
def test_with_log_errors_all(self):
self._test_and_check(log_errors=processutils.LOG_ALL_ERRORS,
attempts=None)
def test_multiattempt_with_log_errors_NONE(self):
self._test_and_check(log_errors=None, attempts=3)
def test_multiattempt_with_log_errors_final(self):
self._test_and_check(log_errors=processutils.LOG_FINAL_ERROR,
attempts=3)
def test_multiattempt_with_log_errors_all(self):
self._test_and_check(log_errors=processutils.LOG_ALL_ERRORS,
attempts=3)
def fake_execute(*cmd, **kwargs):
return 'stdout', 'stderr'
def fake_execute_raises(*cmd, **kwargs):
raise processutils.ProcessExecutionError(exit_code=42,
stdout='stdout',
stderr='stderr',
cmd=['this', 'is', 'a',
'command'])
class TryCmdTestCase(test_base.BaseTestCase):
def test_keep_warnings(self):
self.useFixture(fixtures.MonkeyPatch(
'oslo_concurrency.processutils.execute', fake_execute))
o, e = processutils.trycmd('this is a command'.split(' '))
self.assertNotEqual('', o)
self.assertNotEqual('', e)
def test_keep_warnings_from_raise(self):
self.useFixture(fixtures.MonkeyPatch(
'oslo_concurrency.processutils.execute', fake_execute_raises))
o, e = processutils.trycmd('this is a command'.split(' '),
discard_warnings=True)
self.assertIsNotNone(o)
self.assertNotEqual('', e)
def test_discard_warnings(self):
self.useFixture(fixtures.MonkeyPatch(
'oslo_concurrency.processutils.execute', fake_execute))
o, e = processutils.trycmd('this is a command'.split(' '),
discard_warnings=True)
self.assertIsNotNone(o)
self.assertEqual('', e)
class FakeSshChannel(object):
def __init__(self, rc):
self.rc = rc
def recv_exit_status(self):
return self.rc
class FakeSshStream(six.BytesIO):
def setup_channel(self, rc):
self.channel = FakeSshChannel(rc)
class FakeSshConnection(object):
def __init__(self, rc, out=b'stdout', err=b'stderr'):
self.rc = rc
self.out = out
self.err = err
def exec_command(self, cmd, timeout=None):
if timeout:
raise socket.timeout()
stdout = FakeSshStream(self.out)
stdout.setup_channel(self.rc)
return (six.BytesIO(),
stdout,
six.BytesIO(self.err))
class SshExecuteTestCase(test_base.BaseTestCase):
def test_invalid_addl_env(self):
self.assertRaises(processutils.InvalidArgumentError,
processutils.ssh_execute,
None, 'ls', addl_env='important')
def test_invalid_process_input(self):
self.assertRaises(processutils.InvalidArgumentError,
processutils.ssh_execute,
None, 'ls', process_input='important')
def test_timeout_error(self):
self.assertRaises(socket.timeout,
processutils.ssh_execute,
FakeSshConnection(0), 'ls',
timeout=10)
def test_works(self):
out, err = processutils.ssh_execute(FakeSshConnection(0), 'ls')
self.assertEqual('stdout', out)
self.assertEqual('stderr', err)
self.assertIsInstance(out, six.text_type)
self.assertIsInstance(err, six.text_type)
def test_binary(self):
o, e = processutils.ssh_execute(FakeSshConnection(0), 'ls',
binary=True)
self.assertEqual(b'stdout', o)
self.assertEqual(b'stderr', e)
self.assertIsInstance(o, bytes)
self.assertIsInstance(e, bytes)
def check_undecodable_bytes(self, binary):
out_bytes = b'out: ' + UNDECODABLE_BYTES
err_bytes = b'err: ' + UNDECODABLE_BYTES
conn = FakeSshConnection(0, out=out_bytes, err=err_bytes)
out, err = processutils.ssh_execute(conn, 'ls', binary=binary)
if six.PY3 and not binary:
self.assertEqual(os.fsdecode(out_bytes), out)
self.assertEqual(os.fsdecode(err_bytes), err)
else:
self.assertEqual(out_bytes, out)
self.assertEqual(err_bytes, err)
def test_undecodable_bytes(self):
self.check_undecodable_bytes(False)
def test_binary_undecodable_bytes(self):
self.check_undecodable_bytes(True)
def check_undecodable_bytes_error(self, binary):
out_bytes = b'out: password="secret1" ' + UNDECODABLE_BYTES
err_bytes = b'err: password="secret2" ' + UNDECODABLE_BYTES
conn = FakeSshConnection(1, out=out_bytes, err=err_bytes)
out_bytes = b'out: password="***" ' + UNDECODABLE_BYTES
err_bytes = b'err: password="***" ' + UNDECODABLE_BYTES
exc = self.assertRaises(processutils.ProcessExecutionError,
processutils.ssh_execute,
conn, 'ls',
binary=binary, check_exit_code=True)
out = exc.stdout
err = exc.stderr
if six.PY3:
# On Python 3, stdout and stderr attributes of
# ProcessExecutionError must always be Unicode
self.assertEqual(os.fsdecode(out_bytes), out)
self.assertEqual(os.fsdecode(err_bytes), err)
else:
# On Python 2, stdout and stderr attributes of
# ProcessExecutionError must always be bytes
self.assertEqual(out_bytes, out)
self.assertEqual(err_bytes, err)
def test_undecodable_bytes_error(self):
self.check_undecodable_bytes_error(False)
def test_binary_undecodable_bytes_error(self):
self.check_undecodable_bytes_error(True)
def test_fails(self):
self.assertRaises(processutils.ProcessExecutionError,
processutils.ssh_execute, FakeSshConnection(1), 'ls')
def _test_compromising_ssh(self, rc, check):
fixture = self.useFixture(fixtures.FakeLogger(level=logging.DEBUG))
fake_stdin = six.BytesIO()
fake_stdout = mock.Mock()
fake_stdout.channel.recv_exit_status.return_value = rc
fake_stdout.read.return_value = b'password="secret"'
fake_stderr = six.BytesIO(b'password="foobar"')
command = 'ls --password="bar"'
connection = mock.Mock()
connection.exec_command.return_value = (fake_stdin, fake_stdout,
fake_stderr)
if check and rc != -1 and rc != 0:
err = self.assertRaises(processutils.ProcessExecutionError,
processutils.ssh_execute,
connection, command,
check_exit_code=check)
self.assertEqual(rc, err.exit_code)
self.assertEqual('password="***"', err.stdout)
self.assertEqual('password="***"', err.stderr)
self.assertEqual('ls --password="***"', err.cmd)
self.assertNotIn('secret', str(err))
self.assertNotIn('foobar', str(err))
else:
o, e = processutils.ssh_execute(connection, command,
check_exit_code=check)
self.assertEqual('password="***"', o)
self.assertEqual('password="***"', e)
self.assertIn('password="***"', fixture.output)
self.assertNotIn('bar', fixture.output)
def test_compromising_ssh1(self):
self._test_compromising_ssh(rc=-1, check=True)
def test_compromising_ssh2(self):
self._test_compromising_ssh(rc=0, check=True)
def test_compromising_ssh3(self):
self._test_compromising_ssh(rc=1, check=True)
def test_compromising_ssh4(self):
self._test_compromising_ssh(rc=1, check=False)
def test_compromising_ssh5(self):
self._test_compromising_ssh(rc=0, check=False)
def test_compromising_ssh6(self):
self._test_compromising_ssh(rc=-1, check=False)
class PrlimitTestCase(test_base.BaseTestCase):
    # Simple program that does nothing and returns exit code 0.
# Use Python to be portable.
SIMPLE_PROGRAM = [sys.executable, '-c', 'pass']
    def soft_limit(self, res, subtract, default_limit):
        # Create a new soft limit for a resource, lower than the current
        # soft limit.
        soft_limit, hard_limit = resource.getrlimit(res)
        if soft_limit <= 0:
            soft_limit = default_limit
        else:
            soft_limit -= subtract
        return soft_limit
def memory_limit(self, res):
        # Subtract 1 kB just to get a different limit. Don't subtract too
        # much to avoid memory allocation issues.
        #
        # Use 1 GB by default. Limit high enough to be able to load shared
        # libraries. Limit low enough to work on 32-bit platforms.
return self.soft_limit(res, 1024, 1024 ** 3)
def limit_address_space(self):
max_memory = self.memory_limit(resource.RLIMIT_AS)
return processutils.ProcessLimits(address_space=max_memory)
def test_simple(self):
        # Simple test running a trivial program with no parameters
prlimit = self.limit_address_space()
stdout, stderr = processutils.execute(*self.SIMPLE_PROGRAM,
prlimit=prlimit)
self.assertEqual('', stdout.rstrip())
        self.assertEqual('', stderr.rstrip())
def check_limit(self, prlimit, resource, value):
code = ';'.join(('import resource',
'print(resource.getrlimit(resource.%s))' % resource))
args = [sys.executable, '-c', code]
stdout, stderr = processutils.execute(*args, prlimit=prlimit)
expected = (value, value)
self.assertEqual(str(expected), stdout.rstrip())
def test_address_space(self):
prlimit = self.limit_address_space()
self.check_limit(prlimit, 'RLIMIT_AS', prlimit.address_space)
def test_core_size(self):
size = self.soft_limit(resource.RLIMIT_CORE, 1, 1024)
prlimit = processutils.ProcessLimits(core_file_size=size)
self.check_limit(prlimit, 'RLIMIT_CORE', prlimit.core_file_size)
def test_cpu_time(self):
time = self.soft_limit(resource.RLIMIT_CPU, 1, 1024)
prlimit = processutils.ProcessLimits(cpu_time=time)
self.check_limit(prlimit, 'RLIMIT_CPU', prlimit.cpu_time)
def test_data_size(self):
max_memory = self.memory_limit(resource.RLIMIT_DATA)
prlimit = processutils.ProcessLimits(data_size=max_memory)
self.check_limit(prlimit, 'RLIMIT_DATA', max_memory)
def test_file_size(self):
size = self.soft_limit(resource.RLIMIT_FSIZE, 1, 1024)
prlimit = processutils.ProcessLimits(file_size=size)
self.check_limit(prlimit, 'RLIMIT_FSIZE', prlimit.file_size)
def test_memory_locked(self):
max_memory = self.memory_limit(resource.RLIMIT_MEMLOCK)
prlimit = processutils.ProcessLimits(memory_locked=max_memory)
self.check_limit(prlimit, 'RLIMIT_MEMLOCK', max_memory)
def test_resident_set_size(self):
max_memory = self.memory_limit(resource.RLIMIT_RSS)
prlimit = processutils.ProcessLimits(resident_set_size=max_memory)
self.check_limit(prlimit, 'RLIMIT_RSS', max_memory)
def test_number_files(self):
nfiles = self.soft_limit(resource.RLIMIT_NOFILE, 1, 1024)
prlimit = processutils.ProcessLimits(number_files=nfiles)
self.check_limit(prlimit, 'RLIMIT_NOFILE', nfiles)
def test_number_processes(self):
nprocs = self.soft_limit(resource.RLIMIT_NPROC, 1, 65535)
prlimit = processutils.ProcessLimits(number_processes=nprocs)
self.check_limit(prlimit, 'RLIMIT_NPROC', nprocs)
def test_stack_size(self):
max_memory = self.memory_limit(resource.RLIMIT_STACK)
prlimit = processutils.ProcessLimits(stack_size=max_memory)
self.check_limit(prlimit, 'RLIMIT_STACK', max_memory)
def test_unsupported_prlimit(self):
self.assertRaises(ValueError, processutils.ProcessLimits, xxx=33)
def test_relative_path(self):
prlimit = self.limit_address_space()
program = sys.executable
env = dict(os.environ)
env['PATH'] = os.path.dirname(program)
args = [os.path.basename(program), '-c', 'pass']
processutils.execute(*args, prlimit=prlimit, env_variables=env)
def test_execv_error(self):
prlimit = self.limit_address_space()
args = ['/missing_path/dont_exist/program']
try:
processutils.execute(*args, prlimit=prlimit)
except processutils.ProcessExecutionError as exc:
self.assertEqual(1, exc.exit_code)
self.assertEqual('', exc.stdout)
expected = ('%s -m oslo_concurrency.prlimit: '
'failed to execute /missing_path/dont_exist/program: '
% os.path.basename(sys.executable))
self.assertIn(expected, exc.stderr)
else:
self.fail("ProcessExecutionError not raised")
def test_setrlimit_error(self):
prlimit = self.limit_address_space()
# trying to set a limit higher than the current hard limit
# with setrlimit() should fail.
higher_limit = prlimit.address_space + 1024
args = [sys.executable, '-m', 'oslo_concurrency.prlimit',
'--as=%s' % higher_limit,
'--']
args.extend(self.SIMPLE_PROGRAM)
try:
processutils.execute(*args, prlimit=prlimit)
except processutils.ProcessExecutionError as exc:
self.assertEqual(1, exc.exit_code)
self.assertEqual('', exc.stdout)
expected = ('%s -m oslo_concurrency.prlimit: '
'failed to set the AS resource limit: '
% os.path.basename(sys.executable))
self.assertIn(expected, exc.stderr)
else:
self.fail("ProcessExecutionError not raised")

View File

@ -1,18 +0,0 @@
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pbr.version
version_info = pbr.version.VersionInfo('oslo_concurrency')

View File

@ -1,76 +0,0 @@
# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Watchdog module.
.. versionadded:: 0.4
"""
import contextlib
import logging
import threading
from oslo_utils import timeutils
@contextlib.contextmanager
def watch(logger, action, level=logging.DEBUG, after=5.0):
"""Log a message if an operation exceeds a time threshold.
This context manager is expected to be used when you are going to
do an operation in code which might either deadlock or take an
extraordinary amount of time, and you'd like to emit a status
message back to the user that the operation is still ongoing but
has not completed in an expected amount of time. This is more user
friendly than logging 'start' and 'end' events and making users
correlate the events to figure out they ended up in a deadlock.
:param logger: an object that complies to the logger definition
(has a .log method).
:param action: a meaningful string that describes the thing you
are about to do.
:param level: the logging level the message should be emitted
at. Defaults to logging.DEBUG.
:param after: the duration in seconds before the message is
emitted. Defaults to 5.0 seconds.
Example usage::
FORMAT = '%(asctime)-15s %(message)s'
logging.basicConfig(format=FORMAT)
LOG = logging.getLogger('mylogger')
with watchdog.watch(LOG, "subprocess call", logging.ERROR):
subprocess.call("sleep 10", shell=True)
print "done"
"""
watch = timeutils.StopWatch()
watch.start()
def log():
msg = "%s not completed after %0.3fs" % (action, watch.elapsed())
logger.log(level, msg)
timer = threading.Timer(after, log)
timer.start()
try:
yield
finally:
timer.cancel()
timer.join()
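Lifted out of the docstring, a self-contained sketch of the watch() context manager defined above; the two-second threshold and the sleep standing in for a slow operation are illustrative only:

import logging
import time

from oslo_concurrency import watchdog

logging.basicConfig(format='%(asctime)-15s %(message)s')
LOG = logging.getLogger('mylogger')

# If the block runs longer than 2 seconds, a single message like
# "file copy not completed after 2.001s" is logged at WARNING; the
# timer is cancelled and joined on exit either way.
with watchdog.watch(LOG, 'file copy', logging.WARNING, after=2.0):
    time.sleep(5)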

View File

@ -1,3 +0,0 @@
---
other:
  - Switch to reno for managing release notes.

View File

@ -1,273 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
# sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'oslosphinx',
'reno.sphinxext',
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
# source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'oslo.concurrency Release Notes'
copyright = u'2016, oslo.concurrency Developers'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
from oslo_concurrency.version import version_info as oslo_concurrency_version
# The full version, including alpha/beta/rc tags.
release = oslo_concurrency_version.version_string_with_vcs()
# The short X.Y version.
version = oslo_concurrency_version.canonical_version_string()
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
# language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all
# documents.
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
# keep_warnings = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
# html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_domain_indices = True
# If false, no index is generated.
# html_use_index = True
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'oslo.concurrencyReleaseNotesDoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
# 'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
('index', 'oslo.concurrencyReleaseNotes.tex',
u'oslo.concurrency Release Notes Documentation',
u'oslo.concurrency Developers', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# If true, show page references after internal links.
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'oslo.concurrencyReleaseNotes',
u'oslo.concurrency Release Notes Documentation',
[u'oslo.concurrency Developers'], 1)
]
# If true, show URL addresses after external links.
# man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'oslo.concurrencyReleaseNotes',
u'oslo.concurrency Release Notes Documentation',
u'oslo.concurrency Developers', 'oslo.concurrencyReleaseNotes',
'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
# texinfo_appendices = []
# If false, no module index is generated.
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False

View File

@ -1,8 +0,0 @@
===============================
oslo.concurrency Release Notes
===============================

.. toctree::
   :maxdepth: 1

   unreleased

View File

@ -1,30 +0,0 @@
# Andi Chandler <andi@gowling.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: oslo.concurrency Release Notes 3.11.1\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2016-07-01 03:44+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-06-28 05:54+0000\n"
"Last-Translator: Andi Chandler <andi@gowling.com>\n"
"Language-Team: English (United Kingdom)\n"
"Language: en-GB\n"
"X-Generator: Zanata 3.7.3\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
msgid "3.10.0"
msgstr "3.10.0"
msgid "Other Notes"
msgstr "Other Notes"
msgid "Switch to reno for managing release notes."
msgstr "Switch to reno for managing release notes."
msgid "Unreleased Release Notes"
msgstr "Unreleased Release Notes"
msgid "oslo.concurrency Release Notes"
msgstr "oslo.concurrency Release Notes"

View File

@ -1,5 +0,0 @@
==========================
Unreleased Release Notes
==========================

.. release-notes::

View File

@ -1,13 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
pbr>=1.6 # Apache-2.0
enum34;python_version=='2.7' or python_version=='2.6' or python_version=='3.3' # BSD
iso8601>=0.1.11 # MIT
oslo.config>=3.12.0 # Apache-2.0
oslo.i18n>=2.1.0 # Apache-2.0
oslo.utils>=3.16.0 # Apache-2.0
six>=1.9.0 # MIT
fasteners>=0.7 # Apache-2.0
retrying!=1.3.0,>=1.2.3 # Apache-2.0

View File

@ -1,58 +0,0 @@
[metadata]
name = oslo.concurrency
summary = Oslo Concurrency library
description-file =
    README.rst
author = OpenStack
author-email = openstack-dev@lists.openstack.org
home-page = http://launchpad.net/oslo
classifier =
    Environment :: OpenStack
    Intended Audience :: Information Technology
    Intended Audience :: System Administrators
    License :: OSI Approved :: Apache Software License
    Operating System :: POSIX :: Linux
    Programming Language :: Python
    Programming Language :: Python :: 2
    Programming Language :: Python :: 2.7
    Programming Language :: Python :: 3
    Programming Language :: Python :: 3.4
    Programming Language :: Python :: 3.5

[files]
packages =
    oslo_concurrency

[entry_points]
oslo.config.opts =
    oslo.concurrency = oslo_concurrency.opts:list_opts
console_scripts =
    lockutils-wrapper = oslo_concurrency.lockutils:main

[build_sphinx]
source-dir = doc/source
build-dir = doc/build
all_files = 1

[upload_sphinx]
upload-dir = doc/build/html

[compile_catalog]
directory = oslo_concurrency/locale
domain = oslo_concurrency

[update_catalog]
domain = oslo_concurrency
output_dir = oslo_concurrency/locale
input_file = oslo_concurrency/locale/oslo_concurrency.pot

[extract_messages]
keywords = _ gettext ngettext l_ lazy_gettext
mapping_file = babel.cfg
output_file = oslo_concurrency/locale/oslo_concurrency.pot

[pbr]
warnerrors = True

[wheel]
universal = 1
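A sketch of what the oslo.config.opts entry point declared above resolves to: configuration generators import oslo_concurrency.opts and call list_opts() to discover the library's options. The (group, options) tuple shape follows the usual oslo.config convention and is assumed here rather than shown in this diff:

from oslo_concurrency import opts

# list_opts() returns (group, [opt, ...]) pairs so tools such as
# oslo-config-generator can document every registered option.
for group, options in opts.list_opts():
    print(group, [opt.name for opt in options])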

View File

@ -1,29 +0,0 @@
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools
# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
    import multiprocessing  # noqa
except ImportError:
    pass

setuptools.setup(
    setup_requires=['pbr>=1.8'],
    pbr=True)

View File

@ -1,16 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
hacking<0.11,>=0.10.0
oslotest>=1.10.0 # Apache-2.0
coverage>=3.6 # Apache-2.0
futures>=3.0;python_version=='2.7' or python_version=='2.6' # BSD
fixtures>=3.0.0 # Apache-2.0/BSD
# These are needed for docs generation
oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0
sphinx!=1.3b1,<1.3,>=1.2.1 # BSD
reno>=1.8.0 # Apache2
eventlet!=0.18.3,>=0.18.2 # MIT

42
tox.ini
View File

@ -1,42 +0,0 @@
[tox]
minversion = 1.6
envlist = py35,py34,py27,pep8

[testenv]
deps = -r{toxinidir}/test-requirements.txt
# We want to support both vanilla stdlib and eventlet monkey patched
commands =
    lockutils-wrapper python setup.py testr --slowest --testr-args='{posargs}'
    env TEST_EVENTLET=1 lockutils-wrapper python setup.py testr --slowest --testr-args='{posargs}'

[testenv:pep8]
commands = flake8

[testenv:venv]
commands = {posargs}

[testenv:docs]
commands = python setup.py build_sphinx

[testenv:cover]
commands = python setup.py test --coverage --coverage-package-name=oslo_concurrency --testr-args='{posargs}'

[flake8]
show-source = True
ignore = H405
exclude=.venv,.git,.tox,dist,*lib/python*,*egg,build

[hacking]
import_exceptions =
    oslo_concurrency._i18n

[testenv:pip-missing-reqs]
# do not install test-requirements as that will pollute the virtualenv for
# determining missing packages
# this also means that pip-missing-reqs must be installed separately, outside
# of the requirements.txt files
deps = pip_missing_reqs
commands = pip-missing-reqs -d --ignore-module=oslo_concurrency* --ignore-file=oslo_concurrency/tests/* --ignore-file=tests/ oslo_concurrency

[testenv:releasenotes]
commands = sphinx-build -a -E -W -d releasenotes/build/doctrees -b html releasenotes/source releasenotes/build/html