Update APMEC code
commit 6f5e1d7338
CONTRIBUTING.rst (new file, 16 lines)
@@ -0,0 +1,16 @@
If you would like to contribute to the development of OpenStack,
you must follow the steps in this page:

   https://docs.openstack.org/infra/manual/developers.html

Once those steps have been completed, changes to OpenStack
should be submitted for review via the Gerrit tool, following
the workflow documented at:

   https://docs.openstack.org/infra/manual/developers.html#development-workflow

Pull requests submitted through GitHub will be ignored.

Bugs should be filed on Launchpad, not GitHub:

   https://bugs.launchpad.net/apmec
HACKING.rst (new file, 19 lines)
@@ -0,0 +1,19 @@
Apmec Style Commandments
========================

- Step 1: Read the OpenStack Style Commandments
  https://docs.openstack.org/hacking/latest/
- Step 2: Read on

Apmec Specific Commandments
---------------------------

- [N320] Validate that LOG messages, except debug ones, have translations

Creating Unit Tests
-------------------
For every new feature, unit tests should be created that both test and
(implicitly) document the usage of said feature. If submitting a patch for a
bug that had no unit test, a new passing unit test should be added. If a
submitted bug fix does have a unit test, be sure to add a new one that fails
without the patch and passes with the patch.
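
A minimal sketch of this pattern, using the standard ``unittest`` module
(the function under test is a stand-in, not actual Apmec code)::

    import unittest


    def _parse_limit(value):
        # Stand-in for the function being patched.
        limit = int(value)
        if limit < 0:
            raise ValueError("limit must be >= 0")
        return limit


    class TestParseLimit(unittest.TestCase):

        def test_negative_limit_rejected(self):
            # Fails without the patch, passes with it.
            self.assertRaises(ValueError, _parse_limit, "-1")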
LICENSE (new file, 201 lines)
@@ -0,0 +1,201 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
README.md (new file, 35 lines)
@@ -0,0 +1,35 @@
# apmec

This project aims at building an automated platform for Mobile Edge Cloud (MEC) based on OpenStack.

The objectives of APMEC are to:

- manage the lifecycle of MEC applications, including create/update/delete
- monitor MEC applications
- scale MEC applications in and out
- provide advanced features like live migration, state management, and fast data processing
- integrate tightly with OpenStack projects like Apmec (MEC Orchestrator)

This project is still under development, so please take the copyright into consideration.

**Taxonomy**:

- MEP: Mobile Edge Platform
- MEM: Mobile Edge Manager
- MEO: Mobile Edge Orchestrator
- MEA: Mobile Edge Application
- MEAD: MEA Descriptor

Author: Tung Doan
README.rst (new file, 41 lines)
@@ -0,0 +1,41 @@
Welcome!
========

APMEC is an OpenStack-based MEC Orchestrator service with a built-in,
general-purpose MEC Manager to deploy and operate MEC applications (MEPs)
on an OpenStack-based MEC Platform. It is based on the ETSI MEC Architectural
Framework and provides a full functional stack to orchestrate MEC applications.

Installation:
=============

Installation instructions:
https://wiki.openstack.org/wiki/apmec/Installation
**Add wiki comments here**

The APMEC code base now supports the OpenStack master and Pike releases. Please
follow the instructions in the above wiki for a successful installation of
the corresponding release.

Code:
=====

APMEC code is available in the following repositories:

* **APMEC server:** https://git.openstack.org/cgit/openstack/apmec
* **APMEC Python client:** https://git.openstack.org/cgit/openstack/python-apmecclient
* **APMEC Horizon UI:** https://git.openstack.org/cgit/openstack/apmec-gui

Bugs:
=====

Please report bugs at: https://bugs.launchpad.net/apmec

External Resources:
===================

MEC Wiki:
https://wiki.openstack.org/wiki/apmec

For help on usage and hacking of APMEC, please send mail to
<mailto:openstack-dev@lists.openstack.org> with the [APMEC] tag.
TESTING.rst (new file, 130 lines)
@@ -0,0 +1,130 @@
Testing Apmec
=============

Overview
--------

The unit tests are meant to cover as much code as possible and should
be executed without the service running. They are designed to test
the various pieces of the apmec tree to make sure any new changes
don't break existing functionality.

The functional tests are intended to validate actual system
interaction. Mocks should be used sparingly, if at all. Care
should be taken to ensure that existing system resources are not
modified and that resources created in tests are properly cleaned
up.

Development process
-------------------

It is expected that any new changes that are proposed for merge
come with tests for that feature or code area. Ideally any bug
fixes that are submitted also have tests to prove that they stay
fixed! In addition, before proposing for merge, all of the
current tests should be passing.

Running unit tests
------------------

There are two mechanisms for running tests: tox and nose. Before
submitting a patch for review you should always ensure all tests pass;
a tox run is triggered by the Jenkins gate executed on Gerrit for
each patch pushed for review.

With these mechanisms you can either run the tests in the standard
environment or create a virtual environment to run them in.

By default, after running all of the tests, any pep8 errors
found in the tree will be reported.

Note that the tests can use a database; see ``tools/tests-setup.sh``
for how the databases are set up in the OpenStack CI environment.

With `nose`
~~~~~~~~~~~

You can use `nose`_ to run individual tests, as well as to debug
portions of your code::

    source .venv/bin/activate
    pip install nose
    nosetests

There are disadvantages to running nose - the tests are run sequentially, so
race condition bugs will not be triggered, and the full test suite will
take significantly longer than tox and testr. The upside is that testr has
some rough edges when it comes to diagnosing errors and failures, and there is
no easy way to set a breakpoint in the Apmec code and enter an
interactive debugging session while using testr.

.. _nose: https://nose.readthedocs.org/en/latest/index.html

With `tox`
~~~~~~~~~~

Apmec, like other OpenStack projects, uses `tox`_ for managing the virtual
environments for running test cases. It uses `Testr`_ for managing the running
of the test cases.

Tox handles the creation of a series of `virtualenvs`_ that target specific
versions of Python (2.7, 3.5, etc.).

Testr handles the parallel execution of series of test cases as well as
the tracking of long-running tests and other things.

Running unit tests is as easy as executing this in the root directory of the
Apmec source code::

    tox

For more information on the standard tox-based test infrastructure used by
OpenStack and how to do some common test/debugging procedures with Testr,
see this wiki page:

  https://wiki.openstack.org/wiki/Testr

.. _Testr: https://wiki.openstack.org/wiki/Testr
.. _tox: http://tox.readthedocs.org/en/latest/
.. _virtualenvs: https://pypi.python.org/pypi/virtualenv


Running individual tests
~~~~~~~~~~~~~~~~~~~~~~~~

For running individual test modules or cases, pass the dot-separated
path to the module you want as an argument to tox.

For executing a specific test case, specify the name of the test case
class, separating it from the module path with a colon.

For example, the following would run only the TestMemPlugin tests from
apmec/tests/unit/vm/test_plugin.py::

    $ tox -e py27 -- apmec.tests.unit.vm.test_plugin:TestMemPlugin

Debugging
---------

It's possible to debug tests in a tox environment::

    $ tox -e venv -- python -m testtools.run [test module path]

Tox-created virtual environments (venvs) can also be activated
after a tox run and reused for debugging::

    $ tox -e venv
    $ . .tox/venv/bin/activate
    $ python -m testtools.run [test module path]

Tox packages and installs the apmec source tree in a given venv
on every invocation, but if modifications need to be made between
invocations (e.g. adding more pdb statements), it is recommended
that the source tree be installed in the venv in editable mode::

    # run this only after activating the venv
    $ pip install --editable .

Editable mode ensures that changes made to the source tree are
automatically reflected in the venv, and that such changes are not
overwritten during the next tox run.
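
For example, a breakpoint can be set in the code under test and hit by
running the affected module serially through ``testtools.run`` (the module
path below is illustrative)::

    # in the code under test
    import pdb; pdb.set_trace()

    # then, inside the activated venv
    $ python -m testtools.run apmec.tests.unit.vm.test_plugin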
apmec/__init__.py (new file, 23 lines)
@@ -0,0 +1,23 @@
# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import gettext

import six

if six.PY2:
    gettext.install('apmec', unicode=1)
else:
    gettext.install('apmec')
apmec/_i18n.py (new file, 24 lines)
@@ -0,0 +1,24 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import oslo_i18n

_translators = oslo_i18n.TranslatorFactory(domain='apmec')

# The primary translation function using the well-known name "_"
_ = _translators.primary


def enable_lazy(enable=True):
    return oslo_i18n.enable_lazy(enable)
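
A short usage sketch for the translation marker defined above (the function
and message are illustrative, not part of this commit)::

    from apmec._i18n import _

    def describe(mea_id):
        # The string literal is marked for translation at the call site.
        return _("MEA %s not found") % mea_id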
apmec/agent/__init__.py (new file, 0 lines)

apmec/agent/linux/__init__.py (new file, 0 lines)
apmec/agent/linux/utils.py (new file, 130 lines)
@@ -0,0 +1,130 @@
# Copyright 2012 Locaweb.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import fcntl
import os
import shlex
import socket
import struct
import tempfile

from eventlet.green import subprocess
from eventlet import greenthread
from oslo_log import log as logging
from oslo_utils import excutils

from apmec.common import utils


LOG = logging.getLogger(__name__)


def create_process(cmd, root_helper=None, addl_env=None,
                   debuglog=True):
    """Create a process object for the given command.

    The return value will be a tuple of the process object and the
    list of command arguments used to create it.
    """
    if root_helper:
        cmd = shlex.split(root_helper) + cmd
    cmd = map(str, cmd)

    if debuglog:
        LOG.debug("Running command: %s", cmd)
    env = os.environ.copy()
    if addl_env:
        env.update(addl_env)

    obj = utils.subprocess_popen(cmd, shell=False,
                                 stdin=subprocess.PIPE,
                                 stdout=subprocess.PIPE,
                                 stderr=subprocess.PIPE,
                                 env=env)

    return obj, cmd


def execute(cmd, root_helper=None, process_input=None, addl_env=None,
            check_exit_code=True, return_stderr=False, debuglog=True):
    # Note(gongysh) not use log_levels in config file because
    # some other codes that are not in a loop probably need the debug log
    try:
        obj, cmd = create_process(cmd, root_helper=root_helper,
                                  addl_env=addl_env, debuglog=debuglog)
        _stdout, _stderr = (process_input and
                            obj.communicate(process_input) or
                            obj.communicate())
        obj.stdin.close()
        m = _("\nCommand: %(cmd)s\nExit code: %(code)s\nStdout: %(stdout)r\n"
              "Stderr: %(stderr)r") % {'cmd': cmd, 'code': obj.returncode,
                                       'stdout': _stdout, 'stderr': _stderr}
        if obj.returncode:
            LOG.error(m)
            if check_exit_code:
                raise RuntimeError(m)
        elif debuglog:
            LOG.debug(m)
    finally:
        # NOTE(termie): this appears to be necessary to let the subprocess
        #               call clean something up in between calls, without
        #               it two execute calls in a row hangs the second one
        greenthread.sleep(0)

    return return_stderr and (_stdout, _stderr) or _stdout


def get_interface_mac(interface):
    DEVICE_NAME_LEN = 15
    MAC_START = 18
    MAC_END = 24
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    info = fcntl.ioctl(s.fileno(), 0x8927,
                       struct.pack('256s', interface[:DEVICE_NAME_LEN]))
    return ''.join(['%02x:' % ord(char)
                    for char in info[MAC_START:MAC_END]])[:-1]


def replace_file(file_name, data):
    """Replaces the contents of file_name with data in a safe manner.

    First write to a temp file and then rename. Since POSIX renames are
    atomic, the file is unlikely to be corrupted by competing writes.

    We create the tempfile on the same device to ensure that it can be
    renamed.
    """

    base_dir = os.path.dirname(os.path.abspath(file_name))
    tmp_file = tempfile.NamedTemporaryFile('w+', dir=base_dir, delete=False)
    tmp_file.write(data)
    tmp_file.close()
    os.chmod(tmp_file.name, 0o644)
    os.rename(tmp_file.name, file_name)


def find_child_pids(pid):
    """Retrieve a list of the pids of child processes of the given pid."""

    try:
        raw_pids = execute(['ps', '--ppid', pid, '-o', 'pid='])
    except RuntimeError as e:
        # Unexpected errors are the responsibility of the caller
        with excutils.save_and_reraise_exception() as ctxt:
            # Exception has already been logged by execute
            no_children_found = 'Exit code: 1' in str(e)
            if no_children_found:
                ctxt.reraise = False
                return []
    return [x.strip() for x in raw_pids.split('\n') if x.strip()]
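
A usage sketch for the helpers above (the command is illustrative; the
keyword arguments match the signatures defined in this file)::

    from apmec.agent.linux import utils

    # Raises RuntimeError on a non-zero exit code.
    out = utils.execute(['ip', 'link', 'show'])

    # Capture stderr too, without raising on failure.
    out, err = utils.execute(['ip', 'link', 'show'],
                             check_exit_code=False, return_stderr=True)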
apmec/alarm_receiver.py (new file, 94 lines)
@@ -0,0 +1,94 @@
# Copyright 2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_config import cfg
from oslo_log import log as logging
from oslo_serialization import jsonutils
from six.moves.urllib import parse

from apmec.mem.monitor_drivers.token import Token
from apmec import wsgi

# check alarm url with db --> move to plugin


LOG = logging.getLogger(__name__)

OPTS = [
    cfg.StrOpt('username', default='admin',
               help=_('User name for alarm monitoring')),
    cfg.StrOpt('password', default='devstack',
               help=_('Password for alarm monitoring')),
    cfg.StrOpt('project_name', default='admin',
               help=_('Project name for alarm monitoring')),
]

cfg.CONF.register_opts(OPTS, 'alarm_auth')


def config_opts():
    return [('alarm_auth', OPTS)]


class AlarmReceiver(wsgi.Middleware):
    def process_request(self, req):
        LOG.debug('Process request: %s', req)
        if req.method != 'POST':
            return
        url = req.url
        if not self.handle_url(url):
            return
        prefix, info, params = self.handle_url(req.url)
        auth = cfg.CONF.keystone_authtoken
        token = Token(username=cfg.CONF.alarm_auth.username,
                      password=cfg.CONF.alarm_auth.password,
                      project_name=cfg.CONF.alarm_auth.project_name,
                      auth_url=auth.auth_url + '/v3',
                      user_domain_name='default',
                      project_domain_name='default')

        token_identity = token.create_token()
        req.headers['X_AUTH_TOKEN'] = token_identity
        # Change the body request
        if req.body:
            body_dict = dict()
            body_dict['trigger'] = {}
            body_dict['trigger'].setdefault('params', {})
            # Update params in the body request
            body_info = jsonutils.loads(req.body)
            body_dict['trigger']['params']['data'] = body_info
            body_dict['trigger']['params']['credential'] = info[6]
            # Update policy and action
            body_dict['trigger']['policy_name'] = info[4]
            body_dict['trigger']['action_name'] = info[5]
            req.body = jsonutils.dumps(body_dict)
            LOG.debug('Body alarm: %s', req.body)
        # Need to change url because of mandatory
        req.environ['PATH_INFO'] = prefix + 'triggers'
        req.environ['QUERY_STRING'] = ''
        LOG.debug('alarm url in receiver: %s', req.url)

    def handle_url(self, url):
        # alarm_url = 'http://host:port/v1.0/meas/mea-uuid/mon-policy-name/action-name/8ef785'  # noqa
        parts = parse.urlparse(url)
        p = parts.path.split('/')
        if len(p) != 7:
            return None

        if any((p[0] != '', p[2] != 'meas')):
            return None
        # decode action name: respawn%25log
        p[5] = parse.unquote(p[5])
        qs = parse.parse_qs(parts.query)
        params = dict((k, v[0]) for k, v in qs.items())
        prefix_url = '/%(collec)s/%(mea_uuid)s/' % {'collec': p[2],
                                                    'mea_uuid': p[3]}
        return prefix_url, p, params
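
Sketch of how ``handle_url`` decomposes the alarm URL format shown in the
comment above (values are illustrative)::

    url = 'http://host:port/v1.0/meas/mea-uuid/mon-policy-name/action-name/8ef785'
    # parts.path.split('/') ->
    #   ['', 'v1.0', 'meas', 'mea-uuid', 'mon-policy-name', 'action-name', '8ef785']
    # policy_name -> p[4] ('mon-policy-name')
    # action_name -> p[5] ('action-name')
    # credential  -> p[6] ('8ef785')
    # prefix_url  -> '/meas/mea-uuid/'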
apmec/api/__init__.py (new file, 0 lines)

apmec/api/api_common.py (new file, 411 lines)
@@ -0,0 +1,411 @@
# Copyright 2011 Citrix System.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import netaddr
from oslo_config import cfg
import oslo_i18n
from oslo_log import log as logging
from oslo_policy import policy as oslo_policy
from six import iteritems
from six.moves.urllib import parse as urllib_parse
from webob import exc

from apmec.common import constants
from apmec.common import exceptions
from apmec import wsgi

LOG = logging.getLogger(__name__)


def get_filters(request, attr_info, skips=None):
    """Extracts the filters from the request string.

    Returns a dict of lists for the filters:
    check=a&check=b&name=Bob&
    becomes:
    {'check': [u'a', u'b'], 'name': [u'Bob']}
    """
    res = {}
    skips = skips or []
    for key, values in iteritems(request.GET.dict_of_lists()):
        if key in skips:
            continue
        values = [v for v in values if v]
        key_attr_info = attr_info.get(key, {})
        if 'convert_list_to' in key_attr_info:
            values = key_attr_info['convert_list_to'](values)
        elif 'convert_to' in key_attr_info:
            convert_to = key_attr_info['convert_to']
            values = [convert_to(v) for v in values]
        if values:
            res[key] = values
    return res


def get_previous_link(request, items, id_key):
    params = request.GET.copy()
    params.pop('marker', None)
    if items:
        marker = items[0][id_key]
        params['marker'] = marker
    params['page_reverse'] = True
    return "%s?%s" % (request.path_url, urllib_parse.urlencode(params))


def get_next_link(request, items, id_key):
    params = request.GET.copy()
    params.pop('marker', None)
    if items:
        marker = items[-1][id_key]
        params['marker'] = marker
    params.pop('page_reverse', None)
    return "%s?%s" % (request.path_url, urllib_parse.urlencode(params))


def get_limit_and_marker(request):
    """Return marker, limit tuple from request.

    :param request: `wsgi.Request` possibly containing 'marker' and 'limit'
                    GET variables. 'marker' is the id of the last element
                    the client has seen, and 'limit' is the maximum number
                    of items to return. If limit == 0, it means we needn't
                    pagination, then return None.
    """
    max_limit = _get_pagination_max_limit()
    limit = _get_limit_param(request, max_limit)
    if max_limit > 0:
        limit = min(max_limit, limit) or max_limit
    if not limit:
        return None, None
    marker = request.GET.get('marker', None)
    return limit, marker


def _get_pagination_max_limit():
    max_limit = -1
    if (cfg.CONF.pagination_max_limit.lower() !=
            constants.PAGINATION_INFINITE):
        try:
            max_limit = int(cfg.CONF.pagination_max_limit)
            if max_limit == 0:
                raise ValueError()
        except ValueError:
            LOG.warning("Invalid value for pagination_max_limit: %s. It "
                        "should be an integer greater than 0",
                        cfg.CONF.pagination_max_limit)
    return max_limit


def _get_limit_param(request, max_limit):
    """Extract integer limit from request or fail."""
    try:
        limit = int(request.GET.get('limit', 0))
        if limit >= 0:
            return limit
    except ValueError:
        pass
    msg = _("Limit must be an integer 0 or greater and not '%d'")
    raise exceptions.BadRequest(resource='limit', msg=msg)


def list_args(request, arg):
    """Extracts the list of arg from request."""
    return [v for v in request.GET.getall(arg) if v]


def get_sorts(request, attr_info):
    """Extract sort_key and sort_dir from request.

    Return as: [(key1, value1), (key2, value2)]
    """
    sort_keys = list_args(request, "sort_key")
    sort_dirs = list_args(request, "sort_dir")
    if len(sort_keys) != len(sort_dirs):
        msg = _("The number of sort_keys and sort_dirs must be the same")
        raise exc.HTTPBadRequest(explanation=msg)
    valid_dirs = [constants.SORT_DIRECTION_ASC, constants.SORT_DIRECTION_DESC]
    absent_keys = [x for x in sort_keys if x not in attr_info]
    if absent_keys:
        msg = _("%s is an invalid attribute for sort_keys") % absent_keys
        raise exc.HTTPBadRequest(explanation=msg)
    invalid_dirs = [x for x in sort_dirs if x not in valid_dirs]
    if invalid_dirs:
        msg = (_("%(invalid_dirs)s is an invalid value for sort_dirs, "
                 "valid values are '%(asc)s' and '%(desc)s'") %
               {'invalid_dirs': invalid_dirs,
                'asc': constants.SORT_DIRECTION_ASC,
                'desc': constants.SORT_DIRECTION_DESC})
        raise exc.HTTPBadRequest(explanation=msg)
    return zip(sort_keys,
               [x == constants.SORT_DIRECTION_ASC for x in sort_dirs])


def get_page_reverse(request):
    data = request.GET.get('page_reverse', 'False')
    return data.lower() == "true"


def get_pagination_links(request, items, limit,
                         marker, page_reverse, key="id"):
    key = key if key else 'id'
    links = []
    if not limit:
        return links
    if not (len(items) < limit and not page_reverse):
        links.append({"rel": "next",
                      "href": get_next_link(request, items,
                                            key)})
    if not (len(items) < limit and page_reverse):
        links.append({"rel": "previous",
                      "href": get_previous_link(request, items,
                                                key)})
    return links


class PaginationHelper(object):

    def __init__(self, request, primary_key='id'):
        self.request = request
        self.primary_key = primary_key

    def update_fields(self, original_fields, fields_to_add):
        pass

    def update_args(self, args):
        pass

    def paginate(self, items):
        return items

    def get_links(self, items):
        return {}


class PaginationEmulatedHelper(PaginationHelper):

    def __init__(self, request, primary_key='id'):
        super(PaginationEmulatedHelper, self).__init__(request, primary_key)
        self.limit, self.marker = get_limit_and_marker(request)
        self.page_reverse = get_page_reverse(request)

    def update_fields(self, original_fields, fields_to_add):
        if not original_fields:
            return
        if self.primary_key not in original_fields:
            original_fields.append(self.primary_key)
            fields_to_add.append(self.primary_key)

    def paginate(self, items):
        if not self.limit:
            return items
        i = -1
        if self.marker:
            for item in items:
                i = i + 1
                if item[self.primary_key] == self.marker:
                    break
        if self.page_reverse:
            return items[i - self.limit:i]
        return items[i + 1:i + self.limit + 1]

    def get_links(self, items):
        return get_pagination_links(
            self.request, items, self.limit, self.marker,
            self.page_reverse, self.primary_key)


class PaginationNativeHelper(PaginationEmulatedHelper):

    def update_args(self, args):
        if self.primary_key not in dict(args.get('sorts', [])).keys():
            args.setdefault('sorts', []).append((self.primary_key, True))
        args.update({'limit': self.limit, 'marker': self.marker,
                     'page_reverse': self.page_reverse})

    def paginate(self, items):
        return items


class NoPaginationHelper(PaginationHelper):
    pass


class SortingHelper(object):

    def __init__(self, request, attr_info):
        pass

    def update_args(self, args):
        pass

    def update_fields(self, original_fields, fields_to_add):
        pass

    def sort(self, items):
        return items


class SortingEmulatedHelper(SortingHelper):

    def __init__(self, request, attr_info):
        super(SortingEmulatedHelper, self).__init__(request, attr_info)
        self.sort_dict = get_sorts(request, attr_info)

    def update_fields(self, original_fields, fields_to_add):
        if not original_fields:
            return
        for key in dict(self.sort_dict).keys():
            if key not in original_fields:
                original_fields.append(key)
                fields_to_add.append(key)

    def sort(self, items):
        def cmp_func(obj1, obj2):
            for key, direction in self.sort_dict:
                ret = cmp(obj1[key], obj2[key])
                if ret:
                    return ret * (1 if direction else -1)
            return 0
        return sorted(items, cmp=cmp_func)


class SortingNativeHelper(SortingHelper):

    def __init__(self, request, attr_info):
        super(SortingNativeHelper, self).__init__(request, attr_info)
        self.sort_dict = get_sorts(request, attr_info)

    def update_args(self, args):
        args['sorts'] = self.sort_dict


class NoSortingHelper(SortingHelper):
    pass


class ApmecController(object):
    """Base controller class for Apmec API."""
    # _resource_name will be redefined in sub concrete controller
    _resource_name = None

    def __init__(self, plugin):
        self._plugin = plugin
        super(ApmecController, self).__init__()

    def _prepare_request_body(self, body, params):
        """Verifies required parameters are in request body.

        Sets default value for missing optional parameters.
        Body argument must be the deserialized body.
        """
        try:
            if body is None:
                # Initialize empty resource for setting default value
                body = {self._resource_name: {}}
            data = body[self._resource_name]
        except KeyError:
            # raise if _resource_name is not in req body.
            raise exc.HTTPBadRequest(_("Unable to find '%s' in request body") %
                                     self._resource_name)
        for param in params:
            param_name = param['param-name']
            param_value = data.get(param_name)
            # If the parameter wasn't found and it was required, return 400
            if param_value is None and param['required']:
                msg = (_("Failed to parse request. "
                         "Parameter '%s' not specified") % param_name)
                LOG.error(msg)
                raise exc.HTTPBadRequest(msg)
            data[param_name] = param_value or param.get('default-value')
        return body


def convert_exception_to_http_exc(e, faults, language):
    serializer = wsgi.JSONDictSerializer()
    e = translate(e, language)
    body = serializer.serialize(
        {'ApmecError': get_exception_data(e)})
    kwargs = {'body': body, 'content_type': 'application/json'}
    if isinstance(e, exc.HTTPException):
        # already an HTTP error, just update with content type and body
        e.body = body
        e.content_type = kwargs['content_type']
        return e
    if isinstance(e, (exceptions.ApmecException, netaddr.AddrFormatError,
                      oslo_policy.PolicyNotAuthorized)):
        for fault in faults:
            if isinstance(e, fault):
                mapped_exc = faults[fault]
                break
        else:
            mapped_exc = exc.HTTPInternalServerError
        return mapped_exc(**kwargs)
    if isinstance(e, NotImplementedError):
        # NOTE(armando-migliaccio): from a client standpoint
        # it makes sense to receive these errors, because
        # extensions may or may not be implemented by
        # the underlying plugin. So if something goes south,
        # because a plugin does not implement a feature,
        # returning 500 is definitely confusing.
        kwargs['body'] = serializer.serialize(
            {'NotImplementedError': get_exception_data(e)})
        return exc.HTTPNotImplemented(**kwargs)
    # NOTE(jkoelker) Everything else is 500
    # Do not expose details of 500 error to clients.
    msg = _('Request Failed: internal server error while '
            'processing your request.')
    msg = translate(msg, language)
    kwargs['body'] = serializer.serialize(
        {'ApmecError': get_exception_data(exc.HTTPInternalServerError(msg))})
    return exc.HTTPInternalServerError(**kwargs)


def get_exception_data(e):
    """Extract the information about an exception.

    Apmec client for the v1 API expects exceptions to have 'type', 'message'
    and 'detail' attributes. This information is extracted and converted into
    a dictionary.

    :param e: the exception to be reraised
    :returns: a structured dict with the exception data
    """
    err_data = {'type': e.__class__.__name__,
                'message': e, 'detail': ''}
    return err_data


def translate(translatable, locale):
    """Translates the object to the given locale.

    If the object is an exception its translatable elements are translated
    in place, if the object is a translatable string it is translated and
    returned. Otherwise, the object is returned as-is.

    :param translatable: the object to be translated
    :param locale: the locale to translate to
    :returns: the translated object, or the object as-is if it
              was not translated
    """
    localize = oslo_i18n.translate
    if isinstance(translatable, exceptions.ApmecException):
        translatable.msg = localize(translatable.msg, locale)
    elif isinstance(translatable, exc.HTTPError):
        translatable.detail = localize(translatable.detail, locale)
    elif isinstance(translatable, Exception):
        translatable.message = localize(translatable, locale)
    else:
        return localize(translatable, locale)
    return translatable
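
A sketch of how the helpers above combine for a list request (query string
and attribute map are illustrative)::

    # GET /v1.0/meas?name=Bob&status=ACTIVE&status=ERROR&limit=2
    # get_filters(request, {'name': {}, 'status': {}})
    #   -> {'name': [u'Bob'], 'status': [u'ACTIVE', u'ERROR']}
    # get_limit_and_marker(request) -> (2, None)
    # get_pagination_links(...) then emits rel=next / rel=previous hrefs
    # carrying the appropriate marker and page_reverse parameters.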
apmec/api/extensions.py (new file, 622 lines)
@@ -0,0 +1,622 @@
|
|||||||
|
# Copyright 2011 OpenStack Foundation.
|
||||||
|
# Copyright 2011 Justin Santa Barbara
|
||||||
|
# All Rights Reserved.
|
||||||
|
#
|
||||||
|
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||||
|
# not use this file except in compliance with the License. You may obtain
|
||||||
|
# a copy of the License at
|
||||||
|
#
|
||||||
|
# http://www.apache.org/licenses/LICENSE-2.0
|
||||||
|
#
|
||||||
|
# Unless required by applicable law or agreed to in writing, software
|
||||||
|
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||||
|
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||||
|
# License for the specific language governing permissions and limitations
|
||||||
|
# under the License.
|
||||||
|
|
||||||
|
import abc
|
||||||
|
import imp
|
||||||
|
import os
|
||||||
|
|
||||||
|
from oslo_config import cfg
|
||||||
|
from oslo_log import log as logging
|
||||||
|
import routes
|
||||||
|
import six
|
||||||
|
import webob.dec
|
||||||
|
import webob.exc
|
||||||
|
|
||||||
|
from apmec.common import exceptions
|
||||||
|
import apmec.extensions
|
||||||
|
from apmec import policy
|
||||||
|
from apmec import wsgi
|
||||||
|
|
||||||
|
|
||||||
|
LOG = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
|
||||||
|
@six.add_metaclass(abc.ABCMeta)
|
||||||
|
class PluginInterface(object):
|
||||||
|
|
||||||
|
@classmethod
|
||||||
|
def __subclasshook__(cls, klass):
|
||||||
|
"""Checking plugin class.
|
||||||
|
|
||||||
|
The __subclasshook__ method is a class method
|
||||||
|
that will be called every time a class is tested
|
||||||
|
using issubclass(klass, PluginInterface).
|
||||||
|
In that case, it will check that every method
|
||||||
|
marked with the abstractmethod decorator is
|
||||||
|
provided by the plugin class.
|
||||||
|
"""
|
||||||
|
|
||||||
|
if not cls.__abstractmethods__:
|
||||||
|
return NotImplemented
|
||||||
|
|
||||||
|
for method in cls.__abstractmethods__:
|
||||||
|
if any(method in base.__dict__ for base in klass.__mro__):
|
||||||
|
continue
|
||||||
|
return NotImplemented
|
||||||
|
return True
|
||||||
|
|
||||||
|
|
||||||
|
class ExtensionDescriptor(object):
    """Base class that defines the contract for extensions.

    Note that you don't have to derive from this class to have a valid
    extension; it is purely a convenience.
    """

    def get_name(self):
        """The name of the extension.

        e.g. 'Fox In Socks'
        """
        raise NotImplementedError()

    def get_alias(self):
        """The alias for the extension.

        e.g. 'FOXNSOX'
        """
        raise NotImplementedError()

    def get_description(self):
        """Friendly description for the extension.

        e.g. 'The Fox In Socks Extension'
        """
        raise NotImplementedError()

    def get_namespace(self):
        """The XML namespace for the extension.

        e.g. 'http://www.fox.in.socks/api/ext/pie/v1.0'
        """
        raise NotImplementedError()

    def get_updated(self):
        """The timestamp when the extension was last updated.

        e.g. '2011-01-22T13:25:27-06:00'
        """
        # NOTE(justinsb): Not sure what the purpose of this is vs. the XML NS
        raise NotImplementedError()

    def get_resources(self):
        """List of extensions.ResourceExtension extension objects.

        Resources define new nouns, and are accessible through URLs.
        """
        resources = []
        return resources

    def get_actions(self):
        """List of extensions.ActionExtension extension objects.

        Actions are verbs callable from the API.
        """
        actions = []
        return actions

    def get_request_extensions(self):
        """List of extensions.RequestExtension extension objects.

        Request extensions are used to handle custom request data.
        """
        request_exts = []
        return request_exts

    def get_extended_resources(self, version):
        """Retrieve extended resources or attributes for core resources.

        Extended attributes are implemented by a core plugin similarly
        to the attributes defined in the core, and can appear in
        request and response messages. Their names are scoped with the
        extension's prefix. The core API version is passed to this
        function, which must return a
        map[<resource_name>][<attribute_name>][<attribute_property>]
        specifying the extended resource attribute properties required
        by that API version.

        Extensions can add resources and their attr definitions too.
        The returned map can be integrated into RESOURCE_ATTRIBUTE_MAP.
        """
        return {}

    def get_plugin_interface(self):
        """Returns an abstract class which defines contract for the plugin.

        The abstract class should inherit from extensions.PluginInterface,
        and methods in this abstract class should be decorated as
        abstractmethod.
        """
        return None

    def update_attributes_map(self, extended_attributes,
                              extension_attrs_map=None):
        """Update the attributes map for this extension.

        This is the default method for extending an extension's
        attributes map. An extension can call this method, supplying its
        own resource attribute map in the extension_attrs_map argument,
        to extend all the attributes that need to be extended.

        If an extension does not implement update_attributes_map, the
        method does nothing and just returns.
        """
        if not extension_attrs_map:
            return

        for resource, attrs in extension_attrs_map.items():
            extended_attrs = extended_attributes.get(resource)
            if extended_attrs:
                attrs.update(extended_attrs)

    def get_alias_namespace_compatibility_map(self):
        """Returns mappings between extension aliases and XML namespaces.

        The mappings are XML namespaces that should, for backward
        compatibility reasons, be added to the XML serialization of
        extended attributes. This allows an established extended
        attribute to be provided by an extension other than the original
        one while keeping its old alias in the name.

        :return: A dictionary of extension_aliases and namespace strings.
        """
        return {}
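
# Illustrative sketch (not part of the original module): a minimal
# descriptor honoring the contract above, in the spirit of the
# foxinsocks test extension referenced elsewhere in this file:
#
#     class Foxinsocks(ExtensionDescriptor):
#
#         def get_name(self):
#             return "Fox In Socks"
#
#         def get_alias(self):
#             return "FOXNSOX"
#
#         def get_description(self):
#             return "The Fox In Socks Extension"
#
#         def get_namespace(self):
#             return "http://www.fox.in.socks/api/ext/pie/v1.0"
#
#         def get_updated(self):
#             return "2011-01-22T13:25:27-06:00"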


class ActionExtensionController(wsgi.Controller):

    def __init__(self, application):
        self.application = application
        self.action_handlers = {}

    def add_action(self, action_name, handler):
        self.action_handlers[action_name] = handler

    def action(self, request, id):
        input_dict = self._deserialize(request.body,
                                       request.get_content_type())
        for action_name, handler in self.action_handlers.items():
            if action_name in input_dict:
                return handler(input_dict, request, id)
        # no action handler found (bump to downstream application)
        response = self.application
        return response


class RequestExtensionController(wsgi.Controller):

    def __init__(self, application):
        self.application = application
        self.handlers = []

    def add_handler(self, handler):
        self.handlers.append(handler)

    def process(self, request, *args, **kwargs):
        res = request.get_response(self.application)
        # currently request handlers are un-ordered; each handler
        # receives the response produced so far
        for handler in self.handlers:
            res = handler(request, res)
        return res


class ExtensionController(wsgi.Controller):

    def __init__(self, extension_manager):
        self.extension_manager = extension_manager

    def _translate(self, ext):
        ext_data = {}
        ext_data['name'] = ext.get_name()
        ext_data['alias'] = ext.get_alias()
        ext_data['description'] = ext.get_description()
        ext_data['namespace'] = ext.get_namespace()
        ext_data['updated'] = ext.get_updated()
        ext_data['links'] = []  # TODO(dprince): implement extension links
        return ext_data

    def index(self, request):
        extensions = []
        for _alias, ext in self.extension_manager.extensions.items():
            extensions.append(self._translate(ext))
        return dict(extensions=extensions)

    def show(self, request, id):
        # NOTE(dprince): the extension alias is used as the 'id' for show
        ext = self.extension_manager.extensions.get(id)
        if not ext:
            raise webob.exc.HTTPNotFound(
                _("Extension with alias %s does not exist") % id)
        return dict(extension=self._translate(ext))

    def delete(self, request, id):
        msg = _('Resource not found.')
        raise webob.exc.HTTPNotFound(msg)

    def create(self, request):
        msg = _('Resource not found.')
        raise webob.exc.HTTPNotFound(msg)


class ExtensionMiddleware(wsgi.Middleware):
    """Extensions middleware for WSGI."""

    def __init__(self, application, ext_mgr=None):
        self.ext_mgr = (ext_mgr
                        or ExtensionManager(get_extensions_path()))
        mapper = routes.Mapper()

        # extended resources
        for resource in self.ext_mgr.get_resources():
            path_prefix = resource.path_prefix
            if resource.parent:
                path_prefix = (resource.path_prefix +
                               "/%s/{%s_id}" %
                               (resource.parent["collection_name"],
                                resource.parent["member_name"]))

            LOG.debug('Extended resource: %s', resource.collection)
            for action, method in resource.collection_actions.items():
                conditions = dict(method=[method])
                path = "/%s/%s" % (resource.collection, action)
                with mapper.submapper(controller=resource.controller,
                                      action=action,
                                      path_prefix=path_prefix,
                                      conditions=conditions) as submap:
                    submap.connect(path)
                    submap.connect("%s.:(format)" % path)

            mapper.resource(resource.collection, resource.collection,
                            controller=resource.controller,
                            member=resource.member_actions,
                            parent_resource=resource.parent,
                            path_prefix=path_prefix)

        # extended actions
        action_controllers = self._action_ext_controllers(application,
                                                          self.ext_mgr,
                                                          mapper)
        for action in self.ext_mgr.get_actions():
            LOG.debug('Extended action: %s', action.action_name)
            controller = action_controllers[action.collection]
            controller.add_action(action.action_name, action.handler)

        # extended requests
        req_controllers = self._request_ext_controllers(application,
                                                        self.ext_mgr,
                                                        mapper)
        for request_ext in self.ext_mgr.get_request_extensions():
            LOG.debug('Extended request: %s', request_ext.key)
            controller = req_controllers[request_ext.key]
            controller.add_handler(request_ext.handler)

        self._router = routes.middleware.RoutesMiddleware(self._dispatch,
                                                          mapper)
        super(ExtensionMiddleware, self).__init__(application)

    @classmethod
    def factory(cls, global_config, **local_config):
        """Paste factory."""
        def _factory(app):
            return cls(app, **local_config)
        return _factory

    def _action_ext_controllers(self, application, ext_mgr, mapper):
        """Return a dict of ActionExtensionController objects by collection."""
        action_controllers = {}
        for action in ext_mgr.get_actions():
            if action.collection not in action_controllers.keys():
                controller = ActionExtensionController(application)
                mapper.connect("/%s/:(id)/action.:(format)" %
                               action.collection,
                               action='action',
                               controller=controller,
                               conditions=dict(method=['POST']))
                mapper.connect("/%s/:(id)/action" % action.collection,
                               action='action',
                               controller=controller,
                               conditions=dict(method=['POST']))
                action_controllers[action.collection] = controller

        return action_controllers

    def _request_ext_controllers(self, application, ext_mgr, mapper):
        """Return a dict of RequestExtensionController objects by key."""
        request_ext_controllers = {}
        for req_ext in ext_mgr.get_request_extensions():
            if req_ext.key not in request_ext_controllers.keys():
                controller = RequestExtensionController(application)
                mapper.connect(req_ext.url_route + '.:(format)',
                               action='process',
                               controller=controller,
                               conditions=req_ext.conditions)

                mapper.connect(req_ext.url_route,
                               action='process',
                               controller=controller,
                               conditions=req_ext.conditions)
                request_ext_controllers[req_ext.key] = controller

        return request_ext_controllers

    @webob.dec.wsgify(RequestClass=wsgi.Request)
    def __call__(self, req):
        """Route the incoming request with router."""
        req.environ['extended.app'] = self.application
        return self._router

    @staticmethod
    @webob.dec.wsgify(RequestClass=wsgi.Request)
    def _dispatch(req):
        """Dispatch the request.

        Returns the routed WSGI app's response or defers to the extended
        application.
        """
        match = req.environ['wsgiorg.routing_args'][1]
        if not match:
            return req.environ['extended.app']
        app = match['controller']
        return app


def extension_middleware_factory(global_config, **local_config):
    """Paste factory."""
    def _factory(app):
        ext_mgr = ExtensionManager.get_instance()
        return ExtensionMiddleware(app, ext_mgr=ext_mgr)
    return _factory
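
# Illustrative sketch (hypothetical paste.deploy wiring; the section names
# and module path below are assumptions, not taken from this tree): the
# factory above is the hook an api-paste.ini pipeline would reference:
#
#     [filter:extensions]
#     paste.filter_factory = apmec.api.extensions:extension_middleware_factory
#
#     [pipeline:apmecapi]
#     pipeline = extensions apmecapiapp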


class ExtensionManager(object):
    """Load extensions from the configured extension path.

    See tests/unit/extensions/foxinsocks.py for an
    example extension implementation.
    """

    _instance = None

    @classmethod
    def get_instance(cls):
        if cls._instance is None:
            cls._instance = cls(get_extensions_path())
        return cls._instance

    def __init__(self, path):
        LOG.info('Initializing extension manager.')
        self.path = path
        self.extensions = {}
        self._load_all_extensions()
        policy.reset()

    def get_resources(self):
        """Returns a list of ResourceExtension objects."""
        resources = []
        resources.append(ResourceExtension('extensions',
                                           ExtensionController(self)))
        for ext in self.extensions.values():
            try:
                resources.extend(ext.get_resources())
            except AttributeError:
                # NOTE(dprince): Extensions aren't required to have
                # resource extensions
                pass
        return resources

    def get_actions(self):
        """Returns a list of ActionExtension objects."""
        actions = []
        for ext in self.extensions.values():
            try:
                actions.extend(ext.get_actions())
            except AttributeError:
                # NOTE(dprince): Extensions aren't required to have
                # action extensions
                pass
        return actions

    def get_request_extensions(self):
        """Returns a list of RequestExtension objects."""
        request_exts = []
        for ext in self.extensions.values():
            try:
                request_exts.extend(ext.get_request_extensions())
            except AttributeError:
                # NOTE(dprince): Extensions aren't required to have
                # request extensions
                pass
        return request_exts

    def extend_resources(self, version, attr_map):
        """Extend resources with additional resources or attributes.

        :param attr_map: the existing mapping from resource name to
            attrs definition.

        After this function returns, attr_map is extended with any
        attributes the loaded extensions want to add to it.
        """
        update_exts = []
        processed_exts = set()
        exts_to_process = self.extensions.copy()
        # Iterate while there are unprocessed extensions, stopping when
        # a whole pass makes no progress
        while exts_to_process:
            processed_ext_count = len(processed_exts)
            # Snapshot the items so entries can be deleted while iterating
            for ext_name, ext in list(exts_to_process.items()):
                if not hasattr(ext, 'get_extended_resources'):
                    del exts_to_process[ext_name]
                    continue
                if hasattr(ext, 'update_attributes_map'):
                    update_exts.append(ext)
                if hasattr(ext, 'get_required_extensions'):
                    # Process the extension only if all of its required
                    # extensions have been processed already
                    required_exts_set = set(ext.get_required_extensions())
                    if required_exts_set - processed_exts:
                        continue
                try:
                    extended_attrs = ext.get_extended_resources(version)
                    for resource, resource_attrs in extended_attrs.items():
                        if attr_map.get(resource):
                            attr_map[resource].update(resource_attrs)
                        else:
                            attr_map[resource] = resource_attrs
                except AttributeError:
                    LOG.exception("Error fetching extended attributes for "
                                  "extension '%s'", ext.get_name())
                processed_exts.add(ext_name)
                del exts_to_process[ext_name]
            if len(processed_exts) == processed_ext_count:
                # Exit the loop as no progress was made
                break
        if exts_to_process:
            # NOTE(salv-orlando): Consider whether this error should be fatal
            LOG.error("It was impossible to process the following "
                      "extensions: %s because of missing requirements.",
                      ','.join(exts_to_process.keys()))

        # Extending extensions' attributes map.
        for ext in update_exts:
            ext.update_attributes_map(attr_map)
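
    # Illustrative sketch (hypothetical aliases): an extension may declare
    # dependencies via get_required_extensions(); extend_resources() defers
    # it until everything it names has been processed:
    #
    #     class Bindings(ExtensionDescriptor):
    #         def get_alias(self):
    #             return 'bindings'
    #
    #     class FancyBindings(ExtensionDescriptor):
    #         def get_alias(self):
    #             return 'fancy-bindings'
    #
    #         def get_required_extensions(self):
    #             return ['bindings']   # processed only after 'bindings'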

    def _check_extension(self, extension):
        """Checks for required methods in extension objects."""
        try:
            LOG.debug('Ext name: %s', extension.get_name())
            LOG.debug('Ext alias: %s', extension.get_alias())
            LOG.debug('Ext description: %s', extension.get_description())
            LOG.debug('Ext namespace: %s', extension.get_namespace())
            LOG.debug('Ext updated: %s', extension.get_updated())
        except AttributeError as ex:
            LOG.exception("Exception loading extension: %s", ex)
            return False
        return True

    def _load_all_extensions(self):
        """Load extensions from the configured path.

        The extension name is constructed from the module_name. If your
        extension module is named widgets.py, the extension class within
        that module should be 'Widgets'.

        See tests/unit/extensions/foxinsocks.py for an example extension
        implementation.
        """
        for path in self.path.split(':'):
            if os.path.exists(path):
                self._load_all_extensions_from_path(path)
            else:
                LOG.error("Extension path '%s' doesn't exist!", path)

    def _load_all_extensions_from_path(self, path):
        # Sorting the extension list makes the order in which they
        # are loaded predictable across a cluster of load-balanced
        # Apmec servers
        for f in sorted(os.listdir(path)):
            try:
                LOG.debug('Loading extension file: %s', f)
                mod_name, file_ext = os.path.splitext(os.path.split(f)[-1])
                ext_path = os.path.join(path, f)
                if file_ext.lower() == '.py' and not mod_name.startswith('_'):
                    mod = imp.load_source(mod_name, ext_path)
                    ext_name = mod_name[0].upper() + mod_name[1:]
                    new_ext_class = getattr(mod, ext_name, None)
                    if not new_ext_class:
                        LOG.warning('Did not find expected name '
                                    '"%(ext_name)s" in %(file)s',
                                    {'ext_name': ext_name,
                                     'file': ext_path})
                        continue
                    new_ext = new_ext_class()
                    self.add_extension(new_ext)
            except Exception as exception:
                LOG.warning("Extension file %(f)s wasn't loaded due to "
                            "%(exception)s",
                            {'f': f, 'exception': exception})

    def add_extension(self, ext):
        # Do nothing if the extension doesn't check out
        if not self._check_extension(ext):
            return

        alias = ext.get_alias()
        LOG.info('Loaded extension: %s', alias)

        if alias in self.extensions:
            raise exceptions.DuplicatedExtension(alias=alias)
        self.extensions[alias] = ext


class RequestExtension(object):
    """Extend requests and responses of core Apmec OpenStack API controllers.

    Provide a way to add data to responses and handle custom request data
    that is sent to core Apmec OpenStack API controllers.
    """

    def __init__(self, method, url_route, handler):
        self.url_route = url_route
        self.handler = handler
        self.conditions = dict(method=[method])
        self.key = "%s-%s" % (method, url_route)


class ActionExtension(object):
    """Add custom actions to core Apmec OpenStack API controllers."""

    def __init__(self, collection, action_name, handler):
        self.collection = collection
        self.action_name = action_name
        self.handler = handler


class ResourceExtension(object):
    """Add top level resources to the OpenStack API in Apmec."""

    def __init__(self, collection, controller, parent=None, path_prefix="",
                 collection_actions=None, member_actions=None, attr_map=None):
        self.collection = collection
        self.controller = controller
        self.parent = parent
        # Avoid mutable default arguments shared across instances
        self.collection_actions = collection_actions or {}
        self.member_actions = member_actions or {}
        self.path_prefix = path_prefix
        self.attr_map = attr_map or {}
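
# Illustrative sketch (hypothetical resource and controller names): wiring
# a new top-level resource with a member action, in the style the
# middleware above maps into routes:
#
#     widget_ext = ResourceExtension(
#         'widgets',                        # collection exposed at /widgets
#         widget_controller,                # a wsgi controller instance
#         member_actions={'activate': 'PUT'})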


# Returns the extension paths from a config entry and the __path__
# of apmec.extensions
def get_extensions_path():
    paths = ':'.join(apmec.extensions.__path__)
    if cfg.CONF.api_extensions_path:
        paths = ':'.join([cfg.CONF.api_extensions_path, paths])

    return paths


def append_api_extensions_path(paths):
    paths = [cfg.CONF.api_extensions_path] + paths
    cfg.CONF.set_override('api_extensions_path',
                          ':'.join([p for p in paths if p]))
0
apmec/api/v1/__init__.py
Normal file
613
apmec/api/v1/attributes.py
Normal file
@ -0,0 +1,613 @@
# Copyright (c) 2012 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import re

import netaddr
from oslo_log import log as logging
from oslo_utils import uuidutils
import six

from apmec.common import exceptions as n_exc


LOG = logging.getLogger(__name__)

ATTRIBUTES_TO_UPDATE = 'attributes_to_update'
ATTR_NOT_SPECIFIED = object()
# Defining a constant to avoid repeating string literal in several modules
SHARED = 'shared'

# Used by range check to indicate no limit for a bound.
UNLIMITED = None


def _verify_dict_keys(expected_keys, target_dict, strict=True):
    """Verify the keys of a dictionary.

    :param expected_keys: A list of keys expected to be present.
    :param target_dict: The dictionary which should be verified.
    :param strict: Specifies whether keys other than the expected ones
        are forbidden (strict=True) or allowed (strict=False).
    :return: None if the keys in the dictionary match the specification,
        otherwise an error message.
    """
    if not isinstance(target_dict, dict):
        msg = (_("Invalid input. '%(target_dict)s' must be a dictionary "
                 "with keys: %(expected_keys)s") %
               {'target_dict': target_dict, 'expected_keys': expected_keys})
        return msg

    expected_keys = set(expected_keys)
    provided_keys = set(target_dict.keys())

    predicate = expected_keys.__eq__ if strict else expected_keys.issubset

    if not predicate(provided_keys):
        msg = (_("Validation of dictionary's keys failed. "
                 "Expected keys: %(expected_keys)s "
                 "Provided keys: %(provided_keys)s") %
               {'expected_keys': expected_keys,
                'provided_keys': provided_keys})
        return msg


def is_attr_set(attribute):
    return not (attribute is None or attribute is ATTR_NOT_SPECIFIED)


def _validate_values(data, valid_values=None):
    if data not in valid_values:
        msg = (_("'%(data)s' is not in %(valid_values)s") %
               {'data': data, 'valid_values': valid_values})
        LOG.debug(msg)
        return msg


def _validate_not_empty_string_or_none(data, max_len=None):
    if data is not None:
        return _validate_not_empty_string(data, max_len=max_len)


def _validate_not_empty_string(data, max_len=None):
    msg = _validate_string(data, max_len=max_len)
    if msg:
        return msg
    if not data.strip():
        return _("'%s' is not allowed: blank strings are not permitted") % data


def _validate_string_or_none(data, max_len=None):
    if data is not None:
        return _validate_string(data, max_len=max_len)


def _validate_string(data, max_len=None):
    if not isinstance(data, six.string_types):
        msg = _("'%s' is not a valid string") % data
        LOG.debug(msg)
        return msg

    if max_len is not None and len(data) > max_len:
        msg = (_("'%(data)s' exceeds maximum length of %(max_len)s") %
               {'data': data, 'max_len': max_len})
        LOG.debug(msg)
        return msg


def _validate_boolean(data, valid_values=None):
    try:
        convert_to_boolean(data)
    except n_exc.InvalidInput:
        msg = _("'%s' is not a valid boolean value") % data
        LOG.debug(msg)
        return msg


def _validate_range(data, valid_values=None):
    """Check that an integer value is within the range provided.

    The test is inclusive. Either limit may be ignored, to allow
    checking ranges where only the lower or upper limit matters.
    It is expected that the limits provided are valid integers or
    the value None (UNLIMITED).
    """

    min_value = valid_values[0]
    max_value = valid_values[1]
    try:
        data = int(data)
    except (ValueError, TypeError):
        msg = _("'%s' is not an integer") % data
        LOG.debug(msg)
        return msg
    if min_value is not UNLIMITED and data < min_value:
        msg = _("'%(data)s' is too small - must be at least "
                "'%(limit)d'") % {'data': data, 'limit': min_value}
        LOG.debug(msg)
        return msg
    if max_value is not UNLIMITED and data > max_value:
        msg = _("'%(data)s' is too large - must be no larger than "
                "'%(limit)d'") % {'data': data, 'limit': max_value}
        LOG.debug(msg)
        return msg
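
# Illustrative sketch of the validator convention used throughout this
# module: each validator returns None on success and a translated error
# message on failure, e.g.:
#
#     _validate_range(5, (1, 10))          # -> None
#     _validate_range(0, (1, 10))          # -> "'0' is too small - ..."
#     _validate_range(7, (1, UNLIMITED))   # -> None (no upper bound)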


def _validate_no_whitespace(data):
    """Validates that input has no whitespace."""
    if len(data.split()) > 1:
        msg = _("'%s' contains whitespace") % data
        LOG.debug(msg)
        raise n_exc.InvalidInput(error_message=msg)
    return data


def _validate_mac_address(data, valid_values=None):
    valid_mac = False
    try:
        valid_mac = netaddr.valid_mac(_validate_no_whitespace(data))
    except Exception:
        pass
    finally:
        # TODO(arosen): The code in this file should be refactored
        # so it catches the correct exceptions. _validate_no_whitespace
        # raises AttributeError if data is None.
        if valid_mac is False:
            msg = _("'%s' is not a valid MAC address") % data
            LOG.debug(msg)
            return msg


def _validate_mac_address_or_none(data, valid_values=None):
    if data is None:
        return
    return _validate_mac_address(data, valid_values)


def _validate_ip_address(data, valid_values=None):
    try:
        netaddr.IPAddress(_validate_no_whitespace(data))
    except Exception:
        msg = _("'%s' is not a valid IP address") % data
        LOG.debug(msg)
        return msg


def _validate_ip_pools(data, valid_values=None):
    """Validate that start and end IP addresses are present.

    In addition to this, the IP addresses themselves are also validated.
    """
    if not isinstance(data, list):
        msg = _("Invalid data format for IP pool: '%s'") % data
        LOG.debug(msg)
        return msg

    expected_keys = ['start', 'end']
    for ip_pool in data:
        msg = _verify_dict_keys(expected_keys, ip_pool)
        if msg:
            LOG.debug(msg)
            return msg
        for k in expected_keys:
            msg = _validate_ip_address(ip_pool[k])
            if msg:
                LOG.debug(msg)
                return msg


def _validate_fixed_ips(data, valid_values=None):
    if not isinstance(data, list):
        msg = _("Invalid data format for fixed IP: '%s'") % data
        LOG.debug(msg)
        return msg

    ips = []
    for fixed_ip in data:
        if not isinstance(fixed_ip, dict):
            msg = _("Invalid data format for fixed IP: '%s'") % fixed_ip
            LOG.debug(msg)
            return msg
        if 'ip_address' in fixed_ip:
            # Ensure that duplicate entries are not set - just checking IP
            # suffices. Duplicate subnet_id's are legitimate.
            fixed_ip_address = fixed_ip['ip_address']
            if fixed_ip_address in ips:
                msg = _("Duplicate IP address '%s'") % fixed_ip_address
            else:
                msg = _validate_ip_address(fixed_ip_address)
            if msg:
                LOG.debug(msg)
                return msg
            ips.append(fixed_ip_address)
        if 'subnet_id' in fixed_ip:
            msg = _validate_uuid(fixed_ip['subnet_id'])
            if msg:
                LOG.debug(msg)
                return msg


def _validate_nameservers(data, valid_values=None):
    if not hasattr(data, '__iter__'):
        msg = _("Invalid data format for nameserver: '%s'") % data
        LOG.debug(msg)
        return msg

    ips = []
    for ip in data:
        msg = _validate_ip_address(ip)
        if msg:
            # This may be a hostname
            msg = _validate_regex(ip, HOSTNAME_PATTERN)
            if msg:
                msg = _("'%s' is not a valid nameserver") % ip
                LOG.debug(msg)
                return msg
        if ip in ips:
            msg = _("Duplicate nameserver '%s'") % ip
            LOG.debug(msg)
            return msg
        ips.append(ip)


def _validate_hostroutes(data, valid_values=None):
    if not isinstance(data, list):
        msg = _("Invalid data format for hostroute: '%s'") % data
        LOG.debug(msg)
        return msg

    expected_keys = ['destination', 'nexthop']
    hostroutes = []
    for hostroute in data:
        msg = _verify_dict_keys(expected_keys, hostroute)
        if msg:
            LOG.debug(msg)
            return msg
        msg = _validate_subnet(hostroute['destination'])
        if msg:
            LOG.debug(msg)
            return msg
        msg = _validate_ip_address(hostroute['nexthop'])
        if msg:
            LOG.debug(msg)
            return msg
        if hostroute in hostroutes:
            msg = _("Duplicate hostroute '%s'") % hostroute
            LOG.debug(msg)
            return msg
        hostroutes.append(hostroute)


def _validate_ip_address_or_none(data, valid_values=None):
    if data is None:
        return None
    return _validate_ip_address(data, valid_values)


def _validate_subnet(data, valid_values=None):
    msg = None
    try:
        net = netaddr.IPNetwork(_validate_no_whitespace(data))
        if '/' not in data:
            msg = _("'%(data)s' isn't a recognized IP subnet cidr,"
                    " '%(cidr)s' is recommended") % {"data": data,
                                                     "cidr": net.cidr}
        else:
            return
    except Exception:
        msg = _("'%s' is not a valid IP subnet") % data
    if msg:
        LOG.debug(msg)
    return msg


def _validate_subnet_list(data, valid_values=None):
    if not isinstance(data, list):
        msg = _("'%s' is not a list") % data
        LOG.debug(msg)
        return msg

    if len(set(data)) != len(data):
        msg = _("Duplicate items in the list: '%s'") % ', '.join(data)
        LOG.debug(msg)
        return msg

    for item in data:
        msg = _validate_subnet(item)
        if msg:
            return msg


def _validate_subnet_or_none(data, valid_values=None):
    if data is None:
        return
    return _validate_subnet(data, valid_values)


def _validate_regex(data, valid_values=None):
    try:
        if re.match(valid_values, data):
            return
    except TypeError:
        pass

    msg = _("'%s' is not a valid input") % data
    LOG.debug(msg)
    return msg


def _validate_regex_or_none(data, valid_values=None):
    if data is None:
        return
    return _validate_regex(data, valid_values)


def _validate_uuid(data, valid_values=None):
    if not uuidutils.is_uuid_like(data):
        msg = _("'%s' is not a valid UUID") % data
        LOG.debug(msg)
        return msg


def _validate_uuid_or_none(data, valid_values=None):
    if data is not None:
        return _validate_uuid(data)


def _validate_uuid_list(data, valid_values=None):
    if not isinstance(data, list):
        msg = _("'%s' is not a list") % data
        LOG.debug(msg)
        return msg

    for item in data:
        msg = _validate_uuid(item)
        if msg:
            LOG.debug(msg)
            return msg

    if len(set(data)) != len(data):
        msg = _("Duplicate items in the list: '%s'") % ', '.join(data)
        LOG.debug(msg)
        return msg


def _validate_dict_item(key, key_validator, data):
    # Find conversion function, if any, and apply it
    conv_func = key_validator.get('convert_to')
    if conv_func:
        data[key] = conv_func(data.get(key))
    # Find validator function
    # TODO(salv-orlando): Structure of dict attributes should be improved
    # to avoid iterating over items
    val_func = val_params = None
    for (k, v) in key_validator.items():
        if k.startswith('type:'):
            # ask forgiveness, not permission
            try:
                val_func = validators[k]
            except KeyError:
                return _("Validator '%s' does not exist.") % k
            val_params = v
            break
    # Process validation
    if val_func:
        return val_func(data.get(key), val_params)


def _validate_dict(data, key_specs=None):
    if not isinstance(data, dict):
        msg = _("'%s' is not a dictionary") % data
        LOG.debug(msg)
        return msg
    # Do not perform any further validation, if no constraints are supplied
    if not key_specs:
        return

    # Check whether all required keys are present
    required_keys = [key for key, spec in key_specs.items()
                     if spec.get('required')]

    if required_keys:
        msg = _verify_dict_keys(required_keys, data, False)
        if msg:
            LOG.debug(msg)
            return msg

    # Perform validation and conversion of all values
    # according to the specifications.
    for key, key_validator in [(k, v) for k, v in key_specs.items()
                               if k in data]:
        msg = _validate_dict_item(key, key_validator, data)
        if msg:
            LOG.debug(msg)
            return msg


def _validate_dict_or_none(data, key_specs=None):
    if data is not None:
        return _validate_dict(data, key_specs)


def _validate_dict_or_empty(data, key_specs=None):
    if data != {}:
        return _validate_dict(data, key_specs)


def _validate_dict_or_nodata(data, key_specs=None):
    if data:
        return _validate_dict(data, key_specs)


def _validate_non_negative(data, valid_values=None):
    try:
        data = int(data)
    except (ValueError, TypeError):
        msg = _("'%s' is not an integer") % data
        LOG.debug(msg)
        return msg

    if data < 0:
        msg = _("'%s' should be non-negative") % data
        LOG.debug(msg)
        return msg


def convert_to_boolean(data):
    if isinstance(data, six.string_types):
        val = data.lower()
        if val == "true" or val == "1":
            return True
        if val == "false" or val == "0":
            return False
    elif isinstance(data, bool):
        return data
    elif isinstance(data, int):
        if data == 0:
            return False
        elif data == 1:
            return True
    msg = _("'%s' cannot be converted to boolean") % data
    raise n_exc.InvalidInput(error_message=msg)
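
# Illustrative sketch of the conversion semantics above:
#
#     convert_to_boolean("True")   # -> True   (case-insensitive string)
#     convert_to_boolean("0")      # -> False
#     convert_to_boolean(1)        # -> True
#     convert_to_boolean("yes")    # -> raises n_exc.InvalidInput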


def convert_to_int(data):
    try:
        return int(data)
    except (ValueError, TypeError):
        msg = _("'%s' is not an integer") % data
        raise n_exc.InvalidInput(error_message=msg)


def convert_kvp_str_to_list(data):
    """Convert a value of the form 'key=value' to ['key', 'value'].

    :raises n_exc.InvalidInput: if the string is malformed
        (e.g. does not contain a key).
    """
    kvp = [x.strip() for x in data.split('=', 1)]
    if len(kvp) == 2 and kvp[0]:
        return kvp
    msg = _("'%s' is not of the form <key>=[value]") % data
    raise n_exc.InvalidInput(error_message=msg)


def convert_kvp_list_to_dict(kvp_list):
    """Convert a list of 'key=value' strings to a dict.

    Values for duplicate keys are collected into a list.

    :raises n_exc.InvalidInput: if any of the strings are malformed
        (e.g. do not contain a key).
    """
    if kvp_list == ['True']:
        # No values were provided (i.e. '--flag-name')
        return {}
    kvp_map = {}
    for kvp_str in kvp_list:
        key, value = convert_kvp_str_to_list(kvp_str)
        kvp_map.setdefault(key, set())
        kvp_map[key].add(value)
    return dict((x, list(y)) for x, y in kvp_map.items())
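
# Illustrative sketch of the kvp helpers above:
#
#     convert_kvp_str_to_list('a=1')            # -> ['a', '1']
#     convert_kvp_list_to_dict(['a=1', 'a=2'])  # -> {'a': ['1', '2']}
#                                               # (value order unspecified,
#                                               #  as values pass via a set)
#     convert_kvp_list_to_dict(['=1'])          # -> raises n_exc.InvalidInput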


def convert_none_to_empty_list(value):
    return [] if value is None else value


def convert_none_to_empty_dict(value):
    return {} if value is None else value


def convert_to_list(data):
    if data is None:
        return []
    elif hasattr(data, '__iter__') and not isinstance(data, six.string_types):
        return list(data)
    else:
        return [data]


# Raw strings so the regex escapes survive intact under Python 3
HOSTNAME_PATTERN = (r"(?=^.{1,254}$)(^(?:(?!\d+\.|-)[a-zA-Z0-9_\-]"
                    r"{1,63}(?<!-)\.?)+(?:[a-zA-Z]{2,})$)")

HEX_ELEM = '[0-9A-Fa-f]'
UUID_PATTERN = '-'.join([HEX_ELEM + '{8}', HEX_ELEM + '{4}',
                         HEX_ELEM + '{4}', HEX_ELEM + '{4}',
                         HEX_ELEM + '{12}'])
# Note: In order to ensure that the MAC address is unicast, the first byte
# must be even.
MAC_PATTERN = "^%s[aceACE02468](:%s{2}){5}$" % (HEX_ELEM, HEX_ELEM)
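
# Illustrative sketch: the unicast check encoded in MAC_PATTERN keys off
# the second hex digit of the first byte (its least-significant bit must
# be 0):
#
#     re.match(MAC_PATTERN, 'fa:16:3e:00:00:01')  # matches (0xfa is even)
#     re.match(MAC_PATTERN, 'fb:16:3e:00:00:01')  # None (odd -> multicast)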

# Dictionary that maintains a list of validation functions
validators = {'type:dict': _validate_dict,
              'type:dict_or_none': _validate_dict_or_none,
              'type:dict_or_empty': _validate_dict_or_empty,
              'type:dict_or_nodata': _validate_dict_or_nodata,
              'type:fixed_ips': _validate_fixed_ips,
              'type:hostroutes': _validate_hostroutes,
              'type:ip_address': _validate_ip_address,
              'type:ip_address_or_none': _validate_ip_address_or_none,
              'type:ip_pools': _validate_ip_pools,
              'type:mac_address': _validate_mac_address,
              'type:mac_address_or_none': _validate_mac_address_or_none,
              'type:nameservers': _validate_nameservers,
              'type:non_negative': _validate_non_negative,
              'type:range': _validate_range,
              'type:regex': _validate_regex,
              'type:regex_or_none': _validate_regex_or_none,
              'type:string': _validate_string,
              'type:string_or_none': _validate_string_or_none,
              'type:not_empty_string': _validate_not_empty_string,
              'type:not_empty_string_or_none':
                  _validate_not_empty_string_or_none,
              'type:subnet': _validate_subnet,
              'type:subnet_list': _validate_subnet_list,
              'type:subnet_or_none': _validate_subnet_or_none,
              'type:uuid': _validate_uuid,
              'type:uuid_or_none': _validate_uuid_or_none,
              'type:uuid_list': _validate_uuid_list,
              'type:values': _validate_values,
              'type:boolean': _validate_boolean}

# Define constants for base resource name

# Note: a default of ATTR_NOT_SPECIFIED indicates that an
# attribute is not required, but will be generated by the plugin
# if it is not specified. In particular, a value of ATTR_NOT_SPECIFIED
# is different from an attribute that has been specified with a value of
# None. For example, if 'gateway_ip' is omitted in a request to
# create a subnet, the plugin will receive ATTR_NOT_SPECIFIED
# and the default gateway_ip will be generated.
# However, if gateway_ip is specified as None, this means that
# the subnet does not have a gateway IP.
# The following is a short reference for understanding attribute info:
# default: default value of the attribute (if missing, the attribute
#   becomes mandatory).
# allow_post: the attribute can be used on POST requests.
# allow_put: the attribute can be used on PUT requests.
# validate: specifies rules for validating data in the attribute.
# convert_to: transformation to apply to the value before it is returned.
# is_visible: the attribute is returned in GET responses.
# required_by_policy: the attribute is required by the policy engine and
#   should therefore be filled by the API layer even if not present in
#   request body.
# enforce_policy: the attribute is actively part of the policy enforcing
#   mechanism, i.e. there might be rules which refer to this attribute.
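
# Illustrative sketch (hypothetical resource and attribute names): an
# entry in a resource attribute map combining the properties described
# above with the 'type:' validators from this module:
#
#     RESOURCE_ATTRIBUTE_MAP = {
#         'widgets': {
#             'id': {'allow_post': False, 'allow_put': False,
#                    'validate': {'type:uuid': None},
#                    'is_visible': True, 'primary_key': True},
#             'name': {'allow_post': True, 'allow_put': True,
#                      'validate': {'type:string': 255},
#                      'default': '', 'is_visible': True},
#         },
#     }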


RESOURCE_ATTRIBUTE_MAP = {}

# Identify the attribute used by a resource to reference another resource
RESOURCE_FOREIGN_KEYS = {}

PLURALS = {'extensions': 'extension'}
600
apmec/api/v1/base.py
Normal file
@ -0,0 +1,600 @@
# Copyright (c) 2012 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import netaddr
import webob.exc

from oslo_log import log as logging
from oslo_utils import strutils

from apmec.api import api_common
from apmec.api.v1 import attributes
from apmec.api.v1 import resource as wsgi_resource
from apmec.common import exceptions
from apmec.common import rpc as n_rpc
from apmec import policy


LOG = logging.getLogger(__name__)

FAULT_MAP = {exceptions.NotFound: webob.exc.HTTPNotFound,
             exceptions.Conflict: webob.exc.HTTPConflict,
             exceptions.InUse: webob.exc.HTTPConflict,
             exceptions.BadRequest: webob.exc.HTTPBadRequest,
             exceptions.ServiceUnavailable: webob.exc.HTTPServiceUnavailable,
             exceptions.NotAuthorized: webob.exc.HTTPForbidden,
             netaddr.AddrFormatError: webob.exc.HTTPBadRequest,
             }


class Controller(object):
    LIST = 'list'
    SHOW = 'show'
    CREATE = 'create'
    UPDATE = 'update'
    DELETE = 'delete'

    def __init__(self, plugin, collection, resource, attr_info,
                 allow_bulk=False, member_actions=None, parent=None,
                 allow_pagination=False, allow_sorting=False):
        member_actions = member_actions or []
        self._plugin = plugin
        self._collection = collection.replace('-', '_')
        self._resource = resource.replace('-', '_')
        self._attr_info = attr_info
        self._allow_bulk = allow_bulk
        self._allow_pagination = allow_pagination
        self._allow_sorting = allow_sorting
        self._native_bulk = self._is_native_bulk_supported()
        self._native_pagination = self._is_native_pagination_supported()
        self._native_sorting = self._is_native_sorting_supported()
        self._policy_attrs = [name for (name, info) in self._attr_info.items()
                              if info.get('required_by_policy')]
        self._notifier = n_rpc.get_notifier('mec')
        self._member_actions = member_actions
        self._primary_key = self._get_primary_key()
        if self._allow_pagination and self._native_pagination:
            # Native pagination needs native sorting support
            if not self._native_sorting:
                raise exceptions.Invalid(
                    _("Native pagination depends on native sorting")
                )
            if not self._allow_sorting:
                LOG.info("Allow sorting is enabled because native "
                         "pagination requires native sorting")
                self._allow_sorting = True

        if parent:
            self._parent_id_name = '%s_id' % parent['member_name']
            parent_part = '_%s' % parent['member_name']
        else:
            self._parent_id_name = None
            parent_part = ''
        self._plugin_handlers = {
            self.LIST: 'get%s_%s' % (parent_part, self._collection),
            self.SHOW: 'get%s_%s' % (parent_part, self._resource)
        }
        for action in [self.CREATE, self.UPDATE, self.DELETE]:
            self._plugin_handlers[action] = '%s%s_%s' % (action, parent_part,
                                                         self._resource)

    def _get_primary_key(self, default_primary_key='id'):
        for key, value in self._attr_info.items():
            if value.get('primary_key', False):
                return key
        return default_primary_key

    def _is_native_bulk_supported(self):
        native_bulk_attr_name = ("_%s__native_bulk_support"
                                 % self._plugin.__class__.__name__)
        return getattr(self._plugin, native_bulk_attr_name, False)

    def _is_native_pagination_supported(self):
        native_pagination_attr_name = ("_%s__native_pagination_support"
                                       % self._plugin.__class__.__name__)
        return getattr(self._plugin, native_pagination_attr_name, False)

    def _is_native_sorting_supported(self):
        native_sorting_attr_name = ("_%s__native_sorting_support"
                                    % self._plugin.__class__.__name__)
        return getattr(self._plugin, native_sorting_attr_name, False)
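
    # Illustrative sketch (hypothetical plugin name): the three
    # _is_native_* checks above rely on Python's class-private name
    # mangling. A plugin advertises native support by defining
    # double-underscore attributes in its own class body:
    #
    #     class MyPlugin(object):
    #         __native_bulk_support = True   # _MyPlugin__native_bulk_support
    #         __native_pagination_support = True
    #         __native_sorting_support = True
    #
    # getattr() then finds the mangled name built from the plugin class
    # name, defaulting to False when the attribute is absent.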

    def _exclude_attributes_by_policy(self, context, data):
        """Identifies attributes to exclude according to authZ policies.

        Return a list of attribute names which should be stripped from the
        response returned to the user because the user is not authorized
        to see them.
        """
        attributes_to_exclude = []
        for attr_name in data.keys():
            attr_data = self._attr_info.get(attr_name)
            if attr_data and attr_data['is_visible']:
                if policy.check(
                        context,
                        '%s:%s' % (self._plugin_handlers[self.SHOW],
                                   attr_name),
                        data,
                        might_not_exist=True):
                    # this attribute is visible, check next one
                    continue
            # if the code reaches this point then either the policy check
            # failed or the attribute was not visible in the first place
            attributes_to_exclude.append(attr_name)
        return attributes_to_exclude

    def _view(self, context, data, fields_to_strip=None):
        """Build a view of an API resource.

        :param context: the apmec context
        :param data: the object for which a view is being created
        :param fields_to_strip: attributes to remove from the view

        :returns: a view of the object which includes only attributes
            visible according to API resource declaration and authZ
            policies.
        """
        fields_to_strip = ((fields_to_strip or []) +
                           self._exclude_attributes_by_policy(context, data))
        return self._filter_attributes(context, data, fields_to_strip)

    def _filter_attributes(self, context, data, fields_to_strip=None):
        if not fields_to_strip:
            return data
        return dict(item for item in data.items()
                    if (item[0] not in fields_to_strip))

    def _do_field_list(self, original_fields):
        fields_to_add = None
        # don't do anything if fields were not specified in the request
        if original_fields:
            fields_to_add = [attr for attr in self._policy_attrs
                             if attr not in original_fields]
            original_fields.extend(self._policy_attrs)
        return original_fields, fields_to_add

    def __getattr__(self, name):
        if name in self._member_actions:
            def _handle_action(request, id, **kwargs):
                arg_list = [request.context, id]
                # Ensure policy engine is initialized
                policy.init()
                # Fetch the resource and verify if the user can access it
                try:
                    resource = self._item(request, id, True)
                except exceptions.PolicyNotAuthorized:
                    msg = _('The resource could not be found.')
                    raise webob.exc.HTTPNotFound(msg)
                body = kwargs.pop('body', None)
                # Explicit comparison with None to distinguish from {}
                if body is not None:
                    arg_list.append(body)
                # It is ok to raise a 403 because accessibility to the
                # object was checked earlier in this method
                policy.enforce(request.context, name, resource)
                return getattr(self._plugin, name)(*arg_list, **kwargs)
            return _handle_action
        else:
            raise AttributeError

    def _get_pagination_helper(self, request):
        if self._allow_pagination and self._native_pagination:
            return api_common.PaginationNativeHelper(request,
                                                     self._primary_key)
        elif self._allow_pagination:
            return api_common.PaginationEmulatedHelper(request,
                                                       self._primary_key)
        return api_common.NoPaginationHelper(request, self._primary_key)

    def _get_sorting_helper(self, request):
        if self._allow_sorting and self._native_sorting:
            return api_common.SortingNativeHelper(request, self._attr_info)
        elif self._allow_sorting:
            return api_common.SortingEmulatedHelper(request, self._attr_info)
        return api_common.NoSortingHelper(request, self._attr_info)

    def _items(self, request, do_authz=False, parent_id=None):
        """Retrieves and formats a list of elements of the requested entity."""
        # NOTE(salvatore-orlando): The following ensures that fields which
        # are needed for authZ policy validation are not stripped away by the
        # plugin before returning.
        original_fields, fields_to_add = self._do_field_list(
            api_common.list_args(request, 'fields'))
        filters = api_common.get_filters(request, self._attr_info,
                                         ['fields', 'sort_key', 'sort_dir',
                                          'limit', 'marker', 'page_reverse'])
        kwargs = {'filters': filters,
                  'fields': original_fields}
        sorting_helper = self._get_sorting_helper(request)
        pagination_helper = self._get_pagination_helper(request)
        sorting_helper.update_args(kwargs)
        sorting_helper.update_fields(original_fields, fields_to_add)
        pagination_helper.update_args(kwargs)
        pagination_helper.update_fields(original_fields, fields_to_add)
        if parent_id:
            kwargs[self._parent_id_name] = parent_id
        obj_getter = getattr(self._plugin, self._plugin_handlers[self.LIST])
        obj_list = obj_getter(request.context, **kwargs)
        obj_list = sorting_helper.sort(obj_list)
        obj_list = pagination_helper.paginate(obj_list)
        # Check authz
        if do_authz:
            # FIXME(salvatore-orlando): obj_getter might return references to
            # other resources. Must check authZ on them too.
            # Omit items from list that should not be visible
            obj_list = [obj for obj in obj_list
                        if policy.check(request.context,
                                        self._plugin_handlers[self.SHOW],
                                        obj,
                                        plugin=self._plugin)]
        # Use the first element in the list for discriminating which
        # attributes should be filtered out because of authZ policies.
        # fields_to_add contains a list of attributes added for request
        # policy checks but that were not required by the user. They
        # should therefore be stripped.
        fields_to_strip = fields_to_add or []
        if obj_list:
            fields_to_strip += self._exclude_attributes_by_policy(
                request.context, obj_list[0])
        collection = {self._collection:
                      [self._filter_attributes(
                          request.context, obj,
                          fields_to_strip=fields_to_strip)
                       for obj in obj_list]}
        pagination_links = pagination_helper.get_links(obj_list)
        if pagination_links:
            collection[self._collection + "_links"] = pagination_links
        return collection

    def _item(self, request, id, do_authz=False, field_list=None,
              parent_id=None):
        """Retrieves and formats a single element of the requested entity."""
        kwargs = {'fields': field_list}
        action = self._plugin_handlers[self.SHOW]
        if parent_id:
            kwargs[self._parent_id_name] = parent_id
        obj_getter = getattr(self._plugin, action)
        obj = obj_getter(request.context, id, **kwargs)
        # Check authz
        # FIXME(salvatore-orlando): obj_getter might return references to
        # other resources. Must check authZ on them too.
        if do_authz:
            policy.enforce(request.context, action, obj)
        return obj

    def index(self, request, **kwargs):
        """Returns a list of the requested entity."""
        parent_id = kwargs.get(self._parent_id_name)
        # Ensure policy engine is initialized
        policy.init()
        return self._items(request, True, parent_id)

    def show(self, request, id, **kwargs):
        """Returns detailed information about the requested entity."""
        try:
            # NOTE(salvatore-orlando): The following ensures that fields
            # which are needed for authZ policy validation are not stripped
            # away by the plugin before returning.
            field_list, added_fields = self._do_field_list(
                api_common.list_args(request, "fields"))
            parent_id = kwargs.get(self._parent_id_name)
            # Ensure policy engine is initialized
            policy.init()
            return {self._resource:
                    self._view(request.context,
                               self._item(request,
                                          id,
                                          do_authz=True,
                                          field_list=field_list,
                                          parent_id=parent_id),
                               fields_to_strip=added_fields)}
        except exceptions.PolicyNotAuthorized:
            # To avoid giving away information, pretend that it
            # doesn't exist
            msg = _('The resource could not be found.')
            raise webob.exc.HTTPNotFound(msg)

    def _emulate_bulk_create(self, obj_creator, request, body,
                             parent_id=None):
        objs = []
        try:
            for item in body[self._collection]:
                kwargs = {self._resource: item}
                if parent_id:
                    kwargs[self._parent_id_name] = parent_id
                fields_to_strip = self._exclude_attributes_by_policy(
                    request.context, item)
                objs.append(self._filter_attributes(
                    request.context,
                    obj_creator(request.context, **kwargs),
                    fields_to_strip=fields_to_strip))
            return objs
        # Note(salvatore-orlando): broad catch as in theory a plugin
        # could raise any kind of exception
        except Exception:
            for obj in objs:
                obj_deleter = getattr(self._plugin,
                                      self._plugin_handlers[self.DELETE])
                try:
                    kwargs = ({self._parent_id_name: parent_id} if parent_id
                              else {})
                    obj_deleter(request.context, obj['id'], **kwargs)
                except Exception:
                    # broad catch as our only purpose is to log the exception
                    LOG.exception("Unable to undo add for %(resource)s "
                                  "%(id)s",
                                  {'resource': self._resource,
                                   'id': obj['id']})
            # TODO(salvatore-orlando): The object being processed when the
            # plugin raised might have been created or not in the db.
            # We need a way for ensuring that if it has been created,
            # it is then deleted
            raise
|
||||||
|
|
||||||
|
def create(self, request, body=None, **kwargs):
|
||||||
|
"""Creates a new instance of the requested entity."""
|
||||||
|
parent_id = kwargs.get(self._parent_id_name)
|
||||||
|
self._notifier.info(request.context,
|
||||||
|
self._resource + '.create.start',
|
||||||
|
body)
|
||||||
|
body = Controller.prepare_request_body(request.context, body, True,
|
||||||
|
self._resource, self._attr_info,
|
||||||
|
allow_bulk=self._allow_bulk)
|
||||||
|
action = self._plugin_handlers[self.CREATE]
|
||||||
|
# Check authz
|
||||||
|
if self._collection in body:
|
||||||
|
# Have to account for bulk create
|
||||||
|
items = body[self._collection]
|
||||||
|
else:
|
||||||
|
items = [body]
|
||||||
|
# Ensure policy engine is initialized
|
||||||
|
policy.init()
|
||||||
|
for item in items:
|
||||||
|
policy.enforce(request.context,
|
||||||
|
action,
|
||||||
|
item[self._resource])
|
||||||
|
|
||||||
|
def notify(create_result):
|
||||||
|
notifier_method = self._resource + '.create.end'
|
||||||
|
self._notifier.info(request.context,
|
||||||
|
notifier_method,
|
||||||
|
create_result)
|
||||||
|
return create_result
|
||||||
|
|
||||||
|
kwargs = {self._parent_id_name: parent_id} if parent_id else {}
|
||||||
|
if self._collection in body and self._native_bulk:
|
||||||
|
# plugin does atomic bulk create operations
|
||||||
|
obj_creator = getattr(self._plugin, "%s_bulk" % action)
|
||||||
|
objs = obj_creator(request.context, body, **kwargs)
|
||||||
|
# Use first element of list to discriminate attributes which
|
||||||
|
# should be removed because of authZ policies
|
||||||
|
fields_to_strip = self._exclude_attributes_by_policy(
|
||||||
|
request.context, objs[0])
|
||||||
|
return notify({self._collection: [self._filter_attributes(
|
||||||
|
request.context, obj, fields_to_strip=fields_to_strip)
|
||||||
|
for obj in objs]})
|
||||||
|
else:
|
||||||
|
obj_creator = getattr(self._plugin, action)
|
||||||
|
if self._collection in body:
|
||||||
|
# Emulate atomic bulk behavior
|
||||||
|
objs = self._emulate_bulk_create(obj_creator, request,
|
||||||
|
body, parent_id)
|
||||||
|
return notify({self._collection: objs})
|
||||||
|
else:
|
||||||
|
kwargs.update({self._resource: body})
|
||||||
|
obj = obj_creator(request.context, **kwargs)
|
||||||
|
return notify({self._resource: self._view(request.context,
|
||||||
|
obj)})
|
||||||
|
|
||||||
|
def delete(self, request, id, **kwargs):
|
||||||
|
"""Deletes the specified entity."""
|
||||||
|
self._notifier.info(request.context,
|
||||||
|
self._resource + '.delete.start',
|
||||||
|
{self._resource + '_id': id})
|
||||||
|
action = self._plugin_handlers[self.DELETE]
|
||||||
|
|
||||||
|
# Check authz
|
||||||
|
policy.init()
|
||||||
|
parent_id = kwargs.get(self._parent_id_name)
|
||||||
|
obj = self._item(request, id, parent_id=parent_id)
|
||||||
|
try:
|
||||||
|
policy.enforce(request.context,
|
||||||
|
action,
|
||||||
|
obj)
|
||||||
|
except exceptions.PolicyNotAuthorized:
|
||||||
|
# To avoid giving away information, pretend that it
|
||||||
|
# doesn't exist
|
||||||
|
msg = _('The resource could not be found.')
|
||||||
|
raise webob.exc.HTTPNotFound(msg)
|
||||||
|
|
||||||
|
obj_deleter = getattr(self._plugin, action)
|
||||||
|
obj_deleter(request.context, id, **kwargs)
|
||||||
|
notifier_method = self._resource + '.delete.end'
|
||||||
|
self._notifier.info(request.context,
|
||||||
|
notifier_method,
|
||||||
|
{self._resource + '_id': id})
|
||||||
|
|
||||||
|
def update(self, request, id, body=None, **kwargs):
|
||||||
|
"""Updates the specified entity's attributes."""
|
||||||
|
parent_id = kwargs.get(self._parent_id_name)
|
||||||
|
try:
|
||||||
|
payload = body.copy()
|
||||||
|
except AttributeError:
|
||||||
|
msg = _("Invalid format: %s") % request.body
|
||||||
|
raise exceptions.BadRequest(resource='body', msg=msg)
|
||||||
|
payload['id'] = id
|
||||||
|
self._notifier.info(request.context,
|
||||||
|
self._resource + '.update.start',
|
||||||
|
payload)
|
||||||
|
body = Controller.prepare_request_body(request.context, body, False,
|
||||||
|
self._resource, self._attr_info,
|
||||||
|
allow_bulk=self._allow_bulk)
|
||||||
|
action = self._plugin_handlers[self.UPDATE]
|
||||||
|
# Load object to check authz
|
||||||
|
# but pass only attributes in the original body and required
|
||||||
|
# by the policy engine to the policy 'brain'
|
||||||
|
field_list = [name for (name, value) in (self._attr_info).items()
|
||||||
|
if (value.get('required_by_policy') or
|
||||||
|
value.get('primary_key') or
|
||||||
|
'default' not in value)]
|
||||||
|
# Ensure policy engine is initialized
|
||||||
|
policy.init()
|
||||||
|
orig_obj = self._item(request, id, field_list=field_list,
|
||||||
|
parent_id=parent_id)
|
||||||
|
orig_obj.update(body[self._resource])
|
||||||
|
attribs = attributes.ATTRIBUTES_TO_UPDATE
|
||||||
|
orig_obj[attribs] = body[self._resource].keys()
|
||||||
|
try:
|
||||||
|
policy.enforce(request.context,
|
||||||
|
action,
|
||||||
|
orig_obj)
|
||||||
|
except exceptions.PolicyNotAuthorized:
|
||||||
|
# To avoid giving away information, pretend that it
|
||||||
|
# doesn't exist
|
||||||
|
msg = _('The resource could not be found.')
|
||||||
|
raise webob.exc.HTTPNotFound(msg)
|
||||||
|
|
||||||
|
obj_updater = getattr(self._plugin, action)
|
||||||
|
kwargs = {self._resource: body}
|
||||||
|
if parent_id:
|
||||||
|
kwargs[self._parent_id_name] = parent_id
|
||||||
|
obj = obj_updater(request.context, id, **kwargs)
|
||||||
|
result = {self._resource: self._view(request.context, obj)}
|
||||||
|
notifier_method = self._resource + '.update.end'
|
||||||
|
self._notifier.info(request.context, notifier_method, result)
|
||||||
|
return result
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def _populate_tenant_id(context, res_dict, is_create):
|
||||||
|
|
||||||
|
if (('tenant_id' in res_dict and
|
||||||
|
res_dict['tenant_id'] != context.tenant_id and
|
||||||
|
not context.is_admin)):
|
||||||
|
msg = _("Specifying 'tenant_id' other than authenticated "
|
||||||
|
"tenant in request requires admin privileges")
|
||||||
|
raise webob.exc.HTTPBadRequest(msg)
|
||||||
|
|
||||||
|
if is_create and 'tenant_id' not in res_dict:
|
||||||
|
if context.tenant_id:
|
||||||
|
res_dict['tenant_id'] = context.tenant_id
|
||||||
|
else:
|
||||||
|
msg = _("Running without keystone AuthN requires "
|
||||||
|
" that tenant_id is specified")
|
||||||
|
raise webob.exc.HTTPBadRequest(msg)
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def prepare_request_body(context, body, is_create, resource, attr_info,
|
||||||
|
allow_bulk=False):
|
||||||
|
"""Verifies required attributes are in request body.
|
||||||
|
|
||||||
|
Also checking that an attribute is only specified if it is allowed
|
||||||
|
for the given operation (create/update).
|
||||||
|
|
||||||
|
Attribute with default values are considered to be optional.
|
||||||
|
|
||||||
|
body argument must be the deserialized body.
|
||||||
|
"""
|
||||||
|
collection = resource + "s"
|
||||||
|
if not body:
|
||||||
|
raise webob.exc.HTTPBadRequest(_("Resource body required"))
|
||||||
|
|
||||||
|
LOG.debug("Request body: %(body)s",
|
||||||
|
{'body': strutils.mask_password(body)})
|
||||||
|
prep_req_body = lambda x: Controller.prepare_request_body(
|
||||||
|
context,
|
||||||
|
x if resource in x else {resource: x},
|
||||||
|
is_create,
|
||||||
|
resource,
|
||||||
|
attr_info,
|
||||||
|
allow_bulk)
|
||||||
|
if collection in body:
|
||||||
|
if not allow_bulk:
|
||||||
|
raise webob.exc.HTTPBadRequest(_("Bulk operation "
|
||||||
|
"not supported"))
|
||||||
|
bulk_body = [prep_req_body(item) for item in body[collection]]
|
||||||
|
if not bulk_body:
|
||||||
|
raise webob.exc.HTTPBadRequest(_("Resources required"))
|
||||||
|
return {collection: bulk_body}
|
||||||
|
|
||||||
|
res_dict = body.get(resource)
|
||||||
|
if res_dict is None:
|
||||||
|
msg = _("Unable to find '%s' in request body") % resource
|
||||||
|
raise webob.exc.HTTPBadRequest(msg)
|
||||||
|
|
||||||
|
Controller._populate_tenant_id(context, res_dict, is_create)
|
||||||
|
Controller._verify_attributes(res_dict, attr_info)
|
||||||
|
|
||||||
|
if is_create: # POST
|
||||||
|
for attr, attr_vals in (attr_info).items():
|
||||||
|
if attr_vals['allow_post']:
|
||||||
|
if ('default' not in attr_vals and
|
||||||
|
attr not in res_dict):
|
||||||
|
msg = _("Failed to parse request. Required "
|
||||||
|
"attribute '%s' not specified") % attr
|
||||||
|
raise webob.exc.HTTPBadRequest(msg)
|
||||||
|
res_dict[attr] = res_dict.get(attr,
|
||||||
|
attr_vals.get('default'))
|
||||||
|
else:
|
||||||
|
if attr in res_dict:
|
||||||
|
msg = _("Attribute '%s' not allowed in POST") % attr
|
||||||
|
raise webob.exc.HTTPBadRequest(msg)
|
||||||
|
else: # PUT
|
||||||
|
for attr, attr_vals in (attr_info).items():
|
||||||
|
if attr in res_dict and not attr_vals['allow_put']:
|
||||||
|
msg = _("Cannot update read-only attribute %s") % attr
|
||||||
|
raise webob.exc.HTTPBadRequest(msg)
|
||||||
|
|
||||||
|
for attr, attr_vals in (attr_info).items():
|
||||||
|
if (attr not in res_dict or
|
||||||
|
res_dict[attr] is attributes.ATTR_NOT_SPECIFIED):
|
||||||
|
continue
|
||||||
|
# Convert values if necessary
|
||||||
|
if 'convert_to' in attr_vals:
|
||||||
|
res_dict[attr] = attr_vals['convert_to'](res_dict[attr])
|
||||||
|
# Check that configured values are correct
|
||||||
|
if 'validate' not in attr_vals:
|
||||||
|
continue
|
||||||
|
for rule in attr_vals['validate']:
|
||||||
|
# skip validating mead_id when mead_template is specified to
|
||||||
|
# create mea
|
||||||
|
if (resource == 'mea') and ('mead_template' in body['mea'])\
|
||||||
|
and (attr == "mead_id") and is_create:
|
||||||
|
continue
|
||||||
|
# skip validating mesd_id when mesd_template is provided
|
||||||
|
if (resource == 'mes') and ('mesd_template' in body['mes'])\
|
||||||
|
and (attr == 'mesd_id') and is_create:
|
||||||
|
continue
|
||||||
|
res = attributes.validators[rule](res_dict[attr],
|
||||||
|
attr_vals['validate'][rule])
|
||||||
|
if res:
|
||||||
|
msg_dict = dict(attr=attr, reason=res)
|
||||||
|
msg = _("Invalid input for %(attr)s. "
|
||||||
|
"Reason: %(reason)s.") % msg_dict
|
||||||
|
raise webob.exc.HTTPBadRequest(msg)
|
||||||
|
return body
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def _verify_attributes(res_dict, attr_info):
|
||||||
|
extra_keys = set(res_dict.keys()) - set(attr_info.keys())
|
||||||
|
if extra_keys:
|
||||||
|
msg = _("Unrecognized attribute(s) '%s'") % ', '.join(extra_keys)
|
||||||
|
raise webob.exc.HTTPBadRequest(msg)
|
||||||
|
|
||||||
|
|
||||||
|
def create_resource(collection, resource, plugin, params, allow_bulk=False,
|
||||||
|
member_actions=None, parent=None, allow_pagination=False,
|
||||||
|
allow_sorting=False):
|
||||||
|
controller = Controller(plugin, collection, resource, params, allow_bulk,
|
||||||
|
member_actions=member_actions, parent=parent,
|
||||||
|
allow_pagination=allow_pagination,
|
||||||
|
allow_sorting=allow_sorting)
|
||||||
|
|
||||||
|
return wsgi_resource.Resource(controller, FAULT_MAP)
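As a rough usage sketch of create_resource() (everything below, the 'widget' resource and its attribute map, is invented for illustration and assumes a plugin object exposing get_widgets/get_widget/create_widget/update_widget/delete_widget handlers):

# Hypothetical attribute map; the keys mirror the ones prepare_request_body
# and the policy checks above actually look at.
WIDGET_ATTRS = {
    'id': {'allow_post': False, 'allow_put': False,
           'is_visible': True, 'primary_key': True},
    'name': {'allow_post': True, 'allow_put': True,
             'is_visible': True, 'default': ''},
    'tenant_id': {'allow_post': True, 'allow_put': False,
                  'required_by_policy': True, 'is_visible': True},
}

widget_resource = create_resource('widgets', 'widget', plugin, WIDGET_ATTRS,
                                  allow_bulk=True)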
114
apmec/api/v1/resource.py
Normal file
@ -0,0 +1,114 @@
# Copyright 2012 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Utility methods for working with WSGI servers redux
"""

from oslo_log import log as logging
import webob.dec

from apmec.api import api_common
from apmec import wsgi


LOG = logging.getLogger(__name__)


class Request(wsgi.Request):
    pass


def Resource(controller, faults=None, deserializers=None, serializers=None):
    """API entity resource.

    Represents an API entity resource and the associated serialization and
    deserialization logic
    """
    default_deserializers = {'application/json': wsgi.JSONDeserializer()}
    default_serializers = {'application/json': wsgi.JSONDictSerializer()}
    format_types = {'json': 'application/json'}
    action_status = dict(create=201, delete=204)

    default_deserializers.update(deserializers or {})
    default_serializers.update(serializers or {})

    deserializers = default_deserializers
    serializers = default_serializers
    faults = faults or {}

    @webob.dec.wsgify(RequestClass=Request)
    def resource(request):
        route_args = request.environ.get('wsgiorg.routing_args')
        if route_args:
            args = route_args[1].copy()
        else:
            args = {}

        # NOTE(jkoelker) by now the controller is already found, remove
        # it from the args if it is in the matchdict
        args.pop('controller', None)
        fmt = args.pop('format', None)
        action = args.pop('action', None)
        content_type = format_types.get(fmt,
                                        request.best_match_content_type())
        language = request.best_match_language()
        deserializer = deserializers.get(content_type)
        serializer = serializers.get(content_type)

        try:
            if request.body:
                args['body'] = deserializer.deserialize(request.body)['body']

            method = getattr(controller, action)

            result = method(request=request, **args)
        except Exception as e:
            mapped_exc = api_common.convert_exception_to_http_exc(e, faults,
                                                                  language)
            if hasattr(mapped_exc, 'code') and 400 <= mapped_exc.code < 500:
                LOG.info('%(action)s failed (client error): %(exc)s',
                         {'action': action, 'exc': mapped_exc})
            else:
                LOG.exception('%(action)s failed: %(details)s',
                              {'action': action,
                               'details': extract_exc_details(e)})
            raise mapped_exc

        status = action_status.get(action, 200)
        body = serializer.serialize(result)
        # NOTE(jkoelker) Comply with RFC2616 section 9.7
        if status == 204:
            content_type = ''
            body = None

        return webob.Response(request=request, status=status,
                              content_type=content_type,
                              body=body)
    return resource


_NO_ARGS_MARKER = object()


def extract_exc_details(e):
    for attr in ('_error_context_msg', '_error_context_args'):
        if not hasattr(e, attr):
            return _('No details.')
    details = e._error_context_msg
    args = e._error_context_args
    if args is _NO_ARGS_MARKER:
        return details
    return details % args
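For reference, a minimal sketch of the contract extract_exc_details relies on: callers may stash a message and its interpolation arguments on an exception, with _NO_ARGS_MARKER meaning "no interpolation". The exception class here is made up:

class FakePluginError(Exception):
    pass

err = FakePluginError()
err._error_context_msg = 'failed to process %(name)s'
err._error_context_args = {'name': 'demo'}
extract_exc_details(err)   # -> 'failed to process demo'

bare = FakePluginError()
extract_exc_details(bare)  # -> 'No details.' (context attributes missing)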
83
apmec/api/v1/resource_helper.py
Normal file
@ -0,0 +1,83 @@
# (c) Copyright 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_config import cfg

from apmec.api import extensions
from apmec.api.v1 import base
from apmec import manager
from apmec.plugins.common import constants


def build_plural_mappings(special_mappings, resource_map):
    """Create plural to singular mapping for all resources.

    Allows for special mappings to be provided, like policies -> policy.
    Otherwise, will strip off the last character for normal mappings, like
    routers -> router.
    """
    plural_mappings = {}
    for plural in resource_map:
        singular = special_mappings.get(plural, plural[:-1])
        plural_mappings[plural] = singular
    return plural_mappings


def build_resource_info(plural_mappings, resource_map, which_service,
                        action_map=None,
                        translate_name=False, allow_bulk=False):
    """Build resources for advanced services.

    Takes the resource information, and singular/plural mappings, and creates
    API resource objects for advanced services extensions. Will optionally
    translate underscores to dashes in resource names, register the resource,
    and accept action information for resources.

    :param plural_mappings: mappings between singular and plural forms
    :param resource_map: attribute map for the WSGI resources to create
    :param which_service: The name of the service for which the WSGI resources
                          are being created. This name will be used to pass
                          the appropriate plugin to the WSGI resource.
                          It can be set to None or "CORE" to create WSGI
                          resources for the core plugin
    :param action_map: custom resource actions
    :param translate_name: replaces underscores with dashes
    :param allow_bulk: True if bulk creates are allowed
    """
    resources = []
    if not which_service:
        which_service = constants.CORE
    action_map = action_map or {}
    plugin = manager.ApmecManager.get_service_plugins()[which_service]
    for collection_name in resource_map:
        resource_name = plural_mappings[collection_name]
        params = resource_map.get(collection_name, {})
        if translate_name:
            collection_name = collection_name.replace('_', '-')
        member_actions = action_map.get(resource_name, {})
        controller = base.create_resource(
            collection_name, resource_name, plugin, params,
            member_actions=member_actions,
            allow_bulk=allow_bulk,
            allow_pagination=cfg.CONF.allow_pagination,
            allow_sorting=cfg.CONF.allow_sorting)
        resource = extensions.ResourceExtension(
            collection_name,
            controller,
            path_prefix=constants.COMMON_PREFIXES[which_service],
            member_actions=member_actions,
            attr_map=params)
        resources.append(resource)
    return resources
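A quick illustration of build_plural_mappings (the resource names are invented): regular plurals lose their trailing 's', while special cases are supplied explicitly:

resource_map = {'meas': {}, 'policies': {}}
build_plural_mappings({'policies': 'policy'}, resource_map)
# -> {'meas': 'mea', 'policies': 'policy'}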
60
apmec/api/v1/router.py
Normal file
@ -0,0 +1,60 @@
# Copyright (c) 2012 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import routes as routes_mapper
import six.moves.urllib.parse as urlparse
import webob
import webob.dec
import webob.exc

from apmec.api import extensions
from apmec.api.v1 import attributes
from apmec import wsgi


class Index(wsgi.Application):
    def __init__(self, resources):
        self.resources = resources

    @webob.dec.wsgify(RequestClass=wsgi.Request)
    def __call__(self, req):
        metadata = {}

        layout = []
        for name, collection in self.resources.items():
            href = urlparse.urljoin(req.path_url, collection)
            resource = {'name': name,
                        'collection': collection,
                        'links': [{'rel': 'self',
                                   'href': href}]}
            layout.append(resource)
        response = dict(resources=layout)
        content_type = req.best_match_content_type()
        body = wsgi.Serializer(metadata=metadata).serialize(response,
                                                            content_type)
        return webob.Response(body=body, content_type=content_type)


class APIRouter(wsgi.Router):

    @classmethod
    def factory(cls, global_config, **local_config):
        return cls(**local_config)

    def __init__(self, **local_config):
        mapper = routes_mapper.Mapper()
        ext_mgr = extensions.ExtensionManager.get_instance()
        ext_mgr.extend_resources("1.0", attributes.RESOURCE_ATTRIBUTE_MAP)
        super(APIRouter, self).__init__(mapper)
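A short sketch of the Index application in isolation (the resource names are examples): it renders a name -> collection mapping as a list of self links relative to the request URL:

index_app = Index({'mea': 'meas', 'mead': 'meads'})
# A GET against this WSGI app yields a body like:
# {'resources': [
#     {'name': 'mea', 'collection': 'meas',
#      'links': [{'rel': 'self', 'href': 'http://<host>/meas'}]},
#     {'name': 'mead', 'collection': 'meads',
#      'links': [{'rel': 'self', 'href': 'http://<host>/meads'}]}]}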
59
apmec/api/versions.py
Normal file
@ -0,0 +1,59 @@
# Copyright 2011 Citrix Systems.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import webob.dec
import webob.exc

import oslo_i18n

from apmec.api.views import versions as versions_view
from apmec import wsgi


class Versions(object):

    @classmethod
    def factory(cls, global_config, **local_config):
        return cls()

    @webob.dec.wsgify(RequestClass=wsgi.Request)
    def __call__(self, req):
        """Respond to a request for all Apmec API versions."""
        version_objs = [
            {
                "id": "v1.0",
                "status": "CURRENT",
            },
        ]

        if req.path != '/':
            language = req.best_match_language()
            msg = _('Unknown API version specified')
            msg = oslo_i18n.translate(msg, language)
            return webob.exc.HTTPNotFound(explanation=msg)

        builder = versions_view.get_view_builder(req)
        versions = [builder.build(version) for version in version_objs]
        response = dict(versions=versions)
        metadata = {}

        content_type = req.best_match_content_type()
        body = (wsgi.Serializer(metadata=metadata).
                serialize(response, content_type))

        response = webob.Response()
        response.content_type = content_type
        response.body = body

        return response
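For reference, the version discovery document this returns for GET / looks roughly like the following (host, port and links are deployment-specific placeholders):

# Approximate JSON body served by Versions at '/':
VERSION_INDEX_EXAMPLE = {
    "versions": [
        {
            "id": "v1.0",
            "status": "CURRENT",
            "links": [
                {"rel": "self", "href": "http://<host>:<port>/v1.0"},
            ],
        },
    ],
}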
0
apmec/api/views/__init__.py
Normal file
58
apmec/api/views/versions.py
Normal file
@ -0,0 +1,58 @@
# Copyright 2010-2011 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os


def get_view_builder(req):
    base_url = req.application_url
    return ViewBuilder(base_url)


class ViewBuilder(object):

    def __init__(self, base_url):
        """Object initialization.

        :param base_url: url of the root wsgi application
        """
        self.base_url = base_url

    def build(self, version_data):
        """Generic method used to generate a version entity."""
        version = {
            "id": version_data["id"],
            "status": version_data["status"],
            "links": self._build_links(version_data),
        }

        return version

    def _build_links(self, version_data):
        """Generate a container of links that refer to the provided version."""
        href = self.generate_href(version_data["id"])

        links = [
            {
                "rel": "self",
                "href": href,
            },
        ]

        return links

    def generate_href(self, version_number):
        """Create a URL that refers to a specific version_number."""
        return os.path.join(self.base_url, version_number)
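A small, isolated sketch of the view builder (the base URL is invented):

builder = ViewBuilder('http://localhost:9896/')
builder.build({'id': 'v1.0', 'status': 'CURRENT'})
# -> {'id': 'v1.0', 'status': 'CURRENT',
#     'links': [{'rel': 'self', 'href': 'http://localhost:9896/v1.0'}]}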
75
apmec/auth.py
Normal file
@ -0,0 +1,75 @@
# Copyright 2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_config import cfg
from oslo_log import log as logging
from oslo_middleware import request_id
import webob.dec
import webob.exc

from apmec import context
from apmec import wsgi

LOG = logging.getLogger(__name__)


class ApmecKeystoneContext(wsgi.Middleware):
    """Make a request context from keystone headers."""

    @webob.dec.wsgify
    def __call__(self, req):
        # Determine the user ID
        user_id = req.headers.get('X_USER_ID')
        if not user_id:
            LOG.debug("X_USER_ID is not found in request")
            return webob.exc.HTTPUnauthorized()

        # Determine the tenant
        tenant_id = req.headers.get('X_PROJECT_ID')

        # Suck out the roles
        roles = [r.strip() for r in req.headers.get('X_ROLES', '').split(',')]

        # Human-friendly names
        tenant_name = req.headers.get('X_PROJECT_NAME')
        user_name = req.headers.get('X_USER_NAME')

        # Use request_id if already set
        req_id = req.environ.get(request_id.ENV_REQUEST_ID)

        # Get the auth token
        auth_token = req.headers.get('X_AUTH_TOKEN',
                                     req.headers.get('X_STORAGE_TOKEN'))

        # Create a context with the authentication data
        ctx = context.Context(user_id, tenant_id, roles=roles,
                              user_name=user_name, tenant_name=tenant_name,
                              request_id=req_id, auth_token=auth_token)

        # Inject the context...
        req.environ['apmec.context'] = ctx

        return self.application


def pipeline_factory(loader, global_conf, **local_conf):
    """Create a paste pipeline based on the 'auth_strategy' config option."""
    pipeline = local_conf[cfg.CONF.auth_strategy]
    pipeline = pipeline.split()
    filters = [loader.get_filter(n) for n in pipeline[:-1]]
    app = loader.get_app(pipeline[-1])
    filters.reverse()
    for f in filters:
        app = f(app)
    return app
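pipeline_factory composes the filters right-to-left around the last pipeline entry, so the first name ends up outermost. A toy sketch of just that folding step (the stand-in filters below are hypothetical, not real paste filters):

def _compose(filter_factories, app):
    # Mirrors the loop in pipeline_factory: reverse, then wrap.
    filter_factories = list(filter_factories)
    filter_factories.reverse()
    for f in filter_factories:
        app = f(app)
    return app

wrapped = _compose(
    [lambda app: ('request_id', app),        # outermost wrapper
     lambda app: ('keystonecontext', app)],  # wraps the app directly
    'apmecapiapp')
# wrapped == ('request_id', ('keystonecontext', 'apmecapiapp'))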
0
apmec/catalogs/__init__.py
Normal file
0
apmec/catalogs/tosca/__init__.py
Normal file
192
apmec/catalogs/tosca/lib/apmec_defs.yaml
Normal file
@ -0,0 +1,192 @@
data_types:
  tosca.datatypes.apmec.ActionMap:
    properties:
      trigger:
        type: string
        required: true
      action:
        type: string
        required: true
      params:
        type: map
        entry_schema:
          type: string
        required: false

  tosca.datatypes.apmec.MonitoringParams:
    properties:
      monitoring_delay:
        type: int
        required: false
      count:
        type: int
        required: false
      interval:
        type: int
        required: false
      timeout:
        type: int
        required: false
      retry:
        type: int
        required: false
      port:
        type: int
        required: false

  tosca.datatypes.apmec.MonitoringType:
    properties:
      name:
        type: string
        required: true
      actions:
        type: map
        required: true
      parameters:
        type: tosca.datatypes.apmec.MonitoringParams
        required: false

  tosca.datatypes.compute_properties:
    properties:
      num_cpus:
        type: integer
        required: false
      mem_size:
        type: string
        required: false
      disk_size:
        type: string
        required: false
      mem_page_size:
        type: string
        required: false
      numa_node_count:
        type: integer
        constraints:
          - greater_or_equal: 2
        required: false
      numa_nodes:
        type: map
        required: false
      cpu_allocation:
        type: map
        required: false

  tosca.datatypes.apmec.VirtualIP:
    properties:
      ip_address:
        type: string
        required: true
        description: The virtual IP address allowed to be paired with.
      mac_address:
        type: string
        required: false
        description: The mac address allowed to be paired with specific virtual IP.

policy_types:
  tosca.policies.apmec.Placement:
    derived_from: tosca.policies.Root

  tosca.policies.apmec.Failure:
    derived_from: tosca.policies.Root
    action:
      type: string

  tosca.policies.apmec.Failure.Respawn:
    derived_from: tosca.policies.apmec.Failure
    action: respawn

  tosca.policies.apmec.Failure.Terminate:
    derived_from: tosca.policies.apmec.Failure
    action: log_and_kill

  tosca.policies.apmec.Failure.Log:
    derived_from: tosca.policies.apmec.Failure
    action: log

  tosca.policies.apmec.Monitoring:
    derived_from: tosca.policies.Root
    properties:
      name:
        type: string
        required: true
      parameters:
        type: map
        entry_schema:
          type: string
        required: false
      actions:
        type: map
        entry_schema:
          type: string
        required: true

  tosca.policies.apmec.Monitoring.NoOp:
    derived_from: tosca.policies.apmec.Monitoring
    properties:
      name: noop

  tosca.policies.apmec.Monitoring.Ping:
    derived_from: tosca.policies.apmec.Monitoring
    properties:
      name: ping

  tosca.policies.apmec.Monitoring.HttpPing:
    derived_from: tosca.policies.apmec.Monitoring.Ping
    properties:
      name: http-ping

  tosca.policies.apmec.Alarming:
    derived_from: tosca.policies.Monitoring
    triggers:
      resize_compute:
        event_type:
          type: map
          entry_schema:
            type: string
          required: true
        metrics:
          type: string
          required: true
        condition:
          type: map
          entry_schema:
            type: string
          required: false
        action:
          type: map
          entry_schema:
            type: string
          required: true

  tosca.policies.apmec.Scaling:
    derived_from: tosca.policies.Scaling
    description: Defines policy for scaling the given targets.
    properties:
      increment:
        type: integer
        required: true
        description: Number of nodes to add or remove during the scale out/in.
      targets:
        type: list
        entry_schema:
          type: string
        required: true
        description: List of Scaling nodes.
      min_instances:
        type: integer
        required: true
        description: Minimum number of instances to scale in.
      max_instances:
        type: integer
        required: true
        description: Maximum number of instances to scale out.
      default_instances:
        type: integer
        required: true
        description: Initial number of instances.
      cooldown:
        type: integer
        required: false
        default: 120
        description: Wait time (in seconds) between consecutive scaling operations. During the cooldown period, scaling actions are ignored.
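These definitions are plain YAML, so they can be sanity-checked directly; a sketch (path assumed relative to the repository root) that lists the mandatory Scaling properties:

import yaml

with open('apmec/catalogs/tosca/lib/apmec_defs.yaml') as f:
    defs = yaml.safe_load(f)

scaling = defs['policy_types']['tosca.policies.apmec.Scaling']
required = sorted(p for p, v in scaling['properties'].items()
                  if v.get('required'))
# ['default_instances', 'increment', 'max_instances',
#  'min_instances', 'targets']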
274
apmec/catalogs/tosca/lib/apmec_mec_defs.yaml
Normal file
@ -0,0 +1,274 @@
data_types:
  tosca.mec.datatypes.pathType:
    properties:
      forwarder:
        type: string
        required: true
      capability:
        type: string
        required: true

  tosca.mec.datatypes.aclType:
    properties:
      eth_type:
        type: string
        required: false
      eth_src:
        type: string
        required: false
      eth_dst:
        type: string
        required: false
      vlan_id:
        type: integer
        constraints:
          - in_range: [ 1, 4094 ]
        required: false
      vlan_pcp:
        type: integer
        constraints:
          - in_range: [ 0, 7 ]
        required: false
      mpls_label:
        type: integer
        constraints:
          - in_range: [ 16, 1048575 ]
        required: false
      mpls_tc:
        type: integer
        constraints:
          - in_range: [ 0, 7 ]
        required: false
      ip_dscp:
        type: integer
        constraints:
          - in_range: [ 0, 63 ]
        required: false
      ip_ecn:
        type: integer
        constraints:
          - in_range: [ 0, 3 ]
        required: false
      ip_src_prefix:
        type: string
        required: false
      ip_dst_prefix:
        type: string
        required: false
      ip_proto:
        type: integer
        constraints:
          - in_range: [ 1, 254 ]
        required: false
      destination_port_range:
        type: string
        required: false
      source_port_range:
        type: string
        required: false
      network_src_port_id:
        type: string
        required: false
      network_dst_port_id:
        type: string
        required: false
      network_id:
        type: string
        required: false
      network_name:
        type: string
        required: false
      tenant_id:
        type: string
        required: false
      icmpv4_type:
        type: integer
        constraints:
          - in_range: [ 0, 254 ]
        required: false
      icmpv4_code:
        type: integer
        constraints:
          - in_range: [ 0, 15 ]
        required: false
      arp_op:
        type: integer
        constraints:
          - in_range: [ 1, 25 ]
        required: false
      arp_spa:
        type: string
        required: false
      arp_tpa:
        type: string
        required: false
      arp_sha:
        type: string
        required: false
      arp_tha:
        type: string
        required: false
      ipv6_src:
        type: string
        required: false
      ipv6_dst:
        type: string
        required: false
      ipv6_flabel:
        type: integer
        constraints:
          - in_range: [ 0, 1048575 ]
        required: false
      icmpv6_type:
        type: integer
        constraints:
          - in_range: [ 0, 255 ]
        required: false
      icmpv6_code:
        type: integer
        constraints:
          - in_range: [ 0, 7 ]
        required: false
      ipv6_nd_target:
        type: string
        required: false
      ipv6_nd_sll:
        type: string
        required: false
      ipv6_nd_tll:
        type: string
        required: false

  tosca.mec.datatypes.policyType:
    properties:
      type:
        type: string
        required: false
        constraints:
          - valid_values: [ ACL ]
      criteria:
        type: list
        required: true
        entry_schema:
          type: tosca.mec.datatypes.aclType

node_types:
  tosca.nodes.mec.VDU.Apmec:
    derived_from: tosca.nodes.mec.VDU
    capabilities:
      mec_compute:
        type: tosca.datatypes.compute_properties
    properties:
      name:
        type: string
        required: false
      image:
        # type: tosca.artifacts.Deployment.Image.VM
        type: string
        required: false
      flavor:
        type: string
        required: false
      availability_zone:
        type: string
        required: false
      metadata:
        type: map
        entry_schema:
          type: string
        required: false
      config_drive:
        type: boolean
        default: false
        required: false

      placement_policy:
        # type: tosca.policies.apmec.Placement
        type: string
        required: false

      monitoring_policy:
        # type: tosca.policies.apmec.Monitoring
        # type: tosca.datatypes.apmec.MonitoringType
        type: map
        required: false

      config:
        type: string
        required: false

      mgmt_driver:
        type: string
        default: noop
        required: false

      service_type:
        type: string
        required: false

      user_data:
        type: string
        required: false

      user_data_format:
        type: string
        required: false

      key_name:
        type: string
        required: false

  tosca.nodes.mec.CP.Apmec:
    derived_from: tosca.nodes.mec.CP
    properties:
      mac_address:
        type: string
        required: false
      name:
        type: string
        required: false
      management:
        type: boolean
        required: false
      anti_spoofing_protection:
        type: boolean
        required: false
      allowed_address_pairs:
        type: list
        entry_schema:
          type: tosca.datatypes.apmec.VirtualIP
        required: false
      security_groups:
        type: list
        required: false
      type:
        type: string
        required: false
        constraints:
          - valid_values: [ sriov, vnic ]

  tosca.nodes.mec.MEAC.Apmec:
    derived_from: tosca.nodes.SoftwareComponent
    requirements:
      - host:
          node: tosca.nodes.mec.VDU.Apmec
          relationship: tosca.relationships.HostedOn

  tosca.nodes.BlockStorage.Apmec:
    derived_from: tosca.nodes.BlockStorage
    properties:
      image:
        type: string
        required: false

  tosca.nodes.BlockStorageAttachment:
    derived_from: tosca.nodes.Root
    properties:
      location:
        type: string
        required: true
    requirements:
      - virtualBinding:
          node: tosca.nodes.mec.VDU.Apmec
      - virtualAttachment:
          node: tosca.nodes.BlockStorage.Apmec
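Many aclType fields carry in_range constraints; a hedged sketch of checking one criteria value against them after loading the file (check_in_range is a hypothetical helper, not part of this commit):

import yaml

with open('apmec/catalogs/tosca/lib/apmec_mec_defs.yaml') as f:
    mec_defs = yaml.safe_load(f)

ACL_PROPS = mec_defs['data_types']['tosca.mec.datatypes.aclType']['properties']

def check_in_range(prop, value):
    # Enforce any in_range constraint declared for the property.
    for constraint in ACL_PROPS[prop].get('constraints', []):
        if 'in_range' in constraint:
            lo, hi = constraint['in_range']
            if not lo <= value <= hi:
                raise ValueError('%s=%s outside [%s, %s]'
                                 % (prop, value, lo, hi))

check_in_range('vlan_id', 100)  # passes
check_in_range('ip_dscp', 70)   # raises ValueError (range is [0, 63])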
687
apmec/catalogs/tosca/utils.py
Normal file
@ -0,0 +1,687 @@
# Copyright 2016 - Nokia
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import collections
import os
import re
import sys
import yaml

from oslo_log import log as logging
from toscaparser import properties
from toscaparser.utils import yamlparser

from apmec.common import log
from apmec.common import utils
from apmec.extensions import mem

from collections import OrderedDict

FAILURE = 'tosca.policies.apmec.Failure'
LOG = logging.getLogger(__name__)
MONITORING = 'tosca.policies.Monitoring'
SCALING = 'tosca.policies.Scaling'
PLACEMENT = 'tosca.policies.apmec.Placement'
APMECCP = 'tosca.nodes.mec.CP.Apmec'
APMECVDU = 'tosca.nodes.mec.VDU.Apmec'
BLOCKSTORAGE = 'tosca.nodes.BlockStorage.Apmec'
BLOCKSTORAGE_ATTACHMENT = 'tosca.nodes.BlockStorageAttachment'
TOSCA_BINDS_TO = 'tosca.relationships.network.BindsTo'
VDU = 'tosca.nodes.mec.VDU'
IMAGE = 'tosca.artifacts.Deployment.Image.VM'
HEAT_SOFTWARE_CONFIG = 'OS::Heat::SoftwareConfig'
OS_RESOURCES = {
    'flavor': 'get_flavor_dict',
    'image': 'get_image_dict'
}

FLAVOR_PROPS = {
    "num_cpus": ("vcpus", 1, None),
    "disk_size": ("disk", 1, "GB"),
    "mem_size": ("ram", 512, "MB")
}

CPU_PROP_MAP = (('hw:cpu_policy', 'cpu_affinity'),
                ('hw:cpu_threads_policy', 'thread_allocation'),
                ('hw:cpu_sockets', 'socket_count'),
                ('hw:cpu_threads', 'thread_count'),
                ('hw:cpu_cores', 'core_count'))

CPU_PROP_KEY_SET = {'cpu_affinity', 'thread_allocation', 'socket_count',
                    'thread_count', 'core_count'}

FLAVOR_EXTRA_SPECS_LIST = ('cpu_allocation',
                           'mem_page_size',
                           'numa_node_count',
                           'numa_nodes')

delpropmap = {APMECVDU: ('mgmt_driver', 'config', 'service_type',
                         'placement_policy', 'monitoring_policy',
                         'metadata', 'failure_policy'),
              APMECCP: ('management',)}

convert_prop = {APMECCP: {'anti_spoofing_protection':
                          'port_security_enabled',
                          'type':
                          'binding:vnic_type'}}

convert_prop_values = {APMECCP: {'type': {'sriov': 'direct',
                                          'vnic': 'normal'}}}

deletenodes = (MONITORING, FAILURE, PLACEMENT)

HEAT_RESOURCE_MAP = {
    "flavor": "OS::Nova::Flavor",
    "image": "OS::Glance::Image"
}

SCALE_GROUP_RESOURCE = "OS::Heat::AutoScalingGroup"
SCALE_POLICY_RESOURCE = "OS::Heat::ScalingPolicy"


@log.log
def updateimports(template):
    path = os.path.dirname(os.path.abspath(__file__)) + '/lib/'
    defsfile = path + 'apmec_defs.yaml'

    if 'imports' in template:
        template['imports'].append(defsfile)
    else:
        template['imports'] = [defsfile]

    if 'mec' in template['tosca_definitions_version']:
        mecfile = path + 'apmec_mec_defs.yaml'

        template['imports'].append(mecfile)

    LOG.debug(path)


@log.log
def check_for_substitution_mappings(template, params):
    sm_dict = params.get('substitution_mappings', {})
    requirements = sm_dict.get('requirements')
    node_tpl = template['topology_template']['node_templates']
    req_dict_tpl = template['topology_template']['substitution_mappings'].get(
        'requirements')
    # Check if substitution_mappings and requirements are empty in params but
    # not in template. If so, raise an exception
    if (not sm_dict or not requirements) and req_dict_tpl:
        raise mem.InvalidParamsForSM()
    # Check if requirements are present for SM in the template; if not, return
    elif (not sm_dict or not requirements) and not req_dict_tpl:
        return
    del params['substitution_mappings']
    for req_name, req_val in req_dict_tpl.items():
        if req_name not in requirements:
            raise mem.SMRequirementMissing(requirement=req_name)
        if not isinstance(req_val, list):
            raise mem.InvalidSubstitutionMapping(requirement=req_name)
        try:
            node_name = req_val[0]
            node_req = req_val[1]

            node_tpl[node_name]['requirements'].append({
                node_req: {
                    'node': requirements[req_name]
                }
            })
            node_tpl[requirements[req_name]] = \
                sm_dict[requirements[req_name]]
        except Exception:
            raise mem.InvalidSubstitutionMapping(requirement=req_name)


@log.log
def get_vdu_monitoring(template):
    monitoring_dict = dict()
    policy_dict = dict()
    policy_dict['vdus'] = collections.OrderedDict()
    for nt in template.nodetemplates:
        if nt.type_definition.is_derived_from(APMECVDU):
            mon_policy = nt.get_property_value('monitoring_policy') or 'noop'
            if mon_policy != 'noop':
                if 'parameters' in mon_policy:
                    mon_policy['monitoring_params'] = mon_policy['parameters']
                policy_dict['vdus'][nt.name] = {}
                policy_dict['vdus'][nt.name][mon_policy['name']] = mon_policy
    if policy_dict.get('vdus'):
        monitoring_dict = policy_dict
    return monitoring_dict


@log.log
def get_vdu_metadata(template):
    metadata = dict()
    metadata.setdefault('vdus', {})
    for nt in template.nodetemplates:
        if nt.type_definition.is_derived_from(APMECVDU):
            metadata_dict = nt.get_property_value('metadata') or None
            if metadata_dict:
                metadata['vdus'][nt.name] = {}
                metadata['vdus'][nt.name].update(metadata_dict)
    return metadata


@log.log
def pre_process_alarm_resources(mea, template, vdu_metadata):
    alarm_resources = dict()
    matching_metadata = dict()
    alarm_actions = dict()
    for policy in template.policies:
        if policy.type_definition.is_derived_from(MONITORING):
            matching_metadata =\
                _process_matching_metadata(vdu_metadata, policy)
            alarm_actions = _process_alarm_actions(mea, policy)
    alarm_resources['matching_metadata'] = matching_metadata
    alarm_resources['alarm_actions'] = alarm_actions
    return alarm_resources


def _process_matching_metadata(metadata, policy):
    matching_mtdata = dict()
    triggers = policy.entity_tpl['triggers']
    for trigger_name, trigger_dict in triggers.items():
        if not (trigger_dict.get('metadata') and metadata):
            raise mem.MetadataNotMatched()
        is_matched = False
        for vdu_name, metadata_dict in metadata['vdus'].items():
            if trigger_dict['metadata'] ==\
                    metadata_dict['metering.mea']:
                is_matched = True
        if not is_matched:
            raise mem.MetadataNotMatched()
        matching_mtdata[trigger_name] = dict()
        matching_mtdata[trigger_name]['metadata.user_metadata.mea'] =\
            trigger_dict['metadata']
    return matching_mtdata


def _process_alarm_actions(mea, policy):
    # process alarm url here
    triggers = policy.entity_tpl['triggers']
    alarm_actions = dict()
    for trigger_name, trigger_dict in triggers.items():
        alarm_url = mea['attributes'].get(trigger_name)
        if alarm_url:
            alarm_url = str(alarm_url)
            LOG.debug('Alarm url in heat %s', alarm_url)
            alarm_actions[trigger_name] = dict()
            alarm_actions[trigger_name]['alarm_actions'] = [alarm_url]
    return alarm_actions


def get_volumes(template):
    volume_dict = dict()
    node_tpl = template['topology_template']['node_templates']
    for node_name in list(node_tpl.keys()):
        node_value = node_tpl[node_name]
        if node_value['type'] != BLOCKSTORAGE:
            continue
        volume_dict[node_name] = dict()
        block_properties = node_value.get('properties', {})
        for prop_name, prop_value in block_properties.items():
            if prop_name == 'size':
                prop_value = \
                    re.compile(r'(\d+)\s*(\w+)').match(prop_value).groups()[0]
            volume_dict[node_name][prop_name] = prop_value
        del node_tpl[node_name]
    return volume_dict
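For illustration, get_volumes applied to a toy template (node and property values invented) pops the BlockStorage node out of the template and keeps only the numeric part of its size:

tpl = {'topology_template': {'node_templates': {
    'VB1': {'type': 'tosca.nodes.BlockStorage.Apmec',
            'properties': {'size': '1 GB', 'image': 'cirros'}}}}}
get_volumes(tpl)
# -> {'VB1': {'size': '1', 'image': 'cirros'}}
# 'VB1' has also been deleted from tpl's node_templates.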
|
||||||
|
|
||||||
|
|
||||||
|
@log.log
|
||||||
|
def get_vol_attachments(template):
|
||||||
|
vol_attach_dict = dict()
|
||||||
|
node_tpl = template['topology_template']['node_templates']
|
||||||
|
valid_properties = {
|
||||||
|
'location': 'mountpoint'
|
||||||
|
}
|
||||||
|
for node_name in list(node_tpl.keys()):
|
||||||
|
node_value = node_tpl[node_name]
|
||||||
|
if node_value['type'] != BLOCKSTORAGE_ATTACHMENT:
|
||||||
|
continue
|
||||||
|
vol_attach_dict[node_name] = dict()
|
||||||
|
vol_attach_properties = node_value.get('properties', {})
|
||||||
|
# parse properties
|
||||||
|
for prop_name, prop_value in vol_attach_properties.items():
|
||||||
|
if prop_name in valid_properties:
|
||||||
|
vol_attach_dict[node_name][valid_properties[prop_name]] = \
|
||||||
|
prop_value
|
||||||
|
# parse requirements to get mapping of cinder volume <-> Nova instance
|
||||||
|
for req in node_value.get('requirements', {}):
|
||||||
|
if 'virtualBinding' in req:
|
||||||
|
vol_attach_dict[node_name]['instance_uuid'] = \
|
||||||
|
{'get_resource': req['virtualBinding']['node']}
|
||||||
|
elif 'virtualAttachment' in req:
|
||||||
|
vol_attach_dict[node_name]['volume_id'] = \
|
||||||
|
{'get_resource': req['virtualAttachment']['node']}
|
||||||
|
del node_tpl[node_name]
|
||||||
|
return vol_attach_dict
|
||||||
|
|
||||||
|
|
||||||
|
@log.log
|
||||||
|
def get_block_storage_details(template):
|
||||||
|
block_storage_details = dict()
|
||||||
|
block_storage_details['volumes'] = get_volumes(template)
|
||||||
|
block_storage_details['volume_attachments'] = get_vol_attachments(template)
|
||||||
|
return block_storage_details
|
||||||
|
|
||||||
|
|
||||||
|
@log.log
|
||||||
|
def get_mgmt_ports(tosca):
|
||||||
|
mgmt_ports = {}
|
||||||
|
for nt in tosca.nodetemplates:
|
||||||
|
if nt.type_definition.is_derived_from(APMECCP):
|
||||||
|
mgmt = nt.get_property_value('management') or None
|
||||||
|
if mgmt:
|
||||||
|
vdu = None
|
||||||
|
for rel, node in nt.relationships.items():
|
||||||
|
if rel.is_derived_from(TOSCA_BINDS_TO):
|
||||||
|
vdu = node.name
|
||||||
|
break
|
||||||
|
|
||||||
|
if vdu is not None:
|
||||||
|
name = 'mgmt_ip-%s' % vdu
|
||||||
|
mgmt_ports[name] = nt.name
|
||||||
|
LOG.debug('mgmt_ports: %s', mgmt_ports)
|
||||||
|
return mgmt_ports


@log.log
def add_resources_tpl(heat_dict, hot_res_tpl):
    for res, res_dict in hot_res_tpl.items():
        for vdu, vdu_dict in res_dict.items():
            res_name = vdu + "_" + res
            heat_dict["resources"][res_name] = {
                "type": HEAT_RESOURCE_MAP[res],
                "properties": {}
            }

            for prop, val in vdu_dict.items():
                heat_dict["resources"][res_name]["properties"][prop] = val
            if heat_dict["resources"].get(vdu):
                heat_dict["resources"][vdu]["properties"][res] = {
                    "get_resource": res_name
                }


@log.log
def convert_unsupported_res_prop(heat_dict, unsupported_res_prop):
    res_dict = heat_dict['resources']

    for res, attr in res_dict.items():
        res_type = attr['type']
        if res_type in unsupported_res_prop:
            prop_dict = attr['properties']
            unsupported_prop_dict = unsupported_res_prop[res_type]
            unsupported_prop = set(prop_dict.keys()) & set(
                unsupported_prop_dict.keys())
            for prop in unsupported_prop:
                # some properties are just punted to 'value_specs'
                # property if they are incompatible
                new_prop = unsupported_prop_dict[prop]
                if new_prop == 'value_specs':
                    prop_dict.setdefault(new_prop, {})[
                        prop] = prop_dict.pop(prop)
                else:
                    prop_dict[new_prop] = prop_dict.pop(prop)


@log.log
def represent_odict(dump, tag, mapping, flow_style=None):
    value = []
    node = yaml.MappingNode(tag, value, flow_style=flow_style)
    if dump.alias_key is not None:
        dump.represented_objects[dump.alias_key] = node
    best_style = True
    if hasattr(mapping, 'items'):
        mapping = mapping.items()
    for item_key, item_value in mapping:
        node_key = dump.represent_data(item_key)
        node_value = dump.represent_data(item_value)
        if not (isinstance(node_key, yaml.ScalarNode) and not node_key.style):
            best_style = False
        if not (isinstance(node_value, yaml.ScalarNode)
                and not node_value.style):
            best_style = False
        value.append((node_key, node_value))
    if flow_style is None:
        if dump.default_flow_style is not None:
            node.flow_style = dump.default_flow_style
        else:
            node.flow_style = best_style
    return node
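# For reference, this representer is registered like this later in the
# module, so yaml.safe_dump() keeps OrderedDict insertion order:
#
#     yaml.SafeDumper.add_representer(
#         OrderedDict,
#         lambda dumper, value: represent_odict(
#             dumper, u'tag:yaml.org,2002:map', value))
#     yaml.safe_dump(OrderedDict([('b', 1), ('a', 2)]))
#     # keys stay in insertion order ('b' before 'a') instead of being sorted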


@log.log
def post_process_heat_template(heat_tpl, mgmt_ports, metadata,
                               alarm_resources, res_tpl,
                               vol_res={}, unsupported_res_prop=None):
    #
    # TODO(bobh) - remove when heat-translator can support literal strings.
    #
    def fix_user_data(user_data_string):
        user_data_string = re.sub('user_data: #', 'user_data: |\n #',
                                  user_data_string, flags=re.MULTILINE)
        return re.sub('\n\n', '\n', user_data_string, flags=re.MULTILINE)

    heat_tpl = fix_user_data(heat_tpl)
    #
    # End temporary workaround for heat-translator
    #
    heat_dict = yamlparser.simple_ordered_parse(heat_tpl)
    for outputname, portname in mgmt_ports.items():
        ipval = {'get_attr': [portname, 'fixed_ips', 0, 'ip_address']}
        output = {outputname: {'value': ipval}}
        if 'outputs' in heat_dict:
            heat_dict['outputs'].update(output)
        else:
            heat_dict['outputs'] = output
        LOG.debug('Added output for %s', outputname)
    if metadata:
        for vdu_name, metadata_dict in metadata['vdus'].items():
            if heat_dict['resources'].get(vdu_name):
                heat_dict['resources'][vdu_name]['properties']['metadata'] =\
                    metadata_dict
    matching_metadata = alarm_resources.get('matching_metadata')
    alarm_actions = alarm_resources.get('alarm_actions')
    if matching_metadata:
        for trigger_name, matching_metadata_dict in matching_metadata.items():
            if heat_dict['resources'].get(trigger_name):
                matching_mtdata = dict()
                matching_mtdata['matching_metadata'] =\
                    matching_metadata[trigger_name]
                heat_dict['resources'][trigger_name]['properties'].\
                    update(matching_mtdata)
    if alarm_actions:
        for trigger_name, alarm_actions_dict in alarm_actions.items():
            if heat_dict['resources'].get(trigger_name):
                heat_dict['resources'][trigger_name]['properties']. \
                    update(alarm_actions_dict)

    add_resources_tpl(heat_dict, res_tpl)
    for res in heat_dict["resources"].values():
        if res['type'] != HEAT_SOFTWARE_CONFIG:
            continue
        config = res["properties"]["config"]
        if 'get_file' in config:
            with open(config["get_file"]) as f:
                res["properties"]["config"] = f.read()

    if vol_res.get('volumes'):
        add_volume_resources(heat_dict, vol_res)
    if unsupported_res_prop:
        convert_unsupported_res_prop(heat_dict, unsupported_res_prop)

    yaml.SafeDumper.add_representer(
        OrderedDict,
        lambda dumper, value: represent_odict(
            dumper, u'tag:yaml.org,2002:map', value))

    return yaml.safe_dump(heat_dict)


@log.log
def add_volume_resources(heat_dict, vol_res):
    # Add cinder volumes
    for res_name, cinder_vol in vol_res['volumes'].items():
        heat_dict['resources'][res_name] = {
            'type': 'OS::Cinder::Volume',
            'properties': {}
        }
        for prop_name, prop_val in cinder_vol.items():
            heat_dict['resources'][res_name]['properties'][prop_name] = \
                prop_val
    # Add cinder volume attachments
    for res_name, cinder_vol in vol_res['volume_attachments'].items():
        heat_dict['resources'][res_name] = {
            'type': 'OS::Cinder::VolumeAttachment',
            'properties': {}
        }
        for prop_name, prop_val in cinder_vol.items():
            heat_dict['resources'][res_name]['properties'][prop_name] = \
                prop_val


@log.log
def post_process_template(template):
    # iterate over a copy: monitoring/failure/placement nodes are removed
    # from the list inside the loop
    for nt in list(template.nodetemplates):
        if (nt.type_definition.is_derived_from(MONITORING) or
            nt.type_definition.is_derived_from(FAILURE) or
                nt.type_definition.is_derived_from(PLACEMENT)):
            template.nodetemplates.remove(nt)
            continue

        if nt.type in delpropmap.keys():
            for prop in delpropmap[nt.type]:
                for p in nt.get_properties_objects():
                    if prop == p.name:
                        nt.get_properties_objects().remove(p)

        # change the property value first before the property key
        if nt.type in convert_prop_values:
            for prop in convert_prop_values[nt.type].keys():
                for p in nt.get_properties_objects():
                    if (prop == p.name and
                            p.value in
                            convert_prop_values[nt.type][prop].keys()):
                        v = convert_prop_values[nt.type][prop][p.value]
                        p.value = v

        if nt.type in convert_prop:
            for prop in convert_prop[nt.type].keys():
                for p in nt.get_properties_objects():
                    if prop == p.name:
                        schema_dict = {'type': p.type}
                        v = nt.get_property_value(p.name)
                        newprop = properties.Property(
                            convert_prop[nt.type][prop], v, schema_dict)
                        nt.get_properties_objects().append(newprop)
                        nt.get_properties_objects().remove(p)


@log.log
def get_mgmt_driver(template):
    mgmt_driver = None
    for nt in template.nodetemplates:
        if nt.type_definition.is_derived_from(APMECVDU):
            if (mgmt_driver and nt.get_property_value('mgmt_driver') !=
                    mgmt_driver):
                raise mem.MultipleMGMTDriversSpecified()
            else:
                mgmt_driver = nt.get_property_value('mgmt_driver')

    return mgmt_driver


def findvdus(template):
    vdus = []
    for nt in template.nodetemplates:
        if nt.type_definition.is_derived_from(APMECVDU):
            vdus.append(nt)
    return vdus


def get_flavor_dict(template, flavor_extra_input=None):
    flavor_dict = {}
    vdus = findvdus(template)
    for nt in vdus:
        flavor_tmp = nt.get_properties().get('flavor')
        if flavor_tmp:
            continue
        if nt.get_capabilities().get("mec_compute"):
            flavor_dict[nt.name] = {}
            properties = nt.get_capabilities()["mec_compute"].get_properties()
            for prop, (hot_prop, default, unit) in FLAVOR_PROPS.items():
                hot_prop_val = (properties[prop].value
                                if properties.get(prop, None) else None)
                if unit and hot_prop_val:
                    hot_prop_val = \
                        utils.change_memory_unit(hot_prop_val, unit)
                flavor_dict[nt.name][hot_prop] = \
                    hot_prop_val if hot_prop_val else default
            if any(p in properties for p in FLAVOR_EXTRA_SPECS_LIST):
                flavor_dict[nt.name]['extra_specs'] = {}
                es_dict = flavor_dict[nt.name]['extra_specs']
                populate_flavor_extra_specs(es_dict, properties,
                                            flavor_extra_input)
    return flavor_dict


def populate_flavor_extra_specs(es_dict, properties, flavor_extra_input):
    if 'mem_page_size' in properties:
        mval = properties['mem_page_size'].value
        if str(mval).isdigit():
            mval = mval * 1024
        elif mval not in ('small', 'large', 'any'):
            raise mem.HugePageSizeInvalidInput(
                error_msg_details=(mval + ":Invalid Input"))
        es_dict['hw:mem_page_size'] = mval
    if 'numa_nodes' in properties and 'numa_node_count' in properties:
        LOG.warning('Both numa_nodes and numa_node_count have been '
                    'specified; numa_node definitions will be ignored and '
                    'numa_node_count will be applied')
    if 'numa_node_count' in properties:
        es_dict['hw:numa_nodes'] = \
            properties['numa_node_count'].value
    if 'numa_nodes' in properties and 'numa_node_count' not in properties:
        nodes_dict = dict(properties['numa_nodes'].value)
        dval = list(nodes_dict.values())
        ncount = 0
        for ndict in dval:
            invalid_input = set(ndict.keys()) - {'id', 'vcpus', 'mem_size'}
            if invalid_input:
                raise mem.NumaNodesInvalidKeys(
                    error_msg_details=(', '.join(invalid_input)),
                    valid_keys="id, vcpus and mem_size")
            if 'id' in ndict and 'vcpus' in ndict:
                vk = "hw:numa_cpus." + str(ndict['id'])
                vval = ",".join([str(x) for x in ndict['vcpus']])
                es_dict[vk] = vval
            if 'id' in ndict and 'mem_size' in ndict:
                mk = "hw:numa_mem." + str(ndict['id'])
                es_dict[mk] = ndict['mem_size']
            ncount += 1
        es_dict['hw:numa_nodes'] = ncount
    if 'cpu_allocation' in properties:
        cpu_dict = dict(properties['cpu_allocation'].value)
        invalid_input = set(cpu_dict.keys()) - CPU_PROP_KEY_SET
        if invalid_input:
            raise mem.CpuAllocationInvalidKeys(
                error_msg_details=(', '.join(invalid_input)),
                valid_keys=(', '.join(CPU_PROP_KEY_SET)))
        for (k, v) in CPU_PROP_MAP:
            if v in cpu_dict:
                es_dict[k] = cpu_dict[v]
    if flavor_extra_input:
        es_dict.update(flavor_extra_input)
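# Illustration of the NUMA mapping above (hypothetical values): a TOSCA
# numa_nodes property of
#
#     {'node0': {'id': 0, 'vcpus': [0, 1], 'mem_size': 1024}}
#
# is flattened into Nova extra_specs:
#
#     {'hw:numa_cpus.0': '0,1', 'hw:numa_mem.0': 1024, 'hw:numa_nodes': 1}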


def get_image_dict(template):
    image_dict = {}
    vdus = findvdus(template)
    for vdu in vdus:
        if not vdu.entity_tpl.get("artifacts"):
            continue
        artifacts = vdu.entity_tpl["artifacts"]
        for name, artifact in artifacts.items():
            if ('type' in artifact.keys() and
                    artifact["type"] == IMAGE):
                if 'file' not in artifact.keys():
                    raise mem.FilePathMissing()
                image_dict[vdu.name] = {
                    "location": artifact["file"],
                    "container_format": "bare",
                    "disk_format": "raw",
                    "name": name
                }
    return image_dict


def get_resources_dict(template, flavor_extra_input=None):
    res_dict = dict()
    for res, method in OS_RESOURCES.items():
        res_method = getattr(sys.modules[__name__], method)
        if res == 'flavor':
            res_dict[res] = res_method(template, flavor_extra_input)
        else:
            res_dict[res] = res_method(template)
    return res_dict


@log.log
def get_scaling_policy(template):
    scaling_policy_names = list()
    for policy in template.policies:
        if policy.type_definition.is_derived_from(SCALING):
            scaling_policy_names.append(policy.name)
    return scaling_policy_names


@log.log
def get_scaling_group_dict(ht_template, scaling_policy_names):
    scaling_group_dict = dict()
    scaling_group_names = list()
    heat_dict = yamlparser.simple_ordered_parse(ht_template)
    for resource_name, resource_dict in heat_dict['resources'].items():
        if resource_dict['type'] == SCALE_GROUP_RESOURCE:
            scaling_group_names.append(resource_name)
    if scaling_group_names:
        scaling_group_dict[scaling_policy_names[0]] = scaling_group_names[0]
    return scaling_group_dict
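# Illustration (hypothetical names, assuming SCALE_GROUP_RESOURCE is the
# Heat autoscaling group type): for a translated template containing
#
#     resources:
#       SP1_group:
#         type: OS::Heat::AutoScalingGroup
#
# get_scaling_group_dict(ht_template, ['SP1']) returns {'SP1': 'SP1_group'}.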


def get_nested_resources_name(template):
    for policy in template.policies:
        if policy.type_definition.is_derived_from(SCALING):
            nested_resource_name = policy.name + '_res.yaml'
            return nested_resource_name


def update_nested_scaling_resources(nested_resources, mgmt_ports, metadata,
                                    res_tpl, unsupported_res_prop=None):
    nested_tpl = dict()
    if nested_resources:
        nested_resource_name, nested_resources_yaml =\
            list(nested_resources.items())[0]
        nested_resources_dict =\
            yamlparser.simple_ordered_parse(nested_resources_yaml)
        if metadata:
            for vdu_name, metadata_dict in metadata['vdus'].items():
                nested_resources_dict['resources'][vdu_name][
                    'properties']['metadata'] = metadata_dict
        add_resources_tpl(nested_resources_dict, res_tpl)
        for res in nested_resources_dict["resources"].values():
            if res['type'] != HEAT_SOFTWARE_CONFIG:
                continue
            config = res["properties"]["config"]
            if 'get_file' in config:
                with open(config["get_file"]) as f:
                    res["properties"]["config"] = f.read()

        if unsupported_res_prop:
            convert_unsupported_res_prop(nested_resources_dict,
                                         unsupported_res_prop)

        for outputname, portname in mgmt_ports.items():
            ipval = {'get_attr': [portname, 'fixed_ips', 0, 'ip_address']}
            output = {outputname: {'value': ipval}}
            if 'outputs' in nested_resources_dict:
                nested_resources_dict['outputs'].update(output)
            else:
                nested_resources_dict['outputs'] = output
            LOG.debug('Added output for %s', outputname)
        yaml.SafeDumper.add_representer(
            OrderedDict, lambda dumper, value: represent_odict(
                dumper, u'tag:yaml.org,2002:map', value))
        nested_tpl[nested_resource_name] =\
            yaml.safe_dump(nested_resources_dict)
    return nested_tpl
28
apmec/cmd/__init__.py
Normal file
@ -0,0 +1,28 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import logging as sys_logging

from oslo_reports import guru_meditation_report as gmr

from apmec import version

# During the call to gmr.TextGuruMeditation.setup_autorun(), Guru Meditation
# Report tries to start logging. Set a handler here to accommodate this.
logger = sys_logging.getLogger(None)
if not logger.handlers:
    logger.addHandler(sys_logging.StreamHandler())

_version_string = version.version_info.release_string()
gmr.TextGuruMeditation.setup_autorun(version=_version_string)
17
apmec/cmd/eventlet/__init__.py
Normal file
@ -0,0 +1,17 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from apmec.common import eventlet_utils

eventlet_utils.monkey_patch()
52
apmec/cmd/eventlet/apmec_server.py
Normal file
@ -0,0 +1,52 @@
#!/usr/bin/env python

# Copyright 2011 VMware, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

# If ../apmec/__init__.py exists, add ../ to Python search path, so that
# it will override what happens to be installed in /usr/(local/)tosca_lib/python...

import sys

from oslo_config import cfg
import oslo_i18n
from oslo_service import service as common_service

from apmec import _i18n
_i18n.enable_lazy()
from apmec.common import config
from apmec import service


oslo_i18n.install("apmec")


def main():
    # the configuration will be read into the cfg.CONF global data structure
    config.init(sys.argv[1:])
    if not cfg.CONF.config_file:
        sys.exit(_("ERROR: Unable to find configuration file via the default"
                   " search paths (~/.apmec/, ~/, /etc/apmec/, /etc/) and"
                   " the '--config-file' option!"))

    try:
        apmec_api = service.serve_wsgi(service.ApmecApiService)
        launcher = common_service.launch(cfg.CONF, apmec_api,
                                         workers=cfg.CONF.api_workers or None)
        launcher.wait()
    except KeyboardInterrupt:
        pass
    except RuntimeError as e:
        sys.exit(_("ERROR: %s") % e)
17
apmec/cmd/eventlet/conductor.py
Normal file
@ -0,0 +1,17 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from apmec.conductor import conductor_server


def main():
    conductor_server.main()
0
apmec/common/__init__.py
Normal file
53
apmec/common/clients.py
Normal file
@ -0,0 +1,53 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from heatclient import client as heatclient
from apmec.mem import keystone


class OpenstackClients(object):

    def __init__(self, auth_attr, region_name=None):
        super(OpenstackClients, self).__init__()
        self.keystone_plugin = keystone.Keystone()
        self.heat_client = None
        self.mistral_client = None
        self.keystone_client = None
        self.region_name = region_name
        self.auth_attr = auth_attr

    def _keystone_client(self):
        version = self.auth_attr['auth_url'].rpartition('/')[2]
        return self.keystone_plugin.initialize_client(version,
                                                      **self.auth_attr)

    def _heat_client(self):
        endpoint = self.keystone_session.get_endpoint(
            service_type='orchestration', region_name=self.region_name)
        return heatclient.Client('1', endpoint=endpoint,
                                 session=self.keystone_session)

    @property
    def keystone_session(self):
        return self.keystone.session

    @property
    def keystone(self):
        if not self.keystone_client:
            self.keystone_client = self._keystone_client()
        return self.keystone_client

    @property
    def heat(self):
        if not self.heat_client:
            self.heat_client = self._heat_client()
        return self.heat_client
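

# Hypothetical usage sketch (credentials are made up, not part of this
# commit). Clients are created lazily: no session is opened until a
# property is first accessed.
if __name__ == '__main__':
    auth = {
        'auth_url': 'http://controller:5000/v3',
        'username': 'apmec',
        'password': 'secret',
        'project_name': 'service',
    }
    clients = OpenstackClients(auth, region_name='RegionOne')
    heat = clients.heat               # builds keystone session, then heat client
    for stack in heat.stacks.list():  # standard python-heatclient call
        print(stack.stack_name)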
106
apmec/common/cmd_executer.py
Normal file
@ -0,0 +1,106 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_log import log as logging
import paramiko

from apmec.common import exceptions

LOG = logging.getLogger(__name__)


class CommandResult(object):
    """Result class contains command, stdout, stderr and return code."""

    def __init__(self, cmd, stdout, stderr, return_code):
        self.__cmd = cmd
        self.__stdout = stdout
        self.__stderr = stderr
        self.__return_code = return_code

    def get_command(self):
        return self.__cmd

    def get_stdout(self):
        return self.__stdout

    def get_stderr(self):
        return self.__stderr

    def get_return_code(self):
        return self.__return_code

    def __str__(self):
        return "cmd: %s, stdout: %s, stderr: %s, return code: %s" \
            % (self.__cmd, self.__stdout, self.__stderr, self.__return_code)

    def __repr__(self):
        return "cmd: %s, stdout: %s, stderr: %s, return code: %s" \
            % (self.__cmd, self.__stdout, self.__stderr, self.__return_code)


class RemoteCommandExecutor(object):
    """Class to execute a command on a remote location"""

    def __init__(self, user, password, host, timeout=10):
        self.__user = user
        self.__password = password
        self.__host = host
        self.__paramiko_conn = None
        self.__ssh = None
        self.__timeout = timeout
        self.__connect()

    def __connect(self):
        try:
            self.__ssh = paramiko.SSHClient()
            self.__ssh.set_missing_host_key_policy(paramiko.WarningPolicy())
            self.__ssh.connect(self.__host, username=self.__user,
                               password=self.__password,
                               timeout=self.__timeout)
            LOG.info("Connected to %s", self.__host)
        except paramiko.AuthenticationException:
            LOG.error("Authentication failed when connecting to %s",
                      self.__host)
            raise exceptions.NotAuthorized
        except paramiko.SSHException:
            LOG.error("Could not connect to %s. Giving up", self.__host)
            raise

    def close_session(self):
        self.__ssh.close()
        LOG.debug("Connection closed")

    def execute_command(self, cmd, input_data=None):
        try:
            stdin, stdout, stderr = self.__ssh.exec_command(cmd)
            if input_data:
                stdin.write(input_data)
                LOG.debug("Input data written successfully")
                stdin.flush()
                LOG.debug("Input data flushed")
                stdin.channel.shutdown_write()

            # NOTE (dkushwaha): There might be a case, when server can take
            # too long time to write data in stdout buffer or sometimes hang
            # itself, in that case readlines() will stuck for long/infinite
            # time. To handle such cases, timeout logic should be introduced
            # here.
            cmd_out = stdout.readlines()
            cmd_err = stderr.readlines()
            return_code = stdout.channel.recv_exit_status()
        except paramiko.SSHException:
            LOG.error("Command execution failed at %s. Giving up", self.__host)
            raise
        result = CommandResult(cmd, cmd_out, cmd_err, return_code)
        LOG.debug("Remote command execution result: %s", result)
        return result

    def __del__(self):
        self.close_session()
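

# Hypothetical usage sketch (host and credentials are made up, not part of
# this commit):
if __name__ == '__main__':
    executor = RemoteCommandExecutor(user='mec', password='secret',
                                     host='192.0.2.10')
    result = executor.execute_command('uptime')
    if result.get_return_code() == 0:
        print(result.get_stdout())
    executor.close_session()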
141
apmec/common/config.py
Normal file
@ -0,0 +1,141 @@
# Copyright 2011 VMware, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Routines for configuring Apmec
"""

import os

from oslo_config import cfg
from oslo_db import options as db_options
from oslo_log import log as logging
import oslo_messaging
from paste import deploy

from apmec.common import utils
from apmec import version


LOG = logging.getLogger(__name__)

core_opts = [
    cfg.HostAddressOpt('bind_host', default='0.0.0.0',
                       help=_("The host IP to bind to")),
    cfg.IntOpt('bind_port', default=9896,
               help=_("The port to bind to")),
    cfg.StrOpt('api_paste_config', default="api-paste.ini",
               help=_("The API paste config file to use")),
    cfg.StrOpt('api_extensions_path', default="",
               help=_("The path for API extensions")),
    cfg.ListOpt('service_plugins', default=['meo', 'mem'],
                help=_("The service plugins Apmec will use")),
    cfg.StrOpt('policy_file', default="policy.json",
               help=_("The policy file to use")),
    cfg.StrOpt('auth_strategy', default='keystone',
               help=_("The type of authentication to use")),
    cfg.BoolOpt('allow_bulk', default=True,
                help=_("Allow the usage of the bulk API")),
    cfg.BoolOpt('allow_pagination', default=False,
                help=_("Allow the usage of pagination")),
    cfg.BoolOpt('allow_sorting', default=False,
                help=_("Allow the usage of sorting")),
    cfg.StrOpt('pagination_max_limit', default="-1",
               help=_("The maximum number of items returned "
                      "in a single response; a value of 'infinite' "
                      "or a negative integer means no limit")),
    cfg.HostAddressOpt('host', default=utils.get_hostname(),
                       help=_("The hostname Apmec is running on")),
]

core_cli_opts = [
    cfg.StrOpt('state_path',
               default='/var/lib/apmec',
               help=_("Where to store Apmec state files. "
                      "This directory must be writable by "
                      "the agent.")),
]

logging.register_options(cfg.CONF)
# Register the configuration options
cfg.CONF.register_opts(core_opts)
cfg.CONF.register_cli_opts(core_cli_opts)


def config_opts():
    return [(None, core_opts), (None, core_cli_opts)]


# Ensure that the control exchange is set correctly
oslo_messaging.set_transport_defaults(control_exchange='apmec')


def set_db_defaults():
    # Update the default QueuePool parameters. These can be tweaked by the
    # conf variables - max_pool_size, max_overflow and pool_timeout
    db_options.set_defaults(
        cfg.CONF,
        connection='sqlite://',
        max_pool_size=10,
        max_overflow=20, pool_timeout=10)


set_db_defaults()


def init(args, **kwargs):
    cfg.CONF(args=args, project='apmec',
             version='%%prog %s' % version.version_info.release_string(),
             **kwargs)

    # FIXME(ihrachys): if import is put in global, circular import
    # failure occurs
    from apmec.common import rpc as n_rpc
    n_rpc.init(cfg.CONF)


def setup_logging(conf):
    """Sets up the logging options for a log with supplied name.

    :param conf: a cfg.ConfOpts object
    """
    product_name = "apmec"
    logging.setup(conf, product_name)
    LOG.info("Logging enabled!")


def load_paste_app(app_name):
    """Builds and returns a WSGI app from a paste config file.

    :param app_name: Name of the application to load
    :raises ConfigFilesNotFoundError: when config file cannot be located
    :raises RuntimeError: when application cannot be loaded from config file
    """

    config_path = cfg.CONF.find_file(cfg.CONF.api_paste_config)
    if not config_path:
        raise cfg.ConfigFilesNotFoundError(
            config_files=[cfg.CONF.api_paste_config])
    config_path = os.path.abspath(config_path)
    LOG.info("Config paste file: %s", config_path)

    try:
        app = deploy.loadapp("config:%s" % config_path, name=app_name)
    except (LookupError, ImportError):
        msg = (_("Unable to load %(app_name)s from "
                 "configuration file %(config_path)s.") %
               {'app_name': app_name,
                'config_path': config_path})
        LOG.exception(msg)
        raise RuntimeError(msg)
    return app
47
apmec/common/constants.py
Normal file
@ -0,0 +1,47 @@
# Copyright (c) 2012 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# TODO(salv-orlando): Verify if a single set of operational
# status constants is achievable

TYPE_BOOL = "bool"
TYPE_INT = "int"
TYPE_LONG = "long"
TYPE_FLOAT = "float"
TYPE_LIST = "list"
TYPE_DICT = "dict"

PAGINATION_INFINITE = 'infinite'

SORT_DIRECTION_ASC = 'asc'
SORT_DIRECTION_DESC = 'desc'

# attribute name for nova boot
ATTR_NAME_IMAGE = 'image'
ATTR_NAME_FLAVOR = 'flavor'
ATTR_NAME_META = 'meta'
ATTR_NAME_FILES = "files"
ATTR_NAME_RESERVEATION_ID = 'reservation_id'
ATTR_NAME_SECURITY_GROUPS = 'security_groups'
ATTR_NAME_USER_DATA = 'user_data'
ATTR_NAME_KEY_NAME = 'key_name'
ATTR_NAME_AVAILABILITY_ZONE = 'availability_zone'
ATTR_NAME_BLOCK_DEVICE_MAPPING = 'block_device_mapping'
ATTR_NAME_BLOCK_DEVICE_MAPPING_V2 = 'block_device_mapping_v2'
ATTR_NAME_NICS = 'nics'
ATTR_NAME_NIC = 'nic'
ATTR_NAME_SCHEDULER_HINTS = 'scheduler_hints'
ATTR_NAME_CONFIG_DRIVE = 'config_drive'
ATTR_NAME_DISK_CONFIG = 'disk_config'
76
apmec/common/driver_manager.py
Normal file
@ -0,0 +1,76 @@
# Copyright 2013, 2014 Intel Corporation.
# All Rights Reserved.
#
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#

from oslo_log import log as logging

import stevedore.named

LOG = logging.getLogger(__name__)


class DriverManager(object):
    def __init__(self, namespace, driver_list, **kwargs):
        super(DriverManager, self).__init__()
        manager = stevedore.named.NamedExtensionManager(
            namespace, driver_list, invoke_on_load=True, **kwargs)

        drivers = {}
        for ext in manager:
            type_ = ext.obj.get_type()
            if type_ in drivers:
                msg = _("driver '%(new_driver)s' ignored because "
                        "driver '%(old_driver)s' is already "
                        "registered for driver '%(type)s'") % {
                            'new_driver': ext.name,
                            'old_driver': drivers[type_].name,
                            'type': type_}
                LOG.error(msg)
                raise SystemExit(msg)
            drivers[type_] = ext

        self._drivers = dict((type_, ext.obj)
                             for (type_, ext) in drivers.items())
        LOG.info("Registered drivers from %(namespace)s: %(keys)s",
                 {'namespace': namespace, 'keys': self._drivers.keys()})

    @staticmethod
    def _driver_name(driver):
        return driver.__module__ + '.' + driver.__class__.__name__

    def register(self, type_, driver):
        if type_ in self._drivers:
            new_driver = self._driver_name(driver)
            old_driver = self._driver_name(self._drivers[type_])
            msg = _("can't load driver '%(new_driver)s' because "
                    "driver '%(old_driver)s' is already "
                    "registered for driver '%(type)s'") % {
                        'new_driver': new_driver,
                        'old_driver': old_driver,
                        'type': type_}
            LOG.error(msg)
            raise SystemExit(msg)
        self._drivers[type_] = driver

    def invoke(self, type_, method_name, **kwargs):
        driver = self._drivers[type_]
        return getattr(driver, method_name)(**kwargs)

    def __getitem__(self, type_):
        return self._drivers[type_]

    def __contains__(self, type_):
        return type_ in self._drivers
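

# Hypothetical usage sketch (namespace and driver name are made up, not part
# of this commit): stevedore loads the named entry points, and calls are then
# dispatched by the type string each driver reports via get_type().
if __name__ == '__main__':
    mgr = DriverManager('apmec.mem.drivers', ['noop'])
    if 'noop' in mgr:
        mgr.invoke('noop', 'get_type')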
26
apmec/common/eventlet_utils.py
Normal file
@ -0,0 +1,26 @@
# Copyright (c) 2015 Cloudbase Solutions.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os

import eventlet
from oslo_utils import importutils


def monkey_patch():
    eventlet.monkey_patch()
    if os.name != 'nt':
        p_c_e = importutils.import_module('pyroute2.config.eventlet')
        p_c_e.eventlet_config()
285
apmec/common/exceptions.py
Normal file
@ -0,0 +1,285 @@
# Copyright 2011 VMware, Inc
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Apmec base exception handling.
"""

from oslo_utils import excutils
import six

from apmec._i18n import _


class ApmecException(Exception):
    """Base Apmec Exception.

    To correctly use this class, inherit from it and define
    a 'message' property. That message will get printf'd
    with the keyword arguments provided to the constructor.
    """
    message = _("An unknown exception occurred.")

    def __init__(self, **kwargs):
        try:
            super(ApmecException, self).__init__(self.message % kwargs)
            self.msg = self.message % kwargs
        except Exception:
            with excutils.save_and_reraise_exception() as ctxt:
                if not self.use_fatal_exceptions():
                    ctxt.reraise = False
                    # at least get the core message out if something happened
                    super(ApmecException, self).__init__(self.message)

    if six.PY2:
        def __unicode__(self):
            return unicode(self.msg)

    def __str__(self):
        return self.msg

    def use_fatal_exceptions(self):
        """Is the instance using fatal exceptions.

        :returns: Always returns False.
        """
        return False


class BadRequest(ApmecException):
    message = _('Bad %(resource)s request: %(msg)s')


class NotFound(ApmecException):
    pass


class Conflict(ApmecException):
    pass


class NotAuthorized(ApmecException):
    message = _("Not authorized.")


class ServiceUnavailable(ApmecException):
    message = _("The service is unavailable")


class AdminRequired(NotAuthorized):
    message = _("User does not have admin privileges: %(reason)s")


class PolicyNotAuthorized(NotAuthorized):
    message = _("Policy doesn't allow %(action)s to be performed.")


class NetworkNotFound(NotFound):
    message = _("Network %(net_id)s could not be found")


class PolicyFileNotFound(NotFound):
    message = _("Policy configuration policy.json could not be found")


class PolicyInitError(ApmecException):
    message = _("Failed to init policy %(policy)s because %(reason)s")


class PolicyCheckError(ApmecException):
    message = _("Failed to check policy %(policy)s because %(reason)s")


class StateInvalid(BadRequest):
    message = _("Unsupported port state: %(port_state)s")


class InUse(ApmecException):
    message = _("The resource is in use")


class ResourceExhausted(ServiceUnavailable):
    pass


class MalformedRequestBody(BadRequest):
    message = _("Malformed request body: %(reason)s")


class Invalid(ApmecException):
    def __init__(self, message=None):
        self.message = message
        super(Invalid, self).__init__()


class InvalidInput(BadRequest):
    message = _("Invalid input for operation: %(error_message)s.")


class InvalidAllocationPool(BadRequest):
    message = _("The allocation pool %(pool)s is not valid.")


class OverlappingAllocationPools(Conflict):
    message = _("Found overlapping allocation pools: "
                "%(pool_1)s %(pool_2)s for subnet %(subnet_cidr)s.")


class OutOfBoundsAllocationPool(BadRequest):
    message = _("The allocation pool %(pool)s spans "
                "beyond the subnet cidr %(subnet_cidr)s.")


class MacAddressGenerationFailure(ServiceUnavailable):
    message = _("Unable to generate unique mac on network %(net_id)s.")


class IpAddressGenerationFailure(Conflict):
    message = _("No more IP addresses available on network %(net_id)s.")


class BridgeDoesNotExist(ApmecException):
    message = _("Bridge %(bridge)s does not exist.")


class PreexistingDeviceFailure(ApmecException):
    message = _("Creation failed. %(dev_name)s already exists.")


class SudoRequired(ApmecException):
    message = _("Sudo privilege is required to run this command.")


class QuotaResourceUnknown(NotFound):
    message = _("Unknown quota resources %(unknown)s.")


class OverQuota(Conflict):
    message = _("Quota exceeded for resources: %(overs)s")


class QuotaMissingTenant(BadRequest):
    message = _("Tenant-id was missing from Quota request")


class InvalidQuotaValue(Conflict):
    message = _("Change would make usage less than 0 for the following "
                "resources: %(unders)s")


class InvalidSharedSetting(Conflict):
    message = _("Unable to reconfigure sharing settings for network "
                "%(network)s. Multiple tenants are using it")


class InvalidExtensionEnv(BadRequest):
    message = _("Invalid extension environment: %(reason)s")


class ExtensionsNotFound(NotFound):
    message = _("Extensions not found: %(extensions)s")


class InvalidContentType(ApmecException):
    message = _("Invalid content type %(content_type)s")


class ExternalIpAddressExhausted(BadRequest):
    message = _("Unable to find any IP address on external "
                "network %(net_id)s.")


class TooManyExternalNetworks(ApmecException):
    message = _("More than one external network exists")


class InvalidConfigurationOption(ApmecException):
    message = _("An invalid value was provided for %(opt_name)s: "
                "%(opt_value)s")


class GatewayConflictWithAllocationPools(InUse):
    message = _("Gateway ip %(ip_address)s conflicts with "
                "allocation pool %(pool)s")


class GatewayIpInUse(InUse):
    message = _("Current gateway ip %(ip_address)s already in use "
                "by port %(port_id)s. Unable to update.")


class NetworkVlanRangeError(ApmecException):
    message = _("Invalid network VLAN range: '%(vlan_range)s' - '%(error)s'")

    def __init__(self, **kwargs):
        # Convert vlan_range tuple to 'start:end' format for display
        if isinstance(kwargs['vlan_range'], tuple):
            kwargs['vlan_range'] = "%d:%d" % kwargs['vlan_range']
        super(NetworkVlanRangeError, self).__init__(**kwargs)


class NetworkVxlanPortRangeError(ApmecException):
    message = _("Invalid network VXLAN port range: '%(vxlan_range)s'")


class VxlanNetworkUnsupported(ApmecException):
    message = _("VXLAN Network unsupported.")


class DuplicatedExtension(ApmecException):
    message = _("Found duplicate extension: %(alias)s")


class DeviceIDNotOwnedByTenant(Conflict):
    message = _("The following device_id %(device_id)s is not owned by your "
                "tenant or matches another tenant's router.")


class InvalidCIDR(BadRequest):
    message = _("Invalid CIDR %(input)s given as IP prefix")


class MgmtDriverException(ApmecException):
    message = _("MEA configuration failed")


class AlarmUrlInvalid(BadRequest):
    message = _("Invalid alarm url for MEA %(mea_id)s")


class TriggerNotFound(NotFound):
    message = _("Trigger %(trigger_name)s does not exist for MEA %(mea_id)s")


class MeaPolicyNotFound(NotFound):
    message = _("Policy %(policy)s does not exist for MEA %(mea_id)s")


class MeaPolicyActionInvalid(BadRequest):
    message = _("Invalid action %(action)s for policy %(policy)s, "
                "should be one of %(valid_actions)s")


class MeaPolicyTypeInvalid(BadRequest):
    message = _("Invalid type %(type)s for policy %(policy)s, "
                "should be one of %(valid_types)s")


class DuplicateResourceName(ApmecException):
    message = _("%(resource)s with name %(name)s already exists")


class DuplicateEntity(ApmecException):
    message = _("%(_type)s already exists with the given %(entry)s")
36
apmec/common/log.py
Normal file
@ -0,0 +1,36 @@
# Copyright (C) 2013 eNovance SAS <licensing@enovance.com>
#
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Log helper functions."""

from oslo_log import log as logging
from oslo_utils import strutils

LOG = logging.getLogger(__name__)


def log(method):
    """Decorator helping to log method calls."""
    def wrapper(*args, **kwargs):
        instance = args[0]
        data = {"class_name": (instance.__class__.__module__ + '.'
                               + instance.__class__.__name__),
                "method_name": method.__name__,
                "args": strutils.mask_password(args[1:]),
                "kwargs": strutils.mask_password(kwargs)}
        LOG.debug('%(class_name)s method %(method_name)s'
                  ' called with arguments %(args)s %(kwargs)s', data)
        return method(*args, **kwargs)
    return wrapper
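

# Hypothetical usage sketch (not part of this commit): applying @log to a
# method logs the class, the method name and password-masked arguments.
if __name__ == '__main__':
    class _Example(object):
        @log
        def create(self, name, password=None):
            return name

    _Example().create('mea1', password='secret')
    # emits roughly: DEBUG ... _Example method create called with arguments
    #                ('mea1',) {'password': '***'}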
338
apmec/common/rpc.py
Normal file
@ -0,0 +1,338 @@
# Copyright (c) 2012 OpenStack Foundation.
|
||||||
|
# All Rights Reserved.
|
||||||
|
#
|
||||||
|
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||||
|
# not use this file except in compliance with the License. You may obtain
|
||||||
|
# a copy of the License at
|
||||||
|
#
|
||||||
|
# http://www.apache.org/licenses/LICENSE-2.0
|
||||||
|
#
|
||||||
|
# Unless required by applicable law or agreed to in writing, software
|
||||||
|
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||||
|
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||||
|
# License for the specific language governing permissions and limitations
|
||||||
|
# under the License.
|
||||||
|
|
||||||
|
import collections
|
||||||
|
import random
|
||||||
|
import time
|
||||||
|
|
||||||
|
from oslo_config import cfg
|
||||||
|
from oslo_log import log as logging
|
||||||
|
import oslo_messaging
|
||||||
|
from oslo_messaging.rpc import dispatcher
|
||||||
|
from oslo_messaging import serializer as om_serializer
|
||||||
|
from oslo_service import service
|
||||||
|
from oslo_utils import excutils
|
||||||
|
|
||||||
|
from apmec.common import exceptions
|
||||||
|
from apmec import context
|
||||||
|
|
||||||
|
LOG = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
|
||||||
|
TRANSPORT = None
|
||||||
|
NOTIFICATION_TRANSPORT = None
|
||||||
|
NOTIFIER = None
|
||||||
|
|
||||||
|
ALLOWED_EXMODS = [
|
||||||
|
exceptions.__name__,
|
||||||
|
]
|
||||||
|
EXTRA_EXMODS = []
|
||||||
|
|
||||||
|
|
||||||
|
# NOTE(salv-orlando): I am afraid this is a global variable. While not ideal,
|
||||||
|
# they're however widely used throughout the code base. It should be set to
|
||||||
|
# true if the RPC server is not running in the current process space. This
|
||||||
|
# will prevent get_connection from creating connections to the AMQP server
|
||||||
|
RPC_DISABLED = False
|
||||||
|
|
||||||
|
|
||||||
|
def init_action_rpc(conf):
|
||||||
|
global TRANSPORT
|
||||||
|
TRANSPORT = oslo_messaging.get_transport(conf)
|
||||||
|
|
||||||
|
|
||||||
|
def init(conf):
|
||||||
|
global TRANSPORT, NOTIFICATION_TRANSPORT, NOTIFIER
|
||||||
|
exmods = get_allowed_exmods()
|
||||||
|
TRANSPORT = oslo_messaging.get_transport(conf,
|
||||||
|
allowed_remote_exmods=exmods)
|
||||||
|
NOTIFICATION_TRANSPORT = oslo_messaging.get_notification_transport(
|
||||||
|
conf, allowed_remote_exmods=exmods)
|
||||||
|
serializer = RequestContextSerializer()
|
||||||
|
NOTIFIER = oslo_messaging.Notifier(NOTIFICATION_TRANSPORT,
|
||||||
|
serializer=serializer)
|
||||||
|
|
||||||
|
|
||||||
|
def cleanup():
|
||||||
|
global TRANSPORT, NOTIFICATION_TRANSPORT, NOTIFIER
|
||||||
|
assert TRANSPORT is not None
|
||||||
|
assert NOTIFICATION_TRANSPORT is not None
|
||||||
|
assert NOTIFIER is not None
|
||||||
|
TRANSPORT.cleanup()
|
||||||
|
NOTIFICATION_TRANSPORT.cleanup()
|
||||||
|
_ContextWrapper.reset_timeouts()
|
||||||
|
TRANSPORT = NOTIFICATION_TRANSPORT = NOTIFIER = None
|
||||||
|
|
||||||
|
|
||||||
|
def add_extra_exmods(*args):
|
||||||
|
EXTRA_EXMODS.extend(args)
|
||||||
|
|
||||||
|
|
||||||
|
def clear_extra_exmods():
|
||||||
|
del EXTRA_EXMODS[:]
|
||||||
|
|
||||||
|
|
||||||
|
def get_allowed_exmods():
|
||||||
|
return ALLOWED_EXMODS + EXTRA_EXMODS
|
||||||
|
|
||||||
|
|
||||||
|
def _get_default_method_timeout():
|
||||||
|
return TRANSPORT.conf.rpc_response_timeout
|
||||||
|
|
||||||
|
|
||||||
|
def _get_default_method_timeouts():
|
||||||
|
return collections.defaultdict(_get_default_method_timeout)
|
||||||
|
|
||||||
|
|
||||||
|
class _ContextWrapper(object):
    """Wraps oslo messaging contexts to set the timeout for calls.

    This intercepts RPC calls and sets the timeout value to the globally
    adapting value for each method. An oslo messaging timeout results in
    a doubling of the timeout value for the method on which it timed out.
    There is currently no logic to reduce the timeout, since busy Apmec
    servers are more frequently the cause of timeouts than lost messages.
    """
    _METHOD_TIMEOUTS = _get_default_method_timeouts()
    _max_timeout = None

    @classmethod
    def reset_timeouts(cls):
        # restore the original default timeout factory
        cls._METHOD_TIMEOUTS = _get_default_method_timeouts()
        cls._max_timeout = None

    @classmethod
    def get_max_timeout(cls):
        return cls._max_timeout or _get_default_method_timeout() * 10

    @classmethod
    def set_max_timeout(cls, max_timeout):
        if max_timeout < cls.get_max_timeout():
            cls._METHOD_TIMEOUTS = collections.defaultdict(
                lambda: max_timeout, **{
                    k: min(v, max_timeout)
                    for k, v in cls._METHOD_TIMEOUTS.items()
                })
            cls._max_timeout = max_timeout

    def __init__(self, original_context):
        self._original_context = original_context

    def __getattr__(self, name):
        return getattr(self._original_context, name)

    def call(self, ctxt, method, **kwargs):
        # two methods with the same name in different namespaces should
        # be tracked independently
        if self._original_context.target.namespace:
            scoped_method = '%s.%s' % (
                self._original_context.target.namespace, method)
        else:
            scoped_method = method
        # set the timeout from the global method timeout tracker for this
        # method
        self._original_context.timeout = self._METHOD_TIMEOUTS[scoped_method]
        try:
            return self._original_context.call(ctxt, method, **kwargs)
        except oslo_messaging.MessagingTimeout:
            with excutils.save_and_reraise_exception():
                wait = random.uniform(
                    0,
                    min(self._METHOD_TIMEOUTS[scoped_method],
                        TRANSPORT.conf.rpc_response_timeout)
                )
                LOG.error("Timeout in RPC method %(method)s. Waiting for "
                          "%(wait)s seconds before next attempt. If the "
                          "server is not down, consider increasing the "
                          "rpc_response_timeout option as message "
                          "server(s) may be overloaded and unable to "
                          "respond quickly enough.",
                          {'wait': int(round(wait)), 'method': scoped_method})
                new_timeout = min(
                    self._original_context.timeout * 2, self.get_max_timeout())
                if new_timeout > self._METHOD_TIMEOUTS[scoped_method]:
                    LOG.warning("Increasing timeout for %(method)s calls "
                                "to %(new)s seconds. Restart the client to "
                                "restore it to the default value.",
                                {'method': scoped_method,
                                 'new': new_timeout})
                    self._METHOD_TIMEOUTS[scoped_method] = new_timeout
                time.sleep(wait)


class BackingOffClient(oslo_messaging.RPCClient):
    """An oslo messaging RPC client that implements a timeout backoff.

    This has all of the same interfaces as oslo_messaging.RPCClient, but
    if the timeout parameter is not specified, the _ContextWrapper returned
    will track when call timeout exceptions occur and exponentially increase
    the timeout for the given call method.
    """
    def prepare(self, *args, **kwargs):
        ctx = super(BackingOffClient, self).prepare(*args, **kwargs)
        # don't enclose Contexts that explicitly set a timeout
        return _ContextWrapper(ctx) if 'timeout' not in kwargs else ctx

    @staticmethod
    def set_max_timeout(max_timeout):
        '''Set RPC timeout ceiling for all backing-off RPC clients.'''
        _ContextWrapper.set_max_timeout(max_timeout)


def get_client(target, version_cap=None, serializer=None):
    assert TRANSPORT is not None
    serializer = RequestContextSerializer(serializer)
    return BackingOffClient(TRANSPORT,
                            target,
                            version_cap=version_cap,
                            serializer=serializer)


def get_server(target, endpoints, serializer=None):
    assert TRANSPORT is not None
    serializer = RequestContextSerializer(serializer)
    access_policy = dispatcher.DefaultRPCAccessPolicy
    return oslo_messaging.get_rpc_server(TRANSPORT, target, endpoints,
                                         'eventlet', serializer,
                                         access_policy=access_policy)
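
# NOTE: a minimal usage sketch of the backing-off client above, assuming
# init() has already been called; the topic and method names below
# ('example-topic', 'do_work') are illustrative, not part of this module:
#
#     target = oslo_messaging.Target(topic='example-topic')
#     client = get_client(target)
#     cctxt = client.prepare(version='1.0')
#     # each MessagingTimeout doubles this method's timeout, up to
#     # _ContextWrapper.get_max_timeout()
#     result = cctxt.call(context.get_admin_context(), 'do_work')
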
def get_notifier(service=None, host=None, publisher_id=None):
    assert NOTIFIER is not None
    if not publisher_id:
        publisher_id = "%s.%s" % (service, host or cfg.CONF.host)
    return NOTIFIER.prepare(publisher_id=publisher_id)


class RequestContextSerializer(om_serializer.Serializer):
    """Convert an RPC common context into an apmec Context."""

    def __init__(self, base=None):
        super(RequestContextSerializer, self).__init__()
        self._base = base

    def serialize_entity(self, ctxt, entity):
        if not self._base:
            return entity
        return self._base.serialize_entity(ctxt, entity)

    def deserialize_entity(self, ctxt, entity):
        if not self._base:
            return entity
        return self._base.deserialize_entity(ctxt, entity)

    def serialize_context(self, ctxt):
        _context = ctxt.to_dict()
        return _context

    def deserialize_context(self, ctxt):
        rpc_ctxt_dict = ctxt.copy()
        return context.Context.from_dict(rpc_ctxt_dict)


class Service(service.Service):
    """Service object for binaries running on hosts.

    A service enables rpc by listening to queues based on topic and host.
    """
    def __init__(self, host, topic, manager=None, serializer=None):
        super(Service, self).__init__()
        self.host = host
        self.topic = topic
        self.serializer = serializer
        if manager is None:
            self.manager = self
        else:
            self.manager = manager

    def start(self):
        super(Service, self).start()

        self.conn = create_connection()
        LOG.debug("Creating Consumer connection for Service %s",
                  self.topic)

        endpoints = [self.manager]

        self.conn.create_consumer(self.topic, endpoints)

        # Hook to allow the manager to do other initializations after
        # the rpc connection is created.
        if callable(getattr(self.manager, 'initialize_service_hook', None)):
            self.manager.initialize_service_hook(self)

        # Consume from all consumers in threads
        self.conn.consume_in_threads()

    def stop(self):
        # Try to shut the connection down, but if we get any sort of
        # errors, go ahead and ignore them, as we're shutting down anyway
        try:
            self.conn.close()
        except Exception:
            pass
        super(Service, self).stop()


class Connection(object):

    def __init__(self):
        super(Connection, self).__init__()
        self.servers = []

    def create_consumer(self, topic, endpoints, fanout=False,
                        exchange='apmec', host=None):
        target = oslo_messaging.Target(
            topic=topic, server=host or cfg.CONF.host, fanout=fanout,
            exchange=exchange)
        server = get_server(target, endpoints)
        self.servers.append(server)

    def consume_in_threads(self):
        for server in self.servers:
            server.start()
        return self.servers

    def close(self):
        for server in self.servers:
            server.stop()
        for server in self.servers:
            server.wait()


class VoidConnection(object):

    def create_consumer(self, topic, endpoints, fanout=False,
                        exchange='apmec', host=None):
        pass

    def consume_in_threads(self):
        pass

    def close(self):
        pass


# functions
def create_connection():
    # NOTE(salv-orlando): This is a clever interpretation of the factory
    # design pattern aimed at preventing plugins from initializing RPC
    # servers upon initialization when they are running in the REST over
    # HTTP API server. The educated reader will be perfectly able to see
    # that this is a fairly dirty hack to avoid having to change the
    # initialization process of every plugin.
    if RPC_DISABLED:
        return VoidConnection()
    return Connection()
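
# NOTE: a short sketch of the RPC_DISABLED switch above; an API-only
# process that must never open AMQP connections could do (illustrative):
#
#     from apmec.common import rpc
#     rpc.RPC_DISABLED = True
#     conn = rpc.create_connection()   # returns a no-op VoidConnection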
42
apmec/common/test_lib.py
Normal file
@ -0,0 +1,42 @@
# Copyright (c) 2010 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Colorizer Code is borrowed from Twisted:
# Copyright (c) 2001-2010 Twisted Matrix Laboratories.
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

# describes parameters used by different unit/functional tests
# a plugin-specific testing mechanism should import this dictionary
# and override the values in it if needed (e.g., run_tests.py in
# apmec/plugins/openvswitch/ )
test_config = {}
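
# NOTE: a plugin-specific test runner would typically override entries in
# the dictionary above, e.g. (the key name here is purely illustrative):
#
#     from apmec.common import test_lib
#     test_lib.test_config['config_files'] = ['etc/apmec/plugin.ini']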
15
apmec/common/topics.py
Normal file
@ -0,0 +1,15 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


TOPIC_ACTION_KILL = 'KILL_ACTION'
TOPIC_CONDUCTOR = 'APMEC_CONDUCTOR'
226
apmec/common/utils.py
Normal file
@ -0,0 +1,226 @@
# Copyright 2011, VMware, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# Borrowed from nova code base, more utilities will be added/borrowed as and
# when needed.

"""Utilities and helper functions."""

import logging as std_logging
import os
import random
import signal
import socket
import string
import sys

from eventlet.green import subprocess
import netaddr
from oslo_concurrency import lockutils
from oslo_config import cfg
from oslo_log import log as logging
from oslo_log import versionutils
from oslo_utils import importutils
from stevedore import driver

from apmec._i18n import _
from apmec.common import constants as q_const


TIME_FORMAT = "%Y-%m-%dT%H:%M:%SZ"
LOG = logging.getLogger(__name__)
SYNCHRONIZED_PREFIX = 'apmec-'
MEM_UNITS = {
    "MB": {
        "MB": {
            "op": "*",
            "val": "1"
        },
        "GB": {
            "op": "/",
            "val": "1024"
        }
    },
    "GB": {
        "MB": {
            "op": "*",
            "val": "1024"
        },
        "GB": {
            "op": "*",
            "val": "1"
        }
    }
}
CONF = cfg.CONF
synchronized = lockutils.synchronized_with_prefix(SYNCHRONIZED_PREFIX)


def find_config_file(options, config_file):
    """Return the first config file found.

    We search for the paste config file in the following order:
    * If the --config-file option is used, use that
    * Search for the configuration files via common cfg directories
    :retval Full path to config file, or None if no config file found
    """
    fix_path = lambda p: os.path.abspath(os.path.expanduser(p))
    if options.get('config_file'):
        if os.path.exists(options['config_file']):
            return fix_path(options['config_file'])

    dir_to_common = os.path.dirname(os.path.abspath(__file__))
    root = os.path.join(dir_to_common, '..', '..', '..', '..')
    # Handle standard directory search for the config file
    config_file_dirs = [fix_path(os.path.join(os.getcwd(), 'etc')),
                        fix_path(os.path.join('~', '.apmec-venv', 'etc',
                                              'apmec')),
                        fix_path('~'),
                        os.path.join(cfg.CONF.state_path, 'etc'),
                        os.path.join(cfg.CONF.state_path, 'etc', 'apmec'),
                        fix_path(os.path.join('~', '.local',
                                              'etc', 'apmec')),
                        '/usr/etc/apmec',
                        '/usr/local/etc/apmec',
                        '/etc/apmec/',
                        '/etc']

    if 'plugin' in options:
        config_file_dirs = [
            os.path.join(x, 'apmec', 'plugins', options['plugin'])
            for x in config_file_dirs
        ]

    if os.path.exists(os.path.join(root, 'plugins')):
        plugins = [fix_path(os.path.join(root, 'plugins', p, 'etc'))
                   for p in os.listdir(os.path.join(root, 'plugins'))]
        plugins = [p for p in plugins if os.path.isdir(p)]
        config_file_dirs.extend(plugins)

    for cfg_dir in config_file_dirs:
        cfg_file = os.path.join(cfg_dir, config_file)
        if os.path.exists(cfg_file):
            return cfg_file


def _subprocess_setup():
    # Python installs a SIGPIPE handler by default. This is usually not what
    # non-Python subprocesses expect.
    signal.signal(signal.SIGPIPE, signal.SIG_DFL)


def subprocess_popen(args, stdin=None, stdout=None, stderr=None, shell=False,
                     env=None):
    return subprocess.Popen(args, shell=shell, stdin=stdin, stdout=stdout,
                            stderr=stderr, preexec_fn=_subprocess_setup,
                            close_fds=True, env=env)


def get_hostname():
    return socket.gethostname()


def dict2tuple(d):
    items = list(d.items())
    items.sort()
    return tuple(items)


def log_opt_values(log):
    cfg.CONF.log_opt_values(log, std_logging.DEBUG)


def is_valid_vlan_tag(vlan):
    return q_const.MIN_VLAN_TAG <= vlan <= q_const.MAX_VLAN_TAG


def is_valid_ipv4(address):
    """Verify that address represents a valid IPv4 address."""
    try:
        return netaddr.valid_ipv4(address)
    except Exception:
        return False


def change_memory_unit(mem, to):
    """Convert the memory value (mem) to the unit ('to') specified.

    If no unit is given in 'mem', it is assumed to be "MB", and this
    method returns a number.
    """

    mem = str(mem) + " MB" if str(mem).isdigit() else mem.upper()
    for unit, value in (MEM_UNITS).items():
        mem_arr = mem.split(unit)
        if len(mem_arr) < 2:
            continue
        return eval(mem_arr[0] +
                    MEM_UNITS[unit][to]["op"] +
                    MEM_UNITS[unit][to]["val"])
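
# NOTE: illustrative results of the conversion table above (simple
# arithmetic derived from MEM_UNITS):
#
#     change_memory_unit("2048 MB", "GB")   # evaluates 2048 / 1024 -> 2.0
#     change_memory_unit("2 GB", "MB")      # evaluates 2 * 1024 -> 2048
#     change_memory_unit(512, "MB")         # bare digits default to MB -> 512
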
def load_class_by_alias_or_classname(namespace, name):
    """Load a class using a stevedore alias or the class name.

    Load the class using the stevedore driver manager
    :param namespace: namespace where the alias is defined
    :param name: alias or class name of the class to be loaded
    :returns: the class if it can be loaded
    :raises ImportError: if the class cannot be loaded
    """

    if not name:
        LOG.error("Alias or class name is not set")
        raise ImportError(_("Class not found."))
    try:
        # Try to resolve class by alias
        mgr = driver.DriverManager(namespace, name)
        class_to_load = mgr.driver
    except RuntimeError:
        e1_info = sys.exc_info()
        # Fallback to class name
        try:
            class_to_load = importutils.import_class(name)
        except (ImportError, ValueError):
            LOG.error("Error loading class by alias",
                      exc_info=e1_info)
            LOG.error("Error loading class by class name",
                      exc_info=True)
            raise ImportError(_("Class not found."))
    return class_to_load


def deep_update(orig_dict, new_dict):
    for key, value in new_dict.items():
        if isinstance(value, dict):
            if key in orig_dict and isinstance(orig_dict[key], dict):
                deep_update(orig_dict[key], value)
                continue

        orig_dict[key] = value
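
# NOTE: a minimal illustration of the recursive merge above (the dict
# contents are arbitrary examples):
#
#     base = {'vdu1': {'ram': 512, 'vcpus': 1}}
#     deep_update(base, {'vdu1': {'ram': 1024}, 'vdu2': {}})
#     # base == {'vdu1': {'ram': 1024, 'vcpus': 1}, 'vdu2': {}}
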
def deprecate_warning(what, as_of, in_favor_of=None, remove_in=1):
    versionutils.deprecation_warning(as_of=as_of, what=what,
                                     in_favor_of=in_favor_of,
                                     remove_in=remove_in)


def generate_resource_name(resource, prefix='tmpl'):
    return prefix + '-' \
        + ''.join(random.SystemRandom().choice(
            string.ascii_lowercase + string.digits)
            for _ in range(16)) \
        + '-' + resource
0
apmec/conductor/__init__.py
Normal file
91
apmec/conductor/conductor_server.py
Normal file
@ -0,0 +1,91 @@
# Copyright 2017 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import sys

from oslo_config import cfg
from oslo_log import log as logging
import oslo_messaging
from oslo_service import service
from oslo_utils import timeutils
from sqlalchemy.orm import exc as orm_exc

from apmec.common import topics
from apmec import context as t_context
from apmec.db.common_services import common_services_db
from apmec.db.meo import meo_db
from apmec.extensions import meo
from apmec import manager
from apmec.plugins.common import constants
from apmec import service as apmec_service
from apmec import version


LOG = logging.getLogger(__name__)


class Conductor(manager.Manager):
    def __init__(self, host, conf=None):
        if conf:
            self.conf = conf
        else:
            self.conf = cfg.CONF
        super(Conductor, self).__init__(host=self.conf.host)

    def update_vim(self, context, vim_id, status):
        t_admin_context = t_context.get_admin_context()
        update_time = timeutils.utcnow()
        with t_admin_context.session.begin(subtransactions=True):
            try:
                query = t_admin_context.session.query(meo_db.Vim)
                query.filter(
                    meo_db.Vim.id == vim_id).update(
                        {'status': status,
                         'updated_at': update_time})
            except orm_exc.NoResultFound:
                raise meo.VimNotFoundException(vim_id=vim_id)
            event_db = common_services_db.Event(
                resource_id=vim_id,
                resource_type=constants.RES_TYPE_VIM,
                resource_state=status,
                event_details="",
                event_type=constants.RES_EVT_MONITOR,
                timestamp=update_time)
            t_admin_context.session.add(event_db)
        return status


def init(args, **kwargs):
    cfg.CONF(args=args, project='apmec',
             version='%%prog %s' % version.version_info.release_string(),
             **kwargs)

    # FIXME(ihrachys): if the import is put at module level, a circular
    # import failure occurs
    from apmec.common import rpc as n_rpc
    n_rpc.init(cfg.CONF)


def main(manager='apmec.conductor.conductor_server.Conductor'):
    init(sys.argv[1:])
    oslo_messaging.set_transport_defaults(control_exchange='apmec')
    logging.setup(cfg.CONF, "apmec")
    cfg.CONF.log_opt_values(LOG, logging.DEBUG)
    server = apmec_service.Service.create(
        binary='apmec-conductor',
        topic=topics.TOPIC_CONDUCTOR,
        manager=manager)
    service.launch(cfg.CONF, server).wait()
0
apmec/conductor/conductorrpc/__init__.py
Normal file
30
apmec/conductor/conductorrpc/vim_monitor_rpc.py
Normal file
@ -0,0 +1,30 @@
# Copyright 2017 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import oslo_messaging

from apmec.common import topics


class VIMUpdateRPC(object):

    target = oslo_messaging.Target(
        exchange='apmec',
        topic=topics.TOPIC_CONDUCTOR,
        fanout=False,
        version='1.0')

    def update_vim(self, context, **kwargs):
        pass
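
# NOTE: a minimal sketch of how a monitoring process could reach the
# conductor's update_vim() through the stub above, assuming rpc.init()
# has already run; the vim_id/status values are illustrative:
#
#     from apmec.common import rpc
#     client = rpc.get_client(VIMUpdateRPC.target)
#     cctxt = client.prepare()
#     cctxt.cast(context, 'update_vim', vim_id='...', status='REACHABLE')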
141
apmec/context.py
Normal file
@ -0,0 +1,141 @@
# Copyright 2012 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Context: context for security/db session."""

import copy
import datetime

from oslo_context import context as oslo_context
from oslo_db.sqlalchemy import enginefacade

from apmec.db import api as db_api
from apmec import policy


class ContextBase(oslo_context.RequestContext):
    """Security context and request information.

    Represents the user taking a given action within the system.
    """

    def __init__(self, user_id, tenant_id, is_admin=None, roles=None,
                 timestamp=None, request_id=None, tenant_name=None,
                 user_name=None, overwrite=True, auth_token=None,
                 **kwargs):
        """Object initialization.

        :param overwrite: Set to False to ensure that the greenthread local
            copy of the index is not overwritten.

        :param kwargs: Extra arguments that might be present, but we ignore
            because they possibly came in from older rpc messages.
        """
        super(ContextBase, self).__init__(auth_token=auth_token,
                                          user=user_id, tenant=tenant_id,
                                          is_admin=is_admin,
                                          request_id=request_id,
                                          overwrite=overwrite,
                                          roles=roles)
        self.user_name = user_name
        self.tenant_name = tenant_name

        if not timestamp:
            timestamp = datetime.datetime.utcnow()
        self.timestamp = timestamp
        if self.is_admin is None:
            self.is_admin = policy.check_is_admin(self)

    @property
    def project_id(self):
        return self.tenant

    @property
    def tenant_id(self):
        return self.tenant

    @tenant_id.setter
    def tenant_id(self, tenant_id):
        self.tenant = tenant_id

    @property
    def user_id(self):
        return self.user

    @user_id.setter
    def user_id(self, user_id):
        self.user = user_id

    def to_dict(self):
        context = super(ContextBase, self).to_dict()
        context.update({
            'user_id': self.user_id,
            'tenant_id': self.tenant_id,
            'project_id': self.project_id,
            'timestamp': str(self.timestamp),
            'tenant_name': self.tenant_name,
            'project_name': self.tenant_name,
            'user_name': self.user_name,
        })
        return context

    @classmethod
    def from_dict(cls, values):
        return cls(**values)

    def elevated(self):
        """Return a version of this context with admin flag set."""
        context = copy.copy(self)
        context.is_admin = True

        if 'admin' not in [x.lower() for x in context.roles]:
            context.roles = context.roles + ["admin"]

        return context


@enginefacade.transaction_context_provider
class ContextBaseWithSession(ContextBase):
    pass


class Context(ContextBaseWithSession):
    def __init__(self, *args, **kwargs):
        super(Context, self).__init__(*args, **kwargs)
        self._session = None

    @property
    def session(self):
        # TODO(akamyshnikova): checking for session attribute won't be needed
        # when reader and writer will be used
        if hasattr(super(Context, self), 'session'):
            return super(Context, self).session
        if self._session is None:
            self._session = db_api.get_session()
        return self._session


def get_admin_context():
    return Context(user_id=None,
                   tenant_id=None,
                   is_admin=True,
                   overwrite=False)


def get_admin_context_without_session():
    return ContextBase(user_id=None,
                       tenant_id=None,
                       is_admin=True)
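
# NOTE: a small sketch of the dict round-trip that the RPC serializer in
# apmec.common.rpc relies on (the ids here are illustrative):
#
#     ctx = Context(user_id='u1', tenant_id='t1')
#     assert Context.from_dict(ctx.to_dict()).tenant_id == 't1'
#     admin_ctx = ctx.elevated()   # same request, with the admin role added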
0
apmec/db/__init__.py
Normal file
45
apmec/db/api.py
Normal file
@ -0,0 +1,45 @@
# Copyright 2011 VMware, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_config import cfg
from oslo_db.sqlalchemy import enginefacade


context_manager = enginefacade.transaction_context()

_FACADE = None


def _create_facade_lazily():
    global _FACADE

    if _FACADE is None:
        context_manager.configure(sqlite_fk=True, **cfg.CONF.database)
        _FACADE = context_manager._factory.get_legacy_facade()

    return _FACADE


def get_engine():
    """Helper method to grab engine."""
    facade = _create_facade_lazily()
    return facade.get_engine()


def get_session(autocommit=True, expire_on_commit=False):
    """Helper method to grab session."""
    facade = _create_facade_lazily()
    return facade.get_session(autocommit=autocommit,
                              expire_on_commit=expire_on_commit)
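
# NOTE: a typical consumer pattern for the lazily-created facade above;
# SomeModel is hypothetical:
#
#     session = get_session()
#     with session.begin():
#         rows = session.query(SomeModel).all()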
0
apmec/db/common_services/__init__.py
Normal file
31
apmec/db/common_services/common_services_db.py
Normal file
@ -0,0 +1,31 @@
# Copyright 2016 Brocade Communications System, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import sqlalchemy as sa

from apmec.db import model_base
from apmec.db import types


class Event(model_base.BASE):
    id = sa.Column(sa.Integer, primary_key=True, nullable=False,
                   autoincrement=True)
    resource_id = sa.Column(types.Uuid, nullable=False)
    resource_state = sa.Column(sa.String(64), nullable=False)
    resource_type = sa.Column(sa.String(64), nullable=False)
    timestamp = sa.Column(sa.DateTime, nullable=False)
    event_type = sa.Column(sa.String(64), nullable=False)
    event_details = sa.Column(types.Json)
88
apmec/db/common_services/common_services_db_plugin.py
Normal file
@ -0,0 +1,88 @@
# Copyright 2016 Brocade Communications System, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from sqlalchemy.orm import exc as orm_exc

from oslo_log import log as logging

from apmec.common import log
from apmec.db.common_services import common_services_db
from apmec.db import db_base
from apmec.extensions import common_services
from apmec import manager


LOG = logging.getLogger(__name__)

EVENT_ATTRIBUTES = ('id', 'resource_id', 'resource_type', 'resource_state',
                    'timestamp', 'event_type', 'event_details')


class CommonServicesPluginDb(common_services.CommonServicesPluginBase,
                             db_base.CommonDbMixin):

    def __init__(self):
        super(CommonServicesPluginDb, self).__init__()

    @property
    def _core_plugin(self):
        return manager.ApmecManager.get_plugin()

    def _make_event_dict(self, event_db, fields=None):
        res = dict((key, event_db[key]) for key in EVENT_ATTRIBUTES)
        return self._fields(res, fields)

    def _fields(self, resource, fields):
        if fields:
            return dict(((key, item) for key, item in resource.items()
                         if key in fields))
        return resource

    @log.log
    def create_event(self, context, res_id, res_type, res_state, evt_type,
                     tstamp, details=""):
        try:
            with context.session.begin(subtransactions=True):
                event_db = common_services_db.Event(
                    resource_id=res_id,
                    resource_type=res_type,
                    resource_state=res_state,
                    event_details=details,
                    event_type=evt_type,
                    timestamp=tstamp)
                context.session.add(event_db)
        except Exception as e:
            LOG.exception("create event error: %s", str(e))
            raise common_services.EventCreationFailureException(
                error_str=str(e))
        return self._make_event_dict(event_db)

    @log.log
    def get_event(self, context, event_id, fields=None):
        try:
            events_db = self._get_by_id(context,
                                        common_services_db.Event, event_id)
        except orm_exc.NoResultFound:
            raise common_services.EventNotFoundException(evt_id=event_id)
        return self._make_event_dict(events_db, fields)

    @log.log
    def get_events(self, context, filters=None, fields=None, sorts=None,
                   limit=None, marker_obj=None, page_reverse=False):
        return self._get_collection(context, common_services_db.Event,
                                    self._make_event_dict,
                                    filters, fields, sorts, limit,
                                    marker_obj, page_reverse)
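
# NOTE: a short sketch of recording and fetching events through the plugin
# above (the ids and type strings are illustrative):
#
#     plugin = CommonServicesPluginDb()
#     plugin.create_event(ctx, res_id=vim_id, res_type='vim',
#                         res_state='PENDING', evt_type='monitor',
#                         tstamp=timeutils.utcnow())
#     plugin.get_events(ctx, filters={'resource_id': [vim_id]})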
217
apmec/db/db_base.py
Normal file
@ -0,0 +1,217 @@
# Copyright (c) 2012 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from datetime import datetime
import weakref

from oslo_log import log as logging
import six
from six import iteritems
from sqlalchemy.orm import exc as orm_exc
from sqlalchemy import sql

# _ is needed by _get_tenant_id_for_create below
from apmec._i18n import _
from apmec.common import exceptions as n_exc
from apmec.db import sqlalchemyutils

LOG = logging.getLogger(__name__)


class CommonDbMixin(object):
    """Common methods used in core and service plugins."""
    # Plugins, mixin classes implementing extension will register
    # hooks into the dict below for "augmenting" the "core way" of
    # building a query for retrieving objects from a model class.
    # To this aim, the register_model_query_hook and unregister_query_hook
    # from this class should be invoked
    _model_query_hooks = {}

    # This dictionary will store methods for extending attributes of
    # api resources. Mixins can use this dict for adding their own methods
    # TODO(salvatore-orlando): Avoid using class-level variables
    _dict_extend_functions = {}

    @classmethod
    def register_model_query_hook(cls, model, name, query_hook, filter_hook,
                                  result_filters=None):
        """Register a hook to be invoked when a query is executed.

        Add the hooks to the _model_query_hooks dict. Models are the keys
        of this dict, whereas the value is another dict mapping hook names
        to callables performing the hook.
        Each hook has a "query" component, used to build the query
        expression, and a "filter" component, which is used to build the
        filter expression.

        Query hooks take as input the query being built and return a
        transformed query expression.

        Filter hooks take as input the filter expression being built and
        return a transformed filter expression.
        """
        model_hooks = cls._model_query_hooks.get(model)
        if not model_hooks:
            # add key to dict
            model_hooks = {}
            cls._model_query_hooks[model] = model_hooks
        model_hooks[name] = {'query': query_hook, 'filter': filter_hook,
                             'result_filters': result_filters}

    @property
    def safe_reference(self):
        """Return a weakref to the instance.

        Minimize the potential for the instance persisting
        unnecessarily in memory by returning a weakref proxy that
        won't prevent deallocation.
        """
        return weakref.proxy(self)

    def _model_query(self, context, model):
        query = context.session.query(model)
        # define basic filter condition for model query
        # NOTE(jkoelker) non-admin queries are scoped to their tenant_id
        # NOTE(salvatore-orlando): unless the model allows for shared objects
        query_filter = None
        if not context.is_admin and hasattr(model, 'tenant_id'):
            if hasattr(model, 'shared'):
                query_filter = ((model.tenant_id == context.tenant_id) |
                                (model.shared == sql.true()))
            else:
                query_filter = (model.tenant_id == context.tenant_id)
        # Execute query hooks registered from mixins and plugins
        for _name, hooks in iteritems(self._model_query_hooks.get(model, {})):
            query_hook = hooks.get('query')
            if isinstance(query_hook, six.string_types):
                query_hook = getattr(self, query_hook, None)
            if query_hook:
                query = query_hook(context, model, query)

            filter_hook = hooks.get('filter')
            if isinstance(filter_hook, six.string_types):
                filter_hook = getattr(self, filter_hook, None)
            if filter_hook:
                query_filter = filter_hook(context, model, query_filter)

        # NOTE(salvatore-orlando): 'if query_filter' will try to evaluate the
        # condition, raising an exception
        if query_filter is not None:
            query = query.filter(query_filter)

        # Don't list the deleted entries
        if hasattr(model, 'deleted_at'):
            query = query.filter_by(deleted_at=datetime.min)

        return query

    def _fields(self, resource, fields):
        if fields:
            return dict(((key, item) for key, item in resource.items()
                         if key in fields))
        return resource

    def _get_tenant_id_for_create(self, context, resource):
        if context.is_admin and 'tenant_id' in resource:
            tenant_id = resource['tenant_id']
        elif ('tenant_id' in resource and
              resource['tenant_id'] != context.tenant_id):
            reason = _('Cannot create resource for another tenant')
            raise n_exc.AdminRequired(reason=reason)
        else:
            tenant_id = context.tenant_id
        return tenant_id

    def _get_by_id(self, context, model, id):
        query = self._model_query(context, model)
        return query.filter(model.id == id).one()

    def _apply_filters_to_query(self, query, model, filters):
        if filters:
            for key, value in iteritems(filters):
                column = getattr(model, key, None)
                if column:
                    query = query.filter(column.in_(value))
            for _name, hooks in iteritems(
                    self._model_query_hooks.get(model, {})):
                result_filter = hooks.get('result_filters', None)
                if isinstance(result_filter, six.string_types):
                    result_filter = getattr(self, result_filter, None)

                if result_filter:
                    query = result_filter(query, filters)

        return query

    def _apply_dict_extend_functions(self, resource_type,
                                     response, db_object):
        for func in self._dict_extend_functions.get(
                resource_type, []):
            args = (response, db_object)
            if isinstance(func, six.string_types):
                func = getattr(self, func, None)
            else:
                # must call unbound method - use self as 1st argument
                args = (self,) + args
            if func:
                func(*args)

    def _get_collection_query(self, context, model, filters=None,
                              sorts=None, limit=None, marker_obj=None,
                              page_reverse=False):
        collection = self._model_query(context, model)
        collection = self._apply_filters_to_query(collection, model, filters)
        if limit and page_reverse and sorts:
            sorts = [(s[0], not s[1]) for s in sorts]
        collection = sqlalchemyutils.paginate_query(collection, model, limit,
                                                    sorts,
                                                    marker_obj=marker_obj)
        return collection

    def _get_collection(self, context, model, dict_func, filters=None,
                        fields=None, sorts=None, limit=None, marker_obj=None,
                        page_reverse=False):
        query = self._get_collection_query(context, model, filters=filters,
                                           sorts=sorts,
                                           limit=limit,
                                           marker_obj=marker_obj,
                                           page_reverse=page_reverse)
        items = [dict_func(c, fields) for c in query]
        if limit and page_reverse:
            items.reverse()
        return items

    def _get_collection_count(self, context, model, filters=None):
        return self._get_collection_query(context, model, filters).count()

    def _get_marker_obj(self, context, resource, limit, marker):
        if limit and marker:
            return getattr(self, '_get_%s' % resource)(context, marker)
        return None

    def _filter_non_model_columns(self, data, model):
        """Removes attributes from data.

        Remove all the attributes from data which are not columns of
        the model passed as second parameter.
        """
        columns = [c.name for c in model.__table__.columns]
        return dict((k, v) for (k, v) in
                    iteritems(data) if k in columns)

    def _get_by_name(self, context, model, name):
        try:
            query = self._model_query(context, model)
            return query.filter(model.name == name).one()
        except orm_exc.NoResultFound:
            LOG.info("No result found for %(name)s in %(model)s table",
                     {'name': name, 'model': model})
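
# NOTE: a minimal sketch of the query-hook mechanism above; the hook below
# would narrow every query on a hypothetical MyModel to enabled rows:
#
#     def _skip_disabled(self, context, model, query):
#         return query.filter(model.disabled == sql.false())
#
#     CommonDbMixin.register_model_query_hook(
#         MyModel, 'skip_disabled', _skip_disabled, filter_hook=None)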
0
apmec/db/mem/__init__.py
Normal file
683
apmec/db/mem/mem_db.py
Normal file
@ -0,0 +1,683 @@
# Copyright 2013, 2014 Intel Corporation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from datetime import datetime

from oslo_db.exception import DBDuplicateEntry
from oslo_log import log as logging
from oslo_utils import timeutils
from oslo_utils import uuidutils

import sqlalchemy as sa
from sqlalchemy import orm
from sqlalchemy.orm import exc as orm_exc
from sqlalchemy import schema

from apmec.api.v1 import attributes
from apmec.common import exceptions
from apmec import context as t_context
from apmec.db.common_services import common_services_db_plugin
from apmec.db import db_base
from apmec.db import model_base
from apmec.db import models_v1
from apmec.db import types
from apmec.extensions import mem
from apmec import manager
from apmec.plugins.common import constants

LOG = logging.getLogger(__name__)
_ACTIVE_UPDATE = (constants.ACTIVE, constants.PENDING_UPDATE)
_ACTIVE_UPDATE_ERROR_DEAD = (
    constants.PENDING_CREATE, constants.ACTIVE, constants.PENDING_UPDATE,
    constants.ERROR, constants.DEAD)
CREATE_STATES = (constants.PENDING_CREATE, constants.DEAD)


###########################################################################
# db tables

class MEAD(model_base.BASE, models_v1.HasId, models_v1.HasTenant,
           models_v1.Audit):
    """Represents a MEAD used to create MEAs."""

    __tablename__ = 'mead'
    # Descriptive name
    name = sa.Column(sa.String(255), nullable=False)
    description = sa.Column(sa.Text)

    # service type that this service vm provides.
    # At first phase, this includes only single service
    # In future, single service VM may accommodate multiple services.
    service_types = orm.relationship('ServiceType', backref='mead')

    # driver to communicate with service management
    mgmt_driver = sa.Column(sa.String(255))

    # (key, value) pair to spin up
    attributes = orm.relationship('MEADAttribute',
                                  backref='mead')

    # mead template source - inline or onboarded
    template_source = sa.Column(sa.String(255), server_default='onboarded')

    __table_args__ = (
        schema.UniqueConstraint(
            "tenant_id",
            "name",
            "deleted_at",
            name="uniq_mead0tenant_id0name0deleted_at"),
    )


class ServiceType(model_base.BASE, models_v1.HasId, models_v1.HasTenant):
    """Represents a service type which a hosting MEA provides.

    Since a MEA may provide many services, this is a one-to-many
    relationship.
    """
    mead_id = sa.Column(types.Uuid, sa.ForeignKey('mead.id'),
                        nullable=False)
    service_type = sa.Column(sa.String(64), nullable=False)


class MEADAttribute(model_base.BASE, models_v1.HasId):
    """Represents attributes necessary for spinning up a VM as (key, value).

    The (key, value) pair format is adopted to stay agnostic to the actual
    manager of VMs. The interpretation is up to the actual driver of the
    hosting MEA.
    """

    __tablename__ = 'mead_attribute'
    mead_id = sa.Column(types.Uuid, sa.ForeignKey('mead.id'),
                        nullable=False)
    key = sa.Column(sa.String(255), nullable=False)
    value = sa.Column(sa.TEXT(65535), nullable=True)


class MEA(model_base.BASE, models_v1.HasId, models_v1.HasTenant,
          models_v1.Audit):
    """Represents MEAs that host services.

    Here the term 'VM' is intentionally avoided because the host can be
    a VM or another kind of container.
    """

    __tablename__ = 'mea'
    mead_id = sa.Column(types.Uuid, sa.ForeignKey('mead.id'))
    mead = orm.relationship('MEAD')

    name = sa.Column(sa.String(255), nullable=False)
    description = sa.Column(sa.Text, nullable=True)

    # sufficient information to uniquely identify hosting mea.
    # In case of openstack manager, it's UUID of heat stack.
    instance_id = sa.Column(sa.String(64), nullable=True)

    # For a management tool to talk to manage this hosting mea.
    # opaque string.
    # e.g. (driver, mgmt_url) = (ssh, ip address), ...
    mgmt_url = sa.Column(sa.String(255), nullable=True)
    attributes = orm.relationship("MEAAttribute", backref="mea")

    status = sa.Column(sa.String(64), nullable=False)
    vim_id = sa.Column(types.Uuid, sa.ForeignKey('vims.id'), nullable=False)
    placement_attr = sa.Column(types.Json, nullable=True)
    vim = orm.relationship('Vim')
    error_reason = sa.Column(sa.Text, nullable=True)

    __table_args__ = (
        schema.UniqueConstraint(
            "tenant_id",
            "name",
            "deleted_at",
            name="uniq_mea0tenant_id0name0deleted_at"),
    )


class MEAAttribute(model_base.BASE, models_v1.HasId):
    """Represents kwargs necessary for spinning up a VM as (key, value).

    The (key, value) pair format is adopted to stay agnostic to the actual
    manager of VMs. The interpretation is up to the actual driver of the
    hosting MEA.
    """

    __tablename__ = 'mea_attribute'
    mea_id = sa.Column(types.Uuid, sa.ForeignKey('mea.id'),
                       nullable=False)
    key = sa.Column(sa.String(255), nullable=False)
    # json encoded value. example
    # "nic": [{"net-id": <net-uuid>}, {"port-id": <port-uuid>}]
    value = sa.Column(sa.TEXT(65535), nullable=True)


class MEMPluginDb(mem.MEMPluginBase, db_base.CommonDbMixin):
|
||||||
|
|
||||||
|
@property
|
||||||
|
def _core_plugin(self):
|
||||||
|
return manager.ApmecManager.get_plugin()
|
||||||
|
|
||||||
|
def subnet_id_to_network_id(self, context, subnet_id):
|
||||||
|
subnet = self._core_plugin.get_subnet(context, subnet_id)
|
||||||
|
return subnet['network_id']
|
||||||
|
|
||||||
|
def __init__(self):
|
||||||
|
super(MEMPluginDb, self).__init__()
|
||||||
|
self._cos_db_plg = common_services_db_plugin.CommonServicesPluginDb()
|
||||||
|
|
||||||
|
def _get_resource(self, context, model, id):
|
||||||
|
try:
|
||||||
|
if uuidutils.is_uuid_like(id):
|
||||||
|
return self._get_by_id(context, model, id)
|
||||||
|
return self._get_by_name(context, model, id)
|
||||||
|
except orm_exc.NoResultFound:
|
||||||
|
if issubclass(model, MEAD):
|
||||||
|
raise mem.MEADNotFound(mead_id=id)
|
||||||
|
elif issubclass(model, ServiceType):
|
||||||
|
raise mem.ServiceTypeNotFound(service_type_id=id)
|
||||||
|
if issubclass(model, MEA):
|
||||||
|
raise mem.MEANotFound(mea_id=id)
|
||||||
|
else:
|
||||||
|
raise
|
||||||
|
|
||||||
|
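    # Illustrative sketch (hypothetical names): because _get_resource falls
    # back to a name lookup when the id is not UUID-like, callers may pass
    # either form. Assuming `plugin` is a MEMPluginDb and `ctx` an admin
    # context:
    #
    #     mead = plugin._get_resource(ctx, MEAD, 'openwrt')  # by name
    #     mead = plugin._get_resource(ctx, MEAD, some_uuid)  # by id
    #
    # A miss raises the model-specific exception, e.g. mem.MEADNotFound.
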
    def _make_attributes_dict(self, attributes_db):
        return dict((attr.key, attr.value) for attr in attributes_db)

    def _make_service_types_list(self, service_types):
        return [service_type.service_type
                for service_type in service_types]

    def _make_mead_dict(self, mead, fields=None):
        res = {
            'attributes': self._make_attributes_dict(mead['attributes']),
            'service_types': self._make_service_types_list(
                mead.service_types)
        }
        key_list = ('id', 'tenant_id', 'name', 'description',
                    'mgmt_driver', 'created_at', 'updated_at',
                    'template_source')
        res.update((key, mead[key]) for key in key_list)
        return self._fields(res, fields)

    def _make_dev_attrs_dict(self, dev_attrs_db):
        return dict((arg.key, arg.value) for arg in dev_attrs_db)

    def _make_mea_dict(self, mea_db, fields=None):
        LOG.debug('mea_db %s', mea_db)
        LOG.debug('mea_db attributes %s', mea_db.attributes)
        res = {
            'mead':
            self._make_mead_dict(mea_db.mead),
            'attributes': self._make_dev_attrs_dict(mea_db.attributes),
        }
        key_list = ('id', 'tenant_id', 'name', 'description', 'instance_id',
                    'vim_id', 'placement_attr', 'mead_id', 'status',
                    'mgmt_url', 'error_reason', 'created_at', 'updated_at')
        res.update((key, mea_db[key]) for key in key_list)
        return self._fields(res, fields)

    @staticmethod
    def _mgmt_driver_name(mea_dict):
        return mea_dict['mead']['mgmt_driver']

    @staticmethod
    def _instance_id(mea_dict):
        return mea_dict['instance_id']

    def create_mead(self, context, mead):
        mead = mead['mead']
        LOG.debug('mead %s', mead)
        tenant_id = self._get_tenant_id_for_create(context, mead)
        service_types = mead.get('service_types')
        mgmt_driver = mead.get('mgmt_driver')
        template_source = mead.get("template_source")

        if not attributes.is_attr_set(service_types):
            LOG.debug('service types unspecified')
            raise mem.ServiceTypesNotSpecified()

        try:
            with context.session.begin(subtransactions=True):
                mead_id = uuidutils.generate_uuid()
                mead_db = MEAD(
                    id=mead_id,
                    tenant_id=tenant_id,
                    name=mead.get('name'),
                    description=mead.get('description'),
                    mgmt_driver=mgmt_driver,
                    template_source=template_source,
                    deleted_at=datetime.min)
                context.session.add(mead_db)
                for (key, value) in mead.get('attributes', {}).items():
                    attribute_db = MEADAttribute(
                        id=uuidutils.generate_uuid(),
                        mead_id=mead_id,
                        key=key,
                        value=value)
                    context.session.add(attribute_db)
                for service_type in (item['service_type']
                                     for item in mead['service_types']):
                    service_type_db = ServiceType(
                        id=uuidutils.generate_uuid(),
                        tenant_id=tenant_id,
                        mead_id=mead_id,
                        service_type=service_type)
                    context.session.add(service_type_db)
        except DBDuplicateEntry as e:
            raise exceptions.DuplicateEntity(
                _type="mead",
                entry=e.columns)
        LOG.debug('mead_db %(mead_db)s %(attributes)s ',
                  {'mead_db': mead_db,
                   'attributes': mead_db.attributes})
        mead_dict = self._make_mead_dict(mead_db)
        LOG.debug('mead_dict %s', mead_dict)
        self._cos_db_plg.create_event(
            context, res_id=mead_dict['id'],
            res_type=constants.RES_TYPE_MEAD,
            res_state=constants.RES_EVT_ONBOARDED,
            evt_type=constants.RES_EVT_CREATE,
            tstamp=mead_dict[constants.RES_EVT_CREATED_FLD])
        return mead_dict

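    # Illustrative payload sketch (values hypothetical): create_mead expects
    # the body wrapped under a 'mead' key, with service_types given as a
    # list of {'service_type': ...} items:
    #
    #     plugin.create_mead(ctx, {'mead': {
    #         'name': 'openwrt',
    #         'description': 'sample template',
    #         'mgmt_driver': 'noop',
    #         'template_source': 'onboarded',
    #         'service_types': [{'service_type': 'mead'}],
    #         'attributes': {'mead': tosca_yaml_str},
    #     }})
    #
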
    def update_mead(self, context, mead_id,
                    mead):
        with context.session.begin(subtransactions=True):
            mead_db = self._get_resource(context, MEAD,
                                         mead_id)
            mead_db.update(mead['mead'])
            mead_db.update({'updated_at': timeutils.utcnow()})
            mead_dict = self._make_mead_dict(mead_db)
            self._cos_db_plg.create_event(
                context, res_id=mead_dict['id'],
                res_type=constants.RES_TYPE_MEAD,
                res_state=constants.RES_EVT_NA_STATE,
                evt_type=constants.RES_EVT_UPDATE,
                tstamp=mead_dict[constants.RES_EVT_UPDATED_FLD])
        return mead_dict

    def delete_mead(self,
                    context,
                    mead_id,
                    soft_delete=True):
        with context.session.begin(subtransactions=True):
            # TODO(yamahata): race. prevent from newly inserting hosting mea
            #                 that refers to this mead
            meas_db = context.session.query(MEA).filter_by(
                mead_id=mead_id).first()
            if meas_db is not None and meas_db.deleted_at is None:
                raise mem.MEADInUse(mead_id=mead_id)
            mead_db = self._get_resource(context, MEAD,
                                         mead_id)
            if soft_delete:
                mead_db.update({'deleted_at': timeutils.utcnow()})
                self._cos_db_plg.create_event(
                    context, res_id=mead_db['id'],
                    res_type=constants.RES_TYPE_MEAD,
                    res_state=constants.RES_EVT_NA_STATE,
                    evt_type=constants.RES_EVT_DELETE,
                    tstamp=mead_db[constants.RES_EVT_DELETED_FLD])
            else:
                context.session.query(ServiceType).filter_by(
                    mead_id=mead_id).delete()
                context.session.query(MEADAttribute).filter_by(
                    mead_id=mead_id).delete()
                context.session.delete(mead_db)

    def get_mead(self, context, mead_id, fields=None):
        mead_db = self._get_resource(context, MEAD, mead_id)
        return self._make_mead_dict(mead_db)

    def get_meads(self, context, filters, fields=None):
        if 'template_source' in filters and \
                filters['template_source'][0] == 'all':
            filters.pop('template_source')
        return self._get_collection(context, MEAD,
                                    self._make_mead_dict,
                                    filters=filters, fields=fields)

    def choose_mead(self, context, service_type,
                    required_attributes=None):
        required_attributes = required_attributes or []
        LOG.debug('required_attributes %s', required_attributes)
        with context.session.begin(subtransactions=True):
            query = (
                context.session.query(MEAD).
                filter(
                    sa.exists().
                    where(sa.and_(
                        MEAD.id == ServiceType.mead_id,
                        ServiceType.service_type == service_type))))
            for key in required_attributes:
                query = query.filter(
                    sa.exists().
                    where(sa.and_(
                        MEAD.id ==
                        MEADAttribute.mead_id,
                        MEADAttribute.key == key)))
            LOG.debug('statements %s', query)
            mead_db = query.first()
            if mead_db:
                return self._make_mead_dict(mead_db)

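    # Selection sketch: choose_mead returns the first MEAD advertising the
    # requested service type whose attribute table also contains every
    # required key, or None when nothing matches (hypothetical call):
    #
    #     mead = plugin.choose_mead(ctx, 'mead',
    #                               required_attributes=['monitoring_policy'])
    #
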
    def _mea_attribute_update_or_create(
            self, context, mea_id, key, value):
        arg = (self._model_query(context, MEAAttribute).
               filter(MEAAttribute.mea_id == mea_id).
               filter(MEAAttribute.key == key).first())
        if arg:
            arg.value = value
        else:
            arg = MEAAttribute(
                id=uuidutils.generate_uuid(), mea_id=mea_id,
                key=key, value=value)
            context.session.add(arg)

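    # The helper above is effectively an upsert keyed on (mea_id, key): a
    # second call with the same key overwrites the stored value instead of
    # inserting a duplicate row. Hypothetical example:
    #
    #     self._mea_attribute_update_or_create(
    #         context, mea_id, 'monitoring_policy', '{"ping": {}}')
    #
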
    # called internally, not by REST API
    def _create_mea_pre(self, context, mea):
        LOG.debug('mea %s', mea)
        tenant_id = self._get_tenant_id_for_create(context, mea)
        mead_id = mea['mead_id']
        name = mea.get('name')
        mea_id = uuidutils.generate_uuid()
        attributes = mea.get('attributes', {})
        vim_id = mea.get('vim_id')
        placement_attr = mea.get('placement_attr', {})
        try:
            with context.session.begin(subtransactions=True):
                mead_db = self._get_resource(context, MEAD,
                                             mead_id)
                mea_db = MEA(id=mea_id,
                             tenant_id=tenant_id,
                             name=name,
                             description=mead_db.description,
                             instance_id=None,
                             mead_id=mead_id,
                             vim_id=vim_id,
                             placement_attr=placement_attr,
                             status=constants.PENDING_CREATE,
                             error_reason=None,
                             deleted_at=datetime.min)
                context.session.add(mea_db)
                for key, value in attributes.items():
                    arg = MEAAttribute(
                        id=uuidutils.generate_uuid(), mea_id=mea_id,
                        key=key, value=value)
                    context.session.add(arg)
        except DBDuplicateEntry as e:
            raise exceptions.DuplicateEntity(
                _type="mea",
                entry=e.columns)
        evt_details = "MEA UUID assigned."
        self._cos_db_plg.create_event(
            context, res_id=mea_id,
            res_type=constants.RES_TYPE_MEA,
            res_state=constants.PENDING_CREATE,
            evt_type=constants.RES_EVT_CREATE,
            tstamp=mea_db[constants.RES_EVT_CREATED_FLD],
            details=evt_details)
        return self._make_mea_dict(mea_db)

    # called internally, not by REST API
    # instance_id = None means error on creation
    def _create_mea_post(self, context, mea_id, instance_id,
                         mgmt_url, mea_dict):
        LOG.debug('mea_dict %s', mea_dict)
        with context.session.begin(subtransactions=True):
            query = (self._model_query(context, MEA).
                     filter(MEA.id == mea_id).
                     filter(MEA.status.in_(CREATE_STATES)).
                     one())
            query.update({'instance_id': instance_id, 'mgmt_url': mgmt_url})
            if instance_id is None or mea_dict['status'] == constants.ERROR:
                query.update({'status': constants.ERROR})

            for (key, value) in mea_dict['attributes'].items():
                # do not store decrypted vim auth in mea attr table
                if 'vim_auth' not in key:
                    self._mea_attribute_update_or_create(context, mea_id,
                                                         key, value)
        evt_details = ("Infra Instance ID created: %s and "
                       "Mgmt URL set: %s") % (instance_id, mgmt_url)
        self._cos_db_plg.create_event(
            context, res_id=mea_dict['id'],
            res_type=constants.RES_TYPE_MEA,
            res_state=mea_dict['status'],
            evt_type=constants.RES_EVT_CREATE,
            tstamp=timeutils.utcnow(), details=evt_details)

    def _create_mea_status(self, context, mea_id, new_status):
        with context.session.begin(subtransactions=True):
            query = (self._model_query(context, MEA).
                     filter(MEA.id == mea_id).
                     filter(MEA.status.in_(CREATE_STATES)).one())
            query.update({'status': new_status})
            self._cos_db_plg.create_event(
                context, res_id=mea_id,
                res_type=constants.RES_TYPE_MEA,
                res_state=new_status,
                evt_type=constants.RES_EVT_CREATE,
                tstamp=timeutils.utcnow(), details="MEA creation completed")

    def _get_mea_db(self, context, mea_id, current_statuses, new_status):
        try:
            mea_db = (
                self._model_query(context, MEA).
                filter(MEA.id == mea_id).
                filter(MEA.status.in_(current_statuses)).
                with_lockmode('update').one())
        except orm_exc.NoResultFound:
            raise mem.MEANotFound(mea_id=mea_id)
        if mea_db.status == constants.PENDING_UPDATE:
            raise mem.MEAInUse(mea_id=mea_id)
        mea_db.update({'status': new_status})
        return mea_db

    def _update_mea_scaling_status(self,
                                   context,
                                   policy,
                                   previous_statuses,
                                   status,
                                   mgmt_url=None):
        with context.session.begin(subtransactions=True):
            mea_db = self._get_mea_db(
                context, policy['mea']['id'], previous_statuses, status)
            if mgmt_url:
                mea_db.update({'mgmt_url': mgmt_url})
            updated_mea_dict = self._make_mea_dict(mea_db)
        self._cos_db_plg.create_event(
            context, res_id=updated_mea_dict['id'],
            res_type=constants.RES_TYPE_MEA,
            res_state=updated_mea_dict['status'],
            evt_type=constants.RES_EVT_SCALE,
            tstamp=timeutils.utcnow())
        return updated_mea_dict

    def _update_mea_pre(self, context, mea_id):
        with context.session.begin(subtransactions=True):
            mea_db = self._get_mea_db(
                context, mea_id, _ACTIVE_UPDATE, constants.PENDING_UPDATE)
        updated_mea_dict = self._make_mea_dict(mea_db)
        self._cos_db_plg.create_event(
            context, res_id=mea_id,
            res_type=constants.RES_TYPE_MEA,
            res_state=updated_mea_dict['status'],
            evt_type=constants.RES_EVT_UPDATE,
            tstamp=timeutils.utcnow())
        return updated_mea_dict

    def _update_mea_post(self, context, mea_id, new_status,
                         new_mea_dict):
        updated_time_stamp = timeutils.utcnow()
        with context.session.begin(subtransactions=True):
            (self._model_query(context, MEA).
             filter(MEA.id == mea_id).
             filter(MEA.status == constants.PENDING_UPDATE).
             update({'status': new_status,
                     'updated_at': updated_time_stamp}))

            dev_attrs = new_mea_dict.get('attributes', {})
            (context.session.query(MEAAttribute).
             filter(MEAAttribute.mea_id == mea_id).
             filter(~MEAAttribute.key.in_(dev_attrs.keys())).
             delete(synchronize_session='fetch'))

            for (key, value) in dev_attrs.items():
                if 'vim_auth' not in key:
                    self._mea_attribute_update_or_create(context, mea_id,
                                                         key, value)
        self._cos_db_plg.create_event(
            context, res_id=mea_id,
            res_type=constants.RES_TYPE_MEA,
            res_state=new_status,
            evt_type=constants.RES_EVT_UPDATE,
            tstamp=updated_time_stamp)

    def _delete_mea_pre(self, context, mea_id):
        with context.session.begin(subtransactions=True):
            mea_db = self._get_mea_db(
                context, mea_id, _ACTIVE_UPDATE_ERROR_DEAD,
                constants.PENDING_DELETE)
        deleted_mea_db = self._make_mea_dict(mea_db)
        self._cos_db_plg.create_event(
            context, res_id=mea_id,
            res_type=constants.RES_TYPE_MEA,
            res_state=deleted_mea_db['status'],
            evt_type=constants.RES_EVT_DELETE,
            tstamp=timeutils.utcnow(), details="MEA delete initiated")
        return deleted_mea_db

    def _delete_mea_post(self, context, mea_dict, error, soft_delete=True):
        mea_id = mea_dict['id']
        with context.session.begin(subtransactions=True):
            query = (
                self._model_query(context, MEA).
                filter(MEA.id == mea_id).
                filter(MEA.status == constants.PENDING_DELETE))
            if error:
                query.update({'status': constants.ERROR})
                self._cos_db_plg.create_event(
                    context, res_id=mea_id,
                    res_type=constants.RES_TYPE_MEA,
                    res_state=constants.ERROR,
                    evt_type=constants.RES_EVT_DELETE,
                    tstamp=timeutils.utcnow(),
                    details="MEA Delete ERROR")
            else:
                if soft_delete:
                    deleted_time_stamp = timeutils.utcnow()
                    query.update({'deleted_at': deleted_time_stamp})
                    self._cos_db_plg.create_event(
                        context, res_id=mea_id,
                        res_type=constants.RES_TYPE_MEA,
                        res_state=constants.PENDING_DELETE,
                        evt_type=constants.RES_EVT_DELETE,
                        tstamp=deleted_time_stamp,
                        details="MEA Delete Complete")
                else:
                    (self._model_query(context, MEAAttribute).
                     filter(MEAAttribute.mea_id == mea_id).delete())
                    query.delete()

                # Delete the corresponding mead
                if mea_dict['mead']['template_source'] == "inline":
                    self.delete_mead(context, mea_dict["mead_id"])

    # Reference implementation; needs to be overridden by a subclass.
    def create_mea(self, context, mea):
        mea_dict = self._create_mea_pre(context, mea)
        # Start the actual creation of the hosting MEA here. Waiting for
        # completion of the creation should be done in the background by
        # another thread if it takes a while.
        instance_id = uuidutils.generate_uuid()
        mea_dict['instance_id'] = instance_id
        self._create_mea_post(context, mea_dict['id'], instance_id, None,
                              mea_dict)
        self._create_mea_status(context, mea_dict['id'],
                                constants.ACTIVE)
        return mea_dict

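    # Subclass sketch (hypothetical backend call): real plugins override the
    # reference implementation and drive an actual infra driver between the
    # pre/post bookkeeping steps, e.g.:
    #
    #     class MyMEMPlugin(MEMPluginDb):
    #         def create_mea(self, context, mea):
    #             mea_dict = self._create_mea_pre(context, mea)
    #             instance_id = self._boot_instance(mea_dict)  # backend
    #             self._create_mea_post(context, mea_dict['id'], instance_id,
    #                                   None, mea_dict)
    #             self._create_mea_status(context, mea_dict['id'],
    #                                     constants.ACTIVE)
    #             return mea_dict
    #
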
    # Reference implementation; needs to be overridden by a subclass.
    def update_mea(self, context, mea_id, mea):
        mea_dict = self._update_mea_pre(context, mea_id)
        # Start the actual update of the hosting MEA here. Waiting for
        # completion of the update should be done in the background by
        # another thread if it takes a while.
        self._update_mea_post(context, mea_id,
                              constants.ACTIVE,
                              mea_dict)
        return mea_dict

    # Reference implementation; needs to be overridden by a subclass.
    def delete_mea(self, context, mea_id, soft_delete=True):
        mea_dict = self._delete_mea_pre(context, mea_id)
        # Start the actual deletion of the hosting MEA here. Waiting for
        # completion of the deletion should be done in the background by
        # another thread if it takes a while.
        self._delete_mea_post(context,
                              mea_dict,
                              False,
                              soft_delete=soft_delete)

    def get_mea(self, context, mea_id, fields=None):
        mea_db = self._get_resource(context, MEA, mea_id)
        return self._make_mea_dict(mea_db, fields)

    def get_meas(self, context, filters=None, fields=None):
        return self._get_collection(context, MEA, self._make_mea_dict,
                                    filters=filters, fields=fields)

    def set_mea_error_status_reason(self, context, mea_id, new_reason):
        with context.session.begin(subtransactions=True):
            (self._model_query(context, MEA).
             filter(MEA.id == mea_id).
             update({'error_reason': new_reason}))

    def _mark_mea_status(self, mea_id, exclude_status, new_status):
        context = t_context.get_admin_context()
        with context.session.begin(subtransactions=True):
            try:
                mea_db = (
                    self._model_query(context, MEA).
                    filter(MEA.id == mea_id).
                    filter(~MEA.status.in_(exclude_status)).
                    with_lockmode('update').one())
            except orm_exc.NoResultFound:
                LOG.warning('no mea found %s', mea_id)
                return False

            mea_db.update({'status': new_status})
            self._cos_db_plg.create_event(
                context, res_id=mea_id,
                res_type=constants.RES_TYPE_MEA,
                res_state=new_status,
                evt_type=constants.RES_EVT_MONITOR,
                tstamp=timeutils.utcnow())
        return True

    def _mark_mea_error(self, mea_id):
        return self._mark_mea_status(
            mea_id, [constants.DEAD], constants.ERROR)

    def _mark_mea_dead(self, mea_id):
        exclude_status = [
            constants.DOWN,
            constants.PENDING_CREATE,
            constants.PENDING_UPDATE,
            constants.PENDING_DELETE,
            constants.INACTIVE,
            constants.ERROR]
        return self._mark_mea_status(
            mea_id, exclude_status, constants.DEAD)
0
apmec/db/meo/__init__.py
Normal file
57
apmec/db/meo/meo_db.py
Normal file
@@ -0,0 +1,57 @@
# Copyright 2016 Brocade Communications System, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import sqlalchemy as sa
from sqlalchemy import orm
from sqlalchemy import schema
from sqlalchemy import sql

from apmec.db import model_base
from apmec.db import models_v1
from apmec.db import types


class Vim(model_base.BASE,
          models_v1.HasId,
          models_v1.HasTenant,
          models_v1.Audit):
    type = sa.Column(sa.String(64), nullable=False)
    name = sa.Column(sa.String(255), nullable=False)
    description = sa.Column(sa.Text, nullable=True)
    placement_attr = sa.Column(types.Json, nullable=True)
    shared = sa.Column(sa.Boolean, default=False,
                       server_default=sql.false(), nullable=False)
    is_default = sa.Column(sa.Boolean, default=False,
                           server_default=sql.false(), nullable=False)
    vim_auth = orm.relationship('VimAuth')
    status = sa.Column(sa.String(255), nullable=False)

    __table_args__ = (
        schema.UniqueConstraint(
            "tenant_id",
            "name",
            "deleted_at",
            name="uniq_vim0tenant_id0name0deleted_at"),
    )


class VimAuth(model_base.BASE, models_v1.HasId):
    vim_id = sa.Column(types.Uuid, sa.ForeignKey('vims.id'),
                       nullable=False)
    password = sa.Column(sa.String(255), nullable=False)
    auth_url = sa.Column(sa.String(255), nullable=False)
    vim_project = sa.Column(types.Json, nullable=False)
    auth_cred = sa.Column(types.Json, nullable=False)
208
apmec/db/meo/meo_db_plugin.py
Normal file
@@ -0,0 +1,208 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from datetime import datetime

from oslo_db.exception import DBDuplicateEntry
from oslo_utils import strutils
from oslo_utils import timeutils
from oslo_utils import uuidutils
from sqlalchemy.orm import exc as orm_exc
from sqlalchemy import sql

from apmec.common import exceptions
from apmec.db.common_services import common_services_db_plugin
from apmec.db import db_base
from apmec.db.meo import meo_db
from apmec.db.mem import mem_db
from apmec.extensions import meo
from apmec import manager
from apmec.plugins.common import constants


VIM_ATTRIBUTES = ('id', 'type', 'tenant_id', 'name', 'description',
                  'placement_attr', 'shared', 'is_default',
                  'created_at', 'updated_at', 'status')

VIM_AUTH_ATTRIBUTES = ('auth_url', 'vim_project', 'password', 'auth_cred')


class MeoPluginDb(meo.MEOPluginBase, db_base.CommonDbMixin):

    def __init__(self):
        super(MeoPluginDb, self).__init__()
        self._cos_db_plg = common_services_db_plugin.CommonServicesPluginDb()

    @property
    def _core_plugin(self):
        return manager.ApmecManager.get_plugin()

    def _make_vim_dict(self, vim_db, fields=None, mask_password=True):
        res = dict((key, vim_db[key]) for key in VIM_ATTRIBUTES)
        vim_auth_db = vim_db.vim_auth
        res['auth_url'] = vim_auth_db[0].auth_url
        res['vim_project'] = vim_auth_db[0].vim_project
        res['auth_cred'] = vim_auth_db[0].auth_cred
        res['auth_cred']['password'] = vim_auth_db[0].password
        if mask_password:
            res['auth_cred'] = strutils.mask_dict_password(res['auth_cred'])
        return self._fields(res, fields)

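    # Masking sketch: with mask_password left at its default of True, the
    # returned dict never carries the plaintext secret (oslo strutils masks
    # it); passing mask_password=False yields the stored value. Hypothetical
    # calls:
    #
    #     plugin.get_vim(ctx, vim_id)['auth_cred']['password']   # masked
    #     plugin.get_vim(ctx, vim_id,
    #                    mask_password=False)['auth_cred']['password']
    #
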
    def _fields(self, resource, fields):
        if fields:
            return dict(((key, item) for key, item in resource.items()
                         if key in fields))
        return resource

    def _get_resource(self, context, model, id):
        try:
            return self._get_by_id(context, model, id)
        except orm_exc.NoResultFound:
            if issubclass(model, meo_db.Vim):
                raise meo.VimNotFoundException(vim_id=id)
            else:
                raise

    def create_vim(self, context, vim):
        self._validate_default_vim(context, vim)
        vim_cred = vim['auth_cred']

        try:
            with context.session.begin(subtransactions=True):
                vim_db = meo_db.Vim(
                    id=vim.get('id'),
                    type=vim.get('type'),
                    tenant_id=vim.get('tenant_id'),
                    name=vim.get('name'),
                    description=vim.get('description'),
                    placement_attr=vim.get('placement_attr'),
                    is_default=vim.get('is_default'),
                    status=vim.get('status'),
                    deleted_at=datetime.min)
                context.session.add(vim_db)
                vim_auth_db = meo_db.VimAuth(
                    id=uuidutils.generate_uuid(),
                    vim_id=vim.get('id'),
                    password=vim_cred.pop('password'),
                    vim_project=vim.get('vim_project'),
                    auth_url=vim.get('auth_url'),
                    auth_cred=vim_cred)
                context.session.add(vim_auth_db)
        except DBDuplicateEntry as e:
            raise exceptions.DuplicateEntity(
                _type="vim",
                entry=e.columns)
        vim_dict = self._make_vim_dict(vim_db)
        self._cos_db_plg.create_event(
            context, res_id=vim_dict['id'],
            res_type=constants.RES_TYPE_VIM,
            res_state=vim_dict['status'],
            evt_type=constants.RES_EVT_CREATE,
            tstamp=vim_dict['created_at'])
        return vim_dict

    def delete_vim(self, context, vim_id, soft_delete=True):
        with context.session.begin(subtransactions=True):
            vim_db = self._get_resource(context, meo_db.Vim, vim_id)
            if soft_delete:
                vim_db.update({'deleted_at': timeutils.utcnow()})
                self._cos_db_plg.create_event(
                    context, res_id=vim_db['id'],
                    res_type=constants.RES_TYPE_VIM,
                    res_state=vim_db['status'],
                    evt_type=constants.RES_EVT_DELETE,
                    tstamp=vim_db[constants.RES_EVT_DELETED_FLD])
            else:
                context.session.query(meo_db.VimAuth).filter_by(
                    vim_id=vim_id).delete()
                context.session.delete(vim_db)

    def is_vim_still_in_use(self, context, vim_id):
        with context.session.begin(subtransactions=True):
            meas_db = self._model_query(context, mem_db.MEA).filter_by(
                vim_id=vim_id).first()
            if meas_db is not None:
                raise meo.VimInUseException(vim_id=vim_id)
        return meas_db

    def get_vim(self, context, vim_id, fields=None, mask_password=True):
        vim_db = self._get_resource(context, meo_db.Vim, vim_id)
        return self._make_vim_dict(vim_db, mask_password=mask_password)

    def get_vims(self, context, filters=None, fields=None):
        return self._get_collection(context, meo_db.Vim, self._make_vim_dict,
                                    filters=filters, fields=fields)

    def update_vim(self, context, vim_id, vim):
        self._validate_default_vim(context, vim, vim_id=vim_id)
        with context.session.begin(subtransactions=True):
            vim_cred = vim['auth_cred']
            vim_project = vim['vim_project']
            vim_db = self._get_resource(context, meo_db.Vim, vim_id)
            try:
                if 'name' in vim:
                    vim_db.update({'name': vim.get('name')})
                if 'description' in vim:
                    vim_db.update({'description': vim.get('description')})
                if 'is_default' in vim:
                    vim_db.update({'is_default': vim.get('is_default')})
                if 'placement_attr' in vim:
                    vim_db.update(
                        {'placement_attr': vim.get('placement_attr')})
                vim_auth_db = (self._model_query(
                    context, meo_db.VimAuth).filter(
                        meo_db.VimAuth.vim_id == vim_id).with_lockmode(
                            'update').one())
            except orm_exc.NoResultFound:
                raise meo.VimNotFoundException(vim_id=vim_id)
            vim_auth_db.update({'auth_cred': vim_cred, 'password':
                                vim_cred.pop('password'), 'vim_project':
                                vim_project})
            vim_db.update({'updated_at': timeutils.utcnow()})
            self._cos_db_plg.create_event(
                context, res_id=vim_db['id'],
                res_type=constants.RES_TYPE_VIM,
                res_state=vim_db['status'],
                evt_type=constants.RES_EVT_UPDATE,
                tstamp=vim_db[constants.RES_EVT_UPDATED_FLD])

        return self.get_vim(context, vim_id)

    def update_vim_status(self, context, vim_id, status):
        with context.session.begin(subtransactions=True):
            try:
                vim_db = (self._model_query(context, meo_db.Vim).filter(
                    meo_db.Vim.id == vim_id).with_lockmode('update').one())
            except orm_exc.NoResultFound:
                raise meo.VimNotFoundException(vim_id=vim_id)
            vim_db.update({'status': status,
                           'updated_at': timeutils.utcnow()})
        return self._make_vim_dict(vim_db)

    def _validate_default_vim(self, context, vim, vim_id=None):
        if not vim.get('is_default'):
            return True
        try:
            vim_db = self._get_default_vim(context)
        except orm_exc.NoResultFound:
            return True
        if vim_id == vim_db.id:
            return True
        raise meo.VimDefaultDuplicateException(vim_id=vim_db.id)

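    # Validation sketch (hypothetical payloads): only one default VIM may
    # exist, so the second call below raises
    # meo.VimDefaultDuplicateException while the first default is present;
    # re-saving the current default itself is allowed (vim_id match).
    #
    #     plugin.create_vim(ctx, dict(vim_one, is_default=True))   # OK
    #     plugin.create_vim(ctx, dict(vim_two, is_default=True))   # raises
    #
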
    def _get_default_vim(self, context):
        query = self._model_query(context, meo_db.Vim)
        return query.filter(meo_db.Vim.is_default == sql.true()).one()

    def get_default_vim(self, context):
        vim_db = self._get_default_vim(context)
        return self._make_vim_dict(vim_db, mask_password=False)
384
apmec/db/meo/mes_db.py
Normal file
@@ -0,0 +1,384 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import ast
from datetime import datetime

from oslo_db.exception import DBDuplicateEntry
from oslo_log import log as logging
from oslo_utils import timeutils
from oslo_utils import uuidutils
from six import iteritems

import sqlalchemy as sa
from sqlalchemy import orm
from sqlalchemy.orm import exc as orm_exc
from sqlalchemy import schema

from apmec.common import exceptions
from apmec.db.common_services import common_services_db_plugin
from apmec.db import db_base
from apmec.db import model_base
from apmec.db import models_v1
from apmec.db import types
from apmec.extensions import meo
from apmec.extensions.meo_plugins import edge_service
from apmec.plugins.common import constants

LOG = logging.getLogger(__name__)
_ACTIVE_UPDATE = (constants.ACTIVE, constants.PENDING_UPDATE)
_ACTIVE_UPDATE_ERROR_DEAD = (
    constants.PENDING_CREATE, constants.ACTIVE, constants.PENDING_UPDATE,
    constants.ERROR, constants.DEAD)
CREATE_STATES = (constants.PENDING_CREATE, constants.DEAD)


###########################################################################
# db tables

class MESD(model_base.BASE, models_v1.HasId, models_v1.HasTenant,
           models_v1.Audit):
    """Represents a MESD used to create a MES."""

    __tablename__ = 'mesd'
    # Descriptive name
    name = sa.Column(sa.String(255), nullable=False)
    description = sa.Column(sa.Text)
    meads = sa.Column(types.Json, nullable=True)

    # MESD template source - onboarded
    template_source = sa.Column(sa.String(255), server_default='onboarded')

    # (key, value) pair to spin up
    attributes = orm.relationship('MESDAttribute',
                                  backref='mesd')

    __table_args__ = (
        schema.UniqueConstraint(
            "tenant_id",
            "name",
            name="uniq_mesd0tenant_id0name"),
    )


class MESDAttribute(model_base.BASE, models_v1.HasId):
    """Represents (key, value) attributes necessary for creating a MES."""

    __tablename__ = 'mesd_attribute'
    mesd_id = sa.Column(types.Uuid, sa.ForeignKey('mesd.id'),
                        nullable=False)
    key = sa.Column(sa.String(255), nullable=False)
    value = sa.Column(sa.TEXT(65535), nullable=True)


class MES(model_base.BASE, models_v1.HasId, models_v1.HasTenant,
          models_v1.Audit):
    """Represents a network service that deploys services."""

    __tablename__ = 'mes'
    mesd_id = sa.Column(types.Uuid, sa.ForeignKey('mesd.id'))
    mesd = orm.relationship('MESD')

    name = sa.Column(sa.String(255), nullable=False)
    description = sa.Column(sa.Text, nullable=True)

    # Dict of MEA details that the network service launches
    mea_ids = sa.Column(sa.TEXT(65535), nullable=True)

    # Dict of mgmt urls that the network service launches
    mgmt_urls = sa.Column(sa.TEXT(65535), nullable=True)

    status = sa.Column(sa.String(64), nullable=False)
    vim_id = sa.Column(types.Uuid, sa.ForeignKey('vims.id'), nullable=False)
    error_reason = sa.Column(sa.Text, nullable=True)

    __table_args__ = (
        schema.UniqueConstraint(
            "tenant_id",
            "name",
            name="uniq_mes0tenant_id0name"),
    )


class MESPluginDb(edge_service.MESPluginBase, db_base.CommonDbMixin):

    def __init__(self):
        super(MESPluginDb, self).__init__()
        self._cos_db_plg = common_services_db_plugin.CommonServicesPluginDb()

    def _get_resource(self, context, model, id):
        try:
            return self._get_by_id(context, model, id)
        except orm_exc.NoResultFound:
            if issubclass(model, MESD):
                raise edge_service.MESDNotFound(mesd_id=id)
            elif issubclass(model, MES):
                raise edge_service.MESNotFound(mes_id=id)
            else:
                raise

    def _get_mes_db(self, context, mes_id, current_statuses, new_status):
        try:
            mes_db = (
                self._model_query(context, MES).
                filter(MES.id == mes_id).
                filter(MES.status.in_(current_statuses)).
                with_lockmode('update').one())
        except orm_exc.NoResultFound:
            raise edge_service.MESNotFound(mes_id=mes_id)
        mes_db.update({'status': new_status})
        return mes_db

    def _make_attributes_dict(self, attributes_db):
        return dict((attr.key, attr.value) for attr in attributes_db)

    def _make_mesd_dict(self, mesd, fields=None):
        res = {
            'attributes': self._make_attributes_dict(mesd['attributes']),
        }
        key_list = ('id', 'tenant_id', 'name', 'description',
                    'created_at', 'updated_at', 'meads', 'template_source')
        res.update((key, mesd[key]) for key in key_list)
        return self._fields(res, fields)

    def _make_dev_attrs_dict(self, dev_attrs_db):
        return dict((arg.key, arg.value) for arg in dev_attrs_db)

    def _make_mes_dict(self, mes_db, fields=None):
        LOG.debug('mes_db %s', mes_db)
        res = {}
        key_list = ('id', 'tenant_id', 'mesd_id', 'name', 'description',
                    'mea_ids', 'status', 'mgmt_urls', 'error_reason',
                    'vim_id', 'created_at', 'updated_at')
        res.update((key, mes_db[key]) for key in key_list)
        return self._fields(res, fields)

    def create_mesd(self, context, mesd):
        meads = mesd['meads']
        mesd = mesd['mesd']
        LOG.debug('mesd %s', mesd)
        tenant_id = self._get_tenant_id_for_create(context, mesd)
        template_source = mesd.get('template_source')

        try:
            with context.session.begin(subtransactions=True):
                mesd_id = uuidutils.generate_uuid()
                mesd_db = MESD(
                    id=mesd_id,
                    tenant_id=tenant_id,
                    name=mesd.get('name'),
                    meads=meads,
                    description=mesd.get('description'),
                    deleted_at=datetime.min,
                    template_source=template_source)
                context.session.add(mesd_db)
                for (key, value) in mesd.get('attributes', {}).items():
                    attribute_db = MESDAttribute(
                        id=uuidutils.generate_uuid(),
                        mesd_id=mesd_id,
                        key=key,
                        value=value)
                    context.session.add(attribute_db)
        except DBDuplicateEntry as e:
            raise exceptions.DuplicateEntity(
                _type="mesd",
                entry=e.columns)
        LOG.debug('mesd_db %(mesd_db)s %(attributes)s ',
                  {'mesd_db': mesd_db,
                   'attributes': mesd_db.attributes})
        mesd_dict = self._make_mesd_dict(mesd_db)
        LOG.debug('mesd_dict %s', mesd_dict)
        self._cos_db_plg.create_event(
            context, res_id=mesd_dict['id'],
            res_type=constants.RES_TYPE_MESD,
            res_state=constants.RES_EVT_ONBOARDED,
            evt_type=constants.RES_EVT_CREATE,
            tstamp=mesd_dict[constants.RES_EVT_CREATED_FLD])
        return mesd_dict

    def delete_mesd(self,
                    context,
                    mesd_id,
                    soft_delete=True):
        with context.session.begin(subtransactions=True):
            mess_db = context.session.query(MES).filter_by(
                mesd_id=mesd_id).first()
            if mess_db is not None and mess_db.deleted_at is None:
                raise meo.MESDInUse(mesd_id=mesd_id)

            mesd_db = self._get_resource(context, MESD,
                                         mesd_id)
            if soft_delete:
                mesd_db.update({'deleted_at': timeutils.utcnow()})
                self._cos_db_plg.create_event(
                    context, res_id=mesd_db['id'],
                    res_type=constants.RES_TYPE_MESD,
                    res_state=constants.RES_EVT_NA_STATE,
                    evt_type=constants.RES_EVT_DELETE,
                    tstamp=mesd_db[constants.RES_EVT_DELETED_FLD])
            else:
                context.session.query(MESDAttribute).filter_by(
                    mesd_id=mesd_id).delete()
                context.session.delete(mesd_db)

    def get_mesd(self, context, mesd_id, fields=None):
        mesd_db = self._get_resource(context, MESD, mesd_id)
        return self._make_mesd_dict(mesd_db)

    def get_mesds(self, context, filters, fields=None):
        if ('template_source' in filters) and \
                (filters['template_source'][0] == 'all'):
            filters.pop('template_source')
        return self._get_collection(context, MESD,
                                    self._make_mesd_dict,
                                    filters=filters, fields=fields)

    # Reference implementation; needs to be overridden by a subclass.
    def create_mes(self, context, mes):
        LOG.debug('mes %s', mes)
        mes = mes['mes']
        tenant_id = self._get_tenant_id_for_create(context, mes)
        mesd_id = mes['mesd_id']
        vim_id = mes['vim_id']
        name = mes.get('name')
        mes_id = uuidutils.generate_uuid()
        try:
            with context.session.begin(subtransactions=True):
                mesd_db = self._get_resource(context, MESD,
                                             mesd_id)
                mes_db = MES(id=mes_id,
                             tenant_id=tenant_id,
                             name=name,
                             description=mesd_db.description,
                             mea_ids=None,
                             status=constants.PENDING_CREATE,
                             mgmt_urls=None,
                             mesd_id=mesd_id,
                             vim_id=vim_id,
                             error_reason=None,
                             deleted_at=datetime.min)
                context.session.add(mes_db)
        except DBDuplicateEntry as e:
            raise exceptions.DuplicateEntity(
                _type="mes",
                entry=e.columns)
        evt_details = "MES UUID assigned."
        self._cos_db_plg.create_event(
            context, res_id=mes_id,
            res_type=constants.RES_TYPE_mes,
            res_state=constants.PENDING_CREATE,
            evt_type=constants.RES_EVT_CREATE,
            tstamp=mes_db[constants.RES_EVT_CREATED_FLD],
            details=evt_details)
        return self._make_mes_dict(mes_db)

    def create_mes_post(self, context, mes_id, mistral_obj,
                        mead_dict, error_reason):
        LOG.debug('mes ID %s', mes_id)
        output = ast.literal_eval(mistral_obj.output)
        mgmt_urls = dict()
        mea_ids = dict()
        if len(output) > 0:
            for mead_name, mead_val in iteritems(mead_dict):
                for instance in mead_val['instances']:
                    if 'mgmt_url_' + instance in output:
                        mgmt_urls[instance] = ast.literal_eval(
                            output['mgmt_url_' + instance].strip())
                        mea_ids[instance] = output['mea_id_' + instance]
            mea_ids = str(mea_ids)
            mgmt_urls = str(mgmt_urls)

        if not mea_ids:
            mea_ids = None
        if not mgmt_urls:
            mgmt_urls = None
        status = constants.ACTIVE if mistral_obj.state == 'SUCCESS' \
            else constants.ERROR
        with context.session.begin(subtransactions=True):
            mes_db = self._get_resource(context, MES,
                                        mes_id)
            mes_db.update({'mea_ids': mea_ids})
            mes_db.update({'mgmt_urls': mgmt_urls})
            mes_db.update({'status': status})
            mes_db.update({'error_reason': error_reason})
            mes_db.update({'updated_at': timeutils.utcnow()})
            mes_dict = self._make_mes_dict(mes_db)
            self._cos_db_plg.create_event(
                context, res_id=mes_dict['id'],
                res_type=constants.RES_TYPE_mes,
                res_state=constants.RES_EVT_NA_STATE,
                evt_type=constants.RES_EVT_UPDATE,
                tstamp=mes_dict[constants.RES_EVT_UPDATED_FLD])
        return mes_dict

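    # Output-contract sketch (values hypothetical): create_mes_post expects
    # the Mistral workflow output to carry one 'mea_id_<instance>' and one
    # 'mgmt_url_<instance>' entry per deployed instance, e.g.:
    #
    #     output = {
    #         'mea_id_VDU1': 'uuid-of-created-mea',
    #         'mgmt_url_VDU1': "{'VDU1': '192.168.120.3'}",
    #     }
    #
    # Both collections are flattened to strings before being stored on the
    # MES row, and an empty collection is stored as NULL.
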
    # Reference implementation; needs to be overridden by a subclass.
    def delete_mes(self, context, mes_id):
        with context.session.begin(subtransactions=True):
            mes_db = self._get_mes_db(
                context, mes_id, _ACTIVE_UPDATE_ERROR_DEAD,
                constants.PENDING_DELETE)
        deleted_mes_db = self._make_mes_dict(mes_db)
        self._cos_db_plg.create_event(
            context, res_id=mes_id,
            res_type=constants.RES_TYPE_mes,
            res_state=deleted_mes_db['status'],
            evt_type=constants.RES_EVT_DELETE,
            tstamp=timeutils.utcnow(), details="MES delete initiated")
        return deleted_mes_db

    def delete_mes_post(self, context, mes_id, mistral_obj,
                        error_reason, soft_delete=True):
        mes = self.get_mes(context, mes_id)
        mesd_id = mes.get('mesd_id')
        with context.session.begin(subtransactions=True):
            query = (
                self._model_query(context, MES).
                filter(MES.id == mes_id).
                filter(MES.status == constants.PENDING_DELETE))
            if mistral_obj and mistral_obj.state == 'ERROR':
                query.update({'status': constants.ERROR})
                self._cos_db_plg.create_event(
                    context, res_id=mes_id,
                    res_type=constants.RES_TYPE_mes,
                    res_state=constants.ERROR,
                    evt_type=constants.RES_EVT_DELETE,
                    tstamp=timeutils.utcnow(),
                    details="MES Delete ERROR")
            else:
                if soft_delete:
                    deleted_time_stamp = timeutils.utcnow()
                    query.update({'deleted_at': deleted_time_stamp})
                    self._cos_db_plg.create_event(
                        context, res_id=mes_id,
                        res_type=constants.RES_TYPE_mes,
                        res_state=constants.PENDING_DELETE,
                        evt_type=constants.RES_EVT_DELETE,
                        tstamp=deleted_time_stamp,
                        details="MES Delete Complete")
                else:
                    query.delete()
                template_db = self._get_resource(context, MESD, mesd_id)
                if template_db.get('template_source') == 'inline':
                    self.delete_mesd(context, mesd_id)

    def get_mes(self, context, mes_id, fields=None):
        mes_db = self._get_resource(context, MES, mes_id)
        return self._make_mes_dict(mes_db)

    def get_mess(self, context, filters=None, fields=None):
        return self._get_collection(context, MES,
                                    self._make_mes_dict,
                                    filters=filters, fields=fields)
87
apmec/db/migration/README
Normal file
@@ -0,0 +1,87 @@
# Copyright 2012 New Dream Network, LLC (DreamHost)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author Mark McClain (DreamHost)

The migrations in the alembic/versions directory contain the changes needed
to migrate from older Apmec releases to newer versions. A migration occurs
by executing a script that details the changes needed to upgrade the
database. The migration scripts are ordered so that multiple scripts can run
sequentially to update the database. The scripts are executed by Apmec's
migration wrapper, which uses the Alembic library to manage the migration.
Apmec supports migration from Folsom or later.


If you are a deployer or developer and want to migrate from Folsom to Grizzly
or later, you must first add version tracking to the database:

$ apmec-db-manage --config-file /path/to/apmec.conf \
--config-file /path/to/plugin/config.ini stamp folsom

You can then upgrade to the latest database version via:
$ apmec-db-manage --config-file /path/to/apmec.conf \
--config-file /path/to/plugin/config.ini upgrade head

To check the current database version:
$ apmec-db-manage --config-file /path/to/apmec.conf \
--config-file /path/to/plugin/config.ini current

To create a script to run the migration offline:
$ apmec-db-manage --config-file /path/to/apmec.conf \
--config-file /path/to/plugin/config.ini upgrade head --sql

To run the offline migration between specific migration versions:
$ apmec-db-manage --config-file /path/to/apmec.conf \
--config-file /path/to/plugin/config.ini upgrade \
<start version>:<end version> --sql

To upgrade the database incrementally:
$ apmec-db-manage --config-file /path/to/apmec.conf \
--config-file /path/to/plugin/config.ini upgrade --delta <# of revs>


DEVELOPERS:
A database migration script is required when you submit a change to Apmec
that alters the database model definition. The migration script is a special
Python file that includes code to upgrade the database to match the changes
in the model definition. Alembic will execute these scripts in order to
provide a linear migration path between revisions. The apmec-db-manage
command can be used to generate a migration template for you to complete.
The operations in the template are those supported by the Alembic migration
library.

$ apmec-db-manage --config-file /path/to/apmec.conf \
--config-file /path/to/plugin/config.ini revision \
-m "description of revision" \
--autogenerate

This generates a prepopulated template with the changes needed to match the
database state with the models. You should inspect the autogenerated template
to ensure that the proper models have been altered.

In rare circumstances, you may want to start with an empty migration template
and manually author the changes necessary for an upgrade. You can create a
blank file via:

$ apmec-db-manage --config-file /path/to/apmec.conf \
--config-file /path/to/plugin/config.ini revision \
-m "description of revision"

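A generated revision is an ordinary Python module with the upgrade logic; a
minimal sketch (the revision identifiers and column are placeholders) looks
like:

    from alembic import op
    import sqlalchemy as sa

    revision = 'abc123def456'
    down_revision = 'def456abc123'

    def upgrade(active_plugins=None, options=None):
        op.add_column('mea', sa.Column('new_attr', sa.String(255),
                                       nullable=True))
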
The migration timeline should remain linear so that there is a clear path
when upgrading. To verify that the timeline does not branch, you can run
this command:
$ apmec-db-manage --config-file /path/to/apmec.conf \
--config-file /path/to/plugin/config.ini check_migration

If the migration path does branch, you can find the branch point via:
$ apmec-db-manage --config-file /path/to/apmec.conf \
--config-file /path/to/plugin/config.ini history
95
apmec/db/migration/__init__.py
Normal file
@@ -0,0 +1,95 @@
# Copyright 2012 New Dream Network, LLC (DreamHost)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from alembic import op
import contextlib
import sqlalchemy as sa
from sqlalchemy.engine import reflection


def alter_enum(table, column, enum_type, nullable):
    bind = op.get_bind()
    engine = bind.engine
    if engine.name == 'postgresql':
        values = {'table': table,
                  'column': column,
                  'name': enum_type.name}
        op.execute("ALTER TYPE %(name)s RENAME TO old_%(name)s" % values)
        enum_type.create(bind, checkfirst=False)
        op.execute("ALTER TABLE %(table)s RENAME COLUMN %(column)s TO "
                   "old_%(column)s" % values)
        op.add_column(table, sa.Column(column, enum_type, nullable=nullable))
        op.execute("UPDATE %(table)s SET %(column)s = "
                   "old_%(column)s::text::%(name)s" % values)
        op.execute("ALTER TABLE %(table)s DROP COLUMN old_%(column)s" % values)
        op.execute("DROP TYPE old_%(name)s" % values)
    else:
        op.alter_column(table, column, type_=enum_type,
                        existing_nullable=nullable)


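# Usage sketch (hypothetical table/column): alter_enum hides the
# PostgreSQL-specific type swap behind a single call; on other engines it
# degrades to a plain ALTER COLUMN:
#
#     new_status = sa.Enum('ACTIVE', 'ERROR', name='mea_status')
#     alter_enum('mea', 'status', new_status, nullable=False)

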
def create_foreign_key_constraint(table_name, fk_constraints):
    for fk in fk_constraints:
        op.create_foreign_key(
            constraint_name=fk['name'],
            source_table=table_name,
            referent_table=fk['referred_table'],
            local_cols=fk['constrained_columns'],
            remote_cols=fk['referred_columns'],
            ondelete=fk['options'].get('ondelete')
        )


def drop_foreign_key_constraint(table_name, fk_constraints):
    for fk in fk_constraints:
        op.drop_constraint(
            constraint_name=fk['name'],
            table_name=table_name,
            type_='foreignkey'
        )


@contextlib.contextmanager
def modify_foreign_keys_constraint(table_names):
    inspector = reflection.Inspector.from_engine(op.get_bind())
    try:
        for table in table_names:
            fk_constraints = inspector.get_foreign_keys(table)
            drop_foreign_key_constraint(table, fk_constraints)
        yield
    finally:
        for table in table_names:
            fk_constraints = inspector.get_foreign_keys(table)
            create_foreign_key_constraint(table, fk_constraints)


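# Usage sketch (hypothetical migration step): the context manager drops the
# foreign keys of the named tables, lets the body alter the referenced
# columns, then recreates the constraints:
#
#     with modify_foreign_keys_constraint(['mea', 'mea_attribute']):
#         op.alter_column('mea', 'id', existing_type=sa.String(36),
#                         nullable=False)

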
def modify_foreign_keys_constraint_with_col_change(
|
||||||
|
table_name, old_local_col, new_local_col, existing_type,
|
||||||
|
nullable=False):
|
||||||
|
inspector = reflection.Inspector.from_engine(op.get_bind())
|
||||||
|
fk_constraints = inspector.get_foreign_keys(table_name)
|
||||||
|
for fk in fk_constraints:
|
||||||
|
if old_local_col in fk['constrained_columns']:
|
||||||
|
drop_foreign_key_constraint(table_name, [fk])
|
||||||
|
op.alter_column(table_name, old_local_col,
|
||||||
|
new_column_name=new_local_col,
|
||||||
|
existing_type=existing_type,
|
||||||
|
nullable=nullable)
|
||||||
|
fk_constraints = inspector.get_foreign_keys(table_name)
|
||||||
|
for fk in fk_constraints:
|
||||||
|
for i in range(len(fk['constrained_columns'])):
|
||||||
|
if old_local_col == fk['constrained_columns'][i]:
|
||||||
|
fk['constrained_columns'][i] = new_local_col
|
||||||
|
create_foreign_key_constraint(table_name, [fk])
|
||||||
|
break
|
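These helpers are imported by the individual revision scripts that follow. As a minimal usage sketch of the context-manager variant (the 'widgets' table and its column are invented for illustration, not part of this commit):

# Hypothetical revision body showing modify_foreign_keys_constraint in use.
from alembic import op
import sqlalchemy as sa

from apmec.db import migration


def upgrade(active_plugins=None, options=None):
    # FKs declared on 'widgets' are dropped before the DDL, recreated after.
    with migration.modify_foreign_keys_constraint(('widgets',)):
        op.alter_column('widgets', 'status',
                        existing_type=sa.String(length=255),
                        nullable=False)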
52
apmec/db/migration/alembic.ini
Normal file
@ -0,0 +1,52 @@
# A generic, single database configuration.

[alembic]
# path to migration scripts
script_location = %(here)s/alembic_migrations

# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s

# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false

# default to an empty string because the Apmec migration cli will
# extract the correct value and set it programmatically before alembic is
# fully invoked.
sqlalchemy.url =

# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic

[handlers]
keys = console

[formatters]
keys = generic

[logger_root]
level = WARN
handlers = console
qualname =

[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine

[logger_alembic]
level = INFO
handlers =
qualname = alembic

[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic

[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
0
apmec/db/migration/alembic_migrations/__init__.py
Normal file
84
apmec/db/migration/alembic_migrations/env.py
Normal file
@ -0,0 +1,84 @@
# Copyright 2012 New Dream Network, LLC (DreamHost)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from logging import config as logging_config

from alembic import context
from sqlalchemy import create_engine, pool

from apmec.db import model_base


# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config
apmec_config = config.apmec_config

# Interpret the config file for Python logging.
# This line sets up loggers basically.
logging_config.fileConfig(config.config_file_name)

# set the target for 'autogenerate' support
target_metadata = model_base.BASE.metadata


def run_migrations_offline():
    """Run migrations in 'offline' mode.

    This configures the context with either a URL
    or an Engine.

    Calls to context.execute() here emit the given string to the
    script output.

    """
    kwargs = dict()
    if apmec_config.database.connection:
        kwargs['url'] = apmec_config.database.connection
    else:
        kwargs['dialect_name'] = apmec_config.database.engine
    context.configure(**kwargs)

    with context.begin_transaction():
        context.run_migrations()


def run_migrations_online():
    """Run migrations in 'online' mode.

    In this scenario we need to create an Engine
    and associate a connection with the context.

    """
    engine = create_engine(
        apmec_config.database.connection,
        poolclass=pool.NullPool)

    connection = engine.connect()
    context.configure(
        connection=connection,
        target_metadata=target_metadata
    )

    try:
        with context.begin_transaction():
            context.run_migrations()
    finally:
        connection.close()


if context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()
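env.py dereferences config.apmec_config, an attribute the migration CLI is expected to attach to the Alembic Config object before invoking any command. A rough sketch of that hand-off using only the stock Alembic API; the config path, the URL, and the SimpleNamespace stand-in for the real oslo.config object are all placeholder assumptions:

# Sketch only: a wrapper injecting sqlalchemy.url and apmec_config at run time.
import types

from alembic import command
from alembic.config import Config

cfg = Config('apmec/db/migration/alembic.ini')  # illustrative path
cfg.set_main_option('sqlalchemy.url', 'mysql://user:pass@host/apmec')
# Stand-in for the oslo.config CONF that env.py reads via config.apmec_config.
cfg.apmec_config = types.SimpleNamespace(
    database=types.SimpleNamespace(
        connection='mysql://user:pass@host/apmec',
        engine='mysql'))
command.upgrade(cfg, 'head')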
36
apmec/db/migration/alembic_migrations/script.py.mako
Normal file
@ -0,0 +1,36 @@
# Copyright ${create_date.year} OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#

"""${message}

Revision ID: ${up_revision}
Revises: ${down_revision}
Create Date: ${create_date}

"""

# revision identifiers, used by Alembic.
revision = ${repr(up_revision)}
down_revision = ${repr(down_revision)}

from alembic import op
import sqlalchemy as sa
${imports if imports else ""}

from apmec.db import migration


def upgrade(active_plugins=None, options=None):
    ${upgrades if upgrades else "pass"}
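The revision scripts that follow are instances of this template; Alembic substitutes the ${...} placeholders when a revision is generated. A small illustration of the substitution, rendering the template by hand with Mako (the file path and all values are invented):

# Manual render of script.py.mako, just to show what the placeholders become.
import datetime

from mako.template import Template

source = open('script.py.mako').read()  # illustrative path
print(Template(source).render(
    message='example change',
    up_revision='abc123def456',
    down_revision='000632983ada',
    create_date=datetime.datetime.utcnow(),
    imports='',
    upgrades='pass'))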
@ -0,0 +1,33 @@
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#

"""add template_source column

Revision ID: 000632983ada
Revises: 0ad3bbce1c19
Create Date: 2016-12-22 20:30:03.931290

"""

# revision identifiers, used by Alembic.
revision = '000632983ada'
down_revision = '0ad3bbce1c19'

from alembic import op
import sqlalchemy as sa


def upgrade(active_plugins=None, options=None):
    op.add_column('mead', sa.Column('template_source', sa.String(length=255)))
@ -0,0 +1,73 @@
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#

"""create Network service tables

Revision ID: 0ad3bbce1c18
Revises: 8f7145914cb0
Create Date: 2016-12-17 19:41:01.906138

"""

# revision identifiers, used by Alembic.
revision = '0ad3bbce1c18'
down_revision = '8f7145914cb0'

from alembic import op
import sqlalchemy as sa

from apmec.db import types


def upgrade(active_plugins=None, options=None):
    op.create_table('mesd',
        sa.Column('tenant_id', sa.String(length=64), nullable=False),
        sa.Column('id', types.Uuid(length=36), nullable=False),
        sa.Column('created_at', sa.DateTime(), nullable=True),
        sa.Column('updated_at', sa.DateTime(), nullable=True),
        sa.Column('deleted_at', sa.DateTime(), nullable=True),
        sa.Column('name', sa.String(length=255), nullable=False),
        sa.Column('description', sa.Text(), nullable=True),
        sa.Column('meads', types.Json, nullable=True),
        sa.PrimaryKeyConstraint('id'),
        mysql_engine='InnoDB'
    )
    op.create_table('mes',
        sa.Column('tenant_id', sa.String(length=64), nullable=False),
        sa.Column('id', types.Uuid(length=36), nullable=False),
        sa.Column('created_at', sa.DateTime(), nullable=True),
        sa.Column('updated_at', sa.DateTime(), nullable=True),
        sa.Column('deleted_at', sa.DateTime(), nullable=True),
        sa.Column('mesd_id', types.Uuid(length=36), nullable=True),
        sa.Column('vim_id', sa.String(length=64), nullable=False),
        sa.Column('name', sa.String(length=255), nullable=False),
        sa.Column('description', sa.Text(), nullable=True),
        sa.Column('mea_ids', sa.TEXT(length=65535), nullable=True),
        sa.Column('mgmt_urls', sa.TEXT(length=65535), nullable=True),
        sa.Column('status', sa.String(length=64), nullable=False),
        sa.Column('error_reason', sa.Text(), nullable=True),
        sa.ForeignKeyConstraint(['mesd_id'], ['mesd.id'], ),
        sa.PrimaryKeyConstraint('id'),
        mysql_engine='InnoDB'
    )
    op.create_table('mesd_attribute',
        sa.Column('id', types.Uuid(length=36), nullable=False),
        sa.Column('mesd_id', types.Uuid(length=36), nullable=False),
        sa.Column('key', sa.String(length=255), nullable=False),
        sa.Column('value', sa.TEXT(length=65535), nullable=True),
        sa.ForeignKeyConstraint(['mesd_id'], ['mesd.id'], ),
        sa.PrimaryKeyConstraint('id'),
        mysql_engine='InnoDB'
    )
@ -0,0 +1,35 @@
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#

"""increase_vim_password_size

Revision ID: 0ad3bbce1c19
Revises: 0ad3bbce1c18
Create Date: 2017-01-17 09:50:46.296206

"""

from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic.
revision = '0ad3bbce1c19'
down_revision = '0ad3bbce1c18'


def upgrade(active_plugins=None, options=None):
    op.alter_column('vimauths',
                    'password',
                    type_=sa.String(length=255))
@ -0,0 +1,58 @@
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#

"""unique_constraint_name

Revision ID: 0ae5b1ce3024
Revises: 507122918800
Create Date: 2016-09-15 16:27:08.736673

"""

# revision identifiers, used by Alembic.
revision = '0ae5b1ce3024'
down_revision = '507122918800'

from alembic import op
import sqlalchemy as sa


def _migrate_duplicate_names(table):

    meta = sa.MetaData(bind=op.get_bind())
    t = sa.Table(table, meta, autoload=True)

    session = sa.orm.Session(bind=op.get_bind())
    with session.begin(subtransactions=True):
        dup_names = session.query(t.c.name).group_by(
            t.c.name).having(sa.func.count() > 1).all()
        if dup_names:
            for name in dup_names:
                duplicate_obj_query = session.query(t).filter(
                    t.c.name == name[0]).all()
                for dup_obj in duplicate_obj_query:
                    # Truncate long names so the '-<id suffix>' below still
                    # fits into the 255-character name column.
                    name = dup_obj.name[:242] if len(dup_obj.name) > 242 \
                        else dup_obj.name
                    new_name = '{0}-{1}'.format(name, dup_obj.id[-12:])
                    session.execute(t.update().where(
                        t.c.id == dup_obj.id).values(name=new_name))
    session.commit()


def upgrade(active_plugins=None, options=None):

    _migrate_duplicate_names('mea')
    _migrate_duplicate_names('mead')
    _migrate_duplicate_names('vims')
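A worked example of the renaming rule above (the id is invented): each duplicate keeps its name, truncated to 242 characters when necessary, plus '-' and the last 12 characters of its id, so the result still fits the unique name column.

# The new_name computation from _migrate_duplicate_names, in isolation.
name = 'vim-prod'
obj_id = '6f5e1d73-0000-4000-8000-a1b2c3d4e5f6'
new_name = '{0}-{1}'.format(
    name[:242] if len(name) > 242 else name, obj_id[-12:])
print(new_name)  # vim-prod-a1b2c3d4e5f6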
@ -0,0 +1,98 @@
# Copyright 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#

"""Add Service related dbs

Revision ID: 12a57080b277
Revises: 5958429bcb3c
Create Date: 2015-11-26 15:18:19.623170

"""

# revision identifiers, used by Alembic.
revision = '12a57080b277'
down_revision = '5958429bcb3c'

from alembic import op
import sqlalchemy as sa


def upgrade(active_plugins=None, options=None):
    # commands auto generated by Alembic - please adjust! #
    op.create_table(
        'servicetypes',
        sa.Column('tenant_id', sa.String(length=255), nullable=True),
        sa.Column('id', sa.String(length=36), nullable=False),
        sa.Column('template_id', sa.String(length=36), nullable=False),
        sa.Column('service_type', sa.String(length=255), nullable=False),
        sa.ForeignKeyConstraint(['template_id'], ['devicetemplates.id'], ),
        sa.PrimaryKeyConstraint('id'),
        mysql_engine='InnoDB'
    )
    op.create_table(
        'deviceservicecontexts',
        sa.Column('id', sa.String(length=36), nullable=False),
        sa.Column('device_id', sa.String(length=36), nullable=True),
        sa.Column('network_id', sa.String(length=36), nullable=True),
        sa.Column('subnet_id', sa.String(length=36), nullable=True),
        sa.Column('port_id', sa.String(length=36), nullable=True),
        sa.Column('router_id', sa.String(length=36), nullable=True),
        sa.Column('role', sa.String(length=255), nullable=True),
        sa.Column('index', sa.Integer(), nullable=True),
        sa.ForeignKeyConstraint(['device_id'], ['devices.id'], ),
        sa.PrimaryKeyConstraint('id'),
        mysql_engine='InnoDB'
    )
    op.create_table(
        'serviceinstances',
        sa.Column('tenant_id', sa.String(length=255), nullable=True),
        sa.Column('id', sa.String(length=36), nullable=False),
        sa.Column('name', sa.String(length=255), nullable=True),
        sa.Column('service_type_id', sa.String(length=36), nullable=True),
        sa.Column('service_table_id', sa.String(length=36), nullable=True),
        sa.Column('managed_by_user', sa.Boolean(), nullable=True),
        sa.Column('mgmt_driver', sa.String(length=255), nullable=True),
        sa.Column('mgmt_url', sa.String(length=255), nullable=True),
        sa.Column('status', sa.String(length=255), nullable=False),
        sa.ForeignKeyConstraint(['service_type_id'], ['servicetypes.id'], ),
        sa.PrimaryKeyConstraint('id'),
        mysql_engine='InnoDB'
    )
    op.create_table(
        'servicecontexts',
        sa.Column('id', sa.String(length=36), nullable=False),
        sa.Column('service_instance_id', sa.String(length=36), nullable=True),
        sa.Column('network_id', sa.String(length=36), nullable=True),
        sa.Column('subnet_id', sa.String(length=36), nullable=True),
        sa.Column('port_id', sa.String(length=36), nullable=True),
        sa.Column('router_id', sa.String(length=36), nullable=True),
        sa.Column('role', sa.String(length=255), nullable=True),
        sa.Column('index', sa.Integer(), nullable=True),
        sa.ForeignKeyConstraint(['service_instance_id'],
                                ['serviceinstances.id'], ),
        sa.PrimaryKeyConstraint('id'),
        mysql_engine='InnoDB'
    )
    op.create_table(
        'servicedevicebindings',
        sa.Column('service_instance_id', sa.String(length=36),
                  nullable=False),
        sa.Column('device_id', sa.String(length=36), nullable=False),
        sa.ForeignKeyConstraint(['device_id'], ['devices.id'], ),
        sa.ForeignKeyConstraint(['service_instance_id'],
                                ['serviceinstances.id'], ),
        sa.PrimaryKeyConstraint('service_instance_id', 'device_id'),
        mysql_engine='InnoDB'
    )
    # end Alembic commands #
@ -0,0 +1,42 @@
# Copyright 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#

"""Alter devices

Revision ID: 12a57080b278
Revises: 12a57080b277
Create Date: 2015-11-26 15:18:19.623170

"""

# revision identifiers, used by Alembic.
revision = '12a57080b278'
down_revision = '12a57080b277'

from alembic import op
from sqlalchemy.dialects import mysql

from apmec.db import migration


def upgrade(active_plugins=None, options=None):
    # commands auto generated by Alembic - please adjust! #
    fk_constraint = ('deviceattributes', )
    with migration.modify_foreign_keys_constraint(fk_constraint):
        op.alter_column(u'deviceattributes', 'device_id',
                        existing_type=mysql.VARCHAR(length=255),
                        nullable=False)
    op.alter_column(u'devices', 'status', existing_type=mysql.VARCHAR(
        length=255), nullable=False)
    # end Alembic commands #
@ -0,0 +1,35 @@
# Copyright 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#

"""add description to mea

Revision ID: 13c0e0661015
Revises: 4c31092895b8
Create Date: 2015-05-18 18:47:22.180962

"""

# revision identifiers, used by Alembic.
revision = '13c0e0661015'
down_revision = '4c31092895b8'

from alembic import op
import sqlalchemy as sa


def upgrade(active_plugins=None, options=None):
    op.add_column('devices',
                  sa.Column('description', sa.String(255),
                            nullable=True, server_default=''))
@ -0,0 +1,72 @@
# Copyright 2013 Intel Corporation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""add tables for apmec framework

Revision ID: 1c6b0d82afcd
Revises: None
Create Date: 2013-11-25 18:06:13.980301

"""

# revision identifiers, used by Alembic.
revision = '1c6b0d82afcd'
down_revision = None

from alembic import op
import sqlalchemy as sa


def upgrade(active_plugins=None, options=None):
    op.create_table(
        'devicetemplates',
        sa.Column('id', sa.String(length=36), nullable=False),
        sa.Column('tenant_id', sa.String(length=255), nullable=True),
        sa.Column('name', sa.String(length=255), nullable=True),
        sa.Column('description', sa.String(length=255), nullable=True),
        sa.Column('infra_driver', sa.String(length=255), nullable=True),
        sa.Column('mgmt_driver', sa.String(length=255), nullable=True),
        sa.PrimaryKeyConstraint('id'),
    )
    op.create_table(
        'devicetemplateattributes',
        sa.Column('id', sa.String(length=36), nullable=False),
        sa.Column('template_id', sa.String(length=36), nullable=False),
        sa.Column('key', sa.String(length=255), nullable=False),
        sa.Column('value', sa.String(length=4096), nullable=True),
        sa.ForeignKeyConstraint(['template_id'], ['devicetemplates.id'], ),
        sa.PrimaryKeyConstraint('id'),
    )
    op.create_table(
        'devices',
        sa.Column('id', sa.String(length=255), nullable=False),
        sa.Column('tenant_id', sa.String(length=255), nullable=True),
        sa.Column('name', sa.String(length=255), nullable=True),
        sa.Column('template_id', sa.String(length=36), nullable=True),
        sa.Column('instance_id', sa.String(length=255), nullable=True),
        sa.Column('mgmt_url', sa.String(length=255), nullable=True),
        sa.Column('status', sa.String(length=255), nullable=True),
        sa.ForeignKeyConstraint(['template_id'], ['devicetemplates.id'], ),
        sa.PrimaryKeyConstraint('id'),
    )
    op.create_table(
        'deviceattributes',
        sa.Column('id', sa.String(length=36), nullable=False),
        sa.Column('device_id', sa.String(length=255)),
        sa.Column('key', sa.String(length=255), nullable=False),
        sa.Column('value', sa.String(length=4096), nullable=True),
        sa.ForeignKeyConstraint(['device_id'], ['devices.id'], ),
        sa.PrimaryKeyConstraint('id'),
    )
@ -0,0 +1,35 @@
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#

"""Add status to vims

Revision ID: 22f5385a3d3f
Revises: 5f88e86b35c7
Create Date: 2016-05-12 13:29:30.615609

"""

# revision identifiers, used by Alembic.
revision = '22f5385a3d3f'
down_revision = '5f88e86b35c7'

from alembic import op
import sqlalchemy as sa


def upgrade(active_plugins=None, options=None):
    op.add_column('vims',
                  sa.Column('status', sa.String(255),
                            nullable=False, server_default=''))
@ -0,0 +1,35 @@
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#

"""remove proxydb related

Revision ID: 22f5385a3d4f
Revises: d4f265e8eb9d
Create Date: 2016-08-01 15:47:51.161749

"""

# revision identifiers, used by Alembic.
revision = '22f5385a3d4f'
down_revision = 'd4f265e8eb9d'

from alembic import op


def upgrade(active_plugins=None, options=None):
    # commands auto generated by Alembic - please adjust! #
    op.drop_table('proxymgmtports')
    op.drop_table('proxyserviceports')
    # end Alembic commands #
@ -0,0 +1,52 @@
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#

"""rename device db tables

Revision ID: 22f5385a3d50
Revises: 22f5385a3d4f
Create Date: 2016-08-01 15:47:51.161749

"""

# revision identifiers, used by Alembic.
revision = '22f5385a3d50'
down_revision = '22f5385a3d4f'

from alembic import op
import sqlalchemy as sa

from apmec.db import migration


def upgrade(active_plugins=None, options=None):
    # commands auto generated by Alembic - please adjust! #
    op.rename_table('devicetemplates', 'mead')
    op.rename_table('devicetemplateattributes', 'mead_attribute')
    op.rename_table('devices', 'mea')
    op.rename_table('deviceattributes', 'mea_attribute')
    migration.modify_foreign_keys_constraint_with_col_change(
        'mead_attribute', 'template_id', 'mead_id',
        sa.String(length=36))
    migration.modify_foreign_keys_constraint_with_col_change(
        'servicetypes', 'template_id', 'mead_id',
        sa.String(length=36))
    migration.modify_foreign_keys_constraint_with_col_change(
        'mea', 'template_id', 'mead_id',
        sa.String(length=36))
    migration.modify_foreign_keys_constraint_with_col_change(
        'mea_attribute', 'device_id', 'mea_id',
        sa.String(length=36))
    # end Alembic commands #
@ -0,0 +1,34 @@
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#

"""Alter value in deviceattributes

Revision ID: 24bec5f211c7
Revises: 2774a42c7163
Create Date: 2016-01-24 19:21:03.410029

"""

# revision identifiers, used by Alembic.
revision = '24bec5f211c7'
down_revision = '2774a42c7163'

from alembic import op
import sqlalchemy as sa


def upgrade(active_plugins=None, options=None):
    op.alter_column('deviceattributes',
                    'value', type_=sa.TEXT(65535), nullable=True)
@ -0,0 +1,37 @@
# Copyright 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#

"""remove service related

Revision ID: 2774a42c7163
Revises: 12a57080b278
Create Date: 2015-11-26 15:47:51.161749

"""

# revision identifiers, used by Alembic.
revision = '2774a42c7163'
down_revision = '12a57080b278'

from alembic import op


def upgrade(active_plugins=None, options=None):
    # commands auto generated by Alembic - please adjust! #
    op.drop_table('servicecontexts')
    op.drop_table('deviceservicecontexts')
    op.drop_table('servicedevicebindings')
    op.drop_table('serviceinstances')
    # end Alembic commands #
@ -0,0 +1,37 @@
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#

"""audit support

Revision ID: 2ff0a0e360f1
Revises: 22f5385a3d50
Create Date: 2016-06-02 15:14:31.888078

"""

# revision identifiers, used by Alembic.
revision = '2ff0a0e360f1'
down_revision = '22f5385a3d50'

from alembic import op
import sqlalchemy as sa


def upgrade(active_plugins=None, options=None):
    for table in ['vims', 'mea', 'mead']:
        op.add_column(table,
                      sa.Column('created_at', sa.DateTime(), nullable=True))
        op.add_column(table,
                      sa.Column('updated_at', sa.DateTime(), nullable=True))
@ -0,0 +1,36 @@
# Copyright 2017 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#

"""change_vim_shared_property_to_false

Revision ID: 31acbaeb8299
Revises: e7993093baf1
Create Date: 2017-05-30 23:46:20.034085

"""

# revision identifiers, used by Alembic.
revision = '31acbaeb8299'
down_revision = 'e7993093baf1'

from alembic import op
import sqlalchemy as sa


def upgrade(active_plugins=None, options=None):
    op.alter_column('vims', 'shared',
                    existing_type=sa.Boolean(),
                    server_default=sa.text('false'),
                    nullable=False)
@ -0,0 +1,37 @@
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#

"""set-mandatory-columns-not-null

Revision ID: 354de64ba129
Revises: b07673bb8654
Create Date: 2016-06-02 10:05:22.299780

"""

# revision identifiers, used by Alembic.
revision = '354de64ba129'
down_revision = 'b07673bb8654'

from alembic import op
import sqlalchemy as sa


def upgrade(active_plugins=None, options=None):
    for table in ['devices', 'devicetemplates', 'vims', 'servicetypes']:
        op.alter_column(table,
                        'tenant_id',
                        existing_type=sa.String(64),
                        nullable=False)
@ -0,0 +1,30 @@
# Copyright 2014 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#

"""empty message

Revision ID: 4c31092895b8
Revises: 81ffa86020d
Create Date: 2014-08-01 11:48:10.319498

"""

# revision identifiers, used by Alembic.
revision = '4c31092895b8'
down_revision = '81ffa86020d'


def upgrade(active_plugins=None, options=None):
    pass
@ -0,0 +1,45 @@
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#

"""audit_support_events

Revision ID: 4ee19c8a6d0a
Revises: 941b5a6fff9e
Create Date: 2016-06-07 03:16:53.513392

"""

# revision identifiers, used by Alembic.
revision = '4ee19c8a6d0a'
down_revision = '941b5a6fff9e'

from alembic import op
import sqlalchemy as sa

from apmec.db import types


def upgrade(active_plugins=None, options=None):
    op.create_table('events',
        sa.Column('id', sa.Integer, nullable=False, autoincrement=True),
        sa.Column('resource_id', types.Uuid, nullable=False),
        sa.Column('resource_state', sa.String(64), nullable=False),
        sa.Column('resource_type', sa.String(64), nullable=False),
        sa.Column('event_type', sa.String(64), nullable=False),
        sa.Column('timestamp', sa.DateTime, nullable=False),
        sa.Column('event_details', types.Json),
        sa.PrimaryKeyConstraint('id'),
        mysql_engine='InnoDB'
    )
@ -0,0 +1,145 @@
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#

"""adds_NFY

Revision ID: 507122918800
Revises: 4ee19c8a6d0a
Create Date: 2016-07-29 21:48:18.816277

"""

# revision identifiers, used by Alembic.
revision = '507122918800'
down_revision = '4ee19c8a6d0a'

from alembic import op
import sqlalchemy as sa

from apmec.db.types import Json


def upgrade(active_plugins=None, options=None):

    op.create_table(
        'NANYtemplates',
        sa.Column('id', sa.String(length=36), nullable=False),
        sa.Column('tenant_id', sa.String(length=64), nullable=False),
        sa.Column('name', sa.String(length=255), nullable=False),
        sa.Column('description', sa.String(length=255), nullable=True),
        sa.Column('template', Json),
        sa.PrimaryKeyConstraint('id'),
        mysql_engine='InnoDB'
    )

    op.create_table(
        'NANYs',
        sa.Column('id', sa.String(length=36), nullable=False),
        sa.Column('tenant_id', sa.String(length=64), nullable=False),
        sa.Column('name', sa.String(length=255), nullable=False),
        sa.Column('description', sa.String(length=255), nullable=True),
        sa.Column('NANYD_id', sa.String(length=36), nullable=False),
        sa.Column('status', sa.String(length=255), nullable=False),
        sa.Column('mea_mapping', Json),
        sa.ForeignKeyConstraint(['NANYD_id'], ['NANYtemplates.id'], ),
        sa.PrimaryKeyConstraint('id'),
        mysql_engine='InnoDB'
    )

    op.create_table(
        'NANYnfps',
        sa.Column('id', sa.String(length=36), nullable=False),
        sa.Column('tenant_id', sa.String(length=64), nullable=False),
        sa.Column('NANY_id', sa.String(length=36), nullable=False),
        sa.Column('name', sa.String(length=255), nullable=False),
        sa.Column('status', sa.String(length=255), nullable=False),
        sa.Column('path_id', sa.String(length=255), nullable=False),
        sa.Column('symmetrical', sa.Boolean, default=False),
        sa.ForeignKeyConstraint(['NANY_id'], ['NANYs.id'], ),
        sa.PrimaryKeyConstraint('id'),
        mysql_engine='InnoDB'
    )

    op.create_table(
        'NANYchains',
        sa.Column('id', sa.String(length=36), nullable=False),
        sa.Column('tenant_id', sa.String(length=64), nullable=False),
        sa.Column('instance_id', sa.String(length=255), nullable=True),
        sa.Column('nfp_id', sa.String(length=36), nullable=False),
        sa.Column('status', sa.String(length=255), nullable=False),
        sa.Column('path_id', sa.String(length=255), nullable=False),
        sa.Column('symmetrical', sa.Boolean, default=False),
        sa.Column('chain', Json),
        sa.ForeignKeyConstraint(['nfp_id'], ['NANYnfps.id'], ),
        sa.PrimaryKeyConstraint('id'),
        mysql_engine='InnoDB'
    )

    op.create_table(
        'NANYclassifiers',
        sa.Column('id', sa.String(length=36), nullable=False),
        sa.Column('tenant_id', sa.String(length=64), nullable=False),
        sa.Column('nfp_id', sa.String(length=36), nullable=False),
        sa.Column('instance_id', sa.String(length=255), nullable=True),
        sa.Column('chain_id', sa.String(length=36), nullable=False),
        sa.Column('status', sa.String(length=255), nullable=False),
        sa.ForeignKeyConstraint(['nfp_id'], ['NANYnfps.id'], ),
        sa.ForeignKeyConstraint(['chain_id'], ['NANYchains.id'], ),
        sa.PrimaryKeyConstraint('id'),
        mysql_engine='InnoDB'
    )

    op.create_table(
        'aclmatchcriterias',
        sa.Column('id', sa.String(length=36), nullable=False),
        sa.Column('NANYc_id', sa.String(length=36), nullable=False),
        sa.Column('eth_src', sa.String(length=36), nullable=True),
        sa.Column('eth_dst', sa.String(length=36), nullable=True),
        sa.Column('eth_type', sa.String(length=36), nullable=True),
        sa.Column('vlan_id', sa.Integer, nullable=True),
        sa.Column('vlan_pcp', sa.Integer, nullable=True),
        sa.Column('mpls_label', sa.Integer, nullable=True),
        sa.Column('mpls_tc', sa.Integer, nullable=True),
        sa.Column('ip_dscp', sa.Integer, nullable=True),
        sa.Column('ip_ecn', sa.Integer, nullable=True),
        sa.Column('ip_src_prefix', sa.String(length=36), nullable=True),
        sa.Column('ip_dst_prefix', sa.String(length=36), nullable=True),
        sa.Column('source_port_min', sa.Integer, nullable=True),
        sa.Column('source_port_max', sa.Integer, nullable=True),
        sa.Column('destination_port_min', sa.Integer, nullable=True),
        sa.Column('destination_port_max', sa.Integer, nullable=True),
        sa.Column('ip_proto', sa.Integer, nullable=True),
        sa.Column('network_id', sa.String(length=36), nullable=True),
        sa.Column('network_src_port_id', sa.String(length=36), nullable=True),
        sa.Column('network_dst_port_id', sa.String(length=36), nullable=True),
        sa.Column('tenant_id', sa.String(length=64), nullable=True),
        sa.Column('icmpv4_type', sa.Integer, nullable=True),
        sa.Column('icmpv4_code', sa.Integer, nullable=True),
        sa.Column('arp_op', sa.Integer, nullable=True),
        sa.Column('arp_spa', sa.Integer, nullable=True),
        sa.Column('arp_tpa', sa.Integer, nullable=True),
        sa.Column('arp_sha', sa.Integer, nullable=True),
        sa.Column('arp_tha', sa.Integer, nullable=True),
        sa.Column('ipv6_src', sa.String(36), nullable=True),
        sa.Column('ipv6_dst', sa.String(36), nullable=True),
        sa.Column('ipv6_flabel', sa.Integer, nullable=True),
        sa.Column('icmpv6_type', sa.Integer, nullable=True),
        sa.Column('icmpv6_code', sa.Integer, nullable=True),
        sa.Column('ipv6_nd_target', sa.String(36), nullable=True),
        sa.Column('ipv6_nd_sll', sa.String(36), nullable=True),
        sa.Column('ipv6_nd_tll', sa.String(36), nullable=True),
        sa.ForeignKeyConstraint(['NANYc_id'], ['NANYclassifiers.id'], ),
        sa.PrimaryKeyConstraint('id'),
    )
@ -0,0 +1,60 @@
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#

"""multisite_vim

Revision ID: 5246a6bd410f
Revises: 24bec5f211c7
Create Date: 2016-03-22 14:05:15.129330

"""

# revision identifiers, used by Alembic.
revision = '5246a6bd410f'
down_revision = '24bec5f211c7'

from alembic import op
import sqlalchemy as sa


def upgrade(active_plugins=None, options=None):
    op.create_table('vims',
        sa.Column('id', sa.String(length=255), nullable=False),
        sa.Column('type', sa.String(length=255), nullable=False),
        sa.Column('tenant_id', sa.String(length=255), nullable=True),
        sa.Column('name', sa.String(length=255), nullable=True),
        sa.Column('description', sa.String(length=255), nullable=True),
        sa.Column('placement_attr', sa.PickleType(), nullable=True),
        sa.Column('shared', sa.Boolean(), server_default=sa.text(u'true'),
                  nullable=False),
        sa.PrimaryKeyConstraint('id'),
        mysql_engine='InnoDB'
    )
    op.create_table('vimauths',
        sa.Column('id', sa.String(length=36), nullable=False),
        sa.Column('vim_id', sa.String(length=255), nullable=False),
        sa.Column('password', sa.String(length=128), nullable=False),
        sa.Column('auth_url', sa.String(length=255), nullable=False),
        sa.Column('vim_project', sa.PickleType(), nullable=False),
        sa.Column('auth_cred', sa.PickleType(), nullable=False),
        sa.ForeignKeyConstraint(['vim_id'], ['vims.id'], ),
        sa.PrimaryKeyConstraint('id'),
        sa.UniqueConstraint('auth_url')
    )
    op.add_column(u'devices', sa.Column('placement_attr', sa.PickleType(),
                                        nullable=True))
    op.add_column(u'devices', sa.Column('vim_id', sa.String(length=36),
                                        nullable=False))
    op.create_foreign_key(None, 'devices', 'vims', ['vim_id'], ['id'])
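The PickleType columns created here (placement_attr on vims and devices, vim_project and auth_cred on vimauths) are exactly the ones that revision 6e56d4474b2a below later converts to JSON text.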
@ -0,0 +1,34 @@
# Copyright 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#

"""modify datatype of value

Revision ID: 5958429bcb3c
Revises: 13c0e0661015
Create Date: 2015-10-05 17:09:24.710961

"""

# revision identifiers, used by Alembic.
revision = '5958429bcb3c'
down_revision = '13c0e0661015'

from alembic import op
import sqlalchemy as sa


def upgrade(active_plugins=None, options=None):
    op.alter_column('devicetemplateattributes',
                    'value', type_=sa.TEXT(65535), nullable=True)
@ -0,0 +1,41 @@
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#

"""make MEAD/MEA/VIM name mandatory

Revision ID: 5f88e86b35c7
Revises: 354de64ba129
Create Date: 2016-06-14 11:16:16.303343

"""

# revision identifiers, used by Alembic.
revision = '5f88e86b35c7'
down_revision = '354de64ba129'

from alembic import op
from sqlalchemy.dialects import mysql


def upgrade(active_plugins=None, options=None):
    op.alter_column('devices', 'name',
                    existing_type=mysql.VARCHAR(length=255),
                    nullable=False)
    op.alter_column('devicetemplates', 'name',
                    existing_type=mysql.VARCHAR(length=255),
                    nullable=False)
    op.alter_column('vims', 'name',
                    existing_type=mysql.VARCHAR(length=255),
                    nullable=False)
@ -0,0 +1,55 @@
|
|||||||
|
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#

"""blob-to-json-text

Revision ID: 6e56d4474b2a
Revises: f958f58e5daa
Create Date: 2016-06-01 09:50:46.296206

"""

import json
import pickle

from alembic import op
import sqlalchemy as sa

from apmec.db import types

# revision identifiers, used by Alembic.
revision = '6e56d4474b2a'
down_revision = 'f958f58e5daa'


def _migrate_data(table, column_name):
    meta = sa.MetaData(bind=op.get_bind())
    t = sa.Table(table, meta, autoload=True)

    for r in t.select().execute():
        stmt = t.update().where(t.c.id == r.id).values(
            {column_name: json.dumps(pickle.loads(getattr(r, column_name)))})
        op.execute(stmt)

    op.alter_column(table,
                    column_name,
                    type_=types.Json)


def upgrade(active_plugins=None, options=None):
    _migrate_data('vims', 'placement_attr')
    _migrate_data('vimauths', 'vim_project')
    _migrate_data('vimauths', 'auth_cred')
    _migrate_data('devices', 'placement_attr')
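The helper above walks every row, unpickles the stored blob, and rewrites it
as JSON before switching the column type. The conversion itself is plain
standard library; a self-contained illustration (the sample data is invented
for the example):

    import json
    import pickle

    # What a single cell goes through: pickled bytes in, JSON text out.
    blob = pickle.dumps({'region': 'RegionOne', 'project': 'demo'})
    print(json.dumps(pickle.loads(blob), sort_keys=True))
    # {"project": "demo", "region": "RegionOne"}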
@ -0,0 +1,54 @@
# Copyright 2014 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#

"""rpc_proxy

Revision ID: 81ffa86020d
Revises: 1c6b0d82afcd
Create Date: 2014-03-19 15:50:11.712686

"""

# revision identifiers, used by Alembic.
revision = '81ffa86020d'
down_revision = '1c6b0d82afcd'

# Change to ['*'] if this migration applies to all plugins

from alembic import op
import sqlalchemy as sa


def upgrade(active_plugins=None, options=None):
    op.create_table(
        'proxymgmtports',
        sa.Column('device_id', sa.String(255)),
        sa.Column('port_id', sa.String(36), nullable=False),
        sa.Column('dst_transport_url', sa.String(255)),
        sa.Column('svr_proxy_id', sa.String(36)),
        sa.Column('svr_mes_proxy_id', sa.String(36)),
        sa.Column('clt_proxy_id', sa.String(36)),
        sa.Column('clt_mes_proxy_id', sa.String(36)),
        sa.PrimaryKeyConstraint('device_id'),
    )
    op.create_table(
        'proxyserviceports',
        sa.Column('service_instance_id', sa.String(255)),
        sa.Column('svr_proxy_id', sa.String(36)),
        sa.Column('svr_mes_proxy_id', sa.String(36)),
        sa.Column('clt_proxy_id', sa.String(36)),
        sa.Column('clt_mes_proxy_id', sa.String(36)),
        sa.PrimaryKeyConstraint('service_instance_id'),
    )
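Only upgrade() is defined, as elsewhere in this series. A hypothetical
inverse (not part of the commit) would simply drop the two tables in reverse
order of creation:

    # Hypothetical downgrade for revision 81ffa86020d.
    def downgrade(active_plugins=None, options=None):
        op.drop_table('proxyserviceports')
        op.drop_table('proxymgmtports')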
@ -0,0 +1,32 @@
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#

"""remove infra_driver column

Revision ID: 8f7145914cb0
Revises: 0ae5b1ce3024
Create Date: 2016-12-08 17:28:26.609343

"""

# revision identifiers, used by Alembic.
revision = '8f7145914cb0'
down_revision = '0ae5b1ce3024'

from alembic import op


def upgrade(active_plugins=None, options=None):
    op.drop_column('mead', 'infra_driver')
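Dropping a column discards its definition, so this revision is not cleanly
reversible from the diff alone. A hypothetical downgrade would have to
re-declare the column; the String(255) type below is an assumption, since the
original definition is not shown in this diff:

    import sqlalchemy as sa

    # Hypothetical inverse; the column type is assumed, not taken from
    # the diff.
    def downgrade(active_plugins=None, options=None):
        op.add_column('mead',
                      sa.Column('infra_driver', sa.String(255),
                                nullable=True))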
@ -0,0 +1,39 @@
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#

"""enable soft delete

Revision ID: 941b5a6fff9e
Revises: 2ff0a0e360f1
Create Date: 2016-06-06 10:12:49.787430

"""

# revision identifiers, used by Alembic.
revision = '941b5a6fff9e'
down_revision = '2ff0a0e360f1'

from alembic import op
import sqlalchemy as sa


def upgrade(active_plugins=None, options=None):
    for table in ['vims', 'mea', 'mead']:
        op.add_column(table,
                      sa.Column('deleted_at', sa.DateTime(), nullable=True))

    # The unique constraint is taken care of by the meo_db plugin to
    # support soft deletion of vim.
    op.drop_index('auth_url', table_name='vimauths')
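The nullable deleted_at column added above is the hook for soft deletion:
rows are stamped rather than removed, and live rows are those with no stamp.
A sketch of the pattern (illustrative only; the query style and the sample id
are assumptions, not code from this commit):

    import datetime

    import sqlalchemy as sa
    from sqlalchemy.sql import column, table

    vims = table('vims', column('id'), column('deleted_at'))

    # "Delete" by stamping the row instead of removing it ...
    soft_delete = (vims.update()
                   .where(vims.c.id == 'some-vim-id')
                   .values(deleted_at=datetime.datetime.utcnow()))

    # ... and treat only unstamped rows as live.
    live_vims = sa.select([vims]).where(vims.c.deleted_at.is_(None))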
1
apmec/db/migration/alembic_migrations/versions/HEAD
Normal file
@ -0,0 +1 @@
e9a1e47fb0b5
5
apmec/db/migration/alembic_migrations/versions/README
Normal file
@ -0,0 +1,5 @@
This directory contains the migration scripts for the Apmec project. Please
see the README in apmec/db/migration on how to use and generate new
migrations.
@ -0,0 +1,34 @@
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#

"""Add error_reason to device

Revision ID: acf941e54075
Revises: 5246a6bd410f
Create Date: 2016-04-07 23:53:56.623647

"""

# revision identifiers, used by Alembic.
revision = 'acf941e54075'
down_revision = '5246a6bd410f'

from alembic import op
import sqlalchemy as sa


def upgrade(active_plugins=None, options=None):
    op.add_column('devices', sa.Column('error_reason',
                                       sa.Text(), nullable=True))
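A hypothetical inverse (not part of the commit) is straightforward here,
because the new column is nullable and carries no constraints:

    # Hypothetical downgrade for revision acf941e54075.
    def downgrade(active_plugins=None, options=None):
        op.drop_column('devices', 'error_reason')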
@ -0,0 +1,54 @@
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#

"""set-status-type-tenant-id-length

Revision ID: b07673bb8654
Revises: c7cde2f45f82
Create Date: 2016-06-01 12:46:07.499279

"""

# revision identifiers, used by Alembic.
revision = 'b07673bb8654'
down_revision = 'c7cde2f45f82'

from alembic import op
import sqlalchemy as sa


def upgrade(active_plugins=None, options=None):
    for table in ['devices', 'devicetemplates', 'vims', 'servicetypes']:
        op.alter_column(table,
                        'tenant_id',
                        type_=sa.String(64), nullable=False)
    op.alter_column('vims',
                    'type',
                    type_=sa.String(64))
    op.alter_column('devices',
                    'instance_id',
                    type_=sa.String(64), nullable=True)
    op.alter_column('devices',
                    'status',
                    type_=sa.String(64))
    op.alter_column('proxymgmtports',
                    'device_id',
                    type_=sa.String(64), nullable=False)
    op.alter_column('proxyserviceports',
                    'service_instance_id',
                    type_=sa.String(64), nullable=False)
    op.alter_column('servicetypes',
                    'service_type',
                    type_=sa.String(64))
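Where one of these columns is narrowed to 64 characters, the ALTER silently
relies on no stored value being longer. A hypothetical pre-check (not in the
commit; it assumes it runs inside this migration, where op and sa are already
imported, against a backend that supports CHAR_LENGTH):

    # Hypothetical guard: verify no tenant_id would be truncated by the
    # narrowing ALTER before it runs.
    conn = op.get_bind()
    for table in ['devices', 'devicetemplates', 'vims', 'servicetypes']:
        too_long = conn.execute(
            sa.text('SELECT COUNT(*) FROM %s '
                    'WHERE CHAR_LENGTH(tenant_id) > 64' % table)).scalar()
        assert too_long == 0, '%s: tenant_id longer than 64 chars' % table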
Some files were not shown because too many files have changed in this diff