Refactor Distil API to support new endpoints

Change-Id: I4d456f9228e879234898177d1f2d791d2ffbbf46
This commit is contained in:
Fei Long Wang 2016-04-27 16:26:22 +12:00 committed by Lingxian Kong
parent 690ecec82c
commit 0d20c80a6e
53 changed files with 3221 additions and 22 deletions

HACKING.rst Normal file

@@ -0,0 +1,12 @@
Distil Style Commandments
==========================
- Step 1: Read the OpenStack Style Commandments
http://docs.openstack.org/developer/hacking/
- Step 2: Read on
Distil Specific Commandments
-----------------------------
None so far

LICENSE Normal file

@@ -0,0 +1,175 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

README.rst Normal file

@@ -0,0 +1,109 @@
# Distil
## What
Distil is a web app that provides easy interaction with ERP systems by exposing a configurable set of collection tools and transformers that turn Ceilometer entries into usable billing data.
Distil provides a REST API to integrate with arbitrary ERP systems, and returns sales orders as JSON.
The ranges used, and how Ceilometer data is aggregated, are intended to be configurable and defined in the configuration file.
The Distil data store prevents overlapping bills for a given tenant and resource from ever being stored, while still allowing a given sales order to be regenerated.
## Requirements
See: requirements.txt
## Configuration
Configuring Distil is handled through its primary configuration file, which defaults to: /etc/distil/conf.yaml
A base configuration is included, but must be modified appropriately. It is located at: /examples/conf.yaml
### Collection
The collection > meter_mappings section of the config defines the transformers in use and the meters mapped to them. This is the main functionality of Distil, and is how usable pieces of usage data are made out of Ceilometer samples.
Metadata fetching from the samples can also be configured via collection > metadata_def, with the ability to pull from multiple metadata fields, since the same data can appear under different field names depending on sample origin.
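These two sections might be sketched as follows; the meter and field names here are illustrative assumptions only, and the authoritative schema is the one shipped in /examples/conf.yaml:

```yaml
# Hypothetical sketch -- see /examples/conf.yaml for the real schema.
collection:
  meter_mappings:
    # Map a Ceilometer meter to a transformer and a billable unit.
    - meter: instance
      type: Virtual Machine
      transformer: Uptime
      unit: second
  metadata_def:
    # Pull the same logical field from several possible sample fields,
    # since the field name varies with sample origin.
    name:
      sources:
        - display_name
        - name
```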
### Transformers
Active transformers are currently hard-coded as a dict of names to classes, but adding additional transformers is a straightforward process, assuming new transformers follow the same input/output conventions as the existing ones. Once listed under the active transformers dict, they can be used and referenced in the config.
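As a rough sketch of that process (the class, method signature, and dict names below are assumptions for illustration, not Distil's actual interfaces), adding a transformer might look like:

```python
class Uptime(object):
    """Toy transformer: reduces a list of samples to a total uptime figure."""

    def transform_usage(self, name, data, start, end):
        # Simplified: report the span between first and last sample timestamps.
        timestamps = sorted(s['timestamp'] for s in data)
        if len(timestamps) < 2:
            return {name: 0}
        return {name: timestamps[-1] - timestamps[0]}


# Names-to-classes dict, mirroring the hard-coded active transformers dict.
active_transformers = {
    'Uptime': Uptime,
}

# The config can then refer to the transformer by name.
cls = active_transformers['Uptime']
usage = cls().transform_usage(
    'instance', [{'timestamp': 0}, {'timestamp': 3600}], 0, 3600)
```

Any class following the same input/output convention can be dropped into the dict without further changes.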
## Setup
Provided all the requirements are met, a database must be created and then set up with artifice/initdb.py.
The web app itself consists of running bin/web.py with a specified config, at which point the app will be running locally at: http://0.0.0.0:8000/
### Setup with an OpenStack environment
As mentioned, Distil relies entirely on the Ceilometer project for its metering and measurement collection.
It needs to be given admin access, and provided with the Keystone endpoint in the config.
Currently it also relies on the "state" metric existing in Ceilometer, as well as a few other pollsters we have made for it, but that will be patched out later.
### Setup in Production
Use the Puppet install to set Distil up as a mod_wsgi app.
More details to come.
## Using Distil
Distil comes with a command-line tool providing some simple commands. These mirror the commands accessible via the web API, and can be used from the command line or by importing the client module and using it from Python.
IMPORTANT: Distil assumes all incoming datetimes are in UTC; conversion from the local timezone must occur before passing them to the API.
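That conversion can be done with the standard library before calling the API; the `yyyy-mm-ddThh-mm-ss` rendering below matches the format the endpoints document, and the helper name is ours, not Distil's:

```python
from datetime import datetime, timedelta, timezone


def to_utc_string(local_dt):
    # Convert a timezone-aware datetime to UTC and render it in the
    # yyyy-mm-ddThh-mm-ss form the API accepts (e.g. for /sales_draft "end").
    return local_dt.astimezone(timezone.utc).strftime('%Y-%m-%dT%H-%M-%S')


# e.g. 16:26 at UTC+12 becomes 04:26 UTC the same day.
nz = timezone(timedelta(hours=12))
print(to_utc_string(datetime(2016, 4, 27, 16, 26, tzinfo=nz)))
# → 2016-04-27T04-26-00
```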
### Web API
The web app is a REST-style API for starting usage collection, and for generating sales orders and drafts, and regenerating sales orders.
#### Commands
* /collect_usage
* runs usage collection on all tenants present in Keystone
* /sales_order
* generate a sales order for a given tenant from the last generated sales order, or the first ever usage entry.
* tenant - tenant id for a given tenant, required.
* end - end date for the sales order (yyyy-mm-dd), defaults to 00:00:00 UTC for the current date.
* /sales_draft
* same as generating a sales order, but does not create the sales order in the database.
* tenant - tenant id for a given tenant, required.
* end - end date for the sales order (yyyy-mm-dd or yyyy-mm-ddThh-mm-ss), defaults to now in UTC.
* /sales_historic
* regenerate a sales order for a tenant that intersects with the given date
* tenant - tenant id for a given tenant, required.
* date - target date (yyyy-mm-dd).
* /sales_range
* get all sales orders that intersect with the given range
* tenant - tenant id for a given tenant, required.
* start - start of the range (yyyy-mm-dd).
* end - end of the range (yyyy-mm-dd), defaults to now in UTC.
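The commands above are plain HTTP endpoints with query parameters; as a sketch of driving them from Python (the host/port follow the local default mentioned earlier, and `build_url` is a hypothetical helper, not part of Distil):

```python
from urllib.parse import urlencode

BASE = 'http://0.0.0.0:8000'  # default local address from the Setup section


def build_url(command, **params):
    # Drop unset parameters, then append the rest as a query string.
    query = urlencode({k: v for k, v in params.items() if v is not None})
    return '%s/%s?%s' % (BASE, command, query) if query else '%s/%s' % (BASE, command)


url = build_url('sales_order', tenant='some-tenant-id', end='2016-04-27')
# A real call would then be e.g. urllib.request.urlopen(url).
```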
### Client/Command-line
The client is a simple object that, once given a target endpoint for the web API, provides functions matching the web API.
The command-line tool is the same, and has relatively comprehensive help text available from the command line.
## Running Tests
The tests are currently expected to be run with Nosetests, against a pre-provisioned database.
## Future things
Eventually we also want Distil to:
* Authenticate via Keystone
* Have a public endpoint on keystone, with commands limited by user role and tenant.
* Have separate usage collection from the web app layer and a scheduler to handle it.
Things we may eventually want include:
* Alarms built on top of our hourly usage collection.
* A Horizon page that builds graphs based on billing data, both past (sales orders) and present (sales drafts).

@@ -0,0 +1,37 @@
# Copyright 2014 Catalyst IT Ltd
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from oslo_log import log

REST_SERVICE_OPTS = [
    cfg.IntOpt('port',
               default=9999,
               help='The port for the Distil API server'),
    cfg.StrOpt('host',
               default='0.0.0.0',
               help='The listen IP for the Distil API server'),
    cfg.ListOpt('public_api_routes',
                default=['/', '/v2/prices'],
                help='The list of public API routes'),
]

CONF = cfg.CONF
CONF.register_opts(REST_SERVICE_OPTS)
log.register_options(CONF)

distil/api/app.py Normal file

@@ -0,0 +1,42 @@
# Copyright 2014 Catalyst IT Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import flask
from oslo_config import cfg

from distil.api import auth
from distil.api import v2 as api_v2
from distil import config
from distil.utils import api

CONF = cfg.CONF


def make_app():
    for group, opts in config.config_options():
        CONF.register_opts(opts, group=group)

    app = flask.Flask(__name__)

    @app.route('/', methods=['GET'])
    def version_list():
        return api.render({
            "versions": [
                {"id": "v2.0", "status": "CURRENT"}
            ]})

    app.register_blueprint(api_v2.rest, url_prefix="/v2")
    app.wsgi_app = auth.wrap(app.wsgi_app, CONF)
    return app

distil/api/auth.py Normal file

@@ -0,0 +1,96 @@
# Copyright 2014 Catalyst IT Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import re

from keystonemiddleware import auth_token
from keystonemiddleware import opts
from oslo_config import cfg
from oslo_log import log as logging

from distil import exceptions as exception
from distil.i18n import _

CONF = cfg.CONF

AUTH_GROUP_NAME = 'keystone_authtoken'


def _register_opts():
    options = []
    keystone_opts = opts.list_auth_token_opts()
    for n in keystone_opts:
        if n[0] == AUTH_GROUP_NAME:
            options = n[1]
            break
    CONF.register_opts(options, group=AUTH_GROUP_NAME)
    auth_token.CONF = CONF


_register_opts()

LOG = logging.getLogger(__name__)


class AuthTokenMiddleware(auth_token.AuthProtocol):
    """A wrapper on Keystone auth_token middleware.

    Does not perform verification of authentication tokens
    for public routes in the API.
    """
    def __init__(self, app, conf, public_api_routes=None):
        if public_api_routes is None:
            public_api_routes = []
        route_pattern_tpl = r'%s(\.json)?$'

        try:
            self.public_api_routes = [re.compile(route_pattern_tpl % route_tpl)
                                      for route_tpl in public_api_routes]
        except re.error as e:
            msg = _('Cannot compile public API routes: %s') % e
            LOG.error(msg)
            raise exception.ConfigInvalid(error_msg=msg)

        super(AuthTokenMiddleware, self).__init__(app, conf)

    def __call__(self, env, start_response):
        path = env.get('PATH_INFO', "/")

        # Whether the call is against the public API is needed by some
        # other components, so save it to the WSGI environment.
        env['is_public_api'] = any(pattern.match(path)
                                   for pattern in self.public_api_routes)

        if env['is_public_api']:
            return self._app(env, start_response)

        return super(AuthTokenMiddleware, self).__call__(env, start_response)

    @classmethod
    def factory(cls, global_config, **local_conf):
        public_routes = local_conf.get('acl_public_routes', '')
        public_api_routes = [path.strip() for path in public_routes.split(',')]

        def _factory(app):
            return cls(app, global_config, public_api_routes=public_api_routes)
        return _factory


def wrap(app, conf):
    """Wrap a WSGI application with the auth validator check."""
    auth_cfg = dict(conf.get(AUTH_GROUP_NAME))
    public_api_routes = CONF.public_api_routes
    auth_protocol = AuthTokenMiddleware(app, conf=auth_cfg,
                                        public_api_routes=public_api_routes)
    return auth_protocol

distil/api/v2.py Normal file

@@ -0,0 +1,35 @@
# Copyright (c) 2014 Catalyst IT Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_log import log

from distil.service.api.v2 import costs
from distil.service.api.v2 import prices
from distil.utils import api

LOG = log.getLogger(__name__)

rest = api.Rest('v2', __name__)


@rest.get('/prices')
def prices_get():
    format = api.get_request_args().get('format', None)
    return api.render(prices=prices.get_prices(format=format))


@rest.get('/costs')
def costs_get():
    return api.render(costs=costs.get_costs())

distil/cli/__init__.py Normal file

distil/cli/distil_api.py Normal file

@@ -0,0 +1,57 @@
# Copyright 2014 Catalyst IT Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging as std_logging
import sys

import eventlet
from eventlet import wsgi
from oslo_config import cfg
from oslo_log import log

from distil.api import app
from distil import config

CONF = cfg.CONF
LOG = log.getLogger(__name__)


class WritableLogger(object):
    """A thin wrapper that responds to `write` and logs."""

    def __init__(self, LOG, level=std_logging.DEBUG):
        self.LOG = LOG
        self.level = level

    def write(self, msg):
        self.LOG.log(self.level, msg.rstrip("\n"))


def main():
    CONF(project='distil', prog='distil-api')
    log.setup(CONF, 'distil')

    application = app.make_app()
    CONF.log_opt_values(LOG, std_logging.INFO)

    try:
        wsgi.server(eventlet.listen((CONF.host, CONF.port), backlog=500),
                    application, log=WritableLogger(LOG))
    except KeyboardInterrupt:
        pass


if __name__ == '__main__':
    main()

@@ -12,6 +12,37 @@
# License for the specific language governing permissions and limitations
# under the License.

from oslo_config import cfg
from oslo_log import log

DEFAULT_OPTIONS = (
    cfg.ListOpt('ignore_tenants', default=[],
                help=('')),
)

ODOO_OPTS = [
    cfg.StrOpt('version', default='8.0',
               help=''),
    cfg.StrOpt('hostname',
               help=''),
    cfg.IntOpt('port', default=443,
               help=''),
    cfg.StrOpt('protocol', default='jsonrpc+ssl',
               help=''),
    cfg.StrOpt('database',
               help=''),
    cfg.StrOpt('user',
               help=''),
    cfg.StrOpt('password',
               help=''),
]

ODOO_GROUP = 'odoo'


def config_options():
    return [(None, DEFAULT_OPTIONS),
            (ODOO_GROUP, ODOO_OPTS)]

# This is simply a namespace for global config storage
main = None
rates_config = None

distil/context.py Normal file

@@ -0,0 +1,122 @@
# Copyright (c) 2013 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import eventlet
from eventlet.green import threading
from eventlet.green import time
from eventlet import greenpool
from eventlet import semaphore
from oslo_config import cfg
from oslo_context import context
from oslo_log import log as logging

from distil.api import acl
from distil import exceptions as ex
from distil.i18n import _
from distil.i18n import _LE
from distil.i18n import _LW

CONF = cfg.CONF
LOG = logging.getLogger(__name__)


class Context(context.RequestContext):
    def __init__(self,
                 user_id=None,
                 tenant_id=None,
                 token=None,
                 service_catalog=None,
                 username=None,
                 tenant_name=None,
                 roles=None,
                 is_admin=None,
                 remote_semaphore=None,
                 auth_uri=None,
                 **kwargs):
        if kwargs:
            LOG.warning(_LW('Arguments dropped when creating context: %s'),
                        kwargs)

        self.user_id = user_id
        self.tenant_id = tenant_id
        self.token = token
        self.service_catalog = service_catalog
        self.username = username
        self.tenant_name = tenant_name
        self.is_admin = is_admin
        self.remote_semaphore = remote_semaphore or semaphore.Semaphore(
            CONF.cluster_remote_threshold)
        self.roles = roles
        self.auth_uri = auth_uri

    def clone(self):
        return Context(
            self.user_id,
            self.tenant_id,
            self.token,
            self.service_catalog,
            self.username,
            self.tenant_name,
            self.roles,
            self.is_admin,
            self.remote_semaphore,
            self.auth_uri)

    def to_dict(self):
        return {
            'user_id': self.user_id,
            'tenant_id': self.tenant_id,
            'token': self.token,
            'service_catalog': self.service_catalog,
            'username': self.username,
            'tenant_name': self.tenant_name,
            'is_admin': self.is_admin,
            'roles': self.roles,
            'auth_uri': self.auth_uri,
        }

    def is_auth_capable(self):
        return (self.service_catalog and self.token and self.tenant_id and
                self.user_id)


def get_admin_context():
    return Context(is_admin=True)


_CTX_STORE = threading.local()
_CTX_KEY = 'current_ctx'


def has_ctx():
    return hasattr(_CTX_STORE, _CTX_KEY)


def ctx():
    if not has_ctx():
        raise ex.IncorrectStateError(_("Context isn't available here"))
    return getattr(_CTX_STORE, _CTX_KEY)


def current():
    return ctx()


def set_ctx(new_ctx):
    if not new_ctx and has_ctx():
        delattr(_CTX_STORE, _CTX_KEY)

    if new_ctx:
        setattr(_CTX_STORE, _CTX_KEY, new_ctx)

distil/db/__init__.py Normal file

distil/db/api.py Normal file

@@ -0,0 +1,108 @@
# Copyright (c) 2013 Mirantis Inc.
# Copyright 2014 Catalyst IT Ltd
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Defines interface for DB access.
Functions in this module are imported into the distil.db namespace. Call these
functions from distil.db namespace, not the distil.db.api namespace.
All functions in this module return objects that implement a dictionary-like
interface.
**Related Flags**
:db_backend: string to lookup in the list of LazyPluggable backends.
`sqlalchemy` is the only supported backend right now.
:sql_connection: string specifying the sqlalchemy connection to use, like:
`sqlite:///var/lib/distil/distil.sqlite`.
"""
from oslo_config import cfg

from distil.openstack.common.db import api as db_api
from distil.openstack.common import log as logging

CONF = cfg.CONF
CONF.import_opt('backend', 'distil.openstack.common.db.options',
                group='database')

_BACKEND_MAPPING = {
    'sqlalchemy': 'distil.db.sqlalchemy.api',
}

IMPL = db_api.DBAPI(CONF.database.backend, backend_mapping=_BACKEND_MAPPING)
LOG = logging.getLogger(__name__)


def setup_db():
    """Set up the database: create tables, etc.

    Return True on success, False otherwise.
    """
    return IMPL.setup_db()


def drop_db():
    """Drop the database.

    Return True on success, False otherwise.
    """
    return IMPL.drop_db()


def to_dict(func):
    def decorator(*args, **kwargs):
        res = func(*args, **kwargs)

        if isinstance(res, list):
            return [item.to_dict() for item in res]

        if res:
            return res.to_dict()
        else:
            return None
    return decorator


@to_dict
def usage_get(project_id, start_at, end_at):
    """Get usage for a specific tenant based on a time range."""
    return IMPL.usage_get(project_id, start_at, end_at)


def usage_add(project_id, resource_id, samples, unit,
              start_at, end_at):
    """If a tenant exists, do nothing;
    if it does not, create and insert it.
    """
    return IMPL.usage_add(project_id, resource_id, samples, unit,
                          start_at, end_at)


def resource_add(project_id, resource_id, resource_type, rawdata, metadata):
    return IMPL.resource_add(project_id, resource_id, resource_type,
                             rawdata, metadata)


def project_add(project):
    return IMPL.project_add(project)

@@ -0,0 +1,53 @@
# A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = distil/db/migration/alembic_migrations
# template used to generate migration files
#file_template = %%(rev)s_%%(slug)s
# max length of characters to apply to the
# "slug" field
#truncate_slug_length = 40
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
#revision_environment = false
sqlalchemy.url =
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
qualname =
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S

@@ -0,0 +1,78 @@
<!--
Copyright 2012 New Dream Network, LLC (DreamHost)
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
-->
The migrations in `alembic_migrations/versions` contain the changes needed to migrate
between Distil database revisions. A migration occurs by executing a script that
details the changes needed to upgrade or downgrade the database. The migration scripts
are ordered so that multiple scripts can run sequentially. The scripts are executed by
Distil's migration wrapper which uses the Alembic library to manage the migration. Distil
supports migration from Icehouse or later.
You can upgrade to the latest database version via:
```
$ distil-db-manage --config-file /path/to/distil.conf upgrade head
```
To check the current database version:
```
$ distil-db-manage --config-file /path/to/distil.conf current
```
To create a script to run the migration offline:
```
$ distil-db-manage --config-file /path/to/distil.conf upgrade head --sql
```
To run the offline migration between specific migration versions:
```
$ distil-db-manage --config-file /path/to/distil.conf upgrade <start version>:<end version> --sql
```
Upgrade the database incrementally:
```
$ distil-db-manage --config-file /path/to/distil.conf upgrade --delta <# of revs>
```
Downgrade the database by a certain number of revisions:
```
$ distil-db-manage --config-file /path/to/distil.conf downgrade --delta <# of revs>
```
Create new revision:
```
$ distil-db-manage --config-file /path/to/distil.conf revision -m "description of revision" --autogenerate
```
Create a blank file:
```
$ distil-db-manage --config-file /path/to/distil.conf revision -m "description of revision"
```
This command does not perform any migrations, it only sets the revision.
Revision may be any existing revision. Use this command carefully.
```
$ distil-db-manage --config-file /path/to/distil.conf stamp <revision>
```
To verify that the timeline does branch, you can run this command:
```
$ distil-db-manage --config-file /path/to/distil.conf check_migration
```
If the migration path does branch, you can find the branch point via:
```
$ distil-db-manage --config-file /path/to/distil.conf history
```

@@ -0,0 +1,96 @@
# Copyright (c) 2013 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Based on Neutron's migration/cli.py
from __future__ import with_statement
from logging import config as c
from alembic import context
from sqlalchemy import create_engine
from sqlalchemy import pool
from distil.db.sqlalchemy import model_base
from distil.openstack.common import importutils
importutils.import_module('distil.db.sqlalchemy.models')
# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config
distil_config = config.distil_config
# Interpret the config file for Python logging.
# This line sets up loggers basically.
c.fileConfig(config.config_file_name)
# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
target_metadata = model_base.DistilBase.metadata
# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.
def run_migrations_offline():
"""Run migrations in 'offline' mode.
This configures the context with just a URL
and not an Engine, though an Engine is acceptable
here as well. By skipping the Engine creation
we don't even need a DBAPI to be available.
Calls to context.execute() here emit the given string to the
script output.
"""
context.configure(url=distil_config.database.connection)
with context.begin_transaction():
context.run_migrations()
def run_migrations_online():
"""Run migrations in 'online' mode.
In this scenario we need to create an Engine
and associate a connection with the context.
"""
engine = create_engine(
distil_config.database.connection,
poolclass=pool.NullPool)
connection = engine.connect()
context.configure(
connection=connection,
target_metadata=target_metadata)
try:
with context.begin_transaction():
context.run_migrations()
finally:
connection.close()
if context.is_offline_mode():
run_migrations_offline()
else:
run_migrations_online()

@@ -0,0 +1,37 @@
# Copyright ${create_date.year} OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""${message}
Revision ID: ${up_revision}
Revises: ${down_revision}
Create Date: ${create_date}
"""
# revision identifiers, used by Alembic.
revision = ${repr(up_revision)}
down_revision = ${repr(down_revision)}
from alembic import op
import sqlalchemy as sa
${imports if imports else ""}
def upgrade():
${upgrades if upgrades else "pass"}
def downgrade():
${downgrades if downgrades else "pass"}

@@ -0,0 +1,115 @@
# Copyright 2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Juno release
Revision ID: 001
Revises: None
Create Date: 2014-04-01 20:46:25.783444
"""
# revision identifiers, used by Alembic.
revision = '001'
down_revision = None
from alembic import op
import sqlalchemy as sa
from distil.db.sqlalchemy import model_base
MYSQL_ENGINE = 'InnoDB'
MYSQL_CHARSET = 'utf8'
# TODO(flwang): Porting all the table structure we're using.
def upgrade():
op.create_table('project',
sa.Column('id', sa.String(length=64), nullable=False),
sa.Column('name', sa.String(length=64), nullable=False),
sa.Column('meta_data', model_base.JSONEncodedDict(),
nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.PrimaryKeyConstraint('id'),
mysql_engine=MYSQL_ENGINE,
mysql_charset=MYSQL_CHARSET)
op.create_table('resource',
sa.Column('id', sa.String(length=64)),
sa.Column('project_id', sa.String(length=64),
nullable=False),
sa.Column('resource_type', sa.String(length=64),
nullable=True),
sa.Column('meta_data', model_base.JSONEncodedDict(),
nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.PrimaryKeyConstraint('id', 'project_id'),
sa.ForeignKeyConstraint(['project_id'], ['project.id'], ),
mysql_engine=MYSQL_ENGINE,
mysql_charset=MYSQL_CHARSET)
op.create_table('usage',
sa.Column('service', sa.String(length=64),
primary_key=True),
sa.Column('unit', sa.String(length=255),
nullable=False),
sa.Column('volume', sa.Numeric(precision=20, scale=2),
nullable=True),
sa.Column('project_id', sa.String(length=64),
primary_key=True, nullable=False),
sa.Column('resource_id', sa.String(length=64),
primary_key=True, nullable=False),
sa.Column('start_at', sa.DateTime(), primary_key=True,
nullable=True),
sa.Column('end_at', sa.DateTime(), primary_key=True,
nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['project_id'], ['project.id'], ),
sa.ForeignKeyConstraint(['resource_id'],
['resource.id'], ),
mysql_engine=MYSQL_ENGINE,
mysql_charset=MYSQL_CHARSET)
# op.create_table('sales_order',
# sa.Column('id', sa.Integer, primary_key=True),
# sa.Column('project_id', sa.String(length=64),
# nullable=False, primary_key=True),
# sa.Column('start_at', sa.DateTime(), primary_key=True,
# nullable=True),
# sa.Column('end_at', sa.DateTime(), primary_key=True,
# nullable=True),
# sa.Column('created_at', sa.DateTime(), nullable=True),
# sa.Column('updated_at', sa.DateTime(), nullable=True),
# sa.PrimaryKeyConstraint('id', 'project_id', 'start_at',
# 'end_at'),
# sa.ForeignKeyConstraint(['project_id'], ['project.id'], ),
# mysql_engine=MYSQL_ENGINE,
# mysql_charset=MYSQL_CHARSET)
op.create_table('last_run',
                    sa.Column('id', sa.Integer,
                              sa.Sequence("last_run_id_seq"),
                              primary_key=True),
sa.Column('last_run', sa.DateTime(), nullable=True),
mysql_engine=MYSQL_ENGINE,
mysql_charset=MYSQL_CHARSET)
def downgrade():
op.drop_table('project')
op.drop_table('usage')
op.drop_table('resource')
op.drop_table('last_run')

distil/db/migration/cli.py
@@ -0,0 +1,110 @@
# Copyright (c) 2013 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from alembic import command as alembic_cmd
from alembic import config as alembic_cfg
from alembic import util as alembic_u
from oslo.config import cfg
from oslo.db import options
CONF = cfg.CONF
options.set_defaults(CONF)
def do_alembic_command(config, cmd, *args, **kwargs):
try:
getattr(alembic_cmd, cmd)(config, *args, **kwargs)
except alembic_u.CommandError as e:
alembic_u.err(str(e))
def do_check_migration(config, _cmd):
do_alembic_command(config, 'branches')
def do_upgrade_downgrade(config, cmd):
if not CONF.command.revision and not CONF.command.delta:
raise SystemExit('You must provide a revision or relative delta')
revision = CONF.command.revision
if CONF.command.delta:
sign = '+' if CONF.command.name == 'upgrade' else '-'
revision = sign + str(CONF.command.delta)
do_alembic_command(config, cmd, revision, sql=CONF.command.sql)
def do_stamp(config, cmd):
do_alembic_command(config, cmd,
CONF.command.revision,
sql=CONF.command.sql)
def do_revision(config, cmd):
do_alembic_command(config, cmd,
message=CONF.command.message,
autogenerate=CONF.command.autogenerate,
sql=CONF.command.sql)
def add_command_parsers(subparsers):
for name in ['current', 'history', 'branches']:
parser = subparsers.add_parser(name)
parser.set_defaults(func=do_alembic_command)
parser = subparsers.add_parser('check_migration')
parser.set_defaults(func=do_check_migration)
for name in ['upgrade', 'downgrade']:
parser = subparsers.add_parser(name)
parser.add_argument('--delta', type=int)
parser.add_argument('--sql', action='store_true')
parser.add_argument('revision', nargs='?')
parser.set_defaults(func=do_upgrade_downgrade)
parser = subparsers.add_parser('stamp')
parser.add_argument('--sql', action='store_true')
parser.add_argument('revision')
parser.set_defaults(func=do_stamp)
parser = subparsers.add_parser('revision')
parser.add_argument('-m', '--message')
parser.add_argument('--autogenerate', action='store_true')
parser.add_argument('--sql', action='store_true')
parser.set_defaults(func=do_revision)
command_opt = cfg.SubCommandOpt('command',
title='Command',
help='Available commands',
handler=add_command_parsers)
CONF.register_cli_opt(command_opt)
def main():
config = alembic_cfg.Config(
os.path.join(os.path.dirname(__file__), 'alembic.ini')
)
config.set_main_option('script_location',
'distil.db.migration:alembic_migrations')
    # attach the Distil conf to the Alembic conf
config.distil_config = CONF
CONF(project='distil')
CONF.command.func(config, CONF.command.name)

distil/db/sqlalchemy/api.py
@@ -0,0 +1,190 @@
# Copyright 2014 Catalyst IT Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Implementation of SQLAlchemy backend."""
import sys
from oslo.config import cfg
import sqlalchemy as sa
from distil.db.sqlalchemy import models as m
from distil import exceptions
from distil.openstack.common.db import exception as db_exception
from distil.openstack.common.db.sqlalchemy import session as db_session
from distil.openstack.common import log as logging
from distil.db.sqlalchemy.models import Project
from distil.db.sqlalchemy.models import Resource
from distil.db.sqlalchemy.models import Usage
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
_FACADE = None
def _create_facade_lazily():
global _FACADE
if _FACADE is None:
params = dict(CONF.database.iteritems())
params["sqlite_fk"] = True
_FACADE = db_session.EngineFacade(
CONF.database.connection,
**params
)
return _FACADE
def get_engine():
facade = _create_facade_lazily()
return facade.get_engine()
def get_session(**kwargs):
facade = _create_facade_lazily()
return facade.get_session(**kwargs)
def cleanup():
global _FACADE
_FACADE = None
def get_backend():
return sys.modules[__name__]
def setup_db():
try:
engine = get_engine()
        m.DistilBase.metadata.create_all(engine)
except sa.exc.OperationalError as e:
LOG.exception("Database registration exception: %s", e)
return False
return True
def drop_db():
try:
engine = get_engine()
        m.DistilBase.metadata.drop_all(engine)
except Exception as e:
LOG.exception("Database shutdown exception: %s", e)
return False
return True
def model_query(model, context, session=None, project_only=True):
"""Query helper.
:param model: base model to query
:param context: context to query under
:param project_only: if present and context is user-type, then restrict
query to match the context's tenant_id.
"""
session = session or get_session()
query = session.query(model)
if project_only and not context.is_admin:
query = query.filter_by(tenant_id=context.tenant_id)
return query
def project_add(project):
session = get_session()
project_ref = Project(id=project.id, name=project.name)
try:
project_ref.save(session=session)
    except sa.exc.InvalidRequestError as e:
        # FIXME(flwang): I assume there should be a DBDuplicateEntry error
        if "Duplicate entry" in str(e):
            LOG.warning(e)
            return
        raise
def usage_get(project_id, start_at, end_at):
session = get_session()
query = session.query(Usage)
query = (query.filter(Usage.start_at >= start_at, Usage.end_at <= end_at).
filter(Usage.project_id == project_id))
return query.all()
def usage_add(project_id, resource_id, samples, unit,
start_at, end_at):
session = get_session()
try:
# NOTE(flwang): For now, there is only one entry in the samples dict
service, volume = samples.popitem()
resource_ref = Usage(service=service,
volume=volume,
unit=unit,
resource_id=resource_id, project_id=project_id,
start_at=start_at, end_at=end_at)
resource_ref.save(session=session)
    except sa.exc.InvalidRequestError as e:
        # FIXME(flwang): I assume there should be a DBDuplicateEntry error
        if "Duplicate entry" in str(e):
            LOG.warning(e)
            return
        raise
def resource_add(project_id, resource_id, resource_type, raw, metadata):
session = get_session()
metadata = _merge_resource_metadata({'type': resource_type}, raw, metadata)
resource_ref = Resource(id=resource_id, project_id=project_id,
resource_type=resource_type, meta_data=metadata)
try:
resource_ref.save(session=session)
    except sa.exc.InvalidRequestError as e:
        # FIXME(flwang): I assume there should be a DBDuplicateEntry error
        if "Duplicate entry" in str(e):
            LOG.warning(e)
            return
        raise
def _merge_resource_metadata(md_dict, entry, md_def):
"""Strips metadata from the entry as defined in the config,
and merges it with the given metadata dict.
"""
for field, parameters in md_def.iteritems():
for _, source in enumerate(parameters['sources']):
try:
value = entry['resource_metadata'][source]
if 'template' in parameters:
md_dict[field] = parameters['template'] % value
break
else:
md_dict[field] = value
break
except KeyError:
# Just means we haven't found the right value yet.
# Or value isn't present.
pass
return md_dict
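The metadata-stripping helper above walks a config-driven definition: for each target field it takes the first source key present in the sample's `resource_metadata`, optionally applying a printf-style template. A self-contained Python 3 sketch of the same logic (the field names and sample values are hypothetical):

```python
def merge_resource_metadata(md_dict, entry, md_def):
    # For each target field, take the first candidate source key that is
    # present in the sample's resource_metadata.
    for field, parameters in md_def.items():
        for source in parameters['sources']:
            try:
                value = entry['resource_metadata'][source]
                if 'template' in parameters:
                    md_dict[field] = parameters['template'] % value
                else:
                    md_dict[field] = value
                break
            except KeyError:
                # Source key missing; try the next candidate.
                continue
    return md_dict

entry = {'resource_metadata': {'display_name': 'web01',
                               'instance_flavor_id': 'c1.c1r1'}}
md_def = {'name': {'sources': ['name', 'display_name']},
          'flavor': {'sources': ['instance_flavor_id'],
                     'template': 'flavor: %s'}}
merged = merge_resource_metadata({'type': 'Virtual Machine'}, entry, md_def)
```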

@@ -0,0 +1,68 @@
# Copyright 2014 Catalyst IT Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo.utils import timeutils
from oslo.db.sqlalchemy import models as oslo_models
from sqlalchemy import Column
from sqlalchemy import DateTime
from sqlalchemy.ext import declarative
from sqlalchemy import Text
from sqlalchemy.types import TypeDecorator
from distil.openstack.common import jsonutils
class JSONEncodedDict(TypeDecorator):
"""Represents an immutable structure as a json-encoded string."""
impl = Text
def process_bind_param(self, value, dialect):
if value is not None:
value = jsonutils.dumps(value)
return value
def process_result_value(self, value, dialect):
if value is not None:
value = jsonutils.loads(value)
return value
class _DistilBase(oslo_models.ModelBase, oslo_models.TimestampMixin):
"""Base class for all SQLAlchemy DB Models."""
__table_args__ = {'mysql_engine': 'InnoDB'}
created_at = Column(DateTime, default=lambda: timeutils.utcnow(),
nullable=False)
updated_at = Column(DateTime, default=lambda: timeutils.utcnow(),
nullable=False, onupdate=lambda: timeutils.utcnow())
def keys(self):
return self.__dict__.keys()
def values(self):
return self.__dict__.values()
def items(self):
return self.__dict__.items()
def to_dict(self):
d = self.__dict__.copy()
d.pop("_sa_instance_state")
return d
DistilBase = declarative.declarative_base(cls=_DistilBase)
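The `JSONEncodedDict` type above simply serializes on the way into the database and deserializes on the way out; the round trip can be sketched without SQLAlchemy using only the standard library:

```python
import json

def bind_param(value):
    # Mirrors JSONEncodedDict.process_bind_param: dict -> JSON text.
    return json.dumps(value) if value is not None else None

def result_value(value):
    # Mirrors JSONEncodedDict.process_result_value: JSON text -> dict.
    return json.loads(value) if value is not None else None

stored = bind_param({'type': 'Virtual Machine', 'name': 'web01'})
loaded = result_value(stored)
```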

@@ -0,0 +1,111 @@
# Copyright (C) 2014 Catalyst IT Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from sqlalchemy.ext.hybrid import hybrid_property, hybrid_method
from sqlalchemy import Column
from sqlalchemy import Integer
from sqlalchemy import DateTime
from sqlalchemy import ForeignKey
from sqlalchemy import Numeric
from sqlalchemy import Sequence
from sqlalchemy import String
from sqlalchemy.orm import relationship
from distil.db.sqlalchemy.model_base import JSONEncodedDict
from distil.db.sqlalchemy.model_base import DistilBase
class Resource(DistilBase):
"""Database model for storing metadata associated with a resource.
"""
__tablename__ = 'resource'
id = Column(String(64), primary_key=True)
project_id = Column(String(64), ForeignKey("project.id"),
primary_key=True)
resource_type = Column(String(64), nullable=True)
meta_data = Column(JSONEncodedDict(), default={})
class Usage(DistilBase):
"""Simplified data store of usage information for a given service,
in a resource, in a project. Similar to ceilometer datastore,
but stores local transformed data.
"""
__tablename__ = 'usage'
service = Column(String(100), primary_key=True)
unit = Column(String(255))
volume = Column(Numeric(precision=20, scale=2), nullable=False)
resource_id = Column(String(64), ForeignKey('resource.id'),
primary_key=True)
project_id = Column(String(64), ForeignKey('project.id'), primary_key=True)
start_at = Column(DateTime, nullable=False, primary_key=True)
end_at = Column(DateTime, nullable=False, primary_key=True)
@hybrid_property
def length(self):
return self.end_at - self.start_at
@hybrid_method
def intersects(self, other):
return (self.start_at <= other.end_at and
other.start_at <= self.end_at)
def __str__(self):
return ('<Usage {project_id=%s resource_id=%s service=%s'
'start_at=%s end_at =%s volume=%s}>' % (self.project_id,
self.resource_id,
self.service,
self.start_at,
self.end_at,
self.volume))
class Project(DistilBase):
"""Model for storage of metadata related to a project.
"""
__tablename__ = 'project'
id = Column(String(64), primary_key=True, nullable=False)
name = Column(String(64), nullable=False)
meta_data = Column(JSONEncodedDict(), default={})
class LastRun(DistilBase):
    __tablename__ = 'last_run'
    id = Column(Integer, Sequence("last_run_id_seq"), primary_key=True)
    last_run = Column(DateTime, nullable=True)
# class SalesOrder(DistilBase):
# """Historic billing periods so that tenants
# cannot be rebilled accidentally.
# """
# __tablename__ = 'sales_orders'
# id = Column(Integer, primary_key=True)
# project_id = Column(
# String(100),
# ForeignKey("project.id"),
# primary_key=True)
# start = Column(DateTime, nullable=False, primary_key=True)
# end = Column(DateTime, nullable=False, primary_key=True)
#
# project = relationship("Project")
#
# @hybrid_property
# def length(self):
# return self.end - self.start
#
# @hybrid_method
# def intersects(self, other):
# return (self.start <= other.end and other.start <= self.end)
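The `Usage.intersects` hybrid treats usage rows as closed time intervals: two rows overlap exactly when each starts no later than the other ends. The same test in plain Python (the dates are hypothetical):

```python
from datetime import datetime

def intersects(a_start, a_end, b_start, b_end):
    # Closed-interval overlap test, as used by Usage.intersects.
    return a_start <= b_end and b_start <= a_end

a = (datetime(2014, 4, 1, 0), datetime(2014, 4, 1, 12))
b = (datetime(2014, 4, 1, 6), datetime(2014, 4, 2, 0))
c = (datetime(2014, 4, 2, 6), datetime(2014, 4, 3, 0))
overlap_ab = intersects(*a, *b)  # windows share 06:00-12:00
overlap_ac = intersects(*a, *c)  # disjoint windows
```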

distil/exceptions.py
@@ -0,0 +1,80 @@
# Copyright 2014 Catalyst IT Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sys
from distil.i18n import _
# FIXME(flwang): configuration?
_FATAL_EXCEPTION_FORMAT_ERRORS = False
class DistilException(Exception):
"""Base Distil Exception
To correctly use this class, inherit from it and define
a 'message' property. That message will get printf'd
with the keyword arguments provided to the constructor.
"""
msg_fmt = _("An unknown exception occurred.")
def __init__(self, message=None, **kwargs):
self.kwargs = kwargs
if 'code' not in self.kwargs:
try:
self.kwargs['code'] = self.code
except AttributeError:
pass
if not message:
try:
message = self.msg_fmt % kwargs
except KeyError:
exc_info = sys.exc_info()
if _FATAL_EXCEPTION_FORMAT_ERRORS:
raise exc_info[0], exc_info[1], exc_info[2]
else:
message = self.msg_fmt
super(DistilException, self).__init__(message)
def format_message(self):
if self.__class__.__name__.endswith('_Remote'):
return self.args[0]
else:
return unicode(self)
class IncorrectStateError(DistilException):
code = "INCORRECT_STATE_ERROR"
def __init__(self, message):
self.message = message
class NotFoundException(DistilException):
message = _("Object '%s' is not found")
value = None
def __init__(self, value, message=None):
self.code = "NOT_FOUND"
self.value = value
if message:
self.message = message % value
class DuplicateException(DistilException):
message = _("An object with the same identifier already exists.")
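The base exception formats `msg_fmt` with the constructor's keyword arguments, falling back to the raw format string when a key is missing. A Python 3 sketch of that behaviour without the i18n machinery (the subclass here is hypothetical):

```python
class DistilException(Exception):
    msg_fmt = "An unknown exception occurred."

    def __init__(self, message=None, **kwargs):
        if not message:
            try:
                message = self.msg_fmt % kwargs
            except KeyError:
                # Missing kwargs: fall back to the raw format string.
                message = self.msg_fmt
        super().__init__(message)

class QuotaExceeded(DistilException):  # hypothetical subclass
    msg_fmt = "Project %(project)s exceeded its quota."

err = QuotaExceeded(project='demo')
fallback = QuotaExceeded()
```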

distil/i18n.py
@@ -0,0 +1,31 @@
# Copyright 2014 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_i18n import * # noqa
_translators = TranslatorFactory(domain='distil')
# The primary translation function using the well-known name "_"
_ = _translators.primary
# Translators for log levels.
#
# The abbreviated names are meant to reflect the usual use of a short
# name like '_'. The "L" is for "log" and the other letter comes from
# the level.
_LI = _translators.log_info
_LW = _translators.log_warning
_LE = _translators.log_error
_LC = _translators.log_critical

distil/rater/__init__.py
@@ -0,0 +1,23 @@
# Copyright 2014 Catalyst IT Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
class BaseRater(object):
def __init__(self, conf):
self.conf = conf
def rate(self, name, region=None):
raise NotImplementedError("Not implemented in base class")

distil/rater/file.py
@@ -0,0 +1,47 @@
# Copyright 2014 Catalyst IT Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import csv
from decimal import Decimal
import logging as log
from distil import rater
class FileRater(rater.BaseRater):
def __init__(self, conf):
super(FileRater, self).__init__(conf)
try:
            with open(self.conf['file']) as fh:
# Makes no opinions on the file structure
reader = csv.reader(fh, delimiter="|")
self.__rates = {
row[1].strip(): {
'rate': Decimal(row[3].strip()),
'region': row[0].strip(),
'unit': row[2].strip()
} for row in reader
}
except Exception as e:
log.critical('Failed to load rates file: `%s`' % e)
raise
def rate(self, name, region=None):
return {
'rate': self.__rates[name]['rate'],
'unit': self.__rates[name]['unit']
}
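The rates file parsed above is pipe-delimited, one product per row: `region | name | unit | rate`. A sketch of the same parse over an in-memory file (the sample rows and values are hypothetical):

```python
import csv
import io
from decimal import Decimal

SAMPLE = ("nz-por-1 | b1.standard | gigabyte | 0.0005\n"
          "nz-por-1 | m1.small   | hour     | 0.048\n")

reader = csv.reader(io.StringIO(SAMPLE), delimiter='|')
# Same shape as FileRater.__rates: keyed by product name.
rates = {row[1].strip(): {'rate': Decimal(row[3].strip()),
                          'region': row[0].strip(),
                          'unit': row[2].strip()}
         for row in reader}
```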

distil/rater/odoo.py
@@ -0,0 +1,25 @@
# Copyright 2014 Catalyst IT Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from distil import rater
from distil.utils import odoo
class OdooRater(rater.BaseRater):
    def rate(self, name, region=None):
        # TODO(flwang): Pull the rate for the given product from Odoo.
        erp = odoo.Odoo()
        raise NotImplementedError("Odoo-based rating is not implemented yet")

@@ -0,0 +1,29 @@
# Copyright (c) 2016 Catalyst IT Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
from oslo_log import log as logging
from distil.utils import odoo
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
def get_prices(format=None):
    # TODO(flwang): Shape the price list based on the requested format.
    erp = odoo.Odoo()
    return erp.get_prices()

@@ -0,0 +1,28 @@
# Copyright (c) 2014 Catalyst IT Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from distil.utils import general
class BaseTransformer(object):
def __init__(self):
self.config = general.get_collector_config()['transformers']
def transform_usage(self, meter_name, raw_data, start_at, end_at):
return self._transform_usage(meter_name, raw_data, start_at, end_at)
def _transform_usage(self, meter_name, raw_data, start_at, end_at):
raise NotImplementedError

@@ -0,0 +1,44 @@
# Copyright (c) 2013 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import datetime
from distil.transformer import BaseTransformer
class MaxTransformer(BaseTransformer):
"""Transformer for max-integration of a gauge value over time.
If the raw unit is 'gigabytes', then the transformed unit is
'gigabyte-hours'.
"""
def _transform_usage(self, meter_name, raw_data, start_at, end_at):
max_vol = max([v["counter_volume"]
for v in raw_data]) if len(raw_data) else 0
hours = (end_at - start_at).total_seconds() / 3600.0
return {meter_name: max_vol * hours}
class SumTransformer(BaseTransformer):
"""Transformer for sum-integration of a gauge value for given period.
"""
def _transform_usage(self, meter_name, raw_data, start_at, end_at):
sum_vol = 0
for sample in raw_data:
t = datetime.datetime.strptime(sample['timestamp'],
'%Y-%m-%dT%H:%M:%S.%f')
if t >= start_at and t < end_at:
sum_vol += sample["counter_volume"]
return {meter_name: sum_vol}
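For the max transformer above, a gauge sampled at 20, 35 and 30 GB over a two-hour window bills as 35 GB x 2 h = 70 gigabyte-hours; the sum transformer would instead add the in-window samples. The same arithmetic by hand (sample values hypothetical):

```python
samples = [{'counter_volume': v} for v in (20.0, 35.0, 30.0)]
hours = 2.0  # (end_at - start_at).total_seconds() / 3600.0

# MaxTransformer: peak value integrated over the window.
max_vol = max(s['counter_volume'] for s in samples) if samples else 0
usage = max_vol * hours  # gigabyte-hours

# SumTransformer analogue: plain sum of the in-window samples.
sum_vol = sum(s['counter_volume'] for s in samples)
```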

@@ -0,0 +1,148 @@
# Copyright (c) 2014 Catalyst IT Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import datetime
from distil.utils import general
from distil.utils import constants
from distil.transformer import BaseTransformer
class UpTimeTransformer(BaseTransformer):
"""
Transformer to calculate uptime based on states,
    which is broken down by the flavor in use at each point in time.
"""
def _transform_usage(self, name, data, start, end):
# get tracked states from config
tracked = self.config['uptime']['tracked_states']
tracked_states = {constants.states[i] for i in tracked}
usage_dict = {}
def sort_and_clip_end(usage):
cleaned = (self._clean_entry(s) for s in usage)
clipped = (s for s in cleaned if s['timestamp'] < end)
return sorted(clipped, key=lambda x: x['timestamp'])
state = sort_and_clip_end(data)
if not len(state):
# there was no data for this period.
return usage_dict
last_state = state[0]
if last_state['timestamp'] >= start:
last_timestamp = last_state['timestamp']
seen_sample_in_window = True
else:
last_timestamp = start
seen_sample_in_window = False
def _add_usage(diff):
flav = last_state['flavor']
usage_dict[flav] = usage_dict.get(flav, 0) + diff.total_seconds()
for val in state[1:]:
if last_state["counter_volume"] in tracked_states:
diff = val["timestamp"] - last_timestamp
if val['timestamp'] > last_timestamp:
# if diff < 0 then we were looking back before the start
# of the window.
_add_usage(diff)
last_timestamp = val['timestamp']
seen_sample_in_window = True
last_state = val
# extend the last state we know about, to the end of the window,
# if we saw any actual uptime.
if (end and last_state['counter_volume'] in tracked_states
and seen_sample_in_window):
diff = end - last_timestamp
_add_usage(diff)
# map the flavors to names on the way out
return {general.flavor_name(f): v for f, v in usage_dict.items()}
def _clean_entry(self, entry):
result = {
'counter_volume': entry['counter_volume'],
'flavor': entry['resource_metadata'].get(
'flavor.id', entry['resource_metadata'].get(
'instance_flavor_id', 0
)
)
}
try:
result['timestamp'] = datetime.datetime.strptime(
entry['timestamp'], constants.date_format)
except ValueError:
result['timestamp'] = datetime.datetime.strptime(
entry['timestamp'], constants.date_format_f)
return result
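A simplified, single-flavor sketch of the accumulation loop above (state codes and timestamps invented for illustration): time between consecutive samples is billed only when the previous sample was in a tracked state, and the last known state is extended to the end of the window.

```python
import datetime

def billable_seconds(samples, start, end, tracked=(1,)):
    # samples: (timestamp, state_code) pairs, already sorted and clipped
    # to the window as sort_and_clip_end does above.
    usage = 0.0
    last_ts, last_state = None, None
    for ts, state in samples:
        if last_ts is not None and last_state in tracked:
            usage += (ts - max(last_ts, start)).total_seconds()
        last_ts, last_state = ts, state
    if last_ts is not None and last_state in tracked:
        # extend the last known state to the end of the window
        usage += (end - max(last_ts, start)).total_seconds()
    return usage

t = lambda h, m: datetime.datetime(2016, 4, 27, h, m)
samples = [(t(10, 0), 1),   # active
           (t(10, 30), 5),  # stopped
           (t(10, 45), 1)]  # active again
secs = billable_seconds(samples, t(10, 0), t(11, 0))
# 30 min active + 15 min active at the end of the window == 2700 seconds
```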
class FromImageTransformer(BaseTransformer):
"""
Transformer for creating Volume entries from instance metadata.
    Checks whether the instance was booted from an image, and finds
    the largest root disk size among entries.
    This relies heavily on instance metadata.
"""
def _transform_usage(self, name, data, start, end):
checks = self.config['from_image']['md_keys']
none_values = self.config['from_image']['none_values']
service = self.config['from_image']['service']
size_sources = self.config['from_image']['size_keys']
size = 0
for entry in data:
for source in checks:
try:
if (entry['resource_metadata'][source] in none_values):
return None
break
except KeyError:
pass
for source in size_sources:
try:
root_size = float(entry['resource_metadata'][source])
if root_size > size:
size = root_size
except KeyError:
pass
hours = (end - start).total_seconds() / 3600.0
return {service: size * hours}
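The size-times-hours result can be sketched without the class machinery; this simplified function (entry metadata invented for illustration) mirrors the size scan and billing step above:

```python
import datetime

def from_image_usage(service, data, start, end, size_keys=('root_gb',)):
    # Take the largest root disk size seen across the entries, then
    # bill it for the whole window, as the transformer above does.
    size = 0.0
    for entry in data:
        for key in size_keys:
            try:
                size = max(size, float(entry['resource_metadata'][key]))
            except KeyError:
                pass
    hours = (end - start).total_seconds() / 3600.0
    return {service: size * hours}

entries = [{'resource_metadata': {'image_ref': 'abc123', 'root_gb': '20'}}]
usage = from_image_usage('volume.size', entries,
                         datetime.datetime(2016, 4, 27, 10),
                         datetime.datetime(2016, 4, 27, 12))
# a 20 GB root disk over a 2-hour window == {'volume.size': 40.0}
```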
class NetworkServiceTransformer(BaseTransformer):
"""Transformer for Neutron network service, such as LBaaS, VPNaaS,
FWaaS, etc.
"""
def _transform_usage(self, name, data, start, end):
# NOTE(flwang): The network service pollster of Ceilometer is using
# status as the volume(see https://github.com/openstack/ceilometer/
# blob/master/ceilometer/network/services/vpnaas.py#L55), so we have
# to check the volume to make sure only the active service is
        # charged (0=inactive, 1=active).
max_vol = max([v["counter_volume"] for v in data
if v["counter_volume"] < 2]) if len(data) else 0
hours = (end - start).total_seconds() / 3600.0
return {name: max_vol * hours}
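The status-as-volume trick above is easy to check in isolation; this standalone version (sample data invented for illustration) shows an active service billed for the whole window:

```python
import datetime

def network_service_usage(name, data, start, end):
    # Status doubles as the volume (0=inactive, 1=active); values >= 2
    # are discarded as invalid before taking the max.
    max_vol = max([v['counter_volume'] for v in data
                   if v['counter_volume'] < 2]) if len(data) else 0
    hours = (end - start).total_seconds() / 3600.0
    return {name: max_vol * hours}

samples = [{'counter_volume': 0}, {'counter_volume': 1}, {'counter_volume': 1}]
usage = network_service_usage('network.services.vpn', samples,
                              datetime.datetime(2016, 4, 27, 10),
                              datetime.datetime(2016, 4, 27, 12))
# the VPN was active at some point, so the full 2 hours are billed
```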

distil/utils/__init__.py Normal file

distil/utils/api.py Normal file

@@ -0,0 +1,229 @@
# Copyright (c) 2013 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import traceback
import flask
from werkzeug import datastructures
from distil import exceptions as ex
from distil.i18n import _
from distil.i18n import _LE
from oslo_log import log as logging
from distil.utils import wsgi
LOG = logging.getLogger(__name__)
class Rest(flask.Blueprint):
def get(self, rule, status_code=200):
return self._mroute('GET', rule, status_code)
def post(self, rule, status_code=202):
return self._mroute('POST', rule, status_code)
def put(self, rule, status_code=202):
return self._mroute('PUT', rule, status_code)
def delete(self, rule, status_code=204):
return self._mroute('DELETE', rule, status_code)
def _mroute(self, methods, rule, status_code=None, **kw):
if type(methods) is str:
methods = [methods]
return self.route(rule, methods=methods, status_code=status_code, **kw)
def route(self, rule, **options):
status = options.pop('status_code', None)
def decorator(func):
endpoint = options.pop('endpoint', func.__name__)
def handler(**kwargs):
LOG.debug("Rest.route.decorator.handler, kwargs=%s", kwargs)
_init_resp_type()
if status:
flask.request.status_code = status
if flask.request.method in ['POST', 'PUT']:
kwargs['data'] = request_data()
try:
return func(**kwargs)
except ex.DistilException as e:
return bad_request(e)
except Exception as e:
return internal_error(500, 'Internal Server Error', e)
f_rule = rule
self.add_url_rule(f_rule, endpoint, handler, **options)
self.add_url_rule(f_rule + '.json', endpoint, handler, **options)
self.add_url_rule(f_rule + '.xml', endpoint, handler, **options)
return func
return decorator
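The registration pattern of the Rest blueprint can be shown without Flask; this toy stand-in (class and endpoint names invented for illustration) records what `_mroute` would register:

```python
class MiniRest:
    """Toy stand-in for the Rest blueprint: records (method, rule, status)
    tuples instead of registering Flask URL rules."""
    def __init__(self):
        self.rules = []

    def get(self, rule, status_code=200):
        return self._mroute('GET', rule, status_code)

    def post(self, rule, status_code=202):
        return self._mroute('POST', rule, status_code)

    def _mroute(self, method, rule, status_code):
        def decorator(func):
            # record the rule; the real class wraps func in a handler too
            self.rules.append((method, rule, status_code))
            return func
        return decorator

api = MiniRest()

@api.get('/health')
def health():
    return {'status': 'ok'}

# api.rules now holds [('GET', '/health', 200)], and health() is unchanged
```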
RT_JSON = datastructures.MIMEAccept([("application/json", 1)])
RT_XML = datastructures.MIMEAccept([("application/xml", 1)])
def _init_resp_type():
"""Extracts response content type."""
# get content type from Accept header
resp_type = flask.request.accept_mimetypes
# url /foo.xml
if flask.request.path.endswith('.xml'):
resp_type = RT_XML
# url /foo.json
if flask.request.path.endswith('.json'):
resp_type = RT_JSON
flask.request.resp_type = resp_type
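The precedence rule above (URL suffix beats the Accept header) can be stated as a small pure function; the function name is illustrative, not from the commit:

```python
def resolve_resp_type(path, accept='application/json'):
    # A URL suffix overrides whatever the Accept header asked for,
    # matching the precedence in _init_resp_type above.
    if path.endswith('.xml'):
        return 'application/xml'
    if path.endswith('.json'):
        return 'application/json'
    return accept

resolve_resp_type('/v2/costs.xml')                     # suffix wins
resolve_resp_type('/v2/costs', 'application/xml')      # header used as-is
```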
def render(res=None, resp_type=None, status=None, **kwargs):
if not res:
res = {}
if type(res) is dict:
res.update(kwargs)
elif kwargs:
# can't merge kwargs into the non-dict res
abort_and_log(500,
_("Non-dict and non-empty kwargs passed to render"))
status_code = getattr(flask.request, 'status_code', None)
if status:
status_code = status
if not status_code:
status_code = 200
if not resp_type:
resp_type = getattr(flask.request, 'resp_type', None)
if not resp_type:
resp_type = RT_JSON
serializer = None
if "application/json" in resp_type:
resp_type = RT_JSON
serializer = wsgi.JSONDictSerializer()
elif "application/xml" in resp_type:
resp_type = RT_XML
serializer = wsgi.XMLDictSerializer()
else:
abort_and_log(400, _("Content type '%s' isn't supported") % resp_type)
body = serializer.serialize(res)
resp_type = str(resp_type)
return flask.Response(response=body, status=status_code,
mimetype=resp_type)
def request_data():
if hasattr(flask.request, 'parsed_data'):
return flask.request.parsed_data
if not flask.request.content_length > 0:
LOG.debug("Empty body provided in request")
return dict()
if flask.request.file_upload:
return flask.request.data
deserializer = None
content_type = flask.request.mimetype
if not content_type or content_type in RT_JSON:
deserializer = wsgi.JSONDeserializer()
elif content_type in RT_XML:
abort_and_log(400, _("XML requests are not supported yet"))
else:
abort_and_log(400,
_("Content type '%s' isn't supported") % content_type)
    # cache the parsed request data to avoid unwanted re-parsing
parsed_data = deserializer.deserialize(flask.request.data)['body']
flask.request.parsed_data = parsed_data
return flask.request.parsed_data
def get_request_args():
return flask.request.args
def abort_and_log(status_code, descr, exc=None):
LOG.error(_LE("Request aborted with status code %(code)s and "
"message '%(message)s'"),
{'code': status_code, 'message': descr})
if exc is not None:
LOG.error(traceback.format_exc())
flask.abort(status_code, description=descr)
def render_error_message(error_code, error_message, error_name):
message = {
"error_code": error_code,
"error_message": error_message,
"error_name": error_name
}
resp = render(message)
resp.status_code = error_code
return resp
def internal_error(status_code, descr, exc=None):
LOG.error(_LE("Request aborted with status code %(code)s and "
"message '%(message)s'"),
{'code': status_code, 'message': descr})
if exc is not None:
LOG.error(traceback.format_exc())
error_code = "INTERNAL_SERVER_ERROR"
if status_code == 501:
error_code = "NOT_IMPLEMENTED_ERROR"
return render_error_message(status_code, descr, error_code)
def bad_request(error):
error_code = 400
LOG.debug("Validation Error occurred: "
"error_code=%s, error_message=%s, error_name=%s",
error_code, error.message, error.code)
return render_error_message(error_code, error.message, error.code)
def not_found(error):
error_code = 404
LOG.debug("Not Found exception occurred: "
"error_code=%s, error_message=%s, error_name=%s",
error_code, error.message, error.code)
return render_error_message(error_code, error.message, error.code)

distil/utils/constants.py Normal file

@@ -0,0 +1,49 @@
# Copyright (C) 2014 Catalyst IT Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from datetime import datetime
# Date format Ceilometer uses
# 2013-07-03T13:34:17
# which, as a strptime format, is:
# timestamp = datetime.strptime(res["timestamp"], "%Y-%m-%dT%H:%M:%S.%f")
# or
# timestamp = datetime.strptime(res["timestamp"], "%Y-%m-%dT%H:%M:%S")
# Most of the time we use date_format
date_format = "%Y-%m-%dT%H:%M:%S"
# Sometimes things also have milliseconds, so we look for that too.
# Because why not be annoying in all the ways?
date_format_f = "%Y-%m-%dT%H:%M:%S.%f"
# Some useful constants
iso_time = "%Y-%m-%dT%H:%M:%S"
iso_date = "%Y-%m-%d"
dawn_of_time = datetime(2014, 4, 1)
# VM states:
states = {'active': 1,
'building': 2,
'paused': 3,
'suspended': 4,
'stopped': 5,
'rescued': 6,
'resized': 7,
'soft_deleted': 8,
'deleted': 9,
'error': 10,
'shelved': 11,
'shelved_offloaded': 12}

distil/utils/general.py Normal file

@@ -0,0 +1,104 @@
# Copyright (C) 2014 Catalyst IT Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from datetime import datetime
from datetime import timedelta
from decimal import Decimal
import math
import yaml
from oslo.config import cfg
from novaclient.v1_1 import client
from distil.openstack.common import log as logging
COLLECTOR_OPTS = [
cfg.StrOpt('transformer_config',
default='/etc/distil/collector.yaml',
help='The configuration file of collector',
),
]
CONF = cfg.CONF
CONF.register_opts(COLLECTOR_OPTS, group='collector')
cache = {}
LOG = logging.getLogger(__name__)
def get_collector_config():
# FIXME(flwang): The config should be cached or find a better way to load
# it dynamically.
conf = None
try:
with open(CONF.collector.transformer_config) as f:
            conf = yaml.safe_load(f)
except IOError as e:
raise e
return conf
def generate_windows(start, end):
"""Generator for configured hour windows in a given range."""
# FIXME(flwang): CONF.collector.period
window_size = timedelta(hours=1)
while start + window_size <= end:
window_end = start + window_size
yield start, window_end
start = window_end
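The generator above yields back-to-back windows until the next one would overrun the range; a standalone version (window size lifted into a parameter for illustration):

```python
from datetime import datetime, timedelta

def generate_windows(start, end, window_size=timedelta(hours=1)):
    # Same loop as above; a trailing partial window is never yielded.
    while start + window_size <= end:
        window_end = start + window_size
        yield start, window_end
        start = window_end

wins = list(generate_windows(datetime(2016, 4, 27, 10),
                             datetime(2016, 4, 27, 13)))
# three one-hour windows: 10-11, 11-12, 12-13
```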
def log_and_time_it(f):
def decorator(*args, **kwargs):
start = datetime.utcnow()
LOG.info('Entering %s at %s' % (f.__name__, start))
f(*args, **kwargs)
LOG.info('Exiting %s at %s, elapsed %s' % (f.__name__,
datetime.utcnow(),
datetime.utcnow() - start))
return decorator
def flavor_name(flavor_id):
"""Grabs the correct flavor name from Nova given the correct ID."""
# FIXME(flwang): Read the auth info from CONF
if flavor_id not in cache:
nova = client.Client()
cache[flavor_id] = nova.flavors.get(flavor_id).name
return cache[flavor_id]
def to_gigabytes_from_bytes(value):
"""From Bytes, unrounded."""
return ((value / Decimal(1024)) / Decimal(1024)) / Decimal(1024)
def to_hours_from_seconds(value):
"""From seconds to rounded hours."""
return Decimal(math.ceil((value / Decimal(60)) / Decimal(60)))
conversions = {'byte': {'gigabyte': to_gigabytes_from_bytes},
'second': {'hour': to_hours_from_seconds}}
def convert_to(value, from_unit, to_unit):
"""Converts a given value to the given unit.
Assumes that the value is in the lowest unit form,
of the given unit (seconds or bytes).
e.g. if the unit is gigabyte we assume the value is in bytes
"""
if from_unit == to_unit:
return value
return conversions[from_unit][to_unit](value)
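Putting the conversion table and `convert_to` together shows the two behaviours worth noting: byte-to-gigabyte is unrounded, while second-to-hour rounds up:

```python
from decimal import Decimal
import math

def to_gigabytes_from_bytes(value):
    """From bytes, unrounded."""
    return ((value / Decimal(1024)) / Decimal(1024)) / Decimal(1024)

def to_hours_from_seconds(value):
    """From seconds to hours, rounded up."""
    return Decimal(math.ceil((value / Decimal(60)) / Decimal(60)))

conversions = {'byte': {'gigabyte': to_gigabytes_from_bytes},
               'second': {'hour': to_hours_from_seconds}}

def convert_to(value, from_unit, to_unit):
    if from_unit == to_unit:
        return value
    return conversions[from_unit][to_unit](value)

convert_to(Decimal(3 * 1024 ** 3), 'byte', 'gigabyte')  # exactly 3
convert_to(Decimal(5400), 'second', 'hour')             # 1.5h rounds up to 2
```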

distil/utils/keystone.py Normal file

@@ -0,0 +1,45 @@
# Copyright (C) 2014 Catalyst IT Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import requests
import json
import urllib
from keystoneclient.v2_0 import client as keystone_client
class KeystoneClient(keystone_client.Client):
def tenant_by_name(self, name):
authenticator = self.auth_url
url = "%(url)s/tenants?%(query)s" % {
"url": authenticator,
"query": urllib.urlencode({"name": name})
}
r = requests.get(url, headers={
"X-Auth-Token": self.auth_token,
"Content-Type": "application/json"
})
if r.ok:
data = json.loads(r.text)
assert data
return data
        else:
            r.raise_for_status()
def get_ceilometer_endpoint(self):
endpoint = self.service_catalog.url_for(service_type="metering",
endpoint_type="adminURL")
return endpoint

distil/utils/wsgi.py Normal file

@@ -0,0 +1,90 @@
# Copyright 2011 OpenStack LLC.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Only the (de)serialization utils are kept here, to keep the number of
# requirements down.
"""Utility methods for working with WSGI servers."""
import datetime
from xml.dom import minidom
from xml.parsers import expat
from distil import exceptions
from distil.i18n import _
from oslo_serialization import jsonutils
from oslo_log import log as logging
LOG = logging.getLogger(__name__)
class ActionDispatcher(object):
"""Maps method name to local methods through action name."""
def dispatch(self, *args, **kwargs):
"""Find and call local method."""
action = kwargs.pop('action', 'default')
action_method = getattr(self, str(action), self.default)
return action_method(*args, **kwargs)
def default(self, data):
raise NotImplementedError()
class DictSerializer(ActionDispatcher):
"""Default request body serialization"""
def serialize(self, data, action='default'):
return self.dispatch(data, action=action)
def default(self, data):
return ""
class JSONDictSerializer(DictSerializer):
"""Default JSON request body serialization"""
def default(self, data):
def sanitizer(obj):
if isinstance(obj, datetime.datetime):
_dtime = obj - datetime.timedelta(microseconds=obj.microsecond)
return _dtime.isoformat()
return unicode(obj)
return jsonutils.dumps(data, default=sanitizer)
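The sanitizer pattern above strips microseconds so timestamps serialize as plain ISO-8601; a standalone sketch using the stdlib `json` and `str` in place of `jsonutils` and the Python 2 `unicode` builtin:

```python
import datetime
import json

def sanitizer(obj):
    # Drop microseconds so the timestamp serializes as plain ISO-8601,
    # then fall back to the string form for anything else.
    if isinstance(obj, datetime.datetime):
        _dtime = obj - datetime.timedelta(microseconds=obj.microsecond)
        return _dtime.isoformat()
    return str(obj)

body = json.dumps({'ts': datetime.datetime(2016, 4, 27, 10, 0, 0, 123456)},
                  default=sanitizer)
# body == '{"ts": "2016-04-27T10:00:00"}' -- microseconds removed
```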
class TextDeserializer(ActionDispatcher):
"""Default request body deserialization"""
def deserialize(self, datastring, action='default'):
return self.dispatch(datastring, action=action)
def default(self, datastring):
return {}
class JSONDeserializer(TextDeserializer):
def _from_json(self, datastring):
try:
return jsonutils.loads(datastring)
except ValueError:
msg = _("cannot understand JSON")
            raise exceptions.MalformedRequestBody(reason=msg)
def default(self, datastring):
return {'body': self._from_json(datastring)}

distil/version.py Normal file

@@ -0,0 +1,18 @@
# Copyright (c) 2014 Catalyst IT Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from pbr import version
version_info = version.VersionInfo('distil')

etc/collector.yaml Normal file

@@ -0,0 +1,136 @@
---
main:
region: nz_wlg_2
# timezone is unused at this time
timezone: Pacific/Auckland
database_uri: postgres://admin:password@localhost:5432/billing
trust_sources:
- openstack
log_file: logs/billing.log
ignore_tenants:
- test
rates_config:
file: test_rates.csv
# Keystone auth user details
auth:
end_point: http://localhost:5000/v2.0
default_tenant: demo
username: admin
password: openstack
insecure: True
authenticate_clients: True
# used for authenticate_clients
identity_url: http://localhost:35357
# configuration for defining usage collection
collection:
# defines which meter is mapped to which transformer
meter_mappings:
# meter name as seen in ceilometer
state:
# type of resource it maps to (seen on sales order)
type: Virtual Machine
# which transformer to use
transformer: Uptime
# what unit type is coming in via the meter
unit: second
ip.floating:
type: Floating IP
transformer: GaugeMax
unit: hour
volume.size:
type: Volume
transformer: GaugeMax
unit: gigabyte
instance:
type: Volume
transformer: FromImage
unit: gigabyte
# if true allows id pattern, and metadata patterns
transform_info: True
# allows us to put the id into a pattern,
# only if transform_info is true,
# such as to append something to it
res_id_template: "%s-root_disk"
image.size:
type: Image
transformer: GaugeMax
unit: byte
bandwidth:
type: Network Traffic
transformer: GaugeSum
unit: byte
network.services.vpn:
type: VPN
transformer: GaugeNetworkService
unit: hour
network:
type: Network
transformer: GaugeMax
unit: hour
# metadata definition for resources (seen on invoice)
metadata_def:
# resource type (must match above definition)
Virtual Machine:
# name the field will have on the sales order
name:
sources:
# which keys to search for in the ceilometer entry metadata
# this can be more than one as metadata is inconsistent between source types
- display_name
availability zone:
sources:
- OS-EXT-AZ:availability_zone
Volume:
name:
sources:
- display_name
# template is only used if 'transform_info' in meter mappings is true.
template: "%s - root disk"
availability zone:
sources:
- availability_zone
Floating IP:
ip address:
sources:
- floating_ip_address
Image:
name:
sources:
- name
- properties.image_name
VPN:
name:
sources:
- name
subnet:
sources:
- subnet_id
Network:
name:
sources:
- name
NetworkTraffic:
meter_label_id:
sources:
- label_id
# transformer configs
transformers:
uptime:
# states marked as "billable" for VMs.
tracked_states:
- active
- paused
- rescued
- resized
from_image:
service: volume.size
# What metadata values to check
md_keys:
- image_ref
- image_meta.base_image_ref
none_values:
- None
- ""
# where to get volume size from
size_keys:
- root_gb

etc/distil.conf.sample Normal file

@@ -0,0 +1,40 @@
[DEFAULT]
debug = True
ignore_tenants = demo
timezone = Pacific/Auckland
host = localhost
port = 9999
[collector]
collector_config = /etc/distil/collector.yaml
transformer_config = /etc/distil/transformer.yaml
[rater]
type = file
rates_file = /etc/distil/rates.csv
[odoo]
version=8.0
hostname=
port=443
protocol=jsonrpc+ssl
database=
user=
password=
[database]
connection = mysql://root:passw0rd@127.0.0.1/distil?charset=utf8
backend = sqlalchemy
[keystone_authtoken]
memcache_servers = 127.0.0.1:11211
signing_dir = /var/cache/distil
cafile = /opt/stack/data/ca-bundle.pem
auth_uri = http://127.0.0.1:5000
project_domain_id = default
project_name = service
user_domain_id = default
password = passw0rd
username = distil
auth_url = http://127.0.0.1:35357
auth_type = password

etc/transformer.yaml Normal file

@@ -0,0 +1,116 @@
# configuration for defining usage collection
collection:
# defines which meter is mapped to which transformer
meter_mappings:
# meter name as seen in ceilometer
state:
# type of resource it maps to (seen on sales order)
type: Virtual Machine
# which transformer to use
transformer: uptime
# what unit type is coming in via the meter
unit: second
metadata:
name:
sources:
# which keys to search for in the ceilometer entry metadata
# this can be more than one as metadata is inconsistent between
# source types
- display_name
availability zone:
sources:
- OS-EXT-AZ:availability_zone
ip.floating:
type: Floating IP
transformer: max
unit: hour
metadata:
ip address:
sources:
- floating_ip_address
volume.size:
type: Volume
transformer: max
unit: gigabyte
metadata:
name:
sources:
- display_name
availability zone:
sources:
- availability_zone
instance:
type: Volume
transformer: fromimage
unit: gigabyte
# if true allows id pattern, and metadata patterns
transform_info: True
# allows us to put the id into a pattern,
# only if transform_info is true,
# such as to append something to it
res_id_template: "%s-root_disk"
metadata:
name:
sources:
- display_name
template: "%s - root disk"
availability zone:
sources:
- availability_zone
image.size:
type: Image
transformer: max
unit: byte
metadata:
name:
sources:
- name
- properties.image_name
bandwidth:
type: Network Traffic
transformer: sum
unit: byte
metadata:
meter_label_id:
sources:
- label_id
network.services.vpn:
type: VPN
transformer: networkservice
unit: hour
metadata:
name:
sources:
- name
subnet:
sources:
- subnet_id
network:
type: Network
transformer: max
unit: hour
metadata:
name:
sources:
- name
# transformer configs
transformers:
uptime:
# states marked as "billable" for VMs.
tracked_states:
- active
- paused
- rescued
- resized
from_image:
service: volume.size
# What metadata values to check
md_keys:
- image_ref
- image_meta.base_image_ref
none_values:
- None
- ""
# where to get volume size from
size_keys:
- root_gb

@@ -1,10 +0,0 @@
region | service0 | gigabyte | 0.32
region | service1 | gigabyte | 0.312
region | service2 | gigabyte | 0.43
region | service3 | gigabyte | 0.38
region | service4 | gigabyte | 0.43
region | service5 | gigabyte | 0.32
region | service6 | gigabyte | 0.312
region | service7 | gigabyte | 0.53
region | service8 | gigabyte | 0.73
region | service9 | gigabyte | 0.9

old-requirements.txt Normal file

@@ -0,0 +1,36 @@
Babel==1.3
Flask==0.10.1
Jinja2==2.7.2
MarkupSafe==0.18
MySQL-python==1.2.5
PyMySQL==0.6.1
PyYAML==3.10
SQLAlchemy>=1.0.10,<1.1.0 # MIT
WebOb==1.3.1
WebTest==2.0.14
Werkzeug==0.9.4
beautifulsoup4==4.3.2
decorator==3.4.0
httplib2==0.8
iso8601==0.1.8
itsdangerous==0.23
mock==1.0.1
netaddr==0.7.10
#nose==1.3.0
prettytable==0.7.2
psycopg2==2.5.2
pyaml==13.07.0
pytz==2013.9
requests==1.1.0
requirements-parser==0.0.6
simplejson==3.3.3
urllib3==1.5
waitress==0.8.8
six>=1.7.0
pbr>=0.6,!=0.7,<1.0
python-novaclient>=2.17.0
python-cinderclient>=1.0.8
keystonemiddleware!=4.1.0,>=4.0.0 # Apache-2.0
python-glanceclient>=0.18.0 # From Nova stable/liberty

@@ -1,14 +1,13 @@
-Babel==1.3
-Flask==0.10.1
 Jinja2==2.7.2
 MarkupSafe==0.18
 MySQL-python==1.2.5
 PyMySQL==0.6.1
 PyYAML==3.10
-SQLAlchemy>=1.0.10,<1.1.0 # MIT
 WebOb==1.3.1
 WebTest==2.0.14
-Werkzeug==0.9.4
-argparse==1.2.1
 beautifulsoup4==4.3.2
 decorator==3.4.0
 httplib2==0.8
@@ -16,21 +15,52 @@ iso8601==0.1.8
 itsdangerous==0.23
 mock==1.0.1
 netaddr==0.7.10
-#nose==1.3.0
+nose==1.3.0
 prettytable==0.7.2
 psycopg2==2.5.2
 pyaml==13.07.0
 pytz==2013.9
 requests==1.1.0
 requirements-parser==0.0.6
 simplejson==3.3.3
 urllib3==1.5
 waitress==0.8.8
-six>=1.7.0
-pbr>=0.6,!=0.7,<1.0
-python-novaclient>=2.17.0
-python-cinderclient>=1.0.8
+# =========================== Must-Have ============================
+# TODO(flwang): Make the list as short as possible when porting dependency
+# from above list. And make sure the versions are sync with OpenStack global
+# requirements.
+Babel==1.3
+Flask<1.0,>=0.10 # BSD
+pbr>=1.6 # Apache-2.0
+six>=1.9.0 # MIT
+odoorpc==0.4.2
+SQLAlchemy<1.1.0,>=1.0.10 # MIT
 keystonemiddleware!=4.1.0,>=4.0.0 # Apache-2.0
-python-glanceclient>=0.18.0 # From Nova stable/liberty
+python-cinderclient>=1.6.0 # Apache-2.0
+python-keystoneclient!=1.8.0,!=2.1.0,>=1.6.0 # Apache-2.0
+python-manilaclient>=1.3.0 # Apache-2.0
+python-novaclient!=2.33.0,>=2.29.0 # Apache-2.0
+python-swiftclient>=2.2.0 # Apache-2.0
+python-neutronclient>=4.2.0 # Apache-2.0
+python-heatclient>=0.6.0 # Apache-2.0
+oslo.config>=3.9.0 # Apache-2.0
+oslo.concurrency>=3.5.0 # Apache-2.0
+oslo.context>=2.2.0 # Apache-2.0
+oslo.db>=4.1.0 # Apache-2.0
+oslo.i18n>=2.1.0 # Apache-2.0
+oslo.log>=1.14.0 # Apache-2.0
+oslo.messaging>=4.5.0 # Apache-2.0
+oslo.middleware>=3.0.0 # Apache-2.0
+oslo.policy>=0.5.0 # Apache-2.0
+oslo.rootwrap>=2.0.0 # Apache-2.0
+oslo.serialization>=1.10.0 # Apache-2.0
+oslo.service>=1.0.0 # Apache-2.0
+oslo.utils>=3.5.0 # Apache-2.0

run_tests.sh Executable file

@@ -0,0 +1,161 @@
#!/bin/bash
set -eu
function usage {
echo "Usage: $0 [OPTION]..."
echo "Run distil test suite"
echo ""
echo " -V, --virtual-env Always use virtualenv. Install automatically if not present"
echo " -N, --no-virtual-env Don't use virtualenv. Run tests in local environment"
echo " -s, --no-site-packages Isolate the virtualenv from the global Python environment"
echo " -x, --stop Stop running tests after the first error or failure."
echo " -f, --force Force a clean re-build of the virtual environment. Useful when dependencies have been added."
echo " -p, --pep8 Just run pep8"
echo " -P, --no-pep8 Don't run pep8"
echo " -c, --coverage Generate coverage report"
echo " -h, --help Print this usage message"
echo " --hide-elapsed Don't print the elapsed time for each test along with slow test list"
echo ""
echo "Note: with no options specified, the script will try to run the tests in a virtual environment,"
echo " If no virtualenv is found, the script will ask if you would like to create one. If you "
echo " prefer to run tests NOT in a virtual environment, simply pass the -N option."
exit
}
function process_option {
case "$1" in
-h|--help) usage;;
-V|--virtual-env) always_venv=1; never_venv=0;;
-N|--no-virtual-env) always_venv=0; never_venv=1;;
-s|--no-site-packages) no_site_packages=1;;
-f|--force) force=1;;
-p|--pep8) just_pep8=1;;
-P|--no-pep8) no_pep8=1;;
-c|--coverage) coverage=1;;
-*) testropts="$testropts $1";;
*) testrargs="$testrargs $1"
esac
}
venv=.venv
with_venv=tools/with_venv.sh
always_venv=0
never_venv=0
force=0
no_site_packages=0
installvenvopts=
testrargs=
testropts=
wrapper=""
just_pep8=0
no_pep8=0
coverage=0
for arg in "$@"; do
process_option $arg
done
if [ $no_site_packages -eq 1 ]; then
installvenvopts="--no-site-packages"
fi
function init_testr {
if [ ! -d .testrepository ]; then
${wrapper} testr init
fi
}
function run_tests {
# Cleanup *pyc
${wrapper} find . -type f -name "*.pyc" -delete
if [ $coverage -eq 1 ]; then
# Do not test test_coverage_ext when gathering coverage.
if [ "x$testrargs" = "x" ]; then
testrargs="^(?!.*test_coverage_ext).*$"
fi
export PYTHON="${wrapper} coverage run --source distil --parallel-mode"
fi
# Just run the test suites in current environment
set +e
TESTRTESTS="$TESTRTESTS $testrargs"
echo "Running \`${wrapper} $TESTRTESTS\`"
export DISCOVER_DIRECTORY=distil/tests/unit
${wrapper} $TESTRTESTS
RESULT=$?
set -e
copy_subunit_log
return $RESULT
}
function copy_subunit_log {
LOGNAME=`cat .testrepository/next-stream`
LOGNAME=$(($LOGNAME - 1))
LOGNAME=".testrepository/${LOGNAME}"
cp $LOGNAME subunit.log
}
function run_pep8 {
echo "Running flake8 ..."
${wrapper} flake8
}
TESTRTESTS="testr run --parallel $testropts"
if [ $never_venv -eq 0 ]
then
# Remove the virtual environment if --force used
if [ $force -eq 1 ]; then
echo "Cleaning virtualenv..."
rm -rf ${venv}
fi
if [ -e ${venv} ]; then
wrapper="${with_venv}"
else
if [ $always_venv -eq 1 ]; then
# Automatically install the virtualenv
python tools/install_venv.py $installvenvopts
wrapper="${with_venv}"
else
echo -e "No virtual environment found...create one? (Y/n) \c"
read use_ve
if [ "x$use_ve" = "xY" -o "x$use_ve" = "x" -o "x$use_ve" = "xy" ]; then
# Install the virtualenv and run the test suite in it
python tools/install_venv.py $installvenvopts
wrapper=${with_venv}
fi
fi
fi
fi
# Delete old coverage data from previous runs
if [ $coverage -eq 1 ]; then
${wrapper} coverage erase
fi
if [ $just_pep8 -eq 1 ]; then
run_pep8
exit
fi
init_testr
run_tests
# NOTE(sirp): we only want to run pep8 when we're running the full-test suite,
# not when we're running tests individually. To handle this, we need to
# distinguish between options (testropts), which begin with a '-', and
# arguments (testrargs).
if [ -z "$testrargs" ]; then
if [ $no_pep8 -eq 0 ]; then
run_pep8
fi
fi
if [ $coverage -eq 1 ]; then
echo "Generating coverage report in covhtml/"
${wrapper} coverage combine
${wrapper} coverage html --include='distil/*' --omit='distil/openstack/common/*' -d covhtml -i
fi

@@ -2,7 +2,7 @@
 name = distil
 version = 2014.1
 summary = Distil project
-description-file = README.md
+description-file = README.rst
 license = Apache Software License
 classifiers =
     Programming Language :: Python
@@ -27,6 +27,24 @@ packages =
 data_files =
     share/distil = etc/distil/*
 
+[entry_points]
+console_scripts =
+    distil-api = distil.cli.distil_api:main
+    distil-db-manage = distil.db.migration.cli:main
+
+distil.rater =
+    file = distil.rater.file:FileRater
+    odoo = distil.rater.odoo:OdooRater
+
+distil.transformer =
+    max = distil.transformer.arithmetic:MaxTransformer
+    sum = distil.transformer.arithmetic:SumTransformer
+    uptime = distil.transformer.conversion:UpTimeTransformer
+    fromimage = distil.transformer.conversion:FromImageTransformer
+    networkservice = distil.transformer.conversion:NetworkServiceTransformer
+
 [build_sphinx]
 all_files = 1
 build-dir = doc/build