
Installing Manually

Storage Backend Installation

This step is a prerequisite for the collector, notification agent and API services. You may use one of the database backends listed below to store Ceilometer data.

Note

Note that MongoDB (and some other backends, such as DB2) requires pymongo to be installed on the system. The required minimum version of pymongo is 2.4.

MongoDB

The recommended Ceilometer storage backend is MongoDB. Follow the instructions to install the MongoDB package for your operating system, then start the service. The required minimum version of MongoDB is 2.4.

To use MongoDB as the storage backend, change the 'database' section in ceilometer.conf as follows:

[database]
connection = mongodb://username:password@host:27017/ceilometer
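
As a quick connectivity check, the following minimal sketch can be run from Python. It assumes pymongo is installed and that the connection string above is valid for your deployment:

import pymongo

# Use the same connection string as in ceilometer.conf.
client = pymongo.MongoClient('mongodb://username:password@host:27017/ceilometer')
print(client.server_info()['version'])  # prints the MongoDB server version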

SQLAlchemy-supported DBs

You may alternatively use MySQL (or any other SQLAlchemy-supported DB like PostgreSQL).

For SQL-based database backends, you need to create the ceilometer database first and then initialise it by running:

ceilometer-dbsync
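
For example, with MySQL, the database and a matching user can be created as follows before running ceilometer-dbsync (a sketch; substitute your own credentials):

$ mysql -u root -p
mysql> CREATE DATABASE ceilometer;
mysql> GRANT ALL PRIVILEGES ON ceilometer.* TO 'username'@'%' IDENTIFIED BY 'password';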

To use MySQL as the storage backend, change the 'database' section in ceilometer.conf as follows:

[database]
connection = mysql://username:password@host/ceilometer?charset=utf8

HBase

The HBase backend uses the HBase Thrift interface, so it is mandatory to have the HBase Thrift server installed and running. To start the Thrift server, run the following command:

${HBASE_HOME}/bin/hbase thrift start

The implementation uses HappyBase, a wrapper library for interacting with HBase via the Thrift protocol. You can verify the Thrift connection by running a quick test from a client:

import happybase

# Replace 'hbase-thrift-host' with the address of your HBase Thrift server.
conn = happybase.Connection(host='hbase-thrift-host', port=9090, table_prefix=None)
print(conn.tables())  # returns a list of HBase tables on the server

Note

HappyBase version 0.5 or greater is required. Additionally, version 0.7 is not currently supported.

For HBase, the required database tables (project, user, resource, meter, alarm, alarm_h) must be created manually, each with a single column family named f, as in the example below.
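
These tables can be created from the HBase shell, for example:

${HBASE_HOME}/bin/hbase shell
hbase> create 'project', 'f'
hbase> create 'user', 'f'
hbase> create 'resource', 'f'
hbase> create 'meter', 'f'
hbase> create 'alarm', 'f'
hbase> create 'alarm_h', 'f'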

To use HBase as the storage backend, change the 'database' section in ceilometer.conf as follows:

[database]
connection = hbase://hbase-thrift-host:9090

DB2

A fresh DB2 installation should follow the IBM DB2 NoSQL installation documentation.

To use DB2 as the storage backend, change the 'database' section in ceilometer.conf as follows:

[database]
connection = db2://username:password@host:27017/ceilometer

Installing the notification agent


  1. If you want to be able to retrieve image samples, you need to instruct Glance to send notifications to the bus by changing notifier_strategy to rabbit or qpid in glance-api.conf and restarting the service.
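
    For example, in glance-api.conf:

    notifier_strategy = rabbit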

  2. If you want to be able to retrieve volume samples, you need to instruct Cinder to send notifications to the bus by changing notification_driver to cinder.openstack.common.notifier.rpc_notifier and control_exchange to cinder, before restarting the service.
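
    For example, in cinder.conf:

    notification_driver = cinder.openstack.common.notifier.rpc_notifier
    control_exchange = cinder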

  3. In order to retrieve object store statistics, ceilometer needs access to swift with the ResellerAdmin role. You should give this role to your os_username user for the os_tenant_name tenant:

    $ keystone role-create --name=ResellerAdmin
    +----------+----------------------------------+
    | Property |              Value               |
    +----------+----------------------------------+
    |    id    | 462fa46c13fd4798a95a3bfbe27b5e54 |
    |   name   |          ResellerAdmin           |
    +----------+----------------------------------+
    
    $ keystone user-role-add --tenant_id $SERVICE_TENANT \
                             --user_id $CEILOMETER_USER \
                             --role_id 462fa46c13fd4798a95a3bfbe27b5e54

    You'll also need to add the Ceilometer middleware to Swift to account for incoming and outgoing traffic, by adding these lines to /etc/swift/proxy-server.conf:

    [filter:ceilometer]
    use = egg:ceilometer#swift

    Then add ceilometer to the pipeline of that same file, right before proxy-server, as in the example below.
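
    An illustrative pipeline (your existing entries may differ; only the position of ceilometer immediately before proxy-server matters):

    [pipeline:main]
    pipeline = healthcheck cache authtoken keystoneauth ceilometer proxy-server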

    Additionally, if you want to store extra metadata from headers, set metadata_headers so that the filter section looks like:

    [filter:ceilometer]
    use = egg:ceilometer#swift
    metadata_headers = X-FOO, X-BAR

    Note

    Please make sure that ceilometer's logging directory (if configured) is readable and writable by the user that swift runs as.

  4. Clone the ceilometer git repository to the management server:

    $ cd /opt/stack
    $ git clone https://git.openstack.org/openstack/ceilometer.git
  5. As a user with root permissions or sudo privileges, run the ceilometer installer:

    $ cd ceilometer
    $ sudo python setup.py install
  6. Copy the sample configuration files from the source tree to their final location.

    $ mkdir -p /etc/ceilometer
    $ cp etc/ceilometer/*.json /etc/ceilometer
    $ cp etc/ceilometer/*.yaml /etc/ceilometer
    $ cp etc/ceilometer/ceilometer.conf.sample /etc/ceilometer/ceilometer.conf
  7. Edit /etc/ceilometer/ceilometer.conf

    1. Configure RPC

      Set the RPC-related options correctly so ceilometer's daemons can communicate with each other and receive notifications from the other projects.

      In particular, look for the *_control_exchange options and make sure the names are correct. If you did not change the control_exchange settings for the other components, the defaults should be correct.

      Note

      Ceilometer makes extensive use of the messaging bus, but has not yet been tested with ZeroMQ. We recommend using RabbitMQ or Qpid for now.

    2. Set the metering_secret value.

      Set the metering_secret option to a large random value. Use the same value in all ceilometer configuration files, on all nodes, so that messages passing between the nodes can be validated.

    Refer to the configuration guide for details about any other options you might want to modify before starting the service.
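
    A minimal sketch of the relevant options, assuming a RabbitMQ broker on a host named controller (section and option names follow the Icehouse-era sample configuration; verify them against your release):

    [DEFAULT]
    rabbit_host = controller
    rabbit_password = RABBIT_PASS

    [publisher]
    metering_secret = REPLACE_WITH_A_LONG_RANDOM_VALUE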

  8. Start the notification daemon.

    $ ceilometer-agent-notification

    Note

    The default development configuration of the notification agent logs to stderr, so you may want to run this step using a screen session or other tool for maintaining a long-running program in the background.
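
    For example, one way to run the daemon detached (assuming screen is installed):

    $ screen -dmS ceilometer-notification ceilometer-agent-notification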

Installing the collector


  1. Clone the ceilometer git repository to the management server:

    $ cd /opt/stack
    $ git clone https://git.openstack.org/openstack/ceilometer.git
  2. As a user with root permissions or sudo privileges, run the ceilometer installer:

    $ cd ceilometer
    $ sudo python setup.py install
  3. Copy the sample configuration files from the source tree to their final location.

    $ mkdir -p /etc/ceilometer
    $ cp etc/ceilometer/*.json /etc/ceilometer
    $ cp etc/ceilometer/*.yaml /etc/ceilometer
    $ cp etc/ceilometer/ceilometer.conf.sample /etc/ceilometer/ceilometer.conf
  4. Edit /etc/ceilometer/ceilometer.conf

    1. Configure RPC

      Set the RPC-related options correctly so ceilometer's daemons can communicate with each other and receive notifications from the other projects.

      In particular, look for the *_control_exchange options and make sure the names are correct. If you did not change the control_exchange settings for the other components, the defaults should be correct.

      Note

      Ceilometer makes extensive use of the messaging bus, but has not yet been tested with ZeroMQ. We recommend using RabbitMQ or Qpid for now.

    2. Set the metering_secret value.

      Set the metering_secret option to a large random value. Use the same value in all ceilometer configuration files, on all nodes, so that messages passing between the nodes can be validated.

    Refer to the configuration guide for details about any other options you might want to modify before starting the service.

  5. Start the collector.

    $ ceilometer-collector

    Note

    The default development configuration of the collector logs to stderr, so you may want to run this step using a screen session or other tool for maintaining a long-running program in the background.

Installing the Compute Agent


Note

The compute agent must be installed on each nova compute node.

  1. Configure nova.

    The nova compute service needs the following configuration to be set in nova.conf:

    # nova-compute configuration for ceilometer
    instance_usage_audit=True
    instance_usage_audit_period=hour
    notify_on_state_change=vm_and_task_state
    # notification_driver is a multi-valued option; both drivers below are loaded
    notification_driver=nova.openstack.common.notifier.rpc_notifier
    notification_driver=ceilometer.compute.nova_notifier
  2. Clone the ceilometer git repository to the server:

    $ cd /opt/stack
    $ git clone https://git.openstack.org/openstack/ceilometer.git
  3. As a user with root permissions or sudo privileges, run the ceilometer installer:

    $ cd ceilometer
    $ sudo python setup.py install
  4. Copy the sample configuration files from the source tree to their final location.

    $ mkdir -p /etc/ceilometer
    $ cp etc/ceilometer/*.json /etc/ceilometer
    $ cp etc/ceilometer/*.yaml /etc/ceilometer
    $ cp etc/ceilometer/ceilometer.conf.sample /etc/ceilometer/ceilometer.conf
  5. Edit /etc/ceilometer/ceilometer.conf

    1. Configure RPC

      Set the RPC-related options correctly so ceilometer's daemons can communicate with each other and receive notifications from the other projects.

      In particular, look for the *_control_exchange options and make sure the names are correct. If you did not change the control_exchange settings for the other components, the defaults should be correct.

      Note

      Ceilometer makes extensive use of the messaging bus, but has not yet been tested with ZeroMQ. We recommend using RabbitMQ or Qpid for now.

    2. Set the metering_secret value.

      Set the metering_secret option to a large random value. Use the same value in all ceilometer configuration files, on all nodes, so that messages passing between the nodes can be validated.

    Refer to the configuration guide for details about any other options you might want to modify before starting the service.

  6. Start the agent.

    $ ceilometer-agent-compute

    Note

    The default development configuration of the agent logs to stderr, so you may want to run this step using a screen session or other tool for maintaining a long-running program in the background.

Installing the Central Agent


Note

The central agent needs to be able to talk to keystone and any of the services being polled for updates.

  1. Clone the ceilometer git repository to the server:

    $ cd /opt/stack
    $ git clone https://git.openstack.org/openstack/ceilometer.git
  2. As a user with root permissions or sudo privileges, run the ceilometer installer:

    $ cd ceilometer
    $ sudo python setup.py install
  3. Copy the sample configuration files from the source tree to their final location.

    $ mkdir -p /etc/ceilometer
    $ cp etc/ceilometer/*.json /etc/ceilometer
    $ cp etc/ceilometer/*.yaml /etc/ceilometer
    $ cp etc/ceilometer/ceilometer.conf.sample /etc/ceilometer/ceilometer.conf
  4. Edit /etc/ceilometer/ceilometer.conf

    1. Configure RPC

      Set the RPC-related options correctly so ceilometer's daemons can communicate with each other and receive notifications from the other projects.

      In particular, look for the *_control_exchange options and make sure the names are correct. If you did not change the control_exchange settings for the other components, the defaults should be correct.

      Note

      Ceilometer makes extensive use of the messaging bus, but has not yet been tested with ZeroMQ. We recommend using RabbitMQ or Qpid for now.

    2. Set the metering_secret value.

      Set the metering_secret option to a large random value. Use the same value in all ceilometer configuration files, on all nodes, so that messages passing between the nodes can be validated.

    Refer to the configuration guide for details about any other options you might want to modify before starting the service.

  5. Start the agent.

    $ ceilometer-agent-central

Installing the API Server


Note

The API server needs to be able to talk to keystone and ceilometer's database.

  1. Clone the ceilometer git repository to the server:

    $ cd /opt/stack
    $ git clone https://git.openstack.org/openstack/ceilometer.git
  2. As a user with root permissions or sudo privileges, run the ceilometer installer:

    $ cd ceilometer
    $ sudo python setup.py install
  3. Copy the sample configuration files from the source tree to their final location.

    $ mkdir -p /etc/ceilometer
    $ cp etc/ceilometer/*.json /etc/ceilometer
    $ cp etc/ceilometer/*.yaml /etc/ceilometer
    $ cp etc/ceilometer/ceilometer.conf.sample /etc/ceilometer/ceilometer.conf
  4. Edit /etc/ceilometer/ceilometer.conf

    1. Configure RPC

      Set the RPC-related options correctly so ceilometer's daemons can communicate with each other and receive notifications from the other projects.

      In particular, look for the *_control_exchange options and make sure the names are correct. If you did not change the control_exchange settings for the other components, the defaults should be correct.

      Note

      Ceilometer makes extensive use of the messaging bus, but has not yet been tested with ZeroMQ. We recommend using RabbitMQ or Qpid for now.

    Refer to the configuration guide for details about any other options you might want to modify before starting the service.

  5. Start the API server.

    $ ceilometer-api

Note

The development version of the API server logs to stderr, so you may want to run this step using a screen session or other tool for maintaining a long-running program in the background.

Configuring keystone to work with the API


Note

The API server needs to be able to talk to keystone to authenticate.

  1. Create a service for ceilometer in keystone

    $ keystone service-create --name=ceilometer \
                              --type=metering \
                              --description="Ceilometer Service"
  2. Create an endpoint in keystone for ceilometer

    $ keystone endpoint-create --region RegionOne \
                               --service_id $CEILOMETER_SERVICE \
                               --publicurl "http://$SERVICE_HOST:8777/" \
                               --adminurl "http://$SERVICE_HOST:8777/" \
                               --internalurl "http://$SERVICE_HOST:8777/"

Note

CEILOMETER_SERVICE is the id of the service created by the first command, and SERVICE_HOST is the host where the Ceilometer API is running. The default port for the ceilometer API is 8777; if the port value has been customized, adjust accordingly.

Configuring Heat to send notifications

Configure the notification driver in heat.conf:

notification_driver=heat.openstack.common.notifier.rpc_notifier

Or, if the migration to oslo.messaging is complete for Icehouse:

notification_driver=oslo.messaging.notifier.Notifier

Notification queues


By default, Ceilometer consumes notifications on the RPC bus sent to notification_topics by using a queue/pool name that is identical to the topic name. You shouldn't have different applications consuming messages from this queue. If you want to also consume the topic notifications with a system other than Ceilometer, you should configure a separate queue that listens for the same messages.

Using multiple dispatchers


The Ceilometer collector allows multiple dispatchers to be configured so that metering data can be easily sent to multiple internal and external systems.

By default, Ceilometer only saves metering data in a database. To allow Ceilometer to send metering data to other systems in addition to the database, multiple dispatchers can be developed and enabled by modifying the Ceilometer configuration file.

Ceilometer currently ships two dispatchers: the database dispatcher and the file dispatcher. As the names imply, the database dispatcher sends metering data to a database driver, which eventually saves it in the database, while the file dispatcher writes metering data to a file. The location, name, and size of that file can be configured in the Ceilometer configuration file. These two dispatchers are shipped in the Ceilometer egg and defined in the entry_points as follows:

[ceilometer.dispatcher]
file = ceilometer.dispatcher.file:FileDispatcher
database = ceilometer.dispatcher.database:DatabaseDispatcher
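
A third-party dispatcher can follow the same pattern. Below is a minimal sketch; the base class ceilometer.dispatcher.Base and the record_metering_data/record_events method names reflect the Icehouse-era code, so verify them against your release:

import logging

from ceilometer import dispatcher

LOG = logging.getLogger(__name__)


class LogDispatcher(dispatcher.Base):
    """A hypothetical dispatcher that writes metering data to the log."""

    def record_metering_data(self, data):
        # 'data' is a metering message (or a list of them) from the collector.
        LOG.info(data)

    def record_events(self, events):
        # This sketch ignores events.
        pass

To enable such a dispatcher, register the class under the ceilometer.dispatcher entry point namespace in your package's setup.cfg and list its name on a dispatcher= line as shown below.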

To use both dispatchers on a Ceilometer collector service, add the following lines to ceilometer.conf:

[DEFAULT]
dispatcher=database
dispatcher=file

Note

The dispatcher option was in the [collector] section in the Havana release, but that placement is deprecated in Icehouse.

If no dispatcher is present, the database dispatcher is used as the default. If, in some cases such as traffic tests, no dispatcher is needed, the option can be set to an empty value:

dispatcher=

With the above configuration, the Ceilometer collector service uses no dispatcher, and all metering data it receives is dropped.