Install
=======

Installing and Running the Development Version
++++++++++++++++++++++++++++++++++++++++++++++

Ceilometer has four daemons. The compute agent runs on the Nova compute node(s) while the central agent and collector run on the cloud's management node(s). In a development environment created by devstack, these two are typically the same server. They do not have to be, though, so some of the instructions below are duplicated. Skip the steps you have already done.

Configuring Devstack
--------------------

.. index::
   double: installing; devstack

  1. Create a localrc file as input to devstack.
  2. Ceilometer makes extensive use of the messaging bus, but has not yet been tested with ZeroMQ. We recommend using RabbitMQ or Qpid for now.
  3. Nova does not generate the periodic notifications for all known instances by default. To enable these auditing events, set instance_usage_audit to true in the nova configuration file.
  4. The ceilometer services are not enabled by default, so they must be enabled in localrc before running stack.sh.

This example localrc file shows all of the settings required for ceilometer::

    # Configure the notifier to talk to the message queue
    # and turn on usage audit events
    EXTRA_OPTS=(notification_driver=nova.openstack.common.notifier.rabbit_notifier,ceilometer.compute.nova_notifier)

    # Enable the ceilometer services
    enable_service ceilometer-acompute,ceilometer-acentral,ceilometer-collector,ceilometer-api
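If you are configuring nova by hand rather than through devstack, the auditing option from step 3 goes in nova.conf. A minimal sketch — the ``hour`` period shown here is an assumed choice, not a value mandated by this guide:

```ini
[DEFAULT]
# Emit the periodic exists/audit notifications for all known instances
instance_usage_audit = True
# How often to generate them (assumed value; pick what suits your deployment)
instance_usage_audit_period = hour
```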

Installing Manually
-------------------

Installing the Collector
~~~~~~~~~~~~~~~~~~~~~~~~

.. index::
   double: installing; collector

  1. Install and configure nova.

    The collector daemon imports code from nova, so it needs to be run on a server where nova has already been installed.

    .. note::

       Ceilometer makes extensive use of the messaging bus, but has not yet been tested with ZeroMQ. We recommend using RabbitMQ or Qpid for now.

  2. Install MongoDB.

    Follow the instructions to install the MongoDB package for your operating system, then start the service.

  3. Clone the ceilometer git repository to the management server::

        $ cd /opt/stack
        $ git clone https://github.com/stackforge/ceilometer.git

  4. As a user with root permissions or sudo privileges, run the ceilometer installer::

        $ cd ceilometer
        $ sudo python setup.py install
  5. Configure ceilometer.

    Ceilometer needs to know about some of the nova configuration options, so the simplest way to start is to copy /etc/nova/nova.conf to /etc/ceilometer-collector.conf. Some of the logging settings used in nova break ceilometer, so they need to be removed. For example, as a user with root permissions::

        $ grep -v format_string /etc/nova/nova.conf > /etc/ceilometer-collector.conf

    Refer to configuration for details about any other options you might want to modify before starting the service.

  6. Start the collector::

        $ ./bin/ceilometer-collector

    .. note::

       The default development configuration of the collector logs to stderr, so you may want to run this step in a screen session or under another tool for keeping a long-running program in the background.
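Before touching the real files, the filter from step 5 can be exercised on a throwaway copy. Everything below is illustrative: the three option lines are made up, and only paths under /tmp are written:

```shell
# Build a tiny stand-in nova.conf (invented contents, for demonstration only)
cat > /tmp/nova.conf <<'EOF'
[DEFAULT]
logging_context_format_string = %(asctime)s %(levelname)s %(message)s
rabbit_host = localhost
instance_usage_audit = True
EOF

# Apply the same filter as step 5: drop the logging format lines,
# keep everything else
grep -v format_string /tmp/nova.conf > /tmp/ceilometer-collector.conf
cat /tmp/ceilometer-collector.conf
```

Only the ``format_string`` line disappears; the remaining options pass through unchanged.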

Installing the Compute Agent
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. index::
   double: installing; compute agent

.. note::

   The compute agent must be installed on each nova compute node.

  1. Install and configure nova.

    The compute agent imports code from nova, so it needs to run on a server where nova has already been installed.

    .. note::

       Ceilometer makes extensive use of the messaging bus, but has not yet been tested with ZeroMQ. We recommend using RabbitMQ or Qpid for now.

  2. Clone the ceilometer git repository to the server::

        $ cd /opt/stack
        $ git clone https://github.com/stackforge/ceilometer.git

  3. As a user with root permissions or sudo privileges, run the ceilometer installer::

        $ cd ceilometer
        $ sudo python setup.py install
  4. Configure ceilometer.

    Ceilometer needs to know about some of the nova configuration options, so the simplest way to start is to copy /etc/nova/nova.conf to /etc/ceilometer-agent.conf. Some of the logging settings used in nova break ceilometer, so they need to be removed. For example, as a user with root permissions::

        $ grep -v format_string /etc/nova/nova.conf > /etc/ceilometer-agent.conf

    Refer to configuration for details about any other options you might want to modify before starting the service.

  5. Start the agent::

        $ ./bin/ceilometer-agent

    .. note::

       The default development configuration of the agent logs to stderr, so you may want to run this step in a screen session or under another tool for keeping a long-running program in the background.
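Since the agent writes its log to stderr, one way to detach it while keeping that output is a plain ``nohup`` redirect. This is only a sketch — ``sleep 1`` stands in for ``./bin/ceilometer-agent`` so it runs anywhere, and the log path is an arbitrary choice; substitute the real command and a proper log location on the compute node:

```shell
# Detach a long-running command, capturing stdout and stderr in a log file.
# `sleep 1` is a stand-in for ./bin/ceilometer-agent in this sketch.
nohup sleep 1 > /tmp/ceilometer-agent.log 2>&1 &
AGENT_PID=$!
echo "agent running as pid $AGENT_PID, logging to /tmp/ceilometer-agent.log"
wait "$AGENT_PID"
```

A ``screen`` session works just as well if you want to reattach to the live output later.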

Installing the API Server
~~~~~~~~~~~~~~~~~~~~~~~~~

.. index::
   double: installing; API

.. note::

   The API server needs to be able to talk to keystone and ceilometer's database.

  1. Install and configure nova.

    The ceilometer API server imports code from nova, so it needs to run on a server where nova has already been installed.

  2. Clone the ceilometer git repository to the server::

        $ cd /opt/stack
        $ git clone https://github.com/stackforge/ceilometer.git

  3. As a user with root permissions or sudo privileges, run the ceilometer installer::

        $ cd ceilometer
        $ sudo python setup.py install
  4. Configure ceilometer.

    Ceilometer needs to know about some of the nova configuration options, so the simplest way to start is to copy /etc/nova/nova.conf to /etc/ceilometer-agent.conf. Some of the logging settings used in nova break ceilometer, so they need to be removed. For example, as a user with root permissions::

        $ grep -v format_string /etc/nova/nova.conf > /etc/ceilometer-agent.conf

    Refer to configuration for details about any other options you might want to modify before starting the service.

  5. Start the API server::

        $ ./bin/ceilometer-api

    .. note::

       The development version of the API server logs to stderr, so you may want to run this step in a screen session or under another tool for keeping a long-running program in the background.
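Once the server is up, a quick reachability probe tells you whether it is answering at all. This check is illustrative, not part of the official install steps: 8777 is ceilometer's usual default API port (adjust if yours differs), and the sketch reports either outcome instead of failing:

```shell
# Probe the API endpoint; print and record the result either way.
if command -v curl >/dev/null 2>&1 \
   && curl -s -o /dev/null --max-time 2 http://localhost:8777/; then
  echo "ceilometer API: reachable" | tee /tmp/api_check
else
  echo "ceilometer API: unreachable" | tee /tmp/api_check
fi
```

An "unreachable" result usually means the server is not running, is bound to another port, or is blocked by a firewall.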