Installing Manually
Storage Backend Installation
This step is a prerequisite for the collector, notification agent, and API services. You may use one of the database backends listed below to store Ceilometer data.
Note
Note that MongoDB (and some other backends, such as DB2 and HBase) requires pymongo to be installed on the system. The required minimum version of pymongo is 2.4.
MongoDB
The recommended Ceilometer storage backend is MongoDB. Follow the instructions to install the MongoDB package for your operating system, then start the service. The required minimum version of MongoDB is 2.4.
To use MongoDB as the storage backend, change the 'database' section in ceilometer.conf as follows:
[database]
connection = mongodb://username:password@host:27017/ceilometer
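The connection URL above assumes a MongoDB user already exists for the ceilometer database. A sketch of creating one from the mongo shell (the username and password are placeholders, and db.addUser is the MongoDB 2.4-era call):

```
$ mongo host/ceilometer
> db.addUser({user: "username",
              pwd: "password",
              roles: ["readWrite", "dbAdmin"]})
```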
SQLAlchemy-supported DBs
You may alternatively use MySQL (or any other SQLAlchemy-supported database, such as PostgreSQL).
For SQL-based database backends, you need to create a ceilometer database first and then initialise it by running:
ceilometer-dbsync
To use MySQL as the storage backend, change the 'database' section in ceilometer.conf as follows:
[database]
connection = mysql://username:password@host/ceilometer?charset=utf8
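Before running ceilometer-dbsync, the MySQL database itself has to exist. A minimal sketch, assuming root access to the MySQL server (the username and password are placeholders matching the connection URL above):

```
$ mysql -u root -p
mysql> CREATE DATABASE ceilometer;
mysql> GRANT ALL PRIVILEGES ON ceilometer.* TO 'username'@'%' IDENTIFIED BY 'password';
```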
HBase
The HBase backend uses the HBase Thrift interface, so the HBase Thrift server must be installed and running. To start the Thrift server, run the following command:
${HBASE_HOME}/bin/hbase thrift start
The implementation uses HappyBase, a wrapper library for interacting with HBase over the Thrift protocol. You can verify the Thrift connection by running a quick test from a client:
import happybase
conn = happybase.Connection(host=$hbase-thrift-server, port=9090, table_prefix=None)
print conn.tables()  # this returns a list of HBase tables in your HBase server
Note
HappyBase version 0.5 or greater is required. Additionally, version 0.7 is not currently supported.
For HBase, the needed database tables (project, user, resource, meter, alarm, alarm_h) should be created manually, with an f column family for each one.
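Since HappyBase is already required for this backend, the tables can also be created from Python. This is a hypothetical sketch (the helper name and the host value are illustrative, not part of Ceilometer):

```python
def create_ceilometer_tables(host='hbase-thrift-host', port=9090,
                             tables=('project', 'user', 'resource',
                                     'meter', 'alarm', 'alarm_h')):
    """Create each expected table with a single 'f' column family."""
    import happybase  # third-party dependency of the HBase backend
    conn = happybase.Connection(host=host, port=port)
    existing = set(conn.tables())
    for name in tables:
        if name not in existing:
            conn.create_table(name, {'f': dict()})
```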
To use HBase as the storage backend, change the 'database' section in ceilometer.conf as follows:
[database]
connection = hbase://hbase-thrift-host:9090
DB2
For DB2, follow the IBM DB2 NoSQL documentation for a fresh installation.
To use DB2 as the storage backend, change the 'database' section in ceilometer.conf as follows:
[database]
connection = db2://username:password@host:27017/ceilometer
Installing the notification agent
If you want to be able to retrieve image samples, you need to instruct Glance to send notifications to the bus by changing notifier_strategy to rabbit or qpid in glance-api.conf and restarting the service.

If you want to be able to retrieve volume samples, you need to instruct Cinder to send notifications to the bus by changing notification_driver to cinder.openstack.common.notifier.rpc_notifier and control_exchange to cinder, before restarting the service.

In order to retrieve object store statistics, ceilometer needs access to swift with the ResellerAdmin role. You should give this role to your os_username user for the tenant os_tenant_name:

$ keystone role-create --name=ResellerAdmin
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|    id    | 462fa46c13fd4798a95a3bfbe27b5e54 |
|   name   |          ResellerAdmin           |
+----------+----------------------------------+
$ keystone user-role-add --tenant_id $SERVICE_TENANT \
                         --user_id $CEILOMETER_USER \
                         --role_id 462fa46c13fd4798a95a3bfbe27b5e54
You'll also need to add the Ceilometer middleware to Swift to account for incoming and outgoing traffic, by adding these lines to /etc/swift/proxy-server.conf:

[filter:ceilometer]
use = egg:ceilometer#swift

And add ceilometer to the pipeline of that same file, right before proxy-server.

Additionally, if you want to store extra metadata from headers, you need to set metadata_headers so it would look like:

[filter:ceilometer]
use = egg:ceilometer#swift
metadata_headers = X-FOO, X-BAR
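For illustration, the pipeline line in /etc/swift/proxy-server.conf could then look like the following. The other middleware names here are an assumed example; keep whatever your deployment already lists, and only add ceilometer just before proxy-server:

```
[pipeline:main]
pipeline = healthcheck cache authtoken keystoneauth ceilometer proxy-server
```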
Note
Please make sure that ceilometer's logging directory (if it is configured) is readable and writable by the user swift runs as.
Clone the ceilometer git repository to the management server:
$ cd /opt/stack
$ git clone https://git.openstack.org/openstack/ceilometer.git
As a user with root permissions or sudo privileges, run the ceilometer installer:

$ cd ceilometer
$ sudo python setup.py install
Copy the sample configuration files from the source tree to their final location.
$ mkdir -p /etc/ceilometer
$ cp etc/ceilometer/*.json /etc/ceilometer
$ cp etc/ceilometer/*.yaml /etc/ceilometer
$ cp etc/ceilometer/ceilometer.conf.sample /etc/ceilometer/ceilometer.conf
Edit
/etc/ceilometer/ceilometer.conf
Configure RPC
Set the RPC-related options correctly so ceilometer's daemons can communicate with each other and receive notifications from the other projects.
In particular, look for the *_control_exchange options and make sure the names are correct. If you did not change the control_exchange settings for the other components, the defaults should be correct.

Note
Ceilometer makes extensive use of the messaging bus, but has not yet been tested with ZeroMQ. We recommend using Rabbit or qpid for now.
Set the metering_secret value.

Set metering_secret to a large, random value. Use the same value in all ceilometer configuration files, on all nodes, so that messages passing between the nodes can be validated.
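Any sufficiently long random string will do; one possible way to generate such a value (a sketch, not the only option) is:

```python
import binascii
import os

# 16 random bytes -> 32 hex characters, suitable as the shared secret
secret = binascii.hexlify(os.urandom(16))
print(secret)
```

Paste the printed value into every ceilometer.conf as metering_secret.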
Refer to /configuration for details about any other options you might want to modify before starting the service.

Start the notification daemon.
$ ceilometer-agent-notification
Note
The default development configuration of the notification agent logs to stderr, so you may want to run this step using a screen session or other tool for maintaining a long-running program in the background.
Installing the collector
Clone the ceilometer git repository to the management server:
$ cd /opt/stack
$ git clone https://git.openstack.org/openstack/ceilometer.git
As a user with root permissions or sudo privileges, run the ceilometer installer:

$ cd ceilometer
$ sudo python setup.py install
Copy the sample configuration files from the source tree to their final location.
$ mkdir -p /etc/ceilometer
$ cp etc/ceilometer/*.json /etc/ceilometer
$ cp etc/ceilometer/*.yaml /etc/ceilometer
$ cp etc/ceilometer/ceilometer.conf.sample /etc/ceilometer/ceilometer.conf
Edit
/etc/ceilometer/ceilometer.conf
Configure RPC
Set the RPC-related options correctly so ceilometer's daemons can communicate with each other and receive notifications from the other projects.
In particular, look for the *_control_exchange options and make sure the names are correct. If you did not change the control_exchange settings for the other components, the defaults should be correct.

Note
Ceilometer makes extensive use of the messaging bus, but has not yet been tested with ZeroMQ. We recommend using Rabbit or qpid for now.
Set the metering_secret value.

Set metering_secret to a large, random value. Use the same value in all ceilometer configuration files, on all nodes, so that messages passing between the nodes can be validated.
Refer to /configuration for details about any other options you might want to modify before starting the service.

Start the collector.
$ ceilometer-collector
Note
The default development configuration of the collector logs to stderr, so you may want to run this step using a screen session or other tool for maintaining a long-running program in the background.
Installing the Compute Agent
Note
The compute agent must be installed on each nova compute node.
Configure nova.
The nova compute service needs the following configuration to be set in nova.conf:

# nova-compute configuration for ceilometer
instance_usage_audit=True
instance_usage_audit_period=hour
notify_on_state_change=vm_and_task_state
notification_driver=nova.openstack.common.notifier.rpc_notifier
notification_driver=ceilometer.compute.nova_notifier
Clone the ceilometer git repository to the server:
$ cd /opt/stack
$ git clone https://git.openstack.org/openstack/ceilometer.git
As a user with root permissions or sudo privileges, run the ceilometer installer:

$ cd ceilometer
$ sudo python setup.py install
Copy the sample configuration files from the source tree to their final location.
$ mkdir -p /etc/ceilometer
$ cp etc/ceilometer/*.json /etc/ceilometer
$ cp etc/ceilometer/*.yaml /etc/ceilometer
$ cp etc/ceilometer/ceilometer.conf.sample /etc/ceilometer/ceilometer.conf
Edit
/etc/ceilometer/ceilometer.conf
Configure RPC
Set the RPC-related options correctly so ceilometer's daemons can communicate with each other and receive notifications from the other projects.
In particular, look for the *_control_exchange options and make sure the names are correct. If you did not change the control_exchange settings for the other components, the defaults should be correct.

Note
Ceilometer makes extensive use of the messaging bus, but has not yet been tested with ZeroMQ. We recommend using Rabbit or qpid for now.
Set the metering_secret value.

Set metering_secret to a large, random value. Use the same value in all ceilometer configuration files, on all nodes, so that messages passing between the nodes can be validated.
Refer to /configuration for details about any other options you might want to modify before starting the service.

Start the agent.
$ ceilometer-agent-compute
Note
The default development configuration of the agent logs to stderr, so you may want to run this step using a screen session or other tool for maintaining a long-running program in the background.
Installing the Central Agent
Note
The central agent needs to be able to talk to keystone and any of the services being polled for updates.
Clone the ceilometer git repository to the server:
$ cd /opt/stack
$ git clone https://git.openstack.org/openstack/ceilometer.git
As a user with root permissions or sudo privileges, run the ceilometer installer:

$ cd ceilometer
$ sudo python setup.py install
Copy the sample configuration files from the source tree to their final location.
$ mkdir -p /etc/ceilometer
$ cp etc/ceilometer/*.json /etc/ceilometer
$ cp etc/ceilometer/*.yaml /etc/ceilometer
$ cp etc/ceilometer/ceilometer.conf.sample /etc/ceilometer/ceilometer.conf
Edit
/etc/ceilometer/ceilometer.conf
Configure RPC
Set the RPC-related options correctly so ceilometer's daemons can communicate with each other and receive notifications from the other projects.
In particular, look for the *_control_exchange options and make sure the names are correct. If you did not change the control_exchange settings for the other components, the defaults should be correct.

Note
Ceilometer makes extensive use of the messaging bus, but has not yet been tested with ZeroMQ. We recommend using Rabbit or qpid for now.
Set the metering_secret value.

Set metering_secret to a large, random value. Use the same value in all ceilometer configuration files, on all nodes, so that messages passing between the nodes can be validated.
Refer to /configuration for details about any other options you might want to modify before starting the service.

Start the agent.
$ ceilometer-agent-central
Installing the API Server
Note
The API server needs to be able to talk to keystone and ceilometer's database.
Clone the ceilometer git repository to the server:
$ cd /opt/stack
$ git clone https://git.openstack.org/openstack/ceilometer.git
As a user with root permissions or sudo privileges, run the ceilometer installer:

$ cd ceilometer
$ sudo python setup.py install
Copy the sample configuration files from the source tree to their final location.
$ mkdir -p /etc/ceilometer
$ cp etc/ceilometer/*.json /etc/ceilometer
$ cp etc/ceilometer/*.yaml /etc/ceilometer
$ cp etc/ceilometer/ceilometer.conf.sample /etc/ceilometer/ceilometer.conf
Edit
/etc/ceilometer/ceilometer.conf
Configure RPC
Set the RPC-related options correctly so ceilometer's daemons can communicate with each other and receive notifications from the other projects.
In particular, look for the *_control_exchange options and make sure the names are correct. If you did not change the control_exchange settings for the other components, the defaults should be correct.

Note
Ceilometer makes extensive use of the messaging bus, but has not yet been tested with ZeroMQ. We recommend using Rabbit or qpid for now.
Refer to /configuration for details about any other options you might want to modify before starting the service.

Start the API server.
$ ceilometer-api
Note
The development version of the API server logs to stderr, so you may want to run this step using a screen session or other tool for maintaining a long-running program in the background.
Configuring keystone to work with API
Note
The API server needs to be able to talk to keystone to authenticate.
Create a service for ceilometer in keystone
$ keystone service-create --name=ceilometer \
                          --type=metering \
                          --description="Ceilometer Service"
Create an endpoint in keystone for ceilometer
$ keystone endpoint-create --region RegionOne \
                           --service_id $CEILOMETER_SERVICE \
                           --publicurl "http://$SERVICE_HOST:8777/" \
                           --adminurl "http://$SERVICE_HOST:8777/" \
                           --internalurl "http://$SERVICE_HOST:8777/"
Note
CEILOMETER_SERVICE is the id of the service created by the first command, and SERVICE_HOST is the host where the Ceilometer API is running. The default port for the ceilometer API is 8777. If the port value has been customized, adjust accordingly.
Configuring Heat to send notifications
Configure the driver in heat.conf
notification_driver=heat.openstack.common.notifier.rpc_notifier
Or, if the migration to oslo.messaging has been done for Icehouse:
notification_driver=oslo.messaging.notifier.Notifier
Notifications queues
By default, Ceilometer consumes notifications on the RPC bus sent to notification_topics by using a queue/pool name that is identical to the topic name. You shouldn't have different applications consuming messages from this queue. If you want to also consume the topic notifications with a system other than Ceilometer, you should configure a separate queue that listens for the same messages.
Using multiple dispatchers
The Ceilometer collector allows multiple dispatchers to be configured so that metering data can be easily sent to multiple internal and external systems.
By default, Ceilometer only saves metering data in a database. To allow Ceilometer to send metering data to other systems in addition to the database, multiple dispatchers can be developed and enabled by modifying the Ceilometer configuration file.
Ceilometer currently ships two dispatchers: a database dispatcher and a file dispatcher. As the names imply, the database dispatcher sends metering data to a database driver, which eventually saves it in the database, while the file dispatcher writes metering data to a file. The location, name, and size of the file can be configured in the ceilometer configuration file. These two dispatchers are shipped in the Ceilometer egg and defined in the entry_points as follows:
[ceilometer.dispatcher]
file = ceilometer.dispatcher.file:FileDispatcher
database = ceilometer.dispatcher.database:DatabaseDispatcher
To use both dispatchers on a Ceilometer collector service, add the following lines to ceilometer.conf:
[DEFAULT]
dispatcher=database
dispatcher=file
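The file dispatcher's output can then be tuned in its own section; a sketch with assumed values (the option names are taken from the file dispatcher's configuration group, and the path is a placeholder):

```
[dispatcher_file]
file_path = /var/log/ceilometer/dispatcher.log
max_bytes = 10000000
backup_count = 5
```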
Note
The dispatcher option is in the [collector] section in the Havana release, but that location is deprecated in Icehouse.
If no dispatcher is present, the database dispatcher is used as the default. In some cases, such as traffic tests, no dispatcher is needed; the option can then be set to an empty value:
dispatcher=
With the above configuration, the Ceilometer collector service uses no dispatcher, and all metering data it receives will be dropped.