Merge "Marconi Operations Document"

This commit is contained in:
Jenkins 2014-03-18 20:38:52 +00:00 committed by Gerrit Code Review
commit 5ce7f69610
5 changed files with 431 additions and 3 deletions

doc/source/api.rst Normal file

@@ -0,0 +1,64 @@
..
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
Using Marconi's Public APIs
===========================
Marconi fully implements version 1.0 of the OpenStack Messaging API.
Generally, you can use any HTTP client to talk to Marconi's public REST API,
though the Marconi client is the recommended approach.
Marconi Client
############################################
We can easily access the Marconi REST API via the Marconi client. Below is an
example that creates a queue, posts messages to it, and finally deletes it::
from marconiclient.queues.v1 import client
URL = 'http://localhost:8888'
# Build 20 sample messages, each with a 360-second TTL
messages = [{'body': {'id': idx}, 'ttl': 360} for idx in range(20)]
cli = client.Client(URL)
queue = cli.queue('myqueue')
queue.post(messages)
# Read back our own messages (echo=True), printing and deleting each one
for msg in queue.messages(echo=True):
    print(msg.body)
    msg.delete()
queue.delete()
curl
####
Define these variables::
# USERNAME=my identity username
# APIKEY=my-long-api-key
# ENDPOINT=test-queue.mydomain.com < keystone endpoint >
# QUEUE=test-queue
# CLIENTID=c5a6114a-523c-4085-84fb-533c5ac40789
# HTTP=http
# PORT=80
# TOKEN=9abb6d47de3143bf80c9208d37db58cf < your token here >
Create the queue::
# curl -i -X PUT $HTTP://$ENDPOINT:$PORT/v1/queues/$QUEUE -H "X-Auth-Token: $TOKEN" -H "Client-ID: $CLIENTID"
HTTP/1.1 201 Created
content-length: 0
location: /v1/queues/test-queue
An ```HTTP/1.1 201 Created``` response indicates that the service is functioning properly.
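Once the queue exists, you can post messages to it and claim them for
processing using the same variables. The request bodies below are only
examples; see the v1 API reference for the complete message and claim semantics::
# curl -i -X POST $HTTP://$ENDPOINT:$PORT/v1/queues/$QUEUE/messages -H "X-Auth-Token: $TOKEN" -H "Client-ID: $CLIENTID" -H "Content-Type: application/json" -d '[{"ttl": 300, "body": {"event": "test"}}]'
# curl -i -X POST $HTTP://$ENDPOINT:$PORT/v1/queues/$QUEUE/claims -H "X-Auth-Token: $TOKEN" -H "Client-ID: $CLIENTID" -H "Content-Type: application/json" -d '{"ttl": 300, "grace": 300}'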

doc/source/glossary.rst Normal file

@@ -0,0 +1,77 @@
..
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
==========
Glossary
==========
Messaging Service Concepts
==========================
The Messaging Service is a multi-tenant message queue implementation that
uses a RESTful HTTP interface to provide an asynchronous communications
protocol, which is one of the main requirements of today's scalable applications.
.. glossary::
Queue
A queue is a logical entity that groups messages. Ideally a queue is created
per work type. For example, if you want to compress files, you would create
a queue dedicated to this job. Any application that reads from this queue
would only compress files.
Message
A message is sent through a queue and exists until it is deleted by a recipient
or automatically by the system based on a TTL (time-to-live) value.
Claim
A claim is a mechanism for marking messages so that other workers will not process the same message.
Worker
A worker is an application that reads one or more messages from the queue.
Producer
A producer is an application that creates messages in one or more queues.
Publish - Subscribe
Publish - Subscribe is a pattern in which all worker applications have access
to all messages in the queue. Workers cannot delete or update messages.
Producer - Consumer
Producer - Consumer is a pattern in which each worker application that reads
the queue has to claim a message in order to prevent duplicate processing.
Later, when the work is done, the worker is responsible for deleting the
message. If the message is not deleted within a predefined time (claim TTL),
it can be claimed by other workers.
Message TTL
Message TTL is a time-to-live value that defines how long a message will be accessible.
Claim TTL
Claim TTL is a time-to-live value that defines how long a message remains in
the claimed state. A message can be claimed by one worker at a time.
Queues Database
The queues database stores information about the queues and the messages
within these queues. The storage layer has to guarantee durability and availability of the data.
Sharding
If sharding is enabled, the queuing service uses multiple queues databases in
order to scale horizontally. A shard (queues database) can be added at any time
without stopping the service. Each shard has a weight that is assigned at
creation time but can be changed later. Sharding is done per queue, which means
that all messages for a particular queue can be found in the same shard (queues database).
Catalog Database
If sharding is enabled, a catalog database has to be created. The catalog
database maintains the ```queues``` to ```queues database``` mapping. The
storage layer has to guarantee durability and availability of the data.

doc/source/ha.rst Normal file

@@ -0,0 +1,30 @@
..
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
Minimum Scalable HA Setup
=========================
The OpenStack Queuing Service has two main layers. The first is the transport
(queuing application) layer, which provides the RESTful interface; the second
is the storage layer, which keeps all the data and metadata about queues and messages.
For an HA setup, a load balancer has to be placed in front of the web servers.
Load balancer setup is out of scope for this document.
For storage we will use ```mongoDB``` in order to provide high availability with
minimum administration overhead. For transport, we will use ```wsgi```.
To have a small footprint while providing HA, we will use two web servers,
which will host the application, and three mongoDB servers (configured as a
replica set), which will host the catalog and queues databases. At larger
scale, it is advisable to host the catalog database and the queues database
on different mongoDB replica sets.


@@ -1,7 +1,4 @@
..
Copyright 2010-2014 OpenStack Foundation
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
@@ -41,20 +38,28 @@ Concepts
.. toctree::
:maxdepth: 1
glossary
Installing/Configuring Marconi
==============================
.. toctree::
:maxdepth: 1
installing
Operating Marconi
=================
.. toctree::
:maxdepth: 1
ha
Using Marconi
=============
.. toctree::
:maxdepth: 1
api

doc/source/installing.rst Normal file

@@ -0,0 +1,252 @@
..
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
Installing and Configuring
============================
System Requirements
~~~~~~~~~~~~~~~~~~~
Before you install the OpenStack Queuing Service, you must meet the following system requirements:
- OpenStack Compute Installation.
- Enable the Identity Service for user and project management.
- Python 2.6 or 2.7
Installing from packages
~~~~~~~~~~~~~~~~~~~~~~~~
Before you install and configure the queuing service, meet the requirements in
the section called "System Requirements". The samples below install the
packages on a RedHat-based operating system.
Install mongoDB on Database Servers
###################################
Install mongoDB on three servers and set up the replica set.
Configure Package Management System (YUM)
#########################################
Create a ```/etc/yum.repos.d/mongodb.repo``` file to hold the following
configuration information for the MongoDB repository:
If you are running a 64-bit system, use the following configuration::
[mongodb]
name=MongoDB Repository
baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64/
gpgcheck=0
enabled=1
If you are running a 32-bit system, which is not recommended for production
deployments, use the following configuration::
[mongodb]
name=MongoDB Repository
baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/i686/
gpgcheck=0
enabled=1
Install Packages
################
Issue the following command (as root or with sudo) to install the latest stable
version of MongoDB and the associated tools::
#yum install mongo-10gen mongo-10gen-server
Edit ```/etc/sysconfig/mongod```::
logpath=/var/log/mongo/mongod.log
logappend=true
fork = true
dbpath=/var/lib/mongo
pidfilepath = /var/run/mongodb/mongod.pid
replSet = catalog
nojournal = true
profile = 1
slowms = 200
oplogSize = 2048
Start mongoDB on all database servers::
#service mongodb start
Configure Replica Set
#####################
Assuming that the primary mongoDB server's hostname is ```mydb0.example-queues.net```,
once you install mongoDB on all three servers, go to ```mydb0``` and run the commands below::
mydb0# mongo local --eval "printjson(rs.initiate())"
mydb0# mongo local --eval 'printjson(rs.add("mydb1.example-queues.net"))'
mydb0# mongo local --eval 'printjson(rs.add("mydb2.example-queues.net"))'
To check whether the replica set is established, run this command::
mydb0:~# mongo local --eval "printjson(rs.status())"
Install memcached on Web Servers
################################
Install memcached on the web servers in order to cache identity tokens and catalog mappings::
web# yum install memcached
Start memcached service::
web# service memcached start
Install uwsgi on Web Servers
############################
Install pip and then uwsgi::
web# yum -y install python-pip
web# pip install uwsgi
Configure OpenStack Marconi
###########################
On the web servers, run these commands::
web# git clone https://github.com/openstack/marconi.git .
web# pip install . -r ./requirements.txt --upgrade --log /tmp/marconi-pip.log
Create a ```/srv/marconi``` folder to store the related configuration files.
Create ```/srv/marconi/marconi_uwsgi.py``` with the following content::
from keystoneclient.middleware import auth_token
from marconi.transport.wsgi import app
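# Wrap the Marconi WSGI application with the Keystone auth_token middleware;
# its settings come from the [keystone_authtoken] section of /etc/marconi.conf.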
app = auth_token.AuthProtocol(app.app, {})
Create ```/srv/marconi/uwsgi.ini``` file with the following content::
[uwsgi]
http = 192.168.192.168:80
daemonize = /var/log/marconi.log
pidfile = /var/run/marconi.pid
gevent = 2000
gevent-monkey-patch = true
listen = 1024
enable-threads = true
module = marconi_uwsgi:app
workers = 4
The uwsgi configuration options above can be modified for different performance requirements.
Create the Marconi configuration file ```/etc/marconi.conf```::
[DEFAULT]
# Show more verbose log output (sets INFO log level output)
#verbose = False
# Show debugging output in logs (sets DEBUG log level output)
#debug = False
# Sharding and admin mode configs
sharding = True
admin_mode = True
# Log to this file!
log_file = /var/log/marconi-queues.log
debug = False
verbose = False
# This is taken care of in our custom app.py, so disable here
;auth_strategy = keystone
[keystone_authtoken]
admin_password = < admin password >
admin_tenant_name = < admin tenant name >
admin_user = < admin user >
auth_host = < identity service host >
auth_port = '443'
auth_protocol = 'https'
auth_uri = < identity service uri >
auth_version = < auth version >
token_cache_time = < token cache time >
memcache_servers = 'localhost:11211'
[oslo_cache]
cache_backend = memcached
memcache_servers = 'localhost:11211'
[drivers]
# Transport driver module (e.g., wsgi, zmq)
transport = wsgi
# Storage driver module (e.g., mongodb, sqlite)
storage = mongodb
[drivers:storage:mongodb]
uri = mongodb://mydb0,mydb1,mydb2:27017/?replicaSet=catalog&w=2&readPreference=secondaryPreferred
database = marconi
partitions = 8
# Maximum number of times to retry a failed operation. Currently
# only used for retrying a message post.
;max_attempts = 1000
# Maximum sleep interval between retries (actual sleep time
# increases linearly according to number of attempts performed).
;max_retry_sleep = 0.1
# Maximum jitter interval, to be added to the sleep interval, in
# order to decrease probability that parallel requests will retry
# at the same instant.
;max_retry_jitter = 0.005
# Frequency of message garbage collections, in seconds
;gc_interval = 5 * 60
# Threshold of number of expired messages to reach in a given
# queue, before performing the GC. Useful for reducing frequent
# locks on the DB for non-busy queues, or for worker queues
# which process jobs quickly enough to keep the number of in-
# flight messages low.
#
# Note: The higher this number, the larger the memory-mapped DB
# files will be.
;gc_threshold = 1000
[limits:transport]
queue_paging_uplimit = 1000
metadata_size_uplimit = 262144
message_paging_uplimit = 10
message_size_uplimit = 262144
message_ttl_max = 1209600
claim_ttl_max = 43200
claim_grace_max = 43200
[limits:storage]
default_queue_paging = 10
default_message_paging = 10
Start the queuing service::
#/usr/bin/uwsgi --ini /srv/marconi/uwsgi.ini
Configure Shards
~~~~~~~~~~~~~~~~
To have a functional queuing service, we need to define a shard. On one of the
web servers run this command::
curl -i -X PUT -H "X-Auth-Token: $TOKEN" -d '{"weight": 100, "uri": "mongodb://mydb0,mydb1,mydb2:27017/?replicaSet=catalog&w=2&readPreference=secondaryPreferred", "options": {"partitions": 8}}' http://localhost:8888/v1/shards/shard1
Above, ```$TOKEN``` is the authentication token retrieved from the identity service.
If you choose not to enable keystone authentication, you won't have to pass a token.
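To verify that the service is now functional, you can try listing the
registered shards (assuming the shards resource also accepts GET requests,
which requires ```admin_mode = True``` as configured above) and then create a
test queue, which should return ```HTTP/1.1 201 Created``` as shown in the API
guide. Adjust the host and port to match your ```uwsgi.ini```::
curl -i -X GET -H "X-Auth-Token: $TOKEN" http://localhost:8888/v1/shards
curl -i -X PUT -H "X-Auth-Token: $TOKEN" -H "Client-ID: c5a6114a-523c-4085-84fb-533c5ac40789" http://localhost:8888/v1/queues/test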
Reminder: In larger deployments, the catalog database and the queues databases (shards)
should be hosted on different mongoDB replica sets.