=================
Configuring Trove
=================

.. note::

   Care should be taken when deploying Trove in production environments.
   Be sure to fully understand the security implications of the deployed
   architecture.

Trove provides DBaaS to an OpenStack deployment. It deploys guest VMs that
provide the desired DB for use by the end consumer. The trove guest VMs need
connectivity back to the trove services via RPC (oslo.messaging) and to the
OpenStack services. The guest VMs can reach those services either via internal
networking (in the case of oslo.messaging) or via public interfaces (in the
case of the OpenStack services). In the example configuration below, we
designate a provider network as the network for trove to provision on each
guest VM. The guest can then connect to oslo.messaging via this network and to
the OpenStack services externally. Optionally, the guest VMs could use the
internal network to access the OpenStack services, but that would require more
containers to be bound to this network.

The deployment configuration outlined below may not be appropriate for
production environments. Review it very carefully against your own security
requirements.

Set up a neutron network for use by trove
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Trove needs connectivity between the control plane and the DB guest VMs. For
this purpose, a provider network should be created that bridges the trove
containers (if the control plane is installed in containers) or hosts with the
VMs. In the general case, this can be a simple flat neutron network.

An example entry in ``openstack_user_config.yml`` is shown below:

.. code-block:: yaml

   - network:
       container_bridge: "br-dbaas"
       container_type: "veth"
       container_interface: "eth14"
       host_bind_override: "eth14"
       ip_from_q: "dbaas"
       type: "flat"
       net_name: "dbaas-mgmt"
       group_binds:
         - neutron_linuxbridge_agent
         - oslomsg_rpc

Make sure to modify the other entries in this file as well; for example, the
``ip_from_q: "dbaas"`` entry implies a matching ``dbaas`` queue in
``cidr_networks``, as sketched below. The ``net_name`` is the physical network
name that is specified when creating the neutron network. The default value of
``dbaas-mgmt`` is also used to look up the addresses of the RPC messaging
containers. If the default is not used, some variables in ``defaults/main.yml``
will need to be overridden.
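
A minimal sketch of such an entry, using the subnet from the manual neutron
example further below; adjust the CIDR to your environment:

.. code-block:: yaml

   # The "dbaas" queue referenced by ip_from_q above must exist in
   # cidr_networks (illustrative CIDR, matching the example subnet).
   cidr_networks:
     dbaas: "172.19.0.0/22"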

By default this role will not create the neutron network automatically.
However, the default values can be changed so that the role does create the
neutron network; see the ``trove_service_net_*`` variables in
``defaults/main.yml``. By customizing the ``trove_service_net_*`` variables
and having this role create the neutron network, a full deployment of
OpenStack and DBaaS can proceed without interruption or intervention.
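
As a rough sketch, overrides along the following lines could be placed in
``user_variables.yml`` to let the role create the network. The variable names
shown here are illustrative assumptions; confirm the exact names against the
``trove_service_net_*`` entries in ``defaults/main.yml``:

.. code-block:: yaml

   # Assumed variable names -- verify against defaults/main.yml.
   trove_service_net_setup: True                    # let the role create the network
   trove_service_net_phys_net: "dbaas-mgmt"         # matches net_name above
   trove_service_net_subnet_cidr: "172.19.0.0/22"
   trove_service_net_allocation_pool_start: "172.19.1.100"
   trove_service_net_allocation_pool_end: "172.19.1.200"
   trove_service_net_dns_nameservers:
     - "8.8.4.4"
     - "8.8.8.8"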

The following is an example of how to set up the provider network in neutron
manually, if so desired:

.. code-block:: bash

   neutron net-create dbaas_service_net --shared \
       --provider:network_type flat \
       --provider:physical_network dbaas-mgmt

   neutron subnet-create dbaas_service_net 172.19.0.0/22 \
       --name dbaas_service_subnet \
       --ip-version=4 \
       --allocation-pool start=172.19.1.100,end=172.19.1.200 \
       --enable-dhcp \
       --dns-nameservers list=true 8.8.4.4 8.8.8.8

Pay special attention to the ``--allocation-pool`` so that it does not contain
IPs that are already assigned to hosts or containers (see the ``used_ips``
variable in ``openstack_user_config.yml``).
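
One way to guard against such overlaps is to reserve the guest allocation
range in ``openstack_user_config.yml``; a minimal sketch, using the pool from
the example above:

.. code-block:: yaml

   # Reserve the neutron allocation pool so openstack-ansible does not
   # assign these addresses to hosts or containers.
   used_ips:
     - "172.19.1.100,172.19.1.200"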

.. note::

   This role needs the neutron network created before it can run properly
   since the trove guest agent configuration file contains that information.

Building Trove images
~~~~~~~~~~~~~~~~~~~~~

When building disk images for the guest VM deployments, there are many items
to consider. A few are listed below:

#. Security of the VM and the network infrastructure
#. What DBs will be installed
#. What DB services will be supported
#. How will the images be maintained

Images can be built using the ``diskimage-builder`` tooling. The trove virtual
environment can be tarred up from the trove containers and deployed to the
images using custom ``diskimage-builder`` elements. See the
``trove/integration/scripts/files/elements`` directory in the OpenStack Trove
project for ``diskimage-builder`` elements to build trove disk images.
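
As a rough sketch, an image could be built from those elements with
``disk-image-create``. The element path and names below are assumptions based
on the trove repository layout; verify them against the trove and
``diskimage-builder`` releases in use:

.. code-block:: bash

   # Assumed element path and names -- verify against the trove tree.
   export ELEMENTS_PATH=/opt/trove/integration/scripts/files/elements

   # Build an Ubuntu-based guest image that includes the trove guest agent
   # and a MySQL datastore.
   disk-image-create -a amd64 -o trove-guest-ubuntu-mysql \
       ubuntu vm ubuntu-guest ubuntu-mysql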