Merge "Fix handful of trivial items in docs"

Zuul 2018-06-01 15:41:51 +00:00 committed by Gerrit Code Review
commit 4a2118ec2a
8 changed files with 11 additions and 11 deletions


@@ -101,5 +101,5 @@ various namespaces.
 By default, each endpoint is located in the same namespace as the current
 service's helm chart. To connect to a service which is running in a different
-kubernetes namespace, a ``namespace`` can be provided to each individual
+Kubernetes namespace, a ``namespace`` can be provided to each individual
 endpoint.
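
For illustration, such an override might look like the following (a minimal
sketch; the ``oslo_db`` endpoint name, the namespace, and the host are
hypothetical, not taken from this change):

.. code-block:: yaml

    endpoints:
      oslo_db:
        # Hypothetical: point this endpoint at a service running in a
        # different Kubernetes namespace than the chart itself.
        namespace: openstack-db
        hosts:
          default: mariadb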


@@ -6,7 +6,7 @@ Logging Requirements
 OpenStack-Helm defines a centralized logging mechanism to provide insight into
 the the state of the OpenStack services and infrastructure components as
-well as underlying kubernetes platform. Among the requirements for a logging
+well as underlying Kubernetes platform. Among the requirements for a logging
 platform, where log data can come from and where log data need to be delivered
 are very variable. To support various logging scenarios, OpenStack-Helm should
 provide a flexible mechanism to meet with certain operation needs.
@@ -20,14 +20,14 @@ Especially, Fluent-bit is used as a log forwarder and Fluentd is used as a main
 log aggregator and processor.

 Fluent-bit, Fluentd meet OpenStack-Helm's logging requirements for gathering,
-aggregating, and delivering of logged events. Flunt-bit runs as a daemonset on
+aggregating, and delivering of logged events. Fluent-bit runs as a daemonset on
 each node and mounts the `/var/lib/docker/containers` directory. The Docker
 container runtime engine directs events posted to stdout and stderr to this
 directory on the host. Fluent-bit then forward the contents of that directory to
 Fluentd. Fluentd runs as deployment at the designated nodes and expose service
 for Fluent-bit to forward logs. Fluentd should then apply the Logstash format to
 the logs. Fluentd can also write kubernetes and OpenStack metadata to the logs.
-Fluentd will then forward the results to Elasticsearch and to optionally kafka.
+Fluentd will then forward the results to Elasticsearch and to optionally Kafka.
 Elasticsearch indexes the logs in a logstash-* index by default. Kafka stores
 the logs in a ``logs`` topic by default. Any external tool can then consume the
 ``logs`` topic.
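
The host-log mount described there could be expressed roughly as follows (a
hypothetical pod-spec excerpt for the Fluent-bit daemonset, not the chart's
actual manifest):

.. code-block:: yaml

    # Container-level mount: tail the logs the Docker engine writes on each host.
    volumeMounts:
      - name: varlibdockercontainers
        mountPath: /var/lib/docker/containers
        readOnly: true
    # Pod-level volume backed by the host's container-log directory.
    volumes:
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers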


@@ -13,7 +13,7 @@ to a templated variable inside the ``values.yaml`` file.
 The ``min_available`` within each service's ``values.yaml`` file can be
 represented by either a whole number, such as ``1``, or a percentage,
 such as ``80%``. For example, when deploying 5 replicas of a pod (such as
-keystone-api), using ``min_available: 3`` would enfore policy to ensure at
+keystone-api), using ``min_available: 3`` would enforce policy to ensure at
 least 3 replicas were running, whereas using ``min_available: 80%`` would ensure
 that 4 replicas of that pod are running.
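
For reference, such a ``min_available`` setting might be carried in
``values.yaml`` along these lines (a sketch; the enclosing key path varies by
chart and is an assumption here):

.. code-block:: yaml

    pod:
      lifecycle:
        disruption_budget:
          api:
            # At least 3 of 5 replicas stay up; ``80%`` would keep 4 running.
            min_available: 3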


@@ -63,7 +63,7 @@ with rolling update strategies:
       maxSurge: {{ .Values.upgrades.rolling_update.max_surge }}
 {{ end }}

-In values.yaml in each chart, the same defaults are supplied in every
+In ``values.yaml`` in each chart, the same defaults are supplied in every
 chart, which allows the operator to override at upgrade or deployment
 time.
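
The defaults consumed by that template would sit in each chart's
``values.yaml`` along these lines (the ``max_surge`` key path mirrors the
template above; ``max_unavailable`` and the numbers are assumptions):

.. code-block:: yaml

    upgrades:
      rolling_update:
        # Pods that may be taken down at once during an update.
        max_unavailable: 1
        # Extra pods that may be created above the desired replica count.
        max_surge: 3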


@@ -18,7 +18,7 @@ All OpenStack projects can be configured such that upon deletion, their database
 will also be removed. To delete the database when the chart is deleted the
 database drop job must be enabled before installing the chart. There are two
 ways to enable the job, set the job_db_drop value to true in the chart's
-values.yaml file, or override the value using the helm install command as
+``values.yaml`` file, or override the value using the helm install command as
 follows:

 .. code-block:: shell
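
That override might look like the following (a sketch; the keystone chart and
the exact value path, here assumed to be ``manifests.job_db_drop``, are
illustrative, not taken from this change):

.. code-block:: shell

    # Hypothetical example: enable the database drop job at install time.
    helm install ./keystone --namespace=openstack \
        --set manifests.job_db_drop=true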


@@ -19,7 +19,7 @@ Alternatively, this step can be performed by running the script directly:
 ./tools/deployment/developer/ceph/040-ceph.sh

-Activate the openstack namespace to be able to use Ceph
+Activate the OpenStack namespace to be able to use Ceph
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

 .. literalinclude:: ../../../../tools/deployment/developer/ceph/045-ceph-ns-activate.sh
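
By the same pattern as the step above, the included script can presumably be
run directly:

.. code-block:: shell

    ./tools/deployment/developer/ceph/045-ceph-ns-activate.sh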


@@ -28,7 +28,7 @@ should be cloned:
 .. note::
   This installation, by default will use Google DNS servers, 8.8.8.8 or 8.8.4.4
-  and updates resolv.conf. These DNS nameserver entries can be changed by
+  and updates ``resolv.conf``. These DNS nameserver entries can be changed by
   updating file ``/opt/openstack-helm-infra/tools/images/kubeadm-aio/assets/opt/playbooks/vars.yaml``
   under section ``external_dns_nameservers``.
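
The section referenced in that note would presumably take a form like this (a
sketch built from the Google DNS servers named above; the exact layout of
``vars.yaml`` is an assumption):

.. code-block:: yaml

    external_dns_nameservers:
      - 8.8.8.8
      - 8.8.4.4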


@@ -5,7 +5,7 @@ Multinode
 Overview
 ========

-In order to drive towards a production-ready Openstack solution, our
+In order to drive towards a production-ready OpenStack solution, our
 goal is to provide containerized, yet stable `persistent
 volumes <https://kubernetes.io/docs/concepts/storage/persistent-volumes/>`_
 that Kubernetes can use to schedule applications that require state,
@@ -81,7 +81,7 @@ you intend to join the cluster.
    the ``ssh-copy-id`` command, for example: *ssh-copy-id
    ubuntu@192.168.122.178*
 3. Copy the key: ``sudo cp ~/.ssh/id_rsa /etc/openstack-helm/deploy-key.pem``
-4. Set correct owenership: ``sudo chown ubuntu
+4. Set correct ownership: ``sudo chown ubuntu
    /etc/openstack-helm/deploy-key.pem``

 Test this by ssh'ing to a node and then executing a command with
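
Steps 3 and 4 amount to the following, with a connectivity check added against
the example address from step 2 (the check itself is an assumption, not part
of this change):

.. code-block:: shell

    # Install the deploy key where openstack-helm expects it.
    sudo cp ~/.ssh/id_rsa /etc/openstack-helm/deploy-key.pem
    sudo chown ubuntu /etc/openstack-helm/deploy-key.pem
    # Hypothetical check: ssh to a node using the deploy key.
    ssh -i /etc/openstack-helm/deploy-key.pem ubuntu@192.168.122.178 hostname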