CephFS RWX Support in Host-based Ceph

Incorporated patchset 1 review comments.
Updated patchset 5, 6, and 8 review comments.
Fixed merge conflicts.

Change-Id: Icd7b08ab69273f6073b960a13cf59905532f851a
Signed-off-by: Juanita-Balaraj <juanita.balaraj@windriver.com>

Parent: ec42ebdda0
Commit: 63cd4f5fdc
@@ -1 +1 @@

.. [#f1] See :ref:`Data Network Planning <data-network-planning>` for more information.
@@ -2,8 +2,8 @@

.. begin-redfish-vms

For subclouds with servers that support Redfish Virtual Media Service \(version
1.2 or higher\), you can use the Central Cloud's CLI to install the ISO and
bootstrap subclouds from the Central Cloud. For more information, see
:ref:`Installing a Subcloud Using Redfish Platform Management Service
<installing-a-subcloud-using-redfish-platform-management-service>`.
@@ -1,13 +1,13 @@

.. code-block:: none

   system_mode: duplex
   distributed_cloud_role: systemcontroller

   management_start_address: <X.Y.Z>.2
   management_end_address: <X.Y.Z>.50

   additional_local_registry_images:
     - docker.io/starlingx/rvmc:stx.5.0-v1.0.0
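As a quick sanity check of the example range above (a sketch only; `start` and `end` are the host octets from the sample values):

```shell
# Host octets from the example range <X.Y.Z>.2 .. <X.Y.Z>.50 (illustrative).
start=2
end=50
# Inclusive count of addresses available for allocation in this range.
echo "pool size: $((end - start + 1))"
```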
@@ -18,7 +18,7 @@ For example:

An authorized administrator \('admin' and 'sysinv'\) can perform any Docker
action. Regular users can only interact with their own repositories \(i.e.
registry.local:9001/<keystoneUserName>/\). Any authenticated user can pull from
the following list of public images:

.. _kubernetes-admin-tutorials-authentication-and-authorization-d383e50:
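The namespace rule described above can be sketched as a small check (illustrative only; `can_push`, the user name, and the repository paths are hypothetical, not part of the registry itself):

```shell
# Hypothetical sketch of the rule: a regular user may only push under
# registry.local:9001/<keystoneUserName>/.
can_push() {
  local user="$1" repo="$2"
  case "$repo" in
    registry.local:9001/"$user"/*) echo allowed ;;
    *) echo denied ;;
  esac
}
can_push jsmith registry.local:9001/jsmith/myapp:1.0   # allowed
can_push jsmith registry.local:9001/admin/tool:1.0     # denied
```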
@@ -21,5 +21,5 @@ OpenStack

.. toctree::
   :maxdepth: 2

   openstack/index
@@ -315,7 +315,7 @@ conditions are in place:

.. code-block:: none

   ~(keystone_admin)]$ kubectl get pods -n kube-system | grep -e calico -e coredns
   calico-kube-controllers-5cd4695574-d7zwt   1/1   Running
   calico-node-6km72                          1/1   Running
   calico-node-c7xnd                          1/1   Running
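The repeated `-e` flags above let :command:`grep` match either pattern; a self-contained illustration with canned pod names (the names are placeholders, not live cluster output):

```shell
# Canned input standing in for `kubectl get pods` output (illustrative names).
pods='calico-kube-controllers-5cd4695574-d7zwt
calico-node-6km72
coredns-78d5bd4cf9-lqtp2
kube-proxy-abc12'
# Each -e supplies one pattern; lines matching either pattern are printed.
printf '%s\n' "$pods" | grep -e calico -e coredns
```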
@@ -13,7 +13,7 @@ Use the following command to run the Ansible Backup playbook and back up the

.. code-block:: none

   ~(keystone_admin)]$ ansible-playbook /usr/share/ansible/stx-ansible/playbooks/backup.yml -e "ansible_become_pass=<sysadmin password> admin_password=<sysadmin password>" -e "backup_user_local_registry=true"

The <admin\_password> and <ansible\_become\_pass> must be set correctly
using the ``-e`` option on the command line, in an override file, or in the
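Rather than exposing passwords on the command line, the same extra-vars can be collected in an override file and passed with ``-e @<file>`` (a sketch only; the file path and placeholder values are illustrative):

```shell
# Write the extra-vars to a file instead of the command line (sketch; the
# password values are placeholders).
cat > /tmp/backup-overrides.yml <<'EOF'
ansible_become_pass: <sysadmin password>
admin_password: <sysadmin password>
backup_user_local_registry: true
EOF
# The playbook would then be invoked as (not run here):
#   ansible-playbook /usr/share/ansible/stx-ansible/playbooks/backup.yml -e @/tmp/backup-overrides.yml
grep -c ':' /tmp/backup-overrides.yml   # 3 key: value lines
```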
@@ -9,7 +9,7 @@ Backup and Restore

.. toctree::
   :maxdepth: 1

   back-up-openstack
   restore-openstack-from-a-backup
   openstack-backup-considerations
@@ -9,7 +9,7 @@ Create, Test, and Terminate a PTP Notification Demo

This section provides instructions on accessing, creating, testing, and
terminating a **ptp-notification-demo**.

.. rubric:: |context|

Use the following procedure to copy the tarball from |dnload-loc|, create, test,

@@ -97,8 +97,8 @@ and terminate a ptp-notification-demo.

.. code-block:: none

   $ kubectl create namespace ptpdemo
   $ helm install -n notification-demo ~/charts/ptp-notification-demo -f ~/charts/ptp-notification-demo/ptp-notification-override.yaml
   $ kubectl get pods -n ptpdemo

.. code-block:: none
@@ -11,7 +11,7 @@ using the :command:`system application` and :command:`system-helm-override`

commands.

.. rubric:: |context|

|prod| provides the capability for application\(s\) to subscribe to

@@ -23,8 +23,8 @@ asynchronous |PTP| status notifications and pull for the |PTP| state on demand.

.. _install-ptp-notifications-ul-ydy-ggf-t4b:

-  The |PTP| port must be configured as Subordinate mode \(Slave mode\). For
   more information, see

   .. xbooklink :ref:`|prod-long| System Configuration
      <system-configuration-management-overview>`:
@@ -35,7 +35,7 @@ asynchronous |PTP| status notifications and pull for the |PTP| state on demand.

.. rubric:: |context|

Use the following steps to install the **ptp-notification** application.
@@ -52,7 +52,7 @@ Use the following steps to install the **ptp-notification** application.

.. code-block:: none

   $ source /etc/platform/openrc
   ~(keystone_admin)]$

#. Assign the |PTP| registration label to the controller\(s\).
@@ -66,7 +66,7 @@ Use the following steps to install the **ptp-notification** application.

.. code-block:: none

   ~(keystone_admin)]$ system host-label-assign controller-0 ptp-notification=true

#. Upload the |PTP| application using the following command:
@@ -47,6 +47,6 @@ The following prerequisites are required before the integration:

For instructions on creating, testing, and terminating a
**ptp-notification-demo**, see :ref:`Create, Test, and Terminate |PTP|
Notification Demo <create-test-and-terminate-a-ptp-notification-demo>`.
@@ -26,7 +26,7 @@ are included in the outer IP header.

.. only:: partner

   .. include:: ../../_includes/vxlan-data-networks.rest

.. _vxlan-data-networks-ul-rzs-kqf-zbb:

- Dynamic |VXLAN|, see :ref:`Dynamic VXLAN <dynamic-vxlan>`
@@ -41,8 +41,8 @@ are included in the outer IP header.

Before you can create project networks on a |VXLAN| provider network, you must
define at least one network segment range.

- :ref:`Dynamic VXLAN <dynamic-vxlan>`

- :ref:`Static VXLAN <static-vxlan>`

- :ref:`Differences Between Dynamic and Static VXLAN Modes <differences-between-dynamic-and-static-vxlan-modes>`
@@ -56,7 +56,7 @@ Multicast Endpoint Distribution

   :start-after: vswitch-text-1-begin
   :end-before: vswitch-text-1-end

.. _dynamic-vxlan-section-N10054-N1001F-N10001:
@@ -2,63 +2,63 @@

   sphinx-quickstart on Thu Sep  3 15:14:59 2020.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

========
Contents
========

.. toctree::
   :maxdepth: 1

   data-networks-overview

-------------------
VXLAN data networks
-------------------

.. toctree::
   :maxdepth: 1

   dynamic-vxlan
   static-vxlan
   differences-between-dynamic-and-static-vxlan-modes

-----------------------
Add segmentation ranges
-----------------------

.. toctree::
   :maxdepth: 1

   adding-segmentation-ranges-using-the-cli

------------------------------------
Data network interface configuration
------------------------------------

.. toctree::
   :maxdepth: 1

   configuring-data-interfaces
   configuring-data-interfaces-for-vxlans

------------------------------
MTU values of a data interface
------------------------------

.. toctree::
   :maxdepth: 1

   changing-the-mtu-of-a-data-interface-using-the-cli
   changing-the-mtu-of-a-data-interface

-----------------------------------
VXLAN data network setup completion
-----------------------------------

.. toctree::
   :maxdepth: 1

   adding-a-static-ip-address-to-a-data-interface
   managing-data-interface-static-ip-addresses-using-the-cli
   using-ip-address-pools-for-data-interfaces
@@ -11,10 +11,10 @@ or the |CLI|.

For more information on setting up |VXLAN| Data Networks, see :ref:`VXLAN Data Networks <vxlan-data-networks>`.

- :ref:`Adding a Static IP Address to a Data Interface <adding-a-static-ip-address-to-a-data-interface>`

- :ref:`Using IP Address Pools for Data Interfaces <using-ip-address-pools-for-data-interfaces>`

- :ref:`Adding and Maintaining Routes for a VXLAN Network <adding-and-maintaining-routes-for-a-vxlan-network>`
@@ -40,7 +40,7 @@ Ensure that all subclouds are managed and online.

.. code-block:: none

   $ source /etc/platform/openrc
   ~(keystone_admin)]$

#. After five minutes, lock and then unlock each controller in the System
@@ -25,7 +25,7 @@ You can use the |CLI| to review alarm summaries for the |prod-dc|.

   | subcloud-5 | 0               | 2            | 0            | 0        | degraded |
   | subcloud-1 | 0               | 0            | 0            | 0        | OK       |
   +------------+-----------------+--------------+--------------+----------+----------+

System Controller alarms and warnings are not included.
@@ -53,7 +53,7 @@ You can use the |CLI| to review alarm summaries for the |prod-dc|.

   +-----------------+--------------+--------------+----------+
   | 0               | 0            | 0            | 0        |
   +-----------------+--------------+--------------+----------+

The following command is equivalent to the :command:`fm alarm-summary`,
providing a count of alarms and warnings for the System Controller:
@@ -75,7 +75,7 @@ You can use the |CLI| to review alarm summaries for the |prod-dc|.

   +-----------------+--------------+--------------+----------+
   | 0               | 0            | 0            | 0        |
   +-----------------+--------------+--------------+----------+

- To list the alarms for a subcloud:
@@ -19,26 +19,26 @@ subcloud groups. The |CLI| commands for managing subcloud groups are:

.. _creating-subcloud-groups-ul-fvw-cj4-3jb:

:command:`dcmanager subcloud-group add`:
    Adds a new subcloud group.

:command:`dcmanager subcloud-group delete`:
    Deletes subcloud group details from the database.

    .. note::

        The 'Default' subcloud group cannot be deleted.

:command:`dcmanager subcloud-group list`:
    Lists subcloud groups.

:command:`dcmanager subcloud-group list-subclouds`:
    Lists the subclouds referencing a subcloud group.

:command:`dcmanager subcloud-group show`:
    Shows the details of a subcloud group.

:command:`dcmanager subcloud-group update`:
    Updates attributes of a subcloud group.

.. note::
@@ -100,7 +100,7 @@ Deletes subcloud group details from the database.

.. code-block:: none

   ~(keystone_admin)]$ dcmanager subcloud-group list-subclouds Group1

   +--+------+----+----+-------+-------+------+-----------+-----------+-------------+-----------+------------+------------+------+----------+----------+
   |id|name  |desc|loc.|sof.ver|mgmnt  |avail |deploy_stat|mgmt_subnet|mgmt_start_ip|mgmt_end_ip|mgmt_gtwy_ip|sysctrl_gtwy|grp_id|created_at|updated_at|
   +--+------+----+----+-------+-------+------+-----------+-----------+-------------+-----------+------------+------------+------+----------+----------+
@@ -97,8 +97,8 @@ if using the CLI.

All messaging between SystemControllers and Subclouds uses the **admin**
REST API service endpoints which, in this distributed cloud environment,
are all configured for secure HTTPS. Certificates for these HTTPS
connections are managed internally by |prod|.

.. xbooklink For more information, see :ref:`Certificate Management for Admin
   REST API Endpoints <certificate-management-for-admin-rest-endpoints>`.
@@ -20,7 +20,7 @@ subcloud, the subcloud installation has these phases:

- Executing the :command:`dcmanager subcloud add` command in the Central Cloud:

  - Uses Redfish Virtual Media Service to remotely install the ISO on
    controller-0 in the subcloud

  - Uses Ansible to bootstrap |prod-long| on controller-0 in
@@ -114,23 +114,23 @@ subcloud, the subcloud installation has these phases:

   bootstrap_interface: <bootstrap_interface_name>  # e.g. eno1
   bootstrap_address: <bootstrap_interface_ip_address>  # e.g. 128.224.151.183
   bootstrap_address_prefix: <bootstrap_netmask>  # e.g. 23

   # Board Management Controller
   bmc_address: <BMCs_IPv4_or_IPv6_address>  # e.g. 128.224.64.180
   bmc_username: <bmc_username>  # e.g. root

   # If the subcloud's bootstrap IP interface and the system controller are not on the
   # same network then the customer must configure a default route or static route
   # so that the Central Cloud can log in and bootstrap the newly installed subcloud.

   # If nexthop_gateway is specified and the network_address is not specified then a
   # default route will be configured. Otherwise, if a network_address is specified then
   # a static route will be configured.

   nexthop_gateway: <default_route_address>  # e.g. 128.224.150.1 (required)
   network_address: <static_route_address>  # e.g. 128.224.144.0
   network_mask: <static_route_mask>  # e.g. 255.255.254.0

   # Installation type codes
   #0 - Standard Controller, Serial Console
   #1 - Standard Controller, Graphical Console
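The route comments above encode a simple rule: `nexthop_gateway` alone yields a default route, and adding a `network_address` yields a static route. A sketch of that decision (the `route_type` helper is hypothetical, for illustration only):

```shell
# Hypothetical helper mirroring the route-selection comments above.
route_type() {
  local gw="$1" net="$2"
  if [ -n "$gw" ] && [ -z "$net" ]; then
    echo "default route via $gw"
  elif [ -n "$gw" ] && [ -n "$net" ]; then
    echo "static route to $net via $gw"
  else
    echo "no route configured"
  fi
}
route_type 128.224.150.1 ""              # default route via 128.224.150.1
route_type 128.224.150.1 128.224.144.0   # static route to 128.224.144.0 via 128.224.150.1
```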
@@ -139,22 +139,22 @@ subcloud, the subcloud installation has these phases:

   #4 - AIO Low-latency, Serial Console
   #5 - AIO Low-latency, Graphical Console
   install_type: 3

   # Defaults for the optional parameters below can be modified by uncommenting the option with a modified value.

   # This option can be set to extend the installing stage timeout value
   # wait_for_timeout: 3600

   # Set this option for HTTPS
   no_check_certificate: True

   # If the bootstrap interface is a VLAN interface then configure the VLAN ID.
   # bootstrap_vlan: <vlan_id>

   # Override default filesystem devices.
   # rootfs_device: "/dev/disk/by-path/pci-0000:00:1f.2-ata-1.0"
   # boot_device: "/dev/disk/by-path/pci-0000:00:1f.2-ata-1.0"

#. At the SystemController, create a
   /home/sysadmin/subcloud1-bootstrap-values.yaml overrides file for the
@@ -166,21 +166,21 @@ subcloud, the subcloud installation has these phases:

   system_mode: simplex
   name: "subcloud1"

   description: "test"
   location: "loc"

   management_subnet: 192.168.101.0/24
   management_start_address: 192.168.101.2
   management_end_address: 192.168.101.50
   management_gateway_address: 192.168.101.1

   external_oam_subnet: 10.10.10.0/24
   external_oam_gateway_address: 10.10.10.1
   external_oam_floating_address: 10.10.10.12

   systemcontroller_gateway_address: 192.168.204.101

   docker_registries:
     k8s.gcr.io:
       url: registry.central:9001/k8s.gcr.io
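The docker_registries override above makes the subcloud pull upstream images through the System Controller's registry; an illustrative rewrite of an image reference under that mapping (the `central_ref` helper is hypothetical, not a |prod| command):

```shell
# Hypothetical helper: prefix an upstream image reference with the central
# registry, mirroring the url override shown above.
central_ref() {
  echo "registry.central:9001/$1"
}
central_ref k8s.gcr.io/pause:3.2   # registry.central:9001/k8s.gcr.io/pause:3.2
```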
@@ -212,7 +212,7 @@ subcloud, the subcloud installation has these phases:

.. code-block:: none

   "DNS.1: registry.local DNS.2: registry.central IP.1: floating_management IP.2: floating_OAM"

If required, run the following command on the Central Cloud prior to
bootstrapping the subcloud to install the new certificate for the Central
@@ -220,7 +220,7 @@ subcloud, the subcloud installation has these phases:

.. code-block:: none

   ~(keystone_admin)]$ system certificate-install -m docker_registry path_to_cert

If you prefer to install container images from the default WRS AWS ECR
external registries, make the following substitutions for the
@@ -338,15 +338,15 @@ subcloud, the subcloud installation has these phases:

   controller-0:/home/sysadmin# tail /var/log/dcmanager/subcloud1_install_2019-09-23-19-19-42.log
   TASK [wait_for] ****************************************************************
   ok: [subcloud1]

   controller-0:/home/sysadmin# tail /var/log/dcmanager/subcloud1_bootstrap_2019-09-23-19-03-44.log
   k8s.gcr.io: {password: secret, url: null}
   quay.io: {password: secret, url: null}
   )

   TASK [bootstrap/bringup-essential-services : Mark the bootstrap as completed] ***
   changed: [subcloud1]

   PLAY RECAP *********************************************************************
   subcloud1                  : ok=230  changed=137  unreachable=0   failed=0
@@ -364,7 +364,7 @@ subcloud, the subcloud installation has these phases:

.. code-block:: none

   REGISTRY="docker-registry"
   SECRET_UUID=$(system service-parameter-list | fgrep $REGISTRY | fgrep auth-secret | awk '{print $10}')
   SECRET_REF=$(openstack secret list | fgrep ${SECRET_UUID} | awk '{print $2}')
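The command substitutions above pull one column out of the CLI's pipe-bordered tables; a self-contained illustration with a canned row (the row contents are invented, and the column position depends on the actual table layout):

```shell
# Canned table row standing in for `system service-parameter-list` output;
# with whitespace splitting, field 10 is the value cell in this layout.
row='| uuid-1234 | docker-registry | auth-secret | type | 55aa-secret |'
SECRET_UUID=$(printf '%s\n' "$row" | fgrep docker-registry | fgrep auth-secret | awk '{print $10}')
echo "$SECRET_UUID"
```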
@@ -78,15 +78,15 @@ subcloud, the subcloud installation process has two phases:

   -i <file>: Specify input ISO file
   -o <file>: Specify output ISO file
   -a <file>: Specify ks-addon.cfg file

   -p <p=v>: Specify boot parameter
             Examples:
             -p rootfs_device=nvme0n1
             -p boot_device=nvme0n1

             -p rootfs_device=/dev/disk/by-path/pci-0000:00:0d.0-ata-1.0
             -p boot_device=/dev/disk/by-path/pci-0000:00:0d.0-ata-1.0

   -d <default menu option>:
             Specify default boot menu option:
             0 - Standard Controller, Serial Console
@@ -98,7 +98,7 @@ subcloud, the subcloud installation process has two phases:

             NULL - Clear default selection
   -t <menu timeout>:
             Specify boot menu timeout, in seconds

The following example ks-addon.cfg, used with the -a option, sets up an
initial IP interface at boot time by defining a |VLAN| on an Ethernet
@@ -109,14 +109,14 @@ subcloud, the subcloud installation process has two phases:

    #### start ks-addon.cfg
    OAM_DEV=enp0s3
    OAM_VLAN=1234

    cat << EOF > /etc/sysconfig/network-scripts/ifcfg-$OAM_DEV
    DEVICE=$OAM_DEV
    BOOTPROTO=none
    ONBOOT=yes
    LINKDELAY=20
    EOF

    cat << EOF > /etc/sysconfig/network-scripts/ifcfg-$OAM_DEV.$OAM_VLAN
    DEVICE=$OAM_DEV.$OAM_VLAN
    BOOTPROTO=dhcp
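The heredoc pattern used in the ks-addon.cfg fragment above can be exercised safely outside a kickstart run by writing to a temporary directory instead of ``/etc/sysconfig/network-scripts``; the device and VLAN values below are only illustrative:

```shell
# Sketch of the ks-addon.cfg heredoc pattern, redirected to a temp
# directory so it can be run on any machine without touching /etc.
OAM_DEV=enp0s3
OAM_VLAN=1234
OUT=$(mktemp -d)

cat << EOF > "$OUT/ifcfg-$OAM_DEV.$OAM_VLAN"
DEVICE=$OAM_DEV.$OAM_VLAN
BOOTPROTO=dhcp
EOF

cat "$OUT/ifcfg-$OAM_DEV.$OAM_VLAN"
```

Because ``$OAM_DEV`` and ``$OAM_VLAN`` are unquoted in the heredoc, the shell expands them at write time, producing a literal ``ifcfg-enp0s3.1234`` file.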
@@ -151,21 +151,21 @@ subcloud, the subcloud installation process has two phases:

    system_mode: simplex
    name: "subcloud1"

    description: "test"
    location: "loc"

    management_subnet: 192.168.101.0/24
    management_start_address: 192.168.101.2
    management_end_address: 192.168.101.50
    management_gateway_address: 192.168.101.1

    external_oam_subnet: 10.10.10.0/24
    external_oam_gateway_address: 10.10.10.1
    external_oam_floating_address: 10.10.10.12

    systemcontroller_gateway_address: 192.168.204.101

    docker_registries:
      k8s.gcr.io:
        url: registry.central:9001/k8s.gcr.io
@@ -204,7 +204,7 @@ subcloud, the subcloud installation process has two phases:

   .. note::

      If you have a reason not to use the Central Cloud's local registry, you
      can pull the images from another local private Docker registry.

#. You can use the Central Cloud's local registry to pull images on subclouds.
   The Central Cloud's local registry's HTTPS certificate must have the
@@ -283,10 +283,10 @@ subcloud, the subcloud installation process has two phases:

    k8s.gcr.io: {password: secret, url: null}
    quay.io: {password: secret, url: null}
    )

    TASK [bootstrap/bringup-essential-services : Mark the bootstrap as completed] ***
    changed: [subcloud1]

    PLAY RECAP *********************************************************************
    subcloud1 : ok=230 changed=137 unreachable=0 failed=0
@@ -304,7 +304,7 @@ subcloud, the subcloud installation process has two phases:

   .. code-block:: none

      REGISTRY="docker-registry"
      SECRET_UUID=`system service-parameter-list | fgrep $REGISTRY | fgrep auth-secret | awk '{print $10}'`
      SECRET_REF=`openstack secret list | fgrep ${SECRET_UUID} | awk '{print $2}'`
@@ -7,18 +7,18 @@ Install and Provisioning the Central Cloud
==========================================

Installing the Central Cloud is similar to installing a standalone |prod|
system.

.. rubric:: |context|

The Central Cloud supports one of the following deployment configurations:

- an |AIO|-Duplex deployment configuration,

- a Standard with Controller Storage and one or more workers deployment
  configuration, or

- a Standard with Dedicated Storage Nodes deployment configuration.

.. rubric:: |proc|
@@ -42,7 +42,7 @@ You will also need to make the following modification:

  management_start_address and management_end_address, as shown below) to
  exclude the IP addresses reserved for gateway routers that provide routing
  to the subclouds' management subnets.

- Also, include the container images shown in bold below in
  additional\_local\_registry\_images, which are required to support subcloud
  installs with the Redfish Platform Management Service, and subcloud installs
@@ -38,30 +38,30 @@ or firmware updates, see:

.. _managing-subcloud-groups-ul-a3s-nqf-1nb:

- To create an upgrade orchestration strategy, use the :command:`dcmanager
  upgrade-strategy create` command.

  .. xbooklink For more information, see :ref:`Distributed
     Upgrade Orchestration Process Using the CLI
     <distributed-upgrade-orchestration-process-using-the-cli>`.

- To create an update \(patch\) orchestration strategy, use the
  :command:`dcmanager patch-strategy create` command.

  .. xbooklink For more information, see
     :ref:`Creating an Update Strategy for Distributed Cloud Update Orchestration
     <creating-an-update-strategy-for-distributed-cloud-update-orchestration>`.

- To create a firmware update orchestration strategy, use the
  :command:`dcmanager fw-update-strategy create` command.

  .. xbooklink For more information,
     see :ref:`Device Image Update Orchestration
     <device-image-update-orchestration>`.

.. seealso::

   :ref:`Creating Subcloud Groups <creating-subcloud-groups>`

   :ref:`Orchestration Strategy Using Subcloud Groups <ochestration-strategy-using-subcloud-groups>`
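Assuming a keystone admin session on the System Controller, the three strategy-creation commands described above are invoked as follows; each command also accepts optional arguments (subcloud group, worker limits, and so on) that are omitted here:

.. code-block:: none

   ~(keystone_admin)]$ dcmanager upgrade-strategy create
   ~(keystone_admin)]$ dcmanager patch-strategy create
   ~(keystone_admin)]$ dcmanager fw-update-strategy create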
@@ -28,14 +28,14 @@ fails, delete subclouds, and monitor or change the managed status of subclouds.

      | 3  | subcloud3 | managed      | online             | out-of-sync |
      | 4  | subcloud4 | managed      | offline            | unknown     |
      +----+-----------+--------------+--------------------+-------------+

- To show information for a subcloud, use the :command:`subcloud show` command.

  .. code-block:: none

     ~(keystone_admin)]$ dcmanager subcloud show <subcloud-name>

  For example:

@@ -64,7 +64,7 @@ fails, delete subclouds, and monitor or change the managed status of subclouds.

      | patching_sync_status        | in-sync                    |
      | platform_sync_status        | in-sync                    |
      +-----------------------------+----------------------------+

- To show information about the oam-floating-ip field for a specific
  subcloud, use the :command:`subcloud

@@ -98,7 +98,7 @@ fails, delete subclouds, and monitor or change the managed status of subclouds.

      | platform_sync_status        | in-sync                    |
      | oam_floating_ip             | 10.10.10.12                |
      +-----------------------------+----------------------------+
- To edit the settings for a subcloud, use the :command:`subcloud update`
  command.

@@ -109,7 +109,7 @@ fails, delete subclouds, and monitor or change the managed status of subclouds.

     [--description <description>] \
     [--location <location>] \
     <subcloud-name>
- To toggle a subcloud between **Unmanaged** and **Managed**, pass these
  parameters to the :command:`subcloud` command.

@@ -119,12 +119,12 @@ fails, delete subclouds, and monitor or change the managed status of subclouds.

  .. code-block:: none

     ~(keystone_admin)]$ dcmanager subcloud unmanage <subcloud-name>

  .. code-block:: none

     ~(keystone_admin)]$ dcmanager subcloud manage <subcloud-name>

- To reconfigure a subcloud if deployment fails, use the :command:`subcloud reconfig` command.
@@ -135,11 +135,11 @@ fails, delete subclouds, and monitor or change the managed status of subclouds.

     ~(keystone_admin)]$ dcmanager subcloud reconfig <subcloud-id/name> --deploy-config \
     <filepath> --sysadmin-password <password>

  where ``--deploy-config`` must reference the deployment configuration file.
  For more information, see either:

  .. xbooklink |inst-doc|: :ref:`The
     Deployment Manager <the-deployment-manager>` or |inst-doc|:
     :ref:`Deployment Manager Examples <deployment-manager-examples>` for more
@@ -154,14 +154,14 @@ fails, delete subclouds, and monitor or change the managed status of subclouds.

  .. code-block:: none

     ~(keystone_admin)]$ dcmanager subcloud manage <subcloud-id/name>

- To delete a subcloud, use the :command:`subcloud delete` command.

  .. code-block:: none

     ~(keystone_admin)]$ dcmanager subcloud delete <subcloud-name>

  .. caution::
@@ -14,20 +14,20 @@ subclouds from the System Controller.

- To list subclouds, select **Distributed Cloud Admin** \> **Cloud Overview**.

  .. image:: figures/uhp1521894539008.png

  You can perform full-text searches or filter by column using the search bar
  above the subcloud list.

  .. image:: figures/pdn1591034100660.png

- To perform operations on a subcloud, use the **Actions** menu.

  .. image:: figures/pvr1591032739503.png

  .. caution::

     If you delete a subcloud, then you must reinstall it before you can

@@ -39,7 +39,7 @@ subclouds from the System Controller.

  window.

  .. image:: figures/rpn1518108364837.png
@@ -27,7 +27,7 @@ subcloud hosts and networks, just as you would for any |prod| system

- From the Horizon header, select a subcloud from the **Admin** menu.

  .. image:: figures/rpn1518108364837.png

- From the Cloud Overview page, select **Alarm & Event Details** \> **Host Details** for the subcloud.
@@ -37,9 +37,9 @@ subcloud to the sysinv service credentials of the systemController.

   .. code-block:: none

      #!/bin/bash -e

      USAGE="usage: ${0##*/} <username> <password>"

      if [ "$#" -ne 2 ]
      then
          echo Missing arguments.

@@ -47,14 +47,14 @@ subcloud to the sysinv service credentials of the systemController.

          echo
          exit
      fi

      NEW_CREDS="username:$1 password:$2"

      echo

      for REGISTRY in docker-registry quay-registry elastic-registry gcr-registry k8s-registry
      do
          echo -n "Updating" $REGISTRY "credentials ."
          SECRET_UUID=`system service-parameter-list | fgrep $REGISTRY | fgrep auth-secret | awk '{print $10}'`
          if [ -z "$SECRET_UUID" ]

@@ -67,7 +67,7 @@ subcloud to the sysinv service credentials of the systemController.

          echo -n "."
          SECRET_VALUE=`openstack secret get ${SECRET_REF} --payload -f value`
          echo -n "."

          openstack secret delete ${SECRET_REF} > /dev/null
          echo -n "."
          NEW_SECRET_VALUE=$NEW_CREDS

@@ -78,7 +78,7 @@ subcloud to the sysinv service credentials of the systemController.

          system service-parameter-modify docker $REGISTRY auth-secret="${NEW_SECRET_UUID}" > /dev/null
          echo -n "."
          echo " done."

          echo -n "Validating $REGISTRY credentials updated to: "
          SECRET_UUID=`system service-parameter-list | fgrep $REGISTRY | fgrep auth-secret | awk '{print $10}'`
          if [ -z "$SECRET_UUID" ]

@@ -88,9 +88,9 @@ subcloud to the sysinv service credentials of the systemController.

          SECRET_REF=`openstack secret list | fgrep ${SECRET_UUID} | awk '{print $2}'`
          SECRET_VALUE=`openstack secret get ${SECRET_REF} --payload -f value`
          echo $SECRET_VALUE

          echo
      done
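The ``NEW_CREDS`` string the script assembles from its two arguments can be checked in isolation; the username and password values below are placeholders:

```shell
# Stand-alone check of the NEW_CREDS construction used by the script
# above; "admin"/"s3cret" stand in for the <username> <password> arguments.
set -- admin s3cret
NEW_CREDS="username:$1 password:$2"
echo "$NEW_CREDS"   # prints: username:admin password:s3cret
```

This is the exact payload format the script later stores back as the registry auth secret.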
@@ -82,7 +82,7 @@ CPU cores from the Horizon Web interface.

.. only:: partner

   .. include:: ../../_includes/configure-cpu-core-vswitch.rest

**Shared**
    Not currently supported.
@@ -35,5 +35,5 @@ For a list of tasks see:

- :ref:`Initiating a Device Image Update for a Host <initiating-a-device-image-update-for-a-host>`

- :ref:`Displaying the Status of Device Images <displaying-the-status-of-device-images>`
@@ -55,7 +55,7 @@ For example, run the following commands:

   ~(keystone_admin)$ system host-unlock worker-0

To pass the |FEC| device to a container, the following requests/limits must be
entered into the pod specification:

.. code-block:: none
@@ -76,7 +76,7 @@ can reproduce them later.

#. Power up the host.

   If the host has been deleted from the Host Inventory, the host software
   is reinstalled.

   .. From Power up the host

   .. xbookref For details, see :ref:`|inst-doc| <platform-installation-overview>`.
@@ -25,7 +25,7 @@ per host.

.. only:: starlingx

   A node may only allocate huge pages of a single size, either 2 MiB or 1 GiB.

.. only:: partner

   .. include:: ../../../_includes/avs-note.rest
@@ -92,11 +92,11 @@ informative message is displayed.

   |NUMA| Node. If no 2 MiB pages are required, type 0. Due to
   limitations in Kubernetes, only a single huge page size can be used per
   host, across Application memory.

   .. only:: partner

      .. include:: ../../../_includes/allocating-host-memory-using-horizon.rest
         :start-after: application-2m-text-begin
         :end-before: application-2m-text-end

@@ -108,25 +108,25 @@ informative message is displayed.

   |NUMA| Node. If no 1 GiB pages are required, type 0. Due to
   limitations in Kubernetes, only a single huge page size can be used per
   host, across Application memory.

   .. only:: partner

      .. include:: ../../../_includes/allocating-host-memory-using-horizon.rest
         :start-after: application-1g-text-begin
         :end-before: application-1g-text-end

   .. only:: partner

      .. include:: ../../../_includes/allocating-host-memory-using-horizon.rest
         :start-after: vswitch-hugepage-1g-text-begin
         :end-before: vswitch-hugepage-1g-text-end

   .. only:: partner

      .. include:: ../../../_includes/allocating-host-memory-using-horizon.rest
         :start-after: vswitch-hugepage-size-node-text-begin
         :end-before: vswitch-hugepage-size-node-text-end
@@ -9,7 +9,7 @@ Allocate Host Memory Using the CLI

.. only:: starlingx

   You can edit the platform and huge page memory allocations for a |NUMA|
   node from the CLI.

.. only:: partner
@@ -85,7 +85,7 @@ worker or an |AIO| controller.

   given processor.

   For example:

   .. only:: starlingx

      .. code-block:: none
@@ -149,7 +149,7 @@ worker or an |AIO| controller.

   Use with the optional ``-f`` argument. This option specifies the
   intended function of the huge page allocation for the application.

   .. only:: partner

      .. include:: ../../../_includes/allocating-host-memory-using-the-cli.rest
@@ -183,7 +183,7 @@ worker or an |AIO| controller.

   number of 1 GiB huge pages to make available. Due to limitations in
   Kubernetes, only a single huge page size can be used per host, across
   Application memory.

   .. only:: partner

      .. include:: ../../../_includes/allocating-host-memory-using-the-cli.rest
@@ -197,14 +197,14 @@ worker or an |AIO| controller.

   .. code-block:: none

      (keystone_admin)$ system host-memory-modify worker-0 1 -2M 4

   .. only:: starlingx

      For an openstack-compute labeled worker node or |AIO| controller,
      'application' huge pages must also be 1G, because Kubernetes only
      supports a single huge page size per worker node. The following
      example shows configuring 10 1G huge pages for application usage:
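A sketch of the corresponding command, following the argument pattern of the 2M example above; the exact option order is an assumption here, so confirm it against :command:`system help host-memory-modify` on your release:

.. code-block:: none

   (keystone_admin)$ system host-memory-modify -f application worker-0 1 -1G 10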
   .. only:: partner

      .. include:: ../../../_includes/allocating-host-memory-using-the-cli.rest
@@ -118,7 +118,7 @@ Settings <link-aggregation-settings>`.

For example, to attach an aggregated Ethernet interface named **bond0** to
the platform management network, using member interfaces **enp0s8**
and **enp0s11** on **controller-0**:

.. code-block:: none
@@ -38,7 +38,7 @@ to platform networks using the CLI.

   |           |          |           |          |      |               |             |                                |            |
   | e7bd04f...| cluster0 | platform  | vlan     | 158  | []            | [u'sriov0'] | []                             | MTU=1500   |
   +-----------+----------+-----------+----------+------+---------------+-------------+--------------------------------+------------+

#. Create an Ethernet interface.
@@ -65,7 +65,7 @@ For more information, see :ref:`Provisioning SR-IOV VF Interfaces using the CLI

   ~(keystone_admin)$ system host-if-add <hostname> -V <vlan_id> \
   -c <--ifclass> <interface name> <sriov_intf_name> [<datanetwork>]

where the following options are available:
@@ -32,16 +32,16 @@ kernel Ethernet Bonding Driver documentation available online

     - Description
     - Supported Interface Types
   * - Active-backup

       \(default value\)
     - Provides fault tolerance. Only one slave interface at a time is
       available. The backup slave interface becomes active only when the
       active slave interface fails.

       For platform interfaces \(such as |OAM|, cluster-host, and management
       interfaces\), the system will select the interface with the lowest
       |MAC| address as the primary interface when all slave interfaces are
       enabled.
     - Management, |OAM|, cluster-host, and data interfaces
   * - Balanced XOR
     - Provides aggregated bandwidth and fault tolerance. The same

@@ -53,12 +53,12 @@ kernel Ethernet Bonding Driver documentation available online

       You can modify the transmit policy using the xmit-hash-policy option.
       For details, see :ref:`Table 2
       <link-aggregation-settings-xmit-hash-policy>`.
     - |OAM|, cluster-host, and data interfaces
   * - 802.3ad
     - Provides aggregated bandwidth and fault tolerance. Implements dynamic
       link aggregation as per the IEEE 802.3ad |LACP| specification.

       You can modify the transmit policy using the xmit-hash-policy option.
       For details, see :ref:`Table 2
       <link-aggregation-settings-xmit-hash-policy>`.
@@ -70,8 +70,8 @@ kernel Ethernet Bonding Driver documentation available online

       one of the aggregated interfaces during |PXE| boot. If the far-end
       switch is configured to use active |LACP|, it can establish a |LAG| and
       use either interface, potentially resulting in a communication failure
       during the boot process.
     - Management, |OAM|, cluster-host, and data interfaces

.. _link-aggregation-settings-xmit-hash-policy:
@ -81,22 +81,22 @@ kernel Ethernet Bonding Driver documentation available online
|
|||||||
|
|
||||||
* - Options
|
* - Options
|
||||||
- Description
|
- Description
|
||||||
- Supported Interface Types
|
- Supported Interface Types
|
||||||
* - Layer 2
|
* - Layer 2
|
||||||
|
|
||||||
\(default value\)
|
\(default value\)
|
||||||
- Hashes on source and destination |MAC| addresses.
|
- Hashes on source and destination |MAC| addresses.
|
||||||
- |OAM|, internal management, cluster-host, and data interfaces \(worker
|
- |OAM|, internal management, cluster-host, and data interfaces \(worker
|
||||||
nodes\).
|
nodes\).
|
||||||
* - Layer 2 + 3
|
* - Layer 2 + 3
|
||||||
- Hashes on source and destination |MAC| addresses, and on source and
|
- Hashes on source and destination |MAC| addresses, and on source and
|
||||||
destination IP addresses.
|
destination IP addresses.
|
||||||
- |OAM|, internal management, and cluster-host
|
- |OAM|, internal management, and cluster-host
|
||||||
* - Layer 3 + 4
|
* - Layer 3 + 4
|
||||||
- Hashes on source and destination IP addresses, and on source and
|
- Hashes on source and destination IP addresses, and on source and
|
||||||
destination ports.
|
destination ports.
|
||||||
- |OAM|, internal management, and cluster-host
|
- |OAM|, internal management, and cluster-host
|
||||||
|
|
||||||
|
|
||||||
.. list-table:: Table 3. primary_reselect Options
|
.. list-table:: Table 3. primary_reselect Options
|
||||||
:widths: auto
|
:widths: auto
|
||||||
@ -104,32 +104,32 @@ kernel Ethernet Bonding Driver documentation available online
|
|||||||
|
|
||||||
* - Options
|
* - Options
|
||||||
- Description
|
- Description
|
||||||
- Supported Interface Types
|
- Supported Interface Types
|
||||||
* - Always
|
* - Always
|
||||||
|
|
||||||
\(default value\)
|
\(default value\)
|
||||||
- The primary slave becomes an active slave whenever it comes back up.
|
- The primary slave becomes an active slave whenever it comes back up.
|
||||||
- |OAM|, internal management, and cluster-host
|
- |OAM|, internal management, and cluster-host
|
||||||
* - Better
|
* - Better
|
||||||
- The primary slave becomes active slave whenever it comes back up, if the
|
- The primary slave becomes active slave whenever it comes back up, if the
|
||||||
speed and the duplex of the primary slave is better than the speed duplex of the current active slave.
|
speed and the duplex of the primary slave is better than the speed duplex of the current active slave.
|
||||||
- |OAM|, internal management, and cluster-host
|
- |OAM|, internal management, and cluster-host
|
||||||
* - Failure
|
* - Failure
|
||||||
- The primary slave becomes the active slave only if the current active
|
- The primary slave becomes the active slave only if the current active
|
||||||
slave fails and the primary slave is up.
|
slave fails and the primary slave is up.
|
||||||
- |OAM|, internal management, and cluster-host
|
- |OAM|, internal management, and cluster-host
|
||||||
|
|
||||||
-----------------------------------------
|
-----------------------------------------
|
||||||
LAG Configurations for AIO Duplex Systems
|
LAG Configurations for AIO Duplex Systems
|
||||||
-----------------------------------------
|
-----------------------------------------
|
||||||
|
|
||||||
For a duplex-direct system set-up, use a |LAG| mode with active-backup for the
|
For a duplex-direct system set-up, use a |LAG| mode with active-backup for the
|
||||||
management interface when attaching cables between the active and standby
|
management interface when attaching cables between the active and standby
|
||||||
controller nodes. When both interfaces are enabled, the system automatically
|
controller nodes. When both interfaces are enabled, the system automatically
|
||||||
selects the primary interface within the |LAG| with the lowest |MAC| address on
|
selects the primary interface within the |LAG| with the lowest |MAC| address on
|
||||||
the active controller to connect to the primary interface within the |LAG| with
|
the active controller to connect to the primary interface within the |LAG| with
|
||||||
the lowest |MAC| address on the standby controller.
|
the lowest |MAC| address on the standby controller.
|
||||||
|
|
||||||
The controllers act independently of each other when selecting the primary
|
The controllers act independently of each other when selecting the primary
|
||||||
interface. Therefore, it is critical that the inter-node cabling is completed
|
interface. Therefore, it is critical that the inter-node cabling is completed
|
||||||
to ensure that both nodes select a primary interface that is attached to the
|
to ensure that both nodes select a primary interface that is attached to the
|
||||||
@ -138,15 +138,14 @@ attachments must be from the lowest |MAC| address to the lowest |MAC| address
|
|||||||
for the first cable, and the next lowest |MAC| address to the next lowest |MAC|
|
for the first cable, and the next lowest |MAC| address to the next lowest |MAC|
|
||||||
address for the second cable. Failure to follow these cabling requirements
|
address for the second cable. Failure to follow these cabling requirements
|
||||||
will result in a loss of communication between the two nodes.
|
will result in a loss of communication between the two nodes.
|
||||||
|
|
||||||
In addition to the special cabling requirements, the node BIOS settings may
|
In addition to the special cabling requirements, the node BIOS settings may
|
||||||
need to be configured to ensure that the node attempts to network boot from
|
need to be configured to ensure that the node attempts to network boot from
|
||||||
the lowest |MAC| address interface within the |LAG|. This may be required only on
|
the lowest |MAC| address interface within the |LAG|. This may be required only on
|
||||||
systems that enable all hardware interfaces during network booting rather than
|
systems that enable all hardware interfaces during network booting rather than
|
||||||
only enabling the interface that is currently selected for booting.
|
only enabling the interface that is currently selected for booting.
|
||||||
|
|
||||||
Configure the cables associated with the management |LAG| so that the primary
|
Configure the cables associated with the management |LAG| so that the primary
|
||||||
interface within the |LAG| with the lowest |MAC| address on the active
|
interface within the |LAG| with the lowest |MAC| address on the active
|
||||||
controller connects to the primary interface within the |LAG| with the lowest
|
controller connects to the primary interface within the |LAG| with the lowest
|
||||||
|MAC| address on standby controller.
|
|MAC| address on standby controller.
|
||||||
|
|
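The lowest-MAC selection rule described above can be sketched in shell. This is an illustrative sketch only: the interface names and |MAC| addresses below are made up, and the platform performs this selection internally.

```shell
# Sketch of the primary-interface selection rule: among the enabled slave
# interfaces of a LAG, the interface with the numerically lowest MAC address
# becomes the primary. Names and MACs are hypothetical.
slaves='enp0s9=08:00:27:c4:11:02
enp0s8=08:00:27:a1:33:10'

# For fixed-width lowercase hex, lexical sort order equals numeric order.
primary=$(printf '%s\n' "$slaves" | sort -t= -k2 | head -n1 | cut -d= -f1)
echo "$primary"
```

Each controller applies this rule independently, which is why the cabling order between the two nodes matters.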
@@ -58,7 +58,7 @@ a Generic PCI Device for Use by VMs
   .. code-block:: none

      ~(keystone_admin)$ openstack flavor set flavor_name --property "pci_passthrough:alias"="pci_alias[:number_of_devices]"

   where

   **<flavor\_name>**
@@ -97,7 +97,7 @@ a Generic PCI Device for Use by VMs
      defined class code for 'Display Controller' \(0x03\).

      .. note::

         On a system with multiple cards that use the same default |PCI|
         alias, you must assign and use a unique |PCI| alias for each one.

@@ -115,20 +115,20 @@ a Generic PCI Device for Use by VMs
   .. code-block:: none

      ~(keystone_admin)$ openstack flavor set flavor_name --property "pci_passthrough:alias"="gpu:1"


   To make a GPU device from a specific vendor available to a guest:

   .. code-block:: none

      ~(keystone_admin)$ openstack flavor set flavor_name --property "pci_passthrough:alias"="nvidia-tesla-p40:1"


   To make multiple |PCI| devices available, use the following command:

   .. code-block:: none

      ~(keystone_admin)$ openstack flavor set flavor_name --property "pci_passthrough:alias"="gpu:1, qat-c62x-vf:2"
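The alias/count syntax used by the flavor commands above can be illustrated with a small helper. The helper name is hypothetical; only the `alias:count` pair format comes from the commands shown.

```shell
# Hypothetical helper that joins alias:count pairs into the value passed to
# --property "pci_passthrough:alias"=... in the flavor commands above.
build_alias_property() {
  local IFS=,
  printf '%s\n' "$*"   # "$*" joins the arguments with the first IFS character
}

prop=$(build_alias_property gpu:1 qat-c62x-vf:2)
echo "$prop"
```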
@@ -50,8 +50,8 @@ already, and that |VLAN| ID 10 is a valid segmentation ID assigned to
      The Edit Interface dialog appears.

      .. image:: ../figures/ptj1538163621289.png


   Select **pci-passthrough** from the **Interface Class** drop-down, and
   then select the data network to which to attach the interface.
@@ -78,8 +78,8 @@ already, and that |VLAN| ID 10 is a valid segmentation ID assigned to


      .. image:: ../figures/bek1516655307871.png


   Click the **Next** button to proceed to the Subnet tab.
@@ -45,7 +45,7 @@ To edit a device, you must first lock the host.
#. Click **Edit Device**.

   .. image:: ../figures/jow1452530556357.png

#. Update the information as required.
@@ -20,7 +20,7 @@ automatically inventoried on a host:
.. code-block:: none

   ~(keystone_admin)$ system host-device-list controller-0 --all


You can use the following command from the |CLI| to list the devices for a
host, for example:
@@ -30,7 +30,7 @@ host, for example:
   ~(keystone_admin)$ system host-device-list --all controller-0
   +-------------+----------+-------+--------+--------+-------+--------+---------+-----------+---------+
   | name        | address  | class | vendor | device | class | vendor | device  | numa_node | enabled |
   |             |          | id    | id     | id     |       | name   | name    |           |         |
   +-------------+----------+-------+--------+--------+-------+--------+---------+-----------+---------+
   | pci_0000_05.| 0000:05:.| 030.  | 10de   | 13f2   | VGA.  | NVIDIA.| GM204GL | 0         | True    |
   | pci_0000_06.| 0000:06:.| 030.  | 10de   | 13f2   | VGA.  | NVIDIA.| GM204GL | 0         | True    |
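Saved output from the command above can be filtered with standard tools. This is a sketch: the sample rows mirror the truncated listing shown, and the column positions are taken from that table.

```shell
# Sketch: count enabled devices with vendor ID 10de in saved
# 'system host-device-list' output. Sample rows mirror the listing above.
cat > devices.txt <<'EOF'
| pci_0000_05.| 0000:05:.| 030. | 10de | 13f2 | VGA. | NVIDIA.| GM204GL| 0 | True |
| pci_0000_06.| 0000:06:.| 030. | 10de | 13f2 | VGA. | NVIDIA.| GM204GL| 0 | True |
EOF

# Field 5 is the vendor ID and field 11 the enabled flag ($1 is empty
# because each row starts with '|').
enabled=$(awk -F'|' '$5 ~ /10de/ && $11 ~ /True/ {n++} END {print n+0}' devices.txt)
echo "$enabled"
```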
@@ -16,7 +16,7 @@ PCI Device Access for VMs

.. toctree::
   :maxdepth: 1

   sr-iov-encryption-acceleration
   configuring-pci-passthrough-ethernet-interfaces
   pci-passthrough-ethernet-interface-devices
@@ -92,4 +92,4 @@ Nodes must be locked before labels can be assigned or removed.

.. include:: ../../_includes/using-labels-to-identify-openstack-nodes.rest
   :start-after: table-1-of-contents-end
@@ -935,7 +935,7 @@ Login to pod rook-ceph-tools, get generated key for client.admin and ceph.conf i
On hosts controller-0 and controller-1, replace /etc/ceph/ceph.conf and /etc/ceph/keyring
with the content obtained from the rook-ceph-tools pod.

Update the data field of the ceph-etc configmap with the new monitor IP:

::

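The step above can be sketched for the file part: rewrite the monitor address in a local copy of ceph.conf before distributing it to the controllers. The file contents and IP addresses here are illustrative, not values from a real deployment.

```shell
# Sketch: update the monitor address in a local copy of ceph.conf before
# copying it to /etc/ceph/ on the controllers. Values are hypothetical.
NEW_MON_IP="192.168.204.3"
cat > ceph.conf <<'EOF'
[global]
mon_host = 192.168.204.2
EOF

sed -i "s/^mon_host = .*/mon_host = ${NEW_MON_IP}/" ceph.conf
grep '^mon_host' ceph.conf
```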
@@ -22,8 +22,8 @@ table lists IPv4 and IPv6 support for different networks:
     - IPv6 Support
     - Comment
   * - |PXE| boot
     - Y
     - N
     - If present, the |PXE| boot network is used for |PXE| booting of new
       hosts \(instead of using the internal management network\), and must be
       untagged. It is limited to IPv4, because the |prod| installer does not
@@ -39,4 +39,4 @@ table lists IPv4 and IPv6 support for different networks:
   * - Cluster Host
     - Y
     - Y
     - The Cluster Host network supports IPv4 or IPv6 addressing.
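The address-family distinction in the table above can be sketched with a trivial classifier; the function name and sample addresses are illustrative only.

```shell
# Sketch: classify an address family, mirroring the support matrix above
# (for example, the PXE boot network must use IPv4).
addr_family() {
  case "$1" in
    *:*) echo IPv6 ;;
    *)   echo IPv4 ;;
  esac
}

addr_family 192.168.204.2
addr_family fd00::2
```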
@@ -113,8 +113,8 @@ in the following table.
   * - Minimum Processor Class
     - Dual-CPU Intel® Xeon® E5 26xx Family \(SandyBridge\) 8 cores/socket

       or

       Single-CPU Intel Xeon D-15xx Family, 8 cores \(low-power/low-cost
       option for Simplex deployments\)

@@ -126,13 +126,13 @@ in the following table.
     - Platform:

       * Socket 0: 7GB \(by default, configurable\)

       * Socket 1: 1GB \(by default, configurable\)

     - Application:

       * Socket 0: Remaining memory

       * Socket 1: Remaining memory
   * - Minimum Primary Disk
     - 500 GB - |SSD| or |NVMe|
@@ -99,7 +99,7 @@ storage by setting a flavor extra specification.
   .. caution::
      Unlike Cinder-based storage, Ephemeral storage does not persist if the
      instance is terminated or the compute node fails.

.. _block-storage-for-virtual-machines-d29e17:

In addition, for local Ephemeral storage, migration and resizing support
@@ -220,7 +220,7 @@ only the Interface sections are modified for |prod-os|.
   | Intel Virtualization \(VTD, VTX\) | Enabled |
   +-----------------------------------+---------+

.. [#] For more information, see :ref:`The PXE Boot Network <the-pxe-boot-network>`.

.. _hardware-requirements-section-if-scenarios:

@@ -16,7 +16,7 @@ here.

 +------------------------------------------+----------------------------------------------+
 | Component                                | Approved Hardware                            |
-+==========================================+==============================================+
++------------------------------------------+----------------------------------------------+
 | Hardware Platforms                       | - Hewlett Packard Enterprise                 |
 |                                          |                                              |
 |                                          |                                              |
@@ -122,7 +122,7 @@ here.
 |                                          |                                              |
 |                                          | - Mellanox MT27700 Family \(ConnectX-4\) 40G |
 +------------------------------------------+----------------------------------------------+
-| NICs Verified for Data Interfaces [#f1]_ | The following NICs are supported:            |
+| NICs Verified for Data Interfaces        | The following NICs are supported:            |
 |                                          |                                              |
 |                                          | - Intel I350 \(Powerville\) 1G               |
 |                                          |                                              |
@@ -17,7 +17,7 @@ Resource Placement
   maximum throughput with |SRIOV|.

.. only:: starlingx

   A |VM| such as VNF 6 in NUMA-REF will not have the same performance as VNF
   1 and VNF 2. There are multiple ways to maximize performance for VNF 6 in
   this case:
@@ -41,5 +41,5 @@ OpenStack

.. toctree::
   :maxdepth: 2

   openstack/index
@@ -24,7 +24,7 @@ directly create pods since they have access to the **privileged** |PSP|. Also,
based on the ClusterRoleBindings and RoleBindings automatically added by
|prod|, all users with cluster-admin roles can also create privileged
Deployment/ReplicaSets/etc. in the kube-system namespace and restricted
Deployment/ReplicaSets/etc. in any other namespace.


In order to enable privileged Deployment/ReplicaSets/etc. to be created in
@@ -128,7 +128,7 @@ and uploaded by default.
   .. code-block:: none

      ~(keystone_admin)]$ system helm-override-show oidc-auth-apps dex kube-system

      config:
        staticClients:
        - id: stx-oidc-client-app
@@ -147,7 +147,7 @@ and uploaded by default.
   oidc-client container and the dex container. It is recommended that you
   configure a unique, more secure **client\_secret** by specifying the
   value in the dex overrides file, as shown in the example below.

   .. code-block:: none

      config:
@@ -155,7 +155,7 @@ and uploaded by default.
        - id: stx-oidc-client-app
          name: STX OIDC Client app
          redirectURIs: ['<OAM floating IP address>/callback']
          secret: BetterSecret
          client_secret: BetterSecret
      expiry:
        idTokens: "10h"
@@ -212,7 +212,7 @@ and uploaded by default.
   /home/sysadmin/oidc-client-overrides.yaml file.

   .. code-block:: none

      config:
        client_secret: BetterSecret

@@ -223,7 +223,7 @@ and uploaded by default.
      ~(keystone_admin)]$ system helm-override-update oidc-auth-apps oidc-client kube-system --values /home/sysadmin/oidc-client-overrides.yaml

   .. note::

      If you need to manually override the secrets, the client\_secret in the
      oidc-client overrides must match the staticClients secret and
      client\_secret in the dex overrides, otherwise the oidc-auth |CLI|
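Because the dex and oidc-client secrets must match, it can help to generate both override files from a single value. This is a sketch under assumptions: the file names, redirect URI, and secret-generation method are illustrative, not prescribed by the product.

```shell
# Sketch: generate one random client secret and write it into both the dex
# and the oidc-client override files, so the two values cannot drift apart.
SECRET=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')

cat > dex-overrides.yaml <<EOF
config:
  staticClients:
  - id: stx-oidc-client-app
    name: STX OIDC Client app
    redirectURIs: ['10.10.10.2/callback']
    secret: ${SECRET}
    client_secret: ${SECRET}
EOF

cat > oidc-client-overrides.yaml <<EOF
config:
  client_secret: ${SECRET}
EOF
```

The two files would then be applied with the `system helm-override-update` commands shown above.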
@@ -37,7 +37,7 @@ either of the above two methods.
<security-configure-container-backed-remote-clis-and-clients>`

:ref:`Using Container-backed Remote CLIs and Clients
<using-container-backed-remote-clis-and-clients>`

:ref:`Install Kubectl and Helm Clients Directly on a Host
<security-install-kubectl-and-helm-clients-directly-on-a-host>`
@@ -121,7 +121,7 @@ Deprovision Windows Active Directory
.. toctree::
   :maxdepth: 1

   deprovision-windows-active-directory-authentication

****************
Firewall Options
@@ -240,7 +240,7 @@ UEFI Secure Boot

   overview-of-uefi-secure-boot
   use-uefi-secure-boot

***********************
Trusted Platform Module
***********************
@@ -317,13 +317,13 @@ Security Features
   secure-https-external-connectivity
   security-hardening-firewall-options
   isolate-starlingx-internal-cloud-management-network

***************************************
Appendix: Locally creating certificates
***************************************

.. toctree::
   :maxdepth: 1

   creating-certificates-locally-using-cert-manager-on-the-controller
   creating-certificates-locally-using-openssl
@@ -55,7 +55,7 @@ your Kubernetes Service Account.
   $ kubectl config set-cluster mycluster --server=https://192.168.206.1:6443 --insecure-skip-tls-verify
   $ kubectl config set-credentials joe-admin@mycluster --token=$TOKEN
   $ kubectl config set-context joe-admin@mycluster --cluster=mycluster --user joe-admin@mycluster
   $ kubectl config use-context joe-admin@mycluster

You now have admin access to the |prod| Kubernetes cluster.
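For reference, the four kubectl commands above build up a kubeconfig roughly like the sketch below. The token value is a placeholder; the cluster address and user name are the example values from the commands.

```shell
# Sketch of the kubeconfig that the kubectl config commands above produce.
cat > kubeconfig-sketch.yaml <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: mycluster
  cluster:
    server: https://192.168.206.1:6443
    insecure-skip-tls-verify: true
users:
- name: joe-admin@mycluster
  user:
    token: <service-account-token>
contexts:
- name: joe-admin@mycluster
  context:
    cluster: mycluster
    user: joe-admin@mycluster
current-context: joe-admin@mycluster
EOF

grep 'current-context' kubeconfig-sketch.yaml
```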
@@ -30,7 +30,7 @@ servers connecting to the |prod|'s Kubernetes API endpoint.
The administrator can also provide values to add to the Kubernetes API
server certificate **Subject Alternative Name** list using the
apiserver\_cert\_sans override parameter.


Use the bootstrap override values <k8s\_root\_ca\_cert> and
<k8s\_root\_ca\_key>, as part of the installation procedure, to specify the
@@ -89,4 +89,4 @@ from the console ports of the hosts; no SSH access is allowed.

.. seealso::

   :ref:`Creating LDAP Linux Accounts <create-ldap-linux-accounts>`
@@ -4,7 +4,7 @@ Remote CLI Access

.. toctree::
   :maxdepth: 1

   configure-remote-cli-access
   security-configure-container-backed-remote-clis-and-clients
   security-install-kubectl-and-helm-clients-directly-on-a-host
@ -16,29 +16,29 @@ from a browser.
|
|||||||
* Do one of the following:

  * **For the StarlingX Horizon Web interface**

    Access Horizon in your browser at the address:

    http://<oam-floating-ip-address>:8080

    Use the username **admin** and the sysadmin password to log in.

  * **For the Kubernetes Dashboard**

    Access the Kubernetes Dashboard GUI in your browser at the address:

    http://<oam-floating-ip-address>:<kube-dashboard-port>

    Where <kube-dashboard-port> is the port that the dashboard was installed
    on.

    Log in using the credentials in the kubectl config on your remote
    workstation running the browser; see :ref:`Install Kubectl and Helm
    Clients Directly on a Host
    <security-install-kubectl-and-helm-clients-directly-on-a-host>` as an
    example for setting up kubectl config credentials for an admin user.
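    For illustration, setting up those kubectl config credentials typically
    looks like the following sketch; the cluster name, user name, and token
    shown here are placeholder values, not defaults from this guide:

    .. code-block:: none

       $ kubectl config set-cluster mycluster --server=https://<oam-floating-ip-address>:6443
       $ kubectl config set-credentials admin-user --token=<admin-user-token>
       $ kubectl config set-context mycluster-admin --cluster=mycluster --user=admin-user
       $ kubectl config use-context mycluster-admin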
    .. note::
       The Kubernetes Dashboard is not installed by default. See |prod|
       System Configuration: :ref:`Install the Kubernetes Dashboard
       <install-the-kubernetes-dashboard>` for information on how to install
   .. code-block:: none

      $ source /etc/platform/openrc
      ~(keystone_admin)]$

#. Set environment variables.

#. Log in to Horizon as the user and tenant that you want to configure remote
   access for.

   In this example, the 'admin' user in the 'admin' tenant.

#. Navigate to **Project** \> **API Access** \> **Download Openstack RC file**.

#. Select **Openstack RC file**.

   The file admin-openrc.sh downloads.

   This step will also generate a remote CLI/client RC file.

#. Change to the location of the extracted tarball.

   .. parsed-literal::

      $ cd $HOME/|prefix|-remote-clients-<version>/

   .. code-block:: none

      $ mkdir -p $HOME/remote_cli_wd

#. Run the :command:`configure\_client.sh` script.

   .. code-block:: none

      $ ./configure_client.sh -t platform -r admin-openrc.sh -k
      admin-kubeconfig -w $HOME/remote_cli_wd -p
      625619392498.dkr.ecr.us-west-2.amazonaws.com/docker.io/starlingx/stx-platformclients:stx.4.0-v1.3.0-wrs.1

   If you specify repositories that require authentication, you must first
   perform a :command:`docker login` to that repository before using
Remote platform CLIs can be used in any shell after sourcing the generated
remote CLI/client RC file. This RC file sets up the required environment
variables and aliases for the remote |CLI| commands.

.. note::
   Consider adding this command to your .login or shell rc file, such
   that your shells will automatically be initialized with the environment
   variables and aliases for the remote |CLI| commands.

See :ref:`Using Container-backed Remote CLIs and Clients <using-container-backed-remote-clis-and-clients>` for details.

**Related information**

.. seealso::

.. code-block:: none

   Please enter your OpenStack Password for project admin as user admin:
   root@myclient:/home/user/remote_cli_wd# system host-list
   +----+--------------+-------------+----------------+-------------+--------------+
   | id | hostname     | personality | administrative | operational | availability |
   ceph-pools-audit-1569849000-cb988   0/1   Completed   0   2m25s
   coredns-7cf476b5c8-5x724            1/1   Running     1   3d2h
   ...
   root@myclient:/home/user/remote_cli_wd#

.. note::
   Some |CLI| commands are designed to leave you in a shell prompt, for example:
**Related information**

.. seealso::

.. code-block:: none

   root@myclient:/home/user/remote_cli_wd# source remote_client_openstack.sh
   Please enter your OpenStack Password for project admin as user admin:
   root@myclient:/home/user/remote_cli_wd# openstack endpoint list
   +----------------------------------+-----------+--------------+-----------------+---------+-----------+------------------------------------------------------------+
   | ID                               | Region    | Service Name | Service Type    | Enabled | Interface | URL                                                        |
   +--------------------------------------+-----------+-----------+------+-------------+
   | f2421d88-69e8-4e2f-b8aa-abd7fb4de1c5 | my-volume | available |    8 |             |
   +--------------------------------------+-----------+-----------+------+-------------+
   root@myclient:/home/user/remote_cli_wd#

.. note::
   Some commands used by remote |CLI| are designed to leave you in a shell
   root@myclient:/home/user/remote_cli_wd# openstack image create --public
   --disk-format qcow2 --container-format bare --file ubuntu.qcow2
   ubuntu_image

      $ ./configure_client.sh -t openstack -r admin_openrc.sh -w
      $HOME/remote_cli_wd -p
      625619392498.dkr.ecr.us-west-2.amazonaws.com/docker.io/starlingx/stx-platformclients:stx.4.0-v1.3.0-|prefix|.1

   If you specify repositories that require authentication, as shown
   above, you must remember to perform a :command:`docker login` to that
      $ ./configure_client.sh -t platform -r admin-openrc.sh -k
      admin-kubeconfig -w $HOME/remote_cli_wd -p
      625619392498.dkr.ecr.us-west-2.amazonaws.com/docker.io/starlingx/stx-platformclients:stx.4.0-v1.3.0-|prefix|.1

   If you specify repositories that require authentication, you must
   first perform a :command:`docker login` to that repository before
   using remote |CLIs|.

or the CLI. Projects and users can also be managed using the OpenStack REST
API.

.. seealso::
   :ref:`System Account Password Rules <security-system-account-password-rules>`

   # define A record for general domain for |prod| system
   <my-|prefix|-domain> IN A 10.10.10.10

   # define alias for general domain for horizon dashboard REST API URL
   horizon.<my-|prefix|-domain> IN CNAME <my-|prefix|-domain>.<my-company>.com.

   # define aliases for general domain for keystone identity service REST API URLs
   keystone.<my-|prefix|-domain> IN CNAME <my-|prefix|-domain>.<my-company>.com.
   keystone-api.<my-|prefix|-domain> IN CNAME <my-|prefix|-domain>.<my-company>.com.

   # define alias for general domain for neutron networking REST API URL
   neutron.<my-|prefix|-domain> IN CNAME <my-|prefix|-domain>.<my-company>.com.

   # define aliases for general domain for nova compute provisioning REST API URLs
   nova.<my-|prefix|-domain> IN CNAME <my-|prefix|-domain>.<my-company>.com.
   placement.<my-|prefix|-domain> IN CNAME <my-|prefix|-domain>.<my-company>.com.
   rest-api.<my-|prefix|-domain> IN CNAME <my-|prefix|-domain>.<my-company>.com.

   # define novncproxy alias for VM console access through Horizon REST API URL
   novncproxy.<my-|prefix|-domain> IN CNAME <my-|prefix|-domain>.<my-company>.com.

   # define alias for general domain for barbican secure storage REST API URL
   barbican.<my-|prefix|-domain> IN CNAME <my-|prefix|-domain>.<my-company>.com.

   # define alias for general domain for glance VM management REST API URL
   glance.<my-|prefix|-domain> IN CNAME <my-|prefix|-domain>.<my-company>.com.

   # define aliases for general domain for cinder block storage REST API URLs
   cinder.<my-|prefix|-domain> IN CNAME <my-|prefix|-domain>.<my-company>.com.
   cinder2.<my-|prefix|-domain> IN CNAME <my-|prefix|-domain>.<my-company>.com.
   cinder3.<my-|prefix|-domain> IN CNAME <my-|prefix|-domain>.<my-company>.com.

   # define aliases for general domain for heat orchestration REST API URLs
   heat.<my-|prefix|-domain> IN CNAME <my-|prefix|-domain>.<my-company>.com.
   cloudformation.<my-|prefix|-domain> IN CNAME <my-|prefix|-domain>.<my-company>.com.

   # define aliases for general domain for starlingx REST API URLs
   # ( for fault, patching, service management, system and VIM )
   fm.<my-|prefix|-domain> IN CNAME <my-|prefix|-domain>.<my-company>.com.
   smapi.<my-|prefix|-domain> IN CNAME <my-|prefix|-domain>.<my-company>.com.
   sysinv.<my-|prefix|-domain> IN CNAME <my-|prefix|-domain>.<my-company>.com.
   vim.<my-|prefix|-domain> IN CNAME <my-|prefix|-domain>.<my-company>.com.

.. rubric:: |proc|

#. Source the environment.

   .. code-block:: none

      $ source /etc/platform/openrc
      ~(keystone_admin)$

#. To set a unique domain name, use the :command:`system service-parameter-add`
   command.

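   As an illustration only, the command typically takes the following form;
   the service, section, and parameter names shown here \(``openstack``,
   ``helm``, ``endpoint_domain``\) are assumptions to be verified against
   your release:

   .. code-block:: none

      ~(keystone_admin)$ system service-parameter-add openstack helm endpoint_domain=<my-|prefix|-domain>.<my-company>.com

   A service parameter change of this kind is then typically applied with
   :command:`system service-parameter-apply` for the affected service.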
   | 8b9971df-6d83.. | vanilla          |    1 |    1 |   0 |     1 | True      |
   | e94c8123-2602.. | xlarge.8c.4G.8G  | 4096 |    8 |   0 |     8 | True      |
   +-----------------+------------------+------+------+-----+-------+-----------+

   ~(keystone_admin)$ openstack image list
   +----------------+----------------------------------------+--------+
   | ID             | Name                                   | Status |
   +----------------+----------------------------------------+--------+
   | 15aaf0de-b369..| opensquidbox.amd64.1.06a.iso           | active |
   | eeda4642-db83..| xenial-server-cloudimg-amd64-disk1.img | active |
   +----------------+----------------------------------------+--------+

.. |QoS| replace:: :abbr:`QoS (Quality of Service)`
.. |RAID| replace:: :abbr:`RAID (Redundant Array of Inexpensive Disks)`
.. |RBAC| replace:: :abbr:`RBAC (Role-Based Access Control)`
.. |RBD| replace:: :abbr:`RBD (RADOS Block Device)`
.. |RPC| replace:: :abbr:`RPC (Remote Procedure Call)`
.. |SAN| replace:: :abbr:`SAN (Subject Alternative Name)`
.. |SANs| replace:: :abbr:`SANs (Subject Alternative Names)`
   :maxdepth: 2

   kubernetes/index

---------
OpenStack
---------

for containers to persist files beyond the lifetime of the container, a
Persistent Volume Claim can be created to obtain a persistent volume which the
container can mount and read/write files.

Management and customization tasks for Kubernetes |PVCs| can be accomplished
by using StorageClasses set up by two Helm charts: **rbd-provisioner** and
**cephfs-provisioner**. The **rbd-provisioner** and **cephfs-provisioner**
Helm charts are included in the **platform-integ-apps** system application,
which is automatically loaded and applied as part of the |prod| installation.

|PVCs| are supported with the following options:

- with an accessMode of ReadWriteOnce, backed by Ceph |RBD|

  - Only one container can attach to these |PVCs|.

  - Management and customization tasks related to these |PVCs| are done
    through the **rbd-provisioner** Helm chart provided by
    **platform-integ-apps**.

- with an accessMode of ReadWriteMany, backed by CephFS

  - Multiple containers can attach to these |PVCs|.

  - Management and customization tasks related to these |PVCs| are done
    through the **cephfs-provisioner** Helm chart provided by
    **platform-integ-apps**.
After **platform-integ-apps** is applied, the following system configurations
are created:

- **Ceph Pools**

  .. code-block:: none

     ~(keystone_admin)]$ ceph osd lspools
     kube-rbd
     kube-cephfs-data
     kube-cephfs-metadata

- **CephFS**

  .. code-block:: none

     ~(keystone_admin)]$ ceph fs ls
     name: kube-cephfs, metadata pool: kube-cephfs-metadata, data pools: [kube-cephfs-data ]

- **Kubernetes StorageClasses**

  .. code-block:: none

     ~(keystone_admin)]$ kubectl get sc
     NAME              PROVISIONER      RECLAIMPOLICY  VOLUMEBINDINGMODE  ALLOWVOLUMEEXPANSION
     cephfs            ceph.com/cephfs  Delete         Immediate          false
     general (default) ceph.com/rbd     Delete         Immediate          false
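As a minimal illustration of the ReadWriteMany option described above, a |PVC|
can request the **cephfs** StorageClass; the claim name and size below are
placeholder values:

.. code-block:: yaml

   kind: PersistentVolumeClaim
   apiVersion: v1
   metadata:
     name: rwx-claim-example
   spec:
     storageClassName: cephfs
     accessModes:
       - ReadWriteMany
     resources:
       requests:
         storage: 1Gi

Because the accessMode is ReadWriteMany, two or more pods can mount and
read/write this claim at the same time.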
for performance reasons, you must either use a non-root disk for
**nova-local** storage, or ensure that the host is not used for VMs
with ephemeral local storage.

For example, to add a volume with the UUID
67b368ab-626a-4168-9b2a-d1d239d4f3b0 to compute-1, use the following command.

.. code-block:: none

.. rubric:: |prereq|

|prod-long| must be installed and fully deployed before performing this
procedure.

.. xbooklink See the :ref:`Installation Overview <installation-overview>`
   for more information.

appropriate storage-class name you set up in step :ref:`2
<configure-an-external-netapp-deployment-as-the-storage-backend>`
\(**netapp-nas-backend** in this example\) to the persistent volume
claim's yaml configuration file. For more information about this file, see
|usertasks-doc|: :ref:`Create ReadWriteOnce Persistent Volume Claims
<kubernetes-user-tutorials-create-readwriteonce-persistent-volume-claims>`.

.. seealso::

.. clb1615317605723

.. _configure-ceph-file-system-for-internal-ceph-storage-backend:

============================================================
Configure Ceph File System for Internal Ceph Storage Backend
============================================================

CephFS \(Ceph File System\) is a highly available, multi-use, performant file
store for a variety of applications, built on top of Ceph's Distributed Object
Store \(RADOS\).

.. rubric:: |context|

CephFS provides the following functionality:

.. _configure-ceph-file-system-for-internal-ceph-storage-backend-ul-h2b-h1k-x4b:

- Enabled by default \(along with the existing Ceph RBD\)

- Highly available, multi-use, performant file storage

- Scalability using a separate RADOS pool for the file's metadata

- Metadata served by Metadata Servers \(MDS\) that provide high availability
  and scalability

- Deployed in HA configurations for all |prod| deployment options

- Integrates **cephfs-provisioner** supporting Kubernetes **StorageClass**

- Enables configuration of:

  - **PersistentVolumeClaim** \(|PVC|\) using **StorageClass** and
    ReadWriteMany accessMode

  - Two or more application pods mounting the |PVC| and reading/writing data
    to it
CephFS is configured automatically when a Ceph backend is enabled and provides
a Kubernetes **StorageClass**. Once enabled, every node in the cluster that
serves as a Ceph monitor will also be configured as a CephFS Metadata Server
\(MDS\). Creation of the CephFS pools, filesystem initialization, and creation
of Kubernetes resources is done by the **platform-integ-apps** application,
using the **cephfs-provisioner** Helm chart.

When applied, **platform-integ-apps** creates two Ceph pools for each storage
backend configured, one for CephFS data and a second pool for metadata:

.. _configure-ceph-file-system-for-internal-ceph-storage-backend-ul-jp2-yn2-x4b:

- **CephFS data pool**: The pool name for the default storage backend is
  **kube-cephfs-data**.

- **Metadata pool**: The pool name is **kube-cephfs-metadata**.

When a new storage backend is created, a new CephFS data pool will be created
with the name **kube-cephfs-data-**\<storage\_backend\_name\>, and the
metadata pool will be created with the name
**kube-cephfs-metadata-**\<storage\_backend\_name\>. The default filesystem
name is **kube-cephfs**. When a new storage backend is created, a new
filesystem will be created with the name
**kube-cephfs-**\<storage\_backend\_name\>.
For example, if the user adds a storage backend named 'test',
**cephfs-provisioner** will create the following pools:

.. _configure-ceph-file-system-for-internal-ceph-storage-backend-ul-i3w-h1f-x4b:

- kube-cephfs-data-test

- kube-cephfs-metadata-test

Also, the **platform-integ-apps** application will create a filesystem named
**kube-cephfs-test**.

If you list all the pools in a cluster with a 'test' storage backend, you
should see the four CephFS pools created by **cephfs-provisioner** using
**platform-integ-apps**. Use the following command to list the CephFS |OSD|
pools created.

.. code-block:: none

   $ ceph osd lspools

.. _configure-ceph-file-system-for-internal-ceph-storage-backend-ul-nnv-lr2-x4b:

- kube-rbd

- kube-rbd-test

- **kube-cephfs-data**

- **kube-cephfs-data-test**

- **kube-cephfs-metadata**

- **kube-cephfs-metadata-test**

Use the following command to list the Ceph file systems:

.. code-block:: none

   $ ceph fs ls
   name: kube-cephfs, metadata pool: kube-cephfs-metadata, data pools: [kube-cephfs-data ]
   name: kube-cephfs-silver, metadata pool: kube-cephfs-metadata-silver, data pools: [kube-cephfs-data-silver ]
:command:`cephfs-provisioner` creates, in a Kubernetes cluster, a
**StorageClass** for each storage backend present.

These **StorageClass** resources should be used to create
**PersistentVolumeClaim** resources in order to allow pods to use CephFS. The
default **StorageClass** resource is named **cephfs**, and additional
resources are created with the name \<storage\_backend\_name\>**-cephfs** for
each additional storage backend created.

For example, when listing **StorageClass** resources in a cluster that is
configured with a storage backend named 'test', the following storage classes
are created:

.. code-block:: none

   $ kubectl get sc
   NAME              PROVISIONER      RECLAIM.. VOLUME..  ALLOWVOLUME.. AGE
   cephfs            ceph.com/cephfs  Delete    Immediate false         65m
   general (default) ceph.com/rbd     Delete    Immediate false         66m
   test-cephfs       ceph.com/cephfs  Delete    Immediate false         65m
   test-general      ceph.com/rbd     Delete    Immediate false         66m

All Kubernetes resources \(pods, StorageClasses, PersistentVolumeClaims,
configmaps, etc.\) used by the provisioner are created in the **kube-system**
namespace.

.. note::
   Multiple Ceph file systems are not enabled by default in the cluster. You
   can enable them manually, for example, using the command :command:`ceph fs
   flag set enable_multiple true --yes-i-really-mean-it`.

.. _configure-ceph-file-system-for-internal-ceph-storage-backend-section-dq5-wgk-x4b:
-------------------------------
|
|
||||||
Persistent Volume Claim \(PVC\)
|
|
||||||
-------------------------------
|
|
||||||
|
|
||||||
.. rubric:: |context|

If you need to create a Persistent Volume Claim, you can create it using
**kubectl**. For example:

.. _configure-ceph-file-system-for-internal-ceph-storage-backend-ol-lrh-pdf-x4b:

#. Create a file named **my\_pvc.yaml**, and add the following content:

   .. code-block:: none

      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: claim1
        namespace: kube-system
      spec:
        storageClassName: cephfs
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 1Gi

#. To apply the updates, use the following command:

   .. code-block:: none

      $ kubectl apply -f my_pvc.yaml

#. After the |PVC| is created, use the following command to see the |PVC|
   bound to the existing **StorageClass**.

   .. code-block:: none

      $ kubectl get pvc -n kube-system

      NAME     STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
      claim1   Bound    pvc..    1Gi        RWX            cephfs

#. A persistent volume is automatically provisioned by the **StorageClass**
   and bound to the |PVC|. Use the following command to list the persistent
   volumes.

   .. code-block:: none

      $ kubectl get pv -n kube-system

      NAME      CAPACITY   ACCESS..   RECLAIM..   STATUS   CLAIM                STORAGE..   REASON   AGE
      pvc-5..   1Gi        RWX        Delete      Bound    kube-system/claim1   cephfs               26s

#. Create Pods to use the |PVC|. Create a file **my\_pod.yaml** with the
   following content:

   .. code-block:: none

      kind: Pod
      apiVersion: v1
      metadata:
        name: test-pod
        namespace: kube-system
      spec:
        containers:
        - name: test-pod
          image: gcr.io/google_containers/busybox:1.24
          command:
            - "/bin/sh"
          args:
            - "-c"
            - "touch /mnt/SUCCESS && exit 0 || exit 1"
          volumeMounts:
            - name: pvc
              mountPath: "/mnt"
        restartPolicy: "Never"
        volumes:
          - name: pvc
            persistentVolumeClaim:
              claimName: claim1

#. Apply the **my\_pod.yaml** file, using the following command.

   .. code-block:: none

      $ kubectl apply -f my_pod.yaml
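
Because **claim1** requests ReadWriteMany access, a second pod can mount the
same volume concurrently and read the file written by **test-pod**. The
following manifest is an illustrative sketch only; the pod name **test-pod-2**
is not part of the original example:

.. code-block:: none

   kind: Pod
   apiVersion: v1
   metadata:
     name: test-pod-2          # illustrative name
     namespace: kube-system
   spec:
     containers:
     - name: test-pod-2
       image: gcr.io/google_containers/busybox:1.24
       command:
         - "/bin/sh"
       args:
         - "-c"
         # check that the file written by test-pod is visible from this pod
         - "test -f /mnt/SUCCESS && exit 0 || exit 1"
       volumeMounts:
         - name: pvc
           mountPath: "/mnt"
     restartPolicy: "Never"
     volumes:
       - name: pvc
         persistentVolumeClaim:
           claimName: claim1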

For more information on Persistent Volume Support, see :ref:`About Persistent
Volume Support <about-persistent-volume-support>` and |usertasks-doc|:
:ref:`Creating Persistent Volume Claims
<kubernetes-user-tutorials-creating-persistent-volume-claims>`.

Use the ``docker_registries`` parameter to pull from the local registry rather
than public ones.

You must first push the files to the local registry.

   third Ceph monitor instance is configured by default on the first
   storage node.

   .. note::
       CephFS support requires Metadata servers \(MDS\) to be deployed. When
       CephFS is configured, an MDS is deployed automatically along with each
       node that has been configured to run a Ceph Monitor.

#. Configure Ceph OSDs. For more information, see :ref:`Provision
   Storage on a Controller or Storage Host Using Horizon
   <provision-storage-on-a-controller-or-storage-host-using-horizon>`.

.. iqu1616951298602

.. _create-readwritemany-persistent-volume-claims:

=============================================
Create ReadWriteMany Persistent Volume Claims
=============================================

Container images have an ephemeral file system by default. For data to survive
beyond the lifetime of a container, the container can read and write files to
a persistent volume obtained with a Persistent Volume Claim \(PVC\) created to
provide persistent storage.

.. rubric:: |context|

For multiple containers to mount the same |PVC|, create a |PVC| with an
accessMode of ReadWriteMany \(RWX\).

The following steps show an example of creating a 1GB |PVC| with ReadWriteMany
accessMode.

.. rubric:: |proc|

.. _iqu1616951298602-steps-bdr-qnm-tkb:

#. Create the **rwx-test-claim** Persistent Volume Claim.

   #. Create a yaml file defining the claim and its attributes.

      For example:

      .. code-block:: none

         ~(keystone_admin)]$ cat <<EOF > rwx-claim.yaml
         kind: PersistentVolumeClaim
         apiVersion: v1
         metadata:
           name: rwx-test-claim
         spec:
           accessModes:
           - ReadWriteMany
           resources:
             requests:
               storage: 1Gi
           storageClassName: cephfs
         EOF

   #. Apply the settings created above.

      .. code-block:: none

         ~(keystone_admin)]$ kubectl apply -f rwx-claim.yaml
         persistentvolumeclaim/rwx-test-claim created

This results in a 1GB |PVC| being created. You can view the |PVC| using the
following command.

.. code-block:: none

   ~(keystone_admin)]$ kubectl get persistentvolumeclaims

   NAME             STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS
   rwx-test-claim   Bound    pvc-df9f..   1Gi        RWX            cephfs

.. code-block:: none

   ~(keystone_admin)]$ kubectl get persistentvolume

   NAME         CAPACITY   ACCESS..   RECLAIM..   STATUS   CLAIM                    STORAGECLASS
   pvc-df9f..   1Gi        RWX        Delete      Bound    default/rwx-test-claim   cephfs
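
Because the claim uses ReadWriteMany, more than one pod can mount it at the
same time. The following two-replica Deployment is a minimal sketch of that
use case; the Deployment name, labels, and mount path are illustrative only:

.. code-block:: none

   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: rwx-test-deployment   # illustrative name
   spec:
     replicas: 2                 # both replicas mount the same RWX volume
     selector:
       matchLabels:
         app: rwx-test
     template:
       metadata:
         labels:
           app: rwx-test
       spec:
         containers:
         - name: busybox
           image: busybox:1.24
           command: ["/bin/sh", "-c", "sleep 3600"]
           volumeMounts:
           - name: shared
             mountPath: /data
         volumes:
         - name: shared
           persistentVolumeClaim:
             claimName: rwx-test-claim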

.. mgt1616518429546

.. _default-behavior-of-the-cephfs-provisioner:

==========================================
Default Behavior of the CephFS Provisioner
==========================================

The default Ceph Cluster configuration set up during |prod| installation
contains a single storage tier, storage, containing all the OSDs.

The default CephFS provisioner service runs within the kube-system namespace
and has a single storage class, '**cephfs**', which is configured to:

.. _mgt1616518429546-ul-g3n-qdb-bpb:

- use the default 'storage' Ceph storage tier

- use a **kube-cephfs-data** and **kube-cephfs-metadata** Ceph pool, and

- only support |PVC| requests from the following namespaces: kube-system,
  default and kube-public.

The full details of the **cephfs-provisioner** configuration can be viewed
using the following commands:

.. code-block:: none

   ~(keystone_admin)]$ system helm-override-list platform-integ-apps

This command provides the chart names and the overrides namespaces.

.. code-block:: none

   ~(keystone_admin)]$ system helm-override-show platform-integ-apps cephfs-provisioner kube-system

See :ref:`Create ReadWriteMany Persistent Volume Claims
<create-readwritemany-persistent-volume-claims>` and :ref:`Mount ReadWriteMany
Persistent Volumes in Containers
<mount-readwritemany-persistent-volumes-in-containers>` for an example of how
to create and mount a ReadWriteMany |PVC| from the **cephfs** storage class.
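
For example, a claim against the default **cephfs** storage class can be made
from any of the supported namespaces listed above. The claim name in the
following minimal sketch is illustrative only:

.. code-block:: none

   kind: PersistentVolumeClaim
   apiVersion: v1
   metadata:
     name: my-cephfs-claim       # illustrative name
     namespace: default          # must be one of the supported namespaces
   spec:
     accessModes:
     - ReadWriteMany
     resources:
       requests:
         storage: 1Gi
     storageClassName: cephfs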

The default Ceph Cluster configuration set up during |prod| installation
contains a single storage tier, storage, containing all the |OSDs|.

The default |RBD| provisioner service runs within the kube-system namespace and
has a single storage class, 'general', which is configured to:

.. _default-behavior-of-the-rbd-provisioner-ul-zg2-r2q-43b:

- use a **kube-rbd** ceph pool, and

- only support PVC requests from the following namespaces: kube-system,
  default and kube-public.

The full details of the rbd-provisioner configuration can be viewed with:

.. code-block:: none

   ~(keystone_admin)$ system helm-override-show platform-integ-apps rbd-provisioner kube-system

See :ref:`Creating ReadWriteOnce Persistent Volume Claims
<storage-configuration-create-readwriteonce-persistent-volume-claims>` and
:ref:`Mounting ReadWriteOnce Persistent Volumes in Containers
<storage-configuration-mount-readwriteonce-persistent-volumes-in-containers>`
for an example of how to create and mount a ReadWriteOnce |PVC| from the
'general' storage class.

.. csl1561030322454

.. _enable-rbd-readwriteonly-additional-storage-classes:

===================================================
Enable RBD ReadWriteOnly Additional Storage Classes
===================================================

Additional storage classes can be added to the default |RBD| provisioner
service.

.. rubric:: |context|

Some reasons for adding an additional storage class include:

.. _enable-rbd-readwriteonly-additional-storage-classes-ul-nz1-r3q-43b:

- managing Ceph resources for particular namespaces in a separate Ceph
  pool, simply for Ceph partitioning reasons

- using an alternate Ceph Storage Tier, for example, with faster drives

A modification to the configuration \(Helm overrides\) of the |RBD|
provisioner service is required to enable an additional storage class.

The following example illustrates adding a second storage class to be
utilized by a specific namespace.

.. rubric:: |proc|

#. List installed Helm chart overrides for the platform-integ-apps.

   .. code-block:: none

      ~(keystone_admin)$ system helm-override-list platform-integ-apps
      +--------------------+----------------------+
      | chart name         | overrides namespaces |
      +--------------------+----------------------+
      | ceph-pools-audit   | [u'kube-system']     |
      | cephfs-provisioner | [u'kube-system']     |
      | helm-toolkit       | []                   |
      | rbd-provisioner    | [u'kube-system']     |
      +--------------------+----------------------+

#. Review existing overrides for the rbd-provisioner chart. You will refer
   to this information in the following step.

   .. code-block:: none

      ~(keystone_admin)$ system helm-override-update --values /home/sysadmin/update-namespaces.yaml \
      platform-integ-apps rbd-provisioner

      +----------------+-----------------------------------------+
      | Property       | Value                                   |
      +----------------+-----------------------------------------+

   .. code-block:: none

      ~(keystone_admin)$ system helm-override-show platform-integ-apps rbd-provisioner kube-system

      +--------------------+-----------------------------------------+
      | Property           | Value                                   |
      +--------------------+-----------------------------------------+

#. Apply the overrides.

   #. Run the :command:`application-apply` command.

      .. code-block:: none

         ~(keystone_admin)$ system application-apply platform-integ-apps

         +---------------+----------------------------------+
         | Property      | Value                            |
         +---------------+----------------------------------+

   #. Monitor progress using the :command:`application-list` command.

      .. code-block:: none

         ~(keystone_admin)$ system application-list

         +-------------+---------+---------------+---------------+---------+-----------+
         | application | version | manifest name | manifest file | status  | progress  |
         +-------------+---------+---------------+---------------+---------+-----------+
         |             |         | manifest      |               |         |           |
         +-------------+---------+---------------+---------------+---------+-----------+

   You can now create and mount persistent volumes from the new |RBD|
   provisioner's **special** storage class from within the **new-sc-app**
   application-specific namespace.
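
For example, a claim against the new storage class from the enabled namespace
might look like the following sketch. The claim name is illustrative only;
**special** and **new-sc-app** are the storage class and namespace names used
in this example:

.. code-block:: none

   kind: PersistentVolumeClaim
   apiVersion: v1
   metadata:
     name: special-claim         # illustrative name
     namespace: new-sc-app
   spec:
     accessModes:
     - ReadWriteOnce
     resources:
       requests:
         storage: 1Gi
     storageClassName: special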

.. wyf1616954377690

.. _enable-readwritemany-pvc-support-in-additional-namespaces:

=========================================================
Enable ReadWriteMany PVC Support in Additional Namespaces
=========================================================

The default general **cephfs-provisioner** storage class is enabled for the
default, kube-system, and kube-public namespaces. To enable an additional
namespace, for example for an application-specific namespace, a modification
to the configuration \(Helm overrides\) of the **cephfs-provisioner** service
is required.

.. rubric:: |context|

The following example illustrates the configuration of three additional
application-specific namespaces to access the **cephfs-provisioner**
**cephfs** storage class.

.. note::
    Due to limitations with templating and merging of overrides, the entire
    storage class must be redefined in the override when updating specific
    values.

.. rubric:: |proc|

#. List installed Helm chart overrides for the platform-integ-apps.

   .. code-block:: none

      ~(keystone_admin)]$ system helm-override-list platform-integ-apps
      +--------------------+----------------------+
      | chart name         | overrides namespaces |
      +--------------------+----------------------+
      | ceph-pools-audit   | [u'kube-system']     |
      | cephfs-provisioner | [u'kube-system']     |
      | helm-toolkit       | []                   |
      | rbd-provisioner    | [u'kube-system']     |
      +--------------------+----------------------+

#. Review existing overrides for the cephfs-provisioner chart. You will refer
   to this information in the following step.

   .. code-block:: none

      ~(keystone_admin)]$ system helm-override-show platform-integ-apps cephfs-provisioner kube-system

      +--------------------+----------------------------------------------------------+
      | Property           | Value                                                    |
      +--------------------+----------------------------------------------------------+
      | attributes         | enabled: true                                            |
      |                    |                                                          |
      | combined_overrides | classdefaults:                                           |
      |                    |   adminId: admin                                         |
      |                    |   adminSecretName: ceph-secret-admin                     |
      |                    |   monitors:                                              |
      |                    |   - 192.168.204.3:6789                                   |
      |                    |   - 192.168.204.1:6789                                   |
      |                    |   - 192.168.204.2:6789                                   |
      |                    | classes:                                                 |
      |                    | - additionalNamespaces:                                  |
      |                    |   - default                                              |
      |                    |   - kube-public                                          |
      |                    |   chunk_size: 64                                         |
      |                    |   claim_root: /pvc-volumes                               |
      |                    |   crush_rule_name: storage_tier_ruleset                  |
      |                    |   data_pool_name: kube-cephfs-data                       |
      |                    |   fs_name: kube-cephfs                                   |
      |                    |   metadata_pool_name: kube-cephfs-metadata               |
      |                    |   name: cephfs                                           |
      |                    |   replication: 2                                         |
      |                    |   userId: ceph-pool-kube-cephfs-data                     |
      |                    |   userSecretName: ceph-pool-kube-cephfs-data             |
      |                    | global:                                                  |
      |                    |   replicas: 2                                            |
      |                    |                                                          |
      | name               | cephfs-provisioner                                       |
      | namespace          | kube-system                                              |
      | system_overrides   | classdefaults:                                           |
      |                    |   adminId: admin                                         |
      |                    |   adminSecretName: ceph-secret-admin                     |
      |                    |   monitors: ['192.168.204.3:6789', '192.168.204.1:6789', |
      |                    |   '192.168.204.2:6789']                                  |
      |                    | classes:                                                 |
      |                    | - additionalNamespaces: [default, kube-public]           |
      |                    |   chunk_size: 64                                         |
      |                    |   claim_root: /pvc-volumes                               |
      |                    |   crush_rule_name: storage_tier_ruleset                  |
      |                    |   data_pool_name: kube-cephfs-data                       |
      |                    |   fs_name: kube-cephfs                                   |
      |                    |   metadata_pool_name: kube-cephfs-metadata               |
      |                    |   name: cephfs                                           |
      |                    |   replication: 2                                         |
      |                    |   userId: ceph-pool-kube-cephfs-data                     |
      |                    |   userSecretName: ceph-pool-kube-cephfs-data             |
      |                    | global: {replicas: 2}                                    |
      |                    |                                                          |
      | user_overrides     | None                                                     |
      +--------------------+----------------------------------------------------------+

#. Create an overrides yaml file defining the new namespaces.

   In this example, create the file /home/sysadmin/update-namespaces.yaml with
   the following content:

   .. code-block:: none

      ~(keystone_admin)]$ cat <<EOF > ~/update-namespaces.yaml
      classes:
      - additionalNamespaces: [default, kube-public, new-app, new-app2, new-app3]
        chunk_size: 64
        claim_root: /pvc-volumes
        crush_rule_name: storage_tier_ruleset
        data_pool_name: kube-cephfs-data
        fs_name: kube-cephfs
        metadata_pool_name: kube-cephfs-metadata
        name: cephfs
        replication: 2
        userId: ceph-pool-kube-cephfs-data
        userSecretName: ceph-pool-kube-cephfs-data
      EOF

#. Apply the overrides file to the chart.

   .. code-block:: none

      ~(keystone_admin)]$ system helm-override-update --values /home/sysadmin/update-namespaces.yaml platform-integ-apps cephfs-provisioner kube-system
      +----------------+----------------------------------------------+
      | Property       | Value                                        |
      +----------------+----------------------------------------------+
      | name           | cephfs-provisioner                           |
      | namespace      | kube-system                                  |
      | user_overrides | classes:                                     |
      |                | - additionalNamespaces:                      |
      |                |   - default                                  |
      |                |   - kube-public                              |
      |                |   - new-app                                  |
      |                |   - new-app2                                 |
      |                |   - new-app3                                 |
      |                |   chunk_size: 64                             |
      |                |   claim_root: /pvc-volumes                   |
      |                |   crush_rule_name: storage_tier_ruleset      |
      |                |   data_pool_name: kube-cephfs-data           |
      |                |   fs_name: kube-cephfs                       |
      |                |   metadata_pool_name: kube-cephfs-metadata   |
      |                |   name: cephfs                               |
      |                |   replication: 2                             |
      |                |   userId: ceph-pool-kube-cephfs-data         |
      |                |   userSecretName: ceph-pool-kube-cephfs-data |
      +----------------+----------------------------------------------+

#. Confirm that the new overrides have been applied to the chart.

   The following output has been edited for brevity.

   .. code-block:: none

      ~(keystone_admin)]$ system helm-override-show platform-integ-apps cephfs-provisioner kube-system
      +--------------------+----------------------------------------------+
      | Property           | Value                                        |
      +--------------------+----------------------------------------------+
      | user_overrides     | classes:                                     |
      |                    | - additionalNamespaces:                      |
      |                    |   - default                                  |
      |                    |   - kube-public                              |
      |                    |   - new-app                                  |
      |                    |   - new-app2                                 |
      |                    |   - new-app3                                 |
      |                    |   chunk_size: 64                             |
      |                    |   claim_root: /pvc-volumes                   |
      |                    |   crush_rule_name: storage_tier_ruleset      |
      |                    |   data_pool_name: kube-cephfs-data           |
      |                    |   fs_name: kube-cephfs                       |
      |                    |   metadata_pool_name: kube-cephfs-metadata   |
      |                    |   name: cephfs                               |
      |                    |   replication: 2                             |
      |                    |   userId: ceph-pool-kube-cephfs-data         |
      |                    |   userSecretName: ceph-pool-kube-cephfs-data |
      +--------------------+----------------------------------------------+

#. Apply the overrides.

   #. Run the :command:`application-apply` command.

      .. code-block:: none

         ~(keystone_admin)]$ system application-apply platform-integ-apps
         +---------------+----------------------------------+
         | Property      | Value                            |
         +---------------+----------------------------------+
         | active        | True                             |
         | app_version   | 1.0-24                           |
         | created_at    | 2019-05-26T06:22:20.711732+00:00 |
         | manifest_file | manifest.yaml                    |
         | manifest_name | platform-integration-manifest    |
         | name          | platform-integ-apps              |
         | progress      | None                             |
         | status        | applying                         |
         | updated_at    | 2019-05-26T22:27:26.547181+00:00 |
         +---------------+----------------------------------+

   #. Monitor progress using the :command:`application-list` command.

      .. code-block:: none

         ~(keystone_admin)]$ system application-list
         +-------------+---------+---------------+---------------+---------+-----------+
         | application | version | manifest name | manifest file | status  | progress  |
         +-------------+---------+---------------+---------------+---------+-----------+
         | platform-   | 1.0-24  | platform      | manifest.yaml | applied | completed |
         | integ-apps  |         | -integration  |               |         |           |
         |             |         | -manifest     |               |         |           |
         +-------------+---------+---------------+---------------+---------+-----------+

   You can now create and mount PVCs from the **cephfs-provisioner**'s
   **cephfs** storage class, from within these application-specific
   namespaces.
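
To verify the change, a |PVC| can now be created in one of the newly enabled
namespaces. The claim name in the following minimal sketch is illustrative;
**new-app** is one of the namespaces enabled in the example above:

.. code-block:: none

   kind: PersistentVolumeClaim
   apiVersion: v1
   metadata:
     name: new-app-claim         # illustrative name
     namespace: new-app          # enabled by the override in the example above
   spec:
     accessModes:
     - ReadWriteMany
     resources:
       requests:
         storage: 1Gi
     storageClassName: cephfs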
@ -1,22 +1,21 @@
|
|||||||
|
|
||||||
.. vqw1561030204071
|
.. vqw1561030204071
|
||||||
.. _enable-pvc-support-in-additional-namespaces:
|
.. _enable-readwriteonce-pvc-support-in-additional-namespaces:
|
||||||
|
|
||||||
===========================================
|
=========================================================
|
||||||
Enable PVC Support in Additional Namespaces
|
Enable ReadWriteOnce PVC Support in Additional Namespaces
|
||||||
===========================================
|
=========================================================
|
||||||
|
|
||||||
The default general **rbd-provisioner** storage class is enabled for the
|
The default general **rbd-provisioner** storage class is enabled for the
|
||||||
default, kube-system, and kube-public namespaces. To enable an additional
|
default, kube-system, and kube-public namespaces. To enable an additional
|
||||||
namespace, for example for an application-specific namespace, a
|
namespace, for example for an application-specific namespace, a
|
||||||
modification to the configuration \(helm overrides\) of the
|
modification to the configuration \(helm overrides\) of the
|
||||||
**rbd-provisioner** service is required.
|
|RBD| provisioner service is required.
|
||||||
|
|
||||||
.. rubric:: |context|
|
.. rubric:: |context|
|
||||||
|
|
||||||
The following example illustrates the configuration of three additional
|
The following example illustrates the configuration of three additional
|
||||||
application-specific namespaces to access the rbd-provisioner's **general**
|
application-specific namespaces to access the |RBD| provisioner's **general storage class**.
|
||||||
storage class.
|
|
||||||
|
|
||||||
.. note::
|
.. note::
|
||||||
Due to limitations with templating and merging of overrides, the entire
|
Due to limitations with templating and merging of overrides, the entire
|
||||||
@ -30,13 +29,14 @@ storage class.
|
|||||||
.. code-block:: none
|
.. code-block:: none
|
||||||
|
|
||||||
~(keystone_admin)$ system helm-override-list platform-integ-apps
|
~(keystone_admin)$ system helm-override-list platform-integ-apps
|
||||||
+------------------+----------------------+
|
+--------------------+----------------------+
|
||||||
| chart name | overrides namespaces |
|
| chart name | overrides namespaces |
|
||||||
+------------------+----------------------+
|
+--------------------+----------------------+
|
||||||
| ceph-pools-audit | [u'kube-system'] |
|
| ceph-pools-audit | [u'kube-system'] |
|
||||||
| helm-toolkit | [] |
|
| cephfs-provisioner | [u'kube-system'] |
|
||||||
| rbd-provisioner | [u'kube-system'] |
|
| helm-toolkit | [] |
|
||||||
+------------------+----------------------+
|
| rbd-provisioner | [u'kube-system'] |
|
||||||
|
+--------------------+----------------------+
|
||||||
|
|
||||||
#. Review existing overrides for the rbd-provisioner chart. You will refer
|
#. Review existing overrides for the rbd-provisioner chart. You will refer
|
||||||
to this information in the following step.
|
to this information in the following step.
|
||||||
@@ -94,29 +94,28 @@ storage class.

      +--------------------+--------------------------------------------------+

#. Create an overrides yaml file defining the new namespaces. In this
   example we will create the file /home/sysadmin/update-namespaces.yaml
   with the following content:

   .. code-block:: none

      ~(keystone_admin)]$ cat <<EOF > ~/update-namespaces.yaml
      classes:
      - additionalNamespaces: [default, kube-public, new-app, new-app2, new-app3]
        chunk_size: 64
        crush_rule_name: storage_tier_ruleset
        name: general
        pool_name: kube-rbd
        replication: 2
        userId: ceph-pool-kube-rbd
        userSecretName: ceph-pool-kube-rbd
      EOF
#. Apply the overrides file to the chart.

   .. code-block:: none

      ~(keystone_admin)$ system helm-override-update --values /home/sysadmin/update-namespaces.yaml platform-integ-apps rbd-provisioner kube-system
      +----------------+-----------------------------------------+
      | Property       | Value                                   |
      +----------------+-----------------------------------------+
@@ -133,7 +132,7 @@ storage class.

      |                | crush_rule_name: storage_tier_ruleset   |
      |                | name: general                           |
      |                | pool_name: kube-rbd                     |
      |                | replication: 2                          |
      |                | userId: ceph-pool-kube-rbd              |
      |                | userSecretName: ceph-pool-kube-rbd      |
      +----------------+-----------------------------------------+
@@ -166,14 +165,13 @@ storage class.

      |                    | crush_rule_name: storage_tier_ruleset  |
      |                    | name: general                          |
      |                    | pool_name: kube-rbd                    |
      |                    | replication: 2                         |
      |                    | userId: ceph-pool-kube-rbd             |
      |                    | userSecretName: ceph-pool-kube-rbd     |
      +--------------------+----------------------------------------+
#. Apply the overrides.


#. Run the :command:`application-apply` command.

   .. code-block:: none
@@ -183,7 +181,7 @@ storage class.

      | Property      | Value                            |
      +---------------+----------------------------------+
      | active        | True                             |
      | app_version   | 1.0-24                           |
      | created_at    | 2019-05-26T06:22:20.711732+00:00 |
      | manifest_file | manifest.yaml                    |
      | manifest_name | platform-integration-manifest    |
@@ -201,18 +199,12 @@ storage class.

      +-------------+---------+---------------+---------------+---------+-----------+
      | application | version | manifest name | manifest file | status  | progress  |
      +-------------+---------+---------------+---------------+---------+-----------+
      | platform-   | 1.0-24  | platform      | manifest.yaml | applied | completed |
      | integ-apps  |         | -integration  |               |         |           |
      |             |         | -manifest     |               |         |           |
      +-------------+---------+---------------+---------------+---------+-----------+
   You can now create and mount PVCs from the default |RBD| provisioner's
   **general storage class**, from within these application-specific namespaces.

#. Apply the secret to the new **rbd-provisioner** namespace.

   .. code-block:: none

      ~(keystone_admin)$ kubectl get secret ceph-pool-kube-rbd -n default -o yaml | grep -v '^\s*namespace:\s' | kubectl apply -n <namespace> -f -
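To confirm that the secret was copied successfully, you can list it in the target namespace (substitute the same namespace you used above):

.. code-block:: none

   ~(keystone_admin)$ kubectl get secret ceph-pool-kube-rbd -n <namespace>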
@@ -86,7 +86,7 @@ The default **rootfs** device is **/dev/sda**.

#. Assign the unused partition on **controller-0** as a physical volume to
   **cgts-vg** volume group.

   For example:

   .. code-block:: none

@@ -116,7 +116,7 @@ The default **rootfs** device is **/dev/sda**.

#. To assign the unused partition on **controller-1** as a physical volume to
   **cgts-vg** volume group, **swact** the hosts and repeat the procedure on
   **controller-1**.

.. rubric:: |proc|

After increasing the **cgts-vg** volume size, you can provision the filesystem
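The elided example for this step follows the usual sysinv CLI pattern; a hedged sketch, assuming the standard :command:`system host-disk-partition-list` and :command:`system host-pv-add` commands are available (the partition UUID placeholder is illustrative):

.. code-block:: none

   ~(keystone_admin)$ system host-disk-partition-list controller-0
   ~(keystone_admin)$ system host-pv-add controller-0 cgts-vg <partition_uuid>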
@@ -8,7 +8,7 @@ Overview

.. toctree::
   :maxdepth: 1

   storage-configuration-storage-resources
   disk-naming-conventions

@@ -18,7 +18,7 @@ Disks, Partitions, Volumes, and Volume Groups

.. toctree::
   :maxdepth: 1

   work-with-local-volume-groups
   local-volume-groups-cli-commands
   increase-the-size-for-lvm-local-volumes-on-controller-filesystems

@@ -29,7 +29,7 @@ Work with Disk Partitions

.. toctree::
   :maxdepth: 1

   work-with-disk-partitions
   identify-space-available-for-partitions
   list-partitions
@@ -44,65 +44,64 @@ Work with Physical Volumes

.. toctree::
   :maxdepth: 1

   work-with-physical-volumes
   add-a-physical-volume
   list-physical-volumes
   view-details-for-a-physical-volume
   delete-a-physical-volume

----------------
Storage Backends
----------------

.. toctree::
   :maxdepth: 1

   storage-backends
   configure-the-internal-ceph-storage-backend
   configure-an-external-netapp-deployment-as-the-storage-backend
   configure-netapps-using-a-private-docker-registry
   uninstall-the-netapp-backend

----------------
Controller Hosts
----------------

.. toctree::
   :maxdepth: 1

   controller-hosts-storage-on-controller-hosts
   ceph-cluster-on-a-controller-host
   increase-controller-filesystem-storage-allotments-using-horizon
   increase-controller-filesystem-storage-allotments-using-the-cli

------------
Worker Hosts
------------

.. toctree::
   :maxdepth: 1

   storage-configuration-storage-on-worker-hosts

-------------
Storage Hosts
-------------

.. toctree::
   :maxdepth: 1

   storage-hosts-storage-on-storage-hosts
   replication-groups

-----------------------------
Configure Ceph OSDs on a Host
-----------------------------

.. toctree::
   :maxdepth: 1

   ceph-storage-pools
   osd-replication-factors-journal-functions-and-storage-tiers
   storage-functions-osds-and-ssd-backed-journals
@@ -119,22 +118,42 @@ Persistent Volume Support

.. toctree::
   :maxdepth: 1

   about-persistent-volume-support

***************
RBD Provisioner
***************

.. toctree::
   :maxdepth: 1

   default-behavior-of-the-rbd-provisioner
   storage-configuration-create-readwriteonce-persistent-volume-claims
   storage-configuration-mount-readwriteonce-persistent-volumes-in-containers
   enable-readwriteonce-pvc-support-in-additional-namespaces
   enable-rbd-readwriteonly-additional-storage-classes
   install-additional-rbd-provisioners

****************************
Ceph File System Provisioner
****************************

.. toctree::
   :maxdepth: 1

   default-behavior-of-the-cephfs-provisioner
   create-readwritemany-persistent-volume-claims
   mount-readwritemany-persistent-volumes-in-containers
   enable-readwritemany-pvc-support-in-additional-namespaces

----------------
Storage Profiles
----------------

.. toctree::
   :maxdepth: 1

   storage-profiles
----------------------------
@@ -143,15 +162,15 @@ Storage-Related CLI Commands

.. toctree::
   :maxdepth: 1

   storage-configuration-storage-related-cli-commands

---------------------
Storage Usage Details
---------------------

.. toctree::
   :maxdepth: 1

   storage-usage-details-storage-utilization-display
   view-storage-utilization-using-horizon
@@ -6,7 +6,7 @@
Install Additional RBD Provisioners
===================================

You can launch additional dedicated |RBD| provisioners to support specific
applications using dedicated pools, storage classes, and namespaces.

.. rubric:: |context|

@@ -14,11 +14,11 @@ applications using dedicated pools, storage classes, and namespaces.

This can be useful, for example, to allow an application to have control
over its own persistent volume provisioner, that is, managing the Ceph
pool, storage tier, allowed namespaces, and so on, without requiring the
Kubernetes admin to modify the default |RBD| provisioner service in the
kube-system namespace.

This procedure uses standard Helm mechanisms to install a second
|RBD| provisioner.

.. rubric:: |proc|

@@ -101,6 +101,6 @@ rbd-provisioner.

      general (default)       ceph.com/rbd   6h39m
      special-storage-class   ceph.com/rbd   5h58m

   You can now create and mount PVCs from the new |RBD| provisioner's
   **2nd-storage** storage class, from within the **isolated-app**
   namespace.
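The second provisioner's chart values follow the same ``classes`` schema shown earlier for the default provisioner. A minimal sketch of such a values file, assuming the chart accepts that schema; only the **2nd-storage** class and **isolated-app** namespace come from the procedure above, the pool and user names here are illustrative:

.. code-block:: none

   classes:
   - additionalNamespaces: [isolated-app]
     chunk_size: 64
     crush_rule_name: storage_tier_ruleset
     name: 2nd-storage
     pool_name: special-pool
     replication: 2
     userId: ceph-pool-special-pool
     userSecretName: ceph-pool-special-pool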
@@ -0,0 +1,169 @@

.. fkk1616520068837
.. _mount-readwritemany-persistent-volumes-in-containers:

====================================================
Mount ReadWriteMany Persistent Volumes in Containers
====================================================

You can attach a ReadWriteMany |PVC| to multiple containers, and that |PVC|
can be written to by all containers.

.. rubric:: |context|

This example shows how a volume is claimed and mounted by each container
replica of a deployment with 2 replicas, and each container replica can read
and write to the |PVC|. It is the responsibility of an individual micro-service
within an application to make a volume claim, mount it, and use it.

.. rubric:: |prereq|

You must have created the |PVCs|. This procedure uses |PVCs| with names and
configurations created in |prod| |stor-doc|: :ref:`Create ReadWriteMany
Persistent Volume Claims <create-readwritemany-persistent-volume-claims>`.

.. rubric:: |proc|

.. _fkk1616520068837-steps-fqj-flr-tkb:

#. Create the busybox container with the persistent volumes created from the
   |PVCs| mounted. This deployment will create two replicas mounting the same
   persistent volume.

   #. Create a yaml file definition for the busybox container.

      .. code-block:: none

         % cat <<EOF > wrx-busybox.yaml
         apiVersion: apps/v1
         kind: Deployment
         metadata:
           name: wrx-busybox
           namespace: default
         spec:
           progressDeadlineSeconds: 600
           replicas: 2
           selector:
             matchLabels:
               run: busybox
           template:
             metadata:
               labels:
                 run: busybox
             spec:
               containers:
               - args:
                 - sh
                 image: busybox
                 imagePullPolicy: Always
                 name: busybox
                 stdin: true
                 tty: true
                 volumeMounts:
                 - name: pvc1
                   mountPath: "/mnt1"
               restartPolicy: Always
               volumes:
               - name: pvc1
                 persistentVolumeClaim:
                   claimName: rwx-test-claim
         EOF

   #. Apply the busybox configuration.

      .. code-block:: none

         % kubectl apply -f wrx-busybox.yaml
         deployment.apps/wrx-busybox created

#. Attach to the busybox and create files on the Persistent Volumes.

   #. List the available pods.

      .. code-block:: none

         % kubectl get pods
         NAME                           READY   STATUS    RESTARTS   AGE
         wrx-busybox-6455997c76-4kg8v   1/1     Running   0          108s
         wrx-busybox-6455997c76-crmw6   1/1     Running   0          108s

   #. Connect to the pod shell for CLI access.

      .. code-block:: none

         % kubectl attach wrx-busybox-6455997c76-4kg8v -c busybox -i -t

   #. From the container's console, list the disks to verify that the
      Persistent Volume is attached.

      .. code-block:: none

         % df
         Filesystem           1K-blocks      Used  Available Use% Mounted on
         overlay               31441920   1783748   29658172   6% /
         tmpfs                    65536         0      65536   0% /dev
         tmpfs                  5033188         0    5033188   0% /sys/fs/cgroup
         ceph-fuse            516542464    643072  515899392   0% /mnt1

      The |PVC| is mounted as /mnt1.

   #. Create files in the mount.

      .. code-block:: none

         # cd /mnt1
         # touch i-was-here-${HOSTNAME}
         # ls /mnt1
         i-was-here-wrx-busybox-6455997c76-4kg8v

   #. End the container session.

      .. code-block:: none

         % exit
         Session ended, resume using 'kubectl attach wrx-busybox-6455997c76-4kg8v -c busybox -i -t' command when the pod is running

   #. Connect to the other busybox container.

      .. code-block:: none

         % kubectl attach wrx-busybox-6455997c76-crmw6 -c busybox -i -t

   #. Optional: From the container's console, list the disks to verify that
      the |PVC| is attached.

      .. code-block:: none

         % df
         Filesystem           1K-blocks      Used  Available Use% Mounted on
         overlay               31441920   1783888   29658032   6% /
         tmpfs                    65536         0      65536   0% /dev
         tmpfs                  5033188         0    5033188   0% /sys/fs/cgroup
         ceph-fuse            516542464    643072  515899392   0% /mnt1

   #. Verify that the file created from the other container exists and that
      this container can also write to the Persistent Volume.

      .. code-block:: none

         # cd /mnt1
         # ls /mnt1
         i-was-here-wrx-busybox-6455997c76-4kg8v
         # echo ${HOSTNAME}
         wrx-busybox-6455997c76-crmw6
         # touch i-was-here-${HOSTNAME}
         # ls /mnt1
         i-was-here-wrx-busybox-6455997c76-4kg8v  i-was-here-wrx-busybox-6455997c76-crmw6

   #. End the container session.

      .. code-block:: none

         % exit
         Session ended, resume using 'kubectl attach wrx-busybox-6455997c76-crmw6 -c busybox -i -t' command when the pod is running

#. Terminate the busybox container.

   .. code-block:: none

      % kubectl delete -f wrx-busybox.yaml

For more information on Persistent Volume Support, see :ref:`About Persistent
Volume Support <about-persistent-volume-support>`.
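The **rwx-test-claim** referenced by the deployment above is created like a ReadWriteOnce claim but with the ``ReadWriteMany`` access mode and a CephFS-backed storage class. A hedged sketch; the storage class name ``cephfs`` is an assumption here, so check :command:`kubectl get storageclass` for the class provided by the cephfs-provisioner on your system:

.. code-block:: none

   % cat <<EOF > rwx-claim.yaml
   kind: PersistentVolumeClaim
   apiVersion: v1
   metadata:
     name: rwx-test-claim
   spec:
     accessModes:
     - ReadWriteMany
     resources:
       requests:
         storage: 1Gi
     storageClassName: cephfs
   EOF
   % kubectl apply -f rwx-claim.yaml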
@@ -15,6 +15,6 @@ the same peer group. Do not substitute a smaller disk than the original.

The replacement disk is automatically formatted and updated with data when the
storage host is unlocked. For more information, see |node-doc|: :ref:`Change
Hardware Components for a Storage Host
<changing-hardware-components-for-a-storage-host>`.

@@ -116,12 +116,7 @@ For more information about Trident, see

- :ref:`Configure the Internal Ceph Storage Backend
  <configure-the-internal-ceph-storage-backend>`

- :ref:`Configure an External Netapp Deployment as the Storage Backend
  <configure-an-external-netapp-deployment-as-the-storage-backend>`

- :ref:`Uninstall the Netapp Backend <uninstall-the-netapp-backend>`
@@ -1,98 +0,0 @@

.. xco1564696647432
.. _storage-configuration-create-persistent-volume-claims:

===============================
Create Persistent Volume Claims
===============================

Container images have an ephemeral file system by default. For data to
survive beyond the lifetime of a container, it can read and write files to
a persistent volume obtained with a |PVC| created to provide persistent
storage.

.. rubric:: |context|

The following steps create two 1Gb persistent volume claims.

.. rubric:: |proc|

.. _storage-configuration-create-persistent-volume-claims-d891e32:

#. Create the **test-claim1** persistent volume claim.

   #. Create a yaml file defining the claim and its attributes.

      For example:

      .. code-block:: none

         ~(keystone_admin)]$ cat <<EOF > claim1.yaml
         kind: PersistentVolumeClaim
         apiVersion: v1
         metadata:
           name: test-claim1
         spec:
           accessModes:
           - ReadWriteOnce
           resources:
             requests:
               storage: 1Gi
           storageClassName: general
         EOF

   #. Apply the settings created above.

      .. code-block:: none

         ~(keystone_admin)]$ kubectl apply -f claim1.yaml
         persistentvolumeclaim/test-claim1 created

#. Create the **test-claim2** persistent volume claim.

   #. Create a yaml file defining the claim and its attributes.

      For example:

      .. code-block:: none

         ~(keystone_admin)]$ cat <<EOF > claim2.yaml
         kind: PersistentVolumeClaim
         apiVersion: v1
         metadata:
           name: test-claim2
         spec:
           accessModes:
           - ReadWriteOnce
           resources:
             requests:
               storage: 1Gi
           storageClassName: general
         EOF

   #. Apply the settings created above.

      .. code-block:: none

         ~(keystone_admin)]$ kubectl apply -f claim2.yaml
         persistentvolumeclaim/test-claim2 created

.. rubric:: |result|

Two 1Gb persistent volume claims have been created. You can view them with
the following command.

.. code-block:: none

   ~(keystone_admin)]$ kubectl get persistentvolumeclaims
   NAME          STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
   test-claim1   Bound    pvc-aaca..   1Gi        RWO            general        2m56s
   test-claim2   Bound    pvc-e93f..   1Gi        RWO            general        68s

For more information on using CephFS for internal Ceph backends, see
:ref:`Using CephFS for Internal Ceph Storage Backend <configure-ceph-file-system-for-internal-ceph-storage-backend>`
@@ -0,0 +1,105 @@

.. xco1564696647432
.. _storage-configuration-create-readwriteonce-persistent-volume-claims:

=============================================
Create ReadWriteOnce Persistent Volume Claims
=============================================

Container images have an ephemeral file system by default. For data to
survive beyond the lifetime of a container, it can read and write files to
a persistent volume obtained with a |PVC| created to provide persistent
storage.

.. rubric:: |context|

The following steps show an example of creating two 1GB |PVCs| with
ReadWriteOnce accessMode.

.. rubric:: |proc|

.. _storage-configuration-create-readwriteonce-persistent-volume-claims-d891e32:

#. Create the **rwo-test-claim1** Persistent Volume Claim.

   #. Create a yaml file defining the claim and its attributes.

      For example:

      .. code-block:: none

         ~(keystone_admin)]$ cat <<EOF > rwo-claim1.yaml
         kind: PersistentVolumeClaim
         apiVersion: v1
         metadata:
           name: rwo-test-claim1
         spec:
           accessModes:
           - ReadWriteOnce
           resources:
             requests:
               storage: 1Gi
           storageClassName: general
         EOF

   #. Apply the settings created above.

      .. code-block:: none

         ~(keystone_admin)]$ kubectl apply -f rwo-claim1.yaml
         persistentvolumeclaim/rwo-test-claim1 created

#. Create the **rwo-test-claim2** Persistent Volume Claim.

   #. Create a yaml file defining the claim and its attributes.

      For example:

      .. code-block:: none

         ~(keystone_admin)]$ cat <<EOF > rwo-claim2.yaml
         kind: PersistentVolumeClaim
         apiVersion: v1
         metadata:
           name: rwo-test-claim2
         spec:
           accessModes:
           - ReadWriteOnce
           resources:
             requests:
               storage: 1Gi
           storageClassName: general
         EOF

   #. Apply the settings created above.

      .. code-block:: none

         ~(keystone_admin)]$ kubectl apply -f rwo-claim2.yaml
         persistentvolumeclaim/rwo-test-claim2 created

.. rubric:: |result|

Two 1Gb |PVCs| have been created. You can view the |PVCs| using
the following command.

.. code-block:: none

   ~(keystone_admin)]$ kubectl get persistentvolumeclaims
   NAME              STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS
   rwo-test-claim1   Bound    pvc-aaca..   1Gi        RWO            general
   rwo-test-claim2   Bound    pvc-e93f..   1Gi        RWO            general

.. code-block:: none

   ~(keystone_admin)]$ kubectl get persistentvolume
   NAME         CAPACITY   ACCESS..   RECLAIM..   STATUS   CLAIM                     STORAGECLASS
   pvc-08d8..   1Gi        RWO        Delete      Bound    default/rwo-test-claim1   general
   pvc-af10..   1Gi        RWO        Delete      Bound    default/rwo-test-claim2   general
@ -1,28 +1,26 @@
|
|||||||
|
|
||||||
.. pjw1564749970685
|
.. pjw1564749970685
|
||||||
.. _storage-configuration-mount-persistent-volumes-in-containers:
|
.. _storage-configuration-mount-readwriteonce-persistent-volumes-in-containers:
|
||||||
|
|
||||||
======================================
|
====================================================
|
||||||
Mount Persistent Volumes in Containers
|
Mount ReadWriteOnce Persistent Volumes in Containers
|
||||||
======================================
|
====================================================
|
||||||
|
|
||||||
You can launch, attach, and terminate a busybox container to mount |PVCs| in
|
You can attach ReadWriteOnce |PVCs| to a container when launching a container,
|
||||||
your cluster.
|
and changes to those |PVCs| will persist even if that container gets terminated
|
||||||
|
and restarted.
|
||||||
|
|
||||||
.. rubric:: |context|
|
.. rubric:: |context|
|
||||||
|
|
||||||
This example shows how a volume is claimed and mounted by a simple running
|
This example shows how a volume is claimed and mounted by a simple running
|
||||||
container. It is the responsibility of an individual micro-service within
|
container, and the contents of the volume claim persists across restarts of
|
||||||
an application to make a volume claim, mount it, and use it. For example,
|
the container. It is the responsibility of an individual micro-service within
|
||||||
the stx-openstack application will make volume claims for **mariadb** and
|
an application to make a volume claim, mount it, and use it.
|
||||||
**rabbitmq** via their helm charts to orchestrate this.
|
|
||||||
|
|
||||||
.. rubric:: |prereq|
|
.. rubric:: |prereq|
|
||||||
|
|
||||||
You must have created the persistent volume claims. This procedure uses
|
You should refer to the Volume Claim examples. For more information, see,
|
||||||
PVCs with names and configurations created in |stor-doc|: :ref:`Create
|
:ref:`Create ReadWriteOnce Persistent Volume Claims <storage-configuration-create-readwriteonce-persistent-volume-claims>`.
|
||||||
Persistent Volume Claims
|
|
||||||
<storage-configuration-create-persistent-volume-claims>`.
|
|
||||||
|
|
||||||
.. rubric:: |proc|
|
.. rubric:: |proc|
|
||||||
|
|
||||||
@@ -30,18 +28,18 @@ Persistent Volume Claims
 .. _storage-configuration-mount-persistent-volumes-in-containers-d583e55:
 
 #. Create the busybox container with the persistent volumes created from
-   the PVCs mounted.
+   the |PVCs| mounted.
 
 
    #. Create a yaml file definition for the busybox container.
 
      .. code-block:: none
 
-        % cat <<EOF > busybox.yaml
+        % cat <<EOF > rwo-busybox.yaml
         apiVersion: apps/v1
         kind: Deployment
         metadata:
-          name: busybox
+          name: rwo-busybox
           namespace: default
         spec:
           progressDeadlineSeconds: 600
@@ -71,10 +69,10 @@ Persistent Volume Claims
               volumes:
               - name: pvc1
                 persistentVolumeClaim:
-                  claimName: test-claim1
+                  claimName: rwo-test-claim1
               - name: pvc2
                 persistentVolumeClaim:
-                  claimName: test-claim2
+                  claimName: rwo-test-claim2
         EOF
 
 
@@ -82,10 +80,11 @@ Persistent Volume Claims
 
      .. code-block:: none
 
-        % kubectl apply -f busybox.yaml
+        % kubectl apply -f rwo-busybox.yaml
+        deployment.apps/rwo-busybox created
 
 
-#. Attach to the busybox and create files on the persistent volumes.
+#. Attach to the busybox and create files on the Persistent Volumes.
 
 
    #. List the available pods.
@@ -93,34 +92,31 @@ Persistent Volume Claims
      .. code-block:: none
 
         % kubectl get pods
-        NAME                       READY   STATUS    RESTARTS   AGE
-        busybox-5c4f877455-gkg2s   1/1     Running   0          19s
+        NAME                           READY   STATUS    RESTARTS   AGE
+        rwo-busybox-5c4f877455-gkg2s   1/1     Running   0          19s
 
 
    #. Connect to the pod shell for CLI access.
 
      .. code-block:: none
 
-        % kubectl attach busybox-5c4f877455-gkg2s -c busybox -i -t
+        % kubectl attach rwo-busybox-5c4f877455-gkg2s -c busybox -i -t
 
    #. From the container's console, list the disks to verify that the
-      persistent volumes are attached.
+      Persistent Volumes are attached.
 
      .. code-block:: none
 
        # df
        Filesystem     1K-blocks    Used Available Use% Mounted on
        overlay         31441920 3239984  28201936  10% /
        tmpfs              65536       0     65536   0% /dev
        tmpfs           65900776       0  65900776   0% /sys/fs/cgroup
        /dev/rbd0         999320    2564    980372   0% /mnt1
        /dev/rbd1         999320    2564    980372   0% /mnt2
        /dev/sda4       20027216 4952208  14034624  26%
-       ...
 
      The PVCs are mounted as /mnt1 and /mnt2.
 
 
    #. Create files in the mounted volumes.
 
      .. code-block:: none
@@ -140,31 +136,32 @@ Persistent Volume Claims
      .. code-block:: none
 
        # exit
-       Session ended, resume using 'kubectl attach busybox-5c4f877455-gkg2s -c busybox -i -t' command when the pod is running
+       Session ended, resume using
+       'kubectl attach busybox-5c4f877455-gkg2s -c busybox -i -t' command when
+       the pod is running
 
    #. Terminate the busybox container.
 
      .. code-block:: none
 
-        % kubectl delete -f busybox.yaml
+        % kubectl delete -f rwo-busybox.yaml
 
 #. Recreate the busybox container, again attached to persistent volumes.
 
 
    #. Apply the busybox configuration.
 
      .. code-block:: none
 
-        % kubectl apply -f busybox.yaml
+        % kubectl apply -f rwo-busybox.yaml
+        deployment.apps/rwo-busybox created
 
    #. List the available pods.
 
      .. code-block:: none
 
        % kubectl get pods
        NAME                           READY   STATUS    RESTARTS   AGE
-       busybox-5c4f877455-jgcc4       1/1     Running   0          19s
+       rwo-busybox-5c4f877455-jgcc4   1/1     Running   0          19s
 
 
    #. Connect to the pod shell for CLI access.
 
@@ -197,5 +194,3 @@ Persistent Volume Claims
        i-was-here lost+found
        # ls /mnt2
        i-was-here-too lost+found
-
-
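The procedure above assumes the two ReadWriteOnce claims it mounts, ``rwo-test-claim1`` and ``rwo-test-claim2``, were created beforehand. As a rough illustration only (the claim name, size, and ``general`` storage class are assumptions, not taken from this change), such a claim manifest might look like:

```yaml
# Hypothetical ReadWriteOnce PVC matching the names used by rwo-busybox.yaml.
# Size and storageClassName are illustrative assumptions; adjust to your cluster.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rwo-test-claim1
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce          # single-node read-write, backed by an RBD volume
  resources:
    requests:
      storage: 1Gi
  storageClassName: general
```

A second claim, ``rwo-test-claim2``, would be identical apart from its name; see the referenced Create ReadWriteOnce Persistent Volume Claims page for the authoritative procedure.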
@@ -6,7 +6,7 @@
 View Details for a Partition
 ============================
 
 To view details for a partition, use the **system host-disk-partition-show**
 command.
 
 .. rubric:: |context|
@@ -127,7 +127,7 @@ For example:
 .. code-block:: none
 
    ~(keystone_admin)]$ system host-fs-modify controller-0 image-conversion=8
 
 
 
 .. _configure-an-optional-cinder-file-system-section-ubp-f14-tnb:
@@ -40,8 +40,8 @@ can no longer be removed.
 
       ~(keystone_admin)$ system host-disk-list compute-0
       +--------------------------------------+--------------+---------------+
       | uuid                                 | device_node  | available_gib |
       |                                      |              |               |
       +--------------------------------------+--------------+---------------+
       | 5dcb3a0e-c677-4363-a030-58e245008504 | /dev/sda     | 12216         |
       | c2932691-1b46-4faf-b823-2911a9ecdb9b | /dev/sdb     | 20477         |
@@ -71,7 +71,7 @@ can no longer be removed.
       | updated_at      | None                                                       |
       | parameters      | {u'instance_backing': u'lvm', u'instances_lv_size_mib': 0} |
       +-----------------+------------------------------------------------------------+
 
 
 #. Obtain the |UUID| of the disk or partition to use for **nova-local** storage.
 
@@ -140,7 +140,7 @@ can no longer be removed.
 
 #. Obtain the |UUID| of the partition to use for **nova-local** storage as
    described in step
 
    .. xbooklink :ref:`5 <creating-or-changing-the-size-of-nova-local-storage-uuid>`.
 
 #. Add a disk or partition to the **nova-local** group, using a command of the
@@ -6,7 +6,7 @@ Contents
 
 .. toctree::
    :maxdepth: 1
 
    storage-configuration-and-management-overview
    storage-configuration-and-management-storage-resources
    config-and-management-ceph-placement-group-number-dimensioning-for-storage-cluster
@@ -69,6 +69,6 @@ controllers. This process requires a swact on the controllers. Then you must
 lock and unlock the worker nodes one at a time, ensuring that sufficient
 resources are available to migrate any running instances.
 
 .. note::
    On AIO Simplex systems you do not need to lock and unlock the host. The
    changes are applied automatically.
@@ -33,7 +33,7 @@ override.
 .. parsed-literal::
 
    ~(keystone_admin)$ system helm-override-show |prefix|-openstack nova openstack
 
 
 The output should include the following:
 
@@ -42,7 +42,7 @@ where the following are optional arguments:
    |                          | stable-  |                               |                           |          |           |
    |                          | versioned|                               |                           |          |           |
    +--------------------------+----------+-------------------------------+---------------------------+----------+-----------+
 
 The output indicates that the currently installed version of
 |prefix|-openstack is 20.10-0.
 
@@ -78,7 +78,7 @@ and the following is a positional argument which must come last:
    .. code-block:: none
 
       $ source /etc/platform/openrc
       ~(keystone_admin)$
 
 #. Update the application.
 
@@ -110,6 +110,6 @@ and the following is a positional argument which must come last:
       | status        | applied                          |
       | updated_at    | 2020-05-02T17:44:40.152201+00:00 |
       +---------------+----------------------------------+
 
 
 
@@ -11,7 +11,7 @@ end-user tasks.
 
 .. toctree::
    :maxdepth: 2
 
    kubernetes/index
 
 ---------
@@ -2,17 +2,18 @@
 Contents
 ========
 
-*************
+-------------
 System access
-*************
+-------------
 
 .. toctree::
    :maxdepth: 1
 
    kubernetes-user-tutorials-access-overview
 
+-----------------
 Remote CLI access
-*****************
+-----------------
 
 .. toctree::
    :maxdepth: 1
@@ -24,34 +25,36 @@ Remote CLI access
    configuring-remote-helm-client
    using-container-based-remote-clis-and-clients
 
+----------
 GUI access
-**********
+----------
 
 .. toctree::
    :maxdepth: 1
 
    accessing-the-kubernetes-dashboard
 
+----------
 API access
-**********
+----------
 
 .. toctree::
    :maxdepth: 1
 
    kubernetes-user-tutorials-rest-api-access
 
-**********************
+----------------------
 Application management
-**********************
+----------------------
 
 .. toctree::
    :maxdepth: 1
 
    kubernetes-user-tutorials-helm-package-manager
 
-*********************
-Local docker registry
-*********************
+---------------------
+Local Docker registry
+---------------------
 
 .. toctree::
    :maxdepth: 1
@@ -59,18 +62,18 @@ Local docker registry
    kubernetes-user-tutorials-authentication-and-authorization
    using-an-image-from-the-local-docker-registry-in-a-container-spec
 
-***************************
+---------------------------
 NodePort usage restrictions
-***************************
+---------------------------
 
 .. toctree::
    :maxdepth: 1
 
    nodeport-usage-restrictions
 
-************
+------------
 Cert Manager
-************
+------------
 
 .. toctree::
    :maxdepth: 1
@@ -78,9 +81,9 @@ Cert Manager
    kubernetes-user-tutorials-cert-manager
    letsencrypt-example
 
-********************************
+--------------------------------
 Vault secret and data management
-********************************
+--------------------------------
 
 .. toctree::
    :maxdepth: 1
@@ -89,9 +92,9 @@ Vault secret and data management
    vault-aware
    vault-unaware
 
-****************************
-Using Kata container runtime
-****************************
+-----------------------------
+Using Kata Containers runtime
+-----------------------------
 
 .. toctree::
    :maxdepth: 1
@@ -100,19 +103,38 @@ Using Kata container runtime
    specifying-kata-container-runtime-in-pod-spec
    known-limitations
 
-*******************************
+-------------------------------
 Adding persistent volume claims
-*******************************
+-------------------------------
 
 .. toctree::
    :maxdepth: 1
 
-   kubernetes-user-tutorials-creating-persistent-volume-claims
-   kubernetes-user-tutorials-mounting-persistent-volumes-in-containers
+   kubernetes-user-tutorials-about-persistent-volume-support
 
-****************************************
+***************
+RBD Provisioner
+***************
+
+.. toctree::
+   :maxdepth: 1
+
+   kubernetes-user-tutorials-create-readwriteonce-persistent-volume-claims
+   kubernetes-user-tutorials-mount-readwriteonce-persistent-volumes-in-containers
+
+****************************
+Ceph File System Provisioner
+****************************
+
+.. toctree::
+   :maxdepth: 1
+
+   kubernetes-user-tutorials-create-readwritemany-persistent-volume-claims
+   kubernetes-user-tutorials-mount-readwritemany-persistent-volumes-in-containers
+
+----------------------------------------
 Adding an SRIOV interface to a container
-****************************************
+----------------------------------------
 
 .. toctree::
    :maxdepth: 1
@@ -120,9 +142,9 @@ Adding an SRIOV interface to a container
    creating-network-attachment-definitions
    using-network-attachment-definitions-in-a-container
 
-**************************
+--------------------------
 CPU Manager for Kubernetes
-**************************
+--------------------------
 
 .. toctree::
    :maxdepth: 1