diff --git a/doc/source/_includes/installation-and-resource-planning-verified-commercial-hardware.rest b/doc/source/_includes/installation-and-resource-planning-verified-commercial-hardware.rest index 3105a8c50..8b31cc4c2 100644 --- a/doc/source/_includes/installation-and-resource-planning-verified-commercial-hardware.rest +++ b/doc/source/_includes/installation-and-resource-planning-verified-commercial-hardware.rest @@ -1 +1 @@ -.. [#f1] See :ref:`Data Network Planning ` for more information. \ No newline at end of file +.. [#]_ See :ref:`Data Network Planning ` for more information. diff --git a/doc/source/_includes/installing-and-provisioning-a-subcloud.rest b/doc/source/_includes/installing-and-provisioning-a-subcloud.rest index 3db767bdd..7817b17ac 100644 --- a/doc/source/_includes/installing-and-provisioning-a-subcloud.rest +++ b/doc/source/_includes/installing-and-provisioning-a-subcloud.rest @@ -2,8 +2,8 @@ .. begin-redfish-vms For subclouds with servers that support Redfish Virtual Media Service \(version -1.2 or higher\), you can use the Central Cloud's CLI to install the ISO and -bootstrap subclouds from the Central Cloud. For more information, see +1.2 or higher\), you can use the Central Cloud's CLI to install the ISO and +bootstrap subclouds from the Central Cloud. For more information, see :ref:`Installing a Subcloud Using Redfish Platform Management Service `. diff --git a/doc/source/_includes/installing-and-provisioning-the-central-cloud.rest b/doc/source/_includes/installing-and-provisioning-the-central-cloud.rest index faf3b0f20..95ef1e833 100644 --- a/doc/source/_includes/installing-and-provisioning-the-central-cloud.rest +++ b/doc/source/_includes/installing-and-provisioning-the-central-cloud.rest @@ -1,13 +1,13 @@ .. code-block:: none - system_mode: duplex + system_mode: duplex distributed_cloud_role: systemcontroller - - - management_start_address: .2 + + + management_start_address: .2 management_end_address: .50 - - + + additional_local_registry_images: - docker.io/starlingx/rvmc:stx.5.0-v1.0.0 diff --git a/doc/source/admintasks/kubernetes-admin-tutorials-authentication-and-authorization.rst b/doc/source/admintasks/kubernetes-admin-tutorials-authentication-and-authorization.rst index 2cb6e1d55..f6d8e0010 100644 --- a/doc/source/admintasks/kubernetes-admin-tutorials-authentication-and-authorization.rst +++ b/doc/source/admintasks/kubernetes-admin-tutorials-authentication-and-authorization.rst @@ -18,7 +18,7 @@ For example: An authorized administrator \('admin' and 'sysinv'\) can perform any Docker action. Regular users can only interact with their own repositories \(i.e. registry.local:9001//\). Any authenticated user can pull from -the following list of public images: +the following list of public images: .. _kubernetes-admin-tutorials-authentication-and-authorization-d383e50: diff --git a/doc/source/backup/index.rst b/doc/source/backup/index.rst index 72f4c7e58..3dfacea93 100644 --- a/doc/source/backup/index.rst +++ b/doc/source/backup/index.rst @@ -21,5 +21,5 @@ OpenStack .. toctree:: :maxdepth: 2 - + openstack/index \ No newline at end of file diff --git a/doc/source/backup/kubernetes/restoring-starlingx-system-data-and-storage.rst b/doc/source/backup/kubernetes/restoring-starlingx-system-data-and-storage.rst index da8ab0d42..34ab6ba70 100644 --- a/doc/source/backup/kubernetes/restoring-starlingx-system-data-and-storage.rst +++ b/doc/source/backup/kubernetes/restoring-starlingx-system-data-and-storage.rst @@ -315,7 +315,7 @@ conditions are in place: .. 
code-block:: none - ~(keystone_admin)]$ kubectl get pods -n kube-system | grep -e calico -e coredns + ~(keystone_admin)]$ kubectl get pods -n kube-system | grep -e calico -e coredns calico-kube-controllers-5cd4695574-d7zwt 1/1 Running calico-node-6km72 1/1 Running calico-node-c7xnd 1/1 Running diff --git a/doc/source/backup/kubernetes/running-ansible-backup-playbook-locally-on-the-controller.rst b/doc/source/backup/kubernetes/running-ansible-backup-playbook-locally-on-the-controller.rst index 7b2d65729..c01d508bf 100644 --- a/doc/source/backup/kubernetes/running-ansible-backup-playbook-locally-on-the-controller.rst +++ b/doc/source/backup/kubernetes/running-ansible-backup-playbook-locally-on-the-controller.rst @@ -13,7 +13,7 @@ Use the following command to run the Ansible Backup playbook and back up the .. code-block:: none - ~(keystone_admin)]$ ansible-playbook /usr/share/ansible/stx-ansible/playbooks/backup.yml -e "ansible_become_pass= admin_password=" -e "backup_user_local_registry=true" + ~(keystone_admin)]$ ansible-playbook /usr/share/ansible/stx-ansible/playbooks/backup.yml -e "ansible_become_pass= admin_password=" -e "backup_user_local_registry=true" The and need to be set correctly using the ``-e`` option on the command line, or an override file, or in the diff --git a/doc/source/backup/openstack/index.rst b/doc/source/backup/openstack/index.rst index cb2d4305d..14c11cd74 100644 --- a/doc/source/backup/openstack/index.rst +++ b/doc/source/backup/openstack/index.rst @@ -9,7 +9,7 @@ Backup and Restore .. toctree:: :maxdepth: 1 - + back-up-openstack restore-openstack-from-a-backup openstack-backup-considerations \ No newline at end of file diff --git a/doc/source/container_integration/kubernetes/create-test-and-terminate-a-ptp-notification-demo.rst b/doc/source/container_integration/kubernetes/create-test-and-terminate-a-ptp-notification-demo.rst index db282f38f..80ced2b85 100644 --- a/doc/source/container_integration/kubernetes/create-test-and-terminate-a-ptp-notification-demo.rst +++ b/doc/source/container_integration/kubernetes/create-test-and-terminate-a-ptp-notification-demo.rst @@ -9,7 +9,7 @@ Create, Test, and Terminate a PTP Notification Demo This section provides instructions on accessing, creating, testing and terminating a **ptp-notification-demo**. -.. rubric:: |context| +.. rubric:: |context| Use the following procedure to copy the tarball from |dnload-loc|, create, test, @@ -97,8 +97,8 @@ and terminate a ptp-notification-demo. .. code-block:: none - $ kubectl create namespace ptpdemo - $ helm install -n notification-demo ~/charts/ptp-notification-demo -f ~/charts/ptp-notification-demo/ptp-notification-override.yaml + $ kubectl create namespace ptpdemo + $ helm install -n notification-demo ~/charts/ptp-notification-demo -f ~/charts/ptp-notification-demo/ptp-notification-override.yaml $ kubectl get pods -n ptpdemo .. code-block:: none diff --git a/doc/source/container_integration/kubernetes/install-ptp-notifications.rst b/doc/source/container_integration/kubernetes/install-ptp-notifications.rst index 2ff86833d..9b68b54d7 100644 --- a/doc/source/container_integration/kubernetes/install-ptp-notifications.rst +++ b/doc/source/container_integration/kubernetes/install-ptp-notifications.rst @@ -11,7 +11,7 @@ using the :command:`system application` and :command:`system-helm-override` commands. -.. rubric:: |context| +.. 
rubric:: |context| |prod| provides the capability for application\(s\) to subscribe to @@ -23,8 +23,8 @@ asynchronous |PTP| status notifications and pull for the |PTP| state on demand. .. _install-ptp-notifications-ul-ydy-ggf-t4b: - The |PTP| port must be configured as Subordinate mode \(Slave mode\). For - more information, see, - + more information, see, + .. xbooklink :ref:`|prod-long| System Configuration `: @@ -35,7 +35,7 @@ asynchronous |PTP| status notifications and pull for the |PTP| state on demand. -.. rubric:: |context| +.. rubric:: |context| Use the following steps to install the **ptp-notification** application. @@ -52,7 +52,7 @@ Use the following steps to install the **ptp-notification** application. .. code-block:: none $ source /etc/platform/openrc - ~(keystone_admin)]$ + ~(keystone_admin)]$ #. Assign the |PTP| registration label to the controller\(s\). @@ -66,7 +66,7 @@ Use the following steps to install the **ptp-notification** application. .. code-block:: none - ~(keystone_admin)]$ system host-label-assign controller-0 ptp-notification=true + ~(keystone_admin)]$ system host-label-assign controller-0 ptp-notification=true #. Upload the |PTP| application using the following command: diff --git a/doc/source/container_integration/kubernetes/integrate-the-application-with-notification-client-sidecar.rst b/doc/source/container_integration/kubernetes/integrate-the-application-with-notification-client-sidecar.rst index e35f90f8b..1d5a15c79 100644 --- a/doc/source/container_integration/kubernetes/integrate-the-application-with-notification-client-sidecar.rst +++ b/doc/source/container_integration/kubernetes/integrate-the-application-with-notification-client-sidecar.rst @@ -47,6 +47,6 @@ The following prerequisites are required before the integration: For instructions on creating, testing and terminating a **ptp-notification-demo**, see, :ref:`Create, Test, and Terminate |PTP| Notification Demo `. - + diff --git a/doc/source/datanet/kubernetes/vxlan-data-networks.rst b/doc/source/datanet/kubernetes/vxlan-data-networks.rst index 633fd8439..57eeb5a58 100644 --- a/doc/source/datanet/kubernetes/vxlan-data-networks.rst +++ b/doc/source/datanet/kubernetes/vxlan-data-networks.rst @@ -26,7 +26,7 @@ are included in the outer IP header. .. only:: partner .. include:: ../../_includes/vxlan-data-networks.rest - + .. _vxlan-data-networks-ul-rzs-kqf-zbb: - Dynamic |VXLAN|, see :ref:`Dynamic VXLAN ` @@ -41,8 +41,8 @@ are included in the outer IP header. Before you can create project networks on a |VXLAN| provider network, you must define at least one network segment range. -- :ref:`Dynamic VXLAN ` +- :ref:`Dynamic VXLAN ` -- :ref:`Static VXLAN ` +- :ref:`Static VXLAN ` -- :ref:`Differences Between Dynamic and Static VXLAN Modes ` +- :ref:`Differences Between Dynamic and Static VXLAN Modes ` diff --git a/doc/source/datanet/openstack/dynamic-vxlan.rst b/doc/source/datanet/openstack/dynamic-vxlan.rst index ef0ed1827..1a56118c3 100644 --- a/doc/source/datanet/openstack/dynamic-vxlan.rst +++ b/doc/source/datanet/openstack/dynamic-vxlan.rst @@ -56,7 +56,7 @@ Multicast Endpoint Distribution :start-after: vswitch-text-1-begin :end-before: vswitch-text-1-end - + .. _dynamic-vxlan-section-N10054-N1001F-N10001: diff --git a/doc/source/datanet/openstack/index.rst b/doc/source/datanet/openstack/index.rst index 6ab8b1b55..e946331e7 100644 --- a/doc/source/datanet/openstack/index.rst +++ b/doc/source/datanet/openstack/index.rst @@ -2,63 +2,63 @@ sphinx-quickstart on Thu Sep 3 15:14:59 2020. 
You can adapt this file completely to your liking, but it should at least contain the root `toctree` directive. - + ======== Contents ======== - + .. toctree:: :maxdepth: 1 - + data-networks-overview - + ------------------- VXLAN data networks ------------------- .. toctree:: :maxdepth: 1 - + dynamic-vxlan static-vxlan differences-between-dynamic-and-static-vxlan-modes - + ----------------------- Add segmentation ranges ----------------------- - + .. toctree:: :maxdepth: 1 - + adding-segmentation-ranges-using-the-cli - + ------------------------------------ Data network interface configuration ------------------------------------ - + .. toctree:: :maxdepth: 1 - + configuring-data-interfaces configuring-data-interfaces-for-vxlans - + ------------------------------ MTU values of a data interface ------------------------------ - + .. toctree:: :maxdepth: 1 - + changing-the-mtu-of-a-data-interface-using-the-cli changing-the-mtu-of-a-data-interface - + ----------------------------------- VXLAN data network setup completion ----------------------------------- - + .. toctree:: :maxdepth: 1 - + adding-a-static-ip-address-to-a-data-interface managing-data-interface-static-ip-addresses-using-the-cli using-ip-address-pools-for-data-interfaces diff --git a/doc/source/datanet/openstack/vxlan-data-network-setup-completion.rst b/doc/source/datanet/openstack/vxlan-data-network-setup-completion.rst index e78bcddce..05aa5c3f3 100644 --- a/doc/source/datanet/openstack/vxlan-data-network-setup-completion.rst +++ b/doc/source/datanet/openstack/vxlan-data-network-setup-completion.rst @@ -11,10 +11,10 @@ or the |CLI|. For more information on setting up |VXLAN| Data Networks, see :ref:`VXLAN Data Networks `. -- :ref:`Adding a Static IP Address to a Data Interface ` +- :ref:`Adding a Static IP Address to a Data Interface ` -- :ref:`Using IP Address Pools for Data Interfaces ` +- :ref:`Using IP Address Pools for Data Interfaces ` -- :ref:`Adding and Maintaining Routes for a VXLAN Network ` +- :ref:`Adding and Maintaining Routes for a VXLAN Network ` diff --git a/doc/source/dist_cloud/changing-the-admin-password-on-distributed-cloud.rst b/doc/source/dist_cloud/changing-the-admin-password-on-distributed-cloud.rst index 57905cfd3..1e40ed5e1 100644 --- a/doc/source/dist_cloud/changing-the-admin-password-on-distributed-cloud.rst +++ b/doc/source/dist_cloud/changing-the-admin-password-on-distributed-cloud.rst @@ -40,7 +40,7 @@ Ensure that all subclouds are managed and online. .. code-block:: none $ source /etc/platform/openrc - ~(keystone_admin)]$ + ~(keystone_admin)]$ #. After five minutes, lock and then unlock each controller in the System diff --git a/doc/source/dist_cloud/cli-commands-for-alarms-management.rst b/doc/source/dist_cloud/cli-commands-for-alarms-management.rst index 5a9b1a337..ec711e863 100644 --- a/doc/source/dist_cloud/cli-commands-for-alarms-management.rst +++ b/doc/source/dist_cloud/cli-commands-for-alarms-management.rst @@ -25,7 +25,7 @@ You can use the |CLI| to review alarm summaries for the |prod-dc|. | subcloud-5 | 0 | 2 | 0 | 0 | degraded | | subcloud-1 | 0 | 0 | 0 | 0 | OK | +------------+-----------------+--------------+--------------+----------+----------+ - + System Controller alarms and warnings are not included. @@ -53,7 +53,7 @@ You can use the |CLI| to review alarm summaries for the |prod-dc|. 
+-----------------+--------------+--------------+----------+ | 0 | 0 | 0 | 0 | +-----------------+--------------+--------------+----------+ - + The following command is equivalent to the :command:`fm alarm-summary`, providing a count of alarms and warnings for the System Controller: @@ -75,7 +75,7 @@ You can use the |CLI| to review alarm summaries for the |prod-dc|. +-----------------+--------------+--------------+----------+ | 0 | 0 | 0 | 0 | +-----------------+--------------+--------------+----------+ - + - To list the alarms for a subcloud: diff --git a/doc/source/dist_cloud/creating-subcloud-groups.rst b/doc/source/dist_cloud/creating-subcloud-groups.rst index 4233a1952..a4c652cd6 100644 --- a/doc/source/dist_cloud/creating-subcloud-groups.rst +++ b/doc/source/dist_cloud/creating-subcloud-groups.rst @@ -19,26 +19,26 @@ subcloud groups. The |CLI| commands for managing subcloud groups are: .. _creating-subcloud-groups-ul-fvw-cj4-3jb: -:command:`dcmanager subcloud-group add`: +:command:`dcmanager subcloud-group add`: Adds a new subcloud group. -:command:`dcmanager subcloud-group delete`: +:command:`dcmanager subcloud-group delete`: Deletes subcloud group details from the database. .. note:: The 'Default' subcloud group cannot be deleted - :command:`dcmanager subcloud-group list`: + :command:`dcmanager subcloud-group list`: Lists subcloud groups. - :command:`dcmanager subcloud-group list-subclouds`: + :command:`dcmanager subcloud-group list-subclouds`: List subclouds referencing a subcloud group. - :command:`dcmanager subcloud-group show`: + :command:`dcmanager subcloud-group show`: Shows the details of a subcloud group. - :command:`dcmanager subcloud-group update`: + :command:`dcmanager subcloud-group update`: Updates attributes of a subcloud group. .. note:: @@ -100,7 +100,7 @@ Deletes subcloud group details from the database. .. code-block:: none ~(keystone_admin)]$ dcmanager subcloud-group list-subclouds Group1 - + +--+------+----+----+-------+-------+------+-----------+-----------+-------------+-----------+------------+------------+------+----------+----------+ |id|name |desc|loc.|sof.ver|mgmnt |avail |deploy_stat|mgmt_subnet|mgmt_start_ip|mgmt_end_ip|mgmt_gtwy_ip|sysctrl_gtwy|grp_id|created_at|updated_at| +--+------+----+----+-------+-------+------+-----------+-----------+-------------+-----------+------------+------------+------+----------+----------+ diff --git a/doc/source/dist_cloud/distributed-cloud-architecture.rst b/doc/source/dist_cloud/distributed-cloud-architecture.rst index cec69a885..201b058d1 100644 --- a/doc/source/dist_cloud/distributed-cloud-architecture.rst +++ b/doc/source/dist_cloud/distributed-cloud-architecture.rst @@ -97,8 +97,8 @@ if using the CLI. All messaging between SystemControllers and Subclouds uses the **admin** REST API service endpoints which, in this distributed cloud environment, are all configured for secure HTTPS. Certificates for these HTTPS - connections are managed internally by |prod|. - + connections are managed internally by |prod|. + .. xbooklink For more information, see, :ref:`Certificate Management for Admin REST API Endpoints `. 
diff --git a/doc/source/dist_cloud/installing-a-subcloud-using-redfish-platform-management-service.rst b/doc/source/dist_cloud/installing-a-subcloud-using-redfish-platform-management-service.rst index d78ddb52e..efbe43f79 100644 --- a/doc/source/dist_cloud/installing-a-subcloud-using-redfish-platform-management-service.rst +++ b/doc/source/dist_cloud/installing-a-subcloud-using-redfish-platform-management-service.rst @@ -20,7 +20,7 @@ subcloud, the subcloud installation has these phases: - Executing the :command:`dcmanager subcloud add` command in the Central Cloud: - - Uses Redfish Virtual Media Service to remote install the ISO on + - Uses Redfish Virtual Media Service to remote install the ISO on controller-0 in the subcloud - Uses Ansible to bootstrap |prod-long| on controller-0 in @@ -114,23 +114,23 @@ subcloud, the subcloud installation has these phases: bootstrap_interface: # e.g. eno1 bootstrap_address: # e.g.128.224.151.183 bootstrap_address_prefix: # e.g. 23 - - # Board Management Controller + + # Board Management Controller bmc_address: # e.g. 128.224.64.180 bmc_username: # e.g. root - - # If the subcloud's bootstrap IP interface and the system controller are not on the + + # If the subcloud's bootstrap IP interface and the system controller are not on the # same network then the customer must configure a default route or static route - # so that the Central Cloud can login bootstrap the newly installed subcloud. - + # so that the Central Cloud can login bootstrap the newly installed subcloud. + # If nexthop_gateway is specified and the network_address is not specified then a # default route will be configured. Otherwise, if a network_address is specified then # a static route will be configured. - + nexthop_gateway: for # e.g. 128.224.150.1 (required) network_address: # e.g. 128.224.144.0 network_mask: # e.g. 255.255.254.0 - + # Installation type codes #0 - Standard Controller, Serial Console #1 - Standard Controller, Graphical Console @@ -139,22 +139,22 @@ subcloud, the subcloud installation has these phases: #4 - AIO Low-latency, Serial Console #5 - AIO Low-latency, Graphical Console install_type: 3 - + # Optional parameters defaults can be modified by uncommenting the option with a modified value. - + # This option can be set to extend the installing stage timeout value - # wait_for_timeout: 3600 - + # wait_for_timeout: 3600 + # Set this options for https no_check_certificate: True - + # If the bootstrap interface is a vlan interface then configure the vlan ID. - # bootstrap_vlan: - + # bootstrap_vlan: + # Override default filesystem device. # rootfs_device: "/dev/disk/by-path/pci-0000:00:1f.2-ata-1.0" # boot_device: "/dev/disk/by-path/pci-0000:00:1f.2-ata-1.0" - + #. At the SystemController, create a /home/sysadmin/subcloud1-bootstrap-values.yaml overrides file for the @@ -166,21 +166,21 @@ subcloud, the subcloud installation has these phases: system_mode: simplex name: "subcloud1" - + description: "test" location: "loc" - + management_subnet: 192.168.101.0/24 management_start_address: 192.168.101.2 management_end_address: 192.168.101.50 management_gateway_address: 192.168.101.1 - + external_oam_subnet: 10.10.10.0/24 external_oam_gateway_address: 10.10.10.1 external_oam_floating_address: 10.10.10.12 - + systemcontroller_gateway_address: 192.168.204.101 - + docker_registries: k8s.gcr.io: url: registry.central:9001/k8s.gcr.io @@ -212,7 +212,7 @@ subcloud, the subcloud installation has these phases: .. 
code-block:: none - "DNS.1: registry.local DNS.2: registry.central IP.1: floating_management IP.2: floating_OAM" + "DNS.1: registry.local DNS.2: registry.central IP.1: floating_management IP.2: floating_OAM" If required, run the following command on the Central Cloud prior to bootstrapping the subcloud to install the new certificate for the Central @@ -220,7 +220,7 @@ subcloud, the subcloud installation has these phases: .. code-block:: none - ~(keystone_admin)]$ system certificate-install -m docker_registry path_to_cert + ~(keystone_admin)]$ system certificate-install -m docker_registry path_to_cert If you prefer to install container images from the default WRS AWS ECR external registries, make the following substitutions for the @@ -338,15 +338,15 @@ subcloud, the subcloud installation has these phases: controller-0:/home/sysadmin# tail /var/log/dcmanager/subcloud1_install_2019-09-23-19-19-42.log TASK [wait_for] **************************************************************** ok: [subcloud1] - + controller-0:/home/sysadmin# tail /var/log/dcmanager/subcloud1_bootstrap_2019-09-23-19-03-44.log k8s.gcr.io: {password: secret, url: null} quay.io: {password: secret, url: null} ) - + TASK [bootstrap/bringup-essential-services : Mark the bootstrap as completed] *** changed: [subcloud1] - + PLAY RECAP ********************************************************************* subcloud1 : ok=230 changed=137 unreachable=0 failed=0 @@ -364,7 +364,7 @@ subcloud, the subcloud installation has these phases: .. code-block:: none REGISTRY="docker-registry" - SECRET_UUID='system service-parameter-list | fgrep + SECRET_UUID='system service-parameter-list | fgrep $REGISTRY | fgrep auth-secret | awk '{print $10}'' SECRET_REF='openstack secret list | fgrep $ {SECRET_UUID} | awk '{print $2}'' diff --git a/doc/source/dist_cloud/installing-a-subcloud-without-redfish-platform-management-service.rst b/doc/source/dist_cloud/installing-a-subcloud-without-redfish-platform-management-service.rst index 9556df67a..e0f5d3bca 100644 --- a/doc/source/dist_cloud/installing-a-subcloud-without-redfish-platform-management-service.rst +++ b/doc/source/dist_cloud/installing-a-subcloud-without-redfish-platform-management-service.rst @@ -78,15 +78,15 @@ subcloud, the subcloud installation process has two phases: -i : Specify input ISO file -o : Specify output ISO file -a : Specify ks-addon.cfg file - + -p : Specify boot parameter Examples: - -p rootfs_device=nvme0n1 + -p rootfs_device=nvme0n1 -p boot_device=nvme0n1 - + -p rootfs_device=/dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 -p boot_device=/dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 - + -d : Specify default boot menu option: 0 - Standard Controller, Serial Console @@ -98,7 +98,7 @@ subcloud, the subcloud installation process has two phases: NULL - Clear default selection -t : Specify boot menu timeout, in seconds - + The following example ks-addon.cfg, used with the -a option, sets up an initial IP interface at boot time by defining a |VLAN| on an Ethernet @@ -109,14 +109,14 @@ subcloud, the subcloud installation process has two phases: #### start ks-addon.cfg OAM_DEV=enp0s3 OAM_VLAN=1234 - + cat << EOF > /etc/sysconfig/network-scripts/ifcfg-$OAM_DEV DEVICE=$OAM_DEV BOOTPROTO=none ONBOOT=yes LINKDELAY=20 EOF - + cat << EOF > /etc/sysconfig/network-scripts/ifcfg-$OAM_DEV.$OAM_VLAN DEVICE=$OAM_DEV.$OAM_VLAN BOOTPROTO=dhcp @@ -151,21 +151,21 @@ subcloud, the subcloud installation process has two phases: system_mode: simplex name: "subcloud1" - + description: "test" location: "loc" - + 
management_subnet: 192.168.101.0/24 management_start_address: 192.168.101.2 management_end_address: 192.168.101.50 management_gateway_address: 192.168.101.1 - + external_oam_subnet: 10.10.10.0/24 external_oam_gateway_address: 10.10.10.1 external_oam_floating_address: 10.10.10.12 - + systemcontroller_gateway_address: 192.168.204.101 - + docker_registries: k8s.gcr.io: url: registry.central:9001/k8s.gcr.io @@ -204,7 +204,7 @@ subcloud, the subcloud installation process has two phases: .. note:: If you have a reason not to use the Central Cloud's local registry you - can pull the images from another local private docker registry. + can pull the images from another local private docker registry. #. You can use the Central Cloud's local registry to pull images on subclouds. The Central Cloud's local registry's HTTPS certificate must have the @@ -283,10 +283,10 @@ subcloud, the subcloud installation process has two phases: k8s.gcr.io: {password: secret, url: null} quay.io: {password: secret, url: null} ) - + TASK [bootstrap/bringup-essential-services : Mark the bootstrap as completed] *** changed: [subcloud1] - + PLAY RECAP ********************************************************************* subcloud1 : ok=230 changed=137 unreachable=0 failed=0 @@ -304,7 +304,7 @@ subcloud, the subcloud installation process has two phases: .. code-block:: none REGISTRY="docker-registry" - SECRET_UUID='system service-parameter-list | fgrep + SECRET_UUID='system service-parameter-list | fgrep $REGISTRY | fgrep auth-secret | awk '{print $10}'' SECRET_REF='openstack secret list | fgrep $ {SECRET_UUID} | awk '{print $2}'' diff --git a/doc/source/dist_cloud/installing-and-provisioning-the-central-cloud.rst b/doc/source/dist_cloud/installing-and-provisioning-the-central-cloud.rst index 3e31c6d2d..d9651e6ef 100644 --- a/doc/source/dist_cloud/installing-and-provisioning-the-central-cloud.rst +++ b/doc/source/dist_cloud/installing-and-provisioning-the-central-cloud.rst @@ -7,18 +7,18 @@ Install and Provisioning the Central Cloud ========================================== Installing the Central Cloud is similar to installing a standalone |prod| -system. +system. .. rubric:: |context| -The Central Cloud supports either +The Central Cloud supports either - an |AIO|-Duplex deployment configuration - a Standard with Dedicated Storage Nodes deployment Standard with Controller Storage and one or more workers deployment configuration, or -- a Standard with Dedicated Storage Nodes deployment configuration. +- a Standard with Dedicated Storage Nodes deployment configuration. .. rubric:: |proc| @@ -42,7 +42,7 @@ You will also need to make the following modification: management_start_address and management_end_address, as shown below) to exclude the IP addresses reserved for gateway routers that provide routing to the subclouds' management subnets. - + - Also, include the container images shown in bold below in additional\_local\_registry\_images, required for support of subcloud installs with the Redfish Platform Management Service, and subcloud installs diff --git a/doc/source/dist_cloud/managing-subcloud-groups.rst b/doc/source/dist_cloud/managing-subcloud-groups.rst index 70cedb4f1..91c813e3f 100644 --- a/doc/source/dist_cloud/managing-subcloud-groups.rst +++ b/doc/source/dist_cloud/managing-subcloud-groups.rst @@ -38,30 +38,30 @@ or firmware updates, see: .. _managing-subcloud-groups-ul-a3s-nqf-1nb: - To create an upgrade orchestration strategy use the :command:`dcmanager - upgrade-strategy create` command. 
- + upgrade-strategy create` command. + .. xbooklink For more information see, :ref:`Distributed Upgrade Orchestration Process Using the CLI `. - To create an update \(patch\) orchestration strategy use the - :command:`dcmanager patch-strategy create` command. - + :command:`dcmanager patch-strategy create` command. + .. xbooklink For more information see, :ref:`Creating an Update Strategy for Distributed Cloud Update Orchestration `. - To create a firmware update orchestration strategy use the - :command:`dcmanager fw-update-strategy create` command. - + :command:`dcmanager fw-update-strategy create` command. + .. xbooklink For more information see, :ref:`Device Image Update Orchestration `. .. seealso:: - :ref:`Creating Subcloud Groups ` + :ref:`Creating Subcloud Groups ` - :ref:`Orchestration Strategy Using Subcloud Groups ` + :ref:`Orchestration Strategy Using Subcloud Groups ` diff --git a/doc/source/dist_cloud/managing-subclouds-using-the-cli.rst b/doc/source/dist_cloud/managing-subclouds-using-the-cli.rst index 797fdbdec..6e447ef73 100644 --- a/doc/source/dist_cloud/managing-subclouds-using-the-cli.rst +++ b/doc/source/dist_cloud/managing-subclouds-using-the-cli.rst @@ -28,14 +28,14 @@ fails, delete subclouds, and monitor or change the managed status of subclouds. | 3 | subcloud3 | managed | online | out-of-sync | | 4 | subcloud4 | managed | offline | unknown | +----+-----------+--------------+--------------------+-------------+ - + - To show information for a subcloud, use the :command:`subcloud show` command. .. code-block:: none ~(keystone_admin)]$ dcmanager subcloud show - + For example: @@ -64,7 +64,7 @@ fails, delete subclouds, and monitor or change the managed status of subclouds. | patching_sync_status | in-sync | | platform_sync_status | in-sync | +-----------------------------+----------------------------+ - + - To show information about the oam-floating-ip field for a specific subcloud, use the :command:`subcloud @@ -98,7 +98,7 @@ fails, delete subclouds, and monitor or change the managed status of subclouds. | platform_sync_status | in-sync | | oam_floating_ip | 10.10.10.12 | +-----------------------------+----------------------------+ - + - To edit the settings for a subcloud, use the :command:`subcloud update` command. @@ -109,7 +109,7 @@ fails, delete subclouds, and monitor or change the managed status of subclouds. [–- description] \ [– location] \ - + - To toggle a subcloud between **Unmanaged** and **Managed**, pass these parameters to the :command:`subcloud` command. @@ -119,12 +119,12 @@ fails, delete subclouds, and monitor or change the managed status of subclouds. .. code-block:: none ~(keystone_admin)]$ dcmanager subcloud unmanage - + .. code-block:: none ~(keystone_admin)]$ dcmanager subcloud manage - + - To reconfigure a subcloud, if deployment fails, use the :command:`subcloud reconfig` command. @@ -135,11 +135,11 @@ fails, delete subclouds, and monitor or change the managed status of subclouds. ~(keystone_admin)]$ dcmanager subcloud reconfig --deploy-config \ <> --sysadmin-password <> - + where``--deploy-config`` must reference the deployment configuration file. - For more information, see either, - + For more information, see either, + .. xbooklink |inst-doc|: :ref:`The Deployment Manager ` or |inst-doc|: :ref:`Deployment Manager Examples ` for more @@ -154,14 +154,14 @@ fails, delete subclouds, and monitor or change the managed status of subclouds. .. 
code-block:: none ~(keystone_admin)]$ dcmanager subcloud manage - + - To delete a subcloud, use the :command:`subcloud delete` command. .. code-block:: none ~(keystone_admin)]$ dcmanager subcloud delete - + .. caution:: diff --git a/doc/source/dist_cloud/monitoring-subclouds-using-horizon.rst b/doc/source/dist_cloud/monitoring-subclouds-using-horizon.rst index 59f995228..8ffc0feac 100644 --- a/doc/source/dist_cloud/monitoring-subclouds-using-horizon.rst +++ b/doc/source/dist_cloud/monitoring-subclouds-using-horizon.rst @@ -14,20 +14,20 @@ subclouds from the System Controller. - To list subclouds, select **Distributed Cloud Admin** \> **Cloud Overview**. .. image:: figures/uhp1521894539008.png - + You can perform full-text searches or filter by column using the search-bar above the subcloud list. .. image:: figures/pdn1591034100660.png - - + + - To perform operations on a subcloud, use the **Actions** menu. .. image:: figures/pvr1591032739503.png - - + + .. caution:: If you delete a subcloud, then you must reinstall it before you can @@ -39,7 +39,7 @@ subclouds from the System Controller. window. .. image:: figures/rpn1518108364837.png - - + + diff --git a/doc/source/dist_cloud/switching-to-a-subcloud-from-the-system-controller.rst b/doc/source/dist_cloud/switching-to-a-subcloud-from-the-system-controller.rst index c8ec21e7e..d5ad4878b 100644 --- a/doc/source/dist_cloud/switching-to-a-subcloud-from-the-system-controller.rst +++ b/doc/source/dist_cloud/switching-to-a-subcloud-from-the-system-controller.rst @@ -27,7 +27,7 @@ subcloud hosts and networks, just as you would for any |prod| system - From the Horizon header, select a subcloud from the **Admin** menu. .. image:: figures/rpn1518108364837.png - + - From the Cloud Overview page, select **Alarm & Event Details** \> **Host Details** for the subcloud. diff --git a/doc/source/dist_cloud/updating-docker-registry-credentials-on-a-subcloud.rst b/doc/source/dist_cloud/updating-docker-registry-credentials-on-a-subcloud.rst index ddf4a8d1d..27816fbf0 100644 --- a/doc/source/dist_cloud/updating-docker-registry-credentials-on-a-subcloud.rst +++ b/doc/source/dist_cloud/updating-docker-registry-credentials-on-a-subcloud.rst @@ -37,9 +37,9 @@ subcloud to the sysinv service credentials of the systemController. .. code-block:: none #!/bin/bash -e - + USAGE="usage: ${0##*/} " - + if [ "$#" -ne 2 ] then echo Missing arguments. @@ -47,14 +47,14 @@ subcloud to the sysinv service credentials of the systemController. echo exit fi - + NEW_CREDS="username:$1 password:$2" - + echo - - for REGISTRY in docker-registry quay-registry elastic-registry gcr-registry k8s-registry + + for REGISTRY in docker-registry quay-registry elastic-registry gcr-registry k8s-registry do - + echo -n "Updating" $REGISTRY "credentials ." SECRET_UUID=`system service-parameter-list | fgrep $REGISTRY | fgrep auth-secret | awk '{print $10}'` if [ -z "$SECRET_UUID" ] @@ -67,7 +67,7 @@ subcloud to the sysinv service credentials of the systemController. echo -n "." SECRET_VALUE=`openstack secret get ${SECRET_REF} --payload -f value` echo -n "." - + openstack secret delete ${SECRET_REF} > /dev/null echo -n "." NEW_SECRET_VALUE=$NEW_CREDS @@ -78,7 +78,7 @@ subcloud to the sysinv service credentials of the systemController. system service-parameter-modify docker $REGISTRY auth-secret="${NEW_SECRET_UUID}" > /dev/null echo -n "." echo " done." 
- + echo -n "Validating $REGISTRY credentials updated to: " SECRET_UUID=`system service-parameter-list | fgrep $REGISTRY | fgrep auth-secret | awk '{print $10}'` if [ -z "$SECRET_UUID" ] @@ -88,9 +88,9 @@ subcloud to the sysinv service credentials of the systemController. SECRET_REF=`openstack secret list | fgrep ${SECRET_UUID} | awk '{print $2}'` SECRET_VALUE=`openstack secret get ${SECRET_REF} --payload -f value` echo $SECRET_VALUE - + echo - + done diff --git a/doc/source/node_management/kubernetes/configuring_cpu_core_assignments/configuring-cpu-core-assignments.rst b/doc/source/node_management/kubernetes/configuring_cpu_core_assignments/configuring-cpu-core-assignments.rst index 63f696448..65bdb0bfa 100644 --- a/doc/source/node_management/kubernetes/configuring_cpu_core_assignments/configuring-cpu-core-assignments.rst +++ b/doc/source/node_management/kubernetes/configuring_cpu_core_assignments/configuring-cpu-core-assignments.rst @@ -82,7 +82,7 @@ CPU cores from the Horizon Web interface. .. only:: partner - ../../_includes/configure-cpu-core-vswitch.rest + ../../_includes/configure-cpu-core-vswitch.rest **Shared** Not currently supported. diff --git a/doc/source/node_management/kubernetes/hardware_acceleration_devices/common-device-management-tasks.rst b/doc/source/node_management/kubernetes/hardware_acceleration_devices/common-device-management-tasks.rst index 6151eb379..b47d0206f 100644 --- a/doc/source/node_management/kubernetes/hardware_acceleration_devices/common-device-management-tasks.rst +++ b/doc/source/node_management/kubernetes/hardware_acceleration_devices/common-device-management-tasks.rst @@ -35,5 +35,5 @@ For a list of tasks see: - :ref:`Initiating a Device Image Update for a Host ` -- :ref:`Displaying the Status of Device Images ` +- :ref:`Displaying the Status of Device Images ` diff --git a/doc/source/node_management/kubernetes/hardware_acceleration_devices/n3000-fpga-forward-error-correction.rst b/doc/source/node_management/kubernetes/hardware_acceleration_devices/n3000-fpga-forward-error-correction.rst index 17c43a4c3..240849430 100644 --- a/doc/source/node_management/kubernetes/hardware_acceleration_devices/n3000-fpga-forward-error-correction.rst +++ b/doc/source/node_management/kubernetes/hardware_acceleration_devices/n3000-fpga-forward-error-correction.rst @@ -55,7 +55,7 @@ For example, run the following commands: ~(keystone_admin)$ system host-unlock worker-0 To pass the |FEC| device to a container, the following requests/limits must be -entered into the pod specification: +entered into the pod specification: .. code-block:: none diff --git a/doc/source/node_management/kubernetes/host_hardware_management/changing-hardware-components-for-a-storage-host.rst b/doc/source/node_management/kubernetes/host_hardware_management/changing-hardware-components-for-a-storage-host.rst index b5a025561..9e84ec2fd 100644 --- a/doc/source/node_management/kubernetes/host_hardware_management/changing-hardware-components-for-a-storage-host.rst +++ b/doc/source/node_management/kubernetes/host_hardware_management/changing-hardware-components-for-a-storage-host.rst @@ -76,7 +76,7 @@ can reproduce them later. #. Power up the host. If the host has been deleted from the Host Inventory, the host software - is reinstalled. + is reinstalled. .. From Power up the host .. xbookref For details, see :ref:`|inst-doc| `. 
diff --git a/doc/source/node_management/kubernetes/host_memory_provisioning/allocating-host-memory-using-horizon.rst b/doc/source/node_management/kubernetes/host_memory_provisioning/allocating-host-memory-using-horizon.rst index 36d3c2826..93571ebbc 100644 --- a/doc/source/node_management/kubernetes/host_memory_provisioning/allocating-host-memory-using-horizon.rst +++ b/doc/source/node_management/kubernetes/host_memory_provisioning/allocating-host-memory-using-horizon.rst @@ -25,7 +25,7 @@ per host. .. only:: starlingx A node may only allocate huge pages for a single size, either 2MiB or 1GiB. - + .. only:: partner .. include:: ../../../_includes/avs-note.rest @@ -92,11 +92,11 @@ informative message is displayed. |NUMA| Node. If no 2 MiB pages are required, type 0. Due to limitations in Kubernetes, only a single huge page size can be used per host, across Application memory. - + .. only:: partner .. include:: ../../../_includes/allocating-host-memory-using-horizon.rest - + :start-after: application-2m-text-begin :end-before: application-2m-text-end @@ -108,25 +108,25 @@ informative message is displayed. |NUMA| Node. If no 1 GiB pages are required, type 0. Due to limitations in Kubernetes, only a single huge page size can be used per host, across Application memory. - + .. only:: partner .. include:: ../../../_includes/allocating-host-memory-using-horizon.rest - + :start-after: application-1g-text-begin :end-before: application-1g-text-end .. only:: partner .. include:: ../../../_includes/allocating-host-memory-using-horizon.rest - + :start-after: vswitch-hugepage-1g-text-begin :end-before: vswitch-hugepage-1g-text-end .. only:: partner .. include:: ../../../_includes/allocating-host-memory-using-horizon.rest - + :start-after: vswitch-hugepage-size-node-text-begin :end-before: vswitch-hugepage-size-node-text-end diff --git a/doc/source/node_management/kubernetes/host_memory_provisioning/allocating-host-memory-using-the-cli.rst b/doc/source/node_management/kubernetes/host_memory_provisioning/allocating-host-memory-using-the-cli.rst index 7475d8868..4e106e3f4 100644 --- a/doc/source/node_management/kubernetes/host_memory_provisioning/allocating-host-memory-using-the-cli.rst +++ b/doc/source/node_management/kubernetes/host_memory_provisioning/allocating-host-memory-using-the-cli.rst @@ -9,7 +9,7 @@ Allocate Host Memory Using the CLI .. only:: starlingx You can edit the platform and huge page memory allocations for a |NUMA| - node from the CLI. + node from the CLI. .. only:: partner @@ -85,7 +85,7 @@ worker or an |AIO| controller. given processor. For example: - + .. only:: starlingx .. code-block:: none @@ -149,7 +149,7 @@ worker or an |AIO| controller. Use with the optional ``-f`` argument. This option specifies the intended function for hugepage allocation on application. - + .. only:: partner .. include:: ../../../_includes/allocating-host-memory-using-the-cli.rest @@ -183,7 +183,7 @@ worker or an |AIO| controller. number of 1 GiB huge pages to make available. Due to limitations in Kubernetes, only a single huge page size can be used per host, across Application memory. - + .. only:: partner .. include:: ../../../_includes/allocating-host-memory-using-the-cli.rest @@ -197,14 +197,14 @@ worker or an |AIO| controller. .. code-block:: none (keystone_admin)$ system host-memory-modify worker-0 1 -2M 4 - + .. only:: starlingx For openstack-compute labeled worker node or |AIO| controller, since Kubernetes only supports a single huge page size per worker node. 
'application' huge pages must also be 1G. The following example shows configuring 10 1G huge pages for application usage. For example: - + .. only:: partner .. include:: ../../../_includes/allocating-host-memory-using-the-cli.rest diff --git a/doc/source/node_management/kubernetes/node_interfaces/configuring-aggregated-ethernet-interfaces-using-the-cli.rst b/doc/source/node_management/kubernetes/node_interfaces/configuring-aggregated-ethernet-interfaces-using-the-cli.rst index 833f2cb36..85b1f9b6d 100644 --- a/doc/source/node_management/kubernetes/node_interfaces/configuring-aggregated-ethernet-interfaces-using-the-cli.rst +++ b/doc/source/node_management/kubernetes/node_interfaces/configuring-aggregated-ethernet-interfaces-using-the-cli.rst @@ -118,7 +118,7 @@ Settings `. For example, to attach an aggregated Ethernet interface named **bond0** to the platform management network, using member interfaces **enp0s8** - and **enp0s11** on **controller-0**: + and **enp0s11** on **controller-0**: .. code-block:: none diff --git a/doc/source/node_management/kubernetes/node_interfaces/configuring-ethernet-interfaces-on-sriov-interface-using-cli.rst b/doc/source/node_management/kubernetes/node_interfaces/configuring-ethernet-interfaces-on-sriov-interface-using-cli.rst index ec77aed51..b9d2be21a 100644 --- a/doc/source/node_management/kubernetes/node_interfaces/configuring-ethernet-interfaces-on-sriov-interface-using-cli.rst +++ b/doc/source/node_management/kubernetes/node_interfaces/configuring-ethernet-interfaces-on-sriov-interface-using-cli.rst @@ -38,7 +38,7 @@ to platform networks using the CLI. | | | | | | | | | | | e7bd04f...| cluster0 | platform | vlan | 158 | [] | [u'sriov0'] | [] | MTU=1500 | +-----------+----------+-----------+----------+------+---------------+-------------+--------------------------------+------------+ - + #. Create an Ethernet interface. diff --git a/doc/source/node_management/kubernetes/node_interfaces/configuring-vlan-type-interfaces-using-the-sriov-interface-from-the-cli.rst b/doc/source/node_management/kubernetes/node_interfaces/configuring-vlan-type-interfaces-using-the-sriov-interface-from-the-cli.rst index be1a71de2..b2230de76 100644 --- a/doc/source/node_management/kubernetes/node_interfaces/configuring-vlan-type-interfaces-using-the-sriov-interface-from-the-cli.rst +++ b/doc/source/node_management/kubernetes/node_interfaces/configuring-vlan-type-interfaces-using-the-sriov-interface-from-the-cli.rst @@ -65,7 +65,7 @@ For more information, see :ref:`Provisioning SR-IOV VF Interfaces using the CLI ~(keystone_admin)$ system host-if-add -V \ -c <--ifclass> [] - + where the following options are available: diff --git a/doc/source/node_management/kubernetes/node_interfaces/link-aggregation-settings.rst b/doc/source/node_management/kubernetes/node_interfaces/link-aggregation-settings.rst index a48e8e8c9..af676dc42 100644 --- a/doc/source/node_management/kubernetes/node_interfaces/link-aggregation-settings.rst +++ b/doc/source/node_management/kubernetes/node_interfaces/link-aggregation-settings.rst @@ -32,16 +32,16 @@ kernel Ethernet Bonding Driver documentation available online - Description - Supported Interface Types * - Active-backup - + \(default value\) - Provides fault tolerance. Only one slave interface at a time is available. The backup slave interface becomes active only when the active slave interface fails. 
- + For platform interfaces \(such as, |OAM|, cluster-host, and management interfaces\), the system will select the interface with the lowest |MAC| address as the primary interface when all slave interfaces are - enabled. + enabled. - Management, |OAM|, cluster-host, and data interface * - Balanced XOR - Provides aggregated bandwidth and fault tolerance. The same @@ -53,12 +53,12 @@ kernel Ethernet Bonding Driver documentation available online You can modify the transmit policy using the xmit-hash-policy option. For details, see :ref:`Table 2 - `. + `. - |OAM|, cluster-host, and data interfaces * - 802.3ad - - Provides aggregated bandwidth and fault tolerance. Implements dynamic + - Provides aggregated bandwidth and fault tolerance. Implements dynamic link aggregation as per the IEEE 802.3ad |LACP| specification. - + You can modify the transmit policy using the xmit-hash-policy option. For details, see :ref:`Table 2 `. @@ -70,8 +70,8 @@ kernel Ethernet Bonding Driver documentation available online one of the aggregated interfaces during |PXE| boot. If the far-end switch is configured to use active |LACP|, it can establish a |LAG| and use either interface, potentially resulting in a communication failure - during the boot process. - - Management, |OAM|, cluster-host, and data interface + during the boot process. + - Management, |OAM|, cluster-host, and data interface .. _link-aggregation-settings-xmit-hash-policy: @@ -81,22 +81,22 @@ kernel Ethernet Bonding Driver documentation available online * - Options - Description - - Supported Interface Types + - Supported Interface Types * - Layer 2 \(default value\) - Hashes on source and destination |MAC| addresses. - |OAM|, internal management, cluster-host, and data interfaces \(worker - nodes\). + nodes\). * - Layer 2 + 3 - Hashes on source and destination |MAC| addresses, and on source and destination IP addresses. - - |OAM|, internal management, and cluster-host + - |OAM|, internal management, and cluster-host * - Layer 3 + 4 - Hashes on source and destination IP addresses, and on source and destination ports. - - |OAM|, internal management, and cluster-host - + - |OAM|, internal management, and cluster-host + .. list-table:: Table 3. primary_reselect Options :widths: auto @@ -104,32 +104,32 @@ kernel Ethernet Bonding Driver documentation available online * - Options - Description - - Supported Interface Types + - Supported Interface Types * - Always \(default value\) - The primary slave becomes an active slave whenever it comes back up. - - |OAM|, internal management, and cluster-host + - |OAM|, internal management, and cluster-host * - Better - The primary slave becomes active slave whenever it comes back up, if the speed and the duplex of the primary slave is better than the speed duplex of the current active slave. - - |OAM|, internal management, and cluster-host + - |OAM|, internal management, and cluster-host * - Failure - The primary slave becomes the active slave only if the current active - slave fails and the primary slave is up. - - |OAM|, internal management, and cluster-host + slave fails and the primary slave is up. + - |OAM|, internal management, and cluster-host ----------------------------------------- LAG Configurations for AIO Duplex Systems ----------------------------------------- - + For a duplex-direct system set-up, use a |LAG| mode with active-backup for the management interface when attaching cables between the active and standby controller nodes. 
When both interfaces are enabled, the system automatically selects the primary interface within the |LAG| with the lowest |MAC| address on the active controller to connect to the primary interface within the |LAG| with the lowest |MAC| address on the standby controller. - + The controllers act independently of each other when selecting the primary interface. Therefore, it is critical that the inter-node cabling is completed to ensure that both nodes select a primary interface that is attached to the @@ -138,15 +138,14 @@ attachments must be from the lowest |MAC| address to the lowest |MAC| address for the first cable, and the next lowest |MAC| address to the next lowest |MAC| address for the second cable. Failure to follow these cabling requirements will result in a loss of communication between the two nodes. - + In addition to the special cabling requirements, the node BIOS settings may need to be configured to ensure that the node attempts to network boot from the lowest |MAC| address interface within the |LAG|. This may be required only on systems that enable all hardware interfaces during network booting rather than only enabling the interface that is currently selected for booting. - + Configure the cables associated with the management |LAG| so that the primary interface within the |LAG| with the lowest |MAC| address on the active controller connects to the primary interface within the |LAG| with the lowest |MAC| address on standby controller. - \ No newline at end of file diff --git a/doc/source/node_management/openstack/configuring-a-flavor-to-use-a-generic-pci-device.rst b/doc/source/node_management/openstack/configuring-a-flavor-to-use-a-generic-pci-device.rst index 7f80904d3..db032fba2 100644 --- a/doc/source/node_management/openstack/configuring-a-flavor-to-use-a-generic-pci-device.rst +++ b/doc/source/node_management/openstack/configuring-a-flavor-to-use-a-generic-pci-device.rst @@ -58,7 +58,7 @@ a Generic PCI Device for Use by VMs .. code-block:: none ~(keystone_admin)$ openstack flavor set flavor_name --property "pci_passthrough:alias"="pci_alias[:number_of_devices]" - + where **** @@ -97,7 +97,7 @@ a Generic PCI Device for Use by VMs defined class code for 'Display Controller' \(0x03\). .. note:: - + On a system with multiple cards that use the same default |PCI| alias, you must assign and use a unique |PCI| alias for each one. @@ -115,20 +115,20 @@ a Generic PCI Device for Use by VMs .. code-block:: none ~(keystone_admin)$ openstack flavor set flavor_name --property "pci_passthrough:alias"="gpu:1" - + To make a GPU device from a specific vendor available to a guest: .. code-block:: none ~(keystone_admin)$ openstack flavor set flavor_name --property "pci_passthrough:alias"="nvidia-tesla-p40:1" - + To make multiple |PCI| devices available, use the following command: .. code-block:: none ~(keystone_admin)$ openstack flavor set flavor_name --property "pci_passthrough:alias"="gpu:1, qat-c62x-vf:2" - + diff --git a/doc/source/node_management/openstack/configuring-pci-passthrough-ethernet-interfaces.rst b/doc/source/node_management/openstack/configuring-pci-passthrough-ethernet-interfaces.rst index 4947a6671..54635b17d 100644 --- a/doc/source/node_management/openstack/configuring-pci-passthrough-ethernet-interfaces.rst +++ b/doc/source/node_management/openstack/configuring-pci-passthrough-ethernet-interfaces.rst @@ -50,8 +50,8 @@ already, and that |VLAN| ID 10 is a valid segmentation ID assigned to The Edit Interface dialog appears. .. 
image:: ../figures/ptj1538163621289.png - - + + Select **pci-passthrough**, from the **Interface Class** drop-down, and then select the data network to attach the interface. @@ -78,8 +78,8 @@ already, and that |VLAN| ID 10 is a valid segmentation ID assigned to .. image:: ../figures/bek1516655307871.png - - + + Click the **Next** button to proceed to the Subnet tab. diff --git a/doc/source/node_management/openstack/exposing-a-generic-pci-device-for-use-by-vms.rst b/doc/source/node_management/openstack/exposing-a-generic-pci-device-for-use-by-vms.rst index c0f6ef9ed..6908881b2 100644 --- a/doc/source/node_management/openstack/exposing-a-generic-pci-device-for-use-by-vms.rst +++ b/doc/source/node_management/openstack/exposing-a-generic-pci-device-for-use-by-vms.rst @@ -45,7 +45,7 @@ To edit a device, you must first lock the host. #. Click **Edit Device**. .. image:: ../figures/jow1452530556357.png - + #. Update the information as required. diff --git a/doc/source/node_management/openstack/generic-pci-passthrough.rst b/doc/source/node_management/openstack/generic-pci-passthrough.rst index 58940c5e8..3bd276b18 100644 --- a/doc/source/node_management/openstack/generic-pci-passthrough.rst +++ b/doc/source/node_management/openstack/generic-pci-passthrough.rst @@ -20,7 +20,7 @@ automatically inventoried on a host: .. code-block:: none ~(keystone_admin)$ system host-device-list controller-0 --all - + You can use the following command from the |CLI| to list the devices for a host, for example: @@ -30,7 +30,7 @@ host, for example: ~(keystone_admin)$ system host-device-list --all controller-0 +-------------+----------+------+-------+-------+------+--------+--------+-------------+-------+ | name | address | class| vendor| device| class| vendor | device | numa_node |enabled| - | | | id | id | id | | name | name | | | + | | | id | id | id | | name | name | | | +------------+----------+-------+-------+-------+------+--------+--------+-------------+-------+ | pci_0000_05.| 0000:05:.| 030. | 10de | 13f2 | VGA. | NVIDIA.| GM204GL| 0 | True | | pci_0000_06.| 0000:06:.| 030. | 10de | 13f2 | VGA. | NVIDIA.| GM204GL| 0 | True | diff --git a/doc/source/node_management/openstack/index.rst b/doc/source/node_management/openstack/index.rst index f714cc0f1..048dcebb3 100644 --- a/doc/source/node_management/openstack/index.rst +++ b/doc/source/node_management/openstack/index.rst @@ -16,7 +16,7 @@ PCI Device Access for VMs .. toctree:: :maxdepth: 1 - + sr-iov-encryption-acceleration configuring-pci-passthrough-ethernet-interfaces pci-passthrough-ethernet-interface-devices diff --git a/doc/source/node_management/openstack/using-labels-to-identify-openstack-nodes.rst b/doc/source/node_management/openstack/using-labels-to-identify-openstack-nodes.rst index 3a4df182b..de6368b1f 100644 --- a/doc/source/node_management/openstack/using-labels-to-identify-openstack-nodes.rst +++ b/doc/source/node_management/openstack/using-labels-to-identify-openstack-nodes.rst @@ -92,4 +92,4 @@ Nodes must be locked before labels can be assigned or removed. .. 
include:: ../../_includes/using-labels-to-identify-openstack-nodes.rest - :start-after: table-1-of-contents-end + :start-after: table-1-of-contents-end diff --git a/doc/source/operations/ceph_cluster_aio_duplex_migration.rst b/doc/source/operations/ceph_cluster_aio_duplex_migration.rst index 18f78d576..7eca6fb62 100644 --- a/doc/source/operations/ceph_cluster_aio_duplex_migration.rst +++ b/doc/source/operations/ceph_cluster_aio_duplex_migration.rst @@ -935,7 +935,7 @@ Login to pod rook-ceph-tools, get generated key for client.admin and ceph.conf i On host controller-0 and controller-1 replace /etc/ceph/ceph.conf and /etc/ceph/keyring with content got from pod rook-ceph-tools. -Update configmap ceph-etc, with data field, with new mon ip +Update configmap ceph-etc, with data field, with new mon ip :: diff --git a/doc/source/planning/kubernetes/network-requirements-ip-support.rst b/doc/source/planning/kubernetes/network-requirements-ip-support.rst index e1d46f9a8..65c1bcd0a 100755 --- a/doc/source/planning/kubernetes/network-requirements-ip-support.rst +++ b/doc/source/planning/kubernetes/network-requirements-ip-support.rst @@ -22,8 +22,8 @@ table lists IPv4 and IPv6 support for different networks: - IPv6 Support - Comment * - |PXE| boot - - Y - - N + - Y + - N - If present, the |PXE| boot network is used for |PXE| booting of new hosts \(instead of using the internal management network\), and must be untagged. It is limited to IPv4, because the |prod| installer does not @@ -39,4 +39,4 @@ table lists IPv4 and IPv6 support for different networks: * - Cluster Host - Y - Y - - The Cluster Host network supports IPv4 or IPv6 addressing. + - The Cluster Host network supports IPv4 or IPv6 addressing. diff --git a/doc/source/planning/kubernetes/starlingx-hardware-requirements.rst b/doc/source/planning/kubernetes/starlingx-hardware-requirements.rst index 384ef50ff..19d210233 100755 --- a/doc/source/planning/kubernetes/starlingx-hardware-requirements.rst +++ b/doc/source/planning/kubernetes/starlingx-hardware-requirements.rst @@ -113,8 +113,8 @@ in the following table. * - Minimum Processor Class - Dual-CPU Intel® Xeon® E5 26xx Family \(SandyBridge\) 8 cores/socket - or - + or + Single-CPU Intel Xeon D-15xx Family, 8 cores \(low-power/low-cost option for Simplex deployments\) @@ -126,13 +126,13 @@ in the following table. - Platform: * Socket 0: 7GB \(by default, configurable\) - + * Socket 1: 1GB \(by default, configurable\) - Application: * Socket 0: Remaining memory - + * Socket 1: Remaining memory * - Minimum Primary Disk - 500 GB - |SSD| or |NVMe| diff --git a/doc/source/planning/openstack/block-storage-for-virtual-machines.rst b/doc/source/planning/openstack/block-storage-for-virtual-machines.rst index e5feeb2dc..a938d685b 100755 --- a/doc/source/planning/openstack/block-storage-for-virtual-machines.rst +++ b/doc/source/planning/openstack/block-storage-for-virtual-machines.rst @@ -99,7 +99,7 @@ storage by setting a flavor extra specification. .. caution:: Unlike Cinder-based storage, Ephemeral storage does not persist if the instance is terminated or the compute node fails. - + .. 
_block-storage-for-virtual-machines-d29e17: In addition, for local Ephemeral storage, migration and resizing support diff --git a/doc/source/planning/openstack/hardware-requirements.rst b/doc/source/planning/openstack/hardware-requirements.rst index 1d1d7beaa..458e5cae0 100755 --- a/doc/source/planning/openstack/hardware-requirements.rst +++ b/doc/source/planning/openstack/hardware-requirements.rst @@ -220,7 +220,7 @@ only the Interface sections are modified for |prod-os|. | Intel Virtualization \(VTD, VTX\) | Enabled | +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -.. [#] For more information, see :ref:`The PXE Boot Network `. +.. [#] For more information, see :ref:`The PXE Boot Network `. .. _hardware-requirements-section-if-scenarios: diff --git a/doc/source/planning/openstack/installation-and-resource-planning-verified-commercial-hardware.rst b/doc/source/planning/openstack/installation-and-resource-planning-verified-commercial-hardware.rst index 1308b0de4..9229e3ef9 100755 --- a/doc/source/planning/openstack/installation-and-resource-planning-verified-commercial-hardware.rst +++ b/doc/source/planning/openstack/installation-and-resource-planning-verified-commercial-hardware.rst @@ -16,7 +16,7 @@ here. +----------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Component | Approved Hardware | - +==========================================================+=========================================================================================================================================================================================================================================================================================================================================================================================================================================+ + +----------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Hardware Platforms | - Hewlett Packard Enterprise | | | | | | | @@ -122,7 +122,7 @@ here. 
| | | | | - Mellanox MT27700 Family \(ConnectX-4\) 40G | +----------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | NICs Verified for Data Interfaces [#f1]_ | The following NICs are supported: | + | NICs Verified for Data Interfaces | The following NICs are supported: | | | | | | - Intel I350 \(Powerville\) 1G | | | | diff --git a/doc/source/planning/openstack/resource-planning.rst b/doc/source/planning/openstack/resource-planning.rst index cc733acde..629d35f0f 100644 --- a/doc/source/planning/openstack/resource-planning.rst +++ b/doc/source/planning/openstack/resource-planning.rst @@ -17,7 +17,7 @@ Resource Placement maximum throughput with |SRIOV|. .. only:: starlingx - + A |VM| such as VNF 6 in NUMA-REF will not have the same performance as VNF 1 and VNF 2. There are multiple ways to maximize performance for VNF 6 in this case: diff --git a/doc/source/security/index.rst b/doc/source/security/index.rst index ae30964c9..3839deb0b 100644 --- a/doc/source/security/index.rst +++ b/doc/source/security/index.rst @@ -41,5 +41,5 @@ OpenStack .. toctree:: :maxdepth: 2 - + openstack/index \ No newline at end of file diff --git a/doc/source/security/kubernetes/assign-pod-security-policies.rst b/doc/source/security/kubernetes/assign-pod-security-policies.rst index 0cfd3e206..1a81a3d6f 100644 --- a/doc/source/security/kubernetes/assign-pod-security-policies.rst +++ b/doc/source/security/kubernetes/assign-pod-security-policies.rst @@ -24,7 +24,7 @@ directly create pods since they have access to the **privileged** |PSP|. Also, based on the ClusterRoleBindings and RoleBindings automatically added by |prod|, all users with cluster-admin roles can also create privileged Deployment/ReplicaSets/etc. in the kube-system namespace and restricted -Deployment/ReplicaSets/etc. in any other namespace. +Deployment/ReplicaSets/etc. in any other namespace. In order to enable privileged Deployment/ReplicaSets/etc. to be created in diff --git a/doc/source/security/kubernetes/configure-oidc-auth-applications.rst b/doc/source/security/kubernetes/configure-oidc-auth-applications.rst index 66a3fdc7f..c406705ce 100644 --- a/doc/source/security/kubernetes/configure-oidc-auth-applications.rst +++ b/doc/source/security/kubernetes/configure-oidc-auth-applications.rst @@ -128,7 +128,7 @@ and uploaded by default. .. code-block:: none ~(keystone_admin)]$ system helm-override-show oidc-auth-apps dex kube-system - + config: staticClients: - id: stx-oidc-client-app @@ -147,7 +147,7 @@ and uploaded by default. oidc-client container and the dex container. It is recommended that you configure a unique, more secure **client\_secret** by specifying the value in the dex overrides file, as shown in the example below. - + .. code-block:: none config: @@ -155,7 +155,7 @@ and uploaded by default. - id: stx-oidc-client-app name: STX OIDC Client app redirectURIs: ['/callback'] - secret: BetterSecret + secret: BetterSecret client_secret: BetterSecret expiry: idTokens: "10h" @@ -212,7 +212,7 @@ and uploaded by default. /home/sysadmin/oidc-client-overrides.yaml file. .. 
code-block:: none - + config: client_secret: BetterSecret @@ -223,7 +223,7 @@ and uploaded by default. ~(keystone_admin)]$ system helm-override-update oidc-auth-apps oidc-client kube-system --values /home/sysadmin/oidc-client-overrides.yaml .. note:: - + If you need to manually override the secrets, the client\_secret in the oidc-client overrides must match the staticClients secret and client\_secret in the dex overrides, otherwise the oidc-auth |CLI| diff --git a/doc/source/security/kubernetes/configure-remote-cli-access.rst b/doc/source/security/kubernetes/configure-remote-cli-access.rst index 7e2da90e4..d1d2ae512 100644 --- a/doc/source/security/kubernetes/configure-remote-cli-access.rst +++ b/doc/source/security/kubernetes/configure-remote-cli-access.rst @@ -37,7 +37,7 @@ either of the above two methods. ` :ref:`Using Container-backed Remote CLIs and Clients - ` + ` :ref:`Install Kubectl and Helm Clients Directly on a Host ` diff --git a/doc/source/security/kubernetes/index.rst b/doc/source/security/kubernetes/index.rst index 4791ce98a..121463ffc 100644 --- a/doc/source/security/kubernetes/index.rst +++ b/doc/source/security/kubernetes/index.rst @@ -121,7 +121,7 @@ Deprovision Windows Active Directory .. toctree:: :maxdepth: 1 - deprovision-windows-active-directory-authentication + deprovision-windows-active-directory-authentication **************** Firewall Options @@ -240,7 +240,7 @@ UEFI Secure Boot overview-of-uefi-secure-boot use-uefi-secure-boot - + *********************** Trusted Platform Module *********************** @@ -317,13 +317,13 @@ Security Features secure-https-external-connectivity security-hardening-firewall-options isolate-starlingx-internal-cloud-management-network - + *************************************** -Appendix: Locally creating certifciates +Appendix: Locally creating certifciates *************************************** .. toctree:: :maxdepth: 1 - + creating-certificates-locally-using-cert-manager-on-the-controller creating-certificates-locally-using-openssl diff --git a/doc/source/security/kubernetes/kubernetes-cli-from-local-ldap-linux-account-login.rst b/doc/source/security/kubernetes/kubernetes-cli-from-local-ldap-linux-account-login.rst index 0c906621b..5f5e4957f 100644 --- a/doc/source/security/kubernetes/kubernetes-cli-from-local-ldap-linux-account-login.rst +++ b/doc/source/security/kubernetes/kubernetes-cli-from-local-ldap-linux-account-login.rst @@ -55,7 +55,7 @@ your Kubernetes Service Account. $ kubectl config set-cluster mycluster --server=https://192.168.206.1:6443 --insecure-skip-tls-verify $ kubectl config set-credentials joe-admin@mycluster --token=$TOKEN $ kubectl config set-context joe-admin@mycluster --cluster=mycluster --user joe-admin@mycluster - $ kubectl config use-context joe-admin@mycluster + $ kubectl config use-context joe-admin@mycluster You now have admin access to |prod| Kubernetes cluster. diff --git a/doc/source/security/kubernetes/kubernetes-root-ca-certificate.rst b/doc/source/security/kubernetes/kubernetes-root-ca-certificate.rst index c37a1ed1c..e2a708b46 100644 --- a/doc/source/security/kubernetes/kubernetes-root-ca-certificate.rst +++ b/doc/source/security/kubernetes/kubernetes-root-ca-certificate.rst @@ -30,7 +30,7 @@ servers connecting to the |prod|'s Kubernetes API endpoint. The administrator can also provide values to add to the Kubernetes API server certificate **Subject Alternative Name** list using the apiserver\_cert\_sans override parameter. 
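For illustration only, a minimal sketch of how these bootstrap overrides might look in an Ansible override file; the file path, certificate file names, and **Subject Alternative Name** entries shown here are assumptions, not values taken from this guide:

.. code-block:: none

    # excerpt from an Ansible bootstrap overrides file (for example, /home/sysadmin/localhost.yml)
    k8s_root_ca_cert: /home/sysadmin/k8s_root_ca.crt
    k8s_root_ca_key: /home/sysadmin/k8s_root_ca.key
    apiserver_cert_sans:
      - hostname.mydomain.com
      - 10.10.10.10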
- + Use the bootstrap override values and , as part of the installation procedure to specify the diff --git a/doc/source/security/kubernetes/local-ldap-linux-user-accounts.rst b/doc/source/security/kubernetes/local-ldap-linux-user-accounts.rst index 4f091ad37..4a21692fc 100644 --- a/doc/source/security/kubernetes/local-ldap-linux-user-accounts.rst +++ b/doc/source/security/kubernetes/local-ldap-linux-user-accounts.rst @@ -89,4 +89,4 @@ from the console ports of the hosts; no SSH access is allowed. .. seealso:: - :ref:`Creating LDAP Linux Accounts ` \ No newline at end of file + :ref:`Creating LDAP Linux Accounts ` \ No newline at end of file diff --git a/doc/source/security/kubernetes/remote-access-index.rst b/doc/source/security/kubernetes/remote-access-index.rst index ed38f387d..df231ba7a 100644 --- a/doc/source/security/kubernetes/remote-access-index.rst +++ b/doc/source/security/kubernetes/remote-access-index.rst @@ -4,7 +4,7 @@ Remote CLI Access .. toctree:: :maxdepth: 1 - + configure-remote-cli-access security-configure-container-backed-remote-clis-and-clients security-install-kubectl-and-helm-clients-directly-on-a-host diff --git a/doc/source/security/kubernetes/security-access-the-gui.rst b/doc/source/security/kubernetes/security-access-the-gui.rst index 0804e25b6..ae34fc004 100644 --- a/doc/source/security/kubernetes/security-access-the-gui.rst +++ b/doc/source/security/kubernetes/security-access-the-gui.rst @@ -16,29 +16,29 @@ from a browser. * Do one of the following: * **For the StarlingX Horizon Web interface** - - Access the Horizon in your browser at the address: - - http://:8080 - + + Access the Horizon in your browser at the address: + + http://:8080 + Use the username **admin** and the sysadmin password to log in. * **For the Kubernetes Dashboard** - Access the Kubernetes Dashboard GUI in your browser at the address: - - http://: - + Access the Kubernetes Dashboard GUI in your browser at the address: + + http://: + Where is the port that the dashboard was installed on. - + Login using credentials in kubectl config on your remote workstation running the browser; see :ref:`Install Kubectl and Helm Clients Directly on a Host ` as an example for setting up kubectl config credentials for an admin - user. - - .. note:: + user. + + .. note:: The Kubernetes Dashboard is not installed by default. See |prod| System Configuration: :ref:`Install the Kubernetes Dashboard ` for information on how to install diff --git a/doc/source/security/kubernetes/security-configure-container-backed-remote-clis-and-clients.rst b/doc/source/security/kubernetes/security-configure-container-backed-remote-clis-and-clients.rst index 8b8088745..3e81f8bcb 100644 --- a/doc/source/security/kubernetes/security-configure-container-backed-remote-clis-and-clients.rst +++ b/doc/source/security/kubernetes/security-configure-container-backed-remote-clis-and-clients.rst @@ -72,7 +72,7 @@ additional configuration is required in order to use :command:`helm`. .. code-block:: none $ source /etc/platform/openrc - ~(keystone_admin)]$ + ~(keystone_admin)]$ #. Set environment variables. @@ -180,13 +180,13 @@ additional configuration is required in order to use :command:`helm`. #. Log in to Horizon as the user and tenant that you want to configure remote access for. - + In this example, the 'admin' user in the 'admin' tenant. #. Navigate to **Project** \> **API Access** \> **Download Openstack RC file**. #. Select **Openstack RC file**. - + The file admin-openrc.sh downloads. 
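As an optional sanity check before using the downloaded file with :command:`configure_client.sh`, you can source it and confirm that the expected OpenStack environment variables are set. This is a minimal sketch; the OAM floating IP placeholder is an assumption and the exact prompt text may vary:

.. code-block:: none

    $ source admin-openrc.sh
    Please enter your OpenStack Password for project admin as user admin:
    $ env | grep OS_AUTH_URL
    OS_AUTH_URL=http://<oam-floating-ip>:5000/v3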
@@ -206,7 +206,7 @@ additional configuration is required in order to use :command:`helm`. This step will also generate a remote CLI/client RC file. #. Change to the location of the extracted tarball. - + .. parsed-literal:: $ cd $HOME/|prefix|-remote-clients-/ @@ -222,9 +222,9 @@ additional configuration is required in order to use :command:`helm`. .. code-block:: none $ mkdir -p $HOME/remote_cli_wd - - #. Run the :command:`configure\_client.sh` script. + + #. Run the :command:`configure\_client.sh` script. .. code-block:: none @@ -288,8 +288,8 @@ additional configuration is required in order to use :command:`helm`. $ ./configure_client.sh -t platform -r admin-openrc.sh -k admin-kubeconfig -w $HOME/remote_cli_wd -p 625619392498.dkr.ecr.us-west-2.amazonaws.com/docker.io/starlingx/stx-platformclients:stx.4.0-v1.3.0-wrs.1 - - + + If you specify repositories that require authentication, you must first perform a :command:`docker login` to that repository before using @@ -312,14 +312,14 @@ remote platform CLIs can be used in any shell after sourcing the generated remote CLI/client RC file. This RC file sets up the required environment variables and aliases for the remote |CLI| commands. -.. note:: +.. note:: Consider adding this command to your .login or shell rc file, such that your shells will automatically be initialized with the environment variables and aliases for the remote |CLI| commands. See :ref:`Using Container-backed Remote CLIs and Clients ` for details. -**Related information** +**Related information** .. seealso:: diff --git a/doc/source/security/kubernetes/using-container-backed-remote-clis-and-clients.rst b/doc/source/security/kubernetes/using-container-backed-remote-clis-and-clients.rst index 35fdb938e..efa13daeb 100644 --- a/doc/source/security/kubernetes/using-container-backed-remote-clis-and-clients.rst +++ b/doc/source/security/kubernetes/using-container-backed-remote-clis-and-clients.rst @@ -44,7 +44,7 @@ variables and aliases for the remote |CLI| commands. .. code-block:: none - Please enter your OpenStack Password for project admin as user admin: + Please enter your OpenStack Password for project admin as user admin: root@myclient:/home/user/remote_cli_wd# system host-list +----+--------------+-------------+----------------+-------------+--------------+ | id | hostname | personality | administrative | operational | availability | @@ -66,7 +66,7 @@ variables and aliases for the remote |CLI| commands. ceph-pools-audit-1569849000-cb988 0/1 Completed 0 2m25s coredns-7cf476b5c8-5x724 1/1 Running 1 3d2h ... - root@myclient:/home/user/remote_cli_wd# + root@myclient:/home/user/remote_cli_wd# .. note:: Some |CLI| commands are designed to leave you in a shell prompt, for example: @@ -143,7 +143,7 @@ variables and aliases for the remote |CLI| commands. -**Related information** +**Related information** .. seealso:: diff --git a/doc/source/security/openstack/config-and-management-using-container-backed-remote-clis-and-clients.rst b/doc/source/security/openstack/config-and-management-using-container-backed-remote-clis-and-clients.rst index 3ec01e05c..1857b12ef 100644 --- a/doc/source/security/openstack/config-and-management-using-container-backed-remote-clis-and-clients.rst +++ b/doc/source/security/openstack/config-and-management-using-container-backed-remote-clis-and-clients.rst @@ -46,8 +46,8 @@ variables and aliases for the remote |CLI| commands. .. 
code-block:: none - root@myclient:/home/user/remote_cli_wd# source remote_client_openstack.sh - Please enter your OpenStack Password for project admin as user admin: + root@myclient:/home/user/remote_cli_wd# source remote_client_openstack.sh + Please enter your OpenStack Password for project admin as user admin: root@myclient:/home/user/remote_cli_wd# openstack endpoint list +----------------------------------+-----------+--------------+-----------------+---------+-----------+------------------------------------------------------------+ | ID | Region | Service Name | Service Type | Enabled | Interface | URL | @@ -77,7 +77,7 @@ variables and aliases for the remote |CLI| commands. +--------------------------------------+-----------+-----------+------+-------------+ | f2421d88-69e8-4e2f-b8aa-abd7fb4de1c5 | my-volume | available | 8 | | +--------------------------------------+-----------+-----------+------+-------------+ - root@myclient:/home/user/remote_cli_wd# + root@myclient:/home/user/remote_cli_wd# .. note:: Some commands used by remote |CLI| are designed to leave you in a shell @@ -108,6 +108,6 @@ variables and aliases for the remote |CLI| commands. root@myclient:/home/user/remote_cli_wd# openstack image create --public --disk-format qcow2 --container-format bare --file ubuntu.qcow2 ubuntu_image - + diff --git a/doc/source/security/openstack/configure-remote-clis-and-clients.rst b/doc/source/security/openstack/configure-remote-clis-and-clients.rst index bea3c06e0..c02539de8 100644 --- a/doc/source/security/openstack/configure-remote-clis-and-clients.rst +++ b/doc/source/security/openstack/configure-remote-clis-and-clients.rst @@ -93,7 +93,7 @@ The following procedure shows how to configure the Container-backed Remote $ ./configure_client.sh -t openstack -r admin_openrc.sh -w $HOME/remote_cli_wd -p 625619392498.dkr.ecr.us-west-2.amazonaws.com/docker.io/starlingx/stx-platformclients:stx.4.0-v1.3.0-|prefix|.1 - + If you specify repositories that require authentication, as shown above, you must remember to perform a :command:`docker login` to that @@ -145,7 +145,7 @@ The following procedure shows how to configure the Container-backed Remote $ ./configure_client.sh -t platform -r admin-openrc.sh -k admin-kubeconfig -w $HOME/remote_cli_wd -p 625619392498.dkr.ecr.us-west-2.amazonaws.com/docker.io/starlingx/stx-platformclients:stx.4.0-v1.3.0-|prefix|.1 - + If you specify repositories that require authentication, you must first perform a :command:`docker login` to that repository before using remote |CLIs|. diff --git a/doc/source/security/openstack/openstack-keystone-accounts.rst b/doc/source/security/openstack/openstack-keystone-accounts.rst index 7fa72f5a8..60d0a36a2 100644 --- a/doc/source/security/openstack/openstack-keystone-accounts.rst +++ b/doc/source/security/openstack/openstack-keystone-accounts.rst @@ -17,6 +17,6 @@ or the CLI. Projects and users can also be managed using the OpenStack REST API. .. 
seealso:: - :ref:`System Account Password Rules ` + :ref:`System Account Password Rules ` diff --git a/doc/source/security/openstack/update-the-domain-name.rst b/doc/source/security/openstack/update-the-domain-name.rst index e1ec33747..29a848201 100644 --- a/doc/source/security/openstack/update-the-domain-name.rst +++ b/doc/source/security/openstack/update-the-domain-name.rst @@ -59,44 +59,44 @@ service‐parameter-add` command to configure and set the OpenStack domain name: # define A record for general domain for |prod| system IN A 10.10.10.10 - + # define alias for general domain for horizon dashboard REST API URL horizon. IN CNAME ..com. - + # define alias for general domain for keystone identity service REST API URLs keystone. IN CNAME ..com. keystone-api. IN CNAME ..com. - - # define alias for general domain for neutron networking REST API URL + + # define alias for general domain for neutron networking REST API URL neutron. IN CNAME ..com. - + # define alias for general domain for nova compute provisioning REST API URLs nova. IN CNAME ..com. placement. IN CNAME ..com. rest-api. IN CNAME ..com. - + # define no vnc procy alias for VM console access through Horizon REST API URL novncproxy. IN CNAME ..com. - + # define alias for general domain for barbican secure storage REST API URL barbican. IN CNAME ..com. - + # define alias for general domain for glance VM management REST API URL glance. IN CNAME ..com. - + # define alias for general domain for cinder block storage REST API URL cinder. IN CNAME ..com. cinder2. IN CNAME ..com. cinder3. IN CNAME ..com. - + # define alias for general domain for heat orchestration REST API URLs heat. IN CNAME ..com. cloudformation. IN CNAME my-|prefix|-domain..com. - + # define alias for general domain for starlingx REST API URLs # ( for fault, patching, service management, system and VIM ) fm. IN CNAME ..com. @@ -104,7 +104,7 @@ service‐parameter-add` command to configure and set the OpenStack domain name: smapi. IN CNAME ..com. sysinv. IN CNAME ..com. vim. IN CNAME ..com. - + .. rubric:: |proc| #. Source the environment. @@ -112,7 +112,7 @@ service‐parameter-add` command to configure and set the OpenStack domain name: .. code-block:: none $ source /etc/platform/openrc - ~(keystone_admin)$ + ~(keystone_admin)$ #. To set a unique domain name, use the :command:`system service‐parameter-add` command. diff --git a/doc/source/security/openstack/use-local-clis.rst b/doc/source/security/openstack/use-local-clis.rst index ba6a4bcfc..f26eaf485 100644 --- a/doc/source/security/openstack/use-local-clis.rst +++ b/doc/source/security/openstack/use-local-clis.rst @@ -51,7 +51,7 @@ For example: | 8b9971df-6d83.. | vanilla | 1 | 1 | 0 | 1 | True | | e94c8123-2602.. | xlarge.8c.4G.8G | 4096 | 8 | 0 | 8 | True | +-----------------+------------------+------+------+-----+-------+-----------+ - + ~(keystone_admin)$ openstack image list +----------------+----------------------------------------+--------+ | ID | Name | Status | @@ -60,5 +60,5 @@ For example: | 15aaf0de-b369..| opensquidbox.amd64.1.06a.iso | active | | eeda4642-db83..| xenial-server-cloudimg-amd64-disk1.img | active | +----------------+----------------------------------------+--------+ - + diff --git a/doc/source/shared/abbrevs.txt b/doc/source/shared/abbrevs.txt index ff9e4124c..c30a2dd3a 100755 --- a/doc/source/shared/abbrevs.txt +++ b/doc/source/shared/abbrevs.txt @@ -88,6 +88,7 @@ .. |QoS| replace:: :abbr:`QoS (Quality of Service)` .. |RAID| replace:: :abbr:`RAID (Redundant Array of Inexpensive Disks)` .. 
|RBAC| replace:: :abbr:`RBAC (Role-Based Access Control)` +.. |RBD| replace:: :abbr:`RBD (RADOS Block Device)` .. |RPC| replace:: :abbr:`RPC (Remote Procedure Call)` .. |SAN| replace:: :abbr:`SAN (Subject Alternative Name)` .. |SANs| replace:: :abbr:`SANs (Subject Alternative Names)` diff --git a/doc/source/storage/.vscode/settings.json b/doc/source/storage/.vscode/settings.json new file mode 100644 index 000000000..3cce948f6 --- /dev/null +++ b/doc/source/storage/.vscode/settings.json @@ -0,0 +1,3 @@ +{ + "restructuredtext.confPath": "" +} \ No newline at end of file diff --git a/doc/source/storage/index.rst b/doc/source/storage/index.rst index 8cb6ee872..4aba3e003 100644 --- a/doc/source/storage/index.rst +++ b/doc/source/storage/index.rst @@ -14,7 +14,7 @@ and the requirements of the system. :maxdepth: 2 kubernetes/index - + --------- OpenStack --------- diff --git a/doc/source/storage/kubernetes/about-persistent-volume-support.rst b/doc/source/storage/kubernetes/about-persistent-volume-support.rst index 4c76f9a76..15f68137b 100644 --- a/doc/source/storage/kubernetes/about-persistent-volume-support.rst +++ b/doc/source/storage/kubernetes/about-persistent-volume-support.rst @@ -12,9 +12,56 @@ for containers to persist files beyond the lifetime of the container, a Persistent Volume Claim can be created to obtain a persistent volume which the container can mount and read/write files. -Management and customization tasks for Kubernetes Persistent Volume Claims can -be accomplished using the **rbd-provisioner** helm chart. The -**rbd-provisioner** helm chart is included in the **platform-integ-apps** -system application, which is automatically loaded and applied as part of the -|prod| installation. +Management and customization tasks for Kubernetes |PVCs| +can be accomplished by using StorageClasses set up by two Helm charts; +**rbd-provisioner** and **cephfs-provisioner**. The **rbd-provisioner**, +and **cephfs-provisioner** Helm charts are included in the +**platform-integ-apps** system application, which is automatically loaded and +applied as part of the |prod| installation. + +PVCs are supported with the following options: + +- with accessMode of ReadWriteOnce backed by Ceph |RBD| + + - only one container can attach to these PVCs + - management and customization tasks related to these PVCs are done + through the **rbd-provisioner** Helm chart provided by + platform-integ-apps + +- with accessMode of ReadWriteMany backed by CephFS + + - multiple containers can attach to these PVCs + - management and customization tasks related to these PVCs are done + through the **cephfs-provisioner** Helm chart provided by + platform-integ-apps + +After platform-integ-apps is applied the following system configurations are +created: + +- **Ceph Pools** + + .. code-block:: none + + ~(keystone_admin)]$ ceph osd lspools + kube-rbd + kube-cephfs-data + kube-cephfs-metadata + +- **CephFS** + + .. code-block:: none + + ~(keystone_admin)]$ ceph fs ls + name: kube-cephfs, metadata pool: kube-cephfs-metadata, data pools: [kube-cephfs-data ] + +- **Kubernetes StorageClasses** + + .. 
code-block:: none + + ~(keystone_admin)]$ kubectl get sc + NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION + cephfs ceph.com/cephfs Delete Immediate false + general (default) ceph.com/rbd Delete Immediate false + + diff --git a/doc/source/storage/kubernetes/add-a-physical-volume.rst b/doc/source/storage/kubernetes/add-a-physical-volume.rst index c44f2c900..ab6f36773 100644 --- a/doc/source/storage/kubernetes/add-a-physical-volume.rst +++ b/doc/source/storage/kubernetes/add-a-physical-volume.rst @@ -60,7 +60,7 @@ for performance reasons, you must either use a non-root disk for **nova-local** storage, or ensure that the host is not used for VMs with ephemeral local storage. -For example, to add a volume with the UUID +For example, to add a volume with the UUID 67b368ab-626a-4168-9b2a-d1d239d4f3b0 to compute-1, use the following command. .. code-block:: none diff --git a/doc/source/storage/kubernetes/configure-an-external-netapp-deployment-as-the-storage-backend.rst b/doc/source/storage/kubernetes/configure-an-external-netapp-deployment-as-the-storage-backend.rst index e404ed157..85f4eb1ed 100644 --- a/doc/source/storage/kubernetes/configure-an-external-netapp-deployment-as-the-storage-backend.rst +++ b/doc/source/storage/kubernetes/configure-an-external-netapp-deployment-as-the-storage-backend.rst @@ -12,7 +12,7 @@ system installation using a |prod|-provided ansible playbook. .. rubric:: |prereq| |prod-long| must be installed and fully deployed before performing this -procedure. +procedure. .. xbooklink See the :ref:`Installation Overview ` for more information. @@ -250,8 +250,8 @@ appropriate storage-class name you set up in step :ref:`2 ` \(**netapp-nas-backend** in this example\) to the persistent volume claim's yaml configuration file. For more information about this file, see -|usertasks-doc|: :ref:`Create Persistent Volume Claims -`. +|usertasks-doc|: :ref:`Create ReadWriteOnce Persistent Volume Claims +`. .. seealso:: diff --git a/doc/source/storage/kubernetes/configure-ceph-file-system-for-internal-ceph-storage-backend.rst b/doc/source/storage/kubernetes/configure-ceph-file-system-for-internal-ceph-storage-backend.rst deleted file mode 100644 index a8a9b7356..000000000 --- a/doc/source/storage/kubernetes/configure-ceph-file-system-for-internal-ceph-storage-backend.rst +++ /dev/null @@ -1,243 +0,0 @@ - -.. clb1615317605723 -.. _configure-ceph-file-system-for-internal-ceph-storage-backend: - -============================================================ -Configure Ceph File System for Internal Ceph Storage Backend -============================================================ - -CephFS \(Ceph File System\) is a highly available, mutli-use, performant file -store for a variety of applications, built on top of Ceph's Distributed Object -Store \(RADOS\). - -.. rubric:: |context| - -CephFS provides the following functionality: - - -.. 
_configure-ceph-file-system-for-internal-ceph-storage-backend-ul-h2b-h1k-x4b: - -- Enabled by default \(along with existing Ceph RDB\) - -- Highly available, multi-use, performant file storage - -- Scalability using a separate RADOS pool for the file's metadata - -- Metadata using Metadata Servers \(MDS\) that provide high availability and - scalability - -- Deployed in HA configurations for all |prod| deployment options - -- Integrates **cephfs-provisioner** supporting Kubernetes **StorageClass** - -- Enables configuration of: - - - - **PersistentVolumeClaim** \(|PVC|\) using **StorageClass** and - ReadWriteMany accessmode - - - Two or more application pods mounting |PVC| and reading/writing data to it - -CephFS is configured automatically when a Ceph backend is enabled and provides -a Kubernetes **StorageClass**. Once enabled, every node in the cluster that -serves as a Ceph monitor will also be configured as a CephFS Metadata Server -\(MDS\). Creation of the CephFS pools, filesystem initialization, and creation -of Kubernetes resource is done by the **platform-integ-apps** application, -using **cephfs-provisioner** Helm chart. - -When applied, **platform-integ-apps** creates two Ceph pools for each storage -backend configured, one for CephFS data and a second pool for metadata: - - -.. _configure-ceph-file-system-for-internal-ceph-storage-backend-ul-jp2-yn2-x4b: - -- **CephFS data pool**: The pool name for the default storage backend is - **kube-cephfs-data** - -- **Metadata pool**: The pool name is **kube-cephfs-metadata** - - When a new storage backend is created, a new CephFS data pool will be - created with the name **kube-cephfs-data-** \, and - the metadata pool will be created with the name - **kube-cephfs-metadata-** \. The default - filesystem name is **kube-cephfs**. - - When a new storage backend is created, a new filesystem will be created - with the name **kube-cephfs-** \. - - -For example, if the user adds a storage backend named, 'test', -**cephfs-provisioner** will create the following pools: - - -.. _configure-ceph-file-system-for-internal-ceph-storage-backend-ul-i3w-h1f-x4b: - -- kube-cephfs-data-test - -- kube-cephfs-metadata-test - - -Also, the application **platform-integ-apps** will create a filesystem **kube -cephfs-test**. - -If you list all the pools in a cluster with 'test' storage backend, you should -see four pools created by **cephfs-provisioner** using **platform-integ-apps**. -Use the following command to list the CephFS |OSD| pools created. - -.. code-block:: none - - $ ceph osd lspools - - -.. _configure-ceph-file-system-for-internal-ceph-storage-backend-ul-nnv-lr2-x4b: - -- kube-rbd - -- kube-rbd-test - -- **kube-cephfs-data** - -- **kube-cephfs-data-test** - -- **kube-cephfs-metadata** - -- **kube-cephfs-metadata-test** - - -Use the following command to list Ceph File Systems: - -.. code-block:: none - - $ ceph fs ls - name: kube-cephfs, metadata pool: kube-cephfs-metadata, data pools: [kube-cephfs-data ] - name: kube-cephfs-silver, metadata pool: kube-cephfs-metadata-silver, data pools: [kube-cephfs-data-silver ] - -:command:`cephfs-provisioner` creates in a Kubernetes cluster, a -**StorageClass** for each storage backend present. - -These **StorageClass** resources should be used to create -**PersistentVolumeClaim** resources in order to allow pods to use CephFS. The -default **StorageClass** resource is named **cephfs**, and additional resources -are created with the name \ **-cephfs** for each -additional storage backend created. 
- -For example, when listing **StorageClass** resources in a cluster that is -configured with a storage backend named 'test', the following storage classes -are created: - -.. code-block:: none - - $ kubectl get sc - NAME PROVISIONER RECLAIM.. VOLUME.. ALLOWVOLUME.. AGE - cephfs ceph.com/cephfs Delete Immediate false 65m - general (default) ceph.com/rbd Delete Immediate false 66m - test-cephfs ceph.com/cephfs Delete Immediate false 65m - test-general ceph.com/rbd Delete Immediate false 66m - -All Kubernetes resources \(pods, StorageClasses, PersistentVolumeClaims, -configmaps, etc.\) used by the provisioner are created in the **kube-system -namespace.** - -.. note:: - Multiple Ceph file systems are not enabled by default in the cluster. You - can enable it manually, for example, using the command; :command:`ceph fs - flag set enable\_multiple true --yes-i-really-mean-it`. - - -.. _configure-ceph-file-system-for-internal-ceph-storage-backend-section-dq5-wgk-x4b: - -------------------------------- -Persistent Volume Claim \(PVC\) -------------------------------- - -.. rubric:: |context| - -If you need to create a Persistent Volume Claim, you can create it using -**kubectl**. For example: - - -.. _configure-ceph-file-system-for-internal-ceph-storage-backend-ol-lrh-pdf-x4b: - -#. Create a file named **my\_pvc.yaml**, and add the following content: - - .. code-block:: none - - kind: PersistentVolumeClaim - apiVersion: v1 - metadata: - name: claim1 - namespace: kube-system - spec: - storageClassName: cephfs - accessModes: - - ReadWriteMany - resources: - requests: - storage: 1Gi - -#. To apply the updates, use the following command: - - .. code-block:: none - - $ kubectl apply -f my_pvc.yaml - -#. After the |PVC| is created, use the following command to see the |PVC| - bound to the existing **StorageClass**. - - .. code-block:: none - - $ kubectl get pvc -n kube-system - - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - claim1 Boundpvc.. 1Gi RWX cephfs - -#. The |PVC| is automatically provisioned by the **StorageClass**, and a |PVC| - is created. Use the following command to list the |PVC|. - - .. code-block:: none - - $ kubectl get pv -n kube-system - - NAME CAPACITY ACCESS..RECLAIM.. STATUS CLAIM STORAGE.. REASON AGE - pvc-5.. 1Gi RWX Delete Bound kube-system/claim1 cephfs 26s - - -#. Create Pods to use the |PVC|. Create a file my\_pod.yaml: - - .. code-block:: none - - kind: Pod - apiVersion: v1 - metadata: - name: test-pod - namespace: kube-system - spec: - containers: - - name: test-pod - image: gcr.io/google_containers/busybox:1.24 - command: - - "/bin/sh" - args: - - "-c" - - "touch /mnt/SUCCESS && exit 0 || exit 1" - volumeMounts: - - name: pvc - mountPath: "/mnt" - restartPolicy: "Never" - volumes: - - name: pvc - persistentVolumeClaim: - claimName: claim1 - -#. Apply the inputs to the **pod.yaml** file, using the following command. - - .. code-block:: none - - $ kubectl apply -f my_pod.yaml - - -For more information on Persistent Volume Support, see, :ref:`About Persistent -Volume Support `, and, -|usertasks-doc|: :ref:`Creating Persistent Volume Claims -`. 
- diff --git a/doc/source/storage/kubernetes/configure-netapps-using-a-private-docker-registry.rst b/doc/source/storage/kubernetes/configure-netapps-using-a-private-docker-registry.rst index 4409ea8d0..ee6b1148f 100644 --- a/doc/source/storage/kubernetes/configure-netapps-using-a-private-docker-registry.rst +++ b/doc/source/storage/kubernetes/configure-netapps-using-a-private-docker-registry.rst @@ -9,4 +9,4 @@ Configure Netapps Using a Private Docker Registry Use the ``docker_registries`` parameter to pull from the local registry rather than public ones. -You must first push the files to the local registry. +You must first push the files to the local registry. diff --git a/doc/source/storage/kubernetes/configure-the-internal-ceph-storage-backend.rst b/doc/source/storage/kubernetes/configure-the-internal-ceph-storage-backend.rst index 8a3509986..2243ff610 100644 --- a/doc/source/storage/kubernetes/configure-the-internal-ceph-storage-backend.rst +++ b/doc/source/storage/kubernetes/configure-the-internal-ceph-storage-backend.rst @@ -48,6 +48,11 @@ the internal Ceph storage backend. third Ceph monitor instance is configured by default on the first storage node. + .. note:: + CephFS support requires Metadata servers \(MDS\) to be deployed. When + CephFS is configured, an MDS is deployed automatically along with each + node that has been configured to run a Ceph Monitor. + #. Configure Ceph OSDs. For more information, see :ref:`Provision Storage on a Controller or Storage Host Using Horizon `. diff --git a/doc/source/storage/kubernetes/create-readwritemany-persistent-volume-claims.rst b/doc/source/storage/kubernetes/create-readwritemany-persistent-volume-claims.rst new file mode 100644 index 000000000..73e336af4 --- /dev/null +++ b/doc/source/storage/kubernetes/create-readwritemany-persistent-volume-claims.rst @@ -0,0 +1,71 @@ + +.. iqu1616951298602 +.. _create-readwritemany-persistent-volume-claims: + +============================================= +Create ReadWriteMany Persistent Volume Claims +============================================= + +Container images have an ephemeral file system by default. For data to survive +beyond the lifetime of a container, it can read and write files to a persistent +volume obtained with a Persistent Volume Claim \(PVC\) created to provide +persistent storage. + +.. rubric:: |context| + +For multiple containers to mount the same |PVC|, create a |PVC| with accessMode +of ReadWriteMany \(RWX\). + +The following steps show an example of creating a 1GB |PVC| with ReadWriteMany +accessMode. + +.. rubric:: |proc| + +.. _iqu1616951298602-steps-bdr-qnm-tkb: + +#. Create the **rwx-test-claim** Persistent Volume Claim. + + #. Create a yaml file defining the claim and its attributes. + + For example: + + .. code-block:: none + + ~(keystone_admin)]$ + cat < rwx-claim.yaml + kind: PersistentVolumeClaim + apiVersion: v1 + metadata: + name: rwx-test-claim + spec: + accessModes: + - ReadWriteMany + resources: + requests: + storage: 1Gi + storageClassName: cephfs + EOF + + 2. Apply the settings created above. + + .. code-block:: none + + ~(keystone_admin)]$ kubectl apply -f rwx-claim.yaml + persistentvolumeclaim/rwx-test-claim created + + +This results in 1GB |PVC| being created. You can view the |PVC| using the +following command. + +.. code-block:: none + + ~(keystone_admin)]$ kubectl get persistentvolumeclaims + + NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS + rwx-test-claim Bound pvc-df9f.. 1Gi RWX cephfs + +.. 
code-block:: none + + ~(keystone_admin)]$ kubectl get persistentvolume + NAME CAPACITY ACCESS.. RECLAIM.. STATUS CLAIM STORAGECLASS + pvc-df9f.. 1Gi RWX Delete Bound default/rwx-test-claim cephfs diff --git a/doc/source/storage/kubernetes/default-behavior-of-the-cephfs-provisioner.rst b/doc/source/storage/kubernetes/default-behavior-of-the-cephfs-provisioner.rst new file mode 100644 index 000000000..e3d0c9b0b --- /dev/null +++ b/doc/source/storage/kubernetes/default-behavior-of-the-cephfs-provisioner.rst @@ -0,0 +1,38 @@ + +.. mgt1616518429546 +.. _default-behavior-of-the-cephfs-provisioner: + +========================================== +Default Behavior of the CephFS Provisioner +========================================== + +The default Ceph Cluster configuration set up during |prod| installation +contains a single storage tier, storage, containing all the OSDs. + +The default CephFS provisioner service runs within the kube-system namespace +and has a single storage class, '**cephfs**', which is configured to: + +.. _mgt1616518429546-ul-g3n-qdb-bpb: + +- use the default 'storage' Ceph storage tier +- use a **kube-cephfs-data** and **kube-cephfs-metadata** Ceph pool, and +- only support |PVC| requests from the following namespaces: kube-system, + default and kube-public. + +The full details of the **cephfs-provisioner** configuration can be viewed +using the following commands: + +.. code-block:: none + + ~(keystone_admin)]$ system helm-override-list platform-integ-apps + +The following command provides the chart names and the overrides namespaces. + +.. code-block:: none + + ~(keystone_admin)]$ system helm-override-show platform-integ-apps cephfs-provisioner kube-system + +See :ref:`Create ReadWriteMany Persistent Volume Claims ` +and :ref:`Mount ReadWriteMany Persistent Volumes in Containers ` +for an example of how to create and mount a ReadWriteMany PVC from the **cephfs** +storage class. diff --git a/doc/source/storage/kubernetes/default-behavior-of-the-rbd-provisioner.rst b/doc/source/storage/kubernetes/default-behavior-of-the-rbd-provisioner.rst index 6e8db168b..3a76c1a60 100644 --- a/doc/source/storage/kubernetes/default-behavior-of-the-rbd-provisioner.rst +++ b/doc/source/storage/kubernetes/default-behavior-of-the-rbd-provisioner.rst @@ -9,9 +9,8 @@ Default Behavior of the RBD Provisioner The default Ceph Cluster configuration set up during |prod| installation contains a single storage tier, storage, containing all the |OSDs|. -The default rbd-provisioner service runs within the kube-system namespace -and has a single storage class, 'general', which is configured to: - +The default |RBD| provisioner service runs within the kube-system namespace and +has a single storage class, 'general', which is configured to: .. _default-behavior-of-the-rbd-provisioner-ul-zg2-r2q-43b: @@ -19,7 +18,8 @@ and has a single storage class, 'general', which is configured to: - use a **kube-rbd** ceph pool, and -- only support PVC requests from the following namespaces: kube-system, default and kube-public. +- only support PVC requests from the following namespaces: kube-system, + default and kube-public. The full details of the rbd-provisioner configuration can be viewed with @@ -35,9 +35,7 @@ This command provides the chart names and the overrides namespaces. 
~(keystone_admin)$ system helm-override-show platform-integ-apps rbd-provisioner kube-system -See :ref:`Create Persistent Volume Claims -` and -:ref:`Mount Persistent Volumes in Containers -` for -details on how to create and mount a PVC from this storage class. - +See :ref:`Creating ReadWriteOnce Persistent Volume Claims ` and +:ref:`Mounting ReadWriteOnce Persistent Volumes in Containers ` +for an example of how to create and mount a ReadWriteOnce |PVC| from the +'general' storage class. \ No newline at end of file diff --git a/doc/source/storage/kubernetes/enable-additional-storage-classes.rst b/doc/source/storage/kubernetes/enable-rbd-readwriteonly-additional-storage-classes.rst similarity index 88% rename from doc/source/storage/kubernetes/enable-additional-storage-classes.rst rename to doc/source/storage/kubernetes/enable-rbd-readwriteonly-additional-storage-classes.rst index 8acea718b..0a633040e 100644 --- a/doc/source/storage/kubernetes/enable-additional-storage-classes.rst +++ b/doc/source/storage/kubernetes/enable-rbd-readwriteonly-additional-storage-classes.rst @@ -1,27 +1,27 @@ .. csl1561030322454 -.. _enable-additional-storage-classes: +.. _enable-rbd-readwriteonly-additional-storage-classes: -================================= -Enable Additional Storage Classes -================================= +=================================================== +Enable RBD ReadWriteOnly Additional Storage Classes +=================================================== -Additional storage classes can be added to the default rbd-provisioner +Additional storage classes can be added to the default |RBD| provisioner service. .. rubric:: |context| Some reasons for adding an additional storage class include: -.. _enable-additional-storage-classes-ul-nz1-r3q-43b: +.. _enable-rbd-readwriteonly-additional-storage-classes-ul-nz1-r3q-43b: - managing Ceph resources for particular namespaces in a separate Ceph pool; simply for Ceph partitioning reasons - using an alternate Ceph Storage Tier, for example. with faster drives -A modification to the configuration \(helm overrides\) of the -**rbd-provisioner** service is required to enable an additional storage class +A modification to the configuration \(Helm overrides\) of the +|RBD| provisioner service is required to enable an additional storage class The following example that illustrates adding a second storage class to be utilized by a specific namespace. @@ -33,19 +33,19 @@ utilized by a specific namespace. .. rubric:: |proc| -#. List installed helm chart overrides for the platform-integ-apps. +#. List installed Helm chart overrides for the platform-integ-apps. .. code-block:: none ~(keystone_admin)$ system helm-override-list platform-integ-apps - +------------------+----------------------+ - | chart name | overrides namespaces | - +------------------+----------------------+ - | ceph-pools-audit | [u'kube-system'] | - | helm-toolkit | [] | - | rbd-provisioner | [u'kube-system'] | - +------------------+----------------------+ - + +--------------------+----------------------+ + | chart name | overrides namespaces | + +--------------------+----------------------+ + | ceph-pools-audit | [u'kube-system'] | + | cephfs-provisioner | [u'kube-system'] | + | helm-toolkit | [] | + | rbd-provisioner | [u'kube-system'] | + +--------------------+----------------------+ #. Review existing overrides for the rbd-provisioner chart. You will refer to this information in the following step. @@ -85,7 +85,6 @@ utilized by a specific namespace. 
~(keystone_admin)$ system helm-override-update --values /home/sysadmin/update-namespaces.yaml \ platform-integ-apps rbd-provisioner - +----------------+-----------------------------------------+ | Property | Value | +----------------+-----------------------------------------+ @@ -123,7 +122,6 @@ utilized by a specific namespace. .. code-block:: none ~(keystone_admin)$ system helm-override-show platform-integ-apps rbd-provisioner kube-system - +--------------------+-----------------------------------------+ | Property | Value | +--------------------+-----------------------------------------+ @@ -161,13 +159,11 @@ utilized by a specific namespace. #. Apply the overrides. - #. Run the :command:`application-apply` command. .. code-block:: none ~(keystone_admin)$ system application-apply platform-integ-apps - +---------------+----------------------------------+ | Property | Value | +---------------+----------------------------------+ @@ -187,7 +183,6 @@ utilized by a specific namespace. .. code-block:: none ~(keystone_admin)$ system application-list - +-------------+---------+---------------+---------------+---------+-----------+ | application | version | manifest name | manifest file | status | progress | +-------------+---------+---------------+---------------+---------+-----------+ @@ -196,9 +191,8 @@ utilized by a specific namespace. | | | manifest | | | | +-------------+---------+------ --------+---------------+---------+-----------+ - - You can now create and mount persistent volumes from the new - rbd-provisioner's **special** storage class from within the - **new-sc-app** application-specific namespace. + You can now create and mount persistent volumes from the new |RBD| + provisioner's **special** storage class from within the **new-sc-app** + application-specific namespace. diff --git a/doc/source/storage/kubernetes/enable-readwritemany-pvc-support-in-additional-namespaces.rst b/doc/source/storage/kubernetes/enable-readwritemany-pvc-support-in-additional-namespaces.rst new file mode 100644 index 000000000..06a2b655f --- /dev/null +++ b/doc/source/storage/kubernetes/enable-readwritemany-pvc-support-in-additional-namespaces.rst @@ -0,0 +1,220 @@ + +.. wyf1616954377690 +.. _enable-readwritemany-pvc-support-in-additional-namespaces: + +========================================================= +Enable ReadWriteMany PVC Support in Additional Namespaces +========================================================= + +The default general **cephfs-provisioner** storage class is enabled for the +default, kube-system, and kube-public namespaces. To enable an additional +namespace, for example for an application-specific namespace, a modification +to the configuration \(Helm overrides\) of the **cephfs-provisioner** service +is required. + +.. rubric:: |context| + +The following example illustrates the configuration of three additional +application-specific namespaces to access the **cephfs-provisioner** +**cephfs** storage class. + +.. note:: + + Due to limitations with templating and merging of overrides, the entire + storage class must be redefined in the override when updating specific + values. + +.. rubric:: |proc| + +#. List installed Helm chart overrides for the platform-integ-apps. + + .. 
code-block:: none + + ~(keystone_admin)]$ system helm-override-list platform-integ-apps + +--------------------+----------------------+ + | chart name | overrides namespaces | + +--------------------+----------------------+ + | ceph-pools-audit | [u'kube-system'] | + | cephfs-provisioner | [u'kube-system'] | + | helm-toolkit | [] | + | rbd-provisioner | [u'kube-system'] | + +--------------------+----------------------+ + +#. Review existing overrides for the cephfs-provisioner chart. You will refer + to this information in the following step. + + .. code-block:: none + + ~(keystone_admin)]$ system helm-override-show platform-integ-apps cephfs-provisioner kube-system + + +--------------------+----------------------------------------------------------+ + | Property | Value | + +--------------------+----------------------------------------------------------+ + | attributes | enabled: true | + | | | + | combined_overrides | classdefaults: | + | | adminId: admin | + | | adminSecretName: ceph-secret-admin | + | | monitors: | + | | - 192.168.204.3:6789 | + | | - 192.168.204.1:6789 | + | | - 192.168.204.2:6789 | + | | classes: | + | | - additionalNamespaces: | + | | - default | + | | - kube-public | + | | chunk_size: 64 | + | | claim_root: /pvc-volumes | + | | crush_rule_name: storage_tier_ruleset | + | | data_pool_name: kube-cephfs-data | + | | fs_name: kube-cephfs | + | | metadata_pool_name: kube-cephfs-metadata | + | | name: cephfs | + | | replication: 2 | + | | userId: ceph-pool-kube-cephfs-data | + | | userSecretName: ceph-pool-kube-cephfs-data | + | | global: | + | | replicas: 2 | + | | | + | name | cephfs-provisioner | + | namespace | kube-system | + | system_overrides | classdefaults: | + | | adminId: admin | + | | adminSecretName: ceph-secret-admin | + | | monitors: ['192.168.204.3:6789', '192.168.204.1:6789', | + | | '192.168.204.2:6789'] | + | | classes: | + | | - additionalNamespaces: [default, kube-public] | + | | chunk_size: 64 | + | | claim_root: /pvc-volumes | + | | crush_rule_name: storage_tier_ruleset | + | | data_pool_name: kube-cephfs-data | + | | fs_name: kube-cephfs | + | | metadata_pool_name: kube-cephfs-metadata | + | | name: cephfs | + | | replication: 2 | + | | userId: ceph-pool-kube-cephfs-data | + | | userSecretName: ceph-pool-kube-cephfs-data | + | | global: {replicas: 2} | + | | | + | user_overrides | None | + +--------------------+----------------------------------------------------------+ + +#. Create an overrides yaml file defining the new namespaces. + + In this example, create the file /home/sysadmin/update-namespaces.yaml with the following content: + + .. code-block:: none + + ~(keystone_admin)]$ cat < ~/update-namespaces.yaml + classes: + - additionalNamespaces: [default, kube-public, new-app, new-app2, new-app3] + chunk_size: 64 + claim_root: /pvc-volumes + crush_rule_name: storage_tier_ruleset + data_pool_name: kube-cephfs-data + fs_name: kube-cephfs + metadata_pool_name: kube-cephfs-metadata + name: cephfs + replication: 2 + userId: ceph-pool-kube-cephfs-data + userSecretName: ceph-pool-kube-cephfs-data + EOF + +#. Apply the overrides file to the chart. + + .. 
code-block:: none + + ~(keystone_admin)]$ system helm-override-update --values /home/sysadmin/update-namespaces.yaml platform-integ-apps cephfs-provisioner kube-system + +----------------+----------------------------------------------+ + | Property | Value | + +----------------+----------------------------------------------+ + | name | cephfs-provisioner | + | namespace | kube-system | + | user_overrides | classes: | + | | - additionalNamespaces: | + | | - default | + | | - kube-public | + | | - new-app | + | | - new-app2 | + | | - new-app3 | + | | chunk_size: 64 | + | | claim_root: /pvc-volumes | + | | crush_rule_name: storage_tier_ruleset | + | | data_pool_name: kube-cephfs-data | + | | fs_name: kube-cephfs | + | | metadata_pool_name: kube-cephfs-metadata | + | | name: cephfs | + | | replication: 2 | + | | userId: ceph-pool-kube-cephfs-data | + | | userSecretName: ceph-pool-kube-cephfs-data | + +----------------+----------------------------------------------+ + +#. Confirm that the new overrides have been applied to the chart. + + The following output has been edited for brevity. + + .. code-block:: none + + ~(keystone_admin)]$ system helm-override-show platform-integ-apps cephfs-provisioner kube-system + +--------------------+---------------------------------------------+ + | Property | Value | + +--------------------+---------------------------------------------+ + | user_overrides | classes: | + | | - additionalNamespaces: | + | | - default | + | | - kube-public | + | | - new-app | + | | - new-app2 | + | | - new-app3 | + | | chunk_size: 64 | + | | claim_root: /pvc-volumes | + | | crush_rule_name: storage_tier_ruleset | + | | data_pool_name: kube-cephfs-data | + | | fs_name: kube-cephfs | + | | metadata_pool_name: kube-cephfs-metadata | + | | name: cephfs | + | | replication: 2 | + | | userId: ceph-pool-kube-cephfs-data | + | | userSecretName: ceph-pool-kube-cephfs-data| + +--------------------+---------------------------------------------+ + +#. Apply the overrides. + + #. Run the :command:`application-apply` command. + + .. code-block:: none + + ~(keystone_admin)]$ system application-apply platform-integ-apps + +---------------+----------------------------------+ + | Property | Value | + +---------------+----------------------------------+ + | active | True | + | app_version | 1.0-24 | + | created_at | 2019-05-26T06:22:20.711732+00:00 | + | manifest_file | manifest.yaml | + | manifest_name | platform-integration-manifest | + | name | platform-integ-apps | + | progress | None | + | status | applying | + | updated_at | 2019-05-26T22:27:26.547181+00:00 | + +---------------+----------------------------------+ + + #. Monitor progress using the :command:`application-list` command. + + .. code-block:: none + + ~(keystone_admin)]$ system application-list + +-------------+---------+---------------+---------------+---------+-----------+ + | application | version | manifest name | manifest file | status | progress | + +-------------+---------+---------------+---------------+---------+-----------+ + | platform- | 1.0-24 | platform | manifest.yaml | applied | completed | + | integ-apps | | -integration | | | | + | | | -manifest | | | | + +-------------+---------+---------------+---------------+---------+-----------+ + + You can now create and mount PVCs from the default |RBD| provisioner's + **general** storage class, from within these application-specific + namespaces. 
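For example, a minimal sketch of a ReadWriteMany claim created against the **cephfs** storage class from one of the newly enabled namespaces; the namespace ``new-app`` and the claim name ``rwx-claim-new-app`` are placeholders used only for illustration:

.. code-block:: none

    ~(keystone_admin)]$ cat <<EOF > rwx-claim-new-app.yaml
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: rwx-claim-new-app
      namespace: new-app
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
      storageClassName: cephfs
    EOF
    ~(keystone_admin)]$ kubectl apply -f rwx-claim-new-app.yaml
    persistentvolumeclaim/rwx-claim-new-app created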
+ + diff --git a/doc/source/storage/kubernetes/enable-pvc-support-in-additional-namespaces.rst b/doc/source/storage/kubernetes/enable-readwriteonce-pvc-support-in-additional-namespaces.rst similarity index 86% rename from doc/source/storage/kubernetes/enable-pvc-support-in-additional-namespaces.rst rename to doc/source/storage/kubernetes/enable-readwriteonce-pvc-support-in-additional-namespaces.rst index 81526797d..954ff3bc7 100644 --- a/doc/source/storage/kubernetes/enable-pvc-support-in-additional-namespaces.rst +++ b/doc/source/storage/kubernetes/enable-readwriteonce-pvc-support-in-additional-namespaces.rst @@ -1,22 +1,21 @@ .. vqw1561030204071 -.. _enable-pvc-support-in-additional-namespaces: +.. _enable-readwriteonce-pvc-support-in-additional-namespaces: -=========================================== -Enable PVC Support in Additional Namespaces -=========================================== +========================================================= +Enable ReadWriteOnce PVC Support in Additional Namespaces +========================================================= The default general **rbd-provisioner** storage class is enabled for the default, kube-system, and kube-public namespaces. To enable an additional namespace, for example for an application-specific namespace, a modification to the configuration \(helm overrides\) of the -**rbd-provisioner** service is required. +|RBD| provisioner service is required. .. rubric:: |context| The following example illustrates the configuration of three additional -application-specific namespaces to access the rbd-provisioner's **general** -storage class. +application-specific namespaces to access the |RBD| provisioner's **general storage class**. .. note:: Due to limitations with templating and merging of overrides, the entire @@ -30,13 +29,14 @@ storage class. .. code-block:: none ~(keystone_admin)$ system helm-override-list platform-integ-apps - +------------------+----------------------+ - | chart name | overrides namespaces | - +------------------+----------------------+ - | ceph-pools-audit | [u'kube-system'] | - | helm-toolkit | [] | - | rbd-provisioner | [u'kube-system'] | - +------------------+----------------------+ + +--------------------+----------------------+ + | chart name | overrides namespaces | + +--------------------+----------------------+ + | ceph-pools-audit | [u'kube-system'] | + | cephfs-provisioner | [u'kube-system'] | + | helm-toolkit | [] | + | rbd-provisioner | [u'kube-system'] | + +--------------------+----------------------+ #. Review existing overrides for the rbd-provisioner chart. You will refer to this information in the following step. @@ -94,29 +94,28 @@ storage class. +--------------------+--------------------------------------------------+ -#. Create an overrides yaml file defining the new namespaces. - - In this example we will create the file - /home/sysadmin/update-namespaces.yaml with the following content: +#. Create an overrides yaml file defining the new namespaces. In this example we will create the file /home/sysadmin/update-namespaces.yaml with the following content: .. code-block:: none + ~(keystone_admin)]$ cat < ~/update-namespaces.yaml + classes: - additionalNamespaces: [default, kube-public, new-app, new-app2, new-app3] chunk_size: 64 crush_rule_name: storage_tier_ruleset name: general pool_name: kube-rbd - replication: 1 + replication: 2 userId: ceph-pool-kube-rbd userSecretName: ceph-pool-kube-rbd + EOF #. Apply the overrides file to the chart. .. 
code-block:: none - ~(keystone_admin)$ system helm-override-update --values /home/sysadmin/update-namespaces.yaml \ - platform-integ-apps rbd-provisioner kube-system + ~(keystone_admin)$ system helm-override-update --values /home/sysadmin/update-namespaces.yaml platform-integ-apps rbd-provisioner kube-system +----------------+-----------------------------------------+ | Property | Value | +----------------+-----------------------------------------+ @@ -133,7 +132,7 @@ storage class. | | crush_rule_name: storage_tier_ruleset | | | name: general | | | pool_name: kube-rbd | - | | replication: 1 | + | | replication: 2 | | | userId: ceph-pool-kube-rbd | | | userSecretName: ceph-pool-kube-rbd | +----------------+-----------------------------------------+ @@ -166,14 +165,13 @@ storage class. | | crush_rule_name: storage_tier_ruleset| | | name: general | | | pool_name: kube-rbd | - | | replication: 1 | + | | replication: 2 | | | userId: ceph-pool-kube-rbd | | | userSecretName: ceph-pool-kube-rbd | +--------------------+----------------------------------------+ #. Apply the overrides. - #. Run the :command:`application-apply` command. .. code-block:: none @@ -183,7 +181,7 @@ storage class. | Property | Value | +---------------+----------------------------------+ | active | True | - | app_version | 1.0-5 | + | app_version | 1.0-24 | | created_at | 2019-05-26T06:22:20.711732+00:00 | | manifest_file | manifest.yaml | | manifest_name | platform-integration-manifest | @@ -201,18 +199,12 @@ storage class. +-------------+---------+---------------+---------------+---------+-----------+ | application | version | manifest name | manifest file | status | progress | +-------------+---------+---------------+---------------+---------+-----------+ - | platform- | 1.0-5 | platform | manifest.yaml | applied | completed | + | platform- | 1.0-24 | platform | manifest.yaml | applied | completed | | integ-apps | | -integration | | | | | | | -manifest | | | | +-------------+---------+---------------+---------------+---------+-----------+ + You can now create and mount PVCs from the default |RBD| provisioner's + **general storage class**, from within these application-specific namespaces. - You can now create and mount PVCs from the default - **rbd-provisioner's general** storage class, from within these - application-specific namespaces. -#. Apply the secret to the new **rbd-provisioner** namespace. - - .. code-block:: none - - ~(keystone_admin)$ kubectl get secret ceph-pool-kube-rbd -n default -o yaml | grep -v '^\s*namespace:\s' | kubectl apply -n -f diff --git a/doc/source/storage/kubernetes/increase-the-size-for-lvm-local-volumes-on-controller-filesystems.rst b/doc/source/storage/kubernetes/increase-the-size-for-lvm-local-volumes-on-controller-filesystems.rst index 02914c689..aa1207676 100644 --- a/doc/source/storage/kubernetes/increase-the-size-for-lvm-local-volumes-on-controller-filesystems.rst +++ b/doc/source/storage/kubernetes/increase-the-size-for-lvm-local-volumes-on-controller-filesystems.rst @@ -86,7 +86,7 @@ The default **rootfs** device is **/dev/sda**. #. Assign the unused partition on **controller-0** as a physical volume to **cgts-vg** volume group. - + For example .. code-block:: none @@ -116,7 +116,7 @@ The default **rootfs** device is **/dev/sda**. #. To assign the unused partition on **controller-1** as a physical volume to **cgts-vg** volume group, **swact** the hosts and repeat the procedure on **controller-1**. - + .. 
rubric:: |proc| After increasing the **cgts-vg** volume size, you can provision the filesystem diff --git a/doc/source/storage/kubernetes/index.rst b/doc/source/storage/kubernetes/index.rst index 3705b22bb..b7d0cc258 100644 --- a/doc/source/storage/kubernetes/index.rst +++ b/doc/source/storage/kubernetes/index.rst @@ -8,7 +8,7 @@ Overview .. toctree:: :maxdepth: 1 - + storage-configuration-storage-resources disk-naming-conventions @@ -18,7 +18,7 @@ Disks, Partitions, Volumes, and Volume Groups .. toctree:: :maxdepth: 1 - + work-with-local-volume-groups local-volume-groups-cli-commands increase-the-size-for-lvm-local-volumes-on-controller-filesystems @@ -29,7 +29,7 @@ Work with Disk Partitions .. toctree:: :maxdepth: 1 - + work-with-disk-partitions identify-space-available-for-partitions list-partitions @@ -44,65 +44,64 @@ Work with Physical Volumes .. toctree:: :maxdepth: 1 - + work-with-physical-volumes add-a-physical-volume list-physical-volumes view-details-for-a-physical-volume delete-a-physical-volume - + ---------------- Storage Backends ---------------- .. toctree:: :maxdepth: 1 - + storage-backends configure-the-internal-ceph-storage-backend - configure-ceph-file-system-for-internal-ceph-storage-backend configure-an-external-netapp-deployment-as-the-storage-backend configure-netapps-using-a-private-docker-registry uninstall-the-netapp-backend - + ---------------- Controller Hosts ---------------- .. toctree:: :maxdepth: 1 - + controller-hosts-storage-on-controller-hosts ceph-cluster-on-a-controller-host increase-controller-filesystem-storage-allotments-using-horizon increase-controller-filesystem-storage-allotments-using-the-cli - + ------------ Worker Hosts ------------ .. toctree:: :maxdepth: 1 - + storage-configuration-storage-on-worker-hosts - + ------------- Storage Hosts ------------- .. toctree:: :maxdepth: 1 - + storage-hosts-storage-on-storage-hosts replication-groups - + ----------------------------- Configure Ceph OSDs on a Host ----------------------------- .. toctree:: :maxdepth: 1 - + ceph-storage-pools osd-replication-factors-journal-functions-and-storage-tiers storage-functions-osds-and-ssd-backed-journals @@ -119,22 +118,42 @@ Persistent Volume Support .. toctree:: :maxdepth: 1 - + about-persistent-volume-support + +*************** +RBD Provisioner +*************** + +.. toctree:: + :maxdepth: 1 + default-behavior-of-the-rbd-provisioner - storage-configuration-create-persistent-volume-claims - storage-configuration-mount-persistent-volumes-in-containers - enable-pvc-support-in-additional-namespaces - enable-additional-storage-classes + storage-configuration-create-readwriteonce-persistent-volume-claims + storage-configuration-mount-readwriteonce-persistent-volumes-in-containers + enable-readwriteonce-pvc-support-in-additional-namespaces + enable-rbd-readwriteonly-additional-storage-classes install-additional-rbd-provisioners +**************************** +Ceph File System Provisioner +**************************** + +.. toctree:: + :maxdepth: 1 + + default-behavior-of-the-cephfs-provisioner + create-readwritemany-persistent-volume-claims + mount-readwritemany-persistent-volumes-in-containers + enable-readwritemany-pvc-support-in-additional-namespaces + ---------------- Storage Profiles ---------------- .. toctree:: :maxdepth: 1 - + storage-profiles ---------------------------- @@ -143,15 +162,15 @@ Storage-Related CLI Commands .. 
toctree:: :maxdepth: 1 - + storage-configuration-storage-related-cli-commands - + --------------------- Storage Usage Details --------------------- .. toctree:: :maxdepth: 1 - + storage-usage-details-storage-utilization-display view-storage-utilization-using-horizon diff --git a/doc/source/storage/kubernetes/install-additional-rbd-provisioners.rst b/doc/source/storage/kubernetes/install-additional-rbd-provisioners.rst index b3a601902..986361e74 100644 --- a/doc/source/storage/kubernetes/install-additional-rbd-provisioners.rst +++ b/doc/source/storage/kubernetes/install-additional-rbd-provisioners.rst @@ -6,7 +6,7 @@ Install Additional RBD Provisioners =================================== -You can launch additional dedicated rdb-provisioners to support specific +You can launch additional dedicated |RBD| provisioners to support specific applications using dedicated pools, storage classes, and namespaces. .. rubric:: |context| @@ -14,11 +14,11 @@ applications using dedicated pools, storage classes, and namespaces. This can be useful if, for example, to allow an application to have control over its own persistent volume provisioner, that is, managing the Ceph pool, storage tier, allowed namespaces, and so on, without requiring the -kubernetes admin to modify the default rbd-provisioner service in the +kubernetes admin to modify the default |RBD| provisioner service in the kube-system namespace. This procedure uses standard Helm mechanisms to install a second -rbd-provisioner. +|RBD| provisioner. .. rubric:: |proc| @@ -101,6 +101,6 @@ rbd-provisioner. general (default) ceph.com/rbd 6h39m special-storage-class ceph.com/rbd 5h58m - You can now create and mount PVCs from the new rbd-provisioner's + You can now create and mount PVCs from the new |RBD| provisioner's **2nd-storage** storage class, from within the **isolated-app** namespace. diff --git a/doc/source/storage/kubernetes/mount-readwritemany-persistent-volumes-in-containers.rst b/doc/source/storage/kubernetes/mount-readwritemany-persistent-volumes-in-containers.rst new file mode 100644 index 000000000..a96ff490b --- /dev/null +++ b/doc/source/storage/kubernetes/mount-readwritemany-persistent-volumes-in-containers.rst @@ -0,0 +1,169 @@ + +.. fkk1616520068837 +.. _mount-readwritemany-persistent-volumes-in-containers: + +==================================================== +Mount ReadWriteMany Persistent Volumes in Containers +==================================================== + +You can attach a ReadWriteMany PVC to multiple containers, and that |PVC| can +be written to, by all containers. + +.. rubric:: |context| + +This example shows how a volume is claimed and mounted by each container +replica of a deployment with 2 replicas, and each container replica can read +and write to the |PVC|. It is the responsibility of an individual micro-service +within an application to make a volume claim, mount it, and use it. + +.. rubric:: |prereq| + +You must have created the |PVCs|. This procedure uses |PVCs| with names and +configurations created in |prod| |stor-doc|: :ref:`Create ReadWriteMany Persistent Volume Claims ` . + +.. rubric:: |proc| +.. _fkk1616520068837-steps-fqj-flr-tkb: + +#. Create the busybox container with the persistent volumes created from the PVCs mounted. This deployment will create two replicas mounting the same persistent volume. + + #. Create a yaml file definition for the busybox container. + + .. 
code-block:: none + + % cat < wrx-busybox.yaml + apiVersion: apps/v1 + kind: Deployment + metadata: + name: wrx-busybox + namespace: default + spec: + progressDeadlineSeconds: 600 + replicas: 2 + selector: + matchLabels: + run: busybox + template: + metadata: + labels: + run: busybox + spec: + containers: + - args: + - sh + image: busybox + imagePullPolicy: Always + name: busybox + stdin: true + tty: true + volumeMounts: + - name: pvc1 + mountPath: "/mnt1" + restartPolicy: Always + volumes: + - name: pvc1 + persistentVolumeClaim: + claimName: rwx-test-claim + EOF + + #. Apply the busybox configuration. + + .. code-block:: none + + % kubectl apply -f wrx-busybox.yaml + deployment.apps/wrx-busybox created + +#. Attach to the busybox and create files on the Persistent Volumes. + + + #. List the available pods. + + .. code-block:: none + + % kubectl get pods + NAME READY STATUS RESTARTS AGE + wrx-busybox-6455997c76-4kg8v 1/1 Running 0 108s + wrx-busybox-6455997c76-crmw6 1/1 Running 0 108s + + #. Connect to the pod shell for CLI access. + + .. code-block:: none + + % kubectl attach wrx-busybox-6455997c76-4kg8v -c busybox -i -t + + #. From the container's console, list the disks to verify that the Persistent Volume is attached. + + .. code-block:: none + + % df + Filesystem 1K-blocks Used Available Use% Mounted on + overlay 31441920 1783748 29658172 6% / + tmpfs 65536 0 65536 0% /dev + tmpfs 5033188 0 5033188 0% /sys/fs/cgroup + ceph-fuse 516542464 643072 515899392 0% /mnt1 + + The PVC is mounted as /mnt1. + + +#. Create files in the mount. + + .. code-block:: none + + # cd /mnt1 + # touch i-was-here-${HOSTNAME} + # ls /mnt1 + i-was-here-wrx-busybox-6455997c76-4kg8vi + +#. End the container session. + + .. code-block:: none + + % exit + wrx-busybox-6455997c76-4kg8v -c busybox -i -t' command when the pod is running + +#. Connect to the other busybox container + + .. code-block:: none + + % kubectl attach wrx-busybox-6455997c76-crmw6 -c busybox -i -t + +#. Optional: From the container's console list the disks to verify that the PVC is attached. + + .. code-block:: none + + % df + Filesystem 1K-blocks Used Available Use% Mounted on + overlay 31441920 1783888 29658032 6% / + tmpfs 65536 0 65536 0% /dev + tmpfs 5033188 0 5033188 0% /sys/fs/cgroup + ceph-fuse 516542464 643072 515899392 0% /mnt1 + + +#. Verify that the file created from the other container exists and that this container can also write to the Persistent Volume. + + .. code-block:: none + + # cd /mnt1 + # ls /mnt1 + i-was-here-wrx-busybox-6455997c76-4kg8v + # echo ${HOSTNAME} + wrx-busybox-6455997c76-crmw6 + # touch i-was-here-${HOSTNAME} + # ls /mnt1 + i-was-here-wrx-busybox-6455997c76-4kg8v i-was-here-wrx-busybox-6455997c76-crmw6 + +#. End the container session. + + .. code-block:: none + + % exit + Session ended, resume using 'kubectl attach wrx-busybox-6455997c76-crmw6 -c busybox -i -t' command when the pod is running + +#. Terminate the busybox container. + + .. code-block:: none + + % kubectl delete -f wrx-busybox.yaml + + For more information on Persistent Volume Support, see, :ref:`About Persistent Volume Support `. + + diff --git a/doc/source/storage/kubernetes/replace-osds-and-journal-disks.rst b/doc/source/storage/kubernetes/replace-osds-and-journal-disks.rst index ac814bbf3..5868deeeb 100644 --- a/doc/source/storage/kubernetes/replace-osds-and-journal-disks.rst +++ b/doc/source/storage/kubernetes/replace-osds-and-journal-disks.rst @@ -15,6 +15,6 @@ the same peer group. Do not substitute a smaller disk than the original. 
The replacement disk is automatically formatted and updated with data when the storage host is unlocked. For more information, see |node-doc|: :ref:`Change -Hardware Components for a Storage Host +Hardware Components for a Storage Host `. diff --git a/doc/source/storage/kubernetes/storage-backends.rst b/doc/source/storage/kubernetes/storage-backends.rst index d00947567..91ce8e004 100644 --- a/doc/source/storage/kubernetes/storage-backends.rst +++ b/doc/source/storage/kubernetes/storage-backends.rst @@ -116,12 +116,7 @@ For more information about Trident, see - :ref:`Configure the Internal Ceph Storage Backend ` - - :ref:`Configuring Ceph File System for Internal Ceph Storage Backend - ` - - - :ref:`Configure an External Netapp Deployment as the Storage Backend + - :ref:`Configure an External Netapp Deployment as the Storage Backend ` - - :ref:`Uninstall the Netapp Backend ` - - + - :ref:`Uninstall the Netapp Backend ` diff --git a/doc/source/storage/kubernetes/storage-configuration-create-persistent-volume-claims.rst b/doc/source/storage/kubernetes/storage-configuration-create-persistent-volume-claims.rst deleted file mode 100644 index a707a5014..000000000 --- a/doc/source/storage/kubernetes/storage-configuration-create-persistent-volume-claims.rst +++ /dev/null @@ -1,98 +0,0 @@ - -.. xco1564696647432 -.. _storage-configuration-create-persistent-volume-claims: - -=============================== -Create Persistent Volume Claims -=============================== - -Container images have an ephemeral file system by default. For data to -survive beyond the lifetime of a container, it can read and write files to -a persistent volume obtained with a |PVC| created to provide persistent -storage. - -.. rubric:: |context| - -The following steps create two 1Gb persistent volume claims. - -.. rubric:: |proc| - - -.. _storage-configuration-create-persistent-volume-claims-d891e32: - -#. Create the **test-claim1** persistent volume claim. - - - #. Create a yaml file defining the claim and its attributes. - - For example: - - .. code-block:: none - - ~(keystone_admin)]$ cat < claim1.yaml - kind: PersistentVolumeClaim - apiVersion: v1 - metadata: - name: test-claim1 - spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 1Gi - storageClassName: general - EOF - - #. Apply the settings created above. - - .. code-block:: none - - ~(keystone_admin)]$ kubectl apply -f claim1.yaml - - persistentvolumeclaim/test-claim1 created - -#. Create the **test-claim2** persistent volume claim. - - - #. Create a yaml file defining the claim and its attributes. - - For example: - - .. code-block:: none - - ~(keystone_admin)]$ cat < claim2.yaml - kind: PersistentVolumeClaim - apiVersion: v1 - metadata: - name: test-claim2 - spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 1Gi - storageClassName: general - EOF - - - #. Apply the settings created above. - - .. code-block:: none - - ~(keystone_admin)]$ kubectl apply -f claim2.yaml - persistentvolumeclaim/test-claim2 created - -.. rubric:: |result| - -Two 1Gb persistent volume claims have been created. You can view them with -the following command. - -.. code-block:: none - - ~(keystone_admin)]$ kubectl get persistentvolumeclaims - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - test-claim1 Bound pvc-aaca.. 1Gi RWO general 2m56s - test-claim2 Bound pvc-e93f.. 
1Gi RWO general 68s - -For more information on using CephFS for internal Ceph backends, see, -:ref:`Using CephFS for Internal Ceph Storage Backend ` \ No newline at end of file diff --git a/doc/source/storage/kubernetes/storage-configuration-create-readwriteonce-persistent-volume-claims.rst b/doc/source/storage/kubernetes/storage-configuration-create-readwriteonce-persistent-volume-claims.rst new file mode 100644 index 000000000..f85f66db3 --- /dev/null +++ b/doc/source/storage/kubernetes/storage-configuration-create-readwriteonce-persistent-volume-claims.rst @@ -0,0 +1,105 @@ + +.. xco1564696647432 +.. _storage-configuration-create-readwriteonce-persistent-volume-claims: + +============================================= +Create ReadWriteOnce Persistent Volume Claims +============================================= + +Container images have an ephemeral file system by default. For data to +survive beyond the lifetime of a container, it can read and write files to +a persistent volume obtained with a |PVC| created to provide persistent +storage. + +.. rubric:: |context| + +The following steps show an example of creating two 1GB |PVCs| with +ReadWriteOnce accessMode. + +.. rubric:: |proc| + + +.. _storage-configuration-create-readwriteonce-persistent-volume-claims-d891e32: + +#. Create the **rwo-test-claim1** Persistent Volume Claim. + + + #. Create a yaml file defining the claim and its attributes. + + For example: + + .. code-block:: none + + ~(keystone_admin)]$ cat < rwo-claim1.yaml + kind: PersistentVolumeClaim + apiVersion: v1 + metadata: + name: rwo-test-claim1 + spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: general + EOF + + #. Apply the settings created above. + + .. code-block:: none + + ~(keystone_admin)]$ kubectl apply -f rwo-claim1.yaml + + persistentvolumeclaim/rwo-test-claim1 created + +#. Create the **rwo-test-claim2** Persistent Volume Claim. + + + #. Create a yaml file defining the claim and its attributes. + + For example: + + .. code-block:: none + + ~(keystone_admin)]$ cat < rwo-claim2.yaml + kind: PersistentVolumeClaim + apiVersion: v1 + metadata: + name: rwo-test-claim2 + spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: general + EOF + + #. Apply the settings created above. + + .. code-block:: none + + ~(keystone_admin)]$ kubectl apply -f rwo-claim2.yaml + persistentvolumeclaim/rwo-test-claim2 created + +.. rubric:: |result| + +Two 1Gb |PVCs| have been created. You can view the |PVCs| using +the following command. + +.. code-block:: none + + ~(keystone_admin)]$ kubectl get persistentvolumeclaims + + NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS + rwo-test-claim1 Bound pvc-aaca.. 1Gi RWO general + rwo-test-claim2 Bound pvc-e93f.. 1Gi RWO general + + +.. code-block:: none + + ~(keystone_admin)]$ kubectl get persistentvolume + + NAME CAPACITY ACCESS.. RECLAIM.. STATUS CLAIM STORAGECLASS + pvc-08d8.. 1Gi RWO Delete Bound default/rwo-test-claim1 general + pvc-af10.. 
1Gi RWO Delete Bound default/rwo-test-claim2 general diff --git a/doc/source/storage/kubernetes/storage-configuration-mount-persistent-volumes-in-containers.rst b/doc/source/storage/kubernetes/storage-configuration-mount-readwriteonce-persistent-volumes-in-containers.rst similarity index 60% rename from doc/source/storage/kubernetes/storage-configuration-mount-persistent-volumes-in-containers.rst rename to doc/source/storage/kubernetes/storage-configuration-mount-readwriteonce-persistent-volumes-in-containers.rst index 8871a2ac5..56cafff9a 100644 --- a/doc/source/storage/kubernetes/storage-configuration-mount-persistent-volumes-in-containers.rst +++ b/doc/source/storage/kubernetes/storage-configuration-mount-readwriteonce-persistent-volumes-in-containers.rst @@ -1,28 +1,26 @@ .. pjw1564749970685 -.. _storage-configuration-mount-persistent-volumes-in-containers: +.. _storage-configuration-mount-readwriteonce-persistent-volumes-in-containers: -====================================== -Mount Persistent Volumes in Containers -====================================== +==================================================== +Mount ReadWriteOnce Persistent Volumes in Containers +==================================================== -You can launch, attach, and terminate a busybox container to mount |PVCs| in -your cluster. +You can attach ReadWriteOnce |PVCs| to a container when launching a container, +and changes to those |PVCs| will persist even if that container gets terminated +and restarted. .. rubric:: |context| This example shows how a volume is claimed and mounted by a simple running -container. It is the responsibility of an individual micro-service within -an application to make a volume claim, mount it, and use it. For example, -the stx-openstack application will make volume claims for **mariadb** and -**rabbitmq** via their helm charts to orchestrate this. +container, and the contents of the volume claim persists across restarts of +the container. It is the responsibility of an individual micro-service within +an application to make a volume claim, mount it, and use it. .. rubric:: |prereq| -You must have created the persistent volume claims. This procedure uses -PVCs with names and configurations created in |stor-doc|: :ref:`Create -Persistent Volume Claims -`. +You should refer to the Volume Claim examples. For more information, see, +:ref:`Create ReadWriteOnce Persistent Volume Claims `. .. rubric:: |proc| @@ -30,18 +28,18 @@ Persistent Volume Claims .. _storage-configuration-mount-persistent-volumes-in-containers-d583e55: #. Create the busybox container with the persistent volumes created from - the PVCs mounted. + the |PVCs| mounted. #. Create a yaml file definition for the busybox container. .. code-block:: none - % cat < busybox.yaml + % cat < rwo-busybox.yaml apiVersion: apps/v1 kind: Deployment metadata: - name: busybox + name: rwo-busybox namespace: default spec: progressDeadlineSeconds: 600 @@ -71,10 +69,10 @@ Persistent Volume Claims volumes: - name: pvc1 persistentVolumeClaim: - claimName: test-claim1 + claimName: rwo-test-claim1 - name: pvc2 persistentVolumeClaim: - claimName: test-claim2 + claimName: rwo-test-claim2 EOF @@ -82,10 +80,11 @@ Persistent Volume Claims .. code-block:: none - % kubectl apply -f busybox.yaml + % kubectl apply -f rwo-busybox.yaml + deployment.apps/rwo-busybox created -#. Attach to the busybox and create files on the persistent volumes. +#. Attach to the busybox and create files on the Persistent Volumes. #. List the available pods. 
@@ -93,34 +92,31 @@ Persistent Volume Claims .. code-block:: none % kubectl get pods - NAME READY STATUS RESTARTS AGE - busybox-5c4f877455-gkg2s 1/1 Running 0 19s - + NAME READY STATUS RESTARTS AGE + rwo-busybox-5c4f877455-gkg2s 1/1 Running 0 19s #. Connect to the pod shell for CLI access. .. code-block:: none - % kubectl attach busybox-5c4f877455-gkg2s -c busybox -i -t + % kubectl attach rwo-busybox-5c4f877455-gkg2s -c busybox -i -t #. From the container's console, list the disks to verify that the - persistent volumes are attached. + Persistent Volumes are attached. .. code-block:: none # df - Filesystem 1K-blocks Used Available Use% Mounted on - overlay 31441920 3239984 28201936 10% / - tmpfs 65536 0 65536 0% /dev - tmpfs 65900776 0 65900776 0% /sys/fs/cgroup - /dev/rbd0 999320 2564 980372 0% /mnt1 - /dev/rbd1 999320 2564 980372 0% /mnt2 - /dev/sda4 20027216 4952208 14034624 26% - ... + Filesystem 1K-blocks Used Available Use% Mounted on + overlay 31441920 3239984 28201936 10% / + tmpfs 65536 0 65536 0% /dev + tmpfs 65900776 0 65900776 0% /sys/fs/cgroup + /dev/rbd0 999320 2564 980372 0% /mnt1 + /dev/rbd1 999320 2564 980372 0% /mnt2 + /dev/sda4 20027216 4952208 14034624 26% The PVCs are mounted as /mnt1 and /mnt2. - #. Create files in the mounted volumes. .. code-block:: none @@ -140,31 +136,32 @@ Persistent Volume Claims .. code-block:: none # exit - Session ended, resume using 'kubectl attach busybox-5c4f877455-gkg2s -c busybox -i -t' command when the pod is running + Session ended, resume using + 'kubectl attach busybox-5c4f877455-gkg2s -c busybox -i -t' command when + the pod is running #. Terminate the busybox container. .. code-block:: none - % kubectl delete -f busybox.yaml + % kubectl delete -f rwo-busybox.yaml #. Recreate the busybox container, again attached to persistent volumes. - #. Apply the busybox configuration. .. code-block:: none - % kubectl apply -f busybox.yaml + % kubectl apply -f rwo-busybox.yaml + deployment.apps/rwo-busybox created #. List the available pods. .. code-block:: none % kubectl get pods - NAME READY STATUS RESTARTS AGE - busybox-5c4f877455-jgcc4 1/1 Running 0 19s - + NAME READY STATUS RESTARTS AGE + rwo-busybox-5c4f877455-jgcc4 1/1 Running 0 19s #. Connect to the pod shell for CLI access. @@ -197,5 +194,3 @@ Persistent Volume Claims i-was-here lost+found # ls /mnt2 i-was-here-too lost+found - - diff --git a/doc/source/storage/kubernetes/view-details-for-a-partition.rst b/doc/source/storage/kubernetes/view-details-for-a-partition.rst index 1b2806d44..1bca51c60 100644 --- a/doc/source/storage/kubernetes/view-details-for-a-partition.rst +++ b/doc/source/storage/kubernetes/view-details-for-a-partition.rst @@ -6,7 +6,7 @@ View Details for a Partition ============================ -You can view details for a partition, use the **system host-disk-partition-show** +You can view details for a partition, use the **system host-disk-partition-show** command. .. rubric:: |context| diff --git a/doc/source/storage/openstack/configure-an-optional-cinder-file-system.rst b/doc/source/storage/openstack/configure-an-optional-cinder-file-system.rst index dc2163ac7..ba76b16b6 100644 --- a/doc/source/storage/openstack/configure-an-optional-cinder-file-system.rst +++ b/doc/source/storage/openstack/configure-an-optional-cinder-file-system.rst @@ -127,7 +127,7 @@ For example: .. code-block:: none ~(keystone_admin)]$ system host-fs-modify controller-0 image-conversion=8 - + .. 
_configure-an-optional-cinder-file-system-section-ubp-f14-tnb: diff --git a/doc/source/storage/openstack/create-or-change-the-size-of-nova-local-storage.rst b/doc/source/storage/openstack/create-or-change-the-size-of-nova-local-storage.rst index 8e44f9f96..019e85ec5 100644 --- a/doc/source/storage/openstack/create-or-change-the-size-of-nova-local-storage.rst +++ b/doc/source/storage/openstack/create-or-change-the-size-of-nova-local-storage.rst @@ -40,8 +40,8 @@ can no longer be removed. ~(keystone_admin)$ system host-disk-list compute-0 +--------------------------------------+--------------++--------------+ - | uuid | device_node | available_gib | - | | | | + | uuid | device_node | available_gib | + | | | | +--------------------------------------+--------------+---------------+ | 5dcb3a0e-c677-4363-a030-58e245008504 | /dev/sda | 12216 | | c2932691-1b46-4faf-b823-2911a9ecdb9b | /dev/sdb | 20477 | @@ -71,7 +71,7 @@ can no longer be removed. | updated_at | None | | parameters | {u'instance_backing': u'lvm', u'instances_lv_size_mib': 0} | +-----------------+------------------------------------------------------------+ - + #. Obtain the |UUID| of the disk or partition to use for **nova-local** storage. @@ -140,7 +140,7 @@ can no longer be removed. #. Obtain the |UUID| of the partition to use for **nova-local** storage as described in step - + .. xbooklink :ref:`5 `. #. Add a disk or partition to the **nova-local** group, using a command of the diff --git a/doc/source/storage/openstack/index.rst b/doc/source/storage/openstack/index.rst index 30cc9d158..4ef60fee8 100644 --- a/doc/source/storage/openstack/index.rst +++ b/doc/source/storage/openstack/index.rst @@ -6,7 +6,7 @@ Contents .. toctree:: :maxdepth: 1 - + storage-configuration-and-management-overview storage-configuration-and-management-storage-resources config-and-management-ceph-placement-group-number-dimensioning-for-storage-cluster diff --git a/doc/source/system_configuration/kubernetes/changing-the-oam-ip-configuration-using-the-cli.rst b/doc/source/system_configuration/kubernetes/changing-the-oam-ip-configuration-using-the-cli.rst index f0e203710..6fd956128 100644 --- a/doc/source/system_configuration/kubernetes/changing-the-oam-ip-configuration-using-the-cli.rst +++ b/doc/source/system_configuration/kubernetes/changing-the-oam-ip-configuration-using-the-cli.rst @@ -69,6 +69,6 @@ controllers. This process requires a swact on the controllers. Then you must lock and unlock the worker nodes one at a time, ensuring that sufficient resources are available to migrate any running instances. -.. note:: +.. note:: On AIO Simplex systems you do not need to lock and unlock the host. The changes are applied automatically. \ No newline at end of file diff --git a/doc/source/system_configuration/openstack/configuring-the-rpc-response-timeout-in-cinder.rst b/doc/source/system_configuration/openstack/configuring-the-rpc-response-timeout-in-cinder.rst index a2ca7ec5a..32ac371e2 100644 --- a/doc/source/system_configuration/openstack/configuring-the-rpc-response-timeout-in-cinder.rst +++ b/doc/source/system_configuration/openstack/configuring-the-rpc-response-timeout-in-cinder.rst @@ -33,7 +33,7 @@ override. .. 
parsed-literal:: ~(keystone_admin)$ system helm-override-show |prefix|-openstack nova openstack - + The output should include the following: diff --git a/doc/source/updates/openstack/apply-update-to-the-stx-openstack-application.rst b/doc/source/updates/openstack/apply-update-to-the-stx-openstack-application.rst index 0f1e7ad0a..00e2eb7e9 100644 --- a/doc/source/updates/openstack/apply-update-to-the-stx-openstack-application.rst +++ b/doc/source/updates/openstack/apply-update-to-the-stx-openstack-application.rst @@ -42,7 +42,7 @@ where the following are optional arguments: | | stable- | | | | | | | versioned| | | | | +--------------------------+----------+-------------------------------+---------------------------+----------+-----------+ - + The output indicates that the currently installed version of |prefix|-openstack is 20.10-0. @@ -78,7 +78,7 @@ and the following is a positional argument which must come last: .. code-block:: none $ source /etc/platform/openrc - ~(keystone_admin)$ + ~(keystone_admin)$ #. Update the application. @@ -110,6 +110,6 @@ and the following is a positional argument which must come last: | status | applied | | updated_at | 2020-05-02T17:44:40.152201+00:00 | +---------------+----------------------------------+ - + diff --git a/doc/source/usertasks/index.rst b/doc/source/usertasks/index.rst index f3212c851..88f193c94 100644 --- a/doc/source/usertasks/index.rst +++ b/doc/source/usertasks/index.rst @@ -11,7 +11,7 @@ end-user tasks. .. toctree:: :maxdepth: 2 - + kubernetes/index --------- diff --git a/doc/source/usertasks/kubernetes/index.rst b/doc/source/usertasks/kubernetes/index.rst index 0a039a1ca..cb074fa7e 100644 --- a/doc/source/usertasks/kubernetes/index.rst +++ b/doc/source/usertasks/kubernetes/index.rst @@ -2,17 +2,18 @@ Contents ======== -************* +------------- System access -************* +------------- .. toctree:: :maxdepth: 1 kubernetes-user-tutorials-access-overview +----------------- Remote CLI access -***************** +----------------- .. toctree:: :maxdepth: 1 @@ -24,34 +25,36 @@ Remote CLI access configuring-remote-helm-client using-container-based-remote-clis-and-clients +---------- GUI access -********** +---------- .. toctree:: :maxdepth: 1 accessing-the-kubernetes-dashboard +---------- API access -********** +---------- .. toctree:: :maxdepth: 1 kubernetes-user-tutorials-rest-api-access -********************** +---------------------- Application management -********************** +---------------------- .. toctree:: :maxdepth: 1 kubernetes-user-tutorials-helm-package-manager -********************* -Local docker registry -********************* +--------------------- +Local Docker registry +--------------------- .. toctree:: :maxdepth: 1 @@ -59,18 +62,18 @@ Local docker registry kubernetes-user-tutorials-authentication-and-authorization using-an-image-from-the-local-docker-registry-in-a-container-spec -*************************** +--------------------------- NodePort usage restrictions -*************************** +--------------------------- .. toctree:: :maxdepth: 1 nodeport-usage-restrictions -************ +------------ Cert Manager -************ +------------ .. toctree:: :maxdepth: 1 @@ -78,9 +81,9 @@ Cert Manager kubernetes-user-tutorials-cert-manager letsencrypt-example -******************************** +-------------------------------- Vault secret and data management -******************************** +-------------------------------- .. 
toctree:: :maxdepth: 1 @@ -89,9 +92,9 @@ Vault secret and data management vault-aware vault-unaware -**************************** -Using Kata container runtime -**************************** +----------------------------- +Using Kata Containers runtime +----------------------------- .. toctree:: :maxdepth: 1 @@ -100,19 +103,38 @@ Using Kata container runtime specifying-kata-container-runtime-in-pod-spec known-limitations -******************************* +------------------------------- Adding persistent volume claims -******************************* +------------------------------- .. toctree:: :maxdepth: 1 - kubernetes-user-tutorials-creating-persistent-volume-claims - kubernetes-user-tutorials-mounting-persistent-volumes-in-containers + kubernetes-user-tutorials-about-persistent-volume-support -**************************************** +*************** +RBD Provisioner +*************** + +.. toctree:: + :maxdepth: 1 + + kubernetes-user-tutorials-create-readwriteonce-persistent-volume-claims + kubernetes-user-tutorials-mount-readwriteonce-persistent-volumes-in-containers + +**************************** +Ceph File System Provisioner +**************************** + +.. toctree:: + :maxdepth: 1 + + kubernetes-user-tutorials-create-readwritemany-persistent-volume-claims + kubernetes-user-tutorials-mount-readwritemany-persistent-volumes-in-containers + +---------------------------------------- Adding an SRIOV interface to a container -**************************************** +---------------------------------------- .. toctree:: :maxdepth: 1 @@ -120,9 +142,9 @@ Adding an SRIOV interface to a container creating-network-attachment-definitions using-network-attachment-definitions-in-a-container -************************** +-------------------------- CPU Manager for Kubernetes -************************** +-------------------------- .. toctree:: :maxdepth: 1 diff --git a/doc/source/usertasks/kubernetes/kubernetes-user-tutorials-about-persistent-volume-support.rst b/doc/source/usertasks/kubernetes/kubernetes-user-tutorials-about-persistent-volume-support.rst new file mode 100644 index 000000000..a26fa49e6 --- /dev/null +++ b/doc/source/usertasks/kubernetes/kubernetes-user-tutorials-about-persistent-volume-support.rst @@ -0,0 +1,67 @@ + +.. rhb1561120463240 +.. _kubernetes-user-tutorials-about-persistent-volume-support: + +=============================== +About Persistent Volume Support +=============================== + +Persistent Volume Claims \(PVCs\) are requests for storage resources in your +cluster. By default, container images have an ephemeral file system. In order +for containers to persist files beyond the lifetime of the container, a +Persistent Volume Claim can be created to obtain a persistent volume which the +container can mount and read/write files. + +Management and customization tasks for Kubernetes |PVCs| +can be accomplished by using StorageClasses set up by two Helm charts; +**rbd-provisioner** and **cephfs-provisioner**. The **rbd-provisioner**, +and **cephfs-provisioner** Helm charts are included in the +**platform-integ-apps** system application, which is automatically loaded and +applied as part of the |prod| installation. 
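A quick way to confirm that **platform-integ-apps** has been applied is the
:command:`system application-list` command. The output below is abridged and
illustrative; the version shown will vary by release.

.. code-block:: none

    ~(keystone_admin)]$ system application-list
    +---------------------+---------+-------------------------------+---------------+---------+-----------+
    | application         | version | manifest name                 | manifest file | status  | progress  |
    +---------------------+---------+-------------------------------+---------------+---------+-----------+
    | platform-integ-apps | 1.0-24  | platform-integration-manifest | manifest.yaml | applied | completed |
    +---------------------+---------+-------------------------------+---------------+---------+-----------+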
+ +PVCs are supported with the following options: + +- with accessMode of ReadWriteOnce backed by Ceph RBD + + - only one container can attach to these PVCs + - management and customization tasks related to these PVCs are done + through the **rbd-provisioner** Helm chart provided by + platform-integ-apps + +- with accessMode of ReadWriteMany backed by CephFS + + - multiple containers can attach to these PVCs + - management and customization tasks related to these PVCs are done + through the **cephfs-provisioner** Helm chart provided by + platform-integ-apps + +After platform-integ-apps is applied the following system configurations are +created: + +- **Ceph Pools** + + .. code-block:: none + + ~(keystone_admin)]$ ceph osd lspools + kube-rbd + kube-cephfs-data + kube-cephfs-metadata + +- **CephFS** + + .. code-block:: none + + ~(keystone_admin)]$ ceph fs ls + name: kube-cephfs, metadata pool: kube-cephfs-metadata, data pools: [kube-cephfs-data ] + +- **Kubernetes StorageClasses** + + .. code-block:: none + + ~(keystone_admin)]$ kubectl get sc + NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION + cephfs ceph.com/cephfs Delete Immediate false + general (default) ceph.com/rbd Delete Immediate false + + + diff --git a/doc/source/usertasks/kubernetes/kubernetes-user-tutorials-configuring-container-backed-remote-clis-and-clients.rst b/doc/source/usertasks/kubernetes/kubernetes-user-tutorials-configuring-container-backed-remote-clis-and-clients.rst index 32cee678c..1f9cf279b 100644 --- a/doc/source/usertasks/kubernetes/kubernetes-user-tutorials-configuring-container-backed-remote-clis-and-clients.rst +++ b/doc/source/usertasks/kubernetes/kubernetes-user-tutorials-configuring-container-backed-remote-clis-and-clients.rst @@ -120,8 +120,8 @@ and clients for a non-admin user. .. code-block:: none - $ ./configure_client.sh -t platform -r my_openrc.sh -k user-kubeconfig -w $HOME/remote_cli_wd - + $ ./configure_client.sh -t platform -r my_openrc.sh -k user-kubeconfig -w $HOME/remote_cli_wd + .. only:: partner .. include:: ../../_includes/kubernetes-user-tutorials-configuring-container-backed-remote-clis-and-clients.rest @@ -216,7 +216,7 @@ See :ref:`Using Container-backed Remote CLIs and Clients ` :ref:`Installing Kubectl and Helm Clients Directly on a Host diff --git a/doc/source/usertasks/kubernetes/kubernetes-user-tutorials-create-readwritemany-persistent-volume-claims.rst b/doc/source/usertasks/kubernetes/kubernetes-user-tutorials-create-readwritemany-persistent-volume-claims.rst new file mode 100644 index 000000000..1e2556081 --- /dev/null +++ b/doc/source/usertasks/kubernetes/kubernetes-user-tutorials-create-readwritemany-persistent-volume-claims.rst @@ -0,0 +1,70 @@ + +.. xms1617036308112 +.. _kubernetes-user-tutorials-create-readwritemany-persistent-volume-claims: + +============================================= +Create ReadWriteMany Persistent Volume Claims +============================================= + +Container images have an ephemeral file system by default. For data to survive +beyond the lifetime of a container, it can read and write files to a +volume obtained with a Persistent Volume Claim \(PVC\) created to provide +persistent storage. + +.. rubric:: |context| + +For multiple containers to mount the same PVC, create a PVC with accessMode of +ReadWriteMany \(RWX\). + +The following steps show an example of creating a 1GB |PVC| +with ReadWriteMany accessMode. + +.. rubric:: |proc| + +.. _xms1617036308112-steps-bdr-qnm-tkb: + +#. 
Create the **rwx-test-claim** Persistent Volume Claim. + + #. Create a yaml file defining the claim and its attributes. For example: + + .. code-block:: none + + ~(keystone_admin)]$ cat < rwx-claim.yaml + kind: PersistentVolumeClaim + apiVersion: v1 + metadata: + name: rwx-test-claim + spec: + accessModes: + - ReadWriteMany + resources: + requests: + storage: 1Gi + storageClassName: cephfs + EOF + + #. Apply the settings created above. + + .. code-block:: none + + ~(keystone_admin)]$ kubectl apply -f rwx-claim.yaml + persistentvolumeclaim/rwx-test-claim created + +1GB PVC has been created. You can view the PVCs using the following command. + +.. code-block:: none + + ~(keystone_admin)]$ kubectl get persistentvolumeclaims + + NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS + rwx-test-claim Bound pvc-df9f.. 1Gi RWX cephfs + + +.. code-block:: none + + ~(keystone_admin)]$ kubectl get persistentvolume + NAME CAPACITY ACCESS.. RECLAIM.. STATUS CLAIM STORAGECLASS + pvc-df9f.. 1Gi RWX Delete Bound default/rwx-test-claim cephfs + +For more information on using CephFS for internal Ceph backends, see, +|stor-doc| :ref:`About Persistent Volume Support `. diff --git a/doc/source/usertasks/kubernetes/kubernetes-user-tutorials-create-readwriteonce-persistent-volume-claims.rst b/doc/source/usertasks/kubernetes/kubernetes-user-tutorials-create-readwriteonce-persistent-volume-claims.rst new file mode 100644 index 000000000..037a2dee9 --- /dev/null +++ b/doc/source/usertasks/kubernetes/kubernetes-user-tutorials-create-readwriteonce-persistent-volume-claims.rst @@ -0,0 +1,102 @@ + +.. rqy1582055871598 +.. _kubernetes-user-tutorials-create-readwriteonce-persistent-volume-claims: + +============================================= +Create ReadWriteOnce Persistent Volume Claims +============================================= + +Container images have an ephemeral file system by default. For data to survive +beyond the lifetime of a container, it can read and write files to a persistent +volume obtained with a Persistent Volume Claim \(PVC\) created to provide +persistent storage. + +.. rubric:: |context| + +For the use case of a single container mounting a specific |PVC|, create a PVC +with accessMode of ReadWriteOnce (RWO). + +The following steps show an example of creating two 1GB |PVCs| with +ReadWriteOnce accessMode. + +.. rubric:: |proc| + +.. _kubernetes-user-tutorials-creating-persistent-volume-claims-d380e32: + +#. Create the **rwo-test-claim1** Persistent Volume Claim. + + #. Create a yaml file defining the claim and its attributes. + + For example: + + .. code-block:: none + + ~(keystone_admin)]$ + cat < rwo-claim1.yaml + kind: PersistentVolumeClaim + apiVersion: v1 + metadata: + name: rwo-test-claim1 + spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: general + EOF + + #. Apply the settings created above. + + .. code-block:: none + + ~(keystone_admin)]$ kubectl apply -f rwo-claim1.yaml + persistentvolumeclaim/rwo-test-claim1 created + +#. Create the **rwo-test-claim2** Persistent Volume Claim. + + #. Create a yaml file defining the claim and its attributes. + + For example: + + .. code-block:: none + + ~(keystone_admin)]$ cat < rwo-claim2.yaml + kind: PersistentVolumeClaim + apiVersion: v1 + metadata: + name: rwo-test-claim2 + spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: general + EOF + + #. Apply the settings created above. + + .. 
code-block:: none + + ~(keystone_admin)]$ kubectl apply -f rwo-claim2.yaml + persistentvolumeclaim/rwo-test-claim2 created + +Result: Two 1GB Persistent Volume Claims have been created. You can view the PVCs using +the following command. + +.. code-block:: none + + ~(keystone_admin)]$ kubectl get persistentvolumeclaims + + NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS + rwo-test-claim1 Bound pvc-08d8.. 1Gi RWO general + rwo-test-claim2 Bound pvc-af10.. 1Gi RWO general + +.. code-block:: none + + ~(keystone_admin)]$ kubectl get persistentvolume + + NAME CAPACITY ACCESS.. RECLAIM.. STATUS CLAIM STORAGECLASS + pvc-08d8.. 1Gi RWO Delete Bound default/rwo-test-claim1 general + pvc-af10.. 1Gi RWO Delete Bound default/rwo-test-claim2 general diff --git a/doc/source/usertasks/kubernetes/kubernetes-user-tutorials-creating-persistent-volume-claims.rst b/doc/source/usertasks/kubernetes/kubernetes-user-tutorials-creating-persistent-volume-claims.rst deleted file mode 100644 index c4142b789..000000000 --- a/doc/source/usertasks/kubernetes/kubernetes-user-tutorials-creating-persistent-volume-claims.rst +++ /dev/null @@ -1,91 +0,0 @@ - -.. rqy1582055871598 -.. _kubernetes-user-tutorials-creating-persistent-volume-claims: - -=============================== -Create Persistent Volume Claims -=============================== - -Container images have an ephemeral file system by default. For data to survive -beyond the lifetime of a container, it can read and write files to a persistent -volume obtained with a :abbr:`PVC (Persistent Volume Claim)` created to provide -persistent storage. - -.. rubric:: |context| - -The following steps create two 1Gb persistent volume claims. - -.. rubric:: |proc| - -.. _kubernetes-user-tutorials-creating-persistent-volume-claims-d395e32: - -#. Create the **test-claim1** persistent volume claim. - - - #. Create a yaml file defining the claim and its attributes. - - For example: - - .. code-block:: yaml - - ~(keystone_admin)]$ cat < claim1.yaml - kind: PersistentVolumeClaim - apiVersion: v1 - metadata: - name: test-claim1 - spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 1Gi - storageClassName: general - EOF - - #. Apply the settings created above. - - .. code-block:: none - - ~(keystone_admin)]$ kubectl apply -f claim1.yaml - persistentvolumeclaim/test-claim1 created - -#. Create the **test-claim2** persistent volume claim. - - #. Create a yaml file defining the claim and its attributes. - - For example: - - .. code-block:: yaml - - ~(keystone_admin)]$ cat < claim2.yaml - kind: PersistentVolumeClaim - apiVersion: v1 - metadata: - name: test-claim2 - spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 1Gi - storageClassName: general - EOF - - #. Apply the settings created above. - - .. code-block:: none - - ~(keystone_admin)]$ kubectl apply -f claim2.yaml - persistentvolumeclaim/test-claim2 created - -.. rubric:: |result| - -Two 1Gb persistent volume claims have been created. You can view them with the -following command. - -.. code-block:: none - - ~(keystone_admin)]$ kubectl get persistentvolumeclaims - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - test-claim1 Bound pvc-aaca.. 1Gi RWO general 2m56s - test-claim2 Bound pvc-e93f.. 
1Gi RWO general 68s diff --git a/doc/source/usertasks/kubernetes/kubernetes-user-tutorials-mount-readwritemany-persistent-volumes-in-containers.rst b/doc/source/usertasks/kubernetes/kubernetes-user-tutorials-mount-readwritemany-persistent-volumes-in-containers.rst new file mode 100644 index 000000000..b9d0ac25e --- /dev/null +++ b/doc/source/usertasks/kubernetes/kubernetes-user-tutorials-mount-readwritemany-persistent-volumes-in-containers.rst @@ -0,0 +1,178 @@ + +.. iqs1617036367453 +.. _kubernetes-user-tutorials-mount-readwritemany-persistent-volumes-in-containers: + +==================================================== +Mount ReadWriteMany Persistent Volumes in Containers +==================================================== + +You can attach a ReadWriteMany PVC to multiple containers, and that PVC can be +written to, by all containers. + +.. rubric:: |context| + +This example shows how a volume is claimed and mounted by each container +replica of a deployment with 2 replicas, and each container replica can read +and write to the PVC. It is the responsibility of an individual micro-service +within an application to make a volume claim, mount it, and use it. + +.. rubric:: |prereq| + +You should refer to the Volume Claim examples. For more information, +see :ref:`Create ReadWriteMany Persistent Volume Claims `. + +.. rubric:: |proc| + +.. _iqs1617036367453-steps-fqj-flr-tkb: + +#. Create the busybox container with the persistent volumes created from the + |PVCs| mounted. This deployment will create two replicas mounting the same + persistent volume. + + #. Create a yaml file definition for the busybox container. + + .. code-block:: none + + % cat < wrx-busybox.yaml + apiVersion: apps/v1 + kind: Deployment + metadata: + name: wrx-busybox + namespace: default + spec: + progressDeadlineSeconds: 600 + replicas: 2 + selector: + matchLabels: + run: busybox + template: + metadata: + labels: + run: busybox + spec: + containers: + - args: + - sh + image: busybox + imagePullPolicy: Always + name: busybox + stdin: true + tty: true + volumeMounts: + - name: pvc1 + mountPath: "/mnt1" + restartPolicy: Always + volumes: + - name: pvc1 + persistentVolumeClaim: + claimName: rwx-test-claim + EOF + + #. Apply the busybox configuration. + + .. code-block:: none + + % kubectl apply -f wrx-busybox.yaml + deployment.apps/wrx-busybox created + + +#. Attach to the busybox and create files on the Persistent Volumes. + + #. List the available pods. + + .. code-block:: none + + % kubectl get pods + NAME READY STATUS RESTARTS AGE + wrx-busybox-6455997c76-4kg8v 1/1 Running 0 108s + wrx-busybox-6455997c76-crmw6 1/1 Running 0 108s + + #. Connect to the pod shell for CLI access. + + .. code-block:: none + + % kubectl attach wrx-busybox-6455997c76-4kg8v -c busybox -i -t + + #. From the container's console, list the disks to verify that the + Persistent Volume is attached. + + .. code-block:: none + + % df + Filesystem 1K-blocks Used Available Use% Mounted on + overlay 31441920 1783748 29658172 6% / + tmpfs 65536 0 65536 0% /dev + tmpfs 5033188 0 5033188 0% /sys/fs/cgroup + ceph-fuse 516542464 643072 515899392 0% /mnt1 + + The PVC is mounted as /mnt1. + + +#. Create files in the mount. + + .. code-block:: none + + # cd /mnt1 + # touch i-was-here-${HOSTNAME} + # ls /mnt1 + i-was-here-wrx-busybox-6455997c76-4kg8vi + +#. End the container session. + + .. code-block:: none + + % exit + wrx-busybox-6455997c76-4kg8v -c busybox -i -t' command when the pod is running + +#. Connect to the other busybox container + + .. 
code-block:: none + + % kubectl attach wrx-busybox-6455997c76-crmw6 -c busybox -i -t + +#. \(Optional\): From the container's console list the disks to verify that + the |PVC| is attached. + + .. code-block:: none + + % df + Filesystem 1K-blocks Used Available Use% Mounted on + overlay 31441920 1783888 29658032 6% / + tmpfs 65536 0 65536 0% /dev + tmpfs 5033188 0 5033188 0% /sys/fs/cgroup + ceph-fuse 516542464 643072 515899392 0% /mnt1 + + +#. Verify that the file created from the other container exists and that this + container can also write to the Persistent Volume. + + .. code-block:: none + + # cd /mnt1 + # ls /mnt1 + i-was-here-wrx-busybox-6455997c76-4kg8v + # echo ${HOSTNAME} + wrx-busybox-6455997c76-crmw6 + # touch i-was-here-${HOSTNAME} + # ls /mnt1 + i-was-here-wrx-busybox-6455997c76-4kg8v i-was-here-wrx-busybox-6455997c76-crmw6 + +#. End the container session. + + .. code-block:: none + + % exit + Session ended, resume using + :command:`kubectl attach wrx-busybox-6455997c76-crmw6 -c busybox -i -t` + command when the pod is running + +#. Terminate the busybox container. + + .. code-block:: none + + % kubectl delete -f wrx-busybox.yaml + + For more information on Persistent Volume Support, see, |prod| |stor-doc| + :ref:`About Persistent Volume Support `. + + diff --git a/doc/source/usertasks/kubernetes/kubernetes-user-tutorials-mounting-persistent-volumes-in-containers.rst b/doc/source/usertasks/kubernetes/kubernetes-user-tutorials-mount-readwriteonce-persistent-volumes-in-containers.rst similarity index 68% rename from doc/source/usertasks/kubernetes/kubernetes-user-tutorials-mounting-persistent-volumes-in-containers.rst rename to doc/source/usertasks/kubernetes/kubernetes-user-tutorials-mount-readwriteonce-persistent-volumes-in-containers.rst index b5ef4076e..2f7bb3bcc 100644 --- a/doc/source/usertasks/kubernetes/kubernetes-user-tutorials-mounting-persistent-volumes-in-containers.rst +++ b/doc/source/usertasks/kubernetes/kubernetes-user-tutorials-mount-readwriteonce-persistent-volumes-in-containers.rst @@ -1,46 +1,46 @@ .. nos1582114374670 -.. _kubernetes-user-tutorials-mounting-persistent-volumes-in-containers: +.. _kubernetes-user-tutorials-mount-readwriteonce-persistent-volumes-in-containers: -====================================== -Mount Persistent Volumes in Containers -====================================== +==================================================== +Mount ReadWriteOnce Persistent Volumes in Containers +==================================================== -You can launch, attach, and terminate a busybox container to mount :abbr:`PVCs -(Persistent Volume Claims)` in your cluster. +You can attach ReadWriteOnce |PVCs| to a container when launching a container, +and changes to those PVCs will persist even if that container gets terminated +and restarted. + +.. note:: + A ReadWriteOnce PVC can only be mounted by a single container. .. rubric:: |context| This example shows how a volume is claimed and mounted by a simple running +container, and the contents of the volume claim persists across restarts of the container. It is the responsibility of an individual micro-service within an -application to make a volume claim, mount it, and use it. For example, the -stx-openstack application will make volume claims for **mariadb** and -**rabbitmq** via their helm charts to orchestrate this. +application to make a volume claim, mount it, and use it. .. rubric:: |prereq| -You must have created the persistent volume claims. - -.. 
xreflink This procedure uses PVCs - with names and configurations created in |stor-doc|: :ref:`Creating Persistent - Volume Claims `. +You should refer to the Volume Claim examples. For more information, see +:ref:`Create ReadWriteOnce Persistent Volume Claims `. .. rubric:: |proc| -.. _kubernetes-user-tutorials-mounting-persistent-volumes-in-containers-d18e55: +.. _kubernetes-user-tutorials-mounting-persistent-volumes-in-containers-d18e44: #. Create the busybox container with the persistent volumes created from the - PVCs mounted. + |PVCs| mounted. #. Create a yaml file definition for the busybox container. .. code-block:: none - % cat < busybox.yaml + % cat < rwo-busybox.yaml apiVersion: apps/v1 kind: Deployment metadata: - name: busybox + name: rwo-busybox namespace: default spec: progressDeadlineSeconds: 600 @@ -70,17 +70,19 @@ You must have created the persistent volume claims. volumes: - name: pvc1 persistentVolumeClaim: - claimName: test-claim1 + claimName: rwo-test-claim1 - name: pvc2 persistentVolumeClaim: - claimName: test-claim2 + claimName: rwo-test-claim2 EOF #. Apply the busybox configuration. .. code-block:: none - % kubectl apply -f busybox.yaml + % kubectl apply -f rwo-busybox.yaml + deployment.apps/rwo-busybox created + #. Attach to the busybox and create files on the persistent volumes. @@ -89,18 +91,17 @@ You must have created the persistent volume claims. .. code-block:: none % kubectl get pods - NAME READY STATUS RESTARTS AGE - busybox-5c4f877455-gkg2s 1/1 Running 0 19s - + NAME READY STATUS RESTARTS AGE + rwo-busybox-5c4f877455-gkg2s 1/1 Running 0 19s #. Connect to the pod shell for CLI access. .. code-block:: none - % kubectl attach busybox-5c4f877455-gkg2s -c busybox -i -t + % kubectl attach rwo-busybox-5c4f877455-gkg2s -c busybox -i -t #. From the container's console, list the disks to verify that the - persistent volumes are attached. + Persistent Volumes are attached. .. code-block:: none @@ -116,6 +117,7 @@ You must have created the persistent volume claims. The PVCs are mounted as /mnt1 and /mnt2. + #. Create files in the mounted volumes. .. code-block:: none @@ -124,7 +126,6 @@ You must have created the persistent volume claims. # touch i-was-here # ls /mnt1 i-was-here lost+found - # # cd /mnt2 # touch i-was-here-too # ls /mnt2 @@ -135,13 +136,14 @@ You must have created the persistent volume claims. .. code-block:: none # exit - Session ended, resume using 'kubectl attach busybox-5c4f877455-gkg2s -c busybox -i -t' command when the pod is running + Session ended, resume using :command:`kubectl attach busybox-5c4f877455-gkg2s -c busybox -i -t` + when the pod is running #. Terminate the busybox container. .. code-block:: none - % kubectl delete -f busybox.yaml + % kubectl delete -f rwo-busybox.yaml #. Re-create the busybox container, again attached to persistent volumes. @@ -149,15 +151,16 @@ You must have created the persistent volume claims. .. code-block:: none - % kubectl apply -f busybox.yaml + % kubectl apply -f rwo-busybox.yaml + deployment.apps/rwo-busybox created #. List the available pods. .. code-block:: none % kubectl get pods - NAME READY STATUS RESTARTS AGE - busybox-5c4f877455-jgcc4 1/1 Running 0 19s + NAME READY STATUS RESTARTS AGE + rwo-busybox-5c4f877455-jgcc4 1/1 Running 0 19s #. Connect to the pod shell for CLI access. @@ -165,8 +168,7 @@ You must have created the persistent volume claims. % kubectl attach busybox-5c4f877455-jgcc4 -c busybox -i -t - #. From the container's console, list the disks to verify that the PVCs are - attached. + #. 
+   #. From the container's console, list the disks to verify that the PVCs are attached.
 
      .. code-block:: none
 
@@ -180,8 +182,7 @@ You must have created the persistent volume claims.
       /dev/sda4            20027216   4952208   14034624   26%
       ...
 
-#. Verify that the files created during the earlier container session still
-   exist.
+#. Verify that the files created during the earlier container session still exist.
 
    .. code-block:: none
 
@@ -189,3 +190,5 @@ You must have created the persistent volume claims.
       i-was-here       lost+found
       # ls /mnt2
       i-was-here-too   lost+found
+
+
diff --git a/doc/source/usertasks/kubernetes/nodeport-usage-restrictions.rst b/doc/source/usertasks/kubernetes/nodeport-usage-restrictions.rst
index 7fda42e80..5a0b9abe3 100644
--- a/doc/source/usertasks/kubernetes/nodeport-usage-restrictions.rst
+++ b/doc/source/usertasks/kubernetes/nodeport-usage-restrictions.rst
@@ -6,7 +6,7 @@
 NodePort Usage Restrictions
 ===========================
 
-This sections lists the usage restrictions of NodePorts for your 
+This section lists the usage restrictions of NodePorts for your
 |prod-long| product.
 
 The following usage restrictions apply when using NodePorts:
diff --git a/doc/source/usertasks/kubernetes/remote-cli-access.rst b/doc/source/usertasks/kubernetes/remote-cli-access.rst
index acf2276ac..a5d93858a 100644
--- a/doc/source/usertasks/kubernetes/remote-cli-access.rst
+++ b/doc/source/usertasks/kubernetes/remote-cli-access.rst
@@ -29,13 +29,13 @@ methods.
   :command:`helm` clients directly on the remote host. This method only
   provides the kubernetes-related CLIs and requires OS-specific installation
   instructions.
-  
+
 .. seealso::
 
    :ref:`Configuring Container-backed Remote CLIs and Clients `
 
-   :ref:`Using Container-backed Remote CLIs and Clients ` 
+   :ref:`Using Container-backed Remote CLIs and Clients `
 
-   :ref:`Installing Kubectl and Helm Clients Directly on a Host ` 
+   :ref:`Installing Kubectl and Helm Clients Directly on a Host `
 
-   :ref:`Configuring Remote Helm Client `
\ No newline at end of file
+   :ref:`Configuring Remote Helm Client `
\ No newline at end of file
diff --git a/doc/source/usertasks/kubernetes/using-container-based-remote-clis-and-clients.rst b/doc/source/usertasks/kubernetes/using-container-based-remote-clis-and-clients.rst
index 2d280d32b..13c3c1d5e 100644
--- a/doc/source/usertasks/kubernetes/using-container-based-remote-clis-and-clients.rst
+++ b/doc/source/usertasks/kubernetes/using-container-based-remote-clis-and-clients.rst
@@ -43,8 +43,8 @@ before proceeding.
 
    .. code-block:: none
 
-      Please enter your OpenStack Password for project tenant1 as user user1: 
-      
+      Please enter your OpenStack Password for project tenant1 as user user1:
+
       root@myclient:/home/user/remote_cli_wd# kubectl -n kube-system get pods
       NAME                                       READY   STATUS      RESTARTS   AGE
       calico-kube-controllers-767467f9cf-wtvmr   1/1     Running     1          3d2h
@@ -57,7 +57,7 @@ before proceeding.
       ceph-pools-audit-1569849000-cb988          0/1     Completed   0          2m25s
       coredns-7cf476b5c8-5x724                   1/1     Running     1          3d2h
       ...
-      root@myclient:/home/user/remote_cli_wd# 
+      root@myclient:/home/user/remote_cli_wd#
 
    .. note::
       Some |CLI| commands are designed to leave you in a shell prompt, for
@@ -95,7 +95,7 @@ before proceeding.
 
    .. code-block:: none
 
-      root@myclient:/home/user# cp //test.yml $HOME/remote_cli_wd/test.yml 
+      root@myclient:/home/user# cp //test.yml $HOME/remote_cli_wd/test.yml
       root@myclient:/home/user# cd $HOME/remote_cli_wd
       root@myclient:/home/user/remote_cli_wd# kubectl -n kube-system create -f test.yml
       pod/test-pod created
@@ -136,7 +136,7 @@ before proceeding.
-**Related information** 
+**Related information**
 
 .. seealso::
 
    :ref:`Configuring Container-backed Remote CLIs and Clients `
diff --git a/doc/source/vnf_integration/bootstrap-data.rst b/doc/source/vnf_integration/bootstrap-data.rst
index 3fec857a9..27d0a03c4 100644
--- a/doc/source/vnf_integration/bootstrap-data.rst
+++ b/doc/source/vnf_integration/bootstrap-data.rst
@@ -20,7 +20,7 @@ the metadata server and config drive.
 
 - User data is made available to the |VNF| through either the metadata
   service or the configuration drive.
 
-- The cloud-init package reads information from either the metadata 
+- The cloud-init package reads information from either the metadata
   server or the configuration drive.
diff --git a/doc/source/vnf_integration/overview-of-vnf-integration.rst b/doc/source/vnf_integration/overview-of-vnf-integration.rst
index 1c806bcd5..0354a2716 100644
--- a/doc/source/vnf_integration/overview-of-vnf-integration.rst
+++ b/doc/source/vnf_integration/overview-of-vnf-integration.rst
@@ -6,8 +6,8 @@
 Overview of VNF Integration
 ===========================
 
-The |VNF| Integration document contains information specific to |VNF| or 
-|VM| application writers that assists them with integrating their 
+The |VNF| Integration document contains information specific to |VNF| or
+|VM| application writers that assists them with integrating their
 application on |prod-os|.
 
 The following |VNF|-related integration topics are included in this document:
@@ -32,7 +32,7 @@ The following |VNF|-related integration topics are included in this document:
 
 .. only:: partner
 
    .. include:: ../_includes/overview-of-vnf-integration.rest
-    
+
diff --git a/doc/source/vnf_integration/standard-virtio-backed-with-vhost-support.rst b/doc/source/vnf_integration/standard-virtio-backed-with-vhost-support.rst
index 657814dac..1ddef7d9d 100644
--- a/doc/source/vnf_integration/standard-virtio-backed-with-vhost-support.rst
+++ b/doc/source/vnf_integration/standard-virtio-backed-with-vhost-support.rst
@@ -15,6 +15,6 @@
 For virtio interfaces, |prod-os| supports vhost-user transparently by default.
 
 .. only:: partner
 
    .. include:: ../_includes/standard-virtio-backed-with-vhost-support.rest
-    
+
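The bootstrap-data.rst hunk above notes that cloud-init reads |VNF| user data
from either the metadata server or the configuration drive. As a minimal
sketch of what such user data can look like, the ``#cloud-config`` file below
uses only standard cloud-init keys; the host name, package, file path, and
file contents are illustrative assumptions, not values taken from |prod-os|
documentation:

.. code-block:: yaml

   #cloud-config
   # Set the guest host name on first boot (example value).
   hostname: vnf-instance-01
   # Install a package with the guest's package manager (example).
   packages:
     - qemu-guest-agent
   # Write an application configuration file (hypothetical path and content).
   write_files:
     - path: /etc/vnf/app.conf
       permissions: '0644'
       content: |
         mgmt_interface=eth0
   # Run commands once, at the end of the first boot (example).
   runcmd:
     - systemctl enable --now qemu-guest-agent

Whether cloud-init finds this data through the metadata service or on a
config drive depends on how the instance is launched, as described in the
hunk above.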