Updated Patch Set 5 to include review comments

Changed name of file to:
admin-application-commands-and-helm-overrides.rst

Updated Strings.txt

Updated formatting issues:
installing-and-running-cpu-manager-for-kubernetes.rst

Updated Patch Set 4 to include review comments

Admin Tasks Updated

Changed name of include file to:
isolating-cpu-cores-to-enhance-application-performance.rest

Change-Id: I0b354dda3c7f66da3a5d430839b5007a6a19cfad
Signed-off-by: Juanita-Balaraj <juanita.balaraj@windriver.com>
Signed-off-by: Stone <ronald.stone@windriver.com>
Juanita-Balaraj 2020-12-07 16:10:59 -05:00
parent 7256d89dcb
commit 0c4aa91ca4
30 changed files with 1499 additions and 11 deletions

View File

@ -0,0 +1,17 @@
.. pdb1561551141102
.. _about-cpu-manager-for-kubernetes:
================================
About CPU Manager for Kubernetes
================================
The CPU Manager for Kubernetes \(CMK\) feature provides cooperative management
of CPU affinity for Kubernetes workloads requiring predictable performance.
For more information about CMK, see the project page at `https://github.com/intel/CPU-Manager-for-Kubernetes <https://github.com/intel/CPU-Manager-for-Kubernetes>`__.
.. note::
The installation instructions on the CMK project page are incomplete.
Refer instead to :ref:`Install and Run CPU Manager for Kubernetes <installing-and-running-cpu-manager-for-kubernetes>`.

View File

@ -0,0 +1,22 @@
.. njh1572366777737
.. _about-the-admin-tutorials:
=========================
About the Admin Tutorials
=========================
The |prod-long| Kubernetes administration tutorials provide working examples
of common administrative tasks.
.. xreflink For details on accessing the system, see :ref:`|prod| Access the System <configuring-local-cli-access>`.
Common administrative tasks covered in this document include:
- application management
- local Docker registries
- Kubernetes CPU resource management

View File

@ -0,0 +1,475 @@
.. hby1568295041837
.. _admin-application-commands-and-helm-overrides:
=======================================
Application Commands and Helm Overrides
=======================================
Use |prod| :command:`system application` and :command:`system helm-override`
commands to manage containerized applications provided as part of |prod|.
.. rubric:: |proc|
- Use the following command to list all containerized applications provided
as part of |prod|.
.. code-block:: none
~(keystone_admin)$ system application-list [--nowrap]
where:
**nowrap**
Prevents line wrapping of the output.
For example:
.. code-block:: none
~(keystone_admin)$ system application-list --nowrap
+-------------+---------+---------------+---------------+----------+-----------+
| application | version | manifest name | manifest file | status | progress |
+-------------+---------+---------------+---------------+----------+-----------+
| platform- | 1.0-7 | platform- | manifest.yaml | applied | completed |
| integ-apps | | integration- | | | |
| | | manifest | | | |
| stx- | 1.0-18 | armada- | stx-openstack | uploaded | completed |
| openstack | | manifest | .yaml | | |
+-------------+---------+---------------+---------------+----------+-----------+
- Use the following command to show details for an application.
.. code-block:: none
~(keystone_admin)$ system application-show <app_name>
where:
**<app\_name>**
The name of the application to show details for.
For example:
.. code-block:: none
~(keystone_admin)$ system application-show stx-openstack
+---------------+----------------------------------+
| Property | Value |
+---------------+----------------------------------+
| active | False |
| app_version | 1.0-18 |
| created_at | 2019-09-06T15:34:03.194150+00:00 |
| manifest_file | stx-openstack.yaml |
| manifest_name | armada-manifest |
| name | stx-openstack |
| progress | completed |
| status | uploaded |
| updated_at | 2019-09-06T15:34:46.995929+00:00 |
+---------------+----------------------------------+
- Use the following command to upload application Helm chart\(s\) and
manifest.
.. code-block:: none
~(keystone_admin)$ system application-upload [-n | --app-name] <app_name> [-v | --version] <version> <tar_file>
where the following are optional arguments:
**<app\_name>**
Assigns a custom name to the application. You can use this name to
interact with the application in the future.
**<version>**
The version of the application.
and the following is a positional argument:
**<tar\_file>**
The path to the tar file containing the application to be uploaded.
For example:
.. code-block:: none
~(keystone_admin)$ system application-upload stx-openstack-1.0-18.tgz
+---------------+----------------------------------+
| Property | Value |
+---------------+----------------------------------+
| active | False |
| app_version | 1.0-18 |
| created_at | 2019-09-06T15:34:03.194150+00:00 |
| manifest_file | stx-openstack.yaml |
| manifest_name | armada-manifest |
| name | stx-openstack |
| progress | None |
| status | uploading |
| updated_at | None |
+---------------+----------------------------------+
Please use 'system application-list' or 'system application-show
stx-openstack' to view the current progress.
- To list the Helm chart overrides for an application, use the following
command:
.. code-block:: none
~(keystone_admin)$ system helm-override-list
usage: system helm-override-list [--nowrap] [-l | --long] <app_name>
where the following is a positional argument:
**<app\_name>**
The name of the application.
and the following is an optional argument:
**nowrap**
No word-wrapping of output.
**long**
List additional fields in output.
For example:
.. code-block:: none
~(keystone_admin)$ system helm-override-list stx-openstack --long
+---------------------+--------------------------------+---------------+
| chart name | overrides namespaces | chart enabled |
+---------------------+--------------------------------+---------------+
| aodh | [u'openstack'] | [False] |
| barbican | [u'openstack'] | [False] |
| ceilometer | [u'openstack'] | [False] |
| ceph-rgw | [u'openstack'] | [False] |
| cinder | [u'openstack'] | [True] |
| garbd | [u'openstack'] | [True] |
| glance | [u'openstack'] | [True] |
| gnocchi | [u'openstack'] | [False] |
| heat | [u'openstack'] | [True] |
| helm-toolkit | [] | [] |
| horizon | [u'openstack'] | [True] |
| ingress | [u'kube-system', u'openstack'] | [True, True] |
| ironic | [u'openstack'] | [False] |
| keystone | [u'openstack'] | [True] |
| keystone-api-proxy | [u'openstack'] | [True] |
| libvirt | [u'openstack'] | [True] |
| mariadb | [u'openstack'] | [True] |
| memcached | [u'openstack'] | [True] |
| neutron | [u'openstack'] | [True] |
| nginx-ports-control | [] | [] |
| nova | [u'openstack'] | [True] |
| nova-api-proxy | [u'openstack'] | [True] |
| openvswitch | [u'openstack'] | [True] |
| panko | [u'openstack'] | [False] |
| placement | [u'openstack'] | [True] |
| rabbitmq | [u'openstack'] | [True] |
| version_check | [] | [] |
+---------------------+--------------------------------+---------------+
- To show the overrides for a particular chart, use the following command.
System overrides are displayed in the **system\_overrides** section of
the **Property** column.
.. code-block:: none
~(keystone_admin)$ system helm-override-show
usage: system helm-override-show <app_name> <chart_name> <namespace>
where the following are positional arguments:
**<app\_name>**
The name of the application.
**<chart\_name>**
The name of the chart.
**<namespace>**
The namespace for chart overrides.
For example:
.. code-block:: none
~(keystone_admin)$ system helm-override-show stx-openstack glance openstack
- To modify service configuration parameters using user-specified overrides,
use the following command. To update a single configuration parameter, you
can use :command:`--set`. To update multiple configuration parameters, use
the :command:`--values` option with a **yaml** file; a sample values file is
shown at the end of this procedure.
.. code-block:: none
~(keystone_admin)$ system helm-override-update
usage: system helm-override-update <app_name> <chart_name> <namespace> --reuse-values --reset-values --values <file_name> --set <commandline_overrides>
where the following are positional arguments:
**<app\_name>**
The name of the application.
**<chart\_name>**
The name of the chart.
**<namespace>**
The namespace for chart overrides.
and the following are optional arguments:
**reuse-values**
Reuse existing Helm chart user override values. This argument is
ignored if **reset-values** is used.
**reset-values**
Replace any existing Helm chart overrides with the ones specified.
**values**
Specify a **yaml** file containing Helm chart override values. You can
specify this value multiple times.
**set**
Set Helm chart override values using the command line. Multiple
override values can be specified with multiple :command:`set`
arguments. These are processed after files passed through the
values argument.
For example, to enable the glance debugging log, use the following
command:
.. code-block:: none
~(keystone_admin)$ system helm-override-update stx-openstack glance openstack --set conf.glance.DEFAULT.DEBUG=true
+----------------+-------------------+
| Property | Value |
+----------------+-------------------+
| name | glance |
| namespace | openstack |
| user_overrides | conf: |
| | glance: |
| | DEFAULT: |
| | DEBUG: true |
+----------------+-------------------+
The user overrides are shown in the **user\_overrides** section of the
**Property** column.
.. note::
To apply the updated Helm chart overrides to the running application,
use the :command:`system application-apply` command.
- To enable or disable the installation of a particular Helm chart within an
application manifest, use the :command:`helm-chart-attribute-modify`
command. This command does not modify a chart or chart overrides,
which are managed through the :command:`helm-override-update` command.
.. code-block:: none
~(keystone_admin)$ system helm-chart-attribute-modify [--enabled <true/false>] <app_name> <chart_name> <namespace>
where the following is an optional argument:
**enabled**
Determines whether the chart is enabled.
and the following are positional arguments:
**<app\_name>**
The name of the application.
**<chart\_name>**
The name of the chart.
**<namespace>**
The namespace for chart overrides.
.. note::
To apply the updated Helm chart attribute to the running application,
use the :command:`system application-apply` command.
- To delete all the user overrides for a chart, use the following command:
.. code-block:: none
~(keystone_admin)$ system helm-override-delete
usage: system helm-override-delete <app_name> <chart_name> <namespace>
where the following are positional arguments:
**<app\_name>**
The name of the application.
**<chart\_name>**
The name of the chart.
**<namespace>**
The namespace for chart overrides.
For example:
.. code-block:: none
~(keystone_admin)$ system helm-override-delete stx-openstack glance openstack
Deleted chart overrides glance:openstack for application stx-openstack
- Use the following command to apply or reapply an application, making it
available for service.
.. code-block:: none
~(keystone_admin)$ system application-apply [-m | --mode] <mode> <app_name>
where the following is an optional argument:
**mode**
An application-specific mode controlling how the manifest is
applied. This option is used to delete and restore the
**stx-openstack** application.
and the following is a positional argument:
**<app\_name>**
The name of the application to apply.
For example:
.. code-block:: none
~(keystone_admin)$ system application-apply stx-openstack
+---------------+----------------------------------+
| Property | Value |
+---------------+----------------------------------+
| active | False |
| app_version | 1.0-18 |
| created_at | 2019-09-06T15:34:03.194150+00:00 |
| manifest_file | stx-openstack.yaml |
| manifest_name | armada-manifest |
| name | stx-openstack |
| progress | None |
| status | applying |
| updated_at | 2019-09-06T15:34:46.995929+00:00 |
+---------------+----------------------------------+
Please use 'system application-list' or 'system application-show
stx-openstack' to view the current progress.
- Use the following command to abort the current application.
.. code-block:: none
~(keystone_admin)$ system application-abort <app_name>
where:
**<app\_name>**
The name of the application to abort.
For example:
.. code-block:: none
~(keystone_admin)$ system application-abort stx-openstack
Application abort request has been accepted. If the previous operation has not
completed/failed, it will be cancelled shortly.
Use :command:`application-list` to confirm that the application has been
aborted.
- Use the following command to update the deployed application to a different
version.
.. code-block:: none
~(keystone_admin)$ system application-update [-n | --app-name] <app_name> [-v | --app-version] <version> <tar_file>
where the following are optional arguments:
**<app\_name>**
The name of the application to update.
You can look up the name of an application using the
:command:`application-list` command:
.. code-block:: none
~(keystone_admin)$ system application-list
+--------------------------+----------+-------------------------------+---------------------------+----------+-----------+
| application | version | manifest name | manifest file | status | progress |
+--------------------------+----------+-------------------------------+---------------------------+----------+-----------+
| cert-manager | 20.06-4 | cert-manager-manifest | certmanager-manifest.yaml | applied | completed |
| nginx-ingress-controller | 20.06-1 | nginx-ingress-controller- | nginx_ingress_controller | applied | completed |
| | | -manifest | _manifest.yaml | | |
| oidc-auth-apps | 20.06-26 | oidc-auth-manifest | manifest.yaml | uploaded | completed |
| platform-integ-apps | 20.06-9 | platform-integration-manifest | manifest.yaml | applied | completed |
| wr-analytics | 20.06-2 | analytics-armada-manifest | wr-analytics.yaml | applied | completed |
+--------------------------+----------+-------------------------------+---------------------------+----------+-----------+
The output indicates that the currently installed version of
**cert-manager** is 20.06-4.
**<version>**
The version to update the application to.
and the following is a positional argument which must come last:
**<tar\_file>**
The tar file containing the application manifest, Helm charts and
configuration file.
- Use the following command to remove an application from service. Removing
an application will clean up related Kubernetes resources and delete all
of its installed Helm charts.
.. code-block:: none
~(keystone_admin)$ system application-remove <app_name>
where:
**<app\_name>**
The name of the application to remove.
For example:
.. code-block:: none
~(keystone_admin)$ system application-remove stx-openstack
+---------------+----------------------------------+
| Property | Value |
+---------------+----------------------------------+
| active | False |
| app_version | 1.0-18 |
| created_at | 2019-09-06T15:34:03.194150+00:00 |
| manifest_file | stx-openstack.yaml |
| manifest_name | armada-manifest |
| name | stx-openstack |
| progress | None |
| status | removing |
| updated_at | 2019-09-06T17:39:19.813754+00:00 |
+---------------+----------------------------------+
Please use 'system application-list' or 'system application-show
stx-openstack' to view the current progress.
This command places the application in the uploaded state.
- Use the following command to completely delete an application from the
system.
.. code-block:: none
~(keystone_admin)$ system application-delete <app_name>
where:
**<app\_name>**
The name of the application to delete.
You must run :command:`application-remove` before deleting an application.
For example:
.. code-block:: none
~(keystone_admin)$ system application-delete stx-openstack
Application stx-openstack deleted.
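As noted earlier for :command:`helm-override-update`, multiple configuration
parameters can be set at once by passing a **yaml** file with the
:command:`--values` option. The following is a minimal sketch; the parameter
names and file path are illustrative only and must be checked against the
chart's actual values schema:
.. code-block:: none
~(keystone_admin)$ cat <<EOF > /home/sysadmin/glance-overrides.yaml
# Hypothetical override values; verify against the chart's values.yaml
conf:
  glance:
    DEFAULT:
      debug: true
EOF
~(keystone_admin)$ system helm-override-update stx-openstack glance openstack --values /home/sysadmin/glance-overrides.yaml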

View File

@ -0,0 +1,87 @@
.. hsq1558095273229
.. _freeing-space-in-the-local-docker-registry:
=======================================
Free Space in the Local Docker Registry
=======================================
You can delete images and perform garbage collection to free unused registry
space on the docker-distribution file system of the controllers.
.. rubric:: |context|
Simply deleting an image from the local Docker registry does not free the
associated space from the file system. To do so, you must also run the
:command:`registry-garbage-collect` command.
.. rubric:: |proc|
#. Identify the name of the image you want to delete.
.. code-block:: none
~(keystone_admin)$ system registry-image-list
+------------------------------------------------------+
| Image Name |
+------------------------------------------------------+
| docker.io/starlingx/k8s-cni-sriov |
| docker.io/starlingx/k8s-plugins-sriov-network-device |
| docker.io/starlingx/multus |
| gcr.io/kubernetes-helm/tiller |
| k8s.gcr.io/coredns |
| k8s.gcr.io/etcd |
| k8s.gcr.io/kube-apiserver |
| k8s.gcr.io/kube-controller-manager |
| k8s.gcr.io/kube-proxy |
| k8s.gcr.io/kube-scheduler |
| k8s.gcr.io/pause |
| quay.io/airshipit/armada |
| quay.io/calico/cni |
| quay.io/calico/kube-controllers |
| quay.io/calico/node |
+------------------------------------------------------+
#. Find tags associated with the image.
.. code-block:: none
~(keystone_admin)$ system registry-image-tags <imageName>
#. Free file system space.
.. code-block:: none
~(keystone_admin)$ system registry-image-delete <imageName>:<tagName>
This step only removes the registry's reference to the **image:tag**.
.. warning::
Do not delete **image:tags** that are currently being used by the
system. Deleting both the local Docker registry's **image:tags** and
the **image:tags** from the Docker cache will prevent failed deployment
pods from recovering. If this happens, you will need to manually
download the deleted image from the same source and push it back into
the local Docker registry under the same name and tag.
If you need to free space consumed by **stx-openstack** images, you can
delete older tagged versions.
#. Free up file system space associated with the deleted/unreferenced images.
The :command:`registry-garbage-collect` command removes unreferenced
**image:tags** from the file system and frees the file system space
allocated to deleted/unreferenced images.
.. code-block:: none
~(keystone_admin)$ system registry-garbage-collect
Running docker registry garbage collect
.. note::
In rare cases the system may trigger a swact during garbage collection,
and the registry may be left in read-only mode. If this happens, run
:command:`registry-garbage-collect` again to take the registry out of
read-only mode.
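As a worked example of the steps above, the following sketch lists the tags
for one of the images shown earlier, deletes a tag, and then runs garbage
collection; the **v3.2** tag is illustrative, so substitute a tag reported
by :command:`registry-image-tags` on your system:
.. code-block:: none
~(keystone_admin)$ system registry-image-tags docker.io/starlingx/multus
~(keystone_admin)$ system registry-image-delete docker.io/starlingx/multus:v3.2
~(keystone_admin)$ system registry-garbage-collect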

View File

@ -0,0 +1,35 @@
======================================
|prod-long| Kubernetes Admin Tutorials
======================================
- :ref:`About the Admin Tutorials <about-the-admin-tutorials>`
- Application Management
- :ref:`Helm Package Manager <kubernetes-admin-tutorials-helm-package-manager>`
- :ref:`StarlingX Application Package Manager <kubernetes-admin-tutorials-starlingx-application-package-manager>`
- :ref:`Application Commands and Helm Overrides <admin-application-commands-and-helm-overrides>`
- Local Docker Registry
- :ref:`Local Docker Registry <local-docker-registry>`
- :ref:`Authentication and Authorization <kubernetes-admin-tutorials-authentication-and-authorization>`
- :ref:`Installing/Updating the Docker Registry Certificate <installing-updating-the-docker-registry-certificate>`
- :ref:`Setting up a Public Repository <setting-up-a-public-repository>`
- :ref:`Freeing Space in the Local Docker Registry <freeing-space-in-the-local-docker-registry>`
- Optimizing Application Performance
- :ref:`Kubernetes CPU Manager Policies <kubernetes-cpu-manager-policies>`
- :ref:`Isolating CPU Cores to Enhance Application Performance <isolating-cpu-cores-to-enhance-application-performance>`
- :ref:`Kubernetes Topology Manager Policies <kubernetes-topology-manager-policies>`
- Intel's CPU Manager for Kubernetes \(CMK\)
- :ref:`About CPU Manager for Kubernetes <about-cpu-manager-for-kubernetes>`
- :ref:`Installing and Running CPU Manager for Kubernetes <installing-and-running-cpu-manager-for-kubernetes>`
- :ref:`Removing CPU Manager for Kubernetes <removing-cpu-manager-for-kubernetes>`
- :ref:`Uninstalling CPU Manager for Kubernetes on IPv6 <uninstalling-cpu-manager-for-kubernetes-on-ipv6>`

View File

@ -0,0 +1,64 @@
.. Admin tasks file, created by
sphinx-quickstart on Thu Sep 3 15:14:59 2020.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
===================
Administrator Tasks
===================
--------------------
StarlingX Kubernetes
--------------------
.. toctree::
:maxdepth: 1
about-the-admin-tutorials
----------------------
Application management
----------------------
.. toctree::
:maxdepth: 1
kubernetes-admin-tutorials-helm-package-manager
kubernetes-admin-tutorials-starlingx-application-package-manager
admin-application-commands-and-helm-overrides
---------------------
Local Docker registry
---------------------
.. toctree::
:maxdepth: 1
local-docker-registry
kubernetes-admin-tutorials-authentication-and-authorization
installing-updating-the-docker-registry-certificate
setting-up-a-public-repository
freeing-space-in-the-local-docker-registry
--------------------------------
Optimize application performance
--------------------------------
.. toctree::
:maxdepth: 1
kubernetes-cpu-manager-policies
isolating-cpu-cores-to-enhance-application-performance
kubernetes-topology-manager-policies
--------------------------
CPU Manager for Kubernetes
--------------------------
.. toctree::
:maxdepth: 1
about-cpu-manager-for-kubernetes
installing-and-running-cpu-manager-for-kubernetes
removing-cpu-manager-for-kubernetes
uninstalling-cpu-manager-for-kubernetes-on-ipv6

View File

@ -0,0 +1,239 @@
.. jme1561551450093
.. _installing-and-running-cpu-manager-for-kubernetes:
==========================================
Install and Run CPU Manager for Kubernetes
==========================================
You must install Helm charts and label worker nodes appropriately before using
CMK.
.. rubric:: |context|
Perform the following steps to enable CMK on a cluster.
.. rubric:: |proc|
#. Apply the **cmk-node** label to each worker node to be managed using CMK.
For example:
.. code-block:: none
~(keystone_admin)$ system host-lock worker-0
~(keystone_admin)$ system host-label-assign worker-0 cmk-node=enabled
+-------------+--------------------------------------+
| Property | Value |
+-------------+--------------------------------------+
| uuid | 2909d775-cd6c-4bc1-8268-27499fe38d5e |
| host_uuid | 1f00d8a4-f520-41ee-b608-1b50054b1cd8 |
| label_key | cmk-node |
| label_value | enabled |
+-------------+--------------------------------------+
~(keystone_admin)$ system host-unlock worker-0
#. Perform the following steps if you have not specified CMK at Ansible
Bootstrap in the localhost.yml file:
#. On the active controller, run the following command to generate the
username and password to be used for Docker login.
.. code-block:: none
$ sudo python /usr/share/ansible/stx-ansible/playbooks/roles/common/push-docker-images/files/get_registry_auth.py 625619392498.dkr.ecr.us-west-2.amazonaws.com <Access_Key_ID_from_Wind_Share> <Secret_Access_Key_from_Wind_Share>
#. Run the Docker login command:
.. code-block:: none
~(keystone_admin)$ sudo docker login 625619392498.dkr.ecr.us-west-2.amazonaws.com -u AWS -p <password_returned_from_first_cmd>
#. Pull the CMK image from the AWS registry.
.. code-block:: none
~(keystone_admin)$ sudo docker pull 625619392498.dkr.ecr.us-west-2.amazonaws.com/docker.io/wind-river/cmk:WRCP.20.01-v1.3.1-15-ge3df769-1
#. Tag the image, by using the following command:
.. code-block:: none
~(keystone_admin)$ sudo docker image tag 625619392498.dkr.ecr.us-west-2.amazonaws.com/docker.io/wind-river/cmk:WRCP.20.01-v1.3.1-15-ge3df769-1 registry.local:9001/docker.io/wind-river/cmk:WRCP.20.01-v1.3.1-15-ge3df769-1
#. Authenticate the local registry, by using the following command:
.. code-block:: none
~(keystone_admin)$ sudo docker login registry.local:9001 -u admin -p <admin_passwd>
#. Push the image, by using the following command:
.. code-block:: none
~(keystone_admin)$ sudo docker image push registry.local:9001/docker.io/wind-river/cmk:WRCP.20.01-v1.3.1-15-ge3df769-1
#. On all configurations with two controllers, after the CMK Docker image has
been pulled, tagged \(with the local registry\), and pushed \(to the local
registry\), the admin user should log in to the inactive controller and run
the following commands:
For example:
.. code-block:: none
~(keystone_admin)$ sudo docker login registry.local:9001 -u admin -p <admin_passwd>
~(keystone_admin)$ sudo docker image pull tis-lab-registry.cumulus.wrs.com:9001/wrcp-staging/docker.io/wind-river/cmk:WRCP.20.01-v1.3.1-15-ge3df769-1
#. Configure any isolated CPUs on worker nodes in order to reduce host OS
impacts on latency for tasks running on isolated CPUs.
Any container tasks running on isolated CPUs must explicitly manage their
own CPU affinity, as the process scheduler will ignore them completely.
.. note::
The following commands are examples only; the admin user must specify
the number of CPUs per processor based on the node CPU topology.
.. code-block:: none
~(keystone_admin)$ system host-lock worker-1
~(keystone_admin)$ system host-cpu-modify -f platform -p0 1 worker-1
~(keystone_admin)$ system host-cpu-modify -f application-isolated -p0 15 worker-1
~(keystone_admin)$ system host-cpu-modify -f application-isolated -p1 15 worker-1
~(keystone_admin)$ system host-unlock worker-1
This sets one platform core and 15 application-isolated cores on NUMA node
0, and 15 application-isolated cores on NUMA node 1. At least one CPU must
be left unspecified, which will cause it to be an application CPU.
#. Run the /opt/extracharts/cpu-manager-k8s-setup.sh helper script to install
the CMK Helm charts used to configure the system for CMK.
#. Before running the script, untar the chart files in /opt/extracharts.
.. code-block:: none
~(keystone_admin)$ cd /opt/extracharts
~(keystone_admin)$ sudo tar -xvf cpu-manager-k8s-init-1.3.1.tgz
~(keystone_admin)$ sudo tar -xvf cpu-manager-k8s-webhook-1.3.1.tgz
~(keystone_admin)$ sudo tar -xvf cpu-manager-k8s-1.3.1.tgz
#. Run the script.
The script is located in the /opt/extracharts directory of the active
controller.
For example:
.. code-block:: none
~(keystone_admin)$ cd /opt/extracharts
~(keystone_admin)$ ./cpu-manager-k8s-setup.sh
The following actions are performed:
- The **cpu-manager-k8s-init** chart is installed. This will create a
service account and set up rules-based access control.
- A webhook is created to insert the appropriate resources into pods
that request CMK resources. \(This will result in one pod running.\)
- A daemonset is created for the per-CMK-node pod that will handle
all CMK operations on that node.
- **cmk-webhook-deployment** is launched on the controller and
**cpu-manager-k8s-cmk-default** is launched on the worker.
By default, each node will have one available CPU allocated to the
shared pool, and all the rest allocated to the exclusive pool. The
platform CPUs will be ignored.
#. Add more CPUs to the shared pool.
#. Override the allocation via per-node Helm chart overrides on the
**cpu-manager-k8s** Helm chart.
.. code-block:: none
$ cat <<EOF > /home/sysadmin/worker-0-cmk-overrides.yml
# For NUM_EXCLUSIVE_CORES a value of -1 means
# "all available cores after infra and shared
# cores have been allocated".
# NUM_SHARED_CORES must be at least 1.
conf:
  cmk:
    NUM_EXCLUSIVE_CORES: -1
    NUM_SHARED_CORES: 1
overrides:
  cpu-manager-k8s_cmk:
    hosts:
      - name: worker-0
        conf:
          cmk:
            NUM_SHARED_CORES: 2
EOF
#. Apply the override.
.. code-block:: none
$ helm upgrade cpu-manager cpu-manager-k8s --reuse-values -f /home/sysadmin/worker-0-cmk-overrides.yml
#. After CMK has been installed, run the following command to patch the
webhook to pull the image, if required for future use:
.. code-block:: none
~(keystone_admin)$ kubectl -n kube-system patch deploy cmk-webhook-deployment -p '{"spec":{"template":{"spec":{"containers":[{"name":"cmk-webhook","imagePullPolicy":"IfNotPresent"}]}}}}'
.. rubric:: |postreq|
Once CMK is set up, you can run workloads as described at `https://github.com/intel/CPU-Manager-for-Kubernetes <https://github.com/intel/CPU-Manager-for-Kubernetes>`__,
with the following caveats:
- When using CMK, the application pods should not specify requests or limits
for the **cpu** resource.
When running a container with :command:`cmk isolate --pool=exclusive`, the
**cpu** resource should be superseded by the
:command:`cmk.intel.com/exclusive-cores` resource.
When running a container with :command:`cmk isolate --pool=shared` or
:command:`cmk isolate --pool=infra`, the **cpu** resource has no meaning as
Kubelet assumes it has access to all the CPUs rather than just the
**infra** or **shared** ones and this confuses the resource tracking.
- There is a known issue with resource tracking if a node with running
CMK-isolated applications suffers an uncontrolled reboot. The suggested
workaround is to wait for it to come back up, then lock/unlock the node.
- When using the :command:`cmk isolate --socket-id` command to run an
application on a particular socket, there can be complications with
scheduling because the Kubernetes scheduler isn't NUMA-aware. A pod can be
scheduled to a Kubernetes node that has enough resources across all NUMA
nodes, but then a container trying to run :command:`cmk isolate --socket-id=<X>`
can lead to a run-time error if there are not enough resources on that
particular NUMA node:
.. code-block:: none
~(keystone_admin)$ kubectl logs cmk-isolate-pod
[6] Failed to execute script cmk
Traceback (most recent call last):
File "cmk.py", line 162, in <module> main()
File "cmk.py", line 127, in main args["--socket-id"])
File "intel/isolate.py", line 57, in isolate.format(pool_name))
SystemError: Not enough free cpu lists in pool
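As a sketch of the first caveat above, a pod requesting an exclusive CMK core
might look like the following. This is illustrative only; the image, the
/opt/bin/cmk binary path, and the sleep workload are assumptions to be
verified against the CMK documentation:
.. code-block:: none
apiVersion: v1
kind: Pod
metadata:
  name: cmk-exclusive-demo
spec:
  containers:
  - name: app
    # Illustrative image; substitute your workload image
    image: registry.local:9001/public/busybox:latest
    resources:
      limits:
        # Supersedes the cpu resource for exclusive-pool containers
        cmk.intel.com/exclusive-cores: 1
    command: ["/opt/bin/cmk", "isolate", "--pool=exclusive", "sleep", "3600"]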
.. From step 1
.. xbooklink For more information on node labeling, see |node-doc|: :ref:`Configure Node Labels from the CLI <assigning-node-labels-from-the-cli>`.
.. From step 2
.. xreflink For more information, see |inst-doc|: :ref:`Bootstrap and Deploy Cloud Platform <bootstrapping-and-deploying-starlingx>`.

View File

@ -0,0 +1,73 @@
.. idr1582032622279
.. _installing-updating-the-docker-registry-certificate:
==============================================
Install/Update the Docker Registry Certificate
==============================================
The local Docker registry provides secure HTTPS access using the registry API.
.. rubric:: |context|
By default, a self-signed certificate is generated at installation time for the
registry API. For more secure access, a Root CA-signed certificate is strongly
recommended.
The Root CA-signed certificate for the registry must have at least the
following |SANs|: DNS:registry.local, DNS:registry.central,
IP Address:<oam-floating-ip-address>, and IP Address:<mgmt-floating-ip-address>.
Use the :command:`system addrpool-list` command to get the |OAM| floating IP
Address and management floating IP Address for your system. You can add any
additional DNS entries that you have set up for your |OAM| floating IP Address.
Use the following procedure to install a Root CA-signed certificate to either
replace the default self-signed certificate or to replace an expired or soon to
expire certificate.
.. rubric:: |prereq|
Obtain a Root CA-signed certificate and key from a trusted Root Certificate
Authority \(CA\). Refer to the documentation for the external Root CA that you
are using for instructions on creating public certificate and private key
pairs, signed by a Root CA, for HTTPS.
.. xreflink For lab purposes, see |sec-doc|: :ref:`Locally Creating Certificates <creating-certificates-locally-using-openssl>` to create a test Root CA certificate and key, and use it to sign test certificates.
Put the Privacy Enhanced Mail \(PEM\) encoded versions of the certificate and
key in a single file, and copy the file to the controller host.
Also obtain the certificate of the Root CA that signed the above certificate.
.. rubric:: |proc|
#. In order to enable internal use of the Docker registry certificate, update
the trusted CA list for this system with the Root CA associated with the
Docker registry certificate.
.. code-block:: none
~(keystone_admin)$ system certificate-install --mode ssl_ca <pathTocertificate>
where:
**<pathTocertificate>**
is the path to the Root CA certificate associated with the Docker
registry Root CA-signed certificate.
#. Update the Docker registry certificate using the
:command:`certificate-install` command.
Set the mode (``-m`` or ``--mode``) parameter to ``docker_registry``.
.. code-block:: none
~(keystone_admin)$ system certificate-install --mode docker_registry <pathTocertificateAndKey>
where:
**<pathTocertificateAndKey>**
is the path to the file containing both the Docker registry certificate
and private key to install.
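Before installing the certificate, you can optionally confirm that it includes
the required |SANs|. This is a quick check, assuming the certificate file is
/home/sysadmin/registry-cert.pem (the path is illustrative):
.. code-block:: none
~(keystone_admin)$ openssl x509 -in /home/sysadmin/registry-cert.pem -noout -text | grep -A1 'Subject Alternative Name'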

View File

@ -0,0 +1,56 @@
.. bew1572888575258
.. _isolating-cpu-cores-to-enhance-application-performance:
========================================================
Isolate the CPU Cores to Enhance Application Performance
========================================================
|prod| supports running the most critical low-latency applications on host CPUs
which are completely isolated from the host process scheduler.
This allows you to customize Kubernetes CPU management, either with the CPU
Manager policy set to static, or with CMK and the policy set to none, so that
high-performance, low-latency applications run with optimal efficiency.
The following restrictions apply when using application-isolated cores in the
Horizon Web interface and sysinv:
- There must be at least one platform and one application core on each host.
.. warning::
An application core must be present on each node; nodes missing this
configuration will fail to unlock.
For example:
.. code-block:: none
~(keystone_admin)$ system host-lock worker-1
~(keystone_admin)$ system host-cpu-modify -f platform -p0 1 worker-1
~(keystone_admin)$ system host-cpu-modify -f application-isolated -p0 15 worker-1
~(keystone_admin)$ system host-cpu-modify -f application-isolated -p1 15 worker-1
~(keystone_admin)$ system host-unlock worker-1
All SMT siblings on a core will have the same assigned function. On host boot,
any CPUs designated as isolated will be specified as part of the ``isolcpus``
kernel boot argument, which will isolate them from the process scheduler.
The use of application-isolated cores is only applicable when using the static
Kubernetes CPU Manager policy, or when using CMK. For more information,
see :ref:`Kubernetes CPU Manager Policies <kubernetes-cpu-manager-policies>`,
or :ref:`Install and Run CPU Manager for Kubernetes <installing-and-running-cpu-manager-for-kubernetes>`.
When using the static CPU Manager policy, before increasing the number of
platform CPUs or changing isolated CPUs to application CPUs on a host, ensure
that no pods on the host are making use of any isolated CPUs that will be
affected. Otherwise, the pod\(s\) will transition to a Topology Affinity Error
state. Although not strictly necessary, the simplest way to do this on systems
other than AIO Simplex is to administratively lock the host, causing all the
pods to be restarted on an alternate host, before changing CPU assigned
functions. On AIO Simplex systems, you must explicitly delete the pods.
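After unlocking the host, you can optionally confirm that the isolated cores
took effect by inspecting the kernel boot arguments on the worker node; this
is a quick sanity check, not a required step:
.. code-block:: none
$ cat /proc/cmdline | grep -o 'isolcpus=[^ ]*'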
.. only:: partner
.. include:: ../_includes/isolating-cpu-cores-to-enhance-application-performance.rest

View File

@ -0,0 +1,37 @@
.. khe1563458421728
.. _kubernetes-admin-tutorials-authentication-and-authorization:
================================
Authentication and Authorization
================================
Authentication is enabled for the local Docker registry. When logging in,
users are authenticated using their platform keystone credentials.
For example:
.. code-block:: none
$ docker login registry.local:9001 -u <keystoneUserName> -p <keystonePassword>
An authorized administrator can perform any Docker action, while regular users
can only interact with their own repositories
\(i.e. registry.local:9001/<keystoneUserName>/\). For example, only
**admin** and **testuser** accounts can push to or pull from
**registry.local:9001/testuser/busybox:latest**.
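For example, a user could push an image to their own repository as follows;
the **testuser** account and busybox image are illustrative:
.. code-block:: none
$ docker login registry.local:9001 -u testuser -p <keystonePassword>
$ docker tag busybox:latest registry.local:9001/testuser/busybox:latest
$ docker push registry.local:9001/testuser/busybox:latest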
---------------------------------
Username and Docker compatibility
---------------------------------
Repository names in Docker registry paths must be lower case. For this reason,
the Keystone user name must consist of all lower case characters. For
example, the user **testuser** is correct in the following URL, while
**testUser** would result in an error:
**registry.local:9001/testuser/busybox:latest**
For more information about Docker commands, see
`https://docs.docker.com/engine/reference/commandline/docker/ <https://docs.docker.com/engine/reference/commandline/docker/>`__.

View File

@ -0,0 +1,43 @@
.. yvw1582058782861
.. _kubernetes-admin-tutorials-helm-package-manager:
====================
Helm Package Manager
====================
|prod-long| supports Helm with Tiller, the Kubernetes package manager that
can be used to manage the lifecycle of applications within the Kubernetes
cluster.
.. rubric:: |context|
Helm packages are defined by Helm charts with container information sufficient
for managing a Kubernetes application. You can configure, install, and upgrade
your Kubernetes applications using Helm charts. Helm charts are defined with a
default set of values that describe the behavior of the service installed
within the Kubernetes cluster.
Upon system installation, the official curated Helm chart repository is added
to the local Helm repo list. In addition, a number of local repositories
\(containing optional |prod-long| packages\) are created and added to the Helm
repo list. For more information,
see `https://github.com/helm/charts <https://github.com/helm/charts>`__.
Use the following command to list the Helm repositories:
.. code-block:: none
~(keystone_admin)$ helm repo list
NAME URL
stable       https://kubernetes-charts.storage.googleapis.com
local        http://127.0.0.1:8879/charts
starlingx    http://127.0.0.1:8080/helm_charts/starlingx
stx-platform http://127.0.0.1:8080/helm_charts/stx-platform
For more information on Helm, see the documentation at `https://helm.sh/docs/ <https://helm.sh/docs/>`__.
**Tiller** is a component of Helm. Tiller interacts directly with the
Kubernetes API server to install, upgrade, query, and remove Kubernetes
resources.
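As a brief sketch of the Helm v2 workflow against these repositories, you can
search for a chart and install it as a named release; the mysql chart and
release name below are illustrative:
.. code-block:: none
~(keystone_admin)$ helm repo update
~(keystone_admin)$ helm search mysql
~(keystone_admin)$ helm install --name my-mysql stable/mysql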

View File

@ -0,0 +1,67 @@
.. skm1582115510876
.. _kubernetes-admin-tutorials-starlingx-application-package-manager:
=====================================
StarlingX Application Package Manager
=====================================
Use the |prod| system application commands to manage containerized application
deployment from the command-line.
|prod| application management provides a wrapper around Airship Armada
\(see `https://opendev.org/airship/armada.git <https://opendev.org/airship/armada.git>`__\)
and Kubernetes Helm \(see `https://github.com/helm/helm <https://github.com/helm/helm>`__\)
for managing containerized applications. Armada is a tool for managing multiple
Helm charts with dependencies by centralizing all configurations in a single
Armada YAML definition and providing life-cycle hooks for all Helm releases.
A |prod| application package is a compressed tarball containing a metadata.yaml
file, a manifest.yaml Armada manifest file, a charts directory containing
Helm charts, and a checksum.md5 file. The metadata.yaml file contains the
application name, version, and optional Helm repository and disabled charts
information.
|prod| application package management provides a set of :command:`system`
CLI commands for managing the lifecycle of an application, which includes
managing overrides to the Helm charts within the application.
.. table:: Table 1. Application Package Manager Commands
:widths: auto
+----------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Command | Description |
+========================================+=============================================================================================================================================================================================================================================================+
| :command:`application-list` | List all applications. |
+----------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| :command:`application-show` | Show application details such as name, status, and progress. |
+----------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| :command:`application-upload` | Upload a new application package. |
| | |
| | This command loads the application's Armada manifest and Helm charts into an internal database and automatically applies system overrides for well-known Helm charts, allowing the Helm chart to be applied optimally to the current cluster configuration. |
+----------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| :command:`helm-override-list` | List system Helm charts and the namespaces with Helm chart overrides for each Helm chart. |
+----------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| :command:`helm-override-show` | Show a Helm chart's overrides for a particular namespace. |
| | |
| | This command displays system overrides, user overrides and the combined system and user overrides. |
+----------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| :command:`helm-override-update` | Update Helm chart user overrides for a particular namespace. |
+----------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| :command:`helm-chart-attribute-modify` | Enable or disable the installation of a particular Helm chart within an application manifest. |
+----------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| :command:`helm-override-delete` | Delete a Helm chart's user overrides for a particular namespace. |
+----------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| :command:`application-apply` | Apply or reapply the application manifest and Helm charts. |
| | |
| | This command will install or update the existing installation of the application based on its Armada manifest, Helm charts and Helm charts' combined system and user overrides. |
+----------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| :command:`application-abort` | Abort the current application operation. |
+----------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| :command:`application-update` | Update the deployed application to a different version. |
+----------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| :command:`application-remove` | Uninstall an application. |
+----------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| :command:`application-delete` | Remove the uninstalled application's definition, including manifest and Helm charts and Helm chart overrides, from the system. |
+----------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
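As a hedged illustration of the package layout described above, a minimal
metadata.yaml might look like the following; the field names and values are
illustrative and should be verified against an actual application package:
.. code-block:: none
app_name: hello-kitty
app_version: 1.0-1
helm_repo: stx-platform
disabled_charts:
- chart-a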

View File

@ -0,0 +1,53 @@
.. mlb1573055521142
.. _kubernetes-cpu-manager-policies:
===============================
Kubernetes CPU Manager Policies
===============================
You can apply the **kube-cpu-mgr-policy** host label from the Horizon Web
interface or the CLI to set the Kubernetes CPU Manager policy.
The **kube-cpu-mgr-policy** host label supports the values ``none`` and
``static``.
Setting either of these values results in kubelet on the host being configured
with the policy of the same name as described at `https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#cpu-management-policies <https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#cpu-management-policies>`__,
but with the following differences:
----------------------------
Static policy customizations
----------------------------
- Pods in the **kube-system** namespace are affined to platform cores
only. Other pod containers \(hosted applications\) are restricted to
running on either the application or isolated cores. CFS quota
throttling for Guaranteed QoS pods is disabled.
- When using the static policy, improved performance can be achieved if
one also uses the Isolated CPU behavior as described at :ref:`Isolating CPU Cores to Enhance Application Performance <isolating-cpu-cores-to-enhance-application-performance>`.
- For Kubernetes pods with a **Guaranteed** QoS \(see `https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/ <https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/>`__
for background information\), CFS quota throttling is disabled as it
causes performance degradation.
- Kubernetes pods are prevented by default from running on CPUs with an
assigned function of **Platform**. In contrast, pods in the
**kube-system** namespace are affined to run on **Platform** CPUs by
default. This assumes that the number of platform CPUs is sufficiently
large to handle the workload. These two changes further ensure that
low-latency applications are not interrupted by housekeeping tasks.
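For example, to apply the static policy to a hypothetical worker-0 node from
the CLI, lock the host, assign the label, and unlock:
.. code-block:: none
~(keystone_admin)$ system host-lock worker-0
~(keystone_admin)$ system host-label-assign worker-0 kube-cpu-mgr-policy=static
~(keystone_admin)$ system host-unlock worker-0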
.. xreflink For information about adding labels, see |node-doc|: :ref:`Configuring Node Labels Using Horizon <configuring-node-labels-using-horizon>`
.. xreflink and |node-doc|: :ref:`Configuring Node Labels from the CLI <assigning-node-labels-from-the-cli>`.
-----------
Limitations
-----------
|org| recommends using the static policy.

View File

@ -0,0 +1,51 @@
.. faf1573057127832
.. _kubernetes-topology-manager-policies:
====================================
Kubernetes Topology Manager Policies
====================================
You can apply the **kube-topology-mgr-policy** host label from Horizon or the
CLI to set the Kubernetes Topology Manager policy.
The **kube-topology-mgr-policy** host label has four supported values:
- none
- best-effort
This is the default when the static CPU policy is enabled.
- restricted
- single-numa-node
For more information on these settings, see the Kubernetes Topology Manager
policies described at `https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/#how-topology-manager-works <https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/#how-topology-manager-works>`__.
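For example, to set the best-effort policy on a hypothetical worker-0 node
from the CLI, lock the host, assign the label, and unlock:
.. code-block:: none
~(keystone_admin)$ system host-lock worker-0
~(keystone_admin)$ system host-label-assign worker-0 kube-topology-mgr-policy=best-effort
~(keystone_admin)$ system host-unlock worker-0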
.. xreflink For information about adding labels, see |node-doc|: :ref:`Configuring Node Labels Using Horizon <configuring-node-labels-using-horizon>`
.. xreflink and |node-doc|: :ref:`Configuring Node Labels from the CLI <assigning-node-labels-from-the-cli>`.
-----------
Limitations
-----------
- The scheduler is not NUMA-aware and can therefore make suboptimal pod
scheduling decisions, as it does not take the Topology Manager policy on
the destination node into account.
- If a pod fails with *Topology Affinity Error* because it can't satisfy the
Topology Manager policy on the node where the scheduler placed it, it will
remain in the error state and will not be retried. If the pod is part of a
manager object such as ReplicaSet, Deployment, etc., then a new pod will be
created. If that new pod is placed on the same node, it can result in a
series of pods with a status of *Topology Affinity Error*. For more
information, see `https://github.com/kubernetes/kubernetes/issues/84757 <https://github.com/kubernetes/kubernetes/issues/84757>`__.
In light of these limitations, |org| recommends using the best-effort policy,
which will cause Kubernetes to try to provide NUMA-affined resources without
generating any unexpected errors if the policy cannot be satisfied.

View File

@ -0,0 +1,14 @@
.. xeu1564401508004
.. _local-docker-registry:
=====================
Local Docker Registry
=====================
A local Docker registry is deployed as part of |prod| on the internal
management network.
You can interact with the local Docker registry at the address
**registry.local:9001**.
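For example, you can log in with your platform Keystone credentials and pull
an image; the image name below is illustrative:
.. code-block:: none
$ docker login registry.local:9001 -u <keystoneUserName> -p <keystonePassword>
$ docker pull registry.local:9001/<keystoneUserName>/busybox:latest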

View File

@ -0,0 +1,75 @@
.. fuq1561551658529
.. _removing-cpu-manager-for-kubernetes:
=================================
Remove CPU Manager for Kubernetes
=================================
You can uninstall CMK by removing related Helm charts in the reverse order of
their installation.
.. rubric:: |proc|
#. Delete **cpu-manager**.
#. Run the :command:`helm delete` command.
.. code-block:: none
~(keystone_admin)$ helm delete cpu-manager --purge
release "cpu-manager" deleted
#. Ensure that any pods in the Terminating state have been deleted before
proceeding to the next step. The pods being terminated are in the
**kube-system** namespace.
For example:
.. code-block:: none
~(keystone_admin)$ kubectl get pods -n kube-system | grep cmk
cmk-setup 0/1 Completed 0 71m
cmk-uninstall-2z29p 0/1 ContainerCreating 0 4s
cmk-webhook-deployment-778c787679-7bpw2 1/1 Running 0 71m
cpu-manager-k8s-cmk-compute-0-5621f953-pchjr 3/3 Terminating 0 38
~(keystone_admin)$ kubectl get pods -n kube-system | grep cmk
cmk-setup 0/1 Completed 0 72m
cmk-webhook-deployment-778c787679-7bpw2 1/1 Running 0 72m
#. Delete **cmk-webhook**.
#. Run the :command:`helm delete` command.
.. code-block:: none
~(keystone_admin)$ helm delete cmk-webhook --purge
#. Ensure that any pods in the Terminating state have been deleted before
proceeding to the next step.
.. code-block:: none
~(keystone_admin)$ kubectl get pods -n kube-system | grep cmk
cmk-uninstall-webhook 0/1 Completed 0 11s
cmk-webhook-deployment-778c787679-7bpw2 1/1 Terminating 0 73m
~(keystone_admin)$ kubectl get pods -n kube-system | grep cmk
cmk-uninstall-webhook 0/1 Completed 0 49s
#. Delete **cpu-manager-init**. Run the :command:`helm delete` command.
.. code-block:: none
~(keystone_admin)$ helm delete cpu-manager-init --purge
release "cpu-manager-init" deleted
.. rubric:: |result|
The CPU Manager for Kubernetes is now deleted.
.. seealso::
:ref:`Uninstall CPU Manager for Kubernetes on IPv6 <uninstalling-cpu-manager-for-kubernetes-on-ipv6>`

View File

@ -0,0 +1,50 @@
.. qay1588350945997
.. _setting-up-a-public-repository:
==========================
Set up a Public Repository
==========================
There will likely be scenarios where you need to make images publicly available
to all users.
.. rubric:: |context|
The suggested method is to create a Keystone tenant/user pair,
**registry**/**public**, which has access to images in the
registry.local:9001/public/ repository. You can then share access to those
images by sharing the **registry**/**public** user's credentials with other users.
.. rubric:: |proc|
#. Create the keystone tenant/user of registry/public.
.. code-block:: none
~(keystone_admin)$ openstack project create registry
~(keystone_admin)$ TENANTNAME="registry"
~(keystone_admin)$ TENANTID=`openstack project list | grep ${TENANTNAME} | awk '{print $2}'`
~(keystone_admin)$ USERNAME="public"
~(keystone_admin)$ USERPASSWORD="${USERNAME}K8*"
~(keystone_admin)$ openstack user create --password ${USERPASSWORD} --project ${TENANTID} ${USERNAME}
~(keystone_admin)$ openstack role add --project ${TENANTNAME} --user ${USERNAME} _member_
#. Create a secret containing the credentials of the public repository in
kube-system namespace.
.. code-block:: none
% kubectl create secret docker-registry registry-local-public-key --docker-server=registry.local:9001 --docker-username=public --docker-password='publicK8*' --docker-email=noreply@windriver.com -n kube-system
#. Share the credentials of the public repository with other namespaces.
Copy the secret to the other namespace and add it as an ImagePullSecret to
the namespace's **default** serviceAccount.
.. code-block:: none
% kubectl get secret registry-local-public-key -n kube-system -o yaml | grep -v '^\s*namespace:\s' | kubectl apply --namespace=<USERNAMESPACE> -f -
% kubectl patch serviceaccount default -p "{\"imagePullSecrets\": [{\"name\": \"registry-local-public-key\"}]}" -n <USERNAMESPACE>
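Once the secret is in place, images can be published to the shared repository
by logging in as the **public** user created above; the busybox image below is
illustrative:
.. code-block:: none
$ docker login registry.local:9001 -u public -p 'publicK8*'
$ docker tag busybox:latest registry.local:9001/public/busybox:latest
$ docker push registry.local:9001/public/busybox:latest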

View File

@ -0,0 +1,20 @@
.. mbd1576786954045
.. _uninstalling-cpu-manager-for-kubernetes-on-ipv6:
============================================
Uninstall CPU Manager for Kubernetes on IPv6
============================================
You will have to run some additional uninstall steps for IPv6 configurations.
When uninstalling CMK on an IPv6 system, first follow the steps at
:ref:`Removing CPU Manager for Kubernetes <removing-cpu-manager-for-kubernetes>`,
then run the following commands:
.. code-block:: none
~(keystone_admin)$ kubectl delete pod/cmk-uninstall-webhook -n kube-system
~(keystone_admin)$ kubectl delete ds cmk-uninstall -n kube-system
~(keystone_admin)$ kubectl delete pod delete-uninstall -n kube-system

View File

@ -77,7 +77,7 @@ A number of components are common to most |prod| deployment configurations.
connectivity or it can be deployed as an internal network.
**External Network Connectivity with EXTERNAL cluster host network**
The |CNI| service, Calico,
provides private tunneled networking between hosted containers on the
external cluster host network.

View File

@ -63,7 +63,7 @@ controller-0, and on :command:`system application-apply` of application packages
The `docker_registries` structure is a map of public registries and the
alternate registry values for each public registry. For each public registry the
key is a fully scoped registry name of a public registry (for example "k8s.gcr.io")
and the alternate registry URL and username/password (if authenticated).
url
The fully scoped registry name (and optionally namespace/) for the alternate

View File

@ -358,7 +358,7 @@ At this stage you must point the flock layer to pick up your custom
distro layer content. The location of lower layer content is encoded
in config files found under ``stx-tools/centos-mirror-tools/config/<os>/<layer-to-build>``
in files ``required_layer_pkgs.cfg`` and ``required_layer_iso_inc.cfg``.
Both files use comma-separated, three-field lines: ``<lower-layer>,<type>,<url>``
e.g. ::
cat stx-tools/centos-mirror-tools/config/centos/flock/required_layer_pkgs.cfg

View File

@ -422,7 +422,7 @@ you just built
## Note: You may need to be root to run Docker commands on your build system. If so, "sudo -s"
docker images
# Locate the image of interest you just built in the output, should be at or near the top of the list, then
# save the image of interest as a compressed tarball. It could be rather large.
docker save <image id> | gzip -9 >container_filename.tgz
# scp container_filename.tgz to the active controller, then
# ssh to active controller, then run the following instructions there:

View File

@ -848,11 +848,11 @@ Build the image:
----------------------
:ref:`Build-installer`
----------------------
Layered build uses the same build-installer procedure as the StarlingX R3.0
build, except for the changes in file paths noted below. Click the heading
above for details.
#. The steps covered by the script **update-pxe-network-installer** are detailed in
$MY_REPO/stx/stx-metal/installer/initrd/README. This script creates three files in
/localdisk/loadbuild/stx/flock/pxe-network-installer/output.
#. The path for **build_srpm.data** is $MY_REPO/stx/metal/installer/pxe-network-installer/centos/.

View File

@ -111,6 +111,15 @@ Fault Management
fault-mgmt/index
-----------
Admin Tasks
-----------
.. toctree::
:maxdepth: 2
admintasks/index
----------
User Tasks
----------

View File

@ -1,5 +1,5 @@
=========================
Time Sensitive Networking
=========================
This guide describes how to deploy and run Time Sensitive Networking (TSN)

View File

@ -62,6 +62,7 @@
.. |QoS| replace:: :abbr:`QoS (Quality of Service)`
.. |RAID| replace:: :abbr:`RAID (Redundant Array of Inexpensive Disks)`
.. |RPC| replace:: :abbr:`RPC (Remote Procedure Call)`
.. |SANs| replace:: :abbr:`SANs (Subject Alternative Names)`
.. |SAS| replace:: :abbr:`SAS (Serial Attached SCSI)`
.. |SATA| replace:: :abbr:`SATA (Serial AT Attachment)`
.. |SNAT| replace:: :abbr:`SNAT (Source Network Address Translation)`

View File

@ -136,7 +136,7 @@ You can now use the private Tiller server remotely or locally by specifying the
--home "./.helm"
.. seealso::
:ref:`Configuring Container-backed Remote CLIs
<kubernetes-user-tutorials-configuring-container-backed-remote-clis-and-clients>`
:ref:`Using Container-backed Remote CLIs

View File

@ -11,8 +11,8 @@ Kubernetes CPU manager static policy.
.. rubric:: |prereq|
You will need to enable this CPU management mechanism before applying a
policy.
.. See |admintasks-doc|: :ref:`Optimizing Application Performance
<kubernetes-cpu-manager-policies>` for details.