User Tasks guide

Fixed typo in LetsEncrypt example

Removed duplicate Datanet entry from main index.rst

Reworked Use Kubernetes CPU Manager Static Policy prerequisite block.

Restored fault/index version of FM toctree in top-level index.

Added merged doc entries to top level index.rst.

Incorporated review comments. Also some generic formatting clean-up such as
converting abbreviations to rST-style :abbr: markup.

Moved url with embedded substitution out of code-block.

Addressed patch 2 review comments. Some additional rST tidying. See comment replies
for open questions/issues.

This patch fixes an issue with 'stx' in filenames that may differ downstream
(see using-an-image-from-the-local-docker-registry-in-a-container-spec) by adding a
new substitution and changing code-blocks to parsed-literals as required.

Initial submission for review. Note that a couple of references to WR persist
in examples. These will be marked up with comments in the review.

Signed-off-by: Stone <ronald.stone@windriver.com>
Change-Id: I1efef569842caff5def9dc00395b594d91d7a5d0
Signed-off-by: Stone <ronald.stone@windriver.com>
Stone 2020-11-24 09:28:16 -05:00
parent eeaa4fe4c2
commit f63f0912c6
33 changed files with 2501 additions and 1 deletions

View File

@ -75,6 +75,15 @@ Fault management
fault-mgmt/index
----------
User Tasks
----------
.. toctree::
:maxdepth: 2
usertasks/index
----------------
Operation guides
----------------

View File

@ -36,4 +36,11 @@
.. |proc| replace:: Procedure
.. |postreq| replace:: Postrequisites
.. |result| replace:: Results
.. |eg| replace:: Example
.. Name of downloads location
.. |dnload-loc| replace:: a StarlingX mirror
.. File name prefix, as in stx-remote-cli-<version>.tgz. May also be
.. used in sample domain names etc.
.. |prefix| replace:: stx

View File

@ -0,0 +1,3 @@
{
"restructuredtext.confPath": "c:\\Users\\rstone\\Desktop\\utut\\docs\\doc\\source"
}

View File

@ -0,0 +1,37 @@
.. jxy1562587174205
.. _accessing-the-kubernetes-dashboard:
===============================
Access the Kubernetes Dashboard
===============================
You can optionally use the Kubernetes Dashboard web interface to manage your
hosted containerized applications.
.. rubric:: |proc|
.. _accessing-the-kubernetes-dashboard-steps-azn-yyd-tkb:
#. Access the dashboard at ``https://<oam-floating-ip-address OR fqdn>:<kube-dashboard-port>``,
where <kube-dashboard-port> is the port on which the dashboard was installed.
Contact your |prod| administrator for this information.
Depending on the certificate used by your |prod| administrator for
installing the Kubernetes Dashboard, you may need to install a new Trusted
Root CA or acknowledge an insecure connection in your browser.
#. Select the **kubeconfig** option for signing in to the Kubernetes
Dashboard.
.. note::
Your kubeconfig file containing credentials specified by your
StarlingX administrator (see :ref:`Installing kubectl and Helm Clients
Directly on a Host
<kubernetes-user-tutorials-installing-kubectl-and-helm-clients-directly-on-a-host>`)
is typically located at $HOME/.kube/config.
You are presented with the Kubernetes Dashboard for the current context
\(cluster, user and credentials\) specified in the **kubeconfig** file.
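Before signing in, you can confirm which cluster and user your kubeconfig file
points to. For example, assuming :command:`kubectl` is installed on your
workstation (a hedged sketch):

.. code-block:: none

    $ kubectl config current-context
    $ kubectl config view --minify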

View File

@ -0,0 +1,146 @@
.. ifk1581957631610
.. _configuring-remote-helm-client:
============================
Configure Remote Helm Client
============================
To use the helm client as a non-admin user, you must create your own Tiller
server, in a namespace that you have access to, with the required :abbr:`RBAC
(role-based access control)` capabilities and optionally TLS protection.
.. rubric:: |context|
To create a Tiller server with RBAC permissions within the default namespace,
complete the following steps on the controller. Except where indicated, these
commands can be run by the non-admin user, locally or remotely.
.. note::
If you are using container-backed helm CLIs and clients \(method 1\),
ensure you change directories to <$HOME>/remote\_cli\_wd.
.. rubric:: |prereq|
.. _configuring-remote-helm-client-ul-jhh-byv-nlb:
- Your remote **kubectl** access is configured. For more information, see
:ref:`Configuring Container-backed Remote CLIs
<kubernetes-user-tutorials-configuring-container-backed-remote-clis-and-clients>`,
or :ref:`Installing Kubectl and Helm Clients Directly on a Host
<kubernetes-user-tutorials-installing-kubectl-and-helm-clients-directly-on-a-host>`.
- Your |prod| administrator has set up the required RBAC policies for the
tiller ServiceAccount in your namespace.
.. rubric:: |proc|
.. _configuring-remote-helm-client-steps-isx-dsd-tkb:
#. Set the namespace.
.. code-block:: none
~(keystone_admin)]$ NAMESPACE=default
#. Set up your Tiller account, roles, and bindings in your namespace.
#. Execute the following commands.
.. code-block:: none
% cat <<EOF > default-tiller-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: tiller
namespace: <namespace>
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: tiller
namespace: <namespace>
rules:
- apiGroups: ["*"]
resources: ["*"]
verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: tiller
namespace: <namespace>
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: tiller
subjects:
- kind: ServiceAccount
name: tiller
namespace: <namespace>
EOF
% kubectl apply -f default-tiller-sa.yaml
#. Initialize the Helm account.
.. code-block:: none
~(keystone_admin)]$ helm init --service-account=tiller --tiller-namespace=$NAMESPACE --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | sed 's@ replicas: 1@ replicas: 1\n \ selector: {"matchLabels": {"app": "helm", "name": "tiller"}}@' > helm-init.yaml
~(keystone_admin)]$ kubectl apply -f helm-init.yaml
~(keystone_admin)]$ helm init --client-only
.. note::
Ensure that each of the patterns between single quotes in the above
:command:`sed` commands is on a single line when run from your
command-line interface.
.. note::
Add the following options if you are enabling TLS for this Tiller:
``--tiller-tls``
Enable TLS on Tiller.
``--tiller-tls-cert <certificate_file>``
The public key/certificate for Tiller \(signed by
``--tls-ca-cert``\).
``--tiller-tls-key <key_file>``
The private key for Tiller.
``--tiller-tls-verify``
Enable authentication of client certificates \(i.e. validate they
are signed by ``--tls-ca-cert``\).
``--tls-ca-cert <certificate_file>``
The public certificate of the CA used for signing Tiller server and
helm client certificates.
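For example, a TLS-enabled initialization might combine these options as
follows (a hedged sketch; the certificate and key file names are
placeholders):

.. code-block:: none

    ~(keystone_admin)]$ helm init --service-account=tiller --tiller-namespace=$NAMESPACE \
        --tiller-tls --tiller-tls-verify \
        --tiller-tls-cert tiller.crt --tiller-tls-key tiller.key \
        --tls-ca-cert ca.crt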
.. rubric:: |result|
You can now use the private Tiller server remotely or locally by specifying the
``--tiller-namespace`` default option on all helm CLI commands. For example:
.. code-block:: none
helm version --tiller-namespace <namespace>
helm install --name wordpress stable/wordpress --tiller-namespace <namespace>
.. note::
If you are using container-backed helm CLIs and clients \(method 1\), you must
change directory to <$HOME>/remote\_cli\_wd and include the following option
on all helm commands:
.. code-block:: none
--home "./.helm"
.. seealso::
:ref:`Configuring Container-backed Remote CLIs
<kubernetes-user-tutorials-configuring-container-backed-remote-clis-and-clients>`
:ref:`Using Container-backed Remote CLIs
<usertask-using-container-backed-remote-clis-and-clients>`
:ref:`Installing Kubectl and Helm Clients Directly on a Host
<kubernetes-user-tutorials-installing-kubectl-and-helm-clients-directly-on-a-host>`

View File

@ -0,0 +1,148 @@
.. uen1559067854074
.. _creating-network-attachment-definitions:
=====================================
Create Network Attachment Definitions
=====================================
Network attachment definition specifications must be created in order to
reference or request an SR-IOV interface in a container specification.
.. rubric:: |context|
The sample network attachments shown in this procedure can be used in a
container as shown in :ref:`Using Network Attachment Definitions in a Container
<using-network-attachment-definitions-in-a-container>`.
.. xreflink For information about PCI-SRIOV Interface Support, see the |datanet-doc|:
:ref:`<data-network-management-data-networks>` guide.
.. rubric:: |prereq|
You must have configured at least one SR-IOV interface on a host with the
target datanetwork \(**datanet-a** or **datanet-b** in the example below\)
assigned to it before creating a **NetworkAttachmentDefinition** referencing
this data network.
.. note::
The configuration for this SR-IOV interface with either a ``netdevice`` or
``vfio`` vf-driver determines whether the **NetworkAttachmentDefinition**
will be a kernel network device or a DPDK network device.
.. rubric:: |proc|
.. _creating-network-attachment-definitions-steps-unordered-tbf-53z-hjb:
#. Create a simple SR-IOV network attachment definition file called net1.yaml
associated with the data network **datanet-a**.
.. code-block:: yaml
~(keystone_admin)$ cat <<EOF > net1.yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: net1
annotations:
k8s.v1.cni.cncf.io/resourceName: intel.com/pci_sriov_net_datanet_a
spec:
config: '{
"cniVersion": "0.3.0",
"type": "sriov"
}'
EOF
This **NetworkAttachmentDefinition** is valid for both a kernel-based and
a DPDK \(vfio\) based device.
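You can apply the definition and confirm that it was created. For example (a
hedged sketch, assuming the Multus **NetworkAttachmentDefinition** CRD is
installed, which provides the ``net-attach-def`` short name):

.. code-block:: none

    ~(keystone_admin)$ kubectl apply -f net1.yaml
    ~(keystone_admin)$ kubectl get net-attach-def net1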
#. Create an SR-IOV network attachment.
- The following example creates an SR-IOV network attachment definition
configured for a VLAN with an ID of 2000.
.. code-block:: none
~(keystone_admin)$ cat <<EOF > net2.yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: net2
annotations:
k8s.v1.cni.cncf.io/resourceName: intel.com/pci_sriov_net_datanet_b
spec:
config: '{
"cniVersion": "0.3.0",
"type": "sriov",
"vlan": 2000
}'
EOF
- The following example creates an SR-IOV network attachment definition
configured with IP Address information.
.. code-block:: none
~(keystone_admin)$ cat <<EOF > net3.yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: net3
annotations:
k8s.v1.cni.cncf.io/resourceName: intel.com/pci_sriov_net_datanet_b
spec:
config: '{
"cniVersion": "0.3.0",
"type": "sriov",
"ipam": {
"type": "host-local"
"subnet": "10.56.219.0/24",
"routes": [{
"dst": "0.0.0.0/0"
}],
"gateway": "10.56.219.1"
}
}'
EOF
.. rubric:: |result|
After SR-IOV interfaces have been provisioned and the hosts labeled and
unlocked, available SR-IOV VF resources are automatically advertised.
They can be referenced in subsequent |prod| operations using the appropriate
**NetworkAttachmentDefinition** name and the following extended resource name:
.. code-block:: none
intel.com/pci_sriov_net_${DATANETWORK_NAME}
For example, with a network called **datanet-a** the extended resource name
would be:
.. xreflink as shown in |node-doc|:
:ref:`Provisioning SR-IOV Interfaces using the CLI
<provisioning-sr-iov-interfaces-using-the-cli>`,
.. code-block:: none
intel.com/pci_sriov_net_datanet_a
.. _creating-network-attachment-definitions-ul-qjr-vnb-xhb:
- In the extended resource name, all dashes \('-'\) in the data network
name are converted to underscores \('\_'\).
- SR-IOV enabled interfaces using the netdevice VF driver must be
administratively and operationally up to be advertised by the SR-IOV
device plugin.
- If multiple data networks are assigned to an interface, the VFs
resources will be shared between pools.
.. seealso::
:ref:`Using Network Attachment Definitions in a Container
<using-network-attachment-definitions-in-a-container>`

View File

@ -0,0 +1,64 @@
=====================================
|prod-long| Kubernetes User Tutorials
=====================================
- :ref:`About the User Tutorials <about-the-user-tutorials>`
- Accessing the System
- :ref:`Overview <kubernetes-user-tutorials-overview>`
- :ref:`Remote CLI Access <remote-cli-access>`
- :ref:`Configuring Container-backed Remote CLIs and Clients <kubernetes-user-tutorials-configuring-container-backed-remote-clis-and-clients>`
- :ref:`Installing Kubectl and Helm Clients Directly on a Host <kubernetes-user-tutorials-installing-kubectl-and-helm-clients-directly-on-a-host>`
- :ref:`Configuring Remote Helm Client <configuring-remote-helm-client>`
- :ref:`Accessing the GUI <kubernetes-user-tutorials-accessing-the-gui>`
- :ref:`Accessing the Kubernetes Dashboard <accessing-the-kubernetes-dashboard>`
- :ref:`REST API Access <kubernetes-user-tutorials-rest-api-access>`
- Application Management
- :ref:`Helm Package Manager <kubernetes-user-tutorials-helm-package-manager>`
- Local Docker Registry
- :ref:`Authentication and Authorization <kubernetes-user-tutorials-authentication-and-authorization>`
- :ref:`Using an Image from the Local Docker Registry in a Container Spec <using-an-image-from-the-local-docker-registry-in-a-container-spec>`
- :ref:`NodePort Usage Restrictions <nodeport-usage-restrictions>`
- :ref:`Cert Manager <kubernetes-user-tutorials-cert-manager>`
- :ref:`LetsEncrypt Example <letsencrypt-example>`
- Vault Secret and Data Management
- :ref:`Vault Overview <kubernetes-user-tutorials-vault-overview>`
- :ref:`Vault Aware <vault-aware>`
- :ref:`Vault Unaware <vault-unaware>`
- Using Kata Container Runtime
- Usage
- :ref:`Overview <cloud-platform-kubernetes-user-tutorials-overview>`
- :ref:`Specifying Kata Container Runtime in Pod Spec <specifying-kata-container-runtime-in-pod-spec>`
- :ref:`Known Limitations <known-limitations>`
- Adding Persistent Volume Claims
- :ref:`Creating Persistent Volume Claims <kubernetes-user-tutorials-creating-persistent-volume-claims>`
- :ref:`Mounting Persistent Volumes in Containers <kubernetes-user-tutorials-mounting-persistent-volumes-in-containers>`
- Adding an SRIOV Interface to a Container
- :ref:`Creating Network Attachment Definitions <creating-network-attachment-definitions>`
- :ref:`Using Network Attachment Definitions in a Container <using-network-attachment-definitions-in-a-container>`
- Managing CPU Resource Usage of Containers
- :ref:`Using Kubernetes CPU Manager Static Policy <using-kubernetes-cpu-manager-static-policy>`
- :ref:`Using Intel's CPU Manager for Kubernetes (CMK) <using-intels-cpu-manager-for-kubernetes-cmk>`
- :ref:`Uninstalling CMK <uninstalling-cmk>`

View File

@ -0,0 +1,151 @@
.. User tasks file, created by
sphinx-quickstart on Thu Sep 3 15:14:59 2020.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
==========
User Tasks
==========
----------
Kubernetes
----------
The |prod-long| Kubernetes user tutorials provide working examples of common
end-user tasks.
These include tasks related to:
*************
System access
*************
.. toctree::
:maxdepth: 1
kubernetes-user-tutorials-access-overview
Remote CLI access
*****************
.. toctree::
:maxdepth: 1
remote-cli-access
kubernetes-user-tutorials-configuring-container-backed-remote-clis-and-clients
usertask-using-container-backed-remote-clis-and-clients
kubernetes-user-tutorials-installing-kubectl-and-helm-clients-directly-on-a-host
configuring-remote-helm-client
GUI access
**********
.. toctree::
:maxdepth: 1
accessing-the-kubernetes-dashboard
API access
**********
.. toctree::
:maxdepth: 1
kubernetes-user-tutorials-rest-api-access
**********************
Application management
**********************
.. toctree::
:maxdepth: 1
kubernetes-user-tutorials-helm-package-manager
*********************
Local docker registry
*********************
.. toctree::
:maxdepth: 1
kubernetes-user-tutorials-authentication-and-authorization
using-an-image-from-the-local-docker-registry-in-a-container-spec
***************************
NodePort usage restrictions
***************************
.. toctree::
:maxdepth: 1
nodeport-usage-restrictions
************
Cert Manager
************
.. toctree::
:maxdepth: 1
kubernetes-user-tutorials-cert-manager
letsencrypt-example
********************************
Vault secret and data management
********************************
.. toctree::
:maxdepth: 1
kubernetes-user-tutorials-vault-overview
vault-aware
vault-unaware
****************************
Using Kata container runtime
****************************
.. toctree::
:maxdepth: 1
kata-containers-overview
specifying-kata-container-runtime-in-pod-spec
known-limitations
*******************************
Adding persistent volume claims
*******************************
.. toctree::
:maxdepth: 1
kubernetes-user-tutorials-creating-persistent-volume-claims
kubernetes-user-tutorials-mounting-persistent-volumes-in-containers
****************************************
Adding an SRIOV interface to a container
****************************************
.. toctree::
:maxdepth: 1
creating-network-attachment-definitions
using-network-attachment-definitions-in-a-container
**************************
CPU Manager for Kubernetes
**************************
.. toctree::
:maxdepth: 1
using-kubernetes-cpu-manager-static-policy
using-intels-cpu-manager-for-kubernetes-cmk
uninstalling-cmk
---------
OpenStack
---------
Coming soon.

View File

@ -0,0 +1,17 @@
.. vwx1591793382143
.. _starlingx-kubernetes-user-tutorials-overview:
========================
Kata Containers Overview
========================
|prod| supports Kata Containers.
|prod| uses a **containerd** :abbr:`CRI (Container Runtime Interface)` that supports
both runc and Kata Container runtimes. The default runtime is runc. If you want
to launch a pod that uses the Kata Container runtime, you must declare it
explicitly.
For more information about Kata containers, see `https://katacontainers.io/
<https://katacontainers.io/>`__.
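As a quick check (a hedged sketch), you can list the runtime classes currently
defined on the cluster; the Kata runtime class may first need to be created as
described in :ref:`Specifying Kata Container Runtime in Pod Spec
<specifying-kata-container-runtime-in-pod-spec>`.

.. code-block:: none

    $ kubectl get runtimeclass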

View File

@ -0,0 +1,48 @@
.. ihj1591797204756
.. _known-limitations:
================================
Known Kata Container Limitations
================================
This section describes the known limitations when using Kata containers.
.. _known-limitations-section-tsh-tl1-zlb:
--------------
SR-IOV Support
--------------
A minimal **kernel** and **rootfs** for Kata containers are shipped with
|prod-long|, and are present at /usr/share/kata-containers/. To enable certain
kernel features such as :abbr:`IOMMU (I/O memory management unit)` and desired
network kernel modules, a custom kernel image and rootfs must be built. For
more information, see `https://katacontainers.io/
<https://katacontainers.io/>`__.
.. _known-limitations-section-ngp-ty3-bmb:
-------------------
CPU Manager Support
-------------------
Kata containers currently occupy only the platform cores. There is no
:abbr:`CPU (Central Processing Unit)` manager support.
.. _known-limitations-section-wcd-xy3-bmb:
---------
Hugepages
---------
.. _known-limitations-ul-uhz-xy3-bmb:
- Similar to the SR-IOV limitation, hugepage support must be configured in a
custom Kata kernel.
- The size and number of hugepages must be written using the
:command:`io.katacontainers.config.hypervisor.kernel\_params` annotation.
- Creating a **hugetlbfs** mount for hugepages in the Kata container is
specific to the end user application.
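For example, a pod annotation requesting hugepages for the Kata guest kernel
might look like the following sketch; the kernel parameter values and runtime
class name are illustrative only and assume hugepage support has been built
into a custom Kata kernel:

.. code-block:: yaml

    apiVersion: v1
    kind: Pod
    metadata:
      name: kata-hugepages-example
      annotations:
        io.katacontainers.config.hypervisor.kernel_params: "default_hugepagesz=2M hugepagesz=2M hugepages=512"
    spec:
      runtimeClassName: kata-containers
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]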

View File

@ -0,0 +1,15 @@
.. kot1588353813955
.. _kubernetes-user-tutorials-overview:
===============
Access Overview
===============
A general user of |prod| can access the system using remote
**kubectl**/**helm** :abbr:`CLIs (Command Line Interfaces)` and the Kubernetes
Dashboard.
Your |prod| administrator must set up a user account \(that is, either a local
Kubernetes ServiceAccount or a remote Windows Active Directory account\), and
Kubernetes RBAC authorization policies.

View File

@ -0,0 +1,45 @@
.. qly1582054834918
.. _kubernetes-user-tutorials-authentication-and-authorization:
======================================================
Local Docker Registry Authentication and Authorization
======================================================
Authentication is enabled for the local docker registry. When logging in, users
are authenticated using their platform keystone credentials.
For example:
.. code-block:: none
$ docker login registry.local:9001 -u <keystoneUserName> -p <keystonePassword>
An authorized administrator can perform any docker action, while regular users
can only interact with their own repositories \(i.e.
``registry.local:9001/<keystoneUserName>/``\). For example, only **admin** and
**testuser** accounts can push to or pull from:
.. code-block:: none
registry.local:9001/testuser/busybox:latest
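For example, a regular user such as **testuser** can tag and push an image to
their own repository as follows (a hedged sketch; the busybox image is
illustrative):

.. code-block:: none

    $ docker tag busybox:latest registry.local:9001/testuser/busybox:latest
    $ docker push registry.local:9001/testuser/busybox:latest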
.. _kubernetes-user-tutorials-authentication-and-authorization-d315e59:
---------------------------------
Username and Docker Compatibility
---------------------------------
Repository names in Docker registry paths must be lower case. For this reason,
a keystone user must exist that consists of all lower case characters. For
example, the user **testuser** is correct in the following URL, while
**testUser** would result in an error:
.. code-block:: none
registry.local:9001/testuser/busybox:latest
.. seealso::
`https://docs.docker.com/engine/reference/commandline/docker/
<https://docs.docker.com/engine/reference/commandline/docker/>`__ for more
information about docker commands.

View File

@ -0,0 +1,65 @@
.. iac1588347002880
.. _kubernetes-user-tutorials-cert-manager:
============
Cert Manager
============
|prod| integrates the open source project cert-manager.
Cert-manager is a native Kubernetes certificate management controller that
supports certificate management with external certificate authorities \(CAs\).
nginx-ingress-controller is also integrated with |prod| in support of http-01
challenges from CAs as part of cert-manager certificate requests.
For more information about Cert Manager, see `cert-manager.io
<http://cert-manager.io>`__.
.. _kubernetes-user-tutorials-cert-manager-section-lz5-gcw-nlb:
------------------------------------
Prerequisites for using Cert Manager
------------------------------------
.. _kubernetes-user-tutorials-cert-manager-ul-rd3-3cw-nlb:
- Ensure that your |prod| administrator has shared the local registry's
public repository's credentials/secret with the namespace where you will
create certificates. This will allow you to leverage the
``registry.local:9001/public/cert-manager-acmesolver`` image.
- Ensure that your |prod| administrator has enabled use of the
cert-manager apiGroups in your RBAC policies.
.. _kubernetes-user-tutorials-cert-manager-section-y5r-qcw-nlb:
----------------------------------------------
Resources on Creating Issuers and Certificates
----------------------------------------------
.. _kubernetes-user-tutorials-cert-manager-ul-uts-5cw-nlb:
- Configuration documentation:
- `https://cert-manager.io/docs/configuration/
<https://cert-manager.io/docs/configuration/>`__
This link provides details on creating different types of certificate
issuers or CAs.
- Usage documentation:
- `https://cert-manager.io/docs/usage/certificate/
<https://cert-manager.io/docs/usage/certificate/>`__
This link provides details on creating a standalone certificate.
- `https://cert-manager.io/docs/usage/ingress/
<https://cert-manager.io/docs/usage/ingress/>`__
This link provides details on adding a cert-manager annotation to an
Ingress in order to specify the certificate issuer for the ingress to
use to request the certificate for its https connection.
- :ref:`LetsEncrypt Example <letsencrypt-example>`

View File

@ -0,0 +1,204 @@
.. dyp1581949325894
.. _kubernetes-user-tutorials-configuring-container-backed-remote-clis-and-clients:
======================================
Configure Container-backed Remote CLIs
======================================
The |prod| command lines can be accessed from remote computers running
Linux, MacOS, and Windows.
.. rubric:: |context|
This functionality is made available using a docker container with
pre-installed :abbr:`CLIs (Command Line Interfaces)` and clients. The
container's image is pulled as required by the remote CLI/client configuration
scripts.
.. rubric:: |prereq|
.. _kubernetes-user-tutorials-configuring-container-backed-remote-clis-and-clients-ul-ev3-bfq-nlb:
- You must have Docker installed on the remote systems you connect from. For
more information on installing Docker, see
`https://docs.docker.com/install/ <https://docs.docker.com/install/>`__.
For Windows remote workstations, Docker is only supported on Windows 10.
.. note::
You must be able to run docker commands using one of the following
options:
- Running the scripts using sudo
- Adding the Linux user to the docker group
For more information, see
`https://docs.docker.com/engine/install/linux-postinstall/
<https://docs.docker.com/engine/install/linux-postinstall/>`__
- For Windows remote workstations, you must run the following commands from a
Cygwin terminal. See `https://www.cygwin.com/ <https://www.cygwin.com/>`__
for more information about the Cygwin project.
- For Windows remote workstations, you must also have :command:`winpty`
installed. Download the latest release tarball for Cygwin from
`https://github.com/rprichard/winpty/releases
<https://github.com/rprichard/winpty/releases>`__. After downloading the
tarball, extract it to any location and update the Windows <PATH> variable
to include the bin folder of the extracted winpty folder.
- You will need a kubectl config file containing your user account and login
credentials from your |prod| administrator.
The following procedure helps you configure the Container-backed remote CLIs
and clients for a non-admin user.
.. rubric:: |proc|
.. _kubernetes-user-tutorials-configuring-container-backed-remote-clis-and-clients-steps-fvl-n4d-tkb:
#. Copy the remote client tarball file from |dnload-loc| to the remote
workstation, and extract its content.
- The tarball is available from the |prod| area on |dnload-loc|.
- You can extract the tarball contents anywhere on your client system.
.. parsed-literal::
$ cd $HOME
$ tar xvf |prefix|-remote-clients-<version>.tgz
#. Download the user/tenant **openrc** file from the Horizon Web interface to
the remote workstation.
#. Log in to Horizon as the user and tenant that you want to configure
remote access for.
In this example, we use 'user1' user in the 'tenant1' tenant.
#. Navigate to **Project** \> **API Access** \> **Download Openstack RC
file**.
#. Select **Openstack RC file**.
The file my-openrc.sh downloads.
#. Copy the user-kubeconfig file \(received from your administrator containing
your user account and credentials\) to the remote workstation.
You can copy the file to any location on the remote workstation. For
convenience, this example assumes that it is copied to the location of the
extracted tarball.
.. note::
Ensure that the user-kubeconfig file has 666 permissions after copying
the file to the remote workstation. If necessary, use the following
command to change permissions: :command:`chmod 666 user-kubeconfig`.
#. On the remote workstation, configure the client access.
#. Change to the location of the extracted tarball.
.. parsed-literal::
$ cd $HOME/|prefix|-remote-clients-<version>/
#. Create a working directory that will be mounted by the container
implementing the remote CLIs.
See the description of the :command:`configure\_client.sh` ``-w`` option
:ref:`below <kubernetes-user-tutorials-configuring-container-backed-remote-clis-and-clients-w-option>` for more details.
.. code-block:: none
$ mkdir -p $HOME/remote_cli_wd
#. Run the :command:`configure\_client.sh` script.
.. code-block:: none
$ ./configure_client.sh -t platform -r my-openrc.sh -k user-kubeconfig -w $HOME/remote_cli_wd
where the options for configure\_client.sh are:
**-t**
The type of client configuration. The options are platform \(for
|prod-long| CLI and clients\) and openstack \(for |prod-os|
application CLI and clients\).
The default value is platform.
**-r**
The user/tenant RC file to use for :command:`openstack` CLI
commands.
The default value is admin-openrc.sh.
**-k**
The kubernetes configuration file to use for :command:`kubectl` and
:command:`helm` CLI commands.
The default value is temp-kubeconfig.
**-o**
The remote CLI/client RC file generated by this script.
This RC file needs to be sourced in the shell to set up required
environment variables and aliases before running any remote CLI
commands.
For the platform client setup, the default is
remote\_client\_platform.sh. For the openstack application client
setup, the default is remote\_client\_app.sh.
.. _kubernetes-user-tutorials-configuring-container-backed-remote-clis-and-clients-w-option:
**-w**
The working directory that will be mounted by the container
implementing the remote CLIs. When using the remote CLIs, any files
passed as arguments to the remote CLI commands need to be in this
directory in order for the container to access the files. The
default value is the directory from which the
:command:`configure\_client.sh` command was run.
**-p**
Override the container image for the platform CLI and clients.
By default, the platform CLIs and clients container image is pulled
from docker.io/starlingx/stx-platformclients.
For example, to use the container images from the WRS AWS ECR:
.. code-block:: none
$ ./configure_client.sh -t platform -r my-openrc.sh -k user-kubeconfig -w $HOME/remote_cli_wd -p 625619392498.dkr.ecr.us-west-2.amazonaws.com/docker.io/starlingx/stx-platformclients:stx.4.0-v1.3.0
If you specify repositories that require authentication, you must
perform a :command:`docker login` to that repository before using
remote CLIs.
**-a**
Override the OpenStack application image.
By default, the OpenStack CLIs and clients container image is
pulled from docker.io/starlingx/stx-openstackclients.
The :command:`configure-client.sh` command will generate a
remote\_client\_platform.sh RC file. This RC file needs to be sourced
in the shell to set up required environment variables and aliases
before any remote CLI commands can be run.
.. rubric:: |postreq|
After configuring the platform's container-backed remote CLIs/clients, the
remote platform CLIs can be used in any shell after sourcing the generated
remote CLI/client RC file. This RC file sets up the required environment
variables and aliases for the remote CLI commands.
.. note::
Consider adding this command to your .login or shell rc file, such that
your shells will automatically be initialized with the environment
variables and aliases for the remote CLI commands.

View File

@ -0,0 +1,91 @@
.. rqy1582055871598
.. _kubernetes-user-tutorials-creating-persistent-volume-claims:
===============================
Create Persistent Volume Claims
===============================
Container images have an ephemeral file system by default. For data to survive
beyond the lifetime of a container, it can read and write files to a persistent
volume obtained with a :abbr:`PVC (Persistent Volume Claim)` created to provide
persistent storage.
.. rubric:: |context|
The following steps create two 1Gi persistent volume claims.
.. rubric:: |proc|
.. _kubernetes-user-tutorials-creating-persistent-volume-claims-d385e32:
#. Create the **test-claim1** persistent volume claim.
#. Create a yaml file defining the claim and its attributes.
For example:
.. code-block:: yaml
~(keystone_admin)$ cat <<EOF > claim1.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: test-claim1
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: general
EOF
#. Apply the settings created above.
.. code-block:: none
~(keystone_admin)$ kubectl apply -f claim1.yaml
persistentvolumeclaim/test-claim1 created
#. Create the **test-claim2** persistent volume claim.
#. Create a yaml file defining the claim and its attributes.
For example:
.. code-block:: yaml
~(keystone_admin)$ cat <<EOF > claim2.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: test-claim2
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: general
EOF
#. Apply the settings created above.
.. code-block:: none
~(keystone_admin)$ kubectl apply -f claim2.yaml
persistentvolumeclaim/test-claim2 created
.. rubric:: |result|
Two 1Gi persistent volume claims have been created. You can view them with the
following command.
.. code-block:: none
~(keystone_admin)$ kubectl get persistentvolumeclaims
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-claim1 Bound pvc-aaca.. 1Gi RWO general 2m56s
test-claim2 Bound pvc-e93f.. 1Gi RWO general 68s
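To inspect an individual claim in more detail, a command such as the following
can be used (a hedged sketch):

.. code-block:: none

    ~(keystone_admin)$ kubectl describe persistentvolumeclaim test-claim1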

View File

@ -0,0 +1,42 @@
.. tnr1582059022065
.. _kubernetes-user-tutorials-helm-package-manager:
====================
Helm Package Manager
====================
|prod-long| supports Helm with Tiller, the Kubernetes package manager that can
be used to manage the lifecycle of applications within the Kubernetes cluster.
.. rubric:: |context|
Helm packages are defined by Helm charts with container information sufficient
for managing a Kubernetes application. You can configure, install, and upgrade
your Kubernetes applications using Helm charts. Helm charts are defined with a
default set of values that describe the behavior of the service installed
within the Kubernetes cluster.
Upon system installation, the official curated helm chart repository is added
to the local helm repo list. In addition, a number of local repositories
\(containing optional |prod-long| packages\) are created and added to the helm
repo list. For more information, see `https://github.com/helm/charts
<https://github.com/helm/charts>`__.
Use the following command to list the helm repositories:
.. code-block:: none
~(keystone_admin)$ helm repo list
NAME URL
stable https://kubernetes-charts.storage.googleapis.com
local http://127.0.0.1:8879/charts
starlingx http://127.0.0.1:8080/helm_charts/starlingx
stx-platform http://127.0.0.1:8080/helm_charts/stx-platform
For more information on Helm, see the documentation at `https://helm.sh/docs/
<https://helm.sh/docs/>`__.
**Tiller** is a component of Helm. Tiller interacts directly with the
Kubernetes API server to install, upgrade, query, and remove Kubernetes
resources.
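As a brief, hedged sketch of a typical Helm v2 workflow against these
repositories (the chart and release names are illustrative):

.. code-block:: none

    ~(keystone_admin)$ helm repo update
    ~(keystone_admin)$ helm search wordpress
    ~(keystone_admin)$ helm install --name my-wordpress stable/wordpress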

View File

@ -0,0 +1,110 @@
.. orh1571690363235
.. _kubernetes-user-tutorials-installing-kubectl-and-helm-clients-directly-on-a-host:
===================================================
Install Kubectl and Helm Clients Directly on a Host
===================================================
As an alternative to using the container-backed Remote :abbr:`CLIs (Command
Line Interfaces)` for kubectl and helm, you can install these commands
directly on your remote host.
.. rubric:: |context|
Kubectl and helm installed directly on the remote host provide the best CLI
behaviour, especially for CLI commands that reference local files or require a
shell.
The following procedure shows you how to configure the :command:`kubectl` and
:command:`helm` clients directly on a remote host, for an admin user with
**cluster-admin clusterrole**. If using a non-admin user with only role
privileges within a private namespace, additional configuration is required in
order to use :command:`helm`.
.. rubric:: |prereq|
You will need the following information from your |prod| administrator:
.. _kubernetes-user-tutorials-installing-kubectl-and-helm-clients-directly-on-a-host-ul-nlr-1pq-nlb:
- the floating OAM IP address of the |prod|
- login credential information; in this example, it is the "TOKEN" for a
local Kubernetes ServiceAccount.
.. xreflink For a Windows Active Directory user, see,
|sec-doc|: :ref:`Overview of Windows Active Directory
<overview-of-windows-active-directory>`.
- your kubernetes namespace
.. rubric:: |proc|
.. _kubernetes-user-tutorials-installing-kubectl-and-helm-clients-directly-on-a-host-steps-f54-qqd-tkb:
#. On the workstation, install the :command:`kubectl` client on an Ubuntu
host by performing the following actions on the remote Ubuntu system.
#. Install the :command:`kubectl` client CLI.
.. code-block:: none
% sudo apt-get update
% sudo apt-get install -y apt-transport-https
% curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
% echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
% sudo apt-get update
% sudo apt-get install -y kubectl
.. _security-installing-kubectl-and-helm-clients-directly-on-a-host-local-configuration-context:
#. Set up the local configuration and context.
.. note::
In order for your remote host to trust the certificate used by the
|prod-long| K8s API, you must ensure that the
**k8s\_root\_ca\_cert** provided by your |prod| administrator is a
trusted CA certificate by your host. Follow the instructions for
adding a trusted CA certificate for the operating system
distribution of your particular host.
If your administrator does not provide a **k8s\_root\_ca\_cert**
at the time of installation, then specify
insecure-skip-tls-verify, as shown below.
.. code-block:: none
% kubectl config set-cluster mycluster --server=https://<$CLUSTEROAMIP>:6443 --insecure-skip-tls-verify
% kubectl config set-credentials dave-user@mycluster --token=$MYTOKEN
% kubectl config set-context dave-user@mycluster --cluster=mycluster --user=dave-user@mycluster --namespace=$MYNAMESPACE
% kubectl config use-context dave-user@mycluster
#. Test remote :command:`kubectl` access.
.. code-block:: none
% kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nodeinfo-648f.. 1/1 Running 0 62d 172.16.38.83 worker-4 <none> <none>
nodeinfo-648f.. 1/1 Running 0 62d 172.16.97.207 worker-3 <none> <none>
nodeinfo-648f.. 1/1 Running 0 62d 172.16.203.14 worker-5 <none> <none>
tiller-deploy.. 1/1 Running 0 27d 172.16.97.219 worker-3 <none> <none>
#. On the workstation, install the :command:`helm` client on an Ubuntu host
by performing the following actions on the remote Ubuntu system.
#. Install :command:`helm` client.
.. code-block:: none
% wget https://get.helm.sh/helm-v2.13.1-linux-amd64.tar.gz
% tar xvf helm-v2.13.1-linux-amd64.tar.gz
% sudo cp linux-amd64/helm /usr/local/bin
In order to use :command:`helm`, additional configuration is required.
For more information, see :ref:`Configuring Remote Helm Client
<configuring-remote-helm-client>`.

View File

@ -0,0 +1,191 @@
.. nos1582114374670
.. _kubernetes-user-tutorials-mounting-persistent-volumes-in-containers:
======================================
Mount Persistent Volumes in Containers
======================================
You can launch, attach, and terminate a busybox container to mount :abbr:`PVCs
(Persistent Volume Claims)` in your cluster.
.. rubric:: |context|
This example shows how a volume is claimed and mounted by a simple running
container. It is the responsibility of an individual micro-service within an
application to make a volume claim, mount it, and use it. For example, the
stx-openstack application will make volume claims for **mariadb** and
**rabbitmq** via their helm charts to orchestrate this.
.. rubric:: |prereq|
You must have created the persistent volume claims.
.. xreflink This procedure uses PVCs
with names and configurations created in |stor-doc|: :ref:`Creating Persistent
Volume Claims <storage-configuration-creating-persistent-volume-claims>`.
.. rubric:: |proc|
.. _kubernetes-user-tutorials-mounting-persistent-volumes-in-containers-d18e55:
#. Create the busybox container with the persistent volumes created from the
PVCs mounted.
#. Create a yaml file definition for the busybox container.
.. code-block:: none
% cat <<EOF > busybox.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: busybox
namespace: default
spec:
progressDeadlineSeconds: 600
replicas: 1
selector:
matchLabels:
run: busybox
template:
metadata:
labels:
run: busybox
spec:
containers:
- args:
- sh
image: busybox
imagePullPolicy: Always
name: busybox
stdin: true
tty: true
volumeMounts:
- name: pvc1
mountPath: "/mnt1"
- name: pvc2
mountPath: "/mnt2"
restartPolicy: Always
volumes:
- name: pvc1
persistentVolumeClaim:
claimName: test-claim1
- name: pvc2
persistentVolumeClaim:
claimName: test-claim2
EOF
#. Apply the busybox configuration.
.. code-block:: none
% kubectl apply -f busybox.yaml
#. Attach to the busybox and create files on the persistent volumes.
#. List the available pods.
.. code-block:: none
% kubectl get pods
NAME READY STATUS RESTARTS AGE
busybox-5c4f877455-gkg2s 1/1 Running 0 19s
#. Connect to the pod shell for CLI access.
.. code-block:: none
% kubectl attach busybox-5c4f877455-gkg2s -c busybox -i -t
#. From the container's console, list the disks to verify that the
persistent volumes are attached.
.. code-block:: none
# df
Filesystem 1K-blocks Used Available Use% Mounted on
overlay 31441920 3239984 28201936 10% /
tmpfs 65536 0 65536 0% /dev
tmpfs 65900776 0 65900776 0% /sys/fs/cgroup
/dev/rbd0 999320 2564 980372 0% /mnt1
/dev/rbd1 999320 2564 980372 0% /mnt2
/dev/sda4 20027216 4952208 14034624 26%
...
The PVCs are mounted as /mnt1 and /mnt2.
#. Create files in the mounted volumes.
.. code-block:: none
# cd /mnt1
# touch i-was-here
# ls /mnt1
i-was-here lost+found
#
# cd /mnt2
# touch i-was-here-too
# ls /mnt2
i-was-here-too lost+found
#. End the container session.
.. code-block:: none
# exit
Session ended, resume using 'kubectl attach busybox-5c4f877455-gkg2s -c busybox -i -t' command when the pod is running
#. Terminate the busybox container.
.. code-block:: none
% kubectl delete -f busybox.yaml
#. Re-create the busybox container, again attached to persistent volumes.
#. Apply the busybox configuration.
.. code-block:: none
% kubectl apply -f busybox.yaml
#. List the available pods.
.. code-block:: none
% kubectl get pods
NAME READY STATUS RESTARTS AGE
busybox-5c4f877455-jgcc4 1/1 Running 0 19s
#. Connect to the pod shell for CLI access.
.. code-block:: none
% kubectl attach busybox-5c4f877455-jgcc4 -c busybox -i -t
#. From the container's console, list the disks to verify that the PVCs are
attached.
.. code-block:: none
# df
Filesystem 1K-blocks Used Available Use% Mounted on
overlay 31441920 3239984 28201936 10% /
tmpfs 65536 0 65536 0% /dev
tmpfs 65900776 0 65900776 0% /sys/fs/cgroup
/dev/rbd0 999320 2564 980372 0% /mnt1
/dev/rbd1 999320 2564 980372 0% /mnt2
/dev/sda4 20027216 4952208 14034624 26%
...
#. Verify that the files created during the earlier container session still
exist.
.. code-block:: none
# ls /mnt1
i-was-here lost+found
# ls /mnt2
i-was-here-too lost+found

View File

@ -0,0 +1,15 @@
.. dzk1565869275194
.. _kubernetes-user-tutorials-rest-api-access:
===============
REST API Access
===============
The Kubernetes REST APIs provide programmatic access to |prod| Kubernetes.
Access the Kubernetes REST API using the URL
``https://<oam-floating-ip-address>:6443``, and using the API syntax described at
the following site:
`https://kubernetes.io/docs/concepts/overview/kubernetes-api/
<https://kubernetes.io/docs/concepts/overview/kubernetes-api/>`__.
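For example, assuming your administrator has provided a ServiceAccount token
stored in the TOKEN environment variable, a simple read-only query might look
like the following hedged sketch:

.. code-block:: none

    $ curl -k -H "Authorization: Bearer $TOKEN" https://<oam-floating-ip-address>:6443/api/v1/namespaces/default/pods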

View File

@ -0,0 +1,31 @@
.. myx1596548399062
.. _kubernetes-user-tutorials-vault-overview:
==============
Vault Overview
==============
You can optionally integrate open source Vault secret management into the
|prod| solution. The Vault integration requires :abbr:`PVC (Persistent Volume
Claims)` as a storage backend to be enabled.
There are two methods for using Vault secrets with hosted applications:
.. _kubernetes-user-tutorials-vault-overview-ul-ekx-y4m-4mb:
#. Have the application be Vault Aware and retrieve secrets using the Vault
REST API. This method can also be used to allow an application to write secrets to
Vault, provided the applicable policy gives write permission at the
specified Vault path. For more information, see
:ref:`Vault Aware <vault-aware>`.
#. Have the application be Vault Unaware and use the Vault Agent Injector to
make secrets available on the container filesystem. For more information,
see :ref:`Vault Unaware <vault-unaware>`.
Both methods require appropriate roles, policies and auth methods to be
configured in Vault.
.. xreflink For more information, see |sec-doc|: :ref:`Vault Secret
and Data Management <security-vault-overview>`.

View File

@ -0,0 +1,117 @@
.. nst1588348086813
.. _letsencrypt-example:
===================
LetsEncrypt Example
===================
The LetsEncrypt example illustrates cert-manager usage.
.. rubric:: |prereq|
This example requires that:
.. _letsencrypt-example-ul-h3j-f2w-nlb:
- the LetsEncrypt CA in the public internet can send an http01 challenge to
the FQDN of your |prod|'s floating OAM IP Address.
- your |prod| has access to the kuard demo application at
gcr.io/kuar-demo/kuard-amd64:blue
.. rubric:: |proc|
#. Create a LetsEncrypt Issuer in the default namespace by applying the
following manifest file.
.. code-block:: none
apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
name: letsencrypt-prod
spec:
acme:
# The ACME server URL
server: https://acme-v02.api.letsencrypt.org/directory
# Email address used for ACME registration
email: dave.user@hotmail.com
# Name of a secret used to store the ACME account private key
privateKeySecretRef:
name: letsencrypt-prod
# Enable the HTTP-01 challenge provider
solvers:
- http01:
ingress:
class: nginx
#. Create a deployment of the kuard demo application
\(`https://github.com/kubernetes-up-and-running/kuard
<https://github.com/kubernetes-up-and-running/kuard>`__\) with an ingress
using cert-manager by applying the following manifest file:
Substitute values in the example as required for your environment.
.. parsed-literal::
apiVersion: apps/v1
kind: Deployment
metadata:
name: kuard
spec:
replicas: 1
selector:
matchLabels:
app: kuard
template:
metadata:
labels:
app: kuard
spec:
containers:
- name: kuard
image: gcr.io/kuar-demo/kuard-amd64:blue
imagePullPolicy: Always
ports:
- containerPort: 8080
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: kuard
labels:
app: kuard
spec:
ports:
- port: 80
targetPort: 8080
protocol: TCP
selector:
app: kuard
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/issuer: "letsencrypt-prod"
name: kuard
spec:
tls:
- hosts:
- kuard.my-fqdn-for-|prefix|.company.com
secretName: kuard-ingress-tls
rules:
- host: kuard.my-fqdn-for-|prefix|.company.com
http:
paths:
- backend:
serviceName: kuard
servicePort: 80
path: /
#. Access the kuard demo from your browser to inspect and verify that the
certificate is signed by LetsEncrypt CA. For this example, the URL
would be https://kuard.my-fqdn-for-|prefix|.company.com.

View File

@ -0,0 +1,22 @@
.. viy1592399797304
.. _nodeport-usage-restrictions:
===========================
NodePort Usage Restrictions
===========================
This section lists the usage restrictions of NodePorts for your |prod-long| product.
The following usage restrictions apply when using NodePorts:
.. _nodeport-usage-restrictions-ul-erg-sgz-1mb:
- Ports in the NodePort range 31,500 to 32,767 are reserved for applications
that use Kubernetes NodePort service to expose the application externally.
- Ports in the NodePort range 30,000 to 31,499 are reserved for |prod-long|.
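For illustration, a NodePort service that uses a port from the application
range described above might look like the following sketch (the labels and
port values are placeholders):

.. code-block:: yaml

    apiVersion: v1
    kind: Service
    metadata:
      name: example-nodeport
    spec:
      type: NodePort
      selector:
        app: example
      ports:
      - port: 80
        targetPort: 8080
        nodePort: 31600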
.. only:: partner
.. include:: ../_includes/nodeport-usage-restrictions.rest

View File

@ -0,0 +1,31 @@
.. hqk1581948275511
.. _remote-cli-access:
=================
Remote CLI Access
=================
You can access the system :abbr:`CLIs (Command Line Interfaces)` from a
remote workstation using one of the following two methods.
.. xreflink .. note::
To use the remote Windows Active Directory server for authentication of
local :command:`kubectl` commands, see, |sec-doc|: :ref:`Overview of
Windows Active Directory <overview-of-windows-active-directory>`.
.. _remote-cli-access-ul-jt2-lcy-ljb:
#. The first method involves using the remote :abbr:`CLI (Command Line
Interface)` tarball from |dnload-loc| to install a set of container-backed
remote CLIs for accessing a remote |prod-long|. This provides
access to the kubernetes-related CLIs \(kubectl, helm\). This approach is
simple to install, portable across Linux, OSX and Windows, and provides
access to all |prod-long| CLIs. However, commands such as those that
reference local files or require a shell are awkward to run in this
environment.
#. The second method involves installing the :command:`kubectl` and
:command:`helm` clients directly on the remote host. This method only
provides the kubernetes-related CLIs and requires OS-specific installation
instructions.

View File

@ -0,0 +1,67 @@
.. rpw1591793808686
.. _specifying-kata-container-runtime-in-pod-spec:
==========================================
Specify Kata Container Runtime in Pod Spec
==========================================
You can specify the use of Kata Container runtime in your pod specification by
runtime class or by annotation.
.. rubric:: |proc|
* Do one of the following:
.. table::
:widths: auto
+--------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------+
| **To use the runtime class method:** | #. Create a RuntimeClass with handler set to kata. |
| | |
| | #. Reference this class in the pod spec, as shown in the following example: |
| | |
| | .. code-block:: none |
| | |
| | kind: RuntimeClass |
| | apiVersion: node.k8s.io/v1beta1 |
| | metadata: |
| | name: kata-containers |
| | handler: kata |
| | --- |
| | apiVersion: v1 |
| | kind: Pod |
| | metadata: |
| | name: busybox-runtime |
| | spec: |
| | runtimeClassName: kata-containers |
| | containers: |
| | - name: busybox |
| | command: |
| | - sleep |
| | - "3600" |
| | image: busybox |
+--------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------+
| **To use the annotation method:** | Set io.kubernetes.cri.untrusted-workload to true in the annotations section of a pod spec. |
| | |
| | For example: |
| | |
| | .. code-block:: none |
| | |
| | apiVersion: v1 |
| | kind: Pod |
| | metadata: |
| | name: busybox-untrusted |
| | annotations: |
| | io.kubernetes.cri.untrusted-workload: "true" |
| | spec: |
| | containers: |
| | - name: busybox |
| | command: |
| | - sleep |
| | - "3600" |
| | image: busybox |
| | |
| | .. note:: |
| | This method is deprecated and may not be supported in future releases. |
+--------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------+

View File

@ -0,0 +1,33 @@
.. usq1569263366388
.. _uninstalling-cmk:
=============
Uninstall CMK
=============
You can uninstall the CPU Manager for Kubernetes from the command line.
.. rubric:: |proc|
#. Delete **cmk**.
.. code-block:: none
% helm delete --purge cmk
Wait for all pods in the terminating state to be deleted before proceeding.
#. Delete **cmk-webhook**.
.. code-block:: none
% helm delete --purge cmk-webhook
Wait for all pods in the terminating state to be deleted before proceeding.
#. Delete **cmk-init**.
.. code-block:: none
% helm delete --purge cmk-init
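To confirm that all CMK-related pods have terminated, a simple check such as
the following can be used; the output should be empty once removal is complete
(a hedged sketch):

.. code-block:: none

    % kubectl get pods --all-namespaces | grep -i cmk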

View File

@ -0,0 +1,163 @@
.. vja1605798752774
.. _usertask-using-container-backed-remote-clis-and-clients:
================================
Use Container-backed Remote CLIs
================================
Remote platform :abbr:`CLIs (Command Line Interfaces)` can be used in any shell
after sourcing the generated remote CLI/client RC file. This RC file sets up
the required environment variables and aliases for the remote CLI commands.
.. contents:: The following topics are discussed below:
:local:
:depth: 1
.. note::
Consider adding this command to your .login or shell rc file, such that
your shells will automatically be initialized with the environment
variables and aliases for the remote CLI commands.
.. rubric:: |prereq|
You must have completed the configuration steps described in
:ref:`Configuring Container-backed Remote CLIs
<kubernetes-user-tutorials-configuring-container-backed-remote-clis-and-clients>`
before proceeding.
.. rubric:: |proc|
*******************************
Kubernetes kubectl CLI commands
*******************************
.. note::
The first usage of a remote CLI command will be slow as it requires
that the docker image supporting the remote CLIs/clients be pulled from
the remote registry.
.. code-block:: none
root@myclient:/home/user/remote_wd# source remote_client_platform.sh
Please enter your OpenStack Password for project tenant1 as user user1:
root@myclient:/home/user/remote_wd# kubectl -n kube-system get pods
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-767467f9cf-wtvmr 1/1 Running 1 3d2h
calico-node-j544l 1/1 Running 1 3d
calico-node-ngmxt 1/1 Running 1 3d1h
calico-node-qtc99 1/1 Running 1 3d
calico-node-x7btl 1/1 Running 4 3d2h
ceph-pools-audit-1569848400-rrpjq 0/1 Completed 0 12m
ceph-pools-audit-1569848700-jhv5n 0/1 Completed 0 7m26s
ceph-pools-audit-1569849000-cb988 0/1 Completed 0 2m25s
coredns-7cf476b5c8-5x724 1/1 Running 1 3d2h
...
root@myclient:/home/user/remote_wd#
.. note::
Some CLI commands are designed to leave you in a shell prompt, for
example:
.. code-block:: none
root@myclient:/home/user/remote_wd# openstack
or
.. code-block:: none
root@myclient:/home/user/remote_wd# kubectl exec -ti <pod_name> -- /bin/bash
In most cases, the remote CLI will detect and handle these commands
correctly. If you encounter cases that are not handled correctly, you
can force-enable or disable the shell options using the <FORCE\_SHELL>
or <FORCE\_NO\_SHELL> variables before the command.
For example:
.. code-block:: none
root@myclient:/home/user/remote_wd# FORCE_SHELL=true kubectl exec -ti <pod_name> -- /bin/bash
root@myclient:/home/user/remote_wd# FORCE_NO_SHELL=true kubectl exec <pod_name> -- ls
You cannot use both variables at the same time.
************************************
Remote CLI commands with local files
************************************
If you need to run a remote CLI command that references a local file, then
that file must be copied to or created in the working directory specified
in the ``-w`` option on the ./configure\_client.sh command.
For example:
#. If you have not already done so, source the remote\_client\_platform.sh
file.
.. code-block:: none
root@myclient:/home/user/remote_wd# source remote_client_platform.sh
#. Copy the local file and run the remote command.
.. code-block:: none
root@myclient:/home/user# cp /<someDir>/test.yml $HOME/remote_cli_wd/test.yml
root@myclient:/home/user# cd $HOME/remote_cli_wd
root@myclient:/home/user/remote_cli_wd# kubectl -n kube-system create -f test.yml
pod/test-pod created
root@myclient:/home/user/remote_cli_wd# kubectl -n kube-system delete -f test.yml
pod/test-pod deleted
****
Helm
****
Do the following to use helm.
.. xreflink .. note::
For non-admin users, additional configuration is required first as
discussed in |sec-doc|: :ref:`Configuring Remote Helm Client for
Non-Admin Users <configuring-remote-helm-client-for-non-admin-users>`.
.. note::
When using helm, any command that requires access to a helm repository
\(managed locally\) will require that you be in the
$HOME/remote\_cli\_wd directory and use the ``--home "./.helm"`` option.
#. Do the initial set-up of the helm client.
#. If you have not already done so, source the remote\_client\_platform.sh
file.
.. code-block:: none
% source remote_client_platform.sh
#. Complete the initial set-up.
.. code-block:: none
% cd $HOME/remote_cli_wd
% helm init --client-only --home "./.helm"
#. Run a helm command.
#. If you have not already done so, source the remote\_client\_platform.sh
file.
.. code-block:: none
% source remote_client_platform.sh
#. Run a helm command. This example installs WordPress.
.. code-block:: none
% cd $HOME/remote_cli_wd
% helm list
% helm install --name wordpress stable/wordpress --home "./.helm"

View File

@ -0,0 +1,68 @@
.. uxm1568850135371
.. _using-an-image-from-the-local-docker-registry-in-a-container-spec:
===============================================================
Use an Image from the Local Docker Registry in a Container Spec
===============================================================
When creating a pod spec or a deployment spec that uses an image from the local
docker registry, you must use the full image name, including the registry, and
specify an **imagePullSecret** with your keystone credentials.
.. rubric:: |context|
This example procedure assumes that the testuser/busybox:latest container image has
been pushed to the local docker registry.
.. rubric:: |proc|
#. Create a secret with credentials for the local docker registry.
.. code-block:: none
% kubectl create secret docker-registry testuser-registry-secret --docker-server=registry.local:9001 --docker-username=testuser --docker-password=<testuserPassword> --docker-email=noreply@windriver.com
#. Create a configuration for the busybox container.
.. code-block:: none
% cat <<EOF > busybox.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: busybox
namespace: default
spec:
progressDeadlineSeconds: 600
replicas: 1
selector:
matchLabels:
run: busybox
template:
metadata:
labels:
run: busybox
spec:
containers:
- args:
- sh
image: registry.local:9001/testuser/busybox:latest
imagePullPolicy: Always
name: busybox
stdin: true
tty: true
restartPolicy: Always
imagePullSecrets:
- name: testuser-registry-secret
EOF
#. Apply the configuration created in the busybox.yaml file.
.. code-block:: none
% kubectl apply -f busybox.yaml
This launches the busybox deployment using the image from the local docker
registry, specifying ``testuser-registry-secret`` for authentication and
authorization with the registry.
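
To verify that the deployment is using the image from the local registry, you
can check the pod status and the image reference recorded in the deployment
\(a quick sanity check; output formatting may vary\):

.. code-block:: none

    % kubectl get pods -l run=busybox
    % kubectl get deployment busybox -o jsonpath='{.spec.template.spec.containers[0].image}'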

View File

@ -0,0 +1,32 @@
.. nnj1569261145380
.. _using-intels-cpu-manager-for-kubernetes-cmk:
==============================================
Use Intel's CPU Manager for Kubernetes \(CMK\)
==============================================
Use the CMK user manual to run a workload via CMK.
See `https://github.com/intel/CPU-Manager-for-Kubernetes/blob/master/docs/user.md#pod-configuration-on-the-clusters-with-cmk-mutating-webhook-kubernetes-v190
<https://github.com/intel/CPU-Manager-for-Kubernetes/blob/master/docs/user.md#pod-configuration-on-the-clusters-with-cmk-mutating-webhook-kubernetes-v190>`__ for detailed instructions.
.. xreflink See Kubernetes Admin Tasks: :ref:`Kubernetes CPU Manager Static Policy
<isolating-cpu-cores-to-enhance-application-performance>` for details on how
to enable this CPU management mechanism.
The basic workflow is to:
.. _using-intels-cpu-manager-for-kubernetes-cmk-ul-xcq-cwb-2jb:
#. Request the number of exclusive cores you want as:
.. code-block:: none
cmk.intel.com/exclusive-cores
#. Run your workload as:
.. code-block:: none
/opt/bin/cmk isolate --pool=exclusive <workload>
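
The following is a minimal sketch of a pod spec that combines these two steps.
The image, pool name, core count, and placeholder workload \(``sleep 3600``\)
are illustrative assumptions and must be adapted to your CMK installation; the
sketch also assumes that the CMK mutating webhook mounts the :command:`cmk`
binary into the container at /opt/bin/cmk.

.. code-block:: none

    % cat <<EOF > cmk-isolate-pod.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: cmk-isolate-pod
    spec:
      containers:
      - name: cmk-isolate-app
        # Any image can be used; /opt/bin/cmk is assumed to be injected by
        # the CMK mutating webhook. 'sleep 3600' stands in for a real workload.
        image: busybox
        command: ["sh", "-c"]
        args: ["/opt/bin/cmk isolate --pool=exclusive sleep 3600"]
        resources:
          requests:
            cmk.intel.com/exclusive-cores: '1'
          limits:
            cmk.intel.com/exclusive-cores: '1'
    EOF
    % kubectl apply -f cmk-isolate-pod.yaml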

View File

@ -0,0 +1,101 @@
.. klf1569260954792
.. _using-kubernetes-cpu-manager-static-policy:
========================================
Use Kubernetes CPU Manager Static Policy
========================================
You can launch a container pinned to a particular set of CPU cores using a
Kubernetes CPU manager static policy.
.. rubric:: |prereq|
You will need to enable this CPU management mechanism before applying a
policy.
.. See |admintasks-doc|: :ref:`Optimizing Application Performance
<kubernetes-cpu-manager-policies>` for details.
.. rubric:: |proc|
#. Define a container running a CPU stress command.
.. note::
- The pod will be pinned to the allocated set of CPUs on the host
and will have exclusive use of those CPUs if <resources:requests:cpu>
is equal to <resources:limits:cpu>.
- Resource memory must also be specified for guaranteed resource
allocation.
- Processes within the pod can float across the set of CPUs allocated
to the pod, unless the application in the pod explicitly pins them
to a subset of the CPUs.
For example:
.. code-block:: none
% cat <<EOF > stress-cpu-pinned.yaml
apiVersion: v1
kind: Pod
metadata:
name: stress-ng-cpu
spec:
containers:
- name: stress-ng-app
image: alexeiled/stress-ng
imagePullPolicy: IfNotPresent
command: ["/stress-ng"]
args: ["--cpu", "10", "--metrics-brief", "-v"]
resources:
requests:
cpu: 2
memory: "2Gi"
limits:
cpu: 2
memory: "2Gi"
nodeSelector:
kubernetes.io/hostname: worker-1
EOF
You will likely need to adjust some values shown above to reflect your
deployment configuration. For example, on an AIO-SX or AIO-DX system,
worker-1 would likely become controller-0 or controller-1.
The significant addition to this definition in support of CPU pinning is
the **resources** section, which sets a CPU resource request and limit of
2.
#. Apply the definition.
.. code-block:: none
% kubectl apply -f stress-cpu-pinned.yaml
You can SSH to the worker node, run :command:`top`, and type '1' to see
per-core CPU details. \(A cpuset check example is also shown after this
procedure.\)
#. Describe the pod or node to see the CPU Requests, CPU Limits, and confirm
that the pod is in the "Guaranteed" QoS Class.
For example:
.. code-block:: none
% kubectl describe node <node_name>
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
default stress-ng-cpu 2 (15%) 2 (15%) 2Gi (7%) 2Gi (7%) 9m31s
% kubectl describe pod stress-ng-cpu
...
QoS Class: Guaranteed
#. Delete the container.
.. code-block:: none
% kubectl delete -f stress-cpu-pinned.yaml
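
If you want to confirm the exact CPUs assigned while the pod is still running,
one option is to read the cpuset from inside the container. This is a sketch
that assumes a cgroup v1 environment where the cpuset controller is visible at
the usual path:

.. code-block:: none

    % kubectl exec stress-ng-cpu -- cat /sys/fs/cgroup/cpuset/cpuset.cpus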

View File

@ -0,0 +1,236 @@
.. ulm1559068249625
.. _using-network-attachment-definitions-in-a-container:
=================================================
Use Network Attachment Definitions in a Container
=================================================
Network attachment definitions can be referenced by name when creating a
container. The extended resource name of the SR-IOV pool should also be
referenced in the resource requests.
.. rubric:: |context|
The following examples use network attachment definitions **net2** and **net3**
created in :ref:`Creating Network Attachment Definitions
<creating-network-attachment-definitions>`.
As shown in these examples, it is important that you request the same number of
devices as you have network annotations.
.. xreflink For information about PCI-SRIOV Interface Support, see the :ref:`|datanet-doc|
<data-network-management-data-networks>` guide.
.. rubric:: |proc|
.. _using-network-attachment-definitions-in-a-container-steps-j2n-zqz-hjb:
#. Create a container with one additional SR-IOV interface using the **net2**
network attachment definition.
#. Populate the configuration file pod1.yaml with the following contents.
.. code-block:: yaml
apiVersion: v1
kind: Pod
metadata:
name: pod1
annotations:
k8s.v1.cni.cncf.io/networks: '[
{ "name": "net2" }
]'
spec:
containers:
- name: pod1
image: centos/tools
imagePullPolicy: IfNotPresent
command: [ "/bin/bash", "-c", "--" ]
args: [ "while true; do sleep 300000; done;" ]
resources:
requests:
intel.com/pci_sriov_net_datanet_b: '1'
limits:
intel.com/pci_sriov_net_datanet_b: '1'
#. Apply the configuration to create the container.
.. code-block:: none
~(keystone_admin)$ kubectl create -f pod1.yaml
pod/pod1 created
After creating the container, an extra network device interface, **net2**,
which uses one SR-IOV VF, will appear in the associated container\(s\).
#. Create a container with two additional SR-IOV interfaces using the **net2**
network attachment definition.
#. Populate the configuration file pod2.yaml with the following contents.
.. code-block:: yaml
apiVersion: v1
kind: Pod
metadata:
name: pod2
annotations:
k8s.v1.cni.cncf.io/networks: '[
{ "name": "net2" }
]'
spec:
containers:
- name: pod2
image: centos/tools
imagePullPolicy: IfNotPresent
command: [ "/bin/bash", "-c", "--" ]
args: [ "while true; do sleep 300000; done;" ]
resources:
requests:
intel.com/pci_sriov_net_datanet_b: '2'
limits:
intel.com/pci_sriov_net_datanet_b: '2'
#. Apply the configuration to create the container.
.. code-block:: none
~(keystone_admin)$ kubectl create -f pod2.yaml
pod/pod2 created
After creating the container, network device interfaces **net2** and
**net3**, which each use one SR-IOV VF, will appear in the associated
container\(s\).
#. Create a container with two additional SR-IOV interfaces using the **net2**
and **net3** network attachment definitions.
#. Populate the configuration file pod3.yaml with the following contents.
.. code-block:: yaml
apiVersion: v1
kind: Pod
metadata:
name: pod3
annotations:
k8s.v1.cni.cncf.io/networks: '[
{ "name": "net2" },
{ "name": "net3" }
]'
spec:
containers:
- name: pod3
image: centos/tools
imagePullPolicy: IfNotPresent
command: [ "/bin/bash", "-c", "--" ]
args: [ "while true; do sleep 300000; done;" ]
resources:
requests:
intel.com/pci_sriov_net_datanet_b: '2'
limits:
intel.com/pci_sriov_net_datanet_b: '2'
#. Apply the configuration to create the container.
.. code-block:: none
~(keystone_admin)$ kubectl create -f pod3.yaml
**net2** and **net3** will each take a device from the
**pci\_sriov\_net\_datanet\_b** pool and be configured on the
container/host based on their respective
**NetworkAttachmentDefinition** specifications \(see :ref:`Creating Network
Attachment Definitions <creating-network-attachment-definitions>`\).
After creating the container, network device interfaces **net2** and
**net3**, which each use one SR-IOV VF, will appear in the associated
container\(s\).
.. note::
In the above examples, the PCI addresses allocated to the container are
exported via an environment variable. For example:
.. code-block:: none
~(keystone_admin)$ kubectl exec -n default -it pod3 -- printenv
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=pod3
TERM=xterm
PCIDEVICE_INTEL_COM_PCI_SRIOV_NET_DATANET_B=0000:81:0e.4,0000:81:0e.0
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
container=docker
HOME=/root
#. Create a container with two additional SR-IOV interfaces using the **net1**
network attachment definition for a DPDK based application.
The configuration of the SR-IOV host interface to which the datanetwork is
assigned determines whether the network attachment in the container will be
kernel-based or DPDK-based. The SR-IOV host interface must be created with a
vfio **vf-driver** for the network attachment in the container to be
DPDK-based; otherwise, it will be kernel-based. \(An illustrative host
interface configuration sketch is shown after this procedure.\)
#. Populate the configuration file pod4.yaml with the following contents.
.. code-block:: yaml
apiVersion: v1
kind: Pod
metadata:
name: testpmd
annotations:
k8s.v1.cni.cncf.io/networks: '[
{ "name": "net1" },
{ "name": "net1" }
]'
spec:
restartPolicy: Never
containers:
- name: testpmd
image: "amirzei/mlnx_docker_dpdk:ubuntu16.04"
volumeMounts:
- mountPath: /mnt/huge-1048576kB
name: hugepage
stdin: true
tty: true
securityContext:
privileged: false
capabilities:
add: ["IPC_LOCK", "NET_ADMIN", "NET_RAW"]
resources:
requests:
memory: 10Gi
intel.com/pci_sriov_net_datanet_a: '2'
limits:
hugepages-1Gi: 4Gi
memory: 10Gi
intel.com/pci_sriov_net_datanet_a: '2'
volumes:
- name: hugepage
emptyDir:
medium: HugePages
.. note::
You must convert any dashes \(-\) in the datanetwork name used in
the NetworkAttachmentDefinition to underscores \(\_\).
#. Apply the configuration to create the container.
.. code-block:: none
~(keystone_admin)$ kubectl create -f pod4.yaml
pod/testpmd created
.. note::
For applications backed by Mellanox NICs, privileged mode is required in
the pod's security context. For Intel i40e based NICs bound to vfio,
privileged mode is not required.
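
The kernel versus DPDK behaviour of a network attachment depends on how the
administrator configured the corresponding SR-IOV host interface. The
following is an illustrative sketch only; the interface name, VF count, host
name, and exact :command:`system host-if-modify` options are assumptions and
may vary by release:

.. code-block:: none

    ~(keystone_admin)$ system host-if-modify -c pci-sriov -n sriov0 -N 4 --vf-driver=vfio worker-1 <interface name or uuid>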

View File

@ -0,0 +1,41 @@
.. rpr1596551983445
.. _vault-aware:
===========
Vault Aware
===========
The Vault Aware method involves writing an application that connects directly
to a Vault server using the Vault REST API. The Vault REST API requires an
existing auth method and policy to be created; the specifics depend on the
client libraries used.
The Vault REST API is used to allow an application to read and/or write secrets
to Vault, provided the applicable policy gives read and/or write permission at
the specified Vault path. The Vault REST API can be accessed from application
containers using the Vault endpoint **sva-vault**. Run the following command
to view Vault endpoints:
.. code-block:: none
$ kubectl get svc -n vault
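
As a minimal illustration of the Vault Aware pattern, an application that has
already obtained a Vault token can read a secret directly over the REST API.
The token variable, CA certificate path, and secret path below are assumptions
for the example:

.. code-block:: none

    $ curl --cacert <path-to-vault-ca.pem> \
           --header "X-Vault-Token: $VAULT_TOKEN" \
           https://sva-vault.vault.svc.cluster.local:8200/v1/secret/basic-secret/helloworld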
.. seealso::
.. _vault-aware-ul-rlf-zw1-pmb:
- Vault REST API:
- `https://learn.hashicorp.com/vault/getting-started/apis
<https://learn.hashicorp.com/vault/getting-started/apis>`__
- `https://www.vaultproject.io/api-docs
<https://www.vaultproject.io/api-docs>`__
- Client libraries: `https://www.vaultproject.io/api/libraries.html
<https://www.vaultproject.io/api/libraries.html>`__
- Connect Vault with Python using the HVAC library:
`https://github.com/hvac/hvac <https://github.com/hvac/hvac>`__

View File

@ -0,0 +1,150 @@
.. izv1596552030484
.. _vault-unaware:
=============
Vault Unaware
=============
The Vault Unaware method uses the **Vault Agent Injector** to attach a sidecar
container to a given pod. The sidecar handles all authentication with Vault,
retrieves the specified secrets, and mounts them on a shared filesystem to make
them available to all containers in the pod. The applications running in the
pod can access these secrets as files.
.. rubric:: |prereq|
.. _vault-unaware-ul-y32-svm-4mb:
- Configure and enable the Kubernetes Auth Method before configuring the
Vault Unaware method.
.. xreflink For more information, see, |sec-doc|:
:ref:`Configuring Vault <configuring-vault>`.
- Ensure that a policy and role exist for the application's service account
to access the 'secret' path secret engine, and that a secret actually exists
in this secret engine.
.. _vault-unaware-ol-nfj-qlk-qmb:
#. Set environment variables on controller-0.
.. code-block:: none
$ ROOT_TOKEN=$(kubectl exec -n vault sva-vault-manager-0 -- cat /mnt/data/cluster_keys.json | grep -oP --color=never '(?<="root_token":")[^"]*')
$ SA_CA_CERT=$(kubectl exec -n vault sva-vault-0 -- awk 'NF {sub(/\r/, ""); printf "%s\\n",$0;}' /var/run/secrets/kubernetes.io/serviceaccount/ca.crt)
$ TOKEN_JWT=$(kubectl exec -n vault sva-vault-0 -- cat /var/run/secrets/kubernetes.io/serviceaccount/token)
$ KUBERNETES_PORT_443_TCP_ADDR=$(kubectl exec -n vault sva-vault-0 -- sh -c 'echo $KUBERNETES_PORT_443_TCP_ADDR')
$ echo $(kubectl get secrets -n vault vault-ca -o jsonpath='{.data.tls\.crt}') | base64 --decode > /home/sysadmin/vault_ca.pem
#. Create the policy.
.. code-block:: none
$ curl --cacert /home/sysadmin/vault_ca.pem --header "X-Vault-Token:$ROOT_TOKEN" -H "Content-Type: application/json" --request PUT -d '{"policy":"path \"secret/basic-secret/*\" {capabilities = [\"read\"]}"}' https://sva-vault.vault.svc.cluster.local:8200/v1/sys/policy/basic-secret-policy
#. Create the role with policy and namespace.
.. code-block:: none
$ curl --cacert /home/sysadmin/vault_ca.pem --header "X-Vault-Token:$ROOT_TOKEN" --request POST --data '{ "bound_service_account_names": "basic-secret", "bound_service_account_namespaces": "default", "policies": "basic-secret-policy", "max_ttl": "1800000"}' https://sva-vault.vault.svc.cluster.local:8200/v1/auth/kubernetes/role/basic-secret-role
#. Create the secret.
.. code-block:: none
$ curl --cacert /home/sysadmin/vault_ca.pem --header "X-Vault-Token:$ROOT_TOKEN" -H "Content-Type: application/json" -X POST -d '{"username":"pvtest","password":"Li69nux*"}' https://sva-vault.vault.svc.cluster.local:8200/v1/secret/basic-secret/helloworld
#. Verify the secret.
.. code-block:: none
$ curl --cacert /home/sysadmin/vault_ca.pem --header "X-Vault-Token:$ROOT_TOKEN" https://sva-vault.vault.svc.cluster.local:8200/v1/secret/basic-secret/helloworld
.. rubric:: |proc|
#. Copy the Vault certs to the default namespace.
.. code-block:: none
$ kubectl get secret vault-server-tls --namespace=vault --export -o yaml | kubectl apply --namespace=default -f-
#. Use the following vault-injector.yaml file to create a test namespace and
an example Vault Unaware deployment, 'basic-secret', with Vault annotations
for creating the Vault Agent Injector sidecar container:
.. code-block:: yaml
cat <<EOF >> vault-injector.yaml
apiVersion: v1
kind: Namespace
metadata:
name: test
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: basic-secret
namespace: test
labels:
app: basic-secret
spec:
selector:
matchLabels:
app: basic-secret
replicas: 1
template:
metadata:
annotations:
vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/tls-skip-verify: "true"
vault.hashicorp.com/agent-inject-secret-helloworld: "secret/basic-secret/helloworld"
vault.hashicorp.com/agent-inject-template-helloworld: |
{{- with secret "secret/basic-secret/helloworld" -}}
{
"username" : "{{ .Data.username }}",
"password" : "{{ .Data.password }}"
}
{{- end }}
vault.hashicorp.com/role: "basic-secret-role"
labels:
app: basic-secret
spec:
serviceAccountName: basic-secret
containers:
- name: app
image: jweissig/app:0.0.1
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: basic-secret
labels:
app: basic-secret
EOF
#. Apply the application and verify the pod is running.
.. code-block:: none
$ kubectl create -f vault-injector.yaml
#. Verify secrets are injected into the pod.
.. code-block:: none
$ kubectl exec -n test basic-secret-55d6c9bb6f-4whbp -- cat /vault/secrets/helloworld
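
If the exec command cannot find the pod, first confirm that the deployment is
running and identify the generated pod name \(the namespace and label come
from the deployment definition above\):

.. code-block:: none

    $ kubectl get pods -n test -l app=basic-secret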
.. _vault-unaware-ul-jsf-dqm-4mb:
.. seealso::
`https://www.vaultproject.io/docs/platform/k8s/injector
<https://www.vaultproject.io/docs/platform/k8s/injector>`__
`https://learn.hashicorp.com/vault/kubernetes/sidecar
<https://learn.hashicorp.com/vault/kubernetes/sidecar>`__