Added content related to Helm v3

Included information from the message attached to the Jira issue.
Edited content in Admin tasks and User tasks

Patch 1: addressed comments from Greg.

Patch 4: added User task to the correct branch

Patch 5: addressed feedback from Greg, including in other topics
Edited content in: -helm client for non-admin users
		   -container-backed remote CLIs and clients
		   -install kubectl and helm clients on a host

Patch 6: edited and added content in
 		   -helm client for non-admin users

Patch 7: Added content in: Configure remote CLI
			   Configure container-backed...
			   Use container-backed...

Patch 10: acted on Jim's comment and
	removed topic 'create, test, and terminate a ptp notification demo'
	removed links to this topic

Patch 11: acted on Greg's comments

Patch 12: acted on Greg's comments

Story: 2007000
Task: 42241

https://review.opendev.org/c/starlingx/docs/+/783891

Signed-off-by: Adil <mohamed.adilassakkali@windriver.com>

Change-Id: I9a5faf5549775593ddfd517d43412725d257b24f
This commit is contained in:
Adil 2021-03-30 12:01:56 -03:00
parent c66a6dff78
commit bb5cf99f7b
13 changed files with 257 additions and 346 deletions

View File

@ -6,9 +6,8 @@
Helm Package Manager
====================
|prod-long| supports Helm with Tiller, the Kubernetes package manager that
can be used to manage the lifecycle of applications within the Kubernetes
cluster.
|prod-long| supports the Helm v3 package manager for Kubernetes, which can be
used to securely manage the lifecycle of applications within the Kubernetes cluster.
.. rubric:: |context|
@ -18,26 +17,30 @@ your Kubernetes applications using Helm charts. Helm charts are defined with a
default set of values that describe the behavior of the service installed
within the Kubernetes cluster.
Upon system installation, the official curated Helm chart repository is added
to the local Helm repo list; in addition, a number of local repositories
\(containing optional |prod-long| packages\) are created and added to the Helm
repo list. For more information,
see `https://github.com/helm/charts <https://github.com/helm/charts>`__.
A Helm v3 client is installed on controllers for local use by admins to manage
end-users' Kubernetes applications. |prod| recommends installing a Helm v3
client on a remote workstation, so that non-admin \(and admin\) end-users can
manage their Kubernetes applications remotely.
Use the following command to list the Helm repositories:
Upon system installation, local Helm repositories \(containing |prod-long|
packages\) are created and added to the Helm repo list.
Use the following command to list these local Helm repositories:
.. code-block:: none
~(keystone_admin)]$ helm repo list
NAME URL
stable `https://kubernetes-charts.storage.googleapis.com`__
local `http://127.0.0.1:8879/charts`__
starlingx `http://127.0.0.1:8080/helm_charts/starlingx`__
stx-platform `http://127.0.0.1:8080/helm_charts/stx-platform`__
starlingx `http://127.0.0.1:8080/helm_charts/starlingx`
stx-platform `http://127.0.0.1:8080/helm_charts/stx-platform`
For more information on Helm, see the documentation at `https://helm.sh/docs/ <https://helm.sh/docs/>`__.
The `stx-platform` repo holds helm charts of StarlingX applications \(see
next section\) of the |prod| platform itself, while the `starlingx` repo holds
helm charts of optional StarlingX applications, such as OpenStack. The admin
user can add charts to these local repos, regenerate the index to use these
charts, and add new remote repositories to the list of known repos.
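
For example, a remote repository can be added and the repo list refreshed as
follows \(the bitnami repository is used purely as an illustration\):

.. code-block:: none

~(keystone_admin)]$ helm repo add bitnami https://charts.bitnami.com/bitnami
~(keystone_admin)]$ helm repo update
~(keystone_admin)]$ helm repo list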
**Tiller** is a component of Helm. Tiller interacts directly with the
Kubernetes API server to install, upgrade, query, and remove Kubernetes
resources.
For more information on Helm v3, see the documentation at `https://helm.sh/docs/ <https://helm.sh/docs/>`__.
For more information on how to configure and use Helm both locally and remotely, see :ref:`Configure Local CLI Access <configure-local-cli-access>`,
and :ref:`Configure Remote CLI Access <configure-remote-cli-access>`.

View File

@ -1,164 +0,0 @@
.. jff1614105111370
.. _create-test-and-terminate-a-ptp-notification-demo:
===================================================
Create, Test, and Terminate a PTP Notification Demo
===================================================
This section provides instructions on accessing, creating, testing and
terminating a **ptp-notification-demo**.
.. rubric:: |context|
Use the following procedure to copy the tarball from |dnload-loc|, create, test,
and terminate a ptp-notification-demo.
.. rubric:: |proc|
.. _create-test-and-terminate-a-ptp-notification-demo-steps-irz-5w4-t4b:
#. Copy the **ptp-notification-demo\_v1.0.2.tgz** file from |prod-long|
at `http://mirror.starlingx.cengn.ca/mirror/starlingx/
<http://mirror.starlingx.cengn.ca/mirror/starlingx/>`__ to your system, and extract its content.
.. note::
The tarball includes the docker file and code to build the reference
API application, and the Helm chart to install the Sidecar along with
the application.
The following files are part of the tarball:
- Helm charts
- Chart.yaml
- values.yaml
- \_helpers.tpl
- configmap.yaml
- deployment.yaml
- .helmignore
- ptp-notification-override.yaml
- app\_start.sh
- sidecar\_start.sh
- notification-docker
- Dockerfile
- api
.. note::
The demo uses the following images:
- starlingx/notificationclient-base:stx.5.0-v1.0.3
- ptp-base:1.0.1
#. Build the **ptp-base:1.0.1** image using the following commands.
.. code-block:: none
$ tar xvf ptp-notification-demo_<v1.0.2>.tgz
$ cd ~/notification-dockers/ptp-base/
$ sudo docker build . -t ptp-base:1.0.1
$ sudo docker save ptp-base:1.0.1 -o ptp-base.1.0.1.tar
$ sudo ctr -n k8s.io image import ./ptp-base.1.0.1.tar
$ cd ~/charts
$ tar xvf ptp-notification-demo-1.tgz
.. note::
For |AIO|-SX and |AIO|-DX systems, ptp-base.1.0.1.tar should be copied to
each node and the import command, :command:`sudo ctr -n k8s.io image
import ./ptp-base.1.0.1.tar` should be run on each node.
#. Install the demo's pod using the following commands.
.. note::
This pod includes two containers, Sidecar and the referenced API
application.
.. code-block:: none
$ kubectl create namespace ptpdemo
$ helm install -n notification-demo ~/charts/ptp-notification-demo -f ~/charts/ptp-notification-demo/ptp-notification-override.yaml
$ kubectl get pods -n ptpdemo
.. code-block:: none
NAME READY STATUS RESTARTS AGE
notification-demo-ptp-notification-demo-cf7b65c47-s5jk6 2/2 Running 0 5m50s
#. Test the **ptp-notification** demo.
#. Display the app logs using the following command.
.. code-block:: none
$ kubectl logs -f notification-demo-ptp-notification-demo-<xyz> -c ptp-notification-demo-app -n ptpdemo
#. In another terminal, access the application container.
.. code-block:: none
$ kubectl exec -it notification-demo-ptp-notification-demo-<xyz> -c ptp-notification-demo-app -n ptpdemo -- bash
#. Check if you can pull |PTP| status using the REST API.
.. code-block:: none
$ curl -v -H 'Content-Type: application/json' http://127.0.0.1:8080/ocloudNotifications/v1/PTP/CurrentState
#. Subscribe to |PTP| notifications.
.. code-block:: none
$ curl -v -d '{"ResourceType": "PTP", "ResourceQualifier": {"NodeName": "controller-0"}, "EndpointUri": "http://127.0.0.1:9090/v1/resource_status/ptp"}' -H 'Content-Type: application/json' -X POST http://127.0.0.1:${SIDECAR_API_PORT}/ocloudNotifications/v1/subscriptions |python -m json.tool
#. Retrieve a list of subscriptions.
.. code-block:: none
$ curl -v -H 'Content-Type: application/json' http://127.0.0.1:${SIDECAR_API_PORT}/ocloudNotifications/v1/subscriptions |python -m json.tool
For example, to get a specific subscription, use the following command.
.. code-block:: none
$ curl -v -H 'Content-Type: application/json' http://127.0.0.1:${SIDECAR_API_PORT}/ocloudNotifications/v1/subscriptions/<subscriptionId>
#. To delete a specific subscription with the subscription ID, run the
following command.
.. code-block:: none
$ curl -X DELETE -v -H 'Content-Type: application/json' http://127.0.0.1:${SIDECAR_API_PORT}/ocloudNotifications/v1/subscriptions/<subscriptionId>
#. Terminate the demo using the following command.
.. code-block:: none
$ helm del --purge notification-demo

View File

@ -14,6 +14,3 @@ PTP Notification
remove-ptp-notifications
override-default-application-values
integrate-the-application-with-notification-client-sidecar
create-test-and-terminate-a-ptp-notification-demo

View File

@ -41,12 +41,3 @@ The following prerequisites are required before the integration:
.. image:: ../figures/cak1614112389132.png
:width: 800
For instructions on creating, testing and terminating a
**ptp-notification-demo**, see :ref:`Create, Test, and Terminate |PTP|
Notification Demo <create-test-and-terminate-a-ptp-notification-demo>`.

View File

@ -99,13 +99,6 @@ For example:
| 1 | controller-0 | controller | unlocked | enabled | available |
+----+--------------+-------------+----------------+-------------+--------------+
.. note::
In the following examples, the prompt is shortened to:
.. code-block:: none
~(keystone_admin)]$
Use :command:`system help` for a full list of :command:`system` subcommands.
**fm commands**
@ -144,7 +137,16 @@ For example:
NAME READY STATUS RESTARTS AGE
dashboard-kubernetes-dashboard-7749d97f95-bzp5w 1/1 Running 0 3d18h
.. note::
Use the remote Windows Active Directory server for authentication of
local :command:`kubectl` commands.
**Helm commands**
Helm commands are executed with the :command:`helm` command.
For example:
.. code-block:: none
% helm repo add bitnami https://charts.bitnami.com/bitnami
% helm repo update
% helm repo list
% helm search repo
% helm install wordpress bitnami/wordpress

View File

@ -42,5 +42,5 @@ either of the above two methods.
:ref:`Install Kubectl and Helm Clients Directly on a Host
<security-install-kubectl-and-helm-clients-directly-on-a-host>`
:ref:`Configure Remote Helm Client for Non-Admin Users
:ref:`Configure Remote Helm v2 Client
<configure-remote-helm-client-for-non-admin-users>`

View File

@ -2,44 +2,141 @@
.. oiz1581955060428
.. _configure-remote-helm-client-for-non-admin-users:
================================================
Configure Remote Helm Client for Non-Admin Users
================================================
===============================
Configure Remote Helm v2 Client
===============================
For non-admin users \(i.e. users without access to the default Tiller
server running in kube-system namespace\), you must create a Tiller server
for this specific user in a namespace that they have access to.
Helm v3 is recommended for users to install and manage their
containerized applications. However, Helm v2 may be required, for example, if
the containerized application supports only a Helm v2 chart.
.. rubric:: |context|
By default, helm communicates with the default Tiller server in the
kube-system namespace. This is not accessible by non-admin users.
For non-admin users' use of the helm client, you must create your own Tiller
server, in a namespace that you have access to, with the required |RBAC|
Helm v2 is supported only remotely, and only with kubectl and Helm v2 clients
configured directly on the remote host workstation. In addition to installing
the Helm v2 clients, users must also create their own Tiller server, in a
namespace that the user has access to, with the required |RBAC|
capabilities and, optionally, |TLS| protection.
To create a Tiller server with |RBAC| permissions within the default
namespace, complete the following steps on the controller. Except where
indicated, these commands can be run by the non-admin user, locally or
remotely.
.. note::
If you are using container-backed helm CLIs and clients \(method 1\),
ensure you change directories to <$HOME>/remote\_cli\_wd.
Complete the following steps to configure Helm v2 for managing containerized
applications with a Helm v2 chart.
.. rubric:: |proc|
.. _configure-remote-helm-client-for-non-admin-users-steps-isx-dsd-tkb:
#. Set the namespace.
#. On the controller, create an admin-user service account if this is not
already available.
#. Create the **admin-user** service account in **kube-system**
namespace and bind the **cluster-admin** ClusterRoleBinding to this user.
.. code-block:: none
% cat <<EOF > admin-login.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF
% kubectl apply -f admin-login.yaml
#. Retrieve the secret token.
.. code-block:: none
% kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
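
For convenience, the token can be captured directly in a shell variable
\(matching the TOKEN\_DATA variable used when configuring the kubectl
context below\):

.. code-block:: none

% TOKEN_DATA=$(kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') | grep "token:" | awk '{print $2}')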
#. On the workstation, if it is not already available, install the
:command:`kubectl` client on an Ubuntu host by taking the following actions
on the remote Ubuntu system.
#. Install the :command:`kubectl` client CLI.
.. code-block:: none
% sudo apt-get update
% sudo apt-get install -y apt-transport-https
% curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | \
sudo apt-key add -
% echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | \
sudo tee -a /etc/apt/sources.list.d/kubernetes.list
% sudo apt-get update
% sudo apt-get install -y kubectl
#. Set up the local configuration and context.
.. note::
In order for your remote host to trust the certificate used by
the |prod-long| K8S API, you must ensure that the
**k8s\_root\_ca\_cert** specified at install time is a trusted
CA certificate by your host. Follow the instructions for adding
a trusted CA certificate for the operating system distribution
of your particular host.
If you did not specify a **k8s\_root\_ca\_cert** at install
time, then specify insecure-skip-tls-verify, as shown below.
.. code-block:: none
% kubectl config set-cluster mycluster --server=https://<oam-floating-IP>:6443 \
--insecure-skip-tls-verify
% kubectl config set-credentials admin-user@mycluster --token=$TOKEN_DATA
% kubectl config set-context admin-user@mycluster --cluster=mycluster \
--user admin-user@mycluster --namespace=default
% kubectl config use-context admin-user@mycluster
<$TOKEN\_DATA> is the token retrieved in step 1.
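
If you did specify a **k8s\_root\_ca\_cert** at install time, that
certificate must instead be trusted by the host. A minimal sketch for an
Ubuntu host \(the certificate filename is illustrative\):

.. code-block:: none

% sudo cp k8s-root-ca.crt /usr/local/share/ca-certificates/k8s-root-ca.crt
% sudo update-ca-certificates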
#. Test remote :command:`kubectl` access.
.. code-block:: none
% kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE ...
controller-0 Ready master 15h v1.12.3 192.168.204.3 <none> CentOS L ...
controller-1 Ready master 129m v1.12.3 192.168.204.4 <none> CentOS L ...
worker-0 Ready <none> 99m v1.12.3 192.168.204.201 <none> CentOS L ...
worker-1 Ready <none> 99m v1.12.3 192.168.204.202 <none> CentOS L ...
%
#. Install the Helm v2 client on the remote workstation.
.. code-block:: none
% wget https://get.helm.sh/helm-v2.13.1-linux-amd64.tar.gz
% tar xvf helm-v2.13.1-linux-amd64.tar.gz
% sudo cp linux-amd64/helm /usr/local/bin
Verify that :command:`helm` is installed correctly.
.. code-block:: none
% helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
#. Set the namespace for which you want Helm v2 access.
.. code-block:: none
~(keystone_admin)]$ NAMESPACE=default
#. Set up accounts, roles and bindings.
#. Set up accounts, roles and bindings for Tiller (Helm v2 cluster access).
#. Execute the following commands.
@ -94,7 +191,8 @@ remotely.
--clusterrole tiller --serviceaccount ${NAMESPACE}:tiller
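
A typical sequence for this step, ending with the binding shown above, looks
like the following \(a sketch only; the verb and resource lists are
illustrative assumptions, not the verbatim procedure\):

.. code-block:: none

~(keystone_admin)]$ kubectl create serviceaccount --namespace ${NAMESPACE} tiller
~(keystone_admin)]$ kubectl create clusterrole tiller --verb=get,list,watch,create,update,patch,delete --resource=pods,deployments,configmaps,secrets,services
~(keystone_admin)]$ kubectl create rolebinding tiller --namespace ${NAMESPACE} \
--clusterrole tiller --serviceaccount ${NAMESPACE}:tiller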
#. Initialize the Helm account.
#. Initialize Helm v2 access with the :command:`helm init` command to start
Tiller in the specified NAMESPACE with the specified |RBAC| credentials.
.. code-block:: none
@ -133,7 +231,7 @@ remotely.
.. rubric:: |result|
You can now use the private Tiller server remotely or locally by specifying
You can now use the private Tiller server remotely by specifying
the ``--tiller-namespace default`` option on all helm CLI commands. For
example:
@ -142,19 +240,6 @@ example:
helm version --tiller-namespace default
helm install --name wordpress stable/wordpress --tiller-namespace default
.. note::
If you are using container-backed helm CLI and client \(method 1\), then
you must change directory to <$HOME>/remote\_cli\_wd and include the
following option on all helm commands:
.. code-block:: none
--home "./.helm"
.. note::
Use the remote Windows Active Directory server for authentication of
remote :command:`kubectl` commands.
.. seealso::
:ref:`Configure Container-backed Remote CLIs and Clients

View File

@ -75,7 +75,6 @@ Access the System
install-the-kubernetes-dashboard
security-rest-api-access
connect-to-container-registries-through-a-firewall-or-proxy
using-container-backed-remote-clis-and-clients
***************************
Manage Non-Admin Type Users

View File

@ -7,5 +7,6 @@ Remote CLI Access
configure-remote-cli-access
security-configure-container-backed-remote-clis-and-clients
using-container-backed-remote-clis-and-clients
security-install-kubectl-and-helm-clients-directly-on-a-host
configure-remote-helm-client-for-non-admin-users

View File

@ -52,6 +52,8 @@ The following procedure shows how to configure the Container-backed Remote CLIs
and Clients for an admin user with cluster-admin clusterrole. If using a
non-admin user such as one with privileges only within a private namespace,
additional configuration is required in order to use :command:`helm`.
The following procedure shows how to configure the Container-backed Remote
CLIs and Clients for an admin user with cluster-admin clusterrole.
.. rubric:: |proc|
@ -150,7 +152,12 @@ additional configuration is required in order to use :command:`helm`.
OAM_IP="[${OAM_IP}]"
fi
#. Change the permissions so that the file is readable.
.. code-block:: none
~(keystone_admin)]$ sudo chown sysadmin:sys_protected ${OUTPUT_FILE}
sudo chmod 644 ${OUTPUT_FILE}
#. Generate the admin-kubeconfig file.
@ -196,11 +203,6 @@ additional configuration is required in order to use :command:`helm`.
convenience, this example assumes that it is copied to the location of
the extracted tarball.
.. note::
Ensure that the admin-kubeconfig file has 666 permissions after copying
the file to the remote workstation, otherwise, use the following
command to change permissions, :command:`chmod 666 temp\_kubeconfig`.
#. On the remote workstation, configure remote CLI/client access.
This step will also generate a remote CLI/client RC file.
@ -234,8 +236,9 @@ additional configuration is required in order to use :command:`helm`.
rmclients:stx.4.0-v1.3.0
If you specify repositories that require authentication, as shown
above, you must first remember to perform a :command:`docker login` to
that repository before using remote |CLIs| for the first time.
above, you must first perform a :command:`docker login` to that
repository before using remote |CLIs|. WRS |AWS| ECR credentials or a
|CA| certificate is required.
The options for configure\_client.sh are:
@ -329,6 +332,6 @@ See :ref:`Using Container-backed Remote CLIs and Clients <using-container-backed
:ref:`Install Kubectl and Helm Clients Directly on a Host
<security-install-kubectl-and-helm-clients-directly-on-a-host>`
:ref:`Configure Remote Helm Client for Non-Admin Users
:ref:`Configure Remote Helm v2 Client
<configure-remote-helm-client-for-non-admin-users>`

View File

@ -61,13 +61,14 @@ configuration is required in order to use :command:`helm`.
.. code-block:: none
% kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
~(keystone_admin)]$ TOKEN_DATA=$(kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') | grep "token:" | awk '{print $2}')
#. On the workstation, install the :command:`kubectl` client on an Ubuntu
host by taking the following actions on the remote Ubuntu system.
#. On a remote workstation, install the :command:`kubectl` client. Go to the
following link: `https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/
<https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/>`__.
#. Install the :command:`kubectl` client CLI.
#. Install the :command:`kubectl` client CLI \(for example, on an Ubuntu host\).
.. code-block:: none
@ -93,9 +94,15 @@ configuration is required in order to use :command:`helm`.
If you did not specify a **k8s\_root\_ca\_cert** at install
time, then specify insecure-skip-tls-verify, as shown below.
The following example configures the default ~/.kube/config. See the
following reference:
`https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
<https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/>`__.
You need to obtain a floating |OAM| IP.
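
For example, the floating |OAM| IP can be exported first \(the address
placeholder is illustrative\):

.. code-block:: none

% export OAM_IP=<oam-floating-IP>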
.. code-block:: none
% kubectl config set-cluster mycluster --server=https://<oam-floating-IP>:6443 \
% kubectl config set-cluster mycluster --server=https://${OAM_IP}:6443 \
--insecure-skip-tls-verify
% kubectl config set-credentials admin-user@mycluster --token=$TOKEN_DATA
% kubectl config set-context admin-user@mycluster --cluster=mycluster \
@ -119,12 +126,15 @@ configuration is required in order to use :command:`helm`.
#. On the workstation, install the :command:`helm` client on an Ubuntu
host by taking the following actions on the remote Ubuntu system.
#. Install :command:`helm`.
#. Install :command:`helm`. See the following reference:
`https://helm.sh/docs/intro/install/
<https://helm.sh/docs/intro/install/>`__. Helm accesses the Kubernetes
cluster as configured in the previous step, using the default ~/.kube/config.
.. code-block:: none
% wget https://get.helm.sh/helm-v2.13.1-linux-amd64.tar.gz
% tar xvf helm-v2.13.1-linux-amd64.tar.gz
% wget https://get.helm.sh/helm-v3.2.1-linux-amd64.tar.gz
% tar xvf helm-v3.2.1-linux-amd64.tar.gz
% sudo cp linux-amd64/helm /usr/local/bin
@ -133,8 +143,17 @@ configuration is required in order to use :command:`helm`.
.. code-block:: none
% helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
version.BuildInfo{Version:"v3.2.1", GitCommit:"fe51cd1e31e6a202cba7dead9552a6d418ded79a", GitTreeState:"clean", GoVersion:"go1.13.10"}
#. Run the following commands:
.. code-block:: none
% helm repo add bitnami https://charts.bitnami.com/bitnami
% helm repo update
% helm repo list
% helm search repo
% helm install wordpress bitnami/wordpress
.. seealso::
@ -144,6 +163,6 @@ configuration is required in order to use :command:`helm`.
:ref:`Using Container-backed Remote CLIs and Clients
<using-container-backed-remote-clis-and-clients>`
:ref:`Configure Remote Helm Client for Non-Admin Users
:ref:`Configure Remote Helm v2 Client
<configure-remote-helm-client-for-non-admin-users>`

View File

@ -16,8 +16,9 @@ variables and aliases for the remote |CLI| commands.
- Consider adding the following command to your .login or shell rc file, such
that your shells will automatically be initialized with the environment
variables and aliases for the remote |CLI| commands. Otherwise, execute it
before proceeding:
variables and aliases for the remote |CLI| commands.
Otherwise, execute it before proceeding:
.. code-block:: none
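# A sketch of the initialization command; the script name
# remote_client_platform.sh is an assumption.
source remote_client_platform.sh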
@ -44,7 +45,6 @@ variables and aliases for the remote |CLI| commands.
.. code-block:: none
Please enter your OpenStack Password for project admin as user admin:
root@myclient:/home/user/remote_cli_wd# system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
@ -83,8 +83,8 @@ variables and aliases for the remote |CLI| commands.
In most cases, the remote |CLI| will detect and handle these commands
correctly. If you encounter cases that are not handled correctly, you
can force-enable or disable the shell options using the <FORCE\_SHELL>
or <FORCE\_NO\_SHELL> variables before the command.
can force-enable or disable the shell options using the <FORCE\_SHELL=true>
or <FORCE\_NO\_SHELL=true> variables before the command.
For example:
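
.. code-block:: none

# illustrative invocation; FORCE_SHELL=true simply prefixes the command
root@myclient:/home/user/remote_cli_wd# FORCE_SHELL=true system application-list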
@ -110,37 +110,26 @@ variables and aliases for the remote |CLI| commands.
root@myclient:/home/user/remote_cli_wd# kubectl -n kube-system delete -f test.yml
pod/test-pod deleted
- Do the following to use helm.
- For Helm commands:
.. code-block:: none
% cd $HOME/remote_cli_wd
.. note::
For non-admin users, additional configuration is required first as
discussed in :ref:`Configuring Remote Helm Client for Non-Admin Users
<configure-remote-helm-client-for-non-admin-users>`.
.. note::
When using helm, any command that requires access to a helm repository
\(managed locally\) will require that you be in the
When using helm, any command that requires access to a helm
repository \(managed locally\) will require that you be in the
$HOME/remote\_cli\_wd directory and use the --home ./.helm option.
#. Do the initial setup of the helm client.
.. note::
This command assumes you are using Helm v2.
For the host local installation, it requires the user's $HOME and
ends up in $HOME/.config and $HOME/.cache/helm.
.. code-block:: none
% cd $HOME/remote_cli_wd
% helm init --client-only --home "./.helm"
#. Run a helm command.
.. code-block:: none
% cd $HOME/remote_cli_wd
% helm list
% helm install --name wordpress stable/wordpress --home "./.helm"
% helm --home ./.helm repo add bitnami https://charts.bitnami.com/bitnami
% helm --home ./.helm repo update
% helm --home ./.helm repo list
% helm --home ./.helm search repo
% helm --home ./.helm install wordpress bitnami/wordpress
**Related information**
@ -153,6 +142,6 @@ variables and aliases for the remote |CLI| commands.
:ref:`Installing Kubectl and Helm Clients Directly on a Host
<security-install-kubectl-and-helm-clients-directly-on-a-host>`
:ref:`Configuring Remote Helm Client for Non-Admin Users
:ref:`Configure Remote Helm v2 Client
<configure-remote-helm-client-for-non-admin-users>`

View File

@ -6,8 +6,8 @@
Helm Package Manager
====================
|prod-long| supports Helm with Tiller, the Kubernetes package manager that can
be used to manage the lifecycle of applications within the Kubernetes cluster.
|prod-long| supports the Helm v3 package manager for Kubernetes, which can
be used to securely manage the lifecycle of applications within the Kubernetes cluster.
.. rubric:: |context|
@ -17,26 +17,12 @@ your Kubernetes applications using Helm charts. Helm charts are defined with a
default set of values that describe the behavior of the service installed
within the Kubernetes cluster.
Upon system installation, the official curated helm chart repository is added
to the local helm repo list; in addition, a number of local repositories
\(containing optional |prod-long| packages\) are created and added to the helm
repo list. For more information, see `https://github.com/helm/charts
<https://github.com/helm/charts>`__.
Use the following command to list the helm repositories:
.. code-block:: none
~(keystone_admin)]$ helm repo list
NAME URL
stable https://kubernetes-charts.storage.googleapis.com
local http://127.0.0.1:8879/charts
starlingx http://127.0.0.1:8080/helm_charts/starlingx
stx-platform http://127.0.0.1:8080/helm_charts/stx-platform
|prod| recommends that non-admin end-users install a Helm v3 client on a remote
workstation to manage their Kubernetes applications.
For more information on Helm, see the documentation at `https://helm.sh/docs/
<https://helm.sh/docs/>`__.
**Tiller** is a component of Helm. Tiller interacts directly with the
Kubernetes API server to install, upgrade, query, and remove Kubernetes
resources.
For more information on how to configure and use Helm both locally and remotely, see :ref:`Configure Local CLI Access <configure-local-cli-access>`,
and :ref:`Configure Remote CLI Access <configure-remote-cli-access>`.