
Configure Remote Helm v2 Client

Helm v3 is recommended for installing and managing your containerized applications. However, Helm v2 may be required, for example, if a containerized application supports only a Helm v2 helm chart.

Helm v2 is supported only remotely, and only with the kubectl and Helm v2 clients configured directly on the remote host workstation. In addition to installing the Helm v2 clients, users must also create their own Tiller server, in a namespace to which they have access, with the required capabilities and, optionally, TLS protection.

Complete the following steps to configure Helm v2 for managing containerized applications that use a Helm v2 helm chart.

  1. On the controller, create an admin-user service account if this is not already available.

    1. Create the admin-user service account in the kube-system namespace and bind it to the cluster-admin ClusterRole with a ClusterRoleBinding.

      % cat <<EOF > admin-login.yaml
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: admin-user
        namespace: kube-system
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: admin-user
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: cluster-admin
      subjects:
      - kind: ServiceAccount
        name: admin-user
        namespace: kube-system
      EOF
      % kubectl apply -f admin-login.yaml
    2. Retrieve the secret token.

      % kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
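
      If you need only the decoded token string, for example to copy it to the remote workstation, one possible way to extract it is shown below; the SECRET_NAME variable is used here only for illustration.

      % SECRET_NAME=$(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
      % kubectl -n kube-system get secret $SECRET_NAME -o jsonpath='{.data.token}' | base64 --decode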
  2. On the workstation, install the kubectl client if it is not already available. The following steps show how to install it on a remote Ubuntu system.

    1. Install the kubectl client CLI.

      % sudo apt-get update
      % sudo apt-get install -y apt-transport-https
      % curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | \
      sudo apt-key add -
      % echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | \
      sudo tee -a /etc/apt/sources.list.d/kubernetes.list
      % sudo apt-get update
      % sudo apt-get install -y kubectl
    2. Set up the local configuration and context.

      Note

      In order for your remote host to trust the certificate used by the Kubernetes API, you must ensure that the k8s_root_ca_cert specified at install time is a CA certificate trusted by your host. Follow the instructions for adding a trusted CA certificate for the operating system distribution of your particular host (an Ubuntu example is shown at the end of this step).

      If you did not specify a k8s_root_ca_cert at install time, then specify insecure-skip-tls-verify, as shown below.

      % kubectl config set-cluster mycluster --server=https://<oam-floating-IP>:6443 \
      --insecure-skip-tls-verify
      % kubectl config set-credentials admin-user@mycluster --token=$TOKEN_DATA
      % kubectl config set-context admin-user@mycluster --cluster=mycluster \
      --user admin-user@mycluster --namespace=default
      % kubectl config use-context admin-user@mycluster

      where $TOKEN_DATA is the secret token retrieved in step 1.
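
      As noted above, if you specified a k8s_root_ca_cert at install time, that CA certificate must be trusted by this host. On Ubuntu, for example, this can typically be done as shown below, assuming the CA certificate has been copied to the workstation as k8s-root-ca.crt (the file name is illustrative).

      % sudo cp k8s-root-ca.crt /usr/local/share/ca-certificates/k8s-root-ca.crt
      % sudo update-ca-certificates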

    3. Test remote kubectl access.

      % kubectl get nodes -o wide
      NAME           STATUS   ROLES    AGE    VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE ...
      controller-0   Ready    master   15h    v1.12.3   192.168.204.3     <none>        CentOS L ...
      controller-1   Ready    master   129m   v1.12.3   192.168.204.4     <none>        CentOS L ...
      worker-0       Ready    <none>   99m    v1.12.3   192.168.204.201   <none>        CentOS L ...
      worker-1       Ready    <none>   99m    v1.12.3   192.168.204.202   <none>        CentOS L ...
      %
  3. Install the Helm v2 client on the remote workstation.

    % wget https://get.helm.sh/helm-v2.13.1-linux-amd64.tar.gz
    % tar xvf helm-v2.13.1-linux-amd64.tar.gz
    % sudo cp linux-amd64/helm /usr/local/bin

    Verify that helm is installed correctly.

    % helm version
    Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
    Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
  4. Set the namespace for which you want Helm v2 access.

    ~(keystone_admin)]$ NAMESPACE=default
  5. Set up accounts, roles and bindings for Tiller (Helm v2 cluster access).

    1. Execute the following commands.

      Note

      These commands can be run remotely by a non-admin user who has access to the default namespace.

      ~(keystone_admin)]$ cat <<EOF > default-tiller-sa.yaml
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: tiller
        namespace: default
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: Role
      metadata:
        name: tiller
        namespace: default
      rules:
      - apiGroups: ["*"]
        resources: ["*"]
        verbs: ["*"]
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: tiller
        namespace: default
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: Role
        name: tiller
      subjects:
      - kind: ServiceAccount
        name: tiller
        namespace: default
      EOF
      ~(keystone_admin)]$ kubectl apply -f default-tiller-sa.yaml
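
      You can optionally confirm that the service account, role, and role binding were created, for example:

      ~(keystone_admin)]$ kubectl -n ${NAMESPACE} get serviceaccount,role,rolebinding tiller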
    2. Execute the following commands as an admin-level user.

      ~(keystone_admin)]$ kubectl create clusterrole tiller --verb get --resource namespaces
      ~(keystone_admin)]$ kubectl create clusterrolebinding tiller --clusterrole tiller --serviceaccount ${NAMESPACE}:tiller
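
      To confirm that the tiller service account has the expected access, an optional check such as the following can be used:

      ~(keystone_admin)]$ kubectl auth can-i '*' '*' --namespace ${NAMESPACE} --as system:serviceaccount:${NAMESPACE}:tiller
      ~(keystone_admin)]$ kubectl auth can-i get namespaces --as system:serviceaccount:${NAMESPACE}:tiller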
  6. Initialize Helm v2 access with the helm init command to start Tiller in the specified NAMESPACE with the specified RBAC credentials.

    ~(keystone_admin)]$ helm init --service-account=tiller --tiller-namespace=$NAMESPACE \
    --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | \
    sed 's@ replicas: 1@ replicas: 1\n \ selector: {"matchLabels": {"app": "helm", "name": "tiller"}}@' \
    > helm-init.yaml
    ~(keystone_admin)]$ kubectl apply -f helm-init.yaml
    ~(keystone_admin)]$ helm init --client-only --home "./.helm"

    Note

    Ensure that each of the patterns between single quotes in the above sed commands is on a single line when run from your command-line interface.

    Note

    Add the following options if you are enabling TLS for this Tiller:

    --tiller-tls

    Enable TLS on Tiller.

    --tiller-tls-cert <certificate_file>

    The public key/certificate for Tiller (signed by --tls-ca-cert).

    --tiller-tls-key <key_file>

    The private key for Tiller.

    --tiller-tls-verify

    Enable authentication of client certificates (i.e. validate they are signed by --tls-ca-cert).

    --tls-ca-cert <certificate_file>

    The public certificate of the CA used for signing Tiller server and Helm client certificates.
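
    After applying helm-init.yaml, you can optionally confirm that the Tiller pod is running before using Helm; the label selector below assumes the standard Tiller deployment labels.

    ~(keystone_admin)]$ kubectl get pods -n ${NAMESPACE} -l app=helm,name=tiller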

You can now use the private Tiller server remotely by specifying the --tiller-namespace default option on all helm CLI commands. For example:

helm version --tiller-namespace default
helm install --name wordpress stable/wordpress --tiller-namespace default
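
If you initialized the Helm client with a non-default home directory, as in the helm init --client-only --home "./.helm" example above, also point the client at that directory, for example by passing --home (or by setting the HELM_HOME environment variable):

helm version --tiller-namespace default --home "./.helm"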

Configure Container-backed Remote CLIs and Clients <security-configure-container-backed-remote-clis-and-clients>

Using Container-backed Remote CLIs and Clients <using-container-backed-remote-clis-and-clients>

Install Kubectl and Helm Clients Directly on a Host <security-install-kubectl-and-helm-clients-directly-on-a-host>