
Install Kubectl and Helm Clients Directly on a Host
You can use kubectl and helm to interact with a controller from a remote system. Commands that reference local files or require a shell are more easily used from clients running directly on a remote workstation.
Complete the following steps to install kubectl and helm on a remote system.
The following procedure shows how to configure the kubectl and helm clients directly on a remote host, for an admin user with the cluster-admin cluster role. For a non-admin user, such as one with only role privileges within a private namespace, the procedure is the same; however, additional configuration is required in order to use helm.
- On the controller, if an admin-user service account is not already available, create one.
Create the admin-user service account in kube-system namespace and bind the cluster-admin ClusterRoleBinding to this user.
% cat <<EOF > admin-login.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF
% kubectl apply -f admin-login.yaml
Retrieve the secret token.
~(keystone_admin)]$ TOKEN_DATA=$(kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') | grep "token:" | awk '{print $2}')
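The TOKEN_DATA pipeline above is plain text extraction: grep selects the "token:" line from the kubectl describe secret output, and awk prints its value. A minimal offline sketch of that extraction step, using made-up sample output in place of a real secret:

```shell
# Offline illustration of the token-extraction step only (no cluster needed).
# The sample text below stands in for `kubectl -n kube-system describe secret`
# output; the real token is whatever follows "token:" in that output.
sample='Name:   admin-user-token-abcde
Type:   kubernetes.io/service-account-token
token: eyJhbGciOiJSUzI1NiJ9.sample-payload'
TOKEN_DATA=$(printf '%s\n' "$sample" | grep "token:" | awk '{print $2}')
echo "$TOKEN_DATA"    # prints the extracted token value
```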
- On a remote workstation, install the kubectl client. Go to the following link: https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/. Install the kubectl client CLI (for example, on an Ubuntu host).
% sudo apt-get update
% sudo apt-get install -y apt-transport-https
% curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | \
  sudo apt-key add
% echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | \
  sudo tee -a /etc/apt/sources.list.d/kubernetes.list
% sudo apt-get update
% sudo apt-get install -y kubectl
Set up the local configuration and context.
Note
In order for your remote host to trust the certificate used by the Kubernetes API, you must ensure that the k8s_root_ca_cert specified at install time is a CA certificate trusted by your host. Follow the instructions for adding a trusted CA certificate for the operating system distribution of your particular host.
If you did not specify a k8s_root_ca_cert at install time, specify --insecure-skip-tls-verify, as shown below.
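As a sketch of the trusted-CA step on a Debian/Ubuntu workstation (the file name k8s-root-ca.crt is an assumption; adjust the paths and commands for other distributions), assuming the k8s_root_ca_cert PEM file has been copied to the workstation:

```shell
# Assumed file name: k8s-root-ca.crt, a copy of the PEM CA certificate used
# as k8s_root_ca_cert at install time. Debian/Ubuntu convention shown; the
# file must have a .crt extension for update-ca-certificates to pick it up.
sudo cp k8s-root-ca.crt /usr/local/share/ca-certificates/k8s-root-ca.crt
sudo update-ca-certificates   # regenerates the trusted bundle in /etc/ssl/certs
```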
The following example configures the default ~/.kube/config. See the following reference: https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/. You need the OAM floating IP address of the cluster to set ${OAM_IP}.
% kubectl config set-cluster mycluster --server=https://${OAM_IP}:6443 \
  --insecure-skip-tls-verify
% kubectl config set-credentials admin-user@mycluster --token=$TOKEN_DATA
% kubectl config set-context admin-user@mycluster --cluster=mycluster \
  --user admin-user@mycluster --namespace=default
% kubectl config use-context admin-user@mycluster
$TOKEN_DATA is the token retrieved in step 1.
Test remote kubectl access.
% kubectl get nodes -o wide
NAME           STATUS   ROLES    AGE    VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE ...
controller-0   Ready    master   15h    v1.12.3   192.168.204.3     <none>        CentOS L ...
controller-1   Ready    master   129m   v1.12.3   192.168.204.4     <none>        CentOS L ...
worker-0       Ready    <none>   99m    v1.12.3   192.168.204.201   <none>        CentOS L ...
worker-1       Ready    <none>   99m    v1.12.3   192.168.204.202   <none>        CentOS L ...
%
- On the workstation, install the helm client by taking the following actions on the remote Ubuntu system. See the following reference: https://helm.sh/docs/intro/install/. Helm accesses the Kubernetes cluster as configured in the previous step, using the default ~/.kube/config.
% wget https://get.helm.sh/helm-v3.2.1-linux-amd64.tar.gz
% tar xvf helm-v3.2.1-linux-amd64.tar.gz
% sudo cp linux-amd64/helm /usr/local/bin
Verify that helm installed correctly.
% helm version
version.BuildInfo{Version:"v3.2.1", GitCommit:"fe51cd1e31e6a202cba7dead9552a6d418ded79a", GitTreeState:"clean", GoVersion:"go1.13.10"}
Run the following commands:
% helm repo add bitnami https://charts.bitnami.com/bitnami
% helm repo update
% helm repo list
% helm search repo
% helm install wordpress bitnami/wordpress
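The wordpress release installed above serves only as a connectivity check; once the commands succeed, the test release can be removed. A small cleanup sketch using standard Helm v3 commands (run against the same cluster context):

```shell
# Remove the test release and confirm it no longer appears in the list.
helm uninstall wordpress
helm list
```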
Configure Container-backed Remote CLIs and Clients
<security-configure-container-backed-remote-clis-and-clients>
Using Container-backed Remote CLIs and Clients
<using-container-backed-remote-clis-and-clients>
Configure Remote Helm v2 Client
<configure-remote-helm-client-for-non-admin-users>