
Configure Container-backed Remote CLIs and Clients

The platform command lines can be accessed from remote computers running Linux, macOS, and Windows.

This functionality is made available using a Docker container with pre-installed CLIs and clients. The container image is pulled as required by the remote CLI/client configuration scripts.

You must have a remote or local platform username and password to get the Kubernetes authentication token, a Keystone username and password to log in to Horizon, and the IP address and, optionally, the Kubernetes certificate of the target environment.

You must have Docker installed on the remote systems you connect from. For more information on installing Docker, see https://docs.docker.com/install/. For Windows remote clients, Docker is only supported on Windows 10.

Note

You must be able to run docker commands using one of the following options:

  • Running the scripts using sudo
  • Adding the Linux user to the docker group

For more information, see https://docs.docker.com/engine/install/linux-postinstall/.
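
For example, on a Linux workstation you can either prefix the scripts with sudo or add your user to the docker group. The commands below are a minimal sketch of the docker group approach, followed by a quick check that docker commands work without sudo:

  $ sudo usermod -aG docker $USER
  $ newgrp docker
  $ docker run --rm hello-world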

For Windows remote clients, you must run the following commands from a Cygwin terminal. See https://www.cygwin.com/ for more information about the Cygwin project.

For Windows remote clients, you must also have winpty installed. Download the latest release tarball for Cygwin from https://github.com/rprichard/winpty/releases. After downloading the tarball, extract it to any location and update the Windows PATH variable to include the bin folder of the extracted winpty directory.
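
As a minimal sketch, assuming winpty was extracted to C:\winpty (a hypothetical location), you could make it available to the Cygwin shell as follows:

  $ export PATH="$PATH:/cygdrive/c/winpty/bin"   # path assumes winpty was extracted to C:\winpty
  $ which winpty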

The following procedure shows how to configure the container-backed remote CLIs and clients for an admin user with cluster-admin clusterrole. If using a non-admin user, such as one with privileges only within a private namespace, additional configuration is required in order to use helm.

  1. On the active controller, log in through SSH or the local console as the sysadmin user and perform the actions listed below.
    1. Configure Kubernetes permissions for users.

      1. Source the platform environment.

        $ source /etc/platform/openrc
        ~(keystone_admin)]$
      2. Create a user rolebinding file. You can customize the name of the user. Alternatively, to use group rolebinding and user group membership for authorization, see Configure Users, Groups, and Authorization <configure-users-groups-and-authorization>

        ~(keystone_admin)]$ MYUSER="admin-user"
        ~(keystone_admin)]$ cat <<EOF > admin-user-rolebinding.yaml
        kind: ClusterRoleBinding
        apiVersion: rbac.authorization.k8s.io/v1
        metadata:
          name: ${MYUSER}-rolebinding
        roleRef:
          apiGroup: rbac.authorization.k8s.io
          kind: ClusterRole
          name: cluster-admin
        subjects:
        - apiGroup: rbac.authorization.k8s.io
          kind: User
          name: ${MYUSER}
        EOF
      3. Apply the rolebinding.

        ~(keystone_admin)]$ kubectl apply -f admin-user-rolebinding.yaml
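
        As an optional check, you can confirm that the rolebinding was created as expected:

        ~(keystone_admin)]$ kubectl get clusterrolebinding ${MYUSER}-rolebinding -o yaml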
    2. Note the IP address to be used later in the creation of the kubeconfig file.

      ~(keystone_admin)]$ system oam-show | grep oam_floating_ip | awk '{print $4}'

      Use the command below in All-in-One Simplex environments, which use <oam_ip> instead of <oam_floating_ip>.

      ~(keystone_admin)]$ system oam-show | grep oam_ip | awk '{print $4}'
    3. Copy the public certificate of the Root CA that anchors the REST API and Web Server's certificate to the remote workstation.

      If the certificate in your system is anchored by the platform's issuer (system-local-ca), you can do this using the following commands:

      ~(keystone_admin)]$ kubectl get secret system-local-ca -n cert-manager -o=jsonpath='{.data.ca\.crt}' | base64 --decode > /home/sysadmin/stx.ca.crt
      ~(keystone_admin)]$ scp /home/sysadmin/stx.ca.crt <remote_workstation_user>@<remote_workstation_IP>:~/stx.ca.crt
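
      As an optional sanity check, you can inspect the copied certificate on the remote workstation with openssl to confirm its subject, issuer, and validity dates:

      $ openssl x509 -in ~/stx.ca.crt -noout -subject -issuer -dates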
    4. Optional: Copy the Kubernetes certificate /etc/kubernetes/pki/ca.crt from the active controller to the remote workstation. This step is strongly recommended, but it is still possible to connect to the Kubernetes cluster without this certificate.

      ~(keystone_admin)]$ scp /etc/kubernetes/pki/ca.crt <remote_workstation_user>@<remote_workstation_IP>:~/k8s-ca.crt
  2. In the remote workstation, do the actions listed below.
    1. Create a working directory that will be mounted by the container implementing the remote CLIs and clients.

      See the description of the configure_client.sh -w option below for more details.

      $ mkdir -p $HOME/remote_cli_wd
    2. Copy the remote client tarball file from the build servers to the remote workstation, and extract its content.

      • The tarball is available from the StarlingX Public build servers.
      • You can extract the tarball's contents anywhere on your client system.

      $ cd $HOME
      $ tar xvf -remote-clients-<version>.tgz

    3. Download the user/tenant openrc file from the Horizon Web interface to the remote workstation.

      1. Log in to Horizon as the user and tenant that you want to configure remote access for.

        In this example, the 'admin' user in the 'admin' tenant.

      2. Navigate to Project > API Access > Download Openstack RC file.

      3. Select Openstack RC file.

        The file admin-openrc.sh downloads. Copy this file to the location of the extracted tarball.

      Note

      For a Distributed Cloud system, navigate to Project > Central Cloud Regions > RegionOne and download the OpenStack RC file.

    4. Add the following line to the bottom of the admin-openrc.sh file, where the value is the filename of the public certificate of the Root CA that anchors the REST API and Web Server's certificate on the remote workstation:

      export OS_CACERT="stx.ca.crt"

      Copy admin-openrc.sh to the remote workstation.
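
      As a minimal sketch, assuming admin-openrc.sh and stx.ca.crt sit in the same directory on the remote workstation, the line can be appended as follows:

      $ echo 'export OS_CACERT="stx.ca.crt"' >> admin-openrc.sh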

    5. Create an empty admin-kubeconfig file on the remote workstation using the following command.

      $ touch admin-kubeconfig
    6. Configure remote CLI/client access.

      This step will also generate a remote CLI/client RC file.

      1. Change to the location of the extracted tarball.

        $ cd $HOME/-remote-clients-<version>/

      2. Run the configure_client.sh script.

        $ ./configure_client.sh -t platform -r admin-openrc.sh -k admin-kubeconfig -w $HOME/remote_cli_wd -p docker.io/starlingx/stx-platformclients:

        If you specify repositories that require authentication, as shown above, you must first perform a docker login to that repository before using the remote CLIs. WRS ECR credentials or a certificate is required.
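
        For example, a docker login to the default registry might look like the following, where the username is a placeholder:

        $ docker login docker.io -u <registry_username>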

        The options for configure_client.sh are:

        -t

        The type of client configuration. The options are platform (for platform CLIs and clients) and openstack (for OpenStack application CLIs and clients).

        The default value is platform.

        -r

        The user/tenant RC file to use for openstack CLI commands.

        The default value is admin-openrc.sh.

        -k

        The Kubernetes configuration file to use for kubectl and helm CLI commands.

        The default value is temp-kubeconfig.

        -o

        The remote CLI/client RC file generated by this script.

        This RC file needs to be sourced in the shell to set up the required environment variables and aliases before running any remote CLI commands.

        For the platform client setup, the default is remote_client_platform.sh. For the openstack application client setup, the default is remote_client_app.sh.

        -w

        The working directory that will be mounted by the container implementing the remote CLIs. When using the remote CLIs, any files passed as arguments to the remote CLI commands need to be in this directory in order for the container to access them (see the usage sketch after this option list). The default value is the directory from which the configure_client.sh command was run.

        -p

        Override the container image for the platform CLIs and clients.

        By default, the platform CLIs and clients container image is pulled from docker.io/starlingx/stx-platformclients.

        For example, to use the container images from the WRS AWS ECR:

        $ ./configure_client.sh -t platform -r admin-openrc.sh -k admin-kubeconfig -w $HOME/remote_cli_wd -p docker.io/starlingx/stx-platformclients:

        If you specify repositories that require authentication, you must first perform a docker login to that repository before using the remote CLIs.

        -a

        Override the OpenStack application image.

        By default, the OpenStack application CLIs and clients container image is pulled from docker.io/starlingx/stx-openstackclients.

        The configure_client.sh command will generate a remote_client_platform.sh RC file. This RC file needs to be sourced in the shell to set up the required environment variables and aliases before any remote CLI commands can be run.
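
        The following is a usage sketch for the -w working directory, assuming the remaining steps below (copying the RC file into the working directory and updating the kubeconfig) are complete; the file name test-pod.yaml is a placeholder used only to show that files passed to remote commands must reside in the mounted directory:

        $ cp ~/test-pod.yaml $HOME/remote_cli_wd/   # test-pod.yaml is a placeholder file
        $ cd $HOME/remote_cli_wd
        $ source remote_client_platform.sh
        $ kubectl apply -f test-pod.yaml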

    7. Copy the file remote_client_platform.sh to $HOME/remote_cli_wd.

    8. Update the contents in the admin-kubeconfig file using the kubectl command from the container. Use the IP address and the Kubernetes certificate acquired in the steps above. If the IP is IPv6, use the IP enclosed in brackets (example: "[fd00::a14:803]").

      $ cd $HOME/remote_cli_wd
      $ source remote_client_platform.sh
      $ kubectl config set-cluster wrcpcluster --server=https://<OAM_IP>:6443
      $ kubectl config set clusters.wrcpcluster.certificate-authority-data $(base64 -w0 k8s-ca.crt)
      $ kubectl config set-context ${MYUSER}@wrcpcluster --cluster=wrcpcluster --user ${MYUSER}
      $ kubectl config use-context ${MYUSER}@wrcpcluster

      If you don't have the Kubernetes certificate, execute the following commands instead.

      $ cd $HOME/remote_cli_wd
      $ source remote_client_platform.sh
      $ kubectl config set-cluster wrcpcluster --server=https://<OAM_IP>:6443 --insecure-skip-tls-verify
      $ kubectl config set-context ${MYUSER}@wrcpcluster --cluster=wrcpcluster --user ${MYUSER}
      $ kubectl config use-context ${MYUSER}@wrcpcluster
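
      As an optional check, you can confirm that the resulting context reaches the cluster:

      $ kubectl config current-context
      $ kubectl get nodes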

After configuring the platform's container-backed remote CLIs/clients, the remote platform CLIs can be used in any shell after sourcing the generated remote CLI/client RC file. This RC file sets up the required environment variables and aliases for the remote commands.

Note

Consider adding this command to your .login or shell rc file, such that your shells will automatically be initialized with the environment variables and aliases for the remote commands.
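
For example, a typical addition to ~/.bashrc might look like the following, assuming the RC file was copied to the working directory as in the step above:

  source $HOME/remote_cli_wd/remote_client_platform.sh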

See Using Container-backed Remote CLIs and Clients <using-container-backed-remote-clis-and-clients> for details.

Related information

Using Container-backed Remote CLIs and Clients <using-container-backed-remote-clis-and-clients>

Install Kubectl and Helm Clients Directly on a Host <security-install-kubectl-and-helm-clients-directly-on-a-host>