Security guide update

Re-organized topic hierarchy

Tiny edit to restart review workflow.

Squashed with Resolved index.rst conflict commit

Change-Id: I13472792cb19d1e9975ac76c6954d38054d606c5
Signed-off-by: Keane Lim <keane.lim@windriver.com>
Signed-off-by: MCamp859 <maryx.camp@intel.com>
Keane Lim 2020-08-31 11:01:56 -04:00 committed by MCamp859
parent fad2ab40ae
commit 3c5fa979a4
93 changed files with 7055 additions and 2 deletions

@@ -93,6 +93,15 @@ Node Management
node_management/index

--------
Security
--------

.. toctree::
   :maxdepth: 2

   security/index

--------------------
System Configuration
--------------------

@@ -0,0 +1,34 @@
========
Security
========

----------
Kubernetes
----------

|prod-long| security encompasses a broad range of features:

.. _overview-of-starlingx-security-ul-ezc-k5f-p3b:

- |TLS| support on all external interfaces
- Kubernetes service accounts and |RBAC| policies for authentication and
  authorization of the Kubernetes API / CLI / GUI
- Encryption of Kubernetes Secret data at rest
- Keystone authentication and authorization of the StarlingX API / CLI / GUI
- Barbican secure storage of secrets such as BMC user passwords
- Networking policies / firewalls on external APIs
- |UEFI| secure boot
- Signed software updates

.. toctree::
   :maxdepth: 2

   kubernetes/index

@@ -0,0 +1,22 @@
.. ibp1552572465781
.. _about-keystone-accounts:

=======================
About Keystone Accounts
=======================

|prod| uses tenant accounts and user accounts to identify and manage access
to StarlingX resources and to images in the Local Docker Registry.

You can create and manage Keystone projects and users from the web management
interface, the CLI, or the Keystone REST API.

In |prod|, the default authentication backend for Keystone users is the local
SQL Database Identity Service.

.. note::
   All Keystone accounts are subject to system password rules. For
   complete details on password rules, see :ref:`System Account Password
   Rules <starlingx-system-accounts-system-account-password-rules>`.
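
For example, a minimal sketch of creating a project, a user, and a role
assignment from the CLI \(names are illustrative, and the role name may
differ on your system\):

.. code-block:: none

   ~(keystone_admin)$ openstack project create --description "Billing department" billing-dept
   ~(keystone_admin)$ openstack user create --project billing-dept --password-prompt billing-user
   ~(keystone_admin)$ openstack role add --project billing-dept --user billing-user member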

@@ -0,0 +1,115 @@
.. qfk1564403051860
.. _add-a-trusted-ca:
================
Add a Trusted CA
================
Generally, a trusted |CA| certificate needs to be added if |prod| clients on
the hosts will be connecting to server\(s\) secured with SSL whose
certificate is signed by an unknown |CA|.
.. contents::
:local:
:depth: 1
For example, a trusted |CA| certificate is required if your helm charts or
yaml manifest files refer to images stored in a docker registry whose
certificate has been signed by an unknown Certificate Authority.
Trusted |CA| certificates can be added as part of the Ansible Bootstrap
Playbook or by using the StarlingX/system REST API or CLI after installation.
.. _add-a-trusted-ca-section-N1002C-N1001C-N10001:
--------------------------
Ansible Bootstrap Playbook
--------------------------
A trusted |CA| certificate may need to be specified as an override parameter
for the Ansible Bootstrap Playbook; specifically, if the docker registries
specified by the bootstrap overrides file use a certificate signed by an
unknown |CA|. In this case, the ssl\_ca\_cert parameter must be specified in
the Ansible overrides file, /home/sysadmin/localhost.yml, as part of
bootstrap in the installation procedure.
For example:
.. code-block:: none

   ssl_ca_cert: <trusted-ca-bundle-pem-file>
The *ssl\_ca\_cert* value is the absolute path of the file containing the
|CA| certificate\(s\) to trust. The certificate\(s\) must be in |PEM| format
and the file may contain one or more |CA| certificates.
.. _add-a-trusted-ca-section-N10047-N1001C-N10001:
-----------------------------------------------------
StarlingX/System CLI Trusted CA Certificate Install
-----------------------------------------------------
After installation, adding a trusted |CA| to the |prod| system may be required.
This is the case if images stored in a docker registry, whose certificate has
been signed by an unknown Certificate Authority, are referred to by helm
charts and/or yaml manifest files.
The certificate must be in |PEM| file format.
From the command line, run the :command:`certificate-install` command.
.. code-block:: none
~(keystone_admin)$ system certificate-install -m ssl_ca <trusted-ca-bundle-pem-file>
For example:
.. code-block:: none
~(keystone_admin)$ system certificate-install -m ssl_ca external-registry-ca-crt.pem
WARNING: For security reasons, the original certificate,
containing the private key, will be removed,
once the private key is processed.
+-------------+--------------------------------------+
| Property | Value |
+-------------+--------------------------------------+
| uuid | c986249f-b304-4ab4-b88e-14f92e75269d |
| certtype | ssl_ca |
| signature | ssl_ca_14617336624230451058 |
| start_date | 2019-05-22 18:24:41+00:00 |
| expiry_date | 2020-05-21 18:24:41+00:00 |
+-------------+--------------------------------------+
.. note::
   Multiple trusted |CA| certificates can be added with a single install
   command by including multiple |CA| certificates in the |PEM| file.
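
For example, a hedged sketch of building a single |PEM| bundle from two |CA|
certificates and installing it \(file names are illustrative\):

.. code-block:: none

   ~(keystone_admin)$ cat registry-ca.pem corporate-root-ca.pem > trusted-ca-bundle.pem
   ~(keystone_admin)$ system certificate-install -m ssl_ca trusted-ca-bundle.pem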
.. _add-a-trusted-ca-section-phr-jw4-3mb:
-------------------------------------------------------
StarlingX/System CLI Trusted CA Certificate Uninstall
-------------------------------------------------------
To remove a Trusted |CA| Certificate, first list the trusted |CAs| by
running the following command:
.. code-block:: none
~(keystone_admin)$ system certificate-list
where all entries with certtype = ssl\_ca are trusted |CA| certificates.
Then remove a Trusted |CA| Certificate from the list of trusted |CAs| by
running the following command:
.. code-block:: none
~(keystone_admin)$ system certificate-uninstall -m ssl_ca <UUID>
where <UUID> is the UUID of the ssl\_ca certificate to be removed.
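
For example, to remove the trusted |CA| certificate installed earlier
\(using the UUID reported for it by :command:`system certificate-list`\):

.. code-block:: none

   ~(keystone_admin)$ system certificate-uninstall -m ssl_ca c986249f-b304-4ab4-b88e-14f92e75269d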

@@ -0,0 +1,132 @@
.. ler1590089128119
.. _assign-pod-security-policies:
============================
Assign Pod Security Policies
============================
This section describes Pod security policies for **cluster-admin users**,
and **non-cluster-admin users**.
.. contents::
:local:
:depth: 1
.. _assign-pod-security-policies-section-xyl-2vp-bmb:
-------------------
cluster-admin users
-------------------
After enabling |PSP| checking, all users with **cluster-admin** roles can
directly create pods, as they have access to the **privileged** |PSP|.
However, when pods are created through deployments/ReplicaSets/etc., the pods
are validated against the |PSP| policies of the corresponding controller
serviceAccount in the kube-system namespace.
Therefore, for any user \(including cluster-admin\) to create a
deployment/ReplicaSet/etc. in a particular namespace:
.. _assign-pod-security-policies-ul-hsr-1vp-bmb:
- the user must have |RBAC| permissions to create the
deployment/ReplicaSet/etc. in this namespace, and
- the **system:serviceaccounts:kube-system** must be bound to a role with
access to |PSPs| \(for example, one of the system created
**privileged-psp-user** or **restricted-psp-user** roles\) in this
namespace
**cluster-admin users** have |RBAC| permissions for everything, so only the
role binding of a |PSP| role to **system:serviceaccounts:kube-system** for
the target namespace is needed to create a deployment in a particular
namespace. The following example describes the required RoleBinding for a
**cluster-admin user** to create a deployment \(with restricted |PSP|
capabilities\) in the 'default' namespace.
.. code-block:: none
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kube-system-restricted-psp-users
namespace: default
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: restricted-psp-user
subjects:
- kind: Group
name: system:serviceaccounts:kube-system
apiGroup: rbac.authorization.k8s.io
.. _assign-pod-security-policies-section-bm5-vxp-bmb:
-----------------------
non-cluster-admin users
-----------------------
These users have restricted |RBAC| capabilities, and may not have access to
|PSP| policies. They require a new RoleBinding to either the
**privileged-psp-user** role or the **restricted-psp-user** role to create
pods directly. For creating pods through deployments/ReplicaSets/etc., the
target namespace being used will also require a RoleBinding for the
corresponding controller serviceAccounts in kube-system \(or generally
**system:serviceaccounts:kube-system**\).
.. rubric:: |proc|
#. Define the required RoleBinding for the user in the target namespace.
For example, the following RoleBinding assigns the 'restricted' |PSP|
role to dave-user in the billing-dept-ns namespace, from the examples
in :ref:`Enable Pod Security Policy Checking
<enable-pod-security-policy-checking>`.
.. code-block:: none
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: dave-restricted-psp-users
namespace: billing-dept-ns
subjects:
- kind: ServiceAccount
name: dave-user
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: restricted-psp-user
This will enable dave-user to create Pods in billing-dept-ns namespace
subject to the restricted |PSP| policy.
#. Define the required RoleBinding for system:serviceaccounts:kube-system
in the target namespace.
For example, the following RoleBinding assigns the 'restricted' |PSP| to
all kube-system ServiceAccounts operating in billing-dept-ns namespace.
.. code-block:: none
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kube-system-restricted-psp-users
namespace: billing-dept-ns
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: restricted-psp-user
subjects:
- kind: Group
name: system:serviceaccounts:kube-system
apiGroup: rbac.authorization.k8s.io
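
A hedged way to confirm the bindings took effect, assuming the |PSP| itself
is named **restricted** and dave-user's ServiceAccount is defined as above:

.. code-block:: none

   ~(keystone_admin)$ kubectl auth can-i use podsecuritypolicies/restricted \
       --as=system:serviceaccount:kube-system:dave-user -n billing-dept-ns
   yes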

@@ -0,0 +1,25 @@
.. okc1552681506455
.. _authentication-of-software-delivery:
===================================
Authentication of Software Delivery
===================================
|org| signs and authenticates software updates to the |prod| software.
This is done for:
.. _authentication-of-software-delivery-ul-qtp-rbk-vhb:
- Software patches: incremental updates to the system for software fixes or
  security vulnerabilities.

- Software loads: full software loads to upgrade the system to a new
  major release of software.
Both software patches and loads are cryptographically signed as part of
|org| builds and are authenticated before they are loaded on a system via
|prod| REST APIs, CLIs or GUI.

@@ -0,0 +1,126 @@
.. afi1590692698424
.. _centralized-oidc-authentication-setup-for-distributed-cloud:
===========================================================
Centralized OIDC Authentication Setup for Distributed Cloud
===========================================================
In a Distributed Cloud configuration, you can configure |OIDC| authentication
in a distributed or centralized setup.
.. _centralized-oidc-authentication-setup-for-distributed-cloud-section-ugc-xr5-wlb:
-----------------
Distributed Setup
-----------------
For a distributed setup, configure the **kube-apiserver** and
**oidc-auth-apps** independently for each cloud, that is, the
SystemController and all subclouds. For more information, see:
.. _centralized-oidc-authentication-setup-for-distributed-cloud-ul-gjs-ds5-wlb:
- Configure Kubernetes for |OIDC| Token Validation
- :ref:`Configure Kubernetes for OIDC Token Validation while
Bootstrapping the System
<configure-kubernetes-for-oidc-token-validation-while-bootstrapping-the-system>`
**or**
- :ref:`Configure Kubernetes for OIDC Token Validation after
Bootstrapping the System
<configure-kubernetes-for-oidc-token-validation-after-bootstrapping-the-system>`
- :ref:`Configure OIDC Auth Applications <configure-oidc-auth-applications>`
Each cloud's **oidc-auth-apps** can be configured to communicate with the
same or different remote Windows Active Directory servers; however, each
cloud manages |OIDC| tokens individually. A user must log in, authenticate,
and get an |OIDC| token for each cloud independently.
.. _centralized-oidc-authentication-setup-for-distributed-cloud-section-yqz-yr5-wlb:
-----------------
Centralized Setup
-----------------
For a centralized setup, the **oidc-auth-apps** is configured '**only**' on
the SystemController. The **kube-apiserver** must be configured on all
clouds, SystemController, and all subclouds, to point to the centralized
**oidc-auth-apps** running on the SystemController. In the centralized
setup, a user logs in, authenticates, and gets an |OIDC| token from the
Central SystemController's |OIDC| identity provider, and uses the |OIDC| token
with '**any**' of the subclouds as well as the SystemController cloud.
For a centralized |OIDC| authentication setup, use the following procedure:
.. rubric:: |proc|
#. Configure the **kube-apiserver** parameters on the SystemController and
each subcloud during bootstrapping, or by using the **system
service-parameter-add kubernetes kube\_apiserver** command after
bootstrapping the system, using the SystemController's floating OAM IP
address as the oidc\_issuer\_url for all clouds.
For example,
oidc\_issuer\_url=https://<central-cloud-floating-ip>:<oidc-auth-apps-dex-service-NodePort>/dex
on the subcloud.
For more information, see:
- :ref:`Configure Kubernetes for OIDC Token Validation while
Bootstrapping the System
<configure-kubernetes-for-oidc-token-validation-while-bootstrapping-the-system>`
**or**
- :ref:`Configure Kubernetes for OIDC Token Validation after
Bootstrapping the System
<configure-kubernetes-for-oidc-token-validation-after-bootstrapping-the-system>`
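
For example, a hedged sketch of pointing a subcloud's **kube-apiserver** at
the central dex \(the floating IP and NodePort values are illustrative\):

.. code-block:: none

   ~(keystone_admin)$ system service-parameter-add kubernetes kube_apiserver \
       oidc_issuer_url=https://10.10.10.2:30556/dex
   ~(keystone_admin)$ system service-parameter-apply kubernetes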
#. On the SystemController only, configure **oidc-auth-apps**. For more
   information, see :ref:`Configure OIDC Auth Applications
   <configure-oidc-auth-applications>`.
.. note::
   For IPv6 deployments, ensure that the IPv6 OAM floating address in the
   URL is written as https://\[<central-cloud-floating-ip>\]:30556/dex
   \(that is, in lower case and wrapped in square brackets\).
.. rubric:: |postreq|
For more information on configuring Users, Groups, Authorization, and
**kubectl** for the user and retrieving the token on subclouds, see:
.. _centralized-oidc-authentication-setup-for-distributed-cloud-ul-vf3-jnl-vlb:
- :ref:`Configure Users, Groups, and Authorization <configure-users-groups-and-authorization>`
- :ref:`Configure Kubectl with a Context for the User <configure-kubectl-with-a-context-for-the-user>`
For more information on Obtaining the Authentication Token, see:
.. _centralized-oidc-authentication-setup-for-distributed-cloud-ul-wf3-jnl-vlb:
- :ref:`Obtain the Authentication Token Using the oidc-auth Shell Script
<obtain-the-authentication-token-using-the-oidc-auth-shell-script>`
- :ref:`Obtain the Authentication Token Using the Browser
<obtain-the-authentication-token-using-the-browser>`

@@ -0,0 +1,58 @@
.. dzm1496244723149
.. _configure-horizon-user-lockout-on-failed-logins:
===============================================
Configure Horizon User Lockout on Failed Logins
===============================================
For security, login to the Web administration interface can be disabled for a
user after several consecutive failed attempts. You can configure how many
failed attempts are allowed before the user is locked out, and how long the
user must wait before the lockout is reset.
.. rubric:: |context|
.. caution::
This procedure requires the Web service to be restarted, which causes
all current user sessions to be lost. To avoid interrupting user
sessions, perform this procedure during a scheduled maintenance period
only.
By default, after three consecutive failed login attempts, a user must wait
five minutes \(300 seconds\) before attempting another login. During this
period, all Web administration interface login attempts by the user are
refused, including those using the correct password.
This behavior is controlled by the lockout\_retries parameter and the
lockout\_seconds service parameter. To review their current values, use the
:command:`system service-parameter-list` command.
You can change the duration of the lockout using the following CLI command:
.. code-block:: none
~(keystone_admin)$ system service-parameter-modify horizon auth \
lockout_seconds=<duration>
where <duration> is the time in seconds.
You can change the number of allowed retries before a lockout is imposed
using the following CLI command:
.. code-block:: none
~(keystone_admin)$ system service-parameter-modify horizon auth \
lockout_retries=<attempts>
where <attempts> is the number of allowed retries.
For the changes to take effect, you must apply them:
.. code-block:: none
~(keystone_admin)$ system service-parameter-apply horizon
Allow about 30 seconds after applying the changes for the Web service to
restart.
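
For example, a hedged sketch that allows five attempts before lockout and a
ten-minute \(600 second\) lockout \(values are illustrative\):

.. code-block:: none

   ~(keystone_admin)$ system service-parameter-modify horizon auth lockout_retries=5
   ~(keystone_admin)$ system service-parameter-modify horizon auth lockout_seconds=600
   ~(keystone_admin)$ system service-parameter-apply horizon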

@@ -0,0 +1,70 @@
.. luj1551986512461
.. _configure-http-and-https-ports-for-horizon-using-the-cli:
========================================================
Configure HTTP and HTTPS Ports for Horizon Using the CLI
========================================================
You can configure the **HTTP / HTTPS** ports for accessing the Horizon Web
interface using the CLI.
To access Horizon, use **http://<external OAM IP>:8080**. By default, the
ports are **HTTP=8080** and **HTTPS=8443**.
.. rubric:: |prereq|
You can configure **HTTP / HTTPS** ports only when all hosts are unlocked
and enabled.
.. rubric:: |context|
Use the :command:`system service-parameter-list --service http` command to
list the configured **HTTP** and **HTTPS** ports.

.. code-block:: none

   ~(keystone_admin)$ system service-parameter-list --service http
   +---------+---------+---------+------------+-------+-------------+----------+
   | uuid    | service | section | name       | value | personality | resource |
   +---------+---------+---------+------------+-------+-------------+----------+
   | 4fc7... | http    | config  | http_port  | 8080  | None        | None     |
   | 9618... | http    | config  | https_port | 8443  | None        | None     |
   +---------+---------+---------+------------+-------+-------------+----------+
.. rubric:: |proc|
#. Use the :command:`system service-parameter-modify` command to configure
a different port for **HTTP**, and **HTTPS**. For example,
.. code-block:: none
~(keystone_admin)$ system service-parameter-modify http config http_port=8090
~(keystone_admin)$ system service-parameter-modify http config https_port=9443
#. Apply the service parameter change.
.. code-block:: none
~(keystone_admin)$ system service-parameter-apply http
Applying http service parameters
.. note::
Do not use ports used by other services on the platform, OAM and
management interfaces on the controllers, or in custom
applications. For more information, see, |sec-doc|: :ref:`Default
Firewall Rules <security-default-firewall-rules>`.
If you plan to run |prod-os|, do not reset the ports to 80/443, as
these ports may be used by containerized OpenStack, by default.
.. rubric:: |postreq|
A configuration out-of-date alarm is generated for each host. Wait for the
configuration to be automatically applied to all nodes and the alarms to be
cleared on all hosts before performing maintenance operations, such as
rebooting or locking/unlocking a host.
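
A hedged way to watch for these configuration out-of-date alarms \(alarm ID
250.001\) until they clear:

.. code-block:: none

   ~(keystone_admin)$ fm alarm-list --query alarm_id=250.001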

@@ -0,0 +1,34 @@
.. jgr1582125251290
.. _configure-kubectl-with-a-context-for-the-user:
=============================================
Configure Kubectl with a Context for the User
=============================================
You can set up the kubectl context for the Windows Active Directory
**testuser** to authenticate through the **oidc-auth-apps** |OIDC| Identity
Provider \(dex\).
.. rubric:: |context|
The steps below show this procedure completed on controller-0. You can also
do so from a remote workstation.
.. rubric:: |proc|
#. Set up a cluster in kubectl if you have not done so already.
.. code-block:: none
~(keystone_admin)$ kubectl config set-cluster mywrcpcluster --server=https://<oam-floating-ip>:6443
#. Set up a context for **testuser** in this cluster in kubectl.
.. code-block:: none
~(keystone_admin)$ kubectl config set-context testuser@mywrcpcluster --cluster=mywrcpcluster --user=testuser
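
Afterwards, a minimal sketch of switching to this context and issuing a
command as **testuser** \(this assumes **oidc-auth-apps** authentication has
already been configured\):

.. code-block:: none

   ~(keystone_admin)$ kubectl config use-context testuser@mywrcpcluster
   ~(keystone_admin)$ kubectl get pods --all-namespaces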

@@ -0,0 +1,76 @@
.. ydd1583939542169
.. _configure-kubernetes-for-oidc-token-validation-after-bootstrapping-the-system:
=============================================================================
Configure Kubernetes for OIDC Token Validation after Bootstrapping the System
=============================================================================
You must configure the Kubernetes cluster's **kube-apiserver** to use the
**oidc-auth-apps** |OIDC| identity provider for validation of tokens in
Kubernetes API requests, which use |OIDC| authentication.
.. rubric:: |context|
As an alternative to performing this configuration at bootstrap time as
described in :ref:`Configure Kubernetes for OIDC Token Validation while
Bootstrapping the System
<configure-kubernetes-for-oidc-token-validation-while-bootstrapping-the-system>`,
you can do so at any time using service parameters.
.. rubric:: |proc|
.. _configure-kubernetes-for-oidc-token-validation-after-bootstrapping-the-system-steps-vlw-k2p-zkb:
#. Set the following service parameters using the :command:`system
service-parameter-add kubernetes kube\_apiserver` command.
For example:
.. code-block:: none
~(keystone_admin)$ system service-parameter-add kubernetes kube_apiserver oidc_client_id=stx-oidc-client-app
- oidc\_client\_id=<client>

- oidc\_groups\_claim=<groups>

  The value of this parameter may vary for different group
  configurations in your Windows Active Directory server.
- oidc\_issuer\_url=https://<oam-floating-ip>:<oidc-auth-apps-dex-service-NodePort>/dex
.. note::
For IPv6 deployments, ensure that the IPv6 OAM floating address
is, https://\[<oam-floating-ip>\]:30556/dex \(that is, in lower
case, and wrapped in square brackets\).
- oidc\_username\_claim=<email>

  The value of this parameter may vary for different user
  configurations in your Windows Active Directory server.
The valid combinations of these service parameters are:
- none of the parameters
- oidc\_issuer\_url, oidc\_client\_id, and oidc\_username\_claim
- oidc\_issuer\_url, oidc\_client\_id, oidc\_username\_claim, and oidc\_groups\_claim
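
For example, a hedged sketch of setting the second combination above \(all
values are illustrative\):

.. code-block:: none

   ~(keystone_admin)$ system service-parameter-add kubernetes kube_apiserver oidc_issuer_url=https://10.10.10.2:30556/dex
   ~(keystone_admin)$ system service-parameter-add kubernetes kube_apiserver oidc_client_id=stx-oidc-client-app
   ~(keystone_admin)$ system service-parameter-add kubernetes kube_apiserver oidc_username_claim=email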
#. Apply the service parameters.
.. code-block:: none
~(keystone_admin)$ system service-parameter-apply kubernetes
For more information on |OIDC| Authentication for subclouds, see
:ref:`Centralized OIDC Authentication Setup for Distributed Cloud
<centralized-oidc-authentication-setup-for-distributed-cloud>`.

@@ -0,0 +1,61 @@
.. thj1582049068370
.. _configure-kubernetes-for-oidc-token-validation-while-bootstrapping-the-system:
=============================================================================
Configure Kubernetes for OIDC Token Validation while Bootstrapping the System
=============================================================================
You must configure the Kubernetes cluster's **kube-apiserver** to use the
**oidc-auth-apps** |OIDC| identity provider for validation of tokens in
Kubernetes API requests, which use |OIDC| authentication.
.. rubric:: |context|
Complete these steps to configure Kubernetes for |OIDC| token validation
during bootstrapping and deployment.
The values set in this procedure can be changed at any time using service
parameters as described in :ref:`Configure Kubernetes for OIDC Token
Validation after Bootstrapping the System
<configure-kubernetes-for-oidc-token-validation-after-bootstrapping-the-system>`.
.. rubric:: |proc|
- Configure the Kubernetes cluster **kube-apiserver** by adding the
following parameters to the localhost.yml file, during bootstrap:
.. code-block:: none
# cd ~
# cat <<EOF > /home/sysadmin/localhost.yml
apiserver_oidc:
client_id: <stx-oidc-client-app>
issuer_url: https://<oam-floating-ip>:<oidc-auth-apps-dex-service-NodePort>/dex
username_claim: <email>
groups_claim: <groups>
EOF
where:
**<oidc-auth-apps-dex-service-NodePort>**
is the port to be configured for the NodePort service for dex in
**oidc-auth-apps**. The default is 30556.
The values of the **username\_claim**, and **groups\_claim** parameters
could vary for different user and groups configurations in your Windows
Active Directory server.
.. note::
For IPv6 deployments, ensure that the IPv6 OAM floating address in
the **issuer\_url** is, https://\[<oam-floating-ip>\]:30556/dex
\(that is, in lower case, and wrapped in square brackets\).
.. rubric:: |result|
For more information on |OIDC| Authentication for subclouds, see
:ref:`Centralized OIDC Authentication Setup for Distributed Cloud
<centralized-oidc-authentication-setup-for-distributed-cloud>`.

@@ -0,0 +1,138 @@
.. gub1581954935898
.. _configure-local-cli-access:
==========================
Configure Local CLI Access
==========================
You can access the system via a local CLI from the active controller/master
node's local console or by connecting via SSH to the OAM floating IP address.

.. rubric:: |context|

It is highly recommended that only 'sysadmin' and a small number of admin
level user accounts be allowed to SSH to the system. This procedure assumes
that only such an admin user is using the local CLI.
Using the **sysadmin** account and the Local CLI, you can perform all
required system maintenance, administration and troubleshooting tasks.
.. rubric:: |proc|
.. _configure-local-cli-access-steps-ewr-c33-gjb:
#. Log in to controller-0 via the console or using SSH.
Use the user name **sysadmin** and your <sysadmin-password>.
#. Acquire Keystone Admin and Kubernetes Admin credentials.
.. code-block:: none
$ source /etc/platform/openrc
[sysadmin@controller-0 ~(keystone_admin)]$
#. If you plan on customizing the sysadmin's kubectl configuration on the
|prod-long| Controller \(for example, :command:`kubectl config set-...` or
:command:`oidc-auth`\), you should use a private KUBECONFIG file and NOT
the system-managed KUBECONFIG file /etc/kubernetes/admin.conf, which can be
changed and overwritten by the system.
#. Copy /etc/kubernetes/admin.conf to a private file under
/home/sysadmin, such as /home/sysadmin/.kube/config, and update
/home/sysadmin/.profile to have the KUBECONFIG environment variable
point to the private file.
For example, the following commands set up a private KUBECONFIG file.
.. code-block:: none
# ssh sysadmin@<oamFloatingIpAddress>
Password:
% mkdir .kube
% cp /etc/kubernetes/admin.conf .kube/config
% echo "export KUBECONFIG=~/.kube/config" >> ~/.profile
% exit
#. Confirm that the KUBECONFIG environment variable is set correctly
and that :command:`kubectl` commands are functioning properly.
.. code-block:: none
# ssh sysadmin@<oamFloatingIpAddress>
Password:
% env | fgrep KUBE
KUBECONFIG=/home/sysadmin/.kube/config
% kubectl get pods
.. rubric:: |result|
You can now access all |prod| commands.
**system commands**
StarlingX system and host management commands are executed with the
:command:`system` command.
For example:
.. code-block:: none
~(keystone_admin)$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
+----+--------------+-------------+----------------+-------------+--------------+
.. note::
In the following examples, the prompt is shortened to:
.. code-block:: none
~(keystone_admin)$
Use :command:`system help` for a full list of :command:`system` subcommands.
**fm commands**
StarlingX fault management commands are executed with the :command:`fm` command.
For example:
.. code-block:: none
~(keystone_admin)$ fm alarm-list
+-------+---------------+---------------------+----------+---------------+
| Alarm | Reason Text | Entity ID | Severity | Time Stamp |
| ID | | | | |
+-------+---------------+---------------------+----------+---------------+
| 750. | Application | k8s_application= | major | 2019-08-08T20 |
| 002 | Apply Failure | platform-integ-apps | | :17:58.223926 |
| | | | | |
+-------+---------------+---------------------+----------+---------------+
Use :command:`fm help` for a full list of :command:`fm` subcommands.
**kubectl commands**
Kubernetes commands are executed with the :command:`kubectl` command.
For example:
.. code-block:: none
~(keystone_admin)$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
controller-0 Ready master 5d19h v1.13.5
~(keystone_admin)$ kubectl get pods
NAME READY STATUS RESTARTS AGE
dashboard-kubernetes-dashboard-7749d97f95-bzp5w 1/1 Running 0 3d18h
.. note::
Use the remote Windows Active Directory server for authentication of
local :command:`kubectl` commands.

@@ -0,0 +1,180 @@
.. cwn1581381515361
.. _configure-oidc-auth-applications:
================================
Configure OIDC Auth Applications
================================
The **oidc-auth-apps** application is a system application that needs to be
configured to use a remote Windows Active Directory server to authenticate
users of the Kubernetes API. The **oidc-auth-apps** is packaged in the ISO
and uploaded by default.
.. rubric:: |prereq|
.. _configure-oidc-auth-applications-ul-gpz-x51-llb:
- You must have configured the Kubernetes **kube-apiserver** to use
the **oidc-auth-apps** |OIDC| identity provider for validation of
tokens in Kubernetes API requests, which use |OIDC| authentication. For
more information on configuring the Kubernetes **kube-apiserver**, see
:ref:`Configure Kubernetes for OIDC Token Validation while
Bootstrapping the System
<configure-kubernetes-for-oidc-token-validation-while-bootstrapping-the-system>`
or :ref:`Configure Kubernetes for OIDC Token Validation after
Bootstrapping the System
<configure-kubernetes-for-oidc-token-validation-after-bootstrapping-the-system>`.
- You must have a |CA| signed certificate \(dex-cert.pem file\), and private
key \(dex-key.pem file\) for the dex |OIDC| Identity Provider of
**oidc-auth-apps**.
This certificate *must* have the |prod|'s floating OAM IP Address in
the |SAN| list. If you are planning on defining and using a DNS
name for the |prod|'s floating OAM IP Address, then this DNS name
*must* also be in the |SAN| list. Refer to the documentation for
the external |CA| that you are using, in order to create a signed
certificate and key.
If you are using an intermediate |CA| to sign the dex certificate, include
both the dex certificate \(signed by the intermediate |CA|\), and the
intermediate |CA|'s certificate \(signed by the Root |CA|\) in that order, in
**dex-cert.pem**.
- You must have the certificate of the |CA| \(**dex-ca.pem** file\) that
signed the above certificate for the dex |OIDC| Identity Provider of
**oidc-auth-apps**.
If an intermediate |CA| was used to sign the dex certificate, and both the
dex certificate and the intermediate |CA| certificate were included in
**dex-cert.pem**, then the **dex-ca.pem** file should contain the root
|CA|'s certificate.
If the signing |CA| \(**dex-ca.pem**\) is not a well-known trusted |CA|, you
must ensure the system trusts the |CA| by specifying it either during the
bootstrap phase of system installation, by specifying '**ssl\_ca\_cert:
dex-ca.pem**' in the ansible bootstrap overrides **localhost.yml** file,
or by using the **system certificate-install -m ssl\_ca dex-ca.pem**
command.
.. rubric:: |proc|
.. _configure-oidc-auth-applications-steps-kll-nbm-tkb:
#. Create the secret, **local-dex.tls**, with the certificate and key, to be
used by the **oidc-auth-apps**, as well as the secret, **dex-client-secret**,
with the |CA|'s certificate that signed the **local-dex.tls** certificate.
For example, assuming the cert and key pem files for creating these
secrets are in /home/sysadmin/ssl/, run the following commands to create
the secrets:
.. note::
**oidc-auth-apps** looks specifically for secrets of these names in
the **kube-system** namespace.
For the generic secret **dex-client-secret**, the filename must be
'**dex-ca.pem**'.
.. code-block:: none
~(keystone_admin)$ kubectl create secret tls local-dex.tls --cert=ssl/dex-cert.pem --key=ssl/dex-key.pem -n kube-system
~(keystone_admin)$ kubectl create secret generic dex-client-secret --from-file=/home/sysadmin/ssl/dex-ca.pem -n kube-system
Create the secret **wadcert** with the |CA|'s certificate that signed
the Active Directory's certificate using the following command:
.. code-block:: none
~(keystone_admin)$ kubectl create secret generic wadcert --from-file=ssl/AD_CA.cer -n kube-system
#. Specify user overrides for **oidc-auth-apps** application, by using the following command:
.. code-block:: none
~(keystone_admin)$ system helm-override-update oidc-auth-apps dex kube-system --values /home/sysadmin/dex-overrides.yaml
The dex-overrides.yaml file contains the desired dex helm chart overrides
\(that is, the LDAP connector configuration for the Active Directory
service, optional token expiry, and so on.\), and volume mounts for
providing access to the **wadcert** secret, described in this section.
For the complete list of dex helm chart values supported, see `Dex Helm
Chart Values
<https://github.com/helm/charts/blob/92b6289ae93816717a8453cfe62bad51cbdb8ad0/stable/dex/values.yaml>`__.
For the complete list of parameters of the dex LDAP connector
configuration, see `Dex LDAP Connector Configuration
<https://github.com/dexidp/dex/blob/master/Documentation/connectors/ldap.md>`__.
The example below configures a token expiry of ten hours, a single LDAP
connector to an Active Directory service using HTTPS \(LDAPS\) using the
**wadcert** secret configured in this section, the required Active
Directory service login information \(that is, bindDN, and bindPW\), and
example :command:`userSearch`, and :command:`groupSearch` clauses.
.. code-block:: none
config:
expiry:
idTokens: "10h"
connectors:
- type: ldap
name: OpenLDAP
id: ldap
config:
host: pv-windows-acti.cumulus.wrs.com:636
rootCA: /etc/ssl/certs/adcert/AD_CA.cer
insecureNoSSL: false
insecureSkipVerify: false
bindDN: cn=Administrator,cn=Users,dc=cumulus,dc=wrs,dc=com
bindPW: Li69nux*
usernamePrompt: Username
userSearch:
baseDN: ou=Users,ou=Titanium,dc=cumulus,dc=wrs,dc=com
filter: "(objectClass=user)"
username: sAMAccountName
idAttr: sAMAccountName
emailAttr: sAMAccountName
nameAttr: displayName
groupSearch:
baseDN: ou=Groups,ou=Titanium,dc=cumulus,dc=wrs,dc=com
filter: "(objectClass=group)"
userAttr: DN
groupAttr: member
nameAttr: cn
extraVolumes:
- name: certdir
secret:
secretName: wadcert
extraVolumeMounts:
- name: certdir
mountPath: /etc/ssl/certs/adcert
If more than one Windows Active Directory service is required for
authenticating the different users of the |prod|, multiple '**ldap**'
type connectors can be configured; one for each Windows Active
Directory service.
If more than one **userSearch** plus **groupSearch** clause is
required for the same Windows Active Directory service, multiple
'**ldap**' type connectors, with the same host information but
different **userSearch** plus **groupSearch** clauses, should be used.
Whenever you use multiple '**ldap**' type connectors, ensure you use
unique '**name:**' and '**id:**' parameters for each connector.
#. Use the :command:`system application-apply` command to apply the
configuration:
.. code-block:: none
~(keystone_admin)$ system application-apply oidc-auth-apps

@@ -0,0 +1,43 @@
.. amd1581954964169
.. _configure-remote-cli-access:
===========================
Configure Remote CLI Access
===========================
You can access the system from a remote workstation using one of two methods.
.. _configure-remote-cli-access-ul-jt2-lcy-ljb:
- The first method involves using the remote |CLI| tarball from the
|prod| CENGN build servers to install a set of container-backed remote
CLIs and clients for accessing a remote |prod-long|. This provides
access to the :command:`system` and :command:`dcmanager` |prod| CLIs,
the OpenStack CLI for Keystone and Barbican in the platform, and
Kubernetes-related CLIs \(kubectl, helm\). This approach is simple to
install, portable across Linux, macOS, and Windows, and provides access
to all |prod-long| CLIs. However, commands such as those that reference
local files or require a shell are awkward to run in this environment.
- The second method involves installing the :command:`kubectl` and
:command:`helm` clients directly on the remote host. This method only
provides the Kubernetes-related CLIs and requires OS-specific installation
instructions.
The helm client has additional installation requirements applicable to
either of the above two methods.
.. seealso::
:ref:`Configure Container-backed Remote CLIs and Clients
<security-configure-container-backed-remote-clis-and-clients>`
:ref:`Install Kubectl and Helm Clients Directly on a Host
<security-install-kubectl-and-helm-clients-directly-on-a-host>`
:ref:`Configure Remote Helm Client for Non-Admin Users
<configure-remote-helm-client-for-non-admin-users>`

@@ -0,0 +1,165 @@
.. oiz1581955060428
.. _configure-remote-helm-client-for-non-admin-users:
================================================
Configure Remote Helm Client for Non-Admin Users
================================================
For non-admin users \(i.e. users without access to the default Tiller
server running in kube-system namespace\), you must create a Tiller server
for this specific user in a namespace that they have access to.
.. rubric:: |context|
By default, helm communicates with the default Tiller server in the
kube-system namespace. This is not accessible by non-admin users.
For a non-admin user to use the helm client, you must create your own Tiller
server, in a namespace that you have access to, with the required |RBAC|
capabilities and, optionally, |TLS| protection.
To create a Tiller server with |RBAC| permissions within the default
namespace, complete the following steps on the controller. Except where
indicated, these commands can be run by the non-admin user, locally or
remotely.
.. note::
If you are using container-backed helm CLIs and clients \(method 1\),
ensure you change directories to <$HOME>/remote\_wd
.. rubric:: |proc|
.. _configure-remote-helm-client-for-non-admin-users-steps-isx-dsd-tkb:
#. Set the namespace.
.. code-block:: none
~(keystone_admin)$ NAMESPACE=default
#. Set up accounts, roles and bindings.
#. Execute the following commands.
.. note::
These commands could be run remotely by the non-admin user who
has access to the default namespace.
.. code-block:: none
% cat <<EOF > default-tiller-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: tiller
namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: tiller
namespace: default
rules:
- apiGroups: ["*"]
resources: ["*"]
verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: tiller
namespace: default
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: tiller
subjects:
- kind: ServiceAccount
name: tiller
namespace: default
EOF
% kubectl apply -f default-tiller-sa.yaml
#. Execute the following commands as an admin-level user.
.. code-block:: none

   ~(keystone_admin)$ kubectl create clusterrole tiller --verb get --resource namespaces
   ~(keystone_admin)$ kubectl create clusterrolebinding tiller --clusterrole tiller --serviceaccount ${NAMESPACE}:tiller
#. Initialize the Helm account.
.. code-block:: none

   ~(keystone_admin)$ helm init --service-account=tiller --tiller-namespace=$NAMESPACE \
       --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' \
       | sed 's@  replicas: 1@  replicas: 1\n  selector: {"matchLabels": {"app": "helm", "name": "tiller"}}@' \
       > helm-init.yaml
   ~(keystone_admin)$ kubectl apply -f helm-init.yaml
   ~(keystone_admin)$ helm init --client-only
.. note::
Ensure that each of the patterns between single quotes in the above
:command:`sed` commands are on single lines when run from your
command-line interface.
.. note::
Add the following options if you are enabling TLS for this Tiller:
``--tiller-tls``
Enable TLS on Tiller.
``--tiller-tls-cert <certificate\_file>``
The public key/certificate for Tiller \(signed by ``--tls-ca-cert``\).
``--tiller-tls-key <key\_file>``
The private key for Tiller.
``--tiller-tls-verify``
Enable authentication of client certificates \(i.e. validate
they are signed by ``--tls-ca-cert``\).
``--tls-ca-cert <certificate\_file>``
The public certificate of the |CA| used for signing Tiller
server and helm client certificates.
.. rubric:: |result|
You can now use the private Tiller server remotely or locally by specifying
the ``--tiller-namespace`` default option on all helm CLI commands. For
example:
.. code-block:: none
helm version --tiller-namespace default
helm install --name wordpress stable/wordpress --tiller-namespace default
.. note::
If you are using container-backed helm CLI and Client \(method 1\), then
you change directory to <$HOME>/remote\_wd and include the following
option on all helm commands:
.. code-block:: none
--home "./.helm"
.. note::
Use the remote Windows Active Directory server for authentication of
remote :command:`kubectl` commands.
.. seealso::
:ref:`Configure Container-backed Remote CLIs and Clients
<security-configure-container-backed-remote-clis-and-clients>`
:ref:`Install Kubectl and Helm Clients Directly on a Host
<security-install-kubectl-and-helm-clients-directly-on-a-host>`

@@ -0,0 +1,53 @@
.. jzo1552681837074
.. _configure-the-keystone-token-expiration-time:
============================================
Configure the Keystone Token Expiration Time
============================================
You can change the default Keystone token expiration setting. This may be
required to provide sustained access for operations that take more than an
hour.
.. rubric:: |context|
By default, the Keystone token expiration time is set to 3600 seconds \(1
hour\). This is the amount of time a token remains valid. The new setting
must be between 3600 seconds and 14400 seconds.
.. rubric:: |proc|
#. On the active controller, become the Keystone admin user.
.. code-block:: none
$ source /etc/platform/openrc
#. Ensure that the token\_expiration parameter is defined for the identity
service.
.. code-block:: none
$ system service-parameter-list | grep token_expiration
| 712e4a45-777c-4e83-9d56-5042cde482f7 | identity | config | token_expiration | 3600
#. Modify the service parameter using the following command:
.. code-block:: none
$ system service-parameter-modify identity config token_expiration=7200
#. Apply the configuration change.
.. code-block:: none
$ system service-parameter-apply identity
Applying identity service parameters
Allow a few minutes for the change to take effect.
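
To confirm the new value has been applied, a minimal check \(output
abridged\):

.. code-block:: none

   $ system service-parameter-list | grep token_expiration
   | 712e4a45-777c-4e83-9d56-5042cde482f7 | identity | config | token_expiration | 7200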

@@ -0,0 +1,73 @@
.. rzl1582124533847
.. _configure-users-groups-and-authorization:
==========================================
Configure Users, Groups, and Authorization
==========================================
You can create a **user**, and optionally one or more **groups** that the
**user** is a member of, in your Windows Active Directory server.
.. rubric:: |context|
The example below is for a **testuser** user who is a member of the
**billingDeptGroup** and **managerGroup** groups. See `Microsoft
documentation on Windows Active Directory
<https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview>`__
for additional information on adding users and groups to Windows Active
Directory.
Use the following procedure to configure the desired authorization on
|prod-long| for the user or the user's group\(s\):
.. rubric:: |proc|
.. _configure-users-groups-and-authorization-steps-b2f-ck4-dlb:
#. In |prod-long|, bind Kubernetes |RBAC| role\(s\) for the **testuser**.
For example, give **testuser** admin privileges by creating the
following deployment file and deploying it with :command:`kubectl
apply -f <filename>`.
.. code-block:: none
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: testuser-rolebinding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: testuser
Alternatively, you can bind Kubernetes |RBAC| role\(s\) for the group\(s\)
of the **testuser**.
For example, give all members of the **billingDeptGroup** admin
privileges by creating the following deployment file and deploying it
with :command:`kubectl apply -f <filename>`.
.. code-block:: none
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: testuser-rolebinding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: billingDeptGroup
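
A hedged way to verify the resulting permissions from the controller, using
Kubernetes impersonation \(run with admin credentials\):

.. code-block:: none

   ~(keystone_admin)$ kubectl auth can-i '*' '*' --as=testuser
   yes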

@@ -0,0 +1,203 @@
.. cms1597171128588
.. _configure-vault-using-the-cli:
=============================
Configure Vault Using the CLI
=============================
After Vault has been installed, you can configure Vault for use with |prod|
using the |CLI|. This section describes the minimum configuration
requirements for secret management for hosted Kubernetes applications.
.. rubric:: |context|
You can configure Vault by logging into a Vault server pod and using Vault CLI.
.. rubric:: |proc|
#. Get the root token for logging into Vault.
.. code-block:: none
$ kubectl exec -n vault sva-vault-manager-0 -- cat /mnt/data/cluster_keys.json | grep -oP --color=never '(?<="root_token":")[^"]*'
#. Log in to the Vault server container.
.. code-block:: none
$ kubectl exec -it -n vault sva-vault-0 -- sh
#. Log into Vault, and provide the root token when prompted. Refer to
step 1 for the root token.
.. code-block:: none
$ vault login
#. Enable the Kubernetes Auth method.
.. code-block:: none
$ vault auth enable kubernetes
#. Configure the Kubernetes Auth method.
.. code-block:: none
$ vault write auth/kubernetes/config token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443" kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
#. Verify the Kubernetes Auth method.
.. code-block:: none
$ vault auth list
and
.. code-block:: none
$ vault read auth/kubernetes/config
#. Enable a secrets engine at the path "secret".
Vault supports a variety of secret engines, as an example, create a
**kv-v2** secrets engine. The **kv-v2** secrets engine allows for
storing arbitrary key-value pairs. Secrets engines are enabled at a
"path" in Vault. When a request comes to Vault, the router
automatically routes anything with the route prefix to the secrets
engine. In this way, each secrets engine defines its own paths and
properties. To the user, secrets engines behave similar to a virtual
filesystem, supporting operations like read, write, and delete.
.. code-block:: none
$ vault secrets enable -path=secret kv-v2
For more information, see:
- `https://www.vaultproject.io/docs/secrets
<https://www.vaultproject.io/docs/secrets>`__
- `https://www.vaultproject.io/docs/secrets/kv/kv-v2
<https://www.vaultproject.io/docs/secrets/kv/kv-v2>`__
#. Create a sample policy and role for allowing access to the configured
**kv-v2** secrets engine.
A Vault policy specifies read and/or write capabilities for a
particular secret engine path, and the Vault role binds a specific
Kubernetes service account to a policy.
#. Create a policy.
.. code-block:: none
$ vault policy write basic-secret-policy - <<EOF
path "secret/basic-secret/*" {
capabilities = ["read"]
}
EOF
For more information, see
`https://www.vaultproject.io/docs/concepts/policies
<https://www.vaultproject.io/docs/concepts/policies>`__.
#. Create the role mapped to the policy.
.. note::
The service account and namespace used for the values below must
exist on the kubernetes cluster.
- **bound\_service\_account\_names**
- **bound\_service\_account\_namespaces**
.. code-block:: none
$ vault write auth/kubernetes/role/basic-secret-role bound_service_account_names=basic-secret bound_service_account_namespaces=default policies=basic-secret-policy ttl=24h
#. Verify the policy.
.. code-block:: none
$ vault policy read basic-secret-policy
#. Verify the role.
.. code-block:: none
$ vault read auth/kubernetes/role/basic-secret-role
#. Create an initial example secret in the configured **kv-v2** secrets
engine.
#. Create a secret.
.. code-block:: none
$ vault kv put secret/basic-secret/helloworld username="test" password="supersecret"
#. Verify the secret.
.. code-block:: none
$ vault kv get secret/basic-secret/helloworld
#. \(Optional\) To enable audit logging, use the steps below:
.. note::
It is recommended to enable file logging and stdout.
#. Enable Vault logging to file for persistent log storage.
.. code-block:: none
$ vault audit enable -path="/vault/audit/vault_audit.log" file file_path=/vault/audit/vault_audit.log
#. Enable Vault logging to stdout for easy log reading from the Vault container.
.. code-block:: none
$ vault audit enable -path="stdout" file file_path=stdout
#. Verify the configuration.
.. code-block:: none
$ vault audit list
#. Delete the cached credentials to log out of Vault.
.. code-block:: none
$ rm ~/.vault-token
#. Exit the Vault container.
.. code-block:: none
$ exit
..
.. rubric:: |result|
.. xbooklink
For more information, see, |usertasks-doc|::ref:`Vault Overview
<kubernetes-user-tutorials-vault-overview>`.

@@ -0,0 +1,180 @@
.. xgp1596216287484
.. _configure-vault:
===============
Configure Vault
===============
After Vault has been installed, you can configure Vault for use by hosted
Kubernetes applications on |prod|. This section describes the minimum
configuration requirements for secret management for hosted Kubernetes
applications.
.. rubric:: |context|
You can configure Vault using Vault's REST API. Configuration can also be
done by logging into a Vault server pod and using the Vault CLI directly.
For more information, see :ref:`Configure Vault Using the CLI
<configure-vault-using-the-cli>`.
The following steps use Vault's REST API and are run from controller-0.
.. rubric:: |proc|
#. Set environment variables.
.. code-block:: none
$ ROOT_TOKEN=$(kubectl exec -n vault sva-vault-manager-0 -- cat /mnt/data/cluster_keys.json | grep -oP --color=never '(?<="root_token":")[^"]*')
$ SA_CA_CERT=$(kubectl exec -n vault sva-vault-0 -- awk 'NF {sub(/\r/, ""); printf "%s\\n",$0;}' /var/run/secrets/kubernetes.io/serviceaccount/ca.crt)
$ TOKEN_JWT=$(kubectl exec -n vault sva-vault-0 -- cat /var/run/secrets/kubernetes.io/serviceaccount/token)
$ KUBERNETES_PORT_443_TCP_ADDR=$(kubectl exec -n vault sva-vault-0 -- sh -c 'echo $KUBERNETES_PORT_443_TCP_ADDR')
$ echo $(kubectl get secrets -n vault vault-ca -o jsonpath='{.data.tls\.crt}') | base64 --decode > /home/sysadmin/vault_ca.pem
#. Enable the Kubernetes Auth method.
This allows Vault to use Kubernetes service accounts for authentication of Vault commands.
For more information, see:
- `https://www.vaultproject.io/docs/auth <https://www.vaultproject.io/docs/auth>`__
- `https://www.vaultproject.io/docs/auth/kubernetes <https://www.vaultproject.io/docs/auth/kubernetes>`__
.. code-block:: none
$ curl --cacert /home/sysadmin/vault_ca.pem --header "X-Vault-Token:$ROOT_TOKEN" --request POST --data '{"type":"kubernetes","description":"kubernetes auth"}' https://sva-vault.vault.svc.cluster.local:8200/v1/sys/auth/kubernetes
#. Configure the Kubernetes Auth method.
.. code-block:: none
$ curl --cacert /home/sysadmin/vault_ca.pem --header "X-Vault-Token:$ROOT_TOKEN" --request POST --data '{"kubernetes_host": "'"https://$KUBERNETES_PORT_443_TCP_ADDR:443"'", "kubernetes_ca_cert":"'"$SA_CA_CERT"'", "token_reviewer_jwt":"'"$TOKEN_JWT"'"}' https://sva-vault.vault.svc.cluster.local:8200/v1/auth/kubernetes/config
#. Verify the Kubernetes Auth method.
.. code-block:: none
$ curl --cacert /home/sysadmin/vault_ca.pem --header "X-Vault-Token:$ROOT_TOKEN" https://sva-vault.vault.svc.cluster.local:8200/v1/auth/kubernetes/config
#. Enable a secrets engine.
Vault supports a variety of secret engines, as an example, create a
**kv-v2** secrets engine. The **kv-v2** secrets engine allows for
storing arbitrary key-value pairs. Secrets engines are enabled at a
"path" in Vault. When a request comes to Vault, the router
automatically routes anything with the route prefix to the secrets
engine. In this way, each secrets engine defines its own paths and
properties. To the user, secrets engines behave similar to a virtual
filesystem, supporting operations like read, write, and delete.
.. code-block:: none
$ curl --cacert /home/sysadmin/vault_ca.pem --header "X-Vault-Token:$ROOT_TOKEN" --request POST --data '{"type": "kv","version":"2"}' https://sva-vault.vault.svc.cluster.local:8200/v1/sys/mounts/secret
For more information, see:
- `https://www.vaultproject.io/docs/secrets <https://www.vaultproject.io/docs/secrets>`__
- `https://www.vaultproject.io/docs/secrets/kv/kv-v2 <https://www.vaultproject.io/docs/secrets/kv/kv-v2>`__
#. Create a sample policy and role for allowing access to the configured **kv-v2** secrets engine.
#. Create a policy.
A Vault policy specifies read and/or write capabilities for a
particular secret engine path, and the Vault role binds a specific
Kubernetes service account to a policy.
.. code-block:: none
$ curl --cacert /home/sysadmin/vault_ca.pem --header "X-Vault-Token:$ROOT_TOKEN" -H "Content-Type: application/json" --request PUT -d '{"policy":"path \"secret/basic-secret/*\" {capabilities = [\"read\"]}"}' https://sva-vault.vault.svc.cluster.local:8200/v1/sys/policy/basic-secret-policy
For more information, see, `https://www.vaultproject.io/docs/concepts/policies <https://www.vaultproject.io/docs/concepts/policies>`__.
#. Create the role mapped to the policy.
.. note::
The service account and namespace used for the values below must exist on the kubernetes cluster.
- **bound\_service\_account\_names**
- **bound\_service\_account\_namespaces**
.. code-block:: none
$ curl --cacert /home/sysadmin/vault_ca.pem --header "X-Vault-Token:$ROOT_TOKEN" --request POST --data '{ "bound_service_account_names": "basic-secret", "bound_service_account_namespaces": "pvtest", "policies": "basic-secret-policy", "max_ttl": "1800000"}' https://sva-vault.vault.svc.cluster.local:8200/v1/auth/kubernetes/role/basic-secret-role
#. Verify the role configuration.
.. code-block:: none
$ curl --cacert /home/sysadmin/vault_ca.pem --header "X-Vault-Token:$ROOT_TOKEN" https://sva-vault.vault.svc.cluster.local:8200/v1/auth/kubernetes/role/basic-secret-role
#. Create an initial example secret in the configured **kv-v2** secrets engine.
#. Create a secret.
.. code-block:: none
$ curl --cacert /home/sysadmin/vault_ca.pem --header "X-Vault-Token:$ROOT_TOKEN" -H "Content-Type: application/json" -X POST -d '{"username":"pvtest","password":"Li69nux*"}' https://sva-vault.vault.svc.cluster.local:8200/v1/secret/basic-secret/helloworld
#. Verify the secret.
.. code-block:: none
$ curl --cacert /home/sysadmin/vault_ca.pem --header "X-Vault-Token:$ROOT_TOKEN" https://sva-vault.vault.svc.cluster.local:8200/v1/secret/basic-secret/helloworld
#. \(Optional\) To enable and configure logging, use the steps below:
#. Enable Vault logging to file for persistent log storage.
.. code-block:: none
$ curl --cacert /home/sysadmin/vault_ca.pem --request POST --header "X-Vault-Token:$ROOT_TOKEN" --data '{"type": "file", "description": "ctest", "options": {"file_path": "/vault/audit/vault_audit.log"}}' https://sva-vault.vault.svc.cluster.local:8200/v1/sys/audit/vaultfile
#. Enable Vault logging to stdout for easy log reading from the Vault container.
.. code-block:: none
$ curl --cacert /home/sysadmin/vault_ca.pem --request POST --header "X-Vault-Token:$ROOT_TOKEN" --data '{"type": "file", "description": "stdout", "options": {"file_path": "stdout"}}' https://sva-vault.vault.svc.cluster.local:8200/v1/sys/audit/stdout
#. Verify the configuration.
.. code-block:: none
$ curl --cacert /home/sysadmin/vault_ca.pem --header "X-Vault-Token:$ROOT_TOKEN" https://sva-vault.vault.svc.cluster.local:8200/v1/sys/audit
..
.. rubric:: |result|
.. xbooklink
For more information, see |usertasks-doc|::ref:`Vault Overview
<kubernetes-user-tutorials-vault-overview>`.
.. seealso::
:ref:`Configure Vault Using the CLI <configure-vault-using-the-cli>`

@@ -0,0 +1,48 @@
.. oej1591381096383
.. _connecting-to-container-registries-through-a-firewall-or-proxy:
===========================================================
Connect to Container Registries through a Firewall or Proxy
===========================================================
You can use service parameters to connect to container registries that are
otherwise inaccessible behind a firewall or proxy.
.. rubric:: |proc|
#. Do one of the following to allow access to a specified URL.
- To allow access over HTTP:
.. code-block:: none
~(keystone_user)$ system service-parameter-modify platform docker http_proxy http://<my.proxy.com>:1080
~(keystone_user)$ system service-parameter-apply platform
- To allow access over HTTPS:
.. code-block:: none
~(keystone_user)$ system service-parameter-modify platform docker https_proxy https://<my.proxy.com>:1443
~(keystone_user)$ system service-parameter-apply platform
Substitute the correct value for <my.proxy.com>.
#. If you access registries that are not on the other side of the
firewall/proxy, you can specify their IP addresses in the no\_proxy service
parameter as a comma separated list.
.. note::
Addresses must not be in subnet format and cannot contain wildcards.
For example:
.. code-block:: none
~(keystone_user)$ system service-parameter-modify platform docker no_proxy=1.2.3.4,5.6.7.8
~(keystone_user)$ system service-parameter-apply platform
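You can confirm the configured proxy settings by listing the docker service parameters; a sketch, filtering for the parameter names used above:

.. code-block:: none

~(keystone_user)$ system service-parameter-list | grep -E '(http_proxy|https_proxy|no_proxy)'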

@ -0,0 +1,63 @@
.. ily1578927061566
.. _create-an-admin-type-service-account:
====================================
Create an Admin Type Service Account
====================================
An admin type user typically has full permissions to cluster-scoped
resources as well as full permissions to all resources scoped to any
namespaces.
.. rubric:: |context|
A cluster-admin ClusterRole is defined by default for such a user. To create
an admin service account with cluster-admin role, use the following procedure:
.. rubric:: |proc|
#. Create the user definition.
For example:
.. code-block:: none
% cat <<EOF > joe-admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: joe-admin
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: joe-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: joe-admin
namespace: kube-system
EOF
#. Apply the configuration.
.. code-block:: none
% kubectl apply -f joe-admin.yaml
..
.. rubric:: |postreq|
.. xbooklink
See |sysconf-doc|: :ref:`Configure Remote CLI Access
<configure-remote-cli-access>` for details on how to setup remote CLI
access using tools such as :command:`kubectl` and :command:`helm` for a
service account such as this.
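As an illustrative check, you can locate the token that Kubernetes generates for the new service account and display it for use with :command:`kubectl`; this sketch assumes the token secret name starts with the joe-admin prefix:

.. code-block:: none

% kubectl -n kube-system get secrets | grep joe-admin
% kubectl -n kube-system describe secret $(kubectl -n kube-system get secrets | awk '/joe-admin/ {print $1; exit}')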

@ -0,0 +1,107 @@
.. qtr1594910639395
.. _create-certificates-locally-using-cert-manager-on-the-controller:
================================================================
Create Certificates Locally using cert-manager on the Controller
================================================================
You can use :command:`cert-manager` to locally create certificates suitable
for use in a lab environment.
.. rubric:: |proc|
#. Create a Root |CA| Certificate and Key.
#. Create a self-signing issuer.
.. code-block:: none
$ echo "
apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
name: my-selfsigning-issuer
spec:
selfSigned: {}
" | kubectl apply -f -
#. Create a Root CA certificate and key.
.. code-block:: none
$ echo "
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
name: my-rootca-certificate
spec:
secretName: my-rootca-certificate
commonName: my-rootca
isCA: true
issuerRef:
name: my-selfsigning-issuer
kind: Issuer
" | kubectl apply -f -
#. Create a Root CA Issuer.
.. code-block:: none
$ echo "
apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
name: my-rootca-issuer
spec:
ca:
secretName: my-rootca-certificate
" | kubectl apply -f -
#. Create files for the Root CA certificate and key.
.. code-block:: none
$ kubectl get secret my-rootca-certificate -o yaml | egrep "^ tls.crt:" | awk '{print $2}' | base64 --decode > my-rootca-cert.pem
$ kubectl get secret my-rootca-certificate -o yaml | egrep "^ tls.key:" | awk '{print $2}' | base64 --decode > my-rootca-key.pem
#. Create and sign a Server Certificate and Key.
#. Create the Server certificate and key.
.. code-block:: none
$ echo "
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
name: my-server-certificate
spec:
secretName: my-server-certificate
duration: 2160h # 90d
renewBefore: 360h # 15d
organization:
- WindRiver
commonName: 1.1.1.1
dnsNames:
- myserver.wrs.com
ipAddresses:
- 1.1.1.1
issuerRef:
name: my-rootca-issuer
kind: Issuer
" | kubectl apply -f -
#. Create the |PEM| files for Server certificate and key.
.. code-block:: none
$ kubectl get secret my-server-certificate -o yaml | egrep "^ tls.crt:" | awk '{print $2}' | base64 --decode > my-server-cert.pem
$ kubectl get secret my-server-certificate -o yaml | egrep "^ tls.key:" | awk '{print $2}' | base64 --decode > my-server-key.pem
#. Combine the server certificate and key into a single file.
.. code-block:: none
$ cat my-server-cert.pem my-server-key.pem > my-server.pem
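You can optionally confirm that cert-manager has issued the certificates before extracting the files; a sketch, using the resource names from the examples above:

.. code-block:: none

$ kubectl get certificates
$ kubectl describe certificate my-server-certificate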

@ -0,0 +1,67 @@
.. rmn1594906401238
.. _create-certificates-locally-using-openssl:
=========================================
Create Certificates Locally using openssl
=========================================
You can use :command:`openssl` to locally create certificates suitable for
use in a lab environment.
.. rubric:: |proc|
.. _create-certificates-locally-using-openssl-steps-unordered-pln-qhc-jmb:
#. Create a Root |CA| Certificate and Key
#. Create the Root CA private key.
.. code-block:: none
$ openssl genrsa -out my-root-ca-key.pem 2048
#. Generate the Root CA x509 certificate.
.. code-block:: none
$ openssl req -x509 -new -nodes -key my-root-ca-key.pem \
-days 1024 -out my-root-ca-cert.pem -outform PEM
#. Create and Sign a Server Certificate and Key.
#. Create the Server private key.
.. code-block:: none
$ openssl genrsa -out my-server-key.pem 2048
#. Create the Server certificate signing request \(csr\).
Specify CN=<WRCP-OAM-Floating-IP> and do not specify a
challenge password.
.. code-block:: none
$ openssl req -new -key my-server-key.pem -out my-server.csr
#. Create the |SANs| list.
.. code-block:: none
$ echo subjectAltName = IP:<WRCP-OAM-Floating-IP>,DNS:registry.local,DNS:registry.central > extfile.cnf
#. Use the my-root-ca to sign the server certificate.
.. code-block:: none
$ openssl x509 -req -in my-server.csr -CA my-root-ca-cert.pem \
-CAkey my-root-ca-key.pem -CAcreateserial -out my-server-cert.pem \
-days 365 -extfile extfile.cnf
#. Put the server certificate and key into a single file.
.. code-block:: none
$ cat my-server-cert.pem my-server-key.pem > my-server.pem
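You can optionally verify the resulting server certificate against the Root |CA| and inspect its subject and validity dates; a sketch, using the file names from the steps above:

.. code-block:: none

$ openssl verify -CAfile my-root-ca-cert.pem my-server-cert.pem
$ openssl x509 -in my-server-cert.pem -noout -subject -dates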

@ -0,0 +1,145 @@
.. vaq1552681912484
.. _create-ldap-linux-accounts:
==========================
Create LDAP Linux Accounts
==========================
|prod| includes a script for creating LDAP Linux accounts with built-in
Keystone user support.
.. rubric:: |context|
The :command:`ldapusersetup` command provides an interactive method for
setting up LDAP Linux user accounts with access to StarlingX commands. You
can assign a limited shell or a bash shell.
Users have the option of providing Keystone credentials at login, and can
establish or change Keystone credentials at any time during a session.
Keystone credentials persist for the duration of the session.
Centralized management is implemented using two LDAP servers, one running on
each controller node. LDAP server synchronization is automatic using the
native LDAP content synchronization protocol.
A set of LDAP commands is available to operate on LDAP user accounts. The
commands are installed in the directory /usr/local/sbin, and are available to
any user account in the sudoers list. Included commands are
:command:`lsldap`, :command:`ldapadduser`, :command:`ldapdeleteuser`, and
several others starting with the prefix :command:`ldap`.
Use the command option --help on any command to display a brief help message,
as illustrated below.
.. code-block:: none
$ ldapadduser --help
Usage : /usr/local/sbin/ldapadduser <username> <groupname | gid> [uid]
$ ldapdeleteuser --help
Usage : /usr/local/sbin/ldapdeleteuser <username | uid>
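For example, a hypothetical invocation that adds a user to an existing group named users and then lists the LDAP entries \(the user and group names are illustrative only\):

.. code-block:: none

$ sudo ldapadduser jsmith users
$ sudo lsldap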
.. rubric:: |prereq|
For convenience, identify the user's Keystone account user name in |prod-long|.
.. note::
There is an M:M relationship between a Keystone user account and a user
Linux account. That is, the same Keystone user account may be used across
multiple Linux accounts. For example, the Keystone user **tenant user**
may be used by several Linux users, such as Kam, Greg, and Jim.
Conversely, contingent on the policy of the organization, 3 Keystone
cloud users \(Kam, Greg, and Jim\), may be used by a single Linux
account: **operator**. That is, Kam logs into |prod| with the
**operator** account, and sources Kam's Keystone user account. Jim does
the same and logs into |prod| with the **operator** account, but sources
Jim's Keystone user account.
.. rubric:: |proc|
#. Log in as **sysadmin**, and start the :command:`ldapusersetup` script.
.. code-block:: none
controller-0: ~$ sudo ldapusersetup
#. Follow the interactive steps in the script.
#. Provide a user name.
.. code-block:: none
Enter username to add to LDAP:
For convenience, use the same name as the one assigned for the user's
Keystone account. \(This example uses **user1**\). When the LDAP user
logs in and establishes Keystone credentials, the LDAP user name is
offered as the default Keystone user name.
.. code-block:: none
Successfully added user user1 to LDAP
Successfully set password for user user1
#. Specify whether to provide a limited shell or a bash shell.
.. code-block:: none
Select Login Shell option # [2]:
1) Bash
2) Lshell
To provide a limited shell with access to the StarlingX CLI only,
specify the Lshell option.
If you select Bash, you are offered the option to add the user to the
sudoer list:
.. code-block:: none
Add user1 to sudoer list? (yes/No):
#. Specify a secondary user group for this LDAP user.
.. code-block:: none
Add user1 to secondary user group (yes/No):
#. Change the password duration.
.. code-block:: none
Enter days after which user password must be changed [90]:
.. code-block:: none
Successfully modified user entry uid=ldapuser1, ou=People, dc=cgcs, dc=local in LDAP
Updating password expiry to 90 days
#. Change the warning period before the password expires.
.. code-block:: none
Enter days before password is to expire that user is warned [2]:
.. code-block:: none
Updating password expiry to 2 days
On completion of the script, the command prompt is displayed.
.. code-block:: none
controller-0: ~$
.. rubric:: |result|
The LDAP account is created. For information about the user login process,
see :ref:`Establish Keystone Credentials from a Linux Account
<establish-keystone-credentials-from-a-linux-account>`.

@ -0,0 +1,73 @@
.. luo1591184217439
.. _deprovision-windows-active-directory-authentication:
===================================================
Deprovision Windows Active Directory Authentication
===================================================
You can remove Windows Active Directory authentication from |prod-long|.
.. rubric:: |proc|
#. Remove the configuration of kube-apiserver to use oidc-auth-apps for
authentication.
#. Determine the UUIDs of parameters used in the kubernetes **kube-apiserver** group.
These include oidc\_client\_id, oidc\_groups\_claim,
oidc\_issuer\_url and oidc\_username\_claim.
.. code-block:: none
~(keystone_admin)$ system service-parameter-list
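To narrow the output to the |OIDC| parameters, you can filter the list; for example:

.. code-block:: none

~(keystone_admin)$ system service-parameter-list | grep oidc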
#. Delete each parameter.
.. code-block:: none
~(keystone_admin)$ system service-parameter-delete <UUID>
#. Apply the changes.
.. code-block:: none
~(keystone_admin)$ system service-parameter-apply kubernetes
#. Uninstall oidc-auth-apps.
.. code-block:: none
~(keystone_admin)$ system application-remove oidc-auth-apps
#. Clear the helm-override configuration.
.. code-block:: none
~(keystone_admin)$ system helm-override-update oidc-auth-apps dex kube-system --reset-values
~(keystone_admin)$ system helm-override-show oidc-auth-apps dex kube-system
~(keystone_admin)$ system helm-override-update oidc-auth-apps oidc-client kube-system --reset-values
~(keystone_admin)$ system helm-override-show oidc-auth-apps oidc-client kube-system
#. Remove secrets that contain certificate data.
.. code-block:: none
~(keystone_admin)$ kubectl delete secret local-dex.tls -n kube-system
~(keystone_admin)$ kubectl delete secret dex-client-secret -n kube-system
~(keystone_admin)$ kubectl delete secret wadcert -n kube-system
#. Remove any |RBAC| RoleBindings added for |OIDC| users and/or groups.
For example:
.. code-block:: none
$ kubectl delete clusterrolebinding testuser-rolebinding
$ kubectl delete clusterrolebinding billingdeptgroup-rolebinding

@ -0,0 +1,27 @@
.. ecz1590154334366
.. _disable-pod-security-policy-checking:
====================================
Disable Pod Security Policy Checking
====================================
You can delete the previously added PodSecurityPolicy service parameter to
disable pod security policy checking.
.. rubric:: |proc|
#. Remove the kubernetes **kube\_apiserver admission\_plugins** system
parameter to exclude PodSecurityPolicy.
.. code-block:: none
~(keystone_admin)$ system service-parameter-delete <uuid>
#. Apply the Kubernetes system parameters.
.. code-block:: none
~(keystone_admin)$ system service-parameter-apply kubernetes

@ -0,0 +1,74 @@
.. byh1570029392020
.. _enable-https-access-for-starlingx-rest-and-web-server-endpoints:
===============================================================
Enable HTTPS Access for StarlingX REST and Web Server Endpoints
===============================================================
When secure HTTPS connectivity is enabled, HTTP is disabled.
.. rubric:: |context|
.. _enable-https-access-for-starlingx-rest-and-web-server-endpoints-ul-nt1-h5f-3kb:
.. note::
When you change from HTTP to HTTPS, or from HTTPS to HTTP:
- Remote CLI users must re-source the |prod| rc file.
- Public endpoints are changed to HTTP or HTTPS, depending on which
is enabled.
- You must change the port portion of the Horizon Web interface URL.
For HTTP, use http://<oam-floating-ip-address>:8080.
For HTTPS, use https://<oam-floating-ip-address>:8443.
- You must log out of Horizon and log in again for the HTTPS access
changes to be displayed accurately.
.. rubric:: |proc|
- To enable HTTPS for StarlingX REST and Web Server endpoints:
.. code-block:: none
~(keystone_admin)$ system modify --https_enabled true
- To disable HTTPS for StarlingX REST and Web Server endpoints:
.. code-block:: none
~(keystone_admin)$ system modify --https_enabled false
- Use the following command to display HTTPS settings:
.. code-block:: none
[sysadmin@controller-0 ~(keystone_admin)]$ system show
+----------------------+--------------------------------------+
| Property             | Value                                |
+----------------------+--------------------------------------+
| contact              | None                                 |
| created_at           | 2020-02-27T15:47:23.102735+00:00     |
| description          | None                                 |
| https_enabled        | False                                |
| location             | None                                 |
| name                 | 468f57ef-34c1-4e00-bba0-fa1b3f134b2b |
| region_name          | RegionOne                            |
| sdn_enabled          | False                                |
| security_feature     | spectre_meltdown_v1                  |
| service_project_name | services                             |
| software_version     | 20.06                                |
| system_mode          | duplex                               |
| system_type          | Standard                             |
| timezone             | Canada/Eastern                       |
| updated_at           | 2020-02-28T10:56:24.297774+00:00     |
| uuid                 | c0e35924-e139-4dfc-945d-47f9a663d710 |
| vswitch_type         | none                                 |
+----------------------+--------------------------------------+

@ -0,0 +1,32 @@
.. vca1590088383576
.. _enable-pod-security-policy-checking:
===================================
Enable Pod Security Policy Checking
===================================
.. rubric:: |proc|
#. Set the kubernetes kube\_apiserver admission\_plugins system parameter to
include PodSecurityPolicy.
.. code-block:: none
~(keystone_admin)$ system service-parameter-add kubernetes kube_apiserver admission_plugins=PodSecurityPolicy
#. Apply the Kubernetes system parameters.
.. code-block:: none
~(keystone_admin)$ system service-parameter-apply kubernetes
#. View the automatically added pod security policies.
.. code-block:: none
$ kubectl get psp
$ kubectl describe psp privileged
$ kubectl describe psp restricted

@ -0,0 +1,76 @@
.. svy1588343679366
.. _enable-public-use-of-the-cert-manager-acmesolver-image:
======================================================
Enable Public Use of the cert-manager-acmesolver Image
======================================================
When an arbitrary non-admin user creates a certificate with an external |CA|,
cert-manager dynamically creates a pod \(image=cert-manager-acmesolver\)
and an ingress in the user-specified namespace in order to handle the
http01 challenge from the external CA.
.. rubric:: |context|
As part of the application-apply of cert-manager at bootstrap time, the
cert-manager-acmesolver image has been pulled from an external registry and
pushed to
registry.local:9001/quay.io/jetstack/cert-manager-acmesolver:<tag>.
However, this repository within registry.local is secured such that only
**admin** can access these images.
The registry.local:9001/quay.io/jetstack/cert-manager-acmesolver:<tag>
image needs to be copied by **admin** into a public repository,
registry.local:9001/public. If you have not yet set up a public
repository, see |admintasks-doc|: :ref:`Setting up a Public Repository
<setting-up-a-public-repository>`.
.. rubric:: |proc|
#. Determine the image tag of cert-manager-acmesolver image.
.. code-block:: none
~(keystone_admin)$ system registry-image-tags quay.io/jetstack/cert-manager-acmesolver
Copy the cert-manager-acmesolver image, replacing <TAG> with the tag
identified in the previous step.
.. code-block:: none
$ sudo docker login registry.local:9001
username: admin
password: <admin-password>
$
$ sudo docker pull registry.local:9001/quay.io/jetstack/cert-manager-acmesolver:<TAG>
$ sudo docker tag registry.local:9001/quay.io/jetstack/cert-manager-acmesolver:<TAG> registry.local:9001/public/cert-manager-acmesolver:<TAG>
$ sudo docker push registry.local:9001/public/cert-manager-acmesolver:<TAG>
#. Update the cert-manager application to use this public image.
#. Create an overrides file.
.. code-block:: none
~(keystone_admin)$ cat <<EOF > cm-override-values.yaml
acmesolver:
image:
repository: registry.local:9001/public/cert-manager-acmesolver
EOF
#. Apply the overrides.
.. code-block:: none
~(keystone_admin)$ system helm-override-update --values cm-override-values.yaml cert-manager cert-manager cert-manager
#. Reapply cert-manager.
.. code-block:: none
~(keystone_admin)$ system application-apply cert-manager

@ -0,0 +1,15 @@
.. feb1588344952218
.. _enable-the-use-of-cert-manager-apis-by-an-arbitrary-user:
========================================================
Enable the Use of cert-manager APIs by an Arbitrary User
========================================================
If you are currently binding fairly restrictive |RBAC| rules to non-admin
users, then for these users to use cert-manager you must ensure they have
access to the cert-manager apiGroups: cert-manager.io and acme.cert-manager.io.
For more information, see :ref:`Private Namespace and Restricted RBAC
<private-namespace-and-restricted-rbac>`.

@ -0,0 +1,34 @@
.. mid1588344357117
.. _enable-use-of-cert-manager-acmesolver-image-in-a-particular-namespace:
=====================================================================
Enable Use of cert-manager-acmesolver Image in a Particular Namespace
=====================================================================
When an arbitrary user creates a certificate with an external |CA|,
cert-manager dynamically creates the cert-manager-acmesolver pod and an
ingress in the user-specified namespace in order to handle the http01
challenge from the external CA.
.. rubric:: |context|
In order to pull the
registry.local:9001/public/cert-manager-acmesolver:<tag> image from the
local registry, the credentials for the public repository must be in a
secret and referenced in an ImagePullSecret in the **default**
serviceAccount of that user-specified namespace.
.. rubric:: |proc|
#. Execute the following commands, substituting your deployment-specific
value for <USERNAMESPACE>.
.. code-block:: none
% kubectl get secret registry-local-public-key -n kube-system -o yaml | grep -v '^\s*namespace:\s' | kubectl apply --namespace=<USERNAMESPACE> -f -
% kubectl patch serviceaccount default -p "{\"imagePullSecrets\": [{\"name\": \"registry-local-public-key\"}]}" -n <USERNAMESPACE>
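You can verify that the secret was copied and that the default service account now references it; a sketch, using the same namespace placeholder:

.. code-block:: none

% kubectl get secret registry-local-public-key -n <USERNAMESPACE>
% kubectl get serviceaccount default -n <USERNAMESPACE> -o jsonpath='{.imagePullSecrets[*].name}'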

@ -0,0 +1,15 @@
.. dxx1582118922443
.. _encrypt-kubernetes-secret-data-at-rest:
======================================
Encrypt Kubernetes Secret Data at Rest
======================================
By default, |prod| configures the kube-apiserver to encrypt Kubernetes
'Secret' resource data when writing it to the etcd database, and to decrypt
it when reading.
This protects sensitive information in the event of access to the etcd
database being compromised. The encryption and decryption operations are
transparent to the Kubernetes API user.
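As an illustrative check, you can read a Secret directly from etcd and confirm that the stored value is encrypted rather than plaintext. This is a sketch only; it assumes :command:`etcdctl` is available and configured with the etcd client certificates, and the registry path may differ on your system:

.. code-block:: none

$ kubectl create secret generic test-secret --from-literal=key1=supersecret
$ sudo ETCDCTL_API=3 etcdctl get /registry/secrets/default/test-secret | hexdump -C | head
$ kubectl delete secret test-secret

If encryption is active, the stored value begins with an encryption provider prefix \(for example, k8s:enc:\) rather than the plaintext secret.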

@ -0,0 +1,168 @@
.. fan1552681866651
.. _establish-keystone-credentials-from-a-linux-account:
===================================================
Establish Keystone Credentials from a Linux Account
===================================================
The preferred method for establishing Keystone credentials is to log in to
an LDAP account created using :command:`ldapusersetup`.
.. contents::
:local:
:depth: 1
.. rubric:: |context|
For more information about :command:`ldapusersetup`, see :ref:`Create LDAP
Linux Accounts <create-ldap-linux-accounts>`.
User accounts created using :command:`ldapusersetup` have access to the
Keystone CLI as part of the shell. To list the available commands, type
**?** at the command line:
.. code-block:: none
user1@controller-0:~$ ?
awk echo history ls pwd source cat clear
env grep keystone lsudo rm system cd cp
exit ll man openstack scp vim cut export
help lpath env passwd sftp kubectl helm
When a user logs in to an account of this type, they are prompted to store
Keystone credentials for the duration of the session:
.. code-block:: none
Pre-store Keystone user credentials for this session? (y/N):y
This invokes a script to obtain the credentials. The user can invoke the
same script at any time during the session as follows:
.. code-block:: none
user1@controller-0:~$ source /home/sysadmin/lshell_env_setup
Any Keystone credentials created by the script persist for the duration of
the session. This includes credentials added by previous invocations of the
script in the same session.
.. _establish-keystone-credentials-from-a-linux-account-section-N10079-N10020-N10001:
-------------------------------
The Keystone Credentials Script
-------------------------------
The Keystone credentials script offers the LDAP user name as the default
Keystone user name:
.. code-block:: none
Enter Keystone username [user1]:
It then prompts for the Keystone user domain name:

.. code-block:: none

Enter Keystone user domain name:
It requires the name of the tenant for which the user requires access:
.. code-block:: none
Enter Project name:tenant1
.. note::
The Keystone user must be a member of a Keystone tenant. This is
configured using Keystone.
It also prompts for the project domain name:

.. code-block:: none

Enter Project domain name:
It also requires the Keystone user password:
.. code-block:: none
Enter Keystone password:
When the script is run during login, it sets the default **Keystone Region
Name** and **Keystone Authentication URL**.
.. code-block:: none
Selecting default Keystone Region Name: RegionOne
Selecting default Keystone Authentication URL: http://192.168.204.2:5000/v2.0/
To re-configure your environment run "source ~/lshell_env_setup" in your shell
Keystone credentials preloaded!
If the script is run from the shell after login, it provides an option to
change the **Keystone Region Name** and **Keystone Authentication URL**.
.. _establishing-keystone-credentials-from-a-linux-account-section-N100B5-N10020-N10001:
---------------------------------------------------------
Alternative Methods for Establishing Keystone Credentials
---------------------------------------------------------
You can also establish Keystone credentials using the following methods:
.. _establish-keystone-credentials-from-a-linux-account-ul-scj-rch-t5:
- Download an OpenStack RC file \(openrc.sh\) from the Horizon Web
interface, and use it to source the required environment. For more
information, refer to `http://docs.openstack.org
<http://docs.openstack.org>`__.
.. note::
Only users with bash shell can source the required environment. This
does not apply to users with limited shell.
- Add the required environment variables manually \(see the sketch following this list\):
**OS\_USERNAME**
the Keystone user name
**OS\_USER\_DOMAIN\_NAME**
the default domain for the user
**OS\_PROJECT\_NAME**
the tenant name
**OS\_PROJECT\_DOMAIN\_NAME**
the default domain for the project
**OS\_PASSWORD**
a clear text representation of the Keystone password
**OS\_AUTH\_URL**
the Keystone Authentication URL
**OS\_IDENTITY\_API\_VERSION**
the identity API version
**OS\_INTERFACE**
the interface
**OS\_REGION\_NAME**
the Keystone Region Name
For security and reliability, add all of the variables.
- Provide credentials as command-line options.
.. code-block:: none
user1@controller-0:~$ system --os-username admin --os-password seeCaution host-list
.. caution::
|org| does not recommend using the command-line option to provide
Keystone credentials. It creates a security risk, because the
supplied credentials are visible in the command-line history.
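The following sketch illustrates the manual-export approach described in the list above; all values are examples only, and the authentication URL and region name should match the values reported at login:

.. code-block:: none

user1@controller-0:~$ export OS_USERNAME=user1
user1@controller-0:~$ export OS_USER_DOMAIN_NAME=Default
user1@controller-0:~$ export OS_PROJECT_NAME=tenant1
user1@controller-0:~$ export OS_PROJECT_DOMAIN_NAME=Default
user1@controller-0:~$ export OS_PASSWORD=<keystone-password>
user1@controller-0:~$ export OS_AUTH_URL=http://192.168.204.2:5000/v3
user1@controller-0:~$ export OS_IDENTITY_API_VERSION=3
user1@controller-0:~$ export OS_INTERFACE=internal
user1@controller-0:~$ export OS_REGION_NAME=RegionOne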

@ -0,0 +1,38 @@
.. rsl1588342741919
.. _firewall-port-overrides:
=======================
Firewall Port Overrides
=======================
Although nginx-ingress-controller is configured by default to listen on
ports 80 and 443, for security reasons these ports are not opened
automatically; opening them is an explicit action left to the system
installer/administrator.
.. rubric:: |proc|
- To open these ports, edit the existing globalnetworkpolicy
controller-oam-if-gnp, or create a new globalnetworkpolicy with your
overrides. |org| recommends creating a new globalnetworkpolicy.
For example:
.. code-block:: none
apiVersion: crd.projectcalico.org/v1
kind: GlobalNetworkPolicy
metadata:
name: gnp-oam-overrides
spec:
ingress:
- action: Allow
destination:
ports:
- 80
- 443
protocol: TCP
order: 500
selector: has(iftype) && iftype == 'oam'
types:
- Ingress
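Assuming the policy above has been saved to a file such as gnp-oam-overrides.yaml \(the file name is illustrative\), apply it and confirm it exists with :command:`kubectl`:

.. code-block:: none

$ kubectl apply -f gnp-oam-overrides.yaml
$ kubectl get globalnetworkpolicies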

@ -0,0 +1,125 @@
.. ddq1552672412979
.. _https-access-overview:
=====================
HTTPS Access Overview
=====================
You can enable secure HTTPS access and manage HTTPS certificates for all
external |prod| service endpoints.
These include:
.. _https-access-overview-ul-eyn-5ln-gjb:

- |prod| REST API applications and the |prod| web administration server

- Kubernetes API

- Local Docker registry
.. contents::
:local:
:depth: 1
.. note::
Only self-signed or Root |CA|-signed certificates are supported for the
above |prod| service endpoints. See `https://en.wikipedia.org/wiki/X.509
<https://en.wikipedia.org/wiki/X.509>`__ for an overview of root,
intermediate, and end-entity certificates.
You can also add a trusted |CA| for the |prod| system.
.. note::
The default HTTPS X.509 certificates that are used by |prod-long| for
authentication are not signed by a known authority. For increased
security, obtain, install, and use certificates that have been signed by
a Root certificate authority. Refer to the documentation for the external
Root |CA| that you are using, on how to create public certificate and
private key pairs, signed by a Root |CA|, for HTTPS.
.. _https-access-overview-section-N10048-N10024-N10001:
-------------------------------------------------------
REST API Applications and the web administration server
-------------------------------------------------------
By default, |prod| provides HTTP access to REST API application endpoints
\(Keystone, Barbican and |prod|\) and the web administration server. For
improved security, you can enable HTTPS access. When HTTPS access is
enabled, HTTP access is disabled.
When HTTPS is enabled for the first time on a |prod| system, a self-signed
certificate and key are automatically generated and installed for
REST and Web Server endpoints. In order to connect, remote clients must be
configured to accept the self-signed certificate without verifying it. This
is called insecure mode.
For secure-mode connections, a Root |CA|-signed certificate and key are
required. The use of a Root |CA|-signed certificate is strongly recommended.
Refer to the documentation for the external Root |CA| that you are using, on
how to create public certificate and private key pairs, signed by a Root |CA|,
for HTTPS.
You can update the certificate and key used by |prod| for the
REST and Web Server endpoints at any time after installation.
For additional security, |prod| optionally supports storing the private key
of the StarlingX REST and Web Server certificate in a |prod| |TPM| hardware
device. |TPM| 2.0-compliant hardware must be available on the controller
hosts.
.. _https-access-overview-section-N1004F-N10024-N10001:
----------
Kubernetes
----------
For the Kubernetes API Server, HTTPS is always enabled. Similarly, by
default, a self-signed certificate and key is generated and installed for
the Kubernetes Root |CA| certificate and key. This Kubernetes Root |CA| is
used to create and sign various certificates used within Kubernetes,
including the certificate used by the kube-apiserver API endpoint.
It is recommended that you update the Kubernetes Root |CA| with a
custom Root |CA| certificate and key, generated by yourself and trusted by
external servers connecting to |prod|'s Kubernetes API endpoint. |prod|'s
Kubernetes Root |CA| is configured as part of the bootstrap during
installation.
.. _https-access-overview-section-N10094-N10024-N10001:
---------------------
Local Docker Registry
---------------------
For the Local Docker Registry, HTTPS is always enabled. Similarly, by
default, a self-signed certificate and key is generated and installed for
this endpoint. However, it is recommended that you update the certificate
used after installation with a Root |CA|-signed certificate and key. Refer to
the documentation for the external Root |CA| that you are using, on how to
create public certificate and private key pairs, signed by a Root |CA|, for
HTTPS.
.. _https-access-overview-section-N10086-N10024-N10001:
-----------
Trusted CAs
-----------
|prod| also supports the ability to update the trusted |CA| certificate
bundle on all nodes in the system. This is required, for example, when
container images are being pulled from an external docker registry with a
certificate signed by a non-well-known |CA|.
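For example, a trusted |CA| certificate can be installed after bootstrap with the :command:`system certificate-install` command; a sketch, where the file placeholder is illustrative:

.. code-block:: none

~(keystone_admin)$ system certificate-install -m ssl_ca <trusted-ca-certificate-pem-file>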

@ -0,0 +1,311 @@
========
Contents
========
***************
System Accounts
***************
.. toctree::
:maxdepth: 1
overview-of-system-accounts
Linux User Accounts
*******************
.. toctree::
:maxdepth: 1
the-sysadmin-account
local-ldap-linux-user-accounts
create-ldap-linux-accounts
remote-access-for-linux-accounts
password-recovery-for-linux-user-accounts
establish-keystone-credentials-from-a-linux-account
Kubernetes Service Accounts
***************************
.. toctree::
:maxdepth: 1
kubernetes-service-accounts
create-an-admin-type-service-account
Keystone Accounts
*****************
.. toctree::
:maxdepth: 1
about-keystone-accounts
keystone-account-authentication
manage-keystone-accounts
configure-the-keystone-token-expiration-time
password-recovery
Password Rules
**************
.. toctree::
:maxdepth: 1
starlingx-system-accounts-system-account-password-rules
*****************
Access the System
*****************
.. toctree::
:maxdepth: 2
configure-local-cli-access
remote-access-index
security-access-the-gui
configure-http-and-https-ports-for-horizon-using-the-cli
configure-horizon-user-lockout-on-failed-logins
install-the-kubernetes-dashboard
security-rest-api-access
connect-to-container-registries-through-a-firewall-or-proxy
***************************
Manage Non-Admin Type Users
***************************
.. toctree::
:maxdepth: 1
private-namespace-and-restricted-rbac
pod-security-policies
enable-pod-security-policy-checking
disable-pod-security-policy-checking
assign-pod-security-policies
resource-management
**************************************************
User Authentication Using Windows Active Directory
**************************************************
.. toctree::
:maxdepth: 1
overview-of-windows-active-directory
configure-kubernetes-for-oidc-token-validation-while-bootstrapping-the-system
configure-kubernetes-for-oidc-token-validation-after-bootstrapping-the-system
configure-oidc-auth-applications
centralized-oidc-authentication-setup-for-distributed-cloud
configure-users-groups-and-authorization
configure-kubectl-with-a-context-for-the-user
Obtain the Authentication Token
*******************************
.. toctree::
:maxdepth: 1
obtain-the-authentication-token-using-the-oidc-auth-shell-script
obtain-the-authentication-token-using-the-browser
Deprovision Windows Active Directory
************************************
.. toctree::
:maxdepth: 1
deprovision-windows-active-directory-authentication
****************
Firewall Options
****************
.. toctree::
:maxdepth: 1
security-firewall-options
security-default-firewall-rules
*************************
Secure HTTPS Connectivity
*************************
.. toctree::
:maxdepth: 1
https-access-overview
starlingx-rest-api-applications-and-the-web-administration-server
enable-https-access-for-starlingx-rest-and-web-server-endpoints
install-update-the-starlingx-rest-and-web-server-certificate
secure-starlingx-rest-and-web-certificates-private-key-storage-with-tpm
kubernetes-root-ca-certificate
security-install-update-the-docker-registry-certificate
add-a-trusted-ca
************
Cert Manager
************
.. toctree::
:maxdepth: 1
security-cert-manager
the-cert-manager-bootstrap-process
Post Installation Setup
***********************
.. toctree::
:maxdepth: 1
firewall-port-overrides
enable-public-use-of-the-cert-manager-acmesolver-image
enable-use-of-cert-manager-acmesolver-image-in-a-particular-namespace
enable-the-use-of-cert-manager-apis-by-an-arbitrary-user
******************************
Portieris Admission Controller
******************************
.. toctree::
:maxdepth: 1
portieris-overview
install-portieris
remove-portieris
portieris-clusterimagepolicy-and-imagepolicy-configuration
********************************
Vault Secret and Data Management
********************************
.. toctree::
:maxdepth: 1
security-vault-overview
install-vault
remove-vault
Configure Vault
***************
.. toctree::
:maxdepth: 1
configure-vault
configure-vault-using-the-cli
**************************************
Encrypt Kubernetes Secret Data at Rest
**************************************
.. toctree::
:maxdepth: 1
encrypt-kubernetes-secret-data-at-rest
*************************************
Operator Login/Authentication Logging
*************************************
.. toctree::
:maxdepth: 1
operator-login-authentication-logging
************************
Operator Command Logging
************************
.. toctree::
:maxdepth: 1
operator-command-logging
****************
UEFI Secure Boot
****************
.. toctree::
:maxdepth: 1
overview-of-uefi-secure-boot
use-uefi-secure-boot
***********************
Trusted Platform Module
***********************
.. toctree::
:maxdepth: 1
tpm-configuration-considerations
***********************************
Authentication of Software Delivery
***********************************
.. toctree::
:maxdepth: 1
authentication-of-software-delivery
*******************************************************
Security Feature Configuration for Spectre and Meltdown
*******************************************************
.. toctree::
:maxdepth: 1
security-feature-configuration-for-spectre-and-meltdown
***************************
Locally Create Certificates
***************************
.. toctree::
:maxdepth: 1
create-certificates-locally-using-openssl
create-certificates-locally-using-cert-manager-on-the-controller
*****************************
Security Hardening Guidelines
*****************************
.. toctree::
:maxdepth: 1
security-hardening-intro
Recommended Security Features with a Minimal Performance Impact
***************************************************************
.. toctree::
:maxdepth: 1
uefi-secure-boot
Secure System Accounts
**********************
.. toctree::
:maxdepth: 1
local-linux-account-for-sysadmin
local-and-ldap-linux-user-accounts
starlingx-accounts
web-administration-login-timeout
ssh-and-console-login-timeout
system-account-password-rules
Security Features
*****************
.. toctree::
:maxdepth: 1
secure-https-external-connectivity
security-hardening-firewall-options
isolate-starlingx-internal-cloud-management-network

@ -0,0 +1,54 @@
.. xss1596546751114
.. _install-portieris:
=================
Install Portieris
=================
You can install Portieris on |prod| from the command line.
.. rubric:: |proc|
#. Locate the Portieris tarball in /usr/local/share/applications/helm.
For example:
/usr/local/share/applications/helm/portieris-<version>.tgz
#. Upload the application.
.. code-block:: none
~(keystone_admin)$ system application-upload /usr/local/share/applications/helm/portieris-<version>.tgz
#. Set caCert helm overrides if applicable.
In order to specify registries or notary servers signed by a custom |CA|
certificate, the caCert: CERTIFICATE override must be added to the
portieris-certs helm chart. This must be passed as the base64 encoding of
the |CA| certificate data, and may contain one or more |CA| certificates.
For example:
#. Create the caCert.yaml override file.
.. code-block:: none
~(keystone_admin)$ echo 'caCert: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURYVENDQWtXZ0F3SUJBZ0lKQUpjVHBXcTk4SWNSTUEwR0NTcUdTSWIzRFFFQkN3VUFNRVV4Q3pBSkJnTlYKQkFZVEFrRlZNUk13RVFZRFZRUUlEQXBUYjIxbExWTjBZWFJsTVNFd0h3WURWUVFLREJoSmJuUmxjbTVsZENCWAphV1JuYVhSeklGQjBlU0JNZEdRd0hoY05NVGd3T0RFMk1qQXlPREl3V2hjTk1qRXdOakExTWpBeU9ESXdXakJGCk1Rc3dDUVlEVlFRR0V3SkJWVEVUTUJFR0ExVUVDQXdLVTI5dFpTMVRkRYwWlRFaE1COEdBMVVFQ2d3WVNXNTAKWlhKdVpYUWdWMmxrWjJsMGN5QlFkSGtnVEhSa01JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQgpDZ0tDQVFFQXV4YXJMaVdwMDVnbG5kTWRsL1o3QmhySDFPTFNTVTcwcm9mV3duTmNQS3hsOURmVVNWVTZMTDJnClppUTFVZnA4TzFlVTJ4NitPYUxxekRuc2xpWjIxdzNXaHRiOGp2NmRFakdPdTg3eGlWWDBuSDBmSjF3cHFBR0UKRkVXekxVR2dJM29aUDBzME1Sbm1xVDA4VWZ6S0hCaFgvekNvNHMyVm9NcWxRNyt0Qjc2dTA3V3NKYQ0RFlQVwprR2tFVmRMSk4rWWcwK0pLaisvVU9kbE5WNDB2OE1ocEhkbWhzY1QyakI3WSszT0QzeUNxZ1RjRzVDSDQvK3J6CmR4Qjk3dEpMM2NWSkRQWTVNQi9XNFdId2NKRkwzN1p1M0dVdmhmVGF3NVE0dS85cTFkczgrVGFYajdLbWUxSzcKQnYyMTZ5dTZiN3M1ckpHU2lEZ0p1TWFNcm5YajFRSURBUUFCbzFBd1RqQWRCZ05WSFE0RUZnUVVyQndhbTAreApydUMvY3Vpbkp1RlM4Y1ZibjBBd0h3WURWUjBqQkJnd0ZvQVVyQndhbTAreHJ1Qy9jdWluSnVGUzhjVmJuMEF3CkRBWURWUjBUQFVd0F3RUIvekFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBZzJ5aEFNazVJUlRvOWZLc1IvMXkKMXJ5NzdSWU5KN1R2dTB0clltRElBMVRaanFtanlncFFiSmlGb0FPa255eHYveURLU0x6TXFNU2JIb0I1K1BhSQpnTERub0F6SnYxbzg3OEpkVllURjIyS2RUTU5wNWtITXVGMnpSTFFxc2lvenJQSUpWMDlVb2VHeHpPQ1pkYzZBCnpUblpCSy9DVTlRcnhVdzhIeDV6SEFVcHdVcGxONUE4MVROUmlMYVFVTXB5dzQ4Y08wNFcyOWY1aFA2aGMwVDMKSDJpU212OWY2K3Q5TjBvTTFuWVh1blgwWNJZll1aERmQy83c3N3eDhWcW5uTlNMN0lkQkhodGxhRHJGRXBzdQpGZzZOODBCbGlDclJiN2FPcUk4TWNjdzlCZW9UUk9uVGxVUU5RQkEzTjAyajJvTlhYL2loVHQvZkhNYlZGUFRQCi9nPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=' > caCert.yaml
#. Apply the override file.
.. code-block:: none
~(keystone_admin)$ system helm-override-update portieris portieris-certs portieris --values caCert.yaml
#. Apply the application.
.. code-block:: none
~(keystone_admin)$ system application-apply portieris
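You can monitor the Portieris pods as they start; a sketch, assuming the application deploys into the portieris namespace:

.. code-block:: none

$ watch kubectl get pods -n portieris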

@ -0,0 +1,130 @@
.. uxg1581955143110
.. _install-the-kubernetes-dashboard:
================================
Install the Kubernetes Dashboard
================================
You can optionally use the Kubernetes Dashboard web interface to perform
cluster management tasks.
.. rubric:: |context|
Kubernetes Dashboard allows you to perform common cluster management tasks
such as deployment, resource allocation, real-time and historic status
review, and troubleshooting.
.. rubric:: |prereq|
You must have **cluster-admin** |RBAC| privileges to install Kubernetes
Dashboard.
.. rubric:: |proc|
.. _install-the-kubernetes-dashboard-steps-azn-yyd-tkb:
#. Create a namespace for the Kubernetes Dashboard.
.. code-block:: none
~(keystone_admin)$ kubectl create namespace kubernetes-dashboard
#. Create a certificate for use by the Kubernetes Dashboard.
.. note::
This example uses a self-signed certificate. In a production
deployment, using a certificate signed by a trusted
Certificate Authority is strongly recommended.
#. Create a location to store the certificate.
.. code-block:: none
~(keystone_admin)$ cd /home/sysadmin
~(keystone_admin)$ mkdir -p /home/sysadmin/kube/dashboard/certs
#. Create the certificate.
.. code-block:: none
~(keystone_admin)$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /home/sysadmin/kube/dashboard/certs/dashboard.key -out /home/sysadmin/kube/dashboard/certs/dashboard.crt -subj "/CN=<FQDN>"
where:
**<FQDN>**
The fully qualified domain name for the |prod| cluster's OAM floating IP.
#. Create a kubernetes secret for holding the certificate and private key.
.. code-block:: none
~(keystone_admin)$ kubectl -n kubernetes-dashboard create secret generic kubernetes-dashboard-certs --from-file=tls.crt=/home/sysadmin/kube/dashboard/certs/dashboard.crt --from-file=tls.key=/home/sysadmin/kube/dashboard/certs/dashboard.key
#. Configure the kubernetes-dashboard manifest:
#. Download the recommended.yaml file.
.. code-block:: none
~(keystone_admin)$ wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
#. Edit the file.
Comment out the auto-generate-certificates argument and add the
tls-cert-file and tls-key-file arguments.
The updates should look like:
.. code-block:: none
...
args:
# - --auto-generate-certificates
- --namespace=kubernetes-dashboard
- --tls-cert-file=/tls.crt
- --tls-key-file=/tls.key
...
#. Apply the kubernetes dashboard recommended.yaml manifest.
.. code-block:: none
~(keystone_admin)$ kubectl apply -f recommended.yaml
#. Patch the kubernetes dashboard service to type=NodePort and port=30000.
.. code-block:: none
~(keystone_admin)$ kubectl patch service kubernetes-dashboard -n kubernetes-dashboard -p '{"spec":{"type":"NodePort","ports":[{"port":443, "nodePort":30000}]}}'
#. Test the Kubernetes Dashboard deployment.
The Kubernetes Dashboard is listening on port 30000 at the FQDN defined
above for the |prod| cluster's OAM floating IP.
#. Access the dashboard at https://<fqdn>:30000
Because the certificate created earlier in this procedure was not
signed by a trusted |CA|, you will need to acknowledge an insecure
connection from the browser.
#. Select the **Kubeconfig** option for signing in to the Kubernetes
Dashboard. Note that typically your kubeconfig file on a remote host is
located at $HOME/.kube/config. You may have to copy it to somewhere
more accessible.
You are presented with the Kubernetes Dashboard for the current context
\(cluster, user and credentials\) specified in the kubeconfig file.

@ -0,0 +1,46 @@
.. law1570030645265
.. _install-update-the-starlingx-rest-and-web-server-certificate:
=================================================================
Install/Update the StarlingX REST and Web Server Certificate
=================================================================
Use the following procedure to install or update the certificate for the REST
API application endpoints \(Keystone, Barbican and StarlingX\) and the web
administration server.
.. rubric:: |prereq|
Obtain a Root |CA|-signed certificate and key from a trusted Root |CA|.
Refer to the documentation for the external Root |CA| that you are using,
on how to create public certificate and private key pairs, signed by a Root
|CA|, for HTTPS.
.. xbooklink
For lab purposes, see :ref:`Locally Creating Certificates
<creating-certificates-locally-using-openssl>` for how to create a test
Root |CA| certificate and key, and use it to sign test certificates.
Put the |PEM| encoded versions of the certificate and key in a single file,
and copy the file to the controller host.
.. rubric:: |proc|
- Install/update the copied certificate.
For example:
.. code-block:: none
~(keystone_admin)$ system certificate-install <pathTocertificateAndKey>
where:
**<pathTocertificateAndKey>**
is the path to the file containing both the Root |CA|-signed certificate
and private key to install.

@ -0,0 +1,73 @@
.. ngo1596216203295
.. _install-vault:
=============
Install Vault
=============
Vault is packaged as an Armada system application and is managed using the
:command:`system application` and :command:`system helm-override` commands.
.. rubric:: |context|
.. note::
Vault requires a storage backend with PVC enabled \(for example, Ceph\).
To install Vault, use the following procedure:
.. rubric:: |proc|
#. Locate the Vault tarball in /usr/local/share/applications/helm.
For example, /usr/local/share/applications/helm/vault-20.06-9.tgz.
#. Upload Vault, using the following command:
.. code-block:: none
$ system application-upload /usr/local/share/applications/helm/vault-20.06-9.tgz
#. Verify the Vault tarball has been uploaded.
.. code-block:: none
$ system application-list
#. Apply the Vault application.
.. code-block:: none
$ system application-apply vault
#. Monitor the status.
.. code-block:: none
$ watch -n 5 system application-list
or
.. code-block:: none
$ watch kubectl get pods -n vault
It takes a few minutes for all the pods to start and for Vault-manager
to initialize the cluster.
The default configuration for the installed Vault application is:
**Vault-manager**
Runs as a statefulset, replica count of 1
**Vault-agent-injector**
Runs as a deployment, replica count of 1
**Vault**
Runs as statefulset, replica count is 1 on systems with fewer
than 3 nodes, replica count is 3 on systems with 3 or more nodes
For more information, see :ref:`Configure Vault <configure-vault>`.

@ -0,0 +1,16 @@
.. djr1595963316444
.. _isolate-starlingx-internal-cloud-management-network:
=====================================================
Isolate StarlingX's Internal Cloud Management Network
=====================================================
|prod| internal networks should be configured as private networks visible
only to the hosts on the cluster.
For information on internal networks, see the :ref:`StarlingX Planning Guide
<overview-of-starlingx-planning>`. Proper switch configuration is required
to achieve this isolation.

@ -0,0 +1,13 @@
.. ehm1552681851145
.. _keystone-account-authentication:
===============================
Keystone Account Authentication
===============================
|prod| tenants and users are authenticated by the Keystone identity service.
For tenant and user authentication, |prod|'s Keystone identity service
supports the local SQL Database authentication backend.

@ -0,0 +1,80 @@
.. imj1570020645091
.. _kubernetes-root-ca-certificate:
==============================
Kubernetes Root CA Certificate
==============================
By default, the K8S Root |CA| Certificate and Key are auto-generated and
result in the use of certificates signed by an unknown |CA| for Kubernetes;
for example, for the Kubernetes API server.
It is recommended that you update the Kubernetes Root |CA| with a custom
Root |CA| certificate and key, generated by yourself and trusted by external
servers connecting to |prod|'s Kubernetes API endpoint.
.. xbooklink
See :ref:`Locally Creating Certificates
<creating-certificates-locally-using-openssl>` for how to create a
private Root |CA| certificate and key.
Use the bootstrap override values <k8s\_root\_ca\_cert> and
<k8s\_root\_ca\_key>, as part of the installation procedure to specify the
certificate and key for the Kubernetes root |CA|.
**<k8s\_root\_ca\_cert>**
Specifies the certificate for the Kubernetes root |CA|. The
<k8s\_root\_ca\_cert> value is the absolute path of the certificate
file. The certificate must be in |PEM| format and the value must be
provided as part of a pair with <k8s\_root\_ca\_key>. The playbook will
not proceed if only one value is provided.
**<k8s\_root\_ca\_key>**
Specifies the key for the Kubernetes root |CA|. The <k8s\_root\_ca\_key>
value is the absolute path of the key file. The key
must be in |PEM| format and the value must be provided as part of a pair
with <k8s\_root\_ca\_cert>. The playbook will not proceed if only one
value is provided.
.. caution::
The default duration for the generated Kubernetes Root |CA|
certificate is 10 years. Replacing the Root |CA| certificate is an
involved process so the custom certificate expiry should be as long
as possible. We recommend ensuring Root |CA| certificate has an
expiry of at least 5-10 years.
The administrator can also provide values to add to the Kubernetes
API server certificate Subject Alternative Name list using the
<apiserver\_cert\_sans> override parameter.
**apiserver\_cert\_sans**
Specifies a list of Subject Alternative Name entries that will be added
to the Kubernetes API server certificate. Each entry in the list must
be an IP address or domain name. For example:
.. code-block:: none
apiserver_cert_sans:
- hostname.domain
- 198.51.100.75
|prod| automatically updates this parameter to include IP records
for the |OAM| floating IP and both |OAM| unit IP addresses. Any DNS names
associated with the |OAM| floating IP address should be added.
.. _kubernetes-root-ca-certificate-section-g1j-45b-jmb:
.. rubric:: |postreq|
Make the K8S Root |CA| certificate available to any remote server wanting to
connect remotely to the |prod|'s Kubernetes API, e.g. through kubectl or helm.
See the step :ref:`2.b
<security-install-kubectl-and-helm-clients-directly-on-a-host>` in
*Install Kubectl and Helm Clients Directly on a Host*.

@ -0,0 +1,27 @@
.. oud1564679022947
.. _kubernetes-service-accounts:
===========================
Kubernetes Service Accounts
===========================
|prod| uses Kubernetes service accounts and Kubernetes |RBAC| policies to
identify and manage remote access to Kubernetes resources using the
Kubernetes API, kubectl CLI or the Kubernetes Dashboard.
.. note::
|prod| can also use user accounts defined in an external Windows Active
Directory to authenticate Kubernetes API, :command:`kubectl` CLI or the
Kubernetes Dashboard. For more information, see :ref:`Configure OIDC
Auth Applications <configure-oidc-auth-applications>`.
You can create and manage Kubernetes service accounts using
:command:`kubectl` as shown below.
.. note::
It is recommended that you create and manage service accounts within the
kube-system namespace. See :ref:`Create an Admin Type Service
Account <create-an-admin-type-service-account>`.

@ -0,0 +1,67 @@
.. xgp1595963622893
.. _local-and-ldap-linux-user-accounts:
==================================
Local and LDAP Linux User Accounts
==================================
You can manage regular Linux \(shadow\) user accounts on any host in the
cluster using standard Linux commands.
.. _local-and-ldap-linux-user-accounts-ul-zrv-zwf-mmb:
Local Linux user accounts should NOT be configured; use local LDAP
accounts for internal system purposes that would usually not be created by
an end-user.
Password changes are not enforced automatically on the first login, and
they are not propagated by the system \(only for 'sysadmin'\).
.. note::
If the administrator wants to provision additional access to the
system, it is better to configure local LDAP Linux accounts.
- LDAP accounts are centrally managed; changes made on any host are
propagated automatically to all hosts on the cluster.
- LDAP user accounts behave as any local user account. They can be added
to the sudoers list and can acquire OpenStack administration credentials.
- The initial password must be changed immediately upon the first login.
- Login sessions are logged out automatically after about 15 minutes of
inactivity.
- The accounts are blocked following five consecutive unsuccessful login
attempts. They are unblocked automatically after a period of about five minutes.
- All authentication attempts are recorded on the file /var/log/auth.log
of the target host.
.. note::
For security reasons, it is recommended that ONLY admin level users
be allowed to SSH to the nodes of |prod|. Non-admin level users
should strictly use remote CLIs or remote web GUIs.
Operational complexity:
.. _local-and-ldap-linux-user-accounts-ul-bsv-zwf-mmb:
- Password aging is automatically configured.
- LDAP user accounts \(operator, admin\) are available by default on
newly deployed hosts. For increased security, the admin and operator
accounts must be used from the console ports of the hosts; no SSH access is
allowed.
- |prod| includes a script for creating LDAP Linux accounts with built-in
Keystone user support. It provides an interactive method for setting up
LDAP Linux user accounts with access to OpenStack commands. You can assign
a limited shell or a bash shell.

@ -0,0 +1,83 @@
.. eof1552681926485
.. _local-ldap-linux-user-accounts:
==============================
Local LDAP Linux User Accounts
==============================
You can create regular Linux user accounts using the |prod| LDAP service.
LDAP accounts are centrally managed; changes made on any host are propagated
automatically to all hosts on the cluster.
The intended use of these accounts is to provide additional admin level user
accounts \(in addition to sysadmin\) that can SSH to the nodes of the |prod|.
.. note::
For security reasons, it is recommended that ONLY admin level users be
allowed to SSH to the nodes of the |prod|. Non-admin level users should
strictly use remote CLIs or remote web GUIs.
Apart from being centrally managed, LDAP user accounts behave as any local
user account. They can be added to the sudoers list, and can acquire
Keystone administration credentials when executing on the active controller.
LDAP user accounts share the following set of attributes:
.. _local-ldap-linux-user-accounts-ul-d4q-g5c-5p:
- The initial password is the name of the account.
- The initial password must be changed immediately upon first login.
- For complete details on password rules, see :ref:`System Account Password Rules <starlingx-system-accounts-system-account-password-rules>`.
- Login sessions are logged out automatically after about 15 minutes of
inactivity.
- The accounts are blocked following five consecutive unsuccessful login
attempts. They are unblocked automatically after a period of about five
minutes.
- All authentication attempts are recorded on the file /var/log/auth.log
of the target host.
- Home directories and passwords are backed up and restored by the system
backup utilities. Note that only passwords are synced across hosts \(both
LDAP users and **sysadmin**\). Home directories are not automatically synced
and are local to that host.
The following LDAP user accounts are available by default on newly deployed
hosts, regardless of their personality:
**operator**
A cloud administrative account, comparable to the default **admin**
account used in the web management interface.
This user account operates on a restricted Linux shell, with very
limited access to native Linux commands. However, the shell is
preconfigured to have administrative access to StarlingX commands.
**admin**
A host administrative account. It has access to all native Linux
commands and is included in the sudoers list.
For increased security, the **admin** and **operator** accounts must be used
from the console ports of the hosts; no SSH access is allowed.
.. _local-ldap-linux-user-accounts-ul-h22-ql4-tz:
- These accounts serve as system access redundancies in the event that SSH
access is unavailable. In the event of any issues with connectivity, user
lockout, or **sysadmin** passwords being forgotten or not getting propagated
properly, the presence of these accounts can be essential in gaining access
to the deployment and rectifying things. This is why these accounts are
restricted to the console port only, as a form of “manual override.” The
**operator** account enables access to the cloud deployment only, without
giving unabated sudo access to the entire system.

@ -0,0 +1,28 @@
.. ggg1595963659829
.. _local-linux-account-for-sysadmin:
==================================
Local Linux Account for 'sysadmin'
==================================
This is a local, per-host, sudo-enabled account created automatically when
a new host is provisioned.
This Linux user account is used by the system administrator as it has
extended privileges.
.. _local-linux-account-for-sysadmin-ul-zgk-1wf-mmb:
- The initial password must be changed immediately when you log in to the
initial host for the first time.
- After five consecutive unsuccessful login attempts, further attempts
are blocked for about five minutes.
Operational complexity: None.
The above security hardening features are set by default \(see :ref:`System
Account Password Rules <system-account-password-rules>` for password rules\).
.. ikv1595849619976
.. _manage-keystone-accounts:
========================
Manage Keystone Accounts
========================
See
`https://docs.openstack.org/keystone/pike/admin/cli-manage-projects-users-and-roles.html
<https://docs.openstack.org/keystone/pike/admin/cli-manage-projects-users-and-roles.html>`__
for details on managing Keystone projects, users, and roles.
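For example, a minimal sketch of creating a project and a user with the
OpenStack CLI \(the names and password are illustrative\):

.. code-block:: none

   ~(keystone_admin)$ openstack project create new-project
   ~(keystone_admin)$ openstack user create --password <password> --project new-project new-user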
.. note::
All Kubernetes accounts are subject to system password rules. For
complete details on password rules, see :ref:`System Account Password
Rules <starlingx-system-accounts-system-account-password-rules>`.
.. _managing-keystone-accounts-ol-wyq-l4d-mmb:
If using the FIXME: REMOVE, when changing the Keystone 'admin' user
password, you must:
#. Update the password in the 'system-endpoint' secret in the FIXME:
REMOVE's deployment-config.yaml file, with the new Keystone 'admin'
user password.
Make this change to the OS\_PASSWORD value. It must be base64 encoded. For example:
.. code-block:: none
OS_PASSWORD: U3Q4cmxpbmdYKg==
#. Apply the updated deployment configuration.
.. code-block:: none
kubectl apply -f deployment-config.yaml
.. fvd1581384193662
.. _obtain-the-authentication-token-using-the-browser:
=================================================
Obtain the Authentication Token Using the Browser
=================================================
You can obtain the authentication token using the **oidc-auth-apps** |OIDC|
client web interface.
.. rubric:: |context|
Use the following steps to obtain the authentication token for id-token and
refresh-token using the **oidc-auth-apps** |OIDC| client web interface.
.. rubric:: |proc|
#. Use the following URL to log in to the **oidc-auth-apps** |OIDC| client:
``https://<oam-floating-ip-address>:30555``
#. If the |prod| **oidc-auth-apps** has been configured for multiple
'**ldap**' connectors, select the Windows Active Directory server for
authentication.
#. Enter your Username and Password.
#. Click Login. The ID token and Refresh token are displayed as follows:
.. code-block:: none
ID Token:
eyJhbGciOiJSUzI1NiIsImtpZCI6IjQ4ZjZkYjcxNGI4ODQ5ZjZlNmExM2Y2ZTQzODVhMWE1MjM0YzE1NTQifQ.eyJpc3MiOiJodHRwczovLzEyOC4yMjQuMTUxLjE3MDozMDU1Ni9kZXgiLCJzdWIiOiJDZ2R3ZG5SbGMzUXhFZ1JzWkdGdyIsImF1ZCI6InN0eC1vaWRjLWNsaWVudC1hcHAiLCJleHAiOjE1ODI1NzczMTksImlhdCI6MTU4MjU3NzMwOSwiYXRfaGFzaCI6ImhzRG1kdTFIWGFCcXFNLXBpYWoyaXciLCJlbWFpbCI6InB2dGVzdDEiLCJlbWFpbF92ZXJpZmllZCI6dHJ1ZSwibmFtZSI6InB2dGVzdDEifQ.TEZ-YMd8kavTGCw_FUR4iGQWf16DWsmqxW89ZlKHxaqPzAJUjGnW5NRdRytiDtf1d9iNIxOT6cGSOJI694qiMVcb-nD856OgCvU58o-e3ZkLaLGDbTP2mmoaqqBYW2FDIJNcV0jt-yq5rc9cNQopGtFXbGr6ZV2idysHooa7rA1543EUpg2FNE4qZ297_WXU7x0Qk2yDNRq-ngNQRWkwsERM3INBktwQpRUg2na3eK_jHpC6AMiUxyyMu3o3FurTfvOp3F0eyjSVgLqhC2Rh4xMbK4LgbBTN35pvnMRwOpL7gJPgaZDd0ttC9L5dBnRs9uT-s2g4j2hjV9rh3KciHQ
Access Token:
wcgw4mhddrk7jd24whofclgmj
Claims:
{
"iss": "https://128.224.151.170:30556/dex",
"sub": "CgdwdnRlc3QxEgRsZGFw",
"aud": "stx-oidc-client-app",
"exp": 1582577319,
"iat": 1582577319,
"at_hash": "hsDmdu1HXaBqqM-piaj2iw",
"email": "testuser",
"email_verified": true,
"groups": [
"billingDeptGroup",
"managerGroup"
],
"name": "testuser"
}
Refresh Token:
ChljdmoybDZ0Y3BiYnR0cmp6N2xlejNmd3F5Ehlid290enR5enR1NWw1dWM2Y2V4dnVlcHli
#. Use the ID token to set the Kubernetes credentials in the kubectl config:
.. code-block:: none
~(keystone_admin)$ TOKEN=<ID_token_string>
~(keystone_admin)$ kubectl config set-credentials testuser --token $TOKEN
#. Switch to the Kubernetes context for the user, by using the following
command, for example:
.. code-block:: none
~(keystone_admin)$ kubectl config use-context testuser@mywrcpcluster
#. Run the following command to test that the authentication token
validates correctly:
.. code-block:: none
~(keystone_admin)$ kubectl get pods --all-namespaces
.. lrf1583447064969
.. _obtain-the-authentication-token-using-the-oidc-auth-shell-script:
================================================================
Obtain the Authentication Token Using the oidc-auth Shell Script
================================================================
You can obtain the authentication token using the **oidc-auth** shell script.
.. rubric:: |context|
You can use the **oidc-auth** script both locally on the active controller
and on a remote workstation where you are running **kubectl** and
**helm** commands.
The **oidc-auth** script retrieves the ID token from Windows Active
Directory using the |OIDC| client and **dex**, and updates the Kubernetes
credentials for the user in the **kubectl** config file.
.. _obtain-the-authentication-token-using-the-oidc-auth-shell-script-ul-kxm-qnf-ykb:
- On controller-0, **oidc-auth** is installed as part of the base |prod|
installation, and ready to use.
.. xbooklink
- On a remote workstation using remote-cli container, **oidc-auth** is
installed within the remote-cli container, and ready to use. For more
information on configuring remote CLI access, see |sysconf-doc|:
:ref:`Configure Remote CLI Access <configure-remote-cli-access>`.
- On a remote host, when using directly installed **kubectl** and **helm**, the following setup is required:
- Install the Python "mechanize" module using the following command:
.. code-block:: none
# sudo pip2 install mechanize
- Get the **oidc-auth** script from WindShare.
.. note::
**oidc-auth** script supports authenticating with a |prod|
**oidc-auth-apps** configured with single, or multiple **ldap**
connectors.
.. rubric:: |proc|
#. Run **oidc-auth** script in order to authenticate and update user
credentials in **kubectl** config file with the retrieved token.
- If **oidc-auth-apps** is deployed with a single backend **ldap** connector, run the following command:
.. code-block:: none
~(keystone_admin)$ oidc-auth -c <ip> -u <username>
For example,
.. code-block:: none
~(keystone_admin)$ oidc-auth -c <OAM_ip_address> -u testuser
Password:
Login succeeded.
Updating kubectl config ...
User testuser set.
- If **oidc-auth-apps** is deployed with multiple backend **ldap** connectors, run the following command:
.. code-block:: none
~(keystone_admin)$ oidc-auth -b <connector-id> -c <ip> -u <username>
.. note::
If you are running **oidc-auth** within the |prod| containerized
remote CLI, you must use the -p <password> option to run the command
non-interactively.
.. blo1552681488499
.. _operator-command-logging:
========================
Operator Command Logging
========================
|prod| logs all REST API operator commands and SNMP commands.
The logs include the timestamp, tenant name \(if applicable\), user name,
command executed, and command status \(success or failure\).
The files are located under the /var/log directory, and are named using the
convention \*-api.log. Each component that generates its own API log files
\(for example, Keystone, Barbican, and so forth\), each |prod| /
StarlingX-specific component, the updating \(patching\) system, and the SNMP
agent follow this convention.
You can examine the log files locally on the controllers, or using a remote
log server if the remote logging feature is configured. The one exception is
patching-api.log: for robustness during software updates, the |prod|
updating system uses minimal system facilities and does not use **syslog**,
so its logs are not available on a remote log server.
.. _operator-command-logging-section-N10047-N10023-N10001:
-------
Remarks
-------
.. _operator-command-logging-ul-plj-htv-1z:
- For the |prod| :command:`system` command, whenever a REST API call is
made that is either a POST, PATCH, PUT, or DELETE, |prod| logs these events
to the log file /var/log/sysinv-api.log:
- POST - means creating something
- PATCH - means partially updating \(modifying\) something
- PUT - means fully updating \(modifying\) something
- DELETE - means deleting something
:command:`system modify --description="A TEST"` is logged to sysinv-api.log because it issues a REST POST call.
:command:`system snmp-comm-delete "TEST\_COMMUNITY1"` is logged to sysinv-api.log because it issues a REST DELETE call.
- If the :command:`sysinv` command only issues a REST GET call, it is not logged.
- :command:`fm event-list` is not logged because this performs a sysinv REST GET call
- :command:`fm event-show<xx>` is not logged because this performs a sysinv REST GET call
- All SNMP commands are logged, including GET, GETNEXT, GETBULK, and SET commands. SNMP TRAPs are not logged.
.. efv1552681472194
.. _operator-login-authentication-logging:
=====================================
Operator Login/Authentication Logging
=====================================
|prod| logs all operator login and authentication attempts.
For security purposes, all login attempts \(success and failure\) are
logged. This includes the Horizon Web interface and SSH logins as well as
internal local LDAP login attempts and internal database login attempts.
SNMP authentication requests \(success and failure\) are logged with
operator commands \(see :ref:`Operator Command Logging
<operator-command-logging>`\). Authentication failures are logged
explicitly, whereas successful authentications are logged when the request
is logged.
The logs include the timestamp, user name, remote IP Address, and number of
failed login attempts \(if applicable\). They are located under the /var/log
directory, and include the following:
.. _operator-login-authentication-logging-ul-wg4-bkz-zw:
- /var/log/auth.log
- /var/log/horizon.log
- /var/log/pmond.log
- /var/log/hostwd.log
- /var/log/snmp-api.log
- /var/log/sysinv.log
- /var/log/user.log
- /var/log/ima.log
You can examine the log files locally on the controllers, or using a remote
log server if the remote logging feature is configured.
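For example, a quick local check for recent failed SSH login attempts \(a
sketch using standard Linux tooling; the exact log format may vary\):

.. code-block:: none

   $ sudo grep "Failed password" /var/log/auth.log | tail -5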
.. lgd1552571882796
.. _overview-of-system-accounts:
=====================================
Overview of StarlingX System Accounts
=====================================
A brief description of the system accounts available in a |prod| system.
.. _overview-of-system-accounts-section-N1001F-N1001C-N10001:
------------------------
Types of System Accounts
------------------------
- **sysadmin Local Linux Account**
This is a local, per-host, account created automatically when a new host
is provisioned. This account has extended privileges and is used by the
system administrator.
- **Local Linux User Accounts**
These are local, regular Linux user accounts that are typically used for
internal system purposes and generally should not be created by an end
user.
If the administrator wants to provision additional access to the system,
it is better to configure local LDAP Linux accounts.
- **Local LDAP Linux User Accounts**
|prod| provides support for Local LDAP Linux User Accounts. Local LDAP
accounts are centrally managed; changes to local LDAP accounts made on
any host are propagated automatically to all hosts on the cluster.
|prod| includes a set of scripts for creating LDAP Linux accounts with
support for providing Keystone user account credentials. \(The scripts do
not create Keystone accounts for you. The scripts allow for sourcing or
accessing the Keystone user account credentials.\)
The intended use of these accounts is to provide additional admin level
user accounts \(in addition to sysadmin\) that can SSH to the nodes of
the |prod|.
.. note::
For security reasons, it is recommended that ONLY admin level users
be allowed to SSH to the nodes of the |prod|. Non-admin level users
should strictly use remote CLIs or remote web GUIs.
These Local LDAP Linux user accounts can be associated with a Keystone
account. You can use the provided scripts to create these Local LDAP
Linux user accounts and synchronize them with the credentials of an
associated Keystone account, so that the Linux user can leverage
StarlingX CLI commands.
- **Kubernetes Service Accounts**
|prod| uses Kubernetes service accounts and |RBAC| policies for
authentication and authorization of users of the Kubernetes API, CLI, and
Dashboard.
- **Keystone Accounts**
|prod-long| uses Keystone for authentication and authorization of users
of the StarlingX REST APIs, the CLI, the Horizon Web interface and the
Local Docker Registry. |prod|'s Keystone uses the default local SQL
Backend.
- **Remote Windows Active Directory Accounts**
|prod| can optionally be configured to use remote Windows Active
Directory Accounts and native Kubernetes |RBAC| policies for
authentication and authorization of users of the Kubernetes API,
CLI, and Dashboard.
.. zrf1552681385017
.. _overview-of-uefi-secure-boot:
============================
Overview of UEFI Secure Boot
============================
Secure Boot is an optional capability of |UEFI| firmware.
Secure Boot is a technology where the system firmware checks that the system
boot loader is signed with a cryptographic key authorized by a database
contained in the firmware or a security device.
|prod|'s implementation of Secure Boot also validates the signature of the
second-stage boot loader, the kernel, and kernel modules.
|prod|'s public key, for programming in the hardware's Secure Boot database,
can be found in the |prod| ISO.
.. tvb1581377605743
.. _overview-of-windows-active-directory:
====================================
Overview of Windows Active Directory
====================================
|prod-long| can be configured to use a remote Windows Active Directory server
to authenticate users of the Kubernetes API, using the **oidc-auth-apps**
application.
The **oidc-auth-apps** application installs a proxy |OIDC| identity provider
that can be configured to proxy authentication requests to an LDAP \(s\)
identity provider, such as Windows Active Directory. For more information,
see, `https://github.com/dexidp/dex <https://github.com/dexidp/dex>`__. The
**oidc-auth-apps** application also provides an |OIDC| client for accessing
the username and password |OIDC| login page for user authentication and
retrieval of tokens. An **oidc-auth** CLI script, available on Wind Share, at
`https://windshare.windriver.com/ <https://windshare.windriver.com/>`__, can
also be used for |OIDC| user authentication and retrieval of tokens.
In addition to installing and configuring the **oidc-auth-apps**
application, the admin must also configure Kubernetes cluster's
**kube-apiserver** to use the **oidc-auth-apps** |OIDC| identity provider for
validation of tokens in Kubernetes API requests.
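As an illustration only \(the exact parameter names are assumptions; consult
the service parameter reference for your release\), the **kube-apiserver**
|OIDC| settings are applied as Kubernetes service parameters, for example:

.. code-block:: none

   ~(keystone_admin)$ system service-parameter-add kubernetes kube_apiserver oidc_issuer_url=https://<oam-floating-ip-address>:30556/dex
   ~(keystone_admin)$ system service-parameter-add kubernetes kube_apiserver oidc_client_id=stx-oidc-client-app
   ~(keystone_admin)$ system service-parameter-add kubernetes kube_apiserver oidc_username_claim=email
   ~(keystone_admin)$ system service-parameter-apply kubernetes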
.. thp1552681882191
.. _password-recovery-for-linux-user-accounts:
=========================================
Password Recovery for Linux User Accounts
=========================================
You can reset the password for a Linux user if required. The procedure
depends on the class of user.
.. _password-recovery-for-linux-user-accounts-section-N1001F-N1001C-N10001:
------------------
Linux System Users
------------------
This class includes the **sysadmin** account, and optionally other Linux
system user accounts created to support a multi-admin scenario. If another
Linux system account is available, you can use it to reset the password for
this type of account as follows:
.. code-block:: none
$ sudo passwd <user> <temp_password>
$ sudo chage -d 0 <user>
where <user> is the user name of the account to be reset \(for example,
**sysadmin**\) and <temp\_password> is a temporary password. The
:command:`chage` command forces immediate expiration, so that the user must
change the password at first login.
If no other Linux system user accounts have been created, you can recover
using the default LDAP **operator** or **admin** accounts. For more
information, see :ref:`Local LDAP Linux User Accounts
<local-ldap-linux-user-accounts>`.
.. _password-recovery-for-linux-user-accounts-section-N10066-N1001C-N10001:
-----------------
LDAP System Users
-----------------
This class includes users created using LDAP utilities.
You can reset the password for an LDAP account as follows:
.. code-block:: none
$ sudo ldapmodifyuser <user> replace userPassword <temp_password>
$ sudo ldapmodifyuser <user> replace shadowLastChange 0
where <user> is the username, and <temp\_password> is a temporary password.
The second command forces a password change on first login.
.. not1578924824783
.. _password-recovery:
=================
Password Recovery
=================
.. rubric:: |proc|
- Do one of the following to change a Keystone admin user password at any
time.
- Use the Identity panel in the Horizon Web interface.
- Use the following command from the controller CLI.
.. code-block:: none
~(keystone_admin)$ openstack user password set
.. warning::
Both controller nodes must be locked and unlocked after changing
the Keystone admin password. You must wait five minutes before
performing the lock/unlock operations.
- Use the following command to reset a Keystone non-admin user \(tenant user\) account.
.. code-block:: none
~(keystone_admin)$ openstack user set --password <temp_password> <user>
where <user> is the username and <temp\_password> is a temporary password.
.. pui1590088143541
.. _pod-security-policies:
=====================
Pod Security Policies
=====================
|PSPs| enable fine-grained authorization of pod creation and updates.
|PSPs| control access to security sensitive aspects of Pod specifications
such as running of privileged containers, use of host filesystem, running as
root, etc. |PSPs| define a set of conditions that a pod must run with, in
order to be accepted into the system, as well as defaults for the related
fields. |PSPs| are assigned to users through Kubernetes |RBAC| RoleBindings.
See `https://kubernetes.io/docs/concepts/policy/pod-security-policy/
<https://kubernetes.io/docs/concepts/policy/pod-security-policy/>`__ for
details.
When enabled, Pod security policy checking authorizes all Kubernetes
API commands against the |PSPs| to which the issuer of the command has
access. If there are no |PSPs| defined in the system, or the issuer does not
have access to any |PSPs|, Pod security policy checking fails to authorize
the command.
|prod-long| provides a system service parameter to enable Pod security
policy checking. In addition to enabling Pod security policy checking,
setting this service parameter also creates two |PSPs| \(privileged and
restricted\), so that users with the cluster-admin role \(which has access to
all resources\) have |PSPs| to authorize against. It also creates two
corresponding roles for specifying access to these |PSPs|
\(privileged-psp-user and restricted-psp-user\), for binding to other
non-admin type subjects.
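A sketch of enabling Pod security policy checking through that service
parameter \(the parameter name is an assumption; verify it against your
release\):

.. code-block:: none

   ~(keystone_admin)$ system service-parameter-add kubernetes kube_apiserver admission_plugins=PodSecurityPolicy
   ~(keystone_admin)$ system service-parameter-apply kubernetes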
.. uby1596554290953
.. _portieris-clusterimagepolicy-and-imagepolicy-configuration:
==========================================================
Portieris ClusterImagePolicy and ImagePolicy Configuration
==========================================================
Portieris supports cluster-wide and namespace-specific image policies.
.. _portieris-clusterimagepolicy-and-imagepolicy-configuration-section-cv5-2wk-4mb:
-----------
ImagePolicy
-----------
You can define Portieris' behavior in a namespace using an ImagePolicy. In
namespaces where ImagePolicies exist, they are used exclusively. If they do
not contain a match for the workload image being launched, then
ClusterImagePolicies are not referenced. For deployed workloads, images are
wildcard-matched against defined policies. If a policy matching the workload
image is not found then deployment is denied. If there are multiple matches
the most specific match is used.
.. _portieris-clusterimagepolicy-and-imagepolicy-configuration-section-vmd-fwk-4mb:
------------------
ClusterImagePolicy
------------------
You configure a ClusterImagePolicy at the cluster level. It is used
if no ImagePolicy resource is defined in the namespace in which the workload
is deployed. These resources have the same structure as namespace
ImagePolicies. Again, for deployed workloads, images are wildcard-matched
against defined policies and deployment will be denied if no matching policy
is found for an image. If there are multiple matches the most specific match
is used.
.. _portieris-clusterimagepolicy-and-imagepolicy-configuration-section-avq-x4r-4mb:
--------------
Trust Policies
--------------
You can specify a \[Cluster\]ImagePolicy to allow any image from one or more
trusted repositories, or to allow only images with trust data from a
repository in a registry+notary server.
.. _portieris-clusterimagepolicy-and-imagepolicy-configuration-ul-bjc-hpr-4mb:
- This example allows any image from the trusted icr.io registry, that is, an empty policy:
.. code-block:: none
apiVersion: securityenforcement.admission.cloud.ibm.com/v1beta1
kind: ImagePolicy
metadata:
name: allow-all-icrio
spec:
repositories:
- name: "icr.io/*"
policy:
- This example allows only images with valid trust data \(policy.trust.enabled=true\) from the icr.io registry + notary \(policy.trust.trustServer\) server.
.. code-block:: none
apiVersion: securityenforcement.admission.cloud.ibm.com/v1beta1
kind: ImagePolicy
metadata:
name: allow-custom
spec:
repositories:
- name: "icr.io/*"
policy:
trust:
enabled: true
trustServer: "https://icr.io:4443"
For additional details about policies, see
`https://github.com/IBM/portieris/blob/master/POLICIES.md
<https://github.com/IBM/portieris/blob/master/POLICIES.md>`__.
.. cas1596543672415
.. _portieris-overview:
==================
Portieris Overview
==================
You can enforce |prod| image security policies using the Portieris admission
controller.
Portieris allows you to configure trust policies for an individual namespace
or cluster-wide, and checks the image against a signed image list on a
specified notary server to enforce the configured image policies. Portieris
first checks that the image's registry/repository is trusted according to
the image policies, and, if trust enforcement is enabled for that
registry/repository, Portieris verifies that a signed version of the image
exists in the specified registry / notary server.
When a workload is deployed, the |prod| kube-apiserver sends a workload
admission request to Portieris, which attempts to find matching security
policies for each image in the workload. If any image in your workload does
not satisfy the policy, then the workload is blocked from being deployed.
The |prod| implementation of Portieris is integrated with cert-manager and
can use custom registries.
Configuring a trust server \(for an image or cluster-wide\) requires network
access upon pod creation. Therefore, if a cluster has no external network
connectivity, pod creation will be blocked.
Images must be pulled from a registry using a docker-registry secret;
enforcing trust for anonymous image pulls is not supported.
|prod| integration with Portieris has been verified against the Harbor
registry and notary server \(`https://goharbor.io/
<https://goharbor.io/>`__\).
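As a sketch, installing Portieris follows the standard |prod| application
workflow \(the tarball name is illustrative\):

.. code-block:: none

   ~(keystone_admin)$ system application-upload portieris-<version>.tgz
   ~(keystone_admin)$ system application-apply portieris
   ~(keystone_admin)$ system application-list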
.. vbz1578928340182
.. _private-namespace-and-restricted-rbac:
=====================================
Private Namespace and Restricted RBAC
=====================================
A non-admin type user typically does **not** have permissions for any
cluster-scoped resources and only has read and/or write permissions to
resources in one or more namespaces.
.. rubric:: |context|
.. note::
All of the |RBAC| resources for managing non-admin type users, although
they may apply to private namespaces, are created in **kube-system**
so that only admin level users can manage non-admin type users,
roles, and rolebindings.
The following example creates a non-admin service account called dave-user
with read/write type access to a single private namespace
\(**billing-dept-ns**\).
.. note::
The following example creates and uses ServiceAccounts as the user
mechanism and subject for the rolebindings, however the procedure
equally applies to user accounts defined in an external Windows Active
Directory as the subject of the rolebindings.
.. rubric:: |proc|
#. If it does not already exist, create a general user role defining
restricted permissions for general users.
This is of the type **ClusterRole** so that it can be used in the
context of any namespace when binding to a user.
#. Create the user role definition file.
.. code-block:: none
% cat <<EOF > general-user-clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
# "namespace" omitted since ClusterRoles are not namespaced
name: general-user
rules:
# For the core API group (""), allow full access to all resource types
# EXCEPT for resource policies (limitranges and resourcequotas) only allow read access
- apiGroups: [""]
resources: ["bindings", "configmaps", "endpoints", "events", "persistentvolumeclaims", "pods", "podtemplates", "replicationcontrollers", "secrets", "serviceaccounts", "services"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
resources: [ "limitranges", "resourcequotas" ]
verbs: ["get", "list"]
# Allow full access to all resource types of the following explicit list of apiGroups.
# Notable exceptions here are:
# ApiGroup ResourceTypes
# ------- -------------
# policy podsecuritypolicies, poddisruptionbudgets
# networking.k8s.io networkpolicies
# admissionregistration.k8s.io mutatingwebhookconfigurations, validatingwebhookconfigurations
#
- apiGroups: ["apps", "batch", "extensions", "autoscaling", "apiextensions.k8s.io", "rbac.authorization.k8s.io"]
resources: ["*"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
# Cert Manager API access
- apiGroups: ["cert-manager.io", "acme.cert-manager.io"]
resources: ["*"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
EOF
#. Apply the definition.
.. code-block:: none
~(keystone_admin)$ kubectl apply -f general-user-clusterrole.yaml
#. Create the **billing-dept-ns** namespace, if it does not already exist.
.. code-block:: none
~(keystone_admin)$ kubectl create namespace billing-dept-ns
#. Create both the **dave-user** service account and the namespace-scoped
RoleBinding.
The RoleBinding binds the **general-user** role to the **dave-user**
ServiceAccount for the **billing-dept-ns** namespace.
#. Create the account definition file.
.. code-block:: none
% cat <<EOF > dave-user.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: dave-user
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: dave-user
namespace: billing-dept-ns
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: general-user
subjects:
- kind: ServiceAccount
name: dave-user
namespace: kube-system
EOF
#. Apply the definition.
.. code-block:: none
% kubectl apply -f dave-user.yaml
#. If the user requires use of the local docker registry, create an
openstack user account for authenticating with the local docker registry.
#. If a project does not already exist for this user, create one.
.. code-block:: none
% openstack project create billing-dept-ns
#. Create an openstack user in this project.
.. code-block:: none
% openstack user create --password P@ssw0rd \
--project billing-dept-ns dave-user
.. note::
Substitute a password conforming to your password formatting
rules for P@ssw0rd.
#. Create a secret containing these userid/password credentials for use
as an ImagePullSecret
.. code-block:: none
% kubectl create secret docker-registry registry-local-dave-user --docker-server=registry.local:9001 --docker-username=dave-user --docker-password=P@ssw0rd --docker-email=noreply@windriver.com -n billing-dept-ns
dave-user can now push images to registry.local:9001/dave-user/ and use
these images for pods by adding the secret above as an ImagePullSecret
in the pod spec.
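For example, a minimal pod spec consuming that ImagePullSecret \(the pod and
image names are illustrative\):

.. code-block:: none

   apiVersion: v1
   kind: Pod
   metadata:
     name: dave-test-pod
     namespace: billing-dept-ns
   spec:
     containers:
     - name: test
       # an image previously pushed to the user's repository in the local registry
       image: registry.local:9001/dave-user/busybox:latest
     imagePullSecrets:
     - name: registry-local-dave-user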
#. If the user requires the ability to create persistentVolumeClaims in his
namespace, then execute the following commands to enable the rbd-provisioner
in the user's namespace.
#. Create an RBD namespaces configuration file.
.. code-block:: none
% cat <<EOF > rbd-namespaces.yaml
classes:
- additionalNamespaces: [default, kube-public, billing-dept-ns]
chunk_size: 64
crush_rule_name: storage_tier_ruleset
name: general
pool_name: kube-rbdkube-system
replication: 1
userId: ceph-pool-kube-rbd
userSecretName: ceph-pool-kube-rbd
EOF
#. Update the helm overrides.
.. code-block:: none
~(keystone_admin)$ system helm-override-update --values rbd-namespaces.yaml \
platform-integ-apps rbd-provisioner kube-system
#. Apply the application.
.. code-block:: none
~(keystone_admin)$ system application-apply platform-integ-apps
#. Monitor the system for the application-apply to finish
.. code-block:: none
~(keystone_admin)$ system application-list
#. Apply the secret to the new rbd-provisioner namespace.
.. code-block:: none
~(keystone_admin)$ kubectl get secret ceph-pool-kube-rbd -n default -o yaml | grep -v '^\s*namespace:\s' | kubectl apply -n <namespace> -f -
#. If this user requires the ability to use helm, do the following.
#. Create a ClusterRole for reading namespaces, if one does not already exist.
.. code-block:: none
% cat <<EOF > namespace-reader-clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: namespace-reader
rules:
- apiGroups: [""]
resources: ["namespaces"]
verbs: ["get", "watch", "list"]
EOF
Apply the configuration.
.. code-block:: none
% kubectl apply -f namespace-reader-clusterrole.yaml
#. Create a RoleBinding for the tiller account of the user's namespace.
.. note::
.. xbooklink
The tiller account of the user's namespace **must** be named
'tiller'. See |sysconf-doc|: :ref:`Configure Remote Helm Client
for Non-Admin Users
<configure-remote-helm-client-for-non-admin-users>`.
.. code-block:: none
% cat <<EOF > read-namespaces-billing-dept-ns-tiller.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: read-namespaces-billing-dept-ns-tiller
subjects:
- kind: ServiceAccount
name: tiller
namespace: billing-dept-ns
roleRef:
kind: ClusterRole
name: namespace-reader
apiGroup: rbac.authorization.k8s.io
EOF
Apply the configuration.
.. code-block:: none
% kubectl apply -f read-namespaces-billing-dept-ns-tiller.yaml
..
.. rubric:: |postreq|
.. xbooklink
See |sysconf-doc|: :ref:`Configure Remote CLI Access
<configure-remote-cli-access>` for details on how to setup remote CLI
access using tools such as :command:`kubectl` and :command:`helm` for a
service account such as this.
.. tfe1552681897084
.. _remote-access-for-linux-accounts:
================================
Remote Access for Linux Accounts
================================
You can log in remotely as a Linux user using :command:`ssh`.
.. note::
For security reasons, it is recommended that ONLY admin level users be
allowed to SSH to the nodes of the |prod|. Non-admin level users should
strictly use remote CLIs or remote web GUIs.
Specifying the OAM floating IP address as the target establishes a
connection to the currently active controller; however, if the OAM floating
IP address moves from one controller node to another, the ssh session is
blocked. To ensure access to a particular controller regardless of its
current role, specify the controller physical address instead.
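For example, a typical admin login targeting the active controller:

.. code-block:: none

   $ ssh sysadmin@<oam-floating-ip-address>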
.. note::
Password-based access to the **root** account is not permitted over
remote connections.
=================
Remote CLI Access
=================
.. toctree::
:maxdepth: 1
configure-remote-cli-access
security-configure-container-backed-remote-clis-and-clients
security-install-kubectl-and-helm-clients-directly-on-a-host
configure-remote-helm-client-for-non-admin-users
.. kqa1596551916697
.. _remove-portieris:
================
Remove Portieris
================
You can remove the Portieris admission controller completely from a |prod|
system.
.. rubric:: |proc|
#. Remove the application.
.. code-block:: none
~(keystone_admin)$ system application-remove portieris
#. Delete kubernetes resources not automatically removed in the previous step.
This is required if you plan to reapply the application.
.. code-block:: none
~(keystone_admin)$ kubectl delete clusterroles.rbac.authorization.k8s.io portieris
~(keystone_admin)$ kubectl delete clusterrolebindings.rbac.authorization.k8s.io admission-portieris-webhook
~(keystone_admin)$ kubectl delete -n portieris secret/portieris-certs
~(keystone_admin)$ kubectl delete -n portieris cm/image-policy-crds
~(keystone_admin)$ kubectl delete -n portieris serviceaccounts/portieris
.. note::
If this step is done before removing the application in step 1, the
removal will fail, leaving the application in the **remove-failed**
state. In such cases you will need to issue the following commands
to recover:
.. code-block:: none
~(keystone_admin)$ kubectl delete MutatingWebhookConfiguration image-admission-config --ignore-not-found=true
~(keystone_admin)$ kubectl delete ValidatingWebhookConfiguration image-admission-config --ignore-not-found=true
~(keystone_admin)$ kubectl delete crd clusterimagepolicies.securityenforcement.admission.cloud.ibm.com imagepolicies.securityenforcement.admission.cloud.ibm.com --ignore-not-found=true
~(keystone_admin)$ kubectl delete clusterroles.rbac.authorization.k8s.io portieris --ignore-not-found=true
~(keystone_admin)$ kubectl delete clusterrolebindings.rbac.authorization.k8s.io admission-portieris-webhook --ignore-not-found=true
~(keystone_admin)$ kubectl delete ns/portieris --ignore-not-found=true
~(keystone_admin)$ helm delete portieris-portieris --purge --no-hooks
~(keystone_admin)$ system application-remove portieris
#. Delete the application.
.. code-block:: none
~(keystone_admin)$ system application-delete portieris
.. aif1596225477506
.. _remove-vault:
============
Remove Vault
============
You can remove Vault from your |prod-long|, if required, by using the
procedure described in this section.
.. rubric:: |context|
Run the following commands to remove Vault. This will remove pods and other
resources created by the Armada installation. For more information, see
:ref:`Install Vault <install-vault>`.
.. rubric:: |proc|
#. Remove pods and other resources using the following command:
.. code-block:: none
$ system application-remove vault
#. \(Optional\) If you want to reinstall Vault, and only retain Vault data
stored in PVCs, use the following command:
.. code-block:: none
$ kubectl delete secrets -n vault vault-server-tls
#. Reinstall Vault, if required using the following command:
.. code-block:: none
$ system application-apply vault
.. note::
It is recommended to do a complete remove of all resources if you
want to reinstall Vault.
#. To completely remove Vault, including PVCs \(PVCs are intended to
persist after :command:`system application-remove vault` in order to
preserve Vault data\), use the following command.
.. code-block:: none
$ kubectl delete ns vault
$ system application-delete vault
.. cmy1590090067787
.. _resource-management:
===================
Resource Management
===================
Kubernetes supports two types of resource policies, **LimitRange** and
**ResourceQuota**.
.. contents::
:local:
:depth: 1
.. _resource-management-section-z51-d5m-tlb:
----------
LimitRange
----------
By default, containers run with unbounded resources on a Kubernetes cluster.
This is undesirable, as a single Pod could monopolize all available
resources on a worker node. A **LimitRange** is a policy to constrain
resource allocations \(for Pods or Containers\) in a particular namespace.
Specifically a **LimitRange** policy provides constraints that can:
.. _resource-management-ul-vz5-g5m-tlb:
- Enforce minimum and maximum compute resources usage per Pod or Container
in a namespace.
- Enforce minimum and maximum storage request per PersistentVolumeClaim in
a namespace.
- Enforce a ratio between request and limit for a resource in a namespace.
- Set default request/limit for compute resources in a namespace and
automatically inject them to Containers at runtime.
See `https://kubernetes.io/docs/concepts/policy/limit-range/ <https://kubernetes.io/docs/concepts/policy/limit-range/>`__ for more details.
An example of **LimitRange** policies for the billing-dept-ns namespace of
the example in :ref:`Assign Pod Security Policies
<assign-pod-security-policies>` is shown below:
.. code-block:: none
apiVersion: v1
kind: LimitRange
metadata:
name: mem-cpu-per-container-limit
namespace: billing-dept-ns
spec:
limits:
- max:
cpu: "800m"
memory: "1Gi"
min:
cpu: "100m"
memory: "99Mi"
default:
cpu: "700m"
memory: "700Mi"
defaultRequest:
cpu: "110m"
memory: "111Mi"
type: Container
---
apiVersion: v1
kind: LimitRange
metadata:
name: mem-cpu-per-pod-limit
namespace: billing-dept-ns
spec:
limits:
- max:
cpu: "2"
memory: "2Gi"
type: Pod
---
apiVersion: v1
kind: LimitRange
metadata:
name: pvc-limit
namespace: billing-dept-ns
spec:
limits:
- type: PersistentVolumeClaim
max:
storage: 3Gi
min:
storage: 1Gi
---
apiVersion: v1
kind: LimitRange
metadata:
name: memory-ratio-pod-limit
namespace: billing-dept-ns
spec:
limits:
- maxLimitRequestRatio:
memory: 10
type: Pod
.. _resource-management-section-ur2-q5m-tlb:
-------------
ResourceQuota
-------------
A **ResourceQuota** policy object provides constraints that limit aggregate
resource consumption per namespace. It can limit the quantity of objects
that can be created in a namespace by type, as well as the total amount of
compute resources that may be consumed by resources in that project.
**ResourceQuota** limits can be created for cpu, memory, storage and
resource counts for all standard namespaced resource types such as secrets,
configmaps, etc.
See `https://kubernetes.io/docs/concepts/policy/resource-quotas/
<https://kubernetes.io/docs/concepts/policy/resource-quotas/>`__ for more
details.
An example of **ResourceQuota** policies for the billing-dept-ns namespace
of :ref:`Assign Pod Security Policies <assign-pod-security-policies>`
is shown below:
.. code-block:: none
apiVersion: v1
kind: ResourceQuota
metadata:
name: resource-quotas
namespace: billing-dept-ns
spec:
hard:
persistentvolumeclaims: "1"
services.loadbalancers: "2"
services.nodeports: "0"
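A sketch of applying and verifying the quota with standard
:command:`kubectl` commands, assuming the policy above is saved as
resource-quotas.yaml:

.. code-block:: none

   ~(keystone_admin)$ kubectl apply -f resource-quotas.yaml
   ~(keystone_admin)$ kubectl describe resourcequota resource-quotas -n billing-dept-ns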
.. gqn1595963439839
.. _secure-https-external-connectivity:
==================================
Secure HTTPS External Connectivity
==================================
|prod| provides support for secure HTTPS external connections for REST API
and webserver access.
For secure remote access, the use of a Root-CA-signed certificate is
strongly recommended.
Operational complexity:
.. _secure-https-external-connectivity-ul-ct1-pzf-mmb:
- HTTPS is enabled using the platform system commands.
- Obtain a certificate signed by a ROOT certificate authority and install
it with the platform system command.
For more information, see :ref:`Secure HTTPS Connectivity
<https-access-overview>`.
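For example, a minimal sketch of enabling HTTPS and installing a
Root-CA-signed certificate \(the file path is illustrative\):

.. code-block:: none

   ~(keystone_admin)$ system modify --https_enabled true
   ~(keystone_admin)$ system certificate-install <pathTocertificateAndKey>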
.. lzf1570032232833
.. _secure-starlingx-rest-and-web-certificates-private-key-storage-with-tpm:
========================================================================
Secure StarlingX REST and Web Certificate's Private Key Storage with TPM
========================================================================
For increased security, the StarlingX REST and Web Server's certificate can
be installed such that the private key is stored in a |TPM| 2.0 device on
the controller.
.. rubric:: |context|
|TPM| is an industry standard cryptographic processor that enables secure
storage of secrets. |prod| can use a TPM device, if present, to securely
store the private key of the StarlingX REST and Web Server's certificate.
The |TPM| is used to wrap the private key within the |TPM| device. Each
wrapping is unique to that |TPM| device and cannot be synchronized between
controllers using different |TPM| devices. Therefore, the same private key
is always secured to both the active and standby controllers' |TPM| devices
at the same time. Given this operational constraint, |prod| has measures in
place to detect when the standby controller is reinstalled or replaced, and
raise appropriate alarms to prevent an Unlock or Swact of a new standby
controller until the StarlingX REST and Web Server's certificate is
re-installed, in order to update the new standby controller's |TPM| device.
.. rubric:: |prereq|
.. _secure-starlingx-rest-and-web-certificates-private-key-storage-with-tpm-ul-xj3-mqc-d1b:
- Obtain a Root |CA|-signed certificate and key from a trusted Root
|CA|. Refer to the documentation for the external Root |CA| that you
are using, on how to create public certificate and private key pairs,
signed by a Root |CA|, for HTTPS.
.. xbooklink
For lab purposes, see :ref:`Locally Creating Certificates
<creating-certificates-locally-using-openssl>` for details on how to
create a test Root |CA| certificate and key, and use it to sign test
certificates.
Put the |PEM| encoded versions of the certificate and key in a
single file, and copy the file to the controller host.
- Both controllers must be provisioned and unlocked before you can install
the certificate using |TPM| to store the private key.
- A |TPM| 2.0 device must be available on both controller nodes.
- |TPM| must be enabled in the |UEFI| on both controllers.
- HTTPS must be enabled on the system.
.. caution::
Do not install the certificate using |TPM| on controller-0 before the
standby controller-1 has been provisioned and unlocked. If this happens,
you cannot unlock controller-1. To recover, do the following:
.. _secure-starlingx-rest-and-web-certificates-private-key-storage-with-tpm-ol-jpm-2kq-qcb:
#. Install the certificate without |TPM| on controller-0. For more
information, see :ref:`Install/Update the StarlingX Rest and Web
Server Certificate
<install-update-the-starlingx-rest-and-web-server-certificate>`.
#. Unlock controller-1.
#. Reinstall the certificate using |TPM| on controller-0.
.. rubric:: |proc|
.. _secure-starlingx-rest-and-web-certificates-private-key-storage-with-tpm-steps-hnx-qf5-x1b:
#. Install the StarlingX REST and Web Server's certificate using |TPM| to
securely store the private key:
.. code-block:: none
$ system certificate-install -m tpm_mode <pathTocertificateAndKey>
where:
**<pathTocertificateAndKey>**
is the path to the file containing both the Root |CA|-signed
certificate and private key to install.
.. warning::
For security purposes, the utility deletes the provided SSL private
key from the file system and asks for confirmation during the
installation. You should store a copy of the SSL private key off-site.
.. note::
Only X.509 based RSA key certificates are supported \(PKCS12 format
and ECDSA keys are not supported\). Additionally, 4096 bit RSA key
lengths are not supported.
#. Check the certificate's |TPM| configuration state for each provisioned
controller node.
.. code-block:: none
[sysadmin@controller-0 tmp(keystone_admin)]$ system certificate-show tpm
+-------------+-----------------------------------------------------+
| Property | Value |
+-------------+-----------------------------------------------------+
| uuid | ed3d6a22-996d-421b-b4a5-64ab42ebe8be |
| certtype | tpm_mode |
| signature | tpm_mode_13214262027721489760 |
| start_date | 2018-03-21T14:53:03+00:00 |
| expiry_date | 2019-03-21T14:53:03+00:00 |
| details | {u'state': {u'controller-1': u'tpm-config-applied', |
| | u'controller-0': u'tpm-config-applied'}} |
+-------------+-----------------------------------------------------+
Subsequent certificate installs using |TPM| populate the updated\_at field
to indicate when the certificate was refreshed.
.. code-block:: none
[sysadmin@controller-0 tmp(keystone_admin)]$ system certificate-show tpm
+-------------+-----------------------------------------------------+
| Property | Value |
+-------------+-----------------------------------------------------+
| uuid | d6a47714-2b99-4470-b2c8-422857749c98 |
| certtype | tpm_mode |
| signature | tpm_mode_13214262027721489760 |
| start_date | 2018-03-21T14:53:03+00:00 |
| expiry_date | 2019-03-21T14:53:03+00:00 |
| details | {u'state': {u'controller-1': u'tpm-config-applied', |
| | u'controller-0': u'tpm-config-applied'}, |
| | u'updated_at':u'2018-03-21T16:18:15.879639+00:00'} |
+-------------+-----------------------------------------------------+
If either controller has state **tpm-config-failed**, then a 500.100
alarm is raised for the host.
- A LOCKED controller node that is not in the |TPM| applied configuration
state \(**tpm-config-applied**\) is prevented from being UNLOCKED.
- An UNLOCKED controller node that is not in the |TPM| applied
configuration state \(**tpm-config-applied**\) is prevented from being
Swacted To or upgraded.
.. rubric:: |postreq|
When reinstalling either of the controllers or during a hardware replacement
scenario, you must reinstall the certificate:
.. code-block:: none
~(keystone_admin)$ system certificate-install -m tpm_mode <pathTocertificateAndKey>
To disable the use of |TPM| to store the private key of the StarlingX REST
and Web Server's certificate, install the certificate without the |TPM|
option:
.. code-block:: none
~(keystone_admin)$ system certificate-install <pathTocertificateAndKey>
.. cee1581955119217
.. _security-access-the-gui:
==============
Access the GUI
==============
You can access either the Horizon Web interface or the Kubernetes Dashboard
from a browser.
.. rubric:: |proc|
.. _security-access-the-gui-steps-zdy-rxd-tkb:
#. Do one of the following:
.. table::
:widths: auto
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| **For the StarlingX Horizon Web interface** | Access the Horizon in your browser at the address: |
| | |
| | http://<oam-floating-ip-address>:8080 |
| | |
| | Use the username **admin** and the sysadmin password to log in. |
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| **For the Kubernetes Dashboard** | Access the Kubernetes Dashboard GUI in your browser at the address: |
| | |
| | http://<oam-floating-ip-address>:<kube-dashboard-port> |
| | |
| | Where <kube-dashboard-port> is the port that the dashboard was installed on. |
| | |
| | Login using credentials in kubectl config on your remote workstation running the browser; see :ref:`Install Kubectl and Helm Clients Directly on a Host <security-install-kubectl-and-helm-clients-directly-on-a-host>` as an example for setting up kubectl config credentials for an admin user. |
| | |
| | .. note:: |
| | The Kubernetes Dashboard is not installed by default. See |prod| System Configuration: :ref:`Install the Kubernetes Dashboard <install-the-kubernetes-dashboard>` for information on how to install the Kubernetes Dashboard and create a Kubernetes service account for the admin user to use the dashboard. |
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
.. knu1588334826081
.. _security-cert-manager:
============
Cert Manager
============
|prod| integrates the open source project cert-manager \(cert-manager.io\).
Cert-manager is a native Kubernetes certificate management controller, that
supports certificate management with external |CAs|.
|prod| installs cert-manager and an nginx-ingress-controller in support of
http-01 challenges from |CAs|, at bootstrap time, so that cert-manager
services are available for hosted containerized applications by default.
For more information on the cert-manager project, see
`http://cert-manager.io <http://cert-manager.io>`__.
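As an illustration, a minimal Certificate resource served by a
pre-configured ClusterIssuer \(all names, and the apiVersion, are
assumptions; check the cert-manager version shipped with your release\):

.. code-block:: none

   apiVersion: cert-manager.io/v1
   kind: Certificate
   metadata:
     name: my-app-cert
     namespace: default
   spec:
     secretName: my-app-tls        # cert-manager writes the signed certificate and key here
     dnsNames:
     - my-app.example.com
     issuerRef:
       name: my-cluster-issuer     # a ClusterIssuer configured for your CA
       kind: ClusterIssuer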
**Related Information**
- :ref:`The cert-manager Bootstrap Process
<the-cert-manager-bootstrap-process>`
.. fda1581955005891
.. _security-configure-container-backed-remote-clis-and-clients:
==================================================
Configure Container-backed Remote CLIs and Clients
==================================================
The command line can be accessed from remote computers running Linux, OSX,
and Windows.
.. rubric:: |context|
This functionality is made available using a docker image for connecting to
the |prod| remotely. This docker image is pulled as required by
configuration scripts.
.. rubric:: |prereq|
You must have Docker installed on the remote systems you connect from. For
more information on installing Docker, see
`https://docs.docker.com/install/ <https://docs.docker.com/install/>`__.
For Windows remote clients, Docker is only supported on Windows 10.
For Windows remote clients, you must run the following commands from a
Cygwin terminal. See `https://www.cygwin.com/ <https://www.cygwin.com/>`__
for more information about the Cygwin project.
For Windows remote clients, you must also have :command:`winpty` installed.
Download the latest release tarball for Cygwin from
`https://github.com/rprichard/winpty/releases
<https://github.com/rprichard/winpty/releases>`__. After downloading the
tarball, extract it to any location and change the Windows <PATH> variable
to include its bin folder from the extracted winpty folder.
The following procedure shows how to configure the Container-backed Remote
CLIs and Clients for an admin user with cluster-admin clusterrole. If using
a non-admin user such as one with role privileges only within a private
namespace, additional configuration is required in order to use
:command:`helm`.
.. rubric:: |proc|
.. _security-configure-container-backed-remote-clis-and-clients-d70e74:
#. On the Controller, configure a Kubernetes service account for user on the remote client.
You must configure a Kubernetes service account on the target system
and generate a configuration file based on that service account.
Run the following commands logged in as **root** on the local console of the controller.
#. Set environment variables.
You can customize the service account name and the output
configuration file by changing the <USER> and <OUTPUT\_FILE>
variables shown in the following examples.
.. code-block:: none
% USER="admin-user"
% OUTPUT_FILE="temp-kubeconfig"
#. Create an account definition file.
.. code-block:: none
% cat <<EOF > admin-login.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: ${USER}
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: ${USER}
namespace: kube-system
EOF
#. Apply the definition.
.. code-block:: none
% kubectl apply -f admin-login.yaml
#. Store the token value.
.. code-block:: none
% TOKEN_DATA=$(kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep ${USER} | awk '{print $1}') | grep "token:" | awk '{print $2}')
#. Source the platform environment.
.. code-block:: none
% source /etc/platform/openrc
#. Store the OAM IP address.
1. .. code-block:: none
% OAM_IP=$(system oam-show |grep oam_floating_ip| awk '{print $4}')
2. AIO-SX uses <oam\_ip> instead of <oam\_floating\_ip>. The
following shell code ensures that <OAM\_IP> is assigned the correct
IP address.
.. code-block:: none
% if [ -z "$OAM_IP" ]; then
OAM_IP=$(system oam-show |grep oam_ip| awk '{print $4}')
fi
3. IPv6 addresses must be enclosed in square brackets. The following shell code does this for you.
.. code-block:: none
% if [[ $OAM_IP =~ .*:.* ]]; then
OAM_IP="[${OAM_IP}]"
fi
#. Generate the temp-kubeconfig file.
.. code-block:: none
% sudo kubectl config --kubeconfig ${OUTPUT_FILE} set-cluster wrcp-cluster --server=https://${OAM_IP}:6443 --insecure-skip-tls-verify
% sudo kubectl config --kubeconfig ${OUTPUT_FILE} set-credentials ${USER} --token=$TOKEN_DATA
% sudo kubectl config --kubeconfig ${OUTPUT_FILE} set-context ${USER}@wrcp-cluster --cluster=wrcp-cluster --user ${USER} --namespace=default
% sudo kubectl config --kubeconfig ${OUTPUT_FILE} use-context ${USER}@wrcp-cluster
#. On the remote client, install the remote client tarball file from the
StarlingX CENGEN build servers.
- The tarball is available from the |prod| area on the StarlingX CENGEN build servers.
- You can extract the tarball's contents anywhere on your client system.
While it is not strictly required to extract the tarball before other
steps, subsequent steps in this example copy files to the extracted
directories as a convenience when running configuration scripts.
#. Download the openrc file from the Horizon Web interface to the remote client.
#. Log in to Horizon as the user and tenant that you want to configure remote access for.
#. Navigate to **Project** \> **API Access** \> **Download Openstack RC file**.
#. Select **Openstack RC file**.
#. Copy the temp-kubeconfig file to the remote workstation.
You can copy the file to any location on the remote workstation. For
convenience, this example assumes that it is copied to the location of
the extracted tarball.
.. note::
Ensure the temp-kubeconfig file has 666 permissions after copying
the file to the remote workstation, otherwise, use the following
command to change permissions: :command:`chmod 666 temp-kubeconfig`.
#. On the remote client, configure the client access.
#. Change to the location of the extracted tarball.
#. Run the :command:`configure\_client.sh` script to generate a client access file for the platform.
.. note::
For remote CLI commands that require local filesystem access,
you can specify a working directory when running
:command:`configure\_client.sh` using the -w option. If no
directory is specified, the location from which the
:command:`configure\_client.sh` was run is used for local file
access by remote cli commands. This working directory is
mounted at /wd in the docker container. If remote CLI commands
need access to local files, copy the files to your configured
work directory and then provide the command with the path to
the mounted folder inside the container.
.. code-block:: none
$ mkdir -p $HOME/remote_wd
$ ./configure_client.sh -t platform -r admin_openrc.sh -k temp-kubeconfig \
-w $HOME/remote_wd
$ cd $HOME/remote_wd
By default, configure\_client.sh uses container images from
docker.io for both platform clients and openstack clients. You can
optionally use the -p and -a options to override the default docker.io
images with one or two others from a custom registry. For example, to use
the container images from the WRS AWS ECR:
.. code-block:: none
$ ./configure_client.sh -t platform -r admin_openrc.sh -k temp-kubeconfig \
-w $HOME/remote_wd \
-p 625619392498.dkr.ecr.us-west-2.amazonaws.com/starlingx/stx-platformclients:stx.4.0-v1.3.0 \
-a 625619392498.dkr.ecr.us-west-2.amazonaws.com/starlingx/stx-openstackclients:master-centos-stable-latest
If you are working with repositories that require authentication,
you must first perform a :command:`docker login` to that repository
before using remote clients.
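For example, to log in to the WRS AWS ECR registry used above \(a sketch;
substitute your registry's URL and credentials\):

.. code-block:: none

    $ docker login 625619392498.dkr.ecr.us-west-2.amazonaws.com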
A remote\_client\_platform.sh file is generated for use in accessing the platform CLI.
#. Test workstation access to the remote platform CLI.
Enter your platform password when prompted.
.. note::
The first usage of a command will be slow as it requires that the
docker image supporting the remote clients be pulled from the
remote registry.
.. code-block:: none
root@myclient:/home/user/remote_wd# source remote_client_platform.sh
Please enter your OpenStack Password for project admin as user admin:
root@myclient:/home/user/remote_wd# system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | controller-1 | controller | unlocked | enabled | available |
| 3 | compute-0 | worker | unlocked | enabled | available |
| 4 | compute-1 | worker | unlocked | enabled | available |
+----+--------------+-------------+----------------+-------------+--------------+
root@myclient:/home/user/remote_wd# kubectl -n kube-system get pods
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-767467f9cf-wtvmr 1/1 Running 1 3d2h
calico-node-j544l 1/1 Running 1 3d
calico-node-ngmxt 1/1 Running 1 3d1h
calico-node-qtc99 1/1 Running 1 3d
calico-node-x7btl 1/1 Running 4 3d2h
ceph-pools-audit-1569848400-rrpjq 0/1 Completed 0 12m
ceph-pools-audit-1569848700-jhv5n 0/1 Completed 0 7m26s
ceph-pools-audit-1569849000-cb988 0/1 Completed 0 2m25s
coredns-7cf476b5c8-5x724 1/1 Running 1 3d2h
...
root@myclient:/home/user/remote_wd#
.. note::
Some commands used by the remote CLI are designed to leave you in a shell prompt, for example:
.. code-block:: none
root@myclient:/home/user/remote_wd# openstack
or
.. code-block:: none
root@myclient:/home/user/remote_wd# kubectl exec -ti <pod_name> -- /bin/bash
In some cases the mechanism for identifying commands that should
leave you at a shell prompt does not identify them correctly. If
you encounter such scenarios, you can force-enable or force-disable
the shell behavior by setting the FORCE\_SHELL or FORCE\_NO\_SHELL
variable before the command.
For example:
.. code-block:: none
root@myclient:/home/user/remote_wd# FORCE_SHELL=true kubectl exec -ti <pod_name> -- /bin/bash
root@myclient:/home/user/remote_wd# FORCE_NO_SHELL=true kubectl exec <pod_name> -- ls
You cannot use both variables at the same time.
#. If you need to run a remote CLI command that references a local file,
then that file must be copied to or created in the working directory
specified on the ./configure\_client.sh command, and referenced under
/wd/ in the actual command.
For example:
.. code-block:: none
root@myclient:/home/user# cd $HOME/remote_wd
root@myclient:/home/user/remote_wd# kubectl -n kube-system create -f test/test.yml
pod/test-pod created
root@myclient:/home/user/remote_wd# kubectl -n kube-system delete -f test/test.yml
pod/test-pod deleted
#. Do the following to use helm.
.. note::
For non-admin users, additional configuration is required first as
discussed in *Configure Remote Helm Client for Non-Admin Users*.
.. note::
When using helm, any command that requires access to a helm
repository \(managed locally\) must be run from the
$HOME/remote\_wd directory and use the --home "./.helm" option.
#. Do the initial setup of the helm client.
.. code-block:: none
% helm init --client-only --home "./.helm"
#. Run a helm command.
.. code-block:: none
$ helm list
$ helm install --name wordpress stable/wordpress --home "./.helm"
.. seealso::
:ref:`Install Kubectl and Helm Clients Directly on a Host
<security-install-kubectl-and-helm-clients-directly-on-a-host>`
:ref:`Configure Remote Helm Client for Non-Admin Users
<configure-remote-helm-client-for-non-admin-users>`

View File

@ -0,0 +1,145 @@
.. tvz1552007675065
.. _security-default-firewall-rules:
======================
Default Firewall Rules
======================
|prod| applies default firewall rules on the |OAM| network. The default rules
are recommended for most applications.
Traffic is permitted for the following protocols and ports to allow access
for platform services. By default, all other traffic is blocked.
You can view the configured firewall rules with the following command:
.. code-block:: none
~(keystone_admin)$ kubectl describe globalnetworkpolicy
Name: controller-oam-if-gnp
Namespace:
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"crd.projectcalico.org/v1","kind":"GlobalNetworkPolicy","metadata":{"annotations":{},"name":"controller-oam-if-gnp"},"spec":...
API Version: crd.projectcalico.org/v1
Kind: GlobalNetworkPolicy
Metadata:
Creation Timestamp: 2019-08-08T20:18:34Z
Generation: 1
Resource Version: 1395
Self Link: /apis/crd.projectcalico.org/v1/globalnetworkpolicies/controller-oam-if-gnp
UID: b28b74fe-ba19-11e9-9176-ac1f6b0eef28
Spec:
Apply On Forward: false
Egress:
Action: Allow
Ip Version: 4
Protocol: TCP
Action: Allow
Ip Version: 4
Protocol: UDP
Action: Allow
Protocol: ICMP
Ingress:
Action: Allow
Destination:
Ports:
22
18002
4545
15491
6385
7777
6443
9001
9002
7480
9311
5000
8080
Ip Version: 4
Protocol: TCP
Action: Allow
Destination:
Ports:
2222
2223
123
161
162
319
320
Ip Version: 4
Protocol: UDP
Action: Allow
Protocol: ICMP
Order: 100
Selector: has(iftype) && iftype == 'oam'
Types:
Ingress
Egress
Events: <none>
Where:
.. _security-default-firewall-rules-d477e47:
.. table::
:widths: auto
+------------------------+------------------------+------------------------+
| Protocol | Port | Service Name |
+========================+========================+========================+
| tcp | 22 | ssh |
+------------------------+------------------------+------------------------+
| tcp | 8080 | horizon \(http only\) |
+------------------------+------------------------+------------------------+
| tcp | 8443 | horizon \(https only\) |
+------------------------+------------------------+------------------------+
| tcp | 5000 | keystone-api |
+------------------------+------------------------+------------------------+
| tcp | 6385 | stx-metal |
| | | |
| | | stx-config |
+------------------------+------------------------+------------------------+
| tcp | 8119 | stx-distcloud |
+------------------------+------------------------+------------------------+
| tcp | 18002 | stx-fault |
+------------------------+------------------------+------------------------+
| tcp | 7777 | stx-ha |
+------------------------+------------------------+------------------------+
| tcp | 4545 | stx-nfv |
+------------------------+------------------------+------------------------+
| tcp | 6443 | Kubernetes api server |
+------------------------+------------------------+------------------------+
| tcp | 9001 | Docker registry |
+------------------------+------------------------+------------------------+
| tcp | 9002 | Registry token server |
+------------------------+------------------------+------------------------+
| tcp | 15491 | stx-update |
+------------------------+------------------------+------------------------+
| icmp | | icmp |
+------------------------+------------------------+------------------------+
| udp | 123 | ntp |
+------------------------+------------------------+------------------------+
| udp | 161 | snmp |
+------------------------+------------------------+------------------------+
| udp | 2222 | service manager |
+------------------------+------------------------+------------------------+
| udp | 2223 | service manager |
+------------------------+------------------------+------------------------+
.. note::
Custom rules may be added for other requirements. For more information,
see |sec-doc|: :ref:`Firewall Options <security-firewall-options>`.
.. note::
UDP ports 2222 and 2223 are used by the service manager for state
synchronization and heartbeating between the controllers/masters. All
messages are authenticated with a SHA512 HMAC. Only packets originating
from the peer controller are permitted; all other packets are dropped.

View File

@ -0,0 +1,106 @@
.. myy1552681345265
.. _security-feature-configuration-for-spectre-and-meltdown:
=======================================================
Security Feature Configuration for Spectre and Meltdown
=======================================================
The system allows for the security features of the Linux kernel to be
configured to mitigate the variants of Meltdown and Spectre side-channel
vulnerabilities \(CVE-2017-5754, CVE-2017-5753, CVE-2017-5715\).
.. _security-feature-configuration-for-spectre-and-meltdown-section-N1001F-N1001C-N10001:
--------
Overview
--------
By default, mitigation is provided against Spectre v1 type attacks.
Additional mitigation can be enabled to cover Spectre v2 attacks and
Meltdown attacks. Enabling this mitigation may affect system performance.
The spectre\_v2 mitigation may also require firmware or BIOS updates from
your motherboard manufacturer to be effective.
.. _security-feature-configuration-for-spectre-and-meltdown-table-hpl-gqx-vdb:
.. table::
:widths: auto
+-----------------------------------+---------------------------------------------------------+
| **Option name** | **Description** |
+-----------------------------------+---------------------------------------------------------+
| spectre\_meltdown\_v1 \(default\) | Protect against Spectre v1 attacks, highest performance |
+-----------------------------------+---------------------------------------------------------+
| spectre\_meltdown\_all | Protect against Spectre v1, v2 and Meltdown attacks |
+-----------------------------------+---------------------------------------------------------+
.. note::
Applying these mitigations may result in some performance degradation
for certain workloads. Because the actual performance impacts are expected
to vary considerably based on the workload, |org| recommends
that all customers test the performance impact of CVE mitigations on
their actual workload in a sandbox environment before rolling out the
mitigations to production.
.. _security-feature-configuration-for-spectre-and-meltdown-section-N1009C-N1001C-N10001:
.. rubric:: |proc|
.. _security-feature-configuration-for-spectre-and-meltdown-ol-i4m-byx-vdb:
#. To view the existing kernel security configuration, use the following
command to check the current value of security\_feature:
.. code-block:: none
$ system show
+----------------------+--------------------------------------+
| Property | Value |
+----------------------+--------------------------------------+
| contact | None |
| created_at | 2020-02-27T15:47:23.102735+00:00 |
| description | None |
| https_enabled | False |
| location | None |
| name | 468f57ef-34c1-4e00-bba0-fa1b3f134b2b |
| region_name | RegionOne |
| sdn_enabled | False |
| security_feature | spectre_meltdown_v1 |
| service_project_name | services |
| software_version | 20.06 |
| system_mode | duplex |
| system_type | Standard |
| timezone | Canada/Eastern |
| updated_at | 2020-02-28T10:56:24.297774+00:00 |
| uuid | c0e35924-e139-4dfc-945d-47f9a663d710 |
| vswitch_type | none |
+----------------------+--------------------------------------+
#. To change the kernel security feature, use the following command syntax:
.. code-block:: none
$ system modify --security_feature {spectre_meltdown_v1 | spectre_meltdown_all}
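For example, to enable the full set of mitigations:

.. code-block:: none

    $ system modify --security_feature spectre_meltdown_all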
After this command is executed, the kernel arguments will be updated on
all hosts and on subsequently installed hosts. Rebooting the hosts by
locking and unlocking each host is required to have the new kernel
arguments take effect.
#. Analysis of a system may be performed by using the open source
spectre-meltdown-checker.sh script, which ships as
/usr/sbin/spectre-meltdown-checker.sh. This tool requires root access to
run. The tool will attempt to analyze your system to see if it is
susceptible to Spectre or Meltdown attacks. Documentation for the tool can
be found at `https://github.com/speed47/spectre-meltdown-checker
<https://github.com/speed47/spectre-meltdown-checker>`__.
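For example, run the tool with root privileges and review its report:

.. code-block:: none

    $ sudo /usr/sbin/spectre-meltdown-checker.sh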

View File

@ -0,0 +1,105 @@
.. zlk1582057887959
.. _security-firewall-options:
================
Firewall Options
================
|prod| incorporates a default firewall for the OAM network. You can configure
additional Kubernetes Network Policies in order to augment or override the
default rules.
The |prod| firewall uses Kubernetes Network Policies \(via the Calico
CNI\) to implement its rules on the OAM network.
A minimal set of rules is always applied before any custom rules, as follows:
.. _security-firewall-options-d628e35:
- Non-OAM traffic is always accepted.
- Egress traffic is always accepted.
- |SM| traffic is always accepted.
- SSH traffic is always accepted.
You can introduce custom rules by creating and installing custom Kubernetes
Network Policies.
The following example opens up default HTTPS port 443.
.. code-block:: none
% cat <<EOF > gnp-oam-overrides.yaml
apiVersion: crd.projectcalico.org/v1
kind: GlobalNetworkPolicy
metadata:
name: gnp-oam-overrides
spec:
ingress:
- action: Allow
destination:
ports:
- 443
protocol: TCP
order: 500
selector: has(iftype) && iftype == 'oam'
types:
- Ingress
EOF
Apply the policy using the :command:`kubectl apply` command. For example:
.. code-block:: none
$ kubectl apply -f gnp-oam-overrides.yaml
You can confirm the policy was applied properly using the
:command:`kubectl describe` command. For example:
.. code-block:: none
$ kubectl describe globalnetworkpolicy gnp-oam-overrides
Name: gnp-oam-overrides
Namespace:
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"crd.projectcalico.org/v1","kind":"GlobalNetworkPolicy","metadata":{"annotations":{},"name":"gnp-openstack-oam"},"spec...
API Version: crd.projectcalico.org/v1
Kind: GlobalNetworkPolicy
Metadata:
Creation Timestamp: 2019-05-16T13:07:45Z
Generation: 1
Resource Version: 296298
Self Link: /apis/crd.projectcalico.org/v1/globalnetworkpolicies/gnp-openstack-oam
UID: 98a324ab-77db-11e9-9f9f-a4bf010007e9
Spec:
Ingress:
Action: Allow
Destination:
Ports:
443
Protocol: TCP
Order: 500
Selector: has(iftype) && iftype == 'oam'
Types:
Ingress
Events: <none>
.. xbooklink
For information about yaml rule syntax, see |sysconf-doc|: :ref:`Modifying OAM Firewall Rules <modifying-oam-firewall-rules>`.
For the default rules used by |prod| see |sec-doc|: :ref:`Default Firewall
Rules <security-default-firewall-rules>`.
For a full description of GNP syntax, see
`https://docs.projectcalico.org/v3.6/reference/calicoctl/resources/globalnetworkpolicy
<https://docs.projectcalico.org/v3.6/reference/calicoctl/resources/globalnetworkpolicy>`__.

View File

@ -0,0 +1,55 @@
.. zhw1595963351894
.. _security-hardening-firewall-options:
================
Firewall Options
================
|prod| applies default firewall rules on the |OAM| network.
The default rules are recommended for most applications. See :ref:`Default
Firewall Rules <security-default-firewall-rules>` for details. You can
configure an additional file in order to augment or override the default
rules.
A minimal set of rules is always applied before any custom rules, as follows:
.. _firewall-options-ul-gjq-k1g-mmb:
- Non-OAM traffic is always accepted.
- Egress traffic is always accepted.
- |SM| traffic is always accepted.
- SSH traffic is always accepted.
.. note::
It is recommended that you disable port 80 when HTTPS is enabled for
external connections, as shown in the example below.
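For example, the following Global Network Policy denies ingress on TCP port
80 \(a sketch of one possible approach; it assumes the default platform
rules are applied at order 100, so a lower order value takes precedence,
and the port should be adapted to your configuration\):

.. code-block:: none

    % cat <<EOF > gnp-deny-http.yaml
    apiVersion: crd.projectcalico.org/v1
    kind: GlobalNetworkPolicy
    metadata:
      name: gnp-deny-http
    spec:
      ingress:
      - action: Deny
        destination:
          ports:
          - 80
        protocol: TCP
      order: 90
      selector: has(iftype) && iftype == 'oam'
      types:
      - Ingress
    EOF
    % kubectl apply -f gnp-deny-http.yaml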
Operational complexity:
.. _firewall-options-ul-hjq-k1g-mmb:
- |prod| provides OAM firewall rules through Kubernetes Network Policies.
For more information, see :ref:`Firewall Options
<security-firewall-options>`.
- The custom rules are applied using iptables-restore or ip6tables-restore.
.. _firewall-options-section-csl-41d-cnb:
----------------------
Default Firewall Rules
----------------------
|prod| applies these default firewall rules on the OAM network. The default
rules are recommended for most applications.
For a complete listing, see :ref:`Default Firewall Rules
<security-default-firewall-rules>`.

View File

@ -0,0 +1,37 @@
.. wav1595963716973
.. _security-hardening-intro:
===============================
Security Hardening Introduction
===============================
Platform infrastructure hardening is an obligatory task for achieving
resilience to infrastructure attacks and complying with regulatory
requirements.
Attackers continually probe systems using a variety of cyber-attack
techniques, known as attack vectors.

|prod| nodes must be hardened to reduce their exposure to the growing
number of emerging cyber-attacks.
|prod| provides a broad set of features related to system security. The
scope of this document is to describe these security features in support
of best-practice security hardening, along with their various impacts on
operation and performance.
The security hardening features can be classified into the following layers:
.. _security-hardening-intro-ul-gqs-xtf-mmb:
- Operating System hardening
- Platform hardening
- Application hardening
This appendix covers the security features hardening the operating system
and platform. Application hardening is not in the scope of this document.

View File

@ -0,0 +1,146 @@
.. iqi1581955028595
.. _security-install-kubectl-and-helm-clients-directly-on-a-host:
===================================================
Install Kubectl and Helm Clients Directly on a Host
===================================================
You can use :command:`kubectl` and :command:`helm` to interact with a
controller from a remote system.
.. rubric:: |context|
Commands such as those that reference local files or commands that require
a shell are more easily used from clients running directly on a remote
workstation.
Complete the following steps to install :command:`kubectl` and
:command:`helm` on a remote system.
The following procedure shows how to configure the kubectl and helm clients
directly on a remote host, for an admin user with cluster-admin clusterrole.
If you are using a non-admin user, such as one with only role privileges
within a private namespace, the procedure is the same; however, additional
configuration is required in order to use :command:`helm`.
.. rubric:: |proc|
.. _security-install-kubectl-and-helm-clients-directly-on-a-host-steps-f54-qqd-tkb:
#. On the controller, if an **admin-user** service account is not already available, create one.
#. Create the **admin-user** service account in **kube-system**
namespace and bind the **cluster-admin** ClusterRoleBinding to this user.
.. code-block:: none
% cat <<EOF > admin-login.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kube-system
EOF
% kubectl apply -f admin-login.yaml
#. Retrieve the secret token.
.. code-block:: none
% kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
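Optionally, capture the token in a shell variable for use in the following
steps \(a sketch; the generated secret name includes a random suffix, so it
is looked up first\):

.. code-block:: none

    % SECRET=$(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
    % TOKEN_DATA=$(kubectl -n kube-system describe secret $SECRET | grep "token:" | awk '{print $2}')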
#. On the workstation, install the :command:`kubectl` client on an Ubuntu
host by taking the following actions on the remote Ubuntu system.
#. Install the :command:`kubectl` client CLI.
.. code-block:: none
% sudo apt-get update
% sudo apt-get install -y apt-transport-https
% curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | \
sudo apt-key add -
% echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | \
sudo tee -a /etc/apt/sources.list.d/kubernetes.list
% sudo apt-get update
% sudo apt-get install -y kubectl
#. Set up the local configuration and context.
.. note::
In order for your remote host to trust the certificate used by
the |prod-long| K8S API, you must ensure that the
**k8s\_root\_ca\_cert** specified at install time is a trusted
CA certificate by your host. Follow the instructions for adding
a trusted CA certificate for the operating system distribution
of your particular host.
If you did not specify a **k8s\_root\_ca\_cert** at install
time, then specify insecure-skip-tls-verify, as shown below.
.. code-block:: none
% kubectl config set-cluster mycluster --server=https://<oam-floating-IP>:6443 \
--insecure-skip-tls-verify
% kubectl config set-credentials admin-user@mycluster --token=$TOKEN_DATA
% kubectl config set-context admin-user@mycluster --cluster=mycluster \
--user admin-user@mycluster --namespace=default
% kubectl config use-context admin-user@mycluster
$TOKEN\_DATA is the token retrieved in step 1.
#. Test remote :command:`kubectl` access.
.. code-block:: none
% kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE ...
controller-0 Ready master 15h v1.12.3 192.168.204.3 <none> CentOS L ...
controller-1 Ready master 129m v1.12.3 192.168.204.4 <none> CentOS L ...
worker-0 Ready <none> 99m v1.12.3 192.168.204.201 <none> CentOS L ...
worker-1 Ready <none> 99m v1.12.3 192.168.204.202 <none> CentOS L ...
%
#. On the workstation, install the :command:`helm` client on an Ubuntu
host by taking the following actions on the remote Ubuntu system.
#. Install :command:`helm`.
.. code-block:: none
% wget https://get.helm.sh/helm-v2.13.1-linux-amd64.tar.gz
% tar xvf helm-v2.13.1-linux-amd64.tar.gz
% sudo cp linux-amd64/helm /usr/local/bin
#. Verify that :command:`helm` installed correctly.
.. code-block:: none
% helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
.. seealso::
:ref:`Configure Container-backed Remote CLIs and Clients
<security-configure-container-backed-remote-clis-and-clients>`
:ref:`Configure Remote Helm Client for Non-Admin Users
<configure-remote-helm-client-for-non-admin-users>`

View File

@ -0,0 +1,81 @@
.. vri1561486014514
.. _security-install-update-the-docker-registry-certificate:
==============================================
Install/Update the Docker Registry Certificate
==============================================
The local docker registry provides secure HTTPS access using the registry API.
.. rubric:: |context|
By default a self-signed certificate is generated at installation time for
the registry API. For more secure access, a Root |CA|-signed certificate is
strongly recommended.
The Root |CA|-signed certificate for the registry must have at least the
following |SANs|: DNS:registry.local, DNS:registry.central, IP
Address:<oam-floating-ip-address>, IP Address:<mgmt-floating-ip-address>.
Use the :command:`system addrpool-list` command to get the |OAM| floating IP
Address and management floating IP Address for your system. You can add any
additional DNS entry\(s\) that you have set up for your |OAM| floating IP
Address.
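For example, if you generate the certificate signing request yourself with
openssl, the |SAN| section of the configuration might look like the
following \(a sketch; substitute the floating IP addresses reported by
:command:`system addrpool-list` and any additional DNS entries\):

.. code-block:: none

    [ alt_names ]
    DNS.1 = registry.local
    DNS.2 = registry.central
    IP.1 = <oam-floating-ip-address>
    IP.2 = <mgmt-floating-ip-address>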
Use the following procedure to install a Root |CA|-signed certificate to either
replace the default self-signed certificate or to replace an expired or soon
to expire certificate.
.. rubric:: |prereq|
Obtain a Root |CA|-signed certificate and key from a trusted Root |CA|. Refer
to the documentation for the external Root |CA| that you are using, on how to
create public certificate and private key pairs, signed by a Root |CA|, for
HTTPS.
For lab purposes, see Appendix A for how to create a test Root |CA|
certificate and key, and use it to sign test certificates.
Put the |PEM| encoded versions of the certificate and key in a single file,
and copy the file to the controller host.
Also, obtain the certificate of the Root |CA| that signed the above
certificate.
.. rubric:: |proc|
.. _security-install-update-the-docker-registry-certificate-d516e68:
#. In order to enable internal use of the docker registry certificate,
update the trusted |CA| list for this system with the Root |CA| associated
with the docker registry certificate.
.. code-block:: none
~(keystone_admin)$ system certificate-install --mode ssl_ca
<pathTocertificate>
where:
**<pathTocertificate>**
is the path to the Root |CA| certificate associated with the docker
registry Root |CA|-signed certificate.
#. Update the docker registry certificate using the
:command:`certificate-install` command.
Set the mode \(-m or --mode\) parameter to docker\_registry.
.. code-block:: none
~(keystone_admin)$ system certificate-install --mode docker_registry
<pathTocertificateAndKey>
where:
**<pathTocertificateAndKey>**
is the path to the file containing both the docker registry
certificate and private key to install.

View File

@ -0,0 +1,46 @@
.. ecl1581955165616
.. _security-rest-api-access:
===============
REST API Access
===============
The REST APIs provide programmatic access to the |prod|.
The StarlingX Platform related public REST API Endpoints can be listed by
running the following command:
.. code-block:: none
$ openstack endpoint list | grep public
Use these URLs as the prefix for the URL targets of StarlingX Platform
services REST API messages, which are documented at the following sites:
.. _security-rest-api-access-d18e40:
- StarlingX `https://docs.starlingx.io/api-ref/index.html
<https://docs.starlingx.io/api-ref/index.html>`__
- Keystone `https://docs.openstack.org/api-ref/identity/v3/
<https://docs.openstack.org/api-ref/identity/v3/>`__
- Barbican `https://docs.openstack.org/barbican/stein/api/
<https://docs.openstack.org/barbican/stein/api/>`__
.. _security-rest-api-access-d18e67:
----------
Kubernetes
----------
Access the Kubernetes REST API with the URL prefix of
https://<oam-floating-ip-address>:6443 and using the API syntax described at
the following site:
`https://kubernetes.io/docs/concepts/overview/kubernetes-api/
<https://kubernetes.io/docs/concepts/overview/kubernetes-api/>`__.
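For example, you can authenticate a request with a service account token
\(a sketch; it assumes $TOKEN\_DATA holds a valid token, and -k skips
certificate verification\):

.. code-block:: none

    $ curl -k -H "Authorization: Bearer $TOKEN_DATA" \
      https://<oam-floating-ip-address>:6443/api/v1/namespaces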

View File

@ -0,0 +1,46 @@
.. mab1596215747624
.. _security-vault-overview:
==============
Vault Overview
==============
|prod| optionally integrates the open source Vault containerized security
application into the |prod| solution. Vault requires |PVCs| as a storage
backend in order to be enabled.
Vault is a containerized secrets management application that provides
encrypted storage with policy-based access control and supports multiple
secrets storage engines and auth methods.
|prod| includes a Vault-manager container to handle initialization of the
Vault servers. Vault-manager also provides the ability to automatically
unseal Vault servers in deployments where an external autounseal method
cannot be used. For more information, see `https://www.vaultproject.io/
<https://www.vaultproject.io/>`__.
There are two methods for using Vault secrets with hosted applications:
.. _security-vault-overview-ul-ekx-y4m-4mb:
- The first method is to have the application be Vault Aware and retrieve
secrets using the Vault REST API. This method also allows an
application to write secrets to Vault, provided the applicable policy gives
write permission at the specified Vault path.
.. xbooklink
For more information, see |usertasks-doc|: :ref:`Vault Aware <vault-aware>`.
- The second method is to have the application be Vault Unaware and use
the Vault Agent Injector to make secrets available on the container
filesystem.
.. xbooklink
For more information, see |usertasks-doc|: :ref:`Vault Unaware <vault-unaware>`.

View File

@ -0,0 +1,13 @@
.. xvb1600974198117
.. _ssh-and-console-login-timeout:
=============================
SSH and Console Login Timeout
=============================
|prod| is configured to automatically log users out of their SSH/Console
session after 900 seconds \(15 mins\) of inactivity.
Operational complexity: No additional configuration is required.

View File

@ -0,0 +1,55 @@
.. nfr1595963608329
.. _starlingx-accounts:
==================
StarlingX Accounts
==================
**Sysadmin Local Linux Account**
This is a local, per-host, sudo-enabled account created automatically
when a new host is provisioned. It is used by the primary system
administrator for |prod|, as it has extended privileges.
See :ref:`The sysadmin Account <the-sysadmin-account>` for more details.
**Local Linux User Accounts**
Local Linux user accounts should NOT be created; local Linux accounts
are reserved for internal system purposes.
**Local LDAP Linux User Accounts**
These are local |LDAP| accounts that are centrally managed across all
hosts in the cluster. These accounts are intended to provide additional
admin level user accounts \(in addition to sysadmin\) that can SSH to
the nodes of the |prod|.
See :ref:`Local LDAP Linux User Accounts
<local-ldap-linux-user-accounts>` for more details.
.. note::
For security reasons, it is recommended that ONLY admin level users be
allowed to SSH to the nodes of the |prod|. Non-admin level users should
strictly use remote CLIs or remote web GUIs.
.. _starlingx-accounts-section-yyd-5jv-5mb:
---------------
Recommendations
---------------
.. _starlingx-accounts-ul-on4-p4z-tmb:
- It is recommended that **only** admin level users be allowed to SSH to
|prod| nodes. Non-admin level users should strictly use remote CLIs or
remote web GUIs.
- It is recommended that you create and manage Kubernetes service
accounts within the kube-system namespace.
- When establishing Keystone credentials from a Linux account, it is
recommended that you **not** provide the Keystone credentials as
command-line options. Doing so creates a security risk, since the supplied
credentials are visible in the command-line history. Instead, source a
credentials file, as shown in the example below.
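For example, rather than passing credentials on the command line, source the
platform openrc file:

.. code-block:: none

    $ source /etc/platform/openrc
    ~(keystone_admin)]$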

View File

@ -0,0 +1,37 @@
.. xlb1552573425956
.. _starlingx-rest-api-applications-and-the-web-administration-server:
=================================================================
StarlingX REST API Applications and the Web Administration Server
=================================================================
|prod| provides support for secure HTTPS external connections used for
StarlingX REST API application endpoints \(Keystone, Barbican and
StarlingX\) and the |prod| web administration server.
By default, HTTPS access to the StarlingX REST and web server endpoints is
disabled; that is, they are accessible via HTTP only.

For secure HTTPS access, an X.509 certificate and key are required.
.. note::
The default HTTPS X.509 certificates that are used by |prod-long| for
authentication are not signed by a known authority. For increased
security, obtain, install, and use certificates that have been signed
by a Root certificate authority. Refer to the documentation for the
external Root |CA| that you are using, on how to create public
certificate and private key pairs, signed by a Root |CA|, for HTTPS.
By default, a self-signed certificate and key are generated and installed
by |prod| for the StarlingX REST and Web Server endpoints, for evaluation
purposes. This certificate and key are installed by default when HTTPS
access is first enabled for these services. In order to connect, remote
clients must be configured to accept the self-signed certificate without
verifying it \("insecure" mode\).
For secure remote access, a Root |CA|-signed certificate and key are
required. The use of a Root |CA|-signed certificate is strongly recommended.
You can update the certificate used for HTTPS access at any time.

View File

@ -0,0 +1,57 @@
.. huk1552935670048
.. _starlingx-system-accounts-system-account-password-rules:
=============================
System Account Password Rules
=============================
|prod| enforces a set of strength requirements for new or changed passwords.
The following rules apply to all System Accounts \(Local LDAP, sysadmin,
other Linux Accounts, and Keystone accounts\):
.. _starlingx-system-accounts-system-account-password-rules-ul-jwb-g15-zw:
- The password must be at least seven characters long.
- You cannot reuse the last 2 passwords in history.
- The password must contain:
- at least one lower-case character
- at least one upper-case character
- at least one numeric character
- at least one special character
The following additional rules apply to Local Linux accounts only \(Local
LDAP, sysadmin, and other Linux accounts\):
- Dictionary words or simple number sequences \(for example, 123 or 321\)
are not allowed
- A changed password must differ from the previous password by at least
three characters
- A changed password must not be a simple reversal of the previous
password. For example, if nEtw!rk5 is the current password, 5kr!wtEn is not
allowed as the new password.
- A changed password using only character case differences is not allowed.
For example, if nEtw!rk5 is the current password, Netw!RK5 is not allowed as
the new password.
- A changed password cannot use the older password that immediately
preceded the current password. For example, if the password was previously
changed from oP3n!sRC to the current password nEtw!rk5, then the new
password cannot be oP3n!sRC.
- After five consecutive incorrect password attempts, the user is locked
out for 5 minutes.

View File

@ -0,0 +1,14 @@
.. nje1595963161437
.. _system-account-password-rules:
=============================
System Account Password Rules
=============================
|prod| enforces a set of strength requirements for new or changed passwords.
See :ref:`System Account Password Rules
<starlingx-system-accounts-system-account-password-rules>` for a complete
list of password selection, updating, and usage constraints.

View File

@ -0,0 +1,110 @@
.. gks1588335341933
.. _the-cert-manager-bootstrap-process:
==================================
The cert-manager Bootstrap Process
==================================
Both nginx-ingress-controller and cert-manager are packaged as armada system
applications managed via :command:`system application-\*` and
:command:`system helm-override-\*` commands.
Both system applications are uploaded and applied, by default, as part of
the bootstrap phase of the |prod-long| installation.
/usr/share/ansible/stx-ansible/playbooks/host\_vars/bootstrap/default.yml
contains the following definition:
.. code-block:: none
...
applications:
- /usr/local/share/applications/helm/nginx-ingress-controller-1.0-0.tgz:
- /usr/local/share/applications/helm/cert-manager-1.0-0.tgz:
...
As with other parameters in default.yml, you can override this definition in
$HOME/localhost.yml. For the applications: parameter, you would typically
do this to change the application helm overrides for an application.
The full general syntax for the applications: structure is:
.. code-block:: none
applications:
- /full/path/to/appOne-1.0-0.tgz:
overrides:
- chart: appOne-ChartOne
namespace: kube-system
values-path: /home/sysinv/appOne-ChartOne-overrides.yaml
- chart: appOne-ChartTwo
namespace: kube-system
values-path: /home/sysinv/appOne-ChartTwo-overrides.yaml
- /full/path/to/appTwo-1.0-0.tgz:
overrides:
- chart: appTwo-ChartOne
namespace: kube-system
values-path: /home/sysinv/appTwo-ChartOne-overrides.yaml
If you do override applications: in $HOME/localhost.yml, note the following:
.. _the-cert-manager-bootstrap-process-ul-o3j-vdv-nlb:
- The applications: definition in localhost.yml replaces rather than
augments the definition in default.yml.
- Semantically, nginx-ingress-controller and cert-manager are mandatory
and must appear in this order; otherwise, bootstrap fails.
|org| recommends that you copy applications: from default.yml and add in any required overrides.
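For example, a localhost.yml override might look like the following
\(a sketch; the chart name, namespace, and overrides file path are
illustrative and must match your deployment\):

.. code-block:: none

    applications:
    - /usr/local/share/applications/helm/nginx-ingress-controller-1.0-0.tgz:
      overrides:
      - chart: nginx-ingress
        namespace: kube-system
        values-path: /home/sysadmin/nginx-ingress-overrides.yaml
    - /usr/local/share/applications/helm/cert-manager-1.0-0.tgz: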
At a high-level, the default configuration for the two mandatory applications is:
.. _the-cert-manager-bootstrap-process-ul-dxm-q2v-nlb:
- nginx-ingress-controller
- Runs as a DaemonSet only on masters/controllers
- Uses host networking, which means it can use any port numbers.
Does not change the nginx default ports of 80 and 443.
- Has a default backend.
- cert-manager
- Runs as a Deployment only on masters/controllers.
- Runs with a podAntiAffinity rule to prevent multiple pods of
deployment from running on the same node.
- The deployment replicaCount is set to 1 for bootstrap.
.. note::
replicaCount can NOT be changed at bootstrap time. The second controller
must be configured and unlocked before replicaCount can be set to 2.
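For example, once the second controller is unlocked, the replica count could
be raised with a Helm override \(a sketch; the application, chart, and
namespace names are assumptions that must be verified against your system\):

.. code-block:: none

    ~(keystone_admin)$ system helm-override-update cert-manager cert-manager cert-manager --set replicaCount=2
    ~(keystone_admin)$ system application-apply cert-manager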
The Helm Chart Values that you can override are described on the following web pages:
.. _the-cert-manager-bootstrap-process-ul-d4j-khv-nlb:
- Nginx-ingress-controller
`https://github.com/helm/charts/tree/master/stable/nginx-ingress <https://github.com/helm/charts/tree/master/stable/nginx-ingress>`__
- cert-manager
`https://github.com/jetstack/cert-manager/blob/release-0.15/deploy/charts/cert-manager/README.template.md <https://github.com/jetstack/cert-manager/blob/release-0.15/deploy/charts/cert-manager/README.template.md>`__

View File

@ -0,0 +1,90 @@
.. efc1552681959124
.. _the-sysadmin-account:
====================
The sysadmin Account
====================
This is a local, per-host, sudo-enabled account created automatically when a
new host is provisioned.
This Linux user account is used by the system administrator as it has
extended privileges.
On controller nodes, this account is available even before the Ansible
bootstrap playbook is executed.
The default initial password is **sysadmin**.
.. _the-sysadmin-account-ul-aqh-b41-pq:
- The initial password must be changed immediately when you log in to each
host for the first time. For details, see the `StarlingX Installation Guide
<https://docs.starlingx.io/deploy_install_guides/index.html>`__.
- After five consecutive unsuccessful login attempts, further attempts are
blocked for about five minutes. The block is a sliding window, not an
absolute one: each additional attempt made within those five minutes
restarts the timer. Therefore, wait a full five minutes after the fifth
unsuccessful attempt before trying again.
Subsequent password changes must be executed on the active controller in an
**unlocked**, **enabled**, and **available** state to ensure that they
propagate to all other unlocked-active hosts in the cluster. Otherwise, they
remain local to the host where they were executed, and are overwritten on
the next reboot or host unlock to match the password on the active controller.
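For example, to change the password, log in to the active controller and run
:command:`passwd`:

.. code-block:: none

    controller-0:~$ passwd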
From the **sysadmin** account, you can execute commands requiring different
privileges.
.. _the-sysadmin-account-ul-hlh-f2c-5p:
- You can execute non-root level commands as a regular Linux user directly.
If you do not have sufficient privileges to execute a command as a
regular Linux user, you may receive a permissions error, or in some
cases, the command may be reported as not found.
- You can execute root-level commands as the **root** user.
To become the root user, use the :command:`sudo` command to elevate your
privileges, followed by the command to be executed. For example, to run
the :command:`license-install` command as the **root** user:
.. code-block:: none
$ sudo /usr/sbin/license-install license_file
If a password is requested, provide the password for the **sysadmin**
account.
- You can execute StarlingX administrative commands as the Keystone
**admin** user and Kubernetes kubectl and helm administrative commands as
the Kubernetes admin user.
To become the **admin** user from the Linux **sysadmin** account, source
the script /etc/platform/openrc:
.. code-block:: none
$ source /etc/platform/openrc
[sysadmin@controller-0 ~(keystone_admin)]$
The system prompt changes to indicate the newly acquired privileges.
.. note::
The default Keystone prompt includes the host name and the current
working path. For simplicity, this guide uses the following generic
prompt instead:
.. code-block:: none
~(keystone_admin)]$

View File

@ -0,0 +1,95 @@
.. qjd1552681409626
.. _tpm-configuration-considerations:
================================
TPM Configuration Considerations
================================
There are some considerations to account for when configuring or
reconfiguring |TPM|.
This includes certain behavior and warnings that you may encounter when
configuring |TPM|. The same behavior and warnings are also seen when
performing these actions in the Horizon Web interface.
.. _tpm-configuration-considerations-ul-fbm-1fy-f1b:
- The command :command:`certificate-show tpm` will indicate the status of
the TPM configuration on the hosts, either **tpm-config-failed** or
**tpm-config-applied**.
.. code-block:: none
~(keystone_admin)]$ system certificate-show tpm
+-------------+-----------------------------------------------------+
| Property | Value |
+-------------+-----------------------------------------------------+
| uuid | ed3d6a22-996d-421b-b4a5-64ab42ebe8be |
| certtype | tpm_mode |
| signature | tpm_mode_13214262027721489760 |
| start_date | 2018-03-21T14:53:03+00:00 |
| expiry_date | 2019-03-21T14:53:03+00:00 |
| details | {u'state': {u'controller-1': u'tpm-config-applied', |
| | u'controller-0': u'tpm-config-applied'}} |
+-------------+-----------------------------------------------------+
- If either controller has state **tpm-config-failed**, then a 500.100
alarm will be raised for the host.
.. code-block:: none
~(keystone_admin)]$ fm alarm-list
+----------+------------------+------------------+----------+------------+
| Alarm ID | Reason Text | Entity ID | Severity | Time Stamp |
+----------+------------------+------------------+----------+------------+
| 500.100 | TPM configuration| host=controller-1| major | 2017-06-1..|
| | failed or device.| | |.586010 |
+----------+------------------+------------------+----------+------------+
- An UNLOCKED controller node that is not in the TPM applied configuration
state \(**tpm-config-applied**\) will be prevented from being swacted to or
upgraded.
The following warning is generated when you attempt to swact:
.. code-block:: none
~(keystone_admin)]$ system host-swact controller-0
TPM configuration not fully applied on host controller-1; Please
run https-certificate-install before re-attempting.
- A LOCKED controller node that is not in TPM applied configuration state
\(**tpm-config-applied**\) will be prevented from being UNLOCKED.
The :command:`host-list` command below shows controller-1 as locked and
disabled.
.. code-block:: none
~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | controller-1 | controller | locked | disabled | online |
+----+--------------+-------------+----------------+-------------+--------------+
The following warning is generated when you attempt to UNLOCK a
controller not in a **tpm-config-applied** state:
.. code-block:: none
~[keystone_admin)]$ system host-unlock controller-1
TPM configuration not fully applied on host controller-1; Please
run https-certificate-install before re-attempting

View File

@ -0,0 +1,32 @@
.. avv1595963682527
.. _uefi-secure-boot:
================
UEFI Secure Boot
================
Secure Boot is a technology where the system firmware checks that the
system boot loader is signed with a cryptographic key authorized by a
configured database of certificate\(s\) contained in the firmware or a
security device. It is used to secure various boot stages.
|prod|'s implementation of Secure Boot also validates the signature of the
second-stage boot loader, the kernel, and kernel modules.
Operational complexity:
.. _uefi-secure-boot-ul-cfz-cvf-mmb:
- For each node that is going to use secure boot, you must populate the
|prod| public certificate \(with public key\) in the |UEFI| Secure Boot
authorized database in accordance with the board manufacturer's process.
- You may need to work with your hardware vendor to have the certificate
installed.
- This must be done for each node before starting the installation.
For more information, see :ref:`UEFI Secure Boot
<overview-of-uefi-secure-boot>`.

View File

@ -0,0 +1,51 @@
.. fyl1552681364538
.. _use-uefi-secure-boot:
====================
Use UEFI Secure Boot
====================
Secure Boot is supported in |UEFI| installations only. It is not used when
booting |prod| as a legacy boot target.
|prod| currently does not support switching from legacy to UEFI mode after a
system has been installed. Doing so requires a reinstall of the system. This
also means that upgrading from a legacy install to a secure boot install
\(UEFI\) is not supported.
When upgrading a |prod| system from a version which does not support secure
boot to a version that does, do not enable secure boot in UEFI firmware until
the upgrade is complete.
For each node that is going to use secure boot, you must populate the |prod|
public certificate/key in the |UEFI| Secure Boot authorized database in
accordance with the board manufacturer's process. This must be done for each
node before starting installation.
You may need to work with your hardware vendor to have the certificate
installed.
There is often an option in the UEFI setup utility which allows a user to
browse to a file containing a certificate to be loaded in the authorized
database. This option may be hidden in the UEFI setup utility unless UEFI
mode is enabled, and secure boot is enabled.
The UEFI implementation may or may not require a |TPM| device to be
present and enabled before providing secure boot functionality. Refer to
your server board manufacturer's documentation.
Many motherboards ship with Microsoft secure boot certificates
pre-programmed in the UEFI certificate database. These certificates may be
required to boot UEFI drivers for video cards, RAID controllers, or NICs
\(for example, the PXE boot software for a NIC may have been signed by a
Microsoft certificate\). While certificates can usually be removed from the
certificate database \(again, this is UEFI implementation specific\) it
may be required that you keep the Microsoft certificates to allow for
complete system operation.
Mixed combinations of secure boot and non-secure boot nodes are supported.
For example, a controller node may secure boot, while a worker node may not.
Secure boot must be enabled in the UEFI firmware of each node for that node
to be protected by secure boot.

View File

@ -0,0 +1,17 @@
.. tzr1595963495431
.. _web-administration-login-timeout:
================================
Web Administration Login Timeout
================================
The |prod| Web administration tool \(Horizon\) will automatically log users
out after 50 minutes \(the Keystone Token Expiry time\), regardless of activity.
Operational complexity: No additional configuration is required.
You can also block user access after a set number of failed login attempts as
described in :ref:`Configure Horizon User Lockout on Failed Logins
<configure-horizon-user-lockout-on-failed-logins>`.

View File

@ -53,22 +53,27 @@
.. |NUMA| replace:: :abbr:`NUMA (Non-Uniform Memory Access)`
.. |NVMe| replace:: :abbr:`NVMe (Non-Volatile Memory express)`
.. |OAM| replace:: :abbr:`OAM (Operations, administration and management)`
.. |OIDC| replace:: :abbr:`OIDC (OpenID Connect)`
.. |ONAP| replace:: :abbr:`ONAP (Open Network Automation Program)`
.. |OSD| replace:: :abbr:`OSD (Object Storage Device)`
.. |OSDs| replace:: :abbr:`OSDs (Object Storage Devices)`
.. |PAC| replace:: :abbr:`PAC (Programmable Acceleration Card)`
.. |PCI| replace:: :abbr:`PCI (Peripheral Component Interconnect)`
.. |PDU| replace:: :abbr:`PDU (Packet Data Unit)`
.. |PEM| replace:: :abbr:`PEM (Privacy Enhanced Mail)`
.. |PF| replace:: :abbr:`PF (Physical Function)`
.. |PHB| replace:: :abbr:`PHB (Per-Hop Behavior)`
.. |PQDN| replace:: :abbr:`PDQN (Partially Qualified Domain Name)`
.. |PQDNs| replace:: :abbr:`PQDNs (Partially Qualified Domain Names)`
.. |PSP| replace:: :abbr:`PSP (Pod Security Policy)`
.. |PSPs| replace:: :abbr:`PSPs (Pod Security Policies)`
.. |PTP| replace:: :abbr:`PTP (Precision Time Protocol)`
.. |PVC| replace:: :abbr:`PVC (Persistent Volume Claim)`
.. |PVCs| replace:: :abbr:`PVCs (Persistent Volume Claims)`
.. |PXE| replace:: :abbr:`PXE (Preboot Execution Environment)`
.. |QoS| replace:: :abbr:`QoS (Quality of Service)`
.. |RAID| replace:: :abbr:`RAID (Redundant Array of Inexpensive Disks)`
.. |RBAC| replace:: :abbr:`RBAC (Role-Based Access Control)`
.. |RPC| replace:: :abbr:`RPC (Remote Procedure Call)`
.. |SAN| replace:: :abbr:`SAN (Subject Alternative Name)`
.. |SANs| replace:: :abbr:`SANs (Subject Alternative Names)`
@ -76,6 +81,7 @@
.. |SATA| replace:: :abbr:`SATA (Serial AT Attachment)`
.. |SLA| replace:: :abbr:`SLA (Service Level Agreement)`
.. |SLAs| replace:: :abbr:`SLAs (Service Level Agreements)`
.. |SM| replace:: :abbr:`SM (Service Manager)`
.. |SNAT| replace:: :abbr:`SNAT (Source Network Address Translation)`
.. |SNMP| replace:: :abbr:`SNMP (Simple Network Management Protocol)`
.. |SRIOV| replace:: :abbr:`SR-IOV (Single Root I/O Virtualization)`
@ -84,9 +90,11 @@
.. |SSH| replace:: :abbr:`SSH (Secure Shell)`
.. |SSL| replace:: :abbr:`SSL (Secure Socket Layer)`
.. |STP| replace:: :abbr:`STP (Spanning Tree Protocol)`
.. |TFTP| replace:: :abbr:`TFTP (Trivial File Transfer Protocol)`
.. |TLS| replace:: :abbr:`TLS (Transport Layer Security)`
.. |ToR| replace:: :abbr:`ToR (Top-of-Rack)`
.. |TPM| replace:: :abbr:`TPM (Trusted Platform Module)`
.. |TPMs| replace:: :abbr:`TPMs (Trusted Platform Modules)`
.. |UDP| replace:: :abbr:`UDP (User Datagram Protocol)`
.. |UEFI| replace:: :abbr:`UEFI (Unified Extensible Firmware Interface)`
.. |VF| replace:: :abbr:`VF (Virtual Function)`
@ -96,9 +104,9 @@
.. |VM| replace:: :abbr:`VM (Virtual Machine)`
.. |VMs| replace:: :abbr:`VMs (Virtual Machines)`
.. |VNC| replace:: :abbr:`VNC (Virtual Network Computing)`
.. |VNI| replace:: :abbr:`VNI (VXLAN Network Interface)`
.. |VNIs| replace:: :abbr:`VNIs (VXLAN Network Interfaces)`
.. |VPC| replace:: :abbr:`VPC (Virtual Port Channel)`
.. |VXLAN| replace:: :abbr:`VXLAN (Virtual eXtensible Local Area Network)`
.. |VXLANs| replace:: :abbr:`VXLANs (Virtual eXtensible Local Area Networks)`
.. |XML| replace:: :abbr:`XML (eXtensible Markup Language)`