WIP: ToC Preparation for Openstack (#296)
* complete docs refactor
* replaces /data references in favor of /var/lib/localkube - fixes #95
* additional layout changes
* additional operations layout
* readme updates and versioning updates to multinode doc
* remove dragons
* project goal clarity
* remove 'magic'
* clean up container image concerns
* slight verbiage change
* charts not hard coded
* small change requested
* reconfiguration
* updates to minikube docs

parent 412a186eec
commit b83ce91f1e
docs/guides-developer/dev-helm/function-endpoints.md (new file)
@@ -0,0 +1 @@
# TBD
docs/guides-developer/dev-helm/function-hosts.md (new file)
@@ -0,0 +1 @@
# TBD
docs/guides-developer/dev-helm/helm-toolkit.md (new file)
@@ -0,0 +1 @@
# TBD
docs/guides-developer/dev-helm/readme.md (new file)
@@ -0,0 +1,8 @@
# Openstack-Helm development

## Conceptual Guides: Kubernetes
#### [Helm-Toolkit](helm-toolkit.md)
##### [User Registration](registration-user.md)
##### [Domain Registration](registration-domain.md)
##### [Service Registration](registration-service.md)
##### [Endpoint Registration](registration-endpoint.md)
docs/guides-developer/dev-helm/registration-domain.md (new file)
@@ -0,0 +1 @@
# TBD
docs/guides-developer/dev-helm/registration-endpoint.md (new file)
@@ -0,0 +1 @@
# TBD
docs/guides-developer/dev-helm/registration-service.md (new file)
@@ -0,0 +1 @@
# TBD
docs/guides-developer/dev-helm/registration-user.md (new file)
@@ -0,0 +1 @@
# TBD
docs/guides-developer/dev-helm/tips-development.md (new file)
@@ -0,0 +1 @@
# TBD
docs/guides-developer/dev-kubernetes/readme.md (new file)
@@ -0,0 +1 @@
# Table of Contents
docs/guides-developer/getting-started/gs-conditionals.md (new file)
@@ -0,0 +1,29 @@
## Common Conditionals

The OpenStack-Helm charts make the following conditions available across all charts. They can be set at install or upgrade time with Helm, as shown below.

### Developer Mode

```
helm install local/chart --set development.enabled=true
```

The development mode flag should be available on all charts. Enabling it reduces the chart's dependencies on persistent volume claims (which are difficult to support in a laptop minikube environment) and reduces replica counts or resiliency features to support a minimal environment.

The glance chart, for instance, defines the following `development:` overrides:

```
development:
  enabled: false
  storage_path: /var/lib/localkube/openstack-helm/glance/images
```

The `enabled` flag allows the developer to enable development mode. The storage path allows the operator to store glance images in a hostPath instead of leveraging a Ceph backend, which, again, is difficult to spin up in a small laptop minikube environment. The host path can be overridden by the operator if desired.

### Resources

```
helm install local/chart --set resources.enabled=true
```

Resource requests and limits can be turned on and off. By default, they are off. Setting `enabled` to `true` will deploy Kubernetes resources with resource requests and limits applied.
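As a sketch of how a chart template might consume this flag (the volume name here is hypothetical, not taken from the glance chart itself), a manifest could switch between a hostPath and a PVC:

```
{{- if .Values.development.enabled }}
      volumes:
        # development mode: store images on the host
        - name: glance-images
          hostPath:
            path: {{ .Values.development.storage_path }}
{{- else }}
      volumes:
        # production mode: rely on a persistent volume claim
        - name: glance-images
          persistentVolumeClaim:
            claimName: glance-images
{{- end }}
```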
docs/guides-developer/getting-started/gs-endpoints.md (new file)
@@ -0,0 +1,65 @@
## Endpoints

OpenStack is a highly interconnected application: its components require connectivity details for numerous services, including other OpenStack components and infrastructure elements such as databases, queues, and memcached. The project's goal is to provide a consistent mechanism for defining these "endpoints" across all charts, along with the macros necessary to convert those definitions into usable endpoints. The charts should consistently default to building endpoints that assume the operator is leveraging all charts to build their OpenStack cloud. Endpoints should be configurable if an operator would like a chart to work with their existing infrastructure or run elements in different namespaces.

For instance, in the Neutron chart `values.yaml`, the following endpoints are defined:

```
# typically overridden by environmental
# values, but should include all endpoints
# required by this chart
endpoints:
  image:
    hosts:
      default: glance-api
    type: image
    path: null
    scheme: 'http'
    port:
      api: 9292
      registry: 9191
  compute:
    hosts:
      default: nova-api
    path: "/v2/%(tenant_id)s"
    type: compute
    scheme: 'http'
    port:
      api: 8774
      metadata: 8775
      novncproxy: 6080
  identity:
    hosts:
      default: keystone-api
    path: /v3
    type: identity
    scheme: 'http'
    port:
      admin: 35357
      public: 5000
  network:
    hosts:
      default: neutron-server
    path: null
    type: network
    scheme: 'http'
    port:
      api: 9696
```

These values define all the endpoints that the Neutron chart may need in order to build full URL-compatible endpoints to various services. Long-term, these will also include database, memcached, and rabbitmq elements in one place. Essentially, all external connectivity can be defined centrally.

The macros that translate these definitions into the actual URLs are defined in the `helm-toolkit` chart. For instance, the cinder chart defines a `glance_api_servers` definition in the `cinder.conf` template:

```
glance_api_servers = {{ tuple "image" "internal" "api" . | include "helm-toolkit.endpoint_type_lookup_addr" }}
```

This line uses the `endpoint_type_lookup_addr` macro in the `helm-toolkit` chart (since it is used by all charts). Note that there is a second convention here: all `{{ define }}` macros in charts should be prefixed with the chart that defines them. This allows developers to easily identify the source of a Helm macro and avoids namespace collisions. In the example above, the macro `endpoint_type_lookup_addr` is defined in the `helm-toolkit` chart. The macro is passed three parameters (aided by the `tuple` method built into the go/sprig templating library used by Helm):

- image: the OpenStack service that the endpoint is being built for. This will be mapped to `glance`, which is the image service for OpenStack.
- internal: the OpenStack endpoint type we are looking for; valid values are `internal`, `admin`, and `public`.
- api: the port to map to for the service. Some components, such as glance, provide an `api` port and a `registry` port, for example.

Charts should not use hard-coded values such as `http://keystone-api:5000`, because these are not compatible with operator overrides and do not support spreading components out over various namespaces.
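For illustration: assuming the `endpoints` values above, and assuming the macro assembles scheme, host, and port into a URL (an assumption about its internals, not confirmed by this document), the cinder template line would render roughly as:

```
glance_api_servers = http://glance-api:9292
```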
docs/guides-developer/getting-started/gs-images.md (new file)
@@ -0,0 +1,38 @@
## Images

The project's core philosophy regarding images is that the toolsets required to enable the OpenStack services should be applied by Kubernetes itself. This requires OpenStack-Helm to develop common and simple scripts with minimal dependencies that can be overlaid on any image that meets the OpenStack core library requirements. The advantage of this is that the project can be image agnostic, allowing operators to use Stackanetes, Kolla, Yaodu, or any image flavor and format they choose, and they will all function the same.

A long-term goal, besides being image agnostic, is to also be able to support any of the container runtimes that Kubernetes supports, even those that might not use Docker's own packaging format. This will allow the project to continue to offer maximum flexibility with regard to operator choice.

To that end, all charts provide an `images:` section that allows operators to override images. Also, all default image references should be fully spelled out, even those hosted by Docker or Quay. Further, no default image reference should use `:latest`; it should instead be pinned to a specific version to ensure consistent behavior for deployments over time.

Today, the `images:` section has several common conventions. Most OpenStack services require a database initialization function, a database synchronization function, and a series of steps for Keystone registration and integration. Each component may also have a specific image that composes an OpenStack service. The images may or may not differ, but regardless, all should be defined in `images`.

The following standards are in use today, in addition to any components defined by the service itself:

- dep_check: The image that will perform dependency checking in an init-container.
- db_init: The image that will perform database creation operations for the OpenStack service.
- db_sync: The image that will perform database sync (schema initialization and migration) for the OpenStack service.
- ks_user: The image that will perform keystone user creation for the service.
- ks_service: The image that will perform keystone service registration for the service.
- ks_endpoints: The image that will perform keystone endpoint registration for the service.
- pull_policy: The image pull policy, one of "Always", "IfNotPresent", and "Never", which will be used by all containers in the chart.

An illustrative example of an `images:` section, taken from the heat chart:

```
images:
  dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.1.1
  db_init: quay.io/stackanetes/stackanetes-kolla-toolbox:newton
  db_sync: docker.io/kolla/ubuntu-source-heat-api:3.0.1
  ks_user: quay.io/stackanetes/stackanetes-kolla-toolbox:newton
  ks_service: quay.io/stackanetes/stackanetes-kolla-toolbox:newton
  ks_endpoints: quay.io/stackanetes/stackanetes-kolla-toolbox:newton
  api: docker.io/kolla/ubuntu-source-heat-api:3.0.1
  cfn: docker.io/kolla/ubuntu-source-heat-api:3.0.1
  cloudwatch: docker.io/kolla/ubuntu-source-heat-api:3.0.1
  engine: docker.io/kolla/ubuntu-source-heat-engine:3.0.1
  pull_policy: "IfNotPresent"
```

The OpenStack-Helm project today uses a mix of Docker images from Stackanetes and Kolla, but will likely standardize on a default set of images for all charts without any reliance on image-specific utilities.
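Because every image is exposed under `images:`, an operator can override any single one at install time. A hypothetical example (the tag shown is illustrative, not a published image):

```
helm install local/heat --set images.engine=docker.io/kolla/ubuntu-source-heat-engine:3.0.2
```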
docs/guides-developer/getting-started/gs-labels.md (new file)
@@ -0,0 +1,82 @@
## Labels

This project uses `nodeSelectors` as well as `podAntiAffinity` rules to ensure resources land in the proper place within Kubernetes. Today, OpenStack-Helm employs four labels:

- ceph-storage: enabled
- openstack-control-plane: enabled
- openstack-compute-node: enabled
- openvswitch: enabled

NOTE: The `openvswitch` label is applicable to both `openstack-control-plane` and `openstack-compute-node` nodes. Ideally, you would eliminate the `openvswitch` label if you could simply do an OR of (`openstack-control-plane` and `openstack-compute-node`). However, Kubernetes `nodeSelectors` do not support this logic, so a third label that spans both sets of hosts is required, which in this case is `openvswitch`. The Open vSwitch service must run on both control plane and tenant nodes, with both labels, to provide connectivity for DHCP, L3, and Metadata services. These Open vSwitch services run as part of the control plane as well as tenant connectivity, which runs as part of the compute node infrastructure.
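These labels are applied to nodes with standard `kubectl` commands; as a sketch (the second node name is illustrative):

```
kubectl label nodes kubenode01 openstack-control-plane=enabled openvswitch=enabled
kubectl label nodes kubenode02 openstack-compute-node=enabled openvswitch=enabled
```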
Labels are, of course, definable and overridable by the chart operators. Labels are defined in charts with a `labels:` section, a common convention that defines both a selector key and a value:

```
labels:
  node_selector_key: openstack-control-plane
  node_selector_value: enabled
```

In some cases, such as with the Neutron chart, a chart may need to define more than one label. In such cases, each element should be articulated under the `labels:` section, nesting where appropriate:

```
labels:
  # ovs is a special case, requiring a special
  # label that can apply to both control hosts
  # and compute hosts, until we get more sophisticated
  # with our daemonset scheduling
  ovs:
    node_selector_key: openvswitch
    node_selector_value: enabled
  agent:
    dhcp:
      node_selector_key: openstack-control-plane
      node_selector_value: enabled
    l3:
      node_selector_key: openstack-control-plane
      node_selector_value: enabled
    metadata:
      node_selector_key: openstack-control-plane
      node_selector_value: enabled
  server:
    node_selector_key: openstack-control-plane
    node_selector_value: enabled
```

These labels should be leveraged by `nodeSelector` definitions in charts for all resources, including jobs:

```
...
spec:
  nodeSelector:
    {{ .Values.labels.node_selector_key }}: {{ .Values.labels.node_selector_value }}
  containers:
...
```

In some cases, especially with infrastructure components, it is necessary for the chart developer to provide scheduling instructions to Kubernetes to help ensure proper resiliency. The most common examples employed today are podAntiAffinity rules, such as those used in the `mariadb` chart. These should be placed on all foundational elements so that Kubernetes will not only disperse resources for resiliency, but also allow multi-replica installations to deploy successfully into a single-host environment:

```
annotations:
  # this soft requirement allows single
  # host deployments to spawn several mariadb containers
  # but in a larger environment, would attempt to spread
  # them out
  scheduler.alpha.kubernetes.io/affinity: >
    {
      "podAntiAffinity": {
        "preferredDuringSchedulingIgnoredDuringExecution": [{
          "labelSelector": {
            "matchExpressions": [{
              "key": "app",
              "operator": "In",
              "values": ["mariadb"]
            }]
          },
          "topologyKey": "kubernetes.io/hostname",
          "weight": 10
        }]
      }
    }
```
docs/guides-developer/getting-started/gs-overrides.md (new file)
@@ -0,0 +1,3 @@
# Helm Overrides

This document covers Helm overrides and the OpenStack-Helm approach to them. For more information on Helm overrides in general, see the Helm [Values Documentation](https://github.com/kubernetes/helm/blob/master/docs/charts.md#values-files).
docs/guides-developer/getting-started/gs-replicas.md (new file)
@@ -0,0 +1,19 @@
## Replicas

All charts must provide replica definitions and leverage those in the Kubernetes manifests. This allows site operators to tune the replica counts at install or upgrade time. Each chart should deploy with multiple replicas by default to ensure that production deployments are treated as first-class citizens, and that services are tested with multiple replicas more frequently during development and testing. Developers wishing to deploy minimal environments can enable the `development` mode override, which should enforce only one replica per component.

The convention today in OpenStack-Helm is to define a `replicas:` section for the chart, where each component being deployed has its own tunable value.

For example, the `glance` chart provides the following replicas in `values.yaml`:

```
replicas:
  api: 2
  registry: 2
```

An operator can override these on `install` or `upgrade`:

```
$ helm install local/glance --set replicas.api=3,replicas.registry=3
```
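In the chart manifests, these tunables are what the Kubernetes objects reference; a glance API deployment would consume its value roughly like this (a minimal sketch):

```
spec:
  # tunable via --set replicas.api=N at install or upgrade time
  replicas: {{ .Values.replicas.api }}
```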
docs/guides-developer/getting-started/gs-resources.md (new file)
@@ -0,0 +1,50 @@
## Resource Limits

Resource limits should be defined for all charts within OpenStack-Helm.

The convention is to leverage a `resources:` section within `values.yaml`, with an `enabled` setting that defaults to `false` but can be turned on by the operator at install or upgrade time.

The resources specify the requests (memory and cpu) and limits (memory and cpu) for each deployed component. For example, from the Nova chart `values.yaml`:

```
resources:
  enabled: false
  nova_compute:
    requests:
      memory: "124Mi"
      cpu: "100m"
    limits:
      memory: "1024Mi"
      cpu: "2000m"
  nova_libvirt:
    requests:
      memory: "124Mi"
      cpu: "100m"
    limits:
      memory: "1024Mi"
      cpu: "2000m"
  nova_api_metadata:
    requests:
      memory: "124Mi"
      cpu: "100m"
    limits:
      memory: "1024Mi"
      cpu: "2000m"
  ...
```

These resource definitions are then applied to the appropriate component when the `enabled` flag is set. For instance, the following nova_compute daemonset has the requests and limits values applied from `.Values.resources.nova_compute`:

```
{{- if .Values.resources.enabled }}
        resources:
          requests:
            memory: {{ .Values.resources.nova_compute.requests.memory | quote }}
            cpu: {{ .Values.resources.nova_compute.requests.cpu | quote }}
          limits:
            memory: {{ .Values.resources.nova_compute.limits.memory | quote }}
            cpu: {{ .Values.resources.nova_compute.limits.cpu | quote }}
{{- end }}
```

When a chart developer doesn't know what resource limits or requests to apply to a new component, they can deploy the component locally and examine resource utilization with tools like WeaveScope. The resource limits may not be perfect on initial submission, but over time, and with community contributions, they can be refined to reflect reality.
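Combining the two conventions, an operator can enable resource management and tune an individual component's limits in one command (the override values shown are illustrative):

```
helm install local/nova --set resources.enabled=true,resources.nova_compute.limits.cpu=4000m
```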
docs/guides-developer/getting-started/gs-upgrades.md (new file)
@@ -0,0 +1,44 @@
## Upgrades and Reconfiguration

The OpenStack-Helm project assumes all upgrades will be done through Helm. This includes handling several different resource types. First, changes to the Helm chart templates themselves are handled. Second, all of the resources layered on top of the container image, such as `ConfigMaps` (which include both scripts and configuration files), are updated during an upgrade. Finally, changes to image references will result in rolling updates of containers, replacing them with the updated image.

As Helm stands today, several issues exist when you update images within charts that might have been used by jobs that already ran to completion or are still in flight. OpenStack-Helm developers will continue to work with the Helm community, or develop charts, to support job removal prior to an upgrade, which will recreate services with updated images. An example of where this behavior would be desirable is when an updated db_sync image has been changed to point from a Mitaka image to a Newton image. In this case, the operator will likely want the db_sync job, which was already run and completed during site installation, to run again with the updated image to bring the schema in line with the Newton release.

The OpenStack-Helm project also implements annotations across all chart configmaps so that changing resources inside containers, such as configuration files, triggers a Kubernetes rolling update. This means that those resources can be updated without deleting and redeploying the service and can be treated like any other upgrade, such as a container image change.

This is accomplished with the following annotation:

```
...
annotations:
  configmap-bin-hash: {{ tuple "configmap-bin.yaml" . | include "hash" }}
  configmap-etc-hash: {{ tuple "configmap-etc.yaml" . | include "hash" }}
```

The `hash` function defined in the `helm-toolkit` chart ensures that any change to any file referenced by configmap-bin.yaml or configmap-etc.yaml results in a new hash, which will then trigger a rolling update.
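A helper like this can be built from functions the go/sprig templating library already provides. The sketch below is one way such a `hash` macro could be implemented — an assumption about `helm-toolkit` internals, not a copy of them — resolving the sibling template by name and hashing its rendered contents:

```
{{- define "hash" }}
{{- $name := index . 0 }}
{{- $context := index . 1 }}
{{- /* swap the current template's filename for the target's, keeping its path */ -}}
{{- $last := base $context.Template.Name }}
{{- $path := $context.Template.Name | replace $last $name }}
{{- /* render the target template and reduce it to a stable digest */ -}}
{{- include $path $context | sha256sum | quote }}
{{- end }}
```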
All chart components (except `DaemonSets`) are outfitted by default with rolling update strategies:

```
spec:
  replicas: {{ .Values.replicas }}
  revisionHistoryLimit: {{ .Values.upgrades.revision_history }}
  strategy:
    type: {{ .Values.upgrades.pod_replacement_strategy }}
    {{ if eq .Values.upgrades.pod_replacement_strategy "RollingUpdate" }}
    rollingUpdate:
      maxUnavailable: {{ .Values.upgrades.rolling_update.max_unavailable }}
      maxSurge: {{ .Values.upgrades.rolling_update.max_surge }}
    {{ end }}
```

The same defaults are supplied in `values.yaml` in every chart, which allows the operator to override them at upgrade or deployment time:

```
upgrades:
  revision_history: 3
  pod_replacement_strategy: RollingUpdate
  rolling_update:
    max_unavailable: 1
    max_surge: 3
```
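Because these defaults live in `values.yaml`, they can be tuned per upgrade; for example (the release name is illustrative):

```
helm upgrade my-keystone local/keystone --set upgrades.rolling_update.max_unavailable=0
```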
docs/guides-developer/getting-started/gs-values.md (new file)
@@ -0,0 +1,9 @@
## Default Values

Two major philosophies guide the OpenStack-Helm values approach. It is important that new chart developers understand the `values.yaml` approach OpenStack-Helm takes within each of its charts, to ensure that all charts are both consistent and remain a joy to work with.

The first philosophy is that all charts should be independently installable and should not require a parent chart. This means that the values file in each chart should be self-contained. The project avoids using Helm globals and parent charts as requirements for capturing and feeding environment-specific overrides into subcharts. An example of a single site-definition YAML that can be source controlled and used as `--values` input to all OpenStack-Helm charts, to maintain overrides in one testable place, is forthcoming. Currently, Helm does not support using `--values` to select one section of a larger override file's YAML namespace. Ideally, the project seeks native Helm support for `helm install local/keystone --values=environment.yaml:keystone`, where `environment.yaml` is the operator's chart-wide environment definition and `keystone` is the section in environment.yaml that will be fed to the keystone chart during install as overrides. Standard YAML anchors can be used to duplicate common elements like the `endpoints` sections. At the time of writing, operators can use a temporary approach like [values.py](https://github.com/att-comdev/openstack-helm/blob/master/helm-toolkit/utils/values/values.py) to chunk up a single override YAML file as input to various individual charts. Overrides, just like the templates themselves, should be source controlled and tested, especially for operators running charts at scale. This project will continue to examine efforts such as [helm-value-store](https://github.com/skuid/helm-value-store) and solutions in the vein of [helmfile](https://github.com/roboll/helmfile). Another compelling project that seems to address the needs of orchestrating multiple charts and managing site-specific overrides is [Landscape](https://github.com/Eneco/landscaper).

The second philosophy is that the values files should be consistent across all charts, including charts in core, infra, and add-ons. This provides a consistent way for operators to override settings, such as enabling developer mode, defining resource limitations, and customizing the actual OpenStack configuration within chart templates, without having to guess how a particular chart developer has laid out their values.yaml. There are also various macros in the `helm-toolkit` chart that depend on the `values.yaml` within all charts being structured a certain way.

Finally, where charts reference connectivity information for other services, sane defaults should be provided. In cases where these services are provided by OpenStack-Helm itself, the defaults should assume that the user will use the OpenStack-Helm charts for those services, but should also allow those charts to be overridden if the operator has them externally deployed.
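As a brief sketch of the YAML-anchor technique mentioned above (the section layout here is hypothetical), a site definition can declare a common `endpoints` block once and reuse it across chart sections:

```
common:
  endpoints: &default_endpoints
    identity:
      hosts:
        default: keystone-api

# each section below is fed to its chart as overrides
keystone:
  endpoints: *default_endpoints

glance:
  endpoints: *default_endpoints
```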
docs/guides-developer/getting-started/readme.md (new file)
@@ -0,0 +1 @@
# Table of Contents
docs/guides-developer/readme.md (new file)
@@ -0,0 +1 @@
# Table of Contents
docs/guides-install/install-aio.md (new file)
@@ -0,0 +1 @@
# Installation: AIO
@@ -187,12 +187,8 @@ To deploy Openstack-Helm in development mode, ensure you've created a minikube-a
/data
/var/lib/localkube
/var/lib/docker
/tmp/hostpath_pv
/tmp/hostpath-provisioner
```
As a result of this guidance, we recommend creating the following directory for MariaDB, as shown below:

```
sudo mkdir -p /data/openstack-helm/mariadb
```
### Label Minikube Node
@@ -6,9 +6,9 @@ In order to drive towards a production-ready Openstack solution, our goal is to
 | | Version | Notes |
 |--- |--- |--- |
-| **Kubernetes** | [v1.5.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md#v153) | [Custom Controller for RDB tools](https://quay.io/repository/attcomdev/kube-controller-manager?tab=tags) |
+| **Kubernetes** | [v1.5.5](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md#v155) | [Custom Controller for RDB tools](https://quay.io/repository/attcomdev/kube-controller-manager?tab=tags) |
-| **Helm** | [v2.2.1](https://github.com/kubernetes/helm/releases/tag/v2.2.1) | Planning for [v2.3.0](https://github.com/kubernetes/helm/milestone/30) |
+| **Helm** | [v2.2.3](https://github.com/kubernetes/helm/releases/tag/v2.2.3) | Planning for [v2.3.0](https://github.com/kubernetes/helm/milestone/30) |
-| **Calico** | [v2.0](http://docs.projectcalico.org/v2.0/releases/) | [`calicoctl` v1.0](https://github.com/projectcalico/calicoctl/releases) |
+| **Calico** | [v2.1](http://docs.projectcalico.org/v2.1/releases/) | [`calicoctl` v1.1](https://github.com/projectcalico/calicoctl/releases) |
 | **Docker** | [v1.12.6](https://github.com/docker/docker/releases/tag/v1.12.6) | [Per kubeadm Instructions](http://kubernetes.io/docs/getting-started-guides/kubeadm/) |

 Other versions and considerations (such as other CNI SDN providers), config map data, and value overrides will be included in other documentation as we explore these options further.
@@ -42,9 +42,9 @@ admin@kubenode01:~$
 After an initial `kubeadm` deployment has been scheduled, it is time to deploy a CNI-enabled SDN. We have selected **Calico**, but have also confirmed that this works for Weave and Romana. For Calico version v2.0, you can apply the provided [Kubeadm Hosted Install](http://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/hosted/kubeadm/) manifest:

 ```
-kubectl apply -f http://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/hosted/kubeadm/calico.yaml
+kubectl apply -f http://docs.projectcalico.org/v2.1/getting-started/kubernetes/installation/hosted/kubeadm/calico.yaml
 ```
-**PLEASE NOTE:** If you are using a 192.168.0.0/16 CIDR for your Kubernetes hosts, you will need to modify [line 42](https://gist.github.com/v1k0d3n/a152b1f5b8db5a8ae9c8c7da575a9694#file-calico-kubeadm-hosted-yml-L42) for the `cidr` declaration within the `ippool`. This must be a `/16` range or more, as the `kube-controller` will hand out `/24` ranges to each node. We have included a sample comparison of the changes [here](http://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/hosted/kubeadm/calico.yaml) and [here](https://gist.githubusercontent.com/v1k0d3n/a152b1f5b8db5a8ae9c8c7da575a9694/raw/c950eef1123a7dcc4b0dedca1a202e0c06248e9e/calico-kubeadm-hosted.yml).
+**PLEASE NOTE:** For Calico deployments using v2.0, if you are using a 192.168.0.0/16 CIDR for your Kubernetes hosts, you will need to modify [line 42](https://gist.github.com/v1k0d3n/a152b1f5b8db5a8ae9c8c7da575a9694#file-calico-kubeadm-hosted-yml-L42) for the `cidr` declaration within the `ippool`. This must be a `/16` range or more, as the `kube-controller` will hand out `/24` ranges to each node. We have included a sample comparison of the changes [here](http://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/hosted/kubeadm/calico.yaml) and [here](https://gist.githubusercontent.com/v1k0d3n/a152b1f5b8db5a8ae9c8c7da575a9694/raw/c950eef1123a7dcc4b0dedca1a202e0c06248e9e/calico-kubeadm-hosted.yml). This is not applicable for Calico v2.1.

 After the container CNI-SDN is deployed, Calico has a tool you can use to verify your deployment. You can download this tool, [`calicoctl`](https://github.com/projectcalico/calicoctl/releases), to execute the following command:
 ```
docs/guides-install/readme.md (new file)
@@ -0,0 +1,6 @@
# Installation Guides

## Installation Guides
#### [Development: Minikube](install-minikube.md)
#### [Evaluation: AIO](install-aio.md)
#### [Multinode: Multi-Server](install-multinode.md)
docs/guides-install/third-pary-tools/armada.md (new file)
@@ -0,0 +1 @@
# Armada
docs/guides-operator/getting-started/readme.md (new file)
@@ -0,0 +1 @@
# Table of Contents
docs/guides-operator/guides-helm/helm-addons.md (new file)
@@ -0,0 +1 @@
# Helm-Addons
docs/guides-operator/guides-helm/readme.md (new file)
@@ -0,0 +1 @@
# Table of Contents
@@ -0,0 +1 @@
# Kubernetes Init-Containers
docs/guides-operator/guides-kubernetes/kb-jobs.md (new file)
@@ -0,0 +1 @@
# Kubernetes Jobs
docs/guides-operator/guides-kubernetes/readme.md (new file)
@@ -0,0 +1,5 @@
# Openstack-Helm development

## Conceptual Guides: Kubernetes
#### [Init Containers](init-containers.md)
#### [User Registration](registration-user.md)
docs/guides-operator/guides-network/net-ingress.md (new file)
@@ -0,0 +1 @@
# Using Ingress
docs/guides-operator/guides-network/net-nodeport.md (new file)
@@ -0,0 +1 @@
# Using NodePorts
docs/guides-operator/guides-network/readme.md (new file)
@@ -0,0 +1,9 @@
# Table of Contents

### 4.1 Kubernetes Control Plane
#### 4.1.1 CNI SDN Considerations
#### 4.1.2 Calico Networking
### 4.2 Ingress Philosophy
### 4.3 Openstack Networking
#### 4.3.1 Flat Networking
#### 4.3.2 L2 Networking
docs/guides-operator/guides-security/readme.md (new file)
@@ -0,0 +1,8 @@
# Table of Contents

## 5. Security Guidelines
### 5.1 Network Policies
### 5.2 Advanced Network Policies
### 5.3 Role-Based Access Controls
### 5.4 Security Contexts
### 5.5 Security Add-Ons
docs/guides-operator/guides-security/sec-namespaces.md (new file)
@@ -0,0 +1 @@
# Namespace Isolation
docs/guides-operator/guides-security/sec-rbac.md (new file)
@@ -0,0 +1 @@
# Role-Based Access Controls
@@ -3,4 +3,3 @@
 This guide is to help users debug any networking issues when deploying Charts in this repository.
-
 # Diagnosing the problem
docs/guides-welcome/project-overview.md (new file)
@@ -0,0 +1,4 @@
## 2. Repository Structure
### 2.1 Infrastructure Components
### 2.2 Openstack-Helm Core Services
### 2.3 Openstack-Helm Add-Ons
docs/guides-welcome/readme.md (new file)
@@ -0,0 +1,6 @@
## Table of Contents: Welcome Guide

- [Mission Statement](../readme.md#mission_statement)
- [Overview](welcome-overview.md)
- [Resiliency](welcome-resiliency.md)
- [Scaling](welcome-scaling.md)
docs/guides-welcome/welcome-overview.md (new file)
@@ -0,0 +1,3 @@
## Project Overview

The goal for OpenStack-Helm is to provide an incredibly customizable *framework* for operators and developers alike. This framework will enable end-users to deploy, maintain, and upgrade a fully functioning OpenStack environment, for both simple and complex deployments. Administrators or developers can deploy either all or individual OpenStack components, along with their required dependencies. It heavily borrows concepts from [Stackanetes](https://github.com/stackanetes/stackanetes) and [other complex Helm application deployments](https://github.com/sapcc/openstack-helm). This project is meant to be a collaborative project that brings OpenStack applications into a [Cloud-Native](https://www.cncf.io/about/charter) model.
@@ -1,7 +1,3 @@
-# Mission
-
-The goal for OpenStack-Helm is to provide an incredibly customizable *framework* for operators and developers alike. This framework will enable end-users to deploy, maintain, and upgrade a fully functioning OpenStack environment for both simple and complex environments. Administrators or developers can either deploy all or individual OpenStack components along with their required dependencies. It heavily borrows concepts from [Stackanetes](https://github.com/stackanetes/stackanetes) and [other complex Helm application deployments](https://github.com/sapcc/openstack-helm). This project is meant to be a collaborative project that brings Openstack applications into a [Cloud-Native](https://www.cncf.io/about/charter) model.
-
 ## Resiliency

 One of the goals of this project is to produce a set of charts that can be used in a production setting to deploy and upgrade OpenStack. To achieve this goal, all components must be resilient, including both OpenStack and Infrastructure components leveraged by this project. In addition, this also includes Kubernetes itself. It is part of our mission to ensure that all infrastructure components are highly available and that a deployment can withstand a physical host failure out of the box. This means that:
@@ -11,14 +7,3 @@ One of the goals of this project is to produce a set of charts that can be used
 - Scheduling annotations need to be employed to ensure maximum resiliency for multi-host environments. They also need to be flexible to allow all-in-one deployments. To this end, we promote the usage of `podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution` for most infrastructure elements.
 - We make the assumption that we can depend on a reliable implementation of centralized storage to create PVCs within Kubernetes to support resiliency and complex application design. Today, this is provided by the included Ceph chart. There is much work to do when making even a single backend production ready. We have chosen to focus on bringing Ceph into a production-ready state, which includes handling real-world deployment scenarios, resiliency, and pool configurations. In the future, we would like to offer flexibility in choosing a hardened backend.
 - We will document the best practices for running a resilient Kubernetes cluster in production. This includes documenting the steps necessary to make all components resilient, such as Etcd and SkyDNS where possible, and point out gaps due to missing features.
-
-## Scaling
-
-Scaling is another first class citizen in openstack-helm. We will be working to ensure that we support various deployment models that can support hyperscale, such as:
-
-- Ensuring that by default, clusters include multiple replicas to verify that scaling issues are identified early and often (unless development mode is enabled).
-- Ensuring that every chart can support more than one replica and allowing operators to override those replica counts. For some applications, this means that they support clustering.
-- Ensuring clustering style applications are not limited to fixed replica counts. For instance, we want to ensure that we can support n Galera members and have those scale linearly, within reason, as opposed to only supporting a fixed count.
-- Duplicate charts of the same type within the same namespace. For example, deploying rabbitmq twice, to the openstack namespace resulting in two fully functioning clusters.
-- Allowing charts to be deployed to a diverse set of namespaces. For example, allowing infrastructure to be deployed in one namespace and OpenStack in another, or deploying each chart in its own namespace.
-- Supporting hyperscale configurations that call for per-component infrastructure, such as a dedicated database and RabbitMQ solely for Ceilometer, or even dedicated infrastructure(s) for every component you deploy. It is unique, large scale deployment designs such as this that only become practical under a Kubernetes/Container framework and we want to ensure that we can support them.
docs/guides-welcome/welcome-scaling.md (new file)
@@ -0,0 +1,10 @@
## Scaling

Scaling is another first-class citizen in openstack-helm. We will be working to ensure that we support various deployment models that can support hyperscale, such as:

- Ensuring that, by default, clusters include multiple replicas to verify that scaling issues are identified early and often (unless development mode is enabled).
- Ensuring that every chart can support more than one replica and allowing operators to override those replica counts. For some applications, this means that they support clustering.
- Ensuring clustering-style applications are not limited to fixed replica counts. For instance, we want to ensure that we can support n Galera members and have those scale linearly, within reason, as opposed to only supporting a fixed count.
- Supporting duplicate charts of the same type within the same namespace; for example, deploying rabbitmq twice to the openstack namespace, resulting in two fully functioning clusters.
- Allowing charts to be deployed to a diverse set of namespaces; for example, allowing infrastructure to be deployed in one namespace and OpenStack in another, or deploying each chart in its own namespace.
- Supporting hyperscale configurations that call for per-component infrastructure, such as a dedicated database and RabbitMQ solely for Ceilometer, or even dedicated infrastructure for every component you deploy. It is unique, large-scale deployment designs such as this that only become practical under a Kubernetes/container framework, and we want to ensure that we can support them.
@ -1,349 +0,0 @@
|
|||||||
# Helm Overrides
|
|
||||||
|
|
||||||
This document covers Helm overrides and the OpenStack-Helm approach. For more information on Helm overrides in general see the Helm [Values Documentation](https://github.com/kubernetes/helm/blob/master/docs/charts.md#values-files)
|
|
||||||
|
|
||||||
## Values Philosophy
|
|
||||||
|
|
||||||
Two major philosophies guide the OpenStack-Helm values approach. It is important that new chart developers understand the `values.yaml` approach OpenStack-Helm has within each of its charts to ensure that all charts are both consistent and remain a joy to work with.
|
|
||||||
|
|
||||||
The first philosophy to understand is that all charts should be independently installable and should not require a parent chart. This means that the values file in each chart should be self-contained. The project avoids using Helm globals and parent charts as requirements for capturing and feeding environment specific overrides into subcharts. An example of a single site definition YAML that can be source controlled and used as `--values` input to all OpenStack-Helm charts to maintain overrides in one testable place is forthcoming. Currently Helm does not support a `--values=environment.yaml` chunking up a larger override file's YAML namespace. Ideally, the project seeks native Helm support for `helm install local/keystone --values=environment.yaml:keystone` where `environment.yaml` is the operator's chart-wide environment definition and `keystone` is the section in environment.yaml that will be fed to the keystone chart during install as overrides. Standard YAML anchors can be used to duplicate common elements like the `endpoints` sections. At the time of writing, operators can use a temporary approach like [values.py](https://github.com/att-comdev/openstack-helm/blob/master/helm-toolkit/utils/values/values.py) to chunk up a single override YAML file as input to various individual charts. Overrides, just like the templates themselves, should be source controlled and tested, especially for operators operating charts at scale. This project will continue to examine efforts such as [helm-value-store](https://github.com/skuid/helm-value-store) and solutions in the vein of [helmfile](https://github.com/roboll/helmfile). Another compelling project that seems to address the needs of orchestrating multiple charts and managing site specific overrides is [Landscape](https://github.com/Eneco/landscaper)
|
|
||||||
|
|
||||||
The second philosophy is that the values files should be consistent across all charts, including charts in core, infra, and add-ons. This provides a consistent way for operators to override settings, such as enabling developer mode, defining resource limitations, and customizing the actual OpenStack configuration within chart templates without having to guess how a particular chart developer has laid out their values.yaml. There are also various macros in the `helm-toolkit` chart that will depend on the `values.yaml` within all charts being structured a certain way.
|
|
||||||
|
|
||||||
Finally, where charts reference connectivity information for other services sane defaults should be provided. In cases where these services are provided by OpenStack-Helm itself, the defaults should assume that the user will use the OpenStack-Helm charts for those services, but should also allow those charts to be overridden if the operator has them externally deployed.
|
|
||||||
|
|
||||||
## Replicas
|
|
||||||
|
|
||||||
All charts must provide replica definitions and leverage those in the Kubernetes manifests. This allows site operators to tune the replica counts at install or when upgrading. Each chart should deploy with multiple replicas by default to ensure that production deployments are treated as first class citizens, and that services are tested with multiple replicas more frequently during development and testing. Developers wishing to deploy minimal environments can enable the `development` mode override, which should enforce only one replica per component.
|
|
||||||
|
|
||||||
The convention today in OpenStack-Helm is to define a `replicas:` section for the chart, where each component being deployed has its own tunable value.
|
|
||||||
|
|
||||||
For example, the `glance` chart provides the following replicas in `values.yaml`
|
|
||||||
|
|
||||||
```
|
|
||||||
replicas:
|
|
||||||
api: 2
|
|
||||||
registry: 2
|
|
||||||
```
|
|
||||||
|
|
||||||
An operator can override these on `install` or `upgrade`:
|
|
||||||
|
|
||||||
```
|
|
||||||
$ helm install local/glance --set replicas.api=3,replicas.registry=3
|
|
||||||
```
|
|
||||||
|
|
||||||
## Labels
|
|
||||||
|
|
||||||
This project uses `nodeSelectors` as well as `podAntiAffinity` rules to ensure resources land in the proper place within Kubernetes. Today, OpenStack-Helm employs four labels:
|
|
||||||
|
|
||||||
- ceph-storage: enabled
|
|
||||||
- openstack-control-plane: enabled
|
|
||||||
- openstack-compute-node: enabled
|
|
||||||
- openvswitch: enabled
|
|
||||||
|
|
||||||
NOTE: The `openvswitch` label is an element that is applicable to both `openstack-control-plane` as well as `openstack-compute-node` nodes. Ideally, you would eliminate the `openvswitch` label if you could simply do an OR of (`openstack-control-plane` and `openstack-compute-node`). However, Kubernetes `nodeSelectors` prohibits this specific logic. As a result of this, a third label that spans all hosts is required, which in this case is `openvswitch`. The Open vSwitch service must run on both control plane and tenant nodes with both labels to provide connectivity for DHCP, L3, and Metadata services. These Open vSwitch services run as part of the control plane as well as tenant connectivity, which runs as part of the compute node infrastructure.
|
|
||||||
|
|
||||||
|
|
||||||
Labels are of course definable and overridable by the chart operators. Labels are defined in charts by using a `labels:` section, which is a common convention that defines both a selector and a value:
|
|
||||||
|
|
||||||
```
|
|
||||||
labels:
|
|
||||||
node_selector_key: openstack-control-plane
|
|
||||||
node_selector_value: enabled
|
|
||||||
```
|
|
||||||
|
|
||||||
In some cases, such as with the Neutron chart, a chart may need to define more then one label. In cases such as this, each element should be articulated under the `labels:` section, nesting where appropriate:
|
|
||||||
|
|
||||||
```
|
|
||||||
labels:
|
|
||||||
# ovs is a special case, requiring a special
|
|
||||||
# label that can apply to both control hosts
|
|
||||||
# and compute hosts, until we get more sophisticated
|
|
||||||
# with our daemonset scheduling
|
|
||||||
ovs:
|
|
||||||
node_selector_key: openvswitch
|
|
||||||
node_selector_value: enabled
|
|
||||||
agent:
|
|
||||||
dhcp:
|
|
||||||
node_selector_key: openstack-control-plane
|
|
||||||
node_selector_value: enabled
|
|
||||||
l3:
|
|
||||||
node_selector_key: openstack-control-plane
|
|
||||||
node_selector_value: enabled
|
|
||||||
metadata:
|
|
||||||
node_selector_key: openstack-control-plane
|
|
||||||
node_selector_value: enabled
|
|
||||||
server:
|
|
||||||
node_selector_key: openstack-control-plane
|
|
||||||
node_selector_value: enabled
|
|
||||||
```
|
|
||||||
|
|
||||||
These labels should be leveraged by `nodeSelector` definitions in charts for all resources, including jobs:
|
|
||||||
|
|
||||||
```
|
|
||||||
...
|
|
||||||
spec:
|
|
||||||
nodeSelector:
|
|
||||||
{{ .Values.labels.node_selector_key }}: {{ .Values.labels.node_selector_value }}
|
|
||||||
containers:
|
|
||||||
...
|
|
||||||
```
|
|
||||||
|
|
||||||
In some cases, especially with infrastructure components, it is necessary for the chart developer to provide scheduling instruction to Kubernetes to help ensure proper resiliency. The most common examples employed today are podAntiAffinity rules, such as those used in the `mariadb` chart. These should be placed on all foundational elements so that Kubernetes will not only disperse resources for resiliency, but also allow multi-replica installations to deploy successfully into a single host environment:
|
|
||||||
|
|
||||||
```
|
|
||||||
annotations:
|
|
||||||
# this soft requirement allows single
|
|
||||||
# host deployments to spawn several mariadb containers
|
|
||||||
# but in a larger environment, would attempt to spread
|
|
||||||
# them out
|
|
||||||
scheduler.alpha.kubernetes.io/affinity: >
|
|
||||||
{
|
|
||||||
"podAntiAffinity": {
|
|
||||||
"preferredDuringSchedulingIgnoredDuringExecution": [{
|
|
||||||
"labelSelector": {
|
|
||||||
"matchExpressions": [{
|
|
||||||
"key": "app",
|
|
||||||
"operator": "In",
|
|
||||||
"values":["mariadb"]
|
|
||||||
}]
|
|
||||||
},
|
|
||||||
"topologyKey": "kubernetes.io/hostname",
|
|
||||||
"weight": 10
|
|
||||||
}]
|
|
||||||
}
|
|
||||||
}
|
|
||||||
```

## Images

The project's core philosophy regarding images is that the toolsets required to enable the OpenStack services should be applied by Kubernetes itself. This requires OpenStack-Helm to develop common, simple scripts with minimal dependencies that can be overlaid on any image that meets the OpenStack core library requirements. The advantage of this is that the project can be image agnostic, allowing operators to use Stackanetes, Kolla, Yaodu, or any image flavor and format they choose, and they will all function the same.

A long-term goal, besides being image agnostic, is also to support any of the container runtimes that Kubernetes supports, even those that might not use Docker's own packaging format. This will allow the project to continue to offer maximum flexibility with regard to operator choice.

To that end, all charts provide an `images:` section that allows operators to override images. All default image references should be fully spelled out, even those hosted on Docker Hub or Quay, and no default image reference should use `:latest`; each should be pinned to a specific version to ensure consistent behavior for deployments over time.
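
To make this concrete, a minimal sketch of an operator override follows; the file name `my-overrides.yaml` and the registry shown are hypothetical, chosen only to illustrate replacing one pinned reference with another (for example via `helm install local/heat --values my-overrides.yaml`):

```
# my-overrides.yaml (hypothetical): swap a single image while
# leaving the chart's other fully qualified, pinned defaults intact
images:
  api: registry.example.com/openstack/heat-api:3.0.1-custom
```

Because every default is fully qualified and version-pinned, an override of this kind changes exactly one image and nothing else.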

Today, the `images:` section has several common conventions. Most OpenStack services require a database initialization function, a database synchronization function, and a series of steps for Keystone registration and integration. Each component may also have a specific image that composes an OpenStack service. The images may or may not differ, but regardless, they should all be defined in `images:`.

The following standards are in use today, in addition to any components defined by the service itself:

- dep_check: The image that will perform dependency checking in an init-container (see the sketch after this list).
- db_init: The image that will perform database creation operations for the OpenStack service.
- db_sync: The image that will perform database sync (schema insertion) for the OpenStack service.
- ks_user: The image that will perform keystone user creation for the service.
- ks_service: The image that will perform keystone service registration for the service.
- ks_endpoints: The image that will perform keystone endpoint registration for the service.
- pull_policy: The image pull policy, one of "Always", "IfNotPresent", or "Never", which is used by all containers in the chart.
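
As referenced in the `dep_check` entry above, a minimal sketch of how that image might be wired into a pod as an init-container is shown here; the `DEPENDENCY_SERVICE` variable and the service names listed are assumptions based on the kubernetes-entrypoint image, not a guaranteed interface:

```
initContainers:
  - name: init
    # hypothetical wiring of the dep_check image; the environment
    # contract shown is an assumption for illustration only
    image: {{ .Values.images.dep_check }}
    imagePullPolicy: {{ .Values.images.pull_policy }}
    env:
      - name: NAMESPACE
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
      - name: DEPENDENCY_SERVICE
        value: "mariadb,keystone-api"
```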

An illustrative example of an `images:` section taken from the heat chart:

```
images:
  dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.1.1
  db_init: quay.io/stackanetes/stackanetes-kolla-toolbox:newton
  db_sync: docker.io/kolla/ubuntu-source-heat-api:3.0.1
  ks_user: quay.io/stackanetes/stackanetes-kolla-toolbox:newton
  ks_service: quay.io/stackanetes/stackanetes-kolla-toolbox:newton
  ks_endpoints: quay.io/stackanetes/stackanetes-kolla-toolbox:newton
  api: docker.io/kolla/ubuntu-source-heat-api:3.0.1
  cfn: docker.io/kolla/ubuntu-source-heat-api:3.0.1
  cloudwatch: docker.io/kolla/ubuntu-source-heat-api:3.0.1
  engine: docker.io/kolla/ubuntu-source-heat-engine:3.0.1
  pull_policy: "IfNotPresent"
```

The OpenStack-Helm project today uses a mix of Docker images from Stackanetes and Kolla, but will likely standardize on Kolla images for all charts, without any reliance on Kolla image utilities. Soon, the project will support alternative images with substantially smaller footprints, such as [Yaodu](https://github.com/yaodu).

## Upgrades

The OpenStack-Helm project assumes all upgrades will be done through Helm. This covers several different resource types. First, changes to the Helm chart templates themselves are handled. Second, all of the resources layered on top of the container image, such as `ConfigMaps`, which include both scripts and configuration files, are updated during an upgrade. Finally, any changed image references will result in rolling updates of containers, replacing them with the updated image.

As Helm stands today, several issues exist when updating images within charts whose jobs have already run to completion or are still in flight. OpenStack-Helm developers will continue to work with the Helm community, or develop charts, to support job removal prior to an upgrade, which will recreate services with updated images. An example of where this behavior would be desirable is when a db_sync image is updated from a Mitaka image to a Newton image. In this case, the operator will likely want the db_sync job, which already ran to completion during site installation, to run again with the updated image to bring the schema in line with the Newton release.

The OpenStack-Helm project also implements annotations across all chart configmaps so that changing resources inside containers, such as configuration files, triggers a Kubernetes rolling update. This means that those resources can be updated without deleting and redeploying the service and can be treated like any other upgrade, such as a container image change.

This is accomplished with the following annotation:

```
...
annotations:
  configmap-bin-hash: {{ tuple "configmap-bin.yaml" . | include "hash" }}
  configmap-etc-hash: {{ tuple "configmap-etc.yaml" . | include "hash" }}
```

The `hash` function defined in the `helm-toolkit` chart ensures that any change to any file referenced by `configmap-bin.yaml` or `configmap-etc.yaml` results in a new hash, which will then trigger a rolling update.
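
For reference, a minimal sketch of what such a `hash` helper could look like, assuming it simply renders the named template and digests the result; the actual `helm-toolkit` implementation may differ:

```
{{- define "hash" -}}
{{- $name := index . 0 -}}
{{- $context := index . 1 -}}
{{- /* derive the named template's path from the calling template's path */ -}}
{{- $last := base $context.Template.Name -}}
{{- $path := $context.Template.Name | replace $last $name -}}
{{- include $path $context | sha256sum | quote -}}
{{- end -}}
```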

All chart components (except `DaemonSets`) are outfitted by default with rolling update strategies:

```
spec:
  replicas: {{ .Values.replicas }}
  revisionHistoryLimit: {{ .Values.upgrades.revision_history }}
  strategy:
    type: {{ .Values.upgrades.pod_replacement_strategy }}
    {{ if eq .Values.upgrades.pod_replacement_strategy "RollingUpdate" }}
    rollingUpdate:
      maxUnavailable: {{ .Values.upgrades.rolling_update.max_unavailable }}
      maxSurge: {{ .Values.upgrades.rolling_update.max_surge }}
    {{ end }}
```

The same defaults are supplied in every chart's values.yaml, which allows the operator to override them at deployment or upgrade time:

```
upgrades:
  revision_history: 3
  pod_replacement_strategy: RollingUpdate
  rolling_update:
    max_unavailable: 1
    max_surge: 3
```

## Resource Limits

Resource limits should be defined for all charts within OpenStack-Helm.

The convention is to leverage a `resources:` section within values.yaml, gated by an `enabled` setting that defaults to `false` but can be turned on by the operator at install or upgrade time.

Each entry specifies the memory and CPU requests and limits for a deployed component. For example, from the Nova chart `values.yaml`:

```
resources:
  enabled: false
  nova_compute:
    requests:
      memory: "124Mi"
      cpu: "100m"
    limits:
      memory: "1024Mi"
      cpu: "2000m"
  nova_libvirt:
    requests:
      memory: "124Mi"
      cpu: "100m"
    limits:
      memory: "1024Mi"
      cpu: "2000m"
  nova_api_metadata:
    requests:
      memory: "124Mi"
      cpu: "100m"
    limits:
      memory: "1024Mi"
      cpu: "2000m"
  ...
```

These resource definitions are then applied to the appropriate component when the `enabled` flag is set. For instance, the following nova_compute daemonset has the requests and limits values applied from `.Values.resources.nova_compute`:

```
{{- if .Values.resources.enabled }}
resources:
  requests:
    memory: {{ .Values.resources.nova_compute.requests.memory | quote }}
    cpu: {{ .Values.resources.nova_compute.requests.cpu | quote }}
  limits:
    memory: {{ .Values.resources.nova_compute.limits.memory | quote }}
    cpu: {{ .Values.resources.nova_compute.limits.cpu | quote }}
{{- end }}
```

When a chart developer doesn't know what resource limits or requests to apply to a new component, they can deploy the component locally and examine resource utilization using tools like WeaveScope. The resource limits may not be perfect on initial submission, but over time and with community contributions, they can be refined to reflect reality.

## Endpoints

NOTE: This feature is under active development and subject to change.

As a large part of the project's purpose, OpenStack-Helm seeks to provide mechanisms around endpoints. OpenStack is a highly interconnected application, with various components requiring connectivity details to numerous services, including other OpenStack components and infrastructure elements such as databases, queues, and memcached. The project's goal is to provide a consistent mechanism for defining these "endpoints" across all charts, along with the macros necessary to convert those definitions into usable endpoints. The charts should consistently default to building endpoints that assume the operator is leveraging all charts to build their OpenStack cloud, and endpoints should be configurable if an operator would like a chart to work with their existing infrastructure or to run elements in different namespaces.

For instance, the Neutron chart `values.yaml` defines the following endpoints:

```
# typically overridden by environmental
# values, but should include all endpoints
# required by this chart
endpoints:
  image:
    hosts:
      default: glance-api
    type: image
    path: null
    scheme: 'http'
    port:
      api: 9292
      registry: 9191
  compute:
    hosts:
      default: nova-api
    path: "/v2/%(tenant_id)s"
    type: compute
    scheme: 'http'
    port:
      api: 8774
      metadata: 8775
      novncproxy: 6080
  identity:
    hosts:
      default: keystone-api
    path: /v3
    type: identity
    scheme: 'http'
    port:
      admin: 35357
      public: 5000
  network:
    hosts:
      default: neutron-server
    path: null
    type: network
    scheme: 'http'
    port:
      api: 9696
```

These values define all the endpoints that the Neutron chart may need in order to build fully qualified URLs to various services. Long-term, these will also include database, memcached, and rabbitmq elements, so that all external connectivity is defined in one central place.

The macros that translate these definitions into the actual URLs necessary are defined in the `helm-toolkit` chart. For instance, the cinder chart sets `glance_api_servers` in its `cinder.conf` template:

```
glance_api_servers = {{ tuple "image" "internal" "api" . | include "helm-toolkit.endpoint_type_lookup_addr" }}
```

This line uses the `endpoint_type_lookup_addr` macro, which is defined in the `helm-toolkit` chart since it is used by all charts. Note the second convention at work here: all `{{ define }}` macros in charts should be prefixed with the name of the chart that defines them. This allows developers to easily identify the source of a Helm macro and also avoids namespace collisions. The macro is passed three parameters, aided by the `tuple` function built into the Sprig templating library used by Helm; the rendered result is sketched after the list:

- image: This is the OpenStack service that the endpoint is being built for. It maps to `glance`, which is the image service for OpenStack.
- internal: This is the OpenStack endpoint type being looked up; valid values are `internal`, `admin`, and `public`.
- api: This is the named port to map to for the service. Some components, such as glance, provide both an `api` port and a `registry` port, for example.
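
Given the Neutron defaults shown earlier (a `glance-api` host, `http` scheme, no path, and an `api` port of 9292), the `cinder.conf` line above should render to something like the following; the exact output is determined by the macro itself:

```
glance_api_servers = http://glance-api:9292
```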

At all costs, charts should avoid hardcoding values such as `http://keystone-api:5000`, because these are not compatible with operator overrides and do not support spreading components out over various namespaces.

## Common Conditionals

The OpenStack-Helm charts make the following conditions available across all charts, which can be set at install or upgrade time with Helm below.

### Developer Mode

```
helm install local/chart --set development.enabled=true
```

The development mode flag should be available on all charts. Enabling this reduces dependencies that the chart may have on persistent volume claims (which are difficult to support in a laptop minikube environment) as well as reducing replica counts or resiliency features to support a minimal environment.

The glance chart for instance defines the following `development:` overrides:

```
development:
  enabled: false
  storage_path: /data/openstack-helm/glance/images
```

The `enabled` flag allows the developer to enable development mode. The storage path allows the operator to store glance images on a hostPath instead of leveraging a ceph backend, which, again, is difficult to spin up in a small laptop minikube environment. The host path can be overridden by the operator if desired.
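
A chart might act on these values with a conditional volume definition along the following lines; this is a minimal sketch assuming a hypothetical `glance-images` volume and claim name, not the chart's actual template:

```
volumes:
  - name: glance-images
{{- if .Values.development.enabled }}
    # development mode: keep glance images on the host filesystem
    hostPath:
      path: {{ .Values.development.storage_path }}
{{- else }}
    # normal mode: back the volume with a persistent volume claim
    persistentVolumeClaim:
      claimName: glance-images
{{- end }}
```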

### Resources

```
helm install local/chart --set resources.enabled=true
```

Resource requests and limits can be turned on and off; by default, they are off. Setting `resources.enabled` to `true` deploys Kubernetes resources with the resource requests and limits defined in values.yaml.

57 docs/readme.md Normal file
@@ -0,0 +1,57 @@

# Openstack-Helm

Welcome to the Openstack-Helm project!

## Mission Statement

Openstack-Helm is a project that provides a flexible, production-grade Kubernetes deployment of Openstack Services using Kubernetes/Helm deployment primitives. The charts contained within the Openstack-Helm project are designed to be a plenary framework of self-contained, service-level manifest templates for development, operations, and troubleshooting, leveraging upstream Kubernetes and Helm best practices, with third-party add-ons treated as a last resort.

## Table of Contents

Documentation for Openstack-Helm is provided in the following role-specific guides:

- [Welcome Guide](guides_welcome/readme.md)
  - [Mission](guides_welcome/mission.md) - Openstack-Helm Mission Statement
  - [Project Overview](guides_welcome/welcome-overview.md)
  - [Resiliency Philosophy](guides_welcome/welcome-resiliency.md)
  - [Scalability Philosophy](guides_welcome/welcome-scaling.md)
- [Installation Guides](guides-install/readme.md)
  - [All-in-One](guides-install/install-aio.md) - Evaluation of Openstack-Helm
  - [Developer Installation](guides-install/install-minikube.md) - Environment for Openstack-Helm Development
  - [Multinode](guides-install/install-multinode.md) - Multinode or Production Deployments
- [Developer Guides](guides-developer/readme.md) - Resources for Openstack-Helm Developers
  - [Getting Started](guides-developer/getting-started/readme.md) - Development Philosophies
    - [Default Values](guides-developer/getting-started/gs-values.md)
    - [Chart Overrides](guides-developer/getting-started/gs-overrides.md)
    - [Replica Guidelines](guides-developer/getting-started/gs-replicas.md)
    - [Image Guidelines](guides-developer/getting-started/gs-images.md)
    - [Resource Guidelines](guides-developer/getting-started/gs-resources.md)
    - [Labeling Guidelines](guides-developer/getting-started/gs-labels.md)
    - [Endpoint Considerations](guides-developer/getting-started/gs-endpoints.md)
    - [Helm Upgrades Considerations](guides-developer/getting-started/gs-upgrades.md)
    - [Using Conditionals](guides-developer/getting-started/gs-conditionals.md)
  - [Helm Development Handbook](guides-developer/install-minikube.md) - Hands-On Development Guide
    - [Helm-Toolkit Overview](guides-developer/) - Overview of Helm-Toolkit
    - [User Registration](guides-developer/guides-devs-helm/registration-user.md)
    - [Domain Registration](guides-developer/guides-devs-helm/registration-domain.md)
    - [Host Registration](guides-developer/guides-devs-helm/registration-host.md)
    - [Service Registration](guides-developer/guides-devs-helm/registration-service.md)
  - [Kubernetes Development Handbook](guides-developer/install-multinode.md)
- [Operator Guides](guides-operator/readme.md) - Resources for Openstack-Helm Operators
  - [Helm Operations](guides-operator/getting-started/readme.md) - Helm Operator Guides
    - [Addons and Plugins](guides-operator/getting-started/helm-addons.md)
  - [Kubernetes Operations](guides-operator/readme.md)
    - [Init Containers](guides-operator/readme.md)
    - [Jobs](guides-operator/readme.md)
  - [Openstack Operations](guides-operator/readme.md)
    - [Config Generation](guides-operator/readme.md) - Openstack-Helm Configuration Management
  - [Networking Guides](guides-operator/readme.md) - Network Operations
    - [Ingress](guides-operator/readme.md)
    - [Nodeports](guides-operator/readme.md)
  - [Security Guides](guides-operator/readme.md) - Security Operations
    - [Namespace Isolation](guides-operator/readme.md)
    - [SELinux and SECCOMP](guides-operator/readme.md)
    - [Role-Based Access Control](guides-operator/readme.md)
- [Troubleshooting Guides](charts.md)
- [Appendix A: Helm Resources](charts.md) - Curated List of Helm Resources
- [Appendix B: Kubernetes Resources](charts.md) - Curated List of Kubernetes Resources

@@ -23,7 +23,7 @@ replicas:
 development:
   enabled: false
-  storage_path: /data/openstack-helm/glance/images
+  storage_path: /var/lib/localkube/openstack-helm/glance/images

 labels:
   node_selector_key: openstack-control-plane

@@ -28,7 +28,7 @@ replicas: 3
 # will override certain things, like the replicas requested
 development:
   enabled: false
-  storage_path: /data/openstack-helm/mariadb
+  storage_path: /var/lib/localkube/openstack-helm/mariadb

 resources:
   enabled: false