From b46c400cb0cb3930f67616991b4369ec9c81d264 Mon Sep 17 00:00:00 2001
From: Alan Meadows
Date: Tue, 31 Jan 2017 16:15:43 -0800
Subject: [PATCH 01/18] Documentation 1.2: add helm overrides/values document

---
 docs/README.md         |  12 +-
 docs/helm_overrides.md | 346 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 355 insertions(+), 3 deletions(-)
 create mode 100644 docs/helm_overrides.md

diff --git a/docs/README.md b/docs/README.md
index 00c6b4b0a0..65e8ad807d 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -7,9 +7,15 @@
 ###    1.1 [Mission](mission.md)
 #####      1.1.1 [Resiliency](mission.md#resiliency)
 #####      1.1.2 [Scaling](mission.md#scaling)
-###    1.2 [Helm Overrides]()
-#####      1.2.1 [Resource Limits]()
-#####      1.2.2 [Conditionals]()
+###    1.2 [Helm Overrides](helm_overrides.md)
+#####      1.2.1 [Values Philosophy](helm_overrides.md#values-philosophy)
+#####      1.2.2 [Replicas](helm_overrides.md#replicas)
+#####      1.2.3 [Labels](helm_overrides.md#labels)
+#####      1.2.4 [Images](helm_overrides.md#images)
+#####      1.2.5 [Upgrades](helm_overrides.md#upgrades)
+#####      1.2.6 [Resource Limits](helm_overrides.md#resource-limits)
+#####      1.2.7 [Endpoints](helm_overrides.md#endpoints)
+#####      1.2.8 [Common Conditionals](helm_overrides.md#common-conditionals)
 ###    1.3 [Init-Containers]()
 #####      1.3.1 [Dependency Checking]()
 ###    1.4 [Kubernetes Jobs]()
diff --git a/docs/helm_overrides.md b/docs/helm_overrides.md
new file mode 100644
index 0000000000..da51811069
--- /dev/null
+++ b/docs/helm_overrides.md
@@ -0,0 +1,346 @@
# Helm Overrides

This document covers helm overrides and the openstack-helm approach. For more information on helm overrides in general see the Helm [Values Documentation](https://github.com/kubernetes/helm/blob/master/docs/charts.md#values-files)

## Values Philosophy

There are two major The openstack-helm values philosophies that guide how values are formed. It is important that new chart developers understand the values.yaml approach openstack-helm has to ensure our charts are both consistent and remain a joy to work with.

The first is that all charts should be independently installable and do not require a parent chart. This means that the values file in each chart should be self-contained. We will avoid the use of helm globals and the concept of a parent chart as a requirement to capture and feed environment specific overrides into subcharts. An example of a single site definition YAML that can be source controlled and used as `--values` input to all openstack-helm charts to maintain overrides in one testable place is forthcoming. Currently helm does not support as `--values=environment.yaml` chunking up a larger override files YAML namespace. Ideally, we are seeking native helm support for `helm install local/keystone --values=environment.yaml:keystone` where `environment.yaml` is the operators chart-wide environment defition and `keystone` is the section in environment.yaml that will be fed to the keystone chart during install as overrides. Standard YAML anchors can be used to duplicate common elements like the `endpoints` sections. As of this document, operators can use a temporary approach like [values.py](https://github.com/att-comdev/openstack-helm/blob/master/common/utils/values/values.py) to chunk up a single override YAML file as input to various individual charts.
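To make the sought-after workflow concrete, a hypothetical `environment.yaml` that uses a standard YAML anchor to share an `endpoints` block between two chart sections might look like the following sketch (the chart sections, names, and values here are illustrative only):

```
endpoints: &site_endpoints
  identity:
    hosts:
      default: keystone-api

keystone:
  replicas: 3
  endpoints: *site_endpoints

glance:
  replicas:
    api: 3
    registry: 3
  endpoints: *site_endpoints
```

Under the proposed syntax, `helm install local/keystone --values=environment.yaml:keystone` would feed only the `keystone:` section of this file to the keystone chart as overrides.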
It is our belief that overrides, just like the templates themselves, should be source controlled and tested especially for operators operating charts at scale. We will continue to examine efforts such as [helm-value-store](https://github.com/skuid/helm-value-store) and solutions in the vein of [helmfile](https://github.com/roboll/helmfile). A project that seems quite compelling to address the needs of orchestrating multiple charts and managing site specific overrides is [Landscape](https://github.com/Eneco/landscaper) + +The second is that the values files should be consistent across all charts, including charts in core, infra, and addons. This provides a consistent way for operators to override settings such as enabling developer mode, defining resource limitations, and customizing the actual OpenStack configuration within chart templates without having to guess how a particular chart developer has layed out their values.yaml. There are also various macros in the `common` chart that will depend on the values.yaml within all charts being structured a certain way. + +Finally, where charts reference connectivity information for other services sane defaults should be provided. In the case where these services are provided by openstack-helm itself, the defaults should assume the user will use the openstack-helm charts for those services but ensure that they can be overriden if the operator has them externally deployed. + +## Replicas + +All charts must provide replicas definitions and leverage those in the kubernetes manifests. This allows site operators to tune the replica counts at install or upgrade time. We suggest all charts deploy by default with more then one replica to ensure that openstack-helm being used in production environments is treated as a first class citizen and that more then one replica of every service is frequently tested. Developers wishing to deploy minimal environments can enable the `development` mode override which should enforce only one replica of each component. + +The convention today in openstack-helm is to define a `replicas:` section for the chart, with each component being deployed having its own tunable value. + +For example, the `glance` chart provides the following replicas in `values.yaml` + +``` +replicas: + api: 2 + registry: 2 +``` + +An operator can override these on `install` or `upgrade`: + +``` +$ helm install local/glance --set replicas.api=3,replicas.registry=3 +``` + +## Labels + +We use nodeSelectors as well as podAntiAffinity rules to ensure resources land in the proper place within Kubernetes. Today, openstack-helm employs four labels: + +- ceph-storage: enabled +- openstack-control-plane: enabled +- openstack-compute-node: enabled +- openvswitch: enabled + +Ideally, we would eliminate the openvswitch label as this is really an OR of (`openstack-control-plane` and `openstack-compute-node`) however the fundamental way in which Kubernetes nodeSelectors prohibits this specific logic so we require a third label that spans all hosts, `openvswitch` as this is an element that is applicable to both `openstack-control-plane` as well as `openstack-compute-node` hosts. The openvswitch service must run on both types of hosts to provide openvswitch connectivity for DHCP, L3, Metadata services which run in the control plane as well as tenant connectivity which runs on the compute node infrastructure. + +Labels are of course definable and overridable by the chart operators. 
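For example, an operator preparing a hypothetical two-node cluster (the node names here are illustrative) might apply these labels with `kubectl`:

```
$ kubectl label nodes node-1 openstack-control-plane=enabled openvswitch=enabled ceph-storage=enabled
$ kubectl label nodes node-2 openstack-compute-node=enabled openvswitch=enabled
```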
Labels are defined in charts with a common convention, using a `labels:` section which defines both a selector, and a value: + +``` +labels: + node_selector_key: openstack-control-plane + node_selector_value: enabled +``` + +In some cases, a chart may need to define more then one label, such as the neutron chart in which case each element should be articulated under the `labels:` section, nesting where appropriate. + +``` +labels: + # ovs is a special case, requiring a special + # label that can apply to both control hosts + # and compute hosts, until we get more sophisticated + # with our daemonset scheduling + ovs: + node_selector_key: openvswitch + node_selector_value: enabled + agent: + dhcp: + node_selector_key: openstack-control-plane + node_selector_value: enabled + l3: + node_selector_key: openstack-control-plane + node_selector_value: enabled + metadata: + node_selector_key: openstack-control-plane + node_selector_value: enabled + server: + node_selector_key: openstack-control-plane + node_selector_value: enabled +``` + +These labels should be leveraged by `nodeSelector` definitions in charts for all resources, including jobs: + +``` + ... + spec: + nodeSelector: + {{ .Values.labels.node_selector_key }}: {{ .Values.labels.node_selector_value }} + containers: + ... +``` + +In some cases, especially with infrastructure components, it becomes necessary for the chart developer to provide some scheduling instruction to Kubernetes to help ensure proper resiliency. The most common example employed today are podAntiAffinity rules, such as those used in the `mariadb` chart. We encurage these to be placed on all foundational elements so that Kubernetes will disperse resources for resiliency, but also allow multi-replica installations to deploy successfully into a single host environment: + +``` + annotations: + # this soft requirement allows single + # host deployments to spawn several mariadb containers + # but in a larger environment, would attempt to spread + # them out + scheduler.alpha.kubernetes.io/affinity: > + { + "podAntiAffinity": { + "preferredDuringSchedulingIgnoredDuringExecution": [{ + "labelSelector": { + "matchExpressions": [{ + "key": "app", + "operator": "In", + "values":["mariadb"] + }] + }, + "topologyKey": "kubernetes.io/hostname", + "weight": 10 + }] + } + } +``` + +## Images + +Our core philosophy regarding images is that the toolsets required to enable the OpenStack services should be applied by Kubernetes itself. This requires the openstack-helm to develop common and simple scripts with minimal dependencies that can be overlayed on any image meeting the OpenStack core library requirements. The advantage of this however is that we can be image agnostic, allowing operators to use Stackanetes, Kolla, Yaodu, or any image flavor they choose and they will all function the same. + +To that end, all charts provide an `images:` section that allows operators to override images. It is also our assertion that all default image references should be fully spelled out, even those hosted by docker, and no default image reference should use `:latest` but be pinned to a specific version to ensure a consistent behavior for deployments over time. + +Today, the `images:` section has several common conventions. Most openstack services require a database initialization function, a database synchronization function, and finally a series of steps for Keystone registration and integration. 
There may also be specific images for each component that composes that OpenStack services and these may or may not differ but should all be defined in `images`. + +The following standards are in use today, in addition to any components defined by the service itself. + +- dep_check: The image that will perform dependency checking in an init-container. +- db_init: The image that will perform database creation operations for the OpenStack service. +- db_sync: The image that will perform database sync (schema insertion) for the OpenStack service. +- ks_user: The image that will perform keystone user creation for the service. +- ks_service: The image that will perform keystone service registration for the service. +- ks_endpoints: The image that will perform keystone endpoint registration for the service. +- pull_policy: The image pull policy, one of "Always", "IfNotPresent", and "Never" which will be used by all containers in the chart. + +An illustrative example of a images: section taken from the heat chart: + +``` +images: + dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.1.0 + db_init: quay.io/stackanetes/stackanetes-kolla-toolbox:newton + db_sync: docker.io/kolla/ubuntu-source-heat-api:3.0.1 + ks_user: quay.io/stackanetes/stackanetes-kolla-toolbox:newton + ks_service: quay.io/stackanetes/stackanetes-kolla-toolbox:newton + ks_endpoints: quay.io/stackanetes/stackanetes-kolla-toolbox:newton + api: docker.io/kolla/ubuntu-source-heat-api:3.0.1 + cfn: docker.io/kolla/ubuntu-source-heat-api:3.0.1 + cloudwatch: docker.io/kolla/ubuntu-source-heat-api:3.0.1 + engine: docker.io/kolla/ubuntu-source-heat-engine:3.0.1 + pull_policy: "IfNotPresent" +``` + +The openstack-helm project today uses a mix of docker images from Stackanetes, Kolla, but going forward we will likely first standardize on Kolla images across all charts but without any reliance on Kolla image utilities, followed by support for alternative images with substantially smaller footprints such as [Yaodu](https://github.com/yaodu) + +## Upgrades + +The openstack-helm project assumes all upgrades will be done through helm. This includes both template changes, including resources layered on top of the image such as configuration files as well as the Kubernetes resources themselves in addition to the more common practice of updating images. + +Today, several issues exist within helm when updating images within charts that may be used by jobs that have already run to completion or are still in flight. We will seek to address these within the helm community or within the charts themselves by support helm hooks to allow jobs to be deleted during an upgrade so that they can be recreated with an updated image. An example of where this behavior would be desirable would be an updated db_sync image which has updated to point from a Mitaka image to a Newton image. In this case, the operator will likely want a db_sync job which was already run and completed during site installation to run again with the updated image to bring the schema inline with the Newton release. + +The openstack-helm project also implements annotations across all chart configmaps so that changing resources inside containers such as configuration files, triggers a Kubernetes rolling update so that those resources can be updated without deleting and redeploying the service and treated like any other upgrade such as a container image change. + +This is accomplished with the following annotation: + +``` + ... 
+ annotations: + configmap-bin-hash: {{ tuple "configmap-bin.yaml" . | include "hash" }} + configmap-etc-hash: {{ tuple "configmap-etc.yaml" . | include "hash" }} +``` + +The `hash` function defined in the `common` chart ensures that any change to any file referenced by configmap-bin.yaml or configmap-etc.yaml results in a new hash, which will trigger a rolling update. + +All chart components (except `DaemonSets`) are outfitted by default with rolling update strategies: + +``` +spec: + replicas: {{ .Values.replicas }} + revisionHistoryLimit: {{ .Values.upgrades.revision_history }} + strategy: + type: {{ .Values.upgrades.pod_replacement_strategy }} + {{ if eq .Values.upgrades.pod_replacement_strategy "RollingUpdate" }} + rollingUpdate: + maxUnavailable: {{ .Values.upgrades.rolling_update.max_unavailable }} + maxSurge: {{ .Values.upgrades.rolling_update.max_surge }} + {{ end }} +``` + +In values.yaml in each chart, the same defaults are supplied in every chart, allowing the operator to override at upgrade or deployment time. + +``` +upgrades: + revision_history: 3 + pod_replacement_strategy: RollingUpdate + rolling_update: + max_unavailable: 1 + max_surge: 3 +``` + +## Resource Limits + +Resource limits should be defined for all charts within openstack-helm. + +The convention is to leverage a `resources:` section within values.yaml with an `enabled` setting that defaults to `false` but can be turned on by the operator at install or upgrade time. + +The resources specify the requests (memory and cpu) and limits (memory and cpu) for each deployed resource. For example, from the nova chart `values.yaml`: + +``` +resources: + enabled: false + nova_compute: + requests: + memory: "124Mi" + cpu: "100m" + limits: + memory: "1024Mi" + cpu: "2000m" + nova_libvirt: + requests: + memory: "124Mi" + cpu: "100m" + limits: + memory: "1024Mi" + cpu: "2000m" + nova_api_metadata: + requests: + memory: "124Mi" + cpu: "100m" + limits: + memory: "1024Mi" + cpu: "2000m" +... +``` + +These resources definitions are then applied to the appropriate component, when the `enabled` flag is set. For instance, below, the nova_compute daemonset has the requests and limits values applied from `.Values.resources.nova_compute`: + +``` + {{- if .Values.resources.enabled }} + resources: + requests: + memory: {{ .Values.resources.nova_compute.requests.memory | quote }} + cpu: {{ .Values.resources.nova_compute.requests.cpu | quote }} + limits: + memory: {{ .Values.resources.nova_compute.limits.memory | quote }} + cpu: {{ .Values.resources.nova_compute.limits.cpu | quote }} + {{- end }} +``` + +When a chart developer doesn't know what resource limits or requests to apply to a new component, they can deploy them locally and examine utilization using tools like WeaveScope. The resource limits may not be perfect on initial submission but over time with community contributions they will be refined to reflect reality. + +## Endpoints + +NOTE: This feature is under active development. There may be dragons here. + +Endpoints are a large part of what openstack-helm seeks to provide mechanisms around. OpenStack is a highly interconnected application, with various components requiring connectivity details to numerous services, including other OpenStack components, infrastructure elements such as databases, queues, and memcache infrastructure. We want to ensure we provide a consistent mechanism for defining these "endpoints" across all charts and the macros necessary to convert those definitions into usable endpoints. 
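As a rough illustration of that conversion, given the `glance` definition shown below, the lookup macros assemble the `scheme`, host, and named `port` into a usable URL along these lines (hypothetical rendered output, assuming the in-cluster defaults):

```
glance_api_servers = http://glance-api:9292
```

Because the URL is assembled from the `scheme`, `hosts`, and `port` values, overriding any one of them changes every consumer consistently.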
The charts should consistently default to building endpoints that assume the operator is leveraging all of our charts to build their OpenStack stack. However, we want to ensure that if operators want to run some of our charts and have those plug into their existing infrastructure, or run elements in different namespaces, for example, these endpoints should be overridable. + + +For instance, in the neutron chart `values.yaml` the following endpoints are defined: + +``` +# typically overriden by environmental +# values, but should include all endpoints +# required by this chart +endpoints: + glance: + hosts: + default: glance-api + type: image + path: null + scheme: 'http' + port: + api: 9292 + registry: 9191 + nova: + hosts: + default: nova-api + path: "/v2/%(tenant_id)s" + type: compute + scheme: 'http' + port: + api: 8774 + metadata: 8775 + novncproxy: 6080 + keystone: + hosts: + default: keystone-api + path: /v3 + type: identity + scheme: 'http' + port: + admin: 35357 + public: 5000 + neutron: + hosts: + default: neutron-server + path: null + type: network + scheme: 'http' + port: + api: 9696 +``` + +These values define all the endpoints that the neutron chart may need in order to build full URL compatible endpoints to various services. Long-term these will also include database, memcache, and rabbitmq elements in one place--essentially all external connectivity needs defined centrally. + +The macros that help translate these into the actual URLs necessary are defined in the `common` chart. For instance, the cinder chart defines a `glance_api_servers` definition in the `cinder.conf` template: + +``` ++glance_api_servers = {{ tuple "image" "internal" "api" . | include "endpoint_type_lookup_addr" }} +``` + +This line of magic uses the `endpoint_type_lookup_addr` macro in the common chart (since it is used by all charts), and passes it three parameters: + +- image: This the OpenStack service we are building an endpoint for. This will be mapped to `glance` which is the image service for OpenStack. +- internal: This is the OpenStack endpoint type we are looking for - valid values would be `internal`, `admin`, and `public` +- api: This is the port to map to for the service. Some components such as glance provide an `api` port and a `registry` port, for example. + +Charts should avoid at all costs hard coding values such as ``http://keystone-api:5000` as these are not compatible with operator overrides or supporting spreading components out over various namespaces. + +## Common Conditionals + +The openstack-helm charts make the following conditions available across all charts which can be set at install or upgrade time with helm: + +### Developer Mode + +``` +helm install local/chart --set development.enabled=true +``` + +The development mode flag should be available on all charts. Enabling this reduces dependencies that chart may have on persistent volume claims (which are difficult to support in a laptop minikube environment) as well as reducing replica counts or resiliency features to support a minimal environment. + +The glance chart for instance defines the following `development:` overrides: + +``` +development: + enabled: false + storage_path: /data/openstack-helm/glance/images +``` + +The `enabled` flag allows the developer to enable development mode. The storage path allows the operator to store glance images in a hostPath instead of leveraging a ceph backend, which again, is difficult to spin up in a small laptop minikube environment. 
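A chart template might branch on these values roughly as follows; this is a hedged sketch with a hypothetical claim name, not the actual glance manifest:

```
      volumes:
        - name: glance-images
{{- if .Values.development.enabled }}
          # development mode: keep glance images on the host filesystem
          hostPath:
            path: {{ .Values.development.storage_path }}
{{- else }}
          # otherwise lean on a persistent volume claim (e.g. ceph-backed)
          persistentVolumeClaim:
            claimName: glance-images
{{- end }}
```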
The host path can be overriden by the operator if desired. + +### Resources + +``` +helm install local/chart --set resources.enabled=true +``` + +Resource limits/requirements can be turned on and off. By default, they are off. Setting this enabled to `true` will deploy Kubernetes resources with resource requirements and limits. From baaaec70c7f2fd7f3fc486b75c9bd8c38cf0db4c Mon Sep 17 00:00:00 2001 From: Alan Meadows Date: Fri, 3 Feb 2017 08:39:38 -0800 Subject: [PATCH 02/18] update helm overrides documentation from feedback --- docs/helm_overrides.md | 28 ++++++++++++++-------------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/docs/helm_overrides.md b/docs/helm_overrides.md index da51811069..74e8dafe1f 100644 --- a/docs/helm_overrides.md +++ b/docs/helm_overrides.md @@ -1,12 +1,12 @@ # Helm Overrides -This document covers helm overrides and the openstack-helm approach. For more information on helm overrides in general see the Helm [Values Documentation](https://github.com/kubernetes/helm/blob/master/docs/charts.md#values-files) +This document covers helm overrides and the openstack-helm approach. For more information on Helm overrides in general see the Helm [Values Documentation](https://github.com/kubernetes/helm/blob/master/docs/charts.md#values-files) ## Values Philosophy -There are two major The openstack-helm values philosophies that guide how values are formed. It is important that new chart developers understand the values.yaml approach openstack-helm has to ensure our charts are both consistent and remain a joy to work with. +Two major philosophies guide the openstack-helm values approach. It is important that new chart developers understand the `values.yaml` approach openstack-helm has within each of its charts to ensure all of our charts are both consistent and remain a joy to work with. -The first is that all charts should be independently installable and do not require a parent chart. This means that the values file in each chart should be self-contained. We will avoid the use of helm globals and the concept of a parent chart as a requirement to capture and feed environment specific overrides into subcharts. An example of a single site definition YAML that can be source controlled and used as `--values` input to all openstack-helm charts to maintain overrides in one testable place is forthcoming. Currently helm does not support as `--values=environment.yaml` chunking up a larger override files YAML namespace. Ideally, we are seeking native helm support for `helm install local/keystone --values=environment.yaml:keystone` where `environment.yaml` is the operators chart-wide environment defition and `keystone` is the section in environment.yaml that will be fed to the keystone chart during install as overrides. Standard YAML anchors can be used to duplicate common elements like the `endpoints` sections. As of this document, operators can use a temporary approach like [values.py](https://github.com/att-comdev/openstack-helm/blob/master/common/utils/values/values.py) to chunk up a single override YAML file as input to various individual charts. It is our belief that overrides, just like the templates themselves, should be source controlled and tested especially for operators operating charts at scale. We will continue to examine efforts such as [helm-value-store](https://github.com/skuid/helm-value-store) and solutions in the vein of [helmfile](https://github.com/roboll/helmfile). 
A project that seems quite compelling to address the needs of orchestrating multiple charts and managing site specific overrides is [Landscape](https://github.com/Eneco/landscaper) +The first is that all charts should be independently installable and do not require a parent chart. This means that the values file in each chart should be self-contained. We will avoid the use of Helm globals and the concept of a parent chart as a requirement to capture and feed environment specific overrides into subcharts. An example of a single site definition YAML that can be source controlled and used as `--values` input to all openstack-helm charts to maintain overrides in one testable place is forthcoming. Currently Helm does not support as `--values=environment.yaml` chunking up a larger override files YAML namespace. Ideally, we are seeking native Helm support for `helm install local/keystone --values=environment.yaml:keystone` where `environment.yaml` is the operators chart-wide environment defition and `keystone` is the section in environment.yaml that will be fed to the keystone chart during install as overrides. Standard YAML anchors can be used to duplicate common elements like the `endpoints` sections. As of this document, operators can use a temporary approach like [values.py](https://github.com/att-comdev/openstack-helm/blob/master/common/utils/values/values.py) to chunk up a single override YAML file as input to various individual charts. It is our belief that overrides, just like the templates themselves, should be source controlled and tested especially for operators operating charts at scale. We will continue to examine efforts such as [helm-value-store](https://github.com/skuid/helm-value-store) and solutions in the vein of [helmfile](https://github.com/roboll/helmfile). A project that seems quite compelling to address the needs of orchestrating multiple charts and managing site specific overrides is [Landscape](https://github.com/Eneco/landscaper) The second is that the values files should be consistent across all charts, including charts in core, infra, and addons. This provides a consistent way for operators to override settings such as enabling developer mode, defining resource limitations, and customizing the actual OpenStack configuration within chart templates without having to guess how a particular chart developer has layed out their values.yaml. There are also various macros in the `common` chart that will depend on the values.yaml within all charts being structured a certain way. @@ -14,7 +14,7 @@ Finally, where charts reference connectivity information for other services sane ## Replicas -All charts must provide replicas definitions and leverage those in the kubernetes manifests. This allows site operators to tune the replica counts at install or upgrade time. We suggest all charts deploy by default with more then one replica to ensure that openstack-helm being used in production environments is treated as a first class citizen and that more then one replica of every service is frequently tested. Developers wishing to deploy minimal environments can enable the `development` mode override which should enforce only one replica of each component. +All charts must provide replicas definitions and leverage those in the Kubernetes manifests. This allows site operators to tune the replica counts at install or upgrade time. 
We suggest all charts deploy by default with more then one replica to ensure that openstack-helm being used in production environments is treated as a first class citizen and that more than one replica of every service is frequently tested. Developers wishing to deploy minimal environments can enable the `development` mode override which should enforce only one replica of each component. The convention today in openstack-helm is to define a `replicas:` section for the chart, with each component being deployed having its own tunable value. @@ -41,7 +41,7 @@ We use nodeSelectors as well as podAntiAffinity rules to ensure resources land i - openstack-compute-node: enabled - openvswitch: enabled -Ideally, we would eliminate the openvswitch label as this is really an OR of (`openstack-control-plane` and `openstack-compute-node`) however the fundamental way in which Kubernetes nodeSelectors prohibits this specific logic so we require a third label that spans all hosts, `openvswitch` as this is an element that is applicable to both `openstack-control-plane` as well as `openstack-compute-node` hosts. The openvswitch service must run on both types of hosts to provide openvswitch connectivity for DHCP, L3, Metadata services which run in the control plane as well as tenant connectivity which runs on the compute node infrastructure. +NOTE: The `openvswitch` label is an element that is applicable to both `openstack-control-plane` as well as `openstack-compute-node` hosts. Ideally, we would eliminate the `openvswitch` label as we simply want to deploy openvswitch to an OR of (`openstack-control-plane` and `openstack-compute-node`). However, Kubernetes `nodeSelectors` prohibits this specific logic. As a result of this, we require a third label that spans all hosts, which is `openvswitch`. The openvswitch service must run on both types of hosts to provide openvswitch connectivity for DHCP, L3, Metadata services which run in the control plane as well as tenant connectivity which runs on the compute node infrastructure. Labels are of course definable and overridable by the chart operators. Labels are defined in charts with a common convention, using a `labels:` section which defines both a selector, and a value: @@ -51,7 +51,7 @@ labels: node_selector_value: enabled ``` -In some cases, a chart may need to define more then one label, such as the neutron chart in which case each element should be articulated under the `labels:` section, nesting where appropriate. +In some cases, such as the neutron chart, a chart may need to define more then one label. In cases such as this, each element should be articulated under the `labels:`` section, nesting where appropriate: ``` labels: @@ -88,7 +88,7 @@ These labels should be leveraged by `nodeSelector` definitions in charts for all ... ``` -In some cases, especially with infrastructure components, it becomes necessary for the chart developer to provide some scheduling instruction to Kubernetes to help ensure proper resiliency. The most common example employed today are podAntiAffinity rules, such as those used in the `mariadb` chart. We encurage these to be placed on all foundational elements so that Kubernetes will disperse resources for resiliency, but also allow multi-replica installations to deploy successfully into a single host environment: +In some cases, especially with infrastructure components, it becomes necessary for the chart developer to provide some scheduling instruction to Kubernetes to help ensure proper resiliency. 
The most common examples employed today are podAntiAffinity rules, such as those used in the `mariadb` chart. We encourage these to be placed on all foundational elements so that Kubernetes will not only disperse resources for resiliency, but also allow multi-replica installations to deploy successfully into a single host environment:

```
 annotations:
@@ -120,7 +120,7 @@ Our core philosophy regarding images is that the toolsets required to enable the
 To that end, all charts provide an `images:` section that allows operators to override images. It is also our assertion that all default image references should be fully spelled out, even those hosted by docker, and no default image reference should use `:latest` but be pinned to a specific version to ensure a consistent behavior for deployments over time.

-Today, the `images:` section has several common conventions. Most openstack services require a database initialization function, a database synchronization function, and finally a series of steps for Keystone registration and integration. There may also be specific images for each component that composes that OpenStack services and these may or may not differ but should all be defined in `images`.
+Today, the `images:` section has several common conventions. Most OpenStack services require a database initialization function, a database synchronization function, and finally a series of steps for Keystone registration and integration. There may also be specific images for each component that composes those OpenStack services and these may or may not differ but should all be defined in `images`.

 The following standards are in use today, in addition to any components defined by the service itself.

@@ -153,9 +153,9 @@ The openstack-helm project today uses a mix of docker images from Stackanetes, K
 ## Upgrades

-The openstack-helm project assumes all upgrades will be done through helm. This includes both template changes, including resources layered on top of the image such as configuration files as well as the Kubernetes resources themselves in addition to the more common practice of updating images.
+The openstack-helm project assumes all upgrades will be done through Helm. This includes both template changes, including resources layered on top of the image such as configuration files as well as the Kubernetes resources themselves in addition to the more common practice of updating images.

-Today, several issues exist within helm when updating images within charts that may be used by jobs that have already run to completion or are still in flight. We will seek to address these within the helm community or within the charts themselves by support helm hooks to allow jobs to be deleted during an upgrade so that they can be recreated with an updated image. An example of where this behavior would be desirable would be an updated db_sync image which has updated to point from a Mitaka image to a Newton image. In this case, the operator will likely want a db_sync job which was already run and completed during site installation to run again with the updated image to bring the schema inline with the Newton release.
+Today, several issues exist within Helm when updating images within charts that may be used by jobs that have already run to completion or are still in flight. We will seek to address these within the Helm community or within the charts themselves by supporting Helm hooks to allow jobs to be deleted during an upgrade so that they can be recreated with an updated image. An example of where this behavior would be desirable would be an updated db_sync image which has been updated to point from a Mitaka image to a Newton image. In this case, the operator will likely want a db_sync job which was already run and completed during site installation to run again with the updated image to bring the schema in line with the Newton release.

 The openstack-helm project also implements annotations across all chart configmaps so that changing resources inside containers such as configuration files, triggers a Kubernetes rolling update so that those resources can be updated without deleting and redeploying the service and treated like any other upgrade such as a container image change.

@@ -251,7 +251,7 @@
 NOTE: This feature is under active development. There may be dragons here.

-Endpoints are a large part of what openstack-helm seeks to provide mechanisms around. OpenStack is a highly interconnected application, with various components requiring connectivity details to numerous services, including other OpenStack components, infrastructure elements such as databases, queues, and memcache infrastructure. We want to ensure we provide a consistent mechanism for defining these "endpoints" across all charts and the macros necessary to convert those definitions into usable endpoints. The charts should consistently default to building endpoints that assume the operator is leveraging all of our charts to build their OpenStack stack. However, we want to ensure that if operators want to run some of our charts and have those plug into their existing infrastructure, or run elements in different namespaces, for example, these endpoints should be overridable.
+Endpoints are a large part of what openstack-helm seeks to provide mechanisms around. OpenStack is a highly interconnected application, with various components requiring connectivity details to numerous services, including other OpenStack components, infrastructure elements such as databases, queues, and memcached infrastructure. We want to ensure we provide a consistent mechanism for defining these "endpoints" across all charts and the macros necessary to convert those definitions into usable endpoints. The charts should consistently default to building endpoints that assume the operator is leveraging all of our charts to build their OpenStack stack. However, we want to ensure that if operators want to run some of our charts and have those plug into their existing infrastructure, or run elements in different namespaces, for example, these endpoints should be overridable.

 For instance, in the neutron chart `values.yaml` the following endpoints are defined:

@@ -299,7 +299,7 @@
 api: 9696
 ```

-These values define all the endpoints that the neutron chart may need in order to build full URL compatible endpoints to various services. Long-term these will also include database, memcache, and rabbitmq elements in one place--essentially all external connectivity needs defined centrally.
+These values define all the endpoints that the neutron chart may need in order to build full URL compatible endpoints to various services. Long-term these will also include database, memcached, and rabbitmq elements in one place--essentially all external connectivity needs defined centrally.

 The macros that help translate these into the actual URLs necessary are defined in the `common` chart.
For instance, the cinder chart defines a `glance_api_servers` definition in the `cinder.conf` template: @@ -309,7 +309,7 @@ The macros that help translate these into the actual URLs necessary are defined This line of magic uses the `endpoint_type_lookup_addr` macro in the common chart (since it is used by all charts), and passes it three parameters: -- image: This the OpenStack service we are building an endpoint for. This will be mapped to `glance` which is the image service for OpenStack. +- image: This is the OpenStack service we are building an endpoint for. This will be mapped to `glance` which is the image service for OpenStack. - internal: This is the OpenStack endpoint type we are looking for - valid values would be `internal`, `admin`, and `public` - api: This is the port to map to for the service. Some components such as glance provide an `api` port and a `registry` port, for example. @@ -317,7 +317,7 @@ Charts should avoid at all costs hard coding values such as ``http://keystone-ap ## Common Conditionals -The openstack-helm charts make the following conditions available across all charts which can be set at install or upgrade time with helm: +The openstack-helm charts make the following conditions available across all charts which can be set at install or upgrade time with Helm: ### Developer Mode From 74629958b2461d63196d5a884d6267e8fbf68a5e Mon Sep 17 00:00:00 2001 From: Alan Meadows Date: Fri, 3 Feb 2017 11:40:43 -0800 Subject: [PATCH 03/18] update helm override document with further feedback --- docs/helm_overrides.md | 28 +++++++++++++++------------- 1 file changed, 15 insertions(+), 13 deletions(-) diff --git a/docs/helm_overrides.md b/docs/helm_overrides.md index 74e8dafe1f..da0e90a644 100644 --- a/docs/helm_overrides.md +++ b/docs/helm_overrides.md @@ -6,9 +6,9 @@ This document covers helm overrides and the openstack-helm approach. For more i Two major philosophies guide the openstack-helm values approach. It is important that new chart developers understand the `values.yaml` approach openstack-helm has within each of its charts to ensure all of our charts are both consistent and remain a joy to work with. -The first is that all charts should be independently installable and do not require a parent chart. This means that the values file in each chart should be self-contained. We will avoid the use of Helm globals and the concept of a parent chart as a requirement to capture and feed environment specific overrides into subcharts. An example of a single site definition YAML that can be source controlled and used as `--values` input to all openstack-helm charts to maintain overrides in one testable place is forthcoming. Currently Helm does not support as `--values=environment.yaml` chunking up a larger override files YAML namespace. Ideally, we are seeking native Helm support for `helm install local/keystone --values=environment.yaml:keystone` where `environment.yaml` is the operators chart-wide environment defition and `keystone` is the section in environment.yaml that will be fed to the keystone chart during install as overrides. Standard YAML anchors can be used to duplicate common elements like the `endpoints` sections. As of this document, operators can use a temporary approach like [values.py](https://github.com/att-comdev/openstack-helm/blob/master/common/utils/values/values.py) to chunk up a single override YAML file as input to various individual charts. 
It is our belief that overrides, just like the templates themselves, should be source controlled and tested especially for operators operating charts at scale. We will continue to examine efforts such as [helm-value-store](https://github.com/skuid/helm-value-store) and solutions in the vein of [helmfile](https://github.com/roboll/helmfile). A project that seems quite compelling to address the needs of orchestrating multiple charts and managing site specific overrides is [Landscape](https://github.com/Eneco/landscaper) +The first is that all charts should be independently installable and do not require a parent chart. This means that the values file in each chart should be self-contained. We will avoid the use of Helm globals and the concept of a parent chart as a requirement to capture and feed environment specific overrides into subcharts. An example of a single site definition YAML that can be source controlled and used as `--values` input to all openstack-helm charts to maintain overrides in one testable place is forthcoming. Currently Helm does not support as `--values=environment.yaml` chunking up a larger override files YAML namespace. Ideally, we are seeking native Helm support for `helm install local/keystone --values=environment.yaml:keystone` where `environment.yaml` is the operators chart-wide environment defition and `keystone` is the section in environment.yaml that will be fed to the keystone chart during install as overrides. Standard YAML anchors can be used to duplicate common elements like the `endpoints` sections. As of this document, operators can use a temporary approach like [values.py](https://github.com/att-comdev/openstack-helm/blob/master/helm-toolkit/utils/values/values.py) to chunk up a single override YAML file as input to various individual charts. It is our belief that overrides, just like the templates themselves, should be source controlled and tested especially for operators operating charts at scale. We will continue to examine efforts such as [helm-value-store](https://github.com/skuid/helm-value-store) and solutions in the vein of [helmfile](https://github.com/roboll/helmfile). A project that seems quite compelling to address the needs of orchestrating multiple charts and managing site specific overrides is [Landscape](https://github.com/Eneco/landscaper) -The second is that the values files should be consistent across all charts, including charts in core, infra, and addons. This provides a consistent way for operators to override settings such as enabling developer mode, defining resource limitations, and customizing the actual OpenStack configuration within chart templates without having to guess how a particular chart developer has layed out their values.yaml. There are also various macros in the `common` chart that will depend on the values.yaml within all charts being structured a certain way. +The second is that the values files should be consistent across all charts, including charts in core, infra, and addons. This provides a consistent way for operators to override settings such as enabling developer mode, defining resource limitations, and customizing the actual OpenStack configuration within chart templates without having to guess how a particular chart developer has layed out their values.yaml. There are also various macros in the `helm-toolkit` chart that will depend on the `values.yaml` within all charts being structured a certain way. 
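Taken together, these conventions imply a common top-level skeleton for every chart's `values.yaml`, which the sections below walk through; this outline is illustrative, with section bodies elided:

```
# illustrative skeleton only; each section follows the
# conventions described in this document
replicas: {}       # per-component replica counts
labels: {}         # node_selector_key / node_selector_value pairs
images: {}         # fully pinned image references plus pull_policy
upgrades: {}       # revision history and pod replacement strategy
resources: {}      # enabled flag plus per-component requests/limits
endpoints: {}      # connectivity details for dependent services
development: {}    # developer-mode toggle and related settings
```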
Finally, where charts reference connectivity information for other services sane defaults should be provided. In the case where these services are provided by openstack-helm itself, the defaults should assume the user will use the openstack-helm charts for those services but ensure that they can be overriden if the operator has them externally deployed.

@@ -51,7 +51,7 @@ labels:
 node_selector_value: enabled

-In some cases, such as the neutron chart, a chart may need to define more then one label. In cases such as this, each element should be articulated under the `labels:`` section, nesting where appropriate:
+In some cases, such as the neutron chart, a chart may need to define more than one label. In cases such as this, each element should be articulated under the `labels:` section, nesting where appropriate:

 ```
 labels:
@@ -116,7 +116,9 @@ In some cases, especially with infrastructure components, it becomes necessary f
 ## Images

-Our core philosophy regarding images is that the toolsets required to enable the OpenStack services should be applied by Kubernetes itself. This requires the openstack-helm to develop common and simple scripts with minimal dependencies that can be overlayed on any image meeting the OpenStack core library requirements. The advantage of this however is that we can be image agnostic, allowing operators to use Stackanetes, Kolla, Yaodu, or any image flavor they choose and they will all function the same.
+Our core philosophy regarding images is that the toolsets required to enable the OpenStack services should be applied by Kubernetes itself. This requires the openstack-helm project to develop common and simple scripts with minimal dependencies that can be overlayed on any image meeting the OpenStack core library requirements. The advantage of this however is that we can be image agnostic, allowing operators to use Stackanetes, Kolla, Yaodu, or any image flavor and format they choose and they will all function the same.
+
+The long-term goal besides being image agnostic is to also be able to support any of the container runtimes that Kubernetes supports, even those which may not use Docker's own packaging format. This will allow this project to continue to offer maximum flexibility with regard to operator choice.

 To that end, all charts provide an `images:` section that allows operators to override images. It is also our assertion that all default image references should be fully spelled out, even those hosted by docker, and no default image reference should use `:latest` but be pinned to a specific version to ensure a consistent behavior for deployments over time.

@@ -168,7 +170,7 @@ This is accomplished with the following annotation:
 configmap-etc-hash: {{ tuple "configmap-etc.yaml" . | include "hash" }}

-The `hash` function defined in the `common` chart ensures that any change to any file referenced by configmap-bin.yaml or configmap-etc.yaml results in a new hash, which will trigger a rolling update.
+The `hash` function defined in the `helm-toolkit` chart ensures that any change to any file referenced by configmap-bin.yaml or configmap-etc.yaml results in a new hash, which will trigger a rolling update.
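One way such a `hash` helper can be implemented is to render the named sibling template and digest the result with sprig's `sha256sum`; the following is a sketch of that approach, not necessarily the exact `helm-toolkit` source:

```
{{- define "hash" -}}
{{- $name := index . 0 -}}
{{- $context := index . 1 -}}
{{- /* resolve the sibling template path, e.g. .../templates/configmap-bin.yaml */ -}}
{{- $last := base $context.Template.Name -}}
{{- $tpl := $context.Template.Name | replace $last $name -}}
{{- include $tpl $context | sha256sum | quote -}}
{{- end -}}
```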
All chart components (except `DaemonSets`) are outfitted by default with rolling update strategies:

@@ -261,7 +263,7 @@ For instance, in the neutron chart `values.yaml` the following endpoints are def
 # values, but should include all endpoints
 # required by this chart
 endpoints:
-  glance:
+  image:
     hosts:
       default: glance-api
     type: image
@@ -270,7 +272,7 @@ endpoints:
     port:
       api: 9292
       registry: 9191
-  nova:
+  compute:
     hosts:
       default: nova-api
     path: "/v2/%(tenant_id)s"
@@ -280,7 +282,7 @@
       api: 8774
       metadata: 8775
       novncproxy: 6080
-  keystone:
+  identity:
     hosts:
       default: keystone-api
     path: /v3
@@ -289,7 +291,7 @@
     port:
       admin: 35357
       public: 5000
-  neutron:
+  network:
     hosts:
       default: neutron-server
     path: null
@@ -299,15 +301,15 @@
     api: 9696
 ```

-These values define all the endpoints that the neutron chart may need in order to build full URL compatible endpoints to various services. Long-term these will also include database, memcached, and rabbitmq elements in one place--essentially all external connectivity needs defined centrally.
+These values define all the endpoints that the neutron chart may need in order to build full URL compatible endpoints to various services. Long-term these will also include database, memcached, and rabbitmq elements in one place; essentially all external connectivity needs defined centrally.

-The macros that help translate these into the actual URLs necessary are defined in the `common` chart. For instance, the cinder chart defines a `glance_api_servers` definition in the `cinder.conf` template:
+The macros that help translate these into the actual URLs necessary are defined in the `helm-toolkit` chart. For instance, the cinder chart defines a `glance_api_servers` definition in the `cinder.conf` template:

 ```
-+glance_api_servers = {{ tuple "image" "internal" "api" . | include "endpoint_type_lookup_addr" }}
++glance_api_servers = {{ tuple "image" "internal" "api" . | include "helm-toolkit.endpoint_type_lookup_addr" }}
 ```

-This line of magic uses the `endpoint_type_lookup_addr` macro in the common chart (since it is used by all charts), and passes it three parameters:
+This line of magic uses the `endpoint_type_lookup_addr` macro in the `helm-toolkit` chart (since it is used by all charts). Note a second convention here: all `{{ define }}` macros in charts should be prefixed with the chart that is defining them. This allows developers to easily identify the source of a Helm macro and also avoid namespace collisions. In the example above, the macro `endpoint_type_lookup_addr` is defined in the `helm-toolkit` chart. This macro is passed three parameters (aided by the `tuple` method built into the go/sprig templating library used by Helm):

 - image: This is the OpenStack service we are building an endpoint for. This will be mapped to `glance` which is the image service for OpenStack.

From d3f23c30bd3ac8549d7e11ae892f3f3339f2f725 Mon Sep 17 00:00:00 2001
From: Alan Meadows
Date: Fri, 3 Feb 2017 11:44:12 -0800
Subject: [PATCH 04/18] Remove non-existent links in TOC

As it stands, it is confusing to a viewer of the TOC what links to actual
documentation and what is a placeholder. This removes links that go
nowhere.
---
 docs/README.md | 78 +++++++++++++++++++++++++-------------------------
 1 file changed, 39 insertions(+), 39 deletions(-)

diff --git a/docs/README.md b/docs/README.md
index 65e8ad807d..76723c1f5f 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -4,10 +4,10 @@
 ## Table of Contents
 ##  1. [Openstack-Helm Design Principles]()
-###    1.1 [Mission](mission.md)
+###    1.1 Mission
 #####      1.1.1 [Resiliency](mission.md#resiliency)
 #####      1.1.2 [Scaling](mission.md#scaling)
-###    1.2 [Helm Overrides](helm_overrides.md)
+###    1.2 Helm Overrides
 #####      1.2.1 [Values Philosophy](helm_overrides.md#values-philosophy)
 #####      1.2.2 [Replicas](helm_overrides.md#replicas)
 #####      1.2.3 [Labels](helm_overrides.md#labels)
@@ -16,42 +16,42 @@
 #####      1.2.6 [Resource Limits](helm_overrides.md#resource-limits)
 #####      1.2.7 [Endpoints](helm_overrides.md#endpoints)
 #####      1.2.8 [Common Conditionals](helm_overrides.md#common-conditionals)
-###    1.3 [Init-Containers]()
-#####      1.3.1 [Dependency Checking]()
-###    1.4 [Kubernetes Jobs]()
-#####      1.4.1 [Service Registration]()
-#####      1.4.2 [User Registration]()
-#####      1.4.3 [Database Creation]()
-#####      1.4.4 [Database Migration]()
-###    1.5 [Complimentary Efforts]()
-####      1.5.1 [Image-Based Project Considerations]()
-###    1.6 [Kubernetes State]()
-####      1.6.1 [Third Party Resources]()
-####      1.6.2 [Add-Ons]()
-##  2. [Repository Structure]()
-###    2.1 [Infrastructure Components]()
-###    2.2 [Openstack-Helm Core Services]()
-###    2.3 [Openstack-Helm Add-Ons]()
-##  3. [Operator Resources]()
+###    1.3 Init-Containers
+#####      1.3.1 Dependency Checking
+###    1.4 Kubernetes Jobs
+#####      1.4.1 Service Registration
+#####      1.4.2 User Registration
+#####      1.4.3 Database Creation
+#####      1.4.4 Database Migration
+###    1.5 Complementary Efforts
+####      1.5.1 Image-Based Project Considerations
+###    1.6 Kubernetes State
+####      1.6.1 Third Party Resources
+####      1.6.2 Add-Ons
+##  2. Repository Structure
+###    2.1 Infrastructure Components
+###    2.2 Openstack-Helm Core Services
+###    2.3 Openstack-Helm Add-Ons
+##  3. Operator Resources
 ###     3.1 [Installation](https://github.com/att-comdev/openstack-helm/blob/master/docs/installation/getting-started.md)
-###     3.2 [Openstack-Helm Chart Definition Overrides]()
-###     3.2 [Openstacak-Helm Upgrades]()
-##  4. [Openstack-Helm Networking]()
-###    4.1 [Kubernetes Control Plane]()
-####     4.1.1 [CNI SDN Considerations]()
-####     4.1.2 [Calico Networking]()
-###    4.2 [Ingress Philosophy]()
-###    4.3 [Openstack Networking]()
-####     4.3.1 [Flat Networking]()
-####     4.3.1 [L2 Networking]()
-##  5. [Security Guidelines]()
-###    5.1 [Network Policies]()
-###    5.2 [Advanced Network Policies]()
-###    5.3 [Role-Based Access Controls]()
-###    5.4 [Security Contexts]()
-###    5.5 [Security Add-Ons]()
-##  6. [Developer Resources](https://github.com/att-comdev/openstack-helm/tree/master/docs/developer)
-###    6.1 [Contributions and Guidelines]()
-###    6.2 [Development Tools]()
+###     3.2 Openstack-Helm Chart Definition Overrides
+###     3.3 Openstack-Helm Upgrades
+##  4. Openstack-Helm Networking
+###    4.1 Kubernetes Control Plane
+####     4.1.1 CNI SDN Considerations
+####     4.1.2 Calico Networking
+###    4.2 Ingress Philosophy
+###    4.3 Openstack Networking
+####     4.3.1 Flat Networking
+####     4.3.2 L2 Networking
+##  5.
Security Guidelines +###    5.1 Network Policies +###    5.2 Advanced Network Policies +###    5.3 Role-Based Access Controls +###    5.4 Security Contexts +###    5.5 Security Add-Ons +##  6. Developer Resources +###    6.1 Contributions and Guidelines +###    6.2 Development Tools ####     6.2.1 [Minikube Development](https://github.com/att-comdev/openstack-helm/blob/master/docs/developer/minikube.md) -###    6.3 [Tips and Considerations]() +###    6.3 Tips and Considerations From 8cd88154a1edb0a2bb9e323dfc944b2bbf48327e Mon Sep 17 00:00:00 2001 From: "Rachel M. Jozsa" Date: Mon, 6 Feb 2017 22:13:02 -0500 Subject: [PATCH 05/18] grammatical changes MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit - General formatting/sentence structure changes. - Capitalization changes for projects like Neutron, Kubernetes, OpenStack-Helm, etc (proper nouns). - Changed plural pronouns to singular nouns (“project” vs. “we” or “our”). --- docs/helm_overrides.md | 27 ++++++++++++++------------- docs/mission.md | 2 +- 2 files changed, 15 insertions(+), 14 deletions(-) diff --git a/docs/helm_overrides.md b/docs/helm_overrides.md index da0e90a644..8a7eb306ec 100644 --- a/docs/helm_overrides.md +++ b/docs/helm_overrides.md @@ -1,22 +1,22 @@ # Helm Overrides -This document covers helm overrides and the openstack-helm approach. For more information on Helm overrides in general see the Helm [Values Documentation](https://github.com/kubernetes/helm/blob/master/docs/charts.md#values-files) +This document covers Helm overrides and the OpenStack-Helm approach. For more information on Helm overrides in general see the Helm [Values Documentation](https://github.com/kubernetes/helm/blob/master/docs/charts.md#values-files) ## Values Philosophy -Two major philosophies guide the openstack-helm values approach. It is important that new chart developers understand the `values.yaml` approach openstack-helm has within each of its charts to ensure all of our charts are both consistent and remain a joy to work with. +Two major philosophies guide the OpenStack-Helm values approach. It is important that new chart developers understand the `values.yaml` approach OpenStack-Helm has within each of its charts to ensure that all charts are both consistent and remain a joy to work with. -The first is that all charts should be independently installable and do not require a parent chart. This means that the values file in each chart should be self-contained. We will avoid the use of Helm globals and the concept of a parent chart as a requirement to capture and feed environment specific overrides into subcharts. An example of a single site definition YAML that can be source controlled and used as `--values` input to all openstack-helm charts to maintain overrides in one testable place is forthcoming. Currently Helm does not support as `--values=environment.yaml` chunking up a larger override files YAML namespace. Ideally, we are seeking native Helm support for `helm install local/keystone --values=environment.yaml:keystone` where `environment.yaml` is the operators chart-wide environment defition and `keystone` is the section in environment.yaml that will be fed to the keystone chart during install as overrides. Standard YAML anchors can be used to duplicate common elements like the `endpoints` sections. 
As of this document, operators can use a temporary approach like [values.py](https://github.com/att-comdev/openstack-helm/blob/master/helm-toolkit/utils/values/values.py) to chunk up a single override YAML file as input to various individual charts. It is our belief that overrides, just like the templates themselves, should be source controlled and tested especially for operators operating charts at scale. We will continue to examine efforts such as [helm-value-store](https://github.com/skuid/helm-value-store) and solutions in the vein of [helmfile](https://github.com/roboll/helmfile). A project that seems quite compelling to address the needs of orchestrating multiple charts and managing site specific overrides is [Landscape](https://github.com/Eneco/landscaper) +The first philosophy to understand is that all charts should be independently installable and should not require a parent chart. This means that the values file in each chart should be self-contained. The project avoids using Helm globals and parent charts as requirements for capturing and feeding environment-specific overrides into subcharts. An example of a single site definition YAML that can be source controlled and used as `--values` input to all OpenStack-Helm charts to maintain overrides in one testable place is forthcoming. Currently, Helm does not support chunking up a larger override file's YAML namespace through a single `--values=environment.yaml` flag. Ideally, the project seeks native Helm support for `helm install local/keystone --values=environment.yaml:keystone` where `environment.yaml` is the operator's chart-wide environment definition and `keystone` is the section in environment.yaml that will be fed to the keystone chart during install as overrides. Standard YAML anchors can be used to duplicate common elements like the `endpoints` sections. At the time of writing, operators can use a temporary approach like [values.py](https://github.com/att-comdev/openstack-helm/blob/master/helm-toolkit/utils/values/values.py) to chunk up a single override YAML file as input to various individual charts. Overrides, just like the templates themselves, should be source controlled and tested, especially for operators operating charts at scale. This project will continue to examine efforts such as [helm-value-store](https://github.com/skuid/helm-value-store) and solutions in the vein of [helmfile](https://github.com/roboll/helmfile). Another compelling project that seems to address the needs of orchestrating multiple charts and managing site-specific overrides is [Landscape](https://github.com/Eneco/landscaper).
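As a brief aside on the YAML anchor technique mentioned above, a hedged sketch of how a site-wide override file might deduplicate an `endpoints` block follows; the section layout and key names here are hypothetical rather than the charts' actual schema:

```
# environment.yaml - illustrative only
common_endpoints: &common_endpoints
  identity:
    hosts:
      default: keystone-api
    port:
      api: 5000

keystone:
  endpoints: *common_endpoints

glance:
  endpoints: *common_endpoints
```

Each chart-named section could then be carved out of this single file and fed to its chart as overrides, for instance with the values.py helper mentioned above.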
-The second is that the values files should be consistent across all charts, including charts in core, infra, and addons. This provides a consistent way for operators to override settings such as enabling developer mode, defining resource limitations, and customizing the actual OpenStack configuration within chart templates without having to guess how a particular chart developer has layed out their values.yaml. There are also various macros in the `helm-toolkit` chart that will depend on the `values.yaml` within all charts being structured a certain way. +The second philosophy is that the values files should be consistent across all charts, including charts in core, infra, and add-ons. This provides a consistent way for operators to override settings, such as enabling developer mode, defining resource limitations, and customizing the actual OpenStack configuration within chart templates without having to guess how a particular chart developer has laid out their values.yaml. There are also various macros in the `helm-toolkit` chart that will depend on the `values.yaml` within all charts being structured a certain way. -Finally, where charts reference connectivity information for other services sane defaults should be provided. In the case where these services are provided by openstack-helm itself, the defaults should assume the user will use the openstack-helm charts for those services but ensure that they can be overriden if the operator has them externally deployed. +Finally, where charts reference connectivity information for other services, sane defaults should be provided. In cases where these services are provided by OpenStack-Helm itself, the defaults should assume that the user will use the OpenStack-Helm charts for those services, but should also allow those charts to be overridden if the operator has them externally deployed. ## Replicas -All charts must provide replicas definitions and leverage those in the Kubernetes manifests. This allows site operators to tune the replica counts at install or upgrade time. We suggest all charts deploy by default with more then one replica to ensure that openstack-helm being used in production environments is treated as a first class citizen and that more than one replica of every service is frequently tested. Developers wishing to deploy minimal environments can enable the `development` mode override which should enforce only one replica of each component. +All charts must provide replica definitions and leverage those in the Kubernetes manifests. This allows site operators to tune the replica counts at install or when upgrading. Each chart should deploy with multiple replicas by default to ensure that production deployments are treated as first class citizens, and that services are tested with multiple replicas more frequently during development and testing. Developers wishing to deploy minimal environments can enable the `development` mode override, which should enforce only one replica per component. -The convention today in openstack-helm is to define a `replicas:` section for the chart, with each component being deployed having its own tunable value. +The convention today in OpenStack-Helm is to define a `replicas:` section for the chart, where each component being deployed has its own tunable value. For example, the `glance` chart provides the following replicas in `values.yaml` @@ -34,7 +34,7 @@ $ helm install local/glance --set replicas.api=3,replicas.registry=3 ## Labels -We use nodeSelectors as well as podAntiAffinity rules to ensure resources land in the proper place within Kubernetes. Today, openstack-helm employs four labels: +This project uses nodeSelectors as well as podAntiAffinity rules to ensure resources land in the proper place within Kubernetes. Today, OpenStack-Helm employs four labels: - ceph-storage: enabled - openstack-control-plane: enabled @@ -44,7 +44,7 @@ This project uses nodeSelectors as well as podAntiAffinity rules to ensure resou -NOTE: The `openvswitch` label is an element that is applicable to both `openstack-control-plane` as well as `openstack-compute-node` hosts. Ideally, we would eliminate the `openvswitch` label as we simply want to deploy openvswitch to an OR of (`openstack-control-plane` and `openstack-compute-node`). 
However, Kubernetes `nodeSelectors` prohibits this specific logic. As a result of this, we require a third label that spans all hosts, which is `openvswitch`. The openvswitch service must run on both types of hosts to provide openvswitch connectivity for DHCP, L3, Metadata services which run in the control plane as well as tenant connectivity which runs on the compute node infrastructure. +NOTE: The `openvswitch` label is an element that is applicable to both `openstack-control-plane` as well as `openstack-compute-node` nodes. Ideally, you would eliminate the `openvswitch` label if you simply wanted to deploy openvswitch to an OR of (`openstack-control-plane` and `openstack-compute-node`). However, Kubernetes `nodeSelectors` prohibits this specific logic. As a result of this, a third label that spans all hosts is required, which in this case is `openvswitch`. The Open vSwitch service must run on both control plane and tenant nodes with both labels to provide connectivity for DHCP, L3, and Metadata services. These Open vSwitch services run as part of the control plane as well as tenant connectivity, which runs as part of the compute node infrastructure. -Labels are of course definable and overridable by the chart operators. Labels are defined in charts with a common convention, using a `labels:` section which defines both a selector, and a value: + +Labels are of course definable and overridable by the chart operators. Labels are defined in charts with a common convention, by using a `labels:` section, which defines both a selector and a value: ``` labels: @@ -51,7 +52,7 @@ labels: node_selector_value: enabled ``` -In some cases, such as the neutron chart, a chart may need to define more then one label. In cases such as this, each element should be articulated under the `labels:` section, nesting where appropriate: +In some cases, such as with the Neutron chart, a chart may need to define more than one label. In cases such as this, each element should be articulated under the `labels:` section, nesting where appropriate: ``` labels: @@ -233,7 +234,7 @@ resources: ... ``` -These resources definitions are then applied to the appropriate component, when the `enabled` flag is set. For instance, below, the nova_compute daemonset has the requests and limits values applied from `.Values.resources.nova_compute`: +These resources definitions are then applied to the appropriate component, when the `enabled` flag is set. For instance, the following nova_compute daemonset has the requests and limits values applied from `.Values.resources.nova_compute`: ``` {{- if .Values.resources.enabled }} @@ -321,7 +322,7 @@ Charts should avoid at all costs hard coding values such as ``http://keystone-ap The openstack-helm charts make the following conditions available across all charts which can be set at install or upgrade time with Helm: -### Developer Mode +### Developer Mode ``` helm install local/chart --set development.enabled=true diff --git a/docs/mission.md b/docs/mission.md index b39a809f90..f7d96fc611 100644 --- a/docs/mission.md +++ b/docs/mission.md @@ -1,6 +1,6 @@ # Mission -The goal for openstack-helm is to provide an incredibly customizable *framework* for operators and developers alike. This framework will enable end-users to deploy, maintain, and upgrade a fully functioning OpenStack environment for both simple and complex environments. Administrators or developers can either deploy all or individual OpenStack components along with their required dependencies. 
It heavily borrows concepts from [Stackanetes](https://github.com/stackanetes/stackanetes) and [other complex Helm application deployments](https://github.com/sapcc/openstack-helm). This project is meant to be a collaborative project that brings Openstack applications into a [Cloud-Native](https://www.cncf.io/about/charter) model. +The goal for OpenStack-Helm is to provide an incredibly customizable *framework* for operators and developers alike. This framework will enable end-users to deploy, maintain, and upgrade a fully functioning OpenStack environment for both simple and complex environments. Administrators or developers can either deploy all or individual OpenStack components along with their required dependencies. It heavily borrows concepts from [Stackanetes](https://github.com/stackanetes/stackanetes) and [other complex Helm application deployments](https://github.com/sapcc/openstack-helm). This project is meant to be a collaborative project that brings OpenStack applications into a [Cloud-Native](https://www.cncf.io/about/charter) model. From 58c0da8a745c0b4efa9dbcf2a20131d4e7715d4b Mon Sep 17 00:00:00 2001 From: gardlt Date: Tue, 7 Feb 2017 10:37:36 -0600 Subject: [PATCH 06/18] remove-lock-files-from-chart-directories * removed lockfiles from charts dir Closes-bug: #180 --- .gitignore | 1 + bootstrap/requirements.lock | 6 ------ common/requirements.lock | 3 --- horizon/requirements.lock | 6 ------ keystone/requirements.lock | 6 ------ mariadb/requirements.lock | 6 ------ memcached/requirements.lock | 6 ------ openstack/requirements.lock | 18 ------------------ rabbitmq/requirements.lock | 6 ------ 9 files changed, 1 insertion(+), 57 deletions(-) delete mode 100644 bootstrap/requirements.lock delete mode 100644 common/requirements.lock delete mode 100644 horizon/requirements.lock delete mode 100644 keystone/requirements.lock delete mode 100644 mariadb/requirements.lock delete mode 100644 memcached/requirements.lock delete mode 100644 openstack/requirements.lock delete mode 100644 rabbitmq/requirements.lock diff --git a/.gitignore b/.gitignore index 41c079b6c3..1e69553255 100644 --- a/.gitignore +++ b/.gitignore @@ -1,4 +1,5 @@ *.lock +*/*.lock *.tgz **/*.tgz .idea/ diff --git a/bootstrap/requirements.lock b/bootstrap/requirements.lock deleted file mode 100644 index e5d7bf64b5..0000000000 --- a/bootstrap/requirements.lock +++ /dev/null @@ -1,6 +0,0 @@ -dependencies: -- name: common - repository: http://localhost:8879/charts - version: 0.1.0 -digest: sha256:c6a7e430c900036912fe3fdc14213e9280c5da0b6607ce4dcf6dc95535d114fc -generated: 2016-11-30T17:10:48.887264789-08:00 diff --git a/common/requirements.lock b/common/requirements.lock deleted file mode 100644 index 98d4c0a4eb..0000000000 --- a/common/requirements.lock +++ /dev/null @@ -1,3 +0,0 @@ -dependencies: [] -digest: sha256:81059fe6210ccee4e3349c0f34c12d180f995150128a913d63b65b7937c6b152 -generated: 2016-11-30T17:10:48.32482926-08:00 diff --git a/horizon/requirements.lock b/horizon/requirements.lock deleted file mode 100644 index a30dd69a13..0000000000 --- a/horizon/requirements.lock +++ /dev/null @@ -1,6 +0,0 @@ -dependencies: -- name: common - repository: http://localhost:8879/charts - version: 0.1.0 -digest: sha256:c6a7e430c900036912fe3fdc14213e9280c5da0b6607ce4dcf6dc95535d114fc -generated: 2016-11-30T17:10:50.917838584-08:00 diff --git a/keystone/requirements.lock b/keystone/requirements.lock deleted file mode 100644 index 33d057c17f..0000000000 --- a/keystone/requirements.lock +++ /dev/null @@ -1,6 +0,0 @@ 
-dependencies: -- name: common - repository: http://localhost:8879/charts - version: 0.1.0 -digest: sha256:c6a7e430c900036912fe3fdc14213e9280c5da0b6607ce4dcf6dc95535d114fc -generated: 2016-11-30T17:10:51.579937981-08:00 diff --git a/mariadb/requirements.lock b/mariadb/requirements.lock deleted file mode 100644 index 9c460b97df..0000000000 --- a/mariadb/requirements.lock +++ /dev/null @@ -1,6 +0,0 @@ -dependencies: -- name: common - repository: http://localhost:8879/charts - version: 0.1.0 -digest: sha256:c6a7e430c900036912fe3fdc14213e9280c5da0b6607ce4dcf6dc95535d114fc -generated: 2016-11-30T17:10:49.537749902-08:00 diff --git a/memcached/requirements.lock b/memcached/requirements.lock deleted file mode 100644 index a30dd69a13..0000000000 --- a/memcached/requirements.lock +++ /dev/null @@ -1,6 +0,0 @@ -dependencies: -- name: common - repository: http://localhost:8879/charts - version: 0.1.0 -digest: sha256:c6a7e430c900036912fe3fdc14213e9280c5da0b6607ce4dcf6dc95535d114fc -generated: 2016-11-30T17:10:50.917838584-08:00 diff --git a/openstack/requirements.lock b/openstack/requirements.lock deleted file mode 100644 index 73571f11ba..0000000000 --- a/openstack/requirements.lock +++ /dev/null @@ -1,18 +0,0 @@ -dependencies: -- name: common - repository: http://localhost:8879/charts - version: 0.1.0 -- name: memcached - repository: http://localhost:8879/charts - version: 0.1.0 -- name: rabbitmq - repository: http://localhost:8879/charts - version: 0.1.0 -- name: mariadb - repository: http://localhost:8879/charts - version: 0.1.0 -- name: keystone - repository: http://localhost:8879/charts - version: 0.1.0 -digest: sha256:e92d6b6811d65492a95e4db258d516bfd7dd540108bb3d0e92e7dabc13ae2bbf -generated: 2016-11-30T17:10:52.235026033-08:00 diff --git a/rabbitmq/requirements.lock b/rabbitmq/requirements.lock deleted file mode 100644 index 3d8db9f442..0000000000 --- a/rabbitmq/requirements.lock +++ /dev/null @@ -1,6 +0,0 @@ -dependencies: -- name: common - repository: http://localhost:8879/charts - version: 0.1.0 -digest: sha256:c6a7e430c900036912fe3fdc14213e9280c5da0b6607ce4dcf6dc95535d114fc -generated: 2016-11-30T17:10:50.189434385-08:00 From 95759a7189dbd6291ff7f6e07f8ee4571fe492f6 Mon Sep 17 00:00:00 2001 From: Steve Wilkerson Date: Tue, 7 Feb 2017 14:03:55 -0600 Subject: [PATCH 07/18] Add travis-ci to openstack-helm Adds the travis.yml file to openstack-helm. The current travis workflow is: install glide for helm, build helm from source, run kubeadm in a container, init helm on the client side, then run tiller locally. The scripts currently run the makefile to check: helm dep up, helm lint, and helm package on the charts defined in the makefile. Then a helm install --dry-run --debug is run on each of the charts packaged locally. 
--- .travis.yml | 33 +++++++++++++++++++++++++++++++++ travis-ci/charts_dry_run.sh | 6 ++++++ travis-ci/kubeadm_setup.sh | 5 +++++ 3 files changed, 44 insertions(+) create mode 100644 .travis.yml create mode 100644 travis-ci/charts_dry_run.sh create mode 100644 travis-ci/kubeadm_setup.sh diff --git a/.travis.yml b/.travis.yml new file mode 100644 index 0000000000..62c1e39121 --- /dev/null +++ b/.travis.yml @@ -0,0 +1,33 @@ +language: go + +services: + - docker + +before_install: + - export GLIDE_VERSION=v0.12.3 + - ls $GOPATH/src/ + - wget "https://github.com/Masterminds/glide/releases/download/$GLIDE_VERSION/glide-$GLIDE_VERSION-linux-amd64.tar.gz" + - mkdir -p $HOME/bin + - tar -vxz -C $HOME/bin --strip=1 -f glide-$GLIDE_VERSION-linux-amd64.tar.gz + - export PATH="$HOME/bin:$PATH" GLIDE_HOME="$HOME/.glide" + +install: + - cd $GOPATH/src/ + - mkdir k8s.io && cd k8s.io + - git clone https://github.com/kubernetes/helm + - cd helm && make bootstrap build + - mv bin/helm $HOME/bin + +script: + - cd $TRAVIS_BUILD_DIR + - bash travis-ci/kubeadm_setup.sh + - $HOME/gopath/src/k8s.io/helm/bin/tiller & + - export HELM_HOST=localhost:44134 + - helm init --client-only + - helm version + - helm serve & + - sleep 1m + - helm repo add local http://localhost:8879/charts + - helm repo update + - make + - bash travis-ci/charts_dry_run.sh diff --git a/travis-ci/charts_dry_run.sh b/travis-ci/charts_dry_run.sh new file mode 100644 index 0000000000..8fed0b1e29 --- /dev/null +++ b/travis-ci/charts_dry_run.sh @@ -0,0 +1,6 @@ +#!/bin/bash + +for chart in *.tgz; do + echo "Running helm install --dry-run --debug on $chart"; + helm install --dry-run --debug ./$chart; +done diff --git a/travis-ci/kubeadm_setup.sh b/travis-ci/kubeadm_setup.sh new file mode 100644 index 0000000000..b2d933dd52 --- /dev/null +++ b/travis-ci/kubeadm_setup.sh @@ -0,0 +1,5 @@ +#!/bin/bash + +docker run -it -e quay.io/attcomdev/kubeadm-ci:latest --name kubeadm-ci --privileged=true -d --net=host --security-opt seccomp:unconfined --cap-add=SYS_ADMIN -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v /var/run/docker.sock:/var/run/docker.sock quay.io/attcomdev/kubeadm-ci:latest /sbin/init + +docker exec kubeadm-ci kubeadm.sh From 38dc85b94a71e9776eee0cad3cfca515b43dfb8c Mon Sep 17 00:00:00 2001 From: Steve Wilkerson Date: Tue, 7 Feb 2017 15:13:09 -0600 Subject: [PATCH 08/18] Added tag to kubeadm-ci image Accidentally used the `latest` tagged image for kubeadm-ci --- travis-ci/kubeadm_setup.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/travis-ci/kubeadm_setup.sh b/travis-ci/kubeadm_setup.sh index b2d933dd52..14b04764dc 100644 --- a/travis-ci/kubeadm_setup.sh +++ b/travis-ci/kubeadm_setup.sh @@ -1,5 +1,5 @@ #!/bin/bash -docker run -it -e quay.io/attcomdev/kubeadm-ci:latest --name kubeadm-ci --privileged=true -d --net=host --security-opt seccomp:unconfined --cap-add=SYS_ADMIN -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v /var/run/docker.sock:/var/run/docker.sock quay.io/attcomdev/kubeadm-ci:latest /sbin/init +docker run -it -e quay.io/attcomdev/kubeadm-ci:v1.1.0 --name kubeadm-ci --privileged=true -d --net=host --security-opt seccomp:unconfined --cap-add=SYS_ADMIN -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v /var/run/docker.sock:/var/run/docker.sock quay.io/attcomdev/kubeadm-ci:v1.1.0 /sbin/init docker exec kubeadm-ci kubeadm.sh From 78f4da1f8d72f477f1c2a7ca5b50ff87b07a52b2 Mon Sep 17 00:00:00 2001 From: "Rachel M. Jozsa" Date: Tue, 7 Feb 2017 22:57:16 -0500 Subject: [PATCH 09/18] Cont. 
Grammatical Changes --- docs/helm_overrides.md | 30 +++++++++++++++--------------- 1 file changed, 15 insertions(+), 15 deletions(-) diff --git a/docs/helm_overrides.md b/docs/helm_overrides.md index 8a7eb306ec..cb4eda8d94 100644 --- a/docs/helm_overrides.md +++ b/docs/helm_overrides.md @@ -34,7 +34,7 @@ $ helm install local/glance --set replicas.api=3,replicas.registry=3 ## Labels -This project uses nodeSelectors as well as podAntiAffinity rules to ensure resources land in the proper place within Kubernetes. Today, OpenStack-Helm employs four labels: +This project uses `nodeSelectors` as well as `podAntiAffinity` rules to ensure resources land in the proper place within Kubernetes. Today, OpenStack-Helm employs four labels: - ceph-storage: enabled - openstack-control-plane: enabled @@ -44,7 +44,7 @@ This project uses nodeSelectors as well as podAntiAffinity rules to ensure resou NOTE: The `openvswitch` label is an element that is applicable to both `openstack-control-plane` as well as `openstack-compute-node` nodes. Ideally, you would eliminate the `openvswitch` label if you simply wanted to deploy openvswitch to an OR of (`openstack-control-plane` and `openstack-compute-node`). However, Kubernetes `nodeSelectors` prohibits this specific logic. As a result of this, a third label that spans all hosts is required, which in this case is `openvswitch`. The Open vSwitch service must run on both control plane and tenant nodes with both labels to provide connectivity for DHCP, L3, and Metadata services. These Open vSwitch services run as part of the control plane as well as tenant connectivity, which runs as part of the compute node infrastructure. -Labels are of course definable and overridable by the chart operators. Labels are defined in charts with a common convention, by using a `labels:` section, which defines both a selector and a value: +Labels are of course definable and overridable by the chart operators. Labels are defined in charts by using a `labels:` section, which is a common convention that defines both a selector and a value: ``` labels: @@ -89,7 +89,7 @@ These labels should be leveraged by `nodeSelector` definitions in charts for all ... ``` -In some cases, especially with infrastructure components, it becomes necessary for the chart developer to provide some scheduling instruction to Kubernetes to help ensure proper resiliency. The most common example employed today are podAntiAffinity rules, such as those used in the `mariadb` chart. We encourage these to be placed on all foundational elements so that Kubernetes will not only disperse resources for resiliency, but also allow multi-replica installations to deploy successfully into a single host environment: +In some cases, especially with infrastructure components, it is necessary for the chart developer to provide scheduling instruction to Kubernetes to help ensure proper resiliency. The most common examples employed today are podAntiAffinity rules, such as those used in the `mariadb` chart. These should be placed on all foundational elements so that Kubernetes will not only disperse resources for resiliency, but also allow multi-replica installations to deploy successfully into a single host environment: ``` annotations: @@ -117,15 +117,15 @@ In some cases, especially with infrastructure components, it becomes necessary f ## Images -Our core philosophy regarding images is that the toolsets required to enable the OpenStack services should be applied by Kubernetes itself. 
This requires the openstack-helm to develop common and simple scripts with minimal dependencies that can be overlayed on any image meeting the OpenStack core library requirements. The advantage of this however is that we can be image agnostic, allowing operators to use Stackanetes, Kolla, Yaodu, or any image flavor and format they choose and they will all function the same. +The project's core philosophy regarding images is that the toolsets required to enable the OpenStack services should be applied by Kubernetes itself. This requires OpenStack-Helm to develop common and simple scripts with minimal dependencies that can be overlaid on any image that meets the OpenStack core library requirements. The advantage of this is that the project can be image agnostic, allowing operators to use Stackanetes, Kolla, Yaodu, or any image flavor and format they choose and they will all function the same. -The long-term goal besides being image agnostic is to also able to support any of the container runtimes that Kubernetes supports, even those which may not use Docker's own packaging format. This will allow this project to continue to offer maximum flexibility with regard to operator choice. +A long-term goal, besides being image agnostic, is to also be able to support any of the container runtimes that Kubernetes supports, even those that might not use Docker's own packaging format. This will allow the project to continue to offer maximum flexibility with regard to operator choice. -To that end, all charts provide an `images:` section that allows operators to override images. It is also our assertion that all default image references should be fully spelled out, even those hosted by docker, and no default image reference should use `:latest` but be pinned to a specific version to ensure a consistent behavior for deployments over time. +To that end, all charts provide an `images:` section that allows operators to override images. Also, all default image references should be fully spelled out, even those hosted by Docker or Quay. Further, no default image reference should use `:latest` but rather should be pinned to a specific version to ensure consistent behavior for deployments over time. -Today, the `images:` section has several common conventions. Most OpenStack services require a database initialization function, a database synchronization function, and finally a series of steps for Keystone registration and integration. There may also be specific images for each component that composes those OpenStack services and these may or may not differ but should all be defined in `images`. +Today, the `images:` section has several common conventions. Most OpenStack services require a database initialization function, a database synchronization function, and a series of steps for Keystone registration and integration. Each component may also have a specific image that composes an OpenStack service. The images may or may not differ, but regardless, should all be defined in `images`. -The following standards are in use today, in addition to any components defined by the service itself. +The following standards are in use today, in addition to any components defined by the service itself: - dep_check: The image that will perform dependency checking in an init-container. - db_init: The image that will perform database creation operations for the OpenStack service. 
@@ -135,7 +135,7 @@ The following standards are in use today, in addition to any components defined - ks_endpoints: The image that will perform keystone endpoint registration for the service. - pull_policy: The image pull policy, one of "Always", "IfNotPresent", and "Never" which will be used by all containers in the chart. -An illustrative example of a images: section taken from the heat chart: +An illustrative example of an `images:` section taken from the heat chart: ``` images: @@ -152,15 +152,15 @@ images: pull_policy: "IfNotPresent" ``` -The openstack-helm project today uses a mix of docker images from Stackanetes, Kolla, but going forward we will likely first standardize on Kolla images across all charts but without any reliance on Kolla image utilities, followed by support for alternative images with substantially smaller footprints such as [Yaodu](https://github.com/yaodu) +The OpenStack-Helm project today uses a mix of Docker images from Stackanetes and Kolla, but will likely standardize on Kolla images for all charts without any reliance on Kolla image utilities. Soon, the project will support alternative images with substantially smaller footprints, such as [Yaodu](https://github.com/yaodu). ## Upgrades -The openstack-helm project assumes all upgrades will be done through Helm. This includes both template changes, including resources layered on top of the image such as configuration files as well as the Kubernetes resources themselves in addition to the more common practice of updating images. +The OpenStack-Helm project assumes all upgrades will be done through Helm. This includes both template changes, including resources layered on top of the image such as configuration files as well as the Kubernetes resources themselves in addition to the more common practice of updating images. -Today, several issues exist within Helm when updating images within charts that may be used by jobs that have already run to completion or are still in flight. We will seek to address these within the Helm community or within the charts themselves by support Helm hooks to allow jobs to be deleted during an upgrade so that they can be recreated with an updated image. An example of where this behavior would be desirable would be an updated db_sync image which has updated to point from a Mitaka image to a Newton image. In this case, the operator will likely want a db_sync job which was already run and completed during site installation to run again with the updated image to bring the schema inline with the Newton release. +As Helm stands today, several issues exist when you update images within charts that might have been used by jobs that already ran to completion or are still in flight. OpenStack-Helm developers will continue to work with the Helm community or develop charts that will support job removal prior to an upgrade, which will recreate services with updated images. An example of where this behavior would be desirable is when an updated db_sync image has updated to point from a Mitaka image to a Newton image. In this case, the operator will likely want a db_sync job, which was already run and completed during site installation, to run again with the updated image to bring the schema inline with the Newton release. 
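One possible direction, shown here only as a hedged sketch rather than the project's current implementation, is to mark such jobs as Helm hooks so that Tiller runs them again at upgrade time; the resource names and values below are illustrative:

```
apiVersion: batch/v1
kind: Job
metadata:
  name: keystone-db-sync
  annotations:
    # Run this job at initial install and again on every upgrade. With Helm
    # as it stands, the completed hook resource may still need to be deleted
    # before the next upgrade can recreate it, which is the issue above.
    "helm.sh/hook": post-install,pre-upgrade
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: keystone-db-sync
          image: {{ .Values.images.db_sync }}
          imagePullPolicy: {{ .Values.images.pull_policy }}
          command: ["keystone-manage", "db_sync"]
```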
-The openstack-helm project also implements annotations across all chart configmaps so that changing resources inside containers such as configuration files, triggers a Kubernetes rolling update so that those resources can be updated without deleting and redeploying the service and treated like any other upgrade such as a container image change. +The OpenStack-Helm project also implements annotations across all chart configmaps so that changing resources inside containers, such as configuration files, triggers a Kubernetes rolling update. This means that those resources can be updated without deleting and redeploying the service and can be treated like any other upgrade, such as a container image change. This is accomplished with the following annotation: @@ -171,7 +171,7 @@ This is accomplished with the following annotation: configmap-etc-hash: {{ tuple "configmap-etc.yaml" . | include "hash" }} ``` -The `hash` function defined in the `helm-toolkit` chart ensures that any change to any file referenced by configmap-bin.yaml or configmap-etc.yaml results in a new hash, which will trigger a rolling update. +The `hash` function defined in the `helm-toolkit` chart ensures that any change to any file referenced by configmap-bin.yaml or configmap-etc.yaml results in a new hash, which will then trigger a rolling update. All chart components (except `DaemonSets`) are outfitted by default with rolling update strategies: @@ -188,7 +188,7 @@ spec: {{ end }} ``` -In values.yaml in each chart, the same defaults are supplied in every chart, allowing the operator to override at upgrade or deployment time. +In values.yaml in each chart, the same defaults are supplied in every chart, which allows the operator to override at upgrade or deployment time. ``` upgrades: From b006bdf48edb7d70ddd99a092954c93b0fbf4dcc Mon Sep 17 00:00:00 2001 From: Steve Wilkerson Date: Wed, 8 Feb 2017 08:40:36 -0600 Subject: [PATCH 10/18] Fix path for Tiller instance in .travis.yml Overlooked an issue with the path for running Tiller locally. Should be $GOPATH instead of $HOME/gopath --- .travis.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.travis.yml b/.travis.yml index 62c1e39121..9484e3a88c 100644 --- a/.travis.yml +++ b/.travis.yml @@ -21,7 +21,7 @@ install: script: - cd $TRAVIS_BUILD_DIR - bash travis-ci/kubeadm_setup.sh - - $HOME/gopath/src/k8s.io/helm/bin/tiller & + - $GOPATH/src/k8s.io/helm/bin/tiller & - export HELM_HOST=localhost:44134 - helm init --client-only - helm version From 668a9970ea88f304fa844ab518df99f4055c4ddd Mon Sep 17 00:00:00 2001 From: Andre Pollard Date: Wed, 8 Feb 2017 10:20:12 -0500 Subject: [PATCH 11/18] Fix broken link in README for Values Philosophy --- docs/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/README.md b/docs/README.md index 7f504ab43c..bb185d7558 100644 --- a/docs/README.md +++ b/docs/README.md @@ -8,7 +8,7 @@ #####      1.1.1 [Resiliency](mission.md#resiliency) #####      1.1.2 [Scaling](mission.md#scaling) ###    1.2 Helm Overrides -#####      1.2.1 [Values Philosophy](helm_overrides#values) +#####      1.2.1 [Values Philosophy](helm_overrides.md#values) #####      1.2.1 [Replicas](helm_overrides.md#replicas) #####      1.2.1 [Labels](helm_overrides.md#labels) #####      1.2.1 [Images](helm_overrides.md#images) From a23c03b2eff76df132749ead1e78dcb9b607bada Mon Sep 17 00:00:00 2001 From: "Brandon B. 
Jozsa" Date: Wed, 8 Feb 2017 19:10:19 -0500 Subject: [PATCH 12/18] Update README.md --- README.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/README.md b/README.md index 6014cf596d..da84a258d6 100644 --- a/README.md +++ b/README.md @@ -1,3 +1,5 @@ +[![Travis CI](https://travis-ci.org/att-comdev/openstack-helm.svg?branch=master)](https://travis-ci.org/att-comdev/openstack-helm) + # Openstack-Helm **Join us on [Slack](http://slack.k8s.io/):** `#openstack-helm`
From dd4166d2680235228b5f4b0036f38b1af30965bf Mon Sep 17 00:00:00 2001 From: "Rachel M. Jozsa" Date: Wed, 8 Feb 2017 22:45:07 -0500 Subject: [PATCH 13/18] Cont. Grammar Edits MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @alanmeadows this is part 3 of 3 edits. I just need some feedback on a couple of things (I can make these for you, if it’s easier), and then these can be merged in. --- docs/helm_overrides.md | 30 +++++++++++++++--------------- 1 file changed, 15 insertions(+), 15 deletions(-) diff --git a/docs/helm_overrides.md b/docs/helm_overrides.md index cb4eda8d94..b3cfb38756 100644 --- a/docs/helm_overrides.md +++ b/docs/helm_overrides.md @@ -201,11 +201,11 @@ upgrades: ## Resource Limits -Resource limits should be defined for all charts within openstack-helm. +Resource limits should be defined for all charts within OpenStack-Helm. -The convention is to leverage a `resources:` section within values.yaml with an `enabled` setting that defaults to `false` but can be turned on by the operator at install or upgrade time. +The convention is to leverage a `resources:` section within values.yaml by using an `enabled` setting that defaults to `false` but can be turned on by the operator at install or upgrade time. -The resources specify the requests (memory and cpu) and limits (memory and cpu) for each deployed resource. For example, from the nova chart `values.yaml`: +The resources specify the requests (memory and cpu) and limits (memory and cpu) for each deployed resource. For example, from the Nova chart `values.yaml`: ``` resources: @@ -234,7 +234,7 @@ resources: ... ``` -These resources definitions are then applied to the appropriate component, when the `enabled` flag is set. For instance, the following nova_compute daemonset has the requests and limits values applied from `.Values.resources.nova_compute`: +These resource definitions are then applied to the appropriate component when the `enabled` flag is set. For instance, the following nova_compute daemonset has the requests and limits values applied from `.Values.resources.nova_compute`: ``` {{- if .Values.resources.enabled }} @@ -248,19 +248,19 @@ These resources definitions are then applied to the appropriate component, when {{- end }} ``` -When a chart developer doesn't know what resource limits or requests to apply to a new component, they can deploy them locally and examine utilization using tools like WeaveScope. The resource limits may not be perfect on initial submission but over time with community contributions they will be refined to reflect reality. +When a chart developer doesn't know what resource limits or requests to apply to a new component, they can deploy the component locally and examine resource utilization using tools like WeaveScope. The resource limits may not be perfect on initial submission, but over time and with community contributions, they can be refined to reflect reality. ## Endpoints NOTE: This feature is under active development. There may be dragons here. -Endpoints are a large part of what openstack-helm seeks to provide mechanisms around. OpenStack is a highly interconnected application, with various components requiring connectivity details to numerous services, including other OpenStack components, infrastructure elements such as databases, queues, and memcached infrastructure. 
We want to ensure we provide a consistent mechanism for defining these "endpoints" across all charts and the macros necessary to convert those definitions into usable endpoints. The charts should consistently default to building endpoints that assume the operator is leveraging all of our charts to build their OpenStack stack. However, we want to ensure that if operators want to run some of our charts and have those plug into their existing infrastructure, or run elements in different namespaces, for example, these endpoints should be overridable. +As a large part of the project's purpose, OpenStack-Helm seeks to provide mechanisms around endpoints. OpenStack is a highly interconnected application, with various components requiring connectivity details to numerous services, including other OpenStack components and infrastructure elements such as databases, queues, and memcached infrastructure. The project's goal is to ensure that it can provide a consistent mechanism for defining these "endpoints" across all charts and provide the macros necessary to convert those definitions into usable endpoints. The charts should consistently default to building endpoints that assume the operator is leveraging all charts to build their OpenStack cloud. Endpoints should be configurable if an operator would like a chart to work with their existing infrastructure or run elements in different namespaces. -For instance, in the neutron chart `values.yaml` the following endpoints are defined: +For instance, in the Neutron chart `values.yaml` the following endpoints are defined: ``` -# typically overriden by environmental +# typically overridden by environmental # values, but should include all endpoints # required by this chart endpoints: @@ -302,7 +302,7 @@ endpoints: api: 9696 ``` -These values define all the endpoints that the neutron chart may need in order to build full URL compatible endpoints to various services. Long-term these will also include database, memcached, and rabbitmq elements in one place-essentially all external connectivity needs defined centrally. +These values define all the endpoints that the Neutron chart may need in order to build full URL compatible endpoints to various services. Long-term, these will also include database, memcached, and rabbitmq elements in one place. Essentially, all external connectivity needs to be defined centrally. The macros that help translate these into the actual URLs necessary are defined in the `helm-toolkit` chart. For instance, the cinder chart defines a `glance_api_servers` definition in the `cinder.conf` template: @@ -310,17 +310,17 @@ The macros that help translate these into the actual URLs necessary are defined +glance_api_servers = {{ tuple "image" "internal" "api" . | include "helm-toolkit.endpoint_type_lookup_addr" }} ``` -This line of magic uses the `endpoint_type_lookup_addr` macro in the `helm-toolkit` chart (since it is used by all charts). Note a second convention here, all `{{ define }}` macros in charts should be pre-fixed with the chart that is defining them. This allows developers to easily identify the source of a helm macro and also avoid namespace collissions. In the example above, the macro `endpoint_type_look_addr` is defined in the `helm-toolkit` chart. This macro is passed three parameters (aided by the `tuple` method built into the go/sprig templating library used by Helm): +This line of magic uses the `endpoint_type_lookup_addr` macro in the `helm-toolkit` chart (since it is used by all charts). 
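For orientation, a heavily simplified sketch of what such a lookup macro could look like follows; this is illustrative only, the endpoint structure it assumes is hypothetical, and the real macro in `helm-toolkit` is more involved:

```
{{/* Illustrative sketch only - not the actual helm-toolkit implementation */}}
{{- define "helm-toolkit.endpoint_type_lookup_addr" -}}
{{- $service := index . 0 -}}
{{- $class := index . 1 -}}
{{- $port := index . 2 -}}
{{- $context := index . 3 -}}
{{- $ep := index $context.Values.endpoints $service -}}
{{- $host := index $ep.hosts $class | default $ep.hosts.default -}}
http://{{ $host }}:{{ index $ep.port $port }}
{{- end -}}
```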
Note that there is a second convention here. All `{{ define }}` macros in charts should be prefixed with the chart that is defining them. This allows developers to easily identify the source of a Helm macro and also avoid namespace collisions. In the example above, the macro `endpoint_type_lookup_addr` is defined in the `helm-toolkit` chart. This macro is passed three parameters (aided by the `tuple` method built into the go/sprig templating library used by Helm): -- image: This is the OpenStack service we are building an endpoint for. This will be mapped to `glance` which is the image service for OpenStack. - internal: This is the OpenStack endpoint type we are looking for - valid values would be `internal`, `admin`, and `public` -- api: This is the port to map to for the service. Some components such as glance provide an `api` port and a `registry` port, for example. +- image: This is the OpenStack service that the endpoint is being built for. This will be mapped to `glance` which is the image service for OpenStack. - internal: This is the OpenStack endpoint type we are looking for - valid values would be `internal`, `admin`, and `public` -- api: This is the port to map to for the service. Some components, such as glance, provide an `api` port and a `registry` port, for example. -Charts should avoid at all costs hard coding values such as ``http://keystone-ap` as these are not compatible with operator overrides or supporting spreading components out over various namespaces. +At all costs, charts should avoid hard coding values such as `http://keystone-api:5000` because these are not compatible with operator overrides and do not support spreading components out over various namespaces. ## Common Conditionals -The openstack-helm charts make the following conditions available across all charts which can be set at install or upgrade time with Helm: +The OpenStack-Helm charts make the following conditions available across all charts, which can be set at install or upgrade time with Helm: ### Developer Mode ``` helm install local/chart --set development.enabled=true ``` -The development mode flag should be available on all charts. Enabling this reduces dependencies that chart may have on persistent volume claims (which are difficult to support in a laptop minikube environment) as well as reducing replica counts or resiliency features to support a minimal environment. +The development mode flag should be available on all charts. Enabling this reduces dependencies that the chart may have on persistent volume claims (which are difficult to support in a laptop minikube environment) as well as reducing replica counts or resiliency features to support a minimal environment. The glance chart for instance defines the following `development:` overrides:
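The block below is a hedged reconstruction for illustration, following the `development:` convention used elsewhere in the project; consult the chart's actual values.yaml for the authoritative keys:

```
development:
  enabled: false
  storage_path: /data/openstack-helm/glance
```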
From 490c2a1df2755d050ab9546fef9ecce41a6beaa0 Mon Sep 17 00:00:00 2001 From: "Brandon B. Jozsa" Date: Thu, 9 Feb 2017 07:22:16 -0500 Subject: [PATCH 14/18] Update README.md --- docs/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/README.md b/docs/README.md index bb185d7558..e2ef11b78c 100644 --- a/docs/README.md +++ b/docs/README.md @@ -3,7 +3,7 @@ ## Table of Contents -##  1. [Openstack-Helm Design Principles] +##  1. Openstack-Helm Design Principles ###    1.1 [Mission](mission.md) #####      1.1.1 [Resiliency](mission.md#resiliency) #####      1.1.2 [Scaling](mission.md#scaling) From 3d5121726227b5e4bbe94b8c37c5d8a92cd38ca4 Mon Sep 17 00:00:00 2001 From: Greg Althaus Date: Fri, 10 Feb 2017 18:54:35 -0600 Subject: [PATCH 15/18] Add missing steps for bring-up ceph and nova-compute (#190) Add discovered manual steps for bringing up more pieces of the system. --- docs/installation/getting-started.md | 27 +++++++++++++++++++++++++-- 1 file changed, 25 insertions(+), 2 deletions(-) diff --git a/docs/installation/getting-started.md b/docs/installation/getting-started.md index a0d261720e..16c04a47f9 100644 --- a/docs/installation/getting-started.md +++ b/docs/installation/getting-started.md @@ -232,6 +232,13 @@ admin@kubenode01:~$ ./generate_secrets.sh all `./generate_secrets.sh fsid` admin@kubenode01:~$ cd ../../.. ``` +## Nova Compute Instance Storage +Nova Compute requires a place to store instances locally. Each node labeled `openstack-compute-node` needs to have the following directory: +``` +admin@kubenode01:~$ mkdir -p /var/lib/nova/instances +``` +*Repeat this step for each node labeled: `openstack-compute-node`* + ## Helm Preparation Now we need to install and prepare Helm, the core of our project. Please use the installation guide from the [Kubernetes/Helm](https://github.com/kubernetes/helm/blob/master/docs/install.md#from-the-binary-releases) repository. Please take note of our required versions above. @@ -326,7 +333,7 @@ The parameters is what we're looking for here. If we see parameters passed to th ### Ceph Validation Most commonly, we want to validate that Ceph is working correctly. This can be done with the following ceph command: ``` -admin@kubenode01:~$ kubectl exec -t -i ceph-mon-392438295-6q04c -n ceph -- ceph status +admin@kubenode01:~$ kubectl exec -t -i ceph-mon-0 -n ceph -- ceph status cluster 046de582-f8ee-4352-9ed4-19de673deba0 health HEALTH_OK monmap e3: 3 mons at {ceph-mon-392438295-6q04c=10.25.65.131:6789/0,ceph-mon-392438295-ksrb2=10.25.49.196:6789/0,ceph-mon-392438295-l0pzj=10.25.79.193:6789/0} ... 80 active+clean admin@kubenode01:~$ ``` -Use one of your Ceph Monitors to check the status of the cluster. A couple of things to note above; our health is 'HEALTH_OK', we have 3 mons, we've established a quorum, and we can see that our active mds is 'ceph-mds-2810413505-gtjgv'. We have a healthy environment, and are ready to install our next chart, MariaDB. +Use one of your Ceph Monitors to check the status of the cluster. A couple of things to note above; our health is 'HEALTH_OK', we have 3 mons, we've established a quorum, and we can see that our active mds is 'ceph-mds-2810413505-gtjgv'. We have a healthy environment. + +For Glance and Cinder to operate, you will need to create some storage pools for these systems. Additionally, Nova can be configured to use a pool as well, but this is off by default. + +``` +kubectl exec -n ceph -it ceph-mon-0 ceph osd pool create volumes 128 +kubectl exec -n ceph -it ceph-mon-0 ceph osd pool create images 128 +``` + +Nova storage would be added like this: +``` +kubectl exec -n ceph -it ceph-mon-0 ceph osd pool create vms 128 +``` + +Choosing the amount of storage is up to you and can be changed by replacing the 128 to meet your needs. + +We are now ready to install our next chart, MariaDB. 
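Before doing so, it can be worth a quick sanity check that the pools were created, using the same monitor pod:

```
admin@kubenode01:~$ kubectl exec -n ceph -it ceph-mon-0 -- ceph osd lspools
```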
## MariaDB Installation and Verification We are using Galera to cluster MariaDB and establish a quorum. To install the MariaDB, issue the following command: From 347fdc86b08e860973e918a9f3f797f394f45987 Mon Sep 17 00:00:00 2001 From: Larry Rensing Date: Fri, 10 Feb 2017 19:03:42 -0600 Subject: [PATCH 16/18] Adding configurable maas secret (#176) --- .../bin/_register-rack-controller.sh.tpl | 24 +++++++++++++++++++ maas/templates/configmap-bin.yaml | 4 +++- maas/templates/configmap-etc.yaml | 2 +- maas/templates/deploy-rack.yaml | 10 ++++++++ maas/templates/deploy-region.yaml | 8 +++---- maas/templates/etc/_secret.tpl | 2 +- maas/values.yaml | 4 +++- 7 files changed, 46 insertions(+), 8 deletions(-) create mode 100644 maas/templates/bin/_register-rack-controller.sh.tpl diff --git a/maas/templates/bin/_register-rack-controller.sh.tpl b/maas/templates/bin/_register-rack-controller.sh.tpl new file mode 100644 index 0000000000..3445cc35fe --- /dev/null +++ b/maas/templates/bin/_register-rack-controller.sh.tpl @@ -0,0 +1,24 @@ +#!/bin/bash + +set -ex + +# show env +env > /tmp/env + +echo "register-rack-controller URL: "{{ .Values.service_name }}.{{ .Release.Namespace }} + +# note the secret must be a valid hex value + +# register forever +while [ 1 ]; +do + if maas-rack register --url=http://{{ .Values.service_name }}.{{ .Release.Namespace }}/MAAS --secret={{ .Values.secret | quote }}; + then + echo "Successfully registered with MaaS Region Controller" + break + else + echo "Unable to register with http://{{ .Values.service_name }}.{{ .Release.Namespace }}/MAAS... will try again" + sleep 10 + fi; + +done; \ No newline at end of file diff --git a/maas/templates/configmap-bin.yaml b/maas/templates/configmap-bin.yaml index c7c2108d76..db17c81355 100644 --- a/maas/templates/configmap-bin.yaml +++ b/maas/templates/configmap-bin.yaml @@ -1,9 +1,11 @@ apiVersion: v1 kind: ConfigMap metadata: - name: maas-region-bin + name: maas-bin data: start.sh: | {{ tuple "bin/_start.sh.tpl" . | include "template" | indent 4 }} maas-region-controller.postinst: | {{ tuple "bin/_maas-region-controller.postinst.tpl" . | include "template" | indent 4 }} + register-rack-controller.sh: | +{{ tuple "bin/_register-rack-controller.sh.tpl" . | include "template" | indent 4 }} diff --git a/maas/templates/configmap-etc.yaml b/maas/templates/configmap-etc.yaml index ececffc02c..9437409738 100644 --- a/maas/templates/configmap-etc.yaml +++ b/maas/templates/configmap-etc.yaml @@ -1,7 +1,7 @@ apiVersion: v1 kind: ConfigMap metadata: - name: maas-region-etc + name: maas-etc data: named.conf.options: |+ {{ tuple "etc/_region-dns-config.tpl" . 
| include "template" | indent 4 }} diff --git a/maas/templates/deploy-rack.yaml b/maas/templates/deploy-rack.yaml index e2c6af2478..9feed98e71 100644 --- a/maas/templates/deploy-rack.yaml +++ b/maas/templates/deploy-rack.yaml @@ -11,6 +11,7 @@ spec: nodeSelector: {{ .Values.labels.node_selector_key }}: {{ .Values.labels.node_selector_value }} hostNetwork: true + dnsPolicy: ClusterFirst containers: - name: maas-rack image: {{ .Values.images.maas_rack }} @@ -26,3 +27,12 @@ spec: {{- end }} securityContext: privileged: true + volumeMounts: + - name: registerrackcontrollersh + mountPath: "/usr/local/bin/register-rack-controller.sh" + subPath: "register-rack-controller.sh" + volumes: + - name: registerrackcontrollersh + configMap: + name: maas-bin + defaultMode: 0511 diff --git a/maas/templates/deploy-region.yaml b/maas/templates/deploy-region.yaml index 18cf77578a..313d690e86 100644 --- a/maas/templates/deploy-region.yaml +++ b/maas/templates/deploy-region.yaml @@ -98,15 +98,15 @@ spec: emptyDir: {} - name: maas-region-secret configMap: - name: maas-region-etc + name: maas-etc - name: maas-config emptyDir: {} - name: maas-dns-config configMap: - name: maas-region-etc + name: maas-etc - name: startsh configMap: - name: maas-region-bin + name: maas-bin - name: maasregionpostinst configMap: - name: maas-region-bin + name: maas-bin diff --git a/maas/templates/etc/_secret.tpl b/maas/templates/etc/_secret.tpl index 48aad03a88..14c823bc4a 100644 --- a/maas/templates/etc/_secret.tpl +++ b/maas/templates/etc/_secret.tpl @@ -1 +1 @@ -3858f62230ac3c915f300c664312c63f +{{ .Values.secret }} diff --git a/maas/values.yaml b/maas/values.yaml index 421e66ca6d..eee2544b45 100644 --- a/maas/values.yaml +++ b/maas/values.yaml @@ -19,7 +19,9 @@ network: service_proxy: 8000 service_proxy_target: 8000 -service_name: maas-region-ui +secret: 3858f62230ac3c915f300c664312c63f + +service_name: maas-region-ui resources: enabled: false From 0c97c63cee8afa42168c24ed1358ac7ea96601a9 Mon Sep 17 00:00:00 2001 From: Larry Rensing Date: Fri, 10 Feb 2017 19:06:18 -0600 Subject: [PATCH 17/18] Initial postgresql chart (#154) --- Makefile | 8 +++-- postgresql/Chart.yaml | 3 ++ postgresql/README.md | 11 ++++++ postgresql/requirements.yaml | 4 +++ postgresql/templates/deployment.yaml | 54 ++++++++++++++++++++++++++++ postgresql/templates/service.yaml | 10 ++++++ postgresql/values.yaml | 29 +++++++++++++++ 7 files changed, 116 insertions(+), 3 deletions(-) create mode 100644 postgresql/Chart.yaml create mode 100644 postgresql/README.md create mode 100644 postgresql/requirements.yaml create mode 100644 postgresql/templates/deployment.yaml create mode 100644 postgresql/templates/service.yaml create mode 100644 postgresql/values.yaml diff --git a/Makefile b/Makefile index 4ca9917a35..86c6aa342a 100644 --- a/Makefile +++ b/Makefile @@ -1,12 +1,12 @@ -.PHONY: ceph bootstrap mariadb keystone memcached rabbitmq common openstack neutron nova cinder heat maas all clean +.PHONY: ceph bootstrap mariadb postgresql keystone memcached rabbitmq common openstack neutron nova cinder heat maas all clean B64_DIRS := common/secrets B64_EXCLUDE := $(wildcard common/secrets/*.b64) -CHARTS := ceph mariadb rabbitmq memcached keystone glance horizon neutron nova cinder heat maas openstack +CHARTS := ceph mariadb postgresql rabbitmq memcached keystone glance horizon neutron nova cinder heat maas openstack COMMON_TPL := common/templates/_globals.tpl -all: common ceph bootstrap mariadb rabbitmq memcached keystone glance horizon neutron nova cinder heat maas 
openstack +all: common ceph bootstrap mariadb postgresql rabbitmq memcached keystone glance horizon neutron nova cinder heat maas openstack common: build-common @@ -17,6 +17,8 @@ bootstrap: build-bootstrap mariadb: build-mariadb +postgresql: build-postgresql + keystone: build-keystone cinder: build-cinder diff --git a/postgresql/Chart.yaml b/postgresql/Chart.yaml new file mode 100644 index 0000000000..83840a3571 --- /dev/null +++ b/postgresql/Chart.yaml @@ -0,0 +1,3 @@ +description: A Helm chart for postgresql +name: postgresql +version: 0.1.0 diff --git a/postgresql/README.md b/postgresql/README.md new file mode 100644 index 0000000000..6857bd2e19 --- /dev/null +++ b/postgresql/README.md @@ -0,0 +1,11 @@ +# openstack-helm/postgresql + +This chart leverages StatefulSets, with persistent storage. + +The StatefulSets all leverage PVCs to provide stateful storage to /var/lib/postgresql. + +You must ensure that your control nodes that should receive postgresql instances are labeled with openstack-control-plane=enabled, or whatever you have configured in values.yaml for the label configuration: + +``` +kubectl label nodes openstack-control-plane=enabled --all +``` diff --git a/postgresql/requirements.yaml b/postgresql/requirements.yaml new file mode 100644 index 0000000000..2350b1facb --- /dev/null +++ b/postgresql/requirements.yaml @@ -0,0 +1,4 @@ +dependencies: + - name: common + repository: http://localhost:8879/charts + version: 0.1.0 diff --git a/postgresql/templates/deployment.yaml b/postgresql/templates/deployment.yaml new file mode 100644 index 0000000000..6593b14b9e --- /dev/null +++ b/postgresql/templates/deployment.yaml @@ -0,0 +1,54 @@ +apiVersion: apps/v1beta1 +kind: StatefulSet +metadata: + name: {{ .Values.service_name }} +spec: + serviceName: {{ .Values.service_name }} + replicas: {{ .Values.replicas }} + template: + metadata: + labels: + app: {{ .Values.service_name }} + spec: + nodeSelector: + {{ .Values.labels.node_selector_key }}: {{ .Values.labels.node_selector_value }} + containers: + - name: {{ .Values.service_name }} + image: {{ .Values.images.postgresql }} + imagePullPolicy: {{ .Values.images.pull_policy }} + ports: + - containerPort: {{ .Values.network.port.postgresql }} + livenessProbe: + exec: + command: + - pg_isready + initialDelaySeconds: 20 + timeoutSeconds: 5 + readinessProbe: + exec: + command: + - pg_isready + initialDelaySeconds: 20 + timeoutSeconds: 5 + resources: +{{ toYaml .Values.resources | indent 10 }} + volumeMounts: + - name: postgresql-data + mountPath: /var/lib/postgresql + volumes: +{{- if .Values.development.enabled }} + - name: postgresql-data + hostPath: + path: {{ .Values.development.storage_path }} +{{- else }} + volumeClaimTemplates: + - metadata: + name: postgresql-data + annotations: + {{ .Values.volume.class_path }}: {{ .Values.volume.class_name }} + spec: + accessModes: [ "ReadWriteOnce" ] + resources: + requests: + storage: {{ .Values.volume.size }} +{{- end }} diff --git a/postgresql/templates/service.yaml b/postgresql/templates/service.yaml new file mode 100644 index 0000000000..223b79338c --- /dev/null +++ b/postgresql/templates/service.yaml @@ -0,0 +1,10 @@ +apiVersion: v1 +kind: Service +metadata: + name: {{ .Values.service_name }} +spec: + ports: + - name: db + port: {{ .Values.network.port.postgresql }} + selector: + app: {{ .Values.service_name }} diff --git a/postgresql/values.yaml b/postgresql/values.yaml new file mode 100644 index 0000000000..d8a340c52c --- /dev/null +++ b/postgresql/values.yaml @@ -0,0 +1,29 @@ +# Default 
From e960756268bdc4205f14390037c0f57ec3aa50b0 Mon Sep 17 00:00:00 2001
From: "Rachel M. Jozsa"
Date: Sun, 12 Feb 2017 10:51:08 -0500
Subject: [PATCH 18/18] Final Edits with Feedback incorporated

Added @alanmeadows comments/feedback. No further changes.
---
 docs/helm_overrides.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/helm_overrides.md b/docs/helm_overrides.md
index b3cfb38756..1e8eb8b92b 100644
--- a/docs/helm_overrides.md
+++ b/docs/helm_overrides.md
@@ -41,7 +41,7 @@ This project uses `nodeSelectors` as well as `podAntiAffinity` rules to ensure r
 - openstack-compute-node: enabled
 - openvswitch: enabled
 
-NOTE: The `openvswitch` label is an element that is applicable to both `openstack-control-plane` as well as `openstack-compute-node` nodes. Ideally, you would eliminate the `openvswitch` label if you simply wanted to deploy openvswitch to an OR of (`openstack-control-plane` and `openstack-compute-node`). However, Kubernetes `nodeSelectors` prohibits this specific logic. As a result of this, a third label that spans all hosts is required, which in this case is `openvswitch`. The Open vSwitch service must run on both control plane and tenant nodes with both labels to provide connectivity for DHCP, L3, and Metadata services. These Open vSwitch services run as part of the control plane as well as tenant connectivity, which runs as part of the compute node infrastructure.
+NOTE: The `openvswitch` label applies to both `openstack-control-plane` and `openstack-compute-node` nodes. Ideally, you would eliminate the `openvswitch` label if you could simply do an OR of (`openstack-control-plane` and `openstack-compute-node`). However, Kubernetes `nodeSelectors` prohibit this specific logic, so a third label that spans all hosts is required, which in this case is `openvswitch`. The Open vSwitch service must run on nodes carrying either of the other two labels, because it provides connectivity both for the DHCP, L3, and Metadata services that run in the control plane and for tenant connectivity on the compute node infrastructure.
 
 Labels are of course definable and overridable by the chart operators. Labels are defined in charts by using a `labels:` section, which is a common convention that defines both a selector and a value:
@@ -156,7 +156,7 @@ The OpenStack-Helm project today uses a mix of Docker images from Stackanetes an
 
 ## Upgrades
 
-The OpenStack-Helm project assumes all upgrades will be done through Helm. This includes both template changes, including resources layered on top of the image such as configuration files as well as the Kubernetes resources themselves in addition to the more common practice of updating images.
+The OpenStack-Helm project assumes all upgrades will be done through Helm. This covers several different resource types. First, changes to the Helm chart templates themselves are handled. Second, all of the resources layered on top of the container image, such as `ConfigMaps`, which include both scripts and configuration files, are updated during an upgrade. Finally, any changed image references will result in rolling updates of containers, replacing them with the updated image.
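+
+For example, a rolling update driven purely by a changed image reference might look like the following. This is a sketch only: the release name, chart reference, and image reference are illustrative assumptions rather than values shipped with any chart.
+
+```
+$ helm upgrade keystone local/keystone --set images.db_sync=quay.io/stackanetes/stackanetes-keystone-api:newton
+```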
 
 As Helm stands today, several issues exist when you update images within charts that might have been used by jobs that already ran to completion or are still in flight. OpenStack-Helm developers will continue to work with the Helm community or develop charts that will support job removal prior to an upgrade, which will recreate services with updated images. An example of where this behavior would be desirable is when the db_sync image is updated to point from a Mitaka image to a Newton image. In this case, the operator will likely want the db_sync job, which already ran to completion during site installation, to run again with the updated image to bring the schema in line with the Newton release.
 
@@ -320,7 +320,7 @@ At all costs, charts should avoid hard coding values such as `http://keystone-ap
 
 ## Common Conditionals
 
-The OpenStack-Helm charts make the following conditions available across all charts, which can be set at install or upgrade time with Helm:
+The OpenStack-Helm charts make the following conditions available across all charts, which can be set at install or upgrade time with Helm, as shown below.
 
 ### Developer Mode
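+
+For example, developer mode can be toggled at install or upgrade time like any other value. A minimal sketch, with the chart name assumed and `development.enabled` following the same convention the postgresql chart above uses:
+
+```
+$ helm install local/keystone --set development.enabled=true
+```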