7189e6c916
Add options to nova to enable/disable the use of:

1. The VNC or SPICE server proxyclient address found by the console compute init container
2. The my_ip hypervisor address found by the compute init container
3. The libvirt live_migration_inbound_addr used by nova-compute to live-migrate instances

These options can be used to prevent cases where the discovered addresses overwrite what has already been defined in nova.conf by per-host nova-compute DaemonSet overrides. It is important to allow the flexibility of using (or not) the default ConfigMap/DaemonSet cluster-level configuration, making it possible to have custom per-host override definitions that will not be overwritten by nova-compute-init.sh.

One use case (live migration) for this flexibility is the following. Originally, the nova-compute-init.sh script gained the capability of selecting a target interface (by name, at the ConfigMap level) through which live-migration traffic should be handled [1], allowing a separate network to be selected for live-migration traffic. This did not assume any interface/network IP if users did not set .Values.conf.libvirt.live_migration_interface. Later [2], the same script was updated to fall back to default-gateway IP resolution when live_migration_interface is not defined.

So, currently it is mandatory either to define a cluster-level configuration for the interface name (i.e., through the ConfigMap) or to rely on default-gateway IP resolution for live-migration addresses. This can be problematic for use cases where:

* There are many networks defined for the cluster, and a host's default gateway might not resolve to the desired network IP;
* There is a need for a per-host definition of nova.conf, since nova-compute-init.sh will create a new .conf that overwrites it.

[1] commit
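A per-host values override combining such toggles with a pre-defined address might look like the following sketch. The key names shown (address_search_enabled) are illustrative assumptions, not confirmed by this commit message; consult the chart's values.yaml for the actual option names introduced.

```yaml
# Hypothetical per-host override for the nova chart.
# Assumption: the new toggles are boolean keys that disable the
# address discovery performed by nova-compute-init.sh, so the
# addresses below are left untouched in the generated nova.conf.
conf:
  nova:
    DEFAULT:
      # Per-host hypervisor address that must not be overwritten
      my_ip: 10.0.100.12
    libvirt:
      # Per-host live-migration address on a dedicated network
      live_migration_inbound_addr: 192.168.50.12
  hypervisor:
    # Illustrative toggle: skip my_ip discovery by the init container
    address_search_enabled: false
  libvirt:
    # Illustrative toggle: skip live_migration_inbound_addr discovery
    address_search_enabled: false
console:
  # Illustrative toggle: skip VNC/SPICE proxyclient address discovery
  address_search_enabled: false
```

With discovery disabled, the values above would pass through to the rendered nova.conf for that host instead of being replaced by the gateway-resolved IP.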
OpenStack-Helm
Mission
The goal of OpenStack-Helm is to provide a collection of Helm charts that simply, resiliently, and flexibly deploy OpenStack and related services on Kubernetes.
Communication
- Join us on IRC: #openstack-helm on OFTC
- Community IRC meetings: every Tuesday @ 1500 UTC in #openstack-helm (OFTC)
- Meeting agenda items are tracked in the meeting Agenda
- Join us on Slack: #openstack-helm
Storyboard
Bugs and enhancements are tracked via OpenStack-Helm's Storyboard.
Installation and Development
Please review our documentation. For quick installation, evaluation, and convenience, we have a minikube-based all-in-one solution that runs in a Docker container. The setup can be found here.
This project is under active development. We encourage anyone interested in OpenStack-Helm to review our Installation documentation. Feel free to ask questions or check out our current Storyboard backlog.
To evaluate a multinode installation, follow the Bare Metal install guide.
Repository
Developers wishing to work on the OpenStack-Helm project should always base their work on the latest code, available from the OpenStack-Helm git repository.
Contributing
We welcome contributions. Check out this document if you would like to get involved.