
The [pci] header is necessary since pike [0]. This change updates the
pike template to add the [pci] header, rename pci_passthrough_whitelist
to passthrough_whitelist, and add the config option pci-alias.

[0] https://docs.openstack.org/nova/pike/admin/pci-passthrough.html

Change-Id: I7a8c76f5989edb5b4a0b30036ce722ffb0ecb7ab
options:
  debug:
    type: boolean
    default: False
    description: Enable debug logging.
  verbose:
    type: boolean
    default: False
    description: Enable verbose logging.
  use-syslog:
    type: boolean
    default: False
    description: |
      Setting this to True will allow supporting services to log to syslog.
  openstack-origin:
    type: string
    default: distro
    description: |
      Repository from which to install. May be one of the following:
      distro (default), ppa:somecustom/ppa, a deb url sources entry or a
      supported Ubuntu Cloud Archive (UCA) release pocket.
      .
      Supported UCA sources include:
      .
      cloud:<series>-<openstack-release>
      cloud:<series>-<openstack-release>/updates
      cloud:<series>-<openstack-release>/staging
      cloud:<series>-<openstack-release>/proposed
      .
      For series=Precise we support UCA for openstack-release=
        * icehouse
      .
      For series=Trusty we support UCA for openstack-release=
        * juno
        * kilo
        * ...
      .
      NOTE: updating this setting to a source that is known to provide
      a later version of OpenStack will trigger a software upgrade.
      .
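  # A usage sketch only; the application name 'nova-compute' and the
  # xenial-pike pocket are illustrative, not defaults:
  #   juju config nova-compute openstack-origin=cloud:xenial-pike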
  action-managed-upgrade:
    type: boolean
    default: False
    description: |
      If True enables OpenStack upgrades for this charm via Juju actions.
      You will still need to set openstack-origin to the new repository but
      instead of an upgrade running automatically across all units, it will
      wait for you to execute the openstack-upgrade action for this charm on
      each unit. If False it will revert to existing behavior of upgrading
      all units on config change.
  harden:
    type: string
    default:
    description: |
      Apply system hardening. Supports a space-delimited list of modules
      to run. Supported modules currently include os, ssh, apache and mysql.
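  # Illustrative only, using two of the module names listed above:
  #   juju config nova-compute harden='os ssh'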
  nova-config:
    type: string
    default: /etc/nova/nova.conf
    description: Full path to Nova configuration file.
  rabbit-user:
    type: string
    default: nova
    description: Username used to access the RabbitMQ queue.
  rabbit-vhost:
    type: string
    default: openstack
    description: RabbitMQ vhost.
  virt-type:
    type: string
    default: kvm
    description: |
      Virtualisation flavor. Supported and tested flavors are: kvm, lxd.
      .
      If using the lxd flavor, the lxd subordinate charm must also be related.
      .
      Other native libvirt flavors available to exercise (dev/test only):
      uml, lxc, qemu.
      .
      NOTE: Changing virtualisation flavor after deployment is not supported.
  disk-cachemodes:
    type: string
    default:
    description: |
      Specific cachemodes to use for different disk types, e.g.:
      file=directsync,block=none
  enable-resize:
    type: boolean
    default: False
    description: |
      Enable instance resizing.
      .
      NOTE: This also enables passwordless SSH access for user 'nova' between
      compute hosts.
  enable-live-migration:
    type: boolean
    default: False
    description: |
      Configure libvirt or lxd for live migration.
      .
      Live migration support for lxd is still considered experimental.
      .
      NOTE: This also enables passwordless SSH access for user 'root' between
      compute hosts.
  migration-auth-type:
    type: string
    default: ssh
    description: |
      TCP authentication scheme for libvirt live migration. Available options
      include ssh.
  authorized-keys-path:
    type: string
    default: '{homedir}/.ssh/authorized_keys'
    description: |
      Only used when migration-auth-type is set to ssh.
      .
      Full path to the authorized_keys file; can be useful for systems with
      a non-default AuthorizedKeysFile location. It will be formatted using
      the following variables:
      .
      homedir - user's home directory
      username - username
      .
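  # A sketch of a non-default value built from the variables above (the
  # path shown is hypothetical):
  #   juju config nova-compute \
  #     authorized-keys-path='/etc/ssh/authorized-keys/{username}'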
  instances-path:
    type: string
    default:
    description: |
      Path used for storing Nova instances data - empty means default of
      /var/lib/nova/instances.
  config-flags:
    type: string
    default:
    description: |
      Comma-separated list of key=value config flags. These values will be
      placed in the nova.conf [DEFAULT] section.
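  # An illustrative sketch; the flag names are examples of nova.conf
  # [DEFAULT] options, not recommendations:
  #   juju config nova-compute \
  #     config-flags='cpu_allocation_ratio=4.0,ram_allocation_ratio=0.98'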
  database-user:
    type: string
    default: nova
    description: Username for database access.
  database:
    type: string
    default: nova
    description: Nova database name.
  multi-host:
    type: string
    default: 'yes'
    description: Whether to run nova-api and nova-network on the compute nodes.
  pci-passthrough-whitelist:
    type: string
    default:
    description: |
      Sets the pci_passthrough_whitelist option in nova.conf (renamed to
      passthrough_whitelist under the [pci] section from Pike onwards) which
      allows PCI passthrough of specific devices to VMs.
      .
      Example applications: GPU processing, SR-IOV networking, etc.
      .
      NOTE: For PCI passthrough to work IOMMU must be enabled on the machine
      deployed to. This can be accomplished by setting kernel parameters on
      capable machines in MAAS, tagging them and using these tags as
      constraints in the model.
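  # A sketch using the JSON form accepted by nova; the vendor/product IDs
  # are illustrative:
  #   juju config nova-compute \
  #     pci-passthrough-whitelist='{"vendor_id":"8086","product_id":"10ca"}'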
  pci-alias:
    type: string
    default:
    description: |
      The pci-passthrough-whitelist option of the nova-compute charm is used
      to specify which PCI devices are allowed passthrough. pci-alias is more
      of a convenience that can be used in conjunction with Nova flavor
      properties to automatically assign required PCI devices to new
      instances. You could, for example, have a GPU flavor or a SR-IOV
      flavor:
      .
      pci-alias='{"vendor_id":"8086","product_id":"10ca","name":"a1"}'
      .
      This configures a new PCI alias 'a1' which will request a PCI device
      with a vendor ID of 0x8086 and a product ID of 0x10ca.
      .
      For more information about the syntax of pci_alias, refer to
      https://docs.openstack.org/ocata/config-reference/compute/config-options.html
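  # A sketch of pairing the alias above with a flavor so instances built
  # from it request one such device (the flavor name is hypothetical):
  #   openstack flavor set gpu-flavor \
  #     --property "pci_passthrough:alias"="a1:1"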
  reserved-host-memory:
    type: int
    default: 512
    description: |
      Amount of memory in MB to reserve for the host. Defaults to 512MB.
  vcpu-pin-set:
    type: string
    default:
    description: |
      Sets the vcpu_pin_set option in nova.conf which defines which pCPUs
      instance vCPUs can or cannot use. For example, '^0,^2' reserves two
      CPUs for the host.
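  # Illustrative, matching the example in the description:
  #   juju config nova-compute vcpu-pin-set='^0,^2'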
  worker-multiplier:
    type: float
    default:
    description: |
      The CPU core multiplier to use when configuring worker processes for
      this service, e.g. metadata-api. By default, the number of workers for
      each daemon is set to twice the number of CPU cores a service unit has.
      When deployed in a LXD container, this default value will be capped to 4
      workers unless this configuration option is set.
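  # Illustrative: one worker per four cores (0.25 is an arbitrary example):
  #   juju config nova-compute worker-multiplier=0.25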
  # Required if using FlatManager (nova-network)
  bridge-interface:
    type: string
    default: br100
    description: Bridge interface to be configured.
  bridge-ip:
    type: string
    default: 11.0.0.1
    description: IP to be assigned to bridge interface.
  bridge-netmask:
    type: string
    default: 255.255.255.0
    description: Netmask to be assigned to bridge interface.
  # Required if using FlatDHCPManager (nova-network)
  flat-interface:
    type: string
    default: eth1
    description: Network interface on which to build bridge.
  # Network config (by default all access is over 'private-address')
  os-internal-network:
    type: string
    default:
    description: |
      The IP address and netmask of the OpenStack Internal network (e.g.
      192.168.0.0/24)
      .
      This network will be used to bind the vncproxy client.
  use-internal-endpoints:
    type: boolean
    default: False
    description: |
      OpenStack mostly defaults to using public endpoints for internal
      communication between services. If set to True this option will
      configure services to use internal endpoints where possible.
  prefer-ipv6:
    type: boolean
    default: False
    description: |
      If True enables IPv6 support. The charm will expect network interfaces
      to be configured with an IPv6 address. If set to False (default) IPv4
      is expected.
      .
      NOTE: these charms do not currently support IPv6 privacy extension. In
      order for this charm to function correctly, the privacy extension must
      be disabled and a non-temporary address must be configured/available
      on your network interface.
  cpu-mode:
    type: string
    default:
    description: |
      Set to 'host-model' to clone the host CPU feature flags; to
      'host-passthrough' to use the host CPU model exactly; to 'custom' to
      use a named CPU model; to 'none' to not set any CPU model. If
      virt_type='kvm|qemu', it will default to 'host-model', otherwise it
      will default to 'none'. Defaults to 'host-passthrough' for ppc64el,
      ppc64le if no value is set.
  cpu-model:
    type: string
    default:
    description: |
      Set to a named libvirt CPU model (see names listed in
      /usr/share/libvirt/cpu_map.xml). Only has effect if cpu_mode='custom'
      and virt_type='kvm|qemu'.
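  # A sketch of the custom-model pairing described above; 'SandyBridge' is
  # one of the model names listed in /usr/share/libvirt/cpu_map.xml:
  #   juju config nova-compute cpu-mode='custom' cpu-model='SandyBridge'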
  # Storage config
  libvirt-image-backend:
    type: string
    default:
    description: |
      Tell Nova which libvirt image backend to use. Supported backends are
      rbd and qcow2. If no backend is specified, the Nova default (qcow2) is
      used. Note that the rbd image backend is only supported with >= Juno.
  force-raw-images:
    type: boolean
    default: True
    description: |
      Force conversion of backing images to raw format. Note that the
      conversion process in Pike uses O_DIRECT calls - certain file systems
      do not support this, for example ZFS; e.g. if using the LXD provider
      with ZFS backend, this option should be set to False.
  rbd-pool:
    type: string
    default: 'nova'
    description: |
      RBD pool to use with Nova libvirt RBDImageBackend. Only required when
      you have libvirt-image-backend set to 'rbd'.
  rbd-client-cache:
    type: string
    default:
    description: |
      Enable/disable rbd client cache. Leaving this value unset will result
      in default Ceph rbd client settings being used (rbd cache is enabled
      by default for Ceph >= Giant). Supported values here are "enabled" or
      "disabled".
  ceph-osd-replication-count:
    type: int
    default: 3
    description: |
      This value dictates the number of replicas ceph must make of any
      object it stores within the nova rbd pool. Of course, this only
      applies if using Ceph as a backend store. Note that once the nova
      rbd pool has been created, changing this value will not have any
      effect (although it can be changed in ceph by manually configuring
      your ceph cluster).
  ceph-pool-weight:
    type: int
    default: 30
    description: |
      Defines a relative weighting of the pool as a percentage of the total
      amount of data in the Ceph cluster. This effectively weights the
      number of placement groups for the pool created to be appropriately
      portioned to the amount of data expected. For example, if the
      ephemeral volumes for the OpenStack compute instances are expected to
      take up 20% of the overall configuration then this value would be
      specified as 20. Note - it is important to choose an appropriate value
      for the pool weight as this directly affects the number of placement
      groups which will be created for the pool. The number of placement
      groups for a pool can only be increased, never decreased - so it is
      important to identify the percent of data that will likely reside in
      the pool.
  restrict-ceph-pools:
    type: boolean
    default: False
    description: |
      Optionally restrict Ceph key permissions to access pools as required.
  # Other config
  sysctl:
    type: string
    default:
    description: |
      YAML formatted associative array of sysctl values, e.g.:
      '{ kernel.pid_max : 4194303 }'
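  # Illustrative, reusing the example value from the description (note the
  # quoting needed to pass the YAML string through the shell):
  #   juju config nova-compute sysctl="{ kernel.pid_max : 4194303 }"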
  # Huge page configuration - off by default
  hugepages:
    type: string
    default:
    description: |
      The percentage of system memory to use for hugepages, e.g. '10%', or
      the total number of 2M hugepages, e.g. '1024'.
      For a systemd system (wily and later) the preferred approach is to
      enable hugepages via kernel parameters set in MAAS and systemd will
      mount them automatically.
      .
      NOTE: For hugepages to work it must be enabled on the machine deployed
      to. This can be accomplished by setting kernel parameters on capable
      machines in MAAS, tagging them and using these tags as constraints in
      the model.
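  # Illustrative values in both documented forms:
  #   juju config nova-compute hugepages='10%'
  #   juju config nova-compute hugepages='1024'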
  ksm:
    type: string
    default: "AUTO"
    description: |
      Set to 1 to enable KSM, 0 to disable KSM, and AUTO to use default
      settings.
      .
      Please note that the AUTO value works for qemu 2.2+ (> Kilo); older
      releases default to 1.
  aa-profile-mode:
    type: string
    default: 'disable'
    description: |
      Experimental: enable AppArmor profile. Valid settings: 'complain',
      'enforce' or 'disable'.
      .
      AA disabled by default.
  default-availability-zone:
    type: string
    default: 'nova'
    description: |
      Default compute node availability zone.
      .
      This option determines the availability zone to be used when it is not
      specified in the VM creation request. If this option is not set, the
      default availability zone 'nova' is used.
      .
      NOTE: Availability zones must be created manually using the
      'openstack aggregate create' command.
      .
  resume-guests-state-on-host-boot:
    type: boolean
    default: False
    description: |
      This option determines whether to start guests that were running
      before the host rebooted.
  # Monitoring options
  nagios_context:
    type: string
    default: 'juju'
    description: |
      Used by the nrpe-external-master subordinate charm. A string that will
      be prepended to the instance name to set the host name in nagios. So,
      for instance, the hostname would be something like:
      .
      juju-myservice-0
      .
      If you're running multiple environments with the same services in them
      this allows you to differentiate between them.
  nagios_servicegroups:
    type: string
    default:
    description: |
      A comma-separated list of nagios servicegroups. If left empty, the
      nagios_context will be used as the servicegroup.
  use-multipath:
    type: boolean
    default: False
    description: |
      Use a multipath connection for iSCSI volumes. Enabling this feature
      causes libvirt to discover and log in to available iSCSI targets
      before presenting the disk via device mapper (/dev/mapper/XX) to the
      VM instead of a single path (/dev/disk/by-path/XX). If changed after
      deployment, each VM will require a full stop/start for changes to
      take effect.
  ephemeral-device:
    type: string
    default:
    description: |
      Block devices to use for storage of ephemeral disks to support nova
      instances; generally used in conjunction with 'encrypt' to support
      data-at-rest encryption of instance direct attached storage volumes.
  encrypt:
    type: boolean
    default: False
    description: |
      Encrypt block devices used for nova instances using dm-crypt, making
      use of vault for encryption key management; requires a relation to
      vault.
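  # A sketch combining the two options above; /dev/vdb is a hypothetical
  # device path:
  #   juju config nova-compute ephemeral-device='/dev/vdb' encrypt=true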
  ephemeral-unmount:
    type: string
    default:
    description: |
      Cloud instances provide ephemeral storage which is normally mounted
      on /mnt.
      .
      Setting this option to the path of the ephemeral mountpoint will force
      an unmount of the corresponding device so that it can be used as the
      backing store for local instances. This is useful for testing purposes
      (cloud deployment is not a typical use case).