Add support for dynamic tags
This allows users to create tags (or properties, in the case of
OpenStack) on instances using string interpolation values.  The use
case is to be able to add information about the tenant* which
requested the instance to cloud-provider tags.

* Note that ultimately Nodepool may not end up using a given node for
the request which originally prompted its creation, so care should be
taken when using information like this.  The documentation notes this.

This feature uses a new configuration attribute on the provider-label
rather than the existing "tags" or "instance-properties" because
existing values may not be safe for use as Python format strings
(e.g., an existing value might be a JSON blob).  This could be solved
with YAML tags (like !unsafe), but the most sensible default for that
would be to assume format strings and use a YAML tag to disable
formatting, which doesn't help with our backwards-compatibility
problem.  Additionally, Nodepool configuration does not use YAML
anchors (yet), so this would be a significant change that might
affect people's use of external tools on the config file.

Testing this was beyond the ability of the AWS test framework as
written, so some redesign of how we handle patching boto-related
methods is included.  The new approach is simpler, more readable, and
flexible in that it can better accommodate future changes.

Change-Id: I5f1befa6e2f2625431523d8d94685f79426b6ae5
parent 916d62a374
commit 6320b06950
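The heart of the change described above is a single str.format() call
per configured tag.  A minimal standalone sketch (not part of the diff
below) of how a dynamic-tags value is expanded; the Attributes helper
mirrors the one added to nodepool/nodeutils.py in this commit, and the
request values here are made up for illustration:

    class Attributes(object):
        """A class to hold attributes for string formatting."""

        def __init__(self, **kw):
            setattr(self, '__dict__', kw)

    # Hypothetical safe attributes for a request:
    request = Attributes(id='100-0000000001', tenant_name='tenant-1')

    dynamic_tags = {'dynamic-tenant': 'Tenant is {request.tenant_name}'}
    tags = {}
    for k, v in dynamic_tags.items():
        try:
            # str.format() only sees the whitelisted request attributes.
            tags[k] = v.format(request=request)
        except Exception:
            pass  # the real code logs and skips the malformed value

    print(tags)  # {'dynamic-tenant': 'Tenant is tenant-1'}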
@@ -627,6 +627,46 @@ Selecting the ``aws`` driver adds the following options to the
    A dictionary of tags to add to the EC2 instances.
    Values must be supplied as strings.

+.. attr:: dynamic-tags
+   :type: dict
+   :default: None
+
+   Similar to
+   :attr:`providers.[aws].pools.labels.tags`,
+   but is interpreted as a format string with the following
+   values available:
+
+   * request: Information about the request which prompted the
+     creation of this node (note that the node may ultimately
+     be used for a different request and in that case this
+     information will not be updated).
+
+     * id: The request ID.
+
+     * labels: The list of labels in the request.
+
+     * requestor: The name of the requestor.
+
+     * requestor_data: Key/value information from the requestor.
+
+     * relative_priority: The relative priority of the request.
+
+     * event_id: The external event ID of the request.
+
+     * created_time: The creation time of the request.
+
+     * tenant_name: The name of the tenant associated with the
+       request.
+
+   For example:
+
+   .. code-block:: yaml
+
+      labels:
+        - name: precise
+          dynamic-tags:
+            request_info: "Created for request {request.id}"
+
 .. _`EBS volume type`: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html
 .. _`AWS region`: https://docs.aws.amazon.com/general/latest/gr/rande.html
 .. _`Boto configuration`: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html
@@ -623,6 +623,46 @@ section of the configuration.

    A dictionary of tags to add to newly created VMs.

+.. attr:: dynamic-tags
+   :type: dict
+   :default: None
+
+   Similar to
+   :attr:`providers.[azure].pools.labels.tags`,
+   but is interpreted as a format string with the following
+   values available:
+
+   * request: Information about the request which prompted the
+     creation of this node (note that the node may ultimately
+     be used for a different request and in that case this
+     information will not be updated).
+
+     * id: The request ID.
+
+     * labels: The list of labels in the request.
+
+     * requestor: The name of the requestor.
+
+     * requestor_data: Key/value information from the requestor.
+
+     * relative_priority: The relative priority of the request.
+
+     * event_id: The external event ID of the request.
+
+     * created_time: The creation time of the request.
+
+     * tenant_name: The name of the tenant associated with the
+       request.
+
+   For example:
+
+   .. code-block:: yaml
+
+      labels:
+        - name: precise
+          dynamic-tags:
+            request_info: "Created for request {request.id}"
+
 .. attr:: user-data
    :type: str
    :default: None
@@ -657,6 +657,46 @@ Selecting the OpenStack driver adds the following options to the
    ``meta-data`` on the active server (e.g. within
    ``config-drive:openstack/latest/meta_data.json``)

+.. attr:: dynamic-instance-properties
+   :type: dict
+   :default: None
+
+   Similar to
+   :attr:`providers.[openstack].pools.labels.instance-properties`,
+   but is interpreted as a format string with the following
+   values available:
+
+   * request: Information about the request which prompted the
+     creation of this node (note that the node may ultimately
+     be used for a different request and in that case this
+     information will not be updated).
+
+     * id: The request ID.
+
+     * labels: The list of labels in the request.
+
+     * requestor: The name of the requestor.
+
+     * requestor_data: Key/value information from the requestor.
+
+     * relative_priority: The relative priority of the request.
+
+     * event_id: The external event ID of the request.
+
+     * created_time: The creation time of the request.
+
+     * tenant_name: The name of the tenant associated with the
+       request.
+
+   For example:
+
+   .. code-block:: yaml
+
+      labels:
+        - name: precise
+          dynamic-instance-properties:
+            request_info: "Created for request {request.id}"
+
 .. attr:: userdata
    :type: str
    :default: None
@@ -16,13 +16,15 @@
 # limitations under the License.

 from nodepool.driver.statemachine import StateMachineDriver
-from nodepool.driver.aws.config import AwsProviderConfig
-from nodepool.driver.aws.adapter import AwsAdapter
+# Import the modules rather than the class so that the unit tests can
+# override the classes to add some test-specific methods/data.
+import nodepool.driver.aws.config as driver_config
+import nodepool.driver.aws.adapter as driver_adapter


 class AwsDriver(StateMachineDriver):
     def getProviderConfig(self, provider):
-        return AwsProviderConfig(self, provider)
+        return driver_config.AwsProviderConfig(self, provider)

     def getAdapter(self, provider_config):
-        return AwsAdapter(provider_config)
+        return driver_adapter.AwsAdapter(provider_config)
@@ -77,6 +77,8 @@ QUOTA_CODES = {
     'x': 'L-7295265B',
 }

+CACHE_TTL = 10
+

 class AwsInstance(statemachine.Instance):
     def __init__(self, instance, quota):
@@ -140,7 +142,7 @@ class AwsCreateStateMachine(statemachine.StateMachine):
     COMPLETE = 'complete'

     def __init__(self, adapter, hostname, label, image_external_id,
-                 metadata, retries, log):
+                 metadata, retries, request, log):
         self.log = log
         super().__init__()
         self.adapter = adapter
@@ -149,6 +151,11 @@ class AwsCreateStateMachine(statemachine.StateMachine):
         self.image_external_id = image_external_id
         self.metadata = metadata
         self.tags = label.tags.copy() or {}
+        for k, v in label.dynamic_tags.items():
+            try:
+                self.tags[k] = v.format(request=request.getSafeAttributes())
+            except Exception:
+                self.log.exception("Error formatting tag %s", k)
         self.tags.update(metadata)
         self.tags['Name'] = hostname
         self.hostname = hostname
@@ -269,10 +276,10 @@ class AwsAdapter(statemachine.Adapter):
         self.create_executor.shutdown()
         self._running = False

-    def getCreateStateMachine(self, hostname, label,
-                              image_external_id, metadata, retries, log):
-        return AwsCreateStateMachine(self, hostname, label,
-                                     image_external_id, metadata, retries, log)
+    def getCreateStateMachine(self, hostname, label, image_external_id,
+                              metadata, retries, request, log):
+        return AwsCreateStateMachine(self, hostname, label, image_external_id,
+                                     metadata, retries, request, log)

     def getDeleteStateMachine(self, external_id, log):
         return AwsDeleteStateMachine(self, external_id, log)
@@ -387,7 +394,7 @@ class AwsAdapter(statemachine.Adapter):
         # Import snapshot
         self.log.debug(f"Importing {image_name}")
         with self.rate_limiter:
-            import_snapshot_task = self._import_snapshot(
+            import_snapshot_task = self.ec2_client.import_snapshot(
                 DiskContainer={
                     'Format': image_format,
                     'UserBucket': {
@@ -404,7 +411,8 @@ class AwsAdapter(statemachine.Adapter):
             )
         task_id = import_snapshot_task['ImportTaskId']

-        paginator = self._get_paginator('describe_import_snapshot_tasks')
+        paginator = self.ec2_client.get_paginator(
+            'describe_import_snapshot_tasks')
         done = False
         while not done:
             time.sleep(self.IMAGE_UPLOAD_SLEEP)
@@ -586,7 +594,8 @@ class AwsAdapter(statemachine.Adapter):
                 self.not_our_snapshots.add(snap.id)

     def _listImportSnapshotTasks(self):
-        paginator = self._get_paginator('describe_import_snapshot_tasks')
+        paginator = self.ec2_client.get_paginator(
+            'describe_import_snapshot_tasks')
         with self.non_mutating_rate_limiter:
             for page in paginator.paginate():
                 for task in page['ImportSnapshotTasks']:
@@ -639,30 +648,30 @@ class AwsAdapter(statemachine.Adapter):
                 return instance
         return None

-    @cachetools.func.ttl_cache(maxsize=1, ttl=10)
+    @cachetools.func.ttl_cache(maxsize=1, ttl=CACHE_TTL)
     def _listInstances(self):
         with self.non_mutating_rate_limiter(
                 self.log.debug, "Listed instances"):
             return list(self.ec2.instances.all())

-    @cachetools.func.ttl_cache(maxsize=1, ttl=10)
+    @cachetools.func.ttl_cache(maxsize=1, ttl=CACHE_TTL)
     def _listVolumes(self):
         with self.non_mutating_rate_limiter:
             return list(self.ec2.volumes.all())

-    @cachetools.func.ttl_cache(maxsize=1, ttl=10)
+    @cachetools.func.ttl_cache(maxsize=1, ttl=CACHE_TTL)
     def _listAmis(self):
         # Note: this is overridden in tests due to the filter
         with self.non_mutating_rate_limiter:
             return list(self.ec2.images.filter(Owners=['self']))

-    @cachetools.func.ttl_cache(maxsize=1, ttl=10)
+    @cachetools.func.ttl_cache(maxsize=1, ttl=CACHE_TTL)
     def _listSnapshots(self):
         # Note: this is overridden in tests due to the filter
         with self.non_mutating_rate_limiter:
             return list(self.ec2.snapshots.filter(OwnerIds=['self']))

-    @cachetools.func.ttl_cache(maxsize=1, ttl=10)
+    @cachetools.func.ttl_cache(maxsize=1, ttl=CACHE_TTL)
     def _listObjects(self):
         bucket_name = self.provider.object_storage.get('bucket-name')
         if not bucket_name:
@@ -896,12 +905,3 @@ class AwsAdapter(statemachine.Adapter):
         with self.rate_limiter:
             self.log.debug(f"Deleting object {external_id}")
             self.s3.Object(bucket_name, external_id).delete()
-
-    # These methods allow the tests to patch our use of boto to
-    # compensate for missing methods in the boto mocks.
-    def _import_snapshot(self, *args, **kw):
-        return self.ec2_client.import_snapshot(*args, **kw)
-
-    def _get_paginator(self, *args, **kw):
-        return self.ec2_client.get_paginator(*args, **kw)
-    # End test methods
@@ -166,6 +166,7 @@ class AwsLabel(ConfigValue):
         self.userdata = label.get('userdata', None)
         self.iam_instance_profile = label.get('iam-instance-profile', None)
         self.tags = label.get('tags', {})
+        self.dynamic_tags = label.get('dynamic-tags', {})

     @staticmethod
     def getSchema():
@@ -184,6 +185,7 @@ class AwsLabel(ConfigValue):
                 v.Exclusive('arn', 'iam_instance_profile_id'): str
             },
             'tags': dict,
+            'dynamic-tags': dict,
         }

@@ -169,8 +169,9 @@ class AzureCreateStateMachine(statemachine.StateMachine):
     COMPLETE = 'complete'

     def __init__(self, adapter, hostname, label, image_external_id,
-                 metadata, retries):
+                 metadata, retries, request, log):
         super().__init__()
+        self.log = log
         self.adapter = adapter
         self.retries = retries
         self.attempts = 0
@@ -178,6 +179,11 @@ class AzureCreateStateMachine(statemachine.StateMachine):
         self.image_reference = None
         self.metadata = metadata
         self.tags = label.tags.copy() or {}
+        for k, v in label.dynamic_tags.items():
+            try:
+                self.tags[k] = v.format(request=request.getSafeAttributes())
+            except Exception:
+                self.log.exception("Error formatting tag %s", k)
         self.tags.update(metadata)
         self.hostname = hostname
         self.label = label
@@ -329,9 +335,11 @@ class AzureAdapter(statemachine.Adapter):
         self._getSKUs()

     def getCreateStateMachine(self, hostname, label,
-                              image_external_id, metadata, retries, log):
+                              image_external_id, metadata, retries,
+                              request, log):
         return AzureCreateStateMachine(self, hostname, label,
-                                       image_external_id, metadata, retries)
+                                       image_external_id, metadata,
+                                       retries, request, log)

     def getDeleteStateMachine(self, external_id, log):
         return AzureDeleteStateMachine(self, external_id)
@@ -169,6 +169,7 @@ class AzureLabel(ConfigValue):

         self.hardware_profile = label['hardware-profile']
         self.tags = label.get('tags', {})
+        self.dynamic_tags = label.get('dynamic-tags', {})
         self.user_data = self._encodeData(label.get('user-data', None))
         self.custom_data = self._encodeData(label.get('custom-data', None))
@@ -189,6 +190,7 @@ class AzureLabel(ConfigValue):
             'diskimage': str,
             v.Required('hardware-profile'): azure_hardware_profile,
             'tags': dict,
+            'dynamic-tags': dict,
             'user-data': str,
             'custom-data': str,
         }
@@ -379,9 +379,11 @@ class IBMVPCAdapter(statemachine.Adapter):
         return authenticator

     def getCreateStateMachine(self, hostname, label,
-                              image_external_id, metadata, retries, log):
+                              image_external_id, metadata, retries,
+                              request, log):
         return IBMVPCCreateStateMachine(self, hostname, label,
-                                        image_external_id, metadata, retries)
+                                        image_external_id, metadata,
+                                        retries)

     def getDeleteStateMachine(self, external_id, log):
         return IBMVPCDeleteStateMachine(self, external_id)
@@ -271,7 +271,8 @@ class MetastaticAdapter(statemachine.Adapter):
         return self._provider._zk

     def getCreateStateMachine(self, hostname, label,
-                              image_external_id, metadata, retries, log):
+                              image_external_id, metadata, retries,
+                              request, log):
         return MetastaticCreateStateMachine(self, hostname, label,
                                             image_external_id, metadata,
                                             retries)
@@ -175,7 +175,9 @@ class ProviderPool(ConfigPool):
                                                    False))
             pl.volume_size = label.get('volume-size', 50)
             pl.instance_properties = label.get('instance-properties',
-                                               None)
+                                               {})
+            pl.dynamic_instance_properties = label.get(
+                'dynamic-instance-properties', {})
             pl.userdata = label.get('userdata', None)
             pl.networks = label.get('networks', self.networks)
             pl.host_key_checking = label.get(
@@ -319,6 +321,7 @@ class OpenStackProviderConfig(ProviderConfig):
             'boot-from-volume': bool,
             'volume-size': int,
             'instance-properties': dict,
+            'dynamic-instance-properties': dict,
             'userdata': str,
             'networks': [str],
             'host-key-checking': bool,
@@ -28,7 +28,8 @@ from nodepool.driver import NodeRequestHandler


 class OpenStackNodeLauncher(NodeLauncher):
-    def __init__(self, handler, node, provider_config, provider_label):
+    def __init__(self, handler, node, provider_config, provider_label,
+                 request):
         '''
         Initialize the launcher.

@@ -38,6 +39,8 @@ class OpenStackNodeLauncher(NodeLauncher):
             describing the provider launching this node.
         :param ProviderLabel provider_label: A ProviderLabel object
             describing the label to use for the node.
+        :param NodeRequest request: The NodeRequest that prompted the
+            launch.
         '''
         super().__init__(handler, node, provider_config)

@@ -46,6 +49,7 @@ class OpenStackNodeLauncher(NodeLauncher):

         self.label = provider_label
         self.pool = provider_label.pool
+        self.request = request

     def _logConsole(self, server_id, hostname):
         if not self.label.console_log:
@@ -123,6 +127,16 @@ class OpenStackNodeLauncher(NodeLauncher):
         # because that isn't available in ZooKeeper until after the server is
         # active, which could cause a race in leak detection.

+        props = self.label.instance_properties.copy()
+        for k, v in self.label.dynamic_instance_properties.items():
+            try:
+                props[k] = v.format(request=self.request.getSafeAttributes())
+            except Exception:
+                self.log.exception(
+                    "Error formatting dynamic instance property %s", k)
+        if not props:
+            props = None
+
         try:
             server = self.handler.manager.createServer(
                 hostname,
@@ -140,7 +154,7 @@ class OpenStackNodeLauncher(NodeLauncher):
                 security_groups=self.pool.security_groups,
                 boot_from_volume=self.label.boot_from_volume,
                 volume_size=self.label.volume_size,
-                instance_properties=self.label.instance_properties,
+                instance_properties=props,
                 userdata=self.label.userdata)
         except openstack.cloud.exc.OpenStackCloudCreateException as e:
             if e.resource_id:
@@ -457,6 +471,7 @@ class OpenStackNodeRequestHandler(NodeRequestHandler):

     def launch(self, node):
         label = self.pool.labels[node.type[0]]
-        thd = OpenStackNodeLauncher(self, node, self.provider, label)
+        thd = OpenStackNodeLauncher(self, node, self.provider, label,
+                                    self.request)
         thd.start()
         self._threads.append(thd)
@@ -136,7 +136,8 @@ class StateMachineNodeLauncher(stats.StatsReporter):
             'nodepool_pool_name': self.handler.pool.name,
             'nodepool_provider_name': self.manager.provider.name}
         self.state_machine = self.manager.adapter.getCreateStateMachine(
-            hostname, label, image_external_id, metadata, retries, self.log)
+            hostname, label, image_external_id, metadata, retries,
+            self.handler.request, self.log)

     def updateNodeFromInstance(self, instance):
         if instance is None:
@@ -161,3 +161,10 @@ def nodescan(ip, port=22, timeout=60, gather_hostkeys=True):
             sock = None

     return keys
+
+
+class Attributes(object):
+    """A class to hold attributes for string formatting."""
+
+    def __init__(self, **kw):
+        setattr(self, '__dict__', kw)
@@ -285,6 +285,8 @@ class BaseTestCase(testtools.TestCase):
                     continue
                 if t.name.startswith("PoolWorker"):
                     continue
+                if t.name.startswith("ThreadPoolExecutor"):
+                    continue
                 if t.name not in whitelist:
                     done = False
             if done:
nodepool/tests/fixtures/aws/aws-min-ready.yaml (new file, 47 lines)
@@ -0,0 +1,47 @@
+zookeeper-servers:
+  - host: {zookeeper_host}
+    port: {zookeeper_port}
+    chroot: {zookeeper_chroot}
+
+zookeeper-tls:
+  ca: {zookeeper_ca}
+  cert: {zookeeper_cert}
+  key: {zookeeper_key}
+
+tenant-resource-limits:
+  - tenant-name: tenant-1
+    max-cores: 1024
+
+labels:
+  - name: ubuntu1404-with-tags
+    min-ready: 1
+
+providers:
+  - name: ec2-us-west-2
+    driver: aws
+    region-name: us-west-2
+    cloud-images:
+      - name: ubuntu1404
+        image-id: ami-1e749f67
+        username: ubuntu
+    pools:
+      - name: main
+        max-servers: 1
+        subnet-id: {subnet_id}
+        security-group-id: {security_group_id}
+        node-attributes:
+          key1: value1
+          key2: value2
+        labels:
+          - name: ubuntu1404-with-tags
+            cloud-image: ubuntu1404
+            instance-type: t3.medium
+            key-name: zuul
+            tags:
+              has-tags: true
+              Name: ignored-name
+            dynamic-tags:
+              # Note: we double the braces to deal with unit-test
+              # pre-processing of this file.  The output and actual
+              # file syntax is single braces.
+              dynamic-tenant: "Tenant is {{request.tenant_name}}"
nodepool/tests/fixtures/aws/aws.yaml
@@ -101,6 +101,11 @@ providers:
             tags:
               has-tags: true
               Name: ignored-name
+            dynamic-tags:
+              # Note: we double the braces to deal with unit-test
+              # pre-processing of this file.  The output and actual
+              # file syntax is single braces.
+              dynamic-tenant: "Tenant is {{request.tenant_name}}"
     - name: ubuntu1404-with-shell-type
       cloud-image: ubuntu1404-with-shell-type
       instance-type: t3.medium
nodepool/tests/fixtures/azure-min-ready.yaml (new file, 62 lines)
@@ -0,0 +1,62 @@
+webapp:
+  port: 8005
+  listen_address: '0.0.0.0'
+
+zookeeper-servers:
+  - host: {zookeeper_host}
+    port: {zookeeper_port}
+    chroot: {zookeeper_chroot}
+
+zookeeper-tls:
+  ca: {zookeeper_ca}
+  cert: {zookeeper_cert}
+  key: {zookeeper_key}
+
+tenant-resource-limits:
+  - tenant-name: tenant-1
+    max-cores: 1024
+
+labels:
+  - name: bionic
+    min-ready: 1
+
+providers:
+  - name: azure
+    driver: azure
+    zuul-public-key: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC+mplenM+m6pNY9Un3fpO9eqf808Jrfb3d1gXg7BZVawCvtEZ/cDYvLQ3OF1AeL2kcIC0UAIglM5JXae7yO5CJbJRdkbXvv0u1LvpLxYSPM4ATR0r4IseC5YVxkfJQNi4ixSwTqD4ScEkuCXcSqSU9M+hB+KlnwXoR4IcYHf7vD2Z0Mdwm2ikk3SeERmspmMxx/uz0SPn58QxONuoTlNWQKqDWsV6bRyoPa6HWccMrIH1/e7E69Nw/30oioOQpKBgaDCauh+QkDtSkjRpRMOV47ZFh16Q9DqMgLx+FD8z6++9rsHlB65Zas1xyQsiRCFG09s00b7OR7Xz9ukQ5+vXV
+    resource-group-location: centralus
+    location: centralus
+    resource-group: nodepool
+    auth-path: {auth_path}
+    subnet-id: /subscriptions/c35cf7df-ed75-4c85-be00-535409a85120/resourceGroups/nodepool/providers/Microsoft.Network/virtualNetworks/NodePool/subnets/default
+    cloud-images:
+      - name: bionic
+        username: zuul
+        shell-type: sh
+        image-reference:
+          sku: 18.04-LTS
+          publisher: Canonical
+          version: latest
+          offer: UbuntuServer
+    pools:
+      - name: main
+        max-servers: 10
+        node-attributes:
+          key1: value1
+          key2: value2
+        labels:
+          - name: bionic
+            cloud-image: bionic
+            hardware-profile:
+              vm-size: Standard_B1ls
+            tags:
+              department: R&D
+              team: DevOps
+              systemPurpose: CI
+            dynamic-tags:
+              # Note: we double the braces to deal with unit-test
+              # pre-processing of this file.  The output and actual
+              # file syntax is single braces.
+              dynamic-tenant: "Tenant is {{request.tenant_name}}"
+            user-data: "This is the user data"
+            custom-data: "This is the custom data"
nodepool/tests/fixtures/azure.yaml
@@ -86,6 +86,11 @@ providers:
               department: R&D
               team: DevOps
               systemPurpose: CI
+            dynamic-tags:
+              # Note: we double the braces to deal with unit-test
+              # pre-processing of this file.  The output and actual
+              # file syntax is single braces.
+              dynamic-tenant: "Tenant is {{request.tenant_name}}"
             user-data: "This is the user data"
             custom-data: "This is the custom data"
     - name: image-by-name
nodepool/tests/fixtures/node.yaml
@@ -46,6 +46,13 @@ providers:
         diskimage: fake-image
         min-ram: 8192
         flavor-name: 'Fake'
+        instance-properties:
+          prop1: foo
+        dynamic-instance-properties:
+          # Note: we double the braces to deal with unit-test
+          # pre-processing of this file.  The output and actual
+          # file syntax is single braces.
+          dynamic-tenant: "Tenant is {{request.tenant_name}}"

 diskimages:
   - name: fake-image
@@ -41,6 +41,13 @@ providers:
         diskimage: fake-image
         min-ram: 8192
         flavor-name: 'Fake'
+        instance-properties:
+          prop1: foo
+        dynamic-instance-properties:
+          # Note: we double the braces to deal with unit-test
+          # pre-processing of this file.  The output and actual
+          # file syntax is single braces.
+          dynamic-tenant: "Tenant is {{request.tenant_name}}"
     - name: fake-label2
       diskimage: fake-image
       min-ram: 8192
@@ -30,6 +30,7 @@ from nodepool.zk import zookeeper as zk
 from nodepool.nodeutils import iterate_timeout
 import nodepool.driver.statemachine
 from nodepool.driver.statemachine import StateMachineProvider
+import nodepool.driver.aws.adapter
 from nodepool.driver.aws.adapter import AwsInstance, AwsAdapter

 from nodepool.tests.unit.fake_aws import FakeAws
@@ -43,6 +44,48 @@ class Dummy:
     pass


+class FakeAwsAdapter(AwsAdapter):
+    # Patch/override adapter methods to aid unit tests
+    def __init__(self, *args, **kw):
+        super().__init__(*args, **kw)
+
+        # Note: boto3 doesn't handle ipv6 addresses correctly
+        # when in fake mode so we need to intercept the
+        # create_instances call and validate the args we supply.
+        def _fake_create_instances(*args, **kwargs):
+            self.__testcase.create_instance_calls.append(kwargs)
+            return self.ec2.create_instances_orig(*args, **kwargs)
+
+        self.ec2.create_instances_orig = self.ec2.create_instances
+        self.ec2.create_instances = _fake_create_instances
+        self.ec2_client.import_snapshot = \
+            self.__testcase.fake_aws.import_snapshot
+        self.ec2_client.get_paginator = \
+            self.__testcase.fake_aws.get_paginator
+
+        # moto does not mock service-quotas, so we do it ourselves:
+        def _fake_get_service_quota(ServiceCode, QuotaCode, *args, **kwargs):
+            # This is a simple fake that only returns the number
+            # of cores.
+            if self.__quotas is None:
+                return {'Quota': {'Value': 100}}
+            else:
+                return {'Quota': {'Value': self.__quotas.get(QuotaCode)}}
+        self.aws_quotas.get_service_quota = _fake_get_service_quota
+
+
+def aws_quotas(quotas):
+    """Specify a set of AWS quota values for use by a test method.
+
+    :arg dict quotas: The quota dictionary.
+    """
+
+    def decorator(test):
+        test.__aws_quotas__ = quotas
+        return test
+    return decorator
+
+
 class TestDriverAws(tests.DBTestCase):
     log = logging.getLogger("nodepool.TestDriverAws")
     mock_ec2 = mock_ec2()
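A quick sketch of how the aws_quotas decorator and setUp cooperate:
the decorator stores the dict on the test function, and setUp later
retrieves it by attribute name.  The test function here is
hypothetical, used only for illustration:

    def aws_quotas(quotas):
        def decorator(test):
            test.__aws_quotas__ = quotas
            return test
        return decorator

    @aws_quotas({'L-1216C47A': 1})
    def test_example():
        pass

    # setUp's equivalent lookup on the decorated test method:
    print(getattr(test_example, '__aws_quotas__', None))
    # {'L-1216C47A': 1}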
@@ -54,6 +97,7 @@ class TestDriverAws(tests.DBTestCase):
         StateMachineProvider.MINIMUM_SLEEP = 0.1
         StateMachineProvider.MAXIMUM_SLEEP = 1
+        AwsAdapter.IMAGE_UPLOAD_SLEEP = 1

         aws_id = 'AK000000000000000000'
         aws_key = '0123456789abcdef0123456789abcdef0123456789abcdef'
         self.useFixture(
@@ -103,9 +147,13 @@ class TestDriverAws(tests.DBTestCase):
             Description='Zuul Nodes')
         self.security_group_id = self.security_group['GroupId']
         self.patch(nodepool.driver.statemachine, 'nodescan', fake_nodescan)
-        self.patch(AwsAdapter, '_import_snapshot',
-                   self.fake_aws.import_snapshot)
-        self.patch(AwsAdapter, '_get_paginator', self.fake_aws.get_paginator)
+        test_name = self.id().split('.')[-1]
+        test = getattr(self, test_name)
+        if hasattr(test, '__aws_quotas__'):
+            quotas = getattr(test, '__aws_quotas__')
+        else:
+            quotas = None
+        self.patchAdapter(quotas=quotas)

     def tearDown(self):
         self.mock_ec2.stop()
@@ -117,47 +165,18 @@ class TestDriverAws(tests.DBTestCase):
         kw['security_group_id'] = self.security_group_id
         return super().setup_config(*args, **kw)

-    def patchProvider(self, nodepool, provider_name='ec2-us-west-2',
-                      quotas=None):
-        for _ in iterate_timeout(
-                30, Exception, 'wait for provider'):
-            try:
-                provider_manager = nodepool.getProviderManager(provider_name)
-                if provider_manager.adapter.ec2 is not None:
-                    break
-            except Exception:
-                pass
-
-        # Note: boto3 doesn't handle ipv6 addresses correctly
-        # when in fake mode so we need to intercept the
-        # create_instances call and validate the args we supply.
-        def _fake_create_instances(*args, **kwargs):
-            self.create_instance_calls.append(kwargs)
-            return provider_manager.adapter.ec2.create_instances_orig(
-                *args, **kwargs)
-
-        provider_manager.adapter.ec2.create_instances_orig =\
-            provider_manager.adapter.ec2.create_instances
-        provider_manager.adapter.ec2.create_instances =\
-            _fake_create_instances
-
-        # moto does not mock service-quotas, so we do it ourselves:
-        def _fake_get_service_quota(ServiceCode, QuotaCode, *args, **kwargs):
-            # This is a simple fake that only returns the number
-            # of cores.
-            if quotas is None:
-                return {'Quota': {'Value': 100}}
-            else:
-                return {'Quota': {'Value': quotas.get(QuotaCode)}}
-        provider_manager.adapter.aws_quotas.get_service_quota =\
-            _fake_get_service_quota
+    def patchAdapter(self, quotas=None):
+        self.patch(nodepool.driver.aws.adapter, 'AwsAdapter', FakeAwsAdapter)
+        self.patch(nodepool.driver.aws.adapter.AwsAdapter,
+                   '_FakeAwsAdapter__testcase', self)
+        self.patch(nodepool.driver.aws.adapter.AwsAdapter,
+                   '_FakeAwsAdapter__quotas', quotas)

     def requestNode(self, config_path, label):
         # A helper method to perform a single node request
         configfile = self.setup_config(config_path)
         pool = self.useNodepool(configfile, watermark_sleep=1)
         pool.start()
-        self.patchProvider(pool)

         req = zk.NodeRequest()
         req.state = zk.REQUESTED
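The patchAdapter replacement above leans on Python's private-name
mangling: inside FakeAwsAdapter, "self.__testcase" compiles to
"self._FakeAwsAdapter__testcase", so the test case can inject values
simply by patching that mangled name onto the class.  A standalone
sketch of the mechanism, with hypothetical class names:

    class Adapter:
        pass

    class FakeAdapter(Adapter):
        def record(self, call):
            # Compiles to self._FakeAdapter__testcase.
            self.__testcase.calls.append(call)

    class DummyTest:
        calls = []

    # The equivalent of self.patch(...): set the mangled attribute.
    setattr(FakeAdapter, '_FakeAdapter__testcase', DummyTest)

    FakeAdapter().record('create_instances')
    print(DummyTest.calls)  # ['create_instances']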
@@ -190,7 +209,6 @@ class TestDriverAws(tests.DBTestCase):
         configfile = self.setup_config('aws/aws-multiple.yaml')
         pool = self.useNodepool(configfile, watermark_sleep=1)
         pool.start()
-        self.patchProvider(pool)

         reqs = []
         for x in range(4):
@@ -211,15 +229,15 @@ class TestDriverAws(tests.DBTestCase):
         for node in nodes:
             self.waitForNodeDeletion(node)

+    @aws_quotas({
+        'L-1216C47A': 1,
+        'L-43DA4232': 224,
+    })
     def test_aws_multi_quota(self):
         # Test multiple instance type quotas (standard and high-mem)
         configfile = self.setup_config('aws/aws-quota.yaml')
         pool = self.useNodepool(configfile, watermark_sleep=1)
         pool.start()
-        self.patchProvider(pool, quotas={
-            'L-1216C47A': 1,
-            'L-43DA4232': 224,
-        })

         # Create a high-memory node request.
         req1 = zk.NodeRequest()
@@ -265,16 +283,16 @@ class TestDriverAws(tests.DBTestCase):
         req3 = self.waitForNodeRequest(req3)
         self.assertSuccess(req3)

+    @aws_quotas({
+        'L-1216C47A': 1000,
+        'L-43DA4232': 1000,
+    })
     def test_aws_multi_pool_limits(self):
         # Test multiple instance type quotas (standard and high-mem)
         # with pool resource limits
         configfile = self.setup_config('aws/aws-limits.yaml')
         pool = self.useNodepool(configfile, watermark_sleep=1)
         pool.start()
-        self.patchProvider(pool, quotas={
-            'L-1216C47A': 1000,
-            'L-43DA4232': 1000,
-        })

         # Create a standard node request.
         req1 = zk.NodeRequest()
@@ -310,16 +328,16 @@ class TestDriverAws(tests.DBTestCase):
         req2 = self.waitForNodeRequest(req2)
         self.assertSuccess(req2)

+    @aws_quotas({
+        'L-1216C47A': 1000,
+        'L-43DA4232': 1000,
+    })
     def test_aws_multi_tenant_limits(self):
         # Test multiple instance type quotas (standard and high-mem)
         # with tenant resource limits
         configfile = self.setup_config('aws/aws-limits.yaml')
         pool = self.useNodepool(configfile, watermark_sleep=1)
         pool.start()
-        self.patchProvider(pool, quotas={
-            'L-1216C47A': 1000,
-            'L-43DA4232': 1000,
-        })

         # Create a high node request.
         req1 = zk.NodeRequest()
@@ -501,6 +519,26 @@ class TestDriverAws(tests.DBTestCase):
         self.assertIn({"Key": "has-tags", "Value": "true"}, tag_list)
         self.assertIn({"Key": "Name", "Value": "np0000000000"}, tag_list)
         self.assertNotIn({"Key": "Name", "Value": "ignored-name"}, tag_list)
+        self.assertIn(
+            {"Key": "dynamic-tenant", "Value": "Tenant is tenant-1"}, tag_list)
+
+    def test_aws_min_ready(self):
+        # Test dynamic tag formatting without a real node request
+        configfile = self.setup_config('aws/aws-min-ready.yaml')
+        pool = self.useNodepool(configfile, watermark_sleep=1)
+        pool.start()
+        node = self.waitForNodes('ubuntu1404-with-tags')[0]
+
+        self.assertEqual(node.host_keys, ['ssh-rsa FAKEKEY'])
+        self.assertEqual(node.image_id, 'ubuntu1404')
+
+        instance = self.ec2.Instance(node.external_id)
+        tag_list = instance.tags
+        self.assertIn({"Key": "has-tags", "Value": "true"}, tag_list)
+        self.assertIn({"Key": "Name", "Value": "np0000000000"}, tag_list)
+        self.assertNotIn({"Key": "Name", "Value": "ignored-name"}, tag_list)
+        self.assertIn(
+            {"Key": "dynamic-tenant", "Value": "Tenant is None"}, tag_list)

     def test_aws_shell_type(self):
         req = self.requestNode('aws/shell-type.yaml',
@@ -545,7 +583,6 @@ class TestDriverAws(tests.DBTestCase):

         pool = self.useNodepool(configfile, watermark_sleep=1)
         pool.start()
-        self.patchProvider(pool)

         req = zk.NodeRequest()
         req.state = zk.REQUESTED
@@ -574,8 +611,6 @@ class TestDriverAws(tests.DBTestCase):
         self.waitForBuildDeletion('fake-image', '0000000001')

     def test_aws_resource_cleanup(self):
-        self.patch(AwsAdapter, '_get_paginator', self.fake_aws.get_paginator)
-
         # Start by setting up leaked resources
         instance_tags = [
             {'Key': 'nodepool_node_id', 'Value': '0000000042'},
@@ -682,7 +717,6 @@ class TestDriverAws(tests.DBTestCase):
         configfile = self.setup_config('aws/diskimage.yaml')
         pool = self.useNodepool(configfile, watermark_sleep=1)
         pool.start()
-        self.patchProvider(pool)

         for _ in iterate_timeout(30, Exception, 'instance deletion'):
             instance = self.ec2.Instance(instance_id)
@@ -96,6 +96,39 @@ class TestDriverAzure(tests.DBTestCase):
             self.fake_azure.crud['Microsoft.Compute/virtualMachines'].
             requests[0]['properties']['userData'],
             'VGhpcyBpcyB0aGUgdXNlciBkYXRh')  # This is the user data
+        tags = (self.fake_azure.crud['Microsoft.Compute/virtualMachines'].
+                requests[0]['tags'])
+        self.assertEqual(tags.get('team'), 'DevOps')
+        self.assertEqual(tags.get('dynamic-tenant'), 'Tenant is tenant-1')
+
+    def test_azure_min_ready(self):
+        configfile = self.setup_config(
+            'azure-min-ready.yaml',
+            auth_path=self.fake_azure.auth_file.name)
+        pool = self.useNodepool(configfile, watermark_sleep=1)
+        pool.start()
+        node = self.waitForNodes('bionic')[0]
+
+        self.assertEqual(node.state, zk.READY)
+        self.assertIsNotNone(node.launcher)
+        self.assertEqual(node.connection_type, 'ssh')
+        self.assertEqual(node.shell_type, 'sh')
+        self.assertEqual(node.attributes,
+                         {'key1': 'value1', 'key2': 'value2'})
+        self.assertEqual(node.host_keys, ['ssh-rsa FAKEKEY'])
+        self.assertEqual(node.python_path, 'auto')
+        self.assertEqual(
+            self.fake_azure.crud['Microsoft.Compute/virtualMachines'].
+            items[0]['properties']['osProfile']['customData'],
+            'VGhpcyBpcyB0aGUgY3VzdG9tIGRhdGE=')  # This is the custom data
+        self.assertEqual(
+            self.fake_azure.crud['Microsoft.Compute/virtualMachines'].
+            requests[0]['properties']['userData'],
+            'VGhpcyBpcyB0aGUgdXNlciBkYXRh')  # This is the user data
+        tags = (self.fake_azure.crud['Microsoft.Compute/virtualMachines'].
+                requests[0]['tags'])
+        self.assertEqual(tags.get('team'), 'DevOps')
+        self.assertEqual(tags.get('dynamic-tenant'), 'Tenant is None')

     def test_azure_diskimage(self):
         configfile = self.setup_config(
@@ -86,6 +86,18 @@ class TestLauncher(tests.DBTestCase):
             'ram': 8192,
         }
         self.assertEqual(node.resources, resources)
+
+        # We check the "cloud" side attributes are set from nodepool side
+        provider = pool.getProviderManager('fake-provider')
+        cloud_node = provider.getServer(node.hostname)
+
+        self.assertEqual(
+            cloud_node.metadata['nodepool_provider_name'],
+            'fake-provider')
+        self.assertEqual(cloud_node.metadata['nodepool_pool_name'], 'main')
+        self.assertEqual(cloud_node.metadata['prop1'], 'foo')
+        self.assertEqual(cloud_node.metadata['dynamic-tenant'],
+                         'Tenant is tenant-1')

         self.zk.lockNode(node, blocking=False)
         self.zk.unlockNode(node)
@@ -632,6 +644,9 @@ class TestLauncher(tests.DBTestCase):
             cloud_node.metadata['nodepool_provider_name'],
             'fake-provider')
         self.assertEqual(cloud_node.metadata['nodepool_pool_name'], 'main')
+        self.assertEqual(cloud_node.metadata['prop1'], 'foo')
+        self.assertEqual(cloud_node.metadata['dynamic-tenant'],
+                         'Tenant is None')

     def test_node_network_cli(self):
         """Same as test_node but using connection-type network_cli"""
@@ -27,6 +27,7 @@ from nodepool import exceptions as npe
 from nodepool.logconfig import get_annotated_logger
 from nodepool.zk.components import COMPONENT_REGISTRY
 from nodepool.zk import ZooKeeperBase
+from nodepool.nodeutils import Attributes

 # States:
 # We are building this image (or node) but it is not ready for use.
@@ -463,6 +464,19 @@ class NodeRequest(BaseModel):
         self.created_time = d.get('created_time')
         self.tenant_name = d.get('tenant_name')

+    def getSafeAttributes(self):
+        '''Return a dict of attributes safe for user-visible templating'''
+        return Attributes(
+            id=self.id,
+            labels=self.node_types,
+            requestor=self.requestor,
+            requestor_data=self.requestor_data,
+            relative_priority=self.relative_priority,
+            event_id=self.event_id,
+            created_time=self.created_time,
+            tenant_name=self.tenant_name,
+        )
+

 class Node(BaseModel):
     '''
releasenotes/notes/dynamic-tags-693c8f61f74600c1.yaml (new file, 10 lines)
@@ -0,0 +1,10 @@
+---
+features:
+  - |
+    The OpenStack, AWS, and Azure drivers now support adding tags (AKA
+    metadata, AKA properties) to instances with dynamic data about the
+    corresponding node request using string formatting.
+
+    See :attr:`providers.[openstack].pools.labels.instance-properties`,
+    :attr:`providers.[aws].pools.labels.dynamic-tags`, and
+    :attr:`providers.[azure].pools.labels.dynamic-tags` for details.