OCI container adjacent artifact support

Adds support for using an OCI URL to reference artifacts
to deploy.

- Identification of a disk image from the container registry.
- Determination of an image_url which IPA leverages to download
  content. Content cannot be compressed at this point in time.
- Ironic can download the file locally and, with a patch to detect
  and enable Zstandard compression, can resolve the artifact URL,
  starting from either an OCI container URL with a tag or a specific
  manifest digest, and then download the file for use.
- User-driven auth appears to work as expected.
- Tags work with Quay, but not with OpenShift, due to what appears
  to be additional request validation.

Closes-Bug: 2085565
Change-Id: I17f7ba57e0ec1a5451890838f153746f5f8e5182
Julia Kreger 2024-12-02 08:58:35 -08:00
parent 3704cc378d
commit db4412d570
27 changed files with 3922 additions and 150 deletions


@ -26,3 +26,4 @@ Bare Metal Service Features
Layer 3 or DHCP-less Ramdisk Booting <dhcp-less>
Deploying with Anaconda <anaconda-deploy-interface>
Node History <node-history>
OCI Container Registry Support <oci-container-registry>


@ -0,0 +1,302 @@
.. _oci_container_registry:
================================
Use of an OCI Container Registry
================================
What is an OCI Container Registry?
----------------------------------
An OCI container registry is an evolution of a Docker container registry
where the layers which make up containers can be housed as individual data
files and then retrieved to be reconstructed into a running container.
OCI is short for "Open Container Initiative", and you can learn more
about OCI at `opencontainers.org <https://opencontainers.org>`_.
Container registries are evolving to support housing other data files, and
the focus in this context is that evolution: supporting additional files
housed in and served by a container registry.
.. WARNING::
This feature should be considered experimental.
Overview
--------
A recent addition to Ironic is the ability to retrieve artifacts from an
OCI Container Registry. This support is modeled such that it can be used
by an Ironic deployment for both disk images and underlying artifacts used
for deployment, such as kernels and ramdisks. Different rules apply, and
as such, please review the next several sections carefully.
At present, this functionality is only available for users who are directly
interacting with Ironic. Nova's data model is *only* compatible with the use
of Glance at this time, but that may change in the future.
How it works
------------
An OCI registry has a layered data model which can be divided into three
conceptual layers.
- Artifact Index - Higher level list which points to manifests and contains
information like annotations, platform, architecture.
- Manifest - The intermediate structural location which contains the lower
level details related to the blob (Binary Large OBject) data as well as
the information where to find the blob data. When a container image is
being referenced, this Manifest contains information on multiple "layers"
which comprise a container.
- Blob data - The actual artifact, which can only be retrieved using the
data provided in the manifest.
Ironic has a separate image service client which translates an OCI
style container URL supplied in the ``image_source`` value, formatted such as
``oci://host/user/container:tag``. When just a tag has been defined,
which can be thought of as a specific "tagged" view containing many
artifacts, Ironic will search through that view to find the best match.
Matching weighs the preferred file type based upon the configured
``image_download_source``: with a ``local`` value, ``qcow2`` disk images
are preferred, otherwise ``raw`` is preferred.
This uses the ``disktype`` annotation, where ``qcow2`` and ``qemu`` are
considered QCOW2 format images, while a ``disktype`` annotation of
``raw`` or ``applehv`` on a manifest indicates a raw disk image.
Once file types have been appropriately weighted, the code attempts to match
the baremetal node's CPU architecture to the listed platform ``architecture``
in the remote registry. Once the file identification process has been
completed, Ironic automatically updates the ``image_source`` value to the
matching artifact in the remote container registry.
.. NOTE::
The code automatically attempts to handle differences in architecture
naming which has been observed, where ``x86_64`` is sometimes referred to
as ``amd64``, and ``aarch64`` is sometimes referred to as ``arm64``.
.. WARNING:: An ``image_download_source`` of ``swift`` is incompatible
with this image service. Only ``local`` and ``http`` are supported.
When a URL points to a specific manifest, for example
``oci://host/user/container@sha256:f00b...``, Ironic is only able to
retrieve that specific file from the container registry. Due to the
data model, we also cannot learn additional details about that image
such as annotations, as annotations are part of the structural data
which points to manifests in the first place.
An added advantage of using container registries is that the
checksum *is confirmed* in transit based upon the supplied metadata
from the container registry. For example, when you use a manifest URL,
the digest portion of the URL is used to checksum the returned contents,
and that manifest then contains the digest values for the artifacts,
which also supplies sufficient information to identify the URL from which
to download the artifacts.
Authentication
--------------
Authentication is an important topic for users of an OCI Image Registry.
While some public registries are fairly friendly to providing download access,
other registries may have aggressive quotas in place which require users to
be authenticated to download artifacts. Furthermore, private image registries
may require authentication for any access.
As such, there are three available paths for providing configuration:
* A node ``instance_info`` value of ``image_pull_secret``. This value may be
utilized to retrieve an image artifact, but is not intended for pulling
other artifacts like kernels or ramdisks used as part of a deployment
process. As with all other ``instance_info`` field values, this value
is deleted once the node has been unprovisioned.
* A node ``driver_info`` value of ``image_pull_secret``. This setting is
similar to the ``instance_info`` setting, but may be utilized by an
administrator of a baremetal node to define the specific registry
credential to utilize for the node.
* The :oslo.config:option:`oci.authentication_config` option, which allows for
  a conductor process wide pre-shared secret configuration. This configuration
  value can be set to a file using the common auth configuration format
  used by container tooling to record the secret to utilize for container
  registry authentication. This value is only consulted *if* a specific
  secret has not been defined to utilize, and is intended to be compatible
  with the format used by docker ``config.json`` to store authentication
  details.
The configuration file looks something like the following example.
.. code-block:: json
   {
     "auths": {
       "quay.io": {
         "auth": "<secret_here>"
       },
       "private-registry.tld": {
         "auth": "<secret_here>"
       }
     }
   }
.. NOTE::
   The ``image_pull_secret`` values are not visible in the API surface
   due to Ironic's secret value sanitization, which prevents sensitive
   values from being visible, and are instead returned as '******'.
Available URL Formats
---------------------
The following URL formats are available for use to download a disk image
artifact. When a non-precise manifest URL is supplied, Ironic will attempt
to identify and match the artifact. URLs for artifacts which are not disk
images are required to be specific and point to a specific manifest.
.. NOTE::
If no tag is defined, the tag ``latest`` will be attempted,
however, if that is not found in the *list* of available tags returned
by the container registry, an ImageNotFound error will be raised in
Ironic.
* oci://host/path/container - Ironic assumes 'latest' is the desired tag
in this case.
* oci://host/path/container:tag - Ironic discovers artifacts based upon
  the view provided by the defined tag.
* oci://host/path/container@sha256:f00f - This is a URL which defines a
specific manifest. Should this be a container, this would be a manifest
file with many layers to make a container, but for an artifact only a
single file is represented by this manifest, and we retrieve this
specific file.
.. WARNING::
   The use of tag values to access an artifact, for example, ``deploy_kernel``
   or ``deploy_ramdisk``, is not possible. This is an intentional limitation
   which may be addressed in a future version of Ironic.
Known Limitations
-----------------
* For usage with disk images, only whole-disk images are supported.
  Ironic does not intend to support partition images with this image service.
* IPA is unaware of remote container registries, as well as authentication
to a remote registry. This is expected to be addressed in a future release
of Ironic.
* Some artifacts may be compressed using Zstandard. Only disk images or
artifacts which transit through the conductor may be appropriately
decompressed. Unfortunately IPA won't be able to decompress such artifacts
dynamically while streaming content.
* Authentication to container image registries is *only* available through
the use of pre-shared token secrets.
* Use of tags may not be viable on some OCI Compliant image registries.
  This may result in an ImageNotFound error being raised when attempting
  to resolve a tag.
* User authentication is presently limited to use of a bearer token,
under the model only supporting a "pull secret" style of authentication.
If Basic authentication is required, please file a bug in
`Ironic Launchpad <https://bugs.launchpad.net/ironic>`_.
How do I upload files to my own registry?
-----------------------------------------
While there are several different ways to do this, the easiest path is to
leverage a tool called ``ORAS``. You can learn more about ORAS at
`https://oras.land <https://oras.land/>`_
The ORAS utility is able to upload arbitrary artifacts to a Container
Registry along with the required manifest *and* then associates a tag
for easy human reference. While the OCI data model *does* happily
support a model of one tag in front of many manifests, ORAS does not.
In the ORAS model, one tag is associated with one artifact.
In the examples below, you can see how this is achieved. Please note
that these examples are *not* commands you can just cut and paste, but are
intended to demonstrate the required steps and share the concept of how
to construct the URL for the artifact.
.. NOTE::
   These example command lines may differ slightly based upon your remote
   registry and underlying configuration, and as such leave out credential
   settings.
As a first step, we will demonstrate uploading an IPA Ramdisk kernel.
.. code-block:: shell
$ export HOST=my-container-host.domain.tld
$ export CONTAINER=my-project/my-container
$ oras push ${HOST}/${CONTAINER}:ipa_kernel tinyipa-master.vmlinuz
✓ Exists tinyipa-master.vmlinuz 5.65/5.65 MB 100.00% 0s
└─ sha256:15ed5220a397e6960a9ac6f770a07e3cc209c6870c42cbf8f388aa409d11ea71
✓ Exists application/vnd.oci.empty.v1+json 2/2 B 100.00% 0s
└─ sha256:44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a
✓ Uploaded application/vnd.oci.image.manifest.v1+json 606/606 B 100.00% 0s
└─ sha256:2d408348dd6ff2e26efc1de03616ca91d76936a27028061bc314289cecdc895f
Pushed [registry] my-container-host.domain.tld/my-project/my-container:ipa_kernel
ArtifactType: application/vnd.unknown.artifact.v1
Digest: sha256:2d408348dd6ff2e26efc1de03616ca91d76936a27028061bc314289cecdc895f
$
$ export MY_IPA_KERNEL=oci://${HOST}/${CONTAINER}:@sha256:2d408348dd6ff2e26efc1de03616ca91d76936a27028061bc314289cecdc895f
As you can see from this example, we've executed the command, and uploaded the file.
The important aspect to highlight is the digest reported at the end. This is the
manifest digest which you can utilize to generate your URL.
.. WARNING::
When constructing environment variables for your own use, specifically with
digest values, please be mindful that you will need to utilize the digest
value from your own upload, and not from the example.
.. code-block:: shell
$ oras push ${HOST}/${CONTAINER}:ipa_ramdisk tinyipa-master.gz
✓ Exists tinyipa-master.gz 91.9/91.9 MB 100.00% 0s
└─ sha256:0d92eeb98483f06111a352b673d588b1aab3efc03690c1553ef8fd8acdde15fc
✓ Exists application/vnd.oci.empty.v1+json 2/2 B 100.00% 0s
└─ sha256:44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a
✓ Uploaded application/vnd.oci.image.manifest.v1+json 602/602 B 100.00% 0s
└─ sha256:b17e53ff83539dd6d49e714b09eeb3bd0a9bb7eee2ba8716f6819f2f6ceaad13
Pushed [registry] my-container-host.domain.tld/my-project/my-container:ipa_ramdisk
ArtifactType: application/vnd.unknown.artifact.v1
Digest: sha256:b17e53ff83539dd6d49e714b09eeb3bd0a9bb7eee2ba8716f6819f2f6ceaad13
$
$ export MY_IPA_RAMDISK=oci://${HOST}/${CONTAINER}:@sha256:b17e53ff83539dd6d49e714b09eeb3bd0a9bb7eee2ba8716f6819f2f6ceaad13
As a reminder, please remember to utilize *different* tags with ORAS.
For example, you can view the current tags in the remote registry by
executing the following command.
.. code-block:: shell
$ oras repo tags --insecure $HOST/project/container
ipa_kernel
ipa_ramdisk
unrelated_item
$
Now that you have successfully uploaded an IPA kernel and ramdisk, the only
item remaining is a disk image. In the example below, we generate both a
container tag based URL and a direct manifest digest URL.
.. NOTE::
The example below sets a manifest annotation of ``disktype`` and
artifact platform. While not explicitly required, these are recommended
should you allow Ironic to resolve the disk image utilizing the container
tag as opposed to a digest URL.
.. code-block:: shell
$ oras push -a disktype=qcow2 --artifact-platform linux/x86_64 $HOST/$CONTAINER:cirros-0.6.3 ./cirros-0.6.3-x86_64-disk.img
✓ Exists cirros-0.6.3-x86_64-disk.img 20.7/20.7 MB 100.00% 0s
└─ sha256:7d6355852aeb6dbcd191bcda7cd74f1536cfe5cbf8a10495a7283a8396e4b75b
✓ Uploaded application/vnd.oci.image.config.v1+json 38/38 B 100.00% 43ms
└─ sha256:369358945e345b86304b802b704a7809f98ccbda56b0a459a269077169a0ac5a
✓ Uploaded application/vnd.oci.image.manifest.v1+json 626/626 B 100.00% 0s
└─ sha256:0a175cf13c651f44750d6a5cf0cf2f75d933bd591315d77e19105e5446b73a86
Pushed [registry] my-container-host.domain.tld/my-project/my-container:cirros-0.6.3
ArtifactType: application/vnd.unknown.artifact.v1
Digest: sha256:0a175cf13c651f44750d6a5cf0cf2f75d933bd591315d77e19105e5446b73a86
$ export MY_DISK_IMAGE_TAG_URL=oci://${HOST}/${CONTAINER}:cirros-0.6.3
$ export MY_DISK_IMAGE_DIGEST_URL=oci://${HOST}/${CONTAINER}@sha256:0a175cf13c651f44750d6a5cf0cf2f75d933bd591315d77e19105e5446b73a86


@ -12,6 +12,7 @@
# License for the specific language governing permissions and limitations
# under the License.
import hashlib
import os
import re
import time
@ -267,3 +268,148 @@ def get_checksum_from_url(checksum, image_source):
image_href=checksum,
reason=(_("Checksum file does not contain name %s")
        % expected_fname))
class TransferHelper(object):
def __init__(self, response, checksum_algo, expected_checksum):
"""Helper class to drive data download with concurrent checksum.
The TransferHelper can be used to help retrieve data from a
Python requests request invocation, where the request was set
with `stream=True`, which also builds the checksum digest as the
transfer is underway.
:param response: A populated requests.model.Response object.
:param checksum_algo: The expected checksum algorithm.
:param expected_checksum: The expected checksum of the data being
transferred.
"""
# NOTE(TheJulia): Similar code exists in IPA in regards to
# downloading and checksumming a raw image while streaming.
# If a change is required here, it might be worthwhile to
# consider if a similar change is needed in IPA.
# NOTE(TheJulia): 1 Megabyte is an attempt to always exceed the
# minimum chunk size which may be needed for proper checksum
# generation and balance the memory required. We may want to
# tune this, but 1MB has worked quite well for IPA for some time.
# This may artificially throttle transfer speeds a little in
# high performance environments as the data may get held up
# in the kernel limiting the window from scaling.
self._chunk_size = 1024 * 1024 # 1MB
self._last_check_time = time.time()
self._request = response
self._bytes_transferred = 0
self._checksum_algo = checksum_algo
self._expected_checksum = expected_checksum
self._expected_size = self._request.headers.get(
'Content-Length')
# Determine the hash algorithm and value which will be used for
# calculation and verification. MD5 is only permitted when explicitly
# allowed by configuration, and unsupported algorithms are rejected.
# NOTE(TheJulia): Regarding MD5, it is likely this will never be
# hit, but we will guard in case of future use for this method
# anyhow.
if checksum_algo == 'md5' and not CONF.agent.allow_md5_checksum:
# MD5 not permitted
LOG.error('MD5 checksum utilization is disabled by '
'configuration.')
raise exception.ImageChecksumAlgorithmFailure()
if checksum_algo in hashlib.algorithms_available:
self._hash_algo = hashlib.new(checksum_algo)
else:
raise ValueError("Unable to process checksum processing "
"for image transfer. Algorithm %s "
"is not available." % checksum_algo)
def __iter__(self):
"""Downloads and returns the next chunk of the image.
:returns: A chunk of the image. Size of 1MB.
"""
self._last_chunk_time = None
for chunk in self._request.iter_content(self._chunk_size):
# Per requests forum posts/discussions, iter_content should
# periodically yield to the caller for the client to do things
# like stopwatch and potentially interrupt the download.
# While this seems weird and doesn't exactly seem to match the
# patterns in requests and urllib3, it does appear to be the
# case. Field testing in environments where TCP sockets were
# discovered in a read hanged state were navigated with
# this code in IPA.
if chunk:
self._last_chunk_time = time.time()
if isinstance(chunk, str):
encoded_data = chunk.encode()
self._hash_algo.update(encoded_data)
self._bytes_transferred += len(encoded_data)
else:
self._hash_algo.update(chunk)
self._bytes_transferred += len(chunk)
yield chunk
elif (time.time() - self._last_chunk_time
> CONF.image_download_connection_timeout):
LOG.error('Timeout reached waiting for a chunk of data from '
'a remote server.')
raise exception.ImageDownloadError(
    self._request.url,
    'Timed out reading next chunk from webserver')
@property
def checksum_matches(self):
"""Verifies the checksum matches and returns True/False."""
checksum = self._hash_algo.hexdigest()
if checksum != self._expected_checksum:
# This is a property, let the caller figure out what it
# wants to do.
LOG.error('Verifying transfer checksum %(algo_name)s value '
'%(checksum)s against %(xfer_checksum)s.',
{'algo_name': self._hash_algo.name,
'checksum': self._expected_checksum,
'xfer_checksum': checksum})
return False
else:
LOG.debug('Verifying transfer checksum %(algo_name)s value '
'%(checksum)s against %(xfer_checksum)s.',
{'algo_name': self._hash_algo.name,
'checksum': self._expected_checksum,
'xfer_checksum': checksum})
return True
@property
def bytes_transferred(self):
"""Property value to return the number of bytes transferred."""
return self._bytes_transferred
@property
def content_length(self):
"""Property value to return the server indicated length."""
# If none, there is nothing we can do, the server didn't have
# a response.
return self._expected_size
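# Illustrative usage sketch (hypothetical URL, path, and checksum values;
# not part of this module's API): stream a blob to disk while TransferHelper
# builds the checksum digest in-line.
#
#     import requests
#     resp = requests.get('https://registry.example.com/v2/.../blobs/sha256:...',
#                         stream=True, timeout=60)
#     helper = TransferHelper(resp, 'sha256', '<expected hex digest>')
#     with open('/var/lib/ironic/images/tmp-artifact', 'wb') as output:
#         for chunk in helper:
#             output.write(chunk)
#     if not helper.checksum_matches:
#         raise exception.ImageChecksumError()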
def validate_text_checksum(payload, digest):
"""Compares the checksum of a payload versus the digest.
The purpose of this method is to take the payload string data,
and compare it to the digest value of the supplied input. The use
of this is to validate the data in cases where we have data
and need to compare it. Useful in API responses, such as those
from an OCI Container Registry.
:param payload: The supplied string with an encode method.
:param digest: The checksum value in digest form of algorithm:checksum.
:raises: ImageChecksumError when the response payload does not match the
supplied digest.
"""
split_digest = digest.split(':')
checksum_algo = split_digest[0]
checksum = split_digest[1]
hasher = hashlib.new(checksum_algo)
hasher.update(payload.encode())
if hasher.hexdigest() != checksum:
# Mismatch, something is wrong.
raise exception.ImageChecksumError()
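# Quick usage sketch for the helper above: the digest string pairs the
# algorithm name with the hex digest of the payload text. The payload value
# here is illustrative only.
#
#     import hashlib
#     payload = '{"manifests": []}'
#     digest = 'sha256:' + hashlib.sha256(payload.encode()).hexdigest()
#     validate_text_checksum(payload, digest)  # a mismatch raises
#                                              # ImageChecksumError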


@ -1062,3 +1062,26 @@ class ServiceRegistrationFailure(IronicException):
class Unauthorized(IronicException):
    code = http_client.UNAUTHORIZED
    headers = {'WWW-Authenticate': 'Basic realm="Baremetal API"'}
class ImageHostRateLimitFailure(TemporaryFailure):
_msg_fmt = _("The image registry has indicated the rate limit has been "
             "exceeded for url %(image_ref)s. Please try again later or "
             "consider using authentication.")
class ImageMatchFailure(InvalidImage):
_msg_fmt = _("The requested image lacks the required attributes to "
"identify the file to select.")
class OciImageNotSpecific(InvalidImage):
_msg_fmt = _("The requested image (%(image_ref)s) was not specific. "
"Please supply a full URL mapping to the manifest to be "
"utilized for the file download.")
class ImageServiceAuthenticationRequired(ImageUnacceptable):
_msg_fmt = _("The requested image %(image_ref)s requires "
"authentication which has not been provided. "
"Unable to proceed.")


@ -420,3 +420,17 @@ class GlanceImageService(object):
if (v.url_expires_at < max_valid_time)]
for k in keys_to_remove:
    del self._cache[k]
# TODO(TheJulia): Here because the GlanceImageService class is not based
# upon the base image service class.
@property
def is_auth_set_needed(self):
"""Property to notify the caller if it needs to set authentication."""
return False
@property
def transfer_verified_checksum(self):
"""The transferred artifact checksum."""
# FIXME(TheJulia): We should look at and see if we wire
# this up in a future change.
return None


@ -18,6 +18,7 @@
import abc
import datetime
from http import client as http_client
from operator import itemgetter
import os
import shutil
from urllib import parse as urlparse
@ -30,6 +31,7 @@ import requests
from ironic.common import exception
from ironic.common.glance_service.image_service import GlanceImageService
from ironic.common.i18n import _
from ironic.common import oci_registry
from ironic.common import utils
from ironic.conf import CONF
@ -70,6 +72,16 @@ class BaseImageService(object, metaclass=abc.ABCMeta):
UTC datetime object.
"""
@property
def is_auth_set_needed(self):
"""Property to notify the caller if it needs to set authentication."""
return False
@property
def transfer_verified_checksum(self):
"""The transferred artifact checksum."""
return None
class HttpImageService(BaseImageService):
    """Provides retrieval of disk images using HTTP."""
@ -325,6 +337,472 @@ class HttpImageService(BaseImageService):
reason=str(e))
class OciImageService(BaseImageService):
"""Image Service class for accessing an OCI Container Registry."""
# Holding place on the instantiated class for the image processing
# request to house authentication data, because we have to support
# varying authentication to backend services.
_user_auth_data = None
# Field to house the verified checksum of the last downloaded content
# by the running class.
_verified_checksum = None
_client = None
def __init__(self):
verify = strutils.bool_from_string(CONF.webserver_verify_ca,
strict=True)
# Creates a client which we can use for actions.
# Note, this is not yet authenticated!
self._client = oci_registry.OciClient(verify=verify)
def _validate_url_is_specific(self, image_href):
"""Identifies if the supplied image_href is a manifest pointer.
Identifies if the image_href value is specific, and performs basic
data validation on the digest value to ensure it is as expected.
As a note, this does *not* consider a URL with a tag value as
specific enough, because that is a starting point in the data
structure view which can have multiple artifacts nested within
that view.
:param image_href: The user supplied image_href value to evaluate
    to determine if the URL points to a specific manifest,
    or is otherwise generalized and needs to be
    identified.
:raises: OciImageNotSpecific if the supplied image_href lacks a
    required manifest digest value, or if the digest value
    is not understood.
:raises: ImageRefValidationFailed if the supplied image_href
appears to be malformed and lacking a digest value,
or if the supplied data and values are the incorrect
length and thus invalid.
"""
href = urlparse.urlparse(image_href)
# Identify if we have an @ character denoting manifest
# reference in the path.
split_path = str(href.path).split('@')
if len(split_path) < 2:
# Lacks a manifest digest pointer being referenced.
raise exception.OciImageNotSpecific(image_ref=image_href)
# Extract the digest for evaluation.
hash_array = split_path[1].split(':')
if len(hash_array) < 2:
# We cannot parse something we don't understand. Specifically the
# supplied data appears to be invalid.
raise exception.ImageRefValidationFailed(
image_href=image_href,
reason='Lacking required digest value')
algo = hash_array[0].lower()
value = hash_array[1].lower()
# Sanity check the checksum hash lengths to match types we expect.
# NOTE(TheJulia): Generally everything is sha256 with container
# registries, however there are open patches to also embrace sha512
# in the upstream registry code base.
if 'sha256' == algo:
if 64 != len(value):
raise exception.ImageRefValidationFailed(
image_href=image_href,
reason='Manifest digest length incorrect and does not '
       'match the expected length of the algorithm.')
elif 'sha512' == algo:
# While sha256 seems to be the convention, the go libraries and
# even the transport reference don't seem to explicitly set an
# expectation of what type. This is likely some future proofing
# more than anything else.
if 128 != len(value):
raise exception.ImageRefValidationFailed(
image_href=image_href,
reason='Manifest digest length incorrect and does not '
       'match the expected length of the algorithm.')
else:
LOG.error('Failed to parse %(image_href)s, unknown digest '
'algorithm %(algo)s.',
{'image_href': image_href,
'algo': algo})
raise exception.OciImageNotSpecific(image_ref=image_href)
def validate_href(self, image_href, secret=None):
"""Validate OCI image reference.
This method is an alias of the ``show`` method on this class, which
exists only for API compatibility reasons. Ultimately, the show
method performs all of the same validation required.
:param image_href: Image reference.
:param secret: Unused setting.
:raises: exception.ImageRefValidationFailed
:raises: exception.OciImageNotSpecific
:returns: Identical output to the ``show`` method on this class
as this method is an alias of the ``show``.
"""
return self.show(image_href)
def download(self, image_href, image_file):
"""Downloads image to specified location.
:param image_href: Image reference.
:param image_file: File object to write data to.
:raises: exception.ImageRefValidationFailed.
:raises: exception.ImageDownloadFailed.
:raises: exception.OciImageNotSpecific.
"""
# Call not permitted until we have a specific image_source.
self._validate_url_is_specific(image_href)
csum = self._client.download_blob_from_manifest(image_href,
image_file)
self._verified_checksum = csum
def show(self, image_href):
"""Get dictionary of image properties.
:param image_href: Image reference.
:raises: exception.ImageRefValidationFailed.
:raises: exception.OciImageNotSpecific.
:returns: dictionary of image properties. It has three of them: 'size',
'checksum', and 'digest'
"""
self._validate_url_is_specific(image_href)
manifest = self._client.get_manifest(image_href)
layers = manifest.get('layers', [{}])
size = layers[0].get('size', 0)
digest = layers[0].get('digest')
checksum = None
if digest and ':' in digest:
# This should always be the case, but just being
# defensive given array interaction.
checksum = digest.split(':')[1]
# Return values to the caller so size handling can be
# navigated with the image cache, checksum saved to make
# everyone happy, and the original digest value to help
# generate a blob url path to enable download.
return {'size': size,
'checksum': checksum,
'digest': digest}
@property
def is_auth_set_needed(self):
"""Property to notify the caller if it needs to set authentication."""
return True
@property
def transfer_verified_checksum(self):
    """The transferred artifact checksum."""
return self._verified_checksum
def set_image_auth(self, image_url, auth_data):
"""Sets the supplied auth_data dictionary on the class for use later.
Provides a mechanism to inform the image service of specific
credentials without wiring this in as a first class citizen in
all image service interfaces.
:param image_url: The image URL or reference to which the authentication
    data applies.
:param auth_data: The authentication data dictionary holding username,
    password, or other authentication data which may
    be used by this client class.
:returns: None
:raises: AssertionError should this method be called twice
in the same workflow.
"""
if self._user_auth_data:
raise AssertionError("BUG: _user_auth_data should only be set "
                     "once in an overall workflow.")
if not auth_data and not CONF.oci.authentication_config:
# We have no data, and no settings, we should just quietly
# return, there is nothing to do.
return
if auth_data:
# Set a username and password. Bearer auth expects
# no valid user name in the code path of the oci client.
# This is important, as the passwords with bearer auth are
# full tokens.
self._user_auth_data = auth_data
username = auth_data.get('username')
password = auth_data.get('password')
else:
# Set username and password to None so the OCI client loads
# auth data from configuration.
username = None
password = None
self._client.authenticate(image_url, username, password)
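# Hypothetical end-to-end sketch: authenticate with a pull secret, then
# inspect a specific manifest. The registry, repository, digest and secret
# values below are placeholders.
#
#     svc = OciImageService()
#     ref = 'oci://quay.io/project/image@sha256:' + 'f' * 64
#     svc.set_image_auth(ref, {'username': '', 'password': '<pull-secret>'})
#     details = svc.show(ref)
#     # details contains 'size', 'checksum' and 'digest' for the single
#     # blob layer referenced by the manifest.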
def identify_specific_image(self, image_href, image_download_source=None,
cpu_arch=None):
"""Identify a specific OCI Registry Artifact.
This method supports the caller, but is located in the image service
code to provide it access to the Container Registry client code which
holds the lower level methods.
The purpose of this method is to take the user requested image_href
and identify the best matching artifact attached to a container
registry's entry. This is because the container registry can
contain many artifacts which can be distributed and allocated
by different types. To achieve this goal, this method utilizes
the image_download_source to weight the preference of type of
file to look for, and the CPU architecture to enable support
for multi-arch container registries.
In order to inform the caller about the url, as well as related
data, such as the manifest which points to the artifact, artifact
digest, known original filename of the artifact, this method
returns a dictionary with several fields which may be useful
to aid in understanding of what artifact was chosen.
:param image_href: The image URL as supplied by the Ironic user.
:param image_download_source: The Ironic image_download_source
value, defaults to None. When a value of 'local' is provided,
this method prefers selection of qcow images over raw images.
Otherwise, raw images are the preference.
:param cpu_arch: The Bare Metal node's defined CPU architecture,
if any. Defaults to None. When used, a direct match is sought
in the remote container registry. If 'x86_64' or 'amd64' is used,
the code searches for the values in the remote registry
interchangeably due to OCI data model standardizing on `amd64` as
the default value for 64bit x86 Architectures.
:returns: A dictionary with multiple values to the caller to aid
in returning the required HTTP URL, but also metadata about the
selected artifact including size, filename, blob digest, related
manifest digest, the remote recorded mediaType value, if the file
appears compressed, if the file appears to be a raw disk image,
any HTTP Authorization secret, if applicable, and the OCI
image manifest URL. As needs could be different based upon
different selection algorithms and evolving standards/approaches
in use of OCI registries, the dictionary can also be empty, or
contain different values and any caller should defensively use
information as needed. If a record is *not* found, an empty
dictionary is the result set. Under normal circumstances, the
result looks something like this example.
{
'image_url': 'https://fqdn/path',
'image_size': 1234567,
'image_filename': 'filename.raw.zstd',
'image_checksum': 'f00f...',
'image_container_blob_digest': 'sha256:f00f...',
'image_media_type': 'application/zstd',
'image_compression_type': 'zstd',
'image_disk_format': 'raw',
'image_request_authorization_secret': None,
'oci_image_manifest_url': 'https://fqdn/path@sha256:123f...',
}
"""
# TODO(TheJulia): Ideally we should call the referrers endpoint
# in the remote API; however, it is *very* new, only having been
# approved in mid-2024, and is not widely available. It would allow
# the overall query sequence to take more of a streamlined flow
# as opposed to the existing code which gets the index and then
# looks into manifest data.
# See
# https://github.com/opencontainers/image-spec/pull/934
# https://github.com/opencontainers/distribution-spec/pull/335
# An image_url tells us if we've found something matching what
# we're looking for.
image_url = None
requested_image = urlparse.urlparse(image_href)
if requested_image.path and '@' in requested_image.path:
LOG.debug('We have been given a specific URL, as such we are '
'skipping specific artifact detection.')
# We have a specific URL, we don't need to do anything else.
# FIXME(TheJulia): We need to improve this. Essentially we
# need to go get the image url
manifest = self.show(image_href)
# Identify the blob URL from the defining manifest for IPA.
image_url = self._client.get_blob_url(image_href,
manifest['digest'])
return {
# Return an OCI url in case Ironic is doing the download
'oci_image_manifest_url': image_href,
# Return a checksum, so we don't make the checksum code
# angry!
'image_checksum': manifest['checksum'],
'image_url': image_url,
# NOTE(TheJulia) With the OCI data model, there is *no*
# way for us to know what the disk image format is.
# We can't look up, we're pointed at a manifest URL
# with limited information.
'image_disk_format': 'unknown',
}
# Query the remote API for an index list of manifests
artifact_index = self._client.get_artifact_index(image_href)
manifests = artifact_index.get('manifests', [])
if len(manifests) < 1:
# This is likely not going to happen, but we have nothing
# to identify and deploy based upon, so nothing found
# for user consistency.
raise exception.ImageNotFound(image_id=image_href)
if image_download_source == 'swift':
raise exception.InvalidParameterValue(
err="An image_download_source of swift is incompatible with "
"retrieval of artifacts from an OCI container registry.")
# Determine our preferences for matching
if image_download_source == 'local':
# These types are qcow2 images, we can download these and convert
# them, but it is okay for us to match a raw appearing image
# if we don't have a qcow available.
disk_format_priority = {'qcow2': 1,
'qemu': 2,
'raw': 3,
'applehv': 4}
else:
# applehv appears to be a raw image,
# raw is the Ironic community preference.
disk_format_priority = {'qcow2': 3,
'qemu': 4,
'raw': 1,
'applehv': 2}
# First thing to do, filter by disk types
# and assign a selection priority... since Ironic can handle
# several different formats without issue.
new_manifests = []
for manifest in manifests:
artifact_format = manifest.get('annotations', {}).get('disktype')
if artifact_format in disk_format_priority.keys():
manifest['_priority'] = disk_format_priority[artifact_format]
else:
manifest['_priority'] = 100
new_manifests.append(manifest)
sorted_manifests = sorted(new_manifests, key=itemgetter('_priority'))
# Iterate through the entries of manifests and evaluate them
# one by one to identify a likely item.
for manifest in sorted_manifests:
# First evaluate the architecture because Ironic can operate in
# an architecture agnostic mode... and we *can* match on it, but
# it is one of the most constraining factors.
if cpu_arch:
# NOTE(TheJulia): amd64 is the noted standard format in the
# API for x86_64. One thing, at least observing quay.io hosted
# artifacts, is that there is heavy use of x86_64 instead
# of amd64 as expected by the specification. This same sort
# of pattern extends to arm64/aarch64.
if cpu_arch in ['x86_64', 'amd64']:
possible_cpu_arch = ['x86_64', 'amd64']
elif cpu_arch in ['arm64', 'aarch64']:
possible_cpu_arch = ['aarch64', 'arm64']
else:
possible_cpu_arch = [cpu_arch]
# Extract what the architecture is noted for the image, from
# the platform field.
architecture = manifest.get('platform', {}).get('architecture')
if architecture and architecture not in possible_cpu_arch:
# skip onward, we don't have a localized match
continue
# One thing podman is doing, and an ORAS client can set for
# upload, is annotations. This is ultimately the first point
# where we can identify likely artifacts.
# We also pre-sorted on disktype earlier, so in theory based upon
# preference, we should have the desired result as our first
# matching hint which meets the criteria.
disktype = manifest.get('annotations', {}).get('disktype')
if disktype:
if disktype in disk_format_priority.keys():
identified_manifest_digest = manifest.get('digest')
blob_manifest = self._client.get_manifest(
image_href, identified_manifest_digest)
layers = blob_manifest.get('layers', [])
if len(layers) != 1:
# This is a *multilayer* artifact, meaning a container
# construction, not a blob artifact in the OCI
# container registry. Odds are we're at the end of
# the references for what the user has requested
# consideration of as well, so it is good to log here.
LOG.info('Skipping consideration of container '
         'registry manifest %s as it has multiple '
         'layers.',
         identified_manifest_digest)
continue
# NOTE(TheJulia): The resulting layer contents have a
# mandatory mediaType value, which may be something like
# application/zstd or application/octet-stream, and
# an optional org.opencontainers.image.title annotation
# which would contain the filename the file was stored
# with, in alignment with ORAS annotations. Furthermore,
# there is an optional artifactType value with OCI
# distribution spec 1.1 (mid-2024) which could have
# been stored when the artifact was uploaded,
# but is optional. In any event, this is only available
# on the manifest contents, not further up unless we have
# the newer referrers API available. As of late 2024,
# quay.io did not offer the referrers API.
chosen_layer = layers[0]
blob_digest = chosen_layer.get('digest')
# Use the client helper to assemble a blob url, so we
# have consistency with what we expect and what we parse.
image_url = self._client.get_blob_url(image_href,
blob_digest)
image_size = chosen_layer.get('size')
chosen_original_filename = chosen_layer.get(
'annotations', {}).get(
'org.opencontainers.image.title')
manifest_digest = manifest.get('digest')
media_type = chosen_layer.get('mediaType')
is_raw_image = disktype in ['raw', 'applehv']
break
else:
# The case of there being no disk type in the entry.
# The only option here is to query the manifest contents
# and base decisions upon that. :\
# We could look at the layers, count them, and maybe look at
# artifact types.
continue
if image_url:
# NOTE(TheJulia): Doing the final return dict generation as a
# last step in order to leave the door open to handling other
# types and structures for matches which don't use an annotation.
# TODO(TheJulia): We likely ought to check artifactType
# as well for any marker of the item being compressed.
# Using endswith here is a shorthand which also catches
# "+zstd"/"+gzip" suffixed media types, which are also a
# valid storage format.
if media_type.endswith('zstd'):
compression_type = 'zstd'
elif media_type.endswith('gzip'):
compression_type = 'gzip'
else:
compression_type = None
cached_auth = self._client.get_cached_auth()
# Generate new URL to reset the image_source to
# so download calls can use the OCI interface
# and code path moving forward.
url = urlparse.urlparse(image_href)
# Drop any trailing content indicating a tag
image_path = url.path.split(':')[0]
manifest = f'{url.scheme}://{url.netloc}{image_path}@{manifest_digest}' # noqa
return {
'image_url': image_url,
'image_size': image_size,
'image_filename': chosen_original_filename,
'image_checksum': blob_digest.split(':')[1],
'image_container_manifest_digest': manifest_digest,
'image_media_type': media_type,
'image_compression_type': compression_type,
'image_disk_format': 'raw' if is_raw_image else 'qcow2',
'image_request_authorization_secret': cached_auth,
'oci_image_manifest_url': manifest,
}
else:
# NOTE(TheJulia): This is likely future proofing, suggesting a
# future case where we're looking at the container, and we're not
# finding disk images, but it does look like a legitimate
# container. As such, here we're just returning an empty dict,
# and we can sort out the rest of the details once we get there.
return {}
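# Hedged usage sketch of the method above: resolve a tagged OCI reference
# to a concrete artifact for a node. The registry path and architecture
# are hypothetical, and any required authentication is assumed to already
# be configured on the service instance.
#
#     svc = OciImageService()
#     result = svc.identify_specific_image(
#         'oci://quay.io/project/disk-images:latest',
#         image_download_source='local',
#         cpu_arch='x86_64')
#     if result:
#         # A caller would typically switch image_source over to the
#         # resolved manifest URL and record the returned checksum.
#         new_source = result['oci_image_manifest_url']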
class FileImageService(BaseImageService):
    """Provides retrieval of disk images available locally on the conductor."""
@ -410,6 +888,7 @@ protocol_mapping = {
'https': HttpImageService,
'file': FileImageService,
'glance': GlanceImageService,
'oci': OciImageService,
}
@ -431,6 +910,9 @@ def get_image_service(image_href, client=None, context=None):
if uuidutils.is_uuid_like(str(image_href)):
    cls = GlanceImageService
else:
# TODO(TheJulia): Consider looking for attributes
# which suggest a container registry reference...
# because surely people will try.
raise exception.ImageRefValidationFailed(
    image_href=image_href,
    reason=_('Scheme-less image href is not a UUID.'))
@ -445,3 +927,63 @@ def get_image_service(image_href, client=None, context=None):
if cls == GlanceImageService:
    return cls(client, context)
return cls()
def get_image_service_auth_override(node, permit_user_auth=True):
"""Collect image service authentication overrides
This method is intended to collect authentication credentials
together for submission to remote image services which may have
authentication requirements which are not presently available,
or where specific authentication details are required.
:param node: A Node object instance.
:param permit_user_auth: Option to allow the caller to indicate if
user provided authentication should be permitted.
:returns: A dictionary with username and password keys containing
    the credential to utilize, or None if no value is found.
"""
# NOTE(TheJulia): This is largely necessary as in a pure OpenStack
# operating context, we assume the caller is just a glance image UUID
# and that Glance holds the secret. Ironic would then utilize its static
# authentication to interact with Glance.
# TODO(TheJulia): It was not lost on me that the overall *general* idea
# here could similarly be leveraged to *enable* private user image access.
# While that wouldn't necessarily be right here in the code, it would
# likely need to be able to be picked up for user based authentication.
if permit_user_auth and 'image_pull_secret' in node.instance_info:
return {
# Pull secrets appear to leverage basic auth, but provide a blank
# username, where the password is understood to be the pre-shared
# secret to leverage for authentication.
'username': '',
'password': node.instance_info.get('image_pull_secret'),
}
elif 'image_pull_secret' in node.driver_info:
# Enables fallback to the driver_info field, as it is considered
# administratively set.
return {
'username': '',
'password': node.driver_info.get('image_pull_secret'),
}
# In the future, we likely want to add logic here to enable conductor
# configuration housed credentials.
else:
return None
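# Hedged sketch of the precedence implemented above, using a hypothetical
# node object: instance_info wins when user auth is permitted, otherwise
# the administratively set driver_info value is used.
#
#     node.instance_info = {'image_pull_secret': 'user-secret'}
#     node.driver_info = {'image_pull_secret': 'admin-secret'}
#     get_image_service_auth_override(node)
#     # -> {'username': '', 'password': 'user-secret'}
#     get_image_service_auth_override(node, permit_user_auth=False)
#     # -> {'username': '', 'password': 'admin-secret'}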
def is_container_registry_url(image_href):
"""Determine if the supplied reference string is an OCI registry URL.
:param image_href: A string containing a url, sourced from the
original user request.
:returns: True if the URL appears to be an OCI image registry
URL. Otherwise, False.
"""
if not isinstance(image_href, str):
return False
# Possible future idea: engage urlparse, and look at just the path
# field, since shorthand style gets parsed out without a network
# location, and parses the entire string as a path so we can detect
# the shorthand url style without a protocol definition.
return image_href.startswith('oci://')
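# Illustrative dispatch sketch: an oci:// reference maps to OciImageService
# via protocol_mapping, while other schemes are unaffected. The references
# below are placeholders.
#
#     is_container_registry_url('oci://quay.io/project/image:tag')   # True
#     is_container_registry_url('https://example.com/image.qcow2')   # False
#     svc = get_image_service('oci://quay.io/project/image:tag')
#     isinstance(svc, OciImageService)                                # True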


@ -364,7 +364,22 @@ def create_esp_image_for_uefi(
raise exception.ImageCreationFailed(image_type='iso', error=e)
def fetch_into(context, image_href, image_file,
image_auth_data=None):
"""Fetches image file contents into a file.
:param context: A context object.
:param image_href: The Image URL or reference to attempt to retrieve.
:param image_file: The file handler or file name to write the requested
file contents to.
:param image_auth_data: Optional dictionary for credentials to be conveyed
from the original task to the image download
process, if required.
:returns: If a value is returned, that value was validated as the checksum.
Otherwise None indicating the process had been completed.
"""
# TODO(TheJulia): We likely need to document all of the exceptions which
# can be raised by any of the various image services here.
# TODO(vish): Improve context handling and add owner and auth data
#             when it is added to glance. Right now there is no
#             auth checking in glance, so we assume that access was
@ -376,6 +391,12 @@ def fetch_into(context, image_href, image_file):
'image_href': image_href})
start = time.time()
if image_service.is_auth_set_needed:
# Send the username/password data as a dictionary, since the
# contents can fundamentally differ dramatically by type.
image_service.set_image_auth(image_href, image_auth_data)
if isinstance(image_file, str):
    with open(image_file, "wb") as image_file_obj:
        image_service.download(image_href, image_file_obj)
@ -384,15 +405,32 @@ def fetch_into(context, image_href, image_file):
LOG.debug("Image %(image_href)s downloaded in %(time).2f seconds.",
          {'image_href': image_href, 'time': time.time() - start})
if image_service.transfer_verified_checksum:
# TODO(TheJulia): The Glance Image service client does a
# transfer related check when it retrieves the file. We might want
# to shift the model some to do that upfront across most image
# services which are able to be used that way.
# We know, thanks to a value and not an exception, that
# we have a checksum which matches the transfer.
return image_service.transfer_verified_checksum
return None
def fetch(context, image_href, path, force_raw=False,
          checksum=None, checksum_algo=None,
          image_auth_data=None):
    with fileutils.remove_path_on_error(path):
        transfer_checksum = fetch_into(context, image_href, path,
                                       image_auth_data)
        if (not transfer_checksum
                and not CONF.conductor.disable_file_checksum
                and checksum):
            checksum_utils.validate_checksum(path, checksum, checksum_algo)
# FIXME(TheJulia): need to check if we need to extract the file
# i.e. zstd... before forcing raw.
if force_raw:
    image_to_raw(image_href, path, "%s.part" % path)
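# Hedged example of the auth plumbing above for an OCI-hosted image; the
# node object, reference and destination path are hypothetical.
#
#     auth = service.get_image_service_auth_override(node)
#     fetch(context, 'oci://quay.io/project/image@sha256:' + 'f' * 64,
#           '/var/lib/ironic/images/' + node.uuid,
#           image_auth_data=auth)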
@ -468,14 +506,20 @@ def image_to_raw(image_href, path, path_tmp):
    os.rename(path_tmp, path)
def image_show(context, image_href, image_service=None, image_auth_data=None):
    if image_service is None:
        image_service = service.get_image_service(image_href, context=context)
if image_service.is_auth_set_needed:
# We need to possibly authenticate, so we should attempt to do so.
image_service.set_image_auth(image_href, image_auth_data)
return image_service.show(image_href)
def download_size(context, image_href, image_service=None,
                  image_auth_data=None):
    return image_show(context, image_href,
                      image_service=image_service,
                      image_auth_data=image_auth_data)['size']
def converted_size(path, estimate=False):
@ -647,6 +691,11 @@ def is_whole_disk_image(ctx, instance_info):
is_whole_disk_image = (not iproperties.get('kernel_id')
                       and not iproperties.get('ramdisk_id'))
elif service.is_container_registry_url(image_source):
# NOTE(TheJulia): We can safely assume, at least outright,
# that all container images are whole disk images, unless
# someone wants to add explicit support.
is_whole_disk_image = True
else:
    # Non glance image ref
    if is_source_a_path(ctx, instance_info.get('image_source')):


@ -0,0 +1,798 @@
# Copyright 2025 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# NOTE(TheJulia): This file is based upon, in part, some of the TripleO
# project container uploader.
# https://github.com/openstack-archive/tripleo-common/blame/stable/wallaby/tripleo_common/image/image_uploader.py
import base64
import json
import re
import requests
from requests.adapters import HTTPAdapter
from requests import auth as requests_auth
import tenacity
from urllib import parse
from oslo_log import log as logging
from ironic.common import checksum_utils
from ironic.common import exception
from ironic.conf import CONF
LOG = logging.getLogger(__name__)
(
CALL_MANIFEST,
CALL_BLOB,
CALL_TAGS,
) = (
'%(image)s/manifests/%(tag)s',
'%(image)s/blobs/%(digest)s',
'%(image)s/tags/list',
)
(
MEDIA_OCI_MANIFEST_V1,
MEDIA_OCI_INDEX_V1,
) = (
'application/vnd.oci.image.manifest.v1+json',
'application/vnd.oci.image.index.v1+json',
)
class MakeSession(object):
    """Helper class to uniformly create sessions.
Sessions created by this class will retry on errors with an exponential
backoff before raising an exception. Because our primary interaction is
with the container registries the adapter will also retry on 401 and
404. This is being done because registries commonly return 401 when an
image is not found, which is commonly a cache miss. See the adapter
definitions for more on retry details.
"""
def __init__(self, verify=True):
self.session = requests.Session()
self.session.verify = verify
adapter = HTTPAdapter(
max_retries=3,
pool_block=False
)
self.session.mount('http://', adapter)
self.session.mount('https://', adapter)
def create(self):
return self.__enter__()
def __enter__(self):
return self.session
def __exit__(self, *args, **kwargs):
self.session.close()
class RegistrySessionHelper(object):
"""Class with various registry session helpers
This class contains a bunch of static methods to be used when making
session requests against a container registry. The methods are primarily
used to handle authentication/reauthentication for the requests against
registries that require auth.
"""
@staticmethod
def check_status(session, request, allow_reauth=True):
"""Check request status and trigger reauth
This function can be used to check if we need to perform authentication
for a container registry request because we've gotten a 401.
"""
text = getattr(request, 'text', 'unknown')
reason = getattr(request, 'reason', 'unknown')
status_code = getattr(request, 'status_code', None)
headers = getattr(request, 'headers', {})
if status_code >= 300 and status_code != 401:
LOG.info(
'OCI client got a Non-2xx: status %s, reason %s, text %s',
status_code,
reason,
text)
if status_code == 401:
LOG.warning(
'OCI client failed: status %s, reason %s text %s',
status_code,
reason,
text)
www_auth = headers.get(
'www-authenticate',
headers.get(
'Www-Authenticate'
)
)
if www_auth:
error = None
# Handle docker.io shenanigans. docker.io will return 401
# for 403 and 404 but provide an error string. Other registries
# like registry.redhat.io and quay.io do not do this. So if
# we find an error string, check to see if we should reauth.
do_reauth = allow_reauth
if 'error=' in www_auth:
error = re.search('error="(.*?)"', www_auth).group(1)
LOG.warning(
'Error detected in auth headers: error %s', error)
do_reauth = (error == 'invalid_token' and allow_reauth)
if do_reauth:
if hasattr(session, 'reauthenticate'):
# This is a re-auth counter
reauth = int(session.headers.get('_ReAuth', 0))
reauth += 1
session.headers['_ReAuth'] = str(reauth)
session.reauthenticate(**session.auth_args)
if status_code == 429:
raise exception.ImageHostRateLimitFailure(image_ref=request.url)
request.raise_for_status()
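# Hedged usage sketch: after a registry call, feed the response back
# through check_status so a 401 can trigger re-authentication (when the
# session supports it) and a 429 surfaces as ImageHostRateLimitFailure.
# The URL below is a placeholder.
#
#     resp = session.get('https://quay.io/v2/project/image/tags/list',
#                        timeout=60)
#     RegistrySessionHelper.check_status(session=session, request=resp)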
@staticmethod
def check_redirect_trusted(request_response, request_session,
stream=True, timeout=60):
"""Check if we've been redirected to a trusted source
Because we may be using auth, we may not want to leak authentication
keys to an untrusted source. If we get a redirect, we need to check
that the redirect url is one of our sources that we trust. Otherwise
we drop the Authorization header from the redirect request. We'll
add the header back into the request session after performing the
request to ensure that future usage of the session remains authenticated.
:param: request_response: Response object of the request to check
:param: request_session: Session to use when redirecting
:param: stream: Should we stream the response of the redirect
:param: timeout: Timeout for the redirect request
"""
# we're not a redirect, just return the original response
if not (request_response.status_code >= 300
and request_response.status_code < 400):
return request_response
# parse the destination location
redir_url = parse.urlparse(request_response.headers['Location'])
# close the response since we're going to replace it
request_response.close()
auth_header = request_session.headers.pop('Authorization', None)
# ok we got a redirect, let's check where we are going
secure_cdn = CONF.oci.secure_cdn_registries
# TODO(TheJulia): Consider breaking the session calls below into
# a helper method, because as-is, it is basically impossible
# to unit test the delineation in behavior.
if len([h for h in secure_cdn if h in redir_url.netloc]) > 0:
# we're going to a trusted location, add the header back and
# return response
request_session.headers.update({'Authorization': auth_header})
request_response = request_session.get(redir_url.geturl(),
stream=stream,
timeout=timeout)
else:
# we didn't trust the place we're going, request without auth but
# add the auth back to the request session afterwards
request_response = request_session.get(redir_url.geturl(),
stream=stream,
timeout=timeout)
request_session.headers.update({'Authorization': auth_header})
request_response.encoding = 'utf-8'
# recheck status here to make sure we didn't get a 401 from
# our redirect host path.
RegistrySessionHelper.check_status(session=request_session,
request=request_response)
return request_response
@staticmethod
def get_token_from_config(fqdn):
"""Takes a FQDN for a container registry and consults auth config.
This method evaluates named configuration parameter
[oci]authentication_config and looks for pre-shared secrets
in the supplied json file. It is written to defensively
handle the file such that errors are not treated as fatal to
the overall lookup process, but errors are logged.
The expected file format is along the lines of:
{
"auths": {
"domain.name": {
"auth": "pre-shared-secret-value"
}
}
}
:param fqdn: A fully qualified domain name for interacting
with the remote image registry.
:returns: String value for the "auth" key which matches
the supplied FQDN.
"""
if not CONF.oci.authentication_config:
return
auth = None
try:
with open(CONF.oci.authentication_config, 'r') as auth_file:
auth_dict = json.load(auth_file)
except OSError as e:
LOG.error('Failed to load pre-shared authentication token '
'data: %s', e)
return
except (json.JSONDecodeError, UnicodeDecodeError) as e:
LOG.error('Unable to decode pre-shared authentication token '
'data: %s', e)
return
try:
# Limiting all key interactions here to capture any formatting
# errors in one place.
auth_dict = auth_dict['auths']
fqdn_dict = auth_dict.get(fqdn)
auth = fqdn_dict.get('auth')
except (AttributeError, KeyError):
LOG.error('There was an error while looking up authentication '
'for dns name %s. Possible misformatted file?', fqdn)
return
return auth
@staticmethod
def get_bearer_token(session, username=None, password=None,
realm=None, service=None, scope=None):
auth = None
token_param = {}
if service:
token_param['service'] = service
if scope:
token_param['scope'] = scope
if username:
# NOTE(TheJulia): This won't be invoked under the current
# client code which does not use a username. Tokens
# have the username encoded within and the remote servers
# know how to decode it.
auth = requests.auth.HTTPBasicAuth(username, password)
elif password:
# This is a case where we have a pre-shared token.
LOG.debug('Using user provided pre-shared authentication '
'token to authenticate to the remote registry.')
auth = requests.auth.HTTPBasicAuth('', password)
else:
realm_url = parse.urlparse(realm)
local_token = RegistrySessionHelper.get_token_from_config(
realm_url.netloc)
if local_token:
LOG.debug('Using a locally configured pre-shared key '
'for authentication to the remote registry.')
auth = requests.auth.HTTPBasicAuth('', local_token)
auth_req = session.get(realm, params=token_param, auth=auth,
timeout=CONF.webserver_connection_timeout)
auth_req.raise_for_status()
resp = auth_req.json()
if 'token' not in resp:
raise AttributeError('Invalid auth response, no token provided')
return resp['token']
@staticmethod
def parse_www_authenticate(header):
auth_type = None
auth_type_match = re.search('^([A-Za-z]*) ', header)
if auth_type_match:
auth_type = auth_type_match.group(1)
if not auth_type:
return (None, None, None)
realm = None
service = None
if 'realm=' in header:
realm = re.search('realm="(.*?)"', header).group(1)
if 'service=' in header:
service = re.search('service="(.*?)"', header).group(1)
return (auth_type, realm, service)
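# As an illustration (hypothetical values): a registry requiring token
# auth may reply with a header such as
#   Bearer realm="https://auth.example.com/token",service="registry.example.com"
# which this helper parses into the tuple
#   ('Bearer', 'https://auth.example.com/token', 'registry.example.com').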
@staticmethod
@tenacity.retry( # Retry up to 5 times with longer time for rate limit
reraise=True,
retry=tenacity.retry_if_exception_type(
exception.ImageHostRateLimitFailure
),
wait=tenacity.wait_random_exponential(multiplier=1.5, max=60),
stop=tenacity.stop_after_attempt(5)
)
def _action(action, request_session, *args, **kwargs):
"""Perform a session action and retry if auth fails
This function dynamically performs a specific type of call
using the provided session (get, patch, post, etc). It will
attempt a single re-authentication if the initial request
fails with a 401.
"""
_action = getattr(request_session, action)
try:
req = _action(*args, **kwargs)
if not kwargs.get('stream'):
# Only check status when not streaming; with a stream (likely
# a download) we can't call check_status because it would
# force a full content transfer.
RegistrySessionHelper.check_status(session=request_session,
request=req)
except requests.exceptions.HTTPError as e:
if e.response.status_code == 401:
req = _action(*args, **kwargs)
RegistrySessionHelper.check_status(session=request_session,
request=req)
else:
raise
return req
@staticmethod
def get(request_session, *args, **kwargs):
"""Perform a get and retry if auth fails
This function is designed to be used when we perform a get to
an authenticated source. This function will attempt a single
re-authentication request if the first one fails.
"""
return RegistrySessionHelper._action('get',
request_session,
*args,
**kwargs)
class OciClient(object):
# The cached client authorization which may be used for an
# artifact being accessed by ironic-python-agent so we can retrieve
# the authorization data and convey it to IPA without needing to
# directly handle credentials to IPA.
_cached_auth = None
def __init__(self, verify):
"""Initialize the OCI container registry client class.
:param verify: If certificate verification should be leveraged for
the underlying HTTP client.
"""
# FIXME(TheJulia): This should come from configuration
self.session = MakeSession(verify=verify).create()
def authenticate(self, image_url, username=None, password=None):
"""Authenticate to the remote container registry.
:param image_url: The URL to utilise for the remote container
registry.
:param username: The username parameter.
:param password: The password parameter.
:raises: AttributeError when an unknown authentication attribute has
been specified by the remote service.
:raises: ImageServiceAuthenticationRequired when the remote Container
registry requires authentication but we do not have
credentials.
"""
url = self._image_to_url(image_url)
image, tag = self._image_tag_from_url(url)
scope = 'repository:%s:pull' % image[1:]
url = self._build_url(url, path='/')
# If authenticate is called an additional time....
# clear the authorization in the client.
if self.session:
self.session.headers.pop('Authorization', None)
r = self.session.get(url, timeout=CONF.webserver_connection_timeout)
LOG.debug('%s status code %s', url, r.status_code)
if r.status_code == 200:
# "Auth" was successful, returning.
return self.session
if r.status_code != 401:
# Auth was rejected.
r.raise_for_status()
if 'www-authenticate' not in r.headers:
# Something is wrong and unexpected.
raise AttributeError(
'Unknown authentication method for headers: %s' % r.headers)
auth = None
www_auth = r.headers['www-authenticate']
token_param = {}
(auth_type, realm, service) = \
RegistrySessionHelper.parse_www_authenticate(www_auth)
if auth_type and auth_type.lower() == 'bearer':
LOG.debug('Using bearer token auth')
token = RegistrySessionHelper.get_bearer_token(
self.session,
username=username,
password=password,
realm=realm,
service=service,
scope=scope)
elif auth_type and auth_type.lower() == 'basic':
if not username or not password:
raise exception.ImageServiceAuthenticationRequired(
image_ref=image_url)
auth = requests_auth.HTTPBasicAuth(username, password)
rauth = self.session.get(
url, params=token_param,
auth=auth,
timeout=CONF.webserver_connection_timeout)
rauth.raise_for_status()
token = (
base64.b64encode(
bytes(username + ':' + password, 'utf-8')).decode('ascii')
)
else:
raise AttributeError(
'Unknown www-authenticate value: %s' % www_auth)
auth_header = '%s %s' % (auth_type, token)
self.session.headers['Authorization'] = auth_header
# Set a cached Authorization token value so we can extract it
# if needed, useful for enabling something else to be able to
# make that actual call.
self._cached_auth = auth_header
setattr(self.session, 'reauthenticate', self.authenticate)
setattr(
self.session,
'auth_args',
dict(
image_url=image_url,
username=username,
password=password,
session=self.session
)
)
@staticmethod
def _get_response_text(response, encoding='utf-8', force_encoding=False):
"""Return request response text
We need to set the encoding for the response, otherwise it
will attempt to detect the encoding which is very time consuming.
See https://github.com/psf/requests/issues/4235 for additional
context.
:param: response: requests Response object
:param: encoding: encoding to set if not currently set
:param: force_encoding: set response encoding always
"""
if force_encoding or not response.encoding:
response.encoding = encoding
return response.text
@classmethod
def _build_url(cls, url, path):
"""Build an HTTPS URL from the input urlparse data.
:param url: The urlparse result object with the netloc object which
is extracted and used by this method.
:param path: The path in the form of a string which is then assembled
into an HTTPS URL to be used for access.
:returns: A fully formed URL in the form of https://<netloc>/v2<path>.
"""
netloc = url.netloc
scheme = 'https'
return '%s://%s/v2%s' % (scheme, netloc, path)
def _get_manifest(self, image_url, digest=None):
if not digest:
# Caller has the digest in the url, that's fine, let's
# use that.
digest = image_url.path.split('@')[1]
image_path = image_url.path.split(':')[0]
manifest_url = self._build_url(
image_url, CALL_MANIFEST % {'image': image_path,
'tag': digest})
# Explicitly ask for the OCI artifact index
manifest_headers = {'Accept': ", ".join([MEDIA_OCI_MANIFEST_V1])}
try:
manifest_r = RegistrySessionHelper.get(
self.session,
manifest_url,
headers=manifest_headers,
timeout=CONF.webserver_connection_timeout
)
except requests.exceptions.HTTPError as e:
if e.response.status_code == 401:
# Authorization Required.
raise exception.ImageServiceAuthenticationRequired(
image_ref=manifest_url)
if e.response.status_code in (403, 404):
raise exception.ImageNotFound(
image_id=image_url.geturl())
if e.response.status_code >= 500:
raise exception.TemporaryFailure()
raise
manifest_str = self._get_response_text(manifest_r)
checksum_utils.validate_text_checksum(manifest_str, digest)
return json.loads(manifest_str)
def _get_artifact_index(self, image_url):
LOG.debug('Attempting to get the artifact index for: %s',
image_url)
parts = self._resolve_tag(image_url)
index_url = self._build_url(
image_url, CALL_MANIFEST % parts
)
# Explicitly ask for the OCI artifact index
index_headers = {'Accept': ", ".join([MEDIA_OCI_INDEX_V1])}
try:
index_r = RegistrySessionHelper.get(
self.session,
index_url,
headers=index_headers,
timeout=CONF.webserver_connection_timeout
)
except requests.exceptions.HTTPError as e:
if e.response.status_code == 401:
# Authorization Required.
raise exception.ImageServiceAuthenticationRequired(
image_ref=index_url)
if e.response.status_code in (403, 404):
raise exception.ImageNotFound(
image_id=image_url.geturl())
if e.response.status_code >= 500:
raise exception.TemporaryFailure()
raise
index_str = self._get_response_text(index_r)
# Return a dictionary to the caller so it can house the
# filtering/sorting application logic.
return json.loads(index_str)
def _resolve_tag(self, image_url):
"""Attempts to resolve tags from a container URL."""
LOG.debug('Attempting to resolve tag for: %s',
image_url)
image, tag = self._image_tag_from_url(image_url)
parts = {
'image': image,
'tag': tag
}
tags_url = self._build_url(
image_url, CALL_TAGS % parts
)
tag_headers = {'Accept': ", ".join([MEDIA_OCI_INDEX_V1])}
try:
tags_r = RegistrySessionHelper.get(
self.session, tags_url,
headers=tag_headers,
timeout=CONF.webserver_connection_timeout)
except requests.exceptions.HTTPError as e:
if e.response.status_code == 401:
# Authorization Required.
raise exception.ImageServiceAuthenticationRequired(
image_ref=tags_url)
if e.response.status_code >= 500:
raise exception.TemporaryFailure()
raise
tags = tags_r.json()['tags']
while 'next' in tags_r.links:
next_url = parse.urljoin(tags_url, tags_r.links['next']['url'])
tags_r = RegistrySessionHelper.get(
self.session, next_url,
headers=tag_headers,
timeout=CONF.webserver_connection_timeout)
tags.extend(tags_r.json()['tags'])
if tag not in tags:
raise exception.ImageNotFound(
image_id=image_url.geturl())
return parts
def get_artifact_index(self, image):
"""Retrieve an index of artifacts in the Container Registry.
:param image: The remote container registry URL in the form of
oci://host/user/container:tag.
:returns: A dictionary object representing the index of artifacts
present in the container registry, in the form of manifest
references along with any other metadata per entry which
the remote registry returns such as annotations, and
platform labeling which aids in artifact selection.
"""
image_url = self._image_to_url(image)
return self._get_artifact_index(image_url)
def get_manifest(self, image, digest=None):
"""Retrieve a manifest from the remote API.
This method is a wrapper for the _get_manifest helper, which
normalizes the input URL, performs basic sanity checking,
and then calls the underlying method to retrieve the manifest.
The manifest is then returned to the caller in the form of a
dictionary.
:param image: The full URL to the desired manifest or the URL
of the container and an accompanying digest parameter.
:param digest: The Digest value for the requested manifest.
:returns: A dictionary object representing the manifest as stored
in the remote API.
"""
LOG.debug('Attempting to get manifest for: %s', image)
if not digest and '@' in image:
# Digest must be part of the URL, this is fine!
url_split = image.split("@")
image_url = self._image_to_url(url_split[0])
digest = url_split[1]
elif digest and '@' in image:
raise AttributeError('Invalid request - Appears to attempt '
'to use a digest value and a digest in '
'the provided URL.')
else:
image_url = self._image_to_url(image)
return self._get_manifest(image_url, digest)
def get_blob_url(self, image, blob_digest):
"""Generates an HTTP representing an blob artifact.
:param image: The OCI Container URL.
:param blob_digest: The digest value representing the desired blob
artifact.
:returns: A HTTP URL string representing the blob URL which can be
utilized by an HTTP client to retrieve the artifact.
"""
if not blob_digest and '@' in image:
split_url = image.split('@')
image_url = parse.urlparse(split_url[0])
blob_digest = split_url[1]
elif blob_digest and '@' in image:
split_url = image.split('@')
image_url = parse.urlparse(split_url[0])
# The caller likely has a bug or bad pattern
# which needs to be fixed
else:
image_url = parse.urlparse(image)
# just in case, split out the tag since it is not
# used for a blob manifest lookup.
image_path = image_url.path.split(':')[0]
manifest_url = self._build_url(
image_url, CALL_BLOB % {'image': image_path,
'digest': blob_digest})
return manifest_url
def get_cached_auth(self):
"""Retrieves the cached authentication header for reuse."""
# This enables the cached authentication data to be retrieved
# to enable Ironic to provide the data without shipping
# credentials around directly.
return self._cached_auth
def download_blob_from_manifest(self, manifest_url, image_file):
"""Retrieves the requested blob from the manifest URL...
And saves the requested manifest's artifact as the requested
image_file location, and then returns the verified checksum.
:param manifest_url: The URL, in oci://host/user/container@digest
formatted artifact manifest URL. This is *not*
the digest value for the blob, which can only
be discovered by retrieving the manifest.
:param image_file: The image file object to write the blob to.
:returns: The verified digest value matching the saved artifact.
"""
LOG.debug('Starting blob download sequence for %s',
manifest_url)
manifest = self.get_manifest(manifest_url)
layers = manifest.get('layers', [])
layer_count = len(layers)
if layer_count != 1:
# This is not a blob manifest, it is the container,
# or something else we don't understand.
raise exception.ImageRefValidationFailed(
'Incorrect number of layers. Expected 1 layer, '
'found %s layers.' % layer_count)
blob_digest = layers[0].get('digest')
blob_url = self.get_blob_url(manifest_url, blob_digest)
LOG.debug('Identified download url for blob: %s', blob_url)
# One which is an OCI URL with a manifest.
try:
resp = RegistrySessionHelper.get(
self.session,
blob_url,
stream=True,
timeout=CONF.webserver_connection_timeout
)
resp = RegistrySessionHelper.check_redirect_trusted(
resp, self.session, stream=True)
if resp.status_code != 200:
raise exception.ImageRefValidationFailed(
image_href=blob_url,
reason=("Got HTTP code %s instead of 200 in response "
"to GET request.") % resp.status_code)
# Reminder: image_file, is a file object handler.
split_digest = blob_digest.split(':')
# Invoke the transfer helper so the checksum can be calculated
# in transfer.
download_helper = checksum_utils.TransferHelper(
resp, split_digest[0], split_digest[1])
# NOTE(TheJulia): If we *ever* try to have retry logic here,
# remember to image_file.seek(0) to reset position.
for chunk in download_helper:
# write the desired file out
image_file.write(chunk)
LOG.debug('Download of %(manifest)s has completed. Transferred '
'%(bytes)s of %(total)s total bytes.',
{'manifest': manifest_url,
'bytes': download_helper.bytes_transferred,
'total': download_helper.content_length})
if download_helper.checksum_matches:
return blob_digest
else:
raise exception.ImageChecksumError()
except requests.exceptions.HTTPError as e:
LOG.debug('Encountered error while attempting to download %s',
blob_url)
# Stream changes the behavior, so odds of hitting
# this area are a bit low unless an actual exception
# is raised.
if e.response.status_code == 401:
# Authorization Required.
raise exception.ImageServiceAuthenticationRequired(
image_ref=blob_url)
if e.response.status_code in (403, 404):
raise exception.ImageNotFound(image_id=blob_url)
if e.response.status_code >= 500:
raise exception.TemporaryFailure()
raise
except (OSError, requests.ConnectionError, requests.RequestException,
IOError) as e:
raise exception.ImageDownloadFailed(image_href=blob_url,
reason=str(e))
@classmethod
def _image_tag_from_url(cls, image_url):
"""Identify image and tag from image_url.
:param image_url: Input image url.
:returns: Tuple of the image path, reconstructed without the tag
or digest, and the requested tag. The tag defaults to
'latest' when one has not been identified as part of the
supplied URL.
"""
if '@' in image_url.path:
parts = image_url.path.split('@')
tag = parts[-1]
image = ':'.join(parts[:-1])
elif ':' in image_url.path:
parts = image_url.path.split(':')
tag = parts[-1]
image = ':'.join(parts[:-1])
else:
tag = 'latest'
image = image_url.path
return image, tag
@classmethod
def _image_to_url(cls, image):
"""Helper to create an OCI URL."""
if '://' not in image:
# Slight bit of future proofing in case we ever support
# identifying bare URLs.
image = 'oci://' + image
url = parse.urlparse(image)
return url

View File

@ -1328,8 +1328,16 @@ def cache_ramdisk_kernel(task, pxe_info, ipxe_enabled=False):
LOG.debug("Fetching necessary kernel and ramdisk for node %s", LOG.debug("Fetching necessary kernel and ramdisk for node %s",
node.uuid) node.uuid)
images_info = list(t_pxe_info.values()) images_info = list(t_pxe_info.values())
# Call the central auth override lookup, but signal it is not
# for *user* direct artifacts, i.e. this is system related
# activity as we're getting TFTP Image cache artifacts.
img_service_auth = service.get_image_service_auth_override(
task.node, permit_user_auth=False)
deploy_utils.fetch_images(ctx, TFTPImageCache(), images_info, deploy_utils.fetch_images(ctx, TFTPImageCache(), images_info,
CONF.force_raw_images) CONF.force_raw_images,
image_auth_data=img_service_auth)
if CONF.pxe.file_permission: if CONF.pxe.file_permission:
for info in images_info: for info in images_info:
os.chmod(info[1], CONF.pxe.file_permission) os.chmod(info[1], CONF.pxe.file_permission)

View File

@ -662,6 +662,7 @@ def wipe_deploy_internal_info(task):
node.del_driver_internal_info('agent_cached_deploy_steps')
node.del_driver_internal_info('deploy_step_index')
node.del_driver_internal_info('steps_validated')
node.del_driver_internal_info('image_source')
async_steps.remove_node_flags(node)
@ -687,6 +688,7 @@ def wipe_service_internal_info(task):
node.del_driver_internal_info('service_step_index')
node.del_driver_internal_info('service_disable_ramdisk')
node.del_driver_internal_info('steps_validated')
node.del_driver_internal_info('image_source')
async_steps.remove_node_flags(node)

View File

@ -45,6 +45,7 @@ from ironic.conf import metrics
from ironic.conf import molds
from ironic.conf import neutron
from ironic.conf import nova
from ironic.conf import oci
from ironic.conf import pxe
from ironic.conf import redfish
from ironic.conf import sensor_data
@ -84,6 +85,7 @@ metrics.register_opts(CONF)
molds.register_opts(CONF)
neutron.register_opts(CONF)
nova.register_opts(CONF)
oci.register_opts(CONF)
pxe.register_opts(CONF)
redfish.register_opts(CONF)
sensor_data.register_opts(CONF)

ironic/conf/oci.py Normal file
View File

@ -0,0 +1,63 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from ironic.common.i18n import _
group = cfg.OptGroup(name='oci',
title='OCI Container Registry Client Options')
opts = [
cfg.ListOpt('secure_cdn_registries',
default=[
'registry.redhat.io',
'registry.access.redhat.com',
'docker.io',
'registry-1.docker.io',
],
# NOTE(TheJulia): Not a mutable option because this setting
# impacts how the OCI client navigates configuration handling
# for these hosts.
mutable=False,
help=_('An option which signals to the OCI Container Registry '
'client which remote endpoints are fronted by Content '
'Distribution Networks which we may receive redirects '
'to in order to download the requested artifacts, '
'where the OCI client should go ahead and issue the '
'download request with authentication headers before '
'being asked by the remote server for user '
'authentication.')),
cfg.StrOpt('authentication_config',
mutable=True,
help=_('An option which allows pre-shared authorization keys '
'to be utilized by the Ironic service to facilitate '
'authentication with remote image registries which '
'may require authentication for all interactions. '
'Ironic will utilize these credentials to access '
'general artifacts, but Ironic will *also* prefer '
'user credentials, if supplied, for disk images. '
'This file is in the same format utilized in the '
'container ecosystem for the same purpose. '
'Structured as a JSON document with an ``auths`` '
'key, with remote registry domain FQDNs as keys, '
'and a nested ``auth`` key within that value which '
'holds the actual pre-shared secret. Ironic does '
'not cache the contents of this file at launch, '
'and the file can be updated as Ironic operates '
'in the event pre-shared tokens need to be '
'regenerated.')),
]
def register_opts(conf):
conf.register_group(group)
conf.register_opts(opts, group='oci')
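For reference, a minimal [oci]authentication_config file matching the format
described in the option help above would look like the following (the registry
FQDN and the pre-shared secret value are hypothetical):

    {
        "auths": {
            "registry.example.com": {
                "auth": "bXl1c2VyOm15c2VjcmV0"
            }
        }
    }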

View File

@ -46,6 +46,7 @@ _opts = [
('molds', ironic.conf.molds.opts),
('neutron', ironic.conf.neutron.list_opts()),
('nova', ironic.conf.nova.list_opts()),
('oci', ironic.conf.oci.opts),
('pxe', ironic.conf.pxe.opts),
('redfish', ironic.conf.redfish.opts),
('sensor_data', ironic.conf.sensor_data.opts),

View File

@ -611,7 +611,8 @@ class AgentDeploy(CustomAgentDeploy):
# NOTE(dtantsur): glance images contain a checksum; for file images we
# will recalculate the checksum anyway.
if (not service_utils.is_glance_image(image_source)
and not image_source.startswith('file://')
and not image_source.startswith('oci://')):
def _raise_missing_checksum_exception(node):
raise exception.MissingParameterValue(_(

View File

@ -207,7 +207,8 @@ def check_for_missing_params(info_dict, error_msg, param_prefix=''):
def fetch_images(ctx, cache, images_info, force_raw=True,
expected_format=None, expected_checksum=None,
expected_checksum_algo=None,
image_auth_data=None):
"""Check for available disk space and fetch images using ImageCache.
:param ctx: context
@ -227,7 +228,8 @@ def fetch_images(ctx, cache, images_info, force_raw=True,
"""
try:
image_cache.clean_up_caches(ctx, cache.master_dir, images_info,
image_auth_data)
except exception.InsufficientDiskSpace as e:
raise exception.InstanceDeployFailure(reason=e)
@ -243,7 +245,8 @@ def fetch_images(ctx, cache, images_info, force_raw=True,
force_raw=force_raw,
expected_format=expected_format,
expected_checksum=expected_checksum,
expected_checksum_algo=expected_checksum_algo,
image_auth_data=image_auth_data)
image_list.append((href, path, image_format))
return image_list
@ -1112,15 +1115,22 @@ def cache_instance_image(ctx, node, force_raw=None, expected_format=None,
i_info = parse_instance_info(node)
fileutils.ensure_tree(_get_image_dir_path(node.uuid))
image_path = _get_image_file_path(node.uuid)
if 'image_source' in node.driver_internal_info:
uuid = node.driver_internal_info.get('image_source')
else:
uuid = i_info['image_source']
img_auth = image_service.get_image_service_auth_override(node)
LOG.debug("Fetching image %(image)s for node %(uuid)s",
{'image': uuid, 'uuid': node.uuid})
image_list = fetch_images(ctx, InstanceImageCache(), [(uuid, image_path)],
force_raw, expected_format=expected_format,
expected_checksum=expected_checksum,
expected_checksum_algo=expected_checksum_algo,
image_auth_data=img_auth)
return (uuid, image_path, image_list[0][2])
@ -1194,6 +1204,12 @@ def _validate_image_url(node, url, secret=False, inspect_image=None,
# is NOT raised. In other words, that the endpoint does not
# return a 200. If we're fed a folder list, this will still
# work, which is a good and bad thing at the same time. :/
if image_service.is_container_registry_url(url):
oci = image_service.OciImageService()
image_auth = image_service.get_image_service_auth_override(node)
oci.set_image_auth(url, image_auth)
oci.validate_href(url, secret)
else:
image_service.HttpImageService().validate_href(url, secret)
except exception.ImageRefValidationFailed as e:
with excutils.save_and_reraise_exception():
@ -1236,9 +1252,19 @@ def _validate_image_url(node, url, secret=False, inspect_image=None,
def _cache_and_convert_image(task, instance_info, image_info=None):
"""Cache an image locally and convert it to RAW if needed.
:param task: The Taskmanager object related to this action.
:param instance_info: The instance_info field being used in
association with this method call.
:param image_info: The supplied image_info from Glance.
"""
# Ironic cache and serve images from httpboot server
force_raw = direct_deploy_should_convert_raw_image(task.node)
if 'image_source' in task.node.driver_internal_info:
image_source = task.node.driver_internal_info.get('image_source')
else:
image_source = task.node.instance_info.get('image_source')
if image_info is None:
initial_format = instance_info.get('image_disk_format')
@ -1260,7 +1286,7 @@ def _cache_and_convert_image(task, instance_info, image_info=None):
'%(image)s for node %(node)s',
{'image': image_path, 'node': task.node.uuid})
instance_info['image_disk_format'] = \
images.get_source_format(image_source,
image_path)
# Standard behavior is for image_checksum to be MD5,
@ -1361,9 +1387,16 @@ def build_instance_info_for_deploy(task):
""" """
node = task.node node = task.node
instance_info = node.instance_info instance_info = node.instance_info
di_info = node.driver_internal_info
iwdi = node.driver_internal_info.get('is_whole_disk_image')
image_source = instance_info['image_source']
# Remove the saved image_source in case it exists in driver_internal_info
di_info.pop('image_source', None)
# Save out driver_internal_info to prevent race conditions.
node.driver_internal_info = di_info
# Flag if we know the source is a path, used for Anaconda
# deploy interface where you can just tell anaconda to
# consume artifacts from a path. In this case, we are not
@ -1381,7 +1414,30 @@ def build_instance_info_for_deploy(task):
# and gets replaced at various points in this sequence.
instance_info['image_url'] = None
# This flag exists to lockout the overall continued flow of
# file validation if glance is in use. This is because glance
# can have objects stored in Swift and those objects can
# be directly referenced by a separate swift client. Which means,
# additional information then needs to be gathered and exchanged
# which is a separate process from just a remote http file.
is_glance_image = False
# TODO(TheJulia): We should likely look at splitting this method
# into several distinct helpers. First, glance, then OCI, then
# general file activities like download/cache or verify a remote
# URL.
# Remote image services/repositories are a little different, they house
# extra data we need to collect data from to streamline the process.
if service_utils.is_glance_image(image_source):
# We know the image source is likely rooted from a glance record,
# so we don't need to do other checks unrelated to non-glance flows.
is_glance_image = True
# TODO(TheJulia): At some point, break all of the glance check/set
# work into a helper method to be called so we minimize the amount
# of glance specific code in this overall multi-image-service flow
# for future maintainer sanity.
glance = image_service.GlanceImageService(context=task.context)
image_info = glance.show(image_source)
LOG.debug('Got image info: %(info)s for node %(node)s.',
@ -1422,7 +1478,44 @@ def build_instance_info_for_deploy(task):
if not iwdi and boot_option != 'local':
instance_info['kernel'] = image_info['properties']['kernel_id']
instance_info['ramdisk'] = image_info['properties']['ramdisk_id']
elif image_service.is_container_registry_url(image_source):
# Is an oci image, we need to figure out the particulars...
# but we *don't* need to also handle special casing with Swift.
# We will setup things so _cache_and_convert_image can do the needful
# Or just validate the remote url data.
oci = image_service.OciImageService()
image_auth = image_service.get_image_service_auth_override(task.node)
oci.set_image_auth(image_source, image_auth)
# Ask the image service method to identify and gather information
# about the image. This is different from a specific manifest supplied
# upfront.
image_info = oci.identify_specific_image(
image_source, image_download_source,
node.properties.get('cpu_arch')
)
if (image_info.get('image_disk_format') == 'unknown'
and instance_info.get('image_disk_format') == 'raw'):
# Ironic, internally, resets image_disk_format for IPA, and
# we're in a case where we've been given a specific URL, which
# might not be raw. There is no way to know what is actually
# correct, so we'll pop the value out completely, and let
# auto-detection run its course, so rebuilds or redeploy
# attempts are an available option.
image_info.pop('image_disk_format')
instance_info.pop('image_disk_format')
instance_info.update(image_info)
# Save what we are using for discoverability by the user, and
# save an override image_source to driver_internal_info for
# other methods to rely upon as the authoritative source.
image_source = instance_info.get('oci_image_manifest_url')
# This preserves an override for _cache_and_convert_image
# so it knows where to actually retrieve data from without
# us overriding image_source saved by the user, so rebuilds
# will work as expected.
di_info['image_source'] = image_source
node.driver_internal_info = di_info
if not is_glance_image:
if (image_source.startswith('file://')
or image_download_source == 'local'):
# In this case, we're explicitly downloading (or copying a file)
# hosted locally so IPA can download it directly from Ironic.
@ -1438,8 +1531,8 @@ def build_instance_info_for_deploy(task):
# has supplied us a direct URL to reference. In cases like the
# anaconda deployment interface where we might just have a path
# and not a file, or where a user may be supplying a full URL to
# a remotely hosted image, we at a minimum need to check if the
# url is valid, and address any redirects upfront.
try:
# NOTE(TheJulia): In the case we're here, we're not doing an
# integrated image based deploy, but we may also be doing
@ -1451,38 +1544,45 @@ def build_instance_info_for_deploy(task):
validated_results = {}
if isap:
# This is if the source is a path url, such as one used by
# anaconda templates to rely upon bootstrapping
# defaults.
_validate_image_url(node, image_source,
inspect_image=False)
else:
# When not isap, we can just let _validate_image_url make
# the required decision on if contents need to be sampled,
# or not. We try to pass the image_disk_format which may
# be declared by the user, and if not we set
# expected_format to None.
validate_results = _validate_image_url(
node,
image_source,
expected_format=instance_info.get(
'image_disk_format',
None))
# image_url is internal, and used by IPA and some boot
# templates. In most cases, it needs to come from image_source
# explicitly.
if 'disk_format' in validated_results:
# Ensure IPA has the value available, so write what we
# detect, if anything. This is also an item which might be
# needful with ansible deploy interface, when used in
# standalone mode.
instance_info['image_disk_format'] = \
validate_results.get('disk_format')
if not instance_info.get('image_url'):
instance_info['image_url'] = image_source
except exception.ImageRefIsARedirect as e:
# At this point, we've got a redirect response from the
# webserver, and we're going to try to handle it as a single
# redirect action, as requests, by default, only lets a single
# redirect to occur. This is likely a URL pathing fix, like a
# trailing / on a path,
# or move to HTTPS from a user supplied HTTP url.
if e.redirect_url:
# Since we've got a redirect, we need to carry the rest of
# the request logic as well, which includes recording a
# disk format, if applicable.
instance_info['image_url'] = e.redirect_url
# We need to save the image_source back out so it caches
instance_info['image_source'] = e.redirect_url
@ -1493,8 +1593,8 @@ def build_instance_info_for_deploy(task):
# telling the client to use HTTPS.
validated_results = _validate_image_url(
node, e.redirect_url,
expected_format=instance_info.get(
'image_disk_format', None))
if 'disk_format' in validated_results:
instance_info['image_disk_format'] = \
validated_results.get('disk_format')

View File

@ -71,7 +71,8 @@ class ImageCache(object):
def fetch_image(self, href, dest_path, ctx=None, force_raw=None,
expected_format=None, expected_checksum=None,
expected_checksum_algo=None,
image_auth_data=None):
"""Fetch image by given href to the destination path.
Does nothing if destination path exists and is up to date with cache
@ -111,14 +112,16 @@ class ImageCache(object):
expected_format=expected_format,
expected_checksum=expected_checksum,
expected_checksum_algo=expected_checksum_algo,
disable_validation=self._disable_validation,
image_auth_data=image_auth_data)
else:
with _concurrency_semaphore:
_fetch(ctx, href, dest_path, force_raw,
expected_format=expected_format,
expected_checksum=expected_checksum,
expected_checksum_algo=expected_checksum_algo,
disable_validation=self._disable_validation,
image_auth_data=image_auth_data)
return
# TODO(ghe): have hard links and counts the same behaviour in all fs
@ -142,6 +145,10 @@ class ImageCache(object):
# TODO(dtantsur): lock expiration time
with lockutils.lock(img_download_lock_name):
img_service = image_service.get_image_service(href, context=ctx)
if img_service.is_auth_set_needed:
# We need to possibly authenticate based on what a user
# has supplied, so we'll send that along.
img_service.set_image_auth(href, image_auth_data)
img_info = img_service.show(href)
# NOTE(vdrok): After rebuild requested image can change, so we
# should ensure that dest_path and master_path (if exists) are
@ -172,14 +179,16 @@ class ImageCache(object):
ctx=ctx, force_raw=force_raw,
expected_format=expected_format,
expected_checksum=expected_checksum,
expected_checksum_algo=expected_checksum_algo,
image_auth_data=image_auth_data)
# NOTE(dtantsur): we increased cache size - time to clean up
self.clean_up()
def _download_image(self, href, master_path, dest_path, img_info,
ctx=None, force_raw=None, expected_format=None,
expected_checksum=None, expected_checksum_algo=None,
image_auth_data=None):
"""Download image by href and store at a given path.
This method should be called with uuid-specific lock taken.
@ -194,6 +203,8 @@ class ImageCache(object):
:param expected_format: The expected original format for the image.
:param expected_checksum: The expected image checksum.
:param expected_checksum_algo: The expected image checksum algorithm.
:param image_auth_data: Dictionary with credential details which may be
required to download the file.
:raise ImageDownloadFailed: when the image cache and the image HTTP or
TFTP location are on different file system,
causing hard link to fail.
@ -208,7 +219,8 @@ class ImageCache(object):
_fetch(ctx, href, tmp_path, force_raw, expected_format,
expected_checksum=expected_checksum,
expected_checksum_algo=expected_checksum_algo,
disable_validation=self._disable_validation,
image_auth_data=image_auth_data)
if img_info.get('no_cache'):
LOG.debug("Caching is disabled for image %s", href)
@ -375,7 +387,7 @@ def _free_disk_space_for(path):
def _fetch(context, image_href, path, force_raw=False,
expected_format=None, expected_checksum=None,
expected_checksum_algo=None,
disable_validation=False, image_auth_data=None):
"""Fetch image and convert to raw format if needed."""
assert not (disable_validation and expected_format)
path_tmp = "%s.part" % path
@ -384,7 +396,8 @@ def _fetch(context, image_href, path, force_raw=False,
os.remove(path_tmp)
images.fetch(context, image_href, path_tmp, force_raw=False,
checksum=expected_checksum,
checksum_algo=expected_checksum_algo,
image_auth_data=image_auth_data)
# By default, the image format is unknown
image_format = None
disable_dii = (disable_validation
@ -396,7 +409,8 @@ def _fetch(context, image_href, path, force_raw=False,
# format known even if they are not passed to qemu-img.
remote_image_format = images.image_show(
context,
image_href,
image_auth_data=image_auth_data).get('disk_format')
else:
remote_image_format = expected_format
image_format = images.safety_check_image(path_tmp)
@ -469,7 +483,7 @@ def _clean_up_caches(directory, amount):
)
def clean_up_caches(ctx, directory, images_info, image_auth_data=None):
"""Explicitly cleanup caches based on their priority (if required).
This cleans up the caches to free up the amount of space required for the
@ -484,7 +498,8 @@ def clean_up_caches(ctx, directory, images_info):
:raises: InsufficientDiskSpace exception, if we cannot free up enough space
after trying all the caches.
"""
total_size = sum(images.download_size(ctx, uuid,
image_auth_data=image_auth_data)
for (uuid, path) in images_info)
_clean_up_caches(directory, total_size)
@ -518,6 +533,9 @@ def _delete_master_path_if_stale(master_path, href, img_info):
if service_utils.is_glance_image(href):
# Glance image contents cannot be updated without changing image's UUID
return os.path.exists(master_path)
if image_service.is_container_registry_url(href):
# OCI Images cannot be changed without changing the digest values.
return os.path.exists(master_path)
if os.path.exists(master_path):
img_mtime = img_info.get('updated_at')
if not img_mtime:

View File

@ -162,6 +162,17 @@ class IronicChecksumUtilsTestCase(base.TestCase):
self.assertEqual('f' * 64, csum)
self.assertEqual('sha256', algo)
def test_validate_text_checksum(self):
csum = ('sha256:02edbb53017ded13c286e27d14285cb82f5a'
'87f6dcbae280d6c53b5d98477bb7')
res = checksum_utils.validate_text_checksum('me0w', csum)
self.assertIsNone(res)
def test_validate_text_checksum_invalid(self):
self.assertRaises(exception.ImageChecksumError,
checksum_utils.validate_text_checksum,
'me0w', 'sha256:f00')
@mock.patch.object(image_service.HttpImageService, 'get',
autospec=True)

View File

@ -24,6 +24,8 @@ import requests
from ironic.common import exception
from ironic.common.glance_service import image_service as glance_v2_service
from ironic.common import image_service
from ironic.common.oci_registry import OciClient as ociclient
from ironic.common.oci_registry import RegistrySessionHelper as rs_helper
from ironic.tests import base
@ -714,6 +716,396 @@ class FileImageServiceTestCase(base.TestCase):
copy_mock.assert_called_once_with(self.href_path, 'file')
class OciImageServiceTestCase(base.TestCase):
def setUp(self):
super(OciImageServiceTestCase, self).setUp()
self.service = image_service.OciImageService()
self.href = 'oci://localhost/podman/machine-os:5.3'
# NOTE(TheJulia): These tests use data structures captured from
# requests from quay.io with podman's machine-os container
# image. As a result, they are a bit verbose and rather...
# annoyingly large, but they are as a result, accurate.
self.artifact_index = {
'schemaVersion': 2,
'mediaType': 'application/vnd.oci.image.index.v1+json',
'manifests': [
{
'mediaType': 'application/vnd.oci.image.manifest.v1+json',
'digest': ('sha256:9d046091b3dbeda26e1f4364a116ca8d942840'
'00f103da7310e3a4703df1d3e4'),
'size': 475,
'annotations': {'disktype': 'applehv'},
'platform': {'architecture': 'x86_64', 'os': 'linux'}},
{
'mediaType': 'application/vnd.oci.image.manifest.v1+json',
'digest': ('sha256:f2981621c1bf821ce44c1cb31c507abe6293d8'
'eea646b029c6b9dc773fa7821a'),
'size': 476,
'annotations': {'disktype': 'applehv'},
'platform': {'architecture': 'aarch64', 'os': 'linux'}},
{
'mediaType': 'application/vnd.oci.image.manifest.v1+json',
'digest': ('sha256:3e42f5c348842b9e28bdbc9382962791a791a2'
'e5cdd42ad90e7d6807396c59db'),
'size': 475,
'annotations': {'disktype': 'hyperv'},
'platform': {'architecture': 'x86_64', 'os': 'linux'}},
{
'mediaType': 'application/vnd.oci.image.manifest.v1+json',
'digest': ('sha256:7efa5128a3a82e414cc8abd278a44f0c191a28'
'067e91154c238ef8df39966008'),
'size': 476,
'annotations': {'disktype': 'hyperv'},
'platform': {'architecture': 'aarch64', 'os': 'linux'}},
{
'mediaType': 'application/vnd.oci.image.manifest.v1+json',
'digest': ('sha256:dfcb3b199378320640d78121909409599b58b8'
'012ed93320dae48deacde44d45'),
'size': 474,
'annotations': {'disktype': 'qemu'},
'platform': {'architecture': 'x86_64', 'os': 'linux'}},
{
'mediaType': 'application/vnd.oci.image.manifest.v1+json',
'digest': ('sha256:1010f100f03dba1e5e2bad9905fd9f96ba8554'
'158beb7e6f030718001fa335d8'),
'size': 475,
'annotations': {'disktype': 'qemu'},
'platform': {'architecture': 'aarch64', 'os': 'linux'}},
{
'mediaType': 'application/vnd.oci.image.manifest.v1+json',
'digest': ('sha256:605c96503253b2e8cd4d1eb46c68e633192bb9'
'b61742cffb54ad7eb3aef7ad6b'),
'size': 11538,
'platform': {'architecture': 'amd64', 'os': 'linux'}},
{
'mediaType': 'application/vnd.oci.image.manifest.v1+json',
'digest': ('sha256:d9add02195d33fa5ec9a2b35076caae88eea3a'
'7fa15f492529b56c7813949a15'),
'size': 11535,
'platform': {'architecture': 'arm64', 'os': 'linux'}}
]
}
self.empty_artifact_index = {
'schemaVersion': 2,
'mediaType': 'application/vnd.oci.image.index.v1+json',
'manifests': []
}
@mock.patch.object(ociclient, 'get_manifest', autospec=True)
@mock.patch.object(ociclient, 'get_artifact_index',
autospec=True)
def test_identify_specific_image_local(
self,
mock_get_artifact_index,
mock_get_manifest):
mock_get_artifact_index.return_value = self.artifact_index
mock_get_manifest.return_value = {
'schemaVersion': 2,
'mediaType': 'application/vnd.oci.image.manifest.v1+json',
'config': {
'mediaType': 'application/vnd.oci.empty.v1+json',
'digest': ('sha256:44136fa355b3678a1146ad16f7e8649e94fb4fc21'
'fe77e8310c060f61caaff8a'),
'size': 2,
'data': 'e30='},
'layers': [
{
'mediaType': 'application/zstd',
'digest': ('sha256:bf53aea26da8c4b2e4ca2d52db138e20fc7e73'
'0e6b34b866e9e8e39bcaaa2dc5'),
'size': 1059455878,
'annotations': {
'org.opencontainers.image.title': ('podman-machine.'
'x86_64.qemu.'
'qcow2.zst')
}
}
]
}
expected_data = {
'image_checksum': 'bf53aea26da8c4b2e4ca2d52db138e20fc7e730e6b34b866e9e8e39bcaaa2dc5', # noqa
'image_compression_type': 'zstd',
'image_container_manifest_digest': 'sha256:dfcb3b199378320640d78121909409599b58b8012ed93320dae48deacde44d45', # noqa
'image_disk_format': 'qcow2',
'image_filename': 'podman-machine.x86_64.qemu.qcow2.zst',
'image_media_type': 'application/zstd',
'image_request_authorization_secret': None,
'image_size': 1059455878,
'image_url': 'https://localhost/v2/podman/machine-os/blobs/sha256:bf53aea26da8c4b2e4ca2d52db138e20fc7e730e6b34b866e9e8e39bcaaa2dc5', # noqa
'oci_image_manifest_url': 'oci://localhost/podman/machine-os@sha256:dfcb3b199378320640d78121909409599b58b8012ed93320dae48deacde44d45' # noqa
}
img_data = self.service.identify_specific_image(
self.href, image_download_source='local')
self.assertEqual(expected_data, img_data)
mock_get_artifact_index.assert_called_once_with(mock.ANY, self.href)
mock_get_manifest.assert_called_once_with(
mock.ANY, self.href,
'sha256:dfcb3b199378320640d78121909409599b58b8012ed93320dae48de'
'acde44d45')
@mock.patch.object(ociclient, 'get_manifest', autospec=True)
@mock.patch.object(ociclient, 'get_artifact_index', autospec=True)
def test_identify_specific_image(
self, mock_get_artifact_index, mock_get_manifest):
mock_get_artifact_index.return_value = self.artifact_index
mock_get_manifest.return_value = {
'schemaVersion': 2,
'mediaType': 'application/vnd.oci.image.manifest.v1+json',
'config': {
'mediaType': 'application/vnd.oci.empty.v1+json',
'digest': ('sha256:44136fa355b3678a1146ad16f7e8649e94fb4fc21'
'fe77e8310c060f61caaff8a'),
'size': 2,
'data': 'e30='},
'layers': [
{
'mediaType': 'application/zstd',
'digest': ('sha256:047caa9c410038075055e1e41d520fc975a097'
'97838541174fa3066e58ebd8ea'),
'size': 1060062418,
'annotations': {
'org.opencontainers.image.title': ('podman-machine.'
'x86_64.applehv.'
'raw.zst')}
}
]
}
expected_data = {
'image_checksum': '047caa9c410038075055e1e41d520fc975a09797838541174fa3066e58ebd8ea', # noqa
'image_compression_type': 'zstd',
'image_container_manifest_digest': 'sha256:9d046091b3dbeda26e1f4364a116ca8d94284000f103da7310e3a4703df1d3e4', # noqa
'image_filename': 'podman-machine.x86_64.applehv.raw.zst',
'image_disk_format': 'raw',
'image_media_type': 'application/zstd',
'image_request_authorization_secret': None,
'image_size': 1060062418,
'image_url': 'https://localhost/v2/podman/machine-os/blobs/sha256:047caa9c410038075055e1e41d520fc975a09797838541174fa3066e58ebd8ea', # noqa
'oci_image_manifest_url': 'oci://localhost/podman/machine-os@sha256:9d046091b3dbeda26e1f4364a116ca8d94284000f103da7310e3a4703df1d3e4' # noqa
}
img_data = self.service.identify_specific_image(
self.href, cpu_arch='amd64')
self.assertEqual(expected_data, img_data)
mock_get_artifact_index.assert_called_once_with(mock.ANY, self.href)
mock_get_manifest.assert_called_once_with(
mock.ANY, self.href,
'sha256:9d046091b3dbeda26e1f4364a116ca8d94284000f103da7310e'
'3a4703df1d3e4')
@mock.patch.object(ociclient, 'get_manifest', autospec=True)
@mock.patch.object(ociclient, 'get_artifact_index',
autospec=True)
def test_identify_specific_image_aarch64(
self,
mock_get_artifact_index,
mock_get_manifest):
mock_get_artifact_index.return_value = self.artifact_index
mock_get_manifest.return_value = {
'schemaVersion': 2,
'mediaType': 'application/vnd.oci.image.manifest.v1+json',
'config': {
'mediaType': 'application/vnd.oci.empty.v1+json',
'digest': ('sha256:44136fa355b3678a1146ad16f7e8649e94fb4fc21'
'fe77e8310c060f61caaff8a'),
'size': 2,
'data': 'e30='},
'layers': [
{
'mediaType': 'application/zstd',
'digest': ('sha256:13b69bec70305ccd85d47a0bd6d2357381c95'
'7cf87dceb862427aace4b964a2b'),
'size': 1013782193,
'annotations': {
'org.opencontainers.image.title': ('podman-machine.'
'aarch64.applehv'
'.raw.zst')}
}
]
}
expected_data = {
'image_checksum': '13b69bec70305ccd85d47a0bd6d2357381c957cf87dceb862427aace4b964a2b', # noqa
'image_compression_type': 'zstd',
'image_container_manifest_digest': 'sha256:f2981621c1bf821ce44c1cb31c507abe6293d8eea646b029c6b9dc773fa7821a', # noqa
'image_disk_format': 'raw',
'image_filename': 'podman-machine.aarch64.applehv.raw.zst',
'image_media_type': 'application/zstd',
'image_request_authorization_secret': None,
'image_size': 1013782193,
'image_url': 'https://localhost/v2/podman/machine-os/blobs/sha256:13b69bec70305ccd85d47a0bd6d2357381c957cf87dceb862427aace4b964a2b', # noqa
'oci_image_manifest_url': 'oci://localhost/podman/machine-os@sha256:f2981621c1bf821ce44c1cb31c507abe6293d8eea646b029c6b9dc773fa7821a' # noqa
}
img_data = self.service.identify_specific_image(
self.href, cpu_arch='aarch64')
self.assertEqual(expected_data, img_data)
mock_get_artifact_index.assert_called_once_with(mock.ANY, self.href)
mock_get_manifest.assert_called_once_with(
mock.ANY, self.href,
'sha256:f2981621c1bf821ce44c1cb31c507abe6293d8eea646b029c6b9'
'dc773fa7821a')
@mock.patch.object(ociclient, 'get_manifest', autospec=True)
@mock.patch.object(ociclient, 'get_artifact_index',
autospec=True)
def test_identify_specific_image_bad_manifest(
self,
mock_get_artifact_index,
mock_get_manifest):
mock_get_artifact_index.return_value = self.empty_artifact_index
self.assertRaises(exception.ImageNotFound,
self.service.identify_specific_image,
self.href)
mock_get_artifact_index.assert_called_once_with(mock.ANY, self.href)
mock_get_manifest.assert_not_called()
@mock.patch.object(rs_helper, 'get', autospec=True)
@mock.patch('hashlib.new', autospec=True)
@mock.patch('builtins.open', autospec=True)
@mock.patch.object(ociclient, 'get_manifest', autospec=True)
def test_download_direct_manifest_reference(self, mock_get_manifest,
mock_open,
mock_hash,
mock_request):
mock_get_manifest.return_value = {
'schemaVersion': 2,
'mediaType': 'application/vnd.oci.image.manifest.v1+json',
'config': {},
'layers': [
{
'mediaType': 'application/vnd.cyclonedx+json',
'size': 402627,
'digest': ('sha256:96f33f01d5347424f947e43ff05634915f422'
'debc2ca1bb88307824ff0c4b00d')}
]
}
response = mock_request.return_value
response.status_code = 200
response.headers = {}
response.iter_content.return_value = ['some', 'content']
file_mock = mock.Mock()
mock_open.return_value.__enter__.return_value = file_mock
file_mock.read.return_value = None
hexdigest_mock = mock_hash.return_value.hexdigest
hexdigest_mock.return_value = ('96f33f01d5347424f947e43ff05634915f422'
'debc2ca1bb88307824ff0c4b00d')
self.service.download(
'oci://localhost/project/container:latest@sha256:96f33'
'f01d5347424f947e43ff05634915f422debc2ca1bb88307824ff0c4b00d',
file_mock)
mock_request.assert_called_once_with(
mock.ANY,
'https://localhost/v2/project/container/blobs/sha256:96f33f01d53'
'47424f947e43ff05634915f422debc2ca1bb88307824ff0c4b00d',
stream=True, timeout=60)
write = file_mock.write
write.assert_any_call('some')
write.assert_any_call('content')
self.assertEqual(2, write.call_count)
@mock.patch.object(rs_helper, 'get', autospec=True)
@mock.patch('hashlib.new', autospec=True)
@mock.patch('builtins.open', autospec=True)
@mock.patch.object(ociclient, '_get_manifest', autospec=True)
def test_download_direct_manifest_reference_just_digest(
self, mock_get_manifest,
mock_open,
mock_hash,
mock_request):
        # NOTE(TheJulia): This exercises the interface between the OCI image
        # service and the OCI registry client, and ultimately the
        # checksum_utils.TransferHelper logic.
mock_get_manifest.return_value = {
'schemaVersion': 2,
'mediaType': 'application/vnd.oci.image.manifest.v1+json',
'config': {},
'layers': [
{
'mediaType': 'application/vnd.cyclonedx+json',
'size': 402627,
'digest': ('sha256:96f33f01d5347424f947e43ff05634915f422'
'debc2ca1bb88307824ff0c4b00d')}
]
} # noqa
response = mock_request.return_value
response.status_code = 200
response.headers = {}
csum = ('96f33f01d5347424f947e43ff05634915f422'
'debc2ca1bb88307824ff0c4b00d')
response.iter_content.return_value = ['some', 'content']
file_mock = mock.Mock()
mock_open.return_value.__enter__.return_value = file_mock
file_mock.read.return_value = None
hexdigest_mock = mock_hash.return_value.hexdigest
hexdigest_mock.return_value = csum
self.service.download(
'oci://localhost/project/container@sha256:96f33f01d53'
'47424f947e43ff05634915f422debc2ca1bb88307824ff0c4b00d',
file_mock)
mock_request.assert_called_once_with(
mock.ANY,
'https://localhost/v2/project/container/blobs/sha256:96f33f01d53'
'47424f947e43ff05634915f422debc2ca1bb88307824ff0c4b00d',
stream=True, timeout=60)
write = file_mock.write
write.assert_any_call('some')
write.assert_any_call('content')
self.assertEqual(2, write.call_count)
self.assertEqual('sha256:' + csum,
self.service.transfer_verified_checksum)
@mock.patch.object(ociclient, '_get_manifest', autospec=True)
def test_show(self, mock_get_manifest):
layer_csum = ('96f33f01d5347424f947e43ff05634915f422debc'
'2ca1bb88307824ff0c4b00d')
mock_get_manifest.return_value = {
'schemaVersion': 2,
'mediaType': 'foo',
'config': {},
'layers': [{'mediaType': 'app/fee',
'size': 402627,
'digest': 'sha256:%s' % layer_csum}]
}
res = self.service.show(
'oci://localhost/project/container@sha256:96f33f01d53'
'47424f947e43ff05634915f422debc2ca1bb88307824ff0c4b00d')
self.assertEqual(402627, res['size'])
self.assertEqual(layer_csum, res['checksum'])
self.assertEqual('sha256:' + layer_csum, res['digest'])
@mock.patch.object(image_service.OciImageService, 'show', autospec=True)
def test_validate_href(self, mock_show):
self.service.validate_href("oci://foo")
mock_show.assert_called_once_with(mock.ANY, "oci://foo")
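    # _validate_url_is_specific only accepts references pinned to a digest
    # (e.g. oci://host/image@sha256:<hex>); bare or tag-only references are
    # rejected with OciImageNotSpecific.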
def test__validate_url_is_specific(self):
csum = 'f' * 64
self.service._validate_url_is_specific('oci://foo/bar@sha256:' + csum)
csum = 'f' * 128
self.service._validate_url_is_specific('oci://foo/bar@sha512:' + csum)
def test__validate_url_is_specific_bad_format(self):
self.assertRaises(exception.ImageRefValidationFailed,
self.service._validate_url_is_specific,
'oci://foo/bar@sha256')
def test__validate_url_is_specific_not_specific(self):
self.assertRaises(exception.OciImageNotSpecific,
self.service._validate_url_is_specific,
'oci://foo/bar')
self.assertRaises(exception.OciImageNotSpecific,
self.service._validate_url_is_specific,
'oci://foo/bar:baz')
self.assertRaises(exception.OciImageNotSpecific,
self.service._validate_url_is_specific,
'oci://foo/bar@baz:meow')
class ServiceGetterTestCase(base.TestCase):
    @mock.patch.object(glance_v2_service.GlanceImageService, '__init__',
@ -760,3 +1152,45 @@ class ServiceGetterTestCase(base.TestCase):
        for image_ref in invalid_refs:
            self.assertRaises(exception.ImageRefValidationFailed,
                              image_service.get_image_service, image_ref)
@mock.patch.object(image_service.OciImageService, '__init__',
return_value=None, autospec=True)
def test_get_image_service_oci_url(self, oci_mock):
image_hrefs = [
'oci://fqdn.tld/user/image:tag@sha256:f00f',
'oci://fqdn.tld/user/image:latest',
'oci://fqdn.tld/user/image',
]
for href in image_hrefs:
image_service.get_image_service(href)
oci_mock.assert_called_once_with(mock.ANY)
oci_mock.reset_mock()
def test_get_image_service_auth_override(self):
test_node = mock.Mock()
test_node.instance_info = {'image_pull_secret': 'foo'}
test_node.driver_info = {'image_pull_secret': 'bar'}
res = image_service.get_image_service_auth_override(test_node)
self.assertDictEqual({'username': '',
'password': 'foo'}, res)
def test_get_image_service_auth_override_no_user_auth(self):
test_node = mock.Mock()
test_node.instance_info = {'image_pull_secret': 'foo'}
test_node.driver_info = {'image_pull_secret': 'bar'}
res = image_service.get_image_service_auth_override(
test_node, permit_user_auth=False)
self.assertDictEqual({'username': '',
'password': 'bar'}, res)
def test_get_image_service_auth_override_no_data(self):
test_node = mock.Mock()
test_node.instance_info = {}
test_node.driver_info = {}
res = image_service.get_image_service_auth_override(test_node)
self.assertIsNone(res)
def test_is_container_registry_url(self):
self.assertFalse(image_service.is_container_registry_url(None))
self.assertFalse(image_service.is_container_registry_url('https://'))
self.assertTrue(image_service.is_container_registry_url('oci://.'))

View File

@ -62,6 +62,7 @@ class IronicImagesTestCase(base.TestCase):
    @mock.patch.object(builtins, 'open', autospec=True)
    def test_fetch_image_service_force_raw(self, open_mock, image_to_raw_mock,
                                           image_service_mock):
        image_service_mock.return_value.transfer_verified_checksum = None
        mock_file_handle = mock.MagicMock(spec=io.BytesIO)
        mock_file_handle.__enter__.return_value = 'file'
        open_mock.return_value = mock_file_handle
@ -82,6 +83,7 @@ class IronicImagesTestCase(base.TestCase):
    def test_fetch_image_service_force_raw_with_checksum(
            self, open_mock, image_to_raw_mock,
            image_service_mock, mock_checksum):
        image_service_mock.return_value.transfer_verified_checksum = None
        mock_file_handle = mock.MagicMock(spec=io.BytesIO)
        mock_file_handle.__enter__.return_value = 'file'
        open_mock.return_value = mock_file_handle
@ -105,6 +107,7 @@ class IronicImagesTestCase(base.TestCase):
    def test_fetch_image_service_with_checksum_mismatch(
            self, open_mock, image_to_raw_mock,
            image_service_mock, mock_checksum):
        image_service_mock.return_value.transfer_verified_checksum = None
        mock_file_handle = mock.MagicMock(spec=io.BytesIO)
        mock_file_handle.__enter__.return_value = 'file'
        open_mock.return_value = mock_file_handle
@ -130,6 +133,7 @@ class IronicImagesTestCase(base.TestCase):
    def test_fetch_image_service_force_raw_no_checksum_algo(
            self, open_mock, image_to_raw_mock,
            image_service_mock, mock_checksum):
        image_service_mock.return_value.transfer_verified_checksum = None
        mock_file_handle = mock.MagicMock(spec=io.BytesIO)
        mock_file_handle.__enter__.return_value = 'file'
        open_mock.return_value = mock_file_handle
@ -153,6 +157,7 @@ class IronicImagesTestCase(base.TestCase):
    def test_fetch_image_service_force_raw_combined_algo(
            self, open_mock, image_to_raw_mock,
            image_service_mock, mock_checksum):
        image_service_mock.return_value.transfer_verified_checksum = None
        mock_file_handle = mock.MagicMock(spec=io.BytesIO)
        mock_file_handle.__enter__.return_value = 'file'
        open_mock.return_value = mock_file_handle
@ -168,6 +173,35 @@ class IronicImagesTestCase(base.TestCase):
        image_to_raw_mock.assert_called_once_with(
            'image_href', 'path', 'path.part')
@mock.patch.object(fileutils, 'compute_file_checksum',
autospec=True)
@mock.patch.object(image_service, 'get_image_service', autospec=True)
@mock.patch.object(images, 'image_to_raw', autospec=True)
@mock.patch.object(builtins, 'open', autospec=True)
def test_fetch_image_service_auth_data_checksum(
self, open_mock, image_to_raw_mock,
svc_mock, mock_checksum):
svc_mock.return_value.transfer_verified_checksum = 'f00'
svc_mock.return_value.is_auth_set_needed = True
mock_file_handle = mock.MagicMock(spec=io.BytesIO)
mock_file_handle.__enter__.return_value = 'file'
open_mock.return_value = mock_file_handle
mock_checksum.return_value = 'f00'
images.fetch('context', 'image_href', 'path', force_raw=True,
checksum='sha512:f00', image_auth_data='meow')
# In this case, the image service does the checksum so we know
# we don't need to do a checksum pass as part of the common image
# handling code path.
mock_checksum.assert_not_called()
open_mock.assert_called_once_with('path', 'wb')
svc_mock.return_value.download.assert_called_once_with(
'image_href', 'file')
image_to_raw_mock.assert_called_once_with(
'image_href', 'path', 'path.part')
svc_mock.return_value.set_image_auth.assert_called_once_with(
'image_href', 'meow')
    @mock.patch.object(image_format_inspector, 'detect_file_format',
                       autospec=True)
    def test_image_to_raw_not_permitted_format(self, detect_format_mock):
@ -438,10 +472,12 @@ class IronicImagesTestCase(base.TestCase):
    @mock.patch.object(images, 'image_show', autospec=True)
    def test_download_size(self, show_mock):
        show_mock.return_value = {'size': 123456}
        size = images.download_size('context', 'image_href', 'image_service',
                                    image_auth_data='meow')
        self.assertEqual(123456, size)
        show_mock.assert_called_once_with('context', 'image_href',
                                          image_service='image_service',
                                          image_auth_data='meow')
    @mock.patch.object(image_format_inspector, 'detect_file_format',
                       autospec=True)
@ -540,6 +576,25 @@ class IronicImagesTestCase(base.TestCase):
        mock_igi.assert_called_once_with(image_source)
        mock_gip.assert_called_once_with('context', image_source)
@mock.patch.object(images, 'get_image_properties', autospec=True)
@mock.patch.object(image_service, 'is_container_registry_url',
autospec=True)
@mock.patch.object(glance_utils, 'is_glance_image', autospec=True)
def test_is_whole_disk_image_whole_disk_image_oci(self, mock_igi,
mock_ioi,
mock_gip):
mock_igi.return_value = False
mock_ioi.return_value = True
mock_gip.return_value = {}
instance_info = {'image_source': 'oci://image'}
image_source = instance_info['image_source']
is_whole_disk_image = images.is_whole_disk_image('context',
instance_info)
self.assertTrue(is_whole_disk_image)
mock_igi.assert_called_once_with(image_source)
mock_ioi.assert_called_once_with(image_source)
mock_gip.assert_not_called()
    @mock.patch.object(images, 'get_image_properties', autospec=True)
    @mock.patch.object(glance_utils, 'is_glance_image', autospec=True)
    def test_is_whole_disk_image_partition_non_glance(self, mock_igi,

View File

@ -0,0 +1,903 @@
# Copyright (C) 2025 Red Hat, Inc
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import builtins
import hashlib
import io
import json
from unittest import mock
from urllib import parse
from oslo_config import cfg
import requests
from ironic.common import exception
from ironic.common import oci_registry
from ironic.tests import base
CONF = cfg.CONF
class OciClientTestCase(base.TestCase):
def setUp(self):
super().setUp()
self.client = oci_registry.OciClient(verify=False)
@mock.patch.object(oci_registry, 'MakeSession',
autospec=True)
def test_client_init_make_session(self, mock_session):
oci_registry.OciClient(verify=True)
mock_session.assert_called_once_with(verify=True)
mock_session.return_value.create.assert_called_once()
def test__image_to_url(self):
url = self.client._image_to_url('oci://host/path')
self.assertEqual('host', url.netloc)
self.assertEqual('/path', url.path)
self.assertEqual('oci', url.scheme)
def test__image_to_url_adds_oci(self):
url = self.client._image_to_url('host/path')
self.assertEqual('oci', url.scheme)
self.assertEqual('host', url.netloc)
self.assertEqual('/path', url.path)
def test_image_tag_from_url(self):
url = self.client._image_to_url('oci://host/path')
img, tag = self.client._image_tag_from_url(url)
self.assertEqual('/path', img)
self.assertEqual('latest', tag)
def test_image_tag_from_url_with_tag(self):
url = self.client._image_to_url('oci://host/path:5.3')
img, tag = self.client._image_tag_from_url(url)
self.assertEqual('/path', img)
self.assertEqual('5.3', tag)
def test_image_tag_from_url_with_digest(self):
url = self.client._image_to_url('oci://host/path@sha256:f00')
img, tag = self.client._image_tag_from_url(url)
self.assertEqual('/path', img)
self.assertEqual('sha256:f00', tag)
def test_get_blob_url(self):
digest = ('sha256:' + 'a' * 64)
image = 'oci://host/project/container'
res = self.client.get_blob_url(image, digest)
self.assertEqual(
'https://host/v2/project/container/blobs/' + digest,
res)
@mock.patch.object(requests.sessions.Session, 'get', autospec=True)
class OciClientRequestTestCase(base.TestCase):
def setUp(self):
super().setUp()
self.client = oci_registry.OciClient(verify=True)
def test_get_manifest_checksum_verifies(self, get_mock):
fake_csum = 'f' * 64
get_mock.return_value.status_code = 200
get_mock.return_value.text = '{}'
self.assertRaises(
exception.ImageChecksumError,
self.client.get_manifest,
'oci://localhost/local@sha256:' + fake_csum)
get_mock.return_value.assert_has_calls([
mock.call.raise_for_status(),
mock.call.encoding.__bool__()])
get_mock.assert_called_once_with(
mock.ANY,
('https://localhost/v2/local/manifests/sha256:ffffffffff'
'ffffffffffffffffffffffffffffffffffffffffffffffffffffff'),
headers={'Accept': 'application/vnd.oci.image.manifest.v1+json'},
timeout=60)
def test_get_manifest(self, get_mock):
csum = ('44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c0'
'60f61caaff8a')
get_mock.return_value.status_code = 200
get_mock.return_value.text = '{}'
res = self.client.get_manifest(
'oci://localhost/local@sha256:' + csum)
self.assertEqual({}, res)
get_mock.return_value.assert_has_calls([
mock.call.raise_for_status(),
mock.call.encoding.__bool__()])
get_mock.assert_called_once_with(
mock.ANY,
'https://localhost/v2/local/manifests/sha256:' + csum,
headers={'Accept': 'application/vnd.oci.image.manifest.v1+json'},
timeout=60)
def test_get_manifest_auth_required(self, get_mock):
fake_csum = 'f' * 64
response = mock.Mock()
response.status_code = 401
exc = requests.exceptions.HTTPError(
response=response)
get_mock.side_effect = exc
self.assertRaises(
exception.ImageServiceAuthenticationRequired,
self.client.get_manifest,
'oci://localhost/local@sha256:' + fake_csum)
call_mock = mock.call(
mock.ANY,
('https://localhost/v2/local/manifests/sha256:ffffffffff'
'ffffffffffffffffffffffffffffffffffffffffffffffffffffff'),
headers={'Accept': 'application/vnd.oci.image.manifest.v1+json'},
timeout=60)
# Gets retried.
get_mock.assert_has_calls([call_mock, call_mock])
def test_get_manifest_image_access_denied(self, get_mock):
fake_csum = 'f' * 64
response = mock.Mock()
response.status_code = 403
exc = requests.exceptions.HTTPError(
response=response)
get_mock.side_effect = exc
self.assertRaises(
exception.ImageNotFound,
self.client.get_manifest,
'oci://localhost/local@sha256:' + fake_csum)
call_mock = mock.call(
mock.ANY,
('https://localhost/v2/local/manifests/sha256:ffffffffff'
'ffffffffffffffffffffffffffffffffffffffffffffffffffffff'),
headers={'Accept': 'application/vnd.oci.image.manifest.v1+json'},
timeout=60)
get_mock.assert_has_calls([call_mock])
def test_get_manifest_image_not_found(self, get_mock):
fake_csum = 'f' * 64
response = mock.Mock()
response.status_code = 404
exc = requests.exceptions.HTTPError(
response=response)
get_mock.side_effect = exc
self.assertRaises(
exception.ImageNotFound,
self.client.get_manifest,
'oci://localhost/local@sha256:' + fake_csum)
call_mock = mock.call(
mock.ANY,
('https://localhost/v2/local/manifests/sha256:ffffffffff'
'ffffffffffffffffffffffffffffffffffffffffffffffffffffff'),
headers={'Accept': 'application/vnd.oci.image.manifest.v1+json'},
timeout=60)
get_mock.assert_has_calls([call_mock])
def test_get_manifest_image_temporary_failure(self, get_mock):
fake_csum = 'f' * 64
response = mock.Mock()
response.status_code = 500
exc = requests.exceptions.HTTPError(
response=response)
get_mock.side_effect = exc
self.assertRaises(
exception.TemporaryFailure,
self.client.get_manifest,
'oci://localhost/local@sha256:' + fake_csum)
call_mock = mock.call(
mock.ANY,
('https://localhost/v2/local/manifests/sha256:ffffffffff'
'ffffffffffffffffffffffffffffffffffffffffffffffffffffff'),
headers={'Accept': 'application/vnd.oci.image.manifest.v1+json'},
timeout=60)
get_mock.assert_has_calls([call_mock])
@mock.patch.object(oci_registry.OciClient, '_resolve_tag',
autospec=True)
def test_get_artifact_index_with_tag(self, resolve_tag_mock, get_mock):
resolve_tag_mock.return_value = {
'image': '/local',
'tag': 'tag'
}
get_mock.return_value.status_code = 200
get_mock.return_value.text = '{}'
res = self.client.get_artifact_index(
'oci://localhost/local:tag')
self.assertEqual({}, res)
resolve_tag_mock.assert_called_once_with(
mock.ANY,
parse.urlparse('oci://localhost/local:tag'))
get_mock.return_value.assert_has_calls([
mock.call.raise_for_status(),
mock.call.encoding.__bool__()])
get_mock.assert_called_once_with(
mock.ANY,
'https://localhost/v2/local/manifests/tag',
headers={'Accept': 'application/vnd.oci.image.index.v1+json'},
timeout=60)
@mock.patch.object(oci_registry.OciClient, '_resolve_tag',
autospec=True)
def test_get_artifact_index_not_found(self, resolve_tag_mock, get_mock):
resolve_tag_mock.return_value = {
'image': '/local',
'tag': 'tag'
}
response = mock.Mock()
response.status_code = 404
exc = requests.exceptions.HTTPError(
response=response)
get_mock.side_effect = exc
self.assertRaises(
exception.ImageNotFound,
self.client.get_artifact_index,
'oci://localhost/local:tag')
resolve_tag_mock.assert_called_once_with(
mock.ANY,
parse.urlparse('oci://localhost/local:tag'))
call_mock = mock.call(
mock.ANY,
'https://localhost/v2/local/manifests/tag',
headers={'Accept': 'application/vnd.oci.image.index.v1+json'},
timeout=60)
get_mock.assert_has_calls([call_mock])
@mock.patch.object(oci_registry.OciClient, '_resolve_tag',
autospec=True)
def test_get_artifact_index_not_authorized(self, resolve_tag_mock,
get_mock):
resolve_tag_mock.return_value = {
'image': '/local',
'tag': 'tag'
}
response = mock.Mock()
response.status_code = 401
exc = requests.exceptions.HTTPError(
response=response)
get_mock.side_effect = exc
self.assertRaises(
exception.ImageServiceAuthenticationRequired,
self.client.get_artifact_index,
'oci://localhost/local:tag')
resolve_tag_mock.assert_called_once_with(
mock.ANY,
parse.urlparse('oci://localhost/local:tag'))
call_mock = mock.call(
mock.ANY,
'https://localhost/v2/local/manifests/tag',
headers={'Accept': 'application/vnd.oci.image.index.v1+json'},
timeout=60)
# Automatic retry to authenticate
get_mock.assert_has_calls([call_mock, call_mock])
@mock.patch.object(oci_registry.OciClient, '_resolve_tag',
autospec=True)
def test_get_artifact_index_temporaryfailure(self, resolve_tag_mock,
get_mock):
resolve_tag_mock.return_value = {
'image': '/local',
'tag': 'tag'
}
response = mock.Mock()
response.status_code = 500
exc = requests.exceptions.HTTPError(
response=response)
get_mock.side_effect = exc
self.assertRaises(
exception.TemporaryFailure,
self.client.get_artifact_index,
'oci://localhost/local:tag')
resolve_tag_mock.assert_called_once_with(
mock.ANY,
parse.urlparse('oci://localhost/local:tag'))
call_mock = mock.call(
mock.ANY,
'https://localhost/v2/local/manifests/tag',
headers={'Accept': 'application/vnd.oci.image.index.v1+json'},
timeout=60)
get_mock.assert_has_calls([call_mock])
@mock.patch.object(oci_registry.OciClient, '_resolve_tag',
autospec=True)
def test_get_artifact_index_access_denied(self, resolve_tag_mock,
get_mock):
resolve_tag_mock.return_value = {
'image': '/local',
'tag': 'tag'
}
response = mock.Mock()
response.status_code = 403
exc = requests.exceptions.HTTPError(
response=response)
get_mock.side_effect = exc
self.assertRaises(
exception.ImageNotFound,
self.client.get_artifact_index,
'oci://localhost/local:tag')
resolve_tag_mock.assert_called_once_with(
mock.ANY,
parse.urlparse('oci://localhost/local:tag'))
call_mock = mock.call(
mock.ANY,
'https://localhost/v2/local/manifests/tag',
headers={'Accept': 'application/vnd.oci.image.index.v1+json'},
timeout=60)
get_mock.assert_has_calls([call_mock])
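    # _resolve_tag looks the requested tag up via /v2/<image>/tags/list,
    # following pagination links, and defaults to 'latest' when the URL
    # carries no tag.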
def test__resolve_tag(self, get_mock):
response = mock.Mock()
response.json.return_value = {'tags': ['latest', 'foo', 'bar']}
response.status_code = 200
response.links = {}
get_mock.return_value = response
res = self.client._resolve_tag(
parse.urlparse('oci://localhost/local'))
self.assertDictEqual({'image': '/local', 'tag': 'latest'}, res)
call_mock = mock.call(
mock.ANY,
'https://localhost/v2/local/tags/list',
headers={'Accept': 'application/vnd.oci.image.index.v1+json'},
timeout=60)
get_mock.assert_has_calls([call_mock])
def test__resolve_tag_if_not_found(self, get_mock):
response = mock.Mock()
response.json.return_value = {'tags': ['foo', 'bar']}
response.status_code = 200
response.links = {}
get_mock.return_value = response
self.assertRaises(
exception.ImageNotFound,
self.client._resolve_tag,
parse.urlparse('oci://localhost/local'))
call_mock = mock.call(
mock.ANY,
'https://localhost/v2/local/tags/list',
headers={'Accept': 'application/vnd.oci.image.index.v1+json'},
timeout=60)
get_mock.assert_has_calls([call_mock])
def test__resolve_tag_follows_links(self, get_mock):
response = mock.Mock()
response.json.return_value = {'tags': ['foo', 'bar']}
response.status_code = 200
response.links = {'next': {'url': 'list2'}}
response2 = mock.Mock()
response2.json.return_value = {'tags': ['zoo']}
response2.status_code = 200
response2.links = {}
get_mock.side_effect = iter([response, response2])
res = self.client._resolve_tag(
parse.urlparse('oci://localhost/local:zoo'))
self.assertDictEqual({'image': '/local', 'tag': 'zoo'}, res)
call_mock = mock.call(
mock.ANY,
'https://localhost/v2/local/tags/list',
headers={'Accept': 'application/vnd.oci.image.index.v1+json'},
timeout=60)
call_mock_2 = mock.call(
mock.ANY,
'https://localhost/v2/local/tags/list2',
headers={'Accept': 'application/vnd.oci.image.index.v1+json'},
timeout=60)
get_mock.assert_has_calls([call_mock, call_mock_2])
def test__resolve_tag_auth_needed(self, get_mock):
response = mock.Mock()
response.json.return_value = {}
response.status_code = 401
response.text = 'Authorization Required'
response.links = {}
exc = requests.exceptions.HTTPError(
response=response)
get_mock.side_effect = exc
self.assertRaises(
exception.ImageServiceAuthenticationRequired,
self.client._resolve_tag,
parse.urlparse('oci://localhost/local'))
call_mock = mock.call(
mock.ANY,
'https://localhost/v2/local/tags/list',
headers={'Accept': 'application/vnd.oci.image.index.v1+json'},
timeout=60)
get_mock.assert_has_calls([call_mock])
def test__resolve_tag_temp_failure(self, get_mock):
response = mock.Mock()
response.json.return_value = {}
response.status_code = 500
response.text = 'Server on vacation'
response.links = {}
exc = requests.exceptions.HTTPError(
response=response)
get_mock.side_effect = exc
self.assertRaises(
exception.TemporaryFailure,
self.client._resolve_tag,
parse.urlparse('oci://localhost/local'))
call_mock = mock.call(
mock.ANY,
'https://localhost/v2/local/tags/list',
headers={'Accept': 'application/vnd.oci.image.index.v1+json'},
timeout=60)
get_mock.assert_has_calls([call_mock])
def test_authenticate_noop(self, get_mock):
"""Test authentication when the remote endpoint doesn't require it."""
response = mock.Mock()
response.status_code = 200
get_mock.return_value = response
self.client.authenticate(
'oci://localhost/foo/bar:meow',
username='foo',
password='bar')
get_mock.assert_has_calls([
mock.call(mock.ANY, 'https://localhost/v2/', timeout=60)])
def test_authenticate_401_no_header(self, get_mock):
"""Test authentication when the remote endpoint doesn't require it."""
response = mock.Mock()
response.status_code = 401
response.headers = {}
get_mock.return_value = response
self.assertRaisesRegex(
AttributeError,
'Unknown authentication method',
self.client.authenticate,
'oci://localhost/foo/bar:meow',
username='foo',
password='bar')
get_mock.assert_has_calls([
mock.call(mock.ANY, 'https://localhost/v2/', timeout=60)])
def test_authenticate_401_bad_header(self, get_mock):
"""Test authentication when the remote endpoint doesn't require it."""
response = mock.Mock()
response.status_code = 401
response.headers = {'www-authenticate': 'magic'}
get_mock.return_value = response
self.assertRaisesRegex(
AttributeError,
'Unknown www-authenticate value',
self.client.authenticate,
'oci://localhost/foo/bar:meow',
username='foo',
password='bar')
get_mock.assert_has_calls([
mock.call(mock.ANY, 'https://localhost/v2/', timeout=60)])
def test_authenticate_401_bearer_auth(self, get_mock):
self.assertIsNone(self.client._cached_auth)
self.assertIsNone(self.client.session.headers.get('Authorization'))
response = mock.Mock()
response.status_code = 401
response.json.return_value = {'token': 'me0w'}
response.headers = {'www-authenticate': 'bearer realm="foo"'}
response2 = mock.Mock()
response2.status_code = 200
response2.json.return_value = {'token': 'me0w'}
get_mock.side_effect = iter([response, response2])
self.client.authenticate(
'oci://localhost/foo/bar:meow',
username='',
password='bar')
get_mock.assert_has_calls([
mock.call(mock.ANY, 'https://localhost/v2/', timeout=60),
mock.call(mock.ANY, 'foo',
params={'scope': 'repository:foo/bar:pull'},
auth=mock.ANY, timeout=60)])
self.assertIsNotNone(self.client._cached_auth)
self.assertEqual('bearer me0w',
self.client.session.headers['Authorization'])
def test_authenticate_401_basic_auth_no_username(self, get_mock):
self.assertIsNone(self.client._cached_auth)
self.assertIsNone(self.client.session.headers.get('Authorization'))
response = mock.Mock()
response.status_code = 401
response.headers = {'www-authenticate': 'basic service="foo"'}
get_mock.return_value = response
self.assertRaises(
exception.ImageServiceAuthenticationRequired,
self.client.authenticate,
'oci://localhost/foo/bar:meow',
username='',
password='bar')
get_mock.assert_has_calls([
mock.call(mock.ANY, 'https://localhost/v2/', timeout=60)])
def test_authenticate_401_basic_auth(self, get_mock):
self.assertIsNone(self.client._cached_auth)
self.assertIsNone(self.client.session.headers.get('Authorization'))
response = mock.Mock()
response.status_code = 401
response.headers = {'www-authenticate': 'basic service="foo"'}
response2 = mock.Mock()
response2.status_code = 200
get_mock.side_effect = iter([response, response2])
self.client.authenticate(
'oci://localhost/foo/bar:meow',
username='user',
password='bar')
get_mock.assert_has_calls([
mock.call(mock.ANY, 'https://localhost/v2/', timeout=60),
mock.call(mock.ANY, 'https://localhost/v2/',
params={},
auth=mock.ANY, timeout=60)])
self.assertIsNotNone(self.client._cached_auth)
self.assertEqual('basic dXNlcjpiYXI=',
self.client.session.headers['Authorization'])
@mock.patch.object(oci_registry.RegistrySessionHelper,
'get_token_from_config',
autospec=True)
def test_authenticate_401_fallback_to_service_config(self, token_mock,
get_mock):
self.assertIsNone(self.client._cached_auth)
self.assertIsNone(self.client.session.headers.get('Authorization'))
response = mock.Mock()
response.status_code = 401
response.headers = {
'www-authenticate': 'bearer realm="https://foo/bar"'}
response2 = mock.Mock()
response2.status_code = 200
response2.json.return_value = {'token': 'me0w'}
get_mock.side_effect = iter([response, response2])
self.client.authenticate(
'oci://localhost/foo/bar:meow',
username=None,
password=None)
get_mock.assert_has_calls([
mock.call(mock.ANY, 'https://localhost/v2/', timeout=60),
mock.call(mock.ANY, 'https://foo/bar',
params={'scope': 'repository:foo/bar:pull'},
auth=mock.ANY, timeout=60)])
self.assertIsNotNone(self.client._cached_auth)
self.assertEqual('bearer me0w',
self.client.session.headers['Authorization'])
token_mock.assert_called_once_with('foo')
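    # download_blob_from_manifest fetches the manifest by digest, then streams
    # the referenced layer blob (following any redirect the registry returns)
    # while verifying both digests via hashlib.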
@mock.patch.object(hashlib, 'new', autospec=True)
def test_download_blob_from_manifest(self, mock_hash, get_mock):
CONF.set_override('secure_cdn_registries', ['localhost'], group='oci')
self.client.session.headers = {'Authorization': 'bearer zoo'}
mock_file = mock.MagicMock(spec=io.BytesIO)
mock_hash.return_value.hexdigest.side_effect = iter([
('44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f'
'61caaff8a'),
('2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e8'
'86266e7ae')
])
csum = ('44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c0'
'60f61caaff8a')
get_1 = mock.Mock()
get_1.status_code = 200
manifest = {
'layers': [{
'digest': ('sha256:2c26b46b68ffc68ff99b453c1d30413413422d706'
'483bfa0f98a5e886266e7ae')}]
}
get_1.text = json.dumps(manifest)
get_2 = mock.Mock()
get_2.status_code = 301
get_2.headers = {'Location': 'https://localhost/foo/sha'}
get_3 = mock.Mock()
get_3.status_code = 200
get_3.iter_content.return_value = ['some', 'content']
get_mock.side_effect = iter([get_1, get_2, get_3])
res = self.client.download_blob_from_manifest(
'oci://localhost/foo/bar@sha256:' + csum,
mock_file)
mock_file.write.assert_has_calls([
mock.call('some'),
mock.call('content')])
self.assertEqual(
('sha256:2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98'
'a5e886266e7ae'),
res)
get_mock.assert_has_calls([
mock.call(
mock.ANY,
('https://localhost/v2/foo/bar/manifests/sha256:44136fa355b'
'3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a'),
headers={
'Accept': 'application/vnd.oci.image.manifest.v1+json'},
timeout=60),
mock.call(
mock.ANY,
('https://localhost/v2/foo/bar/blobs/sha256:2c26b46b68ffc68f'
'f99b453c1d30413413422d706483bfa0f98a5e886266e7ae'),
stream=True,
timeout=60),
mock.call(
mock.ANY,
'https://localhost/foo/sha',
stream=True,
timeout=60)
])
@mock.patch.object(hashlib, 'new', autospec=True)
def test_download_blob_from_manifest_code_check(self, mock_hash,
get_mock):
mock_file = mock.MagicMock(spec=io.BytesIO)
mock_hash.return_value.hexdigest.side_effect = iter([
('44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f'
'61caaff8a'),
('2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e8'
'86266e7ae')
])
csum = ('44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c0'
'60f61caaff8a')
get_1 = mock.Mock()
get_1.status_code = 200
manifest = {
'layers': [{
'digest': ('sha256:2c26b46b68ffc68ff99b453c1d30413413422d706'
'483bfa0f98a5e886266e7ae')}]
}
get_1.text = json.dumps(manifest)
get_2 = mock.Mock()
get_2.status_code = 301
get_2.headers = {'Location': 'https://localhost/foo/sha'}
get_3 = mock.Mock()
get_3.status_code = 204
get_3.iter_content.return_value = ['some', 'content']
get_mock.side_effect = iter([get_1, get_2, get_3])
self.assertRaisesRegex(
exception.ImageRefValidationFailed,
'Got HTTP code 204',
self.client.download_blob_from_manifest,
'oci://localhost/foo/bar@sha256:' + csum,
mock_file)
mock_file.write.assert_not_called()
get_mock.assert_has_calls([
mock.call(
mock.ANY,
('https://localhost/v2/foo/bar/manifests/sha256:44136fa355b'
'3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a'),
headers={
'Accept': 'application/vnd.oci.image.manifest.v1+json'},
timeout=60),
mock.call(
mock.ANY,
('https://localhost/v2/foo/bar/blobs/sha256:2c26b46b68ffc68f'
'f99b453c1d30413413422d706483bfa0f98a5e886266e7ae'),
stream=True,
timeout=60),
mock.call(
mock.ANY,
'https://localhost/foo/sha',
stream=True,
timeout=60)
])
@mock.patch.object(hashlib, 'new', autospec=True)
def test_download_blob_from_manifest_code_401(self, mock_hash,
get_mock):
self.client.session.headers = {'Authorization': 'bearer zoo'}
mock_file = mock.MagicMock(spec=io.BytesIO)
mock_hash.return_value.hexdigest.side_effect = iter([
('44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f'
'61caaff8a'),
('2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e8'
'86266e7ae')
])
csum = ('44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c0'
'60f61caaff8a')
get_1 = mock.Mock()
get_1.status_code = 200
manifest = {
'layers': [{
'digest': ('sha256:2c26b46b68ffc68ff99b453c1d30413413422d706'
'483bfa0f98a5e886266e7ae')}]
}
get_1.text = json.dumps(manifest)
get_2 = mock.Mock()
get_2.status_code = 401
get_2_exc = requests.exceptions.HTTPError(
response=get_2)
# Authentication is automatically retried, hence
# needing to return exceptions twice.
get_mock.side_effect = iter([get_1, get_2_exc, get_2_exc])
self.assertRaises(
exception.ImageServiceAuthenticationRequired,
self.client.download_blob_from_manifest,
'oci://localhost/foo/bar@sha256:' + csum,
mock_file)
mock_file.write.assert_not_called()
get_mock.assert_has_calls([
mock.call(
mock.ANY,
('https://localhost/v2/foo/bar/manifests/sha256:44136fa355b'
'3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a'),
headers={
'Accept': 'application/vnd.oci.image.manifest.v1+json'},
timeout=60),
mock.call(
mock.ANY,
('https://localhost/v2/foo/bar/blobs/sha256:2c26b46b68ffc68f'
'f99b453c1d30413413422d706483bfa0f98a5e886266e7ae'),
stream=True,
timeout=60),
mock.call(
mock.ANY,
('https://localhost/v2/foo/bar/blobs/sha256:2c26b46b68ffc68f'
'f99b453c1d30413413422d706483bfa0f98a5e886266e7ae'),
stream=True,
timeout=60),
])
@mock.patch.object(hashlib, 'new', autospec=True)
def test_download_blob_from_manifest_code_404(self, mock_hash,
get_mock):
mock_file = mock.MagicMock(spec=io.BytesIO)
mock_hash.return_value.hexdigest.side_effect = iter([
('44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f'
'61caaff8a'),
('2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e8'
'86266e7ae')
])
csum = ('44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c0'
'60f61caaff8a')
get_1 = mock.Mock()
get_1.status_code = 200
manifest = {
'layers': [{
'digest': ('sha256:2c26b46b68ffc68ff99b453c1d30413413422d706'
'483bfa0f98a5e886266e7ae')}]
}
get_1.text = json.dumps(manifest)
get_2 = mock.Mock()
get_2.status_code = 404
get_2_exc = requests.exceptions.HTTPError(
response=get_2)
get_mock.side_effect = iter([get_1, get_2_exc])
self.assertRaises(
exception.ImageNotFound,
self.client.download_blob_from_manifest,
'oci://localhost/foo/bar@sha256:' + csum,
mock_file)
mock_file.write.assert_not_called()
get_mock.assert_has_calls([
mock.call(
mock.ANY,
('https://localhost/v2/foo/bar/manifests/sha256:44136fa355b'
'3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a'),
headers={
'Accept': 'application/vnd.oci.image.manifest.v1+json'},
timeout=60),
mock.call(
mock.ANY,
('https://localhost/v2/foo/bar/blobs/sha256:2c26b46b68ffc68f'
'f99b453c1d30413413422d706483bfa0f98a5e886266e7ae'),
stream=True,
timeout=60),
])
@mock.patch.object(hashlib, 'new', autospec=True)
def test_download_blob_from_manifest_code_403(self, mock_hash,
get_mock):
mock_file = mock.MagicMock(spec=io.BytesIO)
mock_hash.return_value.hexdigest.side_effect = iter([
('44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f'
'61caaff8a'),
('2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e8'
'86266e7ae')
])
csum = ('44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c0'
'60f61caaff8a')
get_1 = mock.Mock()
get_1.status_code = 200
manifest = {
'layers': [{
'digest': ('sha256:2c26b46b68ffc68ff99b453c1d30413413422d706'
'483bfa0f98a5e886266e7ae')}]
}
get_1.text = json.dumps(manifest)
get_2 = mock.Mock()
get_2.status_code = 403
get_2_exc = requests.exceptions.HTTPError(
response=get_2)
get_mock.side_effect = iter([get_1, get_2_exc])
self.assertRaises(
exception.ImageNotFound,
self.client.download_blob_from_manifest,
'oci://localhost/foo/bar@sha256:' + csum,
mock_file)
mock_file.write.assert_not_called()
self.assertEqual(2, get_mock.call_count)
@mock.patch.object(hashlib, 'new', autospec=True)
def test_download_blob_from_manifest_code_500(self, mock_hash,
get_mock):
mock_file = mock.MagicMock(spec=io.BytesIO)
mock_hash.return_value.hexdigest.side_effect = iter([
('44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f'
'61caaff8a'),
('2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e8'
'86266e7ae')
])
csum = ('44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c0'
'60f61caaff8a')
get_1 = mock.Mock()
get_1.status_code = 200
manifest = {
'layers': [{
'digest': ('sha256:2c26b46b68ffc68ff99b453c1d30413413422d706'
'483bfa0f98a5e886266e7ae')}]
}
get_1.text = json.dumps(manifest)
get_2 = mock.Mock()
get_2.status_code = 500
get_2_exc = requests.exceptions.HTTPError(
response=get_2)
get_mock.side_effect = iter([get_1, get_2_exc])
self.assertRaises(
exception.TemporaryFailure,
self.client.download_blob_from_manifest,
'oci://localhost/foo/bar@sha256:' + csum,
mock_file)
mock_file.write.assert_not_called()
self.assertEqual(2, get_mock.call_count)
class TestRegistrySessionHelper(base.TestCase):
def test_get_token_from_config_default(self):
self.assertIsNone(
oci_registry.RegistrySessionHelper.get_token_from_config(
'foo.bar'))
@mock.patch.object(builtins, 'open', autospec=True)
def test_get_token_from_config(self, mock_open):
CONF.set_override('authentication_config', '/tmp/foo',
group='oci')
mock_file = mock.MagicMock(spec=io.BytesIO)
mock_file.__enter__.return_value = \
"""{"auths": {"foo.fqdn": {"auth": "secret"}}}"""
mock_open.return_value = mock_file
res = oci_registry.RegistrySessionHelper.get_token_from_config(
'foo.fqdn')
self.assertEqual('secret', res)
@mock.patch.object(builtins, 'open', autospec=True)
def test_get_token_from_config_no_match(self, mock_open):
CONF.set_override('authentication_config', '/tmp/foo',
group='oci')
mock_file = mock.MagicMock(spec=io.BytesIO)
mock_file.__enter__.return_value = \
"""{"auths": {"bar.fqdn": {}}}"""
mock_open.return_value = mock_file
res = oci_registry.RegistrySessionHelper.get_token_from_config(
'foo.fqdn')
self.assertIsNone(res)
@mock.patch.object(builtins, 'open', autospec=True)
def test_get_token_from_config_bad_file(self, mock_open):
CONF.set_override('authentication_config', '/tmp/foo',
group='oci')
mock_file = mock.MagicMock(spec=io.BytesIO)
mock_file.__enter__.return_value = \
"""{"auths":..."""
mock_open.return_value = mock_file
res = oci_registry.RegistrySessionHelper.get_token_from_config(
'foo.fqdn')
self.assertIsNone(res)

View File

@ -1646,22 +1646,31 @@ class PXEInterfacesTestCase(db_base.DbTestCase):
            mock.ANY,
            [('deploy_kernel',
              image_path)],
            True,
            image_auth_data=None)
    @mock.patch.object(base_image_service, 'get_image_service_auth_override',
                       autospec=True)
    @mock.patch.object(os, 'chmod', autospec=True)
    @mock.patch.object(pxe_utils, 'TFTPImageCache', lambda: None)
    @mock.patch.object(pxe_utils, 'ensure_tree', autospec=True)
    @mock.patch.object(deploy_utils, 'fetch_images', autospec=True)
    def test_cache_ramdisk_kernel(self, mock_fetch_image, mock_ensure_tree,
                                  mock_chmod, mock_get_auth):
        auth_dict = {'foo': 'bar'}
        fake_pxe_info = pxe_utils.get_image_info(self.node)
        expected_path = os.path.join(CONF.pxe.tftp_root, self.node.uuid)
        mock_get_auth.return_value = auth_dict
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            pxe_utils.cache_ramdisk_kernel(task, fake_pxe_info)
            mock_get_auth.assert_called_once_with(
                task.node,
                permit_user_auth=False)
        mock_ensure_tree.assert_called_with(expected_path)
        mock_fetch_image.assert_called_once_with(
            self.context, mock.ANY, list(fake_pxe_info.values()), True,
            image_auth_data=auth_dict)
    @mock.patch.object(os, 'chmod', autospec=True)
    @mock.patch.object(pxe_utils, 'TFTPImageCache', lambda: None)
@ -1679,7 +1688,8 @@ class PXEInterfacesTestCase(db_base.DbTestCase):
        mock_ensure_tree.assert_called_with(expected_path)
        mock_fetch_image.assert_called_once_with(self.context, mock.ANY,
                                                 list(fake_pxe_info.values()),
                                                 True,
                                                 image_auth_data=None)
    @mock.patch.object(os, 'chmod', autospec=True)
    @mock.patch.object(pxe_utils, 'TFTPImageCache', lambda: None)
@ -1719,7 +1729,8 @@ class PXEInterfacesTestCase(db_base.DbTestCase):
        mock_ensure_tree.assert_called_with(expected_path)
        mock_fetch_image.assert_called_once_with(self.context, mock.ANY,
                                                 list(expected.values()),
                                                 True,
                                                 image_auth_data=None)
    @mock.patch.object(pxe.PXEBoot, '__init__', lambda self: None)

View File

@ -3184,7 +3184,8 @@ class ServiceUtilsTestCase(db_base.DbTestCase):
            'service_disable_ramdisk': False,
            'skip_current_service_step': False,
            'steps_validated': 'meow'
            'agent_secret_token',
            'image_source': 'image_ref'}
        self.node.save()
        not_in_list = ['agent_cached_service_steps',
                       'servicing_reboot',
@ -3192,7 +3193,8 @@ class ServiceUtilsTestCase(db_base.DbTestCase):
                       'service_disable_ramdisk',
                       'skip_current_service_step',
                       'steps_validated',
                       'agent_secret_token'
                       'image_source']
        with task_manager.acquire(self.context, self.node.id,
                                  shared=True) as task:
            conductor_utils.wipe_service_internal_info(task)

View File

@ -720,6 +720,21 @@ class TestAgentDeploy(CommonTestsMixin, db_base.DbTestCase):
            pxe_boot_validate_mock.assert_called_once_with(
                task.driver.boot, task)
@mock.patch.object(pxe.PXEBoot, 'validate', autospec=True)
def test_validate_oci_no_checksum(
self, pxe_boot_validate_mock):
i_info = self.node.instance_info
i_info['image_source'] = 'oci://image-ref'
del i_info['image_checksum']
self.node.instance_info = i_info
self.node.save()
with task_manager.acquire(
self.context, self.node.uuid, shared=False) as task:
self.driver.validate(task)
pxe_boot_validate_mock.assert_called_once_with(
task.driver.boot, task)
    @mock.patch.object(agent, 'validate_http_provisioning_configuration',
                       autospec=True)
    @mock.patch.object(pxe.PXEBoot, 'validate', autospec=True)

View File

@ -604,14 +604,35 @@ class OtherFunctionTestCase(db_base.DbTestCase):
            spec_set=['fetch_image', 'master_dir'], master_dir='master_dir')
        utils.fetch_images(None, mock_cache, [('uuid', 'path')])
        mock_clean_up_caches.assert_called_once_with(None, 'master_dir',
                                                     [('uuid', 'path')],
                                                     None)
        mock_cache.fetch_image.assert_called_once_with(
            'uuid', 'path',
            ctx=None,
            force_raw=True,
            expected_format=None,
            expected_checksum=None,
            expected_checksum_algo=None,
            image_auth_data=None)
@mock.patch.object(image_cache, 'clean_up_caches', autospec=True)
def test_fetch_images_with_auth(self, mock_clean_up_caches):
mock_cache = mock.MagicMock(
spec_set=['fetch_image', 'master_dir'], master_dir='master_dir')
utils.fetch_images(None, mock_cache, [('uuid', 'path')],
image_auth_data='meow')
mock_clean_up_caches.assert_called_once_with(None, 'master_dir',
[('uuid', 'path')],
'meow')
mock_cache.fetch_image.assert_called_once_with(
'uuid', 'path',
ctx=None,
force_raw=True,
expected_format=None,
expected_checksum=None,
expected_checksum_algo=None,
image_auth_data='meow')
    @mock.patch.object(image_cache, 'clean_up_caches', autospec=True)
    def test_fetch_images_checksum(self, mock_clean_up_caches):
@ -624,14 +645,16 @@ class OtherFunctionTestCase(db_base.DbTestCase):
                           expected_checksum='f00',
                           expected_checksum_algo='sha256')
        mock_clean_up_caches.assert_called_once_with(None, 'master_dir',
                                                     [('uuid', 'path')],
                                                     None)
        mock_cache.fetch_image.assert_called_once_with(
            'uuid', 'path',
            ctx=None,
            force_raw=True,
            expected_format='qcow2',
            expected_checksum='f00',
            expected_checksum_algo='sha256',
            image_auth_data=None)
    @mock.patch.object(image_cache, 'clean_up_caches', autospec=True)
    def test_fetch_images_fail(self, mock_clean_up_caches):
@ -649,7 +672,8 @@ class OtherFunctionTestCase(db_base.DbTestCase):
                          mock_cache,
                          [('uuid', 'path')])
        mock_clean_up_caches.assert_called_once_with(None, 'master_dir',
                                                     [('uuid', 'path')],
                                                     None)
    @mock.patch('ironic.common.keystone.get_auth', autospec=True)
    @mock.patch.object(utils, '_get_ironic_session', autospec=True)
@ -2162,6 +2186,160 @@ class TestBuildInstanceInfoForDeploy(db_base.DbTestCase):
        mock_cache_image.assert_called_once_with(
            mock.ANY, mock.ANY, force_raw=False, expected_format='qcow2')
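    # For OCI image sources, build_instance_info_for_deploy resolves the
    # reference to a specific manifest digest and direct blob URL before
    # handing it to the agent, replacing any previously stored image_url.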
@mock.patch.object(utils, 'cache_instance_image', autospec=True)
@mock.patch.object(image_service.OciImageService,
'set_image_auth',
autospec=True)
@mock.patch.object(image_service.OciImageService,
'identify_specific_image',
autospec=True)
@mock.patch.object(image_service.OciImageService, 'validate_href',
autospec=True)
def test_build_instance_info_for_deploy_oci_url_remote_download(
self, validate_href_mock, identify_image_mock,
set_image_auth_mock, mock_cache_image):
cfg.CONF.set_override('image_download_source', 'http', group='agent')
specific_url = 'https://host/user/container/blobs/sha256/f00'
specific_source = 'oci://host/user/container@sha256:f00'
identify_image_mock.return_value = {
'image_url': specific_url,
'oci_image_manifest_url': specific_source
}
i_info = self.node.instance_info
driver_internal_info = self.node.driver_internal_info
i_info['image_source'] = 'oci://host/user/container'
i_info['image_pull_secret'] = 'meow'
i_info['image_url'] = 'prior_failed_url'
driver_internal_info['is_whole_disk_image'] = True
self.node.instance_info = i_info
self.node.driver_internal_info = driver_internal_info
self.node.save()
mock_cache_image.return_value = ('fake', '/tmp/foo', 'qcow2')
with task_manager.acquire(
self.context, self.node.uuid, shared=False) as task:
info = utils.build_instance_info_for_deploy(task)
self.assertIn('oci_image_manifest_url', info)
self.assertEqual(specific_url,
info['image_url'])
validate_href_mock.assert_called_once_with(
mock.ANY, specific_source, False)
mock_cache_image.assert_not_called()
identify_image_mock.assert_called_with(
mock.ANY, 'oci://host/user/container', 'http',
'x86_64')
self.assertEqual(specific_source,
task.node.driver_internal_info['image_source'])
set_image_auth_mock.assert_called_with(
mock.ANY,
specific_source,
{'username': '', 'password': 'meow'})
@mock.patch.object(utils, 'cache_instance_image', autospec=True)
@mock.patch.object(image_service.OciImageService,
'set_image_auth',
autospec=True)
@mock.patch.object(image_service.OciImageService,
'identify_specific_image',
autospec=True)
@mock.patch.object(image_service.OciImageService, 'validate_href',
autospec=True)
def test_build_instance_info_for_deploy_oci_url_remote_download_rebuild(
self, validate_href_mock, identify_image_mock,
set_image_auth_mock, mock_cache_image):
# There is some special case handling in the method for rebuilds or bad
# image_disk_info, the intent of this test is to just make sure it is
# addressed.
cfg.CONF.set_override('image_download_source', 'http', group='agent')
specific_url = 'https://host/user/container/blobs/sha256/f00'
specific_source = 'oci://host/user/container@sha256:f00'
identify_image_mock.return_value = {
'image_url': specific_url,
'oci_image_manifest_url': specific_source,
'image_disk_format': 'unknown'
}
i_info = self.node.instance_info
driver_internal_info = self.node.driver_internal_info
i_info['image_source'] = 'oci://host/user/container'
i_info['image_pull_secret'] = 'meow'
i_info['image_url'] = 'prior_failed_url'
i_info['image_disk_format'] = 'raw'
driver_internal_info['is_whole_disk_image'] = True
driver_internal_info['image_source'] = 'foo'
self.node.instance_info = i_info
self.node.driver_internal_info = driver_internal_info
self.node.save()
mock_cache_image.return_value = ('fake', '/tmp/foo', 'qcow2')
with task_manager.acquire(
self.context, self.node.uuid, shared=False) as task:
info = utils.build_instance_info_for_deploy(task)
self.assertIn('oci_image_manifest_url', info)
self.assertEqual(specific_url,
info['image_url'])
validate_href_mock.assert_called_once_with(
mock.ANY, specific_source, False)
mock_cache_image.assert_not_called()
identify_image_mock.assert_called_with(
mock.ANY, 'oci://host/user/container', 'http',
'x86_64')
self.assertEqual(specific_source,
task.node.driver_internal_info['image_source'])
set_image_auth_mock.assert_called_with(
mock.ANY,
specific_source,
{'username': '', 'password': 'meow'})
self.assertNotIn('image_disk_format', task.node.instance_info)
@mock.patch.object(utils, '_cache_and_convert_image', autospec=True)
@mock.patch.object(image_service.OciImageService,
'identify_specific_image',
autospec=True)
@mock.patch.object(image_service.OciImageService, 'validate_href',
autospec=True)
def test_build_instance_info_for_deploy_oci_url_local_download(
self, validate_href_mock, identify_image_mock,
mock_cache_image):
cfg.CONF.set_override('image_download_source', 'local', group='agent')
specific_url = 'https://host/user/container/blobs/sha256/f00'
specific_source = 'oci://host/user/container@sha256:f00'
identify_image_mock.return_value = {
'image_url': specific_url,
'oci_image_manifest_url': specific_source,
'image_checksum': 'a' * 64,
'image_disk_format': 'raw'
}
i_info = self.node.instance_info
driver_internal_info = self.node.driver_internal_info
props = self.node.properties
i_info['image_source'] = 'oci://host/user/container'
i_info['image_url'] = 'prior_failed_url'
driver_internal_info['is_whole_disk_image'] = True
props['cpu_arch'] = 'aarch64'
self.node.instance_info = i_info
self.node.driver_internal_info = driver_internal_info
self.node.properties = props
self.node.save()
mock_cache_image.return_value = ('fake', '/tmp/foo', 'qcow2')
with task_manager.acquire(
self.context, self.node.uuid, shared=False) as task:
info = utils.build_instance_info_for_deploy(task)
self.assertIn('oci_image_manifest_url', info)
self.assertEqual(specific_url,
info['image_url'])
validate_href_mock.assert_not_called()
mock_cache_image.assert_called_once_with(
mock.ANY, task.node.instance_info)
identify_image_mock.assert_called_once_with(
mock.ANY, 'oci://host/user/container', 'local',
'aarch64')
self.assertEqual('oci://host/user/container',
task.node.instance_info.get('image_source'))
self.assertEqual(
specific_source,
task.node.driver_internal_info.get('image_source'))
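The three tests above pin down the contract this change relies on: a user-supplied oci:// reference from instance_info is resolved to a digest-pinned manifest URL recorded in driver_internal_info, and an image_pull_secret is handed to the OCI image service as a password with an empty username. A minimal standalone sketch of that mapping follows; the build_oci_auth helper is invented here for illustration only and is not part of Ironic's API.

# Illustrative sketch only; grounded in the assertions in the tests above.
def build_oci_auth(pull_secret):
    # Mirrors the auth mapping the tests expect to reach
    # OciImageService.set_image_auth.
    return {'username': '', 'password': pull_secret}

# User-supplied reference, as set in instance_info['image_source'].
user_ref = 'oci://host/user/container'
# Digest-pinned form recorded in driver_internal_info['image_source'].
resolved_ref = 'oci://host/user/container@sha256:f00'

assert build_oci_auth('meow') == {'username': '', 'password': 'meow'}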
@mock.patch.object(utils, 'cache_instance_image', autospec=True)
@mock.patch.object(image_service.HttpImageService, 'validate_href',
autospec=True)
@@ -67,7 +67,8 @@ class TestImageCacheFetch(BaseTest):
self.assertFalse(mock_download.called)
mock_fetch.assert_called_once_with(
None, self.uuid, self.dest_path, True,
None, None, None, disable_validation=False,
image_auth_data=None)
self.assertFalse(mock_clean_up.called)
mock_image_service.assert_not_called()
@@ -85,7 +86,8 @@ class TestImageCacheFetch(BaseTest):
self.assertFalse(mock_download.called)
mock_fetch.assert_called_once_with(
None, self.uuid, self.dest_path, True,
None, None, None, disable_validation=False,
image_auth_data=None)
self.assertFalse(mock_clean_up.called)
mock_image_service.assert_not_called()
@@ -161,7 +163,8 @@ class TestImageCacheFetch(BaseTest):
self.cache, self.uuid, self.master_path, self.dest_path,
mock_image_service.return_value.show.return_value,
ctx=None, force_raw=True, expected_format=None,
expected_checksum=None, expected_checksum_algo=None,
image_auth_data=None)
mock_clean_up.assert_called_once_with(self.cache)
mock_image_service.assert_called_once_with(self.uuid, context=None)
mock_image_service.return_value.show.assert_called_once_with(self.uuid)
@@ -184,7 +187,8 @@ class TestImageCacheFetch(BaseTest):
self.cache, self.uuid, self.master_path, self.dest_path,
mock_image_service.return_value.show.return_value,
ctx=None, force_raw=True, expected_format=None,
expected_checksum=None, expected_checksum_algo=None,
image_auth_data=None)
mock_clean_up.assert_called_once_with(self.cache)
def test_fetch_image_not_uuid(self, mock_download, mock_clean_up,
@@ -198,7 +202,8 @@ class TestImageCacheFetch(BaseTest):
self.cache, href, master_path, self.dest_path,
mock_image_service.return_value.show.return_value,
ctx=None, force_raw=True, expected_format=None,
expected_checksum=None, expected_checksum_algo=None,
image_auth_data=None)
self.assertTrue(mock_clean_up.called)
def test_fetch_image_not_uuid_no_force_raw(self, mock_download,
@@ -214,7 +219,8 @@ class TestImageCacheFetch(BaseTest):
self.cache, href, master_path, self.dest_path,
mock_image_service.return_value.show.return_value,
ctx=None, force_raw=False, expected_format=None,
expected_checksum='f00', expected_checksum_algo='sha256',
image_auth_data=None)
self.assertTrue(mock_clean_up.called)
@mock.patch.object(image_cache, '_fetch', autospec=True)
@@ -227,7 +233,8 @@ class TestImageCacheFetch(BaseTest):
mock_download.assert_not_called()
mock_fetch.assert_called_once_with(
None, self.uuid, self.dest_path, True,
None, None, None, disable_validation=True,
image_auth_data=None)
mock_clean_up.assert_not_called()
mock_image_service.assert_not_called()
@@ -339,6 +346,26 @@ class TestUpdateImages(BaseTest):
mock_path_exists.assert_called_once_with(self.master_path)
self.assertTrue(res)
@mock.patch.object(os.path, 'exists', return_value=False, autospec=True)
def test__delete_master_path_if_stale_oci_img_not_cached(
self, mock_path_exists, mock_unlink):
res = image_cache._delete_master_path_if_stale(self.master_path,
'oci://foo',
self.img_info)
self.assertFalse(mock_unlink.called)
mock_path_exists.assert_called_once_with(self.master_path)
self.assertFalse(res)
@mock.patch.object(os.path, 'exists', return_value=True, autospec=True)
def test__delete_master_path_if_stale_oci_img(
self, mock_path_exists, mock_unlink):
res = image_cache._delete_master_path_if_stale(self.master_path,
'oci://foo',
self.img_info)
self.assertFalse(mock_unlink.called)
mock_path_exists.assert_called_once_with(self.master_path)
self.assertTrue(res)
def test__delete_master_path_if_stale_no_master(self, mock_unlink):
res = image_cache._delete_master_path_if_stale(self.master_path,
'http://11',
@@ -819,16 +846,55 @@ class TestFetchCleanup(base.TestCase):
mock_size.return_value = 100
image_cache._fetch('fake', 'fake-uuid', '/foo/bar', force_raw=True,
expected_checksum='1234',
expected_checksum_algo='md5',
image_auth_data=None)
mock_fetch.assert_called_once_with('fake', 'fake-uuid',
'/foo/bar.part', force_raw=False,
checksum='1234',
checksum_algo='md5',
image_auth_data=None)
mock_clean.assert_called_once_with('/foo', 100)
mock_raw.assert_called_once_with('fake-uuid', '/foo/bar',
'/foo/bar.part')
mock_remove.assert_not_called()
mock_show.assert_called_once_with('fake', 'fake-uuid',
image_auth_data=None)
mock_format_inspector.assert_called_once_with('/foo/bar.part')
image_check.safety_check.assert_called_once()
self.assertEqual(1, image_check.__str__.call_count)
@mock.patch.object(image_format_inspector, 'detect_file_format',
autospec=True)
@mock.patch.object(images, 'image_show', autospec=True)
@mock.patch.object(os, 'remove', autospec=True)
@mock.patch.object(images, 'converted_size', autospec=True)
@mock.patch.object(images, 'fetch', autospec=True)
@mock.patch.object(images, 'image_to_raw', autospec=True)
@mock.patch.object(image_cache, '_clean_up_caches', autospec=True)
def test__fetch_with_image_auth(
self, mock_clean, mock_raw, mock_fetch,
mock_size, mock_remove, mock_show, mock_format_inspector):
image_check = mock.MagicMock()
image_check.__str__.side_effect = iter(['qcow2', 'raw'])
image_check.safety_check.return_value = True
mock_format_inspector.return_value = image_check
mock_show.return_value = {}
mock_size.return_value = 100
image_cache._fetch('fake', 'fake-uuid', '/foo/bar', force_raw=True,
expected_checksum='1234',
expected_checksum_algo='md5',
image_auth_data='foo')
mock_fetch.assert_called_once_with('fake', 'fake-uuid',
'/foo/bar.part', force_raw=False,
checksum='1234',
checksum_algo='md5',
image_auth_data='foo')
mock_clean.assert_called_once_with('/foo', 100)
mock_raw.assert_called_once_with('fake-uuid', '/foo/bar',
'/foo/bar.part')
mock_remove.assert_not_called()
mock_show.assert_called_once_with('fake', 'fake-uuid',
image_auth_data='foo')
mock_format_inspector.assert_called_once_with('/foo/bar.part')
image_check.safety_check.assert_called_once()
self.assertEqual(1, image_check.__str__.call_count)
@@ -856,12 +922,14 @@ class TestFetchCleanup(base.TestCase):
mock_fetch.assert_called_once_with('fake', 'fake-uuid',
'/foo/bar.part', force_raw=False,
checksum='1234',
checksum_algo='md5',
image_auth_data=None)
mock_clean.assert_called_once_with('/foo', 100)
mock_raw.assert_called_once_with('fake-uuid', '/foo/bar',
'/foo/bar.part')
mock_remove.assert_not_called()
mock_show.assert_called_once_with('fake', 'fake-uuid',
image_auth_data=None)
mock_format_inspector.assert_called_once_with('/foo/bar.part')
image_check.safety_check.assert_called_once()
self.assertEqual(1, image_check.__str__.call_count)
@@ -889,7 +957,8 @@ class TestFetchCleanup(base.TestCase):
image_cache._fetch('fake', 'fake-uuid', '/foo/bar', force_raw=True)
mock_fetch.assert_called_once_with('fake', 'fake-uuid',
'/foo/bar.part', force_raw=False,
checksum=None, checksum_algo=None,
image_auth_data=None)
mock_clean.assert_called_once_with('/foo', 100)
mock_raw.assert_called_once_with('fake-uuid', '/foo/bar',
'/foo/bar.part')
@@ -923,7 +992,8 @@ class TestFetchCleanup(base.TestCase):
mock_fetch.assert_called_once_with('fake', 'fake-uuid',
'/foo/bar.part', force_raw=False,
checksum='1234',
checksum_algo='md5',
image_auth_data=None)
mock_clean.assert_called_once_with('/foo', 100)
mock_raw.assert_called_once_with('fake-uuid', '/foo/bar',
'/foo/bar.part')
@@ -958,13 +1028,15 @@ class TestFetchCleanup(base.TestCase):
mock_fetch.assert_called_once_with('fake', 'fake-uuid',
'/foo/bar.part', force_raw=False,
checksum='f00',
checksum_algo='sha256',
image_auth_data=None)
mock_clean.assert_called_once_with('/foo', 100)
mock_raw.assert_called_once_with('fake-uuid', '/foo/bar',
'/foo/bar.part')
self.assertEqual(1, mock_exists.call_count)
self.assertEqual(1, mock_remove.call_count)
mock_image_show.assert_called_once_with('fake', 'fake-uuid',
image_auth_data=None)
mock_format_inspector.assert_called_once_with('/foo/bar.part')
image_check.safety_check.assert_called_once()
self.assertEqual(1, image_check.__str__.call_count)
@@ -992,11 +1064,13 @@ class TestFetchCleanup(base.TestCase):
mock_fetch.assert_called_once_with('fake', 'fake-uuid',
'/foo/bar.part', force_raw=False,
checksum='e00',
checksum_algo='sha256',
image_auth_data=None)
mock_clean.assert_not_called()
mock_size.assert_not_called()
mock_raw.assert_not_called()
mock_show.assert_called_once_with('fake', 'fake-uuid',
image_auth_data=None)
mock_format_inspector.assert_called_once_with('/foo/bar.part')
image_check.safety_check.assert_called_once()
self.assertEqual(1, image_check.__str__.call_count)
@@ -1025,11 +1099,13 @@ class TestFetchCleanup(base.TestCase):
mock_fetch.assert_called_once_with('fake', 'fake-uuid',
'/foo/bar.part', force_raw=False,
checksum='e00',
checksum_algo='sha256',
image_auth_data=None)
mock_clean.assert_not_called()
mock_size.assert_not_called()
mock_raw.assert_not_called()
mock_show.assert_called_once_with('fake', 'fake-uuid',
image_auth_data=None)
mock_format_inspector.assert_called_once_with('/foo/bar.part')
image_check.safety_check.assert_called_once()
self.assertEqual(1, image_check.__str__.call_count)
@@ -1060,11 +1136,13 @@ class TestFetchCleanup(base.TestCase):
mock_fetch.assert_called_once_with('fake', 'fake-uuid',
'/foo/bar.part', force_raw=False,
checksum='a00',
checksum_algo='sha512',
image_auth_data=None)
mock_clean.assert_not_called()
mock_size.assert_not_called()
mock_raw.assert_not_called()
mock_show.assert_called_once_with('fake', 'fake-uuid',
image_auth_data=None)
mock_format_inspector.assert_called_once_with('/foo/bar.part')
image_check.safety_check.assert_called_once()
self.assertEqual(1, image_check.__str__.call_count)
@@ -1091,11 +1169,13 @@ class TestFetchCleanup(base.TestCase):
'/foo/bar', force_raw=True)
mock_fetch.assert_called_once_with('fake', 'fake-uuid',
'/foo/bar.part', force_raw=False,
checksum=None, checksum_algo=None,
image_auth_data=None)
mock_clean.assert_not_called()
mock_size.assert_not_called()
mock_raw.assert_not_called()
mock_show.assert_called_once_with('fake', 'fake-uuid',
image_auth_data=None)
mock_format_inspector.assert_called_once_with('/foo/bar.part')
image_check.safety_check.assert_called_once()
self.assertEqual(0, image_check.__str__.call_count)
@@ -1121,7 +1201,8 @@ class TestFetchCleanup(base.TestCase):
image_cache._fetch('fake', 'fake-uuid', '/foo/bar', force_raw=True)
mock_fetch.assert_called_once_with('fake', 'fake-uuid',
'/foo/bar.part', force_raw=False,
checksum=None, checksum_algo=None,
image_auth_data=None)
mock_size.assert_has_calls([
mock.call('/foo/bar.part', estimate=False),
mock.call('/foo/bar.part', estimate=True),
@@ -1132,7 +1213,8 @@ class TestFetchCleanup(base.TestCase):
])
mock_raw.assert_called_once_with('fake-uuid', '/foo/bar',
'/foo/bar.part')
mock_show.assert_called_once_with('fake', 'fake-uuid',
image_auth_data=None)
mock_format_inspector.assert_called_once_with('/foo/bar.part')
image_check.safety_check.assert_called_once()
self.assertEqual(1, image_check.__str__.call_count)
@@ -1159,11 +1241,13 @@ class TestFetchCleanup(base.TestCase):
image_cache._fetch('fake', 'fake-uuid', '/foo/bar', force_raw=True)
mock_fetch.assert_called_once_with('fake', 'fake-uuid',
'/foo/bar.part', force_raw=False,
checksum=None, checksum_algo=None,
image_auth_data=None)
mock_clean.assert_not_called()
mock_raw.assert_not_called()
mock_remove.assert_not_called()
mock_show.assert_called_once_with('fake', 'fake-uuid',
image_auth_data=None)
mock_format_inspector.assert_called_once_with('/foo/bar.part')
image_check.safety_check.assert_called_once()
self.assertEqual(1, image_check.__str__.call_count)
@@ -1191,11 +1275,13 @@ class TestFetchCleanup(base.TestCase):
image_cache._fetch('fake', 'fake-uuid', '/foo/bar', force_raw=True)
mock_fetch.assert_called_once_with('fake', 'fake-uuid',
'/foo/bar.part', force_raw=False,
checksum=None, checksum_algo=None,
image_auth_data=None)
mock_clean.assert_not_called()
mock_raw.assert_not_called()
mock_remove.assert_not_called()
mock_show.assert_called_once_with('fake', 'fake-uuid',
image_auth_data=None)
mock_format_inspector.assert_called_once_with('/foo/bar.part')
image_check.safety_check.assert_called_once()
self.assertEqual(1, image_check.__str__.call_count)
@@ -65,7 +65,8 @@ class ISOCacheTestCase(base.TestCase):
self.cache.fetch_image(self.uuid, self.dest_path)
mock_fetch.assert_called_once_with(mock.ANY, self.uuid, self.dest_path,
False, mock.ANY, mock.ANY, mock.ANY,
disable_validation=True,
image_auth_data=None)
@mock.patch.object(os, 'link', autospec=True)
@mock.patch.object(image_cache, '_fetch', autospec=True)
@@ -75,7 +76,8 @@ class ISOCacheTestCase(base.TestCase):
self.img_info)
mock_fetch.assert_called_once_with(mock.ANY, self.uuid, mock.ANY,
False, mock.ANY, mock.ANY, mock.ANY,
disable_validation=True,
image_auth_data=None)
class RedfishImageHandlerTestCase(db_base.DbTestCase):
@@ -0,0 +1,5 @@
---
features:
- |
Adds support for OCI Container Registries for the retrieval of deployment
artifacts and whole-disk images to be written to a remote host.
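As a usage illustration only (not part of this change): with openstacksdk, an operator could point a node at an OCI-hosted disk image roughly as sketched below. The cloud name, node name, registry URL, and secret value are placeholders, and the exact client call may vary by SDK version.

# Hypothetical sketch; all values below are placeholders.
import openstack

conn = openstack.connect(cloud='mycloud')

# Reference the disk image by its OCI URL; supply image_pull_secret only
# when the registry requires authentication.
conn.baremetal.update_node(
    'node-0',
    instance_info={
        'image_source': 'oci://registry.example.com/project/disk-image:latest',
        'image_pull_secret': 'REPLACE_ME',
    },
)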