python-builder: install sibling packages

In the dependent change, the docker roles will add sibling packages to
the .zuul-siblings directory of the checked-out source.

Refactor the "assemble" script to handle this.  Essentially we build
the wheel for "." and then iterate over ZUUL_SIBLINGS subdirectories
(set in a --build-arg by the role in dependent change) to also build
the sibling packages.  Note we concatenate the bindep.txt files, so
that we end up with the complete package list required by the main
code and its dependencies.
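
For illustration only, the build invocation wired up by that role ends
up looking something like the following (the sibling name and image
tag are placeholders; the exact value format comes from the dependent
change):

  # hypothetical invocation; assumes the Dockerfile declares ARG ZUUL_SIBLINGS
  # so the value is visible to the RUN step that calls "assemble"
  docker build \
      --build-arg "ZUUL_SIBLINGS=opendev.org/example/sibling" \
      -t example-image .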

"install-from-bindep" now installs all the wheels, using --force to
make sure we re-install the speculatively built packages.

This means that a single Dockerfile works under Zuul when
ZUUL_SIBLINGS is set, pointing to Zuul's checkouts; but it also works
stand-alone -- in this case ZUUL_SIBLINGS is empty and we just install
from upstream as usual.
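
For illustration, the expansion behaviour the stand-alone path relies
on can be checked directly ("a b" stands in for sibling names):

  # unset/empty: ${ZUUL_SIBLINGS:-} expands to no words, so zero iterations
  $ bash -c 'for s in ${ZUUL_SIBLINGS:-}; do echo "sibling: $s"; done'
  # set: the unquoted expansion is word-split, one iteration per name
  $ ZUUL_SIBLINGS="a b" bash -c 'for s in ${ZUUL_SIBLINGS:-}; do echo "sibling: $s"; done'
  sibling: a
  sibling: b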

Depends-On: https://review.opendev.org/696987
Change-Id: I4943ae723b06b0ad808e7c7f20788109e21aa8bf
Ian Wienand 2019-12-03 16:34:16 +11:00
parent 5cabb8ca07
commit bd66a7cb1b
2 changed files with 59 additions and 41 deletions

assemble

@@ -27,53 +27,66 @@ cd /tmp/src
apt-get update
# Protect from the bindep builder image use of the assemble script
# to produce a wheel
if [ -f bindep.txt -o -f other-requirements.txt ] ; then
    bindep -l newline > /output/bindep/run.txt || true
    compile_packages=$(bindep -b compile || true)
    if [ ! -z "$compile_packages" ] ; then
        apt-get install -y ${compile_packages}
    fi
fi
# pbr needs git installed, else nothing will work
apt-get install -y git
# Build a wheel so that we have an install target.
# pip install . in the container context with the mounted
# source dir gets ... exciting.
# We run sdist first to trigger code generation steps such
# as are found in zuul, since the sequencing otherwise
# happens in a way that makes wheel content copying unhappy.
# pip wheel isn't used here because it puts all of the output
# in the output dir and not the wheel cache, so it's not
# possible to tell what is the wheel for the project and
# what is the wheel cache.
python setup.py sdist bdist_wheel -d /output/wheels
# Use a virtualenv for the next install steps in case to prevent
# things from the current environment from making us not build a
# wheel.
# Use a clean virtualenv for install steps to prevent things from the
# current environment making us not build a wheel.
python -m venv /tmp/venv
/tmp/venv/bin/pip install -U pip wheel
# Install everything so that the wheel cache is populated
# with transitive depends. If a requirements.txt file exists,
# install it directly so that people can use git url syntax
# to do things like pick up patched but unreleased versions
# of dependencies.
if [ -f /tmp/src/requirements.txt ] ; then
    /tmp/venv/bin/pip install --cache-dir=/output/wheels -r /tmp/src/requirements.txt
    cp /tmp/src/requirements.txt /output/requirements.txt
fi
/tmp/venv/bin/pip install --cache-dir=/output/wheels /output/wheels/*whl
function install_pwd {
# Protect from the bindep builder image use of the assemble script
# to produce a wheel. Note we append because we want all
# sibling packages in here too
if [ -f bindep.txt -o -f other-requirements.txt ] ; then
    bindep -l newline >> /output/bindep/run.txt || true
    compile_packages=$(bindep -b compile || true)
    if [ ! -z "$compile_packages" ] ; then
        apt-get install -y ${compile_packages}
    fi
fi
# Install each of the extras so that we collect all possibly
# needed wheels in the wheel cache. get-extras-packages also
# writes out the req files into /output/$extra/requirements.txt.
for req in $(get-extras-packages) ; do
    /tmp/venv/bin/pip install --cache-dir=/output/wheels "$req"
# Build a wheel so that we have an install target.
# pip install . in the container context with the mounted
# source dir gets ... exciting.
# We run sdist first to trigger code generation steps such
# as are found in zuul, since the sequencing otherwise
# happens in a way that makes wheel content copying unhappy.
# pip wheel isn't used here because it puts all of the output
# in the output dir and not the wheel cache, so it's not
# possible to tell what is the wheel for the project and
# what is the wheel cache.
python setup.py sdist bdist_wheel -d /output/wheels
# Install everything so that the wheel cache is populated with
# transitive depends. If a requirements.txt file exists, install
# it directly so that people can use git url syntax to do things
# like pick up patched but unreleased versions of dependencies.
# Only do this for the main package (i.e. only write requirements
# once).
if [ -f /tmp/src/requirements.txt ] && [ ! -f /output/requirements.txt ] ; then
    /tmp/venv/bin/pip install --cache-dir=/output/wheels -r /tmp/src/requirements.txt
    cp /tmp/src/requirements.txt /output/requirements.txt
fi
/tmp/venv/bin/pip install --cache-dir=/output/wheels /output/wheels/*whl
# Install each of the extras so that we collect all possibly
# needed wheels in the wheel cache. get-extras-packages also
# writes out the req files into /output/$extra/requirements.txt.
for req in $(get-extras-packages) ; do
    /tmp/venv/bin/pip install --cache-dir=/output/wheels "$req"
done
}
# Install the main package
install_pwd
# go through ZUUL_SIBLINGS, if any, and build those wheels too
for sibling in ${ZUUL_SIBLINGS:-}; do
    pushd .zuul-siblings/${sibling}
    install_pwd
    popd
done
rm -rf /tmp/venv
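
For reference, the sibling loop assumes the role in the dependent
change has laid out the checkouts roughly like this under the mounted
source (the sibling name is a placeholder):

  /tmp/src/                            # the main package ("." above)
      setup.py
      bindep.txt                       # seeds /output/bindep/run.txt
      .zuul-siblings/
          opendev.org/example/sibling/ # one entry per name in ZUUL_SIBLINGS
              setup.py                 # its wheel also lands in /output/wheels
              bindep.txt               # appended onto /output/bindep/run.txt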

install-from-bindep

@@ -25,7 +25,12 @@ apt-get -y install $(cat /output/bindep/run.txt)
if [ -f /output/requirements.txt ] ; then
    pip install --cache-dir=/output/wheels -r /output/requirements.txt
fi
pip install --cache-dir=/output/wheels /output/wheels/*.whl
# Install the wheels. Use --force here because sibling wheels might
# be built with the same version number as the latest release, but we
# really want the speculatively built wheels installed over any
# automatic dependencies.
pip install --force --cache-dir=/output/wheels /output/wheels/*.whl
# clean up after ourselves
apt-get clean
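
To spell out the failure mode the comment above guards against
(hypothetical package name and version): if the release of a sibling
is already installed at the same version as the speculatively built
wheel, a plain install is skipped as already satisfied, so --force
(pip's --force-reinstall) is what makes the local wheel win:

  # hypothetical wheel; same name/version as the already-installed release
  $ pip install /output/wheels/example-1.2.3-py3-none-any.whl
  # -> treated as already satisfied, the release copy stays in place
  $ pip install --force /output/wheels/example-1.2.3-py3-none-any.whl
  # -> reinstalled from the local wheel, replacing the release copy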