Image Pipeline

How the container image pipeline works from source to release

Explanation of the complete container image pipeline, from source templates through generation, build, testing, and release.

Overview

The container image pipeline consists of six main stages:

  1. Source Templates - Jinja2 templates in images/*/ define container images
  2. Generation - Templates are rendered into Containerfiles and Konflux resources
  3. Build - Konflux builds multi-architecture container images
  4. Testing - Testing Farm runs integration tests via Konflux
  5. Enterprise Contract Validation - Conforma validates policy compliance before release
  6. Release - Images are published to Quay.io

Stage 1: Source Templates

Each container image is defined by templates in images/<image-name>/:

  • properties.yml - Image configuration (packages, variants, tags, etc.)
  • Containerfile.j2 - Jinja2 template for the container build
  • README.md.j2 - Documentation template
  • tests-container.yml - Integration test definitions

Templates use reusable macros from macros/*.yml.j2:

  • setup_newroot() - Configures DNF and filesystem
  • install_newroot() - Installs packages
  • cleanup_newroot() - Cleans up files
  • final_stage() - Creates scratch-based final image
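A Containerfile.j2 template might combine these macros roughly as follows. This is a hypothetical sketch only; the actual macro arguments and template structure are defined in macros/*.yml.j2:

```jinja
{#- Hypothetical sketch: macro arguments and variable names are assumptions -#}
FROM {{ builder_image }} AS builder
{{ setup_newroot() }}
{{ install_newroot() }}
{{ cleanup_newroot() }}
{{ final_stage() }}
```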

Shared configuration is defined in images/variables.yml. See Global Variables Reference for details.

Stage 2: Generation

Templates are rendered into concrete artifacts that drive the pipeline:

  • Containerfiles - Build instructions for each image variant
  • README documentation - Image documentation for Quay.io
  • Konflux resources - CI/CD pipeline definitions

Containerfile Generation

Containerfiles are generated from templates for each image variant using Make’s incremental build system:

make

This combines:

  • Reusable macros from macros/
  • Service-specific templates from images/*/Containerfile.j2
  • Configuration from properties.yml (see Image Configuration Reference)
  • Variables from images/variables.yml (see Global Variables Reference)
  • RPM versions from rpms.lock.yaml files
  • Git submodule information from .gitmodules

Output: images/<image-name>/<variant>/Containerfile, along with VERSION and TAGS files

The build system uses timestamp-based dependency tracking, so only changed files are regenerated.
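The dependency tracking follows the usual Make pattern: each generated Containerfile depends on its template and the shared inputs, so rerunning make re-renders only stale outputs. The rule below is purely illustrative (target layout and the RENDER command are hypothetical, not the repository's actual Makefile):

```make
# Illustrative only: target pattern and $(RENDER) are hypothetical
images/%/default/Containerfile: images/%/Containerfile.j2 \
        images/%/properties.yml images/variables.yml $(wildcard macros/*.yml.j2)
	$(RENDER) $< > $@   # runs only when a prerequisite is newer than the output
```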

README Generation

After Containerfiles are generated, README documentation is generated from README.md.j2 templates:

  1. Tag values are extracted from the generated Containerfile labels
  2. README is rendered using macros from macros/readme.yml.j2
  3. Generated README includes actual version tags from the Containerfile

Output: images/<image-name>/README.md

Konflux Resource Generation

Konflux CI/CD resources are generated from templates in konflux-templates/:

make

This generates:

  • Components - Define what to build (one per image variant)
  • ImageRepositories - Define where to push images
  • ReleasePlanAdmissions - Define how to release images

Output: konflux-templates/rendered.yml

These resources must be deployed to Konflux before builds can run. See the Konflux Resource Deployment guide for how and when resources are deployed.
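For illustration, a rendered Component might look roughly like this. All field values here are hypothetical; the generated resources in konflux-templates/rendered.yml are authoritative:

```yaml
# Hypothetical sketch of a rendered Konflux Component
apiVersion: appstudio.redhat.com/v1alpha1
kind: Component
metadata:
  name: curl--default--main
spec:
  application: containers-hummingbird
  componentName: curl--default--main
  source:
    git:
      url: <containers-repo-url>   # placeholder, not the real URL
      revision: main
```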

Stage 3: Build

Konflux builds container images automatically when changes are pushed to GitLab.

Build Triggers

  • Merge Requests: Builds all changed images and triggers tests
  • Main Branch: Builds all changed images (tests do not run on main)

Build Process

For each image variant:

  1. Konflux Component watches the GitLab repository
  2. On changes, Konflux triggers a build PipelineRun
  3. The build uses the generated Containerfile from images/<name>/<variant>/Containerfile
  4. Images are built for multiple architectures (x86_64 and aarch64)
  5. Built images are pushed to the development registry

Build Output

Development images are pushed to the Red Hat User Workloads registry:

quay.io/redhat-user-workloads/hummingbird-tenant/<group>--<variant>--main

Merge request builds are tagged with:

quay.io/redhat-user-workloads/hummingbird-tenant/<group>--<variant>--main:on-mr-<MR_ID>-<COMMIT_SHA>

Examples:

  • quay.io/redhat-user-workloads/hummingbird-tenant/curl--default--main
  • quay.io/redhat-user-workloads/hummingbird-tenant/nginx--builder--main:on-mr-123-abc1234
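The naming convention above can be assembled mechanically; this sketch only concatenates the documented pieces (the example values are made up):

```shell
# Build the development registry reference for a merge-request build
group="nginx"; variant="builder"; mr_id="123"; sha="abc1234"
repo="quay.io/redhat-user-workloads/hummingbird-tenant/${group}--${variant}--main"
mr_ref="${repo}:on-mr-${mr_id}-${sha}"
echo "$mr_ref"
```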

SBOM Generation

Each per-architecture build also produces an SPDX 2.3 Software Bill of Materials (SBOM), attached as an OCI artifact to the image. The SBOM is assembled from two independent scans, merged into a single document:

flowchart LR
    syft["Syft\n(buildah-remote-oci-ta)"] --> mobster["Mobster\n(buildah-remote-oci-ta)"]
    hermeto["Hermeto\n(prefetch-dependencies-oci-ta)"] --> mobster
    mobster --> sbom["Per-arch SPDX SBOM"]
    sbom --> index["Mobster\n(build-image-index)"]
    index --> oci["Index SBOM\n(.sbom OCI artifact)"]

  • Syft runs as the sbom-syft-generate step inside the buildah-remote-oci-ta Tekton task. It scans the RPM database of the built per-arch image and finds installed binary RPMs plus non-RPM packages (Go modules, pip packages, etc.). One SBOM is produced per architecture.
  • Hermeto runs inside the prefetch-dependencies-oci-ta Tekton task. It records all build-time dependencies from lockfiles. A single Hermeto SBOM covers all architectures and source RPMs.
  • Mobster runs as the prepare-sboms step inside buildah-remote-oci-ta, merging the Syft and Hermeto SBOMs into one SPDX document per architecture. A second Mobster invocation in the build-image-index task generates an index-level SBOM and attaches it as an OCI artifact.

See Security Labels and Metadata for the SBOM entry structure and how to access SBOMs from the registry.
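Because the SBOM is attached as an OCI artifact, tools that follow the cosign attachment convention locate it under a tag derived from the image digest. The tag construction below assumes that convention (sha256-<hex>.sbom); whether Konflux uses exactly this layout is an assumption, not confirmed here:

```shell
# Assumed cosign-style attachment tag: "sha256:<hex>" becomes "sha256-<hex>.sbom"
digest="sha256:0123abcd"   # shortened fake digest for illustration
sbom_tag="$(printf '%s' "$digest" | tr ':' '-').sbom"
echo "$sbom_tag"   # → sha256-0123abcd.sbom
```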

Stage 4: Testing

Images are validated through two types of integration tests:

  • Container tests (tests-container.yml) - Run with Podman and Docker via Testing Farm on RHEL-9-Nightly systems
  • K8s tests (tests-k8s.yml) - Run in Konflux ephemeral Kubernetes namespaces

Test results appear as external jobs in GitLab CI pipelines, providing pass/fail status and links to the Konflux PipelineRun.

Test Triggering

Tests are only triggered on merge requests, not on main branch builds.

For an image group at images/<group-name>/ or ci/images/<group-name>/, tests are triggered for all changes below that directory, excluding documentation-only changes.

Any changes below .tekton/ or ci/ will trigger tests for the caddy image to ensure infrastructure changes work before merging.

Test Execution

For each image group, pipelines are created per variant. If there is more than one variant in an image group, additional group pipelines are created. See Konflux group snapshot documentation for details.

Tests are selected based on the variants field in test files. Additionally, global tests from ci/{variant}-tests/ are included. Group pipelines run tests that specify variants: [group], allowing validation across multiple variants.
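A hypothetical tests-container.yml entry showing how the variants field selects pipelines (the structure and field names here are assumptions, not the repository's actual schema):

```yaml
# Hypothetical test definitions; field names are assumptions
- name: version-smoke
  variants: [default, builder]   # runs in each variant's own pipeline
- name: cross-variant-check
  variants: [group]              # runs only in the group pipeline
```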

Reverse Dependency Testing

When a base image like core-runtime changes, dependent images (rust, xcaddy, etc.) need to be tested to ensure compatibility. Reverse dependency testing is enabled by default. Images can opt out by setting reverse_dependency_tests: false in their properties.yml. Opting out is recommended for images like curl that are widely used but expose only a small API that their own tests already cover well.
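The opt-out is a single key in the image's properties.yml:

```yaml
# images/curl/properties.yml (excerpt)
reverse_dependency_tests: false
```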

For container tests, dependent images are rebuilt locally in the testing environment to prevent version skew and to ensure they match the repository state in the merge request under test.

Integration Test Scenarios

Integration tests are triggered via IntegrationTestScenario resources defined in the infrastructure repository:

Application                Purpose
containers-rawhide         All Rawhide-based images
containers-hummingbird     Red Hat supported Hummingbird images
containers-ci-hummingbird  Community-supported Hummingbird images (supported-by: community)

Container Testing

Container tests run via Testing Farm on RHEL-9-Nightly systems for both x86_64 and aarch64 architectures.

Container Testing Flow

  1. Developer opens MR modifying images/nginx/
  2. GitLab CI pipeline and Konflux pipelines start in parallel
  3. Konflux builds nginx image variants
  4. IntegrationTestScenario triggers Testing Farm job
  5. tmt discovers fmf plan (ci/run_tests_container.fmf)
  6. Testing Farm provisions machines (x86_64 and aarch64 with RHEL-9-Nightly)
  7. tmt sets up the testing environment
  8. tmt runs tests via ci/run_tests_container.sh

Container Test ITS Configuration

The container tests use the upstream Testing Farm pipeline for Konflux CI:

kind: IntegrationTestScenario
spec:
  contexts:
    - {name: pull_request}
  params:
    - {name: COMPOSE, value: RHEL-9-Nightly}
    - {name: ARCH, value: x86_64|aarch64}  # one for each
    - {name: IMAGE_TAG, value: v3.2}
  resolverRef:
    resolver: bundles
    params:
      - {name: bundle, value: quay.io/testing-farm/tmt-via-testing-farm:$(params.IMAGE_TAG)}
      - {name: name, value: tmt-via-testing-farm}
      - {name: kind, value: pipeline}

FMF Test Plan

The root folder of the containers repository is marked with a .fmf directory to enable Testing Farm support. The actual test plan in ci/run_tests_container.fmf runs the following steps:

  1. Install podman for container testing
  2. Install Docker and start the Docker daemon on the host for Docker integration tests
  3. Fix git submodules until TFT-3991 is resolved
  4. For regular pipelines:
    • Verify the image built by Konflux is reproducible using ci/test_rebuild.sh
    • If needed, build reverse dependency images locally via Buildah
    • Run Podman and Docker tests via ci/run_tests_container.sh --component-name
  5. For group pipelines:
    • Run Podman and Docker group tests via ci/run_tests_container.sh --group-component-name

Container Test Environment

Testing Farm provides these environment variables to the test plan:

Variable       Description
IMAGE_NAME     Single component (e.g., curl--default--main)
IMAGE_NAMES    Multiple components for group pipelines
IMAGE_URL      Image URL from Konflux
IMAGE_URL_...  Image URLs from Konflux for group pipelines
SNAPSHOT_b64   Snapshot metadata (base64 encoded)
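SNAPSHOT_b64 must be decoded before use. The round trip below just demonstrates the encoding; the payload is made up for illustration:

```shell
# Decode base64-encoded snapshot metadata (payload here is illustrative)
SNAPSHOT_b64=$(printf '%s' '{"application":"containers-hummingbird"}' | base64 -w0)
snapshot=$(printf '%s' "$SNAPSHOT_b64" | base64 -d)
echo "$snapshot"
```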

K8s Testing

K8s tests run in Konflux ephemeral namespaces provisioned via EaaS (Environment as a Service).

K8s Testing Flow

  1. Developer opens MR modifying ci/images/gitlab-ci/
  2. GitLab CI pipeline and Konflux pipelines start in parallel
  3. Konflux builds gitlab-ci image variants
  4. IntegrationTestScenario triggers K8s test pipeline
  5. Pipeline checks for tests-k8s.yml; skips with SUCCESS if not found
  6. Pipeline provisions ephemeral namespace via Konflux EaaS (tied to PipelineRun lifecycle)
  7. Pipeline fetches source via Trusted Artifacts
  8. Tests run via ci/run_tests_k8s.sh with kubeconfig for ephemeral namespace
  9. Pipeline fails if any test reports non-SUCCESS, ensuring GitLab sees correct status

K8s Test ITS Configuration

The K8s tests use the k8s-test-pipeline:

kind: IntegrationTestScenario
spec:
  contexts:
    - {name: pull_request}
  resolverRef:
    resolver: bundles
    params:
      - {name: bundle, value: quay.io/hummingbird-ci/k8s-test-pipeline:latest}
      - {name: name, value: k8s-test}
      - {name: kind, value: pipeline}

Stage 5: Enterprise Contract Validation

Before images can be released, they must pass Enterprise Contract (also known as Conforma) policy validation. This ensures images meet security, compliance, and build quality standards. These checks can also be run locally against Konflux-built images.

Policy Validation

Enterprise Contract validates that:

  • Images are built using trusted, verified Tekton tasks
  • Builds are hermetic (network-isolated with pre-fetched dependencies)
  • Required security tests have passed
  • Images have proper metadata and labels
  • Build artifacts meet supply chain security requirements

Policy Configuration

Policies are defined as EnterpriseContractPolicy resources in konflux-templates/macros/policy.yml.j2:

  • containers-rawhide / containers-hummingbird - Strict policies for production images
  • containers-ci-hummingbird - Relaxed policy for community-supported images (includes additional exclusions passed via the additional_exclusions parameter in konflux-resources.yml.j2)

These policies use the @redhat rule collection from the ec-release-policy.

Policy Exclusions

The following checks are excluded from the default @redhat policy set. When modifying exclusions, update the policy macro and this documentation.
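In the policy resource, these show up as entries in the exclude list, roughly like this abridged sketch (see konflux-templates/macros/policy.yml.j2 for the authoritative configuration):

```yaml
# Abridged sketch of the policy configuration
kind: EnterpriseContractPolicy
spec:
  configuration:
    include: ["@redhat"]
    exclude:
      - test.required_tests_passed:sast-snyk-check
      - trusted_task.current
      - rpm_repos.ids_known
      - schedule.weekday_restriction
```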

Test Package

The test package verifies that each build was subjected to a set of tests and that those tests all passed.

Snyk SAST checks (test.required_tests_passed:sast-snyk-check, test.no_skipped_tests:sast-snyk-check, test.required_tests_passed:sast-snyk-check-oci-ta, test.no_skipped_tests:sast-snyk-check-oci-ta) are excluded because Hummingbird images are currently not supported by Snyk.

Red Hat certification preflight checks (test.no_failed_tests:ecosystem-cert-preflight-checks, test.no_erred_tests:ecosystem-cert-preflight-checks) are excluded because Hummingbird images are not yet published to the Red Hat certified container registry.

Informative test failures (test.no_failed_informative_tests) are excluded because these produce warnings for advisory purposes only and are explicitly non-blocking.

Deprecated image warnings (test.no_test_warnings:deprecated-image-check) are excluded because final images are built FROM scratch, and the builder image is updated via Renovate like all other images.

Trusted Task Package

The trusted_task package verifies that all Tekton Tasks involved in building the image are trusted by comparing Task references with a pre-defined list of trusted Tasks.

The trusted_task.current check warns when newer versions of tasks are available. This is excluded because we use stable pinned task versions and control upgrade timing via Renovate rather than requiring the latest version at all times.

RPM Repos Package

The rpm_repos package confirms that all RPM packages listed in SBOMs specify a known and permitted repository ID.

The rpm_repos.ids_known check is excluded because images use the internal hummingbird repository and Fedora repositories, which are not in the upstream known_rpm_repositories.yml list (that file only contains Red Hat official repositories).

Labels Package

The labels package checks if the image has the expected labels set, including required and optional labels for Red Hat container certification.

Both labels.required_labels and labels.optional_labels are excluded because images currently only include basic labels (maintainer, license_terms, name, cpe) and version labels, not the full set of Red Hat certification labels (vendor, version, release, summary, description, url, etc.) required for Red Hat Ecosystem Catalog publishing.

Buildah Build Task Package

The buildah_build_task package verifies buildah build task parameters.

The buildah_build_task.privileged_nested_param check verifies that PRIVILEGED_NESTED is not set to true. This is excluded because images use the dnf-installroot helper from the builder image to build all containers, including the builder image itself. This script requires privileged operations (unshare, mount -t tmpfs, mount --bind for /proc and /dev/*) to set up the install root environment (see commit 3668af16).

Schedule Package

The schedule package verifies that releases conform to a given schedule, including weekday restrictions.

The schedule.weekday_restriction check is excluded to allow releases any day including weekends. The @redhat policy restricts weekend releases, but this project needs the ability to ship urgent CVE fixes immediately regardless of the day of week.

CVE Package

The cve package checks for blocking and non-blocking CVEs in container images.

The cve.cve_blockers check is excluded because blocking on known CVEs would prevent releasing images that fix other CVEs. A VEX feed could suppress false positives for RPM-level CVEs, but CVEs can also originate from other artifact types where VEX does not apply (see MR !2227).

Hermetic Task Package (CI-Only)

The hermetic_task package verifies that tasks were invoked with the proper parameters to perform a hermetic (network-isolated) execution.

The hermetic_task.hermetic check is excluded only for the containers-ci-hummingbird policy because the gitlab-ci image sets hermetic: false. This image requires network access during build to install npm packages globally (markdownlint-cli) and download the latest grype/syft releases from the GitHub API. These dependencies are not currently prefetched.

Stage 6: Release

After images pass testing in merge requests and are merged to main, they are released to public registries.

Registry Organization

Images are published to different Quay.io registries based on distro and purpose:

Registry                      Purpose                                 Future
quay.io/hummingbird/          Red Hat supported Hummingbird images    Will move to official Red Hat registry
quay.io/hummingbird-rawhide/  All Rawhide-based images                Will stay public
quay.io/hummingbird-ci/       Community-supported Hummingbird images  Will stay public

Primary Registry (hummingbird): Publishes Red Hat supported Hummingbird images. Will transition to an official Red Hat registry in the future.

CI Registry (hummingbird-ci): Publishes community-supported Hummingbird images (those with supported-by: community in properties.yml). These include internal CI tooling and other images without official Red Hat support.

Release Process

  1. Merge request passes all tests
  2. Merge request is merged to main branch
  3. Konflux builds images from main
  4. ReleasePlanAdmission resources trigger the release pipeline
  5. Images are copied from Konflux registry to Hummingbird registries
  6. Tags are applied based on properties.yml configuration

Release Mechanism

Releases are configured via ReleasePlanAdmission (RPA) resources in the containers repo and ReleasePlan resources in the infrastructure repo:

  • ReleasePlanAdmission - Defines per-registry release configuration (target registry, tags, visibility settings)
  • ReleasePlan - Triggers the release pipeline for a specific application and registry

Each distro/registry combination has its own RPA.
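A sketch of what one such RPA might look like (field values are hypothetical; the generated resources in konflux-templates/rendered.yml are authoritative):

```yaml
# Hypothetical ReleasePlanAdmission for the hummingbird registry
apiVersion: appstudio.redhat.com/v1alpha1
kind: ReleasePlanAdmission
spec:
  applications: [containers-hummingbird]
  data:
    mapping:
      components:
        - name: nodejs--default--main
          repository: quay.io/hummingbird/nodejs
```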

Release Output

Released images are published to registries based on distro and support level:

quay.io/hummingbird/<image-repository>:<image-tag>
quay.io/hummingbird-rawhide/<image-repository>:<image-tag>
quay.io/hummingbird-ci/<image-repository>:<image-tag>

Examples:

  • quay.io/hummingbird/nodejs:20 - Red Hat supported Hummingbird image
  • quay.io/hummingbird-rawhide/curl:latest - Rawhide image
  • quay.io/hummingbird-ci/builder:latest - community-supported image

Release Tags

Tags are extracted from Containerfile labels as defined in properties.yml:

  • latest - Latest version of the image
  • <major> - Major version (e.g., 20 for Node.js 20.x)
  • <major>.<minor> - Major.minor version (e.g., 20.11)
  • <full-version> - Complete version with release (e.g., 20.11.1-1.fc42)

Non-default variants receive a -<variant> suffix (e.g., latest-builder).
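The tag set can be derived mechanically from the full version label; this sketch reproduces the scheme above for a hypothetical builder variant:

```shell
# Derive release tags from a full version label (scheme from the list above)
full="20.11.1-1.fc42"
version="${full%%-*}"         # 20.11.1 (strip the release suffix)
major="${version%%.*}"        # 20
major_minor="${version%.*}"   # 20.11
variant="builder"
suffix=""
if [ "$variant" != "default" ]; then suffix="-$variant"; fi
echo "latest$suffix $major$suffix $major_minor$suffix $full$suffix"
```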

Image Documentation

When a commit to main changes a README.md file, the content is automatically pushed to quay.io as the image description via the update_quay_description job.

Next Steps