# Test Configuration Reference

Complete reference for configuring image tests via `tests-container.yml` and `tests-k8s.yml`.
## Test Definition Format

Each image can have tests defined in:

- `images/<name>/tests-container.yml` - Container tests (run with Podman/Docker)
- `images/<name>/tests-k8s.yml` - K8s tests (run in a Kubernetes cluster)

Both files use the same YAML format:
```yaml
---
script:
  command: ./test.sh
inline:
  command: |
    image=${TEST_IMAGE:?}
    test_engine_run --rm "${image}" --version
```
## Environment Variables
The test runners provide environment variables to test commands. Some are common to both test types, others are specific to container or K8s tests.
### Common Variables

| Variable | Description |
|---|---|
| `TEST_IMAGE` | The container image being tested |
| `TEST_IMAGES` | Associative array with all group/variant image URLs |
| `TEST_IMAGES_PATH` | Path to a file containing the serialized `TEST_IMAGES` array |
| `TEST_VERBOSE` | Show test command output (`true` or `false`) |
| `TEST_GROUP` | The image group being tested (`nginx`, `python`, etc.) |
| `TEST_DISTRO` | The distro being tested (`hummingbird`, `rawhide`) |
| `TEST_VARIANT` | The variant being tested (`default`, `builder`, etc.) |
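`TEST_IMAGES_PATH` exists because a bash associative array cannot be exported to child processes; serializing it to a file and sourcing it back is the usual workaround. Below is a minimal sketch of that round trip using hypothetical image URLs; the runner's actual serialization mechanism may differ:

```shell
#!/bin/bash
set -euo pipefail

# Writer side (hypothetical): `declare -p` prints a reusable
# `declare -A TEST_IMAGES=(...)` statement that recreates the
# array when sourced in another shell.
declare -A TEST_IMAGES=(
  [nginx/default]="registry.example.com/nginx:latest"
  [nginx/builder]="registry.example.com/nginx-builder:latest"
)
TEST_IMAGES_PATH=$(mktemp)
declare -p TEST_IMAGES > "${TEST_IMAGES_PATH}"

# Reader side (what an external test script does): restore the array.
unset TEST_IMAGES
# shellcheck disable=SC1090
source "${TEST_IMAGES_PATH}"
echo "${TEST_IMAGES[nginx/default]:?}"
```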
Container Test Variables
| Variable | Description |
|---|---|
TEST_ENGINE |
Container engine to use (podman or docker) |
TEST_USER_ID |
Current user ID (for permission handling) |
TEST_FIPS |
Whether host is in FIPS mode (true or false) |
### K8s Test Variables

| Variable | Description |
|---|---|
| `TEST_RUN_ID` | Unique ID for this test run (for resource naming) |
| `TEST_RUN_LABEL` | Label selector for cleanup (`hum-k8s-test=<id>`) |
## Helper Functions

### Common to All Tests

Both test types provide the `test_fail` function:

| Function | Description |
|---|---|
| `test_fail` | Fail the test with a custom error message |
In K8s tests, `kubectl` is pre-configured with the context/kubeconfig from the CLI arguments.
### Container Tests Only

Container tests provide helper functions that wrap the container engine with appropriate flags. Always use these functions instead of calling `${TEST_ENGINE}` directly for `create` and `run` commands. This ensures tests respect globally set options such as the hermetic mode setting (`--pull=never`).
| Function | Description |
|---|---|
| `test_engine_create` | Run `${TEST_ENGINE} create` with appropriate pull flags |
| `test_engine_run` | Run `${TEST_ENGINE} run` with appropriate pull flags |
Note: For other container engine commands (`inspect`, `network`, `volume`, etc.), use `${TEST_ENGINE:?}` directly.
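The wrappers are conceptually thin. A hypothetical re-implementation of `test_engine_run` is sketched below; `TEST_PULL_FLAGS` is an invented name for illustration, and the real helpers may inject different or additional flags:

```shell
#!/bin/bash
set -euo pipefail

# Hypothetical sketch: prepend globally configured flags (e.g.
# --pull=never in hermetic mode) before the caller's arguments.
TEST_ENGINE=${TEST_ENGINE:-podman}
TEST_PULL_FLAGS=${TEST_PULL_FLAGS:---pull=never}

test_engine_run() {
  # TEST_PULL_FLAGS is intentionally unquoted so multiple flags split.
  "${TEST_ENGINE:?}" run ${TEST_PULL_FLAGS} "$@"
}

# Demonstrate with a stub engine that just prints its arguments.
TEST_ENGINE=echo
test_engine_run --rm quay.io/example/image:latest --version
```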
## Variant-Aware Testing

The testing system automatically includes variant-specific default tests:

- Global tests from `ci/{variant}-tests/tests-container.yml`
- Image-specific tests from `images/<name>/tests-container.yml`
### Variant Selection

Tests can specify which variants they apply to using the `filters.variants` field. The field supports both exact matches and glob patterns.

```yaml
# Test that only runs for builder variants
build-tools-test:
  filters:
    variants: [builder]
  command: |
    test_engine_run --rm "${TEST_IMAGE}" make --version

# Test that runs for multiple specific variants
multi-variant-test:
  filters:
    variants: [default, builder]
  command: |
    test_engine_run --rm "${TEST_IMAGE}" echo "Hello"

# Test that runs for all variants (no filters.variants field)
universal-test:
  command: |
    test_engine_run --rm "${TEST_IMAGE}" echo "Always runs"
```
### Glob Patterns in Variant Filters

Use glob patterns to match multiple variants dynamically:

```yaml
# Test that runs for any FIPS variant (fips, fips-builder, etc.)
fips-test:
  filters:
    variants: ["*fips*"]
  command: |
    test_engine_run --rm "${TEST_IMAGE}" check-fips-mode

# Test that runs for all builder variants (builder, fpm-builder, etc.)
builder-test:
  filters:
    variants: ["*-builder", "builder"]
  command: |
    test_engine_run --rm "${TEST_IMAGE}" dnf --version

# Mix exact matches and patterns
mixed-filter:
  filters:
    variants: [default, "*fips*"]
  command: |
    test_engine_run --rm "${TEST_IMAGE}" echo "Runs on default and all fips variants"
```

Supported glob syntax:

- `*` - matches any sequence of characters
- `?` - matches any single character
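Bash's own `[[ ... == pattern ]]` matching uses the same `*`/`?` semantics, so a filter check can be sketched in a few lines. This is a simplification for illustration; the runner's actual matcher may differ:

```shell
#!/bin/bash
set -euo pipefail

# Hypothetical sketch: does a variant name match any filter entry?
# Leaving the right-hand side of == unquoted inside [[ ]] enables
# glob matching with the * and ? semantics described above.
variant_matches() {
  local variant=$1; shift
  local pattern
  for pattern in "$@"; do
    [[ "${variant}" == ${pattern} ]] && return 0
  done
  return 1
}

variant_matches "fips-builder" "*fips*" && echo "matched"
variant_matches "default" "*fips*" || echo "no match"
```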
### Distro Selection

Tests can specify which distros they apply to using the `filters.distros` field. Like variant filters, this field supports both exact matches and glob patterns.

```yaml
# Test that only runs for the hummingbird distro
hummingbird-only-test:
  filters:
    distros: [hummingbird]
  command: |
    test_engine_run --rm "${TEST_IMAGE}" some-hummingbird-specific-check

# Test that runs for multiple distros
multi-distro-test:
  filters:
    distros: [hummingbird, rawhide]
  command: |
    test_engine_run --rm "${TEST_IMAGE}" echo "Runs on both"

# Test that runs for all distros (no filters.distros field)
universal-test:
  command: |
    test_engine_run --rm "${TEST_IMAGE}" echo "Always runs"
```
This is useful for tests that check distro-specific features. For example, CPE labels are only present on hummingbird images (Fedora/rawhide has no official CPE).
### Glob Patterns in Distro Filters

Use glob patterns to match multiple distros dynamically:

```yaml
# Test that runs for any distro ending in "bird"
bird-distro-test:
  filters:
    distros: ["*bird"]
  command: |
    test_engine_run --rm "${TEST_IMAGE}" check-something

# Test that runs for any distro starting with "raw"
raw-distro-test:
  filters:
    distros: ["raw*"]
  command: |
    test_engine_run --rm "${TEST_IMAGE}" check-rawhide-feature
```

Supported glob syntax:

- `*` - matches any sequence of characters
- `?` - matches any single character
### FIPS Mode Selection

Tests can specify whether they should run based on the host's FIPS mode using `filters.fips`:

```yaml
# Test that only runs on FIPS-enabled hosts
fips-required-test:
  filters:
    fips: true
  command: |
    test_engine_run --rm "${TEST_IMAGE}" openssl list -providers

# Test that only runs on non-FIPS hosts
non-fips-test:
  filters:
    fips: false
  command: |
    test_engine_run --rm "${TEST_IMAGE}" some-non-fips-check

# Test that runs regardless of FIPS mode (no filters.fips field)
universal-test:
  command: |
    test_engine_run --rm "${TEST_IMAGE}" echo "Always runs"
```
The `TEST_FIPS` environment variable is set to `true` when running on a FIPS-enabled host (detected from `/proc/sys/crypto/fips_enabled`), and `false` otherwise.
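The detection described above can be sketched as follows, assuming only the documented `/proc` interface; the runner's exact logic may differ (here a missing file is treated as non-FIPS):

```shell
#!/bin/bash
set -euo pipefail

# Sketch of FIPS detection: /proc/sys/crypto/fips_enabled contains "1"
# on a FIPS-enabled kernel. A missing file is treated as non-FIPS.
detect_fips() {
  local flag
  flag=$(cat /proc/sys/crypto/fips_enabled 2>/dev/null || echo 0)
  if [[ "${flag}" == "1" ]]; then
    echo true
  else
    echo false
  fi
}

TEST_FIPS=$(detect_fips)
echo "TEST_FIPS=${TEST_FIPS}"
```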
## Group Test Mode

When running tests for all variants (without specifying a specific variant), the system supports a special "group" test mode for tests that need to work across multiple variants simultaneously:

```yaml
# Test that runs only in group mode
cross-variant-compatibility-test:
  filters:
    variants: [group]
  command: |
    # Access specific group/variant combinations using TEST_GROUP
    builder_image=${TEST_IMAGES[${TEST_GROUP}/builder]:?}
    default_image=${TEST_IMAGES[${TEST_GROUP}/default]:?}
    echo "Builder variant: ${builder_image}"
    echo "Default variant: ${default_image}"

    # Test that both variants have compatible APIs
    test_engine_run --rm "${builder_image}" nginx -version
    test_engine_run --rm "${default_image}" nginx -version
```
## Using TEST_IMAGES in External Scripts

External shell scripts that need to reference other image variants must source the `TEST_IMAGES` array file at the beginning:

```bash
#!/bin/bash
set -euo pipefail

# Load TEST_IMAGES array
# shellcheck disable=SC1090
source "${TEST_IMAGES_PATH:?}"

# Now TEST_IMAGES is available - always use :? for proper error checking
test_engine_run --rm "${TEST_IMAGES[curl/default]:?}" ...
test_engine_run --rm "${TEST_IMAGES[httpd/default]:?}" ...
```
Note: Inline test commands in YAML files automatically have access to `TEST_IMAGES` and do not need this sourcing pattern.
## Using TEST_GROUP for Dynamic References

The `TEST_GROUP` variable contains the current image group name (e.g., `nginx`, `python`, `aspnet-runtime-8-0`). Use it with `TEST_IMAGES` to dynamically reference the current image's variants without hardcoding the group name:

```yaml
multi-stage-build:
  filters:
    variants: [builder]
  command: |
    # Build in the builder variant, run in the default variant
    "${TEST_ENGINE}" build -t localhost/myapp -f - . <<EOF
    FROM ${TEST_IMAGES[${TEST_GROUP}/builder]:?}
    # ... build steps ...
    FROM ${TEST_IMAGES[${TEST_GROUP}/default]:?}
    COPY --from=0 /app /app
    EOF
    test_engine_run --rm localhost/myapp /app/myapp
```
## Test Helper Functions

The test runner provides helper functions available in test commands:

### test_fail(message)

Immediately fails the test with a custom error message sent to stderr:

```yaml
test-name:
  command: |
    result=$(test_engine_run --rm "${TEST_IMAGE}" some-command)
    [[ "$result" == "expected" ]] || test_fail "Expected 'expected', got '$result'"
```
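A minimal `test_fail` could look like the sketch below; this is a hypothetical re-implementation (including the `FAIL:` prefix), and the runner's version may add more context:

```shell
#!/bin/bash

# Hypothetical minimal version of the helper: print the message to
# stderr and exit non-zero so the runner records a failure.
test_fail() {
  echo "FAIL: $*" >&2
  exit 1
}

# Demonstrate in a subshell so this demo script itself keeps running.
status=0
msg=$( (test_fail "Expected 'expected', got 'actual'") 2>&1 ) || status=$?
echo "captured: ${msg}"
echo "exit status: ${status}"
```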
## Known Issues (Container Tests Only)

Container tests can specify known log patterns that should not cause the test to report an error (support for errors is still unimplemented) or a failure. This helps distinguish expected failures (tracked issues) from unexpected failures (new regressions), and detects when expected failures stop occurring.
Add `known_issues` to any test to specify patterns matching expected failures:

```yaml
test-name:
  command: |
    result=$(test_engine_run --rm "${TEST_IMAGE}" some-command)
    [[ "$result" == "expected" ]] || test_fail "Custom error message"
  known_issues:
    - description: "Known configuration issue"
      issue: "https://issues.redhat.com/browse/PROJ-1234"
      pattern: "Custom error message"
      fails: "sometimes" # optional, defaults to "always"
```
### Fields

- `description`: Human-readable explanation of the issue (required)
- `issue`: Full URL to the issue tracking this problem (required)
- `pattern`: Regular expression(s) to match against test output (required)
- `fails`: Frequency of failure (optional, defaults to `"always"`)
### Pattern Support

The `pattern` field supports both single patterns and arrays of patterns:

```yaml
known_issues:
  # Single pattern
  - description: "Simple failure case"
    issue: "https://issues.redhat.com/browse/PROJ-1"
    pattern: "Connection failed"

  # Multiple patterns (any match triggers)
  - description: "Network connectivity issues"
    issue: "https://issues.redhat.com/browse/PROJ-2"
    pattern:
      - "Connection timeout"
      - "Network unreachable"
      - "curl: \\(28\\)"
    fails: "sometimes"
```
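The "any match triggers" semantics can be sketched with `grep -E`; this is a hypothetical helper for illustration, and the runner's actual matcher may differ:

```shell
#!/bin/bash
set -euo pipefail

# Hypothetical sketch: succeed if any of the given extended regular
# expressions matches the captured test output.
matches_known_issue() {
  local output=$1; shift
  local pattern
  for pattern in "$@"; do
    if grep -Eq -- "${pattern}" <<< "${output}"; then
      return 0
    fi
  done
  return 1
}

log="curl: (28) Connection timed out after 5000 ms"
if matches_known_issue "${log}" "Connection timeout" "Network unreachable" 'curl: \(28\)'; then
  echo "known issue"
fi
```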
### Failure Frequency

- `always`: Known failures that consistently fail every time (default)
- `sometimes`: Intermittent failures that may pass on retry (flaky tests)
### Unexpected Pass Detection

The system automatically detects when a test whose known issue is labeled `fails: "always"` suddenly starts passing. Note that this does not catch a known issue that has stopped occurring when its test output still matches other, still-occurring known issues.
## Automatic Retries (Container Tests Only)

The container test runner automatically retries tests that fail with certain transient infrastructure errors. This helps avoid false test failures caused by temporary issues with external services, such as container registries, or by network problems.

Retry behavior:

- Failed tests are checked against a list of retriable error patterns
- If a match is found, the test is automatically retried
- If the test still fails after all attempts, it is reported as a normal failure
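The retry flow can be sketched as below. The function name, the pattern list, and the attempt count are all hypothetical; the real runner's values may differ:

```shell
#!/bin/bash
set -euo pipefail

# Hypothetical retriable-error patterns and retry loop.
RETRIABLE_PATTERNS=("connection reset by peer" "TLS handshake timeout")
MAX_ATTEMPTS=3

run_with_retries() {
  local attempt output status pattern retriable
  for ((attempt = 1; attempt <= MAX_ATTEMPTS; attempt++)); do
    status=0
    output=$("$@" 2>&1) || status=$?
    ((status == 0)) && { echo "${output}"; return 0; }
    # Retry only when the failure matches a retriable pattern.
    retriable=false
    for pattern in "${RETRIABLE_PATTERNS[@]}"; do
      grep -qi -- "${pattern}" <<< "${output}" && retriable=true
    done
    [[ "${retriable}" == true ]] || break
  done
  echo "${output}" >&2
  return "${status}"
}

# Simulated flaky command: fails twice with a retriable error, then
# passes. A state file is used because the command runs in a subshell.
state_file=$(mktemp)
flaky() {
  local n
  n=$(cat "${state_file}" 2>/dev/null || echo 0)
  echo $((n + 1)) > "${state_file}"
  if ((n < 2)); then
    echo "connection reset by peer" >&2
    return 1
  fi
  echo "ok"
}

run_with_retries flaky
```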
## Next Steps

- Testing Guide - How to run and write tests
- Adding Images - How to add new images with tests