Using a Project Hummingbird container image
Project Hummingbird builds a collection of minimal, hardened, and secure container images with a significantly reduced attack surface. This strong focus on security, combined with a highly automated update workflow, aims to minimize CVE counts, targeting near-zero vulnerabilities. All images support amd64 and arm64 architectures.
Available images include languages (Python, Go, Node.js, Rust, PHP), databases (PostgreSQL, MariaDB), web servers (httpd, Caddy, nginx), tools (curl, git), and base runtime images. Browse the complete catalog at Quay.io.
Note that the project is currently under development. All containers are tested and built with care, but are not yet recommended for production.
Quick Start
Getting started with Hummingbird images is simple. All images are available from the Quay.io registry and can be used directly with Podman or Docker:
# Run a command directly with the curl image
podman run quay.io/hummingbird/curl -v https://example.com
# Start a PostgreSQL database
podman run -e POSTGRES_PASSWORD=mysecret quay.io/hummingbird/postgresql:latest
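To reach the database from host clients, the container's port can be published. A sketch, where the container name, host port, and password are illustrative values rather than project defaults:

```shell
# Publish the container's PostgreSQL port so host clients can connect.
# Name, password, and host port are illustrative values.
HOST_PORT=5432
podman run -d --name hb-postgres \
  -e POSTGRES_PASSWORD=mysecret \
  -p "$HOST_PORT":5432 \
  quay.io/hummingbird/postgresql:latest
# Then, from the host (requires a local psql client):
# psql -h 127.0.0.1 -p "$HOST_PORT" -U postgres
```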
Example container build using the Hummingbird Python image:
FROM quay.io/hummingbird/python:latest
COPY myapp.py /app/
WORKDIR /app
CMD ["python3", "myapp.py"]
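The Containerfile above can then be built and run with podman; the tag name myapp is an arbitrary local choice:

```shell
# Build the Containerfile above and run the resulting image.
# "myapp" is an arbitrary local tag.
TAG=myapp
podman build -t "$TAG" .
podman run --rm "$TAG"
```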
You can find the documentation for each image on Quay.
Distroless Containers
Hummingbird builds so-called distroless containers, which is a fancy way of saying that the images do not ship with a package manager and most do not even provide a shell.
The distroless design usually makes the bundled application the container’s entrypoint, offering a more user-friendly experience. For example, with the Hummingbird curl image, you can directly specify arguments to curl, like podman run quay.io/hummingbird/curl -v https://www.redhat.com/en.
Providing purpose-built containers means tons of work for the Project Hummingbird team, but heavily reduces the work for users. As a user, you don’t have to build your own container image and then be burdened with managing the vulnerabilities of your image. Instead, you can directly use a Hummingbird image with the bundled application you need and thereby avoid CVE hell.
Hardened for Security
The project applies a number of measures to harden the Hummingbird containers:
- Minimal Software Footprint: Images include only essential software packages required for the workload, significantly reducing the attack surface and the number of CVEs per image.
- Rapid Update Deployment: Software package updates are shipped as quickly as possible, ensuring that fixes are consumed early.
- Non-Root User Default: Containers default to a non-root user where technically possible, increasing security by reducing privileges within the container.
- Hermetic Build Environment: All containers are built in a hermetic environment without network access. This prevents unintended package drift and gives the Hummingbird project full control over the software versions used.
- Distroless Security: The distroless nature of Hummingbird containers, shipping only what is strictly necessary for the given workload, reduces the attack surface and further improves security by making certain types of attacks impossible.
Understanding Image Variants
Hummingbird provides different variants and base images to support various use cases while maintaining security by default.
Application Image Variants
Each application image (Python, Go, PostgreSQL, etc.) is available in multiple variants:
Default Variant (:latest)
- Distroless: no package manager, no shell
- Minimal attack surface
- Recommended for production
- Example:
quay.io/hummingbird/python:latest
Builder Variant (:latest-builder)
- Includes the dnf package manager and bash
- For installing additional dependencies
- Intended for multi-stage builds and development
- Example:
quay.io/hummingbird/python:latest-builder
Hatchling Variant (:latest-hatchling)
- Experimental variant using packages from the Hummingbird repository in addition to Fedora Rawhide packages
- Enables early testing of Hummingbird-built packages before they become the default
- Example:
quay.io/hummingbird/php:latest-hatchling
Some images provide additional variants, such as PHP’s FPM variant.
Base Runtime Images
For multi-stage builds where you compile code in a builder variant, you can use default variants as the base image. For some toolchains, we recommend using the core-runtime base image for your final stage:
core-runtime: Minimal runtime environment with essential libraries (glibc, etc.). Use for compiled binaries from languages like C, Go, and Rust.
Tagging Strategy
Images follow a version-based tagging scheme for stability:
- :latest - Most recent version (may change)
- :<version> - Specific version (e.g., :3.11, :16)
- :<version>-builder - Builder variant of a specific version
Recommendation: For best practices on deciding which tags or even digests to use, refer to the following article: How to name, version, and reference container images
Use versioned tags in production for reproducible builds. The :latest tag is convenient for development but may introduce unexpected changes.
Check Quay.io for available version tags for each image.
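For fully reproducible references, a mutable tag can be resolved to an immutable digest. A sketch using skopeo, assuming it is installed locally:

```shell
# Resolve a mutable tag to an immutable digest reference.
IMAGE=quay.io/hummingbird/python:latest
DIGEST=$(skopeo inspect --format '{{.Digest}}' "docker://$IMAGE" 2>/dev/null)
# Reference the image by digest, e.g. quay.io/hummingbird/python@sha256:...
echo "${IMAGE%:*}@$DIGEST"
```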
Multi-Stage Build Pattern
The recommended pattern for compiled languages:
# Build stage: use builder variant to install dependencies and compile
FROM quay.io/hummingbird/go:latest-builder AS builder
RUN dnf install -y <build dependencies>
COPY . /src
WORKDIR /src
RUN go build -o /app .
# Runtime stage: use minimal base image for the compiled binary
FROM quay.io/hummingbird/core-runtime:latest
COPY --from=builder /app /app
ENTRYPOINT ["/app"]
This approach gives you build-time flexibility while maintaining a minimal production image.
Source Containers
The source code for all Hummingbird containers is available in the form of so-called source containers. A source container includes the RPM and non-RPM content shipped in an image, and is pushed alongside each available tag with the “-source” suffix. You can inspect the contents of a source container with skopeo:
IMAGE=quay.io/hummingbird/curl:latest-source
cd $(mktemp -d)
mkdir source
skopeo copy --override-os=linux docker://$IMAGE dir:source
cd source
mv version manifest.json ..
for f in $(ls); do tar xvf $f; done
Compatibility
Hummingbird images are designed for compatibility with popular images from Docker Hub, Red Hat UBI, and other registries, enabling straightforward migration of existing workloads.
Key Difference: Hummingbird images default to a non-root user (UID 65532) where technically possible, while most traditional images run as root. This may require adjusting file permissions on mounted volumes:
# Ensure correct ownership for mounted data
chown -R 65532:65532 /path/to/data
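With rootless podman, UID 65532 inside the container maps to a subordinate UID on the host, so the chown must run inside the user namespace. A sketch, with /path/to/data as a placeholder:

```shell
# Run chown inside the rootless user namespace so ownership maps to
# container UID 65532; /path/to/data is a placeholder path.
TARGET_UID=65532
podman unshare chown -R "$TARGET_UID:$TARGET_UID" /path/to/data
```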
For detailed comparisons (environment variables, ports, sizes, default users), see the compatibility report, which documents differences between each Hummingbird image and its traditional counterpart. This is also available in machine-readable form in report.json.
Sharing Host Data
By default, containers do not have access to host filesystem content. Volume mounts must be added explicitly to make host directories visible inside the container:
podman run -v /path/on/host:/path/in/container quay.io/hummingbird/curl ...
SELinux Relabeling
On systems with SELinux enabled (such as Fedora and RHEL), mounted volumes must be relabeled to allow container access. Use the :z or :Z option:
podman run -v /path/on/host:/path/in/container:z quay.io/hummingbird/curl ...
The :z option relabels the content to be accessible by any container sharing this mount. The :Z option relabels them to be uniquely accessible to one container.
File Permissions
Most Hummingbird images default to a non-root user. When mounting host directories that the container needs to write to, either make the mounted directory world-writable (nested inside a private parent directory):
mkdir -m 700 /path/on/host
mkdir -m 777 /path/on/host/mnt
podman run -v /path/on/host/mnt:/path/in/container:z ...
Or run it as the root user (this is recommended only with rootless podman, where the root user in the container is mapped to the calling user on the host), e.g.:
podman run --user root -v /path/on/host:/path/in/container:z ...
For read-only access, no permission changes are needed as long as the files are world-readable.
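Read-only mounts can also be made explicit with the :ro option, combined with SELinux relabeling where needed. A sketch, where the paths and file name are placeholders:

```shell
# Mount host content read-only; the container cannot modify it.
MOUNT_OPTS=ro,z
podman run -v /path/on/host:/data:"$MOUNT_OPTS" \
  quay.io/hummingbird/curl file:///data/input.txt
```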
Image Verification
Development Status: Hummingbird images are currently signed by our Tekton pipeline for development and testing purposes. Official production images will be signed with Red Hat keys and published through Red Hat’s official pipeline in the future.
To verify an image was built by the Hummingbird Tekton pipeline:
- Save the public key to a file (e.g., key.pub):
-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEtYRltxRJvXLMpXT+pIIu86CLhDP7
Q6VznCXqlzV3AO4AK/ge/HYtv6wMPfe4NHP3VQkCWoUokegC926FB+MTyA==
-----END PUBLIC KEY-----
- Verify the image signature using cosign:
cosign verify --key key.pub --insecure-ignore-tlog quay.io/hummingbird/<image>:<tag>
Vulnerability Scanning
For security vulnerability scanning, Hummingbird uses Syft for SBOM generation and Grype for vulnerability detection.
However, Syft’s output needs to be post-processed for accurate vulnerability matching with Fedora packages. For the time being (see Roadmap below), our syft-hummingbird.sh script calls syft and enhances the output with CPE dictionary lookups for better Grype matching. Other security scanners, including the scans on Quay.io, do not yet support Hummingbird containers.
To scan a Hummingbird image for vulnerabilities, run the script from this repository (you need to have syft and grype installed):
ci/syft-hummingbird.sh quay.io/hummingbird/<image>:<tag> | grype
Alternatively, run it from our official CI container, which contains syft, grype, and the script. Use a volume to cache the databases between runs if you plan to scan multiple images:
podman run --volume vuln-db:/tmp/.cache quay.io/hummingbird-ci/gitlab-ci syft-hummingbird.sh quay.io/hummingbird/<image>:<tag> > sbom.json
Or use the grype-hummingbird.sh wrapper to run both syft and grype:
podman run --volume vuln-db:/tmp/.cache quay.io/hummingbird-ci/gitlab-ci grype-hummingbird.sh quay.io/hummingbird/<image>:<tag>
This will produce results like those in the vulnerability report, or the per-image reports.
Known invalid or outdated CVEs are filtered out using the global ignore list in images/vulnerabilities-ignore.yml.
Reproducible Builds
Reproducible builds ensure that the same inputs and build environment always produce bit-for-bit identical artifacts. This allows independent verification that a published image actually corresponds to its claimed source materials, and makes it easy to detect if malware was injected during the build process, whether through a compromised build system or another supply chain attack.
Hummingbird images are fully reproducible. Given the signed SLSA provenance attestation that accompanies each image, anyone can rebuild the image from its inputs (this git repo and the RPMs it describes) to verify it matches the published version exactly.
Verifying Reproducibility
The cosign and podman tools are required. First, download and verify the SLSA provenance attestation using cosign (see the Image Verification section for key details):
cosign verify-attestation --key key.pub --insecure-ignore-tlog \
--type slsaprovenance $IMAGE > attestation.json
Then, feed this attestation into the rebuild tool (capture the image ID output; this will be used in the next step):
iid=$(podman run -i --rm --privileged -v /mnt \
quay.io/hummingbird-ci/builder rebuild < attestation.json)
[!NOTE] The --privileged flag is required because the build process uses nested containerization. However, this podman command is expected to be run rootless. A rootless container cannot gain more privileges than the calling user.
To verify reproducibility, we can pull down the image and compare its image ID:
iid2=$(podman pull $IMAGE)
[ $iid = $iid2 ] && echo "Identical"
[!NOTE] Here, we compare the containers-storage image ID. This is distinct from the repo digest. The image ID is a hash over the manifest and uncompressed content. The repo digest is a hash over compressed content. Images should be compared uncompressed to avoid depending on the exact compression algorithm used and target registry format compatibility.
By default, the image rebuild is discarded since it should be identical to the Quay.io image. However, it is also possible to keep it for further inspection using the DUMP_OCIARCHIVE environment variable and piping it into podman load:
podman run -i --rm --privileged -e DUMP_OCIARCHIVE=1 -v /mnt quay.io/hummingbird-ci/builder rebuild < attestation.json | podman load
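The verification steps above can be combined into a single script. A sketch, assuming key.pub from the Image Verification section and cosign and podman installed; the image reference is an illustrative choice:

```shell
# One-shot reproducibility check: verify attestation, rebuild, compare IDs.
IMAGE=quay.io/hummingbird/curl:latest
cosign verify-attestation --key key.pub --insecure-ignore-tlog \
  --type slsaprovenance "$IMAGE" > attestation.json
iid=$(podman run -i --rm --privileged -v /mnt \
  quay.io/hummingbird-ci/builder rebuild < attestation.json)
iid2=$(podman pull "$IMAGE")
if [ "$iid" = "$iid2" ]; then
  echo "Reproducible: image IDs match"
else
  echo "Mismatch: rebuilt $iid vs published $iid2" >&2
fi
```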
Roadmap
- RPM builds: We’re currently building Project Hummingbird containers on Fedora Rawhide. Our future plan is to manage the build pipelines for Hummingbird’s RPM packages. This will allow us to quickly consume updates of stable software through a fully automated process. It also gives us the flexibility to offer packages in multiple versions and optimize them for container use.
- Security scanning: Once we build our own RPM packages, we will add proper SBOM metadata, and also set up a Hummingbird CVE database. Then Syft/Grype should work directly without post-processing.
- Official Red Hat signing: Transition from development pipeline signatures to official Red Hat signing keys and publishing infrastructure for production-ready images.
- Increasing image catalog: We intend to significantly expand our catalog of images.
- Image size optimization: Building our own RPM packages will provide complete control over the software stack, enabling further optimization of image sizes through container-specific package configurations and dependency management.
Relationship to Fedora
As mentioned in the roadmap, Project Hummingbird containers are currently built from Fedora Rawhide packages; in the future, the project will maintain distinct packages, making the separation from Fedora clearer. Major differences from Fedora include (or will include):
- Use of monorepos; there is one monorepo for packages and one monorepo for containers. This makes it easier to manage and automate, especially for a smaller team.
- Heavy reliance on CI/CD; dependency management, testing, and releasing is automated.
- Secure supply chain requirements; container images (and eventually packages) are built using Konflux which provides SLSA level 3 compliance.
- Greater emphasis on upstream tracking; software will closely track upstream projects’ lifecycles more uniformly, including parallel stable versions if offered.
Overall, there is a natural affinity between the two projects and opportunities exist to contribute back into Fedora some of the work happening in this project. The degree to which this collaboration happens and how it takes form will need to be discussed within the Fedora community to gain consensus.
Contributing
Interested in contributing? See the Quickstart Guide for information on building and testing images locally, the CI/CD pipeline, project structure, and how to add new images.