pip -> uv, CI and deploy updated accordingly #7

Conversation
Walkthrough

Migrate CI and container workflows from `actions/setup-python` + pip/`requirements.txt` to `astral-sh/setup-uv` + uv (`pyproject.toml` + `uv.lock`); add a shared backend base image, convert Dockerfiles and docker-compose to use uv, switch packaging to Hatchling, and update Pydantic model configs and a few tests.

Changes
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Pre-merge checks

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📒 Files selected for processing (3)

🚧 Files skipped from review as they are similar to previous changes (1)

🧰 Additional context used

🪛 LanguageTool, docs/operations/cicd.md:

- [grammar] ~52: Use a hyphen to join words. (QB_NEW_EN_HYPHEN)
- [style] ~70: "on top of that" might be wordy. Consider a shorter alternative. (EN_WORDINESS_PREMIUM_ON_TOP_OF_THAT)
- [uncategorized] ~168-~173: The official name of this software platform is spelled with a capital "H". (GITHUB)
- [uncategorized] ~172: If this is a compound adjective that modifies the following noun, use a hyphen. (EN_COMPOUND_ADJECTIVE_INTERNAL)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
🔇 Additional comments (2)
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
.github/workflows/tests.yml (1)
20-24: Pin and verify checksums for yq and k3s downloads to reduce supply-chain risk.

Lines 22 and 27 use unpinned endpoints: yq's `releases/latest` resolves to the newest version at runtime without version locking, and k3s is fetched via the `get.k3s.io` wrapper script (`curl | sh` pattern). Neither verifies checksums or signatures. Recommend pinning to specific versions and validating checksums against published digests.
🧹 Nitpick comments (12)
.github/workflows/ruff.yml (1)
17-26: Prefer `working-directory: backend` for clarity.

Not required, but it simplifies the step and reduces shell-script footguns.

```diff
-      - name: Run ruff
-        run: |
-          cd backend
-          uv run ruff check . --config pyproject.toml
+      - name: Run ruff
+        working-directory: backend
+        run: uv run ruff check . --config pyproject.toml
```

.github/workflows/mypy.yml (1)
17-33: Consider `working-directory: backend` (optional); otherwise LGTM.

Keeps steps single-line and easier to maintain, but the current approach is fine.

backend/pyproject.toml (2)
129-172: Dev dependencies are duplicated in two places (drift risk)
Right now, the dev list exists in both `[project.optional-dependencies].dev` and `[dependency-groups].dev`. Pick one source of truth (or ensure one is derived/generated); otherwise they will diverge.

231-231: Pytest `addopts` has `-q` twice.

Not harmful, but can be cleaned up.

backend/workers/Dockerfile.coordinator (2)
12-13: Pin the uv version to ensure reproducible builds.

Using the `:latest` tag for the uv image risks non-deterministic builds. Pin to a specific version (e.g., `ghcr.io/astral-sh/uv:0.4.9`).

28-28: Standardize Python invocation syntax across worker Dockerfiles.

This file uses `python -m workers.run_coordinator` (module syntax), while other workers (e.g., dlq_processor) use direct file paths like `python workers/dlq_processor.py`. Consider standardizing the approach across all worker Dockerfiles for consistency.

backend/workers/Dockerfile.k8s_worker (1)

11-12: Pin the uv version to ensure reproducible builds.

Using the `:latest` tag for the uv image risks non-deterministic builds. Pin to a specific version.

backend/workers/Dockerfile.pod_monitor (1)

11-12: Pin the uv version to ensure reproducible builds.

Using the `:latest` tag for the uv image risks non-deterministic builds. Pin to a specific version.

backend/workers/Dockerfile.saga_orchestrator (1)

11-12: Pin the uv version to ensure reproducible builds.

Using the `:latest` tag for the uv image risks non-deterministic builds. Pin to a specific version.

backend/workers/Dockerfile.result_processor (1)

11-12: Pin the uv version to ensure reproducible builds.

Using the `:latest` tag for the uv image risks non-deterministic builds. Pin to a specific version.

backend/Dockerfile (1)

9-10: Pin the uv version to ensure reproducible builds.

Using the `:latest` tag for the uv image risks non-deterministic builds. Pin to a specific version.

backend/workers/Dockerfile.dlq_processor (1)

11-12: Pin the uv version to ensure reproducible builds.

Using the `:latest` tag for the uv image risks non-deterministic builds and supply-chain inconsistencies. Pin to a specific version (e.g., `ghcr.io/astral-sh/uv:0.9.17`).
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
- `backend/uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (17)
- `.github/workflows/docs.yml` (1 hunks)
- `.github/workflows/mypy.yml` (1 hunks)
- `.github/workflows/ruff.yml` (1 hunks)
- `.github/workflows/security.yml` (1 hunks)
- `.github/workflows/tests.yml` (4 hunks)
- `backend/Dockerfile` (2 hunks)
- `backend/Dockerfile.test` (2 hunks)
- `backend/pyproject.toml` (2 hunks)
- `backend/requirements-dev.txt` (0 hunks)
- `backend/requirements.txt` (0 hunks)
- `backend/workers/Dockerfile.coordinator` (2 hunks)
- `backend/workers/Dockerfile.dlq_processor` (2 hunks)
- `backend/workers/Dockerfile.event_replay` (2 hunks)
- `backend/workers/Dockerfile.k8s_worker` (2 hunks)
- `backend/workers/Dockerfile.pod_monitor` (2 hunks)
- `backend/workers/Dockerfile.result_processor` (2 hunks)
- `backend/workers/Dockerfile.saga_orchestrator` (2 hunks)
💤 Files with no reviewable changes (2)
- backend/requirements.txt
- backend/requirements-dev.txt
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Backend Tests
🔇 Additional comments (8)
.github/workflows/tests.yml (1)
174-184: uv migration in tests looks consistent (`uv sync --frozen` + `uv run pytest`).

Nice: frozen lock usage and running tools via uv should make CI more reproducible.

Also applies to: 199-204
backend/pyproject.toml (1)
148-154: Hatchling switch + explicit wheel packages looks good; verify the package layout matches.

`packages = ["app", "workers"]` is fine as long as those are import packages (and your backend build context expects that). Please double-check install/build output in CI.

backend/Dockerfile.test (1)
27-28: Test command via `uv run` is consistent with the migration.

Looks good.

backend/workers/Dockerfile.event_replay (1)

26-33: Non-root worker + `uv sync --no-dev` + `uv run` is a solid baseline.

LGTM.

.github/workflows/docs.yml (1)
42-46: jq is preinstalled on `ubuntu-latest`.

The concern about `jq` not being available on `ubuntu-latest` is unfounded. GitHub's ubuntu-latest runner images (both Ubuntu 22.04 and the current Ubuntu 24.04 as of January 2025) include `jq` by default. The jq command in this step (object merge with `+`) uses basic syntax available in all jq versions.

While explicitly installing `jq` could be a defensive practice if you need to guarantee a specific version, it is not necessary for the current workflow.

Likely an incorrect or invalid review comment.
backend/workers/Dockerfile.dlq_processor (2)
33-33: Verify `uv run` works with direct Python file paths.

Confirm that `uv run python workers/dlq_processor.py` correctly executes the script with the synced virtual environment. If module invocation is preferred (consistency across workers), consider using `-m` syntax instead.

14-18: The `uv sync --frozen --no-dev` command syntax is correct.

The flags align with the intended behavior: `--frozen` enforces the existing lockfile without modification, and `--no-dev` excludes development dependencies. This command properly installs only production dependencies from the locked dependency versions.
38-51: No action needed: gunicorn and uvicorn are already present in dependencies.

Both `gunicorn==23.0.0` and `uvicorn==0.34.2` are explicitly listed in `backend/pyproject.toml` and will be available after `uv sync`.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
.github/workflows/tests.yml (2)
156-163: Health check can false-positive without `curl --fail`.

Right now the loop may succeed even if the endpoint returns an HTTP error (e.g., 404/500). Consider `curl -fsSk ...` (and optionally `--retry-connrefused`) so the step only passes on a successful status.

```diff
-          timeout 300 bash -c 'until curl -k https://127.0.0.1:443/api/v1/health -o /dev/null; do \
+          timeout 300 bash -c 'until curl -fsSk https://127.0.0.1:443/api/v1/health -o /dev/null; do \
             echo "Retrying backend health check..."; \
             sleep 5; \
           done'
```
20-24: Pin yq and k3s versions and verify checksums in CI; avoid `releases/latest` and pipe-to-shell patterns.

Currently, yq is fetched from `releases/latest` without checksum verification, and k3s is installed via `curl | sh` without version pinning or checksum verification. Both introduce supply-chain risk and nondeterministic builds.

For yq (lines 20-23): pin a specific release version, download the accompanying SHA256 checksums file, and verify before installing:

```yaml
      - name: Install yq
        run: |
          VERSION=v4.43.1  # Set desired version
          curl -sSL -o yq https://github.com/mikefarah/yq/releases/download/${VERSION}/yq_linux_amd64
          curl -sSL -o checksums https://github.com/mikefarah/yq/releases/download/${VERSION}/checksums
          sha256sum -c <(grep yq_linux_amd64 checksums)
          sudo install -m 0755 yq /usr/local/bin/yq
```

For k3s (lines 25-27): set `INSTALL_K3S_VERSION` to pin the exact release, and download and verify the binary and checksums separately before running the installer:

```yaml
      - name: Setup Kubernetes (k3s) and Kubeconfig
        run: |
          K3S_VERSION=v1.33.3+k3s1  # Pin exact version
          curl -fsSL -o k3s "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/k3s-io/k3s/releases/download/${K3S_VERSION}/k3s"
          curl -fsSL -o sha256sum.txt "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/k3s-io/k3s/releases/download/${K3S_VERSION}/sha256sum-amd64.txt"
          sha256sum -c --ignore-missing sha256sum.txt
          sudo install -m 0755 k3s /usr/local/bin/k3s
          # Continue with k3s configuration...
```

Both tools publish SHA256 checksums with each release as downloadable assets; GPG signatures are not available for either.
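The `sha256sum -c` step can also be reproduced with a small Python helper when a shell with process substitution isn't available. This is a sketch that assumes the checksums file contains `<sha256> <filename>` lines; yq's real checksums file may use a different column layout, so adjust the parsing accordingly:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_checksum(path: str, checksums_file: str, name: str) -> bool:
    """Check `path` against the line for `name` in a checksums file."""
    with open(checksums_file, encoding="utf-8") as fh:
        for line in fh:
            parts = line.split()
            if len(parts) >= 2 and parts[-1] == name:
                return parts[0] == sha256_of(path)
    raise KeyError(f"{name} not listed in {checksums_file}")
```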
🧹 Nitpick comments (1)
.github/workflows/tests.yml (1)
174-185: Current uv migration approach is sound; consider passing `python-version` directly to the setup-uv action input for consistency.

The workflow correctly uses `setup-uv@v5` with `enable-cache: true` and `cache-dependency-glob: "backend/uv.lock"`. The `cache-dependency-glob` path (relative to the repo root) is correctly configured. However, rather than relying on a hardcoded `uv python install 3.12` in the script, passing `python-version: 3.12` as an input to the `setup-uv` action (which sets `UV_PYTHON`) would be cleaner and more declarative, though the step would still be needed to ensure the interpreter is installed. The current approach works fine and incurs minimal overhead; this is an optional refinement rather than a requirement.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- `.github/workflows/mypy.yml` (1 hunks)
- `.github/workflows/ruff.yml` (1 hunks)
- `.github/workflows/tests.yml` (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- .github/workflows/mypy.yml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Backend Tests
🔇 Additional comments (1)
.github/workflows/ruff.yml (1)
17-32: Ruff setup is correct and properly configured for CI.

Ruff is declared in `backend/pyproject.toml` under `[tool.uv.dev-dependencies]` and is included in the `uv.lock` file. The workflow correctly uses `uv sync --frozen`, which installs all dependency groups (including dev dependencies) from the lock file, ensuring ruff availability in the CI environment. The setup with `setup-uv@v5`, `uv python install 3.12`, and the caching strategy is appropriate for the intended use case.
Actionable comments posted: 5
♻️ Duplicate comments (1)
backend/workers/Dockerfile.event_replay (1)
11-12: Pin the uv image version to ensure reproducible builds.

Using `:latest` can lead to non-deterministic builds if a new uv version is released. Replace with a specific version tag (e.g., `0.5.0`) or a digest.

```diff
-COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/
+COPY --from=ghcr.io/astral-sh/uv:0.5.0 /uv /uvx /bin/
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- `.github/workflows/tests.yml` (5 hunks)
- `backend/Dockerfile` (2 hunks)
- `backend/workers/Dockerfile.coordinator` (2 hunks)
- `backend/workers/Dockerfile.dlq_processor` (2 hunks)
- `backend/workers/Dockerfile.event_replay` (2 hunks)
- `backend/workers/Dockerfile.k8s_worker` (2 hunks)
- `backend/workers/Dockerfile.pod_monitor` (2 hunks)
- `backend/workers/Dockerfile.result_processor` (2 hunks)
- `backend/workers/Dockerfile.saga_orchestrator` (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
- .github/workflows/tests.yml
- backend/workers/Dockerfile.coordinator
- backend/workers/Dockerfile.k8s_worker
- backend/workers/Dockerfile.saga_orchestrator
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Backend Tests
🔇 Additional comments (2)
backend/Dockerfile (2)
14-19: Approve the uv-based dependency installation approach.

The migration from pip to uv with the `--frozen --no-dev --no-install-project` flags is a solid improvement. The `--frozen` flag ensures reproducibility, and `--no-dev` correctly limits production image size. The early copy of dependency files before application code is an efficient layer-caching strategy.

35-54: The bash CMD script has correct variable expansion and escaping.

All variable substitutions using the `${VAR:-default}` syntax work correctly within the double-quoted context. The backslash line continuations are preserved properly, and the inline variable assignments function as intended. No issues with quoting or escaping were detected in the script.
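For readers mapping the shell's `${VAR:-default}` fallback to application code, the Python analogue is `os.environ.get`; the variable names here are illustrative, not taken from the Dockerfile:

```python
import os

# Shell: WORKERS="${GUNICORN_WORKERS:-4}"
workers = int(os.environ.get("GUNICORN_WORKERS", "4"))

# Caveat: ${VAR:-default} also falls back when VAR is set but empty;
# .get() does not, so mimic that with `or`.
workers_like_shell = int(os.environ.get("GUNICORN_WORKERS") or "4")
```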
Actionable comments posted: 0
♻️ Duplicate comments (1)
backend/Dockerfile (1)
12-12: Duplicate: pin the uv image version to ensure reproducible builds.

This issue was raised in a previous review: using `:latest` makes builds non-deterministic. Replace with a specific version tag (e.g., `0.5.0`).

Suggested fix:

```diff
-COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/
+COPY --from=ghcr.io/astral-sh/uv:0.5.0 /uv /uvx /bin/
```

You may want to document the chosen version in a comment or update it in sync with your project's dependency updates.
🧹 Nitpick comments (1)
docker-compose.yaml (1)
536-536: Minor: inconsistent command format across services.

`event-replay` (line 536) and `dlq-processor` (line 563) use shell string format (`uv run python ...`), while `kafka-init` (line 351) uses array notation. Both work, but standardizing on array notation would improve clarity and avoid shell interpretation.

Consider updating lines 536 and 563 to use array notation for consistency:

```diff
-    command: uv run python workers/run_event_replay.py
+    command: ["uv", "run", "python", "workers/run_event_replay.py"]
```

```diff
-    command: uv run python workers/dlq_processor.py
+    command: ["uv", "run", "python", "workers/dlq_processor.py"]
```

Also applies to: 563-563
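The practical difference between the two compose forms is whether a shell tokenizes the command line; `shlex` shows what that split produces (a sketch of the semantics, not compose's actual code path):

```python
import shlex

# Shell string form: compose wraps this in `/bin/sh -c`, which splits on
# whitespace and applies quoting/expansion before exec().
shell_form = "uv run python workers/run_event_replay.py"
argv = shlex.split(shell_form)

# Array (exec) form: passed to exec() verbatim, with no shell involved,
# so no word splitting or variable expansion can surprise you.
exec_form = ["uv", "run", "python", "workers/run_event_replay.py"]
```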
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- `.github/workflows/tests.yml` (6 hunks)
- `backend/Dockerfile` (3 hunks)
- `docker-compose.yaml` (3 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Backend Tests
🔇 Additional comments (6)
.github/workflows/tests.yml (3)
177-187: uv setup and frozen dependency installation look good.

Using `astral-sh/setup-uv@v5` with caching enabled and `uv sync --frozen` ensures reproducible, cached builds. This aligns well with the migration strategy.

84-85: Volume mount removal is a pragmatic Docker + uv pattern.

Removing the `./backend:/app` mount prevents permission conflicts between the host and the container's uv-managed `.venv`. This is correct for CI, where hot-reload is unnecessary.

207-207: Test command correctly uses the uv runner.

The `uv run pytest` invocation ensures pytest is resolved from the frozen environment and all coverage options are preserved.

docker-compose.yaml (1)

351-351: kafka-init command correctly uses the uv runner with explicit array notation.

The `["uv", "run", "python", "-m", "scripts.create_topics"]` invocation ensures uv resolves and runs the module in the frozen environment.

backend/Dockerfile (2)

5-9: Native dependencies are appropriate for uv-based builds.

The installed packages (gcc, libsnappy-dev, liblzma-dev) support compilation of Python packages with C extensions, which is likely needed by your dependencies. These are correct to include.

15-19: uv-based dependency management and runtime execution are correctly configured.

`uv sync --frozen --no-dev --no-install-project` (line 19) ensures reproducible production builds with pinned versions. `PYTHONPATH=/app` (line 32) enables imports without installing the project as a package. `uv run gunicorn` (line 44) properly resolves and runs Gunicorn from the frozen environment. All changes align well with the uv migration strategy.

Also applies to: 32-32, 44-44
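The `PYTHONPATH=/app` point (imports resolve without installing the project as a package) can be demonstrated in miniature; the module name and string below are made up:

```python
import os
import subprocess
import sys
import tempfile

# Create a throwaway directory with a module in it, standing in for /app.
workdir = tempfile.mkdtemp()
with open(os.path.join(workdir, "appmod.py"), "w", encoding="utf-8") as fh:
    fh.write("GREETING = 'hello from PYTHONPATH'\n")

# With PYTHONPATH pointing at that directory, a fresh interpreter can
# import the module even though nothing was pip/uv installed.
env = dict(os.environ, PYTHONPATH=workdir)
result = subprocess.run(
    [sys.executable, "-c", "import appmod; print(appmod.GREETING)"],
    env=env, capture_output=True, text=True, check=True,
)
```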
Codecov Report

✅ All modified and coverable lines are covered by tests.

Flags with carried forward coverage won't be shown.

... and 2 files with indirect coverage changes
- uv tool run for docs.yml
- uv for security ci
Actionable comments posted: 1
🧹 Nitpick comments (2)
backend/tests/unit/core/test_logging_and_correlation.py (1)
73-84: Consider restricting `/ping` to GET-only to avoid unintentionally broad method coverage.

`Route("/ping", ping)` with `methods=None` can accept more methods than intended; specifying `methods=["GET"]` keeps the test intent crisp.

```diff
-    app = Starlette(routes=[Route("/ping", ping)])
+    app = Starlette(routes=[Route("/ping", ping, methods=["GET"])])
```

backend/pyproject.toml (1)
230-230: Remove the duplicate `-q` flag from pytest addopts.

The addopts line contains a duplicate `-q` flag (quiet mode appears twice). Keep only one instance.

```diff
-addopts = "-n 4 --dist loadfile --tb=short -q --no-header -q"
+addopts = "-n 4 --dist loadfile --tb=short -q --no-header"
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- `.github/workflows/security.yml` (1 hunks)
- `backend/app/infrastructure/kafka/events/base.py` (2 hunks)
- `backend/app/schemas_pydantic/health_dashboard.py` (2 hunks)
- `backend/app/schemas_pydantic/user.py` (0 hunks)
- `backend/pyproject.toml` (2 hunks)
- `backend/tests/unit/core/test_logging_and_correlation.py` (2 hunks)
- `backend/tests/unit/schemas_pydantic/test_notification_schemas.py` (1 hunks)
- `docker-compose.yaml` (3 hunks)
💤 Files with no reviewable changes (1)
- backend/app/schemas_pydantic/user.py
🚧 Files skipped from review as they are similar to previous changes (2)
- .github/workflows/security.yml
- docker-compose.yaml
🧰 Additional context used
🧬 Code graph analysis (1)
backend/tests/unit/core/test_logging_and_correlation.py (1)
backend/app/core/logging.py (3)
- `CorrelationFilter` (23-37)
- `JSONFormatter` (40-102)
- `setup_logger` (105-144)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Backend Tests
🔇 Additional comments (7)
backend/tests/unit/core/test_logging_and_correlation.py (1)
1-13: Imports look clean; the `Route` import matches the new explicit routing.

The consolidation/removal of the duplicate `io` import and the explicit `Route` import align with the updated app construction.

backend/pyproject.toml (1)

1-230: uv migration is complete and properly implemented.

No old requirements.txt files remain, uv.lock is committed, all CI workflows use `astral-sh/setup-uv@v5` with `uv sync --frozen`, and all Dockerfiles have been updated to use uv instead of pip. The pyproject.toml is correctly structured for uv.

backend/tests/unit/schemas_pydantic/test_notification_schemas.py (1)

47-47: LGTM! Correct Pydantic v2 migration.

The change from `copy()` to `model_copy()` aligns with the Pydantic v2 API. The functionality remains equivalent, creating shallow copies for batch validation testing.

backend/app/schemas_pydantic/health_dashboard.py (2)
3-3: LGTM! Required import for Pydantic v2 configuration.

The `ConfigDict` import is necessary to support the new Pydantic v2 configuration pattern used in `DependencyEdge`.

163-163: LGTM! Correct Pydantic v2 configuration pattern.

The migration from an inner `Config` class to `model_config = ConfigDict(populate_by_name=True)` is the correct Pydantic v2 approach. The `populate_by_name` setting is properly preserved, ensuring fields can be populated by both their field name and alias (e.g., `from_service` and `from`).

backend/app/infrastructure/kafka/events/base.py (2)

5-5: LGTM! Required import for Pydantic v2 field serialization.

The `field_serializer` import is necessary to replace the deprecated `json_encoders` approach with Pydantic v2's field serializer pattern.

25-29: LGTM! Correct Pydantic v2 serialization pattern.

The migration from `json_encoders` in `model_config` to the `@field_serializer` decorator is the correct Pydantic v2 approach. The `serialize_timestamp` method properly converts datetime to an ISO-format string, maintaining the same serialization behavior.

Verification confirmed all Pydantic v1 patterns have been successfully migrated across the codebase: no remaining `Config` classes, `json_encoders`, `@validator`, or `@root_validator` patterns found.
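The three v2 patterns endorsed in these comments (`model_copy`, `ConfigDict(populate_by_name=True)`, `@field_serializer`) fit together as in this sketch; the model and field names are illustrative, not the repo's actual classes:

```python
from datetime import datetime, timezone

from pydantic import BaseModel, ConfigDict, Field, field_serializer


class ServiceEvent(BaseModel):
    # v1 used an inner `class Config` plus json_encoders; v2 uses
    # model_config and per-field serializers instead.
    model_config = ConfigDict(populate_by_name=True)

    from_service: str = Field(alias="from")  # `from` is a keyword, hence the alias
    timestamp: datetime

    @field_serializer("timestamp")
    def serialize_timestamp(self, value: datetime) -> str:
        return value.isoformat()


# populate_by_name=True accepts either the alias or the field name.
evt = ServiceEvent(**{"from": "api", "timestamp": datetime(2024, 1, 2, tzinfo=timezone.utc)})
clone = evt.model_copy(update={"from_service": "db"})  # v2 rename of v1 copy()
```

`evt.model_dump(by_alias=True)` then emits a `from` key and an ISO-8601 timestamp string.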
Actionable comments posted: 1
♻️ Duplicate comments (5)
backend/workers/Dockerfile.dlq_processor (1)
1-8: Critical: the `FROM base` reference will fail in CI/CD pipelines.

Same issue as `backend/workers/Dockerfile.saga_orchestrator`: `FROM base` cannot be resolved in standalone builds. See that file's review comment for a detailed explanation and solution options.

backend/workers/Dockerfile.result_processor (1)

1-8: Critical: the `FROM base` reference will fail in CI/CD pipelines.

Same issue as the previous worker Dockerfiles. See the `backend/workers/Dockerfile.saga_orchestrator` review comment for full details and solution options.

backend/Dockerfile (1)

1-2: Critical: the `FROM base` reference will fail in CI/CD pipelines.

Same issue as the worker Dockerfiles. See the `backend/workers/Dockerfile.saga_orchestrator` review comment for full details and solution options.

backend/workers/Dockerfile.k8s_worker (1)

1-8: Critical: the `FROM base` reference will fail in CI/CD pipelines.

Same issue as the previous worker Dockerfiles. See the `backend/workers/Dockerfile.saga_orchestrator` review comment for full details and solution options.

backend/workers/Dockerfile.coordinator (1)

1-8: Critical: the `FROM base` reference will fail in CI/CD pipelines.

Same issue as the previous worker Dockerfiles. See the `backend/workers/Dockerfile.saga_orchestrator` review comment for full details and solution options.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)
- `.github/workflows/tests.yml` (7 hunks)
- `backend/Dockerfile` (2 hunks)
- `backend/Dockerfile.base` (1 hunks)
- `backend/workers/Dockerfile.coordinator` (1 hunks)
- `backend/workers/Dockerfile.dlq_processor` (1 hunks)
- `backend/workers/Dockerfile.event_replay` (1 hunks)
- `backend/workers/Dockerfile.k8s_worker` (1 hunks)
- `backend/workers/Dockerfile.pod_monitor` (1 hunks)
- `backend/workers/Dockerfile.result_processor` (1 hunks)
- `backend/workers/Dockerfile.saga_orchestrator` (1 hunks)
- `docker-compose.yaml` (10 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- backend/workers/Dockerfile.pod_monitor
- backend/workers/Dockerfile.event_replay
🧰 Additional context used
🪛 GitHub Actions: Docker Build & Scan
backend/Dockerfile.base
[error] 2-2: Command failed: docker buildx build --cache-from type=gha --cache-to type=gha,mode=max --iidfile /tmp/docker-actions-toolkit-KeaN1G/iidfile --tag integr8scode:test --load --metadata-file /tmp/docker-actions-toolkit-KeaN1G/metadata-file ./backend. Base image pull failed: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed (FROM base).
backend/Dockerfile
[error] 2-2: Command failed: docker buildx build --cache-from type=gha --cache-to type=gha,mode=max --iidfile /tmp/docker-actions-toolkit-KeaN1G/iidfile --tag integr8scode:test --load --metadata-file /tmp/docker-actions-toolkit-KeaN1G/metadata-file ./backend. Base image pull failed: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed (FROM base).
backend/workers/Dockerfile.dlq_processor
[error] 2-2: Command failed: docker buildx build --cache-from type=gha --cache-to type=gha,mode=max --iidfile /tmp/docker-actions-toolkit-KeaN1G/iidfile --tag integr8scode:test --load --metadata-file /tmp/docker-actions-toolkit-KeaN1G/metadata-file ./backend. Base image pull failed: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed (FROM base).
backend/workers/Dockerfile.k8s_worker
[error] 2-2: Command failed: docker buildx build --cache-from type=gha --cache-to type=gha,mode=max --iidfile /tmp/docker-actions-toolkit-KeaN1G/iidfile --tag integr8scode:test --load --metadata-file /tmp/docker-actions-toolkit-KeaN1G/metadata-file ./backend. Base image pull failed: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed (FROM base).
backend/workers/Dockerfile.saga_orchestrator
[error] 2-2: Command failed: docker buildx build --cache-from type=gha --cache-to type=gha,mode=max --iidfile /tmp/docker-actions-toolkit-KeaN1G/iidfile --tag integr8scode:test --load --metadata-file /tmp/docker-actions-toolkit-KeaN1G/metadata-file ./backend. Base image pull failed: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed (FROM base).
backend/workers/Dockerfile.coordinator
[error] 2-2: Command failed: docker buildx build --cache-from type=gha --cache-to type=gha,mode=max --iidfile /tmp/docker-actions-toolkit-KeaN1G/iidfile --tag integr8scode:test --load --metadata-file /tmp/docker-actions-toolkit-KeaN1G/metadata-file ./backend. Base image pull failed: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed (FROM base).
backend/workers/Dockerfile.result_processor
[error] 2-2: Command failed: docker buildx build --cache-from type=gha --cache-to type=gha,mode=max --iidfile /tmp/docker-actions-toolkit-KeaN1G/iidfile --tag integr8scode:test --load --metadata-file /tmp/docker-actions-toolkit-KeaN1G/metadata-file ./backend. Base image pull failed: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed (FROM base).
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Backend Tests
🔇 Additional comments (11)
backend/Dockerfile.base (1)
15-26: Base image setup looks good; verify build context and dependency resolution.

The base image correctly pins uv to `0.9.17`, uses `--frozen` to ensure reproducibility, and excludes the project itself with `--no-install-project`. However, ensure that:

- Docker builds reference `backend/` as the context so `COPY pyproject.toml uv.lock ./` resolves correctly
- The `pyproject.toml` and `uv.lock` in the backend directory match the dependency requirements

backend/Dockerfile (1)
18-37: Gunicorn startup with uv looks good; verify the TLS cert preparation flow.

The migration to `uv run gunicorn` is correct. The TLS cert waiting logic (line 19) is appropriate for ensuring certs are ready. Ensure that the TLS cert preparation (presumably from the cert-generator service in docker-compose) completes reliably before this container starts.

Once the `FROM base` issue is resolved, this section should work correctly.

docker-compose.yaml (4)
2-8: Base service definition is correct for docker-compose; ensure proper handling in CI/CD.

The base service is correctly defined with a proper build context and will be built before dependent services via the `depends_on: condition: service_completed_successfully` pattern. However, note that the base image is tagged as `integr8scode-base:latest` (not pinned to a version).

Concern: this docker-compose configuration works well for local/orchestration builds, but individual Dockerfile builds in CI/CD pipelines will fail because the `FROM base` reference cannot be resolved. Ensure your CI/CD pipeline either:
- Builds the base service first and tags it locally
- Pushes the base image to a registry and uses the full registry path in Dockerfiles
- Uses docker-compose for all builds
See the detailed explanation in the `backend/workers/Dockerfile.saga_orchestrator` review.

74-92: Backend service dependencies and contexts correctly updated.

The `additional_contexts` and `depends_on` additions properly integrate the base image for the backend service. Verify that all referenced services (cert-generator, mongo, redis, kafka, schema-registry) are available and that the health checks work as expected.

349-369: Kafka-init service correctly migrated to uv-based execution.

The command now uses `uv run python -m scripts.create_topics`, which aligns with the uv migration. Ensure the `scripts.create_topics` module exists and is properly configured.

373-402: All worker services consistently updated with the base image and uv commands.

All backend worker services (coordinator, k8s-worker, pod-monitor, result-processor, saga-orchestrator, event-replay, dlq-processor) are consistently updated with:

- `additional_contexts: base: service:base`
- `depends_on: base: condition: service_completed_successfully`
- `uv run` commands for startup

This is well-structured and maintains consistency across services. Once the base image resolution issue in CI/CD is addressed, these services should work reliably.
Also applies to: 404-439, 441-473, 475-509, 511-539, 561-589, 592-624
.github/workflows/tests.yml (5)
133-134: BUILDKIT_PROGRESS addition is a good observability improvement.

Adding `BUILDKIT_PROGRESS: plain` improves build log visibility during CI. This is a non-breaking change.
216-216: Coverage paths and uv run syntax look correct.

The migration from `python -m pytest` to `uv run pytest` is correct. Coverage collection paths (`--cov=app`) and report generation appear consistent with the backend structure.
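The step in question is roughly of this shape (step name and the report flag are illustrative):

```yaml
- name: Run backend tests with coverage
  working-directory: backend
  run: uv run pytest --cov=app --cov-report=xml
```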
186-196: Verify setup-uv action parameters and uv command syntax.

The migration from `actions/setup-python` to `astral-sh/setup-uv@v5` looks sound. The `enable-cache` and `cache-dependency-glob` parameters are both valid inputs supported by the action. The `backend/uv.lock` file exists in the repository, and the cache dependency glob pattern is correctly specified. Confirm that `uv python install 3.12` is the correct command syntax for your uv version to avoid any runtime surprises.
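A setup sequence matching the parameters named here might look like this sketch (values mirror the review text, not verified against the workflow file):

```yaml
- name: Install uv
  uses: astral-sh/setup-uv@v5
  with:
    enable-cache: true
    cache-dependency-glob: "backend/uv.lock"

- name: Install Python 3.12 via uv
  run: uv python install 3.12

- name: Sync dependencies
  working-directory: backend
  run: uv sync --frozen
```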
91-92: Backend container can start correctly without the `/app` volume mount in CI.

All three concerns are confirmed:

1. uv dependencies installed at build time: The base image (Dockerfile.base) runs `uv sync --frozen --no-dev --no-install-project` during build, baking all production dependencies into the image.
2. No runtime dependency on host files: The Dockerfile CMD runs `uv run gunicorn` with pre-installed dependencies. The startup logic (app/core/startup.py) only initializes metrics and Redis configuration, with no file reads from the host `./backend` directory.
3. Service functions without the mount: Removing `./backend:/app` is safe in CI because all dependencies are containerized. The other critical mounts (certs, kubeconfig) remain unchanged.

The change eliminates unnecessary hot-reload capability (not needed in CI) and prevents `.venv` permission issues from host bind-mounts.
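Taken together, the base image likely follows the standard uv-in-Docker layout; a minimal sketch under that assumption (the Python base tag and file layout are guesses):

```dockerfile
FROM python:3.12-slim

# uv binary copied from the official astral-sh image
COPY --from=ghcr.io/astral-sh/uv:0.9.17 /uv /uvx /bin/

WORKDIR /app

# Install locked production dependencies only; the project source is added
# afterwards, so dependency layers are cached independently of code changes.
COPY pyproject.toml uv.lock ./
RUN uv sync --frozen --no-dev --no-install-project

COPY . .
```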
63-63: No action needed. The uv base image version 0.9.17 is the current stable release as of December 9, 2025, with no known security advisories. This version is appropriate and secure for use.
Actionable comments posted: 1
🧹 Nitpick comments (2)
.github/workflows/docker.yml (1)
16-25: Implicit layer caching may impact build performance and reproducibility.

The migration removes explicit `cache-from`/`cache-to` directives (per the AI summary) in favor of relying on Docker's implicit layer caching. While this simplifies the configuration, it can make builds less reproducible across different CI environments or cache invalidation scenarios. Consider whether explicit caching or a caching strategy (e.g., Docker layer caching or a buildx cache backend) is needed for consistency.
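If explicit caching is reinstated, the GitHub Actions cache backend for buildx is one commonly used option (a sketch, assuming docker/build-push-action is or could be used in this workflow):

```yaml
- uses: docker/setup-buildx-action@v3

- uses: docker/build-push-action@v6
  with:
    context: ./backend
    push: false
    cache-from: type=gha
    cache-to: type=gha,mode=max
```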
backend/pyproject.toml (1)
186-187: Address the TODO for mypy configuration.

Line 186 marks the exclude pattern (line 187) as requiring removal. If this refactoring is intentional and complete, remove the TODO comment and the exclude pattern. If it's still pending, open a tracking issue to defer the cleanup.
Would you like me to remove the TODO comment and the exclude pattern, or should this be tracked separately for completion?
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- `.github/workflows/docker.yml` (1 hunks)
- `backend/app/infrastructure/kafka/events/base.py` (2 hunks)
- `backend/pyproject.toml` (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Backend Tests
🔇 Additional comments (4)
backend/pyproject.toml (1)
129-134: Well-executed hatchling migration.

The switch from setuptools to hatchling as the build backend is clean and correct. The `[tool.hatch.build.targets.wheel]` configuration properly specifies the package structure for the monorepo (app and workers), aligning with the broader uv and CI/CD migration objectives observed in the PR.

backend/app/infrastructure/kafka/events/base.py (3)
5-5: LGTM! Correct import for Pydantic v2.

The addition of `field_serializer` is necessary for the new timestamp serialization approach below.
25-26: Correct Pydantic v2 model configuration.

The empty `ConfigDict()` is appropriate here. The class properly uses Pydantic v2 patterns: timestamp serialization is handled via the `@field_serializer` decorator (lines 27-29), and no additional model configuration settings are needed.
27-29: LGTM! Proper Pydantic v2 field serialization.

The `@field_serializer` decorator with `when_used='json'` correctly replaces the deprecated `json_encoders` approach. The implementation using `isoformat()` is the standard way to serialize datetime objects to ISO format strings. The project uses Pydantic 2.9.2, which fully supports this decorator and parameter.
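As an illustration of the pattern described (the class and field names here are hypothetical, not taken from the repo):

```python
from datetime import datetime, timezone

from pydantic import BaseModel, ConfigDict, field_serializer


class BaseEvent(BaseModel):
    model_config = ConfigDict()

    timestamp: datetime

    @field_serializer("timestamp", when_used="json")
    def serialize_timestamp(self, value: datetime) -> str:
        # Replaces the deprecated json_encoders config: only JSON dumps are
        # affected; model_dump() in python mode still returns a datetime.
        return value.isoformat()


event = BaseEvent(timestamp=datetime(2024, 1, 2, 3, 4, 5, tzinfo=timezone.utc))
print(event.model_dump_json())  # {"timestamp":"2024-01-02T03:04:05+00:00"}
```

Because of `when_used='json'`, round-tripping through `model_dump()` keeps the native datetime, while `model_dump_json()` emits the ISO string.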