fix(integration): refresh shared infra on integration switch #2375
Quratulain-bilal wants to merge 5 commits into github:main
Conversation
Pull request overview
Updates `specify integration switch <key>` to refresh the project's vendored shared infrastructure so switching integrations can't leave stale `.specify/scripts/*` copies that break newer integrations (per #2293).
Changes:
- Make `integration switch` call `_install_shared_infra(..., force=True, ...)` so shared scripts/templates are overwritten with the bundled (CLI-version) copies.
- Expand the inline rationale in `integration_switch()` explaining why the overwrite is needed during switches.
Summary per file
| File | Description |
|---|---|
| src/specify_cli/__init__.py | Forces a shared-infra refresh during integration switching to prevent stale vendored scripts from breaking the newly selected integration. |
Copilot's findings
- Files reviewed: 1/1 changed files
- Comments generated: 2
Please address Copilot feedback. Note that if the switch command will overwrite existing infra files, we should force the user to pass --force upon switch. That way it is clear that they opted into overwriting modified infra files.
… (github#2293)

Address Copilot review on github#2375:
1. Replace unconditional force=True with a hash-aware refresh_managed mode in _install_shared_infra. Files whose on-disk hash matches the previously recorded manifest hash are refreshed; files whose hash has diverged are treated as user customizations and preserved with a clear warning. The user can opt back into a full overwrite by passing --force to switch.
2. Update the --force help text on integration switch so the documented semantics include "overwrite customized shared infrastructure files", not just uninstall behavior.
3. Add two CLI-level regression tests:
   - test_switch_refreshes_stale_managed_shared_infra reproduces github#2293 by hashing a stale vendored common.sh into the manifest, then asserting it is replaced after switch.
   - test_switch_preserves_user_customized_shared_infra modifies the file without updating the manifest hash and asserts the customization survives the switch and the warning is emitted.
@mnriem this is exactly what commit c072c4e implements.
Net effect: the only files rewritten silently are the stale-but-unmodified vendored copies that caused #2293. Anything customized is preserved with a warning. Covered by two new CLI-level tests. Happy to flip to a stricter contract (always require --force) if you prefer.
Copilot's findings
- Files reviewed: 2/2 changed files
- Comments generated: 3
Three concerns from Copilot's review on github#2375:
1. Manifest hash retention: load the prior speckit manifest and seed the new manifest's _files dict with its hashes, so files we don't touch this run keep their tracking entry. Future refreshes can still tell managed vs. customized apart.
2. Symlink safety: never overwrite a symlinked destination. Following it could read or write outside the project root. Treat symlinked dst paths as user customizations and preserve them.
3. Decouple --force: introduce a separate --refresh-shared-infra flag on `integration switch`. --force now only controls modified-file removal during the previous integration's uninstall (its original meaning). Users who want to overwrite their shared-infra customizations must opt in explicitly with --refresh-shared-infra. The preservation warning now points at the new flag.

Tests: two new regression tests for the decoupling; one verifies --refresh-shared-infra overwrites customizations, the other verifies --force alone does not.
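Concern 1 (seeding the new manifest from the prior one's hashes) amounts to something like the sketch below. The JSON layout and the `seed_manifest` name are assumptions for illustration; the real IntegrationManifest class lives in the CLI source:

```python
import json
from pathlib import Path


def seed_manifest(project_path: Path, version: str) -> dict:
    """Start the new manifest from the prior one's recorded hashes so files
    not rewritten in this run keep their managed-vs-customized tracking."""
    manifest_path = project_path / ".specify" / "integrations" / "speckit.manifest.json"
    manifest = {"version": version, "files": {}}
    if manifest_path.exists():
        prior = json.loads(manifest_path.read_text(encoding="utf-8"))
        # Carry forward every prior entry; files overwritten in this run
        # will simply re-record their new hash on top of these seeds.
        manifest["files"].update(prior.get("files", {}))
    return manifest
```

Without this seeding step, any file skipped in one run would silently drop out of tracking and could never be refreshed again.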
Copilot's findings
- Files reviewed: 2/2 changed files
- Comments generated: 3
Please address Copilot feedback. If not applicable, please explain why |
… case

Address Copilot follow-up on github#2375: the "re-run with --refresh-shared-infra" hint was hardcoded, but `_install_shared_infra()` is also reached from `init`, `integration install`, and `integration upgrade`, none of which expose that flag. Symlinked destinations are never overwritten by any flag and need manual intervention.
- Split the preserved-files warning into two: customized files (hash divergence) and symlinked files. They have different remediations.
- Add a `refresh_hint` parameter so each caller passes its own remediation text. Only `integration switch` passes the `--refresh-shared-infra` hint. Other call sites omit it (no false suggestion of a flag they don't have).
- The symlink message tells the user to remove or replace the symlink manually, since no flag will resolve it.
Addressed in a5adde5.
Copilot's findings
Comments suppressed due to low confidence (1)
src/specify_cli/init.py:868
- Same symlink issue as the scripts loop: checking only `dst.is_symlink()` won't prevent writing into a symlinked parent directory (e.g. `.specify/templates` being a symlink). That can lead to writing outside the project root and can also make `manifest.record_existing()` raise after the write. Add a pre-write containment check (resolved path within project root + no symlink components in the destination directory chain) and skip with a clear warning when unsafe.
dst = dest_templates / f.name
rel_posix = dst.relative_to(project_path).as_posix()
if dst.is_symlink():
    symlinked_files.append(rel_posix)
    continue
should_write = (
    not dst.exists()
    or force
    or (refresh_managed and _is_managed(rel_posix, dst))
)
- Files reviewed: 2/2 changed files
- Comments generated: 1
…d infra

Address Copilot follow-up on github#2375: the previous symlink guard only checked the leaf path (dst_path.is_symlink()). If an *ancestor* directory like .specify/scripts/ or .specify/templates/ is itself a symlink to a location outside the project root, the leaf check returns False and shutil.copy2 / write_text would escape the project.

Add an _is_safe_dest() check that:
- resolves the full destination path and confirms it is contained under project_path.resolve(); rejects paths that escape via any link.
- walks every existing ancestor between project_path and dst.parent and rejects if any one of them is a symlink (defence in depth; covers the case where resolve() cannot fully traverse a not-yet-created leaf but a higher ancestor is symlinked).

Apply the check at both copy sites (scripts loop and templates loop). Unsafe paths are added to symlinked_files and reported with the existing "remove or replace the symlink manually" guidance.

Add a regression test (Unix-only; Windows symlink creation needs admin) that symlinks .specify/scripts/bash to an outside directory and verifies switch refuses to write through it.
Copilot's findings
Comments suppressed due to low confidence (1)
src/specify_cli/init.py:923
- Same manifest-tracking issue as the scripts loop: if a bundled template already exists but isn't in `prior_hashes`, the code skips it without recording it into the manifest. That prevents future `refresh_managed` runs from ever treating it as managed. Consider recording hashes for skipped-but-existing bundled templates (e.g., `manifest.record_existing(rel_posix)`) so the manifest can rehydrate missing entries.
if not should_write:
    if refresh_managed and dst.exists() and rel_posix in prior_hashes:
        preserved_user_files.append(rel_posix)
    else:
        skipped_files.append(str(dst.relative_to(project_path)))
- Files reviewed: 2/2 changed files
- Comments generated: 4
if src_path.is_file():
    rel_path = src_path.relative_to(variant_src)
    dst_path = dest_variant / rel_path
    if dst_path.exists() and not force:
        skipped_files.append(str(dst_path.relative_to(project_path)))
    rel_posix = dst_path.relative_to(project_path).as_posix()
    # Refuse to write through a symlink at the leaf or via a
    # symlinked ancestor (e.g. .specify/scripts -> /etc). Both
    # cases could let shutil.copy2 escape the project root.
    # Symlinks need manual intervention — no flag will resolve
    # them — so track them separately.
    if dst_path.is_symlink() or not _is_safe_dest(dst_path):
        symlinked_files.append(rel_posix)
        continue
Even though per-file writes are guarded by _is_safe_dest(), the destination directories (dest_scripts/dest_variant) are created earlier unconditionally. If .specify/scripts is a symlink to a location outside the project root and the variant directory doesn’t exist yet, dest_variant.mkdir(...) can still create directories outside the project before this loop runs. Consider validating the destination root/variant directories with the same safe-path checks before any mkdir, and skipping the whole scripts install when unsafe (without creating anything).
See below for a potential fix:
dest_scripts_rel = Path(".specify") / "scripts"
# Validate the scripts root before creating it. A symlinked
# `.specify/scripts` could otherwise make mkdir() escape the project.
if dest_scripts.is_symlink() or not _is_safe_dest(dest_scripts):
    symlinked_files.append(dest_scripts_rel.as_posix())
else:
    dest_scripts.mkdir(parents=True, exist_ok=True)
    variant_dir = "bash" if script_type == "sh" else "powershell"
    variant_src = scripts_src / variant_dir
    if variant_src.is_dir():
        dest_variant = dest_scripts / variant_dir
        dest_variant_rel = dest_scripts_rel / variant_dir
        # Validate the variant directory before creating it for the
        # same reason as above.
        if dest_variant.is_symlink() or not _is_safe_dest(dest_variant):
            symlinked_files.append(dest_variant_rel.as_posix())
        else:
            dest_variant.mkdir(parents=True, exist_ok=True)
            for src_path in variant_src.rglob("*"):
                if src_path.is_file():
                    rel_path = src_path.relative_to(variant_src)
                    dst_path = dest_variant / rel_path
                    rel_posix = dst_path.relative_to(project_path).as_posix()
                    # Refuse to write through a symlink at the leaf or via a
                    # symlinked ancestor (e.g. .specify/scripts -> /etc). Both
                    # cases could let shutil.copy2 escape the project root.
                    # Symlinks need manual intervention — no flag will resolve
                    # them — so track them separately.
                    if dst_path.is_symlink() or not _is_safe_dest(dst_path):
                        symlinked_files.append(rel_posix)
                        continue
                    should_write = (
                        not dst_path.exists()
                        or force
                        or (refresh_managed and _is_managed(rel_posix, dst_path))
                    )
                    if not should_write:
                        if refresh_managed and dst_path.exists() and rel_posix in prior_hashes:
                            preserved_user_files.append(rel_posix)
                        else:
                            skipped_files.append(str(dst_path.relative_to(project_path)))
                    else:
                        dst_path.parent.mkdir(parents=True, exist_ok=True)
                        shutil.copy2(src_path, dst_path)
                        manifest.record_existing(rel_posix)
for f in templates_src.iterdir():
    if f.is_file() and f.name != "vscode-settings.json" and not f.name.startswith("."):
        dst = dest_templates / f.name
        if dst.exists() and not force:
            skipped_files.append(str(dst.relative_to(project_path)))
        rel_posix = dst.relative_to(project_path).as_posix()
        # Same defence as the scripts loop: refuse symlinks at the
        # leaf or anywhere in the parent chain.
        if dst.is_symlink() or not _is_safe_dest(dst):
            symlinked_files.append(rel_posix)
            continue
Similar to the scripts path: dest_templates.mkdir(...) happens before the per-file _is_safe_dest() check. If .specify/templates (or an ancestor) is a symlink to a path outside the project root and doesn’t exist yet, this mkdir can create directories outside the project even though later file writes are skipped. Consider applying the safe-path/symlink checks before creating dest_templates, and skipping template installation when unsafe.
if refresh_managed and dst_path.exists() and rel_posix in prior_hashes:
    preserved_user_files.append(rel_posix)
else:
    skipped_files.append(str(dst_path.relative_to(project_path)))
When refresh_managed=True and an existing destination file is skipped because it’s not considered managed (rel_posix not in prior_hashes), it is added to skipped_files but never recorded into the new speckit manifest. That means these paths remain untracked forever (and refresh_managed can never refresh them in future runs). Consider calling manifest.record_existing(rel_posix) for skipped-but-existing bundled paths (at least when refresh_managed=True) so the manifest converges back to tracking all shared infra files.
This issue also appears on line 919 of the same file.
skipped_files.append(str(dst_path.relative_to(project_path)))
if refresh_managed and dst_path.exists():
    manifest.record_existing(rel_posix)
    "[cyan]specify integration upgrade --force[/cyan]."
)
The skipped_files remediation text always points to specify init --here --force / specify integration upgrade --force, but callers like integration switch now have a different supported flag (--refresh-shared-infra). Since refresh_hint is already plumbed for the customization warning, consider also using it (or a similar hook) for the skipped_files warning so the guidance matches the command the user just ran (especially when refresh_managed=True and prior_hashes is missing/incomplete).
See below for a potential fix:
if refresh_hint:
    console.print(refresh_hint)
else:
    console.print(
        "To refresh shared infrastructure, run "
        "[cyan]specify init --here --force[/cyan] or "
        "[cyan]specify integration upgrade --force[/cyan]."
    )
Please address Copilot feedback. If not applicable, please explain why |
…efresh_hint

Address three Copilot follow-ups on github#2375:
1. mkdir before safety check: dest_scripts.mkdir() and dest_templates.mkdir() ran before any symlink/safety validation. If `.specify/scripts/` or `.specify/templates/` were already symlinked outside the project, mkdir could resolve through the link and create directories outside project bounds. Wrap mkdir in a `_safe_mkdir()` helper that runs the same `_is_safe_dest()` + `is_symlink()` check before creating; on failure the directory is reported as symlinked and the loop is skipped.
2. Skipped-but-existing managed files lost tracking: when refresh_managed was on and a file already matched the prior hash but was skipped (e.g. no force, customizations branch), the new manifest never re-recorded it. Future runs would no longer recognize it as managed. Re-record on the skipped path so hash tracking is preserved across runs.
3. skipped_files warning ignored refresh_hint: it always suggested `specify init --here --force`, even when the caller (integration switch) exposes a different flag. Use refresh_hint when provided, and fall back to the init/upgrade remediation otherwise.
Addressed in c7833ec.
Copilot's findings
Comments suppressed due to low confidence (1)
src/specify_cli/init.py:949
- Same issue as in the scripts loop: when `refresh_managed` is enabled and an existing template file is preserved due to hash divergence, calling `manifest.record_existing(rel_posix)` updates the manifest to the customized hash. That makes future refreshes treat the customization as managed and overwrite it. Preserve the prior hash for customized files (already present from `prior_hashes`) and only record/update hashes for files overwritten in this run.
if refresh_managed and dst.exists() and rel_posix in prior_hashes:
    preserved_user_files.append(rel_posix)
else:
    skipped_files.append(str(dst.relative_to(project_path)))
if refresh_managed and dst.exists():
    manifest.record_existing(rel_posix)
- Files reviewed: 2/2 changed files
- Comments generated: 0 new
Force-pushed c7833ec to 005f24c.
Hi @mnriem, friendly check-in on this one. The PR has sat green since about 4 days ago, when it was last synced with upstream. This fix addresses #2293 (stale vendored shared scripts breaking the target integration after a switch); happy to add anything further if needed.
Copilot's findings
- Files reviewed: 4/4 changed files
- Comments generated: 4
preserved with a warning. ``force=True`` overwrites everything regardless.
``refresh_hint`` is shown after the customization warning to tell the user
which flag would overwrite their customizations.
if refresh_hint:
    console.print(refresh_hint)
else:
    console.print(
        "To refresh shared infrastructure, run "
        "[cyan]specify init --here --force[/cyan] or "
        "[cyan]specify integration upgrade --force[/cyan]."
    )
rel_path = src_path.relative_to(variant_src)
dst_path = dest_variant / rel_path
_ensure_safe_shared_destination(project_path, dst_path, parent_must_exist=False)
if dst_path.exists() and not force:
    skipped_files.append(dst_path.relative_to(project_path).as_posix())
rel = dst_path.relative_to(project_path).as_posix()
write, bucket = _decide_overwrite(rel, dst_path)
Please address Copilot feedback. If not applicable, please explain why |
- Clarify the install_shared_infra docstring: force overwrites regular files but always preserves symlinks (the safe-destination check refuses to follow them).
- Print refresh_hint only for preserved_user_files; skipped_files keeps the generic remediation. This avoids misleading guidance when files were merely skipped (not detected as customized).
- Catch ValueError from the safe-destination check and bucket the path under a new symlinked_files warning instead of aborting the switch.
- Restore templates/constitution-template.md to upstream (drop accidental leading blank lines).
Force-pushed ae4d373 to ee9afea.
Copilot's findings
- Files reviewed: 3/3 changed files
- Comments generated: 1
"""
try:
    _ensure_safe_shared_destination(project_path, dst, parent_must_exist=parent_must_exist)
except ValueError:
Please address Copilot feedback. If not applicable, please explain why |
Address Copilot feedback on shared_infra.py:305 — _safe_dest_or_bucket caught any ValueError as "symlinked", which masked genuine safety errors (path escape, parent not a directory).
- Introduce SymlinkedSharedPathError(ValueError), raised only by the symlink-specific branches in _ensure_safe_shared_*().
- _safe_dest_or_bucket() now catches only SymlinkedSharedPathError; other ValueErrors propagate so the operation aborts with the real cause instead of being silently bucketed.
- Wrap the top-level dest_scripts/dest_variant/dest_templates mkdir calls in the same bucket helper so a symlinked .specify/scripts or .specify/templates is preserved with a warning rather than aborting the switch (matches the documented "preserve customizations" behavior).
- Update tests to expect the new bucket-and-warn behavior for leaf-level symlinked destinations.
@mnriem! Addressed Copilot's latest feedback in 93850a7. The issue was that _safe_dest_or_bucket caught any ValueError as "symlinked", masking genuine safety errors. Fix: introduced SymlinkedSharedPathError so only symlink failures are bucketed while other errors propagate.
Copilot's findings
- Files reviewed: 4/4 changed files
- Comments generated: 3
if symlinked_files:
    console.print(
        f"[yellow]⚠[/yellow] Skipped {len(symlinked_files)} symlinked shared "
        "infrastructure file(s) — symlinks are never overwritten because they "
outside.write_text("# outside\n", encoding="utf-8")
os.symlink(outside, scripts_dir / "z.sh")
with pytest.raises(ValueError, match="Refusing to overwrite symlinked"):
    install_shared_infra(
        project,
        "sh",
        version="test",
        core_pack=core_pack,
        repo_root=tmp_path / "unused",
        console=_NoopConsole(),
        force=True,
    )
install_shared_infra(
    project,
@@ -313,5 +411,28 @@ def install_shared_infra(
    "[cyan]specify integration upgrade --force[/cyan]."
)
Please address Copilot feedback.
Address Copilot review on PR github#2375:
- The skipped_files hint now uses refresh_hint when refresh_managed=True, so integration switch suggests --refresh-shared-infra instead of the generic init/upgrade flags.
- The symlinked-files warning header says "path(s)" rather than "file(s)", since symlinked directories (e.g. .specify/scripts/bash) are also bucketed there.
- Rename test_shared_infra_install_preflights_before_writing to test_shared_infra_install_buckets_unsafe_destinations_and_continues to match the new bucket-and-continue semantics.
Copilot's findings
Comments suppressed due to low confidence (2)
src/specify_cli/shared_infra.py:35
- `load_speckit_manifest()` calls `IntegrationManifest.load()` when `speckit.manifest.json` exists, but `IntegrationManifest.load()` reads the manifest file without any symlink/containment preflight. If `.specify/` or `.specify/integrations/` is a symlink, this can end up reading JSON from outside the project root, even though writes are later blocked. Consider validating the manifest directory with the same symlink/escape checks (e.g., `_validate_safe_shared_directory(project_path, project_path / '.specify' / 'integrations')`) and raising early before attempting to read.
def load_speckit_manifest(
project_path: Path,
*,
version: str,
console: Any | None = None,
) -> IntegrationManifest:
"""Load the shared infrastructure manifest, preserving existing entries."""
manifest_path = project_path / ".specify" / "integrations" / "speckit.manifest.json"
if manifest_path.exists():
try:
manifest = IntegrationManifest.load("speckit", project_path)
manifest.version = version
return manifest
tests/integrations/test_cli.py:323
- This test name/docstring says the installer "refuses" a symlinked template destination, but the implementation now skips/buckets symlinked paths with a warning instead of raising. Please rename the test/docstring to match the new semantics.
@pytest.mark.skipif(not hasattr(os, "symlink"), reason="symlinks are unavailable")
def test_shared_infra_refuses_symlinked_template_destination(self, tmp_path, capsys):
"""Shared template installs must not follow destination symlinks."""
from specify_cli import _install_shared_infra
- Files reviewed: 4/4 changed files
- Comments generated: 1
@pytest.mark.skipif(not hasattr(os, "symlink"), reason="symlinks are unavailable")
def test_shared_infra_refuses_symlinked_script_destination(self, tmp_path, capsys):
    """Shared script refreshes must not follow destination symlinks."""
    from specify_cli import _install_shared_infra
Please address Copilot feedback.
…ehavior

The two file-bucketing tests at lines 300/320 were named *_refuses_*, but the new behavior buckets symlinked file destinations with a warning while safe destinations in the same install still complete. Rename them to *_buckets_* and update the docstrings to match. The remaining *_refuses_* tests (lines 342/362/381) genuinely raise on symlinked dirs/manifests and keep their names.
Copilot's findings
Comments suppressed due to low confidence (1)
src/specify_cli/shared_infra.py:163
- `_ensure_safe_shared_destination()` validates symlinks/escape but doesn't reject existing non-regular destinations (e.g., a directory at the file path). In that case the later write path will fail at `os.replace()` with an OS error, which is harder to diagnose and may leave the operation partially applied. Consider adding an explicit `if dest.exists() and not dest.is_file(): raise ValueError("Shared infrastructure destination path is not a file: …")` (and possibly similar handling for other non-regular types) to fail fast with a clear message.
label = _shared_destination_label(project_path, dest)
if dest.is_symlink():
    raise SymlinkedSharedPathError(f"Refusing to overwrite symlinked shared infrastructure path: {label}")
if dest.exists():
    try:
        dest.resolve().relative_to(root)
    except (OSError, ValueError):
        raise ValueError(f"Shared infrastructure destination escapes project root: {label}") from None
- Files reviewed: 4/4 changed files
- Comments generated: 1
# Refresh shared infrastructure to the current CLI version. Switching
# integrations is exactly when stale vendored shared scripts (e.g.
# update-agent-context.sh that pre-dates the target integration's
# supported-agent list) would silently break the new integration.
#
# Use refresh_managed=True so only files that match their previously
# recorded hash are overwritten — user customizations are detected via
# hash divergence and preserved with a warning. Pass
# --refresh-shared-infra to overwrite customizations as well. See #2293.
_install_shared_infra_or_exit(
    project_root,
    selected_script,
    force=refresh_shared_infra,
    refresh_managed=True,
    invoke_separator=_invoke_separator_for_integration(
        target_integration, current, target, parsed_options
    ),
    refresh_hint=(
        "To overwrite customizations, re-run with "
        "[cyan]specify integration switch ... --refresh-shared-infra[/cyan]."
    ),
)
Renamed …
Fixes #2293 — specify integration switch left stale vendored shared scripts under
.specify/scripts/{bash,powershell}/ from the previous integration, breaking the new one.
Root cause
integration_switch() called _install_shared_infra(...) with the default force=False. The merge-missing-only behavior
preserved older copies (e.g. update-agent-context.sh whose supported-agent table did not list newer agents like pi), so
the freshly installed integration wrapper delegated to a stale shared script and failed with Unknown agent type 'pi'.
Fix
Introduces a hash-aware refresh_managed mode in _install_shared_infra() and wires integration switch to use it.
Default switch behavior (refresh_managed=True, force=False):
- Managed files (on-disk hash matches the recorded manifest hash) are refreshed to the bundled version (this fixes the bug — stale managed copies get replaced).
- Customized files (hash divergence) are preserved with a warning; the new manifest re-records the prior hash so future refreshes can still tell managed vs. customized apart.
Explicit override: a new --refresh-shared-infra flag on integration switch opts in to overwriting customized
shared-infra files as well. This is decoupled from --force, which retains its original meaning (force-remove modified
files during the previous integration's uninstall).
User content under specs/ is untouched in all modes.
Behavior on integration switch
--force continues to affect only the previous integration's uninstall (force-removes modified command files).
Tests
- test_switch_refreshes_stale_managed_shared_infra: a stale managed copy is refreshed.
- test_switch_preserves_user_customized_shared_infra: customizations survive and the flags stay decoupled.
Full test sweep: pytest tests/integrations/ — all green.