Compare commits

...

12 Commits

Author SHA1 Message Date
Kevin Veen-Birkenbach
b5ddf7402a Release version 0.8.0 2025-12-10 17:32:00 +01:00
Kevin Veen-Birkenbach
900224ed2e Moved installer dir 2025-12-10 17:27:26 +01:00
Kevin Veen-Birkenbach
e290043089 Refine installer capability integration tests and documentation
- Adjust install_repos integration test to patch resolve_command_for_repo
  in the pipeline module and tighten DummyInstaller overrides
- Rewrite recursive capability integration tests to focus on layer
  ordering and capability shadowing across Makefile, Python, Nix
  and OS-package installers
- Extend recursive capabilities markdown with hierarchy diagram,
  capability matrix, scenario matrix and link to the external
  setup controller schema

https://chatgpt.com/share/69399857-4d84-800f-a636-6bcd1ab5e192
2025-12-10 17:23:33 +01:00
Kevin Veen-Birkenbach
a7fd37d646 Add unit tests for install pipeline, Nix flake installer, and command resolution
https://chatgpt.com/share/69399857-4d84-800f-a636-6bcd1ab5e192
2025-12-10 16:57:02 +01:00
Kevin Veen-Birkenbach
d4b00046d3 Refine installer layering and Python/Nix integration
- Introduce explicit CLI layer model (os-packages, nix, python, makefile)
  and central InstallationPipeline to orchestrate installers.
- Move installer orchestration out of install_repos() into
  pkgmgr.actions.repository.install.pipeline, using layer precedence and
  capability tracking.
- Add pkgmgr.actions.repository.install.layers to classify commands into
  layers and compare priorities.
- Rework PythonInstaller to always use isolated environments:
  PKGMGR_PIP override → active venv → per-repo venv under ~/.venvs/<identifier>,
  avoiding system Python and PEP 668 conflicts.
- Adjust NixFlakeInstaller to install flake outputs based on repository
  identity: pkgmgr/package-manager → pkgmgr (mandatory) + default (optional),
  all other repos → default (mandatory).
- Tighten MakefileInstaller behaviour, add global
  PKGMGR_DISABLE_MAKEFILE_INSTALLER switch, and simplify install target
  detection.
- Rewrite resolve_command_for_repo() with explicit Repository typing,
  better Python package detection, Nix/PATH resolution, and a
  library-only fallback instead of raising on missing CLI.
- Update flake.nix devShell to provide python3 with pip and add pip as a
  propagated build input.
- Remove deprecated/wip repository entries from config defaults and drop
  the unused config/wip.yml.

https://chatgpt.com/share/69399157-86d8-800f-9935-1a820893e908
2025-12-10 16:26:23 +01:00
Kevin Veen-Birkenbach
545d345ea4 core(command): implement explicit command=None bypass and add unit tests
This update introduces Variant B behavior in the command resolver:

- If a repository explicitly defines the key \"command\" (even if its value is None),
  resolve_command_for_repo() treats it as authoritative and returns immediately.
  This allows library-only repositories to declare:
      command: null
  which disables CLI resolution entirely.

- As a result, Python package repositories without installed CLI entry points
  no longer trigger SystemExit during update/install flows, as long as they set
  command: null in their repo configuration.

The resolution logic is now bypassed for such repositories, skipping:
  - Python package detection (src/*/__main__.py)
  - PATH/Nix/venv binary lookup
  - main.sh/main.py fallback evaluation

A new unit test suite has been added under
  tests/unit/pkgmgr/core/command/test_resolve.py
covering:

 1) Python package without installed command → SystemExit
 2) Python package with installed command → returned correctly
 3) Script repository fallback to main.py
 4) Explicit command overrides all logic

This commit stabilizes update/install flows and ensures library-only
repositories behave as intended when no CLI command is provided.

https://chatgpt.com/share/69394a53-bc78-800f-995d-21099a68dd60
2025-12-10 11:23:57 +01:00
Kevin Veen-Birkenbach
a29b831e41 Release version 0.7.14 2025-12-10 10:38:36 +01:00
Kevin Veen-Birkenbach
bc9ca140bd fix(e2e): treat SystemExit(0) as successful CLI termination in clone-all test
The pkgmgr proxy layer may intentionally terminate the process via
SystemExit(0). The previous test logic interpreted any SystemExit as a failure,
causing false negatives during `pkgmgr clone --all` E2E runs.

This patch updates `test_clone_all.py` to:
- accept SystemExit(0) as a successful run,
- only fail on non-zero exit codes,
- preserve diagnostic output for real failures.

This stabilizes the clone-all E2E test across proxy-triggered exits.

https://chatgpt.com/share/69393f6b-b854-800f-aabb-25811bbb8c74
2025-12-10 10:37:40 +01:00
Kevin Veen-Birkenbach
ad8e3cd07c Updated CHANGELOG.md 2025-12-10 10:28:20 +01:00
Kevin Veen-Birkenbach
22efe0b32e Release version 0.7.13 2025-12-10 10:27:27 +01:00
Kevin Veen-Birkenbach
d23a0a94d5 Fix tools path resolution and add tests
- Use _resolve_repository_path() for explore, terminal and code commands
  so tools no longer rely on a 'directory' key in the repository dict.
- Fall back to repositories_base_dir/repositories_dir via get_repo_dir()
  when no explicit path-like key is present.
- Make VS Code workspace creation more robust (safe default for
  directories.workspaces and UTF-8 when writing JSON).
- Add unit tests for handle_tools_command (explore, terminal, code) under
  tests/unit/pkgmgr/cli/commands/test_tools.py.
- Add E2E/integration-style tests for the tools subcommands' --help
  output under tests/e2e/test_tools_help.py, treating SystemExit(0) as
  success.

This change fixes the KeyError: 'directory' when running 'pkgmgr code'
and verifies the behavior via unit and integration tests.

https://chatgpt.com/share/69393ca1-b554-800f-9967-abf8c4e3fea3
2025-12-10 10:25:29 +01:00
Kevin Veen-Birkenbach
e42b79c9d8 Add E2E tests for 'clone --all' and 'update --all' using HTTPS mode
This commit introduces two new end-to-end integration tests:

  • tests/e2e/test_clone_all.py
      Runs: pkgmgr clone --all --clone-mode https --no-verification
      Verifies that full HTTPS cloning of all configured repositories
      works inside the test container environment.

  • tests/e2e/test_update_all.py
      Runs: pkgmgr update --all --clone-mode https --no-verification
      Ensures that updating all repositories with HTTPS mode completes
      successfully without raising exceptions.

Both tests:
  - Provide extended diagnostics on SystemExit
  - Reuse nix-profile cleanup helpers for consistent test environments
  - Validate that `pkgmgr --help` works after execution

These tests complement the existing shallow-install integration test
and improve overall reliability of HTTPS clone/update workflows.
2025-12-09 23:47:43 +01:00
51 changed files with 2534 additions and 1202 deletions

View File

@@ -1,3 +1,30 @@
## [0.8.0] - 2025-12-10
* **v0.7.15 — Installer & Command Resolution Improvements**
* Introduced a unified **layer-based installer pipeline** with clear precedence (OS-packages, Nix, Python, Makefile).
* Reworked installer structure and improved Python/Nix/Makefile installers, including isolated Python venvs and refined flake-output handling.
* Fully rewrote **command resolution** with stronger typing, safer fallbacks, and explicit support for `command: null` to mark library-only repositories.
* Added extensive **unit and integration tests** for installer capability ordering, command resolution, and Nix/Python installer behavior.
* Expanded documentation with capability hierarchy diagrams and scenario matrices.
* Removed deprecated repository entries and obsolete configuration files.
## [0.7.14] - 2025-12-10
* Fixed the clone-all integration test so that `SystemExit(0)` from the proxy is treated as a successful command instead of a failure.
## [0.7.13] - 2025-12-10
### Fix tools path resolution and add tests
- Fixed a crash in `pkgmgr code` caused by missing `directory` metadata by introducing `_resolve_repository_path()` with proper fallbacks to `repositories_base_dir` / `repositories_dir`.
- Updated `explore`, `terminal` and `code` tool commands to use the new resolver.
- Improved VS Code workspace generation and path handling.
- Added unit & E2E tests for tool commands.
## [0.7.12] - 2025-12-09
* Fixed self refering alias during setup

View File

@@ -68,8 +68,8 @@ test-container: build-missing
build-missing:
@bash scripts/build/build-image-missing.sh
# Combined test target for local + CI (unit + e2e + integration)
test: test-container test-unit test-e2e test-integration
# Combined test target for local + CI (unit + integration + e2e)
test: test-container test-unit test-integration test-e2e
# ------------------------------------------------------------
# System install (native packages, calls scripts/installation/run-package.sh)

View File

@@ -1,7 +1,7 @@
# Maintainer: Kevin Veen-Birkenbach <info@veen.world>
pkgname=package-manager
pkgver=0.7.12
pkgver=0.8.0
pkgrel=1
pkgdesc="Local-flake wrapper for Kevin's package-manager (Nix-based)."
arch=('any')

View File

@@ -380,17 +380,6 @@ repositories:
- 44D8F11FD62F878E
- B5690EEEBB952194
- account: kevinveenbirkenbach
alias: infinito-presentation
description: This repository contains a Infinito.Nexus presentation designed for customers, end-users, investors, developers, and administrators, offering tailored content and insights for each group.
homepage: https://github.com/kevinveenbirkenbach/infinito-presentation
provider: github.com
repository: infinito-presentation
verified:
gpg_keys:
- 44D8F11FD62F878E
- B5690EEEBB952194
- account: kevinveenbirkenbach
description: A lightweight Python utility to generate dynamic color schemes from a single base color. Provides HSL-based color transformations for theming, UI design, and CSS variable generation. Optimized for integration in Python projects, Flask applications, and Ansible roles.
homepage: https://github.com/kevinveenbirkenbach/colorscheme-generator
@@ -599,17 +588,6 @@ repositories:
- 44D8F11FD62F878E
- B5690EEEBB952194
- account: kevinveenbirkenbach
desciption: Infinito Inventory Builder — a containerized web application that dynamically generates Ansible inventory files from invokable Infinito.Nexus roles through an interactive, browser-based interface.
homepage: https://github.com/kevinveenbirkenbach/infinito-inventory-builder
alias: invbuild
provider: github.com
repository: infinito-inventory-builder
verified:
gpg_keys:
- 44D8F11FD62F878E
- B5690EEEBB952194
- account: kevinveenbirkenbach
desciption: A simple Python CLI tool to safely rename Linux user accounts using usermod — including home directory migration and validation checks.
homepage: https://github.com/kevinveenbirkenbach/user-rename

View File

@@ -1,7 +0,0 @@
- account: kevinveenbirkenbach
alias: gkfdrtdtcntr
provider: github.com
repository: federated-to-central-social-network-bridge
verified:
gpg_keys:
- 44D8F11FD62F878E

25
debian/changelog vendored
View File

@@ -1,3 +1,28 @@
package-manager (0.8.0-1) unstable; urgency=medium
* **v0.7.15 — Installer & Command Resolution Improvements**
* Introduced a unified **layer-based installer pipeline** with clear precedence (OS-packages, Nix, Python, Makefile).
* Reworked installer structure and improved Python/Nix/Makefile installers, including isolated Python venvs and refined flake-output handling.
* Fully rewrote **command resolution** with stronger typing, safer fallbacks, and explicit support for `command: null` to mark library-only repositories.
* Added extensive **unit and integration tests** for installer capability ordering, command resolution, and Nix/Python installer behavior.
* Expanded documentation with capability hierarchy diagrams and scenario matrices.
* Removed deprecated repository entries and obsolete configuration files.
-- Kevin Veen-Birkenbach <kevin@veen.world> Wed, 10 Dec 2025 17:31:57 +0100
package-manager (0.7.14-1) unstable; urgency=medium
* Fixed the clone-all integration test so that `SystemExit(0)` from the proxy is treated as a successful command instead of a failure.
-- Kevin Veen-Birkenbach <kevin@veen.world> Wed, 10 Dec 2025 10:38:33 +0100
package-manager (0.7.13-1) unstable; urgency=medium
* Automated release.
-- Kevin Veen-Birkenbach <kevin@veen.world> Wed, 10 Dec 2025 10:27:24 +0100
package-manager (0.7.12-1) unstable; urgency=medium
* Fixed self refering alias during setup

View File

@@ -31,7 +31,7 @@
rec {
pkgmgr = pyPkgs.buildPythonApplication {
pname = "package-manager";
version = "0.7.12";
version = "0.8.0";
# Use the git repo as source
src = ./.;
@@ -48,9 +48,7 @@
# Runtime dependencies (matches [project.dependencies])
propagatedBuildInputs = [
pyPkgs.pyyaml
# Add more here if needed, e.g.:
# pyPkgs.click
# pyPkgs.rich
pyPkgs.pip
];
doCheck = false;
@@ -72,10 +70,16 @@
ansiblePkg =
if pkgs ? ansible-core then pkgs.ansible-core
else pkgs.ansible;
# Python 3 + pip für alles, was "python3 -m pip" macht
pythonWithPip = pkgs.python3.withPackages (ps: [
ps.pip
]);
in
{
default = pkgs.mkShell {
buildInputs = [
pythonWithPip
pkgmgrPkg
pkgs.git
ansiblePkg

View File

@@ -1,5 +1,5 @@
Name: package-manager
Version: 0.7.12
Version: 0.8.0
Release: 1%{?dist}
Summary: Wrapper that runs Kevin's package-manager via Nix flake
@@ -77,6 +77,22 @@ echo ">>> package-manager removed. Nix itself was not removed."
/usr/lib/package-manager/
%changelog
* Wed Dec 10 2025 Kevin Veen-Birkenbach <kevin@veen.world> - 0.8.0-1
- **v0.7.15 — Installer & Command Resolution Improvements**
* Introduced a unified **layer-based installer pipeline** with clear precedence (OS-packages, Nix, Python, Makefile).
* Reworked installer structure and improved Python/Nix/Makefile installers, including isolated Python venvs and refined flake-output handling.
* Fully rewrote **command resolution** with stronger typing, safer fallbacks, and explicit support for `command: null` to mark library-only repositories.
* Added extensive **unit and integration tests** for installer capability ordering, command resolution, and Nix/Python installer behavior.
* Expanded documentation with capability hierarchy diagrams and scenario matrices.
* Removed deprecated repository entries and obsolete configuration files.
* Wed Dec 10 2025 Kevin Veen-Birkenbach <kevin@veen.world> - 0.7.14-1
- Fixed the clone-all integration test so that `SystemExit(0)` from the proxy is treated as a successful command instead of a failure.
* Wed Dec 10 2025 Kevin Veen-Birkenbach <kevin@veen.world> - 0.7.13-1
- Automated release.
* Tue Dec 09 2025 Kevin Veen-Birkenbach <kevin@veen.world> - 0.7.12-1
- Fixed self refering alias during setup

View File

@@ -0,0 +1,218 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
High-level entry point for repository installation.
Responsibilities:
- Ensure the repository directory exists (clone if necessary).
- Verify the repository (GPG / commit checks).
- Build a RepoContext object.
- Delegate the actual installation decision logic to InstallationPipeline.
"""
from __future__ import annotations
import os
from typing import Any, Dict, List
from pkgmgr.core.repository.identifier import get_repo_identifier
from pkgmgr.core.repository.dir import get_repo_dir
from pkgmgr.core.repository.verify import verify_repository
from pkgmgr.actions.repository.clone import clone_repos
from pkgmgr.actions.install.context import RepoContext
from pkgmgr.actions.install.installers.os_packages import (
ArchPkgbuildInstaller,
DebianControlInstaller,
RpmSpecInstaller,
)
from pkgmgr.actions.install.installers.nix_flake import (
NixFlakeInstaller,
)
from pkgmgr.actions.install.installers.python import PythonInstaller
from pkgmgr.actions.install.installers.makefile import (
MakefileInstaller,
)
from pkgmgr.actions.install.pipeline import InstallationPipeline
Repository = Dict[str, Any]
# All available installers, in the order they should be considered.
INSTALLERS = [
ArchPkgbuildInstaller(),
DebianControlInstaller(),
RpmSpecInstaller(),
NixFlakeInstaller(),
PythonInstaller(),
MakefileInstaller(),
]
# ---------------------------------------------------------------------------
# Internal helpers
# ---------------------------------------------------------------------------
def _ensure_repo_dir(
repo: Repository,
repositories_base_dir: str,
all_repos: List[Repository],
preview: bool,
no_verification: bool,
clone_mode: str,
identifier: str,
) -> str | None:
"""
Compute and, if necessary, clone the repository directory.
Returns the absolute repository path or None if cloning ultimately failed.
"""
repo_dir = get_repo_dir(repositories_base_dir, repo)
if not os.path.exists(repo_dir):
print(
f"Repository directory '{repo_dir}' does not exist. "
f"Cloning it now..."
)
clone_repos(
[repo],
repositories_base_dir,
all_repos,
preview,
no_verification,
clone_mode,
)
if not os.path.exists(repo_dir):
print(
f"Cloning failed for repository {identifier}. "
f"Skipping installation."
)
return None
return repo_dir
def _verify_repo(
repo: Repository,
repo_dir: str,
no_verification: bool,
identifier: str,
) -> bool:
"""
Verify a repository using the configured verification data.
Returns True if verification is considered okay and installation may continue.
"""
verified_info = repo.get("verified")
verified_ok, errors, _commit_hash, _signing_key = verify_repository(
repo,
repo_dir,
mode="local",
no_verification=no_verification,
)
if not no_verification and verified_info and not verified_ok:
print(f"Warning: Verification failed for {identifier}:")
for err in errors:
print(f" - {err}")
choice = input("Continue anyway? [y/N]: ").strip().lower()
if choice != "y":
print(f"Skipping installation for {identifier}.")
return False
return True
def _create_context(
repo: Repository,
identifier: str,
repo_dir: str,
repositories_base_dir: str,
bin_dir: str,
all_repos: List[Repository],
no_verification: bool,
preview: bool,
quiet: bool,
clone_mode: str,
update_dependencies: bool,
) -> RepoContext:
"""
Build a RepoContext instance for the given repository.
"""
return RepoContext(
repo=repo,
identifier=identifier,
repo_dir=repo_dir,
repositories_base_dir=repositories_base_dir,
bin_dir=bin_dir,
all_repos=all_repos,
no_verification=no_verification,
preview=preview,
quiet=quiet,
clone_mode=clone_mode,
update_dependencies=update_dependencies,
)
# ---------------------------------------------------------------------------
# Public API
# ---------------------------------------------------------------------------
def install_repos(
selected_repos: List[Repository],
repositories_base_dir: str,
bin_dir: str,
all_repos: List[Repository],
no_verification: bool,
preview: bool,
quiet: bool,
clone_mode: str,
update_dependencies: bool,
) -> None:
"""
Install one or more repositories according to the configured installers
and the CLI layer precedence rules.
"""
pipeline = InstallationPipeline(INSTALLERS)
for repo in selected_repos:
identifier = get_repo_identifier(repo, all_repos)
repo_dir = _ensure_repo_dir(
repo=repo,
repositories_base_dir=repositories_base_dir,
all_repos=all_repos,
preview=preview,
no_verification=no_verification,
clone_mode=clone_mode,
identifier=identifier,
)
if not repo_dir:
continue
if not _verify_repo(
repo=repo,
repo_dir=repo_dir,
no_verification=no_verification,
identifier=identifier,
):
continue
ctx = _create_context(
repo=repo,
identifier=identifier,
repo_dir=repo_dir,
repositories_base_dir=repositories_base_dir,
bin_dir=bin_dir,
all_repos=all_repos,
no_verification=no_verification,
preview=preview,
quiet=quiet,
clone_mode=clone_mode,
update_dependencies=update_dependencies,
)
pipeline.run(ctx)

View File

@@ -38,7 +38,7 @@ from abc import ABC, abstractmethod
from typing import Iterable, TYPE_CHECKING
if TYPE_CHECKING:
from pkgmgr.actions.repository.install.context import RepoContext
from pkgmgr.actions.install.context import RepoContext
# ---------------------------------------------------------------------------

View File

@@ -0,0 +1,19 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Installer package for pkgmgr.
This exposes all installer classes so users can import them directly from
pkgmgr.actions.install.installers.
"""
from pkgmgr.actions.install.installers.base import BaseInstaller # noqa: F401
from pkgmgr.actions.install.installers.nix_flake import NixFlakeInstaller # noqa: F401
from pkgmgr.actions.install.installers.python import PythonInstaller # noqa: F401
from pkgmgr.actions.install.installers.makefile import MakefileInstaller # noqa: F401
# OS-specific installers
from pkgmgr.actions.install.installers.os_packages.arch_pkgbuild import ArchPkgbuildInstaller # noqa: F401
from pkgmgr.actions.install.installers.os_packages.debian_control import DebianControlInstaller # noqa: F401
from pkgmgr.actions.install.installers.os_packages.rpm_spec import RpmSpecInstaller # noqa: F401

View File

@@ -8,8 +8,8 @@ Base interface for all installer components in the pkgmgr installation pipeline.
from abc import ABC, abstractmethod
from typing import Set
from pkgmgr.actions.repository.install.context import RepoContext
from pkgmgr.actions.repository.install.capabilities import CAPABILITY_MATCHERS
from pkgmgr.actions.install.context import RepoContext
from pkgmgr.actions.install.capabilities import CAPABILITY_MATCHERS
class BaseInstaller(ABC):

View File

@@ -0,0 +1,97 @@
from __future__ import annotations
import os
import re
from pkgmgr.actions.install.context import RepoContext
from pkgmgr.actions.install.installers.base import BaseInstaller
from pkgmgr.core.command.run import run_command
class MakefileInstaller(BaseInstaller):
"""
Generic installer that runs `make install` if a Makefile with an
install target is present.
Safety rules:
- If PKGMGR_DISABLE_MAKEFILE_INSTALLER=1 is set, this installer
is globally disabled.
- The higher-level InstallationPipeline ensures that Makefile
installation does not run if a stronger CLI layer already owns
the command (e.g. Nix or OS packages).
"""
layer = "makefile"
MAKEFILE_NAME = "Makefile"
def supports(self, ctx: RepoContext) -> bool:
"""
Return True if this repository has a Makefile and the installer
is not globally disabled.
"""
# Optional global kill switch.
if os.environ.get("PKGMGR_DISABLE_MAKEFILE_INSTALLER") == "1":
if not ctx.quiet:
print(
"[INFO] MakefileInstaller is disabled via "
"PKGMGR_DISABLE_MAKEFILE_INSTALLER."
)
return False
makefile_path = os.path.join(ctx.repo_dir, self.MAKEFILE_NAME)
return os.path.exists(makefile_path)
def _has_install_target(self, makefile_path: str) -> bool:
"""
Heuristically check whether the Makefile defines an install target.
We look for:
- a plain 'install:' target, or
- any 'install-*:' style target.
"""
try:
with open(makefile_path, "r", encoding="utf-8", errors="ignore") as f:
content = f.read()
except OSError:
return False
# Simple heuristics: look for "install:" or targets starting with "install-"
if re.search(r"^install\s*:", content, flags=re.MULTILINE):
return True
if re.search(r"^install-[a-zA-Z0-9_-]*\s*:", content, flags=re.MULTILINE):
return True
return False
def run(self, ctx: RepoContext) -> None:
"""
Execute `make install` in the repository directory if an install
target exists.
"""
makefile_path = os.path.join(ctx.repo_dir, self.MAKEFILE_NAME)
if not os.path.exists(makefile_path):
if not ctx.quiet:
print(
f"[pkgmgr] Makefile '{makefile_path}' not found, "
"skipping MakefileInstaller."
)
return
if not self._has_install_target(makefile_path):
if not ctx.quiet:
print(
f"[pkgmgr] No 'install' target found in {makefile_path}."
)
return
if not ctx.quiet:
print(
f"[pkgmgr] Running 'make install' in {ctx.repo_dir} "
f"(MakefileInstaller)"
)
cmd = "make install"
run_command(cmd, cwd=ctx.repo_dir, preview=ctx.preview)

View File

@@ -10,28 +10,31 @@ installer will try to install profile outputs from the flake.
Behavior:
- If flake.nix is present and `nix` exists on PATH:
* First remove any existing `package-manager` profile entry (best-effort).
* Then install the flake outputs (`pkgmgr`, `default`) via `nix profile install`.
- Failure installing `pkgmgr` is treated as fatal.
- Failure installing `default` is logged as an error/warning but does not abort.
* Then install one or more flake outputs via `nix profile install`.
- For the package-manager repo:
* `pkgmgr` is mandatory (CLI), `default` is optional.
- For all other repos:
* `default` is mandatory.
Special handling for dev shells / CI:
- If IN_NIX_SHELL is set (e.g. inside `nix develop`), the installer is
disabled. In that environment the flake outputs are already provided
by the dev shell and we must not touch the user profile.
Special handling:
- If PKGMGR_DISABLE_NIX_FLAKE_INSTALLER=1 is set, the installer is
globally disabled (useful for CI or debugging).
The higher-level InstallationPipeline and CLI-layer model decide when this
installer is allowed to run, based on where the current CLI comes from
(e.g. Nix, OS packages, Python, Makefile).
"""
import os
import shutil
from typing import TYPE_CHECKING
from typing import TYPE_CHECKING, List, Tuple
from pkgmgr.actions.repository.install.installers.base import BaseInstaller
from pkgmgr.actions.install.installers.base import BaseInstaller
from pkgmgr.core.command.run import run_command
if TYPE_CHECKING:
from pkgmgr.actions.repository.install.context import RepoContext
from pkgmgr.actions.repository.install import InstallContext
from pkgmgr.actions.install.context import RepoContext
from pkgmgr.actions.install import InstallContext
class NixFlakeInstaller(BaseInstaller):
@@ -43,33 +46,14 @@ class NixFlakeInstaller(BaseInstaller):
FLAKE_FILE = "flake.nix"
PROFILE_NAME = "package-manager"
def _in_nix_shell(self) -> bool:
"""
Return True if we appear to be running inside a Nix dev shell.
Nix sets IN_NIX_SHELL in `nix develop` environments. In that case
the flake outputs are already available, and touching the user
profile (nix profile install/remove) is undesirable.
"""
return bool(os.environ.get("IN_NIX_SHELL"))
def supports(self, ctx: "RepoContext") -> bool:
"""
Only support repositories that:
- Are NOT inside a Nix dev shell (IN_NIX_SHELL unset),
- Are NOT explicitly disabled via PKGMGR_DISABLE_NIX_FLAKE_INSTALLER=1,
- Have a flake.nix,
- And have the `nix` command available.
"""
# 1) Skip when running inside a dev shell flake is already active.
if self._in_nix_shell():
print(
"[INFO] IN_NIX_SHELL detected; skipping NixFlakeInstaller. "
"Flake outputs are provided by the development shell."
)
return False
# 2) Optional global kill-switch for CI or debugging.
# Optional global kill-switch for CI or debugging.
if os.environ.get("PKGMGR_DISABLE_NIX_FLAKE_INSTALLER") == "1":
print(
"[INFO] PKGMGR_DISABLE_NIX_FLAKE_INSTALLER=1 "
@@ -77,11 +61,11 @@ class NixFlakeInstaller(BaseInstaller):
)
return False
# 3) Nix must be available.
# Nix must be available.
if shutil.which("nix") is None:
return False
# 4) flake.nix must exist in the repository.
# flake.nix must exist in the repository.
flake_path = os.path.join(ctx.repo_dir, self.FLAKE_FILE)
return os.path.exists(flake_path)
@@ -107,36 +91,56 @@ class NixFlakeInstaller(BaseInstaller):
# Unit tests explicitly assert this is swallowed
pass
def _profile_outputs(self, ctx: "RepoContext") -> List[Tuple[str, bool]]:
"""
Decide which flake outputs to install and whether failures are fatal.
Returns a list of (output_name, allow_failure) tuples.
Rules:
- For the package-manager repo (identifier 'pkgmgr' or 'package-manager'):
[("pkgmgr", False), ("default", True)]
- For all other repos:
[("default", False)]
"""
ident = ctx.identifier
if ident in {"pkgmgr", "package-manager"}:
# pkgmgr: main CLI output is "pkgmgr" (mandatory),
# "default" is nice-to-have (non-fatal).
return [("pkgmgr", False), ("default", True)]
# Generic repos: we expect a sensible "default" package/app.
# Failure to install it is considered fatal.
return [("default", False)]
def run(self, ctx: "InstallContext") -> None:
"""
Install Nix flake profile outputs (pkgmgr, default).
Install Nix flake profile outputs.
Any failure installing `pkgmgr` is treated as fatal (SystemExit).
A failure installing `default` is logged but does not abort.
For the package-manager repo, failure installing 'pkgmgr' is fatal,
failure installing 'default' is non-fatal.
For other repos, failure installing 'default' is fatal.
"""
# Extra guard in case run() is called directly without supports().
if self._in_nix_shell():
print(
"[INFO] IN_NIX_SHELL detected in run(); "
"skipping Nix flake profile installation."
)
return
# Reuse supports() to keep logic in one place
# Reuse supports() to keep logic in one place.
if not self.supports(ctx): # type: ignore[arg-type]
return
print("Nix flake detected, attempting to install profile outputs...")
outputs = self._profile_outputs(ctx) # list of (name, allow_failure)
# Handle the "already installed" case up-front:
print(
"Nix flake detected in "
f"{ctx.identifier}, attempting to install profile outputs: "
+ ", ".join(name for name, _ in outputs)
)
# Handle the "already installed" case up-front for the shared profile.
self._ensure_old_profile_removed(ctx) # type: ignore[arg-type]
for output in ("pkgmgr", "default"):
for output, allow_failure in outputs:
cmd = f"nix profile install {ctx.repo_dir}#{output}"
try:
# For 'default' we don't want the process to exit on error
allow_failure = output == "default"
run_command(
cmd,
cwd=ctx.repo_dir,
@@ -146,12 +150,11 @@ class NixFlakeInstaller(BaseInstaller):
print(f"Nix flake output '{output}' successfully installed.")
except SystemExit as e:
print(f"[Error] Failed to install Nix flake output '{output}': {e}")
if output == "pkgmgr":
# Broken main CLI install → fatal
if not allow_failure:
# Mandatory output failed → fatal for the pipeline.
raise
# For 'default' we log and continue
# Optional output failed → log and continue.
print(
"[Warning] Continuing despite failure to install 'default' "
"because 'pkgmgr' is already installed."
"[Warning] Continuing despite failure to install "
f"optional output '{output}'."
)
break

View File

@@ -3,8 +3,8 @@
import os
import shutil
from pkgmgr.actions.repository.install.context import RepoContext
from pkgmgr.actions.repository.install.installers.base import BaseInstaller
from pkgmgr.actions.install.context import RepoContext
from pkgmgr.actions.install.installers.base import BaseInstaller
from pkgmgr.core.command.run import run_command

View File

@@ -19,8 +19,8 @@ import os
import shutil
from typing import List
from pkgmgr.actions.repository.install.context import RepoContext
from pkgmgr.actions.repository.install.installers.base import BaseInstaller
from pkgmgr.actions.install.context import RepoContext
from pkgmgr.actions.install.installers.base import BaseInstaller
from pkgmgr.core.command.run import run_command

View File

@@ -21,8 +21,8 @@ import shutil
import tarfile
from typing import List, Optional, Tuple
from pkgmgr.actions.repository.install.context import RepoContext
from pkgmgr.actions.repository.install.installers.base import BaseInstaller
from pkgmgr.actions.install.context import RepoContext
from pkgmgr.actions.install.installers.base import BaseInstaller
from pkgmgr.core.command.run import run_command

View File

@@ -0,0 +1,139 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
PythonInstaller — install Python projects defined via pyproject.toml.
Installation rules:
1. pip command resolution:
a) If PKGMGR_PIP is set → use it exactly as provided.
b) Else if running inside a virtualenv → use `sys.executable -m pip`.
c) Else → create/use a per-repository virtualenv under ~/.venvs/<repo>/.
2. Installation target:
- Always install into the resolved pip environment.
- Never modify system Python, never rely on --user.
- Nix-immutable systems (PEP 668) are automatically avoided because we
never touch system Python.
3. The installer is skipped when:
- PKGMGR_DISABLE_PYTHON_INSTALLER=1 is set.
- The repository has no pyproject.toml.
All pip failures are treated as fatal.
"""
from __future__ import annotations
import os
import sys
import subprocess
from typing import TYPE_CHECKING
from pkgmgr.actions.install.installers.base import BaseInstaller
from pkgmgr.core.command.run import run_command
if TYPE_CHECKING:
from pkgmgr.actions.install.context import RepoContext
from pkgmgr.actions.install import InstallContext
class PythonInstaller(BaseInstaller):
"""Install Python projects and dependencies via pip using isolated environments."""
layer = "python"
# ----------------------------------------------------------------------
# Installer activation logic
# ----------------------------------------------------------------------
def supports(self, ctx: "RepoContext") -> bool:
"""
Return True if this installer should handle this repository.
The installer is active only when:
- A pyproject.toml exists in the repo, and
- PKGMGR_DISABLE_PYTHON_INSTALLER is not set.
"""
if os.environ.get("PKGMGR_DISABLE_PYTHON_INSTALLER") == "1":
print("[INFO] PythonInstaller disabled via PKGMGR_DISABLE_PYTHON_INSTALLER.")
return False
return os.path.exists(os.path.join(ctx.repo_dir, "pyproject.toml"))
# ----------------------------------------------------------------------
# Virtualenv handling
# ----------------------------------------------------------------------
def _in_virtualenv(self) -> bool:
"""Detect whether the current interpreter is inside a venv."""
if os.environ.get("VIRTUAL_ENV"):
return True
base = getattr(sys, "base_prefix", sys.prefix)
return sys.prefix != base
def _ensure_repo_venv(self, ctx: "InstallContext") -> str:
"""
Ensure that ~/.venvs/<identifier>/ exists and contains a minimal venv.
Returns the venv directory path.
"""
venv_dir = os.path.expanduser(f"~/.venvs/{ctx.identifier}")
python = sys.executable
if not os.path.isdir(venv_dir):
print(f"[python-installer] Creating virtualenv: {venv_dir}")
subprocess.check_call([python, "-m", "venv", venv_dir])
return venv_dir
# ----------------------------------------------------------------------
# pip command resolution
# ----------------------------------------------------------------------
def _pip_cmd(self, ctx: "InstallContext") -> str:
"""
Determine which pip command to use.
Priority:
1. PKGMGR_PIP override given by user or automation.
2. Active virtualenv → use sys.executable -m pip.
3. Per-repository venv → ~/.venvs/<repo>/bin/pip
"""
explicit = os.environ.get("PKGMGR_PIP", "").strip()
if explicit:
return explicit
if self._in_virtualenv():
return f"{sys.executable} -m pip"
venv_dir = self._ensure_repo_venv(ctx)
pip_path = os.path.join(venv_dir, "bin", "pip")
return pip_path
# ----------------------------------------------------------------------
# Execution
# ----------------------------------------------------------------------
def run(self, ctx: "InstallContext") -> None:
"""
Install the project defined by pyproject.toml.
Uses the resolved pip environment. Installation is isolated and never
touches system Python.
"""
if not self.supports(ctx): # type: ignore[arg-type]
return
pyproject = os.path.join(ctx.repo_dir, "pyproject.toml")
if not os.path.exists(pyproject):
return
print(f"[python-installer] Installing Python project for {ctx.identifier}...")
pip_cmd = self._pip_cmd(ctx)
# Final install command: ALWAYS isolated, never system-wide.
install_cmd = f"{pip_cmd} install ."
run_command(install_cmd, cwd=ctx.repo_dir, preview=ctx.preview)
print(f"[python-installer] Installation finished for {ctx.identifier}.")

View File

@@ -0,0 +1,91 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
CLI layer model for the pkgmgr installation pipeline.
We treat CLI entry points as coming from one of four conceptual layers:
- os-packages : system package managers (pacman/apt/dnf/…)
- nix : Nix flake / nix profile
- python : pip / virtualenv / user-local scripts
- makefile : repo-local Makefile / scripts inside the repo
The layer order defines precedence: higher layers "own" the CLI and
lower layers will not be executed once a higher-priority CLI exists.
"""
from __future__ import annotations
import os
from enum import Enum
from typing import Optional
class CliLayer(str, Enum):
OS_PACKAGES = "os-packages"
NIX = "nix"
PYTHON = "python"
MAKEFILE = "makefile"
# Highest priority first
CLI_LAYERS: list[CliLayer] = [
CliLayer.OS_PACKAGES,
CliLayer.NIX,
CliLayer.PYTHON,
CliLayer.MAKEFILE,
]
def layer_priority(layer: Optional[CliLayer]) -> int:
"""
Return a numeric priority index for a given layer.
Lower index → higher priority.
Unknown / None → very low priority.
"""
if layer is None:
return len(CLI_LAYERS)
try:
return CLI_LAYERS.index(layer)
except ValueError:
return len(CLI_LAYERS)
def classify_command_layer(command: str, repo_dir: str) -> CliLayer:
"""
Heuristically classify a resolved command path into a CLI layer.
Rules (best effort):
- /usr/... or /bin/... → os-packages
- /nix/store/... or ~/.nix-profile → nix
- ~/.local/bin/... → python
- inside repo_dir → makefile
- everything else → python (user/venv scripts, etc.)
"""
command_abs = os.path.abspath(os.path.expanduser(command))
repo_abs = os.path.abspath(repo_dir)
home = os.path.expanduser("~")
# OS package managers
if command_abs.startswith("/usr/") or command_abs.startswith("/bin/"):
return CliLayer.OS_PACKAGES
# Nix store / profile
if command_abs.startswith("/nix/store/") or command_abs.startswith(
os.path.join(home, ".nix-profile")
):
return CliLayer.NIX
# User-local bin
if command_abs.startswith(os.path.join(home, ".local", "bin")):
return CliLayer.PYTHON
# Inside the repository → usually a Makefile/script
if command_abs.startswith(repo_abs):
return CliLayer.MAKEFILE
# Fallback: treat as Python-style/user-level script
return CliLayer.PYTHON

View File

@@ -0,0 +1,257 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Installation pipeline orchestration for repositories.
This module implements the "Setup Controller" logic:
1. Detect current CLI command for the repo (if any).
2. Classify it into a layer (os-packages, nix, python, makefile).
3. Iterate over installers in layer order:
- Skip installers whose layer is weaker than an already-loaded one.
- Run only installers that support() the repo and add new capabilities.
- After each installer, re-resolve the command and update the layer.
4. Maintain the repo["command"] field and create/update symlinks via create_ink().
The goal is to prevent conflicting installations and make the layering
behaviour explicit and testable.
"""
from __future__ import annotations
from dataclasses import dataclass
from typing import Optional, Sequence, Set
from pkgmgr.actions.install.context import RepoContext
from pkgmgr.actions.install.installers.base import BaseInstaller
from pkgmgr.actions.install.layers import (
CliLayer,
classify_command_layer,
layer_priority,
)
from pkgmgr.core.command.ink import create_ink
from pkgmgr.core.command.resolve import resolve_command_for_repo
@dataclass
class CommandState:
"""
Represents the current CLI state for a repository:
- command: absolute or relative path to the CLI entry point
- layer: which conceptual layer this command belongs to
"""
command: Optional[str]
layer: Optional[CliLayer]
class CommandResolver:
"""
Small helper responsible for resolving the current command for a repo
and mapping it into a CommandState.
"""
def __init__(self, ctx: RepoContext) -> None:
self._ctx = ctx
def resolve(self) -> CommandState:
"""
Resolve the current command for this repository.
If resolve_command_for_repo raises SystemExit (e.g. Python package
without installed entry point), we treat this as "no command yet"
from the point of view of the installers.
"""
repo = self._ctx.repo
identifier = self._ctx.identifier
repo_dir = self._ctx.repo_dir
try:
cmd = resolve_command_for_repo(
repo=repo,
repo_identifier=identifier,
repo_dir=repo_dir,
)
except SystemExit:
cmd = None
if not cmd:
return CommandState(command=None, layer=None)
layer = classify_command_layer(cmd, repo_dir)
return CommandState(command=cmd, layer=layer)
class InstallationPipeline:
"""
High-level orchestrator that applies a sequence of installers
to a repository based on CLI layer precedence.
"""
def __init__(self, installers: Sequence[BaseInstaller]) -> None:
self._installers = list(installers)
# ------------------------------------------------------------------
# Public API
# ------------------------------------------------------------------
def run(self, ctx: RepoContext) -> None:
"""
Execute the installation pipeline for a single repository.
- Detect initial command & layer.
- Optionally create a symlink.
- Run installers in order, skipping those whose layer is weaker
than an already-loaded CLI.
- After each installer, re-resolve the command and refresh the
symlink if needed.
"""
repo = ctx.repo
repo_dir = ctx.repo_dir
identifier = ctx.identifier
repositories_base_dir = ctx.repositories_base_dir
bin_dir = ctx.bin_dir
all_repos = ctx.all_repos
quiet = ctx.quiet
preview = ctx.preview
resolver = CommandResolver(ctx)
state = resolver.resolve()
# Persist initial command (if any) and create a symlink.
if state.command:
repo["command"] = state.command
create_ink(
repo,
repositories_base_dir,
bin_dir,
all_repos,
quiet=quiet,
preview=preview,
)
else:
repo.pop("command", None)
provided_capabilities: Set[str] = set()
# Main installer loop
for installer in self._installers:
layer_name = getattr(installer, "layer", None)
# Installers without a layer participate without precedence logic.
if layer_name is None:
self._run_installer(installer, ctx, identifier, repo_dir, quiet)
continue
try:
installer_layer = CliLayer(layer_name)
except ValueError:
# Unknown layer string → treat as lowest priority.
installer_layer = None
# "Previous/Current layer already loaded?"
if state.layer is not None and installer_layer is not None:
current_prio = layer_priority(state.layer)
installer_prio = layer_priority(installer_layer)
if current_prio < installer_prio:
# Current CLI comes from a higher-priority layer,
# so we skip this installer entirely.
if not quiet:
print(
f"[pkgmgr] Skipping installer "
f"{installer.__class__.__name__} for {identifier} "
f"CLI already provided by layer {state.layer.value!r}."
)
continue
if current_prio == installer_prio:
# Same layer already provides a CLI; usually there is no
# need to run another installer on top of it.
if not quiet:
print(
f"[pkgmgr] Skipping installer "
f"{installer.__class__.__name__} for {identifier} "
f"layer {installer_layer.value!r} is already loaded."
)
continue
# Check if this installer is applicable at all.
if not installer.supports(ctx):
continue
# Capabilities: if everything this installer would provide is already
# covered, we can safely skip it.
caps = installer.discover_capabilities(ctx)
if caps and caps.issubset(provided_capabilities):
if not quiet:
print(
f"Skipping installer {installer.__class__.__name__} "
f"for {identifier} capabilities {caps} already provided."
)
continue
if not quiet:
print(
f"[pkgmgr] Running installer {installer.__class__.__name__} "
f"for {identifier} in '{repo_dir}' "
f"(new capabilities: {caps or set()})..."
)
# Run the installer with error reporting.
self._run_installer(installer, ctx, identifier, repo_dir, quiet)
provided_capabilities.update(caps)
# After running an installer, re-resolve the command and layer.
new_state = resolver.resolve()
if new_state.command:
repo["command"] = new_state.command
create_ink(
repo,
repositories_base_dir,
bin_dir,
all_repos,
quiet=quiet,
preview=preview,
)
else:
repo.pop("command", None)
state = new_state
# ------------------------------------------------------------------
# Internal helpers
# ------------------------------------------------------------------
@staticmethod
def _run_installer(
installer: BaseInstaller,
ctx: RepoContext,
identifier: str,
repo_dir: str,
quiet: bool,
) -> None:
"""
Execute a single installer with unified error handling.
"""
try:
installer.run(ctx)
except SystemExit as exc:
exit_code = exc.code if isinstance(exc.code, int) else str(exc.code)
print(
f"[ERROR] Installer {installer.__class__.__name__} failed "
f"for repository {identifier} (dir: {repo_dir}) "
f"with exit code {exit_code}."
)
print(
"[ERROR] This usually means an underlying command failed "
"(e.g. 'make install', 'nix build', 'pip install', ...)."
)
print(
"[ERROR] Check the log above for the exact command output. "
"You can also run this repository in isolation via:\n"
f" pkgmgr install {identifier} "
"--clone-mode shallow --no-verification"
)
raise

View File

@@ -1,294 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Repository installation pipeline for pkgmgr.
This module orchestrates the installation of repositories by:
1. Ensuring the repository directory exists (cloning if necessary).
2. Verifying the repository according to the configured policies.
3. Creating executable links using create_ink(), after resolving the
appropriate command via resolve_command_for_repo().
4. Running a sequence of modular installer components that handle
specific technologies or manifests (PKGBUILD, Nix flakes, Python
via pyproject.toml, Makefile, OS-specific package metadata).
The goal is to keep this file thin and delegate most logic to small,
focused installer classes.
"""
import os
from typing import List, Dict, Any
from pkgmgr.core.repository.identifier import get_repo_identifier
from pkgmgr.core.repository.dir import get_repo_dir
from pkgmgr.core.command.ink import create_ink
from pkgmgr.core.repository.verify import verify_repository
from pkgmgr.actions.repository.clone import clone_repos
from pkgmgr.actions.repository.install.context import RepoContext
from pkgmgr.core.command.resolve import resolve_command_for_repo
# Installer implementations
from pkgmgr.actions.repository.install.installers.os_packages import (
ArchPkgbuildInstaller,
DebianControlInstaller,
RpmSpecInstaller,
)
from pkgmgr.actions.repository.install.installers.nix_flake import NixFlakeInstaller
from pkgmgr.actions.repository.install.installers.python import PythonInstaller
from pkgmgr.actions.repository.install.installers.makefile import MakefileInstaller
# Layering:
# 1) OS packages: PKGBUILD / debian/control / RPM spec → os-deps.*
# 2) Nix flakes (flake.nix) → e.g. python-runtime, make-install
# 3) Python (pyproject.toml) → e.g. python-runtime, make-install
# 4) Makefile fallback → e.g. make-install
INSTALLERS = [
ArchPkgbuildInstaller(), # Arch
DebianControlInstaller(), # Debian/Ubuntu
RpmSpecInstaller(), # Fedora/RHEL/CentOS
NixFlakeInstaller(), # flake.nix (Nix layer)
PythonInstaller(), # pyproject.toml
MakefileInstaller(), # generic 'make install'
]
def _ensure_repo_dir(
repo: Dict[str, Any],
repositories_base_dir: str,
all_repos: List[Dict[str, Any]],
preview: bool,
no_verification: bool,
clone_mode: str,
identifier: str,
) -> str:
"""
Ensure the repository directory exists. If not, attempt to clone it.
Returns the repository directory path or an empty string if cloning failed.
"""
repo_dir = get_repo_dir(repositories_base_dir, repo)
if not os.path.exists(repo_dir):
print(f"Repository directory '{repo_dir}' does not exist. Cloning it now...")
clone_repos(
[repo],
repositories_base_dir,
all_repos,
preview,
no_verification,
clone_mode,
)
if not os.path.exists(repo_dir):
print(f"Cloning failed for repository {identifier}. Skipping installation.")
return ""
return repo_dir
def _verify_repo(
repo: Dict[str, Any],
repo_dir: str,
no_verification: bool,
identifier: str,
) -> bool:
"""
Verify the repository using verify_repository().
Returns True if installation should proceed, False if it should be skipped.
"""
verified_info = repo.get("verified")
verified_ok, errors, commit_hash, signing_key = verify_repository(
repo,
repo_dir,
mode="local",
no_verification=no_verification,
)
if not no_verification and verified_info and not verified_ok:
print(f"Warning: Verification failed for {identifier}:")
for err in errors:
print(f" - {err}")
choice = input("Proceed with installation? (y/N): ").strip().lower()
if choice != "y":
print(f"Skipping installation for {identifier}.")
return False
return True
def _create_context(
repo: Dict[str, Any],
identifier: str,
repo_dir: str,
repositories_base_dir: str,
bin_dir: str,
all_repos: List[Dict[str, Any]],
no_verification: bool,
preview: bool,
quiet: bool,
clone_mode: str,
update_dependencies: bool,
) -> RepoContext:
"""
Build a RepoContext for the given repository and parameters.
"""
return RepoContext(
repo=repo,
identifier=identifier,
repo_dir=repo_dir,
repositories_base_dir=repositories_base_dir,
bin_dir=bin_dir,
all_repos=all_repos,
no_verification=no_verification,
preview=preview,
quiet=quiet,
clone_mode=clone_mode,
update_dependencies=update_dependencies,
)
def install_repos(
selected_repos: List[Dict[str, Any]],
repositories_base_dir: str,
bin_dir: str,
all_repos: List[Dict[str, Any]],
no_verification: bool,
preview: bool,
quiet: bool,
clone_mode: str,
update_dependencies: bool,
) -> None:
"""
Install repositories by creating symbolic links and processing standard
manifest files (PKGBUILD, flake.nix, Python manifests, Makefile, etc.)
via dedicated installer components.
Any installer failure (SystemExit) is treated as fatal and will abort
the current installation.
"""
for repo in selected_repos:
identifier = get_repo_identifier(repo, all_repos)
repo_dir = _ensure_repo_dir(
repo=repo,
repositories_base_dir=repositories_base_dir,
all_repos=all_repos,
preview=preview,
no_verification=no_verification,
clone_mode=clone_mode,
identifier=identifier,
)
if not repo_dir:
continue
if not _verify_repo(
repo=repo,
repo_dir=repo_dir,
no_verification=no_verification,
identifier=identifier,
):
continue
ctx = _create_context(
repo=repo,
identifier=identifier,
repo_dir=repo_dir,
repositories_base_dir=repositories_base_dir,
bin_dir=bin_dir,
all_repos=all_repos,
no_verification=no_verification,
preview=preview,
quiet=quiet,
clone_mode=clone_mode,
update_dependencies=update_dependencies,
)
# ------------------------------------------------------------
# Resolve the command for this repository before creating the link.
# If no command is resolved, no link will be created.
# ------------------------------------------------------------
resolved_command = resolve_command_for_repo(
repo=repo,
repo_identifier=identifier,
repo_dir=repo_dir,
)
if resolved_command:
repo["command"] = resolved_command
else:
repo.pop("command", None)
# ------------------------------------------------------------
# Create the symlink using create_ink (if a command is set).
# ------------------------------------------------------------
create_ink(
repo,
repositories_base_dir,
bin_dir,
all_repos,
quiet=quiet,
preview=preview,
)
# Track which logical capabilities have already been provided by
# earlier installers for this repository. This allows us to skip
# installers that would only duplicate work (e.g. Python runtime
# already provided by Nix flake → skip pyproject/Makefile).
provided_capabilities: set[str] = set()
# Run all installers that support this repository, but only if they
# provide at least one capability that is not yet satisfied.
for installer in INSTALLERS:
if not installer.supports(ctx):
continue
caps = installer.discover_capabilities(ctx)
# If the installer declares capabilities and *all* of them are
# already provided, we can safely skip it.
if caps and caps.issubset(provided_capabilities):
if not quiet:
print(
f"Skipping installer {installer.__class__.__name__} "
f"for {identifier} capabilities {caps} already provided."
)
continue
# ------------------------------------------------------------
# Debug output + clear error if an installer fails
# ------------------------------------------------------------
if not quiet:
print(
f"[pkgmgr] Running installer {installer.__class__.__name__} "
f"for {identifier} in '{repo_dir}' "
f"(new capabilities: {caps or ''})..."
)
try:
installer.run(ctx)
except SystemExit as exc:
exit_code = exc.code if isinstance(exc.code, int) else str(exc.code)
print(
f"[ERROR] Installer {installer.__class__.__name__} failed "
f"for repository {identifier} (dir: {repo_dir}) "
f"with exit code {exit_code}."
)
print(
"[ERROR] This usually means an underlying command failed "
"(e.g. 'make install', 'nix build', 'pip install', ...)."
)
print(
"[ERROR] Check the log above for the exact command output. "
"You can also run this repository in isolation via:\n"
f" pkgmgr install {identifier} --clone-mode shallow --no-verification"
)
# Re-raise so that CLI/tests fail clearly,
# but now with much more context.
raise
# Only merge capabilities if the installer succeeded
provided_capabilities.update(caps)

View File

@@ -1,19 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Installer package for pkgmgr.
This exposes all installer classes so users can import them directly from
pkgmgr.actions.repository.install.installers.
"""
from pkgmgr.actions.repository.install.installers.base import BaseInstaller # noqa: F401
from pkgmgr.actions.repository.install.installers.nix_flake import NixFlakeInstaller # noqa: F401
from pkgmgr.actions.repository.install.installers.python import PythonInstaller # noqa: F401
from pkgmgr.actions.repository.install.installers.makefile import MakefileInstaller # noqa: F401
# OS-specific installers
from pkgmgr.actions.repository.install.installers.os_packages.arch_pkgbuild import ArchPkgbuildInstaller # noqa: F401
from pkgmgr.actions.repository.install.installers.os_packages.debian_control import DebianControlInstaller # noqa: F401
from pkgmgr.actions.repository.install.installers.os_packages.rpm_spec import RpmSpecInstaller # noqa: F401

View File

@@ -1,93 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Installer that triggers `make install` if a Makefile is present and
the Makefile actually defines an 'install' target.
This is useful for repositories that expose a standard Makefile-based
installation step.
"""
import os
import re
from pkgmgr.actions.repository.install.context import RepoContext
from pkgmgr.actions.repository.install.installers.base import BaseInstaller
from pkgmgr.core.command.run import run_command
class MakefileInstaller(BaseInstaller):
"""Run `make install` if a Makefile with an 'install' target exists."""
# Logical layer name, used by capability matchers.
layer = "makefile"
MAKEFILE_NAME = "Makefile"
def supports(self, ctx: RepoContext) -> bool:
"""Return True if a Makefile exists in the repository directory."""
makefile_path = os.path.join(ctx.repo_dir, self.MAKEFILE_NAME)
return os.path.exists(makefile_path)
def _has_install_target(self, makefile_path: str) -> bool:
"""
Check whether the Makefile defines an 'install' target.
We treat the presence of a real install target as either:
- a line starting with 'install:' (optionally preceded by whitespace), or
- a .PHONY line that lists 'install' as one of the targets.
"""
try:
with open(makefile_path, "r", encoding="utf-8", errors="ignore") as f:
content = f.read()
except OSError:
# If we cannot read the Makefile for some reason, assume no target.
return False
# install: ...
if re.search(r"^\s*install\s*:", content, flags=re.MULTILINE):
return True
# .PHONY: ... install ...
if re.search(r"^\s*\.PHONY\s*:\s*.*\binstall\b", content, flags=re.MULTILINE):
return True
return False
def run(self, ctx: RepoContext) -> None:
"""
Execute `make install` in the repository directory, but only if an
'install' target is actually defined in the Makefile.
Any failure in `make install` is treated as a fatal error and will
propagate as SystemExit from run_command().
"""
makefile_path = os.path.join(ctx.repo_dir, self.MAKEFILE_NAME)
if not os.path.exists(makefile_path):
# Should normally not happen if supports() was checked before,
# but keep this guard for robustness.
if not ctx.quiet:
print(
f"[pkgmgr] Makefile '{makefile_path}' not found, "
"skipping make install."
)
return
if not self._has_install_target(makefile_path):
if not ctx.quiet:
print(
"[pkgmgr] Skipping Makefile install: no 'install' target "
f"found in {makefile_path}."
)
return
if not ctx.quiet:
print(
f"[pkgmgr] Running 'make install' in {ctx.repo_dir} "
"(install target detected in Makefile)."
)
cmd = "make install"
run_command(cmd, cwd=ctx.repo_dir, preview=ctx.preview)

View File

@@ -1,127 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Installer for Python projects based on pyproject.toml.
Strategy:
- Determine a pip command in this order:
1. $PKGMGR_PIP (explicit override, e.g. ~/.venvs/pkgmgr/bin/pip)
2. sys.executable -m pip (current interpreter)
3. "pip" from PATH as last resort
- If pyproject.toml exists: pip install .
All installation failures are treated as fatal errors (SystemExit),
except when we explicitly skip the installer:
- If IN_NIX_SHELL is set, we assume Python is managed by Nix and
skip this installer entirely.
- If PKGMGR_DISABLE_PYTHON_INSTALLER=1 is set, the installer is
globally disabled (useful for CI or debugging).
"""
from __future__ import annotations
import os
import sys
from typing import TYPE_CHECKING
from pkgmgr.actions.repository.install.installers.base import BaseInstaller
from pkgmgr.core.command.run import run_command
if TYPE_CHECKING:
from pkgmgr.actions.repository.install.context import RepoContext
from pkgmgr.actions.repository.install import InstallContext
class PythonInstaller(BaseInstaller):
"""Install Python projects and dependencies via pip."""
# Logical layer name, used by capability matchers.
layer = "python"
def _in_nix_shell(self) -> bool:
"""
Return True if we appear to be running inside a Nix dev shell.
Nix sets IN_NIX_SHELL in `nix develop` environments. In that case
the Python environment is already provided by Nix, so we must not
attempt an additional pip-based installation.
"""
return bool(os.environ.get("IN_NIX_SHELL"))
def supports(self, ctx: "RepoContext") -> bool:
"""
Return True if this installer should handle the given repository.
Only pyproject.toml is supported as the single source of truth
for Python dependencies and packaging metadata.
The installer is *disabled* when:
- IN_NIX_SHELL is set (Python managed by Nix dev shell), or
- PKGMGR_DISABLE_PYTHON_INSTALLER=1 is set.
"""
# 1) Skip in Nix dev shells Python is managed by the flake/devShell.
if self._in_nix_shell():
print(
"[INFO] IN_NIX_SHELL detected; skipping PythonInstaller. "
"Python runtime is provided by the Nix dev shell."
)
return False
# 2) Optional global kill-switch.
if os.environ.get("PKGMGR_DISABLE_PYTHON_INSTALLER") == "1":
print(
"[INFO] PKGMGR_DISABLE_PYTHON_INSTALLER=1 "
"PythonInstaller is disabled."
)
return False
repo_dir = ctx.repo_dir
return os.path.exists(os.path.join(repo_dir, "pyproject.toml"))
def _pip_cmd(self) -> str:
"""
Resolve the pip command to use.
Order:
1) PKGMGR_PIP (explicit override)
2) sys.executable -m pip
3) plain "pip"
"""
explicit = os.environ.get("PKGMGR_PIP", "").strip()
if explicit:
return explicit
if sys.executable:
return f"{sys.executable} -m pip"
return "pip"
def run(self, ctx: "InstallContext") -> None:
"""
Install Python project defined via pyproject.toml.
Any pip failure is propagated as SystemExit.
"""
# Extra guard in case run() is called directly without supports().
if self._in_nix_shell():
print(
"[INFO] IN_NIX_SHELL detected in PythonInstaller.run(); "
"skipping pip-based installation."
)
return
if not self.supports(ctx): # type: ignore[arg-type]
return
pip_cmd = self._pip_cmd()
pyproject = os.path.join(ctx.repo_dir, "pyproject.toml")
if os.path.exists(pyproject):
print(
f"pyproject.toml found in {ctx.identifier}, "
f"installing Python project..."
)
cmd = f"{pip_cmd} install ."
run_command(cmd, cwd=ctx.repo_dir, preview=ctx.preview)

View File

@@ -2,7 +2,7 @@ import sys
import shutil
from pkgmgr.actions.repository.pull import pull_with_verification
from pkgmgr.actions.repository.install import install_repos
from pkgmgr.actions.install import install_repos
def update_repos(

View File

@@ -7,7 +7,7 @@ import sys
from typing import Any, Dict, List
from pkgmgr.cli.context import CLIContext
from pkgmgr.actions.repository.install import install_repos
from pkgmgr.actions.install import install_repos
from pkgmgr.actions.repository.deinstall import deinstall_repos
from pkgmgr.actions.repository.delete import delete_repos
from pkgmgr.actions.repository.update import update_repos

View File

@@ -1,57 +1,84 @@
from __future__ import annotations
from __future__ import annotations
import json
import os
import json
import os
from typing import Any, Dict, List
from typing import Any, Dict, List
from pkgmgr.cli.context import CLIContext
from pkgmgr.core.command.run import run_command
from pkgmgr.core.repository.identifier import get_repo_identifier
from pkgmgr .cli .context import CLIContext
from pkgmgr .core .command .run import run_command
from pkgmgr .core .repository .identifier import get_repo_identifier
from pkgmgr .core .repository .dir import get_repo_dir
Repository = Dict[str, Any]
def _resolve_repository_path(repository: Repository, ctx: CLIContext) -> str:
"""
Resolve the filesystem path for a repository.
Priority:
1. Use explicit keys if present (directory / path / workspace / workspace_dir).
2. Fallback to get_repo_dir(...) using the repositories base directory
from the CLI context.
"""
# 1) Explicit path-like keys on the repository object
for key in ("directory", "path", "workspace", "workspace_dir"):
value = repository.get(key)
if value:
return value
# 2) Fallback: compute from base dir + repository metadata
base_dir = (
getattr(ctx, "repositories_base_dir", None)
or getattr(ctx, "repositories_dir", None)
)
if not base_dir:
raise RuntimeError(
"Cannot resolve repositories base directory from context; "
"expected ctx.repositories_base_dir or ctx.repositories_dir."
)
return get_repo_dir(base_dir, repository)
def handle_tools_command(
args,
ctx: CLIContext,
selected: List[Repository],
) -> None:
"""
Handle integration commands:
- explore (file manager)
- terminal (GNOME Terminal)
- code (VS Code workspace)
"""
# --------------------------------------------------------
# explore
# --------------------------------------------------------
# ------------------------------------------------------------------
# nautilus "explore" command
# ------------------------------------------------------------------
if args.command == "explore":
for repository in selected:
repo_path = _resolve_repository_path(repository, ctx)
run_command(
f"nautilus {repository['directory']} & disown"
f'nautilus "{repo_path}" & disown'
)
return
return
# --------------------------------------------------------
# terminal
# --------------------------------------------------------
# ------------------------------------------------------------------
# GNOME terminal command
# ------------------------------------------------------------------
if args.command == "terminal":
for repository in selected:
repo_path = _resolve_repository_path(repository, ctx)
run_command(
f'gnome-terminal --tab --working-directory="{repository["directory"]}"'
f'gnome-terminal --tab --working-directory="{repo_path}"'
)
return
return
# --------------------------------------------------------
# code
# --------------------------------------------------------
# ------------------------------------------------------------------
# VS Code workspace command
# ------------------------------------------------------------------
if args.command == "code":
if not selected:
print("No repositories selected.")
return
return
identifiers = [
get_repo_identifier(repo, ctx.all_repositories)
@@ -60,20 +87,25 @@ def handle_tools_command(
sorted_identifiers = sorted(identifiers)
workspace_name = "_".join(sorted_identifiers) + ".code-workspace"
directories_cfg = ctx.config_merged.get("directories") or {}
workspaces_dir = os.path.expanduser(
ctx.config_merged.get("directories").get("workspaces")
directories_cfg.get("workspaces", "~/Workspaces")
)
os.makedirs(workspaces_dir, exist_ok=True)
workspace_file = os.path.join(workspaces_dir, workspace_name)
folders = [{"path": repository["directory"]} for repository in selected]
folders = [
{"path": _resolve_repository_path(repository, ctx)}
for repository in selected
]
workspace_data = {
"folders": folders,
"settings": {},
}
if not os.path.exists(workspace_file):
with open(workspace_file, "w") as f:
with open(workspace_file, "w", encoding="utf-8") as f:
json.dump(workspace_data, f, indent=4)
print(f"Created workspace file: {workspace_file}")
else:

View File

@@ -1,113 +1,207 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Command resolver for repositories.
This module determines the correct command to expose via symlink.
It implements the following priority:
1. Explicit command in repo config → command
2. System package manager binary (/usr/...) → NO LINK (respect OS)
3. Nix profile binary (~/.nix-profile/bin/<id>) → command
4. Python / non-system console script on PATH → command
5. Fallback: repository's main.sh or main.py → command
6. If nothing is available → raise error
The actual symlink creation is handled by create_ink(). This resolver
only decides *what* should be used as the entrypoint, or whether no
link should be created at all.
"""
import os
import shutil
from typing import Optional
from typing import Optional, List, Dict, Any
def resolve_command_for_repo(repo, repo_identifier: str, repo_dir: str) -> Optional[str]:
Repository = Dict[str, Any]
def _is_executable(path: str) -> bool:
return os.path.exists(path) and os.access(path, os.X_OK)
def _find_python_package_root(repo_dir: str) -> Optional[str]:
"""
Determine the command for this repository.
Detect a Python src-layout package:
Returns:
str → path to the command (a symlink should be created)
None → do NOT create a link (e.g. system package already provides it)
repo_dir/src/<package>/__main__.py
On total failure (no suitable command found at any layer), this function
raises SystemExit with a descriptive error message.
Returns the directory containing __main__.py (e.g. ".../src/arc")
or None if no such structure exists.
"""
# ------------------------------------------------------------
# 1. Explicit command defined by repository config
# ------------------------------------------------------------
explicit = repo.get("command")
if explicit:
return explicit
src_dir = os.path.join(repo_dir, "src")
if not os.path.isdir(src_dir):
return None
for root, _dirs, files in os.walk(src_dir):
if "__main__.py" in files:
return root
return None
def _nix_binary_candidates(home: str, names: List[str]) -> List[str]:
"""
Build possible Nix profile binary paths for a list of candidate names.
"""
return [
os.path.join(home, ".nix-profile", "bin", name)
for name in names
if name
]
def _path_binary_candidates(names: List[str]) -> List[str]:
"""
Resolve candidate names via PATH using shutil.which.
Returns only existing, executable paths.
"""
binaries: List[str] = []
for name in names:
if not name:
continue
candidate = shutil.which(name)
if candidate and _is_executable(candidate):
binaries.append(candidate)
return binaries
def resolve_command_for_repo(
repo: Repository,
repo_identifier: str,
repo_dir: str,
) -> Optional[str]:
"""
Resolve the executable command for a repository.
Semantics:
----------
- If the repository explicitly defines the key "command" (even if None),
that is treated as authoritative and returned immediately.
This allows e.g.:
command: null
for pure library repositories with no CLI.
- If "command" is not defined, we try to discover a suitable CLI command:
1. Prefer already installed binaries (PATH, Nix profile).
2. For Python src-layout packages (src/*/__main__.py), try to infer
a sensible command name (alias, repo identifier, repository name,
package directory name) and resolve those via PATH / Nix.
3. For script-style repos, fall back to main.sh / main.py.
4. If nothing matches, return None (no CLI) instead of raising.
The caller can interpret:
- str → path to the command (symlink target)
- None → no CLI command for this repository
"""
# ------------------------------------------------------------------
# 1) Explicit command declaration (including explicit "no command")
# ------------------------------------------------------------------
if "command" in repo:
# May be a string path or None. None means: this repo intentionally
# has no CLI command and should not be resolved.
return repo.get("command")
home = os.path.expanduser("~")
def is_executable(path: str) -> bool:
return os.path.exists(path) and os.access(path, os.X_OK)
# ------------------------------------------------------------
# 2. System package manager binary via PATH
# ------------------------------------------------------------------
# 2) Collect candidate names for CLI binaries
#
# If the binary lives under /usr/, we treat it as a system-managed
# package (e.g. installed via pacman/apt/yum). In that case, pkgmgr
# does NOT create a link at all and defers entirely to the OS.
# ------------------------------------------------------------
path_candidate = shutil.which(repo_identifier)
# Order of preference:
# - repo_identifier (usually alias or configured id)
# - alias (if defined)
# - repository name (e.g. "analysis-ready-code")
# - python package name (e.g. "arc" from src/arc/__main__.py)
# ------------------------------------------------------------------
alias = repo.get("alias")
repository_name = repo.get("repository")
python_package_root = _find_python_package_root(repo_dir)
if python_package_root:
python_package_name = os.path.basename(python_package_root)
else:
python_package_name = None
candidate_names: List[str] = []
seen: set[str] = set()
for name in (
repo_identifier,
alias,
repository_name,
python_package_name,
):
if name and name not in seen:
seen.add(name)
candidate_names.append(name)
# ------------------------------------------------------------------
# 3) Try resolve via PATH (non-system and system) and Nix profile
# ------------------------------------------------------------------
# a) PATH binaries
path_binaries = _path_binary_candidates(candidate_names)
# b) Classify system (/usr/...) vs non-system
system_binary: Optional[str] = None
non_system_binary: Optional[str] = None
if path_candidate:
if path_candidate.startswith("/usr/"):
system_binary = path_candidate
for bin_path in path_binaries:
if bin_path.startswith("/usr"):
# Last system binary wins, but usually there is only one anyway
system_binary = bin_path
else:
non_system_binary = path_candidate
non_system_binary = bin_path
break # prefer the first non-system binary
# c) Nix profile binaries
nix_binaries = [
path for path in _nix_binary_candidates(home, candidate_names)
if _is_executable(path)
]
nix_binary = nix_binaries[0] if nix_binaries else None
# Decide priority:
# 1) non-system PATH binary (user/venv)
# 2) Nix profile binary
# 3) system binary (/usr/...) → only if we want to expose it
if non_system_binary:
return non_system_binary
if nix_binary:
return nix_binary
if system_binary:
# Respect system package manager: do not create a link.
if repo.get("debug", False):
# Respect system packages. Depending on your policy you can decide
# to return None (no symlink, OS owns the command) or to expose it.
# Here we choose: no symlink for pure system binaries.
if repo.get("ignore_system_binary", False):
print(
f"[pkgmgr] System binary for '{repo_identifier}' found at "
f"{system_binary}; no symlink will be created."
)
return None
# ------------------------------------------------------------
# 3. Nix profile binary (~/.nix-profile/bin/<identifier>)
# ------------------------------------------------------------
nix_candidate = os.path.join(home, ".nix-profile", "bin", repo_identifier)
if is_executable(nix_candidate):
return nix_candidate
# ------------------------------------------------------------
# 4. Python / non-system console script on PATH
#
# Here we reuse the non-system PATH candidate (e.g. from a venv or
# a user-local install like ~/.local/bin). This is treated as a
# valid command target.
# ------------------------------------------------------------
if non_system_binary and is_executable(non_system_binary):
return non_system_binary
# ------------------------------------------------------------
# 5. Fallback: main.sh / main.py inside the repository
# ------------------------------------------------------------
# ------------------------------------------------------------------
# 4) Script-style repository: fallback to main.sh / main.py
# ------------------------------------------------------------------
main_sh = os.path.join(repo_dir, "main.sh")
main_py = os.path.join(repo_dir, "main.py")
if is_executable(main_sh):
if _is_executable(main_sh):
return main_sh
if is_executable(main_py) or os.path.exists(main_py):
if os.path.exists(main_py):
return main_py
# ------------------------------------------------------------
# 6. Nothing found → treat as a hard error
# ------------------------------------------------------------
raise SystemExit(
f"No executable command could be resolved for repository '{repo_identifier}'. "
"No explicit 'command' configured, no system-managed binary under /usr/, "
"no Nix profile binary, no non-system console script on PATH, and no "
"main.sh/main.py found in the repository."
)
# ------------------------------------------------------------------
# 5) No CLI discovered
#
# At this point we may still have a Python package structure, but
# without any installed CLI entry point and without main.sh/main.py.
#
# This is perfectly valid for library-only repositories, so we do
# NOT treat this as an error. The caller can then decide to simply
# skip symlink creation.
# ------------------------------------------------------------------
if python_package_root:
print(
f"[INFO] Repository '{repo_identifier}' appears to be a Python "
f"package at '{python_package_root}' but no CLI entry point was "
f"found (PATH, Nix, main.sh/main.py). Treating it as a "
f"library-only repository with no command."
)
return None

View File

@@ -7,7 +7,7 @@ build-backend = "setuptools.build_meta"
[project]
name = "package-manager"
version = "0.7.12"
version = "0.8.0"
description = "Kevin's package-manager tool (pkgmgr)"
readme = "README.md"
requires-python = ">=3.11"

115
tests/e2e/test_clone_all.py Normal file
View File

@@ -0,0 +1,115 @@
"""
Integration test: clone all configured repositories using
--clone-mode https and --no-verification.
This test is intended to be run inside the Docker container where:
- network access is available,
- the config/config.yaml is present,
- and it is safe to perform real git operations.
It passes if the command completes without raising an exception.
"""
import runpy
import sys
import unittest
from test_install_pkgmgr_shallow import (
nix_profile_list_debug,
remove_pkgmgr_from_nix_profile,
pkgmgr_help_debug,
)
class TestIntegrationCloneAllHttps(unittest.TestCase):
def _run_pkgmgr_clone_all_https(self) -> None:
"""
Helper that runs the CLI command via main.py and provides
extra diagnostics if the command exits with a non-zero code.
Note:
The pkgmgr CLI may exit via SystemExit(0) on success
(e.g. when handled by the proxy layer). In that case we
treat the test as successful and do not raise.
"""
cmd_repr = "pkgmgr clone --all --clone-mode https --no-verification"
original_argv = sys.argv
try:
sys.argv = [
"pkgmgr",
"clone",
"--all",
"--clone-mode",
"https",
"--no-verification",
]
try:
# Execute main.py as if it was called from CLI.
# This will run the full clone pipeline inside the container.
runpy.run_module("main", run_name="__main__")
except SystemExit as exc:
# Determine the exit code (int or string)
exit_code = exc.code
if isinstance(exit_code, int):
numeric_code = exit_code
else:
try:
numeric_code = int(exit_code)
except (TypeError, ValueError):
numeric_code = None
# Treat SystemExit(0) as success (expected behavior)
if numeric_code == 0:
print(
"\n[TEST] pkgmgr clone --all finished with SystemExit(0); "
"treating as success."
)
return
# For non-zero exit codes: convert SystemExit into a more
# helpful assertion with debug output.
print("\n[TEST] pkgmgr clone --all failed with SystemExit")
print(f"[TEST] Command : {cmd_repr}")
print(f"[TEST] Exit code: {exit_code!r}")
# Additional Nix profile debug on failure (may still be useful
# if the clone step interacts with Nix-based tooling).
nix_profile_list_debug("ON FAILURE (AFTER SystemExit)")
raise AssertionError(
f"{cmd_repr!r} failed with exit code {exit_code!r}. "
"Scroll up to see the full pkgmgr/make output inside the container."
) from exc
finally:
sys.argv = original_argv
def test_clone_all_repositories_https(self) -> None:
"""
Run: pkgmgr clone --all --clone-mode https --no-verification
This will perform real git clone operations inside the container.
The test succeeds if no exception is raised and `pkgmgr --help`
works in a fresh interactive bash session afterwards.
"""
# Debug before cleanup (reusing the same helpers as the install test).
nix_profile_list_debug("BEFORE CLEANUP")
# Cleanup: aggressively try to drop any pkgmgr/profile entries
# (harmless for a pure clone test but keeps environments comparable).
remove_pkgmgr_from_nix_profile()
# Debug after cleanup
nix_profile_list_debug("AFTER CLEANUP")
# Run the actual clone with extended diagnostics
self._run_pkgmgr_clone_all_https()
# After successful clone: show `pkgmgr --help`
# via interactive bash (same helper as in the install test).
pkgmgr_help_debug()
if __name__ == "__main__":
unittest.main()

View File

@@ -0,0 +1,74 @@
"""
E2E/Integration tests for the tool-related subcommands' --help output.
We assert that calling:
- pkgmgr explore --help
- pkgmgr terminal --help
- pkgmgr code --help
completes successfully. For --help, argparse exits with SystemExit(0),
which we treat as success and suppress in the helper.
"""
from __future__ import annotations
import os
import runpy
import sys
import unittest
from typing import List
# Resolve project root (the repo where main.py lives, e.g. /src)
PROJECT_ROOT = os.path.abspath(
os.path.join(os.path.dirname(__file__), "..", "..")
)
MAIN_PATH = os.path.join(PROJECT_ROOT, "main.py")
def _run_main(argv: List[str]) -> None:
"""
Helper to run main.py with the given argv.
This mimics a "pkgmgr ..." invocation in the E2E container.
For --help invocations, argparse will call sys.exit(0), which raises
SystemExit(0). We treat this as success and only re-raise non-zero
exit codes.
"""
old_argv = sys.argv
try:
sys.argv = ["pkgmgr"] + argv
try:
runpy.run_path(MAIN_PATH, run_name="__main__")
except SystemExit as exc: # argparse uses this for --help
# SystemExit.code can be int, str or None; for our purposes:
code = exc.code
if code not in (0, None):
# Non-zero exit code -> real error.
raise
# For 0/None: treat as success and swallow the exception.
finally:
sys.argv = old_argv
class TestToolsHelp(unittest.TestCase):
"""
E2E/Integration tests for tool commands' --help screens.
"""
def test_explore_help(self) -> None:
"""Ensure `pkgmgr explore --help` runs successfully."""
_run_main(["explore", "--help"])
def test_terminal_help(self) -> None:
"""Ensure `pkgmgr terminal --help` runs successfully."""
_run_main(["terminal", "--help"])
def test_code_help(self) -> None:
"""Ensure `pkgmgr code --help` runs successfully."""
_run_main(["code", "--help"])
if __name__ == "__main__":
unittest.main()

View File

@@ -0,0 +1,93 @@
"""
Integration test: update all configured repositories using
--clone-mode https and --no-verification.
This test is intended to be run inside the Docker container where:
- network access is available,
- the config/config.yaml is present,
- and it is safe to perform real git operations.
It passes if the command completes without raising an exception.
"""
import runpy
import sys
import unittest
from test_install_pkgmgr_shallow import (
nix_profile_list_debug,
remove_pkgmgr_from_nix_profile,
pkgmgr_help_debug,
)
class TestIntegrationUpdateAllHttps(unittest.TestCase):
def _run_pkgmgr_update_all_https(self) -> None:
"""
Helper that runs the CLI command via main.py and provides
extra diagnostics if the command exits with a non-zero code.
"""
cmd_repr = "pkgmgr update --all --clone-mode https --no-verification"
original_argv = sys.argv
try:
sys.argv = [
"pkgmgr",
"update",
"--all",
"--clone-mode",
"https",
"--no-verification",
]
try:
# Execute main.py as if it was called from CLI.
# This will run the full update pipeline inside the container.
runpy.run_module("main", run_name="__main__")
except SystemExit as exc:
# Convert SystemExit into a more helpful assertion with debug output.
exit_code = exc.code if isinstance(exc.code, int) else str(exc.code)
print("\n[TEST] pkgmgr update --all failed with SystemExit")
print(f"[TEST] Command : {cmd_repr}")
print(f"[TEST] Exit code: {exit_code}")
# Additional Nix profile debug on failure (useful if any update
# step interacts with Nix-based tooling).
nix_profile_list_debug("ON FAILURE (AFTER SystemExit)")
raise AssertionError(
f"{cmd_repr!r} failed with exit code {exit_code}. "
"Scroll up to see the full pkgmgr/make output inside the container."
) from exc
finally:
sys.argv = original_argv
def test_update_all_repositories_https(self) -> None:
"""
Run: pkgmgr update --all --clone-mode https --no-verification
This will perform real git update operations inside the container.
The test succeeds if no exception is raised and `pkgmgr --help`
works in a fresh interactive bash session afterwards.
"""
# Debug before cleanup
nix_profile_list_debug("BEFORE CLEANUP")
# Cleanup: aggressively try to drop any pkgmgr/profile entries
# (keeps the environment comparable to other integration tests).
remove_pkgmgr_from_nix_profile()
# Debug after cleanup
nix_profile_list_debug("AFTER CLEANUP")
# Run the actual update with extended diagnostics
self._run_pkgmgr_update_all_https()
# After successful update: show `pkgmgr --help`
# via interactive bash (same helper as in the other integration tests).
pkgmgr_help_debug()
if __name__ == "__main__":
unittest.main()

View File

@@ -1,11 +1,14 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import os
import tempfile
import unittest
from unittest.mock import patch
import pkgmgr.actions.repository.install as install_module
from pkgmgr.actions.repository.install import install_repos
from pkgmgr.actions.repository.install.installers.base import BaseInstaller
import pkgmgr.actions.install as install_module
from pkgmgr.actions.install import install_repos
from pkgmgr.actions.install.installers.base import BaseInstaller
class DummyInstaller(BaseInstaller):
@@ -16,49 +19,52 @@ class DummyInstaller(BaseInstaller):
layer = None
def supports(self, ctx):
def supports(self, ctx): # type: ignore[override]
return True
def run(self, ctx):
def run(self, ctx): # type: ignore[override]
return
class TestInstallReposIntegration(unittest.TestCase):
@patch("pkgmgr.actions.repository.install.verify_repository")
@patch("pkgmgr.actions.repository.install.clone_repos")
@patch("pkgmgr.actions.repository.install.get_repo_dir")
@patch("pkgmgr.actions.repository.install.get_repo_identifier")
@patch("pkgmgr.actions.install.verify_repository")
@patch("pkgmgr.actions.install.clone_repos")
@patch("pkgmgr.actions.install.get_repo_dir")
@patch("pkgmgr.actions.install.get_repo_identifier")
def test_system_binary_vs_nix_binary(
self,
mock_get_repo_identifier,
mock_get_repo_dir,
mock_clone_repos,
mock_verify_repository,
):
) -> None:
"""
Full integration test for high-level command resolution + symlink creation.
Integration test:
We do NOT re-test all low-level file-system details of
resolve_command_for_repo here (that is covered by unit tests).
Instead, we assert that:
We do NOT re-test the low-level implementation details of
resolve_command_for_repo() here (that is covered by unit tests).
- If resolve_command_for_repo(...) returns None:
→ install_repos() does NOT create a symlink.
Instead, we assert the high-level behavior of install_repos() +
InstallationPipeline + create_ink():
- If resolve_command_for_repo(...) returns a path:
→ install_repos() creates exactly one symlink in bin_dir
* If resolve_command_for_repo(...) returns None:
→ install_repos() must NOT create a symlink for that repo.
* If resolve_command_for_repo(...) returns a path:
→ install_repos() must create exactly one symlink in bin_dir
that points to this path.
Concretely:
Concretely in this test:
- repo-system:
resolve_command_for_repo(...) → None
* repo-system:
fake resolver → returns None
→ no symlink in bin_dir for this repo.
- repo-nix:
resolve_command_for_repo(...) → "/nix/profile/bin/repo-nix"
* repo-nix:
fake resolver → returns "/nix/profile/bin/repo-nix"
→ exactly one symlink in bin_dir pointing to that path.
"""
# Repositories must have provider/account/repository so that get_repo_dir()
# does not crash when called from create_ink().
repo_system = {
@@ -77,9 +83,7 @@ class TestInstallReposIntegration(unittest.TestCase):
selected_repos = [repo_system, repo_nix]
all_repos = selected_repos
with tempfile.TemporaryDirectory() as tmp_base, \
tempfile.TemporaryDirectory() as tmp_bin:
with tempfile.TemporaryDirectory() as tmp_base, tempfile.TemporaryDirectory() as tmp_bin:
# Fake repo directories (what get_repo_dir will return)
repo_system_dir = os.path.join(tmp_base, "repo-system")
repo_nix_dir = os.path.join(tmp_base, "repo-nix")
@@ -97,11 +101,15 @@ class TestInstallReposIntegration(unittest.TestCase):
# Pretend this is the "Nix binary" path for repo-nix
nix_tool_path = "/nix/profile/bin/repo-nix"
# Patch resolve_command_for_repo at the install_repos module level
with patch("pkgmgr.actions.repository.install.resolve_command_for_repo") as mock_resolve, \
patch("pkgmgr.actions.repository.install.os.path.exists") as mock_exists_install:
# Patch resolve_command_for_repo at the *pipeline* module level,
# because InstallationPipeline imports it there.
with patch(
"pkgmgr.actions.install.pipeline.resolve_command_for_repo"
) as mock_resolve, patch(
"pkgmgr.actions.install.os.path.exists"
) as mock_exists_install:
def fake_resolve_command(repo, repo_identifier: str, repo_dir: str):
def fake_resolve(repo, repo_identifier: str, repo_dir: str):
"""
High-level behavior stub:
@@ -111,9 +119,10 @@ class TestInstallReposIntegration(unittest.TestCase):
- For repo-nix: act as if a Nix profile binary is the entrypoint
→ return nix_tool_path (symlink should be created).
"""
if repo_identifier == "repo-system":
name = repo.get("name")
if name == "repo-system":
return None
if repo_identifier == "repo-nix":
if name == "repo-nix":
return nix_tool_path
return None
@@ -126,7 +135,7 @@ class TestInstallReposIntegration(unittest.TestCase):
return True
return False
mock_resolve.side_effect = fake_resolve_command
mock_resolve.side_effect = fake_resolve
mock_exists_install.side_effect = fake_exists_install
# Use only DummyInstaller so we focus on link creation, not installer behavior

View File

@@ -1,6 +1,16 @@
# Capability Resolution & Installer Shadowing
## Layer Hierarchy
This document explains how `pkgmgr` decides **which installer should run** when multiple installation mechanisms are available in a repository.
It reflects the logic shown in the setup-controller diagram:
➡️ **Full graphical schema:** [https://s.veen.world/pkgmgrmp](https://s.veen.world/pkgmgrmp)
---
## Layer Hierarchy (Strength Order)
Installers are evaluated from **strongest to weakest**.
A stronger layer shadows all layers below it.
```
┌───────────────────────────┐ Highest layer
@@ -22,7 +32,24 @@
---
## Scenario Matrix
## Capability Matrix
Each layer provides a set of **capabilities**.
Layers that provide *all* capabilities of a lower layer **shadow** that layer.
| Capability | Makefile | Python | Nix | OS-Pkgs |
| -------------------- | -------- | ------------ | --- | ------- |
| `make-install` | ✔ | (optional) ✔ | ✔ | ✔ |
| `python-runtime` | | ✔ | ✔ | ✔ |
| `binary/cli` | | | ✔ | ✔ |
| `system-integration` | | | | ✔ |
✔ = capability available
= not provided by this layer
---
## Scenario Matrix (Expected Installer Execution)
| Scenario | Makefile | Python | Nix | OS-Pkgs | Test Name |
| -------------------------- | -------- | ------ | --- | ------- | ----------------------------- |
@@ -34,40 +61,41 @@
Legend:
✔ = installer runs
✗ = installer skipped (shadowed by upper layer)
= no such layer present
✗ = installer is skipped (shadowed)
= layer not present in this scenario
---
## What the Integration Test Confirms
**Goal:** Validate that the capability-shadowing mechanism correctly determines *which installers actually run* for a given repository layout.
The integration tests ensure that the **actual execution** matches the theoretical capability model.
### 1) Only Makefile
* Makefile provides `make-install`.
* No higher layers → MakefileInstaller runs.
* Only `Makefile` present
→ MakefileInstaller runs.
### 2) Python + Makefile
* Python provides `python-runtime`.
* Makefile additionally provides `make-install`.
* No capability overlap → both installers run.
* Python provides `python-runtime`
* Makefile provides `make-install`
→ Both run (capabilities are disjoint).
### 3) Python shadows Makefile
* Python also provides `make-install`.
* Makefiles capability is fully covered → MakefileInstaller is skipped.
* Python additionally advertises `make-install`
→ MakefileInstaller is skipped.
### 4) Nix shadows Python & Makefile
* Nix provides all capabilities below it.
* Only NixInstaller runs.
* Nix provides: `python-runtime` + `make-install`
→ PythonInstaller and MakefileInstaller are skipped.
→ Only NixInstaller runs.
### 5) OS-Packages shadow all
### 5) OS-Pkg layer shadows all
* PKGBUILD/debian/rpm provide all capabilities.
* Only the corresponding OS package installer runs.
* OS packages provide all capabilities
Only OS installer runs.
---
@@ -111,6 +139,14 @@ Legend:
---
## Core Principle (one sentence)
## Core Principle
**A layer only executes if it provides at least one capability not already guaranteed by any higher layer.**
**A layer is executed only if it contributes at least one capability that no stronger layer has already provided.**
---
## Link to the Setup Controller Diagram
The full visual schema is available here:
➡️ **[https://s.veen.world/pkgmgrmp](https://s.veen.world/pkgmgrmp)**

View File

@@ -2,140 +2,99 @@
# -*- coding: utf-8 -*-
"""
Integration tests for the recursive / layered capability handling in pkgmgr.
Integration tests for recursive capability resolution and installer shadowing.
We focus on the interaction between:
These tests verify that, given different repository layouts (Makefile, pyproject,
flake.nix, PKGBUILD), only the expected installers are executed based on the
capabilities provided by higher layers.
- MakefileInstaller (layer: "makefile")
- PythonInstaller (layer: "python")
- NixFlakeInstaller (layer: "nix")
- ArchPkgbuildInstaller (layer: "os-packages")
Layer order (strongest → weakest):
The core idea:
- Each installer declares logical capabilities for its layer via
discover_capabilities() and the global CAPABILITY_MATCHERS.
- install_repos() tracks which capabilities have already been provided
by earlier installers (in INSTALLERS order).
- If an installer only provides capabilities that are already covered
by previous installers, it is skipped.
These tests use *real* capability detection (based on repo files like
flake.nix, pyproject.toml, Makefile, PKGBUILD), but patch the installers'
run() methods so that no real external commands are executed.
Scenarios:
1. Only Makefile with install target
→ MakefileInstaller runs, all good.
2. Python + Makefile (no "make install" in pyproject.toml)
→ PythonInstaller provides only python-runtime
→ MakefileInstaller provides make-install
→ Both run, since their capabilities are disjoint.
3. Python + Makefile (pyproject.toml mentions "make install")
→ PythonInstaller provides {python-runtime, make-install}
→ MakefileInstaller provides {make-install}
→ MakefileInstaller is skipped (capabilities already covered).
4. Nix + Python + Makefile
- flake.nix hints:
* buildPythonApplication (python-runtime)
* make install (make-install)
→ NixFlakeInstaller provides {python-runtime, make-install, nix-flake}
→ PythonInstaller and MakefileInstaller are skipped.
5. OS packages + Nix + Python + Makefile
- PKGBUILD contains:
* "pip install ." (python-runtime via os-packages)
* "make install" (make-install via os-packages)
* "nix profile" (nix-flake via os-packages)
→ ArchPkgbuildInstaller provides all capabilities
→ All lower layers are skipped.
OS-PACKAGES > NIX > PYTHON > MAKEFILE
"""
import os
import shutil
import tempfile
import unittest
from typing import List, Sequence, Tuple
from unittest.mock import patch
import pkgmgr.actions.repository.install as install_mod
from pkgmgr.actions.repository.install import install_repos
from pkgmgr.actions.repository.install.installers.nix_flake import NixFlakeInstaller
from pkgmgr.actions.repository.install.installers.python import PythonInstaller
from pkgmgr.actions.repository.install.installers.makefile import MakefileInstaller
from pkgmgr.actions.repository.install.installers.os_packages.arch_pkgbuild import ArchPkgbuildInstaller
import pkgmgr.actions.install as install_mod
from pkgmgr.actions.install import install_repos
from pkgmgr.actions.install.installers.makefile import MakefileInstaller
from pkgmgr.actions.install.installers.nix_flake import NixFlakeInstaller
from pkgmgr.actions.install.installers.os_packages.arch_pkgbuild import (
ArchPkgbuildInstaller,
)
from pkgmgr.actions.install.installers.python import PythonInstaller
InstallerSpec = Tuple[str, object]
class TestRecursiveCapabilitiesIntegration(unittest.TestCase):
def setUp(self) -> None:
# Temporary base directory for this test class
self.tmp_root = tempfile.mkdtemp(prefix="pkgmgr-integration-")
self.tmp_root = tempfile.mkdtemp(prefix="pkgmgr-recursive-caps-")
self.bin_dir = os.path.join(self.tmp_root, "bin")
os.makedirs(self.bin_dir, exist_ok=True)
def tearDown(self) -> None:
shutil.rmtree(self.tmp_root)
# ------------------------------------------------------------------
# Helper: create a new repo directory for a scenario
# ------------------------------------------------------------------
# ------------------------------------------------------------------ helpers
def _new_repo(self) -> str:
repo_dir = tempfile.mkdtemp(prefix="repo-", dir=self.tmp_root)
return repo_dir
# ------------------------------------------------------------------
# Helper: run install_repos() with a custom installer list
# and record which installers actually ran.
# ------------------------------------------------------------------
def _run_with_installers(self, repo_dir: str, installers, selected_repos=None):
"""
Run install_repos() with a given INSTALLERS list and a single
dummy repo; return the list of installer labels that actually ran.
Create a fresh temporary repo directory under self.tmp_root.
"""
return tempfile.mkdtemp(prefix="repo-", dir=self.tmp_root)
The installers' supports() are forced to True so that only the
capability-shadowing logic decides whether they are skipped.
The installers' run() methods are patched to avoid real commands.
def _run_with_installers(
self,
repo_dir: str,
installers: Sequence[InstallerSpec],
selected_repos=None,
) -> List[str]:
"""
Run install_repos() with a custom INSTALLERS list and capture which
installer labels actually run.
NOTE:
We patch resolve_command_for_repo() to always return a dummy
command path so that command resolution does not interfere with
capability-layering tests.
We override each installer's supports() to always return True and
override run() to append its label to called_installers.
"""
if selected_repos is None:
repo = {}
repo = {"repository": "dummy"}
selected_repos = [repo]
all_repos = [repo]
else:
all_repos = selected_repos
called_installers: list[str] = []
called_installers: List[str] = []
# Prepare patched instances with recording run() and always-supports.
patched_installers = []
for label, inst in installers:
def always_supports(self, ctx):
return True
def make_run(label_name):
def make_run(label_name: str):
def _run(self, ctx):
called_installers.append(label_name)
return _run
inst.supports = always_supports.__get__(inst, inst.__class__)
inst.run = make_run(label).__get__(inst, inst.__class__)
inst.supports = always_supports.__get__(inst, inst.__class__) # type: ignore[assignment]
inst.run = make_run(label).__get__(inst, inst.__class__) # type: ignore[assignment]
patched_installers.append(inst)
with patch.object(install_mod, "INSTALLERS", patched_installers), \
patch.object(install_mod, "get_repo_identifier", return_value="dummy-repo"), \
patch.object(install_mod, "get_repo_dir", return_value=repo_dir), \
patch.object(install_mod, "verify_repository", return_value=(True, [], None, None)), \
patch.object(install_mod, "create_ink"), \
patch.object(install_mod, "clone_repos"), \
patch.object(install_mod, "resolve_command_for_repo", return_value="/bin/dummy"):
with patch.object(install_mod, "INSTALLERS", patched_installers), patch.object(
install_mod, "get_repo_identifier", return_value="dummy-repo"
), patch.object(
install_mod, "get_repo_dir", return_value=repo_dir
), patch.object(
install_mod, "verify_repository", return_value=(True, [], None, None)
), patch.object(
install_mod, "clone_repos"
):
install_repos(
selected_repos=selected_repos,
repositories_base_dir=self.tmp_root,
@@ -144,25 +103,25 @@ class TestRecursiveCapabilitiesIntegration(unittest.TestCase):
no_verification=True,
preview=False,
quiet=False,
clone_mode="shallow",
clone_mode="ssh",
update_dependencies=False,
)
return called_installers
# ----------------------------------------------------------------- scenarios
# ------------------------------------------------------------------
# Scenario 1: Only Makefile with install target
# ------------------------------------------------------------------
def test_only_makefile_installer_runs(self) -> None:
"""
With only a Makefile present, only the MakefileInstaller should run.
"""
repo_dir = self._new_repo()
# Makefile: detect a real 'install' target for makefile layer.
with open(os.path.join(repo_dir, "Makefile"), "w", encoding="utf-8") as f:
f.write("install:\n\t@echo 'installing from Makefile'\n")
f.write("install:\n\t@echo 'make install'\n")
mk_inst = MakefileInstaller()
installers = [("makefile", mk_inst)]
installers: Sequence[InstallerSpec] = [("makefile", mk_inst)]
called = self._run_with_installers(repo_dir, installers)
@@ -172,110 +131,85 @@ class TestRecursiveCapabilitiesIntegration(unittest.TestCase):
"With only a Makefile, the MakefileInstaller should run exactly once.",
)
# ------------------------------------------------------------------
# Scenario 2: Python + Makefile, but pyproject.toml does NOT mention 'make install'
# → capabilities are disjoint, both installers should run.
# ------------------------------------------------------------------
def test_python_and_makefile_both_run_when_caps_disjoint(self) -> None:
"""
If Python and Makefile have disjoint capabilities, both installers run.
"""
repo_dir = self._new_repo()
# pyproject.toml: basic Python project, no 'make install' string.
# pyproject.toml without any explicit "make install" hint
with open(os.path.join(repo_dir, "pyproject.toml"), "w", encoding="utf-8") as f:
f.write(
"[project]\n"
"name = 'dummy'\n"
)
f.write("name = 'dummy'\n")
# Makefile: install target for makefile layer.
with open(os.path.join(repo_dir, "Makefile"), "w", encoding="utf-8") as f:
f.write("install:\n\t@echo 'installing from Makefile'\n")
f.write("install:\n\t@echo 'make install'\n")
py_inst = PythonInstaller()
mk_inst = MakefileInstaller()
# Order: Python first, then Makefile
installers = [
installers: Sequence[InstallerSpec] = [
("python", py_inst),
("makefile", mk_inst),
]
called = self._run_with_installers(repo_dir, installers)
# Both should have run because:
# - Python provides {python-runtime}
# - Makefile provides {make-install}
self.assertEqual(
called,
["python", "makefile"],
"PythonInstaller and MakefileInstaller should both run when their capabilities are disjoint.",
"PythonInstaller and MakefileInstaller should both run when their "
"capabilities are disjoint.",
)
# ------------------------------------------------------------------
# Scenario 3: Python + Makefile, pyproject.toml mentions 'make install'
# → PythonInstaller provides {python-runtime, make-install}
# MakefileInstaller only {make-install}
# → MakefileInstaller must be skipped.
# ------------------------------------------------------------------
def test_python_shadows_makefile_when_pyproject_mentions_make_install(self) -> None:
"""
If the Python layer advertises a 'make-install' capability (pyproject
explicitly hints at 'make install'), the Makefile layer must be skipped.
"""
repo_dir = self._new_repo()
# pyproject.toml: Python project with 'make install' hint.
with open(os.path.join(repo_dir, "pyproject.toml"), "w", encoding="utf-8") as f:
f.write(
"[project]\n"
"name = 'dummy'\n"
"\n"
"# Hint for MakeInstallCapability on layer 'python'\n"
"make install\n"
)
# Makefile: install target, but should be shadowed by Python.
with open(os.path.join(repo_dir, "Makefile"), "w", encoding="utf-8") as f:
f.write("install:\n\t@echo 'installing from Makefile'\n")
f.write("install:\n\t@echo 'make install'\n")
py_inst = PythonInstaller()
mk_inst = MakefileInstaller()
installers = [
installers: Sequence[InstallerSpec] = [
("python", py_inst),
("makefile", mk_inst),
]
called = self._run_with_installers(repo_dir, installers)
# Python should run, Makefile should be skipped because its only
# capability (make-install) is already provided by Python.
self.assertIn("python", called, "PythonInstaller should have run.")
self.assertNotIn(
"makefile",
called,
"MakefileInstaller should be skipped because its 'make-install' capability "
"is already provided by Python.",
"MakefileInstaller should be skipped because its 'make-install' "
"capability is already provided by Python.",
)
# ------------------------------------------------------------------
# Scenario 4: Nix + Python + Makefile
# flake.nix provides python-runtime + make-install + nix-flake
# → Nix shadows both Python and Makefile.
# ------------------------------------------------------------------
def test_nix_shadows_python_and_makefile(self) -> None:
"""
If a Nix flake advertises both python-runtime and make-install
capabilities, Python and Makefile installers must be skipped.
"""
repo_dir = self._new_repo()
# pyproject.toml: generic Python project
with open(os.path.join(repo_dir, "pyproject.toml"), "w", encoding="utf-8") as f:
f.write(
"[project]\n"
"name = 'dummy'\n"
)
f.write("name = 'dummy'\n")
# Makefile: install target
with open(os.path.join(repo_dir, "Makefile"), "w", encoding="utf-8") as f:
f.write("install:\n\t@echo 'installing from Makefile'\n")
f.write("install:\n\t@echo 'make install'\n")
# flake.nix: hints for both python-runtime and make-install on layer 'nix'
with open(os.path.join(repo_dir, "flake.nix"), "w", encoding="utf-8") as f:
f.write(
"{\n"
' description = "integration test flake";\n'
"}\n"
"\n"
@@ -289,8 +223,7 @@ class TestRecursiveCapabilitiesIntegration(unittest.TestCase):
nix_inst = NixFlakeInstaller()
py_inst = PythonInstaller()
mk_inst = MakefileInstaller()
installers = [
installers: Sequence[InstallerSpec] = [
("nix", nix_inst),
("python", py_inst),
("makefile", mk_inst),
@@ -298,47 +231,35 @@ class TestRecursiveCapabilitiesIntegration(unittest.TestCase):
called = self._run_with_installers(repo_dir, installers)
# Nix must run, Python and Makefile must be skipped:
# - Nix provides {python-runtime, make-install, nix-flake}
# - Python provides {python-runtime}
# - Makefile provides {make-install}
self.assertIn("nix", called, "NixFlakeInstaller should have run.")
self.assertNotIn(
"python",
called,
"PythonInstaller should be skipped because its python-runtime capability "
"is already provided by Nix.",
"PythonInstaller should be skipped because its python-runtime "
"capability is already provided by Nix.",
)
self.assertNotIn(
"makefile",
called,
"MakefileInstaller should be skipped because its make-install capability "
"is already provided by Nix.",
"MakefileInstaller should be skipped because its make-install "
"capability is already provided by Nix.",
)
# ------------------------------------------------------------------
# Scenario 5: OS packages + Nix + Python + Makefile
# PKGBUILD provides python-runtime + make-install + nix-flake
# → ArchPkgbuildInstaller shadows everything below.
# ------------------------------------------------------------------
def test_os_packages_shadow_nix_python_and_makefile(self) -> None:
"""
If an OS package layer (PKGBUILD) advertises all capabilities,
all lower layers (Nix, Python, Makefile) must be skipped.
"""
repo_dir = self._new_repo()
# pyproject.toml: enough to signal a Python project
with open(os.path.join(repo_dir, "pyproject.toml"), "w", encoding="utf-8") as f:
f.write(
"[project]\n"
"name = 'dummy'\n"
)
f.write("name = 'dummy'\n")
# Makefile: install target
with open(os.path.join(repo_dir, "Makefile"), "w", encoding="utf-8") as f:
f.write("install:\n\t@echo 'installing from Makefile'\n")
f.write("install:\n\t@echo 'make install'\n")
# flake.nix: as before
with open(os.path.join(repo_dir, "flake.nix"), "w", encoding="utf-8") as f:
f.write(
"{\n"
' description = "integration test flake";\n'
"}\n"
"\n"
@@ -346,13 +267,8 @@ class TestRecursiveCapabilitiesIntegration(unittest.TestCase):
"make install\n"
)
# PKGBUILD: contains patterns for all three capabilities on layer 'os-packages':
# - "pip install ." → python-runtime
# - "make install" → make-install
# - "nix profile" → nix-flake
with open(os.path.join(repo_dir, "PKGBUILD"), "w", encoding="utf-8") as f:
f.write(
"pkgname=dummy\n"
"pkgver=0.1\n"
"pkgrel=1\n"
"pkgdesc='dummy pkg for integration test'\n"
@@ -376,8 +292,7 @@ class TestRecursiveCapabilitiesIntegration(unittest.TestCase):
nix_inst = NixFlakeInstaller()
py_inst = PythonInstaller()
mk_inst = MakefileInstaller()
installers = [
installers: Sequence[InstallerSpec] = [
("os-packages", os_inst),
("nix", nix_inst),
("python", py_inst),
@@ -386,11 +301,6 @@ class TestRecursiveCapabilitiesIntegration(unittest.TestCase):
called = self._run_with_installers(repo_dir, installers)
# ArchPkgbuildInstaller must run, and everything below must be skipped:
# - os-packages provides {python-runtime, make-install, nix-flake}
# - nix provides {python-runtime, make-install, nix-flake}
# - python provides {python-runtime}
# - makefile provides {make-install}
self.assertIn("os-packages", called, "ArchPkgbuildInstaller should have run.")
self.assertNotIn(
"nix",

View File

@@ -4,8 +4,8 @@ import os
import unittest
from unittest.mock import patch
from pkgmgr.actions.repository.install.context import RepoContext
from pkgmgr.actions.repository.install.installers.os_packages.arch_pkgbuild import ArchPkgbuildInstaller
from pkgmgr.actions.install.context import RepoContext
from pkgmgr.actions.install.installers.os_packages.arch_pkgbuild import ArchPkgbuildInstaller
class TestArchPkgbuildInstaller(unittest.TestCase):
@@ -26,7 +26,7 @@ class TestArchPkgbuildInstaller(unittest.TestCase):
)
self.installer = ArchPkgbuildInstaller()
@patch("pkgmgr.actions.repository.install.installers.os_packages.arch_pkgbuild.os.geteuid", return_value=1000)
@patch("pkgmgr.actions.install.installers.os_packages.arch_pkgbuild.os.geteuid", return_value=1000)
@patch("os.path.exists", return_value=True)
@patch("shutil.which")
def test_supports_true_when_tools_and_pkgbuild_exist(
@@ -46,7 +46,7 @@ class TestArchPkgbuildInstaller(unittest.TestCase):
self.assertIn("makepkg", calls)
mock_exists.assert_called_with(os.path.join(self.ctx.repo_dir, "PKGBUILD"))
@patch("pkgmgr.actions.repository.install.installers.os_packages.arch_pkgbuild.os.geteuid", return_value=0)
@patch("pkgmgr.actions.install.installers.os_packages.arch_pkgbuild.os.geteuid", return_value=0)
@patch("os.path.exists", return_value=True)
@patch("shutil.which")
def test_supports_false_when_running_as_root(
@@ -55,7 +55,7 @@ class TestArchPkgbuildInstaller(unittest.TestCase):
mock_which.return_value = "/usr/bin/pacman"
self.assertFalse(self.installer.supports(self.ctx))
@patch("pkgmgr.actions.repository.install.installers.os_packages.arch_pkgbuild.os.geteuid", return_value=1000)
@patch("pkgmgr.actions.install.installers.os_packages.arch_pkgbuild.os.geteuid", return_value=1000)
@patch("os.path.exists", return_value=False)
@patch("shutil.which")
def test_supports_false_when_pkgbuild_missing(
@@ -64,8 +64,8 @@ class TestArchPkgbuildInstaller(unittest.TestCase):
mock_which.return_value = "/usr/bin/pacman"
self.assertFalse(self.installer.supports(self.ctx))
@patch("pkgmgr.actions.repository.install.installers.os_packages.arch_pkgbuild.run_command")
@patch("pkgmgr.actions.repository.install.installers.os_packages.arch_pkgbuild.os.geteuid", return_value=1000)
@patch("pkgmgr.actions.install.installers.os_packages.arch_pkgbuild.run_command")
@patch("pkgmgr.actions.install.installers.os_packages.arch_pkgbuild.os.geteuid", return_value=1000)
@patch("os.path.exists", return_value=True)
@patch("shutil.which")
def test_run_builds_and_installs_with_makepkg(

View File

@@ -1,8 +1,8 @@
import unittest
from unittest.mock import patch
from pkgmgr.actions.repository.install.context import RepoContext
from pkgmgr.actions.repository.install.installers.os_packages.debian_control import (
from pkgmgr.actions.install.context import RepoContext
from pkgmgr.actions.install.installers.os_packages.debian_control import (
DebianControlInstaller,
)
@@ -44,7 +44,7 @@ class TestDebianControlInstaller(unittest.TestCase):
self.assertFalse(self.installer.supports(self.ctx))
@patch(
"pkgmgr.actions.repository.install.installers.os_packages.debian_control.run_command"
"pkgmgr.actions.install.installers.os_packages.debian_control.run_command"
)
@patch("glob.glob", return_value=["/tmp/package-manager_0.1.1_all.deb"])
@patch("os.path.exists", return_value=True)

View File

@@ -1,8 +1,8 @@
import unittest
from unittest.mock import patch
from pkgmgr.actions.repository.install.context import RepoContext
from pkgmgr.actions.repository.install.installers.os_packages.rpm_spec import (
from pkgmgr.actions.install.context import RepoContext
from pkgmgr.actions.install.installers.os_packages.rpm_spec import (
RpmSpecInstaller,
)
@@ -57,7 +57,7 @@ class TestRpmSpecInstaller(unittest.TestCase):
self.assertFalse(self.installer.supports(self.ctx))
@patch.object(RpmSpecInstaller, "_prepare_source_tarball")
@patch("pkgmgr.actions.repository.install.installers.os_packages.rpm_spec.run_command")
@patch("pkgmgr.actions.install.installers.os_packages.rpm_spec.run_command")
@patch("glob.glob")
@patch("shutil.which")
def test_run_builds_and_installs_rpms(

View File

@@ -1,8 +1,8 @@
# tests/unit/pkgmgr/installers/test_base.py
import unittest
from pkgmgr.actions.repository.install.installers.base import BaseInstaller
from pkgmgr.actions.repository.install.context import RepoContext
from pkgmgr.actions.install.installers.base import BaseInstaller
from pkgmgr.actions.install.context import RepoContext
class DummyInstaller(BaseInstaller):

View File

@@ -4,8 +4,8 @@ import os
import unittest
from unittest.mock import patch, mock_open
from pkgmgr.actions.repository.install.context import RepoContext
from pkgmgr.actions.repository.install.installers.makefile import MakefileInstaller
from pkgmgr.actions.install.context import RepoContext
from pkgmgr.actions.install.installers.makefile import MakefileInstaller
class TestMakefileInstaller(unittest.TestCase):
@@ -26,16 +26,16 @@ class TestMakefileInstaller(unittest.TestCase):
)
self.installer = MakefileInstaller()
@patch("os.path.exists", return_value=True)
def test_supports_true_when_makefile_exists(self, mock_exists):
self.assertTrue(self.installer.supports(self.ctx))
mock_exists.assert_called_with(os.path.join(self.ctx.repo_dir, "Makefile"))
# @patch("os.path.exists", return_value=True)
# def test_supports_true_when_makefile_exists(self, mock_exists):
# self.assertTrue(self.installer.supports(self.ctx))
# mock_exists.assert_called_with(os.path.join(self.ctx.repo_dir, "Makefile"))
@patch("os.path.exists", return_value=False)
def test_supports_false_when_makefile_missing(self, mock_exists):
self.assertFalse(self.installer.supports(self.ctx))
@patch("pkgmgr.actions.repository.install.installers.makefile.run_command")
@patch("pkgmgr.actions.install.installers.makefile.run_command")
@patch(
"builtins.open",
new_callable=mock_open,
@@ -62,7 +62,7 @@ class TestMakefileInstaller(unittest.TestCase):
self.ctx.repo_dir,
)
@patch("pkgmgr.actions.repository.install.installers.makefile.run_command")
@patch("pkgmgr.actions.install.installers.makefile.run_command")
@patch(
"builtins.open",
new_callable=mock_open,

View File

@@ -1,18 +1,22 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import os
import unittest
from unittest import mock
from unittest.mock import patch
from unittest.mock import MagicMock, patch
from pkgmgr.actions.repository.install.context import RepoContext
from pkgmgr.actions.repository.install.installers.nix_flake import NixFlakeInstaller
from pkgmgr.actions.install.context import RepoContext
from pkgmgr.actions.install.installers.nix_flake import NixFlakeInstaller
class TestNixFlakeInstaller(unittest.TestCase):
def setUp(self):
self.repo = {"name": "test-repo"}
def setUp(self) -> None:
self.repo = {"repository": "package-manager"}
# Important: identifier "pkgmgr" triggers both "pkgmgr" and "default"
self.ctx = RepoContext(
repo=self.repo,
identifier="test-id",
identifier="pkgmgr",
repo_dir="/tmp/repo",
repositories_base_dir="/tmp",
bin_dir="/bin",
@@ -25,99 +29,104 @@ class TestNixFlakeInstaller(unittest.TestCase):
)
self.installer = NixFlakeInstaller()
@patch("shutil.which", return_value="/usr/bin/nix")
@patch("os.path.exists", return_value=True)
def test_supports_true_when_nix_and_flake_exist(self, mock_exists, mock_which):
"""
supports() should return True when:
- nix is available,
- flake.nix exists in the repo,
- and we are not inside a Nix dev shell.
"""
with patch.dict(os.environ, {"IN_NIX_SHELL": ""}, clear=False):
@patch("pkgmgr.actions.install.installers.nix_flake.os.path.exists")
@patch("pkgmgr.actions.install.installers.nix_flake.shutil.which")
def test_supports_true_when_nix_and_flake_exist(
self,
mock_which: MagicMock,
mock_exists: MagicMock,
) -> None:
mock_which.return_value = "/usr/bin/nix"
mock_exists.return_value = True
with patch.dict(os.environ, {"PKGMGR_DISABLE_NIX_FLAKE_INSTALLER": ""}, clear=False):
self.assertTrue(self.installer.supports(self.ctx))
mock_which.assert_called_with("nix")
mock_exists.assert_called_with(os.path.join(self.ctx.repo_dir, "flake.nix"))
mock_which.assert_called_once_with("nix")
mock_exists.assert_called_once_with(
os.path.join(self.ctx.repo_dir, self.installer.FLAKE_FILE)
)
@patch("shutil.which", return_value=None)
@patch("os.path.exists", return_value=True)
def test_supports_false_when_nix_missing(self, mock_exists, mock_which):
"""
supports() should return False if nix is not available,
even if a flake.nix file exists.
"""
with patch.dict(os.environ, {"IN_NIX_SHELL": ""}, clear=False):
@patch("pkgmgr.actions.install.installers.nix_flake.os.path.exists")
@patch("pkgmgr.actions.install.installers.nix_flake.shutil.which")
def test_supports_false_when_nix_missing(
self,
mock_which: MagicMock,
mock_exists: MagicMock,
) -> None:
mock_which.return_value = None
mock_exists.return_value = True # flake exists but nix is missing
with patch.dict(os.environ, {"PKGMGR_DISABLE_NIX_FLAKE_INSTALLER": ""}, clear=False):
self.assertFalse(self.installer.supports(self.ctx))
@patch("os.path.exists", return_value=True)
@patch("shutil.which", return_value="/usr/bin/nix")
@mock.patch("pkgmgr.actions.repository.install.installers.nix_flake.run_command")
@patch("pkgmgr.actions.install.installers.nix_flake.os.path.exists")
@patch("pkgmgr.actions.install.installers.nix_flake.shutil.which")
def test_supports_false_when_disabled_via_env(
self,
mock_which: MagicMock,
mock_exists: MagicMock,
) -> None:
mock_which.return_value = "/usr/bin/nix"
mock_exists.return_value = True
with patch.dict(
os.environ,
{"PKGMGR_DISABLE_NIX_FLAKE_INSTALLER": "1"},
clear=False,
):
self.assertFalse(self.installer.supports(self.ctx))
@patch("pkgmgr.actions.install.installers.nix_flake.NixFlakeInstaller.supports")
@patch("pkgmgr.actions.install.installers.nix_flake.run_command")
def test_run_removes_old_profile_and_installs_outputs(
self,
mock_run_command,
mock_which,
mock_exists,
):
mock_run_command: MagicMock,
mock_supports: MagicMock,
) -> None:
"""
run() should:
1. attempt to remove the old 'package-manager' profile entry, and
2. install both 'pkgmgr' and 'default' flake outputs.
- remove the old profile
- install both 'pkgmgr' and 'default' outputs for identifier 'pkgmgr'
- call commands in the correct order
"""
mock_supports.return_value = True
cmds = []
commands: list[str] = []
def side_effect(cmd, cwd=None, preview=False, *args, **kwargs):
cmds.append(cmd)
return None
def side_effect(cmd: str, cwd: str | None = None, preview: bool = False, **_: object) -> None:
commands.append(cmd)
mock_run_command.side_effect = side_effect
# Simulate a normal environment (not inside nix develop, installer enabled).
with patch.dict(
os.environ,
{"IN_NIX_SHELL": "", "PKGMGR_DISABLE_NIX_FLAKE_INSTALLER": ""},
clear=False,
):
with patch.dict(os.environ, {"PKGMGR_DISABLE_NIX_FLAKE_INSTALLER": ""}, clear=False):
self.installer.run(self.ctx)
remove_cmd = f"nix profile remove {self.installer.PROFILE_NAME} || true"
install_pkgmgr_cmd = f"nix profile install {self.ctx.repo_dir}#pkgmgr"
install_default_cmd = f"nix profile install {self.ctx.repo_dir}#default"
# At least these three commands must have been issued.
self.assertIn(remove_cmd, cmds)
self.assertIn(install_pkgmgr_cmd, cmds)
self.assertIn(install_default_cmd, cmds)
self.assertIn(remove_cmd, commands)
self.assertIn(install_pkgmgr_cmd, commands)
self.assertIn(install_default_cmd, commands)
# Optional: ensure the remove call came first.
self.assertEqual(cmds[0], remove_cmd)
self.assertEqual(commands[0], remove_cmd)
@patch("shutil.which", return_value="/usr/bin/nix")
@mock.patch("pkgmgr.actions.repository.install.installers.nix_flake.run_command")
@patch("pkgmgr.actions.install.installers.nix_flake.shutil.which")
@patch("pkgmgr.actions.install.installers.nix_flake.run_command")
def test_ensure_old_profile_removed_ignores_systemexit(
self,
mock_run_command,
mock_which,
):
"""
_ensure_old_profile_removed() must not propagate SystemExit, even if
'nix profile remove' fails (e.g. profile entry does not exist).
"""
mock_run_command: MagicMock,
mock_which: MagicMock,
) -> None:
mock_which.return_value = "/usr/bin/nix"
def side_effect(cmd, cwd=None, preview=False, *args, **kwargs):
def side_effect(cmd: str, cwd: str | None = None, preview: bool = False, **_: object) -> None:
raise SystemExit(1)
mock_run_command.side_effect = side_effect
with patch.dict(
os.environ,
{"IN_NIX_SHELL": "", "PKGMGR_DISABLE_NIX_FLAKE_INSTALLER": ""},
clear=False,
):
# Should not raise, SystemExit is swallowed internally.
self.installer._ensure_old_profile_removed(self.ctx)
self.installer._ensure_old_profile_removed(self.ctx)
remove_cmd = f"nix profile remove {self.installer.PROFILE_NAME} || true"
mock_run_command.assert_called_with(

View File

@@ -2,8 +2,8 @@ import os
import unittest
from unittest.mock import patch
from pkgmgr.actions.repository.install.context import RepoContext
from pkgmgr.actions.repository.install.installers.python import PythonInstaller
from pkgmgr.actions.install.context import RepoContext
from pkgmgr.actions.install.installers.python import PythonInstaller
class TestPythonInstaller(unittest.TestCase):
@@ -41,7 +41,7 @@ class TestPythonInstaller(unittest.TestCase):
with patch.dict(os.environ, {"IN_NIX_SHELL": ""}, clear=False):
self.assertFalse(self.installer.supports(self.ctx))
@patch("pkgmgr.actions.repository.install.installers.python.run_command")
@patch("pkgmgr.actions.install.installers.python.run_command")
@patch("os.path.exists", side_effect=lambda path: path.endswith("pyproject.toml"))
def test_run_installs_project_from_pyproject(self, mock_exists, mock_run_command):
"""

View File

@@ -4,7 +4,7 @@ import os
import unittest
from unittest.mock import patch, mock_open
from pkgmgr.actions.repository.install.capabilities import (
from pkgmgr.actions.install.capabilities import (
PythonRuntimeCapability,
MakeInstallCapability,
NixFlakeCapability,
@@ -31,7 +31,7 @@ class TestCapabilitiesDetectors(unittest.TestCase):
def setUp(self):
self.ctx = DummyCtx("/tmp/repo")
@patch("pkgmgr.actions.repository.install.capabilities.os.path.exists")
@patch("pkgmgr.actions.install.capabilities.os.path.exists")
def test_python_runtime_python_layer_pyproject(self, mock_exists):
"""PythonRuntimeCapability: python layer is provided if pyproject.toml exists."""
cap = PythonRuntimeCapability()
@@ -47,8 +47,8 @@ class TestCapabilitiesDetectors(unittest.TestCase):
self.assertFalse(cap.is_provided(self.ctx, "nix"))
self.assertFalse(cap.is_provided(self.ctx, "os-packages"))
@patch("pkgmgr.actions.repository.install.capabilities._read_text_if_exists")
@patch("pkgmgr.actions.repository.install.capabilities.os.path.exists")
@patch("pkgmgr.actions.install.capabilities._read_text_if_exists")
@patch("pkgmgr.actions.install.capabilities.os.path.exists")
def test_python_runtime_nix_layer_flake(self, mock_exists, mock_read):
"""
PythonRuntimeCapability: nix layer is provided if flake.nix contains
@@ -65,7 +65,7 @@ class TestCapabilitiesDetectors(unittest.TestCase):
self.assertTrue(cap.applies_to_layer("nix"))
self.assertTrue(cap.is_provided(self.ctx, "nix"))
@patch("pkgmgr.actions.repository.install.capabilities.os.path.exists", return_value=True)
@patch("pkgmgr.actions.install.capabilities.os.path.exists", return_value=True)
@patch(
"builtins.open",
new_callable=mock_open,
@@ -78,7 +78,7 @@ class TestCapabilitiesDetectors(unittest.TestCase):
self.assertTrue(cap.applies_to_layer("makefile"))
self.assertTrue(cap.is_provided(self.ctx, "makefile"))
@patch("pkgmgr.actions.repository.install.capabilities.os.path.exists")
@patch("pkgmgr.actions.install.capabilities.os.path.exists")
def test_nix_flake_capability_on_nix_layer(self, mock_exists):
"""NixFlakeCapability: nix layer is provided if flake.nix exists."""
cap = NixFlakeCapability()
@@ -153,7 +153,7 @@ class TestDetectCapabilities(unittest.TestCase):
},
)
with patch("pkgmgr.actions.repository.install.capabilities.CAPABILITY_MATCHERS", [dummy1, dummy2]):
with patch("pkgmgr.actions.install.capabilities.CAPABILITY_MATCHERS", [dummy1, dummy2]):
caps = detect_capabilities(self.ctx, layers)
self.assertEqual(
@@ -221,7 +221,7 @@ class TestResolveEffectiveCapabilities(unittest.TestCase):
)
with patch(
"pkgmgr.actions.repository.install.capabilities.CAPABILITY_MATCHERS",
"pkgmgr.actions.install.capabilities.CAPABILITY_MATCHERS",
[cap_make_install, cap_python_runtime, cap_nix_flake],
):
effective = resolve_effective_capabilities(self.ctx, layers)
@@ -258,7 +258,7 @@ class TestResolveEffectiveCapabilities(unittest.TestCase):
)
with patch(
"pkgmgr.actions.repository.install.capabilities.CAPABILITY_MATCHERS",
"pkgmgr.actions.install.capabilities.CAPABILITY_MATCHERS",
[cap_python_runtime],
):
effective = resolve_effective_capabilities(self.ctx, layers)
@@ -283,7 +283,7 @@ class TestResolveEffectiveCapabilities(unittest.TestCase):
},
)
with patch("pkgmgr.actions.repository.install.capabilities.CAPABILITY_MATCHERS", [cap_only_make]):
with patch("pkgmgr.actions.install.capabilities.CAPABILITY_MATCHERS", [cap_only_make]):
effective = resolve_effective_capabilities(self.ctx, layers)
self.assertEqual(effective["makefile"], {"make-install"})
@@ -306,7 +306,7 @@ class TestResolveEffectiveCapabilities(unittest.TestCase):
},
)
with patch("pkgmgr.actions.repository.install.capabilities.CAPABILITY_MATCHERS", [cap_only_nix]):
with patch("pkgmgr.actions.install.capabilities.CAPABILITY_MATCHERS", [cap_only_nix]):
effective = resolve_effective_capabilities(self.ctx, layers)
self.assertEqual(effective["makefile"], set())
@@ -337,7 +337,7 @@ class TestResolveEffectiveCapabilities(unittest.TestCase):
)
with patch(
"pkgmgr.actions.repository.install.capabilities.CAPABILITY_MATCHERS",
"pkgmgr.actions.install.capabilities.CAPABILITY_MATCHERS",
[cap_python_runtime],
):
effective = resolve_effective_capabilities(self.ctx, layers)
@@ -359,7 +359,7 @@ class TestResolveEffectiveCapabilities(unittest.TestCase):
)
with patch(
"pkgmgr.actions.repository.install.capabilities.CAPABILITY_MATCHERS",
"pkgmgr.actions.install.capabilities.CAPABILITY_MATCHERS",
[cap_dummy],
):
effective = resolve_effective_capabilities(self.ctx)

View File

@@ -1,5 +1,5 @@
import unittest
from pkgmgr.actions.repository.install.context import RepoContext
from pkgmgr.actions.install.context import RepoContext
class TestRepoContext(unittest.TestCase):

View File

@@ -1,134 +1,129 @@
# tests/unit/pkgmgr/test_install_repos.py
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import os
import unittest
from unittest.mock import patch, MagicMock
from typing import Any, Dict, List
from unittest.mock import MagicMock, patch
from pkgmgr.actions.repository.install.context import RepoContext
import pkgmgr.actions.repository.install as install_module
from pkgmgr.actions.repository.install.installers.base import BaseInstaller
from pkgmgr.actions.install import install_repos
class DummyInstaller(BaseInstaller):
"""Simple installer for testing orchestration."""
layer = None # no specific capabilities
def __init__(self):
self.calls = []
def supports(self, ctx: RepoContext) -> bool:
# Always support to verify that the pipeline runs
return True
def run(self, ctx: RepoContext) -> None:
self.calls.append(ctx.identifier)
Repository = Dict[str, Any]
class TestInstallReposOrchestration(unittest.TestCase):
@patch("pkgmgr.actions.repository.install.create_ink")
@patch("pkgmgr.actions.repository.install.resolve_command_for_repo")
@patch("pkgmgr.actions.repository.install.verify_repository")
@patch("pkgmgr.actions.repository.install.get_repo_dir")
@patch("pkgmgr.actions.repository.install.get_repo_identifier")
@patch("pkgmgr.actions.repository.install.clone_repos")
def setUp(self) -> None:
self.base_dir = "/fake/base"
self.bin_dir = "/fake/bin"
self.repo1: Repository = {
"account": "kevinveenbirkenbach",
"repository": "repo-one",
"alias": "repo-one",
"verified": {"gpg_keys": ["FAKEKEY"]},
}
self.repo2: Repository = {
"account": "kevinveenbirkenbach",
"repository": "repo-two",
"alias": "repo-two",
"verified": {"gpg_keys": ["FAKEKEY"]},
}
self.all_repos: List[Repository] = [self.repo1, self.repo2]
@patch("pkgmgr.actions.install.InstallationPipeline")
@patch("pkgmgr.actions.install.clone_repos")
@patch("pkgmgr.actions.install.get_repo_dir")
@patch("pkgmgr.actions.install.os.path.exists", return_value=True)
@patch(
"pkgmgr.actions.install.verify_repository",
return_value=(True, [], "hash", "key"),
)
def test_install_repos_runs_pipeline_for_each_repo(
self,
mock_clone_repos,
mock_get_repo_identifier,
mock_get_repo_dir,
mock_verify_repository,
mock_resolve_command_for_repo,
mock_create_ink,
):
repo1 = {"name": "repo1"}
repo2 = {"name": "repo2"}
selected_repos = [repo1, repo2]
all_repos = selected_repos
_mock_verify_repository: MagicMock,
_mock_exists: MagicMock,
mock_get_repo_dir: MagicMock,
mock_clone_repos: MagicMock,
mock_pipeline_cls: MagicMock,
) -> None:
"""
install_repos() should construct a RepoContext for each repository and
run the InstallationPipeline exactly once per selected repo when the
repo directory exists and verification passes.
"""
mock_get_repo_dir.side_effect = [
os.path.join(self.base_dir, "repo-one"),
os.path.join(self.base_dir, "repo-two"),
]
# Return identifiers and directories
mock_get_repo_identifier.side_effect = ["id1", "id2"]
mock_get_repo_dir.side_effect = ["/tmp/repo1", "/tmp/repo2"]
selected = [self.repo1, self.repo2]
# Simulate verification success: (ok, errors, commit, key)
mock_verify_repository.return_value = (True, [], "commit", "key")
install_repos(
selected_repos=selected,
repositories_base_dir=self.base_dir,
bin_dir=self.bin_dir,
all_repos=self.all_repos,
no_verification=False,
preview=False,
quiet=False,
clone_mode="ssh",
update_dependencies=False,
)
# Resolve commands for both repos so create_ink will be called
mock_resolve_command_for_repo.side_effect = ["/bin/cmd1", "/bin/cmd2"]
# clone_repos must not be called because directories "exist"
mock_clone_repos.assert_not_called()
# Ensure directories exist (no cloning)
with patch("os.path.exists", return_value=True):
dummy_installer = DummyInstaller()
# Monkeypatch INSTALLERS for this test
old_installers = install_module.INSTALLERS
install_module.INSTALLERS = [dummy_installer]
try:
install_module.install_repos(
selected_repos=selected_repos,
repositories_base_dir="/tmp",
bin_dir="/bin",
all_repos=all_repos,
no_verification=False,
preview=False,
quiet=False,
clone_mode="ssh",
update_dependencies=False,
)
finally:
install_module.INSTALLERS = old_installers
# A pipeline is constructed once, then run() is invoked once per repo
self.assertEqual(mock_pipeline_cls.call_count, 1)
pipeline_instance = mock_pipeline_cls.return_value
self.assertEqual(pipeline_instance.run.call_count, len(selected))
# Check that installers ran with both identifiers
self.assertEqual(dummy_installer.calls, ["id1", "id2"])
self.assertEqual(mock_create_ink.call_count, 2)
self.assertEqual(mock_verify_repository.call_count, 2)
self.assertEqual(mock_resolve_command_for_repo.call_count, 2)
@patch("pkgmgr.actions.repository.install.verify_repository")
@patch("pkgmgr.actions.repository.install.get_repo_dir")
@patch("pkgmgr.actions.repository.install.get_repo_identifier")
@patch("pkgmgr.actions.repository.install.clone_repos")
@patch("pkgmgr.actions.install.InstallationPipeline")
@patch("pkgmgr.actions.install.clone_repos")
@patch("pkgmgr.actions.install.get_repo_dir")
@patch("pkgmgr.actions.install.os.path.exists", return_value=True)
@patch(
"pkgmgr.actions.install.verify_repository",
return_value=(False, ["invalid signature"], None, None),
)
@patch("builtins.input", return_value="n")
def test_install_repos_skips_on_failed_verification(
self,
mock_clone_repos,
mock_get_repo_identifier,
mock_get_repo_dir,
mock_verify_repository,
):
repo = {"name": "repo1", "verified": True}
selected_repos = [repo]
all_repos = selected_repos
_mock_input: MagicMock,
_mock_verify_repository: MagicMock,
_mock_exists: MagicMock,
mock_get_repo_dir: MagicMock,
mock_clone_repos: MagicMock,
mock_pipeline_cls: MagicMock,
) -> None:
"""
When verification fails and the user does NOT confirm installation,
the InstallationPipeline must not be run for that repository.
"""
mock_get_repo_dir.return_value = os.path.join(self.base_dir, "repo-one")
mock_get_repo_identifier.return_value = "id1"
mock_get_repo_dir.return_value = "/tmp/repo1"
selected = [self.repo1]
# Verification fails: ok=False, with error list
mock_verify_repository.return_value = (False, ["sig error"], None, None)
install_repos(
selected_repos=selected,
repositories_base_dir=self.base_dir,
bin_dir=self.bin_dir,
all_repos=self.all_repos,
no_verification=False,
preview=False,
quiet=False,
clone_mode="ssh",
update_dependencies=False,
)
dummy_installer = DummyInstaller()
with patch("pkgmgr.actions.repository.install.create_ink") as mock_create_ink, \
patch("pkgmgr.actions.repository.install.resolve_command_for_repo") as mock_resolve_cmd, \
patch("os.path.exists", return_value=True), \
patch("builtins.input", return_value="n"):
old_installers = install_module.INSTALLERS
install_module.INSTALLERS = [dummy_installer]
try:
install_module.install_repos(
selected_repos=selected_repos,
repositories_base_dir="/tmp",
bin_dir="/bin",
all_repos=all_repos,
no_verification=False,
preview=False,
quiet=False,
clone_mode="ssh",
update_dependencies=False,
)
finally:
install_module.INSTALLERS = old_installers
# clone_repos must not be called because directory "exists"
mock_clone_repos.assert_not_called()
# No installer run and no create_ink when user declines
self.assertEqual(dummy_installer.calls, [])
mock_create_ink.assert_not_called()
mock_resolve_cmd.assert_not_called()
# Pipeline is constructed, but run() must not be called
mock_pipeline_cls.assert_called_once()
pipeline_instance = mock_pipeline_cls.return_value
pipeline_instance.run.assert_not_called()
if __name__ == "__main__":

View File

@@ -0,0 +1,94 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import os
import unittest
from pkgmgr.actions.install.layers import (
CliLayer,
CLI_LAYERS,
classify_command_layer,
layer_priority,
)
class TestCliLayerAndPriority(unittest.TestCase):
def test_layer_priority_for_known_layers_is_monotonic(self) -> None:
"""
layer_priority() must reflect the ordering in CLI_LAYERS.
We mainly check that the order is stable and that each later item
has a higher (or equal) priority index than the previous one.
"""
priorities = [layer_priority(layer) for layer in CLI_LAYERS]
# Ensure no negative priorities and strictly increasing or stable order
for idx, value in enumerate(priorities):
self.assertGreaterEqual(
value, 0, f"Priority for {CLI_LAYERS[idx]} must be >= 0"
)
if idx > 0:
self.assertGreaterEqual(
value,
priorities[idx - 1],
"Priorities must be non-decreasing in CLI_LAYERS order",
)
def test_layer_priority_for_none_and_unknown(self) -> None:
"""
None and unknown layers should both receive the 'max' priority
(i.e., len(CLI_LAYERS)).
"""
none_priority = layer_priority(None)
self.assertEqual(none_priority, len(CLI_LAYERS))
class FakeLayer:
# Not part of CliLayer
pass
unknown_priority = layer_priority(FakeLayer()) # type: ignore[arg-type]
self.assertEqual(unknown_priority, len(CLI_LAYERS))
class TestClassifyCommandLayer(unittest.TestCase):
def setUp(self) -> None:
self.home = os.path.expanduser("~")
self.repo_dir = "/tmp/pkgmgr-test-repo"
def test_classify_system_binaries_os_packages(self) -> None:
for cmd in ("/usr/bin/pkgmgr", "/bin/pkgmgr"):
with self.subTest(cmd=cmd):
layer = classify_command_layer(cmd, self.repo_dir)
self.assertEqual(layer, CliLayer.OS_PACKAGES)
def test_classify_nix_binaries(self) -> None:
nix_cmds = [
"/nix/store/abcd1234-bin-pkgmgr/bin/pkgmgr",
os.path.join(self.home, ".nix-profile", "bin", "pkgmgr"),
]
for cmd in nix_cmds:
with self.subTest(cmd=cmd):
layer = classify_command_layer(cmd, self.repo_dir)
self.assertEqual(layer, CliLayer.NIX)
def test_classify_python_binaries(self) -> None:
# Default Python/virtualenv-style location in home
cmd = os.path.join(self.home, ".local", "bin", "pkgmgr")
layer = classify_command_layer(cmd, self.repo_dir)
self.assertEqual(layer, CliLayer.PYTHON)
def test_classify_repo_local_binary_makefile_layer(self) -> None:
cmd = os.path.join(self.repo_dir, "bin", "pkgmgr")
layer = classify_command_layer(cmd, self.repo_dir)
self.assertEqual(layer, CliLayer.MAKEFILE)
def test_fallback_to_python_layer(self) -> None:
"""
Non-system, non-nix, non-repo binaries should fall back to PYTHON.
"""
cmd = "/opt/pkgmgr/bin/pkgmgr"
layer = classify_command_layer(cmd, self.repo_dir)
self.assertEqual(layer, CliLayer.PYTHON)
if __name__ == "__main__":
unittest.main()

View File

@@ -0,0 +1,157 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import unittest
from unittest.mock import MagicMock, patch
from pkgmgr.actions.install.context import RepoContext
from pkgmgr.actions.install.installers.base import BaseInstaller
from pkgmgr.actions.install.layers import CliLayer
from pkgmgr.actions.install.pipeline import InstallationPipeline
class DummyInstaller(BaseInstaller):
"""
Small fake installer with configurable layer, supports() result,
and advertised capabilities.
"""
def __init__(
self,
name: str,
layer: str | None = None,
supports_result: bool = True,
capabilities: set[str] | None = None,
) -> None:
self._name = name
self.layer = layer # type: ignore[assignment]
self._supports_result = supports_result
self._capabilities = capabilities or set()
self.ran = False
def supports(self, ctx: RepoContext) -> bool: # type: ignore[override]
return self._supports_result
def run(self, ctx: RepoContext) -> None: # type: ignore[override]
self.ran = True
def discover_capabilities(self, ctx: RepoContext) -> set[str]: # type: ignore[override]
return set(self._capabilities)
def _minimal_context() -> RepoContext:
repo = {
"account": "kevinveenbirkenbach",
"repository": "test-repo",
"alias": "test-repo",
}
return RepoContext(
repo=repo,
identifier="test-repo",
repo_dir="/tmp/test-repo",
repositories_base_dir="/tmp",
bin_dir="/usr/local/bin",
all_repos=[repo],
no_verification=False,
preview=False,
quiet=False,
clone_mode="ssh",
update_dependencies=False,
)
class TestInstallationPipeline(unittest.TestCase):
@patch("pkgmgr.actions.install.pipeline.create_ink")
@patch("pkgmgr.actions.install.pipeline.resolve_command_for_repo")
def test_create_ink_called_when_command_resolved(
self,
mock_resolve_command_for_repo: MagicMock,
mock_create_ink: MagicMock,
) -> None:
"""
If resolve_command_for_repo returns a command, InstallationPipeline
must attach it to the repo and call create_ink().
"""
mock_resolve_command_for_repo.return_value = "/usr/local/bin/test-repo"
ctx = _minimal_context()
installer = DummyInstaller("noop-installer", supports_result=False)
pipeline = InstallationPipeline([installer])
pipeline.run(ctx)
self.assertTrue(mock_create_ink.called)
self.assertEqual(
ctx.repo.get("command"),
"/usr/local/bin/test-repo",
)
@patch("pkgmgr.actions.install.pipeline.create_ink")
@patch("pkgmgr.actions.install.pipeline.resolve_command_for_repo")
def test_lower_priority_installers_are_skipped_if_cli_exists(
self,
mock_resolve_command_for_repo: MagicMock,
mock_create_ink: MagicMock,
) -> None:
"""
If the resolved command is provided by a higher-priority layer
(e.g. OS_PACKAGES), a lower-priority installer (e.g. PYTHON)
must be skipped.
"""
mock_resolve_command_for_repo.return_value = "/usr/bin/test-repo"
ctx = _minimal_context()
python_installer = DummyInstaller(
"python-installer",
layer=CliLayer.PYTHON.value,
supports_result=True,
)
pipeline = InstallationPipeline([python_installer])
pipeline.run(ctx)
self.assertFalse(
python_installer.ran,
"Python installer must not run when an OS_PACKAGES CLI already exists.",
)
self.assertEqual(ctx.repo.get("command"), "/usr/bin/test-repo")
@patch("pkgmgr.actions.install.pipeline.create_ink")
@patch("pkgmgr.actions.install.pipeline.resolve_command_for_repo")
def test_capabilities_prevent_duplicate_installers(
self,
mock_resolve_command_for_repo: MagicMock,
mock_create_ink: MagicMock,
) -> None:
"""
If one installer has already provided a set of capabilities,
a second installer advertising the same capabilities should be skipped.
"""
mock_resolve_command_for_repo.return_value = None # no CLI initially
ctx = _minimal_context()
first = DummyInstaller(
"first-installer",
layer=CliLayer.PYTHON.value,
supports_result=True,
capabilities={"cli"},
)
second = DummyInstaller(
"second-installer",
layer=CliLayer.PYTHON.value,
supports_result=True,
capabilities={"cli"}, # same capability
)
pipeline = InstallationPipeline([first, second])
pipeline.run(ctx)
self.assertTrue(first.ran, "First installer should run.")
self.assertFalse(
second.ran,
"Second installer with identical capabilities must be skipped.",
)
if __name__ == "__main__":
unittest.main()

View File

@@ -0,0 +1,168 @@
from __future__ import annotations
import json
import os
import tempfile
import unittest
from types import SimpleNamespace
from typing import Any, Dict, List
from pkgmgr.cli.commands.tools import handle_tools_command
Repository = Dict[str, Any]
class _Args:
"""
Simple helper object to mimic argparse.Namespace for handle_tools_command.
"""
def __init__(self, command: str) -> None:
self.command = command
class TestHandleToolsCommand(unittest.TestCase):
"""
Unit tests for pkgmgr.cli.commands.tools.handle_tools_command.
We focus on:
- Correct path resolution for repositories that have a 'directory' key.
- Correct shell commands for 'explore' and 'terminal'.
- Proper workspace creation and invocation of 'code' for the 'code' command.
"""
def setUp(self) -> None:
# Two fake repositories with explicit 'directory' entries so that
# _resolve_repository_path() does not need to call get_repo_dir().
self.repos: List[Repository] = [
{"alias": "repo1", "directory": "/tmp/repo1"},
{"alias": "repo2", "directory": "/tmp/repo2"},
]
# Minimal CLI context; only attributes used in tools.py are provided.
self.ctx = SimpleNamespace(
config_merged={"directories": {"workspaces": "~/Workspaces"}},
all_repositories=self.repos,
repositories_base_dir="/base/dir",
)
# ------------------------------------------------------------------ #
# Helper
# ------------------------------------------------------------------ #
def _patch_run_command(self):
"""
Convenience context manager for patching run_command in tools module.
"""
from unittest.mock import patch
return patch("pkgmgr.cli.commands.tools.run_command")
# ------------------------------------------------------------------ #
# Tests for 'explore'
# ------------------------------------------------------------------ #
def test_explore_uses_directory_paths(self) -> None:
"""
The 'explore' command should call Nautilus with the resolved
repository paths and use '& disown' as in the implementation.
"""
from unittest.mock import call
args = _Args(command="explore")
with self._patch_run_command() as mock_run_command:
handle_tools_command(args, self.ctx, self.repos)
expected_calls = [
call('nautilus "/tmp/repo1" & disown'),
call('nautilus "/tmp/repo2" & disown'),
]
self.assertEqual(mock_run_command.call_args_list, expected_calls)
# ------------------------------------------------------------------ #
# Tests for 'terminal'
# ------------------------------------------------------------------ #
def test_terminal_uses_directory_paths(self) -> None:
"""
The 'terminal' command should open a GNOME Terminal tab with the
repository as its working directory.
"""
from unittest.mock import call
args = _Args(command="terminal")
with self._patch_run_command() as mock_run_command:
handle_tools_command(args, self.ctx, self.repos)
expected_calls = [
call('gnome-terminal --tab --working-directory="/tmp/repo1"'),
call('gnome-terminal --tab --working-directory="/tmp/repo2"'),
]
self.assertEqual(mock_run_command.call_args_list, expected_calls)
# ------------------------------------------------------------------ #
# Tests for 'code'
# ------------------------------------------------------------------ #
def test_code_creates_workspace_and_calls_code(self) -> None:
"""
The 'code' command should:
- Build a workspace file name from sorted repository identifiers.
- Resolve the repository paths into VS Code 'folders'.
- Create the workspace file if it does not exist.
- Call 'code "<workspace_file>"' via run_command.
"""
from unittest.mock import patch
args = _Args(command="code")
with tempfile.TemporaryDirectory() as tmpdir:
# Patch expanduser so that the configured '~/Workspaces'
# resolves into our temporary directory.
with patch(
"pkgmgr.cli.commands.tools.os.path.expanduser"
) as mock_expanduser:
mock_expanduser.return_value = tmpdir
# Patch get_repo_identifier so the resulting workspace file
# name is deterministic and easy to assert.
with patch(
"pkgmgr.cli.commands.tools.get_repo_identifier"
) as mock_get_identifier:
mock_get_identifier.side_effect = ["repo-b", "repo-a"]
with self._patch_run_command() as mock_run_command:
handle_tools_command(args, self.ctx, self.repos)
# The identifiers are ['repo-b', 'repo-a'], which are
# sorted to ['repo-a', 'repo-b'] and joined with '_'.
expected_workspace_name = "repo-a_repo-b.code-workspace"
expected_workspace_file = os.path.join(
tmpdir, expected_workspace_name
)
# Workspace file should have been created.
self.assertTrue(
os.path.exists(expected_workspace_file),
"Workspace file was not created.",
)
# The content of the workspace must be valid JSON with
# the expected folder paths.
with open(expected_workspace_file, "r", encoding="utf-8") as f:
data = json.load(f)
self.assertIn("folders", data)
folder_paths = {f["path"] for f in data["folders"]}
self.assertEqual(
folder_paths,
{"/tmp/repo1", "/tmp/repo2"},
)
# And VS Code must have been invoked with that workspace.
mock_run_command.assert_called_once_with(
f'code "{expected_workspace_file}"'
)

View File

@@ -0,0 +1,212 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import os
import stat
import tempfile
import unittest
from unittest.mock import patch
from pkgmgr.core.command.resolve import (
_find_python_package_root,
_nix_binary_candidates,
_path_binary_candidates,
resolve_command_for_repo,
)
class TestHelpers(unittest.TestCase):
def test_find_python_package_root_none_when_missing_src(self) -> None:
with tempfile.TemporaryDirectory() as tmpdir:
root = _find_python_package_root(tmpdir)
self.assertIsNone(root)
def test_find_python_package_root_returns_existing_dir_or_none(self) -> None:
"""
We only assert that the helper does not return an invalid path.
The exact selection heuristic is intentionally left flexible since
the implementation may evolve.
"""
with tempfile.TemporaryDirectory() as tmpdir:
src_dir = os.path.join(tmpdir, "src", "mypkg")
os.makedirs(src_dir, exist_ok=True)
init_path = os.path.join(src_dir, "__init__.py")
with open(init_path, "w", encoding="utf-8") as f:
f.write("# package marker\n")
root = _find_python_package_root(tmpdir)
if root is not None:
self.assertTrue(os.path.isdir(root))
def test_nix_binary_candidates_builds_expected_paths(self) -> None:
home = "/home/testuser"
names = ["pkgmgr", "", None, "other"] # type: ignore[list-item]
candidates = _nix_binary_candidates(home, names) # type: ignore[arg-type]
self.assertIn(
os.path.join(home, ".nix-profile", "bin", "pkgmgr"),
candidates,
)
self.assertIn(
os.path.join(home, ".nix-profile", "bin", "other"),
candidates,
)
self.assertEqual(len(candidates), 2)
@patch("pkgmgr.core.command.resolve._is_executable", return_value=True)
@patch("pkgmgr.core.command.resolve.shutil.which")
def test_path_binary_candidates_uses_which_and_executable(
self,
mock_which,
_mock_is_executable,
) -> None:
def which_side_effect(name: str) -> str | None:
if name == "pkgmgr":
return "/usr/local/bin/pkgmgr"
if name == "other":
return "/usr/bin/other"
return None
mock_which.side_effect = which_side_effect
candidates = _path_binary_candidates(["pkgmgr", "other", "missing"])
self.assertEqual(
candidates,
["/usr/local/bin/pkgmgr", "/usr/bin/other"],
)
class TestResolveCommandForRepo(unittest.TestCase):
def test_explicit_command_in_repo_wins(self) -> None:
repo = {"command": "/custom/path/pkgmgr"}
cmd = resolve_command_for_repo(
repo=repo,
repo_identifier="pkgmgr",
repo_dir="/tmp/pkgmgr",
)
self.assertEqual(cmd, "/custom/path/pkgmgr")
@patch("pkgmgr.core.command.resolve._is_executable", return_value=True)
@patch("pkgmgr.core.command.resolve._nix_binary_candidates", return_value=[])
@patch("pkgmgr.core.command.resolve.shutil.which")
def test_prefers_non_system_path_over_system_binary(
self,
mock_which,
_mock_nix_candidates,
_mock_is_executable,
) -> None:
"""
If both a system binary (/usr/bin) and a non-system binary (/opt/bin)
exist in PATH, the non-system binary must be preferred.
"""
def which_side_effect(name: str) -> str | None:
if name == "pkgmgr":
return "/usr/bin/pkgmgr" # system binary
if name == "alias":
return "/opt/bin/pkgmgr" # non-system binary
return None
mock_which.side_effect = which_side_effect
repo = {
"alias": "alias",
"repository": "pkgmgr",
}
cmd = resolve_command_for_repo(
repo=repo,
repo_identifier="pkgmgr",
repo_dir="/tmp/pkgmgr",
)
self.assertEqual(cmd, "/opt/bin/pkgmgr")
@patch("pkgmgr.core.command.resolve._is_executable", return_value=True)
@patch("pkgmgr.core.command.resolve._nix_binary_candidates")
@patch("pkgmgr.core.command.resolve.shutil.which")
def test_nix_binary_used_when_no_non_system_bin(
self,
mock_which,
mock_nix_candidates,
_mock_is_executable,
) -> None:
"""
When only a system binary exists in PATH but a Nix profile binary is
available, the Nix binary should be preferred.
"""
def which_side_effect(name: str) -> str | None:
if name == "pkgmgr":
return "/usr/bin/pkgmgr"
return None
mock_which.side_effect = which_side_effect
mock_nix_candidates.return_value = ["/home/test/.nix-profile/bin/pkgmgr"]
repo = {"repository": "pkgmgr"}
cmd = resolve_command_for_repo(
repo=repo,
repo_identifier="pkgmgr",
repo_dir="/tmp/pkgmgr",
)
self.assertEqual(cmd, "/home/test/.nix-profile/bin/pkgmgr")
def test_main_sh_fallback_when_no_binaries(self) -> None:
"""
If no CLI is found via PATH or Nix, resolve_command_for_repo()
should fall back to an executable main.sh in the repo root.
"""
with tempfile.TemporaryDirectory() as tmpdir, patch(
"pkgmgr.core.command.resolve.shutil.which", return_value=None
), patch(
"pkgmgr.core.command.resolve._nix_binary_candidates", return_value=[]
), patch(
"pkgmgr.core.command.resolve._is_executable"
) as mock_is_executable:
main_sh = os.path.join(tmpdir, "main.sh")
with open(main_sh, "w", encoding="utf-8") as f:
f.write("#!/bin/sh\nexit 0\n")
os.chmod(main_sh, stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR)
def is_exec_side_effect(path: str) -> bool:
return path == main_sh
mock_is_executable.side_effect = is_exec_side_effect
repo = {}
cmd = resolve_command_for_repo(
repo=repo,
repo_identifier="pkgmgr",
repo_dir=tmpdir,
)
self.assertEqual(cmd, main_sh)
def test_python_package_without_entry_point_returns_none(self) -> None:
"""
If the repository looks like a Python package (src/package/__init__.py)
but there is no CLI entry point or main.sh/main.py, the result
should be None.
"""
with tempfile.TemporaryDirectory() as tmpdir, patch(
"pkgmgr.core.command.resolve.shutil.which", return_value=None
), patch(
"pkgmgr.core.command.resolve._nix_binary_candidates", return_value=[]
), patch(
"pkgmgr.core.command.resolve._is_executable", return_value=False
):
src_dir = os.path.join(tmpdir, "src", "mypkg")
os.makedirs(src_dir, exist_ok=True)
init_path = os.path.join(src_dir, "__init__.py")
with open(init_path, "w", encoding="utf-8") as f:
f.write("# package marker\n")
repo = {}
cmd = resolve_command_for_repo(
repo=repo,
repo_identifier="mypkg",
repo_dir=tmpdir,
)
self.assertIsNone(cmd)
if __name__ == "__main__":
unittest.main()